A SIM card or SIM (subscriber identity module) is an integrated circuit (IC) intended to securely store an international mobile subscriber identity (IMSI) number and its related key, which are used to identify and authenticate subscribers on mobile telephone devices (such as mobile phones, tablets, and laptops). SIMs are also able to store address book contact information,[1] and may be protected using a PIN code to prevent unauthorized use.
SIMs are always used on GSM phones; for CDMA phones, they are needed only for LTE-capable handsets. SIM cards are also used in various satellite phones, smart watches, computers, or cameras.[2] The first SIM cards were the size of credit and bank cards; sizes were reduced several times over the years, usually keeping electrical contacts the same, to fit smaller-sized devices.[3] SIMs are transferable between different mobile devices by removing the card itself.
Technically, the actual physical card is known as a universal integrated circuit card (UICC); this smart card is usually made of PVC with embedded contacts and semiconductors, with the SIM as its primary component. In practice the term "SIM card" is still used to refer to the entire unit and not simply the IC. A SIM contains a unique serial number, integrated circuit card identification (ICCID), international mobile subscriber identity (IMSI) number, security authentication and ciphering information, temporary information related to the local network, a list of the services the user has access to, and four passwords: a personal identification number (PIN) for ordinary use, and a personal unblocking key (PUK) for PIN unlocking, as well as a second pair (called PIN2 and PUK2 respectively) which are used for managing fixed dialing numbers and some other functionality.[4][5] In Europe, the serial SIM number (SSN) is also sometimes accompanied by an international article number (IAN) or a European article number (EAN) required when registering online for the subscription of a prepaid card.
As of 2020, eSIM is superseding physical SIM cards in some domains, including cellular telephony. eSIM uses a software-based SIM embedded into an irremovable eUICC.
The SIM card is a type of smart card,[2] the basis for which is the silicon integrated circuit (IC) chip.[6] The idea of incorporating a silicon IC chip onto a plastic card originates from the late 1960s.[6] Smart cards have since used MOS integrated circuit chips, along with MOS memory technologies such as flash memory and EEPROM (electrically erasable programmable read-only memory).[7]
The SIM was initially specified by ETSI in the specification TS 11.11, which describes the physical and logical behaviour of the SIM. With the development of UMTS, the specification work was partially transferred to 3GPP. 3GPP is now responsible for the further development of applications like the SIM (TS 51.011[8]) and the USIM (TS 31.102[9]), and ETSI for the further development of the physical card, the UICC.
The first SIM card was manufactured in 1991 by Munich smart-card maker Giesecke+Devrient, who sold the first 300 SIM cards to the Finnish wireless network operator Radiolinja,[10][11] which launched the world's first commercial 2G GSM cell network that year.[12]
Today, SIM cards are considered ubiquitous, allowing over 8 billion devices to connect to cellular networks around the world daily. According to the International Card Manufacturers Association (ICMA), there were 5.4 billion SIM cards manufactured globally in 2016, creating over $6.5 billion in revenue for traditional SIM card vendors.[13] The rise of cellular IoT and 5G networks was predicted by Ericsson to drive the growth of the addressable market for SIM cards to over 20 billion devices by 2020.[14] The introduction of embedded SIM (eSIM) and remote SIM provisioning (RSP) from the GSMA[15] may disrupt the traditional SIM card ecosystem with the entrance of new players specializing in "digital" SIM card provisioning and other value-added services for mobile network operators.[7]
There are three operating voltages for SIM cards: 5 V, 3 V and 1.8 V (ISO/IEC 7816-3 classes A, B and C, respectively). The operating voltage of the majority of SIM cards launched before 1998 was 5 V. SIM cards produced subsequently are compatible with 3 V and 5 V. Modern cards support 5 V, 3 V and 1.8 V.[7]
Modern SIM cards allow applications to load when the SIM is in use by the subscriber. These applications communicate with the handset or a server using the SIM Application Toolkit, which was initially specified by 3GPP in TS 11.14. (There is an identical ETSI specification with different numbering.) ETSI and 3GPP maintain the SIM specifications. The main specifications are: ETSI TS 102 223 (the toolkit for smart cards), ETSI TS 102 241 (API), ETSI TS 102 588 (application invocation), and ETSI TS 131 111 (toolkit for more SIM-likes). SIM toolkit applications were initially written in native code using proprietary APIs. To provide interoperability of the applications, ETSI chose Java Card.[16] A multi-company collaboration called GlobalPlatform defines some extensions on the cards, with additional APIs and features like more cryptographic security and RFID contactless use.[17]
SIM cards store network-specific information used to authenticate and identify subscribers on the network. The most important of these are the ICCID, IMSI, authentication key (Ki), local area identity (LAI) and operator-specific emergency number. The SIM also stores other carrier-specific data such as the SMSC (Short Message Service Center) number, service provider name (SPN), service dialing numbers (SDN), advice-of-charge parameters and value-added service (VAS) applications. (Refer to GSM 11.11.[18])
SIM cards can come in various data capacities, from 8 KB to at least 256 KB.[11] All can store a maximum of 250 contacts on the SIM, but while the 32 KB version has room for 33 mobile network codes (MNCs), or network identifiers, the 64 KB version has room for 80 MNCs.[1] This is used by network operators to store data on preferred networks, mostly used when the SIM is not in its home network but is roaming. The network operator that issued the SIM card can use this to have a phone connect to a preferred network that is more economic for the provider instead of having to pay the network operator that the phone discovered first. This does not mean that a phone containing this SIM card can connect to a maximum of only 33 or 80 networks; instead it means that the SIM card issuer can specify only up to that number of preferred networks. If a SIM is outside these preferred networks, it uses the first or best available network.[14]
Each SIM is internationally identified by its integrated circuit card identifier (ICCID). Nowadays ICCID numbers are also used to identify eSIM profiles, not only physical SIM cards. ICCIDs are stored in the SIM cards and are also engraved or printed on the SIM card body during a process called personalisation.
The ICCID is defined by the ITU-T recommendation E.118 as the primary account number.[19] Its layout is based on ISO/IEC 7812. According to E.118, the number can be up to 19 digits long, including a single check digit calculated using the Luhn algorithm. However, GSM Phase 1[20] defined the ICCID length as an opaque data field, 10 octets (20 digits) in length, whose structure is specific to a mobile network operator.
The number is composed of three subparts: an issuer identification number (IIN), an individual account identification, and a check digit.
With the GSM Phase 1 specification using 10 octets into which the ICCID is stored as packed BCD,[clarification needed] the data field has room for 20 digits, with the hexadecimal digit "F" being used as filler when necessary. In practice, this means that on GSM cards there are 20-digit (19+1) and 19-digit (18+1) ICCIDs in use, depending upon the issuer. However, a single issuer always uses the same size for its ICCIDs.
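The final ICCID digit is the Luhn check digit mentioned above. A minimal Python sketch of that calculation (the 18-digit example body is made up, not a real issuer number):

```python
def luhn_check_digit(body: str) -> int:
    """Compute the Luhn check digit for a numeric string such as an ICCID body."""
    total = 0
    for i, ch in enumerate(reversed(body)):
        d = int(ch)
        if i % 2 == 0:       # double every second digit, starting with the rightmost
            d *= 2
            if d > 9:
                d -= 9       # equivalent to adding the two digits of the product
        total += d
    return (10 - total % 10) % 10

body = "894400000000000012"                  # hypothetical 18-digit ICCID body
print(body + str(luhn_check_digit(body)))    # full 19-digit ICCID with check digit appended
```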
As required by E.118, the ITU-T updates a list of all current internationally assigned IIN codes in its Operational Bulletins, which are published twice a month (the last as of January 2019 was No. 1163 of 1 January 2019).[22] ITU-T also publishes complete lists: as of August 2023, the list issued on 1 December 2018 was current, containing all issuer identifier numbers assigned before 1 December 2018.[23]
SIM cards are identified on their individual operator networks by a unique international mobile subscriber identity (IMSI). Mobile network operators connect mobile phone calls and communicate with their market SIM cards using their IMSIs. The format is: the first three digits form the mobile country code (MCC), followed by the mobile network code (MNC) of two digits (European standard) or three digits (North American standard), followed by the mobile subscription identification number (MSIN), for a total length of up to 15 digits.
The Ki is a 128-bit value used in authenticating the SIMs on a GSM mobile network (for a USIM network, the Ki is still needed, but other parameters are also needed). Each SIM holds a unique Ki assigned to it by the operator during the personalisation process. The Ki is also stored in a database (termed authentication center or AuC) on the carrier's network.
The SIM card is designed to prevent someone from getting the Ki by using the smart-card interface. Instead, the SIM card provides a function, Run GSM Algorithm, that the phone uses to pass data to the SIM card to be signed with the Ki. This, by design, makes using the SIM card mandatory unless the Ki can be extracted from the SIM card, or the carrier is willing to reveal the Ki. In practice, the GSM cryptographic algorithm for computing a signed response (SRES_1/SRES_2: see steps 3 and 4, below) from the Ki has certain vulnerabilities[1] that can allow the extraction of the Ki from a SIM card and the making of a duplicate SIM card.
Authentication process:
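In outline, the exchange is a challenge-response: the network sends the handset a 128-bit random challenge (RAND); the SIM computes a 32-bit signed response (SRES) from RAND and its Ki using the operator's A3 algorithm; and the network compares that SRES with the value its authentication centre derived from its own copy of Ki. A minimal Python sketch of this flow, using HMAC-SHA-256 purely as a stand-in for the operator-specific A3 algorithm (real networks use COMP128 variants or Milenage, not this):

```python
import hashlib
import hmac
import os

def a3_stand_in(ki: bytes, rand: bytes) -> bytes:
    """Stand-in for the operator's A3 algorithm: derive a 32-bit SRES from Ki and RAND."""
    return hmac.new(ki, rand, hashlib.sha256).digest()[:4]

ki = os.urandom(16)                      # 128-bit Ki, provisioned into the SIM and stored in the AuC

rand = os.urandom(16)                    # 1. network generates a 128-bit random challenge
sres_from_sim = a3_stand_in(ki, rand)    # 2. handset passes RAND to the SIM ("Run GSM Algorithm")
sres_expected = a3_stand_in(ki, rand)    # 3. AuC computes the expected SRES from its copy of Ki
print("authenticated:", hmac.compare_digest(sres_from_sim, sres_expected))   # 4. compare
```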
The SIM stores network state information, which is received from the location area identity (LAI). Operator networks are divided into location areas, each having a unique LAI number. When the device changes location, it stores the new LAI to the SIM and sends it back to the operator network with its new location. If the device is power cycled, it takes data off the SIM and searches for the prior LAI.
Most SIM cards store a number of SMS messages and phone book contacts. Contacts are stored in simple "name and number" pairs; entries that contain multiple phone numbers or additional information are usually not stored on the SIM card. When a user tries to copy such entries to a SIM, the handset's software breaks them into multiple entries, discarding any information that is not a phone number. The number of contacts and messages stored depends on the SIM; early models stored as few as five messages and 20 contacts, while modern SIM cards can usually store over 250 contacts.[24]
SIM cards have been made smaller over the years; functionality is independent of format. Full-size SIM was followed by mini-SIM, micro-SIM, and nano-SIM. SIM cards are also made to embed in devices.
(The embedded form factor is specified in JEDEC Design Guide 4.8, SON-8, and its remote provisioning in GSMA SGP.22 V1.0.)
All versions of the non-embedded SIM cards share the same ISO/IEC 7816 pin arrangement.
The mini-SIM (or 2FF, second form factor) card has the same contact arrangement as the full-size SIM card and is normally supplied within a full-size card carrier, attached by a number of linking pieces. This arrangement (defined in ISO/IEC 7810 as ID-1/000) lets such a card be used in a device that requires a full-size card – or in a device that requires a mini-SIM card, after breaking the linking pieces. As the full-size SIM is obsolete, some suppliers refer to the mini-SIM as a "standard SIM" or "regular SIM".
The micro-SIM (or 3FF) card has the same thickness and contact arrangement, but reduced length and width.[25]
The micro-SIM was introduced by the European Telecommunications Standards Institute (ETSI) along with SCP, 3GPP (UTRAN/GERAN), 3GPP2 (CDMA2000), ARIB, the GSM Association (GSMA SCaG and GSMNA), GlobalPlatform, Liberty Alliance, and the Open Mobile Alliance (OMA) for the purpose of fitting into devices too small for a mini-SIM card.[21][26]
The form factor was mentioned in the December 1998 3GPP SMG9 UMTS Working Party, which is the standards-setting body for GSM SIM cards,[24] and the form factor was agreed upon in late 2003.[27]
The micro-SIM was designed for backward compatibility. The major issue for backward compatibility was the contact area of the chip. Retaining the same contact area makes the micro-SIM compatible with the prior, larger SIM readers through the use of plastic cutout surrounds. The SIM was also designed to run at the same speed (5 MHz) as the prior version. The same size and positions of pins resulted in numerous "how-to" tutorials and YouTube videos with detailed instructions on how to cut a mini-SIM card down to micro-SIM size.
The chairman of EP SCP, Klaus Vedder, said[27]
ETSI has responded to a market need from ETSI customers, but additionally there is a strong desire not to invalidate, overnight, the existing interface, nor reduce the performance of the cards.
Micro-SIM cards were introduced by various mobile service providers for the launch of the original iPad, and later for smartphones, from April 2010. The iPhone 4 was the first smartphone to use a micro-SIM card, in June 2010, followed by many others.[28]
After a debate in early 2012 between a few designs created by Apple, Nokia and RIM, Apple's design for an even smaller SIM card was accepted by the ETSI.[29][30] The nano-SIM (or 4FF) card was introduced in June 2012, when mobile service providers in various countries first supplied it for phones that supported the format. The nano-SIM measures 12.3 mm × 8.8 mm × 0.67 mm (0.484 in × 0.346 in × 0.026 in) and reduces the previous format to the contact area while maintaining the existing contact arrangements.[31] A small rim of isolating material is left around the contact area to avoid short circuits with the socket. The nano-SIM can be put into adapters for use with devices designed for 2FF or 3FF SIMs, and is made thinner for that purpose,[32] and telephone companies give due warning about this.[33] 4FF is 0.67 mm (0.026 in) thick, compared to the 0.76 mm (0.030 in) of its predecessors.
The iPhone 5, released in September 2012, was the first device to use a nano-SIM card,[34] followed by other handsets.
In July 2013, Karsten Nohl, a security researcher from SRLabs, described[35][36] vulnerabilities in some SIM cards that supported DES, which, despite its age, is still used by some operators.[36] The attack could lead to the phone being remotely cloned or let someone steal payment credentials from the SIM.[36] Further details of the research were provided at Black Hat on 31 July 2013.[36][37] In response, the International Telecommunication Union said that the development was "hugely significant" and that it would be contacting its members.[38]
In February 2015, The Intercept reported that the NSA and GCHQ had stolen the encryption keys (Kis) used by Gemalto (now known as Thales DIS, manufacturer of 2 billion SIM cards annually),[39] enabling these intelligence agencies to monitor voice and data communications without the knowledge or approval of cellular network providers or judicial oversight.[40] Having finished its investigation, Gemalto claimed that it had "reasonable grounds" to believe that the NSA and GCHQ carried out an operation to hack its network in 2010 and 2011, but said the number of possibly stolen keys would not have been massive.[41]
In September 2019, Cathal Mc Daid, a security researcher from Adaptive Mobile Security, described[42][43] how vulnerabilities in some SIM cards that contained the S@T Browser library were being actively exploited. This vulnerability was named Simjacker. Attackers were using the vulnerability to track the location of thousands of mobile phone users in several countries.[44] Further details of the research were provided at VirusBulletin on 3 October 2019.[45][46]
When GSM was already in use, the specifications were further developed and enhanced with functionality such as SMS and GPRS. These development steps are referred to as releases by ETSI. Within these development cycles, the SIM specification was enhanced as well: new voltage classes, formats and files were introduced.
In GSM-only times, the SIM consisted of the hardware and the software. With the advent of UMTS, this naming was split: the SIM was now an application and hence only software. The hardware part was called UICC. This split was necessary because UMTS introduced a new application, the universal subscriber identity module (USIM). The USIM brought, among other things, security improvements like mutual authentication and longer encryption keys, and an improved address book.
"SIM cards" in developed countries today are usuallyUICCscontaining at least a SIM application and a USIM application. This configuration is necessary because older GSM only handsets are solely compatible with the SIM application and some UMTS security enhancements rely on the USIM application.
OncdmaOnenetworks, the equivalent of the SIM card is theR-UIMand the equivalent of the SIM application is theCSIM.
Avirtual SIMis a mobile phone number provided by amobile network operatorthat does not require a SIM card to connect phone calls to a user's mobile phone.
An embedded SIM (eSIM) is a form of programmable SIM that is embedded directly into a device.[47]The surface mount format provides the same electrical interface as the full size, 2FF and 3FF SIM cards, but is soldered to a circuit board as part of the manufacturing process. In M2M applications where there is no requirement[15]to change the SIM card, this avoids the requirement for a connector, improving reliability and security.[citation needed]An eSIM can beprovisioned remotely; end-users can add or remove operators without the need to physically swap a SIM from the device or use multiple eSIM profiles at the same time.[48][49]
The eSIM standard, initially introduced in 2016, has progressively supplanted traditional physical SIM cards across various sectors, notably in cellular telephony.[50][51][52]In September 2017, Apple introduced the Apple Watch Series 3 featuring eSIM.[53]In October 2018, Apple introduced theiPad Pro (3rd generation),[54]which was the first iPad to support eSIM. In September 2022, Apple introduced the iPhone 14 series which was the first eSIM exclusive iPhone in the United States.[55]
An integrated SIM (iSIM) is a form of SIM directly integrated into the modem chip or main processor of the device itself. As a consequence they are smaller, cheaper and more reliable than eSIMs, they can improve security and ease the logistics and production of small devices i.e. forIoTapplications. In 2021,Deutsche Telekomintroduced thenuSIM, an "Integrated SIM for IoT".[56][57][58]
The use of SIM cards is mandatory inGSMdevices.[59][60]
Thesatellite phonenetworksIridium,ThurayaandInmarsat'sBGANalso use SIM cards. Sometimes, these SIM cards work in regular GSM phones and also allow GSM customers to roam in satellite networks by using their own SIM cards in a satellite phone.
Japan's 2GPDCsystem (which was shut down in 2012;SoftBank Mobileshut down PDC from 31 March 2010) also specified a SIM, but this has never been implemented commercially. The specification of the interface between the Mobile Equipment and the SIM is given in theRCRSTD-27 annexe 4. The Subscriber Identity Module Expert Group was a committee of specialists assembled by the European Telecommunications Standards Institute (ETSI) to draw up the specifications (GSM11.11) for interfacing between smart cards and mobile telephones. In 1994, the name SIMEG was changed to SMG9.
Japan's current and next-generation cellular systems are based on W-CDMA (UMTS) andCDMA2000and all use SIM cards. However, Japanese CDMA2000-based phones are locked to the R-UIM they are associated with and thus, the cards are not interchangeable with other Japanese CDMA2000 handsets (though they may be inserted into GSM/WCDMA handsets for roaming purposes outside Japan).
CDMA-based devices originally did not use a removable card, and the service for these phones is bound to a unique identifier contained in the handset itself. This is most prevalent in operators in the Americas. The first publication of the TIA-820 standard (also known as 3GPP2 C.S0023) in 2000 defined the Removable User Identity Module (R-UIM). Card-based CDMA devices are most prevalent in Asia.
The equivalent of a SIM in UMTS is called the universal integrated circuit card (UICC), which runs a USIM application. The UICC is still colloquially called a SIM card.[61]
The SIM card introduced a new and significant business opportunity for MVNOs, who lease capacity from one of the network operators rather than owning or operating a cellular telecoms network, and only provide a SIM card to their customers. MVNOs first appeared in Denmark, Hong Kong, Finland and the UK. By 2011 they existed in over 50 countries, including most of Europe, the United States, Canada, Mexico, Australia and parts of Asia, and accounted for approximately 10% of all mobile phone subscribers around the world.[62]
On some networks, the mobile phone is locked to its carrier SIM card, meaning that the phone only works with SIM cards from the specific carrier. This is more common in markets where mobile phones are heavily subsidised by the carriers, and the business model depends on the customer staying with the service provider for a minimum term (typically 12, 18 or 24 months). SIM cards that are issued by providers with an associated contract, but where the carrier does not provide a mobile device (such as a mobile phone), are called SIM-only deals. Common examples are the GSM networks in the United States, Canada, Australia, and Poland. UK mobile networks ended SIM-lock practices in December 2021. Many businesses offer the ability to remove the SIM lock from a phone, effectively making it possible to then use the phone on any network by inserting a different SIM card. Most GSM and 3G mobile handsets can easily be unlocked and used on any suitable network with any SIM card.
In countries where the phones are not subsidised, e.g., India, Israel and Belgium, all phones are unlocked. Where the phone is not locked to its SIM card, the users can easily switch networks by simply replacing the SIM card of one network with that of another while using only one phone. This is typical, for example, among users who may want to optimise their carrier's traffic by different tariffs to different friends on different networks, or when travelling internationally.
In 2016, carriers started using the concept of automatic SIM reactivation,[63] whereby they let users reuse expired SIM cards instead of purchasing new ones when they wish to re-subscribe to that operator. This is particularly useful in countries where prepaid calls dominate and where competition drives high churn rates, as users previously had to return to a carrier shop to purchase a new SIM each time they wanted to churn back to an operator.
Commonly sold as a product by mobile telecommunications companies, "SIM-only" refers to a type of legally binding contract between a mobile network provider and a customer. The contract itself takes the form of a credit agreement and is subject to a credit check.
SIM-only contracts can be pre-pay, where the subscriber buys credit before use (often called pay as you go, abbreviated to PAYG), or post-pay, where the subscriber pays in arrears, typically monthly.
Within a SIM-only contract, the mobile network provider supplies their customer with just one piece of hardware, a SIM card, which includes an agreed amount of network usage in exchange for a monthly payment. Network usage within a SIM-only contract can be measured in minutes, text, data or any combination of these. The duration of a SIM-only contract varies depending on the deal selected by the customer, but in the UK they are typically available over 1, 3, 6, 12 or 24-month periods.
SIM-only contracts differ from mobile phone contracts in that they do not include any hardware other than a SIM card. In terms of network usage, SIM-only is typically more cost-effective than other contracts because the provider does not charge more to offset the cost of a mobile device over the contract period. The short contract length is one of the key features of SIM-only – made possible by the absence of a mobile device.
SIM-only is increasing in popularity very quickly.[64] In 2010, pay-monthly mobile phone subscriptions grew from 41 percent to 49 percent of all UK mobile phone subscriptions.[65] According to German research company GfK, 250,000 SIM-only mobile contracts were taken up in the UK during July 2012 alone, the highest figure since GfK began keeping records.
Increasing smartphone penetration combined with financial concerns is leading customers to save money by moving onto a SIM-only when their initial contract term is over.
Dual-SIM devices have two SIM card slots for the use of two SIM cards, from one or multiple carriers. Multiple-SIM devices are commonplace in developing markets such as Africa, East Asia, South Asia and Southeast Asia, where variable billing rates, network coverage and speed make it desirable for consumers to use multiple SIMs from competing networks. Dual-SIM phones are also useful for separating one's personal phone number from a business phone number, without having to carry multiple devices. Some popular devices, such as the BlackBerry KeyOne, have dual-SIM variants; however, dual-SIM devices were not common in the US or Europe due to lack of demand. This has changed with mainline products from Apple and Google featuring either two SIM slots or a combination of a physical SIM slot and an eSIM.
In September 2018, Apple introduced the iPhone XS, iPhone XS Max, and iPhone XR featuring dual SIM (nano-SIM and eSIM), and the Apple Watch Series 4 featuring dual eSIM.
A thin SIM (or overlay SIM or SIM overlay) is a very thin device shaped like a SIM card, approximately 120 microns (1/200 in) thick. It has contacts on its front and back. It is used by placing it on top of a regular SIM card. It provides its own functionality while passing through the functionality of the SIM card underneath. It can be used to bypass the mobile operating network and run custom applications, particularly on non-programmable cell phones.[66]
Its top surface is a connector that connects to the phone in place of the normal SIM. Its bottom surface is a connector that connects to the SIM in place of the phone. With its electronics, it can modify signals in either direction, thus presenting a modified SIM to the phone, and/or presenting a modified phone to the SIM. (It is a similar concept to the Game Genie, which connects between a game console and a game cartridge, creating a modified game.) Similar devices have also been developed for iPhones to circumvent SIM card restrictions on carrier-locked models.[67]
In 2014, Equitel, an MVNO operated by Kenya's Equity Bank, announced its intention to begin issuing thin SIMs to customers, raising security concerns among competitors, particularly concerning the safety of mobile money accounts. However, after months of security testing and legal hearings before the country's Parliamentary Committee on Energy, Information and Communications, the Communications Authority of Kenya (CAK) gave the bank the green light to roll out its thin SIM cards.[68]
Source: https://en.wikipedia.org/wiki/SIM_card
In probability theory, the birthday problem asks for the probability that, in a set of n randomly chosen people, at least two will share the same birthday. The birthday paradox is the counterintuitive fact that only 23 people are needed for that probability to exceed 50%.
The birthday paradox is a veridical paradox: it seems wrong at first glance but is, in fact, true. While it may seem surprising that only 23 individuals are required to reach a 50% probability of a shared birthday, this result is made more intuitive by considering that the birthday comparisons will be made between every possible pair of individuals. With 23 individuals, there are 23 × 22 / 2 = 253 pairs to consider.
Real-world applications of the birthday problem include a cryptographic attack called the birthday attack, which uses this probabilistic model to reduce the complexity of finding a collision for a hash function, as well as calculating the approximate risk of a hash collision existing within the hashes of a given size of population.
The problem is generally attributed to Harold Davenport in about 1927, though he did not publish it at the time. Davenport did not claim to be its discoverer "because he could not believe that it had not been stated earlier".[1][2] The first publication of a version of the birthday problem was by Richard von Mises in 1939.[3]
From a permutations perspective, let the event A be that a group of 23 people has no repeated birthdays, and let the event B be that at least two of the 23 people share a birthday, so that P(B) = 1 − P(A). P(A) is the ratio of V_nr, the number of ways to assign 23 birthdays without repetition (where order matters, so for two people the outcomes {01/02, 05/20} and {05/20, 01/02} are counted separately), to V_t, the total number of ways to assign 23 birthdays when repetition is allowed (for two people this also includes outcomes such as {01/02, 01/02}), since V_t is the whole space of outcomes of the experiment. Both V_nr and V_t are therefore counted as permutations.
Another way the birthday problem can be solved is by asking for an approximate probability that in a group of n people at least two have the same birthday. For simplicity, leap years, twins, selection bias, and seasonal and weekly variations in birth rates[4] are generally disregarded, and instead it is assumed that there are 365 possible birthdays, and that each person's birthday is equally likely to be any of these days, independent of the other people in the group.
For independent birthdays, a uniform distribution of birthdays minimizes the probability of two people in a group having the same birthday. Any unevenness increases the likelihood of two people sharing a birthday.[5][6] However, real-world birthdays are not sufficiently uneven to make much change: the real-world group size necessary to have a greater than 50% chance of a shared birthday is 23, as in the theoretical uniform distribution.[7]
The goal is to compute P(B), the probability that at least two people in the room have the same birthday. However, it is simpler to calculate P(A′), the probability that no two people in the room have the same birthday. Then, because B and A′ are the only two possibilities and are also mutually exclusive, P(B) = 1 − P(A′).
Here is the calculation of P(B) for 23 people. Let the 23 people be numbered 1 to 23. The event that all 23 people have different birthdays is the same as the event that person 2 does not have the same birthday as person 1, and that person 3 does not have the same birthday as either person 1 or person 2, and so on, and finally that person 23 does not have the same birthday as any of persons 1 through 22. Let these events be called Event 2, Event 3, and so on. Event 1 is the event of person 1 having a birthday, which occurs with probability 1. This conjunction of events may be computed using conditional probability: the probability of Event 2 is 364/365, as person 2 may have any birthday other than the birthday of person 1. Similarly, the probability of Event 3 given that Event 2 occurred is 363/365, as person 3 may have any of the birthdays not already taken by persons 1 and 2. This continues until finally the probability of Event 23 given that all preceding events occurred is 343/365. Finally, the principle of conditional probability implies that P(A′) is equal to the product of these individual probabilities:
P(A′) = 365/365 × 364/365 × 363/365 × ⋯ × 343/365.   (1)
The terms of equation (1) can be collected to arrive at:
P(A′) = (1/365)^23 × (365 × 364 × 363 × ⋯ × 343).   (2)
Evaluating equation (2) gives P(A′) ≈ 0.492703.
Therefore, P(B) ≈ 1 − 0.492703 = 0.507297 (50.7297%).
This process can be generalized to a group of n people, where p(n) is the probability of at least two of the n people sharing a birthday. It is easier to first calculate the probability p̄(n) that all n birthdays are different. According to the pigeonhole principle, p̄(n) is zero when n > 365. When n ≤ 365:
p̄(n) = 365/365 × 364/365 × 363/365 × ⋯ × (365 − n + 1)/365 = 365! / (365^n (365 − n)!) = n! · C(365, n) / 365^n = 365Pn / 365^n,
where ! is the factorial operator, C(365, n) is the binomial coefficient and kPr denotes permutation (so 365Pn = 365!/(365 − n)!).
The equation expresses the fact that the first person has no one with whom to share a birthday, the second person cannot have the same birthday as the first (364/365), the third cannot have the same birthday as either of the first two (363/365), and in general the nth birthday cannot be the same as any of the n − 1 preceding birthdays.
The event of at least two of the n persons having the same birthday is complementary to all n birthdays being different. Therefore, its probability p(n) is
p(n) = 1 − p̄(n).
The same formula gives the probability p(n) for other values of n (the existence of leap years is ignored, and each birthday is assumed to be equally likely):
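A minimal Python sketch that evaluates this product for a few group sizes (function names are ours):

```python
from math import prod

def p_all_different(n: int, d: int = 365) -> float:
    """Probability that n people all have different birthdays, with d equally likely days."""
    if n > d:
        return 0.0                 # pigeonhole principle
    return prod((d - k) / d for k in range(n))

def p_shared(n: int, d: int = 365) -> float:
    """Probability that at least two of n people share a birthday."""
    return 1.0 - p_all_different(n, d)

for n in (10, 20, 23, 30, 50, 70):
    print(f"{n:3d} people: {p_shared(n):.6f}")
# 23 people gives about 0.507297, the 50.7% figure computed above.
```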
The Taylor series expansion of the exponential function (the constant e ≈ 2.718281828),
e^x = 1 + x + x²/2! + x³/3! + ⋯,
provides a first-order approximation for e^x for |x| ≪ 1:
e^x ≈ 1 + x.
To apply this approximation to the first expression derived for p̄(n), set x = −a/365. Thus,
e^(−a/365) ≈ 1 − a/365.
Then, replace a with non-negative integers for each term in the formula of p̄(n) until a = n − 1; for example, when a = 1,
e^(−1/365) ≈ 1 − 1/365.
The first expression derived for p̄(n) can be approximated as
p̄(n) ≈ 1 · e^(−1/365) · e^(−2/365) ⋯ e^(−(n−1)/365) = e^(−(1 + 2 + ⋯ + (n−1))/365) = e^(−n(n−1)/(2 × 365)).
Therefore,
p(n) = 1 − p̄(n) ≈ 1 − e^(−n(n−1)/(2 × 365)).
An even coarser approximation is given by
p(n) ≈ 1 − e^(−n²/(2 × 365)),
which is still fairly accurate.
According to the approximation, the same approach can be applied to any number of "people" and "days". If rather than 365 days there are d, if there are n persons, and if n ≪ d, then using the same approach as above we achieve the result that if p(n, d) is the probability that at least two out of n people share the same birthday from a set of d available days, then:
p(n, d) ≈ 1 − e^(−n(n−1)/(2d)) ≈ 1 − e^(−n²/(2d)).
The probability of any two people not having the same birthday is 364/365. In a room containing n people, there are C(n, 2) = n(n − 1)/2 pairs of people, i.e. C(n, 2) events. The probability of no two people sharing the same birthday can be approximated by assuming that these events are independent and hence by multiplying their probabilities together. Being independent would be equivalent to picking with replacement any pair of people in the world, not just in a room. In short, 364/365 can be multiplied by itself C(n, 2) times, which gives us
P(A′) ≈ (364/365)^(n(n−1)/2).
Since this is the probability of no one having the same birthday, the probability of someone sharing a birthday is
P(B) ≈ 1 − (364/365)^(n(n−1)/2).
And for the group of 23 people, the probability of sharing is
P(B) ≈ 1 − (364/365)^253 ≈ 0.500477.
Applying the Poisson approximation for the binomial to the group of 23 people, the number of matching pairs is approximately
Poi(C(23, 2)/365) = Poi(253/365) ≈ Poi(0.6932),
so
Pr(X > 0) = 1 − Pr(X = 0) ≈ 1 − e^(−0.6932) ≈ 0.500002.
The result is over 50%, as in the previous descriptions. This approximation is the same as the one above based on the Taylor expansion that uses e^x ≈ 1 + x.
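A short Python sketch comparing the exact product with the two approximations just described (the exponential form and the independent-pairs form):

```python
from math import comb, exp, prod

D = 365

def exact(n: int, d: int = D) -> float:
    return 1 - prod((d - k) / d for k in range(n))

def exponential_approx(n: int, d: int = D) -> float:
    # Taylor/Poisson-style approximation: 1 - exp(-n(n-1)/(2d))
    return 1 - exp(-n * (n - 1) / (2 * d))

def pairwise_approx(n: int, d: int = D) -> float:
    # Treat the C(n, 2) pairs as independent, each non-matching with probability (d-1)/d
    return 1 - ((d - 1) / d) ** comb(n, 2)

for n in (10, 23, 40):
    print(n, round(exact(n), 4), round(exponential_approx(n), 4), round(pairwise_approx(n), 4))
# For n = 23 all three give roughly 0.50, in line with the figures quoted above.
```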
A good rule of thumb which can be used for mental calculation is the relation
p(n) ≈ n² / (2d),
which can also be written as
n ≈ √(2d × p(n)),
which works well for probabilities less than or equal to 1/2. In these equations, d is the number of days in a year.
For instance, to estimate the number of people required for a 1/2 chance of a shared birthday, we get
n ≈ √(2 × 365 × 1/2) = √365 ≈ 19.1,
which is not too far from the correct answer of 23.
This can also be approximated using the following formula for the number of people necessary to have at least a 1/2 chance of matching:
n ≈ 1/2 + √(1/4 + 2 × ln(2) × d).
This is a result of the good approximation that an event with1/kprobability will have a1/2chance of occurring at least once if it is repeatedkln 2times.[8]
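A quick numeric check of both estimates in Python:

```python
from math import log, sqrt

d = 365
# Mental-arithmetic rule of thumb, n ~ sqrt(2*d*p), for the p = 1/2 threshold:
print(sqrt(2 * d * 0.5))                   # ~19.1, "not too far" from 23
# Sharper estimate n ~ 1/2 + sqrt(1/4 + 2*ln(2)*d):
print(0.5 + sqrt(0.25 + 2 * log(2) * d))   # ~23.0, matching the exact answer
```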
The lighter fields in this table show the number of hashes needed to achieve the given probability of collision (column) given a hash space of a certain size in bits (row). Using the birthday analogy: the "hash space size" resembles the "available days", the "probability of collision" resembles the "probability of shared birthday", and the "required number of hashed elements" resembles the "required number of people in a group". One could also use this chart to determine the minimum hash size required (given upper bounds on the hashes and probability of error), or the probability of collision (for fixed number of hashes and probability of error).
For comparison, 10^−18 to 10^−15 is the uncorrectable bit error rate of a typical hard disk.[9] In theory, 128-bit hash functions, such as MD5, should stay within that range until about 8.2 × 10^11 documents, even if its possible outputs are many more.
The argument below is adapted from an argument of Paul Halmos.[nb 1]
As stated above, the probability that no two birthdays coincide is
1 − p(n) = p̄(n) = ∏_{k=1}^{n−1} (1 − k/365).
As in earlier paragraphs, interest lies in the smallest n such that p(n) > 1/2; or, equivalently, the smallest n such that p̄(n) < 1/2.
Using the inequality 1 − x < e^(−x) in the above expression, we replace 1 − k/365 with e^(−k/365). This yields
p̄(n) = ∏_{k=1}^{n−1} (1 − k/365) < ∏_{k=1}^{n−1} e^(−k/365) = e^(−n(n−1)/730).
Therefore, the expression above is not only an approximation, but also an upper bound of p̄(n). The inequality
e^(−n(n−1)/730) < 1/2
implies p̄(n) < 1/2. Solving for n gives
n² − n > 730 ln 2.
Now, 730 ln 2 is approximately 505.997, which is barely below 506, the value of n² − n attained when n = 23. Therefore, 23 people suffice. Incidentally, solving n² − n = 730 ln 2 for n gives the approximate formula of Frank H. Mathis cited above.
This derivation only shows that at most 23 people are needed to ensure the chances of a birthday match are at least even; it leaves open the possibility that n = 22 or fewer could also work.
Given a year with d days, the generalized birthday problem asks for the minimal number n(d) such that, in a set of n randomly chosen people, the probability of a birthday coincidence is at least 50%. In other words, n(d) is the minimal integer n such that
1 − (1 − 1/d)(1 − 2/d) ⋯ (1 − (n − 1)/d) ≥ 1/2.
The classical birthday problem thus corresponds to determining n(365). The first 99 values of n(d) are given here (sequence A033810 in the OEIS):
A similar calculation shows that n(d) = 23 when d is in the range 341–372.
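The function n(d) is easy to compute directly; a minimal Python sketch:

```python
def n_of_d(d: int) -> int:
    """Smallest n such that, with d equally likely days, P(shared birthday) >= 1/2."""
    p_distinct = 1.0
    n = 0
    while p_distinct > 0.5:
        p_distinct *= (d - n) / d     # probability that n + 1 people are all distinct
        n += 1
    return n

print(n_of_d(365))                                      # 23
print([d for d in range(330, 380) if n_of_d(d) == 23])  # expected to match the 341-372 range above
```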
A number of bounds and formulas for n(d) have been published.[10] For any d ≥ 1, the number n(d) satisfies[11]
These bounds are optimal in the sense that the sequence n(d) − √(2d ln 2) gets arbitrarily close to
while it has
as its maximum, taken for d = 43.
The bounds are sufficiently tight to give the exact value of n(d) in most of the cases. For example, for d = 365 these bounds imply that 22.7633 < n(365) < 23.7736 and 23 is the only integer in that range. In general, it follows from these bounds that n(d) always equals either
where ⌈ · ⌉ denotes the ceiling function.
The formula
holds for 73% of all integers d.[12] The formula
holds for almost all d, i.e., for a set of integers d with asymptotic density 1.[12]
The formula
holds for all d ≤ 10^18, but it is conjectured that there are infinitely many counterexamples to this formula.[13]
The formula
holds for all d ≤ 10^18, and it is conjectured that this formula holds for all d.[13]
It is possible to extend the problem to ask how many people in a group are necessary for there to be a greater than 50% probability that at least 3, 4, 5, etc. of the group share the same birthday.
The first few values are as follows: >50% probability of 3 people sharing a birthday requires 88 people; >50% probability of 4 people sharing a birthday requires 187 people (sequence A014088 in the OEIS).[14]
The strong birthday problem asks for the number of people that need to be gathered together before there is a 50% chance that everyone in the gathering shares their birthday with at least one other person. For d = 365 days the answer is 3,064 people.[15][16]
The number of people needed for an arbitrary number of days is given by (sequence A380129 in the OEIS)
The birthday problem can be generalized as follows:
The generic results can be derived using the same arguments given above.
Conversely, if n(p; d) denotes the number of random integers drawn from [1, d] to obtain a probability p that at least two numbers are the same, then
n(p; d) ≈ √(2d ln(1/(1 − p))).
The birthday problem in this more generic sense applies to hash functions: the expected number of N-bit hashes that can be generated before getting a collision is not 2^N, but rather only 2^(N/2). This is exploited by birthday attacks on cryptographic hash functions and is the reason why a small number of collisions in a hash table are, for all practical purposes, inevitable.
The theory behind the birthday problem was used by Zoe Schnabel[18] under the name of capture-recapture statistics to estimate the size of fish populations in lakes. The birthday problem and its generalizations are also useful tools for modelling coincidences.[19]
The classic birthday problem allows for more than two people to share a particular birthday or for there to be matches on multiple days. The probability that among n people there is exactly one pair of individuals with a matching birthday, given d possible days, is[19]
Unlike the standard birthday problem, as n increases the probability reaches a maximum value before decreasing. For example, for d = 365, the probability of a unique match has a maximum value of 0.3864, occurring when n = 28.
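The elided expression can be written as C(n, 2) · d(d − 1)⋯(d − n + 2) / d^n (choose the matching pair, then give that pair one day and everyone else distinct other days); a short Python check of the stated maximum, under that reading:

```python
from math import comb, prod

def p_exactly_one_pair(n: int, d: int = 365) -> float:
    """Probability that exactly one pair among n people shares a birthday (and no other matches)."""
    if n < 2:
        return 0.0
    falling = prod(d - j for j in range(n - 1))   # d * (d-1) * ... * (d-n+2)
    return comb(n, 2) * falling / d ** n

best_n = max(range(2, 100), key=p_exactly_one_pair)
print(best_n, round(p_exactly_one_pair(best_n), 4))   # 28 and ~0.3864, as stated above
```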
The basic problem considers all trials to be of one "type". The birthday problem has been generalized to consider an arbitrary number of types.[20] In the simplest extension there are two types of people, say m men and n women, and the problem becomes characterizing the probability of a shared birthday between at least one man and one woman. (Shared birthdays between two men or two women do not count.) The probability of no shared birthdays here is
where d = 365 and S2 are Stirling numbers of the second kind. Consequently, the desired probability is 1 − p0.
This variation of the birthday problem is interesting because there is not a unique solution for the total number of people m + n. For example, the usual 50% probability value is realized for both a 32-member group of 16 men and 16 women and a 49-member group of 43 women and 6 men.
A related question is, as people enter a room one at a time, which one is most likely to be the first to have the same birthday as someone already in the room? That is, for what n is p(n) − p(n − 1) maximal? The answer is 20: if there is a prize for the first match, the best position in line is 20th.[citation needed]
In the birthday problem, neither of the two people is chosen in advance. By contrast, the probability q(n) that at least one other person in a room of n other people has the same birthday as a particular person (for example, you) is given by
q(n) = 1 − (364/365)^n,
and for general d by
q(n; d) = 1 − ((d − 1)/d)^n.
In the standard case of d = 365, substituting n = 23 gives about 6.1%, which is less than 1 chance in 16. For a greater than 50% chance that at least one other person in a roomful of n people has the same birthday as you, n would need to be at least 253. This number is significantly higher than 365/2 = 182.5; the reason is that it is likely that there are some birthday matches among the other people in the room.
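A quick Python check of both figures:

```python
from math import ceil, log

def q(n: int, d: int = 365) -> float:
    """Probability that at least one of n other people shares a specific person's birthday."""
    return 1 - ((d - 1) / d) ** n

print(round(q(23), 4))                   # ~0.0611, i.e. about 6.1%
print(ceil(log(0.5) / log(364 / 365)))   # 253 other people needed for a >50% chance
```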
For any one person in a group of n people, the probability that he or she shares a birthday with someone else is q(n − 1; d), as explained above. The expected number of people with a shared (non-unique) birthday can now be calculated easily by multiplying that probability by the number of people (n), so it is:
n · q(n − 1; d) = n (1 − ((d − 1)/d)^(n−1)).
(This multiplication can be done this way because of the linearity of the expected value of indicator variables.) This implies that the expected number of people with a non-shared (unique) birthday is:
n ((d − 1)/d)^(n−1).
Similar formulas can be derived for the expected number of people who share with three, four, etc. other people.
The expected number of people needed until every birthday is achieved is called the coupon collector's problem. It can be calculated as nH_n, where H_n is the nth harmonic number. For 365 possible dates (the birthday problem), the answer is 2365.
Another generalization is to ask for the probability of finding at least one pair in a group of n people with birthdays within k calendar days of each other, if there are d equally likely birthdays.[21]
The number of people required so that the probability that some pair will have a birthday separated by k days or fewer will be higher than 50% is given in the following table:
Thus in a group of just seven random people, it is more likely than not that two of them will have a birthday within a week of each other.[21]
The expected number of different birthdays, i.e. the number of days that are at least one person's birthday, is:
d − d ((d − 1)/d)^n.
This follows from the expected number of days that are no one's birthday:
d ((d − 1)/d)^n,
which follows from the probability that a particular day is no one's birthday, ((d − 1)/d)^n, easily summed because of the linearity of the expected value.
For instance, with d = 365, you should expect about 21 different birthdays when there are 22 people, or 46 different birthdays when there are 50 people. When there are 1000 people, there will be around 341 different birthdays (24 unclaimed birthdays).
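A small Python sketch of the same expectation:

```python
def expected_distinct_birthdays(n: int, d: int = 365) -> float:
    """Expected number of days that are at least one person's birthday."""
    return d * (1 - ((d - 1) / d) ** n)

for n in (22, 50, 1000):
    print(n, round(expected_distinct_birthdays(n), 1))
# Compare with the ~21, ~46 and ~341 figures quoted above.
```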
The above can be generalized from the distribution of the number of people with their birthday on any particular day, which is a binomial distribution with probability 1/d. Multiplying the relevant probability by d will then give the expected number of days. For example, the expected number of days which are shared, i.e. which are at least two (i.e. not zero and not one) people's birthday, is:
d − d ((d − 1)/d)^n − d · C(n, 1) · (1/d) · ((d − 1)/d)^(n−1) = d − d ((d − 1)/d)^n − n ((d − 1)/d)^(n−1).
The probability that the kth integer randomly chosen from [1, d] will repeat at least one previous choice equals q(k − 1; d) above. The expected total number of times a selection will repeat a previous selection as n such integers are chosen equals[22]
n − d + d ((d − 1)/d)^n.
This can be seen to equal the number of people minus the expected number of different birthdays.
In an alternative formulation of the birthday problem, one asks for the average number of people required to find a pair with the same birthday. If we consider the probability function Pr[n people have at least one shared birthday], this average is determining the mean of the distribution, as opposed to the customary formulation, which asks for the median. The problem is relevant to several hashing algorithms analyzed by Donald Knuth in his book The Art of Computer Programming. It may be shown[23][24] that if one samples uniformly, with replacement, from a population of size M, the number of trials required for the first repeated sampling of some individual has expected value n = 1 + Q(M), where
Q(M) = Σ_{k=1}^{M} M! / ((M − k)! M^k).
The function
has been studied by Srinivasa Ramanujan and has asymptotic expansion:
With M = 365 days in a year, the average number of people required to find a pair with the same birthday is n = 1 + Q(M) ≈ 24.61659, somewhat more than 23, the number required for a 50% chance. In the best case, two people will suffice; at worst, the maximum possible number of M + 1 = 366 people is needed; but on average, only 25 people are required.
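A short Python sketch evaluating 1 + Q(M) directly from the sum above:

```python
def expected_people_until_repeat(m: int = 365) -> float:
    """1 + Q(m): expected number of draws (with replacement from m days) until a repeat occurs."""
    q, term = 0.0, 1.0
    for k in range(1, m + 1):
        term *= (m - k + 1) / m      # term is now m! / ((m - k)! * m^k)
        q += term
    return 1 + q

print(expected_people_until_repeat())    # ~24.61659, as quoted above
```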
An analysis using indicator random variables can provide a simpler but approximate analysis of this problem.[25] For each pair (i, j) of the k people in a room, we define the indicator random variable X_ij, for 1 ≤ i < j ≤ k, by
X_ij = I{person i and person j have the same birthday} = 1 if person i and person j have the same birthday, and 0 otherwise.
E[X_ij] = Pr{person i and person j have the same birthday} = 1/n.
Let X be a random variable counting the pairs of individuals with the same birthday.
X = Σ_{i=1}^{k} Σ_{j=i+1}^{k} X_ij
E[X] = Σ_{i=1}^{k} Σ_{j=i+1}^{k} E[X_ij] = C(k, 2) · (1/n) = k(k − 1)/(2n)
For n = 365, if k = 28, the expected number of pairs of individuals with the same birthday is (28 × 27)/(2 × 365) ≈ 1.0356. Therefore, we can expect at least one matching pair with at least 28 people.
In the 2014 FIFA World Cup, each of the 32 squads had 23 players. An analysis of the official squad lists suggested that 16 squads had pairs of players sharing birthdays, and of these, 5 squads had two pairs: Argentina, France, Iran, South Korea and Switzerland each had two pairs, and Australia, Bosnia and Herzegovina, Brazil, Cameroon, Colombia, Honduras, Netherlands, Nigeria, Russia, Spain and the USA each had one pair.[26]
Voracek, Tran and Formann showed that the majority of people markedly overestimate the number of people that is necessary to achieve a given probability of people having the same birthday, and markedly underestimate the probability of people having the same birthday when a specific sample size is given.[27] Further results showed that psychology students and women did better on the task than casino visitors/personnel or men, but were less confident about their estimates.
The reverse problem is to find, for a fixed probability p,
the greatest n for which the probability p(n) is smaller than the given p, or the smallest n for which the probability p(n) is greater than the given p.[citation needed]
Taking the above formula for d = 365, one has
The following table gives some sample calculations.
Some values falling outside the bounds have been colored to show that the approximation is not always exact.
A related problem is the partition problem, a variant of the knapsack problem from operations research. Some weights are put on a balance scale; each weight is an integer number of grams randomly chosen between one gram and one million grams (one tonne). The question is whether one can usually (that is, with probability close to 1) transfer the weights between the left and right arms to balance the scale. (In case the sum of all the weights is an odd number of grams, a discrepancy of one gram is allowed.) If there are only two or three weights, the answer is very clearly no; although there are some combinations which work, the majority of randomly selected combinations of three weights do not. If there are very many weights, the answer is clearly yes. The question is, how many are just sufficient? That is, what is the number of weights such that it is equally likely for it to be possible to balance them as it is to be impossible?
Often, people's intuition is that the answer is above 100,000. Most people's intuition is that it is in the thousands or tens of thousands, while others feel it should at least be in the hundreds. The correct answer is 23.[citation needed]
The reason is that the correct comparison is to the number of partitions of the weights into left and right. There are 2^(N−1) different partitions for N weights, and the left sum minus the right sum can be thought of as a new random quantity for each partition. The distribution of the sum of weights is approximately Gaussian, with a peak at 500,000N and width 1,000,000√N, so that when 2^(N−1) is approximately equal to 1,000,000√N the transition occurs. 2^(23−1) is about 4 million, while the width of the distribution is only 5 million.[28]
Arthur C. Clarke's 1961 novel A Fall of Moondust contains a section where the main characters, trapped underground for an indefinite amount of time, are celebrating a birthday and find themselves discussing the validity of the birthday problem. As stated by a physicist passenger: "If you have a group of more than twenty-four people, the odds are better than even that two of them have the same birthday." Eventually, out of 22 present, it is revealed that two characters share the same birthday, May 23.
The reasoning is based on important tools that all students of mathematics should have ready access to. The birthday problem used to be a splendid illustration of the advantages of pure thought over mechanical manipulation; the inequalities can be obtained in a minute or two, whereas the multiplications would take much longer, and be much more subject to error, whether the instrument is a pencil or an old-fashioned desk computer. What calculators do not yield is understanding, or mathematical facility, or a solid basis for more advanced, generalized theories.
Source: https://en.wikipedia.org/wiki/Birthday_problem
A birthday attack is a brute-force collision attack that exploits the mathematics behind the birthday problem in probability theory. This attack can be used to abuse communication between two or more parties. The attack depends on the higher likelihood of collisions found between random attack attempts and a fixed degree of permutations (pigeonholes). Let H be the number of possible values of a hash function, with H = 2^l. With a birthday attack, it is possible to find a collision of a hash function with 50% chance in √(2^l) = 2^(l/2) evaluations, where l is the bit length of the hash output,[1][2] with 2^(l−1) being the classical preimage resistance security with the same probability.[2] There is a general (though disputed[3]) result that quantum computers can perform birthday attacks, thus breaking collision resistance, in ∛(2^l) = 2^(l/3).[4]
Although there are some digital signature vulnerabilities associated with the birthday attack, it cannot be used to break an encryption scheme any faster than a brute-force attack.[5]: 36
As an example, consider the scenario in which a teacher with a class of 30 students (n = 30) asks for everybody's birthday (for simplicity, ignore leap years) to determine whether any two students have the same birthday (corresponding to a hash collision as described further). Intuitively, this chance may seem small. Counter-intuitively, the probability that at least one student has the same birthday as any other student on any day is around 70% (for n = 30), from the formula 1 − 365!/((365 − n)! · 365^n).[6]
If the teacher had picked a specific day (say, 16 September), then the chance that at least one student was born on that specific day is 1 − (364/365)^30, about 7.9%.
In a birthday attack, the attacker prepares many different variants of benign and malicious contracts, each having a digital signature. A pair consisting of a benign and a malicious contract with the same signature is sought. In this fictional example, suppose that the digital signature of a string is the first byte of its SHA-256 hash. Note that finding a pair of benign contracts or a pair of malicious contracts is useless; only a benign-malicious pair helps the attacker. After the victim accepts the benign contract, the attacker substitutes it with the malicious one and claims the victim signed it, as proven by the digital signature.
The birthday attack can be modelled as a variation of the balls into bins problem, where balls (hash function inputs) are randomly placed into bins (hash function outputs). A hash collision occurs when at least two balls are placed into the same bin.
Given a function f, the goal of the attack is to find two different inputs x1, x2 such that f(x1) = f(x2). Such a pair x1, x2 is called a collision. The method used to find a collision is simply to evaluate the function f for different input values that may be chosen randomly or pseudorandomly until the same result is found more than once. Because of the birthday problem, this method can be rather efficient. Specifically, if a function f(x) yields any of H different outputs with equal probability and H is sufficiently large, then we expect to obtain a pair of different arguments x1 and x2 with f(x1) = f(x2) after evaluating the function for about 1.25√H different arguments on average.
We consider the following experiment. From a set of H values we choose n values uniformly at random, thereby allowing repetitions. Let p(n; H) be the probability that during this experiment at least one value is chosen more than once. This probability can be approximated as
p(n; H) ≈ 1 − e^(−n(n−1)/(2H)) ≈ 1 − e^(−n²/(2H)),
where n is the number of chosen values (inputs) and H is the number of possible outcomes (possible hash outputs).
Let n(p; H) be the smallest number of values we have to choose, such that the probability of finding a collision is at least p. By inverting the expression above, we find the following approximation:
n(p; H) ≈ √(2H ln(1/(1 − p))),
and assigning a 0.5 probability of collision we arrive at
n(0.5; H) ≈ 1.1774 √H.
Let Q(H) be the expected number of values we have to choose before finding the first collision. This number can be approximated by
Q(H) ≈ √((π/2) H) ≈ 1.2533 √H.
As an example, if a 64-bit hash is used, there are approximately 1.8 × 10^19 different outputs. If these are all equally probable (the best case), then it would take 'only' approximately 5 billion attempts (5.38 × 10^9) to generate a collision using brute force.[8] This value is called the birthday bound[9] and it could be approximated as 2^(l/2), where l is the number of bits in H.[10] Other examples are as follows:
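As a concrete illustration of the brute-force search described above, here is a minimal Python sketch that finds a collision in a deliberately truncated 32-bit hash (the first four bytes of SHA-256); on average this succeeds after roughly 1.25 · √(2^32) ≈ 82,000 evaluations:

```python
import hashlib
import itertools

def tiny_hash(data: bytes) -> bytes:
    """Deliberately weak 32-bit hash: the first 4 bytes of SHA-256 (demonstration only)."""
    return hashlib.sha256(data).digest()[:4]

def find_collision():
    seen = {}                          # hash value -> input that produced it
    for i in itertools.count():
        msg = f"message-{i}".encode()
        h = tiny_hash(msg)
        if h in seen and seen[h] != msg:
            return seen[h], msg, i + 1
        seen[h] = msg

m1, m2, tries = find_collision()
print(f"collision after {tries} hashes: {m1!r} and {m2!r} -> {tiny_hash(m1).hex()}")
```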
It is easy to see that if the outputs of the function are distributed unevenly, then a collision could be found even faster. The notion of 'balance' of a hash function quantifies the resistance of the function to birthday attacks (exploiting uneven key distribution). However, determining the balance of a hash function will typically require all possible inputs to be calculated and thus is infeasible for popular hash functions such as the MD and SHA families.[12] The subexpression {\displaystyle \ln {\frac {1}{1-p}}} in the equation for {\displaystyle n(p;H)} is not computed accurately for small {\displaystyle p} when directly translated into common programming languages as log(1/(1-p)) due to loss of significance. When log1p is available (as it is in C99), for example, the equivalent expression -log1p(-p) should be used instead.[13] If this is not done, the first column of the above table is computed as zero, and several items in the second column do not have even one correct significant digit.
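The numerical point can be demonstrated with a short Python sketch (function names are illustrative): for a very small p the naive formula collapses to zero, while the log1p form keeps its precision.

```python
from math import log, log1p, sqrt

def n_collisions_naive(p: float, H: float) -> float:
    # n(p; H) ≈ sqrt(2H · ln(1/(1-p))): direct translation, loses precision for tiny p
    return sqrt(2 * H * log(1 / (1 - p)))

def n_collisions_stable(p: float, H: float) -> float:
    # Same approximation, but ln(1/(1-p)) computed as -log1p(-p)
    return sqrt(2 * H * -log1p(-p))

p, H = 1e-18, 2.0 ** 128
print(n_collisions_naive(p, H))   # 0.0 -- the term ln(1/(1-p)) underflows to 0
print(n_collisions_stable(p, H))  # ~2.6e10, the meaningful answer
```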
A good rule of thumb which can be used for mental calculation is the relation
{\displaystyle p(n)\approx {\frac {n^{2}}{2H}},}
which can also be written as
{\displaystyle n\approx {\sqrt {2H\cdot p(n)}},}
or
{\displaystyle H\approx {\frac {n^{2}}{2p(n)}}.}
This works well for probabilities less than or equal to 0.5.
This approximation scheme is especially easy to use when working with exponents. For instance, suppose you are building 32-bit hashes ({\displaystyle H=2^{32}}) and want the chance of a collision to be at most one in a million ({\displaystyle p\approx 2^{-20}}). How many documents could we have at the most? Using the rule of thumb,
{\displaystyle n\approx {\sqrt {2\cdot 2^{32}\cdot 2^{-20}}}={\sqrt {2^{13}}}=2^{6.5}\approx 90.5,}
which is close to the correct answer of 93.
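A quick check of this estimate as a Python sketch:

```python
from math import sqrt

H = 2.0 ** 32     # number of possible 32-bit hash outputs
p = 2.0 ** -20    # acceptable collision probability, about one in a million

n = sqrt(2 * H * p)   # rule of thumb: n ≈ sqrt(2·H·p) = sqrt(2**13) = 2**6.5
print(round(n, 1))    # 90.5, close to the exact answer of 93
```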
Digital signatures can be susceptible to a birthday attack or, more precisely, a chosen-prefix collision attack. A message {\displaystyle m} is typically signed by first computing {\displaystyle f(m)}, where {\displaystyle f} is a cryptographic hash function, and then using some secret key to sign {\displaystyle f(m)}. Suppose Mallory wants to trick Bob into signing a fraudulent contract. Mallory prepares a fair contract {\displaystyle m} and a fraudulent one {\displaystyle m'}. She then finds a number of positions where {\displaystyle m} can be changed without changing the meaning, such as inserting commas, empty lines, one versus two spaces after a sentence, replacing synonyms, etc. By combining these changes, she can create a huge number of variations on {\displaystyle m} which are all fair contracts.
In a similar manner, Mallory also creates a huge number of variations on the fraudulent contract {\displaystyle m'}. She then applies the hash function to all these variations until she finds a version of the fair contract and a version of the fraudulent contract which have the same hash value, {\displaystyle f(m)=f(m')}. She presents the fair version to Bob for signing. After Bob has signed, Mallory takes the signature and attaches it to the fraudulent contract. This signature then "proves" that Bob signed the fraudulent contract.
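A toy version of this search fits in a few lines of Python. The sketch below is only an illustration: it truncates SHA-256 to 3 bytes so a collision is cheap to find, and it generates "equivalent" contract variants by choosing one or two spaces between words, one of the tricks mentioned above.

```python
import hashlib
from itertools import product

def toy_digest(contract: str, nbytes: int = 3) -> bytes:
    """Stand-in for the signature's hash f(m): a severely truncated SHA-256."""
    return hashlib.sha256(contract.encode()).digest()[:nbytes]

def variations(contract: str):
    """Yield visually equivalent variants: one or two spaces between each pair of words."""
    words = contract.split()
    for gaps in product((" ", "  "), repeat=len(words) - 1):
        yield "".join(w + g for w, g in zip(words, gaps)) + words[-1]

def find_fair_fraud_collision(fair: str, fraud: str):
    seen = {}                                  # digest -> fair variant
    fair_it, fraud_it = variations(fair), variations(fraud)
    while True:
        m = next(fair_it)
        seen[toy_digest(m)] = m
        m2 = next(fraud_it)
        h2 = toy_digest(m2)
        if h2 in seen:                         # one fair and one fraudulent variant collide
            return seen[h2], m2

fair = "I, Bob, agree to pay Mallory the sum of ten dollars for services rendered during May."
fraud = "I, Bob, agree to pay Mallory the sum of ten thousand dollars for services rendered during May."
m_fair, m_fraud = find_fair_fraud_collision(fair, fraud)
assert toy_digest(m_fair) == toy_digest(m_fraud)
```

Note that only a collision between one fair and one fraudulent variant is stored and checked, mirroring the observation below that two fair or two fraudulent contracts with the same hash are useless to Mallory.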
The probabilities differ slightly from the original birthday problem, as Mallory gains nothing by finding two fair or two fraudulent contracts with the same hash. Mallory's strategy is to generate pairs of one fair and one fraudulent contract. For a given hash function, {\displaystyle 2^{l}} is the number of possible hashes, where {\displaystyle l} is the bit length of the hash output.
The birthday problem equations do not exactly apply here. For a 50% chance of a collision, Mallory would need to generate approximately {\displaystyle 2^{(l/2)+1}} hashes, which is twice the number required for a simple collision under the classical birthday problem.
To avoid this attack, the output length of the hash function used for a signature scheme can be chosen large enough so that the birthday attack becomes computationally infeasible, i.e. about twice as many bits as are needed to prevent an ordinary brute-force attack.
Besides using a larger bit length, the signer (Bob) can protect himself by making some random, inoffensive changes to the document before signing it, and by keeping a copy of the contract he signed in his own possession, so that he can at least demonstrate in court that his signature matches that contract, not just the fraudulent one.
Pollard's rho algorithm for logarithms is an example of an algorithm that uses a birthday attack to compute discrete logarithms.
The same fraud is possible if the signer is Mallory rather than Bob. Bob could propose a fair contract to Mallory for signature. Mallory could then find an inoffensively modified version of this fair contract that has the same signature as a fraudulent contract, provide the modified fair contract and signature to Bob, and later produce the fraudulent copy. If Bob does not have the inoffensively modified version (perhaps having kept only his original proposal), Mallory's fraud is perfect. If Bob does have it, Mallory can at least claim that it is Bob who is the fraudster.
|
https://en.wikipedia.org/wiki/Birthday_attack
|
In computer science, brute-force search or exhaustive search, also known as generate and test, is a very general problem-solving technique and algorithmic paradigm that consists of systematically checking all possible candidates for whether or not each candidate satisfies the problem's statement.
A brute-force algorithm that finds the divisors of a natural number n would enumerate all integers from 1 to n, and check whether each of them divides n without remainder. A brute-force approach for the eight queens puzzle would examine all possible arrangements of 8 pieces on the 64-square chessboard and, for each arrangement, check whether each (queen) piece can attack any other.[1]
When in doubt, use brute force.
While a brute-force search is simple to implement and will always find a solution if it exists, implementation costs are proportional to the number of candidate solutions – which in many practical problems tends to grow very quickly as the size of the problem increases (§Combinatorial explosion).[2]Therefore, brute-force search is typically used when the problem size is limited, or when there are problem-specificheuristicsthat can be used to reduce the set of candidate solutions to a manageable size. The method is also used when the simplicity of implementation is more important than processing speed.
This is the case, for example, in critical applications where any errors in the algorithm would have very serious consequences or when using a computer to prove a mathematical theorem. Brute-force search is also useful as a baseline method when benchmarking other algorithms or metaheuristics. Indeed, brute-force search can be viewed as the simplest metaheuristic. Brute-force search should not be confused with backtracking, where large sets of solutions can be discarded without being explicitly enumerated (as in the textbook computer solution to the eight queens problem above). The brute-force method for finding an item in a table – namely, check all entries of the latter, sequentially – is called linear search.
In order to apply brute-force search to a specific class of problems, one must implement four procedures, first, next, valid, and output. These procedures should take as a parameter the data P for the particular instance of the problem that is to be solved, and should do the following:
- first (P): generate a first candidate solution for P.
- next (P, c): generate the next candidate for P after the current one c.
- valid (P, c): check whether candidate c is a solution for P.
- output (P, c): use the solution c of P as appropriate to the application.
The next procedure must also tell when there are no more candidates for the instance P, after the current one c. A convenient way to do that is to return a "null candidate", some conventional data value Λ that is distinct from any real candidate. Likewise the first procedure should return Λ if there are no candidates at all for the instance P. The brute-force method is then expressed by the algorithm
c ← first(P)
while c ≠ Λ do
    if valid(P, c) then output(P, c)
    c ← next(P, c)
end while
For example, when looking for the divisors of an integer n, the instance data P is the number n. The call first(n) should return the integer 1 if n ≥ 1, or Λ otherwise; the call next(n, c) should return c + 1 if c < n, and Λ otherwise; and valid(n, c) should return true if and only if c is a divisor of n. (In fact, if we choose Λ to be n + 1, the tests n ≥ 1 and c < n are unnecessary.) The brute-force search algorithm above will call output for every candidate that is a solution to the given instance P. The algorithm is easily modified to stop after finding the first solution, or a specified number of solutions; or after testing a specified number of candidates, or after spending a given amount of CPU time.
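A direct Python transcription of this divisor example might look like the following sketch, with None standing in for the null candidate Λ:

```python
def first(n):               # first candidate, or None (our Λ) if there are none
    return 1 if n >= 1 else None

def next_candidate(n, c):   # candidate after c, or None when c was the last one
    return c + 1 if c < n else None

def valid(n, c):            # is candidate c a solution, i.e. a divisor of n?
    return n % c == 0

def brute_force_divisors(n):
    solutions = []
    c = first(n)
    while c is not None:
        if valid(n, c):
            solutions.append(c)     # the "output" step
        c = next_candidate(n, c)
    return solutions

print(brute_force_divisors(60))     # [1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60]
```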
The main disadvantage of the brute-force method is that, for many real-world problems, the number of natural candidates is prohibitively large. For instance, if we look for the divisors of a number as described above, the number of candidates tested will be the given number n. So if n has sixteen decimal digits, say, the search will require executing at least 10^15 computer instructions, which will take several days on a typical PC. If n is a random 64-bit natural number, which has about 19 decimal digits on the average, the search will take about 10 years. This steep growth in the number of candidates, as the size of the data increases, occurs in all sorts of problems. For instance, if we are seeking a particular rearrangement of 10 letters, then we have 10! = 3,628,800 candidates to consider, which a typical PC can generate and test in less than one second. However, adding one more letter – which is only a 10% increase in the data size – will multiply the number of candidates by 11, a 1000% increase. For 20 letters, the number of candidates is 20!, which is about 2.4×10^18 or 2.4 quintillion; and the search will take about 10 years. This unwelcome phenomenon is commonly called the combinatorial explosion, or the curse of dimensionality.
One example of a case where combinatorial complexity leads to a solvability limit is in solving chess. Chess is not a solved game. In 2005, all chess game endings with six pieces or fewer were solved, showing the result of each position if played perfectly. It took ten more years to complete the tablebase with one more chess piece added, thus completing a 7-piece tablebase. Adding one more piece to a chess ending (thus making an 8-piece tablebase) is considered intractable due to the added combinatorial complexity.[3][4][5]
One way to speed up a brute-force algorithm is to reduce the search space, that is, the set of candidate solutions, by using heuristics specific to the problem class. For example, in the eight queens problem the challenge is to place eight queens on a standard chessboard so that no queen attacks any other. Since each queen can be placed in any of the 64 squares, in principle there are 64^8 = 281,474,976,710,656 possibilities to consider. However, because the queens are all alike and no two queens can be placed on the same square, the candidates are all possible ways of choosing a set of 8 squares from the set of all 64 squares; which means 64 choose 8 = 64!/(56!·8!) = 4,426,165,368 candidate solutions – about 1/60,000 of the previous estimate. Further, no arrangement with two queens on the same row or the same column can be a solution. Therefore, we can further restrict the set of candidates to those arrangements.
As this example shows, a little bit of analysis will often lead to dramatic reductions in the number of candidate solutions, and may turn an intractable problem into a trivial one.
In some cases, the analysis may reduce the candidates to the set of all valid solutions; that is, it may yield an algorithm that directly enumerates all the desired solutions (or finds one solution, as appropriate), without wasting time with tests and the generation of invalid candidates. For example, for the problem "find all integers between 1 and 1,000,000 that are evenly divisible by 417" a naive brute-force solution would generate all integers in the range, testing each of them for divisibility. However, that problem can be solved much more efficiently by starting with 417 and repeatedly adding 417 until the number exceeds 1,000,000 – which takes only 2398 (= 1,000,000 ÷ 417) steps, and no tests.
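For this concrete case, the difference between the naive brute-force test and the direct enumeration is easy to see in a short Python sketch:

```python
# Brute force: test every integer in the range for divisibility (1,000,000 tests).
brute = [k for k in range(1, 1_000_001) if k % 417 == 0]

# Analysis: enumerate the multiples of 417 directly (2398 steps, no tests at all).
direct = list(range(417, 1_000_001, 417))

assert brute == direct and len(direct) == 2398
```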
In applications that require only one solution, rather than all solutions, the expected running time of a brute-force search will often depend on the order in which the candidates are tested. As a general rule, one should test the most promising candidates first. For example, when searching for a proper divisor of a random number n, it is better to enumerate the candidate divisors in increasing order, from 2 to n − 1, than the other way around – because the probability that n is divisible by c is 1/c. Moreover, the probability of a candidate being valid is often affected by the previous failed trials. For example, consider the problem of finding a 1 bit in a given 1000-bit string P. In this case, the candidate solutions are the indices 1 to 1000, and a candidate c is valid if P[c] = 1. Now, suppose that the first bit of P is equally likely to be 0 or 1, but each bit thereafter is equal to the previous one with 90% probability. If the candidates are enumerated in increasing order, 1 to 1000, the number t of candidates examined before success will be about 6, on the average. On the other hand, if the candidates are enumerated in the order 1, 11, 21, 31, ..., 991, 2, 12, 22, 32, etc., the expected value of t will be only a little more than 2. More generally, the search space should be enumerated in such a way that the next candidate is most likely to be valid, given that the previous trials were not. So if the valid solutions are likely to be "clustered" in some sense, then each new candidate should be as far as possible from the previous ones, in that same sense. The converse holds, of course, if the solutions are likely to be spread out more uniformly than expected by chance.
There are many other search methods, or metaheuristics, which are designed to take advantage of various kinds of partial knowledge one may have about the solution. Heuristics can also be used to make an early cutoff of parts of the search. One example of this is the minimax principle for searching game trees, which eliminates many subtrees at an early stage in the search. In certain fields, such as language parsing, techniques such as chart parsing can exploit constraints in the problem to reduce an exponential complexity problem into a polynomial complexity problem. In many cases, such as in constraint satisfaction problems, one can dramatically reduce the search space by means of constraint propagation, which is efficiently implemented in constraint programming languages. The search space for problems can also be reduced by replacing the full problem with a simplified version. For example, in computer chess, rather than computing the full minimax tree of all possible moves for the remainder of the game, a more limited tree of minimax possibilities is computed, with the tree being pruned at a certain number of moves, and the remainder of the tree being approximated by a static evaluation function.
In cryptography, a brute-force attack involves systematically checking all possible keys until the correct key is found.[6] This strategy can in theory be used against any encrypted data[7] (except a one-time pad) by an attacker who is unable to take advantage of any weakness in an encryption system that would otherwise make his or her task easier.
The key length used in the encryption determines the practical feasibility of performing a brute-force attack, with longer keys exponentially more difficult to crack than shorter ones. Brute-force attacks can be made less effective by obfuscating the data to be encoded, something that makes it more difficult for an attacker to recognise when he has cracked the code. One of the measures of the strength of an encryption system is how long it would theoretically take an attacker to mount a successful brute-force attack against it.
|
https://en.wikipedia.org/wiki/Brute-force_search
|
In cryptography, the one-time pad (OTP) is an encryption technique that cannot be cracked. It requires the use of a single-use pre-shared key that is larger than or equal to the size of the message being sent. In this technique, a plaintext is paired with a random secret key (also referred to as a one-time pad). Then, each bit or character of the plaintext is encrypted by combining it with the corresponding bit or character from the pad using modular addition.[1]
The resulting ciphertext is impossible to decrypt or break if the following four conditions are met:[2][3]
1. The key must be at least as long as the plaintext.
2. The key must be truly random.
3. The key must never be reused, in whole or in part.
4. The key must be kept completely secret by the communicating parties.
These requirements make the OTP the only known encryption system that is mathematically proven to be unbreakable under the principles of information theory.[4]
Digital versions of one-time pad ciphers have been used by nations for critical diplomatic and military communication, but the problems of secure key distribution make them impractical for many applications.
The concept was first described by Frank Miller in 1882,[5][6] and the one-time pad was re-invented in 1917. On July 22, 1919, U.S. Patent 1,310,719 was issued to Gilbert Vernam for the XOR operation used for the encryption of a one-time pad.[7] One-time use came later, when Joseph Mauborgne recognized that if the key tape were totally random, then cryptanalysis would be impossible.[8] To increase security, one-time pads were sometimes printed onto sheets of highly flammable nitrocellulose, so that they could easily be burned after use.
Frank Miller in 1882 was the first to describe the one-time pad system for securing telegraphy.[6][9]
The next one-time pad system was electrical. In 1917, Gilbert Vernam (of AT&T Corporation) invented[10] and later patented in 1919 (U.S. patent 1,310,719) a cipher based on teleprinter technology. Each character in a message was electrically combined with a character on a punched paper tape key. Joseph Mauborgne (then a captain in the U.S. Army and later chief of the Signal Corps) recognized that the character sequence on the key tape could be completely random and that, if so, cryptanalysis would be more difficult. Together they invented the first one-time tape system.[11]
The next development was the paper pad system. Diplomats had long used codes and ciphers for confidentiality and to minimize telegraph costs. For the codes, words and phrases were converted to groups of numbers (typically 4 or 5 digits) using a dictionary-like codebook. For added security, secret numbers could be combined with each code group (usually by modular addition) before transmission, with the secret numbers being changed periodically (this was called superencryption). In the early 1920s, three German cryptographers (Werner Kunze, Rudolf Schauffler, and Erich Langlotz), who were involved in breaking such systems, realized that they could never be broken if a separate randomly chosen additive number was used for every code group. They had duplicate paper pads printed with lines of random number groups. Each page had a serial number and eight lines. Each line had six 5-digit numbers. A page would be used as a work sheet to encode a message and then destroyed. The serial number of the page would be sent with the encoded message. The recipient would reverse the procedure and then destroy his copy of the page. The German foreign office put this system into operation by 1923.[11]
A separate notion was the use of a one-time pad of letters to encode plaintext directly as in the example below. Leo Marks describes inventing such a system for the British Special Operations Executive during World War II, though he suspected at the time that it was already known in the highly compartmentalized world of cryptography, as for instance at Bletchley Park.[12]
The final discovery was made by information theorist Claude Shannon in the 1940s, who recognized and proved the theoretical significance of the one-time pad system. Shannon delivered his results in a classified report in 1945 and published them openly in 1949.[4] At the same time, Soviet information theorist Vladimir Kotelnikov had independently proved the absolute security of the one-time pad; his results were delivered in 1941 in a report that apparently remains classified.[13]
There also exists a quantum analogue of the one-time pad, which can be used to exchange quantum states along a one-way quantum channel with perfect secrecy, which is sometimes used in quantum computing. It can be shown that a shared secret of at least 2n classical bits is required to exchange an n-qubit quantum state along a one-way quantum channel (by analogue with the result that a key of n bits is required to exchange an n-bit message with perfect secrecy). A scheme proposed in 2000 achieves this bound. One way to implement this quantum one-time pad is by dividing the 2n-bit key into n pairs of bits. To encrypt the state, for each pair of bits i in the key, one would apply an X gate to qubit i of the state if and only if the first bit of the pair is 1, and apply a Z gate to qubit i of the state if and only if the second bit of the pair is 1. Decryption involves applying this transformation again, since X and Z are their own inverses. This can be shown to be perfectly secret in a quantum setting.[14]
Suppose Alice wishes to send the message hello to Bob. Assume two pads of paper containing identical random sequences of letters were somehow previously produced and securely issued to both. Alice chooses the appropriate unused page from the pad. The way to do this is normally arranged for in advance, as for instance "use the 12th sheet on 1 May", or "use the next available sheet for the next message".
The material on the selected sheet is the key for this message. Each letter from the pad will be combined in a predetermined way with one letter of the message. (It is common, but not required, to assign each letter a numerical value, e.g., a is 0, b is 1, and so on.)
In this example, the technique is to combine the key and the message using modular addition, not unlike the Vigenère cipher. The numerical values of corresponding message and key letters are added together, modulo 26. So, if key material begins with XMCKL and the message is hello, then the coding would be done as follows:
   7 (h)   4 (e)  11 (l)  11 (l)  14 (o)  message
+ 23 (X)  12 (M)   2 (C)  10 (K)  11 (L)  key
= 30      16      13      21      25      message + key
=  4 (E)  16 (Q)  13 (N)  21 (V)  25 (Z)  (message + key) mod 26
      E       Q       N       V       Z   → ciphertext
If a number is larger than 25, then the remainder after subtraction of 26 is taken in modular arithmetic fashion. This simply means that if the computations "go past" Z, the sequence starts again at A.
The ciphertext to be sent to Bob is thus EQNVZ. Bob uses the matching key page and the same process, but in reverse, to obtain the plaintext. Here the key is subtracted from the ciphertext, again using modular arithmetic:
   4 (E)  16 (Q)  13 (N)  21 (V)  25 (Z)  ciphertext
− 23 (X)  12 (M)   2 (C)  10 (K)  11 (L)  key
= −19      4      11      11      14      ciphertext − key
=  7 (h)   4 (e)  11 (l)  11 (l)  14 (o)  (ciphertext − key) mod 26
      h       e       l       l       o   → message
Similar to the above, if a number is negative, then 26 is added to make the number zero or higher.
Thus Bob recovers Alice's plaintext, the message hello. Both Alice and Bob destroy the key sheet immediately after use, thus preventing reuse and an attack against the cipher. The KGB often issued its agents one-time pads printed on tiny sheets of flash paper, paper chemically converted to nitrocellulose, which burns almost instantly and leaves no ash.[15]
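The whole pencil-and-paper procedure fits in a few lines of Python; this sketch (the function name is only illustrative) reproduces the example above:

```python
A = ord('A')

def otp_mod26(text: str, pad: str, decrypt: bool = False) -> str:
    """Combine letters with the pad by addition modulo 26 (subtraction to decrypt)."""
    sign = -1 if decrypt else 1
    return "".join(
        chr((ord(t.upper()) - A + sign * (ord(k.upper()) - A)) % 26 + A)
        for t, k in zip(text, pad)
    )

ciphertext = otp_mod26("hello", "XMCKL")                   # 'EQNVZ'
plaintext = otp_mod26(ciphertext, "XMCKL", decrypt=True)   # 'HELLO'
print(ciphertext, plaintext)
```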
The classical one-time pad of espionage used actual pads of minuscule, easily concealed paper, a sharp pencil, and some mental arithmetic. The method can be implemented now as a software program, using data files as input (plaintext), output (ciphertext) and key material (the required random sequence). The exclusive or (XOR) operation is often used to combine the plaintext and the key elements, and is especially attractive on computers since it is usually a native machine instruction and is therefore very fast. It is, however, difficult to ensure that the key material is actually random, is used only once, never becomes known to the opposition, and is completely destroyed after use. The auxiliary parts of a software one-time pad implementation present real challenges: secure handling/transmission of plaintext, truly random keys, and one-time-only use of the key.
To continue the example from above, suppose Eve intercepts Alice's ciphertext: EQNVZ. If Eve tried every possible key, she would find that the key XMCKL would produce the plaintext hello, but she would also find that the key TQURI would produce the plaintext later, an equally plausible message:
   4 (E)  16 (Q)  13 (N)  21 (V)  25 (Z)  ciphertext
− 19 (T)  16 (Q)  20 (U)  17 (R)   8 (I)  possible key
= −15      0      −7       4      17      ciphertext − key
= 11 (l)   0 (a)  19 (t)   4 (e)  17 (r)  (ciphertext − key) mod 26
In fact, it is possible to "decrypt" out of the ciphertext any message whatsoever with the same number of characters, simply by using a different key, and there is no information in the ciphertext that will allow Eve to choose among the various possible readings of the ciphertext.[16]
If the key is not truly random, it is possible to use statistical analysis to determine which of the plausible keys is the "least" random and therefore more likely to be the correct one. If a key is reused, it will noticeably be the only key that produces sensible plaintexts from both ciphertexts (the chances of some randomincorrectkey also producing two sensible plaintexts are very slim).
One-time pads are "information-theoretically secure" in that the encrypted message (i.e., the ciphertext) provides no information about the original message to a cryptanalyst (except the maximum possible length[note 1] of the message). This is a very strong notion of security first developed during WWII by Claude Shannon and proved, mathematically, to be true for the one-time pad by Shannon at about the same time. His result was published in the Bell System Technical Journal in 1949.[17] If properly used, one-time pads are secure in this sense even against adversaries with infinite computational power.
Shannon proved, using information theoretic considerations, that the one-time pad has a property he termed perfect secrecy; that is, the ciphertext C gives absolutely no additional information about the plaintext.[note 2] This is because (intuitively), given a truly uniformly random key that is used only once, a ciphertext can be translated into any plaintext of the same length, and all are equally likely. Thus, the a priori probability of a plaintext message M is the same as the a posteriori probability of a plaintext message M given the corresponding ciphertext.
Conventional symmetric encryption algorithms use complex patterns of substitution and transpositions. For the best of these currently in use, it is not known whether there can be a cryptanalytic procedure that can efficiently reverse (or even partially reverse) these transformations without knowing the key used during encryption. Asymmetric encryption algorithms depend on mathematical problems that are thought to be difficult to solve, such as integer factorization or the discrete logarithm. However, there is no proof that these problems are hard, and a mathematical breakthrough could make existing systems vulnerable to attack.[note 3]
Given perfect secrecy, in contrast to conventional symmetric encryption, the one-time pad is immune even to brute-force attacks. Trying all keys simply yields all plaintexts, all equally likely to be the actual plaintext. Even with a partially known plaintext, brute-force attacks cannot be used, since an attacker is unable to gain any information about the parts of the key needed to decrypt the rest of the message. The parts of the plaintext that are known will reveal only the parts of the key corresponding to them, and they correspond on a strictly one-to-one basis; a uniformly random key's bits will be independent.
Quantum cryptography and post-quantum cryptography involve studying the impact of quantum computers on information security. Quantum computers have been shown by Peter Shor and others to be much faster at solving some problems that the security of traditional asymmetric encryption algorithms depends on. The cryptographic algorithms that depend on these problems' difficulty would be rendered obsolete with a powerful enough quantum computer.
One-time pads, however, would remain secure, as perfect secrecy does not depend on assumptions about the computational resources of an attacker.
Despite Shannon's proof of its security, the one-time pad has serious drawbacks in practice because it requires:
- truly random, as opposed to pseudorandom, one-time pad values, which is a non-trivial requirement;
- secure generation and exchange of the one-time pad values, which must be at least as long as the message;
- careful treatment to make sure that the one-time pad values remain secret and are disposed of correctly, preventing any reuse in whole or in part.
One-time pads solve few current practical problems in cryptography. High-quality ciphers are widely available and their security is not currently considered a major worry.[18] Such ciphers are almost always easier to employ than one-time pads because the amount of key material that must be properly and securely generated, distributed and stored is far smaller.[16] Additionally, public key cryptography overcomes the problem of key distribution.
High-quality random numbers are difficult to generate. The random number generation functions in most programming language libraries are not suitable for cryptographic use. Even those generators that are suitable for normal cryptographic use, including /dev/random and many hardware random number generators, may make some use of cryptographic functions whose security has not been proven. An example of a technique for generating pure randomness is measuring radioactive emissions.[19]
In particular, one-time use is absolutely necessary. For example, if {\displaystyle p_{1}} and {\displaystyle p_{2}} represent two distinct plaintext messages and they are each encrypted by a common key {\displaystyle k}, then the respective ciphertexts are given by:
{\displaystyle c_{1}=p_{1}\oplus k}
{\displaystyle c_{2}=p_{2}\oplus k}
where {\displaystyle \oplus } means XOR. If an attacker were to have both ciphertexts {\displaystyle c_{1}} and {\displaystyle c_{2}}, then simply taking the XOR of {\displaystyle c_{1}} and {\displaystyle c_{2}} yields the XOR of the two plaintexts {\displaystyle p_{1}\oplus p_{2}}. (This is because taking the XOR of the common key {\displaystyle k} with itself yields a constant bitstream of zeros.) {\displaystyle p_{1}\oplus p_{2}} is then the equivalent of a running key cipher.[citation needed]
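This key-reuse failure is easy to reproduce; the following Python sketch encrypts two equal-length messages with the same pad and checks that the pad cancels out:

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

p1 = b"attack at dawn!!"
p2 = b"retreat at noon!"
k = secrets.token_bytes(len(p1))    # a "one-time" pad, incorrectly used twice below

c1 = xor_bytes(p1, k)
c2 = xor_bytes(p2, k)

# The key cancels out: c1 XOR c2 == p1 XOR p2, leaking plaintext structure.
assert xor_bytes(c1, c2) == xor_bytes(p1, p2)
```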
If both plaintexts are in a natural language (e.g., English or Russian), each stands a very high chance of being recovered by heuristic cryptanalysis, with possibly a few ambiguities. Of course, a longer message can only be broken for the portion that overlaps a shorter message, plus perhaps a little more by completing a word or phrase. The most famous exploit of this vulnerability occurred with the Venona project.[20]
Because the pad, like all shared secrets, must be passed and kept secure, and the pad has to be at least as long as the message, there is often no point in using a one-time pad, as one can simply send the plain text instead of the pad (as both can be the same size and have to be sent securely).[16] However, once a very long pad has been securely sent (e.g., a computer disk full of random data), it can be used for numerous future messages, until the sum of the messages' sizes equals the size of the pad. Quantum key distribution also proposes a solution to this problem, assuming fault-tolerant quantum computers.
Distributing very long one-time pad keys is inconvenient and usually poses a significant security risk.[2] The pad is essentially the encryption key, but unlike keys for modern ciphers, it must be extremely long and is far too difficult for humans to remember. Storage media such as thumb drives, DVD-Rs or personal digital audio players can be used to carry a very large one-time-pad from place to place in a non-suspicious way, but the need to transport the pad physically is a burden compared to the key negotiation protocols of a modern public-key cryptosystem. Such media cannot reliably be erased securely by any means short of physical destruction (e.g., incineration). A 4.7 GB DVD-R full of one-time-pad data, if shredded into particles 1 mm² (0.0016 sq in) in size, leaves over 4 megabits of data on each particle.[citation needed] In addition, the risk of compromise during transit (for example, a pickpocket swiping, copying and replacing the pad) is likely to be much greater in practice than the likelihood of compromise for a cipher such as AES. Finally, the effort needed to manage one-time pad key material scales very badly for large networks of communicants—the number of pads required goes up as the square of the number of users freely exchanging messages. For communication between only two persons, or a star network topology, this is less of a problem.
The key material must be securely disposed of after use, to ensure the key material is never reused and to protect the messages sent.[2] Because the key material must be transported from one endpoint to another, and persist until the message is sent or received, it can be more vulnerable to forensic recovery than the transient plaintext it protects (because of possible data remanence).
As traditionally used, one-time pads provide no message authentication, the lack of which can pose a security threat in real-world systems. For example, an attacker who knows that the message contains "meet jane and me tomorrow at three thirty pm" can derive the corresponding codes of the pad directly from the two known elements (the encrypted text and the known plaintext). The attacker can then replace that text by any other text of exactly the same length, such as "three thirty meeting is cancelled, stay home". The attacker's knowledge of the one-time pad is limited to this byte length, which must be maintained for any other content of the message to remain valid. This is different from malleability[21] where the plaintext is not necessarily known. Without knowing the message, the attacker can also flip bits in a message sent with a one-time pad, without the recipient being able to detect it. Because of their similarities, attacks on one-time pads are similar to attacks on stream ciphers.[22]
Standard techniques to prevent this, such as the use of a message authentication code, can be used along with a one-time pad system to prevent such attacks, as can classical methods such as variable-length padding and Russian copulation, but they all lack the perfect security the OTP itself has. Universal hashing provides a way to authenticate messages up to an arbitrary security bound (i.e., for any p > 0, a large enough hash ensures that even a computationally unbounded attacker's likelihood of successful forgery is less than p), but this uses additional random data from the pad, and some of these techniques remove the possibility of implementing the system without a computer.
Due to its relative simplicity of implementation, and due to its promise of perfect secrecy, the one-time-pad enjoys high popularity among students learning about cryptography, especially as it is often the first algorithm to be presented and implemented during a course. Such "first" implementations often break the requirements for information theoretical security in one or more ways:
- the pad is generated with a pseudorandom number generator rather than a true random source;
- the pad (or part of it) is reused for more than one message;
- the pad is not kept completely secret, or is not securely destroyed after use.
Despite its problems, the one-time-pad retains some practical interest. In some hypothetical espionage situations, the one-time pad might be useful because encryption and decryption can be computed by hand with only pencil and paper. Nearly all other high-quality ciphers are entirely impractical without computers. In the modern world, however, computers (such as those embedded in mobile phones) are so ubiquitous that possessing a computer suitable for performing conventional encryption (for example, a phone that can run concealed cryptographic software) will usually not attract suspicion.
A common use of the one-time pad in quantum cryptography is in association with quantum key distribution (QKD). QKD is typically associated with the one-time pad because it provides a way of distributing a long shared secret key securely and efficiently (assuming the existence of practical quantum networking hardware). A QKD algorithm uses properties of quantum mechanical systems to let two parties agree on a shared, uniformly random string. Algorithms for QKD, such as BB84, are also able to determine whether an adversarial party has been attempting to intercept key material, and allow for a shared secret key to be agreed upon with relatively few messages exchanged and relatively low computational overhead. At a high level, the schemes work by taking advantage of the destructive way quantum states are measured to exchange a secret and detect tampering. In the original BB84 paper, it was proven that the one-time pad, with keys distributed via QKD, is a perfectly secure encryption scheme.[25] However, this result depends on the QKD scheme being implemented correctly in practice. Attacks on real-world QKD systems exist. For instance, many systems do not send a single photon (or other object in the desired quantum state) per bit of the key because of practical limitations, and an attacker could intercept and measure some of the photons associated with a message, gaining information about the key (i.e. leaking information about the pad), while passing along unmeasured photons corresponding to the same bit of the key.[26] Combining QKD with a one-time pad can also loosen the requirements for key reuse. In 1982, Bennett and Brassard showed that if a QKD protocol does not detect that an adversary was trying to intercept an exchanged key, then the key can safely be reused while preserving perfect secrecy.[27]
The one-time pad is an example of post-quantum cryptography, because perfect secrecy is a definition of security that does not depend on the computational resources of the adversary. Consequently, an adversary with a quantum computer would still not be able to gain any more information about a message encrypted with a one time pad than an adversary with just a classical computer.
One-time pads have been used in special circumstances since the early 1900s. In 1923, they were employed for diplomatic communications by the German diplomatic establishment.[28] The Weimar Republic Diplomatic Service began using the method in about 1920. The breaking of poor Soviet cryptography by the British, with messages made public for political reasons in two instances in the 1920s (ARCOS case), appears to have caused the Soviet Union to adopt one-time pads for some purposes by around 1930. KGB spies are also known to have used pencil and paper one-time pads more recently. Examples include Colonel Rudolf Abel, who was arrested and convicted in New York City in the 1950s, and the 'Krogers' (i.e., Morris and Lona Cohen), who were arrested and convicted of espionage in the United Kingdom in the early 1960s. Both were found with physical one-time pads in their possession.
A number of nations have used one-time pad systems for their sensitive traffic. Leo Marks reports that the British Special Operations Executive used one-time pads in World War II to encode traffic between its offices. One-time pads for use with its overseas agents were introduced late in the war.[12] A few British one-time tape cipher machines include the Rockex and Noreen. The German Stasi Sprach Machine was also capable of using one-time tape that East Germany, Russia, and even Cuba used to send encrypted messages to their agents.[29]
The World War II voice scrambler SIGSALY was also a form of one-time system. It added noise to the signal at one end and removed it at the other end. The noise was distributed to the channel ends in the form of large shellac records that were manufactured in unique pairs. There were both starting synchronization and longer-term phase drift problems that arose and had to be solved before the system could be used.[30]
The hotline between Moscow and Washington, D.C., established in 1963 after the 1962 Cuban Missile Crisis, used teleprinters protected by a commercial one-time tape system. Each country prepared the keying tapes used to encode its messages and delivered them via their embassy in the other country. A unique advantage of the OTP in this case was that neither country had to reveal more sensitive encryption methods to the other.[31]
U.S. Army Special Forces used one-time pads in Vietnam. By using Morse code with one-time pads and continuous wave radio transmission (the carrier for Morse code), they achieved both secrecy and reliable communications.[32]
Starting in 1988, the African National Congress (ANC) used disk-based one-time pads as part of a secure communication system between ANC leaders outside South Africa and in-country operatives as part of Operation Vula,[33] a successful effort to build a resistance network inside South Africa. Random numbers on the disk were erased after use. A Belgian flight attendant acted as courier to bring in the pad disks. A regular resupply of new disks was needed as they were used up fairly quickly. One problem with the system was that it could not be used for secure data storage. Later Vula added a stream cipher keyed by book codes to solve this problem.[34]
A related notion is the one-time code: a signal used only once, e.g., "Alpha" for "mission completed", "Bravo" for "mission failed", or even "Torch" for "Allied invasion of French Northern Africa",[35] which cannot be "decrypted" in any reasonable sense of the word. Understanding the message will require additional information, often 'depth' of repetition, or some traffic analysis. However, such strategies (though often used by real operatives, and baseball coaches)[36] are not a cryptographic one-time pad in any significant sense.
At least into the 1970s, the U.S.National Security Agency(NSA) produced a variety of manual one-time pads, both general purpose and specialized, with 86,000 one-time pads produced in fiscal year 1972. Special purpose pads were produced for what the NSA called "pro forma" systems, where "the basic framework, form or format of every message text is identical or nearly so; the same kind of information, message after message, is to be presented in the same order, and only specific values, like numbers, change with each message." Examples included nuclear launch messages and radio direction finding reports (COMUS).[37]: pp. 16–18
General purpose pads were produced in several formats, a simple list of random letters (DIANA) or just numbers (CALYPSO), tiny pads for covert agents (MICKEY MOUSE), and pads designed for more rapid encoding of short messages, at the cost of lower density. One example, ORION, had 50 rows of plaintext alphabets on one side and the corresponding random cipher text letters on the other side. By placing a sheet on top of a piece ofcarbon paperwith the carbon face up, one could circle one letter in each row on one side and the corresponding letter on the other side would be circled by the carbon paper. Thus one ORION sheet could quickly encode or decode a message up to 50 characters long. Production of ORION pads required printing both sides in exact registration, a difficult process, so NSA switched to another pad format, MEDEA, with 25 rows of paired alphabets and random characters. (SeeCommons:Category:NSA one-time padsfor illustrations.)
The NSA also built automated systems for the "centralized headquarters of CIA and Special Forces units so that they can efficiently process the many separate one-time pad messages to and from individual pad holders in the field".[37]: pp. 21–26
During World War II and into the 1950s, the U.S. made extensive use of one-time tape systems. In addition to providing confidentiality, circuits secured by one-time tape ran continually, even when there was no traffic, thus protecting against traffic analysis. In 1955, NSA produced some 1,660,000 rolls of one-time tape. Each roll was 8 inches in diameter, contained 100,000 characters, lasted 166 minutes and cost $4.55 to produce. By 1972, only 55,000 rolls were produced, as one-time tapes were replaced by rotor machines such as SIGTOT, and later by electronic devices based on shift registers.[37]: pp. 39–44 The NSA describes one-time tape systems like 5-UCO and SIGTOT as being used for intelligence traffic until the introduction of the electronic cipher based KW-26 in 1957.[38]
While one-time pads provide perfect secrecy if generated and used properly, small mistakes can lead to successful cryptanalysis:
|
https://en.wikipedia.org/wiki/Vernam_cipher
|
Elliptic-curve Diffie–Hellman (ECDH) is a key agreement protocol that allows two parties, each having an elliptic-curve public–private key pair, to establish a shared secret over an insecure channel.[1][2][3] This shared secret may be directly used as a key, or to derive another key. The key, or the derived key, can then be used to encrypt subsequent communications using a symmetric-key cipher. It is a variant of the Diffie–Hellman protocol using elliptic-curve cryptography.
The following example illustrates how a shared key is established. Suppose Alice wants to establish a shared key with Bob, but the only channel available for them may be eavesdropped by a third party. Initially, the domain parameters (that is, {\displaystyle (p,a,b,G,n,h)} in the prime case or {\displaystyle (m,f(x),a,b,G,n,h)} in the binary case) must be agreed upon. Also, each party must have a key pair suitable for elliptic curve cryptography, consisting of a private key {\displaystyle d} (a randomly selected integer in the interval {\displaystyle [1,n-1]}) and a public key represented by a point {\displaystyle Q} (where {\displaystyle Q=d\cdot G}, that is, the result of adding {\displaystyle G} to itself {\displaystyle d} times). Let Alice's key pair be {\displaystyle (d_{\text{A}},Q_{\text{A}})} and Bob's key pair be {\displaystyle (d_{\text{B}},Q_{\text{B}})}. Each party must know the other party's public key prior to execution of the protocol.
Alice computes the point {\displaystyle (x_{k},y_{k})=d_{\text{A}}\cdot Q_{\text{B}}}. Bob computes the point {\displaystyle (x_{k},y_{k})=d_{\text{B}}\cdot Q_{\text{A}}}. The shared secret is {\displaystyle x_{k}} (the x coordinate of the point). Most standardized protocols based on ECDH derive a symmetric key from {\displaystyle x_{k}} using some hash-based key derivation function.
The shared secret calculated by both parties is equal, because {\displaystyle d_{\text{A}}\cdot Q_{\text{B}}=d_{\text{A}}\cdot d_{\text{B}}\cdot G=d_{\text{B}}\cdot d_{\text{A}}\cdot G=d_{\text{B}}\cdot Q_{\text{A}}}.
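The protocol can be illustrated with a toy Python implementation on the small textbook curve y² = x³ + 2x + 2 over F₁₇ with generator G = (5, 1). These parameters are of course far too small for any real security; the sketch only shows that both parties arrive at the same point:

```python
# Toy ECDH on the textbook curve y^2 = x^3 + 2x + 2 over F_17 with generator G = (5, 1).
P, A, B = 17, 2, 2
G = (5, 1)
INF = None   # the point at infinity

def point_add(p1, p2):
    if p1 is INF: return p2
    if p2 is INF: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return INF                                       # p1 == -p2
    if p1 == p2:
        m = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P   # tangent slope (doubling)
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, P) % P          # chord slope
    x3 = (m * m - x1 - x2) % P
    return (x3, (m * (x1 - x3) - y1) % P)

def scalar_mult(d, point):
    """Compute d * point by double-and-add."""
    result = INF
    while d:
        if d & 1:
            result = point_add(result, point)
        point = point_add(point, point)
        d >>= 1
    return result

d_A, d_B = 3, 7                                        # private keys of Alice and Bob
Q_A, Q_B = scalar_mult(d_A, G), scalar_mult(d_B, G)    # public keys
shared_A = scalar_mult(d_A, Q_B)
shared_B = scalar_mult(d_B, Q_A)
assert shared_A == shared_B    # both derive the same point; its x-coordinate is the secret
```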
The only information about her key that Alice initially exposes is her public key. So, no party except Alice can determine Alice's private key (Alice of course knows it by having selected it), unless that party can solve the elliptic curve discrete logarithm problem. Bob's private key is similarly secure. No party other than Alice or Bob can compute the shared secret, unless that party can solve the elliptic curve Diffie–Hellman problem.
The public keys are either static (and trusted, say via a certificate) or ephemeral (also known as ECDHE, where the final 'E' stands for "ephemeral"). Ephemeral keys are temporary and not necessarily authenticated, so if authentication is desired, authenticity assurances must be obtained by other means. Authentication is necessary to avoid man-in-the-middle attacks. If one of either Alice's or Bob's public keys is static, then man-in-the-middle attacks are thwarted. Static public keys provide neither forward secrecy nor key-compromise impersonation resilience, among other advanced security properties. Holders of static private keys should validate the other public key, and should apply a secure key derivation function to the raw Diffie–Hellman shared secret to avoid leaking information about the static private key. For schemes with other security properties, see MQV.
If Alice maliciously chooses invalid curve points for her key and Bob does not validate that Alice's points are part of the selected group, she can collect enough residues of Bob's key to derive his private key. Several TLS libraries were found to be vulnerable to this attack.[4]
The shared secret is uniformly distributed on a subset of {\displaystyle [0,p)} of size {\displaystyle (n+1)/2}. For this reason, the secret should not be used directly as a symmetric key, but it can be used as entropy for a key derivation function.
Let {\displaystyle A,B\in F_{p}} be such that {\displaystyle B(A^{2}-4)\neq 0}. The Montgomery form elliptic curve {\displaystyle E_{M,A,B}} is the set of all {\displaystyle (x,y)\in F_{p}\times F_{p}} satisfying the equation {\displaystyle By^{2}=x(x^{2}+Ax+1)} along with the point at infinity denoted as {\displaystyle \infty }. This is called the affine form of the curve. The set of all {\displaystyle F_{p}}-rational points of {\displaystyle E_{M,A,B}}, denoted as {\displaystyle E_{M,A,B}(F_{p})}, is the set of all {\displaystyle (x,y)\in F_{p}\times F_{p}} satisfying {\displaystyle By^{2}=x(x^{2}+Ax+1)} along with {\displaystyle \infty }. Under a suitably defined addition operation, {\displaystyle E_{M,A,B}(F_{p})} is a group with {\displaystyle \infty } as the identity element. It is known that the order of this group is a multiple of 4. In fact, it is usually possible to obtain {\displaystyle A} and {\displaystyle B} such that the order of {\displaystyle E_{M,A,B}} is {\displaystyle 4q} for a prime {\displaystyle q}. More extensive discussions of Montgomery curves and their arithmetic can be found in the literature.[5][6][7]
For computational efficiency, it is preferable to work with projective coordinates. The projective form of the Montgomery curve {\displaystyle E_{M,A,B}} is {\displaystyle BY^{2}Z=X(X^{2}+AXZ+Z^{2})}. For a point {\displaystyle P=[X:Y:Z]} on {\displaystyle E_{M,A,B}}, the {\displaystyle x}-coordinate map {\displaystyle x} is the following:[7] {\displaystyle x(P)=[X:Z]} if {\displaystyle Z\neq 0}, and {\displaystyle x(P)=[1:0]} if {\displaystyle P=[0:1:0]}. Bernstein[8][9] introduced the map {\displaystyle x_{0}} as follows: {\displaystyle x_{0}(X:Z)=XZ^{p-2}}, which is defined for all values of {\displaystyle X} and {\displaystyle Z} in {\displaystyle F_{p}}. Following Miller,[10] Montgomery[5] and Bernstein,[9] the Diffie–Hellman key agreement can be carried out on a Montgomery curve as follows. Let {\displaystyle Q} be a generator of a prime order subgroup of {\displaystyle E_{M,A,B}(F_{p})}. Alice chooses a secret key {\displaystyle s} and has public key {\displaystyle x_{0}(sQ)};
Bob chooses a secret key {\displaystyle t} and has public key {\displaystyle x_{0}(tQ)}. The shared secret key of Alice and Bob is {\displaystyle x_{0}(stQ)}. Using classical computers, the best known method of obtaining {\displaystyle x_{0}(stQ)} from {\displaystyle Q}, {\displaystyle x_{0}(sQ)} and {\displaystyle x_{0}(tQ)} requires about {\displaystyle O(p^{1/2})} time using Pollard's rho algorithm.[11]
The most famous example of a Montgomery curve is Curve25519, which was introduced by Bernstein.[9] For Curve25519, {\displaystyle p=2^{255}-19}, {\displaystyle A=486662} and {\displaystyle B=1}.
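In practice one would use an existing X25519 implementation rather than writing the curve arithmetic by hand. A minimal sketch, assuming the third-party pyca/cryptography package is installed:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

alice_priv = X25519PrivateKey.generate()   # Alice's secret scalar s
bob_priv = X25519PrivateKey.generate()     # Bob's secret scalar t

alice_raw = alice_priv.exchange(bob_priv.public_key())   # x-coordinate-only shared secret
bob_raw = bob_priv.exchange(alice_priv.public_key())
assert alice_raw == bob_raw

# Derive a symmetric key from the raw shared secret, as standardized protocols do.
key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None, info=b"example").derive(alice_raw)
```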
The other Montgomery curve which is part of TLS 1.3 is Curve448, which was introduced by Hamburg.[12] For Curve448, {\displaystyle p=2^{448}-2^{224}-1}, {\displaystyle A=156326} and {\displaystyle B=1}. A pair of Montgomery curves named M[4698] and M[4058], competitive with Curve25519 and Curve448 respectively, have been proposed.[13] For M[4698], {\displaystyle p=2^{251}-9,A=4698,B=1} and for M[4058], {\displaystyle p=2^{444}-17,A=4058,B=1}. At the 256-bit security level, three Montgomery curves named M[996558], M[952902] and M[1504058] have been proposed.[14] For M[996558], {\displaystyle p=2^{506}-45,A=996558,B=1}, for M[952902], {\displaystyle p=2^{510}-75,A=952902,B=1} and for M[1504058], {\displaystyle p=2^{521}-1,A=1504058,B=1} respectively. Apart from these, other proposals of Montgomery curves can be found in the literature.[15]
|
https://en.wikipedia.org/wiki/Elliptic_Curve_Diffie–Hellman
|
In mathematics, integer factorization is the decomposition of a positive integer into a product of integers. Every positive integer greater than 1 is either the product of two or more integer factors greater than 1, in which case it is a composite number, or it is not, in which case it is a prime number. For example, 15 is a composite number because 15 = 3 · 5, but 7 is a prime number because it cannot be decomposed in this way. If one of the factors is composite, it can in turn be written as a product of smaller factors, for example 60 = 3 · 20 = 3 · (5 · 4). Continuing this process until every factor is prime is called prime factorization; the result is always unique up to the order of the factors by the prime factorization theorem.
To factorize a small integer n using mental or pen-and-paper arithmetic, the simplest method is trial division: checking if the number is divisible by prime numbers 2, 3, 5, and so on, up to the square root of n. For larger numbers, especially when using a computer, various more sophisticated factorization algorithms are more efficient. A prime factorization algorithm typically involves testing whether each factor is prime each time a factor is found.
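A minimal Python sketch of trial division, dividing out each factor as it is found:

```python
def trial_division(n):
    """Prime factorization of n by trial division up to sqrt(n)."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:              # whatever remains is itself prime
        factors.append(n)
    return factors

print(trial_division(60))  # [2, 2, 3, 5]
print(trial_division(7))   # [7]
```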
When the numbers are sufficiently large, no efficient non-quantum integer factorization algorithm is known. However, it has not been proven that such an algorithm does not exist. The presumed difficulty of this problem is important for the algorithms used in cryptography such as RSA public-key encryption and the RSA digital signature.[1] Many areas of mathematics and computer science have been brought to bear on this problem, including elliptic curves, algebraic number theory, and quantum computing.
Not all numbers of a given length are equally hard to factor. The hardest instances of these problems (for currently known techniques) are semiprimes, the product of two prime numbers. When they are both large, for instance more than two thousand bits long, randomly chosen, and about the same size (but not too close, for example, to avoid efficient factorization by Fermat's factorization method), even the fastest prime factorization algorithms on the fastest classical computers can take enough time to make the search impractical; that is, as the number of digits of the integer being factored increases, the number of operations required to perform the factorization on any classical computer increases drastically.
Many cryptographic protocols are based on the presumed difficulty of factoring large composite integers or a related problem – for example, the RSA problem. An algorithm that efficiently factors an arbitrary integer would render RSA-based public-key cryptography insecure.
By the fundamental theorem of arithmetic, every positive integer has a unique prime factorization. (By convention, 1 is the empty product.) Testing whether the integer is prime can be done in polynomial time, for example, by the AKS primality test. If composite, however, the polynomial time tests give no insight into how to obtain the factors.
Given a general algorithm for integer factorization, any integer can be factored into its constituent prime factors by repeated application of this algorithm. The situation is more complicated with special-purpose factorization algorithms, whose benefits may not be realized as well or even at all with the factors produced during decomposition. For example, if n = 171 × p × q where p < q are very large primes, trial division will quickly produce the factors 3 and 19 but will take p divisions to find the next factor. As a contrasting example, if n is the product of the primes 13729, 1372933, and 18848997161, where 13729 × 1372933 = 18848997157, Fermat's factorization method will begin with ⌈√n⌉ = 18848997159, which immediately yields b = √(a² − n) = √4 = 2 and hence the factors a − b = 18848997157 and a + b = 18848997161. While these are easily recognized as composite and prime respectively, Fermat's method will take much longer to factor the composite number because the starting value of ⌈√18848997157⌉ = 137292 for a is a factor of 10 from 1372933.
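A short Python sketch of Fermat's factorization method makes the contrast concrete: applied to the full product above, it succeeds on the very first iteration, exactly as described.

```python
from math import isqrt

def fermat_factor(n):
    """Fermat's method for odd n: find a such that a^2 - n is a perfect square b^2,
    so that n = (a - b)(a + b). Fast when n has two factors close together."""
    a = isqrt(n)
    if a * a < n:
        a += 1                      # a = ceil(sqrt(n))
    while True:
        b2 = a * a - n
        b = isqrt(b2)
        if b * b == b2:
            return a - b, a + b
        a += 1

n = 13729 * 1372933 * 18848997161
print(fermat_factor(n))   # (18848997157, 18848997161); the first factor is itself composite
```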
Among the b-bit numbers, the most difficult to factor in practice using existing algorithms are those semiprimes whose factors are of similar size. For this reason, these are the integers used in cryptographic applications.
In 2019, a 240-digit (795-bit) number (RSA-240) was factored by a team of researchers including Paul Zimmermann, utilizing approximately 900 core-years of computing power.[2] These researchers estimated that a 1024-bit RSA modulus would take about 500 times as long.[3]
The largest such semiprime yet factored was RSA-250, an 829-bit number with 250 decimal digits, in February 2020. The total computation time was roughly 2700 core-years of computing using Intel Xeon Gold 6130 at 2.1 GHz. Like all recent factorization records, this factorization was completed with a highly optimized implementation of the general number field sieve run on hundreds of machines.
No algorithm has been published that can factor all integers in polynomial time, that is, that can factor a b-bit number n in time O(b^k) for some constant k. Neither the existence nor non-existence of such algorithms has been proved, but it is generally suspected that they do not exist.[4][5]
There are published algorithms that are faster than O((1 + ε)^b) for all positive ε, that is, sub-exponential. As of 2022, the algorithm with the best theoretical asymptotic running time is the general number field sieve (GNFS), first published in 1993,[6] running on a b-bit number n in time
{\displaystyle \exp \left(\left({\sqrt[{3}]{\tfrac {64}{9}}}+o(1)\right)(\ln n)^{1/3}(\ln \ln n)^{2/3}\right).}
For current computers, GNFS is the best published algorithm for large n (more than about 400 bits). For a quantum computer, however, Peter Shor discovered an algorithm in 1994 that solves it in polynomial time. Shor's algorithm takes only O(b^3) time and O(b) space on b-bit number inputs. In 2001, Shor's algorithm was implemented for the first time, by using NMR techniques on molecules that provide seven qubits.[7]
In order to talk aboutcomplexity classessuch as P, NP, and co-NP, the problem has to be stated as adecision problem.
Decision problem (Integer factorization): Given natural numbers n and k, does n have a factor smaller than k besides 1?
It is known to be in bothNPandco-NP, meaning that both "yes" and "no" answers can be verified in polynomial time. An answer of "yes" can be certified by exhibiting a factorizationn=d(n/d)withd≤k. An answer of "no" can be certified by exhibiting the factorization ofninto distinct primes, all larger thank; one can verify their primality using theAKS primality test, and then multiply them to obtainn. Thefundamental theorem of arithmeticguarantees that there is only one possible string of increasing primes that will be accepted, which shows that the problem is in bothUPand co-UP.[8]It is known to be inBQPbecause of Shor's algorithm.
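A brute-force sketch of the decision problem and of "yes"-certificate checking may help make the distinction concrete. The function names below are invented for this example, the strict inequality d < k follows the problem statement above, and the exhaustive search is only feasible for tiny inputs, whereas the certificate check is cheap for numbers of any size.

```python
def has_factor_below(n, k):
    """Decision problem: does n have a divisor d with 1 < d < k?  (Brute force, tiny inputs only.)"""
    return any(n % d == 0 for d in range(2, min(k, n)))

def verify_yes_certificate(n, k, d):
    """A 'yes' answer is certified by a single divisor d of n with 1 < d < k --
    checkable in polynomial time regardless of how hard it was to find d."""
    return 1 < d < k and n % d == 0

n = 13729 * 1372933                       # composite from the running example
print(has_factor_below(n, 20000))         # True (13729 is a divisor below 20000)
print(verify_yes_certificate(n, 20000, 13729))  # True
```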
The problem is suspected to be outside all three of the complexity classes P, NP-complete,[9]andco-NP-complete.
It is therefore a candidate for theNP-intermediatecomplexity class.
In contrast, the decision problem "Isna composite number?" (or equivalently: "Isna prime number?") appears to be much easier than the problem of specifying factors ofn. The composite/prime problem can be solved in polynomial time (in the numberbof digits ofn) with theAKS primality test. In addition, there are severalprobabilistic algorithmsthat can test primality very quickly in practice if one is willing to accept a vanishingly small possibility of error. The ease ofprimality testingis a crucial part of theRSAalgorithm, as it is necessary to find large prime numbers to start with.
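One widely used probabilistic test of the kind referred to here is the Miller–Rabin test. The following is a minimal Python sketch of it, written for clarity rather than performance; production code would normally rely on a vetted library implementation.

```python
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin: a probabilistic primality test.  A 'composite' answer is always
    correct; a 'prime' answer is wrong with probability at most 4**(-rounds)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness that n is composite
    return True

print(is_probable_prime(18848997161))  # True  (a prime from the example above)
print(is_probable_prime(18848997157))  # False (13729 * 1372933)
```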
A special-purpose factoring algorithm's running time depends on the properties of the number to be factored or on one of its unknown factors: size, special form, etc. The parameters which determine the running time vary among algorithms.
An important subclass of special-purpose factoring algorithms is the Category 1 or First Category algorithms, whose running time depends on the size of the smallest prime factor. Given an integer of unknown form, these methods are usually applied before general-purpose methods to remove small factors.[10] For example, naive trial division is a Category 1 algorithm.
A general-purpose factoring algorithm, also known as aCategory 2,Second Category, orKraitchikfamilyalgorithm,[10]has a running time which depends solely on the size of the integer to be factored. This is the type of algorithm used to factorRSA numbers. Most general-purpose factoring algorithms are based on thecongruence of squaresmethod.
In number theory, there are many integer factoring algorithms that heuristically have expected running time L_n[1/2, 1 + o(1)] = e^((1 + o(1))·√(ln n · ln ln n)) in little-o and L-notation.
Some examples of those algorithms are theelliptic curve methodand thequadratic sieve.
Another such algorithm is theclass group relations methodproposed by Schnorr,[11]Seysen,[12]and Lenstra,[13]which they proved only assuming the unprovedgeneralized Riemann hypothesis.
The Schnorr–Seysen–Lenstra probabilistic algorithm has been rigorously proven by Lenstra and Pomerance[14]to have expected running timeLn[1/2, 1+o(1)]by replacing the GRH assumption with the use of multipliers.
The algorithm uses the class group of positive binary quadratic forms of discriminant Δ, denoted by GΔ. GΔ is the set of triples of integers (a, b, c) in which those integers are relatively prime.
Let n be the integer to be factored, where n is an odd positive integer greater than a certain constant. In this factoring algorithm the discriminant Δ is chosen as a multiple of n, Δ = −dn, where d is some positive multiplier. The algorithm expects that for one d there exist enough smooth forms in GΔ. Lenstra and Pomerance show that the choice of d can be restricted to a small set to guarantee the smoothness result.
Denote by PΔ the set of all primes q with Kronecker symbol (Δ/q) = 1. By constructing a set of generators of GΔ and prime forms fq of GΔ with q in PΔ, a sequence of relations between the set of generators and the fq is produced.
The size ofqcan be bounded byc0(log|Δ|)2for some constantc0.
The relation that will be used is a relation between the product of powers that is equal to theneutral elementofGΔ. These relations will be used to construct a so-called ambiguous form ofGΔ, which is an element ofGΔof order dividing 2. By calculating the corresponding factorization ofΔand by taking agcd, this ambiguous form provides the complete prime factorization ofn. This algorithm has these main steps:
Letnbe the number to be factored.
To obtain an algorithm for factoring any positive integer, it is necessary to add a few steps to this algorithm, such as trial division and the Jacobi sum test.
The algorithm as stated is aprobabilistic algorithmas it makes random choices. Its expected running time is at mostLn[1/2, 1+o(1)].[14]
|
https://en.wikipedia.org/wiki/Factoring_problem
|
Bluetoothis a short-rangewirelesstechnology standard that is used for exchanging data between fixed and mobile devices over short distances and buildingpersonal area networks(PANs). In the most widely used mode, transmission power is limited to 2.5milliwatts, giving it a very short range of up to 10 metres (33 ft). It employsUHFradio wavesin theISM bands, from 2.402GHzto 2.48GHz.[3]It is mainly used as an alternative to wired connections to exchange files between nearby portable devices and connectcell phonesand music players withwireless headphones,wireless speakers,HIFIsystems,car audioand wireless transmission betweenTVsandsoundbars.
Bluetooth is managed by theBluetooth Special Interest Group(SIG), which has more than 35,000 member companies in the areas of telecommunication, computing, networking, and consumer electronics. TheIEEEstandardized Bluetooth asIEEE 802.15.1but no longer maintains the standard. The Bluetooth SIG oversees the development of the specification, manages the qualification program, and protects the trademarks.[4]A manufacturer must meetBluetooth SIG standardsto market it as a Bluetooth device.[5]A network ofpatentsapplies to the technology, which is licensed to individual qualifying devices. As of 2021[update], 4.7 billion Bluetoothintegrated circuitchips are shipped annually.[6]Bluetooth was first demonstrated in space in 2024, an early test envisioned to enhanceIoTcapabilities.[7]
The name "Bluetooth" was proposed in 1997 by Jim Kardach ofIntel, one of the founders of the Bluetooth SIG. The name was inspired by a conversation with Sven Mattisson who related Scandinavian history through tales fromFrans G. Bengtsson'sThe Long Ships, a historical novel about Vikings and the 10th-century Danish kingHarald Bluetooth. Upon discovering a picture of therunestone of Harald Bluetooth[8]in the bookA History of the VikingsbyGwyn Jones, Kardach proposed Bluetooth as the codename for the short-range wireless program which is now called Bluetooth.[9][10][11]
According to Bluetooth's official website,
Bluetooth was only intended as a placeholder until marketing could come up with something really cool.
Later, when it came time to select a serious name, Bluetooth was to be replaced with either RadioWire or PAN (Personal Area Networking). PAN was the front runner, but an exhaustive search discovered it already had tens of thousands of hits throughout the internet.
A full trademark search on RadioWire couldn't be completed in time for launch, making Bluetooth the only choice. The name caught on fast and before it could be changed, it spread throughout the industry, becoming synonymous with short-range wireless technology.[12]
Bluetooth is theAnglicisedversion of the ScandinavianBlåtand/Blåtann(or inOld Norseblátǫnn). It was theepithetof King Harald Bluetooth, who united the disparate
Danish tribes into a single kingdom; Kardach chose the name to imply that Bluetooth similarly unites communication protocols.[13]
The Bluetooth logois abind runemerging theYounger Futharkrunes(ᚼ,Hagall) and(ᛒ,Bjarkan), Harald's initials.[14][15]
The development of the "short-link" radio technology, later named Bluetooth, was initiated in 1989 by Nils Rydbeck, CTO atEricsson MobileinLund, Sweden. The purpose was to develop wireless headsets, according to two inventions byJohan Ullman,SE 8902098-6, issued 12 June 1989andSE 9202239, issued 24 July 1992. Nils Rydbeck taskedTord Wingrenwith specifying and DutchmanJaap Haartsenand Sven Mattisson with developing.[16]Both were working for Ericsson in Lund.[17]Principal design and development began in 1994 and by 1997 the team had a workable solution.[18]From 1997 Örjan Johansson became the project leader and propelled the technology and standardization.[19][20][21][22]
In 1997, Adalio Sanchez, then head ofIBMThinkPadproduct R&D, approached Nils Rydbeck about collaborating on integrating amobile phoneinto a ThinkPad notebook. The two assigned engineers fromEricssonandIBMstudied the idea. The conclusion was that power consumption on cellphone technology at that time was too high to allow viable integration into a notebook and still achieve adequate battery life. Instead, the two companies agreed to integrate Ericsson's short-link technology on both a ThinkPad notebook and an Ericsson phone to accomplish the goal.
Since neither IBM ThinkPad notebooks nor Ericsson phones were the market share leaders in their respective markets at that time, Adalio Sanchez and Nils Rydbeck agreed to make the short-link technology an open industry standard to permit each player maximum market access. Ericsson contributed the short-link radio technology, and IBM contributed patents around the logical layer. Adalio Sanchez of IBM then recruited Stephen Nachtsheim of Intel to join and then Intel also recruitedToshibaandNokia. In May 1998, the Bluetooth SIG was launched with IBM and Ericsson as the founding signatories and a total of five members: Ericsson, Intel, Nokia, Toshiba, and IBM.
The first Bluetooth device was revealed in 1999. It was a hands-free mobile headset that earned the "Best of show Technology Award" atCOMDEX. The first Bluetooth mobile phone was the unreleased prototype Ericsson T36, though it was the revised Ericsson modelT39that actually made it to store shelves in June 2001. However Ericsson released the R520m in Quarter 1 of 2001,[23]making the R520m the first ever commercially available Bluetooth phone. In parallel, IBM introduced the IBM ThinkPad A30 in October 2001 which was the first notebook with integrated Bluetooth.
Bluetooth's early incorporation into consumer electronics products continued at Vosi Technologies in Costa Mesa, California, initially overseen by founding members Bejan Amini and Tom Davidson. Vosi Technologies had been created by real estate developer Ivano Stegmenga, with United States Patent 608507, for communication between a cellular phone and a vehicle's audio system. At the time, Sony/Ericsson had only a minor market share in the cellular phone market, which was dominated in the US by Nokia and Motorola. Due to ongoing negotiations for an intended licensing agreement with Motorola beginning in the late 1990s, Vosi could not publicly disclose the intention, integration, and initial development of other enabled devices which were to be the first "Smart Home" internet connected devices.
Vosi needed a means for the system to communicate without a wired connection from the vehicle to the other devices in the network. Bluetooth was chosen, sinceWi-Fiwas not yet readily available or supported in the public market. Vosi had begun to develop the Vosi Cello integrated vehicular system and some other internet connected devices, one of which was intended to be a table-top device named the Vosi Symphony, networked with Bluetooth. Through the negotiations withMotorola, Vosi introduced and disclosed its intent to integrate Bluetooth in its devices. In the early 2000s a legal battle[24]ensued between Vosi and Motorola, which indefinitely suspended release of the devices. Later, Motorola implemented it in their devices, which initiated the significant propagation of Bluetooth in the public market due to its large market share at the time.
In 2012, Jaap Haartsen was nominated by theEuropean Patent Officefor theEuropean Inventor Award.[18]
Bluetooth operates at frequencies between 2.402 and 2.480GHz, or 2.400 and 2.4835GHz, includingguard bands2MHz wide at the bottom end and 3.5MHz wide at the top.[25]This is in the globally unlicensed (but not unregulated) industrial, scientific and medical (ISM) 2.4GHz short-range radio frequency band. Bluetooth uses a radio technology calledfrequency-hopping spread spectrum. Bluetooth divides transmitted data into packets, and transmits each packet on one of 79 designated Bluetooth channels. Each channel has a bandwidth of 1MHz. It usually performs 1600hops per second, withadaptive frequency-hopping(AFH) enabled.[25]Bluetooth Low Energyuses 2MHz spacing, which accommodates 40 channels.[26]
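The channel plan implied by these figures can be written down directly; the short sketch below simply enumerates the 79 one-megahertz BR/EDR channels starting at 2402 MHz and the 40 two-megahertz Bluetooth Low Energy channels, assuming the usual f = 2402 + k MHz and f = 2402 + 2k MHz centre-frequency formulas.

```python
# BR/EDR: 79 channels of 1 MHz, f_k = 2402 + k MHz for k = 0..78
bredr_channels = [2402 + k for k in range(79)]

# Bluetooth Low Energy: 40 channels of 2 MHz, f_k = 2402 + 2k MHz for k = 0..39
ble_channels = [2402 + 2 * k for k in range(40)]

print(bredr_channels[0], bredr_channels[-1])   # 2402 2480
print(ble_channels[0], ble_channels[-1])       # 2402 2480
print(len(bredr_channels), len(ble_channels))  # 79 40
```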
Originally, Gaussian frequency-shift keying (GFSK) modulation was the only modulation scheme available. Since the introduction of Bluetooth 2.0+EDR, π/4-DQPSK (differential quadrature phase-shift keying) and 8-DPSK modulation may also be used between compatible devices. Devices functioning with GFSK are said to be operating in basic rate (BR) mode, where an instantaneous bit rate of 1 Mbit/s is possible. The term Enhanced Data Rate (EDR) is used to describe the π/4-DQPSK (EDR2) and 8-DPSK (EDR3) schemes, transferring 2 and 3 Mbit/s respectively.
In 2019, Apple published an extension called HDR which supports data rates of 4 (HDR4) and 8 (HDR8) Mbit/s using π/4-DQPSKmodulation on 4 MHz channels with forward error correction (FEC).[27]
Bluetooth is apacket-based protocolwith amaster/slave architecture. One master may communicate with up to seven slaves in apiconet. All devices within a given piconet use the clock provided by the master as the base for packet exchange. The master clock ticks with a period of 312.5μs, two clock ticks then make up a slot of 625μs, and two slots make up a slot pair of 1250μs. In the simple case of single-slot packets, the master transmits in even slots and receives in odd slots. The slave, conversely, receives in even slots and transmits in odd slots. Packets may be 1, 3, or 5 slots long, but in all cases, the master's transmission begins in even slots and the slave's in odd slots.
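The slot arithmetic described above is simple enough to sketch directly. The helper functions below are illustrative only and assume the single-slot case, where the master transmits in even-numbered slots and the slave in odd-numbered slots.

```python
CLOCK_TICK_US = 312.5          # Bluetooth master clock period
SLOT_US = 2 * CLOCK_TICK_US    # 625 us slot
SLOT_PAIR_US = 2 * SLOT_US     # 1250 us slot pair

def who_transmits(slot_number):
    """For single-slot packets: the master transmits in even slots, the slave in odd slots."""
    return "master" if slot_number % 2 == 0 else "slave"

def slot_at(time_us):
    """Which slot a given time (in microseconds since clock zero) falls into."""
    return int(time_us // SLOT_US)

for t in (0, 625, 1250, 1875):
    s = slot_at(t)
    print(f"t={t:5d} us -> slot {s} ({who_transmits(s)})")
```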
The above excludes Bluetooth Low Energy, introduced in the 4.0 specification,[28]whichuses the same spectrum but somewhat differently.
A master BR/EDR Bluetooth device can communicate with a maximum of seven devices in a piconet (an ad hoc computer network using Bluetooth technology), though not all devices reach this maximum. The devices can switch roles, by agreement, and the slave can become the master (for example, a headset initiating a connection to a phone necessarily begins as master—as an initiator of the connection—but may subsequently operate as the slave).
The Bluetooth Core Specification provides for the connection of two or more piconets to form ascatternet, in which certain devices simultaneously play the master/leader role in one piconet and the slave role in another.
At any given time, data can be transferred between the master and one other device (except for the little-used broadcast mode). The master chooses which slave device to address; typically, it switches rapidly from one device to another in around-robinfashion. Since it is the master that chooses which slave to address, whereas a slave is (in theory) supposed to listen in each receive slot, being a master is a lighter burden than being a slave. Being a master of seven slaves is possible; being a slave of more than one master is possible. The specification is vague as to required behavior in scatternets.[29]
Bluetooth is a standard wire-replacement communications protocol primarily designed for low power consumption, with a short range based on low-costtransceivermicrochipsin each device.[30]Because the devices use a radio (broadcast) communications system, they do not have to be in visual line of sight of each other; however, aquasi opticalwireless path must be viable.[31]
Historically, the Bluetooth range was defined by the radio class, with a lower class (and higher output power) having larger range.[2]The actual range of a given link depends on several qualities of both communicating devices and theair and obstacles in between. The primary attributes affecting range are the data rate, protocol (Bluetooth Classic or Bluetooth Low Energy), transmission power, and receiver sensitivity, and the relative orientations and gains of both antennas.[32]
The effective range varies depending on propagation conditions, material coverage, production sample variations, antenna configurations and battery conditions. Most Bluetooth applications are for indoor conditions, where attenuation of walls and signal fading due to signal reflections make the range far lower than specified line-of-sight ranges of the Bluetooth products.
Most Bluetooth applications are battery-powered Class 2 devices, with little difference in range whether the other end of the link is a Class 1 or Class 2 device as the lower-powered device tends to set the range limit. In some cases the effective range of the data link can be extended when a Class 2 device is connecting to a Class 1 transceiver with both higher sensitivity and transmission power than a typical Class 2 device.[33]In general, however, Class 1 devices have sensitivities similar to those of Class 2 devices. Connecting two Class 1 devices with both high sensitivity and high power can allow ranges far in excess of the typical 100 m, depending on the throughput required by the application. Some such devices allow open field ranges of up to 1 km and beyond between two similar devices without exceeding legal emission limits.[34][35][36]
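As a very rough idea of why such open-field ranges are possible, the sketch below computes an idealized free-space link budget at 2.4 GHz. The −90 dBm receiver sensitivity and the 0 dBi antenna gains are assumptions made for this example, and the model ignores walls, fading, body loss and regulatory limits, so real indoor ranges are far shorter, as noted above.

```python
import math

def fspl_db(distance_m, freq_mhz=2440.0):
    """Free-space path loss in dB (idealized, line of sight, 0 dBi antennas)."""
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_mhz) - 27.55

def max_range_m(tx_power_dbm, rx_sensitivity_dbm, freq_mhz=2440.0):
    """Largest distance at which the received power still meets the sensitivity threshold."""
    budget_db = tx_power_dbm - rx_sensitivity_dbm
    return 10 ** ((budget_db + 27.55 - 20 * math.log10(freq_mhz)) / 20)

# Illustrative numbers only: Class 2 (+4 dBm) and Class 1 (+20 dBm) transmitters
# against an assumed -90 dBm receiver sensitivity.
for name, tx in (("Class 2", 4), ("Class 1", 20)):
    print(f"{name}: ~{max_range_m(tx, -90):.0f} m in ideal free space")
```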
To use Bluetooth wireless technology, a device must be able to interpret certain Bluetooth profiles.
Profiles are definitions of possible applications and specify general behaviors that Bluetooth-enabled devices use to communicate with other Bluetooth devices. These profiles include settings to parameterize and to control the communication from the start. Adherence to profiles saves the time for transmitting the parameters anew before the bi-directional link becomes effective. There are a wide range of Bluetooth profiles that describe many different types of applications or use cases for devices.[37]
Bluetooth exists in numerous products such as telephones,speakers, tablets, media players, robotics systems, laptops, and game console equipment as well as some high definitionheadsets,modems,hearing aids[53]and even watches.[54]Bluetooth is useful when transferring information between two or more devices that are near each other in low-bandwidth situations. Bluetooth is commonly used to transfer sound data with telephones (i.e., with a Bluetooth headset) or byte data with hand-held computers (transferring files).
Bluetooth protocols simplify the discovery and setup of services between devices.[55]Bluetooth devices can advertise all of the services they provide.[56]This makes using services easier, because more of the security,network addressand permission configuration can be automated than with many other network types.[55]
A personal computer that does not have embedded Bluetooth can use a Bluetooth adapter that enables the PC to communicate with Bluetooth devices. While somedesktop computersand most recent laptops come with a built-in Bluetooth radio, others require an external adapter, typically in the form of a small USB "dongle".
Unlike its predecessor,IrDA, which requires a separate adapter for each device, Bluetooth lets multiple devices communicate with a computer over a single adapter.[57]
ForMicrosoftplatforms,Windows XP Service Pack 2and SP3 releases work natively with Bluetooth v1.1, v2.0 and v2.0+EDR.[58]Previous versions required users to install their Bluetooth adapter's own drivers, which were not directly supported by Microsoft.[59]Microsoft's own Bluetooth dongles (packaged with their Bluetooth computer devices) have no external drivers and thus require at least Windows XP Service Pack 2. Windows Vista RTM/SP1 with the Feature Pack for Wireless or Windows Vista SP2 work with Bluetooth v2.1+EDR.[58]Windows 7 works with Bluetooth v2.1+EDR and Extended Inquiry Response (EIR).[58]The Windows XP and Windows Vista/Windows 7 Bluetooth stacks support the following Bluetooth profiles natively: PAN, SPP,DUN, HID, HCRP. The Windows XP stack can be replaced by a third party stack that supports more profiles or newer Bluetooth versions. The Windows Vista/Windows 7 Bluetooth stack supports vendor-supplied additional profiles without requiring that the Microsoft stack be replaced.[58]Windows 8 and later support Bluetooth Low Energy (BLE). It is generally recommended to install the latest vendor driver and its associated stack to be able to use the Bluetooth device at its fullest extent.
Appleproducts have worked with Bluetooth sinceMac OSX v10.2, which was released in 2002.[60]
Linuxhas two popularBluetooth stacks, BlueZ and Fluoride. The BlueZ stack is included with most Linux kernels and was originally developed byQualcomm.[61]Fluoride, earlier known as Bluedroid is included in Android OS and was originally developed byBroadcom.[62]There is also Affix stack, developed byNokia. It was once popular, but has not been updated since 2005.[63]
FreeBSDhas included Bluetooth since its v5.0 release, implemented throughnetgraph.[64][65]
NetBSDhas included Bluetooth since its v4.0 release.[66][67]Its Bluetooth stack was ported toOpenBSDas well, however OpenBSD later removed it as unmaintained.[68][69]
DragonFly BSDhas had NetBSD's Bluetooth implementation since 1.11 (2008).[70][71]Anetgraph-based implementation fromFreeBSDhas also been available in the tree, possibly disabled until 2014-11-15, and may require more work.[72][73]
The specifications were formalized by theBluetooth Special Interest Group(SIG) and formally announced on 20 May 1998.[74]In 2014 it had a membership of over 30,000 companies worldwide.[75]It was established byEricsson,IBM,Intel,NokiaandToshiba, and later joined by many other companies.
All versions of the Bluetooth standards arebackward-compatiblewith all earlier versions.[76]
The Bluetooth Core Specification Working Group (CSWG) produces mainly four kinds of specifications:
Major enhancements include:
This version of the Bluetooth Core Specification was released before 2005. The main difference is the introduction of an Enhanced Data Rate (EDR) for fasterdata transfer. The data rate of EDR is 3Mbit/s, although the maximum data transfer rate (allowing for inter-packet time and acknowledgements) is 2.1Mbit/s.[79]EDR uses a combination ofGFSKandphase-shift keyingmodulation (PSK) with two variants, π/4-DQPSKand 8-DPSK.[81]EDR can provide a lower power consumption through a reducedduty cycle.
The specification is published asBluetooth v2.0 + EDR, which implies that EDR is an optional feature. Aside from EDR, the v2.0 specification contains other minor improvements, and products may claim compliance to "Bluetooth v2.0" without supporting the higher data rate. At least one commercial device states "Bluetooth v2.0 without EDR" on its data sheet.[82]
Bluetooth Core Specification version 2.1 + EDR was adopted by the Bluetooth SIG on 26 July 2007.[81]
The headline feature of v2.1 issecure simple pairing(SSP): this improves the pairing experience for Bluetooth devices, while increasing the use and strength of security.[83]
Version 2.1 allows various other improvements, includingextended inquiry response(EIR), which provides more information during the inquiry procedure to allow better filtering of devices before connection; and sniff subrating, which reduces the power consumption in low-power mode.
Version 3.0 + HS of the Bluetooth Core Specification[81]was adopted by the Bluetooth SIG on 21 April 2009. Bluetooth v3.0 + HS provides theoretical data transfer speeds of up to 24 Mbit/s, though not over the Bluetooth link itself. Instead, the Bluetooth link is used for negotiation and establishment, and the high data rate traffic is carried over a colocated802.11link.
The main new feature isAMP(Alternative MAC/PHY), the addition of802.11as a high-speed transport. The high-speed part of the specification is not mandatory, and hence only devices that display the "+HS" logo actually support Bluetooth over 802.11 high-speed data transfer. A Bluetooth v3.0 device without the "+HS" suffix is only required to support features introduced in Core Specification version 3.0[84]or earlier Core Specification Addendum 1.[85]
The high-speed (AMP) feature of Bluetooth v3.0 was originally intended forUWB, but the WiMedia Alliance, the body responsible for the flavor of UWB intended for Bluetooth, announced in March 2009 that it was disbanding, and ultimately UWB was omitted from the Core v3.0 specification.[86]
On 16 March 2009, theWiMedia Allianceannounced it was entering into technology transfer agreements for the WiMediaUltra-wideband(UWB) specifications. WiMedia has transferred all current and future specifications, including work on future high-speed and power-optimized implementations, to the Bluetooth Special Interest Group (SIG),Wireless USBPromoter Group and theUSB Implementers Forum. After successful completion of the technology transfer, marketing, and related administrative items, the WiMedia Alliance ceased operations.[87][88][89][90][91]
In October 2009, theBluetooth Special Interest Groupsuspended development of UWB as part of the alternative MAC/PHY, Bluetooth v3.0 + HS solution. A small, but significant, number of formerWiMediamembers had not and would not sign up to the necessary agreements for theIPtransfer. As of 2009, the Bluetooth SIG was in the process of evaluating other options for its longer-term roadmap.[92][93][94]
The Bluetooth SIG completed the Bluetooth Core Specification version 4.0 (called Bluetooth Smart), which was adopted on 30 June 2010. It includes Classic Bluetooth, Bluetooth high speed and Bluetooth Low Energy (BLE) protocols. Bluetooth high speed is based on Wi-Fi, and Classic Bluetooth consists of legacy Bluetooth protocols.
Bluetooth Low Energy, previously known as Wibree,[95] is a subset of Bluetooth v4.0 with an entirely new protocol stack for rapid build-up of simple links. As an alternative to the Bluetooth standard protocols that were introduced in Bluetooth v1.0 to v3.0, it is aimed at very low power applications powered by a coin cell. Chip designs allow for two types of implementation: dual-mode and single-mode, along with enhanced versions of earlier designs.[96] The provisional names Wibree and Bluetooth ULP (Ultra Low Power) were abandoned and the BLE name was used for a while. In late 2011, new logos "Bluetooth Smart Ready" for hosts and "Bluetooth Smart" for sensors were introduced as the general-public face of BLE.[97]
Compared toClassic Bluetooth, Bluetooth Low Energy is intended to provide considerably reduced power consumption and cost while maintaining asimilar communication range. In terms of lengthening the battery life of Bluetooth devices,BLErepresents a significant progression.
Cost-reduced single-mode chips, which enable highly integrated and compact devices, feature a lightweight Link Layer providing ultra-low power idle mode operation, simple device discovery, and reliable point-to-multipoint data transfer with advanced power-save and secure encrypted connections at the lowest possible cost.
General improvements in version 4.0 include the changes necessary to facilitate BLE modes, as well the Generic Attribute Profile (GATT) and Security Manager (SM) services withAESEncryption.
Core Specification Addendum 2 was unveiled in December 2011; it contains improvements to the audio Host Controller Interface and to the High Speed (802.11) Protocol Adaptation Layer.
Core Specification Addendum 3 revision 2 has an adoption date of 24 July 2012.
Core Specification Addendum 4 has an adoption date of 12 February 2013.
The Bluetooth SIG announced formal adoption of the Bluetooth v4.1 specification on 4 December 2013. This specification is an incremental software update to Bluetooth Specification v4.0, and not a hardware update. The update incorporates Bluetooth Core Specification Addenda (CSA 1, 2, 3 & 4) and adds new features that improve consumer usability. These include increased co-existence support for LTE and bulk data exchange rates, and they aid developer innovation by allowing devices to support multiple roles simultaneously.[106]
New features of this specification include:
Some features were already available in a Core Specification Addendum (CSA) before the release of v4.1.
Released on 2 December 2014,[108]it introduces features for theInternet of things.
The major areas of improvement are:
Older Bluetooth hardware may receive 4.2 features such as Data Packet Length Extension and improved privacy via firmware updates.[109][110]
The Bluetooth SIG released Bluetooth 5 on 6 December 2016.[111]Its new features are mainly focused on newInternet of Thingstechnology. Sony was the first to announce Bluetooth 5.0 support with itsXperia XZ Premiumin Feb 2017 during the Mobile World Congress 2017.[112]The SamsungGalaxy S8launched with Bluetooth 5 support in April 2017. In September 2017, theiPhone 8, 8 Plus andiPhone Xlaunched with Bluetooth 5 support as well.Applealso integrated Bluetooth 5 in its newHomePodoffering released on 9 February 2018.[113]Marketing drops the point number; so that it is just "Bluetooth 5" (unlike Bluetooth 4.0);[114]the change is for the sake of "Simplifying our marketing, communicating user benefits more effectively and making it easier to signal significant technology updates to the market."
Bluetooth 5 provides, forBLE, options that can double the data rate (2Mbit/s burst) at the expense of range, or provide up to four times the range at the expense of data rate. The increase in transmissions could be important for Internet of Things devices, where many nodes connect throughout a whole house. Bluetooth 5 increases capacity of connectionless services such as location-relevant navigation[115]of low-energy Bluetooth connections.[116][117][118]
The major areas of improvement are:
Features added in CSA5 – integrated in v5.0:
The following features were removed in this version of the specification:
The Bluetooth SIG presented Bluetooth 5.1 on 21 January 2019.[120]
The major areas of improvement are:
Features added in Core Specification Addendum (CSA) 6 – integrated in v5.1:
The following features were removed in this version of the specification:
On 31 December 2019, the Bluetooth SIG published the Bluetooth Core Specification version 5.2. The new specification adds new features:[121]
The Bluetooth SIG published the Bluetooth Core Specification version 5.3 on 13 July 2021. The feature enhancements of Bluetooth 5.3 are:[128]
The following features were removed in this version of the specification:
The Bluetooth SIG released the Bluetooth Core Specification version 5.4 on 7 February 2023. This new version adds the following features:[129]
The Bluetooth SIG released the Bluetooth Core Specification version 6.0 on 27 August 2024.[130]This version adds the following features:[131]
The Bluetooth SIG released the Bluetooth Core Specification version 6.1 on 7 May 2025.[132]
To extend the compatibility of Bluetooth devices, devices that adhere to the standard use an interface called the Host Controller Interface (HCI) between the host and the controller.
High-level protocols such as the SDP (Protocol used to find other Bluetooth devices within the communication range, also responsible for detecting the function of devices in range), RFCOMM (Protocol used to emulate serial port connections) and TCS (Telephony control protocol) interact with the baseband controller through the L2CAP (Logical Link Control and Adaptation Protocol). The L2CAP protocol is responsible for the segmentation and reassembly of the packets.
Logically, the hardware that makes up a Bluetooth device consists of two parts, which may or may not be physically separate: a radio device, responsible for modulating and transmitting the signal, and a digital controller. The digital controller is likely a CPU, one of whose functions is to run a Link Controller and to interface with the host device; some functions may be delegated to hardware. The Link Controller is responsible for baseband processing and the management of the ARQ and physical-layer FEC protocols. In addition, it handles the transfer functions (both asynchronous and synchronous), audio coding (e.g. the SBC codec) and data encryption. The CPU of the device is responsible for handling the Bluetooth-related instructions of the host device, in order to simplify its operation. To do this, the CPU runs software called the Link Manager, whose function is to communicate with other devices through the LMP protocol.
A Bluetooth device is ashort-rangewirelessdevice. Bluetooth devices arefabricatedonRF CMOSintegrated circuit(RF circuit) chips.[133][134]
Bluetooth is defined as a layer protocol architecture consisting of core protocols, cable replacement protocols, telephony control protocols, and adopted protocols.[135]Mandatory protocols for all Bluetooth stacks are LMP, L2CAP and SDP. In addition, devices that communicate with Bluetooth almost universally can use these protocols:HCIand RFCOMM.
The Link Manager (LM) is the system that manages establishing the connection between devices. It is responsible for the establishment, authentication and configuration of the link. The Link Manager locates other managers and communicates with them via the management protocol of the LMP link. To perform its function as a service provider, the LM uses the services included in the Link Controller (LC).
The Link Manager Protocol basically consists of several PDUs (Protocol Data Units) that are sent from one device to another. The following is a list of supported services:
The Host Controller Interface provides a command interface between the controller and the host.
TheLogical Link Control and Adaptation Protocol(L2CAP) is used to multiplex multiple logical connections between two devices using different higher level protocols.
It provides segmentation and reassembly of on-air packets.
InBasicmode, L2CAP provides packets with a payload configurable up to 64 kB, with 672 bytes as the defaultMTU, and 48 bytes as the minimum mandatory supported MTU.
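A toy sketch of segmentation and reassembly against the default MTU may clarify the mechanism; it omits the real L2CAP headers and framing and is not a protocol implementation.

```python
DEFAULT_MTU = 672   # default L2CAP MTU in Basic mode
MIN_MTU = 48        # minimum mandatory supported MTU

def segment(payload: bytes, mtu: int = DEFAULT_MTU):
    """Split an upper-layer payload into MTU-sized fragments (headers omitted for simplicity)."""
    return [payload[i:i + mtu] for i in range(0, len(payload), mtu)]

def reassemble(fragments):
    return b"".join(fragments)

data = bytes(range(256)) * 10                # 2560-byte payload
frags = segment(data)
print(len(frags), [len(f) for f in frags])   # 4 fragments: 672, 672, 672, 544
assert reassemble(frags) == data
```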
InRetransmission and Flow Controlmodes, L2CAP can be configured either for isochronous data or reliable data per channel by performing retransmissions and CRC checks.
Bluetooth Core Specification Addendum 1 adds two additional L2CAP modes to the core specification: Enhanced Retransmission Mode (ERTM) and Streaming Mode (SM). These modes effectively deprecate the original Retransmission and Flow Control modes.
Reliability in any of these modes is optionally and/or additionally guaranteed by the lower-layer Bluetooth BR/EDR air interface by configuring the number of retransmissions and the flush timeout (the time after which the radio flushes packets). In-order sequencing is guaranteed by the lower layer.
Only L2CAP channels configured in ERTM or SM may be operated over AMP logical links.
TheService Discovery Protocol(SDP) allows a device to discover services offered by other devices, and their associated parameters. For example, when you use a mobile phone with a Bluetooth headset, the phone uses SDP to determine whichBluetooth profilesthe headset can use (Headset Profile, Hands Free Profile (HFP),Advanced Audio Distribution Profile (A2DP)etc.) and the protocol multiplexer settings needed for the phone to connect to the headset using each of them. Each service is identified by aUniversally unique identifier(UUID), with official services (Bluetooth profiles) assigned a short form UUID (16 bits rather than the full 128).
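The short 16-bit UUIDs assigned to official services are conventionally expanded into full 128-bit UUIDs by inserting them into the Bluetooth Base UUID. The sketch below shows that expansion; the example value 0x110B is the assigned number commonly used for the A2DP audio-sink service class.

```python
import uuid

# Bluetooth Base UUID: short 16-bit UUIDs are placed in its upper 32-bit field.
BASE_UUID = uuid.UUID("00000000-0000-1000-8000-00805F9B34FB")

def expand_16bit_uuid(short: int) -> uuid.UUID:
    """Expand a 16-bit SDP/profile UUID to its full 128-bit form."""
    return uuid.UUID(int=BASE_UUID.int | (short << 96))

print(expand_16bit_uuid(0x110B))  # 0000110b-0000-1000-8000-00805f9b34fb
```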
Radio Frequency Communications(RFCOMM) is a cable replacement protocol used for generating a virtual serial data stream. RFCOMM provides for binary data transport and emulatesEIA-232(formerly RS-232) control signals over the Bluetooth baseband layer, i.e., it is a serial port emulation.
RFCOMM provides a simple, reliable, data stream to the user, similar to TCP. It is used directly by many telephony related profiles as a carrier for AT commands, as well as being a transport layer for OBEX over Bluetooth.
Many Bluetooth applications use RFCOMM because of its widespread support and publicly available API on most operating systems. Additionally, applications that used a serial port to communicate can be quickly ported to use RFCOMM.
TheBluetooth Network Encapsulation Protocol(BNEP) is used for transferring another protocol stack's data via an L2CAP channel.
Its main purpose is the transmission of IP packets in the Personal Area Networking Profile.
BNEP performs a similar function toSNAPin Wireless LAN.
TheAudio/Video Control Transport Protocol(AVCTP) is used by the remote control profile to transfer AV/C commands over an L2CAP channel. The music control buttons on a stereo headset use this protocol to control the music player.
The Audio/Video Distribution Transport Protocol (AVDTP) is used by the Advanced Audio Distribution Profile (A2DP) to stream music to stereo headsets over an L2CAP channel; it is also intended to carry the Video Distribution Profile in Bluetooth transmissions.
TheTelephony Control Protocol– Binary(TCS BIN) is the bit-oriented protocol that defines the call control signaling for the establishment of voice and data calls between Bluetooth devices. Additionally, "TCS BIN defines mobility management procedures for handling groups of Bluetooth TCS devices."
TCS-BIN is only used by the cordless telephony profile, which failed to attract implementers. As such it is only of historical interest.
Adopted protocols are defined by other standards-making organizations and incorporated into Bluetooth's protocol stack, allowing Bluetooth to code protocols only when necessary. The adopted protocols include:
Depending on packet type, individual packets may be protected byerror correction, either 1/3 rateforward error correction(FEC) or 2/3 rate. In addition, packets with CRC will be retransmitted until acknowledged byautomatic repeat request(ARQ).
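The 1/3-rate code is, in essence, a threefold repetition of each bit decoded by majority vote; the 2/3-rate code is a shortened Hamming code and is not shown here. A minimal sketch of the repetition scheme:

```python
def fec13_encode(bits):
    """1/3-rate FEC: repeat every bit three times."""
    return [b for b in bits for _ in range(3)]

def fec13_decode(coded):
    """Majority vote over each group of three received bits; corrects any single bit error per group."""
    return [1 if sum(coded[i:i + 3]) >= 2 else 0 for i in range(0, len(coded), 3)]

header = [1, 0, 1, 1, 0, 0, 1, 0]
coded = fec13_encode(header)
coded[4] ^= 1                      # flip one received bit to simulate channel noise
assert fec13_decode(coded) == header
print(fec13_decode(coded))
```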
Any Bluetooth device indiscoverable modetransmits the following information on demand:
Any device may perform an inquiry to find other devices to connect to, and any device can be configured to respond to such inquiries. However, if the device trying to connect knows the address of the device, it always responds to direct connection requests and transmits the information shown in the list above if requested. Use of a device's services may require pairing or acceptance by its owner, but the connection itself can be initiated by any device and held until it goes out of range. Some devices can be connected to only one device at a time, and connecting to them prevents them from connecting to other devices and appearing in inquiries until they disconnect from the other device.
Every device has aunique 48-bit address. However, these addresses are generally not shown in inquiries. Instead, friendly Bluetooth names are used, which can be set by the user. This name appears when another user scans for devices and in lists of paired devices.
Most cellular phones have the Bluetooth name set to the manufacturer and model of the phone by default. Most cellular phones and laptops show only the Bluetooth names and special programs are required to get additional information about remote devices. This can be confusing as, for example, there could be several cellular phones in range namedT610(seeBluejacking).
Many services offered over Bluetooth can expose private data or let a connecting party control the Bluetooth device. Security reasons make it necessary to recognize specific devices, and thus enable control over which devices can connect to a given Bluetooth device. At the same time, it is useful for Bluetooth devices to be able to establish a connection without user intervention (for example, as soon as in range).
To resolve this conflict, Bluetooth uses a process calledbonding, and a bond is generated through a process calledpairing. The pairing process is triggered either by a specific request from a user to generate a bond (for example, the user explicitly requests to "Add a Bluetooth device"), or it is triggered automatically when connecting to a service where (for the first time) the identity of a device is required for security purposes. These two cases are referred to as dedicated bonding and general bonding respectively.
Pairing often involves some level of user interaction. This user interaction confirms the identity of the devices. When pairing completes, a bond forms between the two devices, enabling those two devices to connect in the future without repeating the pairing process to confirm device identities. When desired, the user can remove the bonding relationship.
During pairing, the two devices establish a relationship by creating ashared secretknown as alink key. If both devices store the same link key, they are said to bepairedorbonded. A device that wants to communicate only with a bonded device cancryptographicallyauthenticatethe identity of the other device, ensuring it is the same device it previously paired with. Once a link key is generated, an authenticatedACLlink between the devices may beencryptedto protect exchanged data againsteavesdropping. Users can delete link keys from either device, which removes the bond between the devices—so it is possible for one device to have a stored link key for a device it is no longer paired with.
Bluetooth services generally require either encryption or authentication and as such require pairing before they let a remote device connect. Some services, such as the Object Push Profile, elect not to explicitly require authentication or encryption so that pairing does not interfere with the user experience associated with the service use-cases.
Pairing mechanisms changed significantly with the introduction of Secure Simple Pairing in Bluetooth v2.1. The following summarizes the pairing mechanisms:
SSP is considered simple for the following reasons:
Prior to Bluetooth v2.1, encryption is not required and can be turned off at any time. Moreover, the encryption key is only good for approximately 23.5 hours; using a single encryption key longer than this time allows simpleXOR attacksto retrieve the encryption key.
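The danger of keeping one encryption key (and hence, effectively, one keystream) in use for too long can be illustrated with a generic stream-cipher sketch; this is not Bluetooth's E0 cipher, just a demonstration of why keystream reuse lets an eavesdropper cancel the key by XORing two ciphertexts.

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Toy stand-in for a stream-cipher keystream (NOT Bluetooth's E0 cipher).
keystream = os.urandom(32)

p1 = b"meeting moved to 14:00 today...."
p2 = b"send the updated budget today..."
c1 = xor(p1, keystream)
c2 = xor(p2, keystream)   # same keystream reused

# An eavesdropper who captures both ciphertexts learns p1 XOR p2 without the key:
leak = xor(c1, c2)
assert leak == xor(p1, p2)
# Any known or guessed part of one plaintext then reveals the other directly:
print(xor(leak, p1))      # -> b"send the updated budget today..."
```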
Bluetooth v2.1 addresses this in the following ways:
Link keys may be stored on the device file system, not on the Bluetooth chip itself. Many Bluetooth chip manufacturers let link keys be stored on the device—however, if the device is removable, this means that the link key moves with the device.
Bluetooth implementsconfidentiality,authenticationandkeyderivation with custom algorithms based on theSAFER+block cipher. Bluetooth key generation is generally based on a Bluetooth PIN, which must be entered into both devices. This procedure might be modified if one of the devices has a fixed PIN (e.g., for headsets or similar devices with a restricted user interface). During pairing, an initialization key or master key is generated, using the E22 algorithm.[136]TheE0stream cipher is used for encrypting packets, granting confidentiality, and is based on a shared cryptographic secret, namely a previously generated link key or master key. Those keys, used for subsequent encryption of data sent via the air interface, rely on the Bluetooth PIN, which has been entered into one or both devices.
An overview of Bluetooth vulnerabilities exploits was published in 2007 by Andreas Becker.[137]
In September 2008, theNational Institute of Standards and Technology(NIST) published a Guide to Bluetooth Security as a reference for organizations. It describes Bluetooth security capabilities and how to secure Bluetooth technologies effectively. While Bluetooth has its benefits, it is susceptible to denial-of-service attacks, eavesdropping, man-in-the-middle attacks, message modification, and resource misappropriation. Users and organizations must evaluate their acceptable level of risk and incorporate security into the lifecycle of Bluetooth devices. To help mitigate risks, included in the NIST document are security checklists with guidelines and recommendations for creating and maintaining secure Bluetooth piconets, headsets, and smart card readers.[138]
Bluetooth v2.1 – finalized in 2007 with consumer devices first appearing in 2009 – makes significant changes to Bluetooth's security, including pairing. See thepairing mechanismssection for more about these changes.
Bluejacking is the sending of either a picture or a message from one user to an unsuspecting user through Bluetooth wireless technology. Common applications include short messages, e.g., "You've just been bluejacked!"[139]Bluejacking does not involve the removal or alteration of any data from the device.[140]
Some form ofDoSis also possible, even in modern devices, by sending unsolicited pairing requests in rapid succession; this becomes disruptive because most systems display a full screen notification for every connection request, interrupting every other activity, especially on less powerful devices.
In 2001, Jakobsson and Wetzel fromBell Laboratoriesdiscovered flaws in the Bluetooth pairing protocol and also pointed to vulnerabilities in the encryption scheme.[141]In 2003, Ben and Adam Laurie from A.L. Digital Ltd. discovered that serious flaws in some poor implementations of Bluetooth security may lead to disclosure of personal data.[142]In a subsequent experiment, Martin Herfurt from the trifinite.group was able to do a field-trial at theCeBITfairgrounds, showing the importance of the problem to the world. A new attack calledBlueBugwas used for this experiment.[143]In 2004 the first purportedvirususing Bluetooth to spread itself among mobile phones appeared on theSymbian OS.[144]The virus was first described byKaspersky Laband requires users to confirm the installation of unknown software before it can propagate. The virus was written as a proof-of-concept by a group of virus writers known as "29A" and sent to anti-virus groups. Thus, it should be regarded as a potential (but not real) security threat to Bluetooth technology orSymbian OSsince the virus has never spread outside of this system. In August 2004, a world-record-setting experiment (see alsoBluetooth sniping) showed that the range of Class 2 Bluetooth radios could be extended to 1.78 km (1.11 mi) with directional antennas and signal amplifiers.[145]This poses a potential security threat because it enables attackers to access vulnerable Bluetooth devices from a distance beyond expectation. The attacker must also be able to receive information from the victim to set up a connection. No attack can be made against a Bluetooth device unless the attacker knows its Bluetooth address and which channels to transmit on, although these can be deduced within a few minutes if the device is in use.[146]
In January 2005, a mobilemalwareworm known as Lasco surfaced. The worm began targeting mobile phones usingSymbian OS(Series 60 platform) using Bluetooth enabled devices to replicate itself and spread to other devices. The worm is self-installing and begins once the mobile user approves the transfer of the file (Velasco.sis) from another device. Once installed, the worm begins looking for other Bluetooth enabled devices to infect. Additionally, the worm infects other.SISfiles on the device, allowing replication to another device through the use of removable media (Secure Digital,CompactFlash, etc.). The worm can render the mobile device unstable.[147]
In April 2005,University of Cambridgesecurity researchers published results of their actual implementation of passive attacks against thePIN-basedpairing between commercial Bluetooth devices. They confirmed that attacks are practicably fast, and the Bluetooth symmetric key establishment method is vulnerable. To rectify this vulnerability, they designed an implementation that showed that stronger, asymmetric key establishment is feasible for certain classes of devices, such as mobile phones.[148]
In June 2005, Yaniv Shaked[149]and Avishai Wool[150]published a paper describing both passive and active methods for obtaining the PIN for a Bluetooth link. The passive attack allows a suitably equipped attacker to eavesdrop on communications and spoof if the attacker was present at the time of initial pairing. The active method makes use of a specially constructed message that must be inserted at a specific point in the protocol, to make the master and slave repeat the pairing process. After that, the first method can be used to crack the PIN. This attack's major weakness is that it requires the user of the devices under attack to re-enter the PIN during the attack when the device prompts them to. Also, this active attack probably requires custom hardware, since most commercially available Bluetooth devices are not capable of the timing necessary.[151]
In August 2005, police inCambridgeshire, England, issued warnings about thieves using Bluetooth enabled phones to track other devices left in cars. Police are advising users to ensure that any mobile networking connections are de-activated if laptops and other devices are left in this way.[152]
In April 2006, researchers fromSecure NetworkandF-Securepublished a report that warns of the large number of devices left in a visible state, and issued statistics on the spread of various Bluetooth services and the ease of spread of an eventual Bluetooth worm.[153]
In October 2006, at the Luxembourgish Hack.lu Security Conference, Kevin Finistere and Thierry Zoller demonstrated and released a remote root shell via Bluetooth on Mac OS X v10.3.9 and v10.4. They also demonstrated the first Bluetooth PIN and Linkkeys cracker, which is based on the research of Wool and Shaked.[154]
In April 2017, security researchers at Armis discovered multiple exploits in the Bluetooth software in various platforms, includingMicrosoft Windows,Linux, AppleiOS, and GoogleAndroid. These vulnerabilities are collectively called "BlueBorne". The exploits allow an attacker to connect to devices or systems without authentication and can give them "virtually full control over the device". Armis contacted Google, Microsoft, Apple, Samsung and Linux developers allowing them to patch their software before the coordinated announcement of the vulnerabilities on 12 September 2017.[155]
In July 2018, Lior Neumann andEli Biham, researchers at the Technion – Israel Institute of Technology identified a security vulnerability in the latest Bluetooth pairing procedures: Secure Simple Pairing and LE Secure Connections.[156][157]
Also, in October 2018, Karim Lounis, a network security researcher at Queen's University, identified a security vulnerability, called CDV (Connection Dumping Vulnerability), on various Bluetooth devices that allows an attacker to tear down an existing Bluetooth connection and cause the deauthentication and disconnection of the involved devices. The researcher demonstrated the attack on various devices of different categories and from different manufacturers.[158]
In August 2019, security researchers at theSingapore University of Technology and Design, Helmholtz Center for Information Security, andUniversity of Oxforddiscovered a vulnerability, called KNOB (Key Negotiation of Bluetooth) in the key negotiation that would "brute force the negotiated encryption keys, decrypt the eavesdropped ciphertext, and inject valid encrypted messages (in real-time)".[159][160]Google released anAndroidsecurity patch on 5 August 2019, which removed this vulnerability.[161]
In November 2023, researchers fromEurecomrevealed a new class of attacks known as BLUFFS (Bluetooth Low Energy Forward and Future Secrecy Attacks). These 6 new attacks expand on and work in conjunction with the previously known KNOB and BIAS (Bluetooth Impersonation AttackS) attacks. While the previous KNOB and BIAS attacks allowed an attacker to decrypt and spoof Bluetooth packets within a session, BLUFFS extends this capability to all sessions generated by a device (including past, present, and future). All devices running Bluetooth versions 4.2 up to and including 5.4 are affected.[162][163]
Bluetooth uses theradio frequencyspectrum in the 2.402GHz to 2.480GHz range,[164]which isnon-ionizing radiation, of similar bandwidth to that used by wireless and mobile phones. No specific harm has been demonstrated, even though wireless transmission has been included byIARCin the possiblecarcinogenlist. Maximum power output from a Bluetooth radio is 100mWfor Class1, 2.5mW for Class2, and 1mW for Class3 devices. Even the maximum power output of Class1 is a lower level than the lowest-powered mobile phones.[165]UMTSandW-CDMAoutput 250mW,GSM1800/1900outputs 1000mW, andGSM850/900outputs 2000mW.
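For comparison with the transmit powers usually quoted in decibel-milliwatts, the class limits above convert as in the following sketch.

```python
import math

def mw_to_dbm(p_mw: float) -> float:
    return 10 * math.log10(p_mw)

for cls, p in (("Class 1", 100), ("Class 2", 2.5), ("Class 3", 1)):
    print(f"{cls}: {p} mW = {mw_to_dbm(p):.0f} dBm")
# Class 1: 100 mW = 20 dBm, Class 2: 2.5 mW = 4 dBm, Class 3: 1 mW = 0 dBm
```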
The Bluetooth Innovation World Cup, a marketing initiative of the Bluetooth Special Interest Group (SIG), was an international competition that encouraged the development of innovations for applications leveraging Bluetooth technology in sports, fitness and health care products. The competition aimed to stimulate new markets.[166]
The Bluetooth Innovation World Cup morphed into the Bluetooth Breakthrough Awards in 2013. Bluetooth SIG subsequently launched the Imagine Blue Award in 2016 at Bluetooth World.[167]The Bluetooth Breakthrough Awards program highlights the most innovative products and applications available today, prototypes coming soon, and student-led projects in the making.[168]
|
https://en.wikipedia.org/wiki/Bluetooth#Bluetooth_2.1
|
Abiometric passport(also known as anelectronic passport,e-passportor adigital passport) is apassportthat has an embedded electronicmicroprocessorchip, which containsbiometricinformation that can be used to authenticate the identity of the passport holder. It usescontactless smart cardtechnology, including a microprocessor chip (computer chip) and antenna (for both power to the chip and communication) embedded in the front or back cover, or centre page, of the passport. The passport's critical information is printed on the data page of the passport, repeated on themachine readable linesand stored in the chip.Public key infrastructure(PKI) is used to authenticate the data stored electronically in the passport chip, making it expensive and difficult to forge when all security mechanisms are fully and correctly implemented.
Most countries are issuing biometric passports to their citizens.Malaysiawas the first country to issuebiometric passportsin 1998.[1]By the end of 2008, 60 countries were issuing such passports,[2]which increased to over 150 by mid-2019.[3]
The currently standardised biometrics used for this type of identification system arefacial recognition,fingerprint recognition, andiris recognition. These were adopted after assessment of several different kinds of biometrics includingretinal scan. Document and chip characteristics are documented in theInternational Civil Aviation Organization's (ICAO) Doc 9303 (ICAO 9303).[4]The ICAO defines the biometric file formats and communication protocols to be used in passports. Only the digital image (usually inJPEGorJPEG 2000format) of each biometric feature is actually stored in the chip. The comparison of biometric features is performed outside the passport chip by electronic border control systems (e-borders). To store biometric data on the contactless chip, it includes a minimum of 32 kilobytes ofEEPROMstorage memory, and runs on an interface in accordance with theISO/IEC 14443international standard, amongst others. These standards intend interoperability between different countries and different manufacturers of passport books.
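One small, well-documented piece of ICAO 9303 that is easy to illustrate is the check digit used on the machine-readable lines: characters are weighted 7, 3, 1 repeatedly, letters count as 10–35, the '<' filler counts as 0, and the sum is reduced modulo 10. The field values in the sketch below are arbitrary examples chosen only to show the calculation.

```python
def icao_check_digit(field: str) -> int:
    """ICAO 9303 check digit: weights 7,3,1 repeating; digits keep their value,
    A-Z map to 10-35, the '<' filler counts as 0; the result is the sum modulo 10."""
    weights = (7, 3, 1)
    total = 0
    for i, ch in enumerate(field):
        if ch.isdigit():
            v = int(ch)
        elif ch.isalpha():
            v = ord(ch.upper()) - ord("A") + 10
        else:              # '<' filler
            v = 0
        total += v * weights[i % 3]
    return total % 10

# Example document-number and date fields, purely to show the calculation:
print(icao_check_digit("L898902C3"))   # 6
print(icao_check_digit("740812"))      # 2
```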
Somenational identity cards, such as those fromAlbania,Brazil, theNetherlands, andSaudi Arabiaare fully ICAO 9303 compliant biometrictravel documents. However others, such as theUnited States passport card, are not.[5]
Biometric passports have protection mechanisms to avoid and/or detect attacks:
To assure interoperability and functionality of the security mechanisms listed above, ICAO andGermanFederal Office for Information Security(BSI) have specified several test cases. These test specifications are updated with every new protocol and are covering details starting from the paper used and ending in the chip that is included.[9]
Since the introduction of biometric passports, several attacks have been presented and demonstrated.
Privacyproponents in many countries question and protest the lack of information about exactly what the passports' chip will contain, and whether they affectcivil liberties. The main problem they point out is that data on the passports can be transferred with wirelessRFIDtechnology, which can become a major vulnerability. Although this could allowID-check computers to obtain a person's information without a physical connection, it may also allow anyone with the necessary equipment to perform the same task. If the personal information and passport numbers on the chip are notencrypted, the information might wind up in the wrong hands.
On 15 December 2006, the BBC published an article[26] on the British ePassport, citing the above concerns and adding that the Future of Identity in the Information Society (FIDIS) network's research team (a body of IT security experts funded by the European Union) had "also come out against the ePassport scheme... [stating that] European governments have forced a document on its people that dramatically decreases security and increases the risk of identity theft."[27]
Most security measures are designed against untrusted citizens (the "provers"), but the scientific security community has recently also addressed the threats posed by untrustworthy verifiers, such as corrupt governmental organizations, or nations using poorly implemented, insecure electronic systems.[28] New cryptographic solutions such as private biometrics are being proposed to mitigate threats of mass identity theft. These are under scientific study, but not yet implemented in biometric passports.
It was planned that, except for Denmark and Ireland, EU passports would have digital imaging and fingerprint scan biometrics placed on their RFID chips.[116] This combination of biometrics aims to provide a higher level of security and protection against fraudulent identification papers. Technical specifications for the new passports have been established by the European Commission.[117] The specifications are binding for the Schengen agreement parties, i.e. the EU countries, except Ireland, and the four European Free Trade Association countries—Iceland, Liechtenstein,[118][119] Norway and Switzerland.[120] These countries are obliged to implement machine-readable facial images in the passports by 28 August 2006, and fingerprints by 26 June 2009.[121] The European Data Protection Supervisor has stated that the current legal framework fails to "address all the possible and relevant issues triggered by the inherent imperfections of biometric systems".[122]
Irish biometric passports only used a digital image and not fingerprinting. German passports printed after 1 November 2007 contain two fingerprints, one from each hand, in addition to a digital photograph. Romanian passports will also contain two fingerprints, one from each hand. The Netherlands also takes fingerprints and was[123]the only EU member that had plans to store these fingerprints centrally.[124]According to EU requirements, only nations that are signatories to theSchengen acquisare required to add fingerprint biometrics.[125]
The ICAO standard specifies a 35 × 45 mm portrait photograph of adequate resolution.
Though some countries, such as the United States, use a 2 × 2 inch (51 × 51 mm) photo format, the image is usually cropped closer to the 35:45 ratio when the passport is issued.
|
https://en.wikipedia.org/wiki/E-passport
|
Achosen-ciphertext attack(CCA) is anattack modelforcryptanalysiswhere the cryptanalyst can gather information by obtaining the decryptions of chosen ciphertexts. From these pieces of information the adversary can attempt to recover the secret key used for decryption.
For formal definitions of security against chosen-ciphertext attacks, see for example:Michael Luby[1]andMihir Bellareet al.[2]
A number of otherwise secure schemes can be defeated under chosen-ciphertext attack. For example, theEl Gamalcryptosystem issemantically secureunderchosen-plaintext attack, but this semantic security can be trivially defeated under a chosen-ciphertext attack. Early versions ofRSApadding used in theSSLprotocol were vulnerable to a sophisticatedadaptive chosen-ciphertext attackwhich revealed SSL session keys. Chosen-ciphertext attacks have implications for some self-synchronizingstream ciphersas well. Designers of tamper-resistant cryptographicsmart cardsmust be particularly cognizant of these attacks, as these devices may be completely under the control of an adversary, who can issue a large number of chosen-ciphertexts in an attempt to recover the hidden secret key.
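To make the malleability point concrete, here is a minimal Python sketch (toy parameters and names of our choosing, not a real implementation) of a chosen-ciphertext attack on textbook El Gamal: the adversary never queries the challenge ciphertext itself, but a single oracle query on a modified ciphertext recovers the plaintext.

import random

# Toy El Gamal parameters (p prime, g a generator mod p); far too small for real use.
p, g = 467, 2
x = random.randrange(2, p - 1)      # private key
h = pow(g, x, p)                    # public key

def encrypt(m):
    r = random.randrange(2, p - 1)
    return (pow(g, r, p), (m * pow(h, r, p)) % p)

def decrypt(c1, c2):                # stands in for the decryption oracle
    return (c2 * pow(c1, p - 1 - x, p)) % p

m_secret = 123
c1, c2 = encrypt(m_secret)          # challenge ciphertext

# Attack: multiply the second component by 2, query the oracle on the
# *modified* ciphertext, then divide the answer by 2 modulo p.
two_m = decrypt(c1, (2 * c2) % p)
recovered = (two_m * pow(2, p - 2, p)) % p
assert recovered == m_secret
print("recovered plaintext:", recovered)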
It was not at all clear whether public key cryptosystems could withstand chosen-ciphertext attack until the initial breakthrough work of Moni Naor and Moti Yung in 1990, which suggested a mode of dual encryption with an integrity proof (now known as the "Naor–Yung" encryption paradigm).[3] This work made the notion of security against chosen-ciphertext attack much clearer than before and opened the research direction of constructing systems with various protections against variants of the attack.
When a cryptosystem is vulnerable to chosen-ciphertext attack, implementers must be careful to avoid situations in which an adversary might be able to decrypt chosen ciphertexts (i.e., avoid providing a decryption oracle). This can be more difficult than it appears, as even partially chosen ciphertexts can permit subtle attacks. Additionally, some cryptosystems (such as RSA) use the same mechanism to sign messages and to decrypt them, which permits attacks when hashing is not used on the message to be signed. A better approach is to use a cryptosystem that is provably secure under chosen-ciphertext attack, such as RSA-OAEP (secure under the random oracle heuristic) or Cramer–Shoup, the first practical public-key system shown to be secure against chosen-ciphertext attacks. For symmetric encryption schemes, authenticated encryption, a primitive based on symmetric encryption, is known to give security against chosen-ciphertext attacks, as was first shown by Jonathan Katz and Moti Yung.[4]
Chosen-ciphertext attacks, like other attacks, may be adaptive or non-adaptive. In an adaptive chosen-ciphertext attack, the attacker can use the results from prior decryptions to inform their choices of which ciphertexts to have decrypted. In a non-adaptive attack, the attacker chooses the ciphertexts to have decrypted without seeing any of the resulting plaintexts. After seeing the plaintexts, the attacker can no longer obtain the decryption of additional ciphertexts.
A specially noted variant of the chosen-ciphertext attack is the "lunchtime", "midnight", or "indifferent" attack, in which an attacker may make adaptive chosen-ciphertext queries but only up until a certain point, after which the attacker must demonstrate some improved ability to attack the system.[5]The term "lunchtime attack" refers to the idea that a user's computer, with the ability to decrypt, is available to an attacker while the user is out to lunch. This form of the attack was the first one commonly discussed: obviously, if the attacker has the ability to make adaptive chosen ciphertext queries, no encrypted message would be safe, at least until that ability is taken away. This attack is sometimes called the "non-adaptive chosen ciphertext attack";[6]here, "non-adaptive" refers to the fact that the attacker cannot adapt their queries in response to the challenge, which is given after the ability to make chosen ciphertext queries has expired.
A (full) adaptive chosen-ciphertext attack is an attack in which ciphertexts may be chosen adaptively before and after a challenge ciphertext is given to the attacker, with only the stipulation that the challenge ciphertext may not itself be queried. This is a stronger attack notion than the lunchtime attack, and is commonly referred to as a CCA2 attack, as compared to a CCA1 (lunchtime) attack.[6]Few practical attacks are of this form. Rather, this model is important for its use in proofs of security against chosen-ciphertext attacks. A proof that attacks in this model are impossible implies that any realistic chosen-ciphertext attack cannot be performed.
A practical adaptive chosen-ciphertext attack is the Bleichenbacher attack againstPKCS#1.[7]
Numerous cryptosystems are proven secure against adaptive chosen-ciphertext attacks, some proving this security property based only on algebraic assumptions, some additionally requiring an idealized random oracle assumption. For example, theCramer-Shoup system[5]is secure based on number theoretic assumptions and no idealization, and after a number of subtle investigations it was also established that the practical schemeRSA-OAEPis secure under the RSA assumption in the idealized random oracle model.[8]
|
https://en.wikipedia.org/wiki/Chosen-ciphertext_attack
|
Inmathematics, especially in the field ofalgebra, apolynomial ringorpolynomial algebrais aringformed from thesetofpolynomialsin one or moreindeterminates(traditionally also calledvariables) withcoefficientsin anotherring, often afield.[1]
Often, the term "polynomial ring" refers implicitly to the special case of a polynomial ring in one indeterminate over a field. The importance of such polynomial rings relies on the high number of properties that they have in common with the ring of theintegers.[2]
Polynomial rings occur and are often fundamental in many parts of mathematics such asnumber theory,commutative algebra, andalgebraic geometry. Inring theory, many classes of rings, such asunique factorization domains,regular rings,group rings,rings of formal power series,Ore polynomials,graded rings, have been introduced for generalizing some properties of polynomial rings.[3]
A closely related notion is that of thering of polynomial functionson avector space, and, more generally,ring of regular functionson analgebraic variety.[2]
LetKbe afieldor (more generally) acommutative ring.
The polynomial ring in X over K, which is denoted K[X], can be defined in several equivalent ways. One of them is to define K[X] as the set of expressions, called polynomials in X, of the form[4]
p = p_0 + p_1X + p_2X^2 + ⋯ + p_mX^m,
where p_0, p_1, …, p_m, the coefficients of p, are elements of K, p_m ≠ 0 if m > 0, and X, X^2, …, are symbols, which are considered as "powers" of X and follow the usual rules of exponentiation: X^0 = 1, X^1 = X, and X^k X^l = X^{k+l} for any nonnegative integers k and l. The symbol X is called an indeterminate[5] or variable.[6] (The term "variable" comes from the terminology of polynomial functions. However, here, X has no value (other than itself), and cannot vary, being a constant in the polynomial ring.)
Two polynomials are equal when the corresponding coefficients of eachXkare equal.
One can think of the ringK[X]as arising fromKby adding one new elementXthat is external toK, commutes with all elements ofK, and has no other specific properties. This can be used for an equivalent definition of polynomial rings.
The polynomial ring in X over K is equipped with an addition, a multiplication and a scalar multiplication that make it a commutative algebra. These operations are defined according to the ordinary rules for manipulating algebraic expressions. Specifically, if
p = p_0 + p_1X + ⋯ + p_mX^m
and
q = q_0 + q_1X + ⋯ + q_nX^n,
then
p + q = r_0 + r_1X + ⋯ + r_kX^k
and
pq = s_0 + s_1X + ⋯ + s_lX^l,
where k = max(m, n), l = m + n,
r_i = p_i + q_i
and
s_i = p_0q_i + p_1q_{i−1} + ⋯ + p_iq_0.
In these formulas, the polynomials p and q are extended by adding "dummy terms" with zero coefficients, so that all p_i and q_i that appear in the formulas are defined. Specifically, if m < n, then p_i = 0 for m < i ≤ n.
The scalar multiplication is the special case of the multiplication where p = p_0 is reduced to its constant term (the term that is independent of X); that is,
p_0(q_0 + q_1X + ⋯ + q_nX^n) = p_0q_0 + p_0q_1X + ⋯ + p_0q_nX^n.
It is straightforward to verify that these three operations satisfy the axioms of a commutative algebra overK. Therefore, polynomial rings are also calledpolynomial algebras.
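As an illustration of the formulas above, here is a minimal Python sketch (rational coefficients; the representation and function names are ours) in which the polynomial p_0 + p_1X + ⋯ + p_mX^m is stored as the coefficient list [p_0, p_1, …, p_m].

from fractions import Fraction

def poly_add(p, q):
    k = max(len(p), len(q))
    # pad with "dummy terms" (zero coefficients) so that all p_i and q_i are defined
    p = p + [Fraction(0)] * (k - len(p))
    q = q + [Fraction(0)] * (k - len(q))
    return [pi + qi for pi, qi in zip(p, q)]

def poly_mul(p, q):
    s = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            s[i + j] += pi * qj      # contribution to the coefficient of X^(i+j)
    return s

# (1 + X)(1 - X + X^2) = 1 + X^3
print(poly_mul([Fraction(1), Fraction(1)],
               [Fraction(1), Fraction(-1), Fraction(1)]))   # coefficients of 1 + X^3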
Another equivalent definition is often preferred, although less intuitive, because it is easier to make it completely rigorous. It consists in defining a polynomial as an infinite sequence (p_0, p_1, p_2, …) of elements of K, having the property that only a finite number of the elements are nonzero, or equivalently, a sequence for which there is some m so that p_n = 0 for n > m. In this case, p_0 and X are considered as alternate notations for the sequences (p_0, 0, 0, …) and (0, 1, 0, 0, …), respectively. A straightforward use of the operation rules shows that the expression
p_0 + p_1X + p_2X^2 + ⋯ + p_mX^m
is then an alternate notation for the sequence
(p_0, p_1, p_2, …, p_m, 0, 0, 0, …).
Let
p = p_0 + p_1X + p_2X^2 + ⋯ + p_mX^m
be a nonzero polynomial with p_m ≠ 0.
The constant term of p is p_0. It is zero in the case of the zero polynomial.
The degree of p, written deg(p), is m, the largest k such that the coefficient of X^k is not zero.[7]
The leading coefficient of p is p_m.[8]
In the special case of the zero polynomial, all of whose coefficients are zero, the leading coefficient is undefined, and the degree has been variously left undefined,[9] defined to be −1,[10] or defined to be −∞.[11]
Aconstant polynomialis either the zero polynomial, or a polynomial of degree zero.
A nonzero polynomial is monic if its leading coefficient is 1.
Given two polynomials p and q, if the degree of the zero polynomial is defined to be −∞, one has
deg(p + q) ≤ max(deg(p), deg(q))
and, over a field, or more generally an integral domain,[12]
deg(pq) = deg(p) + deg(q).
It follows immediately that, ifKis an integral domain, then so isK[X].[13]
It follows also that, ifKis an integral domain, a polynomial is aunit(that is, it has amultiplicative inverse) if and only if it is constant and is a unit inK.
Two polynomials areassociatedif either one is the product of the other by a unit.
Over a field, every nonzero polynomial is associated to a unique monic polynomial.
Given two polynomials,pandq, one says thatpdividesq,pis adivisorofq, orqis a multiple ofp, if there is a polynomialrsuch thatq=pr.
A polynomial isirreducibleif it is not the product of two non-constant polynomials, or equivalently, if its divisors are either constant polynomials or have the same degree.
LetKbe a field or, more generally, acommutative ring, andRa ring containingK. For any polynomialPinK[X]and any elementainR, the substitution ofXwithainPdefines an element ofR, which isdenotedP(a). This element is obtained by carrying on inRafter the substitution the operations indicated by the expression of the polynomial. This computation is called theevaluationofPata. For example, if we have
we have
(in the first exampleR=K, and in the second oneR=K[X]). SubstitutingXfor itself results in
explaining why the sentences "LetPbe a polynomial" and "LetP(X)be a polynomial" are equivalent.
The polynomial function defined by a polynomial P is the function from K into K that is defined by x ↦ P(x). If K is an infinite field, two different polynomials define different polynomial functions, but this property is false for finite fields. For example, if K is a field with q elements, then the polynomials 0 and X^q − X both define the zero function.
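A quick Python check of this finite-field phenomenon, with the toy choice q = 5 (so K is the integers modulo 5): the polynomial X^q − X vanishes at every point of the field, hence defines the same function as the zero polynomial.

q = 5                      # a prime, so the integers mod q form a field with q elements
for a in range(q):
    assert (pow(a, q, q) - a) % q == 0    # a^q - a == 0 for every a in the field
print("X^%d - X induces the zero function on the field with %d elements" % (q, q))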
For every a in R, the evaluation at a, that is, the map P ↦ P(a), defines an algebra homomorphism from K[X] to R, which is the unique homomorphism from K[X] to R that fixes K and maps X to a. In other words, K[X] has the following universal property: for every ring R containing K and every element a of R, there is a unique algebra homomorphism from K[X] to R that fixes K and maps X to a.
As for all universal properties, this defines the pair(K[X],X)up to a unique isomorphism, and can therefore be taken as a definition ofK[X].
The image of the map P ↦ P(a), that is, the subset of R obtained by substituting a for X in elements of K[X], is denoted K[a].[14] For example, Z[√2] = {P(√2) ∣ P(X) ∈ Z[X]}, and the simplification rules for the powers of a square root imply Z[√2] = {a + b√2 ∣ a ∈ Z, b ∈ Z}.
If K is a field, the polynomial ring K[X] has many properties that are similar to those of the ring of integers Z. Most of these similarities result from the similarity between the long division of integers and the long division of polynomials.
Most of the properties ofK[X]that are listed in this section do not remain true ifKis not a field, or if one considers polynomials in several indeterminates.
Like for integers, theEuclidean division of polynomialshas a property of uniqueness. That is, given two polynomialsaandb≠ 0inK[X], there is a unique pair(q,r)of polynomials such thata=bq+r, and eitherr= 0ordeg(r) < deg(b). This makesK[X]aEuclidean domain. However, most other Euclidean domains (except integers) do not have any property of uniqueness for the division nor an easy algorithm (such as long division) for computing the Euclidean division.
The Euclidean division is the basis of theEuclidean algorithm for polynomialsthat computes apolynomial greatest common divisorof two polynomials. Here, "greatest" means "having a maximal degree" or, equivalently, being maximal for thepreorderdefined by the degree. Given a greatest common divisor of two polynomials, the other greatest common divisors are obtained by multiplication by a nonzero constant (that is, all greatest common divisors ofaandbare associated). In particular, two polynomials that are not both zero have a unique greatest common divisor that is monic (leading coefficient equal to1).
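The following Python sketch (same coefficient-list representation as the earlier sketch, rational coefficients, names of our choosing) implements the Euclidean division and the resulting monic greatest common divisor.

from fractions import Fraction

def trim(p):                       # drop trailing zero coefficients in place
    while p and p[-1] == 0:
        p.pop()
    return p

def poly_divmod(a, b):             # returns (quotient, remainder) with deg r < deg b
    a, b = list(a), list(b)
    q = [Fraction(0)] * max(len(a) - len(b) + 1, 1)
    while len(trim(a)) >= len(b):
        shift = len(a) - len(b)
        factor = a[-1] / b[-1]
        q[shift] = factor
        for i, bi in enumerate(b):
            a[i + shift] -= factor * bi
        trim(a)
    return q, a

def monic_gcd(a, b):
    a, b = trim(list(a)), trim(list(b))
    while b:
        _, r = poly_divmod(a, b)
        a, b = b, trim(r)
    return [c / a[-1] for c in a]  # normalize the leading coefficient to 1

# gcd(X^2 - 1, X^2 - 2X + 1) = X - 1
print(monic_gcd([Fraction(-1), Fraction(0), Fraction(1)],
                [Fraction(1), Fraction(-2), Fraction(1)]))   # coefficients of X - 1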
The extended Euclidean algorithm allows computing (and proving) Bézout's identity. In the case of K[X], it may be stated as follows. Given two polynomials p and q of respective degrees m and n, if their monic greatest common divisor g has the degree d, then there is a unique pair (a, b) of polynomials such that
ap + bq = g,
with deg(a) ≤ n − d and deg(b) < m − d.
(For making this true in the limiting case where m = d or n = d, one has to define as negative the degree of the zero polynomial. Moreover, the equality deg(a) = n − d can occur only if p and q are associated.) The uniqueness property is rather specific to K[X]. In the case of the integers the same property is true, if degrees are replaced by absolute values, but, for having uniqueness, one must require a > 0.
Euclid's lemma applies to K[X]. That is, if a divides bc, and is coprime with b, then a divides c. Here, coprime means that the monic greatest common divisor is 1. Proof: By hypothesis and Bézout's identity, there are e, p, and q such that ae = bc and 1 = ap + bq. So c = c(ap + bq) = cap + cbq = cap + aeq = a(cp + eq).
Theunique factorizationproperty results from Euclid's lemma. In the case of integers, this is thefundamental theorem of arithmetic. In the case ofK[X], it may be stated as:every non-constant polynomial can be expressed in a unique way as the product of a constant, and one or several irreducible monic polynomials; this decomposition is unique up to the order of the factors.In other termsK[X]is aunique factorization domain. IfKis the field of complex numbers, thefundamental theorem of algebraasserts that a univariate polynomial is irreducible if and only if its degree is one. In this case the unique factorization property can be restated as:every non-constant univariate polynomial over the complex numbers can be expressed in a unique way as the product of a constant, and one or several polynomials of the formX−r;this decomposition is unique up to the order of the factors.For each factor,ris arootof the polynomial, and the number of occurrences of a factor is themultiplicityof the corresponding root.
The (formal) derivative of the polynomial
a_0 + a_1X + a_2X^2 + ⋯ + a_nX^n
is the polynomial
a_1 + 2a_2X + ⋯ + na_nX^{n−1}.
In the case of polynomials withrealorcomplexcoefficients, this is the standardderivative. The above formula defines the derivative of a polynomial even if the coefficients belong to a ring on which no notion oflimitis defined. The derivative makes the polynomial ring adifferential algebra.
The existence of the derivative is one of the main properties of a polynomial ring that is not shared with integers, and makes some computations easier on a polynomial ring than on integers.
A polynomial with coefficients in a field or integral domain issquare-freeif it does not have amultiple rootin thealgebraically closed fieldcontaining its coefficients. In particular, a polynomial of degreenwith real or complex coefficients is square-free if it hasndistinct complex roots. Equivalently, a polynomial over a field is square-free if and only if thegreatest common divisorof the polynomial and its derivative is1.
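A small worked check of this gcd criterion over the rationals:
p = (X − 1)^2 (X + 2) = X^3 − 3X + 2,   p′ = 3X^2 − 3 = 3(X − 1)(X + 1),   gcd(p, p′) = X − 1 ≠ 1,
so p is not square-free: it has 1 as a double root.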
Asquare-free factorizationof a polynomial is an expression for that polynomial as a product of powers ofpairwise relatively primesquare-free factors. Over the real numbers (or any other field ofcharacteristic 0), such a factorization can be computed efficiently byYun's algorithm. Less efficient algorithms are known forsquare-free factorization of polynomials over finite fields.
Given a finite set of ordered pairs (x_j, y_j) with entries in a field and distinct values x_j, among the polynomials f(x) that interpolate these points (so that f(x_j) = y_j for all j), there is a unique polynomial of smallest degree. This is the Lagrange interpolation polynomial L(x). If there are k ordered pairs, the degree of L(x) is at most k − 1. The polynomial L(x) can be computed explicitly in terms of the input data (x_j, y_j).
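A minimal Python sketch (exact rational arithmetic; the function name is ours) that evaluates the Lagrange interpolation polynomial directly from the data points, without expanding its coefficients:

from fractions import Fraction

def lagrange_eval(points, t):
    # points: list of (x_j, y_j) with distinct x_j; returns L(t)
    t = Fraction(t)
    total = Fraction(0)
    for j, (xj, yj) in enumerate(points):
        term = Fraction(yj)
        for m, (xm, _) in enumerate(points):
            if m != j:
                term *= (t - xm) / (xj - xm)   # basis factor (t - x_m)/(x_j - x_m)
        total += term
    return total

# Three points lying on y = x^2: the interpolant of degree <= 2 is x^2 itself.
print(lagrange_eval([(0, 0), (1, 1), (2, 4)], 5))   # 25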
A decomposition of a polynomial is a way of expressing it as a composition of other polynomials of degree larger than 1. A polynomial that cannot be decomposed is indecomposable. Ritt's polynomial decomposition theorem asserts that if f = g_1 ∘ g_2 ∘ ⋯ ∘ g_m = h_1 ∘ h_2 ∘ ⋯ ∘ h_n are two different decompositions of the polynomial f, then m = n and the degrees of the indecomposables in one decomposition are the same as the degrees of the indecomposables in the other decomposition (though not necessarily in the same order).
Except for factorization, all previous properties of K[X] are effective, since their proofs, as sketched above, are associated with algorithms for testing the property and computing the polynomials whose existence is asserted. Moreover, these algorithms are efficient, as their computational complexity is a quadratic function of the input size.
The situation is completely different for factorization: the proof of the unique factorization does not give any hint for a method for factorizing. Already for the integers, there is no known algorithm running on a classical (non-quantum) computer for factorizing them inpolynomial time. This is the basis of theRSA cryptosystem, widely used for secure Internet communications.
In the case of K[X], the factors, and the methods for computing them, depend strongly on K. Over the complex numbers, the irreducible factors (those that cannot be factorized further) are all of degree one, while, over the real numbers, there are irreducible polynomials of degree 2, and, over the rational numbers, there are irreducible polynomials of any degree. For example, the polynomial X^4 − 2 is irreducible over the rational numbers, is factored as (X − 2^{1/4})(X + 2^{1/4})(X^2 + √2) over the real numbers, and as (X − 2^{1/4})(X + 2^{1/4})(X − i·2^{1/4})(X + i·2^{1/4}) over the complex numbers.
The existence of a factorization algorithm depends also on the ground field. In the case of the real or complex numbers,Abel–Ruffini theoremshows that the roots of some polynomials, and thus the irreducible factors, cannot be computed exactly. Therefore, a factorization algorithm can compute only approximations of the factors. Various algorithms have been designed for computing such approximations, seeRoot finding of polynomials.
There is an example of a field K such that there exist exact algorithms for the arithmetic operations of K, but there cannot exist any algorithm for deciding whether a polynomial of the form X^p − a is irreducible or is a product of polynomials of lower degree.[15]
On the other hand, over the rational numbers and over finite fields, the situation is better than forinteger factorization, as there arefactorization algorithmsthat have apolynomial complexity. They are implemented in most general purposecomputer algebra systems.
If θ is an element of an associative K-algebra L, the polynomial evaluation at θ is the unique algebra homomorphism φ from K[X] into L that maps X to θ and does not affect the elements of K itself (it is the identity map on K). It consists of substituting X with θ in every polynomial. That is,
φ(a_0 + a_1X + a_2X^2 + ⋯ + a_mX^m) = a_0 + a_1θ + a_2θ^2 + ⋯ + a_mθ^m.
The image of thisevaluation homomorphismis the subalgebra generated byθ, which is necessarily commutative.
Ifφis injective, the subalgebra generated byθis isomorphic toK[X]. In this case, this subalgebra is often denoted byK[θ]. The notation ambiguity is generally harmless, because of the isomorphism.
If the evaluation homomorphism is not injective, this means that itskernelis a nonzeroideal, consisting of all polynomials that become zero whenXis substituted withθ. This ideal consists of all multiples of some monic polynomial, that is called theminimal polynomialofθ. The termminimalis motivated by the fact that its degree is minimal among the degrees of the elements of the ideal.
There are two main cases where minimal polynomials are considered.
In field theory and number theory, an element θ of an extension field L of K is algebraic over K if it is a root of some polynomial with coefficients in K. The minimal polynomial over K of θ is thus the monic polynomial of minimal degree that has θ as a root. Because L is a field, this minimal polynomial is necessarily irreducible over K. For example, the minimal polynomial (over the reals as well as over the rationals) of the complex number i is X^2 + 1. The cyclotomic polynomials are the minimal polynomials of the roots of unity.
In linear algebra, the n×n square matrices over K form an associative K-algebra of finite dimension (as a vector space). Therefore the evaluation homomorphism cannot be injective, and every matrix has a minimal polynomial (not necessarily irreducible). By the Cayley–Hamilton theorem, the evaluation homomorphism maps to zero the characteristic polynomial of a matrix. It follows that the minimal polynomial divides the characteristic polynomial, and therefore that the degree of the minimal polynomial is at most n.
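For instance (a standard 2 × 2 illustration), the matrices
A = [[2, 0], [0, 2]] and B = [[2, 1], [0, 2]]
both have characteristic polynomial (X − 2)^2, but the minimal polynomial of A is X − 2 (since A − 2I = 0), while the minimal polynomial of B is (X − 2)^2 (since B − 2I ≠ 0 but (B − 2I)^2 = 0).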
In the case ofK[X], thequotient ringby an ideal can be built, as in the general case, as a set ofequivalence classes. However, as each equivalence class contains exactly one polynomial of minimal degree, another construction is often more convenient.
Given a polynomial p of degree d, the quotient ring of K[X] by the ideal generated by p can be identified with the vector space of the polynomials of degrees less than d, with the "multiplication modulo p" as a multiplication, the multiplication modulo p consisting of the remainder under the division by p of the (usual) product of polynomials. This quotient ring is variously denoted as K[X]/pK[X], K[X]/⟨p⟩, K[X]/(p), or simply K[X]/p.
The ring K[X]/(p) is a field if and only if p is an irreducible polynomial. In fact, if p is irreducible, every nonzero polynomial q of lower degree is coprime with p, and Bézout's identity allows computing r and s such that sp + qr = 1; so, r is the multiplicative inverse of q modulo p. Conversely, if p is reducible, then there exist polynomials a, b of degrees lower than deg(p) such that ab = p; so a, b are nonzero zero divisors modulo p, and cannot be invertible.
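A small worked example of such an inverse computation, over the field with two elements: in F_2[X], the polynomial p = X^2 + X + 1 is irreducible, and
X (X + 1) = X^2 + X ≡ 1 (mod X^2 + X + 1),
so in the four-element field F_2[X]/(X^2 + X + 1) the inverse of the class of X is the class of X + 1; Bézout's identity here reads (X + 1)·X + 1·p = 1, the coefficients being taken modulo 2.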
For example, the standard definition of the field of the complex numbers can be summarized by saying that it is the quotient ring
C = R[X]/(X^2 + 1),
and that the image of X in C is denoted by i. In fact, by the above description, this quotient consists of all polynomials of degree one in i, which have the form a + bi, with a and b in R. The remainder of the Euclidean division that is needed for multiplying two elements of the quotient ring is obtained by replacing i^2 by −1 in their product as polynomials (this is exactly the usual definition of the product of complex numbers).
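As a concrete check, multiplying two classes a + bX and c + dX modulo X^2 + 1 reproduces the usual rule for multiplying complex numbers:
(a + bX)(c + dX) = ac + (ad + bc)X + bdX^2 ≡ (ac − bd) + (ad + bc)X (mod X^2 + 1).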
Let θ be an algebraic element in a K-algebra A. By algebraic, one means that θ has a minimal polynomial p. The first ring isomorphism theorem asserts that the substitution homomorphism induces an isomorphism of K[X]/(p) onto the image K[θ] of the substitution homomorphism. In particular, if A is a simple extension of K generated by θ, this allows identifying A and K[X]/(p). This identification is widely used in algebraic number theory.
The structure theorem for finitely generated modules over a principal ideal domain applies to K[X], when K is a field. This means that every finitely generated module over K[X] may be decomposed into a direct sum of a free module and finitely many modules of the form K[X]/⟨P^k⟩, where P is an irreducible polynomial over K and k a positive integer.
Given n symbols X_1, …, X_n, called indeterminates, a monomial (also called power product)
X_1^{α_1} X_2^{α_2} ⋯ X_n^{α_n}
is a formal product of these indeterminates, possibly raised to a nonnegative power. As usual, exponents equal to one and factors with a zero exponent can be omitted. In particular, X_1^0 ⋯ X_n^0 = 1.
The tuple of exponents α = (α_1, …, α_n) is called the multidegree or exponent vector of the monomial. For a less cumbersome notation, the abbreviation
X^α = X_1^{α_1} X_2^{α_2} ⋯ X_n^{α_n}
is often used. The degree of a monomial X^α, frequently denoted deg α or |α|, is the sum of its exponents:
deg α = α_1 + α_2 + ⋯ + α_n.
A polynomial in these indeterminates, with coefficients in a field K, or more generally a ring, is a finite linear combination of monomials
p = ∑_α p_α X^α
with coefficients in K. The degree of a nonzero polynomial is the maximum of the degrees of its monomials with nonzero coefficients.
The set of polynomials in X_1, …, X_n, denoted K[X_1, …, X_n], is thus a vector space (or a free module, if K is a ring) that has the monomials as a basis.
K[X_1, …, X_n] is naturally equipped (see below) with a multiplication that makes it a ring, and an associative algebra over K, called the polynomial ring in n indeterminates over K (the definite article the reflects that it is uniquely defined up to the name and the order of the indeterminates). If the ring K is commutative, K[X_1, …, X_n] is also a commutative ring.
Addition and scalar multiplication of polynomials are those of a vector space or free module equipped with a specific basis (here the basis of the monomials). Explicitly, let
p = ∑_{α∈I} p_α X^α,  q = ∑_{β∈J} q_β X^β,
where I and J are finite sets of exponent vectors.
The scalar multiplication of p and a scalar c ∈ K is
cp = ∑_{α∈I} c p_α X^α.
The addition of p and q is
p + q = ∑_{α∈I∪J} (p_α + q_α) X^α,
where p_α = 0 if α ∉ I, and q_β = 0 if β ∉ J. Moreover, if one has p_α + q_α = 0 for some α ∈ I ∩ J, the corresponding zero term is removed from the result.
The multiplication is
pq = ∑_{γ∈I+J} ( ∑_{α+β=γ} p_α q_β ) X^γ,
where I + J is the set of the sums of one exponent vector in I and one other in J (usual sum of vectors). In particular, the product of two monomials is a monomial whose exponent vector is the sum of the exponent vectors of the factors.
The verification of the axioms of anassociative algebrais straightforward.
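The following Python sketch (our own representation, not a library API) encodes a polynomial of K[X_1, …, X_n] with rational coefficients as a dictionary mapping exponent vectors to nonzero coefficients, exactly as in the definitions above.

from fractions import Fraction
from collections import defaultdict

def mpoly_add(p, q):
    r = defaultdict(Fraction, p)
    for exp, c in q.items():
        r[exp] += c
    return {e: c for e, c in r.items() if c != 0}   # drop the zero terms

def mpoly_mul(p, q):
    r = defaultdict(Fraction)
    for ea, ca in p.items():
        for eb, cb in q.items():
            # the exponent vector of a product of monomials is the sum of the exponent vectors
            r[tuple(i + j for i, j in zip(ea, eb))] += ca * cb
    return {e: c for e, c in r.items() if c != 0}

# In K[X, Y]: (X + Y)(X - Y) = X^2 - Y^2
p = {(1, 0): Fraction(1), (0, 1): Fraction(1)}     # X + Y
q = {(1, 0): Fraction(1), (0, 1): Fraction(-1)}    # X - Y
print(mpoly_mul(p, q))    # {(2, 0): Fraction(1, 1), (0, 2): Fraction(-1, 1)}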
Apolynomial expressionis anexpressionbuilt with scalars (elements ofK), indeterminates, and the operators of addition, multiplication, and exponentiation to nonnegative integer powers.
As all these operations are defined in K[X_1, …, X_n], a polynomial expression represents a polynomial, that is, an element of K[X_1, …, X_n]. The definition of a polynomial as a linear combination of monomials is a particular polynomial expression, which is often called the canonical form, normal form, or expanded form of the polynomial.
Given a polynomial expression, one can compute theexpandedform of the represented polynomial byexpandingwith thedistributive lawall the products that have a sum among their factors, and then usingcommutativity(except for the product of two scalars), andassociativityfor transforming the terms of the resulting sum into products of a scalar and a monomial; then one gets the canonical form by regrouping thelike terms.
The distinction between a polynomial expression and the polynomial that it represents is relatively recent, and mainly motivated by the rise ofcomputer algebra, where, for example, the test whether two polynomial expressions represent the same polynomial may be a nontrivial computation.
If K is a commutative ring, the polynomial ring K[X_1, …, X_n] has the following universal property: for every commutative K-algebra A, and every n-tuple (x_1, …, x_n) of elements of A, there is a unique algebra homomorphism from K[X_1, …, X_n] to A that maps each X_i to the corresponding x_i. This homomorphism is the evaluation homomorphism that consists in substituting X_i with x_i in every polynomial.
As it is the case for every universal property, this characterizes the pair (K[X_1, …, X_n], (X_1, …, X_n)) up to a unique isomorphism.
This may also be interpreted in terms of adjoint functors. More precisely, let SET and ALG be respectively the categories of sets and commutative K-algebras (here, and in the following, the morphisms are trivially defined). There is a forgetful functor F : ALG → SET that maps algebras to their underlying sets. On the other hand, the map X ↦ K[X] defines a functor POL : SET → ALG in the other direction. (If X is infinite, K[X] is the set of all polynomials in a finite number of elements of X.)
The universal property of the polynomial ring means that F and POL are adjoint functors. That is, there is a bijection
Hom_SET(X, F(A)) ≅ Hom_ALG(K[X], A)
for every set X and every commutative K-algebra A.
This may be expressed also by saying that polynomial rings arefree commutative algebras, since they arefree objectsin the category of commutative algebras. Similarly, a polynomial ring with integer coefficients is thefree commutative ringover its set of variables, since commutative rings and commutative algebras over the integers are the same thing.
Every polynomial ring is a graded ring: one can write the polynomial ring R = K[X_1, …, X_n] as a direct sum R = ⊕_{i=0}^∞ R_i, where R_i is the subspace consisting of all homogeneous polynomials of degree i (along with the zero polynomial); then for any elements f ∈ R_i and g ∈ R_j, their product fg belongs to R_{i+j}.
A polynomial in K[X_1, …, X_n] can be considered as a univariate polynomial in the indeterminate X_n over the ring K[X_1, …, X_{n−1}], by regrouping the terms that contain the same power of X_n, that is, by using the identity
p(X_1, …, X_n) = ∑_i p_i(X_1, …, X_{n−1}) X_n^i,  where each p_i is a polynomial in X_1, …, X_{n−1},
which results from the distributivity and associativity of ring operations.
This means that one has an algebra isomorphism
K[X_1, …, X_n] ≅ (K[X_1, …, X_{n−1}])[X_n]
that maps each indeterminate to itself. (This isomorphism is often written as an equality, which is justified by the fact that polynomial rings are defined up to a unique isomorphism.)
In other words, a multivariate polynomial ring can be considered as a univariate polynomial over a smaller polynomial ring. This is commonly used for proving properties of multivariate polynomial rings, byinductionon the number of indeterminates.
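For example, in K[X, Y], regrouping by powers of Y gives
X^2Y^2 + XY^2 + X^3 + 1 = (X^2 + X)Y^2 + (X^3 + 1),
a polynomial of degree 2 in Y whose coefficients X^2 + X and X^3 + 1 lie in K[X].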
The main such properties are listed below.
In this section, R is a commutative ring, K is a field, X denotes a single indeterminate, and, as usual, Z is the ring of integers. Among the main ring properties that remain true when passing from R to R[X]: if R is an integral domain, then so is R[X]; if R is a unique factorization domain, then so is R[X]; and if R is Noetherian, then so is R[X] (Hilbert's basis theorem).
Polynomial rings in several variables over a field are fundamental ininvariant theoryandalgebraic geometry. Some of their properties, such as those described above can be reduced to the case of a single indeterminate, but this is not always the case. In particular, because of the geometric applications, many interesting properties must be invariant underaffineorprojectivetransformations of the indeterminates. This often implies that one cannot select one of the indeterminates for a recurrence on the indeterminates.
Bézout's theorem,Hilbert's NullstellensatzandJacobian conjectureare among the most famous properties that are specific to multivariate polynomials over a field.
The Nullstellensatz (German for "zero-locus theorem") is a theorem, first proved by David Hilbert, which extends to the multivariate case some aspects of the fundamental theorem of algebra. It is foundational for algebraic geometry, as establishing a strong link between the algebraic properties of K[X_1, …, X_n] and the geometric properties of algebraic varieties, which are (roughly speaking) sets of points defined by implicit polynomial equations.
The Nullstellensatz has three main versions, each being a corollary of any other. Two of these versions are given below. For the third version, the reader is referred to the main article on the Nullstellensatz.
The first version generalizes the fact that a nonzero univariate polynomial has a complex zero if and only if it is not a constant. The statement is: a set of polynomials S in K[X_1, …, X_n] has a common zero in an algebraically closed field containing K, if and only if 1 does not belong to the ideal generated by S, that is, if 1 is not a linear combination of elements of S with polynomial coefficients.
The second version generalizes the fact that the irreducible univariate polynomials over the complex numbers are associate to a polynomial of the form X − α. The statement is: If K is algebraically closed, then the maximal ideals of K[X_1, …, X_n] have the form ⟨X_1 − α_1, …, X_n − α_n⟩.
Bézout's theorem may be viewed as a multivariate generalization of the version of thefundamental theorem of algebrathat asserts that a univariate polynomial of degreenhasncomplex roots, if they are counted with their multiplicities.
In the case ofbivariate polynomials, it states that two polynomials of degreesdandein two variables, which have no common factors of positive degree, have exactlydecommon zeros in analgebraically closed fieldcontaining the coefficients, if the zeros are counted with their multiplicity and include thezeros at infinity.
For stating the general case, and not considering "zero at infinity" as special zeros, it is convenient to work with homogeneous polynomials, and consider zeros in a projective space. In this context, a projective zero of a homogeneous polynomial P(X_0, …, X_n) is, up to a scaling, an (n + 1)-tuple (x_0, …, x_n) of elements of K that is different from (0, …, 0), and such that P(x_0, …, x_n) = 0. Here, "up to a scaling" means that (x_0, …, x_n) and (λx_0, …, λx_n) are considered as the same zero for any nonzero λ ∈ K. In other words, a zero is a set of homogeneous coordinates of a point in a projective space of dimension n.
Then, Bézout's theorem states: Given n homogeneous polynomials of degrees d_1, …, d_n in n + 1 indeterminates, which have only a finite number of common projective zeros in an algebraically closed extension of K, the sum of the multiplicities of these zeros is the product d_1 ⋯ d_n.
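A small sanity check with n = 2: in the projective plane, the line Y = 0 (degree 1) and the conic YZ − X^2 = 0 (degree 2) have, after substituting Y = 0, the single common zero X = 0, that is, the point (0 : 0 : 1), with intersection multiplicity 2 = 1 · 2, as the theorem predicts.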
Polynomial rings can be generalized in a great many ways, including polynomial rings with generalized exponents, power series rings,noncommutative polynomial rings,skew polynomial rings, and polynomialrigs.
One slight generalization of polynomial rings is to allow for infinitely many indeterminates. Each monomial still involves only a finite number of indeterminates (so that its degree remains finite), and each polynomial is still a (finite) linear combination of monomials. Thus, any individual polynomial involves only finitely many indeterminates, and any finite computation involving polynomials remains inside some subring of polynomials in finitely many indeterminates. This generalization has the same property as usual polynomial rings of being the free commutative algebra; the only difference is that it is a free object over an infinite set.
One can also consider a strictly larger ring, by defining as a generalized polynomial an infinite (or finite) formal sum of monomials with a bounded degree. This ring is larger than the usual polynomial ring, as it includes infinite sums of variables. However, it is smaller than thering of power series in infinitely many variables. Such a ring is used for constructing thering of symmetric functionsover an infinite set.
A simple generalization only changes the set from which the exponents on the variable are drawn. The formulas for addition and multiplication make sense as long as one can add exponents:Xi⋅Xj=Xi+j. A set for which addition makes sense (is closed and associative) is called amonoid. The set of functions from a monoidNto a ringRwhich are nonzero at only finitely many places can be given the structure of a ring known asR[N], themonoid ringofNwith coefficients inR. The addition is defined component-wise, so that ifc=a+b, thencn=an+bnfor everyninN. The multiplication is defined as the Cauchy product, so that ifc=a⋅b, then for eachninN,cnis the sum of allaibjwherei,jrange over all pairs of elements ofNwhich sum ton.
When N is commutative, it is convenient to denote the function a in R[N] as the formal sum
∑_{n∈N} a_n X^n,
and then the formulas for addition and multiplication are the familiar
(∑_{n∈N} a_n X^n) + (∑_{n∈N} b_n X^n) = ∑_{n∈N} (a_n + b_n) X^n
and
(∑_{n∈N} a_n X^n) ⋅ (∑_{n∈N} b_n X^n) = ∑_{n∈N} c_n X^n,  with c_n = ∑ a_i b_j,
where the latter sum is taken over all i, j in N that sum to n.
Some authors such as (Lang 2002, II,§3) go so far as to take this monoid definition as the starting point, and regular single variable polynomials are the special case whereNis the monoid of non-negative integers. Polynomials in several variables simply takeNto be the direct product of several copies of the monoid of non-negative integers.
Several interesting examples of rings and groups are formed by takingNto be the additive monoid of non-negative rational numbers, (Osborne 2000, §4.4). See alsoPuiseux series.
Power series generalize the choice of exponent in a different direction by allowing infinitely many nonzero terms. This requires various hypotheses on the monoidNused for the exponents, to ensure that the sums in the Cauchy product are finite sums. Alternatively, a topology can be placed on the ring, and then one restricts to convergent infinite sums. For the standard choice ofN, the non-negative integers, there is no trouble, and the ring of formal power series is defined as the set of functions fromNto a ringRwith addition component-wise, and multiplication given by the Cauchy product. The ring of power series can also be seen as thering completionof the polynomial ring with respect to the ideal generated byx.
For polynomial rings of more than one variable, the productsX⋅YandY⋅Xare simply defined to be equal. A more general notion of polynomial ring is obtained when the distinction between these two formal products is maintained. Formally, the polynomial ring innnoncommuting variables with coefficients in the ringRis themonoid ringR[N], where the monoidNis thefree monoidonnletters, also known as the set of all strings over an alphabet ofnsymbols, with multiplication given by concatenation. Neither the coefficients nor the variables need commute amongst themselves, but the coefficients and variables commute with each other.
Just as the polynomial ring innvariables with coefficients in the commutative ringRis the free commutativeR-algebra of rankn, the noncommutative polynomial ring innvariables with coefficients in the commutative ringRis the free associative, unitalR-algebra onngenerators, which is noncommutative whenn> 1.
Other generalizations of polynomials are differential and skew-polynomial rings.
A differential polynomial ring is a ring of differential operators formed from a ring R and a derivation δ of R into R. This derivation operates on R, and will be denoted X, when viewed as an operator. The elements of R also operate on R by multiplication. The composition of operators is denoted as the usual multiplication. It follows that the relation δ(ab) = aδ(b) + δ(a)b may be rewritten, as an identity of operators, as
X ⋅ a = a ⋅ X + δ(a).
This relation may be extended to define a skew multiplication between two polynomials in X with coefficients in R, which makes them a noncommutative ring.
The standard example, called a Weyl algebra, takes R to be a (usual) polynomial ring k[Y], and δ to be the standard polynomial derivative ∂/∂Y. Taking a = Y in the above relation, one gets the canonical commutation relation, X ⋅ Y − Y ⋅ X = 1. Extending this relation by associativity and distributivity allows explicitly constructing the Weyl algebra. (Lam 2001, §1, ex. 1.9).
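One can check the commutation relation directly on k[Y]: for a polynomial f,
(X ⋅ Y)(f) = ∂(Yf)/∂Y = f + Y ∂f/∂Y = (1 + Y ⋅ X)(f),
so X ⋅ Y − Y ⋅ X acts as the identity, which is exactly the relation X ⋅ Y − Y ⋅ X = 1 above.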
The skew-polynomial ring is defined similarly for a ring R and a ring endomorphism f of R, by extending the multiplication from the relation X ⋅ r = f(r) ⋅ X to produce an associative multiplication that distributes over the standard addition. More generally, given a homomorphism F from the monoid N of the positive integers into the endomorphism ring of R, the formula X^n ⋅ r = F(n)(r) ⋅ X^n allows constructing a skew-polynomial ring. (Lam 2001, §1, ex. 1.11) Skew polynomial rings are closely related to crossed product algebras.
The definition of a polynomial ring can be generalised by relaxing the requirement that the algebraic structureRbe afieldor aringto the requirement thatRonly be asemifieldorrig; the resulting polynomial structure/extensionR[X] is apolynomial rig. For example, the set of all multivariate polynomials withnatural numbercoefficients is a polynomial rig.
|
https://en.wikipedia.org/wiki/Polynomial_ring
|
Inmathematics, the concept of aninverse elementgeneralises the concepts ofopposite(−x) andreciprocal(1/x) of numbers.
Given anoperationdenoted here∗, and anidentity elementdenotede, ifx∗y=e, one says thatxis aleft inverseofy, and thatyis aright inverseofx. (An identity element is an element such thatx*e=xande*y=yfor allxandyfor which the left-hand sides are defined.[1])
When the operation∗isassociative, if an elementxhas both a left inverse and a right inverse, then these two inverses are equal and unique; they are called theinverse elementor simply theinverse. Often an adjective is added for specifying the operation, such as inadditive inverse,multiplicative inverse, andfunctional inverse. In this case (associative operation), aninvertible elementis an element that has an inverse. In aring, aninvertible element, also called aunit, is an element that is invertible under multiplication (this is not ambiguous, as every element is invertible under addition).
Inverses are commonly used ingroups—where every element is invertible, andrings—where invertible elements are also calledunits. They are also commonly used for operations that are not defined for all possible operands, such asinverse matricesandinverse functions. This has been generalized tocategory theory, where, by definition, anisomorphismis an invertiblemorphism.
The word 'inverse' is derived from Latin: inversus, which means 'turned upside down', 'overturned'. This may take its origin from the case of fractions, where the (multiplicative) inverse is obtained by exchanging the numerator and the denominator (the inverse of x/y is y/x).
The concepts ofinverse elementandinvertible elementare commonly defined forbinary operationsthat are everywhere defined (that is, the operation is defined for any two elements of itsdomain). However, these concepts are also commonly used withpartial operations, that is operations that are not defined everywhere. Common examples arematrix multiplication,function compositionand composition ofmorphismsin acategory. It follows that the common definitions ofassociativityandidentity elementmust be extended to partial operations; this is the object of the first subsections.
In this section, X is a set (possibly a proper class) on which a partial operation (possibly total) is defined, which is denoted with ∗.
A partial operation is associative if
(x ∗ y) ∗ z = x ∗ (y ∗ z)
for every x, y, z in X for which one of the members of the equality is defined; the equality means that the other member of the equality must also be defined.
Examples of non-total associative operations aremultiplication of matricesof arbitrary size, andfunction composition.
Let ∗ be a possibly partial associative operation on a set X.
An identity element, or simply an identity, is an element e such that
x ∗ e = x and e ∗ y = y
for every x and y for which the left-hand sides of the equalities are defined.
If e and f are two identity elements such that e ∗ f is defined, then e = f. (This results immediately from the definition, by e = e ∗ f = f.)
It follows that a total operation has at most one identity element, and if e and f are different identities, then e ∗ f is not defined.
For example, in the case ofmatrix multiplication, there is onen×nidentity matrixfor every positive integern, and two identity matrices of different size cannot be multiplied together.
Similarly, identity functions are identity elements for function composition, and the composition of the identity functions of two different sets is not defined.
If x ∗ y = e, where e is an identity element, one says that x is a left inverse of y, and y is a right inverse of x.
Left and right inverses do not always exist, even when the operation is total and associative. For example, addition is a total associative operation onnonnegative integers, which has0asadditive identity, and0is the only element that has anadditive inverse. This lack of inverses is the main motivation for extending thenatural numbersinto the integers.
An element can have several left inverses and several right inverses, even when the operation is total and associative. For example, consider the functions from the integers to the integers. The doubling function x ↦ 2x has infinitely many left inverses under function composition, which are the functions that divide by two the even numbers, and give any value to odd numbers. Similarly, every function that maps n to either 2n or 2n + 1 is a right inverse of the function n ↦ ⌊n/2⌋, the floor function that maps n to n/2 or (n − 1)/2, depending on whether n is even or odd.
More generally, a function has a left inverse forfunction compositionif and only if it isinjective, and it has a right inverse if and only if it issurjective.
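A tiny Python sketch (function names are ours) of a one-sided inverse under function composition, using the doubling function on the integers discussed above:

def double(n: int) -> int:
    return 2 * n

def halve_floor(n: int) -> int:
    return n // 2          # floor division: one of the many left inverses of double

# Left inverse: halve_floor(double(n)) == n for every integer n ...
for n in range(-5, 6):
    assert halve_floor(double(n)) == n

# ... but not a right inverse: double is not surjective, so e.g. 3 is never reached.
assert double(halve_floor(3)) != 3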
Incategory theory, right inverses are also calledsections, and left inverses are calledretractions.
An element isinvertibleunder an operation if it has a left inverse and a right inverse.
In the common case where the operation is associative, the left and right inverse of an element are equal and unique. Indeed, if l and r are respectively a left inverse and a right inverse of x, then
l = l ∗ e = l ∗ (x ∗ r) = (l ∗ x) ∗ r = e ∗ r = r.
The inverse of an invertible element is its unique left or right inverse.
If the operation is denoted as an addition, the inverse, or additive inverse, of an element x is denoted −x. Otherwise, the inverse of x is generally denoted x^{−1}, or, in the case of a commutative multiplication, 1/x. When there may be a confusion between several operations, the symbol of the operation may be added before the exponent, such as in x^{∗−1}. The notation f^{∘−1} is not commonly used for function composition, since 1/f can be used for the multiplicative inverse.
If x and y are invertible, and x ∗ y is defined, then x ∗ y is invertible, and its inverse is y^{−1} ∗ x^{−1}.
An invertiblehomomorphismis called anisomorphism. Incategory theory, an invertiblemorphismis also called anisomorphism.
Agroupis asetwith anassociative operationthat has an identity element, and for which every element has an inverse.
Thus, the inverse is afunctionfrom the group to itself that may also be considered as an operation ofarityone. It is also aninvolution, since the inverse of the inverse of an element is the element itself.
A group may act on a set as transformations of this set. In this case, the inverse g^{−1} of a group element g defines a transformation that is the inverse of the transformation defined by g, that is, the transformation that "undoes" the transformation defined by g.
For example, theRubik's cube grouprepresents the finite sequences of elementary moves. The inverse of such a sequence is obtained by applying the inverse of each move in the reverse order.
Amonoidis a set with anassociative operationthat has anidentity element.
The invertible elements in a monoid form a group under the monoid operation.
Aringis a monoid for ring multiplication. In this case, the invertible elements are also calledunitsand form thegroup of unitsof the ring.
If a monoid is notcommutative, there may exist non-invertible elements that have a left inverse or a right inverse (not both, as, otherwise, the element would be invertible).
For example, the set of thefunctionsfrom a set to itself is a monoid underfunction composition. In this monoid, the invertible elements are thebijective functions; the elements that have left inverses are theinjective functions, and those that have right inverses are thesurjective functions.
Given a monoid, one may want to extend it by adding inverses to some elements. This is generally impossible for non-commutative monoids, but, in a commutative monoid, it is possible to add inverses to the elements that have the cancellation property (an element x has the cancellation property if xy = xz implies y = z, and yx = zx implies y = z). This extension of a monoid is allowed by the Grothendieck group construction. This is the method that is commonly used for constructing the integers from the natural numbers, the rational numbers from the integers and, more generally, the field of fractions of an integral domain, and localizations of commutative rings.
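For example, the construction of the integers from the natural numbers fits this pattern:
Z ≅ (N × N)/∼,  where (a, b) ∼ (c, d) if and only if a + d = b + c,
a pair (a, b) standing for the difference a − b, with componentwise addition and −[(a, b)] = [(b, a)].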
Aringis analgebraic structurewith two operations,additionandmultiplication, which are denoted as the usual operations on numbers.
Under addition, a ring is anabelian group, which means that addition iscommutativeandassociative; it has an identity, called theadditive identity, and denoted0; and every elementxhas an inverse, called itsadditive inverseand denoted−x. Because of commutativity, the concepts of left and right inverses are meaningless since they do not differ from inverses.
Under multiplication, a ring is a monoid; this means that multiplication is associative and has an identity called the multiplicative identity and denoted 1. An invertible element for multiplication is called a unit. The inverse or multiplicative inverse (for avoiding confusion with additive inverses) of a unit x is denoted x^{−1}, or, when the multiplication is commutative, 1/x.
The additive identity0is never a unit, except when the ring is thezero ring, which has0as its unique element.
If0is the only non-unit, the ring is afieldif the multiplication is commutative, or adivision ringotherwise.
In anoncommutative ring(that is, a ring whose multiplication is not commutative), a non-invertible element may have one or several left or right inverses. This is, for example, the case of thelinear functionsfrom aninfinite-dimensional vector spaceto itself.
Acommutative ring(that is, a ring whose multiplication is commutative) may be extended by adding inverses to elements that are notzero divisors(that is, their product with a nonzero element cannot be0). This is the process oflocalization, which produces, in particular, the field ofrational numbersfrom the ring of integers, and, more generally, thefield of fractionsof anintegral domain. Localization is also used with zero divisors, but, in this case the original ring is not asubringof the localisation; instead, it is mapped non-injectively to the localization.
Matrix multiplicationis commonly defined formatricesover afield, and straightforwardly extended to matrices overrings,rngsandsemirings. However,in this section, only matrices over acommutative ringare considered, because of the use of the concept ofrankanddeterminant.
IfAis am×nmatrix (that is, a matrix withmrows andncolumns), andBis ap×qmatrix, the productABis defined ifn=p, and only in this case. Anidentity matrix, that is, an identity element for matrix multiplication, is asquare matrix(same number of rows and columns) whose entries on themain diagonalare all equal to1, and all other entries are0.
Aninvertible matrixis an invertible element under matrix multiplication. A matrix over a commutative ringRis invertible if and only if its determinant is aunitinR(that is, is invertible inR). In this case, itsinverse matrixcan be computed withCramer's rule.
IfRis a field, the determinant is invertible if and only if it is not zero. As the case of fields is more common, one often sees invertible matrices defined as matrices with a nonzero determinant, but this is incorrect over rings.
In the case ofinteger matrices(that is, matrices with integer entries), an invertible matrix is a matrix that has an inverse that is also an integer matrix. Such a matrix is called aunimodular matrixfor distinguishing it from matrices that are invertible over thereal numbers. A square integer matrix is unimodular if and only if its determinant is1or−1, since these two numbers are the only units in the ring of integers.
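A small Python sketch (an illustration, not from the article) of the criterion above: an integer matrix is invertible over the integers, i.e. unimodular, exactly when its determinant is a unit of the ring of integers, that is, 1 or −1. The helper names are ad hoc.

def det(m):
    """Determinant by cofactor expansion along the first row, in exact integer arithmetic."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def is_unimodular(m):
    """Invertible over the integers iff the determinant is a unit of the ring, i.e. 1 or -1."""
    return det(m) in (1, -1)

print(is_unimodular([[2, 1], [1, 1]]))   # True  (determinant 1)
print(is_unimodular([[2, 0], [0, 2]]))   # False (determinant 4: invertible over Q, not over Z)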
A matrix has a left inverse if and only if its rank equals its number of columns. This left inverse is not unique except for square matrices where the left inverse equals the inverse matrix. Similarly, a right inverse exists if and only if the rank equals the number of rows; it is not unique in the case of a rectangular matrix, and equals the inverse matrix in the case of a square matrix.
Compositionis apartial operationthat generalizes tohomomorphismsofalgebraic structuresandmorphismsofcategoriesinto operations that are also calledcomposition, and share many properties with function composition.
In all these cases, composition isassociative.
Iff:X→Y{\displaystyle f\colon X\to Y}andg:Y′→Z,{\displaystyle g\colon Y'\to Z,}the compositiong∘f{\displaystyle g\circ f}is defined if and only ifY′=Y{\displaystyle Y'=Y}or, in the function and homomorphism cases,Y⊂Y′.{\displaystyle Y\subset Y'.}In the function and homomorphism cases, this means that thecodomainoff{\displaystyle f}equals or is included in thedomainofg. In the morphism case, this means that thecodomainoff{\displaystyle f}equals thedomainofg.
There is anidentityidX:X→X{\displaystyle \operatorname {id} _{X}\colon X\to X}for every objectX(set, algebraic structure orobject), which is called also anidentity functionin the function case.
A function is invertible if and only if it is abijection. An invertible homomorphism or morphism is called anisomorphism. A homomorphism of algebraic structures is an isomorphism if and only if it is a bijection. The inverse of a bijection is called aninverse function. In the other cases, one talks ofinverse isomorphisms.
A function has a left inverse or a right inverse if and only if it isinjectiveorsurjective, respectively. A homomorphism of algebraic structures that has a left inverse or a right inverse is respectively injective or surjective, but the converse is not true in some algebraic structures. For example, the converse is true forvector spacesbut not formodulesover a ring: a homomorphism of modules that has a left inverse or a right inverse is called respectively asplit epimorphismor asplit monomorphism. This terminology is also used for morphisms in any category.
LetS{\displaystyle S}be a unitalmagma, that is, asetwith abinary operation∗{\displaystyle *}and anidentity elemente∈S{\displaystyle e\in S}. If, fora,b∈S{\displaystyle a,b\in S}, we havea∗b=e{\displaystyle a*b=e}, thena{\displaystyle a}is called aleft inverseofb{\displaystyle b}andb{\displaystyle b}is called aright inverseofa{\displaystyle a}. If an elementx{\displaystyle x}is both a left inverse and a right inverse ofy{\displaystyle y}, thenx{\displaystyle x}is called atwo-sided inverse, or simply aninverse, ofy{\displaystyle y}. An element with a two-sided inverse inS{\displaystyle S}is calledinvertibleinS{\displaystyle S}. An element with an inverse element only on one side isleft invertibleorright invertible.
Elements of a unital magma(S,∗){\displaystyle (S,*)}may have multiple left, right or two-sided inverses. For example, in the magma given by the Cayley table
the elements 2 and 3 each have two two-sided inverses.
A unital magma in which all elements are invertible need not be aloop. For example, in the magma(S,∗){\displaystyle (S,*)}given by theCayley table
every element has a unique two-sided inverse (namely itself), but(S,∗){\displaystyle (S,*)}is not a loop because the Cayley table is not aLatin square.
Similarly, a loop need not have two-sided inverses. For example, in the loop given by the Cayley table
the only element with a two-sided inverse is the identity element 1.
If the operation∗{\displaystyle *}isassociativethen if an element has both a left inverse and a right inverse, they are equal. In other words, in amonoid(an associative unital magma) every element has at most one inverse (as defined in this section). In a monoid, the set of invertible elements is agroup, called thegroup of unitsofS{\displaystyle S}, and denoted byU(S){\displaystyle U(S)}orH1.
The definition in the previous section generalizes the notion of inverse in a group relative to the notion of identity. It is also possible, albeit less obvious, to generalize the notion of an inverse by dropping the identity element but keeping associativity; that is, in asemigroup.
In a semigroupSan elementxis called(von Neumann) regularif there exists some elementzinSsuch thatxzx=x;zis sometimes called apseudoinverse. An elementyis called (simply) aninverseofxifxyx=xandy=yxy. Every regular element has at least one inverse: ifx=xzxthen it is easy to verify thaty=zxzis an inverse ofxas defined in this section. Another easy-to-prove fact: ifyis an inverse ofxthene=xyandf=yxareidempotents, that isee=eandff=f. Thus, every pair of (mutually) inverse elements gives rise to two idempotents, andex=xf=x,ye=fy=y, andeacts as a left identity onx, whilefacts as a right identity, and the left/right roles are reversed fory. This simple observation can be generalized usingGreen's relations: every idempotentein an arbitrary semigroup is a left identity forReand right identity forLe.[2]An intuitive description of this fact is that every pair of mutually inverse elements produces a local left identity, and respectively, a local right identity.
In a monoid, the notion of inverse as defined in the previous section is strictly narrower than the definition given in this section. Only elements in the Green classH1have an inverse from the unital magma perspective, whereas for any idempotente, the elements ofHehave an inverse as defined in this section. Under this more general definition, inverses need not be unique (or exist) in an arbitrary semigroup or monoid. If all elements are regular, then the semigroup (or monoid) is called regular, and every element has at least one inverse. If every element has exactly one inverse as defined in this section, then the semigroup is called aninverse semigroup. Finally, an inverse semigroup with only one idempotent is a group. An inverse semigroup may have anabsorbing element0 because 000 = 0, whereas a group may not.
Outside semigroup theory, a unique inverse as defined in this section is sometimes called aquasi-inverse. This is generally justified because in most applications (for example, all examples in this article) associativity holds, which makes this notion a generalization of the left/right inverse relative to an identity (seeGeneralized inverse).
A natural generalization of the inverse semigroup is to define an (arbitrary) unary operation ° such that (a°)° =afor allainS; this endowsSwith a type ⟨2,1⟩ algebra. A semigroup endowed with such an operation is called aU-semigroup. Although it may seem thata° will be the inverse ofa, this is not necessarily the case. In order to obtain interesting notion(s), the unary operation must somehow interact with the semigroup operation. Two classes ofU-semigroups have been studied:[3]
Clearly a group is both anI-semigroup and a *-semigroup. A class of semigroups important in semigroup theory arecompletely regular semigroups; these areI-semigroups in which one additionally hasaa° =a°a; in other words, every element has a commuting pseudoinversea°. There are few concrete examples of such semigroups however; most arecompletely simple semigroups. In contrast, a subclass of *-semigroups, the*-regular semigroups(in the sense of Drazin), yield one of the best-known examples of a (unique) pseudoinverse, theMoore–Penrose inverse. In this case, however, the involutiona* is not the pseudoinverse. Rather, the pseudoinverse ofxis the unique elementysuch thatxyx=x,yxy=y, (xy)* =xy, (yx)* =yx. Since *-regular semigroups generalize inverse semigroups, the unique element defined this way in a *-regular semigroup is called thegeneralized inverseorMoore–Penrose inverse.
All examples in this section involve associative operators.
The lower and upper adjoints in a (monotone)Galois connection,LandGare quasi-inverses of each other; that is,LGL=LandGLG=Gand one uniquely determines the other. They are not left or right inverses of each other however.
Asquare matrixM{\displaystyle M}with entries in afieldK{\displaystyle K}is invertible (in the set of all square matrices of the same size, undermatrix multiplication) if and only if itsdeterminantis different from zero. If the determinant ofM{\displaystyle M}is zero, it is impossible for it to have a one-sided inverse; therefore a left inverse or right inverse implies the existence of the other one. Seeinvertible matrixfor more.
More generally, a square matrix over acommutative ringR{\displaystyle R}is invertibleif and only ifits determinant is invertible inR{\displaystyle R}.
Non-square matrices offull rankhave several one-sided inverses:[4]
The left inverse can be used to determine theleast squaressolution ofAx=b{\displaystyle Ax=b}, which is also the formula used in linearregression, and is given byx=(ATA)−1ATb.{\displaystyle x=\left(A^{\text{T}}A\right)^{-1}A^{\text{T}}b.}
Norank deficientmatrix has any (even one-sided) inverse. However, the Moore–Penrose inverse exists for all matrices, and coincides with the left or right (or true) inverse when it exists.
As an example of matrix inverses, consider:
So, asm<n, we have a right inverse,Aright−1=AT(AAT)−1.{\displaystyle A_{\text{right}}^{-1}=A^{\text{T}}\left(AA^{\text{T}}\right)^{-1}.}By components it is computed as
The left inverse doesn't exist, because
which is asingular matrix, and cannot be inverted.
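Since the matrix of the worked example is not reproduced here, the following NumPy sketch uses an assumed 2×3 full-rank matrix to illustrate the same point: a wide matrix with more columns than rows has a right inverse A^T(AA^T)^{-1} but no left inverse, because A^T A is singular.

import numpy as np

A = np.array([[1., 0., 1.],
              [0., 1., 1.]])                    # 2x3, full row rank (m < n)

A_right = A.T @ np.linalg.inv(A @ A.T)          # right inverse
print(np.allclose(A @ A_right, np.eye(2)))      # True: A @ A_right is the 2x2 identity

print(np.linalg.matrix_rank(A.T @ A))           # 2 < 3, so A.T @ A is singular
                                                # and no left inverse exists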
|
https://en.wikipedia.org/wiki/Invertible_element#Definition_of_invertible_elements_in_modular_arithmetic
|
Innumber theory, twointegersaandbarecoprime,relatively primeormutually primeif the only positive integer that is adivisorof both of them is 1.[1]Consequently, anyprime numberthat dividesadoes not divideb, and vice versa. This is equivalent to theirgreatest common divisor(GCD) being 1.[2]One says alsoais prime toborais coprime withb.
The numbers 8 and 9 are coprime, despite the fact that neither—considered individually—is a prime number, since 1 is their only common divisor. On the other hand, 6 and 9 are not coprime, because they are both divisible by 3. The numerator and denominator of areduced fractionare coprime, by definition.
When the integersaandbare coprime, the standard way of expressing this fact in mathematical notation is to indicate that their greatest common divisor is one, by the formulagcd(a,b) = 1or(a,b) = 1. In their 1989 textbookConcrete Mathematics,Ronald Graham,Donald Knuth, andOren Patashnikproposed an alternative notationa⊥b{\displaystyle a\perp b}to indicate thataandbare relatively prime and that the term "prime" be used instead of coprime (as inaisprimetob).[3]
A fast way to determine whether two numbers are coprime is given by theEuclidean algorithmand its faster variants such asbinary GCD algorithmorLehmer's GCD algorithm.
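For illustration (not part of the article), a short Python version of the Euclidean algorithm and the resulting coprimality test:

def gcd(a, b):
    """Euclidean algorithm: repeatedly replace (a, b) by (b, a mod b)."""
    while b:
        a, b = b, a % b
    return abs(a)

def coprime(a, b):
    return gcd(a, b) == 1

print(coprime(8, 9))   # True  -- neither number is prime, but they share no common factor
print(coprime(6, 9))   # False -- both are divisible by 3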
The number of integers coprime with a positive integern, between 1 andn, is given byEuler's totient function, also known as Euler's phi function,φ(n).
Asetof integers can also be called coprime if its elements share no common positive factor except 1. A stronger condition on a set of integers is pairwise coprime, which means thataandbare coprime for every pair(a,b)of different integers in the set. The set{2, 3, 4}is coprime, but it is not pairwise coprime since 2 and 4 are not relatively prime.
The numbers 1 and −1 are the only integers coprime with every integer, and they are the only integers that are coprime with 0.
A number of conditions are equivalent toaandbbeing coprime:
As a consequence of the third point, ifaandbare coprime andbr≡bs(moda), thenr≡s(moda).[5]That is, we may "divide byb" when working moduloa. Furthermore, ifb1,b2are both coprime witha, then so is their productb1b2(i.e., moduloait is a product of invertible elements, and therefore invertible);[6]this also follows from the first point byEuclid's lemma, which states that if a prime numberpdivides a productbc, thenpdivides at least one of the factorsb, c.
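A small Python illustration of "dividing by b modulo a" when gcd(a, b) = 1; it relies on the built-in pow accepting a −1 exponent together with a modulus (Python 3.8 or later), and the particular numbers are arbitrary.

a, b = 26, 7                       # gcd(26, 7) = 1, so 7 is invertible modulo 26
b_inv = pow(b, -1, a)              # modular inverse of b; here 15, since 7 * 15 = 105 = 4*26 + 1
print((b * b_inv) % a)             # 1

r, s = 10, 36                      # r and s are congruent modulo 26
assert (b * r) % a == (b * s) % a  # so b*r ≡ b*s (mod a) ...
assert (r - s) % a == 0            # ... and "dividing by b" recovers r ≡ s (mod a)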
As a consequence of the first point, ifaandbare coprime, then so are any powersakandbm.
Ifaandbare coprime andadivides the productbc, thenadividesc.[7]This can be viewed as a generalization of Euclid's lemma.
The two integersaandbare coprime if and only if the point with coordinates(a,b)in aCartesian coordinate systemwould be "visible" via an unobstructed line of sight from the origin(0, 0), in the sense that there is no point with integer coordinates anywhere on the line segment between the origin and(a,b). (See figure 1.)
In a sense that can be made precise, theprobabilitythat two randomly chosen integers are coprime is6/π2, which is about 61% (see§ Probability of coprimality, below).
Twonatural numbersaandbare coprime if and only if the numbers2a− 1and2b− 1are coprime.[8]As a generalization of this, following easily from theEuclidean algorithminbasen> 1:
Asetof integersS={a1,a2,…,an}{\displaystyle S=\{a_{1},a_{2},\dots ,a_{n}\}}can also be calledcoprimeorsetwise coprimeif thegreatest common divisorof all the elements of the set is 1. For example, the integers 6, 10, 15 are coprime because 1 is the only positive integer that divides all of them.
If every pair in a set of integers is coprime, then the set is said to bepairwise coprime(orpairwise relatively prime,mutually coprimeormutually relatively prime). Pairwise coprimality is a stronger condition than setwise coprimality; every pairwise coprime finite set is also setwise coprime, but the reverse is not true. For example, the integers 4, 5, 6 are (setwise) coprime (because the only positive integer dividingallof them is 1), but they are notpairwisecoprime (becausegcd(4, 6) = 2).
The concept of pairwise coprimality is important as a hypothesis in many results in number theory, such as theChinese remainder theorem.
It is possible for aninfinite setof integers to be pairwise coprime. Notable examples include the set of all prime numbers, the set of elements inSylvester's sequence, and the set of allFermat numbers.
Given two randomly chosen integersaandb, it is reasonable to ask how likely it is thataandbare coprime. In this determination, it is convenient to use the characterization thataandbare coprime if and only if no prime number divides both of them (seeFundamental theorem of arithmetic).
Informally, the probability that any number is divisible by a prime (or in fact any integer)pis1p;{\displaystyle {\tfrac {1}{p}};}for example, every 7th integer is divisible by 7. Hence the probability that two numbers are both divisible bypis1p2,{\displaystyle {\tfrac {1}{p^{2}}},}and the probability that at least one of them is not is1−1p2.{\displaystyle 1-{\tfrac {1}{p^{2}}}.}Any finite collection of divisibility events associated to distinct primes is mutually independent. For example, in the case of two events, a number is divisible by primespandqif and only if it is divisible bypq; the latter event has probability1pq.{\displaystyle {\tfrac {1}{pq}}.}If one makes the heuristic assumption that such reasoning can be extended to infinitely many divisibility events, one is led to guess that the probability that two numbers are coprime is given by a product over all primes,
Hereζrefers to theRiemann zeta function, the identity relating the product over primes toζ(2)is an example of anEuler product, and the evaluation ofζ(2)asπ2/6is theBasel problem, solved byLeonhard Eulerin 1735.
There is no way to choose a positive integer at random so that each positive integer occurs with equal probability, but statements about "randomly chosen integers" such as the ones above can be formalized by using the notion ofnatural density. For each positive integerN, letPNbe the probability that two randomly chosen numbers in{1,2,…,N}{\displaystyle \{1,2,\ldots ,N\}}are coprime. AlthoughPNwill never equal6/π2exactly, with work[9]one can show that in the limit asN→∞,{\displaystyle N\to \infty ,}the probabilityPNapproaches6/π2.
More generally, the probability ofkrandomly chosen integers being setwise coprime is1ζ(k).{\displaystyle {\tfrac {1}{\zeta (k)}}.}
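The 6/π² figure is easy to check numerically; the following Python snippet (an illustration, not from the article) counts coprime pairs among the first N positive integers and compares the proportion with 1/ζ(2) = 6/π².

from math import gcd, pi

N = 1000
coprime_pairs = sum(1 for a in range(1, N + 1)
                      for b in range(1, N + 1)
                      if gcd(a, b) == 1)
print(coprime_pairs / N**2)   # approximately 0.608
print(6 / pi**2)              # 0.6079271018540267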
All pairs of positive coprime numbers(m,n)(withm>n) can be arranged in two disjoint completeternary trees, one tree starting from(2, 1)(for even–odd and odd–even pairs),[10]and the other tree starting from(3, 1)(for odd–odd pairs).[11]The children of each vertex(m,n)are generated as follows:
This scheme is exhaustive and non-redundant with no invalid members. This can be proved by remarking that, if(a,b){\displaystyle (a,b)}is a coprime pair witha>b,{\displaystyle a>b,}then
In all cases(m,n){\displaystyle (m,n)}is a "smaller" coprime pair withm>n.{\displaystyle m>n.}This process of "computing the father" can stop only if eithera=2b{\displaystyle a=2b}ora=3b.{\displaystyle a=3b.}In these cases, coprimality implies that the pair is either(2,1){\displaystyle (2,1)}or(3,1).{\displaystyle (3,1).}
Another (much simpler) way to generate a tree of positive coprime pairs(m,n)(withm>n) is by means of two generatorsf:(m,n)→(m+n,n){\displaystyle f:(m,n)\rightarrow (m+n,n)}andg:(m,n)→(m+n,m){\displaystyle g:(m,n)\rightarrow (m+n,m)}, starting with the root(2,1){\displaystyle (2,1)}. The resulting binary tree, theCalkin–Wilf tree, is exhaustive and non-redundant, which can be seen as follows. Given a coprime pair one recursively appliesf−1{\displaystyle f^{-1}}org−1{\displaystyle g^{-1}}depending on which of them yields a positive coprime pair withm>n. Since only one does, the tree is non-redundant. Since by this procedure one is bound to arrive at the root, the tree is exhaustive.
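A minimal Python sketch (illustrative, not from the article) of the second, simpler tree: starting from the root (2, 1) and repeatedly applying the generators f(m, n) = (m + n, n) and g(m, n) = (m + n, m) enumerates positive coprime pairs with m > n without repetition.

from collections import deque
from math import gcd

def coprime_pair_tree(count):
    """Breadth-first enumeration of the tree rooted at (2, 1)."""
    queue = deque([(2, 1)])
    for _ in range(count):
        m, n = queue.popleft()
        yield m, n
        queue.append((m + n, n))   # generator f
        queue.append((m + n, m))   # generator g

pairs = list(coprime_pair_tree(15))
print(pairs[:7])   # [(2, 1), (3, 1), (3, 2), (4, 1), (4, 3), (5, 2), (5, 3)]
assert all(m > n and gcd(m, n) == 1 for m, n in pairs)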
In machine design, an even, uniformgearwear is achieved by choosing the tooth counts of the two gears meshing together to be relatively prime. When a 1:1gear ratiois desired, a gear relatively prime to the two equal-size gears may be inserted between them.
In pre-computercryptography, someVernam ciphermachines combined several loops of key tape of different lengths. Manyrotor machinescombine rotors of different numbers of teeth. Such combinations work best when the entire set of lengths are pairwise coprime.[12][13][14][15]
This concept can be extended to other algebraic structures thanZ;{\displaystyle \mathbb {Z} ;}for example,polynomialswhosegreatest common divisoris 1 are calledcoprime polynomials.
TwoidealsAandBin acommutative ringRare called coprime (orcomaximal) ifA+B=R.{\displaystyle A+B=R.}This generalizesBézout's identity: with this definition, twoprincipal ideals(a) and (b) in the ring of integersZ{\displaystyle \mathbb {Z} }are coprime if and only ifaandbare coprime. If the idealsAandBofRare coprime, thenAB=A∩B;{\displaystyle AB=A\cap B;}furthermore, ifCis a third ideal such thatAcontainsBC, thenAcontainsC. TheChinese remainder theoremcan be generalized to any commutative ring, using coprime ideals.
|
https://en.wikipedia.org/wiki/Coprime_integers
|
Incryptographyandcomputer security, aman-in-the-middle[a](MITM)attack, oron-path attack, is acyberattackwhere the attacker secretly relays and possibly alters thecommunicationsbetween two parties who believe that they are directly communicating with each other, where in actuality the attacker has inserted themselves between the two user parties.[9]
One example of a MITM attack is activeeavesdropping, in which the attacker makes independent connections with the victims and relays messages between them to make them believe they are talking directly to each other over a private connection, when in fact the entire conversation is controlled by the attacker.[10]In this scenario, the attacker must be able to intercept all relevant messages passing between the two victims and inject new ones. This is straightforward in many circumstances; for example, an attacker within range of aWi-Fi access pointhosting a network without encryption could insert themselves as a man in the middle.[11][12][13]
As it aims to circumvent mutual authentication, a MITM attack can succeed only when the attacker impersonates each endpoint sufficiently well to satisfy their expectations. Most cryptographic protocols include some form of endpoint authentication specifically to prevent MITM attacks. For example,TLScan authenticate one or both parties using a mutually trusted certificate authority.[14][12]
SupposeAlicewishes to communicate withBob. Meanwhile,Mallorywishes to intercept the conversation to eavesdrop (breaking confidentiality) with the option to deliver a false message to Bob under the guise of Alice (breaking non-repudiation). Mallory would perform a man-in-the-middle attack as described in the following sequence of events.
This example shows the need for Alice and Bob to have a means to ensure that they are truly each using each other's public keys, and not the public key of an attacker.[15]Otherwise, such attacks are generally possible, in principle, against any message sent using public-key technology.
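To make the sequence of events concrete, here is a deliberately tiny Python sketch (not from the article, with toy parameters that must never be used in practice) of a man-in-the-middle attack on an unauthenticated Diffie–Hellman exchange: Mallory ends up sharing one key with Alice and another with Bob, while the two victims never establish a key with each other.

import secrets

p, g = 23, 5                                   # toy group parameters, far too small for real use

def keypair():
    priv = secrets.randbelow(p - 2) + 1        # private exponent in [1, p-2]
    return priv, pow(g, priv, p)               # (private, public)

a_priv, a_pub = keypair()                      # Alice
b_priv, b_pub = keypair()                      # Bob
m_priv, m_pub = keypair()                      # Mallory, who intercepts both public values

# Alice and Bob each unknowingly receive Mallory's public value instead of each other's.
key_alice = pow(m_pub, a_priv, p)              # key Alice thinks she shares with Bob
key_bob   = pow(m_pub, b_priv, p)              # key Bob thinks he shares with Alice

assert key_alice == pow(a_pub, m_priv, p)      # Mallory can decrypt and re-encrypt Alice's traffic
assert key_bob   == pow(b_pub, m_priv, p)      # ...and Bob's, relaying messages between them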
There are several attack types that can fall into the category of MITM. The most notable are:
MITM attacks can be prevented or detected by two means: authentication and tamper detection. Authentication provides some degree of certainty that a given message has come from a legitimate source.Tamper detectionmerely shows evidence that a message may have been altered and has broken integrity.
All cryptographic systems that are secure against MITM attacks provide some method of authentication for messages. Most require an exchange of information (such as public keys) in addition to the message over asecure channel. Such protocols, often usingkey-agreement protocols, have been developed with different security requirements for the secure channel, though some have attempted to remove the requirement for any secure channel at all.[16]
Apublic key infrastructure, such asTransport Layer Security, may hardenTransmission Control Protocolagainst MITM attacks. In such structures, clients and servers exchange certificates which are issued and verified by a trusted third party called acertificate authority(CA). If the original key to authenticate this CA has not been itself the subject of a MITM attack, then the certificates issued by the CA may be used to authenticate the messages sent by the owner of that certificate. Use ofmutual authentication, in which both the server and the client validate the other's communication, covers both ends of a MITM attack. If the server or client's identity is not verified or deemed as invalid, the session will end.[17]However, the default behavior of most connections is to only authenticate the server, which means mutual authentication is not always employed and MITM attacks can still occur.
Attestments, such as verbal communications of a shared value (as inZRTP), or recorded attestments such as audio/visual recordings of a public key hash[18]are used to ward off MITM attacks, as visual media is much more difficult and time-consuming to imitate than simple data packet communication. However, these methods require a human in the loop in order to successfully initiate the transaction.
HTTP Public Key Pinning(HPKP), sometimes called "certificate pinning", helps prevent a MITM attack in which the certificate authority itself is compromised, by having the server provide a list of "pinned" public key hashes during the first transaction. Subsequent transactions then require that one or more of the keys in the list be used by the server in order to authenticate that transaction.
DNSSECextends the DNS protocol to use signatures to authenticate DNS records, preventing simple MITM attacks from directing a client to a maliciousIP address.
Latency examination can potentially detect the attack in certain situations,[19]such as with long calculations that take tens of seconds, as withhash functions. To detect potential attacks, parties check for discrepancies in response times. For example: Say that two parties normally take a certain amount of time to perform a particular transaction. If one transaction, however, were to take an abnormal length of time to reach the other party, this could be indicative of a third party's presence interfering with the connection and inserting additional latency in the transaction.
Quantum cryptography, in theory, provides tamper-evidence for transactions through theno-cloning theorem. Protocols based on quantum cryptography typically authenticate part or all of their classical communication with an unconditionally secure authentication scheme, such asWegman-Carter authentication.[20]
Captured network trafficfrom what is suspected to be an attack can be analyzed in order to determine whether there was an attack and, if so, determine the source of the attack. Important evidence to analyze when performingnetwork forensicson a suspected attack includes:[21]
AStingray phone trackeris acellular phonesurveillance device that mimics a wireless carrier cell tower in order to force all nearby mobile phones and other cellular data devices to connect to it. The tracker relays all communications back and forth between cellular phones and cell towers.[22]
In 2011, a security breach of the Dutch certificate authorityDigiNotarresulted in the fraudulent issuing ofcertificates. Subsequently, the fraudulent certificates were used to perform MITM attacks.[23]
In 2013,Nokia'sXpress Browserwas revealed to be decrypting HTTPS traffic on Nokia'sproxy servers, giving the companyclear textaccess to its customers' encrypted browser traffic. Nokia responded by saying that the content was not stored permanently, and that the company had organizational and technical measures to prevent access to private information.[24]
In 2017,Equifaxwithdrew its mobile phone apps following concern about MITM vulnerabilities.[25]
Bluetooth, a wireless communication protocol, has also been susceptible to man-in-the-middle attacks due to its wireless transmission of data.[26]
Other notable real-life implementations include the following:
|
https://en.wikipedia.org/wiki/Man-in-the-middle_attack
|
Incryptography,key sizeorkey lengthrefers to the number ofbitsin akeyused by acryptographicalgorithm (such as acipher).
Key length defines the upper-bound on an algorithm'ssecurity(i.e. a logarithmic measure of the fastest known attack against an algorithm), because the security of all algorithms can be violated bybrute-force attacks. Ideally, the lower-bound on an algorithm's security is by design equal to the key length (that is, the algorithm's design does not detract from the degree of security inherent in the key length).
Mostsymmetric-key algorithmsare designed to have security equal to their key length. However, after design, a new attack might be discovered. For instance,Triple DESwas designed to have a 168-bit key, but an attack of complexity 2^112 is now known (i.e. Triple DES now only has 112 bits of security, and of the 168 bits in the key the attack has rendered 56 'ineffective' towards security). Nevertheless, as long as the security (understood as "the amount of effort it would take to gain access") is sufficient for a particular application, then it does not matter if key length and security coincide. This is important forasymmetric-key algorithms, because no such algorithm is known to satisfy this property;elliptic curve cryptographycomes the closest with an effective security of roughly half its key length.
Keysare used to control the operation of a cipher so that only the correct key can convert encrypted text (ciphertext) toplaintext. All commonly-used ciphers are based on publicly knownalgorithmsor areopen sourceand so it is only the difficulty of obtaining the key that determines security of the system, provided that there is no analytic attack (i.e. a "structural weakness" in the algorithms or protocols used), and assuming that the key is not otherwise available (such as via theft, extortion, or compromise of computer systems). The widely accepted notion that the security of the system should depend on the key alone has been explicitly formulated byAuguste Kerckhoffs(in the 1880s) andClaude Shannon(in the 1940s); the statements are known asKerckhoffs' principleand Shannon's Maxim respectively.
A key should, therefore, be large enough that a brute-force attack (possible against any encryption algorithm) is infeasible – i.e. would take too long and/or would take too much memory to execute.Shannon'swork oninformation theoryshowed that to achieve so-called 'perfect secrecy', the key length must be at least as large as the message and only used once (this algorithm is called theone-time pad). In light of this, and the practical difficulty of managing such long keys, modern cryptographic practice has discarded the notion of perfect secrecy as a requirement for encryption, and instead focuses oncomputational security, under which the computational requirements of breaking an encrypted text must be infeasible for an attacker.
Encryption systems are often grouped into families. Common families include symmetric systems (e.g.AES) and asymmetric systems (e.g.RSAandElliptic-curve cryptography[ECC]). They may be grouped according to the centralalgorithmused (e.g.ECCandFeistel ciphers). Because each of these has a different level of cryptographic complexity, it is usual to have different key sizes for the samelevel of security, depending upon the algorithm used. For example, the security available with a 1024-bit key using asymmetricRSAis considered approximately equal in security to an 80-bit key in a symmetric algorithm.[1]
The actual degree of security achieved over time varies, as more computational power and more powerful mathematical analytic methods become available. For this reason, cryptologists tend to look at indicators that an algorithm or key length shows signs of potential vulnerability, to move to longer key sizes or more difficult algorithms. For example, as of May 2007[update], a 1039-bit integer was factored with thespecial number field sieveusing 400 computers over 11 months.[2]The factored number was of a special form; the special number field sieve cannot be used on RSA keys. The computation is roughly equivalent to breaking a 700 bit RSA key. However, this might be an advance warning that 1024 bit RSA keys used in secure online commerce should bedeprecated, since they may become breakable in the foreseeable future. Cryptography professorArjen Lenstraobserved that "Last time, it took nine years for us to generalize from a special to a nonspecial, hard-to-factor number" and when asked whether 1024-bit RSA keys are dead, said: "The answer to that question is an unqualified yes."[3]
The 2015Logjam attackrevealed additional dangers in using Diffie-Hellman key exchange when only one or a few common 1024-bit or smaller prime moduli are in use. This practice, somewhat common at the time, allows large amounts of communications to be compromised at the expense of attacking a small number of primes.[4][5]
Even if a symmetric cipher is currently unbreakable by exploiting structural weaknesses in its algorithm, it may be possible to run through the entirespaceof keys in what is known as a brute-force attack. Because longer symmetric keys require exponentially more work to brute force search, a sufficiently long symmetric key makes this line of attack impractical.
With a key of lengthnbits, there are 2^n possible keys. This number grows very rapidly asnincreases. The large number of operations (2^128) required to try all possible 128-bit keys is widely consideredout of reachfor conventional digital computing techniques for the foreseeable future.[6]However, aquantum computercapable of runningGrover's algorithmwould be able to search the possible keys more efficiently: a suitably sized quantum computer would reduce a 128-bit key to roughly 64-bit security, approximately aDESequivalent. This is one of the reasons whyAESsupports key lengths of 256 bits and longer.[a]
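A back-of-the-envelope Python calculation (illustrative only; the 10^12 guesses per second is an arbitrary assumed rate) showing how the classical search space grows with key length, and the rough square-root reduction offered by Grover's algorithm:

GUESSES_PER_SECOND = 10 ** 12        # assumed rate, for illustration only
SECONDS_PER_YEAR = 31_557_600

for bits in (56, 128, 256):
    keys = 2 ** bits                                 # size of the classical search space
    years = keys / GUESSES_PER_SECOND / SECONDS_PER_YEAR
    grover_ops = 2 ** (bits // 2)                    # rough quantum search cost
    print(f"{bits}-bit key: {keys:.2e} keys, ~{years:.2e} years classically, "
          f"~{grover_ops:.2e} Grover evaluations")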
IBM'sLucifer cipherwas selected in 1974 as the base for what would become theData Encryption Standard. Lucifer's key length was reduced from 128 bits to56 bits, which theNSAand NIST argued was sufficient for non-governmental protection at the time. The NSA has major computing resources and a large budget; some cryptographers includingWhitfield DiffieandMartin Hellmancomplained that this made the cipher so weak that NSA computers would be able to break a DES key in a day through brute forceparallel computing. The NSA disputed this, claiming that brute-forcing DES would take them "something like 91 years".[7]
However, by the late 90s, it became clear that DES could be cracked in a few days' time-frame with custom-built hardware such as could be purchased by a large corporation or government.[8][9]The bookCracking DES(O'Reilly and Associates) tells of the successful ability in 1998 to break 56-bit DES by a brute-force attack mounted by a cyber civil rights group with limited resources; seeEFF DES cracker. Even before that demonstration, 56 bits was considered insufficient length forsymmetric algorithmkeys for general use. Because of this, DES was replaced in most security applications byTriple DES, which has 112 bits of security when using 168-bit keys (triple key).[1]
TheAdvanced Encryption Standardpublished in 2001 uses key sizes of 128, 192 or 256 bits. Many observers consider 128 bits sufficient for the foreseeable future for symmetric algorithms ofAES's quality untilquantum computersbecome available.[citation needed]However, as of 2015, the U.S.National Security Agencyhas issued guidance that it plans to switch to quantum computing resistant algorithms and now requires 256-bit AES keys for dataclassified up to Top Secret.[10]
In 2003, the U.S. National Institute of Standards and Technology,NIST, proposed phasing out 80-bit keys by 2015. As of 2005, 80-bit keys were allowed only until 2010.[11]
Since 2015, NIST guidance says that "the use of keys that provide less than 112 bits ofsecurity strengthfor key agreement is now disallowed." NIST approved symmetric encryption algorithms include three-keyTriple DES, andAES. Approvals for two-key Triple DES andSkipjackwere withdrawn in 2015; theNSA's Skipjack algorithm used in itsFortezzaprogram employs 80-bit keys.[1]
The effectiveness ofpublic key cryptosystemsdepends on the intractability (computational and theoretical) of certain mathematical problems such asinteger factorization. These problems are time-consuming to solve, but usually faster than trying all possible keys by brute force. Thus,asymmetric keysmust be longer for equivalent resistance to attack than symmetric algorithm keys. The most common methods are assumed to be weak against sufficiently powerfulquantum computersin the future.
Since 2015, NIST recommends a minimum of 2048-bit keys forRSA,[12]an update to the widely accepted recommendation of a 1024-bit minimum since at least 2002.[13]
1024-bit RSA keys are equivalent in strength to 80-bit symmetric keys, 2048-bit RSA keys to 112-bit symmetric keys, 3072-bit RSA keys to 128-bit symmetric keys, and 15360-bit RSA keys to 256-bit symmetric keys.[14]In 2003,RSA Securityclaimed that 1024-bit keys were likely to become crackable sometime between 2006 and 2010, while 2048-bit keys are sufficient until 2030.[15]As of 2020[update]the largest RSA key publicly known to be cracked isRSA-250with 829 bits.[16]
The Finite FieldDiffie-Hellmanalgorithm has roughly the same key strength as RSA for the same key sizes. The work factor for breaking Diffie-Hellman is based on thediscrete logarithm problem, which is related to the integer factorization problem on which RSA's strength is based. Thus, a 2048-bit Diffie-Hellman key has about the same strength as a 2048-bit RSA key.
Elliptic-curve cryptography(ECC) is an alternative set of asymmetric algorithms that is equivalently secure with shorter keys, requiring only approximately twice the bits as the equivalent symmetric algorithm. A 256-bitElliptic-curve Diffie–Hellman(ECDH) key has approximately the same safety factor as a 128-bitAESkey.[12]A message encrypted with an elliptic key algorithm using a 109-bit long key was broken in 2004.[17]
TheNSApreviously recommended 256-bit ECC for protecting classified information up to the SECRET level, and 384-bit for TOP SECRET;[10]In 2015 it announced plans to transition to quantum-resistant algorithms by 2024, and until then recommends 384-bit for all classified information.[18]
The two best known quantum computing attacks are based onShor's algorithmandGrover's algorithm. Of the two, Shor's offers the greater risk to current security systems.
Derivatives of Shor's algorithm are widely conjectured to be effective against all mainstream public-key algorithms includingRSA,Diffie-Hellmanandelliptic curve cryptography. According to Professor GillesBrassard, an expert in quantum computing: "The time needed to factor an RSA integer is the same order as the time needed to use that same integer as modulus for a single RSA encryption. In other words, it takes no more time to break RSA on a quantum computer (up to a multiplicative constant) than to use it legitimately on a classical computer." The general consensus is that these public key algorithms are insecure at any key size if sufficiently large quantum computers capable of running Shor's algorithm become available. The implication of this attack is that all data encrypted using current standards based security systems such as the ubiquitousSSLused to protect e-commerce and Internet banking andSSHused to protect access to sensitive computing systems is at risk. Encrypted data protected using public-key algorithms can be archived and may be broken at a later time, commonly known as retroactive/retrospective decryption or "harvest now, decrypt later".
Mainstream symmetric ciphers (such asAESorTwofish) and collision resistant hash functions (such asSHA) are widely conjectured to offer greater security against known quantum computing attacks. They are widely thought most vulnerable toGrover's algorithm. Bennett, Bernstein, Brassard, and Vazirani proved in 1996 that a brute-force key search on a quantum computer cannot be faster than roughly 2^(n/2) invocations of the underlying cryptographic algorithm, compared with roughly 2^n in the classical case.[19]Thus in the presence of large quantum computers ann-bit key can provide at leastn/2 bits of security. Quantum brute force is easily defeated by doubling the key length, which has little extra computational cost in ordinary use. This implies that at least a 256-bit symmetric key is required to achieve 128-bit security rating against a quantum computer. As mentioned above, the NSA announced in 2015 that it plans to transition to quantum-resistant algorithms.[10]
In a 2016 Quantum Computing FAQ, the NSA affirmed:
"A sufficiently large quantum computer, if built, would be capable of undermining all widely-deployed public key algorithms used for key establishment and digital signatures. [...] It is generally accepted that quantum computing techniques are much less effective against symmetric algorithms than against current widely used public key algorithms. While public key cryptography requires changes in the fundamental design to protect against a potential future quantum computer, symmetric key algorithms are believed to be secure provided a sufficiently large key size is used. [...] The public-key algorithms (RSA,Diffie-Hellman,[Elliptic-curve Diffie–Hellman] ECDH, and[Elliptic Curve Digital Signature Algorithm] ECDSA) are all vulnerable to attack by a sufficiently large quantum computer. [...] While a number of interesting quantum resistant public key algorithms have been proposed external to NSA, nothing has been standardized byNIST, and NSA is not specifying any commercial quantum resistant standards at this time. NSA expects that NIST will play a leading role in the effort to develop a widely accepted, standardized set of quantum resistant algorithms. [...] Given the level of interest in the cryptographic community, we hope that there will be quantum resistant algorithms widely available in the next decade. [...] The AES-256 and SHA-384 algorithms are symmetric, and believed to be safe from attack by a large quantum computer."[20]
In a 2022 press release, the NSA notified:
"A cryptanalytically-relevant quantum computer (CRQC) would have the potential to break public-key systems (sometimes referred to as asymmetric cryptography) that are used today. Given foreign pursuits in quantum computing, now is the time to plan, prepare and budget for a transition to [quantum-resistant] QR algorithms to assure sustained protection of [National Security Systems] NSS and related assets in the event a CRQC becomes an achievable reality."[21]
Since September 2022, the NSA has been transitioning from theCommercial National Security Algorithm Suite(now referred to as CNSA 1.0), originally launched in January 2016, to the Commercial National Security Algorithm Suite 2.0 (CNSA 2.0), both summarized below:[22][b]
CNSA 2.0
CNSA 1.0
|
https://en.wikipedia.org/wiki/Key_size#Key_sizes_and_security
|
Symmetric-key algorithms[a]arealgorithmsforcryptographythat use the samecryptographic keysfor both the encryption ofplaintextand the decryption ofciphertext. The keys may be identical, or there may be a simple transformation to go between the two keys.[1]The keys, in practice, represent ashared secretbetween two or more parties that can be used to maintain a private information link.[2]The requirement that both parties have access to the secret key is one of the main drawbacks ofsymmetric-key encryption, in comparison topublic-key encryption(also known as asymmetric-key encryption).[3][4]However, symmetric-key encryption algorithms are usually better for bulk encryption. With exception of theone-time padthey have a smaller key size, which means less storage space and faster transmission. Due to this, asymmetric-key encryption is often used to exchange the secret key for symmetric-key encryption.[5][6][7]
Symmetric-key encryption can use eitherstream ciphersorblock ciphers.[8]
Stream ciphers encrypt the digits (typicallybytes), or letters (in substitution ciphers) of a message one at a time. An example isChaCha20.Substitution ciphersare well-known ciphers, but can be easily decrypted using afrequency table.[9]
Block ciphers take a number of bits and encrypt them in a single unit, padding the plaintext to achieve a multiple of the block size. TheAdvanced Encryption Standard(AES) algorithm, approved byNISTin December 2001, uses 128-bit blocks.
Examples of popular symmetric-key algorithms includeTwofish,Serpent,AES(Rijndael),Camellia,Salsa20,ChaCha20,Blowfish,CAST5,Kuznyechik,RC4,DES,3DES,Skipjack,Safer, andIDEA.[10]
Symmetric ciphers are commonly used to achieve othercryptographic primitivesthan just encryption.[citation needed]
Encrypting a message does not guarantee that it will remain unchanged while encrypted. Hence, often amessage authentication codeis added to a ciphertext to ensure that changes to the ciphertext will be noted by the receiver. Message authentication codes can be constructed from anAEADcipher (e.g.AES-GCM).
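As a concrete illustration of an AEAD cipher detecting modification, the following sketch uses the third-party pyca/cryptography package's AES-GCM interface (assuming that package is installed); flipping a single bit of the ciphertext causes decryption to fail with an authentication error.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.exceptions import InvalidTag

key = AESGCM.generate_key(bit_length=256)    # shared symmetric key
aesgcm = AESGCM(key)
nonce = os.urandom(12)                       # 96-bit nonce; must never repeat under the same key

ciphertext = aesgcm.encrypt(nonce, b"transfer 100 EUR", b"header")   # plaintext + associated data

tampered = bytearray(ciphertext)
tampered[0] ^= 0x01                          # flip one bit
try:
    aesgcm.decrypt(nonce, bytes(tampered), b"header")
except InvalidTag:
    print("tampering detected")              # the authentication tag no longer verifies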
However, symmetric ciphers cannot be used fornon-repudiationpurposes except by involving additional parties.[11]See theISO/IEC 13888-2 standard.
Another application is to buildhash functionsfrom block ciphers. Seeone-way compression functionfor descriptions of several such methods.
Many modern block ciphers are based on a construction proposed byHorst Feistel. Feistel's construction makes it possible to build invertible functions from other functions that are themselves not invertible.[citation needed]
Symmetric ciphers have historically been susceptible toknown-plaintext attacks,chosen-plaintext attacks,differential cryptanalysisandlinear cryptanalysis. Careful construction of the functions for eachroundcan greatly reduce the chances of a successful attack.[citation needed]It is also possible to increase the key length or the rounds in the encryption process to better protect against attack. This, however, tends to increase the processing power and decrease the speed at which the process runs due to the amount of operations the system needs to do.[12]
Most modern symmetric-key algorithms appear to be resistant to attacks byquantum computers, the threat addressed bypost-quantum cryptography.[13]Quantum computers would greatly increase the speed at which these ciphers can be decoded; notably,Grover's algorithmwould take the square-root of the time traditionally required for abrute-force attack, although these vulnerabilities can be compensated for by doubling key length.[14]For example, a 128 bit AES cipher would not be secure against such an attack as it would reduce the time required to test all possible iterations from over 10 quintillion years to about six months. By contrast, it would still take a quantum computer the same amount of time to decode a 256 bit AES cipher as it would a conventional computer to decode a 128 bit AES cipher.[15]For this reason, AES-256 is believed to be "quantum resistant".[16][17]
Symmetric-key algorithms require both the sender and the recipient of a message to have the same secret key. All early cryptographic systems required either the sender or the recipient to somehow receive a copy of that secret key over a physically secure channel.
Nearly all modern cryptographic systems still use symmetric-key algorithms internally to encrypt the bulk of the messages, but they eliminate the need for a physically secure channel by usingDiffie–Hellman key exchangeor some otherpublic-key protocolto securely come to agreement on a fresh new secret key for each session/conversation (forward secrecy).
When used with asymmetric ciphers for key transfer,pseudorandom key generatorsare nearly always used to generate the symmetric cipher session keys. However, lack of randomness in those generators or in theirinitialization vectorsis disastrous and has led to cryptanalytic breaks in the past. Therefore, it is essential that an implementation use a source of highentropyfor its initialization.[18][19][20]
A reciprocal cipher is a cipher where, just as one enters theplaintextinto thecryptographysystem to get theciphertext, one could enter the ciphertext into the same place in the system to get the plaintext. A reciprocal cipher is also sometimes referred to as aself-reciprocal cipher.[21][22]
Practically all mechanical cipher machines implement a reciprocal cipher, amathematical involutionon each typed-in letter.
Instead of designing two kinds of machines, one for encrypting and one for decrypting, all the machines can be identical and can be set up (keyed) the same way.[23]
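A toy Python illustration (not from the article) of a reciprocal cipher: XORing with a repeating keystream is an involution, so the same routine both encrypts and decrypts.

from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Self-inverse: applying the function twice with the same key returns the input."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

message = b"attack at dawn"
key = b"\x13\x37\xc0\xde"
ciphertext = xor_cipher(message, key)
assert xor_cipher(ciphertext, key) == message   # the same operation decrypts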
Examples of reciprocal ciphers include:
The majority of all modern ciphers can be classified as either astream cipher, most of which use a reciprocalXOR ciphercombiner, or ablock cipher, most of which use aFeistel cipherorLai–Massey schemewith a reciprocal transformation in each round.[citation needed]
|
https://en.wikipedia.org/wiki/Symmetric-key_algorithm#Key_length
|
Wired Equivalent Privacy(WEP) is an obsolete, severely flawedsecurityalgorithm for 802.11wireless networks. Introduced as part of the originalIEEE 802.11standard ratified in 1997, its intention was to provide security/privacy comparable to that of a traditional wirednetwork.[1]WEP, recognizable by its key of 10 or 26hexadecimaldigits (40 or 104 bits), was at one time widely used, and was often the first security choice presented to users by router configuration tools.[2][3]After a severe design flaw in the algorithm was disclosed in 2001,[4]WEP was no longer considered a secure method of wireless connection; however, in the vast majority of cases, Wi-Fi hardware devices relying on WEP security could not be upgraded to secure operation. Some of WEP's design flaws were addressed in WEP2, but it also proved insecure, and never saw wide adoption or standardization.[5]
In 2003, theWi-Fi Allianceannounced that WEP and WEP2 had been superseded byWi-Fi Protected Access(WPA). In 2004, with the ratification of the full 802.11i standard (i.e. WPA2), the IEEE declared that both WEP-40 and WEP-104 have been deprecated.[6]WPA retained some design characteristics of WEP that remained problematic.
WEP was the only encryption protocol available to802.11aand802.11bdevices built before the WPA standard, which was available for802.11gdevices. However, some 802.11b devices were later provided with firmware or software updates to enable WPA, and newer devices had it built in.[7]
WEP was ratified as a Wi-Fi security standard in 1999. The first versions of WEP were not particularly strong, even for the time they were released, due to U.S. restrictions on the export of various cryptographic technologies. These restrictions led to manufacturers restricting their devices to only 64-bit encryption. When the restrictions were lifted, the encryption was increased to 128 bits. Despite the introduction of 256-bit WEP, 128-bit remains one of the most common implementations.[8]
WEP was included as the privacy component of the originalIEEE 802.11[9]standard ratified in 1997.[10][11]WEP uses thestream cipherRC4forconfidentiality,[12]and theCRC-32checksum forintegrity.[13]It was deprecated in 2004 and is documented in the current standard.[14]
Standard 64-bit WEP uses a 40-bitkey (also known as WEP-40), which is concatenated with a 24-bitinitialization vector(IV) to form the RC4 key. At the time that the original WEP standard was drafted,the U.S. Government's export restrictions on cryptographic technologylimited thekey size. Once the restrictions were lifted, manufacturers of access points implemented an extended 128-bit WEP protocol using a 104-bit key size (WEP-104).
A 64-bit WEP key is usually entered as a string of 10hexadecimal(base 16) characters (0–9 and A–F). Each character represents 4 bits, 10 digits of 4 bits each gives 40 bits; adding the 24-bit IV produces the complete 64-bit WEP key (4 bits × 10 + 24-bit IV = 64-bit WEP key). Most devices also allow the user to enter the key as 5ASCIIcharacters (0–9, a–z, A–Z), each of which is turned into 8 bits using the character's byte value in ASCII (8 bits × 5 + 24-bit IV = 64-bit WEP key); however, this restricts each byte to be a printable ASCII character, which is only a small fraction of possible byte values, greatly reducing the space of possible keys.
A 128-bit WEP key is usually entered as a string of 26 hexadecimal characters. 26 digits of 4 bits each gives 104 bits; adding the 24-bit IV produces the complete 128-bit WEP key (4 bits × 26 + 24-bit IV = 128-bit WEP key). Most devices also allow the user to enter it as 13 ASCII characters (8 bits × 13 + 24-bit IV = 128-bit WEP key).
152-bit and 256-bit WEP systems are available from some vendors. As with the other WEP variants, 24 bits of that is for the IV, leaving 128 or 232 bits for actual protection. These 128 or 232 bits are typically entered as 32 or 58 hexadecimal characters (4 bits × 32 + 24-bit IV = 152-bit WEP key, 4 bits × 58 + 24-bit IV = 256-bit WEP key). Most devices also allow the user to enter it as 16 or 29 ASCII characters (8 bits × 16 + 24-bit IV = 152-bit WEP key, 8 bits × 29 + 24-bit IV = 256-bit WEP key).
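The per-packet key construction described above is simple to sketch in Python (illustrative values only; WEP should never be used): the cleartext IV is prepended to the secret key, the result seeds RC4, and the keystream is XORed with the plaintext.

def rc4_keystream(key: bytes, length: int) -> bytes:
    """Plain RC4: key-scheduling algorithm (KSA) followed by the pseudo-random generation algorithm (PRGA)."""
    S = list(range(256))
    j = 0
    for i in range(256):                              # KSA
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for _ in range(length):                           # PRGA
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

iv = bytes([0x01, 0x02, 0x03])                        # 24-bit IV, sent in the clear
secret = bytes([0xAA, 0xBB, 0xCC, 0xDD, 0xEE])        # 40-bit WEP-40 secret (illustrative)
packet_key = iv + secret                              # 64-bit per-packet RC4 key

plaintext = b"hello"
keystream = rc4_keystream(packet_key, len(plaintext))
ciphertext = bytes(p ^ k for p, k in zip(plaintext, keystream))
# Decryption XORs the same keystream back; the tiny 2^24 IV space is what
# makes keystream reuse, and the attacks described below, practical.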
Two methods of authentication can be used with WEP: Open System authentication and Shared Key authentication.
In Open System authentication, the WLAN client does not provide its credentials to the access point during authentication. Any client can authenticate with the access point and then attempt to associate. In effect, no authentication occurs. Subsequently, WEP keys can be used for encrypting data frames. At this point, the client must have the correct keys.
In Shared Key authentication, the WEP key is used for authentication in a four-stepchallenge–responsehandshake:
After the authentication and association, the pre-shared WEP key is also used for encrypting the data frames using RC4.
At first glance, it might seem as though Shared Key authentication is more secure than Open System authentication since the latter offers no real authentication. However, it is quite the reverse. It is possible to derive the keystream used for the handshake by capturing the challenge frames in Shared Key authentication.[15]Therefore, data can be more easily intercepted and decrypted with Shared Key authentication than with Open System authentication. If privacy is a primary concern, it is more advisable to use Open System authentication for WEP authentication, rather than Shared Key authentication; however, this also means that any WLAN client can connect to the AP. (Both authentication mechanisms are weak; Shared Key WEP is deprecated in favor of WPA/WPA2.)
Because RC4 is astream cipher, the same traffic key must never be used twice. The purpose of an IV, which is transmitted as plaintext, is to prevent any repetition, but a 24-bit IV is not long enough to ensure this on a busy network. The way the IV was used also opened WEP to arelated-key attack. For a 24-bit IV, there is a 50% probability the same IV will repeat after 5,000 packets.
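The 50% figure is the usual birthday bound; a short Python check (illustrative, not from the article):

import math

def iv_collision_probability(packets, iv_bits=24):
    """Probability that at least two of `packets` uniformly random IVs coincide."""
    space = 2 ** iv_bits
    log_all_distinct = sum(math.log1p(-i / space) for i in range(packets))
    return 1 - math.exp(log_all_distinct)

print(iv_collision_probability(5000))   # about 0.52 -- roughly a coin flip after 5,000 packets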
In August 2001,Scott Fluhrer,Itsik Mantin, andAdi Shamirpublished acryptanalysisof WEP[4]that exploits the way the RC4 ciphers and IV are used in WEP, resulting in a passive attack that can recover the RC4keyafter eavesdropping on the network. Depending on the amount of network traffic, and thus the number of packets available for inspection, a successful key recovery could take as little as one minute. If an insufficient number of packets are being sent, there are ways for an attacker to send packets on the network and thereby stimulate reply packets, which can then be inspected to find the key. The attack was soon implemented, and automated tools have since been released. It is possible to perform the attack with a personal computer, off-the-shelf hardware, and freely available software such asaircrack-ngto crackanyWEP key in minutes.
Cam-Winget et al.[16]surveyed a variety of shortcomings in WEP. They wrote "Experiments in the field show that, with proper equipment, it is practical to eavesdrop on WEP-protected networks from distances of a mile or more from the target." They also reported two generic weaknesses:
In 2005, a group from the U.S.Federal Bureau of Investigationgave a demonstration where they cracked a WEP-protected network in three minutes using publicly available tools.[17]Andreas Klein presented another analysis of the RC4 stream cipher. Klein showed that there are more correlations between the RC4 keystream and the key than the ones found by Fluhrer, Mantin, and Shamir, which can additionally be used to break WEP in WEP-like usage modes.
In 2006, Bittau,Handley, and Lackey showed[2]that the 802.11 protocol itself can be used against WEP to enable earlier attacks that were previously thought impractical. After eavesdropping a single packet, an attacker can rapidly bootstrap to be able to transmit arbitrary data. The eavesdropped packet can then be decrypted one byte at a time (by transmitting about 128 packets per byte to decrypt) to discover the local network IP addresses. Finally, if the 802.11 network is connected to the Internet, the attacker can use 802.11 fragmentation to replay eavesdropped packets while crafting a new IP header onto them. The access point can then be used to decrypt these packets and relay them on to a buddy on the Internet, allowing real-time decryption of WEP traffic within a minute of eavesdropping the first packet.
In 2007, Erik Tews, Andrei Pyshkin, and Ralf-Philipp Weinmann were able to extend Klein's 2005 attack and optimize it for usage against WEP. With the new attack[18]it is possible to recover a 104-bit WEP key with a probability of 50% using only 40,000 captured packets. For 60,000 available data packets, the success probability is about 80%, and for 85,000 data packets, about 95%. Using active techniques likeWi-Fi deauthentication attacksandARPre-injection, 40,000 packets can be captured in less than one minute under good conditions. The actual computation takes about 3 seconds and 3 MB of main memory on aPentium-M1.7 GHz and can additionally be optimized for devices with slower CPUs. The same attack can be used for 40-bit keys with an even higher success probability.
In 2008 thePayment Card Industry Security Standards Council(PCI SSC) updated theData Security Standard(DSS) to prohibit use of WEP as part of any credit-card processing after 30 June 2010, and prohibit any new system from being installed that uses WEP after 31 March 2009. The use of WEP contributed to theTJ Maxxparent company network invasion.[19]
The Caffe Latte attack is another way to defeat WEP. It is not necessary for the attacker to be in the area of thenetworkusing this exploit. By using a process that targets theWindowswireless stack, it is possible to obtain the WEP key from a remote client.[20]By sending a flood of encryptedARPrequests, the assailant takes advantage of the shared key authentication and the message modification flaws in 802.11 WEP. The attacker uses the ARP responses to obtain the WEP key in less than 6 minutes.[21]
Use of encryptedtunneling protocols(e.g.,IPsec,Secure Shell) can provide secure data transmission over an insecure network. However, replacements for WEP have been developed with the goal of restoring security to the wireless network itself.
The recommended solution to WEP security problems is to switch to WPA2.WPAwas an intermediate solution for hardware that could not support WPA2. Both WPA and WPA2 are much more secure than WEP.[22]To add support for WPA or WPA2, some old Wi-Fiaccess pointsmight need to be replaced or have theirfirmwareupgraded. WPA was designed as an interim software-implementable solution for WEP that could forestall immediate deployment of new hardware.[23]However,TKIP(the basis of WPA) has reached the end of its designed lifetime, has been partially broken, and has been officially deprecated with the release of the 802.11-2012 standard.[24]
This stopgap enhancement to WEP was present in some of the early 802.11i drafts. It was implementable onsome(not all) hardware not able to handle WPA or WPA2, and extended both the IV and the key values to 128 bits.[9]It was hoped to eliminate the duplicate IV deficiency as well as stopbrute-force key attacks.
After it became clear that the overall WEP algorithm was deficient (and not just the IV and key sizes) and would require even more fixes, both the WEP2 name and original algorithm were dropped. The two extended key lengths remained in what eventually became WPA'sTKIP.
WEPplus, also known as WEP+, is a proprietary enhancement to WEP byAgere Systems(formerly a subsidiary ofLucent Technologies) that enhances WEP security by avoiding "weak IVs".[25]It is only completely effective when WEPplus is used atboth endsof the wireless connection. As this cannot easily be enforced, it remains a serious limitation. It also does not necessarily preventreplay attacks, and is ineffective against later statistical attacks that do not rely on weak IVs.
Dynamic WEP refers to the combination of 802.1x technology and theExtensible Authentication Protocol. Dynamic WEP changes WEP keys dynamically. It is a vendor-specific feature provided by several vendors such as3Com.
The dynamic change idea made it into 802.11i as part of TKIP, but not for the WEP protocol itself.
|
https://en.wikipedia.org/wiki/Wired_Equivalent_Privacy
|
Incomputingandmathematics, themodulo operationreturns theremainderor signed remainder of adivision, after one number is divided by another, the latter being called themodulusof the operation.
Given two positive numbersaandn,amodulon(often abbreviated asamodn) is the remainder of theEuclidean divisionofabyn, whereais thedividendandnis thedivisor.[1]
For example, the expression "5 mod 2" evaluates to 1, because 5 divided by 2 has aquotientof 2 and a remainder of 1, while "9 mod 3" would evaluate to 0, because 9 divided by 3 has a quotient of 3 and a remainder of 0.
Although typically performed withaandnboth beingintegers, many computing systems now allow other types of numeric operands. The range of values for an integer modulo operation ofnis 0 ton− 1.amod 1 is always 0.
When exactly one ofaornis negative, the basic definition breaks down, andprogramming languagesdiffer in how these values are defined.
Inmathematics, the result of themodulooperation is anequivalence class, and any member of the class may be chosen asrepresentative; however, the usual representative is theleast positive residue, the smallest non-negative integer that belongs to that class (i.e., the remainder of theEuclidean division).[2]However, other conventions are possible. Computers and calculators have various ways of storing and representing numbers; thus their definition of the modulo operation depends on theprogramming languageor the underlyinghardware.
In nearly all computing systems, the quotientqand the remainderrofadivided bynsatisfy the following conditions, referred to below as equation (1): q is an integer, a = n q + r, and |r| < |n|.
This still leaves a sign ambiguity if the remainder is non-zero: two possible choices for the remainder occur, one negative and the other positive; that choice determines which of the two consecutive quotients must be used to satisfy equation (1). In number theory, the positive remainder is always chosen, but in computing, programming languages choose depending on the language and the signs ofaorn.[a]StandardPascalandALGOL 68, for example, give a positive remainder (or 0) even for negative divisors, and some programming languages, such as C90, leave it to the implementation when either ofnorais negative (see the table under§ In programming languagesfor details). Some systems leaveamodulo 0 undefined, though others define it asa.
Many implementations usetruncated division, for which the quotient is defined by q = trunc(a/n),
wheretruncis theintegral part function(rounding toward zero), i.e. thetruncationto zero significant digits.
Thus according to equation (1), the remainder r = a − n trunc(a/n) has thesame sign as the dividenda(or is zero), so it can take 2|n| − 1 values, namely the integers r with |r| < |n|.
Donald Knuth[3]promotesfloored division, for which the quotient is defined by q = ⌊a/n⌋,
where ⌊ ⌋ is thefloor function(rounding down).
Thus according to equation (1), the remainder r = a − n⌊a/n⌋ has thesame sign as the divisorn(or is zero): for n > 0 it satisfies 0 ≤ r < n, and for n < 0 it satisfies n < r ≤ 0.
Raymond T. Boute[4]promotesEuclidean division, for which the quotient is defined by q = sgn(n)⌊a/|n|⌋ (equivalently, q = ⌊a/n⌋ if n > 0 and q = ⌈a/n⌉ if n < 0),
wheresgnis thesign function, ⌊ ⌋ is thefloor function(rounding down), and ⌈ ⌉ is theceiling function(rounding up).
Thus according to equation (1), the remainder r = a − |n|⌊a/|n|⌋ isnon-negative: 0 ≤ r < |n|.
Common Lisp andIEEE 754userounded division, for which the quotient is defined by q = round(a/n),
whereroundis theround function(rounding half to even).
Thus according to equation (1), the remainder r = a − n round(a/n) falls between −n/2 and n/2, and its sign depends on which side of zero it falls within these boundaries.
Common Lisp also usesceiling division, for which the quotient is defined by q = ⌈a/n⌉,
where ⌈ ⌉ is theceiling function(rounding up).
Thus according to equation (1), the remainder r = a − n⌈a/n⌉ has theopposite sign of that of the divisor: for n > 0 it satisfies −n < r ≤ 0.
If both the dividend and divisor are positive, then the truncated, floored, and Euclidean definitions agree.
If the dividend is positive and the divisor is negative, then the truncated and Euclidean definitions agree.
If the dividend is negative and the divisor is positive, then the floored and Euclidean definitions agree.
If both the dividend and divisor are negative, then the truncated and floored definitions agree.
As described by Leijen,
Boute argues that Euclidean division is superior to the other ones in terms of regularity and useful mathematical properties, although floored division, promoted by Knuth, is also a good definition. Despite its widespread use, truncated division is shown to be inferior to the other definitions.
However, truncated division satisfies the identity(−a)/b=−(a/b)=a/(−b){\displaystyle ({-a})/b={-(a/b)}=a/({-b})}.[6][7]
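To make the differences concrete, the following Python sketch compares the conventions on a negative dividend. Python's own % and // are floored; the truncated and rounded remainders come from the standard library, and the Euclidean remainder is computed by hand.

```python
import math

a, n = -7, 3

floored   = a % n                          # Python's % is floored: sign follows the divisor (2)
truncated = math.fmod(a, n)                # C-style truncated remainder: sign follows the dividend (-1.0)
rounded   = math.remainder(a, n)           # IEEE 754 remainder, quotient rounded half to even (-1.0)
euclidean = a - abs(n) * (a // abs(n))     # always non-negative (2)

print(floored, truncated, rounded, euclidean)
```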
Some calculators have amod()function button, and many programming languages have a similar function, expressed asmod(a,n), for example. Some also support expressions that use "%", "mod", or "Mod" as a modulo or remainderoperator, such asa % nora mod n.
For environments lacking a similar function, any of the three definitions above can be used.
When the result of a modulo operation has the sign of the dividend (truncated definition), it can lead to surprising mistakes.
For example, to test if an integer isodd, one might be inclined to test if the remainder by 2 is equal to 1. But in a language where modulo has the sign of the dividend, that is incorrect, because whenn(the dividend) is negative and odd,nmod 2 returns −1, and the function returns false. One correct alternative is to test that the remainder is not 0 (because a remainder of 0 is the same regardless of the signs); another is to use binary arithmetic and test the lowest bit directly, as in the sketch below.
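A minimal sketch of the three tests. Python's own % is floored, so math.fmod is used here to emulate a truncated, C-style remainder; the function names are illustrative only.

```python
import math

def is_odd_naive(n: int) -> bool:
    # Incorrect under truncated modulo: math.fmod(-3, 2) is -1.0, not 1
    return math.fmod(n, 2) == 1

def is_odd_nonzero(n: int) -> bool:
    # Correct under any convention: an even number always has remainder 0
    return math.fmod(n, 2) != 0

def is_odd_bitwise(n: int) -> bool:
    # Correct for two's-complement integers: parity is the lowest bit
    return (n & 1) == 1

print(is_odd_naive(-3), is_odd_nonzero(-3), is_odd_bitwise(-3))   # False True True
```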
Modulo operations might be implemented such that a division with a remainder is calculated each time. For special cases, on some hardware, faster alternatives exist. For example, the modulo ofpowers of 2can alternatively be expressed as abitwiseAND operation (assumingxis a positive integer, or using a non-truncating definition): x % 2^n can be replaced with x & (2^n − 1). Examples: x % 2 == x & 1, x % 4 == x & 3, x % 8 == x & 7.
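A quick numerical check of the identity in Python (whose floored % makes it hold for negative x as well):

```python
for x in (0, 1, 7, 8, 100, -9):
    for n in (1, 2, 3, 4):
        assert x % (1 << n) == x & ((1 << n) - 1)
print("x % 2**n == x & (2**n - 1) held for all sampled values")
```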
In devices and software that implement bitwise operations more efficiently than modulo, these alternative forms can result in faster calculations.[8]
Compiler optimizationsmay recognize expressions of the formexpression % constantwhereconstantis a power of two and automatically implement them asexpression & (constant-1), allowing the programmer to write clearer code without compromising performance. This simple optimization is not possible for languages in which the result of the modulo operation has the sign of the dividend (includingC), unless the dividend is of anunsignedinteger type. This is because, if the dividend is negative, the modulo will be negative, whereasexpression & (constant-1)will always be positive. For these languages, the equivalencex % 2^n == x < 0 ? x | ~(2^n - 1) : x & (2^n - 1)has to be used instead, expressed using bitwise OR, NOT and AND operations.
Optimizations for general constant-modulus operations also exist by calculating the division first using theconstant-divisor optimization.
Some modulo operations can be factored or expanded similarly to other mathematical operations. This may be useful incryptographyproofs, such as theDiffie–Hellman key exchange. The properties involving multiplication, division, and exponentiation generally require thataandnare integers.
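For instance, the multiplication and exponentiation properties can be checked numerically; the short Python sketch below (illustrative values only) verifies that reducing intermediate results modulo n does not change the final residue, which is what makes modular exponentiation on very large numbers practical.

```python
a, b, n = 123_456_789, 987_654_321, 97

# Multiplication is compatible with reduction mod n
assert (a * b) % n == ((a % n) * (b % n)) % n

# Exponentiation: reducing the base first gives the same residue;
# pow(a, b, n) exploits this to avoid ever forming the huge number a**b
assert pow(a, b, n) == pow(a % n, b, n)

print((a * b) % n, pow(a, b, n))
```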
In addition, many computer systems provide adivmodfunctionality, which produces the quotient and the remainder at the same time. Examples include thex86 architecture'sIDIVinstruction, the C programming language'sdiv()function, andPython'sdivmod()function.
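In Python, for example, divmod returns the floored quotient and remainder in a single call:

```python
q, r = divmod(-7, 3)
print(q, r)              # -3 2 (floored quotient and matching remainder)
assert 3 * q + r == -7   # the defining identity a == n*q + r
```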
Sometimes it is useful for the result ofamodulonto lie not between 0 andn− 1, but between some numberdandd+n− 1. In that case,dis called anoffsetandd= 1is particularly common.
There does not seem to be a standard notation for this operation, so let us tentatively use a mod_d n. We thus have the following definition:[61] x = a mod_d n just in case d ≤ x ≤ d + n − 1 and x mod n = a mod n. Clearly, the usual modulo operation corresponds to zero offset: a mod n = a mod_0 n.
The operation of modulo with offset is related to thefloor functionas follows: a mod_d n = a − n⌊(a − d)/n⌋.
To see this, letx=a−n⌊a−dn⌋{\textstyle x=a-n\left\lfloor {\frac {a-d}{n}}\right\rfloor }. We first show thatxmodn=amodn. It is in general true that(a+bn) modn=amodnfor all integersb; thus, this is true also in the particular case whenb=−⌊a−dn⌋{\textstyle b=-\!\left\lfloor {\frac {a-d}{n}}\right\rfloor }; but that means thatxmodn=(a−n⌊a−dn⌋)modn=amodn{\textstyle x{\bmod {n}}=\left(a-n\left\lfloor {\frac {a-d}{n}}\right\rfloor \right)\!{\bmod {n}}=a{\bmod {n}}}, which is what we wanted to prove. It remains to be shown thatd≤x≤d+n− 1. Letkandrbe the integers such thata−d=kn+rwith0 ≤r≤n− 1(seeEuclidean division). Then⌊a−dn⌋=k{\textstyle \left\lfloor {\frac {a-d}{n}}\right\rfloor =k}, thusx=a−n⌊a−dn⌋=a−nk=d+r{\textstyle x=a-n\left\lfloor {\frac {a-d}{n}}\right\rfloor =a-nk=d+r}. Now take0 ≤r≤n− 1and adddto both sides, obtainingd≤d+r≤d+n− 1. But we've seen thatx=d+r, so we are done.
The modulo with offset a mod_d n is implemented inMathematicaasMod[a, n, d].[61]
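A direct transcription of that floor-function formula into Python (the function name is illustrative); with d = 1 it maps results onto the range 1 to n instead of 0 to n − 1, as on a clock face.

```python
def mod_offset(a: int, n: int, d: int = 0) -> int:
    """Return the unique x with d <= x <= d + n - 1 and x congruent to a (mod n)."""
    return a - n * ((a - d) // n)

print(mod_offset(12, 12, 1))   # 12 rather than 0
print(mod_offset(25, 12, 1))   # 1: 25 is congruent to 1 (mod 12), reported in the range 1..12
```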
Despite the mathematical elegance of Knuth's floored division and Euclidean division, it is generally much more common to find a truncated division-based modulo in programming languages. Leijen provides algorithms for deriving the two other divisions from a truncated integer division.[5]
For both cases, the remainder can be calculated independently of the quotient, but not vice versa; in Leijen's presentation the two operations are combined, since the logical branches are the same. A sketch in the same spirit follows.
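Leijen's original pseudocode is not reproduced here; the following Python sketch captures the same idea, deriving the floored and Euclidean quotient/remainder pairs from a truncated division primitive (emulated with math.trunc, since Python's own // is already floored).

```python
import math

def truncated_divmod(a: int, n: int):
    q = math.trunc(a / n)                    # round-toward-zero quotient
    return q, a - n * q

def floored_divmod(a: int, n: int):
    q, r = truncated_divmod(a, n)
    if r != 0 and (r < 0) != (n < 0):        # remainder and divisor disagree in sign
        q, r = q - 1, r + n
    return q, r

def euclidean_divmod(a: int, n: int):
    q, r = truncated_divmod(a, n)
    if r < 0:                                # shift so the remainder is non-negative
        q, r = (q + 1, r - n) if n < 0 else (q - 1, r + n)
    return q, r

print(truncated_divmod(-7, 3), floored_divmod(-7, 3), euclidean_divmod(-7, 3))
# (-2, -1) (-3, 2) (-3, 2)
```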
|
https://en.wikipedia.org/wiki/Modulo_operation
|
Inmathematics,modular arithmeticis a system ofarithmeticoperations forintegers, other than the usual ones from elementary arithmetic, where numbers "wrap around" when reaching a certain value, called themodulus. The modern approach to modular arithmetic was developed byCarl Friedrich Gaussin his bookDisquisitiones Arithmeticae, published in 1801.
A familiar example of modular arithmetic is the hour hand on a12-hour clock. If the hour hand points to 7 now, then 8 hours later it will point to 3. Ordinary addition would result in7 + 8 = 15, but 15 reads as 3 on the clock face. This is because the hour hand makes one rotation every 12 hours and the hour number starts over when the hour hand passes 12. We say that 15 iscongruentto 3 modulo 12, written 15 ≡ 3 (mod 12), so that 7 + 8 ≡ 3 (mod 12).
Similarly, if one starts at 12 and waits 8 hours, the hour hand will be at 8. If one instead waited twice as long, 16 hours, the hour hand would be on 4. This can be written as 2 × 8 ≡ 4 (mod 12). Note that after a wait of exactly 12 hours, the hour hand will always be right where it was before, so 12 acts the same as zero, thus 12 ≡ 0 (mod 12).
Given anintegerm≥ 1, called amodulus, two integersaandbare said to becongruentmodulom, ifmis adivisorof their difference; that is, if there is an integerksuch that a − b = k m.
Congruence modulomis acongruence relation, meaning that it is anequivalence relationthat is compatible withaddition,subtraction, andmultiplication. Congruence modulomis denoted by a ≡ b (mod m).
The parentheses mean that(modm)applies to the entire equation, not just to the right-hand side (here,b).
This notation is not to be confused with the notationbmodm(without parentheses), which refers to the remainder ofbwhen divided bym, known as themodulooperation: that is,bmodmdenotes the unique integerrsuch that0 ≤r<mandr≡b(modm).
The congruence relation may be rewritten as a = k m + b,
explicitly showing its relationship withEuclidean division. However, thebhere need not be the remainder in the division ofabym.Rather,a≡b(modm)asserts thataandbhave the sameremainderwhen divided bym. That is, a = p m + r and b = q m + r,
where0 ≤r<mis the common remainder. We recover the previous relation (a−b=k m) by subtracting these two expressions and settingk=p−q.
Because the congruence modulomis defined by thedivisibilitybymand because−1is aunitin the ring of integers, a number is divisible by−mexactly if it is divisible bym.
This means that every non-zero integermmay be taken as modulus.
In modulus 12, one can assert that 38 ≡ 14 (mod 12),
because the difference is38 − 14 = 24 = 2 × 12, a multiple of12. Equivalently,38and14have the same remainder2when divided by12.
The definition of congruence also applies to negative values. For example: 2 ≡ −3 (mod 5), −8 ≡ 7 (mod 5), and −3 ≡ −8 (mod 5).
The congruence relation satisfies all the conditions of anequivalence relation: reflexivity (a ≡ a (mod m)), symmetry (a ≡ b (mod m) if b ≡ a (mod m)), and transitivity (if a ≡ b (mod m) and b ≡ c (mod m), then a ≡ c (mod m)).
Ifa1≡b1(modm)anda2≡b2(modm), or ifa≡b(modm), then:[1]a + k ≡ b + k (mod m) for any integer k; k a ≡ k b (mod m) for any integer k; a1 + a2 ≡ b1 + b2 (mod m); a1 − a2 ≡ b1 − b2 (mod m); a1 a2 ≡ b1 b2 (mod m); and a^k ≡ b^k (mod m) for any non-negative integer k.
Ifa≡b(modm), then it is generally false that k^a ≡ k^b (mod m). However, the following is true: if c ≡ d (mod φ(m)), where φ isEuler's totient function, then a^c ≡ a^d (mod m), provided thatais coprime withm.
For cancellation of common terms, we have the following rules: if a + k ≡ b + k (mod m) for an integer k, then a ≡ b (mod m); if k a ≡ k b (mod m) and k is coprime with m, then a ≡ b (mod m); and if k a ≡ k b (mod k m) with k ≠ 0, then a ≡ b (mod m).
The last rule can be used to move modular arithmetic into division. Ifbdividesa, then(a/b) modm= (amodb m) /b.
Themodular multiplicative inverseis defined by the following rules: there exists an integer, denoted a^−1, such that a a^−1 ≡ 1 (mod m) if and only ifais coprime withm; and if a ≡ b (mod m) and the inverses exist, then a^−1 ≡ b^−1 (mod m), so the inverse is unique modulo m.
The multiplicative inversex≡a−1(modm)may be efficiently computed by solvingBézout's equationa x+m y= 1forx,y, by using theExtended Euclidean algorithm.
In particular, ifpis a prime number, thenais coprime withpfor everyasuch that0 <a<p; thus a multiplicative inverse exists for allathat is not congruent to zero modulop.
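A compact sketch of that computation in Python (the function name is illustrative); recent Python versions also expose the same result directly as pow(a, -1, m).

```python
def mod_inverse(a: int, m: int) -> int:
    """Return x with (a * x) % m == 1, or raise if gcd(a, m) != 1."""
    old_r, r = a % m, m          # extended Euclidean algorithm
    old_s, s = 1, 0              # old_s tracks the coefficient of a
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
    if old_r != 1:
        raise ValueError("a is not invertible modulo m")
    return old_s % m

print(mod_inverse(3, 7))   # 5, since 3 * 5 = 15 is congruent to 1 (mod 7)
print(pow(3, -1, 7))       # same result via the built-in (Python 3.8+)
```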
Some of the more advanced properties of congruence relations are the following:
The congruence relation is anequivalence relation. Theequivalence classmodulomof an integerais the set of all integers of the forma+k m, wherekis any integer. It is called thecongruence classorresidue classofamodulom, and may be denoted(amodm), or asaor[a]when the modulusmis known from the context.
Each residue class modulomcontains exactly one integer in the range0,...,|m|−1{\displaystyle 0,...,|m|-1}. Thus, these|m|{\displaystyle |m|}integers arerepresentativesof their respective residue classes.
It is generally easier to work with integers than sets of integers; that is, the representatives most often considered, rather than their residue classes.
Consequently,(amodm)denotes generally the unique integerrsuch that0 ≤r<mandr≡a(modm); it is called theresidueofamodulom.
In particular,(amodm) = (bmodm)is equivalent toa≡b(modm), and this explains why "=" is often used instead of "≡" in this context.
Each residue class modulommay be represented by any one of its members, although we usually represent each residue class by the smallest nonnegative integer which belongs to that class[2](since this is the proper remainder which results from division). Any two members of different residue classes modulomare incongruent modulom. Furthermore, every integer belongs to one and only one residue class modulom.[3]
The set of integers{0, 1, 2, ...,m− 1}is called theleast residue system modulom. Any set ofmintegers, no two of which are congruent modulom, is called acomplete residue system modulom.
The least residue system is a complete residue system, and a complete residue system is simply a set containing precisely onerepresentativeof each residue class modulom.[4]For example, the least residue system modulo4is{0, 1, 2, 3}. Some other complete residue systems modulo4include{1, 2, 3, 4},{13, 14, 15, 16}, and{−2, −1, 0, 1}.
Some sets that arenotcomplete residue systems modulo 4 are{−5, 0, 6, 22}, because 6 and 22 are congruent modulo 4, and{5, 15}, because a complete residue system modulo 4 must have exactly four pairwise incongruent members.
Given theEuler's totient functionφ(m), any set ofφ(m)integers that arerelatively primetomand mutually incongruent under modulusmis called areduced residue system modulom.[5]The set{5, 15}from above, for example, is an instance of a reduced residue system modulo 4.
Covering systems represent yet another type of residue system that may contain residues with varying moduli.
In the context of this paragraph, the modulusmis almost always taken as positive.
The set of allcongruence classesmodulomis aringcalled thering of integers modulom, and is denotedZ/mZ{\textstyle \mathbb {Z} /m\mathbb {Z} },Z/m{\displaystyle \mathbb {Z} /m}, orZm{\displaystyle \mathbb {Z} _{m}}.[6]The ringZ/mZ{\displaystyle \mathbb {Z} /m\mathbb {Z} }is fundamental to various branches of mathematics (see§ Applicationsbelow).
(In some parts ofnumber theorythe notationZm{\displaystyle \mathbb {Z} _{m}}is avoided because it can be confused with the set ofm-adic integers.)
Form> 0one has
Whenm= 1,Z/mZ{\displaystyle \mathbb {Z} /m\mathbb {Z} }is thezero ring; whenm= 0,Z/mZ{\displaystyle \mathbb {Z} /m\mathbb {Z} }is not anempty set; rather, it isisomorphictoZ{\displaystyle \mathbb {Z} }, since the congruence class ofamodulo 0 is the singleton set {a}.
Addition, subtraction, and multiplication are defined onZ/mZ{\displaystyle \mathbb {Z} /m\mathbb {Z} }by the following rules: [a] + [b] = [a + b], [a] − [b] = [a − b], and [a] · [b] = [a b].
The properties given before imply that, with these operations,Z/mZ{\displaystyle \mathbb {Z} /m\mathbb {Z} }is acommutative ring. For example, in the ringZ/24Z{\displaystyle \mathbb {Z} /24\mathbb {Z} }, one has [12] + [21] = [33] = [9],
as in the arithmetic for the 24-hour clock.
The notationZ/mZ{\displaystyle \mathbb {Z} /m\mathbb {Z} }is used because this ring is thequotient ringofZ{\displaystyle \mathbb {Z} }by theidealmZ{\displaystyle m\mathbb {Z} }, the set formed by all multiples ofm, i.e., all numbersk mwithk∈Z.{\displaystyle k\in \mathbb {Z} .}
Under addition,Z/mZ{\displaystyle \mathbb {Z} /m\mathbb {Z} }is acyclic group. All finite cyclic groups are isomorphic withZ/mZ{\displaystyle \mathbb {Z} /m\mathbb {Z} }for somem.[7]
The ring of integers modulomis afield, i.e., every nonzero element has amultiplicative inverse, if and only ifmisprime. Ifm=pkis aprime powerwithk> 1, there exists a unique (up to isomorphism) finite fieldGF(m)=Fm{\displaystyle \mathrm {GF} (m)=\mathbb {F} _{m}}withmelements, which isnotisomorphic toZ/mZ{\displaystyle \mathbb {Z} /m\mathbb {Z} }, which fails to be a field because it haszero-divisors.
Ifm> 1,(Z/mZ)×{\displaystyle (\mathbb {Z} /m\mathbb {Z} )^{\times }}denotes themultiplicative group of the integers modulomthat are invertible. It consists of the congruence classesam, whereais coprimetom; these are precisely the classes possessing a multiplicative inverse. They form anabelian groupunder multiplication; its order isφ(m), whereφisEuler's totient function.
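A small Python sketch of this group (function names are illustrative): the invertible residues modulo m are exactly those coprime to m, and counting them gives φ(m).

```python
from math import gcd

def units_mod(m: int):
    # The residues coprime to m: precisely the invertible classes in Z/mZ
    return [a for a in range(1, m) if gcd(a, m) == 1]

units = units_mod(12)
print(units, len(units))    # [1, 5, 7, 11] 4, so phi(12) = 4
# Every unit has a multiplicative inverse within the group:
assert all(any(a * b % 12 == 1 for b in units) for a in units)
```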
In pure mathematics, modular arithmetic is one of the foundations ofnumber theory, touching on almost every aspect of its study, and it is also used extensively ingroup theory,ring theory,knot theory, andabstract algebra. In applied mathematics, it is used incomputer algebra,cryptography,computer science,chemistryand thevisualandmusicalarts.
A very practical application is to calculate checksums within serial number identifiers. For example,International Standard Book Number(ISBN) uses modulo 11 (for 10-digit ISBN) or modulo 10 (for 13-digit ISBN) arithmetic for error detection. Likewise,International Bank Account Numbers(IBANs) use modulo 97 arithmetic to spot user input errors in bank account numbers. In chemistry, the last digit of theCAS registry number(a unique identifying number for each chemical compound) is acheck digit, which is calculated by taking the last digit of the first two parts of the CAS registry number times 1, the previous digit times 2, the previous digit times 3 etc., adding all these up and computing the sum modulo 10.
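As one concrete checksum, an ISBN-10 is valid exactly when its weighted digit sum is divisible by 11. A small sketch under that assumption (treating a trailing 'X' as the value 10):

```python
def isbn10_is_valid(isbn: str) -> bool:
    digits = [10 if ch == "X" else int(ch) for ch in isbn.replace("-", "")]
    if len(digits) != 10:
        return False
    # Weights 10, 9, ..., 1; the total must be divisible by 11
    total = sum(w * d for w, d in zip(range(10, 0, -1), digits))
    return total % 11 == 0

print(isbn10_is_valid("0-306-40615-2"))   # True for this well-formed example
```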
In cryptography, modular arithmetic directly underpinspublic keysystems such asRSAandDiffie–Hellman, and providesfinite fieldswhich underlieelliptic curves, and is used in a variety ofsymmetric key algorithmsincludingAdvanced Encryption Standard(AES),International Data Encryption Algorithm(IDEA), andRC4. RSA and Diffie–Hellman usemodular exponentiation.
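A toy Diffie–Hellman exchange built on nothing but modular exponentiation; the prime and generator below are illustrative and far too small for real use.

```python
p, g = 23, 5                     # toy public parameters: prime modulus and generator

a_secret, b_secret = 6, 15       # private exponents chosen by each party
A = pow(g, a_secret, p)          # one party publishes g^a mod p
B = pow(g, b_secret, p)          # the other publishes g^b mod p

shared_a = pow(B, a_secret, p)   # (g^b)^a mod p
shared_b = pow(A, b_secret, p)   # (g^a)^b mod p
assert shared_a == shared_b      # both sides derive the same secret
print(shared_a)
```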
In computer algebra, modular arithmetic is commonly used to limit the size of integer coefficients in intermediate calculations and data. It is used inpolynomial factorization, a problem for which all known efficient algorithms use modular arithmetic. It is used by the most efficient implementations ofpolynomial greatest common divisor, exactlinear algebraandGröbner basisalgorithms over the integers and the rational numbers. As posted onFidonetin the 1980s and archived atRosetta Code, modular arithmetic was used to disproveEuler's sum of powers conjectureon aSinclair QLmicrocomputerusing just one-fourth of the integer precision used by aCDC 6600supercomputerto disprove it two decades earlier via abrute force search.[8]
In computer science, modular arithmetic is often applied inbitwise operationsand other operations involving fixed-width, cyclicdata structures. The modulo operation, as implemented in manyprogramming languagesandcalculators, is an application of modular arithmetic that is often used in this context. The logical operatorXORsums 2 bits, modulo 2.
The use oflong divisionto turn a fraction into arepeating decimalin any base b is equivalent to modular multiplication of b modulo the denominator. For example, for decimal, b = 10.
In music, arithmetic modulo 12 is used in the consideration of the system oftwelve-tone equal temperament, whereoctaveandenharmonicequivalency occurs (that is, pitches in a 1:2 or 2:1 ratio are equivalent, and C-sharpis considered the same as D-flat).
The method ofcasting out ninesoffers a quick check of decimal arithmetic computations performed by hand. It is based on modular arithmetic modulo 9, and specifically on the crucial property that 10 ≡ 1 (mod 9).
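Because 10 ≡ 1 (mod 9), a number and its digit sum leave the same remainder modulo 9, which is all the check relies on; a two-line illustration in Python:

```python
n = 48_613 * 927   # some product whose hand calculation is to be checked
assert n % 9 == sum(int(d) for d in str(n)) % 9   # digit sum agrees with n modulo 9
print(n % 9, sum(int(d) for d in str(n)) % 9)
```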
Arithmetic modulo 7 is used in algorithms that determine the day of the week for a given date. In particular,Zeller's congruenceand theDoomsday algorithmmake heavy use of modulo-7 arithmetic.
More generally, modular arithmetic also has application in disciplines such aslaw(e.g.,apportionment),economics(e.g.,game theory) and other areas of thesocial sciences, whereproportionaldivision and allocation of resources plays a central part of the analysis.
Since modular arithmetic has such a wide range of applications, it is important to know how hard it is to solve a system of congruences. A linear system of congruences can be solved inpolynomial timewith a form ofGaussian elimination, for details seelinear congruence theorem. Algorithms, such asMontgomery reduction, also exist to allow simple arithmetic operations, such as multiplication andexponentiation modulom, to be performed efficiently on large numbers.
Some operations, like finding adiscrete logarithmor aquadratic congruenceappear to be as hard asinteger factorizationand thus are a starting point forcryptographic algorithmsandencryption. These problems might beNP-intermediate.
Solving a system of non-linear modular arithmetic equations isNP-complete.[9]
|
https://en.wikipedia.org/wiki/Integers_modulo_n
|
48(forty-eight) is thenatural numberfollowing47and preceding49. It is one third of agross, or fourdozens.
48 is ahighly composite number, and aStørmer number.[1]
By a classical result ofHonsberger, the number ofincongruentinteger-sidedtrianglesofperimetermis given by the nearest integer to m^2/48 for evenm, and by the nearest integer to (m + 3)^2/48 for oddm.[2]
48 is theorderoffull octahedral symmetry, which describes three-dimensional mirrorsymmetriesassociated with theregularoctahedron andcube.
There are 48symmetriesof acube.
Forty-eightmay also refer to:
|
https://en.wikipedia.org/wiki/48_(number)
|
Incryptography,Triple DES(3DESorTDES), officially theTriple Data Encryption Algorithm(TDEAorTriple DEA), is asymmetric-keyblock cipher, which applies theDEScipher algorithm three times to each data block. The 56-bit key of the Data Encryption Standard (DES) is no longer considered adequate in the face of modern cryptanalytic techniques and supercomputing power; Triple DES increases the effective security to 112 bits. ACVEreleased in 2016,CVE-2016-2183, disclosed a major security vulnerability in the DES and 3DES encryption algorithms. This CVE, combined with the inadequate key size of 3DES, led toNISTdeprecating 3DES in 2019 and disallowing all uses (except processing already encrypted data) by the end of 2023.[1]It has been replaced with the more secure, more robustAES.
While US government and industry standards abbreviate the algorithm's name as TDES (Triple DES) and TDEA (Triple Data Encryption Algorithm),[2]RFC 1851 referred to it as 3DES from the time it first promulgated the idea, and this namesake has since come into wide use by most vendors, users, and cryptographers.[3][4][5][6]
In 1978, a triple encryption method using DES with two 56-bit keys was proposed byWalter Tuchman; in 1981,MerkleandHellmanproposed a more secure triple-key version of 3DES with 112 bits of security.[7]
The Triple Data Encryption Algorithm is variously defined in several standards documents:
The original DES cipher'skey sizeof 56 bits was considered generally sufficient when it was designed, but the availability of increasing computational power madebrute-force attacksfeasible. Triple DES provides a relatively simple method of increasing the key size of DES to protect against such attacks, without the need to design a completely new block cipher algorithm.
A naive approach to increase the strength of a block encryption algorithm with a short key length (like DES) would be to use two keys(K1,K2){\displaystyle (K1,K2)}instead of one, and encrypt each block twice:EK2(EK1(plaintext)){\displaystyle E_{K2}(E_{K1}({\textrm {plaintext}}))}. If the original key length isn{\displaystyle n}bits, one would hope this scheme provides security equivalent to using a key2n{\displaystyle 2n}bits long. Unfortunately, this approach is vulnerable to themeet-in-the-middle attack: given aknown plaintextpair(x,y){\displaystyle (x,y)}, such thaty=EK2(EK1(x)){\displaystyle y=E_{K2}(E_{K1}(x))}, one can recover the key pair(K1,K2){\displaystyle (K1,K2)}in2n+1{\displaystyle 2^{n+1}}steps, instead of the22n{\displaystyle 2^{2n}}steps one would expect from an ideally secure algorithm with2n{\displaystyle 2n}bits of key.
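The attack can be demonstrated on a deliberately weak toy cipher (an 8-bit keyed byte permutation standing in for DES; everything here is illustrative and not DES itself): encrypt a known plaintext under every candidate first key, index the intermediate values, then decrypt the known ciphertext under every candidate second key and look for matches.

```python
from collections import defaultdict

def toy_encrypt(key: int, block: int) -> int:
    # Stand-in 8-bit "block cipher": a keyed byte permutation, NOT DES
    return ((block ^ key) * 167 + key) % 256

def toy_decrypt(key: int, block: int) -> int:
    inv = pow(167, -1, 256)                       # 167 is odd, hence invertible mod 256
    return (((block - key) * inv) % 256) ^ key

k1, k2, plain = 0x3A, 0xC5, 0x42
cipher = toy_encrypt(k2, toy_encrypt(k1, plain))  # "double encryption"

# Meet in the middle: roughly 2 * 2^8 cipher calls instead of 2^16
middle = defaultdict(list)
for a in range(256):
    middle[toy_encrypt(a, plain)].append(a)       # forward half, indexed by intermediate value

candidates = [(a, b) for b in range(256)
              for a in middle.get(toy_decrypt(b, cipher), [])]
print((k1, k2) in candidates, len(candidates))    # True, plus some false positives
```

In a real attack, a second known plaintext-ciphertext pair is typically used to filter out the remaining false matches.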
Therefore, Triple DES uses a "key bundle" that comprises three DESkeys,K1{\displaystyle K1},K2{\displaystyle K2}andK3{\displaystyle K3}, each of 56 bits (excludingparity bits). The encryption algorithm is:
That is, encrypt withK1{\displaystyle K1},decryptwithK2{\displaystyle K2}, then encrypt withK3{\displaystyle K3}.
Decryption is the reverse:
That is, decrypt withK3{\displaystyle K3},encryptwithK2{\displaystyle K2}, then decrypt withK1{\displaystyle K1}.
Each triple encryption encrypts oneblockof 64 bits of data.
In each case, the middle operation is the reverse of the first and last. This improves the strength of the algorithm when usingkeying option2 and providesbackward compatibilitywith DES with keying option 3.
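A structural sketch of the EDE construction. The single-DES functions below are toy stand-ins (an invertible keyed transform on 64-bit words, not the real cipher); the point is only the ordering of the operations and why keying option 3 collapses to single DES.

```python
MASK64 = (1 << 64) - 1
ODD = 0x9E3779B97F4A7C15                 # odd constant, so multiplication is invertible mod 2^64

def des_encrypt(key: int, block: int) -> int:
    # Toy stand-in for single-DES encryption of a 64-bit block -- NOT the real cipher
    return ((block ^ key) * ODD + key) & MASK64

def des_decrypt(key: int, block: int) -> int:
    inv = pow(ODD, -1, 1 << 64)
    return (((block - key) * inv) & MASK64) ^ key

def tdes_encrypt(k1: int, k2: int, k3: int, block: int) -> int:
    # EDE: encrypt with K1, decrypt with K2, encrypt with K3
    return des_encrypt(k3, des_decrypt(k2, des_encrypt(k1, block)))

def tdes_decrypt(k1: int, k2: int, k3: int, block: int) -> int:
    # Reverse order: decrypt with K3, encrypt with K2, decrypt with K1
    return des_decrypt(k1, des_encrypt(k2, des_decrypt(k3, block)))

block, k = 0x0123456789ABCDEF, 0x1337
assert tdes_decrypt(1, 2, 3, tdes_encrypt(1, 2, 3, block)) == block
# Keying option 3 (K1 = K2 = K3): the first two operations cancel, leaving single "DES"
assert tdes_encrypt(k, k, k, block) == des_encrypt(k, block)
```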
The standards define three keying options:
Keying option 1 (all three keys are independent): this is the strongest, with 3 × 56 = 168 independent key bits. It is still vulnerable to themeet-in-the-middle attack, but the attack requires about 2^(2 × 56) = 2^112 steps.
Keying option 2 (K1 and K2 are independent, and K3 = K1): this provides a shorter key length of 2 × 56 = 112 bits and a reasonable compromise between DES and keying option 1, with the same caveat as above.[18]This is an improvement over "double DES", which only requires 2^56 steps to attack. NIST disallowed this option in 2015.[16]
Keying option 3 (all three keys are identical): this is backward-compatible with DES, since two of the operations cancel out. ISO/IEC 18033-3 never allowed this option, and NIST no longer allows K1= K2or K2= K3.[16][13]
Each DES key is 8odd-paritybytes, with 56 bits of key and 8 bits of error-detection.[9]A key bundle requires 24 bytes for option 1, 16 for option 2, or 8 for option 3.
NIST (and the current TCG specifications version 2.0 of approved algorithms forTrusted Platform Module) also disallows using any one of the 64 following 64-bit values in any keys (note that 32 of them are the binary complement of the 32 others; and that 32 of these keys are also the reverse permutation of bytes of the 32 others), listed here in hexadecimal (in each byte, the least significant bit is an odd-parity generated bit, which is discarded when forming the effectively 56-bit key):
With these restrictions on allowed keys, Triple DES was reapproved with keying options 1 and 2 only. Generally, the three keys are generated by taking 24 bytes from a strong random generator, and only keying option 1 should be used (option 2 needs only 16 random bytes, but strong random generators are hard to assert and it is considered best practice to use only option 1).
As with all block ciphers, encryption and decryption of multiple blocks of data may be performed using a variety ofmodes of operation, which can generally be defined independently of the block cipher algorithm. However, ANS X9.52 specifies directly, and NIST SP 800-67 specifies via SP 800-38A,[19]that some modes shall only be used with certain constraints on them that do not necessarily apply to general specifications of those modes. For example, ANS X9.52 specifies that forcipher block chaining, theinitialization vectorshall be different each time, whereas ISO/IEC 10116[20]does not. FIPS PUB 46-3 and ISO/IEC 18033-3 define only the single-block algorithm, and do not place any restrictions on the modes of operation for multiple blocks.
In general, Triple DES with three independent keys (keying option1) has a key length of 168 bits (three 56-bit DES keys), but due to themeet-in-the-middle attack, the effective security it provides is only 112 bits.[16]Keying option 2 reduces the effective key size to 112 bits (because the third key is the same as the first). However, this option is susceptible to certainchosen-plaintextorknown-plaintextattacks,[21][22]and thus it is designated by NIST to have only 80bits of security.[16]This can be considered insecure; as a consequence, Triple DES's planned deprecation was announced by NIST in 2017.[23]
The short block size of 64 bits makes 3DES vulnerable to block collision attacks if it is used to encrypt large amounts of data with the same key. The Sweet32 attack shows how this can be exploited in TLS and OpenVPN.[24]Practical Sweet32 attack on 3DES-based cipher-suites in TLS required236.6{\displaystyle 2^{36.6}}blocks (785 GB) for a full attack, but researchers were lucky to get a collision just after around220{\displaystyle 2^{20}}blocks, which took only 25 minutes.
The security of TDEA is affected by the number of blocks processed with one key bundle. One key bundle shall not be used to apply cryptographic protection (e.g., encrypt) more than220{\displaystyle 2^{20}}64-bit data blocks.
OpenSSLdoes not include 3DES by default since version 1.1.0 (August 2016) and considers it a "weak cipher".[25]
As of 2008, theelectronic paymentindustry uses Triple DES and continues to develop and promulgate standards based upon it, such asEMV.[26]
Earlier versions ofMicrosoft OneNote,[27]Microsoft Outlook2007[28]and MicrosoftSystem Center Configuration Manager2012[29]use Triple DES to password-protect user content and system data. However, in December 2018, Microsoft announced the retirement of 3DES throughout their Office 365 service.[30]
FirefoxandMozilla Thunderbirduse Triple DES inCBC modeto encrypt website authentication login credentials when using a master password.[31]
Below is a list of cryptography libraries that support Triple DES:
Some implementations above may not include 3DES in the default build, in later or more recent versions, but may still support decryption in order to handle existing data.
|
https://en.wikipedia.org/wiki/Double_DES
|
Incryptography, theInternational Data Encryption Algorithm(IDEA), originally calledImproved Proposed Encryption Standard(IPES), is asymmetric-keyblock cipherdesigned byJames MasseyofETH ZurichandXuejia Laiand was first described in 1991. The algorithm was intended as a replacement for theData Encryption Standard(DES). IDEA is a minor revision of an earliercipher, the Proposed Encryption Standard (PES).
The cipher was designed under a research contract with the Hasler Foundation, which became part of Ascom-Tech AG. The cipher was patented in a number of countries but was freely available for non-commercial use. The name "IDEA" is also atrademark. The lastpatentsexpired in 2012, and IDEA is now patent-free and thus completely free for all uses.[2]
IDEA was used inPretty Good Privacy(PGP) v2.0 and was incorporated after the original cipher used in v1.0,BassOmatic, was found to be insecure.[3]IDEA is an optional algorithm in theOpenPGPstandard.
IDEA operates on 64-bitblocksusing a 128-bitkeyand consists of a series of 8 identical transformations (a round) and an output transformation (thehalf-round). The processes for encryption and decryption are similar. IDEA derives much of its security by interleaving operations from differentgroups(modularaddition and multiplication, and bitwiseeXclusive OR (XOR)), which are algebraically "incompatible" in some sense. In more detail, these operators, which all deal with 16-bit quantities, are: bitwise XOR; addition modulo 2^16; and multiplication modulo 2^16 + 1, where an all-zero input word is interpreted as 2^16 and a result of 2^16 is encoded back as the all-zero word.
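A sketch of the three 16-bit operations in Python, using the usual IDEA convention for the multiplicative group (function names are illustrative):

```python
MOD_ADD = 1 << 16          # 65536
MOD_MUL = (1 << 16) + 1    # 65537 is prime, so the nonzero words form a multiplicative group

def xor16(a: int, b: int) -> int:
    return a ^ b

def add16(a: int, b: int) -> int:
    return (a + b) % MOD_ADD

def mul16(a: int, b: int) -> int:
    # Inputs of 0x0000 stand for 2^16; a result of 2^16 is encoded back as 0x0000
    a = MOD_ADD if a == 0 else a
    b = MOD_ADD if b == 0 else b
    r = (a * b) % MOD_MUL
    return 0 if r == MOD_ADD else r

print(hex(mul16(0x0000, 0x0001)))   # 0x0: 65536 * 1 = 65536 (mod 65537), encoded as zero
print(hex(mul16(0x0002, 0x8000)))   # 0x0: 2 * 32768 = 65536, again encoded as zero
print(add16(0xFFFF, 0x0001))        # 0: addition wraps modulo 2^16
```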
After the 8 rounds comes a final "half-round", the output transformation (the swap of the middle two values cancels out the swap at the end of the last round, so that there is no net swap).
The overall structure of IDEA follows theLai–Massey scheme. XOR is used for both subtraction and addition. IDEA uses a key-dependent half-round function. To work with 16-bit words (meaning 4 inputs instead of 2 for the 64-bit block size), IDEA uses the Lai–Massey scheme twice in parallel, with the two parallel round functions being interwoven with each other. To ensure sufficient diffusion, two of the sub-blocks are swapped after each round.
Each round uses 6 16-bit sub-keys, while the half-round uses 4, a total of 52 for 8.5 rounds. The first 8 sub-keys are extracted directly from the key, with K1 from the first round being the lower 16 bits; further groups of 8 keys are created by rotating the main key left 25 bits between each group of 8. This means that it is rotated less than once per round, on average, for a total of 6 rotations.
Decryption works like encryption, but the order of the round keys is inverted, and the subkeys for the odd rounds are replaced by their inverses with respect to the corresponding group operation. For instance, the values of subkeys K1–K4 are replaced by the inverses of K49–K52 for the respective group operation, while K5 and K6 of each group are replaced by K47 and K48 for decryption.
The designers analysed IDEA to measure its strength againstdifferential cryptanalysisand concluded that it is immune under certain assumptions. No successfullinearor algebraic weaknesses have been reported. As of 2007, the best attack applied to all keys could break IDEA reduced to 6 rounds (the full IDEA cipher uses 8.5 rounds).[4]Note that a "break" is any attack that requires less than 2^128 operations; the 6-round attack requires 2^64 known plaintexts and 2^126.8 operations.
Bruce Schneierthought highly of IDEA in 1996, writing: "In my opinion, it is the best and most secure block algorithm available to the public at this time." (Applied Cryptography, 2nd ed.) However, by 1999 he was no longer recommending IDEA due to the availability of faster algorithms, some progress in itscryptanalysis, and the issue of patents.[5]
In 2011 full 8.5-round IDEA was broken using a meet-in-the-middle attack.[6]Independently in 2012, full 8.5-round IDEA was broken using a narrow-bicliques attack, with a reduction of cryptographic strength of about 2 bits, similar to the effect of the previous bicliques attack onAES; however, this attack does not threaten the security of IDEA in practice.[7]
The very simple key schedule makes IDEA subject to a class ofweak keys; some keys containing a large number of 0 bits produceweak encryption.[8]These are of little concern in practice, being sufficiently rare that they are unnecessary to avoid explicitly when generating keys randomly. A simple fix was proposed: XORing each subkey with a 16-bit constant, such as 0x0DAE.[8][9]
Larger classes of weak keys were found in 2002.[10]
This is still of negligible probability to be a concern to a randomly chosen key, and some of the problems are fixed by the constant XOR proposed earlier, but the paper is not certain if all of them are. A more comprehensive redesign of the IDEA key schedule may be desirable.[10]
A patent application for IDEA was first filed inSwitzerland(CH A 1690/90) on May 18, 1990, then an international patent application was filed under thePatent Cooperation Treatyon May 16, 1991. Patents were eventually granted inAustria,France,Germany,Italy, theNetherlands,Spain,Sweden,Switzerland, theUnited Kingdom, (European Patent Register entry forEuropean patent no. 0482154, filed May 16, 1991, issued June 22, 1994 and expired May 16, 2011), theUnited States(U.S. patent 5,214,703, issued May 25, 1993 and expired January 7, 2012) andJapan(JP 3225440, expired May 16, 2011).[11]
MediaCrypt AG is now offering a successor to IDEA and focuses on its new cipher (official release in May 2005)IDEA NXT, which was previously called FOX.
|
https://en.wikipedia.org/wiki/International_Data_Encryption_Algorithm
|
SHA-3(Secure Hash Algorithm 3) is the latest[4]member of theSecure Hash Algorithmfamily of standards, released byNISTon August 5, 2015.[5][6][7]Although part of the same series of standards, SHA-3 is internally different from theMD5-likestructureofSHA-1andSHA-2.
SHA-3 is a subset of the broader cryptographic primitive familyKeccak(/ˈkɛtʃæk/or/ˈkɛtʃɑːk/),[8][9]designed byGuido Bertoni,Joan Daemen,Michaël Peeters, andGilles Van Assche, building uponRadioGatún. Keccak's authors have proposed additional uses for the function, not (yet) standardized by NIST, including astream cipher, anauthenticated encryptionsystem, a "tree" hashing scheme for faster hashing on certain architectures,[10][11]andAEADciphers Keyak and Ketje.[12][13]
Keccak is based on a novel approach calledsponge construction.[14]Sponge construction is based on a wide random function or randompermutation, and allows inputting ("absorbing" in sponge terminology) any amount of data, and outputting ("squeezing") any amount of data, while acting as a pseudorandom function with regard to all previous inputs. This leads to great flexibility.
As of 2022, NIST does not plan to withdraw SHA-2 or remove it from the revised Secure Hash Standard.[15]The purpose of SHA-3 is that it can be directly substituted for SHA-2 in current applications if necessary, and to significantly improve the robustness of NIST's overall hash algorithm toolkit.[16]
For small message sizes, the creators of the Keccak algorithms and the SHA-3 functions suggest using the faster functionKangarooTwelvewith adjusted parameters and a new tree hashing mode without extra overhead.
The Keccak algorithm is the work of Guido Bertoni,Joan Daemen(who also co-designed theRijndaelcipher withVincent Rijmen), Michaël Peeters, andGilles Van Assche. It is based on earlier hash function designsPANAMAandRadioGatún. PANAMA was designed by Daemen and Craig Clapp in 1998. RadioGatún, a successor of PANAMA, was designed by Daemen, Peeters, and Van Assche, and was presented at the NIST Hash Workshop in 2006.[17]Thereference implementationsource codewas dedicated topublic domainviaCC0waiver.[18]
In 2006,NISTstarted to organize theNIST hash function competitionto create a new hash standard, SHA-3. SHA-3 is not meant to replaceSHA-2, as no significant attack on SHA-2 has been publicly demonstrated. Because of the successful attacks onMD5,SHA-0andSHA-1,[19][20]NIST perceived a need for an alternative, dissimilar cryptographic hash, which became SHA-3.
After a setup period, admissions were to be submitted by the end of 2008. Keccak was accepted as one of the 51 candidates. In July 2009, 14 algorithms were selected for the second round. Keccak advanced to the last round in December 2010.[21]
During the competition, entrants were permitted to "tweak" their algorithms to address issues that were discovered. Changes that have been made to Keccak are:[22][23]
On October 2, 2012, Keccak was selected as the winner of the competition.[8]
In 2014, the NIST published a draftFIPS202 "SHA-3 Standard: Permutation-Based Hash and Extendable-Output Functions".[24]FIPS 202 was approved on August 5, 2015.[25]
On August 5, 2015, NIST announced that SHA-3 had become a hashing standard.[26]
In early 2013 NIST announced they would select different values for the "capacity", the overall strength vs. speed parameter, for the SHA-3 standard, compared to the submission.[27][28]The changes caused some turmoil.
The hash function competition called for hash functions at least as secure as the SHA-2 instances. It means that ad-bit output should haved/2-bit resistance tocollision attacksandd-bit resistance topreimage attacks, the maximum achievable fordbits of output. Keccak's security proof allows an adjustable level of security based on a "capacity"c, providingc/2-bit resistance to both collision and preimage attacks. To meet the original competition rules, Keccak's authors proposedc= 2d. The announced change was to accept the samed/2-bit security for all forms of attack and standardizec=d. This would have sped up Keccak by allowing an additionaldbits of input to be hashed each iteration. However, the hash functions would not have been drop-in replacements with the same preimage resistance as SHA-2 any more; it would have been cut in half, making it vulnerable to advances in quantum computing, which effectively would cut it in half once more.[29]
In September 2013,Daniel J. Bernsteinsuggested on theNISThash-forum mailing list[30]to strengthen the security to the 576-bit capacity that was originally proposed as the default Keccak, in addition to and not included in the SHA-3 specifications.[31]This would have provided at least a SHA3-224 and SHA3-256 with the same preimage resistance as their SHA-2 predecessors, but SHA3-384 and SHA3-512 would have had significantly less preimage resistance than their SHA-2 predecessors. In late September, the Keccak team responded by stating that they had proposed 128-bit security by settingc= 256as an option already in their SHA-3 proposal.[32]Although the reduced capacity was justifiable in their opinion, in the light of the negative response, they proposed raising the capacity toc= 512bits for all instances. This would be as much as any previous standard up to the 256-bit security level, while providing reasonable efficiency,[33]but not the 384-/512-bit preimage resistance offered by SHA2-384 and SHA2-512. The authors stated that "claiming or relying onsecurity strengthlevels above 256 bits is meaningless".
In early October 2013,Bruce Schneiercriticized NIST's decision on the basis of its possible detrimental effects on the acceptance of the algorithm, saying:
There is too much mistrust in the air. NIST risks publishing an algorithm that no one will trust and no one (except those forced) will use.[34]
He later retracted his earlier statement, saying:
I misspoke when I wrote that NIST made "internal changes" to the algorithm. That was sloppy of me. The Keccak permutation remains unchanged. What NIST proposed was reducing the hash function's capacity in the name of performance. One of Keccak's nice features is that it's highly tunable.[34]
Paul Crowley, a cryptographer and senior developer at an independent software development company, expressed his support of the decision, saying that Keccak is supposed to be tunable and there is no reason for different security levels within one primitive. He also added:
Yes, it's a bit of a shame for the competition that they demanded a certain security level for entrants, then went to publish a standard with a different one. But there's nothing that can be done to fix that now, except re-opening the competition. Demanding that they stick to their mistake doesn't improve things for anyone.[35]
There was some confusion that internal changes may have been made to Keccak, which were cleared up by the original team, stating that NIST's proposal for SHA-3 is a subset of the Keccak family, for which one can generate test vectors using their reference code submitted to the contest, and that this proposal was the result of a series of discussions between them and the NIST hash team.[36]
In response to the controversy, in November 2013 John Kelsey of NIST proposed to go back to the originalc= 2dproposal for all SHA-2 drop-in replacement instances.[37]The reversion was confirmed in subsequent drafts[38]and in the final release.[5]
SHA-3 uses thesponge construction,[14]in which data is "absorbed" into the sponge, then the result is "squeezed" out. In the absorbing phase, message blocks areXORedinto a subset of the state, which is then transformed as a whole using apermutation function(ortransformation)f{\displaystyle f}. In the "squeeze" phase, output blocks are read from the same subset of the state, alternated with the state transformation functionf{\displaystyle f}. The size of the part of the state that is written and read is called the "rate" (denotedr{\displaystyle r}), and the size of the part that is untouched by input/output is called the "capacity" (denotedc{\displaystyle c}). The capacity determines the security of the scheme. The maximumsecurity levelis half the capacity.
Given an input bit stringN{\displaystyle N}, a padding functionpad{\displaystyle pad}, a permutation functionf{\displaystyle f}that operates on bit blocks of widthb{\displaystyle b}, a rater{\displaystyle r}and an output lengthd{\displaystyle d}, we have capacityc=b−r{\displaystyle c=b-r}and the sponge constructionZ=sponge[f,pad,r](N,d){\displaystyle Z={\text{sponge}}[f,pad,r](N,d)}, yielding a bit stringZ{\displaystyle Z}of lengthd{\displaystyle d}, works as follows:[6]: 18
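To make the absorb/squeeze flow concrete, here is a minimal bit-level Python sketch of the generic sponge construction. The permutation f and the padding rule pad are supplied by the caller, and the bit-list representation is chosen for clarity only; this is not the NIST reference code and is not efficient.

def sponge(f, pad, r, b, N, d):
    # f   : permutation taking and returning a list of b bits
    # pad : padding rule, pad(message_length, r) -> list of bits to append
    # r   : rate in bits; the capacity is c = b - r
    # N   : message as a list of bits; d : requested output length in bits
    S = [0] * b                                   # state, initially all zeros
    P = N + pad(len(N), r)                        # padded message, length a multiple of r
    # Absorbing phase: XOR each r-bit block into the first r bits of S, then permute.
    for i in range(0, len(P), r):
        block = P[i:i + r]
        S = f([s ^ m for s, m in zip(S[:r], block)] + S[r:])
    # Squeezing phase: read r bits at a time, permuting between reads, until d bits are output.
    Z = []
    while len(Z) < d:
        Z.extend(S[:r])
        if len(Z) < d:
            S = f(S)
    return Z[:d]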
The fact that the internal stateScontainscadditional bits of information in addition to what is output toZprevents thelength extension attacksthat SHA-2, SHA-1, MD5 and other hashes based on theMerkle–Damgård constructionare susceptible to.
In SHA-3, the stateSconsists of a5 × 5array ofw-bit words (withw= 64),b= 5 × 5 ×w= 5 × 5 × 64 = 1600 bits total. Keccak is also defined for smaller power-of-2 word sizeswdown to 1 bit (total state of 25 bits). Small state sizes can be used to test cryptanalytic attacks, and intermediate state sizes (fromw= 8, 200 bits, tow= 32, 800 bits) can be used in practical, lightweight applications.[12][13]
For SHA3-224, SHA3-256, SHA3-384, and SHA3-512 instances,ris greater thand, so there is no need for additional block permutations in the squeezing phase; the leadingdbits of the state are the desired hash. However, SHAKE128 and SHAKE256 allow an arbitrary output length, which is useful in applications such asoptimal asymmetric encryption padding.
To ensure the message can be evenly divided intor-bit blocks, padding is required. SHA-3 uses the pattern 10...01 in its padding function: a 1 bit, followed by zero or more 0 bits (maximumr− 1) and a final 1 bit.
The maximum ofr− 1zero bits occurs when the last message block isr− 1bits long. Then another block is added after the initial 1 bit, containingr− 1zero bits before the final 1 bit.
The two 1 bits will be added even if the length of the message is already divisible byr.[6]: 5.1In this case, another block is added to the message, containing a 1 bit, followed by a block ofr− 2zero bits and another 1 bit. This is necessary so that a message with length divisible byrending in something that looks like padding does not produce the same hash as the message with those bits removed.
The initial 1 bit is required so messages differing only in a few additional 0 bits at the end do not produce the same hash.
The position of the final 1 bit indicates which raterwas used (multi-rate padding), which is required for the security proof to work for different hash variants. Without it, different hash variants of the same short message would be the same up to truncation.
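A small Python sketch of this multi-rate (pad10*1) rule, expressed over bit lists; it ignores the separate domain-separation bits that the SHA-3 instances append to the message before padding.

def pad10star1(m_len, r):
    # Append 1, then the minimum number of 0s, then 1, so that the
    # total length m_len + j + 2 is a multiple of the rate r (all in bits).
    j = (-m_len - 2) % r
    return [1] + [0] * j + [1]

For example, if the last message block is r − 1 bits long, j = r − 1 and a whole extra block is appended, exactly as described above.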
The block transformationf, which is Keccak-f[1600] for SHA-3, is a permutation that usesXOR,ANDandNOToperations, and is designed for easy implementation in both software and hardware.
It is defined for any power-of-twowordsize,w= 2ℓbits. The main SHA-3 submission uses 64-bit words,ℓ= 6.
The state can be considered to be a5 × 5 ×warray of bits. Leta[i][j][k]be bit(5i+j) ×w+kof the input, using alittle-endianbit numbering convention androw-majorindexing. I.e.iselects the row,jthe column, andkthe bit.
Index arithmetic is performed modulo 5 for the first two dimensions and modulowfor the third.
The basic block permutation function consists of12 + 2ℓrounds of five steps:
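The five step mappings are conventionally called θ, ρ, π, χ and ι. As a flavour of how simple each step is, the following is a hedged Python sketch of the θ step alone, acting on the 5 × 5 array of 64-bit lanes; the helper rotl64 and the indexing convention A[x][y] are illustrative choices, not the reference code.

def rotl64(x, n):
    # Rotate a 64-bit lane left by n bits.
    n %= 64
    return ((x << n) | (x >> (64 - n))) & 0xFFFFFFFFFFFFFFFF

def theta(A):
    # theta step of Keccak-f[1600]: A is a 5x5 list of 64-bit lanes.
    C = [A[x][0] ^ A[x][1] ^ A[x][2] ^ A[x][3] ^ A[x][4] for x in range(5)]
    D = [C[(x - 1) % 5] ^ rotl64(C[(x + 1) % 5], 1) for x in range(5)]
    return [[A[x][y] ^ D[x] for y in range(5)] for x in range(5)]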
The speed of SHA-3 hashing of long messages is dominated by the computation off= Keccak-f[1600] and XORingSwith the extendedPi, an operation onb= 1600 bits. However, since the lastcbits of the extendedPiare 0 anyway, and XOR with 0 is a NOP, it is sufficient to perform XOR operations only forrbits (r= 1600 − 2 × 224 = 1152 bits for SHA3-224, 1088 bits for SHA3-256, 832 bits for SHA3-384 and 576 bits for SHA3-512). The lowerris (and, conversely, the higherc=b−r= 1600 −r), the less efficient but more secure the hashing becomes since fewer bits of the message can be XORed into the state (a quick operation) before each application of the computationally expensivef.
The authors report the following speeds for software implementations of Keccak-f[1600] plus XORing 1024 bits,[1]which roughly corresponds to SHA3-256:
For the exact SHA3-256 on x86-64, Bernstein measures 11.7–12.25 cpb depending on the CPU.[40]: 7SHA-3 has been criticized for being slow on instruction set architectures (CPUs) which do not have instructions meant specially for computing Keccak functions faster – SHA2-512 is more than twice as fast as SHA3-512, and SHA-1 is more than three times as fast on an Intel Skylake processor clocked at 3.2 GHz.[41]The authors have reacted to this criticism by suggesting to use SHAKE128 and SHAKE256 instead of SHA3-256 and SHA3-512, at the expense of cutting the preimage resistance in half (but while keeping the collision resistance). With this, performance is on par with SHA2-256 and SHA2-512.
However, inhardware implementations, SHA-3 is notably faster than all other finalists,[42]and also faster than SHA-2 and SHA-1.[41]
As of 2018, ARM's ARMv8[43]architecture includes special instructions which enable Keccak algorithms to execute faster and IBM'sz/Architecture[44]includes a complete implementation of SHA-3 and SHAKE in a single instruction. There have also been extension proposals forRISC-Vto add Keccak-specific instructions.[45]
The NIST standard defines the following instances, for messageMand output lengthd:[6]: 20, 23
With the following definitions
SHA-3 instances are drop-in replacements for SHA-2, intended to have identical security properties.
SHAKE will generate as many bits from its sponge as requested, thus being extendable-output functions (XOFs). For example, SHAKE128(M, 256) can be used as a hash function with a 256-bit output and 128-bit security strength. Because arbitrarily long outputs can be requested, the SHAKE functions can also serve as pseudo-random number generators. Alternatively, SHAKE256(M, 128) can be used as a hash function with a 128-bit output and 128-bit security strength.[6]
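These instances are exposed by Python's standard hashlib module (CPython 3.6+); a short illustration of the fixed-length and extendable-output forms follows. The message is an arbitrary example.

import hashlib

msg = b"The quick brown fox jumps over the lazy dog"

# Fixed-length instances defined by FIPS 202.
print(hashlib.sha3_256(msg).hexdigest())
print(hashlib.sha3_512(msg).hexdigest())

# Extendable-output functions: the caller chooses the output length (in bytes here).
print(hashlib.shake_128(msg).hexdigest(32))   # 256-bit output, 128-bit security strength
print(hashlib.shake_256(msg).hexdigest(16))   # 128-bit output, 128-bit security strength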
All instances append some bits to the message, the rightmost of which represent thedomain separationsuffix. The purpose of this is to ensure that it is not possible to construct messages that produce the same hash output for different applications of the Keccak hash function. The following domain separation suffixes exist:[6][46]
In December 2016NISTpublished a new document, NIST SP.800-185,[47]describing additional SHA-3-derived functions:
• X is the main input bit string. It may be of any length, including zero.
• L is an integer representing the requested output length in bits.
• N is a function-name bit string, used by NIST to define functions based on cSHAKE. When no function other than cSHAKE is desired, N is set to the empty string.
• S is a customization bit string. The user selects this string to define a variant of the function. When no customization is desired, S is set to the empty string.
• K is a key bit string of any length, including zero.
• B is the block size in bytes for parallel hashing. It may be any integer such that 0 < B < 2^2040.
In 2016 the same team that made the SHA-3 functions and the Keccak algorithm introduced faster reduced-rounds (reduced to 12 and 14 rounds, from the 24 in SHA-3) alternatives which can exploit the availability of parallel execution because of usingtree hashing: KangarooTwelve and MarsupilamiFourteen.[49]
These functions differ from ParallelHash, the FIPS standardized Keccak-based parallelizable hash function, with regard to the parallelism, in that they are faster than ParallelHash for small message sizes.
The reduced number of rounds is justified by the huge cryptanalytic effort focused on Keccak which did not produce practical attacks on anything close to twelve-round Keccak. These higher-speed algorithms are not part of SHA-3 (as they are a later development), and thus are not FIPS compliant; but because they use the same Keccak permutation they are secure for as long as there are no attacks on SHA-3 reduced to 12 rounds.[49]
KangarooTwelve is a higher-performance reduced-round (from 24 to 12 rounds) version of Keccak which claims to have 128 bits of security[50]while having performance as high as 0.55cycles per byteon aSkylakeCPU.[51]This algorithm is anIETFRFCdraft.[52]
MarsupilamiFourteen, a slight variation on KangarooTwelve, uses 14 rounds of the Keccak permutation and claims 256 bits of security. Note that 256-bit security is not more useful in practice than 128-bit security, but may be required by some standards.[50]128 bits are already sufficient to defeat brute-force attacks on current hardware, so having 256-bit security does not add practical value, unless the user is worried about significant advancements in the speed ofclassicalcomputers. For resistance againstquantumcomputers, see below.
KangarooTwelve and MarsupilamiFourteen are Extendable-Output Functions, similar to SHAKE; therefore they generate closely related output for a common message with different output lengths (the longer output is an extension of the shorter output). Such a property is not exhibited by hash functions such as SHA-3 or ParallelHash (except for their XOF variants).[6]
In 2016, the Keccak team released a different construction called the Farfalle construction, and Kravatte, an instance of Farfalle using the Keccak-p permutation,[53] as well as two authenticated encryption algorithms, Kravatte-SANE and Kravatte-SANSE.[54]
RawSHAKE is the basis for the Sakura coding for tree hashing, which has not been standardized yet. Sakura uses a suffix of 1111 for single nodes, equivalent to SHAKE, and other generated suffixes depending on the shape of the tree.[46]: 16
There is a general result (Grover's algorithm) that quantum computers can perform a structuredpreimage attackin2d=2d/2{\displaystyle {\sqrt {2^{d}}}=2^{d/2}}, while a classical brute-force attack needs 2d. A structured preimage attack implies a second preimage attack[29]and thus acollision attack. A quantum computer can also perform abirthday attack, thus break collision resistance, in2d3=2d/3{\displaystyle {\sqrt[{3}]{2^{d}}}=2^{d/3}}[55](although that is disputed).[56]Noting that the maximum strength can bec/2{\displaystyle c/2}, this gives the following upper[57]bounds on the quantum security of SHA-3:
It has been shown that theMerkle–Damgård construction, as used by SHA-2, is collapsing and, by consequence, quantum collision-resistant,[58]but for the sponge construction used by SHA-3, the authors provide proofs only for the case when the block functionfis not efficiently invertible; Keccak-f[1600], however, is efficiently invertible, and so their proof does not apply.[59][original research]
The following hash values are from NIST.gov:[60]
Changing a single bit causes each bit in the output to change with 50% probability, demonstrating anavalanche effect:
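This can be observed directly with the standard-library implementation; the message, the flipped bit and the helper bit_diff below are arbitrary illustrative choices.

import hashlib

def bit_diff(a, b):
    # Count the number of differing bits between two equal-length byte strings.
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

m1 = b"The quick brown fox jumps over the lazy dog"
m2 = bytearray(m1)
m2[-1] ^= 0x01                      # flip a single bit of the last byte ('g' -> 'f')

h1 = hashlib.sha3_256(m1).digest()
h2 = hashlib.sha3_256(bytes(m2)).digest()
print(bit_diff(h1, h2), "of", len(h1) * 8, "output bits differ")   # typically close to 128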
In the table below,internal statemeans the number of bits that are carried over to the next block.
Optimized implementations of SHA3-256 using AVX-512VL (e.g. from OpenSSL, running on Skylake-X CPUs) achieve about 6.4 cycles per byte for large messages,[66] and about 7.8 cycles per byte when using AVX2 on Skylake CPUs.[67] Performance on other x86, POWER and ARM CPUs varies from about 8 to 15 cycles per byte depending on the instructions used and the exact CPU model,[68][69][70] with some older x86 CPUs taking up to 25–40 cycles per byte.[71]
Below is a list of cryptography libraries that support SHA-3:
Apple A13ARMv8 six-coreSoCCPU cores have support[72]for accelerating SHA-3 (and SHA-512) using specialized instructions (EOR3, RAX1, XAR, BCAX) from ARMv8.2-SHA crypto extension set.[73]
Some software libraries usevectorizationfacilities of CPUs to accelerate usage of SHA-3. For example, Crypto++ can useSSE2on x86 for accelerating SHA3,[74]andOpenSSLcan useMMX,AVX-512orAVX-512VLon many x86 systems too.[75]AlsoPOWER8CPUs implement 2x64-bit vector rotate, defined in PowerISA 2.07, which can accelerate SHA-3 implementations.[76]Most implementations for ARM do not useNeonvector instructions asscalar codeis faster. ARM implementations can however be accelerated usingSVEand SVE2 vector instructions; these are available in theFujitsu A64FXCPU for instance.[77]
The IBMz/Architecturesupports SHA-3 since 2017 as part of the Message-Security-Assist Extension 6.[78]The processors support a complete implementation of the entire SHA-3 and SHAKE algorithms via the KIMD and KLMD instructions using a hardware assist engine built into each core.
Ethereumuses the Keccak-256 hash function (as per version 3 of the winning entry to the SHA-3 contest by Bertoni et al., which is different from the final SHA-3 specification).[79]
|
https://en.wikipedia.org/wiki/Keccak
|
A space–time trade-off, also known as a time–memory trade-off or the algorithmic space-time continuum in computer science, is a case where an algorithm or program trades increased space usage for decreased time. Here, space refers to the data storage consumed in performing a given task (RAM, HDD, etc.), and time refers to the time consumed in performing a given task (computation time or response time).
The utility of a given space–time tradeoff is affected by relatedfixedandvariable costs(of, e.g.,CPUspeed, storage space), and is subject todiminishing returns.
Biological usage of time–memory tradeoffs can be seen in the earlier stages ofanimal behavior. Using stored knowledge or encoding stimuli reactions as "instincts" in the DNA avoids the need for "calculation" in time-critical situations. More specific to computers,look-up tableshave been implemented since the very earliest operating systems.[citation needed]
In 1980Martin Hellmanfirst proposed using a time–memory tradeoff forcryptanalysis.[1]
A common situation is an algorithm involving alookup table: an implementation can include the entire table, which reduces computing time, but increases the amount of memory needed, or it can compute table entries as needed, increasing computing time, but reducing memory requirements.
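A minimal Python illustration of the trade-off, using a hypothetical popcount helper: the table-based version spends 256 extra integers of memory to avoid recomputing bit counts, while the table-free version recomputes them on every call.

# Space-for-time: precompute a table of bit counts for every byte value once.
POPCOUNT_TABLE = [bin(i).count("1") for i in range(256)]

def popcount_bytes(data):
    # Answer each query with a table lookup per byte.
    return sum(POPCOUNT_TABLE[b] for b in data)

# Time-for-space alternative: no table, recompute per byte on every call.
def popcount_bytes_slow(data):
    return sum(bin(b).count("1") for b in data)

print(popcount_bytes(b"space-time"), popcount_bytes_slow(b"space-time"))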
Database management systems offer the capability to create database index data structures. Indexes improve the speed of lookup operations at the cost of additional space. Without indexes, time-consuming full table scans are sometimes required to locate desired data.
A space–time trade off can be applied to the problem of data storage. If data is stored uncompressed, it takes more space but access takes less time than if the data were stored compressed (since compressing the data reduces the amount of space it takes, but it takes time to run thedecompression algorithm). Depending on the particular instance of the problem, either way is practical. There are also rare instances where it is possible to directly work with compressed data, such as in the case of compressedbitmap indices, where it is faster to work with compression than without compression.
Storing only theSVGsource of avector imageand rendering it as abitmap imageevery time the page is requested would be trading time for space; more time used, but less space. Rendering the image when the page is changed and storing the rendered images would be trading space for time; more space used, but less time. This technique is more generally known ascaching.
Larger code size can be traded for higher program speed when applyingloop unrolling. This technique makes the code longer for each iteration of a loop, but saves the computation time required for jumping back to the beginning of the loop at the end of each iteration.
Algorithms that also make use of space–time tradeoffs include:
|
https://en.wikipedia.org/wiki/Time-memory_tradeoff
|
Incryptography, apreimage attackoncryptographic hash functionstries to find amessagethat has a specific hash value. A cryptographic hash function should resist attacks on itspreimage(set of possible inputs).
In the context of attack, there are two types of preimage resistance:
These can be compared with acollision resistance, in which it is computationally infeasible to find any two distinct inputsx,x′that hash to the same output; i.e., such thath(x) =h(x′).[1]
Collision resistance implies second-preimage resistance. Second-preimage resistance implies preimage resistance only if the size of the hash function's inputs can be substantially (e.g., factor 2) larger than the size of the hash function's outputs.[1]Conversely, a second-preimage attack implies a collision attack (trivially, since, in addition tox′,xis already known right from the start).
By definition, an ideal hash function is such that the fastest way to compute a first or second preimage is through abrute-force attack. For ann-bit hash, this attack has atime complexity2n, which is considered too high for a typical output size ofn= 128 bits. If such complexity is the best that can be achieved by an adversary, then the hash function is considered preimage-resistant. However, there is a general result thatquantum computersperform a structured preimage attack in2n=2n2{\displaystyle {\sqrt {2^{n}}}=2^{\frac {n}{2}}}, which also implies second preimage[2]and thus a collision attack.
Faster preimage attacks can be found bycryptanalysingcertain hash functions, and are specific to that function. Some significant preimage attacks have already been discovered, but they are not yet practical. If a practical preimage attack is discovered, it would drastically affect many Internet protocols. In this case, "practical" means that it could be executed by an attacker with a reasonable amount of resources. For example, a preimaging attack that costs trillions of dollars and takes decades to preimage one desired hash value or one message is not practical; one that costs a few thousand dollars and takes a few weeks might be very practical.
All currently known practical or almost-practical attacks[3][4]onMD5andSHA-1arecollision attacks.[5]In general, a collision attack is easier to mount than a preimage attack, as it is not restricted by any set value (any two values can be used to collide). The time complexity of a brute-force collision attack, in contrast to the preimage attack, is only2n2{\displaystyle 2^{\frac {n}{2}}}.
The computational infeasibility of a first preimage attack on an ideal hash function assumes that the set of possible hash inputs is too large for a brute force search. However if a given hash value is known to have been produced from a set of inputs that is relatively small or is ordered by likelihood in some way, then a brute force search may be effective. Practicality depends on the input set size and the speed or cost of computing the hash function.
A common example is the use of hashes to store password validation data for authentication. Rather than store the plaintext of user passwords, an access control system stores a hash of the password. When a user requests access, the password they submit is hashed and compared with the stored value. If the stored validation data is stolen, the thief will only have the hash values, not the passwords. However, most users choose passwords in predictable ways, and many passwords are short enough that all possible combinations can be tested if fast hashes are used, even if the hash is rated secure against preimage attacks.[6] Special hashes called key derivation functions have been created to slow such searches; see password cracking. For a method to prevent the testing of short passwords, see salt (cryptography).
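A hedged Python sketch of this idea using the standard library's PBKDF2; the function names, salt size and iteration count are illustrative choices, not a recommendation.

import hashlib, hmac, os

def hash_password(password, iterations=600_000):
    # Derive a salted, deliberately slow hash so that brute-force guessing
    # of likely passwords is expensive (illustrative parameters only).
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, digest

def verify_password(password, salt, iterations, expected):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, expected)

salt, iters, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, iters, stored))   # True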
|
https://en.wikipedia.org/wiki/Preimage_attack
|
In mathematics, a ring homomorphism is a structure-preserving function between two rings. More explicitly, if R and S are rings, then a ring homomorphism is a function f : R → S that preserves addition, multiplication and the multiplicative identity; that is,[1][2][3][4][5]

f(a + b) = f(a) + f(b),
f(ab) = f(a) f(b),
f(1R) = 1S

for all a, b in R.
These conditions imply that additive inverses and the additive identity are also preserved.
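As a concrete example, reduction modulo n (here n = 6) from the integers to Z/6Z is a ring homomorphism; the following Python snippet merely spot-checks the three defining conditions on a range of sample values.

# Numerically spot-check that f(a) = a mod 6, from the integers to Z/6Z,
# satisfies the three defining conditions of a ring homomorphism.
def f(a):
    return a % 6

for a in range(-20, 21):
    for b in range(-20, 21):
        assert f(a + b) == (f(a) + f(b)) % 6      # preserves addition
        assert f(a * b) == (f(a) * f(b)) % 6      # preserves multiplication
assert f(1) == 1                                   # preserves the multiplicative identity
print("f(a) = a mod 6 behaves as a ring homomorphism on the sampled values")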
If, in addition,fis abijection, then itsinversef−1is also a ring homomorphism. In this case,fis called aring isomorphism, and the ringsRandSare calledisomorphic. From the standpoint of ring theory, isomorphic rings have exactly the same properties.
IfRandSarerngs, then the corresponding notion is that of arng homomorphism,[a]defined as above except without the third conditionf(1R) = 1S. A rng homomorphism between (unital) rings need not be a ring homomorphism.
Thecompositionof two ring homomorphisms is a ring homomorphism. It follows that the rings forms acategorywith ring homomorphisms asmorphisms(seeCategory of rings).
In particular, one obtains the notions of ring endomorphism, ring isomorphism, and ring automorphism.
Letf:R→Sbe a ring homomorphism. Then, directly from these definitions, one can deduce:
Moreover,
Injective ring homomorphisms are identical tomonomorphismsin the category of rings: Iff:R→Sis a monomorphism that is not injective, then it sends somer1andr2to the same element ofS. Consider the two mapsg1andg2fromZ[x] toRthat mapxtor1andr2, respectively;f∘g1andf∘g2are identical, but sincefis a monomorphism this is impossible.
However, surjective ring homomorphisms are vastly different from epimorphisms in the category of rings. For example, the inclusion Z ⊆ Q is a ring epimorphism, but not a surjection. The surjective ring homomorphisms are instead exactly the strong epimorphisms of rings; every strong epimorphism is an epimorphism in any category, but not conversely.[citation needed]
|
https://en.wikipedia.org/wiki/Ring_isomorphism
|
TheFermat primality testis aprobabilistictest to determine whether a number is aprobable prime.
Fermat's little theorem states that if p is prime and a is not divisible by p, then a^(p−1) ≡ 1 (mod p).
If one wants to test whetherpis prime, then we can pick random integersanot divisible bypand see whether the congruence holds. If it does not hold for a value ofa, thenpis composite. This congruence is unlikely to hold for a randomaifpis composite.[1]Therefore, if the equality does hold for one or more values ofa, then we say thatpisprobably prime.
However, note that the above congruence holds trivially fora≡1(modp){\displaystyle a\equiv 1{\pmod {p}}}, because the congruence relation iscompatible with exponentiation. It also holds trivially fora≡−1(modp){\displaystyle a\equiv -1{\pmod {p}}}ifpis odd, for the same reason. That is why one usually chooses a randomain the interval1<a<p−1{\displaystyle 1<a<p-1}.
Any a such that a^(n−1) ≡ 1 (mod n) when n is composite is known as a Fermat liar. In this case n is called a Fermat pseudoprime to base a.
If we do pick an a such that a^(n−1) ≢ 1 (mod n), then a is known as a Fermat witness for the compositeness of n.
Suppose we wish to determine whether n = 221 is prime. Randomly pick 1 < a < 220, say a = 38. We check the above congruence and find that it holds: 38^220 ≡ 1 (mod 221).
Either 221 is prime, or 38 is a Fermat liar, so we take another a, say 24: 24^220 ≡ 81 ≢ 1 (mod 221).
So 221 is composite and 38 was indeed a Fermat liar. Furthermore, 24 is a Fermat witness for the compositeness of 221.
The algorithm can be written as follows:
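A minimal Python sketch of the test as described above; the function name, the number of rounds k and the small-case handling are illustrative choices.

import random

def fermat_is_probable_prime(n, k=20):
    # Fermat primality test: False means n is definitely composite,
    # True means n is a probable prime after k random bases.
    if n <= 3:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    for _ in range(k):
        a = random.randrange(2, n - 1)          # 1 < a < n - 1, avoiding trivial bases
        if pow(a, n - 1, n) != 1:               # a is a Fermat witness
            return False
    return True                                  # probably prime (or n is a Carmichael number)

print(fermat_is_probable_prime(221))   # False: composite (13 x 17)
print(fermat_is_probable_prime(223))   # True: 223 is prime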
Theavalues 1 andn− 1 are not used as the equality holds for allnand all oddnrespectively, hence testing them adds no value.
Using fast algorithms formodular exponentiationand multiprecision multiplication, the running time of this algorithm isO(klog2nlog logn) =Õ(klog2n), wherekis the number of times we test a randoma, andnis the value we want to test for primality; seeMiller–Rabin primality testfor details.
There are infinitely many Fermat pseudoprimes to any given base a > 1.[1]: Theorem 1 Even worse, there are infinitely many Carmichael numbers.[2] These are numbers n for which all values of a with gcd(a, n) = 1 are Fermat liars. For these numbers, repeated application of the Fermat primality test performs the same as a simple random search for factors. While Carmichael numbers are substantially rarer than prime numbers (Erdős' upper bound for the number of Carmichael numbers[3] is lower than the prime-counting function, which grows like n/log(n)), there are enough of them that Fermat's primality test is not often used in the above form. Instead, other more powerful extensions of the Fermat test, such as Baillie–PSW, Miller–Rabin, and Solovay–Strassen, are more commonly used.
In general, if n is a composite number that is not a Carmichael number, then at least half of all a coprime to n (that is, with gcd(a, n) = 1) are Fermat witnesses. For proof of this, let a be such a Fermat witness and let a1, a2, ..., as be the Fermat liars. Then

(a·ai)^(n−1) ≡ a^(n−1) · ai^(n−1) ≡ a^(n−1) ≢ 1 (mod n),

and so all a·ai for i = 1, 2, ..., s are Fermat witnesses. Since a is invertible modulo n, these products are pairwise distinct, so the liars make up at most half of the units modulo n.
As mentioned above, most applications use aMiller–RabinorBaillie–PSWtest for primality. Sometimes a Fermat test (along with some trial division by small primes) is performed first to improve performance.GMPsince version 3.0 uses a base-210 Fermat test after trial division and before running Miller–Rabin tests.Libgcryptuses a similar process with base 2 for the Fermat test, butOpenSSLdoes not.
In practice with most big number libraries such as GMP, the Fermat test is not noticeably faster than a Miller–Rabin test, and can be slower for many inputs.[4]
As an exception, OpenPFGW uses only the Fermat test for probable prime testing. The program is typically used with multi-thousand digit inputs with a goal of maximum speed with very large inputs. Another well known program that relies only on the Fermat test isPGPwhere it is only used for testing of self-generated large random values (an open source counterpart,GNU Privacy Guard, uses a Fermat pretest followed by Miller–Rabin tests).
|
https://en.wikipedia.org/wiki/Fermat_primality_test
|
TheMiller–Rabin primality testorRabin–Miller primality testis a probabilisticprimality test: analgorithmwhich determines whether a given number islikely to be prime, similar to theFermat primality testand theSolovay–Strassen primality test.
It is of historical significance in the search for apolynomial-timedeterministic primality test. Its probabilistic variant remains widely used in practice, as one of the simplest and fastest tests known.
Gary L. Millerdiscovered the test in 1976. Miller's version of the test isdeterministic, but its correctness relies on the unprovenextended Riemann hypothesis.[1]Michael O. Rabinmodified it to obtain an unconditionalprobabilistic algorithmin 1980.[2][a]
Similarly to the Fermat and Solovay–Strassen tests, the Miller–Rabin primality test checks whether a specific property, which is known to hold for prime values, holds for the number under testing.
The property is the following. For a given odd integern>2{\displaystyle n>2}, let’s writen−1{\displaystyle n-1}as2sd{\displaystyle 2^{s}d}wheres{\displaystyle s}is a positive integer andd{\displaystyle d}is an odd positive integer. Let’s consider an integera{\displaystyle a}, called abase, which iscoprimeton{\displaystyle n}.
Then, n is said to be a strong probable prime to base a if one of these congruence relations holds: either a^d ≡ 1 (mod n), or a^(2^r · d) ≡ −1 (mod n) for some 0 ≤ r < s.
This simplifies to first checking whether a^d mod n = 1 and then whether a^(2^r · d) mod n = n − 1 for successive values of r. For each value of r, the value of the expression may be calculated from the one obtained for the previous value of r by squaring under the modulus n.
The idea beneath this test is that when n is an odd prime, it passes the test because of two facts: first, by Fermat's little theorem, a^(n−1) ≡ 1 (mod n); and second, the only square roots of 1 modulo n are 1 and −1.
Hence, bycontraposition, ifn{\displaystyle n}is not a strong probable prime to basea{\displaystyle a}, thenn{\displaystyle n}is definitely composite, anda{\displaystyle a}is called awitnessfor the compositeness ofn{\displaystyle n}.
However, this property is not an exact characterization of prime numbers. Ifn{\displaystyle n}is composite, it may nonetheless be a strong probable prime to basea{\displaystyle a}, in which case it is called astrong pseudoprime, anda{\displaystyle a}is astrong liar.
No composite number is a strong pseudoprime to all bases at the same time (contrary to the Fermat primality test for which Fermat pseudoprimes to all bases exist: theCarmichael numbers). However no simple way of finding a witness is known. A naïve solution is to try all possible bases, which yields an inefficient deterministic algorithm. The Miller test is a more efficient variant of this (seesectionMiller testbelow).
Another solution is to pick a base at random. This yields a fastprobabilistic test. Whennis composite, most bases are witnesses, so the test will detectnas composite with a reasonably high probability (seesectionAccuracybelow). We can quickly reduce the probability of afalse positiveto an arbitrarily small rate, by combining the outcome of as many independently chosen bases as necessary to achieve the said rate. This is the Miller–Rabin test. There seems to be diminishing returns in trying many bases, because ifnis a pseudoprime to some base, then it seems more likely to be a pseudoprime to another base.[4]: §8
Note thatad≡ 1 (modn)holds trivially fora≡ 1 (modn), because the congruence relation iscompatible with exponentiation. Andad=a20d≡ −1 (modn)holds trivially fora≡ −1 (modn)sincedis odd, for the same reason. That is why randomaare usually chosen in the interval1 <a<n− 1.
For testing arbitrarily largen, choosing bases at random is essential, as we don't know the distribution of witnesses and strong liars among the numbers 2, 3, ...,n− 2.[b]
However, a pre-selected set of a few small bases guarantees the identification of all composites up to a pre-computed maximum. This maximum is generally quite large compared to the bases. This gives very fast deterministic tests for small enoughn(seesectionTesting against small sets of basesbelow).
Here is a proof that, ifnis a prime, then the only square roots of 1 modulonare 1 and −1.
Certainly 1 and −1, when squared modulon, always yield 1. It remains to show that there are no other square roots of 1 modulon. This is a special case, here applied with thepolynomialX2− 1over thefinite fieldZ/nZ, of the more general fact that a polynomial over somefieldhas no morerootsthan its degree (this theorem follows from the existence of anEuclidean division for polynomials). Here follows a more elementary proof. Suppose thatxis a square root of 1 modulon. Then:
In other words,ndivides the product(x− 1)(x+ 1). ByEuclid's lemma, sincenis prime, it divides one of the factorsx− 1orx+ 1,implying thatxis congruent to either 1 or −1 modulon.
Here is a proof that, ifnis an odd prime, then it is a strong probable prime to basea.
If n is an odd prime and we write n − 1 = 2^s · d, where s is a positive integer and d is an odd positive integer, then by Fermat's little theorem a^(n−1) = a^(2^s · d) ≡ 1 (mod n).
Each term of the sequencea2sd,a2s−1d,…,a2d,ad{\displaystyle a^{2^{s}d},a^{2^{s-1}d},\dots ,a^{2d},a^{d}}is a square root of the previous term. Since the first term is congruent to 1, the second term is a square root of 1 modulon. By the previouslemma, it is congruent to either 1 or −1 modulon. If it is congruent to −1, we are done. Otherwise, it is congruent to 1 and we caniterate the reasoning. At the end, either one of the terms is congruent to −1, or all of them are congruent to 1, and in particular the last term,ad, is.
Suppose we wish to determine ifn=221{\displaystyle n=221}is prime. We writen−1as22×55{\displaystyle n-1{\text{ as }}2^{2}\times 55}, so that we haves=2andd=55{\displaystyle s=2{\text{ and }}d=55}. We randomly select a numbera{\displaystyle a}such that2≤a≤n−2{\displaystyle 2\leq a\leq n-2}.
Saya=174{\displaystyle a=174}:
a^(2^0 · d) mod n → 174^(2^0 · 55) mod 221 ≡ 174^55 ≡ 47. Since 47 ≠ 1 and 47 ≠ n − 1, we continue.
174^(2^1 · 55) mod 221 ≡ 174^110 ≡ 220 = n − 1
Since220≡−1modn{\displaystyle 220\equiv -1{\text{ mod }}n}, either 221 is prime, or 174 is a strong liar for 221.
We try another randoma{\displaystyle a}, this time choosinga=137{\displaystyle a=137}:
a^(2^0 · d) mod n → 137^(2^0 · 55) mod 221 ≡ 137^55 ≡ 188. Since 188 ≠ 1 and 188 ≠ n − 1, we continue.
137^(2^1 · 55) mod 221 ≡ 137^110 ≡ 205 ≠ n − 1
Hence 137 is a witness for the compositeness of 221, and 174 was in fact a strong liar. Note that this tells us nothing about the factors of 221 (which are 13 and 17). However, the example with 341 ina later sectionshows how these calculations can sometimes produce a factor ofn.
For a practical guide to choosing the value ofa, seeTesting against small sets of bases.
The algorithm can be written inpseudocodeas follows. The parameterkdetermines the accuracy of the test. The greater the number of rounds, the more accurate the result.[6]
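A minimal Python sketch following the description above (write n − 1 = 2^s · d with d odd, then check random bases); the trial division by small primes and the default k are illustrative choices, not part of the algorithm proper.

import random

def is_probable_prime(n, k=40):
    # Miller-Rabin test with k random bases: False means definitely composite,
    # True means probably prime (error probability at most 4**-k for composite n).
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 as 2**s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(k):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x == 1 or x == n - 1:
            continue                      # a is not a witness in this round
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                  # a is a witness: n is composite
    return True

print(is_probable_prime(221), is_probable_prime(223))   # False True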
Usingrepeated squaring, the running time of this algorithm isO(kn3), for ann-digit number, andkis the number of rounds performed; thus this is an efficient, polynomial-time algorithm.FFT-based multiplication, for example theSchönhage–Strassen algorithm, can decrease the running time toO(kn2lognlog logn) =Õ(kn2).
The error made by the primality test is measured by the probability that a composite number is declared probably prime. The more basesaare tried, the better the accuracy of the test. It can be shown that ifnis composite, then at most1/4of the basesaare strong liars forn.[2][7]As a consequence, ifnis composite then runningkiterations of the Miller–Rabin test will declarenprobably prime with a probability at most 4−k.
This is an improvement over theSolovay–Strassen test, whose worst‐case error bound is 2−k. Moreover, the Miller–Rabin test is strictly stronger than the Solovay–Strassen test in the sense that for every compositen, the set of strong liars fornis a subset of the set ofEuler liarsforn, and for manyn, the subset is proper.
In addition, for large values ofn, the probability for a composite number to be declared probably prime is often significantly smaller than 4−k. For instance, for most numbersn, this probability is bounded by 8−k; the proportion of numbersnwhich invalidate this upper bound vanishes as we consider larger values ofn.[8]Hence theaveragecase has a much better accuracy than 4−k, a fact which can be exploited forgeneratingprobable primes (seebelow). However, such improved error bounds should not be relied upon toverifyprimes whoseprobability distributionis not controlled, since acryptographicadversary might send a carefully chosen pseudoprime in order to defeat the primality test.[c]In such contexts, only theworst‐caseerror bound of 4−kcan be relied upon.
The above error measure is the probability for a composite number to be declared as a strong probable prime afterkrounds of testing; in mathematical words, it is theconditional probabilityPr(MRk∣¬P){\displaystyle \Pr(M\!R_{k}\mid \lnot P)}wherePis theeventthat the number being tested is prime, andMRkis the event that it passes the Miller–Rabin test withkrounds. We are often interested instead in the inverse conditional probabilityPr(¬P∣MRk){\displaystyle \Pr(\lnot P\mid M\!R_{k})}: the probability that a number which has been declared as a strong probable prime is in fact composite. These two probabilities are related byBayes' law:
In the last equation, we simplified the expression using the fact that all prime numbers are correctly reported as strong probable primes (the test has nofalse negative). By dropping the left part of thedenominator, we derive a simple upper bound:
Hence this conditional probability is related not only to the error measure discussed above — which is bounded by 4−k— but also to theprobability distributionof the input number. In the general case, as said earlier, this distribution is controlled by a cryptographic adversary, thus unknown, so we cannot deduce much aboutPr(¬P∣MRk){\displaystyle \Pr(\lnot P\mid M\!R_{k})}. However, in the case when we use the Miller–Rabin test togenerateprimes (seebelow), the distribution is chosen by the generator itself, so we can exploit this result.
Caldwell[10]points out that strong probable prime tests to different bases sometimes provide an additional primality test. Just as the strong test checks for the existence of more than two square roots of 1 modulon, two such tests can sometimes check for the existence of more than two square roots of −1.
Suppose that, in the course of our probable prime tests, we come across two basesaanda′for whicha2rd≡a′2r′d≡−1(modn){\displaystyle a^{2^{r}d}\equiv a^{\prime \,2^{r'}d}\equiv -1{\pmod {n}}}withr,r′≥ 1. This means that we have computed two square roots as part of the testing, and can check whethera2r−1d≡±a′2r′−1d(modn){\displaystyle a^{2^{r-1}d}\equiv \pm a^{\prime \,2^{r'-1}d}{\pmod {n}}}. This must always hold ifnis prime; if not, we have found more than two square roots of −1 and proved thatnis composite.
This is only possible ifn≡ 1 (mod 4), and we pass probable prime tests with two or more basesasuch thatad≢ ±1 (modn), but it is an inexpensive addition to the basic Miller–Rabin test.
The Miller–Rabin algorithm can be made deterministic by trying all possible values ofabelow a certain limit. Takingnas the limit would implyO(n)trials, hence the running time would be exponential with respect to the sizelognof the input. To improve the running time, the challenge is then to lower the limit as much as possible while keeping the test reliable.
If the tested numbernis composite, the strong liarsacoprime tonare contained in a propersubgroupof the group (Z/nZ)*, which means that if we test allafrom a set whichgenerates(Z/nZ)*, one of them must lie outside the said subgroup, hence must be a witness for the compositeness ofn. Assuming the truth of thegeneralized Riemann hypothesis(which Miller, confusingly, calls the "extended Riemann hypothesis"), it is known that the group is generated by its elements smaller thanO((lnn)2), which was already noted by Miller.[1]The constant involved in theBig O notationwas reduced to 2 byEric Bach.[11]This leads to the following primality testing algorithm, known as theMiller test, which is deterministic assuming the extended Riemann hypothesis:
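Assuming the extended Riemann hypothesis, the same witness check becomes deterministic by trying every base a below 2(ln n)^2. A hedged Python sketch follows; the helper is_strong_probable_prime performs one round of the strong probable prime test described above.

import math

def is_strong_probable_prime(n, a):
    # One round of the strong probable prime test to base a (n odd, n > 2).
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    x = pow(a, d, n)
    if x == 1 or x == n - 1:
        return True
    for _ in range(s - 1):
        x = pow(x, 2, n)
        if x == n - 1:
            return True
    return False

def miller_is_prime(n):
    # Deterministic Miller test, correct if the extended Riemann hypothesis holds.
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False
    limit = min(n - 2, math.floor(2 * math.log(n) ** 2))
    return all(is_strong_probable_prime(n, a) for a in range(2, limit + 1))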
The full power of the generalized Riemann hypothesis is not needed to ensure the correctness of the test: as we deal with subgroups of evenindex, it suffices to assume the validity of GRH forquadraticDirichlet characters.[7]
The running time of the algorithm is, in thesoft-Onotation,Õ((logn)4)(using FFT‐based multiplication).
The Miller test is not used in practice. For most purposes, proper use of the probabilistic Miller–Rabin test or theBaillie–PSW primality testgives sufficient confidence while running much faster. It is also slower in practice than commonly used proof methods such asAPR-CLandECPPwhich give results that do not rely on unproven assumptions. For theoretical purposes requiring a deterministic polynomial time algorithm, it was superseded by theAKS primality test, which also does not rely on unproven assumptions.
When the numbernto be tested is small, trying alla< 2(lnn)2is not necessary, as much smaller sets of potential witnesses are known to suffice. For example, Pomerance, Selfridge, Wagstaff[4]and Jaeschke[12]have verified that
Using the 2010 work of Feitsma and Galway[13]enumerating all base 2 pseudoprimes up to 264, this was extended (seeOEIS:A014233), with the first result later shown using different methods in Jiang and Deng:[14]
Sorenson and Webster[15]verify the above and calculate precise results for these larger than 64‐bit results:
Other criteria of this sort, often more efficient (fewer bases required) than those shown above, exist.[10][16][17][18]They give very fast deterministic primality tests for numbers in the appropriate range, without any assumptions.
There is a small list of potential witnesses for every possible input size (at mostbvalues forb‐bit numbers). However, no finite set of bases is sufficient for all composite numbers. Alford, Granville, and Pomerance have shown that there exist infinitely many composite numbersnwhose smallest compositeness witness is at least(lnn)1/(3ln ln lnn).[19]They also argue heuristically that the smallest numberwsuch that every composite number belownhas a compositeness witness less thanwshould be of orderΘ(lognlog logn).
By insertinggreatest common divisorcalculations into the above algorithm, we can sometimes obtain a factor ofninstead of merely determining thatnis composite. This occurs for example whennis a probable prime to baseabut not a strong probable prime to basea.[20]: 1402
If x is a nontrivial square root of 1 modulo n, then x^2 ≡ 1 (mod n) with x ≢ ±1 (mod n), so n divides (x − 1)(x + 1) without dividing either factor.
From this we deduce thatA= gcd(x− 1,n)andB= gcd(x+ 1,n)are nontrivial (not necessarily prime) factors ofn(in fact, sincenis odd, these factors are coprime andn=AB). Hence, if factoring is a goal, these gcd calculations can be inserted into the algorithm at little additional computational cost. This leads to the following pseudocode, where the added or changed code is highlighted:
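A hedged sketch of this idea: a single-base strong probable prime round that also reports gcd(x − 1, n) whenever a nontrivial square root x of 1 is encountered. The function name and return convention are illustrative.

from math import gcd

def strong_test_with_factor(n, a):
    # One strong probable prime round to base a that also reports a nontrivial
    # factor of n when a nontrivial square root of 1 modulo n turns up.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    x = pow(a, d, n)
    if x == 1 or x == n - 1:
        return "probable prime", None
    for _ in range(s - 1):
        y = pow(x, 2, n)
        if y == 1:                       # x is a nontrivial square root of 1 mod n
            return "composite", gcd(x - 1, n)
        if y == n - 1:
            return "probable prime", None
        x = y
    return "composite", None

print(strong_test_with_factor(341, 2))   # ('composite', 31): 2**85 = 32 (mod 341) and 32**2 = 1 (mod 341)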
This isnota probabilisticfactorizationalgorithm because it is only able to find factors for numbersnwhich are pseudoprime to basea(in other words, for numbersnsuch thatan−1≡ 1 modn). For other numbers, the algorithm only returns "composite" with no further information.
For example, considern= 341 anda= 2. We haven− 1 = 85 × 4. Then285mod 341 = 32and322mod 341 = 1. This tells us thatnis a pseudoprime base 2, but not a strong pseudoprime base 2. By computing a gcd at this stage, we find a factor of 341:gcd(32 − 1, 341) = 31. Indeed,341 = 11 × 31.
The same technique can be applied to the square roots of any other value, particularly the square roots of −1 mentioned in§ Combining multiple tests. If two (successful) strong probable prime tests findx2≡ −1 (modn)andy2≡ −1 (modn), butx≢ ±y(modn), thengcd(x−y,n)andgcd(x+y,n)are nontrivial factors ofn.[10]
For example,n= 46,856,248,255,981is a strong pseudoprime to bases 2 and 7, but in the course of performing the tests we find
This gives us the factorgcd(34456063004337 − 21307242304265,n) = 4840261.
The Miller–Rabin test can be used to generate strong probable primes, simply by drawing integers at random until one passes the test. This algorithm terminatesalmost surely(since at each iteration there is a chance to draw a prime number). The pseudocode for generatingb‐bitstrong probable primes (with the most significant bit set) is as follows:
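A minimal sketch of such a generator, reusing the is_probable_prime function from the earlier Miller–Rabin sketch; forcing the top and bottom bits set is the usual way to get exactly b bits and oddness.

import random

def generate_probable_prime(b, k=40):
    # Draw random b-bit odd integers with the most significant bit set until one
    # passes k rounds of Miller-Rabin (is_probable_prime as sketched earlier).
    assert b >= 2
    while True:
        candidate = random.getrandbits(b) | (1 << (b - 1)) | 1
        if is_probable_prime(candidate, k):
            return candidate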
Of course theworst-case running timeis infinite, since the outer loop may never terminate, but that happens with probability zero. As per thegeometric distribution, theexpectednumber of draws is1Pr(MRk){\displaystyle {\tfrac {1}{\Pr(M\!R_{k})}}}(reusing notations fromearlier).
As any prime number passes the test, the probability of being prime gives a coarse lower bound to the probability of passing the test. If we draw odd integersuniformlyin the range [2b−1, 2b−1], then we get:
where π is theprime-counting function. Using anasymptotic expansionof π (an extension of theprime number theorem), we can approximate this probability whenbgrows towards infinity. We find:
Hence we can expect the generator to run no more Miller–Rabin tests than a number proportional tob. Taking into account the worst-case complexity of each Miller–Rabin test (seeearlier), the expected running time of the generator with inputsbandkis then bounded byO(kb4)(orÕ(kb3)using FFT-based multiplication).
The error measure of this generator is the probability that it outputs a composite number.
Using the relation between conditional probabilities (shown in anearlier section) and the asymptotic behavior ofPr(P){\displaystyle \Pr(P)}(shown just before), this error measure can be given a coarse upper bound:
Hence, for large enoughb, this error measure is less thanln224−kb{\displaystyle {\tfrac {\ln 2}{2}}4^{-k}b}. However, much better bounds exist.
Using the fact that the Miller–Rabin test itself often has an error bound much smaller than 4−k(seeearlier),Damgård,LandrockandPomerancederived several error bounds for the generator, with various classes of parametersbandk.[8]These error bounds allow an implementor to choose a reasonablekfor a desired accuracy.
One of these error bounds is 4−k, which holds for allb≥ 2 (the authors only showed it forb≥ 51, while Ronald Burthe Jr. completed the proof with the remaining values 2 ≤b≤ 50[21]). Again this simple bound can be improved for large values ofb. For instance, another bound derived by the same authors is:
which holds for allb≥ 21 andk≥b/4. This bound is smaller than 4−kas soon asb≥ 32.
|
https://en.wikipedia.org/wiki/Miller%E2%80%93Rabin_primality_test
|
Wi-Fi Protected Access(WPA) (Wireless Protected Access),Wi-Fi Protected Access 2(WPA2), andWi-Fi Protected Access 3(WPA3) are the three security certification programs developed after 2000 by theWi-Fi Allianceto secure wireless computer networks. The Alliance defined these in response to serious weaknesses researchers had found in the previous system,Wired Equivalent Privacy(WEP).[1]
WPA (sometimes referred to as the TKIP standard) became available in 2003. The Wi-Fi Alliance intended it as an intermediate measure in anticipation of the availability of the more secure and complex WPA2, which became available in 2004 and is a common shorthand for the full IEEE 802.11i (orIEEE 802.11i-2004) standard.
In January 2018, the Wi-Fi Alliance announced the release of WPA3, which has several security improvements over WPA2.[2]
As of 2023, most computers that connect to a wireless network have support for using WPA, WPA2, or WPA3. All versions thereof, at least as implemented through May, 2021, are vulnerable to compromise.[3]
WEP (Wired Equivalent Privacy) is an early encryption protocol for wireless networks, designed to secure WLAN connections. It supports 64-bit and 128-bit keys, combining user-configurable and factory-set bits. WEP uses the RC4 algorithm for encrypting data, creating a unique key for each packet by combining a new Initialization Vector (IV) with a shared key (in 64-bit WEP, a 40-bit shared key combined with a 24-bit IV). Decryption involves reversing this process, using the IV and the shared key to generate a key stream and decrypt the payload. Despite its initial use, WEP's significant vulnerabilities led to the adoption of more secure protocols.[4]
The Wi-Fi Alliance intended WPA as an intermediate measure to take the place ofWEPpending the availability of the fullIEEE 802.11istandard. WPA could be implemented throughfirmware upgradesonwireless network interface cardsdesigned for WEP that began shipping as far back as 1999. However, since the changes required in thewireless access points(APs) were more extensive than those needed on the network cards, most pre-2003 APs were not upgradable by vendor-provided methods to support WPA.
The WPA protocol implements theTemporal Key Integrity Protocol(TKIP). WEP uses a 64-bit or 128-bit encryption key that must be manually entered on wireless access points and devices and does not change. TKIP employs a per-packet key, meaning that it dynamically generates a new 128-bit key for each packet and thus prevents the types of attacks that compromise WEP.[5]
WPA also includes aMessage Integrity Check, which is designed to prevent an attacker from altering and resending data packets. This replaces thecyclic redundancy check(CRC) that was used by the WEP standard. CRC's main flaw is that it does not provide a sufficiently strongdata integrityguarantee for the packets it handles.[6]Well-testedmessage authentication codesexisted to solve these problems, but they require too much computation to be used on old network cards. WPA uses a message integrity check algorithm calledTKIPto verify the integrity of the packets. TKIP is much stronger than a CRC, but not as strong as the algorithm used in WPA2. Researchers have since discovered a flaw in WPA that relied on older weaknesses in WEP and the limitations of the message integrity code hash function, namedMichael, to retrieve the keystream from short packets to use for re-injection andspoofing.[7][8]
Ratified in 2004, WPA2 replaced WPA. WPA2, which requires testing and certification by the Wi-Fi Alliance, implements the mandatory elements of IEEE 802.11i. In particular, it includes support for CCMP, an AES-based encryption mode.[9][10][11] Certification began in September 2004. From March 13, 2006, to June 30, 2020, WPA2 certification was mandatory for all new devices to bear the Wi-Fi trademark.[12] In WPA2-protected WLANs, secure communication is established through a multi-step process. Initially, devices associate with the access point (AP) via an association request. This is followed by a 4-way handshake, a crucial step that ensures both the client and the AP possess the correct Pre-Shared Key (PSK) without actually transmitting it. During this handshake, a Pairwise Transient Key (PTK) is generated for secure data exchange.
WPA2 employs the Advanced Encryption Standard (AES) with a 128-bit key, enhancing security through the Counter Mode/CBC-MAC Protocol (CCMP). This protocol ensures robust encryption and data integrity, using different Initialization Vectors (IVs) for encryption and authentication purposes.[13]
The 4-way handshake involves:
• the AP sending a nonce (ANonce) to the client;
• the client generating its own nonce (SNonce), deriving the PTK from the PSK and both nonces, and replying with the SNonce and a Message Integrity Code (MIC);
• the AP deriving the same PTK, verifying the MIC, and sending the Group Temporal Key (GTK) protected by another MIC;
• the client acknowledging, after which both sides install the keys.
Post-handshake, the established PTK is used for encrypting unicast traffic, and theGroup Temporal Key(GTK) is used for broadcast traffic. This comprehensive authentication and encryption mechanism is what makes WPA2 a robust security standard for wireless networks.[14]
In January 2018, the Wi-Fi Alliance announced WPA3 as a replacement to WPA2.[15][16]Certification began in June 2018,[17]and WPA3 support has been mandatory for devices which bear the "Wi-Fi CERTIFIED™" logo since July 2020.[18]
The new standard uses an equivalent 192-bit cryptographic strength in WPA3-Enterprise mode[19](AES-256inGCM modewithSHA-384asHMAC), and still mandates the use ofCCMP-128(AES-128inCCM mode) as the minimum encryption algorithm in WPA3-Personal mode.TKIPis not allowed in WPA3.
The WPA3 standard also replaces thepre-shared key(PSK) exchange withSimultaneous Authentication of Equals(SAE) exchange, a method originally introduced withIEEE 802.11s, resulting in a more secure initial key exchange in personal mode[20][21]andforward secrecy.[22]The Wi-Fi Alliance also says that WPA3 will mitigate security issues posed by weak passwords and simplify the process of setting up devices with no display interface.[2][23]WPA3 also supportsOpportunistic Wireless Encryption (OWE)for open Wi-Fi networks that do not have passwords.
Protection of management frames as specified in theIEEE 802.11wamendment is also enforced by the WPA3 specifications.
WPA has been designed specifically to work with wireless hardware produced prior to the introduction of WPA protocol,[24]which provides inadequate security throughWEP. Some of these devices support WPA only after applying firmware upgrades, which are not available for some legacy devices.[24]
Wi-Fi devices certified since 2006 support both the WPA and WPA2 security protocols. WPA3 is required since July 1, 2020.[18]
Different WPA versions and protection mechanisms can be distinguished based on the target end-user (such as WEP, WPA, WPA2, WPA3) and the method of authentication key distribution, as well as the encryption protocol used. As of July 2020, WPA3 is the latest iteration of the WPA standard, bringing enhanced security features and addressing vulnerabilities found in WPA2. WPA3 improves authentication methods and employs stronger encryption protocols, making it the recommended choice for securing Wi-Fi networks.[23]
Also referred to asWPA-PSK(pre-shared key) mode, this is designed for home, small office and basic uses and does not require an authentication server.[25]Each wireless network device encrypts the network traffic by deriving its 128-bit encryption key from a 256-bit sharedkey. This key may be entered either as a string of 64hexadecimaldigits, or as apassphraseof 8 to 63printable ASCII characters.[26]This pass-phrase-to-PSK mapping is nevertheless not binding, as Annex J is informative in the latest 802.11 standard.[27]If ASCII characters are used, the 256-bit key is calculated by applying thePBKDF2key derivation functionto the passphrase, using theSSIDas thesaltand 4096 iterations ofHMAC-SHA1.[28]WPA-Personal mode is available on all three WPA versions.
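This PBKDF2 derivation can be reproduced with the Python standard library; a short sketch follows, where the passphrase and SSID are arbitrary examples.

import hashlib

def wpa_psk(passphrase, ssid):
    # Derive the 256-bit WPA/WPA2 pre-shared key from an ASCII passphrase:
    # PBKDF2-HMAC-SHA1 with the SSID as salt and 4096 iterations, 32-byte output.
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)

print(wpa_psk("correct horse battery staple", "ExampleNetwork").hex())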
This enterprise mode uses an802.1Xserver for authentication, offering higher security control by replacing the vulnerable WEP with the more advanced TKIP encryption. TKIP ensures continuous renewal of encryption keys, reducing security risks. Authentication is conducted through aRADIUSserver, providing robust security, especially vital in corporate settings. This setup allows integration with Windows login processes and supports various authentication methods likeExtensible Authentication Protocol, which uses certificates for secure authentication, and PEAP, creating a protected environment for authentication without requiring client certificates.[29]
Originally, only EAP-TLS (Extensible Authentication Protocol – Transport Layer Security) was certified by the Wi-Fi Alliance. In April 2010, the Wi-Fi Alliance announced the inclusion of additional EAP[31] types to its WPA- and WPA2-Enterprise certification programs.[32] This was to ensure that WPA-Enterprise certified products can interoperate with one another.
As of 2010[update]the certification program includes the following EAP types:
802.1X clients and servers developed by specific firms may support other EAP types. This certification is an attempt for popular EAP types to interoperate; their failure to do so as of 2013[update]is one of the major issues preventing rollout of 802.1X on heterogeneous networks.
Commercial 802.1X servers include MicrosoftNetwork Policy ServerandJuniper NetworksSteelbelted RADIUS as well as Aradial Radius server.[34]FreeRADIUSis an open source 802.1X server.
WPA-Personal and WPA2-Personal remain vulnerable topassword crackingattacks if users rely on aweak password or passphrase. WPA passphrase hashes are seeded from the SSID name and its length;rainbow tablesexist for the top 1,000 network SSIDs and a multitude of common passwords, requiring only a quick lookup to speed up cracking WPA-PSK.[35]
Brute forcing of simple passwords can be attempted using theAircrack Suitestarting from the four-way authentication handshake exchanged during association or periodic re-authentication.[36][37][38][39][40]
WPA3 replaces cryptographic protocols susceptible to off-line analysis with protocols that require interaction with the infrastructure for each guessed password, supposedly placing temporal limits on the number of guesses.[15]However, design flaws in WPA3 enable attackers to plausibly launch brute-force attacks (see§ Dragonblood).
WPA and WPA2 do not provide forward secrecy, meaning that once an adversary discovers the pre-shared key, they can potentially decrypt all packets encrypted with that PSK, whether transmitted in the future or in the past, the latter having possibly been collected passively and silently by the attacker. This also means an attacker can silently capture and decrypt others' packets if a WPA-protected access point is provided free of charge at a public place, because its password is usually shared with anyone in that place. In other words, WPA only protects from attackers who do not have access to the password. Because of that, it is safer to use Transport Layer Security (TLS) or similar on top of that for the transfer of any sensitive data. Starting from WPA3, however, this issue has been addressed.[22]
In 2013, Mathy Vanhoef and Frank Piessens[41]significantly improved upon theWPA-TKIPattacks of Erik Tews and Martin Beck.[42][43]They demonstrated how to inject an arbitrary number of packets, with each packet containing at most 112 bytes of payload. This was demonstrated by implementing aport scanner, which can be executed against any client usingWPA-TKIP. Additionally, they showed how to decrypt arbitrary packets sent to a client. They mentioned this can be used to hijack aTCP connection, allowing an attacker to inject maliciousJavaScriptwhen the victim visits a website.
In contrast, the Beck-Tews attack could only decrypt short packets with mostly known content, such asARPmessages, and only allowed injection of 3 to 7 packets of at most 28 bytes. The Beck-Tews attack also requiresquality of service(as defined in802.11e) to be enabled, while the Vanhoef-Piessens attack does not. Neither attack leads to recovery of the shared session key between the client andAccess Point. The authors say using a short rekeying interval can prevent some attacks but not all, and strongly recommend switching fromTKIPto AES-basedCCMP.
Halvorsen and others show how to modify the Beck-Tews attack to allow injection of 3 to 7 packets having a size of at most 596 bytes.[44]The downside is that their attack requires substantially more time to execute: approximately 18 minutes and 25 seconds. In other work Vanhoef and Piessens showed that, when WPA is used to encrypt broadcast packets, their original attack can also be executed.[45]This is an important extension, as substantially more networks use WPA to protectbroadcast packets, than to protectunicast packets. The execution time of this attack is on average around 7 minutes, compared to the 14 minutes of the original Vanhoef-Piessens and Beck-Tews attack.
The vulnerabilities of TKIP are significant because WPA-TKIP had previously been held to be an extremely safe combination; indeed, WPA-TKIP is still a configuration option on a wide variety of wireless routing devices provided by many hardware vendors. A survey in 2013 showed that 71% still allowed usage of TKIP, and 19% exclusively supported TKIP.[41]
A more serious security flaw was revealed in December 2011 by Stefan Viehböck that affects wireless routers with the Wi-Fi Protected Setup (WPS) feature, regardless of which encryption method they use. Most recent models have this feature and enable it by default. Many consumer Wi-Fi device manufacturers had taken steps to eliminate the potential of weak passphrase choices by promoting alternative methods of automatically generating and distributing strong keys when users add a new wireless adapter or appliance to a network. These methods include pushing buttons on the devices or entering an 8-digit PIN.
The Wi-Fi Alliance standardized these methods as Wi-Fi Protected Setup; however, the PIN feature as widely implemented introduced a major new security flaw. The flaw allows a remote attacker to recover the WPS PIN and, with it, the router's WPA/WPA2 password in a few hours.[46]Users have been urged to turn off the WPS feature,[47]although this may not be possible on some router models. Also, the PIN is written on a label on most Wi-Fi routers with WPS, which cannot be changed if compromised.
In 2018, the Wi-Fi Alliance introduced Wi-Fi Easy Connect[48]as a new alternative for the configuration of devices that lack sufficient user interface capabilities by allowing nearby devices to serve as an adequate UI for network provisioning purposes, thus mitigating the need for WPS.[49]
Several weaknesses have been found inMS-CHAPv2, some of which severely reduce the complexity of brute-force attacks, making them feasible with modern hardware. In 2012 the complexity of breaking MS-CHAPv2 was reduced to that of breaking a singleDESkey (work byMoxie Marlinspikeand Marsh Ray). Moxie advised: "Enterprises who are depending on the mutual authentication properties of MS-CHAPv2 for connection to their WPA2 Radius servers should immediately start migrating to something else."[50]
Tunneled EAP methods using TTLS or PEAP which encrypt the MSCHAPv2 exchange are widely deployed to protect against exploitation of this vulnerability. However, prevalent WPA2 client implementations during the early 2000s were prone to misconfiguration by end users, or in some cases (e.g.Android), lacked any user-accessible way to properly configure validation of AAA server certificate CNs. This extended the relevance of the original weakness in MSCHAPv2 withinMiTMattack scenarios.[51]Under stricter compliance tests for WPA2 announced alongside WPA3, certified client software will be required to conform to certain behaviors surrounding AAA certificate validation.[15]
Hole196 is a vulnerability in the WPA2 protocol that abuses the shared Group Temporal Key (GTK). It can be used to conduct man-in-the-middle anddenial-of-serviceattacks. However, it assumes that the attacker is already authenticated against Access Point and thus in possession of the GTK.[52][53]
In 2016 it was shown that the WPA and WPA2 standards contain an insecure expositoryrandom number generator(RNG). Researchers showed that, if vendors implement the proposed RNG, an attacker is able to predict the group key (GTK) that is supposed to be randomly generated by theaccess point(AP). Additionally, they showed that possession of the GTK enables the attacker to inject any traffic into the network, and allowed the attacker to decrypt unicast internet traffic transmitted over the wireless network. They demonstrated their attack against anAsusRT-AC51U router that uses theMediaTekout-of-tree drivers, which generate the GTK themselves, and showed the GTK can be recovered within two minutes or less. Similarly, they demonstrated the keys generated by Broadcom access daemons running on VxWorks 5 and later can be recovered in four minutes or less, which affects, for example, certain versions of Linksys WRT54G and certain Apple AirPort Extreme models. Vendors can defend against this attack by using a secure RNG. By doing so,Hostapdrunning on Linux kernels is not vulnerable against this attack and thus routers running typicalOpenWrtorLEDEinstallations do not exhibit this issue.[54]
In October 2017, details of theKRACK(Key Reinstallation Attack) attack on WPA2 were published.[55][56]The KRACK attack is believed to affect all variants of WPA and WPA2; however, the security implications vary between implementations, depending upon how individual developers interpreted a poorly specified part of the standard. Software patches can resolve the vulnerability but are not available for all devices.[57]KRACK exploits a weakness in the WPA2 4-Way Handshake, a critical process for generating encryption keys. Attackers can force multiple handshakes, manipulating key resets. By intercepting the handshake, they could decrypt network traffic without cracking encryption directly. This poses a risk, especially with sensitive data transmission.[58]
Manufacturers have released patches in response, but not all devices have received updates. Users are advised to keep their devices updated to mitigate such security risks. Regular updates are crucial for maintaining network security against evolving threats.[58]
The Dragonblood attacks exposed significant vulnerabilities in the Dragonfly handshake protocol used in WPA3 and EAP-pwd. These included side-channel attacks potentially revealing sensitive user information and implementation weaknesses in EAP-pwd and SAE. Concerns were also raised about the inadequate security in transitional modes supporting both WPA2 and WPA3. In response, security updates and protocol changes are being integrated into WPA3 and EAP-pwd to address these vulnerabilities and enhance overall Wi-Fi security.[59]
On May 11, 2021,FragAttacks, a set of new security vulnerabilities, were revealed, affecting Wi-Fi devices and enabling attackers within range to steal information or target devices. These include design flaws in the Wi-Fi standard, affecting most devices, and programming errors in Wi-Fi products, making almost all Wi-Fi products vulnerable. The vulnerabilities impact all Wi-Fi security protocols, including WPA3 and WEP. Exploiting these flaws is complex but programming errors in Wi-Fi products are easier to exploit. Despite improvements in Wi-Fi security, these findings highlight the need for continuous security analysis and updates. In response, security patches were developed, and users are advised to use HTTPS and install available updates for protection.
|
https://en.wikipedia.org/wiki/Wi-Fi_Protected_Access#WPA2
|
Inmathematics, afieldis aseton whichaddition,subtraction,multiplication, anddivisionare defined and behave as the corresponding operations onrationalandreal numbers. A field is thus a fundamentalalgebraic structurewhich is widely used inalgebra,number theory, and many other areas of mathematics.
The best known fields are the field ofrational numbers, the field ofreal numbersand the field ofcomplex numbers. Many other fields, such asfields of rational functions,algebraic function fields,algebraic number fields, andp-adic fieldsare commonly used and studied in mathematics, particularly in number theory andalgebraic geometry. Mostcryptographic protocolsrely onfinite fields, i.e., fields with finitely manyelements.
The theory of fields proves thatangle trisectionandsquaring the circlecannot be done with acompass and straightedge.Galois theory, devoted to understanding the symmetries offield extensions, provides an elegant proof of theAbel–Ruffini theoremthat generalquintic equationscannot besolved in radicals.
Fields serve as foundational notions in several mathematical domains. This includes different branches ofmathematical analysis, which are based on fields with additional structure. Basic theorems in analysis hinge on the structural properties of the field of real numbers. Most importantly for algebraic purposes, any field may be used as thescalarsfor avector space, which is the standard general context forlinear algebra.Number fields, the siblings of the field of rational numbers, are studied in depth innumber theory.Function fieldscan help describe properties of geometric objects.
Informally, a field is a set, along with twooperationsdefined on that set: an addition operationa+band a multiplication operationa⋅b, both of which behave similarly as they do forrational numbersandreal numbers. This includes the existence of anadditive inverse−afor all elementsaand of amultiplicative inverseb−1for every nonzero elementb. This allows the definition of the so-calledinverse operations, subtractiona−band divisiona/b, asa−b=a+ (−b)anda/b=a⋅b−1.
Often the producta⋅bis represented by juxtaposition, asab.
Formally, a field is asetFtogether with twobinary operationsonFcalledadditionandmultiplication.[1]A binary operation onFis a mappingF×F→F, that is, a correspondence that associates with each ordered pair of elements ofFa uniquely determined element ofF.[2][3]The result of the addition ofaandbis called the sum ofaandb, and is denoteda+b. Similarly, the result of the multiplication ofaandbis called the product ofaandb, and is denoteda⋅b. These operations are required to satisfy the following properties, referred to asfield axioms.
These axioms are required to hold for allelementsa,b,cof the fieldF:
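In brief: associativity of addition and multiplication, a + (b + c) = (a + b) + c and a ⋅ (b ⋅ c) = (a ⋅ b) ⋅ c; commutativity of addition and multiplication, a + b = b + a and a ⋅ b = b ⋅ a; existence of additive and multiplicative identities, two distinct elements 0 and 1 with a + 0 = a and a ⋅ 1 = a; existence of additive inverses, for every a an element −a with a + (−a) = 0; existence of multiplicative inverses, for every a ≠ 0 an element a^(−1) with a ⋅ a^(−1) = 1; and distributivity of multiplication over addition, a ⋅ (b + c) = (a ⋅ b) + (a ⋅ c).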
An equivalent, and more succinct, definition is: a field has two commutative operations, called addition and multiplication; it is agroupunder addition with0as the additive identity; the nonzero elements form a group under multiplication with1as the multiplicative identity; and multiplication distributes over addition.
Even more succinctly: a field is acommutative ringwhere0 ≠ 1and all nonzero elements areinvertibleunder multiplication.
Fields can also be defined in different, but equivalent ways. One can alternatively define a field by four binary operations (addition, subtraction, multiplication, and division) and their required properties.Division by zerois, by definition, excluded.[4]In order to avoidexistential quantifiers, fields can be defined by two binary operations (addition and multiplication), two unary operations (yielding the additive and multiplicative inverses respectively), and twonullaryoperations (the constants0and1). These operations are then subject to the conditions above. Avoiding existential quantifiers is important inconstructive mathematicsandcomputing.[5]One may equivalently define a field by the same two binary operations, one unary operation (the multiplicative inverse), and two (not necessarily distinct) constants1and−1, since0 = 1 + (−1)and−a= (−1)a.[a]
Rational numbers had been widely used long before the elaboration of the concept of a field.
They are numbers that can be written asfractionsa/b, whereaandbareintegers, andb≠ 0. The additive inverse of such a fraction is−a/b, and the multiplicative inverse (provided thata≠ 0) isb/a, which can be seen as follows:
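(a/b) ⋅ (b/a) = (a ⋅ b)/(b ⋅ a) = 1.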
The abstractly required field axioms reduce to standard properties of rational numbers. For example, the law of distributivity can be proven as follows:[6]
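(a/b) ⋅ (c/d + e/f) = (a/b) ⋅ ((c ⋅ f + e ⋅ d)/(d ⋅ f)) = (a ⋅ c ⋅ f + a ⋅ e ⋅ d)/(b ⋅ d ⋅ f) = (a ⋅ c)/(b ⋅ d) + (a ⋅ e)/(b ⋅ f) = (a/b) ⋅ (c/d) + (a/b) ⋅ (e/f).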
Thereal numbersR, with the usual operations of addition and multiplication, also form a field. Thecomplex numbersCconsist of expressions
whereiis theimaginary unit, i.e., a (non-real) number satisfyingi2= −1.
Addition and multiplication of real numbers are defined in such a way that expressions of this type satisfy all field axioms and thus hold forC. For example, the distributive law enforces
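(a + bi) ⋅ (c + di) = ac + adi + bci + bd i² = (ac − bd) + (ad + bc)i.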
It is immediate that this is again an expression of the above type, and so the complex numbers form a field. Complex numbers can be geometrically represented as points in theplane, withCartesian coordinatesgiven by the real numbers of their describing expression, or as the arrows from the origin to these points, specified by their length and an angle enclosed with some distinct direction. Addition then corresponds to combining the arrows to the intuitive parallelogram (adding the Cartesian coordinates), and the multiplication is – less intuitively – combining rotating and scaling of the arrows (adding the angles and multiplying the lengths). The fields of real and complex numbers are used throughout mathematics, physics, engineering, statistics, and many other scientific disciplines.
In antiquity, several geometric problems concerned the (in)feasibility of constructing certain numbers with compass and straightedge. For example, it was unknown to the Greeks that it is, in general, impossible to trisect a given angle in this way. These problems can be settled using the field of constructible numbers.[7] Real constructible numbers are, by definition, lengths of line segments that can be constructed from the points 0 and 1 in finitely many steps using only compass and straightedge. These numbers, endowed with the field operations of real numbers, restricted to the constructible numbers, form a field, which properly includes the field Q of rational numbers. The illustration shows the construction of square roots of constructible numbers, not necessarily contained within Q. Using the labeling in the illustration, construct the segments AB, BD, and a semicircle over AD (center at the midpoint C), which intersects the perpendicular line through B in a point F, at a distance of exactly h = √p from B when BD has length one.
Not all real numbers are constructible. It can be shown that ∛2 is not a constructible number, which implies that it is impossible to construct with compass and straightedge the length of the side of a cube with volume 2, another problem posed by the ancient Greeks.
In addition to familiar number systems such as the rationals, there are other, less immediate examples of fields. The following example is a field consisting of four elements calledO,I,A, andB. The notation is chosen such thatOplays the role of the additive identity element (denoted 0 in the axioms above), andIis the multiplicative identity (denoted1in the axioms above). The field axioms can be verified by using some more field theory, or by direct computation. For example,
This field is called afinite fieldorGalois fieldwith four elements, and is denotedF4orGF(4).[8]Thesubsetconsisting ofOandI(highlighted in red in the tables at the right) is also a field, known as thebinary fieldF2orGF(2).
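A small computational sketch of this field, under the standard (but here assumed) identification of F4 with F2[x] / (x² + x + 1): an element is encoded by the two bits of its polynomial coefficients, so 0 and 1 play the roles of O and I, and the values 2 and 3 stand for A and B in some order.

def gf4_add(a: int, b: int) -> int:
    # addition of polynomials over F2 is coefficient-wise, i.e. bitwise XOR
    return a ^ b

def gf4_mul(a: int, b: int) -> int:
    # schoolbook polynomial multiplication ...
    prod = 0
    for i in range(2):
        if (b >> i) & 1:
            prod ^= a << i
    # ... followed by reduction modulo x^2 + x + 1 (binary 0b111)
    if prod & 0b100:
        prod ^= 0b111
    return prod

# every nonzero element has exactly one multiplicative inverse, as a field requires
for a in range(1, 4):
    assert len([b for b in range(1, 4) if gf4_mul(a, b) == 1]) == 1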
In this section,Fdenotes an arbitrary field andaandbare arbitraryelementsofF.
One hasa⋅ 0 = 0and−a= (−1) ⋅a. In particular, one may deduce the additive inverse of every element as soon as one knows−1.[9]
Ifab= 0thenaorbmust be0, since, ifa≠ 0, thenb= (a−1a)b=a−1(ab) =a−1⋅ 0 = 0. This means that every field is anintegral domain.
In addition, the following properties are true for any elementsaandb:
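For example, −0 = 0, −(−a) = a, (−a) ⋅ b = a ⋅ (−b) = −(a ⋅ b), and, for a ≠ 0, (a^(−1))^(−1) = a.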
The axioms of a fieldFimply that it is anabelian groupunder addition. This group is called theadditive groupof the field, and is sometimes denoted by(F, +)when denoting it simply asFcould be confusing.
Similarly, the nonzero elements of F form an abelian group under multiplication, called the multiplicative group, and denoted by (F ∖ {0}, ⋅) or just F ∖ {0}, or F×.
A field may thus be defined as a set F equipped with two operations denoted as an addition and a multiplication such that F is an abelian group under addition, F ∖ {0} is an abelian group under multiplication (where 0 is the identity element of the addition), and multiplication is distributive over addition.[b] Some elementary statements about fields can therefore be obtained by applying general facts of groups. For example, the additive and multiplicative inverses −a and a−1 are uniquely determined by a.
The requirement1 ≠ 0is imposed by convention to exclude thetrivial ring, which consists of a single element; this guides any choice of the axioms that define fields.
Every finitesubgroupof the multiplicative group of a field iscyclic(seeRoot of unity § Cyclic groups).
In addition to the multiplication of two elements ofF, it is possible to define the productn⋅aof an arbitrary elementaofFby a positiveintegernto be then-fold sum
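n ⋅ a = a + a + ⋯ + a (n summands).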
If there is no positive integer such that
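n ⋅ 1 = 1 + 1 + ⋯ + 1 (n summands) = 0,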
thenFis said to havecharacteristic0.[11]For example, the field of rational numbersQhas characteristic 0 since no positive integernis zero. Otherwise, if thereisa positive integernsatisfying this equation, the smallest such positive integer can be shown to be aprime number. It is usually denoted bypand the field is said to have characteristicpthen.
For example, the fieldF4has characteristic2since (in the notation of the above addition table)I+I= O.
IfFhas characteristicp, thenp⋅a= 0for allainF. This implies that
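(a + b)^p = a^p + b^p,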
since all otherbinomial coefficientsappearing in thebinomial formulaare divisible byp. Here,ap:=a⋅a⋅ ⋯ ⋅a(pfactors) is thepth power, i.e., thep-fold product of the elementa. Therefore, theFrobenius map
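Frob : F → F, x ↦ x^p,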
is compatible with the addition inF(and also with the multiplication), and is therefore a field homomorphism.[12]The existence of this homomorphism makes fields in characteristicpquite different from fields of characteristic0.
AsubfieldEof a fieldFis a subset ofFthat is a field with respect to the field operations ofF. EquivalentlyEis a subset ofFthat contains1, and is closed under addition, multiplication, additive inverse and multiplicative inverse of a nonzero element. This means that1 ∊E, that for alla,b∊Ebotha+banda⋅bare inE, and that for alla≠ 0inE, both−aand1/aare inE.
Field homomorphismsare mapsφ:E→Fbetween two fields such thatφ(e1+e2) =φ(e1) +φ(e2),φ(e1e2) =φ(e1)φ(e2), andφ(1E) = 1F, wheree1ande2are arbitrary elements ofE. All field homomorphisms areinjective.[13]Ifφis alsosurjective, it is called anisomorphism(or the fieldsEandFare called isomorphic).
A field is called aprime fieldif it has no proper (i.e., strictly smaller) subfields. Any fieldFcontains a prime field. If thecharacteristicofFisp(a prime number), the prime field is isomorphic to the finite fieldFpintroduced below. Otherwise the prime field is isomorphic toQ.[14]
Finite fields(also calledGalois fields) are fields with finitely many elements, whose number is also referred to as the order of the field. The above introductory exampleF4is a field with four elements. Its subfieldF2is the smallest field, because by definition a field has at least two distinct elements,0and1.
The simplest finite fields, with prime order, are most directly accessible usingmodular arithmetic. For a fixed positive integern, arithmetic "modulon" means to work with the numbers
The addition and multiplication on this set are done by performing the operation in question in the setZof integers, dividing bynand taking the remainder as result. This construction yields a field precisely ifnis aprime number. For example, taking the primen= 2results in the above-mentioned fieldF2. Forn= 4and more generally, for anycomposite number(i.e., any numbernwhich can be expressed as a productn=r⋅sof two strictly smaller natural numbers),Z/nZis not a field: the product of two non-zero elements is zero sincer⋅s= 0inZ/nZ, which, as was explainedabove, preventsZ/nZfrom being a field. The fieldZ/pZwithpelements (pbeing prime) constructed in this way is usually denoted byFp.
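The dichotomy can be checked directly in Python; the sketch below brute-forces the existence of multiplicative inverses for small n and, for a prime p, also uses the shortcut a^(p−2) mod p given by Fermat's little theorem (the particular numbers are illustrative only).

def has_all_inverses(n: int) -> bool:
    # Z/nZ is a field exactly when every nonzero residue has an inverse
    return all(
        any((a * b) % n == 1 for b in range(1, n))
        for a in range(1, n)
    )

assert has_all_inverses(7)       # F_7 is a field
assert not has_all_inverses(4)   # Z/4Z is not: 2 has no inverse, since 2 * 2 = 0

p, a = 11, 7
assert (a * pow(a, p - 2, p)) % p == 1   # a^(p-2) is the inverse of a modulo a prime p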
Every finite fieldFhasq=pnelements, wherepis prime andn≥ 1. This statement holds sinceFmay be viewed as avector spaceover its prime field. Thedimensionof this vector space is necessarily finite, sayn, which implies the asserted statement.[15]
A field withq=pnelements can be constructed as thesplitting fieldof thepolynomial
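f = X^q − X.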
Such a splitting field is an extension ofFpin which the polynomialfhasqzeros. This meansfhas as many zeros as possible since thedegreeoffisq. Forq= 22= 4, it can be checked case by case using the above multiplication table that all four elements ofF4satisfy the equationx4=x, so they are zeros off. By contrast, inF2,fhas only two zeros (namely0and1), sofdoes not split into linear factors in this smaller field. Elaborating further on basic field-theoretic notions, it can be shown that two finite fields with the same order are isomorphic.[16]It is thus customary to speak ofthefinite field withqelements, denoted byFqorGF(q).
Historically, three algebraic disciplines led to the concept of a field: the question of solving polynomial equations,algebraic number theory, andalgebraic geometry.[17]A first step towards the notion of a field was made in 1770 byJoseph-Louis Lagrange, who observed that permuting the zerosx1,x2,x3of acubic polynomialin the expression
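(x1 + ω x2 + ω² x3)³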
(withωbeing a thirdroot of unity) only yields two values. This way, Lagrange conceptually explained the classical solution method ofScipione del FerroandFrançois Viète, which proceeds by reducing a cubic equation for an unknownxto a quadratic equation forx3.[18]Together with a similar observation forequations of degree 4, Lagrange thus linked what eventually became the concept of fields and the concept of groups.[19]Vandermonde, also in 1770, and to a fuller extent,Carl Friedrich Gauss, in hisDisquisitiones Arithmeticae(1801), studied the equation
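x^p = 1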
for a prime p and, again using modern language, the resulting cyclic Galois group. Gauss deduced that a regular p-gon can be constructed if p = 2^(2^k) + 1. Building on Lagrange's work, Paolo Ruffini claimed (1799) that quintic equations (polynomial equations of degree 5) cannot be solved algebraically; however, his arguments were flawed. These gaps were filled by Niels Henrik Abel in 1824.[20] Évariste Galois, in 1832, devised necessary and sufficient criteria for a polynomial equation to be algebraically solvable, thus establishing in effect what is known as Galois theory today. Both Abel and Galois worked with what is today called an algebraic number field, but conceived neither an explicit notion of a field, nor of a group.
In 1871Richard Dedekindintroduced, for a set of real or complex numbers that is closed under the four arithmetic operations, theGermanwordKörper, which means "body" or "corpus" (to suggest an organically closed entity). The English term "field" was introduced byMoore (1893).[21]
By a field we will mean every infinite system of real or complex numbers so closed in itself and perfect that addition, subtraction, multiplication, and division of any two of these numbers again yields a number of the system.
In 1881 Leopold Kronecker defined what he called a domain of rationality, which is a field of rational fractions in modern terms. Kronecker's notion did not cover the field of all algebraic numbers (which is a field in Dedekind's sense), but on the other hand was more abstract than Dedekind's in that it made no specific assumption on the nature of the elements of a field. Kronecker interpreted a field such as Q(π) abstractly as the rational function field Q(X). Examples of transcendental numbers had been known since Joseph Liouville's work in 1844; Charles Hermite (1873) and Ferdinand von Lindemann (1882) proved the transcendence of e and π, respectively.[23]
The first clear definition of an abstract field is due toWeber (1893).[24]In particular,Heinrich Martin Weber's notion included the fieldFp.Giuseppe Veronese(1891) studied the field of formal power series, which ledHensel (1904)to introduce the field ofp-adic numbers.Steinitz (1910)synthesized the knowledge of abstract field theory accumulated so far. He axiomatically studied the properties of fields and defined many important field-theoretic concepts. The majority of the theorems mentioned in the sectionsGalois theory,Constructing fieldsandElementary notionscan be found in Steinitz's work.Artin & Schreier (1927)linked the notion oforderings in a field, and thus the area of analysis, to purely algebraic properties.[25]Emil Artinredeveloped Galois theory from 1928 through 1942, eliminating the dependency on theprimitive element theorem.
Acommutative ringis a set that is equipped with an addition and multiplication operation and satisfies all the axioms of a field, except for the existence of multiplicative inversesa−1.[26]For example, the integersZform a commutative ring, but not a field: thereciprocalof an integernis not itself an integer, unlessn= ±1.
In the hierarchy of algebraic structures fields can be characterized as the commutative ringsRin which every nonzero element is aunit(which means every element is invertible). Similarly, fields are the commutative rings with precisely two distinctideals,(0)andR. Fields are also precisely the commutative rings in which(0)is the onlyprime ideal.
Given a commutative ringR, there are two ways to construct a field related toR, i.e., two ways of modifyingRsuch that all nonzero elements become invertible: forming the field of fractions, and forming residue fields. The field of fractions ofZisQ, the rationals, while the residue fields ofZare the finite fieldsFp.
Given anintegral domainR, itsfield of fractionsQ(R)is built with the fractions of two elements ofRexactly asQis constructed from the integers. More precisely, the elements ofQ(R)are the fractionsa/bwhereaandbare inR, andb≠ 0. Two fractionsa/bandc/dare equal if and only ifad=bc. The operation on the fractions work exactly as for rational numbers. For example,
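a/b + c/d = (a ⋅ d + b ⋅ c)/(b ⋅ d)   and   (a/b) ⋅ (c/d) = (a ⋅ c)/(b ⋅ d).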
It is straightforward to show that, if the ring is an integral domain, the set of the fractions form a field.[27]
The fieldF(x)of therational fractionsover a field (or an integral domain)Fis the field of fractions of thepolynomial ringF[x]. The fieldF((x))ofLaurent series
over a fieldFis the field of fractions of the ringF[[x]]offormal power series(in whichk≥ 0). Since any Laurent series is a fraction of a power series divided by a power ofx(as opposed to an arbitrary power series), the representation of fractions is less important in this situation, though.
In addition to the field of fractions, which embedsRinjectivelyinto a field, a field can be obtained from a commutative ringRby means of asurjective maponto a fieldF. Any field obtained in this way is aquotientR/m, wheremis amaximal idealofR. IfRhas only one maximal idealm, this field is called theresidue fieldofR.[28]
Theideal generated by a single polynomialfin the polynomial ringR=E[X](over a fieldE) is maximal if and only iffisirreducibleinE, i.e., iffcannot be expressed as the product of two polynomials inE[X]of smallerdegree. This yields a field
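F = E[X] / (f).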
This fieldFcontains an elementx(namely theresidue classofX) which satisfies the equation
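f(x) = 0.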
For example,Cis obtained fromRbyadjoiningtheimaginary unitsymboli, which satisfiesf(i) = 0, wheref(X) =X2+ 1. Moreover,fis irreducible overR, which implies that the map that sends a polynomialf(X) ∊R[X]tof(i)yields an isomorphism
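R[X] / (X² + 1) ≅ C.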
Fields can be constructed inside a given bigger container field. Suppose given a fieldE, and a fieldFcontainingEas a subfield. For any elementxofF, there is a smallest subfield ofFcontainingEandx, called the subfield ofFgenerated byxand denotedE(x).[29]The passage fromEtoE(x)is referred to byadjoiningan elementtoE. More generally, for a subsetS⊂F, there is a minimal subfield ofFcontainingEandS, denoted byE(S).
Thecompositumof two subfieldsEandE′of some fieldFis the smallest subfield ofFcontaining bothEandE′. The compositum can be used to construct the biggest subfield ofFsatisfying a certain property, for example the biggest subfield ofF, which is, in the language introduced below, algebraic overE.[c]
The notion of a subfieldE⊂Fcan also be regarded from the opposite point of view, by referring toFbeing afield extension(or just extension) ofE, denoted by
and read "FoverE".
A basic datum of a field extension is itsdegree[F:E], i.e., the dimension ofFas anE-vector space. It satisfies the formula[30]
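[G : E] = [G : F] ⋅ [F : E], for a third field G containing F.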
Extensions whose degree is finite are referred to as finite extensions. The extensionsC/RandF4/F2are of degree2, whereasR/Qis an infinite extension.
A pivotal notion in the study of field extensionsF/Earealgebraic elements. An elementx∈FisalgebraicoverEif it is arootof apolynomialwithcoefficientsinE, that is, if it satisfies apolynomial equation
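e_n x^n + e_{n−1} x^{n−1} + ⋯ + e_1 x + e_0 = 0,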
withen, ...,e0inE, anden≠ 0.
For example, theimaginary unitiinCis algebraic overR, and even overQ, since it satisfies the equation
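i² + 1 = 0.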
A field extension in which every element ofFis algebraic overEis called analgebraic extension. Any finite extension is necessarily algebraic, as can be deduced from the above multiplicativity formula.[31]
The subfieldE(x)generated by an elementx, as above, is an algebraic extension ofEif and only ifxis an algebraic element. That is to say, ifxis algebraic, all other elements ofE(x)are necessarily algebraic as well. Moreover, the degree of the extensionE(x) /E, i.e., the dimension ofE(x)as anE-vector space, equals the minimal degreensuch that there is a polynomial equation involvingx, as above. If this degree isn, then the elements ofE(x)have the form
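e_0 + e_1 x + e_2 x² + ⋯ + e_{n−1} x^{n−1}, with e_0, ..., e_{n−1} in E.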
For example, the fieldQ(i)ofGaussian rationalsis the subfield ofCconsisting of all numbers of the forma+biwhere bothaandbare rational numbers: summands of the formi2(and similarly for higher exponents) do not have to be considered here, sincea+bi+ci2can be simplified toa−c+bi.
The above-mentioned field ofrational fractionsE(X), whereXis anindeterminate, is not an algebraic extension ofEsince there is no polynomial equation with coefficients inEwhose zero isX. Elements, such asX, which are not algebraic are calledtranscendental. Informally speaking, the indeterminateXand its powers do not interact with elements ofE. A similar construction can be carried out with a set of indeterminates, instead of just one.
Once again, the field extensionE(x) /Ediscussed above is a key example: ifxis not algebraic (i.e.,xis not arootof a polynomial with coefficients inE), thenE(x)is isomorphic toE(X). This isomorphism is obtained by substitutingxtoXin rational fractions.
A subset S of a field F is a transcendence basis if it is algebraically independent (its elements do not satisfy any nontrivial polynomial relations) over E and if F is an algebraic extension of E(S). Any field extension F / E has a transcendence basis.[32] Thus, field extensions can be split into ones of the form E(S) / E (purely transcendental extensions) and algebraic extensions.
A field isalgebraically closedif it does not have any strictly bigger algebraic extensions or, equivalently, if anypolynomial equation
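f_n x^n + f_{n−1} x^{n−1} + ⋯ + f_1 x + f_0 = 0 (with n > 0, f_n ≠ 0 and coefficients in F)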
has a solutionx∊F.[33]By thefundamental theorem of algebra,Cis algebraically closed, i.e.,anypolynomial equation with complex coefficients has a complex solution. The rational and the real numbers arenotalgebraically closed since the equation
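x² + 1 = 0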
does not have any rational or real solution. A field containingFis called analgebraic closureofFif it isalgebraicoverF(roughly speaking, not too big compared toF) and is algebraically closed (big enough to contain solutions of all polynomial equations).
By the above,Cis an algebraic closure ofR. The situation that the algebraic closure is a finite extension of the fieldFis quite special: by theArtin–Schreier theorem, the degree of this extension is necessarily2, andFiselementarily equivalenttoR. Such fields are also known asreal closed fields.
Any fieldFhas an algebraic closure, which is moreover unique up to (non-unique) isomorphism. It is commonly referred to asthealgebraic closure and denotedF. For example, the algebraic closureQofQis called the field ofalgebraic numbers. The fieldFis usually rather implicit since its construction requires theultrafilter lemma, a set-theoretic axiom that is weaker than theaxiom of choice.[34]In this regard, the algebraic closure ofFq, is exceptionally simple. It is the union of the finite fields containingFq(the ones of orderqn). For any algebraically closed fieldFof characteristic0, the algebraic closure of the fieldF((t))ofLaurent seriesis the field ofPuiseux series, obtained by adjoining roots oft.[35]
Since fields are ubiquitous in mathematics and beyond, several refinements of the concept have been adapted to the needs of particular mathematical areas.
A fieldFis called anordered fieldif any two elements can be compared, so thatx+y≥ 0andxy≥ 0wheneverx≥ 0andy≥ 0. For example, the real numbers form an ordered field, with the usual ordering≥. TheArtin–Schreier theoremstates that a field can be ordered if and only if it is aformally real field, which means that any quadratic equation
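x1² + x2² + ⋯ + xn² = 0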
only has the solutionx1=x2= ⋯ =xn= 0.[36]The set of all possible orders on a fixed fieldFis isomorphic to the set ofring homomorphismsfrom theWitt ringW(F)ofquadratic formsoverF, toZ.[37]
AnArchimedean fieldis an ordered field such that for each element there exists a finite expression
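1 + 1 + ⋯ + 1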
whose value is greater than that element, that is, there are no infinite elements. Equivalently, the field contains noinfinitesimals(elements smaller than all rational numbers); or, yet equivalent, the field is isomorphic to a subfield ofR.
An ordered field isDedekind-completeif allupper bounds,lower bounds(seeDedekind cut) and limits, which should exist, do exist. More formally, eachbounded subsetofFis required to have a least upper bound. Any complete field is necessarily Archimedean,[38]since in any non-Archimedean field there is neither a greatest infinitesimal nor a least positive rational, whence the sequence1/2, 1/3, 1/4, ..., every element of which is greater than every infinitesimal, has no limit.
Since every proper subfield of the reals also contains such gaps,Ris the unique complete ordered field, up to isomorphism.[39]Several foundational results incalculusfollow directly from this characterization of the reals.
ThehyperrealsR*form an ordered field that is not Archimedean. It is an extension of the reals obtained by including infinite and infinitesimal numbers. These are larger, respectively smaller than any real number. The hyperreals form the foundational basis ofnon-standard analysis.
Another refinement of the notion of a field is atopological field, in which the setFis atopological space, such that all operations of the field (addition, multiplication, the mapsa↦ −aanda↦a−1) arecontinuous mapswith respect to the topology of the space.[40]The topology of all the fields discussed below is induced from ametric, i.e., afunction
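d : F × F → R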
that measures adistancebetween any two elements ofF.
ThecompletionofFis another field in which, informally speaking, the "gaps" in the original fieldFare filled, if there are any. For example, anyirrational numberx, such asx=√2, is a "gap" in the rationalsQin the sense that it is a real number that can be approximated arbitrarily closely by rational numbersp/q, in the sense that distance ofxandp/qgiven by theabsolute value|x−p/q|is as small as desired.
The following table lists some examples of this construction. The fourth column shows an example of a zerosequence, i.e., a sequence whose limit (forn→ ∞) is zero.
The fieldQpis used in number theory andp-adic analysis. The algebraic closureQpcarries a unique norm extending the one onQp, but is not complete. The completion of this algebraic closure, however, is algebraically closed. Because of its rough analogy to the complex numbers, it is sometimes called the field ofcomplexp-adic numbersand is denoted byCp.[41]
The following topological fields are calledlocal fields:[42][d]
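These are the finite extensions of Qp (local fields of characteristic zero) and the finite extensions of Fp((t)), the field of Laurent series over Fp (local fields of characteristic p).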
These two types of local fields share some fundamental similarities. In this relation, the elementsp∈Qpandt∈Fp((t))(referred to asuniformizer) correspond to each other. The first manifestation of this is at an elementary level: the elements of both fields can be expressed as power series in the uniformizer, with coefficients inFp. (However, since the addition inQpis done usingcarrying, which is not the case inFp((t)), these fields are not isomorphic.) The following facts show that this superficial similarity goes much deeper:
Differential fieldsare fields equipped with aderivation, i.e., allow to take derivatives of elements in the field.[44]For example, the fieldR(X), together with the standard derivative of polynomials forms a differential field. These fields are central todifferential Galois theory, a variant of Galois theory dealing withlinear differential equations.
Galois theory studiesalgebraic extensionsof a field by studying thesymmetryin the arithmetic operations of addition and multiplication. An important notion in this area is that offiniteGalois extensionsF/E, which are, by definition, those that areseparableandnormal. Theprimitive element theoremshows that finite separable extensions are necessarilysimple, i.e., of the form
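F = E[X] / (f(X)),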
wherefis an irreducible polynomial (as above).[45]For such an extension, being normal and separable means that all zeros offare contained inFand thatfhas only simple zeros. The latter condition is always satisfied ifEhas characteristic0.
For a finite Galois extension, the Galois group Gal(F/E) is the group of field automorphisms of F that are trivial on E (i.e., the bijections σ : F → F that preserve addition and multiplication and that send elements of E to themselves). The importance of this group stems from the fundamental theorem of Galois theory, which constructs an explicit one-to-one correspondence between the set of subgroups of Gal(F/E) and the set of intermediate extensions of the extension F/E.[46] By means of this correspondence, group-theoretic properties translate into facts about fields. For example, if the Galois group of a Galois extension as above is not solvable (cannot be built from abelian groups), then the zeros of f cannot be expressed in terms of addition, multiplication, and radicals, i.e., expressions involving nth roots. For example, the symmetric group Sn is not solvable for n ≥ 5. Consequently, as can be shown, the zeros of the following polynomials are not expressible by sums, products, and radicals. For the latter polynomial, this fact is known as the Abel–Ruffini theorem:
Thetensor product of fieldsis not usually a field. For example, a finite extensionF/Eof degreenis a Galois extension if and only if there is an isomorphism ofF-algebras
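F ⊗_E F ≅ F × F × ⋯ × F (n factors).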
This fact is the beginning ofGrothendieck's Galois theory, a far-reaching extension of Galois theory applicable to algebro-geometric objects.[48]
Basic invariants of a fieldFinclude the characteristic and thetranscendence degreeofFover its prime field. The latter is defined as the maximal number of elements inFthat are algebraically independent over the prime field. Two algebraically closed fieldsEandFare isomorphic precisely if these two data agree.[49]This implies that any twouncountablealgebraically closed fields of the samecardinalityand the same characteristic are isomorphic. For example,Qp,CpandCare isomorphic (butnotisomorphic as topological fields).
Inmodel theory, a branch ofmathematical logic, two fieldsEandFare calledelementarily equivalentif every mathematical statement that is true forEis also true forFand conversely. The mathematical statements in question are required to befirst-ordersentences (involving0,1, the addition and multiplication). A typical example, forn> 0,nan integer, is
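φ_n = ∀ a_1 ⋯ ∀ a_n ∃ x (x^n + a_1 x^{n−1} + ⋯ + a_n = 0).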
The set of such formulas for allnexpresses thatEis algebraically closed.
TheLefschetz principlestates thatCis elementarily equivalent to any algebraically closed fieldFof characteristic zero. Moreover, any fixed statementφholds inCif and only if it holds in any algebraically closed field of sufficiently high characteristic.[50]
IfUis anultrafilteron a setI, andFiis a field for everyiinI, theultraproductof theFiwith respect toUis a field.[51]It is denoted by
since it behaves in several ways as a limit of the fieldsFi:Łoś's theoremstates that any first order statement that holds for all but finitely manyFi, also holds for the ultraproduct. Applied to the above sentenceφ, this shows that there is an isomorphism[e]
The Ax–Kochen theorem mentioned above also follows from this and an isomorphism of the ultraproducts (in both cases over all primesp)
In addition, model theory also studies the logical properties of various other types of fields, such asreal closed fieldsorexponential fields(which are equipped with an exponential functionexp :F→F×).[52]
For fields that are not algebraically closed (or not separably closed), the absolute Galois group Gal(F) is fundamentally important: extending the case of finite Galois extensions outlined above, this group governs all finite separable extensions of F. By elementary means, the group Gal(Fq) can be shown to be the Prüfer group, the profinite completion of Z. This statement subsumes the fact that the only algebraic extensions of Fq are the fields Fq^n for n > 0, and that the Galois groups of these finite extensions are given by
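Gal(Fq^n / Fq) ≅ Z / n Z, generated by the Frobenius map x ↦ x^q.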
A description in terms of generators and relations is also known for the Galois groups ofp-adic number fields (finite extensions ofQp).[53]
Representations of Galois groupsand of related groups such as theWeil groupare fundamental in many branches of arithmetic, such as theLanglands program. The cohomological study of such representations is done usingGalois cohomology.[54]For example, theBrauer group, which is classically defined as the group ofcentral simpleF-algebras, can be reinterpreted as a Galois cohomology group, namely
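Br(F) = H²(F, Gm).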
Milnor K-theoryis defined as
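K_n^M(F) = F× ⊗ ⋯ ⊗ F× / (x ⊗ (1 − x)), the n-fold tensor product of the multiplicative group F× modulo the relations x ⊗ (1 − x) = 0 for x ≠ 0, 1.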
Thenorm residue isomorphism theorem, proved around 2000 byVladimir Voevodsky, relates this to Galois cohomology by means of an isomorphism
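K_n^M(F) / p ≅ H^n(F, μ_p^⊗n), for any prime p different from the characteristic of F.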
Algebraic K-theoryis related to the group ofinvertible matriceswith coefficients the given field. For example, the process of taking thedeterminantof an invertible matrix leads to an isomorphismK1(F) =F×.Matsumoto's theoremshows thatK2(F)agrees withK2M(F). In higher degrees, K-theory diverges from Milnor K-theory and remains hard to compute in general.
Ifa≠ 0, then theequation
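a ⋅ x = b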
has a unique solution x in a field F, namely x = a⁻¹b. This immediate consequence of the definition of a field is fundamental in linear algebra. For example, it is an essential ingredient of Gaussian elimination and of the proof that any vector space has a basis.[55]
The theory ofmodules(the analogue of vector spaces overringsinstead of fields) is much more complicated, because the above equation may have several or no solutions. In particularsystems of linear equations over a ringare much more difficult to solve than in the case of fields, even in the specially simple case of the ringZof the integers.
A widely applied cryptographic routine uses the fact that discrete exponentiation, i.e., computing
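a^n = a ⋅ a ⋯ a (n factors)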
in a (large) finite fieldFqcan be performed much more efficiently than thediscrete logarithm, which is the inverse operation, i.e., determining the solutionnto an equation
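a^n = b.

The asymmetry can be illustrated with a minimal Python sketch over a prime field Fp (the prime, base, and exponent below are small illustrative values, not real cryptographic parameters): exponentiation uses fast square-and-multiply, built into pow, while the naive way to recover the exponent is to step through the powers of the base one by one.

p = 2_147_483_647        # the Mersenne prime 2**31 - 1; real systems use far larger fields
g = 7                    # a fixed base in F_p
secret = 1_234_567

b = pow(g, secret, p)    # discrete exponentiation: fast even for huge exponents

def naive_discrete_log(g: int, b: int, p: int, limit: int = 2_000_000):
    # brute-force search for an n with g**n = b (mod p); hopeless at real sizes
    x, n = 1, 0
    while n < limit:
        if x == b:
            return n
        x = (x * g) % p
        n += 1
    return None

n = naive_discrete_log(g, b, p)
assert n is not None and pow(g, n, p) == b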
Inelliptic curve cryptography, the multiplication in a finite field is replaced by the operation of adding points on anelliptic curve, i.e., the solutions of an equation of the form
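y² = x³ + a x + b.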
Finite fields are also used incoding theoryandcombinatorics.
Functionson a suitabletopological spaceXinto a fieldFcan be added and multiplied pointwise, e.g., the product of two functions is defined by the product of their values within the domain:
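(f ⋅ g)(x) = f(x) ⋅ g(x).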
This makes these functions a commutative F-algebra.
For having afieldof functions, one must consider algebras of functions that areintegral domains. In this case the ratios of two functions, i.e., expressions of the form
form a field, called field of functions.
This occurs in two main cases. The first is when X is a complex manifold. In this case, one considers the algebra of holomorphic functions, i.e., complex differentiable functions. Their ratios form the field of meromorphic functions on X.
Thefunction field of an algebraic varietyX(a geometric object defined as the common zeros of polynomial equations) consists of ratios ofregular functions, i.e., ratios of polynomial functions on the variety. The function field of then-dimensionalspaceover a fieldFisF(x1, ...,xn), i.e., the field consisting of ratios of polynomials innindeterminates. The function field ofXis the same as the one of anyopendense subvariety. In other words, the function field is insensitive to replacingXby a (slightly) smaller subvariety.
The function field is invariant underisomorphismandbirational equivalenceof varieties. It is therefore an important tool for the study ofabstract algebraic varietiesand for the classification of algebraic varieties. For example, thedimension, which equals the transcendence degree ofF(X), is invariant under birational equivalence.[56]Forcurves(i.e., the dimension is one), the function fieldF(X)is very close toX: ifXissmoothandproper(the analogue of beingcompact),Xcan be reconstructed, up to isomorphism, from its field of functions.[f]In higher dimension the function field remembers less, but still decisive information aboutX. The study of function fields and their geometric meaning in higher dimensions is referred to asbirational geometry. Theminimal model programattempts to identify the simplest (in a certain precise sense) algebraic varieties with a prescribed function field.
Global fieldsare in the limelight inalgebraic number theoryandarithmetic geometry.
They are, by definition,number fields(finite extensions ofQ) or function fields overFq(finite extensions ofFq(t)). As for local fields, these two types of fields share several similar features, even though they are of characteristic0and positive characteristic, respectively. Thisfunction field analogycan help to shape mathematical expectations, often first by understanding questions about function fields, and later treating the number field case. The latter is often more difficult. For example, theRiemann hypothesisconcerning the zeros of theRiemann zeta function(open as of 2017) can be regarded as being parallel to theWeil conjectures(proven in 1974 byPierre Deligne).
Cyclotomic fieldsare among the most intensely studied number fields. They are of the formQ(ζn), whereζnis a primitiventhroot of unity, i.e., a complex numberζthat satisfiesζn= 1andζm≠ 1for all0 <m<n.[57]Fornbeing aregular prime,Kummerused cyclotomic fields to proveFermat's Last Theorem, which asserts the non-existence of rational nonzero solutions to the equation
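x^n + y^n = z^n.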
Local fields are completions of global fields.Ostrowski's theoremasserts that the only completions ofQ, a global field, are the local fieldsQpandR. Studying arithmetic questions in global fields may sometimes be done by looking at the corresponding questions locally. This technique is called thelocal–global principle. For example, theHasse–Minkowski theoremreduces the problem of finding rational solutions of quadratic equations to solving these equations inRandQp, whose solutions can easily be described.[58]
Unlike for local fields, the Galois groups of global fields are not known.Inverse Galois theorystudies the (unsolved) problem whether any finite group is the Galois groupGal(F/Q)for some number fieldF.[59]Class field theorydescribes theabelian extensions, i.e., ones with abelian Galois group, or equivalently the abelianized Galois groups of global fields. A classical statement, theKronecker–Weber theorem, describes the maximal abelianQabextension ofQ: it is the field
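Q(ζ_n : n ≥ 1),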
obtained by adjoining all primitive nth roots of unity. Kronecker's Jugendtraum asks for a similarly explicit description of F^ab of general number fields F. For imaginary quadratic fields, F = Q(√−d), d > 0, the theory of complex multiplication describes F^ab using elliptic curves. For general number fields, no such explicit description is known.
In addition to the additional structure that fields may enjoy, fields admit various other related notions. Since in any field0 ≠ 1, any field has at least two elements. Nonetheless, there is a concept offield with one element, which is suggested to be a limit of the finite fieldsFp, asptends to1.[60]In addition to division rings, there are various other weaker algebraic structures related to fields such asquasifields,near-fieldsandsemifields.
There are alsoproper classeswith field structure, which are sometimes calledFields, with a capital 'F'. Thesurreal numbersform a Field containing the reals, and would be a field except for the fact that they are a proper class, not a set. Thenimbers, a concept fromgame theory, form such a Field as well.[61]
Dropping one or several axioms in the definition of a field leads to other algebraic structures. As was mentioned above, commutative rings satisfy all field axioms except for the existence of multiplicative inverses. Dropping instead commutativity of multiplication leads to the concept of adivision ringorskew field;[g]sometimes associativity is weakened as well. The only division rings that are finite-dimensionalR-vector spaces areRitself,C(which is a field), and thequaternionsH(in which multiplication is non-commutative). This result is known as theFrobenius theorem. TheoctonionsO, for which multiplication is neither commutative nor associative, is a normedalternativedivision algebra, but is not a division ring. This fact was proved using methods ofalgebraic topologyin 1958 byMichel Kervaire,Raoul Bott, andJohn Milnor.[62]
Wedderburn's little theoremstates that all finitedivision ringsare fields.
|
https://en.wikipedia.org/wiki/Field_(mathematics)
|
Inmathematics, aprime poweris apositive integerwhich is a positive integerpowerof a singleprime number.
For example:7 = 71,9 = 32and64 = 26are prime powers, while6 = 2 × 3,12 = 22× 3and36 = 62= 22× 32are not.
The sequence of prime powers begins:
2, 3, 4, 5, 7, 8, 9, 11, 13, 16, 17, 19, 23, 25, 27, 29, 31, 32, 37, 41, 43, 47, 49, 53, 59, 61, 64, 67, 71, 73, 79, 81, 83, 89, 97, 101, 103, 107, 109, 113, 121, 125, 127, 128, 131, 137, 139, 149, 151, 157, 163, 167, 169, 173, 179, 181, 191, 193, 197, 199, 211, 223, 227, 229, 233, 239, 241, 243, 251, …
(sequenceA246655in theOEIS).
The prime powers are those positive integers that aredivisibleby exactly one prime number; in particular, the number 1 is not a prime power. Prime powers are also calledprimary numbers, as in theprimary decomposition.
Prime powers are powers of prime numbers. Every prime power (exceptpowers of 2greater than 4) has aprimitive root; thus themultiplicative groupof integers modulopn(that is, thegroup of unitsof theringZ/pnZ) iscyclic.[1]
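A brute-force Python sketch of this statement for one small odd prime power (the modulus 81 = 3^4 is an illustrative choice):

from math import gcd

def is_generator(g: int, m: int, order: int) -> bool:
    # g generates the unit group iff its successive powers hit all of the units
    seen, x = set(), 1
    for _ in range(order):
        x = (x * g) % m
        seen.add(x)
    return len(seen) == order

m = 3 ** 4
units = [a for a in range(1, m) if gcd(a, m) == 1]
order = len(units)                                      # phi(81) = 54
assert any(is_generator(g, m, order) for g in units)    # a primitive root exists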
The number of elements of afinite fieldis always a prime power and conversely, every prime power occurs as the number of elements in some finite field (which is unique up toisomorphism).[2]
A property of prime powers used frequently inanalytic number theoryis that thesetof prime powers which are not prime is asmall setin the sense that theinfinite sumof their reciprocalsconverges, although the primes are a large set.[3]
Thetotient function(φ) andsigma functions(σ0) and (σ1) of a prime power are calculated by the formulas
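φ(p^n) = p^n − p^(n−1),  σ0(p^n) = n + 1,  σ1(p^n) = 1 + p + p² + ⋯ + p^n = (p^(n+1) − 1) / (p − 1).

These formulas can be verified by brute force for small prime powers; the following Python sketch (with two small helper functions written here for that purpose) does exactly that:

from math import gcd

def phi(m: int) -> int:
    # Euler's totient, straight from the definition
    return sum(1 for k in range(1, m + 1) if gcd(k, m) == 1)

def divisors(m: int) -> list:
    return [d for d in range(1, m + 1) if m % d == 0]

for p, n in [(2, 5), (3, 3), (5, 2), (7, 1)]:
    q = p ** n
    assert phi(q) == q - q // p                               # p^n - p^(n-1)
    assert len(divisors(q)) == n + 1                          # sigma_0
    assert sum(divisors(q)) == (p ** (n + 1) - 1) // (p - 1)  # sigma_1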
All prime powers are deficient numbers. A prime power p^n is an n-almost prime. It is not known whether a prime power p^n can be a member of an amicable pair. If there is such a number, then p^n must be greater than 10^1500 and n must be greater than 1400.
|
https://en.wikipedia.org/wiki/Prime_power
|
Information securityis the practice of protectinginformationby mitigating information risks. It is part of information risk management.[1]It typically involves preventing or reducing the probability of unauthorized or inappropriate access todataor the unlawful use,disclosure, disruption, deletion, corruption, modification, inspection, recording, or devaluation of information. It also involves actions intended to reduce the adverse impacts of such incidents. Protected information may take any form, e.g., electronic or physical, tangible (e.g.,paperwork), or intangible (e.g.,knowledge).[2][3]Information security's primary focus is the balanced protection ofdata confidentiality,integrity, andavailability(also known as the 'CIA' triad)[4][5]while maintaining a focus on efficientpolicyimplementation, all without hampering organizationproductivity.[6]This is largely achieved through a structuredrisk managementprocess.[7]
To standardize this discipline, academics and professionals collaborate to offer guidance, policies, and industry standards onpasswords,antivirus software,firewalls,encryption software,legal liability,security awarenessand training, and so forth.[8]Thisstandardizationmay be further driven by a wide variety of laws and regulations that affect how data is accessed, processed, stored, transferred, and destroyed.[9]
While paper-based business operations are still prevalent, requiring their own set of information security practices, enterprise digital initiatives are increasingly being emphasized,[10][11]withinformation assurancenow typically being dealt with by information technology (IT) security specialists. These specialists apply information security to technology (most often some form of computer system).
IT security specialists are almost always found in any major enterprise/establishment due to the nature and value of the data within larger businesses.[12]They are responsible for keeping all of thetechnologywithin the company secure from malicious attacks that often attempt to acquire critical private information or gain control of the internal systems.[13][14]
There are many specialist roles in Information Security including securing networks and alliedinfrastructure, securingapplicationsanddatabases,security testing, information systemsauditing,business continuity planning, electronic record discovery, anddigital forensics.[15]
Information security standards are techniques generally outlined in published materials that attempt to protect the information of a user or organization.[16]This environment includes users themselves, networks, devices, all software, processes, information in storage or transit, applications, services, and systems that can be connected directly or indirectly to networks.
The principal objective is to reduce the risks, including preventing or mitigating attacks. These published materials consist of tools, policies, security concepts, security safeguards, guidelines, risk management approaches, actions, training, best practices, assurance and technologies.
Various definitions of information security are suggested below, summarized from different sources:
Information securitythreatscome in many different forms.[27]Some of the most common threats today are software attacks, theft of intellectual property, theft of identity, theft of equipment or information, sabotage, and information extortion.[28][29]Viruses,[30]worms,phishing attacks, andTrojan horsesare a few common examples of software attacks. Thetheft of intellectual propertyhas also been an extensive issue for many businesses.[31]Identity theftis the attempt to act as someone else usually to obtain that person's personal information or to take advantage of their access to vital information throughsocial engineering.[32][33]Sabotageusually consists of the destruction of an organization'swebsitein an attempt to cause loss of confidence on the part of its customers.[34]Information extortion consists of theft of a company's property or information in an attempt to receive a payment in exchange for returning the information or property back to its owner, as withransomware.[35]One of the most effective precautions against these attacks is to conduct periodic user awareness training.[36]
Governments,military,corporations,financial institutions,hospitals, non-profit organizations, and privatebusinessesamass a great deal of confidential information about their employees, customers, products, research, and financial status.[37]Should confidential information about a business's customers or finances or new product line fall into the hands of a competitor orhacker, a business and its customers could suffer widespread, irreparable financial loss, as well as damage to the company's reputation.[38]From a business perspective, information security must be balanced against cost; theGordon-Loeb Modelprovides a mathematical economic approach for addressing this concern.[39]
For the individual, information security has a significant effect onprivacy, which is viewed very differently in variouscultures.[40]
Since the early days of communication, diplomats and military commanders understood that it was necessary to provide some mechanism to protect the confidentiality of correspondence and to have some means of detectingtampering.[41]Julius Caesaris credited with the invention of theCaesar cipherc. 50 B.C., which was created in order to prevent his secret messages from being read should a message fall into the wrong hands.[42]However, for the most part protection was achieved through the application of procedural handling controls.[43][44]Sensitive information was marked up to indicate that it should be protected and transported by trusted persons, guarded and stored in a secure environment or strong box.[45]As postal services expanded, governments created official organizations to intercept, decipher, read, and reseal letters (e.g., the U.K.'s Secret Office, founded in 1653[46]).
In the mid-nineteenth century more complexclassification systemswere developed to allow governments to manage their information according to the degree of sensitivity.[47]For example, the British Government codified this, to some extent, with the publication of theOfficial Secrets Actin 1889.[48]Section 1 of the law concerned espionage and unlawful disclosures of information, while Section 2 dealt with breaches of official trust.[49]A public interest defense was soon added to defend disclosures in the interest of the state.[50]A similar law was passed in India in 1889, The Indian Official Secrets Act, which was associated with the British colonial era and used to crack down on newspapers that opposed the Raj's policies.[51]A newer version was passed in 1923 that extended to all matters of confidential or secret information for governance.[52]By the time of theFirst World War, multi-tier classification systems were used to communicate information to and from various fronts, which encouraged greater use of code making and breaking sections in diplomatic and military headquarters.[53]Encoding became more sophisticated between the wars as machines were employed to scramble and unscramble information.[54]
The establishment ofcomputer securityinaugurated the history of information security. The need for such appeared duringWorld War II.[55]The volume of information shared by the Allied countries during the Second World War necessitated formal alignment of classification systems and procedural controls.[56]An arcane range of markings evolved to indicate who could handle documents (usually officers rather than enlisted troops) and where they should be stored as increasingly complex safes and storage facilities were developed.[57]TheEnigma Machine, which was employed by the Germans to encrypt the data of warfare and was successfully decrypted byAlan Turing, can be regarded as a striking example of creating and using secured information.[58]Procedures evolved to ensure documents were destroyed properly, and it was the failure to follow these procedures which led to some of the greatest intelligence coups of the war (e.g., the capture ofU-570[58]).
Variousmainframe computerswere connected online during theCold Warto complete more sophisticated tasks, in a communication process easier than mailingmagnetic tapesback and forth by computer centers. As such, theAdvanced Research Projects Agency(ARPA), of theUnited States Department of Defense, started researching the feasibility of a networked system of communication to trade information within theUnited States Armed Forces. In 1968, theARPANETproject was formulated byLarry Roberts, which would later evolve into what is known as theinternet.[59]
In 1973, important elements of ARPANET security were found by internet pioneerRobert Metcalfeto have many flaws such as the: "vulnerability of password structure and formats; lack of safety procedures fordial-up connections; and nonexistent user identification and authorizations", aside from the lack of controls and safeguards to keep data safe from unauthorized access. Hackers had effortless access to ARPANET, as phone numbers were known by the public.[60]Due to these problems, coupled with the constant violation of computer security, as well as the exponential increase in the number of hosts and users of the system, "network security" was often alluded to as "network insecurity".[60]
The end of the twentieth century and the early years of the twenty-first century saw rapid advancements intelecommunications, computinghardwareandsoftware, and dataencryption.[61]The availability of smaller, more powerful, and less expensive computing equipment madeelectronic data processingwithin the reach ofsmall businessand home users.[62]The establishment of Transfer Control Protocol/Internetwork Protocol (TCP/IP) in the early 1980s enabled different types of computers to communicate.[63]These computers quickly became interconnected through theinternet.[64]
The rapid growth and widespread use of electronic data processing andelectronic businessconducted through the internet, along with numerous occurrences of internationalterrorism, fueled the need for better methods of protecting the computers and the information they store, process, and transmit.[65]The academic disciplines ofcomputer securityandinformation assuranceemerged along with numerous professional organizations, all sharing the common goals of ensuring the security and reliability ofinformation systems.[66]
The "CIA triad" ofconfidentiality,integrity, andavailabilityis at the heart of information security.[67]The concept was introduced in the Anderson Report in 1972 and later repeated inThe Protection of Information in Computer Systems.The abbreviation was coined by Steve Lipner around 1986.[68]
Debate continues about whether or not this triad is sufficient to address rapidly changing technology and business requirements, with recommendations to consider expanding on the intersections between availability and confidentiality, as well as the relationship between security and privacy.[4]Other principles such as "accountability" have sometimes been proposed; it has been pointed out that issues such asnon-repudiationdo not fit well within the three core concepts.[69]
In information security,confidentiality"is the property, that information is not made available or disclosed to unauthorized individuals, entities, or processes."[70]While similar to "privacy", the two words are not interchangeable. Rather, confidentiality is a component of privacy that works to protect data from unauthorized viewers.[71]Examples of confidentiality of electronic data being compromised include laptop theft, password theft, or sensitive emails being sent to the incorrect individuals.[72]
In IT security,data integritymeans maintaining and assuring the accuracy and completeness of data over its entire lifecycle.[73]This means that data cannot be modified in an unauthorized or undetected manner.[74]This is not the same thing asreferential integrityindatabases, although it can be viewed as a special case of consistency as understood in the classicACIDmodel oftransaction processing.[75]Information security systems typically incorporate controls to ensure their own integrity, in particular protecting the kernel or core functions against both deliberate and accidental threats.[76]Multi-purpose and multi-user computer systems aim to compartmentalize the data and processing such that no user or process can adversely impact another: the controls may not succeed however, as we see in incidents such as malware infections, hacks, data theft, fraud, and privacy breaches.[77]
More broadly, integrity is an information security principle that involves human/social, process, and commercial integrity, as well as data integrity. As such it touches on aspects such as credibility, consistency, truthfulness, completeness, accuracy, timeliness, and assurance.[78]
For any information system to serve its purpose, the information must beavailablewhen it is needed.[79]This means the computing systems used to store and process the information, thesecurity controlsused to protect it, and the communication channels used to access it must be functioning correctly.[80]High availabilitysystems aim to remain available at all times, preventing service disruptions due to power outages, hardware failures, and system upgrades.[81]Ensuring availability also involves preventingdenial-of-service attacks, such as a flood of incoming messages to the target system, essentially forcing it to shut down.[82]
In the realm of information security, availability can often be viewed as one of the most important parts of a successful information security program.[citation needed]Ultimately end-users need to be able to perform job functions; by ensuring availability an organization is able to perform to the standards that an organization's stakeholders expect.[83]This can involve topics such as proxy configurations, outside web access, the ability to access shared drives and the ability to send emails.[84]Executives oftentimes do not understand the technical side of information security and look at availability as an easy fix, but this often requires collaboration from many different organizational teams, such as network operations, development operations, incident response, and policy/change management.[85]A successful information security team involves many different key roles to mesh and align for the "CIA" triad to be provided effectively.[86]
In addition to the classic CIA triad of security goals, some organisations may want to include security goals like authenticity, accountability, non-repudiation, and reliability.
In law,non-repudiationimplies one's intention to fulfill their obligations to a contract. It also implies that one party of a transaction cannot deny having received a transaction, nor can the other party deny having sent a transaction.[87]
It is important to note that while technology such as cryptographic systems can assist in non-repudiation efforts, the concept is at its core a legal concept transcending the realm of technology.[88]It is not, for instance, sufficient to show that the message matches a digital signature signed with the sender's private key, and thus only the sender could have sent the message, and nobody else could have altered it in transit (data integrity).[89]The alleged sender could in return demonstrate that the digital signature algorithm is vulnerable or flawed, or allege or prove that his signing key has been compromised.[90]The fault for these violations may or may not lie with the sender, and such assertions may or may not relieve the sender of liability, but the assertion would invalidate the claim that the signature necessarily proves authenticity and integrity. As such, the sender may repudiate the message (because authenticity and integrity are pre-requisites for non-repudiation).[91]
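As an illustration of the technical side of this argument, the sketch below shows how a digital signature supports, but does not by itself establish, non-repudiation. It assumes the third-party Python cryptography package, and the message is an arbitrary example:

```python
# A minimal signing/verification sketch using the third-party "cryptography"
# package. A valid signature supports integrity and authenticity; as the text
# notes, non-repudiation additionally depends on legal and procedural controls
# (for example, protection of the signer's private key).
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()   # kept secret by the signer
public_key = private_key.public_key()        # distributed to verifiers

message = b"Transfer 100 EUR to account 12345"
signature = private_key.sign(message)

try:
    public_key.verify(signature, message)    # raises if message or signature changed
    print("signature verifies: integrity and origin are supported")
except InvalidSignature:
    print("signature does not verify")
```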
First published in 1992 and revised in 2002, theOECD'sGuidelines for the Security of Information Systems and Networks[92]proposed nine generally accepted principles:awareness, responsibility, response, ethics, democracy, risk assessment, security design and implementation, security management, and reassessment.[93]Building upon those, in 2004 theNIST'sEngineering Principles for Information Technology Security[69]proposed 33 principles.
In 1998,Donn Parkerproposed an alternative model for the classic "CIA" triad that he called thesix atomic elements of information. The elements areconfidentiality,possession,integrity,authenticity,availability, andutility. The merits of theParkerian Hexadare a subject of debate amongst security professionals.[94]
In 2011,The Open Grouppublished the information security management standardO-ISM3.[95]This standard proposed anoperational definitionof the key concepts of security, with elements called "security objectives", related toaccess control(9),availability(3),data quality(1), compliance, and technical (4).
Risk is the likelihood that something bad will happen that causes harm to an informational asset (or the loss of the asset).[96]A vulnerability is a weakness that could be used to endanger or cause harm to an informational asset. A threat is anything (man-made oract of nature) that has the potential to cause harm.[97]The likelihood that a threat will use a vulnerability to cause harm creates a risk. When a threat does use a vulnerability to inflict harm, it has an impact.[98]In the context of information security, the impact is a loss of availability, integrity, and confidentiality, and possibly other losses (lost income, loss of life, loss of real property).[99]
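To make these terms concrete, the qualitative scoring below is a minimal sketch of how likelihood and impact can be combined into a risk score; the scales, asset names, scenarios, and threshold are illustrative assumptions, not taken from any standard:

```python
# A minimal qualitative risk-scoring sketch: risk is modeled as the product of
# the likelihood that a threat exploits a vulnerability and the impact of the
# resulting loss. The 1-5 scales and the example entries are illustrative only.
RISKS = [
    # (asset, threat/vulnerability scenario, likelihood 1-5, impact 1-5)
    ("customer database", "SQL injection against an unpatched web app", 4, 5),
    ("office laptops",    "theft of an unencrypted device",             3, 4),
    ("payroll system",    "flood damage to the server room",            1, 5),
]

def risk_score(likelihood: int, impact: int) -> int:
    """Combine likelihood and impact into a single comparable score."""
    return likelihood * impact

for asset, scenario, likelihood, impact in sorted(
        RISKS, key=lambda r: risk_score(r[2], r[3]), reverse=True):
    score = risk_score(likelihood, impact)
    treatment = "mitigate" if score >= 12 else "accept/monitor"
    print(f"{score:>2}  {asset}: {scenario} -> {treatment}")
```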
TheCertified Information Systems Auditor(CISA) Review Manual 2006definesrisk managementas "the process of identifyingvulnerabilitiesandthreatsto the information resources used by an organization in achieving business objectives, and deciding whatcountermeasures,[100]if any, to take in reducing risk to an acceptable level, based on the value of the information resource to the organization."[101]
There are two things in this definition that may need some clarification. First, theprocessof risk management is an ongoing, iterativeprocess. It must be repeated indefinitely. The business environment is constantly changing and newthreatsandvulnerabilitiesemerge every day.[102]Second, the choice ofcountermeasures(controls) used to manage risks must strike a balance between productivity, cost, effectiveness of the countermeasure, and the value of the informational asset being protected.[103]Furthermore, these processes have limitations as security breaches are generally rare and emerge in a specific context which may not be easily duplicated.[104]Thus, any process and countermeasure should itself be evaluated for vulnerabilities.[105]It is not possible to identify all risks, nor is it possible to eliminate all risk. The remaining risk is called "residual risk".[106]
Arisk assessmentis carried out by a team of people who have knowledge of specific areas of the business.[107]Membership of the team may vary over time as different parts of the business are assessed.[108]The assessment may use a subjective qualitative analysis based on informed opinion, or where reliable dollar figures and historical information is available, the analysis may usequantitativeanalysis.
Research has shown that the most vulnerable point in most information systems is the human user, operator, designer, or other human.[109]TheISO/IEC 27002:2005Code of practice forinformation security managementrecommends the following be examined during a risk assessment:
In broad terms, the risk management process consists of:[110][111]
For any given risk, management can choose to accept the risk based upon the relatively low value of the asset, the relatively low frequency of occurrence, and the relatively low impact on the business.[118]Or, leadership may choose to mitigate the risk by selecting and implementing appropriate control measures to reduce the risk. In some cases, the risk can be transferred to another business by buying insurance or outsourcing to another business.[119]The reality of some risks may be disputed. In such cases leadership may choose to deny the risk.[120]
Selecting and implementing proper security controls will initially help an organization bring down risk to acceptable levels.[121]Control selection should follow and should be based on the risk assessment.[122]Controls can vary in nature, but fundamentally they are ways of protecting the confidentiality, integrity or availability of information.ISO/IEC 27001has defined controls in different areas.[123]Organizations can implement additional controls according to requirement of the organization.[124]ISO/IEC 27002offers a guideline for organizational information security standards.[125]
Defense in depth is a fundamental security philosophy that relies on overlapping security systems designed to maintain protection even if individual components fail. Rather than depending on a single security measure, it combines multiple layers of security controls both in the cloud and at network endpoints. This approach includes combinations like firewalls with intrusion-detection systems, email filtering services with desktop anti-virus, and cloud-based security alongside traditional network defenses.[126]The concept can be implemented through three distinct layers of administrative, logical, and physical controls,[127]or visualized as an onion model with data at the core, surrounded by people, network security, host-based security, and application security layers.[128]The strategy emphasizes that security involves not just technology, but also people and processes working together, with real-time monitoring and response being crucial components.[126]
An important aspect of information security and risk management is recognizing the value of information and defining appropriate procedures and protection requirements for the information.[129]Not all information is equal and so not all information requires the same degree of protection.[130]This requires information to be assigned asecurity classification.[131]The first step in information classification is to identify a member of senior management as the owner of the particular information to be classified. Next, develop a classification policy.[132]The policy should describe the different classification labels, define the criteria for information to be assigned a particular label, and list the requiredsecurity controlsfor each classification.[133]
Some factors that influence which classification information should be assigned include how much value that information has to the organization, how old the information is and whether or not the information has become obsolete.[134]Laws and other regulatory requirements are also important considerations when classifying information.[135]TheInformation Systems Audit and Control Association(ISACA) and itsBusiness Model for Information Securityalso serves as a tool for security professionals to examine security from a systems perspective, creating an environment where security can be managed holistically, allowing actual risks to be addressed.[136]
The type of information security classification labels selected and used will depend on the nature of the organization, with examples being:[133]
All employees in the organization, as well as business partners, must be trained on the classification schema and understand the required security controls and handling procedures for each classification.[139]The classification assigned to a particular information asset should be reviewed periodically to ensure the classification is still appropriate for the information and to ensure the security controls required by the classification are in place and are being followed correctly.[140]
Access to protected information must be restricted to people who are authorized to access the information.[141]The computer programs, and in many cases the computers that process the information, must also be authorized.[142]This requires that mechanisms be in place to control the access to protected information.[142]The sophistication of the access control mechanisms should be in parity with the value of the information being protected; the more sensitive or valuable the information the stronger the control mechanisms need to be.[143]The foundation on which access control mechanisms are built start with identification andauthentication.[144]
Access control is generally considered in three steps: identification,authentication, andauthorization.[145][72]
Identification is an assertion of who someone is or what something is. If a person makes the statement "Hello, my name isJohn Doe" they are making a claim of who they are.[146]However, their claim may or may not be true. Before John Doe can be granted access to protected information it will be necessary to verify that the person claiming to be John Doe really is John Doe.[147]Typically the claim is in the form of a username. By entering that username you are claiming "I am the person the username belongs to".[148]
Authentication is the act of verifying a claim of identity. When John Doe goes into a bank to make a withdrawal, he tells thebank tellerhe is John Doe, a claim of identity.[149]The bank teller asks to see a photo ID, so he hands the teller hisdriver's license.[150]The bank teller checks the license to make sure it has John Doe printed on it and compares the photograph on the license against the person claiming to be John Doe.[151]If the photo and name match the person, then the teller has authenticated that John Doe is who he claimed to be. Similarly, by entering the correct password, the user is providing evidence that he/she is the person the username belongs to.[152]
There are three different types of information that can be used for authentication: something you know (such as a password or PIN), something you have (such as an ID card or security token), and something you are (such as a fingerprint or other biometric).[153][154]
Strong authentication requires providing more than one type of authentication information (two-factor authentication).[160]Theusernameis the most common form of identification on computer systems today and the password is the most common form of authentication.[161]Usernames and passwords have served their purpose, but they are increasingly inadequate.[162]Usernames and passwords are slowly being replaced or supplemented with more sophisticated authentication mechanisms such astime-based one-time password algorithms.[163]
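As an example of the time-based one-time password mechanisms mentioned above, the following is a minimal sketch of the RFC 6238 (TOTP) construction using only the standard library; the shared secret shown is an arbitrary demonstration value:

```python
# A minimal TOTP sketch (RFC 6238 style, HMAC-SHA1, 30-second steps).
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, time_step: int = 30, digits: int = 6) -> str:
    """Derive a time-based one-time password from a shared base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // time_step            # number of elapsed time steps
    msg = struct.pack(">Q", counter)                   # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                         # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The secret would normally be provisioned once, e.g. via a QR code.
print(totp("JBSWY3DPEHPK3PXP"))
```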
After a person, program or computer has successfully been identified and authenticated then it must be determined what informational resources they are permitted to access and what actions they will be allowed to perform (run, view, create, delete, or change).[164]This is calledauthorization. Authorization to access information and other computing services begins with administrative policies and procedures.[165]The policies prescribe what information and computing services can be accessed, by whom, and under what conditions. The access control mechanisms are then configured to enforce these policies.[166]Different computing systems are equipped with different kinds of access control mechanisms. Some may even offer a choice of different access control mechanisms.[167]The access control mechanism a system offers will be based upon one of three approaches to access control, or it may be derived from a combination of the three approaches.[72]
The non-discretionary approach consolidates all access control under a centralized administration.[168]The access to information and other resources is usually based on the individual's function (role) in the organization or the tasks the individual must perform.[169][170]The discretionary approach gives the creator or owner of the information resource the ability to control access to those resources.[168]In the mandatory access control approach, access is granted or denied based upon the security classification assigned to the information resource.[141]
Examples of common access control mechanisms in use today includerole-based access control, available in many advanced database management systems; simplefile permissionsprovided in the UNIX and Windows operating systems;[171]Group Policy Objectsprovided in Windows network systems; andKerberos,RADIUS,TACACS, and the simple access lists used in manyfirewallsandrouters.[172]
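A role-based scheme of the kind mentioned above can be sketched in a few lines; the roles, permissions, and usernames here are illustrative only and do not correspond to any particular product:

```python
# A minimal role-based access control (RBAC) sketch: users are assigned roles,
# and roles are assigned permissions; authorization checks the chain.
ROLE_PERMISSIONS = {
    "teller":  {"account:view"},
    "manager": {"account:view", "account:update", "report:run"},
}

USER_ROLES = {"jdoe": {"teller"}, "asmith": {"manager"}}

def is_authorized(user: str, permission: str) -> bool:
    """Return True if any of the user's roles grants the requested permission."""
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

assert is_authorized("asmith", "report:run")
assert not is_authorized("jdoe", "account:update")
```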
To be effective, policies and other security controls must be enforceable and upheld. Effective policies ensure that people are held accountable for their actions.[173]TheU.S. Treasury's guidelines for systems processing sensitive or proprietary information, for example, states that all failed and successful authentication and access attempts must be logged, and all access to information must leave some type ofaudit trail.[174]
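A minimal sketch of such authentication logging follows; the file name, record format, and field names are assumptions for illustration, not taken from any guideline:

```python
# An audit-trail sketch: every failed and successful authentication attempt is
# written to a log file so that access can later be reviewed.
import logging

logging.basicConfig(
    filename="auth_audit.log",
    format="%(asctime)s %(levelname)s %(message)s",
    level=logging.INFO,
)
audit_log = logging.getLogger("auth.audit")

def record_login_attempt(username: str, success: bool, source_ip: str) -> None:
    """Record one authentication attempt, successful or not."""
    level = logging.INFO if success else logging.WARNING
    audit_log.log(level, "login user=%s success=%s ip=%s", username, success, source_ip)

record_login_attempt("jdoe", True, "192.0.2.10")    # successful attempt
record_login_attempt("jdoe", False, "203.0.113.7")  # failed attempt, logged as a warning
```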
The need-to-know principle also needs to be in effect when access control is applied. This principle gives a person access rights only to what they need in order to perform their job functions.[175]This principle is used in government when dealing with differing clearances.[176]Even though two employees in different departments have atop-secret clearance, they must have a need-to-know in order for information to be exchanged. Within the need-to-know principle, network administrators grant the employee the least amount of privilege to prevent employees from accessing more than what they are supposed to.[177]Need-to-know helps to enforce the confidentiality-integrity-availability triad. Need-to-know directly impacts the confidentiality area of the triad.[178]
Information security usescryptographyto transform usable information into a form that renders it unusable by anyone other than an authorized user; this process is calledencryption.[179]Information that has been encrypted (rendered unusable) can be transformed back into its original usable form by an authorized user who possesses thecryptographic key, through the process of decryption.[180]Cryptography is used in information security to protect information from unauthorized or accidental disclosure while theinformationis in transit (either electronically or physically) and while information is in storage.[72]
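For illustration, the following is a minimal encryption/decryption round trip, assuming the third-party Python cryptography package (its Fernet recipe for authenticated symmetric encryption); the plaintext is an arbitrary example:

```python
# A minimal symmetric-encryption sketch: only a holder of the key can recover
# or successfully decrypt the protected data.
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()        # the cryptographic key; it must itself be protected
cipher = Fernet(key)

token = cipher.encrypt(b"quarterly results: confidential")  # encryption
plaintext = cipher.decrypt(token)                           # decryption by a key holder
assert plaintext == b"quarterly results: confidential"

try:
    Fernet(Fernet.generate_key()).decrypt(token)            # wrong key
except InvalidToken:
    print("decryption refused without the correct key")
```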
Cryptography provides information security with other useful applications as well, including improved authentication methods, message digests, digital signatures,non-repudiation, and encrypted network communications.[181]Older, less secure applications such asTelnetandFile Transfer Protocol(FTP) are slowly being replaced with more secure applications such asSecure Shell(SSH) that use encrypted network communications.[182]Wireless communications can be encrypted using protocols such asWPA/WPA2or the older (and less secure)WEP. Wired communications (such asITU‑TG.hn) are secured usingAESfor encryption andX.1035for authentication and key exchange.[183]Software applications such asGnuPGorPGPcan be used to encrypt data files and email.[184]
Cryptography can introduce security problems when it is not implemented correctly.[185]Cryptographic solutions need to be implemented using industry-accepted solutions that have undergone rigorous peer review by independent experts in cryptography.[186]Thelength and strengthof the encryption key is also an important consideration.[187]A key that isweakor too short will produceweak encryption.[187]The keys used for encryption and decryption must be protected with the same degree of rigor as any other confidential information.[188]They must be protected from unauthorized disclosure and destruction, and they must be available when needed.[citation needed]Public key infrastructure(PKI) solutions address many of the problems that surroundkey management.[72]
U.S.Federal Sentencing Guidelinesnow make it possible to hold corporate officers liable for failing to exercise due care and due diligence in the management of their information systems.[189]
In the field of information security, Harris[190]offers the following definitions of due care and due diligence:
"Due care are steps that are taken to show that a company has taken responsibility for the activities that take place within the corporation and has taken the necessary steps to help protect the company, its resources, and employees[191]."And, [Due diligence are the]"continual activities that make sure the protection mechanisms are continually maintained and operational."[192]
Attention should be paid to two important points in these definitions.[193][194]First, in due care, steps are taken to show; this means that the steps can be verified, measured, or even produce tangible artifacts.[195][196]Second, in due diligence, there are continual activities; this means that people are actually doing things to monitor and maintain the protection mechanisms, and these activities are ongoing.[197]
Organizations have a responsibility to practice duty of care when applying information security. The Duty of Care Risk Analysis Standard (DoCRA)[198]provides principles and practices for evaluating risk.[199]It considers all parties that could be affected by those risks.[200]DoCRA helps evaluate whether safeguards are appropriate in protecting others from harm while presenting a reasonable burden.[201]With increased data breach litigation, companies must balance security controls, compliance, and their mission.[202]
Computer security incident management is a specialized form of incident management focused on monitoring, detecting, and responding to security events on computers and networks in a predictable way.[203]
Organizations implement this through incident response plans (IRPs) that are activated when security breaches are detected.[204]These plans typically involve an incident response team (IRT) with specialized skills in areas like penetration testing, computer forensics, and network security.[205]
Change management is a formal process for directing and controlling alterations to the information processing environment.[206][207]This includes alterations to desktop computers, the network, servers, and software.[208]The objectives of change management are to reduce the risks posed by changes to the information processing environment and improve the stability and reliability of the processing environment as changes are made.[209]It is not the objective of change management to prevent or hinder necessary changes from being implemented.[210][211]
Any change to the information processing environment introduces an element of risk.[212]Even apparently simple changes can have unexpected effects.[213]One of management's many responsibilities is the management of risk.[214][215]Change management is a tool for managing the risks introduced by changes to the information processing environment.[216]Part of the change management process ensures that changes are not implemented at inopportune times when they may disrupt critical business processes or interfere with other changes being implemented.[217]
Not every change needs to be managed.[218][219]Some kinds of changes are a part of the everyday routine of information processing and adhere to a predefined procedure, which reduces the overall level of risk to the processing environment.[220]Creating a new user account or deploying a new desktop computer are examples of changes that do not generally require change management.[221]However, relocating user file shares, or upgrading the Email server pose a much higher level of risk to the processing environment and are not a normal everyday activity.[222]The critical first steps in change management are (a) defining change (and communicating that definition) and (b) defining the scope of the change system.[223]
Change management is usually overseen by a change review board composed of representatives from key business areas,[224]security, networking, systems administrators, database administration, application developers, desktop support, and the help desk.[225]The tasks of the change review board can be facilitated with the use of an automated workflow application.[226]The responsibility of the change review board is to ensure the organization's documented change management procedures are followed.[227]The change management process is as follows:[228]
Change management procedures that are simple to follow and easy to use can greatly reduce the overall risks created when changes are made to the information processing environment.[260]Good change management procedures improve the overall quality and success of changes as they are implemented.[261]This is accomplished through planning, peer review, documentation, and communication.[262]
ISO/IEC 20000, The Visible OPS Handbook: Implementing ITIL in 4 Practical and Auditable Steps[263](Full book summary),[264]andITILall provide valuable guidance on implementing an efficient and effective change management program for information security.[265]
Business continuity management (BCM) concerns arrangements aiming to protect an organization's critical business functions from interruption due to incidents, or at least minimize the effects.[266][267]BCM is essential to any organization to keep technology and business in line with current threats to the continuation of business as usual.[268]The BCM should be included in an organization'srisk analysisplan to ensure that all of the necessary business functions have what they need to keep going in the event of any type of threat to any business function.[269]
It encompasses:
Whereas BCM takes a broad approach to minimizing disaster-related risks by reducing both the probability and the severity of incidents, adisaster recovery plan(DRP) focuses specifically on resuming business operations as quickly as possible after a disaster.[279]A disaster recovery plan, invoked soon after a disaster occurs, lays out the steps necessary to recover criticalinformation and communications technology(ICT) infrastructure.[280]Disaster recovery planning includes establishing a planning group, performing risk assessment, establishing priorities, developing recovery strategies, preparing inventories and documentation of the plan, developing verification criteria and procedure, and lastly implementing the plan.[281]
Below is a partial listing of governmental laws and regulations in various parts of the world that have, had, or will have, a significant effect on data processing and information security.[282][283]Important industry sector regulations have also been included when they have a significant impact on information security.[282]
The US Department of Defense (DoD) issued DoD Directive 8570 in 2004, supplemented by DoD Directive 8140, requiring all DoD employees and all DoD contract personnel involved in information assurance roles and activities to earn and maintain various industry Information Technology (IT) certifications in an effort to ensure that all DoD personnel involved in network infrastructure defense have minimum levels of IT industry recognized knowledge, skills and abilities (KSA). Andersson and Reimers (2019) report these certifications range from CompTIA's A+ and Security+ through the ICS2.org's CISSP, etc.[318]
Describing more than simply how security aware employees are, information security culture is the ideas, customs, and social behaviors of an organization that impact information security in both positive and negative ways.[319]Cultural concepts can help different segments of the organization work effectively or work against effectiveness towards information security within an organization. The way employees think and feel about security and the actions they take can have a big impact on information security in organizations. Roer & Petric (2017) identify seven core dimensions of information security culture in organizations:[320]
Andersson and Reimers (2014) found that employees often do not see themselves as part of the organization's information security "effort" and often take actions that ignore organizational information security best interests.[322]Research shows information security culture needs to be improved continuously. InInformation Security Culture from Analysis to Change, the authors commented, "It's a never ending process, a cycle of evaluation and change or maintenance." To manage the information security culture, five steps should be taken: pre-evaluation, strategic planning, operative planning, implementation, and post-evaluation.[323]
|
https://en.wikipedia.org/wiki/Information_security
|
Acyclic redundancy check(CRC) is anerror-detecting codecommonly used in digitalnetworksand storage devices to detect accidental changes to digital data. Blocks of data entering these systems get a shortcheck valueattached, based on the remainder of apolynomial divisionof their contents. On retrieval, the calculation is repeated and, in the event the check values do not match, corrective action can be taken against data corruption. CRCs can be used forerror correction(seebitfilters).[1]
CRCs are so called because thecheck(data verification) value is aredundancy(it expands the message without addinginformation) and thealgorithmis based oncycliccodes. CRCs are popular because they are simple to implement in binaryhardware, easy to analyze mathematically, and particularly good at detecting common errors caused bynoisein transmission channels. Because the check value has a fixed length, thefunctionthat generates it is occasionally used as ahash function.
CRCs are based on the theory ofcyclicerror-correcting codes. The use ofsystematiccyclic codes, which encode messages by adding a fixed-length check value, for the purpose of error detection in communication networks, was first proposed byW. Wesley Petersonin 1961.[2]Cyclic codes are not only simple to implement but have the benefit of being particularly well suited for the detection ofburst errors: contiguous sequences of erroneous data symbols in messages. This is important because burst errors are common transmission errors in manycommunication channels, including magnetic and optical storage devices. Typically ann-bit CRC applied to a data block of arbitrary length will detect any single error burst not longer thannbits, and the fraction of all longer error bursts that it will detect is approximately(1 − 2−n).
Specification of a CRC code requires definition of a so-calledgenerator polynomial. This polynomial becomes thedivisorin apolynomial long division, which takes the message as thedividendand in which thequotientis discarded and theremainderbecomes the result. The important caveat is that the polynomialcoefficientsare calculated according to the arithmetic of afinite field, so the addition operation can always be performed bitwise-parallel (there is no carry between digits).
In practice, all commonly used CRCs employ the finite field of two elements,GF(2). The two elements are usually called 0 and 1, comfortably matching computer architecture.
A CRC is called ann-bit CRC when its check value isnbits long. For a givenn, multiple CRCs are possible, each with a different polynomial. Such a polynomial has highest degreen, which means it hasn+ 1terms. In other words, the polynomial has a length ofn+ 1; its encoding requiresn+ 1bits. Note that most polynomial specifications either drop theMSborLSb, since they are always 1. The CRC and associated polynomial typically have a name of the form CRC-n-XXX as in thetablebelow.
The simplest error-detection system, theparity bit, is in fact a 1-bit CRC: it uses the generator polynomial x + 1 (two terms),[3]and has the name CRC-1.
A CRC-enabled device calculates a short, fixed-length binary sequence, known as thecheck valueorCRC, for each block of data to be sent or stored and appends it to the data, forming acodeword.
When a codeword is received or read, the device either compares its check value with one freshly calculated from the data block, or equivalently, performs a CRC on the whole codeword and compares the resulting check value with an expectedresidueconstant.
If the CRC values do not match, then the block contains a data error.
The device may take corrective action, such as rereading the block or requesting that it be sent again. Otherwise, the data is assumed to be error-free (though, with some small probability, it may contain undetected errors; this is inherent in the nature of error-checking).[4]
CRCs are specifically designed to protect against common types of errors on communication channels, where they can provide quick and reasonable assurance of theintegrityof messages delivered. However, they are not suitable for protecting against intentional alteration of data.
Firstly, as there is no authentication, an attacker can edit a message and recompute the CRC without the substitution being detected. When stored alongside the data, CRCs and cryptographic hash functions by themselves do not protect againstintentionalmodification of data. Any application that requires protection against such attacks must use cryptographic authentication mechanisms, such asmessage authentication codesordigital signatures(which are commonly based oncryptographic hashfunctions).
Secondly, unlike cryptographic hash functions, CRC is an easily reversible function, which makes it unsuitable for use in digital signatures.[5]
Thirdly, CRC satisfies a relation similar to that of alinear function(or more accurately, anaffine function):[6]
CRC(x ⊕ y) = CRC(x) ⊕ CRC(y) ⊕ c
where c depends on the length of x and y. This can also be stated as follows, where x, y and z have the same length:
CRC(x ⊕ y ⊕ z) = CRC(x) ⊕ CRC(y) ⊕ CRC(z);
as a result, even if the CRC is encrypted with astream cipherthat usesXORas its combining operation (ormodeofblock cipherwhich effectively turns it into a stream cipher, such as OFB or CFB), both the message and the associated CRC can be manipulated without knowledge of the encryption key; this was one of the well-known design flaws of theWired Equivalent Privacy(WEP) protocol.[7]
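This property can be checked directly against a standard CRC-32 implementation; the sketch below uses Python's zlib.crc32, and the byte strings are arbitrary equal-length examples:

```python
# Demonstrating the affine/linear CRC property with zlib.crc32 (CRC-32).
import zlib

x = b"pay  10 dollars"
y = b"pay 999 dollars"
z = bytes(len(x))                       # the all-zero string of the same length

# CRC(x XOR y) = CRC(x) XOR CRC(y) XOR c, where c here equals the CRC of the
# all-zero string of that length (it absorbs the initial/final XOR constants).
xor_xy = bytes(a ^ b for a, b in zip(x, y))
c = zlib.crc32(z)
assert zlib.crc32(xor_xy) == zlib.crc32(x) ^ zlib.crc32(y) ^ c

# Equivalently, for equal-length x, y, z:
# CRC(x XOR y XOR z) = CRC(x) XOR CRC(y) XOR CRC(z)
xor_xyz = bytes(a ^ b ^ d for a, b, d in zip(x, y, z))
assert zlib.crc32(xor_xyz) == zlib.crc32(x) ^ zlib.crc32(y) ^ zlib.crc32(z)
```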
To compute ann-bit binary CRC, line the bits representing the input in a row, and position the (n+ 1)-bit pattern representing the CRC's divisor (called a "polynomial") underneath the left end of the row.
In this example, we shall encode 14 bits of message with a 3-bit CRC, with a polynomial x^3 + x + 1. The polynomial is written in binary as the coefficients; a 3rd-degree polynomial has 4 coefficients (1x^3 + 0x^2 + 1x + 1). In this case, the coefficients are 1, 0, 1 and 1. The result of the calculation is 3 bits long, which is why it is called a 3-bit CRC. However, you need 4 bits to explicitly state the polynomial.
Start with the message to be encoded:
This is first padded with zeros corresponding to the bit lengthnof the CRC. This is done so that the resulting code word is insystematicform. Here is the first calculation for computing a 3-bit CRC:
The algorithm acts on the bits directly above the divisor in each step. The result for that iteration is the bitwise XOR of the polynomial divisor with the bits above it. The bits not above the divisor are simply copied directly below for that step. The divisor is then shifted right to align with the highest remaining 1 bit in the input, and the process is repeated until the divisor reaches the right-hand end of the input row. Here is the entire calculation:
Since the leftmost divisor bit zeroed every input bit it touched, when this process ends the only bits in the input row that can be nonzero are the n bits at the right-hand end of the row. Thesenbits are the remainder of the division step, and will also be the value of the CRC function (unless the chosen CRC specification calls for some postprocessing).
The validity of a received message can easily be verified by performing the above calculation again, this time with the check value added instead of zeroes. The remainder should equal zero if there are no detectable errors.
The followingPythoncode outlines a function which will return the initial CRC remainder for a chosen input and polynomial, with either 1 or 0 as the initial padding. Note that this code works with string inputs rather than raw numbers:
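The listing itself is not reproduced above; the following is a minimal sketch consistent with that description. The function names and the 14-bit example message are illustrative choices, not taken from the original:

```python
def crc_remainder(input_bitstring: str, polynomial_bitstring: str,
                  initial_filler: str) -> str:
    """Return the CRC remainder of a bit string for a chosen polynomial.

    initial_filler ('0' or '1') is appended n times, where n is the degree
    of the polynomial; '0' gives the ordinary CRC remainder.
    """
    polynomial_bitstring = polynomial_bitstring.lstrip('0')
    len_input = len(input_bitstring)
    padding = (len(polynomial_bitstring) - 1) * initial_filler
    bits = list(input_bitstring + padding)
    while '1' in ''.join(bits)[:len_input]:
        cur_shift = ''.join(bits).index('1')          # align divisor with leftmost 1
        for i in range(len(polynomial_bitstring)):    # bitwise XOR with the divisor
            bits[cur_shift + i] = str(
                int(polynomial_bitstring[i] != bits[cur_shift + i]))
    return ''.join(bits)[len_input:]                  # the n-bit remainder


def crc_check(input_bitstring: str, polynomial_bitstring: str,
              check_value: str) -> bool:
    """Verify a codeword: with the check value appended, the remainder is zero."""
    polynomial_bitstring = polynomial_bitstring.lstrip('0')
    len_input = len(input_bitstring)
    bits = list(input_bitstring + check_value)
    while '1' in ''.join(bits)[:len_input]:
        cur_shift = ''.join(bits).index('1')
        for i in range(len(polynomial_bitstring)):
            bits[cur_shift + i] = str(
                int(polynomial_bitstring[i] != bits[cur_shift + i]))
    return '1' not in ''.join(bits)[len_input:]


if __name__ == "__main__":
    # 3-bit CRC with polynomial x^3 + x + 1 ("1011"), as in the text;
    # the 14-bit message is an illustrative choice.
    message = "11010011101100"
    remainder = crc_remainder(message, "1011", "0")
    print(remainder)                                  # '100'
    print(crc_check(message, "1011", remainder))      # True
```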
Mathematical analysis of this division-like process reveals how to select a divisor that guarantees good error-detection properties. In this analysis, the digits of the bit strings are taken as the coefficients of a polynomial in some variablex—coefficients that are elements of the finite fieldGF(2)(the integers modulo 2, i.e. either a zero or a one), instead of more familiar numbers. The set of binary polynomials is a mathematicalring.
The selection of the generator polynomial is the most important part of implementing the CRC algorithm. The polynomial must be chosen to maximize the error-detecting capabilities while minimizing overall collision probabilities.
The most important attribute of the polynomial is its length (largest degree(exponent) +1 of any one term in the polynomial), because of its direct influence on the length of the computed check value.
The most commonly used polynomial lengths are 9 bits (CRC-8), 17 bits (CRC-16), 33 bits (CRC-32), and 65 bits (CRC-64).[3]
A CRC is called ann-bit CRC when its check value isn-bits. For a givenn, multiple CRCs are possible, each with a different polynomial. Such a polynomial has highest degreen, and hencen+ 1terms (the polynomial has a length ofn+ 1). The remainder has lengthn. The CRC has a name of the form CRC-n-XXX.
The design of the CRC polynomial depends on the maximum total length of the block to be protected (data + CRC bits), the desired error protection features, and the type of resources for implementing the CRC, as well as the desired performance. A common misconception is that the "best" CRC polynomials are derived from eitherirreducible polynomialsor irreducible polynomials times the factor1 +x, which adds to the code the ability to detect all errors affecting an odd number of bits.[8]In reality, all the factors described above should enter into the selection of the polynomial and may lead to a reducible polynomial. However, choosing a reducible polynomial will result in a certain proportion of missed errors, due to the quotient ring havingzero divisors.
The advantage of choosing aprimitive polynomialas the generator for a CRC code is that the resulting code has maximal total block length in the sense that all 1-bit errors within that block length have different remainders (also calledsyndromes) and therefore, since the remainder is a linear function of the block, the code can detect all 2-bit errors within that block length. If r is the degree of the primitive generator polynomial, then the maximal total block length is 2^r − 1, and the associated code is able to detect any single-bit or double-bit errors.[9]However, if we use the generator polynomial g(x) = p(x)(1 + x), where p(x) is a primitive polynomial of degree r − 1, then the maximal total block length is 2^(r−1) − 1, and the code is able to detect single, double, triple and any odd number of errors.
A polynomial g(x) that admits other factorizations may then be chosen so as to balance the maximal total block length with a desired error detection power. TheBCH codesare a powerful class of such polynomials. They subsume the two examples above. Regardless of the reducibility properties of a generator polynomial of degree r, if it includes the "+1" term, the code will be able to detect error patterns that are confined to a window of r contiguous bits. These patterns are called "error bursts".
The concept of the CRC as an error-detecting code gets complicated when an implementer or standards committee uses it to design a practical system. Here are some of the complications:
These complications mean that there are three common ways to express a polynomial as an integer: the first two, which are mirror images in binary, are the constants found in code; the third is the number found in Koopman's papers. In each case, one term is omitted. So the polynomial x^4 + x + 1 (binary 10011) may be transcribed as 0x3 (MSB-first, omitting the high-order x^4 term), 0xC (LSB-first or reversed, omitting the x^4 term), or 0x9 (Koopman notation, omitting the low-order 1 term).
In the table below they are shown as:
CRCs inproprietary protocolsmight beobfuscatedby using a non-trivial initial value and a final XOR, but these techniques do not add cryptographic strength to the algorithm and can bereverse engineeredusing straightforward methods.[10]
Numerous varieties of cyclic redundancy checks have been incorporated intotechnical standards. By no means does one algorithm, or one of each degree, suit every purpose; Koopman and Chakravarty recommend selecting a polynomial according to the application requirements and the expected distribution of message lengths.[11]The number of distinct CRCs in use has confused developers, a situation which authors have sought to address.[8]There are three polynomials reported for CRC-12,[11]twenty-two conflicting definitions of CRC-16, and seven of CRC-32.[12]
The polynomials commonly applied are not the most efficient ones possible. Since 1993, Koopman, Castagnoli and others have surveyed the space of polynomials between 3 and 64 bits in size,[11][13][14][15]finding examples that have much better performance (in terms ofHamming distancefor a given message size) than the polynomials of earlier protocols, and publishing the best of these with the aim of improving the error detection capacity of future standards.[14]In particular,iSCSIandSCTPhave adopted one of the findings of this research, the CRC-32C (Castagnoli) polynomial.
The design of the 32-bit polynomial most commonly used by standards bodies, CRC-32-IEEE, was the result of a joint effort for theRome Laboratoryand the Air Force Electronic Systems Division by Joseph Hammond, James Brown and Shyan-Shiang Liu of theGeorgia Institute of Technologyand Kenneth Brayer of theMitre Corporation. The earliest known appearances of the 32-bit polynomial were in their 1975 publications: Technical Report 2956 by Brayer for Mitre, published in January and released for public dissemination throughDTICin August,[16]and Hammond, Brown and Liu's report for the Rome Laboratory, published in May.[17]Both reports contained contributions from the other team. During December 1975, Brayer and Hammond presented their work in a paper at the IEEE National Telecommunications Conference: the IEEE CRC-32 polynomial is the generating polynomial of aHamming codeand was selected for its error detection performance.[18]Even so, the Castagnoli CRC-32C polynomial used in iSCSI or SCTP matches its performance on messages from 58 bits to 131 kbits, and outperforms it in several size ranges including the two most common sizes of Internet packet.[14]TheITU-TG.hnstandard also uses CRC-32C to detect errors in the payload (although it uses CRC-16-CCITT forPHY headers).
CRC-32C computation is implemented in hardware as an operation (CRC32) ofSSE4.2instruction set, first introduced inIntelprocessors'Nehalemmicroarchitecture.ARMAArch64architecture also provides hardware acceleration for both CRC-32 and CRC-32C operations.
The table below lists only the polynomials of the various algorithms in use. Variations of a particular protocol can impose pre-inversion, post-inversion and reversed bit ordering as described above. For example, the CRC32 used in Gzip and Bzip2 uses the same polynomial, but Gzip employs reversed bit ordering, while Bzip2 does not.[12]Note that even parity polynomials inGF(2)with degree greater than 1 are never primitive. Even parity polynomials marked as primitive in this table represent a primitive polynomial multiplied by (x + 1). The most significant bit of a polynomial is always 1, and is not shown in the hex representations.
|
https://en.wikipedia.org/wiki/Cyclic_redundancy_check
|
TheData Encryption Standard(DES/ˌdiːˌiːˈɛs,dɛz/) is asymmetric-key algorithmfor theencryptionof digital data. Although its short key length of 56 bits makes it too insecure for modern applications, it has been highly influential in the advancement ofcryptography.
Developed in the early 1970s atIBMand based on an earlier design byHorst Feistel, the algorithm was submitted to theNational Bureau of Standards(NBS) following the agency's invitation to propose a candidate for the protection of sensitive, unclassified electronic government data. In 1976, after consultation with theNational Security Agency(NSA), the NBS selected a slightly modified version (strengthened againstdifferential cryptanalysis, but weakened againstbrute-force attacks), which was published as an officialFederal Information Processing Standard(FIPS) for the United States in 1977.[2]
The publication of an NSA-approved encryption standard led to its quick international adoption and widespread academic scrutiny. Controversies arose fromclassifieddesign elements, a relatively shortkey lengthof thesymmetric-keyblock cipherdesign, and the involvement of the NSA, raising suspicions about abackdoor. TheS-boxesthat had prompted those suspicions were designed by the NSA to address a vulnerability they secretly knew (differential cryptanalysis). However, the NSA also ensured that the key size was drastically reduced so that they could break the cipher by brute force attack.[2][failed verification]The intense academic scrutiny the algorithm received over time led to the modern understanding of block ciphers and theircryptanalysis.
DES is insecure due to the relatively short56-bit key size. In January 1999,distributed.netand theElectronic Frontier Foundationcollaborated to publicly break a DES key in 22 hours and 15 minutes (see§ Chronology). There are also some analytical results which demonstrate theoretical weaknesses in the cipher, although they are infeasible in practice[citation needed]. DES has been withdrawn as a standard by theNIST.[3]Later, the variantTriple DESwas developed to increase the security level, but it is considered insecure today as well. DES has been superseded by theAdvanced Encryption Standard(AES).
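For illustration, the sketch below encrypts a single block with single DES and shows the size of the 56-bit keyspace that made the brute-force attacks above practical; it assumes the third-party PyCryptodome package, and the key and message are arbitrary examples:

```python
# A brief sketch of single-DES encryption and the size of its keyspace.
from Crypto.Cipher import DES

key = bytes.fromhex("0123456789ABCDEF")        # 8 bytes, of which 56 bits are effective
cipher = DES.new(key, DES.MODE_ECB)
ciphertext = cipher.encrypt(b"8bytemsg")       # DES operates on 64-bit (8-byte) blocks
print(ciphertext.hex())

keyspace = 2 ** 56
print(f"{keyspace:,} possible keys")           # about 7.2e16; at the hundreds of
# billions of trials per second reached by the 1999 distributed.net/EFF effort,
# exhausting this space takes on the order of days in the worst case.
```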
Some documents distinguish between the DES standard and its algorithm, referring to the algorithm as theDEA(Data Encryption Algorithm).
The origins of DES date to 1972, when aNational Bureau of Standardsstudy of US governmentcomputer securityidentified a need for a government-wide standard for encrypting unclassified, sensitive information.[4]
Around the same time, engineerMohamed Atallain 1972 foundedAtalla Corporationand developed the firsthardware security module(HSM), the so-called "Atalla Box" which was commercialized in 1973. It protected offline devices with a securePINgenerating key, and was a commercial success. Banks and credit card companies were fearful that Atalla would dominate the market, which spurred the development of an international encryption standard.[3]Atalla was an early competitor toIBMin the banking market, and was cited as an influence by IBM employees who worked on the DES standard.[5]TheIBM 3624later adopted a similar PIN verification system to the earlier Atalla system.[6]
On 15 May 1973, after consulting with the NSA, NBS solicited proposals for a cipher that would meet rigorous design criteria. None of the submissions was suitable. A second request was issued on 27 August 1974. This time,IBMsubmitted a candidate which was deemed acceptable—a cipher developed during the period 1973–1974 based on an earlier algorithm,Horst Feistel'sLucifercipher. The team at IBM involved in cipher design and analysis included Feistel,Walter Tuchman,Don Coppersmith, Alan Konheim, Carl Meyer, Mike Matyas,Roy Adler,Edna Grossman, Bill Notz, Lynn Smith, andBryant Tuckerman.
On 17 March 1975, the proposed DES was published in theFederal Register. Public comments were requested, and in the following year two open workshops were held to discuss the proposed standard. There was criticism received frompublic-key cryptographypioneersMartin HellmanandWhitfield Diffie,[1]citing a shortenedkey lengthand the mysterious "S-boxes" as evidence of improper interference from the NSA. The suspicion was that the algorithm had been covertly weakened by the intelligence agency so that they—but no one else—could easily read encrypted messages.[7]Alan Konheim (one of the designers of DES) commented, "We sent the S-boxes off to Washington. They came back and were all different."[8]TheUnited States Senate Select Committee on Intelligencereviewed the NSA's actions to determine whether there had been any improper involvement. In the unclassified summary of their findings, published in 1978, the Committee wrote:
In the development of DES, NSA convincedIBMthat a reduced key size was sufficient; indirectly assisted in the development of the S-box structures; and certified that the final DES algorithm was, to the best of their knowledge, free from any statistical or mathematical weakness.[9]
However, it also found that
NSA did not tamper with the design of the algorithm in any way. IBM invented and designed the algorithm, made all pertinent decisions regarding it, and concurred that the agreed upon key size was more than adequate for all commercial applications for which the DES was intended.[10]
Another member of the DES team, Walter Tuchman, stated "We developed the DES algorithm entirely within IBM using IBMers. The NSA did not dictate a single wire!"[11]In contrast, a declassified NSA book on cryptologic history states:
In 1973 NBS solicited private industry for a data encryption standard (DES). The first offerings were disappointing, so NSA began working on its own algorithm. Then Howard Rosenblum, deputy director for research and engineering, discovered that Walter Tuchman of IBM was working on a modification to Lucifer for general use. NSA gave Tuchman a clearance and brought him in to work jointly with the Agency on his Lucifer modification."[12]
and
NSA worked closely with IBM to strengthen the algorithm against all except brute-force attacks and to strengthen substitution tables, called S-boxes. Conversely, NSA tried to convince IBM to reduce the length of the key from 64 to 48 bits. Ultimately they compromised on a 56-bit key.[13][14]
Some of the suspicions about hidden weaknesses in the S-boxes were allayed in 1990, with the independent discovery and open publication byEli BihamandAdi Shamirofdifferential cryptanalysis, a general method for breaking block ciphers. The S-boxes of DES were much more resistant to the attack than if they had been chosen at random, strongly suggesting that IBM knew about the technique in the 1970s. This was indeed the case; in 1994, Don Coppersmith published some of the original design criteria for the S-boxes.[15]According toSteven Levy, IBM Watson researchers discovered differential cryptanalytic attacks in 1974 and were asked by the NSA to keep the technique secret.[16]Coppersmith explains IBM's secrecy decision by saying, "that was because [differential cryptanalysis] can be a very powerful tool, used against many schemes, and there was concern that such information in the public domain could adversely affect national security." Levy quotes Walter Tuchman: "[t]hey asked us to stamp all our documents confidential... We actually put a number on each one and locked them up in safes, because they were considered U.S. government classified. They said do it. So I did it".[16]Bruce Schneier observed that "It took the academic community two decades to figure out that the NSA 'tweaks' actually improved the security of DES."[17]
Despite the criticisms, DES was approved as a federal standard in November 1976, and published on 15 January 1977 asFIPSPUB 46, authorized for use on all unclassified data. It was subsequently reaffirmed as the standard in 1983, 1988 (revised as FIPS-46-1), 1993 (FIPS-46-2), and again in 1999 (FIPS-46-3), the latter prescribing "Triple DES" (see below). On 26 May 2002, DES was finally superseded by the Advanced Encryption Standard (AES), followinga public competition. On 19 May 2005, FIPS 46-3 was officially withdrawn, butNISThas approvedTriple DESthrough the year 2030 for sensitive government information.[18]
The algorithm is also specified inANSIX3.92 (Today X3 is known asINCITSand ANSI X3.92 as ANSIINCITS92),[19]NIST SP 800-67[18]and ISO/IEC 18033-3[20](as a component ofTDEA).
Another theoretical attack, linear cryptanalysis, was published in 1994, but it was theElectronic Frontier Foundation'sDES crackerin 1998 that demonstrated that DES could be attacked very practically, and highlighted the need for a replacement algorithm. These and other methods ofcryptanalysisare discussed in more detail later in this article.
The introduction of DES is considered to have been a catalyst for the academic study of cryptography, particularly of methods to crack block ciphers. A NIST retrospective on DES credits the standard with jump-starting the nonmilitary study and development of encryption algorithms.
DES is the archetypalblock cipher—analgorithmthat takes a fixed-length string ofplaintextbits and transforms it through a series of complicated operations into anotherciphertextbitstring of the same length. In the case of DES, theblock sizeis 64 bits. DES also uses akeyto customize the transformation, so that decryption can supposedly only be performed by those who know the particular key used to encrypt. The key ostensibly consists of 64 bits; however, only 56 of these are actually used by the algorithm. Eight bits are used solely for checkingparity, and are thereafter discarded. Hence the effectivekey lengthis 56 bits.
The key is nominally stored or transmitted as 8bytes, each with odd parity. According to ANSI X3.92-1981 (now known as ANSIINCITS92–1981), section 3.5:
One bit in each 8-bit byte of theKEYmay be utilized for error detection in key generation, distribution, and storage. Bits 8, 16,..., 64 are for use in ensuring that each byte is of odd parity.
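As an illustration of this parity convention (a minimal sketch, not part of the standard's text), the following Python check verifies that each byte of an 8-byte key has an odd number of 1 bits:

```python
def has_odd_parity(key: bytes) -> bool:
    """Check that every byte of an 8-byte DES key has an odd number of 1 bits."""
    if len(key) != 8:
        raise ValueError("DES keys are 8 bytes (64 bits)")
    return all(bin(b).count("1") % 2 == 1 for b in key)

# Example: 0x01 has one set bit (odd parity), 0x03 has two (even parity).
print(has_odd_parity(bytes([0x01] * 8)))  # True
print(has_odd_parity(bytes([0x03] * 8)))  # False
```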
Like other block ciphers, DES by itself is not a secure means of encryption, but must instead be used in amode of operation. FIPS-81 specifies several modes for use with DES.[27]Further comments on the usage of DES are contained in FIPS-74.[28]
Decryption uses the same structure as encryption, but with the keys used in reverse order. (This has the advantage that the same hardware or software can be used in both directions.)
The algorithm's overall structure is shown in Figure 1: there are 16 identical stages of processing, termedrounds. There is also an initial and finalpermutation, termedIPandFP, which areinverses(IP "undoes" the action of FP, and vice versa). IP and FP have no cryptographic significance, but were included in order to facilitate loading blocks in and out of mid-1970s 8-bit based hardware.[29]
Before the main rounds, the block is divided into two 32-bit halves and processed alternately; this criss-crossing is known as theFeistel scheme. The Feistel structure ensures that decryption and encryption are very similar processes—the only difference is that the subkeys are applied in the reverse order when decrypting. The rest of the algorithm is identical. This greatly simplifies implementation, particularly in hardware, as there is no need for separate encryption and decryption algorithms.
The ⊕ symbol denotes theexclusive-OR(XOR) operation. TheF-functionscrambles half a block together with some of the key. The output from the F-function is then combined with the other half of the block, and the halves are swapped before the next round. After the final round, the halves are swapped; this is a feature of the Feistel structure which makes encryption and decryption similar processes.
The F-function, depicted in Figure 2, operates on half a block (32 bits) at a time and consists of four stages: expansion of the 32-bit half-block to 48 bits (the E-expansion), key mixing by XORing the expanded half with a 48-bit subkey, substitution through eightS-boxesthat each map six input bits to four output bits, and a finalpermutationof the resulting 32 bits by the P-box.
The alternation of substitution from the S-boxes, and permutation of bits from the P-box and E-expansion provides so-called "confusion and diffusion" respectively, a concept identified byClaude Shannonin the 1940s as a necessary condition for a secure yet practical cipher.
Figure 3 illustrates thekey schedulefor encryption—the algorithm which generates the subkeys. Initially, 56 bits of the key are selected from the initial 64 byPermuted Choice 1(PC-1)—the remaining eight bits are either discarded or used asparitycheck bits. The 56 bits are then divided into two 28-bit halves; each half is thereafter treated separately. In successive rounds, both halves are rotated left by one or two bits (specified for each round), and then 48 subkey bits are selected byPermuted Choice 2(PC-2)—24 bits from the left half, and 24 from the right. The rotations (denoted by "<<<" in the diagram) mean that a different set of bits is used in each subkey; each bit is used in approximately 14 out of the 16 subkeys.
The key schedule for decryption is similar—the subkeys are in reverse order compared to encryption. Apart from that change, the process is the same as for encryption. The same 28 bits are passed to all rotation boxes.
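A minimal Python sketch of this key schedule follows. The PC-1 and PC-2 selection tables (lists of 1-based bit positions) are assumed to be supplied exactly as defined in the standard; the per-round rotation amounts shown are the standard ones (one position in rounds 1, 2, 9 and 16, two positions otherwise).

```python
ROTATIONS = [1, 1, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 1]  # left shifts per round

def permute(bits, table):
    """Select bits from `bits` in the order given by `table` (1-based positions)."""
    return [bits[i - 1] for i in table]

def rotate_left(half, n):
    return half[n:] + half[:n]

def key_schedule(key_bits, PC1, PC2):
    """Derive the sixteen 48-bit round subkeys from a 64-bit key (list of 0/1 values)."""
    permuted = permute(key_bits, PC1)        # keep 56 bits, dropping the parity bits
    c, d = permuted[:28], permuted[28:]      # two 28-bit halves
    subkeys = []
    for shift in ROTATIONS:
        c, d = rotate_left(c, shift), rotate_left(d, shift)
        subkeys.append(permute(c + d, PC2))  # 48-bit subkey for this round
    return subkeys
```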
A sketch of the overall DES algorithm follows.
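This is a minimal Python rendering of the structure described above, not the standard's reference pseudocode; the IP and FP permutation tables (1-based positions), the round function F, and the subkeys from the key schedule are assumed to be supplied.

```python
def des_encrypt_block(block_bits, subkeys, IP, FP, F):
    """Encrypt one 64-bit block (list of 0/1 values) with 16 precomputed subkeys."""
    bits = [block_bits[i - 1] for i in IP]           # initial permutation
    left, right = bits[:32], bits[32:]
    for k in subkeys:                                # 16 Feistel rounds
        left, right = right, [l ^ f for l, f in zip(left, F(right, k))]
    preoutput = right + left                         # halves swapped after the final round
    return [preoutput[i - 1] for i in FP]            # final permutation

def des_decrypt_block(block_bits, subkeys, IP, FP, F):
    """Decryption is the same structure with the subkeys applied in reverse order."""
    return des_encrypt_block(block_bits, list(reversed(subkeys)), IP, FP, F)
```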
Although more information has been published on the cryptanalysis of DES than any other block cipher, the most practical attack to date is still a brute-force approach. Various minor cryptanalytic properties are known, and three theoretical attacks are possible which, while having a theoretical complexity less than a brute-force attack, require an unrealistic number ofknownorchosen plaintextsto carry out, and are not a concern in practice.
For anycipher, the most basic method of attack isbrute force—trying every possible key in turn. Thelength of the keydetermines the number of possible keys, and hence the feasibility of this approach. For DES, questions were raised about the adequacy of its key size early on, even before it was adopted as a standard, and it was the small key size, rather than theoretical cryptanalysis, which dictated a need for a replacementalgorithm. As a result of discussions involving external consultants including the NSA, the key size was reduced from 128 bits to 56 bits to fit on a single chip.[30]
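As a rough back-of-the-envelope illustration (the key-testing rate below is an assumed figure, not one from the text):

```python
keys = 2 ** 56                      # size of the DES key space
rate = 10 ** 9                      # assumed keys tested per second
seconds = keys / rate
print(f"{keys:,} keys ≈ {seconds / 86400:.0f} days at 1e9 keys/s")
# ≈ 834 days to sweep the full space; about half that, on average, to find a key
```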
In academia, various proposals for a DES-cracking machine were advanced. In 1977, Diffie and Hellman proposed a machine costing an estimated US$20 million which could find a DES key in a single day.[1][31]By 1993, Wiener had proposed a key-search machine costing US$1 million which would find a key within 7 hours. However, none of these early proposals were ever implemented—or, at least, no implementations were publicly acknowledged. The vulnerability of DES was practically demonstrated in the late 1990s.[32]In 1997,RSA Securitysponsored a series of contests, offering a $10,000 prize to the first team that broke a message encrypted with DES for the contest. That contest was won by theDESCHALL Project, led by Rocke Verser,Matt Curtin, and Justin Dolske, using idle cycles of thousands of computers across the Internet. The feasibility of cracking DES quickly was demonstrated in 1998 when a custom DES-cracker was built by theElectronic Frontier Foundation(EFF), a cyberspace civil rights group, at the cost of approximately US$250,000 (seeEFF DES cracker). Their motivation was to show that DES was breakable in practice as well as in theory: "There are many people who will not believe a truth until they can see it with their own eyes. Showing them a physical machine that can crack DES in a few days is the only way to convince some people that they really cannot trust their security to DES." The machine brute-forced a key in a little more than 2 days' worth of searching.
The next confirmed DES cracker was the COPACOBANA machine built in 2006 by teams of theUniversities of BochumandKiel, both inGermany. Unlike the EFF machine, COPACOBANA consists of commercially available, reconfigurable integrated circuits. 120 of thesefield-programmable gate arrays(FPGAs) of type XILINX Spartan-3 1000 run in parallel. They are grouped in 20 DIMM modules, each containing 6 FPGAs. The use of reconfigurable hardware makes the machine applicable to other code breaking tasks as well.[33]One of the more interesting aspects of COPACOBANA is its cost factor. One machine can be built for approximately $10,000.[34]The cost decrease by roughly a factor of 25 over the EFF machine is an example of the continuous improvement ofdigital hardware—seeMoore's law. Adjusting for inflation over 8 years yields an even higher improvement of about 30x. Since 2007,SciEngines GmbH, a spin-off company of the two project partners of COPACOBANA has enhanced and developed successors of COPACOBANA. In 2008 their COPACOBANA RIVYERA reduced the time to break DES to less than one day, using 128 Spartan-3 5000's. SciEngines RIVYERA held the record in brute-force breaking DES, having utilized 128 Spartan-3 5000 FPGAs.[35]Their 256 Spartan-6 LX150 model has further lowered this time.
In 2012, David Hulton andMoxie Marlinspikeannounced a system with 48 Xilinx Virtex-6 LX240T FPGAs, each FPGA containing 40 fully pipelined DES cores running at 400 MHz, for a total capacity of 768 gigakeys/sec. The system could exhaustively search the entire 56-bit DES key space in about 26 hours, and this service was offered for a fee online.[36][37]However, the service has been offline since 2024, nominally for maintenance, and may have been permanently discontinued.[38]
There are three attacks known that can break the full 16 rounds of DES with less complexity than a brute-force search:differential cryptanalysis(DC),[39]linear cryptanalysis(LC),[40]andDavies' attack.[41]However, the attacks are theoretical and are generally considered infeasible to mount in practice;[42]these types of attack are sometimes termed certificational weaknesses.
There have also been attacks proposed against reduced-round versions of the cipher, that is, versions of DES with fewer than 16 rounds. Such analysis gives an insight into how many rounds are needed for safety, and how much of a "security margin" the full version retains.
Differential-linear cryptanalysiswas proposed by Langford and Hellman in 1994, and combines differential and linear cryptanalysis into a single attack.[47]An enhanced version of the attack can break 9-round DES with 2^{15.8} chosen plaintexts and has a 2^{29.2} time complexity (Biham and others, 2002).[48]
DES exhibits the complementation property, namely that{\displaystyle E_{\overline {K}}({\overline {P}})={\overline {E_{K}(P)}}={\overline {C}},}
wherex¯{\displaystyle {\overline {x}}}is thebitwise complementofx.{\displaystyle x.}EK{\displaystyle E_{K}}denotes encryption with keyK.{\displaystyle K.}P{\displaystyle P}andC{\displaystyle C}denote plaintext and ciphertext blocks respectively. The complementation property means that the work for abrute-force attackcould be reduced by a factor of 2 (or a single bit) under achosen-plaintextassumption. By definition, this property also applies to TDES cipher.[49]
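The property can be checked directly; the following is a small sketch assuming the third-party PyCryptodome package (which provides a DES primitive) is installed:

```python
from Crypto.Cipher import DES  # PyCryptodome: pip install pycryptodome

def bitwise_complement(data: bytes) -> bytes:
    return bytes(b ^ 0xFF for b in data)

key = bytes.fromhex("133457799BBCDFF1")
plaintext = bytes.fromhex("0123456789ABCDEF")

c1 = DES.new(key, DES.MODE_ECB).encrypt(plaintext)
c2 = DES.new(bitwise_complement(key), DES.MODE_ECB).encrypt(bitwise_complement(plaintext))
assert c2 == bitwise_complement(c1)   # E_{~K}(~P) == ~E_K(P)
```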
DES also has four so-calledweak keys. Encryption (E) and decryption (D) under a weak key have the same effect (seeinvolution):{\displaystyle E_{K}(E_{K}(P))=P}or, equivalently,{\displaystyle E_{K}=D_{K}.}
There are also six pairs ofsemi-weak keys. Encryption with one of the pair of semiweak keys,K1{\displaystyle K_{1}}, operates identically to decryption with the other,K2{\displaystyle K_{2}}:{\displaystyle E_{K_{1}}(E_{K_{2}}(P))=P}or, equivalently,{\displaystyle E_{K_{2}}=D_{K_{1}}.}
It is easy enough to avoid the weak and semiweak keys in an implementation, either by testing for them explicitly, or simply by choosing keys randomly; the odds of picking a weak or semiweak key by chance are negligible. The keys are not really any weaker than any other keys anyway, as they do not give an attack any advantage.
DES has also been proved not to be agroup, or more precisely, the set{EK}{\displaystyle \{E_{K}\}}(for all possible keysK{\displaystyle K}) underfunctional compositionis not a group, nor "close" to being a group.[50]This was an open question for some time, and if it had been the case, it would have been possible to break DES, and multiple encryption modes such asTriple DESwould not increase the security, because repeated encryption (and decryptions) under different keys would be equivalent to encryption under another, single key.[51]
Simplified DES (SDES) was designed for educational purposes only, to help students learn about modern cryptanalytic techniques.
SDES has similar structure and properties to DES, but has been simplified to make it much easier to perform encryption and decryption by hand with pencil and paper.
Some people feel that learning SDES gives insight into DES and other block ciphers, and insight into various cryptanalytic attacks against them.[52][53][54][55][56][57][58][59][60]
Concerns about security and the relatively slow operation of DES insoftwaremotivated researchers to propose a variety of alternativeblock cipherdesigns, which started to appear in the late 1980s and early 1990s: examples includeRC5,Blowfish,IDEA,NewDES,SAFER,CAST5andFEAL. Most of these designs kept the 64-bitblock sizeof DES, and could act as a "drop-in" replacement, although they typically used a 64-bit or 128-bit key. In theSoviet UniontheGOST 28147-89algorithm was introduced, with a 64-bit block size and a 256-bit key, which was also used inRussialater.
Another approach to strengthening DES was the development ofTriple DES (3DES), which applies the DES algorithm three times to each data block to increase security. However, 3DES was later deprecated by NIST due to its inefficiencies and susceptibility to certain cryptographic attacks.
To address these security concerns, modern cryptographic systems rely on stronger primitives: theAdvanced Encryption Standard(AES) for symmetric encryption, public-key techniques such as RSA and elliptic-curve cryptography (ECC) for key exchange and digital signatures, and emergingpost-quantum cryptographydesigned to resist attacks by quantum computers as well as classical ones.
A crucial aspect of DES involves itspermutations and key scheduling, which play a significant role in its encryption process. Analyzing these permutations helps in understanding DES's security limitations and the need for replacement algorithms.[61]
DES itself can be adapted and reused in a more secure scheme. Many former DES users now useTriple DES(TDES) which was described and analysed by one of DES's patentees (seeFIPSPub 46–3); it involves applying DES three times with two (2TDES) or three (3TDES) different keys. TDES is regarded as adequately secure, although it is quite slow. A less computationally expensive alternative isDES-X, which increases the key size by XORing extra key material before and after DES.GDESwas a DES variant proposed as a way to speed up encryption, but it was shown to be susceptible to differential cryptanalysis.
On January 2, 1997, NIST announced that they wished to choose a successor to DES.[62]In 2001, after an international competition, NIST selected a new cipher, theAdvanced Encryption Standard(AES), as a replacement.[63]The algorithm which was selected as the AES was submitted by its designers under the nameRijndael. Other finalists in the NISTAES competitionincludedRC6,Serpent,MARS, andTwofish.
|
https://en.wikipedia.org/wiki/Data_Encryption_Standard#Key_schedule
|
Inabstract algebra,group theorystudies thealgebraic structuresknown asgroups.
The concept of a group is central to abstract algebra: other well-known algebraic structures, such asrings,fields, andvector spaces, can all be seen as groups endowed with additionaloperationsandaxioms. Groups recur throughout mathematics, and the methods of group theory have influenced many parts of algebra.Linear algebraic groupsandLie groupsare two branches of group theory that have experienced advances and have become subject areas in their own right.
Various physical systems, such ascrystalsand thehydrogen atom, andthree of the fourknown fundamental forces in the universe, may be modelled bysymmetry groups. Thus group theory and the closely relatedrepresentation theoryhave many important applications inphysics,chemistry, andmaterials science. Group theory is also central topublic key cryptography.
The earlyhistory of group theorydates from the 19th century. One of the most important mathematical achievements of the 20th century[1]was the collaborative effort, taking up more than 10,000 journal pages and mostly published between 1960 and 2004, that culminated in a completeclassification of finite simple groups.
Group theory has three main historical sources:number theory, the theory ofalgebraic equations, andgeometry. The number-theoretic strand was begun byLeonhard Euler, and developed byGauss'swork onmodular arithmeticand additive and multiplicative groups related toquadratic fields. Early results about permutation groups were obtained byLagrange,Ruffini, andAbelin their quest for general solutions of polynomial equations of high degree.Évariste Galoiscoined the term "group" and established a connection, now known asGalois theory, between the nascent theory of groups andfield theory. In geometry, groups first became important inprojective geometryand, later,non-Euclidean geometry.Felix Klein'sErlangen programproclaimed group theory to be the organizing principle of geometry.
Galois, in the 1830s, was the first to employ groups to determine the solvability ofpolynomial equations.Arthur CayleyandAugustin Louis Cauchypushed these investigations further by creating the theory of permutation groups. The second historical source for groups stems fromgeometricalsituations. In an attempt to come to grips with possible geometries (such aseuclidean,hyperbolicorprojective geometry) using group theory,Felix Kleininitiated theErlangen programme.Sophus Lie, in 1884, started using groups (now calledLie groups) attached toanalyticproblems. Thirdly, groups were, at first implicitly and later explicitly, used inalgebraic number theory.
The different scope of these early sources resulted in different notions of groups. The theory of groups was unified starting around 1880. Since then, the impact of group theory has been ever growing, giving rise to the birth ofabstract algebrain the early 20th century,representation theory, and many more influential spin-off domains. Theclassification of finite simple groupsis a vast body of work from the mid 20th century, classifying all thefinitesimple groups.
The range of groups being considered has gradually expanded from finite permutation groups and special examples ofmatrix groupsto abstract groups that may be specified through apresentationbygeneratorsandrelations.
The firstclassof groups to undergo a systematic study waspermutation groups. Given any setXand a collectionGofbijectionsofXinto itself (known aspermutations) that is closed under compositions and inverses,Gis a groupactingonX. IfXconsists ofnelements andGconsists ofallpermutations,Gis thesymmetric groupSn; in general, any permutation groupGis asubgroupof the symmetric group ofX. An early construction due toCayleyexhibited any group as a permutation group, acting on itself (X=G) by means of the leftregular representation.
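As a concrete illustration (not from the text), the short Python check below treats permutations of {0, 1, 2} as tuples and verifies that they are closed under composition and have inverses, i.e. that they form the symmetric group S3; the names S3 and compose are introduced here for the example.

```python
from itertools import permutations

X = (0, 1, 2)
S3 = list(permutations(X))                     # all 6 permutations of a 3-element set

def compose(g, h):
    """(g ∘ h)(x) = g(h(x)), where a permutation p acts as x ↦ p[x]."""
    return tuple(g[h[x]] for x in X)

identity = X
inverse = {g: next(h for h in S3 if compose(g, h) == identity) for g in S3}

assert all(compose(g, h) in S3 for g in S3 for h in S3)        # closure
assert all(compose(g, inverse[g]) == identity for g in S3)     # every element has an inverse
```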
In many cases, the structure of a permutation group can be studied using the properties of its action on the corresponding set. For example, in this way one proves that forn≥ 5, thealternating groupAnissimple, i.e. does not admit any propernormal subgroups. This fact plays a key role in theimpossibility of solving a general algebraic equation of degreen≥ 5in radicals.
The next important class of groups is given bymatrix groups, orlinear groups. HereGis a set consisting of invertiblematricesof given ordernover afieldKthat is closed under the products and inverses. Such a group acts on then-dimensional vector spaceKnbylinear transformations. This action makes matrix groups conceptually similar to permutation groups, and the geometry of the action may be usefully exploited to establish properties of the groupG.
Permutation groups and matrix groups are special cases oftransformation groups: groups that act on a certain spaceXpreserving its inherent structure. In the case of permutation groups,Xis a set; for matrix groups,Xis avector space. The concept of a transformation group is closely related with the concept of asymmetry group: transformation groups frequently consist ofalltransformations that preserve a certain structure.
The theory of transformation groups forms a bridge connecting group theory withdifferential geometry. A long line of research, originating withLieandKlein, considers group actions onmanifoldsbyhomeomorphismsordiffeomorphisms. The groups themselves may bediscreteorcontinuous.
Most groups considered in the first stage of the development of group theory were "concrete", having been realized through numbers, permutations, or matrices. It was not until the late nineteenth century that the idea of anabstract groupbegan to take hold, where "abstract" means that the nature of the elements is ignored in such a way that twoisomorphic groupsare considered as the same group. A typical way of specifying an abstract group is through apresentationbygenerators and relations, written⟨S∣R⟩{\displaystyle \langle S\mid R\rangle }for a set of generatorsSand a set of relationsR.
A significant source of abstract groups is given by the construction of afactor group, orquotient group,G/H, of a groupGby anormal subgroupH.Class groupsofalgebraic number fieldswere among the earliest examples of factor groups, of much interest innumber theory. If a groupGis a permutation group on a setX, the factor groupG/His no longer acting onX; but the idea of an abstract group permits one not to worry about this discrepancy.
The change of perspective from concrete to abstract groups makes it natural to consider properties of groups that are independent of a particular realization, or in modern language, invariant underisomorphism, as well as the classes of group with a given such property:finite groups,periodic groups,simple groups,solvable groups, and so on. Rather than exploring properties of an individual group, one seeks to establish results that apply to a whole class of groups. The new paradigm was of paramount importance for the development of mathematics: it foreshadowed the creation ofabstract algebrain the works ofHilbert,Emil Artin,Emmy Noether, and mathematicians of their school.[citation needed]
An important elaboration of the concept of a group occurs ifGis endowed with additional structure, notably, of atopological space,differentiable manifold, oralgebraic variety. If the multiplication and inversion of the group are compatible with this structure, that is, they arecontinuous,smoothorregular(in the sense of algebraic geometry) maps, thenGis atopological group, aLie group, or analgebraic group.[2]
The presence of extra structure relates these types of groups with other mathematical disciplines and means that more tools are available in their study. Topological groups form a natural domain forabstract harmonic analysis, whereasLie groups(frequently realized as transformation groups) are the mainstays ofdifferential geometryand unitaryrepresentation theory. Certain classification questions that cannot be solved in general can be approached and resolved for special subclasses of groups. Thus,compact connected Lie groupshave been completely classified. There is a fruitful relation between infinite abstract groups and topological groups: whenever a groupΓcan be realized as alatticein a topological groupG, the geometry and analysis pertaining toGyield important results aboutΓ. A comparatively recent trend in the theory of finite groups exploits their connections with compact topological groups (profinite groups): for example, a singlep-adic analytic groupGhas a family of quotients which are finitep-groupsof various orders, and properties ofGtranslate into the properties of its finite quotients.
During the twentieth century, mathematicians investigated some aspects of the theory of finite groups in great depth, especially thelocal theoryof finite groups and the theory ofsolvableandnilpotent groups.[citation needed]As a consequence, the completeclassification of finite simple groupswas achieved, meaning that all thosesimple groupsfrom which all finite groups can be built are now known.
During the second half of the twentieth century, mathematicians such asChevalleyandSteinbergalso increased our understanding of finite analogs ofclassical groups, and other related groups. One such family of groups is the family ofgeneral linear groupsoverfinite fields.
Finite groups often occur when consideringsymmetryof mathematical or physical objects, when those objects admit just a finite number of structure-preserving transformations. The theory ofLie groups, which may be viewed as dealing with "continuous symmetry", is strongly influenced by the associatedWeyl groups. These are finite groups generated by reflections which act on a finite-dimensionalEuclidean space. The properties of finite groups can thus play a role in subjects such astheoretical physicsandchemistry.
Saying that a groupGactson a setXmeans that every element ofGdefines a bijective map on the setXin a way compatible with the group structure. WhenXhas more structure, it is useful to restrict this notion further: a representation ofGon avector spaceVis agroup homomorphism:{\displaystyle \rho :G\to \operatorname {GL} (V),}
whereGL(V) consists of the invertiblelinear transformationsofV. In other words, to every group elementgis assigned anautomorphismρ(g) such thatρ(g) ∘ρ(h) =ρ(gh)for anyhinG.
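A toy example, not from the text: the cyclic group of order 4 represented by 90° rotation matrices of the plane, with the homomorphism property ρ(g)ρ(h) = ρ(g + h) verified exhaustively; the helper names rho and matmul are introduced purely for the illustration.

```python
# Represent Z_4 = {0, 1, 2, 3} (addition mod 4) by rotations of R^2 in steps of 90 degrees.
def rho(g):
    """2x2 integer rotation matrix for a rotation by g * 90 degrees."""
    c, s = [(1, 0), (0, 1), (-1, 0), (0, -1)][g % 4]   # (cos, sin) of g * 90 degrees
    return ((c, -s), (s, c))

def matmul(a, b):
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2))
                 for i in range(2))

# Homomorphism property: rho(g) rho(h) == rho(g + h mod 4) for all g, h.
assert all(matmul(rho(g), rho(h)) == rho((g + h) % 4) for g in range(4) for h in range(4))
```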
This definition can be understood in two directions, both of which give rise to whole new domains of mathematics.[3]On the one hand, it may yield new information about the groupG: often, the group operation inGis abstractly given, but viaρ, it corresponds to themultiplication of matrices, which is very explicit.[4]On the other hand, given a well-understood group acting on a complicated object, this simplifies the study of the object in question. For example, ifGis finite, it is known thatVabove decomposes intoirreducible parts(seeMaschke's theorem). These parts, in turn, are much more easily manageable than the wholeV(viaSchur's lemma).
Given a groupG,representation theorythen asks what representations ofGexist. There are several settings, and the employed methods and obtained results are rather different in every case:representation theory of finite groupsand representations ofLie groupsare two main subdomains of the theory. The totality of representations is governed by the group'scharacters. For example,Fourier polynomialscan be interpreted as the characters ofU(1), the group ofcomplex numbersofabsolute value1, acting on theL2-space of periodic functions.
ALie groupis agroupthat is also adifferentiable manifold, with the property that the group operations are compatible with thesmooth structure. Lie groups are named afterSophus Lie, who laid the foundations of the theory of continuoustransformation groups. The termgroupes de Liefirst appeared in French in 1893 in the thesis of Lie's studentArthur Tresse, page 3.[5]
Lie groups represent the best-developed theory ofcontinuous symmetryofmathematical objectsandstructures, which makes them indispensable tools for many parts of contemporary mathematics, as well as for moderntheoretical physics. They provide a natural framework for analysing the continuous symmetries ofdifferential equations(differential Galois theory), in much the same way as permutation groups are used inGalois theoryfor analysing the discrete symmetries ofalgebraic equations. An extension of Galois theory to the case of continuous symmetry groups was one of Lie's principal motivations.
Groups can be described in different ways. Finite groups can be described by writing down thegroup tableconsisting of all possible multiplicationsg•h. A more compact way of defining a group is bygenerators and relations, also called thepresentationof a group. Given any setFof generators{gi}i∈I{\displaystyle \{g_{i}\}_{i\in I}}, thefree groupgenerated byFsurjects onto the groupG. The kernel of this map is called the subgroup of relations, generated by some subsetD. The presentation is usually denoted by⟨F∣D⟩.{\displaystyle \langle F\mid D\rangle .}For example, the group presentation⟨a,b∣aba−1b−1⟩{\displaystyle \langle a,b\mid aba^{-1}b^{-1}\rangle }describes a group which is isomorphic toZ×Z.{\displaystyle \mathbb {Z} \times \mathbb {Z} .}A string consisting of generator symbols and their inverses is called aword.
Combinatorial group theorystudies groups from the perspective of generators and relations.[6]It is particularly useful where finiteness assumptions are satisfied, for example finitely generated groups, or finitely presented groups (i.e. in addition the relations are finite). The area makes use of the connection ofgraphsvia theirfundamental groups. A fundamental theorem of this area is that every subgroup of a free group is free.
There are several natural questions arising from giving a group by its presentation. Theword problemasks whether two words are effectively the same group element. By relating the problem toTuring machines, one can show that there is in general noalgorithmsolving this task. Another, generally harder, algorithmically insoluble problem is thegroup isomorphism problem, which asks whether two groups given by different presentations are actually isomorphic. For example, the group with presentation⟨x,y∣xyxyx=e⟩,{\displaystyle \langle x,y\mid xyxyx=e\rangle ,}is isomorphic to the additive groupZof integers, although this may not be immediately apparent. (Writingz=xy{\displaystyle z=xy}, one hasG≅⟨z,y∣z3=y⟩≅⟨z⟩.{\displaystyle G\cong \langle z,y\mid z^{3}=y\rangle \cong \langle z\rangle .})
Geometric group theoryattacks these problems from a geometric viewpoint, either by viewing groups as geometric objects, or by finding suitable geometric objects a group acts on.[7]The first idea is made precise by means of theCayley graph, whose vertices correspond to group elements and edges correspond to right multiplication in the group. Given two elements, one constructs theword metricgiven by the length of the minimal path between the elements. A theorem ofMilnorand Svarc then says that given a groupGacting in a reasonable manner on ametric spaceX, for example acompact manifold, thenGisquasi-isometric(i.e. looks similar from a distance) to the spaceX.
Given a structured objectXof any sort, asymmetryis a mapping of the object onto itself which preserves the structure. This occurs in many cases, for example the symmetries of a geometric figure, or the permutations of the roots of a polynomial that preserve the algebraic relations among those roots.
The axioms of a group formalize the essential aspects ofsymmetry. Symmetries form a group: they areclosedbecause if you take a symmetry of an object, and then apply another symmetry, the result will still be a symmetry. The identity keeping the object fixed is always a symmetry of an object. Existence of inverses is guaranteed by undoing the symmetry and the associativity comes from the fact that symmetries are functions on a space, and composition of functions is associative.
Frucht's theoremsays that every group is the symmetry group of somegraph. So every abstract group is actually the symmetries of some explicit object.
The saying of "preserving the structure" of an object can be made precise by working in acategory. Maps preserving the structure are then themorphisms, and the symmetry group is theautomorphism groupof the object in question.
Applications of group theory abound. Almost all structures inabstract algebraare special cases of groups.Rings, for example, can be viewed asabelian groups(corresponding to addition) together with a second operation (corresponding to multiplication). Therefore, group theoretic arguments underlie large parts of the theory of those entities.
Galois theoryuses groups to describe the symmetries of the roots of a polynomial (or more precisely the automorphisms of the algebras generated by these roots). Thefundamental theorem of Galois theoryprovides a link betweenalgebraic field extensionsand group theory. It gives an effective criterion for the solvability of polynomial equations in terms of the solvability of the correspondingGalois group. For example,S5, thesymmetric groupin 5 elements, is not solvable which implies that the generalquintic equationcannot be solved by radicals in the way equations of lower degree can. The theory, being one of the historical roots of group theory, is still fruitfully applied to yield new results in areas such asclass field theory.
Algebraic topologyis another domain which prominentlyassociatesgroups to the objects the theory is interested in. There, groups are used to describe certain invariants oftopological spaces. They are called "invariants" because they are defined in such a way that they do not change if the space is subjected to somedeformation. For example, thefundamental group"counts" how many paths in the space are essentially different. ThePoincaré conjecture, proved in 2002/2003 byGrigori Perelman, is a prominent application of this idea. The influence is not unidirectional, though. For example, algebraic topology makes use ofEilenberg–MacLane spaceswhich are spaces with prescribedhomotopy groups. Similarlyalgebraic K-theoryrelies in a way onclassifying spacesof groups. Finally, the name of thetorsion subgroupof an infinite group shows the legacy of topology in group theory.
Algebraic geometrylikewise uses group theory in many ways.Abelian varietieshave been introduced above. The presence of the group operation yields additional information which makes these varieties particularly accessible. They also often serve as a test for new conjectures. (For example theHodge conjecture(in certain cases).) The one-dimensional case, namelyelliptic curvesis studied in particular detail. They are both theoretically and practically intriguing.[8]In another direction,toric varietiesarealgebraic varietiesacted on by atorus. Toroidal embeddings have recently led to advances inalgebraic geometry, in particularresolution of singularities.[9]
Algebraic number theorymakes use of groups for some important applications. For example,Euler's product formula,{\displaystyle \sum _{n\geq 1}{\frac {1}{n^{s}}}=\prod _{p{\text{ prime}}}{\frac {1}{1-p^{-s}}},}
capturesthe factthat any integer decomposes in a unique way intoprimes. The failure of this statement formore general ringsgives rise toclass groupsandregular primes, which feature inKummer'streatment ofFermat's Last Theorem.
Analysis on Lie groups and certain other groups is calledharmonic analysis.Haar measures, that is, integrals invariant under the translation in a Lie group, are used forpattern recognitionand otherimage processingtechniques.[10]
Incombinatorics, the notion ofpermutationgroup and the concept of group action are often used to simplify the counting of a set of objects; see in particularBurnside's lemma.
The presence of the 12-periodicityin thecircle of fifthsyields applications ofelementary group theoryinmusical set theory.Transformational theorymodels musical transformations as elements of a mathematical group.
Inphysics, groups are important because they describe the symmetries which the laws of physics seem to obey. According toNoether's theorem, every continuous symmetry of a physical system corresponds to aconservation lawof the system. Physicists are very interested in group representations, especially of Lie groups, since these representations often point the way to the "possible" physical theories. Examples of the use of groups in physics include theStandard Model,gauge theory, theLorentz group, and thePoincaré group.
Group theory can be used to resolve the incompleteness of the statistical interpretations of mechanics developed byWillard Gibbs, relating to the summing of an infinite number of probabilities to yield a meaningful solution.[11]
Inchemistryandmaterials science,point groupsare used to classify regular polyhedra, and thesymmetries of molecules, andspace groupsto classifycrystal structures. The assigned groups can then be used to determine physical properties (such aschemical polarityandchirality), spectroscopic properties (particularly useful forRaman spectroscopy,infrared spectroscopy, circular dichroism spectroscopy, magnetic circular dichroism spectroscopy, UV/Vis spectroscopy, and fluorescence spectroscopy), and to constructmolecular orbitals.
Molecular symmetryis responsible for many physical and spectroscopic properties of compounds and provides relevant information about how chemical reactions occur. In order to assign a point group for any given molecule, it is necessary to find the set of symmetry operations present on it. The symmetry operation is an action, such as a rotation around an axis or a reflection through a mirror plane. In other words, it is an operation that moves the molecule such that it is indistinguishable from the original configuration. In group theory, the rotation axes and mirror planes are called "symmetry elements". These elements can be a point, line or plane with respect to which the symmetry operation is carried out. The symmetry operations of a molecule determine the specific point group for this molecule.
Inchemistry, there are five important symmetry operations. They are identity operation (E), rotation operation or proper rotation (Cn), reflection operation (σ), inversion (i) and rotation reflection operation or improper rotation (Sn). The identity operation (E) consists of leaving the molecule as it is. This is equivalent to any number of full rotations around any axis. This is a symmetry of all molecules, whereas the symmetry group of achiralmolecule consists of only the identity operation. An identity operation is a characteristic of every molecule even if it has no symmetry. Rotation around an axis (Cn) consists of rotating the molecule around a specific axis by a specific angle. It is rotation through the angle 360°/n, wherenis an integer, about a rotation axis. For example, if awatermolecule rotates 180° around the axis that passes through theoxygenatom and between thehydrogenatoms, it is in the same configuration as it started. In this case,n= 2, since applying it twice produces the identity operation. In molecules with more than one rotation axis, the Cnaxis having the largest value of n is the highest order rotation axis or principal axis. For example inboron trifluoride(BF3), the highest order of rotation axis isC3, so the principal axis of rotation isC3.
In the reflection operation (σ) many molecules have mirror planes, although they may not be obvious. The reflection operation exchanges left and right, as if each point had moved perpendicularly through the plane to a position exactly as far from the plane as when it started. When the plane is perpendicular to the principal axis of rotation, it is calledσh(horizontal). Other planes, which contain the principal axis of rotation, are labeled vertical (σv) or dihedral (σd).
Inversion (i) is a more complex operation. Each point moves through the center of the molecule to a position opposite the original position and as far from the central point as where it started. Many molecules that seem at first glance to have an inversion center do not; for example,methaneand othertetrahedralmolecules lack inversion symmetry. To see this, hold a methane model with two hydrogen atoms in the vertical plane on the right and two hydrogen atoms in the horizontal plane on the left. Inversion results in two hydrogen atoms in the horizontal plane on the right and two hydrogen atoms in the vertical plane on the left. Inversion is therefore not a symmetry operation of methane, because the orientation of the molecule following the inversion operation differs from the original orientation. The last operation, improper rotation or rotation-reflection (Sn), requires rotation of 360°/n, followed by reflection through a plane perpendicular to the axis of rotation.
Very large groups of prime order constructed inelliptic curve cryptographyserve forpublic-key cryptography. Cryptographical methods of this kind benefit from the flexibility of the geometric objects, hence their group structures, together with the complicated structure of these groups, which make thediscrete logarithmvery hard to calculate. One of the earliest encryption protocols,Caesar's cipher, may also be interpreted as a (very easy) group operation. Most cryptographic schemes use groups in some way. In particularDiffie–Hellman key exchangeuses finitecyclic groups. So the termgroup-based cryptographyrefers mostly tocryptographic protocolsthat use infinitenon-abelian groupssuch as abraid group.
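A toy sketch of Diffie–Hellman over a small cyclic group (the parameters below are chosen for readability only and offer no security; real deployments use groups of cryptographic size, and the choice of generator here is an assumption):

```python
# Toy Diffie–Hellman in the multiplicative group mod a small prime.
p, g = 1019, 2          # public prime modulus and assumed generator; illustrative only
a, b = 123, 456         # Alice's and Bob's private exponents

A = pow(g, a, p)        # Alice sends g^a mod p
B = pow(g, b, p)        # Bob sends g^b mod p

shared_alice = pow(B, a, p)   # (g^b)^a mod p
shared_bob = pow(A, b, p)     # (g^a)^b mod p
assert shared_alice == shared_bob   # both parties derive the same group element
```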
|
https://en.wikipedia.org/wiki/Group_theory#Finite_groups
|
Incomputer science,brute-force searchorexhaustive search, also known asgenerate and test, is a very generalproblem-solvingtechnique andalgorithmic paradigmthat consists ofsystematically checkingall possible candidates for whether or not each candidate satisfies the problem's statement.
A brute-force algorithm that finds thedivisorsof anatural numbernwould enumerate all integers from 1 to n, and check whether each of them dividesnwithout remainder. A brute-force approach for theeight queens puzzlewould examine all possible arrangements of 8 pieces on the 64-square chessboard and for each arrangement, check whether each (queen) piece can attack any other.[1]
When in doubt, use brute force.
While a brute-force search is simple to implement and will always find a solution if it exists, implementation costs are proportional to the number of candidate solutions – which in many practical problems tends to grow very quickly as the size of the problem increases (§Combinatorial explosion).[2]Therefore, brute-force search is typically used when the problem size is limited, or when there are problem-specificheuristicsthat can be used to reduce the set of candidate solutions to a manageable size. The method is also used when the simplicity of implementation is more important than processing speed.
This is the case, for example, in critical applications where any errors in thealgorithmwould have very serious consequences or whenusing a computer to prove a mathematical theorem. Brute-force search is also useful as a baseline method whenbenchmarkingother algorithms ormetaheuristics. Indeed, brute-force search can be viewed as the simplest metaheuristic. Brute force search should not be confused withbacktracking, where large sets of solutions can be discarded without being explicitly enumerated (as in the textbook computer solution to the eight queens problem above). The brute-force method for finding an item in a table – namely, check all entries of the latter, sequentially – is calledlinear search.
In order to apply brute-force search to a specific class of problems, one must implement fourprocedures,first,next,valid, andoutput. These procedures should take as a parameter the dataPfor the particular instance of the problem that is to be solved, and should do the following:first(P) generates a first candidate solution forP;next(P,c) generates the next candidate forPafter the current onec;valid(P,c) checks whether candidatecis a solution forP; andoutput(P,c) uses the solutioncas appropriate to the application.
Thenextprocedure must also tell when there are no more candidates for the instanceP, after the current onec. A convenient way to do that is to return a "null candidate", some conventional data value Λ that is distinct from any real candidate. Likewise thefirstprocedure should return Λ if there are no candidates at all for the instanceP. The brute-force method is then expressed by a generate-and-test loop over the candidates, sketched below.
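A minimal Python sketch of that loop, with None standing in for the null candidate Λ; the four callables play the roles of the first, next, valid, and output procedures described above, and the divisor instantiation matches the example that follows.

```python
def brute_force(P, first_fn, next_fn, valid_fn, output_fn):
    """Generate-and-test loop; None plays the role of the null candidate Λ."""
    c = first_fn(P)                 # first candidate, or None if there are none
    while c is not None:
        if valid_fn(P, c):
            output_fn(P, c)
        c = next_fn(P, c)           # next candidate, or None when exhausted

# Instantiation for "divisors of n", matching the description below:
brute_force(
    12,
    first_fn=lambda n: 1 if n >= 1 else None,
    next_fn=lambda n, c: c + 1 if c < n else None,
    valid_fn=lambda n, c: n % c == 0,
    output_fn=lambda n, c: print(c),
)   # prints 1 2 3 4 6 12
```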
For example, when looking for the divisors of an integern, the instance dataPis the numbern. The callfirst(n) should return the integer 1 ifn≥ 1, or Λ otherwise; the callnext(n,c) should returnc+ 1 ifc<n, and Λ otherwise; andvalid(n,c) should returntrueif and only ifcis a divisor ofn. (In fact, if we choose Λ to ben+ 1, the testsn≥ 1 andc<nare unnecessary.)

The brute-force search algorithm above will calloutputfor every candidate that is a solution to the given instanceP. The algorithm is easily modified to stop after finding the first solution, or a specified number of solutions; or after testing a specified number of candidates, or after spending a given amount ofCPUtime.
The main disadvantage of the brute-force method is that, for many real-world problems, the number of natural candidates is prohibitively large. For instance, if we look for the divisors of a number as described above, the number of candidates tested will be the given numbern. So ifnhas sixteen decimal digits, say, the search will require executing at least 10^15 computer instructions, which will take several days on a typicalPC. Ifnis a random 64-bitnatural number, which has about 19 decimal digits on the average, the search will take about 10 years. This steep growth in the number of candidates, as the size of the data increases, occurs in all sorts of problems. For instance, if we are seeking a particular rearrangement of 10 letters, then we have 10! = 3,628,800 candidates to consider, which a typical PC can generate and test in less than one second. However, adding one more letter – which is only a 10% increase in the data size – will multiply the number of candidates by 11, a 1000% increase. For 20 letters, the number of candidates is 20!, which is about 2.4×10^18 or 2.4quintillion; and the search will take about 10 years. This unwelcome phenomenon is commonly called thecombinatorial explosion, or thecurse of dimensionality.
One example of a case where combinatorial complexity leads to solvability limit is insolving chess. Chess is not asolved game. In 2005, all chess game endings with six pieces or less were solved, showing the result of each position if played perfectly. It took ten more years to complete the tablebase with one more chess piece added, thus completing a 7-piece tablebase. Adding one more piece to a chess ending (thus making an 8-piece tablebase) is considered intractable due to the added combinatorial complexity.[3][4][5]
One way to speed up a brute-force algorithm is to reduce the search space, that is, the set of candidate solutions, by usingheuristicsspecific to the problem class. For example, in theeight queens problemthe challenge is to place eight queens on a standardchessboardso that no queen attacks any other. Since each queen can be placed in any of the 64 squares, in principle there are 64^8 = 281,474,976,710,656 possibilities to consider. However, because the queens are all alike and no two queens can be placed on the same square, the candidates are all the possible ways of choosing a set of 8 squares from the set of all 64 squares; which means 64 choose 8 = 64!/(56!×8!) = 4,426,165,368 candidate solutions – about 1/60,000 of the previous estimate. Further, no arrangement with two queens on the same row or the same column can be a solution. Therefore, we can further restrict the set of candidates to those arrangements.
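The counts quoted above can be reproduced directly (a small check, not part of the original text):

```python
import math

naive = 64 ** 8                         # any of 64 squares for each of 8 queens
choose = math.comb(64, 8)               # unordered placements on 8 distinct squares
rows_cols = math.factorial(8)           # one queen per row: a permutation of 8 columns

print(f"{naive:,}")                     # 281,474,976,710,656
print(f"{choose:,}")                    # 4,426,165,368
print(f"{naive / choose:,.0f}")         # ≈ 63,593, i.e. roughly 1/60,000 of the naive count
print(f"{rows_cols:,}")                 # 40,320 candidates once rows and columns are distinct
```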
As this example shows, a little bit of analysis will often lead to dramatic reductions in the number of candidate solutions, and may turn an intractable problem into a trivial one.
In some cases, the analysis may reduce the candidates to the set of all valid solutions; that is, it may yield an algorithm that directly enumerates all the desired solutions (or finds one solution, as appropriate), without wasting time with tests and the generation of invalid candidates. For example, for the problem "find all integers between 1 and 1,000,000 that are evenly divisible by 417" a naive brute-force solution would generate all integers in the range, testing each of them for divisibility. However, that problem can be solved much more efficiently by starting with 417 and repeatedly adding 417 until the number exceeds 1,000,000 – which takes only 2398 (= 1,000,000 ÷ 417) steps, and no tests.
In applications that require only one solution, rather than all solutions, theexpectedrunning time of a brute force search will often depend on the order in which the candidates are tested. As a general rule, one should test the most promising candidates first. For example, when searching for a proper divisor of a random numbern, it is better to enumerate the candidate divisors in increasing order, from 2 ton− 1, than the other way around – because the probability thatnis divisible bycis 1/c. Moreover, the probability of a candidate being valid is often affected by the previous failed trials. For example, consider the problem of finding a1bit in a given 1000-bit stringP. In this case, the candidate solutions are the indices 1 to 1000, and a candidatecis valid ifP[c] =1. Now, suppose that the first bit ofPis equally likely to be0or1, but each bit thereafter is equal to the previous one with 90% probability. If the candidates are enumerated in increasing order, 1 to 1000, the numbertof candidates examined before success will be about 6, on the average. On the other hand, if the candidates are enumerated in the order 1,11,21,31...991,2,12,22,32 etc., the expected value oftwill be only a little more than 2.

More generally, the search space should be enumerated in such a way that the next candidate is most likely to be valid,given that the previous trials were not. So if the valid solutions are likely to be "clustered" in some sense, then each new candidate should be as far as possible from the previous ones, in that same sense. The converse holds, of course, if the solutions are likely to be spread out more uniformly than expected by chance.
There are many other search methods, or metaheuristics, which are designed to take advantage of various kinds of partial knowledge one may have about the solution.Heuristicscan also be used to make an early cutoff of parts of the search. One example of this is theminimaxprinciple for searching game trees, that eliminates many subtrees at an early stage in the search. In certain fields, such as language parsing, techniques such aschart parsingcan exploit constraints in the problem to reduce an exponential complexity problem into a polynomial complexity problem. In many cases, such as inConstraint Satisfaction Problems, one can dramatically reduce the search space by means ofConstraint propagation, that is efficiently implemented inConstraint programminglanguages. The search space for problems can also be reduced by replacing the full problem with a simplified version. For example, incomputer chess, rather than computing the fullminimaxtree of all possible moves for the remainder of the game, a more limited tree of minimax possibilities is computed, with the tree being pruned at a certain number of moves, and the remainder of the tree being approximated by astatic evaluation function.
Incryptography, abrute-force attackinvolves systematically checking all possiblekeysuntil the correct key is found.[6]Thisstrategycan in theory be used against any encrypted data[7](except aone-time pad) by an attacker who is unable to take advantage of any weakness in an encryption system that would otherwise make his or her task easier.
Thekey lengthused in the encryption determines the practical feasibility of performing a brute force attack, with longer keys exponentially more difficult to crack than shorter ones. Brute force attacks can be made less effective byobfuscatingthe data to be encoded, something that makes it more difficult for an attacker to recognise when he has cracked the code. One of the measures of the strength of an encryption system is how long it would theoretically take an attacker to mount a successful brute force attack against it.
|
https://en.wikipedia.org/wiki/Exhaustive_search
|
Intheoretical computer scienceand mathematics,computational complexity theoryfocuses on classifyingcomputational problemsaccording to their resource usage, and explores the relationships between these classifications. A computational problem is a task solved by a computer. A computation problem is solvable by mechanical application of mathematical steps, such as analgorithm.
A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition, by introducing mathematicalmodels of computationto study these problems and quantifying theircomputational complexity, i.e., the amount of resources needed to solve them, such as time and storage. Other measures of complexity are also used, such as the amount of communication (used incommunication complexity), the number ofgatesin a circuit (used incircuit complexity) and the number of processors (used inparallel computing). One of the roles of computational complexity theory is to determine the practical limits on what computers can and cannot do. TheP versus NP problem, one of the sevenMillennium Prize Problems,[1]is part of the field of computational complexity.
Closely related fields intheoretical computer scienceareanalysis of algorithmsandcomputability theory. A key distinction between analysis of algorithms and computational complexity theory is that the former is devoted to analyzing the amount of resources needed by a particular algorithm to solve a problem, whereas the latter asks a more general question about all possible algorithms that could be used to solve the same problem. More precisely, computational complexity theory tries to classify problems that can or cannot be solved with appropriately restricted resources. In turn, imposing restrictions on the available resources is what distinguishes computational complexity from computability theory: the latter theory asks what kinds of problems can, in principle, be solved algorithmically.
Acomputational problemcan be viewed as an infinite collection ofinstancestogether with a set (possibly empty) ofsolutionsfor every instance. The input string for a computational problem is referred to as a problem instance, and should not be confused with the problem itself. In computational complexity theory, a problem refers to the abstract question to be solved. In contrast, an instance of this problem is a rather concrete utterance, which can serve as the input for a decision problem. For example, consider the problem ofprimality testing. The instance is a number (e.g., 15) and the solution is "yes" if the number is prime and "no" otherwise (in this case, 15 is not prime and the answer is "no"). Stated another way, theinstanceis a particular input to the problem, and thesolutionis the output corresponding to the given input.
To further highlight the difference between a problem and an instance, consider the following instance of the decision version of thetravelling salesman problem: Is there a route of at most 2000 kilometres passing through all of Germany's 14 largest cities? The quantitative answer to this particular problem instance is of little use for solving other instances of the problem, such as asking for a round trip through all sites inMilanwhose total length is at most 10 km. For this reason, complexity theory addresses computational problems and not particular problem instances.
When considering computational problems, a problem instance is astringover analphabet. Usually, the alphabet is taken to be the binary alphabet (i.e., the set {0,1}), and thus the strings arebitstrings. As in a real-worldcomputer, mathematical objects other than bitstrings must be suitably encoded. For example,integerscan be represented inbinary notation, andgraphscan be encoded directly via theiradjacency matrices, or by encoding theiradjacency listsin binary.
Even though some proofs of complexity-theoretic theorems regularly assume some concrete choice of input encoding, one tries to keep the discussion abstract enough to be independent of the choice of encoding. This can be achieved by ensuring that different representations can be transformed into each other efficiently.
Decision problemsare one of the central objects of study in computational complexity theory. A decision problem is a type of computational problem where the answer is eitheryesorno(alternatively, 1 or 0). A decision problem can be viewed as aformal language, where the members of the language are instances whose output is yes, and the non-members are those instances whose output is no. The objective is to decide, with the aid of analgorithm, whether a given input string is a member of the formal language under consideration. If the algorithm deciding this problem returns the answeryes, the algorithm is said to accept the input string, otherwise it is said to reject the input.
An example of a decision problem is the following. The input is an arbitrarygraph. The problem consists in deciding whether the given graph isconnectedor not. The formal language associated with this decision problem is then the set of all connected graphs — to obtain a precise definition of this language, one has to decide how graphs are encoded as binary strings.
Afunction problemis a computational problem where a single output (of atotal function) is expected for every input, but the output is more complex than that of adecision problem—that is, the output is not just yes or no. Notable examples include thetraveling salesman problemand theinteger factorization problem.
It is tempting to think that the notion of function problems is much richer than the notion of decision problems. However, this is not really the case, since function problems can be recast as decision problems. For example, the multiplication of two integers can be expressed as the set of triples(a,b,c){\displaystyle (a,b,c)}such that the relationa×b=c{\displaystyle a\times b=c}holds. Deciding whether a given triple is a member of this set corresponds to solving the problem of multiplying two numbers.
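As a minimal illustration of this recasting (the function name is ours, purely for exposition), the decision version of multiplication is just a membership test for the relation a × b = c:

```python
def multiplication_relation(a: int, b: int, c: int) -> bool:
    """Decide whether the triple (a, b, c) satisfies a * b == c."""
    return a * b == c
```

Deciding membership in this set is the decision-problem counterpart of computing the product, which is the sense in which nothing essential is lost in the translation.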
To measure the difficulty of solving a computational problem, one may wish to see how much time the best algorithm requires to solve the problem. However, the running time may, in general, depend on the instance. In particular, larger instances will require more time to solve. Thus the time required to solve a problem (or the space required, or any measure of complexity) is calculated as a function of the size of the instance. The input size is typically measured in bits. Complexity theory studies how algorithms scale as input size increases. For instance, in the problem of finding whether a graph is connected, how much more time does it take to solve a problem for a graph with2n{\displaystyle 2n}vertices compared to the time taken for a graph withn{\displaystyle n}vertices?
If the input size isn{\displaystyle n}, the time taken can be expressed as a function ofn{\displaystyle n}. Since the time taken on different inputs of the same size can be different, the worst-case time complexityT(n){\displaystyle T(n)}is defined to be the maximum time taken over all inputs of sizen{\displaystyle n}. IfT(n){\displaystyle T(n)}is a polynomial inn{\displaystyle n}, then the algorithm is said to be apolynomial timealgorithm.Cobham's thesisargues that a problem can be solved with a feasible amount of resources if it admits a polynomial-time algorithm.
A Turing machine is a mathematical model of a general computing machine. It is a theoretical device that manipulates symbols contained on a strip of tape. Turing machines are not intended as a practical computing technology, but rather as a general model of a computing machine—anything from an advanced supercomputer to a mathematician with a pencil and paper. It is believed that if a problem can be solved by an algorithm, there exists a Turing machine that solves the problem. Indeed, this is the statement of theChurch–Turing thesis. Furthermore, it is known that everything that can be computed on other models of computation known to us today, such as aRAM machine,Conway's Game of Life,cellular automata,lambda calculusor any programming language can be computed on a Turing machine. Since Turing machines are easy to analyze mathematically, and are believed to be as powerful as any other model of computation, the Turing machine is the most commonly used model in complexity theory.
Many types of Turing machines are used to define complexity classes, such asdeterministic Turing machines,probabilistic Turing machines,non-deterministic Turing machines,quantum Turing machines,symmetric Turing machinesandalternating Turing machines. They are all equally powerful in principle, but when resources (such as time or space) are bounded, some of these may be more powerful than others.
A deterministic Turing machine is the most basic Turing machine, which uses a fixed set of rules to determine its future actions. A probabilistic Turing machine is a deterministic Turing machine with an extra supply of random bits. The ability to make probabilistic decisions often helps algorithms solve problems more efficiently. Algorithms that use random bits are calledrandomized algorithms. A non-deterministic Turing machine is a deterministic Turing machine with an added feature of non-determinism, which allows a Turing machine to have multiple possible future actions from a given state. One way to view non-determinism is that the Turing machine branches into many possible computational paths at each step, and if it solves the problem in any of these branches, it is said to have solved the problem. Clearly, this model is not meant to be a physically realizable model, it is just a theoretically interesting abstract machine that gives rise to particularly interesting complexity classes. For examples, seenon-deterministic algorithm.
Many machine models different from the standardmulti-tape Turing machineshave been proposed in the literature, for examplerandom-access machines. Perhaps surprisingly, each of these models can be converted to another without providing any extra computational power. The time and memory consumption of these alternate models may vary.[2]What all these models have in common is that the machines operatedeterministically.
However, some computational problems are easier to analyze in terms of more unusual resources. For example, a non-deterministic Turing machine is a computational model that is allowed to branch out to check many different possibilities at once. The non-deterministic Turing machine has very little to do with how we physically want to compute algorithms, but its branching exactly captures many of the mathematical models we want to analyze, so thatnon-deterministic timeis a very important resource in analyzing computational problems.
For a precise definition of what it means to solve a problem using a given amount of time and space, a computational model such as thedeterministic Turing machineis used. The time required by a deterministic Turing machineM{\displaystyle M}on inputx{\displaystyle x}is the total number of state transitions, or steps, the machine makes before it halts and outputs the answer ("yes" or "no"). A Turing machineM{\displaystyle M}is said to operate within timef(n){\displaystyle f(n)}if the time required byM{\displaystyle M}on each input of lengthn{\displaystyle n}is at mostf(n){\displaystyle f(n)}. A decision problemA{\displaystyle A}can be solved in timef(n){\displaystyle f(n)}if there exists a Turing machine operating in timef(n){\displaystyle f(n)}that solves the problem. Since complexity theory is interested in classifying problems based on their difficulty, one defines sets of problems based on some criteria. For instance, the set of problems solvable within timef(n){\displaystyle f(n)}on a deterministic Turing machine is then denoted byDTIME(f(n){\displaystyle f(n)}).
Analogous definitions can be made for space requirements. Although time and space are the most well-known complexity resources, anycomplexity measurecan be viewed as a computational resource. Complexity measures are very generally defined by theBlum complexity axioms. Other complexity measures used in complexity theory includecommunication complexity,circuit complexity, anddecision tree complexity.
The complexity of an algorithm is often expressed usingbig O notation.
The best, worst and average case complexity refer to three different ways of measuring the time complexity (or any other complexity measure) of different inputs of the same size. Since some inputs of size n may be faster to solve than others, we define the following complexities: the best-case complexity is the minimum time taken over any input of size n, the average-case complexity is the expected time over all inputs of size n (with respect to some distribution on the inputs), the amortized complexity is the total time of a sequence of operations averaged over that sequence, and the worst-case complexity is the maximum time taken over any input of size n.
The order from cheap to costly is: Best, average (ofdiscrete uniform distribution), amortized, worst.
For example, the deterministic sorting algorithm quicksort addresses the problem of sorting a list of integers. The worst case is when the pivot is always the largest or smallest value in the list (so the list is never divided). In this case, the algorithm takes time O(n²). If we assume that all possible permutations of the input list are equally likely, the average time taken for sorting is O(n log n). The best case occurs when each pivoting divides the list in half, also needing O(n log n) time.
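A small sketch (pivot choice and names are ours) makes the three cases tangible: with a first-element pivot, an already-sorted list triggers the worst case, while a typical random input lands near the average case.

```python
def quicksort(xs: list) -> list:
    """Quicksort with the first element as pivot (not in place; for illustration)."""
    if len(xs) <= 1:
        return xs
    pivot, rest = xs[0], xs[1:]
    smaller = [x for x in rest if x < pivot]
    larger = [x for x in rest if x >= pivot]
    return quicksort(smaller) + [pivot] + quicksort(larger)

# An already-sorted input makes every partition maximally unbalanced, giving the
# O(n^2) worst case (keep n small here, or the recursion depth hits Python's limit);
# a random permutation gives the O(n log n) average case described above.
```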
To classify the computation time (or similar resources, such as space consumption), it is helpful to demonstrate upper and lower bounds on the maximum amount of time required by the most efficient algorithm to solve a given problem. The complexity of an algorithm is usually taken to be its worst-case complexity unless specified otherwise. Analyzing a particular algorithm falls under the field ofanalysis of algorithms. To show an upper boundT(n){\displaystyle T(n)}on the time complexity of a problem, one needs to show only that there is a particular algorithm with running time at mostT(n){\displaystyle T(n)}. However, proving lower bounds is much more difficult, since lower bounds make a statement about all possible algorithms that solve a given problem. The phrase "all possible algorithms" includes not just the algorithms known today, but any algorithm that might be discovered in the future. To show a lower bound ofT(n){\displaystyle T(n)}for a problem requires showing that no algorithm can have time complexity lower thanT(n){\displaystyle T(n)}.
Upper and lower bounds are usually stated using thebig O notation, which hides constant factors and smaller terms. This makes the bounds independent of the specific details of the computational model used. For instance, ifT(n)=7n2+15n+40{\displaystyle T(n)=7n^{2}+15n+40}, in big O notation one would writeT(n)∈O(n2){\displaystyle T(n)\in O(n^{2})}.
Acomplexity classis a set of problems of related complexity. Simpler complexity classes are defined by the following factors:
Some complexity classes have complicated definitions that do not fit into this framework. Thus, a typical complexity class has a definition like the following:
But bounding the computation time above by some concrete functionf(n){\displaystyle f(n)}often yields complexity classes that depend on the chosen machine model. For instance, the language{xx∣xis any binary string}{\displaystyle \{xx\mid x{\text{ is any binary string}}\}}can be solved inlinear timeon a multi-tape Turing machine, but necessarily requires quadratic time in the model of single-tape Turing machines. If we allow polynomial variations in running time,Cobham-Edmonds thesisstates that "the time complexities in any two reasonable and general models of computation are polynomially related" (Goldreich 2008, Chapter 1.2). This forms the basis for the complexity classP, which is the set of decision problems solvable by a deterministic Turing machine within polynomial time. The corresponding set of function problems isFP.
Many important complexity classes can be defined by bounding the time or space used by the algorithm. Some important complexity classes of decision problems defined in this manner are the following:
Logarithmic-space classes do not account for the space required to represent the problem.
It turns out that PSPACE = NPSPACE and EXPSPACE = NEXPSPACE bySavitch's theorem.
Other important complexity classes includeBPP,ZPPandRP, which are defined usingprobabilistic Turing machines;ACandNC, which are defined using Boolean circuits; andBQPandQMA, which are defined using quantum Turing machines.#Pis an important complexity class of counting problems (not decision problems). Classes likeIPandAMare defined usingInteractive proof systems.ALLis the class of all decision problems.
For the complexity classes defined in this way, it is desirable to prove that relaxing the requirements on (say) computation time indeed defines a bigger set of problems. In particular, although DTIME(n{\displaystyle n}) is contained in DTIME(n2{\displaystyle n^{2}}), it would be interesting to know if the inclusion is strict. For time and space requirements, the answer to such questions is given by the time and space hierarchy theorems respectively. They are called hierarchy theorems because they induce a proper hierarchy on the classes defined by constraining the respective resources. Thus there are pairs of complexity classes such that one is properly included in the other. Having deduced such proper set inclusions, we can proceed to make quantitative statements about how much more additional time or space is needed in order to increase the number of problems that can be solved.
More precisely, thetime hierarchy theoremstates thatDTIME(o(f(n)))⊊DTIME(f(n)⋅log(f(n))){\displaystyle {\mathsf {DTIME}}{\big (}o(f(n)){\big )}\subsetneq {\mathsf {DTIME}}{\big (}f(n)\cdot \log(f(n)){\big )}}.
Thespace hierarchy theoremstates thatDSPACE(o(f(n)))⊊DSPACE(f(n)){\displaystyle {\mathsf {DSPACE}}{\big (}o(f(n)){\big )}\subsetneq {\mathsf {DSPACE}}{\big (}f(n){\big )}}.
The time and space hierarchy theorems form the basis for most separation results of complexity classes. For instance, the time hierarchy theorem tells us that P is strictly contained in EXPTIME, and the space hierarchy theorem tells us that L is strictly contained in PSPACE.
Many complexity classes are defined using the concept of a reduction. A reduction is a transformation of one problem into another problem. It captures the informal notion of a problem being at most as difficult as another problem. For instance, if a problemX{\displaystyle X}can be solved using an algorithm forY{\displaystyle Y},X{\displaystyle X}is no more difficult thanY{\displaystyle Y}, and we say thatX{\displaystyle X}reducestoY{\displaystyle Y}. There are many different types of reductions, based on the method of reduction, such as Cook reductions, Karp reductions and Levin reductions, and the bound on the complexity of reductions, such aspolynomial-time reductionsorlog-space reductions.
The most commonly used reduction is a polynomial-time reduction. This means that the reduction process takes polynomial time. For example, the problem of squaring an integer can be reduced to the problem of multiplying two integers. This means an algorithm for multiplying two integers can be used to square an integer. Indeed, this can be done by giving the same input to both inputs of the multiplication algorithm. Thus we see that squaring is not more difficult than multiplication, since squaring can be reduced to multiplication.
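A one-line sketch of this particular reduction (the function names are ours): any multiplication routine can be reused, unchanged, to square a number by passing the same value to both of its inputs.

```python
def multiply(a: int, b: int) -> int:
    """Stand-in for any algorithm that multiplies two integers."""
    return a * b

def square(a: int) -> int:
    """Squaring reduces to multiplication: feed the same input twice."""
    return multiply(a, a)
```

The reduction itself costs only constant extra work, so it is in particular a polynomial-time reduction.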
This motivates the concept of a problem being hard for a complexity class. A problemX{\displaystyle X}ishardfor a class of problemsC{\displaystyle C}if every problem inC{\displaystyle C}can be reduced toX{\displaystyle X}. Thus no problem inC{\displaystyle C}is harder thanX{\displaystyle X}, since an algorithm forX{\displaystyle X}allows us to solve any problem inC{\displaystyle C}. The notion of hard problems depends on the type of reduction being used. For complexity classes larger than P, polynomial-time reductions are commonly used. In particular, the set of problems that are hard for NP is the set ofNP-hardproblems.
If a problemX{\displaystyle X}is inC{\displaystyle C}and hard forC{\displaystyle C}, thenX{\displaystyle X}is said to becompleteforC{\displaystyle C}. This means thatX{\displaystyle X}is the hardest problem inC{\displaystyle C}. (Since many problems could be equally hard, one might say thatX{\displaystyle X}is one of the hardest problems inC{\displaystyle C}.) Thus the class ofNP-completeproblems contains the most difficult problems in NP, in the sense that they are the ones most likely not to be in P. Because the problem P = NP is not solved, being able to reduce a known NP-complete problem,Π2{\displaystyle \Pi _{2}}, to another problem,Π1{\displaystyle \Pi _{1}}, would indicate that there is no known polynomial-time solution forΠ1{\displaystyle \Pi _{1}}. This is because a polynomial-time solution toΠ1{\displaystyle \Pi _{1}}would yield a polynomial-time solution toΠ2{\displaystyle \Pi _{2}}. Similarly, because all NP problems can be reduced to the set, finding anNP-completeproblem that can be solved in polynomial time would mean that P = NP.[3]
The complexity class P is often seen as a mathematical abstraction modeling those computational tasks that admit an efficient algorithm. This hypothesis is called the Cobham–Edmonds thesis. The complexity class NP, on the other hand, contains many problems that people would like to solve efficiently, but for which no efficient algorithm is known, such as the Boolean satisfiability problem, the Hamiltonian path problem and the vertex cover problem. Since deterministic Turing machines are special non-deterministic Turing machines, it is easily observed that each problem in P is also a member of the class NP.
The question of whether P equals NP is one of the most important open questions in theoretical computer science because of the wide implications of a solution.[3]If the answer is yes, many important problems can be shown to have more efficient solutions. These include various types ofinteger programmingproblems inoperations research, many problems inlogistics,protein structure predictioninbiology,[5]and the ability to find formal proofs ofpure mathematicstheorems.[6]The P versus NP problem is one of theMillennium Prize Problemsproposed by theClay Mathematics Institute. There is a US$1,000,000 prize for resolving the problem.[7]
It was shown by Ladner that ifP≠NP{\displaystyle {\textsf {P}}\neq {\textsf {NP}}}then there exist problems inNP{\displaystyle {\textsf {NP}}}that are neither inP{\displaystyle {\textsf {P}}}norNP{\displaystyle {\textsf {NP}}}-complete.[4]Such problems are calledNP-intermediateproblems. Thegraph isomorphism problem, thediscrete logarithm problemand theinteger factorization problemare examples of problems believed to be NP-intermediate. They are some of the very few NP problems not known to be inP{\displaystyle {\textsf {P}}}or to beNP{\displaystyle {\textsf {NP}}}-complete.
Thegraph isomorphism problemis the computational problem of determining whether two finitegraphsareisomorphic. An important unsolved problem in complexity theory is whether the graph isomorphism problem is inP{\displaystyle {\textsf {P}}},NP{\displaystyle {\textsf {NP}}}-complete, or NP-intermediate. The answer is not known, but it is believed that the problem is at least not NP-complete.[8]If graph isomorphism is NP-complete, thepolynomial time hierarchycollapses to its second level.[9]Since it is widely believed that the polynomial hierarchy does not collapse to any finite level, it is believed that graph isomorphism is not NP-complete. The best algorithm for this problem, due toLászló BabaiandEugene Lukshas run timeO(2nlogn){\displaystyle O(2^{\sqrt {n\log n}})}for graphs withn{\displaystyle n}vertices, although some recent work by Babai offers some potentially new perspectives on this.[10]
The integer factorization problem is the computational problem of determining the prime factorization of a given integer. Phrased as a decision problem, it is the problem of deciding whether the input has a prime factor less than k. No efficient integer factorization algorithm is known, and this fact forms the basis of several modern cryptographic systems, such as the RSA algorithm. The integer factorization problem is in NP and in co-NP (and even in UP and co-UP[11]). If the problem is NP-complete, the polynomial time hierarchy will collapse to its first level (i.e., NP will equal co-NP). The best known algorithm for integer factorization is the general number field sieve, which takes time {\displaystyle O(e^{\left({\sqrt[{3}]{\frac {64}{9}}}\right){\sqrt[{3}]{\log n}}\,{\sqrt[{3}]{(\log \log n)^{2}}}})}[12] to factor an odd integer n. However, the best known quantum algorithm for this problem, Shor's algorithm, does run in polynomial time. Unfortunately, this fact doesn't say much about where the problem lies with respect to non-quantum complexity classes.
Many known complexity classes are suspected to be unequal, but this has not been proved. For instanceP⊆NP⊆PP⊆PSPACE{\displaystyle {\textsf {P}}\subseteq {\textsf {NP}}\subseteq {\textsf {PP}}\subseteq {\textsf {PSPACE}}}, but it is possible thatP=PSPACE{\displaystyle {\textsf {P}}={\textsf {PSPACE}}}. IfP{\displaystyle {\textsf {P}}}is not equal toNP{\displaystyle {\textsf {NP}}}, thenP{\displaystyle {\textsf {P}}}is not equal toPSPACE{\displaystyle {\textsf {PSPACE}}}either. Since there are many known complexity classes betweenP{\displaystyle {\textsf {P}}}andPSPACE{\displaystyle {\textsf {PSPACE}}}, such asRP{\displaystyle {\textsf {RP}}},BPP{\displaystyle {\textsf {BPP}}},PP{\displaystyle {\textsf {PP}}},BQP{\displaystyle {\textsf {BQP}}},MA{\displaystyle {\textsf {MA}}},PH{\displaystyle {\textsf {PH}}}, etc., it is possible that all these complexity classes collapse to one class. Proving that any of these classes are unequal would be a major breakthrough in complexity theory.
Along the same lines,co-NP{\displaystyle {\textsf {co-NP}}}is the class containing thecomplementproblems (i.e. problems with theyes/noanswers reversed) ofNP{\displaystyle {\textsf {NP}}}problems. It is believed[13]thatNP{\displaystyle {\textsf {NP}}}is not equal toco-NP{\displaystyle {\textsf {co-NP}}}; however, it has not yet been proven. It is clear that if these two complexity classes are not equal thenP{\displaystyle {\textsf {P}}}is not equal toNP{\displaystyle {\textsf {NP}}}, sinceP=co-P{\displaystyle {\textsf {P}}={\textsf {co-P}}}. Thus ifP=NP{\displaystyle P=NP}we would haveco-P=co-NP{\displaystyle {\textsf {co-P}}={\textsf {co-NP}}}whenceNP=P=co-P=co-NP{\displaystyle {\textsf {NP}}={\textsf {P}}={\textsf {co-P}}={\textsf {co-NP}}}.
Similarly, it is not known ifL{\displaystyle {\textsf {L}}}(the set of all problems that can be solved in logarithmic space) is strictly contained inP{\displaystyle {\textsf {P}}}or equal toP{\displaystyle {\textsf {P}}}. Again, there are many complexity classes between the two, such asNL{\displaystyle {\textsf {NL}}}andNC{\displaystyle {\textsf {NC}}}, and it is not known if they are distinct or equal classes.
It is suspected thatP{\displaystyle {\textsf {P}}}andBPP{\displaystyle {\textsf {BPP}}}are equal. However, it is currently open ifBPP=NEXP{\displaystyle {\textsf {BPP}}={\textsf {NEXP}}}.
A problem that can theoretically be solved, but requires impractically large resources (e.g., time) to do so, is known as an intractable problem.[14] Conversely, a problem that can be solved in practice is called a tractable problem, literally "a problem that can be handled". The term infeasible (literally "cannot be done") is sometimes used interchangeably with intractable,[15] though this risks confusion with a feasible solution in mathematical optimization.[16]
Tractable problems are frequently identified with problems that have polynomial-time solutions (P{\displaystyle {\textsf {P}}},PTIME{\displaystyle {\textsf {PTIME}}}); this is known as theCobham–Edmonds thesis. Problems that are known to be intractable in this sense include those that areEXPTIME-hard. IfNP{\displaystyle {\textsf {NP}}}is not the same asP{\displaystyle {\textsf {P}}}, thenNP-hardproblems are also intractable in this sense.
However, this identification is inexact: a polynomial-time solution with large degree or large leading coefficient grows quickly, and may be impractical for practical size problems; conversely, an exponential-time solution that grows slowly may be practical on realistic input, or a solution that takes a long time in the worst case may take a short time in most cases or the average case, and thus still be practical. Saying that a problem is not inP{\displaystyle {\textsf {P}}}does not imply that all large cases of the problem are hard or even that most of them are. For example, the decision problem inPresburger arithmetichas been shown not to be inP{\displaystyle {\textsf {P}}}, yet algorithms have been written that solve the problem in reasonable times in most cases. Similarly, algorithms can solve the NP-completeknapsack problemover a wide range of sizes in less than quadratic time andSAT solversroutinely handle large instances of the NP-completeBoolean satisfiability problem.
To see why exponential-time algorithms are generally unusable in practice, consider a program that makes2n{\displaystyle 2^{n}}operations before halting. For smalln{\displaystyle n}, say 100, and assuming for the sake of example that the computer does1012{\displaystyle 10^{12}}operations each second, the program would run for about4×1010{\displaystyle 4\times 10^{10}}years, which is the same order of magnitude as theage of the universe. Even with a much faster computer, the program would only be useful for very small instances and in that sense the intractability of a problem is somewhat independent of technological progress. However, an exponential-time algorithm that takes1.0001n{\displaystyle 1.0001^{n}}operations is practical untiln{\displaystyle n}gets relatively large.
Similarly, a polynomial time algorithm is not always practical. If its running time is, say,n15{\displaystyle n^{15}}, it is unreasonable to consider it efficient and it is still useless except on small instances. Indeed, in practice evenn3{\displaystyle n^{3}}orn2{\displaystyle n^{2}}algorithms are often impractical on realistic sizes of problems.
Continuous complexity theory can refer to complexity theory of problems that involve continuous functions that are approximated by discretizations, as studied innumerical analysis. One approach to complexity theory of numerical analysis[17]isinformation based complexity.
Continuous complexity theory can also refer to complexity theory of the use ofanalog computation, which uses continuousdynamical systemsanddifferential equations.[18]Control theorycan be considered a form of computation and differential equations are used in the modelling of continuous-time and hybrid discrete-continuous-time systems.[19]
An early example of algorithm complexity analysis is the running time analysis of theEuclidean algorithmdone byGabriel Laméin 1844.
Before the actual research explicitly devoted to the complexity of algorithmic problems started off, numerous foundations were laid out by various researchers. Most influential among these was the definition of Turing machines byAlan Turingin 1936, which turned out to be a very robust and flexible simplification of a computer.
The beginning of systematic studies in computational complexity is attributed to the seminal 1965 paper "On the Computational Complexity of Algorithms" byJuris HartmanisandRichard E. Stearns, which laid out the definitions oftime complexityandspace complexity, and proved the hierarchy theorems.[20]In addition, in 1965Edmondssuggested to consider a "good" algorithm to be one with running time bounded by a polynomial of the input size.[21]
Earlier papers studying problems solvable by Turing machines with specific bounded resources include[20]John Myhill's definition oflinear bounded automata(Myhill 1960),Raymond Smullyan's study of rudimentary sets (1961), as well asHisao Yamada's paper[22]on real-time computations (1962). Somewhat earlier,Boris Trakhtenbrot(1956), a pioneer in the field from the USSR, studied another specific complexity measure.[23]As he remembers:
However, [my] initial interest [in automata theory] was increasingly set aside in favor of computational complexity, an exciting fusion of combinatorial methods, inherited fromswitching theory, with the conceptual arsenal of the theory of algorithms. These ideas had occurred to me earlier in 1955 when I coined the term "signalizing function", which is nowadays commonly known as "complexity measure".[24]
In 1967,Manuel Blumformulated a set ofaxioms(now known asBlum axioms) specifying desirable properties of complexity measures on the set of computable functions and proved an important result, the so-calledspeed-up theorem. The field began to flourish in 1971 whenStephen CookandLeonid Levinprovedthe existence of practically relevant problems that areNP-complete. In 1972,Richard Karptook this idea a leap forward with his landmark paper, "Reducibility Among Combinatorial Problems", in which he showed that 21 diversecombinatorialandgraph theoreticalproblems, each infamous for its computational intractability, are NP-complete.[25]
|
https://en.wikipedia.org/wiki/Computational_complexity_theory
|
Jacobi symbol (k/n) for various k (along top) and n (along left side). Only 0 ≤ k < n are shown, since due to rule (2) below any other k can be reduced modulo n. Quadratic residues are highlighted in yellow; no entry with a Jacobi symbol of −1 is a quadratic residue, and if k is a quadratic residue modulo a coprime n, then (k/n) = 1, but not all entries with a Jacobi symbol of 1 (see the n = 9 and n = 15 rows) are quadratic residues. Notice also that when either n or k is a square, all values are nonnegative.
TheJacobi symbolis a generalization of theLegendre symbol. Introduced byJacobiin 1837,[1]it is of theoretical interest inmodular arithmeticand other branches ofnumber theory, but its main use is incomputational number theory, especiallyprimality testingandinteger factorization; these in turn are important incryptography.
For any integer a and any positive odd integer n, the Jacobi symbol (a/n) is defined as the product of the Legendre symbols corresponding to the prime factors of n:
{\displaystyle \left({\frac {a}{n}}\right)=\left({\frac {a}{p_{1}}}\right)^{\alpha _{1}}\left({\frac {a}{p_{2}}}\right)^{\alpha _{2}}\cdots \left({\frac {a}{p_{k}}}\right)^{\alpha _{k}},}
where
{\displaystyle n=p_{1}^{\alpha _{1}}p_{2}^{\alpha _{2}}\cdots p_{k}^{\alpha _{k}}}
is the prime factorization of n.
The Legendre symbol (a/p) is defined for all integers a and all odd primes p by
{\displaystyle \left({\frac {a}{p}}\right)={\begin{cases}0&{\text{if }}a\equiv 0{\pmod {p}},\\1&{\text{if }}a\not \equiv 0{\pmod {p}}{\text{ and }}a{\text{ is a quadratic residue mod }}p,\\-1&{\text{if }}a{\text{ is a quadratic non-residue mod }}p.\end{cases}}}
Following the normal convention for theempty product,(a/1)= 1.
When the lower argument is an odd prime, the Jacobi symbol is equal to the Legendre symbol.
The following is a table of values of Jacobi symbol(a/n)withn≤ 59,a≤ 30,nodd.
The following facts, even the reciprocity laws, are straightforward deductions from the definition of the Jacobi symbol and the corresponding properties of the Legendre symbol.[2]
The Jacobi symbol is defined only when the upper argument ("numerator") is an integer and the lower argument ("denominator") is a positive odd integer.
If either the top or bottom argument is fixed, the Jacobi symbol is acompletely multiplicative functionin the remaining argument:
Thelaw of quadratic reciprocity: ifmandnare odd positivecoprime integers, then
and its supplements
and(1n)=(n1)=1{\displaystyle \left({\frac {1}{n}}\right)=\left({\frac {n}{1}}\right)=1}
Combining properties 4 and 8 gives:
Like the Legendre symbol:
But, unlike the Legendre symbol:
This is because forato be a quadratic residue modulon, it has to be a quadratic residue moduloeveryprime factor ofn. However, the Jacobi symbol equals one if, for example,ais a non-residue modulo exactly two of the prime factors ofn.
Although the Jacobi symbol cannot be uniformly interpreted in terms of squares and non-squares, it can be uniformly interpreted as the sign of a permutation byZolotarev's lemma.
The Jacobi symbol(a/n)is aDirichlet characterto the modulusn.
The above formulas lead to an efficient O(log a log b)[3] algorithm for calculating the Jacobi symbol, analogous to the Euclidean algorithm for finding the gcd of two numbers. (This should not be surprising in light of rule 2.)
In addition to the code below, Riesel[4] has it in Pascal.
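The original listings did not survive extraction, so the following is a hedged Python sketch of the standard reciprocity-based algorithm (not the article's own listing, but it computes the same symbol).

```python
def jacobi(a: int, n: int) -> int:
    """Compute the Jacobi symbol (a/n) for a positive odd integer n."""
    if n <= 0 or n % 2 == 0:
        raise ValueError("n must be a positive odd integer")
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:              # strip factors of 2 using the (2/n) supplement
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a                    # quadratic reciprocity for odd arguments
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0     # (a/n) = 0 exactly when gcd(a, n) > 1
```

For the worked example that follows, jacobi(1001, 9907) evaluates to −1.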
The Legendre symbol(a/p)is only defined for odd primesp. It obeys the same rules as the Jacobi symbol (i.e., reciprocity and the supplementary formulas for(−1/p)and(2/p)and multiplicativity of the "numerator".)
Problem: Given that 9907 is prime, calculate(1001/9907).
The difference between the two calculations is that when the Legendre symbol is used the "numerator" has to be factored into prime powers before the symbol is flipped. This makes the calculation using the Legendre symbol significantly slower than the one using the Jacobi symbol, as there is no known polynomial-time algorithm for factoring integers.[5]In fact, this is why Jacobi introduced the symbol.
There is another way the Jacobi and Legendre symbols differ. If theEuler's criterionformula is used modulo acomposite number, the result may or may not be the value of the Jacobi symbol, and in fact may not even be −1 or 1. For example,
So if it is unknown whether a numbernis prime or composite, we can pick a random numbera, calculate the Jacobi symbol(a/n)and compare it with Euler's formula; if they differ modulon, thennis composite; if they have the same residue modulonfor many different values ofa, thennis "probably prime".
This is the basis for the probabilisticSolovay–Strassen primality testand refinements such as theBaillie–PSW primality testand theMiller–Rabin primality test.
As an indirect use, it is possible to use it as an error detection routine during the execution of the Lucas–Lehmer primality test which, even on modern computer hardware, can take weeks to complete when processing Mersenne numbers over {\displaystyle 2^{136,279,841}-1} (the largest known Mersenne prime as of October 2024). In nominal cases, the Jacobi symbol:
{\displaystyle \left({\frac {s_{i}-2}{M_{p}}}\right)=-1,\qquad i\neq 0.}
This also holds for the final residue {\displaystyle s_{p-2}} and hence can be used as a verification of probable validity. However, if an error occurs in the hardware, there is a 50% chance that the result will become 0 or 1 instead, and won't change with subsequent terms of {\displaystyle s} (unless another error occurs and changes it back to −1).
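A hedged sketch of how such a check can be wired into the test (function name ours; it reuses the jacobi() routine sketched earlier, and a production implementation would check only occasionally rather than on every term):

```python
def lucas_lehmer_with_jacobi_check(p: int) -> bool:
    """Lucas-Lehmer test for M_p = 2**p - 1 with a Jacobi-symbol sanity check."""
    m = (1 << p) - 1                 # the Mersenne number M_p (p an odd prime)
    s = 4                            # s_0
    for i in range(1, p - 1):        # compute s_1 .. s_(p-2)
        s = (s * s - 2) % m
        if jacobi(s - 2, m) != -1:   # nominally -1 for every i != 0
            raise RuntimeError(f"Jacobi check failed at term s_{i}")
    return s == 0                    # M_p is prime iff the final residue is 0

# lucas_lehmer_with_jacobi_check(7) -> True, since 2**7 - 1 = 127 is prime.
```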
|
https://en.wikipedia.org/wiki/Jacobi_symbol
|
Inmathematics,integer factorizationis the decomposition of apositive integerinto aproductof integers. Every positive integer greater than 1 is either the product of two or more integerfactorsgreater than 1, in which case it is acomposite number, or it is not, in which case it is aprime number. For example,15is a composite number because15 = 3 · 5, but7is a prime number because it cannot be decomposed in this way. If one of the factors is composite, it can in turn be written as a product of smaller factors, for example60 = 3 · 20 = 3 · (5 · 4). Continuing this process until every factor is prime is calledprime factorization; the result is always unique up to the order of the factors by theprime factorization theorem.
To factorize a small integernusing mental or pen-and-paper arithmetic, the simplest method istrial division: checking if the number is divisible by prime numbers2,3,5, and so on, up to thesquare rootofn. For larger numbers, especially when using a computer, various more sophisticated factorization algorithms are more efficient. A prime factorization algorithm typically involvestesting whether each factor is primeeach time a factor is found.
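A minimal sketch of trial division as just described (names ours); it stops at √n because any composite number has a factor no larger than its square root.

```python
def trial_division(n: int) -> list[int]:
    """Return the prime factorization of n >= 2 by trial division up to sqrt(n)."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1 if d == 2 else 2      # after 2, only odd candidates need testing
    if n > 1:
        factors.append(n)            # whatever remains is itself prime
    return factors

# trial_division(60) -> [2, 2, 3, 5]
```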
When the numbers are sufficiently large, no efficient non-quantuminteger factorizationalgorithmis known. However, it has not been proven that such an algorithm does not exist. The presumeddifficultyof this problem is important for the algorithms used incryptographysuch asRSA public-key encryptionand theRSA digital signature.[1]Many areas ofmathematicsandcomputer sciencehave been brought to bear on this problem, includingelliptic curves,algebraic number theory, and quantum computing.
Not all numbers of a given length are equally hard to factor. The hardest instances of these problems (for currently known techniques) aresemiprimes, the product of two prime numbers. When they are both large, for instance more than two thousandbitslong, randomly chosen, and about the same size (but not too close, for example, to avoid efficient factorization byFermat's factorization method), even the fastest prime factorization algorithms on the fastest classical computers can take enough time to make the search impractical; that is, as the number of digits of the integer being factored increases, the number of operations required to perform the factorization on any classical computer increases drastically.
Many cryptographic protocols are based on the presumed difficulty of factoring large composite integers or a related problem –for example, theRSA problem. An algorithm that efficiently factors an arbitrary integer would renderRSA-basedpublic-keycryptography insecure.
By thefundamental theorem of arithmetic, every positive integer has a uniqueprime factorization. (By convention, 1 is theempty product.)Testingwhether the integer is prime can be done inpolynomial time, for example, by theAKS primality test. If composite, however, the polynomial time tests give no insight into how to obtain the factors.
Given a general algorithm for integer factorization, any integer can be factored into its constituent prime factors by repeated application of this algorithm. The situation is more complicated with special-purpose factorization algorithms, whose benefits may not be realized as well or even at all with the factors produced during decomposition. For example, if n = 171 × p × q where p < q are very large primes, trial division will quickly produce the factors 3 and 19 but will take p divisions to find the next factor. As a contrasting example, if n is the product of the primes 13729, 1372933, and 18848997161, where 13729 × 1372933 = 18848997157, Fermat's factorization method will begin with ⌈√n⌉ = 18848997159, which immediately yields b = √(a² − n) = √4 = 2 and hence the factors a − b = 18848997157 and a + b = 18848997161. While these are easily recognized as composite and prime respectively, Fermat's method will take much longer to factor the composite number because the starting value of ⌈√18848997157⌉ = 137292 for a is a factor of 10 away from 1372933.
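For contrast, a sketch of Fermat's method as used in the example above (names ours): it searches for a value a with a² − n a perfect square, which is fast precisely when the two factors are close together.

```python
from math import isqrt

def fermat_factor(n: int) -> tuple[int, int]:
    """Fermat's factorization of an odd n: find a >= ceil(sqrt(n)) with a*a - n a square."""
    a = isqrt(n)
    if a * a < n:
        a += 1                       # a = ceil(sqrt(n))
    while True:
        b2 = a * a - n
        b = isqrt(b2)
        if b * b == b2:
            return a - b, a + b      # n = (a - b)(a + b)
        a += 1
```

On n = 18848997157 × 18848997161 the very first candidate a = 18848997159 already succeeds, exactly as described above.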
Among theb-bit numbers, the most difficult to factor in practice using existing algorithms are thosesemiprimeswhose factors are of similar size. For this reason, these are the integers used in cryptographic applications.
In 2019, a 240-digit (795-bit) number (RSA-240) was factored by a team of researchers includingPaul Zimmermann, utilizing approximately 900 core-years of computing power.[2]These researchers estimated that a 1024-bit RSA modulus would take about 500 times as long.[3]
The largest such semiprime yet factored wasRSA-250, an 829-bit number with 250 decimal digits, in February 2020. The total computation time was roughly 2700 core-years of computing using IntelXeon Gold6130 at 2.1 GHz. Like all recent factorization records, this factorization was completed with a highly optimized implementation of thegeneral number field sieverun on hundreds of machines.
Noalgorithmhas been published that can factor all integers inpolynomial time, that is, that can factor ab-bit numbernin timeO(bk)for some constantk. Neither the existence nor non-existence of such algorithms has been proved, but it is generally suspected that they do not exist.[4][5]
There are published algorithms that are faster than O((1 + ε)^b) for all positive ε, that is, sub-exponential. As of 2022[update], the algorithm with the best theoretical asymptotic running time is the general number field sieve (GNFS), first published in 1993,[6] running on a b-bit number n in time:
For current computers, GNFS is the best published algorithm for largen(more than about 400 bits). For aquantum computer, however,Peter Shordiscovered an algorithm in 1994 that solves it in polynomial time.Shor's algorithmtakes onlyO(b3)time andO(b)space onb-bit number inputs. In 2001, Shor's algorithm was implemented for the first time, by usingNMRtechniques on molecules that provide seven qubits.[7]
In order to talk aboutcomplexity classessuch as P, NP, and co-NP, the problem has to be stated as adecision problem.
Decision problem (Integer factorization) — Given natural numbers n and k, does n have a factor smaller than k besides 1?
It is known to be in bothNPandco-NP, meaning that both "yes" and "no" answers can be verified in polynomial time. An answer of "yes" can be certified by exhibiting a factorizationn=d(n/d)withd≤k. An answer of "no" can be certified by exhibiting the factorization ofninto distinct primes, all larger thank; one can verify their primality using theAKS primality test, and then multiply them to obtainn. Thefundamental theorem of arithmeticguarantees that there is only one possible string of increasing primes that will be accepted, which shows that the problem is in bothUPand co-UP.[8]It is known to be inBQPbecause of Shor's algorithm.
The problem is suspected to be outside all three of the complexity classes P, NP-complete,[9]andco-NP-complete.
It is therefore a candidate for theNP-intermediatecomplexity class.
In contrast, the decision problem "Isna composite number?" (or equivalently: "Isna prime number?") appears to be much easier than the problem of specifying factors ofn. The composite/prime problem can be solved in polynomial time (in the numberbof digits ofn) with theAKS primality test. In addition, there are severalprobabilistic algorithmsthat can test primality very quickly in practice if one is willing to accept a vanishingly small possibility of error. The ease ofprimality testingis a crucial part of theRSAalgorithm, as it is necessary to find large prime numbers to start with.
A special-purpose factoring algorithm's running time depends on the properties of the number to be factored or on one of its unknown factors: size, special form, etc. The parameters which determine the running time vary among algorithms.
An important subclass of special-purpose factoring algorithms is theCategory 1orFirst Categoryalgorithms, whose running time depends on the size of smallest prime factor. Given an integer of unknown form, these methods are usually applied before general-purpose methods to remove small factors.[10]For example, naivetrial divisionis a Category 1 algorithm.
A general-purpose factoring algorithm, also known as aCategory 2,Second Category, orKraitchikfamilyalgorithm,[10]has a running time which depends solely on the size of the integer to be factored. This is the type of algorithm used to factorRSA numbers. Most general-purpose factoring algorithms are based on thecongruence of squaresmethod.
In number theory, there are many integer factoring algorithms that heuristically have expectedrunning time
inlittle-oandL-notation.
Some examples of those algorithms are theelliptic curve methodand thequadratic sieve.
Another such algorithm is theclass group relations methodproposed by Schnorr,[11]Seysen,[12]and Lenstra,[13]which they proved only assuming the unprovedgeneralized Riemann hypothesis.
The Schnorr–Seysen–Lenstra probabilistic algorithm has been rigorously proven by Lenstra and Pomerance[14]to have expected running timeLn[1/2, 1+o(1)]by replacing the GRH assumption with the use of multipliers.
The algorithm uses the class group of positive binary quadratic forms of discriminant Δ, denoted by GΔ. GΔ is the set of triples of integers (a, b, c) in which those integers are relatively prime.
Given an integernthat will be factored, wherenis an odd positive integer greater than a certain constant. In this factoring algorithm the discriminantΔis chosen as a multiple ofn,Δ = −dn, wheredis some positive multiplier. The algorithm expects that for onedthere exist enoughsmoothforms inGΔ. Lenstra and Pomerance show that the choice ofdcan be restricted to a small set to guarantee the smoothness result.
Denote byPΔthe set of all primesqwithKronecker symbol(Δ/q)= 1. By constructing a set ofgeneratorsofGΔand prime formsfqofGΔwithqinPΔa sequence of relations between the set of generators andfqare produced.
The size ofqcan be bounded byc0(log|Δ|)2for some constantc0.
The relation that will be used is a relation between the product of powers that is equal to theneutral elementofGΔ. These relations will be used to construct a so-called ambiguous form ofGΔ, which is an element ofGΔof order dividing 2. By calculating the corresponding factorization ofΔand by taking agcd, this ambiguous form provides the complete prime factorization ofn. This algorithm has these main steps:
Letnbe the number to be factored.
To obtain an algorithm for factoring any positive integer, it is necessary to add a few steps to this algorithm such as trial division, and theJacobi sum test.
The algorithm as stated is aprobabilistic algorithmas it makes random choices. Its expected running time is at mostLn[1/2, 1+o(1)].[14]
|
https://en.wikipedia.org/wiki/Integer_factorization
|
TheTonelli–Shanksalgorithm(referred to by Shanks as the RESSOL algorithm) is used inmodular arithmeticto solve forrin a congruence of the formr2≡n(modp), wherepis aprime: that is, to find a square root ofnmodulop.
Tonelli–Shanks cannot be used for composite moduli: finding square roots modulo composite numbers is a computational problem equivalent tointeger factorization.[1]
An equivalent, but slightly more redundant version of this algorithm was developed byAlberto Tonelli[2][3]in 1891. The version discussed here was developed independently byDaniel Shanksin 1973, who explained:
My tardiness in learning of these historical references was because I had lent Volume 1 ofDickson'sHistoryto a friend and it was never returned.[4]
According to Dickson,[3]Tonelli's algorithm can take square roots ofxmodulo prime powerspλapart from primes.
Given a non-zeron{\displaystyle n}and a primep>2{\displaystyle p>2}(which will always be odd),Euler's criteriontells us thatn{\displaystyle n}has a square root (i.e.,n{\displaystyle n}is aquadratic residue) if and only if:
In contrast, if a numberz{\displaystyle z}has no square root (is a non-residue), Euler's criterion tells us that:
It is not hard to find suchz{\displaystyle z}, because half of the integers between 1 andp−1{\displaystyle p-1}have this property. So we assume that we have access to such a non-residue.
By (normally) dividing by 2 repeatedly, we can writep−1{\displaystyle p-1}asQ2S{\displaystyle Q2^{S}}, whereQ{\displaystyle Q}is odd. Note that if we try
thenR2≡nQ+1=(n)(nQ)(modp){\displaystyle R^{2}\equiv n^{Q+1}=(n)(n^{Q}){\pmod {p}}}. Ift≡nQ≡1(modp){\displaystyle t\equiv n^{Q}\equiv 1{\pmod {p}}}, thenR{\displaystyle R}is a square root ofn{\displaystyle n}. Otherwise, forM=S{\displaystyle M=S}, we haveR{\displaystyle R}andt{\displaystyle t}satisfying:
If, given a choice ofR{\displaystyle R}andt{\displaystyle t}for a particularM{\displaystyle M}satisfying the above (whereR{\displaystyle R}is not a square root ofn{\displaystyle n}), we can easily calculate anotherR{\displaystyle R}andt{\displaystyle t}forM−1{\displaystyle M-1}such that the above relations hold, then we can repeat this untilt{\displaystyle t}becomes a20{\displaystyle 2^{0}}-th root of 1, i.e.,t=1{\displaystyle t=1}. At that pointR{\displaystyle R}is a square root ofn{\displaystyle n}.
We can check whethert{\displaystyle t}is a2M−2{\displaystyle 2^{M-2}}-th root of 1 by squaring itM−2{\displaystyle M-2}times and check whether it is 1. If it is, then we do not need to do anything, as the same choice ofR{\displaystyle R}andt{\displaystyle t}works. But if it is not,t2M−2{\displaystyle t^{2^{M-2}}}must be -1 (because squaring it gives 1, and there can only be two square roots 1 and -1 of 1 modulop{\displaystyle p}).
To find a new pair ofR{\displaystyle R}andt{\displaystyle t}, we can multiplyR{\displaystyle R}by a factorb{\displaystyle b}, to be determined. Thent{\displaystyle t}must be multiplied by a factorb2{\displaystyle b^{2}}to keepR2≡nt(modp){\displaystyle R^{2}\equiv nt{\pmod {p}}}. So, whent2M−2{\displaystyle t^{2^{M-2}}}is -1, we need to find a factorb2{\displaystyle b^{2}}so thattb2{\displaystyle tb^{2}}is a2M−2{\displaystyle 2^{M-2}}-th root of 1, or equivalentlyb2{\displaystyle b^{2}}is a2M−2{\displaystyle 2^{M-2}}-th root of -1.
The trick here is to make use ofz{\displaystyle z}, the known non-residue. The Euler's criterion applied toz{\displaystyle z}shown above says thatzQ{\displaystyle z^{Q}}is a2S−1{\displaystyle 2^{S-1}}-th root of -1. So by squaringzQ{\displaystyle z^{Q}}repeatedly, we have access to a sequence of2i{\displaystyle 2^{i}}-th root of -1. We can select the right one to serve asb{\displaystyle b}. With a little bit of variable maintenance and trivial case compression, the algorithm below emerges naturally.
Operations and comparisons on elements of themultiplicative group of integers modulo pZ/pZ{\displaystyle \mathbb {Z} /p\mathbb {Z} }are implicitly modp.
Inputs: an odd prime p, an integer n that is a quadratic residue modulo p, and a quadratic non-residue z modulo p.
Outputs: an integer r satisfying r² ≡ n (mod p).
Algorithm: as sketched below.
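The original step-by-step listing is condensed here into a Python sketch of the procedure derived above (the variable names Q, S, z, M, c, t, R mirror the text; the incremental search for the non-residue z is one convenient choice, since roughly half of all candidates work).

```python
def tonelli_shanks(n: int, p: int) -> int:
    """Return R with R*R % p == n % p, for an odd prime p and quadratic residue n."""
    n %= p
    if pow(n, (p - 1) // 2, p) != 1:
        raise ValueError("n is not a quadratic residue modulo p")
    Q, S = p - 1, 0
    while Q % 2 == 0:                     # write p - 1 = Q * 2**S with Q odd
        Q //= 2
        S += 1
    z = 2
    while pow(z, (p - 1) // 2, p) != p - 1:
        z += 1                            # find a quadratic non-residue z
    M, c, t, R = S, pow(z, Q, p), pow(n, Q, p), pow(n, (Q + 1) // 2, p)
    while t != 1:
        i, t2i = 0, t                     # least i with t**(2**i) == 1, 0 < i < M
        while t2i != 1:
            t2i = t2i * t2i % p
            i += 1
        b = pow(c, 1 << (M - i - 1), p)   # b*b is a 2**(i-1)-th root of -1
        M, c, t, R = i, b * b % p, t * b * b % p, R * b % p
    return R

# tonelli_shanks(5, 41) -> 28, matching the worked example below (28*28 % 41 == 5).
```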
Once you have solved the congruence withrthe second solution is−r(modp){\displaystyle -r{\pmod {p}}}. If the leastisuch thatt2i=1{\displaystyle t^{2^{i}}=1}isM, then no solution to the congruence exists, i.e.nis not a quadratic residue.
This is most useful whenp≡ 1 (mod 4).
For primes such thatp≡ 3 (mod 4), this problem has possible solutionsr=±np+14(modp){\displaystyle r=\pm n^{\frac {p+1}{4}}{\pmod {p}}}. If these satisfyr2≡n(modp){\displaystyle r^{2}\equiv n{\pmod {p}}}, they are the only solutions. If not,r2≡−n(modp){\displaystyle r^{2}\equiv -n{\pmod {p}}},nis a quadratic non-residue, and there are no solutions.
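For that p ≡ 3 (mod 4) case, the whole computation collapses to a single modular exponentiation; a short sketch (names ours):

```python
def sqrt_mod_p_3mod4(n: int, p: int):
    """Square root mod a prime p with p % 4 == 3; returns None if n is a non-residue."""
    r = pow(n, (p + 1) // 4, p)
    return r if r * r % p == n % p else None
```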
We can show that at the start of each iteration of the loop the following loop invariants hold: {\displaystyle c^{2^{M-1}}=-1}, {\displaystyle t^{2^{M-1}}=1} and {\displaystyle R^{2}=tn} (all modulo p).
Initially these hold with M = S, c = z^Q, t = n^Q and R = n^((Q+1)/2), the first two by Euler's criterion applied to the non-residue z and to n.
At each iteration, with M′, c′, t′, R′ the new values replacing M, c, t, R, the same three relations continue to hold.
Fromt2M−1=1{\displaystyle t^{2^{M-1}}=1}and the test againstt= 1 at the start of the loop, we see that we will always find aniin 0 <i<Msuch thatt2i=1{\displaystyle t^{2^{i}}=1}.Mis strictly smaller on each iteration, and thus the algorithm is guaranteed to halt. When we hit the conditiont= 1 and halt, the last loop invariant implies thatR2=n.
We can alternately express the loop invariants using theorderof the elements:
Each step of the algorithm movestinto a smaller subgroup by measuring the exact order oftand multiplying it by an element of the same order.
Solving the congruence r² ≡ 5 (mod 41). 41 is prime as required and 41 ≡ 1 (mod 4). 5 is a quadratic residue by Euler's criterion: {\displaystyle 5^{\frac {41-1}{2}}=5^{20}=1} (as before, operations in {\displaystyle (\mathbb {Z} /41\mathbb {Z} )^{\times }} are implicitly mod 41).
Indeed, 282≡ 5 (mod 41) and (−28)2≡ 132≡ 5 (mod 41). So the algorithm yields the two solutions to our congruence.
The Tonelli–Shanks algorithm requires (on average over all possible input (quadratic residues and quadratic nonresidues))
modular multiplications, wherem{\displaystyle m}is the number of digits in the binary representation ofp{\displaystyle p}andk{\displaystyle k}is the number of ones in the binary representation ofp{\displaystyle p}. If the required quadratic nonresiduez{\displaystyle z}is to be found by checking if a randomly taken numbery{\displaystyle y}is a quadratic nonresidue, it requires (on average)2{\displaystyle 2}computations of theLegendre symbol.[5]The average of two computations of theLegendre symbolare explained as follows:y{\displaystyle y}is a quadratic residue with chancep+12p=1+1p2{\displaystyle {\tfrac {\tfrac {p+1}{2}}{p}}={\tfrac {1+{\tfrac {1}{p}}}{2}}}, which is smaller than1{\displaystyle 1}but≥12{\displaystyle \geq {\tfrac {1}{2}}}, so we will on average need to check if ay{\displaystyle y}is a quadratic residue two times.
This shows essentially that the Tonelli–Shanks algorithm works very well if the modulusp{\displaystyle p}is random, that is, ifS{\displaystyle S}is not particularly large with respect to the number of digits in the binary representation ofp{\displaystyle p}. As written above,Cipolla's algorithmworks better than Tonelli–Shanks if (and only if)S(S−1)>8m+20{\displaystyle S(S-1)>8m+20}.
However, if one instead uses Sutherland's algorithm to perform the discrete logarithm computation in the 2-Sylow subgroup ofFp∗{\displaystyle \mathbb {F} _{p}^{\ast }}, one may replaceS(S−1){\displaystyle S(S-1)}with an expression that is asymptotically bounded byO(SlogS/loglogS){\displaystyle O(S\log S/\log \log S)}.[6]Explicitly, one computese{\displaystyle e}such thatce≡nQ{\displaystyle c^{e}\equiv n^{Q}}and thenR≡c−e/2n(Q+1)/2{\displaystyle R\equiv c^{-e/2}n^{(Q+1)/2}}satisfiesR2≡n{\displaystyle R^{2}\equiv n}(note thate{\displaystyle e}is a multiple of 2 becausen{\displaystyle n}is a quadratic residue).
The algorithm requires us to find a quadratic nonresiduez{\displaystyle z}. There is no known deterministic algorithm that runs in polynomial time for finding such az{\displaystyle z}. However, if thegeneralized Riemann hypothesisis true, there exists a quadratic nonresiduez<2ln2p{\displaystyle z<2\ln ^{2}{p}},[7]making it possible to check everyz{\displaystyle z}up to that limit and find a suitablez{\displaystyle z}withinpolynomial time. Keep in mind, however, that this is a worst-case scenario; in general,z{\displaystyle z}is found in on average 2 trials as stated above.
The Tonelli–Shanks algorithm can (naturally) be used for any process in which square roots modulo a prime are necessary. For example, it can be used for finding points onelliptic curves. It is also useful for the computations in theRabin cryptosystemand in the sieving step of thequadratic sieve.
Tonelli–Shanks can be generalized to any cyclic group (instead of(Z/pZ)×{\displaystyle (\mathbb {Z} /p\mathbb {Z} )^{\times }}) and tokth roots for arbitrary integerk, in particular to taking thekth root of an element of afinite field.[8]
If many square-roots must be done in the same cyclic group and S is not too large, a table of square-roots of the elements of 2-power order can be prepared in advance and the algorithm simplified and sped up as follows.
According to Dickson's "Theory of Numbers"[3]
A. Tonelli[9]gave an explicit formula for the roots ofx2=c(modpλ){\displaystyle x^{2}=c{\pmod {p^{\lambda }}}}[3]
The Dickson reference shows the following formula for the square root ofx2modpλ{\displaystyle x^{2}{\bmod {p^{\lambda }}}}.
Noting that232mod293≡529{\displaystyle 23^{2}{\bmod {29^{3}}}\equiv 529}and noting thatβ=7⋅292{\displaystyle \beta =7\cdot 29^{2}}then
To take another example:23332mod293≡4142{\displaystyle 2333^{2}{\bmod {29^{3}}}\equiv 4142}and
Dickson also attributes the following equation to Tonelli:
Usingp=23{\displaystyle p=23}and using the modulus ofp3{\displaystyle p^{3}}the math follows:
First, find the modular square root modp{\displaystyle p}which can be done by the regular Tonelli algorithm for one or the other roots:
And applying Tonelli's equation (see above):
Dickson's reference[3]clearly shows that Tonelli's algorithm works on moduli ofpλ{\displaystyle p^{\lambda }}.
|
https://en.wikipedia.org/wiki/Tonelli%E2%80%93Shanks_algorithm
|
Inalgebra, ahomomorphismis astructure-preservingmapbetween twoalgebraic structuresof the same type (such as twogroups, tworings, or twovector spaces). The wordhomomorphismcomes from theAncient Greek language:ὁμός(homos) meaning "same" andμορφή(morphe) meaning "form" or "shape". However, the word was apparently introduced to mathematics due to a (mis)translation of Germanähnlichmeaning "similar" toὁμόςmeaning "same".[1]The term "homomorphism" appeared as early as 1892, when it was attributed to the German mathematicianFelix Klein(1849–1925).[2]
Homomorphisms of vector spaces are also calledlinear maps, and their study is the subject oflinear algebra.
The concept of homomorphism has been generalized, under the name ofmorphism, to many other structures that either do not have an underlying set, or are not algebraic. This generalization is the starting point ofcategory theory.
A homomorphism may also be anisomorphism, anendomorphism, anautomorphism, etc. (see below). Each of those can be defined in a way that may be generalized to any class of morphisms.
A homomorphism is a map between twoalgebraic structuresof the same type (e.g. two groups, two fields, two vector spaces), that preserves theoperationsof the structures. This means amapf:A→B{\displaystyle f:A\to B}between twosetsA{\displaystyle A},B{\displaystyle B}equipped with the same structure such that, if⋅{\displaystyle \cdot }is an operation of the structure (supposed here, for simplification, to be abinary operation), then
f(x⋅y)=f(x)⋅f(y){\displaystyle f(x\cdot y)=f(x)\cdot f(y)}
for every pairx{\displaystyle x},y{\displaystyle y}of elements ofA{\displaystyle A}.[note 1]One often says thatf{\displaystyle f}preserves the operation, or is compatible with it.
Formally, a mapf:A→B{\displaystyle f:A\to B}preserves an operationμ{\displaystyle \mu }ofarityk{\displaystyle k}, defined on bothA{\displaystyle A}andB{\displaystyle B}if
f(μA(a1,…,ak))=μB(f(a1),…,f(ak)),{\displaystyle f(\mu _{A}(a_{1},\ldots ,a_{k}))=\mu _{B}(f(a_{1}),\ldots ,f(a_{k})),}
for all elementsa1,...,ak{\displaystyle a_{1},...,a_{k}}inA{\displaystyle A}.
The operations that must be preserved by a homomorphism include0-ary operations, that is, the constants. In particular, when anidentity elementis required by the type of structure, the identity element of the first structure must be mapped to the corresponding identity element of the second structure.
For example, a semigroup homomorphism is a map between semigroups that preserves the semigroup operation; a monoid homomorphism additionally maps the identity element of the first monoid to that of the second; a group homomorphism preserves the group operation (and hence also identities and inverses); a ring homomorphism preserves both addition and multiplication together with the multiplicative identity; and a linear map preserves vector addition and scalar multiplication.
An algebraic structure may have more than one operation, and a homomorphism is required to preserve each operation. Thus a map that preserves only some of the operations is not a homomorphism of the structure, but only a homomorphism of the substructure obtained by considering only the preserved operations. For example, a map between monoids that preserves the monoid operation and not the identity element is not a monoid homomorphism, but only a semigroup homomorphism.
The notation for the operations does not need to be the same in the source and the target of a homomorphism. For example, thereal numbersform a group for addition, and the positive real numbers form a group for multiplication. Theexponential function
x↦ex{\displaystyle x\mapsto e^{x}}
satisfies
ex+y=exey,{\displaystyle e^{x+y}=e^{x}e^{y},}
and is thus a homomorphism between these two groups. It is even an isomorphism (see below), as itsinverse function, thenatural logarithm, satisfies
ln(xy)=ln(x)+ln(y),{\displaystyle \ln(xy)=\ln(x)+\ln(y),}and is also a group homomorphism.
Thereal numbersare aring, having both addition and multiplication. The set of all 2×2matricesis also a ring, undermatrix additionandmatrix multiplication. If we define a function between these rings as follows:
f(r)=(r00r){\displaystyle f(r)={\begin{pmatrix}r&0\\0&r\end{pmatrix}}}
whereris a real number, thenfis a homomorphism of rings, sincefpreserves both addition:
f(r+s)=(r+s00r+s)=(r00r)+(s00s)=f(r)+f(s){\displaystyle f(r+s)={\begin{pmatrix}r+s&0\\0&r+s\end{pmatrix}}={\begin{pmatrix}r&0\\0&r\end{pmatrix}}+{\begin{pmatrix}s&0\\0&s\end{pmatrix}}=f(r)+f(s)}
and multiplication:
f(rs)=(rs00rs)=(r00r)(s00s)=f(r)f(s).{\displaystyle f(rs)={\begin{pmatrix}rs&0\\0&rs\end{pmatrix}}={\begin{pmatrix}r&0\\0&r\end{pmatrix}}{\begin{pmatrix}s&0\\0&s\end{pmatrix}}=f(r)\,f(s).}
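A quick numeric check of this ring homomorphism, as a sketch rather than anything from the article, representing 2×2 matrices as nested tuples:

```python
def f(r):
    """Map a real number r to the diagonal matrix [[r, 0], [0, r]]."""
    return ((r, 0), (0, r))

def mat_add(A, B):
    return tuple(tuple(a + b for a, b in zip(ra, rb)) for ra, rb in zip(A, B))

def mat_mul(A, B):
    return tuple(
        tuple(sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

r, s = 2.5, -4.0
assert f(r + s) == mat_add(f(r), f(s))   # f preserves addition
assert f(r * s) == mat_mul(f(r), f(s))   # f preserves multiplication
```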
For another example, the nonzerocomplex numbersform agroupunder the operation of multiplication, as do the nonzero real numbers. (Zero must be excluded from both groups since it does not have amultiplicative inverse, which is required for elements of a group.) Define a functionf{\displaystyle f}from the nonzero complex numbers to the nonzero real numbers by
f(z)=|z|.{\displaystyle f(z)=|z|.}
That is,f{\displaystyle f}is theabsolute value(or modulus) of the complex numberz{\displaystyle z}. Thenf{\displaystyle f}is a homomorphism of groups, since it preserves multiplication:
f(z1z2)=|z1z2|=|z1||z2|=f(z1)f(z2).{\displaystyle f(z_{1}z_{2})=|z_{1}z_{2}|=|z_{1}||z_{2}|=f(z_{1})f(z_{2}).}
Note thatfcannot be extended to a homomorphism of rings (from the complex numbers to the real numbers), since it does not preserve addition:
|z1+z2|≠|z1|+|z2|.{\displaystyle |z_{1}+z_{2}|\neq |z_{1}|+|z_{2}|.}
As another example, the diagram shows amonoidhomomorphismf{\displaystyle f}from the monoid(N,+,0){\displaystyle (\mathbb {N} ,+,0)}to the monoid(N,×,1){\displaystyle (\mathbb {N} ,\times ,1)}. Due to the different names of corresponding operations, the structure preservation properties satisfied byf{\displaystyle f}amount tof(x+y)=f(x)×f(y){\displaystyle f(x+y)=f(x)\times f(y)}andf(0)=1{\displaystyle f(0)=1}.
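The diagram itself is not reproduced here; a standard concrete instance of such a monoid homomorphism from (N, +, 0) to (N, ×, 1) is f(x) = 2^x, used below purely for illustration (an assumption, not necessarily the map shown in the figure):

```python
def f(x):
    """A monoid homomorphism from (N, +, 0) to (N, x, 1)."""
    return 2 ** x

for x in range(6):
    for y in range(6):
        assert f(x + y) == f(x) * f(y)   # the operation is preserved
assert f(0) == 1                          # the identity maps to the identity
```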
Acomposition algebraA{\displaystyle A}over a fieldF{\displaystyle F}has aquadratic form, called anorm,N:A→F{\displaystyle N:A\to F}, which is a group homomorphism from themultiplicative groupofA{\displaystyle A}to the multiplicative group ofF{\displaystyle F}.
Several kinds of homomorphisms have a specific name, which is also defined for generalmorphisms.
Anisomorphismbetweenalgebraic structuresof the same type is commonly defined as abijectivehomomorphism.[3]: 134[4]: 28
In the more general context ofcategory theory, an isomorphism is defined as amorphismthat has aninversethat is also a morphism. In the specific case of algebraic structures, the two definitions are equivalent, although they may differ for non-algebraic structures, which have an underlying set.
More precisely, if
f:A→B{\displaystyle f:A\to B}
is a (homo)morphism, it has an inverse if there exists a homomorphism
g:B→A{\displaystyle g:B\to A}
such that
f∘g=IdBandg∘f=IdA.{\displaystyle f\circ g=\operatorname {Id} _{B}\qquad {\text{and}}\qquad g\circ f=\operatorname {Id} _{A}.}
IfA{\displaystyle A}andB{\displaystyle B}have underlying sets, andf:A→B{\displaystyle f:A\to B}has an inverseg{\displaystyle g}, thenf{\displaystyle f}is bijective. In fact,f{\displaystyle f}isinjective, asf(x)=f(y){\displaystyle f(x)=f(y)}impliesx=g(f(x))=g(f(y))=y{\displaystyle x=g(f(x))=g(f(y))=y}, andf{\displaystyle f}issurjective, as, for anyx{\displaystyle x}inB{\displaystyle B}, one hasx=f(g(x)){\displaystyle x=f(g(x))}, andx{\displaystyle x}is the image of an element ofA{\displaystyle A}.
Conversely, iff:A→B{\displaystyle f:A\to B}is a bijective homomorphism between algebraic structures, letg:B→A{\displaystyle g:B\to A}be the map such thatg(y){\displaystyle g(y)}is the unique elementx{\displaystyle x}ofA{\displaystyle A}such thatf(x)=y{\displaystyle f(x)=y}. One hasf∘g=IdBandg∘f=IdA,{\displaystyle f\circ g=\operatorname {Id} _{B}{\text{ and }}g\circ f=\operatorname {Id} _{A},}and it remains only to show thatgis a homomorphism. If∗{\displaystyle *}is a binary operation of the structure, for every pairx{\displaystyle x},y{\displaystyle y}of elements ofB{\displaystyle B}, one has
g(x∗By)=g(f(g(x))∗Bf(g(y)))=g(f(g(x)∗Ag(y)))=g(x)∗Ag(y),{\displaystyle g(x*_{B}y)=g(f(g(x))*_{B}f(g(y)))=g(f(g(x)*_{A}g(y)))=g(x)*_{A}g(y),}
andg{\displaystyle g}is thus compatible with∗.{\displaystyle *.}As the proof is similar for anyarity, this shows thatg{\displaystyle g}is a homomorphism.
This proof does not work for non-algebraic structures. For example, fortopological spaces, a morphism is acontinuous map, and the inverse of a bijective continuous map is not necessarily continuous. An isomorphism of topological spaces, calledhomeomorphismorbicontinuous map, is thus a bijective continuous map, whose inverse is also continuous.
Anendomorphismis a homomorphism whosedomainequals thecodomain, or, more generally, amorphismwhose source is equal to its target.[3]: 135
The endomorphisms of an algebraic structure, or of an object of acategory, form amonoidunder composition.
The endomorphisms of avector spaceor of amoduleform aring. In the case of a vector space or afree moduleof finitedimension, the choice of abasisinduces aring isomorphismbetween the ring of endomorphisms and the ring ofsquare matricesof the same dimension.
Anautomorphismis an endomorphism that is also an isomorphism.[3]: 135
The automorphisms of an algebraic structure or of an object of a category form agroupunder composition, which is called theautomorphism groupof the structure.
Many groups that have received a name are automorphism groups of some algebraic structure. For example, thegeneral linear groupGLn(k){\displaystyle \operatorname {GL} _{n}(k)}is the automorphism group of avector spaceof dimensionn{\displaystyle n}over afieldk{\displaystyle k}.
The automorphism groups offieldswere introduced byÉvariste Galoisfor studying therootsofpolynomials, and are the basis ofGalois theory.
For algebraic structures,monomorphismsare commonly defined asinjectivehomomorphisms.[3]: 134[4]: 29
In the more general context ofcategory theory, a monomorphism is defined as amorphismthat isleft cancelable.[5]This means that a (homo)morphismf:A→B{\displaystyle f:A\to B}is a monomorphism if, for any pairg{\displaystyle g},h{\displaystyle h}of morphisms from any other objectC{\displaystyle C}toA{\displaystyle A}, thenf∘g=f∘h{\displaystyle f\circ g=f\circ h}impliesg=h{\displaystyle g=h}.
These two definitions ofmonomorphismare equivalent for all common algebraic structures. More precisely, they are equivalent forfields, for which every homomorphism is a monomorphism, and forvarietiesofuniversal algebra, that is algebraic structures for which operations and axioms (identities) are defined without any restriction (the fields do not form a variety, as themultiplicative inverseis defined either as aunary operationor as a property of the multiplication, which are, in both cases, defined only for nonzero elements).
In particular, the two definitions of a monomorphism are equivalent forsets,magmas,semigroups,monoids,groups,rings,fields,vector spacesandmodules.
Asplit monomorphismis a homomorphism that has aleft inverseand thus it is itself a right inverse of that other homomorphism. That is, a homomorphismf:A→B{\displaystyle f\colon A\to B}is a split monomorphism if there exists a homomorphismg:B→A{\displaystyle g\colon B\to A}such thatg∘f=IdA.{\displaystyle g\circ f=\operatorname {Id} _{A}.}A split monomorphism is always a monomorphism, for both meanings ofmonomorphism. For sets and vector spaces, every monomorphism is a split monomorphism, but this property does not hold for most common algebraic structures.
An injective homomorphism is left cancelable: Iff∘g=f∘h,{\displaystyle f\circ g=f\circ h,}one hasf(g(x))=f(h(x)){\displaystyle f(g(x))=f(h(x))}for everyx{\displaystyle x}inC{\displaystyle C}, the common source ofg{\displaystyle g}andh{\displaystyle h}. Iff{\displaystyle f}is injective, theng(x)=h(x){\displaystyle g(x)=h(x)}, and thusg=h{\displaystyle g=h}. This proof works not only for algebraic structures, but also for anycategorywhose objects are sets and arrows are maps between these sets. For example, an injective continuous map is a monomorphism in the category oftopological spaces.
For proving that, conversely, a left cancelable homomorphism is injective, it is useful to consider afree objectonx{\displaystyle x}. Given avarietyof algebraic structures, a free object onx{\displaystyle x}is a pair consisting of an algebraic structureL{\displaystyle L}of this variety and an elementx{\displaystyle x}ofL{\displaystyle L}satisfying the followinguniversal property: for every structureS{\displaystyle S}of the variety, and every elements{\displaystyle s}ofS{\displaystyle S}, there is a unique homomorphismf:L→S{\displaystyle f:L\to S}such thatf(x)=s{\displaystyle f(x)=s}. For example, for sets, the free object onx{\displaystyle x}is simply{x}{\displaystyle \{x\}}; forsemigroups, the free object onx{\displaystyle x}is{x,x2,…,xn,…},{\displaystyle \{x,x^{2},\ldots ,x^{n},\ldots \},}which, as a semigroup, is isomorphic to the additive semigroup of the positive integers; formonoids, the free object onx{\displaystyle x}is{1,x,x2,…,xn,…},{\displaystyle \{1,x,x^{2},\ldots ,x^{n},\ldots \},}which, as a monoid, is isomorphic to the additive monoid of the nonnegative integers; forgroups, the free object onx{\displaystyle x}is theinfinite cyclic group{…,x−n,…,x−1,1,x,x2,…,xn,…},{\displaystyle \{\ldots ,x^{-n},\ldots ,x^{-1},1,x,x^{2},\ldots ,x^{n},\ldots \},}which, as a group, is isomorphic to the additive group of the integers; forrings, the free object onx{\displaystyle x}is thepolynomial ringZ[x];{\displaystyle \mathbb {Z} [x];}forvector spacesormodules, the free object onx{\displaystyle x}is the vector space or free module that hasx{\displaystyle x}as a basis.
If a free object overx{\displaystyle x}exists, then every left cancelable homomorphism is injective: letf:A→B{\displaystyle f\colon A\to B}be a left cancelable homomorphism, anda{\displaystyle a}andb{\displaystyle b}be two elements ofA{\displaystyle A}such thatf(a)=f(b){\displaystyle f(a)=f(b)}. By definition of the free objectF{\displaystyle F}, there exist homomorphismsg{\displaystyle g}andh{\displaystyle h}fromF{\displaystyle F}toA{\displaystyle A}such thatg(x)=a{\displaystyle g(x)=a}andh(x)=b{\displaystyle h(x)=b}. Asf(g(x))=f(h(x)){\displaystyle f(g(x))=f(h(x))}, one hasf∘g=f∘h,{\displaystyle f\circ g=f\circ h,}by the uniqueness in the definition of a universal property. Asf{\displaystyle f}is left cancelable, one hasg=h{\displaystyle g=h}, and thusa=b{\displaystyle a=b}. Therefore,f{\displaystyle f}is injective.
Existence of a free object onx{\displaystyle x}for avariety(see alsoFree object § Existence): For building a free object overx{\displaystyle x}, consider the setW{\displaystyle W}of thewell-formed formulasbuilt up fromx{\displaystyle x}and the operations of the structure. Two such formulas are said equivalent if one may pass from one to the other by applying the axioms (identitiesof the structure). This defines anequivalence relation, if the identities are not subject to conditions, that is if one works with a variety. Then the operations of the variety are well defined on the set ofequivalence classesofW{\displaystyle W}for this relation. It is straightforward to show that the resulting object is a free object onx{\displaystyle x}.
Inalgebra,epimorphismsare often defined assurjectivehomomorphisms.[3]: 134[4]: 43On the other hand, incategory theory,epimorphismsare defined asright cancelablemorphisms.[5]This means that a (homo)morphismf:A→B{\displaystyle f:A\to B}is an epimorphism if, for any pairg{\displaystyle g},h{\displaystyle h}of morphisms fromB{\displaystyle B}to any other objectC{\displaystyle C}, the equalityg∘f=h∘f{\displaystyle g\circ f=h\circ f}impliesg=h{\displaystyle g=h}.
A surjective homomorphism is always right cancelable, but the converse is not always true for algebraic structures. However, the two definitions ofepimorphismare equivalent forsets,vector spaces,abelian groups,modules(see below for a proof), andgroups.[6]The importance of these structures in all mathematics, especially inlinear algebraandhomological algebra, may explain the coexistence of two non-equivalent definitions.
Algebraic structures for which there exist non-surjective epimorphisms includesemigroupsandrings. The most basic example is the inclusion ofintegersintorational numbers, which is a homomorphism of rings and of multiplicative semigroups. For both structures it is a monomorphism and a non-surjective epimorphism, but not an isomorphism.[5][7]
A wide generalization of this example is thelocalization of a ringby a multiplicative set. Every localization is a ring epimorphism, which is not, in general, surjective. As localizations are fundamental incommutative algebraandalgebraic geometry, this may explain why in these areas, the definition of epimorphisms as right cancelable homomorphisms is generally preferred.
Asplit epimorphismis a homomorphism that has aright inverseand thus it is itself a left inverse of that other homomorphism. That is, a homomorphismf:A→B{\displaystyle f\colon A\to B}is a split epimorphism if there exists a homomorphismg:B→A{\displaystyle g\colon B\to A}such thatf∘g=IdB.{\displaystyle f\circ g=\operatorname {Id} _{B}.}A split epimorphism is always an epimorphism, for both meanings ofepimorphism. For sets and vector spaces, every epimorphism is a split epimorphism, but this property does not hold for most common algebraic structures.
In summary, one has
split epimorphism⟹epimorphism (surjective)⟹epimorphism (right cancelable);{\displaystyle {\text{split epimorphism}}\implies {\text{epimorphism (surjective)}}\implies {\text{epimorphism (right cancelable)}};}
the last implication is an equivalence for sets, vector spaces, modules, abelian groups, and groups; the first implication is an equivalence for sets and vector spaces.
Letf:A→B{\displaystyle f\colon A\to B}be a homomorphism. We want to prove that if it is not surjective, it is not right cancelable.
In the case of sets, letb{\displaystyle b}be an element ofB{\displaystyle B}that does not belong tof(A){\displaystyle f(A)}, and defineg,h:B→B{\displaystyle g,h\colon B\to B}such thatg{\displaystyle g}is theidentity function, and thath(x)=x{\displaystyle h(x)=x}for everyx∈B,{\displaystyle x\in B,}except thath(b){\displaystyle h(b)}is any other element ofB{\displaystyle B}. Clearlyf{\displaystyle f}is not right cancelable, asg≠h{\displaystyle g\neq h}andg∘f=h∘f.{\displaystyle g\circ f=h\circ f.}
In the case of vector spaces, abelian groups and modules, the proof relies on the existence ofcokernelsand on the fact that thezero mapsare homomorphisms: letC{\displaystyle C}be the cokernel off{\displaystyle f}, andg:B→C{\displaystyle g\colon B\to C}be the canonical map, such thatg(f(A))=0{\displaystyle g(f(A))=0}. Leth:B→C{\displaystyle h\colon B\to C}be the zero map. Iff{\displaystyle f}is not surjective,C≠0{\displaystyle C\neq 0}, and thusg≠h{\displaystyle g\neq h}(one is a zero map, while the other is not). Thusf{\displaystyle f}is not cancelable, asg∘f=h∘f{\displaystyle g\circ f=h\circ f}(both are the zero map fromA{\displaystyle A}toC{\displaystyle C}).
Any homomorphismf:X→Y{\displaystyle f:X\to Y}defines anequivalence relation∼{\displaystyle \sim }onX{\displaystyle X}bya∼b{\displaystyle a\sim b}if and only iff(a)=f(b){\displaystyle f(a)=f(b)}. The relation∼{\displaystyle \sim }is called thekerneloff{\displaystyle f}. It is acongruence relationonX{\displaystyle X}. Thequotient setX/∼{\displaystyle X/{\sim }}can then be given a structure of the same type asX{\displaystyle X}, in a natural way, by defining the operations of the quotient set by[x]∗[y]=[x∗y]{\displaystyle [x]\ast [y]=[x\ast y]}, for each operation∗{\displaystyle \ast }ofX{\displaystyle X}. In that case the image ofX{\displaystyle X}inY{\displaystyle Y}under the homomorphismf{\displaystyle f}is necessarilyisomorphictoX/∼{\displaystyle X/\!\sim }; this fact is one of theisomorphism theorems.
When the algebraic structure is agroupfor some operation, theequivalence classK{\displaystyle K}of theidentity elementof this operation suffices to characterize the equivalence relation. In this case, the quotient by the equivalence relation is denoted byX/K{\displaystyle X/K}(usually read as "X{\displaystyle X}modK{\displaystyle K}"). Also in this case, it isK{\displaystyle K}, rather than∼{\displaystyle \sim }, that is called thekerneloff{\displaystyle f}. The kernels of homomorphisms of a given type of algebraic structure are naturally equipped with some structure. This structure type of the kernels is the same as the considered structure, in the case ofabelian groups,vector spacesandmodules, but is different and has received a specific name in other cases, such asnormal subgroupfor kernels ofgroup homomorphismsandidealsfor kernels ofring homomorphisms(in the case of non-commutative rings, the kernels are thetwo-sided ideals).
Inmodel theory, the notion of an algebraic structure is generalized to structures involving both operations and relations. LetLbe a signature consisting of function and relation symbols, andA,Bbe twoL-structures. Then ahomomorphismfromAtoBis a mappinghfrom the domain ofAto the domain ofBsuch that
In the special case with just one binary relation, we obtain the notion of agraph homomorphism.[8]
Homomorphisms are also used in the study offormal languages[9]and are often briefly referred to asmorphisms.[10]Given alphabetsΣ1{\displaystyle \Sigma _{1}}andΣ2{\displaystyle \Sigma _{2}}, a functionh:Σ1∗→Σ2∗{\displaystyle h\colon \Sigma _{1}^{*}\to \Sigma _{2}^{*}}such thath(uv)=h(u)h(v){\displaystyle h(uv)=h(u)h(v)}for allu,v∈Σ1∗{\displaystyle u,v\in \Sigma _{1}^{*}}is called ahomomorphismonΣ1∗{\displaystyle \Sigma _{1}^{*}}.[note 2]Ifh{\displaystyle h}is a homomorphism onΣ1∗{\displaystyle \Sigma _{1}^{*}}andε{\displaystyle \varepsilon }denotes the empty string, thenh{\displaystyle h}is called anε{\displaystyle \varepsilon }-free homomorphismwhenh(x)≠ε{\displaystyle h(x)\neq \varepsilon }for allx≠ε{\displaystyle x\neq \varepsilon }inΣ1∗{\displaystyle \Sigma _{1}^{*}}.
A homomorphismh:Σ1∗→Σ2∗{\displaystyle h\colon \Sigma _{1}^{*}\to \Sigma _{2}^{*}}onΣ1∗{\displaystyle \Sigma _{1}^{*}}that satisfies|h(a)|=k{\displaystyle |h(a)|=k}for alla∈Σ1{\displaystyle a\in \Sigma _{1}}is called ak{\displaystyle k}-uniformhomomorphism.[11]If|h(a)|=1{\displaystyle |h(a)|=1}for alla∈Σ1{\displaystyle a\in \Sigma _{1}}(that is,h{\displaystyle h}is 1-uniform), thenh{\displaystyle h}is also called acodingor aprojection.[citation needed]
The setΣ∗{\displaystyle \Sigma ^{*}}of words formed from the alphabetΣ{\displaystyle \Sigma }may be thought of as thefree monoidgenerated byΣ{\displaystyle \Sigma }.Here the monoid operation isconcatenationand the identity element is the empty word. From this perspective, a language homomorphism is precisely a monoid homomorphism.[note 3]
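As a small sketch (the alphabet and letter images below are made up for illustration), a homomorphism on Σ1* is determined by its values on single letters and extended by concatenation, which is exactly the free-monoid picture described above:

```python
letter_map = {"a": "01", "b": "1", "c": ""}    # h on the letters of Σ1 = {a, b, c}

def h(word: str) -> str:
    """Extend the letter map to all of Σ1* by h(uv) = h(u)h(v)."""
    return "".join(letter_map[ch] for ch in word)

u, v = "ab", "ca"
assert h(u + v) == h(u) + h(v)   # the homomorphism property
assert h("") == ""               # the empty word maps to the empty word
# This h is not ε-free, because h("c") is the empty string.
```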
|
https://en.wikipedia.org/wiki/Homomorphism
|
Inmathematics, given twogroups, (G,∗) and (H, ·), agroup homomorphismfrom (G,∗) to (H, ·) is afunctionh:G→Hsuch that for alluandvinGit holds thath(u∗v) =h(u) ·h(v),
where the group operation on the left side of the equation is that ofGand on the right side that ofH.
From this property, one can deduce thathmaps theidentity elementeGofGto the identity elementeHofH, that is,h(eG) =eH,
and it also maps inverses to inverses in the sense thath(u−1) =h(u)−1.
Hence one can say thath"is compatible with the group structure".
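A minimal sketch (not part of the article) checking these properties for the reduction map h(x) = x mod n, a group homomorphism from (Z, +) to (Z/nZ, +):

```python
n = 12
h = lambda x: x % n

for u in range(-20, 20):
    for v in range(-20, 20):
        assert h(u + v) == (h(u) + h(v)) % n          # h(u+v) = h(u) + h(v) in additive notation
assert h(0) == 0                                      # identity maps to identity
assert all(h(-u) == (-h(u)) % n for u in range(-20, 20))  # inverses map to inverses
```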
In areas of mathematics where one considers groups endowed with additional structure, ahomomorphismsometimes means a map which respects not only the group structure (as above) but also the extra structure. For example, a homomorphism oftopological groupsis often required to be continuous.
LeteH{\displaystyle e_{H}}be the identity element of the group (H, ·) andu∈G{\displaystyle u\in G}; thenh(u) =h(u∗eG) =h(u) ·h(eG).
Now, by multiplying by the inverse ofh(u){\displaystyle h(u)}(or applying the cancellation rule), we obtaineH=h(eG).
Similarly,eH=h(eG) =h(u∗u−1) =h(u) ·h(u−1).
Therefore, by the uniqueness of the inverse,h(u−1)=h(u)−1{\displaystyle h(u^{-1})=h(u)^{-1}}.
We define thekernelof hto be the set of elements inGwhich are mapped to the identity inH, ker(h) := {u∈G:h(u) =eH},
and theimageof hto be im(h) :=h(G) = {h(u) :u∈G}.
The kernel and image of a homomorphism can be interpreted as measuring how close it is to being an isomorphism. Thefirst isomorphism theoremstates that the image of a group homomorphism,h(G) is isomorphic to the quotient groupG/kerh.
The kernel of h is anormal subgroupofG: assumeu∈ker(h){\displaystyle u\in \operatorname {ker} (h)}and letg∈Gbe arbitrary; thenh(g−1∘u∘g) =h(g)−1·h(u) ·h(g) =h(g)−1·eH·h(g) =eH, sog−1∘u∘g∈ker(h){\displaystyle g^{-1}\circ u\circ g\in \operatorname {ker} (h)}.
The image of h is asubgroupofH.
The homomorphismhis agroup monomorphism; i.e.,his injective (one-to-one) if and only ifker(h) = {eG}. Injectivity directly gives that the identity is the only element of the kernel; conversely, if the kernel contains onlyeGandh(g1) =h(g2), thenh(g1∘g2−1) =h(g1) ·h(g2)−1=eH, sog1∘g2−1∈ker(h) = {eG} and henceg1=g2.
forms a group under matrix multiplication. For any complex numberuthe functionfu:G→C*defined by
Ifh:G→Handk:H→Kare group homomorphisms, then so isk∘h:G→K. This shows that the class of all groups, together with group homomorphisms as morphisms, forms acategory(specifically thecategory of groups).
IfGandHareabelian(i.e., commutative) groups, then the setHom(G,H)of all group homomorphisms fromGtoHis itself an abelian group: the sumh+kof two homomorphisms is defined by (h+k)(u) =h(u) +k(u) for alluinG.
The commutativity ofHis needed to prove thath+kis again a group homomorphism.
The addition of homomorphisms is compatible with the composition of homomorphisms in the following sense: iffis inHom(K,G),h,kare elements ofHom(G,H), andgis inHom(H,L), then (h+k) ∘f= (h∘f) + (k∘f) andg∘ (h+k) = (g∘h) + (g∘k).
Since the composition isassociative, this shows that the set End(G) of all endomorphisms of an abelian group forms aring, theendomorphism ringofG. For example, the endomorphism ring of the abelian group consisting of thedirect sumofmcopies ofZ/nZis isomorphic to the ring ofm-by-mmatriceswith entries inZ/nZ. The above compatibility also shows that the category of all abelian groups with group homomorphisms forms apreadditive category; the existence of direct sums and well-behaved kernels makes this category the prototypical example of anabelian category.
|
https://en.wikipedia.org/wiki/Group_homomorphism
|
In cryptography, ablock cipher mode of operationis an algorithm that uses ablock cipherto provideinformation securitysuch asconfidentialityorauthenticity.[1]A block cipher by itself is only suitable for the secure cryptographic transformation (encryption or decryption) of one fixed-length group ofbitscalled ablock.[2]A mode of operation describes how to repeatedly apply a cipher's single-block operation to securely transform amounts of data larger than a block.[3][4][5]
Most modes require a unique binary sequence, often called aninitialization vector(IV), for each encryption operation. The IV must be non-repeating, and for some modes must also be random. The initialization vector is used to ensure that distinctciphertextsare produced even when the sameplaintextis encrypted multiple times independently with the samekey.[6]Block ciphers may be capable of operating on more than oneblock size, but during transformation the block size is always fixed. Block cipher modes operate on whole blocks and require that the final data fragment bepaddedto a full block if it is smaller than the current block size.[2]There are, however, modes that do not require padding because they effectively use a block cipher as astream cipher.
Historically, encryption modes have been studied extensively in regard to their error propagation properties under various scenarios of data modification. Later development regardedintegrity protectionas an entirely separate cryptographic goal. Some modern modes of operation combineconfidentialityandauthenticityin an efficient way, and are known asauthenticated encryptionmodes.[7]
The earliest modes of operation, ECB, CBC, OFB, and CFB (see below for all), date back to 1981 and were specified inFIPS 81,DES Modes of Operation. In 2001, the USNational Institute of Standards and Technology(NIST) revised its list of approved modes of operation by includingAESas a block cipher and adding CTR mode inSP800-38A,Recommendation for Block Cipher Modes of Operation. Finally, in January, 2010, NIST addedXTS-AESinSP800-38E,Recommendation for Block Cipher Modes of Operation: The XTS-AES Mode for Confidentiality on Storage Devices. Other confidentiality modes exist which have not been approved by NIST. For example, CTS isciphertext stealingmode and available in many popular cryptographic libraries.
The block cipher modes ECB, CBC, OFB, CFB, CTR, andXTSprovide confidentiality, but they do not protect against accidental modification or malicious tampering. Modification or tampering can be detected with a separatemessage authentication codesuch asCBC-MAC, or adigital signature. The cryptographic community recognized the need for dedicated integrity assurances and NIST responded with HMAC, CMAC, and GMAC.HMACwas approved in 2002 asFIPS 198,The Keyed-Hash Message Authentication Code (HMAC),CMACwas released in 2005 underSP800-38B,Recommendation for Block Cipher Modes of Operation: The CMAC Mode for Authentication, andGMACwas formalized in 2007 underSP800-38D,Recommendation for Block Cipher Modes of Operation: Galois/Counter Mode (GCM) and GMAC.
The cryptographic community observed that compositing (combining) a confidentiality mode with an authenticity mode could be difficult and error prone. They therefore began to supply modes which combined confidentiality and data integrity into a single cryptographic primitive (an encryption algorithm). These combined modes are referred to asauthenticated encryption, AE or "authenc". Examples of AE modes areCCM(SP800-38C),GCM(SP800-38D),CWC,EAX,IAPM, andOCB.
Modes of operation are defined by a number of national and internationally recognized standards bodies. Notable standards organizations includeNIST,ISO(with ISO/IEC 10116[5]), theIEC, theIEEE,ANSI, and theIETF.
An initialization vector (IV) or starting variable (SV)[5]is a block of bits that is used by several modes to randomize the encryption and hence to produce distinct ciphertexts even if the same plaintext is encrypted multiple times, without the need for a slower re-keying process.[citation needed]
An initialization vector has different security requirements than a key, so the IV usually does not need to be secret. For most block cipher modes it is important that an initialization vector is never reused under the same key, i.e. it must be acryptographic nonce. Many block cipher modes have stronger requirements, such as requiring the IV to berandomorpseudorandom. Some block ciphers have particular problems with certain initialization vectors, such as an all-zero IV generating no encryption (for some keys).
It is recommended to review the IV requirements for the particular block cipher mode in the relevant specification, for exampleSP800-38A.
For CBC and CFB, reusing an IV leaks some information about the first block of plaintext, and about any common prefix shared by the two messages.
For OFB and CTR, reusing an IV causes key bitstream re-use, which breaks security.[8]This can be seen because both modes effectively create a bitstream that is XORed with the plaintext, and this bitstream is dependent on the key and IV only.
In CBC mode, the IV must be unpredictable (random or pseudorandom) at encryption time; in particular, the (previously) common practice of re-using the last ciphertext block of a message as the IV for the next message is insecure (for example, this method was used by SSL 2.0). If an attacker knows the IV (or the previous block of ciphertext) before the next plaintext is specified, they can check their guess about plaintext of some block that was encrypted with the same key before (this is known as the TLS CBC IV attack).[9]
For some keys, an all-zero initialization vector may cause some block cipher modes (CFB-8, OFB-8) to get the internal state stuck at all-zero. For CFB-8, an all-zero IV and an all-zero plaintext cause 1/256 of keys to generate no encryption: the plaintext is returned as the ciphertext.[10]For OFB-8, using an all-zero initialization vector will generate no encryption for 1/256 of keys.[11]OFB-8 encryption returns the plaintext unencrypted for affected keys.
Some modes (such as AES-SIV and AES-GCM-SIV) are built to be more nonce-misuse resistant, i.e. resilient to scenarios in which the randomness generation is faulty or under the control of the attacker.
Ablock cipherworks on units of a fixedsize(known as ablock size), but messages come in a variety of lengths. So some modes (namelyECBandCBC) require that the final block be padded before encryption. Severalpaddingschemes exist. The simplest is to addnull bytesto theplaintextto bring its length up to a multiple of the block size, but care must be taken that the original length of the plaintext can be recovered; this is trivial, for example, if the plaintext is aCstylestringwhich contains no null bytes except at the end. Slightly more complex is the originalDESmethod, which is to add a single onebit, followed by enough zerobitsto fill out the block; if the message ends on a block boundary, a whole padding block will be added. Most sophisticated are CBC-specific schemes such asciphertext stealingorresidual block termination, which do not cause any extra ciphertext, at the expense of some additional complexity.SchneierandFergusonsuggest two possibilities, both simple: append a byte with value 128 (hex 80), followed by as many zero bytes as needed to fill the last block, or pad the last block withnbytes all with valuen.
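As a sketch of the two simple rules attributed to Schneier and Ferguson above (shown here for a 16-byte block size; the helper names are illustrative):

```python
BLOCK = 16

def pad_80(data: bytes) -> bytes:
    """Append 0x80, then as many zero bytes as needed to reach a block boundary."""
    n = BLOCK - (len(data) % BLOCK)      # at least one byte is always added
    return data + b"\x80" + b"\x00" * (n - 1)

def pad_n(data: bytes) -> bytes:
    """Append n bytes, each of value n (the PKCS#7-style rule)."""
    n = BLOCK - (len(data) % BLOCK)
    return data + bytes([n]) * n

assert len(pad_80(b"hello")) % BLOCK == 0
assert pad_n(b"hello")[-1] == 11         # 16 - 5 = 11 bytes of value 11
```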
CFB, OFB and CTR modes do not require any special measures to handle messages whose lengths are not multiples of the block size, since the modes work byXORingthe plaintext with the output of the block cipher. The last partial block of plaintext is XORed with the first few bytes of the lastkeystreamblock, producing a final ciphertext block that is the same size as the final partial plaintext block. This characteristic of stream ciphers makes them suitable for applications that require the encrypted ciphertext data to be the same size as the original plaintext data, and for applications that transmit data in streaming form where it is inconvenient to add padding bytes.
A number of modes of operation have been designed to combine secrecy and authentication in a single cryptographic primitive. Examples of such modes are[12]integrity-aware cipher block chaining (IACBC)[clarification needed], integrity-aware parallelizable mode (IAPM),[13]OCB,EAX,CWC,CCM, andGCM.Authenticated encryptionmodes are classified as single-pass modes or double-pass modes.
In addition, some modes also allow for the authentication of unencrypted associated data, and these are calledAEAD(authenticated encryption with associated data) schemes. For example, EAX mode is a double-pass AEAD scheme while OCB mode is single-pass.
Galois/counter mode (GCM) combines the well-known counter mode of encryption with the new Galois mode of authentication. The key feature is the ease of parallel computation of the Galois field multiplication used for authentication. This feature permits higher throughput than chaining-based encryption modes such as CBC.
GCM is defined for block ciphers with a block size of 128 bits. Galois message authentication code (GMAC) is an authentication-only variant of the GCM which can form an incremental message authentication code. Both GCM and GMAC can accept initialization vectors of arbitrary length. GCM can take full advantage of parallel processing and implementing GCM can make efficient use of aninstruction pipelineor a hardware pipeline. The CBC mode of operation incurspipeline stallsthat hamper its efficiency and performance.
Like in CTR, blocks are numbered sequentially, and then this block number is combined with an IV and encrypted with a block cipherE, usually AES. The result of this encryption is then XORed with the plaintext to produce the ciphertext. Like all counter modes, this is essentially a stream cipher, and so it is essential that a different IV is used for each stream that is encrypted.
The ciphertext blocks are considered coefficients of apolynomialwhich is then evaluated at a key-dependent pointH, usingfinite field arithmetic. The result is then encrypted, producing anauthentication tagthat can be used to verify the integrity of the data. The encrypted text then contains the IV, ciphertext, and authentication tag.
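A usage sketch of AES-GCM with the third-party Python "cryptography" package (its availability is an assumption here); the ciphertext returned by this API has the 16-byte authentication tag appended:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)
nonce = os.urandom(12)                      # 96-bit IV; must never repeat under one key
aad = b"header"                             # authenticated but not encrypted

ct = AESGCM(key).encrypt(nonce, b"secret message", aad)
pt = AESGCM(key).decrypt(nonce, ct, aad)    # raises InvalidTag if anything was altered
assert pt == b"secret message"
```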
Counter with cipher block chaining message authentication code(counter with CBC-MAC; CCM) is anauthenticated encryptionalgorithm designed to provide both authentication and confidentiality. CCM mode is only defined for block ciphers with a block length of 128 bits.[14][15]
Synthetic initialization vector (SIV) is a nonce-misuse resistant block cipher mode.
SIV synthesizes an internal IV using the pseudorandom function S2V. S2V is a keyed hash based on CMAC, and the input to the function is:
SIV encrypts the S2V output and the plaintext using AES-CTR, keyed with the encryption key (K2).
SIV can support external nonce-based authenticated encryption, in which case one of the authenticated data fields is utilized for this purpose. RFC5297[16]specifies that for interoperability purposes the last authenticated data field should be used as the external nonce.
Owing to the use of two keys, the authentication key K1and encryption key K2, naming schemes for SIV AEAD-variants may lead to some confusion; for example AEAD_AES_SIV_CMAC_256 refers to AES-SIV with two AES-128 keys andnotAES-256.
AES-GCM-SIVis a mode of operation for the Advanced Encryption Standard which provides similar performance to Galois/counter mode as well as misuse resistance in the event of the reuse of a cryptographic nonce. The construction is defined in RFC 8452.[17]
AES-GCM-SIV synthesizes the internal IV. It derives a hash of the additional authenticated data and plaintext using the POLYVAL Galois hash function. The hash is then encrypted with an AES key and used as the authentication tag and the AES-CTR initialization vector.
AES-GCM-SIVis an improvement over the very similarly named algorithmGCM-SIV, with a few very small changes (e.g. how AES-CTR is initialized), but which yields practical benefits to its security: "This addition allows for encrypting up to 2^50 messages with the same key, compared to the significant limitation of only 2^32 messages that were allowed with GCM-SIV."[18]
Many modes of operation have been defined. Some of these are described below. The purpose of cipher modes is to mask patterns which exist in encrypted data, as illustrated in the description of theweakness of ECB.
Different cipher modes mask patterns by cascading outputs from the cipher block or other globally deterministic variables into the subsequent cipher block. The inputs of the listed modes are summarized in the following table:
Note:g(i) is any deterministic function, often theidentity function.
The simplest of the encryption modes is theelectronic codebook(ECB) mode (named after conventional physicalcodebooks[19]). The message is divided into blocks, and each block is encrypted separately. ECB is not recommended for use in cryptographic protocols: the disadvantage of this method is a lack ofdiffusion, wherein it fails to hide data patterns when it encrypts identicalplaintextblocks into identicalciphertextblocks.[20][21][22]
A striking example of the degree to which ECB can leave plaintext data patterns in the ciphertext can be seen when ECB mode is used to encrypt abitmap imagewhich contains large areas of uniform color. While the color of each individualpixelhas supposedly been encrypted, the overall image may still be discerned, as the pattern of identically colored pixels in the original remains visible in the encrypted version.
ECB mode can also make protocols without integrity protection even more susceptible toreplay attacks, since each block gets decrypted in exactly the same way.[citation needed]
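The pattern leak is easy to demonstrate; the following sketch (assuming the Python "cryptography" package) encrypts two identical 16-byte blocks in ECB mode and observes identical ciphertext blocks:

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)
enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()

plaintext = b"SIXTEEN BYTE BLK" * 2          # two identical blocks
ciphertext = enc.update(plaintext) + enc.finalize()
assert ciphertext[:16] == ciphertext[16:32]  # the repetition survives encryption
```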
Ehrsam, Meyer, Smith and Tuchman invented the cipher block chaining (CBC) mode of operation in 1976. In CBC mode, each block of plaintext is XORed with the previous ciphertext block before being encrypted. This way, each ciphertext block depends on all plaintext blocks processed up to that point. To make each message unique, an initialization vector must be used in the first block.
If the first block has index 1, the mathematical formula for CBC encryption is Ci = EK(Pi ⊕ Ci−1) with C0 = IV,
while the mathematical formula for CBC decryption is Pi = DK(Ci) ⊕ Ci−1 with C0 = IV.
CBC has been the most commonly used mode of operation. Its main drawbacks are that encryption is sequential (i.e., it cannot be parallelized), and that the message must be padded to a multiple of the cipher block size. One way to handle this last issue is through the method known as ciphertext stealing. Note that a one-bit change in a plaintext or initialization vector (IV) affects all following ciphertext blocks.
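A short CBC usage sketch with a random IV (assuming the Python "cryptography" package; the plaintext is already a multiple of the 16-byte block size, so no padding step is shown):

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key, iv = os.urandom(16), os.urandom(16)
plaintext = b"exactly 32 bytes of plaintext!!!"      # two 16-byte blocks

enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
ciphertext = enc.update(plaintext) + enc.finalize()

dec = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
assert dec.update(ciphertext) + dec.finalize() == plaintext
```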
Decrypting with the incorrect IV causes the first block of plaintext to be corrupt but subsequent plaintext blocks will be correct. This is because each block is XORed with the ciphertext of the previous block, not the plaintext, so one does not need to decrypt the previous block before using it as the IV for the decryption of the current one. This means that a plaintext block can be recovered from two adjacent blocks of ciphertext. As a consequence, decryptioncanbe parallelized. Note that a one-bit change to the ciphertext causes complete corruption of the corresponding block of plaintext, and inverts the corresponding bit in the following block of plaintext, but the rest of the blocks remain intact. This peculiarity is exploited in different padding oracle attacks, such as POODLE.
Explicit initialization vectorstake advantage of this property by prepending a single random block to the plaintext. Encryption is done as normal, except the IV does not need to be communicated to the decryption routine. Whatever IV decryption uses, only the random block is "corrupted". It can be safely discarded and the rest of the decryption is the original plaintext.
Thepropagating cipher block chaining[23]orplaintext cipher-block chaining[24]mode was designed to cause small changes in the ciphertext to propagate indefinitely when decrypting, as well as when encrypting. In PCBC mode, each block of plaintext is XORed with both the previous plaintext block and the previous ciphertext block before being encrypted. Like with CBC mode, an initialization vector is used in the first block.
Unlike CBC, decrypting PCBC with the incorrect IV (initialization vector) causes all blocks of plaintext to be corrupt.
Encryption and decryption algorithms are as follows:
PCBC is used inKerberos v4andWASTE, most notably, but otherwise is not common.
On a message encrypted in PCBC mode, if two adjacent ciphertext blocks are exchanged, this does not affect the decryption of subsequent blocks.[25]For this reason, PCBC is not used in Kerberos v5.
Thecipher feedback(CFB) mode, in its simplest form, uses the entire output of the block cipher. In this variation, it is very similar to CBC, turning a block cipher into a self-synchronizingstream cipher. CFB decryption in this variation is almost identical to CBC encryption performed in reverse: encryption is Ci = EK(Ci−1) ⊕ Pi with C0 = IV, and decryption is Pi = EK(Ci−1) ⊕ Ci.
NIST SP800-38A defines CFB with a bit-width.[26]The CFB mode also requires an integer parameter, denoted s, such that 1 ≤ s ≤ b. In the specification of the CFB mode below, each plaintext segment (Pj) and ciphertext segment (Cj) consists of s bits. The value of s is sometimes incorporated into the name of the mode, e.g., the 1-bit CFB mode, the 8-bit CFB mode, the 64-bit CFB mode, or the 128-bit CFB mode.
These modes will truncate the output of the underlying block cipher.
CFB-1 is considered self synchronizing and resilient to loss of ciphertext; "When the 1-bit CFB mode is used, then the synchronization is automatically restored b+1 positions after the inserted or deleted bit. For other values of s in the CFB mode, and for the other confidentiality modes in this recommendation, the synchronization must be restored externally." (NIST SP800-38A). I.e. 1-bit loss in a 128-bit-wide block cipher like AES will render 129 invalid bits before emitting valid bits.
CFB may also self synchronize in some special cases other than those specified. For example, a one bit change in CFB-128 with an underlying 128 bit block cipher, will re-synchronize after two blocks. (However, CFB-128 etc. will not handle bit loss gracefully; a one-bit loss will cause the decryptor to lose alignment with the encryptor)
Like CBC mode, changes in the plaintext propagate forever in the ciphertext, and encryption cannot be parallelized. Also like CBC, decryption can be parallelized.
CFB, OFB and CTR share two advantages over CBC mode: the block cipher is only ever used in the encrypting direction, and the message does not need to be padded to a multiple of the cipher block size (thoughciphertext stealingcan also be used for CBC mode to make padding unnecessary).
Theoutput feedback(OFB) mode makes a block cipher into a synchronousstream cipher. It generateskeystreamblocks, which are thenXORedwith the plaintext blocks to get the ciphertext. Just as with other stream ciphers, flipping a bit in the ciphertext produces a flipped bit in the plaintext at the same location. This property allows manyerror-correcting codesto function normally even when applied before encryption.
Because of the symmetry of the XOR operation, encryption and decryption are exactly the same:
Each output feedback block cipher operation depends on all previous ones, and so cannot be performed in parallel. However, because the plaintext or ciphertext is only used for the final XOR, the block cipher operations may be performed in advance, allowing the final step to be performed in parallel once the plaintext or ciphertext is available.
It is possible to obtain an OFB mode keystream by using CBC mode with a constant string of zeroes as input. This can be useful, because it allows the usage of fast hardware implementations of CBC mode for OFB mode encryption.
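A quick check of this claim, as a sketch assuming the Python "cryptography" package: encrypting an all-zero message in CBC mode reproduces the OFB keystream for the same key and IV, because each CBC output block is then just the block cipher applied to the previous output:

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key, iv = os.urandom(16), os.urandom(16)
zeros = bytes(64)                                    # four all-zero blocks

ofb = Cipher(algorithms.AES(key), modes.OFB(iv)).encryptor()
cbc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()

keystream = ofb.update(zeros) + ofb.finalize()       # OFB of zeros is the raw keystream
assert keystream == cbc.update(zeros) + cbc.finalize()
```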
Using OFB mode with a partial block as feedback like CFB mode reduces the average cycle length by a factor of 2^32 or more. A mathematical model proposed by Davies and Parkin and substantiated by experimental results showed that an average cycle length near the obtainable maximum can be achieved only with full feedback. For this reason, support for truncated feedback was removed from the specification of OFB.[27]
Like OFB, counter mode turns ablock cipherinto astream cipher. It generates the nextkeystreamblock by encrypting successive values of a "counter". The counter can be any function which produces a sequence which is guaranteed not to repeat for a long time, although an actual increment-by-one counter is the simplest and most popular. The usage of a simple deterministic input function used to be controversial; critics argued that "deliberately exposing a cryptosystem to a known systematic input represents an unnecessary risk".[28]However, today CTR mode is widely accepted, and any problems are considered a weakness of the underlying block cipher, which is expected to be secure regardless of systemic bias in its input.[29]Along with CBC, CTR mode is one of two block cipher modes recommended by Niels Ferguson and Bruce Schneier.[30]
CTR mode was introduced byWhitfield DiffieandMartin Hellmanin 1979.[29]
CTR mode has similar characteristics to OFB, but also allows a random-access property during decryption. CTR mode is well suited to operate on a multi-processor machine, where blocks can be encrypted in parallel. Furthermore, it does not suffer from the short-cycle problem that can affect OFB.[31]
If the IV/nonce is random, then they can be combined with the counter using any invertible operation (concatenation, addition, or XOR) to produce the actual unique counter block for encryption. In case of a non-random nonce (such as a packet counter), the nonce and counter should be concatenated (e.g., storing the nonce in the upper 64 bits and the counter in the lower 64 bits of a 128-bit counter block). Simply adding or XORing the nonce and counter into a single value would break the security under achosen-plaintext attackin many cases, since the attacker may be able to manipulate the entire IV–counter pair to cause a collision. Once an attacker controls the IV–counter pair and plaintext, XOR of the ciphertext with the known plaintext would yield a value that, when XORed with the ciphertext of the other block sharing the same IV–counter pair, would decrypt that block.[32]
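A sketch of the concatenation layout described above, with a 64-bit nonce in the upper half of the 128-bit counter block and a 64-bit block counter in the lower half (these field widths are illustrative, not mandated by any particular standard):

```python
import os

def counter_block(nonce8: bytes, block_index: int) -> bytes:
    """Build a 16-byte counter block: 8-byte nonce || 8-byte big-endian counter."""
    assert len(nonce8) == 8
    return nonce8 + block_index.to_bytes(8, "big")

nonce = os.urandom(8)
blocks = [counter_block(nonce, i) for i in range(4)]
assert len(set(blocks)) == 4                 # each block index gives a unique input
```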
Note that thenoncein this diagram is equivalent to theinitialization vector(IV) in the other diagrams. However, if the offset/location information is corrupt, it will be impossible to partially recover such data due to the dependence on byte offset.
"Error propagation" properties describe how a decryption behaves during bit errors, i.e. how error in one bit cascades to different decrypted bits.
Bit errors may occur intentionally in attacks or randomly due to transmission errors.
For modernauthenticated encryption(AEAD) or protocols withmessage authentication codeschained in MAC-then-encrypt order, any bit error should completely abort decryption and must not reveal any specific bit errors to the decryptor. That is, if decryption succeeds, there should not be any bit error. As such, error propagation is a less important subject in modern cipher modes than in traditional confidentiality-only modes.
(Source: SP800-38A Table D.2: Summary of Effect of Bit Errors on Decryption)
It might be observed, for example, that a one-block error in the transmitted ciphertext would result in a one-block error in the reconstructed plaintext for ECB mode encryption, while in CBC mode such an error would affect two blocks. Some felt that such resilience was desirable in the face of random errors (e.g., line noise), while others argued that error correcting increased the scope for attackers to maliciously tamper with a message.
However, when proper integrity protection is used, such an error will result (with high probability) in the entire message being rejected. If resistance to random error is desirable,error-correcting codesshould be applied to the ciphertext before transmission.
Many more modes of operation for block ciphers have been suggested. Some have been accepted, fully described (even standardized), and are in use. Others have been found insecure, and should never be used. Still others don't categorize as confidentiality, authenticity, or authenticated encryption – for example key feedback mode andDavies–Meyerhashing.
NISTmaintains a list of proposed modes for block ciphers atModes Development.[26][33]
Disk encryption often uses special purpose modes specifically designed for the application. Tweakable narrow-block encryption modes (LRW,XEX, andXTS) and wide-block encryption modes (CMCandEME) are designed to securely encrypt sectors of a disk (seedisk encryption theory).
Many modes use an initialization vector (IV) which, depending on the mode, may have requirements such as being only used once (a nonce) or being unpredictable ahead of its publication, etc. Reusing an IV with the same key in CTR, GCM or OFB mode results in XORing the same keystream with two or more plaintexts, a clear misuse of a stream, with a catastrophic loss of security. Deterministic authenticated encryption modes such as the NISTKey Wrapalgorithm and the SIV (RFC 5297) AEAD mode do not require an IV as an input, and return the same ciphertext and authentication tag every time for a given plaintext and key. Other IV misuse-resistant modes such asAES-GCM-SIVbenefit from an IV input, for example in the maximum amount of data that can be safely encrypted with one key, while not failing catastrophically if the same IV is used multiple times.
Block ciphers can also be used in othercryptographic protocols. They are generally used in modes of operation similar to the block modes described here. As with all protocols, to be cryptographically secure, care must be taken to design these modes of operation correctly.
There are several schemes which use a block cipher to build acryptographic hash function. Seeone-way compression functionfor descriptions of several such methods.
Cryptographically secure pseudorandom number generators(CSPRNGs) can also be built using block ciphers.
Message authentication codes(MACs) are often built from block ciphers.CBC-MAC,OMACandPMACare examples.
|
https://en.wikipedia.org/wiki/Block_cipher_mode_of_operation#Counter_mode
|
Ininformation theory, thenoisy-channel coding theorem(sometimesShannon's theoremorShannon's limit) establishes that for any given degree of noise contamination of a communication channel, it is possible (in theory) to communicate discrete data (digitalinformation) nearly error-free up to a computable maximum rate through the channel. This result was presented byClaude Shannonin 1948 and was based in part on earlier work and ideas ofHarry NyquistandRalph Hartley.
TheShannon limitorShannon capacityof a communication channel refers to the maximumrateof error-free data that can theoretically be transferred over the channel if the link is subject to random data transmission errors, for a particular noise level. It was first described by Shannon (1948), and shortly after published in a book by Shannon andWarren WeaverentitledThe Mathematical Theory of Communication(1949). This founded the modern discipline ofinformation theory.
Stated byClaude Shannonin 1948, the theorem describes the maximum possible efficiency oferror-correcting methodsversus levels of noise interference and data corruption. Shannon's theorem has wide-ranging applications in both communications anddata storage. This theorem is of foundational importance to the modern field ofinformation theory. Shannon only gave an outline of the proof. The first rigorous proof for the discrete case is given in (Feinstein 1954).
The Shannon theorem states that given a noisy channel withchannel capacityCand information transmitted at a rateR, then ifR<C{\displaystyle R<C}there existcodesthat allow theprobability of errorat the receiver to be made arbitrarily small. This means that, theoretically, it is possible to transmit information nearly without error at any rate below a limiting rate,C.
The converse is also important. IfR>C{\displaystyle R>C}, an arbitrarily small probability of error is not achievable. All codes will have a probability of error greater than a certain positive minimal level, and this level increases as the rate increases. So, information cannot be guaranteed to be transmitted reliably across a channel at rates beyond the channel capacity. The theorem does not address the rare situation in which rate and capacity are equal.
The channel capacityC{\displaystyle C}can be calculated from the physical properties of a channel; for a band-limited channel with Gaussian noise, using theShannon–Hartley theorem.
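As a small numeric illustration (a sketch, not from the text), the Shannon–Hartley capacity of a band-limited Gaussian channel is B·log2(1 + S/N), and the capacity of a binary symmetric channel with crossover probability p is 1 − H2(p):

```python
from math import log2

def shannon_hartley(bandwidth_hz: float, snr_linear: float) -> float:
    """Capacity of a band-limited AWGN channel, in bits per second."""
    return bandwidth_hz * log2(1 + snr_linear)

def bsc_capacity(p: float) -> float:
    """Capacity of a binary symmetric channel, in bits per channel use."""
    h2 = -p * log2(p) - (1 - p) * log2(1 - p)   # binary entropy of p
    return 1 - h2

print(shannon_hartley(3100, 1000))   # ~30.9 kbit/s for a 3.1 kHz line at 30 dB SNR
print(bsc_capacity(0.11))            # ~0.5 bits per use for crossover probability 0.11
```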
Simple schemes such as "send the message 3 times and use a best 2 out of 3 voting scheme if the copies differ" are inefficient error-correction methods, unable to asymptotically guarantee that a block of data can be communicated free of error. Advanced techniques such asReed–Solomon codesand, more recently,low-density parity-check(LDPC) codes andturbo codes, come much closer to reaching the theoretical Shannon limit, but at a cost of high computational complexity. Using these highly efficient codes and with the computing power in today'sdigital signal processors, it is now possible to reach very close to the Shannon limit. In fact, it was shown that LDPC codes can reach within 0.0045 dB of the Shannon limit (for binaryadditive white Gaussian noise(AWGN) channels, with very long block lengths).[1]
The basic mathematical model for a communication system is the following:
AmessageWis transmitted through a noisy channel by using encoding and decoding functions. AnencodermapsWinto a pre-defined sequence of channel symbols of lengthn. In its most basic model, the channel distorts each of these symbols independently of the others. The output of the channel –the received sequence– is fed into adecoderwhich maps the sequence into an estimate of the message. In this setting, the probability of error is defined as:
Theorem(Shannon, 1948):
(MacKay (2003), p. 162; cf Gallager (1968), ch.5; Cover and Thomas (1991), p. 198; Shannon (1948) thm. 11)
As with the several other major results in information theory, the proof of the noisy channel coding theorem includes an achievability result and a matching converse result. These two components serve to bound, in this case, the set of possible rates at which one can communicate over a noisy channel, and matching serves to show that these bounds are tight bounds.
The following outlines are only one set of many different styles available for study in information theory texts.
This particular proof of achievability follows the style of proofs that make use of theasymptotic equipartition property(AEP). Another style can be found in information theory texts usingerror exponents.
Both types of proofs make use of a random coding argument where the codebook used across a channel is randomly constructed - this serves to make the analysis simpler while still proving the existence of a code satisfying a desired low probability of error at any data rate below thechannel capacity.
By an AEP-related argument, given a channel, lengthn{\displaystyle n}strings of source symbolsX1n{\displaystyle X_{1}^{n}}, and lengthn{\displaystyle n}strings of channel outputsY1n{\displaystyle Y_{1}^{n}}, we can define ajointly typical setby the following:
We say that two sequencesX1n{\displaystyle {X_{1}^{n}}}andY1n{\displaystyle Y_{1}^{n}}arejointly typicalif they lie in the jointly typical set defined above.
Steps
The probability of error of this scheme is divided into two parts:
Define:Ei={(X1n(i),Y1n)∈Aε(n)},i=1,2,…,2nR{\displaystyle E_{i}=\{(X_{1}^{n}(i),Y_{1}^{n})\in A_{\varepsilon }^{(n)}\},i=1,2,\dots ,2^{nR}}
as the event that message i is jointly typical with the sequence received when message 1 is sent.
We can observe that asn{\displaystyle n}goes to infinity, ifR<I(X;Y){\displaystyle R<I(X;Y)}for the channel, the probability of error will go to 0.
Finally, given that the average codebook is shown to be "good," we know that there exists a codebook whose performance is better than the average, and so satisfies our need for arbitrarily low error probability communicating across the noisy channel.
Suppose a code with2nR{\displaystyle 2^{nR}}codewords. Let W be drawn uniformly over this set as an index. LetXn{\displaystyle X^{n}}andYn{\displaystyle Y^{n}}be the transmitted codeword and the received sequence, respectively.
The result of these steps is thatPe(n)≥1−1nR−CR{\displaystyle P_{e}^{(n)}\geq 1-{\frac {1}{nR}}-{\frac {C}{R}}}. As the block lengthn{\displaystyle n}goes to infinity,Pe(n){\displaystyle P_{e}^{(n)}}is bounded away from 0 if R is greater than C; an arbitrarily low probability of error is achievable only if R is less than C.
A strong converse theorem, proven by Wolfowitz in 1957,[3]states that,
for some finite positive constantA{\displaystyle A}. While the weak converse states that the error probability is bounded away from zero asn{\displaystyle n}goes to infinity, the strong converse states that the error goes to 1. Thus,C{\displaystyle C}is a sharp threshold between perfectly reliable and completely unreliable communication.
We assume that the channel is memoryless, but its transition probabilities change with time, in a fashion known at the transmitter as well as the receiver.
Then the channel capacity is given by
The maximum is attained at the capacity-achieving distributions for each respective channel. That is,C=liminf1n∑i=1nCi{\displaystyle C=\lim \inf {\frac {1}{n}}\sum _{i=1}^{n}C_{i}}whereCi{\displaystyle C_{i}}is the capacity of the i-th channel.
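As a concrete sketch of this formula (using binary symmetric channels with time-varying crossover probabilities purely as an illustrative assumption), each per-channel capacity is C_i = 1 − H2(p_i), and the quantity of interest is the lim inf of the running average of these capacities:

import math

def h2(p):
    # Binary entropy in bits, with the convention 0*log(0) = 0.
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    # Capacity of a binary symmetric channel with crossover probability p.
    return 1.0 - h2(p)

# Illustrative time-varying channel: crossover probability alternates between two values.
caps = [bsc_capacity(0.05 if i % 2 == 0 else 0.15) for i in range(1000)]
for n in (10, 100, 1000):
    print(n, sum(caps[:n]) / n)   # the running average settles near (C(0.05) + C(0.15)) / 2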
The proof runs through in almost the same way as that of channel coding theorem. Achievability follows from random coding with each symbol chosen randomly from the capacity achieving distribution for that particular channel. Typicality arguments use the definition of typical sets for non-stationary sources defined in theasymptotic equipartition propertyarticle.
The technicality oflim infcomes into play when1n∑i=1nCi{\displaystyle {\frac {1}{n}}\sum _{i=1}^{n}C_{i}}does not converge.
|
https://en.wikipedia.org/wiki/Shannon's_theorem
|
Ininformation theory, theentropyof arandom variablequantifies the average level of uncertainty or information associated with the variable's potential states or possible outcomes. This measures the expected amount of information needed to describe the state of the variable, considering the distribution of probabilities across all potential states. Given a discrete random variableX{\displaystyle X}, which may be any memberx{\displaystyle x}within the setX{\displaystyle {\mathcal {X}}}and is distributed according top:X→[0,1]{\displaystyle p\colon {\mathcal {X}}\to [0,1]}, the entropy isH(X):=−∑x∈Xp(x)logp(x),{\displaystyle \mathrm {H} (X):=-\sum _{x\in {\mathcal {X}}}p(x)\log p(x),}whereΣ{\displaystyle \Sigma }denotes the sum over the variable's possible values.[Note 1]The choice of base forlog{\displaystyle \log }, thelogarithm, varies for different applications. Base 2 gives the unit ofbits(or "shannons"), while baseegives "natural units"nat, and base 10 gives units of "dits", "bans", or "hartleys". An equivalent definition of entropy is theexpected valueof theself-informationof a variable.[1]
The concept of information entropy was introduced byClaude Shannonin his 1948 paper "A Mathematical Theory of Communication",[2][3]and is also referred to asShannon entropy. Shannon's theory defines adata communicationsystem composed of three elements: a source of data, acommunication channel, and a receiver. The "fundamental problem of communication" – as expressed by Shannon – is for the receiver to be able to identify what data was generated by the source, based on the signal it receives through the channel.[2][3]Shannon considered various ways to encode, compress, and transmit messages from a data source, and proved in hissource coding theoremthat the entropy represents an absolute mathematical limit on how well data from the source can belosslesslycompressed onto a perfectly noiseless channel. Shannon strengthened this result considerably for noisy channels in hisnoisy-channel coding theorem.
Entropy in information theory is directly analogous to theentropyinstatistical thermodynamics. The analogy results when the values of the random variable designate energies of microstates, so Gibbs's formula for the entropy is formally identical to Shannon's formula. Entropy has relevance to other areas of mathematics such ascombinatoricsandmachine learning. The definition can be derived from a set ofaxiomsestablishing that entropy should be a measure of how informative the average outcome of a variable is. For a continuous random variable,differential entropyis analogous to entropy. The definitionE[−logp(X)]{\displaystyle \mathbb {E} [-\log p(X)]}generalizes the above.
The core idea of information theory is that the "informational value" of a communicated message depends on the degree to which the content of the message is surprising. If a highly likely event occurs, the message carries very little information. On the other hand, if a highly unlikely event occurs, the message is much more informative. For instance, the knowledge that some particular numberwill notbe the winning number of a lottery provides very little information, because any particular chosen number will almost certainly not win. However, knowledge that a particular numberwillwin a lottery has high informational value because it communicates the occurrence of a very low probability event.
Theinformation content,also called thesurprisalorself-information,of an eventE{\displaystyle E}is a function that increases as the probabilityp(E){\displaystyle p(E)}of an event decreases. Whenp(E){\displaystyle p(E)}is close to 1, the surprisal of the event is low, but ifp(E){\displaystyle p(E)}is close to 0, the surprisal of the event is high. This relationship is described by the functionlog(1p(E)),{\displaystyle \log \left({\frac {1}{p(E)}}\right),}wherelog{\displaystyle \log }is thelogarithm, which gives 0 surprise when the probability of the event is 1.[4]In fact,logis the only function that satisfies a specific set of conditions defined in section§ Characterization.
Hence, we can define the information, or surprisal, of an eventE{\displaystyle E}byI(E)=−log(p(E)),{\displaystyle I(E)=-\log(p(E)),}or equivalently,I(E)=log(1p(E)).{\displaystyle I(E)=\log \left({\frac {1}{p(E)}}\right).}
Entropy measures the expected (i.e., average) amount of information conveyed by identifying the outcome of a random trial.[5]: 67This implies that rolling a die has higher entropy than tossing a coin because each outcome of a die toss has smaller probability (p=1/6{\displaystyle p=1/6}) than each outcome of a coin toss (p=1/2{\displaystyle p=1/2}).
Consider a coin with probabilitypof landing on heads and probability1 −pof landing on tails. The maximum surprise is whenp= 1/2, for which one outcome is not expected over the other. In this case a coin flip has an entropy of onebit(similarly, onetritwith equiprobable values containslog23{\displaystyle \log _{2}3}(about 1.58496) bits of information because it can have one of three values). The minimum surprise is whenp= 0(impossibility) orp= 1(certainty) and the entropy is zero bits. When the entropy is zero, sometimes referred to as unity[Note 2], there is no uncertainty at all – no freedom of choice – noinformation.[6]Other values ofpgive entropies between zero and one bits.
Information theory is useful to calculate the smallest amount of information required to convey a message, as indata compression. For example, consider the transmission of sequences comprising the 4 characters 'A', 'B', 'C', and 'D' over a binary channel. If all 4 letters are equally likely (25%), one cannot do better than using two bits to encode each letter. 'A' might code as '00', 'B' as '01', 'C' as '10', and 'D' as '11'. However, if the probabilities of each letter are unequal, say 'A' occurs with 70% probability, 'B' with 26%, and 'C' and 'D' with 2% each, one could assign variable length codes. In this case, 'A' would be coded as '0', 'B' as '10', 'C' as '110', and 'D' as '111'. With this representation, 70% of the time only one bit needs to be sent, 26% of the time two bits, and only 4% of the time 3 bits. On average, fewer than 2 bits are required since the entropy is lower (owing to the high prevalence of 'A' followed by 'B' – together 96% of characters). The calculation of the sum of probability-weighted log probabilities measures and captures this effect.
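A minimal Python check of this example: it computes the entropy of the distribution (0.70, 0.26, 0.02, 0.02) and the expected length of the variable-length code '0', '10', '110', '111' described above.

import math

probs = {"A": 0.70, "B": 0.26, "C": 0.02, "D": 0.02}
code_lengths = {"A": 1, "B": 2, "C": 3, "D": 3}

entropy = -sum(p * math.log2(p) for p in probs.values())
expected_length = sum(probs[s] * code_lengths[s] for s in probs)

print(f"entropy ~ {entropy:.3f} bits/symbol")                        # about 1.09 bits
print(f"expected code length = {expected_length:.2f} bits/symbol")   # 1.34 bits, well under 2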
English text, treated as a string of characters, has fairly low entropy; i.e. it is fairly predictable. We can be fairly certain that, for example, 'e' will be far more common than 'z', that the combination 'qu' will be much more common than any other combination with a 'q' in it, and that the combination 'th' will be more common than 'z', 'q', or 'qu'. After the first few letters one can often guess the rest of the word. English text has between 0.6 and 1.3 bits of entropy per character of the message.[7]: 234
Named afterBoltzmann's Η-theorem, Shannon defined the entropyΗ(Greek capital lettereta) of adiscrete random variableX{\textstyle X}, which takes values in the setX{\displaystyle {\mathcal {X}}}and is distributed according top:X→[0,1]{\displaystyle p:{\mathcal {X}}\to [0,1]}such thatp(x):=P[X=x]{\displaystyle p(x):=\mathbb {P} [X=x]}:
H(X)=E[I(X)]=E[−logp(X)].{\displaystyle \mathrm {H} (X)=\mathbb {E} [\operatorname {I} (X)]=\mathbb {E} [-\log p(X)].}
HereE{\displaystyle \mathbb {E} }is theexpected value operator, andIis theinformation contentofX.[8]: 11[9]: 19–20I(X){\displaystyle \operatorname {I} (X)}is itself a random variable.
The entropy can explicitly be written as:H(X)=−∑x∈Xp(x)logbp(x),{\displaystyle \mathrm {H} (X)=-\sum _{x\in {\mathcal {X}}}p(x)\log _{b}p(x),}wherebis thebase of the logarithmused. Common values ofbare 2,Euler's numbere, and 10, and the corresponding units of entropy are thebitsforb= 2,natsforb=e, andbansforb= 10.[10]
In the case ofp(x)=0{\displaystyle p(x)=0}for somex∈X{\displaystyle x\in {\mathcal {X}}}, the value of the corresponding summand0 logb(0)is taken to be0, which is consistent with thelimit:[11]: 13limp→0+plog(p)=0.{\displaystyle \lim _{p\to 0^{+}}p\log(p)=0.}
One may also define theconditional entropyof two variablesX{\displaystyle X}andY{\displaystyle Y}taking values from setsX{\displaystyle {\mathcal {X}}}andY{\displaystyle {\mathcal {Y}}}respectively, as:[11]: 16H(X|Y)=−∑x,y∈X×YpX,Y(x,y)logpX,Y(x,y)pY(y),{\displaystyle \mathrm {H} (X|Y)=-\sum _{x,y\in {\mathcal {X}}\times {\mathcal {Y}}}p_{X,Y}(x,y)\log {\frac {p_{X,Y}(x,y)}{p_{Y}(y)}},}wherepX,Y(x,y):=P[X=x,Y=y]{\displaystyle p_{X,Y}(x,y):=\mathbb {P} [X=x,Y=y]}andpY(y)=P[Y=y]{\displaystyle p_{Y}(y)=\mathbb {P} [Y=y]}. This quantity should be understood as the remaining randomness in the random variableX{\displaystyle X}given the random variableY{\displaystyle Y}.
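The conditional entropy can be computed directly from a joint probability table. The sketch below uses a small hypothetical joint distribution over two binary variables; the numbers are illustrative and not taken from the text.

import math

# Hypothetical joint distribution p(x, y) for X, Y in {0, 1}.
p_xy = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

# Marginal distribution p(y).
p_y = {}
for (x, y), p in p_xy.items():
    p_y[y] = p_y.get(y, 0.0) + p

# H(X|Y) = -sum_{x,y} p(x,y) log2( p(x,y) / p(y) )
h_x_given_y = -sum(p * math.log2(p / p_y[y]) for (x, y), p in p_xy.items() if p > 0)
print(f"H(X|Y) ~ {h_x_given_y:.3f} bits")   # about 0.722 bits for this table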
Entropy can be formally defined in the language ofmeasure theoryas follows:[12]Let(X,Σ,μ){\displaystyle (X,\Sigma ,\mu )}be aprobability space. LetA∈Σ{\displaystyle A\in \Sigma }be anevent. ThesurprisalofA{\displaystyle A}isσμ(A)=−lnμ(A).{\displaystyle \sigma _{\mu }(A)=-\ln \mu (A).}
Theexpectedsurprisal ofA{\displaystyle A}ishμ(A)=μ(A)σμ(A).{\displaystyle h_{\mu }(A)=\mu (A)\sigma _{\mu }(A).}
Aμ{\displaystyle \mu }-almostpartitionis aset familyP⊆P(X){\displaystyle P\subseteq {\mathcal {P}}(X)}such thatμ(∪P)=1{\displaystyle \mu (\mathop {\cup } P)=1}andμ(A∩B)=0{\displaystyle \mu (A\cap B)=0}for all distinctA,B∈P{\displaystyle A,B\in P}. (This is a relaxation of the usual conditions for a partition.) The entropy ofP{\displaystyle P}isHμ(P)=∑A∈Phμ(A).{\displaystyle \mathrm {H} _{\mu }(P)=\sum _{A\in P}h_{\mu }(A).}
LetM{\displaystyle M}be asigma-algebraonX{\displaystyle X}. The entropy ofM{\displaystyle M}isHμ(M)=supP⊆MHμ(P).{\displaystyle \mathrm {H} _{\mu }(M)=\sup _{P\subseteq M}\mathrm {H} _{\mu }(P).}Finally, the entropy of the probability space isHμ(Σ){\displaystyle \mathrm {H} _{\mu }(\Sigma )}, that is, the entropy with respect toμ{\displaystyle \mu }of the sigma-algebra ofallmeasurable subsets ofX{\displaystyle X}.
Recent studies on layered dynamical systems have introduced the concept of symbolic conditional entropy, further extending classical entropy measures to more abstract informational structures.[13]
Consider tossing a coin with known, not necessarily fair, probabilities of coming up heads or tails; this can be modeled as aBernoulli process.
The entropy of the unknown result of the next toss of the coin is maximized if the coin is fair (that is, if heads and tails both have equal probability 1/2). This is the situation of maximum uncertainty as it is most difficult to predict the outcome of the next toss; the result of each toss of the coin delivers one full bit of information. This is becauseH(X)=−∑i=1np(xi)logbp(xi)=−∑i=1212log212=−∑i=1212⋅(−1)=1.{\displaystyle {\begin{aligned}\mathrm {H} (X)&=-\sum _{i=1}^{n}{p(x_{i})\log _{b}p(x_{i})}\\&=-\sum _{i=1}^{2}{{\frac {1}{2}}\log _{2}{\frac {1}{2}}}\\&=-\sum _{i=1}^{2}{{\frac {1}{2}}\cdot (-1)}=1.\end{aligned}}}
However, if we know the coin is not fair, but comes up heads or tails with probabilitiespandq, wherep≠q, then there is less uncertainty. Every time it is tossed, one side is more likely to come up than the other. The reduced uncertainty is quantified in a lower entropy: on average each toss of the coin delivers less than one full bit of information. For example, ifp= 0.7, thenH(X)=−plog2p−qlog2q=−0.7log2(0.7)−0.3log2(0.3)≈−0.7⋅(−0.515)−0.3⋅(−1.737)=0.8816<1.{\displaystyle {\begin{aligned}\mathrm {H} (X)&=-p\log _{2}p-q\log _{2}q\\[1ex]&=-0.7\log _{2}(0.7)-0.3\log _{2}(0.3)\\[1ex]&\approx -0.7\cdot (-0.515)-0.3\cdot (-1.737)\\[1ex]&=0.8816<1.\end{aligned}}}
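A one-line numerical check of these calculations, using the binary entropy function H(p, 1 − p):

import math

def binary_entropy(p):
    # H(p) = -p log2 p - (1 - p) log2 (1 - p), in bits.
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

print(binary_entropy(0.5))   # 1.0 bit: the fair coin
print(binary_entropy(0.7))   # about 0.88 bits: the biased coin above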
Uniform probability yields maximum uncertainty and therefore maximum entropy. Entropy, then, can only decrease from the value associated with uniform probability. The extreme case is that of a double-headed coin that never comes up tails, or a double-tailed coin that never results in a head. Then there is no uncertainty. The entropy is zero: each toss of the coin delivers no new information as the outcome of each coin toss is always certain.[11]: 14–15
To understand the meaning of−Σpilog(pi), first define an information functionIin terms of an eventiwith probabilitypi. The amount of information acquired due to the observation of eventifollows from Shannon's solution of the fundamental properties ofinformation:[14]
Given two independent events, if the first event can yield one ofnequiprobableoutcomes and another has one ofmequiprobable outcomes then there aremnequiprobable outcomes of the joint event. This means that iflog2(n)bits are needed to encode the first value andlog2(m)to encode the second, one needslog2(mn) = log2(m) + log2(n)to encode both.
Shannon discovered that a suitable choice ofI{\displaystyle \operatorname {I} }is given by:[15]I(p)=log(1p)=−log(p).{\displaystyle \operatorname {I} (p)=\log \left({\tfrac {1}{p}}\right)=-\log(p).}
In fact, the only possible values ofI{\displaystyle \operatorname {I} }areI(u)=klogu{\displaystyle \operatorname {I} (u)=k\log u}fork<0{\displaystyle k<0}. Additionally, choosing a value forkis equivalent to choosing a valuex>1{\displaystyle x>1}fork=−1/logx{\displaystyle k=-1/\log x}, so thatxcorresponds to thebase for the logarithm. Thus, entropy ischaracterizedby the above four properties.
I(p1p2)=I(p1)+I(p2)Starting from property 3p2I′(p1p2)=I′(p1)taking the derivative w.r.tp1I′(p1p2)+p1p2I″(p1p2)=0taking the derivative w.r.tp2I′(u)+uI″(u)=0introducingu=p1p2(uI′(u))′=0combining terms into oneuI′(u)−k=0integrating w.r.tu,producing constantk{\displaystyle {\begin{aligned}&\operatorname {I} (p_{1}p_{2})&=\ &\operatorname {I} (p_{1})+\operatorname {I} (p_{2})&&\quad {\text{Starting from property 3}}\\&p_{2}\operatorname {I} '(p_{1}p_{2})&=\ &\operatorname {I} '(p_{1})&&\quad {\text{taking the derivative w.r.t}}\ p_{1}\\&\operatorname {I} '(p_{1}p_{2})+p_{1}p_{2}\operatorname {I} ''(p_{1}p_{2})&=\ &0&&\quad {\text{taking the derivative w.r.t}}\ p_{2}\\&\operatorname {I} '(u)+u\operatorname {I} ''(u)&=\ &0&&\quad {\text{introducing}}\,u=p_{1}p_{2}\\&(u\operatorname {I} '(u))'&=\ &0&&\quad {\text{combining terms into one}}\ \\&u\operatorname {I} '(u)-k&=\ &0&&\quad {\text{integrating w.r.t}}\ u,{\text{producing constant}}\,k\\\end{aligned}}}
Thisdifferential equationleads to the solutionI(u)=klogu+c{\displaystyle \operatorname {I} (u)=k\log u+c}for somek,c∈R{\displaystyle k,c\in \mathbb {R} }. Property 2 givesc=0{\displaystyle c=0}. Property 1 and 2 give thatI(p)≥0{\displaystyle \operatorname {I} (p)\geq 0}for allp∈[0,1]{\displaystyle p\in [0,1]}, so thatk<0{\displaystyle k<0}.
The differentunits of information(bitsfor thebinary logarithmlog2,natsfor thenatural logarithmln,bansfor thedecimal logarithmlog10and so on) areconstant multiplesof each other. For instance, in case of a fair coin toss, heads provideslog2(2) = 1bit of information, which is approximately 0.693 nats or 0.301 decimal digits. Because of additivity,ntosses providenbits of information, which is approximately0.693nnats or0.301ndecimal digits.
Themeaningof the events observed (the meaning ofmessages) does not matter in the definition of entropy. Entropy only takes into account the probability of observing a specific event, so the information it encapsulates is information about the underlyingprobability distribution, not the meaning of the events themselves.
Another characterization of entropy uses the following properties. We denotepi= Pr(X=xi)andΗn(p1, ...,pn) = Η(X).
The rule of additivity has the following consequences: forpositive integersbiwhereb1+ ... +bk=n,Hn(1n,…,1n)=Hk(b1n,…,bkn)+∑i=1kbinHbi(1bi,…,1bi).{\displaystyle \mathrm {H} _{n}\left({\frac {1}{n}},\ldots ,{\frac {1}{n}}\right)=\mathrm {H} _{k}\left({\frac {b_{1}}{n}},\ldots ,{\frac {b_{k}}{n}}\right)+\sum _{i=1}^{k}{\frac {b_{i}}{n}}\,\mathrm {H} _{b_{i}}\left({\frac {1}{b_{i}}},\ldots ,{\frac {1}{b_{i}}}\right).}
Choosingk=n,b1= ... =bn= 1this implies that the entropy of a certain outcome is zero:Η1(1) = 0. This implies that the efficiency of a source set withnsymbols can be defined simply as being equal to itsn-ary entropy. See alsoRedundancy (information theory).
The characterization here imposes an additive property with respect to apartition of a set. Meanwhile, theconditional probabilityis defined in terms of a multiplicative property,P(A∣B)⋅P(B)=P(A∩B){\displaystyle P(A\mid B)\cdot P(B)=P(A\cap B)}. Observe that a logarithm mediates between these two operations. Theconditional entropyand related quantities inherit this simple relation in turn. The measure theoretic definition in the previous section defined the entropy as a sum over expected surprisals−μ(A)⋅lnμ(A){\displaystyle -\mu (A)\cdot \ln \mu (A)}for an extremal partition. Here the logarithm is ad hoc and the entropy is not a measure in itself. At least in the information theory of a binary string,log2{\displaystyle \log _{2}}lends itself to practical interpretations.
Motivated by such relations, a plethora of related and competing quantities have been defined. For example,David Ellerman's analysis of a "logic of partitions" defines a competing measure in structuresdualto that of subsets of a universal set.[16]Information is quantified as "dits" (distinctions), a measure on partitions. "Dits" can be converted intoShannon's bits, to get the formulas for conditional entropy, and so on.
Another succinct axiomatic characterization of Shannon entropy was given byAczél, Forte and Ng,[17]via the following properties:
It was shown that any functionH{\displaystyle \mathrm {H} }satisfying the above properties must be a constant multiple of Shannon entropy, with a non-negative constant.[17]Compared to the previously mentioned characterizations of entropy, this characterization focuses on the properties of entropy as a function of random variables (subadditivity and additivity), rather than the properties of entropy as a function of the probability vectorp1,…,pn{\displaystyle p_{1},\ldots ,p_{n}}.
It is worth noting that if we drop the "small for small probabilities" property, thenH{\displaystyle \mathrm {H} }must be a non-negative linear combination of the Shannon entropy and theHartley entropy.[17]
The Shannon entropy satisfies the following properties, for some of which it is useful to interpret entropy as the expected amount of information learned (or uncertainty eliminated) by revealing the value of a random variableX:
The inspiration for adopting the wordentropyin information theory came from the close resemblance between Shannon's formula and very similar known formulae fromstatistical mechanics.
Instatistical thermodynamicsthe most general formula for the thermodynamicentropySof athermodynamic systemis theGibbs entropyS=−kB∑ipilnpi,{\displaystyle S=-k_{\text{B}}\sum _{i}p_{i}\ln p_{i}\,,}wherekBis theBoltzmann constant, andpiis the probability of amicrostate. TheGibbs entropywas defined byJ. Willard Gibbsin 1878 after earlier work byLudwig Boltzmann(1872).[18]
The Gibbs entropy translates over almost unchanged into the world ofquantum physicsto give thevon Neumann entropyintroduced byJohn von Neumannin 1927:S=−kBTr(ρlnρ),{\displaystyle S=-k_{\text{B}}\,{\rm {Tr}}(\rho \ln \rho )\,,}where ρ is thedensity matrixof the quantum mechanical system and Tr is thetrace.[19]
At an everyday practical level, the links between information entropy and thermodynamic entropy are not evident. Physicists and chemists are apt to be more interested inchangesin entropy as a system spontaneously evolves away from its initial conditions, in accordance with thesecond law of thermodynamics, rather than an unchanging probability distribution. As the minuteness of the Boltzmann constantkBindicates, the changes inS/kBfor even tiny amounts of substances in chemical and physical processes represent amounts of entropy that are extremely large compared to anything indata compressionorsignal processing. In classical thermodynamics, entropy is defined in terms of macroscopic measurements and makes no reference to any probability distribution, which is central to the definition of information entropy.
The connection between thermodynamics and what is now known as information theory was first made by Boltzmann and expressed by hisequation:
S=kBlnW,{\displaystyle S=k_{\text{B}}\ln W,}
whereS{\displaystyle S}is the thermodynamic entropy of a particular macrostate (defined by thermodynamic parameters such as temperature, volume, energy, etc.),Wis the number of microstates (various combinations of particles in various energy states) that can yield the given macrostate, andkBis the Boltzmann constant.[20]It is assumed that each microstate is equally likely, so that the probability of a given microstate ispi= 1/W. When these probabilities are substituted into the above expression for the Gibbs entropy (or equivalentlykBtimes the Shannon entropy), Boltzmann's equation results. In information theoretic terms, the information entropy of a system is the amount of "missing" information needed to determine a microstate, given the macrostate.
In the view ofJaynes(1957),[21]thermodynamic entropy, as explained bystatistical mechanics, should be seen as anapplicationof Shannon's information theory: the thermodynamic entropy is interpreted as being proportional to the amount of further Shannon information needed to define the detailed microscopic state of the system, that remains uncommunicated by a description solely in terms of the macroscopic variables of classical thermodynamics, with the constant of proportionality being just the Boltzmann constant. Adding heat to a system increases its thermodynamic entropy because it increases the number of possible microscopic states of the system that are consistent with the measurable values of its macroscopic variables, making any complete state description longer. (See article:maximum entropy thermodynamics).Maxwell's demoncan (hypothetically) reduce the thermodynamic entropy of a system by using information about the states of individual molecules; but, asLandauer(from 1961) and co-workers[22]have shown, to function the demon himself must increase thermodynamic entropy in the process, by at least the amount of Shannon information he proposes to first acquire and store; and so the total thermodynamic entropy does not decrease (which resolves the paradox).Landauer's principleimposes a lower bound on the amount of heat a computer must generate to process a given amount of information, though modern computers are far less efficient.
Shannon's definition of entropy, when applied to an information source, can determine the minimum channel capacity required to reliably transmit the source as encoded binary digits. Shannon's entropy measures the information contained in a message as opposed to the portion of the message that is determined (or predictable). Examples of the latter include redundancy in language structure or statistical properties relating to the occurrence frequencies of letter or word pairs, triplets etc. The minimum channel capacity can be realized in theory by using thetypical setor in practice usingHuffman,Lempel–Zivorarithmetic coding. (See alsoKolmogorov complexity.) In practice, compression algorithms deliberately include some judicious redundancy in the form ofchecksumsto protect against errors. Theentropy rateof a data source is the average number of bits per symbol needed to encode it. Shannon's experiments with human predictors show an information rate between 0.6 and 1.3 bits per character in English;[23]thePPM compression algorithmcan achieve a compression ratio of 1.5 bits per character in English text.
If acompressionscheme is lossless – one in which you can always recover the entire original message by decompression – then a compressed message has the same quantity of information as the original but is communicated in fewer characters. It has more information (higher entropy) per character. A compressed message has lessredundancy.Shannon's source coding theoremstates a lossless compression scheme cannot compress messages, on average, to havemorethan one bit of information per bit of message, but that any valuelessthan one bit of information per bit of message can be attained by employing a suitable coding scheme. The entropy of a message per bit multiplied by the length of that message is a measure of how much total information the message contains. Shannon's theorem also implies that no lossless compression scheme can shortenallmessages. If some messages come out shorter, at least one must come out longer due to thepigeonhole principle. In practical use, this is generally not a problem, because one is usually only interested in compressing certain types of messages, such as a document in English, as opposed to gibberish text, or digital photographs rather than noise, and it is unimportant if a compression algorithm makes some unlikely or uninteresting sequences larger.
A 2011 study inScienceestimates the world's technological capacity to store and communicate optimally compressed information normalized on the most effective compression algorithms available in the year 2007, therefore estimating the entropy of the technologically available sources.[24]: 60–65
The authors estimate humankind technological capacity to store information (fully entropically compressed) in 1986 and again in 2007. They break the information into three categories—to store information on a medium, to receive information through one-waybroadcastnetworks, or to exchange information through two-waytelecommunications networks.[24]
Entropy is one of several ways to measure biodiversity and is applied in the form of theShannon index.[25]A diversity index is a quantitative statistical measure of how many different types exist in a dataset, such as species in a community, accounting for ecologicalrichness,evenness, anddominance. Specifically, Shannon entropy is the logarithm of1D, thetrue diversityindex with parameter equal to 1. The Shannon index is related to the proportional abundances of types.
There are a number of entropy-related concepts that mathematically quantify information content of a sequence or message:
(The "rate of self-information" can also be defined for a particular sequence of messages or symbols generated by a given stochastic process: this will always be equal to the entropy rate in the case of astationary process.) Otherquantities of informationare also used to compare or relate different sources of information.
It is important not to confuse the above concepts. Often it is only clear from context which one is meant. For example, when someone says that the "entropy" of the English language is about 1 bit per character, they are actually modeling the English language as a stochastic process and talking about its entropyrate. Shannon himself used the term in this way.
If very large blocks are used, the estimate of per-character entropy rate may become artificially low because the probability distribution of the sequence is not known exactly; it is only an estimate. If one considers the text of every book ever published as a sequence, with each symbol being the text of a complete book, and if there areNpublished books, and each book is only published once, the estimate of the probability of each book is1/N, and the entropy (in bits) is−log2(1/N) = log2(N). As a practical code, this corresponds to assigning each book aunique identifierand using it in place of the text of the book whenever one wants to refer to the book. This is enormously useful for talking about books, but it is not so useful for characterizing the information content of an individual book, or of language in general: it is not possible to reconstruct the book from its identifier without knowing the probability distribution, that is, the complete text of all the books. The key idea is that the complexity of the probabilistic model must be considered.Kolmogorov complexityis a theoretical generalization of this idea that allows the consideration of the information content of a sequence independent of any particular probability model; it considers the shortestprogramfor auniversal computerthat outputs the sequence. A code that achieves the entropy rate of a sequence for a given model, plus the codebook (i.e. the probabilistic model), is one such program, but it may not be the shortest.
The Fibonacci sequence is 1, 1, 2, 3, 5, 8, 13, .... Treating the sequence as a message and each number as a symbol, there are almost as many symbols as there are characters in the message, giving an entropy of approximatelylog2(n). The first 128 symbols of the Fibonacci sequence have an entropy of approximately 7 bits/symbol, but the sequence can be expressed using a formula [F(n) = F(n−1) + F(n−2)forn= 3, 4, 5, ...,F(1) =1,F(2) = 1] and this formula has a much lower entropy and applies to any length of the Fibonacci sequence.
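A short empirical check of this estimate, treating each Fibonacci number as a single symbol and estimating entropy from the symbol frequencies:

import math
from collections import Counter

fib = [1, 1]
while len(fib) < 128:
    fib.append(fib[-1] + fib[-2])        # first 128 Fibonacci numbers

counts = Counter(fib)                     # the value 1 appears twice, all others once
n = len(fib)
entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
print(f"empirical entropy ~ {entropy:.2f} bits/symbol")   # ~6.98, close to log2(128) = 7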
Incryptanalysis, entropy is often roughly used as a measure of the unpredictability of a cryptographic key, though its realuncertaintyis unmeasurable. For example, a 128-bit key that is uniformly and randomly generated has 128 bits of entropy. It also takes (on average)2127{\displaystyle 2^{127}}guesses to break by brute force. Entropy fails to capture the number of guesses required if the possible keys are not chosen uniformly.[26][27]Instead, a measure calledguessworkcan be used to measure the effort required for a brute force attack.[28]
Other problems may arise from non-uniform distributions used in cryptography. For example, consider a 1,000,000-digit binaryone-time padcombined with the plaintext using exclusive or. If the pad has 1,000,000 bits of entropy, it is perfect. If the pad has 999,999 bits of entropy, evenly distributed (each individual bit of the pad having 0.999999 bits of entropy), it may provide good security. But if the pad has 999,999 bits of entropy, where the first bit is fixed and the remaining 999,999 bits are perfectly random, the first bit of the ciphertext will not be encrypted at all.
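A toy sketch of the last point: if the first pad bit is fixed (here to 0, an illustrative assumption) while the rest are random, exclusive-or encryption leaves the first plaintext bit fully exposed in the ciphertext.

import secrets

def xor_bits(a, b):
    # Bitwise XOR of two equal-length bit lists.
    return [x ^ y for x, y in zip(a, b)]

n = 16
plaintext = [secrets.randbits(1) for _ in range(n)]

# Flawed pad: first bit fixed to 0, remaining bits uniformly random.
pad = [0] + [secrets.randbits(1) for _ in range(n - 1)]

ciphertext = xor_bits(plaintext, pad)
print(plaintext[0] == ciphertext[0])   # always True: the first bit is not hidden at all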
A common way to define entropy for text is based on theMarkov modelof text. For an order-0 source (each character is selected independently of the previous characters), the binary entropy is:
H(S)=−∑ipilogpi,{\displaystyle \mathrm {H} ({\mathcal {S}})=-\sum _{i}p_{i}\log p_{i},}
wherepiis the probability ofi. For a first-orderMarkov source(one in which the probability of selecting a character is dependent only on the immediately preceding character), theentropy rateis:[citation needed]
H(S)=−∑ipi∑jpi(j)logpi(j),{\displaystyle \mathrm {H} ({\mathcal {S}})=-\sum _{i}p_{i}\sum _{j}\ p_{i}(j)\log p_{i}(j),}
whereiis astate(certain preceding characters) andpi(j){\displaystyle p_{i}(j)}is the probability ofjgivenias the previous character.
For a second order Markov source, the entropy rate is
H(S)=−∑ipi∑jpi(j)∑kpi,j(k)logpi,j(k).{\displaystyle \mathrm {H} ({\mathcal {S}})=-\sum _{i}p_{i}\sum _{j}p_{i}(j)\sum _{k}p_{i,j}(k)\ \log p_{i,j}(k).}
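A minimal sketch for the first-order case: given a transition matrix and its stationary distribution (here a small hypothetical two-state chain, chosen for illustration), the entropy rate is the stationary-weighted entropy of the transition rows.

import math

# Hypothetical first-order Markov source over states {0, 1}:
# P[i][j] is the probability of emitting j given previous symbol i.
P = [[0.9, 0.1],
     [0.4, 0.6]]

# Stationary distribution pi satisfying pi = pi P (worked out by hand for this chain).
pi = [0.8, 0.2]

rate = -sum(pi[i] * sum(P[i][j] * math.log2(P[i][j]) for j in range(2) if P[i][j] > 0)
            for i in range(2))
print(f"entropy rate ~ {rate:.3f} bits/symbol")   # about 0.57 bits/symbol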
A source setX{\displaystyle {\mathcal {X}}}with a non-uniform distribution will have less entropy than the same set with a uniform distribution (i.e. the "optimized alphabet"). This deficiency in entropy can be expressed as a ratio called efficiency:[29]
η(X)=HHmax=−∑i=1np(xi)logb(p(xi))logb(n).{\displaystyle \eta (X)={\frac {H}{H_{\text{max}}}}=-\sum _{i=1}^{n}{\frac {p(x_{i})\log _{b}(p(x_{i}))}{\log _{b}(n)}}.}Applying the basic properties of the logarithm, this quantity can also be expressed as:η(X)=−∑i=1np(xi)logb(p(xi))logb(n)=∑i=1nlogb(p(xi)−p(xi))logb(n)=∑i=1nlogn(p(xi)−p(xi))=logn(∏i=1np(xi)−p(xi)).{\displaystyle {\begin{aligned}\eta (X)&=-\sum _{i=1}^{n}{\frac {p(x_{i})\log _{b}(p(x_{i}))}{\log _{b}(n)}}=\sum _{i=1}^{n}{\frac {\log _{b}\left(p(x_{i})^{-p(x_{i})}\right)}{\log _{b}(n)}}\\[1ex]&=\sum _{i=1}^{n}\log _{n}\left(p(x_{i})^{-p(x_{i})}\right)=\log _{n}\left(\prod _{i=1}^{n}p(x_{i})^{-p(x_{i})}\right).\end{aligned}}}
Efficiency has utility in quantifying the effective use of acommunication channel. This formulation is also referred to as the normalized entropy, as the entropy is divided by the maximum entropylogb(n){\displaystyle {\log _{b}(n)}}. Furthermore, the efficiency is indifferent to the choice of (positive) baseb, since the base cancels in the final expression above.
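A short sketch computing the efficiency (normalized entropy) of the four-symbol distribution used earlier in this article:

import math

probs = [0.70, 0.26, 0.02, 0.02]        # a non-uniform source over n = 4 symbols
n = len(probs)

entropy = -sum(p * math.log2(p) for p in probs if p > 0)
efficiency = entropy / math.log2(n)      # H / H_max; independent of the chosen log base

print(f"H ~ {entropy:.3f} bits, efficiency ~ {efficiency:.3f}")   # about 1.09 bits, 0.55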
The Shannon entropy is restricted to random variables taking discrete values. The corresponding formula for a continuous random variable withprobability density functionf(x)with finite or infinite supportX{\displaystyle \mathbb {X} }on the real line is defined by analogy, using the above form of the entropy as an expectation:[11]: 224
H(X)=E[−logf(X)]=−∫Xf(x)logf(x)dx.{\displaystyle \mathrm {H} (X)=\mathbb {E} [-\log f(X)]=-\int _{\mathbb {X} }f(x)\log f(x)\,\mathrm {d} x.}
This is the differential entropy (or continuous entropy). A precursor of the continuous entropyh[f]is the expression for the functionalΗin theH-theoremof Boltzmann.
Although the analogy between both functions is suggestive, the following question must be asked: is the differential entropy a valid extension of the Shannon discrete entropy? Differential entropy lacks a number of properties that the Shannon discrete entropy has – it can even be negative – and corrections have been suggested, notablylimiting density of discrete points.
To answer this question, a connection must be established between the two functions:
The aim is to obtain a generally finite measure as thebin sizegoes to zero. In the discrete case, the bin size is the (implicit) width of each of then(finite or infinite) bins whose probabilities are denoted bypn. As the continuous domain is generalized, the width must be made explicit.
To do this, start with a continuous functionfdiscretized into bins of sizeΔ{\displaystyle \Delta }.
By the mean-value theorem there exists a valuexiin each bin such thatf(xi)Δ=∫iΔ(i+1)Δf(x)dx{\displaystyle f(x_{i})\Delta =\int _{i\Delta }^{(i+1)\Delta }f(x)\,dx}the integral of the functionfcan be approximated (in the Riemannian sense) by∫−∞∞f(x)dx=limΔ→0∑i=−∞∞f(xi)Δ,{\displaystyle \int _{-\infty }^{\infty }f(x)\,dx=\lim _{\Delta \to 0}\sum _{i=-\infty }^{\infty }f(x_{i})\Delta ,}where this limit and "bin size goes to zero" are equivalent.
We will denoteHΔ:=−∑i=−∞∞f(xi)Δlog(f(xi)Δ){\displaystyle \mathrm {H} ^{\Delta }:=-\sum _{i=-\infty }^{\infty }f(x_{i})\Delta \log \left(f(x_{i})\Delta \right)}and expanding the logarithm, we haveHΔ=−∑i=−∞∞f(xi)Δlog(f(xi))−∑i=−∞∞f(xi)Δlog(Δ).{\displaystyle \mathrm {H} ^{\Delta }=-\sum _{i=-\infty }^{\infty }f(x_{i})\Delta \log(f(x_{i}))-\sum _{i=-\infty }^{\infty }f(x_{i})\Delta \log(\Delta ).}
AsΔ → 0, we have
∑i=−∞∞f(xi)Δ→∫−∞∞f(x)dx=1∑i=−∞∞f(xi)Δlog(f(xi))→∫−∞∞f(x)logf(x)dx.{\displaystyle {\begin{aligned}\sum _{i=-\infty }^{\infty }f(x_{i})\Delta &\to \int _{-\infty }^{\infty }f(x)\,dx=1\\\sum _{i=-\infty }^{\infty }f(x_{i})\Delta \log(f(x_{i}))&\to \int _{-\infty }^{\infty }f(x)\log f(x)\,dx.\end{aligned}}}
Note that, sincelog(Δ) → −∞asΔ → 0, a special definition of the differential or continuous entropy is required:
h[f]=limΔ→0(HΔ+logΔ)=−∫−∞∞f(x)logf(x)dx,{\displaystyle h[f]=\lim _{\Delta \to 0}\left(\mathrm {H} ^{\Delta }+\log \Delta \right)=-\int _{-\infty }^{\infty }f(x)\log f(x)\,dx,}
which is, as said before, referred to as the differential entropy. This means that the differential entropyis nota limit of the Shannon entropy forn→ ∞. Rather, it differs from the limit of the Shannon entropy by an infinite offset (see also the article oninformation dimension).
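A numerical illustration of this offset, using the density f(x) = 2x on [0, 1] as an illustrative choice (its differential entropy is (1/2 − ln 2)/ln 2 ≈ −0.279 bits, negative as differential entropy can be): the discretized entropy H^Δ diverges as Δ → 0, while H^Δ + log Δ approaches the differential entropy.

import math

def f(x):
    # Illustrative density on [0, 1].
    return 2.0 * x

def discretized_entropy(delta):
    # H^Delta = -sum_i f(x_i) * delta * log2( f(x_i) * delta ), midpoint rule.
    num_bins = round(1.0 / delta)
    h = 0.0
    for i in range(num_bins):
        p = f((i + 0.5) * delta) * delta
        if p > 0:
            h -= p * math.log2(p)
    return h

for delta in (0.1, 0.01, 0.001):
    h = discretized_entropy(delta)
    print(f"delta={delta}: H_delta={h:.3f}, H_delta + log2(delta) = {h + math.log2(delta):.3f}")
# The first column grows without bound; the corrected value approaches about -0.279 bits.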
It turns out as a result that, unlike the Shannon entropy, the differential entropy isnotin general a good measure of uncertainty or information. For example, the differential entropy can be negative; also it is not invariant under continuous co-ordinate transformations. This problem may be illustrated by a change of units whenxis a dimensioned variable.f(x)will then have the units of1/x. The argument of the logarithm must be dimensionless, otherwise it is improper, so that the differential entropy as given above will be improper. IfΔis some "standard" value ofx(i.e. "bin size") and therefore has the same units, then a modified differential entropy may be written in proper form as:H=∫−∞∞f(x)log(f(x)Δ)dx,{\displaystyle \mathrm {H} =\int _{-\infty }^{\infty }f(x)\log(f(x)\,\Delta )\,dx,}and the result will be the same for any choice of units forx. In fact, the limit of discrete entropy asN→∞{\displaystyle N\rightarrow \infty }would also include a term oflog(N){\displaystyle \log(N)}, which would in general be infinite. This is expected: continuous variables would typically have infinite entropy when discretized. Thelimiting density of discrete pointsis really a measure of how much easier a distribution is to describe than a distribution that is uniform over its quantization scheme.
Another useful measure of entropy that works equally well in the discrete and the continuous case is therelative entropyof a distribution. It is defined as theKullback–Leibler divergencefrom the distribution to a reference measuremas follows. Assume that a probability distributionpisabsolutely continuouswith respect to a measurem, i.e. is of the formp(dx) =f(x)m(dx)for some non-negativem-integrable functionfwithm-integral 1, then the relative entropy can be defined asDKL(p‖m)=∫log(f(x))p(dx)=∫f(x)log(f(x))m(dx).{\displaystyle D_{\mathrm {KL} }(p\|m)=\int \log(f(x))p(dx)=\int f(x)\log(f(x))m(dx).}
In this form the relative entropy generalizes (up to change in sign) both the discrete entropy, where the measuremis thecounting measure, and the differential entropy, where the measuremis theLebesgue measure. If the measuremis itself a probability distribution, the relative entropy is non-negative, and zero ifp=mas measures. It is defined for any measure space, hence coordinate independent and invariant under co-ordinate reparameterizations if one properly takes into account the transformation of the measurem. The relative entropy, and (implicitly) entropy and differential entropy, do depend on the "reference" measurem.
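When the reference measure m is itself a probability distribution on the same finite set, the relative entropy reduces to the familiar Kullback–Leibler divergence sum; a minimal sketch with illustrative distributions:

import math

def kl_divergence(p, q):
    # D_KL(p || q) = sum_x p(x) log2( p(x) / q(x) ), in bits.
    # Requires q(x) > 0 wherever p(x) > 0 (absolute continuity).
    return sum(px * math.log2(px / qx) for px, qx in zip(p, q) if px > 0)

p = [0.70, 0.26, 0.02, 0.02]
q = [0.25, 0.25, 0.25, 0.25]     # uniform reference distribution

print(kl_divergence(p, q))       # equals log2(4) - H(p), about 0.91 bits
print(kl_divergence(p, p))       # 0.0: zero when the two distributions coincide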
Terence Taoused entropy to make a useful connection in his solution of theErdős discrepancy problem.[30][31]
Intuitively, the idea behind the proof is as follows. The relevant random variables are defined via theLiouville function(a useful mathematical function for studying the distribution of primes), asXH=λ(n+H){\displaystyle \lambda (n+H)}. If consecutive such random variables carry little information about each other in the Shannon-entropy sense, then the sum over an interval [n, n+H] can become arbitrarily large; for example, a sequence consisting only of +1's (one of the values thatXHcan take) has trivially low entropy, and its partial sums grow without bound. The key insight was to show that the entropy decreases by non-negligible amounts as H is expanded, which in turn implies unbounded growth of the relevant sums, and this is exactly what theErdős discrepancy problemrequires.
The proof is quite involved and brought together breakthroughs not only in the novel use of Shannon entropy; it also used theLiouville functionalong with averages of modulated multiplicative functions[32]in short intervals. Proving it also broke the "parity barrier"[33]for this specific problem.
While the use of Shannon entropy in the proof is novel, it is likely to open new research in this direction.
Entropy has become a useful quantity incombinatorics.
A simple example of this is an alternative proof of theLoomis–Whitney inequality: for every subsetA⊆Zd, we have|A|d−1≤∏i=1d|Pi(A)|{\displaystyle |A|^{d-1}\leq \prod _{i=1}^{d}|P_{i}(A)|}wherePiis theorthogonal projectionin theith coordinate:Pi(A)={(x1,…,xi−1,xi+1,…,xd):(x1,…,xd)∈A}.{\displaystyle P_{i}(A)=\{(x_{1},\ldots ,x_{i-1},x_{i+1},\ldots ,x_{d}):(x_{1},\ldots ,x_{d})\in A\}.}
The proof follows as a simple corollary ofShearer's inequality: ifX1, ...,Xdare random variables andS1, ...,Snare subsets of{1, ...,d} such that every integer between 1 anddlies in exactlyrof these subsets, thenH[(X1,…,Xd)]≤1r∑i=1nH[(Xj)j∈Si]{\displaystyle \mathrm {H} [(X_{1},\ldots ,X_{d})]\leq {\frac {1}{r}}\sum _{i=1}^{n}\mathrm {H} [(X_{j})_{j\in S_{i}}]}where(Xj)j∈Si{\displaystyle (X_{j})_{j\in S_{i}}}is the Cartesian product of random variablesXjwith indexesjinSi(so the dimension of this vector is equal to the size ofSi).
We sketch how Loomis–Whitney follows from this: Indeed, letXbe a uniformly distributed random variable with values inA, so that each point inAoccurs with equal probability. Then (by the further properties of entropy mentioned above)Η(X) = log|A|, where|A|denotes the cardinality ofA. LetSi= {1, 2, ...,i−1,i+1, ...,d}. The range of(Xj)j∈Si{\displaystyle (X_{j})_{j\in S_{i}}}is contained inPi(A)and henceH[(Xj)j∈Si]≤log|Pi(A)|{\displaystyle \mathrm {H} [(X_{j})_{j\in S_{i}}]\leq \log |P_{i}(A)|}. Now use this to bound the right side of Shearer's inequality and exponentiate both sides of the resulting inequality to obtain the result.
For integers0 <k<nletq=k/n. Then2nH(q)n+1≤(nk)≤2nH(q),{\displaystyle {\frac {2^{n\mathrm {H} (q)}}{n+1}}\leq {\tbinom {n}{k}}\leq 2^{n\mathrm {H} (q)},}where[34]: 43H(q)=−qlog2(q)−(1−q)log2(1−q).{\displaystyle \mathrm {H} (q)=-q\log _{2}(q)-(1-q)\log _{2}(1-q).}
A proof sketch: the term(nk)qqn(1−q)n−nq{\displaystyle {\binom {n}{k}}q^{qn}(1-q)^{n-nq}}is one term of the binomial expansion∑i=0n(ni)qi(1−q)n−i=(q+(1−q))n=1.{\displaystyle \sum _{i=0}^{n}{\tbinom {n}{i}}q^{i}(1-q)^{n-i}=(q+(1-q))^{n}=1.}Rearranging gives the upper bound. For the lower bound one first shows, using some algebra, that it is the largest term in the summation. But then,(nk)qqn(1−q)n−nq≥1n+1{\displaystyle {\binom {n}{k}}q^{qn}(1-q)^{n-nq}\geq {\frac {1}{n+1}}}since there aren+ 1terms in the summation. Rearranging gives the lower bound.
A nice interpretation of this is that the number of binary strings of lengthnwith exactlykmany 1's is approximately2nH(k/n){\displaystyle 2^{n\mathrm {H} (k/n)}}.[35]
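The bound is easy to verify numerically; the sketch below compares the binomial coefficient with 2^{nH(q)}/(n+1) and 2^{nH(q)} for one illustrative choice of n and k:

import math

def binary_entropy(q):
    if q in (0.0, 1.0):
        return 0.0
    return -q * math.log2(q) - (1 - q) * math.log2(1 - q)

n, k = 100, 30
q = k / n
upper = 2 ** (n * binary_entropy(q))
lower = upper / (n + 1)

print(lower <= math.comb(n, k) <= upper)   # True: the coefficient sits inside the bound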
Machine learningtechniques arise largely from statistics and from information theory. In general, entropy is a measure of uncertainty, and a central objective of machine learning is to minimize uncertainty.
Decision tree learningalgorithms use relative entropy to determine the decision rules that govern the data at each node.[36]Theinformation gain in decision treesIG(Y,X){\displaystyle IG(Y,X)}, which is equal to the difference between the entropy ofY{\displaystyle Y}and the conditional entropy ofY{\displaystyle Y}givenX{\displaystyle X}, quantifies the expected information, or the reduction in entropy, from additionally knowing the value of an attributeX{\displaystyle X}. The information gain is used to identify which attributes of the dataset provide the most information and should be used to split the nodes of the tree optimally.
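A minimal sketch of information gain as used in decision tree learning, applied to a small hypothetical labelled dataset (the attribute values and labels below are illustrative assumptions):

import math
from collections import Counter

def entropy(labels):
    # Shannon entropy (bits) of the empirical label distribution.
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(attribute, labels):
    # IG(Y, X) = H(Y) - H(Y|X), estimated from samples of (X, Y).
    n = len(labels)
    h_y_given_x = 0.0
    for value in set(attribute):
        subset = [y for x, y in zip(attribute, labels) if x == value]
        h_y_given_x += (len(subset) / n) * entropy(subset)
    return entropy(labels) - h_y_given_x

x = ["a", "a", "a", "b", "b", "b"]   # attribute values
y = [1, 1, 0, 0, 0, 0]               # class labels
print(f"IG ~ {information_gain(x, y):.3f} bits")   # about 0.459 bits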
Bayesian inferencemodels often apply theprinciple of maximum entropyto obtainprior probabilitydistributions.[37]The idea is that the distribution that best represents the current state of knowledge of a system is the one with the largest entropy, and is therefore suitable to be the prior.
Classification in machine learningperformed bylogistic regressionorartificial neural networksoften employs a standard loss function, calledcross-entropyloss, that minimizes the average cross entropy between ground truth and predicted distributions.[38]In general, cross entropy is a measure of the differences between two datasets similar to the KL divergence (also known as relative entropy).
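A minimal sketch of the cross-entropy loss between a one-hot ground-truth distribution and a predicted probability vector (the numbers are illustrative; the loss is shown in nats, as is common in practice):

import math

def cross_entropy(true_dist, predicted_dist):
    # H(p, q) = -sum_x p(x) ln q(x); minimized when q matches p.
    return -sum(p * math.log(q) for p, q in zip(true_dist, predicted_dist) if p > 0)

y_true = [0.0, 1.0, 0.0]      # one-hot ground truth: the sample belongs to class 1
y_pred = [0.1, 0.8, 0.1]      # model's predicted class probabilities

print(cross_entropy(y_true, y_pred))   # -ln(0.8), about 0.223 nats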
This article incorporates material from Shannon's entropy onPlanetMath, which is licensed under theCreative Commons Attribution/Share-Alike License.
|
https://en.wikipedia.org/wiki/Information_entropy
|
TheUniversal Mobile Telecommunications System(UMTS) is a3Gmobile cellular system for networks based on theGSMstandard.[1]UMTS useswideband code-division multiple access(W-CDMA) radio access technology to offer greater spectral efficiency and bandwidth tomobile network operatorscompared to previous2Gsystems likeGPRSandCSD.[2]UMTS on its own provides a peak theoretical data rate of 2Mbit/s.[3]
Developed and maintained by the3GPP(3rd Generation Partnership Project), UMTS is a component of theInternational Telecommunication UnionIMT-2000standard set and compares with theCDMA2000standard set for networks based on the competingcdmaOnetechnology. The technology described in UMTS is sometimes also referred to asFreedom of Mobile Multimedia Access(FOMA)[4]or 3GSM.
UMTS specifies a complete network system, which includes theradio access network(UMTS Terrestrial Radio Access Network, or UTRAN), thecore network(Mobile Application Part, or MAP) and the authentication of users via SIM (subscriber identity module) cards. UnlikeEDGE(IMT Single-Carrier, based on GSM) and CDMA2000 (IMT Multi-Carrier), UMTS requires new base stations and new frequency allocations. UMTS has since been enhanced asHigh Speed Packet Access(HSPA).[5]
UMTS supports theoretical maximum datatransfer ratesof 42Mbit/swhenEvolved HSPA(HSPA+) is implemented in the network.[6]Users in deployed networks can expect a transfer rate of up to 384 kbit/s for Release '99 (R99) handsets (the original UMTS release), and 7.2 Mbit/s forHigh-Speed Downlink Packet Access(HSDPA) handsets in the downlink connection. These speeds are significantly faster than the 9.6 kbit/s of a single GSM error-corrected circuit switched data channel, multiple 9.6 kbit/s channels inHigh-Speed Circuit-Switched Data(HSCSD) and 14.4 kbit/s for CDMAOne channels.
Since 2006, UMTS networks in many countries have been or are in the process of being upgraded with High-Speed Downlink Packet Access (HSDPA), sometimes known as3.5G. Currently, HSDPA enablesdownlinktransfer speeds of up to 21 Mbit/s. Work is also progressing on improving the uplink transfer speed with theHigh-Speed Uplink Packet Access(HSUPA). The 3GPPLTEstandard succeeds UMTS and initially provided 4G speeds of 100 Mbit/s down and 50 Mbit/s up, with scalability up to 3 Gbps, using a next generation air interface technology based uponorthogonal frequency-division multiplexing.
The first national consumer UMTS networks launched in 2002 with a heavy emphasis on telco-provided mobile applications such as mobile TV andvideo calling. The high data speeds of UMTS are now most often utilised for Internet access: experience in Japan and elsewhere has shown that user demand for video calls is not high, and telco-provided audio/video content has declined in popularity in favour of high-speed access to the World Wide Web – either directly on a handset or connected to a computer viaWi-Fi,BluetoothorUSB.[citation needed]
UMTS combines three different terrestrialair interfaces,GSM'sMobile Application Part(MAP) core, and the GSM family ofspeech codecs.
The air interfaces are called UMTS Terrestrial Radio Access (UTRA).[7]All air interface options are part ofITU'sIMT-2000. In the currently most popular variant for cellular mobile telephones, W-CDMA (IMT Direct Spread) is used. It is also called "Uu interface", as it links User Equipment to the UMTS Terrestrial Radio Access Network.
Please note that the termsW-CDMA,TD-CDMAandTD-SCDMAare misleading. While they suggest covering just achannel access method(namely a variant ofCDMA), they are actually the common names for the whole air interface standards.[8]
W-CDMA (WCDMA; WidebandCode-Division Multiple Access), along with UMTS-FDD, UTRA-FDD, or IMT-2000 CDMA Direct Spread is an air interface standard found in3Gmobile telecommunicationsnetworks. It supports conventional cellular voice, text andMMSservices, but can also carry data at high speeds, allowing mobile operators to deliver higher bandwidth applications including streaming and broadband Internet access.[9]
W-CDMA uses theDS-CDMAchannel access method with a pair of 5 MHz wide channels. In contrast, the competingCDMA2000system uses one or more available 1.25 MHz channels for each direction of communication. W-CDMA systems are widely criticized for their large spectrum usage, which delayed deployment in countries that acted relatively slowly in allocating new frequencies specifically for 3G services (such as the United States).
The specificfrequency bandsoriginally defined by the UMTS standard are 1885–2025 MHz for the mobile-to-base (uplink) and 2110–2200 MHz for the base-to-mobile (downlink). In the US, 1710–1755 MHz and 2110–2155 MHz are used instead, as the 1900 MHz band was already used.[10]While UMTS2100 is the most widely deployed UMTS band, some countries' UMTS operators use the 850 MHz (900 MHz in Europe) and/or 1900 MHz bands (independently, meaning uplink and downlink are within the same band), notably in the US byAT&T Mobility, New Zealand byTelecom New Zealandon theXT Mobile Networkand in Australia byTelstraon theNext Gnetwork. Some carriers such asT-Mobileuse band numbers to identify the UMTS frequencies. For example, Band I (2100 MHz), Band IV (1700/2100 MHz), and Band V (850 MHz).
UMTS-FDD is an acronym for Universal Mobile Telecommunications System (UMTS) –frequency-division duplexing(FDD) and a3GPPstandardizedversion of UMTS networks that makes use of frequency-division duplexing forduplexingover an UMTS Terrestrial Radio Access (UTRA) air interface.[11]
W-CDMA is the basis of Japan'sNTT DoCoMo'sFOMAservice and the most-commonly used member of the Universal Mobile Telecommunications System (UMTS) family and sometimes used as a synonym for UMTS.[12]It uses the DS-CDMA channel access method and the FDD duplexing method to achieve higher speeds and support more users compared to most previously usedtime-division multiple access(TDMA) andtime-division duplex(TDD) schemes.
While not an evolutionary upgrade on the airside, it uses the samecore networkas the2GGSM networks deployed worldwide, allowingdual-mode mobileoperation along with GSM/EDGE; a feature it shares with other members of the UMTS family.
In the late 1990s, W-CDMA was developed by NTT DoCoMo as the air interface for their 3G networkFOMA. Later NTT DoCoMo submitted the specification to theInternational Telecommunication Union(ITU) as a candidate for the international 3G standard known as IMT-2000. The ITU eventually accepted W-CDMA as part of the IMT-2000 family of 3G standards, as an alternative to CDMA2000, EDGE, and the short rangeDECTsystem. Later, W-CDMA was selected as an air interface forUMTS.
As NTT DoCoMo did not wait for the finalisation of the 3G Release 99 specification, their network was initially incompatible with UMTS.[13]However, this has been resolved by NTT DoCoMo updating their network.
Code-Division Multiple Access communication networks have been developed by a number of companies over the years, but development of cell-phone networks based on CDMA (prior to W-CDMA) was dominated byQualcomm, the first company to succeed in developing a practical and cost-effective CDMA implementation for consumer cell phones and its earlyIS-95air interface standard has evolved into the current CDMA2000 (IS-856/IS-2000) standard. Qualcomm created an experimental wideband CDMA system called CDMA2000 3x which unified the W-CDMA (3GPP) and CDMA2000 (3GPP2) network technologies into a single design for a worldwide standard air interface. Compatibility with CDMA2000 would have beneficially enabled roaming on existing networks beyond Japan, since Qualcomm CDMA2000 networks are widely deployed, especially in the Americas, with coverage in 58 countries as of 2006[update]. However, divergent requirements resulted in the W-CDMA standard being retained and deployed globally. W-CDMA has then become the dominant technology with 457 commercial networks in 178 countries as of April 2012.[14]Several CDMA2000 operators have even converted their networks to W-CDMA for international roaming compatibility and smooth upgrade path toLTE.
Despite incompatibility with existing air-interface standards, late introduction and the high upgrade cost of deploying an all-new transmitter technology, W-CDMA has become the dominant standard.
W-CDMA transmits on a pair of 5 MHz-wide radio channels, while CDMA2000 transmits on one or several pairs of 1.25 MHz radio channels. Though W-CDMA does use adirect-sequenceCDMA transmission technique like CDMA2000, W-CDMA is not simply a wideband version of CDMA2000 and differs in many aspects from CDMA2000. From an engineering point of view, W-CDMA provides a different balance of trade-offs between cost, capacity, performance, and density[citation needed]; it also promises to achieve a benefit of reduced cost for video phone handsets. W-CDMA may also be better suited for deployment in the very dense cities of Europe and Asia. However, hurdles remain, andcross-licensingofpatentsbetween Qualcomm and W-CDMA vendors has not eliminated possible patent issues due to the features of W-CDMA which remain covered by Qualcomm patents.[15]
W-CDMA has been developed into a complete set of specifications, a detailed protocol that defines how a mobile phone communicates with the tower, how signals are modulated, how datagrams are structured, and system interfaces are specified allowing free competition on technology elements.
The world's first commercial W-CDMA service, FOMA, was launched by NTT DoCoMo inJapanin 2001.
Elsewhere, W-CDMA deployments are usually marketed under the UMTS brand.
W-CDMA has also been adapted for use in satellite communications on the U.S.Mobile User Objective Systemusing geosynchronous satellites in place of cell towers.
J-PhoneJapan (onceVodafoneand nowSoftBank Mobile) soon followed by launching their own W-CDMA based service, originally branded "Vodafone Global Standard" and claiming UMTS compatibility. The name of the service was changed to "Vodafone 3G" (now "SoftBank 3G") in December 2004.
Beginning in 2003,Hutchison Whampoagradually launched their upstart UMTS networks.
Most countries have, since the ITU approved of the 3G mobile service, either "auctioned" the radio frequencies to the company willing to pay the most, or conducted a "beauty contest" – asking the various companies to present what they intend to commit to if awarded the licences. This strategy has been criticised for aiming to drain the cash of operators to the brink of bankruptcy in order to honour their bids or proposals. Most of them have a time constraint for the rollout of the service – where a certain "coverage" must be achieved within a given date or the licence will be revoked.
Vodafone launched several UMTS networks in Europe in February 2004. MobileOne of Singapore commercially launched its 3G (W-CDMA) services in February 2005, followed by New Zealand in August 2005 and Australia in October 2005.
AT&T Mobilityutilized a UMTS network, with HSPA+, from 2005 until its shutdown in February 2022.
In March 2007, Rogers in Canada launched HSDPA in the Toronto Golden Horseshoe area on W-CDMA at 850/1900 MHz, with plans to launch the service commercially in the top 25 cities by October 2007.
TeliaSonera opened W-CDMA service in Finland on October 13, 2004, with speeds up to 384 kbit/s. Availability was initially limited to the main cities, with pricing of approximately €2/MB.[citation needed]
SK Telecom and KTF, the two largest mobile phone service providers in South Korea, each started offering W-CDMA service in December 2003. Due to poor coverage and a lack of choice in handsets, the W-CDMA service barely made a dent in the Korean market, which was dominated by CDMA2000. By October 2006 both companies were covering more than 90 cities, and SK Telecom announced that it would provide nationwide coverage for its W-CDMA network in order to offer SBSM (Single Band Single Mode) handsets by the first half of 2007. KT Freetel would thus cut funding for its CDMA2000 network development to a minimum.
In Norway, Telenor introduced W-CDMA in major cities by the end of 2004, while their competitor, NetCom, followed suit a few months later. Both operators have 98% national coverage on EDGE, but Telenor also ran parallel WLAN roaming networks on GSM, with which the UMTS service competed; for this reason Telenor dropped support for its WLAN service in Austria in 2006.
Maxis CommunicationsandCelcom, two mobile phone service providers inMalaysia, started offering W-CDMA services in 2005.
InSweden,Teliaintroduced W-CDMA in March 2004.
UMTS-TDD, an acronym for Universal Mobile Telecommunications System (UMTS) – time-division duplexing (TDD), is a 3GPP standardized version of UMTS networks that use UTRA-TDD.[11] UTRA-TDD is a UTRA that uses time-division duplexing for duplexing.[11] While a full implementation of UMTS, it is mainly used to provide Internet access in circumstances similar to those where WiMAX might be used.[citation needed] UMTS-TDD is not directly compatible with UMTS-FDD: a device designed to use one standard cannot, unless specifically designed to, work on the other, because of the difference in air interface technologies and frequencies used.[citation needed] It is more formally known as IMT-2000 CDMA-TDD or IMT-2000 Time-Division (IMT-TD).[16][17]
The two UMTS air interfaces (UTRAs) for UMTS-TDD are TD-CDMA and TD-SCDMA. Both air interfaces use a combination of two channel access methods,code-division multiple access(CDMA) and time-division multiple access (TDMA): the frequency band is divided into time slots (TDMA), which are further divided into channels using CDMA spreading codes. These air interfaces are classified as TDD, because time slots can be allocated to either uplink or downlink traffic.
TD-CDMA, an acronym for Time-Division-Code-Division Multiple Access, is a channel-access method based on usingspread-spectrummultiple-access (CDMA) across multiple time slots (TDMA). TD-CDMA is the channel access method for UTRA-TDD HCR, which is an acronym for UMTS Terrestrial Radio Access-Time Division Duplex High Chip Rate.[16]
UMTS-TDD's air interfaces that use the TD-CDMA channel access technique are standardized as UTRA-TDD HCR, which uses increments of 5 MHz of spectrum, each slice divided into 10 ms frames containing fifteen time slots (1500 per second).[16] The time slots (TS) are allocated in fixed percentages for downlink and uplink. TD-CDMA is used to multiplex streams from or to multiple transceivers. Unlike W-CDMA, it does not need separate frequency bands for up- and downstream, allowing deployment in tight frequency bands.[18]
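As a quick arithmetic check of the slot rate quoted above, 10 ms frames give 100 frames per second, so
$$\frac{15\ \text{slots}}{\text{frame}}\times\frac{100\ \text{frames}}{\text{s}}=1500\ \text{time slots per second}.$$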
TD-CDMA is a part of IMT-2000, defined as IMT-TD Time-Division (IMT CDMA TDD), and is one of the three UMTS air interfaces (UTRAs), as standardized by the 3GPP in UTRA-TDD HCR. UTRA-TDD HCR is closely related to W-CDMA, and provides the same types of channels where possible. UMTS's HSDPA/HSUPA enhancements are also implemented under TD-CDMA.[19]
In the United States, the technology has been used for public safety and government use in New York City and a few other areas.[needs update][20] In Japan, IPMobile planned to provide TD-CDMA service in 2006, but the launch was delayed, the technology was changed to TD-SCDMA, and the company went bankrupt before the service officially started.
Time-Division Synchronous Code-Division Multiple Access (TD-SCDMA) or UTRA TDD 1.28 Mcps low chip rate (UTRA-TDD LCR)[17][8] is an air interface[17] found in UMTS mobile telecommunications networks in China as an alternative to W-CDMA.
TD-SCDMA uses the TDMA channel access method combined with an adaptivesynchronous CDMAcomponent[17]on 1.6 MHz slices of spectrum, allowing deployment in even tighter frequency bands than TD-CDMA. It is standardized by the 3GPP and also referred to as "UTRA-TDD LCR". However, the main incentive for development of this Chinese-developed standard was avoiding or reducing the license fees that have to be paid to non-Chinese patent owners. Unlike the other air interfaces, TD-SCDMA was not part of UMTS from the beginning but has been added in Release 4 of the specification.
Like TD-CDMA, TD-SCDMA is known as IMT CDMA TDD within IMT-2000.
The term "TD-SCDMA" is misleading. While it suggests covering only a channel access method, it is actually the common name for the whole air interface specification.[8]
TD-SCDMA / UMTS-TDD (LCR) networks are incompatible with W-CDMA / UMTS-FDD and TD-CDMA / UMTS-TDD (HCR) networks.
TD-SCDMA was developed in the People's Republic of China by the Chinese Academy of Telecommunications Technology (CATT),Datang TelecomandSiemensin an attempt to avoid dependence on Western technology. This is likely primarily for practical reasons, since other 3G formats require the payment of patent fees to a large number of Western patent holders.
TD-SCDMA proponents also claim it is better suited for densely populated areas.[17]Further, it is supposed to cover all usage scenarios, whereas W-CDMA is optimised for symmetric traffic and macro cells, while TD-CDMA is best used in low mobility scenarios within micro or pico cells.[17]
TD-SCDMA is based on spread-spectrum technology which makes it unlikely that it will be able to completely escape the payment of license fees to western patent holders. The launch of a national TD-SCDMA network was initially projected by 2005[21]but only reached large scale commercial trials with 60,000 users across eight cities in 2008.[22]
On January 7, 2009, China granted a TD-SCDMA 3G licence toChina Mobile.[23]
On September 21, 2009, China Mobile officially announced that it had 1,327,000 TD-SCDMA subscribers as of the end of August, 2009.
TD-SCDMA is not commonly used outside of China.[24]
TD-SCDMA uses TDD, in contrast to the FDD scheme used byW-CDMA. By dynamically adjusting the number of timeslots used for downlink anduplink, the system can more easily accommodate asymmetric traffic with different data rate requirements on downlink and uplink than FDD schemes. Since it does not require paired spectrum for downlink and uplink, spectrum allocation flexibility is also increased. Using the same carrier frequency for uplink and downlink also means that the channel condition is the same on both directions, and thebase stationcan deduce the downlink channel information from uplink channel estimates, which is helpful to the application ofbeamformingtechniques.
TD-SCDMA also uses TDMA in addition to the CDMA used in WCDMA. This reduces the number of users in each timeslot, which reduces the implementation complexity ofmultiuser detectionand beamforming schemes, but the non-continuous transmission also reducescoverage(because of the higher peak power needed), mobility (because of lowerpower controlfrequency) and complicatesradio resource managementalgorithms.
The "S" in TD-SCDMA stands for "synchronous", which means that uplink signals are synchronized at the base station receiver, achieved by continuous timing adjustments. This reduces theinterferencebetween users of the same timeslot using different codes by improving theorthogonalitybetween the codes, therefore increasing system capacity, at the cost of some hardware complexity in achieving uplink synchronization.
On January 20, 2006,Ministry of Information Industryof the People's Republic of China formally announced that TD-SCDMA is the country's standard of 3G mobile telecommunication. On February 15, 2006, a timeline for deployment of the network in China was announced, stating pre-commercial trials would take place starting after completion of a number of test networks in select cities. These trials ran from March to October, 2006, but the results were apparently unsatisfactory. In early 2007, the Chinese government instructed the dominant cellular carrier, China Mobile, to build commercial trial networks in eight cities, and the two fixed-line carriers,China TelecomandChina Netcom, to build one each in two other cities. Construction of these trial networks was scheduled to finish during the fourth quarter of 2007, but delays meant that construction was not complete until early 2008.
The standard has been adopted by 3GPP since Rel-4, known as "UTRA TDD 1.28 Mcps Option".[17]
On March 28, 2008, China Mobile Group announced TD-SCDMA "commercial trials" for 60,000 test users in eight cities from April 1, 2008. Networks using other 3G standards (WCDMA and CDMA2000 EV/DO) had still not been launched in China, as these were delayed until TD-SCDMA was ready for commercial launch.
In January 2009, theMinistry of Industry and Information Technology(MIIT) in China took the unusual step of assigning licences for 3 different third-generation mobile phone standards to three carriers in a long-awaited step that is expected to prompt $41 billion in spending on new equipment. The Chinese-developed standard, TD-SCDMA, was assigned to China Mobile, the world's biggest phone carrier by subscribers. That appeared to be an effort to make sure the new system has the financial and technical backing to succeed. Licences for two existing 3G standards, W-CDMA andCDMA2000 1xEV-DO, were assigned toChina Unicomand China Telecom, respectively. Third-generation, or 3G, technology supports Web surfing, wireless video and other services and the start of service is expected to spur new revenue growth.
The technical split by MIIT has hampered the performance of China Mobile in the 3G market, with users and China Mobile engineers alike pointing to the lack of suitable handsets for the network.[25] Deployment of base stations has also been slow, resulting in a lack of improvement in service for users.[26] The network connection itself has consistently been slower than that of the other two carriers, leading to a sharp decline in market share. By 2011 China Mobile had already moved its focus onto TD-LTE.[27][28] Gradual closures of TD-SCDMA stations started in 2016.[29][30]
A number of mobile telecommunications networks around the world have used third-generation TD-SCDMA / UMTS-TDD (LCR) technology.
In Europe,CEPTallocated the 2010–2020 MHz range for a variant of UMTS-TDD designed for unlicensed, self-provided use.[33]Some telecom groups and jurisdictions have proposed withdrawing this service in favour of licensed UMTS-TDD,[34]due to lack of demand, and lack of development of a UMTS TDD air interface technology suitable for deployment in this band.
Ordinary UMTS uses UTRA-FDD as an air interface and is known asUMTS-FDD. UMTS-FDD uses W-CDMA for multiple access andfrequency-division duplexfor duplexing, meaning that the up-link and down-link transmit on different frequencies. UMTS is usually transmitted on frequencies assigned for1G,2G, or 3G mobile telephone service in the countries of operation.
UMTS-TDD uses time-division duplexing, allowing the up-link and down-link to share the same spectrum. This allows the operator to more flexibly divide the usage of available spectrum according to traffic patterns. For ordinary phone service, the up-link and down-link would be expected to carry approximately equal amounts of data (because every phone call needs a voice transmission in either direction), but Internet-oriented traffic is more frequently one-way. For example, when browsing a website, the user sends short commands to the server, while the server sends back whole files that are generally much larger than those commands.
UMTS-TDD tends to be allocated frequency intended for mobile/wireless Internet services rather than used on existing cellular frequencies. This is, in part, because TDD duplexing is not normally allowed oncellular,PCS/PCN, and 3G frequencies. TDD technologies open up the usage of left-over unpaired spectrum.
Europe-wide, several bands are provided either specifically for UMTS-TDD or for similar technologies: between 1900 MHz and 1920 MHz, and between 2010 MHz and 2025 MHz. In several countries the 2500–2690 MHz band (also known as MMDS in the USA) has been used for UMTS-TDD deployments. Additionally, spectrum around the 3.5 GHz range has been allocated in some countries, notably Britain, in a technology-neutral environment. In the Czech Republic UMTS-TDD is also used in a frequency range around 872 MHz.[35]
UMTS-TDD has been deployed for public and/or private networks in at least nineteen countries around the world, with live systems in, amongst other countries, Australia, Czech Republic, France, Germany, Japan, New Zealand, Botswana, South Africa, the UK, and the USA.
Deployments in the US have thus far been limited. The technology has been selected for a public safety support network used by emergency responders in New York,[36] but outside of some experimental systems, notably one from Nextel, the WiMAX standard appears to have gained greater traction as a general mobile Internet access system.
A variety of Internet-access systems exist which provide broadband speed access to the net. These include WiMAX andHIPERMAN. UMTS-TDD has the advantages of being able to use an operator's existing UMTS/GSM infrastructure, should it have one, and that it includes UMTS modes optimized for circuit switching should, for example, the operator want to offer telephone service. UMTS-TDD's performance is also more consistent. However, UMTS-TDD deployers often have regulatory problems with taking advantage of some of the services UMTS compatibility provides. For example, the UMTS-TDD spectrum in the UK cannot be used to provide telephone service, though the regulatorOFCOMis discussing the possibility of allowing it at some point in the future. Few operators considering UMTS-TDD have existing UMTS/GSM infrastructure.
Additionally, the WiMAX and HIPERMAN systems provide significantly larger bandwidths when the mobile station is near the tower.
Like most mobile Internet access systems, many users who might otherwise choose UMTS-TDD will find their needs covered by the ad hoc collection of unconnected Wi-Fi access points at many restaurants and transportation hubs, and/or by Internet access already provided by their mobile phone operator. By comparison, UMTS-TDD (and systems like WiMAX) offers mobile, and more consistent, access than the former, and generally faster access than the latter.
UMTS also specifies the Universal Terrestrial Radio Access Network (UTRAN), which is composed of multiple base stations, possibly using different terrestrial air interface standards and frequency bands.
UMTS and GSM/EDGE can share a Core Network (CN), making UTRAN an alternative radio access network toGERAN(GSM/EDGE RAN), and allowing (mostly) transparent switching between the RANs according to available coverage and service needs. Because of that, UMTS's and GSM/EDGE's radio access networks are sometimes collectively referred to as UTRAN/GERAN.
UMTS networks are often combined with GSM/EDGE, the latter of which is also a part of IMT-2000.
The UE (User Equipment) interface of the RAN (Radio Access Network) primarily consists of the RRC (Radio Resource Control), PDCP (Packet Data Convergence Protocol), RLC (Radio Link Control) and MAC (Media Access Control) protocols. The RRC protocol handles connection establishment, measurements, radio bearer services, security and handover decisions. The RLC protocol divides into three modes: Transparent Mode (TM), Unacknowledged Mode (UM) and Acknowledged Mode (AM). The functionality of the AM entity resembles TCP operation, whereas UM operation resembles UDP. In TM mode, data is sent to lower layers without adding any header to the SDUs of higher layers. MAC handles the scheduling of data on the air interface depending on parameters configured by the higher layer (RRC).
The set of properties related to data transmission is called a Radio Bearer (RB). This set of properties determines the maximum amount of data allowed in a TTI (Transmission Time Interval). An RB includes RLC information and RB mapping; RB mapping decides the mapping between RB, logical channel and transport channel. Signalling messages are sent on Signalling Radio Bearers (SRBs) and data packets (either CS or PS) are sent on data RBs. RRC and NAS messages go on SRBs.
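As a purely illustrative sketch of these relationships (the class and field names below are invented for this example, and the channel names are simplified placeholders rather than complete 3GPP structures), a radio bearer configuration might be modelled as:

```python
from dataclasses import dataclass
from enum import Enum

class RlcMode(Enum):
    TM = "transparent"      # no RLC header added to higher-layer SDUs
    UM = "unacknowledged"   # UDP-like: no retransmissions
    AM = "acknowledged"     # TCP-like: retransmissions and status reports

@dataclass
class RadioBearer:
    rb_id: int
    rlc_mode: RlcMode
    logical_channel: str    # e.g. "DCCH" for signalling, "DTCH" for data
    transport_channel: str  # e.g. "DCH" or "HS-DSCH"
    is_signalling: bool     # True for SRBs carrying RRC/NAS messages

# A signalling radio bearer and a data radio bearer, as described above.
srb = RadioBearer(rb_id=2, rlc_mode=RlcMode.AM,
                  logical_channel="DCCH", transport_channel="DCH",
                  is_signalling=True)
data_rb = RadioBearer(rb_id=20, rlc_mode=RlcMode.UM,
                      logical_channel="DTCH", transport_channel="HS-DSCH",
                      is_signalling=False)
```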
Security includes two procedures: integrity and ciphering. Integrity validates the source of messages and ensures that no third party on the radio interface has modified them. Ciphering ensures that no one can listen to user data on the air interface. Both integrity and ciphering are applied to SRBs, whereas only ciphering is applied to data RBs.
With Mobile Application Part, UMTS uses the same core network standard as GSM/EDGE. This allows a simple migration for existing GSM operators. However, the migration path to UMTS is still costly: while much of the core infrastructure is shared with GSM, the cost of obtaining new spectrum licenses and overlaying UMTS at existing towers is high.
The CN can be connected to variousbackbone networks, such as theInternetor anIntegrated Services Digital Network(ISDN) telephone network. UMTS (and GERAN) include the three lowest layers ofOSI model. The network layer (OSI 3) includes theRadio Resource Managementprotocol (RRM) that manages the bearer channels between the mobile terminals and the fixed network, including the handovers.
A UARFCN (abbreviationfor UTRA Absolute Radio Frequency Channel Number, where UTRA stands for UMTS Terrestrial Radio Access) is used to identify a frequency in theUMTS frequency bands.
Typically, the channel number is derived from the frequency in MHz through the formula: channel number = frequency × 5. However, this can only represent channels that are centred on a multiple of 200 kHz, which does not align with licensing in North America. 3GPP therefore added several special values for the common North American channels.
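A rough sketch of that conversion (function names invented for illustration; the special North American channel numbers mentioned above are deliberately not handled):

```python
def uarfcn_from_frequency(freq_mhz: float) -> int:
    """General-case UARFCN: channel number = frequency in MHz * 5.

    Only channels centred on a multiple of 200 kHz can be represented;
    the additional special values defined for North American bands are
    not covered here.
    """
    n = freq_mhz * 5
    if abs(n - round(n)) > 1e-9:
        raise ValueError("frequency is not on the 200 kHz raster")
    return round(n)

def frequency_from_uarfcn(uarfcn: int) -> float:
    """Inverse mapping: frequency in MHz = channel number / 5."""
    return uarfcn / 5.0

# Example: a 2100 MHz band downlink carrier at 2112.8 MHz maps to UARFCN 10564.
assert uarfcn_from_frequency(2112.8) == 10564
assert frequency_from_uarfcn(10564) == 2112.8
```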
Over 130 licenses had been awarded to operators worldwide, as of December 2004, specifying W-CDMA radio access technology that builds on GSM. In Europe, the license process occurred at the tail end of the technology bubble, and the auction mechanisms for allocation set up in some countries resulted in some extremely high prices being paid for the original 2100 MHz licenses, notably in the UK and Germany. InGermany, bidders paid a total €50.8 billion for six licenses, two of which were subsequently abandoned and written off by their purchasers (Mobilcom and theSonera/Telefónicaconsortium). It has been suggested that these huge license fees have the character of a very large tax paid on future income expected many years down the road. In any event, the high prices paid put some European telecom operators close to bankruptcy (most notablyKPN). Over the last few years some operators have written off some or all of the license costs. Between 2007 and 2009, all three Finnish carriers began to use 900 MHz UMTS in a shared arrangement with its surrounding 2G GSM base stations for rural area coverage, a trend that is expected to expand over Europe in the next 1–3 years.[needs update]
The 2100 MHz band (downlink around 2100 MHz and uplink around 1900 MHz) allocated for UMTS in Europe and most of Asia is already used in North America. The 1900 MHz range is used for 2G (PCS) services, and 2100 MHz range is used for satellite communications. Regulators have, however, freed up some of the 2100 MHz range for 3G services, together with a different range around 1700 MHz for the uplink.[needs update]
AT&T Wireless launched UMTS services in the United States by the end of 2004, strictly using the existing 1900 MHz spectrum allocated for 2G PCS services. Cingular acquired AT&T Wireless in 2004 and has since launched UMTS in select US cities. Cingular renamed itself AT&T Mobility and rolled out[37] UMTS at 850 MHz in some cities to enhance its existing UMTS network at 1900 MHz, and now offers subscribers a number of dual-band UMTS 850/1900 phones.
T-Mobile's rollout of UMTS in the US was originally focused on the 1700 MHz band. However, T-Mobile has been moving users from 1700 MHz to 1900 MHz (PCS) in order to reallocate the spectrum to 4GLTEservices.[38]
In Canada, UMTS coverage is provided on the 850 MHz and 1900 MHz bands on the Rogers and Bell–Telus networks; Bell and Telus share the network. More recently, new providers Wind Mobile, Mobilicity and Videotron have begun operations in the 1700 MHz band.
In 2008, Australian telco Telstra replaced its existing CDMA network with a national UMTS-based 3G network, branded asNextG, operating in the 850 MHz band. Telstra currently provides UMTS service on this network, and also on the 2100 MHz UMTS network, through a co-ownership of the owning and administrating company 3GIS. This company is also co-owned byHutchison 3G Australia, and this is the primary network used by their customers.Optusis currently rolling out a 3G network operating on the 2100 MHz band in cities and most large towns, and the 900 MHz band in regional areas.Vodafoneis also building a 3G network using the 900 MHz band.
In India, BSNL started its 3G services in October 2009, beginning with the larger cities and then expanding to smaller cities. The 850 MHz and 900 MHz bands provide greater coverage compared to equivalent 1700/1900/2100 MHz networks, and are best suited to regional areas where greater distances separate the base station and subscriber.
Carriers in South America are now also rolling out 850 MHz networks.
UMTS phones (and data cards) are highly portable – they have been designed to roam easily onto other UMTS networks (if the providers have roaming agreements in place). In addition, almost all UMTS phones are UMTS/GSM dual-mode devices, so if a UMTS phone travels outside of UMTS coverage during a call the call may be transparently handed off to available GSM coverage. Roaming charges are usually significantly higher than regular usage charges.
Most UMTS licensees consider ubiquitous, transparent global roaming an important issue. To enable a high degree of interoperability, UMTS phones usually support several different frequencies in addition to their GSM fallback. Different countries support different UMTS frequency bands – Europe initially used 2100 MHz, while most carriers in the USA use 850 MHz and 1900 MHz. T-Mobile launched a network in the US operating at 1700 MHz (uplink) / 2100 MHz (downlink), and these bands have also been adopted elsewhere in the US and in Canada and Latin America. A UMTS phone and network must support a common frequency to work together. Because of the frequencies used, early models of UMTS phones designated for the United States will likely not be operable elsewhere and vice versa. There are now 11 different frequency combinations used around the world – including frequencies formerly used solely for 2G services.
UMTS phones can use a Universal Subscriber Identity Module (USIM, based on GSM's SIM card) and also work (including UMTS services) with GSM SIM cards. This is a global standard of identification, and enables a network to identify and authenticate the (U)SIM in the phone. Roaming agreements between networks allow calls to a customer to be redirected to them while roaming and determine the services (and prices) available to the user. In addition to user subscriber information and authentication information, the (U)SIM provides storage space for phone book contacts. Handsets can store their data in their own memory or on the (U)SIM card (which is usually more limited in its phone book contact information). A (U)SIM can be moved to another UMTS or GSM phone, and the phone will take on the user details of the (U)SIM; it is thus the (U)SIM (not the phone) which determines the phone number and the billing for calls made from the phone.
Japan was the first country to adopt 3G technologies, and since they had not used GSM previously they had no need to build GSM compatibility into their handsets and their 3G handsets were smaller than those available elsewhere. In 2002, NTT DoCoMo's FOMA 3G network was the first commercial UMTS network – using a pre-release specification,[39]it was initially incompatible with the UMTS standard at the radio level but used standard USIM cards, meaning USIM card based roaming was possible (transferring the USIM card into a UMTS or GSM phone when travelling). Both NTT DoCoMo and SoftBank Mobile (which launched 3G in December 2002) now use standard UMTS.
All of the major 2G phone manufacturers (that are still in business) are now manufacturers of 3G phones. The early 3G handsets and modems were specific to the frequencies required in their country, which meant they could only roam to other countries on the same 3G frequency (though they can fall back to the older GSM standard). Canada and USA have a common share of frequencies, as do most European countries. The article UMTS frequency bands is an overview of UMTS network frequencies around the world.
Using a cellular router, PCMCIA or USB card, customers are able to access 3G broadband services regardless of their choice of computer (such as a tablet PC or a PDA). Some software installs itself from the modem, so that in some cases no technical knowledge is required to get online in moments. Using a phone that supports 3G and Bluetooth 2.0, multiple Bluetooth-capable laptops can be connected to the Internet. Some smartphones can also act as a mobile WLAN access point.
There are very few 3G phones or modems available supporting all 3G frequencies (UMTS 850/900/1700/1900/2100 MHz). In 2010, Nokia released a range of phones with penta-band 3G coverage, including the N8 and E7. Many other phones offer more than one band, which still enables extensive roaming. For example, Apple's iPhone 4 contains a quad-band chipset operating on 850/900/1900/2100 MHz, allowing usage in the majority of countries where UMTS-FDD is deployed.
The main competitor to UMTS is CDMA2000 (IMT-MC), which is developed by the3GPP2. Unlike UMTS, CDMA2000 is an evolutionary upgrade to an existing 2G standard, cdmaOne, and is able to operate within the same frequency allocations. This and CDMA2000's narrower bandwidth requirements make it easier to deploy in existing spectra. In some, but not all, cases, existing GSM operators only have enough spectrum to implement either UMTS or GSM, not both. For example, in the US D, E, and F PCS spectrum blocks, the amount of spectrum available is 5 MHz in each direction. A standard UMTS system would saturate that spectrum. Where CDMA2000 is deployed, it usually co-exists with UMTS. In many markets however, the co-existence issue is of little relevance, as legislative hurdles exist to co-deploying two standards in the same licensed slice of spectrum.
Another competitor to UMTS isEDGE(IMT-SC), which is an evolutionary upgrade to the 2G GSM system, leveraging existing GSM spectrums. It is also much easier, quicker, and considerably cheaper for wireless carriers to "bolt-on" EDGE functionality by upgrading their existing GSM transmission hardware to support EDGE rather than having to install almost all brand-new equipment to deliver UMTS. However, being developed by 3GPP just as UMTS, EDGE is not a true competitor. Instead, it is used as a temporary solution preceding UMTS roll-out or as a complement for rural areas. This is facilitated by the fact that GSM/EDGE and UMTS specifications are jointly developed and rely on the same core network, allowing dual-mode operation includingvertical handovers.
China'sTD-SCDMAstandard is often seen as a competitor, too. TD-SCDMA has been added to UMTS' Release 4 as UTRA-TDD 1.28 Mcps Low Chip Rate (UTRA-TDD LCR). UnlikeTD-CDMA(UTRA-TDD 3.84 Mcps High Chip Rate, UTRA-TDD HCR) which complements W-CDMA (UTRA-FDD), it is suitable for both micro and macrocells. However, the lack of vendors' support is preventing it from being a real competitor.
While DECT is technically capable of competing with UMTS and other cellular networks in densely populated, urban areas, it has only been deployed for domestic cordless phones and private in-house networks.
All of these competitors have been accepted by ITU as part of the IMT-2000 family of 3G standards, along with UMTS-FDD.
On the Internet access side, competing systems include WiMAX andFlash-OFDM.
From a GSM/GPRS network, core network elements can generally be reused, including the home location register (HLR), visitor location register (VLR), equipment identity register (EIR), authentication centre (AuC) and, depending on the vendor, the mobile switching centre (MSC) and the GPRS support nodes (SGSN and GGSN).
From the GSM/GPRS radio network, the radio access elements – the base transceiver station (BTS) and the base station controller (BSC) – cannot be reused.
They can remain in the network and be used in dual-network operation, where 2G and 3G networks co-exist during network migration while new 3G terminals become available for use in the network.
The UMTS network introduces new network elements that function as specified by 3GPP: the Node B (the UMTS base station), the radio network controller (RNC) and the media gateway (MGW).
The functionality of the MSC changes when moving to UMTS. In a GSM system the MSC handles all the circuit-switched operations, such as connecting the A- and B-subscribers through the network. In UMTS the media gateway (MGW) takes care of data transfer in circuit-switched networks, while the MSC controls the MGW's operations.
Some countries, including the United States, have allocated spectrum differently from theITUrecommendations, so that the standard bands most commonly used for UMTS (UMTS-2100) have not been available.[citation needed]In those countries, alternative bands are used, preventing the interoperability of existing UMTS-2100 equipment, and requiring the design and manufacture of different equipment for the use in these markets. As is the case with GSM900 today[when?], standard UMTS 2100 MHz equipment will not work in those markets. However, it appears as though UMTS is not suffering as much from handset band compatibility issues as GSM did, as many UMTS handsets are multi-band in both UMTS and GSM modes. Penta-band (850, 900, 1700, 2100, and 1900 MHz bands), quad-band GSM (850, 900, 1800, and 1900 MHz bands) and tri-band UMTS (850, 1900, and 2100 MHz bands) handsets are becoming more commonplace.[40]
In its early days[when?], UMTS had problems in many countries: Overweight handsets with poor battery life were first to arrive on a market highly sensitive to weight and form factor.[citation needed]The Motorola A830, a debut handset on Hutchison's 3 network, weighed more than 200 grams and even featured a detachable camera to reduce handset weight. Another significant issue involved call reliability, related to problems with handover from UMTS to GSM. Customers found their connections being dropped as handovers were possible only in one direction (UMTS → GSM), with the handset only changing back to UMTS after hanging up. In most networks around the world this is no longer an issue.[citation needed]
Compared to GSM, UMTS networks initially required a higherbase stationdensity. For fully-fledged UMTS incorporatingvideo on demandfeatures, one base station needed to be set up every 1–1.5 km (0.62–0.93 mi). This was the case when only the 2100 MHz band was being used, however with the growing use of lower-frequency bands (such as 850 and 900 MHz) this is no longer so. This has led to increasing rollout of the lower-band networks by operators since 2006.[citation needed]
Even with current technologies and low-band UMTS, telephony and data over UMTS requires more power than on comparable GSM networks.Apple Inc.cited[41]UMTS power consumption as the reason that the first generationiPhoneonly supported EDGE. Their release of the iPhone 3G quotes talk time on UMTS as half that available when the handset is set to use GSM. Other manufacturers indicate different battery lifetime for UMTS mode compared to GSM mode as well. As battery and network technology improve, this issue is diminishing.
As early as 2008, it was known that carrier networks can be used to surreptitiously gather user location information.[42]In August 2014, theWashington Postreported on widespread marketing of surveillance systems usingSignalling System No. 7(SS7) protocols to locate callers anywhere in the world.[42]
In December 2014, news broke that SS7's very own functions can be repurposed for surveillance, because of its relaxed security, in order to listen to calls in real time or to record encrypted calls and texts for later decryption, or to defraud users and cellular carriers.[43]
Deutsche Telekomand Vodafone declared the same day that they had fixed gaps in their networks, but that the problem is global and can only be fixed with a telecommunication system-wide solution.[44]
The evolution of UMTS progresses according to planned releases. Each release is designed to introduce new features and improve upon existing ones.
https://en.wikipedia.org/wiki/Universal_Mobile_Telecommunications_System
A5/1is astream cipherused to provide over-the-air communicationprivacyin theGSMcellular telephonestandard. It is one of several implementations of the A5 security protocol. It was initially kept secret, but became public knowledge through leaks andreverse engineering. A number of serious weaknesses in the cipher have been identified.
A5/1 is used inEuropeand the United States.A5/2was a deliberate weakening of the algorithm for certain export regions.[1]A5/1 was developed in 1987, when GSM was not yet considered for use outside Europe, andA5/2was developed in 1989. Though both were initially kept secret, the general design was leaked in 1994 and the algorithms were entirely reverse engineered in 1999 byMarc Bricenofrom a GSM telephone. In 2000, around 130 million GSM customers relied on A5/1 to protect the confidentiality of their voice communications.[citation needed]
Security researcherRoss Andersonreported in 1994 that "there was a terrific row between theNATOsignal intelligence agenciesin the mid-1980s over whether GSM encryption should be strong or not. The Germans said it should be, as they shared a long border with theWarsaw Pact; but the other countries didn't feel this way, and the algorithm as now fielded is a French design."[2]
A GSM transmission is organised as sequences of bursts. In a typical channel and in one direction, one burst is sent every 4.615 milliseconds and contains 114 bits available for information. A5/1 is used to produce for each burst a 114-bit sequence of keystream which is XORed with the 114 bits prior to modulation. A5/1 is initialised using a 64-bit key together with a publicly known 22-bit frame number. Older fielded GSM implementations using Comp128v1 for key generation had 10 of the key bits fixed at zero, resulting in an effective key length of 54 bits. This weakness was rectified with the introduction of Comp128v3, which yields proper 64-bit keys. When operating in GPRS/EDGE mode, the higher-bandwidth radio modulation allows for larger 348-bit frames, and A5/3 is then used in a stream cipher mode to maintain confidentiality.
A5/1 is based around a combination of three linear-feedback shift registers (LFSRs) with irregular clocking. The three shift registers are specified as follows: R1 is 19 bits long, with feedback taps at bits 13, 16, 17 and 18 and clocking bit 8; R2 is 22 bits long, with feedback taps at bits 20 and 21 and clocking bit 10; and R3 is 23 bits long, with feedback taps at bits 7, 20, 21 and 22 and clocking bit 10.
These degrees were not chosen at random: since the degrees of the three registers are relatively prime, the period of this generator is the product of the periods of the three registers, which is approximately $2^{64}$ bits.
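With the 19-, 22- and 23-bit register lengths, the product of the individual maximal periods is
$$(2^{19}-1)(2^{22}-1)(2^{23}-1)\approx 2^{64}.$$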
The bits are indexed with theleast significant bit(LSB) as 0.
The registers are clocked in a stop/go fashion using a majority rule. Each register has an associated clocking bit. At each cycle, the clocking bits of all three registers are examined and the majority bit is determined. A register is clocked if its clocking bit agrees with the majority bit. Hence at each step either two or three registers are clocked, and each register steps with probability 3/4.
Initially, the registers are set to zero. Then for 64 cycles, the 64-bit secret key K is mixed in according to the following scheme: in cycle $0\le i<64$, the $i$-th key bit is added to the least significant bit of each register using XOR.
Each register is then clocked.
Similarly, the 22 bits of the frame number are added in 22 cycles. Then the entire system is clocked using the normal majority clocking mechanism for 100 cycles, with the output discarded. After this is completed, the cipher is ready to produce two 114-bit sequences of output keystream, the first 114 bits for the downlink and the last 114 for the uplink.
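A purely illustrative Python sketch of the scheme described above is given below. The register lengths, feedback taps, clocking bits and output-bit positions are the commonly published A5/1 parameters; the bit ordering chosen for the key and frame number is an assumption for illustration and may differ from deployed implementations.

```python
R_LEN   = (19, 22, 23)                                    # register lengths in bits
R_TAPS  = ((13, 16, 17, 18), (20, 21), (7, 20, 21, 22))   # feedback tap positions
R_CLOCK = (8, 10, 10)                                     # majority (clocking) bits

def _bit(x, i):
    return (x >> i) & 1

def _clock_one(reg, length, taps):
    """Shift the register one step, feeding back the XOR of the tap bits."""
    fb = 0
    for t in taps:
        fb ^= _bit(reg, t)
    return ((reg << 1) | fb) & ((1 << length) - 1)

class A51:
    def __init__(self, key64, frame22):
        self.R = [0, 0, 0]
        # Mix in the 64 key bits, then the 22 frame bits: XOR each bit into
        # the LSB of every register, then clock all three registers.
        for i in range(64):
            self._load_bit((key64 >> i) & 1)
        for i in range(22):
            self._load_bit((frame22 >> i) & 1)
        # 100 cycles of ordinary majority clocking, output discarded.
        for _ in range(100):
            self._majority_clock()

    def _load_bit(self, b):
        for j in range(3):
            self.R[j] ^= b
            self.R[j] = _clock_one(self.R[j], R_LEN[j], R_TAPS[j])

    def _majority_clock(self):
        c = [_bit(self.R[j], R_CLOCK[j]) for j in range(3)]
        maj = 1 if sum(c) >= 2 else 0
        for j in range(3):
            if c[j] == maj:          # stop/go rule: clock only the agreeing registers
                self.R[j] = _clock_one(self.R[j], R_LEN[j], R_TAPS[j])
        # Output bit: XOR of the most significant bit of each register.
        return _bit(self.R[0], 18) ^ _bit(self.R[1], 21) ^ _bit(self.R[2], 22)

    def keystream(self, n):
        return [self._majority_clock() for _ in range(n)]

# 114 keystream bits for the downlink burst, then 114 for the uplink burst.
cipher = A51(key64=0x1223456789ABCDEF, frame22=0x134)
downlink = cipher.keystream(114)
uplink = cipher.keystream(114)
```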
A number of attacks on A5/1 have been published, and the AmericanNational Security Agencyis able to routinely decrypt A5/1 messages according to released internal documents.[3]
Some attacks require an expensive preprocessing stage after which the cipher can be broken in minutes or seconds. Originally, the weaknesses were passive attacks using theknown plaintextassumption. In 2003, more serious weaknesses were identified which can be exploited in theciphertext-only scenario, or by an active attacker. In 2006 Elad Barkan,Eli Bihamand Nathan Keller demonstrated attacks against A5/1,A5/3, or even GPRS that allow attackers to tap GSM mobile phone conversations and decrypt them either in real-time, or at any later time.
According to professor Jan Arild Audestad, at the standardization process which started in 1982, A5/1 was originally proposed to have a key length of 128 bits. At that time, 128 bits was projected to be secure for at least 15 years. It is now believed that 128 bits would in fact also still be secure until the advent of quantum computing. Audestad, Peter van der Arend, and Thomas Haug say that the British insisted on weaker encryption, with Haug saying he was told by the British delegate that this was to allow the British secret service to eavesdrop more easily. The British proposed a key length of 48 bits, while the West Germans wanted stronger encryption to protect against East German spying, so the compromise became a key length of 54 bits.[4]
The first attack on the A5/1 was proposed byRoss Andersonin 1994. Anderson's basic idea was to guess the complete content of the registers R1 and R2 and about half of the register R3. In this way the clocking of all three registers is determined and the second half of R3 can be computed.[2]
In 1997, Golic presented an attack based on solving sets of linear equations which has a time complexity of $2^{40.16}$ (the units are in terms of the number of solutions of a system of linear equations which are required).
In 2000, Alex Biryukov, Adi Shamir and David Wagner showed that A5/1 can be cryptanalysed in real time using a time-memory tradeoff attack,[5] based on earlier work by Jovan Golic.[6] One tradeoff allows an attacker to reconstruct the key in one second from two minutes of known plaintext, or in several minutes from two seconds of known plaintext, but the attacker must first complete an expensive preprocessing stage which requires $2^{48}$ steps to compute around 300 GB of data. Several tradeoffs between preprocessing, data requirements, attack time and memory complexity are possible.
The same year, Eli Biham and Orr Dunkelman also published an attack on A5/1 with a total work complexity of $2^{39.91}$ A5/1 clockings given $2^{20.8}$ bits of known plaintext. The attack requires 32 GB of data storage after a precomputation stage of $2^{38}$.[7]
Ekdahl and Johansson published an attack on the initialisation procedure which breaks A5/1 in a few minutes using two to five minutes of conversation plaintext.[8]This attack does not require a preprocessing stage. In 2004, Maximovet al.improved this result to an attack requiring "less than one minute of computations, and a few seconds of known conversation". The attack was further improved byElad BarkanandEli Bihamin 2005.[9]
In 2003, Barkanet al.published several attacks on GSM encryption.[10]The first is an active attack. GSM phones can be convinced to use the much weakerA5/2cipher briefly. A5/2 can be broken easily, and the phone uses the same key as for the stronger A5/1 algorithm. A second attack on A5/1 is outlined, aciphertext-onlytime-memory tradeoff attack which requires a large amount of precomputation.
In 2006, Elad Barkan, Eli Biham and Nathan Keller published the full version of their 2003 paper, with attacks against A5/X ciphers. The authors claim:[11]
We present a very practical ciphertext-only cryptanalysis of GSM encrypted communication, and various active attacks on the GSM protocols. These attacks can even break into GSM networks that use "unbreakable" ciphers. We first describe a ciphertext-only attack on A5/2 that requires a few dozen milliseconds of encrypted off-the-air cellular conversation and finds the correct key in less than a second on a personal computer. We extend this attack to a (more complex) ciphertext-only attack on A5/1. We then describe new (active) attacks on the protocols of networks that use A5/1, A5/3, or even GPRS. These attacks exploit flaws in the GSM protocols, and they work whenever the mobile phone supports a weak cipher such as A5/2. We emphasize that these attacks are on the protocols, and are thus applicable whenever the cellular phone supports a weak cipher, for example, they are also applicable for attacking A5/3 networks using the cryptanalysis of A5/1. Unlike previous attacks on GSM that require unrealistic information, like long known plaintext periods, our attacks are very practical and do not require any knowledge of the content of the conversation. Furthermore, we describe how to fortify the attacks to withstand reception errors. As a result, our attacks allow attackers to tap conversations and decrypt them either in real-time, or at any later time.
In 2007 the Universities of Bochum and Kiel started a research project to create a massively parallel FPGA-based cryptographic accelerator, COPACOBANA. COPACOBANA was the first commercially available solution[12] using fast time-memory trade-off techniques that could be used to attack the popular A5/1 and A5/2 algorithms, used in GSM voice encryption, as well as the Data Encryption Standard (DES). It also enables brute force attacks against GSM, eliminating the need for large precomputed lookup tables.
In 2008, the groupThe Hackers Choicelaunched a project to develop a practical attack on A5/1. The attack requires the construction of a large look-up table of approximately 3 terabytes. Together with the scanning capabilities developed as part of the sister project, the group expected to be able to record any GSM call or SMS encrypted with A5/1, and within about 3–5 minutes derive the encryption key and hence listen to the call and read the SMS in clear. But the tables weren't released.[13]
A similar effort, the A5/1 Cracking Project, was announced at the2009 Black Hat security conferenceby cryptographersKarsten Nohland Sascha Krißler. It created the look-up tables usingNvidiaGPGPUsvia apeer-to-peerdistributed computingarchitecture. Starting in the middle of September 2009, the project ran the equivalent of 12 Nvidia GeForce GTX 260. According to the authors, the approach can be used on any cipher with key size up to 64-bits.[14]
In December 2009, the A5/1 Cracking Project attack tables for A5/1 were announced by Chris Paget and Karsten Nohl. The tables use a combination of compression techniques, includingrainbow tablesand distinguished point chains. These tables constituted only parts of the 1.7 TB completed table and had been computed during three months using 40 distributedCUDAnodes and then published overBitTorrent.[13][14][15][16]More recently the project has announced a switch to faster ATIEvergreencode, together with a change in the format of the tables andFrank A. Stevensonannounced breaks of A5/1 using the ATI generated tables.[17]
Documents leaked byEdward Snowdenin 2013 state that the NSA "can process encrypted A5/1".[18]
Since the degrees of the three LFSRs are relatively prime, the period of this generator is the product of the periods of the three LFSRs, which is approximately $2^{64}$ bits.
One might think of using A5/1 as a pseudo-random generator with a 64-bit initialization seed (key size), but it is not reliable in that role. It loses its randomness after only 8 MB (which represents the period of the largest of the three registers).[19]
https://en.wikipedia.org/wiki/A5/1
In arithmetic and computer programming, the extended Euclidean algorithm is an extension of the Euclidean algorithm: it computes, in addition to the greatest common divisor (gcd) of integers a and b, the coefficients of Bézout's identity, which are integers x and y such that
$$ax+by=\gcd(a,b).$$
This is a certifying algorithm, because the gcd is the only number that can simultaneously satisfy this equation and divide the inputs.[1] It also allows one to compute, at almost no extra cost, the quotients of a and b by their greatest common divisor.
Extended Euclidean algorithmalso refers to avery similar algorithmfor computing thepolynomial greatest common divisorand the coefficients of Bézout's identity of twounivariate polynomials.
The extended Euclidean algorithm is particularly useful whenaandbarecoprime. With that provision,xis themodular multiplicative inverseofamodulob, andyis the modular multiplicative inverse ofbmoduloa. Similarly, the polynomial extended Euclidean algorithm allows one to compute themultiplicative inverseinalgebraic field extensionsand, in particular infinite fieldsof non prime order. It follows that both extended Euclidean algorithms are widely used incryptography. In particular, the computation of themodular multiplicative inverseis an essential step in the derivation of key-pairs in theRSApublic-key encryption method.
The standard Euclidean algorithm proceeds by a succession of Euclidean divisions whose quotients are not used. Only the remainders are kept. For the extended algorithm, the successive quotients are used. More precisely, the standard Euclidean algorithm with $a$ and $b$ as input consists of computing a sequence $q_1,\ldots,q_k$ of quotients and a sequence $r_0,\ldots,r_{k+1}$ of remainders such that
$$r_0=a,\qquad r_1=b,\qquad r_{i+1}=r_{i-1}-q_i r_i\quad\text{and}\quad 0\le r_{i+1}<|r_i|.$$
It is the main property of Euclidean division that the inequalities on the right define $q_i$ and $r_{i+1}$ uniquely from $r_{i-1}$ and $r_i$.
The computation stops when one reaches a remainder $r_{k+1}$ which is zero; the greatest common divisor is then the last non-zero remainder $r_k$.
The extended Euclidean algorithm proceeds similarly, but adds two other sequences, defined by
$$s_0=1,\quad s_1=0,\quad s_{i+1}=s_{i-1}-q_i s_i$$
and
$$t_0=0,\quad t_1=1,\quad t_{i+1}=t_{i-1}-q_i t_i,$$
using the same quotients $q_i$ as above.
The computation also stops when $r_{k+1}=0$ and gives the following: $r_k$ is the greatest common divisor of $a$ and $b$; the Bézout coefficients are $s_k$ and $t_k$, that is, $as_k+bt_k=\gcd(a,b)$; and $s_{k+1}$ and $t_{k+1}$ are, up to their signs, the quotients of $b$ and $a$ (respectively) by $\gcd(a,b)$.
Moreover, if $a$ and $b$ are both positive and $\gcd(a,b)\neq\min(a,b)$, then
$$|s_i|\leq\left\lfloor\frac{b}{2\gcd(a,b)}\right\rfloor\quad\text{and}\quad|t_i|\leq\left\lfloor\frac{a}{2\gcd(a,b)}\right\rfloor$$
for $0\leq i\leq k$, where $\lfloor x\rfloor$ denotes the integral part of $x$, that is, the greatest integer not greater than $x$.
This implies that the pair of Bézout's coefficients provided by the extended Euclidean algorithm is the minimal pair of Bézout coefficients, being the unique pair satisfying both of the above inequalities.
It also means that the algorithm can be done withoutinteger overflowby acomputer programusing integers of a fixed size that is larger than that ofaandb.
The following table shows how the extended Euclidean algorithm proceeds with input 240 and 46. The greatest common divisor is the last non-zero entry, 2, in the column "remainder". The computation stops at row 6, because the remainder in it is 0. Bézout coefficients appear in the last two columns of the second-to-last row. In fact, it is easy to verify that −9 × 240 + 47 × 46 = 2. Finally, the last two entries 23 and −120 of the last row are, up to the sign, the quotients of the input 46 and 240 by the greatest common divisor 2.
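A small, purely illustrative Python sketch reproduces the rows of this table (quotient, remainder, and the coefficients s and t):

```python
def extended_gcd_trace(a, b):
    """Print the rows (quotient, remainder, s, t) of the extended Euclidean
    algorithm, as in the worked example for 240 and 46."""
    old_r, r = a, b          # remainders r_0, r_1
    old_s, s = 1, 0          # coefficients s_0, s_1
    old_t, t = 0, 1          # coefficients t_0, t_1
    print(f"{'q':>4} {'r':>6} {'s':>6} {'t':>6}")
    print(f"{'':>4} {old_r:>6} {old_s:>6} {old_t:>6}")
    print(f"{'':>4} {r:>6} {s:>6} {t:>6}")
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
        old_t, t = t, old_t - q * t
        print(f"{q:>4} {r:>6} {s:>6} {t:>6}")
    # old_r is gcd(a, b); old_s and old_t are the Bézout coefficients.
    print(f"gcd = {old_r}, check: {old_s} * {a} + {old_t} * {b} = {old_s*a + old_t*b}")

extended_gcd_trace(240, 46)   # ends with gcd = 2 and -9 * 240 + 47 * 46 = 2
```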
As $0\le r_{i+1}<|r_i|$, the sequence of the $r_i$ is a decreasing sequence of nonnegative integers (from $i=2$ on). Thus it must stop with some $r_{k+1}=0$. This proves that the algorithm stops eventually.
As $r_{i+1}=r_{i-1}-r_i q_i$, the greatest common divisor is the same for $(r_{i-1},r_i)$ and $(r_i,r_{i+1})$. This shows that the greatest common divisor of the input $a=r_0$, $b=r_1$ is the same as that of $r_k$, $r_{k+1}=0$. This proves that $r_k$ is the greatest common divisor of $a$ and $b$. (Until this point, the proof is the same as that of the classical Euclidean algorithm.)
As $a=r_0$ and $b=r_1$, we have $as_i+bt_i=r_i$ for $i=0$ and $1$. The relation follows by induction for all $i>1$:
$$r_{i+1}=r_{i-1}-r_iq_i=(as_{i-1}+bt_{i-1})-(as_i+bt_i)q_i=(as_{i-1}-as_iq_i)+(bt_{i-1}-bt_iq_i)=as_{i+1}+bt_{i+1}.$$
Thus $s_k$ and $t_k$ are Bézout coefficients.
Consider the matrix
$$A_i=\begin{pmatrix}s_{i-1}&s_i\\t_{i-1}&t_i\end{pmatrix}.$$
The recurrence relation may be rewritten in matrix form
$$A_{i+1}=A_i\cdot\begin{pmatrix}0&1\\1&-q_i\end{pmatrix}.$$
The matrix $A_1$ is the identity matrix and its determinant is one. The determinant of the rightmost matrix in the preceding formula is −1. It follows that the determinant of $A_i$ is $(-1)^{i-1}$. In particular, for $i=k+1$, we have $s_k t_{k+1}-t_k s_{k+1}=(-1)^k$. Viewing this as a Bézout's identity, this shows that $s_{k+1}$ and $t_{k+1}$ are coprime. The relation $as_{k+1}+bt_{k+1}=0$ that has been proved above and Euclid's lemma show that $s_{k+1}$ divides $b$, that is, $b=ds_{k+1}$ for some integer $d$. Dividing the relation $as_{k+1}+bt_{k+1}=0$ by $s_{k+1}$ gives $a=-dt_{k+1}$. So, $s_{k+1}$ and $-t_{k+1}$ are coprime integers that are the quotients of $a$ and $b$ by a common factor, which is thus their greatest common divisor or its opposite.
To prove the last assertion, assume that $a$ and $b$ are both positive and $\gcd(a,b)\neq\min(a,b)$. Then $a\neq b$, and if $a<b$, it can be seen that the $s$ and $t$ sequences for $(a,b)$ under the EEA are, up to initial 0s and 1s, the $t$ and $s$ sequences for $(b,a)$. The definitions then show that the $(a,b)$ case reduces to the $(b,a)$ case. So assume that $a>b$ without loss of generality.
It can be seen that $s_2$ is 1 and $s_3$ (which exists by $\gcd(a,b)\neq\min(a,b)$) is a negative integer. Thereafter, the $s_i$ alternate in sign and strictly increase in magnitude, which follows inductively from the definitions and the fact that $q_i\geq 1$ for $1\leq i\leq k$; the case $i=1$ holds because $a>b$. The same is true for the $t_i$ after the first few terms, for the same reason. Furthermore, it is easy to see that $q_k\geq 2$ (when $a$ and $b$ are both positive and $\gcd(a,b)\neq\min(a,b)$). Thus, noticing that $|s_{k+1}|=|s_{k-1}|+q_k|s_k|$, we obtain
$$|s_{k+1}|=\left|\frac{b}{\gcd(a,b)}\right|\geq 2|s_k|\qquad\text{and}\qquad |t_{k+1}|=\left|\frac{a}{\gcd(a,b)}\right|\geq 2|t_k|.$$
This, combined with the fact that $s_k$ and $t_k$ are at least as large in absolute value as any previous $s_i$ and $t_i$ respectively, completes the proof.
For univariate polynomials with coefficients in a field, everything works similarly: Euclidean division, Bézout's identity and the extended Euclidean algorithm. The first difference is that, in the Euclidean division and the algorithm, the inequality $0\le r_{i+1}<|r_i|$ has to be replaced by an inequality on the degrees, $\deg r_{i+1}<\deg r_i$. Otherwise, everything which precedes in this article remains the same, simply by replacing integers by polynomials.
A second difference lies in the bound on the size of the Bézout coefficients provided by the extended Euclidean algorithm, which is more accurate in the polynomial case, leading to the following theorem.
If a and b are two nonzero polynomials, then the extended Euclidean algorithm produces the unique pair of polynomials (s, t) such that
$$as+bt=\gcd(a,b)$$
and
$$\deg s<\deg b-\deg(\gcd(a,b)),\qquad \deg t<\deg a-\deg(\gcd(a,b)).$$
A third difference is that, in the polynomial case, the greatest common divisor is defined only up to the multiplication by a non zero constant. There are several ways to define unambiguously a greatest common divisor.
In mathematics, it is common to require that the greatest common divisor be a monic polynomial. To get this, it suffices to divide every element of the output by the leading coefficient of $r_k$. This ensures that, if a and b are coprime, one gets 1 on the right-hand side of Bézout's identity. Otherwise, one may get any non-zero constant. In computer algebra, the polynomials commonly have integer coefficients, and this way of normalizing the greatest common divisor introduces too many fractions to be convenient.
The second way to normalize the greatest common divisor in the case of polynomials with integer coefficients is to divide every output by the content of $r_k$, to get a primitive greatest common divisor. If the input polynomials are coprime, this normalisation also provides a greatest common divisor equal to 1. The drawback of this approach is that a lot of fractions have to be computed and simplified during the computation.
A third approach consists in extending the algorithm of subresultant pseudo-remainder sequences in a way that is similar to the extension of the Euclidean algorithm to the extended Euclidean algorithm. This allows that, when starting with polynomials with integer coefficients, all polynomials that are computed have integer coefficients. Moreover, every computed remainder $r_i$ is a subresultant polynomial. In particular, if the input polynomials are coprime, then Bézout's identity becomes
$$as+bt=\operatorname{Res}(a,b),$$
where $\operatorname{Res}(a,b)$ denotes the resultant of a and b. In this form of Bézout's identity, there is no denominator in the formula. If one divides everything by the resultant, one gets the classical Bézout's identity, with an explicit common denominator for the rational numbers that appear in it.
To implement the algorithm that is described above, one should first remark that only the last two values of the indexed variables are needed at each step. Thus, for saving memory, each indexed variable must be replaced by just two variables.
For simplicity, the following algorithm (and the other algorithms in this article) uses parallel assignments. In a programming language which does not have this feature, the parallel assignments need to be simulated with an auxiliary variable. For example, the first one,
(old_r, r) := (r, old_r − quotient × r)
is equivalent to
prov := r; r := old_r − quotient × r; old_r := prov
and similarly for the other parallel assignments.
This leads to the following code:
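A Python sketch consistent with the description above (the variable names old_r, r, old_s, s, old_t and t follow the text, and Python's tuple assignment plays the role of the parallel assignments):

```python
def extended_gcd(a, b):
    """Return (g, x, y, qa, qb) where g = gcd(a, b), a*x + b*y = g, and
    qa, qb are (up to sign) the quotients of a and b by g."""
    old_r, r = a, b
    old_s, s = 1, 0
    old_t, t = 0, 1
    while r != 0:
        quotient = old_r // r
        # Parallel assignments, simulated here with Python tuple assignment.
        old_r, r = r, old_r - quotient * r
        old_s, s = s, old_s - quotient * s
        old_t, t = t, old_t - quotient * t
    # Bézout coefficients: old_s, old_t; greatest common divisor: old_r;
    # quotients by the gcd (possibly with the wrong sign, see below): t, s.
    return old_r, old_s, old_t, t, s

g, x, y, qa, qb = extended_gcd(240, 46)
assert (g, x, y) == (2, -9, 47)
assert 240 * x + 46 * y == g
```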
The quotients ofaandbby their greatest common divisor, which is output, may have an incorrect sign. This is easy to correct at the end of the computation but has not been done here for simplifying the code. Similarly, if eitheraorbis zero and the other is negative, the greatest common divisor that is output is negative, and all the signs of the output must be changed.
Finally, notice that in Bézout's identity, $ax+by=\gcd(a,b)$, one can solve for $y$ given $a$, $b$, $x$ and $\gcd(a,b)$. Thus, an optimization to the above algorithm is to compute only the $s_k$ sequence (which yields the Bézout coefficient $x$), and then compute $y$ at the end:
$$y=\frac{\gcd(a,b)-ax}{b}.$$
However, in many cases this is not really an optimization: whereas the former algorithm is not susceptible to overflow when used with machine integers (that is, integers with a fixed upper bound of digits), the multiplication ofold_s * ain computation ofbezout_tcan overflow, limiting this optimization to inputs which can be represented in less than half the maximal size. When using integers of unbounded size, the time needed for multiplication and division grows quadratically with the size of the integers. This implies that the "optimisation" replaces a sequence of multiplications/divisions of small integers by a single multiplication/division, which requires more computing time than the operations that it replaces, taken together.
A fraction a/b is in canonical simplified form if a and b are coprime and b is positive. This canonical simplified form can be obtained by replacing the three output lines of the preceding pseudocode with lines that output the simplified numerator and denominator instead.
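A minimal Python sketch of the resulting procedure, reusing the loop from the sketch above (the name simplify_fraction is illustrative), could be:

    def simplify_fraction(a, b):
        """Return (numerator, denominator) of a/b in canonical simplified form (b nonzero)."""
        old_r, r = a, b
        old_s, s = 1, 0
        old_t, t = 0, 1
        while r != 0:
            quotient = old_r // r
            old_r, r = r, old_r - quotient * r
            old_s, s = s, old_s - quotient * s
            old_t, t = t, old_t - quotient * t
        # At this point a*s + b*t = 0 with s and t coprime, so a/b = -t/s.
        numerator, denominator = -t, s
        if denominator < 0:
            numerator, denominator = -numerator, -denominator
        return numerator, denominator

    simplify_fraction(240, 46)   # -> (120, 23)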
The proof of this algorithm relies on the fact thatsandtare two coprime integers such thatas+bt= 0, and thusab=−ts{\displaystyle {\frac {a}{b}}=-{\frac {t}{s}}}. To get the canonical simplified form, it suffices to move the minus sign for having a positive denominator.
Ifbdividesaevenly, the algorithm executes only one iteration, and we haves= 1at the end of the algorithm. It is the only case where the output is an integer.
The extended Euclidean algorithm is the essential tool for computingmultiplicative inversesin modular structures, typically themodular integersand thealgebraic field extensions. A notable instance of the latter case are the finite fields of non-prime order.
Ifnis a positive integer, theringZ/nZmay be identified with the set{0, 1, ...,n-1}of the remainders ofEuclidean divisionbyn, the addition and the multiplication consisting in taking the remainder bynof the result of the addition and the multiplication of integers. An elementaofZ/nZhas a multiplicative inverse (that is, it is aunit) if it iscoprimeton. In particular, ifnisprime,ahas a multiplicative inverse if it is not zero (modulon). ThusZ/nZis a field if and only ifnis prime.
Bézout's identity asserts thataandnare coprime if and only if there exist integerssandtsuch that
Reducing this identity modulongives
Thust, or, more exactly, the remainder of the division oftbyn, is the multiplicative inverse ofamodulon.
To adapt the extended Euclidean algorithm to this problem, one should remark that the Bézout coefficient of n is not needed, and thus does not need to be computed. Also, to get a result that is positive and smaller than n, one may use the fact that the integer t provided by the algorithm satisfies |t| < n. That is, if t < 0, one must add n to it at the end. This results in the pseudocode, in which the input n is an integer larger than 1.
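That pseudocode is not shown in this extract; a minimal Python sketch following the same description, tracking only the coefficient of a, might be (the name modular_inverse is illustrative):

    def modular_inverse(a, n):
        """Return the inverse of a modulo n (n > 1), or raise if a and n are not coprime."""
        t, new_t = 0, 1
        r, new_r = n, a % n
        while new_r != 0:
            quotient = r // new_r
            t, new_t = new_t, t - quotient * new_t
            r, new_r = new_r, r - quotient * new_r
        if r != 1:
            raise ValueError("a is not invertible modulo n")
        return t + n if t < 0 else t

    modular_inverse(3, 11)   # -> 4, since 3 * 4 = 12 = 1 (mod 11)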
The extended Euclidean algorithm is also the main tool for computingmultiplicative inversesinsimple algebraic field extensions. An important case, widely used incryptographyandcoding theory, is that offinite fieldsof non-prime order. In fact, ifpis a prime number, andq=pd, the field of orderqis a simple algebraic extension of theprime fieldofpelements, generated by a root of anirreducible polynomialof degreed.
A simple algebraic extensionLof a fieldK, generated by the root of an irreducible polynomialpof degreedmay be identified to thequotient ringK[X]/⟨p⟩,{\displaystyle K[X]/\langle p\rangle ,}, and its elements are inbijective correspondencewith the polynomials of degree less thand. The addition inLis the addition of polynomials. The multiplication inLis the remainder of theEuclidean divisionbypof the product of polynomials. Thus, to complete the arithmetic inL, it remains only to define how to compute multiplicative inverses. This is done by the extended Euclidean algorithm.
The algorithm is very similar to that provided above for computing the modular multiplicative inverse. There are two main differences: firstly, the last-but-one line is not needed, because the Bézout coefficient that is provided always has a degree less than d. Secondly, the greatest common divisor which is provided, when the input polynomials are coprime, may be any nonzero element of K; this Bézout coefficient (a polynomial generally of positive degree) must therefore be multiplied by the inverse of this element of K. In the pseudocode which follows, p is a polynomial of degree greater than one, and a is a polynomial.
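That pseudocode is not reproduced in this extract. The following Python sketch is an illustration under the stated assumptions (the prime is called q and the irreducible polynomial f, to avoid clashing with other variables; polynomials are lists of coefficients in order of increasing degree):

    def trim(p):
        """Drop trailing zero coefficients (the zero polynomial becomes [])."""
        while p and p[-1] == 0:
            p = p[:-1]
        return p

    def poly_divmod(a, b, q):
        """Euclidean division of a by b over GF(q); returns (quotient, remainder)."""
        a = trim([c % q for c in a])
        b = trim([c % q for c in b])
        quotient = [0] * max(len(a) - len(b) + 1, 1)
        inv_lead = pow(b[-1], -1, q)                 # inverse of the leading coefficient of b
        while a and len(a) >= len(b):
            shift = len(a) - len(b)
            coef = (a[-1] * inv_lead) % q
            quotient[shift] = coef
            for i, c in enumerate(b):
                a[i + shift] = (a[i + shift] - coef * c) % q
            a = trim(a)
        return trim(quotient), a

    def poly_mul(a, b, q):
        result = [0] * (len(a) + len(b) - 1) if a and b else []
        for i, x in enumerate(a):
            for j, y in enumerate(b):
                result[i + j] = (result[i + j] + x * y) % q
        return trim(result)

    def poly_sub(a, b, q):
        n = max(len(a), len(b))
        a, b = a + [0] * (n - len(a)), b + [0] * (n - len(b))
        return trim([(x - y) % q for x, y in zip(a, b)])

    def inverse_mod_poly(a, f, q):
        """Return the inverse of a in GF(q)[X]/(f), assuming a and f are coprime."""
        r0, r1 = trim([c % q for c in f]), trim([c % q for c in a])
        t0, t1 = [], [1]                             # only the Bezout coefficient of a is tracked
        while r1:
            quo, rem = poly_divmod(r0, r1, q)
            r0, r1 = r1, rem
            t0, t1 = t1, poly_sub(t0, poly_mul(quo, t1, q), q)
        constant_inv = pow(r0[0], -1, q)             # r0 is a nonzero constant; normalise it to 1
        return trim([(c * constant_inv) % q for c in t0])

    # In GF(4) = GF(2)[X]/(X^2 + X + 1), the inverse of X is 1 + X:
    inverse_mod_poly([0, 1], [1, 1, 1], 2)   # -> [1, 1]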
For example, if the polynomial used to define the finite field GF(2^8) is p = x^8 + x^4 + x^3 + x + 1, and a = x^6 + x^4 + x + 1 is the element whose inverse is desired, then performing the algorithm results in the computation described in the following table. (Let us recall that in fields of order 2^n, one has −z = z and z + z = 0 for every element z in the field.) Since 1 is the only nonzero element of GF(2), the adjustment in the last line of the pseudocode is not needed.
Thus, the inverse is x^7 + x^6 + x^3 + x, as can be confirmed by multiplying the two elements together, and taking the remainder by p of the result.
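The computation table itself is not shown here, but the stated result can be checked directly. The following Python sketch multiplies the two elements of GF(2^8), encoded as integers whose bits are the polynomial coefficients (0x53 for x^6 + x^4 + x + 1 and 0xCA for x^7 + x^6 + x^3 + x), reducing modulo p = x^8 + x^4 + x^3 + x + 1 (0x11B):

    def gf256_mul(a, b, modulus=0x11B):
        """Multiply two elements of GF(2^8), each encoded as an integer in 0..255."""
        product = 0
        while b:
            if b & 1:
                product ^= a                 # add (XOR) the current multiple of a
            a <<= 1
            if a & 0x100:                    # degree reached 8: reduce modulo the field polynomial
                a ^= modulus
            b >>= 1
        return product

    gf256_mul(0x53, 0xCA)   # -> 1, confirming that the two elements are inverses of each other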
One can handle the case of more than two numbers iteratively. First we show that gcd(a, b, c) = gcd(gcd(a, b), c). To prove this, let d = gcd(a, b, c). By definition of the gcd, d is a divisor of a and of b, so gcd(a, b) = kd for some k. Similarly, d is a divisor of c, so c = jd for some j. Let u = gcd(k, j). Then gcd(gcd(a, b), c) = gcd(kd, jd) = ud, and ud divides a, b and c; since d is the greatest common divisor of a, b and c, u must be a unit. As ud = gcd(gcd(a, b), c) = d, the result is proven.
So if na + mb = gcd(a, b), then there are x and y such that x gcd(a, b) + yc = gcd(a, b, c), so the final equation will be x(na + mb) + yc = (xn)a + (xm)b + yc = gcd(a, b, c).
To apply this to n numbers we use induction, with gcd(a1, a2, …, an) = gcd(a1, gcd(a2, …, an)), the corresponding equations following directly. A sketch of the iterative computation is given below.
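A Python sketch of this iterative extension (the name extended_gcd_list is illustrative) could be:

    def extended_gcd_list(numbers):
        """Return (g, coeffs) with g = gcd(numbers) and sum(c * x for c, x in zip(coeffs, numbers)) == g."""
        def extended_gcd(a, b):
            old_r, r, old_s, s, old_t, t = a, b, 1, 0, 0, 1
            while r != 0:
                quotient = old_r // r
                old_r, r = r, old_r - quotient * r
                old_s, s = s, old_s - quotient * s
                old_t, t = t, old_t - quotient * t
            return old_r, old_s, old_t

        g, coeffs = numbers[0], [1]
        for x in numbers[1:]:
            g, u, v = extended_gcd(g, x)     # u * (previous gcd) + v * x == new gcd
            coeffs = [u * c for c in coeffs] + [v]
        return g, coeffs

    extended_gcd_list([6, 10, 15])   # -> (1, [-14, 7, 1]), since -14*6 + 7*10 + 1*15 = 1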
|
https://en.wikipedia.org/wiki/Extended_Euclidean_algorithm
|
Incryptography, apublic key certificate, also known as adigital certificateoridentity certificate, is anelectronic documentused to prove the validity of apublic key.[1][2]The certificate includes the public key and information about it, information about the identity of its owner (called the subject), and thedigital signatureof an entity that has verified the certificate's contents (called the issuer). If the device examining the certificate trusts the issuer and finds the signature to be a valid signature of that issuer, then it can use the included public key to communicate securely with the certificate's subject. Inemail encryption,code signing, ande-signaturesystems, a certificate's subject is typically a person or organization. However, inTransport Layer Security(TLS) a certificate's subject is typically a computer or other device, though TLS certificates may identify organizations or individuals in addition to their core role in identifying devices. TLS, sometimes called by its older nameSecure Sockets Layer(SSL), is notable for being a part ofHTTPS, aprotocolfor securely browsing theweb.
In a typicalpublic-key infrastructure(PKI) scheme, the certificate issuer is acertificate authority(CA),[3]usually a company that charges customers a fee to issue certificates for them. By contrast, in aweb of trustscheme, individuals sign each other's keys directly, in a format that performs a similar function to a public key certificate. In case of key compromise, a certificate may need to berevoked.
The most common format for public key certificates is defined byX.509. Because X.509 is very general, the format is further constrained by profiles defined for certain use cases, such asPublic Key Infrastructure (X.509)as defined inRFC5280.
TheTransport Layer Security(TLS) protocol – as well as its outdated predecessor, theSecure Sockets Layer(SSL) protocol – ensures that the communication between aclient computerand aserveris secure. The protocol requires the server to present a digital certificate, proving that it is the intended destination. The connecting client conductscertification path validation, ensuring that:
TheSubjectfield of the certificate must identify the primary hostname of the server as theCommon Name.[clarification needed]The hostname must be publicly accessible, not usingprivate addressesorreserved domains.[4]A certificate may be valid for multiple hostnames (e.g., a domain and its subdomains). Such certificates are commonly calledSubject Alternative Name (SAN) certificatesorUnified Communications Certificates (UCC). These certificates contain theSubject Alternative Namefield, though many CAs also put them into theSubject Common Namefield for backward compatibility. If some of the hostnames contain an asterisk (*), a certificate may also be called awildcard certificate.
Once the certification path validation is successful, the client can establish an encrypted connection with the server.
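As a concrete illustration, Python's standard ssl module performs this validation when a client connects; a minimal sketch (the hostname is only an example):

    import socket
    import ssl

    hostname = "www.example.com"                # illustrative host
    context = ssl.create_default_context()      # loads the platform's trusted root certificates

    with socket.create_connection((hostname, 443)) as sock:
        # wrap_socket performs the TLS handshake, certification path validation and
        # hostname matching; it raises ssl.SSLCertVerificationError if validation fails.
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            print(tls.version())
            print(tls.getpeercert()["subject"])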
Internet-facing servers, such as publicweb servers, must obtain their certificates from a trusted, public certificate authority (CA).
Client certificates authenticate the client connecting to a TLS service, for instance to provide access control. Because most services provide access to individuals, rather than devices, most client certificates contain an email address or personal name rather than a hostname. In addition, the certificate authority that issues the client certificate is usually the service provider to which the client connects, because it is the provider that needs to perform authentication. Some service providers even offer free SSL certificates as part of their packages.[5]
While most web browsers support client certificates, the most common form of authentication on the Internet is a username and password pair. Client certificates are more common invirtual private networks(VPN) andRemote Desktop Services, where they authenticate devices.
In accordance with theS/MIMEprotocol, email certificates can both establish the message integrity and encrypt messages. To establish encrypted email communication, the communicating parties must have their digital certificates in advance. Each must send the other one digitally signed email and opt to import the sender's certificate.
Some publicly trusted certificate authorities provide email certificates, but more commonly S/MIME is used when communicating within a given organization, and that organization runs its own CA, which is trusted by participants in that email system.
Aself-signed certificateis a certificate with a subject that matches its issuer, and a signature that can be verified by its own public key.
Self-signed certificates have their own limited uses. They have full trust value when the issuer and the sole user are the same entity. For example, the Encrypting File System on Microsoft Windows issues a self-signed certificate on behalf of the encrypting user and uses it to transparently decrypt data on the fly. The digital certificate chain of trust starts with a self-signed certificate, called aroot certificate,trust anchor, ortrust root. A certificate authority self-signs a root certificate to be able to sign other certificates.
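For illustration, a self-signed certificate can be generated with the third-party Python cryptography package (a minimal sketch, assuming that package is installed; the name test.example is a placeholder):

    import datetime

    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "test.example")])

    certificate = (
        x509.CertificateBuilder()
        .subject_name(name)                                      # subject ...
        .issuer_name(name)                                       # ... matches the issuer
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.datetime.utcnow())
        .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
        .sign(key, hashes.SHA256())                              # signed with its own private key
    )

Such a certificate is trusted only where it has been explicitly installed, since its signature can be verified only against its own public key.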
An intermediate certificate has a similar purpose to the root certificate – its only use is to sign other certificates. However, an intermediate certificate is not self-signed. A root certificate or another intermediate certificate needs to sign it.
An end-entity or leaf certificate is any certificate that cannot sign other certificates. For instance, TLS/SSL server and client certificates, email certificates, code signing certificates, and qualified certificates are all end-entity certificates.
Subject Alternative Name (SAN) certificates are an extension to X.509 that allows various values to be associated with a security certificate using a subjectAltName field.[6]These values are called Subject Alternative Names (SANs). Names include email addresses, IP addresses, URIs, DNS names and directory names.[7]
RFC2818(May 2000) specifies Subject Alternative Names as the preferred method of adding DNS names to certificates, deprecating the previous method of putting DNS names in thecommonNamefield.[8]Google Chromeversion 58 (March 2017) removed support for checking thecommonNamefield at all, instead only looking at the SANs.[8]
The SAN field can contain wildcards.[9]Not all vendors support or endorse mixing wildcards into SAN certificates.[10]
A public key certificate which uses anasterisk*(thewildcard) in itsdomain namefragment is called a Wildcard certificate.
Through the use of*, a single certificate may be used for multiplesub-domains. It is commonly used fortransport layer securityincomputer networking.
For example, a single wildcard certificate for https://*.example.com will secure all first-level subdomains of example.com, such as www.example.com, mail.example.com and login.example.com.
Instead of getting separate certificates for subdomains, you can use a single certificate for all main domains and subdomains and reduce cost.[11]
Because the wildcard only covers one level of subdomains (the asterisk does not match full stops),[12]the bare domain example.com and deeper names such as a.b.example.com would not be valid for the certificate.[13]
Note that CAs may make exceptions; for example, the wildcard-plus certificate from DigiCert contains an automatic "Plus" property covering the naked domain example.com as well.[citation needed]
Only a single level ofsubdomainmatching is supported in accordance withRFC2818.[14]
It is not possible to get a wildcard for anExtended Validation Certificate.[15]A workaround could be to add every virtual host name in theSubject Alternative Name(SAN) extension,[16][17]the major problem being that the certificate needs to be reissued whenever a new virtual server is added. (SeeTransport Layer Security § Support for name-based virtual serversfor more information.)
Wildcards can be added as domains in multi-domain certificates orUnified Communications Certificates(UCC). In addition, wildcards themselves can havesubjectAltNameextensions, including other wildcards. For example, the wildcard certificate*.wikipedia.orghas*.m.wikimedia.orgas a Subject Alternative Name. Thus it secureswww.wikipedia.orgas well as the completely different website namemeta.m.wikimedia.org.[18]
RFC6125argues against wildcard certificates on security grounds, in particular "partial wildcards".[19]
The wildcard applies only to one level of the domain name: *.example.com matches sub1.example.com but not example.com and not sub2.sub1.example.com (see the sketch after this list).
The wildcard may appear anywhere inside a label as a "partial wildcard" according to early specifications[20]
However, use of "partial-wildcard" certs is not recommended. As of 2011, partial wildcard support is optional, and is explicitly disallowed in SubjectAltName headers that are required for multi-name certificates.[21]All major browsers have deliberatelyremovedsupport for partial-wildcard certificates;[22][23]they will result in a "SSL_ERROR_BAD_CERT_DOMAIN" error. Similarly, it is typical for standard libraries in programming languages to not support "partial-wildcard" certificates. For example, any "partial-wildcard" certificate will not work with the latest versions of both Python[24]and Go. Thus,
Do not allow a label that consists entirely of just a wildcard unless it is the left-most label
A cert with multiple wildcards in a name is not allowed.
A cert with*plus a top-level domain is not allowed.
Too general and should not be allowed.
Internationalized domain names encoded in ASCII (A-labels) are ASCII-encoded labels that begin with xn--. URLs with international labels cannot contain wildcards.[25]
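The single-level matching rule can be sketched in Python as follows (a simplified illustration; real implementations follow RFC 6125 and perform additional checks):

    def hostname_matches(pattern, hostname):
        """Match a hostname against a certificate name whose left-most label may be '*'."""
        p_labels = pattern.lower().split(".")
        h_labels = hostname.lower().split(".")
        if len(p_labels) != len(h_labels):
            return False                      # '*' never spans a dot, so label counts must agree
        if p_labels[0] == "*":
            p_labels, h_labels = p_labels[1:], h_labels[1:]
        return p_labels == h_labels

    hostname_matches("*.example.com", "sub1.example.com")        # True
    hostname_matches("*.example.com", "example.com")             # False
    hostname_matches("*.example.com", "sub2.sub1.example.com")   # False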
These are some of the most common fields in certificates. Most certificates contain a number of fields not listed here. Note that in terms of a certificate's X.509 representation, a certificate is not "flat" but contains these fields nested in various structures within the certificate.
This is an example of a decoded SSL/TLS certificate retrieved from SSL.com's website. The issuer's common name (CN) is shown asSSL.com EV SSL Intermediate CA RSA R3, identifying this as anExtended Validation(EV) certificate. Validated information about the website's owner (SSL Corp) is located in theSubjectfield. TheX509v3 Subject Alternative Namefield contains a list of domain names covered by the certificate. TheX509v3 Extended Key UsageandX509v3 Key Usagefields show all appropriate uses.
In the European Union,(advanced) electronic signatureson legal documents are commonly performed usingdigital signatureswith accompanying identity certificates. However, onlyqualified electronic signatures(which require using a qualified trust service provider and signature creation device) are given the same power as a physical signature.
In theX.509trust model, a certificate authority (CA) is responsible for signing certificates. These certificates act as an introduction between two parties, which means that a CA acts as a trusted third party. A CA processes requests from people or organizations requesting certificates (called subscribers), verifies the information, and potentially signs an end-entity certificate based on that information. To perform this role effectively, a CA needs to have one or more broadly trusted root certificates or intermediate certificates and the corresponding private keys. CAs may achieve this broad trust by having their root certificates included in popular software, or by obtaining a cross-signature from another CA delegating trust. Other CAs are trusted within a relatively small community, like a business, and are distributed by other mechanisms like WindowsGroup Policy.
Certificate authorities are also responsible for maintaining up-to-date revocation information about certificates they have issued, indicating whether certificates are still valid. They provide this information throughOnline Certificate Status Protocol(OCSP) and/or Certificate Revocation Lists (CRLs). Some of the larger certificate authorities in the market includeIdenTrust,DigiCert, andSectigo.[29]
Some major software contain a list of certificate authorities that are trusted by default.[citation needed]This makes it easier for end-users to validate certificates, and easier for people or organizations that request certificates to know which certificate authorities can issue a certificate that will be broadly trusted. This is particularly important in HTTPS, where a web site operator generally wants to get a certificate that is trusted by nearly all potential visitors to their web site.
The policies and processes a provider uses to decide which certificate authorities their software should trust are called root programs. The most influential root programs are:[citation needed]
Browsers other than Firefox generally use the operating system's facilities to decide which certificate authorities are trusted. So, for instance, Chrome on Windows trusts the certificate authorities included in the Microsoft Root Program, while on macOS or iOS, Chrome trusts the certificate authorities in the Apple Root Program.[30]Edge and Safari use their respective operating system trust stores as well, but each is only available on a single OS. Firefox uses the Mozilla Root Program trust store on all platforms.
The Mozilla Root Program is operated publicly, and its certificate list is part of theopen sourceFirefox web browser, so it is broadly used outside Firefox.[citation needed]For instance, while there is no common Linux Root Program, many Linux distributions, like Debian,[31]include a package that periodically copies the contents of the Firefox trust list, which is then used by applications.
Root programs generally provide a set of valid purposes with the certificates they include. For instance, some CAs may be considered trusted for issuing TLS server certificates, but not for code signing certificates. This is indicated with a set of trust bits in a root certificate storage system.
A certificate may be revoked before it expires, which signals that it is no longer valid. Without revocation, an attacker would be able to exploit such a compromised or misissued certificate until expiry.[32]Hence, revocation is an important part of apublic key infrastructure.[33]Revocation is performed by the issuingcertificate authority, which produces acryptographically authenticatedstatement of revocation.[34]
For distributing revocation information to clients, timeliness of the discovery of revocation (and hence the window for an attacker to exploit a compromised certificate) trades off against resource usage in querying revocation statuses and privacy concerns.[35]If revocation information is unavailable (either due to accident or an attack), clients must decide whether tofail-hardand treat a certificate as if it is revoked (and so degradeavailability) or tofail-softand treat it as unrevoked (and allow attackers to sidestep revocation).[36]
Due to the cost of revocation checks and the availability impact from potentially-unreliable remote services,Web browserslimit the revocation checks they will perform, and will fail-soft where they do.[37]Certificate revocation listsare too bandwidth-costly for routine use, and theOnline Certificate Status Protocolpresents connection latency and privacy issues. Other schemes have been proposed but have not yet been successfully deployed to enable fail-hard checking.[33]
The most common use of certificates is forHTTPS-based web sites. Aweb browservalidates that an HTTPSweb serveris authentic, so that the user can feel secure that his/her interaction with theweb sitehas no eavesdroppers and that the web site is who it claims to be. This security is important forelectronic commerce. In practice, a web site operator obtains a certificate by applying to a certificate authority with acertificate signing request. The certificate request is an electronic document that contains the web site name, company information and the public key. The certificate provider signs the request, thus producing a public certificate. During web browsing, this public certificate is served to any web browser that connects to the web site and proves to the web browser that the provider believes it has issued a certificate to the owner of the web site.
As an example, when a user connects tohttps://www.example.com/with their browser, if the browser does not give any certificate warning message, then the user can be theoretically sure that interacting withhttps://www.example.com/is equivalent to interacting with the entity in contact with the email address listed in the public registrar under "example.com", even though that email address may not be displayed anywhere on the web site.[citation needed]No other surety of any kind is implied. Further, the relationship between the purchaser of the certificate, the operator of the web site, and the generator of the web site content may be tenuous and is not guaranteed.[citation needed]At best, the certificate guarantees uniqueness of the web site, provided that the web site itself has not been compromised (hacked) or the certificate issuing process subverted.
A certificate provider can opt to issue three types of certificates, each requiring its own degree of vetting rigor. In order of increasing rigor (and naturally, cost) they are: Domain Validation, Organization Validation and Extended Validation. These rigors are loosely agreed upon by voluntary participants in theCA/Browser Forum.[citation needed]
A certificate provider will issue a domain-validated (DV) certificate to a purchaser if the purchaser can demonstrate one vetting criterion: the right to administratively manage the affected DNS domain(s).
A certificate provider will issue an organization validation (OV) class certificate to a purchaser if the purchaser can meet two criteria: the right to administratively manage the domain name in question, and perhaps, the organization's actual existence as a legal entity. A certificate provider publishes its OV vetting criteria through itscertificate policy.
To acquire anExtended Validation(EV) certificate, the purchaser must persuade the certificate provider of its legal identity, including manual verification checks by a human. As with OV certificates, a certificate provider publishes its EV vetting criteria through itscertificate policy.
Until 2019, major browsers such as Chrome and Firefox generally offered users a visual indication of the legal identity when a site presented an EV certificate. This was done by showing the legal name before the domain, and a bright green color to highlight the change. Most browsers have since deprecated this feature,[38][39] providing no visual difference to the user based on the type of certificate used. This change followed security concerns raised by forensic experts and successful attempts to purchase EV certificates to impersonate famous organizations, proving the inefficiency of these visual indicators and highlighting potential abuses.[40]
Aweb browserwill give no warning to the user if a web site suddenly presents a different certificate, even if that certificate has a lower number of key bits, even if it has a different provider, and even if the previous certificate had an expiry date far into the future.[citation needed]Where certificate providers are under the jurisdiction of governments, those governments may have the freedom to order the provider to generate any certificate, such as for the purposes of law enforcement. Subsidiary wholesale certificate providers also have the freedom to generate any certificate.
All web browsers come with an extensive built-in list of trustedroot certificates, many of which are controlled by organizations that may be unfamiliar to the user.[1]Each of these organizations is free to issue any certificate for any web site and have the guarantee that web browsers that include its root certificates will accept it as genuine. In this instance, end users must rely on the developer of the browser software to manage its built-in list of certificates and on the certificate providers to behave correctly and to inform the browser developer of problematic certificates. While uncommon, there have been incidents in which fraudulent certificates have been issued: in some cases, the browsers have detected the fraud; in others, some time passed before browser developers removed these certificates from their software.[41][42]
The list of built-in certificates is also not limited to those provided by the browser developer: users (and to a degree applications) are free to extend the list for special purposes such as for company intranets.[43]This means that if someone gains access to a machine and can install a new root certificate in the browser, that browser will recognize websites that use the inserted certificate as legitimate.
Forprovable security, this reliance on something external to the system has the consequence that any public key certification scheme has to rely on some special setup assumption, such as the existence of acertificate authority.[44]
In spite of the limitations described above, certificate-authenticated TLS is considered mandatory by all security guidelines whenever a web site hosts confidential information or performs material transactions. This is because, in practice, in spite of theweaknessesdescribed above, web sites secured by public key certificates are still more secure than unsecuredhttp://web sites.[45]
The National Institute of Standards and Technology (NIST) Computer Security Division[46]provides guidance documents for public key certificates:
|
https://en.wikipedia.org/wiki/Digital_certificate
|
Apublic key infrastructure(PKI) is a set of roles, policies, hardware, software and procedures needed to create, manage, distribute, use, store and revokedigital certificatesand managepublic-key encryption.
The purpose of a PKI is to facilitate the secure electronic transfer of information for a range of network activities such as e-commerce, internet banking and confidential email. It is required for activities where simple passwords are an inadequate authentication method and more rigorous proof is required to confirm the identity of the parties involved in the communication and to validate the information being transferred.
Incryptography, a PKI is an arrangement thatbindspublic keyswith respective identities of entities (like people and organizations).[1][2]The binding is established through a process of registration and issuance of certificates at and by acertificate authority(CA). Depending on the assurance level of the binding, this may be carried out by an automated process or under human supervision. When done over a network, this requires using a secure certificate enrollment or certificate management protocol such asCMP.
The PKI role that may be delegated by a CA to assure valid and correct registration is called aregistration authority(RA). An RA is responsible for accepting requests for digital certificates and authenticating the entity making the request.[3]TheInternet Engineering Task Force's RFC 3647 defines an RA as "An entity that is responsible for one or more of the following functions: the identification and authentication of certificate applicants, the approval or rejection of certificate applications, initiating certificate revocations or suspensions under certain circumstances, processing subscriber requests to revoke or suspend their certificates, and approving or rejecting requests by subscribers to renew or re-key their certificates. RAs, however, do not sign or issue certificates (i.e., an RA is delegated certain tasks on behalf of a CA)."[4]WhileMicrosoftmay have referred to a subordinate CA as an RA,[5]this is incorrect according to the X.509 PKI standards. RAs do not have the signing authority of a CA and only manage the vetting and provisioning of certificates. So in the Microsoft PKI case, the RA functionality is provided either by the Microsoft Certificate Services web site or throughActive DirectoryCertificate Services that enforces Microsoft Enterprise CA, and certificate policy through certificate templates and manages certificate enrollment (manual or auto-enrollment). In the case of Microsoft Standalone CAs, the function of RA does not exist since all of the procedures controlling the CA are based on the administration and access procedure associated with the system hosting the CA and the CA itself rather than Active Directory. Most non-Microsoft commercial PKI solutions offer a stand-alone RA component.
An entity must be uniquely identifiable within each CA domain on the basis of information about that entity. A third-partyvalidation authority(VA) can provide this entity information on behalf of the CA.
TheX.509standard defines the most commonly used format forpublic key certificates.[6]
PKI provides "trust services" - in plain terms trusting the actions or outputs of entities, be they people or computers. Trust service objectives respect one or more of the following capabilities: Confidentiality, Integrity and Authenticity (CIA).
Confidentiality:Assurance that no entity can maliciously or unwittingly view a payload in clear text. Data is encrypted to make it secret, such that even if it was read, it appears as gibberish. Perhaps the most common use of PKI for confidentiality purposes is in the context of Transport Layer Security (TLS). TLS is a capability underpinning the security of data in transit, i.e. during transmission. A classic example of TLS for confidentiality is when using a web browser to log on to a service hosted on an internet based web site by entering a password.
Integrity:Assurance that if an entity changed (tampered) with transmitted data in the slightest way, it would be obvious it happened as its integrity would have been compromised. Often it is not of utmost importance to prevent the integrity being compromised (tamper proof), however, it is of utmost importance that if integrity is compromised there is clear evidence of it having done so (tamper evident).
Authenticity:Assurance that every entity has certainty of what it is connecting to, or can evidence its legitimacy when connecting to a protected service. The former is termed server-side authentication - typically used when authenticating to a web server using a password. The latter is termed client-side authentication - sometimes used when authenticating using a smart card (hosting a digital certificate and private key).
Public-key cryptographyis acryptographictechnique that enables entities tosecurely communicateon an insecure public network, and reliably verify the identity of an entity viadigital signatures.[7]
A public key infrastructure (PKI) is a system for the creation, storage, and distribution ofdigital certificates, which are used to verify that a particular public key belongs to a certain entity. The PKI creates digital certificates that map public keys to entities, securely stores these certificates in a central repository and revokes them if needed.[8][9][10]
A PKI consists of:[9][11][12]
The primary role of the CA is todigitally signand publish thepublic keybound to a given user. This is done using the CA's own private key, so that trust in the user key relies on one's trust in the validity of the CA's key. When the CA is a third party separate from the user and the system, then it is called the Registration Authority (RA), which may or may not be separate from the CA.[13]The key-to-user binding is established, depending on the level of assurance the binding has, by software or under human supervision.
The termtrusted third party(TTP) may also be used forcertificate authority(CA). Moreover, PKI is itself often used as a synonym for a CA implementation.[14]
A certificate may be revoked before it expires, which signals that it is no longer valid. Without revocation, an attacker would be able to exploit such a compromised or mis-issued certificate until expiry.[15]Hence, revocation is an important part of a public key infrastructure.[16]Revocation is performed by the issuingcertificate authority, which produces acryptographically authenticatedstatement of revocation.[17]
For distributing revocation information to clients, timeliness of the discovery of revocation (and hence the window for an attacker to exploit a compromised certificate) trades off against resource usage in querying revocation statuses and privacy concerns.[18]If revocation information is unavailable (either due to accident or an attack), clients must decide whether tofail-hardand treat a certificate as if it is revoked (and so degradeavailability) or tofail-softand treat it as unrevoked (and allow attackers to sidestep revocation).[19]
Due to the cost of revocation checks and the availability impact from potentially-unreliable remote services,Web browserslimit the revocation checks they will perform, and will fail-soft where they do.[20]Certificate revocation listsare too bandwidth-costly for routine use, and theOnline Certificate Status Protocolpresents connection latency and privacy issues. Other schemes have been proposed but have not yet been successfully deployed to enable fail-hard checking.[16]
In this model of trust relationships, a CA is a trusted third party – trusted both by the subject (owner) of the certificate and by the party relying upon the certificate.
A Netcraft report from 2015,[21]the industry standard for monitoring active Transport Layer Security (TLS) certificates, states that "Although the global [TLS] ecosystem is competitive, it is dominated by a handful of major CAs — three certificate authorities (Symantec, Sectigo, GoDaddy) account for three-quarters of all issued [TLS] certificates on public-facing web servers. The top spot has been held by Symantec (or VeriSign before it was purchased by Symantec) ever since [our] survey began, with it currently accounting for just under a third of all certificates. To illustrate the effect of differing methodologies, amongst the million busiest sites Symantec issued 44% of the valid, trusted certificates in use — significantly more than its overall market share."
Following major issues in how certificate issuing was managed, all major players gradually distrusted Symantec-issued certificates, starting in 2017 and completed in 2021.[22][23][24][25]
This approach involves a server that acts as an offline certificate authority within asingle sign-onsystem. A single sign-on server will issue digital certificates into the client system, but never stores them. Users can execute programs, etc. with the temporary certificate. It is common to find this solution variety withX.509-based certificates.[26]
Starting in September 2020, the maximum validity period of publicly trusted TLS certificates was reduced to 13 months.
An alternative approach to the problem of public authentication of public key information is the web-of-trust scheme, which uses self-signedcertificatesand third-party attestations of those certificates. The singular term "web of trust" does not imply the existence of a single web of trust, or common point of trust, but rather one of any number of potentially disjoint "webs of trust". Examples of implementations of this approach arePGP(Pretty Good Privacy) andGnuPG(an implementation ofOpenPGP, the standardized specification of PGP). Because PGP and implementations allow the use ofe-maildigital signatures for self-publication of public key information, it is relatively easy to implement one's own web of trust.
One of the benefits of the web of trust, such as inPGP, is that it can interoperate with a PKI CA fully trusted by all parties in a domain (such as an internal CA in a company) that is willing to guarantee certificates, as a trusted introducer. If the "web of trust" is completely trusted then, because of the nature of a web of trust, trusting one certificate is granting trust to all the certificates in that web. A PKI is only as valuable as the standards and practices that control the issuance of certificates and including PGP or a personally instituted web of trust could significantly degrade the trustworthiness of that enterprise's or domain's implementation of PKI.[27]
The web of trust concept was first put forth by PGP creatorPhil Zimmermannin 1992 in the manual for PGP version 2.0:
As time goes on, you will accumulate keys from other people that you may want to designate as trusted introducers. Everyone else will each choose their own trusted introducers. And everyone will gradually accumulate and distribute with their key a collection of certifying signatures from other people, with the expectation that anyone receiving it will trust at least one or two of the signatures. This will cause the emergence of a decentralized fault-tolerant web of confidence for all public keys.
Another alternative, which does not deal with public authentication of public key information, is the simple public key infrastructure (SPKI), which grew out of three independent efforts to overcome the complexities ofX.509andPGP's web of trust. SPKI does not associate users with persons, since thekeyis what is trusted, rather than the person. SPKI does not use any notion of trust, as the verifier is also the issuer. This is called an "authorization loop" in SPKI terminology, where authorization is integral to its design.[28]This type of PKI is specially useful for making integrations of PKI that do not rely on third parties for certificate authorization, certificate information, etc.; a good example of this is anair-gappednetwork in an office.
Decentralized identifiers(DIDs) eliminate dependence on centralized registries for identifiers as well as centralized certificate authorities for key management, which is the standard in hierarchical PKI. In cases where the DID registry is adistributed ledger, each entity can serve as its own root authority. This architecture is referred to as decentralized PKI (DPKI).[29][30]
Developments in PKI occurred in the early 1970s at the British intelligence agencyGCHQ, whereJames Ellis,Clifford Cocksand others made important discoveries related to encryption algorithms and key distribution.[31]Because developments at GCHQ are highly classified, the results of this work were kept secret and not publicly acknowledged until the mid-1990s.
The public disclosure of both securekey exchangeandasymmetric key algorithmsin 1976 byDiffie,Hellman,Rivest,Shamir, andAdlemanchanged secure communications entirely. With the further development of high-speed digital electronic communications (theInternetand its predecessors), a need became evident for ways in which users could securely communicate with each other, and as a further consequence of that, for ways in which users could be sure with whom they were actually interacting.
Assorted cryptographic protocols were invented and analyzed within which the newcryptographic primitivescould be effectively used. With the invention of theWorld Wide Weband its rapid spread, the need for authentication and secure communication became still more acute. Commercial reasons alone (e.g.,e-commerce, online access to proprietary databases fromweb browsers) were sufficient.Taher Elgamaland others atNetscapedeveloped theSSLprotocol ('https' in WebURLs); it included key establishment, server authentication (prior to v3, one-way only), and so on.[32]A PKI structure was thus created for Web users/sites wishing secure communications.
Vendors and entrepreneurs saw the possibility of a large market, started companies (or new projects at existing companies), and began to agitate for legal recognition and protection from liability. AnAmerican Bar Associationtechnology project published an extensive analysis of some of the foreseeable legal aspects of PKI operations (seeABA digital signature guidelines), and shortly thereafter, several U.S. states (Utahbeing the first in 1995) and other jurisdictions throughout the world began to enact laws and adopt regulations. Consumer groups raised questions aboutprivacy, access, and liability considerations, which were more taken into consideration in some jurisdictions than in others.[33]
The enacted laws and regulations differed, there were technical and operational problems in converting PKI schemes into successful commercial operation, and progress has been much slower than pioneers had imagined it would be.
By the first few years of the 21st century, the underlying cryptographic engineering was clearly not easy to deploy correctly. Operating procedures (manual or automatic) were not easy to correctly design (nor even if so designed, to execute perfectly, which the engineering required). The standards that existed were insufficient.
PKI vendors have found a market, but it is not quite the market envisioned in the mid-1990s, and it has grown both more slowly and in somewhat different ways than were anticipated.[34]PKIs have not solved some of the problems they were expected to, and several major vendors have gone out of business or been acquired by others. PKI has had the most success in government implementations; the largest PKI implementation to date is theDefense Information Systems Agency(DISA) PKI infrastructure for theCommon Access Cardsprogram.
PKIs of one type or another, and from any of several vendors, have many uses, including providing public keys and bindings to user identities, which are used for:
Some argue that purchasing certificates for securing websites bySSL/TLSand securing software bycode signingis a costly venture for small businesses.[41]However, the emergence of free alternatives, such asLet's Encrypt, has changed this.HTTP/2, the latest version of HTTP protocol, allows unsecured connections in theory; in practice, major browser companies have made it clear that they would support this protocol only over a PKI securedTLSconnection.[42]Web browser implementation of HTTP/2 includingChrome,Firefox,Opera, andEdgesupports HTTP/2 only over TLS by using theALPNextension of the TLS protocol. This would mean that, to get the speed benefits of HTTP/2, website owners would be forced to purchase SSL/TLS certificates controlled by corporations.
Currently the majority of web browsers are shipped with pre-installed intermediate certificates issued and signed by a certificate authority, with public keys certified by so-called root certificates. This means browsers need to carry a large number of different certificate providers, increasing the risk of a key compromise.[43]
When a key is known to be compromised, the problem can be addressed by revoking the certificate, but such a compromise is not easily detectable and can be a huge security breach. Browsers have to issue a security patch to revoke intermediary certificates issued by a compromised root certificate authority.[44]
|
https://en.wikipedia.org/wiki/Public_key_infrastructure
|
Inmathematics, afinite fieldorGalois field(so-named in honor ofÉvariste Galois) is afieldthat contains a finite number ofelements. As with any field, a finite field is aseton which the operations of multiplication, addition, subtraction and division are defined and satisfy certain basic rules. The most common examples of finite fields are theintegers modp{\displaystyle p}whenp{\displaystyle p}is aprime number.
Theorderof a finite field is its number of elements, which is either a prime number or aprime power. For every prime numberp{\displaystyle p}and every positive integerk{\displaystyle k}there are fields of orderpk{\displaystyle p^{k}}. All finite fields of a given order areisomorphic.
Finite fields are fundamental in a number of areas of mathematics andcomputer science, includingnumber theory,algebraic geometry,Galois theory,finite geometry,cryptographyandcoding theory.
A finite field is a finite set that is afield; this means that multiplication, addition, subtraction and division (excluding division by zero) are defined and satisfy the rules of arithmetic known as thefield axioms.[1]
The number of elements of a finite field is called itsorderor, sometimes, itssize. A finite field of orderq{\displaystyle q}exists if and only ifq{\displaystyle q}is aprime powerpk{\displaystyle p^{k}}(wherep{\displaystyle p}is a prime number andk{\displaystyle k}is a positive integer). In a field of orderpk{\displaystyle p^{k}}, addingp{\displaystyle p}copies of any element always results in zero; that is, thecharacteristicof the field isp{\displaystyle p}.[1]
Forq=pk{\displaystyle q=p^{k}}, all fields of orderq{\displaystyle q}areisomorphic(see§ Existence and uniquenessbelow).[2]Moreover, a field cannot contain two different finitesubfieldswith the same order. One may therefore identify all finite fields with the same order, and they are unambiguously denotedFq{\displaystyle \mathbb {F} _{q}},Fq{\displaystyle \mathbf {F} _{q}}orGF(q){\displaystyle \mathrm {GF} (q)}, where the letters GF stand for "Galois field".[3]
In a finite field of orderq{\displaystyle q}, thepolynomialXq−X{\displaystyle X^{q}-X}has allq{\displaystyle q}elements of the finite field asroots.[citation needed]The non-zero elements of a finite field form amultiplicative group. This group iscyclic, so all non-zero elements can be expressed as powers of a single element called aprimitive elementof the field. (In general there will be several primitive elements for a given field.)[1]
The simplest examples of finite fields are the fields of prime order: for eachprime numberp{\displaystyle p}, theprime fieldof orderp{\displaystyle p}may be constructed as theintegers modulop{\displaystyle p},Z/pZ{\displaystyle \mathbb {Z} /p\mathbb {Z} }.[1]
The elements of the prime field of orderp{\displaystyle p}may be represented by integers in the range0,…,p−1{\displaystyle 0,\ldots ,p-1}. The sum, the difference and the product are theremainder of the divisionbyp{\displaystyle p}of the result of the corresponding integer operation.[1]The multiplicative inverse of an element may be computed by using the extended Euclidean algorithm (seeExtended Euclidean algorithm § Modular integers).[citation needed]
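For illustration, arithmetic in the prime field of order 7 can be written directly in Python; pow(x, -1, p) returns the same modular inverse that the extended Euclidean algorithm produces (p = 7 is just an example):

    p = 7                                # an arbitrary prime modulus, for illustration
    a, b = 3, 5

    (a + b) % p                          # addition: 1
    (a - b) % p                          # subtraction: 5
    (a * b) % p                          # multiplication: 1
    pow(b, -1, p)                        # multiplicative inverse of 5: 3, since 5 * 3 = 15 = 1 (mod 7)
    (a * pow(b, -1, p)) % p              # division a/b: 2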
LetF{\displaystyle F}be a finite field. For any elementx{\displaystyle x}inF{\displaystyle F}and anyintegern{\displaystyle n}, denote byn⋅x{\displaystyle n\cdot x}the sum ofn{\displaystyle n}copies ofx{\displaystyle x}. The least positiven{\displaystyle n}such thatn⋅1=0{\displaystyle n\cdot 1=0}is the characteristicp{\displaystyle p}of the field.[1]This allows defining a multiplication(k,x)↦k⋅x{\displaystyle (k,x)\mapsto k\cdot x}of an elementk{\displaystyle k}ofGF(p){\displaystyle \mathrm {GF} (p)}by an elementx{\displaystyle x}ofF{\displaystyle F}by choosing an integer representative fork{\displaystyle k}. This multiplication makesF{\displaystyle F}into aGF(p){\displaystyle \mathrm {GF} (p)}-vector space.[1]It follows that the number of elements ofF{\displaystyle F}ispn{\displaystyle p^{n}}for some integern{\displaystyle n}.[1]
Theidentity(x+y)p=xp+yp{\displaystyle (x+y)^{p}=x^{p}+y^{p}}(sometimes called thefreshman's dream) is true in a field of characteristicp{\displaystyle p}. This follows from thebinomial theorem, as eachbinomial coefficientof the expansion of(x+y)p{\displaystyle (x+y)^{p}}, except the first and the last, is a multiple ofp{\displaystyle p}.[citation needed]
ByFermat's little theorem, ifp{\displaystyle p}is a prime number andx{\displaystyle x}is in the fieldGF(p){\displaystyle \mathrm {GF} (p)}thenxp=x{\displaystyle x^{p}=x}. This implies the equalityXp−X=∏a∈GF(p)(X−a){\displaystyle X^{p}-X=\prod _{a\in \mathrm {GF} (p)}(X-a)}for polynomials overGF(p){\displaystyle \mathrm {GF} (p)}. More generally, every element inGF(pn){\displaystyle \mathrm {GF} (p^{n})}satisfies the polynomial equationxpn−x=0{\displaystyle x^{p^{n}}-x=0}.[citation needed]
Any finitefield extensionof a finite field isseparableand simple. That is, ifE{\displaystyle E}is a finite field andF{\displaystyle F}is a subfield ofE{\displaystyle E}, thenE{\displaystyle E}is obtained fromF{\displaystyle F}by adjoining a single element whoseminimal polynomialisseparable. To use a piece of jargon, finite fields areperfect.[1]
A more generalalgebraic structurethat satisfies all the other axioms of a field, but whose multiplication is not required to becommutative, is called adivision ring(or sometimesskew field). ByWedderburn's little theorem, any finite division ring is commutative, and hence is a finite field.[1]
Letq=pn{\displaystyle q=p^{n}}be aprime power, andF{\displaystyle F}be thesplitting fieldof the polynomialP=Xq−X{\displaystyle P=X^{q}-X}over the prime fieldGF(p){\displaystyle \mathrm {GF} (p)}. This means thatF{\displaystyle F}is a finite field of lowest order, in whichP{\displaystyle P}hasq{\displaystyle q}distinct roots (theformal derivativeofP{\displaystyle P}isP′=−1{\displaystyle P'=-1}, implying thatgcd(P,P′)=1{\displaystyle \mathrm {gcd} (P,P')=1}, which in general implies that the splitting field is aseparable extensionof the original). Theabove identityshows that the sum and the product of two roots ofP{\displaystyle P}are roots ofP{\displaystyle P}, as well as the multiplicative inverse of a root ofP{\displaystyle P}. In other words, the roots ofP{\displaystyle P}form a field of orderq{\displaystyle q}, which is equal toF{\displaystyle F}by the minimality of the splitting field.
The uniqueness up to isomorphism of splitting fields implies thus that all fields of orderq{\displaystyle q}are isomorphic. Also, if a fieldF{\displaystyle F}has a field of orderq=pk{\displaystyle q=p^{k}}as a subfield, its elements are theq{\displaystyle q}roots ofXq−X{\displaystyle X^{q}-X}, andF{\displaystyle F}cannot contain another subfield of orderq{\displaystyle q}.
In summary, we have the following classification theorem first proved in 1893 byE. H. Moore:[2]
The order of a finite field is a prime power. For every prime powerq{\displaystyle q}there are fields of orderq{\displaystyle q}, and they are all isomorphic. In these fields, every element satisfiesxq=x,{\displaystyle x^{q}=x,}and the polynomialXq−X{\displaystyle X^{q}-X}factors asXq−X=∏a∈F(X−a).{\displaystyle X^{q}-X=\prod _{a\in F}(X-a).}
It follows thatGF(pn){\displaystyle \mathrm {GF} (p^{n})}contains a subfield isomorphic toGF(pm){\displaystyle \mathrm {GF} (p^{m})}if and only ifm{\displaystyle m}is a divisor ofn{\displaystyle n}; in that case, this subfield is unique. In fact, the polynomialXpm−X{\displaystyle X^{p^{m}}-X}dividesXpn−X{\displaystyle X^{p^{n}}-X}if and only ifm{\displaystyle m}is a divisor ofn{\displaystyle n}.
Given a prime powerq=pn{\displaystyle q=p^{n}}withp{\displaystyle p}prime andn>1{\displaystyle n>1}, the fieldGF(q){\displaystyle \mathrm {GF} (q)}may be explicitly constructed in the following way. One first chooses anirreducible polynomialP{\displaystyle P}inGF(p)[X]{\displaystyle \mathrm {GF} (p)[X]}of degreen{\displaystyle n}(such an irreducible polynomial always exists). Then thequotient ringGF(q)=GF(p)[X]/(P){\displaystyle \mathrm {GF} (q)=\mathrm {GF} (p)[X]/(P)}of the polynomial ringGF(p)[X]{\displaystyle \mathrm {GF} (p)[X]}by the ideal generated byP{\displaystyle P}is a field of orderq{\displaystyle q}.
More explicitly, the elements ofGF(q){\displaystyle \mathrm {GF} (q)}are the polynomials overGF(p){\displaystyle \mathrm {GF} (p)}whose degree is strictly less thann{\displaystyle n}. The addition and the subtraction are those of polynomials overGF(p){\displaystyle \mathrm {GF} (p)}. The product of two elements is the remainder of theEuclidean divisionbyP{\displaystyle P}of the product inGF(q)[X]{\displaystyle \mathrm {GF} (q)[X]}.
The multiplicative inverse of a non-zero element may be computed with the extended Euclidean algorithm; seeExtended Euclidean algorithm § Simple algebraic field extensions.
However, with this representation, elements ofGF(q){\displaystyle \mathrm {GF} (q)}may be difficult to distinguish from the corresponding polynomials. Therefore, it is common to give a name, commonlyα{\displaystyle \alpha }to the element ofGF(q){\displaystyle \mathrm {GF} (q)}that corresponds to the polynomialX{\displaystyle X}. So, the elements ofGF(q){\displaystyle \mathrm {GF} (q)}become polynomials inα{\displaystyle \alpha }, whereP(α)=0{\displaystyle P(\alpha )=0}, and, when one encounters a polynomial inα{\displaystyle \alpha }of degree greater or equal ton{\displaystyle n}(for example after a multiplication), one knows that one has to use the relationP(α)=0{\displaystyle P(\alpha )=0}to reduce its degree (it is what Euclidean division is doing).
Except in the construction ofGF(4){\displaystyle \mathrm {GF} (4)}, there are several possible choices forP{\displaystyle P}, which produce isomorphic results. To simplify the Euclidean division, one commonly chooses forP{\displaystyle P}a polynomial of the formXn+aX+b,{\displaystyle X^{n}+aX+b,}which make the needed Euclidean divisions very efficient. However, for some fields, typically in characteristic2{\displaystyle 2}, irreducible polynomials of the formXn+aX+b{\displaystyle X^{n}+aX+b}may not exist. In characteristic2{\displaystyle 2}, if the polynomialXn+X+1{\displaystyle X^{n}+X+1}is reducible, it is recommended to chooseXn+Xk+1{\displaystyle X^{n}+X^{k}+1}with the lowest possiblek{\displaystyle k}that makes the polynomial irreducible. If all thesetrinomialsare reducible, one chooses "pentanomials"Xn+Xa+Xb+Xc+1{\displaystyle X^{n}+X^{a}+X^{b}+X^{c}+1}, as polynomials of degree greater than1{\displaystyle 1}, with an even number of terms, are never irreducible in characteristic2{\displaystyle 2}, having1{\displaystyle 1}as a root.[4]
A possible choice for such a polynomial is given byConway polynomials. They ensure a certain compatibility between the representation of a field and the representations of its subfields.
In the next sections, we will show how the general construction method outlined above works for small finite fields.
The smallest non-prime field is the field with four elements, which is commonly denotedGF(4){\displaystyle \mathrm {GF} (4)}orF4.{\displaystyle \mathbb {F} _{4}.}It consists of the four elements0,1,α,1+α{\displaystyle 0,1,\alpha ,1+\alpha }such thatα2=1+α{\displaystyle \alpha ^{2}=1+\alpha },1⋅α=α⋅1=α{\displaystyle 1\cdot \alpha =\alpha \cdot 1=\alpha },x+x=0{\displaystyle x+x=0}, andx⋅0=0⋅x=0{\displaystyle x\cdot 0=0\cdot x=0}, for everyx∈GF(4){\displaystyle x\in \mathrm {GF} (4)}, the other operation results being easily deduced from thedistributive law. See below for the complete operation tables.
This may be deduced as follows from the results of the preceding section.
OverGF(2){\displaystyle \mathrm {GF} (2)}, there is only oneirreducible polynomialof degree2{\displaystyle 2}:X2+X+1{\displaystyle X^{2}+X+1}Therefore, forGF(4){\displaystyle \mathrm {GF} (4)}the construction of the preceding section must involve this polynomial, andGF(4)=GF(2)[X]/(X2+X+1).{\displaystyle \mathrm {GF} (4)=\mathrm {GF} (2)[X]/(X^{2}+X+1).}Letα{\displaystyle \alpha }denote a root of this polynomial inGF(4){\displaystyle \mathrm {GF} (4)}. This implies thatα2=1+α,{\displaystyle \alpha ^{2}=1+\alpha ,}and thatα{\displaystyle \alpha }and1+α{\displaystyle 1+\alpha }are the elements ofGF(4){\displaystyle \mathrm {GF} (4)}that are not inGF(2){\displaystyle \mathrm {GF} (2)}. The tables of the operations inGF(4){\displaystyle \mathrm {GF} (4)}result from this, and are as follows:
A table for subtraction is not given, because subtraction is identical to addition, as is the case for every field of characteristic 2.
In the third table, for the division ofx{\displaystyle x}byy{\displaystyle y}, the values ofx{\displaystyle x}must be read in the left column, and the values ofy{\displaystyle y}in the top row. (Because0⋅z=0{\displaystyle 0\cdot z=0}for everyz{\displaystyle z}in everyringthedivision by 0has to remain undefined.) From the tables, it can be seen that the additive structure ofGF(4){\displaystyle \mathrm {GF} (4)}is isomorphic to theKlein four-group, while the non-zero multiplicative structure is isomorphic to the groupZ3{\displaystyle Z_{3}}.
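To make the GF(4) construction concrete, the following short Python sketch (the names and layout are illustrative, not part of any standard library) represents each element as a pair (a, b) standing for a + bα, reduces products with the relation α² = α + 1, and prints the addition and multiplication tables, from which the Klein four-group and cyclic-of-order-3 structures described above can be read off.

```python
# GF(4) = GF(2)[X]/(X^2 + X + 1); the pair (a, b) stands for a + b*alpha.
ELEMS = [(0, 0), (1, 0), (0, 1), (1, 1)]          # 0, 1, alpha, 1 + alpha
NAMES = {(0, 0): "0", (1, 0): "1", (0, 1): "a", (1, 1): "1+a"}

def add(x, y):
    return (x[0] ^ y[0], x[1] ^ y[1])              # addition is bitwise XOR

def mul(x, y):
    a, b = x
    c, d = y
    # (a + b*alpha)(c + d*alpha) = ac + (ad + bc)*alpha + bd*alpha^2,
    # and alpha^2 = 1 + alpha, so bd is added to both coefficients.
    const = (a * c + b * d) % 2
    lin = (a * d + b * c + b * d) % 2
    return (const, lin)

for op, name in ((add, "+"), (mul, "*")):
    print(f"\n{name}   " + "  ".join(f"{NAMES[e]:>3}" for e in ELEMS))
    for x in ELEMS:
        print(f"{NAMES[x]:>3} " + "  ".join(f"{NAMES[op(x, y)]:>3}" for y in ELEMS))
```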
The mapφ:x↦x2{\displaystyle \varphi :x\mapsto x^{2}}is the non-trivial field automorphism, called theFrobenius automorphism, which sendsα{\displaystyle \alpha }into the second root1+α{\displaystyle 1+\alpha }of the above-mentioned irreducible polynomialX2+X+1{\displaystyle X^{2}+X+1}.
For applying theabove general constructionof finite fields in the case ofGF(p2){\displaystyle \mathrm {GF} (p^{2})}, one has to find an irreducible polynomial of degree 2. Forp=2{\displaystyle p=2}, this has been done in the preceding section. Ifp{\displaystyle p}is an odd prime, there are always irreducible polynomials of the formX2−r{\displaystyle X^{2}-r}, withr{\displaystyle r}inGF(p){\displaystyle \mathrm {GF} (p)}.
More precisely, the polynomialX2−r{\displaystyle X^{2}-r}is irreducible overGF(p){\displaystyle \mathrm {GF} (p)}if and only ifr{\displaystyle r}is aquadratic non-residuemodulop{\displaystyle p}(this is almost the definition of a quadratic non-residue). There arep−12{\displaystyle {\frac {p-1}{2}}}quadratic non-residues modulop{\displaystyle p}. For example,2{\displaystyle 2}is a quadratic non-residue forp=3,5,11,13,…{\displaystyle p=3,5,11,13,\ldots }, and3{\displaystyle 3}is a quadratic non-residue forp=5,7,17,…{\displaystyle p=5,7,17,\ldots }. Ifp≡3mod4{\displaystyle p\equiv 3\mod 4}, that isp=3,7,11,19,…{\displaystyle p=3,7,11,19,\ldots }, one may choose−1≡p−1{\displaystyle -1\equiv p-1}as a quadratic non-residue, which allows us to have a very simple irreducible polynomialX2+1{\displaystyle X^{2}+1}.
Having chosen a quadratic non-residuer{\displaystyle r}, letα{\displaystyle \alpha }be a symbolic square root ofr{\displaystyle r}, that is, a symbol that has the propertyα2=r{\displaystyle \alpha ^{2}=r}, in the same way that the complex numberi{\displaystyle i}is a symbolic square root of−1{\displaystyle -1}. Then, the elements ofGF(p2){\displaystyle \mathrm {GF} (p^{2})}are all the linear expressionsa+bα,{\displaystyle a+b\alpha ,}witha{\displaystyle a}andb{\displaystyle b}inGF(p){\displaystyle \mathrm {GF} (p)}. The operations onGF(p2){\displaystyle \mathrm {GF} (p^{2})}are defined as follows (the operations between elements ofGF(p){\displaystyle \mathrm {GF} (p)}represented by Latin letters are the operations inGF(p){\displaystyle \mathrm {GF} (p)}):−(a+bα)=−a+(−b)α(a+bα)+(c+dα)=(a+c)+(b+d)α(a+bα)(c+dα)=(ac+rbd)+(ad+bc)α(a+bα)−1=a(a2−rb2)−1+(−b)(a2−rb2)−1α{\displaystyle {\begin{aligned}-(a+b\alpha )&=-a+(-b)\alpha \\(a+b\alpha )+(c+d\alpha )&=(a+c)+(b+d)\alpha \\(a+b\alpha )(c+d\alpha )&=(ac+rbd)+(ad+bc)\alpha \\(a+b\alpha )^{-1}&=a(a^{2}-rb^{2})^{-1}+(-b)(a^{2}-rb^{2})^{-1}\alpha \end{aligned}}}
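The four formulas above translate directly into code. Below is a minimal Python sketch (the function names are illustrative) of GF(p²) arithmetic for an odd prime p and a chosen quadratic non-residue r; as noted in the text, 3 is a non-residue modulo 7, which is the choice used here.

```python
P = 7            # an odd prime
R = 3            # a quadratic non-residue modulo 7 (see text)

def add(x, y):
    return ((x[0] + y[0]) % P, (x[1] + y[1]) % P)

def mul(x, y):
    a, b = x
    c, d = y
    # (a + b*alpha)(c + d*alpha) = (ac + r*bd) + (ad + bc)*alpha, alpha^2 = r
    return ((a * c + R * b * d) % P, (a * d + b * c) % P)

def inv(x):
    a, b = x
    # (a + b*alpha)^(-1) = (a - b*alpha) / (a^2 - r*b^2)
    n = pow(a * a - R * b * b, -1, P)     # inverse of a^2 - r*b^2 in GF(p)
    return ((a * n) % P, (-b * n) % P)

x = (2, 5)                 # 2 + 5*alpha in GF(49)
print(mul(x, inv(x)))      # expected (1, 0), i.e. the unit element
```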
The polynomialX3−X−1{\displaystyle X^{3}-X-1}is irreducible overGF(2){\displaystyle \mathrm {GF} (2)}andGF(3){\displaystyle \mathrm {GF} (3)}, that is, it is irreduciblemodulo2{\displaystyle 2}and3{\displaystyle 3}(to show this, it suffices to show that it has no root inGF(2){\displaystyle \mathrm {GF} (2)}nor inGF(3){\displaystyle \mathrm {GF} (3)}). It follows that the elements ofGF(8){\displaystyle \mathrm {GF} (8)}andGF(27){\displaystyle \mathrm {GF} (27)}may be represented byexpressionsa+bα+cα2,{\displaystyle a+b\alpha +c\alpha ^{2},}wherea,b,c{\displaystyle a,b,c}are elements ofGF(2){\displaystyle \mathrm {GF} (2)}orGF(3){\displaystyle \mathrm {GF} (3)}(respectively), andα{\displaystyle \alpha }is a symbol such thatα3=α+1.{\displaystyle \alpha ^{3}=\alpha +1.}
The addition, additive inverse and multiplication onGF(8){\displaystyle \mathrm {GF} (8)}andGF(27){\displaystyle \mathrm {GF} (27)}may thus be defined as follows; in following formulas, the operations between elements ofGF(2){\displaystyle \mathrm {GF} (2)}orGF(3){\displaystyle \mathrm {GF} (3)}, represented by Latin letters, are the operations inGF(2){\displaystyle \mathrm {GF} (2)}orGF(3){\displaystyle \mathrm {GF} (3)}, respectively:−(a+bα+cα2)=−a+(−b)α+(−c)α2(forGF(8),this operation is the identity)(a+bα+cα2)+(d+eα+fα2)=(a+d)+(b+e)α+(c+f)α2(a+bα+cα2)(d+eα+fα2)=(ad+bf+ce)+(ae+bd+bf+ce+cf)α+(af+be+cd+cf)α2{\displaystyle {\begin{aligned}-(a+b\alpha +c\alpha ^{2})&=-a+(-b)\alpha +(-c)\alpha ^{2}\qquad {\text{(for }}\mathrm {GF} (8),{\text{this operation is the identity)}}\\(a+b\alpha +c\alpha ^{2})+(d+e\alpha +f\alpha ^{2})&=(a+d)+(b+e)\alpha +(c+f)\alpha ^{2}\\(a+b\alpha +c\alpha ^{2})(d+e\alpha +f\alpha ^{2})&=(ad+bf+ce)+(ae+bd+bf+ce+cf)\alpha +(af+be+cd+cf)\alpha ^{2}\end{aligned}}}
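More generally, the same construction can be coded once for any prime p and monic irreducible polynomial P: elements are coefficient vectors, and multiplication is polynomial multiplication followed by reduction modulo P. The sketch below (illustrative, not optimized) instantiates it for GF(27) with P = X³ − X − 1, matching the representation above, and checks the defining relation α³ = α + 1.

```python
P_PRIME = 3
# X^3 - X - 1 over GF(3), stored as all coefficients [const, X, X^2, X^3];
# reduction uses alpha^3 = alpha + 1.
MODULUS = [-1 % 3, -1 % 3, 0, 1]

def poly_mul_mod(x, y, p=P_PRIME, modulus=MODULUS):
    """Multiply two elements of GF(p^n), given as coefficient lists of
    length n (lowest degree first), and reduce modulo the monic MODULUS."""
    n = len(modulus) - 1
    prod = [0] * (2 * n - 1)
    for i, xi in enumerate(x):
        for j, yj in enumerate(y):
            prod[i + j] = (prod[i + j] + xi * yj) % p
    # reduce: replace X^d (d >= n) using X^n = -(lower-degree part of MODULUS)
    for d in range(2 * n - 2, n - 1, -1):
        c = prod[d]
        if c:
            prod[d] = 0
            for j in range(n):
                prod[d - n + j] = (prod[d - n + j] - c * modulus[j]) % p
    return prod[:n]

alpha = [0, 1, 0]
a3 = poly_mul_mod(poly_mul_mod(alpha, alpha), alpha)
print(a3)        # expected [1, 1, 0], i.e. 1 + alpha, confirming alpha^3 = alpha + 1
```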
The polynomialX4+X+1{\displaystyle X^{4}+X+1}is irreducible overGF(2){\displaystyle \mathrm {GF} (2)}, that is, it is irreducible modulo2{\displaystyle 2}. It follows that the elements ofGF(16){\displaystyle \mathrm {GF} (16)}may be represented byexpressionsa+bα+cα2+dα3,{\displaystyle a+b\alpha +c\alpha ^{2}+d\alpha ^{3},}wherea,b,c,d{\displaystyle a,b,c,d}are either0{\displaystyle 0}or1{\displaystyle 1}(elements ofGF(2){\displaystyle \mathrm {GF} (2)}), andα{\displaystyle \alpha }is a symbol such thatα4=α+1{\displaystyle \alpha ^{4}=\alpha +1}(that is,α{\displaystyle \alpha }is defined as a root of the given irreducible polynomial). As the characteristic ofGF(2){\displaystyle \mathrm {GF} (2)}is2{\displaystyle 2}, each element is its additive inverse inGF(16){\displaystyle \mathrm {GF} (16)}. The addition and multiplication onGF(16){\displaystyle \mathrm {GF} (16)}may be defined as follows; in following formulas, the operations between elements ofGF(2){\displaystyle \mathrm {GF} (2)}, represented by Latin letters are the operations inGF(2){\displaystyle \mathrm {GF} (2)}.(a+bα+cα2+dα3)+(e+fα+gα2+hα3)=(a+e)+(b+f)α+(c+g)α2+(d+h)α3(a+bα+cα2+dα3)(e+fα+gα2+hα3)=(ae+bh+cg+df)+(af+be+bh+cg+df+ch+dg)α+(ag+bf+ce+ch+dg+dh)α2+(ah+bg+cf+de+dh)α3{\displaystyle {\begin{aligned}(a+b\alpha +c\alpha ^{2}+d\alpha ^{3})+(e+f\alpha +g\alpha ^{2}+h\alpha ^{3})&=(a+e)+(b+f)\alpha +(c+g)\alpha ^{2}+(d+h)\alpha ^{3}\\(a+b\alpha +c\alpha ^{2}+d\alpha ^{3})(e+f\alpha +g\alpha ^{2}+h\alpha ^{3})&=(ae+bh+cg+df)+(af+be+bh+cg+df+ch+dg)\alpha \;+\\&\quad \;(ag+bf+ce+ch+dg+dh)\alpha ^{2}+(ah+bg+cf+de+dh)\alpha ^{3}\end{aligned}}}
The fieldGF(16){\displaystyle \mathrm {GF} (16)}has eightprimitive elements(the elements that have all nonzero elements ofGF(16){\displaystyle \mathrm {GF} (16)}as integer powers). These elements are the four roots ofX4+X+1{\displaystyle X^{4}+X+1}and theirmultiplicative inverses. In particular,α{\displaystyle \alpha }is a primitive element, and the primitive elements areαm{\displaystyle \alpha ^{m}}withm{\displaystyle m}less than andcoprimewith15{\displaystyle 15}(that is, 1, 2, 4, 7, 8, 11, 13, 14).
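Because the coefficients lie in GF(2), an element of GF(16) can also be packed into a 4-bit integer, with addition as XOR and multiplication as a carry-less shift-and-add followed by reduction with α⁴ = α + 1. The sketch below (illustrative) uses this representation to confirm the count of eight primitive elements stated above.

```python
def gf16_mul(x, y):
    """Multiply two elements of GF(16), encoded as 4-bit integers
    a + b*alpha + c*alpha^2 + d*alpha^3 -> bits dcba, modulo X^4 + X + 1."""
    result = 0
    for _ in range(4):
        if y & 1:
            result ^= x
        y >>= 1
        x <<= 1
        if x & 0b10000:          # degree reached 4: use alpha^4 = alpha + 1
            x ^= 0b10011         # subtract (XOR) X^4 + X + 1
    return result

def order(x):
    """Multiplicative order of a nonzero element of GF(16)."""
    k, y = 1, x
    while y != 1:
        y = gf16_mul(y, x)
        k += 1
    return k

primitive = [x for x in range(1, 16) if order(x) == 15]
print(len(primitive), primitive)   # expected 8 primitive elements
```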
The set of non-zero elements inGF(q){\displaystyle \mathrm {GF} (q)}is anabelian groupunder the multiplication, of orderq−1{\displaystyle q-1}. ByLagrange's theorem, there exists a divisork{\displaystyle k}ofq−1{\displaystyle q-1}such thatxk=1{\displaystyle x^{k}=1}for every non-zerox{\displaystyle x}inGF(q){\displaystyle \mathrm {GF} (q)}. As the equationxk=1{\displaystyle x^{k}=1}has at mostk{\displaystyle k}solutions in any field,q−1{\displaystyle q-1}is the lowest possible value fork{\displaystyle k}.
The structure theorem of finite abelian groups implies that this multiplicative group is cyclic, that is, all non-zero elements are powers of a single element. In summary: the multiplicative group of the non-zero elements of GF(q) is cyclic of order q − 1; that is, there exists an element a such that the q − 1 non-zero elements of GF(q) are a, a², …, a^(q−2), a^(q−1) = 1.
Such an elementa{\displaystyle a}is called aprimitive elementofGF(q){\displaystyle \mathrm {GF} (q)}. Unlessq=2,3{\displaystyle q=2,3}, the primitive element is not unique. The number of primitive elements isϕ(q−1){\displaystyle \phi (q-1)}whereϕ{\displaystyle \phi }isEuler's totient function.
The result above implies thatxq=x{\displaystyle x^{q}=x}for everyx{\displaystyle x}inGF(q){\displaystyle \mathrm {GF} (q)}. The particular case whereq{\displaystyle q}is prime isFermat's little theorem.
If a is a primitive element in GF(q), then for any non-zero element x in GF(q), there is a unique integer n with 0 ≤ n ≤ q − 2 such that x = a^n.
This integern{\displaystyle n}is called thediscrete logarithmofx{\displaystyle x}to the basea{\displaystyle a}.
Whilean{\displaystyle a^{n}}can be computed very quickly, for example usingexponentiation by squaring, there is no known efficient algorithm for computing the inverse operation, the discrete logarithm. This has been used in variouscryptographic protocols, seeDiscrete logarithmfor details.
When the nonzero elements ofGF(q){\displaystyle \mathrm {GF} (q)}are represented by their discrete logarithms, multiplication and division are easy, as they reduce to addition and subtraction moduloq−1{\displaystyle q-1}. However, addition amounts to computing the discrete logarithm ofam+an{\displaystyle a^{m}+a^{n}}. The identityam+an=an(am−n+1){\displaystyle a^{m}+a^{n}=a^{n}\left(a^{m-n}+1\right)}allows one to solve this problem by constructing the table of the discrete logarithms ofan+1{\displaystyle a^{n}+1}, calledZech's logarithms, forn=0,…,q−2{\displaystyle n=0,\ldots ,q-2}(it is convenient to define the discrete logarithm of zero as being−∞{\displaystyle -\infty }).
Zech's logarithms are useful for large computations, such aslinear algebraover medium-sized fields, that is, fields that are sufficiently large for making natural algorithms inefficient, but not too large, as one has to pre-compute a table of the same size as the order of the field.
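As an illustration, the following sketch (illustrative code, not a production implementation) builds the discrete-logarithm and Zech-logarithm tables for the prime field GF(13) with the primitive element 2, and then adds two elements given only by their logarithms.

```python
Q = 13          # field order (a prime, so field arithmetic is just mod Q)
G = 2           # 2 is a primitive element of GF(13)

# discrete logarithm table: log[x] = n with g^n = x, for x != 0
log = {}
x = 1
for n in range(Q - 1):
    log[x] = n
    x = (x * G) % Q
exp = {n: x for x, n in log.items()}          # inverse table: exp[n] = g^n

# Zech logarithms: zech[n] = log(g^n + 1), with None standing for log(0)
zech = {n: log.get((exp[n] + 1) % Q, None) for n in range(Q - 1)}

def add_logs(m, n):
    """Return log(g^m + g^n), using g^m + g^n = g^n * (g^(m-n) + 1)."""
    z = zech[(m - n) % (Q - 1)]
    if z is None:
        return None                           # the sum is zero
    return (n + z) % (Q - 1)

m, n = 5, 9
assert exp[add_logs(m, n)] == (exp[m] + exp[n]) % Q
print("log(g^5 + g^9) =", add_logs(m, n))
```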
Every nonzero element of a finite field is aroot of unity, asxq−1=1{\displaystyle x^{q-1}=1}for every nonzero element ofGF(q){\displaystyle \mathrm {GF} (q)}.
Ifn{\displaystyle n}is a positive integer, ann{\displaystyle n}thprimitive root of unityis a solution of the equationxn=1{\displaystyle x^{n}=1}that is not a solution of the equationxm=1{\displaystyle x^{m}=1}for any positive integerm<n{\displaystyle m<n}. Ifa{\displaystyle a}is an{\displaystyle n}th primitive root of unity in a fieldF{\displaystyle F}, thenF{\displaystyle F}contains all then{\displaystyle n}roots of unity, which are1,a,a2,…,an−1{\displaystyle 1,a,a^{2},\ldots ,a^{n-1}}.
The fieldGF(q){\displaystyle \mathrm {GF} (q)}contains an{\displaystyle n}th primitive root of unity if and only ifn{\displaystyle n}is a divisor ofq−1{\displaystyle q-1}; ifn{\displaystyle n}is a divisor ofq−1{\displaystyle q-1}, then the number of primitiven{\displaystyle n}th roots of unity inGF(q){\displaystyle \mathrm {GF} (q)}isϕ(n){\displaystyle \phi (n)}(Euler's totient function). The number ofn{\displaystyle n}th roots of unity inGF(q){\displaystyle \mathrm {GF} (q)}isgcd(n,q−1){\displaystyle \mathrm {gcd} (n,q-1)}.
In a field of characteristicp{\displaystyle p}, everynp{\displaystyle np}th root of unity is also an{\displaystyle n}th root of unity. It follows that primitivenp{\displaystyle np}th roots of unity never exist in a field of characteristicp{\displaystyle p}.
On the other hand, ifn{\displaystyle n}iscoprimetop{\displaystyle p}, the roots of then{\displaystyle n}thcyclotomic polynomialare distinct in every field of characteristicp{\displaystyle p}, as this polynomial is a divisor ofXn−1{\displaystyle X^{n}-1}, whosediscriminantnn{\displaystyle n^{n}}is nonzero modulop{\displaystyle p}. It follows that then{\displaystyle n}thcyclotomic polynomialfactors overGF(q){\displaystyle \mathrm {GF} (q)}into distinct irreducible polynomials that have all the same degree, sayd{\displaystyle d}, and thatGF(pd){\displaystyle \mathrm {GF} (p^{d})}is the smallest field of characteristicp{\displaystyle p}that contains then{\displaystyle n}th primitive roots of unity.
When computingBrauer characters, one uses the mapαk↦exp(2πik/(q−1)){\displaystyle \alpha ^{k}\mapsto \exp(2\pi ik/(q-1))}to map eigenvalues of a representation matrix to the complex numbers. Under this mapping, the base subfieldGF(p){\displaystyle \mathrm {GF} (p)}consists of evenly spaced points around the unit circle (omitting zero).
The fieldGF(64){\displaystyle \mathrm {GF} (64)}has several interesting properties that smaller fields do not share: it has two subfields such that neither is contained in the other; not all generators (elements withminimal polynomialof degree6{\displaystyle 6}overGF(2){\displaystyle \mathrm {GF} (2)}) are primitive elements; and the primitive elements are not all conjugate under theGalois group.
The order of this field being 2^6, and the divisors of 6 being 1, 2, 3, 6, the subfields of GF(64) are GF(2), GF(2^2) = GF(4), GF(2^3) = GF(8), and GF(64) itself. As 2 and 3 are coprime, the intersection of GF(4) and GF(8) in GF(64) is the prime field GF(2).
The union of GF(4) and GF(8) thus has 10 elements. The remaining 54 elements of GF(64) generate GF(64) in the sense that no other subfield contains any of them. It follows that they are roots of irreducible polynomials of degree 6 over GF(2). This implies that, over GF(2), there are exactly 9 = 54/6 irreducible monic polynomials of degree 6. This may be verified by factoring X^64 − X over GF(2).
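The count of nine irreducible monic polynomials of degree 6 can also be checked by brute force. The sketch below (illustrative) represents polynomials over GF(2) as bitmasks and tests each degree-6 candidate for divisibility by every monic polynomial of degree 1 to 3, which is enough, since any reducible polynomial of degree 6 has such a factor.

```python
def gf2_polydiv_rem(a, b):
    """Remainder of the division of GF(2)[X] polynomials a by b (bitmasks)."""
    db = b.bit_length() - 1
    while a and a.bit_length() - 1 >= db:
        a ^= b << (a.bit_length() - 1 - db)
    return a

def is_irreducible_deg6(f):
    # a reducible polynomial of degree 6 has a monic factor of degree 1..3
    return all(gf2_polydiv_rem(f, g) != 0
               for d in range(1, 4)
               for g in range(1 << d, 1 << (d + 1)))

candidates = range(1 << 6, 1 << 7)                    # all monic, degree 6
irreducible = [f for f in candidates if is_irreducible_deg6(f)]
print(len(irreducible))                               # expected 9
print(0b1000011 in irreducible)                       # X^6 + X + 1 -> True
```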
The elements of GF(64) are primitive nth roots of unity for some n dividing 63. As the 3rd and the 7th roots of unity belong to GF(4) and GF(8), respectively, the 54 generators are primitive nth roots of unity for some n in {9, 21, 63}. Euler's totient function shows that there are 6 primitive 9th roots of unity, 12 primitive 21st roots of unity, and 36 primitive 63rd roots of unity. Summing these numbers, one finds again 54 elements.
By factoring thecyclotomic polynomialsoverGF(2){\displaystyle \mathrm {GF} (2)}, one finds that:
This shows that the best choice to construct GF(64) is to define it as GF(2)[X] / (X^6 + X + 1). In fact, the corresponding generator α is a primitive element, and this polynomial is the irreducible polynomial that produces the easiest Euclidean division.
In this section,p{\displaystyle p}is a prime number, andq=pn{\displaystyle q=p^{n}}is a power ofp{\displaystyle p}.
In GF(q), the identity (x + y)^p = x^p + y^p implies that the map φ : x ↦ x^p is a GF(p)-linear endomorphism and a field automorphism of GF(q), which fixes every element of the subfield GF(p). It is called the Frobenius automorphism, after Ferdinand Georg Frobenius.
Denoting by φ^k the composition of φ with itself k times, we have φ^k : x ↦ x^(p^k). It has been shown in the preceding section that φ^n is the identity. For 0 < k < n, the automorphism φ^k is not the identity, as, otherwise, the polynomial X^(p^k) − X would have more than p^k roots.
There are no other GF(p)-automorphisms of GF(q). In other words, GF(p^n) has exactly n GF(p)-automorphisms, which are Id = φ^0, φ, φ^2, …, φ^(n−1).
In terms of Galois theory, this means that GF(p^n) is a Galois extension of GF(p), which has a cyclic Galois group.
The fact that the Frobenius map is surjective implies that every finite field isperfect.
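For a concrete check of these properties, the short sketch below (illustrative) works in GF(9), represented as pairs a + bα with α² = −1 (X² + 1 is irreducible over GF(3), since −1 is a quadratic non-residue modulo 3), and verifies that the Frobenius map x ↦ x³ is additive, multiplicative, non-trivial, and of order 2.

```python
P = 3                                    # GF(9) = GF(3)[X]/(X^2 + 1)

def add(x, y):
    return ((x[0] + y[0]) % P, (x[1] + y[1]) % P)

def mul(x, y):
    a, b = x
    c, d = y
    return ((a * c - b * d) % P, (a * d + b * c) % P)   # alpha^2 = -1

def power(x, k):
    r = (1, 0)
    for _ in range(k):
        r = mul(r, x)
    return r

def frobenius(x):
    return power(x, P)                   # x -> x^p

elements = [(a, b) for a in range(P) for b in range(P)]
assert all(frobenius(add(x, y)) == add(frobenius(x), frobenius(y))
           for x in elements for y in elements)             # additive
assert all(frobenius(mul(x, y)) == mul(frobenius(x), frobenius(y))
           for x in elements for y in elements)             # multiplicative
assert any(frobenius(x) != x for x in elements)             # not the identity
assert all(frobenius(frobenius(x)) == x for x in elements)  # phi^2 = id
print("Frobenius on GF(9) is an automorphism of order 2")
```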
If F is a finite field, a non-constant monic polynomial with coefficients in F is irreducible over F if it is not the product of two non-constant monic polynomials with coefficients in F.
As everypolynomial ringover a field is aunique factorization domain, every monic polynomial over a finite field may be factored in a unique way (up to the order of the factors) into a product of irreducible monic polynomials.
There are efficient algorithms for testing polynomial irreducibility and factoring polynomials over finite fields. They are a key step for factoring polynomials over the integers or therational numbers. At least for this reason, everycomputer algebra systemhas functions for factoring polynomials over finite fields, or, at least, over finite prime fields.
The polynomialXq−X{\displaystyle X^{q}-X}factors into linear factors over a field of orderq. More precisely, this polynomial is the product of all monic polynomials of degree one over a field of orderq.
This implies that, if q = p^n, then X^q − X is the product of all monic irreducible polynomials over GF(p) whose degree divides n. In fact, if P is an irreducible factor over GF(p) of X^q − X, its degree divides n, as its splitting field is contained in GF(p^n). Conversely, if P is an irreducible monic polynomial over GF(p) of degree d dividing n, it defines a field extension of degree d, which is contained in GF(p^n), and all roots of P belong to GF(p^n), and are roots of X^q − X; thus P divides X^q − X. As X^q − X does not have any multiple factor, it is thus the product of all the irreducible monic polynomials that divide it.
This property is used to compute the product of the irreducible factors of each degree of polynomials overGF(p); seeDistinct degree factorization.
The numberN(q,n)of monic irreducible polynomials of degreenoverGF(q)is given by[5]N(q,n)=1n∑d∣nμ(d)qn/d,{\displaystyle N(q,n)={\frac {1}{n}}\sum _{d\mid n}\mu (d)q^{n/d},}whereμis theMöbius function. This formula is an immediate consequence of the property ofXq−Xabove and theMöbius inversion formula.
By the above formula, the number of irreducible (not necessarily monic) polynomials of degree n over GF(q) is (q − 1)N(q, n).
The exact formula implies the inequalityN(q,n)≥1n(qn−∑ℓ∣n,ℓprimeqn/ℓ);{\displaystyle N(q,n)\geq {\frac {1}{n}}\left(q^{n}-\sum _{\ell \mid n,\ \ell {\text{ prime}}}q^{n/\ell }\right);}this is sharp if and only ifnis a power of some prime.
For everyqand everyn, the right hand side is positive, so there is at least one irreducible polynomial of degreenoverGF(q).
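The formula and the lower bound are easy to check numerically. The sketch below (illustrative) computes N(q, n) with a small Möbius function and reproduces, for instance, the nine monic irreducible polynomials of degree 6 over GF(2) found earlier.

```python
def mobius(d):
    """Möbius function mu(d), by trial factorization (fine for small d)."""
    result, k = 1, 2
    while k * k <= d:
        if d % k == 0:
            d //= k
            if d % k == 0:
                return 0          # square factor
            result = -result
        k += 1
    return -result if d > 1 else result

def num_irreducible(q, n):
    """Number of monic irreducible polynomials of degree n over GF(q)."""
    divisors = [d for d in range(1, n + 1) if n % d == 0]
    return sum(mobius(d) * q ** (n // d) for d in divisors) // n

print(num_irreducible(2, 6))   # expected 9
print(num_irreducible(2, 8))   # 30 monic irreducible octics over GF(2)
```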
Incryptography, the difficulty of thediscrete logarithm problemin finite fields or inelliptic curvesis the basis of several widely used protocols, such as theDiffie–Hellmanprotocol. For example, in 2014, a secure internet connection to Wikipedia involved the elliptic curve Diffie–Hellman protocol (ECDHE) over a large finite field.[6]Incoding theory, many codes are constructed assubspacesofvector spacesover finite fields.
Finite fields are used by many error correction codes, such as the Reed–Solomon error correction code or BCH codes. The finite field almost always has characteristic 2, since computer data is stored in binary. For example, a byte of data can be interpreted as an element of GF(2^8). One exception is the PDF417 bar code, which uses GF(929). Some CPUs have special instructions that can be useful for finite fields of characteristic 2, generally variations of the carry-less product.
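As an illustration of byte-level GF(2^8) arithmetic, the sketch below (illustrative) multiplies two bytes with the shift-and-reduce technique; the reduction polynomial X^8 + X^4 + X^3 + X + 1 is the one used by AES, chosen here only as a familiar example (Reed–Solomon implementations may use a different irreducible polynomial).

```python
def gf256_mul(x, y, poly=0x11B):
    """Carry-less multiplication of two bytes, reduced modulo the given
    degree-8 irreducible polynomial (0x11B = X^8 + X^4 + X^3 + X + 1)."""
    result = 0
    for _ in range(8):
        if y & 1:
            result ^= x
        y >>= 1
        x <<= 1
        if x & 0x100:            # degree reached 8: reduce
            x ^= poly
    return result

def gf256_inv(x):
    """Inverse of a nonzero byte: x^254 = x^(-1), since x^255 = 1 in GF(2^8)."""
    r = 1
    for _ in range(254):
        r = gf256_mul(r, x)
    return r

assert all(gf256_mul(x, gf256_inv(x)) == 1 for x in range(1, 256))
print(hex(gf256_mul(0x57, 0x83)))     # the AES specification's example: 0xc1
```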
Finite fields are widely used innumber theory, as many problems over the integers may be solved by reducing themmoduloone or severalprime numbers. For example, the fastest known algorithms forpolynomial factorizationandlinear algebraover the field ofrational numbersproceed by reduction modulo one or several primes, and then reconstruction of the solution by usingChinese remainder theorem,Hensel liftingor theLLL algorithm.
Similarly many theoretical problems in number theory can be solved by considering their reductions modulo some or all prime numbers. See, for example,Hasse principle. Many recent developments ofalgebraic geometrywere motivated by the need to enlarge the power of these modular methods.Wiles' proof of Fermat's Last Theoremis an example of a deep result involving many mathematical tools, including finite fields.
TheWeil conjecturesconcern the number of points onalgebraic varietiesover finite fields and the theory has many applications includingexponentialandcharacter sumestimates.
Finite fields have widespread application incombinatorics, two well known examples being the definition ofPaley Graphsand the related construction forHadamard Matrices. Inarithmetic combinatoricsfinite fields[7]and finite field models[8][9]are used extensively, such as inSzemerédi's theoremon arithmetic progressions.
Adivision ringis a generalization of field. Division rings are not assumed to be commutative. There are no non-commutative finite division rings:Wedderburn's little theoremstates that all finitedivision ringsare commutative, and hence are finite fields. This result holds even if we relax theassociativityaxiom toalternativity, that is, all finitealternative division ringsare finite fields, by theArtin–Zorn theorem.[10]
A finite fieldF{\displaystyle F}is not algebraically closed: the polynomialf(T)=1+∏α∈F(T−α),{\displaystyle f(T)=1+\prod _{\alpha \in F}(T-\alpha ),}has no roots inF{\displaystyle F}, sincef(α) = 1for allα{\displaystyle \alpha }inF{\displaystyle F}.
Given a prime number p, let F̄_p be an algebraic closure of F_p. It is not only unique up to isomorphism, as are all algebraic closures, but, contrary to the general case, all of its subfields are fixed (as sets) by all of its automorphisms, and it is also the algebraic closure of all finite fields of the same characteristic p.
This property results mainly from the fact that the elements of F_{p^n} are exactly the roots of x^(p^n) − x, and this defines an inclusion F_{p^n} ⊂ F_{p^{nm}} for m > 1. These inclusions allow writing informally F̄_p = ⋃_{n≥1} F_{p^n}. The formal validation of this notation results from the fact that the above field inclusions form a directed set of fields; its direct limit is F̄_p, which may thus be considered as a "directed union".
If g_{mn} is a primitive element of F_{q^{mn}}, then g_{mn}^((q^{mn}−1)/(q^n−1)) is a primitive element of F_{q^n}.
For explicit computations, it may be useful to have a coherent choice of the primitive elements for all finite fields; that is, to choose the primitive element g_n of F_{q^n} in such a way that, whenever m divides n, one has g_m = g_n^((q^n−1)/(q^m−1)), where g_m is the primitive element already chosen for F_{q^m}.
Such a construction may be obtained byConway polynomials.
Although finite fields are not algebraically closed, they arequasi-algebraically closed, which means that everyhomogeneous polynomialover a finite field has a non-trivial zero whose components are in the field if the number of its variables is more than its degree. This was a conjecture ofArtinandDicksonproved byChevalley(seeChevalley–Warning theorem).
https://en.wikipedia.org/wiki/Galois_field#Galois_field_of_order_2^n
Inmathematics, anabelian group, also called acommutative group, is agroupin which the result of applying thegroup operationto two group elements does not depend on the order in which they are written. That is, the group operation iscommutative. With addition as an operation, theintegersand thereal numbersform abelian groups, and the concept of an abelian group may be viewed as a generalization of these examples. Abelian groups are named after the Norwegian mathematicianNiels Henrik Abel.[1]
The concept of an abelian group underlies many fundamentalalgebraic structures, such asfields,rings,vector spaces, andalgebras. The theory of abelian groups is generally simpler than that of theirnon-abeliancounterparts, and finite abelian groups are very well understood andfully classified.
An abelian group is asetA{\displaystyle A}, together with anoperation・ , that combines any twoelementsa{\displaystyle a}andb{\displaystyle b}ofA{\displaystyle A}to form another element ofA,{\displaystyle A,}denoteda⋅b{\displaystyle a\cdot b}. The symbol ・ is a general placeholder for a concretely given operation. To qualify as an abelian group, the set and operation,(A,⋅){\displaystyle (A,\cdot )}, must satisfy four requirements known as theabelian group axioms(some authors include in the axioms some properties that belong to the definition of an operation: namely that the operation isdefinedfor any ordered pair of elements ofA, that the result iswell-defined, and that the resultbelongs toA):
A group in which the group operation is not commutative is called a "non-abelian group" or "non-commutative group".[2]
There are two main notational conventions for abelian groups – additive and multiplicative.
Generally, the multiplicative notation is the usual notation for groups, while the additive notation is the usual notation formodulesandrings. The additive notation may also be used to emphasize that a particular group is abelian, whenever both abelian and non-abelian groups are considered, with some notable exceptions beingnear-ringsandpartially ordered groups, where an operation is written additively even when non-abelian.[3][4]
To verify that afinite groupis abelian, a table (matrix) – known as aCayley table– can be constructed in a similar fashion to amultiplication table.[5]If the group isG={g1=e,g2,…,gn}{\displaystyle G=\{g_{1}=e,g_{2},\dots ,g_{n}\}}under theoperation⋅{\displaystyle \cdot },the(i,j){\displaystyle (i,j)}-thentry of this table contains the productgi⋅gj{\displaystyle g_{i}\cdot g_{j}}.
The group is abelianif and only ifthis table issymmetricabout the main diagonal. This is true since the group is abelianiffgi⋅gj=gj⋅gi{\displaystyle g_{i}\cdot g_{j}=g_{j}\cdot g_{i}}for alli,j=1,...,n{\displaystyle i,j=1,...,n}, which is iff the(i,j){\displaystyle (i,j)}entry of the table equals the(j,i){\displaystyle (j,i)}entry for alli,j=1,...,n{\displaystyle i,j=1,...,n}, i.e. the table is symmetric about the main diagonal.
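This symmetry test is easy to automate. The sketch below (illustrative) builds the Cayley table of a group given its elements and operation and checks it for symmetry, using the integers modulo 6 under addition and, for contrast, the symmetric group S3 represented as permutation tuples.

```python
from itertools import permutations

def cayley_table(elements, op):
    return [[op(x, y) for y in elements] for x in elements]

def is_abelian(elements, op):
    t = cayley_table(elements, op)
    n = len(elements)
    return all(t[i][j] == t[j][i] for i in range(n) for j in range(n))

# Z/6Z under addition: abelian
z6 = list(range(6))
print(is_abelian(z6, lambda x, y: (x + y) % 6))          # True

# S3 as permutations of (0, 1, 2) under composition: not abelian
s3 = list(permutations(range(3)))
compose = lambda p, q: tuple(p[q[i]] for i in range(3))  # apply q, then p
print(is_abelian(s3, compose))                           # False
```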
In general,matrices, even invertible matrices, do not form an abelian group under multiplication because matrix multiplication is generally not commutative. However, some groups of matrices are abelian groups under matrix multiplication – one example is the group of2×2{\displaystyle 2\times 2}rotation matrices.
Camille Jordannamed abelian groups after theNorwegianmathematicianNiels Henrik Abel, who had found that the commutativity of the group of apolynomialimplies that the roots of the polynomial can becalculated by using radicals.[7][8]
Ifn{\displaystyle n}is anatural numberandx{\displaystyle x}is an element of an abelian groupG{\displaystyle G}written additively, thennx{\displaystyle nx}can be defined asx+x+⋯+x{\displaystyle x+x+\cdots +x}(n{\displaystyle n}summands) and(−n)x=−(nx){\displaystyle (-n)x=-(nx)}. In this way,G{\displaystyle G}becomes amoduleover theringZ{\displaystyle \mathbb {Z} }of integers. In fact, the modules overZ{\displaystyle \mathbb {Z} }can be identified with the abelian groups.[9]
Theorems about abelian groups (i.e.modulesover theprincipal ideal domainZ{\displaystyle \mathbb {Z} }) can often be generalized to theorems about modules over an arbitrary principal ideal domain. A typical example is the classification offinitely generated abelian groupswhich is a specialization of thestructure theorem for finitely generated modules over a principal ideal domain. In the case of finitely generated abelian groups, this theorem guarantees that an abelian group splits as adirect sumof atorsion groupand afree abelian group. The former may be written as a direct sum of finitely many groups of the formZ/pkZ{\displaystyle \mathbb {Z} /p^{k}\mathbb {Z} }forp{\displaystyle p}prime, and the latter is a direct sum of finitely many copies ofZ{\displaystyle \mathbb {Z} }.
Iff,g:G→H{\displaystyle f,g:G\to H}are twogroup homomorphismsbetween abelian groups, then their sumf+g{\displaystyle f+g}, defined by(f+g)(x)=f(x)+g(x){\displaystyle (f+g)(x)=f(x)+g(x)}, is again a homomorphism. (This is not true ifH{\displaystyle H}is a non-abelian group.) The setHom(G,H){\displaystyle {\text{Hom}}(G,H)}of all group homomorphisms fromG{\displaystyle G}toH{\displaystyle H}is therefore an abelian group in its own right.
Somewhat akin to thedimensionofvector spaces, every abelian group has arank. It is defined as the maximalcardinalityof a set oflinearly independent(over the integers) elements of the group.[10]Finite abelian groups and torsion groups have rank zero, and every abelian group of rank zero is a torsion group. The integers and therational numbershave rank one, as well as every nonzeroadditive subgroupof the rationals. On the other hand, themultiplicative groupof the nonzero rationals has an infinite rank, as it is a free abelian group with the set of theprime numbersas a basis (this results from thefundamental theorem of arithmetic).
ThecenterZ(G){\displaystyle Z(G)}of a groupG{\displaystyle G}is the set of elements that commute with every element ofG{\displaystyle G}. A groupG{\displaystyle G}is abelian if and only if it is equal to its centerZ(G){\displaystyle Z(G)}. The center of a groupG{\displaystyle G}is always acharacteristicabelian subgroup ofG{\displaystyle G}. If the quotient groupG/Z(G){\displaystyle G/Z(G)}of a group by its center is cyclic thenG{\displaystyle G}is abelian.[11]
Cyclic groups ofintegers modulon{\displaystyle n},Z/nZ{\displaystyle \mathbb {Z} /n\mathbb {Z} }, were among the first examples of groups. It turns out that an arbitrary finite abelian group is isomorphic to a direct sum of finite cyclic groups of prime power order, and these orders are uniquely determined, forming a complete system of invariants. Theautomorphism groupof a finite abelian group can be described directly in terms of these invariants. The theory had been first developed in the 1879 paper ofGeorg FrobeniusandLudwig Stickelbergerand later was both simplified and generalized to finitely generated modules over a principal ideal domain, forming an important chapter oflinear algebra.
Any group of prime order is isomorphic to a cyclic group and therefore abelian. Any group whose order is a square of a prime number is also abelian.[12]In fact, for every prime numberp{\displaystyle p}there are (up to isomorphism) exactly two groups of orderp2{\displaystyle p^{2}}, namelyZp2{\displaystyle \mathbb {Z} _{p^{2}}}andZp×Zp{\displaystyle \mathbb {Z} _{p}\times \mathbb {Z} _{p}}.
Thefundamental theorem of finite abelian groupsstates that every finite abelian groupG{\displaystyle G}can be expressed as the direct sum of cyclic subgroups ofprime-power order; it is also known as thebasis theorem for finite abelian groups. Moreover, automorphism groups of cyclic groups are examples of abelian groups.[13]This is generalized by thefundamental theorem of finitely generated abelian groups, with finite groups being the special case whenGhas zerorank; this in turn admits numerous further generalizations.
The classification was proven byLeopold Kroneckerin 1870, though it was not stated in modern group-theoretic terms until later, and was preceded by a similar classification of quadratic forms byCarl Friedrich Gaussin 1801; seehistoryfor details.
The cyclic group Z_{mn} of order mn is isomorphic to the direct sum of Z_m and Z_n if and only if m and n are coprime. It follows that any finite abelian group G is isomorphic to a direct sum of the form Z_{k_1} ⊕ ⋯ ⊕ Z_{k_u}, in either of the following canonical ways: either each k_i is a power of a prime (the primary decomposition), or k_1 divides k_2, which divides k_3, and so on up to k_u (the invariant-factor decomposition).
For example,Z15{\displaystyle \mathbb {Z} _{15}}can be expressed as the direct sum of two cyclic subgroups of order 3 and 5:Z15≅{0,5,10}⊕{0,3,6,9,12}{\displaystyle \mathbb {Z} _{15}\cong \{0,5,10\}\oplus \{0,3,6,9,12\}}. The same can be said for any abelian group of order 15, leading to the remarkable conclusion that all abelian groups of order 15 areisomorphic.
For another example, every abelian group of order 8 is isomorphic to eitherZ8{\displaystyle \mathbb {Z} _{8}}(the integers 0 to 7 under addition modulo 8),Z4⊕Z2{\displaystyle \mathbb {Z} _{4}\oplus \mathbb {Z} _{2}}(the odd integers 1 to 15 under multiplication modulo 16), orZ2⊕Z2⊕Z2{\displaystyle \mathbb {Z} _{2}\oplus \mathbb {Z} _{2}\oplus \mathbb {Z} _{2}}.
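The Z_15 example can be verified directly. The sketch below (illustrative) checks that the subsets {0, 5, 10} and {0, 3, 6, 9, 12} are subgroups of Z_15 of orders 3 and 5, and that the map x ↦ (x mod 3, x mod 5) is a group isomorphism onto Z_3 ⊕ Z_5, as the Chinese remainder theorem predicts.

```python
H = {0, 5, 10}                 # a subgroup of Z_15 of order 3
K = {0, 3, 6, 9, 12}           # a subgroup of Z_15 of order 5

# closure under addition mod 15 confirms both subsets are subgroups
assert all((x + y) % 15 in H for x in H for y in H)
assert all((x + y) % 15 in K for x in K for y in K)

# every element of Z_15 is a sum h + k with h in H, k in K
sums = {(h + k) % 15 for h in H for k in K}
assert sums == set(range(15))

# the CRT map Z_15 -> Z_3 (+) Z_5 is an additive bijection
crt = {x: (x % 3, x % 5) for x in range(15)}
assert len(set(crt.values())) == 15                       # bijective
assert all(crt[(x + y) % 15] == ((crt[x][0] + crt[y][0]) % 3,
                                 (crt[x][1] + crt[y][1]) % 5)
           for x in range(15) for y in range(15))         # homomorphism
print("Z_15 is isomorphic to Z_3 (+) Z_5")
```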
See alsolist of small groupsfor finite abelian groups of order 30 or less.
One can apply the fundamental theorem to count (and sometimes determine) the automorphisms of a given finite abelian group G. To do this, one uses the fact that if G splits as a direct sum H ⊕ K of subgroups of coprime order, then Aut(H ⊕ K) ≅ Aut(H) × Aut(K).
Given this, the fundamental theorem shows that to compute the automorphism group of G it suffices to compute the automorphism groups of the Sylow p-subgroups separately (that is, all direct sums of cyclic subgroups, each with order a power of p). Fix a prime p and suppose the exponents e_i of the cyclic factors of the Sylow p-subgroup are arranged in increasing order: e_1 ≤ e_2 ≤ ⋯ ≤ e_n
for some n > 0. One needs to find the automorphisms of P = Z_{p^{e_1}} ⊕ Z_{p^{e_2}} ⊕ ⋯ ⊕ Z_{p^{e_n}}.
One special case is when n = 1, so that there is only one cyclic prime-power factor in the Sylow p-subgroup P. In this case the theory of automorphisms of a finite cyclic group can be used. Another special case is when n is arbitrary but e_i = 1 for 1 ≤ i ≤ n. Here, one is considering P to be of the form Z_p ⊕ ⋯ ⊕ Z_p (n summands),
so elements of this subgroup can be viewed as comprising a vector space of dimension n over the finite field of p elements F_p. The automorphisms of this subgroup are therefore given by the invertible linear transformations, so Aut(P) ≅ GL(n, F_p),
where GL is the appropriate general linear group. This is easily shown to have order |GL(n, F_p)| = (p^n − 1)(p^n − p)(p^n − p²) ⋯ (p^n − p^(n−1)).
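A quick brute-force check of this order formula is shown below (illustrative): it counts invertible n×n matrices over F_p by computing determinants modulo p and compares with the product formula, e.g. for n = 2 and p = 3, where both counts give 48.

```python
from itertools import product

def det_mod_p(m, p):
    """Determinant of a small square matrix over F_p, by cofactor expansion."""
    n = len(m)
    if n == 1:
        return m[0][0] % p
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det_mod_p(minor, p)
    return total % p

def count_gl_brute(n, p):
    count = 0
    for entries in product(range(p), repeat=n * n):
        m = [list(entries[i * n:(i + 1) * n]) for i in range(n)]
        if det_mod_p(m, p) != 0:
            count += 1
    return count

def count_gl_formula(n, p):
    result = 1
    for k in range(n):
        result *= p ** n - p ** k
    return result

n, p = 2, 3
print(count_gl_brute(n, p), count_gl_formula(n, p))   # both 48
```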
In the most general case, where theei{\displaystyle e_{i}}andn{\displaystyle n}are arbitrary, the automorphism group is more difficult to determine. It is known, however, that if one defines
and
then one has in particulark≤dk{\displaystyle k\leq d_{k}},ck≤k{\displaystyle c_{k}\leq k}, and
One can check that this yields the orders in the previous examples as special cases (see Hillar & Rhea).
An abelian groupAis finitely generated if it contains a finite set of elements (calledgenerators)G={x1,…,xn}{\displaystyle G=\{x_{1},\ldots ,x_{n}\}}such that every element of the group is alinear combinationwith integer coefficients of elements ofG.
Let L be a free abelian group with basis B = {b_1, …, b_n}. There is a unique group homomorphism p : L → A such that p(b_i) = x_i for i = 1, …, n.
This homomorphism issurjective, and itskernelis finitely generated (since integers form aNoetherian ring). Consider the matrixMwith integer entries, such that the entries of itsjth column are the coefficients of thejth generator of the kernel. Then, the abelian group is isomorphic to thecokernelof linear map defined byM. Conversely everyinteger matrixdefines a finitely generated abelian group.
It follows that the study of finitely generated abelian groups is totally equivalent with the study of integer matrices. In particular, changing the generating set ofAis equivalent with multiplyingMon the left by aunimodular matrix(that is, an invertible integer matrix whose inverse is also an integer matrix). Changing the generating set of the kernel ofMis equivalent with multiplyingMon the right by a unimodular matrix.
The Smith normal form of M is a matrix S = UMV,
where U and V are unimodular, and S is a matrix such that all non-diagonal entries are zero, the non-zero diagonal entries d_{1,1}, …, d_{k,k} are the first ones, and d_{j,j} is a divisor of d_{i,i} for i > j. The existence and the shape of the Smith normal form proves that the finitely generated abelian group A is the direct sum Z^r ⊕ Z/d_{1,1}Z ⊕ ⋯ ⊕ Z/d_{k,k}Z,
whereris the number of zero rows at the bottom ofS(and also therankof the group). This is thefundamental theorem of finitely generated abelian groups.
The existence of algorithms for Smith normal form shows that the fundamental theorem of finitely generated abelian groups is not only a theorem of abstract existence, but provides a way for computing expression of finitely generated abelian groups as direct sums.[14]
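Instead of a full row-and-column reduction, the invariant factors d_{i,i} can also be obtained as quotients of determinantal divisors (the gcd of all k×k minors), which gives a very short, if inefficient, way to compute the decomposition; the sketch below (illustrative, for small matrices only) uses this approach.

```python
from itertools import combinations
from math import gcd

def int_det(m):
    """Determinant of a small square integer matrix, by cofactor expansion."""
    n = len(m)
    if n == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] *
               int_det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(n))

def invariant_factors(M):
    """Nonzero diagonal entries d_1 | d_2 | ... of the Smith normal form of M,
    computed as quotients D_k / D_(k-1) of determinantal divisors, where D_k
    is the gcd of all k x k minors (and D_0 = 1)."""
    rows, cols = len(M), len(M[0])
    divisors = [1]
    for k in range(1, min(rows, cols) + 1):
        g = 0
        for r in combinations(range(rows), k):
            for c in combinations(range(cols), k):
                g = gcd(g, abs(int_det([[M[i][j] for j in c] for i in r])))
        if g == 0:
            break                   # rank reached; no further nonzero factors
        divisors.append(g)
    return [divisors[k] // divisors[k - 1] for k in range(1, len(divisors))]

# For M = diag(4, 6), the group Z^2 / M Z^2 = Z/4 (+) Z/6 has invariant
# factors 2 | 12, i.e. it is isomorphic to Z/2 (+) Z/12.
print(invariant_factors([[4, 0], [0, 6]]))      # [2, 12]
```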
The simplest infinite abelian group is theinfinite cyclic groupZ{\displaystyle \mathbb {Z} }. Anyfinitely generated abelian groupA{\displaystyle A}is isomorphic to the direct sum ofr{\displaystyle r}copies ofZ{\displaystyle \mathbb {Z} }and a finite abelian group, which in turn is decomposable into a direct sum of finitely manycyclic groupsofprime powerorders. Even though the decomposition is not unique, the numberr{\displaystyle r}, called therankofA{\displaystyle A}, and the prime powers giving the orders of finite cyclic summands are uniquely determined.
By contrast, classification of general infinitely generated abelian groups is far from complete.Divisible groups, i.e. abelian groupsA{\displaystyle A}in which the equationnx=a{\displaystyle nx=a}admits a solutionx∈A{\displaystyle x\in A}for any natural numbern{\displaystyle n}and elementa{\displaystyle a}ofA{\displaystyle A}, constitute one important class of infinite abelian groups that can be completely characterized. Every divisible group is isomorphic to a direct sum, with summands isomorphic toQ{\displaystyle \mathbb {Q} }andPrüfer groupsQp/Zp{\displaystyle \mathbb {Q} _{p}/Z_{p}}for various prime numbersp{\displaystyle p}, and the cardinality of the set of summands of each type is uniquely determined.[15]Moreover, if a divisible groupA{\displaystyle A}is a subgroup of an abelian groupG{\displaystyle G}thenA{\displaystyle A}admits a direct complement: a subgroupC{\displaystyle C}ofG{\displaystyle G}such thatG=A⊕C{\displaystyle G=A\oplus C}. Thus divisible groups areinjective modulesin thecategory of abelian groups, and conversely, every injective abelian group is divisible (Baer's criterion). An abelian group without non-zero divisible subgroups is calledreduced.
Two important special classes of infinite abelian groups with diametrically opposite properties aretorsion groupsandtorsion-free groups, exemplified by the groupsQ/Z{\displaystyle \mathbb {Q} /\mathbb {Z} }(periodic) andQ{\displaystyle \mathbb {Q} }(torsion-free).
An abelian group is calledperiodicortorsion, if every element has finiteorder. A direct sum of finite cyclic groups is periodic. Although the converse statement is not true in general, some special cases are known. The first and secondPrüfer theoremsstate that ifA{\displaystyle A}is a periodic group, and it either has abounded exponent, i.e.,nA=0{\displaystyle nA=0}for some natural numbern{\displaystyle n}, or is countable and thep{\displaystyle p}-heightsof the elements ofA{\displaystyle A}are finite for eachp{\displaystyle p}, thenA{\displaystyle A}is isomorphic to a direct sum of finite cyclic groups.[16]The cardinality of the set of direct summands isomorphic toZ/pmZ{\displaystyle \mathbb {Z} /p^{m}\mathbb {Z} }in such a decomposition is an invariant ofA{\displaystyle A}.[17]These theorems were later subsumed in theKulikov criterion. In a different direction,Helmut Ulmfound an extension of the second Prüfer theorem to countable abelianp{\displaystyle p}-groups with elements of infinite height: those groups are completely classified by means of theirUlm invariants.[18]
An abelian group is calledtorsion-freeif every non-zero element has infinite order. Several classes oftorsion-free abelian groupshave been studied extensively:
An abelian group that is neither periodic nor torsion-free is called mixed. If A is an abelian group and T(A) is its torsion subgroup, then the factor group A/T(A) is torsion-free. However, in general the torsion subgroup is not a direct summand of A, so A need not be isomorphic to T(A) ⊕ A/T(A). Thus the theory of mixed groups involves more than simply combining the results about periodic and torsion-free groups. The additive group Z of the integers is a torsion-free Z-module.[20]
One of the most basic invariants of an infinite abelian group A is its rank: the cardinality of a maximal linearly independent (over the integers) subset of A. Abelian groups of rank 0 are precisely the periodic groups, while torsion-free abelian groups of rank 1 are necessarily subgroups of Q and can be completely described. More generally, a torsion-free abelian group of finite rank r is a subgroup of Q^r. On the other hand, the group of p-adic integers Z_p is a torsion-free abelian group of infinite Z-rank, and the groups Z_p^n with different n are non-isomorphic, so this invariant does not even fully capture properties of some familiar groups.
The classification theorems for finitely generated, divisible, countable periodic, and rank 1 torsion-free abelian groups explained above were all obtained before 1950 and form a foundation of the classification of more general infinite abelian groups. Important technical tools used in classification of infinite abelian groups arepureandbasicsubgroups. Introduction of various invariants of torsion-free abelian groups has been one avenue of further progress. See the books byIrving Kaplansky,László Fuchs,Phillip Griffith, andDavid Arnold, as well as the proceedings of the conferences on Abelian Group Theory published inLecture Notes in Mathematicsfor more recent findings.
The additive group of aringis an abelian group, but not all abelian groups are additive groups of rings (with nontrivial multiplication). Some important topics in this area of study are:
Many large abelian groups possess a naturaltopology, which turns them intotopological groups.
The collection of all abelian groups, together with thehomomorphismsbetween them, forms thecategoryAb{\displaystyle {\textbf {Ab}}}, the prototype of anabelian category.
Wanda Szmielew(1955) proved that the first-order theory of abelian groups, unlike its non-abelian counterpart, is decidable. Mostalgebraic structuresother thanBoolean algebrasareundecidable.
There are still many areas of current research:
Moreover, abelian groups of infinite order lead, quite surprisingly, to deep questions about the set theory commonly assumed to underlie all of mathematics. Take the Whitehead problem: are all Whitehead groups of infinite order also free abelian groups? In the 1970s, Saharon Shelah proved that the Whitehead problem is undecidable in ZFC, that is, independent of the standard axioms of set theory.
Among mathematicaladjectivesderived from theproper nameof amathematician, the word "abelian" is rare in that it is usually spelled with a lowercasea, rather than an uppercaseA, the lack of capitalization being a tacit acknowledgment not only of the degree to which Abel's name has been institutionalized but also of how ubiquitous in modern mathematics are the concepts introduced by him.[21]
https://en.wikipedia.org/wiki/Abelian_group
Inabstract algebra, afinite groupis agroupwhoseunderlying setisfinite. Finite groups often arise when considering symmetry ofmathematicalorphysicalobjects, when those objects admit just a finite number of structure-preserving transformations. Important examples of finite groups includecyclic groupsandpermutation groups.
The study of finite groups has been an integral part ofgroup theorysince it arose in the 19th century. One major area of study has been classification: theclassification of finite simple groups(those with no nontrivialnormal subgroup) was completed in 2004.
During the twentieth century, mathematicians investigated some aspects of the theory of finite groups in great depth, especially thelocal theoryof finite groups and the theory ofsolvableandnilpotent groups.[1][2]As a consequence, the completeclassification of finite simple groupswas achieved, meaning that all thosesimple groupsfrom which all finite groups can be built are now known.
During the second half of the twentieth century, mathematicians such asChevalleyandSteinbergalso increased our understanding of finite analogs ofclassical groups, and other related groups. One such family of groups is the family ofgeneral linear groupsoverfinite fields.
Finite groups often occur when consideringsymmetryof mathematical or physical objects, when those objects admit just a finite number of structure-preserving transformations. The theory ofLie groups, which may be viewed as dealing with "continuous symmetry", is strongly influenced by the associatedWeyl groups. These are finite groups generated by reflections which act on a finite-dimensionalEuclidean space. The properties of finite groups can thus play a role in subjects such astheoretical physicsandchemistry.[3]
Thesymmetric groupSnon afinite setofnsymbols is thegroupwhose elements are all thepermutationsof thensymbols, and whosegroup operationis thecompositionof such permutations, which are treated asbijective functionsfrom the set of symbols to itself.[4]Since there aren! (nfactorial) possible permutations of a set ofnsymbols, it follows that theorder(the number of elements) of the symmetric group Snisn!.
A cyclic group Z_n is a group all of whose elements are powers of a particular element a, where a^n = a^0 = e, the identity. A typical realization of this group is as the complex nth roots of unity. Sending a to a primitive root of unity gives an isomorphism between the two. This can be done with any finite cyclic group.
Anabelian group, also called acommutative group, is agroupin which the result of applying the groupoperationto two group elements does not depend on their order (the axiom ofcommutativity). They are named afterNiels Henrik Abel.[5]
An arbitrary finite abelian group is isomorphic to a direct sum of finite cyclic groups of prime power order, and these orders are uniquely determined, forming a complete system of invariants. Theautomorphism groupof a finite abelian group can be described directly in terms of these invariants. The theory had been first developed in the 1879 paper ofGeorg FrobeniusandLudwig Stickelbergerand later was both simplified and generalized to finitely generated modules over a principal ideal domain, forming an important chapter oflinear algebra.
Agroup of Lie typeis agroupclosely related to the groupG(k) of rational points of a reductivelinear algebraic groupGwith values in thefieldk. Finite groups of Lie type give the bulk of nonabelianfinite simple groups. Special cases include theclassical groups, theChevalley groups, the Steinberg groups, and the Suzuki–Ree groups.
Finite groups of Lie type were among the first groups to be considered in mathematics, aftercyclic,symmetricandalternatinggroups, with theprojective special linear groupsover prime finite fields, PSL(2,p) being constructed byÉvariste Galoisin the 1830s. The systematic exploration of finite groups of Lie type started withCamille Jordan's theorem that theprojective special linear groupPSL(2,q) is simple forq≠ 2, 3. This theorem generalizes to projective groups of higher dimensions and gives an important infinite family PSL(n,q) offinite simple groups. Other classical groups were studied byLeonard Dicksonin the beginning of 20th century. In the 1950sClaude Chevalleyrealized that after an appropriate reformulation, many theorems aboutsemisimple Lie groupsadmit analogues for algebraic groups over an arbitrary fieldk, leading to construction of what are now calledChevalley groups. Moreover, as in the case of compact simple Lie groups, the corresponding groups turned out to be almost simple as abstract groups (Tits simplicity theorem). Although it was known since 19th century that other finite simple groups exist (for example,Mathieu groups), gradually a belief formed that nearly all finite simple groups can be accounted for by appropriate extensions of Chevalley's construction, together with cyclic and alternating groups. Moreover, the exceptions, thesporadic groups, share many properties with the finite groups of Lie type, and in particular, can be constructed and characterized based on theirgeometryin the sense of Tits.
The belief has now become a theorem – theclassification of finite simple groups. Inspection of the list of finite simple groups shows that groups of Lie type over afinite fieldinclude all the finite simple groups other than the cyclic groups, the alternating groups, theTits group, and the 26sporadic simple groups.
For any finite groupG, theorder(number of elements) of everysubgroupHofGdivides the order ofG. The theorem is named afterJoseph-Louis Lagrange.
The Sylow theorems provide a partial converse to Lagrange's theorem, giving information about how many subgroups of a given order are contained in G.
Cayley's theorem, named in honour ofArthur Cayley, states that everygroupGisisomorphicto asubgroupof thesymmetric groupacting onG.[6]This can be understood as an example of thegroup actionofGon the elements ofG.[7]
Burnside's theorem in group theory states that if G is a finite group of order p^a q^b, where p and q are prime numbers, and a and b are non-negative integers, then G is solvable. Hence each non-abelian finite simple group has order divisible by at least three distinct primes.
TheFeit–Thompson theorem, orodd order theorem, states that every finitegroupof oddorderissolvable. It was proved byWalter FeitandJohn Griggs Thompson(1962,1963)
Theclassification of finite simple groupsis a theorem stating that everyfinite simple groupbelongs to one of the following families:
The finite simple groups can be seen as the basic building blocks of all finite groups, in a way reminiscent of the way theprime numbersare the basic building blocks of thenatural numbers. TheJordan–Hölder theoremis a more precise way of stating this fact about finite groups. However, a significant difference with respect to the case ofinteger factorizationis that such "building blocks" do not necessarily determine uniquely a group, since there might be many non-isomorphic groups with the samecomposition seriesor, put in another way, theextension problemdoes not have a unique solution.
The proof of the theorem consists of tens of thousands of pages in several hundred journal articles written by about 100 authors, published mostly between 1955 and 2004.Gorenstein(d.1992),Lyons, andSolomonare gradually publishing a simplified and revised version of the proof.
Given a positive integern, it is not at all a routine matter to determine how manyisomorphismtypes of groups ofordernthere are. Every group ofprimeorder iscyclic, becauseLagrange's theoremimplies that the cyclic subgroup generated by any of its non-identity elements is the whole group.
Ifnis the square of a prime, then there are exactly two possible isomorphism types of group of ordern, both of which are abelian. Ifnis a higher power of a prime, then results ofGraham HigmanandCharles Simsgive asymptotically correct estimates for the number of isomorphism types of groups of ordern, and the number grows very rapidly as the power increases.
Depending on the prime factorization of n, some restrictions may be placed on the structure of groups of order n, as a consequence of results such as the Sylow theorems. For example, every group of order pq is cyclic when q < p are primes with p − 1 not divisible by q. For a necessary and sufficient condition, see cyclic number.
If n is squarefree, then any group of order n is solvable. Burnside's theorem, proved using group characters, states that every group of order n is solvable when n is divisible by fewer than three distinct primes, i.e. if n = p^a q^b, where p and q are prime numbers, and a and b are non-negative integers. By the Feit–Thompson theorem, which has a long and complicated proof, every group of order n is solvable when n is odd.
For every positive integern, most groups of ordernaresolvable. To see this for any particular order is usually not difficult (for example, there is, up to isomorphism, one non-solvable group and 12 solvable groups of order 60) but the proof of this for all orders uses theclassification of finite simple groups. For any positive integernthere are at most two simple groups of ordern, and there are infinitely many positive integersnfor which there are two non-isomorphic simple groups of ordern.
https://en.wikipedia.org/wiki/Finite_group
Kerckhoffs's principle(also calledKerckhoffs's desideratum,assumption,axiom,doctrineorlaw) ofcryptographywas stated byDutch-borncryptographerAuguste Kerckhoffsin the 19th century. The principle holds that acryptosystemshould be secure, even if everything about the system, except thekey, is public knowledge. This concept is widely embraced by cryptographers, in contrast tosecurity through obscurity, which is not.
Kerckhoffs's principle was phrased by American mathematicianClaude Shannonas "theenemyknows the system",[1]i.e., "one ought to design systems under the assumption that the enemy will immediately gain full familiarity with them". In that form, it is calledShannon's maxim.
Another formulation by American researcher and professorSteven M. Bellovinis:
In other words—design your system assuming that your opponents know it in detail. (A former official at NSA's National Computer Security Center told me that the standard assumption there was that serial number 1 of any new device was delivered to the Kremlin.)[2]
The invention oftelegraphyradically changedmilitary communicationsand increased the number of messages that needed to be protected from the enemy dramatically, leading to the development of field ciphers which had to be easy to use without large confidentialcodebooksprone to capture on the battlefield.[3]It was this environment which led to the development of Kerckhoffs's requirements.
Auguste Kerckhoffs was a professor of German language atEcole des Hautes Etudes Commerciales(HEC) in Paris.[4]In early 1883, Kerckhoffs's article,La Cryptographie Militaire,[5]was published in two parts in theJournal of Military Science, in which he stated six design rules for militaryciphers.[6]Translated from French, they are:[7][8]
Some are no longer relevant given the ability of computers to perform complex encryption. The second rule, now known asKerckhoffs's principle, is still critically important.[9]
Kerckhoffs viewed cryptography as a rival to, and a better alternative than,steganographicencoding, which was common in the nineteenth century for hiding the meaning of military messages. One problem with encoding schemes is that they rely on humanly-held secrets such as "dictionaries" which disclose for example, the secret meaning of words. Steganographic-like dictionaries, once revealed, permanently compromise a corresponding encoding system. Another problem is that the risk of exposure increases as the number of users holding the secrets increases.
Nineteenth century cryptography, in contrast, used simple tables which provided for the transposition of alphanumeric characters, generally given row-column intersections which could be modified by keys which were generally short, numeric, and could be committed to human memory. The system was considered "indecipherable" because tables and keys do not convey meaning by themselves. Secret messages can be compromised only if a matching set of table, key, and message falls into enemy hands in a relevant time frame. Kerckhoffs viewed tactical messages as only having a few hours of relevance. Systems are not necessarily compromised, because their components (i.e. alphanumeric character tables and keys) can be easily changed.
Using secure cryptography is supposed to replace the difficult problem of keeping messages secure with a much more manageable one, keeping relatively small keys secure. A system that requires long-term secrecy for something as large and complex as the whole design of a cryptographic system obviously cannot achieve that goal. It only replaces one hard problem with another. However, if a system is secure even when the enemy knows everything except the key, then all that is needed is to manage keeping the keys secret.[10]
There are a large number of ways the internal details of a widely used system could be discovered. The most obvious is that someone could bribe, blackmail, or otherwise threaten staff or customers into explaining the system. In war, for example, one side will probably capture some equipment and people from the other side. Each side will also use spies to gather information.
If a method involves software, someone could do memory dumps or run the software under the control of a debugger in order to understand the method. If hardware is being used, someone could buy or steal some of the hardware and build whatever programs or gadgets are needed to test it. Hardware can also be dismantled so that the chip details can be examined under the microscope.
A generalization some make from Kerckhoffs's principle is: "The fewer and simpler the secrets that one must keep to ensure system security, the easier it is to maintain system security." Bruce Schneier ties it in with a belief that all security systems must be designed to fail as gracefully as possible:
Kerckhoffs's principle applies beyond codes and ciphers to security systems in general: every secret creates a potential failure point. Secrecy, in other words, is a prime cause of brittleness – and therefore something likely to make a system prone to catastrophic collapse. Conversely, openness provides ductility.[11]
Any security system depends crucially on keeping some things secret. However, Kerckhoffs's principle points out that the things kept secret ought to be those least costly to change if inadvertently disclosed.[9]
For example, a cryptographic algorithm may be implemented by hardware and software that is widely distributed among users. If security depends on keeping that algorithm secret, then disclosure leads to major logistic difficulties in developing, testing, and distributing implementations of a new algorithm – it is "brittle". On the other hand, if keeping the algorithm secret is not important, but only the keys used with the algorithm must be secret, then disclosure of the keys simply requires the simpler, less costly process of generating and distributing new keys.[12]
In accordance with Kerckhoffs's principle, the majority of civilian cryptography makes use of publicly known algorithms. By contrast, ciphers used to protect classified government or military information are often kept secret (see Type 1 encryption). However, it should not be assumed that government/military ciphers must be kept secret to maintain security. It is possible that they are intended to be as cryptographically sound as public algorithms, and the decision to keep them secret is in keeping with a layered security posture.
It is moderately common for companies, and sometimes even standards bodies as in the case of the CSS encryption on DVDs, to keep the inner workings of a system secret. Some argue that this "security by obscurity" makes the product safer and less vulnerable to attack. A counter-argument is that keeping the innards secret may improve security in the short term, but in the long run, only systems that have been published and analyzed should be trusted.
Steven Bellovin and Randy Bush commented:[13]
Security Through Obscurity Considered Dangerous
Hiding security vulnerabilities in algorithms, software, and/or hardware decreases the likelihood they will be repaired and increases the likelihood that they can and will be exploited. Discouraging or outlawing discussion of weaknesses and vulnerabilities is extremely dangerous and deleterious to the security of computer systems, the network, and its citizens.
Open Discussion Encourages Better Security
The long history of cryptography and cryptoanalysis has shown time and time again that open discussion and analysis of algorithms exposes weaknesses not thought of by the original authors, and thereby leads to better and more secure algorithms. As Kerckhoffs noted about cipher systems in 1883 [Kerc83], "Il faut qu'il n'exige pas le secret, et qu'il puisse sans inconvénient tomber entre les mains de l'ennemi." (Roughly, "the system must not require secrecy and must be able to be stolen by the enemy without causing trouble.")
|
https://en.wikipedia.org/wiki/Kerckhoffs%27_principle
|
Symmetric-key algorithms[a] are algorithms for cryptography that use the same cryptographic keys for both the encryption of plaintext and the decryption of ciphertext. The keys may be identical, or there may be a simple transformation to go between the two keys.[1] The keys, in practice, represent a shared secret between two or more parties that can be used to maintain a private information link.[2] The requirement that both parties have access to the secret key is one of the main drawbacks of symmetric-key encryption, in comparison to public-key encryption (also known as asymmetric-key encryption).[3][4] However, symmetric-key encryption algorithms are usually better for bulk encryption. With the exception of the one-time pad, they have a smaller key size, which means less storage space and faster transmission. Due to this, asymmetric-key encryption is often used to exchange the secret key for symmetric-key encryption.[5][6][7]
Symmetric-key encryption can use either stream ciphers or block ciphers.[8]
Stream ciphers encrypt the digits (typically bytes), or letters (in substitution ciphers), of a message one at a time. An example is ChaCha20. Substitution ciphers are well-known ciphers, but can be easily decrypted using a frequency table.[9]
Block ciphers take a number of bits and encrypt them in a single unit, padding the plaintext to achieve a multiple of the block size. The Advanced Encryption Standard (AES) algorithm, approved by NIST in December 2001, uses 128-bit blocks.
Examples of popular symmetric-key algorithms include Twofish, Serpent, AES (Rijndael), Camellia, Salsa20, ChaCha20, Blowfish, CAST5, Kuznyechik, RC4, DES, 3DES, Skipjack, SAFER, and IDEA.[10]
Symmetric ciphers are commonly used to achieve other cryptographic primitives than just encryption.
Encrypting a message does not guarantee that it will remain unchanged while encrypted. Hence, often a message authentication code is added to a ciphertext to ensure that changes to the ciphertext will be noted by the receiver. Message authentication codes can be constructed from an AEAD cipher (e.g. AES-GCM).
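As an illustration of this point, the following minimal sketch uses the third-party Python package "cryptography" (an assumption; the package is not mentioned in this article) to encrypt a message with AES-GCM, an AEAD cipher whose built-in authentication tag plays the role of the message authentication code:

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # shared symmetric key
aesgcm = AESGCM(key)

nonce = os.urandom(12)                      # 96-bit nonce; must never repeat for a given key
plaintext = b"attack at dawn"
associated_data = b"header-v1"              # authenticated but not encrypted

ciphertext = aesgcm.encrypt(nonce, plaintext, associated_data)

# Decryption verifies the integrated authentication tag; any modification of the
# ciphertext or the associated data makes decrypt() raise InvalidTag.
recovered = aesgcm.decrypt(nonce, ciphertext, associated_data)
assert recovered == plaintext

Here the receiver learns about any tampering because decryption fails, which is exactly the guarantee that a separate MAC would otherwise have to provide.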
However, symmetric ciphers cannot be used for non-repudiation purposes except by involving additional parties.[11] See the ISO/IEC 13888-2 standard.
Another application is to build hash functions from block ciphers. See one-way compression function for descriptions of several such methods.
Many modern block ciphers are based on a construction proposed by Horst Feistel. Feistel's construction makes it possible to build invertible functions from other functions that are themselves not invertible.
Symmetric ciphers have historically been susceptible to known-plaintext attacks, chosen-plaintext attacks, differential cryptanalysis and linear cryptanalysis. Careful construction of the functions for each round can greatly reduce the chances of a successful attack. It is also possible to increase the key length or the number of rounds in the encryption process to better protect against attack. This, however, tends to increase the processing power required and to decrease the speed at which the process runs, due to the larger number of operations the system needs to perform.[12]
Most modern symmetric-key algorithms appear to be resistant to attacks by quantum computers.[13] Grover's algorithm would reduce the time needed for a brute-force attack to roughly the square root of the classical search time, but this vulnerability can be compensated for by doubling the key length.[14] For example, a 128-bit AES cipher would not be secure against such an attack, as it would reduce the time required to test all possible keys from over 10 quintillion years to about six months. By contrast, it would still take a quantum computer the same amount of time to decode a 256-bit AES cipher as it would take a conventional computer to decode a 128-bit AES cipher.[15] For this reason, AES-256 is believed to be "quantum resistant".[16][17]
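A short back-of-the-envelope calculation, sketched below in Python purely for illustration, shows why doubling the key length is the usual countermeasure: Grover's algorithm reduces a brute-force search over 2^k keys to roughly 2^(k/2) operations.

def effective_security_bits(key_bits: int, quantum: bool) -> int:
    # Work factor of exhaustive key search, expressed in bits:
    # about 2^k classically, and roughly 2^(k/2) using Grover's algorithm.
    return key_bits // 2 if quantum else key_bits

for k in (128, 256):
    print(f"AES-{k}: classical ~{effective_security_bits(k, False)} bits, "
          f"Grover ~{effective_security_bits(k, True)} bits")

AES-128 is left with roughly 64-bit work under Grover, while AES-256 still retains roughly 128 bits, matching the comparison above.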
Symmetric-key algorithms require both the sender and the recipient of a message to have the same secret key. All early cryptographic systems required either the sender or the recipient to somehow receive a copy of that secret key over a physically secure channel.
Nearly all modern cryptographic systems still use symmetric-key algorithms internally to encrypt the bulk of the messages, but they eliminate the need for a physically secure channel by using Diffie–Hellman key exchange or some other public-key protocol to securely come to agreement on a fresh new secret key for each session/conversation (forward secrecy).
When used with asymmetric ciphers for key transfer, pseudorandom key generators are nearly always used to generate the symmetric cipher session keys. However, lack of randomness in those generators or in their initialization vectors is disastrous and has led to cryptanalytic breaks in the past. Therefore, it is essential that an implementation use a source of high entropy for its initialization.[18][19][20]
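The following sketch, using only the Python standard library, contrasts a suitable source of key material with an unsuitable one; the variable names are illustrative only.

import secrets
import random

# Suitable: the operating system's CSPRNG, exposed via the secrets module.
session_key = secrets.token_bytes(32)   # 256-bit symmetric session key
iv = secrets.token_bytes(16)            # fresh initialization vector per message

# Unsuitable for keys: random.Random is a deterministic Mersenne Twister whose
# internal state (and hence all future output) can be recovered from observed output.
weak_key = random.Random(1234).getrandbits(256).to_bytes(32, "big")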
A reciprocal cipher is a cipher where, just as one enters the plaintext into the cryptography system to get the ciphertext, one could enter the ciphertext into the same place in the system to get the plaintext. A reciprocal cipher is also sometimes referred to as a self-reciprocal cipher.[21][22]
Practically all mechanical cipher machines implement a reciprocal cipher, a mathematical involution on each typed-in letter.
Instead of designing two kinds of machines, one for encrypting and one for decrypting, all the machines can be identical and can be set up (keyed) the same way.[23]
Examples of reciprocal ciphers include the Atbash cipher, the Beaufort cipher, the Enigma machine, ROT13, and the XOR cipher.
The majority of all modern ciphers can be classified as either a stream cipher, most of which use a reciprocal XOR cipher combiner (a minimal sketch of this self-reciprocal combiner follows below), or a block cipher, most of which use a Feistel cipher or Lai–Massey scheme with a reciprocal transformation in each round.
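The XOR combiner is the simplest example of a reciprocal operation: applying it twice with the same keystream returns the original message. A minimal Python sketch (with a toy keystream, not a real stream cipher) is:

import secrets

def xor_combine(data: bytes, keystream: bytes) -> bytes:
    # XOR each message byte with the corresponding keystream byte.
    # Because XOR is an involution, the same function both encrypts and decrypts.
    return bytes(d ^ k for d, k in zip(data, keystream))

message = b"reciprocal cipher demo"
keystream = secrets.token_bytes(len(message))   # toy keystream for illustration

ciphertext = xor_combine(message, keystream)
assert xor_combine(ciphertext, keystream) == message   # the same operation inverts itself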
|
https://en.wikipedia.org/wiki/Symmetric-key_algorithm
|
In theoretical computer science, communication complexity studies the amount of communication required to solve a problem when the input to the problem is distributed among two or more parties. The study of communication complexity was first introduced by Andrew Yao in 1979, while studying the problem of computation distributed among several machines.[1] The problem is usually stated as follows: two parties (traditionally called Alice and Bob) each receive a (potentially different) n-bit string, x and y respectively. The goal is for Alice to compute the value of a certain function, f(x, y), that depends on both x and y, with the least amount of communication between them.
While Alice and Bob can always succeed by having Bob send his wholen{\displaystyle n}-bit string to Alice (who then computes thefunctionf{\displaystyle f}), the idea here is to find clever ways of calculatingf{\displaystyle f}with fewer thann{\displaystyle n}bits of communication. Note that, unlike incomputational complexity theory, communication complexity is not concerned with theamount of computationperformed by Alice or Bob, or the size of thememoryused, as we generally assume nothing about the computational power of either Alice or Bob.
This abstract problem with two parties (called two-party communication complexity), and its general form withmore than two parties, is relevant in many contexts. InVLSIcircuit design, for example, one seeks to minimize energy used by decreasing the amount of electric signals passed between the different components during a distributed computation. The problem is also relevant in the study of data structures and in the optimization of computer networks. For surveys of the field, see the textbooks byRao & Yehudayoff (2020)andKushilevitz & Nisan (2006).
Let f : X × Y → Z, where we assume in the typical case that X = Y = {0,1}^n and Z = {0,1}. Alice holds an n-bit string x ∈ X while Bob holds an n-bit string y ∈ Y. By communicating to each other one bit at a time (adopting some communication protocol which is agreed upon in advance), Alice and Bob wish to compute the value of f(x, y) such that at least one party knows the value at the end of the communication. At this point the answer can be communicated back, so that at the cost of one extra bit both parties will know the answer. The worst-case communication complexity of computing f, denoted D(f), is then defined to be D(f) = min_P max_{(x,y)} (number of bits exchanged by P on input (x, y)), where the minimum ranges over all deterministic protocols P that correctly compute f.
As observed above, for any function f : {0,1}^n × {0,1}^n → {0,1}, we have D(f) ≤ n, since one party can simply send their whole input to the other.
Using the above definition, it is useful to think of the functionf{\displaystyle f}as amatrixA{\displaystyle A}(called theinput matrixorcommunication matrix) where the rows are indexed byx∈X{\displaystyle x\in X}and columns byy∈Y{\displaystyle y\in Y}. The entries of the matrix areAx,y=f(x,y){\displaystyle A_{x,y}=f(x,y)}. Initially both Alice and Bob have a copy of the entire matrixA{\displaystyle A}(assuming the functionf{\displaystyle f}is known to both parties). Then, the problem of computing the function value can be rephrased as "zeroing-in" on the corresponding matrix entry. This problem can be solved if either Alice or Bob knows bothx{\displaystyle x}andy{\displaystyle y}. At the start of communication, the number of choices for the value of the function on the inputs is the size of matrix, i.e.22n{\displaystyle 2^{2n}}. Then, as and when each party communicates a bit to the other, the number of choices for the answer reduces as this eliminates a set of rows/columns resulting in asubmatrixofA{\displaystyle A}.
More formally, a set R ⊆ X × Y is called a (combinatorial) rectangle if whenever (x1, y1) ∈ R and (x2, y2) ∈ R then (x1, y2) ∈ R. Equivalently, R is a combinatorial rectangle if it can be expressed as R = M × N for some M ⊆ X and N ⊆ Y. Consider the case when k bits have already been exchanged between the parties. Now, for a particular h ∈ {0,1}^k, let us define the set of inputs consistent with that transcript, T_h = {(x, y) ∈ X × Y : the protocol produces transcript h on input (x, y)}.
Then T_h ⊆ X × Y, and it is not hard to show that T_h is a combinatorial rectangle in A.
We consider the case where Alice and Bob try to determine whether or not their input strings are equal. Formally, define the Equality function, denoted EQ : {0,1}^n × {0,1}^n → {0,1}, by EQ(x, y) = 1 if and only if x = y. As we demonstrate below, any deterministic communication protocol solving EQ requires n bits of communication in the worst case. As a warm-up example, consider the simple case of x, y ∈ {0,1}^3. The equality function in this case can be represented by an 8 × 8 matrix whose rows represent all the possibilities of x and whose columns those of y.
In this matrix, the function evaluates to 1 only when x equals y (i.e., on the diagonal), and to 0 everywhere else. It is also fairly easy to see how communicating a single bit divides someone's possibilities in half: when the first bit of y is 1, only half of the columns need to be considered (those where y can equal 100, 101, 110, or 111).
Proof. Assume that D(EQ) ≤ n − 1. Then there are at most 2^(n−1) distinct transcripts, but 2^n inputs of the form (x, x), so by the pigeonhole principle there exist x ≠ x′ such that (x, x) and (x′, x′) have the same communication transcript h. Since this transcript defines a rectangle, the protocol's output on (x, x′) must also be 1. But by definition x ≠ x′, and equality holds for (a, b) only when a = b, so the protocol answers incorrectly on (x, x′). This yields a contradiction.
This technique of proving deterministic communication lower bounds is called the fooling set technique.[2]
In the above definition, we are concerned with the number of bits that must be deterministically transmitted between two parties. If both parties are given access to a random number generator, can they determine the value of f with much less information exchanged? Yao, in his seminal paper,[1] answers this question by defining randomized communication complexity.
A randomized protocol R for a function f has two-sided error: for every input pair (x, y), Pr[R(x, y) = f(x, y)] ≥ 2/3, so the protocol may err with probability at most 1/3 regardless of whether the true answer is 0 or 1.
A randomized protocol is a deterministic protocol that uses an extra random string in addition to its normal input. There are two models for this: a public string is a random string that is known by both parties beforehand, while a private string is generated by one party and must be communicated to the other party. A theorem presented below shows that any public string protocol can be simulated by a private string protocol that uses O(log n) additional bits compared to the original.
In the probability inequalities above, the outcome of the protocol is understood to depend only on the random string; both strings x and y remain fixed. In other words, if R(x, y) yields g(x, y, r) when using random string r, then g(x, y, r) = f(x, y) for at least 2/3 of all choices for the string r.
The randomized complexity is simply defined as the number of bits exchanged in such a protocol.
Note that it is also possible to define a randomized protocol with one-sided error, and the complexity is defined similarly.
Returning to the previous example of EQ, if certainty is not required, Alice and Bob can check for equality using only O(log n) messages. Consider the following protocol: assume that Alice and Bob both have access to the same random string z ∈ {0,1}^n. Alice computes z · x and sends this bit (call it b) to Bob. (Here (·) is the dot product in GF(2).) Then Bob compares b to z · y. If they are the same, then Bob accepts, saying x equals y. Otherwise, he rejects.
Clearly, if x = y, then z · x = z · y, so Prob_z[Accept] = 1. If x does not equal y, it is still possible that z · x = z · y, which would give Bob the wrong answer. How does this happen?
If x and y are not equal, they must differ in some locations: writing c_i for the bits on which they agree, the strings can be pictured as x = c_1 c_2 … x_i … c_n and y = c_1 c_2 … y_i … c_n, with x_i ≠ y_i at each differing position.
Where x and y agree, z_i · x_i = z_i · c_i = z_i · y_i, so those terms affect the dot products equally. We can safely ignore those terms and look only at where x and y differ. Furthermore, we can swap the bits x_i and y_i without changing whether or not the dot products are equal. This means we can swap bits so that x contains only zeros and y contains only ones: on the differing positions, x′ = 00…0 and y′ = 11…1, with z′ the corresponding restriction of z.
Note that z′ · x′ = 0 and z′ · y′ = Σ_i z′_i. Now, the question becomes: for some random string z′, what is the probability that Σ_i z′_i = 0 (mod 2)? Since each z′_i is equally likely to be 0 or 1, this probability is just 1/2. Thus, when x does not equal y, Prob_z[Accept] = 1/2. The algorithm can be repeated many times to increase its accuracy. This fits the requirements for a randomized communication algorithm.
This shows that if Alice and Bob share a random string of length n, they can send one bit to each other to compute EQ(x, y). In the next section, it is shown that Alice and Bob can exchange only O(log n) bits that are as good as sharing a random string of length n. Once that is shown, it follows that EQ can be computed in O(log n) messages.
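The shared-random-string protocol for EQ described above is easy to simulate; the sketch below (function names are illustrative) repeats the one-bit test several times, so unequal inputs are accepted with probability at most 2^-rounds.

import secrets

def dot_gf2(z: int, x: int) -> int:
    # Inner product of two n-bit strings over GF(2): the parity of the bitwise AND.
    return bin(z & x).count("1") % 2

def randomized_eq(x: int, y: int, n: int, rounds: int = 20) -> bool:
    # Alice sends one bit (z . x) per round; Bob compares it with (z . y).
    # Equal inputs always agree; unequal inputs agree in a round with probability 1/2.
    for _ in range(rounds):
        z = secrets.randbits(n)          # the shared random string for this round
        if dot_gf2(z, x) != dot_gf2(z, y):
            return False                 # a difference was witnessed: definitely unequal
    return True                          # probably equal (error at most 2**-rounds)

n = 64
a = secrets.randbits(n)
b = a ^ 1                                # differs from a in a single bit
print(randomized_eq(a, a, n))            # True
print(randomized_eq(a, b, n))            # False with probability at least 1 - 2**-20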
For yet another example of randomized communication complexity, we turn to an example known as the gap-Hamming problem (abbreviated GH). Formally, Alice and Bob both hold binary strings x, y ∈ {−1,+1}^n and would like to determine whether the strings are very similar or very dissimilar. In particular, they would like a communication protocol requiring the transmission of as few bits as possible to compute the following partial Boolean function: GH_n(x, y) = +1 if ⟨x, y⟩ ≥ √n, GH_n(x, y) = −1 if ⟨x, y⟩ ≤ −√n, and GH_n(x, y) is undefined (any answer is accepted) otherwise.
Clearly, they must communicate all their bits if the protocol is to be deterministic (this is because, if there is a deterministic protocol in which only a strict subset of the indices is relayed between Alice and Bob, then imagine a pair of strings that disagree on that subset in √n − 1 positions; if another disagreement occurs in any position that is not relayed, this changes the value of GH_n(x, y), and hence would result in an incorrect procedure).
A natural question one then asks is, if we're permitted to err1/3{\displaystyle 1/3}of the time (over random instancesx,y{\displaystyle x,y}drawn uniformly at random from{−1,+1}n{\displaystyle \{-1,+1\}^{n}}), then can we get away with a protocol with fewer bits? It turns out that the answer somewhat surprisingly is no, due to a result of Chakrabarti and Regev in 2012: they show that for random instances, any procedure which is correct at least2/3{\displaystyle 2/3}of the time must sendΩ(n){\displaystyle \Omega (n)}bits worth of communication, which is to say essentially all of them.
Creating random protocols becomes easier when both parties have access to the same random string (a shared string protocol). However, even in cases where the two parties do not share a random string, it is still possible to use private string protocols with only a small communication cost. Any shared-string random protocol using any number of random bits can be simulated by a private-string protocol that uses an extra O(log n) bits.
Intuitively, we can find some set of strings that has enough randomness in it to run the random protocol with only a small increase in error. This set can be shared beforehand, and instead of drawing a random string, Alice and Bob need only agree on which string to choose from the shared set. This set is small enough that the choice can be communicated efficiently. A formal proof follows.
Consider some random protocol P with a maximum error rate of 0.1. Let R be a collection of 100n strings of length n, numbered r_1, r_2, …, r_100n. Given such an R, define a new protocol P′_R which randomly picks some r_i and then runs P using r_i as the shared random string. It takes O(log 100n) = O(log n) bits to communicate the choice of r_i.
Let us define p(x, y) and p′_R(x, y) to be the probabilities that P and P′_R compute the correct value for the input (x, y).
For a fixed (x, y), we can use Hoeffding's inequality to bound, over the random choice of R, the probability that the sampled strings misrepresent the behaviour of P: Pr_R[ |p′_R(x, y) − p(x, y)| ≥ 0.1 ] ≤ 2 exp(−2 · (0.1)^2 · 100n) = 2e^(−2n).
Thus, when we don't have (x, y) fixed, a union bound gives: Pr_R[ there exists (x, y) with |p′_R(x, y) − p(x, y)| ≥ 0.1 ] ≤ 2^(2n) · 2e^(−2n) < 1.
The factor 2^(2n) appears because there are 2^(2n) different pairs (x, y). Since the probability is strictly less than 1, there is some R_0 so that for all (x, y): |p′_{R_0}(x, y) − p(x, y)| < 0.1.
Since P has at most 0.1 error probability, P′_{R_0} can have at most 0.2 error probability.
Suppose we additionally allow Alice and Bob to share some resource, for example a pair of entangled particles. Using that resource, Alice and Bob can correlate their information and thus try to 'collapse' (or 'trivialize') communication complexity in the following sense.
Definition. A resource R is said to be "collapsing" if, using that resource R, only one bit of classical communication is enough for Alice to know the evaluation f(x, y) in the worst-case scenario, for any Boolean function f.
The surprising fact about a collapse of communication complexity is that the function f can have arbitrarily large input size, yet the number of communicated bits remains constant at a single one.
Some resources are shown to be non-collapsing, such as quantum correlations[3]or more generally almost-quantum correlations,[4]whereas on the contrary some other resources are shown to collapse randomized communication complexity, such as the PR-box,[5]or some noisy PR-boxes satisfying some conditions.[6][7][8]
One approach to studying randomized communication complexity is through distributional complexity.
Given a joint distribution μ on the inputs of both players, the corresponding distributional complexity of a function f is the minimum cost of a deterministic protocol R such that Pr[f(x, y) = R(x, y)] ≥ 2/3, where the inputs are sampled according to μ.
Yao's minimax principle[9] (a special case of von Neumann's minimax theorem) states that the randomized communication complexity of a function equals its maximum distributional complexity, where the maximum is taken over all joint distributions of the inputs (not necessarily product distributions).
Yao's principle can be used to prove lower bounds on the randomized communication complexity of a function: design the appropriate joint distribution, and prove a lower bound on the distributional complexity. Since distributional complexity concerns deterministic protocols, this could be easier than proving a lower bound on randomized protocols directly.
As an example, let us consider the disjointness function DISJ: each of the inputs is interpreted as a subset of {1, …, n}, and DISJ(x, y) = 1 if the two sets are disjoint. Razborov[10] proved an Ω(n) lower bound on the randomized communication complexity by considering the following distribution: with probability 3/4, sample two random disjoint sets of size n/4, and with probability 1/4, sample two random sets of size n/4 with a unique intersection.
A powerful approach to the study of distributional complexity is information complexity. Initiated by Bar-Yossef, Jayram, Kumar and Sivakumar,[11]the approach was codified in work of Barak, Braverman, Chen and Rao[12]and by Braverman and Rao.[13]
The (internal) information complexity of a (possibly randomized) protocol R with respect to a distribution μ is defined as follows. Let (X, Y) ∼ μ be random inputs sampled according to μ, and let Π be the transcript of R when run on the inputs X, Y. The information complexity of the protocol is IC_μ(R) = I(X; Π | Y) + I(Y; Π | X),
where I denotes conditional mutual information.
The first summand measures the amount of information that Alice learns about Bob's input from the transcript, and the second measures the amount of information that Bob learns about Alice's input.
Theε-error information complexity of a functionfwith respect to a distributionμis the infimal information complexity of a protocol forfwhose error (with respect toμ) is at mostε.
Braverman and Rao proved that information equals amortized communication. This means that the cost of solving n independent copies of f is roughly n times the information complexity of f. This is analogous to the well-known interpretation of Shannon entropy as the amortized bit-length required to transmit data from a given information source. Braverman and Rao's proof uses a technique known as "protocol compression", in which an information-efficient protocol is "compressed" into a communication-efficient protocol.
The techniques of information complexity enable the computation of the exact (up to first order) communication complexity of set disjointness, which is 1.4923…n.[14]
Information complexity techniques have also been used to analyze extended formulations, proving an essentially optimal lower bound on the complexity of algorithms based on linear programming which approximately solve the maximum clique problem.[15]
A 2015 survey by Omri Weinstein covers the subject.[16]
Quantum communication complexity tries to quantify the communication reduction possible by using quantum effects during a distributed computation.
At least three quantum generalizations of communication complexity have been proposed; for a survey see the suggested text by G. Brassard.
The first is the qubit-communication model, where the parties can use quantum communication instead of classical communication, for example by exchanging photons through an optical fiber.
In a second model the communication is still performed with classical bits, but the parties are allowed to manipulate an unlimited supply of quantum entangled states as part of their protocols. By doing measurements on their entangled states, the parties can save on classical communication during a distributed computation (see an application in Collapse of Randomized Communication Complexity above).
The third model involves access to previously shared entanglement in addition toqubitcommunication, and is the least explored of the three quantum models.
In nondeterministic communication complexity, Alice and Bob have access to an oracle. After receiving the oracle's word, the parties communicate to deducef(x,y){\displaystyle f(x,y)}. The nondeterministic communication complexity is then the maximum over all pairs(x,y){\displaystyle (x,y)}over the sum of number of bits exchanged and the coding length of the oracle word.
Viewed differently, this amounts to covering all 1-entries of the 0/1-matrix by combinatorial 1-rectangles (i.e., non-contiguous, non-convex submatrices, whose entries are all one (see Kushilevitz and Nisan or Dietzfelbinger et al.)). The nondeterministic communication complexity is the binary logarithm of therectangle covering numberof the matrix: the minimum number of combinatorial 1-rectangles required to cover all 1-entries of the matrix, without covering any 0-entries.
Nondeterministic communication complexity occurs as a means to obtaining lower bounds for deterministic communication complexity (see Dietzfelbinger et al.), but also in the theory of nonnegative matrices, where it gives a lower bound on thenonnegative rankof a nonnegative matrix.[17]
In the unbounded-error setting, Alice and Bob have access to a private coin and their own inputs(x,y){\displaystyle (x,y)}. In this setting, Alice succeeds if she responds with the correct value off(x,y){\displaystyle f(x,y)}with probability strictly greater than 1/2. In other words, if Alice's responses haveanynon-zero correlation to the true value off(x,y){\displaystyle f(x,y)}, then the protocol is considered valid.
Note that the requirement that the coin isprivateis essential. In particular, if the number of public bits shared between Alice and Bob are not counted against the communication complexity, it is easy to argue that computing any function hasO(1){\displaystyle O(1)}communication complexity.[18]On the other hand, both models are equivalent if the number of public bits used by Alice and Bob is counted against the protocol's total communication.[19]
Though subtle, lower bounds on this model are extremely strong. More specifically, any bound on problems of this class immediately implies equivalent bounds on problems in the deterministic model and the private- and public-coin models, and such bounds also hold immediately for nondeterministic communication models and quantum communication models.[20]
Forster[21]was the first to prove explicit lower bounds for this class, showing that computing the inner product⟨x,y⟩{\displaystyle \langle x,y\rangle }requires at leastΩ(n){\displaystyle \Omega (n)}bits of communication, though an earlier result of Alon, Frankl, and Rödl proved that the communication complexity for almost all Boolean functionsf:{0,1}n×{0,1}n→{0,1}{\displaystyle f:\{0,1\}^{n}\times \{0,1\}^{n}\to \{0,1\}}isΩ(n){\displaystyle \Omega (n)}.[22]
Lifting is a general technique incomplexity theoryin which a lower bound on a simple measure of complexity is "lifted" to a lower bound on a more difficult measure.
This technique was pioneered in the context of communication complexity by Raz and McKenzie,[23] who proved the first query-to-communication lifting theorem, and used the result to separate the monotone NC hierarchy.
Given a function f : {0,1}^n → {0,1} and a gadget g : {0,1}^a × {0,1}^b → {0,1}, their composition f ∘ g : {0,1}^(na) × {0,1}^(nb) → {0,1} is defined by (f ∘ g)(x, y) = f(g(x_1, y_1), …, g(x_n, y_n)).
In words, x is partitioned into n blocks x_1, …, x_n of length a, and y is partitioned into n blocks y_1, …, y_n of length b. The gadget is applied n times on the corresponding pairs of blocks, and the n resulting bits are fed into f.
A decision tree of depth Δ for f can be translated into a communication protocol whose cost is Δ · D(g): each time the tree queries a bit, the corresponding value of g is computed using an optimal protocol for g. Raz and McKenzie showed that this is optimal up to a constant factor when g is the so-called "indexing gadget", in which x has length c log n (for a large enough constant c), y has length n^c, and g(x, y) is the x-th bit of y.
The proof of the Raz–McKenzie lifting theorem uses the method of simulation, in which a protocol for the composed functionf∘g{\displaystyle f\circ g}is used to generate a decision tree forf{\displaystyle f}. Göös, Pitassi and Watson[24]gave an exposition of the original proof. Since then, several works have proved similar theorems with different gadgets, such as inner product.[25]The smallest gadget which can be handled is the indexing gadget withc=1+ϵ{\displaystyle c=1+\epsilon }.[26]Göös, Pitassi and Watson extended the Raz–McKenzie technique to randomized protocols.[27]
A simple modification of the Raz–McKenzie lifting theorem gives a lower bound ofΔ⋅D(g){\displaystyle \Delta \cdot D(g)}on the logarithm of the size of a protocol tree for computingf∘g{\displaystyle f\circ g}, whereΔ{\displaystyle \Delta }is the depth of the optimal decision tree forf{\displaystyle f}. Garg, Göös, Kamath and Sokolov extended this to theDAG-like setting,[28]and used their result to obtain monotonecircuitlower bounds. The same technique has also yielded applications toproof complexity.[29]
A different type of lifting is exemplified by Sherstov's pattern matrix method,[30]which gives a lower bound on the quantum communication complexity off∘g{\displaystyle f\circ g}, wheregis a modified indexing gadget, in terms of the approximate degree off. The approximate degree of a Boolean function is the minimal degree of a polynomial which approximates the function on all Boolean points up to an additive error of 1/3.
In contrast to the Raz–McKenzie proof which uses the method of simulation, Sherstov's proof takes adual witnessto the approximate degree offand gives a lower bound on the quantum query complexity off∘g{\displaystyle f\circ g}using thegeneralized discrepancy method. The dual witness for the approximate degree offis a lower bound witness for the approximate degree obtained viaLP duality. This dual witness is massaged into other objects constituting data for the generalized discrepancy method.
Another example of this approach is the work of Pitassi and Robere,[31]in which analgebraic gapis lifted to a lower bound onRazborov'srank measure. The result is a strongly exponential lower bound on the monotone circuit complexity of an explicit function, obtained via the Karchmer–Wigderson characterization[32]of monotone circuit size in terms of communication complexity.
Considering a 0/1 input matrix M_f = [f(x, y)] indexed by x, y ∈ {0,1}^n, the minimum number of bits exchanged to compute f deterministically in the worst case, D(f), is known to be bounded from below by the logarithm of the rank of the matrix M_f.
The log rank conjecture proposes that the communication complexity D(f) is also bounded from above by a constant power of the logarithm of the rank of M_f. If the conjecture holds, then D(f) is bounded from above and below by polynomials of log rank(M_f), i.e., D(f) is polynomially related to log rank(M_f). Since the rank of a matrix is computable in time polynomial in the size of the matrix, such an upper bound would allow the matrix's communication complexity to be approximated in polynomial time. Note, however, that the size of the matrix itself is exponential in the size of the input.
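A tiny numerical sketch of the lower bound, using numpy (an assumption; any linear-algebra package would do): the communication matrix of EQ on n-bit inputs is the 2^n × 2^n identity matrix, whose rank is 2^n, so D(EQ) ≥ log2 rank = n, consistent with the fooling-set argument given earlier.

import numpy as np

n = 3
size = 2 ** n
# Communication matrix of EQ on n-bit inputs: M[x, y] = 1 iff x == y,
# i.e. the 2^n x 2^n identity matrix.
M = np.eye(size, dtype=int)

rank = np.linalg.matrix_rank(M)
print(rank, int(np.log2(rank)))   # 8 3  ->  D(EQ) >= log2(rank) = n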
For a randomized protocol, the number of bits exchanged in the worst case, R(f), was conjectured to be polynomially related to the logarithm of the approximate rank of M_f, i.e., the minimum rank of a real matrix that differs from M_f by at most 1/3 in every entry.
Such log rank conjectures are valuable because they reduce the question of a matrix's communication complexity to a question about linearly independent rows (columns) of the matrix. This particular version, called the Log-Approximate-Rank Conjecture, was refuted by Chattopadhyay, Mande and Sherif (2019)[33] using a surprisingly simple counterexample. This reveals that the essence of the communication complexity problem, for example in the EQ case above, is figuring out where in the matrix the inputs are, in order to find out if they are equivalent.
Lower bounds in communication complexity can be used to prove lower bounds in decision tree complexity, VLSI circuits, data structures, streaming algorithms, space–time tradeoffs for Turing machines, and more.[2]
Conitzer and Sandholm[34] studied the communication complexity of some common voting rules, which are essential in political and non-political organizations. Compilation complexity is a closely related notion, which can be seen as single-round communication complexity.
|
https://en.wikipedia.org/wiki/Communication_complexity
|
Transport Layer Security (TLS) is a cryptographic protocol designed to provide communications security over a computer network, such as the Internet. The protocol is widely used in applications such as email, instant messaging, and voice over IP, but its use in securing HTTPS remains the most publicly visible.
The TLS protocol aims primarily to provide security, including privacy (confidentiality), integrity, and authenticity, through the use of cryptography, such as the use of certificates, between two or more communicating computer applications. It runs in the presentation layer and is itself composed of two layers: the TLS record protocol and the TLS handshake protocol.
The closely related Datagram Transport Layer Security (DTLS) is a communications protocol that provides security to datagram-based applications. In technical writing, references to "(D)TLS" are often seen when it applies to both versions.[1]
TLS is a proposed Internet Engineering Task Force (IETF) standard, first defined in 1999; the current version is TLS 1.3, defined in August 2018. TLS builds on the now-deprecated SSL (Secure Sockets Layer) specifications (1994, 1995, 1996) developed by Netscape Communications for adding the HTTPS protocol to their Netscape Navigator web browser.
Client-server applications use the TLS protocol to communicate across a network in a way designed to prevent eavesdropping and tampering.
Since applications can communicate either with or without TLS (or SSL), it is necessary for the client to request that the server set up a TLS connection.[2] One of the main ways of achieving this is to use a different port number for TLS connections. Port 80 is typically used for unencrypted HTTP traffic, while port 443 is the common port used for encrypted HTTPS traffic. Another mechanism is to make a protocol-specific STARTTLS request to the server to switch the connection to TLS – for example, when using the mail and news protocols.
Once the client and server have agreed to use TLS, they negotiate a stateful connection by using a handshaking procedure (see § TLS handshake).[3] The protocols use a handshake with an asymmetric cipher to establish not only cipher settings but also a session-specific shared key with which further communication is encrypted using a symmetric cipher. During this handshake, the client and server agree on various parameters used to establish the connection's security: the client presents the TLS versions and cipher suites it supports; the server picks a version and cipher suite from this list and sends its digital certificate; the client verifies the certificate before proceeding; and the two sides generate the session keys, either by the client encrypting a random number (the premaster secret) with the server's public key, or via an (ephemeral) Diffie–Hellman key exchange, which additionally provides forward secrecy.
This concludes the handshake and begins the secured connection, which is encrypted and decrypted with the session key until the connection closes. If any one of the above steps fails, then the TLS handshake fails and the connection is not created.
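A minimal sketch of the client side of such a connection, using Python's standard-library ssl module (the host name below is a placeholder), performs the handshake and then reports the negotiated parameters:

import socket
import ssl

host = "example.org"                     # placeholder server name
context = ssl.create_default_context()   # system trust store and sensible defaults

with socket.create_connection((host, 443)) as raw_sock:
    # wrap_socket performs the TLS handshake; server_hostname enables SNI and
    # hostname verification against the certificate presented by the server.
    with context.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
        print(tls_sock.version())   # e.g. 'TLSv1.3'
        print(tls_sock.cipher())    # the negotiated cipher suite

If the handshake fails (for example, because no common protocol version or cipher suite exists, or certificate verification fails), wrap_socket raises an exception and no connection is created.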
TLS and SSL do not fit neatly into any single layer of the OSI model or the TCP/IP model.[4][5] TLS runs "on top of some reliable transport protocol (e.g., TCP)",[6]: §1 which would imply that it is above the transport layer. It serves encryption to higher layers, which is normally the function of the presentation layer. However, applications generally use TLS as if it were a transport layer,[4][5] even though applications using TLS must actively control initiating TLS handshakes and handling of exchanged authentication certificates.[6]: §1
When secured by TLS, connections between a client (e.g., a web browser) and a server (e.g., wikipedia.org) will have all of the following properties:[6]: §1 the connection is private (confidential) because a symmetric-key cipher, with session keys negotiated during the handshake, is used to encrypt the data transmitted; the identity of the communicating parties can be authenticated using public-key cryptography; and the connection is reliable because each message transmitted includes a message integrity check using a message authentication code to prevent undetected loss or alteration of the data.
TLS supports many different methods for exchanging keys, encrypting data, and authenticating message integrity. As a result, secure configuration of TLS involves many configurable parameters, and not all choices provide all of the privacy-related properties described above (see the tables below: § Key exchange, § Cipher security, and § Data integrity).
Attempts have been made to subvert aspects of the communications security that TLS seeks to provide, and the protocol has been revised several times to address these security threats. Developers of web browsers have repeatedly revised their products to defend against potential security weaknesses after these were discovered (see TLS/SSL support history of web browsers).
Datagram Transport Layer Security, abbreviated DTLS, is a related communications protocol providing security to datagram-based applications by allowing them to communicate in a way designed[7][8] to prevent eavesdropping, tampering, or message forgery. The DTLS protocol is based on the stream-oriented Transport Layer Security (TLS) protocol and is intended to provide similar security guarantees. However, unlike TLS, it can be used with most datagram-oriented protocols including User Datagram Protocol (UDP), Datagram Congestion Control Protocol (DCCP), Control And Provisioning of Wireless Access Points (CAPWAP), Stream Control Transmission Protocol (SCTP) encapsulation, and Secure Real-time Transport Protocol (SRTP).
As the DTLS protocol datagram preserves the semantics of the underlying transport, the application does not suffer from the delays associated with stream protocols. However, the application has to deal with packet reordering, loss of datagrams, and data larger than the size of a datagram network packet. Because DTLS uses UDP or SCTP rather than TCP, it avoids the TCP meltdown problem[9][10] when being used to create a VPN tunnel.
The original 2006 release of DTLS version 1.0 was not a standalone document. It was given as a series of deltas to TLS 1.1.[11]Similarly the follow-up 2012 release of DTLS is a delta to TLS 1.2. It was given the version number of DTLS 1.2 to match its TLS version. Lastly, the 2022 DTLS 1.3 is a delta to TLS 1.3. Like the two previous versions, DTLS 1.3 is intended to provide "equivalent security guarantees [to TLS 1.3] with the exception of order protection/non-replayability".[12]
Many VPN clients, including Cisco AnyConnect[13] and InterCloud Fabric,[14] OpenConnect,[15] Zscaler Tunnel,[16] F5 Networks Edge VPN Client,[17] and Citrix Systems NetScaler,[18] use DTLS to secure UDP traffic. In addition, all modern web browsers support DTLS-SRTP[19] for WebRTC.
In August 1986, the National Security Agency, the National Bureau of Standards, and the Defense Communications Agency launched a project called the Secure Data Network System (SDNS), with the intent of designing the next generation of secure computer communications network and product specifications to be implemented for applications on public and private internets. It was intended to complement the rapidly emerging new OSI internet standards moving forward both in the U.S. government's GOSIP profiles and in the huge ITU-ISO JTC1 internet effort internationally.[26]
As part of the project, researchers designed a protocol called SP4 ("security protocol in layer 4" of the OSI system). This was later renamed the Transport Layer Security Protocol (TLSP) and subsequently published in 1995 as the international standard ITU-T X.274 | ISO/IEC 10736:1995.[27] Despite the name similarity, this is distinct from today's TLS.
Other efforts towards transport layer security included the Secure Network Programming (SNP) application programming interface (API), which in 1993 explored the approach of having a secure transport layer API closely resembling Berkeley sockets, to facilitate retrofitting pre-existing network applications with security measures. SNP was published and presented at the 1994 USENIX Summer Technical Conference.[28][29] The SNP project was funded by a grant from NSA to Professor Simon Lam at UT-Austin in 1991.[30] Secure Network Programming won the 2004 ACM Software System Award.[31][32] Simon Lam was inducted into the Internet Hall of Fame for "inventing secure sockets and implementing the first secure sockets layer, named SNP, in 1993."[33][34]
Netscape developed the original SSL protocols, and Taher Elgamal, chief scientist at Netscape Communications from 1995 to 1998, has been described as the "father of SSL".[35][36][37][38] SSL version 1.0 was never publicly released because of serious security flaws in the protocol. Version 2.0, released in February 1995, was quickly found to contain a number of security and usability flaws. It used the same cryptographic keys for message authentication and encryption. It had a weak MAC construction that used the MD5 hash function with a secret prefix, making it vulnerable to length extension attacks. It also provided no protection for either the opening handshake or an explicit message close, both of which meant man-in-the-middle attacks could go undetected. Moreover, SSL 2.0 assumed a single service and a fixed domain certificate, conflicting with the widely used feature of virtual hosting in web servers, so most websites were effectively impaired from using SSL.
These flaws necessitated the complete redesign of the protocol to SSL version 3.0.[39][37] Released in 1996, it was produced by Paul Kocher working with Netscape engineers Phil Karlton and Alan Freier, with a reference implementation by Christopher Allen and Tim Dierks of Certicom. Newer versions of SSL/TLS are based on SSL 3.0. The 1996 draft of SSL 3.0 was published by the IETF as a historical document in RFC 6101.
SSL 2.0 was deprecated in 2011 by RFC 6176. In 2014, SSL 3.0 was found to be vulnerable to the POODLE attack that affects all block ciphers in SSL; RC4, the only non-block cipher supported by SSL 3.0, is also feasibly broken as used in SSL 3.0.[40] SSL 3.0 was deprecated in June 2015 by RFC 7568.
TLS 1.0 was first defined in RFC 2246 in January 1999 as an upgrade of SSL version 3.0, and written by Christopher Allen and Tim Dierks of Certicom. As stated in the RFC, "the differences between this protocol and SSL 3.0 are not dramatic, but they are significant enough to preclude interoperability between TLS 1.0 and SSL 3.0". Tim Dierks later wrote that these changes, and the renaming from "SSL" to "TLS", were a face-saving gesture to Microsoft, "so it wouldn't look [like] the IETF was just rubberstamping Netscape's protocol".[41]
The PCI Council suggested that organizations migrate from TLS 1.0 to TLS 1.1 or higher before June 30, 2018.[42][43] In October 2018, Apple, Google, Microsoft, and Mozilla jointly announced they would deprecate TLS 1.0 and 1.1 in March 2020.[20] TLS 1.0 and 1.1 were formally deprecated in RFC 8996 in March 2021.
TLS 1.1 was defined in RFC 4346 in April 2006.[44] It is an update of TLS version 1.0. Significant differences in this version include, among other changes, added protection against cipher-block chaining (CBC) attacks (the implicit initialization vector was replaced with an explicit IV) and a change in how padding errors are handled.
Support for TLS versions 1.0 and 1.1 was widely deprecated by web sites around 2020,[46] disabling access to Firefox versions before 24 and Chromium-based browsers before 29,[47] though third-party fixes can be applied to Netscape Navigator and older versions of Firefox to add TLS 1.2 support.[48]
TLS 1.2 was defined in RFC 5246 in August 2008.[23] It is based on the earlier TLS 1.1 specification. Major differences include the replacement of the MD5/SHA-1 combination in the pseudorandom function and in signed elements with stronger hash functions such as SHA-256, and the addition of support for authenticated encryption with associated data (AEAD) cipher suites such as AES-GCM.
All TLS versions were further refined in RFC 6176 in March 2011, removing their backward compatibility with SSL such that TLS sessions never negotiate the use of Secure Sockets Layer (SSL) version 2.0. As of April 2025 there is no formal date for TLS 1.2 to be deprecated. The specifications for TLS 1.2 were also redefined by the Standards Track document RFC 8446 to keep it as secure as possible; it is now to be seen as a failover protocol, meant only to be negotiated with clients which are unable to talk over TLS 1.3 (the original RFC 5246 definition of TLS 1.2 has since been obsoleted).
TLS 1.3 was defined in RFC 8446 in August 2018.[6] It is based on the earlier TLS 1.2 specification. Major differences from TLS 1.2 include:[49] removing support for weak and less-used algorithms and features (such as MD5 and SHA-224 hashes, RC4, 3DES, and static RSA and static Diffie–Hellman key exchange), requiring key-exchange methods that provide forward secrecy, encrypting all handshake messages after the ServerHello, reducing the full handshake to a single round trip, and adding an optional 0-RTT mode for resumed connections.
Network Security Services (NSS), the cryptography library developed by Mozilla and used by its web browser Firefox, enabled TLS 1.3 by default in February 2017.[51] TLS 1.3 support was subsequently added to Firefox 52.0, which was released in March 2017, but it was not automatically enabled due to compatibility issues for a small number of users.[52] TLS 1.3 was enabled by default in May 2018 with the release of Firefox 60.0.[53]
Google Chrome set TLS 1.3 as the default version for a short time in 2017. It then removed it as the default, due to incompatible middleboxes such as Blue Coat web proxies.[54]
This intolerance of the new version of TLS was an instance of protocol ossification; middleboxes had ossified the protocol's version parameter. As a result, version 1.3 mimics the wire image of version 1.2. This change occurred very late in the design process, only having been discovered during browser deployment.[55] The discovery of this intolerance also led to the prior version-negotiation strategy, where the highest matching version was picked, being abandoned due to unworkable levels of ossification.[56] 'Greasing' an extension point, where one protocol participant claims support for non-existent extensions to ensure that unrecognised-but-actually-existent extensions are tolerated and so to resist ossification, was originally designed for TLS, but it has since been adopted elsewhere.[56]
During the IETF 100 Hackathon, which took place in Singapore in 2017, the TLS Group worked on adapting open-source applications to use TLS 1.3.[57][58] The TLS group was made up of individuals from Japan, the United Kingdom, and Mauritius, via the cyberstorm.mu team.[58] This work was continued at the IETF 101 Hackathon in London[59] and the IETF 102 Hackathon in Montreal.[60]
wolfSSL enabled the use of TLS 1.3 as of version 3.11.1, released in May 2017.[61] As the first commercial TLS 1.3 implementation, wolfSSL 3.11.1 supported Draft 18 and now supports Draft 28,[62] the final version, as well as many older versions. A series of blog posts were published on the performance difference between TLS 1.2 and 1.3.[63]
In September 2018, the popular OpenSSL project released version 1.1.1 of its library, in which support for TLS 1.3 was "the headline new feature".[64]
Support for TLS 1.3 was added to Secure Channel (schannel) for the GA releases of Windows 11 and Windows Server 2022.[65]
The Electronic Frontier Foundation praised TLS 1.3 and expressed concern about the variant protocol Enterprise Transport Security (ETS), which intentionally disables important security measures in TLS 1.3.[66] Originally called Enterprise TLS (eTLS), ETS is a published standard known as ETSI TS 103 523-3, "Middlebox Security Protocol, Part 3: Enterprise Transport Security". It is intended for use entirely within proprietary networks such as banking systems. ETS does not support forward secrecy, so as to allow third-party organizations connected to the proprietary networks to use their private key to monitor network traffic for the detection of malware and to make it easier to conduct audits.[67][68] Despite the claimed benefits, the EFF warned that the loss of forward secrecy could make it easier for data to be exposed, along with saying that there are better ways to analyze traffic.[66]
A digital certificate certifies the ownership of a public key by the named subject of the certificate, and indicates certain expected usages of that key. This allows others (relying parties) to rely upon signatures or on assertions made by the private key that corresponds to the certified public key. Keystores and trust stores can be in various formats, such as .pem, .crt, .pfx, and .jks.
TLS typically relies on a set of trusted third-party certificate authorities to establish the authenticity of certificates. Trust is usually anchored in a list of certificates distributed with user agent software,[69]and can be modified by the relying party.
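As a rough illustration of this trust model, the following Python sketch (the host name is a placeholder) connects using the platform's default trust store and inspects the validated leaf certificate presented by the server:

import socket
import ssl

host = "example.org"                     # placeholder
context = ssl.create_default_context()   # trust anchored in the platform's CA bundle

with socket.create_connection((host, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()          # leaf certificate, already validated
        print(cert["subject"])            # who the certificate was issued to
        print(cert["issuer"])             # which certificate authority signed it
        print(cert["notAfter"])           # expiry date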
According to Netcraft, who monitor active TLS certificates, the market-leading certificate authority (CA) has been Symantec since the beginning of their survey (or VeriSign before the authentication services business unit was purchased by Symantec). As of 2015, Symantec accounted for just under a third of all certificates and 44% of the valid certificates used by the 1 million busiest websites, as counted by Netcraft.[70] In 2017, Symantec sold its TLS/SSL business to DigiCert.[71] In an updated report, it was shown that IdenTrust, DigiCert, and Sectigo have been the top three certificate authorities in terms of market share since May 2019.[72]
As a consequence of choosing X.509 certificates, certificate authorities and a public key infrastructure are necessary to verify the relation between a certificate and its owner, as well as to generate, sign, and administer the validity of certificates. While this can be more convenient than verifying identities via a web of trust, the 2013 mass surveillance disclosures made it more widely known that certificate authorities are a weak point from a security standpoint, allowing man-in-the-middle attacks (MITM) if the certificate authority cooperates (or is compromised).[73][74]
Before a client and server can begin to exchange information protected by TLS, they must securely exchange or agree upon an encryption key and a cipher to use when encrypting data (see § Cipher). Among the methods used for key exchange/agreement are: public and private keys generated with RSA (denoted TLS_RSA in the TLS handshake protocol), Diffie–Hellman (TLS_DH), ephemeral Diffie–Hellman (TLS_DHE), elliptic-curve Diffie–Hellman (TLS_ECDH), ephemeral elliptic-curve Diffie–Hellman (TLS_ECDHE), anonymous Diffie–Hellman (TLS_DH_anon),[23] pre-shared key (TLS_PSK),[75] and Secure Remote Password (TLS_SRP).[76]
The TLS_DH_anon and TLS_ECDH_anon key agreement methods do not authenticate the server or the user and hence are rarely used, because they are vulnerable to man-in-the-middle attacks. Only TLS_DHE and TLS_ECDHE provide forward secrecy, as sketched below.
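The forward-secrecy property of the ephemeral variants can be illustrated with a toy finite-field Diffie–Hellman exchange; the parameters below are far too small for real use and are chosen only to make the arithmetic visible (TLS_DHE/TLS_ECDHE use standardized large groups or elliptic curves).

import secrets

p = 4294967291   # a small prime modulus, for illustration only
g = 5            # generator

def ephemeral_keypair():
    priv = secrets.randbelow(p - 2) + 1      # fresh secret exponent for this session
    pub = pow(g, priv, p)
    return priv, pub

a_priv, a_pub = ephemeral_keypair()          # client side
b_priv, b_pub = ephemeral_keypair()          # server side

# Each side combines its own secret exponent with the other's public value.
shared_client = pow(b_pub, a_priv, p)
shared_server = pow(a_pub, b_priv, p)
assert shared_client == shared_server        # common session secret

# Because the secret exponents are discarded after the session, recording the
# traffic and later compromising the server's long-term key does not reveal
# the session secret: this is the forward-secrecy property of (EC)DHE.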
Public key certificates used during exchange/agreement also vary in the size of the public/private encryption keys used during the exchange, and hence in the robustness of the security provided. In July 2013, Google announced that it would no longer use 1024-bit public keys and would switch instead to 2048-bit keys to increase the security of the TLS encryption it provides to its users, because the encryption strength is directly related to the key size.[77][78]
A message authentication code (MAC) is used for data integrity. HMAC is used for the CBC mode of block ciphers. Authenticated encryption (AEAD), such as GCM or CCM mode, uses an AEAD-integrated MAC and does not use HMAC.[6]: §8.4 The HMAC-based PRF, or HKDF, is used for the TLS handshake.
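A minimal sketch of computing and verifying an HMAC over a record, using only the Python standard library (the key and record contents are placeholders):

import hashlib
import hmac
import secrets

mac_key = secrets.token_bytes(32)        # separate key used only for integrity
record = b"application data fragment"

tag = hmac.new(mac_key, record, hashlib.sha256).digest()

def verify(key: bytes, msg: bytes, received_tag: bytes) -> bool:
    expected = hmac.new(key, msg, hashlib.sha256).digest()
    # compare_digest avoids leaking information through timing differences
    return hmac.compare_digest(expected, received_tag)

assert verify(mac_key, record, tag)
assert not verify(mac_key, record + b" tampered", tag)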
In application design, TLS is usually implemented on top of transport-layer protocols, encrypting all of the protocol-related data of protocols such as HTTP, FTP, SMTP, NNTP, and XMPP.
Historically, TLS has been used primarily with reliable transport protocols such as the Transmission Control Protocol (TCP). However, it has also been implemented with datagram-oriented transport protocols, such as the User Datagram Protocol (UDP) and the Datagram Congestion Control Protocol (DCCP), usage of which has been standardized independently using the term Datagram Transport Layer Security (DTLS).
A primary use of TLS is to secure World Wide Web traffic between a website and a web browser encoded with the HTTP protocol. This use of TLS to secure HTTP traffic constitutes the HTTPS protocol.[93]
Notes
As of March 2025[update], the latest versions of all major web browsers support TLS 1.2 and 1.3 and have them enabled by default, with the exception ofIE 11. TLS 1.0 and 1.1 are disabled by default on the latest versions of all major browsers.
Mitigations against known attacks are not enough yet:
Most SSL and TLS programming libraries arefree and open-source software.
A paper presented at the 2012ACMconference on computer and communications security[98]showed that many applications used some of these SSL libraries incorrectly, leading to vulnerabilities. According to the authors:
"The root cause of most of these vulnerabilities is the terrible design of the APIs to the underlying SSL libraries. Instead of expressing high-level security properties of network tunnels such as confidentiality and authentication, these APIs expose low-level details of the SSL protocol to application developers. As a consequence, developers often use SSL APIs incorrectly, misinterpreting and misunderstanding their manifold parameters, options, side effects, and return values."
TheSimple Mail Transfer Protocol(SMTP) can also be protected by TLS. These applications usepublic key certificatesto verify the identity of endpoints.
TLS can also be used for tunneling an entire network stack to create aVPN, which is the case withOpenVPNandOpenConnect. Many vendors have by now married TLS's encryption and authentication capabilities with authorization. There has also been substantial development since the late 1990s in creating client technology outside of Web-browsers, in order to enable support for client/server applications. Compared to traditionalIPsecVPN technologies, TLS has some inherent advantages in firewall andNATtraversal that make it easier to administer for large remote-access populations.
TLS is also a standard method for protectingSession Initiation Protocol(SIP) application signaling. TLS can be used for providing authentication and encryption of the SIP signaling associated withVoIPand other SIP-based applications.[99]
Significant attacks against TLS/SSL are listed below.
In February 2015, IETF issued an informational RFC[100]summarizing the various known attacks against TLS/SSL.
A vulnerability of the renegotiation procedure was discovered in August 2009 that can lead to plaintext injection attacks against SSL 3.0 and all current versions of TLS.[101]For example, it allows an attacker who can hijack anhttpsconnection to splice their own requests into the beginning of the conversation the client has with the web server. The attacker cannot actually decrypt the client–server communication, so it is different from a typicalman-in-the-middle attack. A short-term fix is for web servers to stop allowing renegotiation, which typically will not require other changes unlessclient certificateauthentication is used. To fix the vulnerability, a renegotiation indication extension was proposed for TLS. It requires the client and server to include and verify information about previous handshakes in any renegotiation handshakes.[102]This extension has become a proposed standard and has been assigned the numberRFC5746. The RFC has been implemented by several libraries.[103][104][105]
A protocoldowngrade attack(also called a version rollback attack) tricks a web server into negotiating connections with previous versions of TLS (such as SSLv2) that have long since been abandoned as insecure.
Previous modifications to the original protocols, likeFalse Start[106](adopted and enabled by Google Chrome[107]) orSnap Start, reportedly introduced limited TLS protocol downgrade attacks[108]or allowed modifications to the cipher suite list sent by the client to the server. In doing so, an attacker might succeed in influencing the cipher suite selection in an attempt to downgrade the cipher suite negotiated to use either a weaker symmetric encryption algorithm or a weaker key exchange.[109]A paper presented at anACMconference on computer and communications securityin 2012 demonstrated that the False Start extension was at risk: in certain circumstances it could allow an attacker to recover the encryption keys offline and to access the encrypted data.[110]
Encryption downgrade attacks can force servers and clients to negotiate a connection using cryptographically weak keys. In 2014, aman-in-the-middleattack called FREAK was discovered affecting theOpenSSLstack, the defaultAndroidweb browser, and someSafaribrowsers.[111]The attack involved tricking servers into negotiating a TLS connection using cryptographically weak 512-bit encryption keys.
Logjam is asecurity exploitdiscovered in May 2015 that exploits the option of using legacy"export-grade"512-bitDiffie–Hellmangroups dating back to the 1990s.[112]It forces susceptible servers to downgrade to cryptographically weak 512-bit Diffie–Hellman groups. An attacker can then deduce the keys the client and server determine using theDiffie–Hellman key exchange.
TheDROWN attackis an exploit that attacks servers supporting contemporary SSL/TLS protocol suites by exploiting their support for the obsolete, insecure, SSLv2 protocol to leverage an attack on connections using up-to-date protocols that would otherwise be secure.[113][114]DROWN exploits a vulnerability in the protocols used and the configuration of the server, rather than any specific implementation error. Full details of DROWN were announced in March 2016, together with a patch for the exploit. At that time, more than 81,000 of the top 1 million most popular websites were among the TLS protected websites that were vulnerable to the DROWN attack.[114]
On September 23, 2011, researchers Thai Duong and Juliano Rizzo demonstrated a proof of concept calledBEAST(Browser Exploit Against SSL/TLS)[115]using aJava appletto violatesame origin policyconstraints, for a long-knowncipher block chaining(CBC) vulnerability in TLS 1.0:[116][117]an attacker observing 2 consecutive ciphertext blocks C0, C1 can test if the plaintext block P1 is equal to x by choosing the next plaintext blockP2 = x ⊕ C0 ⊕ C1; as per CBC operation,C2 = E(C1 ⊕ P2) = E(C1 ⊕ x ⊕ C0 ⊕ C1) = E(C0 ⊕ x), which will be equal to C1 ifx = P1. Practicalexploitshad not been previously demonstrated for thisvulnerability, which was originally discovered byPhillip Rogaway[118]in 2002. The vulnerability of the attack had been fixed with TLS 1.1 in 2006, but TLS 1.1 had not seen wide adoption prior to this attack demonstration.
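The equality check described above can be reproduced in a few lines. The sketch below is a toy simulation only: it assumes the third-party pyca/cryptography package and uses raw AES-ECB in place of the TLS record cipher, showing that the chosen block P2 = x ⊕ C0 ⊕ C1 yields C2 = C1 exactly when the guess x equals P1.

```python
# Toy illustration of the BEAST equality check; assumes the pyca/cryptography
# package and uses raw AES-ECB as the block cipher E (not real TLS records).
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def E(key: bytes, block: bytes) -> bytes:
    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return enc.update(block) + enc.finalize()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = os.urandom(16)
C0 = os.urandom(16)                 # previous ciphertext block (acts as the IV)
P1 = b"secret cookie!!!"            # 16-byte plaintext block the attacker wants
C1 = E(key, xor(C0, P1))            # CBC: C1 = E(C0 xor P1)

def attacker_check(guess: bytes) -> bool:
    P2 = xor(xor(guess, C0), C1)    # chosen next plaintext block
    C2 = E(key, xor(C1, P2))        # CBC: C2 = E(C1 xor P2) = E(C0 xor guess)
    return C2 == C1                 # equal exactly when guess == P1

print(attacker_check(b"wrong guess here"))  # False
print(attacker_check(P1))                   # True
```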
RC4as a stream cipher is immune to the BEAST attack. Therefore, RC4 was widely used as a way to mitigate the BEAST attack on the server side. However, in 2013, researchers found more weaknesses in RC4. Thereafter, enabling RC4 on the server side was no longer recommended.[119]
Chrome and Firefox themselves are not vulnerable to the BEAST attack;[120][121]however, Mozilla updated theirNSSlibraries to mitigate BEAST-like attacks. NSS is used byMozilla FirefoxandGoogle Chrometo implement SSL. Someweb serversthat have a broken implementation of the SSL specification may stop working as a result.[122]
Microsoftreleased Security Bulletin MS12-006 on January 10, 2012, which fixed the BEAST vulnerability by changing the way that the Windows Secure Channel (Schannel) component transmits encrypted network packets from the server end.[123]Users of Internet Explorer (prior to version 11) that run on older versions of Windows (Windows 7,Windows 8andWindows Server 2008 R2) can restrict use of TLS to 1.1 or higher.
Applefixed BEAST vulnerability by implementing 1/n-1 split and turning it on by default inOS X Mavericks, released on October 22, 2013.[124]
The authors of the BEAST attack are also the creators of the laterCRIMEattack, which can allow an attacker to recover the content of web cookies whendata compressionis used along with TLS.[125][126]When used to recover the content of secretauthentication cookies, it allows an attacker to performsession hijackingon an authenticated web session.
While the CRIME attack was presented as a general attack that could work effectively against a large number of protocols, including but not limited to TLS, and application-layer protocols such asSPDYorHTTP, only exploits against TLS and SPDY were demonstrated and largely mitigated in browsers and servers. The CRIME exploit againstHTTP compressionhas not been mitigated at all, even though the authors of CRIME have warned that this vulnerability might be even more widespread than SPDY and TLS compression combined. In 2013 a new instance of the CRIME attack against HTTP compression, dubbedBREACH, was announced. Based on the CRIME attack a BREACH attack can extract login tokens, email addresses or other sensitive information from TLS encrypted web traffic in as little as 30 seconds (depending on the number of bytes to be extracted), provided the attacker tricks the victim into visiting a malicious web link or is able to inject content into valid pages the user is visiting (e.g., via a wireless network under the control of the attacker).[127]All versions of TLS and SSL are at risk from BREACH regardless of the encryption algorithm or cipher used.[128]Unlike previous instances of CRIME, which can be successfully defended against by turning off TLS compression or SPDY header compression, BREACH exploits HTTP compression which cannot realistically be turned off, as virtually all web servers rely upon it to improve data transmission speeds for users.[127]This is a known limitation of TLS as it is susceptible tochosen-plaintext attackagainst the application-layer data it was meant to protect.
Earlier TLS versions were vulnerable against thepadding oracle attackdiscovered in 2002. A novel variant, called theLucky Thirteen attack, was published in 2013.
Some experts[90]also recommended avoidingtriple DESCBC. Since RC4 and Triple-DES are the last supported ciphers inWindows XP's SSL/TLS library (used, for example, by Internet Explorer on Windows XP), and since RC4 is now deprecated (see discussion ofRC4 attacks), it is difficult to support any version of SSL/TLS for any program using this library on XP.
A fix was released as the Encrypt-then-MAC extension to the TLS specification, released asRFC7366.[129]The Lucky Thirteen attack can be mitigated in TLS 1.2 by using only AES_GCM ciphers; AES_CBC remains vulnerable. SSL may safeguard email, VoIP, and other types of communications over insecure networks in addition to its primary use case of secure data transmission between a client and the server.[2]
On October 14, 2014, Google researchers published a vulnerability in the design of SSL 3.0, which makesCBC mode of operationwith SSL 3.0 vulnerable to apadding attack(CVE-2014-3566). They named this attackPOODLE(Padding Oracle On Downgraded Legacy Encryption). On average, attackers only need to make 256 SSL 3.0 requests to reveal one byte of encrypted messages.[96]
Although this vulnerability only exists in SSL 3.0 and most clients and servers support TLS 1.0 and above, all major browsers voluntarily downgrade to SSL 3.0 if handshakes with newer versions of TLS fail, unless the browser provides an option for a user or administrator to disable SSL 3.0 and that option has been used[citation needed]. Therefore, the man-in-the-middle can first conduct aversion rollback attackand then exploit this vulnerability.[96]
On December 8, 2014, a variant of POODLE was announced that impacts TLS implementations that do not properly enforce padding byte requirements.[130]
Despite the existence of attacks onRC4that broke its security, cipher suites in SSL and TLS that were based on RC4 were still considered secure prior to 2013 based on the way in which they were used in SSL and TLS. In 2011, the RC4 suite was actually recommended as a workaround for theBEASTattack.[131]New forms of attack disclosed in March 2013 conclusively demonstrated the feasibility of breaking RC4 in TLS, suggesting it was not a good workaround for BEAST.[95]An attack scenario was proposed by AlFardan, Bernstein, Paterson, Poettering and Schuldt that used newly discovered statistical biases in the RC4 key table[132]to recover parts of the plaintext with a large number of TLS encryptions.[133][134]An attack on RC4 in TLS and SSL that requires 13 × 2^20 encryptions to break RC4 was unveiled on 8 July 2013 and later described as "feasible" in the accompanying presentation at aUSENIXSecurity Symposium in August 2013.[135][136]In July 2015, subsequent improvements in the attack made it increasingly practical to defeat the security of RC4-encrypted TLS.[137]
As many modern browsers have been designed to defeat BEAST attacks (except Safari for Mac OS X 10.7 or earlier, for iOS 6 or earlier, and for Windows; see§ Web browsers), RC4 is no longer a good choice for TLS 1.0. The CBC ciphers which were affected by the BEAST attack in the past have become a more popular choice for protection.[90]Mozilla and Microsoft recommend disabling RC4 where possible.[138][139]RFC7465prohibits the use of RC4 cipher suites in all versions of TLS.
On September 1, 2015, Microsoft, Google, and Mozilla announced that RC4 cipher suites would be disabled by default in their browsers (Microsoft Edge [Legacy],Internet Explorer 11on Windows 7/8.1/10,Firefox, andChrome) in early 2016.[140][141][142]
A TLS (logout) truncation attack blocks a victim's account logout requests so that the user unknowingly remains logged into a web service. When the request to sign out is sent, the attacker injects an unencryptedTCPFIN message (no more data from sender) to close the connection. The server therefore does not receive the logout request and is unaware of the abnormal termination.[143]
Published in July 2013,[144][145]the attack causes web services such asGmailandHotmailto display a page that informs the user that they have successfully signed-out, while ensuring that the user's browser maintains authorization with the service, allowing an attacker with subsequent access to the browser to access and take over control of the user's logged-in account. The attack does not rely on installing malware on the victim's computer; attackers need only place themselves between the victim and the web server (e.g., by setting up a rogue wireless hotspot).[143]Exploiting it does, however, require the attacker to subsequently gain access to the victim's browser.
Another possibility arises when using FTP: the data connection can receive a forged FIN in the data stream, and if the protocol rules for exchanging close_notify alerts are not adhered to, a file can be truncated.
In February 2013 two researchers from Royal Holloway, University of London discovered a timing attack[146]which allowed them to recover (parts of the) plaintext from a DTLS connection using the OpenSSL or GnuTLS implementation of DTLS whenCipher Block Chainingmode encryption was used.
This attack, discovered in mid-2016, exploits weaknesses in theWeb Proxy Autodiscovery Protocol(WPAD) to expose the URL that a web user is attempting to reach via a TLS-enabled web link.[147]Disclosure of a URL can violate a user's privacy, not only because of the website accessed, but also because URLs are sometimes used to authenticate users. Document sharing services, such as those offered by Google and Dropbox, also work by sending a user a security token that is included in the URL. An attacker who obtains such URLs may be able to gain full access to a victim's account or data.
The exploit works against almost all browsers and operating systems.
The Sweet32 attack breaks all 64-bit block ciphers used in CBC mode as used in TLS by exploiting abirthday attackand either aman-in-the-middle attackor injection of a maliciousJavaScriptinto a web page. The purpose of the man-in-the-middle attack or the JavaScript injection is to allow the attacker to capture enough traffic to mount a birthday attack.[148]
TheHeartbleedbug is a serious vulnerability specific to the implementation of SSL/TLS in the popularOpenSSLcryptographic software library, affecting versions 1.0.1 to 1.0.1f. This weakness, reported in April 2014, allows attackers to stealprivate keysfrom servers that should normally be protected.[149]The Heartbleed bug allows anyone on the Internet to read the memory of the systems protected by the vulnerable versions of the OpenSSL software. This compromises the secret private keys associated with thepublic certificatesused to identify the service providers and to encrypt the traffic, the names and passwords of the users and the actual content. This allows attackers to eavesdrop on communications, steal data directly from the services and users and to impersonate services and users.[150]The vulnerability is caused by abuffer over-readbug in the OpenSSL software, rather than a defect in the SSL or TLS protocol specification.
In September 2014, a variant ofDaniel Bleichenbacher's PKCS#1 v1.5 RSA Signature Forgery vulnerability[151]was announced by Intel Security Advanced Threat Research. This attack, dubbed BERserk, is a result of incomplete ASN.1 length decoding of public key signatures in some SSL implementations, and allows a man-in-the-middle attack by forging a public key signature.[152]
In February 2015, after media reported the hidden pre-installation ofsuperfishadware on some Lenovo notebooks,[153]a researcher found a trusted root certificate on affected Lenovo machines to be insecure, as the keys could easily be accessed using the company name, Komodia, as a passphrase.[154]The Komodia library was designed to intercept client-side TLS/SSL traffic for parental control and surveillance, but it was also used in numerous adware programs, including Superfish, that were often installed without the computer user's knowledge. In turn, thesepotentially unwanted programsinstalled the corrupt root certificate, allowing attackers to completely control web traffic and confirm false websites as authentic.
In May 2016, it was reported that dozens of Danish HTTPS-protected websites belonging toVisa Inc.were vulnerable to attacks allowing hackers to inject malicious code and forged content into the browsers of visitors.[155]The attacks worked because the TLS implementation used on the affected servers incorrectly reused random numbers (nonces) that are intended to be used only once, ensuring that eachTLS handshakeis unique.[155]
In February 2017, an implementation error caused by a single mistyped character in code used to parse HTML created a buffer overflow error onCloudflareservers. Similar in its effects to the Heartbleed bug discovered in 2014, this overflow error, widely known asCloudbleed, allowed unauthorized third parties to read data in the memory of programs running on the servers—data that should otherwise have been protected by TLS.[156]
As of July 2021[update], the Trustworthy Internet Movement estimated the ratio of websites that are vulnerable to TLS attacks.[94]
Forward secrecy is a property of cryptographic systems which ensures that a session key derived from a set of public and private keys will not be compromised if one of the private keys is compromised in the future.[157]Without forward secrecy, if the server's private key is compromised, not only will all future TLS-encrypted sessions using that server certificate be compromised, but also any past sessions that used it as well (provided that these past sessions were intercepted and stored at the time of transmission).[158]An implementation of TLS can provide forward secrecy by requiring the use of ephemeralDiffie–Hellman key exchangeto establish session keys, and some notable TLS implementations do so exclusively: e.g.,Gmailand other Google HTTPS services that useOpenSSL.[159]However, many clients and servers supporting TLS (including browsers and web servers) are not configured to implement such restrictions.[160][161]In practice, unless a web service uses Diffie–Hellman key exchange to implement forward secrecy, all of the encrypted web traffic to and from that service can be decrypted by a third party if it obtains the server's master (private) key; e.g., by means of a court order.[162]
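As a hedged sketch of the server-side configuration this implies, Python's standard ssl module can be told to offer only ephemeral ECDHE suites for TLS 1.2; the certificate and key paths below are placeholders, and the exact cipher strings accepted depend on the local OpenSSL.

```python
import ssl

# Sketch: restrict a TLS 1.2 server context to forward-secret ECDHE suites.
# "server.crt" and "server.key" are placeholder paths, not real files.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain(certfile="server.crt", keyfile="server.key")
ctx.set_ciphers("ECDHE+AESGCM")  # ephemeral key exchange only (OpenSSL syntax)
# TLS 1.3 suites (negotiated separately) always use an ephemeral key exchange.
```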
Even where Diffie–Hellman key exchange is implemented, server-side session management mechanisms can impact forward secrecy. The use ofTLS session tickets(a TLS extension) causes the session to be protected by AES128-CBC-SHA256 regardless of any other negotiated TLS parameters, including forward secrecy ciphersuites, and the long-lived TLS session ticket keys defeat the attempt to implement forward secrecy.[163][164][165]Stanford University research in 2014 also found that of 473,802 TLS servers surveyed, 82.9% of the servers deploying ephemeral Diffie–Hellman (DHE) key exchange to support forward secrecy were using weak Diffie–Hellman parameters. These weak parameter choices could potentially compromise the effectiveness of the forward secrecy that the servers sought to provide.[166]
Since late 2011, Google has provided forward secrecy with TLS by default to users of itsGmailservice, along withGoogle Docsand encrypted search, among other services.[167]Since November 2013,Twitterhas provided forward secrecy with TLS to users of its service.[168]As of August 2019[update], about 80% of TLS-enabled websites are configured to use cipher suites that provide forward secrecy to most web browsers.[94]
TLS interception (orHTTPSinterception if applied particularly to that protocol) is the practice of intercepting an encrypted data stream in order to decrypt it, read and possibly manipulate it, and then re-encrypt it and send the data on its way again. This is done by way of a "transparent proxy": the interception software terminates the incoming TLS connection, inspects the HTTP plaintext, and then creates a new TLS connection to the destination.[169]
TLS/HTTPS interception is used as aninformation securitymeasure by network operators in order to be able to scan for and protect against the intrusion of malicious content into the network, such ascomputer virusesand othermalware.[169]Such content could otherwise not be detected as long as it is protected by encryption, which is increasingly the case as a result of the routine use of HTTPS and other secure protocols.
A significant drawback of TLS/HTTPS interception is that it introduces new security risks of its own. One notable limitation is that it provides a point where network traffic is available unencrypted thus giving attackers an incentive to attack this point in particular in order to gain access to otherwise secure content. The interception also allows the network operator, or persons who gain access to its interception system, to performman-in-the-middle attacksagainst network users. A 2017 study found that "HTTPS interception has become startlingly widespread, and that interception products as a class have a dramatically negative impact on connection security".[169]
The TLS protocol exchangesrecords, which encapsulate the data to be exchanged in a specific format (see below). Each record can be compressed, padded, appended with amessage authentication code(MAC), or encrypted, all depending on the state of the connection. Each record has acontent typefield that designates the type of data encapsulated, a length field and a TLS version field. The data encapsulated may be control or procedural messages of the TLS itself, or simply the application data needed to be transferred by TLS. The specifications (cipher suite, keys etc.) required to exchange application data by TLS, are agreed upon in the "TLS handshake" between the client requesting the data and the server responding to requests. The protocol therefore defines both the structure of payloads transferred in TLS and the procedure to establish and monitor the transfer.
When the connection starts, the record encapsulates a "control" protocol – the handshake messaging protocol (content type22). This protocol is used to exchange all the information required by both sides for the exchange of the actual application data by TLS. It defines the format of messages and the order of their exchange. These may vary according to the demands of the client and server – i.e., there are several possible procedures to set up the connection. This initial exchange results in a successful TLS connection (both parties ready to transfer application data with TLS) or an alert message (as specified below).
A typical connection example follows, illustrating ahandshakewhere the server (but not the client) is authenticated by its certificate:
The followingfullexample shows a client being authenticated (in addition to the server as in the example above; seemutual authentication) via TLS using certificates exchanged between both peers.
Public key operations (e.g., RSA) are relatively expensive in terms of computational power. TLS provides a secure shortcut in the handshake mechanism to avoid these operations: resumed sessions. Resumed sessions are implemented using session IDs or session tickets.
Apart from the performance benefit, resumed sessions can also be used forsingle sign-on, as it guarantees that both the original session and any resumed session originate from the same client. This is of particular importance for theFTP over TLS/SSLprotocol, which would otherwise suffer from a man-in-the-middle attack in which an attacker could intercept the contents of the secondary data connections.[172]
The TLS 1.3 handshake was condensed to only one round trip compared to the two round trips required in previous versions of TLS/SSL.
To start the handshake, the client guesses which key exchange algorithm will be selected by the server and sends aClientHellomessage to the server containing a list of supported ciphers (in order of the client's preference) and public keys for some or all of its key exchange guesses. If the client successfully guesses the key exchange algorithm, 1 round trip is eliminated from the handshake. After receiving theClientHello, the server selects a cipher and sends back aServerHellowith its own public key, followed by serverCertificateandFinishedmessages.[173]
After the client receives the server's finished message, it now is coordinated with the server on which cipher suite to use.[174]
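For illustration, the sketch below uses Python's standard ssl module to perform a TLS 1.3 handshake as a client and then reports the negotiated version and cipher suite; example.com is a placeholder host.

```python
import socket
import ssl

# Sketch: perform a client-side TLS handshake (TLS 1.3 only) and report
# what was negotiated. "example.com" is a placeholder host name.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3

with socket.create_connection(("example.com", 443)) as raw:
    with ctx.wrap_socket(raw, server_hostname="example.com") as tls:
        print(tls.version())   # e.g. "TLSv1.3"
        print(tls.cipher())    # (suite name, protocol, secret bits)
```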
In an ordinaryfullhandshake, the server sends asession idas part of theServerHellomessage. The client associates thissession idwith the server's IP address and TCP port, so that when the client connects again to that server, it can use thesession idto shortcut the handshake. In the server, thesession idmaps to the cryptographic parameters previously negotiated, specifically the "master secret". Both sides must have the same "master secret" or the resumed handshake will fail (this prevents an eavesdropper from using asession id). The random data in theClientHelloandServerHellomessages virtually guarantee that the generated connection keys will be different from those in the previous connection. In the RFCs, this type of handshake is called anabbreviatedhandshake. It is also described in the literature as arestarthandshake.
RFC5077extends TLS via use of session tickets, instead of session IDs. It defines a way to resume a TLS session without requiring that session-specific state is stored at the TLS server.
When using session tickets, the TLS server stores its session-specific state in a session ticket and sends the session ticket to the TLS client for storing. The client resumes a TLS session by sending the session ticket to the server, and the server resumes the TLS session according to the session-specific state in the ticket. The session ticket is encrypted and authenticated by the server, and the server verifies its validity before using its contents.
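A rough client-side sketch of ticket-based resumption with Python's ssl module follows; the session object saved from the first connection may carry the ticket, and example.com is a placeholder host.

```python
import socket
import ssl

# Sketch of client-side session resumption: the session object returned by
# the first connection (which may carry a session ticket) is handed to the
# second connection. "example.com" is a placeholder host.
ctx = ssl.create_default_context()

with socket.create_connection(("example.com", 443)) as raw:
    with ctx.wrap_socket(raw, server_hostname="example.com") as tls:
        saved_session = tls.session          # state from the full handshake

with socket.create_connection(("example.com", 443)) as raw:
    with ctx.wrap_socket(raw, server_hostname="example.com",
                         session=saved_session) as tls:
        print(tls.session_reused)            # True if the server resumed it
```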
One particular weakness of this method withOpenSSLis that it always limits encryption and authentication security of the transmitted TLS session ticket toAES128-CBC-SHA256, no matter what other TLS parameters were negotiated for the actual TLS session.[164]This means that the state information (the TLS session ticket) is not as well protected as the TLS session itself. Of particular concern is OpenSSL's storage of the keys in an application-wide context (SSL_CTX), i.e. for the life of the application, and not allowing for re-keying of theAES128-CBC-SHA256TLS session tickets without resetting the application-wide OpenSSL context (which is uncommon, error-prone and often requires manual administrative intervention).[165][163]
This is the general format of all TLS records.
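Concretely, each record begins with a five-byte header: a one-byte content type, a two-byte legacy protocol version, and a two-byte payload length. The sketch below parses such a header purely as an illustration of the layout.

```python
import struct

# Sketch: parse the five-byte TLS record header (content type, legacy
# version, payload length) from the front of a byte string.
CONTENT_TYPES = {20: "change_cipher_spec", 21: "alert",
                 22: "handshake", 23: "application_data"}

def parse_record_header(data: bytes):
    content_type, major, minor, length = struct.unpack("!BBBH", data[:5])
    return {
        "type": CONTENT_TYPES.get(content_type, content_type),
        "legacy_version": f"{major}.{minor}",   # e.g. 3.3 means TLS 1.2
        "length": length,                        # size of the record payload
    }

# A handshake record header as it might appear on the wire (payload omitted).
print(parse_record_header(bytes([22, 3, 3, 0, 0x2A])))
```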
Most messages exchanged during the setup of the TLS session are based on this record, unless an error or warning occurs and needs to be signaled by an Alert protocol record (see below), or the encryption mode of the session is modified by another record (see ChangeCipherSpec protocol below).
Note that multiple handshake messages may be combined within one record.
This record should normally not be sent during normal handshaking or application exchanges. However, this message can be sent at any time during the handshake and up to the closure of the session. If this is used to signal a fatal error, the session will be closed immediately after sending this record, so this record is used to give a reason for the closure. If the alert level is flagged as a warning, the remote peer can decide to close the session if it determines that the session is not reliable enough for its needs (before doing so, the remote may also send its own signal).
From the application protocol point of view, TLS belongs to a lower layer, although the TCP/IP model is too coarse to show it. This means that the TLS handshake is usually (except in theSTARTTLScase) performed before the application protocol can start. With thename-based virtual serverfeature provided by the application layer, all co-hosted virtual servers share the same certificate because the server has to select and send a certificate immediately after the ClientHello message. This is a big problem in hosting environments because it means either sharing the same certificate among all customers or using a different IP address for each of them.
There are two known workarounds provided byX.509:
To provide the server name,RFC4366Transport Layer Security (TLS) Extensions allow clients to include aServer Name Indicationextension (SNI) in the extended ClientHello message. This extension hints to the server immediately which name the client wishes to connect to, so the server can select the appropriate certificate to send to the client.
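As a rough sketch of the server side of this (host names and certificate paths below are placeholders), Python's ssl module allows an sni_callback that switches the connection to a per-name context, and hence a per-name certificate.

```python
import ssl

# Sketch of server-side SNI dispatch; host names and file paths below are
# placeholders. Each co-hosted name gets its own context/certificate.
contexts = {}
for name in ("www.example.com", "mail.example.com"):
    c = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    c.load_cert_chain(certfile=f"{name}.crt", keyfile=f"{name}.key")
    contexts[name] = c

default_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
default_ctx.load_cert_chain(certfile="default.crt", keyfile="default.key")

def choose_certificate(ssl_socket, server_name, initial_context):
    # Called during the handshake with the name from the client's SNI
    # extension; switching the socket's context selects the certificate.
    if server_name in contexts:
        ssl_socket.context = contexts[server_name]

default_ctx.sni_callback = choose_certificate
# default_ctx would then be used to wrap connections from the listening socket.
```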
RFC2817also documents a method to implement name-based virtual hosting by upgrading HTTP to TLS via anHTTP/1.1 Upgrade header. Normally this is to securely implement HTTP over TLS within the main "http"URI scheme(which avoids forking the URI space and reduces the number of used ports), however, few implementations currently support this.[citation needed]
The current approved version of (D)TLS is version 1.3, which is specified in:
The current standards replace these former versions, which are now considered obsolete:
OtherRFCssubsequently extended (D)TLS.
Extensions to (D)TLS 1.3 include:
Extensions to (D)TLS 1.2 include:
Extensions to (D)TLS 1.1 include:
Extensions to TLS 1.0 include:
https://en.wikipedia.org/wiki/Transport_Layer_Security#TLS_versions
Apassword, sometimes called apasscode, is secret data, typically a string of characters, usually used to confirm a user's identity. Traditionally, passwords were expected to bememorized,[1]but the large number of password-protected services that a typical individual accesses can make memorization of unique passwords for each service impractical.[2]Using the terminology of the NIST Digital Identity Guidelines,[3]the secret is held by a party called theclaimantwhile the party verifying the identity of the claimant is called theverifier. When the claimant successfully demonstrates knowledge of the password to the verifier through an establishedauthentication protocol,[4]the verifier is able to infer the claimant's identity.
In general, a password is an arbitrarystringofcharactersincluding letters, digits, or other symbols. If the permissible characters are constrained to be numeric, the corresponding secret is sometimes called apersonal identification number(PIN).
Despite its name, a password does not need to be an actual word; indeed, a non-word (in the dictionary sense) may be harder to guess, which is a desirable property of passwords. A memorized secret consisting of a sequence of words or other text separated by spaces is sometimes called apassphrase. A passphrase is similar to a password in usage, but the former is generally longer for added security.[5]
Passwords have been used since ancient times. Sentries would challenge those wishing to enter an area to supply a password orwatchword, and would only allow a person or group to pass if they knew the password.Polybiusdescribes the system for the distribution of watchwords in theRoman militaryas follows:
The way in which they secure the passing round of the watchword for the night is as follows: from the tenthmanipleof each class of infantry and cavalry, the maniple which is encamped at the lower end of the street, a man is chosen who is relieved from guard duty, and he attends every day at sunset at the tent of thetribune, and receiving from him the watchword—that is a wooden tablet with the word inscribed on it – takes his leave, and on returning to his quarters passes on the watchword and tablet before witnesses to the commander of the next maniple, who in turn passes it to the one next to him. All do the same until it reaches the first maniples, those encamped near the tents of the tribunes. These latter are obliged to deliver the tablet to the tribunes before dark. So that if all those issued are returned, the tribune knows that the watchword has been given to all the maniples, and has passed through all on its way back to him. If any one of them is missing, he makes inquiry at once, as he knows by the marks from what quarter the tablet has not returned, and whoever is responsible for the stoppage meets with the punishment he merits.[6]
Passwords in military use evolved to include not just a password, but a password and a counterpassword; for example in the opening days of theBattle of Normandy, paratroopers of the U.S. 101st Airborne Division used a password—flash—which was presented as a challenge, and answered with the correct response—thunder. The challenge and response were changed every three days. American paratroopers also famously used a device known as a "cricket" onD-Dayin place of a password system as a temporarily unique method of identification; one metallic click given by the device in lieu of a password was to be met by two clicks in reply.[7]
Passwords have been used with computers since the earliest days of computing. TheCompatible Time-Sharing System(CTSS), an operating system introduced atMITin 1961, was the first computer system to implement password login.[8][9]CTSS had a LOGIN command that requested a user password. "After typing PASSWORD, the system turns off the printing mechanism, if possible, so that the user may type in his password with privacy."[10]In the early 1970s,Robert Morrisdeveloped a system of storing login passwords in a hashed form as part of theUnixoperating system. The system was based on a simulated Hagelin rotor crypto machine, and first appeared in 6th Edition Unix in 1974. A later version of his algorithm, known ascrypt(3), used a 12-bitsaltand invoked a modified form of theDESalgorithm 25 times to reduce the risk of pre-computeddictionary attacks.[11]
In modern times,user namesand passwords are commonly used by people during alog inprocess thatcontrols accessto protected computeroperating systems,mobile phones,cable TVdecoders,automated teller machines(ATMs), etc. A typicalcomputer userhas passwords for multiple purposes: logging into accounts, retrievinge-mail, accessing applications, databases, networks, web sites, and even reading the morning newspaper online.
The easier a password is for the owner to remember, the easier it generally is for anattackerto guess.[12]However, passwords that are difficult to remember may also reduce the security of a system because (a) users might need to write down or electronically store the password, (b) users will need frequent password resets and (c) users are more likely to re-use the same password across different accounts. Similarly, the more stringent the password requirements, such as "have a mix of uppercase and lowercase letters and digits" or "change it monthly", the greater the degree to which users will subvert the system.[13]Others argue longer passwords provide more security (e.g.,entropy) than shorter passwords with a wide variety of characters.[14]
InThe Memorability and Security of Passwords,[15]Jeff Yan et al. examine the effect of advice given to users about a good choice of password. They found that passwords based on thinking of a phrase and taking the first letter of each word are just as memorable as naively selected passwords, and just as hard to crack as randomly generated passwords.
Combining two or more unrelated words and altering some of the letters to special characters or numbers is another good method,[16]but a single dictionary word is not. Having a personally designedalgorithmfor generating obscure passwords is another good method.[17]
However, asking users to remember a password consisting of a "mix of uppercase and lowercase characters" is similar to asking them to remember a sequence of bits: hard to remember, and only a little bit harder to crack (e.g. only 128 times harder to crack for 7-letter passwords, less if the user simply capitalises one of the letters). Asking users to use "both letters and digits" will often lead to easy-to-guess substitutions such as 'E' → '3' and 'I' → '1', substitutions that are well known to attackers. Similarly typing the password one keyboard row higher is a common trick known to attackers.[18]
In 2013, Google released a list of the most common password types, all of which are considered insecure because they are too easy to guess (especially after researching an individual on social media), which includes:[19]
Traditional advice to memorize passwords and never write them down has become a challenge because of the sheer number of passwords users of computers and the internet are expected to maintain. One survey concluded that the average user has around 100 passwords.[2]To manage the proliferation of passwords, some users employ the same password for multiple accounts, a dangerous practice since a data breach in one account could compromise the rest. Less risky alternatives include the use ofpassword managers,single sign-onsystems and simply keeping paper lists of less critical passwords.[20]Such practices can reduce the number of passwords that must be memorized, such as the password manager's master password, to a more manageable number.
The security of a password-protected system depends on several factors. The overall system must be designed for sound security, with protection againstcomputer viruses,man-in-the-middle attacksand the like. Physical security issues are also a concern, from deterringshoulder surfingto more sophisticated physical threats such as video cameras and keyboard sniffers. Passwords should be chosen so that they are hard for an attacker to guess and hard for an attacker to discover using any of the available automatic attack schemes.[21]
Nowadays, it is a common practice for computer systems to hide passwords as they are typed. The purpose of this measure is to prevent bystanders from reading the password; however, some argue that this practice may lead to mistakes and stress, encouraging users to choose weak passwords. As an alternative, users should have the option to show or hide passwords as they type them.[21]
Effective access control provisions may force extreme measures on criminals seeking to acquire a password or biometric token.[22]Less extreme measures includeextortion,rubber hose cryptanalysis, andside channel attack.
Some specific password management issues that must be considered when thinking about, choosing, and handling, a password follow.
The rate at which an attacker can submit guessed passwords to the system is a key factor in determining system security. Some systems impose a time-out of several seconds after a small number (e.g., three) of failed password entry attempts, also known as throttling.[3]: 63B Sec 5.2.2In the absence of other vulnerabilities, such systems can be effectively secure with relatively simple passwords if they have been well chosen and are not easily guessed.[23]
Many systems store acryptographic hashof the password. If an attacker gets access to the file of hashed passwords guessing can be done offline, rapidly testing candidate passwords against the true password's hash value. In the example of a web-server, an online attacker can guess only at the rate at which the server will respond, while an off-line attacker (who gains access to the file) can guess at a rate limited only by the hardware on which the attack is running and the strength of the algorithm used to create the hash.
Passwords that are used to generate cryptographic keys (e.g., fordisk encryptionorWi-Fisecurity) can also be subjected to high rate guessing, known aspassword cracking. Lists of common passwords are widely available and can make password attacks efficient. Security in such situations depends on using passwords or passphrases of adequate complexity, making such an attack computationally infeasible for the attacker. Some systems, such asPGPandWi-Fi WPA, apply a computation-intensive hash to the password to slow such attacks, in a technique known askey stretching.
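As a minimal sketch of key stretching with Python's standard library (the passphrase and iteration count are illustrative values only), PBKDF2 derives a fixed-length key from a passphrase and salt at a deliberately high computational cost.

```python
import hashlib
import os

# Sketch: derive a 256-bit key from a passphrase with PBKDF2-HMAC-SHA256.
# The passphrase and iteration count are illustrative values only.
passphrase = b"correct horse battery staple"
salt = os.urandom(16)                      # stored alongside the ciphertext
key = hashlib.pbkdf2_hmac("sha256", passphrase, salt, 600_000, dklen=32)
print(key.hex())
```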
An alternative to limiting the rate at which an attacker can make guesses on a password is to limit the total number of guesses that can be made. The password can be disabled, requiring a reset, after a small number of consecutive bad guesses (say 5); and the user may be required to change the password after a larger cumulative number of bad guesses (say 30), to prevent an attacker from making an arbitrarily large number of bad guesses by interspersing them between good guesses made by the legitimate password owner.[24]Attackers may conversely use knowledge of this mitigation to implement adenial of service attackagainst the user by intentionally locking the user out of their own device; this denial of service may open other avenues for the attacker to manipulate the situation to their advantage viasocial engineering.
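A toy sketch of such a policy follows; the thresholds of 5 (disable) and 30 (force a password change) are taken from the text above, and the Account class is hypothetical.

```python
# Toy sketch of the guess-limiting policy described above; thresholds of
# 5 (disable) and 30 (force password change) follow the text, and the
# Account class is hypothetical.
from dataclasses import dataclass

@dataclass
class Account:
    consecutive_failures: int = 0
    cumulative_failures: int = 0
    disabled: bool = False
    must_change_password: bool = False

def record_attempt(acct: Account, success: bool) -> None:
    if success:
        acct.consecutive_failures = 0
        return
    acct.consecutive_failures += 1
    acct.cumulative_failures += 1
    if acct.consecutive_failures >= 5:
        acct.disabled = True                 # requires an explicit reset
    if acct.cumulative_failures >= 30:
        acct.must_change_password = True
```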
Some computer systems store user passwords asplaintext, against which to compare user logon attempts. If an attacker gains access to such an internal password store, all passwords—and so all user accounts—will be compromised. If some users employ the same password for accounts on different systems, those will be compromised as well.
More secure systems store each password in a cryptographically protected form, so access to the actual password will still be difficult for a snooper who gains internal access to the system, while validation of user access attempts remains possible. The most secure do not store passwords at all, but a one-way derivation, such as apolynomial,modulus, or an advancedhash function.[14]Roger Needhaminvented the now-common approach of storing only a "hashed" form of the plaintext password.[25][26]When a user types in a password on such a system, the password handling software runs through a cryptographic hash algorithm, and if the hash value generated from the user's entry matches the hash stored in the password database, the user is permitted access. The hash value is created by applying a cryptographic hash function to a string consisting of the submitted password and, in multiple implementations, another value known as asalt. A salt prevents attackers from easily building a list of hash values for common passwords and prevents password cracking efforts from scaling across all users.[27]MD5andSHA1are frequently used cryptographic hash functions, but they are not recommended for password hashing unless they are used as part of a larger construction such as inPBKDF2.[28]
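A hedged sketch of salted, stretched password storage and verification using Python's standard library follows; the scrypt cost parameters are illustrative, not a recommendation, and real systems would wrap this in broader policy.

```python
import hashlib
import hmac
import os

# Sketch: store a salted, stretched verifier and check a login attempt.
# The scrypt cost parameters below are illustrative, not a recommendation.
def make_verifier(password: str):
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def check_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = make_verifier("hunter2")
print(check_password("hunter2", salt, digest))   # True
print(check_password("letmein", salt, digest))   # False
```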
The stored data—sometimes called the "password verifier" or the "password hash"—is often stored in Modular Crypt Format or RFC 2307 hash format, sometimes in the/etc/passwdfile or the/etc/shadowfile.[29]
The main storage methods for passwords are plain text, hashed, hashed and salted, and reversibly encrypted.[30]If an attacker gains access to the password file, then if it is stored as plain text, no cracking is necessary. If it is hashed but not salted then it is vulnerable torainbow tableattacks (which are more efficient than cracking). If it is reversibly encrypted then if the attacker gets the decryption key along with the file no cracking is necessary, while if he fails to get the key cracking is not possible. Thus, of the common storage formats for passwords only when passwords have been salted and hashed is cracking both necessary and possible.[30]
If a cryptographic hash function is well designed, it is computationally infeasible to reverse the function to recover aplaintextpassword. An attacker can, however, use widely available tools to attempt to guess the passwords. These tools work by hashing possible passwords and comparing the result of each guess to the actual password hashes. If the attacker finds a match, they know that their guess is the actual password for the associated user. Password cracking tools can operate by brute force (i.e. trying every possible combination of characters) or by hashing every word from a list; large lists of possible passwords in multiple languages are widely available on the Internet.[14]The existence ofpassword crackingtools allows attackers to easily recover poorly chosen passwords. In particular, attackers can quickly recover passwords that are short, dictionary words, simple variations on dictionary words, or that use easily guessable patterns.[31]A modified version of theDESalgorithm was used as the basis for the password hashing algorithm in earlyUnixsystems.[32]Thecryptalgorithm used a 12-bit salt value so that each user's hash was unique and iterated the DES algorithm 25 times in order to make the hash function slower, both measures intended to frustrate automated guessing attacks.[32]The user's password was used as a key to encrypt a fixed value. More recent Unix or Unix-like systems (e.g.,Linuxor the variousBSDsystems) use more secure password hashing algorithms such asPBKDF2,bcrypt, andscrypt, which have large salts and an adjustable cost or number of iterations.[33]A poorly designed hash function can make attacks feasible even if a strong password is chosen.LM hashis a widely deployed and insecure example.[34]
Passwords are vulnerable to interception (i.e., "snooping") while being transmitted to the authenticating machine or person. If the password is carried as electrical signals on unsecured physical wiring between the user access point and the central system controlling the password database, it is subject to snooping bywiretappingmethods. If it is carried as packeted data over the Internet, anyone able to watch thepacketscontaining the logon information can snoop with a low probability of detection.
Email is sometimes used to distribute passwords but this is generally an insecure method. Since most email is sent asplaintext, a message containing a password is readable without effort during transport by any eavesdropper. Further, the message will be stored asplaintexton at least two computers: the sender's and the recipient's. If it passes through intermediate systems during its travels, it will probably be stored there as well, at least for some time, and may be copied tobackup,cacheor history files on any of these systems.
Using client-side encryption will only protect transmission from the mail handling system server to the client machine. Previous or subsequent relays of the email will not be protected and the email will probably be stored on multiple computers, certainly on the originating and receiving computers, most often in clear text.
The risk of interception of passwords sent over the Internet can be reduced by, among other approaches, usingcryptographicprotection. The most widely used is theTransport Layer Security(TLS, previously calledSSL) feature built into most current Internetbrowsers. Most browsers alert the user of a TLS/SSL-protected exchange with a server by displaying a closed lock icon, or some other sign, when TLS is in use. There are several other techniques in use.
There is a conflict between stored hashed-passwords and hash-basedchallenge–response authentication; the latter requires a client to prove to a server that they know what theshared secret(i.e., password) is, and to do this, the server must be able to obtain the shared secret from its stored form. On a number of systems (includingUnix-type systems) doing remote authentication, the shared secret usually becomes the hashed form and has the serious limitation of exposing passwords to offline guessing attacks. In addition, when the hash is used as a shared secret, an attacker does not need the original password to authenticate remotely; they only need the hash.
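To make that "pass-the-hash" limitation concrete, the sketch below shows a bare HMAC challenge–response exchange (not any specific protocol): the verifier's stored value, here simply a hash of the password, is itself sufficient to answer the challenge.

```python
import hashlib
import hmac
import os

# Bare-bones challenge-response sketch (not any specific protocol).
# The server stores only a hash of the password and uses it as the shared
# secret -- which is exactly why stealing that hash is enough to log in.
stored_secret = hashlib.sha256(b"hunter2").digest()   # server-side storage

challenge = os.urandom(16)                             # sent to the client

def client_response(password: bytes, challenge: bytes) -> bytes:
    secret = hashlib.sha256(password).digest()         # client recomputes it
    return hmac.new(secret, challenge, hashlib.sha256).digest()

def server_verify(response: bytes, challenge: bytes) -> bool:
    expected = hmac.new(stored_secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(response, expected)

print(server_verify(client_response(b"hunter2", challenge), challenge))  # True
```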
Rather than transmitting a password, or transmitting the hash of the password,password-authenticated key agreementsystems can perform azero-knowledge password proof, which proves knowledge of the password without exposing it.
Moving a step further, augmented systems forpassword-authenticated key agreement(e.g.,AMP,B-SPEKE,PAK-Z,SRP-6) avoid both the conflict and limitation of hash-based methods. An augmented system allows a client to prove knowledge of the password to a server, where the server knows only a (not exactly) hashed password, and where the un-hashed password is required to gain access.
Usually, a system must provide a way to change a password, either because a user believes the current password has been (or might have been) compromised, or as a precautionary measure. If a new password is passed to the system in unencrypted form, security can be lost (e.g., viawiretapping) before the new password can even be installed in the passworddatabaseand if the new password is given to a compromised employee, little is gained. Some websites include the user-selected password in anunencryptedconfirmation e-mail message, with the obvious increased vulnerability.
Identity managementsystems are increasingly used to automate the issuance of replacements for lost passwords, a feature calledself-service password reset. The user's identity is verified by asking questions and comparing the answers to ones previously stored (i.e., when the account was opened).
Some password reset questions ask for personal information that could be found on social media, such as mother's maiden name. As a result, some security experts recommend either making up one's own questions or giving false answers.[35]
"Password aging" is a feature of some operating systems which forces users to change passwords frequently (e.g., quarterly, monthly or even more often). Such policies usually provoke user protest and foot-dragging at best and hostility at worst.[36]There is often an increase in the number of people who note down the password and leave it where it can easily be found, as well as help desk calls to reset a forgotten password. Users may use simpler passwords or develop variation patterns on a consistent theme to keep their passwords memorable.[37]Because of these issues, there is some debate as to whether password aging is effective.[38]Changing a password will not prevent abuse in most cases, since the abuse would often be immediately noticeable. However, if someone may have had access to the password through some means, such as sharing a computer or breaching a different site, changing the password limits the window for abuse.[39]
Allotting separate passwords to each user of a system is preferable to having a single password shared by legitimate users of the system, certainly from a security viewpoint. This is partly because users are more willing to tell another person (who may not be authorized) a shared password than one exclusively for their use. Single passwords are also much less convenient to change because multiple people need to be told at the same time, and they make removal of a particular user's access more difficult, as for instance on graduation or resignation. Separate logins are also often used for accountability, for example to know who changed a piece of data.
Common techniques used to improve the security of computer systems protected by a password include:
Some of the more stringent policy enforcement measures can pose a risk of alienating users, possibly decreasing security as a result.
It is common practice amongst computer users to reuse the same password on multiple sites. This presents a substantial security risk, because anattackerneeds to only compromise a single site in order to gain access to other sites the victim uses. This problem is exacerbated by also reusingusernames, and by websites requiring email logins, as it makes it easier for an attacker to track a single user across multiple sites. Password reuse can be avoided or minimized by usingmnemonic techniques,writing passwords down on paper, or using apassword manager.[44]
It has been argued by Redmond researchersDinei Florencioand Cormac Herley, together with Paul C. van Oorschot of Carleton University, Canada, that password reuse is inevitable, and that users should reuse passwords for low-security websites (which contain little personal data and no financial information, for example) and instead focus their efforts on remembering long, complex passwords for a few important accounts, such as bank accounts.[45]Forbesmade a similar argument, advising readers not to change passwords as often as some "experts" advise, due to the same limitations in human memory.[37]
Historically, multiple security experts asked people to memorize their passwords: "Never write down a password". More recently, multiple security experts such asBruce Schneierrecommend that people use passwords that are too complicated to memorize, write them down on paper, and keep them in a wallet.[46][47][48][49][50][51][52]
Password managersoftware can also store passwords relatively safely, in an encrypted file sealed with a single master password.
To facilitate estate administration, it is helpful for people to provide a mechanism for their passwords to be communicated to the persons who will administer their affairs in the event of their death. Should a record of accounts and passwords be prepared, care must be taken to ensure that the records are secure, to prevent theft or fraud.[53]
Multi-factor authentication schemes combine passwords (as "knowledge factors") with one or more other means of authentication, to make authentication more secure and less vulnerable to compromised passwords. For example, a simple two-factor login might send a text message, e-mail, automated phone call, or similar alert whenever a login attempt is made, possibly supplying a code that must be entered in addition to a password.[54]More sophisticated factors include such things as hardware tokens and biometric security.
Password rotation is a policy that is commonly implemented with the goal of enhancingcomputer security. In 2019, Microsoft stated that the practice is "ancient and obsolete".[55][56]
Most organizations specify apassword policythat sets requirements for the composition and usage of passwords, typically dictating minimum length, required categories (e.g., upper and lower case, numbers, and special characters), prohibited elements (e.g., use of one's own name, date of birth, address, telephone number). Some governments have national authentication frameworks[57]that define requirements for user authentication to government services, including requirements for passwords.
Many websites enforce standard rules such as minimum and maximum length, but also frequently include composition rules such as featuring at least one capital letter and at least one number/symbol. These latter, more specific rules were largely based on a 2003 report by theNational Institute of Standards and Technology(NIST), authored by Bill Burr.[58]It originally proposed the practice of using numbers, obscure characters and capital letters and updating regularly. In a 2017 article inThe Wall Street Journal, Burr reported he regrets these proposals and made a mistake when he recommended them.[59]
According to a 2017 rewrite of this NIST report, a number ofwebsiteshave rules that actually have the opposite effect on the security of their users. This includes complex composition rules as well as forced password changes after certain periods of time. While these rules have long been widespread, they have also long been seen as annoying and ineffective by both users and cyber-security experts.[60]The NIST recommends people use longer phrases as passwords (and advises websites to raise the maximum password length) instead of hard-to-remember passwords with "illusory complexity" such as "pA55w+rd".[61]A user prevented from using the password "password" may simply choose "Password1" if required to include a number and uppercase letter. Combined with forced periodic password changes, this can lead to passwords that are difficult to remember but easy to crack.[58]
Paul Grassi, one of the 2017 NIST report's authors, further elaborated: "Everyone knows that an exclamation point is a 1, or an I, or the last character of a password. $ is an S or a 5. If we use these well-known tricks, we aren't fooling any adversary. We are simply fooling the database that stores passwords into thinking the user did something good."[60]
Pieris Tsokkis and Eliana Stavrou were able to identify some bad password construction strategies through their research and development of a password generator tool. They came up with eight categories of password construction strategies based on exposed password lists, password cracking tools, and online reports citing the most used passwords. These categories include user-related information, keyboard combinations and patterns, placement strategy, word processing, substitution, capitalization, append dates, and a combination of the previous categories[62]
Attempting to crack passwords by trying as many possibilities as time and money permit is abrute force attack. A related method, rather more efficient in most cases, is adictionary attack. In a dictionary attack, all words in one or more dictionaries are tested. Lists of common passwords are also typically tested.
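A minimal sketch of the difference, assuming a leaked salted SHA-256 hash (a fast hash chosen here only to keep the example short): a dictionary attack walks a word list, whereas a brute-force attack enumerates every candidate string.

```python
import hashlib
from itertools import product
from string import ascii_lowercase

def hash_password(password: str, salt: bytes) -> bytes:
    # Fast hash for illustration only; real systems should use bcrypt/scrypt/Argon2.
    return hashlib.sha256(salt + password.encode()).digest()

salt = b"\x00" * 8                                  # hypothetical stored salt
stolen = hash_password("password1", salt)           # hash obtained from a breach

# Dictionary attack: try the words people actually use.
wordlist = ["letmein", "qwerty", "password1"]
print(next((w for w in wordlist if hash_password(w, salt) == stolen), None))

# Brute force (here limited to 4 lowercase letters): try every combination.
brute = (''.join(t) for t in product(ascii_lowercase, repeat=4))
print(next((w for w in brute if hash_password(w, salt) == stolen), None))  # None here
```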
Password strengthis the likelihood that a password cannot be guessed or discovered, and varies with the attack algorithm used. Cryptologists and computer scientists often refer to the strength or 'hardness' in terms ofentropy.[14]
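A naive way to put a number on this, sketched below, multiplies the password length by the log2 of the assumed character pool. This is only an upper bound: it assumes characters are chosen uniformly at random, which human-chosen passwords rarely are.

```python
import math, string

def naive_entropy_bits(password: str) -> float:
    # Upper-bound estimate: length times log2 of the covering character pool.
    pool = 0
    if any(c in string.ascii_lowercase for c in password): pool += 26
    if any(c in string.ascii_uppercase for c in password): pool += 26
    if any(c in string.digits for c in password):          pool += 10
    if any(c in string.punctuation for c in password):     pool += len(string.punctuation)
    return len(password) * math.log2(pool) if pool else 0.0

print(round(naive_entropy_bits("pA55w+rd"), 1))                   # ~52 bits on paper
print(round(naive_entropy_bits("correcthorsebatterystaple"), 1))  # ~118 bits
```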
Passwords easily discovered are termed weak or vulnerable; passwords difficult or impossible to discover are considered strong. There are several programs available for password attack (or even auditing and recovery by systems personnel) such as L0phtCrack, John the Ripper, and Cain, some of which exploit password design vulnerabilities (as found in the Microsoft LAN Manager system) to increase efficiency. These programs are sometimes used by system administrators to detect weak passwords proposed by users.
Studies of production computer systems have consistently shown that a large fraction of all user-chosen passwords are readily guessed automatically.[63]For example, Columbia University found 22% of user passwords could be recovered with little effort.[64]According toBruce Schneier, examining data from a 2006phishingattack, 55% ofMySpacepasswords would be crackable in 8 hours using a commercially available Password Recovery Toolkit capable of testing 200,000 passwords per second in 2006.[65]He also reported that the single most common password waspassword1, confirming yet again the general lack of informed care in choosing passwords among users. (He nevertheless maintained, based on these data, that the general quality of passwords has improved over the years—for example, average length was up to eight characters from under seven in previous surveys, and less than 4% were dictionary words.[66])
The multiple ways in which permanent or semi-permanent passwords can be compromised has prompted the development of other techniques. Some are inadequate in practice, and in any case few have become universally available for users seeking a more secure alternative.[74]A 2012 paper[75]examines why passwords have proved so hard to supplant (despite multiple predictions that they would soon be a thing of the past[76]); in examining thirty representative proposed replacements with respect to security, usability and deployability they conclude "none even retains the full set of benefits that legacy passwords already provide."
"The password is dead" is a recurring idea incomputer security. The reasons given often include reference to theusabilityas well as security problems of passwords. It often accompanies arguments that the replacement of passwords by a more secure means of authentication is both necessary and imminent. This claim has been made by a number of people at least since 2004.[76][87][88][89][90][91][92][93]
Alternatives to passwords includebiometrics,two-factor authenticationorsingle sign-on,Microsoft'sCardspace, theHiggins project, theLiberty Alliance,NSTIC, theFIDO Allianceand various Identity 2.0 proposals.[94][95]
However, in spite of these predictions and efforts to replace them passwords are still the dominant form of authentication on the web. In "The Persistence of Passwords", Cormac Herley and Paul van Oorschot suggest that every effort should be made to end the "spectacularly incorrect assumption" that passwords are dead.[96]They argue that "no other single technology matches their combination of cost, immediacy and convenience" and that "passwords are themselves the best fit for many of the scenarios in which they are currently used."
Following this, Bonneau et al. systematically compared web passwords to 35 competing authentication schemes in terms of their usability, deployability, and security.[97][98]Their analysis shows that most schemes do better than passwords on security, some schemes do better and some worse with respect to usability, whileeveryscheme does worse than passwords on deployability. The authors conclude with the following observation: "Marginal gains are often not sufficient to reach the activation energy necessary to overcome significant transition costs, which may provide the best explanation of why we are likely to live considerably longer before seeing the funeral procession for passwords arrive at the cemetery."
|
https://en.wikipedia.org/wiki/Password#Hashing_and_salt
|
Incryptography, ablock cipheris adeterministic algorithmthat operates on fixed-length groups ofbits, calledblocks. Block ciphers are the elementarybuilding blocksof manycryptographic protocols. They are ubiquitous in the storage and exchange of data, where such data is secured and authenticated viaencryption.
By itself, a block cipher applies a single unvarying, key-determined transformation, so even a secure block cipher is suitable for the encryption of only one block of data at a time, using a fixed key. A multitude of modes of operation have been designed to allow its repeated use in a secure way to achieve the security goals of confidentiality and authenticity. Block ciphers may also feature as building blocks in other cryptographic protocols, such as universal hash functions and pseudorandom number generators.
A block cipher consists of two paired algorithms, one for encryption, E, and the other for decryption, D.[1] Both algorithms accept two inputs: an input block of size n bits and a key of size k bits; and both yield an n-bit output block. The decryption algorithm D is defined to be the inverse function of encryption, i.e., D = E−1. More formally,[2][3] a block cipher is specified by an encryption function
{\displaystyle E_{K}(P):=E(K,P):\{0,1\}^{k}\times \{0,1\}^{n}\rightarrow \{0,1\}^{n},}
which takes as input a key K, of bit length k (called the key size), and a bit string P, of length n (called the block size), and returns a string C of n bits. P is called the plaintext, and C is termed the ciphertext. For each K, the function EK(P) is required to be an invertible mapping on {0,1}n. The inverse for E is defined as a function
{\displaystyle E_{K}^{-1}(C):=D_{K}(C)=D(K,C):\{0,1\}^{k}\times \{0,1\}^{n}\rightarrow \{0,1\}^{n},}
taking a key K and a ciphertext C to return a plaintext value P, such that
{\displaystyle \forall K\colon D_{K}(E_{K}(P))=P.}
For example, a block cipher encryption algorithm might take a 128-bit block of plaintext as input, and output a corresponding 128-bit block of ciphertext. The exact transformation is controlled using a second input – the secret key. Decryption is similar: the decryption algorithm takes, in this example, a 128-bit block of ciphertext together with the secret key, and yields the original 128-bit block of plain text.[4]
For each keyK,EKis apermutation(abijectivemapping) over the set of input blocks. Each key selects one permutation from the set of(2n)!{\displaystyle (2^{n})!}possible permutations.[5]
The modern design of block ciphers is based on the concept of an iteratedproduct cipher. In his seminal 1949 publication,Communication Theory of Secrecy Systems,Claude Shannonanalyzed product ciphers and suggested them as a means of effectively improving security by combining simple operations such assubstitutionsandpermutations.[6]Iterated product ciphers carry out encryption in multiplerounds, each of which uses a different subkey derived from the original key. One widespread implementation of such ciphers named aFeistel networkafterHorst Feistelis notably implemented in theDEScipher.[7]Many other realizations of block ciphers, such as theAES, are classified assubstitution–permutation networks.[8]
The root of allcryptographicblock formats used within thePayment Card Industry Data Security Standard(PCI DSS) andAmerican National Standards Institute(ANSI) standards lies with theAtalla Key Block(AKB), which was a key innovation of theAtalla Box, the firsthardware security module(HSM). It was developed in 1972 byMohamed M. Atalla, founder ofAtalla Corporation(nowUtimaco Atalla), and released in 1973. The AKB was a key block, which is required to securely interchangesymmetric keysorPINswith other actors in thebanking industry. This secure interchange is performed using the AKB format.[9]The Atalla Box protected over 90% of allATMnetworks in operation as of 1998,[10]and Atalla products still secure the majority of the world's ATM transactions as of 2014.[11]
The publication of the DES cipher by the United States National Bureau of Standards (subsequently the U.S.National Institute of Standards and Technology, NIST) in 1977 was fundamental in the public understanding of modern block cipher design. It also influenced the academic development ofcryptanalytic attacks. Bothdifferentialandlinear cryptanalysisarose out of studies on DES design. As of 2016[update], there is a palette of attack techniques against which a block cipher must be secure, in addition to being robust againstbrute-force attacks.
Most block cipher algorithms are classified asiterated block cipherswhich means that they transform fixed-size blocks ofplaintextinto identically sized blocks ofciphertext, via the repeated application of an invertible transformation known as theround function, with each iteration referred to as around.[12]
Usually, the round function R takes different round keys Ki as a second input, each of which is derived from the original key:[13]
{\displaystyle M_{i}=R_{K_{i}}(M_{i-1}),\qquad i=1,\ldots ,r,}
where M0{\displaystyle M_{0}} is the plaintext and Mr{\displaystyle M_{r}} the ciphertext, with r being the number of rounds.
Frequently, key whitening is used in addition to this. At the beginning and the end, the data is modified with key material (often with XOR):
{\displaystyle M_{0}=M\oplus K_{0},\qquad C=M_{r}\oplus K_{r+1}.}
Given one of the standard iterated block cipher design schemes, it is fairly easy to construct a block cipher that is cryptographically secure, simply by using a large number of rounds. However, this will make the cipher inefficient. Thus, efficiency is the most important additional design criterion for professional ciphers. Further, a good block cipher is designed to avoid side-channel attacks, such as branch prediction and input-dependent memory accesses that might leak secret data via the cache state or the execution time. In addition, the cipher should be concise, for small hardware and software implementations.
One important type of iterated block cipher known as asubstitution–permutation network(SPN)takes a block of the plaintext and the key as inputs and applies several alternating rounds consisting of asubstitution stagefollowed by apermutation stage—to produce each block of ciphertext output.[14]The non-linear substitution stage mixes the key bits with those of the plaintext, creating Shannon'sconfusion. The linear permutation stage then dissipates redundancies, creatingdiffusion.[15][16]
Asubstitution box(S-box)substitutes a small block of input bits with another block of output bits. This substitution must beone-to-one, to ensure invertibility (hence decryption). A secure S-box will have the property that changing one input bit will change about half of the output bits on average, exhibiting what is known as theavalanche effect—i.e. it has the property that each output bit will depend on every input bit.[17]
Apermutation box(P-box)is apermutationof all the bits: it takes the outputs of all the S-boxes of one round, permutes the bits, and feeds them into the S-boxes of the next round. A good P-box has the property that the output bits of any S-box are distributed to as many S-box inputs as possible.[18]
At each round, the round key (obtained from the key with some simple operations, for instance, using S-boxes and P-boxes) is combined using some group operation, typicallyXOR.[citation needed]
Decryptionis done by simply reversing the process (using the inverses of the S-boxes and P-boxes and applying the round keys in reversed order).[19]
In aFeistel cipher, the block of plain text to be encrypted is split into two equal-sized halves. The round function is applied to one half, using a subkey, and then the output is XORed with the other half. The two halves are then swapped.[20]
LetF{\displaystyle {\rm {F}}}be the round function and letK0,K1,…,Kn{\displaystyle K_{0},K_{1},\ldots ,K_{n}}be the sub-keys for the rounds0,1,…,n{\displaystyle 0,1,\ldots ,n}respectively.
Then the basic operation is as follows:[20]
Split the plaintext block into two equal pieces, (L0{\displaystyle L_{0}},R0{\displaystyle R_{0}})
For each round i=0,1,…,n{\displaystyle i=0,1,\dots ,n}, compute
{\displaystyle L_{i+1}=R_{i},\qquad R_{i+1}=L_{i}\oplus \mathrm {F} (R_{i},K_{i}).}
Then the ciphertext is(Rn+1,Ln+1){\displaystyle (R_{n+1},L_{n+1})}.
The decryption of a ciphertext (Rn+1,Ln+1){\displaystyle (R_{n+1},L_{n+1})} is accomplished by computing for i=n,n−1,…,0{\displaystyle i=n,n-1,\ldots ,0}
{\displaystyle R_{i}=L_{i+1},\qquad L_{i}=R_{i+1}\oplus \mathrm {F} (L_{i+1},K_{i}).}
Then(L0,R0){\displaystyle (L_{0},R_{0})}is the plaintext again.
One advantage of the Feistel model compared to asubstitution–permutation networkis that the round functionF{\displaystyle {\rm {F}}}does not have to be invertible.[21]
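The following toy Python sketch illustrates this: the round function below is an arbitrary, non-invertible placeholder, yet decryption still works by running the same rounds in reverse order. The block size, key schedule, and round count are illustrative only and offer no real security.

```python
def feistel_encrypt(block: int, round_keys, f, half_bits: int = 16) -> int:
    """Toy Feistel network on a 2*half_bits-bit integer block."""
    mask = (1 << half_bits) - 1
    left, right = (block >> half_bits) & mask, block & mask
    for k in round_keys:
        left, right = right, left ^ (f(right, k) & mask)   # swap halves, mix one half
    return (left << half_bits) | right

def feistel_decrypt(block: int, round_keys, f, half_bits: int = 16) -> int:
    mask = (1 << half_bits) - 1
    left, right = (block >> half_bits) & mask, block & mask
    for k in reversed(round_keys):                          # same rounds, reverse order
        left, right = right ^ (f(left, k) & mask), left
    return (left << half_bits) | right

f = lambda half, key: (half * 31 + key) & 0xFFFF            # placeholder round function
keys = [0x1234, 0x5678, 0x9ABC, 0xDEF0]
c = feistel_encrypt(0xCAFEBABE, keys, f)
assert feistel_decrypt(c, keys, f) == 0xCAFEBABE
```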
The Lai–Massey scheme offers security properties similar to those of theFeistel structure. It also shares the advantage that the round functionF{\displaystyle \mathrm {F} }does not have to be invertible. Another similarity is that it also splits the input block into two equal pieces. However, the round function is applied to the difference between the two, and the result is then added to both half blocks.
LetF{\displaystyle \mathrm {F} }be the round function andH{\displaystyle \mathrm {H} }a half-round function and letK0,K1,…,Kn{\displaystyle K_{0},K_{1},\ldots ,K_{n}}be the sub-keys for the rounds0,1,…,n{\displaystyle 0,1,\ldots ,n}respectively.
Then the basic operation is as follows:
Split the plaintext block into two equal pieces, (L0{\displaystyle L_{0}},R0{\displaystyle R_{0}})
For each roundi=0,1,…,n{\displaystyle i=0,1,\dots ,n}, compute
whereTi=F(Li′−Ri′,Ki){\displaystyle T_{i}=\mathrm {F} (L_{i}'-R_{i}',K_{i})}and(L0′,R0′)=H(L0,R0){\displaystyle (L_{0}',R_{0}')=\mathrm {H} (L_{0},R_{0})}
Then the ciphertext is(Ln+1,Rn+1)=(Ln+1′,Rn+1′){\displaystyle (L_{n+1},R_{n+1})=(L_{n+1}',R_{n+1}')}.
The decryption of a ciphertext(Ln+1,Rn+1){\displaystyle (L_{n+1},R_{n+1})}is accomplished by computing fori=n,n−1,…,0{\displaystyle i=n,n-1,\ldots ,0}
whereTi=F(Li+1′−Ri+1′,Ki){\displaystyle T_{i}=\mathrm {F} (L_{i+1}'-R_{i+1}',K_{i})}and(Ln+1′,Rn+1′)=H−1(Ln+1,Rn+1){\displaystyle (L_{n+1}',R_{n+1}')=\mathrm {H} ^{-1}(L_{n+1},R_{n+1})}
Then(L0,R0)=(L0′,R0′){\displaystyle (L_{0},R_{0})=(L_{0}',R_{0}')}is the plaintext again.
Many modern block ciphers and hashes areARXalgorithms—their round function involves only three operations: (A) modular addition, (R)rotationwith fixed rotation amounts, and (X)XOR. Examples includeChaCha20,Speck,XXTEA, andBLAKE. Many authors draw an ARX network, a kind ofdata flow diagram, to illustrate such a round function.[22]
These ARX operations are popular because they are relatively fast and cheap in hardware and software, their implementation can be made extremely simple, and also because they run in constant time, and therefore are immune totiming attacks. Therotational cryptanalysistechnique attempts to attack such round functions.
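As a concrete example, the ChaCha quarter-round can be written with nothing but 32-bit addition, rotation by fixed amounts, and XOR; the sketch below follows the operation order given in RFC 8439.

```python
def rotl32(x: int, n: int) -> int:
    return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFF

def chacha_quarter_round(a: int, b: int, c: int, d: int):
    # Add, XOR, rotate -- the only three operations used (ARX).
    a = (a + b) & 0xFFFFFFFF; d = rotl32(d ^ a, 16)
    c = (c + d) & 0xFFFFFFFF; b = rotl32(b ^ c, 12)
    a = (a + b) & 0xFFFFFFFF; d = rotl32(d ^ a, 8)
    c = (c + d) & 0xFFFFFFFF; b = rotl32(b ^ c, 7)
    return a, b, c, d
```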
Other operations often used in block ciphers include data-dependent rotations as inRC5andRC6, asubstitution boximplemented as alookup tableas inData Encryption StandardandAdvanced Encryption Standard, apermutation box, and multiplication as inIDEA.
A block cipher by itself allows encryption only of a single data block of the cipher's block length. For a variable-length message, the data must first be partitioned into separate cipher blocks. In the simplest case, known aselectronic codebook(ECB) mode, a message is first split into separate blocks of the cipher's block size (possibly extending the last block withpaddingbits), and then each block is encrypted and decrypted independently. However, such a naive method is generally insecure because equal plaintext blocks will always generate equal ciphertext blocks (for the same key), so patterns in the plaintext message become evident in the ciphertext output.[23]
To overcome this limitation, several so-calledblock cipher modes of operationhave been designed[24][25]and specified in national recommendations such as NIST 800-38A[26]andBSITR-02102[27]and international standards such asISO/IEC 10116.[28]The general concept is to userandomizationof the plaintext data based on an additional input value, frequently called aninitialization vector, to create what is termedprobabilistic encryption.[29]In the popularcipher block chaining(CBC) mode, for encryption to besecurethe initialization vector passed along with the plaintext message must be a random orpseudo-randomvalue, which is added in anexclusive-ormanner to the first plaintext block before it is encrypted. The resultant ciphertext block is then used as the new initialization vector for the next plaintext block. In thecipher feedback(CFB) mode, which emulates aself-synchronizing stream cipher, the initialization vector is first encrypted and then added to the plaintext block. Theoutput feedback(OFB) mode repeatedly encrypts the initialization vector to create akey streamfor the emulation of asynchronous stream cipher. The newercounter(CTR) mode similarly creates a key stream, but has the advantage of only needing unique and not (pseudo-)random values as initialization vectors; the needed randomness is derived internally by using the initialization vector as a block counter and encrypting this counter for each block.[26]
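The sketch below shows the chaining idea of CBC mode over a generic block_encrypt(key, block) primitive. The primitive itself is deliberately left abstract (in practice it would be a real cipher such as AES); the block size and IV handling are illustrative assumptions.

```python
import os

BLOCK = 16  # assumed block size of the underlying cipher, in bytes

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_encrypt(block_encrypt, key, plaintext: bytes) -> bytes:
    """Plaintext length must already be a multiple of BLOCK (see padding below)."""
    iv = os.urandom(BLOCK)                      # fresh random initialization vector
    prev, out = iv, [iv]
    for i in range(0, len(plaintext), BLOCK):
        prev = block_encrypt(key, xor(plaintext[i:i + BLOCK], prev))
        out.append(prev)                        # each ciphertext block chains forward
    return b"".join(out)

def cbc_decrypt(block_decrypt, key, ciphertext: bytes) -> bytes:
    iv, rest = ciphertext[:BLOCK], ciphertext[BLOCK:]
    prev, out = iv, []
    for i in range(0, len(rest), BLOCK):
        c = rest[i:i + BLOCK]
        out.append(xor(block_decrypt(key, c), prev))
        prev = c
    return b"".join(out)
```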
From asecurity-theoreticpoint of view, modes of operation must provide what is known assemantic security.[30]Informally, it means that given some ciphertext under an unknown key one cannot practically derive any information from the ciphertext (other than the length of the message) over what one would have known without seeing the ciphertext. It has been shown that all of the modes discussed above, with the exception of the ECB mode, provide this property under so-calledchosen plaintext attacks.
Some modes such as the CBC mode only operate on complete plaintext blocks. Simply extending the last block of a message with zero bits is insufficient since it does not allow a receiver to easily distinguish messages that differ only in the number of padding bits. More importantly, such a simple solution gives rise to very efficientpadding oracle attacks.[31]A suitablepadding schemeis therefore needed to extend the last plaintext block to the cipher's block size. While many popular schemes described in standards and in the literature have been shown to be vulnerable to padding oracle attacks,[31][32]a solution that adds a one-bit and then extends the last block with zero-bits, standardized as "padding method 2" in ISO/IEC 9797-1,[33]has been proven secure against these attacks.[32]
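In byte-oriented terms, ISO/IEC 9797-1 padding method 2 appends a single 0x80 byte (the one-bit) followed by as many zero bytes as needed; a minimal sketch:

```python
def pad_iso9797_m2(data: bytes, block_size: int = 16) -> bytes:
    data += b"\x80"                              # the mandatory one-bit (0b10000000)
    return data + b"\x00" * (-len(data) % block_size)

def unpad_iso9797_m2(padded: bytes) -> bytes:
    # Assumes the input was padded as above: strip zeros, then the 0x80 marker.
    return padded.rstrip(b"\x00")[:-1]

assert unpad_iso9797_m2(pad_iso9797_m2(b"attack at dawn")) == b"attack at dawn"
```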
Because each key defines a permutation of the n-bit blocks, identical ciphertext blocks (which leak information about the plaintext) are expected after roughly 2^(n/2) blocks have been processed under a single key, the so-called birthday bound. This property results in the cipher's security degrading quadratically, and needs to be taken into account when selecting a block size. There is a trade-off though as large block sizes can result in the algorithm becoming inefficient to operate.[34]Earlier block ciphers such as theDEShave typically selected a 64-bit block size, while newer designs such as theAESsupport block sizes of 128 bits or more, with some ciphers supporting a range of different block sizes.[35]
Linear cryptanalysisis a form of cryptanalysis based on findingaffineapproximations to the action of acipher. Linear cryptanalysis is one of the two most widely used attacks on block ciphers; the other beingdifferential cryptanalysis.[36]
The discovery is attributed toMitsuru Matsui, who first applied the technique to theFEALcipher (Matsui and Yamagishi, 1992).[37]
Integral cryptanalysisis a cryptanalytic attack that is particularly applicable to block ciphers based on substitution–permutation networks. Unlike differential cryptanalysis, which uses pairs of chosen plaintexts with a fixed XOR difference, integral cryptanalysis uses sets or even multisets of chosen plaintexts of which part is held constant and another part varies through all possibilities. For example, an attack might use 256 chosen plaintexts that have all but 8 of their bits the same, but all differ in those 8 bits. Such a set necessarily has an XOR sum of 0, and the XOR sums of the corresponding sets of ciphertexts provide information about the cipher's operation. This contrast between the differences between pairs of texts and the sums of larger sets of texts inspired the name "integral cryptanalysis", borrowing the terminology of calculus.[citation needed]
In addition to linear and differential cryptanalysis, there is a growing catalog of attacks:truncated differential cryptanalysis, partial differential cryptanalysis,integral cryptanalysis, which encompasses square and integral attacks,slide attacks,boomerang attacks, theXSL attack,impossible differential cryptanalysis, and algebraic attacks. For a new block cipher design to have any credibility, it must demonstrate evidence of security against known attacks.[38]
When a block cipher is used in a givenmode of operation, the resulting algorithm should ideally be about as secure as the block cipher itself. ECB (discussed above) emphatically lacks this property: regardless of how secure the underlying block cipher is, ECB mode can easily be attacked. On the other hand, CBC mode can be proven to be secure under the assumption that the underlying block cipher is likewise secure. Note, however, that making statements like this requires formal mathematical definitions for what it means for an encryption algorithm or a block cipher to "be secure". This section describes two common notions for what properties a block cipher should have. Each corresponds to a mathematical model that can be used to prove properties of higher-level algorithms, such as CBC.
This general approach to cryptography – proving higher-level algorithms (such as CBC) are secure under explicitly stated assumptions regarding their components (such as a block cipher) – is known asprovable security.
Informally, a block cipher is secure in the standard model if an attacker cannot tell the difference between the block cipher (equipped with a random key) and a random permutation.
To be a bit more precise, let E be an n-bit block cipher. We imagine the following game:
1. The person running the game flips a coin.
   - If the coin lands on heads, they choose a random key K and define the function f = EK.
   - If the coin lands on tails, they choose a random permutation π on the set of n-bit strings and define the function f = π.
2. The attacker chooses an n-bit string X, and the person running the game tells them the value of f(X).
3. Step 2 is repeated a total of q times. (Each of these q interactions is a query.)
4. The attacker guesses how the coin landed. They win if their guess is correct.
The attacker, which we can model as an algorithm, is called anadversary. The functionf(which the adversary was able to query) is called anoracle.
Note that an adversary can trivially ensure a 50% chance of winning simply by guessing at random (or even by, for example, always guessing "heads"). Therefore, letPE(A) denote the probability that adversaryAwins this game againstE, and define theadvantageofAas 2(PE(A) − 1/2). It follows that ifAguesses randomly, its advantage will be 0; on the other hand, ifAalways wins, then its advantage is 1. The block cipherEis apseudo-random permutation(PRP) if no adversary has an advantage significantly greater than 0, given specified restrictions onqand the adversary's running time. If in Step 2 above adversaries have the option of learningf−1(X) instead off(X) (but still have only small advantages) thenEis astrongPRP (SPRP). An adversary isnon-adaptiveif it chooses allqvalues forXbefore the game begins (that is, it does not use any information gleaned from previous queries to choose eachXas it goes).
These definitions have proven useful for analyzing various modes of operation. For example, one can define a similar game for measuring the security of a block cipher-based encryption algorithm, and then try to show (through areduction argument) that the probability of an adversary winning this new game is not much more thanPE(A) for someA. (The reduction typically provides limits onqand the running time ofA.) Equivalently, ifPE(A) is small for all relevantA, then no attacker has a significant probability of winning the new game. This formalizes the idea that the higher-level algorithm inherits the block cipher's security.
Block ciphers may be evaluated according to multiple criteria in practice. Common factors include:[39][40]
Luciferis generally considered to be the first civilian block cipher, developed atIBMin the 1970s based on work done byHorst Feistel. A revised version of the algorithm was adopted as a U.S. governmentFederal Information Processing Standard: FIPS PUB 46Data Encryption Standard(DES).[42]It was chosen by the U.S. National Bureau of Standards (NBS) after a public invitation for submissions and some internal changes byNBS(and, potentially, theNSA). DES was publicly released in 1976 and has been widely used.[citation needed]
DES was designed to, among other things, resist a certain cryptanalytic attack known to the NSA and rediscovered by IBM, though unknown publicly until rediscovered again and published byEli BihamandAdi Shamirin the late 1980s. The technique is calleddifferential cryptanalysisand remains one of the few general attacks against block ciphers;linear cryptanalysisis another but may have been unknown even to the NSA, prior to its publication byMitsuru Matsui. DES prompted a large amount of other work and publications in cryptography andcryptanalysisin the open community and it inspired many new cipher designs.[citation needed]
DES has a block size of 64 bits and akey sizeof 56 bits. 64-bit blocks became common in block cipher designs after DES. Key length depended on several factors, including government regulation. Many observers[who?]in the 1970s commented that the 56-bit key length used for DES was too short. As time went on, its inadequacy became apparent, especially after aspecial-purpose machine designed to break DESwas demonstrated in 1998 by theElectronic Frontier Foundation. An extension to DES,Triple DES, triple-encrypts each block with either two independent keys (112-bit key and 80-bit security) or three independent keys (168-bit key and 112-bit security). It was widely adopted as a replacement. As of 2011, the three-key version is still considered secure, though theNational Institute of Standards and Technology(NIST) standards no longer permit the use of the two-key version in new applications, due to its 80-bit security level.[43]
TheInternational Data Encryption Algorithm(IDEA) is a block cipher designed byJames MasseyofETH ZurichandXuejia Lai; it was first described in 1991, as an intended replacement for DES.
IDEA operates on 64-bitblocksusing a 128-bit key and consists of a series of eight identical transformations (around) and an output transformation (thehalf-round). The processes for encryption and decryption are similar. IDEA derives much of its security by interleaving operations from differentgroups–modularaddition and multiplication, and bitwiseexclusive or(XOR)– which are algebraically "incompatible" in some sense.
The designers analysed IDEA to measure its strength againstdifferential cryptanalysisand concluded that it is immune under certain assumptions. No successfullinearor algebraic weaknesses have been reported. As of 2012[update], the best attack which applies to all keys can break a full 8.5-round IDEA using a narrow-bicliques attack about four times faster than brute force.
RC5 is a block cipher designed byRonald Rivestin 1994 which, unlike many other ciphers, has a variable block size (32, 64, or 128 bits), key size (0 to 2040 bits), and a number of rounds (0 to 255). The original suggested choice of parameters was a block size of 64 bits, a 128-bit key, and 12 rounds.
A key feature of RC5 is the use of data-dependent rotations; one of the goals of RC5 was to prompt the study and evaluation of such operations as a cryptographic primitive. RC5 also consists of a number ofmodularadditions and XORs. The general structure of the algorithm is a Feistel-like network. The encryption and decryption routines can be specified in a few lines of code. The key schedule, however, is more complex, expanding the key using an essentiallyone-way functionwith the binary expansions of botheand thegolden ratioas sources of "nothing up my sleeve numbers". The tantalizing simplicity of the algorithm together with the novelty of the data-dependent rotations has made RC5 an attractive object of study for cryptanalysts.
12-round RC5 (with 64-bit blocks) is susceptible to adifferential attackusing 2^44 chosen plaintexts.[44]18–20 rounds are suggested as sufficient protection.
The Rijndael cipher, developed by the Belgian cryptographers Joan Daemen and Vincent Rijmen, was one of the competing designs to replace DES. It won the5-year public competitionto become the AES (Advanced Encryption Standard).
Adopted by NIST in 2001, AES has a fixed block size of 128 bits and a key size of 128, 192, or 256 bits, whereas Rijndael can be specified with block and key sizes in any multiple of 32 bits, with a minimum of 128 bits. The block size has a maximum of 256 bits, but the key size has no theoretical maximum. AES operates on a 4×4column-major ordermatrix of bytes, termed thestate(versions of Rijndael with a larger block size have additional columns in the state).
Blowfishis a block cipher, designed in 1993 byBruce Schneierand included in a large number of cipher suites and encryption products. Blowfish has a 64-bit block size and a variablekey lengthfrom 1 bit up to 448 bits.[45]It is a 16-roundFeistel cipherand uses large key-dependentS-boxes. Notable features of the design include the key-dependentS-boxesand a highly complexkey schedule.
It was designed as a general-purpose algorithm, intended as an alternative to the aging DES and free of the problems and constraints associated with other algorithms. At the time Blowfish was released, many other designs were proprietary, encumbered bypatents, or were commercial/government secrets. Schneier has stated that "Blowfish is unpatented, and will remain so in all countries. The algorithm is hereby placed in thepublic domain, and can be freely used by anyone." The same applies toTwofish, a successor algorithm from Schneier.
M. Liskov, R. Rivest, and D. Wagner have described a generalized version of block ciphers called "tweakable" block ciphers.[46]A tweakable block cipher accepts a second input called thetweakalong with its usual plaintext or ciphertext input. The tweak, along with the key, selects the permutation computed by the cipher. If changing tweaks is sufficiently lightweight (compared with a usually fairly expensive key setup operation), then some interesting new operation modes become possible. Thedisk encryption theoryarticle describes some of these modes.
Block ciphers traditionally work over a binaryalphabet. That is, both the input and the output are binary strings, consisting ofnzeroes and ones. In some situations, however, one may wish to have a block cipher that works over some other alphabet; for example, encrypting 16-digit credit card numbers in such a way that the ciphertext is also a 16-digit number might facilitate adding an encryption layer to legacy software. This is an example offormat-preserving encryption. More generally, format-preserving encryption requires a keyed permutation on some finitelanguage. This makes format-preserving encryption schemes a natural generalization of (tweakable) block ciphers. In contrast, traditional encryption schemes, such as CBC, are not permutations because the same plaintext can encrypt multiple different ciphertexts, even when using a fixed key.
Block ciphers can be used to build other cryptographic primitives, such as those below. For these other primitives to be cryptographically secure, care has to be taken to build them the right way.
Just as block ciphers can be used to build hash functions (for example, SHA-1 and SHA-2 are based on block ciphers that are also used independently asSHACAL), hash functions can be used to build block ciphers. Examples of such block ciphers areBEAR and LION.
|
https://en.wikipedia.org/wiki/Block_cipher
|
In mathematics, the associative property[1]is a property of some binary operations which means that rearranging the parentheses in an expression will not change the result. In propositional logic, associativity is a valid rule of replacement for expressions in logical proofs.
Within an expression containing two or more occurrences in a row of the same associative operator, the order in which theoperationsare performed does not matter as long as the sequence of theoperandsis not changed. That is (after rewriting the expression with parentheses and in infix notation if necessary), rearranging the parentheses in such an expression will not change its value. Consider the following equations:
(2+3)+4=2+(3+4)=92×(3×4)=(2×3)×4=24.{\displaystyle {\begin{aligned}(2+3)+4&=2+(3+4)=9\,\\2\times (3\times 4)&=(2\times 3)\times 4=24.\end{aligned}}}
Even though the parentheses were rearranged on each line, the values of the expressions were not altered. Since this holds true when performing addition and multiplication on anyreal numbers, it can be said that "addition and multiplication of real numbers are associative operations".
Associativity is not the same ascommutativity, which addresses whether the order of two operands affects the result. For example, the order does not matter in the multiplication of real numbers, that is,a×b=b×a, so we say that the multiplication of real numbers is a commutative operation. However, operations such asfunction compositionandmatrix multiplicationare associative, but not (generally) commutative.
Associative operations are abundant in mathematics; in fact, manyalgebraic structures(such assemigroupsandcategories) explicitly require their binary operations to be associative.
However, many important and interesting operations are non-associative; some examples includesubtraction,exponentiation, and thevector cross product. In contrast to the theoretical properties of real numbers, the addition offloating pointnumbers in computer science is not associative, and the choice of how to associate an expression can have a significant effect on rounding error.
Formally, abinary operation∗{\displaystyle \ast }on asetSis calledassociativeif it satisfies theassociative law:
(x∗y)∗z=x∗(y∗z)for allx,y,z∈S.{\displaystyle (x*y)*z=x*(y*z)\qquad {\mbox{for all }}x,y,z\in S.}
Here, ∗ is used to replace the symbol of the operation, which may be any symbol, and even the absence of symbol (juxtaposition) as formultiplication.
The associative law can also be expressed in functional notation thus:(f∘(g∘h))(x)=((f∘g)∘h)(x){\displaystyle (f\circ (g\circ h))(x)=((f\circ g)\circ h)(x)}
If a binary operation is associative, repeated application of the operation produces the same result regardless of how valid pairs of parentheses are inserted in the expression.[2]This is called thegeneralized associative law.
The number of possible bracketings is just theCatalan number,Cn{\displaystyle C_{n}}, fornoperations onn+1values. For instance, a product of 3 operations on 4 elements may be written (ignoring permutations of the arguments), inC3=5{\displaystyle C_{3}=5}possible ways:
((ab)c)d, (a(bc))d, (ab)(cd), a((bc)d), a(b(cd)).
If the product operation is associative, the generalized associative law says that all these expressions will yield the same result. So unless the expression with omitted parentheses already has a different meaning (see below), the parentheses can be considered unnecessary and "the" product can be written unambiguously as
abcd.
As the number of elements increases, thenumber of possible ways to insert parenthesesgrows quickly, but they remain unnecessary for disambiguation.
An example where this does not work is thelogical biconditional↔. It is associative; thus,A↔ (B↔C)is equivalent to(A↔B) ↔C, butA↔B↔Cmost commonly means(A↔B) and (B↔C), which is not equivalent.
Some examples of associative operations include the following.
(x+y)+z=x+(y+z)=x+y+z(xy)z=x(yz)=xyz}for allx,y,z∈R.{\displaystyle \left.{\begin{matrix}(x+y)+z=x+(y+z)=x+y+z\quad \\(x\,y)z=x(y\,z)=x\,y\,z\qquad \qquad \qquad \quad \ \ \,\end{matrix}}\right\}{\mbox{for all }}x,y,z\in \mathbb {R} .}
In standard truth-functional propositional logic,association,[4][5]orassociativity[6]are twovalidrules of replacement. The rules allow one to move parentheses inlogical expressionsinlogical proofs. The rules (usinglogical connectivesnotation) are:
(P∨(Q∨R))⇔((P∨Q)∨R){\displaystyle (P\lor (Q\lor R))\Leftrightarrow ((P\lor Q)\lor R)}
and
(P∧(Q∧R))⇔((P∧Q)∧R),{\displaystyle (P\land (Q\land R))\Leftrightarrow ((P\land Q)\land R),}
where "⇔{\displaystyle \Leftrightarrow }" is ametalogicalsymbolrepresenting "can be replaced in aproofwith".
Associativityis a property of somelogical connectivesof truth-functionalpropositional logic. The followinglogical equivalencesdemonstrate that associativity is a property of particular connectives. The following (and their converses, since↔is commutative) are truth-functionaltautologies.[citation needed]
Joint denialis an example of a truth functional connective that isnotassociative.
A binary operation∗{\displaystyle *}on a setSthat does not satisfy the associative law is callednon-associative. Symbolically,
(x∗y)∗z≠x∗(y∗z)for somex,y,z∈S.{\displaystyle (x*y)*z\neq x*(y*z)\qquad {\mbox{for some }}x,y,z\in S.}
For such an operation the order of evaluationdoesmatter. For example:
(5 − 3) − 2 = 0, whereas 5 − (3 − 2) = 4;
(12 ÷ 6) ÷ 2 = 1, whereas 12 ÷ (6 ÷ 2) = 4;
(2^3)^2 = 64, whereas 2^(3^2) = 512.
Also although addition is associative for finite sums, it is not associative inside infinite sums (series). For example,(1+−1)+(1+−1)+(1+−1)+(1+−1)+(1+−1)+(1+−1)+⋯=0{\displaystyle (1+-1)+(1+-1)+(1+-1)+(1+-1)+(1+-1)+(1+-1)+\dots =0}whereas1+(−1+1)+(−1+1)+(−1+1)+(−1+1)+(−1+1)+(−1+1)+⋯=1.{\displaystyle 1+(-1+1)+(-1+1)+(-1+1)+(-1+1)+(-1+1)+(-1+1)+\dots =1.}
Some non-associative operations are fundamental in mathematics. They appear often as the multiplication in structures callednon-associative algebras, which have also an addition and ascalar multiplication. Examples are theoctonionsandLie algebras. In Lie algebras, the multiplication satisfiesJacobi identityinstead of the associative law; this allows abstracting the algebraic nature ofinfinitesimal transformations.
Other examples arequasigroup,quasifield,non-associative ring, andcommutative non-associative magmas.
In mathematics, addition and multiplication of real numbers are associative. By contrast, in computer science, addition and multiplication offloating pointnumbers arenotassociative, as different rounding errors may be introduced when dissimilar-sized values are joined in a different order.[7]
To illustrate this, consider adding numbers of widely different magnitudes in a floating-point representation with a small (for example, 4-bit) significand: depending on how the additions are grouped, a smaller addend may be rounded away entirely or retained in the final sum.
Even though most computers compute with 24 or 53 bits of significand,[8]this is still an important source of rounding error, and approaches such as theKahan summation algorithmare ways to minimise the errors. It can be especially problematic in parallel computing.[9][10]
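A short Python illustration of both points, using ordinary double precision (the same effect described above, just less pronounced); math.fsum stands in here for compensated techniques such as Kahan summation:

```python
import math

x, y, z = 0.1, 0.2, 0.3
print((x + y) + z == x + (y + z))    # False: regrouping changes the rounding
print((x + y) + z, x + (y + z))      # 0.6000000000000001 0.6

# Accurate summation bounds the accumulated error regardless of grouping.
print(sum([0.1] * 10), math.fsum([0.1] * 10))   # 0.9999999999999999 1.0
```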
In general, parentheses must be used to indicate theorder of evaluationif a non-associative operation appears more than once in an expression (unless the notation specifies the order in another way, like23/4{\displaystyle {\dfrac {2}{3/4}}}). However,mathematiciansagree on a particular order of evaluation for several common non-associative operations. This is simply a notational convention to avoid parentheses.
Aleft-associativeoperation is a non-associative operation that is conventionally evaluated from left to right, i.e.,
a∗b∗c=(a∗b)∗ca∗b∗c∗d=((a∗b)∗c)∗da∗b∗c∗d∗e=(((a∗b)∗c)∗d)∗eetc.}for alla,b,c,d,e∈S{\displaystyle \left.{\begin{array}{l}a*b*c=(a*b)*c\\a*b*c*d=((a*b)*c)*d\\a*b*c*d*e=(((a*b)*c)*d)*e\quad \\{\mbox{etc.}}\end{array}}\right\}{\mbox{for all }}a,b,c,d,e\in S}
while aright-associativeoperation is conventionally evaluated from right to left:
x∗y∗z=x∗(y∗z)w∗x∗y∗z=w∗(x∗(y∗z))v∗w∗x∗y∗z=v∗(w∗(x∗(y∗z)))etc.}for allz,y,x,w,v∈S{\displaystyle \left.{\begin{array}{l}x*y*z=x*(y*z)\\w*x*y*z=w*(x*(y*z))\quad \\v*w*x*y*z=v*(w*(x*(y*z)))\quad \\{\mbox{etc.}}\end{array}}\right\}{\mbox{for all }}z,y,x,w,v\in S}
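In programming terms, the two conventions correspond to a left fold and a right fold; a small Python sketch using subtraction (non-associative) and the right-associative ** operator:

```python
from functools import reduce

xs = [2, 3, 4]
left  = reduce(lambda a, b: a - b, xs)              # ((2 - 3) - 4) = -5
right = reduce(lambda a, b: b - a, reversed(xs))    # (2 - (3 - 4)) =  3
print(left, right)

# Python follows the usual convention that exponentiation is right-associative:
print(2 ** 3 ** 2, (2 ** 3) ** 2, 2 ** (3 ** 2))    # 512 64 512
```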
Both left-associative and right-associative operations occur. Left-associative operations include the following:
This notation can be motivated by thecurryingisomorphism, which enables partial application.
Right-associative operations include the following:
Exponentiation is commonly used with brackets or right-associatively because a repeated left-associative exponentiation operation is of little use. Repeated powers would mostly be rewritten with multiplication:
{\displaystyle (x^{y})^{z}=x^{yz}.}
Formatted correctly, the superscript inherently behaves as a set of parentheses; e.g. in the expression2x+3{\displaystyle 2^{x+3}}the addition is performedbeforethe exponentiation despite there being no explicit parentheses2(x+3){\displaystyle 2^{(x+3)}}wrapped around it. Thus given an expression such asxyz{\displaystyle x^{y^{z}}}, the full exponentyz{\displaystyle y^{z}}of the basex{\displaystyle x}is evaluated first. However, in some contexts, especially in handwriting, the difference betweenxyz=(xy)z{\displaystyle {x^{y}}^{z}=(x^{y})^{z}},xyz=x(yz){\displaystyle x^{yz}=x^{(yz)}}andxyz=x(yz){\displaystyle x^{y^{z}}=x^{(y^{z})}}can be hard to see. In such a case, right-associativity is usually implied.
Using right-associative notation for these operations can be motivated by theCurry–Howard correspondenceand by thecurryingisomorphism.
Non-associative operations for which no conventional evaluation order is defined include the following.
(Comparematerial nonimplicationin logic.)
William Rowan Hamiltonseems to have coined the term "associative property"[17]around 1844, a time when he was contemplating the non-associative algebra of theoctonionshe had learned about fromJohn T. Graves.[18]
|
https://en.wikipedia.org/wiki/Associative_property
|
Anon-associative algebra[1](ordistributive algebra) is analgebra over a fieldwhere thebinary multiplication operationis not assumed to beassociative. That is, analgebraic structureAis a non-associative algebra over afieldKif it is avector spaceoverKand is equipped with aK-bilinearbinary multiplication operationA×A→Awhich may or may not be associative. Examples includeLie algebras,Jordan algebras, theoctonions, and three-dimensional Euclidean space equipped with thecross productoperation. Since it is not assumed that the multiplication is associative, using parentheses to indicate the order of multiplications is necessary. For example, the expressions (ab)(cd), (a(bc))danda(b(cd)) may all yield different answers.
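A quick numerical check of the last point for the cross product, written in plain Python (no external libraries):

```python
def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

a, b, c = (1, 0, 0), (0, 1, 0), (0, 1, 1)
print(cross(cross(a, b), c))   # (a x b) x c  ->  (-1, 0, 0)
print(cross(a, cross(b, c)))   # a x (b x c)  ->  (0, 0, 0): the groupings differ
```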
While this use ofnon-associativemeans that associativity is not assumed, it does not mean that associativity is disallowed. In other words, "non-associative" means "not necessarily associative", just as "noncommutative" means "not necessarily commutative" fornoncommutative rings.
An algebra isunitalorunitaryif it has anidentity elementewithex=x=xefor allxin the algebra. For example, theoctonionsare unital, butLie algebrasnever are.
The nonassociative algebra structure ofAmay be studied by associating it with other associative algebras which are subalgebras of the full algebra ofK-endomorphismsofAas aK-vector space. Two such are thederivation algebraand the(associative) enveloping algebra, the latter being in a sense "the smallest associative algebra containingA".
More generally, some authors consider the concept of a non-associative algebra over acommutative ringR: AnR-moduleequipped with anR-bilinear binary multiplication operation.[2]If a structure obeys all of the ring axioms apart from associativity (for example, anyR-algebra), then it is naturally aZ{\displaystyle \mathbb {Z} }-algebra, so some authors refer to non-associativeZ{\displaystyle \mathbb {Z} }-algebras asnon-associative rings.
Ring-like structures with two binary operations and no other restrictions are a broad class, one which is too general to study.
For this reason, the best-known kinds of non-associative algebras satisfyidentities, or properties, which simplify multiplication somewhat.
These include the following ones.
Letx,yandzdenote arbitrary elements of the algebraAover the fieldK.
Let powers to a positive (non-zero) integer be recursively defined by x^1 ≝ x and either x^(n+1) ≝ x^n x[3](right powers) or x^(n+1) ≝ x x^n[4][5](left powers), depending on authors.
ForKof anycharacteristic:
IfK≠GF(2)ordim(A) ≤ 3:
Ifchar(K) ≠ 2:
Ifchar(K) ≠ 3:
Ifchar(K) ∉ {2,3,5}:
Ifchar(K) = 0:
Ifchar(K) = 2:
TheassociatoronAis theK-multilinear map[⋅,⋅,⋅]:A×A×A→A{\displaystyle [\cdot ,\cdot ,\cdot ]:A\times A\times A\to A}given by
{\displaystyle [x,y,z]=(xy)z-x(yz).}
It measures the degree of nonassociativity ofA{\displaystyle A}, and can be used to conveniently express some possible identities satisfied byA.
Letx,yandzdenote arbitrary elements of the algebra.
Thenucleusis the set of elements that associate with all others:[30]that is, theninAsuch that
{\displaystyle [n,A,A]=[A,n,A]=[A,A,n]=\{0\}.}
The nucleus is an associative subring ofA.
ThecenterofAis the set of elements that commute and associate with everything inA, that is the intersection of
with the nucleus. It turns out that for elements ofC(A)it is enough that two of the sets([n,A,A],[A,n,A],[A,A,n]){\displaystyle ([n,A,A],[A,n,A],[A,A,n])}are{0}{\displaystyle \{0\}}for the third to also be the zero set.
More classes of algebras:
There are several properties that may be familiar from ring theory, or from associative algebras, which are not always true for non-associative algebras. Unlike the associative case, elements with a (two-sided) multiplicative inverse might also be azero divisor. For example, all non-zero elements of thesedenionshave a two-sided inverse, but some of them are also zero divisors.
Thefree non-associative algebraon a setXover a fieldKis defined as the algebra with basis consisting of all non-associative monomials, finite formal products of elements ofXretaining parentheses. The product of monomialsu,vis just (u)(v). The algebra is unital if one takes the empty product as a monomial.[31]
Kuroshprovedthat every subalgebra of a free non-associative algebra is free.[32]
An algebraAover a fieldKis in particular aK-vector space and so one can consider the associative algebra EndK(A) ofK-linear vector space endomorphism ofA. We can associate to the algebra structure onAtwo subalgebras of EndK(A), thederivation algebraand the(associative) enveloping algebra.
AderivationonAis a mapDwith the property
{\displaystyle D(xy)=D(x)y+xD(y).}
The derivations onAform a subspace DerK(A) in EndK(A). Thecommutatorof two derivations is again a derivation, so that theLie bracketgives DerK(A) a structure ofLie algebra.[33]
There are linear mapsLandRattached to each elementaof an algebraA:[34]
{\displaystyle L(a):x\mapsto ax;\qquad R(a):x\mapsto xa.}
Here each elementL(a),R(a){\displaystyle L(a),R(a)}is regarded as an element of EndK(A). Theassociative enveloping algebraormultiplication algebraofAis the sub-associative algebra of EndK(A) generated by the left and right linear mapsL(a),R(a){\displaystyle L(a),R(a)}.[29][35]ThecentroidofAis the centraliser of the enveloping algebra in the endomorphism algebra EndK(A). An algebra iscentralif its centroid consists of theK-scalar multiples of the identity.[16]
Some of the possible identities satisfied by non-associative algebras may be conveniently expressed in terms of the linear maps:[36]
Thequadratic representationQis defined by[37]
{\displaystyle Q(a)=2L(a)^{2}-L(a^{2}),}
or equivalently,
{\displaystyle Q(a)b=2a(ab)-(aa)b.}
The article onuniversal enveloping algebrasdescribes the canonical construction of enveloping algebras, as well as the PBW-type theorems for them. For Lie algebras, such enveloping algebras have a universal property, which does not hold, in general, for non-associative algebras. The best-known example is perhaps theAlbert algebra, an exceptionalJordan algebrathat is not enveloped by the canonical construction of the enveloping algebra for Jordan algebras.
|
https://en.wikipedia.org/wiki/Non-associative_algebra
|
Intheoretical computer scienceand mathematics,computational complexity theoryfocuses on classifyingcomputational problemsaccording to their resource usage, and explores the relationships between these classifications. A computational problem is a task solved by a computer. A computation problem is solvable by mechanical application of mathematical steps, such as analgorithm.
A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition, by introducing mathematicalmodels of computationto study these problems and quantifying theircomputational complexity, i.e., the amount of resources needed to solve them, such as time and storage. Other measures of complexity are also used, such as the amount of communication (used incommunication complexity), the number ofgatesin a circuit (used incircuit complexity) and the number of processors (used inparallel computing). One of the roles of computational complexity theory is to determine the practical limits on what computers can and cannot do. TheP versus NP problem, one of the sevenMillennium Prize Problems,[1]is part of the field of computational complexity.
Closely related fields intheoretical computer scienceareanalysis of algorithmsandcomputability theory. A key distinction between analysis of algorithms and computational complexity theory is that the former is devoted to analyzing the amount of resources needed by a particular algorithm to solve a problem, whereas the latter asks a more general question about all possible algorithms that could be used to solve the same problem. More precisely, computational complexity theory tries to classify problems that can or cannot be solved with appropriately restricted resources. In turn, imposing restrictions on the available resources is what distinguishes computational complexity from computability theory: the latter theory asks what kinds of problems can, in principle, be solved algorithmically.
Acomputational problemcan be viewed as an infinite collection ofinstancestogether with a set (possibly empty) ofsolutionsfor every instance. The input string for a computational problem is referred to as a problem instance, and should not be confused with the problem itself. In computational complexity theory, a problem refers to the abstract question to be solved. In contrast, an instance of this problem is a rather concrete utterance, which can serve as the input for a decision problem. For example, consider the problem ofprimality testing. The instance is a number (e.g., 15) and the solution is "yes" if the number is prime and "no" otherwise (in this case, 15 is not prime and the answer is "no"). Stated another way, theinstanceis a particular input to the problem, and thesolutionis the output corresponding to the given input.
To further highlight the difference between a problem and an instance, consider the following instance of the decision version of thetravelling salesman problem: Is there a route of at most 2000 kilometres passing through all of Germany's 14 largest cities? The quantitative answer to this particular problem instance is of little use for solving other instances of the problem, such as asking for a round trip through all sites inMilanwhose total length is at most 10 km. For this reason, complexity theory addresses computational problems and not particular problem instances.
When considering computational problems, a problem instance is astringover analphabet. Usually, the alphabet is taken to be the binary alphabet (i.e., the set {0,1}), and thus the strings arebitstrings. As in a real-worldcomputer, mathematical objects other than bitstrings must be suitably encoded. For example,integerscan be represented inbinary notation, andgraphscan be encoded directly via theiradjacency matrices, or by encoding theiradjacency listsin binary.
Even though some proofs of complexity-theoretic theorems regularly assume some concrete choice of input encoding, one tries to keep the discussion abstract enough to be independent of the choice of encoding. This can be achieved by ensuring that different representations can be transformed into each other efficiently.
Decision problemsare one of the central objects of study in computational complexity theory. A decision problem is a type of computational problem where the answer is eitheryesorno(alternatively, 1 or 0). A decision problem can be viewed as aformal language, where the members of the language are instances whose output is yes, and the non-members are those instances whose output is no. The objective is to decide, with the aid of analgorithm, whether a given input string is a member of the formal language under consideration. If the algorithm deciding this problem returns the answeryes, the algorithm is said to accept the input string, otherwise it is said to reject the input.
An example of a decision problem is the following. The input is an arbitrarygraph. The problem consists in deciding whether the given graph isconnectedor not. The formal language associated with this decision problem is then the set of all connected graphs — to obtain a precise definition of this language, one has to decide how graphs are encoded as binary strings.
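A sketch of a decider for this problem, with the graph encoded as an adjacency-list dictionary (one of many possible encodings); the algorithm simply checks whether a breadth-first search from any vertex reaches all of them:

```python
from collections import deque

def is_connected(adjacency: dict) -> bool:
    """adjacency maps every vertex to the list of its neighbours."""
    if not adjacency:
        return True
    start = next(iter(adjacency))
    seen, queue = {start}, deque([start])
    while queue:
        v = queue.popleft()
        for w in adjacency.get(v, []):
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return len(seen) == len(adjacency)

print(is_connected({0: [1], 1: [0, 2], 2: [1]}))   # True  -> answer "yes"
print(is_connected({0: [1], 1: [0], 2: []}))       # False -> answer "no"
```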
Afunction problemis a computational problem where a single output (of atotal function) is expected for every input, but the output is more complex than that of adecision problem—that is, the output is not just yes or no. Notable examples include thetraveling salesman problemand theinteger factorization problem.
It is tempting to think that the notion of function problems is much richer than the notion of decision problems. However, this is not really the case, since function problems can be recast as decision problems. For example, the multiplication of two integers can be expressed as the set of triples(a,b,c){\displaystyle (a,b,c)}such that the relationa×b=c{\displaystyle a\times b=c}holds. Deciding whether a given triple is a member of this set corresponds to solving the problem of multiplying two numbers.
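For instance, deciding membership of a triple in this set is a one-line check (the names below are illustrative):

```python
def in_multiplication_language(a: int, b: int, c: int) -> bool:
    # Membership test for the set {(a, b, c) : a * b = c}.
    return a * b == c

print(in_multiplication_language(6, 7, 42))   # "yes"
print(in_multiplication_language(6, 7, 41))   # "no"
```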
To measure the difficulty of solving a computational problem, one may wish to see how much time the best algorithm requires to solve the problem. However, the running time may, in general, depend on the instance. In particular, larger instances will require more time to solve. Thus the time required to solve a problem (or the space required, or any measure of complexity) is calculated as a function of the size of the instance. The input size is typically measured in bits. Complexity theory studies how algorithms scale as input size increases. For instance, in the problem of finding whether a graph is connected, how much more time does it take to solve a problem for a graph with2n{\displaystyle 2n}vertices compared to the time taken for a graph withn{\displaystyle n}vertices?
If the input size isn{\displaystyle n}, the time taken can be expressed as a function ofn{\displaystyle n}. Since the time taken on different inputs of the same size can be different, the worst-case time complexityT(n){\displaystyle T(n)}is defined to be the maximum time taken over all inputs of sizen{\displaystyle n}. IfT(n){\displaystyle T(n)}is a polynomial inn{\displaystyle n}, then the algorithm is said to be apolynomial timealgorithm.Cobham's thesisargues that a problem can be solved with a feasible amount of resources if it admits a polynomial-time algorithm.
A Turing machine is a mathematical model of a general computing machine. It is a theoretical device that manipulates symbols contained on a strip of tape. Turing machines are not intended as a practical computing technology, but rather as a general model of a computing machine—anything from an advanced supercomputer to a mathematician with a pencil and paper. It is believed that if a problem can be solved by an algorithm, there exists a Turing machine that solves the problem. Indeed, this is the statement of theChurch–Turing thesis. Furthermore, it is known that everything that can be computed on other models of computation known to us today, such as aRAM machine,Conway's Game of Life,cellular automata,lambda calculusor any programming language can be computed on a Turing machine. Since Turing machines are easy to analyze mathematically, and are believed to be as powerful as any other model of computation, the Turing machine is the most commonly used model in complexity theory.
Many types of Turing machines are used to define complexity classes, such asdeterministic Turing machines,probabilistic Turing machines,non-deterministic Turing machines,quantum Turing machines,symmetric Turing machinesandalternating Turing machines. They are all equally powerful in principle, but when resources (such as time or space) are bounded, some of these may be more powerful than others.
A deterministic Turing machine is the most basic Turing machine, which uses a fixed set of rules to determine its future actions. A probabilistic Turing machine is a deterministic Turing machine with an extra supply of random bits. The ability to make probabilistic decisions often helps algorithms solve problems more efficiently. Algorithms that use random bits are calledrandomized algorithms. A non-deterministic Turing machine is a deterministic Turing machine with an added feature of non-determinism, which allows a Turing machine to have multiple possible future actions from a given state. One way to view non-determinism is that the Turing machine branches into many possible computational paths at each step, and if it solves the problem in any of these branches, it is said to have solved the problem. Clearly, this model is not meant to be a physically realizable model, it is just a theoretically interesting abstract machine that gives rise to particularly interesting complexity classes. For examples, seenon-deterministic algorithm.
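As a rough illustration of the branching view, the sketch below deterministically enumerates every branch of guesses for the subset-sum problem and accepts if any branch accepts. The problem choice and the brute-force enumeration are illustrative assumptions, not a Turing-machine construction; the exponential cost of the enumeration is exactly the price of simulating all branches.

```python
from itertools import product

def subset_sum_nondeterministic(numbers: list[int], target: int) -> bool:
    """Accept iff some branch of include/exclude guesses reaches the target sum."""
    for guesses in product([False, True], repeat=len(numbers)):
        chosen_sum = sum(x for x, take in zip(numbers, guesses) if take)
        if chosen_sum == target:
            return True   # some branch accepts, so the "machine" accepts
    return False          # no branch accepts

print(subset_sum_nondeterministic([3, 34, 4, 12, 5, 2], 9))   # True  (4 + 5)
print(subset_sum_nondeterministic([3, 34, 4, 12, 5, 2], 30))  # False
```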
Many machine models different from the standardmulti-tape Turing machineshave been proposed in the literature, for examplerandom-access machines. Perhaps surprisingly, each of these models can be converted to another without providing any extra computational power. The time and memory consumption of these alternate models may vary.[2]What all these models have in common is that the machines operatedeterministically.
However, some computational problems are easier to analyze in terms of more unusual resources. For example, a non-deterministic Turing machine is a computational model that is allowed to branch out to check many different possibilities at once. The non-deterministic Turing machine has very little to do with how we physically want to compute algorithms, but its branching exactly captures many of the mathematical models we want to analyze, so thatnon-deterministic timeis a very important resource in analyzing computational problems.
For a precise definition of what it means to solve a problem using a given amount of time and space, a computational model such as thedeterministic Turing machineis used. The time required by a deterministic Turing machineM{\displaystyle M}on inputx{\displaystyle x}is the total number of state transitions, or steps, the machine makes before it halts and outputs the answer ("yes" or "no"). A Turing machineM{\displaystyle M}is said to operate within timef(n){\displaystyle f(n)}if the time required byM{\displaystyle M}on each input of lengthn{\displaystyle n}is at mostf(n){\displaystyle f(n)}. A decision problemA{\displaystyle A}can be solved in timef(n){\displaystyle f(n)}if there exists a Turing machine operating in timef(n){\displaystyle f(n)}that solves the problem. Since complexity theory is interested in classifying problems based on their difficulty, one defines sets of problems based on some criteria. For instance, the set of problems solvable within timef(n){\displaystyle f(n)}on a deterministic Turing machine is then denoted byDTIME(f(n){\displaystyle f(n)}).
Analogous definitions can be made for space requirements. Although time and space are the most well-known complexity resources, anycomplexity measurecan be viewed as a computational resource. Complexity measures are very generally defined by theBlum complexity axioms. Other complexity measures used in complexity theory includecommunication complexity,circuit complexity, anddecision tree complexity.
The complexity of an algorithm is often expressed usingbig O notation.
Thebest, worst and average casecomplexity refer to three different ways of measuring the time complexity (or any other complexity measure) of different inputs of the same size. Since some inputs of sizen{\displaystyle n}may be faster to solve than others, we define the following complexities:
The order from cheap to costly is: Best, average (ofdiscrete uniform distribution), amortized, worst.
For example, the deterministic sorting algorithm quicksort addresses the problem of sorting a list of integers. The worst case is when the pivot is always the largest or smallest value in the list (so the list is never divided). In this case, the algorithm takes time O(n^2). If we assume that all possible permutations of the input list are equally likely, the average time taken for sorting is O(n log n). The best case occurs when each pivoting divides the list in half, also needing O(n log n) time.
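A minimal quicksort sketch with a first-element pivot rule, chosen purely to make the worst case visible: on an already-sorted list every partition puts all remaining elements on one side, giving O(n^2) behaviour, while balanced splits give O(n log n).

```python
def quicksort(items: list[int]) -> list[int]:
    """Sort a list of integers; the first element is used as the pivot."""
    if len(items) <= 1:
        return items
    pivot, rest = items[0], items[1:]
    smaller = [x for x in rest if x < pivot]    # one side of the partition
    larger = [x for x in rest if x >= pivot]    # the other side
    return quicksort(smaller) + [pivot] + quicksort(larger)

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
# Worst-case input for this pivot rule: an already-sorted list.
print(quicksort(list(range(10))))           # still sorted, but with n-deep recursion
```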
To classify the computation time (or similar resources, such as space consumption), it is helpful to demonstrate upper and lower bounds on the maximum amount of time required by the most efficient algorithm to solve a given problem. The complexity of an algorithm is usually taken to be its worst-case complexity unless specified otherwise. Analyzing a particular algorithm falls under the field ofanalysis of algorithms. To show an upper boundT(n){\displaystyle T(n)}on the time complexity of a problem, one needs to show only that there is a particular algorithm with running time at mostT(n){\displaystyle T(n)}. However, proving lower bounds is much more difficult, since lower bounds make a statement about all possible algorithms that solve a given problem. The phrase "all possible algorithms" includes not just the algorithms known today, but any algorithm that might be discovered in the future. To show a lower bound ofT(n){\displaystyle T(n)}for a problem requires showing that no algorithm can have time complexity lower thanT(n){\displaystyle T(n)}.
Upper and lower bounds are usually stated using the big O notation, which hides constant factors and smaller terms. This makes the bounds independent of the specific details of the computational model used. For instance, if T(n) = 7n^2 + 15n + 40, in big O notation one would write T(n) ∈ O(n^2).
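A short worked verification of this example, with one possible (illustrative) choice of witness constants c = 62 and n_0 = 1:

```latex
\[
T(n) = 7n^{2} + 15n + 40
     \le 7n^{2} + 15n^{2} + 40n^{2}
     = 62n^{2}
     \qquad \text{for all } n \ge 1,
\]
```

so T(n) ≤ 62·n^2 for every n ≥ 1, which is exactly what T(n) ∈ O(n^2) asserts.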
Acomplexity classis a set of problems of related complexity. Simpler complexity classes are defined by the following factors:
Some complexity classes have complicated definitions that do not fit into this framework. Thus, a typical complexity class has a definition like the following:
But bounding the computation time above by some concrete functionf(n){\displaystyle f(n)}often yields complexity classes that depend on the chosen machine model. For instance, the language{xx∣xis any binary string}{\displaystyle \{xx\mid x{\text{ is any binary string}}\}}can be solved inlinear timeon a multi-tape Turing machine, but necessarily requires quadratic time in the model of single-tape Turing machines. If we allow polynomial variations in running time,Cobham-Edmonds thesisstates that "the time complexities in any two reasonable and general models of computation are polynomially related" (Goldreich 2008, Chapter 1.2). This forms the basis for the complexity classP, which is the set of decision problems solvable by a deterministic Turing machine within polynomial time. The corresponding set of function problems isFP.
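A membership test for this language is easy to state on a random-access model (a minimal sketch; the quadratic lower bound mentioned above is specific to single-tape Turing machines, not to this kind of program):

```python
def is_in_language_xx(w: str) -> bool:
    """Decide membership in { xx | x is any binary string }."""
    if len(w) % 2 != 0:
        return False            # odd-length strings cannot be of the form xx
    half = len(w) // 2
    return w[:half] == w[half:]  # compare the two halves

print(is_in_language_xx("01100110"))  # True: x = "0110"
print(is_in_language_xx("0100"))      # False: "01" != "00"
```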
Many important complexity classes can be defined by bounding the time or space used by the algorithm. Some important complexity classes of decision problems defined in this manner are the following:
Logarithmic-space classes do not account for the space required to represent the problem.
It turns out that PSPACE = NPSPACE and EXPSPACE = NEXPSPACE bySavitch's theorem.
Other important complexity classes includeBPP,ZPPandRP, which are defined usingprobabilistic Turing machines;ACandNC, which are defined using Boolean circuits; andBQPandQMA, which are defined using quantum Turing machines.#Pis an important complexity class of counting problems (not decision problems). Classes likeIPandAMare defined usingInteractive proof systems.ALLis the class of all decision problems.
For the complexity classes defined in this way, it is desirable to prove that relaxing the requirements on (say) computation time indeed defines a bigger set of problems. In particular, although DTIME(n{\displaystyle n}) is contained in DTIME(n2{\displaystyle n^{2}}), it would be interesting to know if the inclusion is strict. For time and space requirements, the answer to such questions is given by the time and space hierarchy theorems respectively. They are called hierarchy theorems because they induce a proper hierarchy on the classes defined by constraining the respective resources. Thus there are pairs of complexity classes such that one is properly included in the other. Having deduced such proper set inclusions, we can proceed to make quantitative statements about how much more additional time or space is needed in order to increase the number of problems that can be solved.
More precisely, the time hierarchy theorem states that DTIME(o(f(n))) ⊊ DTIME(f(n)·log(f(n))).
The space hierarchy theorem states that DSPACE(o(f(n))) ⊊ DSPACE(f(n)).
The time and space hierarchy theorems form the basis for most separation results of complexity classes. For instance, the time hierarchy theorem tells us that P is strictly contained in EXPTIME, and the space hierarchy theorem tells us that L is strictly contained in PSPACE.
Many complexity classes are defined using the concept of a reduction. A reduction is a transformation of one problem into another problem. It captures the informal notion of a problem being at most as difficult as another problem. For instance, if a problemX{\displaystyle X}can be solved using an algorithm forY{\displaystyle Y},X{\displaystyle X}is no more difficult thanY{\displaystyle Y}, and we say thatX{\displaystyle X}reducestoY{\displaystyle Y}. There are many different types of reductions, based on the method of reduction, such as Cook reductions, Karp reductions and Levin reductions, and the bound on the complexity of reductions, such aspolynomial-time reductionsorlog-space reductions.
The most commonly used reduction is a polynomial-time reduction. This means that the reduction process takes polynomial time. For example, the problem of squaring an integer can be reduced to the problem of multiplying two integers. This means an algorithm for multiplying two integers can be used to square an integer. Indeed, this can be done by giving the same input to both inputs of the multiplication algorithm. Thus we see that squaring is not more difficult than multiplication, since squaring can be reduced to multiplication.
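A minimal sketch of this reduction, treating the multiplication routine as a black box (the function names are illustrative):

```python
def multiply(a: int, b: int) -> int:
    """Stand-in for any algorithm that multiplies two integers."""
    return a * b

def square_via_multiplication(x: int) -> int:
    """Reduce squaring to multiplication by feeding the same input to both arguments."""
    return multiply(x, x)

print(square_via_multiplication(12))  # 144
```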
This motivates the concept of a problem being hard for a complexity class. A problemX{\displaystyle X}ishardfor a class of problemsC{\displaystyle C}if every problem inC{\displaystyle C}can be reduced toX{\displaystyle X}. Thus no problem inC{\displaystyle C}is harder thanX{\displaystyle X}, since an algorithm forX{\displaystyle X}allows us to solve any problem inC{\displaystyle C}. The notion of hard problems depends on the type of reduction being used. For complexity classes larger than P, polynomial-time reductions are commonly used. In particular, the set of problems that are hard for NP is the set ofNP-hardproblems.
If a problemX{\displaystyle X}is inC{\displaystyle C}and hard forC{\displaystyle C}, thenX{\displaystyle X}is said to becompleteforC{\displaystyle C}. This means thatX{\displaystyle X}is the hardest problem inC{\displaystyle C}. (Since many problems could be equally hard, one might say thatX{\displaystyle X}is one of the hardest problems inC{\displaystyle C}.) Thus the class ofNP-completeproblems contains the most difficult problems in NP, in the sense that they are the ones most likely not to be in P. Because the problem P = NP is not solved, being able to reduce a known NP-complete problem,Π2{\displaystyle \Pi _{2}}, to another problem,Π1{\displaystyle \Pi _{1}}, would indicate that there is no known polynomial-time solution forΠ1{\displaystyle \Pi _{1}}. This is because a polynomial-time solution toΠ1{\displaystyle \Pi _{1}}would yield a polynomial-time solution toΠ2{\displaystyle \Pi _{2}}. Similarly, because all NP problems can be reduced to the set, finding anNP-completeproblem that can be solved in polynomial time would mean that P = NP.[3]
The complexity class P is often seen as a mathematical abstraction modeling those computational tasks that admit an efficient algorithm. This hypothesis is called theCobham–Edmonds thesis. The complexity classNP, on the other hand, contains many problems that people would like to solve efficiently, but for which no efficient algorithm is known, such as theBoolean satisfiability problem, theHamiltonian path problemand thevertex cover problem. Since deterministic Turing machines are special non-deterministic Turing machines, it is easily observed that each problem in P is also member of the class NP.
The question of whether P equals NP is one of the most important open questions in theoretical computer science because of the wide implications of a solution.[3]If the answer is yes, many important problems can be shown to have more efficient solutions. These include various types ofinteger programmingproblems inoperations research, many problems inlogistics,protein structure predictioninbiology,[5]and the ability to find formal proofs ofpure mathematicstheorems.[6]The P versus NP problem is one of theMillennium Prize Problemsproposed by theClay Mathematics Institute. There is a US$1,000,000 prize for resolving the problem.[7]
It was shown by Ladner that ifP≠NP{\displaystyle {\textsf {P}}\neq {\textsf {NP}}}then there exist problems inNP{\displaystyle {\textsf {NP}}}that are neither inP{\displaystyle {\textsf {P}}}norNP{\displaystyle {\textsf {NP}}}-complete.[4]Such problems are calledNP-intermediateproblems. Thegraph isomorphism problem, thediscrete logarithm problemand theinteger factorization problemare examples of problems believed to be NP-intermediate. They are some of the very few NP problems not known to be inP{\displaystyle {\textsf {P}}}or to beNP{\displaystyle {\textsf {NP}}}-complete.
The graph isomorphism problem is the computational problem of determining whether two finite graphs are isomorphic. An important unsolved problem in complexity theory is whether the graph isomorphism problem is in P, NP-complete, or NP-intermediate. The answer is not known, but it is believed that the problem is at least not NP-complete.[8] If graph isomorphism is NP-complete, the polynomial time hierarchy collapses to its second level.[9] Since it is widely believed that the polynomial hierarchy does not collapse to any finite level, it is believed that graph isomorphism is not NP-complete. The best algorithm for this problem, due to László Babai and Eugene Luks, has run time O(2^√(n log n)) for graphs with n vertices, although some recent work by Babai offers some potentially new perspectives on this.[10]
The integer factorization problem is the computational problem of determining the prime factorization of a given integer. Phrased as a decision problem, it is the problem of deciding whether the input has a prime factor less than k. No efficient integer factorization algorithm is known, and this fact forms the basis of several modern cryptographic systems, such as the RSA algorithm. The integer factorization problem is in NP and in co-NP (and even in UP and co-UP[11]). If the problem is NP-complete, the polynomial time hierarchy will collapse to its first level (i.e., NP will equal co-NP). The best known algorithm for integer factorization is the general number field sieve, which takes time O(e^((64/9)^(1/3)·(log n)^(1/3)·(log log n)^(2/3)))[12] to factor an odd integer n. However, the best known quantum algorithm for this problem, Shor's algorithm, does run in polynomial time. Unfortunately, this fact doesn't say much about where the problem lies with respect to non-quantum complexity classes.
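A minimal sketch of the decision version stated above, using naive trial division (exponential in the number of digits of n, so purely an illustration of the problem statement; assumes n > 1):

```python
def has_prime_factor_below(n: int, k: int) -> bool:
    """Decide whether n (n > 1) has a prime factor strictly less than k."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d < k   # the first divisor found is the smallest prime factor
        d += 1
    return n < k           # no small divisor: n is prime and is its own prime factor

print(has_prime_factor_below(91, 10))  # True:  91 = 7 * 13 and 7 < 10
print(has_prime_factor_below(91, 7))   # False: the smallest prime factor is 7
print(has_prime_factor_below(7, 10))   # True:  7 is prime and 7 < 10
```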
Many known complexity classes are suspected to be unequal, but this has not been proved. For instanceP⊆NP⊆PP⊆PSPACE{\displaystyle {\textsf {P}}\subseteq {\textsf {NP}}\subseteq {\textsf {PP}}\subseteq {\textsf {PSPACE}}}, but it is possible thatP=PSPACE{\displaystyle {\textsf {P}}={\textsf {PSPACE}}}. IfP{\displaystyle {\textsf {P}}}is not equal toNP{\displaystyle {\textsf {NP}}}, thenP{\displaystyle {\textsf {P}}}is not equal toPSPACE{\displaystyle {\textsf {PSPACE}}}either. Since there are many known complexity classes betweenP{\displaystyle {\textsf {P}}}andPSPACE{\displaystyle {\textsf {PSPACE}}}, such asRP{\displaystyle {\textsf {RP}}},BPP{\displaystyle {\textsf {BPP}}},PP{\displaystyle {\textsf {PP}}},BQP{\displaystyle {\textsf {BQP}}},MA{\displaystyle {\textsf {MA}}},PH{\displaystyle {\textsf {PH}}}, etc., it is possible that all these complexity classes collapse to one class. Proving that any of these classes are unequal would be a major breakthrough in complexity theory.
Along the same lines,co-NP{\displaystyle {\textsf {co-NP}}}is the class containing thecomplementproblems (i.e. problems with theyes/noanswers reversed) ofNP{\displaystyle {\textsf {NP}}}problems. It is believed[13]thatNP{\displaystyle {\textsf {NP}}}is not equal toco-NP{\displaystyle {\textsf {co-NP}}}; however, it has not yet been proven. It is clear that if these two complexity classes are not equal thenP{\displaystyle {\textsf {P}}}is not equal toNP{\displaystyle {\textsf {NP}}}, sinceP=co-P{\displaystyle {\textsf {P}}={\textsf {co-P}}}. Thus ifP=NP{\displaystyle P=NP}we would haveco-P=co-NP{\displaystyle {\textsf {co-P}}={\textsf {co-NP}}}whenceNP=P=co-P=co-NP{\displaystyle {\textsf {NP}}={\textsf {P}}={\textsf {co-P}}={\textsf {co-NP}}}.
Similarly, it is not known ifL{\displaystyle {\textsf {L}}}(the set of all problems that can be solved in logarithmic space) is strictly contained inP{\displaystyle {\textsf {P}}}or equal toP{\displaystyle {\textsf {P}}}. Again, there are many complexity classes between the two, such asNL{\displaystyle {\textsf {NL}}}andNC{\displaystyle {\textsf {NC}}}, and it is not known if they are distinct or equal classes.
It is suspected thatP{\displaystyle {\textsf {P}}}andBPP{\displaystyle {\textsf {BPP}}}are equal. However, it is currently open ifBPP=NEXP{\displaystyle {\textsf {BPP}}={\textsf {NEXP}}}.
A problem that can theoretically be solved, but requires impractically large resources (e.g., time) to do so, is known as an intractable problem.[14] Conversely, a problem that can be solved in practice is called a tractable problem, literally "a problem that can be handled". The term infeasible (literally "cannot be done") is sometimes used interchangeably with intractable,[15] though this risks confusion with a feasible solution in mathematical optimization.[16]
Tractable problems are frequently identified with problems that have polynomial-time solutions (P{\displaystyle {\textsf {P}}},PTIME{\displaystyle {\textsf {PTIME}}}); this is known as theCobham–Edmonds thesis. Problems that are known to be intractable in this sense include those that areEXPTIME-hard. IfNP{\displaystyle {\textsf {NP}}}is not the same asP{\displaystyle {\textsf {P}}}, thenNP-hardproblems are also intractable in this sense.
However, this identification is inexact: a polynomial-time solution with large degree or large leading coefficient grows quickly and may be impractical even for inputs of moderate size; conversely, an exponential-time solution that grows slowly may be practical on realistic input, and a solution that takes a long time in the worst case may still take a short time in most cases or in the average case, and thus remain practical. Saying that a problem is not in P does not imply that all large cases of the problem are hard or even that most of them are. For example, the decision problem in Presburger arithmetic has been shown not to be in P, yet algorithms have been written that solve the problem in reasonable times in most cases. Similarly, algorithms can solve the NP-complete knapsack problem over a wide range of sizes in less than quadratic time, and SAT solvers routinely handle large instances of the NP-complete Boolean satisfiability problem.
To see why exponential-time algorithms are generally unusable in practice, consider a program that makes 2^n operations before halting. For small n, say 100, and assuming for the sake of example that the computer does 10^12 operations each second, the program would run for about 4×10^10 years, which is the same order of magnitude as the age of the universe. Even with a much faster computer, the program would only be useful for very small instances and in that sense the intractability of a problem is somewhat independent of technological progress. However, an exponential-time algorithm that takes 1.0001^n operations is practical until n gets relatively large.
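A quick back-of-the-envelope check of these figures, using the operation count and machine speed assumed above:

```python
operations = 2 ** 100                  # total work of the exponential algorithm
ops_per_second = 10 ** 12              # assumed machine speed
seconds_per_year = 60 * 60 * 24 * 365

years = operations / (ops_per_second * seconds_per_year)
print(f"{years:.1e} years")            # ~4.0e+10 years, the figure quoted above

# By contrast, 1.0001**n stays manageable for fairly large n:
print(1.0001 ** 100_000 / ops_per_second)  # ~2e-08 seconds at the same machine speed
```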
Similarly, a polynomial time algorithm is not always practical. If its running time is, say, n^15, it is unreasonable to consider it efficient and it is still useless except on small instances. Indeed, in practice even n^3 or n^2 algorithms are often impractical on realistic sizes of problems.
Continuous complexity theory can refer to complexity theory of problems that involve continuous functions that are approximated by discretizations, as studied innumerical analysis. One approach to complexity theory of numerical analysis[17]isinformation based complexity.
Continuous complexity theory can also refer to complexity theory of the use ofanalog computation, which uses continuousdynamical systemsanddifferential equations.[18]Control theorycan be considered a form of computation and differential equations are used in the modelling of continuous-time and hybrid discrete-continuous-time systems.[19]
An early example of algorithm complexity analysis is the running time analysis of theEuclidean algorithmdone byGabriel Laméin 1844.
Before research explicitly devoted to the complexity of algorithmic problems began, numerous foundations were laid by various researchers. Most influential among these was the definition of Turing machines by Alan Turing in 1936, which turned out to be a very robust and flexible simplification of a computer.
The beginning of systematic studies in computational complexity is attributed to the seminal 1965 paper "On the Computational Complexity of Algorithms" byJuris HartmanisandRichard E. Stearns, which laid out the definitions oftime complexityandspace complexity, and proved the hierarchy theorems.[20]In addition, in 1965Edmondssuggested to consider a "good" algorithm to be one with running time bounded by a polynomial of the input size.[21]
Earlier papers studying problems solvable by Turing machines with specific bounded resources include[20]John Myhill's definition oflinear bounded automata(Myhill 1960),Raymond Smullyan's study of rudimentary sets (1961), as well asHisao Yamada's paper[22]on real-time computations (1962). Somewhat earlier,Boris Trakhtenbrot(1956), a pioneer in the field from the USSR, studied another specific complexity measure.[23]As he remembers:
However, [my] initial interest [in automata theory] was increasingly set aside in favor of computational complexity, an exciting fusion of combinatorial methods, inherited fromswitching theory, with the conceptual arsenal of the theory of algorithms. These ideas had occurred to me earlier in 1955 when I coined the term "signalizing function", which is nowadays commonly known as "complexity measure".[24]
In 1967,Manuel Blumformulated a set ofaxioms(now known asBlum axioms) specifying desirable properties of complexity measures on the set of computable functions and proved an important result, the so-calledspeed-up theorem. The field began to flourish in 1971 whenStephen CookandLeonid Levinprovedthe existence of practically relevant problems that areNP-complete. In 1972,Richard Karptook this idea a leap forward with his landmark paper, "Reducibility Among Combinatorial Problems", in which he showed that 21 diversecombinatorialandgraph theoreticalproblems, each infamous for its computational intractability, are NP-complete.[25]
Bluetoothis a short-rangewirelesstechnology standard that is used for exchanging data between fixed and mobile devices over short distances and buildingpersonal area networks(PANs). In the most widely used mode, transmission power is limited to 2.5milliwatts, giving it a very short range of up to 10 metres (33 ft). It employsUHFradio wavesin theISM bands, from 2.402GHzto 2.48GHz.[3]It is mainly used as an alternative to wired connections to exchange files between nearby portable devices and connectcell phonesand music players withwireless headphones,wireless speakers,HIFIsystems,car audioand wireless transmission betweenTVsandsoundbars.
Bluetooth is managed by theBluetooth Special Interest Group(SIG), which has more than 35,000 member companies in the areas of telecommunication, computing, networking, and consumer electronics. TheIEEEstandardized Bluetooth asIEEE 802.15.1but no longer maintains the standard. The Bluetooth SIG oversees the development of the specification, manages the qualification program, and protects the trademarks.[4]A manufacturer must meetBluetooth SIG standardsto market it as a Bluetooth device.[5]A network ofpatentsapplies to the technology, which is licensed to individual qualifying devices. As of 2021[update], 4.7 billion Bluetoothintegrated circuitchips are shipped annually.[6]Bluetooth was first demonstrated in space in 2024, an early test envisioned to enhanceIoTcapabilities.[7]
The name "Bluetooth" was proposed in 1997 by Jim Kardach ofIntel, one of the founders of the Bluetooth SIG. The name was inspired by a conversation with Sven Mattisson who related Scandinavian history through tales fromFrans G. Bengtsson'sThe Long Ships, a historical novel about Vikings and the 10th-century Danish kingHarald Bluetooth. Upon discovering a picture of therunestone of Harald Bluetooth[8]in the bookA History of the VikingsbyGwyn Jones, Kardach proposed Bluetooth as the codename for the short-range wireless program which is now called Bluetooth.[9][10][11]
According to Bluetooth's official website,
Bluetooth was only intended as a placeholder until marketing could come up with something really cool.
Later, when it came time to select a serious name, Bluetooth was to be replaced with either RadioWire or PAN (Personal Area Networking). PAN was the front runner, but an exhaustive search discovered it already had tens of thousands of hits throughout the internet.
A full trademark search on RadioWire couldn't be completed in time for launch, making Bluetooth the only choice. The name caught on fast and before it could be changed, it spread throughout the industry, becoming synonymous with short-range wireless technology.[12]
Bluetooth is the Anglicised version of the Scandinavian Blåtand/Blåtann (or in Old Norse blátǫnn). It was the epithet of King Harald Bluetooth, who united the disparate Danish tribes into a single kingdom; Kardach chose the name to imply that Bluetooth similarly unites communication protocols.[13]
The Bluetooth logo is a bind rune merging the Younger Futhark runes ᚼ (Hagall) and ᛒ (Bjarkan), Harald's initials.[14][15]
The development of the "short-link" radio technology, later named Bluetooth, was initiated in 1989 by Nils Rydbeck, CTO atEricsson MobileinLund, Sweden. The purpose was to develop wireless headsets, according to two inventions byJohan Ullman,SE 8902098-6, issued 12 June 1989andSE 9202239, issued 24 July 1992. Nils Rydbeck taskedTord Wingrenwith specifying and DutchmanJaap Haartsenand Sven Mattisson with developing.[16]Both were working for Ericsson in Lund.[17]Principal design and development began in 1994 and by 1997 the team had a workable solution.[18]From 1997 Örjan Johansson became the project leader and propelled the technology and standardization.[19][20][21][22]
In 1997, Adalio Sanchez, then head ofIBMThinkPadproduct R&D, approached Nils Rydbeck about collaborating on integrating amobile phoneinto a ThinkPad notebook. The two assigned engineers fromEricssonandIBMstudied the idea. The conclusion was that power consumption on cellphone technology at that time was too high to allow viable integration into a notebook and still achieve adequate battery life. Instead, the two companies agreed to integrate Ericsson's short-link technology on both a ThinkPad notebook and an Ericsson phone to accomplish the goal.
Since neither IBM ThinkPad notebooks nor Ericsson phones were the market share leaders in their respective markets at that time, Adalio Sanchez and Nils Rydbeck agreed to make the short-link technology an open industry standard to permit each player maximum market access. Ericsson contributed the short-link radio technology, and IBM contributed patents around the logical layer. Adalio Sanchez of IBM then recruited Stephen Nachtsheim of Intel to join and then Intel also recruitedToshibaandNokia. In May 1998, the Bluetooth SIG was launched with IBM and Ericsson as the founding signatories and a total of five members: Ericsson, Intel, Nokia, Toshiba, and IBM.
The first Bluetooth device was revealed in 1999. It was a hands-free mobile headset that earned the "Best of Show Technology Award" at COMDEX. The first Bluetooth mobile phone was the unreleased prototype Ericsson T36, though it was the revised Ericsson model T39 that actually made it to store shelves in June 2001. However, Ericsson had already released the R520m in the first quarter of 2001,[23] making the R520m the first commercially available Bluetooth phone. In parallel, IBM introduced the IBM ThinkPad A30 in October 2001, which was the first notebook with integrated Bluetooth.
Bluetooth's early incorporation into consumer electronics products continued at Vosi Technologies in Costa Mesa, California, initially overseen by founding members Bejan Amini and Tom Davidson. Vosi Technologies had been created by real estate developer Ivano Stegmenga, with United States Patent 608507, for communication between a cellular phone and a vehicle's audio system. At the time, Sony/Ericsson had only a minor market share in the cellular phone market, which was dominated in the US by Nokia and Motorola. Due to ongoing negotiations for an intended licensing agreement with Motorola beginning in the late 1990s, Vosi could not publicly disclose the intention, integration, and initial development of other enabled devices which were to be the first "Smart Home" internet connected devices.
Vosi needed a means for the system to communicate without a wired connection from the vehicle to the other devices in the network. Bluetooth was chosen, sinceWi-Fiwas not yet readily available or supported in the public market. Vosi had begun to develop the Vosi Cello integrated vehicular system and some other internet connected devices, one of which was intended to be a table-top device named the Vosi Symphony, networked with Bluetooth. Through the negotiations withMotorola, Vosi introduced and disclosed its intent to integrate Bluetooth in its devices. In the early 2000s a legal battle[24]ensued between Vosi and Motorola, which indefinitely suspended release of the devices. Later, Motorola implemented it in their devices, which initiated the significant propagation of Bluetooth in the public market due to its large market share at the time.
In 2012, Jaap Haartsen was nominated by theEuropean Patent Officefor theEuropean Inventor Award.[18]
Bluetooth operates at frequencies between 2.402 and 2.480GHz, or 2.400 and 2.4835GHz, includingguard bands2MHz wide at the bottom end and 3.5MHz wide at the top.[25]This is in the globally unlicensed (but not unregulated) industrial, scientific and medical (ISM) 2.4GHz short-range radio frequency band. Bluetooth uses a radio technology calledfrequency-hopping spread spectrum. Bluetooth divides transmitted data into packets, and transmits each packet on one of 79 designated Bluetooth channels. Each channel has a bandwidth of 1MHz. It usually performs 1600hops per second, withadaptive frequency-hopping(AFH) enabled.[25]Bluetooth Low Energyuses 2MHz spacing, which accommodates 40 channels.[26]
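A minimal sketch of the channel-to-frequency mapping implied above: 79 Classic channels of 1 MHz starting at 2402 MHz, and 40 Bluetooth Low Energy channels at 2 MHz spacing. The actual hop selection follows a pseudo-random sequence derived from the connection (not modelled here); this only enumerates the channel grid.

```python
# Channel index -> centre frequency in MHz
CLASSIC_CHANNELS = [2402 + k for k in range(79)]      # 2402 .. 2480 MHz, 1 MHz apart
BLE_CHANNELS = [2402 + 2 * k for k in range(40)]      # 2402 .. 2480 MHz, 2 MHz apart

print(len(CLASSIC_CHANNELS), CLASSIC_CHANNELS[0], CLASSIC_CHANNELS[-1])  # 79 2402 2480
print(len(BLE_CHANNELS), BLE_CHANNELS[0], BLE_CHANNELS[-1])              # 40 2402 2480
```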
Originally, Gaussian frequency-shift keying (GFSK) modulation was the only modulation scheme available. Since the introduction of Bluetooth 2.0+EDR, π/4-DQPSK (differential quadrature phase-shift keying) and 8-DPSK modulation may also be used between compatible devices. Devices functioning with GFSK are said to be operating in basic rate (BR) mode, where an instantaneous bit rate of 1 Mbit/s is possible. The term Enhanced Data Rate (EDR) is used to describe the π/4-DQPSK (EDR2) and 8-DPSK (EDR3) schemes, transferring 2 and 3 Mbit/s respectively.
In 2019, Apple published an extension called HDR which supports data rates of 4 (HDR4) and 8 (HDR8) Mbit/s using π/4-DQPSKmodulation on 4 MHz channels with forward error correction (FEC).[27]
Bluetooth is apacket-based protocolwith amaster/slave architecture. One master may communicate with up to seven slaves in apiconet. All devices within a given piconet use the clock provided by the master as the base for packet exchange. The master clock ticks with a period of 312.5μs, two clock ticks then make up a slot of 625μs, and two slots make up a slot pair of 1250μs. In the simple case of single-slot packets, the master transmits in even slots and receives in odd slots. The slave, conversely, receives in even slots and transmits in odd slots. Packets may be 1, 3, or 5 slots long, but in all cases, the master's transmission begins in even slots and the slave's in odd slots.
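A small sketch of the slot arithmetic described above, for the simple single-slot case only (multi-slot packets and the hopping sequence are not modelled; the function name is illustrative):

```python
TICK_US = 312.5                 # master clock tick, microseconds
SLOT_US = 2 * TICK_US           # one slot = 625 µs
SLOT_PAIR_US = 2 * SLOT_US      # one slot pair = 1250 µs

def who_transmits(slot_number: int) -> str:
    """Single-slot packets: the master transmits in even slots, the slave in odd slots."""
    return "master" if slot_number % 2 == 0 else "slave"

print(SLOT_US, SLOT_PAIR_US)                  # 625.0 1250.0
print([who_transmits(s) for s in range(4)])   # ['master', 'slave', 'master', 'slave']
```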
The above excludes Bluetooth Low Energy, introduced in the 4.0 specification,[28] which uses the same spectrum but somewhat differently.
A master BR/EDR Bluetooth device can communicate with a maximum of seven devices in a piconet (an ad hoc computer network using Bluetooth technology), though not all devices reach this maximum. The devices can switch roles, by agreement, and the slave can become the master (for example, a headset initiating a connection to a phone necessarily begins as master—as an initiator of the connection—but may subsequently operate as the slave).
The Bluetooth Core Specification provides for the connection of two or more piconets to form ascatternet, in which certain devices simultaneously play the master/leader role in one piconet and the slave role in another.
At any given time, data can be transferred between the master and one other device (except for the little-used broadcast mode). The master chooses which slave device to address; typically, it switches rapidly from one device to another in around-robinfashion. Since it is the master that chooses which slave to address, whereas a slave is (in theory) supposed to listen in each receive slot, being a master is a lighter burden than being a slave. Being a master of seven slaves is possible; being a slave of more than one master is possible. The specification is vague as to required behavior in scatternets.[29]
Bluetooth is a standard wire-replacement communications protocol primarily designed for low power consumption, with a short range based on low-costtransceivermicrochipsin each device.[30]Because the devices use a radio (broadcast) communications system, they do not have to be in visual line of sight of each other; however, aquasi opticalwireless path must be viable.[31]
Historically, the Bluetooth range was defined by the radio class, with a lower class (and higher output power) having larger range.[2]The actual range of a given link depends on several qualities of both communicating devices and theair and obstacles in between. The primary attributes affecting range are the data rate, protocol (Bluetooth Classic or Bluetooth Low Energy), transmission power, and receiver sensitivity, and the relative orientations and gains of both antennas.[32]
The effective range varies depending on propagation conditions, material coverage, production sample variations, antenna configurations and battery conditions. Most Bluetooth applications are for indoor conditions, where attenuation of walls and signal fading due to signal reflections make the range far lower than specified line-of-sight ranges of the Bluetooth products.
Most Bluetooth applications are battery-powered Class 2 devices, with little difference in range whether the other end of the link is a Class 1 or Class 2 device as the lower-powered device tends to set the range limit. In some cases the effective range of the data link can be extended when a Class 2 device is connecting to a Class 1 transceiver with both higher sensitivity and transmission power than a typical Class 2 device.[33]In general, however, Class 1 devices have sensitivities similar to those of Class 2 devices. Connecting two Class 1 devices with both high sensitivity and high power can allow ranges far in excess of the typical 100 m, depending on the throughput required by the application. Some such devices allow open field ranges of up to 1 km and beyond between two similar devices without exceeding legal emission limits.[34][35][36]
To use Bluetooth wireless technology, a device must be able to interpret certain Bluetooth profiles.
Profiles are definitions of possible applications and specify general behaviors that Bluetooth-enabled devices use to communicate with other Bluetooth devices. These profiles include settings to parameterize and to control the communication from the start. Adherence to profiles saves the time for transmitting the parameters anew before the bi-directional link becomes effective. There are a wide range of Bluetooth profiles that describe many different types of applications or use cases for devices.[37]
Bluetooth exists in numerous products such as telephones,speakers, tablets, media players, robotics systems, laptops, and game console equipment as well as some high definitionheadsets,modems,hearing aids[53]and even watches.[54]Bluetooth is useful when transferring information between two or more devices that are near each other in low-bandwidth situations. Bluetooth is commonly used to transfer sound data with telephones (i.e., with a Bluetooth headset) or byte data with hand-held computers (transferring files).
Bluetooth protocols simplify the discovery and setup of services between devices.[55]Bluetooth devices can advertise all of the services they provide.[56]This makes using services easier, because more of the security,network addressand permission configuration can be automated than with many other network types.[55]
A personal computer that does not have embedded Bluetooth can use a Bluetooth adapter that enables the PC to communicate with Bluetooth devices. While somedesktop computersand most recent laptops come with a built-in Bluetooth radio, others require an external adapter, typically in the form of a small USB "dongle".
Unlike its predecessor,IrDA, which requires a separate adapter for each device, Bluetooth lets multiple devices communicate with a computer over a single adapter.[57]
ForMicrosoftplatforms,Windows XP Service Pack 2and SP3 releases work natively with Bluetooth v1.1, v2.0 and v2.0+EDR.[58]Previous versions required users to install their Bluetooth adapter's own drivers, which were not directly supported by Microsoft.[59]Microsoft's own Bluetooth dongles (packaged with their Bluetooth computer devices) have no external drivers and thus require at least Windows XP Service Pack 2. Windows Vista RTM/SP1 with the Feature Pack for Wireless or Windows Vista SP2 work with Bluetooth v2.1+EDR.[58]Windows 7 works with Bluetooth v2.1+EDR and Extended Inquiry Response (EIR).[58]The Windows XP and Windows Vista/Windows 7 Bluetooth stacks support the following Bluetooth profiles natively: PAN, SPP,DUN, HID, HCRP. The Windows XP stack can be replaced by a third party stack that supports more profiles or newer Bluetooth versions. The Windows Vista/Windows 7 Bluetooth stack supports vendor-supplied additional profiles without requiring that the Microsoft stack be replaced.[58]Windows 8 and later support Bluetooth Low Energy (BLE). It is generally recommended to install the latest vendor driver and its associated stack to be able to use the Bluetooth device at its fullest extent.
Appleproducts have worked with Bluetooth sinceMac OSX v10.2, which was released in 2002.[60]
Linux has two popular Bluetooth stacks, BlueZ and Fluoride. The BlueZ stack is included with most Linux kernels and was originally developed by Qualcomm.[61] Fluoride, earlier known as Bluedroid, is included in Android OS and was originally developed by Broadcom.[62] There is also the Affix stack, developed by Nokia. It was once popular, but has not been updated since 2005.[63]
FreeBSDhas included Bluetooth since its v5.0 release, implemented throughnetgraph.[64][65]
NetBSDhas included Bluetooth since its v4.0 release.[66][67]Its Bluetooth stack was ported toOpenBSDas well, however OpenBSD later removed it as unmaintained.[68][69]
DragonFly BSDhas had NetBSD's Bluetooth implementation since 1.11 (2008).[70][71]Anetgraph-based implementation fromFreeBSDhas also been available in the tree, possibly disabled until 2014-11-15, and may require more work.[72][73]
The specifications were formalized by theBluetooth Special Interest Group(SIG) and formally announced on 20 May 1998.[74]In 2014 it had a membership of over 30,000 companies worldwide.[75]It was established byEricsson,IBM,Intel,NokiaandToshiba, and later joined by many other companies.
All versions of the Bluetooth standards arebackward-compatiblewith all earlier versions.[76]
The Bluetooth Core Specification Working Group (CSWG) produces mainly four kinds of specifications:
Major enhancements include:
This version of the Bluetooth Core Specification was released before 2005. The main difference is the introduction of an Enhanced Data Rate (EDR) for fasterdata transfer. The data rate of EDR is 3Mbit/s, although the maximum data transfer rate (allowing for inter-packet time and acknowledgements) is 2.1Mbit/s.[79]EDR uses a combination ofGFSKandphase-shift keyingmodulation (PSK) with two variants, π/4-DQPSKand 8-DPSK.[81]EDR can provide a lower power consumption through a reducedduty cycle.
The specification is published asBluetooth v2.0 + EDR, which implies that EDR is an optional feature. Aside from EDR, the v2.0 specification contains other minor improvements, and products may claim compliance to "Bluetooth v2.0" without supporting the higher data rate. At least one commercial device states "Bluetooth v2.0 without EDR" on its data sheet.[82]
Bluetooth Core Specification version 2.1 + EDR was adopted by the Bluetooth SIG on 26 July 2007.[81]
The headline feature of v2.1 issecure simple pairing(SSP): this improves the pairing experience for Bluetooth devices, while increasing the use and strength of security.[83]
Version 2.1 allows various other improvements, includingextended inquiry response(EIR), which provides more information during the inquiry procedure to allow better filtering of devices before connection; and sniff subrating, which reduces the power consumption in low-power mode.
Version 3.0 + HS of the Bluetooth Core Specification[81]was adopted by the Bluetooth SIG on 21 April 2009. Bluetooth v3.0 + HS provides theoretical data transfer speeds of up to 24 Mbit/s, though not over the Bluetooth link itself. Instead, the Bluetooth link is used for negotiation and establishment, and the high data rate traffic is carried over a colocated802.11link.
The main new feature isAMP(Alternative MAC/PHY), the addition of802.11as a high-speed transport. The high-speed part of the specification is not mandatory, and hence only devices that display the "+HS" logo actually support Bluetooth over 802.11 high-speed data transfer. A Bluetooth v3.0 device without the "+HS" suffix is only required to support features introduced in Core Specification version 3.0[84]or earlier Core Specification Addendum 1.[85]
The high-speed (AMP) feature of Bluetooth v3.0 was originally intended forUWB, but the WiMedia Alliance, the body responsible for the flavor of UWB intended for Bluetooth, announced in March 2009 that it was disbanding, and ultimately UWB was omitted from the Core v3.0 specification.[86]
On 16 March 2009, theWiMedia Allianceannounced it was entering into technology transfer agreements for the WiMediaUltra-wideband(UWB) specifications. WiMedia has transferred all current and future specifications, including work on future high-speed and power-optimized implementations, to the Bluetooth Special Interest Group (SIG),Wireless USBPromoter Group and theUSB Implementers Forum. After successful completion of the technology transfer, marketing, and related administrative items, the WiMedia Alliance ceased operations.[87][88][89][90][91]
In October 2009, theBluetooth Special Interest Groupsuspended development of UWB as part of the alternative MAC/PHY, Bluetooth v3.0 + HS solution. A small, but significant, number of formerWiMediamembers had not and would not sign up to the necessary agreements for theIPtransfer. As of 2009, the Bluetooth SIG was in the process of evaluating other options for its longer-term roadmap.[92][93][94]
The Bluetooth SIG completed the Bluetooth Core Specification version 4.0 (called Bluetooth Smart) and has been adopted as of 30 June 2010[update]. It includesClassic Bluetooth,Bluetooth high speedandBluetooth Low Energy(BLE) protocols. Bluetooth high speed is based on Wi-Fi, and Classic Bluetooth consists of legacy Bluetooth protocols.
Bluetooth Low Energy, previously known as Wibree,[95] is a subset of Bluetooth v4.0 with an entirely new protocol stack for rapid build-up of simple links. As an alternative to the Bluetooth standard protocols that were introduced in Bluetooth v1.0 to v3.0, it is aimed at very low power applications powered by a coin cell. Chip designs allow for dual-mode, single-mode, and enhanced versions of earlier implementations.[96] The provisional names Wibree and Bluetooth ULP (Ultra Low Power) were abandoned and the BLE name was used for a while. In late 2011, new logos "Bluetooth Smart Ready" for hosts and "Bluetooth Smart" for sensors were introduced as the general-public face of BLE.[97]
Compared toClassic Bluetooth, Bluetooth Low Energy is intended to provide considerably reduced power consumption and cost while maintaining asimilar communication range. In terms of lengthening the battery life of Bluetooth devices,BLErepresents a significant progression.
Cost-reduced single-mode chips, which enable highly integrated and compact devices, feature a lightweight Link Layer providing ultra-low power idle mode operation, simple device discovery, and reliable point-to-multipoint data transfer with advanced power-save and secure encrypted connections at the lowest possible cost.
General improvements in version 4.0 include the changes necessary to facilitate BLE modes, as well the Generic Attribute Profile (GATT) and Security Manager (SM) services withAESEncryption.
Core Specification Addendum 2 was unveiled in December 2011; it contains improvements to the audio Host Controller Interface and to the High Speed (802.11) Protocol Adaptation Layer.
Core Specification Addendum 3 revision 2 has an adoption date of 24 July 2012.
Core Specification Addendum 4 has an adoption date of 12 February 2013.
The Bluetooth SIG announced formal adoption of the Bluetooth v4.1 specification on 4 December 2013. This specification is an incremental software update to Bluetooth Specification v4.0, and not a hardware update. The update incorporates Bluetooth Core Specification Addenda (CSA 1, 2, 3 & 4) and adds new features that improve consumer usability. These include increased co-existence support for LTE, bulk data exchange rates—and aid developer innovation by allowing devices to support multiple roles simultaneously.[106]
New features of this specification include:
Some features were already available in a Core Specification Addendum (CSA) before the release of v4.1.
Released on 2 December 2014,[108]it introduces features for theInternet of things.
The major areas of improvement are:
Older Bluetooth hardware may receive 4.2 features such as Data Packet Length Extension and improved privacy via firmware updates.[109][110]
The Bluetooth SIG released Bluetooth 5 on 6 December 2016.[111] Its new features are mainly focused on new Internet of Things technology. Sony was the first to announce Bluetooth 5.0 support with its Xperia XZ Premium in February 2017 during the Mobile World Congress 2017.[112] The Samsung Galaxy S8 launched with Bluetooth 5 support in April 2017. In September 2017, the iPhone 8, 8 Plus and iPhone X launched with Bluetooth 5 support as well. Apple also integrated Bluetooth 5 in its new HomePod offering released on 9 February 2018.[113] Marketing drops the point number, so that it is just "Bluetooth 5" (unlike Bluetooth 4.0);[114] the change was made for the sake of "Simplifying our marketing, communicating user benefits more effectively and making it easier to signal significant technology updates to the market."
Bluetooth 5 provides, for BLE, options that can double the data rate (2 Mbit/s burst) at the expense of range, or provide up to four times the range at the expense of data rate. The increase in transmissions could be important for Internet of Things devices, where many nodes connect throughout a whole house. Bluetooth 5 also increases the capacity of connectionless services, such as location-relevant navigation,[115] for low-energy Bluetooth connections.[116][117][118]
The major areas of improvement are:
Features added in CSA5 – integrated in v5.0:
The following features were removed in this version of the specification:
The Bluetooth SIG presented Bluetooth 5.1 on 21 January 2019.[120]
The major areas of improvement are:
Features added in Core Specification Addendum (CSA) 6 – integrated in v5.1:
The following features were removed in this version of the specification:
On 31 December 2019, the Bluetooth SIG published the Bluetooth Core Specification version 5.2. The new specification adds new features:[121]
The Bluetooth SIG published the Bluetooth Core Specification version 5.3 on 13 July 2021. The feature enhancements of Bluetooth 5.3 are:[128]
The following features were removed in this version of the specification:
The Bluetooth SIG released the Bluetooth Core Specification version 5.4 on 7 February 2023. This new version adds the following features:[129]
The Bluetooth SIG released the Bluetooth Core Specification version 6.0 on 27 August 2024.[130]This version adds the following features:[131]
The Bluetooth SIG released the Bluetooth Core Specification version 6.1 on 7 May 2025.[132]
To extend compatibility between Bluetooth devices, devices that adhere to the standard use an interface called the Host Controller Interface (HCI) between the host and the controller.
High-level protocols such as the SDP (Protocol used to find other Bluetooth devices within the communication range, also responsible for detecting the function of devices in range), RFCOMM (Protocol used to emulate serial port connections) and TCS (Telephony control protocol) interact with the baseband controller through the L2CAP (Logical Link Control and Adaptation Protocol). The L2CAP protocol is responsible for the segmentation and reassembly of the packets.
Logically, the hardware that makes up a Bluetooth device consists of two parts, which may or may not be physically separate: a radio device, responsible for modulating and transmitting the signal, and a digital controller. The digital controller is likely a CPU, one of whose functions is to run a Link Controller and to interface with the host device, though some functions may be delegated to hardware. The Link Controller is responsible for the processing of the baseband and the management of ARQ and physical layer FEC protocols. In addition, it handles the transfer functions (both asynchronous and synchronous), audio coding (e.g. SBC (codec)) and data encryption. The CPU of the device is responsible for handling the Bluetooth-related instructions of the host device, in order to simplify its operation. To do this, the CPU runs software called the Link Manager that has the function of communicating with other devices through the LMP protocol.
A Bluetooth device is ashort-rangewirelessdevice. Bluetooth devices arefabricatedonRF CMOSintegrated circuit(RF circuit) chips.[133][134]
Bluetooth is defined as a layer protocol architecture consisting of core protocols, cable replacement protocols, telephony control protocols, and adopted protocols.[135]Mandatory protocols for all Bluetooth stacks are LMP, L2CAP and SDP. In addition, devices that communicate with Bluetooth almost universally can use these protocols:HCIand RFCOMM.[citation needed]
The Link Manager (LM) is the system that manages establishing the connection between devices. It is responsible for the establishment, authentication and configuration of the link. The Link Manager locates other managers and communicates with them via the management protocol of the LMP link. To perform its function as a service provider, the LM uses the services included in the Link Controller (LC).
The Link Manager Protocol basically consists of several PDUs (Protocol Data Units) that are sent from one device to another. The following is a list of supported services:
The Host Controller Interface provides a command interface between the controller and the host.
TheLogical Link Control and Adaptation Protocol(L2CAP) is used to multiplex multiple logical connections between two devices using different higher level protocols.
It provides segmentation and reassembly of on-air packets.
InBasicmode, L2CAP provides packets with a payload configurable up to 64 kB, with 672 bytes as the defaultMTU, and 48 bytes as the minimum mandatory supported MTU.
InRetransmission and Flow Controlmodes, L2CAP can be configured either for isochronous data or reliable data per channel by performing retransmissions and CRC checks.
Bluetooth Core Specification Addendum 1 adds two additional L2CAP modes to the core specification. These modes effectively deprecate original Retransmission and Flow Control modes:
Reliability in any of these modes is optionally and/or additionally guaranteed by the lower layer Bluetooth BR/EDR air interface by configuring the number of retransmissions and the flush timeout (the time after which the radio flushes packets). In-order sequencing is guaranteed by the lower layer.
Only L2CAP channels configured in ERTM or SM may be operated over AMP logical links.
TheService Discovery Protocol(SDP) allows a device to discover services offered by other devices, and their associated parameters. For example, when you use a mobile phone with a Bluetooth headset, the phone uses SDP to determine whichBluetooth profilesthe headset can use (Headset Profile, Hands Free Profile (HFP),Advanced Audio Distribution Profile (A2DP)etc.) and the protocol multiplexer settings needed for the phone to connect to the headset using each of them. Each service is identified by aUniversally unique identifier(UUID), with official services (Bluetooth profiles) assigned a short form UUID (16 bits rather than the full 128).
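A minimal sketch of how a short-form 16-bit UUID expands against the Bluetooth Base UUID into the full 128-bit form; the base UUID suffix is the standard one, while the example 16-bit value and function name are illustrative only.

```python
BASE_UUID_SUFFIX = "0000-1000-8000-00805F9B34FB"  # Bluetooth Base UUID tail

def expand_uuid16(uuid16: int) -> str:
    """Expand a 16-bit assigned UUID into its 128-bit string form."""
    return f"0000{uuid16:04X}-{BASE_UUID_SUFFIX}"

print(expand_uuid16(0x110B))  # 0000110B-0000-1000-8000-00805F9B34FB
```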
Radio Frequency Communications(RFCOMM) is a cable replacement protocol used for generating a virtual serial data stream. RFCOMM provides for binary data transport and emulatesEIA-232(formerly RS-232) control signals over the Bluetooth baseband layer, i.e., it is a serial port emulation.
RFCOMM provides a simple, reliable, data stream to the user, similar to TCP. It is used directly by many telephony related profiles as a carrier for AT commands, as well as being a transport layer for OBEX over Bluetooth.
Many Bluetooth applications use RFCOMM because of its widespread support and publicly available API on most operating systems. Additionally, applications that used a serial port to communicate can be quickly ported to use RFCOMM.
TheBluetooth Network Encapsulation Protocol(BNEP) is used for transferring another protocol stack's data via an L2CAP channel.
Its main purpose is the transmission of IP packets in the Personal Area Networking Profile.
BNEP performs a similar function toSNAPin Wireless LAN.
TheAudio/Video Control Transport Protocol(AVCTP) is used by the remote control profile to transfer AV/C commands over an L2CAP channel. The music control buttons on a stereo headset use this protocol to control the music player.
TheAudio/Video Distribution Transport Protocol(AVDTP) is used by the advanced audio distribution (A2DP) profile to stream music to stereo headsets over anL2CAPchannel. It is also intended for use by the video distribution profile in Bluetooth transmission.
TheTelephony Control Protocol– Binary(TCS BIN) is the bit-oriented protocol that defines the call control signaling for the establishment of voice and data calls between Bluetooth devices. Additionally, "TCS BIN defines mobility management procedures for handling groups of Bluetooth TCS devices."
TCS-BIN is only used by the cordless telephony profile, which failed to attract implementers. As such it is only of historical interest.
Adopted protocols are defined by other standards-making organizations and incorporated into Bluetooth's protocol stack, allowing Bluetooth to define its own protocols only when necessary. The adopted protocols include the Point-to-Point Protocol (PPP), TCP/UDP/IP, the Object Exchange Protocol (OBEX), and the Wireless Application Environment/Wireless Application Protocol (WAE/WAP).
Depending on packet type, individual packets may be protected byerror correction, either 1/3 rateforward error correction(FEC) or 2/3 rate. In addition, packets with CRC will be retransmitted until acknowledged byautomatic repeat request(ARQ).
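The 1/3-rate code is commonly described as a simple three-fold bit repetition decoded by majority vote. The following Python sketch illustrates that idea on a few bits, flipping one received bit and still recovering the original data; it is a toy model, not the full coding scheme.

```python
def fec_1_3_encode(bits):
    """1/3-rate FEC sketch: repeat each bit three times."""
    return [b for bit in bits for b in (bit, bit, bit)]

def fec_1_3_decode(coded):
    """Majority-vote each group of three received bits."""
    return [1 if sum(coded[i:i + 3]) >= 2 else 0
            for i in range(0, len(coded), 3)]

header = [1, 0, 1, 1]
received = fec_1_3_encode(header)
received[4] ^= 1                       # flip one bit to simulate channel noise
assert fec_1_3_decode(received) == header
```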
Any Bluetooth device indiscoverable modetransmits the following information on demand: device name, device class, list of services, and technical information (for example, device features, manufacturer, Bluetooth specification used, and clock offset).
Any device may perform an inquiry to find other devices to connect to, and any device can be configured to respond to such inquiries. However, if the device trying to connect knows the address of the device, it always responds to direct connection requests and transmits the information shown in the list above if requested. Use of a device's services may require pairing or acceptance by its owner, but the connection itself can be initiated by any device and held until it goes out of range. Some devices can be connected to only one device at a time, and connecting to them prevents them from connecting to other devices and appearing in inquiries until they disconnect from the other device.
Every device has aunique 48-bit address. However, these addresses are generally not shown in inquiries. Instead, friendly Bluetooth names are used, which can be set by the user. This name appears when another user scans for devices and in lists of paired devices.
Most cellular phones have the Bluetooth name set to the manufacturer and model of the phone by default. Most cellular phones and laptops show only the Bluetooth names and special programs are required to get additional information about remote devices. This can be confusing as, for example, there could be several cellular phones in range namedT610(seeBluejacking).
Many services offered over Bluetooth can expose private data or let a connecting party control the Bluetooth device. Security reasons make it necessary to recognize specific devices, and thus enable control over which devices can connect to a given Bluetooth device. At the same time, it is useful for Bluetooth devices to be able to establish a connection without user intervention (for example, as soon as in range).
To resolve this conflict, Bluetooth uses a process calledbonding, and a bond is generated through a process calledpairing. The pairing process is triggered either by a specific request from a user to generate a bond (for example, the user explicitly requests to "Add a Bluetooth device"), or it is triggered automatically when connecting to a service where (for the first time) the identity of a device is required for security purposes. These two cases are referred to as dedicated bonding and general bonding respectively.
Pairing often involves some level of user interaction. This user interaction confirms the identity of the devices. When pairing completes, a bond forms between the two devices, enabling those two devices to connect in the future without repeating the pairing process to confirm device identities. When desired, the user can remove the bonding relationship.
During pairing, the two devices establish a relationship by creating ashared secretknown as alink key. If both devices store the same link key, they are said to bepairedorbonded. A device that wants to communicate only with a bonded device cancryptographicallyauthenticatethe identity of the other device, ensuring it is the same device it previously paired with. Once a link key is generated, an authenticatedACLlink between the devices may beencryptedto protect exchanged data againsteavesdropping. Users can delete link keys from either device, which removes the bond between the devices—so it is possible for one device to have a stored link key for a device it is no longer paired with.
Bluetooth services generally require either encryption or authentication and as such require pairing before they let a remote device connect. Some services, such as the Object Push Profile, elect not to explicitly require authentication or encryption so that pairing does not interfere with the user experience associated with the service use-cases.
Pairing mechanisms changed significantly with the introduction of Secure Simple Pairing (SSP) in Bluetooth v2.1: devices implementing v2.0 and earlier use legacy (PIN-based) pairing, while v2.1 and later devices use SSP, which is based on public-key cryptography.
SSP is considered simple for the following reasons:
Prior to Bluetooth v2.1, encryption was not required and could be turned off at any time. Moreover, the encryption key is only good for approximately 23.5 hours; using a single encryption key longer than this time allows simpleXOR attacksto retrieve the encryption key.
Bluetooth v2.1 addresses this by requiring encryption for all non-SDP connections and by allowing the encryption key to be refreshed periodically during long-lived connections.
Link keys may be stored on the device file system, not on the Bluetooth chip itself. Many Bluetooth chip manufacturers let link keys be stored on the device—however, if the device is removable, this means that the link key moves with the device.
Bluetooth implementsconfidentiality,authenticationandkeyderivation with custom algorithms based on theSAFER+block cipher. Bluetooth key generation is generally based on a Bluetooth PIN, which must be entered into both devices. This procedure might be modified if one of the devices has a fixed PIN (e.g., for headsets or similar devices with a restricted user interface). During pairing, an initialization key or master key is generated, using the E22 algorithm.[136]TheE0stream cipher is used for encrypting packets, granting confidentiality, and is based on a shared cryptographic secret, namely a previously generated link key or master key. Those keys, used for subsequent encryption of data sent via the air interface, rely on the Bluetooth PIN, which has been entered into one or both devices.
An overview of Bluetooth vulnerabilities exploits was published in 2007 by Andreas Becker.[137]
In September 2008, theNational Institute of Standards and Technology(NIST) published a Guide to Bluetooth Security as a reference for organizations. It describes Bluetooth security capabilities and how to secure Bluetooth technologies effectively. While Bluetooth has its benefits, it is susceptible to denial-of-service attacks, eavesdropping, man-in-the-middle attacks, message modification, and resource misappropriation. Users and organizations must evaluate their acceptable level of risk and incorporate security into the lifecycle of Bluetooth devices. To help mitigate risks, included in the NIST document are security checklists with guidelines and recommendations for creating and maintaining secure Bluetooth piconets, headsets, and smart card readers.[138]
Bluetooth v2.1 – finalized in 2007 with consumer devices first appearing in 2009 – makes significant changes to Bluetooth's security, including pairing. See thepairing mechanismssection for more about these changes.
Bluejacking is the sending of either a picture or a message from one user to an unsuspecting user through Bluetooth wireless technology. Common applications include short messages, e.g., "You've just been bluejacked!"[139]Bluejacking does not involve the removal or alteration of any data from the device.[140]
Some form ofDoSis also possible, even in modern devices, by sending unsolicited pairing requests in rapid succession; this becomes disruptive because most systems display a full screen notification for every connection request, interrupting every other activity, especially on less powerful devices.
In 2001, Jakobsson and Wetzel fromBell Laboratoriesdiscovered flaws in the Bluetooth pairing protocol and also pointed to vulnerabilities in the encryption scheme.[141]In 2003, Ben and Adam Laurie from A.L. Digital Ltd. discovered that serious flaws in some poor implementations of Bluetooth security may lead to disclosure of personal data.[142]In a subsequent experiment, Martin Herfurt from the trifinite.group was able to do a field-trial at theCeBITfairgrounds, showing the importance of the problem to the world. A new attack calledBlueBugwas used for this experiment.[143]In 2004 the first purportedvirususing Bluetooth to spread itself among mobile phones appeared on theSymbian OS.[144]The virus was first described byKaspersky Laband requires users to confirm the installation of unknown software before it can propagate. The virus was written as a proof-of-concept by a group of virus writers known as "29A" and sent to anti-virus groups. Thus, it should be regarded as a potential (but not real) security threat to Bluetooth technology orSymbian OSsince the virus has never spread outside of this system. In August 2004, a world-record-setting experiment (see alsoBluetooth sniping) showed that the range of Class 2 Bluetooth radios could be extended to 1.78 km (1.11 mi) with directional antennas and signal amplifiers.[145]This poses a potential security threat because it enables attackers to access vulnerable Bluetooth devices from a distance beyond expectation. The attacker must also be able to receive information from the victim to set up a connection. No attack can be made against a Bluetooth device unless the attacker knows its Bluetooth address and which channels to transmit on, although these can be deduced within a few minutes if the device is in use.[146]
In January 2005, a mobilemalwareworm known as Lasco surfaced. The worm began targeting mobile phones usingSymbian OS(Series 60 platform) using Bluetooth enabled devices to replicate itself and spread to other devices. The worm is self-installing and begins once the mobile user approves the transfer of the file (Velasco.sis) from another device. Once installed, the worm begins looking for other Bluetooth enabled devices to infect. Additionally, the worm infects other.SISfiles on the device, allowing replication to another device through the use of removable media (Secure Digital,CompactFlash, etc.). The worm can render the mobile device unstable.[147]
In April 2005,University of Cambridgesecurity researchers published results of their actual implementation of passive attacks against thePIN-basedpairing between commercial Bluetooth devices. They confirmed that the attacks are practical and fast, and that the Bluetooth symmetric key establishment method is vulnerable. To rectify this vulnerability, they designed an implementation that showed that stronger, asymmetric key establishment is feasible for certain classes of devices, such as mobile phones.[148]
In June 2005, Yaniv Shaked[149]and Avishai Wool[150]published a paper describing both passive and active methods for obtaining the PIN for a Bluetooth link. The passive attack allows a suitably equipped attacker to eavesdrop on communications and spoof them, provided the attacker was present at the time of initial pairing. The active method makes use of a specially constructed message that must be inserted at a specific point in the protocol, to make the master and slave repeat the pairing process. After that, the first method can be used to crack the PIN. This attack's major weakness is that it requires the user of the devices under attack to re-enter the PIN during the attack when the device prompts them to. Also, this active attack probably requires custom hardware, since most commercially available Bluetooth devices are not capable of the timing necessary.[151]
In August 2005, police inCambridgeshire, England, issued warnings about thieves using Bluetooth enabled phones to track other devices left in cars. Police are advising users to ensure that any mobile networking connections are de-activated if laptops and other devices are left in this way.[152]
In April 2006, researchers fromSecure NetworkandF-Securepublished a report that warns of the large number of devices left in a visible state, and issued statistics on the spread of various Bluetooth services and the ease of spread of an eventual Bluetooth worm.[153]
In October 2006, at the Luxembourgish Hack.lu Security Conference, Kevin Finistere and Thierry Zoller demonstrated and released a remote root shell via Bluetooth on Mac OS X v10.3.9 and v10.4. They also demonstrated the first Bluetooth PIN and link-key cracker, which is based on the research of Wool and Shaked.[154]
In April 2017, security researchers at Armis discovered multiple exploits in the Bluetooth software in various platforms, includingMicrosoft Windows,Linux, AppleiOS, and GoogleAndroid. These vulnerabilities are collectively called "BlueBorne". The exploits allow an attacker to connect to devices or systems without authentication and can give them "virtually full control over the device". Armis contacted Google, Microsoft, Apple, Samsung and Linux developers allowing them to patch their software before the coordinated announcement of the vulnerabilities on 12 September 2017.[155]
In July 2018, Lior Neumann andEli Biham, researchers at the Technion – Israel Institute of Technology, identified a security vulnerability in the latest Bluetooth pairing procedures: Secure Simple Pairing and LE Secure Connections.[156][157]
Also, in October 2018, Karim Lounis, a network security researcher at Queen's University, identified a security vulnerability, called CDV (Connection Dumping Vulnerability), on various Bluetooth devices that allows an attacker to tear down an existing Bluetooth connection and cause the deauthentication and disconnection of the involved devices. The researcher demonstrated the attack on various devices of different categories and from different manufacturers.[158]
In August 2019, security researchers at theSingapore University of Technology and Design, Helmholtz Center for Information Security, andUniversity of Oxforddiscovered a vulnerability, called KNOB (Key Negotiation of Bluetooth) in the key negotiation that would "brute force the negotiated encryption keys, decrypt the eavesdropped ciphertext, and inject valid encrypted messages (in real-time)".[159][160]Google released anAndroidsecurity patch on 5 August 2019, which removed this vulnerability.[161]
In November 2023, researchers fromEurecomrevealed a new class of attacks known as BLUFFS (Bluetooth Low Energy Forward and Future Secrecy Attacks). These 6 new attacks expand on and work in conjunction with the previously known KNOB and BIAS (Bluetooth Impersonation AttackS) attacks. While the previous KNOB and BIAS attacks allowed an attacker to decrypt and spoof Bluetooth packets within a session, BLUFFS extends this capability to all sessions generated by a device (including past, present, and future). All devices running Bluetooth versions 4.2 up to and including 5.4 are affected.[162][163]
Bluetooth uses theradio frequencyspectrum in the 2.402GHz to 2.480GHz range,[164]which isnon-ionizing radiation, of similar bandwidth to that used by wireless and mobile phones. No specific harm has been demonstrated, even though wireless transmission has been included byIARCin the possiblecarcinogenlist. Maximum power output from a Bluetooth radio is 100mWfor Class1, 2.5mW for Class2, and 1mW for Class3 devices. Even the maximum power output of Class1 is a lower level than the lowest-powered mobile phones.[165]UMTSandW-CDMAoutput 250mW,GSM1800/1900outputs 1000mW, andGSM850/900outputs 2000mW.
The Bluetooth Innovation World Cup, a marketing initiative of the Bluetooth Special Interest Group (SIG), was an international competition that encouraged the development of innovations for applications leveraging Bluetooth technology in sports, fitness and health care products. The competition aimed to stimulate new markets.[166]
The Bluetooth Innovation World Cup morphed into the Bluetooth Breakthrough Awards in 2013. Bluetooth SIG subsequently launched the Imagine Blue Award in 2016 at Bluetooth World.[167]The Bluetooth Breakthrough Awards program highlights the most innovative products and applications available today, prototypes coming soon, and student-led projects in the making.[168]
|
https://en.wikipedia.org/wiki/Bluetooth#Bluetooth_security
|
Inmathematics, asquare rootof a numberxis a numberysuch thaty2=x{\displaystyle y^{2}=x}; in other words, a numberywhosesquare(the result of multiplying the number by itself, ory⋅y{\displaystyle y\cdot y}) isx.[1]For example, 4 and −4 are square roots of 16 because42=(−4)2=16{\displaystyle 4^{2}=(-4)^{2}=16}.
Everynonnegativereal numberxhas a unique nonnegative square root, called theprincipal square rootor simplythe square root(with a definite article, see below), which is denoted byx,{\displaystyle {\sqrt {x}},}where the symbol "{\displaystyle {\sqrt {~^{~}}}}" is called theradical sign[2]orradix. For example, to express the fact that the principal square root of 9 is 3, we write9=3{\displaystyle {\sqrt {9}}=3}. The term (or number) whose square root is being considered is known as theradicand. The radicand is the number or expression underneath the radical sign, in this case, 9. For non-negativex, the principal square root can also be written inexponentnotation, asx1/2{\displaystyle x^{1/2}}.
Everypositive numberxhas two square roots:x{\displaystyle {\sqrt {x}}}(which is positive) and−x{\displaystyle -{\sqrt {x}}}(which is negative). The two roots can be written more concisely using the± signas±x{\displaystyle \pm {\sqrt {x}}}. Although the principal square root of a positive number is only one of its two square roots, the designation "thesquare root" is often used to refer to the principal square root.[3][4]
Square roots of negative numbers can be discussed within the framework ofcomplex numbers. More generally, square roots can be considered in any context in which a notion of the "square" of a mathematical object is defined. These includefunction spacesandsquare matrices, among othermathematical structures.
TheYale Babylonian Collectionclay tabletYBC 7289was created between 1800 BC and 1600 BC, showing2{\displaystyle {\sqrt {2}}}and22=12{\textstyle {\frac {\sqrt {2}}{2}}={\frac {1}{\sqrt {2}}}}respectively as 1;24,51,10 and 0;42,25,35base 60numbers on a square crossed by two diagonals.[5](1;24,51,10) base 60 corresponds to 1.41421296, which is correct to 5 decimal places (1.41421356...).
TheRhind Mathematical Papyrusis a copy from 1650 BC of an earlierBerlin Papyrusand other texts – possibly theKahun Papyrus– that shows how the Egyptians extracted square roots by an inverse proportion method.[6]
InAncient India, the knowledge of theoretical and applied aspects of square and square root was at least as old as theSulba Sutras, dated around 800–500 BC (possibly much earlier).[7]A method for finding very good approximations to the square roots of 2 and 3 is given in theBaudhayana Sulba Sutra.[8]Apastamba, dated around 600 BCE, gave a strikingly accurate value for{\displaystyle {\sqrt {2}}}, correct to five decimal places, as{\textstyle 1+{\frac {1}{3}}+{\frac {1}{3\times 4}}-{\frac {1}{3\times 4\times 34}}}.[9][10][11]Aryabhata, in theAryabhatiya(section 2.4), gave a method for finding the square root of numbers having many digits.
It was known to the ancient Greeks that square roots ofpositive integersthat are notperfect squaresare alwaysirrational numbers: numbers not expressible as aratioof two integers (that is, they cannot be written exactly asmn{\displaystyle {\frac {m}{n}}}, wheremandnare integers). This is the theoremEuclid X, 9, almost certainly due toTheaetetusdating back toc.380 BC.[12]The discovery of irrational numbers, including the particular case of thesquare root of 2, is widely associated with the Pythagorean school.[13][14]Although some accounts attribute the discovery toHippasus, the specific contributor remains uncertain due to the scarcity of primary sources and the secretive nature of the brotherhood.[15][16]It is exactly the length of thediagonalof asquare with side length 1.
In the Chinese mathematical workWritings on Reckoning, written between 202 BC and 186 BC during the earlyHan dynasty, the square root is approximated by using an "excess and deficiency" method, which says to "...combine the excess and deficiency as the divisor; (taking) the deficiency numerator multiplied by the excess denominator and the excess numerator times the deficiency denominator, combine them as the dividend."[17]
A symbol for square roots, written as an elaborate R, was invented byRegiomontanus(1436–1476). An R was also used for radix to indicate square roots inGerolamo Cardano'sArs Magna.[18]
According to historian of mathematicsD.E. Smith, Aryabhata's method for finding the square root was first introduced in Europe by Cataneo in 1546.
According to Jeffrey A. Oaks, Arabs used the letterjīm/ĝīm(ج), the first letter of the word "جذر" (variously transliterated asjaḏr,jiḏr,ǧaḏrorǧiḏr, "root"), placed in its initial form (ﺟ) over a number to indicate its square root. The letterjīmresembles the present square root shape. Its usage goes as far as the end of the twelfth century in the works of the Moroccan mathematicianIbn al-Yasamin.[19]
The symbol "√" for the square root was first used in print in 1525, inChristoph Rudolff'sCoss.[20]
The principal square root functionf(x)=x{\displaystyle f(x)={\sqrt {x}}}(usually just referred to as the "square root function") is afunctionthat maps thesetof nonnegative real numbers onto itself. Ingeometricalterms, the square root function maps theareaof a square to its side length.
The square root ofxis rational if and only ifxis arational numberthat can be represented as a ratio of two perfect squares. (Seesquare root of 2for proofs that this is an irrational number, andquadratic irrationalfor a proof for all non-square natural numbers.) The square root function maps rational numbers intoalgebraic numbers, the latter being asupersetof the rational numbers.
For all real numbersx,x2=|x|={x,ifx≥0−x,ifx<0.{\displaystyle {\sqrt {x^{2}}}=\left|x\right|={\begin{cases}x,&{\text{if }}x\geq 0\\-x,&{\text{if }}x<0.\end{cases}}}(seeabsolute value).
For all nonnegative real numbersxandy,xy=xy{\displaystyle {\sqrt {xy}}={\sqrt {x}}{\sqrt {y}}}andx=x1/2.{\displaystyle {\sqrt {x}}=x^{1/2}.}
The square root function iscontinuousfor all nonnegativex, anddifferentiablefor all positivex. Iffdenotes the square root function, its derivative is given by:f′(x)=12x.{\displaystyle f'(x)={\frac {1}{2{\sqrt {x}}}}.}
TheTaylor seriesof1+x{\displaystyle {\sqrt {1+x}}}aboutx= 0converges for|x| ≤ 1, and is given by
1+x=∑n=0∞(−1)n(2n)!(1−2n)(n!)2(4n)xn=1+12x−18x2+116x3−5128x4+⋯,{\displaystyle {\sqrt {1+x}}=\sum _{n=0}^{\infty }{\frac {(-1)^{n}(2n)!}{(1-2n)(n!)^{2}(4^{n})}}x^{n}=1+{\frac {1}{2}}x-{\frac {1}{8}}x^{2}+{\frac {1}{16}}x^{3}-{\frac {5}{128}}x^{4}+\cdots ,}
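The partial sums of this series are easy to evaluate directly. A short Python sketch comparing a truncated series with the library square root, for a value of x well inside the region of convergence, might look like this (the function name is illustrative):

```python
import math

def sqrt1px_series(x, terms=10):
    """Partial sum of the Taylor series for sqrt(1+x) about x = 0 (|x| <= 1)."""
    total = 0.0
    for n in range(terms):
        # coefficient (-1)^n (2n)! / ((1-2n) (n!)^2 4^n) from the series above
        coeff = ((-1) ** n * math.factorial(2 * n)
                 / ((1 - 2 * n) * math.factorial(n) ** 2 * 4 ** n))
        total += coeff * x ** n
    return total

x = 0.2
print(sqrt1px_series(x, 10), math.sqrt(1 + x))  # both close to 1.0954...
```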
The square root of a nonnegative number is used in the definition ofEuclidean norm(anddistance), as well as in generalizations such asHilbert spaces. It defines an important concept ofstandard deviationused inprobability theoryandstatistics. It has a major use in the formula for solutions of aquadratic equation.Quadratic fieldsand rings ofquadratic integers, which are based on square roots, are important in algebra and have uses in geometry. Square roots frequently appear in mathematical formulas elsewhere, as well as in manyphysicallaws.
A positive number has two square roots, one positive, and one negative, which areoppositeto each other. When talking ofthesquare root of a positive integer, it is usually the positive square root that is meant.
The square roots of an integer arealgebraic integers—more specificallyquadratic integers.
The square root of a positive integer is the product of the roots of itsprimefactors, because the square root of a product is the product of the square roots of the factors. Sincep2k=pk,{\textstyle {\sqrt {p^{2k}}}=p^{k},}only roots of those primes having an odd power in thefactorizationare necessary. More precisely, the square root of a prime factorization isp12e1+1⋯pk2ek+1pk+12ek+1…pn2en=p1e1…pnenp1…pk.{\displaystyle {\sqrt {p_{1}^{2e_{1}+1}\cdots p_{k}^{2e_{k}+1}p_{k+1}^{2e_{k+1}}\dots p_{n}^{2e_{n}}}}=p_{1}^{e_{1}}\dots p_{n}^{e_{n}}{\sqrt {p_{1}\dots p_{k}}}.}
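A small Python sketch of this simplification, pulling the even part of each prime exponent out of the radical, is shown below; the function name is illustrative.

```python
def simplify_sqrt(n):
    """Write sqrt(n) as (outside, inside) with sqrt(n) = outside * sqrt(inside)."""
    outside, inside, p = 1, 1, 2
    while p * p <= n:
        exponent = 0
        while n % p == 0:
            n //= p
            exponent += 1
        outside *= p ** (exponent // 2)   # even part of the exponent comes out
        inside *= p ** (exponent % 2)     # odd remainder stays under the root
        p += 1
    inside *= n                            # any leftover prime factor (exponent 1)
    return outside, inside

print(simplify_sqrt(72))   # (6, 2): sqrt(72) = 6 * sqrt(2)
```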
The square roots of theperfect squares(e.g., 0, 1, 4, 9, 16) areintegers. In all other cases, the square roots of positive integers areirrational numbers, and hence have non-repeating decimalsin theirdecimal representations. Decimal approximations of the square roots of the first few natural numbers are given in the following table.
As before, the square roots of theperfect squares(e.g., 0, 1, 4, 9, 16) are integers. In all other cases, the square roots of positive integers areirrational numbers, and therefore have non-repeating digits in any standardpositional notationsystem.
The square roots of small integers are used in both theSHA-1andSHA-2hash function designs to providenothing up my sleeve numbers.
A result from the study ofirrational numbersassimple continued fractionswas obtained byJoseph Louis Lagrangec.1780. Lagrange found that the representation of the square root of any non-square positive integer as a continued fraction isperiodic. That is, a certain pattern of partial denominators repeats indefinitely in the continued fraction. In a sense these square roots are the very simplest irrational numbers, because they can be represented with a simple repeating pattern of integers.
Thesquare bracketnotation used above is a short form for a continued fraction. Written in the more suggestive algebraic form, the simple continued fraction for the square root of 11, [3; 3, 6, 3, 6, ...], looks like this:11=3+13+16+13+16+13+⋱{\displaystyle {\sqrt {11}}=3+{\cfrac {1}{3+{\cfrac {1}{6+{\cfrac {1}{3+{\cfrac {1}{6+{\cfrac {1}{3+\ddots }}}}}}}}}}}
where the two-digit pattern {3, 6} repeats over and over again in the partial denominators. Since11 = 32+ 2, the above is also identical to the followinggeneralized continued fractions:
11=3+26+26+26+26+26+⋱=3+620−1−120−120−120−120−⋱.{\displaystyle {\sqrt {11}}=3+{\cfrac {2}{6+{\cfrac {2}{6+{\cfrac {2}{6+{\cfrac {2}{6+{\cfrac {2}{6+\ddots }}}}}}}}}}=3+{\cfrac {6}{20-1-{\cfrac {1}{20-{\cfrac {1}{20-{\cfrac {1}{20-{\cfrac {1}{20-\ddots }}}}}}}}}}.}
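The periodic partial denominators can be generated with the standard recurrence for continued fractions of quadratic irrationals. The following Python sketch reproduces the expansion of the square root of 11 quoted above; the function name is illustrative.

```python
import math

def sqrt_continued_fraction(n, count=8):
    """First `count` partial denominators of the continued fraction of sqrt(n)."""
    a0 = math.isqrt(n)
    if a0 * a0 == n:
        return [a0]                       # perfect square: no periodic part
    terms, m, d, a = [a0], 0, 1, a0
    for _ in range(count - 1):
        # standard recurrence for quadratic irrationals
        m = d * a - m
        d = (n - m * m) // d
        a = (a0 + m) // d
        terms.append(a)
    return terms

print(sqrt_continued_fraction(11))   # [3, 3, 6, 3, 6, 3, 6, 3]
```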
Square roots of positive numbers are not in generalrational numbers, and so cannot be written as a terminating or recurring decimal expression. Therefore in general any attempt to compute a square root expressed in decimal form can only yield an approximation, though a sequence of increasingly accurate approximations can be obtained.
Mostpocket calculatorshave a square root key. Computerspreadsheetsand othersoftwareare also frequently used to calculate square roots. Pocket calculators typically implement efficient routines, such asNewton's method(frequently with an initial guess of 1), to compute the square root of a positive real number.[21][22]When computing square roots withlogarithm tablesorslide rules, one can exploit the identitiesa=e(lna)/2=10(log10a)/2,{\displaystyle {\sqrt {a}}=e^{(\ln a)/2}=10^{(\log _{10}a)/2},}wherelnandlog10are thenaturalandbase-10 logarithms.
By trial-and-error,[23]one can square an estimate fora{\displaystyle {\sqrt {a}}}and raise or lower the estimate until it agrees to sufficient accuracy. For this technique it is prudent to use the identity(x+c)2=x2+2xc+c2,{\displaystyle (x+c)^{2}=x^{2}+2xc+c^{2},}as it allows one to adjust the estimatexby some amountcand measure the square of the adjustment in terms of the original estimate and its square.
The most commoniterative methodof square root calculation by hand is known as the "Babylonian method" or "Heron's method" after the first-century Greek philosopherHeron of Alexandria, who first described it.[24]The method uses the same iterative scheme as theNewton–Raphson methodyields when applied to the functiony=f(x) =x2−a, using the fact that its slope at any point isdy/dx=f′(x) = 2x, but predates it by many centuries.[25]The algorithm is to repeat a simple calculation that results in a number closer to the actual square root each time it is repeated with its result as the new input. The motivation is that ifxis an overestimate to the square root of a nonnegative real numberathena/xwill be an underestimate and so the average of these two numbers is a better approximation than either of them. However, theinequality of arithmetic and geometric meansshows this average is always an overestimate of the square root (as notedbelow), and so it can serve as a new overestimate with which to repeat the process, whichconvergesas a consequence of the successive overestimates and underestimates being closer to each other after each iteration. To findx:
That is, if an arbitrary guess fora{\displaystyle {\sqrt {a}}}isx0, andxn+ 1= (xn+a/xn) / 2, then eachxnis an approximation ofa{\displaystyle {\sqrt {a}}}which is better for largenthan for smalln. Ifais positive, the convergence isquadratic, which means that in approaching the limit, the number of correct digits roughly doubles in each next iteration. Ifa= 0, the convergence is only linear; however,0=0{\displaystyle {\sqrt {0}}=0}so in this case no iteration is needed.
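A minimal Python sketch of the iteration described above, starting from an arbitrary positive guess, might look like this:

```python
def babylonian_sqrt(a, x0=1.0, iterations=8):
    """Heron's/Babylonian iteration: average the estimate with a divided by it."""
    x = x0
    for _ in range(iterations):
        x = (x + a / x) / 2   # x_{n+1} = (x_n + a/x_n) / 2
    return x

print(babylonian_sqrt(2))   # ~1.4142135623730951
```

Because the convergence is quadratic, a handful of iterations already reaches machine precision for moderate inputs.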
Using the identitya=2−n4na,{\displaystyle {\sqrt {a}}=2^{-n}{\sqrt {4^{n}a}},}the computation of the square root of a positive number can be reduced to that of a number in the range[1, 4). This simplifies finding a start value for the iterative method that is close to the square root, for which apolynomialorpiecewise-linearapproximationcan be used.
Thetime complexityfor computing a square root withndigits of precision is equivalent to that of multiplying twon-digit numbers.
Another useful method for calculating the square root is the shifting nth root algorithm, applied forn= 2.
The name of the square rootfunctionvaries fromprogramming languageto programming language, withsqrt[26](often pronounced "squirt"[27]) being common, used inCand derived languages such asC++,JavaScript,PHP, andPython.
The square of any positive or negative number is positive, and the square of 0 is 0. Therefore, no negative number can have arealsquare root. However, it is possible to work with a more inclusive set of numbers, called thecomplex numbers, that does contain solutions to the square root of a negative number. This is done by introducing a new number, denoted byi(sometimes byj, especially in the context ofelectricitywhereitraditionally represents electric current) and called theimaginary unit, which isdefinedsuch thati2= −1. Using this notation, we can think ofias the square root of −1, but we also have(−i)2=i2= −1and so−iis also a square root of −1. By convention, the principal square root of −1 isi, or more generally, ifxis any nonnegative number, then the principal square root of−xis−x=ix.{\displaystyle {\sqrt {-x}}=i{\sqrt {x}}.}
The right side (as well as its negative) is indeed a square root of−x, since(ix)2=i2(x)2=(−1)x=−x.{\displaystyle (i{\sqrt {x}})^{2}=i^{2}({\sqrt {x}})^{2}=(-1)x=-x.}
For every non-zero complex numberzthere exist precisely two numberswsuch thatw2=z: the principal square root ofz(defined below), and its negative.
To find a definition for the square root that allows us to consistently choose a single value, called theprincipal value, we start by observing that any complex numberx+iy{\displaystyle x+iy}can be viewed as a point in the plane,(x,y),{\displaystyle (x,y),}expressed usingCartesian coordinates. The same point may be reinterpreted usingpolar coordinatesas the pair(r,φ),{\displaystyle (r,\varphi ),}wherer≥0{\displaystyle r\geq 0}is the distance of the point from the origin, andφ{\displaystyle \varphi }is the angle that the line from the origin to the point makes with the positive real (x{\displaystyle x}) axis. In complex analysis, the location of this point is conventionally writtenreiφ.{\displaystyle re^{i\varphi }.}Ifz=reiφwith−π<φ≤π,{\displaystyle z=re^{i\varphi }{\text{ with }}-\pi <\varphi \leq \pi ,}then theprincipal square rootofz{\displaystyle z}is defined to be the following:z=reiφ/2.{\displaystyle {\sqrt {z}}={\sqrt {r}}e^{i\varphi /2}.}The principal square root function is thus defined using the non-positive real axis as abranch cut. Ifz{\displaystyle z}is a non-negative real number (which happens if and only ifφ=0{\displaystyle \varphi =0}) then the principal square root ofz{\displaystyle z}isrei(0)/2=r;{\displaystyle {\sqrt {r}}e^{i(0)/2}={\sqrt {r}};}in other words, the principal square root of a non-negative real number is just the usual non-negative square root. It is important that−π<φ≤π{\displaystyle -\pi <\varphi \leq \pi }because if, for example,z=−2i{\displaystyle z=-2i}(soφ=−π/2{\displaystyle \varphi =-\pi /2}) then the principal square root is−2i=2eiφ=2eiφ/2=2ei(−π/4)=1−i{\displaystyle {\sqrt {-2i}}={\sqrt {2e^{i\varphi }}}={\sqrt {2}}e^{i\varphi /2}={\sqrt {2}}e^{i(-\pi /4)}=1-i}but usingφ~:=φ+2π=3π/2{\displaystyle {\tilde {\varphi }}:=\varphi +2\pi =3\pi /2}would instead produce the other square root2eiφ~/2=2ei(3π/4)=−1+i=−−2i.{\displaystyle {\sqrt {2}}e^{i{\tilde {\varphi }}/2}={\sqrt {2}}e^{i(3\pi /4)}=-1+i=-{\sqrt {-2i}}.}
The principal square root function isholomorphiceverywhere except on the set of non-positive real numbers (on strictly negative reals it is not evencontinuous). The above Taylor series for1+x{\displaystyle {\sqrt {1+x}}}remains valid for complex numbersx{\displaystyle x}with|x|<1.{\displaystyle |x|<1.}
The above can also be expressed in terms oftrigonometric functions:r(cosφ+isinφ)=r(cosφ2+isinφ2).{\displaystyle {\sqrt {r\left(\cos \varphi +i\sin \varphi \right)}}={\sqrt {r}}\left(\cos {\frac {\varphi }{2}}+i\sin {\frac {\varphi }{2}}\right).}
When the number is expressed using its real and imaginary parts, the following formula can be used for the principal square root:[28][29]
x+iy=12(x2+y2+x)+isgn(y)12(x2+y2−x),{\displaystyle {\sqrt {x+iy}}={\sqrt {{\tfrac {1}{2}}{\bigl (}{\sqrt {\textstyle x^{2}+y^{2}}}+x{\bigr )}}}+i\operatorname {sgn}(y){\sqrt {{\tfrac {1}{2}}{\bigl (}{\sqrt {\textstyle x^{2}+y^{2}}}-x{\bigr )}}},}
wheresgn(y) = 1ify≥ 0andsgn(y) = −1otherwise.[30]In particular, the imaginary parts of the original number and the principal value of its square root have the same sign. The real part of the principal value of the square root is always nonnegative.
For example, the principal square roots of±iare given by:
i=1+i2,−i=1−i2.{\displaystyle {\sqrt {i}}={\frac {1+i}{\sqrt {2}}},\qquad {\sqrt {-i}}={\frac {1-i}{\sqrt {2}}}.}
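The real/imaginary-part formula above can be checked numerically against a library implementation. The following Python sketch implements it and compares the result with cmath.sqrt for a few sample values, including ±i; the function name is illustrative.

```python
import cmath
import math

def principal_sqrt(z):
    """Principal square root via the real/imaginary-part formula above."""
    x, y = z.real, z.imag
    r = math.hypot(x, y)                              # |z| = sqrt(x^2 + y^2)
    real = math.sqrt((r + x) / 2)
    imag = math.copysign(math.sqrt((r - x) / 2), y)   # sgn(y) picks the half-plane
    return complex(real, imag)

for z in (1j, -1j, -4 + 0j, 3 + 4j):
    assert cmath.isclose(principal_sqrt(z), cmath.sqrt(z))
```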
In the following, the complex numberszandwmay be expressed in polar form as{\displaystyle z=|z|e^{i\theta _{z}}}and{\displaystyle w=|w|e^{i\theta _{w}}},
where−π<θz≤π{\displaystyle -\pi <\theta _{z}\leq \pi }and−π<θw≤π{\displaystyle -\pi <\theta _{w}\leq \pi }.
Because of the discontinuous nature of the square root function in the complex plane, the following laws arenot truein general.
A similar problem appears with other complex functions with branch cuts, e.g., thecomplex logarithmand the relationslogz+ logw= log(zw)orlog(z*) = log(z)*which are not true in general.
Wrongly assuming one of these laws underlies several faulty "proofs", for instance the following one showing that−1 = 1:−1=i⋅i=−1⋅−1=(−1)⋅(−1)=1=1.{\displaystyle {\begin{aligned}-1&=i\cdot i\\&={\sqrt {-1}}\cdot {\sqrt {-1}}\\&={\sqrt {\left(-1\right)\cdot \left(-1\right)}}\\&={\sqrt {1}}\\&=1.\end{aligned}}}
The third equality cannot be justified (seeinvalid proof).[31]: Chapter VI, Section I, Subsection 2The fallacy that +1 = −1It can be made to hold by changing the meaning of √ so that this no longer represents the principal square root (see above) but selects a branch for the square root that contains1⋅−1.{\displaystyle {\sqrt {1}}\cdot {\sqrt {-1}}.}The left-hand side becomes either−1⋅−1=i⋅i=−1{\displaystyle {\sqrt {-1}}\cdot {\sqrt {-1}}=i\cdot i=-1}if the branch includes+ior−1⋅−1=(−i)⋅(−i)=−1{\displaystyle {\sqrt {-1}}\cdot {\sqrt {-1}}=(-i)\cdot (-i)=-1}if the branch includes−i, while the right-hand side becomes(−1)⋅(−1)=1=−1,{\displaystyle {\sqrt {\left(-1\right)\cdot \left(-1\right)}}={\sqrt {1}}=-1,}where the last equality,1=−1,{\displaystyle {\sqrt {1}}=-1,}is a consequence of the choice of branch in the redefinition of√.
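The failure of the product rule across the branch cut is easy to observe numerically. A brief Python sketch using the standard cmath module:

```python
import cmath

z, w = -1 + 0j, -1 + 0j
lhs = cmath.sqrt(z) * cmath.sqrt(w)   # i * i = -1
rhs = cmath.sqrt(z * w)               # principal sqrt of 1 is 1
print(lhs, rhs)                       # lhs is -1, rhs is 1: the product rule fails
```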
The definition of a square root ofx{\displaystyle x}as a numbery{\displaystyle y}such thaty2=x{\displaystyle y^{2}=x}has been generalized in the following way.
Acube rootofx{\displaystyle x}is a numbery{\displaystyle y}such thaty3=x{\displaystyle y^{3}=x}; it is denotedx3.{\displaystyle {\sqrt[{3}]{x}}.}
Ifnis an integer greater than two, an-th rootofx{\displaystyle x}is a numbery{\displaystyle y}such thatyn=x{\displaystyle y^{n}=x}; it is denotedxn.{\displaystyle {\sqrt[{n}]{x}}.}
Given anypolynomialp, arootofpis a numberysuch thatp(y) = 0. For example, thenth roots ofxare the roots of the polynomial (iny)yn−x.{\displaystyle y^{n}-x.}
Abel–Ruffini theoremstates that, in general, the roots of a polynomial of degree five or higher cannot be expressed in terms ofnth roots.
IfAis apositive-definite matrixor operator, then there exists precisely one positive definite matrix or operatorBwithB2=A; we then defineA1/2=B. In general matrices may have multiple square roots or even an infinitude of them. For example, the2 × 2identity matrixhas an infinity of square roots,[32]though only one of them is positive definite.
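The following NumPy sketch (assuming NumPy is available) checks a few of the infinitely many square roots of the 2 × 2 identity matrix, only the first of which is positive definite:

```python
import numpy as np

I = np.eye(2)
candidates = [
    np.array([[1.0, 0.0], [0.0, 1.0]]),    # the positive-definite square root
    np.array([[-1.0, 0.0], [0.0, -1.0]]),
    np.array([[1.0, 0.0], [0.0, -1.0]]),
    np.array([[0.0, 1.0], [1.0, 0.0]]),    # a reflection; it also squares to I
]
for B in candidates:
    assert np.allclose(B @ B, I)           # each candidate is a square root of I
```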
Each element of anintegral domainhas no more than 2 square roots. Thedifference of two squaresidentityu2−v2= (u−v)(u+v)is proved using thecommutativity of multiplication. Ifuandvare square roots of the same element, thenu2−v2= 0. Because there are nozero divisors, this impliesu=voru+v= 0, where the latter means that the two roots areadditive inversesof each other. In other words, if a square rootuof an elementaexists, then the only square roots ofaareuand−u. The only square root of 0 in an integral domain is 0 itself.
In a field ofcharacteristic2, an element either has one square root or does not have any at all, because each element is its own additive inverse, so that−u=u. If the field isfiniteof characteristic 2 then every element has a unique square root. In afieldof any other characteristic, any non-zero element either has two square roots, as explained above, or does not have any.
Given an oddprime numberp, letq=pefor some positive integere. A non-zero element of the fieldFqwithqelements is aquadratic residueif it has a square root inFq. Otherwise, it is a quadratic non-residue. There are(q− 1)/2quadratic residues and(q− 1)/2quadratic non-residues; zero is not counted in either class. The quadratic residues form agroupunder multiplication. The properties of quadratic residues are widely used innumber theory.
Unlike in an integral domain, a square root in an arbitrary (unital) ring need not be unique up to sign. For example, in the ringZ/8Z{\displaystyle \mathbb {Z} /8\mathbb {Z} }of integersmodulo 8(which is commutative, but has zero divisors), the element 1 has four distinct square roots: ±1 and ±3.
Another example is provided by the ring ofquaternionsH,{\displaystyle \mathbb {H} ,}which has no zero divisors, but is not commutative. Here, the element −1 hasinfinitely many square roots, including±i,±j, and±k. In fact, the set of square roots of−1is exactly{ai+bj+ck∣a2+b2+c2=1}.{\displaystyle \{ai+bj+ck\mid a^{2}+b^{2}+c^{2}=1\}.}
A square root of 0 is either 0 or a zero divisor. Thus in rings where zero divisors do not exist, it is uniquely 0. However, rings with zero divisors may have multiple square roots of 0. For example, inZ/n2Z,{\displaystyle \mathbb {Z} /n^{2}\mathbb {Z} ,}any multiple ofnis a square root of 0.
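Both observations are easy to verify by brute force over a small modulus. A short Python sketch (the function name is illustrative):

```python
def square_roots_mod(a, n):
    """All residues r in Z/nZ with r*r congruent to a (mod n), by exhaustive search."""
    return [r for r in range(n) if (r * r - a) % n == 0]

print(square_roots_mod(1, 8))    # [1, 3, 5, 7]  i.e. ±1 and ±3 mod 8
print(square_roots_mod(0, 9))    # [0, 3, 6]     every multiple of 3 squares to 0 mod 3^2
```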
The square root of a positive number is usually defined as the side length of asquarewith theareaequal to the given number. But the square shape is not necessary for it: if one of twosimilarplanar Euclideanobjects has the areaatimes greater than another, then the ratio of their linear sizes isa{\displaystyle {\sqrt {a}}}.
A square root can be constructed with a compass and straightedge. In hisElements,Euclid(fl.300 BC) gave the construction of thegeometric meanof two quantities in two different places:Proposition II.14andProposition VI.13. Since the geometric mean ofaandbisab{\displaystyle {\sqrt {ab}}}, one can constructa{\displaystyle {\sqrt {a}}}simply by takingb= 1.
The construction is also given byDescartesin hisLa Géométrie, see figure 2 onpage 2. However, Descartes made no claim to originality and his audience would have been quite familiar with Euclid.
Euclid's second proof in Book VI depends on the theory ofsimilar triangles. Let AHB be a line segment of lengtha+bwithAH =aandHB =b. Construct the circle with AB as diameter and let C be one of the two intersections of the perpendicular chord at H with the circle and denote the length CH ash. Then, usingThales' theoremand, as in theproof of Pythagoras' theorem by similar triangles, triangle AHC is similar to triangle CHB (as indeed both are to triangle ACB, though we don't need that, but it is the essence of the proof of Pythagoras' theorem) so that AH:CH is as HC:HB, i.e.a/h=h/b, from which we conclude by cross-multiplication thath2=ab, and finally thath=ab{\displaystyle h={\sqrt {ab}}}. When marking the midpoint O of the line segment AB and drawing the radius OC of length(a+b)/2, then clearly OC > CH, i.e.a+b2≥ab{\textstyle {\frac {a+b}{2}}\geq {\sqrt {ab}}}(with equality if and only ifa=b), which is thearithmetic–geometric mean inequality for two variablesand, as notedabove, is the basis of theAncient Greekunderstanding of "Heron's method".
Another method of geometric construction usesright trianglesandinduction:1{\displaystyle {\sqrt {1}}}can be constructed, and oncex{\displaystyle {\sqrt {x}}}has been constructed, the right triangle with legs 1 andx{\displaystyle {\sqrt {x}}}has ahypotenuseofx+1{\displaystyle {\sqrt {x+1}}}. Constructing successive square roots in this manner yields theSpiral of Theodorusdepicted above.
|
https://en.wikipedia.org/wiki/Square_root#Modular_square_root
|
TheGlobal System for Mobile Communications(GSM) is a family of standards to describe the protocols for second-generation (2G) digitalcellular networks,[2]as used by mobile devices such asmobile phonesandmobile broadband modems. GSM is also atrade markowned by theGSM Association.[3]"GSM" may also refer to the voice codec initially used in GSM.[4]
2G networks developed as a replacement for first generation (1G) analog cellular networks. The original GSM standard, which was developed by theEuropean Telecommunications Standards Institute(ETSI), originally described a digital, circuit-switched network optimized forfull duplexvoicetelephony, employingtime division multiple access(TDMA) between stations. This expanded over time to includedata communications, first bycircuit-switched transport, then bypacketdata transport via its upgraded standards,GPRSand thenEDGE. GSM exists in various versions based on thefrequency bands used.
GSM was first implemented inFinlandin December 1991.[5]It became the global standard for mobile cellular communications, with over 2 billion GSM subscribers globally in 2006, far above its competing standard,CDMA.[6]GSM reached over 90% market share by the mid-2010s and operated in over 219 countries and territories.[2]The specifications and maintenance of GSM passed over to the3GPPbody in 2000,[7]which at the time developed third-generation (3G)UMTSstandards, followed by the fourth-generation (4G) LTE Advanced and the fifth-generation5Gstandards, which do not form part of the GSM standard. Beginning in the late 2010s, various carriers worldwidestarted to shut down their GSM networks; nevertheless, as a result of the network's widespread use, the acronym "GSM" is still used as a generic term for the mobile phone technologies that evolved from it, or for mobile phones themselves.
In 1983, work began to develop a European standard for digital cellular voice telecommunications when theEuropean Conference of Postal and Telecommunications Administrations(CEPT) set up theGroupe Spécial Mobile(GSM) committee and later provided a permanent technical-support group based inParis. Five years later, in 1987, 15 representatives from 13 European countries signed amemorandum of understandinginCopenhagento develop and deploy a common cellular telephone system across Europe, and EU rules were passed to make GSM a mandatory standard.[8]The decision to develop a continental standard eventually resulted in a unified, open, standard-based network which was larger than that in the United States.[9][10][11][12]
In February 1987 Europe produced the first agreed GSM Technical Specification. Ministers fromthe four big EU countries[clarification needed]cemented their political support for GSM with the Bonn Declaration on Global Information Networks in May and the GSMMoUwas tabled for signature in September. The MoU drew in mobile operators from across Europe to pledge to invest in new GSM networks to an ambitious common date.
In this short 38-week period the whole of Europe (countries and industries) had been brought behind GSM in a rare unity and speed guided by four public officials: Armin Silberhorn (Germany), Stephen Temple (UK),Philippe Dupuis(France), and Renzo Failli (Italy).[13]In 1989 the Groupe Spécial Mobile committee was transferred from CEPT to theEuropean Telecommunications Standards Institute(ETSI).[10][11][12]The IEEE/RSE awarded toThomas HaugandPhilippe Dupuisthe 2018James Clerk Maxwell medalfor their "leadership in the development of the first international mobile communications standard with subsequent evolution into worldwide smartphone data communication".[14]The GSM (2G) has evolved into 3G, 4G and 5G.
In parallel,FranceandGermanysigned a joint development agreement in 1984 and were joined byItalyand theUKin 1986. In 1986, theEuropean Commissionproposed reserving the 900 MHz spectrum band for GSM. It was long believed that the formerFinnishprime ministerHarri Holkerimade the world's first GSM call on 1 July 1991, callingKaarina Suonio(deputy mayor of the city ofTampere) using a network built byNokia and SiemensandoperatedbyRadiolinja.[15]In 2021, former Nokia engineerPekka Lonkarevealed toHelsingin Sanomatthat he had made a test call just a couple of hours earlier: "World's first GSM call was actually made by me. I called Marjo Jousinen, in Salo.", Lonka said.[16]The following year saw the sending of the firstshort messaging service(SMS or "text message") message, andVodafone UKand Telecom Finland signed the first internationalroamingagreement.
Work began in 1991 to expand the GSM standard to the 1800 MHz frequency band and the first 1800 MHz network became operational in the UK by 1993, called the DCS 1800. Also that year,Telstrabecame the first network operator to deploy a GSM network outside Europe and the first practical hand-held GSMmobile phonebecame available.
In 1995 fax, data and SMS messaging services were launched commercially, the first 1900 MHz GSM network became operational in the United States and GSM subscribers worldwide exceeded 10 million. In the same year, theGSM Associationformed. Pre-paid GSM SIM cards were launched in 1996 and worldwide GSM subscribers passed 100 million in 1998.[11]
In 2000 the first commercialGeneral Packet Radio Service(GPRS) services were launched and the first GPRS-compatible handsets became available for sale. In 2001, the first UMTS (W-CDMA) network was launched, a 3G technology that is not part of GSM. Worldwide GSM subscribers exceeded 500 million. In 2002, the firstMultimedia Messaging Service(MMS) was introduced and the first GSM network in the 800 MHz frequency band became operational.Enhanced Data rates for GSM Evolution(EDGE) services first became operational in a network in 2003, and the number of worldwide GSM subscribers exceeded 1 billion in 2004.[11]
By 2005 GSM networks accounted for more than 75% of the worldwide cellular network market, serving 1.5 billion subscribers. In 2005, the firstHSDPA-capable network also became operational. The firstHSUPAnetwork launched in 2007. (High Speed Packet Access(HSPA) and its uplink and downlink versions are 3G technologies, not part of GSM.) Worldwide GSM subscribers exceeded three billion in 2008.[11]
TheGSM Associationestimated in 2011 that technologies defined in the GSM standard served 80% of the mobile market, encompassing more than 5 billion people across more than 212 countries and territories, making GSM the most ubiquitous of the many standards for cellular networks.[17]
GSM is a second-generation (2G) standard employing time-division multiple-access (TDMA) spectrum-sharing, issued by the European Telecommunications Standards Institute (ETSI). The GSM standard does not include the 3GUniversal Mobile Telecommunications System(UMTS),code-division multiple access(CDMA) technology, nor the 4G LTEorthogonal frequency-division multiple access(OFDMA) technology standards issued by the 3GPP.[18]
GSM, for the first time, set a common standard for Europe for wireless networks. It was also adopted by many countries outside Europe. This allowed subscribers to use other GSM networks that have roaming agreements with each other. The common standard reduced research and development costs, since hardware and software could be sold with only minor adaptations for the local market.[19]
TelstrainAustraliashut down its 2G GSM network on 1 December 2016, the first mobile network operator to decommission a GSM network.[20]The second mobile provider to shut down its GSM network (on 1 January 2017) wasAT&T Mobilityfrom theUnited States.[21]OptusinAustraliacompleted the shut down of its 2G GSM network on 1 August 2017, part of the Optus GSM network coveringWestern Australiaand theNorthern Territoryhad earlier in the year been shut down in April 2017.[22]Singaporeshut down 2G services entirely in April 2017.[23]
The network is structured into several discrete sections:
GSM utilizes acellular network, meaning thatcell phonesconnect to it by searching for cells in the immediate vicinity. There are five different cell sizes in a GSM network: macro, micro, pico, femto, and umbrella cells.
The coverage area of each cell varies according to the implementation environment. Macro cells can be regarded as cells where thebase-stationantennais installed on a mast or a building above average rooftop level. Micro cells are cells whose antenna height is under average rooftop level; they are typically deployed in urban areas. Picocells are small cells whose coverage diameter is a few dozen meters; they are mainly used indoors. Femtocells are cells designed for use in residential orsmall-businessenvironments and connect to atelecommunications service provider's network via abroadband-internetconnection. Umbrella cells are used to cover shadowed regions of smaller cells and to fill in gaps in coverage between those cells.
Cell horizontal radius varies – depending on antenna height,antenna gain, andpropagationconditions – from a couple of hundred meters to several tens of kilometers. The longest distance the GSM specification supports in practical use is 35 kilometres (22 mi). There are also several implementations of the concept of an extended cell,[24]where the cell radius could be double or even more, depending on the antenna system, the type of terrain, and thetiming advance.
GSM supports indoor coverage – achievable by using an indoor picocell base station, or anindoor repeaterwith distributed indoor antennas fed through power splitters – to deliver the radio signals from an antenna outdoors to the separate indoor distributed antenna system. Picocells are typically deployed when significant call capacity is needed indoors, as in shopping centers or airports. However, this is not a prerequisite, since indoor coverage is also provided by in-building penetration of radio signals from any nearby cell.
GSM networks operate in a number of differentcarrier frequencyranges (separated intoGSM frequency rangesfor 2G andUMTS frequency bandsfor 3G), with most2GGSM networks operating in the 900 MHz or 1800 MHz bands. Where these bands were already allocated, the 850 MHz and 1900 MHz bands were used instead (for example in Canada and the United States). In rare cases the 400 and 450 MHz frequency bands are assigned in some countries because they were previously used for first-generation systems.
For comparison, most3Gnetworks in Europe operate in the 2100 MHz frequency band. For more information on worldwide GSM frequency usage, seeGSM frequency bands.
Regardless of the frequency selected by an operator, it is divided intotimeslotsfor individual phones. This allows eight full-rate or sixteen half-rate speech channels perradio frequency. These eight radio timeslots (orburstperiods) are grouped into aTDMAframe. Half-rate channels use alternate frames in the same timeslot. The channel data rate for all8 channelsis270.833 kbit/s,and the frame duration is4.615 ms.[25]TDMA noise is interference that can be heard on speakers near a GSM phone using TDMA, audible as a buzzing sound.[26]
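These figures are mutually consistent: assuming the commonly cited 156.25 bit periods per timeslot, eight timeslots at the stated channel rate occupy one TDMA frame of about 4.615 ms, as the following Python sketch checks.

```python
# Back-of-the-envelope check of the figures quoted above, assuming the
# commonly cited 156.25 bit periods per timeslot (burst).
BIT_RATE = 270.833e3          # channel bit rate in bit/s
BITS_PER_TIMESLOT = 156.25    # bit periods per timeslot (assumed)
TIMESLOTS_PER_FRAME = 8

frame_bits = BITS_PER_TIMESLOT * TIMESLOTS_PER_FRAME      # 1250 bit periods
frame_duration_ms = frame_bits / BIT_RATE * 1e3
print(round(frame_duration_ms, 3))                         # ~4.615 ms
```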
The transmission power in the handset is limited to a maximum of 2 watts inGSM 850/900and1 wattinGSM 1800/1900.
GSM has used a variety of voicecodecsto squeeze 3.1 kHz audio into between 7 and 13 kbit/s. Originally, two codecs, named after the types of data channel they were allocated, were used, calledHalf Rate(6.5 kbit/s) andFull Rate(13 kbit/s). These used a system based onlinear predictive coding(LPC). In addition to being efficient withbitrates, these codecs also made it easier to identify more important parts of the audio, allowing the air interface layer to prioritize and better protect these parts of the signal. GSM was further enhanced in 1997[27]with theenhanced full rate(EFR) codec, a 12.2 kbit/s codec that uses a full-rate channel. Finally, with the development ofUMTS, EFR was refactored into a variable-rate codec calledAMR-Narrowband, which is high quality and robust against interference when used on full-rate channels, or less robust but still relatively high quality when used in good radio conditions on half-rate channel.
One of the key features of GSM is theSubscriber Identity Module, commonly known as aSIM card. The SIM is a detachablesmart card[3]containing a user's subscription information and phone book. This allows users to retain their information after switching handsets. Alternatively, users can change networks or network identities without switching handsets - simply by changing the SIM.
Sometimesmobile network operatorsrestrict handsets that they sell for exclusive use in their own network. This is calledSIM lockingand is implemented by a software feature of the phone. A subscriber may usually contact the provider to remove the lock for a fee, utilize private services to remove the lock, or use software and websites to unlock the handset themselves. It is possible to hack past a phone locked by a network operator.
In some countries and regions (e.g.BrazilandGermany) all phones are sold unlocked due to the abundance of dual-SIM handsets and operators.[28]
GSM was intended to be a secure wireless system. It has considered the user authentication using apre-shared keyandchallenge–response, and over-the-air encryption. However, GSM is vulnerable to different types of attack, each of them aimed at a different part of the network.[29]
Research findings indicate that GSM faces susceptibility to hacking byscript kiddies, a term referring to inexperienced individuals utilizing readily available hardware and software. The vulnerability arises from the accessibility of tools such as a DVB-T TV tuner, posing a threat to both mobile and network users. Despite the term "script kiddies" implying a lack of sophisticated skills, the consequences of their attacks on GSM can be severe, impacting the functionality ofcellular networks. Given that GSM continues to be the main source of cellular technology in numerous countries, its susceptibility to potential threats from malicious attacks is one that needs to be addressed.[30]
The development ofUMTSintroduced an optionalUniversal Subscriber Identity Module(USIM), that uses a longer authentication key to give greater security, as well as mutually authenticating the network and the user, whereas GSM only authenticates the user to the network (and not vice versa). The security model therefore offers confidentiality and authentication, but limited authorization capabilities, and nonon-repudiation.
GSM uses several cryptographic algorithms for security. The A5/1, A5/2, and A5/3 stream ciphers are used for ensuring over-the-air voice privacy. A5/1 was developed first and is a stronger algorithm used within Europe and the United States; A5/2 is weaker and used in other countries. Serious weaknesses have been found in both algorithms: it is possible to break A5/2 in real time with a ciphertext-only attack, and in January 2007, The Hacker's Choice started the A5/1 cracking project with plans to use FPGAs that allow A5/1 to be broken with a rainbow table attack.[31] The system supports multiple algorithms so operators may replace that cipher with a stronger one.
Since 2000, different efforts have been made to crack the A5 encryption algorithms. Both the A5/1 and A5/2 algorithms have been broken, and their cryptanalysis has been published in the literature. As an example, Karsten Nohl developed a number of rainbow tables (static values which reduce the time needed to carry out an attack) and found new sources for known plaintext attacks.[32] He said that it is possible to build "a full GSM interceptor ... from open-source components" but that they had not done so because of legal concerns.[33] Nohl claimed that he was able to intercept voice and text conversations by impersonating another user to listen to voicemail, make calls, or send text messages using a seven-year-old Motorola cellphone and decryption software available for free online.[34]
GSM uses General Packet Radio Service (GPRS) for data transmissions like browsing the web. The most commonly deployed GPRS ciphers were publicly broken in 2011.[35]
The researchers revealed flaws in the commonly used GEA/1 and GEA/2 (standing for GPRS Encryption Algorithms 1 and 2) ciphers and published the open-source "gprsdecode" software for sniffing GPRS networks. They also noted that some carriers do not encrypt the data (i.e., using GEA/0) in order to detect the use of traffic or protocols they do not like (e.g., Skype), leaving customers unprotected. GEA/3 seems to remain relatively hard to break and is said to be in use on some more modern networks. If used with USIM to prevent connections to fake base stations and downgrade attacks, users will be protected in the medium term, though migration to 128-bit GEA/4 is still recommended.
The first public cryptanalysis of GEA/1 and GEA/2 (also written GEA-1 and GEA-2) was done in 2021. It concluded that, although it uses a 64-bit key, the GEA-1 algorithm actually provides only 40 bits of security, due to a relationship between two parts of the algorithm. The researchers found that this relationship was very unlikely to have arisen unintentionally, and may have been introduced in order to satisfy European controls on the export of cryptographic programs.[36][37][38]
The GSM systems and services are described in a set of standards governed by ETSI, where a full list is maintained.[39]
Several open-source software projects exist that provide certain GSM features,[40] such as the base transceiver station implemented by OpenBTS and the various network components provided by the Osmocom stack.[41]
Patents remain a problem for any open-source GSM implementation, because it is not possible for GNU or any other free software distributor to guarantee immunity from all lawsuits by the patent holders against the users. Furthermore, new features are being added to the standard all the time which means they have patent protection for a number of years.[citation needed]
The original GSM implementations from 1991 may now be entirely free of patent encumbrances; however, patent freedom is not certain due to the United States' "first to invent" system that was in place until 2012. The "first to invent" system, coupled with "patent term adjustment", can extend the life of a U.S. patent far beyond 20 years from its priority date. It is unclear at this time whether OpenBTS will be able to implement features of that initial specification without limit. As patents subsequently expire, however, those features can be added into the open-source version. As of 2011, there have been no lawsuits against users of OpenBTS over GSM use.[citation needed]
|
https://en.wikipedia.org/wiki/GSM#Encryption
|
Modular exponentiation is exponentiation performed over a modulus. It is useful in computer science, especially in the field of public-key cryptography, where it is used in both Diffie–Hellman key exchange and RSA public/private keys.
Modular exponentiation is the remainder when an integer b (the base) is raised to the power e (the exponent), and divided by a positive integer m (the modulus); that is, c = b^e mod m. From the definition of division, it follows that 0 ≤ c < m.
For example, given b = 5, e = 3 and m = 13, dividing 5^3 = 125 by 13 leaves a remainder of c = 8.
Modular exponentiation can be performed with a negative exponent e by finding the modular multiplicative inverse d of b modulo m using the extended Euclidean algorithm. That is: c = b^e mod m = d^(−e) mod m, where e < 0 and b⋅d ≡ 1 (mod m).
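As an illustration (a minimal Python sketch added here; the function name mod_pow_negative is illustrative, not from the article), the negative-exponent case can be handled by inverting the base first. Recent Python versions (3.8+) allow the built-in three-argument pow to compute the modular inverse directly with an exponent of −1:

    # Modular exponentiation with a negative exponent:
    # compute b**e mod m for e < 0 via the modular inverse of b.
    def mod_pow_negative(b, e, m):
        assert e < 0
        d = pow(b, -1, m)        # modular inverse of b mod m (requires gcd(b, m) == 1)
        return pow(d, -e, m)     # d**(-e) mod m

    print(mod_pow_negative(5, -3, 13))  # inverse of 5 mod 13 is 8; 8**3 mod 13 = 5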
Modular exponentiation is efficient to compute, even for very large integers. On the other hand, computing the modular discrete logarithm – that is, finding the exponent e when given b, c, and m – is believed to be difficult. This one-way function behavior makes modular exponentiation a candidate for use in cryptographic algorithms.
The most direct method of calculating a modular exponent is to calculate b^e directly, then to take this number modulo m. Consider trying to compute c, given b = 4, e = 13, and m = 497: c = 4^13 mod 497.
One could use a calculator to compute 4^13; this comes out to 67,108,864. Taking this value modulo 497, the answer c is determined to be 445.
Note that b is only one digit in length and that e is only two digits in length, but the value b^e is 8 digits in length.
In strong cryptography, b is often at least 1024 bits.[1] Consider b = 5 × 10^76 and e = 17, both of which are perfectly reasonable values. In this example, b is 77 digits in length and e is 2 digits in length, but the value b^e is 1,304 decimal digits in length. Such calculations are possible on modern computers, but the sheer magnitude of such numbers causes the speed of calculations to drop considerably. As b and e increase even further to provide better security, the value b^e becomes unwieldy.
The time required to perform the exponentiation depends on the operating environment and the processor. The method described above requires Θ(e) multiplications to complete.
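A minimal Python sketch of this direct method (illustrative only; the name mod_pow_direct is not from the article, and the intermediate value b**e can become enormous):

    # Direct method: compute b**e first, then reduce modulo m.
    # Simple, but the intermediate result grows very large.
    def mod_pow_direct(b, e, m):
        return (b ** e) % m

    print(mod_pow_direct(4, 13, 497))  # 67108864 % 497 = 445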
Keeping the numbers smaller requires additional modular reduction operations, but the reduced size makes each operation faster, saving time (as well as memory) overall.
This algorithm makes use of the identity (a ⋅ b) mod m = [(a mod m) ⋅ (b mod m)] mod m.
The modified algorithm is: set c = 1 and e′ = 0; then repeatedly increment e′ by 1 and set c = (b ⋅ c) mod m, stopping when e′ = e.
Note that at the end of every iteration through the loop, the equation c ≡ b^e′ (mod m) holds true. The algorithm ends when the loop has been executed e times. At that point c contains the result of b^e mod m.
In summary, this algorithm increases e′ by one until it is equal to e, at every step multiplying the result from the previous iteration, c, by b and performing a modulo operation on the resulting product, thereby keeping the running value c a small integer.
The example b = 4, e = 13, and m = 497 is presented again; the algorithm performs the iteration thirteen times, multiplying by 4 and reducing modulo 497 at each step.
The final answer for c is therefore 445, as in the direct method.
Like the first method, this requires O(e) multiplications to complete. However, since the numbers used in these calculations are much smaller than the numbers used in the first algorithm's calculations, the computation time decreases by a factor of at least O(e) in this method.
In pseudocode, this method can be performed the following way:
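Since the pseudocode listing itself is not reproduced above, here is a minimal Python sketch of the same loop (an illustrative rendering; the name mod_pow_iterative is assumed, not from the article):

    # Memory-efficient method: reduce modulo m after every multiplication,
    # so intermediate values stay smaller than modulus**2.
    def mod_pow_iterative(base, exponent, modulus):
        if modulus == 1:
            return 0
        base = base % modulus
        c = 1
        for _ in range(exponent):      # e' runs from 0 up to exponent
            c = (c * base) % modulus   # invariant: c == base**e' % modulus
        return c

    print(mod_pow_iterative(4, 13, 497))  # 445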
A third method drastically reduces the number of operations to perform modular exponentiation, while keeping the same memory footprint as in the previous method. It is a combination of the previous method and a more general principle called exponentiation by squaring (also known as binary exponentiation).
First, it is required that the exponent e be converted to binary notation. That is, e can be written as e = a_{n−1}·2^{n−1} + ⋯ + a_1·2 + a_0 = ∑_{i=0}^{n−1} a_i·2^i.
In such notation, the length of e is n bits. a_i can take the value 0 or 1 for any i such that 0 ≤ i < n. By definition, a_{n−1} = 1.
The value b^e can then be written as: b^e = b^(∑_{i=0}^{n−1} a_i·2^i) = ∏_{i=0}^{n−1} b^(a_i·2^i).
The solution c is therefore: c ≡ ∏_{i=0}^{n−1} b^(a_i·2^i) (mod m).
The following is an example in pseudocode based on Applied Cryptography by Bruce Schneier.[2] The inputs base, exponent, and modulus correspond to b, e, and m in the equations given above.
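The pseudocode listing itself is not reproduced above; the following Python sketch follows the same right-to-left, square-and-multiply structure (an illustrative rendering, not Schneier's verbatim listing; the name modular_pow is assumed here):

    # Right-to-left binary (square-and-multiply) modular exponentiation.
    def modular_pow(base, exponent, modulus):
        if modulus == 1:
            return 0
        result = 1
        base = base % modulus
        while exponent > 0:
            if exponent % 2 == 1:                # current bit of the exponent is 1
                result = (result * base) % modulus
            exponent = exponent >> 1             # move to the next bit
            base = (base * base) % modulus       # after k iterations, base == b**(2**k) % m
        return result

    print(modular_pow(4, 13, 497))  # 445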
Note that upon entering the loop for the first time, the code variable base is equivalent to b. However, the repeated squaring of base inside the loop ensures that at the completion of every iteration, the variable base is equivalent to b^(2^i) mod m, where i is the number of times the loop has been iterated. (This makes i the index of the next working bit of the binary exponent exponent, where the least-significant bit is bit 0.)
The conditional multiplication step simply carries out the multiplication in ∏_{i=0}^{n−1} b^(a_i·2^i) (mod m). If a_i is zero, no multiplication takes place, since this would only multiply the running total by one. If a_i instead is one, the variable base (containing the value b^(2^i) mod m of the original base) is simply multiplied in.
In this example, the base b is raised to the exponent e = 13.
The exponent is 1101 in binary. There are four binary digits, so the loop executes four times, with values a_0 = 1, a_1 = 0, a_2 = 1, and a_3 = 1.
First, initialize the result R to 1 and preserve the value of b in the variable x:
R ← 1 (= b^0), x ← b.
Step 1 (a_0 = 1): R ← R ⋅ x = b, then x ← x^2 = b^2.
Step 2 (a_1 = 0): R is unchanged (= b), then x ← x^2 = b^4.
Step 3 (a_2 = 1): R ← R ⋅ x = b^5, then x ← x^2 = b^8.
Step 4 (a_3 = 1): R ← R ⋅ x = b^13.
We are done: R is now b^13.
Here is the above calculation, where we compute b = 4 to the power e = 13, performed modulo 497.
Initialize: R ← 1 and x ← 4.
Step 1 (a_0 = 1): R ← 1 ⋅ 4 mod 497 = 4, then x ← 4^2 mod 497 = 16.
Step 2 (a_1 = 0): R is unchanged (= 4), then x ← 16^2 mod 497 = 256.
Step 3 (a_2 = 1): R ← 4 ⋅ 256 mod 497 = 30, then x ← 256^2 mod 497 = 429.
Step 4 (a_3 = 1): R ← 30 ⋅ 429 mod 497 = 445.
We are done: R is now 4^13 ≡ 445 (mod 497), the same result obtained in the previous algorithms.
The running time of this algorithm is O(log exponent). When working with large values of exponent, this offers a substantial speed benefit over the previous two algorithms, whose time is O(exponent). For example, if the exponent was 2^20 = 1048576, this algorithm would have 20 steps instead of 1048576 steps.
We can also use the bits of the exponent in left to right order. In practice, we would usually want the result modulo some modulus m. In that case, we would reduce each multiplication result (mod m) before proceeding. For simplicity, the modulus calculation is omitted here. This example shows how to compute b^13 using left to right binary exponentiation. The exponent is 1101 in binary; there are 4 bits, so there are 4 iterations.
Initialize the result to 1: r ← 1 (= b^0). Then, scanning the bits of the exponent from the most significant to the least significant:
Bit 1: r ← r^2 ⋅ b = b.
Bit 1: r ← r^2 ⋅ b = b^3.
Bit 0: r ← r^2 = b^6.
Bit 1: r ← r^2 ⋅ b = b^13.
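A minimal Python sketch of this left-to-right variant (illustrative only; here the modular reduction is included rather than omitted, and the name mod_pow_left_to_right is assumed):

    # Left-to-right binary exponentiation with modular reduction at each step.
    def mod_pow_left_to_right(b, e, m):
        result = 1
        for bit in bin(e)[2:]:                 # bits of e, most significant first
            result = (result * result) % m     # square for every bit
            if bit == '1':
                result = (result * b) % m      # multiply when the bit is 1
        return result

    print(mod_pow_left_to_right(4, 13, 497))  # 445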
In The Art of Computer Programming, Vol. 2, Seminumerical Algorithms, page 463, Donald Knuth notes that contrary to some assertions, this method does not always give the minimum possible number of multiplications. The smallest counterexample is for a power of 15, when the binary method needs six multiplications. Instead, form x^3 in two multiplications, then x^6 by squaring x^3, then x^12 by squaring x^6, and finally x^15 by multiplying x^12 and x^3, thereby achieving the desired result with only five multiplications. However, many pages follow describing how such sequences might be contrived in general.
The m-th term of any constant-recursive sequence (such as Fibonacci numbers or Perrin numbers) where each term is a linear function of k previous terms can be computed efficiently modulo n by computing A^m mod n, where A is the corresponding k × k companion matrix. The above methods adapt easily to this application. This can be used for primality testing of large numbers n, for example.
A recursive algorithm computes ModExp(A, b, c) = A^b mod c, where A is a square matrix.
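A minimal recursive Python sketch of such a matrix ModExp (illustrative only; the helper names mat_mult_mod and mat_mod_exp are assumptions, and the usage example applies the 2 × 2 Fibonacci companion matrix mentioned above):

    # Recursive modular exponentiation of a square matrix A: A**b mod c.
    def mat_mult_mod(X, Y, c):
        n = len(X)
        return [[sum(X[i][k] * Y[k][j] for k in range(n)) % c
                 for j in range(n)] for i in range(n)]

    def mat_mod_exp(A, b, c):
        n = len(A)
        if b == 0:
            return [[1 if i == j else 0 for j in range(n)] for i in range(n)]  # identity
        half = mat_mod_exp(A, b // 2, c)
        result = mat_mult_mod(half, half, c)
        if b % 2 == 1:
            result = mat_mult_mod(result, A, c)
        return result

    # Fibonacci companion matrix [[1, 1], [1, 0]]: entry [0][1] of A**m is F(m).
    F = mat_mod_exp([[1, 1], [1, 0]], 10, 1000)
    print(F[0][1])  # F(10) mod 1000 = 55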
Diffie–Hellman key exchange uses exponentiation in finite cyclic groups. The above methods for modular matrix exponentiation clearly extend to this context. The modular matrix multiplication C ≡ AB (mod n) is simply replaced everywhere by the group multiplication c = ab.
In quantum computing, modular exponentiation appears as the bottleneck of Shor's algorithm, where it must be computed by a circuit consisting of reversible gates, which can be further broken down into quantum gates appropriate for a specific physical device. Furthermore, in Shor's algorithm it is possible to know the base and the modulus of exponentiation at every call, which enables various circuit optimizations.[3]
Because modular exponentiation is an important operation in computer science, and there are efficient algorithms (see above) that are much faster than simply exponentiating and then taking the remainder, many programming languages and arbitrary-precision integer libraries have a dedicated function to perform modular exponentiation:
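For example, Python's built-in pow accepts an optional third argument giving the modulus (only one such example is shown here; the article's full list of languages and libraries is not reproduced):

    # Python's built-in three-argument pow performs modular exponentiation
    # efficiently even for very large integers.
    print(pow(4, 13, 497))               # 445
    print(pow(5, 3, 13))                 # 8
    print(pow(2, 1_000_003, 10**9 + 7))  # fast even with a huge exponent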
|
https://en.wikipedia.org/wiki/Modular_exponentiation
|
Cryptography, orcryptology(fromAncient Greek:κρυπτός,romanized:kryptós"hidden, secret"; andγράφεινgraphein, "to write", or-λογία-logia, "study", respectively[1]), is the practice and study of techniques forsecure communicationin the presence ofadversarialbehavior.[2]More generally, cryptography is about constructing and analyzingprotocolsthat prevent third parties or the public from reading private messages.[3]Modern cryptography exists at the intersection of the disciplines of mathematics,computer science,information security,electrical engineering,digital signal processing, physics, and others.[4]Core concepts related toinformation security(data confidentiality,data integrity,authentication, andnon-repudiation) are also central to cryptography.[5]Practical applications of cryptography includeelectronic commerce,chip-based payment cards,digital currencies,computer passwords, andmilitary communications.
Cryptography prior to the modern age was effectively synonymous withencryption, converting readable information (plaintext) to unintelligiblenonsensetext (ciphertext), which can only be read by reversing the process (decryption). The sender of an encrypted (coded) message shares the decryption (decoding) technique only with the intended recipients to preclude access from adversaries. The cryptography literatureoften uses the names"Alice" (or "A") for the sender, "Bob" (or "B") for the intended recipient, and "Eve" (or "E") for theeavesdroppingadversary.[6]Since the development ofrotor cipher machinesinWorld War Iand the advent of computers inWorld War II, cryptography methods have become increasingly complex and their applications more varied.
Modern cryptography is heavily based onmathematical theoryand computer science practice; cryptographicalgorithmsare designed aroundcomputational hardness assumptions, making such algorithms hard to break in actual practice by any adversary. While it is theoretically possible to break into a well-designed system, it is infeasible in actual practice to do so. Such schemes, if well designed, are therefore termed "computationally secure". Theoretical advances (e.g., improvements ininteger factorizationalgorithms) and faster computing technology require these designs to be continually reevaluated and, if necessary, adapted.Information-theoretically secureschemes that provably cannot be broken even with unlimited computing power, such as theone-time pad, are much more difficult to use in practice than the best theoretically breakable but computationally secure schemes.
The growth of cryptographic technology has raiseda number of legal issuesin theInformation Age. Cryptography's potential for use as a tool for espionage andseditionhas led many governments to classify it as a weapon and to limit or even prohibit its use and export.[7]In some jurisdictions where the use of cryptography is legal, laws permit investigators tocompel the disclosureofencryption keysfor documents relevant to an investigation.[8][9]Cryptography also plays a major role indigital rights managementandcopyright infringementdisputes with regard todigital media.[10]
The first use of the term "cryptograph" (as opposed to "cryptogram") dates back to the 19th century—originating from "The Gold-Bug", a story byEdgar Allan Poe.[11][12]
Until modern times, cryptography referred almost exclusively to "encryption", which is the process of converting ordinary information (calledplaintext) into an unintelligible form (calledciphertext).[13]Decryption is the reverse, in other words, moving from the unintelligible ciphertext back to plaintext. Acipher(or cypher) is a pair of algorithms that carry out the encryption and the reversing decryption. The detailed operation of a cipher is controlled both by the algorithm and, in each instance, by a "key". The key is a secret (ideally known only to the communicants), usually a string of characters (ideally short so it can be remembered by the user), which is needed to decrypt the ciphertext. In formal mathematical terms, a "cryptosystem" is the ordered list of elements of finite possible plaintexts, finite possible cyphertexts, finite possible keys, and the encryption and decryption algorithms that correspond to each key. Keys are important both formally and in actual practice, as ciphers without variable keys can be trivially broken with only the knowledge of the cipher used and are therefore useless (or even counter-productive) for most purposes. Historically, ciphers were often used directly for encryption or decryption without additional procedures such asauthenticationor integrity checks.
There are two main types of cryptosystems:symmetricandasymmetric. In symmetric systems, the only ones known until the 1970s, the same secret key encrypts and decrypts a message. Data manipulation in symmetric systems is significantly faster than in asymmetric systems. Asymmetric systems use a "public key" to encrypt a message and a related "private key" to decrypt it. The advantage of asymmetric systems is that the public key can be freely published, allowing parties to establish secure communication without having a shared secret key. In practice, asymmetric systems are used to first exchange a secret key, and then secure communication proceeds via a more efficient symmetric system using that key.[14]Examples of asymmetric systems includeDiffie–Hellman key exchange, RSA (Rivest–Shamir–Adleman), ECC (Elliptic Curve Cryptography), andPost-quantum cryptography. Secure symmetric algorithms include the commonly used AES (Advanced Encryption Standard) which replaced the older DES (Data Encryption Standard).[15]Insecure symmetric algorithms include children's language tangling schemes such asPig Latinor othercant, and all historical cryptographic schemes, however seriously intended, prior to the invention of theone-time padearly in the 20th century.
Incolloquialuse, the term "code" is often used to mean any method of encryption or concealment of meaning. However, in cryptography, code has a more specific meaning: the replacement of a unit of plaintext (i.e., a meaningful word or phrase) with acode word(for example, "wallaby" replaces "attack at dawn"). A cypher, in contrast, is a scheme for changing or substituting an element below such a level (a letter, a syllable, or a pair of letters, etc.) to produce a cyphertext.
Cryptanalysisis the term used for the study of methods for obtaining the meaning of encrypted information without access to the key normally required to do so; i.e., it is the study of how to "crack" encryption algorithms or their implementations.
Some use the terms "cryptography" and "cryptology" interchangeably in English,[16]while others (including US military practice generally) use "cryptography" to refer specifically to the use and practice of cryptographic techniques and "cryptology" to refer to the combined study of cryptography and cryptanalysis.[17][18]English is more flexible than several other languages in which "cryptology" (done by cryptologists) is always used in the second sense above.RFC2828advises thatsteganographyis sometimes included in cryptology.[19]
The study of characteristics of languages that have some application in cryptography or cryptology (e.g. frequency data, letter combinations, universal patterns, etc.) is called cryptolinguistics. Cryptolinguistics is especially used in military intelligence applications for deciphering foreign communications.[20][21]
Before the modern era, cryptography focused on message confidentiality (i.e., encryption)—conversion ofmessagesfrom a comprehensible form into an incomprehensible one and back again at the other end, rendering it unreadable by interceptors oreavesdropperswithout secret knowledge (namely the key needed for decryption of that message). Encryption attempted to ensuresecrecyin communications, such as those ofspies, military leaders, and diplomats. In recent decades, the field has expanded beyond confidentiality concerns to include techniques for message integrity checking, sender/receiver identity authentication,digital signatures,interactive proofsandsecure computation, among others.
The main classical cipher types are transposition ciphers, which rearrange the order of letters in a message (e.g., 'hello world' becomes 'ehlol owrdl' in a trivially simple rearrangement scheme), and substitution ciphers, which systematically replace letters or groups of letters with other letters or groups of letters (e.g., 'fly at once' becomes 'gmz bu podf' by replacing each letter with the one following it in the Latin alphabet).[22] Simple versions of either have never offered much confidentiality from enterprising opponents. An early substitution cipher was the Caesar cipher, in which each letter in the plaintext was replaced by a letter three positions further down the alphabet.[23] Suetonius reports that Julius Caesar used it with a shift of three to communicate with his generals. Atbash is an example of an early Hebrew cipher. The earliest known use of cryptography is some carved ciphertext on stone in Egypt (c. 1900 BCE), but this may have been done for the amusement of literate observers rather than as a way of concealing information.
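As a small illustration (a minimal Python sketch added here; the function name caesar_encrypt is not from the original text), a Caesar cipher with a shift of three can be implemented as follows:

    # Caesar cipher: shift each letter three positions down the alphabet.
    def caesar_encrypt(plaintext, shift=3):
        result = []
        for ch in plaintext:
            if ch.isalpha():
                base = ord('A') if ch.isupper() else ord('a')
                result.append(chr((ord(ch) - base + shift) % 26 + base))
            else:
                result.append(ch)   # leave spaces and punctuation unchanged
        return ''.join(result)

    print(caesar_encrypt("attack at dawn"))  # "dwwdfn dw gdzq"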
TheGreeks of Classical timesare said to have known of ciphers (e.g., thescytaletransposition cipher claimed to have been used by theSpartanmilitary).[24]Steganography(i.e., hiding even the existence of a message so as to keep it confidential) was also first developed in ancient times. An early example, fromHerodotus, was a message tattooed on a slave's shaved head and concealed under the regrown hair.[13]Other steganography methods involve 'hiding in plain sight,' such as using amusic cipherto disguise an encrypted message within a regular piece of sheet music. More modern examples of steganography include the use ofinvisible ink,microdots, anddigital watermarksto conceal information.
In India, the 2000-year-oldKama SutraofVātsyāyanaspeaks of two different kinds of ciphers called Kautiliyam and Mulavediya. In the Kautiliyam, the cipher letter substitutions are based on phonetic relations, such as vowels becoming consonants. In the Mulavediya, the cipher alphabet consists of pairing letters and using the reciprocal ones.[13]
InSassanid Persia, there were two secret scripts, according to the Muslim authorIbn al-Nadim: thešāh-dabīrīya(literally "King's script") which was used for official correspondence, and therāz-saharīyawhich was used to communicate secret messages with other countries.[25]
David Kahnnotes inThe Codebreakersthat modern cryptology originated among theArabs, the first people to systematically document cryptanalytic methods.[26]Al-Khalil(717–786) wrote theBook of Cryptographic Messages, which contains the first use ofpermutations and combinationsto list all possible Arabic words with and without vowels.[27]
Ciphertexts produced by aclassical cipher(and some modern ciphers) will reveal statistical information about the plaintext, and that information can often be used to break the cipher. After the discovery offrequency analysis, nearly all such ciphers could be broken by an informed attacker.[28]Such classical ciphers still enjoy popularity today, though mostly aspuzzles(seecryptogram). TheArab mathematicianandpolymathAl-Kindi wrote a book on cryptography entitledRisalah fi Istikhraj al-Mu'amma(Manuscript for the Deciphering Cryptographic Messages), which described the first known use of frequency analysis cryptanalysis techniques.[29][30]
Language letter frequencies may offer little help for some extended historical encryption techniques such ashomophonic cipherthat tend to flatten the frequency distribution. For those ciphers, language letter group (or n-gram) frequencies may provide an attack.
Essentially all ciphers remained vulnerable to cryptanalysis using the frequency analysis technique until the development of thepolyalphabetic cipher, most clearly byLeon Battista Albertiaround the year 1467, though there is some indication that it was already known to Al-Kindi.[30]Alberti's innovation was to use different ciphers (i.e., substitution alphabets) for various parts of a message (perhaps for each successive plaintext letter at the limit). He also invented what was probably the first automaticcipher device, a wheel that implemented a partial realization of his invention. In theVigenère cipher, apolyalphabetic cipher, encryption uses akey word, which controls letter substitution depending on which letter of the key word is used. In the mid-19th centuryCharles Babbageshowed that the Vigenère cipher was vulnerable toKasiski examination, but this was first published about ten years later byFriedrich Kasiski.[31]
Although frequency analysis can be a powerful and general technique against many ciphers, encryption has still often been effective in practice, as many a would-be cryptanalyst was unaware of the technique. Breaking a message without using frequency analysis essentially required knowledge of the cipher used and perhaps of the key involved, thus making espionage, bribery, burglary, defection, etc., more attractive approaches to the cryptanalytically uninformed. It was finally explicitly recognized in the 19th century that secrecy of a cipher's algorithm is not a sensible nor practical safeguard of message security; in fact, it was further realized that any adequate cryptographic scheme (including ciphers) should remain secure even if the adversary fully understands the cipher algorithm itself. Security of the key used should alone be sufficient for a good cipher to maintain confidentiality under an attack. This fundamental principle was first explicitly stated in 1883 byAuguste Kerckhoffsand is generally calledKerckhoffs's Principle; alternatively and more bluntly, it was restated byClaude Shannon, the inventor ofinformation theoryand the fundamentals of theoretical cryptography, asShannon's Maxim—'the enemy knows the system'.
Different physical devices and aids have been used to assist with ciphers. One of the earliest may have been the scytale of ancient Greece, a rod supposedly used by the Spartans as an aid for a transposition cipher. In medieval times, other aids were invented such as thecipher grille, which was also used for a kind of steganography. With the invention of polyalphabetic ciphers came more sophisticated aids such as Alberti's owncipher disk,Johannes Trithemius'tabula rectascheme, andThomas Jefferson'swheel cypher(not publicly known, and reinvented independently byBazeriesaround 1900). Many mechanical encryption/decryption devices were invented early in the 20th century, and several patented, among themrotor machines—famously including theEnigma machineused by the German government and military from the late 1920s and duringWorld War II.[32]The ciphers implemented by better quality examples of these machine designs brought about a substantial increase in cryptanalytic difficulty after WWI.[33]
Cryptanalysis of the new mechanical ciphering devices proved to be both difficult and laborious. In the United Kingdom, cryptanalytic efforts at Bletchley Park during WWII spurred the development of more efficient means for carrying out repetitive tasks, such as military code breaking (decryption). This culminated in the development of the Colossus, the world's first fully electronic, digital, programmable computer, which assisted in the decryption of ciphers generated by the German Army's Lorenz SZ40/42 machine.
Extensive open academic research into cryptography is relatively recent, beginning in the mid-1970s. In the early 1970sIBMpersonnel designed the Data Encryption Standard (DES) algorithm that became the first federal government cryptography standard in the United States.[34]In 1976Whitfield DiffieandMartin Hellmanpublished the Diffie–Hellman key exchange algorithm.[35]In 1977 theRSA algorithmwas published inMartin Gardner'sScientific Americancolumn.[36]Since then, cryptography has become a widely used tool in communications,computer networks, andcomputer securitygenerally.
Some modern cryptographic techniques can only keep their keys secret if certain mathematical problems areintractable, such as theinteger factorizationor thediscrete logarithmproblems, so there are deep connections withabstract mathematics. There are very few cryptosystems that are proven to be unconditionally secure. Theone-time padis one, and was proven to be so by Claude Shannon. There are a few important algorithms that have been proven secure under certain assumptions. For example, the infeasibility of factoring extremely large integers is the basis for believing that RSA is secure, and some other systems, but even so, proof of unbreakability is unavailable since the underlying mathematical problem remains open. In practice, these are widely used, and are believed unbreakable in practice by most competent observers. There are systems similar to RSA, such as one byMichael O. Rabinthat are provably secure provided factoringn = pqis impossible; it is quite unusable in practice. Thediscrete logarithm problemis the basis for believing some other cryptosystems are secure, and again, there are related, less practical systems that are provably secure relative to the solvability or insolvability discrete log problem.[37]
As well as being aware of cryptographic history, cryptographic algorithm and system designers must also sensibly consider probable future developments while working on their designs. For instance, continuous improvements in computer processing power have increased the scope of brute-force attacks, so when specifying key lengths, the required key lengths are similarly advancing.[38] The potential impact of quantum computing is already being considered by some cryptographic system designers developing post-quantum cryptography. The announced imminence of small implementations of these machines may be making the need for preemptive caution rather more than merely speculative.[5]
Claude Shannon's two papers, his1948 paperoninformation theory, and especially his1949 paperon cryptography, laid the foundations of modern cryptography and provided a mathematical basis for future cryptography.[39][40]His 1949 paper has been noted as having provided a "solid theoretical basis for cryptography and for cryptanalysis",[41]and as having turned cryptography from an "art to a science".[42]As a result of his contributions and work, he has been described as the "founding father of modern cryptography".[43]
Prior to the early 20th century, cryptography was mainly concerned withlinguisticandlexicographicpatterns. Since then cryptography has broadened in scope, and now makes extensive use of mathematical subdisciplines, including information theory,computational complexity, statistics,combinatorics,abstract algebra,number theory, andfinite mathematics.[44]Cryptography is also a branch of engineering, but an unusual one since it deals with active, intelligent, and malevolent opposition; other kinds of engineering (e.g., civil or chemical engineering) need deal only with neutral natural forces. There is also active research examining the relationship between cryptographic problems andquantum physics.
Just as the development of digital computers and electronics helped in cryptanalysis, it made possible much more complex ciphers. Furthermore, computers allowed for the encryption of any kind of data representable in any binary format, unlike classical ciphers which only encrypted written language texts; this was new and significant. Computer use has thus supplanted linguistic cryptography, both for cipher design and cryptanalysis. Many computer ciphers can be characterized by their operation onbinarybitsequences (sometimes in groups or blocks), unlike classical and mechanical schemes, which generally manipulate traditional characters (i.e., letters and digits) directly. However, computers have also assisted cryptanalysis, which has compensated to some extent for increased cipher complexity. Nonetheless, good modern ciphers have stayed ahead of cryptanalysis; it is typically the case that use of a quality cipher is very efficient (i.e., fast and requiring few resources, such as memory or CPU capability), while breaking it requires an effort many orders of magnitude larger, and vastly larger than that required for any classical cipher, making cryptanalysis so inefficient and impractical as to be effectively impossible.
Symmetric-key cryptography refers to encryption methods in which both the sender and receiver share the same key (or, less commonly, in which their keys are different, but related in an easily computable way). This was the only kind of encryption publicly known until June 1976.[35]
Symmetric key ciphers are implemented as eitherblock ciphersorstream ciphers. A block cipher enciphers input in blocks of plaintext as opposed to individual characters, the input form used by a stream cipher.
TheData Encryption Standard(DES) and theAdvanced Encryption Standard(AES) are block cipher designs that have been designatedcryptography standardsby the US government (though DES's designation was finally withdrawn after the AES was adopted).[45]Despite its deprecation as an official standard, DES (especially its still-approved and much more securetriple-DESvariant) remains quite popular; it is used across a wide range of applications, from ATM encryption[46]toe-mail privacy[47]andsecure remote access.[48]Many other block ciphers have been designed and released, with considerable variation in quality. Many, even some designed by capable practitioners, have been thoroughly broken, such asFEAL.[5][49]
Stream ciphers, in contrast to the 'block' type, create an arbitrarily long stream of key material, which is combined with the plaintext bit-by-bit or character-by-character, somewhat like theone-time pad. In a stream cipher, the output stream is created based on a hidden internal state that changes as the cipher operates. That internal state is initially set up using the secret key material.RC4is a widely used stream cipher.[5]Block ciphers can be used as stream ciphers by generating blocks of a keystream (in place of aPseudorandom number generator) and applying anXORoperation to each bit of the plaintext with each bit of the keystream.[50]
Message authentication codes (MACs) are much like cryptographic hash functions, except that a secret key can be used to authenticate the hash value upon receipt;[5][51] this additional complication blocks an attack scheme against bare digest algorithms, and so has been thought worth the effort. Cryptographic hash functions are a third type of cryptographic algorithm. They take a message of any length as input, and output a short, fixed-length hash, which can be used in (for example) a digital signature. For good hash functions, an attacker cannot find two messages that produce the same hash; the principal hash families in use (MD5, SHA-1, SHA-2, SHA-3) are discussed further below.
Symmetric-key cryptosystems use the same key for encryption and decryption of a message, although a message or group of messages can have a different key than others. A significant disadvantage of symmetric ciphers is thekey managementnecessary to use them securely. Each distinct pair of communicating parties must, ideally, share a different key, and perhaps for each ciphertext exchanged as well. The number of keys required increases as thesquareof the number of network members, which very quickly requires complex key management schemes to keep them all consistent and secret.
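Concretely (a worked count added for illustration, not part of the original text), n communicating parties need a distinct key for every pair:

    \binom{n}{2} = \frac{n(n-1)}{2} \ \text{keys; e.g. } n = 1000 \text{ members already require } \frac{1000 \cdot 999}{2} = 499{,}500 \text{ keys.}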
In a groundbreaking 1976 paper, Whitfield Diffie and Martin Hellman proposed the notion ofpublic-key(also, more generally, calledasymmetric key) cryptography in which two different but mathematically related keys are used—apublickey and aprivatekey.[54]A public key system is so constructed that calculation of one key (the 'private key') is computationally infeasible from the other (the 'public key'), even though they are necessarily related. Instead, both keys are generated secretly, as an interrelated pair.[55]The historianDavid Kahndescribed public-key cryptography as "the most revolutionary new concept in the field since polyalphabetic substitution emerged in the Renaissance".[56]
In public-key cryptosystems, the public key may be freely distributed, while its paired private key must remain secret. In a public-key encryption system, thepublic keyis used for encryption, while theprivateorsecret keyis used for decryption. While Diffie and Hellman could not find such a system, they showed that public-key cryptography was indeed possible by presenting theDiffie–Hellman key exchangeprotocol, a solution that is now widely used in secure communications to allow two parties to secretly agree on ashared encryption key.[35]TheX.509standard defines the most commonly used format forpublic key certificates.[57]
Diffie and Hellman's publication sparked widespread academic efforts in finding a practical public-key encryption system. This race was finally won in 1978 byRonald Rivest,Adi Shamir, andLen Adleman, whose solution has since become known as theRSA algorithm.[58]
TheDiffie–HellmanandRSA algorithms, in addition to being the first publicly known examples of high-quality public-key algorithms, have been among the most widely used. Otherasymmetric-key algorithmsinclude theCramer–Shoup cryptosystem,ElGamal encryption, and variouselliptic curve techniques.
A document published in 1997 by the Government Communications Headquarters (GCHQ), a British intelligence organization, revealed that cryptographers at GCHQ had anticipated several academic developments.[59]Reportedly, around 1970,James H. Ellishad conceived the principles of asymmetric key cryptography. In 1973,Clifford Cocksinvented a solution that was very similar in design rationale to RSA.[59][60]In 1974,Malcolm J. Williamsonis claimed to have developed the Diffie–Hellman key exchange.[61]
Public-key cryptography is also used for implementingdigital signatureschemes. A digital signature is reminiscent of an ordinary signature; they both have the characteristic of being easy for a user to produce, but difficult for anyone else toforge. Digital signatures can also be permanently tied to the content of the message being signed; they cannot then be 'moved' from one document to another, for any attempt will be detectable. In digital signature schemes, there are two algorithms: one forsigning, in which a secret key is used to process the message (or a hash of the message, or both), and one forverification, in which the matching public key is used with the message to check the validity of the signature. RSA andDSAare two of the most popular digital signature schemes. Digital signatures are central to the operation ofpublic key infrastructuresand many network security schemes (e.g.,SSL/TLS, manyVPNs, etc.).[49]
Public-key algorithms are most often based on thecomputational complexityof "hard" problems, often fromnumber theory. For example, the hardness of RSA is related to theinteger factorizationproblem, while Diffie–Hellman and DSA are related to thediscrete logarithmproblem. The security ofelliptic curve cryptographyis based on number theoretic problems involvingelliptic curves. Because of the difficulty of the underlying problems, most public-key algorithms involve operations such asmodularmultiplication and exponentiation, which are much more computationally expensive than the techniques used in most block ciphers, especially with typical key sizes. As a result, public-key cryptosystems are commonlyhybrid cryptosystems, in which a fast high-quality symmetric-key encryption algorithm is used for the message itself, while the relevant symmetric key is sent with the message, but encrypted using a public-key algorithm. Similarly, hybrid signature schemes are often used, in which a cryptographic hash function is computed, and only the resulting hash is digitally signed.[5]
Cryptographic hash functions are functions that take a variable-length input and return a fixed-length output, which can be used in, for example, a digital signature. For a hash function to be secure, it must be difficult to compute two inputs that hash to the same value (collision resistance) and to compute an input that hashes to a given output (preimage resistance).MD4is a long-used hash function that is now broken;MD5, a strengthened variant of MD4, is also widely used but broken in practice. The USNational Security Agencydeveloped the Secure Hash Algorithm series of MD5-like hash functions: SHA-0 was a flawed algorithm that the agency withdrew;SHA-1is widely deployed and more secure than MD5, but cryptanalysts have identified attacks against it; theSHA-2family improves on SHA-1, but is vulnerable to clashes as of 2011; and the US standards authority thought it "prudent" from a security perspective to develop a new standard to "significantly improve the robustness ofNIST's overall hash algorithm toolkit."[52]Thus, ahash function design competitionwas meant to select a new U.S. national standard, to be calledSHA-3, by 2012. The competition ended on October 2, 2012, when the NIST announced thatKeccakwould be the new SHA-3 hash algorithm.[53]Unlike block and stream ciphers that are invertible, cryptographic hash functions produce a hashed output that cannot be used to retrieve the original input data. Cryptographic hash functions are used to verify the authenticity of data retrieved from an untrusted source or to add a layer of security.
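As a brief illustration (a Python sketch using the standard hashlib module, added here rather than taken from the original article), a cryptographic hash maps inputs of any length to a fixed-length digest, and even a tiny change in the input yields a completely different hash:

    import hashlib

    # SHA-256 always produces a 256-bit (64 hex character) digest.
    print(hashlib.sha256(b"attack at dawn").hexdigest())
    print(hashlib.sha256(b"attack at dusk").hexdigest())   # completely different digest
    print(len(hashlib.sha256(b"a much, much longer input message").hexdigest()))  # 64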
The goal of cryptanalysis is to find some weakness or insecurity in a cryptographic scheme, thus permitting its subversion or evasion.
It is a common misconception that every encryption method can be broken. In connection with his WWII work atBell Labs,Claude Shannonproved that theone-time padcipher is unbreakable, provided the key material is trulyrandom, never reused, kept secret from all possible attackers, and of equal or greater length than the message.[62]Mostciphers, apart from the one-time pad, can be broken with enough computational effort bybrute force attack, but the amount of effort needed may beexponentiallydependent on the key size, as compared to the effort needed to make use of the cipher. In such cases, effective security could be achieved if it is proven that the effort required (i.e., "work factor", in Shannon's terms) is beyond the ability of any adversary. This means it must be shown that no efficient method (as opposed to the time-consuming brute force method) can be found to break the cipher. Since no such proof has been found to date, the one-time-pad remains the only theoretically unbreakable cipher. Although well-implemented one-time-pad encryption cannot be broken, traffic analysis is still possible.
There are a wide variety of cryptanalytic attacks, and they can be classified in any of several ways. A common distinction turns on what Eve (an attacker) knows and what capabilities are available. In aciphertext-only attack, Eve has access only to the ciphertext (good modern cryptosystems are usually effectively immune to ciphertext-only attacks). In aknown-plaintext attack, Eve has access to a ciphertext and its corresponding plaintext (or to many such pairs). In achosen-plaintext attack, Eve may choose a plaintext and learn its corresponding ciphertext (perhaps many times); an example isgardening, used by the British during WWII. In achosen-ciphertext attack, Eve may be able tochooseciphertexts and learn their corresponding plaintexts.[5]Finally in aman-in-the-middleattack Eve gets in between Alice (the sender) and Bob (the recipient), accesses and modifies the traffic and then forward it to the recipient.[63]Also important, often overwhelmingly so, are mistakes (generally in the design or use of one of theprotocolsinvolved).
Cryptanalysis of symmetric-key ciphers typically involves looking for attacks against the block ciphers or stream ciphers that are more efficient than any attack that could be mounted against a perfect cipher. For example, a simple brute force attack against DES requires one known plaintext and 2^55 decryptions, trying approximately half of the possible keys, to reach a point at which chances are better than even that the key sought will have been found. But this may not be enough assurance; a linear cryptanalysis attack against DES requires 2^43 known plaintexts (with their corresponding ciphertexts) and approximately 2^43 DES operations.[64] This is a considerable improvement over brute force attacks.
Public-key algorithms are based on the computational difficulty of various problems. The most famous of these are the difficulty ofinteger factorizationofsemiprimesand the difficulty of calculatingdiscrete logarithms, both of which are not yet proven to be solvable inpolynomial time(P) using only a classicalTuring-completecomputer. Much public-key cryptanalysis concerns designing algorithms inPthat can solve these problems, or using other technologies, such asquantum computers. For instance, the best-known algorithms for solving theelliptic curve-basedversion of discrete logarithm are much more time-consuming than the best-known algorithms for factoring, at least for problems of more or less equivalent size. Thus, to achieve an equivalent strength of encryption, techniques that depend upon the difficulty of factoring large composite numbers, such as the RSA cryptosystem, require larger keys than elliptic curve techniques. For this reason, public-key cryptosystems based on elliptic curves have become popular since their invention in the mid-1990s.
While pure cryptanalysis uses weaknesses in the algorithms themselves, other attacks on cryptosystems are based on actual use of the algorithms in real devices, and are calledside-channel attacks. If a cryptanalyst has access to, for example, the amount of time the device took to encrypt a number of plaintexts or report an error in a password or PIN character, they may be able to use atiming attackto break a cipher that is otherwise resistant to analysis. An attacker might also study the pattern and length of messages to derive valuable information; this is known astraffic analysis[65]and can be quite useful to an alert adversary. Poor administration of a cryptosystem, such as permitting too short keys, will make any system vulnerable, regardless of other virtues.Social engineeringand other attacks against humans (e.g., bribery,extortion,blackmail, espionage,rubber-hose cryptanalysisor torture) are usually employed due to being more cost-effective and feasible to perform in a reasonable amount of time compared to pure cryptanalysis by a high margin.
Much of the theoretical work in cryptography concernscryptographicprimitives—algorithms with basic cryptographic properties—and their relationship to other cryptographic problems. More complicated cryptographic tools are then built from these basic primitives. These primitives provide fundamental properties, which are used to develop more complex tools calledcryptosystemsorcryptographic protocols, which guarantee one or more high-level security properties. Note, however, that the distinction between cryptographicprimitivesand cryptosystems, is quite arbitrary; for example, the RSA algorithm is sometimes considered a cryptosystem, and sometimes a primitive. Typical examples of cryptographic primitives includepseudorandom functions,one-way functions, etc.
One or more cryptographic primitives are often used to develop a more complex algorithm, called a cryptographic system, orcryptosystem. Cryptosystems (e.g.,El-Gamal encryption) are designed to provide particular functionality (e.g., public key encryption) while guaranteeing certain security properties (e.g.,chosen-plaintext attack (CPA)security in therandom oracle model). Cryptosystems use the properties of the underlying cryptographic primitives to support the system's security properties. As the distinction between primitives and cryptosystems is somewhat arbitrary, a sophisticated cryptosystem can be derived from a combination of several more primitive cryptosystems. In many cases, the cryptosystem's structure involves back and forth communication among two or more parties in space (e.g., between the sender of a secure message and its receiver) or across time (e.g., cryptographically protectedbackupdata). Such cryptosystems are sometimes calledcryptographic protocols.
Some widely known cryptosystems include RSA,Schnorr signature,ElGamal encryption, andPretty Good Privacy(PGP). More complex cryptosystems includeelectronic cash[66]systems,signcryptionsystems, etc. Some more 'theoretical'[clarification needed]cryptosystems includeinteractive proof systems,[67](likezero-knowledge proofs)[68]and systems forsecret sharing.[69][70]
Lightweight cryptography (LWC) concerns cryptographic algorithms developed for a strictly constrained environment. The growth ofInternet of Things (IoT)has spiked research into the development of lightweight algorithms that are better suited for the environment. An IoT environment requires strict constraints on power consumption, processing power, and security.[71]Algorithms such as PRESENT,AES, andSPECKare examples of the many LWC algorithms that have been developed to achieve the standard set by theNational Institute of Standards and Technology.[72]
Cryptography is widely used on the internet to help protect user data and prevent eavesdropping. To ensure secrecy during transmission, many systems use private key cryptography to protect transmitted information. With public-key systems, one can maintain secrecy without a master key or a large number of keys.[73] But some algorithms like BitLocker and VeraCrypt are generally not private–public key cryptography. For example, VeraCrypt uses a password hash to generate the single private key. However, it can be configured to run in public–private key systems. The open-source encryption library OpenSSL provides free and open-source encryption software and tools. The most commonly used encryption cipher suite is AES,[74] as it has hardware acceleration for all x86-based processors that have AES-NI. A close contender is ChaCha20-Poly1305, which is a stream cipher; however, it is commonly used for mobile devices, as they are ARM-based and do not feature the AES-NI instruction set extension.
Cryptography can be used to secure communications by encrypting them. Websites use encryption viaHTTPS.[75]"End-to-end" encryption, where only sender and receiver can read messages, is implemented for email inPretty Good Privacyand for secure messaging in general inWhatsApp,SignalandTelegram.[75]
Operating systems use encryption to keep passwords secret, conceal parts of the system, and ensure that software updates are truly from the system maker.[75]Instead of storing plaintext passwords, computer systems store hashes thereof; then, when a user logs in, the system passes the given password through a cryptographic hash function and compares it to the hashed value on file. In this manner, neither the system nor an attacker has at any point access to the password in plaintext.[75]
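As an illustration of the password-hashing idea described above (a minimal Python sketch using the standard library's PBKDF2 implementation; the helper names are assumptions, and real systems additionally manage salts, iteration counts, and storage formats carefully):

    import hashlib, hmac, os

    def hash_password(password, salt=None, iterations=200_000):
        salt = salt or os.urandom(16)                 # random per-user salt
        digest = hashlib.pbkdf2_hmac('sha256', password.encode(), salt, iterations)
        return salt, digest                           # store both; never store the plaintext

    def verify_password(password, salt, stored_digest, iterations=200_000):
        candidate = hashlib.pbkdf2_hmac('sha256', password.encode(), salt, iterations)
        return hmac.compare_digest(candidate, stored_digest)  # constant-time comparison

    salt, stored = hash_password("correct horse battery staple")
    print(verify_password("correct horse battery staple", salt, stored))  # True
    print(verify_password("wrong password", salt, stored))                # False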
Encryption is sometimes used to encrypt one's entire drive. For example,University College Londonhas implementedBitLocker(a program by Microsoft) to render drive data opaque without users logging in.[75]
Cryptographic techniques enablecryptocurrencytechnologies, such asdistributed ledger technologies(e.g.,blockchains), which financecryptoeconomicsapplications such asdecentralized finance (DeFi). Key cryptographic techniques that enable cryptocurrencies and cryptoeconomics include, but are not limited to:cryptographic keys, cryptographic hash function,asymmetric (public key) encryption,Multi-Factor Authentication (MFA),End-to-End Encryption (E2EE), andZero Knowledge Proofs (ZKP).
Cryptography has long been of interest to intelligence gathering andlaw enforcement agencies.[9]Secret communications may be criminal or eventreasonous.[citation needed]Because of its facilitation ofprivacy, and the diminution of privacy attendant on its prohibition, cryptography is also of considerable interest to civil rights supporters. Accordingly, there has been a history of controversial legal issues surrounding cryptography, especially since the advent of inexpensive computers has made widespread access to high-quality cryptography possible.
In some countries, even the domestic use of cryptography is, or has been, restricted. Until 1999, France significantly restricted the use of cryptography domestically, though it has since relaxed many of these rules. InChinaandIran, a license is still required to use cryptography.[7]Many countries have tight restrictions on the use of cryptography. Among the more restrictive are laws inBelarus,Kazakhstan,Mongolia,Pakistan, Singapore,Tunisia, andVietnam.[76]
In the United States, cryptography is legal for domestic use, but there has been much conflict over legal issues related to cryptography.[9]One particularly important issue has been theexport of cryptographyand cryptographic software and hardware. Probably because of the importance of cryptanalysis inWorld War IIand an expectation that cryptography would continue to be important for national security, many Western governments have, at some point, strictly regulated export of cryptography. After World War II, it was illegal in the US to sell or distribute encryption technology overseas; in fact, encryption was designated as auxiliary military equipment and put on theUnited States Munitions List.[77]Until the development of the personal computer, asymmetric key algorithms (i.e., public key techniques), and the Internet, this was not especially problematic. However, as the Internet grew and computers became more widely available, high-quality encryption techniques became well known around the globe.
In the 1990s, there were several challenges to US export regulation of cryptography. After thesource codeforPhilip Zimmermann'sPretty Good Privacy(PGP) encryption program found its way onto the Internet in June 1991, a complaint byRSA Security(then called RSA Data Security, Inc.) resulted in a lengthy criminal investigation of Zimmermann by the US Customs Service and theFBI, though no charges were ever filed.[78][79]Daniel J. Bernstein, then a graduate student atUC Berkeley, brought a lawsuit against the US government challenging some aspects of the restrictions based onfree speechgrounds. The 1995 caseBernstein v. United Statesultimately resulted in a 1999 decision that printed source code for cryptographic algorithms and systems was protected asfree speechby the United States Constitution.[80]
In 1996, thirty-nine countries signed theWassenaar Arrangement, an arms control treaty that deals with the export of arms and "dual-use" technologies such as cryptography. The treaty stipulated that the use of cryptography with short key-lengths (56-bit for symmetric encryption, 512-bit for RSA) would no longer be export-controlled.[81]Cryptography exports from the US became less strictly regulated as a consequence of a major relaxation in 2000;[82]there are no longer very many restrictions on key sizes in US-exportedmass-market software. Since this relaxation in US export restrictions, and because most personal computers connected to the Internet include US-sourcedweb browserssuch asFirefoxorInternet Explorer, almost every Internet user worldwide has potential access to quality cryptography via their browsers (e.g., viaTransport Layer Security). TheMozilla ThunderbirdandMicrosoft OutlookE-mail clientprograms similarly can transmit and receive emails via TLS, and can send and receive email encrypted withS/MIME. Many Internet users do not realize that their basic application software contains such extensivecryptosystems. These browsers and email programs are so ubiquitous that even governments whose intent is to regulate civilian use of cryptography generally do not find it practical to do much to control distribution or use of cryptography of this quality, so even when such laws are in force, actual enforcement is often effectively impossible.[citation needed]
Another contentious issue connected to cryptography in the United States is the influence of the National Security Agency on cipher development and policy.[9] The NSA was involved with the design of DES during its development at IBM and its consideration by the National Bureau of Standards as a possible Federal Standard for cryptography.[83] DES was designed to be resistant to differential cryptanalysis,[84] a powerful and general cryptanalytic technique known to the NSA and IBM but, according to Steven Levy, kept secret at the NSA's request.[79][85] The technique became publicly known only when Biham and Shamir rediscovered and announced it in the late 1980s. The entire affair illustrates the difficulty of determining what resources and knowledge an attacker might actually have.
Another instance of the NSA's involvement was the 1993Clipper chipaffair, an encryption microchip intended to be part of theCapstonecryptography-control initiative. Clipper was widely criticized by cryptographers for two reasons. The cipher algorithm (calledSkipjack) was then classified (declassified in 1998, long after the Clipper initiative lapsed). The classified cipher caused concerns that the NSA had deliberately made the cipher weak to assist its intelligence efforts. The whole initiative was also criticized based on its violation ofKerckhoffs's Principle, as the scheme included a specialescrow keyheld by the government for use by law enforcement (i.e.wiretapping).[79]
Cryptography is central to digital rights management (DRM), a group of techniques for technologically controlling use ofcopyrightedmaterial, being widely implemented and deployed at the behest of some copyright holders. In 1998, U.S. PresidentBill Clintonsigned theDigital Millennium Copyright Act(DMCA), which criminalized all production, dissemination, and use of certain cryptanalytic techniques and technology (now known or later discovered); specifically, those that could be used to circumvent DRM technological schemes.[86]This had a noticeable impact on the cryptography research community since an argument can be made that any cryptanalytic research violated the DMCA. Similar statutes have since been enacted in several countries and regions, including the implementation in theEU Copyright Directive. Similar restrictions are called for by treaties signed byWorld Intellectual Property Organizationmember-states.
TheUnited States Department of JusticeandFBIhave not enforced the DMCA as rigorously as had been feared by some, but the law, nonetheless, remains a controversial one.Niels Ferguson, a well-respected cryptography researcher, has publicly stated that he will not release some of his research into anIntelsecurity design for fear of prosecution under the DMCA.[87]CryptologistBruce Schneierhas argued that the DMCA encouragesvendor lock-in, while inhibiting actual measures toward cyber-security.[88]BothAlan Cox(longtimeLinux kerneldeveloper) andEdward Felten(and some of his students at Princeton) have encountered problems related to the Act.Dmitry Sklyarovwas arrested during a visit to the US from Russia, and jailed for five months pending trial for alleged violations of the DMCA arising from work he had done in Russia, where the work was legal. In 2007, the cryptographic keys responsible forBlu-rayandHD DVDcontent scrambling werediscovered and released onto the Internet. In both cases, theMotion Picture Association of Americasent out numerous DMCA takedown notices, and there was a massive Internet backlash[10]triggered by the perceived impact of such notices onfair useandfree speech.
In the United Kingdom, theRegulation of Investigatory Powers Actgives UK police the powers to force suspects to decrypt files or hand over passwords that protect encryption keys. Failure to comply is an offense in its own right, punishable on conviction by a two-year jail sentence or up to five years in cases involving national security.[8]Successful prosecutions have occurred under the Act; the first, in 2009,[89]resulted in a term of 13 months' imprisonment.[90]Similar forced disclosure laws in Australia, Finland, France, and India compel individual suspects under investigation to hand over encryption keys or passwords during a criminal investigation.
In the United States, the federal criminal case ofUnited States v. Fricosuaddressed whether a search warrant can compel a person to reveal anencryptionpassphraseor password.[91]TheElectronic Frontier Foundation(EFF) argued that this is a violation of the protection from self-incrimination given by theFifth Amendment.[92]In 2012, the court ruled that under theAll Writs Act, the defendant was required to produce an unencrypted hard drive for the court.[93]
In many jurisdictions, the legal status of forced disclosure remains unclear.
The 2016FBI–Apple encryption disputeconcerns the ability of courts in the United States to compel manufacturers' assistance in unlocking cell phones whose contents are cryptographically protected.
As a potential counter-measure to forced disclosure some cryptographic software supportsplausible deniability, where the encrypted data is indistinguishable from unused random data (for example such as that of adrive which has been securely wiped).
|
https://en.wikipedia.org/wiki/Cryptography#Kerckhoffs%27_principle
|
Inmathematics, amultiplicative inverseorreciprocalfor anumberx, denoted by 1/xorx−1, is a number which whenmultipliedbyxyields themultiplicative identity, 1. The multiplicative inverse of afractiona/bisb/a. For the multiplicative inverse of a real number, divide 1 by the number. For example, the reciprocal of 5 is one fifth (1/5 or 0.2), and the reciprocal of 0.25 is 1 divided by 0.25, or 4. Thereciprocal function, thefunctionf(x) that mapsxto 1/x, is one of the simplest examples of a function which is its own inverse (aninvolution).
Multiplying by a number is the same asdividingby its reciprocal and vice versa. For example, multiplication by 4/5 (or 0.8) will give the same result as division by 5/4 (or 1.25). Therefore, multiplication by a number followed by multiplication by its reciprocal yields the original number (since the product of the number and its reciprocal is 1).
The termreciprocalwas in common use at least as far back as the third edition ofEncyclopædia Britannica(1797) to describe two numbers whose product is 1; geometrical quantities in inverse proportion are described asreciprocallin a 1570 translation ofEuclid'sElements.[1]
In the phrasemultiplicative inverse, the qualifiermultiplicativeis often omitted and then tacitly understood (in contrast to theadditive inverse). Multiplicative inverses can be defined over many mathematical domains as well as numbers. In these cases it can happen thatab≠ba; then "inverse" typically implies that an element is both a left and rightinverse.
The notation $f^{-1}$ is sometimes also used for the inverse function of the function $f$, which for most functions is not equal to the multiplicative inverse. For example, the multiplicative inverse $1/(\sin x) = (\sin x)^{-1}$ is the cosecant of $x$, not the inverse sine of $x$ denoted by $\sin^{-1} x$ or $\arcsin x$. The terminology difference reciprocal versus inverse is not sufficient to make this distinction, since many authors prefer the opposite naming convention, probably for historical reasons (for example, in French the inverse function is preferably called the bijection réciproque).
In the real numbers,zerodoes not have a reciprocal (division by zeroisundefined) because no real number multiplied by 0 produces 1 (the product of any number with zero is zero). With the exception of zero, reciprocals of everyreal numberare real, reciprocals of everyrational numberare rational, and reciprocals of everycomplex numberare complex. The property that every element other than zero has a multiplicative inverse is part of the definition of afield, of which these are all examples. On the other hand, nointegerother than 1 and −1 has an integer reciprocal, and so the integers are not a field.
Inmodular arithmetic, themodular multiplicative inverseofais also defined: it is the numberxsuch thatax≡ 1 (modn). This multiplicative inverse existsif and only ifaandnarecoprime. For example, the inverse of 3 modulo 11 is 4 because4 ⋅ 3 ≡ 1 (mod 11). Theextended Euclidean algorithmmay be used to compute it.
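As a minimal illustration of the extended Euclidean algorithm computing a modular inverse, the Python sketch below recovers the inverse of 3 modulo 11; the helper names are just illustrative (Python 3.8+ also exposes the same operation directly as `pow(a, -1, n)`).

```python
def extended_gcd(a, b):
    """Return (g, x, y) such that a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    # g == b*x + (a % b)*y  implies  g == a*y + b*(x - (a // b)*y)
    return g, y, x - (a // b) * y

def mod_inverse(a, n):
    """Multiplicative inverse of a modulo n, defined only when gcd(a, n) == 1."""
    g, x, _ = extended_gcd(a % n, n)
    if g != 1:
        raise ValueError(f"{a} has no inverse modulo {n} (not coprime)")
    return x % n

print(mod_inverse(3, 11))   # 4, since 4 * 3 = 12 ≡ 1 (mod 11)
print(pow(3, -1, 11))       # same result using the built-in (Python 3.8+)
```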
The sedenions are an algebra in which every nonzero element has a multiplicative inverse, but which nonetheless has divisors of zero, that is, nonzero elements $x$, $y$ such that $xy = 0$.
Asquare matrixhas an inverseif and only ifitsdeterminanthas an inverse in the coefficientring. The linear map that has the matrixA−1with respect to some base is then the inverse function of the map havingAas matrix in the same base. Thus, the two distinct notions of the inverse of a function are strongly related in this case, but they still do not coincide, since the multiplicative inverse ofAxwould be (Ax)−1, notA−1x.
These two notions of an inverse function do sometimes coincide, for example for the function $f(x) = x^i = e^{i\ln(x)}$, where $\ln$ is the principal branch of the complex logarithm and $e^{-\pi} < |x| < e^{\pi}$.
Thetrigonometric functionsare related by the reciprocal identity: the cotangent is the reciprocal of the tangent; the secant is the reciprocal of the cosine; the cosecant is the reciprocal of the sine.
A ring in which every nonzero element has a multiplicative inverse is adivision ring; likewise analgebrain which this holds is adivision algebra.
As mentioned above, the reciprocal of every nonzero complex number $z = a + bi$ is complex. It can be found by multiplying both the numerator and denominator of $1/z$ by its complex conjugate $\bar{z} = a - bi$ and using the property that $z\bar{z} = \|z\|^2$, the squared absolute value of $z$, which is the real number $a^2 + b^2$:
$$\frac{1}{z} = \frac{\bar{z}}{z\bar{z}} = \frac{\bar{z}}{\|z\|^2} = \frac{a - bi}{a^2 + b^2}.$$
The intuition is that $\bar{z}/\|z\|$ gives the complex conjugate with its magnitude reduced to $1$, so dividing again by $\|z\|$ ensures that the magnitude is also the reciprocal of the original magnitude, hence $1/z = \bar{z}/\|z\|^2$.
In particular, if $\|z\| = 1$ ($z$ has unit magnitude), then $1/z = \bar{z}$. Consequently, the imaginary units, $\pm i$, have additive inverse equal to multiplicative inverse, and are the only complex numbers with this property. For example, the additive and multiplicative inverses of $i$ are $-(i) = -i$ and $1/i = -i$, respectively.
For a complex number in polar form $z = r(\cos\varphi + i\sin\varphi)$, the reciprocal simply takes the reciprocal of the magnitude and the negative of the angle: $1/z = \tfrac{1}{r}\bigl(\cos(-\varphi) + i\sin(-\varphi)\bigr)$.
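A short Python sketch can confirm both descriptions numerically, using the standard `complex` type and the `cmath` module; the sample value $z = 3 + 4i$ is arbitrary.

```python
import cmath

z = 3 + 4j
# Reciprocal via the conjugate: 1/z = conj(z) / |z|^2
recip = z.conjugate() / abs(z) ** 2
print(recip)                 # (0.12-0.16j)
print(recip * z)             # (1+0j), up to floating-point rounding

# Polar form: the magnitude is inverted and the angle is negated
r, phi = cmath.polar(z)
print(cmath.polar(1 / z))    # approximately (1/r, -phi) = (0.2, -0.927...)
```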
In realcalculus, thederivativeof1/x=x−1is given by thepower rulewith the power −1:
The power rule for integrals (Cavalieri's quadrature formula) cannot be used to compute the integral of $1/x$, because doing so would result in division by 0:
$$\int \frac{dx}{x} = \frac{x^0}{0} + C.$$
Instead the integral is given by
$$\int_1^a \frac{dx}{x} = \ln a, \qquad \int \frac{dx}{x} = \ln x + C,$$
where $\ln$ is the natural logarithm. To show this, note that $\frac{d}{dy}e^y = e^y$, so if $x = e^y$ and $y = \ln x$, we have:[2]
$$\frac{dx}{dy} = x \quad\Rightarrow\quad \frac{dx}{x} = dy \quad\Rightarrow\quad \int \frac{dx}{x} = \int dy = y + C = \ln x + C.$$
The reciprocal may be computed by hand with the use oflong division.
Computing the reciprocal is important in many division algorithms, since the quotient $a/b$ can be computed by first computing $1/b$ and then multiplying it by $a$. Noting that $f(x) = 1/x - b$ has a zero at $x = 1/b$, Newton's method can find that zero, starting with a guess $x_0$ and iterating using the rule
$$x_{n+1} = x_n(2 - b x_n).$$
This continues until the desired precision is reached. For example, suppose we wish to compute 1/17 ≈ 0.0588 with 3 digits of precision. Takingx0= 0.1, the following sequence is produced:
A typical initial guess can be found by roundingbto a nearby power of 2, then usingbit shiftsto compute its reciprocal.
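The following Python sketch runs this iteration for the 1/17 example above; the function name and the choice of six iterations are just illustrative.

```python
def reciprocal_newton(b, x0, iterations=6):
    """Approximate 1/b with Newton's iteration x_{n+1} = x_n * (2 - b * x_n)."""
    x = x0
    for _ in range(iterations):
        x = x * (2 - b * x)     # only multiplication and subtraction, no division
    return x

# Approximating 1/17 ≈ 0.0588 starting from the guess 0.1
print(reciprocal_newton(17, 0.1))   # converges to 0.0588235294...
```

Note how each step only uses multiplication and subtraction, which is exactly why the iteration is useful inside division algorithms.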
Inconstructive mathematics, for a real numberxto have a reciprocal, it is not sufficient thatx≠ 0. There must instead be given arationalnumberrsuch that 0 <r< |x|. In terms of the approximationalgorithmdescribed above, this is needed to prove that the change inywill eventually become arbitrarily small.
This iteration can also be generalized to a wider sort of inverses; for example,matrix inverses.
Every real or complex number excluding zero has a reciprocal, and reciprocals of certain irrational numbers can have important special properties. Examples include the reciprocal of $e$ (≈ 0.367879) and the golden ratio's reciprocal (≈ 0.618034). The first reciprocal is special because no other positive number can produce a lower number when put to the power of itself; $f(1/e)$ is the global minimum of $f(x) = x^x$. The second number is the only positive number that is equal to its reciprocal plus one: $\varphi = 1/\varphi + 1$. Its additive inverse is the only negative number that is equal to its reciprocal minus one: $-\varphi = -1/\varphi - 1$.
The function $f(n) = n + \sqrt{n^2 + 1}$, for positive integers $n$, gives an infinite number of irrational numbers that differ from their reciprocal by an integer. For example, $f(2)$ is the irrational $2 + \sqrt{5}$. Its reciprocal $1/(2 + \sqrt{5})$ is $-2 + \sqrt{5}$, exactly $4$ less. Such irrational numbers share an evident property: they have the same fractional part as their reciprocal, since these numbers differ by an integer.
The reciprocal function plays an important role insimple continued fractions, which have a number of remarkable properties relating to the representation of (both rational and) irrational numbers.
If the multiplication is associative, an element $x$ with a multiplicative inverse cannot be a zero divisor ($x$ is a zero divisor if $xy = 0$ for some nonzero $y$). To see this, it is sufficient to multiply the equation $xy = 0$ by the inverse of $x$ (on the left), and then simplify using associativity. In the absence of associativity, the sedenions provide a counterexample.
The converse does not hold: an element which is not azero divisoris not guaranteed to have a multiplicative inverse.
Within $\mathbb{Z}$, all integers except −1, 0, 1 provide examples; they are not zero divisors, nor do they have inverses in $\mathbb{Z}$.
If the ring or algebra isfinite, however, then all elementsawhich are not zero divisors do have a (left and right) inverse. For, first observe that the mapf(x) =axmust beinjective:f(x) =f(y)impliesx=y:
Distinct elements map to distinct elements, so the image consists of the same finite number of elements, and the map is necessarilysurjective. Specifically, ƒ (namely multiplication bya) must map some elementxto 1,ax= 1, so thatxis an inverse fora.
The expansion of the reciprocal 1/qin any base can also act[3]as a source ofpseudo-random numbers, ifqis a "suitable"safe prime, a prime of the form 2p+ 1 wherepis also a prime. A sequence of pseudo-random numbers of lengthq− 1 will be produced by the expansion.
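A small sketch of this idea, generating the expansion digits of 1/q by long division; the choice q = 23 (a safe prime, 23 = 2·11 + 1 with 11 prime) and the observation that base 10 happens to give the maximal period 22 here are purely illustrative.

```python
def reciprocal_digits(q, base=10, count=30):
    """Digits of the base-`base` expansion of 1/q, computed by long division."""
    digits, remainder = [], 1
    for _ in range(count):
        remainder *= base
        digits.append(remainder // q)
        remainder %= q
    return digits

# The expansion of 1/23 repeats with the maximal period q - 1 = 22 in base 10.
print(reciprocal_digits(23, count=22))
# [0, 4, 3, 4, 7, 8, 2, 6, 0, 8, 6, 9, 5, 6, 5, 2, 1, 7, 3, 9, 1, 3]
```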
|
https://en.wikipedia.org/wiki/Multiplicative_inverse
|
Multiplicationis one of the four elementary mathematical operations ofarithmetic, with the other ones beingaddition,subtraction, anddivision. The result of a multiplication operation is called aproduct. Multiplication is often denoted by the cross symbol,×, by the mid-line dot operator,·, by juxtaposition, or, on computers, by an asterisk,*.
The multiplication of whole numbers may be thought of as repeated addition; that is, the multiplication of two numbers is equivalent to adding as many copies of one of them, themultiplicand, as the quantity of the other one, themultiplier; both numbers can be referred to asfactors. This is to be distinguished fromterms, which are added.
Whether the first factor is the multiplier or the multiplicand may be ambiguous or depend upon context. For example, the expression3×4{\displaystyle 3\times 4}, can be phrased as "3 times 4" and evaluated as4+4+4{\displaystyle 4+4+4}, where 3 is the multiplier, but also as "3 multiplied by 4", in which case 3 becomes the multiplicand.[1]One of the main properties of multiplication is the commutative property, which states in this case that adding 3 copies of 4 gives the same result as adding 4 copies of 3. Thus, the designation of multiplier and multiplicand does not affect the result of the multiplication.[2][3]
Systematic generalizations of this basic definition define the multiplication of integers (including negative numbers), rational numbers (fractions), and real numbers.
Multiplication can also be visualized as counting objects arranged in a rectangle (for whole numbers) or as finding the area of a rectangle whose sides have some given lengths. The area of a rectangle does not depend on which side is measured first—a consequence of the commutative property.
The product of two measurements (orphysical quantities) is a new type of measurement (or new quantity), usually with a derivedunit of measurement. For example, multiplying the lengths (in meters or feet) of the two sides of a rectangle gives its area (in square meters or square feet). Such a product is the subject ofdimensional analysis.
The inverse operation of multiplication isdivision. For example, since 4 multiplied by 3 equals 12, 12 divided by 3 equals 4. Indeed, multiplication by 3, followed by division by 3, yields the original number. The division of a number other than 0 by itself equals 1.
Several mathematical concepts expand upon the fundamental idea of multiplication. The product of a sequence, vector multiplication, complex numbers, and matrices are all examples where this can be seen. These more advanced constructs tend to affect the basic properties in their own ways, such as becoming noncommutative in matrices and some forms of vector multiplication or changing the sign of complex numbers.
In arithmetic, multiplication is often written using the multiplication sign ($\times$) between the factors (that is, in infix notation).[4] For example,
There are othermathematical notationsfor multiplication:
Incomputer programming, theasterisk(as in5*2) is still the most common notation. This is because most computers historically were limited to smallcharacter sets(such asASCIIandEBCDIC) that lacked a multiplication sign (such as⋅or×),[citation needed]while the asterisk appeared on every keyboard.[12]This usage originated in theFORTRANprogramming language.[13]
The numbers to be multiplied are generally called the "factors" (as infactorization). The number to be multiplied is the "multiplicand", and the number by which it is multiplied is the "multiplier". Usually, the multiplier is placed first, and the multiplicand is placed second;[14][15]however, sometimes the first factor is considered the multiplicand and the second the multiplier.
Also, as the result of multiplication does not depend on the order of the factors, the distinction between "multiplicand" and "multiplier" is useful only at a very elementary level and in some multiplication algorithms, such as long multiplication. Therefore, in some sources, the term "multiplicand" is regarded as a synonym for "factor".[16] In algebra, a number that is the multiplier of a variable or expression (e.g., the 3 in $3xy^2$) is called a coefficient.
The result of a multiplication is called a product. When one factor is an integer, the product is a multiple of the other or of the product of the others. Thus, $2\times\pi$ is a multiple of $\pi$, as is $5133\times 486\times\pi$. A product of integers is a multiple of each factor; for example, 15 is the product of 3 and 5 and is both a multiple of 3 and a multiple of 5.
The product of two numbers or the multiplication between two numbers can be defined for common special cases: natural numbers, integers, rational numbers, real numbers, complex numbers, and quaternions.
The product of two natural numbers $r, s \in \mathbb{N}$ is defined as:
$$r \cdot s \equiv \sum_{i=1}^{s} r = \underbrace{r + r + \cdots + r}_{s\text{ times}} \equiv \sum_{j=1}^{r} s = \underbrace{s + s + \cdots + s}_{r\text{ times}}.$$
An integer can be either zero, a nonzero natural number, or minus a nonzero natural number. The product of zero and another integer is always zero. The product of two nonzero integers is determined by the product of theirpositive amounts, combined with the sign derived from the following rule:
(This rule is a consequence of thedistributivityof multiplication over addition, and is not anadditional rule.)
In words:
Two fractions can be multiplied by multiplying their numerators and denominators: $\frac{a}{b}\cdot\frac{c}{d} = \frac{a\cdot c}{b\cdot d}$.
There are several equivalent ways to define formally the real numbers; seeConstruction of the real numbers. The definition of multiplication is a part of all these definitions.
A fundamental aspect of these definitions is that every real number can be approximated to any accuracy by rational numbers. A standard way for expressing this is that every real number is the least upper bound of a set of rational numbers. In particular, every positive real number is the least upper bound of the truncations of its infinite decimal representation; for example, $\pi$ is the least upper bound of $\{3,\ 3.1,\ 3.14,\ 3.141,\ \ldots\}$.
A fundamental property of real numbers is that rational approximations are compatible with arithmetic operations, and, in particular, with multiplication. This means that, if $a$ and $b$ are positive real numbers such that $a = \sup_{x\in A} x$ and $b = \sup_{y\in B} y$, then $a\cdot b = \sup_{x\in A,\, y\in B} x\cdot y$. In particular, the product of two positive real numbers is the least upper bound of the term-by-term products of the sequences of their decimal representations.
As changing the signs transforms least upper bounds into greatest lower bounds, the simplest way to deal with a multiplication involving one or two negative numbers, is to use the rule of signs described above in§ Product of two integers. The construction of the real numbers throughCauchy sequencesis often preferred in order to avoid consideration of the four possible sign configurations.
Two complex numbers can be multiplied by the distributive law and the fact that $i^2 = -1$: $(a + bi)(c + di) = (ac - bd) + (ad + bc)i$.
The geometric meaning of complex multiplication can be understood by rewriting complex numbers inpolar coordinates:
Furthermore,
from which one obtains
The geometric meaning is that the magnitudes are multiplied and the arguments are added.
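A short Python check of both descriptions, using the built-in complex type and `cmath`; the two sample values are arbitrary.

```python
import cmath

z1, z2 = 1 + 2j, 3 - 1j
# Rectangular form: (a+bi)(c+di) = (ac - bd) + (ad + bc)i, using i^2 = -1
product = z1 * z2
print(product)                       # (5+5j)

# Polar form: the magnitudes multiply and the arguments add
r1, phi1 = cmath.polar(z1)
r2, phi2 = cmath.polar(z2)
print(cmath.polar(product))          # approximately (r1*r2, phi1+phi2)
print(r1 * r2, phi1 + phi2)
```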
The product of two quaternions can be found in the article on quaternions. Note, in this case, that $a\cdot b$ and $b\cdot a$ are in general different.
Many common methods for multiplying numbers using pencil and paper require amultiplication tableof memorized or consulted products of small numbers (typically any two numbers from 0 to 9). However, one method, thepeasant multiplicationalgorithm, does not. The example below illustrates "long multiplication" (the "standard algorithm", "grade-school multiplication"):
In some countries such asGermany, the multiplication above is depicted similarly but with the original problem written on a single line and computation starting with the first digit of the multiplier:[17]
Multiplying numbers to more than a couple of decimal places by hand is tedious and error-prone.Common logarithmswere invented to simplify such calculations, since adding logarithms is equivalent to multiplying. Theslide ruleallowed numbers to be quickly multiplied to about three places of accuracy. Beginning in the early 20th century, mechanicalcalculators, such as theMarchant, automated multiplication of up to 10-digit numbers. Modern electroniccomputersand calculators have greatly reduced the need for multiplication by hand.
Methods of multiplication were documented in the writings ofancient Egyptian,Greek, Indian,[citation needed]andChinesecivilizations.
TheIshango bone, dated to about 18,000 to 20,000 BC, may hint at a knowledge of multiplication in theUpper Paleolithicera inCentral Africa, but this is speculative.[18][verification needed]
The Egyptian method of multiplication of integers and fractions, which is documented in the Rhind Mathematical Papyrus, was by successive additions and doubling. For instance, to find the product of 13 and 21 one had to double 21 three times, obtaining 2 × 21 = 42, 4 × 21 = 2 × 42 = 84, 8 × 21 = 2 × 84 = 168. The full product could then be found by adding the appropriate terms found in the doubling sequence: since 13 = 1 + 4 + 8, the product is 21 + 84 + 168 = 273.[19]
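A minimal Python sketch of this doubling-and-adding idea; the function name is illustrative and the code assumes non-negative integer inputs.

```python
def egyptian_multiply(a, b):
    """Multiply by successive doubling, in the spirit of the Rhind papyrus method."""
    doublings = []                   # pairs (power of two, corresponding multiple of b)
    power, multiple = 1, b
    while power <= a:
        doublings.append((power, multiple))
        power, multiple = power * 2, multiple * 2
    total = 0
    for power, multiple in reversed(doublings):
        if a >= power:               # select the doublings whose powers sum to a
            a -= power
            total += multiple
    return total

print(egyptian_multiply(13, 21))     # 273 = 21 + 84 + 168 (1 + 4 + 8 copies of 21)
```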
The Babylonians used a sexagesimal positional number system, analogous to the modern-day decimal system. Thus, Babylonian multiplication was very similar to modern decimal multiplication. Because of the relative difficulty of remembering 60 × 60 different products, Babylonian mathematicians employed multiplication tables. These tables consisted of a list of the first twenty multiples of a certain principal number n: n, 2n, ..., 20n; followed by the multiples of 10n: 30n, 40n, and 50n. Then to compute any sexagesimal product, say 53n, one only needed to add 50n and 3n computed from the table.[citation needed]
In the mathematical textZhoubi Suanjing, dated prior to 300 BC, and theNine Chapters on the Mathematical Art, multiplication calculations were written out in words, although the early Chinese mathematicians employedRod calculusinvolving place value addition, subtraction, multiplication, and division. The Chinese were already using adecimal multiplication tableby the end of theWarring Statesperiod.[20]
The modern method of multiplication based on theHindu–Arabic numeral systemwas first described byBrahmagupta. Brahmagupta gave rules for addition, subtraction, multiplication, and division.Henry Burchard Fine, then a professor of mathematics atPrinceton University, wrote the following:
These place value decimal arithmetic algorithms were introduced to Arab countries byAl Khwarizmiin the early 9th century and popularized in the Western world byFibonacciin the 13th century.[22]
Grid method multiplication, or the box method, is used in primary schools in England and Wales and in some areas[which?]of the United States to help teach an understanding of how multiple digit multiplication works. An example of multiplying 34 by 13 would be to lay the numbers out in a grid as follows:
and then add the entries.
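A small sketch of the partial-products idea behind the grid method, with the 34 × 13 example; the helper names are purely illustrative.

```python
def grid_method(x, y):
    """Multiply two positive integers by summing partial products of their decimal parts."""
    def parts(n):   # e.g. 34 -> [4, 30]
        return [int(d) * 10 ** i for i, d in enumerate(reversed(str(n))) if d != "0"]
    grid = [(px, py, px * py) for px in parts(x) for py in parts(y)]
    for px, py, cell in grid:
        print(f"{px} x {py} = {cell}")
    return sum(cell for _, _, cell in grid)

print(grid_method(34, 13))   # 12 + 40 + 90 + 300 = 442
```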
The classical method of multiplying two n-digit numbers requires $n^2$ digit multiplications. Multiplication algorithms have been designed that reduce the computation time considerably when multiplying large numbers. Methods based on the discrete Fourier transform reduce the computational complexity to $O(n\log n\log\log n)$. In 2016, the factor $\log\log n$ was replaced by a function that increases much more slowly, though still not constant.[23] In March 2019, David Harvey and Joris van der Hoeven submitted a paper presenting an integer multiplication algorithm with a complexity of $O(n\log n)$.[24] The algorithm, also based on the fast Fourier transform, is conjectured to be asymptotically optimal.[25] The algorithm is not practically useful, as it only becomes faster for multiplying extremely large numbers (having more than $2^{1729^{12}}$ bits).[26]
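The FFT-based methods above are involved; as a simpler, hedged illustration of how divide-and-conquer already beats the classical $n^2$ bound, here is a sketch of Karatsuba's algorithm (not one of the FFT-based methods discussed in the text), which replaces four half-size products with three.

```python
def karatsuba(x, y):
    """Divide-and-conquer multiplication using three half-size products
    instead of four, giving roughly O(n^1.585) digit operations."""
    if x < 10 or y < 10:
        return x * y
    m = max(len(str(x)), len(str(y))) // 2
    high_x, low_x = divmod(x, 10 ** m)
    high_y, low_y = divmod(y, 10 ** m)
    z0 = karatsuba(low_x, low_y)
    z2 = karatsuba(high_x, high_y)
    z1 = karatsuba(low_x + high_x, low_y + high_y) - z0 - z2
    return z2 * 10 ** (2 * m) + z1 * 10 ** m + z0

print(karatsuba(1234, 5678))   # 7006652
print(1234 * 5678)             # same result
```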
One can only meaningfully add or subtract quantities of the same type, but quantities of different types can be multiplied or divided without problems. For example, four bags with three marbles each can be thought of as:[2]
When two measurements are multiplied together, the product is of a type depending on the types of measurements. The general theory is given bydimensional analysis. This analysis is routinely applied in physics, but it also has applications in finance and other applied fields.
A common example in physics is the fact that multiplyingspeedbytimegivesdistance. For example:
In this case, the hour units cancel out, leaving the product with only kilometer units.
Other examples of multiplication involving units include:
The product of a sequence of factors can be written with the product symbol $\prod$, which derives from the capital letter Π (pi) in the Greek alphabet (much like the summation symbol $\sum$ is derived from the Greek letter Σ (sigma)).[27][28] The meaning of this notation is given by $\prod_{i=1}^{4} i = 1\cdot 2\cdot 3\cdot 4$, which results in $\prod_{i=1}^{4} i = 24$.
In such a notation, thevariableirepresents a varyinginteger, called the multiplication index, that runs from the lower value1indicated in the subscript to the upper value4given by the superscript. The product is obtained by multiplying together all factors obtained by substituting the multiplication index for an integer between the lower and the upper values (the bounds included) in the expression that follows the product operator.
More generally, the notation is defined as $\prod_{i=m}^{n} x_i = x_m\cdot x_{m+1}\cdots x_n$, where m and n are integers or expressions that evaluate to integers. In the case where m = n, the value of the product is the same as that of the single factor $x_m$; if m > n, the product is an empty product whose value is 1, regardless of the expression for the factors.
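In Python, the standard-library function `math.prod` (Python 3.8+) mirrors this notation, including the convention that an empty product is 1; the particular bounds below are just examples.

```python
import math

# prod_{i=1}^{4} i = 1 * 2 * 3 * 4
print(math.prod(range(1, 5)))                      # 24

# Empty product (upper bound below lower bound) is defined as 1
print(math.prod(range(5, 5)))                      # 1

# prod_{i=2}^{6} (2*i + 1) = 5 * 7 * 9 * 11 * 13
print(math.prod(2 * i + 1 for i in range(2, 7)))   # 45045
```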
By definition,
If all factors are identical, a product ofnfactors is equivalent toexponentiation:
Associativityandcommutativityof multiplication imply
if $a$ is a non-negative integer, or if all $x_i$ are positive real numbers, and
if all $a_i$ are non-negative integers, or if $x$ is a positive real number.
One may also consider products of infinitely many factors; these are calledinfinite products. Notationally, this consists in replacingnabove by theinfinity symbol∞. The product of such an infinite sequence is defined as thelimitof the product of the firstnfactors, asngrows without bound. That is,
One can similarly replacemwith negative infinity, and define:
provided both limits exist.[citation needed]
When multiplication is repeated, the resulting operation is known as exponentiation. For instance, the product of three factors of two (2×2×2) is "two raised to the third power", and is denoted by $2^3$, a two with a superscript three. In this example, the number two is the base, and three is the exponent.[29] In general, the exponent (or superscript) indicates how many times the base appears in the expression, so that the expression $a^n$
indicates thatncopies of the baseaare to be multiplied together. This notation can be used whenever multiplication is known to bepower associative.
Forrealandcomplexnumbers, which includes, for example,natural numbers,integers, andfractions, multiplication has certain properties:
Other mathematical systems that include a multiplication operation may not have all these properties. For example, multiplication is not, in general, commutative formatricesandquaternions.[30]Hurwitz's theoremshows that for thehypercomplex numbersofdimension8 or greater, including theoctonions,sedenions, andtrigintaduonions, multiplication is generally not associative.[34]
In the book Arithmetices principia, nova methodo exposita, Giuseppe Peano proposed axioms for arithmetic based on his axioms for natural numbers. Peano arithmetic has two axioms for multiplication: $x\times 0 = 0$ and $x\times S(y) = (x\times y) + x$.
Here S(y) represents the successor of y, i.e., the natural number that follows y. The various properties like associativity can be proved from these and the other axioms of Peano arithmetic, including induction. For instance, S(0), denoted by 1, is a multiplicative identity because $x\times S(0) = (x\times 0) + x = 0 + x = x$.
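A recursive Python sketch of these two axioms, treating y − 1 as the predecessor of y; it is purely illustrative and only handles non-negative integers.

```python
def peano_multiply(x, y):
    """Multiplication defined from the Peano axioms:
       x * 0 = 0  and  x * S(y) = (x * y) + x."""
    if y == 0:
        return 0
    return peano_multiply(x, y - 1) + x   # here y = S(y - 1)

print(peano_multiply(4, 3))   # 12
print(peano_multiply(7, 1))   # 7: S(0) = 1 acts as the multiplicative identity
```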
The axioms for integers typically define them as equivalence classes of ordered pairs of natural numbers. The model is based on treating (x, y) as equivalent to x − y when x and y are treated as integers. Thus both (0, 1) and (1, 2) are equivalent to −1. The multiplication axiom for integers defined this way is $(x_1, x_2)\times(y_1, y_2) = (x_1 y_1 + x_2 y_2,\ x_1 y_2 + x_2 y_1)$.
The rule that −1 × −1 = 1 can then be deduced from $(0, 1)\times(0, 1) = (0\cdot 0 + 1\cdot 1,\ 0\cdot 1 + 1\cdot 0) = (1, 0)$, which represents 1.
Multiplication is extended in a similar way torational numbersand then toreal numbers.[citation needed]
The product of non-negative integers can be defined with set theory usingcardinal numbersor thePeano axioms. Seebelowhow to extend this to multiplying arbitrary integers, and then arbitrary rational numbers. The product of real numbers is defined in terms of products of rational numbers; seeconstruction of the real numbers.[35]
There are many sets that, under the operation of multiplication, satisfy the axioms that definegroupstructure. These axioms are closure, associativity, and the inclusion of an identity element and inverses.
A simple example is the set of nonzero rational numbers. Here the identity element is 1, in contrast to groups under addition, where the identity is typically 0. Note that with the rationals, zero must be excluded because, under multiplication, it does not have an inverse: there is no rational number that can be multiplied by zero to give 1. In this example we have an abelian group, but that is not always the case.
To see this, consider the set of invertible square matrices of a given dimension over a givenfield. Here, it is straightforward to verify closure, associativity, and inclusion of identity (theidentity matrix) and inverses. However, matrix multiplication is not commutative, which shows that this group is non-abelian.
Another fact worth noticing is that the integers under multiplication do not form a group—even if zero is excluded. This is easily seen by the nonexistence of an inverse for all elements other than 1 and −1.
Multiplication in group theory is typically notated either by a dot or by juxtaposition (the omission of an operation symbol between elements). So multiplying element a by element b could be notated as $a\cdot b$ or $ab$. When referring to a group via the indication of the set and operation, the dot is used. For example, our first example could be indicated by $\left(\mathbb{Q}\setminus\{0\},\,\cdot\right)$.[36]
Numbers cancount(3 apples),order(the 3rd apple), ormeasure(3.5 feet high); as the history of mathematics has progressed from counting on our fingers to modelling quantum mechanics, multiplication has been generalized to more complicated and abstract types of numbers, and to things that are not numbers (such asmatrices) or do not look much like numbers (such asquaternions).
+Addition(+)
−Subtraction(−)
×Multiplication(×or·)
÷Division(÷or∕)
|
https://en.wikipedia.org/wiki/Integer_multiplication#Time_complexity_of_integer_multiplication_algorithms
|
Elliptic-curve cryptography(ECC) is an approach topublic-key cryptographybased on thealgebraic structureofelliptic curvesoverfinite fields. ECC allows smaller keys to provide equivalent security, compared to cryptosystems based on modular exponentiation inGalois fields, such as theRSA cryptosystemandElGamal cryptosystem.[1]
Elliptic curves are applicable forkey agreement,digital signatures,pseudo-random generatorsand other tasks. Indirectly, they can be used forencryptionby combining the key agreement with asymmetric encryptionscheme. They are also used in severalinteger factorizationalgorithmsthat have applications in cryptography, such asLenstra elliptic-curve factorization.
The use of elliptic curves in cryptography was suggested independently byNeal Koblitz[2]andVictor S. Miller[3]in 1985. Elliptic curve cryptography algorithms entered wide use in 2004 to 2005.
In 1999, NIST recommended fifteen elliptic curves. Specifically, FIPS 186-4[4]has ten recommended finite fields:
The NIST recommendation thus contains a total of five prime curves and ten binary curves. The curves were chosen for optimal security and implementation efficiency.[5]
At theRSA Conference2005, theNational Security Agency(NSA) announcedSuite B, which exclusively uses ECC for digital signature generation and key exchange. The suite is intended to protect both classified and unclassified national security systems and information.[1]National Institute of Standards and Technology(NIST) has endorsed elliptic curve cryptography in itsSuite Bset of recommended algorithms, specificallyelliptic-curve Diffie–Hellman(ECDH) for key exchange andElliptic Curve Digital Signature Algorithm(ECDSA) for digital signature. The NSA allows their use for protecting information classified up totop secretwith 384-bit keys.[6]
Recently,[when?]a large number of cryptographic primitives based on bilinear mappings on various elliptic curve groups, such as theWeilandTate pairings, have been introduced. Schemes based on these primitives provide efficientidentity-based encryptionas well as pairing-based signatures,signcryption,key agreement, andproxy re-encryption.[citation needed]
Elliptic curve cryptography is used successfully in numerous popular protocols, such asTransport Layer SecurityandBitcoin.
In 2013,The New York Timesstated thatDual Elliptic Curve Deterministic Random Bit Generation(or Dual_EC_DRBG) had been included as a NIST national standard due to the influence ofNSA, which had included a deliberate weakness in the algorithm and the recommended elliptic curve.[7]RSA Securityin September 2013 issued an advisory recommending that its customers discontinue using any software based on Dual_EC_DRBG.[8][9]In the wake of the exposure of Dual_EC_DRBG as "an NSA undercover operation", cryptography experts have also expressed concern over the security of the NIST recommended elliptic curves,[10]suggesting a return to encryption based on non-elliptic-curve groups.
Additionally, in August 2015, the NSA announced that it plans to replace Suite B with a new cipher suite due to concerns aboutquantum computingattacks on ECC.[11][12]
While the RSA patent expired in 2000, there may be patents in force covering certain aspects of ECC technology, including at least one ECC scheme (ECMQV). However,RSA Laboratories[13]andDaniel J. Bernstein[14]have argued that theUS governmentelliptic curve digital signature standard (ECDSA; NIST FIPS 186-3) and certain practical ECC-based key exchange schemes (including ECDH) can be implemented without infringing those patents.
For the purposes of this article, anelliptic curveis aplane curveover afinite field(rather than the real numbers) which consists of the points satisfying the equation
along with a distinguishedpoint at infinity, denoted ∞. The coordinates here are to be chosen from a fixedfinite fieldofcharacteristicnot equal to 2 or 3, or the curve equation would be somewhat more complicated.
This set of points, together with thegroup operation of elliptic curves, is anabelian group, with the point at infinity as an identity element. The structure of the group is inherited from thedivisor groupof the underlyingalgebraic variety:
Public-key cryptographyis based on theintractabilityof certain mathematicalproblems. Early public-key systems, such asRSA's 1983 patent, based their security on the assumption that it is difficult tofactora large integer composed of two or more large prime factors which are far apart. For later elliptic-curve-based protocols, the base assumption is that finding thediscrete logarithmof a random elliptic curve element with respect to a publicly known base point is infeasible (thecomputational Diffie–Hellman assumption): this is the "elliptic curve discrete logarithm problem" (ECDLP). The security of elliptic curve cryptography depends on the ability to compute apoint multiplicationand the inability to compute the multiplicand given the original point and product point. The size of the elliptic curve, measured by the total number of discrete integer pairs satisfying the curve equation, determines the difficulty of the problem.
The primary benefit promised by elliptic curve cryptography over alternatives such as RSA is a smallerkey size, reducing storage and transmission requirements.[1]For example, a 256-bit elliptic curve public key should providecomparable securityto a 3072-bit RSA public key.
Several discrete logarithm-based protocols have been adapted to elliptic curves, replacing the group $(\mathbb{Z}_p)^{\times}$ with an elliptic curve:
Some common implementation considerations include:
To use ECC, all parties must agree on all the elements defining the elliptic curve, that is, the domain parameters of the scheme. The size of the field used is typically either prime (and denoted as p) or a power of two ($2^m$); the latter case is called the binary case, and it necessitates the choice of an auxiliary curve denoted by f. Thus the field is defined by p in the prime case and by the pair of m and f in the binary case. The elliptic curve is defined by the constants a and b used in its defining equation. Finally, the cyclic subgroup is defined by its generator (a.k.a. base point) G. For cryptographic application, the order of G, that is the smallest positive number n such that $nG = \mathcal{O}$ (the point at infinity of the curve, and the identity element), is normally prime. Since n is the size of a subgroup of $E(\mathbb{F}_p)$, it follows from Lagrange's theorem that the number $h = \frac{1}{n}\,|E(\mathbb{F}_p)|$ is an integer. In cryptographic applications this number h, called the cofactor, must be small ($h \le 4$) and, preferably, $h = 1$. To summarize: in the prime case, the domain parameters are $(p, a, b, G, n, h)$; in the binary case, they are $(m, f, a, b, G, n, h)$.
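A minimal sketch of how prime-case domain parameters $(p, a, b, G, n, h)$ might be represented in code. The tiny curve $y^2 = x^3 + 2x + 2$ over $\mathbb{F}_{17}$ with group order 19 is a common textbook example, not a standard curve; treat the concrete numbers and the class name as illustrative only.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class DomainParameters:
    """Prime-case ECC domain parameters (p, a, b, G, n, h)."""
    p: int                 # prime defining the field F_p
    a: int                 # curve coefficient in y^2 = x^3 + a*x + b
    b: int                 # curve coefficient
    G: Tuple[int, int]     # base point (generator of the cyclic subgroup)
    n: int                 # order of G (should be prime)
    h: int                 # cofactor = #E(F_p) / n (should be small, ideally 1)

# A tiny textbook curve: y^2 = x^3 + 2x + 2 over F_17, with 19 points in total.
toy = DomainParameters(p=17, a=2, b=2, G=(5, 1), n=19, h=1)

# Basic validation: G must satisfy the curve equation modulo p.
x, y = toy.G
assert (y * y - (x ** 3 + toy.a * x + toy.b)) % toy.p == 0
print(toy)
```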
Unless there is an assurance that domain parameters were generated by a party trusted with respect to their use, the domain parametersmustbe validated before use.
The generation of domain parameters is not usually done by each participant because this involves computingthe number of points on a curvewhich is time-consuming and troublesome to implement. As a result, several standard bodies published domain parameters of elliptic curves for several common field sizes. Such domain parameters are commonly known as "standard curves" or "named curves"; a named curve can be referenced either by name or by the uniqueobject identifierdefined in the standard documents:
SECG test vectors are also available.[17]NIST has approved many SECG curves, so there is a significant overlap between the specifications published by NIST and SECG. EC domain parameters may be specified either by value or by name.
If, despite the preceding admonition, one decides to construct one's own domain parameters, one should select the underlying field and then use one of the following strategies to find a curve with appropriate (i.e., near prime) number of points using one of the following methods:
Several classes of curves are weak and should be avoided:
Because all the fastest known algorithms that allow one to solve the ECDLP (baby-step giant-step, Pollard's rho, etc.) need $O(\sqrt{n})$ steps, it follows that the size of the underlying field should be roughly twice the security parameter. For example, for 128-bit security one needs a curve over $\mathbb{F}_q$, where $q \approx 2^{256}$. This can be contrasted with finite-field cryptography (e.g., DSA), which requires[27] 3072-bit public keys and 256-bit private keys, and integer factorization cryptography (e.g., RSA), which requires a 3072-bit value of n, where the private key should be just as large. However, the public key may be smaller to accommodate efficient encryption, especially when processing power is limited.
The hardest ECC scheme (publicly) broken to date[when?]had a 112-bit key for the prime field case and a 109-bit key for the binary field case. For the prime field case, this was broken in July 2009 using a cluster of over 200PlayStation 3game consoles and could have been finished in 3.5 months using this cluster when running continuously.[28]The binary field case was broken in April 2004 using 2600 computers over 17 months.[29]
A current project is aiming at breaking the ECC2K-130 challenge by Certicom, by using a wide range of different hardware: CPUs, GPUs, FPGA.[30]
A close examination of the addition rules shows that in order to add two points, one needs not only several additions and multiplications in $\mathbb{F}_q$ but also an inversion operation. The inversion (for given $x \in \mathbb{F}_q$, find $y \in \mathbb{F}_q$ such that $xy = 1$) is one to two orders of magnitude slower[31] than multiplication. However, points on a curve can be represented in different coordinate systems which do not require an inversion operation to add two points. Several such systems have been proposed: in the projective system each point is represented by three coordinates $(X, Y, Z)$ using the relation $x = X/Z$, $y = Y/Z$; in the Jacobian system a point is also represented with three coordinates $(X, Y, Z)$, but a different relation is used: $x = X/Z^2$, $y = Y/Z^3$; in the López–Dahab system the relation is $x = X/Z$, $y = Y/Z^2$; in the modified Jacobian system the same relations are used but four coordinates, $(X, Y, Z, aZ^4)$, are stored and used for calculations; and in the Chudnovsky Jacobian system five coordinates, $(X, Y, Z, Z^2, Z^3)$, are used. Note that there may be different naming conventions; for example, the IEEE P1363-2000 standard uses "projective coordinates" to refer to what is commonly called Jacobian coordinates. An additional speed-up is possible if mixed coordinates are used.[32]
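A sketch of textbook affine point addition, included to show exactly where the field inversion appears; it assumes Python 3.8+ for `pow(x, -1, p)`, uses the toy curve from the earlier sketch, and is not how production libraries implement the operation (they use projective or Jacobian formulas precisely to avoid the inversion).

```python
def point_add(P, Q, a, p):
    """Affine addition on y^2 = x^3 + a*x + b over F_p.
    Note the modular inversion in the slope computation; projective or
    Jacobian coordinates avoid it at the cost of extra multiplications."""
    if P is None:            # None represents the point at infinity
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None          # P + (-P) = point at infinity
    if P == Q:               # doubling: slope = (3*x1^2 + a) / (2*y1)
        s = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:                    # general addition: slope = (y2 - y1) / (x2 - x1)
        s = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (s * s - x1 - x2) % p
    y3 = (s * (x1 - x3) - y1) % p
    return (x3, y3)

# On the toy curve y^2 = x^3 + 2x + 2 over F_17 with generator G = (5, 1):
print(point_add((5, 1), (5, 1), a=2, p=17))   # 2G = (6, 3)
```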
Reduction modulo p (which is needed for addition and multiplication) can be executed much faster if the prime p is a pseudo-Mersenne prime, that is $p \approx 2^d$; for example, $p = 2^{521} - 1$ or $p = 2^{256} - 2^{32} - 2^9 - 2^8 - 2^7 - 2^6 - 2^4 - 1$. Compared to Barrett reduction, there can be an order of magnitude speed-up.[33] The speed-up here is a practical rather than theoretical one, and derives from the fact that reduction modulo a number close to a power of two can be performed efficiently by computers operating on binary numbers with bitwise operations.
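A simplified big-integer sketch of this folding reduction for the Mersenne-form modulus $2^{521} - 1$ mentioned above: since $2^{521} \equiv 1 \pmod p$, the high bits can be shifted down and added back. Real implementations work on fixed-width limbs, so this is only illustrative.

```python
import random

P521 = 2 ** 521 - 1

def reduce_p521(x):
    """Fast reduction modulo p = 2^521 - 1: fold the high bits onto the low bits,
    using only shifts, masks, and additions instead of a general division."""
    while x >> 521:
        x = (x & P521) + (x >> 521)
    if x >= P521:
        x -= P521
    return x

v = random.randrange(P521) * random.randrange(P521)   # a typical product to reduce
assert reduce_p521(v) == v % P521
```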
The curves overFp{\displaystyle \mathbb {F} _{p}}with pseudo-Mersennepare recommended by NIST. Yet another advantage of the NIST curves is that they usea= −3, which improves addition in Jacobian coordinates.
According to Bernstein and Lange, many of the efficiency-related decisions in NIST FIPS 186-2 are suboptimal. Other curves are more secure and run just as fast.[34]
Unlike most otherDLPsystems (where it is possible to use the same procedure for squaring and multiplication), the EC addition is significantly different for doubling (P=Q) and general addition (P≠Q) depending on the coordinate system used. Consequently, it is important to counteractside-channel attacks(e.g., timing orsimple/differential power analysis attacks) using, for example, fixed pattern window (a.k.a. comb) methods[clarification needed][35](note that this does not increase computation time). Alternatively one can use anEdwards curve; this is a special family of elliptic curves for which doubling and addition can be done with the same operation.[36]Another concern for ECC-systems is the danger offault attacks, especially when running onsmart cards.[37]
Cryptographic experts have expressed concerns that theNational Security Agencyhas inserted akleptographicbackdoor into at least one elliptic curve-based pseudo random generator.[38]Internal memos leaked by former NSA contractorEdward Snowdensuggest that the NSA put a backdoor in theDual EC DRBGstandard.[39]One analysis of the possible backdoor concluded that an adversary in possession of the algorithm's secret key could obtain encryption keys given only 32 bytes of PRNG output.[40]
The SafeCurves project has been launched in order to catalog curves that are easy to implement securely and are designed in a fully publicly verifiable way to minimize the chance of a backdoor.[41]
Shor's algorithmcan be used to break elliptic curve cryptography by computing discrete logarithms on a hypotheticalquantum computer. The latest quantum resource estimates for breaking a curve with a 256-bit modulus (128-bit security level) are 2330qubitsand 126 billionToffoli gates.[42]For the binary elliptic curve case, 906 qubits are necessary (to break 128 bits of security).[43]In comparison, using Shor's algorithm to break theRSAalgorithm requires 4098 qubits and 5.2 trillion Toffoli gates for a 2048-bit RSA key, suggesting that ECC is an easier target for quantum computers than RSA. All of these figures vastly exceed any quantum computer that has ever been built, and estimates place the creation of such computers at a decade or more away.[citation needed][44]
Supersingular Isogeny Diffie–Hellman Key Exchangeclaimed to provide apost-quantumsecure form of elliptic curve cryptography by usingisogeniesto implementDiffie–Hellmankey exchanges. This key exchange uses much of the same field arithmetic as existing elliptic curve cryptography and requires computational and transmission overhead similar to many currently used public key systems.[45]However, new classical attacks undermined the security of this protocol.[46]
In August 2015, the NSA announced that it planned to transition "in the not distant future" to a new cipher suite that is resistant toquantumattacks. "Unfortunately, the growth of elliptic curve use has bumped up against the fact of continued progress in the research on quantum computing, necessitating a re-evaluation of our cryptographic strategy."[11]
When ECC is used in virtual machines, an attacker may use an invalid curve to obtain a complete ECDH private key.[47]
Alternative representations of elliptic curves include:
|
https://en.wikipedia.org/wiki/Elliptic-curve_cryptography
|
Achosen-plaintext attack(CPA) is anattack modelforcryptanalysiswhich presumes that the attacker can obtain theciphertextsfor arbitraryplaintexts.[1]The goal of the attack is to gain information that reduces the security of theencryptionscheme.[2]
Modern ciphers aim to provide semantic security, also known asciphertext indistinguishability under chosen-plaintext attack, and they are therefore, by design, generally immune to chosen-plaintext attacks if correctly implemented.
In a chosen-plaintext attack theadversarycan (possiblyadaptively) ask for the ciphertexts of arbitrary plaintext messages. This is formalized by allowing the adversary to interact with an encryptionoracle, viewed as ablack box. The attacker’s goal is to reveal all or a part of the secret encryption key.
It may seem infeasible in practice that an attacker could obtain ciphertexts for given plaintexts. However, modern cryptography is implemented in software or hardware and is used for a diverse range of applications; for many cases, a chosen-plaintext attack is often very feasible (see alsoIn practice). Chosen-plaintext attacks become extremely important in the context ofpublic key cryptographywhere the encryption key is public and so attackers can encrypt any plaintext they choose.
There are two forms of chosen-plaintext attacks:
A general batch chosen-plaintext attack is carried out as follows[failed verification]:
Consider the following extension of the above situation. After the last step,
A cipher has indistinguishable encryptions under a chosen-plaintext attack if, after running the above experiment, the adversary cannot guess correctly (b = b′) with probability non-negligibly better than 1/2.[3]
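A toy Python sketch of this indistinguishability experiment, with entirely hypothetical helper names; it shows that a deterministic cipher (here, XOR with a fixed key stream) is distinguished with certainty, because the same plaintext always yields the same ciphertext.

```python
import os

def ind_cpa_experiment(encrypt_with_key, adversary):
    """Toy IND-CPA game: the adversary gets oracle access, picks (m0, m1),
    receives the encryption of m_b, and must guess b."""
    key = os.urandom(16)
    oracle = lambda m: encrypt_with_key(key, m)
    m0, m1, guess = adversary(oracle)
    b = os.urandom(1)[0] & 1
    challenge = oracle([m0, m1][b])
    return guess(challenge) == b

def xor_cipher(key, m):
    # Deterministic: the same (key, plaintext) pair always gives the same ciphertext.
    stream = key * (len(m) // len(key) + 1)
    return bytes(a ^ b for a, b in zip(m, stream))

def distinguisher(oracle):
    m0, m1 = b"attack at dawn!!", b"retreat at dusk!"
    c0 = oracle(m0)                       # remember the ciphertext of m0
    return m0, m1, lambda challenge: 0 if challenge == c0 else 1

wins = sum(ind_cpa_experiment(xor_cipher, distinguisher) for _ in range(1000))
print(wins)   # 1000: the adversary guesses b correctly every time
```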
The following examples demonstrate how some ciphers that meet other security definitions may be broken with a chosen-plaintext attack.
The following attack on the Caesar cipher allows full recovery of the secret key: the attacker submits a single chosen plaintext, say the letter "A", to the encryption oracle; the secret shift is simply the alphabetic distance between "A" and the returned ciphertext letter.
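A minimal Python sketch of this attack; the oracle and variable names are illustrative, and the plaintext alphabet is restricted to uppercase letters for simplicity.

```python
import random
import string

ALPHABET = string.ascii_uppercase

def caesar_encrypt(plaintext, shift):
    return "".join(ALPHABET[(ALPHABET.index(c) + shift) % 26] for c in plaintext)

# The attacker controls the plaintext but not the key held by the oracle.
secret_shift = random.randrange(26)
oracle = lambda m: caesar_encrypt(m, secret_shift)

# One chosen plaintext suffices: the shift is the distance from 'A' to its ciphertext.
recovered = ALPHABET.index(oracle("A"))
assert recovered == secret_shift
print("recovered key:", recovered)
```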
With more intricate or complex encryption methodologies the decryption method becomes more resource-intensive, however, the core concept is still relatively the same.
The following attack on a one-time pad allows full recovery of the secret key. Suppose the message length and key length are equal to n. The adversary submits any chosen n-bit plaintext to the encryption oracle; XORing the returned ciphertext with that plaintext yields the key, since encryption is simply the XOR of plaintext and key.
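A short Python sketch of the key recovery, with illustrative names; it assumes the key is (improperly) reused, which is exactly the scenario the next paragraph discusses.

```python
import os

def otp_encrypt(key, plaintext):
    return bytes(k ^ p for k, p in zip(key, plaintext))

n = 16
key = os.urandom(n)                       # a key that will be reused, contrary to one-time use
oracle = lambda m: otp_encrypt(key, m)

chosen = bytes(n)                         # n zero bytes: the ciphertext then equals the key
recovered_key = bytes(c ^ p for c, p in zip(oracle(chosen), chosen))
assert recovered_key == key

# Any later message encrypted under the same key can now be read directly.
intercepted = oracle(b"meet me at noon!")
print(bytes(c ^ k for c, k in zip(intercepted, recovered_key)))   # b'meet me at noon!'
```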
While the one-time pad is used as an example of aninformation-theoretically securecryptosystem, this security only holds under security definitions weaker than CPA security. This is because under the formal definition of CPA security the encryption oracle has no state. This vulnerability may not be applicable to all practical implementations – the one-time pad can still be made secure if key reuse is avoided (hence the name "one-time" pad).
InWorld War IIUS Navy cryptanalysts discovered that Japan was planning to attack a location referred to as "AF". They believed that "AF" might beMidway Island, because other locations in theHawaiian Islandshad codewords that began with "A". To prove their hypothesis that "AF" corresponded to "Midway Island" they asked the US forces at Midway to send a plaintext message about low supplies. The Japanese intercepted the message and immediately reported to their superiors that "AF" was low on water, confirming the Navy's hypothesis and allowing them to position their force to win thebattle.[3][4]
Also duringWorld War II, Allied codebreakers atBletchley Parkwould sometimes ask theRoyal Air Forceto lay mines at a position that didn't have any abbreviations or alternatives in the German naval system's grid reference. The hope was that the Germans, seeing the mines, would use anEnigma machineto encrypt a warning message about the mines and an "all clear" message after they were removed, giving the allies enough information about the message to break the German naval Enigma. This process ofplantinga known-plaintext was calledgardening.[5]Allied codebreakers also helped craft messages sent by double agentJuan Pujol García, whose encrypted radio reports were received in Madrid, manually decrypted, and then re-encrypted with anEnigma machinefor transmission to Berlin.[6]This helped the codebreakers decrypt the code used on the second leg, having supplied the originaltext.[7]
In modern day, chosen-plaintext attacks (CPAs) are often used to breaksymmetric ciphers. To be considered CPA-secure, the symmetric cipher must not be vulnerable to chosen-plaintext attacks. Thus, it is important for symmetric cipher implementors to understand how an attacker would attempt to break their cipher and make relevant improvements.
For some chosen-plaintext attacks, only a small part of the plaintext may need to be chosen by the attacker; such attacks are known as plaintext injection attacks.
A chosen-plaintext attack is more powerful thanknown-plaintext attack, because the attacker can directly target specific terms or patterns without having to wait for these to appear naturally, allowing faster gathering of data relevant to cryptanalysis. Therefore, any cipher that prevents chosen-plaintext attacks is also secure againstknown-plaintextandciphertext-onlyattacks.
However, a chosen-plaintext attack is less powerful than achosen-ciphertext attack, where the attacker can obtain the plaintexts of arbitrary ciphertexts. A CCA-attacker can sometimes break a CPA-secure system.[3]For example, theEl Gamal cipheris secure against chosen plaintext attacks, but vulnerable to chosen ciphertext attacks because it isunconditionally malleable.
|
https://en.wikipedia.org/wiki/Chosen_plaintext_attack
|
Aquantum computeris acomputerthat exploitsquantum mechanicalphenomena. On small scales, physical matter exhibits properties ofboth particles and waves, and quantum computing takes advantage of this behavior using specialized hardware.Classical physicscannot explain the operation of these quantum devices, and a scalable quantum computer could perform some calculationsexponentiallyfaster[a]than any modern "classical" computer. Theoretically a large-scale quantum computer couldbreak some widely used encryption schemesand aid physicists in performingphysical simulations; however, the current state of the art is largely experimental and impractical, with several obstacles to useful applications.
The basicunit of informationin quantum computing, thequbit(or "quantum bit"), serves the same function as thebitin classical computing. However, unlike a classical bit, which can be in one of two states (abinary), a qubit can exist in asuperpositionof its two "basis" states, a state that is in an abstract sense "between" the two basis states. Whenmeasuringa qubit, the result is aprobabilistic outputof a classical bit. If a quantum computer manipulates the qubit in a particular way,wave interferenceeffects can amplify the desired measurement results. The design ofquantum algorithmsinvolves creating procedures that allow a quantum computer to perform calculations efficiently and quickly.
Quantum computers are not yet practical for real-world applications. Physically engineering high-quality qubits has proven to be challenging. If a physical qubit is not sufficientlyisolatedfrom its environment, it suffers fromquantum decoherence, introducingnoiseinto calculations. National governments have invested heavily in experimental research aimed at developing scalable qubits with longer coherence times and lower error rates. Example implementations includesuperconductors(which isolate anelectrical currentby eliminatingelectrical resistance) andion traps(which confine a singleatomic particleusingelectromagnetic fields).
In principle, a classical computer can solve the same computational problems as a quantum computer, given enough time. Quantum advantage comes in the form oftime complexityrather thancomputability, andquantum complexity theoryshows that some quantum algorithms are exponentially more efficient than the best-known classical algorithms. A large-scale quantum computer could in theory solve computational problems that are not solvable within a reasonable timeframe for a classical computer. This concept of additional ability has been called "quantum supremacy". While such claims have drawn significant attention to the discipline, near-term practical use cases remain limited.
For many years, the fields ofquantum mechanicsandcomputer scienceformed distinct academic communities.[1]Modern quantum theorydeveloped in the 1920s to explain perplexing physical phenomena observed at atomic scales,[2][3]anddigital computersemerged in the following decades to replacehuman computersfor tedious calculations.[4]Both disciplines had practical applications duringWorld War II; computers played a major role inwartime cryptography,[5]and quantum physics was essential fornuclear physicsused in theManhattan Project.[6]
Asphysicistsapplied quantum mechanical models to computational problems and swapped digitalbitsforqubits, the fields of quantum mechanics and computer science began to converge. In 1980,Paul Benioffintroduced thequantum Turing machine, which uses quantum theory to describe a simplified computer.[7]When digital computers became faster, physicists faced anexponentialincrease in overhead whensimulating quantum dynamics,[8]promptingYuri ManinandRichard Feynmanto independently suggest that hardware based on quantum phenomena might be more efficient for computer simulation.[9][10][11]In a 1984 paper,Charles BennettandGilles Brassardapplied quantum theory tocryptographyprotocols and demonstrated that quantum key distribution could enhanceinformation security.[12][13]
Quantum algorithmsthen emerged for solvingoracle problems, such asDeutsch's algorithmin 1985,[14]theBernstein–Vazirani algorithmin 1993,[15]andSimon's algorithmin 1994.[16]These algorithms did not solve practical problems, but demonstrated mathematically that one could gain more information by querying ablack boxwith a quantum state insuperposition, sometimes referred to asquantum parallelism.[17]
Peter Shorbuilt on these results withhis 1994 algorithmfor breaking the widely usedRSAandDiffie–Hellmanencryption protocols,[18]which drew significant attention to the field of quantum computing. In 1996,Grover's algorithmestablished a quantum speedup for the widely applicableunstructuredsearch problem.[19][20]The same year,Seth Lloydproved that quantum computers could simulate quantum systems without the exponential overhead present in classical simulations,[21]validating Feynman's 1982 conjecture.[22]
Over the years,experimentalistshave constructed small-scale quantum computers usingtrapped ionsand superconductors.[23]In 1998, a two-qubit quantum computer demonstrated the feasibility of the technology,[24][25]and subsequent experiments have increased the number of qubits and reduced error rates.[23]
In 2019,Google AIandNASAannounced that they had achievedquantum supremacywith a 54-qubit machine, performing a computation that is impossible for any classical computer.[26][27][28]However, the validity of this claim is still being actively researched.[29][30]
Computer engineerstypically describe amodern computer's operation in terms ofclassical electrodynamics.
Within these "classical" computers, some components (such assemiconductorsandrandom number generators) may rely on quantum behavior, but these components are notisolatedfrom their environment, so anyquantum informationquicklydecoheres.
Whileprogrammersmay depend onprobability theorywhen designing arandomized algorithm, quantum mechanical notions like superposition andinterferenceare largely irrelevant forprogram analysis.
Quantum programs, in contrast, rely on precise control ofcoherentquantum systems. Physicistsdescribe these systems mathematicallyusinglinear algebra.Complex numbersmodelprobability amplitudes,vectorsmodelquantum states, andmatricesmodel the operations that can be performed on these states. Programming a quantum computer is then a matter ofcomposingoperations in such a way that the resulting program computes a useful result in theory and is implementable in practice.
As physicistCharlie Bennettdescribes the relationship between quantum and classical computers,[31]
A classical computer is a quantum computer ... so we shouldn't be asking about "where do quantum speedups come from?" We should say, "well, all computers are quantum. ... Where do classical slowdowns come from?"
Just as the bit is the basic concept of classical information theory, thequbitis the fundamental unit ofquantum information. The same termqubitis used to refer to an abstract mathematical model and to any physical system that is represented by that model. A classical bit, by definition, exists in either of two physical states, which can be denoted 0 and 1. A qubit is also described by a state, and two states often written|0⟩{\displaystyle |0\rangle }and|1⟩{\displaystyle |1\rangle }serve as the quantum counterparts of the classical states 0 and 1. However, the quantum states|0⟩{\displaystyle |0\rangle }and|1⟩{\displaystyle |1\rangle }belong to avector space, meaning that they can be multiplied by constants and added together, and the result is again a valid quantum state. Such a combination is known as asuperpositionof|0⟩{\displaystyle |0\rangle }and|1⟩{\displaystyle |1\rangle }.[32][33]
A two-dimensional vector mathematically represents a qubit state. Physicists typically use Dirac notation for quantum mechanical linear algebra, writing |ψ⟩ 'ket psi' for a vector labeled ψ. Because a qubit is a two-state system, any qubit state takes the form α|0⟩ + β|1⟩, where |0⟩ and |1⟩ are the standard basis states,[b] and α and β are the probability amplitudes, which are in general complex numbers.[33] If either α or β is zero, the qubit is effectively a classical bit; when both are nonzero, the qubit is in superposition. Such a quantum state vector acts similarly to a (classical) probability vector, with one key difference: unlike probabilities, probability amplitudes are not necessarily positive numbers.[35] Negative amplitudes allow for destructive wave interference.
When a qubit is measured in the standard basis, the result is a classical bit. The Born rule describes the norm-squared correspondence between amplitudes and probabilities: when measuring a qubit α|0⟩ + β|1⟩, the state collapses to |0⟩ with probability |α|², or to |1⟩ with probability |β|².
Any valid qubit state has coefficients α and β such that |α|² + |β|² = 1.
As an example, measuring the qubit (1/√2)|0⟩ + (1/√2)|1⟩ would produce either |0⟩ or |1⟩ with equal probability.
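As a rough numerical illustration of the Born rule just described (a sketch, not taken from the cited sources), the following Python snippet represents the state as a NumPy vector, checks that |α|² + |β|² = 1, and samples measurement outcomes for the equal superposition.

```python
import numpy as np

# Born rule for a single qubit alpha|0> + beta|1>: outcome 0 with probability
# |alpha|^2 and outcome 1 with probability |beta|^2.
alpha, beta = 1 / np.sqrt(2), 1 / np.sqrt(2)     # equal superposition
state = np.array([alpha, beta], dtype=complex)

assert np.isclose(np.linalg.norm(state), 1.0)    # |alpha|^2 + |beta|^2 = 1

probs = np.abs(state) ** 2                       # [0.5, 0.5]
rng = np.random.default_rng()
samples = rng.choice([0, 1], size=10_000, p=probs)
print(probs, np.bincount(samples) / len(samples))  # empirical counts near [0.5, 0.5]
```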
Each additional qubit doubles thedimensionof thestate space.[34]As an example, the vector1/√2|00⟩+1/√2|01⟩represents a two-qubit state, atensor productof the qubit|0⟩with the qubit1/√2|0⟩+1/√2|1⟩.
This vector inhabits a four-dimensionalvector spacespanned by the basis vectors|00⟩,|01⟩,|10⟩, and|11⟩.
TheBell state1/√2|00⟩+1/√2|11⟩is impossible to decompose into the tensor product of two individual qubits—the two qubits areentangledbecause neither qubit has a state vector of its own.
In general, the vector space for an n-qubit system is 2ⁿ-dimensional, and this makes it challenging for a classical computer to simulate a quantum one: representing a 100-qubit system requires storing 2¹⁰⁰ classical values.
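The growth of the state space and the difference between product and entangled states can be seen in a short NumPy sketch (illustrative only; the reshape-and-rank test is one simple way to detect that the Bell state is not a tensor product of single-qubit states).

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
plus = (ket0 + ket1) / np.sqrt(2)                 # (1/sqrt(2))(|0> + |1>)

# A product state: |0> tensor |+> lives in the 4-dimensional space
# spanned by |00>, |01>, |10>, |11>.
product_state = np.kron(ket0, plus)
print(product_state)                              # [0.707, 0.707, 0, 0]

# The Bell state (|00> + |11>)/sqrt(2) cannot be written as a Kronecker product
# of two single-qubit vectors; reshaping it into a 2x2 matrix and checking its
# rank makes the entanglement visible (rank 2 instead of rank 1).
bell = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
print(np.linalg.matrix_rank(bell.reshape(2, 2)))  # 2 -> entangled

# Each extra qubit doubles the dimension: an n-qubit state has 2**n amplitudes.
print(len(np.kron(bell, ket0)))                   # 8 for three qubits
```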
The state of this one-qubit quantum memory can be manipulated by applying quantum logic gates, analogous to how classical memory can be manipulated with classical logic gates. One important gate for both classical and quantum computation is the NOT gate, which can be represented by a matrix
X := \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}.
Mathematically, the application of such a logic gate to a quantum state vector is modelled with matrix multiplication. Thus X|0⟩ = |1⟩ and X|1⟩ = |0⟩.
The mathematics of single qubit gates can be extended to operate on multi-qubit quantum memories in two important ways. One way is simply to select a qubit and apply that gate to the target qubit while leaving the remainder of the memory unaffected. Another way is to apply the gate to its target only if another part of the memory is in a desired state. These two choices can be illustrated using another example. The possible states of a two-qubit quantum memory are
|00⟩ := \begin{pmatrix}1\\0\\0\\0\end{pmatrix}; |01⟩ := \begin{pmatrix}0\\1\\0\\0\end{pmatrix}; |10⟩ := \begin{pmatrix}0\\0\\1\\0\end{pmatrix}; |11⟩ := \begin{pmatrix}0\\0\\0\\1\end{pmatrix}.
The controlled NOT (CNOT) gate can then be represented using the following matrix:
CNOT := \begin{pmatrix}1&0&0&0\\0&1&0&0\\0&0&0&1\\0&0&1&0\end{pmatrix}.
As a mathematical consequence of this definition, CNOT|00⟩ = |00⟩, CNOT|01⟩ = |01⟩, CNOT|10⟩ = |11⟩, and CNOT|11⟩ = |10⟩. In other words, the CNOT applies a NOT gate (X from before) to the second qubit if and only if the first qubit is in the state |1⟩. If the first qubit is |0⟩, nothing is done to either qubit.
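A minimal NumPy sketch of the gate action described above (illustrative only): the X and CNOT matrices act on basis states by matrix multiplication, and a single-qubit gate is extended to a two-qubit register with a Kronecker product.

```python
import numpy as np

# Single-qubit NOT (Pauli-X) gate and the two-qubit CNOT gate.
X = np.array([[0, 1],
              [1, 0]], dtype=complex)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

print(X @ ket0)                            # |1>: the NOT gate flips the basis states

# Applying X to only the second qubit of a two-qubit register is I tensor X.
I2 = np.eye(2, dtype=complex)
X_on_second = np.kron(I2, X)
print(X_on_second @ np.kron(ket0, ket0))   # |01>

# CNOT flips the second qubit only when the first qubit is |1>.
print(CNOT @ np.kron(ket1, ket0))          # |11>
print(CNOT @ np.kron(ket0, ket0))          # |00> (unchanged)
```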
In summary, quantum computation can be described as a network of quantum logic gates and measurements. However, anymeasurement can be deferredto the end of quantum computation, though this deferment may come at a computational cost, so mostquantum circuitsdepict a network consisting only of quantum logic gates and no measurements.
Quantum parallelismis the heuristic that quantum computers can be thought of as evaluating a function for multiple input values simultaneously. This can be achieved by preparing a quantum system in a superposition of input states and applying a unitary transformation that encodes the function to be evaluated. The resulting state encodes the function's output values for all input values in the superposition, allowing for the computation of multiple outputs simultaneously. This property is key to the speedup of many quantum algorithms. However, "parallelism" in this sense is insufficient to speed up a computation, because the measurement at the end of the computation gives only one value. To be useful, a quantum algorithm must also incorporate some other conceptual ingredient.[36][37]
There are a number ofmodels of computationfor quantum computing, distinguished by the basic elements in which the computation is decomposed.
Aquantum gate arraydecomposes computation into a sequence of few-qubitquantum gates. A quantum computation can be described as a network of quantum logic gates and measurements. However, any measurement can be deferred to the end of quantum computation, though this deferment may come at a computational cost, so most quantum circuits depict a network consisting only of quantum logic gates and no measurements.
Any quantum computation (which is, in the above formalism, any unitary matrix of size 2ⁿ × 2ⁿ over n qubits) can be represented as a network of quantum logic gates from a fairly small family of gates. A choice of gate family that enables this construction is known as a universal gate set, since a computer that can run such circuits is a universal quantum computer. One common such set includes all single-qubit gates as well as the CNOT gate from above. This means any quantum computation can be performed by executing a sequence of single-qubit gates together with CNOT gates. Though this gate set is infinite, it can be replaced with a finite gate set by appealing to the Solovay–Kitaev theorem. An implementation of Boolean functions using few-qubit quantum gates is presented in the cited work.[38]
Ameasurement-based quantum computerdecomposes computation into a sequence ofBell state measurementsand single-qubitquantum gatesapplied to a highly entangled initial state (acluster state), using a technique calledquantum gate teleportation.
Anadiabatic quantum computer, based onquantum annealing, decomposes computation into a slow continuous transformation of an initialHamiltonianinto a final Hamiltonian, whose ground states contain the solution.[39]
Neuromorphic quantum computing (abbreviated as 'n.quantum computing') is an unconventional type of computing that usesneuromorphic computingto perform quantum operations. It was suggested that quantum algorithms, which are algorithms that run on a realistic model of quantum computation, can be computed equally efficiently with neuromorphic quantum computing. Both traditional quantum computing and neuromorphic quantum computing are physics-based unconventional computing approaches to computations and do not follow thevon Neumann architecture. They both construct a system (a circuit) that represents the physical problem at hand and then leverage their respective physics properties of the system to seek the "minimum". Neuromorphic quantum computing and quantum computing share similar physical properties during computation.
Atopological quantum computerdecomposes computation into the braiding ofanyonsin a 2D lattice.[40]
Aquantum Turing machineis the quantum analog of aTuring machine.[7]All of these models of computation—quantum circuits,[41]one-way quantum computation,[42]adiabatic quantum computation,[43]and topological quantum computation[44]—have been shown to be equivalent to the quantum Turing machine; given a perfect implementation of one such quantum computer, it can simulate all the others with no more than polynomial overhead. This equivalence need not hold for practical quantum computers, since the overhead of simulation may be too large to be practical.
Thethreshold theoremshows how increasing the number of qubits can mitigate errors,[45]yet fully fault-tolerant quantum computing remains "a rather distant dream".[46]According to some researchers,noisy intermediate-scale quantum(NISQ) machines may have specialized uses in the near future, butnoisein quantum gates limits their reliability.[46]Scientists atHarvardUniversity successfully created "quantum circuits" that correct errors more efficiently than alternative methods, which may potentially remove a major obstacle to practical quantum computers.[47][48]The Harvard research team was supported byMIT,QuEra Computing,Caltech, andPrincetonUniversity and funded byDARPA's Optimization with Noisy Intermediate-Scale Quantum devices (ONISQ) program.[49][50]
Quantum computing has significant potential applications in the fields of cryptography and cybersecurity. Quantum cryptography, which leverages the principles of quantum mechanics, offers the possibility of secure communication channels that are fundamentally resistant to eavesdropping. Quantum key distribution (QKD) protocols, such as BB84, enable the secure exchange of cryptographic keys between parties, ensuring the confidentiality and integrity of communication. Additionally, quantum random number generators (QRNGs) can produce high-quality randomness, which is essential for secure encryption.
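As a toy illustration of the key-sifting step of a BB84-style protocol, the following Python sketch simulates only the classical bookkeeping (random bits and random bases, keeping positions where the bases match). It assumes a noiseless channel with no eavesdropper, so it is not a faithful model of the quantum transmission itself; the function name and parameters are hypothetical.

```python
import secrets

def bb84_sift(n_bits: int = 32):
    """Simplified BB84 key sifting: no eavesdropper, no channel noise."""
    # Alice picks random bits and random bases (0 = rectilinear, 1 = diagonal).
    alice_bits  = [secrets.randbelow(2) for _ in range(n_bits)]
    alice_bases = [secrets.randbelow(2) for _ in range(n_bits)]
    # Bob measures each qubit in a randomly chosen basis.
    bob_bases   = [secrets.randbelow(2) for _ in range(n_bits)]
    # With matching bases Bob recovers Alice's bit; otherwise his result is random.
    bob_bits = [a if ab == bb else secrets.randbelow(2)
                for a, ab, bb in zip(alice_bits, alice_bases, bob_bases)]
    # Publicly compare bases and keep only the positions where they agree.
    key_alice = [a for a, ab, bb in zip(alice_bits, alice_bases, bob_bases) if ab == bb]
    key_bob   = [b for b, ab, bb in zip(bob_bits, alice_bases, bob_bases) if ab == bb]
    return key_alice, key_bob

ka, kb = bb84_sift()
print(ka == kb, len(ka))   # the keys agree; roughly half of the sent bits survive
```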
At the same time, quantum computing poses substantial challenges to traditional cryptographic systems. Shor's algorithm, a quantum algorithm for integer factorization, could potentially break widely used public-key encryption schemes like RSA, which rely on the intractability of factoring large numbers. This has prompted a global effort to develop post-quantum cryptography—algorithms designed to resist both classical and quantum attacks. This field remains an active area of research and standardization, aiming to future-proof critical infrastructure against quantum-enabled threats.
Ongoing research in quantum and post-quantum cryptography will be critical for maintaining the integrity of digital infrastructure. Advances such as new QKD protocols, improved QRNGs, and the international standardization of quantum-resistant algorithms will play a key role in ensuring the security of communication and data in the emerging quantum era.[51]
Quantum computing also presents broader systemic and geopolitical risks. These include the potential to break current encryption protocols, disrupt financial systems, and accelerate the development of dual-use technologies such as advanced military systems or engineered pathogens. As a result, nations and corporations are actively investing in post-quantum safeguards, and the race for quantum supremacy is increasingly shaping global power dynamics.[52]
Quantum cryptographyenables new ways to transmit data securely; for example,quantum key distributionuses entangled quantum states to establish securecryptographic keys.[53]When a sender and receiver exchange quantum states, they can guarantee that anadversarydoes not intercept the message, as any unauthorized eavesdropper would disturb the delicate quantum system and introduce a detectable change.[54]With appropriatecryptographic protocols, the sender and receiver can thus establish shared private information resistant to eavesdropping.[12][55]
Modernfiber-optic cablescan transmit quantum information over relatively short distances. Ongoing experimental research aims to develop more reliable hardware (such as quantum repeaters), hoping to scale this technology to long-distancequantum networkswith end-to-end entanglement. Theoretically, this could enable novel technological applications, such as distributed quantum computing and enhancedquantum sensing.[56][57]
Progress in findingquantum algorithmstypically focuses on this quantum circuit model, though exceptions like thequantum adiabatic algorithmexist. Quantum algorithms can be roughly categorized by the type of speedup achieved over corresponding classical algorithms.[58]
Quantum algorithms that offer more than a polynomial speedup over the best-known classical algorithm include Shor's algorithm for factoring and the related quantum algorithms for computingdiscrete logarithms, solvingPell's equation, and more generally solving thehidden subgroup problemforabelianfinite groups.[58]These algorithms depend on the primitive of thequantum Fourier transform. No mathematical proof has been found that shows that an equally fast classical algorithm cannot be discovered, but evidence suggests that this is unlikely.[59]Certain oracle problems likeSimon's problemand theBernstein–Vazirani problemdo give provable speedups, though this is in thequantum query model, which is a restricted model where lower bounds are much easier to prove and doesn't necessarily translate to speedups for practical problems.
Other problems, including the simulation of quantum physical processes from chemistry and solid-state physics, the approximation of certainJones polynomials, and thequantum algorithm for linear systems of equations, have quantum algorithms appearing to give super-polynomial speedups and areBQP-complete. Because these problems are BQP-complete, an equally fast classical algorithm for them would imply thatno quantum algorithmgives a super-polynomial speedup, which is believed to be unlikely.[60]
Some quantum algorithms, likeGrover's algorithmandamplitude amplification, give polynomial speedups over corresponding classical algorithms.[58]Though these algorithms give comparably modest quadratic speedup, they are widely applicable and thus give speedups for a wide range of problems.[20]
Since chemistry and nanotechnology rely on understanding quantum systems, and such systems are impossible to simulate in an efficient manner classically,quantum simulationmay be an important application of quantum computing.[61]Quantum simulation could also be used to simulate the behavior of atoms and particles at unusual conditions such as the reactions inside acollider.[62]In June 2023, IBM computer scientists reported that a quantum computer produced better results for a physics problem than a conventional supercomputer.[63][64]
About 2% of the annual global energy output is used fornitrogen fixationto produceammoniafor theHaber processin the agricultural fertilizer industry (even though naturally occurring organisms also produce ammonia). Quantum simulations might be used to understand this process and increase the energy efficiency of production.[65]It is expected that an early use of quantum computing will be modeling that improves the efficiency of the Haber–Bosch process[66]by the mid-2020s[67]although some have predicted it will take longer.[68]
A notable application of quantum computation is forattackson cryptographic systems that are currently in use.Integer factorization, which underpins the security ofpublic key cryptographicsystems, is believed to be computationally infeasible with an ordinary computer for large integers if they are the product of fewprime numbers(e.g., products of two 300-digit primes).[69]By comparison, a quantum computer could solve this problem exponentially faster using Shor's algorithm to find its factors.[70]This ability would allow a quantum computer to break many of thecryptographicsystems in use today, in the sense that there would be apolynomial time(in the number of digits of the integer) algorithm for solving the problem. In particular, most of the popularpublic key ciphersare based on the difficulty of factoring integers or thediscrete logarithmproblem, both of which can be solved by Shor's algorithm. In particular, theRSA,Diffie–Hellman, andelliptic curve Diffie–Hellmanalgorithms could be broken. These are used to protect secure Web pages, encrypted email, and many other types of data. Breaking these would have significant ramifications for electronic privacy and security.
Identifying cryptographic systems that may be secure against quantum algorithms is an actively researched topic under the field of post-quantum cryptography.[71][72] Some public-key algorithms are based on problems other than the integer factorization and discrete logarithm problems to which Shor's algorithm applies, like the McEliece cryptosystem based on a problem in coding theory.[71][73] Lattice-based cryptosystems are also not known to be broken by quantum computers, and finding a polynomial time algorithm for solving the dihedral hidden subgroup problem, which would break many lattice-based cryptosystems, is a well-studied open problem.[74] It has been proven that applying Grover's algorithm to break a symmetric (secret key) algorithm by brute force requires time equal to roughly 2^{n/2} invocations of the underlying cryptographic algorithm, compared with roughly 2^n in the classical case,[75] meaning that symmetric key lengths are effectively halved: AES-256 would have the same security against an attack using Grover's algorithm that AES-128 has against classical brute-force search (see Key size).
The most well-known example of a problem that allows for a polynomial quantum speedup is unstructured search, which involves finding a marked item out of a list of n items in a database. This can be solved by Grover's algorithm using O(√n) queries to the database, quadratically fewer than the Ω(n) queries required for classical algorithms. In this case, the advantage is not only provable but also optimal: it has been shown that Grover's algorithm gives the maximal possible probability of finding the desired element for any number of oracle lookups. Many examples of provable quantum speedups for query problems are based on Grover's algorithm, including Brassard, Høyer, and Tapp's algorithm for finding collisions in two-to-one functions,[76] and Farhi, Goldstone, and Gutmann's algorithm for evaluating NAND trees.[77]
Problems that can be efficiently addressed with Grover's algorithm have the following properties:[78][79]
For problems with all these properties, the running time of Grover's algorithm on a quantum computer scales as the square root of the number of inputs (or elements in the database), as opposed to the linear scaling of classical algorithms. A general class of problems to which Grover's algorithm can be applied[80]is aBoolean satisfiability problem, where thedatabasethrough which the algorithm iterates is that of all possible answers. An example and possible application of this is apassword crackerthat attempts to guess a password. Breakingsymmetric cipherswith this algorithm is of interest to government agencies.[81]
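The quadratic query advantage can be seen in a small classical statevector simulation of Grover's algorithm, sketched below (illustrative only; the 8-qubit register and marked index 42 are arbitrary choices). The oracle flips the sign of the marked amplitude, and the diffusion step inverts all amplitudes about their mean, repeated roughly (π/4)√n times.

```python
import numpy as np

def grover_search(n_qubits: int, marked: int) -> int:
    """Classical statevector simulation of Grover's search for one marked item."""
    n = 2 ** n_qubits
    state = np.full(n, 1 / np.sqrt(n))          # uniform superposition over n items
    oracle = np.ones(n)
    oracle[marked] = -1                          # phase oracle marks one item
    iterations = int(np.pi / 4 * np.sqrt(n))     # about (pi/4)*sqrt(n) oracle queries
    for _ in range(iterations):
        state *= oracle                          # oracle query: flip marked amplitude
        state = 2 * state.mean() - state         # diffusion: inversion about the mean
    return int(np.argmax(np.abs(state) ** 2))    # most probable measurement outcome

print(grover_search(n_qubits=8, marked=42))      # prints 42 with high probability
```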
Quantum annealingrelies on the adiabatic theorem to undertake calculations. A system is placed in the ground state for a simple Hamiltonian, which slowly evolves to a more complicated Hamiltonian whose ground state represents the solution to the problem in question. The adiabatic theorem states that if the evolution is slow enough the system will stay in its ground state at all times through the process.Adiabatic optimization may be helpful for solvingcomputational biologyproblems.[82]
Since quantum computers can produce outputs that classical computers cannot produce efficiently, and since quantum computation is fundamentally linear algebraic, some express hope in developing quantum algorithms that can speed upmachine learningtasks.[46][83]
For example, theHHL Algorithm, named after its discoverers Harrow, Hassidim, and Lloyd, is believed to provide speedup over classical counterparts.[46][84]Some research groups have recently explored the use of quantum annealing hardware for trainingBoltzmann machinesanddeep neural networks.[85][86][87]
Deep generative chemistry models emerge as powerful tools to expeditedrug discovery. However, the immense size and complexity of the structural space of all possible drug-like molecules pose significant obstacles, which could be overcome in the future by quantum computers. Quantum computers are naturally good for solving complex quantum many-body problems[21]and thus may be instrumental in applications involving quantum chemistry. Therefore, one can expect that quantum-enhanced generative models[88]including quantum GANs[89]may eventually be developed into ultimate generative chemistry algorithms.
As of 2023,[update]classical computers outperform quantum computers for all real-world applications. While current quantum computers may speed up solutions to particular mathematical problems, they give no computational advantage for practical tasks. Scientists and engineers are exploring multiple technologies for quantum computing hardware and hope to develop scalable quantum architectures, but serious obstacles remain.[90][91]
There are a number of technical challenges in building a large-scale quantum computer.[92]PhysicistDavid DiVincenzohas listedthese requirementsfor a practical quantum computer:[93]
Sourcing parts for quantum computers is also very difficult.Superconducting quantum computers, like those constructed byGoogleandIBM, needhelium-3, anuclearresearch byproduct, and specialsuperconductingcables made only by the Japanese company Coax Co.[94]
The control of multi-qubit systems requires the generation and coordination of a large number of electrical signals with tight and deterministic timing resolution. This has led to the development ofquantum controllersthat enable interfacing with the qubits. Scaling these systems to support a growing number of qubits is an additional challenge.[95]
One of the greatest challenges involved in constructing quantum computers is controlling or removing quantum decoherence. This usually means isolating the system from its environment as interactions with the external world cause the system to decohere. However, other sources of decoherence also exist. Examples include the quantum gates and the lattice vibrations and background thermonuclear spin of the physical system used to implement the qubits. Decoherence is irreversible, as it is effectively non-unitary, and is usually something that should be highly controlled, if not avoided. Decoherence times for candidate systems in particular, the transverse relaxation timeT2(forNMRandMRItechnology, also called thedephasing time), typically range between nanoseconds and seconds at low temperatures.[96]Currently, some quantum computers require their qubits to be cooled to 20 millikelvin (usually using adilution refrigerator[97]) in order to prevent significant decoherence.[98]A 2020 study argues thationizing radiationsuch ascosmic rayscan nevertheless cause certain systems to decohere within milliseconds.[99]
As a result, time-consuming tasks may render some quantum algorithms inoperable, as attempting to maintain the state of qubits for a long enough duration will eventually corrupt the superpositions.[100]
These issues are more difficult for optical approaches as the timescales are orders of magnitude shorter and an often-cited approach to overcoming them is opticalpulse shaping. Error rates are typically proportional to the ratio of operating time to decoherence time; hence any operation must be completed much more quickly than the decoherence time.
As described by thethreshold theorem, if the error rate is small enough, it is thought to be possible to usequantum error correctionto suppress errors and decoherence. This allows the total calculation time to be longer than the decoherence time if the error correction scheme can correct errors faster than decoherence introduces them. An often-cited figure for the required error rate in each gate for fault-tolerant computation is 10−3, assuming the noise is depolarizing.
Meeting this scalability condition is possible for a wide range of systems. However, the use of error correction brings with it the cost of a greatly increased number of required qubits. The number required to factor integers using Shor's algorithm is still polynomial, and thought to be between L and L², where L is the number of binary digits in the number to be factored; error correction algorithms would inflate this figure by an additional factor of L. For a 1000-bit number, this implies a need for about 10⁴ bits without error correction.[101] With error correction, the figure would rise to about 10⁷ bits. Computation time is about L² or about 10⁷ steps and at 1 MHz, about 10 seconds. However, the encoding and error-correction overheads increase the size of a real fault-tolerant quantum computer by several orders of magnitude. Careful estimates[102][103] show that at least 3 million physical qubits would be needed to factor a 2,048-bit integer in 5 months on a fully error-corrected trapped-ion quantum computer. In terms of the number of physical qubits, to date, this remains the lowest estimate[104] for a practically useful integer factorization problem of 1,024 bits or larger.
Another approach to the stability-decoherence problem is to create atopological quantum computerwithanyons,quasi-particlesused as threads, and relying onbraid theoryto form stable logic gates.[105][106]
PhysicistJohn Preskillcoined the termquantum supremacyto describe the engineering feat of demonstrating that a programmable quantum device can solve a problem beyond the capabilities of state-of-the-art classical computers.[107][108][109]The problem need not be useful, so some view the quantum supremacy test only as a potential future benchmark.[110]
In October 2019, Google AI Quantum, with the help of NASA, became the first to claim to have achieved quantum supremacy by performing calculations on theSycamore quantum computermore than 3,000,000 times faster than they could be done onSummit, generally considered the world's fastest computer.[27][111][112]This claim has been subsequently challenged: IBM has stated that Summit can perform samples much faster than claimed,[113][114]and researchers have since developed better algorithms for the sampling problem used to claim quantum supremacy, giving substantial reductions to the gap between Sycamore and classical supercomputers[115][116][117]and even beating it.[118][119][120]
In December 2020, a group atUSTCimplemented a type ofBoson samplingon 76 photons with aphotonic quantum computer,Jiuzhang, to demonstrate quantum supremacy.[121][122][123]The authors claim that a classical contemporary supercomputer would require a computational time of 600 million years to generate the number of samples their quantum processor can generate in 20 seconds.[124]
Claims of quantum supremacy have generated hype around quantum computing,[125]but they are based on contrived benchmark tasks that do not directly imply useful real-world applications.[90][126]
In January 2024, a study published inPhysical Review Lettersprovided direct verification of quantum supremacy experiments by computing exact amplitudes for experimentally generated bitstrings using a new-generation Sunway supercomputer, demonstrating a significant leap in simulation capability built on a multiple-amplitude tensor network contraction algorithm. This development underscores the evolving landscape of quantum computing, highlighting both the progress and the complexities involved in validating quantum supremacy claims.[127]
Despite high hopes for quantum computing, significant progress in hardware, and optimism about future applications, a 2023Naturespotlight article summarized current quantum computers as being "For now, [good for] absolutely nothing".[90]The article elaborated that quantum computers are yet to be more useful or efficient than conventional computers in any case, though it also argued that in the long term such computers are likely to be useful. A 2023Communications of the ACMarticle[91]found that current quantum computing algorithms are "insufficient for practical quantum advantage without significant improvements across the software/hardware stack". It argues that the most promising candidates for achieving speedup with quantum computers are "small-data problems", for example in chemistry and materials science. However, the article also concludes that a large range of the potential applications it considered, such as machine learning, "will not achieve quantum advantage with current quantum algorithms in the foreseeable future", and it identified I/O constraints that make speedup unlikely for "big data problems, unstructured linear systems, and database search based on Grover's algorithm".
This state of affairs can be traced to several current and long-term considerations.
In particular, building computers with large numbers of qubits may be futile if those qubits are not connected well enough and cannot maintain a sufficiently high degree of entanglement for a long time. When trying to outperform conventional computers, quantum computing researchers often look for new tasks that can be solved on quantum computers, but this leaves the possibility that efficient non-quantum techniques will be developed in response, as seen for quantum supremacy demonstrations. Therefore, it is desirable to prove lower bounds on the complexity of the best possible non-quantum algorithms (which may be unknown) and show that some quantum algorithms asymptotically improve upon those bounds.
Some researchers have expressed skepticism that scalable quantum computers could ever be built, typically because of the issue of maintaining coherence at large scales, but also for other reasons.
Bill Unruh doubted the practicality of quantum computers in a paper published in 1994.[130] Paul Davies argued that a 400-qubit computer would even come into conflict with the cosmological information bound implied by the holographic principle.[131] Skeptics like Gil Kalai doubt that quantum supremacy will ever be achieved.[132][133][134] Physicist Mikhail Dyakonov has likewise expressed skepticism of quantum computing.
A practical quantum computer must use a physical system as a programmable quantum register.[137] Researchers are exploring several technologies as candidates for reliable qubit implementations.[138] Superconductors and trapped ions are some of the most developed proposals, but experimentalists are considering other hardware possibilities as well.[139] For example, topological quantum computer approaches are being explored for more fault-tolerant computing systems.[140]
The first quantum logic gates were implemented with trapped ions, and prototype general-purpose machines with up to 20 qubits have been realized. However, the technology behind these devices combines complex vacuum equipment, lasers, and microwave and radio-frequency equipment, making full-scale processors difficult to integrate with standard computing equipment. Moreover, the trapped-ion system itself has engineering challenges to overcome.[141]
The largest commercial systems are based onsuperconductordevices and have scaled to 2000 qubits. However, the error rates for larger machines have been on the order of 5%. Technologically these devices are all cryogenic and scaling to large numbers of qubits requires wafer-scale integration, a serious engineering challenge by itself.[142]
From a business-management point of view, the potential applications of quantum computing fall into four major categories: cybersecurity; data analytics and artificial intelligence; optimization and simulation; and data management and searching.[143]
Anycomputational problemsolvable by a classical computer is also solvable by a quantum computer.[144]Intuitively, this is because it is believed that all physical phenomena, including the operation of classical computers, can be described usingquantum mechanics, which underlies the operation of quantum computers.
Conversely, any problem solvable by a quantum computer is also solvable by a classical computer. It is possible to simulate both quantum and classical computers manually with just some paper and a pen, if given enough time. More formally, any quantum computer can be simulated by aTuring machine. In other words, quantum computers provide no additional power over classical computers in terms ofcomputability. This means that quantum computers cannot solveundecidable problemslike thehalting problem, and the existence of quantum computers does not disprove theChurch–Turing thesis.[145]
While quantum computers cannot solve any problems that classical computers cannot already solve, it is suspected that they can solve certain problems faster than classical computers. For instance, it is known that quantum computers can efficientlyfactor integers, while this is not believed to be the case for classical computers.
The class of problems that can be efficiently solved by a quantum computer with bounded error is called BQP, for "bounded error, quantum, polynomial time". More formally, BQP is the class of problems that can be solved by a polynomial-time quantum Turing machine with an error probability of at most 1/3. As a class of probabilistic problems, BQP is the quantum counterpart to BPP ("bounded error, probabilistic, polynomial time"), the class of problems that can be solved by polynomial-time probabilistic Turing machines with bounded error.[146] It is known that BPP ⊆ BQP, and it is widely suspected that the containment is strict (BPP ⊊ BQP), which intuitively would mean that quantum computers are more powerful than classical computers in terms of time complexity.[147]
The exact relationship of BQP toP,NP, andPSPACEis not known. However, it is known thatP⊆BQP⊆PSPACE{\displaystyle {\mathsf {P\subseteq BQP\subseteq PSPACE}}}; that is, all problems that can be efficiently solved by a deterministic classical computer can also be efficiently solved by a quantum computer, and all problems that can be efficiently solved by a quantum computer can also be solved by a deterministic classical computer with polynomial space resources. It is further suspected that BQP is a strict superset of P, meaning there are problems that are efficiently solvable by quantum computers that are not efficiently solvable by deterministic classical computers. For instance, integer factorization and thediscrete logarithm problemare known to be in BQP and are suspected to be outside of P. On the relationship of BQP to NP, little is known beyond the fact that some NP problems that are believed not to be in P are also in BQP (integer factorization and the discrete logarithm problem are both in NP, for example). It is suspected thatNP⊈BQP{\displaystyle {\mathsf {NP\nsubseteq BQP}}}; that is, it is believed that there are efficiently checkable problems that are not efficiently solvable by a quantum computer. As a direct consequence of this belief, it is also suspected that BQP is disjoint from the class ofNP-completeproblems (if an NP-complete problem were in BQP, then it would follow fromNP-hardnessthat all problems in NP are in BQP).[148]
|
https://en.wikipedia.org/wiki/Quantum_computing
|
In mathematics, an elliptic curve is a smooth, projective, algebraic curve of genus one, on which there is a specified point O. An elliptic curve is defined over a field K and describes points in K², the Cartesian product of K with itself. If the field's characteristic is different from 2 and 3, then the curve can be described as a plane algebraic curve which consists of solutions (x, y) for:
y² = x³ + ax + b
for some coefficients a and b in K. The curve is required to be non-singular, which means that the curve has no cusps or self-intersections. (This is equivalent to the condition 4a³ + 27b² ≠ 0, that is, the right-hand side being square-free in x.) It is always understood that the curve is really sitting in the projective plane, with the point O being the unique point at infinity. Many sources define an elliptic curve to be simply a curve given by an equation of this form. (When the coefficient field has characteristic 2 or 3, the above equation is not quite general enough to include all non-singular cubic curves; see § Elliptic curves over a general field below.)
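As a small worked illustration of the non-singularity condition just stated (a sketch; the example coefficients are chosen for this purpose and are not taken from the text):

```python
from fractions import Fraction

def is_nonsingular(a: Fraction, b: Fraction) -> bool:
    """The short Weierstrass curve y^2 = x^3 + a*x + b is non-singular
    exactly when 4*a^3 + 27*b^2 != 0 (characteristic not 2 or 3)."""
    return 4 * a**3 + 27 * b**2 != 0

print(is_nonsingular(Fraction(-1), Fraction(0)))   # True:  y^2 = x^3 - x
print(is_nonsingular(Fraction(0),  Fraction(0)))   # False: y^2 = x^3 has a cusp
print(is_nonsingular(Fraction(-3), Fraction(2)))   # False: a node at (1, 0)
```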
An elliptic curve is anabelian variety– that is, it has a group law defined algebraically, with respect to which it is anabelian group– andOserves as the identity element.
If y² = P(x), where P is any polynomial of degree three in x with no repeated roots, the solution set is a nonsingular plane curve of genus one, an elliptic curve. If P has degree four and is square-free, this equation again describes a plane curve of genus one; however, it has no natural choice of identity element. More generally, any algebraic curve of genus one, for example the intersection of two quadric surfaces embedded in three-dimensional projective space, is called an elliptic curve, provided that it is equipped with a marked point to act as the identity.
Using the theory ofelliptic functions, it can be shown that elliptic curves defined over thecomplex numberscorrespond to embeddings of thetorusinto thecomplex projective plane. The torus is also anabelian group, and this correspondence is also agroup isomorphism.
Elliptic curves are especially important innumber theory, and constitute a major area of current research; for example, they were used inAndrew Wiles's proof of Fermat's Last Theorem. They also find applications inelliptic curve cryptography(ECC) andinteger factorization.
An elliptic curve isnotanellipsein the sense of a projective conic, which has genus zero: seeelliptic integralfor the origin of the term. However, there is a natural representation of real elliptic curves with shape invariantj≥ 1as ellipses in the hyperbolic planeH2{\displaystyle \mathbb {H} ^{2}}. Specifically, the intersections of the Minkowski hyperboloid with quadric surfaces characterized by a certain constant-angle property produce the Steiner ellipses inH2{\displaystyle \mathbb {H} ^{2}}(generated by orientation-preserving collineations). Further, the orthogonal trajectories of these ellipses comprise the elliptic curves withj≤ 1, and any ellipse inH2{\displaystyle \mathbb {H} ^{2}}described as a locus relative to two foci is uniquely the elliptic curve sum of two Steiner ellipses, obtained by adding the pairs of intersections on each orthogonal trajectory. Here, the vertex of the hyperboloid serves as the identity on each trajectory curve.[1]
Topologically, a complex elliptic curve is atorus, while a complex ellipse is asphere.
Although the formal definition of an elliptic curve requires some background inalgebraic geometry, it is possible to describe some features of elliptic curves over thereal numbersusing only introductoryalgebraandgeometry.
In this context, an elliptic curve is a plane curve defined by an equation of the form
y² = x³ + ax + b
after a linear change of variables (a and b are real numbers). This type of equation is called a Weierstrass equation, and said to be in Weierstrass form, or Weierstrass normal form.
The definition of elliptic curve also requires that the curve be non-singular. Geometrically, this means that the graph has no cusps, self-intersections, or isolated points. Algebraically, this holds if and only if the discriminant, Δ, is not equal to zero:
Δ = −16(4a³ + 27b²).
The discriminant is zero when a = −3k² and b = 2k³ for some constant k.
(Although the factor −16 is irrelevant to whether or not the curve is non-singular, this definition of the discriminant is useful in a more advanced study of elliptic curves.)[2]
The real graph of a non-singular curve hastwocomponents if its discriminant is positive, andonecomponent if it is negative. For example, in the graphs shown in figure to the right, the discriminant in the first case is 64, and in the second case is −368. Following the convention atConic section#Discriminant,ellipticcurves require that the discriminant is negative.
When working in the projective plane, the equation in homogeneous coordinates becomes
Y²/Z² = X³/Z³ + a·X/Z + b.
This equation is not defined on the line at infinity, but we can multiply by Z³ to get one that is:
Z·Y² = X³ + a·X·Z² + b·Z³.
This resulting equation is defined on the whole projective plane, and the curve it defines projects onto the elliptic curve of interest. To find its intersection with the line at infinity, we can just set Z = 0. This implies X³ = 0, which in a field means X = 0. Y, on the other hand, can take any value, and thus all triplets (0, Y, 0) satisfy the equation. In projective geometry this set is simply the point O = [0 : 1 : 0], which is thus the unique intersection of the curve with the line at infinity.
Since the curve is smooth, hencecontinuous, it can be shown that this point at infinity is the identity element of agroupstructure whose operation is geometrically described as follows:
Since the curve is symmetric about the x-axis, given any point P, we can take −P to be the point opposite it. We then have −O = O, as O lies on the XZ plane, so that −O is also the reflection of O about the origin, and thus represents the same projective point.
IfPandQare two points on the curve, then we can uniquely describe a third pointP+Qin the following way. First, draw the line that intersectsPandQ. This will generally intersect the cubic at a third point,R. We then takeP+Qto be−R, the point oppositeR.
This definition for addition works except in a few special cases related to the point at infinity and intersection multiplicity. The first is when one of the points isO. Here, we defineP+O=P=O+P, makingOthe identity of the group. IfP=Q, we only have one point, thus we cannot define the line between them. In this case, we use the tangent line to the curve at this point as our line. In most cases, the tangent will intersect a second pointR, and we can take its opposite. IfPandQare opposites of each other, we defineP+Q=O. Lastly, ifPis aninflection point(a point where the concavity of the curve changes), we takeRto bePitself, andP+Pis simply the point opposite itself, i.e. itself.
LetKbe a field over which the curve is defined (that is, the coefficients of the defining equation or equations of the curve are inK) and denote the curve byE. Then theK-rational pointsofEare the points onEwhose coordinates all lie inK, including the point at infinity. The set ofK-rational points is denoted byE(K).E(K)is a group, because properties of polynomial equations show that ifPis inE(K), then−Pis also inE(K), and if two ofP,Q,Rare inE(K), then so is the third. Additionally, ifKis a subfield ofL, thenE(K)is asubgroupofE(L).
The above groups can be described algebraically as well as geometrically. Given the curve y² = x³ + bx + c over the field K (whose characteristic we assume to be neither 2 nor 3), and points P = (xP, yP) and Q = (xQ, yQ) on the curve, assume first that xP ≠ xQ (case 1). Let y = sx + d be the equation of the line that intersects P and Q, which has the following slope:
s = (yP − yQ) / (xP − xQ).
The line equation and the curve equation intersect at the points xP, xQ, and xR, so the equations have identical y values at these values:
(sx + d)² = x³ + bx + c,
which is equivalent to
x³ − s²x² + (b − 2sd)x + (c − d²) = 0.
Since xP, xQ, and xR are solutions, this equation has its roots at exactly the same x values as
(x − xP)(x − xQ)(x − xR) = x³ − (xP + xQ + xR)x² + (xPxQ + xPxR + xQxR)x − xPxQxR,
and because both equations are cubics, they must be the same polynomial up to a scalar. Then equating the coefficients of x² in both equations gives
−s² = −(xP + xQ + xR),
and solving for the unknown xR gives
xR = s² − xP − xQ.
yR follows from the line equation:
yR = yP + s(xR − xP),
and this is an element ofK, becausesis.
IfxP=xQ, then there are two options: ifyP= −yQ(case3), including the case whereyP=yQ= 0(case4), then the sum is defined as 0; thus, the inverse of each point on the curve is found by reflecting it across thexaxis.
If yP = yQ ≠ 0, then Q = P and R = (xR, yR) = −(P + P) = −2P = −2Q (case 2 using P as R). The slope is given by the tangent to the curve at (xP, yP):
s = (3xP² + b) / (2yP).
A more general expression for s that works in both case 1 and case 2 is
s = (xP² + xPxQ + xQ² + b) / (yP + yQ),
where equality to (yP − yQ)/(xP − xQ) relies on P and Q obeying y² = x³ + bx + c.
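The case analysis above translates directly into code. The following Python sketch (illustrative only; exact rational arithmetic via fractions, with None standing for the point at infinity O) implements chord-and-tangent addition on y² = x³ + bx + c and applies it to two points of the curve y² = x³ + 17 that appears among the examples below.

```python
from fractions import Fraction

def ec_add(P, Q, b, c):
    """Chord-and-tangent addition on y^2 = x^3 + b*x + c; None is the point O."""
    if P is None:
        return Q
    if Q is None:
        return P
    xP, yP = P
    xQ, yQ = Q
    if xP == xQ and yP == -yQ:            # opposite points (also covers yP == yQ == 0)
        return None
    if P != Q:                            # case 1: secant line through P and Q
        s = (yP - yQ) / (xP - xQ)
    else:                                 # case 2: tangent line at P
        s = (3 * xP * xP + b) / (2 * yP)
    xR = s * s - xP - xQ
    yR = yP + s * (xR - xP)               # third intersection point R
    return (xR, -yR)                      # P + Q = -R

b, c = Fraction(0), Fraction(17)          # the curve y^2 = x^3 + 17
P = (Fraction(-1), Fraction(4))
Q = (Fraction(2), Fraction(5))
print(ec_add(P, Q, b, c))                 # (-8/9, -109/27), another rational point
print(ec_add(P, (Fraction(-1), Fraction(-4)), b, c))  # P + (-P) = None, i.e. O
```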
For the curve y² = x³ + ax² + bx + c (the general form of an elliptic curve with characteristic 3), the formulas are similar, with s = (xP² + xPxQ + xQ² + axP + axQ + b) / (yP + yQ) and xR = s² − a − xP − xQ.
For a general cubic curve not in Weierstrass normal form, we can still define a group structure by designating one of its nine inflection points as the identityO. In the projective plane, each line will intersect a cubic at three points when accounting for multiplicity. For a pointP,−Pis defined as the unique third point on the line passing throughOandP. Then, for anyPandQ,P+Qis defined as−RwhereRis the unique third point on the line containingPandQ.
For an example of the group law over a non-Weierstrass curve, seeHessian curves.
A curveEdefined over the field of rational numbers is also defined over the field of real numbers. Therefore, the law of addition (of points with real coordinates) by the tangent and secant method can be applied toE. The explicit formulae show that the sum of two pointsPandQwith rational coordinates has again rational coordinates, since the line joiningPandQhas rational coefficients. This way, one shows that the set of rational points ofEforms a subgroup of the group of real points ofE.
This section is concerned with pointsP= (x,y) ofEsuch thatxis an integer.
For example, the equation y² = x³ + 17 has eight integral solutions with y > 0:[3][4]
(x, y) = (−2, 3), (−1, 4), (2, 5), (4, 9), (8, 23), (43, 282), (52, 375), (5234, 378661).
As another example, Ljunggren's equation, a curve whose Weierstrass form is y² = x³ − 2x, has only four solutions with y ≥ 0:[5]
(x, y) = (0, 0), (−1, 1), (2, 2), (338, 6214).
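A brute-force search over a bounded range of x recovers the solutions listed above (a sketch; the search windows are arbitrary and were chosen just large enough to reach the largest listed solutions).

```python
from math import isqrt

def integral_points(rhs, x_range):
    """Return the points (x, y) with y >= 0 and y^2 = rhs(x) for x in x_range."""
    points = []
    for x in x_range:
        v = rhs(x)
        if v < 0:
            continue
        y = isqrt(v)
        if y * y == v:
            points.append((x, y))
    return points

print(integral_points(lambda x: x**3 + 17, range(-3, 6000)))   # y^2 = x^3 + 17
print(integral_points(lambda x: x**3 - 2*x, range(-2, 400)))   # Ljunggren's curve
```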
Rational points can be constructed by the method of tangents and secants detailedabove, starting with afinitenumber of rational points. More precisely[6]theMordell–Weil theoremstates that the groupE(Q) is afinitely generated(abelian) group. By thefundamental theorem of finitely generated abelian groupsit is therefore a finite direct sum of copies ofZand finite cyclic groups.
The proof of the theorem[7]involves two parts. The first part shows that for any integerm> 1, thequotient groupE(Q)/mE(Q) is finite (this is the weak Mordell–Weil theorem). Second, introducing aheight functionhon the rational pointsE(Q) defined byh(P0) = 0 andh(P) = log max(|p|, |q|)ifP(unequal to the point at infinityP0) has asabscissathe rational numberx=p/q(withcoprimepandq). This height functionhhas the property thath(mP) grows roughly like the square ofm. Moreover, only finitely many rational points with height smaller than any constant exist onE.
The proof of the theorem is thus a variant of the method ofinfinite descent[8]and relies on the repeated application ofEuclidean divisionsonE: letP∈E(Q) be a rational point on the curve, writingPas the sum 2P1+Q1whereQ1is a fixed representant ofPinE(Q)/2E(Q), the height ofP1is about1/4of the one ofP(more generally, replacing 2 by anym> 1, and1/4by1/m2). Redoing the same withP1, that is to sayP1= 2P2+Q2, thenP2= 2P3+Q3, etc. finally expressesPas an integral linear combination of pointsQiand of points whose height is bounded by a fixed constant chosen in advance: by the weak Mordell–Weil theorem and the second property of the height functionPis thus expressed as an integral linear combination of a finite number of fixed points.
The theorem however doesn't provide a method to determine any representatives ofE(Q)/mE(Q).
TherankofE(Q), that is the number of copies ofZinE(Q) or, equivalently, the number of independent points of infinite order, is called therankofE. TheBirch and Swinnerton-Dyer conjectureis concerned with determining the rank. One conjectures that it can be arbitrarily large, even if only examples with relatively small rank are known. The elliptic curve with the currently largest exactly-known rank is
It has rank 20, found byNoam Elkiesand Zev Klagsbrun in 2020. Curves of rank higher than 20 have been known since 1994, with lower bounds on their ranks ranging from 21 to 29, but their exact ranks are not known and in particular it is not proven which of them have higher rank than the others or which is the true "current champion".[9]
As for the groups constituting thetorsion subgroupofE(Q), the following is known:[10]the torsion subgroup ofE(Q) is one of the 15 following groups (a theoremdue toBarry Mazur):Z/NZforN= 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, or 12, orZ/2Z×Z/2NZwithN= 1, 2, 3, 4. Examples for every case are known. Moreover, elliptic curves whose Mordell–Weil groups overQhave the same torsion groups belong to a parametrized family.[11]
TheBirch and Swinnerton-Dyer conjecture(BSD) is one of theMillennium problemsof theClay Mathematics Institute. The conjecture relies on analytic and arithmetic objects defined by the elliptic curve in question.
At the analytic side, an important ingredient is a function of a complex variable,L, theHasse–Weil zeta functionofEoverQ. This function is a variant of theRiemann zeta functionandDirichlet L-functions. It is defined as anEuler product, with one factor for everyprime numberp.
For a curve E over Q given by a minimal equation
y² + a₁xy + a₃y = x³ + a₂x² + a₄x + a₆
with integral coefficients aᵢ, reducing the coefficients modulo p defines an elliptic curve over the finite field Fp (except for a finite number of primes p, where the reduced curve has a singularity and thus fails to be elliptic, in which case E is said to be of bad reduction at p).
The zeta function of an elliptic curve over a finite field Fp is, in some sense, a generating function assembling the information of the number of points of E with values in the finite field extensions F_{p^n} of Fp. It is given by[12]
Z(E(Fp), T) = exp( ∑_{n ≥ 1} #E(F_{p^n}) T^n / n ).
The interior sum of the exponential resembles the development of the logarithm and, in fact, the so-defined zeta function is a rational function in T:
Z(E(Fp), T) = (1 − a_p T + p T²) / ((1 − T)(1 − pT)),
where the 'trace of Frobenius' term[13] a_p is defined to be the difference between the 'expected' number p + 1 and the number of points on the elliptic curve E over Fp, viz.
a_p = p + 1 − #E(Fp),
or equivalently,
#E(Fp) = p + 1 − a_p.
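For small primes, a_p can be computed directly by brute force. The sketch below (illustrative only; the curve y² = x³ − x + 1 and the prime 101 are arbitrary choices, not taken from the text) counts points over Fp and forms a_p = p + 1 − #E(Fp).

```python
def count_points(a: int, b: int, p: int) -> int:
    """Number of points on y^2 = x^3 + a*x + b over F_p, including infinity."""
    # For each residue r, record how many y in F_p satisfy y^2 = r.
    square_counts = {}
    for y in range(p):
        r = y * y % p
        square_counts[r] = square_counts.get(r, 0) + 1
    count = 1                                    # the point at infinity
    for x in range(p):
        count += square_counts.get((x**3 + a*x + b) % p, 0)
    return count

p, a, b = 101, -1, 1                             # hypothetical example curve
n_points = count_points(a, b, p)
a_p = p + 1 - n_points
print(n_points, a_p)                             # |a_p| <= 2*sqrt(p) by Hasse's bound
```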
We may define the same quantities and functions over an arbitrary finite field of characteristic p, with q = pⁿ replacing p everywhere.
TheL-functionofEoverQis then defined by collecting this information together, for all primesp. It is defined by
where N is the conductor of E, i.e. the product of the primes of bad reduction (those p for which Δ(E mod p) = 0),[14] in which case ap is defined differently from the method above: see Silverman (1986) below.
For example, E: y² = x³ + 14x + 19 has bad reduction at 17, because E mod 17: y² = x³ − 3x + 2 has Δ = 0.
This productconvergesfor Re(s) > 3/2 only. Hasse's conjecture affirms that theL-function admits ananalytic continuationto the whole complex plane and satisfies afunctional equationrelating, for anys,L(E,s) toL(E, 2 −s). In 1999 this was shown to be a consequence of the proof of the Shimura–Taniyama–Weil conjecture, which asserts that every elliptic curve overQis amodular curve, which implies that itsL-function is theL-function of amodular formwhose analytic continuation is known. One can therefore speak about the values ofL(E,s) at any complex numbers.
Ats= 1 (the conductor product can be discarded as it is finite), theL-function becomes
TheBirch and Swinnerton-Dyer conjecturerelates the arithmetic of the curve to the behaviour of thisL-function ats= 1. It affirms that the vanishing order of theL-function ats= 1 equals the rank ofEand predicts the leading term of the Laurent series ofL(E,s) at that point in terms of several quantities attached to the elliptic curve.
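In its simplest form, the order-of-vanishing part of the conjecture reads

ord_{s=1} L(E, s) = rank E(Q),

with the leading Laurent coefficient at s = 1 predicted by the finer form of the conjecture in terms of the quantities mentioned above.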
Much like theRiemann hypothesis, the truth of the BSD conjecture would have multiple consequences, including the following two:
Let K = Fq be the finite field with q elements and E an elliptic curve defined over K. While the precise number of rational points of an elliptic curve E over K is in general difficult to compute, Hasse's theorem on elliptic curves gives the following inequality:
|#E(K) − (q + 1)| ≤ 2√q.
In other words, the number of points on the curve grows proportionally to the number of elements in the field. This fact can be understood and proven with the help of some general theory; seelocal zeta functionandétale cohomologyfor example.
The set of pointsE(Fq) is a finite abelian group. It is always cyclic or the product of two cyclic groups. For example,[17]the curve defined by
overF71has 72 points (71affine pointsincluding (0,0) and onepoint at infinity) over this field, whose group structure is given byZ/2Z×Z/36Z. The number of points on a specific curve can be computed withSchoof's algorithm.
Studying the curve over thefield extensionsofFqis facilitated by the introduction of the local zeta function ofEoverFq, defined by a generating series (also see above)
where the fieldKnis the (unique up to isomorphism) extension ofK=Fqof degreen(that is,Kn=Fqn{\displaystyle K_{n}=F_{q^{n}}}).
The zeta function is a rational function in T. To see this, consider the integer a such that
There is a complex number α such that
where ᾱ is the complex conjugate, and so we have
We choose α so that its absolute value is √q, that is, α = q^(1/2)e^(iθ) and ᾱ = q^(1/2)e^(−iθ), and that cos θ = a/(2√q). Note that |a| ≤ 2√q.
α can then be used in the local zeta function, as its values when raised to the various powers of n can be said to reasonably approximate the behaviour of an, in that
Using the Taylor series for the natural logarithm,
Then (1 − αT)(1 − ᾱT) = 1 − aT + qT², so finally
Z(E/Fq, T) = (1 − aT + qT²) / ((1 − T)(1 − qT)).
For example,[18]the zeta function ofE:y2+y=x3over the fieldF2is given by
which follows from:
as q = 2, then |E| = 2¹ + 1 = 3 = 1 − a + 2, so a = 0 and the zeta function is Z(E/F2, T) = (1 + 2T²) / ((1 − T)(1 − 2T)).
Thefunctional equationis
As we are only interested in the behaviour ofan{\displaystyle a_{n}}, we can use a reduced zeta function
and so
which leads directly to the local L-functions
The Sato–Tate conjecture is a statement about how the error term 2√q in Hasse's theorem varies with the different primes q, when an elliptic curve E over Q is reduced modulo q. It was proven (for almost all such curves) in 2006, due to the results of Taylor, Harris and Shepherd-Barron,[19] and says that the error terms are equidistributed.
Elliptic curves over finite fields are notably applied incryptographyand for thefactorizationof large integers. These algorithms often make use of the group structure on the points ofE. Algorithms that are applicable to general groups, for example the group of invertible elements in finite fields,F*q, can thus be applied to the group of points on an elliptic curve. For example, thediscrete logarithmis such an algorithm. The interest in this is that choosing an elliptic curve allows for more flexibility than choosingq(and thus the group of units inFq). Also, the group structure of elliptic curves is generally more complicated.
Elliptic curves can be defined over anyfieldK; the formal definition of an elliptic curve is a non-singular projective algebraic curve overKwithgenus1 and endowed with a distinguished point defined overK.
If thecharacteristicofKis neither 2 nor 3, then every elliptic curve overKcan be written in the form
after a linear change of variables. Here p and q are elements of K such that the right-hand side polynomial x³ − px − q does not have any double roots. If the characteristic is 2 or 3, then more terms need to be kept: in characteristic 3, the most general equation is of the form
for arbitrary constantsb2,b4,b6such that the polynomial on the right-hand side has distinct roots (the notation is chosen for historical reasons). In characteristic 2, even this much is not possible, and the most general equation is
provided that the variety it defines is non-singular. If characteristic were not an obstruction, each equation would reduce to the previous ones by a suitable linear change of variables.
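In the common case of characteristic neither 2 nor 3, the no-double-root condition on x³ − px − q can be tested directly: the cubic has a repeated root exactly when its discriminant 4p³ − 27q² vanishes. A minimal sketch (plain Python, exact arithmetic via fractions; the constant differs from the usual curve discriminant only by a fixed factor):

```python
# Non-singularity test for y^2 = x^3 - p*x - q over a field of characteristic
# other than 2 or 3: the cubic has no double root iff 4p^3 - 27q^2 != 0.
from fractions import Fraction

def is_nonsingular(p, q):
    disc = 4 * Fraction(p) ** 3 - 27 * Fraction(q) ** 2
    return disc != 0

print(is_nonsingular(1, 0))   # y^2 = x^3 - x: non-singular (True)
print(is_nonsingular(3, 2))   # y^2 = x^3 - 3x - 2 = (x+1)^2 (x-2): singular (False)
```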
One typically takes the curve to be the set of all points (x,y) which satisfy the above equation and such that bothxandyare elements of thealgebraic closureofK. Points of the curve whose coordinates both belong toKare calledK-rational points.
Many of the preceding results remain valid when the field of definition ofEis anumber fieldK, that is to say, a finitefield extensionofQ. In particular, the groupE(K)ofK-rational points of an elliptic curveEdefined overKis finitely generated, which generalizes the Mordell–Weil theorem above. A theorem due toLoïc Merelshows that for a given integerd, there are (up toisomorphism) only finitely many groups that can occur as the torsion groups ofE(K) for an elliptic curve defined over a number fieldKofdegreed. More precisely,[20]there is a numberB(d) such that for any elliptic curveEdefined over a number fieldKof degreed, any torsion point ofE(K) is oforderless thanB(d). The theorem is effective: ford> 1, if a torsion point is of orderp, withpprime, then
As for the integral points, Siegel's theorem generalizes to the following: LetEbe an elliptic curve defined over a number fieldK,xandythe Weierstrass coordinates. Then there are only finitely many points ofE(K)whosex-coordinate is in thering of integersOK.
The properties of the Hasse–Weil zeta function and the Birch and Swinnerton-Dyer conjecture can also be extended to this more general situation.
The formulation of elliptic curves as the embedding of atorusin thecomplex projective planefollows naturally from a curious property ofWeierstrass's elliptic functions. These functions and their first derivative are related by the formula
Here,g2andg3are constants;℘(z)is theWeierstrass elliptic functionand℘′(z)its derivative. It should be clear that this relation is in the form of an elliptic curve (over thecomplex numbers). The Weierstrass functions are doubly periodic; that is, they areperiodicwith respect to alatticeΛ; in essence, the Weierstrass functions are naturally defined on a torusT=C/Λ. This torus may be embedded in the complex projective plane by means of the map
This map is agroup isomorphismof the torus (considered with its natural group structure) with the chord-and-tangent group law on the cubic curve which is the image of this map. It is also an isomorphism ofRiemann surfacesfrom the torus to the cubic curve, so topologically, an elliptic curve is a torus. If the latticeΛis related by multiplication by a non-zero complex numbercto a latticecΛ, then the corresponding curves are isomorphic. Isomorphism classes of elliptic curves are specified by thej-invariant.
The isomorphism classes can be understood in a simpler way as well. The constantsg2andg3, called themodular invariants, are uniquely determined by the lattice, that is, by the structure of the torus. However, all real polynomials factorize completely into linear factors over the complex numbers, since the field of complex numbers is thealgebraic closureof the reals. So, the elliptic curve may be written as
One finds that
and
with j-invariant j(τ), and λ(τ) is sometimes called the modular lambda function. For example, let τ = 2i; then λ(2i) = (−1 + √2)⁴, which implies that g′2, g′3, and therefore g′2³ − 27g′3² of the formula above are all algebraic numbers if τ involves an imaginary quadratic field. In fact, it yields the integer j(2i) = 66³ = 287496.
In contrast, themodular discriminant
is generally atranscendental number. In particular, the value of theDedekind eta functionη(2i)is
Note that theuniformization theoremimplies that everycompactRiemann surface of genus one can be represented as a torus. This also allows an easy understanding of thetorsion pointson an elliptic curve: if the latticeΛis spanned by the fundamental periodsω1andω2, then then-torsion points are the (equivalence classes of) points of the form
for integersaandbin the range0 ≤ (a,b) <n.
If
is an elliptic curve over the complex numbers and
then a pair of fundamental periods ofEcan be calculated very rapidly by
M(w, z) is the arithmetic–geometric mean of w and z. At each step of the arithmetic–geometric mean iteration, the signs of zn arising from the ambiguity of the geometric mean are chosen such that |wn − zn| ≤ |wn + zn|, where wn and zn denote the individual arithmetic mean and geometric mean iterates of w and z, respectively. When |wn − zn| = |wn + zn|, there is the additional condition that Im(zn/wn) > 0.[21]
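The rule just described can be turned into a short complex AGM routine. The sketch below (plain Python) implements the sign choice |wn − zn| ≤ |wn + zn| with the tie-breaking condition Im(zn/wn) > 0; it only illustrates the iteration itself, not a complete period computation for a given curve.

```python
# Complex arithmetic-geometric mean M(w, z) with the sign convention described
# above: at each step pick the square root so that |w_n - z_n| <= |w_n + z_n|,
# breaking ties by requiring Im(z_n / w_n) > 0.
import cmath

def agm(w, z, tol=1e-15, max_iter=100):
    for _ in range(max_iter):
        if abs(w - z) <= tol * max(abs(w), 1.0):
            break
        a = (w + z) / 2               # arithmetic mean
        g = cmath.sqrt(w * z)         # geometric mean, sign still ambiguous
        if abs(a - g) > abs(a + g) or (abs(a - g) == abs(a + g) and (g / a).imag <= 0):
            g = -g                    # flip the sign to satisfy the convention
        w, z = a, g
    return (w + z) / 2

# For real positive arguments the complex AGM agrees with the real one:
print(agm(1.0 + 0j, 2.0 + 0j))        # approximately 1.456791
```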
Over the complex numbers, every elliptic curve has nineinflection points. Every line through two of these points also passes through a third inflection point; the nine points and 12 lines formed in this way form a realization of theHesse configuration.
Given anisogeny
of elliptic curves of degreen{\displaystyle n}, thedual isogenyis an isogeny
of the same degree such that
Here [n] denotes the multiplication-by-n isogeny e ↦ ne, which has degree n².
Often only the existence of a dual isogeny is needed, but it can be explicitly given as the composition
where Div0 is the group of divisors of degree 0. To do this, we need maps E → Div0(E) given by P → P − O, where O is the neutral point of E, and Div0(E) → E given by ∑ nP P → ∑ nP P (the latter sum being evaluated using the group law on E).
To see thatf∘f^=[n]{\displaystyle f\circ {\hat {f}}=[n]}, note that the original isogenyf{\displaystyle f}can be written as a composite
and that sincef{\displaystyle f}isfiniteof degreen{\displaystyle n},f∗f∗{\displaystyle f_{*}f^{*}}is multiplication byn{\displaystyle n}onDiv0(E′).{\displaystyle \operatorname {Div} ^{0}(E').}
Alternatively, we can use the smallerPicard groupPic0{\displaystyle \operatorname {Pic} ^{0}}, aquotientofDiv0.{\displaystyle \operatorname {Div} ^{0}.}The mapE→Div0(E){\displaystyle E\to \operatorname {Div} ^{0}(E)}descends to anisomorphism,E→Pic0(E).{\displaystyle E\to \operatorname {Pic} ^{0}(E).}The dual isogeny is
Note that the relation f ∘ f̂ = [n] also implies the conjugate relation f̂ ∘ f = [n]. Indeed, let φ = f̂ ∘ f. Then φ ∘ f̂ = f̂ ∘ [n] = [n] ∘ f̂. But f̂ is surjective, so we must have φ = [n].
Elliptic curves over finite fields are used in somecryptographicapplications as well as forinteger factorization. Typically, the general idea in these applications is that a knownalgorithmwhich makes use of certain finite groups is rewritten to use the groups of rational points of elliptic curves. For more see also:
Serge Lang, in the introduction to the book cited below, stated that "It is possible to write endlessly on elliptic curves. (This is not a threat.)" The following short list is thus at best a guide to the vast expository literature available on the theoretical, algorithmic, and cryptographic aspects of elliptic curves.
This article incorporates material from Isogeny onPlanetMath, which is licensed under theCreative Commons Attribution/Share-Alike License.
|
https://en.wikipedia.org/wiki/Elliptic_curve#Group_structure
|
Wi-Fi Protected Access (WPA), Wi-Fi Protected Access 2 (WPA2), and Wi-Fi Protected Access 3 (WPA3) are the three security certification programs developed after 2000 by the Wi-Fi Alliance to secure wireless computer networks. The Alliance defined these in response to serious weaknesses researchers had found in the previous system, Wired Equivalent Privacy (WEP).[1]
WPA (sometimes referred to as the TKIP standard) became available in 2003. The Wi-Fi Alliance intended it as an intermediate measure in anticipation of the availability of the more secure and complex WPA2, which became available in 2004 and is a common shorthand for the full IEEE 802.11i (orIEEE 802.11i-2004) standard.
In January 2018, the Wi-Fi Alliance announced the release of WPA3, which has several security improvements over WPA2.[2]
As of 2023, most computers that connect to a wireless network have support for using WPA, WPA2, or WPA3. All versions thereof, at least as implemented through May, 2021, are vulnerable to compromise.[3]
WEP (Wired Equivalent Privacy) is an early encryption protocol for wireless networks, designed to secure WLAN connections. It supports 64-bit and 128-bit keys, combining user-configurable and factory-set bits. WEP uses the RC4 algorithm for encrypting data, creating a per-packet key by concatenating a new Initialization Vector (IV) with the shared key (in 64-bit WEP, a 40-bit shared key plus a 24-bit IV). Decryption reverses this process, using the IV and the shared key to regenerate the key stream and decrypt the payload. Despite its initial widespread use, WEP's significant vulnerabilities led to the adoption of more secure protocols.[4]
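The per-packet keying just described can be illustrated with a minimal RC4 sketch (plain Python): the keystream is generated from IV || key and XORed with the payload. WEP's CRC-32 integrity value and framing are omitted; the sketch is only meant to make the structure, and the danger of reusing short IVs, concrete.

```python
# Minimal RC4 keystream generator, used here only to illustrate how WEP derives
# a per-packet key by prepending the 24-bit IV to the shared key.
def rc4_keystream(key: bytes, length: int) -> bytes:
    s = list(range(256))
    j = 0
    for i in range(256):                       # key-scheduling algorithm (KSA)
        j = (j + s[i] + key[i % len(key)]) % 256
        s[i], s[j] = s[j], s[i]
    out, i, j = [], 0, 0
    for _ in range(length):                    # pseudo-random generation (PRGA)
        i = (i + 1) % 256
        j = (j + s[i]) % 256
        s[i], s[j] = s[j], s[i]
        out.append(s[(s[i] + s[j]) % 256])
    return bytes(out)

def wep_encrypt(iv: bytes, shared_key: bytes, payload: bytes) -> bytes:
    # WEP per-packet key = IV (3 bytes) || shared key (5 or 13 bytes)
    ks = rc4_keystream(iv + shared_key, len(payload))
    return bytes(p ^ k for p, k in zip(payload, ks))

iv = b"\x01\x02\x03"
key = b"\x0a\x0b\x0c\x0d\x0e"                  # illustrative 40-bit shared key
ct = wep_encrypt(iv, key, b"hello")
print(wep_encrypt(iv, key, ct))                # decryption is the same operation
```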
The Wi-Fi Alliance intended WPA as an intermediate measure to take the place ofWEPpending the availability of the fullIEEE 802.11istandard. WPA could be implemented throughfirmware upgradesonwireless network interface cardsdesigned for WEP that began shipping as far back as 1999. However, since the changes required in thewireless access points(APs) were more extensive than those needed on the network cards, most pre-2003 APs were not upgradable by vendor-provided methods to support WPA.
The WPA protocol implements theTemporal Key Integrity Protocol(TKIP). WEP uses a 64-bit or 128-bit encryption key that must be manually entered on wireless access points and devices and does not change. TKIP employs a per-packet key, meaning that it dynamically generates a new 128-bit key for each packet and thus prevents the types of attacks that compromise WEP.[5]
WPA also includes a Message Integrity Check, which is designed to prevent an attacker from altering and resending data packets. This replaces the cyclic redundancy check (CRC) that was used by the WEP standard. CRC's main flaw is that it does not provide a sufficiently strong data integrity guarantee for the packets it handles.[6] Well-tested message authentication codes existed to solve these problems, but they required too much computation to be used on old network cards. WPA uses a message integrity check algorithm called Michael to verify the integrity of the packets. Michael is much stronger than a CRC, but not as strong as the algorithm used in WPA2. Researchers have since discovered a flaw in WPA that relied on older weaknesses in WEP and the limitations of Michael to retrieve the keystream from short packets to use for re-injection and spoofing.[7][8]
Ratified in 2004, WPA2 replaced WPA. WPA2, which requires testing and certification by the Wi-Fi Alliance, implements the mandatory elements of IEEE 802.11i. In particular, it includes support for CCMP, an AES-based encryption mode.[9][10][11] Certification began in September 2004. From March 13, 2006, to June 30, 2020, WPA2 certification was mandatory for all new devices to bear the Wi-Fi trademark.[12] In WPA2-protected WLANs, secure communication is established through a multi-step process. Initially, devices associate with the Access Point (AP) via an association request. This is followed by a 4-way handshake, a crucial step ensuring that both the client and the AP have the correct Pre-Shared Key (PSK) without actually transmitting it. During this handshake, a Pairwise Transient Key (PTK) is generated for secure data exchange.
WPA2 employs the Advanced Encryption Standard (AES) with a 128-bit key, enhancing security through the Counter-Mode/CBC-Mac ProtocolCCMP. This protocol ensures robust encryption and data integrity, using different Initialization Vectors (IVs) for encryption and authentication purposes.[13]
The 4-way handshake involves:
Post-handshake, the established PTK is used for encrypting unicast traffic, and theGroup Temporal Key(GTK) is used for broadcast traffic. This comprehensive authentication and encryption mechanism is what makes WPA2 a robust security standard for wireless networks.[14]
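As a rough illustration of how the PTK is expanded from the shared secret and the handshake nonces, the sketch below follows the commonly described IEEE 802.11i key-expansion pattern (an HMAC-SHA1-based PRF over the two MAC addresses and the two nonces). The label string, field widths, and the KCK/KEK/TK split shown here are simplifications for illustration, not a drop-in implementation of the standard.

```python
# Simplified WPA2 pairwise key expansion: PTK = PRF(PMK, label, min/max of the
# two MAC addresses and the two handshake nonces). Illustrative only.
import hmac, hashlib

def prf(key: bytes, label: bytes, data: bytes, nbytes: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < nbytes:
        out += hmac.new(key, label + b"\x00" + data + bytes([counter]),
                        hashlib.sha1).digest()
        counter += 1
    return out[:nbytes]

def derive_ptk(pmk: bytes, ap_mac: bytes, sta_mac: bytes,
               anonce: bytes, snonce: bytes) -> dict:
    data = (min(ap_mac, sta_mac) + max(ap_mac, sta_mac)
            + min(anonce, snonce) + max(anonce, snonce))
    ptk = prf(pmk, b"Pairwise key expansion", data, 48)   # 384 bits for CCMP
    return {"KCK": ptk[:16], "KEK": ptk[16:32], "TK": ptk[32:48]}
```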
In January 2018, the Wi-Fi Alliance announced WPA3 as a replacement to WPA2.[15][16]Certification began in June 2018,[17]and WPA3 support has been mandatory for devices which bear the "Wi-Fi CERTIFIED™" logo since July 2020.[18]
The new standard uses an equivalent 192-bit cryptographic strength in WPA3-Enterprise mode[19](AES-256inGCM modewithSHA-384asHMAC), and still mandates the use ofCCMP-128(AES-128inCCM mode) as the minimum encryption algorithm in WPA3-Personal mode.TKIPis not allowed in WPA3.
The WPA3 standard also replaces thepre-shared key(PSK) exchange withSimultaneous Authentication of Equals(SAE) exchange, a method originally introduced withIEEE 802.11s, resulting in a more secure initial key exchange in personal mode[20][21]andforward secrecy.[22]The Wi-Fi Alliance also says that WPA3 will mitigate security issues posed by weak passwords and simplify the process of setting up devices with no display interface.[2][23]WPA3 also supportsOpportunistic Wireless Encryption (OWE)for open Wi-Fi networks that do not have passwords.
Protection of management frames as specified in theIEEE 802.11wamendment is also enforced by the WPA3 specifications.
WPA has been designed specifically to work with wireless hardware produced prior to the introduction of WPA protocol,[24]which provides inadequate security throughWEP. Some of these devices support WPA only after applying firmware upgrades, which are not available for some legacy devices.[24]
Wi-Fi devices certified since 2006 support both the WPA and WPA2 security protocols. WPA3 is required since July 1, 2020.[18]
Different WPA versions and protection mechanisms can be distinguished based on the target end-user (such as WEP, WPA, WPA2, WPA3) and the method of authentication key distribution, as well as the encryption protocol used. As of July 2020, WPA3 is the latest iteration of the WPA standard, bringing enhanced security features and addressing vulnerabilities found in WPA2. WPA3 improves authentication methods and employs stronger encryption protocols, making it the recommended choice for securing Wi-Fi networks.[23]
Also referred to asWPA-PSK(pre-shared key) mode, this is designed for home, small office and basic uses and does not require an authentication server.[25]Each wireless network device encrypts the network traffic by deriving its 128-bit encryption key from a 256-bit sharedkey. This key may be entered either as a string of 64hexadecimaldigits, or as apassphraseof 8 to 63printable ASCII characters.[26]This pass-phrase-to-PSK mapping is nevertheless not binding, as Annex J is informative in the latest 802.11 standard.[27]If ASCII characters are used, the 256-bit key is calculated by applying thePBKDF2key derivation functionto the passphrase, using theSSIDas thesaltand 4096 iterations ofHMAC-SHA1.[28]WPA-Personal mode is available on all three WPA versions.
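The passphrase-to-PSK mapping described above can be reproduced with a standard PBKDF2 call; a minimal sketch in Python:

```python
# WPA/WPA2 personal mode: the 256-bit PSK is PBKDF2-HMAC-SHA1 of the passphrase,
# salted with the SSID, using 4096 iterations (as described above).
import hashlib

def wpa_psk(passphrase: str, ssid: str) -> bytes:
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(),
                               4096, dklen=32)

# Prints the 64 hexadecimal digits that could equally be entered directly.
print(wpa_psk("password", "ExampleSSID").hex())
```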
This enterprise mode uses an802.1Xserver for authentication, offering higher security control by replacing the vulnerable WEP with the more advanced TKIP encryption. TKIP ensures continuous renewal of encryption keys, reducing security risks. Authentication is conducted through aRADIUSserver, providing robust security, especially vital in corporate settings. This setup allows integration with Windows login processes and supports various authentication methods likeExtensible Authentication Protocol, which uses certificates for secure authentication, and PEAP, creating a protected environment for authentication without requiring client certificates.[29]
Originally, only EAP-TLS (Extensible Authentication Protocol-Transport Layer Security) was certified by the Wi-Fi Alliance. In April 2010, the Wi-Fi Alliance announced the inclusion of additional EAP[31] types in its WPA- and WPA2-Enterprise certification programs.[32] This was to ensure that WPA-Enterprise certified products can interoperate with one another.
As of 2010[update]the certification program includes the following EAP types:
802.1X clients and servers developed by specific firms may support other EAP types. This certification is an attempt for popular EAP types to interoperate; their failure to do so as of 2013[update]is one of the major issues preventing rollout of 802.1X on heterogeneous networks.
Commercial 802.1X servers include MicrosoftNetwork Policy ServerandJuniper NetworksSteelbelted RADIUS as well as Aradial Radius server.[34]FreeRADIUSis an open source 802.1X server.
WPA-Personal and WPA2-Personal remain vulnerable topassword crackingattacks if users rely on aweak password or passphrase. WPA passphrase hashes are seeded from the SSID name and its length;rainbow tablesexist for the top 1,000 network SSIDs and a multitude of common passwords, requiring only a quick lookup to speed up cracking WPA-PSK.[35]
Brute forcing of simple passwords can be attempted using theAircrack Suitestarting from the four-way authentication handshake exchanged during association or periodic re-authentication.[36][37][38][39][40]
WPA3 replaces cryptographic protocols susceptible to off-line analysis with protocols that require interaction with the infrastructure for each guessed password, supposedly placing temporal limits on the number of guesses.[15]However, design flaws in WPA3 enable attackers to plausibly launch brute-force attacks (see§ Dragonblood).
WPA and WPA2 do not provide forward secrecy, meaning that once an adversary discovers the pre-shared key, they can potentially decrypt all packets encrypted using that PSK, whether transmitted in the future or captured in the past, since the traffic can be passively and silently collected. This also means an attacker can silently capture and decrypt others' packets if a WPA-protected access point is provided free of charge at a public place, because its password is usually shared with anyone in that place. In other words, WPA only protects from attackers who do not have access to the password. Because of that, it is safer to use Transport Layer Security (TLS) or similar on top of it for the transfer of any sensitive data. Starting with WPA3, however, this issue has been addressed.[22]
In 2013, Mathy Vanhoef and Frank Piessens[41]significantly improved upon theWPA-TKIPattacks of Erik Tews and Martin Beck.[42][43]They demonstrated how to inject an arbitrary number of packets, with each packet containing at most 112 bytes of payload. This was demonstrated by implementing aport scanner, which can be executed against any client usingWPA-TKIP. Additionally, they showed how to decrypt arbitrary packets sent to a client. They mentioned this can be used to hijack aTCP connection, allowing an attacker to inject maliciousJavaScriptwhen the victim visits a website.
In contrast, the Beck-Tews attack could only decrypt short packets with mostly known content, such asARPmessages, and only allowed injection of 3 to 7 packets of at most 28 bytes. The Beck-Tews attack also requiresquality of service(as defined in802.11e) to be enabled, while the Vanhoef-Piessens attack does not. Neither attack leads to recovery of the shared session key between the client andAccess Point. The authors say using a short rekeying interval can prevent some attacks but not all, and strongly recommend switching fromTKIPto AES-basedCCMP.
Halvorsen and others show how to modify the Beck-Tews attack to allow injection of 3 to 7 packets having a size of at most 596 bytes.[44]The downside is that their attack requires substantially more time to execute: approximately 18 minutes and 25 seconds. In other work Vanhoef and Piessens showed that, when WPA is used to encrypt broadcast packets, their original attack can also be executed.[45]This is an important extension, as substantially more networks use WPA to protectbroadcast packets, than to protectunicast packets. The execution time of this attack is on average around 7 minutes, compared to the 14 minutes of the original Vanhoef-Piessens and Beck-Tews attack.
The vulnerabilities of TKIP are significant because WPA-TKIP had been held before to be an extremely safe combination; indeed, WPA-TKIP is still a configuration option upon a wide variety of wireless routing devices provided by many hardware vendors. A survey in 2013 showed that 71% still allow usage of TKIP, and 19% exclusively support TKIP.[41]
A more serious security flaw was revealed in December 2011 by Stefan Viehböck that affects wireless routers with the Wi-Fi Protected Setup (WPS) feature, regardless of which encryption method they use. Most recent models have this feature and enable it by default. Many consumer Wi-Fi device manufacturers had taken steps to eliminate the potential of weak passphrase choices by promoting alternative methods of automatically generating and distributing strong keys when users add a new wireless adapter or appliance to a network. These methods include pushing buttons on the devices or entering an 8-digit PIN.
The Wi-Fi Alliance standardized these methods as Wi-Fi Protected Setup; however, the PIN feature as widely implemented introduced a major new security flaw. The flaw allows a remote attacker to recover the WPS PIN and, with it, the router's WPA/WPA2 password in a few hours.[46]Users have been urged to turn off the WPS feature,[47]although this may not be possible on some router models. Also, the PIN is written on a label on most Wi-Fi routers with WPS, which cannot be changed if compromised.
In 2018, the Wi-Fi Alliance introduced Wi-Fi Easy Connect[48]as a new alternative for the configuration of devices that lack sufficient user interface capabilities by allowing nearby devices to serve as an adequate UI for network provisioning purposes, thus mitigating the need for WPS.[49]
Several weaknesses have been found inMS-CHAPv2, some of which severely reduce the complexity of brute-force attacks, making them feasible with modern hardware. In 2012 the complexity of breaking MS-CHAPv2 was reduced to that of breaking a singleDESkey (work byMoxie Marlinspikeand Marsh Ray). Moxie advised: "Enterprises who are depending on the mutual authentication properties of MS-CHAPv2 for connection to their WPA2 Radius servers should immediately start migrating to something else."[50]
Tunneled EAP methods using TTLS or PEAP which encrypt the MSCHAPv2 exchange are widely deployed to protect against exploitation of this vulnerability. However, prevalent WPA2 client implementations during the early 2000s were prone to misconfiguration by end users, or in some cases (e.g.Android), lacked any user-accessible way to properly configure validation of AAA server certificate CNs. This extended the relevance of the original weakness in MSCHAPv2 withinMiTMattack scenarios.[51]Under stricter compliance tests for WPA2 announced alongside WPA3, certified client software will be required to conform to certain behaviors surrounding AAA certificate validation.[15]
Hole196 is a vulnerability in the WPA2 protocol that abuses the shared Group Temporal Key (GTK). It can be used to conduct man-in-the-middle anddenial-of-serviceattacks. However, it assumes that the attacker is already authenticated against Access Point and thus in possession of the GTK.[52][53]
In 2016 it was shown that the WPA and WPA2 standards contain an insecure random number generator (RNG). Researchers showed that, if vendors implement the proposed RNG, an attacker is able to predict the group key (GTK) that is supposed to be randomly generated by the access point (AP). Additionally, they showed that possession of the GTK enables the attacker to inject any traffic into the network, and allows the attacker to decrypt unicast internet traffic transmitted over the wireless network. They demonstrated their attack against an Asus RT-AC51U router that uses the MediaTek out-of-tree drivers, which generate the GTK themselves, and showed the GTK can be recovered within two minutes or less. Similarly, they demonstrated that the keys generated by Broadcom access daemons running on VxWorks 5 and later can be recovered in four minutes or less, which affects, for example, certain versions of Linksys WRT54G and certain Apple AirPort Extreme models. Vendors can defend against this attack by using a secure RNG. By doing so, Hostapd running on Linux kernels is not vulnerable to this attack, and thus routers running typical OpenWrt or LEDE installations do not exhibit this issue.[54]
In October 2017, details of theKRACK(Key Reinstallation Attack) attack on WPA2 were published.[55][56]The KRACK attack is believed to affect all variants of WPA and WPA2; however, the security implications vary between implementations, depending upon how individual developers interpreted a poorly specified part of the standard. Software patches can resolve the vulnerability but are not available for all devices.[57]KRACK exploits a weakness in the WPA2 4-Way Handshake, a critical process for generating encryption keys. Attackers can force multiple handshakes, manipulating key resets. By intercepting the handshake, they could decrypt network traffic without cracking encryption directly. This poses a risk, especially with sensitive data transmission.[58]
Manufacturers have released patches in response, but not all devices have received updates. Users are advised to keep their devices updated to mitigate such security risks. Regular updates are crucial for maintaining network security against evolving threats.[58]
The Dragonblood attacks exposed significant vulnerabilities in the Dragonfly handshake protocol used in WPA3 and EAP-pwd. These included side-channel attacks potentially revealing sensitive user information and implementation weaknesses in EAP-pwd and SAE. Concerns were also raised about the inadequate security in transitional modes supporting both WPA2 and WPA3. In response, security updates and protocol changes are being integrated into WPA3 and EAP-pwd to address these vulnerabilities and enhance overall Wi-Fi security.[59]
On May 11, 2021,FragAttacks, a set of new security vulnerabilities, were revealed, affecting Wi-Fi devices and enabling attackers within range to steal information or target devices. These include design flaws in the Wi-Fi standard, affecting most devices, and programming errors in Wi-Fi products, making almost all Wi-Fi products vulnerable. The vulnerabilities impact all Wi-Fi security protocols, including WPA3 and WEP. Exploiting these flaws is complex but programming errors in Wi-Fi products are easier to exploit. Despite improvements in Wi-Fi security, these findings highlight the need for continuous security analysis and updates. In response, security patches were developed, and users are advised to use HTTPS and install available updates for protection.
|
https://en.wikipedia.org/wiki/Wi-Fi_Protected_Access
|
Digital Enhanced Cordless Telecommunications(DECT) is acordless telephonystandard maintained byETSI. It originated inEurope, where it is the common standard, replacing earlier standards, such asCT1andCT2.[1]Since the DECT-2020 standard onwards, it also includesIoTcommunication.
Beyond Europe, it has been adopted byAustraliaand most countries inAsiaandSouth America. North American adoption was delayed byUnited Statesradio-frequency regulations. This forced development of a variation of DECT calledDECT 6.0, using a slightly different frequency range, which makes these units incompatible with systems intended for use in other areas, even from the same manufacturer. DECT has almost completely replaced other standards in most countries where it is used, with the exception of North America.
DECT was originally intended for fast roaming between networked base stations, and the first DECT product was the Net3 wireless LAN. However, its most popular application is single-cell cordless phones connected to traditional analog telephone lines, primarily in home and small-office systems, though gateways with multi-cell DECT and/or DECT repeaters are also available in many private branch exchange (PBX) systems for medium and large businesses, produced by Panasonic, Mitel, Gigaset, Ascom, Cisco, Grandstream, Snom, Spectralink, and RTX. DECT can also be used for purposes other than cordless phones, such as baby monitors, wireless microphones and industrial sensors. The ULE Alliance's DECT ULE and its "HAN FUN" protocol[2] are variants tailored for home security, automation, and the internet of things (IoT).
The DECT standard includes thegeneric access profile(GAP), a common interoperability profile for simple telephone capabilities, which most manufacturers implement. GAP-conformance enables DECT handsets and bases from different manufacturers to interoperate at the most basic level of functionality, that of making and receiving calls. Japan uses its own DECT variant, J-DECT, which is supported by the DECT forum.[3]
The New Generation DECT (NG-DECT) standard, marketed asCAT-iqby the DECT Forum, provides a common set of advanced capabilities for handsets and base stations. CAT-iq allows interchangeability acrossIP-DECTbase stations and handsets from different manufacturers, while maintaining backward compatibility with GAP equipment. It also requires mandatory support forwideband audio.
DECT-2020New Radio, marketed as NR+ (New Radio plus), is a5Gdata transmission protocol which meets ITU-RIMT-2020requirements for ultra-reliable low-latency and massive machine-type communications, and can co-exist with earlier DECT devices.[4][5][6]
The DECT standard was developed byETSIin several phases, the first of which took place between 1988 and 1992 when the first round of standards were published. These were the ETS 300-175 series in nine parts defining the air interface, and ETS 300-176 defining how the units should be type approved. A technical report, ETR-178, was also published to explain the standard.[7]Subsequent standards were developed and published by ETSI to cover interoperability profiles and standards for testing.
Named Digital European Cordless Telephone at its launch by CEPT in November 1987; its name was soon changed to Digital European Cordless Telecommunications, following a suggestion by Enrico Tosato of Italy, to reflect its broader range of application including data services. In 1995, due to its more global usage, the name was changed from European to Enhanced. DECT is recognized by theITUas fulfilling theIMT-2000requirements and thus qualifies as a3Gsystem. Within the IMT-2000 group of technologies, DECT is referred to as IMT-2000 Frequency Time (IMT-FT).
DECT was developed by ETSI but has since been adopted by many countries all over the World. The original DECT frequency band (1880–1900 MHz) is used in all countries inEurope. Outside Europe, it is used in most ofAsia,AustraliaandSouth America. In theUnited States, theFederal Communications Commissionin 2005 changed channelization and licensing costs in a nearby band (1920–1930 MHz, or 1.9GHz), known asUnlicensed Personal Communications Services(UPCS), allowing DECT devices to be sold in the U.S. with only minimal changes. These channels are reserved exclusively for voice communication applications and therefore are less likely to experience interference from other wireless devices such asbaby monitorsandwireless networks.
The New Generation DECT (NG-DECT) standard was first published in 2007;[8]it was developed by ETSI with guidance from theHome Gateway Initiativethrough the DECT Forum[9]to supportIP-DECTfunctions inhome gateway/IP-PBXequipment. The ETSI TS 102 527 series comes in five parts and covers wideband audio and mandatory interoperability features between handsets and base stations. They were preceded by an explanatory technical report, ETSI TR 102 570.[10]The DECT Forum maintains theCAT-iqtrademark and certification program; CAT-iq wideband voice profile 1.0 and interoperability profiles 2.0/2.1 are based on the relevant parts of ETSI TS 102 527.
TheDECT Ultra Low Energy(DECT ULE) standard was announced in January 2011 and the first commercial products were launched later that year byDialog Semiconductor. The standard was created to enablehome automation, security, healthcare and energy monitoring applications that are battery powered. Like DECT, DECT ULE standard uses the 1.9 GHz band, and so suffers less interference thanZigbee,Bluetooth, orWi-Fifrom microwave ovens, which all operate in the unlicensed 2.4 GHzISM band. DECT ULE uses a simple star network topology, so many devices in the home are connected to a single control unit.
A new low-complexity audio codec,LC3plus, has been added as an option to the 2019 revision of the DECT standard. This codec is designed for high-quality voice and music applications such as wireless speakers, headphones, headsets, and microphones. LC3plus supports scalable 16-bit narrowband, wideband, super wideband, fullband, and 24-bit high-resolution fullband and ultra-band coding, with sample rates of 8, 16, 24, 32, 48 and 96 kHz and audio bandwidth of up to 48 kHz.[11][12]
DECT-2020 New Radio protocol was published in July 2020; it defines a new physical interface based oncyclic prefixorthogonal frequency-division multiplexing (CP-OFDM) capable of up to 1.2Gbit/s transfer rate withQAM-1024 modulation. The updated standard supports multi-antennaMIMOandbeamforming, FECchannel coding, and hybridautomatic repeat request. There are 17 radio channel frequencies in the range from 450MHz up to 5,875MHz, and channel bandwidths of 1,728, 3,456, or 6,912kHz. Direct communication between end devices is possible with amesh networktopology. In October 2021, DECT-2020 NR was approved for theIMT-2020standard,[4]for use in Massive Machine Type Communications (MMTC) industry automation, Ultra-Reliable Low-Latency Communications (URLLC), and professionalwireless audioapplications with point-to-point ormulticastcommunications;[13][14][15]the proposal was fast-tracked by ITU-R following real-world evaluations.[5][16]The new protocol will be marketed as NR+ (New Radio plus) by the DECT Forum.[6]OFDMAandSC-FDMAmodulations were also considered by the ESTI DECT committee.[17][18]
OpenD is an open-source framework designed to provide a complete software implementation of DECT ULE protocols on reference hardware fromDialog SemiconductorandDSP Group; the project is maintained by the DECT forum.[19][20]
The DECT standard originally envisaged three major areas of application:[7]
Of these, the domestic application (cordless home telephones) has been extremely successful. The enterprisePABXmarket, albeit much smaller than the cordless home market, has been very successful as well, and all the major PABX vendors have advanced DECT access options available. The public access application did not succeed, since public cellular networks rapidly out-competed DECT by coupling their ubiquitous coverage with large increases in capacity and continuously falling costs. There has been only one major installation of DECT for public access: in early 1998Telecom Italialaunched a wide-area DECT network known as "Fido" after much regulatory delay, covering major cities in Italy.[21]The service was promoted for only a few months and, having peaked at 142,000 subscribers, was shut down in 2001.[22]
DECT has been used forwireless local loopas a substitute for copper pairs in the "last mile" in countries such as India and South Africa. By using directional antennas and sacrificing some traffic capacity, cell coverage could extend to over 10 kilometres (6.2 mi). One example is thecorDECTstandard.
The first data application for DECT was the Net3 wireless LAN system by Olivetti, launched in 1993 and discontinued in 1995. A precursor to Wi-Fi, Net3 was a micro-cellular data-only network with fast roaming between base stations and 520 kbit/s transmission rates.
Data applications such as electronic cash terminals, traffic lights, and remote door openers[23]also exist, but have been eclipsed byWi-Fi,3Gand4Gwhich compete with DECT for both voice and data.
The DECT standard specifies a means for aportable phoneor "Portable Part" to access a fixed telephone network via radio.Base stationor "Fixed Part" is used to terminate the radio link and provide access to a fixed line. Agatewayis then used to connect calls to the fixed network, such aspublic switched telephone network(telephone jack), office PBX, ISDN, or VoIP over Ethernet connection.
Typical abilities of a domestic DECTGeneric Access Profile(GAP) system include multiple handsets to one base station and one phone line socket. This allows several cordless telephones to be placed around the house, all operating from the same telephone line. Additional handsets have a battery charger station that does not plug into the telephone system. Handsets can in many cases be used asintercoms, communicating between each other, and sometimes aswalkie-talkies, intercommunicating without telephone line connection.
DECT operates in the 1880–1900 MHz band and defines ten frequency channels from 1881.792 MHz to 1897.344 MHz with a channel spacing of 1728 kHz.
DECT operates as a multicarrierfrequency-division multiple access(FDMA) andtime-division multiple access(TDMA) system. This means that theradio spectrumis divided into physical carriers in two dimensions: frequency and time. FDMA access provides up to 10 frequency channels, and TDMA access provides 24 time slots per every frame of 10ms. DECT usestime-division duplex(TDD), which means that down- and uplink use the same frequency but different time slots. Thus a base station provides 12 duplex speech channels in each frame, with each time slot occupying any available channel – thus 10 × 12 = 120 carriers are available, each carrying 32 kbit/s.
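A quick way to see the channel plan implied by these figures (ten carriers from 1881.792 to 1897.344 MHz, spaced 1728 kHz apart, each carrying 12 duplex channels) is to compute the list directly; the carrier ordering used in the sketch below is only a convention for illustration.

```python
# DECT RF carriers: ten channels spaced 1.728 MHz apart,
# spanning 1881.792-1897.344 MHz (ordering here is illustrative).
SPACING_MHZ = 1.728
BASE_MHZ = 1881.792

carriers = [round(BASE_MHZ + c * SPACING_MHZ, 3) for c in range(10)]
print(carriers)                                   # 1881.792, 1883.52, ..., 1897.344
assert abs(carriers[-1] - 1897.344) < 1e-9        # matches the top of the band
assert len(carriers) * 12 == 120                  # 120 duplex channels in total
```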
DECT also providesfrequency-hopping spread spectrumoverTDMA/TDD structure for ISM band applications. If frequency-hopping is avoided, each base station can provide up to 120 channels in the DECT spectrum before frequency reuse. Each timeslot can be assigned to a different channel in order to exploit advantages of frequency hopping and to avoid interference from other users in asynchronous fashion.[24]
DECT allows interference-free wireless operation to around 100 metres (110 yd) outdoors. Indoor performance is reduced when interior spaces are constrained by walls.
DECT performs reliably in common congested domestic radio traffic situations. It is generally immune to interference from other DECT systems, Wi-Fi networks, video senders, Bluetooth technology, baby monitors and other wireless devices.
ETSI standards documentation ETSI EN 300 175 parts 1–8 (DECT), ETSI EN 300 444 (GAP) and ETSI TS 102 527 parts 1–5 (NG-DECT) prescribe the following technical properties:
The DECTphysical layeruses FDMA/TDMA access with TDD.
Gaussian frequency-shift keying (GFSK) modulation is used: the binary one is coded with a frequency increase of 288 kHz, and the binary zero with a frequency decrease of 288 kHz. With high quality connections, 2-, 4- or 8-level differential PSK modulation (DBPSK, DQPSK or D8PSK), which is similar to QAM-2, QAM-4 and QAM-8, can be used to transmit 1, 2, or 3 bits per symbol. QAM-16 and QAM-64 modulations, with 4 and 6 bits per symbol, can be used for user data (B-field) only, with resulting transmission speeds of up to 5.068 Mbit/s.
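Since the basic GFSK interface runs at 1152 kbit/s with one bit per symbol, the higher-order modulations listed above scale the gross (air) bit rate by the number of bits per symbol. The sketch below only reproduces that arithmetic; usable user throughput, such as the 5.068 Mbit/s figure above, is lower after protocol overhead.

```python
# Gross DECT air rates for the modulation options above, assuming the
# 1152 kbaud symbol rate of the basic GFSK interface.
SYMBOL_RATE_KBAUD = 1152
bits_per_symbol = {"GFSK (2-level)": 1, "DQPSK": 2, "D8PSK": 3, "QAM-16": 4, "QAM-64": 6}

for name, bits in bits_per_symbol.items():
    print(f"{name:15s} {SYMBOL_RATE_KBAUD * bits} kbit/s gross")
```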
DECT provides dynamic channel selection and assignment; the choice of transmission frequency and time slot is always made by the mobile terminal. In case of interference in the selected frequency channel, the mobile terminal (possibly from suggestion by the base station) can initiate either intracell handover, selecting another channel/transmitter on the same base, or intercell handover, selecting a different base station altogether. For this purpose, DECT devices scan all idle channels at regular 30s intervals to generate a received signal strength indication (RSSI) list. When a new channel is required, the mobile terminal (PP) or base station (FP) selects a channel with the minimum interference from the RSSI list.
The maximum allowed power for portable equipment as well as base stations is 250 mW. A portable device radiates an average of about 10 mW during a call as it is only using one of 24 time slots to transmit. In Europe, the power limit was expressed aseffective radiated power(ERP), rather than the more commonly usedequivalent isotropically radiated power(EIRP), permitting the use of high-gain directional antennas to produce much higher EIRP and hence long ranges.
The DECTmedia access controllayer controls the physical layer and providesconnection oriented,connectionlessandbroadcastservices to the higher layers.
The DECTdata link layeruses Link Access Protocol Control (LAPC), a specially designed variant of theISDNdata link protocol called LAPD. They are based onHDLC.
GFSK modulation uses a bit rate of 1152 kbit/s, with a frame of 10 ms (11520 bits) which contains 24 time slots. Each slot contains 480 bits, some of which are reserved for physical packets and the rest is guard space. Slots 0–11 are always used for downlink (FP to PP) and slots 12–23 are used for uplink (PP to FP).
There are several combinations of slots and corresponding types of physical packets with GFSK modulation:
The 420/424 bits of a GFSK basic packet (P32) contain the following fields:
The resulting full data rate is 32 kbit/s, available in both directions.
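The 32 kbit/s figure follows directly from the frame timing: assuming the 320-bit B-field of the basic packet (the exact field split is not reproduced here), one such packet per 10 ms frame in each direction gives 320 bits every 10 ms. A minimal check:

```python
# Sanity check of the DECT basic-packet data rate: one B-field per 10 ms frame
# in each direction yields 32 kbit/s (320-bit B-field assumed here).
FRAME_MS = 10
B_FIELD_BITS = 320

rate_kbit_s = B_FIELD_BITS / FRAME_MS        # bits per millisecond == kbit/s
print(rate_kbit_s)                           # 32.0

# The frame-level numbers quoted above are consistent too:
assert 1152 * FRAME_MS == 11520              # bits per 10 ms frame at 1152 kbit/s
assert 11520 // 24 == 480                    # bits per time slot
```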
The DECTnetwork layeralways contains the following protocol entities:
Optionally it may also contain others:
All these communicate through a Link Control Entity (LCE).
The call control protocol is derived fromISDNDSS1, which is aQ.931-derived protocol. Many DECT-specific changes have been made.[specify]
The mobility management protocol includes the management of identities, authentication, location updating, on-air subscription and key allocation. It includes many elements similar to the GSM protocol, but also includes elements unique to DECT.
Unlike the GSM protocol, the DECT network specifications do not define cross-linkages between the operation of the entities (for example, Mobility Management and Call Control). The architecture presumes that such linkages will be designed into the interworking unit that connects the DECT access network to whatever mobility-enabled fixed network is involved. By keeping the entities separate, the handset is capable of responding to any combination of entity traffic, and this creates great flexibility in fixed network design without breaking full interoperability.
DECTGAPis an interoperability profile for DECT. The intent is that two different products from different manufacturers that both conform not only to the DECT standard, but also to the GAP profile defined within the DECT standard, are able to interoperate for basic calling. The DECT standard includes full testing suites for GAP, and GAP products on the market from different manufacturers are in practice interoperable for the basic functions.
The DECT media access control layer includes authentication of handsets to the base station using the DECT Standard Authentication Algorithm (DSAA). When registering the handset on the base, both record a shared 128-bit Unique Authentication Key (UAK). The base can request authentication by sending two random numbers to the handset, which calculates the response using the shared 128-bit key. The handset can also request authentication by sending a 64-bit random number to the base, which chooses a second random number, calculates the response using the shared key, and sends it back with the second random number.
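The challenge-response flow described above can be sketched generically. DECT's actual DSAA algorithm is not reproduced here; HMAC-SHA-256 is used below purely as a stand-in keyed function, so the sketch only illustrates the message flow, not the real cipher.

```python
# Generic challenge-response in the style of the DECT authentication exchange
# described above. HMAC-SHA-256 stands in for the DSAA algorithm.
import hmac, hashlib, os

UAK = os.urandom(16)                          # 128-bit shared key set at registration

def respond(key: bytes, *challenges: bytes) -> bytes:
    return hmac.new(key, b"".join(challenges), hashlib.sha256).digest()

# Base authenticates the handset: it sends two random numbers...
rand1, rand2 = os.urandom(8), os.urandom(8)
handset_res = respond(UAK, rand1, rand2)
# ...and checks the response against its own computation with the shared key.
assert hmac.compare_digest(handset_res, respond(UAK, rand1, rand2))
```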
The standard also providesencryptionservices with the DECT Standard Cipher (DSC). The encryption isfairly weak, using a 35-bitinitialization vectorand encrypting the voice stream with 64-bit encryption. While most of the DECT standard is publicly available, the part describing the DECT Standard Cipher was only available under anon-disclosure agreementto the phones' manufacturers fromETSI.
The properties of the DECT protocol make it hard to intercept a frame, modify it and send it later again, as DECT frames are based on time-division multiplexing and need to be transmitted at a specific point in time.[26]Unfortunately very few DECT devices on the market implemented authentication and encryption procedures[26][27]– and even when encryption was used by the phone, it was possible to implement aman-in-the-middle attackimpersonating a DECT base station and revert to unencrypted mode – which allows calls to be listened to, recorded, and re-routed to a different destination.[27][28][29]
After an unverified report of a successful attack in 2002,[30][31]members of the deDECTed.org project actually did reverse engineer the DECT Standard Cipher in 2008,[27]and as of 2010 there has been a viable attack on it that can recover the key.[32]
In 2012, an improved authentication algorithm, the DECT Standard Authentication Algorithm 2 (DSAA2), and improved version of the encryption algorithm, the DECT Standard Cipher 2 (DSC2), both based onAES128-bit encryption, were included as optional in the NG-DECT/CAT-iq suite.
DECT Forum also launched the DECT Security certification program which mandates the use of previously optional security features in the GAP profile, such as early encryption and base authentication.
Various access profiles have been defined in the DECT standard:
DECT 6.0 is a North American marketing term for DECT devices manufactured for the United States and Canada operating at 1.9 GHz. The "6.0" does not equate to a spectrum band; it was decided the term DECT 1.9 might have confused customers who equate larger numbers (such as the 2.4 and 5.8 in existing 2.4 GHz and 5.8 GHz cordless telephones) with later products. The term was coined by Rick Krupka, marketing director at Siemens and the DECT USA Working Group / Siemens ICM.
In North America, DECT suffers from deficiencies in comparison to DECT elsewhere, since theUPCS band(1920–1930 MHz) is not free from heavy interference.[34]Bandwidth is half as wide as that used in Europe (1880–1900 MHz), the 4 mW average transmission power reduces range compared to the 10 mW permitted in Europe, and the commonplace lack of GAP compatibility among US vendors binds customers to a single vendor.
Before 1.9 GHz band was approved by the FCC in 2005, DECT could only operate in unlicensed2.4 GHzand 900 MHz Region 2ISM bands; some users ofUnidenWDECT 2.4 GHz phones reported interoperability issues withWi-Fiequipment.[35][36][unreliable source?]
North-AmericanDECT 6.0products may not be used in Europe, Pakistan,[37]Sri Lanka,[38]and Africa, as they cause and suffer from interference with the local cellular networks. Use of such products is prohibited by European Telecommunications Authorities,PTA, Telecommunications Regulatory Commission of Sri Lanka[39]and the Independent Communication Authority of South Africa. European DECT products may not be used in the United States and Canada, as they likewise cause and suffer from interference with American and Canadian cellular networks, and use is prohibited by theFederal Communications CommissionandInnovation, Science and Economic Development Canada.
DECT 8.0 HD is a marketing designation for North American DECT devices certified withCAT-iq 2.0"Multi Line" profile.[40]
Cordless Advanced Technology—internet and quality (CAT-iq) is a certification program maintained by the DECT Forum. It is based on New Generation DECT (NG-DECT) series of standards from ETSI.
NG-DECT/CAT-iq contains features that expand the generic GAP profile with mandatory support for high quality wideband voice, enhanced security, calling party identification, multiple lines, parallel calls, and similar functions to facilitateVoIPcalls throughSIPandH.323protocols.
There are several CAT-iq profiles which define supported voice features:
CAT-iq allows any DECT handset to communicate with a DECT base from a different vendor, providing full interoperability. CAT-iq 2.0/2.1 feature set is designed to supportIP-DECTbase stations found in officeIP-PBXandhome gateways.
DECT-2020, also called NR+, is a new radio standard byETSIfor the DECT bands worldwide.[41][42]The standard was designed to meet a subset of theITUIMT-20205Grequirements that are applicable toIOTandIndustrial internet of things.[43]DECT-2020 is compliant with the requirements for Ultra Reliable Low Latency CommunicationsURLLCand massive Machine Type Communication (mMTC) of IMT-2020.
DECT-2020 NR has new capabilities[44]compared to DECT and DECT Evolution:
The DECT-2020 standard has been designed to co-exist in the DECT radio band with existing DECT deployments. It uses the same Time Division slot timing and Frequency Division center frequencies and uses pre-transmit scanning to minimize co-channel interference.
Other interoperability profiles exist in the DECT suite of standards, and in particular the DPRS (DECT Packet Radio Services) bring together a number of prior interoperability profiles for the use of DECT as a wireless LAN and wireless internet access service. With good range (up to 200 metres (660 ft) indoors and 6 kilometres (3.7 mi) using directional antennae outdoors), dedicated spectrum, high interference immunity, open interoperability and data speeds of around 500 kbit/s, DECT appeared at one time to be a superior alternative toWi-Fi.[45]The protocol capabilities built into the DECT networking protocol standards were particularly good at supporting fast roaming in the public space, between hotspots operated by competing but connected providers. The first DECT product to reach the market, Olivetti'sNet3, was a wireless LAN, and German firmsDosch & AmandandHoeft & Wesselbuilt niche businesses on the supply of data transmission systems based on DECT.
However, the timing of the availability of DECT, in the mid-1990s, was too early to find wide application for wireless data outside niche industrial applications. Whilst contemporary providers of Wi-Fi struggled with the same issues, providers of DECT retreated to the more immediately lucrative market for cordless telephones. A key weakness was also the inaccessibility of the U.S. market, due to FCC spectrum restrictions at that time. By the time mass applications for wireless Internet had emerged, and the U.S. had opened up to DECT, well into the new century, the industry had moved far ahead in terms of performance and DECT's time as a technically competitive wireless data transport had passed.
DECT usesUHFradio, similar to mobile phones, baby monitors, Wi-Fi, and other cordless telephone technologies.
In North America, the 4 mW average transmission power reduces range compared to the 10 mW permitted in Europe.
The UKHealth Protection Agency(HPA) claims that due to a mobile phone's adaptive power ability, a European DECT cordless phone's radiation could actually exceed the radiation of a mobile phone. A European DECT cordless phone's radiation has an average output power of 10 mW but is in the form of 100 bursts per second of 250 mW, a strength comparable to some mobile phones.[46]
Most studies have been unable to demonstrate any link to health effects, or have been inconclusive.Electromagnetic fieldsmay have an effect on protein expression in laboratory settings[47]but have not yet been demonstrated to have clinically significant effects in real-world settings. The World Health Organization has issued a statement on medical effects of mobile phones which acknowledges that the longer term effects (over several decades) require further research.[48]
|
https://en.wikipedia.org/wiki/Digital_Enhanced_Cordless_Telecommunications
|
Code-division multiple access(CDMA) is achannel access methodused by variousradiocommunication technologies. CDMA is an example ofmultiple access, where several transmitters can send information simultaneously over a single communication channel. This allows several users to share a band of frequencies (seebandwidth). To permit this without undue interference between the users, CDMA employsspread spectrumtechnology and a special coding scheme (where each transmitter is assigned a code).[1][2]
CDMA optimizes the use of available bandwidth as it transmits over the entire frequency range and does not limit the user's frequency range.
It is used as the access method in manymobile phone standards.IS-95, also called "cdmaOne", and its3GevolutionCDMA2000, are often simply referred to as "CDMA", butUMTS, the 3G standard used byGSMcarriers, also uses "wideband CDMA", or W-CDMA, as well as TD-CDMA and TD-SCDMA, as its radio technologies. Many carriers (such asAT&T,UScellularandVerizon) shut down 3G CDMA-based networks in 2022 and 2024, rendering handsets supporting only those protocols unusable for calls, even to911.[3][4]
It can also be used as a channel or medium access technology, likeALOHA, or as a permanent pilot/signalling channel that allows users to synchronize their local oscillators to a common system frequency and thereby also continuously estimate the channel parameters.
In these schemes, the message is modulated on a longer spreading sequence consisting of several chips (0s and 1s). Because of their very advantageous auto- and cross-correlation characteristics, such spreading sequences have also been used in radar applications for many decades, where they are calledBarker codes(with a very short sequence length, typically 8 to 32 chips).
For space-based communication applications, CDMA has been used for many decades due to the large path loss and Doppler shift caused by satellite motion. CDMA is often used withbinary phase-shift keying(BPSK) in its simplest form, but can be combined with any modulation scheme, such as (in advanced cases)quadrature amplitude modulation(QAM) ororthogonal frequency-division multiplexing(OFDM), which typically makes it very robust and efficient and also provides accurate ranging capabilities, which are difficult to obtain without CDMA. Other schemes use subcarriers based onbinary offset carrier modulation(BOC modulation), which is inspired byManchester codesand enables a larger gap between the virtual center frequency and the subcarriers, which is not the case for OFDM subcarriers.
The technology of code-division multiple access channels has long been known.
In the US, one of the earliest descriptions of CDMA can be found in the summary report of Project Hartwell on "The Security of Overseas Transport", which was a summer research project carried out at theMassachusetts Institute of Technologyfrom June to August 1950.[5]Further research in the context ofjammingandanti-jammingwas carried out in 1952 atLincoln Lab.[6]
In theSoviet Union(USSR), the first work devoted to this subject was published in 1935 byDmitry Ageev.[7]It was shown that through the use of linear methods, there are three types of signal separation: frequency, time and compensatory.[clarification needed]The technology of CDMA was used in 1957, when the young military radio engineerLeonid Kupriyanovichin Moscow made an experimental model of a wearable automatic mobile phone, called LK-1 by him, with a base station.[8]LK-1 has a weight of 3 kg, 20–30 km operating distance, and 20–30 hours of battery life.[9][10]The base station, as described by the author, could serve several customers. In 1958, Kupriyanovich made the new experimental "pocket" model of mobile phone. This phone weighed 0.5 kg. To serve more customers, Kupriyanovich proposed the device, which he called "correlator."[11][12]In 1958, the USSR also started the development of the "Altai" national civil mobile phone service for cars, based on the Soviet MRT-1327 standard. The phone system weighed 11 kg (24 lb). It was placed in the trunk of the vehicles of high-ranking officials and used a standard handset in the passenger compartment. The main developers of the Altai system were VNIIS (Voronezh Science Research Institute of Communications) and GSPI (State Specialized Project Institute). In 1963 this service started in Moscow, and in 1970 Altai service was used in 30 USSR cities.[13]
CDMA is a spread-spectrum multiple-access technique. A spread-spectrum technique spreads the bandwidth of the data uniformly for the same transmitted power. A spreading code is apseudo-random codein the time domain that has a narrowambiguity functionin the frequency domain, unlike other narrow pulse codes. In CDMA a locally generated code runs at a much higher rate than the data to be transmitted. Data for transmission is combined by bitwiseXOR(exclusive OR) with the faster code. The figure shows how a spread-spectrum signal is generated. The data signal with pulse duration {\displaystyle T_{b}} (symbol period) is XORed with the code signal with pulse duration {\displaystyle T_{c}} (chip period). (Note:bandwidthis proportional to {\displaystyle 1/T}, where {\displaystyle T} = bit time.) Therefore, the bandwidth of the data signal is {\displaystyle 1/T_{b}} and the bandwidth of the spread-spectrum signal is {\displaystyle 1/T_{c}}. Since {\displaystyle T_{c}} is much smaller than {\displaystyle T_{b}}, the bandwidth of the spread-spectrum signal is much larger than the bandwidth of the original signal. The ratio {\displaystyle T_{b}/T_{c}} is called the spreading factor or processing gain and determines to a certain extent the upper limit of the total number of users supported simultaneously by a base station.[1][2]
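As an illustration of the spreading step described above, the following minimal Python sketch XORs a low-rate bit stream with a higher-rate chip sequence and then despreads it again, assuming a noiseless channel. The chip sequence and spreading factor are arbitrary values chosen for the example, not taken from any particular standard.

# Minimal direct-sequence spreading sketch (illustrative values only).

import random

def spread(data_bits, chip_code):
    """XOR each data bit with every chip of the code (one code period per bit)."""
    return [bit ^ chip for bit in data_bits for chip in chip_code]

def despread(chips, chip_code):
    """Recover data bits by XORing with the same code and majority-voting per symbol."""
    n = len(chip_code)
    bits = []
    for i in range(0, len(chips), n):
        symbol = [c ^ k for c, k in zip(chips[i:i + n], chip_code)]
        bits.append(1 if sum(symbol) > n // 2 else 0)
    return bits

chip_code = [1, 0, 1, 1, 0, 1, 0, 0]          # example code, spreading factor 8
data = [random.randint(0, 1) for _ in range(4)]
tx = spread(data, chip_code)                   # chip rate is 8 times the bit rate
assert despread(tx, chip_code) == data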
Each user in a CDMA system uses a different code to modulate their signal. Choosing the codes used to modulate the signal is very important in the performance of CDMA systems. The best performance occurs when there is good separation between the signal of a desired user and the signals of other users. The separation of the signals is made bycorrelatingthe received signal with the locally generated code of the desired user. If the signal matches the desired user's code, then the correlation function will be high and the system can extract that signal. If the desired user's code has nothing in common with the signal, the correlation should be as close to zero as possible (thus eliminating the signal); this is referred to ascross-correlation. If the code is correlated with the signal at any time offset other than zero, the correlation should be as close to zero as possible. This is referred to as auto-correlation and is used to reject multi-path interference.[18][19]
An analogy to the problem of multiple access is a room (channel) in which people wish to talk to each other simultaneously. To avoid confusion, people could take turns speaking (time division), speak at different pitches (frequency division), or speak in different languages (code division). CDMA is analogous to the last example where people speaking the same language can understand each other, but other languages are perceived asnoiseand rejected. Similarly, in radio CDMA, each group of users is given a shared code. Many codes occupy the same channel, but only users associated with a particular code can communicate.
In general, CDMA belongs to two basic categories: synchronous (orthogonal codes) and asynchronous (pseudorandom codes).
The digital modulation method is analogous to those used in simple radio transceivers. In the analog case, a low-frequency data signal is time-multiplied with a high-frequency pure sine-wave carrier and transmitted. This is effectively a frequency convolution (Wiener–Khinchin theorem) of the two signals, resulting in a carrier with narrow sidebands. In the digital case, the sinusoidal carrier is replaced byWalsh functions. These are binary square waves that form a complete orthonormal set. The data signal is also binary and the time multiplication is achieved with a simple XOR function. This is usually aGilbert cellmixer in the circuitry.
Synchronous CDMA exploits mathematical properties oforthogonalitybetweenvectorsrepresenting the data strings. For example, the binary string1011is represented by the vector (1, 0, 1, 1). Vectors can be multiplied by taking theirdot product, by summing the products of their respective components (for example, ifu= (a,b) andv= (c,d), then their dot productu·v=ac+bd). If the dot product is zero, the two vectors are said to beorthogonalto each other. Some properties of the dot product aid understanding of howW-CDMAworks. If vectorsaandbare orthogonal, then {\displaystyle \mathbf {a} \cdot \mathbf {b} =0} and:
Each user in synchronous CDMA uses a code orthogonal to the others' codes to modulate their signal. An example of 4 mutually orthogonal digital signals is shown in the figure below. Orthogonal codes have a cross-correlation equal to zero; in other words, they do not interfere with each other. In the case of IS-95, 64-bitWalsh codesare used to encode the signal to separate different users. Since each of the 64 Walsh codes is orthogonal to all the others, the signals are channelized into 64 orthogonal signals. The following example demonstrates how each user's signal can be encoded and decoded.
Start with a set of vectors that are mutuallyorthogonal. (Although mutual orthogonality is the only condition, these vectors are usually constructed for ease of decoding, for example columns or rows fromWalsh matrices.) An example of orthogonal functions is shown in the adjacent picture. These vectors will be assigned to individual users and are called thecode,chipcode, orchipping code. In the interest of brevity, the rest of this example uses codesvwith only two bits.
Each user is associated with a different code, sayv. A 1 bit is represented by transmitting a positive codev, and a 0 bit is represented by a negative code−v. For example, ifv= (v0,v1) = (1, −1) and the data that the user wishes to transmit is (1, 0, 1, 1), then the transmitted symbols would be
For the purposes of this article, we call this constructed vector thetransmitted vector.
Each sender has a different, unique vectorvchosen from that set, but the construction method of the transmitted vector is identical.
Now, due to physical properties of interference, if two signals at a point are in phase, they add to give twice the amplitude of each signal, but if they are out of phase, they subtract and give a signal that is the difference of the amplitudes. Digitally, this behaviour can be modelled by the addition of the transmission vectors, component by component.
If sender0 has code (1, −1) and data (1, 0, 1, 1), and sender1 has code (1, 1) and data (0, 0, 1, 1), and both senders transmit simultaneously, then this table describes the coding steps:
Because signal0 and signal1 are transmitted at the same time into the air, they add to produce the raw signal
This raw signal is called an interference pattern. The receiver then extracts an intelligible signal for any known sender by combining the sender's code with the interference pattern. The following table explains how this works and shows that the signals do not interfere with one another:
Further, after decoding, all values greater than 0 are interpreted as 1, while all values less than zero are interpreted as 0. For example, after decoding, data0 is (2, −2, 2, 2), but the receiver interprets this as (1, 0, 1, 1). Values of exactly 0 mean that the sender did not transmit any data, as in the following example:
Assume signal0 = (1, −1, −1, 1, 1, −1, 1, −1) is transmitted alone. The following table shows the decode at the receiver:
When the receiver attempts to decode the signal using sender1's code, the data is all zeros; therefore the cross-correlation is equal to zero and it is clear that sender1 did not transmit any data.
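The worked example above can be reproduced numerically. The short Python sketch below only assumes the two-chip codes and data vectors given in the text: it encodes both senders, adds the transmitted vectors to form the interference pattern, and decodes each sender by correlating the pattern with that sender's code.

# Synchronous CDMA encode/decode for the two-sender example in the text.

def encode(code, data_bits):
    """Map bit 1 -> +code, bit 0 -> -code, and concatenate the symbols."""
    out = []
    for bit in data_bits:
        sign = 1 if bit == 1 else -1
        out.extend(sign * c for c in code)
    return out

def decode(code, raw):
    """Correlate the raw signal with the code, one symbol at a time."""
    n = len(code)
    values = []
    for i in range(0, len(raw), n):
        values.append(sum(r * c for r, c in zip(raw[i:i + n], code)))
    return values  # >0 means bit 1, <0 means bit 0, 0 means nothing was sent

code0, data0 = (1, -1), (1, 0, 1, 1)
code1, data1 = (1, 1), (0, 0, 1, 1)

signal0 = encode(code0, data0)
signal1 = encode(code1, data1)
raw = [a + b for a, b in zip(signal0, signal1)]   # interference pattern

print(decode(code0, raw))   # (2, -2, 2, 2)  -> bits (1, 0, 1, 1)
print(decode(code1, raw))   # (-2, -2, 2, 2) -> bits (0, 0, 1, 1)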
When mobile-to-base links cannot be precisely coordinated, particularly due to the mobility of the handsets, a different approach is required. Since it is not mathematically possible to create signature sequences that are both orthogonal for arbitrarily random starting points and which make full use of the code space, unique "pseudo-random" or "pseudo-noise" sequences called spreading sequences are used inasynchronousCDMA systems. A spreading sequence is a binary sequence that appears random but can be reproduced in a deterministic manner by intended receivers. These spreading sequences are used to encode and decode a user's signal in asynchronous CDMA in the same manner as the orthogonal codes in synchronous CDMA (shown in the example above). These spreading sequences are statistically uncorrelated, and the sum of a large number of spreading sequences results inmultiple access interference(MAI) that is approximated by a Gaussian noise process (following thecentral limit theoremin statistics).Gold codesare an example of a spreading sequence suitable for this purpose, as there is low correlation between the codes. If all of the users are received with the same power level, then the variance (e.g., the noise power) of the MAI increases in direct proportion to the number of users. In other words, unlike synchronous CDMA, the signals of other users will appear as noise to the signal of interest and interfere slightly with the desired signal in proportion to number of users.
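Gold codes of the kind mentioned above are classically generated by XORing two maximal-length sequences from linear-feedback shift registers. The sketch below is only an illustration: the two tap sets are assumed to form a preferred pair for length-31 sequences, so the tap values (and the resulting cross-correlation bounds) should be checked against a reference before being relied on.

# Sketch of Gold-code generation from two m-sequences (length 31).
# The feedback taps below are assumed to be a preferred pair for degree 5;
# verify them against a reference before use.

def lfsr(taps, length, state=None):
    """Fibonacci LFSR over GF(2); 'taps' are the feedback stage indices."""
    reg = state or [1] * max(taps)
    out = []
    for _ in range(length):
        out.append(reg[-1])
        fb = 0
        for t in taps:
            fb ^= reg[t - 1]
        reg = [fb] + reg[:-1]
    return out

N = 31
seq1 = lfsr([5, 2], N)          # first m-sequence from a 5-stage register
seq2 = lfsr([5, 4, 3, 2], N)    # second m-sequence from a 5-stage register

# Each relative shift of seq2 gives another member of the Gold family
# (seq1 and seq2 themselves are also members).
gold_codes = [[a ^ seq2[(i + shift) % N] for i, a in enumerate(seq1)]
              for shift in range(N)]
print(len(gold_codes), "codes of length", N)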
All forms of CDMA use thespread-spectrumspreading factorto allow receivers to partially discriminate against unwanted signals. Signals encoded with the specified spreading sequences are received, while signals with different sequences (or the same sequences but different timing offsets) appear as wideband noise reduced by the spreading factor.
Since each user generates MAI, controlling the signal strength is an important issue with CDMA transmitters. A CDM (synchronous CDMA), TDMA, or FDMA receiver can in theory completely reject arbitrarily strong signals using different codes, time slots or frequency channels due to the orthogonality of these systems. This is not true for asynchronous CDMA; rejection of unwanted signals is only partial. If any or all of the unwanted signals are much stronger than the desired signal, they will overwhelm it. This leads to a general requirement in any asynchronous CDMA system to approximately match the various signal power levels as seen at the receiver. In CDMA cellular, the base station uses a fast closed-loop power-control scheme to tightly control each mobile's transmit power.
In 2019, schemes were developed to precisely estimate the required code length as a function of Doppler and delay characteristics.[20]Soon after, machine-learning-based techniques that generate sequences of a desired length and with desired spreading properties were published as well. These are highly competitive with the classic Gold and Welch sequences, but they are not generated by linear-feedback shift registers and have to be stored in lookup tables.
In theory CDMA, TDMA and FDMA have exactly the same spectral efficiency, but, in practice, each has its own challenges – power control in the case of CDMA, timing in the case of TDMA, and frequency generation/filtering in the case of FDMA.
TDMA systems must carefully synchronize the transmission times of all the users to ensure that they are received in the correct time slot and do not cause interference. Since this cannot be perfectly controlled in a mobile environment, each time slot must have a guard time, which reduces the probability that users will interfere, but decreases the spectral efficiency.
Similarly, FDMA systems must use a guard band between adjacent channels, due to the unpredictableDoppler shiftof the signal spectrum because of user mobility. The guard bands will reduce the probability that adjacent channels will interfere, but decrease the utilization of the spectrum.
Asynchronous CDMA offers a key advantage in the flexible allocation of resources i.e. allocation of spreading sequences to active users. In the case of CDM (synchronous CDMA), TDMA, and FDMA the number of simultaneous orthogonal codes, time slots, and frequency slots respectively are fixed, hence the capacity in terms of the number of simultaneous users is limited. There are a fixed number of orthogonal codes, time slots or frequency bands that can be allocated for CDM, TDMA, and FDMA systems, which remain underutilized due to the bursty nature of telephony and packetized data transmissions. There is no strict limit to the number of users that can be supported in an asynchronous CDMA system, only a practical limit governed by the desired bit error probability since the SIR (signal-to-interference ratio) varies inversely with the number of users. In a bursty traffic environment like mobile telephony, the advantage afforded by asynchronous CDMA is that the performance (bit error rate) is allowed to fluctuate randomly, with an average value determined by the number of users times the percentage of utilization. Suppose there are 2Nusers that only talk half of the time, then 2Nusers can be accommodated with the sameaveragebit error probability asNusers that talk all of the time. The key difference here is that the bit error probability forNusers talking all of the time is constant, whereas it is arandomquantity (with the same mean) for 2Nusers talking half of the time.
In other words, asynchronous CDMA is ideally suited to a mobile network where large numbers of transmitters each generate a relatively small amount of traffic at irregular intervals. CDM (synchronous CDMA), TDMA, and FDMA systems cannot recover the underutilized resources inherent to bursty traffic due to the fixed number oforthogonalcodes, time slots or frequency channels that can be assigned to individual transmitters. For instance, if there areNtime slots in a TDMA system and 2Nusers that talk half of the time, then half of the time there will be more thanNusers needing to use more thanNtime slots. Furthermore, it would require significant overhead to continually allocate and deallocate the orthogonal-code, time-slot or frequency-channel resources. By comparison, asynchronous CDMA transmitters simply send when they have something to say and go off the air when they do not, keeping the same signature sequence as long as they are connected to the system.
Most modulation schemes try to minimize the bandwidth of this signal since bandwidth is a limited resource. However, spread-spectrum techniques use a transmission bandwidth that is several orders of magnitude greater than the minimum required signal bandwidth. One of the initial reasons for doing this was military applications including guidance and communication systems. These systems were designed using spread spectrum because of its security and resistance to jamming. Asynchronous CDMA has some level of privacy built in because the signal is spread using a pseudo-random code; this code makes the spread-spectrum signals appear random or have noise-like properties. A receiver cannot demodulate this transmission without knowledge of the pseudo-random sequence used to encode the data. CDMA is also resistant to jamming. A jamming signal only has a finite amount of power available to jam the signal. The jammer can either spread its energy over the entire bandwidth of the signal or jam only part of the entire signal.[18][19]
CDMA can also effectively reject narrow-band interference. Since narrow-band interference affects only a small portion of the spread-spectrum signal, it can easily be removed through notch filtering without much loss of information.Convolution encodingandinterleavingcan be used to assist in recovering this lost data. CDMA signals are also resistant to multipath fading. Since the spread-spectrum signal occupies a large bandwidth, only a small portion of this will undergo fading due to multipath at any given time. Like the narrow-band interference, this will result in only a small loss of data and can be overcome.
Another reason CDMA is resistant to multipath interference is because the delayed versions of the transmitted pseudo-random codes will have poor correlation with the original pseudo-random code, and will thus appear as another user, which is ignored at the receiver. In other words, as long as the multipath channel induces at least one chip of delay, the multipath signals will arrive at the receiver such that they are shifted in time by at least one chip from the intended signal. The correlation properties of the pseudo-random codes are such that this slight delay causes the multipath to appear uncorrelated with the intended signal, and it is thus ignored.
Some CDMA devices use arake receiver, which exploits multipath delay components to improve the performance of the system. A rake receiver combines the information from several correlators, each one tuned to a different path delay, producing a stronger version of the signal than a simple receiver with a single correlation tuned to the path delay of the strongest signal.[1][2]
Frequency reuse is the ability to reuse the same radio channel frequency at other cell sites within a cellular system. In the FDMA and TDMA systems, frequency planning is an important consideration. The frequencies used in different cells must be planned carefully to ensure signals from different cells do not interfere with each other. In a CDMA system, the same frequency can be used in every cell, because channelization is done using the pseudo-random codes. Reusing the same frequency in every cell eliminates the need for frequency planning in a CDMA system; however, planning of the different pseudo-random sequences must be done to ensure that the received signal from one cell does not correlate with the signal from a nearby cell.[1]
Since adjacent cells use the same frequencies, CDMA systems have the ability to perform soft hand-offs. Soft hand-offs allow the mobile telephone to communicate simultaneously with two or more cells. The best signal quality is selected until the hand-off is complete. This is different from hard hand-offs utilized in other cellular systems. In a hard-hand-off situation, as the mobile telephone approaches a hand-off, signal strength may vary abruptly. In contrast, CDMA systems use the soft hand-off, which is undetectable and provides a more reliable and higher-quality signal.[2]
A novel collaborative multi-user transmission and detection scheme called collaborative CDMA[21]has been investigated for the uplink that exploits the differences between users' fading channel signatures to increase the user capacity well beyond the spreading length in the MAI-limited environment. The authors show that it is possible to achieve this increase at a low complexity and highbit error rateperformance in flat fading channels, which is a major research challenge for overloaded CDMA systems. In this approach, instead of using one sequence per user as in conventional CDMA, the authors group a small number of users to share the same spreading sequence and enable group spreading and despreading operations. The new collaborative multi-user receiver consists of two stages: group multi-user detection (MUD) stage to suppress the MAI between the groups and a low-complexity maximum-likelihood detection stage to recover jointly the co-spread users' data using minimal Euclidean-distance measure and users' channel-gain coefficients. An enhanced CDMA version known as interleave-division multiple access (IDMA) uses the orthogonal interleaving as the only means of user separation in place of signature sequence used in CDMA system.
|
https://en.wikipedia.org/wiki/Code_division_multiple_access
|
Inmathematics, afinite fieldorGalois field(so-named in honor ofÉvariste Galois) is afieldthat contains a finite number ofelements. As with any field, a finite field is aseton which the operations of multiplication, addition, subtraction and division are defined and satisfy certain basic rules. The most common examples of finite fields are theintegers modp{\displaystyle p}whenp{\displaystyle p}is aprime number.
Theorderof a finite field is its number of elements, which is either a prime number or aprime power. For every prime numberp{\displaystyle p}and every positive integerk{\displaystyle k}there are fields of orderpk{\displaystyle p^{k}}. All finite fields of a given order areisomorphic.
Finite fields are fundamental in a number of areas of mathematics andcomputer science, includingnumber theory,algebraic geometry,Galois theory,finite geometry,cryptographyandcoding theory.
A finite field is a finite set that is afield; this means that multiplication, addition, subtraction and division (excluding division by zero) are defined and satisfy the rules of arithmetic known as thefield axioms.[1]
The number of elements of a finite field is called itsorderor, sometimes, itssize. A finite field of orderq{\displaystyle q}exists if and only ifq{\displaystyle q}is aprime powerpk{\displaystyle p^{k}}(wherep{\displaystyle p}is a prime number andk{\displaystyle k}is a positive integer). In a field of orderpk{\displaystyle p^{k}}, addingp{\displaystyle p}copies of any element always results in zero; that is, thecharacteristicof the field isp{\displaystyle p}.[1]
Forq=pk{\displaystyle q=p^{k}}, all fields of orderq{\displaystyle q}areisomorphic(see§ Existence and uniquenessbelow).[2]Moreover, a field cannot contain two different finitesubfieldswith the same order. One may therefore identify all finite fields with the same order, and they are unambiguously denotedFq{\displaystyle \mathbb {F} _{q}},Fq{\displaystyle \mathbf {F} _{q}}orGF(q){\displaystyle \mathrm {GF} (q)}, where the letters GF stand for "Galois field".[3]
In a finite field of orderq{\displaystyle q}, thepolynomialXq−X{\displaystyle X^{q}-X}has allq{\displaystyle q}elements of the finite field asroots.[citation needed]The non-zero elements of a finite field form amultiplicative group. This group iscyclic, so all non-zero elements can be expressed as powers of a single element called aprimitive elementof the field. (In general there will be several primitive elements for a given field.)[1]
The simplest examples of finite fields are the fields of prime order: for eachprime numberp{\displaystyle p}, theprime fieldof orderp{\displaystyle p}may be constructed as theintegers modulop{\displaystyle p},Z/pZ{\displaystyle \mathbb {Z} /p\mathbb {Z} }.[1]
The elements of the prime field of orderp{\displaystyle p}may be represented by integers in the range0,…,p−1{\displaystyle 0,\ldots ,p-1}. The sum, the difference and the product are theremainder of the divisionbyp{\displaystyle p}of the result of the corresponding integer operation.[1]The multiplicative inverse of an element may be computed by using the extended Euclidean algorithm (seeExtended Euclidean algorithm § Modular integers).[citation needed]
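To make the prime-field description concrete, here is a small Python sketch of arithmetic in GF(p) for a prime p, with the multiplicative inverse computed by the extended Euclidean algorithm as mentioned above. The function names are illustrative, not taken from any particular library.

# Arithmetic in the prime field GF(p); elements are integers 0..p-1.

def gf_add(a, b, p):
    return (a + b) % p

def gf_mul(a, b, p):
    return (a * b) % p

def gf_inv(a, p):
    """Multiplicative inverse via the extended Euclidean algorithm."""
    if a % p == 0:
        raise ZeroDivisionError("0 has no inverse")
    r0, r1 = p, a % p
    s0, s1 = 0, 1
    while r1:
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        s0, s1 = s1, s0 - q * s1
    return s0 % p

p = 7
assert gf_mul(3, gf_inv(3, p), p) == 1
assert gf_add(5, 4, p) == 2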
Let {\displaystyle F} be a finite field. For any element {\displaystyle x} in {\displaystyle F} and anyinteger{\displaystyle n}, denote by {\displaystyle n\cdot x} the sum of {\displaystyle n} copies of {\displaystyle x}. The least positive {\displaystyle n} such that {\displaystyle n\cdot 1=0} is the characteristic {\displaystyle p} of the field.[1]This allows defining a multiplication {\displaystyle (k,x)\mapsto k\cdot x} of an element {\displaystyle k} of {\displaystyle \mathrm {GF} (p)} by an element {\displaystyle x} of {\displaystyle F} by choosing an integer representative for {\displaystyle k}. This multiplication makes {\displaystyle F} into a {\displaystyle \mathrm {GF} (p)}-vector space.[1]It follows that the number of elements of {\displaystyle F} is {\displaystyle p^{n}} for some integer {\displaystyle n}.[1]
Theidentity{\displaystyle (x+y)^{p}=x^{p}+y^{p}} (sometimes called thefreshman's dream) is true in a field of characteristic {\displaystyle p}. This follows from thebinomial theorem, as eachbinomial coefficientof the expansion of {\displaystyle (x+y)^{p}}, except the first and the last, is a multiple of {\displaystyle p}.[citation needed]
ByFermat's little theorem, if {\displaystyle p} is a prime number and {\displaystyle x} is in the field {\displaystyle \mathrm {GF} (p)} then {\displaystyle x^{p}=x}. This implies the equality {\displaystyle X^{p}-X=\prod _{a\in \mathrm {GF} (p)}(X-a)} for polynomials over {\displaystyle \mathrm {GF} (p)}. More generally, every element in {\displaystyle \mathrm {GF} (p^{n})} satisfies the polynomial equation {\displaystyle x^{p^{n}}-x=0}.[citation needed]
Any finitefield extensionof a finite field isseparableand simple. That is, ifE{\displaystyle E}is a finite field andF{\displaystyle F}is a subfield ofE{\displaystyle E}, thenE{\displaystyle E}is obtained fromF{\displaystyle F}by adjoining a single element whoseminimal polynomialisseparable. To use a piece of jargon, finite fields areperfect.[1]
A more generalalgebraic structurethat satisfies all the other axioms of a field, but whose multiplication is not required to becommutative, is called adivision ring(or sometimesskew field). ByWedderburn's little theorem, any finite division ring is commutative, and hence is a finite field.[1]
Let {\displaystyle q=p^{n}} be aprime power, and {\displaystyle F} be thesplitting fieldof the polynomial {\displaystyle P=X^{q}-X} over the prime field {\displaystyle \mathrm {GF} (p)}. This means that {\displaystyle F} is a finite field of lowest order in which {\displaystyle P} has {\displaystyle q} distinct roots (theformal derivativeof {\displaystyle P} is {\displaystyle P'=-1}, implying that {\displaystyle \mathrm {gcd} (P,P')=1}, which in general implies that the splitting field is aseparable extensionof the original). Theabove identityshows that the sum and the product of two roots of {\displaystyle P} are roots of {\displaystyle P}, as well as the multiplicative inverse of a root of {\displaystyle P}. In other words, the roots of {\displaystyle P} form a field of order {\displaystyle q}, which is equal to {\displaystyle F} by the minimality of the splitting field.
The uniqueness up to isomorphism of splitting fields implies thus that all fields of orderq{\displaystyle q}are isomorphic. Also, if a fieldF{\displaystyle F}has a field of orderq=pk{\displaystyle q=p^{k}}as a subfield, its elements are theq{\displaystyle q}roots ofXq−X{\displaystyle X^{q}-X}, andF{\displaystyle F}cannot contain another subfield of orderq{\displaystyle q}.
In summary, we have the following classification theorem first proved in 1893 byE. H. Moore:[2]
The order of a finite field is a prime power. For every prime power {\displaystyle q} there are fields of order {\displaystyle q}, and they are all isomorphic. In these fields, every element satisfies {\displaystyle x^{q}=x,} and the polynomial {\displaystyle X^{q}-X} factors as {\displaystyle X^{q}-X=\prod _{a\in F}(X-a).}
It follows thatGF(pn){\displaystyle \mathrm {GF} (p^{n})}contains a subfield isomorphic toGF(pm){\displaystyle \mathrm {GF} (p^{m})}if and only ifm{\displaystyle m}is a divisor ofn{\displaystyle n}; in that case, this subfield is unique. In fact, the polynomialXpm−X{\displaystyle X^{p^{m}}-X}dividesXpn−X{\displaystyle X^{p^{n}}-X}if and only ifm{\displaystyle m}is a divisor ofn{\displaystyle n}.
Given a prime powerq=pn{\displaystyle q=p^{n}}withp{\displaystyle p}prime andn>1{\displaystyle n>1}, the fieldGF(q){\displaystyle \mathrm {GF} (q)}may be explicitly constructed in the following way. One first chooses anirreducible polynomialP{\displaystyle P}inGF(p)[X]{\displaystyle \mathrm {GF} (p)[X]}of degreen{\displaystyle n}(such an irreducible polynomial always exists). Then thequotient ringGF(q)=GF(p)[X]/(P){\displaystyle \mathrm {GF} (q)=\mathrm {GF} (p)[X]/(P)}of the polynomial ringGF(p)[X]{\displaystyle \mathrm {GF} (p)[X]}by the ideal generated byP{\displaystyle P}is a field of orderq{\displaystyle q}.
More explicitly, the elements ofGF(q){\displaystyle \mathrm {GF} (q)}are the polynomials overGF(p){\displaystyle \mathrm {GF} (p)}whose degree is strictly less thann{\displaystyle n}. The addition and the subtraction are those of polynomials overGF(p){\displaystyle \mathrm {GF} (p)}. The product of two elements is the remainder of theEuclidean divisionbyP{\displaystyle P}of the product inGF(q)[X]{\displaystyle \mathrm {GF} (q)[X]}.
The multiplicative inverse of a non-zero element may be computed with the extended Euclidean algorithm; seeExtended Euclidean algorithm § Simple algebraic field extensions.
However, with this representation, elements ofGF(q){\displaystyle \mathrm {GF} (q)}may be difficult to distinguish from the corresponding polynomials. Therefore, it is common to give a name, commonlyα{\displaystyle \alpha }to the element ofGF(q){\displaystyle \mathrm {GF} (q)}that corresponds to the polynomialX{\displaystyle X}. So, the elements ofGF(q){\displaystyle \mathrm {GF} (q)}become polynomials inα{\displaystyle \alpha }, whereP(α)=0{\displaystyle P(\alpha )=0}, and, when one encounters a polynomial inα{\displaystyle \alpha }of degree greater or equal ton{\displaystyle n}(for example after a multiplication), one knows that one has to use the relationP(α)=0{\displaystyle P(\alpha )=0}to reduce its degree (it is what Euclidean division is doing).
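As a concrete illustration of this quotient-ring construction, the Python sketch below represents an element of GF(2^n) as a bit mask (bit i is the coefficient of α^i) and reduces products modulo an irreducible polynomial P. The particular choice n = 8 with P = X^8 + X^4 + X^3 + X + 1 is only one possible example (it is the polynomial commonly associated with AES); the representation and function names are assumptions made for this sketch.

# Elements of GF(2^n) as integers: bit i holds the coefficient of alpha^i.
# Multiplication reduces modulo the chosen irreducible polynomial P.

def gf2n_add(a, b):
    return a ^ b                      # addition = coefficient-wise XOR

def gf2n_mul(a, b, poly, n):
    """Shift-and-add multiplication, reducing modulo 'poly' (degree n, bit n set)."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        b >>= 1
        a <<= 1
        if a & (1 << n):              # degree reached n: subtract (XOR) poly
            a ^= poly
    return result

# Example: GF(2^8) with P = X^8 + X^4 + X^3 + X + 1 (0x11B).
P, n = 0x11B, 8
alpha = 0x02                                      # the class of X
assert gf2n_mul(alpha, 0x80, P, n) == 0x1B        # alpha^8 = alpha^4 + alpha^3 + alpha + 1
assert gf2n_add(0x57, 0x57) == 0                  # every element is its own additive inverse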
Except in the construction of {\displaystyle \mathrm {GF} (4)}, there are several possible choices for {\displaystyle P}, which produce isomorphic results. To simplify the Euclidean division, one commonly chooses for {\displaystyle P} a polynomial of the form {\displaystyle X^{n}+aX+b,} which makes the needed Euclidean divisions very efficient. However, for some fields, typically in characteristic {\displaystyle 2}, irreducible polynomials of the form {\displaystyle X^{n}+aX+b} may not exist. In characteristic {\displaystyle 2}, if the polynomial {\displaystyle X^{n}+X+1} is reducible, it is recommended to choose {\displaystyle X^{n}+X^{k}+1} with the lowest possible {\displaystyle k} that makes the polynomial irreducible. If all thesetrinomialsare reducible, one chooses "pentanomials" {\displaystyle X^{n}+X^{a}+X^{b}+X^{c}+1}, as polynomials of degree greater than {\displaystyle 1}, with an even number of terms, are never irreducible in characteristic {\displaystyle 2}, having {\displaystyle 1} as a root.[4]
A possible choice for such a polynomial is given byConway polynomials. They ensure a certain compatibility between the representation of a field and the representations of its subfields.
In the next sections, we will show how the general construction method outlined above works for small finite fields.
The smallest non-prime field is the field with four elements, which is commonly denotedGF(4){\displaystyle \mathrm {GF} (4)}orF4.{\displaystyle \mathbb {F} _{4}.}It consists of the four elements0,1,α,1+α{\displaystyle 0,1,\alpha ,1+\alpha }such thatα2=1+α{\displaystyle \alpha ^{2}=1+\alpha },1⋅α=α⋅1=α{\displaystyle 1\cdot \alpha =\alpha \cdot 1=\alpha },x+x=0{\displaystyle x+x=0}, andx⋅0=0⋅x=0{\displaystyle x\cdot 0=0\cdot x=0}, for everyx∈GF(4){\displaystyle x\in \mathrm {GF} (4)}, the other operation results being easily deduced from thedistributive law. See below for the complete operation tables.
This may be deduced as follows from the results of the preceding section.
Over {\displaystyle \mathrm {GF} (2)}, there is only oneirreducible polynomialof degree 2: {\displaystyle X^{2}+X+1}. Therefore, for {\displaystyle \mathrm {GF} (4)} the construction of the preceding section must involve this polynomial, and {\displaystyle \mathrm {GF} (4)=\mathrm {GF} (2)[X]/(X^{2}+X+1)}. Let {\displaystyle \alpha } denote a root of this polynomial in {\displaystyle \mathrm {GF} (4)}. This implies that {\displaystyle \alpha ^{2}=1+\alpha ,} and that {\displaystyle \alpha } and {\displaystyle 1+\alpha } are the elements of {\displaystyle \mathrm {GF} (4)} that are not in {\displaystyle \mathrm {GF} (2)}. The tables of the operations in {\displaystyle \mathrm {GF} (4)} result from this, and are as follows:
A table for subtraction is not given, because subtraction is identical to addition, as is the case for every field of characteristic 2.
In the third table, for the division ofx{\displaystyle x}byy{\displaystyle y}, the values ofx{\displaystyle x}must be read in the left column, and the values ofy{\displaystyle y}in the top row. (Because0⋅z=0{\displaystyle 0\cdot z=0}for everyz{\displaystyle z}in everyringthedivision by 0has to remain undefined.) From the tables, it can be seen that the additive structure ofGF(4){\displaystyle \mathrm {GF} (4)}is isomorphic to theKlein four-group, while the non-zero multiplicative structure is isomorphic to the groupZ3{\displaystyle Z_{3}}.
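The addition and multiplication tables of GF(4) described above can also be generated mechanically by representing each element as a pair (a, b) standing for a + bα and using the relation α² = 1 + α. The following short sketch does exactly that; the element labels and representation are choices made for the example.

# Generate the GF(4) operation tables from the relation alpha^2 = 1 + alpha.
# Elements are pairs (a, b) over GF(2) representing a + b*alpha.

ELEMENTS = [(0, 0), (1, 0), (0, 1), (1, 1)]       # 0, 1, alpha, 1+alpha
NAMES = {(0, 0): "0", (1, 0): "1", (0, 1): "a", (1, 1): "1+a"}

def add(x, y):
    return ((x[0] + y[0]) % 2, (x[1] + y[1]) % 2)

def mul(x, y):
    a, b = x
    c, d = y
    # (a + b*alpha)(c + d*alpha) = ac + (ad + bc)*alpha + bd*alpha^2,
    # and alpha^2 = 1 + alpha, so bd is added to both the constant term
    # and the alpha term.
    return ((a * c + b * d) % 2, (a * d + b * c + b * d) % 2)

for op, name in ((add, "+"), (mul, "*")):
    print(name)
    for x in ELEMENTS:
        print("  ", [NAMES[op(x, y)] for y in ELEMENTS])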
The mapφ:x↦x2{\displaystyle \varphi :x\mapsto x^{2}}is the non-trivial field automorphism, called theFrobenius automorphism, which sendsα{\displaystyle \alpha }into the second root1+α{\displaystyle 1+\alpha }of the above-mentioned irreducible polynomialX2+X+1{\displaystyle X^{2}+X+1}.
For applying theabove general constructionof finite fields in the case ofGF(p2){\displaystyle \mathrm {GF} (p^{2})}, one has to find an irreducible polynomial of degree 2. Forp=2{\displaystyle p=2}, this has been done in the preceding section. Ifp{\displaystyle p}is an odd prime, there are always irreducible polynomials of the formX2−r{\displaystyle X^{2}-r}, withr{\displaystyle r}inGF(p){\displaystyle \mathrm {GF} (p)}.
More precisely, the polynomial {\displaystyle X^{2}-r} is irreducible over {\displaystyle \mathrm {GF} (p)} if and only if {\displaystyle r} is aquadratic non-residuemodulo {\displaystyle p} (this is almost the definition of a quadratic non-residue). There are {\displaystyle {\frac {p-1}{2}}} quadratic non-residues modulo {\displaystyle p}. For example, {\displaystyle 2} is a quadratic non-residue for {\displaystyle p=3,5,11,13,\ldots }, and {\displaystyle 3} is a quadratic non-residue for {\displaystyle p=5,7,17,\ldots }. If {\displaystyle p\equiv 3\mod 4}, that is {\displaystyle p=3,7,11,19,\ldots }, one may choose {\displaystyle -1\equiv p-1} as a quadratic non-residue, which allows us to have a very simple irreducible polynomial {\displaystyle X^{2}+1}.
Having chosen a quadratic non-residue {\displaystyle r}, let {\displaystyle \alpha } be a symbolic square root of {\displaystyle r}, that is, a symbol that has the property {\displaystyle \alpha ^{2}=r}, in the same way that the complex number {\displaystyle i} is a symbolic square root of {\displaystyle -1}. Then, the elements of {\displaystyle \mathrm {GF} (p^{2})} are all the linear expressions {\displaystyle a+b\alpha ,} with {\displaystyle a} and {\displaystyle b} in {\displaystyle \mathrm {GF} (p)}. The operations on {\displaystyle \mathrm {GF} (p^{2})} are defined as follows (the operations between elements of {\displaystyle \mathrm {GF} (p)} represented by Latin letters are the operations in {\displaystyle \mathrm {GF} (p)}):{\displaystyle {\begin{aligned}-(a+b\alpha )&=-a+(-b)\alpha \\(a+b\alpha )+(c+d\alpha )&=(a+c)+(b+d)\alpha \\(a+b\alpha )(c+d\alpha )&=(ac+rbd)+(ad+bc)\alpha \\(a+b\alpha )^{-1}&=a(a^{2}-rb^{2})^{-1}+(-b)(a^{2}-rb^{2})^{-1}\alpha \end{aligned}}}
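A direct transcription of these formulas into code looks as follows. The sketch assumes p is an odd prime and r a quadratic non-residue modulo p; the values p = 7 and r = 3 are used purely as an example, and pow(x, -1, p) is Python's built-in modular inverse (available from Python 3.8).

# GF(p^2) arithmetic with elements (a, b) meaning a + b*alpha, where alpha^2 = r.
# Example parameters: p = 7 (prime), r = 3 (a quadratic non-residue mod 7).

P, R = 7, 3

def add(x, y):
    return ((x[0] + y[0]) % P, (x[1] + y[1]) % P)

def mul(x, y):
    a, b = x
    c, d = y
    return ((a * c + R * b * d) % P, (a * d + b * c) % P)

def inv(x):
    a, b = x
    # 1 / (a + b*alpha) = (a - b*alpha) / (a^2 - r*b^2)
    denom = pow((a * a - R * b * b) % P, -1, P)   # modular inverse, Python 3.8+
    return (a * denom % P, (-b) * denom % P)

x = (2, 5)
assert mul(x, inv(x)) == (1, 0)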
The polynomialX3−X−1{\displaystyle X^{3}-X-1}is irreducible overGF(2){\displaystyle \mathrm {GF} (2)}andGF(3){\displaystyle \mathrm {GF} (3)}, that is, it is irreduciblemodulo2{\displaystyle 2}and3{\displaystyle 3}(to show this, it suffices to show that it has no root inGF(2){\displaystyle \mathrm {GF} (2)}nor inGF(3){\displaystyle \mathrm {GF} (3)}). It follows that the elements ofGF(8){\displaystyle \mathrm {GF} (8)}andGF(27){\displaystyle \mathrm {GF} (27)}may be represented byexpressionsa+bα+cα2,{\displaystyle a+b\alpha +c\alpha ^{2},}wherea,b,c{\displaystyle a,b,c}are elements ofGF(2){\displaystyle \mathrm {GF} (2)}orGF(3){\displaystyle \mathrm {GF} (3)}(respectively), andα{\displaystyle \alpha }is a symbol such thatα3=α+1.{\displaystyle \alpha ^{3}=\alpha +1.}
The addition, additive inverse and multiplication on {\displaystyle \mathrm {GF} (8)} and {\displaystyle \mathrm {GF} (27)} may thus be defined as follows; in the following formulas, the operations between elements of {\displaystyle \mathrm {GF} (2)} or {\displaystyle \mathrm {GF} (3)}, represented by Latin letters, are the operations in {\displaystyle \mathrm {GF} (2)} or {\displaystyle \mathrm {GF} (3)}, respectively:{\displaystyle {\begin{aligned}-(a+b\alpha +c\alpha ^{2})&=-a+(-b)\alpha +(-c)\alpha ^{2}\qquad {\text{(for }}\mathrm {GF} (8),{\text{this operation is the identity)}}\\(a+b\alpha +c\alpha ^{2})+(d+e\alpha +f\alpha ^{2})&=(a+d)+(b+e)\alpha +(c+f)\alpha ^{2}\\(a+b\alpha +c\alpha ^{2})(d+e\alpha +f\alpha ^{2})&=(ad+bf+ce)+(ae+bd+bf+ce+cf)\alpha +(af+be+cd+cf)\alpha ^{2}\end{aligned}}}
The polynomial {\displaystyle X^{4}+X+1} is irreducible over {\displaystyle \mathrm {GF} (2)}, that is, it is irreducible modulo {\displaystyle 2}. It follows that the elements of {\displaystyle \mathrm {GF} (16)} may be represented by expressions {\displaystyle a+b\alpha +c\alpha ^{2}+d\alpha ^{3},} where {\displaystyle a,b,c,d} are either {\displaystyle 0} or {\displaystyle 1} (elements of {\displaystyle \mathrm {GF} (2)}), and {\displaystyle \alpha } is a symbol such that {\displaystyle \alpha ^{4}=\alpha +1} (that is, {\displaystyle \alpha } is defined as a root of the given irreducible polynomial). As the characteristic of {\displaystyle \mathrm {GF} (2)} is {\displaystyle 2}, each element is its additive inverse in {\displaystyle \mathrm {GF} (16)}. The addition and multiplication on {\displaystyle \mathrm {GF} (16)} may be defined as follows; in the following formulas, the operations between elements of {\displaystyle \mathrm {GF} (2)}, represented by Latin letters, are the operations in {\displaystyle \mathrm {GF} (2)}.{\displaystyle {\begin{aligned}(a+b\alpha +c\alpha ^{2}+d\alpha ^{3})+(e+f\alpha +g\alpha ^{2}+h\alpha ^{3})&=(a+e)+(b+f)\alpha +(c+g)\alpha ^{2}+(d+h)\alpha ^{3}\\(a+b\alpha +c\alpha ^{2}+d\alpha ^{3})(e+f\alpha +g\alpha ^{2}+h\alpha ^{3})&=(ae+bh+cg+df)+(af+be+bh+cg+df+ch+dg)\alpha \;+\\&\quad \;(ag+bf+ce+ch+dg+dh)\alpha ^{2}+(ah+bg+cf+de+dh)\alpha ^{3}\end{aligned}}}
The fieldGF(16){\displaystyle \mathrm {GF} (16)}has eightprimitive elements(the elements that have all nonzero elements ofGF(16){\displaystyle \mathrm {GF} (16)}as integer powers). These elements are the four roots ofX4+X+1{\displaystyle X^{4}+X+1}and theirmultiplicative inverses. In particular,α{\displaystyle \alpha }is a primitive element, and the primitive elements areαm{\displaystyle \alpha ^{m}}withm{\displaystyle m}less than andcoprimewith15{\displaystyle 15}(that is, 1, 2, 4, 7, 8, 11, 13, 14).
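This claim can be checked computationally. Using the representation of GF(16) modulo X^4 + X + 1 (elements as bit masks, bit i being the coefficient of α^i, a convention chosen for this sketch), the code below computes the multiplicative order of every nonzero element and confirms that exactly eight of them are primitive, namely α^m for m coprime with 15.

# Count primitive elements of GF(16) built as GF(2)[X]/(X^4 + X + 1).
from math import gcd

P, N = 0b10011, 4                      # X^4 + X + 1

def mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << N):
            a ^= P
    return r

def order(x):
    k, y = 1, x
    while y != 1:
        y = mul(y, x)
        k += 1
    return k

primitive = sorted(x for x in range(1, 16) if order(x) == 15)
assert len(primitive) == 8

# They are exactly alpha^m for m < 15 coprime with 15.
alpha = 0b10                           # the class of X
powers, y = [], 1
for m in range(1, 15):
    y = mul(y, alpha)
    if gcd(m, 15) == 1:
        powers.append(y)
assert sorted(powers) == primitive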
The set of non-zero elements inGF(q){\displaystyle \mathrm {GF} (q)}is anabelian groupunder the multiplication, of orderq−1{\displaystyle q-1}. ByLagrange's theorem, there exists a divisork{\displaystyle k}ofq−1{\displaystyle q-1}such thatxk=1{\displaystyle x^{k}=1}for every non-zerox{\displaystyle x}inGF(q){\displaystyle \mathrm {GF} (q)}. As the equationxk=1{\displaystyle x^{k}=1}has at mostk{\displaystyle k}solutions in any field,q−1{\displaystyle q-1}is the lowest possible value fork{\displaystyle k}.
Thestructure theorem of finite abelian groupsimplies that this multiplicative group iscyclic, that is, all non-zero elements are powers of a single element. In summary:
Such an elementa{\displaystyle a}is called aprimitive elementofGF(q){\displaystyle \mathrm {GF} (q)}. Unlessq=2,3{\displaystyle q=2,3}, the primitive element is not unique. The number of primitive elements isϕ(q−1){\displaystyle \phi (q-1)}whereϕ{\displaystyle \phi }isEuler's totient function.
The result above implies thatxq=x{\displaystyle x^{q}=x}for everyx{\displaystyle x}inGF(q){\displaystyle \mathrm {GF} (q)}. The particular case whereq{\displaystyle q}is prime isFermat's little theorem.
Ifa{\displaystyle a}is a primitive element inGF(q){\displaystyle \mathrm {GF} (q)}, then for any non-zero elementx{\displaystyle x}inF{\displaystyle F}, there is a unique integern{\displaystyle n}with0≤n≤q−2{\displaystyle 0\leq n\leq q-2}such thatx=an{\displaystyle x=a^{n}}.
This integern{\displaystyle n}is called thediscrete logarithmofx{\displaystyle x}to the basea{\displaystyle a}.
Whilean{\displaystyle a^{n}}can be computed very quickly, for example usingexponentiation by squaring, there is no known efficient algorithm for computing the inverse operation, the discrete logarithm. This has been used in variouscryptographic protocols, seeDiscrete logarithmfor details.
When the nonzero elements of {\displaystyle \mathrm {GF} (q)} are represented by their discrete logarithms, multiplication and division are easy, as they reduce to addition and subtraction modulo {\displaystyle q-1}. However, addition amounts to computing the discrete logarithm of {\displaystyle a^{m}+a^{n}}. The identity {\displaystyle a^{m}+a^{n}=a^{n}\left(a^{m-n}+1\right)} allows one to solve this problem by constructing the table of the discrete logarithms of {\displaystyle a^{n}+1}, calledZech's logarithms, for {\displaystyle n=0,\ldots ,q-2} (it is convenient to define the discrete logarithm of zero as being {\displaystyle -\infty }).
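The log/antilog idea, including Zech's logarithms for addition, can be sketched as follows for GF(16) with the same primitive polynomial as above. The code builds the table of powers of α once and then performs multiplication and addition purely with table lookups and index arithmetic; None stands in for the logarithm of zero (that is, for −∞). Table names and layout are choices made for this sketch.

# Discrete-log and Zech-logarithm tables for GF(16) = GF(2)[X]/(X^4 + X + 1).

P, N, Q = 0b10011, 4, 16

def poly_mul(a, b):                      # reference multiplication, reduced mod P
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << N):
            a ^= P
    return r

# antilog[k] = alpha^k, log[x] = k such that alpha^k = x
antilog, log = [1] * (Q - 1), {1: 0}
for k in range(1, Q - 1):
    antilog[k] = poly_mul(antilog[k - 1], 0b10)
    log[antilog[k]] = k

# zech[k] = log(alpha^k + 1); None plays the role of -infinity (alpha^k + 1 = 0)
zech = [log.get(antilog[k] ^ 1) for k in range(Q - 1)]

def mul(x, y):                           # multiplication via logarithms
    if x == 0 or y == 0:
        return 0
    return antilog[(log[x] + log[y]) % (Q - 1)]

def add(x, y):                           # addition via Zech's logarithms
    if x == 0:
        return y
    if y == 0:
        return x
    m, n = log[x], log[y]
    z = zech[(m - n) % (Q - 1)]          # log(alpha^(m-n) + 1)
    return 0 if z is None else antilog[(n + z) % (Q - 1)]

for x in range(Q):
    for y in range(Q):
        assert add(x, y) == x ^ y        # matches ordinary GF(2^4) addition
        assert mul(x, y) == poly_mul(x, y)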
Zech's logarithms are useful for large computations, such aslinear algebraover medium-sized fields, that is, fields that are sufficiently large for making natural algorithms inefficient, but not too large, as one has to pre-compute a table of the same size as the order of the field.
Every nonzero element of a finite field is aroot of unity, asxq−1=1{\displaystyle x^{q-1}=1}for every nonzero element ofGF(q){\displaystyle \mathrm {GF} (q)}.
Ifn{\displaystyle n}is a positive integer, ann{\displaystyle n}thprimitive root of unityis a solution of the equationxn=1{\displaystyle x^{n}=1}that is not a solution of the equationxm=1{\displaystyle x^{m}=1}for any positive integerm<n{\displaystyle m<n}. Ifa{\displaystyle a}is an{\displaystyle n}th primitive root of unity in a fieldF{\displaystyle F}, thenF{\displaystyle F}contains all then{\displaystyle n}roots of unity, which are1,a,a2,…,an−1{\displaystyle 1,a,a^{2},\ldots ,a^{n-1}}.
The fieldGF(q){\displaystyle \mathrm {GF} (q)}contains an{\displaystyle n}th primitive root of unity if and only ifn{\displaystyle n}is a divisor ofq−1{\displaystyle q-1}; ifn{\displaystyle n}is a divisor ofq−1{\displaystyle q-1}, then the number of primitiven{\displaystyle n}th roots of unity inGF(q){\displaystyle \mathrm {GF} (q)}isϕ(n){\displaystyle \phi (n)}(Euler's totient function). The number ofn{\displaystyle n}th roots of unity inGF(q){\displaystyle \mathrm {GF} (q)}isgcd(n,q−1){\displaystyle \mathrm {gcd} (n,q-1)}.
In a field of characteristicp{\displaystyle p}, everynp{\displaystyle np}th root of unity is also an{\displaystyle n}th root of unity. It follows that primitivenp{\displaystyle np}th roots of unity never exist in a field of characteristicp{\displaystyle p}.
On the other hand, ifn{\displaystyle n}iscoprimetop{\displaystyle p}, the roots of then{\displaystyle n}thcyclotomic polynomialare distinct in every field of characteristicp{\displaystyle p}, as this polynomial is a divisor ofXn−1{\displaystyle X^{n}-1}, whosediscriminantnn{\displaystyle n^{n}}is nonzero modulop{\displaystyle p}. It follows that then{\displaystyle n}thcyclotomic polynomialfactors overGF(q){\displaystyle \mathrm {GF} (q)}into distinct irreducible polynomials that have all the same degree, sayd{\displaystyle d}, and thatGF(pd){\displaystyle \mathrm {GF} (p^{d})}is the smallest field of characteristicp{\displaystyle p}that contains then{\displaystyle n}th primitive roots of unity.
When computingBrauer characters, one uses the mapαk↦exp(2πik/(q−1)){\displaystyle \alpha ^{k}\mapsto \exp(2\pi ik/(q-1))}to map eigenvalues of a representation matrix to the complex numbers. Under this mapping, the base subfieldGF(p){\displaystyle \mathrm {GF} (p)}consists of evenly spaced points around the unit circle (omitting zero).
The fieldGF(64){\displaystyle \mathrm {GF} (64)}has several interesting properties that smaller fields do not share: it has two subfields such that neither is contained in the other; not all generators (elements withminimal polynomialof degree6{\displaystyle 6}overGF(2){\displaystyle \mathrm {GF} (2)}) are primitive elements; and the primitive elements are not all conjugate under theGalois group.
The order of this field being26, and the divisors of6being1, 2, 3, 6, the subfields ofGF(64)areGF(2),GF(22) = GF(4),GF(23) = GF(8), andGF(64)itself. As2and3arecoprime, the intersection ofGF(4)andGF(8)inGF(64)is the prime fieldGF(2).
The union ofGF(4)andGF(8)has thus10elements. The remaining54elements ofGF(64)generateGF(64)in the sense that no other subfield contains any of them. It follows that they are roots of irreducible polynomials of degree6overGF(2). This implies that, overGF(2), there are exactly9 =54/6irreduciblemonic polynomialsof degree6. This may be verified by factoringX64−XoverGF(2).
The elements ofGF(64)are primitiven{\displaystyle n}th roots of unity for somen{\displaystyle n}dividing63{\displaystyle 63}. As the 3rd and the 7th roots of unity belong toGF(4)andGF(8), respectively, the54generators are primitiventh roots of unity for somenin{9, 21, 63}.Euler's totient functionshows that there are6primitive9th roots of unity,12{\displaystyle 12}primitive21{\displaystyle 21}st roots of unity, and36{\displaystyle 36}primitive63rd roots of unity. Summing these numbers, one finds again54{\displaystyle 54}elements.
By factoring thecyclotomic polynomialsoverGF(2){\displaystyle \mathrm {GF} (2)}, one finds that:
This shows that the best choice to constructGF(64){\displaystyle \mathrm {GF} (64)}is to define it asGF(2)[X] / (X6+X+ 1). In fact, this generator is a primitive element, and this polynomial is the irreducible polynomial that produces the easiest Euclidean division.
In this section,p{\displaystyle p}is a prime number, andq=pn{\displaystyle q=p^{n}}is a power ofp{\displaystyle p}.
In {\displaystyle \mathrm {GF} (q)}, the identity {\displaystyle (x+y)^{p}=x^{p}+y^{p}} implies that the map {\displaystyle \varphi :x\mapsto x^{p}} is a {\displaystyle \mathrm {GF} (p)}-linear endomorphismand afield automorphismof {\displaystyle \mathrm {GF} (q)}, which fixes every element of the subfield {\displaystyle \mathrm {GF} (p)}. It is called theFrobenius automorphism, afterFerdinand Georg Frobenius.
Denoting byφkthecompositionofφwith itselfktimes, we haveφk:x↦xpk.{\displaystyle \varphi ^{k}:x\mapsto x^{p^{k}}.}It has been shown in the preceding section thatφnis the identity. For0 <k<n, the automorphismφkis not the identity, as, otherwise, the polynomialXpk−X{\displaystyle X^{p^{k}}-X}would have more thanpkroots.
There are no otherGF(p)-automorphisms ofGF(q). In other words,GF(pn)has exactlynGF(p)-automorphisms, which areId=φ0,φ,φ2,…,φn−1.{\displaystyle \mathrm {Id} =\varphi ^{0},\varphi ,\varphi ^{2},\ldots ,\varphi ^{n-1}.}
In terms ofGalois theory, this means thatGF(pn)is aGalois extensionofGF(p), which has acyclicGalois group.
The fact that the Frobenius map is surjective implies that every finite field isperfect.
IfFis a finite field, a non-constantmonic polynomialwith coefficients inFisirreducibleoverF, if it is not the product of two non-constant monic polynomials, with coefficients inF.
As everypolynomial ringover a field is aunique factorization domain, every monic polynomial over a finite field may be factored in a unique way (up to the order of the factors) into a product of irreducible monic polynomials.
There are efficient algorithms for testing polynomial irreducibility and factoring polynomials over finite fields. They are a key step for factoring polynomials over the integers or therational numbers. At least for this reason, everycomputer algebra systemhas functions for factoring polynomials over finite fields, or, at least, over finite prime fields.
The polynomialXq−X{\displaystyle X^{q}-X}factors into linear factors over a field of orderq. More precisely, this polynomial is the product of all monic polynomials of degree one over a field of orderq.
This implies that, ifq=pnthenXq−Xis the product of all monic irreducible polynomials overGF(p), whose degree dividesn. In fact, ifPis an irreducible factor overGF(p)ofXq−X, its degree dividesn, as itssplitting fieldis contained inGF(pn). Conversely, ifPis an irreducible monic polynomial overGF(p)of degreeddividingn, it defines a field extension of degreed, which is contained inGF(pn), and all roots ofPbelong toGF(pn), and are roots ofXq−X; thusPdividesXq−X. AsXq−Xdoes not have any multiple factor, it is thus the product of all the irreducible monic polynomials that divide it.
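This factorization can be verified directly in a tiny case. The sketch below encodes polynomials over GF(2) as bit masks (bit i is the coefficient of X^i, a convention chosen for the example) and checks that X^(2^2) − X, which equals X^4 + X over GF(2), is the product of the monic irreducible polynomials of degree dividing 2, namely X, X + 1 and X^2 + X + 1.

# Verify over GF(2) that X^(2^2) - X is the product of all monic
# irreducible polynomials of degree dividing 2.  Polynomials are bit
# masks: bit i is the coefficient of X^i.

def poly_mul_gf2(a, b):
    """Carry-less multiplication of polynomials over GF(2)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

X        = 0b10      # X
X_plus_1 = 0b11      # X + 1
quad     = 0b111     # X^2 + X + 1, the only irreducible polynomial of degree 2

product = poly_mul_gf2(poly_mul_gf2(X, X_plus_1), quad)
assert product == 0b10010            # X^4 + X, i.e. X^(2^2) - X over GF(2)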
This property is used to compute the product of the irreducible factors of each degree of polynomials overGF(p); seeDistinct degree factorization.
The numberN(q,n)of monic irreducible polynomials of degreenoverGF(q)is given by[5]{\displaystyle N(q,n)={\frac {1}{n}}\sum _{d\mid n}\mu (d)q^{n/d},}whereμis theMöbius function. This formula is an immediate consequence of the property of {\displaystyle X^{q}-X} above and theMöbius inversion formula.
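The formula is easy to evaluate directly. The following sketch computes N(q, n) with a small Möbius-function helper and checks, for example, that there are 9 monic irreducible polynomials of degree 6 over GF(2), the count that appears in the GF(64) discussion above.

# Count monic irreducible polynomials of degree n over GF(q)
# using N(q, n) = (1/n) * sum_{d | n} mu(d) * q^(n/d).

def mobius(d):
    result, k = 1, 2
    while k * k <= d:
        if d % k == 0:
            d //= k
            if d % k == 0:      # square factor => mu = 0
                return 0
            result = -result
        k += 1
    return -result if d > 1 else result

def num_irreducible(q, n):
    total = sum(mobius(d) * q ** (n // d) for d in range(1, n + 1) if n % d == 0)
    return total // n

assert num_irreducible(2, 1) == 2     # X and X + 1
assert num_irreducible(2, 2) == 1     # X^2 + X + 1
assert num_irreducible(2, 6) == 9     # matches the GF(64) count above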
By the above formula, the number of irreducible (not necessarily monic) polynomials of degreenoverGF(q)is(q− 1)N(q,n).
The exact formula implies the inequality {\displaystyle N(q,n)\geq {\frac {1}{n}}\left(q^{n}-\sum _{\ell \mid n,\ \ell {\text{ prime}}}q^{n/\ell }\right);} this is sharp if and only ifnis a power of some prime.
For everyqand everyn, the right hand side is positive, so there is at least one irreducible polynomial of degreenoverGF(q).
Incryptography, the difficulty of thediscrete logarithm problemin finite fields or inelliptic curvesis the basis of several widely used protocols, such as theDiffie–Hellmanprotocol. For example, in 2014, a secure internet connection to Wikipedia involved the elliptic curve Diffie–Hellman protocol (ECDHE) over a large finite field.[6]Incoding theory, many codes are constructed assubspacesofvector spacesover finite fields.
Finite fields are used by manyerror correction codes, such asReed–Solomon error correction codeorBCH code. The finite field almost always has characteristic of2, since computer data is stored in binary. For example, a byte of data can be interpreted as an element ofGF(28). One exception isPDF417bar code, which isGF(929). Some CPUs have special instructions that can be useful for finite fields of characteristic2, generally variations ofcarry-less product.
Finite fields are widely used innumber theory, as many problems over the integers may be solved by reducing themmoduloone or severalprime numbers. For example, the fastest known algorithms forpolynomial factorizationandlinear algebraover the field ofrational numbersproceed by reduction modulo one or several primes, and then reconstruction of the solution by usingChinese remainder theorem,Hensel liftingor theLLL algorithm.
Similarly many theoretical problems in number theory can be solved by considering their reductions modulo some or all prime numbers. See, for example,Hasse principle. Many recent developments ofalgebraic geometrywere motivated by the need to enlarge the power of these modular methods.Wiles' proof of Fermat's Last Theoremis an example of a deep result involving many mathematical tools, including finite fields.
TheWeil conjecturesconcern the number of points onalgebraic varietiesover finite fields and the theory has many applications includingexponentialandcharacter sumestimates.
Finite fields have widespread application incombinatorics, two well known examples being the definition ofPaley Graphsand the related construction forHadamard Matrices. Inarithmetic combinatoricsfinite fields[7]and finite field models[8][9]are used extensively, such as inSzemerédi's theoremon arithmetic progressions.
Adivision ringis a generalization of field. Division rings are not assumed to be commutative. There are no non-commutative finite division rings:Wedderburn's little theoremstates that all finitedivision ringsare commutative, and hence are finite fields. This result holds even if we relax theassociativityaxiom toalternativity, that is, all finitealternative division ringsare finite fields, by theArtin–Zorn theorem.[10]
A finite field F is not algebraically closed: the polynomial {\displaystyle f(T)=1+\prod _{\alpha \in F}(T-\alpha )} has no roots in F, since f(α) = 1 for all α in F.
Given a prime number p, let F̄_p be an algebraic closure of F_p. It is not only unique up to an isomorphism, as are all algebraic closures, but, contrary to the general case, all its subfields are fixed by all its automorphisms, and it is also the algebraic closure of all finite fields of the same characteristic p.
This property results mainly from the fact that the elements of F_{p^n} are exactly the roots of x^{p^n} − x, and this defines an inclusion F_{p^n} ⊂ F_{p^{nm}} for m > 1. These inclusions allow writing informally {\displaystyle {\overline {\mathbb {F} }}_{p}=\bigcup _{n\geq 1}\mathbb {F} _{p^{n}}.} The formal validation of this notation results from the fact that the above field inclusions form a directed set of fields; its direct limit is F̄_p, which may thus be considered as a "directed union".
Given a primitive element g of F_{q^{mn}}, the power g^{(q^{mn}−1)/(q^n−1)} is a primitive element of F_{q^n}.
For explicit computations, it may be useful to have a coherent choice of the primitive elements for all finite fields; that is, to choose the primitive element g_n of F_{q^n} in such a way that, whenever n = mh, one has g_m = g_n^{(q^n−1)/(q^m−1)}, where g_m is the primitive element already chosen for F_{q^m}.
Such a construction may be obtained by Conway polynomials.
Although finite fields are not algebraically closed, they arequasi-algebraically closed, which means that everyhomogeneous polynomialover a finite field has a non-trivial zero whose components are in the field if the number of its variables is more than its degree. This was a conjecture ofArtinandDicksonproved byChevalley(seeChevalley–Warning theorem).
|
https://en.wikipedia.org/wiki/Finite_field#Field_operations
|
In cryptography, security level is a measure of the strength that a cryptographic primitive — such as a cipher or hash function — achieves. Security level is usually expressed as a number of "bits of security" (also security strength),[1] where n-bit security means that the attacker would have to perform 2^n operations to break it,[2] but other methods have been proposed that more closely model the costs for an attacker.[3] This allows for convenient comparison between algorithms and is useful when combining multiple primitives in a hybrid cryptosystem, so there is no clear weakest link. For example, AES-128 (key size 128 bits) is designed to offer a 128-bit security level, which is considered roughly equivalent to RSA with a 3072-bit key.
In this context,security claimortarget security levelis the security level that a primitive was initially designed to achieve, although "security level" is also sometimes used in those contexts. When attacks are found that have lower cost than the security claim, the primitive is consideredbroken.[4][5]
Symmetric algorithms usually have a strictly defined security claim. For symmetric ciphers, it is typically equal to the key size of the cipher — equivalent to the complexity of a brute-force attack.[5][6] Cryptographic hash functions with an output size of n bits usually have a collision resistance security level of n/2 and a preimage resistance level of n. This is because the generic birthday attack can always find collisions in 2^(n/2) steps.[7] For example, SHA-256 offers 128-bit collision resistance and 256-bit preimage resistance.
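A minimal sketch of the generic birthday attack, run against SHA-256 truncated to 32 bits purely so that the 2^(n/2) bound is cheap to observe (the truncation and function names are illustrative, not a property of SHA-256 itself):

import hashlib

def truncated_hash(data, bits=32):
    # Truncate SHA-256 to 'bits' bits so collisions become easy to find.
    digest = hashlib.sha256(data).digest()
    return int.from_bytes(digest, "big") >> (256 - bits)

def find_collision(bits=32):
    # Generic birthday attack: hash distinct messages until two hashes collide.
    seen = {}
    counter = 0
    while True:
        msg = counter.to_bytes(8, "big")
        h = truncated_hash(msg, bits)
        if h in seen:
            return seen[h], msg      # expected after roughly 2**(bits/2) hashes
        seen[h] = msg
        counter += 1

# find_collision(32) typically succeeds after several tens of thousands of hashes,
# in line with the 2^(n/2) collision bound, whereas a preimage would need ~2^32 work.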
However, there are some exceptions to this. The Phelix and Helix ciphers take 256-bit keys but offer only a 128-bit security level.[5][8] The SHAKE variants of SHA-3 are also different: for a 256-bit output size, SHAKE-128 provides a 128-bit security level for both collision and preimage resistance.[9]
The design of most asymmetric algorithms (i.e.public-key cryptography) relies on neatmathematical problemsthat are efficient to compute in one direction, but inefficient to reverse by the attacker. However, attacks against current public-key systems are always faster thanbrute-force searchof the key space. Their security level isn't set at design time, but represents acomputational hardness assumption, which is adjusted to match the best currently known attack.[6]
Various recommendations have been published that estimate the security level of asymmetric algorithms, which differ slightly due to different methodologies.
The following table gives examples of typical security levels for types of algorithms, as found in §5.6.1.1 of the US NIST SP 800-57 Recommendation for Key Management.[16]: Table 2
Under NIST recommendation, a key of a given security level should only be transported under protection using an algorithm of equivalent or higher security level.[14]
The security level is given for the cost of breaking one target, not the amortized cost for a group of targets. It takes 2^128 operations to find an AES-128 key, and the same amortized cost per key applies for any number m of keys. On the other hand, breaking m ECC keys using the rho method requires only about √m times the base cost.[15][17]
A cryptographic primitive is considered broken when an attack is found to have less than its advertised level of security. However, not all such attacks are practical: most currently demonstrated attacks take fewer than 2^40 operations, which translates to a few hours on an average PC. The costliest demonstrated attack on hash functions is the 2^61.2 attack on SHA-1, which took 2 months on 900 GTX 970 GPUs and cost US$75,000 (although the researchers estimate only $11,000 was needed to find a collision).[18]
Aumasson draws the line between practical and impractical attacks at 2^80 operations. He proposes a new terminology:[19]
|
https://en.wikipedia.org/wiki/Cryptographic_strength
|
Wired Equivalent Privacy(WEP) is an obsolete, severely flawedsecurityalgorithm for 802.11wireless networks. Introduced as part of the originalIEEE 802.11standard ratified in 1997, its intention was to provide security/privacy comparable to that of a traditional wirednetwork.[1]WEP, recognizable by its key of 10 or 26hexadecimaldigits (40 or 104 bits), was at one time widely used, and was often the first security choice presented to users by router configuration tools.[2][3]After a severe design flaw in the algorithm was disclosed in 2001,[4]WEP was no longer considered a secure method of wireless connection; however, in the vast majority of cases, Wi-Fi hardware devices relying on WEP security could not be upgraded to secure operation. Some of WEP's design flaws were addressed in WEP2, but it also proved insecure, and never saw wide adoption or standardization.[5]
In 2003, theWi-Fi Allianceannounced that WEP and WEP2 had been superseded byWi-Fi Protected Access(WPA). In 2004, with the ratification of the full 802.11i standard (i.e. WPA2), the IEEE declared that both WEP-40 and WEP-104 have been deprecated.[6]WPA retained some design characteristics of WEP that remained problematic.
WEP was the only encryption protocol available to802.11aand802.11bdevices built before the WPA standard, which was available for802.11gdevices. However, some 802.11b devices were later provided with firmware or software updates to enable WPA, and newer devices had it built in.[7]
WEP was ratified as a Wi-Fi security standard in 1999. The first versions of WEP were not particularly strong, even for the time they were released, due to U.S. restrictions on the export of various cryptographic technologies. These restrictions led to manufacturers restricting their devices to only 64-bit encryption. When the restrictions were lifted, the encryption was increased to 128 bits. Despite the introduction of 256-bit WEP, 128-bit remains one of the most common implementations.[8]
WEP was included as the privacy component of the originalIEEE 802.11[9]standard ratified in 1997.[10][11]WEP uses thestream cipherRC4forconfidentiality,[12]and theCRC-32checksum forintegrity.[13]It was deprecated in 2004 and is documented in the current standard.[14]
Standard 64-bit WEP uses a 40-bitkey (also known as WEP-40), which is concatenated with a 24-bitinitialization vector(IV) to form the RC4 key. At the time that the original WEP standard was drafted,the U.S. Government's export restrictions on cryptographic technologylimited thekey size. Once the restrictions were lifted, manufacturers of access points implemented an extended 128-bit WEP protocol using a 104-bit key size (WEP-104).
A 64-bit WEP key is usually entered as a string of 10hexadecimal(base 16) characters (0–9 and A–F). Each character represents 4 bits, 10 digits of 4 bits each gives 40 bits; adding the 24-bit IV produces the complete 64-bit WEP key (4 bits × 10 + 24-bit IV = 64-bit WEP key). Most devices also allow the user to enter the key as 5ASCIIcharacters (0–9, a–z, A–Z), each of which is turned into 8 bits using the character's byte value in ASCII (8 bits × 5 + 24-bit IV = 64-bit WEP key); however, this restricts each byte to be a printable ASCII character, which is only a small fraction of possible byte values, greatly reducing the space of possible keys.
A 128-bit WEP key is usually entered as a string of 26 hexadecimal characters. 26 digits of 4 bits each gives 104 bits; adding the 24-bit IV produces the complete 128-bit WEP key (4 bits × 26 + 24-bit IV = 128-bit WEP key). Most devices also allow the user to enter it as 13 ASCII characters (8 bits × 13 + 24-bit IV = 128-bit WEP key).
152-bit and 256-bit WEP systems are available from some vendors. As with the other WEP variants, 24 bits of that is for the IV, leaving 128 or 232 bits for actual protection. These 128 or 232 bits are typically entered as 32 or 58 hexadecimal characters (4 bits × 32 + 24-bit IV = 152-bit WEP key, 4 bits × 58 + 24-bit IV = 256-bit WEP key). Most devices also allow the user to enter it as 16 or 29 ASCII characters (8 bits × 16 + 24-bit IV = 152-bit WEP key, 8 bits × 29 + 24-bit IV = 256-bit WEP key).
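A small sketch of the key-length arithmetic described above, computing the secret key bits contributed by a user-entered hex or ASCII string and the total after the 24-bit IV is appended (the function name is illustrative):

def wep_key_bits(user_key, is_hex):
    # Each hex character contributes 4 bits, each ASCII character 8 bits;
    # the protocol appends a 24-bit IV to form the full RC4 key.
    secret_bits = len(user_key) * (4 if is_hex else 8)
    return secret_bits, secret_bits + 24

# wep_key_bits("0123456789", True)                   -> (40, 64)    "64-bit" WEP
# wep_key_bits("0123456789ABCDEF0123456789", True)   -> (104, 128)  "128-bit" WEP
# wep_key_bits("passw", False)                       -> (40, 64)    5 ASCII characters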
Two methods of authentication can be used with WEP: Open System authentication and Shared Key authentication.
In Open System authentication, the WLAN client does not provide its credentials to the access point during authentication. Any client can authenticate with the access point and then attempt to associate. In effect, no authentication occurs. Subsequently, WEP keys can be used for encrypting data frames. At this point, the client must have the correct keys.
In Shared Key authentication, the WEP key is used for authentication in a four-stepchallenge–responsehandshake:
After the authentication and association, the pre-shared WEP key is also used for encrypting the data frames using RC4.
At first glance, it might seem as though Shared Key authentication is more secure than Open System authentication since the latter offers no real authentication. However, it is quite the reverse. It is possible to derive the keystream used for the handshake by capturing the challenge frames in Shared Key authentication.[15]Therefore, data can be more easily intercepted and decrypted with Shared Key authentication than with Open System authentication. If privacy is a primary concern, it is more advisable to use Open System authentication for WEP authentication, rather than Shared Key authentication; however, this also means that any WLAN client can connect to the AP. (Both authentication mechanisms are weak; Shared Key WEP is deprecated in favor of WPA/WPA2.)
Because RC4 is astream cipher, the same traffic key must never be used twice. The purpose of an IV, which is transmitted as plaintext, is to prevent any repetition, but a 24-bit IV is not long enough to ensure this on a busy network. The way the IV was used also opened WEP to arelated-key attack. For a 24-bit IV, there is a 50% probability the same IV will repeat after 5,000 packets.
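Under the simplifying assumption that IVs are drawn uniformly at random, the usual birthday approximation makes the quoted figure concrete; a brief sketch:

import math

def iv_collision_probability(packets, iv_bits=24):
    # Birthday approximation: P(repeat) ~ 1 - exp(-k*(k-1) / (2*N)) for k packets
    # drawn from N = 2**iv_bits possible IVs.
    n = 2 ** iv_bits
    return 1.0 - math.exp(-packets * (packets - 1) / (2 * n))

# iv_collision_probability(5000) is roughly 0.52, i.e. about a 50% chance of a
# repeated IV after some 5,000 packets, matching the figure quoted above.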
In August 2001,Scott Fluhrer,Itsik Mantin, andAdi Shamirpublished acryptanalysisof WEP[4]that exploits the way the RC4 ciphers and IV are used in WEP, resulting in a passive attack that can recover the RC4keyafter eavesdropping on the network. Depending on the amount of network traffic, and thus the number of packets available for inspection, a successful key recovery could take as little as one minute. If an insufficient number of packets are being sent, there are ways for an attacker to send packets on the network and thereby stimulate reply packets, which can then be inspected to find the key. The attack was soon implemented, and automated tools have since been released. It is possible to perform the attack with a personal computer, off-the-shelf hardware, and freely available software such asaircrack-ngto crackanyWEP key in minutes.
Cam-Winget et al.[16]surveyed a variety of shortcomings in WEP. They wrote "Experiments in the field show that, with proper equipment, it is practical to eavesdrop on WEP-protected networks from distances of a mile or more from the target." They also reported two generic weaknesses:
In 2005, a group from the U.S.Federal Bureau of Investigationgave a demonstration where they cracked a WEP-protected network in three minutes using publicly available tools.[17]Andreas Klein presented another analysis of the RC4 stream cipher. Klein showed that there are more correlations between the RC4 keystream and the key than the ones found by Fluhrer, Mantin, and Shamir, which can additionally be used to break WEP in WEP-like usage modes.
In 2006, Bittau,Handley, and Lackey showed[2]that the 802.11 protocol itself can be used against WEP to enable earlier attacks that were previously thought impractical. After eavesdropping a single packet, an attacker can rapidly bootstrap to be able to transmit arbitrary data. The eavesdropped packet can then be decrypted one byte at a time (by transmitting about 128 packets per byte to decrypt) to discover the local network IP addresses. Finally, if the 802.11 network is connected to the Internet, the attacker can use 802.11 fragmentation to replay eavesdropped packets while crafting a new IP header onto them. The access point can then be used to decrypt these packets and relay them on to a buddy on the Internet, allowing real-time decryption of WEP traffic within a minute of eavesdropping the first packet.
In 2007, Erik Tews, Andrei Pyshkin, and Ralf-Philipp Weinmann were able to extend Klein's 2005 attack and optimize it for usage against WEP. With the new attack[18]it is possible to recover a 104-bit WEP key with a probability of 50% using only 40,000 captured packets. For 60,000 available data packets, the success probability is about 80%, and for 85,000 data packets, about 95%. Using active techniques likeWi-Fi deauthentication attacksandARPre-injection, 40,000 packets can be captured in less than one minute under good conditions. The actual computation takes about 3 seconds and 3 MB of main memory on aPentium-M1.7 GHz and can additionally be optimized for devices with slower CPUs. The same attack can be used for 40-bit keys with an even higher success probability.
In 2008 thePayment Card Industry Security Standards Council(PCI SSC) updated theData Security Standard(DSS) to prohibit use of WEP as part of any credit-card processing after 30 June 2010, and prohibit any new system from being installed that uses WEP after 31 March 2009. The use of WEP contributed to theTJ Maxxparent company network invasion.[19]
The Caffe Latte attack is another way to defeat WEP. It is not necessary for the attacker to be in the area of thenetworkusing this exploit. By using a process that targets theWindowswireless stack, it is possible to obtain the WEP key from a remote client.[20]By sending a flood of encryptedARPrequests, the assailant takes advantage of the shared key authentication and the message modification flaws in 802.11 WEP. The attacker uses the ARP responses to obtain the WEP key in less than 6 minutes.[21]
Use of encryptedtunneling protocols(e.g.,IPsec,Secure Shell) can provide secure data transmission over an insecure network. However, replacements for WEP have been developed with the goal of restoring security to the wireless network itself.
The recommended solution to WEP security problems is to switch to WPA2.WPAwas an intermediate solution for hardware that could not support WPA2. Both WPA and WPA2 are much more secure than WEP.[22]To add support for WPA or WPA2, some old Wi-Fiaccess pointsmight need to be replaced or have theirfirmwareupgraded. WPA was designed as an interim software-implementable solution for WEP that could forestall immediate deployment of new hardware.[23]However,TKIP(the basis of WPA) has reached the end of its designed lifetime, has been partially broken, and has been officially deprecated with the release of the 802.11-2012 standard.[24]
This stopgap enhancement to WEP was present in some of the early 802.11i drafts. It was implementable onsome(not all) hardware not able to handle WPA or WPA2, and extended both the IV and the key values to 128 bits.[9]It was hoped to eliminate the duplicate IV deficiency as well as stopbrute-force key attacks.
After it became clear that the overall WEP algorithm was deficient (and not just the IV and key sizes) and would require even more fixes, both the WEP2 name and original algorithm were dropped. The two extended key lengths remained in what eventually became WPA'sTKIP.
WEPplus, also known as WEP+, is a proprietary enhancement to WEP byAgere Systems(formerly a subsidiary ofLucent Technologies) that enhances WEP security by avoiding "weak IVs".[25]It is only completely effective when WEPplus is used atboth endsof the wireless connection. As this cannot easily be enforced, it remains a serious limitation. It also does not necessarily preventreplay attacks, and is ineffective against later statistical attacks that do not rely on weak IVs.
Dynamic WEP refers to the combination of 802.1x technology and theExtensible Authentication Protocol. Dynamic WEP changes WEP keys dynamically. It is a vendor-specific feature provided by several vendors such as3Com.
The dynamic change idea made it into 802.11i as part of TKIP, but not for the WEP protocol itself.
|
https://en.wikipedia.org/wiki/Wired_Equivalent_Privacy#Data_integrity
|
Inmathematics, anoncommutative ringis aringwhose multiplication is notcommutative; that is, there existaandbin the ring such thatabandbaare different. Equivalently, anoncommutative ringis a ring that is not acommutative ring.
Noncommutative algebrais the part ofring theorydevoted to study of properties of the noncommutative rings, including the properties that apply also to commutative rings.
Sometimes the termnoncommutative ringis used instead ofringto refer to an unspecified ring which is not necessarily commutative, and hence may be commutative. Generally, this is for emphasizing that the studied properties are not restricted to commutative rings, as, in many contexts,ringis used as a shorthand forcommutative ring.
Although some authors do not assume that rings have a multiplicative identity, in this article we make that assumption unless stated otherwise.
Some examples of noncommutative rings:
Some examples of rings that are not typically commutative (but may be commutative in simple cases):
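One standard example of a noncommutative ring is the ring of 2 × 2 matrices over the integers; the following sketch checks that AB ≠ BA for one particular pair:

def mat_mul(A, B):
    # Multiply two 2x2 matrices given as nested lists.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 1],
     [0, 1]]
B = [[1, 0],
     [1, 1]]

# mat_mul(A, B) == [[2, 1], [1, 1]] while mat_mul(B, A) == [[1, 1], [1, 2]],
# so AB != BA: the matrix ring M_2(Z) is noncommutative.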
Beginning withdivision ringsarising from geometry, the study of noncommutative rings has grown into a major area of modern algebra. The theory and exposition of noncommutative rings was expanded and refined in the 19th and 20th centuries by numerous authors. An incomplete list of such contributors includesE. Artin,Richard Brauer,P. M. Cohn,W. R. Hamilton,I. N. Herstein,N. Jacobson,K. Morita,E. Noether,Ø. Ore,J. Wedderburnand others.
Because noncommutative rings of scientific interest are more complicated than commutative rings, their structure, properties and behavior are less well understood. A great deal of work has been done successfully generalizing some results from commutative rings to noncommutative rings. A major difference between rings which are and are not commutative is the necessity to separately considerright ideals and left ideals. It is common for noncommutative ring theorists to enforce a condition on one of these types of ideals while not requiring it to hold for the opposite side. For commutative rings, the left–right distinction does not exist.
A division ring, also called a skew field, is aringin whichdivisionis possible. Specifically, it is anonzeroring[2]in which every nonzero elementahas amultiplicative inverse, i.e., an elementxwitha·x=x·a= 1. Stated differently, a ring is a division ring if and only if itsgroup of unitsis the set of all nonzero elements.
Division rings differ fromfieldsonly in that their multiplication is not required to becommutative. However, byWedderburn's little theoremall finite division rings are commutative and thereforefinite fields. Historically, division rings were sometimes referred to as fields, while fields were called "commutative fields".
Amoduleover a (not necessarily commutative) ring with unity is said to be semisimple (or completely reducible) if it is thedirect sumofsimple(irreducible) submodules.
A ring is said to be (left)-semisimple if it is semisimple as a left module over itself. Surprisingly, a left-semisimple ring is also right-semisimple and vice versa. The left/right distinction is therefore unnecessary.
A semiprimitive ring or Jacobson semisimple ring or J-semisimple ring is a ring whoseJacobson radicalis zero. This is a type of ring more general than asemisimple ring, but wheresimple modulesstill provide enough information about the ring. Rings such as the ring of integers are semiprimitive, and anartiniansemiprimitive ring is just asemisimple ring. Semiprimitive rings can be understood assubdirect productsofprimitive rings, which are described by theJacobson density theorem.
A simple ring is a non-zeroringthat has no two-sidedidealbesides thezero idealand itself. A simple ring can always be considered as asimple algebra. Rings which are simple as rings but not asmodulesdo exist: the fullmatrix ringover afielddoes not have any nontrivial ideals (since any ideal of M(n,R) is of the form M(n,I) withIan ideal ofR), but has nontrivial left ideals (namely, the sets of matrices which have some fixed zero columns).
According to theArtin–Wedderburn theorem, every simple ring that is left or rightArtinianis amatrix ringover adivision ring. In particular, the only simple rings that are a finite-dimensionalvector spaceover thereal numbersare rings of matrices over either the real numbers, thecomplex numbers, or thequaternions.
Any quotient of a ring by amaximal idealis a simple ring. In particular, afieldis a simple ring. A ringRis simple if and only if itsopposite ringRois simple.
An example of a simple ring that is not a matrix ring over a division ring is theWeyl algebra.
Wedderburn's little theorem states that everyfinitedomainis afield. In other words, forfinite rings, there is no distinction between domains,division ringsand fields.
TheArtin–Zorn theoremgeneralizes the theorem toalternative rings: every finite simple alternative ring is a field.[3]
The Artin–Wedderburn theorem is aclassification theoremforsemisimple ringsandsemisimple algebras. The theorem states that an (Artinian)[4]semisimple ringRis isomorphic to aproductof finitely manyni-by-nimatrix ringsoverdivision ringsDi, for some integersni, both of which are uniquely determined up to permutation of the indexi. In particular, anysimpleleft or rightArtinian ringis isomorphic to ann-by-nmatrix ringover adivision ringD, where bothnandDare uniquely determined.[5]
As a direct corollary, the Artin–Wedderburn theorem implies that every simple ring that is finite-dimensional over a division ring (a simple algebra) is amatrix ring. This isJoseph Wedderburn's original result.Emil Artinlater generalized it to the case of Artinian rings.
TheJacobson density theoremis a theorem concerningsimple modulesover a ringR.[6]
The theorem can be applied to show that anyprimitive ringcan be viewed as a "dense" subring of the ring oflinear transformationsof a vector space.[7][8]This theorem first appeared in the literature in 1945, in the famous paper "Structure Theory of Simple Rings Without Finiteness Assumptions" byNathan Jacobson.[9]This can be viewed as a kind of generalization of theArtin-Wedderburn theorem's conclusion about the structure ofsimpleArtinian rings.
More formally, the theorem can be stated as follows:
Let J(R) be theJacobson radicalofR. IfUis a right module over a ring,R, andIis a right ideal inR, then defineU·Ito be the set of all (finite) sums of elements of the formu·i, where·is simply the action ofRonU. Necessarily,U·Iis a submodule ofU.
IfVis amaximal submoduleofU, thenU/Vissimple. SoU·J(R) is necessarily a subset ofV, by the definition of J(R) and the fact thatU/Vis simple.[11]Thus, ifUcontains at least one (proper) maximal submodule,U·J(R) is a proper submodule ofU. However, this need not hold for arbitrary modulesUoverR, forUneed not contain any maximal submodules.[12]Naturally, ifUis aNoetherianmodule, this holds. IfRis Noetherian, andUisfinitely generated, thenUis a Noetherian module overR, and the conclusion is satisfied.[13]Somewhat remarkable is that the weaker assumption, namely thatUis finitely generated as anR-module (and no finiteness assumption onR), is sufficient to guarantee the conclusion. This is essentially the statement of Nakayama's lemma.[14]
Precisely, one has the following.
A version of the lemma holds for right modules over non-commutativeunitary ringsR. The resulting theorem is sometimes known as theJacobson–Azumaya theorem.[15]
Localization is a systematic method of adding multiplicative inverses to aring, and is usually applied to commutative rings. Given a ringRand a subsetS, one wants to construct some ringR* andring homomorphismfromRtoR*, such that the image ofSconsists ofunits(invertible elements) inR*. Further one wantsR* to be the 'best possible' or 'most general' way to do this – in the usual fashion this should be expressed by auniversal property. The localization ofRbySis usually denoted byS−1R; however other notations are used in some important special cases. IfSis the set of the non zero elements of anintegral domain, then the localization is thefield of fractionsand thus usually denoted Frac(R).
Localizingnon-commutative ringsis more difficult; the localization does not exist for every setSof prospective units. One condition which ensures that the localization exists is theOre condition.
One case for non-commutative rings where localization has a clear interest is for rings of differential operators. It has the interpretation, for example, of adjoining a formal inverseD−1for a differentiation operatorD. This is done in many contexts in methods fordifferential equations. There is now a large mathematical theory about it, namedmicrolocalization, connecting with numerous other branches. Themicro-tag is to do with connections withFourier theory, in particular.
Morita equivalence is a relationship defined betweenringsthat preserves many ring-theoretic properties. It is named after Japanese mathematicianKiiti Moritawho defined equivalence and a similar notion of duality in 1958.
Two ringsRandS(associative, with 1) are said to be (Morita)equivalentif there is an equivalence of the category of (left) modules overR,R-Mod, and the category of (left) modules overS,S-Mod. It can be shown that the left module categoriesR-ModandS-Modare equivalent if and only if the right module categoriesMod-RandMod-Sare equivalent. Further it can be shown that any functor fromR-ModtoS-Modthat yields an equivalence is automaticallyadditive.
The Brauer group of afieldKis anabelian groupwhose elements areMorita equivalenceclasses ofcentral simple algebrasof finite rank overKand addition is induced by thetensor productof algebras. It arose out of attempts to classifydivision algebrasover a field and is named after the algebraistRichard Brauer. The group may also be defined in terms ofGalois cohomology. More generally, the Brauer group of aschemeis defined in terms ofAzumaya algebras.
The Ore condition is a condition introduced byØystein Ore, in connection with the question of extending beyondcommutative ringsthe construction of afield of fractions, or more generallylocalization of a ring. Theright Ore conditionfor amultiplicative subsetSof aringRis that fora∈Rands∈S, the intersectionaS∩sR≠ ∅.[16]A domain that satisfies the right Ore condition is called aright Ore domain. The left case is defined similarly.
Inmathematics,Goldie's theoremis a basic structural result inring theory, proved byAlfred Goldieduring the 1950s. What is now termed a rightGoldie ringis aringRthat has finiteuniform dimension(also called "finite rank") as a right module over itself, and satisfies theascending chain conditionon rightannihilatorsof subsets ofR.
Goldie's theorem states that thesemiprimeright Goldie rings are precisely those that have asemisimpleArtinianrightclassical ring of quotients. The structure of this ring of quotients is then completely determined by theArtin–Wedderburn theorem.
In particular, Goldie's theorem applies to semiprime rightNoetherian rings, since by definition right Noetherian rings have the ascending chain condition onallright ideals. This is sufficient to guarantee that a right-Noetherian ring is right Goldie. The converse does not hold: every rightOre domainis a right Goldie domain, and hence so is every commutativeintegral domain.
A consequence of Goldie's theorem, again due to Goldie, is that every semiprimeprincipal right ideal ringis isomorphic to a finite direct sum ofprimeprincipal right ideal rings. Every prime principal right ideal ring is isomorphic to amatrix ringover a right Ore domain.
|
https://en.wikipedia.org/wiki/Noncommutative_algebra
|
Addition (usually signified by the plus symbol, +) is one of the four basic operations of arithmetic, the other three being subtraction, multiplication, and division. The addition of two whole numbers results in the total or sum of those values combined. For example, a column of three apples combined with a column of two apples totals five apples. This observation is expressed as "3 + 2 = 5", which is read as "three plus two equals five".
Besidescountingitems, addition can also be defined and executed without referring toconcrete objects, using abstractions callednumbersinstead, such asintegers,real numbers, andcomplex numbers. Addition belongs to arithmetic, a branch ofmathematics. Inalgebra, another area of mathematics, addition can also be performed on abstract objects such asvectors,matrices,subspaces, andsubgroups.
Addition has several important properties. It iscommutative, meaning that the order of thenumbers being addeddoes not matter, so3 + 2 = 2 + 3, and it isassociative, meaning that when one adds more than two numbers, the order in which addition is performed does not matter. Repeated addition of1is the same as counting (seeSuccessor function). Addition of0does not change a number. Addition also obeys rules concerning related operations such as subtraction and multiplication.
Performing addition is one of the simplest numerical tasks to perform. Addition of very small numbers is accessible to toddlers; the most basic task,1 + 1, can be performed by infants as young as five months, and even some members of other animal species. Inprimary education, students are taught to add numbers in thedecimalsystem, beginning with single digits and progressively tackling more difficult problems. Mechanical aids range from the ancientabacusto the moderncomputer, where research on the most efficient implementations of addition continues to this day.
Addition is written using the plus sign "+" between the terms, and the result is expressed with an equals sign. For example, {\displaystyle 1+2=3} reads "one plus two equals three".[2] Nonetheless, there are some situations where addition is "understood" even though no symbol appears: a whole number followed immediately by a fraction indicates the sum of the two, called a mixed number; for example,[3] {\displaystyle 3{\frac {1}{2}}=3+{\frac {1}{2}}=3.5.} This notation can cause confusion, since in most other contexts, juxtaposition denotes multiplication instead.[4]
The numbers or the objects to be added in general addition are collectively referred to as theterms,[5]theaddendsor thesummands.[2]This terminology carries over to the summation of multiple terms.
This is to be distinguished fromfactors, which aremultiplied.
Some authors call the first addend theaugend.[6]In fact, during theRenaissance, many authors did not consider the first addend an "addend" at all. Today, due to thecommutative propertyof addition, "augend" is rarely used, and both terms are generally called addends.[7]
All of the above terminology derives fromLatin. "Addition" and "add" areEnglishwords derived from the Latinverbaddere, which is in turn acompoundofad"to" anddare"to give", from theProto-Indo-European root*deh₃-"to give"; thus toaddis togive to.[7]Using thegerundivesuffix-ndresults in "addend", "thing to be added".[a]Likewise fromaugere"to increase", one gets "augend", "thing to be increased".
"Sum" and "summand" derive from the Latinnounsumma"the highest, the top" and associated verbsummare. This is appropriate not only because the sum of two positive numbers is greater than either, but because it was common for theancient GreeksandRomansto add upward, contrary to the modern practice of adding downward, so that a sum was literally at the top of the addends.[9]Addereandsummaredate back at least toBoethius, if not to earlier Roman writers such asVitruviusandFrontinus; Boethius also used several other terms for the addition operation. The laterMiddle Englishterms "adden" and "adding" were popularized byChaucer.[10]
Addition is one of the four basic operations of arithmetic, with the other three being subtraction, multiplication, and division. This operation works by adding two or more terms.[11] The addition of arbitrarily many terms is called a summation.[12] An infinite summation is a delicate procedure known as a series,[13] and it can be expressed through capital sigma notation {\textstyle \sum }, which compactly denotes iteration of the operation of addition over the given indexes.[14] For example, {\displaystyle \sum _{k=1}^{5}k^{2}=1^{2}+2^{2}+3^{2}+4^{2}+5^{2}=55.}
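The sigma-notation example translates directly into code; a one-line sketch:

# Sum of k^2 for k = 1..5, mirroring the capital-sigma example above.
total = sum(k ** 2 for k in range(1, 6))
assert total == 55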
Addition is used to model many physical processes. Even for the simple case of addingnatural numbers, there are many possible interpretations and even more visual representations.
Possibly the most basic interpretation of addition lies in combiningsets, that is:[2]
When two or more disjoint collections are combined into a single collection, the number of objects in the single collection is the sum of the numbers of objects in the original collections.
This interpretation is easy to visualize, with little danger of ambiguity. It is also useful in higher mathematics (for the rigorous definition it inspires, see§ Natural numbersbelow). However, it is not obvious how one should extend this version of an addition's operation to include fractional or negative numbers.[15]
One possible fix is to consider collections of objects that can be easily divided, such as pies or, still better, segmented rods. Rather than solely combining collections of segments, rods can be joined end-to-end, which illustrates another conception of addition: adding not the rods but the lengths of the rods.[16]
A second interpretation of addition comes from extending an initial length by a given length:[17]
When an original length is extended by a given amount, the final length is the sum of the original length and the length of the extension.
The sum a + b can be interpreted as a binary operation that combines a and b algebraically, or it can be interpreted as the addition of b more units to a. Under the latter interpretation, the parts of a sum a + b play asymmetric roles, and the operation a + b is viewed as applying the unary operation +b to a.[18] Instead of calling both a and b addends, it is more appropriate to call a the "augend" in this case, since a plays a passive role. The unary view is also useful when discussing subtraction, because each unary addition operation has an inverse unary subtraction operation, and vice versa.
Addition is commutative, meaning that one can change the order of the terms in a sum but still get the same result. Symbolically, if a and b are any two numbers, then:[19] a + b = b + a. The fact that addition is commutative is known as the "commutative law of addition"[20] or "commutative property of addition".[21] Some other binary operations are commutative too, such as multiplication,[22] but others are not, such as subtraction and division.[23]
Addition is associative, which means that when three or more numbers are added together, the order of operations does not change the result. For any three numbers a, b, and c, it is true that:[24] (a + b) + c = a + (b + c). For example, (1 + 2) + 3 = 1 + (2 + 3).
When addition is used together with other operations, theorder of operationsbecomes important. In the standard order of operations, addition is a lower priority thanexponentiation,nth roots, multiplication and division, but is given equal priority to subtraction.[25]
Adding zero to any number does not change the number. In other words, zero is the identity element for addition, and is also known as the additive identity. In symbols, for every a, one has:[24] a + 0 = 0 + a = a. This law was first identified in Brahmagupta's Brahmasphutasiddhanta in 628 AD, although he wrote it as three separate laws, depending on whether a is negative, positive, or zero itself, and he used words rather than algebraic symbols. Later Indian mathematicians refined the concept; around the year 830, Mahavira wrote, "zero becomes the same as what is added to it", corresponding to the unary statement 0 + a = a. In the 12th century, Bhaskara wrote, "In the addition of cipher, or subtraction of it, the quantity, positive or negative, remains the same", corresponding to the unary statement a + 0 = a.[26]
Within the context of integers, addition of one also plays a special role: for any integer a, the integer a + 1 is the least integer greater than a, also known as the successor of a. For instance, 3 is the successor of 2, and 7 is the successor of 6. Because of this succession, the value of a + b can also be seen as the b-th successor of a, making addition an iterated succession. For example, 6 + 2 is 8, because 8 is the successor of 7, which is the successor of 6, making 8 the second successor of 6.[27]
To numerically add physical quantities withunits, they must be expressed with common units.[28]For example, adding 50 milliliters to 150 milliliters gives 200 milliliters. However, if a measure of 5 feet is extended by 2 inches, the sum is 62 inches, since 60 inches is synonymous with 5 feet. On the other hand, it is usually meaningless to try to add 3 meters and 4 square meters, since those units are incomparable; this sort of consideration is fundamental indimensional analysis.[29]
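A minimal sketch of the common-unit rule for lengths, converting both quantities to inches before adding (the conversion table is illustrative):

INCHES_PER = {"feet": 12, "inches": 1}

def add_lengths(value1, unit1, value2, unit2):
    # Convert both lengths to inches, then add; incomparable units (e.g. meters
    # and square meters) would need dimensional analysis rather than plain addition.
    return value1 * INCHES_PER[unit1] + value2 * INCHES_PER[unit2]

# add_lengths(5, "feet", 2, "inches") == 62, as in the example above.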
Studies on mathematical development starting around the 1980s have exploited the phenomenon ofhabituation:infantslook longer at situations that are unexpected.[30]A seminal experiment byKaren Wynnin 1992 involvingMickey Mousedolls manipulated behind a screen demonstrated that five-month-old infantsexpect1 + 1to be 2, and they are comparatively surprised when a physical situation seems to imply that1 + 1is either 1 or 3. This finding has since been affirmed by a variety of laboratories using different methodologies.[31]Another 1992 experiment with oldertoddlers, between 18 and 35 months, exploited their development of motor control by allowing them to retrieveping-pongballs from a box; the youngest responded well for small numbers, while older subjects were able to compute sums up to 5.[32]
Even some nonhuman animals show a limited ability to add, particularlyprimates. In a 1995 experiment imitating Wynn's 1992 result (but usingeggplantsinstead of dolls),rhesus macaqueandcottontop tamarinmonkeys performed similarly to human infants. More dramatically, after being taught the meanings of theArabic numerals0 through 4, onechimpanzeewas able to compute the sum of two numerals without further training.[33]More recently,Asian elephantshave demonstrated an ability to perform basic arithmetic.[34]
Typically, children first mastercounting. When given a problem that requires that two items and three items be combined, young children model the situation with physical objects, often fingers or a drawing, and then count the total. As they gain experience, they learn or discover the strategy of "counting-on": asked to find two plus three, children count three past two, saying "three, four,five" (usually ticking off fingers), and arriving at five. This strategy seems almost universal; children can easily pick it up from peers or teachers.[35]Most discover it independently. With additional experience, children learn to add more quickly by exploiting the commutativity of addition by counting up from the larger number, in this case, starting with three and counting "four,five." Eventually children begin to recall certain addition facts ("number bonds"), either through experience or rote memorization. Once some facts are committed to memory, children begin to derive unknown facts from known ones. For example, a child asked to add six and seven may know that6 + 6 = 12and then reason that6 + 7is one more, or 13.[36]Such derived facts can be found very quickly and most elementary school students eventually rely on a mixture of memorized and derived facts to add fluently.[37]
Different nations introduce whole numbers and arithmetic at different ages, with many countries teaching addition in pre-school.[38]However, throughout the world, addition is taught by the end of the first year of elementary school.[39]
The prerequisite to addition in thedecimalsystem is the fluent recall or derivation of the 100 single-digit "addition facts". One couldmemorizeall the facts byrote, but pattern-based strategies are more enlightening and, for most people, more efficient:[40]
As students grow older, they commit more facts to memory and learn to derive other facts rapidly and fluently. Many students never commit all the facts to memory, but can still find any basic fact quickly.[37]
The standard algorithm for adding multidigit numbers is to align the addends vertically and add the columns, starting from the ones column on the right. If a column exceeds nine, the extra digit is "carried" into the next column. For example, in the addition of 59 + 27, the ones column gives 9 + 7 = 16, so 6 is written and the digit 1 is the carry.[b] An alternate strategy starts adding from the most significant digit on the left; this route makes carrying a little clumsier, but it is faster at getting a rough estimate of the sum. There are many alternative methods.
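A sketch of this column-by-column procedure with an explicit carry (base 10 by default; the digit helper is illustrative):

def digits(n, base):
    # Least-significant-digit-first list of the digits of n in the given base.
    out = []
    while n:
        n, d = divmod(n, base)
        out.append(d)
    return out or [0]

def add_with_carry(x, y, base=10):
    # Column-by-column addition with carrying, as in the paper-and-pencil algorithm.
    dx, dy = digits(x, base), digits(y, base)
    result, carry = [], 0
    for i in range(max(len(dx), len(dy))):
        column = carry + (dx[i] if i < len(dx) else 0) + (dy[i] if i < len(dy) else 0)
        result.append(column % base)
        carry = column // base
    if carry:
        result.append(carry)
    return result[::-1]   # most significant digit first

# add_with_carry(59, 27) == [8, 6]: the ones column gives 9 + 7 = 16, so 6 is written
# and 1 is carried; add_with_carry(0b1101, 0b10111, base=2) reproduces the binary
# example discussed below.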
Decimal fractionscan be added by a simple modification of the above process.[41]One aligns two decimal fractions above each other, with the decimal point in the same location. If necessary, one can add trailing zeros to a shorter decimal to make it the same length as the longer decimal. Finally, one performs the same addition process as above, except the decimal point is placed in the answer, exactly where it was placed in the summands.
As an example, 45.1 + 4.34 can be solved as follows: writing 45.1 as 45.10 so that both numbers have two decimal places, aligning the decimal points, and adding column by column gives 49.44.
In scientific notation, numbers are written in the form x = a × 10^b, where a is the significand and 10^b is the exponential part. Addition requires two numbers in scientific notation to be represented using the same exponential part, so that the two significands can simply be added.
For example, 2.5 × 10³ + 5.0 × 10² = 2.5 × 10³ + 0.5 × 10³ = 3.0 × 10³.
Addition in other bases is very similar to decimal addition. As an example, one can consider addition in binary.[42]Adding two single-digit binary numbers is relatively simple, using a form of carrying:
Adding two "1" digits produces a digit "0", while 1 must be added to the next column. This is similar to what happens in decimal when certain single-digit numbers are added together; if the result equals or exceeds the value of the radix (10), the digit to the left is incremented:
This is known ascarrying.[43]When the result of an addition exceeds the value of a digit, the procedure is to "carry" the excess amount divided by the radix (that is, 10/10) to the left, adding it to the next positional value. This is correct since the next position has a weight that is higher by a factor equal to the radix. Carrying works the same way in binary:
In this example, two numerals are being added together: 01101₂ (13 in decimal) and 10111₂ (23 in decimal), keeping track of the carry bits along the way. Starting in the rightmost column, 1 + 1 = 10₂. The 1 is carried to the left, and the 0 is written at the bottom of the rightmost column. The second column from the right is added: 1 + 0 + 1 = 10₂ again; the 1 is carried, and 0 is written at the bottom. The third column: 1 + 1 + 1 = 11₂. This time, a 1 is carried, and a 1 is written in the bottom row. Proceeding like this gives the final answer 100100₂ (36 in decimal).
Analog computerswork directly with physical quantities, so their addition mechanisms depend on the form of the addends. A mechanical adder might represent two addends as the positions of sliding blocks, in which case they can be added with anaveraginglever. If the addends are the rotation speeds of twoshafts, they can be added with adifferential. A hydraulic adder can add thepressuresin two chambers by exploitingNewton's second lawto balance forces on an assembly ofpistons. The most common situation for a general-purpose analog computer is to add twovoltages(referenced toground); this can be accomplished roughly with aresistornetwork, but a better design exploits anoperational amplifier.[44]
Addition is also fundamental to the operation ofdigital computers, where the efficiency of addition, in particular thecarrymechanism, is an important limitation to overall performance.[45]
Theabacus, also called a counting frame, is a calculating tool that was in use centuries before the adoption of the written modern numeral system and is still widely used by merchants, traders and clerks inAsia,Africa, and elsewhere; it dates back to at least 2700–2300 BC, when it was used inSumer.[46]
Blaise Pascal invented the mechanical calculator in 1642;[47] it was the first operational adding machine. It made use of a gravity-assisted carry mechanism. It was the only operational mechanical calculator in the 17th century[48] and the earliest automatic, digital computer. Pascal's calculator was limited by its carry mechanism, which forced its wheels to only turn one way so it could add. To subtract, the operator had to use Pascal's calculator's complement, which required as many steps as an addition. Giovanni Poleni followed Pascal, building the second functional mechanical calculator in 1709, a calculating clock made of wood that, once set up, could multiply two numbers automatically.
Addersexecute integer addition in electronic digital computers, usually usingbinary arithmetic. The simplest architecture is the ripple carry adder, which follows the standard multi-digit algorithm. One slight improvement is thecarry skipdesign, again following human intuition; one does not perform all the carries in computing999 + 1, but one bypasses the group of 9s and skips to the answer.[49]
In practice, computational addition may be achieved via XOR and AND bitwise logical operations in conjunction with bitshift operations, as shown in the pseudocode below. Both XOR and AND gates are straightforward to realize in digital logic, allowing the realization of full adder circuits, which in turn may be combined into more complex logical operations. In modern digital computers, integer addition is typically the fastest arithmetic instruction, yet it has the largest impact on performance, since it underlies all floating-point operations as well as such basic tasks as address generation during memory access and fetching instructions during branching. To increase speed, modern designs calculate digits in parallel; these schemes go by such names as carry select, carry lookahead, and the Ling pseudocarry. Many implementations are, in fact, hybrids of these last three designs.[50]
Unlike addition on paper, addition on a computer often changes the addends. Both addends are destroyed on the ancient abacus and adding board, leaving only the sum. The influence of the abacus on mathematical thinking was strong enough that early Latin texts often claimed that in the process of adding "a number to a number", both numbers vanish.[51] In modern times, the ADD instruction of a microprocessor often replaces the augend with the sum but preserves the addend.[52] In a high-level programming language, evaluating a + b does not change either a or b; if the goal is to replace a with the sum, this must be explicitly requested, typically with the statement a = a + b. Some languages like C or C++ allow this to be abbreviated as a += b.
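A minimal sketch of the XOR/AND approach in Python rather than pseudocode (the loop repeats until no carries remain; it is written for non-negative integers):

def add_bitwise(a, b):
    # XOR gives the carry-free partial sum; AND shifted left by one gives the carries.
    while b != 0:
        carry = (a & b) << 1
        a = a ^ b
        b = carry
    return a

# add_bitwise(59, 27) == 86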
On a computer, if the result of an addition is too large to store, anarithmetic overflowoccurs, resulting in an incorrect answer. Unanticipated arithmetic overflow is a fairly common cause ofprogram errors. Such overflow bugs may be hard to discover and diagnose because they may manifest themselves only for very large input data sets, which are less likely to be used in validation tests.[53]TheYear 2000 problemwas a series of bugs where overflow errors occurred due to the use of a 2-digit format for years.[54]
Computers have another way of representing numbers, calledfloating-point arithmetic, which is similar to scientific notation described above and which reduces the overflow problem. Each floating point number has two parts, an exponent and a mantissa. To add two floating-point numbers, the exponents must match, which typically means shifting the mantissa of the smaller number. If the disparity between the larger and smaller numbers is too great, a loss of precision may result. If many smaller numbers are to be added to a large number, it is best to add the smaller numbers together first and then add the total to the larger number, rather than adding small numbers to the large number one at a time. This makes floating point addition non-associative in general. Seefloating-point arithmetic#Accuracy problems.
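The effects described above can be observed directly in IEEE double precision; a brief sketch (the values are chosen so that 1.0 falls below the rounding granularity at 1e16):

a, b, c = 1e16, 1.0, 1.0
left = (a + b) + c     # each 1.0 is rounded away: result is 1e16
right = a + (b + c)    # 2.0 survives: result is 1.0000000000000002e16
assert left != right   # floating-point addition is not associative

# Adding the small numbers together first preserves more precision:
small = [1.0] * 1000
naive = a
for s in small:
    naive += s              # every step rounds back to 1e16
better = a + sum(small)     # 1000.0 is large enough not to be absorbed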
To prove the usual properties of addition, one must first define addition for the context in question. Addition is first defined on thenatural numbers. Inset theory, addition is then extended to progressively larger sets that include the natural numbers: theintegers, therational numbers, and thereal numbers.[55]Inmathematics education,[56]positive fractions are added before negative numbers are even considered; this is also the historical route.[57]
There are two popular ways to define the sum of two natural numbers a and b. If one defines natural numbers to be the cardinalities of finite sets (the cardinality of a set is the number of elements in the set), then it is appropriate to define their sum as follows:[58]
Let N(S) be the cardinality of a set S. Take two disjoint sets A and B, with N(A) = a and N(B) = b. Then a + b is defined as N(A ∪ B).
Here A ∪ B means the union of A and B. An alternate version of this definition allows A and B to possibly overlap and then takes their disjoint union, a mechanism that allows common elements to be separated out and therefore counted twice.
The other popular definition is recursive:[59]
Let n⁺ be the successor of n, that is, the number following n in the natural numbers, so 0⁺ = 1, 1⁺ = 2. Define a + 0 = a. Define the general sum recursively by a + b⁺ = (a + b)⁺. Hence 1 + 1 = 1 + 0⁺ = (1 + 0)⁺ = 1⁺ = 2.
Again, there are minor variations upon this definition in the literature. Taken literally, the above definition is an application of the recursion theorem on the partially ordered set ℕ².[60] On the other hand, some sources prefer to use a restricted recursion theorem that applies only to the set of natural numbers. One then considers a to be temporarily "fixed", applies recursion on b to define a function "a +", and pastes these unary operations for all a together to form the full binary operation.[61]
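A direct transcription of the recursive definition into code, using the built-in integer successor only to model n⁺ (a sketch for non-negative integers; the function names are illustrative):

def successor(n):
    # Models n+, the successor of n.
    return n + 1

def add(a, b):
    # a + 0 = a;  a + b+ = (a + b)+
    if b == 0:
        return a
    return successor(add(a, b - 1))

# add(1, 1) == 2, unwinding exactly as 1 + 1 = 1 + 0+ = (1 + 0)+ = 1+ = 2.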
This recursive formulation of addition was developed by Dedekind as early as 1854, and he would expand upon it in the following decades. He proved the associative and commutative properties, among others, throughmathematical induction.[62]
The simplest conception of an integer is that it consists of anabsolute value(which is a natural number) and asign(generally eitherpositiveornegative). The integer zero is a special third case, being neither positive nor negative. The corresponding definition of addition must proceed by cases:[63]
For an integer n, let |n| be its absolute value. Let a and b be integers. If either a or b is zero, treat it as an identity. If a and b are both positive, define a + b = |a| + |b|. If a and b are both negative, define a + b = −(|a| + |b|). If a and b have different signs, define a + b to be the difference between |a| and |b|, with the sign of the term whose absolute value is larger.
As an example,−6 + 4 = −2; because −6 and 4 have different signs, their absolute values are subtracted, and since the absolute value of the negative term is larger, the answer is negative.
Although this definition can be useful for concrete problems, the number of cases to consider complicates proofs unnecessarily. So the following method is commonly used for defining integers. It is based on the remark that every integer is the difference of two natural integers and that two such differences,a−b{\displaystyle a-b}andc−d{\displaystyle c-d}are equal if and only ifa+d=b+c{\displaystyle a+d=b+c}. So, one can define formally the integers as theequivalence classesofordered pairsof natural numbers under theequivalence relation(a,b)∼(c,d){\displaystyle (a,b)\sim (c,d)}if and only ifa+d=b+c{\displaystyle a+d=b+c}.[64]The equivalence class of(a,b){\displaystyle (a,b)}contains either(a−b,0){\displaystyle (a-b,0)}ifa≥b{\displaystyle a\geq b}, or(0,b−a){\displaystyle (0,b-a)}if otherwise. Given thatn{\displaystyle n}is a natural number, then one can denote+n{\displaystyle +n}the equivalence class of(n,0){\displaystyle (n,0)}, and by−n{\displaystyle -n}the equivalence class of(0,n){\displaystyle (0,n)}. This allows identifying the natural numbern{\displaystyle n}with the equivalence class+n{\displaystyle +n}.
The addition of ordered pairs is done component-wise:[65](a,b)+(c,d)=(a+c,b+d).{\displaystyle (a,b)+(c,d)=(a+c,b+d).}A straightforward computation shows that the equivalence class of the result depends only on the equivalence classes of the summands, and thus that this defines an addition of equivalence classes, that is, integers.[66]Another straightforward computation shows that this addition is the same as the above case definition.
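A short Python sketch of this construction (the helper names add_pairs and canonical are illustrative), where the pair (a, b) stands for the integer a − b:

```python
def add_pairs(p, q):
    """Component-wise addition of representatives: (a,b) + (c,d) = (a+c, b+d)."""
    (a, b), (c, d) = p, q
    return (a + c, b + d)

def canonical(p):
    """Reduce (a, b) to the canonical representative (a-b, 0) or (0, b-a)."""
    a, b = p
    return (a - b, 0) if a >= b else (0, b - a)

# (2, 5) represents -3 and (7, 3) represents +4; their sum represents +1.
assert canonical(add_pairs((2, 5), (7, 3))) == (1, 0)
```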
This way of defining integers as equivalence classes of pairs of natural numbers can be used to embed into agroupany commutativesemigroupwithcancellation property. Here, the semigroup is formed by the natural numbers, and the group is the additive group of integers. The rational numbers are constructed similarly, by taking as a semigroup the nonzero integers with multiplication.
This construction has also been generalized under the name ofGrothendieck groupto the case of any commutative semigroup. Without the cancellation property, thesemigroup homomorphismfrom the semigroup into the group may be non-injective. Originally, the Grothendieck group was the result of this construction applied to the equivalence classes under isomorphisms of the objects of anabelian category, with thedirect sumas semigroup operation.
Addition ofrational numbersinvolves thefractions. The computation can be done by using theleast common denominator, but a conceptually simpler definition involves only integer addition and multiplication:ab+cd=ad+bcbd.{\displaystyle {\frac {a}{b}}+{\frac {c}{d}}={\frac {ad+bc}{bd}}.}As an example, the sum34+18=3×8+4×14×8=24+432=2832=78{\textstyle {\frac {3}{4}}+{\frac {1}{8}}={\frac {3\times 8+4\times 1}{4\times 8}}={\frac {24+4}{32}}={\frac {28}{32}}={\frac {7}{8}}}.
Addition of fractions is much simpler when thedenominatorsare the same; in this case, one can simply add the numerators while leaving the denominator the same:ac+bc=a+bc,{\displaystyle {\frac {a}{c}}+{\frac {b}{c}}={\frac {a+b}{c}},}so14+24=1+24=34{\textstyle {\frac {1}{4}}+{\frac {2}{4}}={\frac {1+2}{4}}={\frac {3}{4}}}.[67]
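The following sketch (the function name add_fractions is illustrative) implements the general cross-multiplication rule and checks it against the two worked sums above:

```python
from math import gcd

def add_fractions(a, b, c, d):
    """Return a/b + c/d in lowest terms as a (numerator, denominator) pair."""
    num, den = a * d + b * c, b * d          # general rule: (ad + bc) / bd
    g = gcd(num, den)
    return num // g, den // g

assert add_fractions(3, 4, 1, 8) == (7, 8)   # 3/4 + 1/8 = 7/8
assert add_fractions(1, 4, 2, 4) == (3, 4)   # 1/4 + 2/4 = 3/4, matching the
                                             # common-denominator shortcut
```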
The commutativity and associativity of rational addition are easy consequences of the laws of integer arithmetic.[68]
A common construction of the set of real numbers is the Dedekind completion of the set of rational numbers. A real number is defined to be aDedekind cutof rationals: anon-empty setof rationals that is closed downward and has nogreatest element. The sum of real numbersaandbis defined element by element:[69]a+b={q+r∣q∈a,r∈b}.{\displaystyle a+b=\{q+r\mid q\in a,r\in b\}.}This definition was first published, in a slightly modified form, byRichard Dedekindin 1872.[70]The commutativity and associativity of real addition are immediate; defining the real number 0 as the set of negative rationals, it is easily seen as the additive identity. Probably the trickiest part of this construction pertaining to addition is the definition of additive inverses.[71]
Unfortunately, dealing with the multiplication of Dedekind cuts is a time-consuming case-by-case process similar to the addition of signed integers.[72]Another approach is the metric completion of the rational numbers. A real number is essentially defined to be the limit of aCauchy sequenceof rationals, liman. Addition is defined term by term:[73]limnan+limnbn=limn(an+bn).{\displaystyle \lim _{n}a_{n}+\lim _{n}b_{n}=\lim _{n}(a_{n}+b_{n}).}This definition was first published byGeorg Cantor, also in 1872, although his formalism was slightly different.[74]One must prove that this operation is well-defined, dealing with co-Cauchy sequences. Once that task is done, all the properties of real addition follow immediately from the properties of rational numbers. Furthermore, the other arithmetic operations, including multiplication, have straightforward, analogous definitions.[75]
Complex numbers are added by adding the real and imaginary parts of the summands.[76][77]That is to say: (a + bi) + (c + di) = (a + c) + (b + d)i.
Using the visualization of complex numbers in the complex plane, the addition has the following geometric interpretation: the sum of two complex numbersAandB, interpreted as points of the complex plane, is the pointXobtained by building aparallelogramthree of whose vertices areO,AandB. Equivalently,Xis the point such that thetriangleswith verticesO,A,B, andX,B,A, arecongruent.
There are many binary operations that can be viewed as generalizations of the addition operation on the real numbers. The field of algebra is centrally concerned with such generalized operations, and they also appear inset theoryandcategory theory.
Inlinear algebra, avector spaceis an algebraic structure that allows for adding any twovectorsand for scaling vectors. A familiar vector space is the set of all ordered pairs of real numbers; the ordered pair(a,b){\displaystyle (a,b)}is interpreted as a vector from the origin in the Euclidean plane to the point(a,b){\displaystyle (a,b)}in the plane. The sum of two vectors is obtained by adding their individual coordinates:(a,b)+(c,d)=(a+c,b+d).{\displaystyle (a,b)+(c,d)=(a+c,b+d).}This addition operation is central toclassical mechanics, in whichvelocities,accelerationsandforcesare all represented by vectors.[78]
Matrix addition is defined for two matrices of the same dimensions. The sum of twom×n(pronounced "m by n") matricesAandB, denoted byA+B, is again anm×nmatrix computed by adding corresponding elements:[79][80]A+B=[a11a12⋯a1na21a22⋯a2n⋮⋮⋱⋮am1am2⋯amn]+[b11b12⋯b1nb21b22⋯b2n⋮⋮⋱⋮bm1bm2⋯bmn]=[a11+b11a12+b12⋯a1n+b1na21+b21a22+b22⋯a2n+b2n⋮⋮⋱⋮am1+bm1am2+bm2⋯amn+bmn]{\displaystyle {\begin{aligned}\mathbf {A} +\mathbf {B} &={\begin{bmatrix}a_{11}&a_{12}&\cdots &a_{1n}\\a_{21}&a_{22}&\cdots &a_{2n}\\\vdots &\vdots &\ddots &\vdots \\a_{m1}&a_{m2}&\cdots &a_{mn}\\\end{bmatrix}}+{\begin{bmatrix}b_{11}&b_{12}&\cdots &b_{1n}\\b_{21}&b_{22}&\cdots &b_{2n}\\\vdots &\vdots &\ddots &\vdots \\b_{m1}&b_{m2}&\cdots &b_{mn}\\\end{bmatrix}}\\[8mu]&={\begin{bmatrix}a_{11}+b_{11}&a_{12}+b_{12}&\cdots &a_{1n}+b_{1n}\\a_{21}+b_{21}&a_{22}+b_{22}&\cdots &a_{2n}+b_{2n}\\\vdots &\vdots &\ddots &\vdots \\a_{m1}+b_{m1}&a_{m2}+b_{m2}&\cdots &a_{mn}+b_{mn}\\\end{bmatrix}}\\\end{aligned}}}
For example: the sum of the 2 × 2 matrix with rows (1, 3) and (1, 0) and the 2 × 2 matrix with rows (0, 0) and (7, 5) is the matrix with rows (1 + 0, 3 + 0) and (1 + 7, 0 + 5), that is, rows (1, 3) and (8, 5).
Inmodular arithmetic, the set of available numbers is restricted to a finite subset of the integers, and addition "wraps around" when reaching a certain value, called the modulus. For example, the set of integers modulo 12 has twelve elements; it inherits an addition operation from the integers that is central tomusical set theory. The set of integers modulo 2 has just two elements; the addition operation it inherits is known inBoolean logicas the "exclusive or" function. A similar "wrap around" operation arises ingeometry, where the sum of twoangle measuresis often taken to be their sum as real numbers modulo 2π. This amounts to an addition operation on thecircle, which in turn generalizes to addition operations on many-dimensionaltori.
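A small Python sketch of such "wrap-around" addition (the helper name add_mod is illustrative): addition modulo 12, as in clock or pitch-class arithmetic, and addition modulo 2, which coincides with the exclusive-or function:

```python
def add_mod(a: int, b: int, modulus: int) -> int:
    return (a + b) % modulus

assert add_mod(9, 5, 12) == 2                  # 9 + 5 wraps around past 12
for p in (0, 1):
    for q in (0, 1):
        assert add_mod(p, q, 2) == (p ^ q)     # mod-2 addition is XOR
```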
The general theory of abstract algebra allows an "addition" operation to be anyassociativeandcommutativeoperation on a set. Basicalgebraic structureswith such an addition operation includecommutative monoidsandabelian groups.
Linear combinationscombine multiplication and summation; they are sums in which each term has a multiplier, usually arealorcomplexnumber. Linear combinations are especially useful in contexts where straightforward addition would violate some normalization rule, such asmixingofstrategiesingame theoryorsuperpositionofstatesinquantum mechanics.[81]
A far-reaching generalization of the addition of natural numbers is the addition ofordinal numbersandcardinal numbersin set theory. These give two different generalizations of the addition of natural numbers to thetransfinite. Unlike most addition operations, the addition of ordinal numbers is not commutative.[82]Addition of cardinal numbers, however, is a commutative operation closely related to thedisjoint unionoperation.
Incategory theory, disjoint union is seen as a particular case of thecoproductoperation,[83]and general coproducts are perhaps the most abstract of all the generalizations of addition. Some coproducts, such asdirect sumandwedge sum, are named to evoke their connection with addition.
Addition, along with subtraction, multiplication, and division, is considered one of the basic operations and is used inelementary arithmetic.
Subtractioncan be thought of as a kind of addition—that is, the addition of anadditive inverse. Subtraction is itself a sort of inverse to addition, in that addingx{\displaystyle x}and subtractingx{\displaystyle x}areinverse functions.[84]Given a set with an addition operation, one cannot always define a corresponding subtraction operation on that set; the set of natural numbers is a simple example. On the other hand, a subtraction operation uniquely determines an addition operation, an additive inverse operation, and an additive identity; for this reason, an additive group can be described as a set that is closed under subtraction.[85]
Multiplicationcan be thought of asrepeated addition. If a single termxappears in a sumn{\displaystyle n}times, then the sum is the product ofn{\displaystyle n}andx. However, this works only fornatural numbers.[86]More generally, multiplication is the operation between two numbers, called the multiplier and the multiplicand, that are combined into a single number called the product.
In the real and complex numbers, addition and multiplication can be interchanged by theexponential function:[87]ea+b=eaeb.{\displaystyle e^{a+b}=e^{a}e^{b}.}This identity allows multiplication to be carried out by consulting atableoflogarithmsand computing addition by hand; it also enables multiplication on aslide rule. The formula is still a good first-order approximation in the broad context ofLie groups, where it relates multiplication of infinitesimal group elements with addition of vectors in the associatedLie algebra.[88]
There are even more generalizations of multiplication than addition.[89]In general, multiplication operations alwaysdistributeover addition; this requirement is formalized in the definition of aring. In some contexts, such as the integers, distributivity over addition and the existence of a multiplicative identity are enough to determine the multiplication operation uniquely. The distributive property also provides information about the addition operation; by expanding the product(1+1)(a+b){\displaystyle (1+1)(a+b)}in both ways, one concludes that addition is forced to be commutative. For this reason, ring addition is commutative in general.[90]
Divisionis an arithmetic operation remotely related to addition. Sincea/b=ab−1{\displaystyle a/b=ab^{-1}}, division is right distributive over addition:(a+b)/c=a/c+b/c{\displaystyle (a+b)/c=a/c+b/c}.[91]However, division is not left distributive over addition; for example, 1/(2 + 2) is not the same as 1/2 + 1/2.
The maximum operationmax(a,b){\displaystyle \max(a,b)}is a binary operation similar to addition. In fact, if two nonnegative numbersa{\displaystyle a}andb{\displaystyle b}are of differentorders of magnitude, their sum is approximately equal to their maximum. This approximation is extremely useful in the applications of mathematics, for example, in truncatingTaylor series. However, it presents a perpetual difficulty innumerical analysis, essentially since "max" is not invertible. Ifb{\displaystyle b}is much greater thana{\displaystyle a}, then a straightforward calculation of(a+b)−b{\displaystyle (a+b)-b}can accumulate an unacceptableround-off error, perhaps even returning zero. See alsoLoss of significance.
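Assuming IEEE double-precision floats (as in standard Python), the following snippet illustrates both claims: for summands of very different magnitude the sum equals the maximum, and computing (a + b) − b loses the small addend entirely:

```python
a, b = 1.0e-20, 1.0e20

print(a + b == max(a, b))      # True: in float64 the sum rounds to the maximum
print((a + b) - b)             # 0.0: the small addend a is lost to round-off
```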
The approximation becomes exact in a kind of infinite limit; if eithera{\displaystyle a}orb{\displaystyle b}is an infinitecardinal number, their cardinal sum is exactly equal to the greater of the two.[93]Accordingly, there is no subtraction operation for infinite cardinals.[94]
Maximization is commutative and associative, like addition. Furthermore, since addition preserves the ordering of real numbers, addition distributes over "max" in the same way that multiplication distributes over addition:a+max(b,c)=max(a+b,a+c).{\displaystyle a+\max(b,c)=\max(a+b,a+c).}For these reasons, intropical geometryone replaces multiplication with addition and addition with maximization. In this context, addition is called "tropical multiplication", maximization is called "tropical addition", and the tropical "additive identity" isnegative infinity.[95]Some authors prefer to replace addition with minimization; then the additive identity is positive infinity.[96]
Tying these observations together, tropical addition is approximately related to regular addition through thelogarithm:log(a+b)≈max(loga,logb),{\displaystyle \log(a+b)\approx \max(\log a,\log b),}which becomes more accurate as the base of the logarithm increases.[97]The approximation can be made exact by extracting a constanth{\displaystyle h}, named by analogy with thePlanck constantfromquantum mechanics,[98]and taking the "classical limit" ash{\displaystyle h}tends to zero:max(a,b)=limh→0hlog(ea/h+eb/h).{\displaystyle \max(a,b)=\lim _{h\to 0}h\log(e^{a/h}+e^{b/h}).}In this sense, the maximum operation is adequantizedversion of addition.[99]
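A numerical sketch of this "classical limit" (the helper name smooth_max is illustrative): h·log(e^{a/h} + e^{b/h}) approaches max(a, b) as h tends to zero.

```python
import math

def smooth_max(a: float, b: float, h: float) -> float:
    m = max(a, b)                        # factor out exp(m/h) for numerical stability
    return m + h * math.log(math.exp((a - m) / h) + math.exp((b - m) / h))

for h in (1.0, 0.1, 0.01):
    print(h, smooth_max(2.0, 3.0, h))    # tends to max(2, 3) = 3 as h shrinks
```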
Convolutionis used to add two independentrandom variablesdefined bydistribution functions. Its usual definition combines integration, subtraction, and multiplication.[100]In general, convolution is useful as a kind of domain-side addition; by contrast, vector addition is a kind of range-side addition.
|
https://en.wikipedia.org/wiki/Addition
|
Subtraction(which is signified by theminus sign, –) is one of the fourarithmetic operationsalong withaddition,multiplicationanddivision. Subtraction is an operation that represents removal of objects from a collection.[1]For example, in the adjacent picture, there are5 − 2peaches—meaning 5 peaches with 2 taken away, resulting in a total of 3 peaches. Therefore, thedifferenceof 5 and 2 is 3; that is,5 − 2 = 3. While primarily associated with natural numbers inarithmetic, subtraction can also represent removing or decreasing physical and abstract quantities using different kinds of objects includingnegative numbers,fractions,irrational numbers,vectors, decimals, functions, and matrices.[2]
In a sense, subtraction is the inverse of addition. That is,c=a−bif and only ifc+b=a. In words: the difference of two numbers is the number that gives the first one when added to the second one.
Subtraction follows several important patterns. It isanticommutative, meaning that changing the order changes the sign of the answer. It is also notassociative, meaning that when one subtracts more than two numbers, the order in which subtraction is performed matters. Because0is theadditive identity, subtraction of it does not change a number. Subtraction also obeys predictable rules concerning related operations, such asadditionandmultiplication. All of these rules can beproven, starting with the subtraction ofintegersand generalizing up through thereal numbersand beyond. Generalbinary operationsthat follow these patterns are studied inabstract algebra.
Incomputability theory, because subtraction is notwell-definedovernatural numbers, operations between numbers are instead defined using "truncated subtraction", also calledmonus.[3]
Subtraction is usually written using theminus sign"−" between the terms; that is, ininfix notation. The result is expressed with anequals sign. For example,2−1=1{\displaystyle 2-1=1}(pronounced as "two minus one equals one") and4−6=−2{\displaystyle 4-6=-2}(pronounced as "four minus six equals negative two"). There are also situations where subtraction is "understood", even though no symbol appears; inaccounting, a column of two numbers, with the lower number in red, usually indicates that the lower number in the column is to be subtracted, with the difference written below, under a line.[4]
The number being subtracted is thesubtrahend, while the number it is subtracted from is theminuend. The result is thedifference, that is:[5]minuend−subtrahend=difference{\displaystyle {\rm {minuend}}-{\rm {subtrahend}}={\rm {difference}}}.
All of this terminology derives fromLatin. "Subtraction" is anEnglishword derived from the Latinverbsubtrahere, which in turn is acompoundofsub"from under" andtrahere"to pull". Thus, to subtract is todraw from below, or totake away.[6]Using thegerundivesuffix-ndresults in "subtrahend", "thing to be subtracted".[a]Likewise, fromminuere"to reduce or diminish", one gets "minuend", which means "thing to be diminished".
Imagine aline segmentoflengthbwith the left end labeledaand the right end labeledc.
Starting froma, it takesbsteps to the right to reachc. This movement to the right is modeled mathematically byaddition:
Fromc, it takesbsteps to theleftto get back toa. This movement to the left is modeled by subtraction:
Now consider a line segment labeled with the numbers1,2, and3. From position 3, it takes no steps to the left to stay at 3, so3 − 0 = 3. It takes 2 steps to the left to get to position 1, so3 − 2 = 1. This picture is inadequate to describe what would happen after going 3 steps to the left of position 3. To represent such an operation, the line must be extended.
To subtract arbitrarynatural numbers, one begins with a line containing every natural number (0, 1, 2, 3, 4, 5, 6, ...). From 3, it takes 3 steps to the left to get to 0, so3 − 3 = 0. But3 − 4is still invalid, since it again leaves the line. The natural numbers are not a useful context for subtraction.
The solution is to consider theintegernumber line(..., −3, −2, −1, 0, 1, 2, 3, ...). This way, it takes 4 steps to the left from 3 to get to −1:
Subtraction ofnatural numbersis notclosed: the difference is not a natural number unless the minuend is greater than or equal to the subtrahend. For example, 26 cannot be subtracted from 11 to give a natural number. Such a case uses one of two approaches:
Thefieldof real numbers can be defined specifying only two binary operations, addition and multiplication, together withunary operationsyieldingadditiveandmultiplicativeinverses. The subtraction of a real number (the subtrahend) from another (the minuend) can then be defined as the addition of the minuend and the additive inverse of the subtrahend. For example,3 −π= 3 + (−π). Alternatively, instead of requiring these unary operations, the binary operations of subtraction anddivisioncan be taken as basic.
Subtraction isanti-commutative, meaning that if one reverses the terms in a difference left-to-right, the result is the negative of the original result. Symbolically, ifaandbare any two numbers, then
Subtraction isnon-associative, which comes up when one tries to define repeated subtraction. In general, the expression
can be defined to mean either (a−b) −cora− (b−c), but these two possibilities lead to different answers. To resolve this issue, one must establish anorder of operations, with different orders yielding different results.
In the context of integers, subtraction ofonealso plays a special role: for any integera, the integer(a− 1)is the largest integer less thana, also known as the predecessor ofa.
When subtracting two numbers with units of measurement such askilogramsorpounds, they must have the same unit. In most cases, the difference will have the same unit as the original numbers.
Changes inpercentagescan be reported in at least two forms,percentage changeandpercentage pointchange. Percentage change represents therelative changebetween the two quantities as a percentage, whilepercentage pointchange is simply the number obtained by subtracting the two percentages.[7][8][9]
As an example, suppose that 30% of widgets made in a factory are defective. Six months later, 20% of widgets are defective. The percentage change is (20% − 30%) / 30% = −1/3 ≈ −33 1⁄3 %, while the percentage point change is −10 percentage points.
Themethod of complementsis a technique used to subtract one number from another using only the addition of positive numbers. This method was commonly used inmechanical calculators, and is still used in moderncomputers.
To subtract a binary numbery(the subtrahend) from another numberx(the minuend), the ones' complement ofyis added toxand one is added to the sum. The leading digit "1" of the result is then discarded.
The method of complements is especially useful in binary (radix 2) since the ones' complement is very easily obtained by inverting each bit (changing "0" to "1" and vice versa). And adding 1 to get the two's complement can be done by simulating a carry into the least significant bit. For example:
becomes the sum:
Dropping the initial "1" gives the answer: 01001110 (equals decimal 78)
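A Python sketch of this procedure for 8-bit numbers (the operands below are illustrative, chosen so that the result matches the stated answer 01001110, decimal 78; the function name is not from the article):

```python
def subtract_by_complement(x: int, y: int, bits: int = 8) -> int:
    """Compute x - y by adding the ones' complement of y plus 1, then
    discarding the carry out of the top bit."""
    mask = (1 << bits) - 1
    ones_complement = y ^ mask              # invert each bit of y
    total = x + ones_complement + 1         # add, simulating the carry-in
    return total & mask                     # drop the leading "1"

assert subtract_by_complement(0b10110011, 0b01100101) == 0b01001110   # 179 - 101 = 78
```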
Methods used to teach subtraction to elementary school students vary from country to country, and within a country, different methods are adopted at different times. In what is known in the United States astraditional mathematics, a specific process is taught to students at the end of the 1st year (or during the 2nd year) for use with multi-digit whole numbers, and is extended in either the fourth or fifth grade to includedecimal representationsof fractional numbers.
Almost all American schools currently teach a method of subtraction using borrowing or regrouping (the decomposition algorithm) and a system of markings called crutches.[10][11]Although a method of borrowing had been known and published in textbooks previously, the use of crutches in American schools spread afterWilliam A. Brownellpublished a study claiming that crutches were beneficial to students using this method.[12]This system caught on rapidly, displacing the other methods of subtraction in use in America at that time.
Some European schools employ a method of subtraction called the Austrian method, also known as the additions method. There is no borrowing in this method. There are also crutches (markings to aid memory), which vary by country.[13][14]
Both these methods break up the subtraction as a process of one digit subtractions by place value. Starting with a least significant digit, a subtraction of the subtrahend:
from the minuend
where eachsiandmiis a digit, proceeds by writing downm1−s1,m2−s2, and so forth, as long assidoes not exceedmi. Otherwise,miis increased by 10 and some other digit is modified to correct for this increase. The American method corrects by attempting to decrease the minuend digitmi+1by one (or continuing the borrow leftwards until there is a non-zero digit from which to borrow). The European method corrects by increasing the subtrahend digitsi+1by one.
Example:704 − 512.
−1CDU704512192⟵carry⟵Minuend⟵Subtrahend⟵RestorDifference{\displaystyle {\begin{array}{rrrr}&\color {Red}-1\\&C&D&U\\&7&0&4\\&5&1&2\\\hline &1&9&2\\\end{array}}{\begin{array}{l}{\color {Red}\longleftarrow {\rm {carry}}}\\\\\longleftarrow \;{\rm {Minuend}}\\\longleftarrow \;{\rm {Subtrahend}}\\\longleftarrow {\rm {Rest\;or\;Difference}}\\\end{array}}}
The minuend is 704, the subtrahend is 512. The minuend digits arem3= 7,m2= 0andm1= 4. The subtrahend digits ares3= 5,s2= 1ands1= 2. Beginning at the one's place, 4 is not less than 2 so the difference 2 is written down in the result's one's place. In the ten's place, 0 is less than 1, so the 0 is increased by 10, and the difference with 1, which is 9, is written down in the ten's place. The American method corrects for the increase of ten by reducing the digit in the minuend's hundreds place by one. That is, the 7 is struck through and replaced by a 6. The subtraction then proceeds in the hundreds place, where 6 is not less than 5, so the difference is written down in the result's hundred's place. We are now done, the result is 192.
The Austrian method does not reduce the 7 to 6. Rather it increases the subtrahend hundreds digit by one. A small mark is made near or below this digit (depending on the school). Then the subtraction proceeds by asking what number when increased by 1, and 5 is added to it, makes 7. The answer is 1, and is written down in the result's hundreds place.
There is an additional subtlety in that the student always employs a mental subtraction table in the American method. The Austrian method often encourages the student to mentally use the addition table in reverse. In the example above, rather than adding 1 to 5, getting 6, and subtracting that from 7, the student is asked to consider what number, when increased by 1, and 5 is added to it, makes 7.
In this method, each digit of the subtrahend is subtracted from the digit above it starting from right to left. If the top number is too small to subtract the bottom number from it, we add 10 to it; this 10 is "borrowed" from the top digit to the left, which we subtract 1 from. Then we move on to subtracting the next digit and borrowing as needed, until every digit has been subtracted. Example:[citation needed]
A variant of the American method where all borrowing is done before all subtraction.[15]
Example:
The partial differences method is different from other vertical subtraction methods because no borrowing or carrying takes place. In their place, one places plus or minus signs depending on whether the minuend is greater or smaller than the subtrahend. The sum of the partial differences is the total difference.[16]
Example:
Instead of finding the difference digit by digit, one can count up the numbers between the subtrahend and the minuend.[17]
Example:
1234 − 567 = can be found by the following steps: count up from 567 to 570 (a step of 3), from 570 to 600 (30), from 600 to 1000 (400), and from 1000 to 1234 (234).
Add up the value from each step to get the total difference:3 + 30 + 400 + 234 = 667.
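A short Python sketch of the counting-up method just described, reproducing 1234 − 567:

```python
steps = [3, 30, 400, 234]        # 567 -> 570 -> 600 -> 1000 -> 1234
position = 567
for step in steps:
    position += step
assert position == 1234
assert sum(steps) == 667         # so 1234 - 567 = 667
```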
Another method that is useful formental arithmeticis to split up the subtraction into small steps.[18]
Example:
1234 − 567 = can be solved in the following way: subtract 500 from 1234 to get 734, then subtract 60 to get 674, and finally subtract 7 to get 667.
The same change method uses the fact that adding or subtracting the same number from the minuend and subtrahend does not change the answer. One simply adds the amount needed to get zeros in the subtrahend.[19]
Example:
"1234 − 567 =" can be solved as follows: add 433 to both numbers so that the subtrahend becomes 1000; the problem becomes 1667 − 1000, and the difference is 667.
|
https://en.wikipedia.org/wiki/Subtraction
|
Multiplicationis one of the four elementary mathematical operations ofarithmetic, with the other ones beingaddition,subtraction, anddivision. The result of a multiplication operation is called aproduct. Multiplication is often denoted by the cross symbol,×, by the mid-line dot operator,·, by juxtaposition, or, on computers, by an asterisk,*.
The multiplication of whole numbers may be thought of as repeated addition; that is, the multiplication of two numbers is equivalent to adding as many copies of one of them, themultiplicand, as the quantity of the other one, themultiplier; both numbers can be referred to asfactors. This is to be distinguished fromterms, which are added.
Whether the first factor is the multiplier or the multiplicand may be ambiguous or depend upon context. For example, the expression3×4{\displaystyle 3\times 4}can be phrased as "3 times 4" and evaluated as4+4+4{\displaystyle 4+4+4}, where 3 is the multiplier, but also as "3 multiplied by 4", in which case 3 becomes the multiplicand.[1]One of the main properties of multiplication is the commutative property, which states in this case that adding 3 copies of 4 gives the same result as adding 4 copies of 3. Thus, the designation of multiplier and multiplicand does not affect the result of the multiplication.[2][3]
Systematic generalizations of this basic definition define the multiplication of integers (including negative numbers), rational numbers (fractions), and real numbers.
Multiplication can also be visualized as counting objects arranged in a rectangle (for whole numbers) or as finding the area of a rectangle whose sides have some given lengths. The area of a rectangle does not depend on which side is measured first—a consequence of the commutative property.
The product of two measurements (orphysical quantities) is a new type of measurement (or new quantity), usually with a derivedunit of measurement. For example, multiplying the lengths (in meters or feet) of the two sides of a rectangle gives its area (in square meters or square feet). Such a product is the subject ofdimensional analysis.
The inverse operation of multiplication isdivision. For example, since 4 multiplied by 3 equals 12, 12 divided by 3 equals 4. Indeed, multiplication by 3, followed by division by 3, yields the original number. The division of a number other than 0 by itself equals 1.
Several mathematical concepts expand upon the fundamental idea of multiplication. The product of a sequence, vector multiplication, complex numbers, and matrices are all examples where this can be seen. These more advanced constructs tend to affect the basic properties in their own ways, such as becoming noncommutative in matrices and some forms of vector multiplication or changing the sign of complex numbers.
Inarithmetic, multiplication is often written using themultiplication sign(either×or×{\displaystyle \times }) between the factors (that is, ininfix notation).[4]For example,
There are othermathematical notationsfor multiplication:
Incomputer programming, theasterisk(as in5*2) is still the most common notation. This is because most computers historically were limited to smallcharacter sets(such asASCIIandEBCDIC) that lacked a multiplication sign (such as⋅or×),[citation needed]while the asterisk appeared on every keyboard.[12]This usage originated in theFORTRANprogramming language.[13]
The numbers to be multiplied are generally called the "factors" (as infactorization). The number to be multiplied is the "multiplicand", and the number by which it is multiplied is the "multiplier". Usually, the multiplier is placed first, and the multiplicand is placed second;[14][15]however, sometimes the first factor is considered the multiplicand and the second the multiplier.
Also, as the result of multiplication does not depend on the order of the factors, the distinction between "multiplicand" and "multiplier" is useful only at a very elementary level and in somemultiplication algorithms, such as thelong multiplication. Therefore, in some sources, the term "multiplicand" is regarded as a synonym for "factor".[16]In algebra, a number that is the multiplier of a variable or expression (e.g., the 3 in3xy2{\displaystyle 3xy^{2}}) is called acoefficient.
The result of a multiplication is called aproduct. When one factor is an integer, the product is amultipleof the other or of the product of the others. Thus,2×π{\displaystyle 2\times \pi }is a multiple ofπ{\displaystyle \pi }, as is5133×486×π{\displaystyle 5133\times 486\times \pi }. A product of integers is a multiple of each factor; for example, 15 is the product of 3 and 5 and is both a multiple of 3 and a multiple of 5.
The product of two numbers or the multiplication between two numbers can be defined for common special cases: natural numbers, integers, rational numbers, real numbers, complex numbers, and quaternions.
The product of two natural numbersr,s∈N{\displaystyle r,s\in \mathbb {N} }is defined as:
r⋅s≡∑i=1sr=r+r+⋯+r⏟stimes≡∑j=1rs=s+s+⋯+s⏟rtimes.{\displaystyle r\cdot s\equiv \sum _{i=1}^{s}r=\underbrace {r+r+\cdots +r} _{s{\text{ times}}}\equiv \sum _{j=1}^{r}s=\underbrace {s+s+\cdots +s} _{r{\text{ times}}}.}
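A minimal Python sketch of this definition as repeated addition (the function name multiply is illustrative):

```python
def multiply(r: int, s: int) -> int:
    """r * s as the sum of s copies of r."""
    total = 0
    for _ in range(s):      # add r to itself s times
        total += r
    return total

assert multiply(3, 4) == multiply(4, 3) == 12
```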
An integer can be either zero, a nonzero natural number, or minus a nonzero natural number. The product of zero and another integer is always zero. The product of two nonzero integers is determined by the product of theirpositive amounts, combined with the sign derived from the following rule:
(This rule is a consequence of thedistributivityof multiplication over addition, and is not anadditional rule.)
In words:
Two fractions can be multiplied by multiplying their numerators and denominators: a/b × c/d = (a × c)/(b × d).
There are several equivalent ways to define formally the real numbers; seeConstruction of the real numbers. The definition of multiplication is a part of all these definitions.
A fundamental aspect of these definitions is that every real number can be approximated to any accuracy byrational numbers. A standard way for expressing this is that every real number is theleast upper boundof a set of rational numbers. In particular, every positive real number is the least upper bound of thetruncationsof its infinitedecimal representation; for example,π{\displaystyle \pi }is the least upper bound of{3,3.1,3.14,3.141,…}.{\displaystyle \{3,\;3.1,\;3.14,\;3.141,\ldots \}.}
A fundamental property of real numbers is that rational approximations are compatible witharithmetic operations, and, in particular, with multiplication. This means that, ifaandbare positive real numbers such thata=supx∈Ax{\displaystyle a=\sup _{x\in A}x}andb=supy∈By,{\displaystyle b=\sup _{y\in B}y,}thena⋅b=supx∈A,y∈Bx⋅y.{\displaystyle a\cdot b=\sup _{x\in A,y\in B}x\cdot y.}In particular, the product of two positive real numbers is the least upper bound of the term-by-term products of thesequencesof their decimal representations.
As changing the signs transforms least upper bounds into greatest lower bounds, the simplest way to deal with a multiplication involving one or two negative numbers, is to use the rule of signs described above in§ Product of two integers. The construction of the real numbers throughCauchy sequencesis often preferred in order to avoid consideration of the four possible sign configurations.
Two complex numbers can be multiplied by the distributive law and the fact thati2=−1{\displaystyle i^{2}=-1}, as follows: (a + bi)(c + di) = (ac − bd) + (ad + bc)i.
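A small Python sketch of this rule (the function name complex_multiply is illustrative), checked against Python's built-in complex arithmetic:

```python
def complex_multiply(a, b, c, d):
    """Multiply a + bi by c + di, returning (real part, imaginary part)."""
    return (a * c - b * d, a * d + b * c)

product = (1 + 2j) * (3 + 4j)
assert complex_multiply(1, 2, 3, 4) == (-5, 10)                 # (1+2i)(3+4i) = -5+10i
assert complex_multiply(1, 2, 3, 4) == (product.real, product.imag)
```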
The geometric meaning of complex multiplication can be understood by rewriting complex numbers inpolar coordinates:
Furthermore,
from which one obtains
The geometric meaning is that the magnitudes are multiplied and the arguments are added.
The product of twoquaternionscan be found in the article onquaternions. Note, in this case, thata⋅b{\displaystyle a\cdot b}andb⋅a{\displaystyle b\cdot a}are in general different.
Many common methods for multiplying numbers using pencil and paper require amultiplication tableof memorized or consulted products of small numbers (typically any two numbers from 0 to 9). However, one method, thepeasant multiplicationalgorithm, does not. The example below illustrates "long multiplication" (the "standard algorithm", "grade-school multiplication"):
In some countries such asGermany, the multiplication above is depicted similarly but with the original problem written on a single line and computation starting with the first digit of the multiplier:[17]
Multiplying numbers to more than a couple of decimal places by hand is tedious and error-prone.Common logarithmswere invented to simplify such calculations, since adding logarithms is equivalent to multiplying. Theslide ruleallowed numbers to be quickly multiplied to about three places of accuracy. Beginning in the early 20th century, mechanicalcalculators, such as theMarchant, automated multiplication of up to 10-digit numbers. Modern electroniccomputersand calculators have greatly reduced the need for multiplication by hand.
Methods of multiplication were documented in the writings ofancient Egyptian,Greek, Indian,[citation needed]andChinesecivilizations.
TheIshango bone, dated to about 18,000 to 20,000 BC, may hint at a knowledge of multiplication in theUpper Paleolithicera inCentral Africa, but this is speculative.[18][verification needed]
The Egyptian method of multiplication of integers and fractions, which is documented in theRhind Mathematical Papyrus, was by successive additions and doubling. For instance, to find the product of 13 and 21 one had to double 21 three times, obtaining2 × 21 = 42,4 × 21 = 2 × 42 = 84,8 × 21 = 2 × 84 = 168. The full product could then be found by adding the appropriate terms found in the doubling sequence:[19]
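A Python sketch of this doubling method (the function name egyptian_multiply is illustrative), reproducing the 13 × 21 example: the doublings 21, 42, 84, 168 are formed, and those corresponding to 13 = 8 + 4 + 1 are added.

```python
def egyptian_multiply(multiplier: int, multiplicand: int) -> int:
    total, power, doubling = 0, 1, multiplicand
    while power <= multiplier:
        if multiplier & power:        # this power of two appears in the multiplier
            total += doubling
        power, doubling = power * 2, doubling * 2
    return total

assert egyptian_multiply(13, 21) == 21 + 84 + 168 == 273
```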
TheBabyloniansused asexagesimalpositional number system, analogous to the modern-daydecimal system. Thus, Babylonian multiplication was very similar to modern decimal multiplication. Because of the relative difficulty of remembering60 × 60different products, Babylonian mathematicians employedmultiplication tables. These tables consisted of a list of the first twenty multiples of a certainprincipal numbern:n, 2n, ..., 20n; followed by the multiples of 10n: 30n, 40n, and 50n. Then to compute any sexagesimal product, say 53n, one only needed to add 50nand 3ncomputed from the table.[citation needed]
In the mathematical textZhoubi Suanjing, dated prior to 300 BC, and theNine Chapters on the Mathematical Art, multiplication calculations were written out in words, although the early Chinese mathematicians employedRod calculusinvolving place value addition, subtraction, multiplication, and division. The Chinese were already using adecimal multiplication tableby the end of theWarring Statesperiod.[20]
The modern method of multiplication based on theHindu–Arabic numeral systemwas first described byBrahmagupta. Brahmagupta gave rules for addition, subtraction, multiplication, and division.Henry Burchard Fine, then a professor of mathematics atPrinceton University, wrote the following:
These place value decimal arithmetic algorithms were introduced to Arab countries byAl Khwarizmiin the early 9th century and popularized in the Western world byFibonacciin the 13th century.[22]
Grid method multiplication, or the box method, is used in primary schools in England and Wales and in some areas[which?]of the United States to help teach an understanding of how multiple digit multiplication works. An example of multiplying 34 by 13 would be to lay the numbers out in a grid as follows:
and then add the entries.
The classical method of multiplying twon-digit numbers requiresn2digit multiplications.Multiplication algorithmshave been designed that reduce the computation time considerably when multiplying large numbers. Methods based on thediscrete Fourier transformreduce thecomputational complexitytoO(nlognlog logn). In 2016, the factorlog lognwas replaced by a function that increases much slower, though still not constant.[23]In March 2019, David Harvey and Joris van der Hoeven submitted a paper presenting an integer multiplication algorithm with a complexity ofO(nlogn).{\displaystyle O(n\log n).}[24]The algorithm, also based on the fast Fourier transform, is conjectured to be asymptotically optimal.[25]The algorithm is not practically useful, as it only becomes faster for multiplying extremely large numbers (having more than2172912bits).[26]
One can only meaningfully add or subtract quantities of the same type, but quantities of different types can be multiplied or divided without problems. For example, four bags with three marbles each can be thought of as:[2]
When two measurements are multiplied together, the product is of a type depending on the types of measurements. The general theory is given bydimensional analysis. This analysis is routinely applied in physics, but it also has applications in finance and other applied fields.
A common example in physics is the fact that multiplyingspeedbytimegivesdistance. For example:
In this case, the hour units cancel out, leaving the product with only kilometer units.
Other examples of multiplication involving units include:
The product of a sequence of factors can be written with the product symbol∏{\displaystyle \textstyle \prod }, which derives from the capital letter Π (pi) in theGreek alphabet(much like the same way thesummation symbol∑{\displaystyle \textstyle \sum }is derived from the Greek letter Σ (sigma)).[27][28]The meaning of this notation is given by
which results in
In such a notation, thevariableirepresents a varyinginteger, called the multiplication index, that runs from the lower value1indicated in the subscript to the upper value4given by the superscript. The product is obtained by multiplying together all factors obtained by substituting the multiplication index for an integer between the lower and the upper values (the bounds included) in the expression that follows the product operator.
More generally, the notation is defined as
wheremandnare integers or expressions that evaluate to integers. In the case wherem=n, the value of the product is the same as that of the single factorxm; ifm>n, the product is anempty productwhose value is 1—regardless of the expression for the factors.
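A brief Python sketch of the product notation (the helper name product is illustrative), including the convention that the empty product has the value 1:

```python
import math

def product(factors):
    return math.prod(factors)               # math.prod of an empty sequence is 1

assert product([1, 2, 3, 4]) == 24          # the product of i for i from 1 to 4
assert product([]) == 1                     # empty product
```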
By definition,
If all factors are identical, a product ofnfactors is equivalent toexponentiation:
Associativityandcommutativityof multiplication imply
ifais a non-negative integer, or if allxi{\displaystyle x_{i}}are positivereal numbers, and
if allai{\displaystyle a_{i}}are non-negative integers, or ifxis a positive real number.
One may also consider products of infinitely many factors; these are calledinfinite products. Notationally, this consists in replacingnabove by theinfinity symbol∞. The product of such an infinite sequence is defined as thelimitof the product of the firstnfactors, asngrows without bound. That is,
One can similarly replacemwith negative infinity, and define:
provided both limits exist.[citation needed]
When multiplication is repeated, the resulting operation is known asexponentiation. For instance, the product of three factors of two (2×2×2) is "two raised to the third power", and is denoted by 23, a two with asuperscriptthree. In this example, the number two is thebase, and three is theexponent.[29]In general, the exponent (or superscript) indicates how many times the base appears in the expression, so that the expression
indicates thatncopies of the baseaare to be multiplied together. This notation can be used whenever multiplication is known to bepower associative.
Forrealandcomplexnumbers, which includes, for example,natural numbers,integers, andfractions, multiplication has certain properties:
Other mathematical systems that include a multiplication operation may not have all these properties. For example, multiplication is not, in general, commutative formatricesandquaternions.[30]Hurwitz's theoremshows that for thehypercomplex numbersofdimension8 or greater, including theoctonions,sedenions, andtrigintaduonions, multiplication is generally not associative.[34]
In the bookArithmetices principia, nova methodo exposita,Giuseppe Peanoproposed axioms for arithmetic based on his axioms for natural numbers. Peano arithmetic has two axioms for multiplication: x × 0 = 0 and x × S(y) = (x × y) + x.
HereS(y) represents thesuccessorofy; i.e., the natural number that followsy. The various properties like associativity can be proved from these and the other axioms of Peano arithmetic, includinginduction. For instance,S(0), denoted by 1, is a multiplicative identity because x × S(0) = (x × 0) + x = 0 + x = x.
The axioms forintegerstypically define them as equivalence classes of ordered pairs of natural numbers. The model is based on treating (x,y) as equivalent tox−ywhenxandyare treated as integers. Thus both (0,1) and (1,2) are equivalent to −1. The multiplication axiom for integers defined this way is (a, b) × (c, d) = (ac + bd, ad + bc).
The rule that −1 × −1 = 1 can then be deduced from (0, 1) × (0, 1) = (0 × 0 + 1 × 1, 0 × 1 + 1 × 0) = (1, 0), which represents +1.
Multiplication is extended in a similar way torational numbersand then toreal numbers.[citation needed]
The product of non-negative integers can be defined with set theory usingcardinal numbersor thePeano axioms. Seebelowhow to extend this to multiplying arbitrary integers, and then arbitrary rational numbers. The product of real numbers is defined in terms of products of rational numbers; seeconstruction of the real numbers.[35]
There are many sets that, under the operation of multiplication, satisfy the axioms that definegroupstructure. These axioms are closure, associativity, and the inclusion of an identity element and inverses.
A simple example is the set of non-zerorational numbers. Here the identity element is 1, as opposed to groups under addition, where the identity is typically 0. Note that with the rationals, zero must be excluded because, under multiplication, it does not have an inverse: there is no rational number that can be multiplied by zero to result in 1. In this example the group isabelian, but that is not always the case.
To see this, consider the set of invertible square matrices of a given dimension over a givenfield. Here, it is straightforward to verify closure, associativity, and inclusion of identity (theidentity matrix) and inverses. However, matrix multiplication is not commutative, which shows that this group is non-abelian.
Another fact worth noticing is that the integers under multiplication do not form a group—even if zero is excluded. This is easily seen by the nonexistence of an inverse for all elements other than 1 and −1.
Multiplication in group theory is typically notated either by a dot or by juxtaposition (the omission of an operation symbol between elements). So multiplying elementaby elementbcould be notated asa⋅{\displaystyle \cdot }borab. When referring to a group via the indication of the set and operation, the dot is used. For example, our first example could be indicated by(Q/{0},⋅){\displaystyle \left(\mathbb {Q} /\{0\},\,\cdot \right)}.[36]
Numbers cancount(3 apples),order(the 3rd apple), ormeasure(3.5 feet high); as the history of mathematics has progressed from counting on our fingers to modelling quantum mechanics, multiplication has been generalized to more complicated and abstract types of numbers, and to things that are not numbers (such asmatrices) or do not look much like numbers (such asquaternions).
|
https://en.wikipedia.org/wiki/Multiplication
|
Exclusive or,exclusive disjunction,exclusive alternation,logical non-equivalence, orlogical inequalityis alogical operatorwhose negation is thelogical biconditional. With two inputs, XOR is true if and only if the inputs differ (one is true, one is false). With multiple inputs, XOR is true if and only if the number of true inputs isodd.[1]
It gains the name "exclusive or" because the meaning of "or" is ambiguous when bothoperandsare true. XORexcludesthat case. Some informal ways of describing XOR are "one or the other but not both", "either one or the other", and "A or B, but not A and B".
It issymbolizedby the prefix operatorJ{\displaystyle J}[2]: 16and by theinfix operatorsXOR(/ˌɛksˈɔːr/,/ˌɛksˈɔː/,/ˈksɔːr/or/ˈksɔː/),EOR,EXOR,∨˙{\displaystyle {\dot {\vee }}},∨¯{\displaystyle {\overline {\vee }}},∨_{\displaystyle {\underline {\vee }}},⩛,⊕{\displaystyle \oplus },↮{\displaystyle \nleftrightarrow }, and≢{\displaystyle \not \equiv }.
Thetruth tableofA↮B{\displaystyle A\nleftrightarrow B}shows that it outputs true whenever the inputs differ:
Exclusive disjunction essentially means 'either one, but not both nor none'. In other words, the statement is trueif and only ifone is true and the other is false. For example, if two horses are racing, then one of the two will win the race, but not both of them. The exclusive disjunctionp↮q{\displaystyle p\nleftrightarrow q}, also denoted byp?q{\displaystyle p\operatorname {?} q}orJpq{\displaystyle Jpq}, can be expressed in terms of thelogical conjunction("logical and",∧{\displaystyle \land }), thedisjunction("logical or",∨{\displaystyle \vee }), and thenegation(¬{\displaystyle \neg }) as follows: p ↮ q = (p ∧ ¬q) ∨ (¬p ∧ q).
The exclusive disjunctionp↮q{\displaystyle p\nleftrightarrow q}can also be expressed in the following way: p ↮ q = (p ∨ q) ∧ ¬(p ∧ q).
This representation of XOR may be found useful when constructing a circuit or network, because it has only one¬{\displaystyle \lnot }operation and a small number of∧{\displaystyle \land }and∨{\displaystyle \lor }operations. A proof of this identity is given below:
It is sometimes useful to writep↮q{\displaystyle p\nleftrightarrow q}in the following way:
or:
This equivalence can be established by applyingDe Morgan's lawstwice to the fourth line of the above proof.
The exclusive or is also equivalent to the negation of alogical biconditional, by the rules of material implication (amaterial conditionalis equivalent to the disjunction of the negation of itsantecedentand its consequence) andmaterial equivalence.
In summary, we have, in mathematical and in engineering notation:
By applying the spirit ofDe Morgan's laws, we get:¬(p↮q)≡¬p↮q≡p↮¬q.{\displaystyle \neg (p\nleftrightarrow q)\equiv \neg p\nleftrightarrow q\equiv p\nleftrightarrow \neg q.}
Although theoperators∧{\displaystyle \wedge }(conjunction) and∨{\displaystyle \lor }(disjunction) are very useful in logic systems, they fail to yield a more generalizable structure in the following way:
The systems({T,F},∧){\displaystyle (\{T,F\},\wedge )}and({T,F},∨){\displaystyle (\{T,F\},\lor )}aremonoids, but neither is agroup. This unfortunately prevents the combination of these two systems into larger structures, such as amathematical ring.
However, the system using exclusive or({T,F},⊕){\displaystyle (\{T,F\},\oplus )}is anabelian group. The combination of the operators∧{\displaystyle \wedge }and⊕{\displaystyle \oplus }over the elements{T,F}{\displaystyle \{T,F\}}produces the well-knowntwo-element fieldF2{\displaystyle \mathbb {F} _{2}}. This field can represent any logic obtainable with the system(∧,∨){\displaystyle (\land ,\lor )}and has the added benefit of the arsenal of algebraic analysis tools for fields.
More specifically, if one associatesF{\displaystyle F}with 0 andT{\displaystyle T}with 1, one can interpret the logical "AND" operation as multiplication onF2{\displaystyle \mathbb {F} _{2}}and the "XOR" operation as addition onF2{\displaystyle \mathbb {F} _{2}}:
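A minimal Python check of this identification, with F encoded as 0 and T as 1: logical AND behaves as multiplication modulo 2, and XOR as addition modulo 2.

```python
for p in (0, 1):
    for q in (0, 1):
        assert (p and q) == (p * q) % 2    # AND is multiplication in GF(2)
        assert (p ^ q) == (p + q) % 2      # XOR is addition in GF(2)
```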
The description of aBoolean functionas apolynomialinF2{\displaystyle \mathbb {F} _{2}}, using this basis, is called the function'salgebraic normal form.[3]
Disjunction is often understood exclusively innatural languages. In English, the disjunctive word "or" is often understood exclusively, particularly when used with the particle "either". The English example below would normally be understood in conversation as implying that Mary is not both a singer and a poet.[4][5]
However, disjunction can also be understood inclusively, even in combination with "either". For instance, the first example below shows that "either" can befelicitouslyused in combination with an outright statement that both disjuncts are true. The second example shows that the exclusive inference vanishes away underdownward entailingcontexts. If disjunction were understood as exclusive in this example, it would leave open the possibility that some people ate both rice and beans.[4]
Examples such as the above have motivated analyses of the exclusivity inference aspragmaticconversational implicaturescalculated on the basis of an inclusivesemantics. Implicatures are typicallycancellableand do not arise in downward entailing contexts if their calculation depends on theMaxim of Quantity. However, some researchers have treated exclusivity as a bona fide semanticentailmentand proposed nonclassical logics which would validate it.[4]
This behavior of English "or" is also found in other languages. However, many languages have disjunctive constructions which are robustly exclusive such as Frenchsoit... soit.[4]
The symbol used for exclusive disjunction varies from one field of application to the next, and even depends on the properties being emphasized in a given context of discussion. In addition to the abbreviation "XOR", any of the following symbols may also be seen:
If usingbinaryvalues for true (1) and false (0), thenexclusive orworks exactly likeadditionmodulo2.
Exclusive disjunction is often used for bitwise operations. Examples:
As noted above, since exclusive disjunction is identical to addition modulo 2, the bitwise exclusive disjunction of twon-bit strings is identical to the standard vector of addition in thevector space(Z/2Z)n{\displaystyle (\mathbb {Z} /2\mathbb {Z} )^{n}}.
In computer science, exclusive disjunction has several uses:
In logical circuits, a simpleaddercan be made with anXOR gateto add the numbers, and a series of AND, OR and NOT gates to create the carry output.
On some computer architectures, it is more efficient to store a zero in a register by XOR-ing the register with itself (bits XOR-ed with themselves are always zero) than to load and store the value zero.
Incryptography, XOR is sometimes used as a simple, self-inverse mixing function, such as inone-time padorFeistel networksystems.[citation needed]XOR is also heavily used in block ciphers such as AES (Rijndael) or Serpent and in block cipher implementation (CBC, CFB, OFB or CTR).
In simple threshold-activatedartificial neural networks, modeling the XOR function requires a second layer because XOR is not alinearly separablefunction.
Similarly, XOR can be used in generatingentropy poolsforhardware random number generators. The XOR operation preserves randomness, meaning that a random bit XORed with a non-random bit will result in a random bit. Multiple sources of potentially random data can be combined using XOR, and the unpredictability of the output is guaranteed to be at least as good as the best individual source.[22]
XOR is used inRAID3–6 for creating parity information. For example, RAID can "back up" bytes 10011100₂ and 01101100₂ from two (or more) hard drives by XORing the just mentioned bytes, resulting in 11110000₂, which is written to another drive. Under this method, if any one of the three hard drives is lost, the lost byte can be re-created by XORing bytes from the remaining drives. For instance, if the drive containing 01101100₂ is lost, 10011100₂ and 11110000₂ can be XORed to recover the lost byte.[23]
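A Python sketch of this parity scheme, using the bytes from the example above:

```python
drive_a, drive_b = 0b10011100, 0b01101100
parity = drive_a ^ drive_b                 # 0b11110000, stored on a third drive
assert parity == 0b11110000

# Suppose the drive holding drive_b is lost; XOR the survivors to rebuild it:
recovered_b = drive_a ^ parity
assert recovered_b == drive_b
```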
XOR is also used to detect an overflow in the result of a signed binary arithmetic operation. If the leftmost retained bit of the result is not the same as the infinite number of digits to the left, then that means overflow occurred. XORing those two bits will give a "1" if there is an overflow.
XOR can be used to swap two numeric variables in computers, using theXOR swap algorithm; however this is regarded as more of a curiosity and not encouraged in practice.
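A sketch of the XOR swap on two Python integers (a curiosity in practice, as noted above):

```python
x, y = 23, 42
x ^= y      # x now holds x XOR y
y ^= x      # y becomes the original x
x ^= y      # x becomes the original y
assert (x, y) == (42, 23)
```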
XOR linked listsleverage XOR properties in order to save space to representdoubly linked listdata structures.
Incomputer graphics, XOR-based drawing methods are often used to manage such items asbounding boxesandcursorson systems withoutalpha channelsor overlay planes.
It is also called "not left-right arrow" (\nleftrightarrow) inLaTeX-based markdown (↮{\displaystyle \nleftrightarrow }). Apart from the ASCII codes, the operator is encoded at U+22BB ⊻ XOR and U+2295 ⊕ CIRCLED PLUS, both in the Mathematical Operators block.
|
https://en.wikipedia.org/wiki/Exclusive_or
|
Incryptography, acipher(orcypher) is analgorithmfor performingencryptionordecryption—a series of well-defined steps that can be followed as a procedure. An alternative, less common term isencipherment. To encipher or encode is to convert information into cipher or code. In common parlance, "cipher" is synonymous with "code", as they are both a set of steps that encrypt a message; however, the concepts are distinct in cryptography, especiallyclassical cryptography.
Codes generally substitute different length strings of characters in the output, while ciphers generally substitute the same number of characters as are input. A code maps one meaning with another. Words and phrases can be coded as letters or numbers. Codes typically have direct meaning from input to key. Codes primarily function to save time. Ciphers are algorithmic. The given input must follow the cipher's process to be solved. Ciphers are commonly used to encrypt written information.
Codes operated by substituting according to a largecodebookwhich linked a random string of characters or numbers to a word or phrase. For example, "UQJHSE" could be the code for "Proceed to the following coordinates.". When using a cipher the original information is known asplaintext, and the encrypted form asciphertext. The ciphertext message contains all the information of the plaintext message, but is not in a format readable by a human or computer without the proper mechanism to decrypt it.
The operation of a cipher usually depends on a piece of auxiliary information, called akey(or, in traditionalNSAparlance, acryptovariable). The encrypting procedure is varied depending on the key, which changes the detailed operation of the algorithm. A key must be selected before using a cipher to encrypt a message, with some exceptions such asROT13andAtbash.
Most modern ciphers can be categorized in several ways:
Originating from the Sanskrit word for zero शून्य (śuṇya), via the Arabic word صفر (ṣifr), the word "cipher"(see etymology) spread to Europe as part of the Arabic numeral system during the Middle Ages. The Roman numeral system lacked the concept ofzero, and this limited advances in mathematics. In this transition, the word was adopted into Medieval Latin as cifra, and then into Middle French as cifre. This eventually led to the English word cipher (minority spelling cypher). One theory for how the term came to refer to encoding is that the concept of zero was confusing to Europeans, and so the term came to refer to a message or communication that was not easily understood.[1]
The termcipherwas later also used to refer to any Arabic digit, or to calculation using them, so encoding text in the form of Arabic numerals is literally converting the text to "ciphers".
In casual contexts, "code" and "cipher" can typically be used interchangeably; however, the technical usages of the words refer to different concepts. Codes contain meaning; words and phrases are assigned to numbers or symbols, creating a shorter message.
An example of this is thecommercial telegraph codewhich was used to shorten long telegraph messages which resulted from entering into commercial contracts using exchanges oftelegrams.
Another example is given by whole word ciphers, which allow the user to replace an entire word with a symbol or character, much like the way written Japanese utilizesKanji(meaning Chinese characters in Japanese) characters to supplement the native Japanese characters representing syllables. An example using English language with Kanji could be to replace "The quick brown fox jumps over the lazy dog" by "The quick brown 狐 jumps 上 the lazy 犬".Stenographerssometimes use specific symbols to abbreviate whole words.
Ciphers, on the other hand, work at a lower level: the level of individual letters, small groups of letters, or, in modern schemes, individual bits and blocks of bits. Some systems used both codes and ciphers in one system, usingsuperenciphermentto increase the security. In some cases the termscodesandciphersare used synonymously withsubstitutionandtransposition, respectively.
Historically, cryptography was split into a dichotomy of codes and ciphers, while coding had its own terminology analogous to that of ciphers: "encoding,codetext,decoding" and so on.
However, codes have a variety of drawbacks, including susceptibility tocryptanalysisand the difficulty of managing a cumbersomecodebook. Because of this, codes have fallen into disuse in modern cryptography, and ciphers are the dominant technique.
There are a variety of different types of encryption. Algorithms used earlier in thehistory of cryptographyare substantially different from modern methods, and modern ciphers can be classified according to how they operate and whether they use one or two keys.
The Caesar cipher is one of the earliest known cryptographic systems. Julius Caesar used a cipher that shifts each letter of the alphabet three places and wraps the remaining letters around to the front, writing to Marcus Tullius Cicero in approximately 50 BC.[citation needed]
Historical pen and paper ciphers used in the past are sometimes known asclassical ciphers. They include simplesubstitution ciphers(such asROT13) andtransposition ciphers(such as aRail Fence Cipher). For example, "GOOD DOG" can be encrypted as "PLLX XLP" where "L" substitutes for "O", "P" for "G", and "X" for "D" in the message. Transposition of the letters "GOOD DOG" can result in "DGOGDOO". These simple ciphers and examples are easy to crack, even without plaintext-ciphertext pairs.[2][3]
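As a rough sketch of the two techniques just described, the following Python snippet reproduces the substitution example above and applies a simple two-rail rail-fence transposition; the rail-fence key is an illustrative assumption and does not reproduce the "DGOGDOO" output given in the text.

```python
# Toy substitution table taken from the "GOOD DOG" -> "PLLX XLP" example.
SUBSTITUTION = {"G": "P", "O": "L", "D": "X"}

def substitute(text: str) -> str:
    """Replace each letter according to the fixed substitution table."""
    return "".join(SUBSTITUTION.get(ch, ch) for ch in text)

def rail_fence(text: str, rails: int = 2) -> str:
    """Write the letters across `rails` rows in turn, then read the rows in order."""
    rows = ["" for _ in range(rails)]
    for i, ch in enumerate(text.replace(" ", "")):
        rows[i % rails] += ch
    return "".join(rows)

print(substitute("GOOD DOG"))   # PLLX XLP  (letters replaced, order kept)
print(rail_fence("GOOD DOG"))   # GODGODO   (letters rearranged, none replaced)
```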
In the 1640s, the Parliamentarian commander,Edward Montagu, 2nd Earl of Manchester, developed ciphers to send coded messages to his allies during theEnglish Civil War.[4]The English theologian John Wilkins published a book in 1641 titled "Mercury, or The Secret and Swift Messenger" and described a musical cipher wherein letters of the alphabet were substituted for music notes.[5][6]This species of melodic cipher was depicted in greater detail by author Abraham Rees in his book Cyclopædia (1778).[7]
Simple ciphers were replaced bypolyalphabetic substitutionciphers (such as theVigenère) which changed the substitution alphabet for every letter. For example, "GOOD DOG" can be encrypted as "PLSX TWF" where "L", "S", and "W" substitute for "O". With even a small amount of known or estimated plaintext, simple polyalphabetic substitution ciphers and letter transposition ciphers designed for pen and paper encryption are easy to crack.[8]It is possible to create a secure pen and paper cipher based on aone-time pad, but these have other disadvantages.
During the early twentieth century, electro-mechanical machines were invented to do encryption and decryption using transposition, polyalphabetic substitution, and a kind of "additive" substitution. Inrotor machines, several rotor disks provided polyalphabetic substitution, while plug boards provided another substitution. Keys were easily changed by changing the rotor disks and the plugboard wires. Although these encryption methods were more complex than previous schemes and required machines to encrypt and decrypt, other machines such as the BritishBombewere invented to crack these encryption methods.
Modern encryption methods can be divided by two criteria: by type of key used, and by type of input data.
By type of key used ciphers are divided into:
In a symmetric key algorithm (e.g., DES and AES), the sender and receiver must have a shared key set up in advance and kept secret from all other parties; the sender uses this key for encryption, and the receiver uses the same key for decryption. The design of AES (the Advanced Encryption Standard) was beneficial because it aimed to overcome the flaws in the design of DES (the Data Encryption Standard). AES's designers claim that the common means of modern cipher cryptanalytic attacks are ineffective against AES due to its design structure.
Ciphers can be distinguished into two types by the type of input data:
In a pure mathematical attack (i.e., one lacking any other information to help break a cipher), two factors above all count:
Since the desired effect is computational difficulty, in theory one would choose an algorithm and desired difficulty level, and then decide the key length accordingly.
Claude Shannonproved, using information theory considerations, that any theoretically unbreakable cipher must have keys which are at least as long as the plaintext, and used only once:one-time pad.[9]
|
https://en.wikipedia.org/wiki/Cipher#Perfect_secrecy
|
Elliptic-curve Diffie–Hellman(ECDH) is akey agreementprotocol that allows two parties, each having anelliptic-curvepublic–private key pair, to establish ashared secretover aninsecure channel.[1][2][3]This shared secret may be directly used as a key, or toderive another key. The key, or the derived key, can then be used to encrypt subsequent communications using asymmetric-key cipher. It is a variant of theDiffie–Hellmanprotocol usingelliptic-curve cryptography.
The following example illustrates how a shared key is established. Suppose Alice wants to establish a shared key with Bob, but the only channel available for them may be eavesdropped by a third party. Initially, the domain parameters (that is, $(p, a, b, G, n, h)$ in the prime case or $(m, f(x), a, b, G, n, h)$ in the binary case) must be agreed upon. Also, each party must have a key pair suitable for elliptic curve cryptography, consisting of a private key $d$ (a randomly selected integer in the interval $[1, n-1]$) and a public key represented by a point $Q$ (where $Q = d \cdot G$, that is, the result of adding $G$ to itself $d$ times). Let Alice's key pair be $(d_A, Q_A)$ and Bob's key pair be $(d_B, Q_B)$. Each party must know the other party's public key prior to execution of the protocol.
Alice computes the point $(x_k, y_k) = d_A \cdot Q_B$. Bob computes the point $(x_k, y_k) = d_B \cdot Q_A$. The shared secret is $x_k$ (the $x$ coordinate of the point). Most standardized protocols based on ECDH derive a symmetric key from $x_k$ using some hash-based key derivation function.
The shared secret calculated by both parties is equal, because $d_A \cdot Q_B = d_A \cdot d_B \cdot G = d_B \cdot d_A \cdot G = d_B \cdot Q_A$.
The only information about her key that Alice initially exposes is her public key. So, no party except Alice can determine Alice's private key (Alice of course knows it by having selected it), unless that party can solve the elliptic curvediscrete logarithmproblem. Bob's private key is similarly secure. No party other than Alice or Bob can compute the shared secret, unless that party can solve the elliptic curveDiffie–Hellman problem.
The public keys are either static (and trusted, say via a certificate) or ephemeral (also known asECDHE, where final 'E' stands for "ephemeral").Ephemeral keysare temporary and not necessarily authenticated, so if authentication is desired, authenticity assurances must be obtained by other means. Authentication is necessary to avoidman-in-the-middle attacks. If one of either Alice's or Bob's public keys is static, then man-in-the-middle attacks are thwarted. Static public keys provide neitherforward secrecynor key-compromise impersonation resilience, among other advanced security properties. Holders of static private keys should validate the other public key, and should apply a securekey derivation functionto the raw Diffie–Hellman shared secret to avoid leaking information about the static private key. For schemes with other security properties, seeMQV.
If Alice maliciously chooses invalid curve points for her key and Bob does not validate that Alice's points are part of the selected group, she can collect enough residues of Bob's key to derive his private key. SeveralTLSlibraries were found to be vulnerable to this attack.[4]
The shared secret is uniformly distributed on a subset of $[0, p)$ of size $(n+1)/2$. For this reason, the secret should not be used directly as a symmetric key, but it can be used as entropy for a key derivation function.
Let $A, B \in F_p$ such that $B(A^2 - 4) \neq 0$. The Montgomery form elliptic curve $E_{M,A,B}$ is the set of all $(x, y) \in F_p \times F_p$ satisfying the equation $By^2 = x(x^2 + Ax + 1)$, along with the point at infinity denoted as $\infty$. This is called the affine form of the curve. The set of all $F_p$-rational points of $E_{M,A,B}$, denoted as $E_{M,A,B}(F_p)$, is the set of all $(x, y) \in F_p \times F_p$ satisfying $By^2 = x(x^2 + Ax + 1)$, along with $\infty$. Under a suitably defined addition operation, $E_{M,A,B}(F_p)$ is a group with $\infty$ as the identity element. It is known that the order of this group is a multiple of 4. In fact, it is usually possible to obtain $A$ and $B$ such that the order of $E_{M,A,B}$ is $4q$ for a prime $q$. For more extensive discussions of Montgomery curves and their arithmetic one may follow.[5][6][7]
For computational efficiency, it is preferable to work with projective coordinates. The projective form of the Montgomery curve $E_{M,A,B}$ is $BY^2Z = X(X^2 + AXZ + Z^2)$. For a point $P = [X : Y : Z]$ on $E_{M,A,B}$, the $x$-coordinate map $x$ is the following:[7] $x(P) = [X : Z]$ if $Z \neq 0$, and $x(P) = [1 : 0]$ if $P = [0 : 1 : 0]$. Bernstein[8][9] introduced the map $x_0$ as follows: $x_0(X : Z) = XZ^{p-2}$, which is defined for all values of $X$ and $Z$ in $F_p$. Following Miller,[10] Montgomery[5] and Bernstein,[9] the Diffie–Hellman key agreement can be carried out on a Montgomery curve as follows. Let $Q$ be a generator of a prime order subgroup of $E_{M,A,B}(F_p)$. Alice chooses a secret key $s$ and has public key $x_0(sQ)$; Bob chooses a secret key $t$ and has public key $x_0(tQ)$. The shared secret key of Alice and Bob is $x_0(stQ)$. Using classical computers, the best known method of obtaining $x_0(stQ)$ from $Q$, $x_0(sQ)$ and $x_0(tQ)$ requires about $O(p^{1/2})$ time using Pollard's rho algorithm.[11]
The most famous example of a Montgomery curve is Curve25519, which was introduced by Bernstein.[9] For Curve25519, $p = 2^{255} - 19$, $A = 486662$ and $B = 1$. The other Montgomery curve which is part of TLS 1.3 is Curve448, which was introduced by Hamburg.[12] For Curve448, $p = 2^{448} - 2^{224} - 1$, $A = 156326$ and $B = 1$. A pair of Montgomery curves named M[4698] and M[4058], competitive with Curve25519 and Curve448 respectively, have been proposed in.[13] For M[4698], $p = 2^{251} - 9$, $A = 4698$, $B = 1$, and for M[4058], $p = 2^{444} - 17$, $A = 4058$, $B = 1$. At the 256-bit security level, three Montgomery curves named M[996558], M[952902] and M[1504058] have been proposed in.[14] For M[996558], $p = 2^{506} - 45$, $A = 996558$, $B = 1$; for M[952902], $p = 2^{510} - 75$, $A = 952902$, $B = 1$; and for M[1504058], $p = 2^{521} - 1$, $A = 1504058$, $B = 1$, respectively. Apart from these two, other proposals of Montgomery curves can be found at.[15]
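A minimal Python sketch of Diffie–Hellman over Curve25519 (X25519), again using the third-party `cryptography` package, is shown below; variable names are illustrative.

```python
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

alice_secret = X25519PrivateKey.generate()      # Alice's scalar s
bob_secret = X25519PrivateKey.generate()        # Bob's scalar t

# Each side computes the shared x-coordinate x0(s*t*Q) from its own scalar
# and the peer's public key.
alice_shared = alice_secret.exchange(bob_secret.public_key())
bob_shared = bob_secret.exchange(alice_secret.public_key())
assert alice_shared == bob_shared
```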
|
https://en.wikipedia.org/wiki/Elliptic_Curve_Diffie%E2%80%93Hellman
|
TheVigenère cipher(French pronunciation:[viʒnɛːʁ]) is a method ofencryptingalphabetictext where each letter of theplaintextis encoded with a differentCaesar cipher, whose increment is determined by the corresponding letter of another text, thekey.
For example, if the plaintext isattacking tonightand the key isoculorhinolaryngology, then
and so on.
It is important to note that traditionally spaces and punctuation are removed prior to encryption[1]and reintroduced afterwards.
If the recipient of the message knows the key, they can recover the plaintext by reversing this process.
The Vigenère cipher is therefore a special case of apolyalphabetic substitution.[2][3]
First described byGiovan Battista Bellasoin 1553, the cipher is easy to understand and implement, but it resisted all attempts to break it until 1863, three centuries later. This earned it the descriptionle chiffrage indéchiffrable(Frenchfor 'the indecipherable cipher'). Many people have tried to implement encryption schemes that are essentially Vigenère ciphers.[4]In 1863,Friedrich Kasiskiwas the first to publish a general method of deciphering Vigenère ciphers.
In the 19th century, the scheme was misattributed toBlaise de Vigenère(1523–1596) and so acquired its present name.[5]
The very first well-documented description of a polyalphabetic cipher was byLeon Battista Albertiaround 1467 and used a metalcipher diskto switch between cipher alphabets. Alberti's system only switched alphabets after several words, and switches were indicated by writing the letter of the corresponding alphabet in the ciphertext. Later,Johannes Trithemius, in his workPolygraphia(which was completed in manuscript form in 1508 but first published in 1518),[6]invented thetabula recta, a critical component of the Vigenère cipher.[7]TheTrithemius cipher, however, provided a progressive, rather rigid and predictable system for switching between cipher alphabets.[note 1]
In 1586 Blaise de Vigenère published a type of polyalphabetic cipher called anautokey cipher– because its key is based on the original plaintext – before the court ofHenry III of France.[8]The cipher now known as the Vigenère cipher, however, is based on that originally described byGiovan Battista Bellasoin his 1553 bookLa cifra del Sig. Giovan Battista Bellaso.[9]He built upon the tabula recta of Trithemius but added a repeating "countersign" (akey) to switch cipher alphabets every letter.
Whereas Alberti and Trithemius used a fixed pattern of substitutions, Bellaso's scheme meant the pattern of substitutions could be easily changed, simply by selecting a new key. Keys were typically single words or short phrases, known to both parties in advance, or transmitted "out of band" along with the message; Bellaso's method thus required strong security for only the key. As it is relatively easy to secure a short key phrase, such as by a previous private conversation, Bellaso's system was considerably more secure.[citation needed]
Note, however, that as opposed to the modern Vigenère cipher, Bellaso's cipher didn't have 26 different "shifts" (different Caesar ciphers) for every letter, instead having 13 shifts for pairs of letters. In the 19th century, the invention of this cipher, essentially designed by Bellaso, was misattributed to Vigenère. David Kahn, in his book The Codebreakers, lamented this misattribution, saying that history had "ignored this important contribution and instead named a regressive and elementary cipher for him [Vigenère] though he had nothing to do with it".[10]
The Vigenère cipher gained a reputation for being exceptionally strong. Noted author and mathematician Charles Lutwidge Dodgson (Lewis Carroll) called the Vigenère cipher unbreakable in his 1868 piece "The Alphabet Cipher" in a children's magazine. In 1917,Scientific Americandescribed the Vigenère cipher as "impossible of translation".[11][12]That reputation was not deserved.Charles Babbageis known to have broken a variant of the cipher as early as 1854 but did not publish his work.[13]Kasiski entirely broke the cipher and published the technique in the 19th century, but even in the 16th century, some skilled cryptanalysts could occasionally break the cipher.[10]
The Vigenère cipher is simple enough to be a field cipher if it is used in conjunction with cipher disks.[14]TheConfederate States of America, for example, used a brass cipher disk to implement the Vigenère cipher during theAmerican Civil War. The Confederacy's messages were far from secret, and the Union regularly cracked its messages. Throughout the war, the Confederate leadership primarily relied upon three key phrases: "Manchester Bluff", "Complete Victory" and, as the war came to a close, "Come Retribution".[15]
A Vigenère cipher with a completely random (and non-reusable) key which is as long as the message becomes aone-time pad, a theoretically unbreakable cipher.[16]Gilbert Vernamtried to repair the broken cipher (creating the Vernam–Vigenère cipher in 1918), but the technology he used was so cumbersome as to be impracticable.[17]
In aCaesar cipher, each letter of the alphabet is shifted along some number of places. For example, in a Caesar cipher of shift 3,awould becomeD,bwould becomeE,ywould becomeBand so on. The Vigenère cipher has several Caesar ciphers in sequence with different shift values.
To encrypt, a table of alphabets can be used, termed atabula recta,Vigenère squareorVigenère table. It has the alphabet written out 26 times in different rows, each alphabet shifted cyclically to the left compared to the previous alphabet, corresponding to the 26 possible Caesar ciphers. At different points in the encryption process, the cipher uses a different alphabet from one of the rows. The alphabet used at each point depends on a repeating keyword.[citation needed]
For example, suppose that theplaintextto be encrypted is
The person sending the message chooses a keyword and repeats it until it matches the length of the plaintext, for example, the keyword "LEMON":
Each row starts with a key letter. The rest of the row holds the letters A to Z (in shifted order). Although there are 26 key rows shown, the cipher will use only as many keys (different alphabets) as there are unique letters in the key string, here just 5 keys: {L, E, M, O, N}. For successive letters of the message, successive letters of the key string are taken, and each message letter is enciphered by using its corresponding key row. When a new character of the message is selected, the next letter of the key is chosen, and the row corresponding to that key letter is followed along to find the column heading that matches the message character. The letter at the intersection of [key-row, msg-col] is the enciphered letter.
For example, the first letter of the plaintext,a, is paired withL, the first letter of the key. Therefore, rowLand columnAof the Vigenère square are used, namelyL. Similarly, for the second letter of the plaintext, the second letter of the key is used. The letter at rowEand columnTisX. The rest of the plaintext is enciphered in a similar fashion:
Decryption is performed by going to the row in the table corresponding to the key, finding the position of the ciphertext letter in that row and then using the column's label as the plaintext. For example, in rowL(fromLEMON), the ciphertextLappears in columnA, soais the first plaintext letter. Next, in rowE(fromLEMON), the ciphertextXis located in columnT. Thustis the second plaintext letter.
Vigenère can also be described algebraically. If the letters A–Z are taken to be the numbers 0–25 ($A \,\widehat{=}\, 0$, $B \,\widehat{=}\, 1$, etc.), and addition is performed modulo 26, Vigenère encryption $E$ using the key $K$ can be written as
$C_i = E_K(M_i) = (M_i + K_i) \bmod 26$
and decryption $D$ using the key $K$ as
$M_i = D_K(C_i) = (C_i - K_i) \bmod 26,$
in which $M = M_1 \dots M_n$ is the message, $C = C_1 \dots C_n$ is the ciphertext and $K = K_1 \dots K_n$ is the key obtained by repeating the keyword $\lceil n/m \rceil$ times, in which $m$ is the keyword length.
Thus, by using the previous example, to encrypt $A \,\widehat{=}\, 0$ with key letter $L \,\widehat{=}\, 11$, the calculation $(0 + 11) \bmod 26 = 11$ would result in $11 \,\widehat{=}\, L$.
Therefore, to decrypt $R \,\widehat{=}\, 17$ with key letter $E \,\widehat{=}\, 4$, the calculation $(17 - 4) \bmod 26 = 13$ would result in $13 \,\widehat{=}\, N$.
In general, if $\Sigma$ is the alphabet of length $\ell$, and $m$ is the length of the key, Vigenère encryption and decryption can be written:
$C_i = E_K(M_i) = (M_i + K_{i \bmod m}) \bmod \ell$
$M_i = D_K(C_i) = (C_i - K_{i \bmod m}) \bmod \ell$
$M_i$ denotes the offset of the $i$-th character of the plaintext $M$ in the alphabet $\Sigma$. For example, by taking the 26 English characters as the alphabet $\Sigma = (A, B, C, \ldots, X, Y, Z)$, the offset of A is 0, the offset of B is 1, etc. $C_i$ and $K_i$ are similar.
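The modular description above translates directly into code. The following minimal Python sketch encrypts and decrypts the LEMON example from earlier; the function name and the uppercase A–Z restriction are illustrative assumptions.

```python
A = ord("A")

def vigenere(text: str, key: str, decrypt: bool = False) -> str:
    """Encrypt (or decrypt) an uppercase A-Z message with a repeating key."""
    sign = -1 if decrypt else 1
    out = []
    for i, ch in enumerate(text):
        shift = ord(key[i % len(key)]) - A        # offset of the key letter
        out.append(chr((ord(ch) - A + sign * shift) % 26 + A))
    return "".join(out)

ciphertext = vigenere("ATTACKATDAWN", "LEMON")
print(ciphertext)                              # LXFOPVEFRNHR
print(vigenere(ciphertext, "LEMON", True))     # ATTACKATDAWN
```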
The idea behind the Vigenère cipher, like all other polyalphabetic ciphers, is to disguise the plaintextletter frequencyto interfere with a straightforward application offrequency analysis. For instance, ifPis the most frequent letter in a ciphertext whose plaintext is inEnglish, one might suspect thatPcorresponds toesinceeis the most frequently used letter in English. However, by using the Vigenère cipher,ecan be enciphered as different ciphertext letters at different points in the message, which defeats simple frequency analysis.
The primary weakness of the Vigenère cipher is the repeating nature of itskey. If a cryptanalyst correctly guesses the key's lengthn, the cipher text can be treated asninterleavedCaesar ciphers, which can easily be broken individually. The key length may be discovered bybrute forcetesting each possible value ofn, orKasiski examinationand theFriedman testcan help to determine the key length (see below:§ Kasiski examinationand§ Friedman test).
In 1863,Friedrich Kasiskiwas the first to publish a successful general attack on the Vigenère cipher.[18]Earlier attacks relied on knowledge of the plaintext or the use of a recognizable word as a key. Kasiski's method had no such dependencies. Although Kasiski was the first to publish an account of the attack, it is clear that others had been aware of it. In 1854,Charles Babbagewas goaded into breaking the Vigenère cipher when John Hall Brock Thwaites submitted a "new" cipher to theJournal of the Society of the Arts.[19][20]When Babbage showed that Thwaites' cipher was essentially just another recreation of the Vigenère cipher, Thwaites presented a challenge to Babbage: given an original text (from Shakespeare'sThe Tempest: Act 1, Scene 2) and its enciphered version, he was to find the key words that Thwaites had used to encipher the original text. Babbage soon found the key words: "two" and "combined". Babbage then enciphered the same passage from Shakespeare using different key words and challenged Thwaites to find Babbage's key words.[21]Babbage never explained the method that he used. Studies of Babbage's notes reveal that he had used the method later published by Kasiski and suggest that he had been using the method as early as 1846.[22]
TheKasiski examination, also called the Kasiski test, takes advantage of the fact that repeated words are, by chance, sometimes encrypted using the same key letters, leading to repeated groups in the ciphertext. For example, consider the following encryption using the keywordABCD:
There is an easily noticed repetition in the ciphertext, and so the Kasiski test will be effective.
The distance between the repetitions ofCSASTPis 16. If it is assumed that the repeated segments represent the same plaintext segments, that implies that the key is 16, 8, 4, 2, or 1 characters long. (Allfactorsof the distance are possible key lengths; a key of length one is just a simpleCaesar cipher, and itscryptanalysisis much easier.) Since key lengths 2 and 1 are unrealistically short, one needs to try only lengths 16, 8, and 4. Longer messages make the test more accurate because they usually contain more repeated ciphertext segments. The following ciphertext has two segments that are repeated:
The distance between the repetitions ofVHVSis 18. If it is assumed that the repeated segments represent the same plaintext segments, that implies that the key is 18, 9, 6, 3, 2, or 1 characters long. The distance between the repetitions ofQUCEis 30 characters. That means that the key length could be 30, 15, 10, 6, 5, 3, 2, or 1 characters long. By taking theintersectionof those sets, one could safely conclude that the most likely key length is 6 since 3, 2, and 1 are unrealistically short.
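A rough Python sketch of the Kasiski examination follows: find repeated substrings in the ciphertext, measure the distances between their occurrences, and take the common factors as candidate key lengths. The sample ciphertext is a hypothetical reconstruction based on the CSASTP example above (repeated segment 16 characters apart).

```python
from collections import defaultdict

def kasiski_distances(ciphertext: str, length: int = 3) -> list[int]:
    """Distances between repeated substrings of the given length."""
    positions = defaultdict(list)
    for i in range(len(ciphertext) - length + 1):
        positions[ciphertext[i:i + length]].append(i)
    return [b - a
            for occ in positions.values() if len(occ) > 1
            for a, b in zip(occ, occ[1:])]

def factors(n: int) -> set[int]:
    return {d for d in range(2, n + 1) if n % d == 0}

sample = "CSASTPKVSIQUTGQUCSASTPIUAQJB"   # hypothetical, CSASTP repeats at distance 16
common = set.intersection(*(factors(d) for d in kasiski_distances(sample)))
print(common)   # {2, 4, 8, 16} -> candidate key lengths
```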
The Friedman test (sometimes known as the kappa test) was invented during the 1920s by William F. Friedman, who used the index of coincidence, which measures the unevenness of the cipher letter frequencies, to break the cipher. By knowing the probability $\kappa_p$ that any two randomly chosen source-language letters are the same (around 0.067 for case-insensitive English) and the probability of a coincidence for a uniform random selection from the alphabet $\kappa_r$ ($1/26 \approx 0.0385$ for English), the key length can be estimated as the following:
$\dfrac{\kappa_p - \kappa_r}{\kappa_o - \kappa_r}$
from the observed coincidence rate
$\kappa_o = \dfrac{\sum_{i=1}^{c} n_i(n_i - 1)}{N(N - 1)},$
in which $c$ is the size of the alphabet (26 for English), $N$ is the length of the text and $n_1$ to $n_c$ are the observed ciphertext letter frequencies, as integers.
That is, however, only an approximation; its accuracy increases with the length of the text. It would, in practice, be necessary to try various key lengths that are close to the estimate.[23]A better approach for repeating-key ciphers is to copy the ciphertext into rows of a matrix with as many columns as an assumed key length and then to compute the averageindex of coincidencewith each column considered separately. When that is done for each possible key length, the highest average index of coincidence then corresponds to the most-likely key length.[24]Such tests may be supplemented by information from the Kasiski examination.
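The column-wise approach just described can be sketched in Python as follows; the range of key lengths tried and the variable names are illustrative assumptions, and a ciphertext is assumed to be available.

```python
from collections import Counter

def index_of_coincidence(column: str) -> float:
    """Probability that two randomly chosen letters of `column` are equal."""
    n = len(column)
    if n < 2:
        return 0.0
    counts = Counter(column)
    return sum(c * (c - 1) for c in counts.values()) / (n * (n - 1))

def average_ioc(ciphertext: str, keylen: int) -> float:
    """Average IoC over the columns obtained for an assumed key length."""
    columns = [ciphertext[i::keylen] for i in range(keylen)]
    return sum(index_of_coincidence(col) for col in columns) / keylen

# Usage sketch: the key length whose column-wise average IoC is highest
# (closest to ~0.067 for English) is the most likely candidate.
# best_len = max(range(1, 21), key=lambda k: average_ioc(ciphertext, k))
```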
Once the length of the key is known, the ciphertext can be rewritten into that many columns, with each column corresponding to a single letter of the key. Each column consists of plaintext that has been encrypted by a singleCaesar cipher. The Caesar key (shift) is just the letter of the Vigenère key that was used for that column. Using methods similar to those used to break the Caesar cipher, the letters in the ciphertext can be discovered.
An improvement to the Kasiski examination, known asKerckhoffs' method, matches each column's letter frequencies to shifted plaintext frequencies to discover the key letter (Caesar shift) for that column. Once every letter in the key is known, all the cryptanalyst has to do is to decrypt the ciphertext and reveal the plaintext.[25]Kerckhoffs' method is not applicable if the Vigenère table has been scrambled, rather than using normal alphabetic sequences, but Kasiski examination and coincidence tests can still be used to determine key length.
The Vigenère cipher, with normal alphabets, essentially uses modulo arithmetic, which is commutative. Therefore, if the key length is known (or guessed), subtracting the cipher text from itself, offset by the key length, will produce the plain text subtracted from itself, also offset by the key length. If any "probable word" in the plain text is known or can be guessed, its self-subtraction can be recognized, which allows recovery of the key by subtracting the known plaintext from the cipher text. Key elimination is especially useful against short messages. For example, usingLIONas the key below:
Then subtract the ciphertext from itself with a shift of the key length 4 forLION.
Which is nearly equivalent to subtracting the plaintext from itself by the same shift.
Which is algebraically represented for $i \in [1, n - m]$ as
$C_i - C_{i+m} \equiv M_i - M_{i+m} \pmod{\ell},$
in which $m$ is the key length and the repeating key terms cancel out.
In this example, the wordsbrownfoxare known.
This result, omaz, corresponds with the 9th through 12th letters in the result of the larger examples above. The known section and its location are verified.
Subtractbrowfrom that range of the ciphertext.
This produces the final result, the reveal of the keyLION.
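A rough Python sketch of this key-elimination procedure follows, assuming the key length (4, for LION) is already known; the plaintext, the probable word, and the helper names are illustrative.

```python
A = ord("A")

def add(x: str, y: str) -> str:
    """Letter-wise addition modulo 26 (Vigenère encryption with a key stream y)."""
    return "".join(chr((ord(a) + ord(b) - 2 * A) % 26 + A) for a, b in zip(x, y))

def sub(x: str, y: str) -> str:
    """Letter-wise subtraction modulo 26 (x - y)."""
    return "".join(chr((ord(a) - ord(b)) % 26 + A) for a, b in zip(x, y))

plain = "THEQUICKBROWNFOXJUMPSOVERTHELAZYDOG"
key = "LION"
cipher = add(plain, (key * len(plain))[:len(plain)])

# Subtracting the ciphertext from itself, offset by the key length, cancels
# the key: the result depends only on the plaintext.
assert sub(cipher, cipher[4:]) == sub(plain, plain[4:])

# Once a probable word has been located, subtracting it from the ciphertext
# at that position reveals the key letters.
i = plain.index("BROWN")                 # position of the probable word
print(sub(cipher[i:i + 4], "BROW"))      # LION (match starts at a multiple of the key length)
```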
Therunning keyvariant of the Vigenère cipher was also considered unbreakable at one time. For the key, this version uses a block of text as long as the plaintext. Since the key is as long as the message, the Friedman and Kasiski tests no longer work, as the key is not repeated.
If multiple keys are used, the effective key length is theleast common multipleof the lengths of the individual keys. For example, using the two keysGOandCAT, whose lengths are 2 and 3, one obtains an effective key length of 6 (the least common multiple of 2 and 3). This can be understood as the point where both keys line up.
Encrypting twice, first with the keyGOand then with the keyCATis the same as encrypting once with a key produced by encrypting one key with the other.
This is demonstrated by encryptingattackatdawnwithIOZQGH, to produce the same ciphertext as in the original example.
If key lengths are relatively prime, the effective key length is the product of the key lengths, and hence grows quickly as the individual key lengths are increased. For example, while the effective length of combined key lengths of 10, 12, and 15 characters is only 60 (2x2x3x5), that of key lengths of 8, 11, and 15 characters is 1320 (8x11x15). If this effective key length is longer than the ciphertext, it achieves the same immunity to the Friedman and Kasiski tests as the running key variant.
If one uses a key that is truly random, is at least as long as the encrypted message, and is used only once, the Vigenère cipher is theoretically unbreakable. However, in that case, the key, not the cipher, provides cryptographic strength, and such systems are properly referred to collectively asone-time padsystems, irrespective of the ciphers employed.
A simple variant is to encrypt by using the Vigenère decryption method and to decrypt by using Vigenère encryption. That method is sometimes referred to as "Variant Beaufort". It is different from theBeaufort cipher, created byFrancis Beaufort, which is similar to Vigenère but uses a slightly modified enciphering mechanism and tableau. The Beaufort cipher is areciprocal cipher.
Despite the Vigenère cipher's apparent strength, it never became widely used throughout Europe. The Gronsfeld cipher is a variant attributed byGaspar Schottto Count Gronsfeld (Josse Maximilaan vanGronsveldné van Bronckhorst) but was actually used much earlier by an ambassador of Duke of Mantua in 1560s-1570s. It is identical to the Vigenère cipher except that it uses just a cipher alphabet of 10 characters, corresponding to the digits 0 to 9: a Gronsfeld key of 0123 is the same as a Vigenere key of ABCD. The Gronsfeld cipher is strengthened because its key is not a word, but it is weakened because it has just a cipher alphabet of 10 characters. It is Gronsfeld's cipher that became widely used throughout Germany and Europe, despite its weaknesses.
Vigenère actually invented a stronger cipher, anautokey cipher. The name "Vigenère cipher" became associated with a simpler polyalphabetic cipher instead. In fact, the two ciphers were often confused, and both were sometimes calledle chiffre indéchiffrable. Babbage actually broke the much-stronger autokey cipher, but Kasiski is generally credited with the first published solution to the fixed-key polyalphabetic ciphers.
|
https://en.wikipedia.org/wiki/Vigen%C3%A8re_cipher
|
Adigital signatureis a mathematical scheme for verifying the authenticity of digital messages or documents. A valid digital signature on a message gives a recipient confidence that the message came from a sender known to the recipient.[1][2]
Digital signatures are a standard element of mostcryptographic protocolsuites, and are commonly used for software distribution, financial transactions,contract management software, and in other cases where it is important to detect forgery ortampering.
Digital signatures are often used to implementelectronic signatures, which include any electronic data that carries the intent of a signature,[3]but not all electronic signatures use digital signatures.[4][5]Electronic signatures have legal significance in some countries, includingBrazil,Canada,[6]South Africa,[7]Russia,[8]theUnited States,Algeria,[9]Turkey,[10]India,[11]Indonesia,Mexico,Saudi Arabia,[12]Uruguay,[13]Switzerland,Chile[14]and the countries of theEuropean Union.[15][16]
Digital signatures employasymmetric cryptography. In many instances, they provide a layer of validation and security to messages sent through a non-secure channel: Properly implemented, a digital signature gives the receiver reason to believe the message was sent by the claimed sender. Digital signatures are equivalent to traditional handwritten signatures in many respects, but properly implemented digital signatures are more difficult to forge than the handwritten type. Digital signature schemes, in the sense used here, are cryptographically based, and must be implemented properly to be effective. They can also providenon-repudiation, meaning that the signer cannot successfully claim they did not sign a message, while also claiming theirprivate keyremains secret.[17]Further, some non-repudiation schemes offer a timestamp for the digital signature, so that even if the private key is exposed, the signature is valid.[18][19]Digitally signed messages may be anything representable as abitstring: examples include electronic mail, contracts, or a message sent via some other cryptographic protocol.
A digital signature scheme typically consists of three algorithms:
Two main properties are required:
First, the authenticity of a signature generated from a fixed message and fixed private key can be verified by using the corresponding public key.
Secondly, it should be computationally infeasible to generate a valid signature for a party without knowing that party's private key.
A digital signature is an authentication mechanism that enables the creator of the message to attach a code that acts as a signature.
TheDigital Signature Algorithm(DSA), developed by theNational Institute of Standards and Technology, is one ofmany examplesof a signing algorithm.
In the following discussion, 1^n refers to a unary number (the security parameter n written as a string of n ones).
Formally, adigital signature schemeis a triple of probabilistic polynomial time algorithms, (G,S,V), satisfying:
For correctness,SandVmust satisfy
A digital signature scheme issecureif for every non-uniform probabilistic polynomial timeadversary,A
whereAS(sk, · )denotes thatAhas access to theoracle,S(sk, · ),Qdenotes the set of the queries onSmade byA, which knows the public key,pk, and the security parameter,n, andx∉Qdenotes that the adversary may not directly query the string,x, onS.[20][21]
In 1976,Whitfield DiffieandMartin Hellmanfirst described the notion of a digital signature scheme, although they only conjectured that such schemes existed based on functions that are trapdoor one-way permutations.[22][23]Soon afterwards,Ronald Rivest,Adi Shamir, andLen Adlemaninvented theRSAalgorithm, which could be used to produce primitive digital signatures[24](although only as a proof-of-concept – "plain" RSA signatures are not secure[25]). The first widely marketed software package to offer digital signature wasLotus Notes1.0, released in 1989, which used the RSA algorithm.[26]
Other digital signature schemes were soon developed after RSA, the earliest beingLamport signatures,[27]Merkle signatures(also known as "Merkle trees" or simply "Hash trees"),[28]andRabin signatures.[29]
In 1988,Shafi Goldwasser,Silvio Micali, andRonald Rivestbecame the first to rigorously define the security requirements of digital signature schemes.[30]They described a hierarchy of attack models for signature schemes, and also presented theGMR signature scheme, the first that could be proved to prevent even an existential forgery against a chosen message attack, which is the currently accepted security definition for signature schemes.[30]The first such scheme which is not built on trapdoor functions but rather on a family of function with a much weaker required property of one-way permutation was presented byMoni NaorandMoti Yung.[31]
One digital signature scheme (of many) is based onRSA. To create signature keys, generate an RSA key pair containing a modulus,N, that is the product of two random secret distinct large primes, along with integers,eandd, such thated≡1 (modφ(N)), whereφisEuler's totient function. The signer's public key consists ofNande, and the signer's secret key containsd.
Used directly, this type of signature scheme is vulnerable to key-only existential forgery attack. To create a forgery, the attacker picks a random signature σ and uses the verification procedure to determine the message,m, corresponding to that signature.[32]In practice, however, this type of signature is not used directly, but rather, the message to be signed is firsthashedto produce a short digest, that is thenpaddedto larger width comparable toN, then signed with the reversetrapdoor function.[33]This forgery attack, then, only produces the padded hash function output that corresponds to σ, but not a message that leads to that value, which does not lead to an attack. In the random oracle model,hash-then-sign(an idealized version of that practice where hash and padding combined have close toNpossible outputs), this form of signature is existentially unforgeable, even against achosen-plaintext attack.[23][clarification needed][34]
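In practice the hash-then-sign pattern looks roughly like the following Python sketch, which uses the third-party `cryptography` package with RSA-PSS padding; the key size and message are illustrative assumptions, not a prescription.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
message = b"Transfer 100 to account 123"

# Sign: the message is hashed, padded, and run through the private-key operation.
signature = private_key.sign(
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# Verify with the public key; an InvalidSignature exception is raised on failure.
private_key.public_key().verify(
    signature,
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
```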
There are several reasons to sign such a hash (or message digest) instead of the whole document.
As organizations move away from paper documents with ink signatures or authenticity stamps, digital signatures can provide added assurances of the evidence to provenance, identity, and status of anelectronic documentas well as acknowledging informed consent and approval by a signatory. The United States Government Printing Office (GPO) publishes electronic versions of the budget, public and private laws, and congressional bills with digital signatures. Universities including Penn State,University of Chicago, and Stanford are publishing electronic student transcripts with digital signatures.
Below are some common reasons for applying a digital signature to communications:
A message may have letterhead or a handwritten signature identifying its sender, but letterheads and handwritten signatures can be copied and pasted onto forged messages.
Even legitimate messages may be modified in transit.[35]
If a bank's central office receives a letter claiming to be from a branch office with instructions to change the balance of an account, the central bankers need to be sure, before acting on the instructions, that they were actually sent by a branch banker, and not forged—whether a forger fabricated the whole letter, or just modified an existing letter in transit by adding some digits.
With a digital signature scheme, the central office can arrange beforehand to have a public key on file whose private key is known only to the branch office.
The branch office can later sign a message and the central office can use the public key to verify the signed message was not a forgery before acting on it.
A forger whodoesn'tknow the sender's private key can't sign a different message, or even change a single digit in an existing message without making the recipient's signature verification fail.[35][1][2]
Encryption can hide the content of the message from an eavesdropper, but encryption on its own may not let the recipient verify the message's authenticity, or even detect selective modifications like changing a digit—if the bank's offices simply encrypted the messages they exchange, they could still be vulnerable to forgery.
In other applications, such as software updates, the messages are not secret—when a software author publishes a patch for all existing installations of the software to apply, the patch itself is not secret, but computers running the software must verify the authenticity of the patch before applying it, lest they become victims to malware.[2]
Replays.A digital signature scheme on its own does not prevent a valid signed message from being recorded and then maliciously reused in areplay attack.
For example, the branch office may legitimately request that bank transfer be issued once in a signed message.
If the bank doesn't use a system of transaction IDs in their messages to detect which transfers have already happened, someone could illegitimately reuse the same signed message many times to drain an account.[35]
Uniqueness and malleability of signatures.A signature itself cannot be used to uniquely identify the message it signs—in some signature schemes, every message has a large number of possible valid signatures from the same signer, and it may be easy, even without knowledge of the private key, to transform one valid signature into another.[36]If signatures are misused as transaction IDs in an attempt by a bank-like system such as aBitcoinexchange to detect replays, this can be exploited to replay transactions.[37]
Authenticating a public key.Prior knowledge of apublic keycan be used to verify authenticity of asigned message, but not the other way around—prior knowledge of asigned messagecannot be used to verify authenticity of apublic key.
In some signature schemes, given a signed message, it is easy to construct a public key under which the signed message will pass verification, even without knowledge of the private key that was used to make the signed message in the first place.[38]
Non-repudiation,[15]or more specifically non-repudiation of origin, is an important aspect of digital signatures. By this property, an entity that has signed some information cannot at a later time deny having signed it. Similarly, access to the public key only does not enable a fraudulent party to fake a valid signature.
Note that these authentication, non-repudiation etc. properties rely on the secret keynot having been revokedprior to its usage. Publicrevocationof a key-pair is a required ability, else leaked secret keys would continue to implicate the claimed owner of the key-pair. Checking revocation status requires an "online" check; e.g., checking acertificate revocation listor via theOnline Certificate Status Protocol.[16]Very roughly this is analogous to a vendor who receives credit-cards first checking online with the credit-card issuer to find if a given card has been reported lost or stolen. Of course, with stolen key pairs, the theft is often discovered only after the secret key's use, e.g., to sign a bogus certificate for espionage purpose.
In their foundational paper, Goldwasser, Micali, and Rivest lay out a hierarchy of attack models against digital signatures:[30]
They also describe a hierarchy of attack results:[30]
The strongest notion of security, therefore, is security against existential forgery under an adaptive chosen message attack.
All public key / private key cryptosystems depend entirely on keeping the private key secret. A private key can be stored on a user's computer, and protected by a local password, but this has two disadvantages:
A more secure alternative is to store the private key on asmart card. Many smart cards are designed to be tamper-resistant (although some designs have been broken, notably byRoss Andersonand his students[39]). In a typical digital signature implementation, the hash calculated from the document is sent to the smart card, whose CPU signs the hash using the stored private key of the user, and then returns the signed hash. Typically, a user must activate their smart card by entering apersonal identification numberor PIN code (thus providingtwo-factor authentication). It can be arranged that the private key never leaves the smart card, although this is not always implemented. If the smart card is stolen, the thief will still need the PIN code to generate a digital signature. This reduces the security of the scheme to that of the PIN system, although it still requires an attacker to possess the card. A mitigating factor is that private keys, if generated and stored on smart cards, are usually regarded as difficult to copy, and are assumed to exist in exactly one copy. Thus, the loss of the smart card may be detected by the owner and the corresponding certificate can be immediately revoked. Private keys that are protected by software only may be easier to copy, and such compromises are far more difficult to detect.
Entering a PIN code to activate the smart card commonly requires anumeric keypad. Some card readers have their own numeric keypad. This is safer than using a card reader integrated into a PC, and then entering the PIN using that computer's keyboard. Readers with a numeric keypad are meant to circumvent the eavesdropping threat where the computer might be running akeystroke logger, potentially compromising the PIN code. Specialized card readers are also less vulnerable to tampering with their software or hardware and are oftenEAL3certified.
Smart card design is an active field, and there are smart card schemes which are intended to avoid these particular problems, despite having few security proofs so far.
One of the main differences between a digital signature and a written signature is that the user does not "see" what they sign. The user application presents a hash code to be signed by the digital signing algorithm using the private key. An attacker who gains control of the user's PC can possibly replace the user application with a foreign substitute, in effect replacing the user's own communications with those of the attacker. This could allow a malicious application to trick a user into signing any document by displaying the user's original on-screen, but presenting the attacker's own documents to the signing application.
To protect against this scenario, an authentication system can be set up between the user's application (word processor, email client, etc.) and the signing application. The general idea is to provide some means for both the user application and signing application to verify each other's integrity. For example, the signing application may require all requests to come from digitally signed binaries.
One of the main differences between a cloud-based digital signature service and a locally provided one is risk. Many risk-averse companies, including governments, financial and medical institutions, and payment processors require more secure standards, like FIPS 140-2 level 3 and FIPS 201 certification, to ensure the signature is validated and secure.
Technically speaking, a digital signature applies to a string of bits, whereas humans and applications "believe" that they sign the semantic interpretation of those bits. In order to be semantically interpreted, the bit string must be transformed into a form that is meaningful for humans and applications, and this is done through a combination of hardware and software based processes on a computer system. The problem is that the semantic interpretation of bits can change as a function of the processes used to transform the bits into semantic content. It is relatively easy to change the interpretation of a digital document by implementing changes on the computer system where the document is being processed. From a semantic perspective this creates uncertainty about what exactly has been signed. WYSIWYS (What You See Is What You Sign)[40]means that the semantic interpretation of a signed message cannot be changed. In particular this also means that a message cannot contain hidden information that the signer is unaware of, and that can be revealed after the signature has been applied. WYSIWYS is a requirement for the validity of digital signatures, but this requirement is difficult to guarantee because of the increasing complexity of modern computer systems. The term WYSIWYS was coined byPeter LandrockandTorben Pedersento describe some of the principles in delivering secure and legally binding digital signatures for Pan-European projects.[40]
An ink signature could be replicated from one document to another by copying the image manually or digitally, but to have credible signature copies that can resist some scrutiny is a significant manual or technical skill, and to produce ink signature copies that resist professional scrutiny is very difficult.
Digital signatures cryptographically bind an electronic identity to an electronic document and the digital signature cannot be copied to another document. Paper contracts sometimes have the ink signature block on the last page, and the previous pages may be replaced after a signature is applied. Digital signatures can be applied to an entire document, such that the digital signature on the last page will indicate tampering if any data on any of the pages have been altered, but this can also be achieved by signing with ink and numbering all pages of the contract.
Most digital signature schemes share the following goals regardless of cryptographic theory or legal provision:
Only if all of these conditions are met will a digital signature actually be any evidence of who sent the message, and therefore of their assent to its contents. Legal enactment cannot change this reality of the existing engineering possibilities, though some such have not reflected this actuality.
Legislatures, being importuned by businesses expecting to profit from operating a PKI, or by the technological avant-garde advocating new solutions to old problems, have enacted statutes and/or regulations in many jurisdictions authorizing, endorsing, encouraging, or permitting digital signatures and providing for (or limiting) their legal effect. The first appears to have been inUtahin the United States, followed closely by the statesMassachusettsandCalifornia. Other countries have also passed statutes or issued regulations in this area as well and the UN has had an active model law project for some time. These enactments (or proposed enactments) vary from place to place, have typically embodied expectations at variance (optimistically or pessimistically) with the state of the underlying cryptographic engineering, and have had the net effect of confusing potential users and specifiers, nearly all of whom are not cryptographically knowledgeable.
Adoption of technical standards for digital signatures has lagged behind much of the legislation, delaying a more or less unified engineering position on interoperability, algorithm choice, key lengths, and so on, for what the engineering is attempting to provide.
Some industries have established common interoperability standards for the use of digital signatures between members of the industry and with regulators. These include theAutomotive Network Exchangefor the automobile industry and the SAFE-BioPharma Association for thehealthcare industry.
In several countries, a digital signature has a status somewhat like that of a traditional pen and paper signature, as in the1999 EU digital signature directiveand2014 EU follow-on legislation.[15]Generally, these provisions mean that anything digitally signed legally binds the signer of the document to the terms therein. For that reason, it is often thought best to use separate key pairs for encrypting and signing. Using the encryption key pair, a person can engage in an encrypted conversation (e.g., regarding a real estate transaction), but the encryption does not legally sign every message he or she sends. Only when both parties come to an agreement do they sign a contract with their signing keys, and only then are they legally bound by the terms of a specific document. After signing, the document can be sent over the encrypted link. If a signing key is lost or compromised, it can be revoked to mitigate any future transactions. If an encryption key is lost, a backup orkey escrowshould be utilized to continue viewing encrypted content. Signing keys should never be backed up or escrowed unless the backup destination is securely encrypted.
|
https://en.wikipedia.org/wiki/Digital_signature
|
Public Key Cryptography Standards(PKCS) are a group ofpublic-key cryptographystandards devised and published byRSA SecurityLLC, starting in the early 1990s. The company published the standards to promote the use of the cryptography techniques for which they hadpatents, such as theRSA algorithm, theSchnorr signaturealgorithm and several others. Though notindustry standards(because the company retained control over them), some of the standards have begun to move into the "standards track" processes of relevantstandards organizationsin recent years[when?], such as theIETFand thePKIXworking group.
This container format (PKCS #12) can contain multiple embedded objects, such as multiple certificates. It is usually protected/encrypted with a password. It is usable as a format for the Java KeyStore and to establish client authentication certificates in Mozilla Firefox, and it is usable by Apache Tomcat.
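A brief Python sketch of reading such a container with the third-party `cryptography` package follows; the file name and password are hypothetical placeholders.

```python
from cryptography.hazmat.primitives.serialization import pkcs12

# Load a password-protected PKCS #12 bundle: private key, end-entity
# certificate, and any additional (chain) certificates.
with open("client.p12", "rb") as f:
    key, cert, additional_certs = pkcs12.load_key_and_certificates(
        f.read(), password=b"changeit")

print(cert.subject, len(additional_certs))
```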
|
https://en.wikipedia.org/wiki/PKCS#1
|
Incryptography,forward secrecy(FS), also known asperfect forward secrecy(PFS), is a feature of specifickey-agreement protocolsthat gives assurances thatsession keyswill not be compromised even if long-term secrets used in the session key exchange are compromised, limiting damage.[1][2][3]ForTLS, the long-term secret is typically theprivate keyof the server. Forward secrecy protects past sessions against future compromises of keys or passwords. By generating a unique session key for every session a user initiates, the compromise of a single session key will not affect any data other than that exchanged in the specific session protected by that particular key. This by itself is not sufficient for forward secrecy which additionally requires that a long-term secret compromise does not affect the security of past session keys.
Forward secrecy protects data on thetransport layerof a network that uses common transport layer security protocols, includingOpenSSL,[4]when its long-term secret keys are compromised, as with theHeartbleedsecurity bug. If forward secrecy is used, encrypted communications and sessions recorded in the past cannot be retrieved and decrypted should long-term secret keys or passwords be compromised in the future, even if the adversary actively interfered, for example via aman-in-the-middle (MITM) attack.
The value of forward secrecy is that it protects past communication. This reduces the motivation for attackers to compromise keys. For instance, if an attacker learns a long-term key, but the compromise is detected and the long-term key is revoked and updated, relatively little information is leaked in a forward secure system.
The value of forward secrecy depends on the assumed capabilities of an adversary. Forward secrecy has value if an adversary is assumed to be able to obtain secret keys from a device (read access) but is either detected or unable to modify the way session keys are generated in the device (full compromise). In some cases an adversary who can read long-term keys from a device may also be able to modify the functioning of the session key generator, as in the backdooredDual Elliptic Curve Deterministic Random Bit Generator. If an adversary can make the random number generator predictable, then past traffic will be protected but all future traffic will be compromised.
The value of forward secrecy is limited not only by the assumption that an adversary will attack a server by only stealing keys and not modifying the random number generator used by the server but it is also limited by the assumption that the adversary will only passively collect traffic on the communications link and not be active using a man-in-the-middle attack. Forward secrecy typically uses an ephemeralDiffie–Hellman key exchangeto prevent reading past traffic. The ephemeral Diffie–Hellman key exchange is often signed by the server using a static signing key. If an adversary can steal (or obtain through a court order) this static (long term) signing key, the adversary can masquerade as the server to the client and as the client to the server and implement a classic man-in-the-middle attack.[5]
The term "perfect forward secrecy" was coined by C. G. Günther in 1990[6]and further discussed byWhitfield Diffie,Paul van Oorschot, and Michael James Wiener in 1992,[7]where it was used to describe a property of the Station-to-Station protocol.[8]
Forward secrecy has also been used to describe the analogous property ofpassword-authenticated key agreementprotocols where the long-term secret is a (shared)password.[9]
In 2000 theIEEEfirst ratifiedIEEE 1363, which establishes the related one-party and two-party forward secrecy properties of various standard key agreement schemes.[10]
An encryption system has the property of forward secrecy if plain-text (decrypted) inspection of the data exchange that occurs during key agreement phase of session initiation does not reveal the key that was used to encrypt the remainder of the session.
The following is a hypothetical example of a simple instant messaging protocol that employs forward secrecy: (1) Alice and Bob each generate a pair of long-term asymmetric keys and verify each other's public keys; (2) for each message, they run a key-exchange algorithm such as Diffie–Hellman to agree on a fresh ephemeral session key, using the long-term keys from step 1 only to authenticate one another; (3) the sender encrypts the message with a symmetric cipher under the session key from step 2 and transmits it; (4) the recipient decrypts it with the same session key; (5) the process repeats from step 2 for every new message, and step 1 is never repeated.
Forward secrecy (achieved by generating new session keys for each message) ensures that past communications cannot be decrypted if one of the keys generated in an iteration of step 2 is compromised, since such a key is only used to encrypt a single message. Forward secrecy also ensures that past communications cannot be decrypted if the long-term private keys from step 1 are compromised. However, masquerading as Alice or Bob would be possible going forward if this occurred, possibly compromising all future messages.
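As a concrete illustration of how per-message ephemeral keys give forward secrecy, here is a minimal Python sketch using a toy finite-field Diffie–Hellman exchange. The prime, generator, and helper names are illustrative choices, not part of any protocol described above; a real implementation would use standardized groups or elliptic curves and would authenticate the exchange with the long-term keys from step 1.

```python
import hashlib
import secrets

# Toy parameters: 2**127 - 1 is prime but far too small (and not a safe prime)
# for real use; real deployments use standardized DH groups or elliptic curves.
P = 2**127 - 1
G = 5

def ephemeral_keypair():
    priv = secrets.randbelow(P - 2) + 2      # fresh secret exponent per message
    return priv, pow(G, priv, P)             # (private, public)

def session_key(own_priv, peer_pub):
    shared = pow(peer_pub, own_priv, P)      # g^(ab) mod p
    return hashlib.sha256(str(shared).encode()).digest()

# Both sides generate a brand-new ephemeral pair for every message, so leaking
# any one session key (or, later, a long-term signing key) reveals nothing
# about earlier messages: the earlier exponents have already been discarded.
a_priv, a_pub = ephemeral_keypair()
b_priv, b_pub = ephemeral_keypair()
assert session_key(a_priv, b_pub) == session_key(b_priv, a_pub)
```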
Forward secrecy is designed to prevent the compromise of a long-term secret key from affecting the confidentiality of past conversations. However, forward secrecy cannot defend against a successfulcryptanalysisof the underlyingciphersbeing used, since a cryptanalysis consists of finding a way to decrypt an encrypted message without the key, and forward secrecy only protects keys, not the ciphers themselves.[11]A patient attacker can capture a conversation whose confidentiality is protected through the use ofpublic-key cryptographyand wait until the underlying cipher is broken (e.g. largequantum computerscould be created which allow thediscrete logarithm problemto be computed quickly), a.k.a.harvest now, decrypt laterattacks. This would allow the recovery of old plaintexts even in a system employing forward secrecy.
Non-interactive forward-secure key exchange protocols face additional threats that are not relevant to interactive protocols. In amessage suppressionattack, an attacker in control of the network may itself store messages while preventing them from reaching the intended recipient; as the messages are never received, the corresponding private keys may not be destroyed or punctured, so a compromise of the private key can lead to successful decryption. Proactively retiring private keys on a schedule mitigates, but does not eliminate, this attack. In amalicious key exhaustionattack, the attacker sends many messages to the recipient and exhausts the private key material, forcing a protocol to choose between failing closed (and enablingdenial of serviceattacks) or failing open (and giving up some amount of forward secrecy).[12]
Most key exchange protocols areinteractive, requiring bidirectional communication between the parties. A protocol that permits the sender to transmit data without first needing to receive any replies from the recipient may be callednon-interactive, orasynchronous, orzero round trip(0-RTT).[13][14]
Interactivity is onerous for some applications—for example, in a secure messaging system, it may be desirable to have astore-and-forwardimplementation, rather than requiring sender and recipient to be online at the same time; loosening the bidirectionality requirement can also improve performance even where it is not a strict requirement, for example at connection establishment or resumption. These use cases have stimulated interest in non-interactive key exchange, and, as forward security is a desirable property in a key exchange protocol, in non-interactive forward secrecy.[15][16]This combination has been identified as desirable since at least 1996.[17]However, combining forward secrecy and non-interactivity has proven challenging;[18]it had been suspected that forward secrecy with protection againstreplay attackswas impossible non-interactively, but it has been shown to be possible to achieve all three desiderata.[14]
Broadly, two approaches to non-interactive forward secrecy have been explored,pre-computed keysandpuncturable encryption.[16]
With pre-computed keys, many key pairs are created and the public keys shared, with the private keys destroyed after a message has been received using the corresponding public key. This approach has been deployed as part of theSignal protocol.[19]
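The following is a minimal sketch of the pre-computed-keys idea (not Signal's actual implementation): the recipient publishes a batch of one-time public keys in advance and deletes each private key as soon as a message using it has been received. The class name and parameters are hypothetical.

```python
import secrets

P, G = 2**127 - 1, 5   # toy Diffie-Hellman-style parameters, for illustration only

class PrekeyStore:
    """Recipient side: publish one-time public keys, destroy private keys after use."""
    def __init__(self, n):
        self._private = {i: secrets.randbelow(P - 2) + 2 for i in range(n)}
        # The public bundle is what gets uploaded to a server for senders to fetch.
        self.public_bundle = {i: pow(G, k, P) for i, k in self._private.items()}

    def receive(self, prekey_id, sender_public):
        shared = pow(sender_public, self._private[prekey_id], P)
        del self._private[prekey_id]   # forward secrecy: this key can never decrypt again
        return shared

store = PrekeyStore(100)               # pre-compute and publish 100 one-time keys

# Sender: pick an unused prekey and contribute an ephemeral public value.
prekey_id, recipient_pub = 0, store.public_bundle[0]
eph_priv = secrets.randbelow(P - 2) + 2
shared_sender = pow(recipient_pub, eph_priv, P)
assert shared_sender == store.receive(prekey_id, pow(G, eph_priv, P))
```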
In puncturable encryption, the recipient modifies their private key after receiving a message in such a way that the new private key cannot read the message but the public key is unchanged.Ross J. Andersoninformally described a puncturable encryption scheme for forward secure key exchange in 1997,[20]andGreen & Miers (2015)formally described such a system,[21]building on the related scheme ofCanetti, Halevi & Katz (2003), which modifies the private key according to a schedule so that messages sent in previous periods cannot be read with the private key from a later period.[18]Green & Miers (2015)make use ofhierarchical identity-based encryptionandattribute-based encryption, whileGünther et al. (2017)use a different construction that can be based on any hierarchical identity-based scheme.[22]Dallmeier et al. (2020)experimentally found that modifyingQUICto use a 0-RTT forward secure and replay-resistant key exchange implemented with puncturable encryption incurred significantly increased resource usage, but not so much as to make practical use infeasible.[23]
Weak perfect forward secrecy (wPFS) is the weaker property whereby, when agents' long-term keys are compromised, the secrecy of previously established session keys is guaranteed only for sessions in which the adversary did not actively interfere. This notion, and the distinction between it and (full) forward secrecy, was introduced by Hugo Krawczyk in 2005.[24][25] By contrast, full (perfect) forward secrecy is implicitly required to maintain the secrecy of previously established session keys even in sessions where the adversary did actively interfere, or attempted to act as a man in the middle.
Forward secrecy is present in several protocol implementations, such asSSHand as an optional feature inIPsec(RFC 2412).Off-the-Record Messaging, a cryptography protocol and library for many instant messaging clients, as well asOMEMOwhich provides additional features such as multi-user functionality in such clients, both provide forward secrecy as well asdeniable encryption.
InTransport Layer Security(TLS),cipher suitesbased onDiffie–Hellmankey exchange (DHE-RSA, DHE-DSA) andelliptic curve Diffie–Hellmankey exchange (ECDHE-RSA, ECDHE-ECDSA) are available. In theory, TLS can use forward secrecy since SSLv3, but many implementations do not offer forward secrecy or provided it with lower grade encryption.[26]TLS 1.3 removed support for RSA for key exchange, leaving Diffie-Hellman (with forward-secrecy) as the sole algorithm for key exchange.[27]
OpenSSLsupports forward secrecy usingelliptic curve Diffie–Hellmansince version 1.0,[28]with a computational overhead of approximately 15% for the initial handshake.[29]
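As a hedged example of opting into forward-secret cipher suites, the following sketch configures a server-side TLS context with Python's ssl module (built on OpenSSL). The certificate file names are placeholders, and the cipher string is one plausible choice rather than a recommendation.

```python
import ssl

# Server-side context; certificate paths are placeholders.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
ctx.load_cert_chain(certfile="server.crt", keyfile="server.key")

# Offer only ephemeral (ECDHE) key exchange for TLS 1.2 connections.
# TLS 1.3 suites are forward-secret by construction and are negotiated separately.
ctx.set_ciphers("ECDHE+AESGCM")
```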
TheSignal Protocoluses theDouble Ratchet Algorithmto provide forward secrecy.[30]
On the other hand, among popular protocols currently in use,WPA Personaldid not support forward secrecy before WPA3.[31]
Since late 2011, Google has provided forward secrecy with TLS by default to users of its Gmail service, Google Docs service, and encrypted search services.[28] Since November 2013, Twitter has provided forward secrecy with TLS to its users.[32] Wikis hosted by the Wikimedia Foundation have all provided forward secrecy to users since July 2014[33] and have required the use of forward secrecy since August 2018.
Facebook reported as part of an investigation into email encryption that, as of May 2014, 74% of hosts that supportSTARTTLSalso provide forward secrecy.[34]TLS 1.3, published in August 2018, dropped support for ciphers without forward secrecy. As of February 2019[update], 96.6% of web servers surveyed support some form of forward secrecy, and 52.1% will use forward secrecy with most browsers.[35]
At WWDC 2016, Apple announced that all iOS apps would need to use App Transport Security (ATS), a feature which enforces the use of HTTPS transmission. Specifically, ATS requires the use of an encryption cipher that provides forward secrecy.[36]ATS became mandatory for apps on January 1, 2017.[37]
TheSignalmessaging application employs forward secrecy in its protocol, notably differentiating it from messaging protocols based onPGP.[38]
Forward secrecy is supported on 92.6% of websites on modern browsers, while 0.3% of websites do not support forward secrecy at all as of May 2024.[39]
|
https://en.wikipedia.org/wiki/Forward_secrecy
|
Incryptography, asubstitution cipheris a method ofencryptingin which units ofplaintextare replaced with theciphertext, in a defined manner, with the help of a key; the "units" may be single letters (the most common), pairs of letters, triplets of letters, mixtures of the above, and so forth. The receiver deciphers the text by performing the inverse substitution process to extract the original message.
Substitution ciphers can be compared withtransposition ciphers. In a transposition cipher, the units of the plaintext are rearranged in a different and usually quite complex order, but the units themselves are left unchanged. By contrast, in a substitution cipher, the units of the plaintext are retained in the same sequence in the ciphertext, but the units themselves are altered.
There are a number of different types of substitution cipher. If the cipher operates on single letters, it is termed asimple substitution cipher; a cipher that operates on larger groups of letters is termedpolygraphic. Amonoalphabetic cipheruses fixed substitution over the entire message, whereas apolyalphabetic cipheruses a number of substitutions at different positions in the message, where a unit from the plaintext is mapped to one of several possibilities in the ciphertext and vice versa.
The first ever published description of how to crack simple substitution ciphers was given byAl-KindiinA Manuscript on Deciphering Cryptographic Messageswritten around 850 AD. The method he described is now known asfrequency analysis.
The simplest substitution ciphers are theCaesar cipherandAtbash cipher. Here single letters are substituted (referred to assimple substitution). It can be demonstrated by writing out the alphabet twice, once in regular order and again with the letters shifted by some number of steps or reversed to represent theciphertext alphabet(or substitution alphabet).
The substitution alphabet could also be scrambled in a more complex fashion, in which case it is called amixed alphabetorderanged alphabet. Traditionally, mixed alphabets may be created by first writing out a keyword, removing repeated letters in it, then writing all the remaining letters in the alphabet in the usual order.
Using this system, the keyword "zebras" gives the ciphertext alphabet ZEBRASCDFGHIJKLMNOPQTUVWXY (the keyword followed by the unused letters in alphabetical order), and the keyword "grandmother" (with its repeated letters removed) gives GRANDMOTHEBCFIJKLPQSUVWXYZ. A message such as "flee at once, we are discovered" is then enciphered letter by letter by replacing each plaintext letter with the ciphertext letter at the same alphabet position, and the same message produces a different ciphertext under each keyword.
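A short sketch of this keyword construction, assuming the standard rule just described (keyword with repeats removed, then the unused letters in order); the function names are illustrative.

```python
import string

def mixed_alphabet(keyword):
    """Keyword letters first (repeats removed), then the rest of the alphabet."""
    seen = []
    for ch in keyword.upper() + string.ascii_uppercase:
        if ch in string.ascii_uppercase and ch not in seen:
            seen.append(ch)
    return "".join(seen)            # e.g. "ZEBRAS" -> "ZEBRASCDFGHIJKLMNOPQTUVWXY"

def encipher(plaintext, keyword):
    cipher_alphabet = mixed_alphabet(keyword)
    table = str.maketrans(string.ascii_uppercase, cipher_alphabet)
    return plaintext.upper().translate(table)

print(encipher("flee at once. we are discovered!", "zebras"))
print(encipher("flee at once. we are discovered!", "grandmother"))
```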
Usually the ciphertext is written out in blocks of fixed length, omitting punctuation and spaces; this is done to disguise word boundaries from theplaintextand to help avoid transmission errors. These blocks are called "groups", and sometimes a "group count" (i.e. the number of groups) is given as an additional check. Five-letter groups are often used, dating from when messages used to be transmitted bytelegraph:
If the length of the message happens not to be divisible by five, it may be padded at the end with "nulls". These can be any characters that decrypt to obvious nonsense, so that the receiver can easily spot them and discard them.
The ciphertext alphabet is sometimes different from the plaintext alphabet; for example, in thepigpen cipher, the ciphertext consists of a set of symbols derived from a grid. For example:
Such features make little difference to the security of a scheme, however – at the very least, any set of strange symbols can be transcribed back into an A-Z alphabet and dealt with as normal.
In lists and catalogues for salespeople, a very simple encryption is sometimes used to replace numeric digits with letters, typically by assigning the digits 1 through 9 and 0, in order, to the letters of a ten-letter keyword; the examples below are consistent with the keyword MAKEPROFIT. Examples: MAT would be used to represent 120, PAPR would be used for 5256, and OFTK would be used for 7803.
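A small sketch of this catalogue code, assuming the keyword MAKEPROFIT inferred above; the helper names are illustrative.

```python
# M=1, A=2, K=3, E=4, P=5, R=6, O=7, F=8, I=9, T=0
KEYWORD = "MAKEPROFIT"
ENCODE = {str((i + 1) % 10): c for i, c in enumerate(KEYWORD)}
DECODE = {c: d for d, c in ENCODE.items()}

def encode_price(digits):
    return "".join(ENCODE[d] for d in digits)

def decode_price(letters):
    return "".join(DECODE[c] for c in letters)

assert encode_price("120") == "MAT"
assert decode_price("OFTK") == "7803"
```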
Although the traditional keyword method for creating a mixed substitution alphabet is simple, a serious disadvantage is that the last letters of the alphabet (which are mostly low frequency) tend to stay at the end. A stronger way of constructing a mixed alphabet is to generate the substitution alphabet completely randomly.
Although the number of possible substitution alphabets is very large (26! ≈ 2^88.4, or about 88 bits), this cipher is not very strong, and is easily broken. Provided the message is of reasonable length (see below), the cryptanalyst can deduce the probable meaning of the most common symbols by analyzing the frequency distribution of the ciphertext. This allows formation of partial words, which can be tentatively filled in, progressively expanding the (partial) solution (see frequency analysis for a demonstration of this). In some cases, underlying words can also be determined from the pattern of their letters; for example, the English words tater, ninth, and paper all have the pattern ABACD. Many people solve such ciphers for recreation, as with cryptogram puzzles in the newspaper.
According to theunicity distanceofEnglish, 27.6 letters of ciphertext are required to crack a mixed alphabet simple substitution. In practice, typically about 50 letters are needed, although some messages can be broken with fewer if unusual patterns are found. In other cases, the plaintext can be contrived to have a nearly flat frequency distribution, and much longer plaintexts will then be required by the cryptanalyst.
One once-common variant of the substitution cipher is thenomenclator. Named after the public official who announced the titles of visiting dignitaries, thiscipheruses a smallcodesheet containing letter, syllable and word substitution tables, sometimes homophonic, that typically converted symbols into numbers. Originally the code portion was restricted to the names of important people, hence the name of the cipher; in later years, it covered many common words and place names as well. The symbols for whole words (codewordsin modern parlance) and letters (cipherin modern parlance) were not distinguished in the ciphertext. TheRossignols'Great Cipherused byLouis XIV of Francewas one.
Nomenclators were the standard fare ofdiplomaticcorrespondence,espionage, and advanced politicalconspiracyfrom the early fifteenth century to the late eighteenth century; most conspirators were and have remained less cryptographically sophisticated. Althoughgovernmentintelligencecryptanalystswere systematically breaking nomenclators by the mid-sixteenth century, and superior systems had been available since 1467, the usual response tocryptanalysiswas simply to make the tables larger. By the late eighteenth century, when the system was beginning to die out, some nomenclators had 50,000 symbols.[citation needed]
Nevertheless, not all nomenclators were broken; today, cryptanalysis of archived ciphertexts remains a fruitful area ofhistorical research.
An early attempt to increase the difficulty of frequency analysis attacks on substitution ciphers was to disguise plaintext letter frequencies byhomophony. In these ciphers, plaintext letters map to more than one ciphertext symbol. Usually, the highest-frequency plaintext symbols are given more equivalents than lower frequency letters. In this way, the frequency distribution is flattened, making analysis more difficult.
Since more than 26 characters will be required in the ciphertext alphabet, various solutions are employed to invent larger alphabets. Perhaps the simplest is to use a numeric substitution 'alphabet'. Another method consists of simple variations on the existing alphabet; uppercase, lowercase, upside down, etc. More artistically, though not necessarily more securely, some homophonic ciphers employed wholly invented alphabets of fanciful symbols.
Thebook cipheris a type of homophonic cipher, one example being theBeale ciphers. This is a story of buried treasure that was described in 1819–21 by use of a ciphered text that was keyed to the Declaration of Independence. Here each ciphertext character was represented by a number. The number was determined by taking the plaintext character and finding a word in the Declaration of Independence that started with that character and using the numerical position of that word in the Declaration of Independence as the encrypted form of that letter. Since many words in the Declaration of Independence start with the same letter, the encryption of that character could be any of the numbers associated with the words in the Declaration of Independence that start with that letter. Deciphering the encrypted text characterX(which is a number) is as simple as looking up the Xth word of the Declaration of Independence and using the first letter of that word as the decrypted character.
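A rough sketch of the book-cipher idea described above, using an arbitrary key text; the helper names are illustrative, and a real key text would need to contain words starting with every letter used in the plaintext.

```python
import random
from collections import defaultdict

def build_index(key_text):
    """Map each initial letter to the 1-based positions of words starting with it."""
    index = defaultdict(list)
    for pos, word in enumerate(key_text.split(), start=1):
        index[word[0].lower()].append(pos)
    return index

def encipher(plaintext, key_text):
    index = build_index(key_text)
    # Letters with no matching word in the key text are silently dropped here.
    return [random.choice(index[c]) for c in plaintext.lower() if c in index]

def decipher(numbers, key_text):
    words = key_text.split()
    return "".join(words[n - 1][0].lower() for n in numbers)

key = "when in the course of human events it becomes necessary for one people"
ct = encipher("cohen", key)      # e.g. [4, 12, 6, 7, 10] -- varies per run
print(decipher(ct, key))         # -> "cohen"
```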
Another homophonic cipher was described by Stahl[2][3]and was one of the first[citation needed]attempts to provide for computer security of data systems in computers through encryption. Stahl constructed the cipher in such a way that the number of homophones for a given character was in proportion to the frequency of the character, thus making frequency analysis much more difficult.
Francesco I Gonzaga,Duke of Mantua, used the earliest known example of a homophonic substitution cipher in 1401 for correspondence with one Simone de Crema.[4][5]
Mary, Queen of Scots, while imprisoned by Elizabeth I, during the years from 1578 to 1584 used homophonic ciphers with additional encryption using a nomenclator for frequent prefixes, suffixes, and proper names while communicating with her allies includingMichel de Castelnau.[6]
The work ofAl-Qalqashandi(1355-1418), based on the earlier work ofIbn al-Durayhim(1312–1359), contained the first published discussion of the substitution and transposition of ciphers, as well as the first description of a polyalphabetic cipher, in which each plaintext letter is assigned more than one substitute.[7]Polyalphabetic substitution ciphers were later described in 1467 byLeone Battista Albertiin the form of disks.Johannes Trithemius, in his bookSteganographia(Ancient Greekfor "hidden writing") introduced the now more standard form of atableau(see below; ca. 1500 but not published until much later). A more sophisticated version using mixed alphabets was described in 1563 byGiovanni Battista della Portain his book,De Furtivis Literarum Notis(Latinfor "On concealed characters in writing").
In a polyalphabetic cipher, multiple cipher alphabets are used. To facilitate encryption, all the alphabets are usually written out in a largetable, traditionally called atableau. The tableau is usually 26×26, so that 26 full ciphertext alphabets are available. The method of filling the tableau, and of choosing which alphabet to use next, defines the particular polyalphabetic cipher. All such ciphers are easier to break than once believed, as substitution alphabets are repeated for sufficiently large plaintexts.
One of the most popular was that ofBlaise de Vigenère. First published in 1585, it was considered unbreakable until 1863, and indeed was commonly calledle chiffre indéchiffrable(Frenchfor "indecipherable cipher").
In theVigenère cipher, the first row of the tableau is filled out with a copy of the plaintext alphabet, and successive rows are simply shifted one place to the left. (Such a simple tableau is called atabula recta, and mathematically corresponds to adding the plaintext and key letters,modulo26.) A keyword is then used to choose which ciphertext alphabet to use. Each letter of the keyword is used in turn, and then they are repeated again from the beginning. So if the keyword is 'CAT', the first letter of plaintext is enciphered under alphabet 'C', the second under 'A', the third under 'T', the fourth under 'C' again, and so on, or if the keyword is 'RISE', the first letter of plaintext is enciphered under alphabet 'R', the second under 'I', the third under 'S', the fourth under 'E', and so on. In practice, Vigenère keys were often phrases several words long.
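A minimal sketch of the Vigenère rule just described (ciphertext = plaintext + key, modulo 26, with the keyword repeating). Passing non-letters through without consuming a key letter is an implementation choice here, not part of the historical cipher.

```python
from itertools import cycle

def vigenere(text, keyword, decrypt=False):
    out, key = [], cycle(keyword.upper())
    for ch in text.upper():
        if not ch.isalpha():
            out.append(ch)                  # pass spaces/punctuation through
            continue
        k = ord(next(key)) - ord("A")       # shift given by the current key letter
        shift = -k if decrypt else k
        out.append(chr((ord(ch) - ord("A") + shift) % 26 + ord("A")))
    return "".join(out)

ct = vigenere("attack at dawn", "CAT")
assert vigenere(ct, "CAT", decrypt=True) == "ATTACK AT DAWN"
```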
In 1863,Friedrich Kasiskipublished a method (probably discovered secretly and independently before theCrimean WarbyCharles Babbage) which enabled the calculation of the length of the keyword in a Vigenère ciphered message. Once this was done, ciphertext letters that had been enciphered under the same alphabet could be picked out and attacked separately as a number of semi-independent simple substitutions - complicated by the fact that within one alphabet letters were separated and did not form complete words, but simplified by the fact that usually atabula rectahad been employed.
As such, even today a Vigenère type cipher should theoretically be difficult to break if mixed alphabets are used in the tableau, if the keyword is random, and if the total length of ciphertext is less than 27.67 times the length of the keyword.[8] These requirements are rarely met in practice, so the security of Vigenère-enciphered messages is usually lower than it might otherwise have been.
Other notable polyalphabetics include:
Modernstream cipherscan also be seen, from a sufficiently abstract perspective, to be a form of polyalphabetic cipher in which all the effort has gone into making thekeystreamas long and unpredictable as possible.
In a polygraphic substitution cipher, plaintext letters are substituted in larger groups, instead of substituting letters individually. The first advantage is that the frequency distribution is much flatter than that of individual letters (though not actually flat in real languages; for example, 'OS' is much more common than 'RÑ' in Spanish). Second, the larger number of symbols requires correspondingly more ciphertext to productively analyze letter frequencies.
To substitute pairs of letters would take a substitution alphabet 676 symbols long (26² = 676). In the same De Furtivis Literarum Notis mentioned above, della Porta actually proposed such a system, with a 20 × 20 tableau (for the 20 letters of the Italian/Latin alphabet he was using) filled with 400 unique glyphs. However the system was impractical and probably never actually used.
The earliest practicaldigraphic cipher(pairwise substitution), was the so-calledPlayfair cipher, invented by SirCharles Wheatstonein 1854. In this cipher, a 5 x 5 grid is filled with the letters of a mixed alphabet (two letters, usually I and J, are combined). A digraphic substitution is then simulated by taking pairs of letters as two corners of a rectangle, and using the other two corners as the ciphertext (see thePlayfair ciphermain article for a diagram). Special rules handle double letters and pairs falling in the same row or column. Playfair was in military use from theBoer WarthroughWorld War II.
Several other practical polygraphics were introduced in 1901 byFelix Delastelle, including thebifidandfour-square ciphers(both digraphic) and thetrifid cipher(probably the first practical trigraphic).
The Hill cipher, invented in 1929 by Lester S. Hill, is a polygraphic substitution which can combine much larger groups of letters simultaneously using linear algebra. Each letter is treated as a digit in base 26: A = 0, B = 1, and so on. (In a variation, 3 extra symbols are added to make the basis prime.) A block of n letters is then considered as a vector of n dimensions, and multiplied by an n × n matrix, modulo 26. The components of the matrix are the key, and should be random, provided that the matrix is invertible modulo 26 (to ensure decryption is possible). A mechanical version of the Hill cipher of dimension 6 was patented in 1929.[9]
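A minimal 2 × 2 Hill cipher sketch over the integers modulo 26; the key matrix shown is an arbitrary invertible example, and pow(det, -1, 26) (Python 3.8+) supplies the modular inverse of the determinant.

```python
A_ORD = ord("A")

def encrypt_block(pair, K):
    """Encrypt a two-letter block: multiply the letter vector by K modulo 26."""
    x, y = (ord(c) - A_ORD for c in pair)
    return (chr((K[0][0]*x + K[0][1]*y) % 26 + A_ORD) +
            chr((K[1][0]*x + K[1][1]*y) % 26 + A_ORD))

def inverse_key(K):
    """2x2 matrix inverse mod 26; exists only when det(K) is coprime to 26."""
    det = (K[0][0]*K[1][1] - K[0][1]*K[1][0]) % 26
    det_inv = pow(det, -1, 26)               # raises ValueError if gcd(det, 26) != 1
    return [[( K[1][1]*det_inv) % 26, (-K[0][1]*det_inv) % 26],
            [(-K[1][0]*det_inv) % 26, ( K[0][0]*det_inv) % 26]]

K = [[3, 3], [2, 5]]          # det = 9, which is invertible mod 26
ct = encrypt_block("HI", K)   # -> "TC"
assert encrypt_block(ct, inverse_key(K)) == "HI"
```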
The Hill cipher is vulnerable to aknown-plaintext attackbecause it is completelylinear, so it must be combined with somenon-linearstep to defeat this attack. The combination of wider and wider weak, lineardiffusivesteps like a Hill cipher, with non-linear substitution steps, ultimately leads to asubstitution–permutation network(e.g. aFeistel cipher), so it is possible – from this extreme perspective – to consider modernblock ciphersas a type of polygraphic substitution.
Between aroundWorld War Iand the widespread availability ofcomputers(for some governments this was approximately the 1950s or 1960s; for other organizations it was a decade or more later; for individuals it was no earlier than 1975), mechanical implementations of polyalphabetic substitution ciphers were widely used. Several inventors had similar ideas about the same time, androtor cipher machineswere patented four times in 1919. The most important of the resulting machines was theEnigma, especially in the versions used by theGerman militaryfrom approximately 1930. TheAlliesalso developed and used rotor machines (e.g.,SIGABAandTypex).
All of these were similar in that the substituted letter was chosen electrically from amongst the huge number of possible combinations resulting from the rotation of several letter disks. Since one or more of the disks rotated mechanically with each plaintext letter enciphered, the number of alphabets used was astronomical. Early versions of these machines were, nevertheless, breakable. William F. Friedman of the US Army's SIS early found vulnerabilities in Hebern's rotor machine, and the Government Code and Cypher School's Dillwyn Knox solved versions of the Enigma machine (those without the "plugboard") well before WWII began. Traffic protected by essentially all of the German military Enigmas was broken by Allied cryptanalysts, most notably those at Bletchley Park, beginning with the German Army variant used in the early 1930s. This version was broken by inspired mathematical insight by Marian Rejewski in Poland.
As far as is publicly known, no messages protected by theSIGABAandTypexmachines were ever broken during or near the time when these systems were in service.
One type of substitution cipher, theone-time pad, is unique. It was invented near the end of World War I byGilbert VernamandJoseph Mauborgnein the US. It was mathematically proven unbreakable byClaude Shannon, probably duringWorld War II; his work was first published in the late 1940s. In its most common implementation, the one-time pad can be called a substitution cipher only from an unusual perspective; typically, the plaintext letter is combined (not substituted) in some manner (e.g.,XOR) with the key material character at that position.
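A minimal byte-level sketch of the XOR form of the one-time pad; secrets.token_bytes stands in for a truly random pad, which in practice must be generated, distributed, and destroyed under the strict conditions discussed in the following paragraph.

```python
import secrets

def otp_xor(data: bytes, pad: bytes) -> bytes:
    """Ciphertext = plaintext XOR pad; applying the same pad again decrypts."""
    assert len(pad) >= len(data)
    return bytes(d ^ k for d, k in zip(data, pad))

message = b"ATTACK AT DAWN"
pad = secrets.token_bytes(len(message))    # as long as the message; never reuse
ciphertext = otp_xor(message, pad)
assert otp_xor(ciphertext, pad) == message  # (p ^ k) ^ k == p
```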
The one-time pad is, in most cases, impractical as it requires that the key material be as long as the plaintext,actuallyrandom, used once andonlyonce, and kept entirely secret from all except the sender and intended receiver. When these conditions are violated, even marginally, the one-time pad is no longer unbreakable.Sovietone-time pad messages sent from the US for a brief time during World War II usednon-randomkey material. US cryptanalysts, beginning in the late 40s, were able to, entirely or partially, break a few thousand messages out of several hundred thousand. (SeeVenona project)
In a mechanical implementation, rather like theRockexequipment, the one-time pad was used for messages sent on theMoscow-Washingtonhot lineestablished after theCuban Missile Crisis.
Substitution ciphers as discussed above, especially the older pencil-and-paper hand ciphers, are no longer in serious use. However, the cryptographic concept of substitution carries on even today. From an abstract perspective, modern bit-orientedblock ciphers(e.g.,DES, orAES) can be viewed as substitution ciphers on a largebinaryalphabet. In addition, block ciphers often include smaller substitution tables calledS-boxes. See alsosubstitution–permutation network.
|
https://en.wikipedia.org/wiki/Substitution_cipher
|
Incryptanalysis,frequency analysis(also known ascounting letters) is the study of thefrequency of lettersor groups of letters in aciphertext. The method is used as an aid to breakingclassical ciphers.
Frequency analysis is based on the fact that, in any given stretch of written language, certain letters and combinations of letters occur with varying frequencies. Moreover, there is a characteristic distribution of letters that is roughly the same for almost all samples of that language. For instance, given a section ofEnglish language,E,T,AandOare the most common, whileZ,Q,XandJare rare. Likewise,TH,ER,ON, andANare the most common pairs of letters (termedbigramsordigraphs), andSS,EE,TT, andFFare the most common repeats.[1]The nonsense phrase "ETAOIN SHRDLU" represents the 12 most frequent letters in typical English language text.
In some ciphers, such properties of the natural language plaintext are preserved in the ciphertext, and these patterns have the potential to be exploited in aciphertext-only attack.
In a simplesubstitution cipher, each letter of theplaintextis replaced with another, and any particular letter in the plaintext will always be transformed into the same letter in the ciphertext. For instance, if all occurrences of the lettereturn into the letterX, a ciphertext message containing numerous instances of the letterXwould suggest to a cryptanalyst thatXrepresentse.
The basic use of frequency analysis is to first count the frequency of ciphertext letters and then associate guessed plaintext letters with them. MoreXs in the ciphertext than anything else suggests thatXcorresponds toein the plaintext, but this is not certain;tandaare also very common in English, soXmight be either of them. It is unlikely to be a plaintextzorq, which are less common. Thus the cryptanalyst may need to try several combinations of mappings between ciphertext and plaintext letters.
More complex use of statistics can be conceived, such as considering counts of pairs of letters (bigrams), triplets (trigrams), and so on. This is done to provide more information to the cryptanalyst, for instance,QandUnearly always occur together in that order in English, even thoughQitself is rare.
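A small sketch of this counting step; the sample ciphertext is simply a Caesar-shifted English pangram used for illustration.

```python
from collections import Counter

def frequencies(ciphertext):
    """Return the five most common single letters and bigrams in the ciphertext."""
    letters = [c for c in ciphertext.upper() if c.isalpha()]
    singles = Counter(letters)
    bigrams = Counter("".join(pair) for pair in zip(letters, letters[1:]))
    return singles.most_common(5), bigrams.most_common(5)

# "THE QUICK BROWN FOX JUMPS OVER THE LAZY DOG" shifted by +4 (Caesar).
sample = "XLI UYMGO FVSAR JSB NYQTW SZIV XLI PEDC HSK"
print(frequencies(sample))
```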
SupposeEvehas intercepted thecryptogrambelow, and it is known to be encrypted using a simple substitution cipher:
For this example, uppercase letters are used to denote ciphertext, lowercase letters are used to denote plaintext (or guesses at such), andX~tis used to express a guess that ciphertext letterXrepresents the plaintext lettert.
Eve could use frequency analysis to help solve the message along the following lines: counts of the letters in the cryptogram show thatIis the most common single letter,[2]XLmost commonbigram, andXLIis the most commontrigram.eis the most common letter in the English language,this the most common bigram, andtheis the most common trigram. This strongly suggests thatX~t,L~handI~e. The second most common letter in the cryptogram isE; since the first and second most frequent letters in the English language,eandtare accounted for, Eve guesses thatE~a, the third most frequent letter. Tentatively making these assumptions, the following partial decrypted message is obtained.
Using these initial guesses, Eve can spot patterns that confirm her choices, such as "that". Moreover, other patterns suggest further guesses. "Rtate" might be "state", which would meanR~s. Similarly "atthattMZe" could be guessed as "atthattime", yieldingM~iandZ~m. Furthermore, "heVe" might be "here", givingV~r. Filling in these guesses, Eve gets:
In turn, these guesses suggest still others (for example, "remarA" could be "remark", implyingA~k) and so on, and it is relatively straightforward to deduce the rest of the letters, eventually yielding the plaintext.
At this point, it would be a good idea for Eve to insert spaces and punctuation:
In this example from "The Gold-Bug", Eve's guesses were all correct. This would not always be the case, however; the variation in statistics for individual plaintexts can mean that initial guesses are incorrect. It may be necessary tobacktrackincorrect guesses or to analyze the available statistics in much more depth than the somewhat simplified justifications given in the above example.
It is possible that the plaintext does not exhibit the expected distribution of letter frequencies. Shorter messages are likely to show more variation. It is also possible to construct artificially skewed texts. For example, entire novels have been written that omit the letterealtogether — a form of literature known as alipogram.
The first known recorded explanation of frequency analysis (indeed, of any kind of cryptanalysis) was given in the 9th century byAl-Kindi, anArabpolymath, inA Manuscript on Deciphering Cryptographic Messages.[3]It has been suggested that a close textual study of theQur'anfirst brought to light thatArabichas a characteristic letter frequency.[4]Its use spread, and similar systems were widely used in European states by the time of theRenaissance. By 1474,Cicco Simonettahad written a manual on deciphering encryptions ofLatinandItaliantext.[5]
Several schemes were invented by cryptographers to defeat this weakness in simple substitution encryptions. These included:
A disadvantage of all these attempts to defeat frequency counting attacks is that it increases complication of both enciphering and deciphering, leading to mistakes. Famously, a British Foreign Secretary is said to have rejected the Playfair cipher because, even if school boys could cope successfully as Wheatstone and Playfair had shown, "our attachés could never learn it!".
Therotor machinesof the first half of the 20th century (for example, theEnigma machine) were essentially immune to straightforward frequency analysis.
However, other kinds of analysis ("attacks") successfully decoded messages from some of those machines.[6]
Frequency analysis requires only a basic understanding of the statistics of the plaintext language and some problem-solving skills, and, if performed by hand, tolerance for extensive letter bookkeeping. DuringWorld War II, both theBritishand theAmericansrecruited codebreakers by placingcrosswordpuzzles in major newspapers and running contests for who could solve them the fastest. Several of the ciphers used by theAxis powerswere breakable using frequency analysis, for example, some of the consular ciphers used by the Japanese. Mechanical methods of letter counting and statistical analysis (generallyIBMcard type machinery) were first used in World War II, possibly by the US Army'sSIS. Today, the work of letter counting and analysis is done bycomputersoftware, which can carry out such analysis in seconds. With modern computing power, classical ciphers are unlikely to provide any real protection for confidential data.
Frequency analysis has been described in fiction.Edgar Allan Poe's "The Gold-Bug" andSir Arthur Conan Doyle'sSherlock Holmestale "The Adventure of the Dancing Men" are examples of stories which describe the use of frequency analysis to attack simple substitution ciphers. The cipher in the Poe story is encrusted with several deception measures, but this is more a literary device than anything significant cryptographically.
|
https://en.wikipedia.org/wiki/Frequency_analysis
|
Noncemay refer to:
|
https://en.wikipedia.org/wiki/Nonce
|
In algebra, a unit or invertible element[a] of a ring is an invertible element for the multiplication of the ring. That is, an element u of a ring R is a unit if there exists v in R such that vu = uv = 1, where 1 is the multiplicative identity; the element v is unique for this property and is called the multiplicative inverse of u.[1][2] The set of units of R forms a group R× under multiplication, called the group of units or unit group of R.[b] Other notations for the unit group are R∗, U(R), and E(R) (from the German term Einheit).
Less commonly, the termunitis sometimes used to refer to the element1of the ring, in expressions likering with a unitorunit ring, and alsounit matrix. Because of this ambiguity,1is more commonly called the "unity" or the "identity" of the ring, and the phrases "ring with unity" or a "ring with identity" may be used to emphasize that one is considering a ring instead of arng.
The multiplicative identity 1 and its additive inverse −1 are always units. More generally, any root of unity in a ring R is a unit: if rⁿ = 1, then rⁿ⁻¹ is a multiplicative inverse of r.
In anonzero ring, theelement 0is not a unit, soR×is not closed under addition.
A nonzero ringRin which every nonzero element is a unit (that is,R×=R∖ {0}) is called adivision ring(or a skew-field). A commutative division ring is called afield. For example, the unit group of the field ofreal numbersRisR∖ {0}.
In the ring ofintegersZ, the only units are1and−1.
In the ringZ/nZofintegers modulon, the units are the congruence classes(modn)represented by integerscoprimeton. They constitute themultiplicative group of integers modulon.
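A brute-force sketch of this unit group: a class is a unit exactly when its representative is coprime to n, and Python's pow(a, -1, n) (3.8+) computes the inverse.

```python
from math import gcd

def units(n):
    """Representatives of the unit group (Z/nZ)^x."""
    return [a for a in range(1, n) if gcd(a, n) == 1]

print(units(12))        # [1, 5, 7, 11]
print(pow(7, -1, 12))   # 7 is its own inverse mod 12, since 49 = 48 + 1
```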
In the ringZ[√3]obtained by adjoining thequadratic integer√3toZ, one has(2 +√3)(2 −√3) = 1, so2 +√3is a unit, and so are its powers, soZ[√3]has infinitely many units.
More generally, for the ring of integers R in a number field F, Dirichlet's unit theorem states that R× is isomorphic to the group Zⁿ × μ_R, where μ_R is the (finite, cyclic) group of roots of unity in R and n, the rank of the unit group, is n = r₁ + r₂ − 1, where r₁ and r₂ are the number of real embeddings and the number of pairs of complex embeddings of F, respectively.
This recovers the Z[√3] example: the unit group of (the ring of integers of) a real quadratic field is infinite of rank 1, since r₁ = 2 and r₂ = 0.
For a commutative ring R, the units of the polynomial ring R[x] are the polynomials p(x) = a₀ + a₁x + ⋯ + aₙxⁿ such that a₀ is a unit in R and the remaining coefficients a₁, …, aₙ are nilpotent, i.e., some power of each of them is zero.[4] In particular, if R is a domain (or more generally reduced), then the units of R[x] are the units of R.
The units of the power series ring R[[x]] are the power series p(x) = a₀ + a₁x + a₂x² + ⋯ such that a₀ is a unit in R.[5]
The unit group of the ringMn(R)ofn×nmatricesover a ringRis the groupGLn(R)ofinvertible matrices. For a commutative ringR, an elementAofMn(R)is invertible if and only if thedeterminantofAis invertible inR. In that case,A−1can be given explicitly in terms of theadjugate matrix.
For elements x and y in a ring R, if 1 − xy is invertible, then 1 − yx is invertible with inverse 1 + y(1 − xy)⁻¹x;[6] this formula can be guessed, but not proved, by the following calculation in a ring of noncommutative power series: (1 − yx)⁻¹ = 1 + yx + (yx)² + ⋯ = 1 + y(1 + xy + (xy)² + ⋯)x = 1 + y(1 − xy)⁻¹x. See Hua's identity for similar results.
Acommutative ringis alocal ringifR∖R×is amaximal ideal.
As it turns out, ifR∖R×is an ideal, then it is necessarily amaximal idealandRislocalsince amaximal idealis disjoint fromR×.
IfRis afinite field, thenR×is acyclic groupof order|R| − 1.
Everyring homomorphismf:R→Sinduces agroup homomorphismR×→S×, sincefmaps units to units. In fact, the formation of the unit group defines afunctorfrom thecategory of ringsto thecategory of groups. This functor has aleft adjointwhich is the integralgroup ringconstruction.[7]
The group scheme GL₁ is isomorphic to the multiplicative group scheme G_m over any base, so for any commutative ring R, the groups GL₁(R) and G_m(R) are canonically isomorphic to U(R). Note that the functor G_m (that is, R ↦ U(R)) is representable in the sense G_m(R) ≃ Hom(Z[t, t⁻¹], R) for commutative rings R (this for instance follows from the aforementioned adjoint relation with the group ring construction). Explicitly this means that there is a natural bijection between the set of ring homomorphisms Z[t, t⁻¹] → R and the set of unit elements of R (in contrast, Z[t] represents the additive group G_a, the forgetful functor from the category of commutative rings to the category of abelian groups).
Suppose thatRis commutative. ElementsrandsofRare calledassociateif there exists a unituinRsuch thatr=us; then writer~s. In any ring, pairs ofadditive inverseelements[c]xand−xareassociate, since any ring includes the unit−1. For example, 6 and −6 are associate inZ. In general,~is anequivalence relationonR.
Associatedness can also be described in terms of theactionofR×onRvia multiplication: Two elements ofRare associate if they are in the sameR×-orbit.
In anintegral domain, the set of associates of a given nonzero element has the samecardinalityasR×.
The equivalence relation~can be viewed as any one ofGreen's semigroup relationsspecialized to the multiplicativesemigroupof a commutative ringR.
|
https://en.wikipedia.org/wiki/Group_of_units#Finite_groups_of_units
|
Ring homomorphisms
Algebraic structures
Related structures
Algebraic number theory
Noncommutative algebraic geometry
Free algebra
Clifford algebra
Inmathematics, aringis analgebraic structureconsisting of a set with twobinary operationscalledadditionandmultiplication, which obey the same basic laws asadditionandmultiplicationof integers, except that multiplication in a ring does not need to becommutative. Ringelementsmay be numbers such as integers orcomplex numbers, but they may also be non-numerical objects such aspolynomials,square matrices,functions, andpower series.
Aringmay be defined as a set that is endowed with two binary operations calledadditionandmultiplicationsuch that the ring is anabelian groupwith respect to the addition operator, and the multiplication operator isassociative, isdistributiveover the addition operation, and has a multiplicativeidentity element. (Some authors apply the termringto a further generalization, often called arng, that omits the requirement for a multiplicative identity, and instead call the structure defined above aring with identity. See§ Variations on terminology.)
Whether a ring iscommutative(that is, its multiplication is acommutative operation) has profound implications on its properties.Commutative algebra, the theory of commutative rings, is a major branch ofring theory. Its development has been greatly influenced by problems and ideas ofalgebraic number theoryandalgebraic geometry.
Examples of commutative rings include everyfield, the integers, the polynomials in one or several variables with coefficients in another ring, thecoordinate ringof anaffine algebraic variety, and thering of integersof a number field. Examples of noncommutative rings include the ring ofn×nrealsquare matriceswithn≥ 2,group ringsinrepresentation theory,operator algebrasinfunctional analysis,rings of differential operators, andcohomology ringsintopology.
The conceptualization of rings spanned the 1870s to the 1920s, with key contributions byDedekind,Hilbert,Fraenkel, andNoether. Rings were first formalized as a generalization ofDedekind domainsthat occur innumber theory, and ofpolynomial ringsand rings of invariants that occur inalgebraic geometryandinvariant theory. They later proved useful in other branches of mathematics such asgeometryandanalysis.
Rings appear in the following chain ofclass inclusions:
Aringis asetRequipped with two binary operations[a]+ (addition) and ⋅ (multiplication) satisfying the following three sets of axioms, called thering axioms:[1][2][3]
In notation, the multiplication symbol·is often omitted, in which casea·bis written asab.
In the terminology of this article, a ring is defined to have a multiplicative identity, while a structure with the same axiomatic definition but without the requirement for a multiplicative identity is instead called a "rng" (IPA:/rʊŋ/) with a missing "i". For example, the set ofeven integerswith the usual + and ⋅ is a rng, but not a ring. As explained in§ Historybelow, many authors apply the term "ring" without requiring a multiplicative identity.
Although ring addition iscommutative, ring multiplication is not required to be commutative:abneed not necessarily equalba. Rings that also satisfy commutativity for multiplication (such as the ring of integers) are calledcommutative rings. Books on commutative algebra or algebraic geometry often adopt the convention thatringmeanscommutative ring, to simplify terminology.
In a ring, multiplicative inverses are not required to exist. A nonzerocommutative ring in which every nonzero element has amultiplicative inverseis called afield.
The additive group of a ring is the underlying set equipped with only the operation of addition. Although the definition requires that the additive group be abelian, this can be inferred from the other ring axioms.[4]The proof makes use of the "1", and does not work in a rng. (For a rng, omitting the axiom of commutativity of addition leaves it inferable from the remaining rng assumptions only for elements that are products:ab+cd=cd+ab.)
There are a few authors who use the term "ring" to refer to structures in which there is no requirement for multiplication to be associative.[5]For these authors, everyalgebrais a "ring".
The most familiar example of a ring is the set of all integers Z, consisting of the numbers …, −5, −4, −3, −2, −1, 0, 1, 2, 3, 4, 5, …
The axioms of a ring were elaborated as a generalization of familiar properties of addition and multiplication of integers.
Some basic properties of a ring follow immediately from the axioms:
Equip the set Z/4Z = {0̄, 1̄, 2̄, 3̄} with addition and multiplication modulo 4.
Then Z/4Z is a ring: each axiom follows from the corresponding axiom for Z. If x is an integer, the remainder of x when divided by 4 may be considered as an element of Z/4Z, and this element is often denoted by "x mod 4" or x̄, which is consistent with the notation for 0̄, 1̄, 2̄, 3̄. The additive inverse of x̄ in Z/4Z is the class of −x; for example, −3̄ = 1̄.
Z/4Z has the subrng {0̄, 2̄}, which lacks the identity 1̄ and so is not a subring; in fact, Z/4Z has no subrings other than itself, since any subring contains 1̄ and repeated addition of 1̄ already produces every element. The same argument shows that Z/pZ for prime p has no proper subrings.
The set of 2-by-2 square matrices with entries in a field F, denoted M₂(F), consists of all 2 × 2 arrays with entries a, b, c, d drawn from F.[7][8][9][10]
With the operations of matrix addition and matrix multiplication, M₂(F) satisfies the above ring axioms. The identity matrix, with 1s on the diagonal and 0s elsewhere, is the multiplicative identity of the ring. If A is the matrix with rows (0, 1) and (1, 0), and B is the matrix with rows (0, 1) and (0, 0), then AB has rows (0, 0) and (0, 1) while BA has rows (1, 0) and (0, 0); this example shows that the ring is noncommutative.
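A quick numerical check of this example; the sketch uses NumPy, which is an assumption about available tooling rather than anything required by the text.

```python
import numpy as np

# The two matrices from the example above.
A = np.array([[0, 1], [1, 0]])
B = np.array([[0, 1], [0, 0]])

print(A @ B)    # [[0 0]
                #  [0 1]]
print(B @ A)    # [[1 0]
                #  [0 0]]  -- AB != BA, so M_2(F) is noncommutative
```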
More generally, for any ringR, commutative or not, and any nonnegative integern, the squaren×nmatrices with entries inRform a ring; seeMatrix ring.
The study of rings originated from the theory ofpolynomial ringsand the theory ofalgebraic integers.[11]In 1871,Richard Dedekinddefined the concept of the ring of integers of a number field.[12]In this context, he introduced the terms "ideal" (inspired byErnst Kummer's notion of ideal number) and "module" and studied their properties. Dedekind did not use the term "ring" and did not define the concept of a ring in a general setting.
The term "Zahlring" (number ring) was coined byDavid Hilbertin 1892 and published in 1897.[13]In 19th century German, the word "Ring" could mean "association", which is still used today in English in a limited sense (for example, spy ring),[citation needed]so if that were the etymology then it would be similar to the way "group" entered mathematics by being a non-technical word for "collection of related things". According to Harvey Cohn, Hilbert used the term for a ring that had the property of "circling directly back" to an element of itself (in the sense of anequivalence).[14]Specifically, in a ring of algebraic integers, all high powers of an algebraic integer can be written as an integral combination of a fixed set of lower powers, and thus the powers "cycle back". For instance, ifa3− 4a+ 1 = 0then:
and so on; in general,anis going to be an integral linear combination of1,a, anda2.
The first axiomatic definition of a ring was given byAdolf Fraenkelin 1915,[15][16]but his axioms were stricter than those in the modern definition. For instance, he required everynon-zero-divisorto have amultiplicative inverse.[17]In 1921,Emmy Noethergave a modern axiomatic definition of commutative rings (with and without 1) and developed the foundations of commutative ring theory in her paperIdealtheorie in Ringbereichen.[18]
Fraenkel applied the term "ring" to structures with axioms that included a multiplicative identity,[19]whereas Noether applied it to structures that did not.[18]
Most or all books on algebra[20][21]up to around 1960 followed Noether's convention of not requiring a1for a "ring". Starting in the 1960s, it became increasingly common to see books including the existence of1in the definition of "ring", especially in advanced books by notable authors such as Artin,[22]Bourbaki,[23]Eisenbud,[24]and Lang.[3]There are also books published as late as 2022 that use the term without the requirement for a1.[25][26][27][28]Likewise, theEncyclopedia of Mathematicsdoes not require unit elements in rings.[29]In a research article, the authors often specify which definition of ring they use in the beginning of that article.
Gardner and Wiegandt assert that, when dealing with several objects in the category of rings (as opposed to working with a fixed ring), if one requires all rings to have a1, then some consequences include the lack of existence of infinite direct sums of rings, and that proper direct summands of rings are not subrings. They conclude that "in many, maybe most, branches of ring theory the requirement of the existence of a unity element is not sensible, and therefore unacceptable."[30]Poonenmakes the counterargument that the natural notion for rings would be thedirect productrather than the direct sum. However, his main argument is that rings without a multiplicative identity are not totally associative, in the sense that they do not contain the product of any finite sequence of ring elements, including the empty sequence.[c][31]
Authors who follow either convention for the use of the term "ring" may use one of the following terms to refer to objects satisfying the other convention:
For each nonnegative integer n, given a sequence (a₁, …, aₙ) of n elements of R, one can define the product Pₙ = a₁a₂⋯aₙ recursively: let P₀ = 1 and let Pₘ = Pₘ₋₁aₘ for 1 ≤ m ≤ n.
As a special case, one can define nonnegative integer powers of an elementaof a ring:a0= 1andan=an−1aforn≥ 1. Thenam+n=amanfor allm,n≥ 0.
A leftzero divisorof a ringRis an elementain the ring such that there exists a nonzero elementbofRsuch thatab= 0.[d]A right zero divisor is defined similarly.
Anilpotent elementis an elementasuch thatan= 0for somen> 0. One example of a nilpotent element is anilpotent matrix. A nilpotent element in anonzero ringis necessarily a zero divisor.
Anidempotente{\displaystyle e}is an element such thate2=e. One example of an idempotent element is aprojectionin linear algebra.
Aunitis an elementahaving amultiplicative inverse; in this case the inverse is unique, and is denoted bya–1. The set of units of a ring is agroupunder ring multiplication; this group is denoted byR×orR*orU(R). For example, ifRis the ring of all square matrices of sizenover a field, thenR×consists of the set of all invertible matrices of sizen, and is called thegeneral linear group.
A subsetSofRis called asubringif any one of the following equivalent conditions holds:
For example, the ring Z of integers is a subring of the field of real numbers and also a subring of the ring of polynomials Z[X] (in both cases, Z contains 1, which is the multiplicative identity of the larger rings). On the other hand, the subset of even integers 2Z does not contain the identity element 1 and thus does not qualify as a subring of Z; one could call 2Z a subrng, however.
An intersection of subrings is a subring. Given a subsetEofR, the smallest subring ofRcontainingEis the intersection of all subrings ofRcontainingE, and it is calledthe subring generated byE.
For a ringR, the smallest subring ofRis called thecharacteristic subringofR. It can be generated through addition of copies of1and−1. It is possible thatn· 1 = 1 + 1 + ... + 1(ntimes) can be zero. Ifnis the smallest positive integer such that this occurs, thennis called thecharacteristicofR. In some rings,n· 1is never zero for any positive integern, and those rings are said to havecharacteristic zero.
Given a ringR, letZ(R)denote the set of all elementsxinRsuch thatxcommutes with every element inR:xy=yxfor anyyinR. ThenZ(R)is a subring ofR, called thecenterofR. More generally, given a subsetXofR, letSbe the set of all elements inRthat commute with every element inX. ThenSis a subring ofR, called thecentralizer(or commutant) ofX. The center is the centralizer of the entire ringR. Elements or subsets of the center are said to becentralinR; they (each individually) generate a subring of the center.
Let R be a ring. A left ideal of R is a nonempty subset I of R such that for any x, y in I and r in R, the elements x + y and rx are in I. If RI denotes the R-span of I, that is, the set of finite sums r₁x₁ + ⋯ + rₙxₙ with rᵢ in R and xᵢ in I, then I is a left ideal if RI ⊆ I. Similarly, a right ideal is a subset I such that IR ⊆ I. A subset I is said to be a two-sided ideal or simply ideal if it is both a left ideal and right ideal. A one-sided or two-sided ideal is then an additive subgroup of R. If E is a subset of R, then RE is a left ideal, called the left ideal generated by E; it is the smallest left ideal containing E. Similarly, one can consider the right ideal or the two-sided ideal generated by a subset of R.
Ifxis inR, thenRxandxRare left ideals and right ideals, respectively; they are called theprincipalleft ideals and right ideals generated byx. The principal idealRxRis written as(x). For example, the set of all positive and negative multiples of2along with0form an ideal of the integers, and this ideal is generated by the integer2. In fact, every ideal of the ring of integers is principal.
Like a group, a ring is said to besimpleif it is nonzero and it has no proper nonzero two-sided ideals. A commutative simple ring is precisely a field.
Rings are often studied with special conditions set upon their ideals. For example, a ring in which there is no strictly increasing infinitechainof left ideals is called a leftNoetherian ring. A ring in which there is no strictly decreasing infinite chain of left ideals is called a leftArtinian ring. It is a somewhat surprising fact that a left Artinian ring is left Noetherian (theHopkins–Levitzki theorem). The integers, however, form a Noetherian ring which is not Artinian.
For commutative rings, the ideals generalize the classical notion of divisibility and decomposition of an integer into prime numbers in algebra. A proper idealPofRis called aprime idealif for any elementsx,y∈R{\displaystyle x,y\in R}we have thatxy∈P{\displaystyle xy\in P}implies eitherx∈P{\displaystyle x\in P}ory∈P.{\displaystyle y\in P.}Equivalently,Pis prime if for any idealsI,Jwe have thatIJ⊆Pimplies eitherI⊆PorJ⊆P. This latter formulation illustrates the idea of ideals as generalizations of elements.
A homomorphism from a ring (R, +, ⋅) to a ring (S, ‡, ∗) is a function f from R to S that preserves the ring operations; namely, such that, for all a, b in R the following identities hold: f(a + b) = f(a) ‡ f(b), f(a ⋅ b) = f(a) ∗ f(b), and f(1_R) = 1_S.
If one is working with rngs, then the third condition is dropped.
A ring homomorphismfis said to be anisomorphismif there exists an inverse homomorphism tof(that is, a ring homomorphism that is aninverse function), or equivalently if it isbijective.
Examples:
Given a ring homomorphismf:R→S, the set of all elements mapped to 0 byfis called thekerneloff. The kernel is a two-sided ideal ofR. The image off, on the other hand, is not always an ideal, but it is always a subring ofS.
To give a ring homomorphism from a commutative ring R to a ring A with image contained in the center of A is the same as to give a structure of an algebra over R to A (which, in particular, gives A the structure of an R-module).
The notion of quotient ring is analogous to the notion of a quotient group. Given a ring (R, +, ⋅) and a two-sided ideal I of (R, +, ⋅), view I as a subgroup of (R, +); then the quotient ring R/I is the set of cosets of I together with the operations (a + I) + (b + I) = (a + b) + I and (a + I)(b + I) = ab + I
for alla,binR. The ringR/Iis also called afactor ring.
As with a quotient group, there is a canonical homomorphism p : R → R/I, given by x ↦ x + I. It is surjective and satisfies the following universal property: if f : R → S is a ring homomorphism with f(I) = 0, then there is a unique ring homomorphism f̄ : R/I → S such that f = f̄ ∘ p.
For any ring homomorphismf:R→S, invoking the universal property withI= kerfproduces a homomorphismf¯:R/kerf→S{\displaystyle {\overline {f}}:R/\ker f\to S}that gives an isomorphism fromR/ kerfto the image off.
The concept of a module over a ring generalizes the concept of a vector space (over a field) by generalizing from multiplication of vectors with elements of a field (scalar multiplication) to multiplication with elements of a ring. More precisely, given a ring R, an R-module M is an abelian group equipped with an operation R × M → M (associating an element of M to every pair of an element of R and an element of M) that satisfies certain axioms. This operation is commonly denoted by juxtaposition and called multiplication. The axioms of modules are the following: for all a, b in R and all x, y in M, a(x + y) = ax + ay, (a + b)x = ax + bx, 1x = x, and (ab)x = a(bx).
When the ring isnoncommutativethese axioms defineleft modules;right modulesare defined similarly by writingxainstead ofax. This is not only a change of notation, as the last axiom of right modules (that isx(ab) = (xa)b) becomes(ab)x=b(ax), if left multiplication (by ring elements) is used for a right module.
Basic examples of modules are ideals, including the ring itself.
Although similarly defined, the theory of modules is much more complicated than that of vector spaces, mainly because, unlike vector spaces, modules are not characterized (up to an isomorphism) by a single invariant (the dimension of a vector space). In particular, not all modules have a basis.
The axioms of modules imply that(−1)x= −x, where the first minus denotes theadditive inversein the ring and the second minus the additive inverse in the module. Using this and denoting repeated addition by a multiplication by a positive integer allows identifying abelian groups with modules over the ring of integers.
Any ring homomorphism induces a structure of a module: if f : R → S is a ring homomorphism, then S is a left module over R by the multiplication rs = f(r)s. If R is commutative or if f(R) is contained in the center of S, the ring S is called an R-algebra. In particular, every ring is an algebra over the integers.
Let R and S be rings. Then the product R × S can be equipped with the following natural ring structure: (r1, s1) + (r2, s2) = (r1 + r2, s1 + s2) and (r1, s1) ⋅ (r2, s2) = (r1 ⋅ r2, s1 ⋅ s2)
for allr1,r2inRands1,s2inS. The ringR×Swith the above operations of addition and multiplication and the multiplicative identity(1, 1)is called thedirect productofRwithS. The same construction also works for an arbitrary family of rings: ifRiare rings indexed by a setI, then∏i∈IRi{\textstyle \prod _{i\in I}R_{i}}is a ring with componentwise addition and multiplication.
LetRbe a commutative ring anda1,⋯,an{\displaystyle {\mathfrak {a}}_{1},\cdots ,{\mathfrak {a}}_{n}}be ideals such thatai+aj=(1){\displaystyle {\mathfrak {a}}_{i}+{\mathfrak {a}}_{j}=(1)}wheneveri≠j. Then theChinese remainder theoremsays there is a canonical ring isomorphism:R/⋂i=1nai≃∏i=1nR/ai,xmod⋂i=1nai↦(xmoda1,…,xmodan).{\displaystyle R/{\textstyle \bigcap _{i=1}^{n}{{\mathfrak {a}}_{i}}}\simeq \prod _{i=1}^{n}{R/{\mathfrak {a}}_{i}},\qquad x{\bmod {\textstyle \bigcap _{i=1}^{n}{\mathfrak {a}}_{i}}}\mapsto (x{\bmod {\mathfrak {a}}}_{1},\ldots ,x{\bmod {\mathfrak {a}}}_{n}).}
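A minimal Python sketch of the statement for R = Z with pairwise coprime moduli (an editorial illustration; the function name crt is ours):

```python
# Chinese remainder theorem over Z: recover x mod (n1*...*nk) from its residues.
from math import prod

def crt(residues, moduli):
    """Return x with x ≡ residues[i] (mod moduli[i]); moduli must be pairwise coprime."""
    N = prod(moduli)
    x = 0
    for r, n in zip(residues, moduli):
        m = N // n
        x += r * m * pow(m, -1, n)   # pow(m, -1, n) is the inverse of m mod n (Python 3.8+)
    return x % N

print(crt([2, 3, 2], [3, 5, 7]))     # 23, the unique solution modulo 105
```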
A "finite" direct product may also be viewed as a direct sum of ideals.[36]Namely, letRi,1≤i≤n{\displaystyle R_{i},1\leq i\leq n}be rings,Ri→R=∏Ri{\textstyle R_{i}\to R=\prod R_{i}}the inclusions with the imagesai{\displaystyle {\mathfrak {a}}_{i}}(in particularai{\displaystyle {\mathfrak {a}}_{i}}are rings though not subrings). Thenai{\displaystyle {\mathfrak {a}}_{i}}are ideals ofRandR=a1⊕⋯⊕an,aiaj=0,i≠j,ai2⊆ai{\displaystyle R={\mathfrak {a}}_{1}\oplus \cdots \oplus {\mathfrak {a}}_{n},\quad {\mathfrak {a}}_{i}{\mathfrak {a}}_{j}=0,i\neq j,\quad {\mathfrak {a}}_{i}^{2}\subseteq {\mathfrak {a}}_{i}}as a direct sum of abelian groups (because for abelian groups finite products are the same as direct sums). Clearly the direct sum of such ideals also defines a product of rings that is isomorphic toR. Equivalently, the above can be done throughcentral idempotents. Assume thatRhas the above decomposition. Then we can write1=e1+⋯+en,ei∈ai.{\displaystyle 1=e_{1}+\cdots +e_{n},\quad e_{i}\in {\mathfrak {a}}_{i}.}By the conditions onai,{\displaystyle {\mathfrak {a}}_{i},}one has thateiare central idempotents andeiej= 0,i≠j(orthogonal). Again, one can reverse the construction. Namely, if one is given a partition of 1 in orthogonal central idempotents, then letai=Rei,{\displaystyle {\mathfrak {a}}_{i}=Re_{i},}which are two-sided ideals. If eacheiis not a sum of orthogonal central idempotents,[e]then their direct sum is isomorphic toR.
An important application of an infinite direct product is the construction of aprojective limitof rings (see below). Another application is arestricted productof a family of rings (cf.adele ring).
Given a symbol t (called a variable) and a commutative ring R, the set of polynomials R[t] = { a_n t^n + ⋯ + a_1 t + a_0 : n ≥ 0, a_i ∈ R }
forms a commutative ring with the usual addition and multiplication, containingRas a subring. It is called thepolynomial ringoverR. More generally, the setR[t1,…,tn]{\displaystyle R\left[t_{1},\ldots ,t_{n}\right]}of all polynomials in variablest1,…,tn{\displaystyle t_{1},\ldots ,t_{n}}forms a commutative ring, containingR[ti]{\displaystyle R\left[t_{i}\right]}as subrings.
IfRis anintegral domain, thenR[t]is also an integral domain; its field of fractions is the field ofrational functions. IfRis a Noetherian ring, thenR[t]is a Noetherian ring. IfRis a unique factorization domain, thenR[t]is a unique factorization domain. Finally,Ris a field if and only ifR[t]is a principal ideal domain.
Let R ⊆ S be commutative rings. Given an element x of S, one can consider the ring homomorphism R[t] → S, f ↦ f(x)
(that is, thesubstitution). IfS=R[t]andx=t, thenf(t) =f. Because of this, the polynomialfis often also denoted byf(t). The image of the mapf↦f(x){\displaystyle f\mapsto f(x)}is denoted byR[x]; it is the same thing as the subring ofSgenerated byRandx.
Example: k[t^2, t^3] denotes the image of the homomorphism k[x, y] → k[t], x ↦ t^2, y ↦ t^3.
In other words, it is the subalgebra ofk[t]generated byt2andt3.
Example: let f be a polynomial in one variable, that is, an element in a polynomial ring R. Then f(x + h) is an element in R[h] and f(x + h) − f(x) is divisible by h in that ring. The result of substituting zero for h in (f(x + h) − f(x))/h is f′(x), the derivative of f at x.
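The following sketch carries out this computation for an arbitrary sample polynomial and compares it with the formal derivative (it assumes SymPy is available; the polynomial chosen is our own example):

```python
# (f(x+h) - f(x)) / h is a polynomial in h; substituting h = 0 gives f'(x).
import sympy as sp

x, h = sp.symbols('x h')
f = x**3 - 2*x + 1

quotient = sp.cancel((f.subs(x, x + h) - f) / h)   # exact division: h divides the numerator
print(quotient.subs(h, 0))   # 3*x**2 - 2
print(sp.diff(f, x))         # 3*x**2 - 2, the formal derivative, for comparison
```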
The substitution is a special case of the universal property of a polynomial ring. The property states: given a ring homomorphismϕ:R→S{\displaystyle \phi :R\to S}and an elementxinSthere exists a unique ring homomorphismϕ¯:R[t]→S{\displaystyle {\overline {\phi }}:R[t]\to S}such thatϕ¯(t)=x{\displaystyle {\overline {\phi }}(t)=x}andϕ¯{\displaystyle {\overline {\phi }}}restricts toϕ.[37]For example, choosing a basis, asymmetric algebrasatisfies the universal property and so is a polynomial ring.
To give an example, let S be the ring of all functions from R to itself; the addition and the multiplication are those of functions. Let x be the identity function. Each r in R defines a constant function, giving rise to the homomorphism R → S. The universal property says that this map extends uniquely to the ring homomorphism R[t] → S, f ↦ f̄
(tmaps tox) wheref¯{\displaystyle {\overline {f}}}is thepolynomial functiondefined byf. The resulting map is injective if and only ifRis infinite.
Given a non-constant monic polynomialfinR[t], there exists a ringScontainingRsuch thatfis a product of linear factors inS[t].[38]
Letkbe an algebraically closed field. TheHilbert's Nullstellensatz(theorem of zeros) states that there is a natural one-to-one correspondence between the set of all prime ideals ink[t1,…,tn]{\displaystyle k\left[t_{1},\ldots ,t_{n}\right]}and the set of closed subvarieties ofkn. In particular, many local problems in algebraic geometry may be attacked through the study of the generators of an ideal in a polynomial ring. (cf.Gröbner basis.)
There are some other related constructions. A formal power series ring R[[t]] consists of formal power series a_0 + a_1 t + a_2 t^2 + ⋯ with coefficients a_i in R,
together with multiplication and addition that mimic those for convergent series. It containsR[t]as a subring. A formal power series ring does not have the universal property of a polynomial ring; a series may not converge after a substitution. The important advantage of a formal power series ring over a polynomial ring is that it islocal(in fact,complete).
Let R be a ring (not necessarily commutative). The set of all square matrices of size n with entries in R forms a ring with the entry-wise addition and the usual matrix multiplication. It is called the matrix ring and is denoted by M_n(R). Given a right R-module U, the set of all R-linear maps from U to itself forms a ring, with pointwise addition of functions as the addition and composition of functions as the multiplication; it is called the endomorphism ring of U and is denoted by End_R(U).
As in linear algebra, a matrix ring may be canonically interpreted as an endomorphism ring: End_R(R^n) ≃ M_n(R). This is a special case of the following fact: if f is an R-linear map from the direct sum of n copies of U to itself, then f may be written as a matrix with entries f_ij in S = End_R(U), resulting in the ring isomorphism End_R(U^⊕n) ≃ M_n(S).
Any ring homomorphismR→SinducesMn(R) → Mn(S).[39]
Schur's lemma says that if U is a simple right R-module, then End_R(U) is a division ring.[40] If U = U_1^⊕m_1 ⊕ ⋯ ⊕ U_r^⊕m_r is a direct sum of m_i-copies of pairwise non-isomorphic simple R-modules U_i, then End_R(U) ≃ ∏_{i=1}^r M_{m_i}(End_R(U_i)).
TheArtin–Wedderburn theoremstates anysemisimple ring(cf. below) is of this form.
A ringRand the matrix ringMn(R)over it areMorita equivalent: thecategoryof right modules ofRis equivalent to the category of right modules overMn(R).[39]In particular, two-sided ideals inRcorrespond in one-to-one to two-sided ideals inMn(R).
LetRibe a sequence of rings such thatRiis a subring ofRi+ 1for alli. Then the union (orfiltered colimit) ofRiis the ringlim→Ri{\displaystyle \varinjlim R_{i}}defined as follows: it is the disjoint union of allRi's modulo the equivalence relationx~yif and only ifx=yinRifor sufficiently largei.
Examples of colimits:
Any commutative ring is the colimit offinitely generated subrings.
Aprojective limit(or afiltered limit) of rings is defined as follows. Suppose we are given a family of ringsRi,irunning over positive integers, say, and ring homomorphismsRj→Ri,j≥isuch thatRi→Riare all the identities andRk→Rj→RiisRk→Riwheneverk≥j≥i. Thenlim←Ri{\displaystyle \varprojlim R_{i}}is the subring of∏Ri{\displaystyle \textstyle \prod R_{i}}consisting of(xn)such thatxjmaps toxiunderRj→Ri,j≥i.
For an example of a projective limit, see§ Completion.
Thelocalizationgeneralizes the construction of thefield of fractionsof an integral domain to an arbitrary ring and modules. Given a (not necessarily commutative) ringRand a subsetSofR, there exists a ringR[S−1]{\displaystyle R[S^{-1}]}together with the ring homomorphismR→R[S−1]{\displaystyle R\to R\left[S^{-1}\right]}that "inverts"S; that is, the homomorphism maps elements inSto unit elements inR[S−1],{\displaystyle R\left[S^{-1}\right],}and, moreover, any ring homomorphism fromRthat "inverts"Suniquely factors throughR[S−1].{\displaystyle R\left[S^{-1}\right].}[41]The ringR[S−1]{\displaystyle R\left[S^{-1}\right]}is called thelocalizationofRwith respect toS. For example, ifRis a commutative ring andfan element inR, then the localizationR[f−1]{\displaystyle R\left[f^{-1}\right]}consists of elements of the formr/fn,r∈R,n≥0{\displaystyle r/f^{n},\,r\in R,\,n\geq 0}(to be precise,R[f−1]=R[t]/(tf−1).{\displaystyle R\left[f^{-1}\right]=R[t]/(tf-1).})[42]
The localization is frequently applied to a commutative ringRwith respect to the complement of a prime ideal (or a union of prime ideals) inR. In that caseS=R−p,{\displaystyle S=R-{\mathfrak {p}},}one often writesRp{\displaystyle R_{\mathfrak {p}}}forR[S−1].{\displaystyle R\left[S^{-1}\right].}Rp{\displaystyle R_{\mathfrak {p}}}is then alocal ringwith themaximal idealpRp.{\displaystyle {\mathfrak {p}}R_{\mathfrak {p}}.}This is the reason for the terminology "localization". The field of fractions of an integral domainRis the localization ofRat the prime ideal zero. Ifp{\displaystyle {\mathfrak {p}}}is a prime ideal of a commutative ringR, then the field of fractions ofR/p{\displaystyle R/{\mathfrak {p}}}is the same as the residue field of the local ringRp{\displaystyle R_{\mathfrak {p}}}and is denoted byk(p).{\displaystyle k({\mathfrak {p}}).}
IfMis a leftR-module, then the localization ofMwith respect toSis given by achange of ringsM[S−1]=R[S−1]⊗RM.{\displaystyle M\left[S^{-1}\right]=R\left[S^{-1}\right]\otimes _{R}M.}
The most important properties of localization are the following: whenRis a commutative ring andSa multiplicatively closed subset
Incategory theory, alocalization of a categoryamounts to making some morphisms isomorphisms. An element in a commutative ringRmay be thought of as an endomorphism of anyR-module. Thus, categorically, a localization ofRwith respect to a subsetSofRis afunctorfrom the category ofR-modules to itself that sends elements ofSviewed as endomorphisms to automorphisms and is universal with respect to this property. (Of course,Rthen maps toR[S−1]{\displaystyle R\left[S^{-1}\right]}andR-modules map toR[S−1]{\displaystyle R\left[S^{-1}\right]}-modules.)
LetRbe a commutative ring, and letIbe an ideal ofR.
ThecompletionofRatIis the projective limitR^=lim←R/In;{\displaystyle {\hat {R}}=\varprojlim R/I^{n};}it is a commutative ring. The canonical homomorphisms fromRto the quotientsR/In{\displaystyle R/I^{n}}induce a homomorphismR→R^.{\displaystyle R\to {\hat {R}}.}The latter homomorphism is injective ifRis a Noetherian integral domain andIis a proper ideal, or ifRis a Noetherian local ring with maximal idealI, byKrull's intersection theorem.[45]The construction is especially useful whenIis a maximal ideal.
The basic example is the completion ofZ{\displaystyle \mathbb {Z} }at the principal ideal(p)generated by a prime numberp; it is called the ring ofp-adic integersand is denotedZp.{\displaystyle \mathbb {Z} _{p}.}The completion can in this case be constructed also from thep-adic absolute valueonQ.{\displaystyle \mathbb {Q} .}Thep-adic absolute value onQ{\displaystyle \mathbb {Q} }is a mapx↦|x|{\displaystyle x\mapsto |x|}fromQ{\displaystyle \mathbb {Q} }toR{\displaystyle \mathbb {R} }given by|n|p=p−vp(n){\displaystyle |n|_{p}=p^{-v_{p}(n)}}wherevp(n){\displaystyle v_{p}(n)}denotes the exponent ofpin the prime factorization of a nonzero integerninto prime numbers (we also put|0|p=0{\displaystyle |0|_{p}=0}and|m/n|p=|m|p/|n|p{\displaystyle |m/n|_{p}=|m|_{p}/|n|_{p}}). It defines a distance function onQ{\displaystyle \mathbb {Q} }and the completion ofQ{\displaystyle \mathbb {Q} }as ametric spaceis denoted byQp.{\displaystyle \mathbb {Q} _{p}.}It is again a field since the field operations extend to the completion. The subring ofQp{\displaystyle \mathbb {Q} _{p}}consisting of elementsxwith|x|p≤ 1is isomorphic toZp.{\displaystyle \mathbb {Z} _{p}.}
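A short Python sketch of the p-adic valuation and absolute value just defined (an editorial illustration; the helper names v_p and p_adic_abs are ours):

```python
# The p-adic absolute value |x|_p = p^(-v_p(x)) on the rationals.
from fractions import Fraction

def v_p(n, p):
    """Exponent of the prime p in the factorization of a nonzero integer n."""
    v, n = 0, abs(n)
    while n % p == 0:
        n //= p
        v += 1
    return v

def p_adic_abs(q, p):
    q = Fraction(q)
    if q == 0:
        return Fraction(0)
    return Fraction(1, p) ** (v_p(q.numerator, p) - v_p(q.denominator, p))

print(p_adic_abs(45, 3))               # 1/9, since 45 = 3^2 * 5
print(p_adic_abs(Fraction(1, 3), 3))   # 3
print(p_adic_abs(Fraction(10, 7), 3))  # 1
```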
Similarly, the formal power series ring R[[t]] is the completion of R[t] at (t) (see also Hensel's lemma).
A complete ring has a much simpler structure than a general commutative ring. This owes to the Cohen structure theorem, which says, roughly, that a complete local ring tends to look like a formal power series ring or a quotient of it. On the other hand, the interaction between the integral closure and completion has been among the most important aspects that distinguish modern commutative ring theory from the classical one developed by the likes of Noether. Pathological examples found by Nagata led to the reexamination of the roles of Noetherian rings and motivated, among other things, the definition of excellent ring.
The most general way to construct a ring is by specifying generators and relations. LetFbe afree ring(that is, free algebra over the integers) with the setXof symbols, that is,Fconsists of polynomials with integral coefficients in noncommuting variables that are elements ofX. A free ring satisfies the universal property: any function from the setXto a ringRfactors throughFso thatF→Ris the unique ring homomorphism. Just as in the group case, every ring can be represented as a quotient of a free ring.[46]
Now, we can impose relations among symbols inXby taking a quotient. Explicitly, ifEis a subset ofF, then the quotient ring ofFby the ideal generated byEis called the ring with generatorsXand relationsE. If we used a ring, say,Aas a base ring instead ofZ,{\displaystyle \mathbb {Z} ,}then the resulting ring will be overA. For example, ifE={xy−yx∣x,y∈X},{\displaystyle E=\{xy-yx\mid x,y\in X\},}then the resulting ring will be the usual polynomial ring with coefficients inAin variables that are elements ofX(It is also the same thing as thesymmetric algebraoverAwith symbolsX.)
In the category-theoretic terms, the formationS↦the free ring generated by the setS{\displaystyle S\mapsto {\text{the free ring generated by the set }}S}is the left adjoint functor of theforgetful functorfrom thecategory of ringstoSet(and it is often called the free ring functor.)
LetA,Bbe algebras over a commutative ringR. Then the tensor product ofR-modulesA⊗RB{\displaystyle A\otimes _{R}B}is anR-algebra with multiplication characterized by(x⊗u)(y⊗v)=xy⊗uv.{\displaystyle (x\otimes u)(y\otimes v)=xy\otimes uv.}
Anonzeroring with no nonzerozero-divisorsis called adomain. A commutative domain is called anintegral domain. The most important integral domains are principal ideal domains, PIDs for short, and fields. A principal ideal domain is an integral domain in which every ideal is principal. An important class of integral domains that contain a PID is aunique factorization domain(UFD), an integral domain in which every nonunit element is a product ofprime elements(an element is prime if it generates aprime ideal.) The fundamental question inalgebraic number theoryis on the extent to which thering of (generalized) integersin anumber field, where an "ideal" admits prime factorization, fails to be a PID.
Among theorems concerning a PID, the most important one is thestructure theorem for finitely generated modules over a principal ideal domain. The theorem may be illustrated by the following application to linear algebra.[47]LetVbe a finite-dimensional vector space over a fieldkandf:V→Va linear map with minimal polynomialq. Then, sincek[t]is a unique factorization domain,qfactors into powers of distinct irreducible polynomials (that is, prime elements):q=p1e1…pses.{\displaystyle q=p_{1}^{e_{1}}\ldots p_{s}^{e_{s}}.}
Lettingt⋅v=f(v),{\displaystyle t\cdot v=f(v),}we makeVak[t]-module. The structure theorem then saysVis a direct sum ofcyclic modules, each of which is isomorphic to the module of the formk[t]/(pikj).{\displaystyle k[t]/\left(p_{i}^{k_{j}}\right).}Now, ifpi(t)=t−λi,{\displaystyle p_{i}(t)=t-\lambda _{i},}then such a cyclic module (forpi) has a basis in which the restriction offis represented by aJordan matrix. Thus, if, say,kis algebraically closed, then allpi's are of the formt–λiand the above decomposition corresponds to theJordan canonical formoff.
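For a concrete instance (assuming SymPy; the matrix shown is an arbitrary example of ours), Matrix.jordan_form computes exactly this decomposition:

```python
# The Jordan canonical form of a linear map, i.e. the k[t]-module decomposition above.
import sympy as sp

M = sp.Matrix([[5, 4, 2, 1],
               [0, 1, -1, -1],
               [-1, -1, 3, 0],
               [1, 1, -1, 2]])
P, J = M.jordan_form()   # M == P * J * P**-1, with J a block-diagonal matrix of Jordan blocks
sp.pprint(J)
```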
In algebraic geometry, UFDs arise because of smoothness. More precisely, a point in a variety (over a perfect field) is smooth if the local ring at the point is aregular local ring. A regular local ring is a UFD.[48]
The following is a chain of class inclusions that describes the relationship between rings, domains and fields: rngs ⊃ rings ⊃ commutative rings ⊃ integral domains ⊃ integrally closed domains ⊃ GCD domains ⊃ unique factorization domains ⊃ principal ideal domains ⊃ Euclidean domains ⊃ fields ⊃ algebraically closed fields.
A division ring is a ring such that every non-zero element is a unit. A commutative division ring is a field. A prominent example of a division ring that is not a field is the ring of quaternions. Any centralizer in a division ring is also a division ring. In particular, the center of a division ring is a field. Every finite domain (in particular, every finite division ring) is a field, and in particular commutative (Wedderburn's little theorem).
Every module over a division ring is a free module (has a basis); consequently, much of linear algebra can be carried out over a division ring instead of a field.
The study of conjugacy classes figures prominently in the classical theory of division rings; see, for example, theCartan–Brauer–Hua theorem.
Acyclic algebra, introduced byL. E. Dickson, is a generalization of aquaternion algebra.
Asemisimple moduleis a direct sum of simple modules. Asemisimple ringis a ring that is semisimple as a left module (or right module) over itself.
TheWeyl algebraover a field is asimple ring, but it is not semisimple. The same holds for aring of differential operators in many variables.
Any module over a semisimple ring is semisimple. (Proof: A free module over a semisimple ring is semisimple and any module is a quotient of a free module.)
For a ringR, the following are equivalent:
Semisimplicity is closely related to separability. A unital associative algebraAover a fieldkis said to beseparableif the base extensionA⊗kF{\displaystyle A\otimes _{k}F}is semisimple for everyfield extensionF/k. IfAhappens to be a field, then this is equivalent to the usual definition in field theory (cf.separable extension.)
For a fieldk, ak-algebra is central if its center iskand is simple if it is asimple ring. Since the center of a simplek-algebra is a field, any simplek-algebra is a central simple algebra over its center. In this section, a central simple algebra is assumed to have finite dimension. Also, we mostly fix the base field; thus, an algebra refers to ak-algebra. The matrix ring of sizenover a ringRwill be denoted byRn.
TheSkolem–Noether theoremstates any automorphism of a central simple algebra is inner.
Two central simple algebrasAandBare said to besimilarif there are integersnandmsuch thatA⊗kkn≈B⊗kkm.{\displaystyle A\otimes _{k}k_{n}\approx B\otimes _{k}k_{m}.}[49]Sincekn⊗kkm≃knm,{\displaystyle k_{n}\otimes _{k}k_{m}\simeq k_{nm},}the similarity is an equivalence relation. The similarity classes[A]with the multiplication[A][B]=[A⊗kB]{\displaystyle [A][B]=\left[A\otimes _{k}B\right]}form an abelian group called theBrauer groupofkand is denoted byBr(k). By theArtin–Wedderburn theorem, a central simple algebra is the matrix ring of a division ring; thus, each similarity class is represented by a unique division ring.
For example,Br(k)is trivial ifkis a finite field or an algebraically closed field (more generallyquasi-algebraically closed field; cf.Tsen's theorem).Br(R){\displaystyle \operatorname {Br} (\mathbb {R} )}has order 2 (a special case of thetheorem of Frobenius). Finally, ifkis a nonarchimedeanlocal field(for example,Qp{\displaystyle \mathbb {Q} _{p}}),thenBr(k)=Q/Z{\displaystyle \operatorname {Br} (k)=\mathbb {Q} /\mathbb {Z} }through theinvariant map.
Now, ifFis a field extension ofk, then the base extension−⊗kF{\displaystyle -\otimes _{k}F}inducesBr(k) → Br(F). Its kernel is denoted byBr(F/k). It consists of[A]such thatA⊗kF{\displaystyle A\otimes _{k}F}is a matrix ring overF(that is,Ais split byF.) If the extension is finite and Galois, thenBr(F/k)is canonically isomorphic toH2(Gal(F/k),k∗).{\displaystyle H^{2}\left(\operatorname {Gal} (F/k),k^{*}\right).}[50]
Azumaya algebrasgeneralize the notion of central simple algebras to a commutative local ring.
IfKis a field, avaluationvis a group homomorphism from the multiplicative groupK∗to a totally ordered abelian groupGsuch that, for anyf,ginKwithf+gnonzero,v(f+g) ≥ min{v(f),v(g)}.Thevaluation ringofvis the subring ofKconsisting of zero and all nonzerofsuch thatv(f) ≥ 0.
Examples:
A ring may be viewed as anabelian group(by using the addition operation), with extra structure: namely, ring multiplication. In the same way, there are other mathematical objects which may be considered as rings with extra structure. For example:
Many different kinds ofmathematical objectscan be fruitfully analyzed in terms of someassociated ring.
To any topological space X one can associate its integral cohomology ring H*(X, Z) = ⊕_{i≥0} H^i(X, Z),
agraded ring. There are alsohomology groupsHi(X,Z){\displaystyle H_{i}(X,\mathbb {Z} )}of a space, and indeed these were defined first, as a useful tool for distinguishing between certain pairs of topological spaces, like thespheresandtori, for which the methods ofpoint-set topologyare not well-suited.Cohomology groupswere later defined in terms of homology groups in a way which is roughly analogous to the dual of avector space. To know each individual integral homology group is essentially the same as knowing each individual integral cohomology group, because of theuniversal coefficient theorem. However, the advantage of the cohomology groups is that there is anatural product, which is analogous to the observation that one can multiply pointwise ak-multilinear formand anl-multilinear form to get a (k+l)-multilinear form.
The ring structure in cohomology provides the foundation forcharacteristic classesoffiber bundles, intersection theory on manifolds andalgebraic varieties,Schubert calculusand much more.
To anygroupis associated itsBurnside ringwhich uses a ring to describe the various ways the group canacton a finite set. The Burnside ring's additive group is thefree abelian groupwhose basis is the set of transitive actions of the group and whose addition is the disjoint union of the action. Expressing an action in terms of the basis is decomposing an action into its transitive constituents. The multiplication is easily expressed in terms of therepresentation ring: the multiplication in the Burnside ring is formed by writing the tensor product of two permutation modules as a permutation module. The ring structure allows a formal way of subtracting one action from another. Since the Burnside ring is contained as a finite index subring of the representation ring, one can pass easily from one to the other by extending the coefficients from integers to the rational numbers.
To anygroup ringorHopf algebrais associated itsrepresentation ringor "Green ring". The representation ring's additive group is the free abelian group whose basis are the indecomposable modules and whose addition corresponds to the direct sum. Expressing a module in terms of the basis is finding an indecomposable decomposition of the module. The multiplication is the tensor product. When the algebra is semisimple, the representation ring is just the character ring fromcharacter theory, which is more or less theGrothendieck groupgiven a ring structure.
To any irreduciblealgebraic varietyis associated itsfunction field. The points of an algebraic variety correspond tovaluation ringscontained in the function field and containing thecoordinate ring. The study ofalgebraic geometrymakes heavy use ofcommutative algebrato study geometric concepts in terms of ring-theoretic properties.Birational geometrystudies maps between the subrings of the function field.
Everysimplicial complexhas an associated face ring, also called itsStanley–Reisner ring. This ring reflects many of the combinatorial properties of the simplicial complex, so it is of particular interest inalgebraic combinatorics. In particular, the algebraic geometry of the Stanley–Reisner ring was used to characterize the numbers of faces in each dimension ofsimplicial polytopes.
Every ring can be thought of as amonoidinAb, thecategory of abelian groups(thought of as amonoidal categoryunder thetensor product ofZ{\displaystyle \mathbb {Z} }-modules). The monoid action of a ringRon an abelian group is simply anR-module. Essentially, anR-module is a generalization of the notion of avector space– where rather than a vector space over a field, one has a "vector space over a ring".
Let (A, +) be an abelian group and let End(A) be its endomorphism ring (see above). Note that, essentially, End(A) is the set of all morphisms of A, where if f is in End(A) and g is in End(A), the following rules may be used to compute f + g and f ⋅ g: (f + g)(x) = f(x) + g(x) and (f ⋅ g)(x) = f(g(x)),
where+as inf(x) +g(x)is addition inA, and function composition is denoted from right to left. Therefore,associatedto any abelian group, is a ring. Conversely, given any ring,(R, +,⋅),(R, +)is an abelian group. Furthermore, for everyrinR, right (or left) multiplication byrgives rise to a morphism of(R, +), by right (or left) distributivity. LetA= (R, +). Consider thoseendomorphismsofA, that "factor through" right (or left) multiplication ofR. In other words, letEndR(A)be the set of all morphismsmofA, having the property thatm(r⋅x) =r⋅m(x). It was seen that everyrinRgives rise to a morphism ofA: right multiplication byr. It is in fact true that this association of any element ofR, to a morphism ofA, as a function fromRtoEndR(A), is an isomorphism of rings. In this sense, therefore, any ring can be viewed as the endomorphism ring of some abelianX-group (byX-group, it is meant a group withXbeing itsset of operators).[51]In essence, the most general form of a ring, is the endomorphism group of some abelianX-group.
Any ring can be seen as apreadditive categorywith a single object. It is therefore natural to consider arbitrary preadditive categories to be generalizations of rings. And indeed, many definitions and theorems originally given for rings can be translated to this more general context.Additive functorsbetween preadditive categories generalize the concept of ring homomorphism, and ideals in additive categories can be defined as sets ofmorphismsclosed under addition and under composition with arbitrary morphisms.
Algebraists have defined structures more general than rings by weakening or dropping some of ring axioms.
Arngis the same as a ring, except that the existence of a multiplicative identity is not assumed.[52]
Anonassociative ringis an algebraic structure that satisfies all of the ring axioms except the associative property and the existence of a multiplicative identity. A notable example is aLie algebra. There exists some structure theory for such algebras that generalizes the analogous results for Lie algebras and associative algebras.[citation needed]
Asemiring(sometimesrig) is obtained by weakening the assumption that(R, +)is an abelian group to the assumption that(R, +)is a commutative monoid, and adding the axiom that0 ⋅a=a⋅ 0 = 0for allainR(since it no longer follows from the other axioms).
Examples:
LetCbe a category with finiteproducts. Let pt denote aterminal objectofC(an empty product). Aring objectinCis an objectRequipped with morphismsR×R→aR{\displaystyle R\times R\;{\stackrel {a}{\to }}\,R}(addition),R×R→mR{\displaystyle R\times R\;{\stackrel {m}{\to }}\,R}(multiplication),pt→0R{\displaystyle \operatorname {pt} {\stackrel {0}{\to }}\,R}(additive identity),R→iR{\displaystyle R\;{\stackrel {i}{\to }}\,R}(additive inverse), andpt→1R{\displaystyle \operatorname {pt} {\stackrel {1}{\to }}\,R}(multiplicative identity) satisfying the usual ring axioms. Equivalently, a ring object is an objectRequipped with a factorization of its functor of pointshR=Hom(−,R):Cop→Sets{\displaystyle h_{R}=\operatorname {Hom} (-,R):C^{\operatorname {op} }\to \mathbf {Sets} }through the category of rings:Cop→Rings⟶forgetfulSets.{\displaystyle C^{\operatorname {op} }\to \mathbf {Rings} {\stackrel {\textrm {forgetful}}{\longrightarrow }}\mathbf {Sets} .}
In algebraic geometry, aring schemeover a baseschemeSis a ring object in the category ofS-schemes. One example is the ring schemeWnoverSpecZ{\displaystyle \operatorname {Spec} \mathbb {Z} }, which for any commutative ringAreturns the ringWn(A)ofp-isotypicWitt vectorsof lengthnoverA.[53]
Inalgebraic topology, aring spectrumis aspectrumXtogether with a multiplicationμ:X∧X→X{\displaystyle \mu :X\wedge X\to X}and a unit mapS→Xfrom thesphere spectrumS, such that the ring axiom diagrams commute up to homotopy. In practice, it is common to define a ring spectrum as amonoid objectin a good category of spectra such as the category ofsymmetric spectra.
Special types of rings:
|
https://en.wikipedia.org/wiki/Ring_(mathematics)
|
Acommitment schemeis acryptographic primitivethat allows one to commit to a chosen value (or chosen statement) while keeping it hidden to others, with the ability to reveal the committed value later.[1]Commitment schemes are designed so that a party cannot change the value or statement after they have committed to it: that is, commitment schemes arebinding. Commitment schemes have important applications in a number ofcryptographic protocolsincluding secure coin flipping,zero-knowledge proofs, andsecure computation.
A way to visualize a commitment scheme is to think of a sender as putting a message in a locked box, and giving the box to a receiver. The message in the box is hidden from the receiver, who cannot open the lock themselves. Since the receiver has the box, the message inside cannot be changed—merely revealed if the sender chooses to give them the key at some later time.
Interactions in a commitment scheme take place in two phases:
In the above metaphor, the commit phase is the sender putting the message in the box, and locking it. The reveal phase is the sender giving the key to the receiver, who uses it to open the box and verify its contents. The locked box is the commitment, and the key is the proof.
In simple protocols, the commit phase consists of a single message from the sender to the receiver. This message is calledthe commitment. It is essential that the specific value chosen cannot be extracted from the message by the receiver at that time (this is called thehidingproperty). A simple reveal phase would consist of a single message,the opening, from the sender to the receiver, followed by a check performed by the receiver. The value chosen during the commit phase must be the only one that the sender can compute and that validates during the reveal phase (this is called thebindingproperty).
The concept of commitment schemes was perhaps first formalized byGilles Brassard,David Chaum, andClaude Crépeauin 1988,[2]as part of various zero-knowledge protocols forNP, based on various types of commitment schemes.[3][4]But the concept was used prior to that without being treated formally.[5][6]The notion of commitments appeared earliest in works byManuel Blum,[7]Shimon Even,[8]andAdi Shamiret al.[9]The terminology seems to have been originated by Blum,[6]although commitment schemes can be interchangeably calledbit commitment schemes—sometimes reserved for the special case where the committed value is abit. Prior to that, commitment via one-way hash functions was considered, e.g., as part of, say,Lamport signature, the original one-time one-bit signature scheme.
Suppose Alice and Bob want to resolve some dispute via coin flipping. If they are physically in the same place, a typical procedure might be: Alice "calls" the coin flip; Bob flips the coin; if Alice's call is correct, she wins, otherwise Bob wins.
If Alice and Bob are not in the same place a problem arises. Once Alice has "called" the coin flip, Bob can stipulate the flip "results" to be whatever is most desirable for him. Similarly, if Alice doesn't announce her "call" to Bob, after Bob flips the coin and announces the result, Alice can report that she called whatever result is most desirable for her. Alice and Bob can use commitments in a procedure that will allow both to trust the outcome: Alice "calls" the coin flip but tells Bob only a commitment to her call; Bob flips the coin and reports the result; Alice reveals her call, and Bob verifies that it matches the commitment; Alice wins if her revealed call matches the reported flip, and Bob wins otherwise.
For Bob to be able to skew the results to his favor, he must be able to understand the call hidden in Alice's commitment. If the commitment scheme is a good one, Bob cannot skew the results. Similarly, Alice cannot affect the result if she cannot change the value she commits to.
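An illustrative Python walkthrough of this procedure, using a salted SHA-256 hash as a stand-in for a generic commitment scheme (the function names commit and check_reveal are ours, not from any standard library):

```python
# Remote coin flipping with a commitment (illustration only; SHA-256 with a random
# salt stands in for a generic commitment scheme).
import hashlib
import secrets

def commit(value):
    opening = secrets.token_bytes(32)                      # fresh randomness hides the value
    return hashlib.sha256(opening + value.encode()).digest(), opening

def check_reveal(c, value, opening):
    return hashlib.sha256(opening + value.encode()).digest() == c

# 1. Alice calls the flip and sends Bob only the commitment.
call = secrets.choice(["heads", "tails"])
c, opening = commit(call)

# 2. Bob flips the coin and announces the result.
flip = secrets.choice(["heads", "tails"])

# 3. Alice reveals her call; Bob checks it against the commitment before accepting it.
assert check_reveal(c, call, opening)
print("Alice wins" if call == flip else "Bob wins")
```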
A real-life application of this problem exists, when people (often in media) commit to a decision or give an answer in a "sealed envelope", which is then opened later. "Let's find out if that's what the candidate answered", for example on a game show, can serve as a model of this system.
One particular motivating example is the use of commitment schemes inzero-knowledge proofs. Commitments are used in zero-knowledge proofs for two main purposes: first, to allow the prover to participate in "cut and choose" proofs where the verifier will be presented with a choice of what to learn, and the prover will reveal only what corresponds to the verifier's choice. Commitment schemes allow the prover to specify all the information in advance, and only reveal what should be revealed later in the proof.[10]Second, commitments are also used in zero-knowledge proofs by the verifier, who will often specify their choices ahead of time in a commitment. This allows zero-knowledge proofs to be composed in parallel without revealing additional information to the prover.[11]
TheLamport signaturescheme is adigital signaturesystem that relies on maintaining two sets of secretdata packets, publishingverifiable hashesof the data packets, and then selectively revealing partial secret data packets in a manner that conforms specifically to the data to be signed. In this way, the prior public commitment to the secret values becomes a critical part of the functioning of the system.
Because the Lamport signature system cannot be used more than once, a system to combine many Lamport key-sets under a single public value that can be tied to a person and verified by others was developed. This system uses trees ofhashesto compress many published Lamport-key-commitment sets into a single hash value that can be associated with the prospective author of later-verified data.
Another important application of commitments is inverifiable secret sharing, a critical building block ofsecure multiparty computation. In asecret sharingscheme, each of several parties receive "shares" of a value that is meant to be hidden from everyone. If enough parties get together, their shares can be used to reconstruct the secret, but even a malicious cabal of insufficient size should learn nothing. Secret sharing is at the root of many protocols forsecure computation: in order to securely compute a function of some shared input, the secret shares are manipulated instead. However, if shares are to be generated by malicious parties, it may be important that those shares can be checked for correctness. In a verifiable secret sharing scheme, the distribution of a secret is accompanied by commitments to the individual shares. The commitments reveal nothing that can help a dishonest cabal, but the shares allow each individual party to check to see if their shares are correct.[12]
Formal definitions of commitment schemes vary strongly in notation and in flavour. The first such flavour is whether the commitment scheme provides perfect or computational security with respect to the hiding or binding properties. Another such flavour is whether the commitment is interactive, i.e. whether both the commit phase and the reveal phase can be seen as being executed by acryptographic protocolor whether they are non-interactive, consisting of two algorithmsCommitandCheckReveal. In the latter caseCheckRevealcan often be seen as a derandomised version ofCommit, with the randomness used byCommitconstituting the opening information.
If the commitmentCto a valuexis computed asC:=Commit(x,open)withopenbeing the randomness used for computing the commitment, thenCheckReveal (C,x,open)reduces to simply verifying the equationC=Commit (x,open).
Using this notation and some knowledge aboutmathematical functionsandprobability theorywe formalise different versions of the binding and hiding properties of commitments. The two most important combinations of these properties are perfectly binding and computationally hiding commitment schemes and computationally binding and perfectly hiding commitment schemes. Note that no commitment scheme can be at the same time perfectly binding and perfectly hiding – a computationally unbounded adversary can simply generateCommit(x,open)for every value ofxandopenuntil finding a pair that outputsC, and in a perfectly binding scheme this uniquely identifiesx.
Letopenbe chosen from a set of size2k{\displaystyle 2^{k}}, i.e., it can be represented as akbit string, and letCommitk{\displaystyle {\text{Commit}}_{k}}be the corresponding commitment scheme. As the size ofkdetermines the security of the commitment scheme it is called thesecurity parameter.
Then for allnon-uniformprobabilistic polynomial time algorithmsthat outputx,x′{\displaystyle x,x'}andopen,open′{\displaystyle open,open'}of increasing lengthk, the probability thatx≠x′{\displaystyle x\neq x'}andCommitk(x,open)=Commitk(x′,open′){\displaystyle {\text{Commit}}_{k}(x,open)={\text{Commit}}_{k}(x',open')}is anegligible functionink.
This is a form ofasymptotic analysis. It is also possible to state the same requirement usingconcrete security: A commitment schemeCommitis(t,ϵ){\displaystyle (t,\epsilon )}secure, if for all algorithms that run in timetand outputx,x′,open,open′{\displaystyle x,x',open,open'}the probability thatx≠x′{\displaystyle x\neq x'}andCommit(x,open)=Commit(x′,open′){\displaystyle {\text{Commit}}(x,open)={\text{Commit}}(x',open')}is at mostϵ{\displaystyle \epsilon }.
LetUk{\displaystyle U_{k}}be the uniform distribution over the2k{\displaystyle 2^{k}}opening values for security parameterk. A commitment scheme is respectively perfect, statistical, or computational hiding, if for allx≠x′{\displaystyle x\neq x'}theprobability ensembles{Commitk(x,Uk)}k∈N{\displaystyle \{{\text{Commit}}_{k}(x,U_{k})\}_{k\in \mathbb {N} }}and{Commitk(x′,Uk)}k∈N{\displaystyle \{{\text{Commit}}_{k}(x',U_{k})\}_{k\in \mathbb {N} }}are equal,statistically close, orcomputationally indistinguishable.
It is impossible to realize commitment schemes in theuniversal composability(UC) framework. The reason is that UC commitment has to beextractable, as shown by Canetti and Fischlin[13]and explained below.
The ideal commitment functionality, denoted here byF, works roughly as follows. CommitterCsends valuemtoF, which
stores it and sends "receipt" to receiverR. Later,Csends "open" toF, which sendsmtoR.
Now, assume we have a protocolπthat realizes this functionality. Suppose that the committerCis corrupted. In the UC framework, that essentially means thatCis now controlled by the environment, which attempts to distinguish protocol execution from the ideal process. Consider an environment that chooses a messagemand then tellsCto act as prescribed byπ, as if it has committed tom. Note here that in order to realizeF, the receiver must, after receiving a commitment, output a message "receipt". After the environment sees this message, it tellsCto open the commitment.
The protocol is only secure if this scenario is indistinguishable from the ideal case, where the functionality interacts with a simulatorS. Here,Shas control ofC. In particular, wheneverRoutputs "receipt",Fhas to do likewise. The only way to do that is forSto tellCto send a value toF. However, note
that by this point,mis not known toS. Hence, when the commitment is opened during protocol execution, it is unlikely thatFwill open tom, unlessScan extractmfrom the messages it received from the environment beforeRoutputs the receipt.
However a protocol that is extractable in this sense cannot be statistically hiding. Suppose such a simulatorSexists. Now consider an
environment that, instead of corruptingC, corruptsRinstead. Additionally it runs a copy ofS. Messages received fromCare fed intoS, and replies fromSare forwarded toC.
The environment initially tellsCto commit to a messagem. At some point in the interaction,Swill commit to a valuem′. This message is handed toR, who outputsm′. Note that by assumption we havem' = mwith high probability. Now in the ideal process the simulator has to come up withm. But this is impossible, because at this point the commitment has not been opened yet, so the only messageRcan have received in the ideal process is a "receipt" message. We thus have a contradiction.
A commitment scheme can either be perfectly binding (it is impossible for Alice to alter her commitment after she has made it, even if she has unbounded computational resources); or perfectly concealing (it is impossible for Bob to find out the commitment without Alice revealing it, even if he has unbounded computational resources); or formulated as an instance-dependent commitment scheme, which is either hiding or binding depending on the solution to another problem.[14][15]A commitment scheme cannot be both perfectly hiding and perfectly binding at the same time.
Bit-commitment schemes are trivial to construct in the random oracle model. Given a hash function H with a 3k-bit output, to commit to the k-bit message m, Alice generates a random k-bit string R and sends Bob H(R||m). The probability that any R′, m′ exist where m′ ≠ m such that H(R′||m′) = H(R||m) is ≈ 2^−k, but to test any guess at the message m Bob will need to make 2^k (for an incorrect guess) or 2^(k−1) (on average, for a correct guess) queries to the random oracle.[16] We note that earlier schemes based on hash functions can essentially be thought of as schemes based on the idealization of these hash functions as a random oracle.
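A minimal sketch of this construction with k = 128, using SHA-384 to play the role of the hash with 3k-bit output (the parameter choices and helper names are ours):

```python
# Random-oracle-style commitment: C = H(R || m) with a k-bit random string R.
import hashlib
import secrets

K_BYTES = 16   # k = 128 bits; SHA-384 then has 3k = 384-bit output

def commit(m):
    assert len(m) == K_BYTES, "this sketch commits to a k-bit message"
    r = secrets.token_bytes(K_BYTES)
    return hashlib.sha384(r + m).digest(), r                 # (commitment, opening)

def check_reveal(c, m, r):
    return hashlib.sha384(r + m).digest() == c

m = secrets.token_bytes(K_BYTES)
c, r = commit(m)
print(check_reveal(c, m, r))                                 # True
print(check_reveal(c, secrets.token_bytes(K_BYTES), r))      # False (with overwhelming probability)
```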
One can create a bit-commitment scheme from anyone-way functionthat isinjective. The scheme relies on the fact that every one-way function can be modified (via theGoldreich-Levin theorem) to possess a computationallyhard-core predicate(while retaining the injective property).
Let f be an injective one-way function, with h a hard-core predicate. Then to commit to a bit b Alice picks a random input x and sends the triple (h, f(x), b ⊕ h(x))
to Bob, where⊕{\displaystyle \oplus }denotes XOR,i.e., bitwise addition modulo 2. To decommit, Alice simply sendsxto Bob. Bob verifies by computingf(x) and comparing to the committed value. This scheme is concealing because for Bob to recoverbhe must recoverh(x). Sincehis a computationally hard-core predicate, recoveringh(x) fromf(x) with probability greater than one-half is as hard as invertingf. Perfect binding follows from the fact thatfis injective and thusf(x) has exactly one preimage.
Note that since we do not know how to construct a one-way permutation from any one-way function, this section reduces the strength of the cryptographic assumption necessary to construct a bit-commitment protocol.
In 1991 Moni Naor showed how to create a bit-commitment scheme from a cryptographically secure pseudorandom number generator.[17] The construction is as follows. If G is a pseudo-random generator such that G takes n bits to 3n bits, then if Alice wants to commit to a bit b: Bob selects a random 3n-bit vector R and sends it to Alice; Alice selects a random n-bit vector Y; if b = 0 she sends G(Y) to Bob, and if b = 1 she sends G(Y) ⊕ R.
To decommit Alice sendsYto Bob, who can then check whether he initially receivedG(Y) orG(Y)⊕{\displaystyle \oplus }R.
This scheme is statistically binding, meaning that even if Alice is computationally unbounded she cannot cheat with probability greater than 2^−n. For Alice to cheat, she would need to find a Y′ such that G(Y′) = G(Y) ⊕ R. If she could find such a value, she could decommit by sending the truth and Y, or send the opposite answer and Y′. However, G(Y) and G(Y′) can each take only 2^n possible values (2^(2n) combinations in total), while R is picked out of 2^(3n) values. She does not pick R, so there is a 2^(2n)/2^(3n) = 2^−n probability that a Y′ satisfying the equation required to cheat will exist.
The concealing property follows from a standard reduction, if Bob can tell whether Alice committed to a zero or one, he can also distinguish the output of the pseudo-random generatorGfrom true-random, which contradicts the cryptographic security ofG.
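The message flow of Naor's scheme can be sketched as follows (illustration only: a SHA-256-based expansion stands in for the pseudo-random generator G and is not a substitute for a proven PRG):

```python
# Naor's bit commitment from a PRG G: {0,1}^n -> {0,1}^{3n}, here with n = 256.
import hashlib
import secrets

N_BYTES = 32   # n = 256 bits, so G outputs 3n bits = 96 bytes

def G(seed):
    # Stand-in expansion; a real instantiation would use a proven PRG.
    return b"".join(hashlib.sha256(seed + bytes([i])).digest() for i in range(3))

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Commit phase
R = secrets.token_bytes(3 * N_BYTES)            # Bob's random 3n-bit string, sent to Alice
b = 1                                           # Alice's secret bit
Y = secrets.token_bytes(N_BYTES)                # Alice's random seed
commitment = G(Y) if b == 0 else xor(G(Y), R)   # what Alice sends to Bob

# Reveal phase: Alice sends Y (and b); Bob recomputes and compares.
assert commitment == (G(Y) if b == 0 else xor(G(Y), R))
print("opened bit:", b)
```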
Alice chooses a cyclic group of prime order p, with generator g.
Alice randomly picks a secret value x from 0 to p − 1 to commit to, calculates c = g^x and publishes c. The discrete logarithm problem dictates that from c, it is computationally infeasible to compute x, so under this assumption, Bob cannot compute x. On the other hand, Alice cannot compute an x′ ≠ x such that g^x′ = c, so the scheme is binding.
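A toy-sized Python sketch of this commitment (the parameters p = 2039, q = 1019 and the generator are illustrative only; a real deployment would use a large prime-order group):

```python
# Discrete-log commitment c = g^x in a subgroup of prime order q (toy parameters).
import secrets

p, q = 2039, 1019          # safe prime p = 2q + 1 with q prime
g = pow(2, 2, p)           # squaring lands in the subgroup of order q

x = secrets.randbelow(q)   # Alice's secret value
c = pow(g, x, p)           # the published commitment

# Reveal: Alice announces x, and anyone can recompute g^x.
print(pow(g, x, p) == c)   # True; since g has order q, the exponent x is determined mod q
```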
This scheme isn't perfectly concealing, since someone could recover the committed value if they manage to solve the discrete logarithm problem. In fact, this scheme isn't hiding at all with respect to the standard hiding game, where an adversary should be unable to guess which of two messages of his choosing was committed to, similar to the IND-CPA game. One consequence of this is that if the space of possible values of x is small, then an attacker could simply try them all and the commitment would not be hiding.
A better example of a perfectly binding commitment scheme is one where the commitment is the encryption ofxunder asemantically secure, public-key encryption scheme with perfect completeness, and the decommitment is the string of random bits used to encryptx. An example of an information-theoretically hiding commitment scheme is the Pedersen commitment scheme,[18]which is computationally binding under the discrete logarithm assumption.[19]Additionally to the scheme above, it uses another generatorhof the prime group and a random numberr. The commitment is setc=gxhr{\displaystyle c=g^{x}h^{r}}.[20]
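A toy Python sketch of the Pedersen commitment c = g^x h^r just described (illustration only; in practice the group must be large and h must be generated so that nobody knows log_g(h), e.g. by hashing to the group):

```python
# Pedersen commitment c = g^x * h^r in a prime-order subgroup (toy parameters).
import secrets

p, q = 2039, 1019                      # safe prime p = 2q + 1
g = pow(2, 2, p)                       # order-q element
h = pow(3, 2, p)                       # second order-q element; log_g(h) assumed unknown

def pedersen_commit(x):
    r = secrets.randbelow(q)
    return (pow(g, x, p) * pow(h, r, p)) % p, r

def pedersen_open(c, x, r):
    return c == (pow(g, x, p) * pow(h, r, p)) % p

x = 42
c, r = pedersen_commit(x)
print(pedersen_open(c, x, r))          # True
```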
These constructions are tightly related to and based on the algebraic properties of the underlying groups, and the notion originally seemed to be very much related to the algebra. However, it was shown that basing statistically binding commitment schemes on general unstructured assumptions is possible, via the notion of interactive hashing for commitments from general complexity assumptions (specifically and originally, based on any one-way permutation).[21]
Alice selectsN{\displaystyle N}such thatN=p⋅q{\displaystyle N=p\cdot q}, wherep{\displaystyle p}andq{\displaystyle q}are large secret prime numbers. Additionally, she selects a primee{\displaystyle e}such thate>N2{\displaystyle e>N^{2}}andgcd(e,ϕ(N2))=1{\displaystyle gcd(e,\phi (N^{2}))=1}. Alice then computes a public numbergm{\displaystyle g_{m}}as an element of maximum order in theZN2∗{\displaystyle \mathbb {Z} _{N^{2}}^{*}}group.[22]Finally, Alice commits to her secretm{\displaystyle m}by first generating a random numberr{\displaystyle r}fromZN2∗{\displaystyle \mathbb {Z} _{N^{2}}^{*}}and then by computingc=megmr{\displaystyle c=m^{e}g_{m}^{r}}.
The security of the above commitment relies on the hardness of the RSA problem and has perfect hiding and computational binding.[23]
The Pedersen commitment scheme introduces an interesting homomorphic property that allows performing addition between two commitments. More specifically, given two messagesm1{\displaystyle m_{1}}andm2{\displaystyle m_{2}}and randomnessr1{\displaystyle r_{1}}andr2{\displaystyle r_{2}}, respectively, it is possible to generate a new commitment such that:C(m1,r1)⋅C(m2,r2)=C(m1+m2,r1+r2){\displaystyle C(m_{1},r_{1})\cdot C(m_{2},r_{2})=C(m_{1}+m_{2},r_{1}+r_{2})}. Formally:
C(m1,r1)⋅C(m2,r2)=gm1hr1⋅gm2hr2=gm1+m2hr1+r2=C(m1+m2,r1+r2){\displaystyle C(m_{1},r_{1})\cdot C(m_{2},r_{2})=g^{m_{1}}h^{r_{1}}\cdot g^{m_{2}}h^{r_{2}}=g^{m_{1}+m_{2}}h^{r_{1}+r_{2}}=C(m_{1}+m_{2},r_{1}+r_{2})}
To open the above Pedersen commitment to a new message m1 + m2, the randomness r1 and r2 have to be added.
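This homomorphic property is easy to check numerically; the following sketch reuses the toy parameters from the Pedersen example above (again purely illustrative):

```python
# C(m1, r1) * C(m2, r2) == C(m1 + m2, r1 + r2) for Pedersen commitments (toy parameters).
p, q = 2039, 1019
g, h = pow(2, 2, p), pow(3, 2, p)

def C(m, r):
    return (pow(g, m, p) * pow(h, r, p)) % p

m1, r1 = 10, 111
m2, r2 = 25, 222

print((C(m1, r1) * C(m2, r2)) % p == C(m1 + m2, r1 + r2))   # True
```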
Similarly, the RSA-based commitment mentioned above has a homomorphic property with respect to the multiplication operation. Given two messagesm1{\displaystyle m_{1}}andm2{\displaystyle m_{2}}with randomnessr1{\displaystyle r_{1}}andr2{\displaystyle r_{2}}, respectively, one can compute:C(m1,r1)⋅C(m2,r2)=C(m1⋅m2,r1+r2){\displaystyle C(m_{1},r_{1})\cdot C(m_{2},r_{2})=C(m_{1}\cdot m_{2},r_{1}+r_{2})}. Formally:C(m1,r1)⋅C(m2,r2)=m1egmr1⋅m2egmr2=(m1⋅m2)egmr1+r2=C(m1⋅m2,r1+r2){\displaystyle C(m_{1},r_{1})\cdot C(m_{2},r_{2})=m_{1}^{e}g_{m}^{r_{1}}\cdot m_{2}^{e}g_{m}^{r_{2}}=(m_{1}\cdot m_{2})^{e}g_{m}^{r_{1}+r_{2}}=C(m_{1}\cdot m_{2},r_{1}+r_{2})}.
To open the above commitment to a new message m1 ⋅ m2, the randomness r1 and r2 have to be added. This newly generated commitment is distributed similarly to a new commitment to m1 ⋅ m2.
Some commitment schemes permit a proof to be given of only a portion of the committed value. In these schemes, the secret valueX{\displaystyle X}is a vector of many individually separable values.
The commitmentC{\displaystyle C}is computed fromX{\displaystyle X}in the commit phase. Normally, in the reveal phase, the prover would reveal all ofX{\displaystyle X}and some additional proof data (such asR{\displaystyle R}insimple bit-commitment). Instead, the prover is able to reveal any single value from theX{\displaystyle X}vector, and create an efficient proof that it is the authentici{\displaystyle i}th element of the original vector that created the commitmentC{\displaystyle C}. The proof does not require any values ofX{\displaystyle X}other thanxi{\displaystyle x_{i}}to be revealed, and it is impossible to create valid proofs that reveal different values for any of thexi{\displaystyle x_{i}}than the true one.[24]
Vector hashing is a naive vector commitment partial reveal scheme based on bit-commitment. Values m1, m2, ..., mn are chosen randomly. Individual commitments are created by hashing y1 = H(x1||m1), y2 = H(x2||m2), .... The overall commitment is computed as C = H(y1||y2||...||yn).
In order to prove one element xi of the vector X, the prover reveals the values xi, mi, and the remaining hashes yj for all j ≠ i.
The verifier is able to computeyi{\displaystyle y_{i}}fromxi{\displaystyle x_{i}}andmi{\displaystyle m_{i}}, and then is able to verify that the hash of ally{\displaystyle y}values is the commitmentC{\displaystyle C}.
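A small Python sketch of this naive scheme (illustration only; the variable names are ours):

```python
# Vector hashing: y_i = H(x_i || m_i), C = H(y_1 || ... || y_n); revealing x_i, m_i
# and the other y_j lets the verifier recompute C.
import hashlib
import secrets

def H(data):
    return hashlib.sha256(data).digest()

X = [b"alpha", b"beta", b"gamma", b"delta"]
M = [secrets.token_bytes(16) for _ in X]          # per-element blinding values
Y = [H(x + m) for x, m in zip(X, M)]
C = H(b"".join(Y))                                # the overall commitment

# Reveal element i = 2: send x_i, m_i and every y_j with j != i.
i = 2
proof = (X[i], M[i], [y for j, y in enumerate(Y) if j != i])

# Verifier: recompute y_i, splice it back in, and compare hashes.
x_i, m_i, other_y = proof
y_i = H(x_i + m_i)
print(H(b"".join(other_y[:i] + [y_i] + other_y[i:])) == C)   # True
```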
Unfortunately the proof isO(n){\displaystyle O(n)}in size and verification time. Alternately, ifC{\displaystyle C}is the set of ally{\displaystyle y}values, then the commitment isO(n){\displaystyle O(n)}in size, and the proof isO(1){\displaystyle O(1)}in size and verification time. Either way, the commitment or the proof scales withO(n){\displaystyle O(n)}which is not optimal.
A common example of a practical partial reveal scheme is aMerkle tree, in which a binary hash tree is created of the elements ofX{\displaystyle X}. This scheme creates commitments that areO(1){\displaystyle O(1)}in size, and proofs that areO(log2n){\displaystyle O(\log _{2}{n})}in size and verification time. The root hash of the tree is the commitmentC{\displaystyle C}. To prove that a revealedxi{\displaystyle x_{i}}is part of the original tree, onlylog2n{\displaystyle \log _{2}{n}}hash values from the tree, one from each level, must be revealed as the proof. The verifier is able to follow the path from the claimed leaf node all the way up to the root, hashing in the sibling nodes at each level, and eventually arriving at a root node value that must equalC{\displaystyle C}.[25]
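A compact Merkle-tree sketch in Python (illustrative, not a hardened library; it assumes the number of leaves is a power of two):

```python
# Merkle tree: commit to a list of leaves with one root hash; prove membership of a
# single leaf with log2(n) sibling hashes.
import hashlib

def H(data):
    return hashlib.sha256(data).digest()

def build_levels(leaves):
    levels = [[H(leaf) for leaf in leaves]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([H(prev[i] + prev[i + 1]) for i in range(0, len(prev), 2)])
    return levels                            # levels[-1][0] is the root commitment

def prove(levels, index):
    proof = []
    for level in levels[:-1]:
        proof.append(level[index ^ 1])       # sibling under the same parent
        index //= 2
    return proof

def verify(root, leaf, index, proof):
    node = H(leaf)
    for sibling in proof:
        node = H(node + sibling) if index % 2 == 0 else H(sibling + node)
        index //= 2
    return node == root

leaves = [b"a", b"b", b"c", b"d", b"e", b"f", b"g", b"h"]
levels = build_levels(leaves)
root = levels[-1][0]
proof = prove(levels, 5)
print(verify(root, b"f", 5, proof))   # True
print(verify(root, b"x", 5, proof))   # False
```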
A Kate-Zaverucha-Goldberg commitment usespairing-based cryptographyto build a partial reveal scheme withO(1){\displaystyle O(1)}commitment sizes, proof sizes, and proof verification time. In other words, asn{\displaystyle n}, the number of values inX{\displaystyle X}, increases, the commitments and proofs do not get larger, and the proofs do not take any more effort to verify.
A KZG commitment requires a predetermined set of parameters to create apairing, and a trusted trapdoor element. For example, aTate pairingcan be used. Assume thatG1,G2{\displaystyle \mathbb {G} _{1},\mathbb {G} _{2}}are the additive groups, andGT{\displaystyle \mathbb {G} _{T}}is the multiplicative group of the pairing. In other words, the pairing is the mape:G1×G2→GT{\displaystyle e:\mathbb {G} _{1}\times \mathbb {G} _{2}\rightarrow \mathbb {G} _{T}}. Lett∈Fp{\displaystyle t\in \mathbb {F} _{p}}be the trapdoor element (ifp{\displaystyle p}is the prime order ofG1{\displaystyle \mathbb {G} _{1}}andG2{\displaystyle \mathbb {G} _{2}}), and letG{\displaystyle G}andH{\displaystyle H}be the generators ofG1{\displaystyle \mathbb {G} _{1}}andG2{\displaystyle \mathbb {G} _{2}}respectively. As part of the parameter setup, we assume thatG⋅ti{\displaystyle G\cdot t^{i}}andH⋅ti{\displaystyle H\cdot t^{i}}are known and shared values for arbitrarily many positive integer values ofi{\displaystyle i}, while the trapdoor valuet{\displaystyle t}itself is discarded and known to no one.
A KZG commitment reformulates the vector of values to be committed as a polynomial. First, we calculate a polynomial such thatp(i)=xi{\displaystyle p(i)=x_{i}}for all values ofxi{\displaystyle x_{i}}in our vector.Lagrange interpolationallows us to compute that polynomial asp(x)=∑ixi∏j≠ix−ji−j{\displaystyle p(x)=\sum _{i}x_{i}\prod _{j\neq i}{\frac {x-j}{i-j}}}.
Under this formulation, the polynomial now encodes the vector, wherep(0)=x0,p(1)=x1,...{\displaystyle p(0)=x_{0},p(1)=x_{1},...}. Letp0,p1,...,pn−1{\displaystyle p_{0},p_{1},...,p_{n-1}}be the coefficients ofp{\displaystyle p}, such thatp(x)=∑i=0n−1pixi{\textstyle p(x)=\sum _{i=0}^{n-1}p_{i}x^{i}}. The commitment is calculated asC=∑i=0n−1pi⋅(G⋅ti){\displaystyle C=\sum _{i=0}^{n-1}p_{i}\cdot (G\cdot t^{i})}.
This is computed simply as adot productbetween the predetermined valuesG⋅ti{\displaystyle G\cdot t^{i}}and the polynomial coefficientspi{\displaystyle p_{i}}. SinceG1{\displaystyle \mathbb {G} _{1}}is an additive group with associativity and commutativity,C{\displaystyle C}is equal to simplyG⋅p(t){\displaystyle G\cdot p(t)}, since all the additions and multiplications withG{\displaystyle G}can be distributed out of the evaluation. Since the trapdoor valuet{\displaystyle t}is unknown, the commitmentC{\displaystyle C}is essentially the polynomial evaluated at a number known to no one, with the outcome obfuscated into an opaque element ofG1{\displaystyle \mathbb {G} _{1}}.
A KZG proof must demonstrate that the revealed data is the authentic value ofxi{\displaystyle x_{i}}whenC{\displaystyle C}was computed. Lety=xi{\displaystyle y=x_{i}}, the revealed value we must prove. Since the vector ofxi{\displaystyle x_{i}}was reformulated into a polynomial, we really need to prove that the polynomialp{\displaystyle p}, when evaluated ati{\displaystyle i}, takes on the valuey{\displaystyle y}. That is, we need to prove thatp(i)=y{\displaystyle p(i)=y}. We will do this by demonstrating that subtractingy{\displaystyle y}fromp{\displaystyle p}yields a root ati{\displaystyle i}. Define the polynomialq{\displaystyle q}asq(x)=p(x)−yx−i{\displaystyle q(x)={\frac {p(x)-y}{x-i}}}.
This polynomial is itself the proof thatp(i)=y{\displaystyle p(i)=y}, because ifq{\displaystyle q}exists, thenp(x)−y{\displaystyle p(x)-y}is divisible byx−i{\displaystyle x-i}, meaning it has a root ati{\displaystyle i}, sop(i)−y=0{\displaystyle p(i)-y=0}(or, in other words,p(i)=y{\displaystyle p(i)=y}). The KZG proof will demonstrate thatq{\displaystyle q}exists and has this property.
The prover computesq{\displaystyle q}through the above polynomial division, then calculates the KZG proof valueπ=∑iqi⋅(G⋅ti){\displaystyle \pi =\sum _{i}q_{i}\cdot (G\cdot t^{i})}, whereqi{\displaystyle q_{i}}are the coefficients ofq{\displaystyle q}.
This is equal toG⋅q(t){\displaystyle G\cdot q(t)}, as above. In other words, the proof value is the polynomialq{\displaystyle q}again evaluated at the trapdoor valuet{\displaystyle t}, hidden in the generatorG{\displaystyle G}ofG1{\displaystyle \mathbb {G} _{1}}.
This computation is only possible if the above polynomials were evenly divisible, because in that case the quotientq{\displaystyle q}is a polynomial, not arational function. Due to the construction of the trapdoor, it is not possible to evaluate a rational function at the trapdoor value, only to evaluate a polynomial using linear combinations of the precomputed known constants ofG⋅ti{\displaystyle G\cdot t^{i}}. This is why it is impossible to create a proof for an incorrect value ofxi{\displaystyle x_{i}}.
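The quotient-polynomial step can be illustrated without any pairing library. The Python sketch below performs the division q(x) = (p(x) − y)/(x − i) by synthetic division over a small illustrative prime field (a real system would work in the scalar field of the pairing) and checks that q(x)·(x − i) = p(x) − y; the concrete polynomial and modulus are arbitrary examples.

```python
# Computing the KZG quotient polynomial q(x) = (p(x) - y) / (x - i) over a prime field.
P = 7919                                       # illustrative field modulus, not a real parameter

def poly_eval(coeffs, x):
    """Evaluate a polynomial given by coefficients [p0, p1, ...] at x, mod P."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def quotient(coeffs, i, y):
    """Synthetic division of p(x) - y by (x - i); only possible when p(i) == y."""
    assert poly_eval(coeffs, i) == y % P, "cannot build a proof for an incorrect value"
    shifted = list(coeffs)
    shifted[0] = (shifted[0] - y) % P          # coefficients of p(x) - y
    q = [0] * (len(shifted) - 1)
    carry = 0
    for k in range(len(shifted) - 1, 0, -1):   # divide down from the leading coefficient
        q[k - 1] = (shifted[k] + carry) % P
        carry = (q[k - 1] * i) % P
    return q

p = [5, 0, 3, 1]                               # p(x) = 5 + 3x^2 + x^3 (arbitrary example)
i = 4
y = poly_eval(p, i)                            # the value being revealed
q = quotient(p, i, y)

# Sanity check: q(x) * (x - i) == p(x) - y at a few sample points.
for x in (1, 2, 3, 10):
    assert (poly_eval(q, x) * (x - i)) % P == (poly_eval(p, x) - y) % P
print("q(x) * (x - i) == p(x) - y holds")
```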
To verify the proof, the bilinear map of thepairingis used to show that the proof valueπ{\displaystyle \pi }summarizes a real polynomialq{\displaystyle q}that demonstrates the desired property, which is thatp(x)−y{\displaystyle p(x)-y}was evenly divided byx−i{\displaystyle x-i}. The verification computation checks the equalitye(π,H⋅t−H⋅i)=e(C−G⋅y,H){\displaystyle e(\pi ,H\cdot t-H\cdot i)=e(C-G\cdot y,H)},
wheree{\displaystyle e}is the bilinear map function as above.H⋅t{\displaystyle H\cdot t}is a precomputed constant,H⋅i{\displaystyle H\cdot i}is computed based oni{\displaystyle i}.
By rewriting the computation in the pairing groupGT{\displaystyle \mathbb {G} _{T}}, substituting inπ=q(t)⋅G{\displaystyle \pi =q(t)\cdot G}andC=p(t)⋅G{\displaystyle C=p(t)\cdot G}, and lettingτ(x)=e(G,H)x{\displaystyle \tau (x)=e(G,H)^{x}}be a helper function for lifting into the pairing group, the proof verification is more clear: the check becomesτ(q(t)⋅(t−i))=τ(p(t)−y){\displaystyle \tau (q(t)\cdot (t-i))=\tau (p(t)-y)}.
Assuming that the bilinear map is validly constructed, this demonstrates thatq(x)(x−i)=p(x)−y{\displaystyle q(x)(x-i)=p(x)-y}, without the validator knowing whatp{\displaystyle p}orq{\displaystyle q}are. The validator can be assured of this because ifτ(q(t)⋅(t−i))=τ(p(t)−y){\displaystyle \tau (q(t)\cdot (t-i))=\tau (p(t)-y)}, then the polynomials evaluate to the same output at the trapdoor valuex=t{\displaystyle x=t}. This demonstrates the polynomials are identical, because, if the parameters were validly constructed, the trapdoor value is known to no one, meaning that engineering a polynomial to have a specific value at the trapdoor is impossible (according to theSchwartz–Zippel lemma). Ifq(x)(x−i)=p(x)−y{\displaystyle q(x)(x-i)=p(x)-y}is now verified to be true, thenq{\displaystyle q}is verified to exist, thereforep(x)−y{\displaystyle p(x)-y}must be polynomial-divisible by(x−i){\displaystyle (x-i)}, sop(i)−y=0{\displaystyle p(i)-y=0}due to thefactor theorem. This proves that thei{\displaystyle i}th value of the committed vector must have equaledy{\displaystyle y}, since that is the output of evaluating the committed polynomial ati{\displaystyle i}.
The utility of the bilinear map pairing is to allow the multiplication ofq(x){\displaystyle q(x)}byx−i{\displaystyle x-i}to happen securely. These values truly lie inG1{\displaystyle \mathbb {G} _{1}}, where division is assumed to be computationally hard. For example,G1{\displaystyle \mathbb {G} _{1}}might be anelliptic curveover a finite field, as is common inelliptic-curve cryptography. Then, the division assumption is called theelliptic curve discrete logarithm problem, and this assumption is also what guards the trapdoor value from being computed, making it also a foundation of KZG commitments. In that case, we want to check ifq(x)(x−i)=p(x)−y{\displaystyle q(x)(x-i)=p(x)-y}. This cannot be done without a pairing, because with values on the curve ofG⋅q(x){\displaystyle G\cdot q(x)}andG⋅(x−i){\displaystyle G\cdot (x-i)}, we cannot computeG⋅(q(x)(x−i)){\displaystyle G\cdot (q(x)(x-i))}. That would violate thecomputational Diffie–Hellman assumption, a foundational assumption inelliptic-curve cryptography. We instead use apairingto sidestep this problem.q(x){\displaystyle q(x)}is still multiplied byG{\displaystyle G}to getG⋅q(x){\displaystyle G\cdot q(x)}, but the other side of the multiplication is done in the paired groupG2{\displaystyle \mathbb {G} _{2}}, so,H⋅(t−i){\displaystyle H\cdot (t-i)}. We computee(G⋅q(t),H⋅(t−i)){\displaystyle e(G\cdot q(t),H\cdot (t-i))}, which, due to thebilinearityof the map, is equal toe(G,H)q(t)⋅(t−i){\displaystyle e(G,H)^{q(t)\cdot (t-i)}}. In this output groupGT{\displaystyle \mathbb {G} _{T}}we still have thediscrete logarithm problem, so even though we know that value ande(G,H){\displaystyle e(G,H)}, we cannot extract the exponentq(t)⋅(t−i){\displaystyle q(t)\cdot (t-i)}, preventing any contradiction with the discrete logarithm assumption mentioned earlier. This value can be compared toe(G⋅(p(t)−y),H)=e(G,H)p(t)−y{\displaystyle e(G\cdot (p(t)-y),H)=e(G,H)^{p(t)-y}}though, and ife(G,H)q(t)⋅(t−i)=e(G,H)p(t)−y{\displaystyle e(G,H)^{q(t)\cdot (t-i)}=e(G,H)^{p(t)-y}}we are able to conclude thatq(t)⋅(t−i)=p(t)−y{\displaystyle q(t)\cdot (t-i)=p(t)-y}, without ever knowing what the actual value oft{\displaystyle t}is, let aloneq(t)(t−i){\displaystyle q(t)(t-i)}.
Additionally, a KZG commitment can be extended to prove the values of any arbitraryk{\displaystyle k}values ofX{\displaystyle X}(not just one value), with the proof size remainingO(1){\displaystyle O(1)}, but the proof verification time scales withO(k){\displaystyle O(k)}. The proof is the same, but instead of subtracting a constanty{\displaystyle y}, we subtract a polynomial that causes multiple roots, at all the locations we want to prove, and instead of dividing byx−i{\displaystyle x-i}we divide by∏ix−i{\textstyle \prod _{i}x-i}for those same locations.[26]
It is an interesting question inquantum cryptographyifunconditionally securebit commitment protocols exist on the quantum level, that is, protocols which are (at least asymptotically) binding and concealing even if there are no restrictions on the computational resources. One could hope that there might be a way to exploit the intrinsic properties ofquantum mechanics, as in the protocols forunconditionally secure key distribution.
However, this is impossible, as Dominic Mayers showed in 1996 (see[27]for the original proof). Any such protocol can be reduced to a protocol where the system is in one of two pure states after the commitment phase, depending on the bit Alice wants to commit. If the protocol is unconditionally concealing, then Alice can unitarily transform these states into each other using the properties of theSchmidt decomposition, effectively defeating the binding property.
One subtle assumption of the proof is that the commit phase must be finished at some point in time. This leaves room for protocols that require a continuing information flow until the bit is unveiled or the protocol is cancelled, in which case it is not binding anymore.[28]More generally, Mayers' proof applies only to protocols that exploitquantum physicsbut notspecial relativity. Kent has shown that there exist unconditionally secure protocols for bit commitment that exploit the principle ofspecial relativitystating that information cannot travel faster than light.[29]
Physical unclonable functions(PUFs) rely on the use of a physical key with internal randomness, which is hard to clone or to emulate. Electronic, optical and other types of PUFs[30]have been discussed extensively in the literature, in connection with their potential cryptographic applications including commitment schemes.[31][32]
|
https://en.wikipedia.org/wiki/Pedersen_commitment
|
TheDigital Signature Algorithm(DSA) is apublic-key cryptosystemandFederal Information Processing Standardfordigital signatures, based on the mathematical concept ofmodular exponentiationand thediscrete logarithm problem. In a public-key cryptosystem, a pair of private and public keys are created: data encrypted with either key can only be decrypted with the other. This means that a signing entity that declared their public key can generate anencrypted signatureusing their private key, and a verifier can assert the source if it is decrypted correctly using the declared public key. DSA is a variant of theSchnorrandElGamalsignature schemes.[1]: 486
TheNational Institute of Standards and Technology(NIST) proposed DSA for use in theirDigital Signature Standard(DSS) in 1991, and adopted it as FIPS 186 in 1994.[2]Five revisions to the initial specification have been released. The newest specification isFIPS 186-5, from February 2023.[3]DSA is patented but NIST has made this patent available worldwide royalty-free. SpecificationFIPS 186-5indicates DSA will no longer be approved for digital signature generation, but may be used to verify signatures generated prior to the implementation date of that standard.
The DSA works in the framework of public-key cryptosystems and is based on the algebraic properties ofmodular exponentiation, together with thediscrete logarithm problem, which is considered to be computationally intractable. The algorithm uses a key pair consisting of a public key and a private key. The private key is used to generate a digital signature for a message, and such a signature can be verified by using the signer's corresponding public key. The digital signature providesmessage authentication(the receiver can verify the origin of the message),integrity(the receiver can verify that the message has not been modified since it was signed) andnon-repudiation(the sender cannot falsely claim that they have not signed the message).
In 1982, the U.S. government solicited proposals for a public key signature standard. In August 1991 theNational Institute of Standards and Technology(NIST) proposed DSA for use in their Digital Signature Standard (DSS). Initially there was significant criticism, especially fromsoftwarecompanies that had already invested effort in developing digital signature software based on theRSA cryptosystem.[1]: 484Nevertheless, NIST adopted DSA as a Federal standard (FIPS 186) in 1994. Five revisions to the initial specification have been released: FIPS 186–1 in 1998,[4]FIPS 186–2 in 2000,[5]FIPS 186–3 in 2009,[6]FIPS 186–4 in 2013,[3]and FIPS 186–5 in 2023.[7]Standard FIPS 186-5 forbids signing with DSA, while allowing verification of signatures generated prior to the standard's implementation date. It is to be replaced by newer signature schemes such asEdDSA.[8]
DSA is covered byU.S. patent 5,231,668, filed July 26, 1991 and now expired, and attributed to David W. Kravitz,[9]a formerNSAemployee. This patent was given to "The United States of America as represented by theSecretary of Commerce, Washington, D.C.", and NIST has made this patent available worldwide royalty-free.[10]Claus P. Schnorrclaims that hisU.S. patent 4,995,082(also now expired) covered DSA; this claim is disputed.[11]
In 1993, Dave Banisar obtained confirmation, via aFOIArequest, that the DSA algorithm was designed not by NIST but by the NSA.[12]
OpenSSHannounced that DSA support would be removed in 2025; it was entirely dropped in version 10.0.[13][14]
The DSA algorithm involves four operations: key generation (which creates the key pair), key distribution, signing and signature verification.
Key generation has two phases. The first phase is a choice ofalgorithm parameterswhich may be shared between different users of the system, while the second phase computes a single key pair for one user.
The algorithm parameters are (p{\displaystyle p},q{\displaystyle q},g{\displaystyle g}). These may be shared between different users of the system.
Given a set of parameters, the second phase computes the key pair for a single user: choose an integerx{\displaystyle x}uniformly at random from{1,…,q−1}{\displaystyle \{1,\ldots ,q-1\}}, and computey=gxmodp{\displaystyle y=g^{x}{\bmod {\,}}p}.
x{\displaystyle x}is the private key andy{\displaystyle y}is the public key.
The signer should publish the public keyy{\displaystyle y}. That is, they should send the key to the receiver via a reliable, but not necessarily secret, mechanism. The signer should keep the private keyx{\displaystyle x}secret.
A messagem{\displaystyle m}is signed as follows: choose a per-message random integerk{\displaystyle k}from{1,…,q−1}{\displaystyle \{1,\ldots ,q-1\}}; computer=(gkmodp)modq{\displaystyle r=(g^{k}{\bmod {\,}}p){\bmod {\,}}q}ands=k−1(H(m)+xr)modq{\displaystyle s=k^{-1}(H(m)+xr){\bmod {\,}}q}, whereH{\displaystyle H}is the hash function; in the unlikely case thatr=0{\displaystyle r=0}ors=0{\displaystyle s=0}, start again with a differentk{\displaystyle k}.
The signature is(r,s){\displaystyle \left(r,s\right)}
The calculation ofk{\displaystyle k}andr{\displaystyle r}amounts to creating a new per-message key. The modular exponentiation in computingr{\displaystyle r}is the most computationally expensive part of the signing operation, but it may be computed before the message is known.
Calculating the modular inversek−1modq{\displaystyle k^{-1}{\bmod {\,}}q}is the second most expensive part, and it may also be computed before the message is known. It may be computed using theextended Euclidean algorithmor usingFermat's little theoremaskq−2modq{\displaystyle k^{q-2}{\bmod {\,}}q}.
One can verify that a signature(r,s){\displaystyle \left(r,s\right)}is a valid signature for a messagem{\displaystyle m}as follows: reject the signature unless0<r<q{\displaystyle 0<r<q}and0<s<q{\displaystyle 0<s<q}; computew=s−1modq{\displaystyle w=s^{-1}{\bmod {\,}}q},u1=H(m)⋅wmodq{\displaystyle u_{1}=H(m)\cdot w{\bmod {\,}}q},u2=r⋅wmodq{\displaystyle u_{2}=r\cdot w{\bmod {\,}}q}, andv=(gu1yu2modp)modq{\displaystyle v=(g^{u_{1}}y^{u_{2}}{\bmod {\,}}p){\bmod {\,}}q}; the signature is valid if and only ifv=r{\displaystyle v=r}.
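The key generation, signing, and verification steps just described can be condensed into a short Python sketch. The parameters below (p = 607, q = 101, and g derived as h^((p−1)/q) with h = 2) are deliberately tiny and insecure, chosen only so the arithmetic is visible; SHA-256 reduced mod q stands in for the hash function, and none of these values come from the text.

```python
import hashlib, secrets

# Toy DSA over deliberately tiny, insecure parameters (real DSA uses e.g. a 2048-bit p and 256-bit q).
p, q = 607, 101               # q is prime and divides p - 1  (607 - 1 = 6 * 101)
g = pow(2, (p - 1) // q, p)   # g = h^((p-1)/q) mod p with h = 2; here g = 64, which has order q

def Hm(message: bytes) -> int:
    """Message hash reduced mod q (SHA-256 used as an illustrative choice of H)."""
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % q

def keygen():
    x = secrets.randbelow(q - 1) + 1      # private key in {1, ..., q-1}
    return x, pow(g, x, p)                # (private key x, public key y)

def sign(x: int, message: bytes):
    while True:
        k = secrets.randbelow(q - 1) + 1  # fresh per-message secret
        r = pow(g, k, p) % q
        if r == 0:
            continue
        s = (pow(k, -1, q) * (Hm(message) + x * r)) % q   # modular inverse needs Python 3.8+
        if s != 0:
            return r, s

def verify(y: int, message: bytes, r: int, s: int) -> bool:
    if not (0 < r < q and 0 < s < q):
        return False
    w = pow(s, -1, q)
    u1 = (Hm(message) * w) % q
    u2 = (r * w) % q
    v = (pow(g, u1, p) * pow(y, u2, p)) % p % q
    return v == r

x, y = keygen()
r, s = sign(x, b"hello")
print(verify(y, b"hello", r, s))       # True
# With such a tiny q, an unrelated message can collide mod q by chance, but usually:
print(verify(y, b"goodbye", r, s))     # False
```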
The signature scheme is correct in the sense that the verifier will always accept genuine signatures. This can be shown as follows:
First, sinceg=h(p−1)/qmodp{\textstyle g=h^{(p-1)/q}~{\text{mod}}~p}, it follows thatgq≡hp−1≡1modp{\textstyle g^{q}\equiv h^{p-1}\equiv 1\mod p}byFermat's little theorem. Sinceg>1{\displaystyle g>1}andq{\displaystyle q}is prime,g{\displaystyle g}must have orderq{\displaystyle q}.
The signer computess=k−1(H(m)+xr)modq{\displaystyle s=k^{-1}(H(m)+xr){\bmod {\,}}q}.
Thusk≡H(m)s−1+xrs−1≡H(m)w+xrw(modq){\displaystyle k\equiv H(m)s^{-1}+xrs^{-1}\equiv H(m)w+xrw{\pmod {q}}}.
Sinceg{\displaystyle g}has orderq{\displaystyle q}we havegk≡gH(m)wgxrw≡gH(m)wyrw≡gu1yu2(modp){\displaystyle g^{k}\equiv g^{H(m)w}g^{xrw}\equiv g^{H(m)w}y^{rw}\equiv g^{u_{1}}y^{u_{2}}{\pmod {p}}}.
Finally, the correctness of DSA follows fromr=(gkmodp)modq=(gu1yu2modp)modq=v{\displaystyle r=(g^{k}{\bmod {\,}}p){\bmod {\,}}q=(g^{u_{1}}y^{u_{2}}{\bmod {\,}}p){\bmod {\,}}q=v}.
With DSA, the entropy, secrecy, and uniqueness of the random signature valuek{\displaystyle k}are critical. These properties are so critical that violating any one of them can reveal the entire private key to an attacker.[17]Using the same value twice (even while keepingk{\displaystyle k}secret), using a predictable value, or leaking even a few bits ofk{\displaystyle k}in each of several signatures, is enough to reveal the private keyx{\displaystyle x}.[18]
This issue affects both DSA and Elliptic Curve Digital Signature Algorithm (ECDSA) – in December 2010, the groupfail0verflowannounced the recovery of theECDSAprivate key used bySonyto sign software for thePlayStation 3game console. The attack was made possible because Sony failed to generate a new randomk{\displaystyle k}for each signature.[19]
This issue can be prevented by derivingk{\displaystyle k}deterministically from the private key and the message hash, as described byRFC6979. This ensures thatk{\displaystyle k}is different for eachH(m){\displaystyle H(m)}and unpredictable for attackers who do not know the private keyx{\displaystyle x}.
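The idea of a deterministic per-message k can be sketched as follows. This is only an illustration of the principle; it is not the RFC 6979 construction, which specifies an exact HMAC_DRBG-based derivation that production code should follow, and the HMAC key label used here is an arbitrary placeholder.

```python
import hashlib, hmac

# Illustration only: derive k pseudorandomly from the private key and the message hash,
# so the same (key, message) pair always yields the same k, and k stays unpredictable to
# anyone who does not know the private key. RFC 6979 specifies an exact HMAC_DRBG
# procedure instead of this ad-hoc sketch.

def deterministic_k(x: int, h_m: int, q: int) -> int:
    data = x.to_bytes(32, "big") + h_m.to_bytes(32, "big")   # assumes x, h_m < 2**256
    while True:
        digest = hmac.new(b"illustrative-nonce-label", data, hashlib.sha256).digest()
        k = int.from_bytes(digest, "big") % q                # slight modular bias; fine for a sketch
        if k != 0:
            return k
        data = digest   # re-derive in the practically impossible case that k == 0
```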
In addition, malicious implementations of DSA and ECDSA can be created wherek{\displaystyle k}is chosen in order tosubliminallyleak information via signatures. For example, anoffline private keycould be leaked from a perfect offline device that only released innocent-looking signatures.[20]
Below is a list of cryptographic libraries that provide support for DSA:
|
https://en.wikipedia.org/wiki/Digital_Signature_Algorithm
|
Morse codeis atelecommunicationsmethod whichencodestextcharacters as standardized sequences of two different signal durations, calleddotsanddashes, orditsanddahs.[3][4]Morse code is named afterSamuel Morse, one of the early developers of the system adopted forelectrical telegraphy.
International Morse codeencodes the 26basic Latin lettersAtoZ, oneaccentedLatin letter (É), theArabic numerals, and a small set of punctuation and procedural signals (prosigns). There is no distinction between upper and lower case letters.[1]Each Morse code symbol is formed by a sequence ofditsanddahs. Theditduration can vary for signal clarity and operator skill, but for any one message, once therhythmis established, ahalf-beatis the basic unit of time measurement in Morse code. The duration of adahis three times the duration of adit(although some telegraphers deliberately exaggerate the length of adahfor clearer signalling). Eachditordahwithin an encoded character is followed by a period of signal absence, called aspace, equal to theditduration. The letters of a word areseparated bya space of duration equal to threedits, and words are separated by a space equal to sevendits.[1][5][a]
Morse code can be memorized and sent in a form perceptible to the human senses, e.g. via sound waves or visible light, such that it can be directly interpreted by persons trained in the skill.[7][8]Morse code is usually transmitted byon-off keyingof an information-carrying medium such as electric current, radio waves, visible light, or sound waves.[9][10]The current or wave is present during the time period of theditordahand absent during the time betweenditsanddahs.[11][12]
Since many natural languages use more than the 26 letters of theLatin alphabet,Morse alphabetshave been developed for those languages, largely by transliteration of existing codes.[13]
To increase the efficiency of transmission, Morse code was originally designed so that the duration of each symbol is approximatelyinverse to the frequency of occurrenceof the character that it represents in text of the English language. Thus the most common letter in English, the letterE, has the shortest code – a singledit. Because the Morse code elements are specified by proportion rather than specific time durations, the code is usually transmitted at the highest rate that the receiver is capable of decoding. Morse code transmission rate (speed) is specified ingroups per minute, commonly referred to aswords per minute.[b][7]
Early in the nineteenth century, European experimenters made progress with electrical signaling systems, using a variety of techniques includingstatic electricityand electricity fromVoltaic pilesproducingelectrochemicalandelectromagneticchanges. These experimental designs were precursors to practical telegraphic applications.[14]
Following the discovery ofelectromagnetismbyHans Christian Ørstedin 1820 and the invention of theelectromagnetbyWilliam Sturgeonin 1824, there were developments inelectromagnetic telegraphyin Europe and America. Pulses ofelectric currentwere sent along wires to control an electromagnet in the receiving instrument. Many of the earliest telegraph systems used a single-needle system which gave a very simple and robust instrument. However, it was slow, as the receiving operator had to alternate between looking at the needle and writing down the message. In Morse code, a deflection of the needle to the left corresponded to aditand a deflection to the right to adah.[15]The needle clicked each time it moved to the right or left. By making the two clicks sound different (by installing one ivory and one metal stop), transmissions on the single needle device became audible as well as visible, which led in turn to theDouble PlateSounderSystem.[16]
William CookeandCharles WheatstoneinBritaindeveloped an electrical telegraph that used electromagnets in its receivers. They obtained an English patent in June 1837 and demonstrated it on the London and Birmingham Railway, making it the first commercial telegraph.Carl Friedrich GaussandWilhelm Eduard Weber(1833) as well asCarl August von Steinheil(1837) used codes with varying word lengths for their telegraph systems.[17]In 1841, Cooke and Wheatstone built a telegraph that printed the letters from a wheel of typefaces struck by a hammer.[18]: 79
The American artistSamuel Morse, the AmericanphysicistJoseph Henry, and mechanical engineerAlfred Vaildeveloped anelectrical telegraphsystem. The simple "on or off" nature of its signals made it desirable to find a method of transmitting natural language using only electrical pulses and the silence between them. Around 1837, Morse therefore developed such a method, an early forerunner to the modern International Morse code.[18]: 79
The Morse system fortelegraphy, which was first used in about 1844, was designed to make indentations on a paper tape when electric currents were received. Morse's original telegraph receiver used a mechanical clockwork to move a paper tape. When an electrical current was received, an electromagnet engaged an armature that pushed a stylus onto the moving paper tape, making an indentation on the tape. When the current was interrupted, a spring retracted the stylus and that portion of the moving tape remained unmarked. Morse code was developed so that operators could translate the indentations marked on the paper tape into text messages.
In his earliest design for a code, Morse had planned to transmit only numerals, and to use a codebook to look up each word according to the number which had been sent. However, the code was soon expanded byAlfred Vailin 1840 to include letters and special characters, so it could be used more generally. Vail estimated theletter frequencyof English by counting themovable typehe found in thetype casesof a local newspaper inMorristown, New Jersey.[18]: 84The shorter marks were called "dots" and the longer ones "dashes", and the letters most commonly used were assigned the shortest sequences of dots and dashes. This code, first used in 1844, was what later became known asMorse landline code,American Morse code, orRailroad Morse, until the end of railroad telegraphy in the U.S. in the 1970s.[citation needed]
In the original Morse telegraph system, the receiver's armature made a clicking noise as it moved in and out of position to mark the paper tape. Early telegraph operators soon learned that they could translate the clicks directly into dots and dashes, and write these down by hand, thus making the paper tape unnecessary. When Morse code was adapted toradio communication, the dots and dashes were sent as short and long tone pulses.
Later telegraphy training found that people become more proficient at receiving Morse code when it is taught "like a language", with each code perceived as a whole "word" instead of a sequence of separate dots and dashes, such as might be shown on a page.[19]
With the advent of tones produced by radiotelegraph receivers, the operators began to vocalize a dot asdit, and a dash asdah, to reflect the sounds of Morse code they heard. To conform to normal sending speed,ditswhich are not the last element of a code became voiced asdi. For example, theletterL(▄ ▄▄▄ ▄ ▄) is voiced asdi dah di dit.[20][21]Morse code was sometimes facetiously known as "iddy-umpty", aditlampooned as "iddy" and adahas "umpty", leading to the word "umpteen".[22]
The Morse code, as specified in the current international standard,International Morse Code Recommendation,ITU-RM.1677-1,[1]was derived from a much-improved proposal byFriedrich Gerkein 1848 that became known as the "Hamburg alphabet", its only real defect being the use of an excessively long code (▄ ▄▄▄ ▄ ▄ ▄and later the equal duration code▄▄▄ ▄▄▄ ▄▄▄) for the frequently used vowelO.
Gerke changed many of the codepoints, in the process doing away with the different length dashes and different inter-element spaces ofAmerican Morse, leaving only two coding elements, the dot and the dash. Codes forGermanumlautedvowels andCHwere introduced. Gerke's code was adopted in Germany and Austria in 1851.[23]
This finally led to the International Morse code in 1865. The International Morse code adopted most of Gerke's codepoints. The codes forOandPwere taken from a code system developed by Steinheil. A new codepoint was added forJsince Gerke did not distinguish betweenIandJ. Changes were also made toX,Y, andZ. The codes for the digits0–9in International Morse were completely revised from both Morse's original and Gerke's revised systems. This left only four codepoints identical to the original Morse code, namelyE,H,KandN, and the latter two had theirdahsextended to full length. The original American code being compared dates to 1838; the later American code shown in the table was developed in 1844.[17]
In the 1890s, Morse code began to be used extensively for earlyradiocommunication before it was possible to transmit voice. In the late 19th and early 20th centuries, most high-speed international communication used Morse code on telegraph lines, undersea cables, and radio circuits.
Although previous transmitters were bulky and thespark gap system of transmissionwas dangerous and difficult to use, there had been some early attempts: In 1910, the U.S. Navy experimented with sending Morse from an airplane.[24]However the first regular aviation radiotelegraphy was onairships, which had space to accommodate the large, heavy radio equipment then in use. The same year, 1910, a radio on the airshipAmericawas instrumental in coordinating the rescue of its crew.[25]
DuringWorld War I,Zeppelin airshipsequipped with radio were used for bombing and naval scouting,[26]and ground-based radio direction finders were used for airship navigation.[26]Allied airships and military aircraft also made some use of radiotelegraphy.
However, there was little aeronautical radio in general use duringWorld War I, and in the 1920s, there was no radio system used by such important flights as that ofCharles LindberghfromNew YorktoParisin 1927. Once he and theSpirit of St. Louiswere off the ground, Lindbergh was truly incommunicado and alone. Morse code in aviation began regular use in the mid-1920s. By 1928, when the first airplane flight was made by theSouthern Crossfrom California to Australia, one of its four crewmen was a radio operator who communicated with ground stations viaradio telegraph.
Beginning in the 1930s, both civilian and military pilots were required to be able to use Morse code, both for use with early communications systems and for identification of navigational beacons that transmitted continuous two- or three-letter identifiers in Morse code.Aeronautical chartsshow the identifier of each navigational aid next to its location on the map.
In addition, rapidly moving field armies could not have fought effectively without radiotelegraphy; they moved more quickly than their communications services could put up new telegraph and telephone lines. This was seen especially in theblitzkriegoffensives of theNazi GermanWehrmachtinPoland,Belgium,France(in 1940), theSoviet Union, and inNorth Africa; by theBritish ArmyinNorth Africa,Italy, and theNetherlands; and by theU.S. Armyin France and Belgium (in 1944), and in southern Germany in 1945.
Radiotelegraphy using Morse code was vital duringWorld War II, especially in carrying messages between thewarshipsand thenaval basesof the belligerents. Long-range ship-to-ship communication was by radio telegraphy, usingencryptedmessages because the voice radio systems on ships then were quite limited in both their range and their security. Radiotelegraphy was also extensively used bywarplanes, especially by long-rangepatrol planesthat were sent out by navies to scout for enemy warships, cargo ships, and troop ships.
Morse code was used as an international standard for maritime distress until 1999 when it was replaced by theGlobal Maritime Distress and Safety System. When theFrench Navyceased using Morse code on January 31, 1997, the final message transmitted was"Calling all. This is our last call before our eternal silence."[27]
In the United States the final commercial Morse code transmission was on July 12, 1999, signing off with Samuel Morse's original 1844 message,WHAT HATH GOD WROUGHT, and theprosignSK("end of contact").[28]
As of 2015, theUnited States Air Forcestill trains ten people a year in Morse.[29]
TheUnited States Coast Guardhas ceased all use of Morse code on the radio, and no longer monitors anyradio frequenciesfor Morse code transmissions, including the internationalmedium frequency(MF) distress frequency of500 kHz.[30]However, theFederal Communications Commissionstill grants commercial radiotelegraph operator licenses to applicants who pass its code and written tests.[31]Licensees have reactivated the old California coastal Morse stationKPHand regularly transmit from the site under either thiscall signor as KSM. Similarly, a few U.S.museum shipstations are operated by Morse enthusiasts.[32]
Morse code speed is measured inwords per minute(WPM) or characters per minute (CPM). Characters have differing lengths because they contain differing numbers ofditsanddahs. Consequently, words also have different lengths in terms of dot duration, even when they contain the same number of characters. For this reason, some standard word is adopted for measuring operators' transmission speeds: Two such standard words in common use arePARISandCODEX.[33]Operators skilled in Morse code can often understand ("copy") code in their heads at rates in excess of 40WPM.
In addition to knowing, understanding, and being able to copy the standard written alpha-numeric and punctuation characters or symbols at high speeds, skilled high-speed operators must also be fully knowledgeable of all of the special unwritten Morse code symbols for the standardProsigns for Morse codeand the meanings of these special procedural signals in standard Morse codecommunications protocol.
International contests in code copying are still occasionally held. In July 1939 at a contest inAsheville, North Carolinain the United States, Theodore Roosevelt McElroy (W1JYN) set a still-standing record for Morse copying, 75.2WPM.[34]Pierpont (2004) also notes that some operators may have passed 100WPM.[34]By this time, they are "hearing" phrases and sentences rather than words. The fastest speed ever sent by a straight key was achieved in 1942 by Harry Turner (W9YZE) (d. 1992) who reached 35WPMin a demonstration at a U.S. Army base. To accurately compare code copying speed records of different eras it is useful to keep in mind that different standard words (50 dit durations versus 60 dit durations) and different interword gaps (5 dit durations versus 7 dit durations) may have been used when determining such speed records. For example, speeds run with theCODEXstandard word and thePARISstandard may differ by up to 20%.
Today among amateur operators there are several organizations that recognize high-speed code ability, one group consisting of those who can copy Morse at 60WPM.[35]Also, Certificates of Code Proficiency are issued by several amateur radio societies, including theAmerican Radio Relay League. Their basic award starts at 10WPMwith endorsements as high as 40WPM, and are available to anyone who can copy the transmitted text. Members of theBoy Scouts of Americamay put a Morse interpreter's strip on their uniforms if they meet the standards for translating code at 5WPM.
Through May 2013, the First, Second, and Third Class (commercial) Radiotelegraph Licenses using code tests based upon theCODEXstandard word were still being issued in the United States by the Federal Communications Commission. The First Class license required 20WPMcode group and 25WPMtext code proficiency, the others 16WPMcode group test (five letter blocks sent as simulation of receiving encrypted text) and 20WPMcode text (plain language) test. It was also necessary to pass written tests on operating practice and electronics theory. A unique additional demand for the First Class was a requirement of a year of experience for operators of shipboard and coast stations using Morse. This allowed the holder to be chief operator on board a passenger ship. However, since 1999 the use of satellite and very high-frequency maritime communications systems (GMDSS) has made them obsolete. (By that point meeting experience requirement for the First was very difficult.)
Currently, only one class of license, the Radiotelegraph Operator License, is issued. This is granted either when the tests are passed or as the Second and First are renewed and become this lifetime license. For new applicants, it requires passing a written examination on electronic theory and radiotelegraphy practices, as well as 16WPMcode-group and 20WPMtext tests. However, the code exams are currently waived for holders of Amateur Extra Class licenses who obtained their operating privileges under the old 20WPMtest requirement.
Morse codes of one version or another have been in use for more than 160 years — longer than any otherelectricalmessage encoding system. What is called Morse code today is actually somewhat different from what was originally developed by Vail and Morse. The Modern International Morse code, orcontinental code, was created byFriedrich Clemens Gerkein 1848 and initially used for telegraphy betweenHamburgandCuxhavenin Germany. Gerke changed nearly half of the alphabet and all of thenumerals, providing the foundation for the modern form of the code. After some minor changes to the letters and a complete revision of the numerals, International Morse Code was standardized by the International Telegraphy Congress in 1865 in Paris, and later became the standard adopted by theInternational Telecommunication Union(ITU). Morse and Vail's final code specification, however, was really used only for land-line telegraphy in the United States and Canada, with the International code used everywhere else, including all ships at sea and sailing in North American waters. Morse's version became known asAmerican Morse codeorrailroad code, and is now almost never used, with the possible exception of historical re-enactments.
Inaviation, pilots useradio navigationaids. To allow pilots to ensure that the stations they intend to use are serviceable, the stations transmit a set of identification letters (usually a two-to-five-letter version of the station name) in Morse code. Station identification letters are shown on air navigation charts. For example, theVOR-DMEbased atVilo Acuña AirportinCayo Largo del Sur, Cubais identified by "UCL", and Morse codeUCLis repeatedly transmitted on its radio frequency.
In some countries, during periods of maintenance, the facility may instead transmit the signalTEST(▄▄▄ ▄ ▄ ▄ ▄ ▄▄▄), or theidentificationmay be removed, which tellspilotsandnavigatorsthat the station is unreliable. In Canada, the identification is removed entirely to signify the navigation aid is not to be used.[36][37]
In the aviation service, Morse is typically sent at a very slow speed of about 5 words per minute. In the U.S., pilots do not actually have to know Morse to identify the transmitter because the dot/dash sequence is written out next to the transmitter's symbol on aeronautical charts. Some modern navigation receivers automatically translate the code into displayed letters.
International Morse code today is most popular amongamateur radiooperators, in the mode commonly referred to as "continuous wave" or "CW".[e]Other, faster keying methods are available in radio telegraphy, such asfrequency-shift keying(FSK).
The original amateur radio operators used Morse code exclusively since voice-capable radio transmitters did not become commonly available until around 1920. Until 2003, theInternational Telecommunication Unionmandated Morse code proficiency as part of the amateur radio licensing procedure worldwide. However, theWorld Radiocommunication Conferenceof 2003 made the Morse code requirement for amateur radio licensing optional.[39]Many countries subsequently removed the Morse requirement from their license requirements.[40]
Until 1991, a demonstration of the ability to send and receive Morse code at a minimum of five words per minute (WPM) was required to receive an amateur radio license for use in the United States from theFederal Communications Commission. Demonstration of this ability was still required for the privilege to use theshortwave bands. Until 2000, proficiency at the 20WPMlevel was required to receive the highest level of amateur license (Amateur Extra Class); effective April 15, 2000, the FCC reduced the Extra Class requirement to 5WPM.[41]Finally, effective on February 23, 2007, the FCC eliminated the Morse code proficiency requirements from all amateur radio licenses.
While voice and data transmissions are limited to specific amateur radio bands under U.S. rules, Morse code is permitted on all amateur bands:LF,MF low,MF high,HF,VHF, andUHF. In some countries, certain portions of the amateur radio bands are reserved for transmission of Morse code signals only.
Because Morse code transmissions employ anon-off keyedradio signal, it requires less complex equipment than otherradio transmission modes. Morse code also uses lessbandwidth(typically only 100–150Hzwide, although only for a slow data rate) than voice communication (roughly 2,400~2,800 Hz used bySSB voice).
Morse code is usually received as a high-pitched audio tone, so transmissions are easier to copy than voice through the noise on congested frequencies, and it can be used in very high noise / low signal environments. The fact that the transmitted power is concentrated into a very limited bandwidth makes it possible to use narrow receiver filters, which suppress or eliminate interference on nearby frequencies. The narrow signal bandwidth also takes advantage of the natural aural selectivity of the human brain, further enhancing weak signal readability.[citation needed]This efficiency makes CW extremely useful forDX (long distance) transmissions, as well as for low-power transmissions (commonly called "QRP operation", from theQ-codefor "reduce power"). There are several amateur clubs that require solid high speed copy, the highest of these has a standard of 60WPM. TheAmerican Radio Relay Leagueoffers a code proficiency certification program that starts at 10WPM.
The relatively limited speed at which Morse code can be sent led to the development of an extensive number of abbreviations to speed communication. These include prosigns,Q codes, and a set ofMorse code abbreviationsfor typical message components. For example,CQis broadcast to be interpreted as "seek you" (I'd like to converse with anyone who can hear my signal). The abbreviationsOM(old man),YL(young lady), andXYL("ex-young lady" – wife) are common.YLorOMis used by an operator when referring to the other operator (regardless of their actual age), andXYLorOM(rather than the expectedXYM) is used by an operator when referring to his or her spouse.QTHis "transmitting location" (spoken "my Q.T.H." is "my location"). The use of abbreviations for common terms permits conversation even when the operators speak different languages.
Although the traditionaltelegraph key(straight key) is still used by some amateurs, the use of mechanical semi-automatickeyers[d](informally called "bugs"), and of fully automatic electronickeyers(called "single paddle" and either "double-paddle" or "iambic" keys) is prevalent today.Softwareis also frequently employed to produce and decode Morse code radio signals. TheARRLhas a readability standard for robot encoders calledARRL Farnsworth spacing[42]that is supposed to have higher readability for both robot and human decoders. Some programs like WinMorse[43]have implemented the standard.
Radio navigation aids such asVORsandNDBsfor aeronautical use broadcast identifying information in the form of Morse Code, though manyVORstations now also provide voice identification.[44]Warships, including those of theU.S. Navy, have long usedsignal lampsto exchange messages in Morse code. Modern use continues, in part, as a way to communicate while maintainingradio silence.
Automatic Transmitter Identification System(ATIS) uses Morse code to identify uplink sources of analog satellite transmissions.
Manyamateur radio repeatersidentify with Morse, even though they are used for voice communications.
An important application is signalling for help throughSOS, "▄ ▄ ▄ ▄▄▄ ▄▄▄ ▄▄▄ ▄ ▄ ▄". This can be sent many ways: keying a radio on and off, flashing a mirror, toggling a flashlight, and similar methods. TheSOSsignal is not sent as three separate characters; rather, it is aprosignSOS, and is keyed without gaps between characters.[45]
Morse code has been employed as anassistive technology, helping people with a variety ofdisabilitiesto communicate.[46][47][f][49]For example, the Android operating system versions 5.0 and higher allow users to input text using Morse Code as an alternative to a keypad orhandwriting recognition.[50]
Morse can be sent by persons with severe motion disabilities, as long as they have some minimal motor control. One early solution to the problem that caretakers had to learn to decode was an electronic typewriter with the codes written on the keys. Codes were sung by users; see the voice typewriter employing Morse or votem.[51]
Morse code can also be translated by computer and used in a speaking communication aid. In some cases, this means alternately blowing into and sucking on a plastic tube ("sip-and-puff" interface). An important advantage of Morse code overrow column scanningis that once learned, it does not require looking at a display. Also, it appears faster than scanning.
In one case reported in the radio amateur magazineQST,[52]an old shipboard radio operator who had astrokeand lost the ability to speak or write could communicate with his physician (a radio amateur) by blinking his eyes in Morse. Two examples of communication in intensive care units were also published inQST magazine.[53][54]Another example occurred in 1966 whenprisoner of warJeremiah Denton, brought on television by hisNorth Vietnamesecaptors, Morse-blinked the wordTORTURE. In these two cases, interpreters were available to understand those series of eye-blinks.
International Morse code is composed of five elements:[1]: §3 a short mark, dot ordit, of one time unit in duration; a long mark, dash ordah, of three time units; an inter-element gap of one unit between theditsanddahswithin a character; a short gap of three units between letters; and a medium gap of seven units between words.
Morse code can be transmitted in a number of ways: Originally as electrical pulses along atelegraphwire, but later extended to an audio tone, a radio signal with short and long tones, or high and low tones, or as a mechanical, audible, or visual signal (e.g. a flashing light) using devices like anAldis lampor aheliograph, a common flashlight, or even a car horn. Some mine rescues have used pulling on a rope: a short pull for a dit and a long pull for a dah. Ground forces send messages to aircraft with panel signalling, where a horizontal panel is a dah and a vertical panel a dit.[55]
Morse messages are generally transmitted by a hand-operated device such as atelegraph key, so there are variations introduced by the skill of the sender and receiver — more experienced operators can send and receive at faster speeds. In addition, individual operators differ slightly, for example, using slightly longer or shorterdahsor gaps, perhaps only for particular characters. This is called their "fist", and experienced operators can recognize specific individuals by it alone. A good operator who sends clearly and is easy to copy is said to have a "good fist". A "poor fist" is a characteristic of sloppy or hard to copy Morse code.
Morse code is transmitted using just two states (on and off). Morse code may be represented as a binary code, and that is what telegraph operators do when transmitting messages. Working from the above ITU definition and further defining abitas a dot time, a Morse code sequence may be crudely represented as a combination of the following five bit-strings: a dit as '1'; a dah as '111'; the gap between the elements within a character as '0'; the gap between letters as '000'; and the gap between words as '0000000'.
The marks and gaps alternate:ditsanddahsare always separated by one of the gaps, and the gaps are always separated by aditor adah.
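A small Python sketch can generate this bit-string form directly from the five elements just described. The Morse table below is a partial, illustrative one covering only the letters needed for the example.

```python
# Build the on/off keying bit-string for a phrase, using the five elements described above:
# dit = "1", dah = "111", gap inside a letter = "0", gap between letters = "000",
# gap between words = "0000000".

MORSE = {                       # partial International Morse table, enough for this example
    "M": "--", "O": "---", "R": ".-.", "S": "...", "E": ".",
    "C": "-.-.", "D": "-..",
}

def keying(text: str) -> str:
    words = []
    for word in text.upper().split():
        letters = []
        for ch in word:
            elements = ["1" if e == "." else "111" for e in MORSE[ch]]
            letters.append("0".join(elements))          # one-dit gap between elements
        words.append("000".join(letters))               # three-dit gap between letters
    return "0000000".join(words)                        # seven-dit gap between words

print(keying("MORSE CODE"))
```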
A more efficient binary encoding uses only two bits for eachditordahelement, with the 1dit-length pause that must follow after each automatically included for every two-bit code. One possible coding, ordered by the length of the signal tone sent, uses '01'b for aditand the automatic single-dit pause after it, '11'b for adahand the automatic single-ditfollowing pause, and '00'b for theextrapause between letters (in effect, an end-of-letter mark). That leaves the code '10'b available for some other purpose, such as an escape character, or to more compactly represent theextraspace between words (an end-of-word mark) instead of '00 00 00'b (only 6ditlengths, since the 7th is automatically inserted as part of the priorditordah). Although theditand inter-letter pauses work out to be the same, for any letter containing adah, the two-bit encoding uses digital memory more compactly than the direct-conversion bit strings mentioned above. Including the letter-separating spaces, all International Morse letter codes pack into 12 bits or less (5 symbols), and most fit into 10 bits or less (4 symbols); most of theprocedural signsfit into 14 bits, with a few only needing 12 bits (5 symbols); and all digits require exactly 12 bits.
For example, MorseG(▄▄▄ ▄▄▄ ▄+ 2extraempty dits for "end of letter") would binary-encode as '11'b, '11'b, '01'b, '00'b; when packed it is '1111 0100'b = 'F4'x, which stores into only onebyte(twonibbles) (as does every three-element code). Using the longer bit-string method mentioned earlier, the same letter would encode as '1110'b, '1110'b, '1000'b = '1110 1110 1000'b = 'EE8'x, or one-and-a-half bytes (three nibbles). The space saving allows small devices, like portable memory keyers, to have more and longer International Morse code sequences in small, conventional device-drivermicroprocessors'RAMchips.
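The two worked encodings of the letter G can be checked in a few lines of Python; the bit-strings below are taken directly from the example in the text.

```python
# Reproduce the worked example for the letter G (dah dah dit).

# Two-bit encoding: dah -> '11', dit -> '01', end-of-letter mark -> '00'.
two_bit = "11" + "11" + "01" + "00"
assert int(two_bit, 2) == 0xF4          # packs into one byte, 'F4'x

# Direct bit-string encoding: each dah plus its trailing one-dit gap -> '1110',
# the final dit plus the three-dit end-of-letter gap -> '1000'.
direct = "1110" + "1110" + "1000"
assert int(direct, 2) == 0xEE8          # one and a half bytes, 'EE8'x

print(hex(int(two_bit, 2)), hex(int(direct, 2)))
```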
The very longtime constantsof 19th and early 20th centurysubmarine communications cablesrequired a different form of Morse signalling. Instead of keying a voltage on and off for varying times, the dits and dahs were represented by two polarities of voltage impressed on the cable, for a uniform time.[56]
Below is an illustration of timing conventions. The phraseMORSE CODE, in Morse code format, would normally be written something like this, where–representsdahsand·representsdits:–– ––– ·–· ··· ·   –·–· ––– –·· ·
Next is the exact conventional timing for this phrase, with▓representing "signal on", and˽representing "signal off", each for the time length of exactly one dit:
Morse code is often spoken or written withdahfor dashes,ditfor dots located at the end of a character, anddifor dots located at the beginning or internally within the character. Thus, the following Morse code sequence:–– ––– ·–· ··· ·   –·–· ––– –·· ·
is spoken (or sung):
Dah dahdah dah dahdi dah ditdi di ditdit,Dah di dah ditdah dah dahdah di ditdit.
For use on radio, there is little point in learning to readwrittenMorse as above; rather, thesoundsof all of the letters and symbols need to be learned, for both sending and receiving.
All Morse code elements depend on the dot /ditlength. Adahis the length of 3 dits (with no gaps between), and spacings are specified in number ofditlengths. An unambiguous method of specifying the transmission speed is to specify theditduration as, for example,50milliseconds.
Specifying theditduration is, however, not the common practice. Usually, speeds are stated in words per minute. That introduces ambiguity because words have different numbers of characters, and characters have differentditlengths. It is not immediately clear how a specific word rate determines theditduration in milliseconds.
Some method to standardize the transformation of a word rate to aditduration is useful. A simple way to do this is to choose aditduration that would send a typical word the desired number of times in one minute. If, for example, the operator wanted a character speed of 13 words per minute, the operator would choose aditrate that would send the typical word 13 times in exactly one minute.
The typical word thus determines theditlength. It is common to assume that a word is 5 characters long. There are two common typical words:PARISandCODEX.PARISmimics a word rate that is typical of natural language words and reflects the benefits of Morse code's shorter code durations for common characters such asEandT.CODEXoffers a word rate that is typical of 5 letter code groups (sequences of random letters). Using the wordPARISas a standard, the number ofditunits is 50 and a simple calculation shows that theditlength at 20 words per minute is60 milliseconds. Using the wordCODEXwith 60 dit units, theditlength at 20 words per minute is50 milliseconds.
Because Morse code is usually sent by hand, it is unlikely that an operator could be that precise with theditlength, and the individual characteristics and preferences of the operators usually override the standards.
For commercial radiotelegraph licenses in the United States, the Federal Communications Commission specifies tests for Morse code proficiency in words per minute and in code groups per minute.[57]: §13.207(c), §13.209(d)TheFCCspecifies that a "word" is 5 characters long. The Commission specifies Morse code test elements at 16 code groups per minute, 20 words per minute, 20 code groups per minute, and 25 words per minute.[57]: §13.203(b)The word per minute rate would be close to thePARISstandard, and the code groups per minute would be close to theCODEXstandard.
While the Federal Communications Commission no longer requires Morse code for amateur radio licenses, the old requirements were similar to the requirements for commercial radiotelegraph licenses.[57]: §97.503, 1996
A difference between amateur radio licenses and commercial radiotelegraph licenses is that commercial operators must be able to receive code groups of random characters along with plain language text. For each class of license, the code group speed requirement is slower than the plain language text requirement. For example, for the Radiotelegraph Operator License, the examinee must pass a 20 word per minute plain text test and a 16 word per minute code group test.[31]
Based upon a 50 dit duration standard word such asPARIS, the time for oneditduration or one unit can be computed by the formula:T=1200W{\displaystyle T={\frac {1200}{W}}}
where:Tis the unit time, orditduration in milliseconds, andWis the speed inWPM.
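As a quick illustration, the formula can be evaluated for the two standard words mentioned in this section; the helper below simply divides the 60,000 ms in a minute by the total number of dit units sent.

```python
def dit_ms(wpm: float, units_per_word: int = 50) -> float:
    """Dit duration in milliseconds: 60,000 ms per minute / (units_per_word * wpm)."""
    return 60_000 / (units_per_word * wpm)

print(dit_ms(20))                      # PARIS (50 units): 60.0 ms, i.e. T = 1200 / W
print(dit_ms(20, units_per_word=60))   # CODEX (60 units): 50.0 ms
```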
High-speed telegraphycontests are held; according to theGuinness Book of Recordsin June 2005 at theInternational Amateur Radio Union's 6th World Championship in High Speed Telegraphy inPrimorsko, Bulgaria, Andrei Bindasov ofBelarustransmitted 230 Morse code marks of mixed text in one minute.[58]
Sometimes, especially while teaching Morse code, the timing rules above are changed so two different speeds are used: A character speed and a text speed. The character speed is how fast each individual letter is sent. The text speed is how fast the entire message is sent. For example, individual characters may be sent at a 13 words-per-minute rate, but the intercharacter and interword gaps may be lengthened so the word rate is only 5 words per minute.
Using different character and text speeds is, in fact, a common practice, and is used in the Farnsworth method oflearning Morse code.
Some methods of teaching Morse code use adichotomic searchtable.
People learning Morse code using theFarnsworth methodare taught to send and receive letters and other symbols at their full target speed, that is with normal relative timing of thedits,dahs, and spaces within each symbol for that speed. The Farnsworth method is named for Donald R. "Russ" Farnsworth, also known by hiscall sign, W6TTB. However, initially exaggerated spaces between symbols and words are used, to give "thinking time" to make the sound "shape" of the letters and symbols easier to learn. The spacing can then be reduced with practice and familiarity.
Another popular teaching method is theKoch method, invented in 1935 by the German engineer and formerstormtrooperLudwig Koch,[59]which uses the full target speed from the outset but begins with just two characters. Once strings containing those two characters can be copied with 90% accuracy, an additional character is added, and so on until the full character set is mastered.
In North America, many thousands of individuals have increased their code recognition speed (after initial memorization of the characters) by listening to the regularly scheduled code practice transmissions broadcast byW1AW, the American Radio Relay League's headquarters station.[60]As of 2015, the United States military taught Morse code as an 81-day self-paced course, having phased out more traditional classes.[61]
Visual mnemonic charts have been devised over the ages.Baden-Powellincluded one in theGirl Guideshandbook[62]in 1918.
In the United Kingdom, many people learned the Morse code by means of a series of words or phrases that have the same rhythm as a Morse character. For instance,Qin Morse isdah dah di dah,which can be memorized by the phrase "God Save the Queen", and the Morse forFisdi di dah dit,which can be memorized as "Did she like it?"[g]
Most numbers have an unofficial short-form, given in the table below. They are only used when both the sender and the receiver understand that numbers, and not letters, are intended;[citation needed]for example, one often sees the most commonR-S-T signal reportrendered as5NN[‡]instead of599.[citation needed]
Table notes
Prosigns for Morse code are special (usually) unwritten procedural signals or symbols that are used to indicate changes incommunications protocolstatus orwhite spacetext formatting actions.
The symbols [!], [$], and [&] are not defined inside the officialITU-RInternational Morse Code Recommendation,[1]but informal conventions for them exist. (The [@] symbol was formally added in 2004.)
The typical tactic for creating Morse codes fordiacriticsand non-Latinalphabetic scripts has been to begin by simply re-using the International Morse codes already used for letters whose sound matches the sound of the local alphabet. BecauseGerke code(the predecessor to International Morse) was in official use in central Europe,[23]and included four characters not included in the International Morse standard (Ä,Ö,Ü, andCH), these four have served as a beginning-point for other languages that use analphabeticscript, but require codes for letters not accommodated by International Morse.
The usual method has been to first transliterate the sounds represented by the International code and the four unique Gerke codes into the local alphabet, henceGreek, Hebrew,Russian, and UkrainianMorse codes. If more codes are needed, one can either invent a new code or convert an otherwise unused code from either code set to the non-Latin letter. For example:
For Russian and Bulgarian, Russian Morse code maps the Cyrillic characters to four-element codes. Many of those characters are encoded the same as their Latin-alphabet look-alikes or sound-alikes (A, O, E, I, T, M, N, R, K, etc.). The Bulgarian alphabet contains 30 characters, which exactly matches the number of all possible sequences of one to four dits and dahs (2 + 4 + 8 + 16 = 30); Russian Ы is used as Bulgarian Ь, and Russian Ь is used as Bulgarian Ъ. Russian requires two more codes, for the letters Э and Ъ, which are each encoded with 5 elements.
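A quick enumeration (added for illustration) confirms the count of 30:

from itertools import product

# All distinct Morse patterns of one to four elements: 2 + 4 + 8 + 16 = 30.
patterns = [''.join(p) for n in range(1, 5) for p in product('.-', repeat=n)]
print(len(patterns))        # 30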
Non-alphabetic scripts require more radical adaptation. Japanese Morse code (Wabun code) has a separate encoding for kana script; although many of the codes are used for International Morse, the sounds they represent are mostly unrelated. The Japanese / Wabun code includes special prosigns for switching back and forth from International Morse: ▄▄▄ ▄ ▄ ▄▄▄ ▄▄▄ ▄▄▄ signals a switch from International Morse to Wabun, and ▄ ▄ ▄ ▄▄▄ ▄ signals a return from Wabun to International Morse.
For Chinese, Chinese telegraph code is used to map Chinese characters to four-digit codes, which are then sent as standard Morse code digits. Korean Morse code[67] uses the SKATS mapping, originally developed to allow Korean to be typed on Western typewriters. SKATS maps hangul characters to arbitrary letters of the Latin script and has no relationship to pronunciation in Korean.
During early World War I (1914–1916), Germany briefly experimented with 'dotty' and 'dashy' Morse, in essence adding a dot or a dash at the end of each Morse symbol. Each one was quickly broken by Allied SIGINT, and standard Morse was resumed by spring 1916. Only a small percentage of Western Front (North Atlantic and Mediterranean Sea) traffic was in 'dotty' or 'dashy' Morse during the entire war. In popular culture, this is mostly remembered in the book The Codebreakers by David Kahn and in the national archives of the UK and Australia, whose SIGINT operators copied most of this Morse variant. Kahn's cited sources come from the popular press and wireless magazines of the time.[68]
Other variations include forms of "fractional Morse" or "fractionated Morse", which recombine the characters of the Morse-encoded message and then encrypt them with a cipher in order to disguise the text.[69]
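One published variant, the "Fractionated Morse" cipher of the American Cryptogram Association, writes the message as a dot/dash stream with 'x' separators, splits the stream into trigraphs, and substitutes a letter for each trigraph. The Python sketch below illustrates the general idea; the key word, trigraph ordering, and padding rule used here are illustrative assumptions rather than a definitive specification.

from itertools import product

MORSE = {
    'A': '.-',   'B': '-...', 'C': '-.-.', 'D': '-..',  'E': '.',
    'F': '..-.', 'G': '--.',  'H': '....', 'I': '..',   'J': '.---',
    'K': '-.-',  'L': '.-..', 'M': '--',   'N': '-.',   'O': '---',
    'P': '.--.', 'Q': '--.-', 'R': '.-.',  'S': '...',  'T': '-',
    'U': '..-',  'V': '...-', 'W': '.--',  'X': '-..-', 'Y': '-.--',
    'Z': '--..',
}

def keyed_alphabet(key):
    """Key word first (without repeated letters), then the rest of A-Z."""
    seen = []
    for ch in key.upper() + 'ABCDEFGHIJKLMNOPQRSTUVWXYZ':
        if ch.isalpha() and ch not in seen:
            seen.append(ch)
    return ''.join(seen)

def encrypt(plaintext, key):
    # 1. Fractionate: Morse with 'x' between letters and 'xx' between words.
    words = plaintext.upper().split()
    stream = 'xx'.join('x'.join(MORSE[c] for c in w if c in MORSE) for w in words)
    stream += 'x' * (-len(stream) % 3)                # pad to a multiple of 3
    # 2. Substitute: each trigraph over {., -, x} except 'xxx' maps to a letter.
    trigraphs = [''.join(t) for t in product('.-x', repeat=3) if ''.join(t) != 'xxx']
    table = dict(zip(trigraphs, keyed_alphabet(key)))
    return ''.join(table[stream[i:i + 3]] for i in range(0, len(stream), 3))

print(encrypt('COME AT ONCE', 'ROUNDTABLE'))

Because the trigraph boundaries do not line up with letter boundaries, the ciphertext hides the characteristic lengths of the Morse characters, which is the point of the fractionation step.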
Decoding software for Morse code ranges from software-defined wide-band radio receivers coupled to the Reverse Beacon Network,[70] which decodes signals and detects CQ messages on ham bands, to smartphone applications.[71]
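At their core, such decoders must turn tone durations back into dits and dahs before looking the pattern up. The sketch below is an assumption about how a very simple decoder might do this, not a description of any particular product.

# Classify key-down durations as dits or dahs relative to an estimated dit
# length, then map the resulting pattern to a character (table excerpt only).
MORSE_TO_CHAR = {'.-': 'A', '-...': 'B', '-.-.': 'C', '...': 'S', '---': 'O'}

def classify(durations_ms):
    unit = min(durations_ms)                    # crude dit-length estimate
    return ''.join('-' if d > 2 * unit else '.' for d in durations_ms)

# Key-down times in milliseconds for one symbol at roughly 20 WPM:
print(MORSE_TO_CHAR[classify([180, 60, 175, 62])])   # C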
|
https://en.wikipedia.org/wiki/Morse_code
|
Cryptographic primitives are well-established, low-level cryptographic algorithms that are frequently used to build cryptographic protocols for computer security systems.[1] These routines include, but are not limited to, one-way hash functions and encryption functions.
When creating cryptographic systems, designers use cryptographic primitives as their most basic building blocks. Because of this, cryptographic primitives are designed to do one very specific task in a precisely defined and highly reliable fashion.
Since cryptographic primitives are used as building blocks, they must be very reliable, i.e. perform according to their specification. For example, if an encryption routine claims to be breakable only with X computer operations, and it is broken with significantly fewer than X operations, then that cryptographic primitive has failed. If a cryptographic primitive is found to fail, almost every protocol that uses it becomes vulnerable. Since creating cryptographic routines is very hard, and testing them for reliability takes a long time, it is essentially never sensible (nor secure) to design a new cryptographic primitive to suit the needs of a new cryptographic system: a new routine cannot accumulate the years of public analysis and testing that give confidence in the established ones.
Cryptographic primitives are one of the building blocks of every cryptosystem, e.g., TLS, SSL, SSH, etc. Cryptosystem designers, not being in a position to definitively prove their security, must take the primitives they use as secure. Choosing the best primitive available for use in a protocol usually provides the best available security. However, compositional weaknesses are possible in any cryptosystem and it is the responsibility of the designer(s) to avoid them.
Cryptographic primitives are not cryptographic systems, as they are quite limited on their own. For example, a bare encryption algorithm will provide no authentication mechanism, nor any explicit message integrity checking. Only when combined in security protocols can more than one security requirement be addressed. For example, to transmit a message that is not only encrypted but also protected from tampering (i.e. it is confidential and integrity-protected), an encryption routine, such as DES, and a hash routine such as SHA-1 can be used in combination. If the attacker does not know the encryption key, they cannot modify the message such that the message digest value(s) would be valid.
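A minimal encrypt-then-MAC sketch of this kind of composition is shown below. It substitutes modern stand-ins for the dated examples above: AES-CTR (via the third-party Python "cryptography" package, assumed to be installed) in place of DES, and HMAC-SHA-256 from the standard library in place of a bare SHA-1 digest.

import hmac, hashlib, os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def protect(message: bytes, enc_key: bytes, mac_key: bytes) -> bytes:
    # Encrypt for confidentiality, then MAC the ciphertext for integrity.
    nonce = os.urandom(16)
    encryptor = Cipher(algorithms.AES(enc_key), modes.CTR(nonce)).encryptor()
    ciphertext = nonce + encryptor.update(message) + encryptor.finalize()
    tag = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
    return ciphertext + tag

def unprotect(blob: bytes, enc_key: bytes, mac_key: bytes) -> bytes:
    # Verify the MAC before decrypting; reject anything that was modified.
    ciphertext, tag = blob[:-32], blob[-32:]
    expected = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("message was tampered with")
    nonce, body = ciphertext[:16], ciphertext[16:]
    decryptor = Cipher(algorithms.AES(enc_key), modes.CTR(nonce)).decryptor()
    return decryptor.update(body) + decryptor.finalize()

enc_key, mac_key = os.urandom(32), os.urandom(32)
blob = protect(b"attack at dawn", enc_key, mac_key)
print(unprotect(blob, enc_key, mac_key))

Using separate keys for encryption and authentication, and checking the tag before decrypting, are the conventional precautions for this composition; the security still rests entirely on the two underlying primitives.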
Combining cryptographic primitives to make a security protocol is itself an entire specialization. Most exploitable errors (i.e., insecurities in cryptosystems) are due not to design errors in the primitives (assuming always that they were chosen with care), but to the way they are used, i.e. bad protocol design and buggy or careless implementation. Mathematical analysis of protocols is, at the time of this writing, not mature.[citation needed] There are some basic properties that can be verified with automated methods, such as BAN logic. There are even methods for full verification (e.g. the SPI calculus), but they are extremely cumbersome and cannot be automated. Protocol design is an art requiring deep knowledge and much practice; even then mistakes are common. An illustrative example, for a real system, can be seen on the OpenSSL vulnerability news page.
|
https://en.wikipedia.org/wiki/Cryptographic_primitive#Message_authentication_codes
|