Software maintenanceis the modification of software after delivery. Software maintenance is often considered lower skilled and less rewarding than new development. As such, it is a common target for outsourcing oroffshoring. Usually, the team developing the software is different from those who will be maintaining it. The developers lack an incentive to write the code to be easily maintained. Software is often delivered incomplete and almost always contains some bugs that the maintenance team must fix. Software maintenance often initially includes the development of new functionality, but as the product nears the end of its lifespan, maintenance is reduced to the bare minimum and then cut off entirely before the product is withdrawn. Each maintenance cycle begins with a change request typically originating from an end user. That request is evaluated and if it is decided to implement it, the programmer studies the existing code to understand how it works before implementing the change. Testing to make sure the existing functionality is retained and the desired new functionality is added often comprises the majority of the maintenance cost. Software maintenance is not as well studied as other phases of the software life cycle, despite comprising the majority of costs. Understanding has not changed significantly since the 1980s. Software maintenance can be categorized into several types depending on whether it is preventative or reactive and whether it is seeking to add functionality or preserve existing functionality, the latter typically in the face of a changed environment. In the early 1970s, companies began to separate out software maintenance with its own team of engineers to free upsoftware developmentteams from support tasks.[1]In 1972, R. G. Canning published "The Maintenance 'Iceberg'", in which he contended that software maintenance was an extension of software development with an additional input: the existing system.[1]The discipline of software maintenance has changed little since then.[2]One twenty-first century innovation has been companies deliberately releasing incomplete software and planning to finish it post-release. This type of change, and others that expand functionality, is often calledsoftware evolutioninstead of maintenance.[2] Despitetestingandquality assurance, virtually all software containsbugswhere the system does not work as intended. Post-release maintenance is necessary to remediate these bugs when they are found.[3]Most software is a combination of pre-existingcommercial off-the-shelf(COTS) andopen-source softwarecomponents with custom-written code. 
COTS and open-source software is typically updated over time, which can reduce the maintenance burden, but the modifications to these software components will need to be adjusted for in the final product.[4]Unlikesoftware development, which is focused on meeting specified requirements, software maintenance is driven by events—such as user requests or detection of a bug.[5]Its main purpose is to preserve the usefulness of the software, usually in the face of changing requirements.[6] If conceived of as part of thesoftware development life cycle, maintenance is the last and typically the longest phase of the cycle,[7][8]comprising 80 to 90 percent of the lifecycle cost.[9]Other models consider maintenance separate from software development, instead as part of the software maintenance life cycle (SMLC).[8]SMLC models typically include understanding the code, modifying it, and revalidating it.[8] Frequently, software is delivered in an incomplete state. Developers will test a product until running out of time or funding, because they face fewer consequences for an imperfect product than going over time or budget.[10]The transition from the development to the maintenance team is often inefficient, without lists of known issues or validation tests, which the maintenance team will likely recreate.[11]After release, members of the development team are likely to be reassigned or otherwise become unavailable. The maintenance team will require additional resources for the first year after release, both fortechnical supportand fixing defects left over from development.[10] Initially, software may go through a period of enhancements after release. New features are added according to user feedback. At some point, the company may decide that it is no longer profitable to make functional improvements, and restrict support to bug fixing and emergency updates. Changes become increasingly difficult and expensive due to lack of expertise or decaying architecture due tosoftware aging. After a product is no longer maintained, and does not receive even this limited level of updating, some vendors will seek to extract revenue from the software as long as possible, even though the product is likely to become increasingly avoided. Eventually, the software will be withdrawn from the market, although it may remain in use. 
During this process, the software becomes a legacy system.[12]

The first step in the change cycle is receiving a change request from a customer and analyzing it to confirm the problem and decide whether to implement the change.[13] This may require input from multiple departments; for example, the marketing team can help evaluate whether the change is expected to bring more business.[14] Software development effort estimation is a difficult problem, including for maintenance change requests,[15] but a request is likely to be declined if it is too expensive or infeasible.[16] If it is decided to implement the request, it can be assigned to a scheduled release and implemented.[16] Although agile methodology does not have a maintenance phase,[17] the change cycle can be enacted as a scrum sprint.[18]

Understanding existing code is an essential step before modifying it.[2] The rate of understanding depends on the code base as well as the skill of the programmer.[19] Following coding conventions, such as using clear function and variable names that correspond to their purpose, makes understanding easier.[20] Using conditional and loop statements only if the code could execute more than once, and eliminating code that will never execute, can also increase understandability.[21] Experienced programmers have an easier time understanding what the code does at a high level.[22] Software visualization is sometimes used to speed up this process.[23]

Modification of the code can take place in very different ways. On the one hand, it is common to haphazardly apply a quick fix without being granted enough time to update the code documentation.[24] On the other hand, structured iterative enhancement can begin by changing the top-level requirements document and propagating the change down to lower levels of the system.[25] Modification often includes code refactoring (improving the structure without changing functionality) and restructuring (improving structure and functionality at the same time).[26] Unlike commercial software, free and open source software change cycles are largely restricted to coding and testing, with minimal documentation. Open-source software projects instead rely on mailing lists and a large number of contributors to understand the code base and fix bugs efficiently.[27]

An additional problem with maintenance is that nearly every change to code will introduce new bugs or unexpected ripple effects, which require another round of fixes.[2] Testing can consume the majority of maintenance resources for safety-critical code, due to the need to revalidate the entire software if any changes are made.[28] Revalidation may include code review, regression testing with a subset of unit tests, integration tests, and system tests.[26] The goal of this testing is to verify that previous functionality is retained and that the desired new functionality has been added.[29]

The key purpose of software maintenance is ensuring that the product continues to meet usability requirements.
At times, this may mean extending the product's capabilities beyond what was initially envisioned.[30] According to theISO/IEC14764 specification, software maintenance can be classified into four types:[31] According to some estimates, enhancement (the latter two categories) comprises some 80 percent of software maintenance.[35] Maintainability is the quality of software enabling it to be easily modified without breaking existing functionality.[31]According to the ISO/IEC 14764 specification, activity to ensure software maintainability prior to release counts as part of software maintenance.[5]Many software development organizations neglect maintainability, even though doing so will increase long-term costs.[36]Technical debtis incurred when programmers, often out of laziness or urgency to meet a deadline, choose quick and dirty solutions rather than build maintainability into their code.[37]A common cause is underestimates insoftware development effort estimation, leading to insufficient resources allocated to development.[38]One important aspect is having a large amount of automatedsoftware teststhat can detect if existing functionality is compromised by a change.[31] A challenge with maintainability is that manysoftware engineeringcourses do not emphasize it, and give out one-and-done assignments that have clear and unchanging specifications.[39]Software engineering courses do not cover systems as complex as occur in the real world.[40]Development engineers who know that they will not be responsible for maintaining the software do not have an incentive to build in maintainability.[2] Maintenance is often considered an unrewarding job forsoftware engineers, who, if assigned to maintenance, were more likely to quit.[41][42]It often pays less than a comparable job in software development.[42]The task is often assigned to temporary workers or lesser-skilled staff,[2][43]although maintenance engineers are also typically older than developers, partly because they must be familiar with outdated technologies.[43]In 2008, around 900,000 of the 1.3 million software engineers and programmers working in the United States were doing maintenance.[44] Companies started separate teams for maintenance, which led tooutsourcingthis work to a different company, and by the turn of the twenty-first century, sometimesoffshoringthe work to another country—whether as part of the original company or a separate entity.[45][9]The typical sources of outsourcing are developed countries such as the United States, the United Kingdom, Japan, and Australia, while destinations are usually lower-cost countries such as China, India, Russia, and Ireland.[46]Reasons for offshoring include taking advantage of lower labor costs, enabling around-the-clock support, reducing time pressure on developers, and to move support closer to the market for the product.[47]Downsides of offshoring include communication barriers in the form of such factors astime zoneand organizational disjunction and cultural differences.[9]Despite many employers considering maintenance lower-skilled work and the phase of software development most suited to offshoring,[9][48]it requires close communication with the customer and rapid response, both of which are hampered by these communication difficulties.[9] In software engineering, the termlegacy systemdoes not have a fixed meaning, but often refers to older systems which are large, difficult to modify, and also necessary for current business needs. 
Often legacy systems are written in obsolete programming languages, lack documentation, have a deteriorating structure after years of changes, and depend on experts to keep them operational.[49] When dealing with these systems, at some point so much technical debt accumulates that maintenance is not practical or economical.[12] Other choices include:

Despite taking up the lion's share of software development resources, maintenance is the least studied phase of software development.[56][57] Much of the literature has focused on how to develop maintainable code from the outset, with less focus on motivating engineers to make maintainability a priority.[58] As of 2020, automated solutions for code refactoring to reduce maintenance effort are an active area of research,[59] as is machine-learning-enhanced maintainability assessment.[60]
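As an illustration of the automated regression tests discussed above, the following minimal sketch (in C, with an invented pricing rule and function name) shows how existing behaviour can be pinned down so that a maintenance change which breaks it is detected before release.

```c
/* Hypothetical regression test guarding the existing behaviour of a
   legacy routine while maintenance changes are made to it. */
#include <assert.h>
#include <stdio.h>

/* Stand-in for an existing function under maintenance: 1% discount per
   loyalty year, capped at 20%, on a price expressed in cents. */
static long legacy_discount_cents(long price_cents, int loyalty_years)
{
    int rate = loyalty_years;
    if (rate > 20)
        rate = 20;
    return price_cents * (100 - rate) / 100;
}

int main(void)
{
    /* Any maintenance change that alters these established results
       is caught here before the release is shipped. */
    assert(legacy_discount_cents(10000, 0)  == 10000);  /* no discount   */
    assert(legacy_discount_cents(10000, 5)  == 9500);   /* 5% off        */
    assert(legacy_discount_cents(10000, 50) == 8000);   /* capped at 20% */
    puts("regression tests passed: existing functionality retained");
    return 0;
}
```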
https://en.wikipedia.org/wiki/Software_maintenance
Ininformation security, computer science, and other fields, theprinciple of least privilege(PoLP), also known as theprinciple of minimal privilege(PoMP) or theprinciple of least authority(PoLA), requires that in a particularabstraction layerof a computing environment, every module (such as a process, a user, or a program, depending on the subject) must be able to access only the information and resources that are necessary for its legitimate purpose.[1] The principle means giving any user accounts or processes only those privileges which are essentially vital to perform its intended functions. For example, a user account for the sole purpose of creating backups does not need to install software: hence, it has rights only to run backup and backup-related applications. Any other privileges, such as installing new software, are blocked. The principle applies also to a personal computer user who usually does work in a normal user account, and opens a privileged, password protected account only when the situation absolutely demands it. When applied tousers, the termsleast user accessorleast-privileged user account(LUA) are also used, referring to the concept that all user accounts should run with as fewprivilegesas possible, and also launch applications with as few privileges as possible. The principle (of least privilege) is widely recognized as an important design consideration towards enhancing and giving a much needed 'Boost' to the protection of data and functionality from faults (fault tolerance) andmalicious behavior. Benefits of the principle include: In practice, there exist multiple competing definitions of true (least privilege). Asprogram complexityincreases rapidly, so do the number of potential issues, rendering a predictive approach impractical. Examples include the values of variables it may process, addresses it will need, or the precise time such things will be required. Object capability systems allow, for instance, deferring granting a single-use privilege until the time when it will be used. Currently, the closest practical approach is to eliminate privileges that can be manually evaluated as unnecessary. The resulting set of privileges typically exceeds the true minimum required privileges for the process. Another limitation is the granularity of control that the operating environment has over privileges for an individual process.[4]In practice, it is rarely possible to control a process's access to memory, processing time, I/O device addresses or modes with the precision needed to facilitate only the precise set of privileges a process will require. The original formulation is fromJerome Saltzer:[5] Every program and every privileged user of the system should operate using the least amount of privilege necessary to complete the job. Peter J. Denning, in his paper "Fault Tolerant Operating Systems", set it in a broader perspective among "The four fundamental principles of fault tolerance". "Dynamic assignments of privileges" was earlier discussed byRoger Needhamin 1972.[6][7] Historically, the oldest instance of (least privilege) is probably the source code oflogin.c, which begins execution withsuper-userpermissions and—the instant they are no longer necessary—dismisses them viasetuid()with a non-zero argument as demonstrated in theVersion 6 Unixsource code. Thekernelalways runs with maximum privileges since it is theoperating systemcore and has hardware access. 
One of the principal responsibilities of an operating system, particularly a multi-user operating system, is management of the hardware's availability and requests to access it from runningprocesses. When the kernel crashes, the mechanisms by which it maintainsstatealso fail. Therefore, even if there is a way for theCPUto recover without ahard reset, security continues to be enforced, but the operating system cannot properly respond to the failure because it was not possible to detect the failure. This is because kernel execution either halted or theprogram counterresumed execution from somewhere in an endless, and—usually—non-functionalloop.[citation needed]This would be akin to either experiencingamnesia(kernel execution failure) or being trapped in a closed maze that always returns to the starting point (closed loops). If execution picks up after the crash by loading and runningtrojan code, the author of the trojan code can usurp control of all processes. The principle of least privilege forces code to run with the lowest privilege/permission level possible. This means that the code that resumes the code execution-whether trojan or simply code execution picking up from an unexpected location—would not have the ability to perform malicious or undesirable processes. One method used to accomplish this can be implemented in themicroprocessorhardware. For example, in theIntel x86architecture the manufacturer designed four (ring 0 through ring 3) running "modes" with graduated degrees of access-much likesecurity clearancesystems in defence and intelligence agencies.[citation needed] As implemented in some operating systems, processes execute with apotential privilege setand anactive privilege set.[citation needed]Such privilege sets are inherited from the parent as determined by the semantics offork(). Anexecutable filethat performs a privileged function—thereby technically constituting a component of theTCB, and concomitantly termed a trusted program or trusted process—may also be marked with a set of privileges. This is a logical extension of the notions ofset user IDandset group ID.[citation needed]The inheritance offile privilegesby a process are determined by the semantics of theexec()family ofsystem calls. The precise manner in which potential process privileges, actual process privileges, and file privileges interact can become complex. In practice, least privilege is practiced by forcing a process to run with only those privileges required by the task. Adherence to this model is quite complex as well as error-prone. TheTrusted Computer System Evaluation Criteria(TCSEC) concept oftrusted computing base(TCB) minimization is a far more stringent requirement that is only applicable to the functionally strongest assurance classes(Link to Trusted Computer System Evaluation Criteria section Divisions and classes), namely the classesB3andA1(which arefunctionallyidentical but differ in terms of evidence and documentation required). Least privilege is often associated withprivilege bracketing: that is, assuming necessary privileges at the last possible moment and dismissing them as soon as no longer strictly necessary, therefore ostensibly reducing fallout from erroneous code that unintentionally exploits more privilege than is merited. 
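The privilege bracketing described above can be sketched as follows for a Unix-like system; the pattern mirrors the login.c example, performing the one privileged step first and then irrevocably dropping to an unprivileged identity. The privileged port 80 and the UID/GID 65534 ("nobody") are illustrative assumptions.

```c
/* Sketch of privilege bracketing: do the one privileged operation
   (binding a port below 1024), then permanently drop to an
   unprivileged user before doing ordinary work. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(80);          /* privileged port: needs root */

    if (bind(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
        perror("bind");
        return 1;
    }

    /* Privileged step done: drop the group first, then the user,
       since setgid() would fail once the user ID is unprivileged. */
    if (setgid(65534) != 0 || setuid(65534) != 0) {   /* e.g. "nobody" */
        perror("drop privileges");
        return 1;
    }

    /* From here on the process holds only the privileges it needs:
       it can serve connections on fd but can no longer, say, install
       software or read other users' files as root. */
    printf("continuing as uid %d\n", (int)getuid());
    /* ... listen()/accept()/serve loop would go here ... */
    close(fd);
    return 0;
}
```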
Least privilege has also been interpreted in the context of the distribution of discretionary access control (DAC) permissions, for example asserting that giving user U read/write access to file F violates least privilege if U can complete their authorized tasks with only read permission.
https://en.wikipedia.org/wiki/Principle_of_least_privilege
Inhacking, ashellcodeis a small piece of code used as thepayloadin theexploitationof a softwarevulnerability. It is called "shellcode" because it typically starts acommand shellfrom which the attacker can control the compromised machine, but any piece of code that performs a similar task can be called shellcode. Because the function of a payload is not limited to merely spawning a shell, some have suggested that the name shellcode is insufficient.[1]However, attempts at replacing the term have not gained wide acceptance. Shellcode is commonly written inmachine code. When creating shellcode, it is generally desirable to make it both small and executable, which allows it to be used in as wide a variety of situations as possible.[2]In assembly code, the same function can be performed in a multitude of ways and there is some variety in the lengths of opcodes that can be used for this purpose; good shellcode writers can put these small opcodes to use to create more compact shellcode.[3]Some have reached the smallest possible size while maintaining stability.[4] Shellcode can either belocalorremote, depending on whether it gives an attacker control over the machine it runs on (local) or over another machine through a network (remote). Localshellcode is used by an attacker who has limited access to a machine but can exploit a vulnerability, for example abuffer overflow, in a higher-privileged process on that machine. If successfully executed, the shellcode will provide the attacker access to the machine with the same higher privileges as the targeted process. Remoteshellcode is used when an attacker wants to target a vulnerable process running on another machine on alocal network,intranet, or aremote network. If successfully executed, the shellcode can provide the attacker access to the target machine across the network. Remote shellcodes normally use standardTCP/IPsocketconnections to allow the attacker access to the shell on the target machine. Such shellcode can be categorized based on how this connection is set up: if the shellcode establishes the connection it is called a "reverse shell", or aconnect-backshellcode because the shellcodeconnects backto the attacker's machine. On the other hand, if the attacker establishes the connection, the shellcode is called abindshellbecause the shellcodebindsto a certain port on the victim's machine. There's a peculiar shellcode namedbindshell random portthat skips the binding part and listens on a random port made available by theoperating system. Because of that, thebindshell random portbecame the smallest stable bindshell shellcode forx86_64available to this date. A third, much less common type, issocket-reuseshellcode. This type of shellcode is sometimes used when an exploit establishes a connection to the vulnerable process that is not closed before the shellcode is run. The shellcode can thenre-usethis connection to communicate with the attacker. Socket re-using shellcode is more elaborate, since the shellcode needs to find out which connection to re-use and the machine may have many connections open.[5] Afirewallcan be used to detect outgoing connections made by connect-back shellcode as well as incoming connections made by bindshells. They can, therefore, offer some protection against an attacker, even if the system is vulnerable, by preventing the attacker from connecting to the shell created by the shellcode. 
One reason why socket re-using shellcode is sometimes used is that it does not create new connections and, therefore, is harder to detect and block. Download and executeis a type of remote shellcode thatdownloadsandexecutessome form of malware on the target system. This type of shellcode does not spawn a shell, but rather instructs the machine to download a certain executable file off the network, save it to disk and execute it. Nowadays, it is commonly used indrive-by downloadattacks, where a victim visits a malicious webpage that in turn attempts to run such a download and execute shellcode in order to install software on the victim's machine. A variation of this type of shellcode downloads andloadsalibrary.[6][7]Advantages of this technique are that the code can be smaller, that it does not require the shellcode to spawn a new process on the target system, and that the shellcode does not need code to clean up the targeted process as this can be done by the library loaded into the process. When the amount of data that an attacker can inject into the target process is too limited to execute useful shellcode directly, it may be possible to execute it in stages. First, a small piece of shellcode (stage 1) is executed. This code then downloads a larger piece of shellcode (stage 2) into the process's memory and executes it. This is another form ofstagedshellcode, which is used if an attacker can inject a larger shellcode into the process but cannot determine where in the process it will end up. Smallegg-huntshellcode is injected into the process at a predictable location and executed. This code then searches the process's address space for the larger shellcode (theegg) and executes it.[8] This type of shellcode is similar toegg-huntshellcode, but looks for multiple small blocks of data (eggs) and recombines them into one larger block (theomelette) that is subsequently executed. This is used when an attacker can only inject a number of small blocks of data into the process.[9] An exploit will commonly inject a shellcode into the target process before or at the same time as it exploits a vulnerability to gain control over theprogram counter. The program counter is adjusted to point to the shellcode, after which it gets executed and performs its task. Injecting the shellcode is often done by storing the shellcode in data sent over the network to the vulnerable process, by supplying it in a file that is read by the vulnerable process or through the command line or environment in the case of local exploits. Because most processes filter or restrict the data that can be injected, shellcode often needs to be written to allow for these restrictions. This includes making the code small, null-free oralphanumeric. Various solutions have been found to get around such restrictions, including: Sinceintrusion detectioncan detect signatures of simple shellcodes being sent over the network, it is often encoded, made self-decrypting orpolymorphicto avoid detection. Exploits that target browsers commonly encode shellcode in a JavaScript string usingpercent-encoding, escape sequence encoding "\uXXXX" orentity encoding.[10]Some exploits also obfuscate the encoded shellcode string further to prevent detection byIDS. For example, on theIA-32architecture, here's how twoNOP(no-operation) instructions would look, first unencoded: This instruction is used inNOP slides. Most shellcodes are written without the use ofnullbytes because they are intended to be injected into a target process throughnull-terminated strings. 
When a null-terminated string is copied, it will be copied up to and including the first null but subsequent bytes of the shellcode will not be processed. When shellcode that contains nulls is injected in this way, only part of the shellcode would be injected, making it incapable of running successfully. To produce null-free shellcode from shellcode that containsnullbytes, one can substitute machine instructions that contain zeroes with instructions that have the same effect but are free of nulls. For example, on theIA-32architecture one could replace this instruction: which contains zeroes as part of the literal (1expands to0x00000001) with these instructions: which have the same effect but take fewer bytes to encode and are free of nulls. Analphanumeric shellcodeis a shellcode that consists of or assembles itself on execution into entirelyalphanumericASCIIorUnicodecharacters such as 0–9, A–Z and a–z.[11][12]This type of encoding was created byhackersto hide workingmachine codeinside what appears to be text. This can be useful to avoid detection of the code and to allow the code to pass through filters that scrub non-alphanumeric characters from strings (in part, such filters were a response to non-alphanumeric shellcode exploits). A similar type of encoding is calledprintable codeand uses allprintablecharacters (0–9, A–Z, a–z, !@#%^&*() etc.). A similarly restricted variant isECHOable codenot containing any characters which are not accepted by theECHOcommand. It has been shown that it is possible to create shellcode that looks like normal text in English.[13]Writing alphanumeric or printable code requires good understanding of theinstruction set architectureof the machine(s) on which the code is to be executed. It has been demonstrated that it is possible to write alphanumeric code that is executable on more than one machine,[14]thereby constitutingmulti-architecture executablecode. In certain circumstances, a target process will filter any byte from the injected shellcode that is not aprintableoralphanumericcharacter. Under such circumstances, the range of instructions that can be used to write a shellcode becomes very limited. A solution to this problem was published by Rix inPhrack57[11]in which he showed it was possible to turn any code into alphanumeric code. A technique often used is to create self-modifying code, because this allows the code to modify its own bytes to include bytes outside of the normally allowed range, thereby expanding the range of instructions it can use. Using this trick, a self-modifying decoder can be created that initially uses only bytes in the allowed range. The main code of the shellcode is encoded, also only using bytes in the allowed range. When the output shellcode is run, the decoder can modify its own code to be able to use any instruction it requires to function properly and then continues to decode the original shellcode. After decoding the shellcode the decoder transfers control to it, so it can be executed as normal. It has been shown that it is possible to create arbitrarily complex shellcode that looks like normal text in English.[13] Modern programs useUnicodestrings to allow internationalization of text. Often, these programs will convert incomingASCIIstrings to Unicode before processing them. Unicode strings encoded inUTF-16use two bytes to encode each character (or four bytes for some special characters). When anASCII(Latin-1in general) string is transformed into UTF-16, a zero byte is inserted after each byte in the original string. 
Obscou proved inPhrack61[12]that it is possible to write shellcode that can run successfully after this transformation. Programs that can automatically encode any shellcode into alphanumeric UTF-16-proof shellcode exist, based on the same principle of a small self-modifying decoder that decodes the original shellcode. Most shellcode is written inmachine codebecause of the low level at which the vulnerability being exploited gives an attacker access to the process. Shellcode is therefore often created to target one specific combination ofprocessor,operating systemandservice pack, called aplatform. For some exploits, due to the constraints put on the shellcode by the target process, a very specific shellcode must be created. However, it is not impossible for one shellcode to work for multiple exploits, service packs, operating systems and even processors.[15][16][17]Such versatility is commonly achieved by creating multiple versions of the shellcode that target the various platforms and creating a header that branches to the correct version for the platform the code is running on. When executed, the code behaves differently for different platforms and executes the right part of the shellcode for the platform it is running on. Shellcode cannot be executed directly. In order to analyze what a shellcode attempts to do it must be loaded into another process. One common analysis technique is to write a small C program which holds the shellcode as a byte buffer, and then use a function pointer or use inline assembler to transfer execution to it. Another technique is to use an online tool, such as shellcode_2_exe, to embed the shellcode into a pre-made executable husk which can then be analyzed in a standard debugger. Specialized shellcode analysis tools also exist, such as the iDefense sclog project which was originally released in 2005 as part of the Malcode Analyst Pack. Sclog is designed to load external shellcode files and execute them within an API logging framework. Emulation-based shellcode analysis tools also exist such as thesctestapplication which is part of the cross-platform libemu package. Another emulation-based shellcode analysis tool, built around the libemu library, isscdbgwhich includes a basic debug shell and integrated reporting features.
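The "small C program" analysis technique mentioned above might look like the following minimal sketch for x86-64 Linux; the byte buffer is a deliberately harmless stand-in (three NOPs and a RET) rather than a real sample, and the harness assumes the analysis machine permits an anonymous read/write/execute mapping.

```c
/* Minimal shellcode test harness: copy the bytes into an executable
   mapping and jump to them through a function pointer.  Intended for
   analysing samples in a throwaway VM, not for production use. */
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

/* Harmless stand-in "shellcode": three NOPs followed by RET (x86-64). */
static const unsigned char sample[] = { 0x90, 0x90, 0x90, 0xC3 };

int main(void)
{
    /* Anonymous read/write/execute mapping to hold the code. */
    void *buf = mmap(NULL, sizeof sample,
                     PROT_READ | PROT_WRITE | PROT_EXEC,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    memcpy(buf, sample, sizeof sample);

    /* Transfer execution to the buffer via a function pointer. */
    void (*entry)(void) = (void (*)(void))buf;
    entry();

    puts("sample returned cleanly");
    munmap(buf, sizeof sample);
    return 0;
}
```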
https://en.wikipedia.org/wiki/Shellcode
Incomputing, theExecutable and Linkable Format[2](ELF, formerly namedExtensible Linking Format) is a common standardfile formatforexecutablefiles,object code,shared libraries, andcore dumps. First published in the specification for theapplication binary interface(ABI) of theUnixoperating system version namedSystem V Release 4(SVR4),[3]and later in the Tool Interface Standard,[1]it was quickly accepted among different vendors ofUnixsystems. In 1999, it was chosen as the standard binary file format for Unix andUnix-likesystems onx86processors by the86openproject. By design, the ELF format is flexible, extensible, andcross-platform. For instance, it supports differentendiannessesand address sizes so it does not exclude any particularCPUorinstruction set architecture. This has allowed it to be adopted by many differentoperating systemson many different hardwareplatforms. Each ELF file is made up of one ELF header, followed by file data. The data can include: The segments contain information that is needed forrun timeexecution of the file, while sections contain important data for linking and relocation. Anybytein the entire file can be owned by one section at most, and orphan bytes can occur which are unowned by any section. The ELF header defines whether to use32-bitor64-bitaddresses. The header contains three fields that are affected by this setting and offset other fields that follow them. The ELF header is 52 or 64 bytes long for 32-bit and 64-bit binaries, respectively. glibc 2.12+ in casee_ident[EI_OSABI] == 3treats this field as ABI version of thedynamic linker:[6]it defines a list of dynamic linker's features,[7]treatse_ident[EI_ABIVERSION]as a feature level requested by the shared object (executable or dynamic library) and refuses to load it if an unknown feature is requested, i.e.e_ident[EI_ABIVERSION]is greater than the largest known feature.[8] [9] The program header table tells the system how to create a process image. It is found at file offsete_phoff, and consists ofe_phnumentries, each with sizee_phentsize. The layout is slightly different in32-bitELF vs64-bitELF, because thep_flagsare in a different structure location for alignment reasons. Each entry is structured as: The ELF format has replaced older executable formats in various environments. It has replaceda.outandCOFFformats inUnix-likeoperating systems: ELF has also seen some adoption in non-Unix operating systems, such as: Microsoft Windowsalso uses the ELF format, but only for itsWindows Subsystem for Linuxcompatibility system.[17] Some game consoles also use ELF: Other (operating) systems running on PowerPC that use ELF: Some operating systems for mobile phones and mobile devices use ELF: Some phones can run ELF files through the use of a patch that adds assembly code to the main firmware, which is a feature known asELFPackin the underground modding culture. The ELF file format is also used with theAtmel AVR(8-bit), AVR32[22]and with Texas Instruments MSP430 microcontroller architectures. Some implementations of Open Firmware can also load ELF files, most notably Apple's implementation used in almost all PowerPC machines the company produced. 
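To make the header fields described earlier in this section concrete (e_ident, e_type, e_machine, and the e_phoff/e_phnum/e_phentsize triple that locates the program header table), the following is a minimal sketch for a 64-bit ELF file using the glibc <elf.h> definitions; error handling is kept to a minimum.

```c
/* Read an ELF header and report a few of the fields described above:
   class, endianness, type, machine, and program header table layout. */
#include <elf.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s <elf-file>\n", argv[0]); return 1; }

    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    Elf64_Ehdr eh;                       /* assumes a 64-bit ELF file */
    if (fread(&eh, sizeof eh, 1, f) != 1 ||
        memcmp(eh.e_ident, ELFMAG, SELFMAG) != 0) {
        fprintf(stderr, "not an ELF file\n");
        fclose(f);
        return 1;
    }

    printf("class    : %s\n", eh.e_ident[EI_CLASS] == ELFCLASS64 ? "64-bit" : "32-bit");
    printf("data     : %s-endian\n", eh.e_ident[EI_DATA] == ELFDATA2LSB ? "little" : "big");
    printf("type     : %u (2 = executable, 3 = shared object/PIE)\n", eh.e_type);
    printf("machine  : %u (62 = x86-64)\n", eh.e_machine);
    /* Program header table: e_phnum entries of e_phentsize bytes at e_phoff. */
    printf("ph table : %u entries x %u bytes at offset %llu\n",
           eh.e_phnum, eh.e_phentsize, (unsigned long long)eh.e_phoff);

    fclose(f);
    return 0;
}
```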
86openwas a project to form consensus on a common binary file format for Unix and Unix-like operating systems on the common PC compatible x86 architecture, to encourage software developers to port to the architecture.[24]The initial idea was to standardize on a small subset of Spec 1170, a predecessor of the Single UNIX Specification, and the GNU C Library (glibc) to enable unmodified binaries to run on the x86 Unix-like operating systems. The project was originally designated "Spec 150". The format eventually chosen was ELF, specifically the Linux implementation of ELF, after it had turned out to be ade factostandard supported by all involved vendors and operating systems. The group began email discussions in 1997 and first met together at the Santa Cruz Operation offices on August 22, 1997. The steering committee was Marc Ewing, Dion Johnson, Evan Leibovitch,Bruce Perens, Andrew Roach, Bryan Wayne Sparks and Linus Torvalds. Other people on the project were Keith Bostic, Chuck Cranor, Michael Davidson, Chris G. Demetriou, Ulrich Drepper, Don Dugger, Steve Ginzburg, Jon "maddog" Hall, Ron Holt, Jordan Hubbard, Dave Jensen, Kean Johnston, Andrew Josey, Robert Lipe, Bela Lubkin, Tim Marsland, Greg Page, Ronald Joe Record, Tim Ruckle, Joel Silverstein, Chia-pi Tien, and Erik Troan. Operating systems and companies represented were BeOS, BSDI, FreeBSD,Intel, Linux, NetBSD, SCO and SunSoft. The project progressed and in mid-1998, SCO began developing lxrun, an open-source compatibility layer able to run Linux binaries on OpenServer, UnixWare, and Solaris. SCO announced official support of lxrun at LinuxWorld in March 1999. Sun Microsystems began officially supporting lxrun for Solaris in early 1999,[25]and later moved to integrated support of the Linux binary format via Solaris Containers for Linux Applications. With the BSDs having long supported Linux binaries (through a compatibility layer) and the main x86 Unix vendors having added support for the format, the project decided that Linux ELF was the format chosen by the industry and "declare[d] itself dissolved" on July 25, 1999.[26] FatELF is an ELF binary-format extension that adds fat binary capabilities.[27]It is aimed for Linux and other Unix-like operating systems. Additionally to the CPU architecture abstraction (byte order, word size,CPUinstruction set etc.), there is the potential advantage of software-platform abstraction e.g., binaries which support multiple kernel ABI versions. As of 2021[update], FatELF has not been integrated into the mainline Linux kernel.[28][29][30] [1]
https://en.wikipedia.org/wiki/Executable_and_Linkable_Format
In computing, asystem call(syscall) is the programmatic way in which acomputer programrequests a service from theoperating system[a]on which it is executed. This may include hardware-related services (for example, accessing ahard disk driveor accessing the device's camera), creation and execution of newprocesses, and communication with integralkernel servicessuch asprocess scheduling. System calls provide an essential interface between a process and the operating system. In most systems, system calls can only be made fromuserspaceprocesses, while in some systems,OS/360 and successorsfor example, privileged system code also issues system calls.[1] Forembedded systems, system calls typically do not change theprivilege modeof the CPU. Thearchitectureof most modern processors, with the exception of some embedded systems, involves asecurity model. For example, theringsmodel specifies multiple privilege levels under which software may be executed: a program is usually limited to its ownaddress spaceso that it cannot access or modify other running programs or the operating system itself, and is usually prevented from directly manipulating hardware devices (e.g. theframe bufferornetworkdevices). However, many applications need access to these components, so system calls are made available by the operating system to provide well-defined, safe implementations for such operations. The operating system executes at the highest level of privilege, and allows applications to request services via system calls, which are often initiated viainterrupts. An interrupt automatically puts the CPU into some elevated privilege level and then passes control to the kernel, which determines whether the calling program should be granted the requested service. If the service is granted, the kernel executes a specific set of instructions over which the calling program has no direct control, returns the privilege level to that of the calling program, and then returns control to the calling program. Generally, systems provide alibraryorAPIthat sits between normal programs and the operating system. OnUnix-likesystems, that API is usually part of an implementation of theC library(libc), such asglibc, that provideswrapper functionsfor the system calls, often named the same as the system calls they invoke. OnWindows NT, that API is part of theNative API, in thentdll.dlllibrary; this is an undocumented API used by implementations of the regularWindows APIand directly used by some system programs on Windows. The library's wrapper functions expose an ordinary functioncalling convention(asubroutinecall on theassemblylevel) for using the system call, as well as making the system call moremodular. Here, the primary function of the wrapper is to place all the arguments to be passed to the system call in the appropriateprocessor registers(and maybe on thecall stackas well), and also setting a unique system call number for the kernel to call. In this way the library, which exists between the OS and the application, increasesportability. The call to the library function itself does not cause a switch tokernel modeand is usually a normalsubroutine call(using, for example, a "CALL" assembly instruction in someInstruction set architectures(ISAs)). The actual system call does transfer control to the kernel (and is more implementation-dependent and platform-dependent than the library call abstracting it). 
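The wrapper relationship described above can be illustrated with a minimal sketch for Linux with glibc: the first call goes through the ordinary write() wrapper, while the second supplies the call number explicitly through the generic syscall() wrapper declared in <unistd.h>.

```c
/* The same kernel service invoked two ways on Linux/glibc:
   through the libc wrapper write(), and by raw call number
   through the generic syscall() wrapper. */
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>

int main(void)
{
    const char msg1[] = "via the write() wrapper\n";
    const char msg2[] = "via syscall(SYS_write, ...)\n";

    /* Ordinary library wrapper: places the arguments and the call
       number in the right registers, then traps into the kernel. */
    write(STDOUT_FILENO, msg1, strlen(msg1));

    /* Generic wrapper: the caller supplies the call number itself. */
    long ret = syscall(SYS_write, STDOUT_FILENO, msg2, strlen(msg2));
    if (ret < 0)
        perror("syscall");

    return 0;
}
```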
For example, inUnix-likesystems,forkandexecveare C library functions that in turn execute instructions that invoke theforkandexecsystem calls. Making the system call directly in theapplication codeis more complicated and may require embedded assembly code to be used (inCandC++), as well as requiring knowledge of the low-level binary interface for the system call operation, which may be subject to change over time and thus not be part of theapplication binary interface; the library functions are meant to abstract this away. Onexokernelbased systems, the library is especially important as an intermediary. On exokernels, libraries shield user applications from the very low level kernelAPI, and provideabstractionsandresourcemanagement. IBM'sOS/360,DOS/360andTSS/360implement most system calls through a library of assembly languagemacros,[b]although there are a few services with a call linkage. This reflects their origin at a time when programming in assembly language was more common thanhigh-level languageusage. IBM system calls were therefore not directly executable by high-level language programs, but required a callable assembly language wrapper subroutine. Since then, IBM has added many services that can be called from high level languages in, e.g.,z/OSandz/VSE. In more recent release ofMVS/SPand in all later MVS versions, some system call macros generate Program Call (PC). OnUnix,Unix-likeand otherPOSIX-compliant operating systems, popular system calls areopen,read,write,close,wait,exec,fork,exit, andkill. Many modern operating systems have hundreds of system calls. For example,LinuxandOpenBSDeach have over 300 different calls,[2][3]NetBSDhas close to 500,[4]FreeBSDhas over 500,[5]Windows has close to 2000, divided between win32k (graphical) and ntdll (core) system calls[6]whilePlan 9has 54.[7] Tools such asstrace,ftraceand truss allow a process to execute from start and report all system calls the process invokes, or can attach to an already running process and intercept any system call made by the said process if the operation does not violate the permissions of the user. This special ability of the program is usually also implemented with system calls such asptraceor system calls on files inprocfs. Implementing system calls requires a transfer of control from user space to kernel space, which involves some sort of architecture-specific feature. A typical way to implement this is to use asoftware interruptortrap. Interrupts transfer control to the operating systemkernel, so software simply needs to set up some register with the system call number needed, and execute the software interrupt. This is the only technique provided for manyRISCprocessors, butCISCarchitectures such asx86support additional techniques. For example, the x86instruction setcontains the instructionsSYSCALL/SYSRETandSYSENTER/SYSEXIT(these two mechanisms were independently created byAMDandIntel, respectively, but in essence they do the same thing). These are "fast" control transfer instructions that are designed to quickly transfer control to the kernel for a system call without the overhead of an interrupt.[8]Linux2.5 began using this on thex86, where available; formerly it used theINTinstruction, where the system call number was placed in theEAXregisterbeforeinterrupt0x80 was executed.[9][10] An older mechanism is thecall gate; originally used inMulticsand later, for example, seecall gateon the Intelx86. 
It allows a program to call a kernel function directly using a safe control transfer mechanism, which the operating system sets up in advance. This approach has been unpopular on x86, presumably due to the requirement of a far call (a call to a procedure located in a different segment than the current code segment[11]) which usesx86 memory segmentationand the resulting lack ofportabilityit causes, and the existence of the faster instructions mentioned above. ForIA-64architecture,EPC(Enter Privileged Code) instruction is used. The first eight system call arguments are passed in registers, and the rest are passed on the stack. In theIBM System/360mainframe family, and its successors, aSupervisor Call instruction(SVC), with the number in the instruction rather than in a register, implements a system call for legacy facilities in most of[c]IBM's own operating systems, and for all system calls in Linux. In later versions of MVS, IBM uses the Program Call (PC) instruction for many newer facilities. In particular, PC is used when the caller might be inService Request Block(SRB) mode. ThePDP-11minicomputerused theEMT,TRAPandIOTinstructions, which, similar to the IBM System/360SVCand x86INT, put the code in the instruction; they generate interrupts to specific addresses, transferring control to the operating system. TheVAX32-bit successor to the PDP-11 series used theCHMK,CHME, andCHMSinstructions to make system calls to privileged code at various levels; the code is an argument to the instruction. System calls can be grouped roughly into six major categories:[12] System calls in mostUnix-likesystems are processed inkernel mode, which is accomplished by changing the processor execution mode to a more privileged one, but noprocesscontext switchis necessary – although aprivilegecontext switch does occur. The hardware sees the world in terms of the execution mode according to the processorstatus register, and processes are an abstraction provided by the operating system. A system call does not generally require a context switch to another process; instead, it is processed in the context of whichever process invoked it.[13][14] In amultithreadedprocess, system calls can be made from multiplethreads. The handling of such calls is dependent on the design of the specific operating system kernel and the application runtime environment. The following list shows typical models followed by operating systems:[15][16]
https://en.wikipedia.org/wiki/System_call
TheMicrosoft Windowsfamily ofoperating systemsemploy some specificexception handlingmechanisms. Microsoft Structured Exception Handling is the native exception handling mechanism for Windows and a forerunner technology toVectored Exception Handling(VEH).[1]It features thefinallymechanism not present in standard C++ exceptions (but present in mostimperativelanguages introduced later). SEH is set up and handled separately for eachthread of execution. Microsoft supports SEH as a programming technique at the compiler level only. MS Visual C++ compiler features three non-standard keywords:__try,__exceptand__finally— for this purpose. Other exception handling aspects are backed by a number ofWin32 APIfunctions,[2]for example,RaiseExceptionto raise SEH exceptions manually. Eachthread of executionin WindowsIA-32edition or theWoW64emulation layer for thex86-64version has a link to an undocumented_EXCEPTION_REGISTRATION_RECORDlistat the start of itsThread Information Block. The__trystatement essentially calls a compiler-definedEH_prologfunction. That function allocates an_EXCEPTION_REGISTRATION_RECORDon the stackpointing to the__except_handler3[a]function inmsvcrt.dll,[b]then adds the record to the list's head. At the end of the__tryblocka compiler-definedEH_epilogfunction is called that does the reverse operation. Either of these compiler-defined routines can beinline. All the programmer-defined__exceptand__finallyblocks are called from within__except_handler3. If the programmer-defined blocks are present, the_EXCEPTION_REGISTRATION_RECORDcreated byEH_prologis extended with a few additional fields used by__except_handler3.[3] In the case of an exception inuser modecode, the operating system[c]parses the thread's_EXCEPTION_REGISTRATION_RECORDlist and calls each exception handler in sequence until a handler signals it has handled the exception (byreturn value) or the list is exhausted. The last one in the list is always thekernel32!UnhandledExceptionFilterwhich displays theGeneral protection faulterror message.[d]Then the list is traversed once more giving handlers a chance to clean up any resources used. Finally, the execution returns tokernel mode[e]where the process is either resumed or terminated. The patent on this mode of SEH, US5628016, expired in 2014. SEH on 64-bit Windows does not involve a runtime exception handler list; instead, it uses astack unwindingtable (UNWIND_INFO) interpreted by the system when an exception occurs.[4][5]This means that the compiler does not have to generate extra code to manually perform stack unwinding and to call exception handlers appropriately. It merely has to emit information in the form of unwinding tables about the stack frame layout and specified exception handlers. GCC 4.8+ fromMingw-w64supports using 64-bit SEH for C++ exceptions.LLVMclang supports__tryon both x86 and x64.[6] Vectored Exception Handling was introduced inWindows XP.[7]Vectored Exception Handling is made available to Windows programmers using languages such asC++andVisual Basic. VEH does not replace Structured Exception Handling (SEH); rather, VEH and SEH coexist, with VEH handlers having priority over SEH handlers.[1][7]Compared with SEH, VEH works more like kernel-deliveredUnix signals.[8]
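A minimal sketch of the compiler-level keywords described above, assuming the Microsoft Visual C++ compiler on Windows; the deliberate null-pointer write exists only to raise an access violation for the handler to catch.

```c
/* Structured Exception Handling with the MSVC-specific keywords:
   __try guards the faulting code, the __except filter decides whether
   this handler takes the exception, and __finally runs during unwinding. */
#include <windows.h>
#include <stdio.h>

static int filter(DWORD code)
{
    /* Handle only access violations; let everything else propagate. */
    return code == EXCEPTION_ACCESS_VIOLATION
               ? EXCEPTION_EXECUTE_HANDLER
               : EXCEPTION_CONTINUE_SEARCH;
}

int main(void)
{
    __try {
        __try {
            volatile int *p = NULL;
            *p = 42;                     /* raises an access violation */
        }
        __finally {
            puts("__finally block runs while the stack is unwound");
        }
    }
    __except (filter(GetExceptionCode())) {
        puts("__except handler caught the access violation");
    }
    return 0;
}
```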
https://en.wikipedia.org/wiki/Structured_Exception_Handling
In computer security, a shadow stack is a mechanism for protecting a procedure's stored return address,[1] such as from a stack buffer overflow. The shadow stack itself is a second, separate stack that "shadows" the program call stack. In the function prologue, a function stores its return address to both the call stack and the shadow stack. In the function epilogue, a function loads the return address from both the call stack and the shadow stack, and then compares them. If the two records of the return address differ, then an attack is detected; the typical course of action is simply to terminate the program or alert system administrators about a possible intrusion attempt.

A shadow stack is similar to stack canaries in that both mechanisms aim to maintain the control-flow integrity of the protected program by detecting attempts to tamper with the stored return address during an exploitation attempt. Shadow stacks can be implemented by recompiling programs with modified prologues and epilogues,[2] by dynamic binary rewriting techniques that achieve the same effect,[3] or with hardware support.[4] Unlike the call stack, which also stores local program variables, passed arguments, spilled registers, and other data, the shadow stack typically stores just a second copy of a function's return address.

Shadow stacks provide more protection for return addresses than stack canaries, which rely on the secrecy of the canary value and are vulnerable to non-contiguous write attacks.[5] Shadow stacks themselves can be protected with guard pages[6] or with information hiding, such that an attacker would also need to locate the shadow stack to overwrite a return address stored there. Like stack canaries, shadow stacks do not protect stack data other than return addresses, and so offer incomplete protection against security vulnerabilities that result from memory safety errors. In 2016, Intel announced upcoming hardware support for shadow stacks with its Control-flow Enforcement Technology.[7]

Shadow stacks face some compatibility problems. After a program throws an exception or a longjmp occurs, the return address at the top of the shadow stack will not match the return address popped from the call stack. The typical solution for this problem is to pop entries from the shadow stack until a matching return address is found, and to terminate the program only when no match is found in the shadow stack.[3]

A multithreaded program, which has a call stack for each executing thread, would then also have a shadow stack shadowing each of the call stacks.
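The recompilation-based approach described above can be sketched with manually instrumented prologue and epilogue checks; GCC or Clang's __builtin_return_address is assumed, and a real compiler- or hardware-based implementation would insert the equivalent automatically and also protect the shadow region itself.

```c
/* Software shadow stack sketch: each instrumented function pushes its
   return address on entry and checks it against the call stack's copy
   on exit, aborting on a mismatch. */
#include <stdio.h>
#include <stdlib.h>

#define SHADOW_DEPTH 1024
static void *shadow_stack[SHADOW_DEPTH];   /* separate from the call stack */
static int   shadow_top;

#define SHADOW_PROLOGUE() \
    (shadow_stack[shadow_top++] = __builtin_return_address(0))

#define SHADOW_EPILOGUE()                                                 \
    do {                                                                  \
        if (shadow_stack[--shadow_top] != __builtin_return_address(0)) {  \
            fprintf(stderr, "shadow stack mismatch: possible "            \
                            "return-address overwrite\n");                \
            abort();                                                      \
        }                                                                 \
    } while (0)

static void protected_function(void)
{
    SHADOW_PROLOGUE();
    /* ... function body: a stack buffer overflow here could overwrite
       the on-stack return address, but not the shadow copy above ... */
    SHADOW_EPILOGUE();
}

int main(void)
{
    protected_function();
    puts("return address verified against the shadow stack");
    return 0;
}
```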
https://en.wikipedia.org/wiki/Shadow_stack
Computer security(alsocybersecurity,digital security, orinformation technology (IT) security) is a subdiscipline within the field ofinformation security. It consists of the protection ofcomputer software,systemsandnetworksfromthreatsthat can lead to unauthorized information disclosure, theft or damage tohardware,software, ordata, as well as from the disruption or misdirection of theservicesthey provide.[1][2] The significance of the field stems from the expanded reliance oncomputer systems, theInternet,[3]andwireless network standards. Its importance is further amplified by the growth ofsmart devices, includingsmartphones,televisions, and the various devices that constitute theInternet of things(IoT). Cybersecurity has emerged as one of the most significant new challenges facing the contemporary world, due to both the complexity ofinformation systemsand the societies they support. Security is particularly crucial for systems that govern large-scale systems with far-reaching physical effects, such aspower distribution,elections, andfinance.[4][5] Although many aspects of computer security involve digital security, such as electronicpasswordsandencryption,physical securitymeasures such asmetal locksare still used to prevent unauthorized tampering. IT security is not a perfect subset ofinformation security, therefore does not completely align into thesecurity convergenceschema. A vulnerability refers to a flaw in the structure, execution, functioning, or internal oversight of a computer or system that compromises its security. Most of the vulnerabilities that have been discovered are documented in theCommon Vulnerabilities and Exposures(CVE) database.[6]Anexploitablevulnerability is one for which at least one workingattackorexploitexists.[7]Actors maliciously seeking vulnerabilities are known asthreats. Vulnerabilities can be researched, reverse-engineered, hunted, or exploited usingautomated toolsor customized scripts.[8][9] Various people or parties are vulnerable to cyber attacks; however, different groups are likely to experience different types of attacks more than others.[10] In April 2023, theUnited KingdomDepartment for Science, Innovation & Technology released a report on cyber attacks over the previous 12 months.[11]They surveyed 2,263 UK businesses, 1,174 UK registered charities, and 554 education institutions. The research found that "32% of businesses and 24% of charities overall recall any breaches or attacks from the last 12 months." These figures were much higher for "medium businesses (59%), large businesses (69%), and high-income charities with £500,000 or more in annual income (56%)."[11]Yet, although medium or large businesses are more often the victims, since larger companies have generally improved their security over the last decade,small and midsize businesses(SMBs) have also become increasingly vulnerable as they often "do not have advanced tools to defend the business."[10]SMBs are most likely to be affected by malware, ransomware, phishing,man-in-the-middle attacks, and Denial-of Service (DoS) Attacks.[10] Normal internet users are most likely to be affected by untargeted cyberattacks.[12]These are where attackers indiscriminately target as many devices, services, or users as possible. They do this using techniques that take advantage of the openness of the Internet. 
These strategies mostly includephishing,ransomware,water holingand scanning.[12] To secure a computer system, it is important to understand the attacks that can be made against it, and thesethreatscan typically be classified into one of the following categories: Abackdoorin a computer system, acryptosystem, or analgorithmis any secret method of bypassing normalauthenticationor security controls. These weaknesses may exist for many reasons, including original design or poor configuration.[13]Due to the nature of backdoors, they are of greater concern to companies and databases as opposed to individuals. Backdoors may be added by an authorized party to allow some legitimate access or by an attacker for malicious reasons.Criminalsoften usemalwareto install backdoors, giving them remote administrative access to a system.[14]Once they have access, cybercriminals can "modify files, steal personal information, install unwanted software, and even take control of the entire computer."[14] Backdoors can be difficult to detect, as they often remain hidden within the source code or system firmware intimate knowledge of theoperating systemof the computer. Denial-of-service attacks(DoS) are designed to make a machine or network resource unavailable to its intended users.[15]Attackers can deny service to individual victims, such as by deliberately entering a wrong password enough consecutive times to cause the victim's account to be locked, or they may overload the capabilities of a machine or network and block all users at once. While a network attack from a singleIP addresscan be blocked by adding a new firewall rule, many forms ofdistributed denial-of-service(DDoS) attacks are possible, where the attack comes from a large number of points. In this case, defending against these attacks is much more difficult. Such attacks can originate from thezombie computersof abotnetor from a range of other possible techniques, includingdistributed reflective denial-of-service(DRDoS), where innocent systems are fooled into sending traffic to the victim.[15]With such attacks, the amplification factor makes the attack easier for the attacker because they have to use little bandwidth themselves. To understand why attackers may carry out these attacks, see the 'attacker motivation' section. A direct-access attack is when an unauthorized user (an attacker) gains physical access to a computer, most likely to directly copy data from it or steal information.[16]Attackers may also compromise security by making operating system modifications, installingsoftware worms,keyloggers,covert listening devicesor using wireless microphones. Even when the system is protected by standard security measures, these may be bypassed by booting another operating system or tool from aCD-ROMor other bootable media.Disk encryptionand theTrusted Platform Modulestandard are designed to prevent these attacks. Direct service attackers are related in concept todirect memory attackswhich allow an attacker to gain direct access to a computer's memory.[17]The attacks "take advantage of a feature of modern computers that allows certain devices, such as external hard drives, graphics cards, or network cards, to access the computer's memory directly."[17] Eavesdroppingis the act of surreptitiously listening to a private computer conversation (communication), usually between hosts on a network. 
It typically occurs when a user connects to a network where traffic is not secured or encrypted and sends sensitive business data to a colleague, which, when listened to by an attacker, could be exploited.[18] Data transmitted across an open network allows an attacker to exploit a vulnerability and intercept it via various methods. Unlike malware, direct-access attacks, or other forms of cyber attacks, eavesdropping attacks are unlikely to negatively affect the performance of networks or devices, making them difficult to notice.[18] In fact, "the attacker does not need to have any ongoing connection to the software at all. The attacker can insert the software onto a compromised device, perhaps by direct insertion or perhaps by a virus or other malware, and then come back some time later to retrieve any data that is found or trigger the software to send the data at some determined time."[19] Using a virtual private network (VPN), which encrypts data between two points, is one of the most common forms of protection against eavesdropping. Using the strongest form of encryption available for wireless networks is best practice, as is using HTTPS instead of unencrypted HTTP.[20] Programs such as Carnivore and NarusInSight have been used by the Federal Bureau of Investigation (FBI) and NSA to eavesdrop on the systems of internet service providers. Even machines that operate as a closed system (i.e., with no contact with the outside world) can be eavesdropped upon by monitoring the faint electromagnetic transmissions generated by the hardware. TEMPEST is a specification by the NSA referring to these attacks. Malicious software (malware) is any software code or computer program "intentionally written to harm a computer system or its users."[21] Once present on a computer, it can leak sensitive details such as personal information, business information and passwords, can give control of the system to the attacker, and can corrupt or delete data permanently.[22][23] Man-in-the-middle attacks (MITM) involve a malicious attacker trying to intercept, surveil or modify communications between two parties by spoofing one or both parties' identities and injecting themselves in between.[24] There are several types of MITM attacks. Surfacing in 2017, a new class of multi-vector,[25] polymorphic[26] cyber threats combines several types of attacks and changes form to avoid cybersecurity controls as it spreads. Multi-vector polymorphic attacks, as the name describes, are both multi-vectored and polymorphic.[27] Firstly, they are a singular attack that involves multiple methods of attack. In this sense, they are "multi-vectored (i.e. the attack can use multiple means of propagation such as via the Web, email and applications)." However, they are also multi-staged, meaning that "they can infiltrate networks and move laterally inside the network."[27] The attacks can be polymorphic, meaning that the cyberattacks used, such as viruses, worms or trojans, "constantly change ("morph") making it nearly impossible to detect them using signature-based defences."[27] Phishing is the attempt to acquire sensitive information such as usernames, passwords, and credit card details directly from users by deceiving them.[28] Phishing is typically carried out by email spoofing, instant messaging, text message, or on a phone call. Attackers often direct users to enter details at a fake website whose look and feel are almost identical to the legitimate one.[29] The fake website often asks for personal information, such as login details and passwords.
This information can then be used to gain access to the individual's real account on the real website. Preying on a victim's trust, phishing can be classified as a form ofsocial engineering. Attackers can use creative ways to gain access to real accounts. A common scam is for attackers to send fake electronic invoices[30]to individuals showing that they recently purchased music, apps, or others, and instructing them to click on a link if the purchases were not authorized. A more strategic type of phishing is spear-phishing which leverages personal or organization-specific details to make the attacker appear like a trusted source. Spear-phishing attacks target specific individuals, rather than the broad net cast by phishing attempts.[31] Privilege escalationdescribes a situation where an attacker with some level of restricted access is able to, without authorization, elevate their privileges or access level.[32]For example, a standard computer user may be able to exploit avulnerabilityin the system to gain access to restricted data; or even becomerootand have full unrestricted access to a system. The severity of attacks can range from attacks simply sending an unsolicited email to aransomware attackon large amounts of data. Privilege escalation usually starts withsocial engineeringtechniques, oftenphishing.[32] Privilege escalation can be separated into two strategies, horizontal and vertical privilege escalation: Any computational system affects its environment in some form. This effect it has on its environment can range from electromagnetic radiation, to residual effect on RAM cells which as a consequence make aCold boot attackpossible, to hardware implementation faults that allow for access or guessing of other values that normally should be inaccessible. In Side-channel attack scenarios, the attacker would gather such information about a system or network to guess its internal state and as a result access the information which is assumed by the victim to be secure. The target information in a side channel can be challenging to detect due to its low amplitude when combined with other signals[33] Social engineering, in the context of computer security, aims to convince a user to disclose secrets such as passwords, card numbers, etc. or grant physical access by, for example, impersonating a senior executive, bank, a contractor, or a customer.[34]This generally involves exploiting people's trust, and relying on theircognitive biases. A common scam involves emails sent to accounting and finance department personnel, impersonating their CEO and urgently requesting some action. One of the main techniques of social engineering arephishingattacks. In early 2016, theFBIreported that suchbusiness email compromise(BEC) scams had cost US businesses more than $2 billion in about two years.[35] In May 2016, theMilwaukee BucksNBAteam was the victim of this type of cyber scam with a perpetrator impersonating the team's presidentPeter Feigin, resulting in the handover of all the team's employees' 2015W-2tax forms.[36] Spoofing is an act of pretending to be a valid entity through the falsification of data (such as an IP address or username), in order to gain access to information or resources that one is otherwise unauthorized to obtain. 
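The spoofing risk just described can be illustrated with a short, hypothetical Python sketch: any field an attacker can fill in (a username, an IP address, a From header) can be falsified, so access decisions should depend on evidence tied to a secret, such as a message authentication code, rather than on the claimed identity alone. The key, usernames and messages below are invented for the example and are not drawn from any source cited in this article.

# Minimal sketch: why a claimed identity can be spoofed, and how a message
# authentication code ties a request to a shared secret instead.
import hashlib
import hmac

SHARED_KEY = b"hypothetical-shared-key"   # known only to the real client and the server

def sign(username: str, payload: str) -> str:
    # The legitimate client signs its request with the shared key.
    return hmac.new(SHARED_KEY, f"{username}:{payload}".encode(), hashlib.sha256).hexdigest()

def server_accepts(username: str, payload: str, signature: str) -> bool:
    # Trusting the username field alone would accept any spoofed value;
    # requiring a valid MAC means the sender must actually hold the key.
    expected = sign(username, payload)
    return hmac.compare_digest(expected, signature)

if __name__ == "__main__":
    # Legitimate client: knows the key, so it can produce a valid signature.
    print(server_accepts("alice", "transfer 10", sign("alice", "transfer 10")))   # True
    # Spoofer: claims to be alice but cannot compute the signature without the key.
    forged = hmac.new(b"wrong-key", b"alice:transfer 10", hashlib.sha256).hexdigest()
    print(server_accepts("alice", "transfer 10", forged))                         # False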
Spoofing is closely related tophishing.[37][38]There are several types of spoofing, including: In 2018, the cybersecurity firmTrellixpublished research on the life-threatening risk of spoofing in the healthcare industry.[40] Tamperingdescribes amalicious modificationor alteration of data. It is an intentional but unauthorized act resulting in the modification of a system, components of systems, its intended behavior, or data. So-calledEvil Maid attacksand security services planting ofsurveillancecapability into routers are examples.[41] HTMLsmuggling allows an attacker tosmugglea malicious code inside a particular HTML or web page.[42]HTMLfiles can carry payloads concealed as benign, inert data in order to defeatcontent filters. These payloads can be reconstructed on the other side of the filter.[43] When a target user opens the HTML, the malicious code is activated; the web browser thendecodesthe script, which then unleashes the malware onto the target's device.[42] Employee behavior can have a big impact oninformation securityin organizations. Cultural concepts can help different segments of the organization work effectively or work against effectiveness toward information security within an organization. Information security culture is the "...totality of patterns of behavior in an organization that contributes to the protection of information of all kinds."[44] Andersson and Reimers (2014) found that employees often do not see themselves as part of their organization's information security effort and often take actions that impede organizational changes.[45]Indeed, the Verizon Data Breach Investigations Report 2020, which examined 3,950 security breaches, discovered 30% of cybersecurity incidents involved internal actors within a company.[46]Research shows information security culture needs to be improved continuously. In "Information Security Culture from Analysis to Change", authors commented, "It's a never-ending process, a cycle of evaluation and change or maintenance." To manage the information security culture, five steps should be taken: pre-evaluation, strategic planning, operative planning, implementation, and post-evaluation.[47] In computer security, acountermeasureis an action, device, procedure or technique that reduces a threat, a vulnerability, or anattackby eliminating or preventing it, by minimizing the harm it can cause, or by discovering and reporting it so that corrective action can be taken.[48][49][50] Some common countermeasures are listed in the following sections: Security by design, or alternately secure by design, means that the software has been designed from the ground up to be secure. In this case, security is considered a main feature. 
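As a concrete, hypothetical illustration of designing security in from the start rather than bolting it on later, the short Python sketch below contrasts a SQL query assembled by string concatenation, which is vulnerable to SQL injection, with a parameterized query using the standard sqlite3 module. The table, columns and inputs are invented for the example; this is a sketch of the principle, not an account of any system discussed in this article.

# Minimal sketch: designing out SQL injection with parameterized queries.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 1), ('bob', 0)")

attacker_input = "nobody' OR '1'='1"  # classic injection payload

# Insecure: the untrusted input becomes part of the SQL text itself.
unsafe_sql = f"SELECT username FROM users WHERE username = '{attacker_input}'"
print("concatenated query returns:", conn.execute(unsafe_sql).fetchall())  # every row

# Secure by design: the driver passes the input as data, never as SQL.
safe_rows = conn.execute(
    "SELECT username FROM users WHERE username = ?", (attacker_input,)
).fetchall()
print("parameterized query returns:", safe_rows)  # no rows
conn.close()

Treating untrusted input as data rather than executable instructions is one example of the kind of practice that secure design principles aim to make routine.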
The UK government's National Cyber Security Centre separates secure cyber design principles into five sections:[51] These design principles of security by design can include some of the following techniques: Security architecture can be defined as the "practice of designing computer systems to achieve security goals."[52] These goals have overlap with the principles of "security by design" explored above, including to "make initial compromise of the system difficult," and to "limit the impact of any compromise."[52] In practice, the role of a security architect would be to ensure the structure of a system reinforces the security of the system, and that new changes are safe and meet the security requirements of the organization.[53][54] Similarly, Techopedia defines security architecture as "a unified security design that addresses the necessities and potential risks involved in a certain scenario or environment. It also specifies when and where to apply security controls. The design process is generally reproducible." The key attributes of security architecture are:[55] Practicing security architecture provides the right foundation to systematically address business, IT and security concerns in an organization. A state of computer security is the conceptual ideal, attained by the use of three processes: threat prevention, detection, and response. These processes are based on various policies and system components, which include the following: Today, computer security consists mainly of preventive measures, like firewalls or an exit procedure. A firewall can be defined as a way of filtering network data between a host or a network and another network, such as the Internet. Firewalls can be implemented as software running on the machine, hooking into the network stack (or, in the case of most UNIX-based operating systems such as Linux, built into the operating system kernel) to provide real-time filtering and blocking.[56] Another implementation is a so-called physical firewall, which consists of a separate machine filtering network traffic. Firewalls are common amongst machines that are permanently connected to the Internet. Some organizations are turning to big data platforms, such as Apache Hadoop, to extend data accessibility and machine learning to detect advanced persistent threats.[58] In order to ensure adequate security, the confidentiality, integrity and availability of a network, better known as the CIA triad, must be protected; this triad is considered the foundation of information security.[59] To achieve those objectives, administrative, physical and technical security measures should be employed. The amount of security afforded to an asset can only be determined when its value is known.[60] Vulnerability management is the cycle of identifying, fixing or mitigating vulnerabilities,[61] especially in software and firmware. Vulnerability management is integral to computer security and network security. Vulnerabilities can be discovered with a vulnerability scanner, which analyzes a computer system in search of known vulnerabilities,[62] such as open ports, insecure software configuration, and susceptibility to malware. For these tools to be effective, they must be kept up to date with every new update the vendor releases; typically, these updates allow the tool to scan for newly disclosed vulnerabilities. Beyond vulnerability scanning, many organizations contract outside security auditors to run regular penetration tests against their systems to identify vulnerabilities; a minimal sketch of the kind of port probe that such scanners and testers automate is shown below.
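This sketch is deliberately simplified and hypothetical: it shows only the most basic open-port check. Real tools add service fingerprinting, vulnerability databases and reporting, and probing systems without authorization may be unlawful.

# Minimal sketch: probing a few TCP ports on a host you are authorized to test,
# the most basic building block of a vulnerability scan.
import socket

def probe_port(host: str, port: int, timeout: float = 1.0) -> bool:
    # Returns True if a TCP connection to host:port succeeds within the timeout.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    target = "127.0.0.1"  # only scan hosts you own or have written permission to test
    for port in [22, 80, 443, 3389, 8080]:
        state = "open" if probe_port(target, port) else "closed or filtered"
        print(f"{target}:{port} is {state}")

Organizations typically commission such probes, and much fuller penetration tests, on a recurring schedule.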
In some sectors, this is a contractual requirement.[63] The act of assessing and reducing vulnerabilities to cyber attacks is commonly referred to asinformation technology security assessments. They aim to assess systems for risk and to predict and test for their vulnerabilities. Whileformal verificationof the correctness of computer systems is possible,[64][65]it is not yet common. Operating systems formally verified includeseL4,[66]andSYSGO'sPikeOS[67][68]– but these make up a very small percentage of the market. It is possible to reduce an attacker's chances by keeping systems up to date with security patches and updates and by hiring people with expertise in security. Large companies with significant threats can hire Security Operations Centre (SOC) Analysts. These are specialists in cyber defences, with their role ranging from "conducting threat analysis to investigating reports of any new issues and preparing and testing disaster recovery plans."[69] Whilst no measures can completely guarantee the prevention of an attack, these measures can help mitigate the damage of possible attacks. The effects of data loss/damage can be also reduced by carefulbacking upandinsurance. Outside of formal assessments, there are various methods of reducing vulnerabilities.Two factor authenticationis a method for mitigating unauthorized access to a system or sensitive information.[70]It requiressomething you know:a password or PIN, andsomething you have: a card, dongle, cellphone, or another piece of hardware. This increases security as an unauthorized person needs both of these to gain access. Protecting against social engineering and direct computer access (physical) attacks can only happen by non-computer means, which can be difficult to enforce, relative to the sensitivity of the information. Training is often involved to help mitigate this risk by improving people's knowledge of how to protect themselves and by increasing people's awareness of threats.[71]However, even in highly disciplined environments (e.g. military organizations), social engineering attacks can still be difficult to foresee and prevent. Inoculation, derived frominoculation theory, seeks to prevent social engineering and other fraudulent tricks and traps by instilling a resistance to persuasion attempts through exposure to similar or related attempts.[72] Hardware-based or assisted computer security also offers an alternative to software-only computer security. Using devices and methods such asdongles,trusted platform modules, intrusion-aware cases, drive locks, disabling USB ports, and mobile-enabled access may be considered more secure due to the physical access (or sophisticated backdoor access) required in order to be compromised. Each of these is covered in more detail below. One use of the termcomputer securityrefers to technology that is used to implementsecure operating systems. Using secure operating systems is a good way of ensuring computer security. These are systems that have achieved certification from an external security-auditing organization, the most popular evaluations areCommon Criteria(CC).[86] In software engineering,secure codingaims to guard against the accidental introduction of security vulnerabilities. It is also possible to create software designed from the ground up to be secure. Such systems aresecure by design. Beyond this, formal verification aims to prove thecorrectnessof thealgorithmsunderlying a system;[87]important forcryptographic protocolsfor example. 
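Two-factor authentication of the something-you-have kind is commonly implemented with time-based one-time passwords (TOTP, RFC 6238), the scheme used by most authenticator apps. The following Python sketch is a minimal, standard-library-only illustration and is not taken from any source cited here; the Base32 secret is an invented example, and a real deployment would also handle clock drift, rate limiting and secure secret storage.

# Minimal sketch of TOTP (RFC 6238): the server and the user's device derive the
# same 6-digit code from a shared secret and the current 30-second time step.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6, at=None) -> str:
    # Derive the one-time code for the current (or a given) time step.
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at is None else at) // interval)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    value = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)

def verify(secret_b32: str, submitted: str) -> bool:
    # Constant-time comparison of the submitted code against the expected one.
    return hmac.compare_digest(totp(secret_b32), submitted)

if __name__ == "__main__":
    shared_secret = "JBSWY3DPEHPK3PXP"  # hypothetical Base32 secret, for illustration only
    code = totp(shared_secret)          # what the user's authenticator app would display
    print("current code:", code, "accepted:", verify(shared_secret, code))

In such a scheme the password supplies the something-you-know factor, while a valid TOTP code demonstrates possession of the enrolled device.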
Within computer systems, two of the mainsecurity modelscapable of enforcing privilege separation areaccess control lists(ACLs) androle-based access control(RBAC). Anaccess-control list(ACL), with respect to a computer file system, is a list of permissions associated with an object. An ACL specifies which users or system processes are granted access to objects, as well as what operations are allowed on given objects. Role-based access control is an approach to restricting system access to authorized users,[88][89][90]used by the majority of enterprises with more than 500 employees,[91]and can implementmandatory access control(MAC) ordiscretionary access control(DAC). A further approach,capability-based securityhas been mostly restricted to research operating systems. Capabilities can, however, also be implemented at the language level, leading to a style of programming that is essentially a refinement of standard object-oriented design. An open-source project in the area is theE language. The end-user is widely recognized as the weakest link in the security chain[92]and it is estimated that more than 90% of security incidents and breaches involve some kind of human error.[93][94]Among the most commonly recorded forms of errors and misjudgment are poor password management, sending emails containing sensitive data and attachments to the wrong recipient, the inability to recognize misleading URLs and to identify fake websites and dangerous email attachments. A common mistake that users make is saving their user id/password in their browsers to make it easier to log in to banking sites. This is a gift to attackers who have obtained access to a machine by some means. The risk may be mitigated by the use of two-factor authentication.[95] As the human component of cyber risk is particularly relevant in determining the global cyber risk[96]an organization is facing, security awareness training, at all levels, not only provides formal compliance with regulatory and industry mandates but is considered essential[97]in reducing cyber risk and protecting individuals and companies from the great majority of cyber threats. The focus on the end-user represents a profound cultural change for many security practitioners, who have traditionally approached cybersecurity exclusively from a technical perspective, and moves along the lines suggested by major security centers[98]to develop a culture of cyber awareness within the organization, recognizing that a security-aware user provides an important line of defense against cyber attacks. Related to end-user training,digital hygieneorcyber hygieneis a fundamental principle relating to information security and, as the analogy withpersonal hygieneshows, is the equivalent of establishing simple routine measures to minimize the risks from cyber threats. 
The assumption is that good cyber hygiene practices can give networked users another layer of protection, reducing the risk that one vulnerable node will be used to either mount attacks or compromise another node or network, especially from common cyberattacks.[99]Cyber hygiene should also not be mistaken forproactive cyber defence, a military term.[100] The most common acts of digital hygiene can include updating malware protection, cloud back-ups, passwords, and ensuring restricted admin rights and network firewalls.[101]As opposed to a purely technology-based defense against threats, cyber hygiene mostly regards routine measures that are technically simple to implement and mostly dependent on discipline[102]or education.[103]It can be thought of as an abstract list of tips or measures that have been demonstrated as having a positive effect on personal or collective digital security. As such, these measures can be performed by laypeople, not just security experts. Cyber hygiene relates to personal hygiene as computer viruses relate to biological viruses (or pathogens). However, while the termcomputer viruswas coined almost simultaneously with the creation of the first working computer viruses,[104]the termcyber hygieneis a much later invention, perhaps as late as 2000[105]by Internet pioneerVint Cerf. It has since been adopted by theCongress[106]andSenateof the United States,[107]the FBI,[108]EUinstitutions[99]and heads of state.[100] Responding to attemptedsecurity breachesis often very difficult for a variety of reasons, including: Where an attack succeeds and a breach occurs, many jurisdictions now have in place mandatorysecurity breach notification laws. The growth in the number of computer systems and the increasing reliance upon them by individuals, businesses, industries, and governments means that there are an increasing number of systems at risk. The computer systems of financial regulators and financial institutions like theU.S. Securities and Exchange Commission, SWIFT, investment banks, and commercial banks are prominent hacking targets forcybercriminalsinterested in manipulating markets and making illicit gains.[109]Websites and apps that accept or storecredit card numbers, brokerage accounts, andbank accountinformation are also prominent hacking targets, because of the potential for immediate financial gain from transferring money, making purchases, or selling the information on theblack market.[110]In-store payment systems andATMshave also been tampered with in order to gather customer account data andPINs. TheUCLAInternet Report: Surveying the Digital Future (2000) found that the privacy of personal data created barriers to online sales and that more than nine out of 10 internet users were somewhat or very concerned aboutcredit cardsecurity.[111] The most common web technologies for improving security between browsers and websites are named SSL (Secure Sockets Layer), and its successor TLS (Transport Layer Security),identity managementandauthenticationservices, anddomain nameservices allow companies and consumers to engage in secure communications and commerce. Several versions of SSL and TLS are commonly used today in applications such as web browsing, e-mail, internet faxing,instant messaging, andVoIP(voice-over-IP). There are variousinteroperableimplementations of these technologies, including at least one implementation that isopen source. Open source allows anyone to view the application'ssource code, and look for and report vulnerabilities. 
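A hedged sketch of how the TLS protections described above look in code: Python's standard ssl module, configured with its defaults, validates the server's certificate chain and hostname, which is what defeats simple eavesdropping and man-in-the-middle impersonation. The host name below is only an illustrative endpoint, and the example is not drawn from any source cited in this article.

# Minimal sketch: a TLS connection with certificate and hostname verification.
import socket
import ssl

def tls_handshake_info(hostname: str, port: int = 443) -> dict:
    # create_default_context() loads the system's trusted CA certificates and
    # enables both certificate validation and hostname checking.
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=5) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=hostname) as tls_sock:
            cert = tls_sock.getpeercert()
            return {
                "tls_version": tls_sock.version(),
                "cipher": tls_sock.cipher()[0],
                "certificate_subject": dict(pair[0] for pair in cert["subject"]),
                "valid_until": cert["notAfter"],
            }

if __name__ == "__main__":
    print(tls_handshake_info("example.com"))  # example.com is purely illustrative

If the certificate does not match the hostname or is not signed by a trusted authority, the handshake raises an error instead of silently connecting, which is the behavior an impersonating man-in-the-middle has to defeat.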
The credit card companiesVisaandMasterCardcooperated to develop the secureEMVchip which is embedded in credit cards. Further developments include theChip Authentication Programwhere banks give customers hand-held card readers to perform online secure transactions. Other developments in this arena include the development of technology such as Instant Issuance which has enabled shoppingmall kiosksacting on behalf of banks to issue on-the-spot credit cards to interested customers. Computers control functions at many utilities, including coordination oftelecommunications, thepower grid,nuclear power plants, and valve opening and closing in water and gas networks. The Internet is a potential attack vector for such machines if connected, but theStuxnetworm demonstrated that even equipment controlled by computers not connected to the Internet can be vulnerable. In 2014, theComputer Emergency Readiness Team, a division of theDepartment of Homeland Security, investigated 79 hacking incidents at energy companies.[112] Theaviationindustry is very reliant on a series of complex systems which could be attacked.[113]A simple power outage at one airport can cause repercussions worldwide,[114]much of the system relies on radio transmissions which could be disrupted,[115]and controlling aircraft over oceans is especially dangerous because radar surveillance only extends 175 to 225 miles offshore.[116]There is also potential for attack from within an aircraft.[117] Implementing fixes in aerospace systems poses a unique challenge because efficient air transportation is heavily affected by weight and volume. Improving security by adding physical devices to airplanes could increase their unloaded weight, and could potentially reduce cargo or passenger capacity.[118] In Europe, with the (Pan-European Network Service)[119]and NewPENS,[120]and in the US with the NextGen program,[121]air navigation service providersare moving to create their own dedicated networks. Many modern passports are nowbiometric passports, containing an embeddedmicrochipthat stores a digitized photograph and personal information such as name, gender, and date of birth. In addition, more countries[which?]are introducingfacial recognition technologyto reduceidentity-related fraud. The introduction of the ePassport has assisted border officials in verifying the identity of the passport holder, thus allowing for quick passenger processing.[122]Plans are under way in the US, theUK, andAustraliato introduce SmartGate kiosks with both retina andfingerprint recognitiontechnology.[123]The airline industry is moving from the use of traditional paper tickets towards the use ofelectronic tickets(e-tickets). These have been made possible by advances in online credit card transactions in partnership with the airlines. Long-distance bus companies[which?]are also switching over to e-ticketing transactions today. The consequences of a successful attack range from loss of confidentiality to loss of system integrity,air traffic controloutages, loss of aircraft, and even loss of life. Desktop computers and laptops are commonly targeted to gather passwords or financial account information or to construct a botnet to attack another target.Smartphones,tablet computers,smart watches, and othermobile devicessuch asquantified selfdevices likeactivity trackershave sensors such as cameras, microphones, GPS receivers, compasses, andaccelerometerswhich could be exploited, and may collect personal information, including sensitive health information. 
WiFi, Bluetooth, and cell phone networks on any of these devices could be used as attack vectors, and sensors might be remotely activated after a successful breach.[124] Home automation devices such as the Nest thermostat, which are increasingly numerous, are also potential targets.[124] Today many healthcare providers and health insurance companies use the internet to provide enhanced products and services. Examples are the use of tele-health to potentially offer better quality and access to healthcare, and fitness trackers to lower insurance premiums. Patient records are increasingly being placed on secure in-house networks, alleviating the need for extra storage space.[125] Large corporations are common targets. In many cases attacks are aimed at financial gain through identity theft and involve data breaches. Examples include the loss of millions of clients' credit card and financial details by Home Depot,[126] Staples,[127] Target Corporation,[128] and Equifax.[129] Medical records have been targeted for use in general identity theft, health insurance fraud, and impersonating patients to obtain prescription drugs for recreational purposes or resale.[130] Although cyber threats continue to increase, 62% of all organizations did not increase security training for their business in 2015.[131] Not all attacks are financially motivated, however: security firm HBGary Federal suffered a series of serious attacks in 2011 from hacktivist group Anonymous in retaliation for the firm's CEO claiming to have infiltrated their group,[132][133] and Sony Pictures was hacked in 2014 with the apparent dual motive of embarrassing the company through data leaks and crippling the company by wiping workstations and servers.[134][135] Vehicles are increasingly computerized, with engine timing, cruise control, anti-lock brakes, seat belt tensioners, door locks, airbags and advanced driver-assistance systems on many models. Additionally, connected cars may use WiFi and Bluetooth to communicate with onboard consumer devices and the cell phone network.[136] Self-driving cars are expected to be even more complex. All of these systems carry some security risks, and such issues have gained wide attention.[137][138][139] Simple examples of risk include a malicious compact disc being used as an attack vector,[140] and the car's onboard microphones being used for eavesdropping. However, if access is gained to a car's internal controller area network, the danger is much greater[136] – and in a widely publicized 2015 test, hackers remotely carjacked a vehicle from 10 miles away and drove it into a ditch.[141][142] Manufacturers are reacting in numerous ways, with Tesla in 2016 pushing out some security fixes over the air into its cars' computer systems.[143] In the area of autonomous vehicles, in September 2016 the United States Department of Transportation announced some initial safety standards, and called for states to come up with uniform policies.[144][145][146] Additionally, e-Drivers' licenses are being developed using the same technology. For example, Mexico's licensing authority (ICV) has used a smart card platform to issue the first e-Drivers' licenses to the city of Monterrey, in the state of Nuevo León.[147] Shipping companies[148] have adopted RFID (Radio Frequency Identification) technology as an efficient, digitally secure tracking device. Unlike a barcode, RFID can be read up to 20 feet away.
RFID is used by FedEx[149] and UPS.[150] Government and military computer systems are commonly attacked by activists[151][152][153] and foreign powers.[154][155][156][157] Local and regional government infrastructure, such as traffic light controls, police and intelligence agency communications, personnel records, and student records, is also at risk.[158] The FBI, CIA, and Pentagon all utilize secure controlled-access technology for their buildings. However, the use of this form of technology is spreading into the entrepreneurial world. More and more companies are taking advantage of the development of digitally secure controlled-access technology. GE's ACUVision, for example, offers a single panel platform for access control, alarm monitoring and digital recording.[159] The Internet of things (IoT) is the network of physical objects such as devices, vehicles, and buildings that are embedded with electronics, software, sensors, and network connectivity that enables them to collect and exchange data.[160] Concerns have been raised that this is being developed without appropriate consideration of the security challenges involved.[161][162] While the IoT creates opportunities for more direct integration of the physical world into computer-based systems,[163][164] it also provides opportunities for misuse. In particular, as the Internet of Things spreads widely, cyberattacks are likely to become an increasingly physical (rather than simply virtual) threat.[165] If a front door's lock is connected to the Internet, and can be locked or unlocked from a phone, then a criminal could enter the home at the press of a button from a stolen or hacked phone. People could stand to lose much more than their credit card numbers in a world controlled by IoT-enabled devices. Thieves have also used electronic means to circumvent non-Internet-connected hotel door locks.[166] An attack aimed at physical infrastructure or human lives is often called a cyber-kinetic attack. As IoT devices and appliances become more widespread, the prevalence and potential damage of cyber-kinetic attacks can increase substantially. Medical devices have either been successfully attacked or had potentially deadly vulnerabilities demonstrated, including both in-hospital diagnostic equipment[167] and implanted devices including pacemakers[168] and insulin pumps.[169] There are many reports of hospitals and hospital organizations getting hacked, including ransomware attacks,[170][171][172][173] Windows XP exploits,[174][175] viruses,[176][177] and data breaches of sensitive data stored on hospital servers.[178][171][179][180] On 28 December 2016 the US Food and Drug Administration released its recommendations for how medical device manufacturers should maintain the security of Internet-connected devices, but provided no structure for enforcement.[181][182] In distributed generation systems, the risk of a cyber attack is real, according to Daily Energy Insider. An attack could cause a loss of power in a large area for a long period of time, and such an attack could have just as severe consequences as a natural disaster. The District of Columbia is considering creating a Distributed Energy Resources (DER) Authority within the city, with the goal being for customers to have more insight into their own energy use and giving the local electric utility, Pepco, the chance to better estimate energy demand. The D.C.
proposal, however, would "allow third-party vendors to create numerous points of energy distribution, which could potentially create more opportunities for cyber attackers to threaten the electric grid."[183] Perhaps the most widely known digitally secure telecommunication device is the SIM (Subscriber Identity Module) card, a device that is embedded in most of the world's cellular devices before any service can be obtained. The SIM card is just the beginning of this digitally secure environment. The Smart Card Web Servers draft standard (SCWS) defines the interfaces to an HTTP server in a smart card.[184] Tests are being conducted to secure OTA ("over-the-air") payment and credit card information from and to a mobile phone. Combination SIM/DVD devices are being developed through Smart Video Card technology, which embeds a DVD-compliant optical disc into the card body of a regular SIM card. Other telecommunication developments involving digital security include mobile signatures, which use the embedded SIM card to generate a legally binding electronic signature. Serious financial damage has been caused by security breaches, but because there is no standard model for estimating the cost of an incident, the only data available is that which is made public by the organizations involved. "Several computer security consulting firms produce estimates of total worldwide losses attributable to virus and worm attacks and to hostile digital acts in general. The 2003 loss estimates by these firms range from $13 billion (worms and viruses only) to $226 billion (for all forms of covert attacks). The reliability of these estimates is often challenged; the underlying methodology is basically anecdotal."[185] However, reasonable estimates of the financial cost of security breaches can actually help organizations make rational investment decisions. According to the classic Gordon-Loeb Model analyzing the optimal investment level in information security, one can conclude that the amount a firm spends to protect information should generally be only a small fraction of the expected loss (i.e., the expected value of the loss resulting from a cyber/information security breach).[186] As with physical security, the motivations for breaches of computer security vary between attackers. Some are thrill-seekers or vandals, some are activists, others are criminals looking for financial gain. State-sponsored attackers are now common and well resourced, but started with amateurs such as Markus Hess, who hacked for the KGB, as recounted by Clifford Stoll in The Cuckoo's Egg. Attackers' motivations can vary across all types of attacks, from pleasure to political goals.[15] For example, hacktivists may target a company or organization that carries out activities they do not agree with, aiming to create bad publicity for the company by crashing its website. High-capability hackers, often with larger backing or state sponsorship, may attack based on the demands of their financial backers. These attacks are likely to be more serious.
An example of a more serious attack was the 2015 Ukraine power grid hack, which reportedly utilised spear-phishing, destruction of files, and denial-of-service attacks to carry out the full attack.[187][188] Additionally, recent attacker motivations can be traced back to extremist organizations seeking to gain political advantage or disrupt social agendas.[189] The growth of the internet, mobile technologies, and inexpensive computing devices has led to a rise in capabilities, but also to increased risk to environments that are deemed vital to operations. All critical targeted environments are susceptible to compromise, and this has led to a series of proactive studies on how to mitigate the risk by taking into consideration the motivations of these types of actors. Several stark differences exist between hacker motivations and those of nation state actors seeking to attack based on an ideological preference.[190] A key aspect of threat modeling for any system is identifying the motivations behind potential attacks and the individuals or groups likely to carry them out. The level and detail of security measures will differ based on the specific system being protected. For instance, a home personal computer, a bank, and a classified military network each face distinct threats, despite using similar underlying technologies.[191] Computer security incident management is an organized approach to addressing and managing the aftermath of a computer security incident or compromise with the goal of preventing a breach or thwarting a cyberattack. An incident that is not identified and managed at the time of intrusion typically escalates to a more damaging event such as a data breach or system failure. The intended outcome of a computer security incident response plan is to contain the incident, limit damage and assist recovery to business as usual. Responding to compromises quickly can mitigate exploited vulnerabilities, restore services and processes and minimize losses.[192] Incident response planning allows an organization to establish a series of best practices to stop an intrusion before it causes damage. Typical incident response plans contain a set of written instructions that outline the organization's response to a cyberattack. Without a documented plan in place, an organization may not successfully detect an intrusion or compromise, and stakeholders may not understand their roles, processes and procedures during an escalation, slowing the organization's response and resolution. There are four key components of a computer security incident response plan. Some illustrative examples of different types of computer security breaches are given below. In 1988, 60,000 computers were connected to the Internet, and most were mainframes, minicomputers and professional workstations. On 2 November 1988, many started to slow down, because they were running malicious code that demanded processor time and that spread itself to other computers – the first internet computer worm.[194] The software was traced back to 23-year-old Cornell University graduate student Robert Tappan Morris, who said "he wanted to count how many machines were connected to the Internet".[194] In 1994, over a hundred intrusions were made by unidentified crackers into the Rome Laboratory, the US Air Force's main command and research facility. Using trojan horses, hackers were able to obtain unrestricted access to Rome's networking systems and remove traces of their activities.
The intruders were able to obtain classified files, such as air tasking order systems data, and were furthermore able to penetrate connected networks of the National Aeronautics and Space Administration's Goddard Space Flight Center, Wright-Patterson Air Force Base, some Defense contractors, and other private sector organizations, by posing as a trusted Rome center user.[195] In early 2007, American apparel and home goods company TJX announced that it was the victim of an unauthorized computer systems intrusion[196] and that the hackers had accessed a system that stored data on credit card, debit card, check, and merchandise return transactions.[197] In 2010, the computer worm known as Stuxnet reportedly ruined almost one-fifth of Iran's nuclear centrifuges.[198] It did so by disrupting industrial programmable logic controllers (PLCs) in a targeted attack. This is generally believed to have been launched by Israel and the United States to disrupt Iran's nuclear program[199][200][201][202] – although neither has publicly admitted this. In early 2013, documents provided by Edward Snowden were published by The Washington Post and The Guardian[203][204] exposing the massive scale of NSA global surveillance. There were also indications that the NSA may have inserted a backdoor in a NIST standard for encryption.[205] This standard was later withdrawn due to widespread criticism.[206] The NSA was additionally revealed to have tapped the links between Google's data centers.[207] A Ukrainian hacker known as Rescator broke into Target Corporation computers in 2013, stealing roughly 40 million credit card numbers,[208] and then into Home Depot computers in 2014, stealing between 53 and 56 million credit card numbers.[209] Warnings were delivered at both corporations, but were ignored; physical security breaches using self-checkout machines are believed to have played a large role. "The malware utilized is absolutely unsophisticated and uninteresting," says Jim Walter, director of threat intelligence operations at security technology company McAfee – meaning that the heists could have easily been stopped by existing antivirus software had administrators responded to the warnings. The size of the thefts has resulted in major attention from state and federal United States authorities, and the investigation is ongoing. In April 2015, the Office of Personnel Management discovered it had been hacked more than a year earlier in a data breach, resulting in the theft of approximately 21.5 million personnel records handled by the office.[210] The Office of Personnel Management hack has been described by federal officials as among the largest breaches of government data in the history of the United States.[211] Data targeted in the breach included personally identifiable information such as Social Security numbers, names, dates and places of birth, addresses, and fingerprints of current and former government employees as well as anyone who had undergone a government background check.[212][213] It is believed the hack was perpetrated by Chinese hackers.[214] In July 2015, a hacker group known as The Impact Team successfully breached the extramarital relationship website Ashley Madison, created by Avid Life Media. The group claimed that they had taken not only company data but user data as well.
After the breach, The Impact Team dumped emails from the company's CEO to prove their point, and threatened to dump customer data unless the website was taken down permanently.[215] When Avid Life Media did not take the site offline, the group released two more compressed files, one 9.7 GB and the second 20 GB. After the second data dump, Avid Life Media CEO Noel Biderman resigned, but the website continued to function. In May 2021, a ransomware attack on Colonial Pipeline took down the largest fuel pipeline in the U.S. and led to shortages across the East Coast.[216] International legal issues of cyber attacks are complicated in nature. There is no global base of common rules to judge, and eventually punish, cybercrimes and cybercriminals, and where security firms or agencies do locate the cybercriminal behind the creation of a particular piece of malware or form of cyber attack, often the local authorities cannot take action due to lack of laws under which to prosecute.[217][218] Proving attribution for cybercrimes and cyberattacks is also a major problem for all law enforcement agencies. "Computer viruses switch from one country to another, from one jurisdiction to another – moving around the world, using the fact that we don't have the capability to globally police operations like this. So the Internet is as if someone [had] given free plane tickets to all the online criminals of the world."[217] The use of techniques such as dynamic DNS, fast flux and bulletproof servers adds to the difficulty of investigation and enforcement. The role of the government is to make regulations to force companies and organizations to protect their systems, infrastructure and information from any cyberattacks, but also to protect its own national infrastructure, such as the national power grid.[219] The government's regulatory role in cyberspace is complicated. For some, cyberspace was seen as a virtual space that was to remain free of government intervention, as can be seen in many of today's libertarian blockchain and bitcoin discussions.[220] Many government officials and experts think that the government should do more and that there is a crucial need for improved regulation, mainly due to the failure of the private sector to solve the cybersecurity problem efficiently. R. Clarke said during a panel discussion at the RSA Security Conference in San Francisco that he believes the "industry only responds when you threaten regulation. If the industry doesn't respond (to the threat), you have to follow through."[221] On the other hand, executives from the private sector agree that improvements are necessary, but think that government intervention would affect their ability to innovate efficiently. Daniel R. McCarthy analyzed this public-private partnership in cybersecurity and reflected on the role of cybersecurity in the broader constitution of political order.[222] On 22 May 2020, the UN Security Council held its second ever informal meeting on cybersecurity to focus on cyber challenges to international peace. According to UN Secretary-General António Guterres, new technologies are too often used to violate rights.[223] Many different teams and organizations exist to address cybersecurity. On 14 April 2016, the European Parliament and the Council of the European Union adopted the General Data Protection Regulation (GDPR). The GDPR, which came into force on 25 May 2018, grants individuals within the European Union (EU) and the European Economic Area (EEA) the right to the protection of personal data.
The regulation requires that any entity that processes personal data incorporate data protection by design and by default. It also requires that certain organizations appoint a Data Protection Officer (DPO). The IT Security Association TeleTrusT, an international competence network for IT security, has existed in Germany since June 1986. Most countries have their own computer emergency response team to protect network security. Since 2010, Canada has had a cybersecurity strategy.[229][230] This functions as a counterpart document to the National Strategy and Action Plan for Critical Infrastructure.[231] The strategy has three main pillars: securing government systems, securing vital private cyber systems, and helping Canadians to be secure online.[230][231] There is also a Cyber Incident Management Framework to provide a coordinated response in the event of a cyber incident.[232][233] The Canadian Cyber Incident Response Centre (CCIRC) is responsible for mitigating and responding to threats to Canada's critical infrastructure and cyber systems. It provides support to mitigate cyber threats, technical support to respond to and recover from targeted cyber attacks, and online tools for members of Canada's critical infrastructure sectors.[234] It posts regular cybersecurity bulletins[235] and operates an online reporting tool where individuals and organizations can report a cyber incident.[236] To inform the general public on how to protect themselves online, Public Safety Canada has partnered with STOP.THINK.CONNECT, a coalition of non-profit, private sector, and government organizations,[237] and launched the Cyber Security Cooperation Program.[238][239] They also run the GetCyberSafe portal for Canadian citizens, and Cyber Security Awareness Month during October.[240] Public Safety Canada aims to begin an evaluation of Canada's cybersecurity strategy in early 2015.[231] The Australian federal government announced an $18.2 million investment to fortify the cybersecurity resilience of small and medium enterprises (SMEs) and enhance their capabilities in responding to cyber threats. This funding is a component of the 2023-2030 Australian Cyber Security Strategy, which was due for release shortly after the announcement. An allocation of $7.2 million is earmarked for the establishment of a voluntary cyber health check program, allowing businesses to conduct a comprehensive and tailored self-assessment of their cybersecurity posture. The health check serves as a diagnostic tool, enabling enterprises to ascertain the robustness of their cyber security measures. It also gives them access to a repository of educational resources and materials, helping them build the skills needed for an improved cybersecurity posture. The initiative was jointly announced by Minister for Cyber Security Clare O'Neil and Minister for Small Business Julie Collins.[241] Some provisions for cybersecurity have been incorporated into rules framed under the Information Technology Act 2000.[242] The National Cyber Security Policy 2013 is a policy framework by the Ministry of Electronics and Information Technology (MeitY) which aims to protect the public and private infrastructure from cyberattacks, and to safeguard "information, such as personal information (of web users), financial and banking information and sovereign data". CERT-In is the nodal agency which monitors the cyber threats in the country.
The post of National Cyber Security Coordinator has also been created in the Prime Minister's Office (PMO). The Indian Companies Act 2013 has also introduced cyber law and cybersecurity obligations on the part of Indian directors. Some provisions for cybersecurity have been incorporated into rules framed under the Information Technology Act 2000, updated in 2013.[243] Following cyberattacks in the first half of 2013, when the government, news media, television stations, and bank websites were compromised, the national government committed to the training of 5,000 new cybersecurity experts by 2017. The South Korean government blamed its northern counterpart for these attacks, as well as incidents that occurred in 2009, 2011,[244] and 2012, but Pyongyang denies the accusations.[245] With the release of this National Cyber Plan, the United States has its first fully formed cyber plan in 15 years.[246] In this policy, the US says it will: protect the country by keeping networks, systems, functions, and data safe; promote American wealth by building a strong digital economy and encouraging strong domestic innovation; preserve peace and safety by making it easier for the US, working with friends and partners, to stop people from using computer tools for malicious purposes; and increase the United States' impact around the world to support the main ideas behind an open, safe, reliable, and compatible Internet.[247] The new U.S. cyber strategy[248] seeks to allay some of those concerns by promoting responsible behavior in cyberspace, urging nations to adhere to a set of norms, both through international law and voluntary standards. It also calls for specific measures to harden U.S. government networks from attacks, like the June 2015 intrusion into the U.S. Office of Personnel Management (OPM), which compromised the records of about 4.2 million current and former government employees. And the strategy calls for the U.S. to continue to name and shame bad cyber actors, calling them out publicly for attacks when possible, along with the use of economic sanctions and diplomatic pressure.[249] The Computer Fraud and Abuse Act of 1986, codified at 18 U.S.C. § 1030, is the key legislation. It prohibits unauthorized access to or damage of protected computers as defined in 18 U.S.C. § 1030(e)(2). Although various other measures have been proposed,[250][251] none have succeeded. In 2013, executive order 13636 Improving Critical Infrastructure Cybersecurity was signed, which prompted the creation of the NIST Cybersecurity Framework. In response to the Colonial Pipeline ransomware attack,[252] President Joe Biden signed Executive Order 14028[253] on May 12, 2021, to increase software security standards for sales to the government, tighten detection and security on existing systems, improve information sharing and training, establish a Cyber Safety Review Board, and improve incident response. The General Services Administration (GSA) has standardized the penetration test service as a pre-vetted support service, to rapidly address potential vulnerabilities and stop adversaries before they impact US federal, state and local governments. These services are commonly referred to as Highly Adaptive Cybersecurity Services (HACS).
TheDepartment of Homeland Securityhas a dedicated division responsible for the response system,risk managementprogram and requirements for cybersecurity in the United States called theNational Cyber Security Division.[254][255]The division is home to US-CERT operations and the National Cyber Alert System.[255]The National Cybersecurity and Communications Integration Center brings together government organizations responsible for protecting computer networks and networked infrastructure.[256] The third priority of the FBI is to: "Protect the United States against cyber-based attacks and high-technology crimes",[257]and they, along with theNational White Collar Crime Center(NW3C), and theBureau of Justice Assistance(BJA) are part of the multi-agency task force, TheInternet Crime Complaint Center, also known as IC3.[258] In addition to its own specific duties, the FBI participates alongside non-profit organizations such asInfraGard.[259][260] TheComputer Crime and Intellectual Property Section(CCIPS) operates in theUnited States Department of Justice Criminal Division. The CCIPS is in charge of investigatingcomputer crimeandintellectual propertycrime and is specialized in the search and seizure ofdigital evidencein computers andnetworks.[261]In 2017, CCIPS published A Framework for a Vulnerability Disclosure Program for Online Systems to help organizations "clearly describe authorized vulnerability disclosure and discovery conduct, thereby substantially reducing the likelihood that such described activities will result in a civil or criminal violation of law under the Computer Fraud and Abuse Act (18 U.S.C. § 1030)."[262] TheUnited States Cyber Command, also known as USCYBERCOM, "has the mission to direct, synchronize, and coordinate cyberspace planning and operations to defend and advance national interests in collaboration with domestic and international partners."[263]It has no role in the protection of civilian networks.[264][265] The U.S.Federal Communications Commission's role in cybersecurity is to strengthen the protection of critical communications infrastructure, to assist in maintaining the reliability of networks during disasters, to aid in swift recovery after, and to ensure that first responders have access to effective communications services.[266] TheFood and Drug Administrationhas issued guidance for medical devices,[267]and theNational Highway Traffic Safety Administration[268]is concerned with automotive cybersecurity. After being criticized by theGovernment Accountability Office,[269]and following successful attacks on airports and claimed attacks on airplanes, theFederal Aviation Administrationhas devoted funding to securing systems on board the planes of private manufacturers, and theAircraft Communications Addressing and Reporting System.[270]Concerns have also been raised about the futureNext Generation Air Transportation System.[271] The US Department of Defense (DoD) issued DoD Directive 8570 in 2004, supplemented by DoD Directive 8140, requiring all DoD employees and all DoD contract personnel involved in information assurance roles and activities to earn and maintain various industry Information Technology (IT) certifications in an effort to ensure that all DoD personnel involved in network infrastructure defense have minimum levels of IT industry recognized knowledge, skills and abilities (KSA). 
Andersson and Reimers (2019) report these certifications range from CompTIA's A+ and Security+ through the ICS2.org's CISSP, etc.[272] Computer emergency response teamis a name given to expert groups that handle computer security incidents. In the US, two distinct organizations exist, although they do work closely together. In the context ofU.S. nuclear power plants, theU.S. Nuclear Regulatory Commission (NRC)outlines cybersecurity requirements under10 CFR Part 73, specifically in §73.54.[274] TheNuclear Energy Institute's NEI 08-09 document,Cyber Security Plan for Nuclear Power Reactors,[275]outlines a comprehensive framework forcybersecurityin thenuclear power industry. Drafted with input from theU.S. NRC, this guideline is instrumental in aidinglicenseesto comply with theCode of Federal Regulations (CFR), which mandates robust protection of digital computers and equipment and communications systems at nuclear power plants against cyber threats.[276] There is growing concern that cyberspace will become the next theater of warfare. As Mark Clayton fromThe Christian Science Monitorwrote in a 2015 article titled "The New Cyber Arms Race": In the future, wars will not just be fought by soldiers with guns or with planes that drop bombs. They will also be fought with the click of a mouse a half a world away that unleashes carefully weaponized computer programs that disrupt or destroy critical industries like utilities, transportation, communications, and energy. Such attacks could also disable military networks that control the movement of troops, the path of jet fighters, the command and control of warships.[277] This has led to new terms such ascyberwarfareandcyberterrorism. TheUnited States Cyber Commandwas created in 2009[278]and many other countrieshave similar forces. There are a few critical voices that question whether cybersecurity is as significant a threat as it is made out to be.[279][280][281] Cybersecurity is a fast-growing field ofITconcerned with reducing organizations' risk of hack or data breaches.[282]According to research from the Enterprise Strategy Group, 46% of organizations say that they have a "problematic shortage" of cybersecurity skills in 2016, up from 28% in 2015.[283]Commercial, government and non-governmental organizations all employ cybersecurity professionals. The fastest increases in demand for cybersecurity workers are in industries managing increasing volumes of consumer data such as finance, health care, and retail.[284]However, the use of the termcybersecurityis more prevalent in government job descriptions.[285] Typical cybersecurity job titles and descriptions include:[286] Student programs are also available for people interested in beginning a career in cybersecurity.[290][291]Meanwhile, a flexible and effective option for information security professionals of all experience levels to keep studying is online security training, including webcasts.[292][293]A wide range of certified courses are also available.[294] In the United Kingdom, a nationwide set of cybersecurity forums, known as theU.K Cyber Security Forum, were established supported by the Government's cybersecurity strategy[295]in order to encourage start-ups and innovation and to address the skills gap[296]identified by theU.K Government. In Singapore, theCyber Security Agencyhas issued a Singapore Operational Technology (OT) Cybersecurity Competency Framework (OTCCF). The framework defines emerging cybersecurity roles in Operational Technology. 
The OTCCF was endorsed by theInfocomm Media Development Authority(IMDA). It outlines the different OT cybersecurity job positions as well as the technical skills and core competencies necessary. It also depicts the many career paths available, including vertical and lateral advancement opportunities.[297] The following terms used with regards to computer security are explained below: Since theInternet's arrival and with the digital transformation initiated in recent years, the notion of cybersecurity has become a familiar subject in both our professional and personal lives. Cybersecurity and cyber threats have been consistently present for the last 60 years of technological change. In the 1970s and 1980s, computer security was mainly limited toacademiauntil the conception of the Internet, where, with increased connectivity, computer viruses and network intrusions began to take off. After the spread of viruses in the 1990s, the 2000s marked the institutionalization of organized attacks such asdistributed denial of service.[301]This led to the formalization of cybersecurity as a professional discipline.[302] TheApril 1967 sessionorganized byWillis Wareat theSpring Joint Computer Conference, and the later publication of theWare Report, were foundational moments in the history of the field of computer security.[303]Ware's work straddled the intersection of material, cultural, political, and social concerns.[303] A 1977NISTpublication[304]introduced theCIA triadof confidentiality, integrity, and availability as a clear and simple way to describe key security goals.[305]While still relevant, many more elaborate frameworks have since been proposed.[306][307] However, in the 1970s and 1980s, there were no grave computer threats because computers and the internet were still developing, and security threats were easily identifiable. More often, threats came from malicious insiders who gained unauthorized access to sensitive documents and files. Although malware and network breaches existed during the early years, they did not use them for financial gain. By the second half of the 1970s, established computer firms likeIBMstarted offering commercial access control systems and computer security software products.[308] One of the earliest examples of an attack on a computer network was thecomputer wormCreeperwritten by Bob Thomas atBBN, which propagated through theARPANETin 1971.[309]The program was purely experimental in nature and carried no malicious payload. A later program,Reaper, was created byRay Tomlinsonin 1972 and used to destroy Creeper.[citation needed] Between September 1986 and June 1987, a group of German hackers performed the first documented case of cyber espionage.[310]The group hacked into American defense contractors, universities, and military base networks and sold gathered information to the Soviet KGB. The group was led byMarkus Hess, who was arrested on 29 June 1987. He was convicted of espionage (along with two co-conspirators) on 15 Feb 1990. In 1988, one of the first computer worms, called theMorris worm, was distributed via the Internet. 
It gained significant mainstream media attention.[311] Netscape started developing the SSL protocol shortly after the National Center for Supercomputing Applications (NCSA) launched Mosaic 1.0, the first widely popular web browser, in 1993.[312][313] Netscape had SSL version 1.0 ready in 1994, but it was never released to the public due to many serious security vulnerabilities.[312] However, in 1995, Netscape launched version 2.0.[314] The National Security Agency (NSA) is responsible for the protection of U.S. information systems and also for collecting foreign intelligence.[315] The agency analyzes commonly used software and system configurations to find security flaws, which it can use for offensive purposes against competitors of the United States.[316] NSA contractors created and sold click-and-shoot attack tools to US agencies and close allies, but eventually the tools made their way to foreign adversaries.[317] In 2016, the NSA's own hacking tools were hacked, and they have been used by Russia and North Korea.[citation needed] NSA employees and contractors have been recruited at high salaries by adversaries anxious to compete in cyberwarfare.[citation needed] In 2007, the United States and Israel began exploiting security flaws in the Microsoft Windows operating system to attack and damage equipment used in Iran to refine nuclear materials. Iran responded by heavily investing in its own cyberwarfare capability, which it began using against the United States.[316]
https://en.wikipedia.org/wiki/Computer_security#Mitigation
Computer security(alsocybersecurity,digital security, orinformation technology (IT) security) is a subdiscipline within the field ofinformation security. It consists of the protection ofcomputer software,systemsandnetworksfromthreatsthat can lead to unauthorized information disclosure, theft or damage tohardware,software, ordata, as well as from the disruption or misdirection of theservicesthey provide.[1][2] The significance of the field stems from the expanded reliance oncomputer systems, theInternet,[3]andwireless network standards. Its importance is further amplified by the growth ofsmart devices, includingsmartphones,televisions, and the various devices that constitute theInternet of things(IoT). Cybersecurity has emerged as one of the most significant new challenges facing the contemporary world, due to both the complexity ofinformation systemsand the societies they support. Security is particularly crucial for systems that govern large-scale systems with far-reaching physical effects, such aspower distribution,elections, andfinance.[4][5] Although many aspects of computer security involve digital security, such as electronicpasswordsandencryption,physical securitymeasures such asmetal locksare still used to prevent unauthorized tampering. IT security is not a perfect subset ofinformation security, therefore does not completely align into thesecurity convergenceschema. A vulnerability refers to a flaw in the structure, execution, functioning, or internal oversight of a computer or system that compromises its security. Most of the vulnerabilities that have been discovered are documented in theCommon Vulnerabilities and Exposures(CVE) database.[6]Anexploitablevulnerability is one for which at least one workingattackorexploitexists.[7]Actors maliciously seeking vulnerabilities are known asthreats. Vulnerabilities can be researched, reverse-engineered, hunted, or exploited usingautomated toolsor customized scripts.[8][9] Various people or parties are vulnerable to cyber attacks; however, different groups are likely to experience different types of attacks more than others.[10] In April 2023, theUnited KingdomDepartment for Science, Innovation & Technology released a report on cyber attacks over the previous 12 months.[11]They surveyed 2,263 UK businesses, 1,174 UK registered charities, and 554 education institutions. The research found that "32% of businesses and 24% of charities overall recall any breaches or attacks from the last 12 months." These figures were much higher for "medium businesses (59%), large businesses (69%), and high-income charities with £500,000 or more in annual income (56%)."[11]Yet, although medium or large businesses are more often the victims, since larger companies have generally improved their security over the last decade,small and midsize businesses(SMBs) have also become increasingly vulnerable as they often "do not have advanced tools to defend the business."[10]SMBs are most likely to be affected by malware, ransomware, phishing,man-in-the-middle attacks, and Denial-of Service (DoS) Attacks.[10] Normal internet users are most likely to be affected by untargeted cyberattacks.[12]These are where attackers indiscriminately target as many devices, services, or users as possible. They do this using techniques that take advantage of the openness of the Internet. 
These strategies mostly include phishing, ransomware, water holing and scanning.[12] To secure a computer system, it is important to understand the attacks that can be made against it, and these threats can typically be classified into one of the following categories: A backdoor in a computer system, a cryptosystem, or an algorithm is any secret method of bypassing normal authentication or security controls. These weaknesses may exist for many reasons, including original design or poor configuration.[13] Due to the nature of backdoors, they are of greater concern to companies and databases as opposed to individuals. Backdoors may be added by an authorized party to allow some legitimate access or by an attacker for malicious reasons. Criminals often use malware to install backdoors, giving them remote administrative access to a system.[14] Once they have access, cybercriminals can "modify files, steal personal information, install unwanted software, and even take control of the entire computer."[14] Backdoors can be difficult to detect: they often remain hidden within the source code or system firmware, and uncovering them may require intimate knowledge of the operating system of the computer. Denial-of-service attacks (DoS) are designed to make a machine or network resource unavailable to its intended users.[15] Attackers can deny service to individual victims, such as by deliberately entering a wrong password enough consecutive times to cause the victim's account to be locked, or they may overload the capabilities of a machine or network and block all users at once. While a network attack from a single IP address can be blocked by adding a new firewall rule, many forms of distributed denial-of-service (DDoS) attacks are possible, where the attack comes from a large number of points. In this case, defending against these attacks is much more difficult. Such attacks can originate from the zombie computers of a botnet or from a range of other possible techniques, including distributed reflective denial-of-service (DRDoS), where innocent systems are fooled into sending traffic to the victim.[15] With such attacks, the amplification factor makes the attack easier for the attacker, because they have to use little bandwidth themselves. To understand why attackers may carry out these attacks, see the 'attacker motivation' section. A direct-access attack is when an unauthorized user (an attacker) gains physical access to a computer, most likely to directly copy data from it or steal information.[16] Attackers may also compromise security by making operating system modifications, installing software worms, keyloggers, covert listening devices or using wireless microphones. Even when the system is protected by standard security measures, these may be bypassed by booting another operating system or tool from a CD-ROM or other bootable media. Disk encryption and the Trusted Platform Module standard are designed to prevent these attacks. Direct-access attacks are related in concept to direct memory attacks, which allow an attacker to gain direct access to a computer's memory.[17] The attacks "take advantage of a feature of modern computers that allows certain devices, such as external hard drives, graphics cards, or network cards, to access the computer's memory directly."[17] Eavesdropping is the act of surreptitiously listening to a private computer conversation (communication), usually between hosts on a network.
It typically occurs when a user connects to a network where traffic is not secured or encrypted and sends sensitive business data to a colleague, which, when listened to by an attacker, could be exploited.[18]Data transmitted across anopen networkallows an attacker to exploit a vulnerability and intercept it via various methods. Unlikemalware, direct-access attacks, or other forms of cyber attacks, eavesdropping attacks are unlikely to negatively affect the performance of networks or devices, making them difficult to notice.[18]In fact, "the attacker does not need to have any ongoing connection to the software at all. The attacker can insert the software onto a compromised device, perhaps by direct insertion or perhaps by a virus or other malware, and then come back some time later to retrieve any data that is found or trigger the software to send the data at some determined time."[19] Using avirtual private network(VPN), which encrypts data between two points, is one of the most common forms of protection against eavesdropping. Using the best form of encryption possible for wireless networks is best practice, as well as usingHTTPSinstead of an unencryptedHTTP.[20] Programs such asCarnivoreandNarusInSighthave been used by theFederal Bureau of Investigation(FBI) and NSA to eavesdrop on the systems ofinternet service providers. Even machines that operate as a closed system (i.e., with no contact with the outside world) can be eavesdropped upon by monitoring the faintelectromagnetictransmissions generated by the hardware.TEMPESTis a specification by the NSA referring to these attacks. Malicious software (malware) is any software code or computer program "intentionally written to harm a computer system or its users."[21]Once present on a computer, it can leak sensitive details such as personal information, business information and passwords, can give control of the system to the attacker, and can corrupt or delete data permanently.[22][23] Man-in-the-middle attacks(MITM) involve a malicious attacker trying to intercept, surveil or modify communications between two parties by spoofing one or both party's identities and injecting themselves in-between.[24]Types of MITM attacks include: Surfacing in 2017, a new class of multi-vector,[25]polymorphic[26]cyber threats combine several types of attacks and change form to avoid cybersecurity controls as they spread. Multi-vector polymorphic attacks, as the name describes, are both multi-vectored and polymorphic.[27]Firstly, they are a singular attack that involves multiple methods of attack. In this sense, they are "multi-vectored (i.e. the attack can use multiple means of propagation such as via the Web, email and applications." However, they are also multi-staged, meaning that "they can infiltrate networks and move laterally inside the network."[27]The attacks can be polymorphic, meaning that the cyberattacks used such as viruses, worms or trojans "constantly change ("morph") making it nearly impossible to detect them using signature-based defences."[27] Phishingis the attempt of acquiring sensitive information such as usernames, passwords, and credit card details directly from users by deceiving the users.[28]Phishing is typically carried out byemail spoofing,instant messaging,text message, or on aphonecall. They often direct users to enter details at a fake website whoselook and feelare almost identical to the legitimate one.[29]The fake website often asks for personal information, such as login details and passwords. 
This information can then be used to gain access to the individual's real account on the real website. Preying on a victim's trust, phishing can be classified as a form ofsocial engineering. Attackers can use creative ways to gain access to real accounts. A common scam is for attackers to send fake electronic invoices[30]to individuals showing that they recently purchased music, apps, or others, and instructing them to click on a link if the purchases were not authorized. A more strategic type of phishing is spear-phishing which leverages personal or organization-specific details to make the attacker appear like a trusted source. Spear-phishing attacks target specific individuals, rather than the broad net cast by phishing attempts.[31] Privilege escalationdescribes a situation where an attacker with some level of restricted access is able to, without authorization, elevate their privileges or access level.[32]For example, a standard computer user may be able to exploit avulnerabilityin the system to gain access to restricted data; or even becomerootand have full unrestricted access to a system. The severity of attacks can range from attacks simply sending an unsolicited email to aransomware attackon large amounts of data. Privilege escalation usually starts withsocial engineeringtechniques, oftenphishing.[32] Privilege escalation can be separated into two strategies, horizontal and vertical privilege escalation: Any computational system affects its environment in some form. This effect it has on its environment can range from electromagnetic radiation, to residual effect on RAM cells which as a consequence make aCold boot attackpossible, to hardware implementation faults that allow for access or guessing of other values that normally should be inaccessible. In Side-channel attack scenarios, the attacker would gather such information about a system or network to guess its internal state and as a result access the information which is assumed by the victim to be secure. The target information in a side channel can be challenging to detect due to its low amplitude when combined with other signals[33] Social engineering, in the context of computer security, aims to convince a user to disclose secrets such as passwords, card numbers, etc. or grant physical access by, for example, impersonating a senior executive, bank, a contractor, or a customer.[34]This generally involves exploiting people's trust, and relying on theircognitive biases. A common scam involves emails sent to accounting and finance department personnel, impersonating their CEO and urgently requesting some action. One of the main techniques of social engineering arephishingattacks. In early 2016, theFBIreported that suchbusiness email compromise(BEC) scams had cost US businesses more than $2 billion in about two years.[35] In May 2016, theMilwaukee BucksNBAteam was the victim of this type of cyber scam with a perpetrator impersonating the team's presidentPeter Feigin, resulting in the handover of all the team's employees' 2015W-2tax forms.[36] Spoofing is an act of pretending to be a valid entity through the falsification of data (such as an IP address or username), in order to gain access to information or resources that one is otherwise unauthorized to obtain. 
Spoofing is closely related tophishing.[37][38]There are several types of spoofing, including: In 2018, the cybersecurity firmTrellixpublished research on the life-threatening risk of spoofing in the healthcare industry.[40] Tamperingdescribes amalicious modificationor alteration of data. It is an intentional but unauthorized act resulting in the modification of a system, components of systems, its intended behavior, or data. So-calledEvil Maid attacksand security services planting ofsurveillancecapability into routers are examples.[41] HTMLsmuggling allows an attacker tosmugglea malicious code inside a particular HTML or web page.[42]HTMLfiles can carry payloads concealed as benign, inert data in order to defeatcontent filters. These payloads can be reconstructed on the other side of the filter.[43] When a target user opens the HTML, the malicious code is activated; the web browser thendecodesthe script, which then unleashes the malware onto the target's device.[42] Employee behavior can have a big impact oninformation securityin organizations. Cultural concepts can help different segments of the organization work effectively or work against effectiveness toward information security within an organization. Information security culture is the "...totality of patterns of behavior in an organization that contributes to the protection of information of all kinds."[44] Andersson and Reimers (2014) found that employees often do not see themselves as part of their organization's information security effort and often take actions that impede organizational changes.[45]Indeed, the Verizon Data Breach Investigations Report 2020, which examined 3,950 security breaches, discovered 30% of cybersecurity incidents involved internal actors within a company.[46]Research shows information security culture needs to be improved continuously. In "Information Security Culture from Analysis to Change", authors commented, "It's a never-ending process, a cycle of evaluation and change or maintenance." To manage the information security culture, five steps should be taken: pre-evaluation, strategic planning, operative planning, implementation, and post-evaluation.[47] In computer security, acountermeasureis an action, device, procedure or technique that reduces a threat, a vulnerability, or anattackby eliminating or preventing it, by minimizing the harm it can cause, or by discovering and reporting it so that corrective action can be taken.[48][49][50] Some common countermeasures are listed in the following sections: Security by design, or alternately secure by design, means that the software has been designed from the ground up to be secure. In this case, security is considered a main feature. 
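One secure-by-design habit often cited in this context is to deny access by default and to fail closed when a check cannot be completed. The short Python sketch below is only an illustration of that idea under assumed names; the in-memory policy store and the function are hypothetical and do not come from any source cited here.

```python
# Hypothetical in-memory policy store standing in for a real authorization backend.
POLICIES = {"payroll-db": {"alice", "bob"}}

def is_request_allowed(user: str, resource: str) -> bool:
    """Deny by default and fail closed: unknown resources or users get no access."""
    try:
        allowed_users = POLICIES[resource]   # raises KeyError for unknown resources
    except KeyError:
        return False                         # fail closed rather than falling through to allow
    return user in allowed_users             # users not explicitly listed are denied

print(is_request_allowed("alice", "payroll-db"))    # True
print(is_request_allowed("mallory", "payroll-db"))  # False (default deny)
print(is_request_allowed("alice", "unknown-db"))    # False (fails closed)
```

The point of the sketch is that an error path never grants access; the same pattern applies whatever the real policy backend is.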
The UK government's National Cyber Security Centre separates secure cyber design principles into five sections:[51] These design principles of security by design can include some of the following techniques: Security architecture can be defined as the "practice of designing computer systems to achieve security goals."[52]These goals have overlap with the principles of "security by design" explored above, including to "make initial compromise of the system difficult," and to "limit the impact of any compromise."[52]In practice, the role of a security architect would be to ensure the structure of a system reinforces the security of the system, and that new changes are safe and meet the security requirements of the organization.[53][54] Similarly, Techopedia defines security architecture as "a unified security design that addresses the necessities and potential risks involved in a certain scenario or environment. It also specifies when and where to apply security controls. The design process is generally reproducible." The key attributes of security architecture are:[55] Practicing security architecture provides the right foundation to systematically address business, IT and security concerns in an organization. A state of computer security is the conceptual ideal, attained by the use of three processes: threat prevention, detection, and response. These processes are based on various policies and system components, which include the following: Today, computer security consists mainly of preventive measures, likefirewallsor anexit procedure. A firewall can be defined as a way of filtering network data between a host or a network and another network, such as theInternet. They can be implemented as software running on the machine, hooking into thenetwork stack(or, in the case of mostUNIX-based operating systems such asLinux, built into the operating systemkernel) to provide real-time filtering and blocking.[56]Another implementation is a so-calledphysical firewall, which consists of a separate machine filtering network traffic. Firewalls are common amongst machines that are permanently connected to the Internet. Some organizations are turning tobig dataplatforms, such asApache Hadoop, to extend data accessibility andmachine learningto detectadvanced persistent threats.[58] In order to ensure adequate security, the confidentiality, integrity and availability of a network, better known as the CIA triad, must be protected and is considered the foundation to information security.[59]To achieve those objectives, administrative, physical and technical security measures should be employed. The amount of security afforded to an asset can only be determined when its value is known.[60] Vulnerability management is the cycle of identifying, fixing or mitigatingvulnerabilities,[61]especially in software andfirmware. Vulnerability management is integral to computer security andnetwork security. Vulnerabilities can be discovered with avulnerability scanner, which analyzes a computer system in search of known vulnerabilities,[62]such asopen ports, insecure software configuration, and susceptibility tomalware. In order for these tools to be effective, they must be kept up to date with every new update the vendor release. Typically, these updates will scan for the new vulnerabilities that were introduced recently. Beyond vulnerability scanning, many organizations contract outside security auditors to run regularpenetration testsagainst their systems to identify vulnerabilities. 
In some sectors, this is a contractual requirement.[63] The act of assessing and reducing vulnerabilities to cyber attacks is commonly referred to asinformation technology security assessments. They aim to assess systems for risk and to predict and test for their vulnerabilities. Whileformal verificationof the correctness of computer systems is possible,[64][65]it is not yet common. Operating systems formally verified includeseL4,[66]andSYSGO'sPikeOS[67][68]– but these make up a very small percentage of the market. It is possible to reduce an attacker's chances by keeping systems up to date with security patches and updates and by hiring people with expertise in security. Large companies with significant threats can hire Security Operations Centre (SOC) Analysts. These are specialists in cyber defences, with their role ranging from "conducting threat analysis to investigating reports of any new issues and preparing and testing disaster recovery plans."[69] Whilst no measures can completely guarantee the prevention of an attack, these measures can help mitigate the damage of possible attacks. The effects of data loss/damage can be also reduced by carefulbacking upandinsurance. Outside of formal assessments, there are various methods of reducing vulnerabilities.Two factor authenticationis a method for mitigating unauthorized access to a system or sensitive information.[70]It requiressomething you know:a password or PIN, andsomething you have: a card, dongle, cellphone, or another piece of hardware. This increases security as an unauthorized person needs both of these to gain access. Protecting against social engineering and direct computer access (physical) attacks can only happen by non-computer means, which can be difficult to enforce, relative to the sensitivity of the information. Training is often involved to help mitigate this risk by improving people's knowledge of how to protect themselves and by increasing people's awareness of threats.[71]However, even in highly disciplined environments (e.g. military organizations), social engineering attacks can still be difficult to foresee and prevent. Inoculation, derived frominoculation theory, seeks to prevent social engineering and other fraudulent tricks and traps by instilling a resistance to persuasion attempts through exposure to similar or related attempts.[72] Hardware-based or assisted computer security also offers an alternative to software-only computer security. Using devices and methods such asdongles,trusted platform modules, intrusion-aware cases, drive locks, disabling USB ports, and mobile-enabled access may be considered more secure due to the physical access (or sophisticated backdoor access) required in order to be compromised. Each of these is covered in more detail below. One use of the termcomputer securityrefers to technology that is used to implementsecure operating systems. Using secure operating systems is a good way of ensuring computer security. These are systems that have achieved certification from an external security-auditing organization, the most popular evaluations areCommon Criteria(CC).[86] In software engineering,secure codingaims to guard against the accidental introduction of security vulnerabilities. It is also possible to create software designed from the ground up to be secure. Such systems aresecure by design. Beyond this, formal verification aims to prove thecorrectnessof thealgorithmsunderlying a system;[87]important forcryptographic protocolsfor example. 
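As a minimal, hedged illustration of what secure coding looks like in practice, the following Python sketch contrasts a parameterized database query with unsafe string concatenation; the table, column, and function names are made up for the example, and the snippet shows the general practice rather than anything prescribed by the sources cited above.

```python
import sqlite3

def find_user(conn: sqlite3.Connection, username: str):
    # Parameterized query: the user-supplied value is passed as a bound
    # parameter, never spliced into the SQL text, so it cannot change the
    # structure of the statement (guarding against SQL injection).
    cur = conn.execute(
        "SELECT id, username FROM users WHERE username = ?",
        (username,),
    )
    return cur.fetchone()

# Insecure counterpart, shown only for contrast (do not use):
#   conn.execute(f"SELECT id FROM users WHERE username = '{username}'")
# A crafted input such as "' OR '1'='1" would alter the query's meaning.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
conn.execute("INSERT INTO users (username) VALUES ('alice')")
print(find_user(conn, "alice"))          # (1, 'alice')
print(find_user(conn, "' OR '1'='1"))    # None: the input is treated as a literal string
```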
Within computer systems, two of the mainsecurity modelscapable of enforcing privilege separation areaccess control lists(ACLs) androle-based access control(RBAC). Anaccess-control list(ACL), with respect to a computer file system, is a list of permissions associated with an object. An ACL specifies which users or system processes are granted access to objects, as well as what operations are allowed on given objects. Role-based access control is an approach to restricting system access to authorized users,[88][89][90]used by the majority of enterprises with more than 500 employees,[91]and can implementmandatory access control(MAC) ordiscretionary access control(DAC). A further approach,capability-based securityhas been mostly restricted to research operating systems. Capabilities can, however, also be implemented at the language level, leading to a style of programming that is essentially a refinement of standard object-oriented design. An open-source project in the area is theE language. The end-user is widely recognized as the weakest link in the security chain[92]and it is estimated that more than 90% of security incidents and breaches involve some kind of human error.[93][94]Among the most commonly recorded forms of errors and misjudgment are poor password management, sending emails containing sensitive data and attachments to the wrong recipient, the inability to recognize misleading URLs and to identify fake websites and dangerous email attachments. A common mistake that users make is saving their user id/password in their browsers to make it easier to log in to banking sites. This is a gift to attackers who have obtained access to a machine by some means. The risk may be mitigated by the use of two-factor authentication.[95] As the human component of cyber risk is particularly relevant in determining the global cyber risk[96]an organization is facing, security awareness training, at all levels, not only provides formal compliance with regulatory and industry mandates but is considered essential[97]in reducing cyber risk and protecting individuals and companies from the great majority of cyber threats. The focus on the end-user represents a profound cultural change for many security practitioners, who have traditionally approached cybersecurity exclusively from a technical perspective, and moves along the lines suggested by major security centers[98]to develop a culture of cyber awareness within the organization, recognizing that a security-aware user provides an important line of defense against cyber attacks. Related to end-user training,digital hygieneorcyber hygieneis a fundamental principle relating to information security and, as the analogy withpersonal hygieneshows, is the equivalent of establishing simple routine measures to minimize the risks from cyber threats. 
The assumption is that good cyber hygiene practices can give networked users another layer of protection, reducing the risk that one vulnerable node will be used to either mount attacks or compromise another node or network, especially from common cyberattacks.[99] Cyber hygiene should also not be mistaken for proactive cyber defence, a military term.[100] The most common acts of digital hygiene include updating malware protection, cloud back-ups, and passwords, and ensuring restricted admin rights and network firewalls.[101] As opposed to a purely technology-based defense against threats, cyber hygiene mostly concerns routine measures that are technically simple to implement and mostly dependent on discipline[102] or education.[103] It can be thought of as an abstract list of tips or measures that have been demonstrated as having a positive effect on personal or collective digital security. As such, these measures can be performed by laypeople, not just security experts. Cyber hygiene relates to personal hygiene as computer viruses relate to biological viruses (or pathogens). However, while the term computer virus was coined almost simultaneously with the creation of the first working computer viruses,[104] the term cyber hygiene is a much later invention, perhaps as late as 2000[105] by Internet pioneer Vint Cerf. It has since been adopted by the Congress[106] and Senate of the United States,[107] the FBI,[108] EU institutions[99] and heads of state.[100] Responding to attempted security breaches is often very difficult for a variety of reasons. Where an attack succeeds and a breach occurs, many jurisdictions now have in place mandatory security breach notification laws. The growth in the number of computer systems and the increasing reliance upon them by individuals, businesses, industries, and governments means that there are an increasing number of systems at risk. The computer systems of financial regulators and financial institutions like the U.S. Securities and Exchange Commission, SWIFT, investment banks, and commercial banks are prominent hacking targets for cybercriminals interested in manipulating markets and making illicit gains.[109] Websites and apps that accept or store credit card numbers, brokerage accounts, and bank account information are also prominent hacking targets, because of the potential for immediate financial gain from transferring money, making purchases, or selling the information on the black market.[110] In-store payment systems and ATMs have also been tampered with in order to gather customer account data and PINs. The UCLA Internet Report: Surveying the Digital Future (2000) found that the privacy of personal data created barriers to online sales and that more than nine out of 10 internet users were somewhat or very concerned about credit card security.[111] The most common web technologies for improving security between browsers and websites are SSL (Secure Sockets Layer) and its successor TLS (Transport Layer Security). Together with identity management and authentication services and domain name services, these allow companies and consumers to engage in secure communications and commerce. Several versions of SSL and TLS are commonly used today in applications such as web browsing, e-mail, internet faxing, instant messaging, and VoIP (voice-over-IP). There are various interoperable implementations of these technologies, including at least one implementation that is open source. Open source allows anyone to view the application's source code, and look for and report vulnerabilities.
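As a rough sketch of how an application makes use of TLS, the following Python example (standard library only; example.com is just a placeholder host) opens a certificate-verified TLS connection and sends a trivial request; it illustrates the general mechanism rather than any particular implementation mentioned above.

```python
import socket
import ssl

HOST = "example.com"  # placeholder host used only for illustration

# The default context loads the system's trusted CA certificates and verifies
# both the server's certificate chain and its hostname, which is what protects
# the connection against eavesdropping and man-in-the-middle tampering.
context = ssl.create_default_context()

with socket.create_connection((HOST, 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        print("Negotiated protocol:", tls_sock.version())  # e.g. 'TLSv1.3'
        request = f"HEAD / HTTP/1.1\r\nHost: {HOST}\r\nConnection: close\r\n\r\n"
        tls_sock.sendall(request.encode())
        print(tls_sock.recv(256).decode(errors="replace"))
```

Higher-level libraries and browsers perform the same certificate and hostname checks internally; disabling them is what turns an encrypted connection back into one an eavesdropper or man-in-the-middle can exploit.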
The credit card companiesVisaandMasterCardcooperated to develop the secureEMVchip which is embedded in credit cards. Further developments include theChip Authentication Programwhere banks give customers hand-held card readers to perform online secure transactions. Other developments in this arena include the development of technology such as Instant Issuance which has enabled shoppingmall kiosksacting on behalf of banks to issue on-the-spot credit cards to interested customers. Computers control functions at many utilities, including coordination oftelecommunications, thepower grid,nuclear power plants, and valve opening and closing in water and gas networks. The Internet is a potential attack vector for such machines if connected, but theStuxnetworm demonstrated that even equipment controlled by computers not connected to the Internet can be vulnerable. In 2014, theComputer Emergency Readiness Team, a division of theDepartment of Homeland Security, investigated 79 hacking incidents at energy companies.[112] Theaviationindustry is very reliant on a series of complex systems which could be attacked.[113]A simple power outage at one airport can cause repercussions worldwide,[114]much of the system relies on radio transmissions which could be disrupted,[115]and controlling aircraft over oceans is especially dangerous because radar surveillance only extends 175 to 225 miles offshore.[116]There is also potential for attack from within an aircraft.[117] Implementing fixes in aerospace systems poses a unique challenge because efficient air transportation is heavily affected by weight and volume. Improving security by adding physical devices to airplanes could increase their unloaded weight, and could potentially reduce cargo or passenger capacity.[118] In Europe, with the (Pan-European Network Service)[119]and NewPENS,[120]and in the US with the NextGen program,[121]air navigation service providersare moving to create their own dedicated networks. Many modern passports are nowbiometric passports, containing an embeddedmicrochipthat stores a digitized photograph and personal information such as name, gender, and date of birth. In addition, more countries[which?]are introducingfacial recognition technologyto reduceidentity-related fraud. The introduction of the ePassport has assisted border officials in verifying the identity of the passport holder, thus allowing for quick passenger processing.[122]Plans are under way in the US, theUK, andAustraliato introduce SmartGate kiosks with both retina andfingerprint recognitiontechnology.[123]The airline industry is moving from the use of traditional paper tickets towards the use ofelectronic tickets(e-tickets). These have been made possible by advances in online credit card transactions in partnership with the airlines. Long-distance bus companies[which?]are also switching over to e-ticketing transactions today. The consequences of a successful attack range from loss of confidentiality to loss of system integrity,air traffic controloutages, loss of aircraft, and even loss of life. Desktop computers and laptops are commonly targeted to gather passwords or financial account information or to construct a botnet to attack another target.Smartphones,tablet computers,smart watches, and othermobile devicessuch asquantified selfdevices likeactivity trackershave sensors such as cameras, microphones, GPS receivers, compasses, andaccelerometerswhich could be exploited, and may collect personal information, including sensitive health information. 
WiFi, Bluetooth, and cell phone networks on any of these devices could be used as attack vectors, and sensors might be remotely activated after a successful breach.[124] The growing number of home automation devices, such as the Nest thermostat, are also potential targets.[124] Today many healthcare providers and health insurance companies use the internet to provide enhanced products and services. Examples are the use of tele-health to potentially offer better quality and access to healthcare, or fitness trackers to lower insurance premiums.[citation needed] Patient records are increasingly being placed on secure in-house networks, alleviating the need for extra storage space.[125] Large corporations are common targets. In many cases attacks are aimed at financial gain through identity theft and involve data breaches. Examples include the loss of millions of clients' credit card and financial details by Home Depot,[126] Staples,[127] Target Corporation,[128] and Equifax.[129] Medical records have been targeted for use in general identity theft, health insurance fraud, and impersonating patients to obtain prescription drugs for recreational purposes or resale.[130] Although cyber threats continue to increase, 62% of all organizations did not increase security training for their business in 2015.[131] Not all attacks are financially motivated, however: security firm HBGary Federal suffered a serious series of attacks in 2011 from hacktivist group Anonymous in retaliation for the firm's CEO claiming to have infiltrated their group,[132][133] and Sony Pictures was hacked in 2014 with the apparent dual motive of embarrassing the company through data leaks and crippling the company by wiping workstations and servers.[134][135] Vehicles are increasingly computerized, with engine timing, cruise control, anti-lock brakes, seat belt tensioners, door locks, airbags and advanced driver-assistance systems on many models. Additionally, connected cars may use WiFi and Bluetooth to communicate with onboard consumer devices and the cell phone network.[136] Self-driving cars are expected to be even more complex. All of these systems carry some security risks, and such issues have gained wide attention.[137][138][139] Simple examples of risk include a malicious compact disc being used as an attack vector,[140] and the car's onboard microphones being used for eavesdropping. However, if access is gained to a car's internal controller area network, the danger is much greater[136] – and in a widely publicized 2015 test, hackers remotely carjacked a vehicle from 10 miles away and drove it into a ditch.[141][142] Manufacturers are reacting in numerous ways, with Tesla in 2016 pushing out some security fixes over the air into its cars' computer systems.[143] In the area of autonomous vehicles, in September 2016 the United States Department of Transportation announced some initial safety standards, and called for states to come up with uniform policies.[144][145][146] Additionally, e-Drivers' licenses are being developed using the same technology. For example, Mexico's licensing authority (ICV) has used a smart card platform to issue the first e-Drivers' licenses to the city of Monterrey, in the state of Nuevo León.[147] Shipping companies[148] have adopted RFID (Radio Frequency Identification) technology as an efficient, digitally secure tracking device. Unlike a barcode, RFID can be read up to 20 feet away.
RFID is used by FedEx[149] and UPS.[150] Government and military computer systems are commonly attacked by activists[151][152][153] and foreign powers.[154][155][156][157] Local and regional government infrastructure, such as traffic light controls, police and intelligence agency communications, personnel records, and student records, is also at risk.[158] The FBI, CIA, and Pentagon all utilize secure controlled access technology for their buildings. However, the use of this form of technology is spreading into the entrepreneurial world. More and more companies are taking advantage of the development of digitally secure controlled access technology. GE's ACUVision, for example, offers a single panel platform for access control, alarm monitoring and digital recording.[159] The Internet of things (IoT) is the network of physical objects such as devices, vehicles, and buildings that are embedded with electronics, software, sensors, and network connectivity that enables them to collect and exchange data.[160] Concerns have been raised that this is being developed without appropriate consideration of the security challenges involved.[161][162] While the IoT creates opportunities for more direct integration of the physical world into computer-based systems,[163][164] it also provides opportunities for misuse. In particular, as the Internet of Things spreads widely, cyberattacks are likely to become an increasingly physical (rather than simply virtual) threat.[165] If a front door's lock is connected to the Internet, and can be locked/unlocked from a phone, then a criminal could enter the home at the press of a button from a stolen or hacked phone. People could stand to lose much more than their credit card numbers in a world controlled by IoT-enabled devices. Thieves have also used electronic means to circumvent non-Internet-connected hotel door locks.[166] An attack aimed at physical infrastructure or human lives is often called a cyber-kinetic attack. As IoT devices and appliances become more widespread, the prevalence and potential damage of cyber-kinetic attacks can increase substantially. Medical devices have either been successfully attacked or had potentially deadly vulnerabilities demonstrated, including both in-hospital diagnostic equipment[167] and implanted devices including pacemakers[168] and insulin pumps.[169] There are many reports of hospitals and hospital organizations getting hacked, including ransomware attacks,[170][171][172][173] Windows XP exploits,[174][175] viruses,[176][177] and data breaches of sensitive data stored on hospital servers.[178][171][179][180] On 28 December 2016 the US Food and Drug Administration released its recommendations for how medical device manufacturers should maintain the security of Internet-connected devices – but provided no structure for enforcement.[181][182] In distributed generation systems, the risk of a cyber attack is real, according to Daily Energy Insider. An attack could cause a loss of power in a large area for a long period of time, and such an attack could have just as severe consequences as a natural disaster. The District of Columbia is considering creating a Distributed Energy Resources (DER) Authority within the city, with the goals of giving customers more insight into their own energy use and giving the local electric utility, Pepco, the chance to better estimate energy demand. The D.C.
proposal, however, would "allow third-party vendors to create numerous points of energy distribution, which could potentially create more opportunities for cyber attackers to threaten the electric grid."[183] Perhaps the most widely known digitally secure telecommunication device is the SIM (Subscriber Identity Module) card, a device that is embedded in most of the world's cellular devices before any service can be obtained. The SIM card is just the beginning of this digitally secure environment. The Smart Card Web Servers draft standard (SCWS) defines the interfaces to an HTTP server in a smart card.[184] Tests are being conducted to secure OTA ("over-the-air") payment and credit card information from and to a mobile phone. Combination SIM/DVD devices are being developed through Smart Video Card technology, which embeds a DVD-compliant optical disc into the card body of a regular SIM card. Other telecommunication developments involving digital security include mobile signatures, which use the embedded SIM card to generate a legally binding electronic signature. Serious financial damage has been caused by security breaches, but because there is no standard model for estimating the cost of an incident, the only data available is that which is made public by the organizations involved. "Several computer security consulting firms produce estimates of total worldwide losses attributable to virus and worm attacks and to hostile digital acts in general. The 2003 loss estimates by these firms range from $13 billion (worms and viruses only) to $226 billion (for all forms of covert attacks). The reliability of these estimates is often challenged; the underlying methodology is basically anecdotal."[185] However, reasonable estimates of the financial cost of security breaches can actually help organizations make rational investment decisions. According to the classic Gordon-Loeb Model analyzing the optimal investment level in information security, one can conclude that the amount a firm spends to protect information should generally be only a small fraction of the expected loss (i.e., the expected value of the loss resulting from a cyber/information security breach).[186] As with physical security, the motivations for breaches of computer security vary between attackers. Some are thrill-seekers or vandals, some are activists, others are criminals looking for financial gain. State-sponsored attackers are now common and well resourced, but started with amateurs such as Markus Hess, who hacked for the KGB, as recounted by Clifford Stoll in The Cuckoo's Egg. Attackers' motivations can vary for all types of attacks, from pleasure to political goals.[15] For example, hacktivists may target a company or organization that carries out activities they do not agree with, aiming to create bad publicity for the company by crashing its website. High-capability hackers, often with larger backing or state sponsorship, may attack based on the demands of their financial backers. Such attackers are more likely to attempt more serious attacks.
An example of a more serious attack was the 2015 Ukraine power grid hack, which reportedly utilised spear-phishing, destruction of files, and denial-of-service attacks to carry out the full attack.[187][188] Additionally, recent attacker motivations can be traced back to extremist organizations seeking to gain political advantage or disrupt social agendas.[189] The growth of the internet, mobile technologies, and inexpensive computing devices has led to a rise in capabilities but also to increased risk for environments that are deemed vital to operations. All critical targeted environments are susceptible to compromise, and this has led to a series of proactive studies on how to mitigate the risk by taking into consideration the motivations of these types of actors. Several stark differences exist between hacker motivation and that of nation state actors seeking to attack based on an ideological preference.[190] A key aspect of threat modeling for any system is identifying the motivations behind potential attacks and the individuals or groups likely to carry them out. The level and detail of security measures will differ based on the specific system being protected. For instance, a home personal computer, a bank, and a classified military network each face distinct threats, despite using similar underlying technologies.[191] Computer security incident management is an organized approach to addressing and managing the aftermath of a computer security incident or compromise with the goal of preventing a breach or thwarting a cyberattack. An incident that is not identified and managed at the time of intrusion typically escalates to a more damaging event such as a data breach or system failure. The intended outcome of a computer security incident response plan is to contain the incident, limit damage and assist recovery to business as usual. Responding to compromises quickly can mitigate exploited vulnerabilities, restore services and processes and minimize losses.[192] Incident response planning allows an organization to establish a series of best practices to stop an intrusion before it causes damage. Typical incident response plans contain a set of written instructions that outline the organization's response to a cyberattack. Without a documented plan in place, an organization may not successfully detect an intrusion or compromise, and stakeholders may not understand their roles, processes and procedures during an escalation, slowing the organization's response and resolution. There are four key components of a computer security incident response plan. Some illustrative examples of different types of computer security breaches are given below. In 1988, 60,000 computers were connected to the Internet, and most were mainframes, minicomputers and professional workstations. On 2 November 1988, many started to slow down, because they were running malicious code that demanded processor time and that spread itself to other computers – the first internet computer worm.[194] The software was traced back to 23-year-old Cornell University graduate student Robert Tappan Morris, who said "he wanted to count how many machines were connected to the Internet".[194] In 1994, over a hundred intrusions were made by unidentified crackers into the Rome Laboratory, the US Air Force's main command and research facility. Using trojan horses, hackers were able to obtain unrestricted access to Rome's networking systems and remove traces of their activities.
The intruders were able to obtain classified files, such as air tasking order systems data, and were furthermore able to penetrate connected networks of the National Aeronautics and Space Administration's Goddard Space Flight Center, Wright-Patterson Air Force Base, some Defense contractors, and other private sector organizations, by posing as a trusted Rome center user.[195] In early 2007, American apparel and home goods company TJX announced that it was the victim of an unauthorized computer systems intrusion[196] and that the hackers had accessed a system that stored data on credit card, debit card, check, and merchandise return transactions.[197] In 2010, the computer worm known as Stuxnet reportedly ruined almost one-fifth of Iran's nuclear centrifuges.[198] It did so by disrupting industrial programmable logic controllers (PLCs) in a targeted attack. This is generally believed to have been launched by Israel and the United States to disrupt Iran's nuclear program[199][200][201][202] – although neither has publicly admitted this. In early 2013, documents provided by Edward Snowden were published by The Washington Post and The Guardian[203][204] exposing the massive scale of NSA global surveillance. There were also indications that the NSA may have inserted a backdoor in a NIST standard for encryption.[205] This standard was later withdrawn due to widespread criticism.[206] The NSA was additionally revealed to have tapped the links between Google's data centers.[207] A Ukrainian hacker known as Rescator broke into Target Corporation computers in 2013, stealing roughly 40 million credit cards,[208] and then Home Depot computers in 2014, stealing between 53 and 56 million credit card numbers.[209] Warnings were delivered at both corporations, but were ignored; physical security breaches using self checkout machines are believed to have played a large role. "The malware utilized is absolutely unsophisticated and uninteresting," says Jim Walter, director of threat intelligence operations at security technology company McAfee – meaning that the heists could have easily been stopped by existing antivirus software had administrators responded to the warnings. The size of the thefts has resulted in major attention from state and federal United States authorities, and the investigation is ongoing. In April 2015, the Office of Personnel Management discovered it had been hacked more than a year earlier in a data breach, resulting in the theft of approximately 21.5 million personnel records handled by the office.[210] The Office of Personnel Management hack has been described by federal officials as among the largest breaches of government data in the history of the United States.[211] Data targeted in the breach included personally identifiable information such as Social Security numbers, names, dates and places of birth, addresses, and fingerprints of current and former government employees as well as anyone who had undergone a government background check.[212][213] It is believed the hack was perpetrated by Chinese hackers.[214] In July 2015, a hacker group known as The Impact Team successfully breached the extramarital relationship website Ashley Madison, created by Avid Life Media. The group claimed that they had taken not only company data but user data as well.
After the breach, The Impact Team dumped emails from the company's CEO to prove their point, and threatened to dump customer data unless the website was taken down permanently.[215] When Avid Life Media did not take the site offline, the group released two more compressed files, one 9.7 GB and the second 20 GB. After the second data dump, Avid Life Media CEO Noel Biderman resigned, but the website continued to function. In May 2021, a cyber attack took down the largest fuel pipeline in the U.S. and led to shortages across the East Coast.[216] International legal issues of cyber attacks are complicated in nature. There is no global base of common rules to judge, and eventually punish, cybercrimes and cybercriminals, and where security firms or agencies do locate the cybercriminal behind the creation of a particular piece of malware or form of cyber attack, often the local authorities cannot take action due to lack of laws under which to prosecute.[217][218] Proving attribution for cybercrimes and cyberattacks is also a major problem for all law enforcement agencies. "Computer viruses switch from one country to another, from one jurisdiction to another – moving around the world, using the fact that we don't have the capability to globally police operations like this. So the Internet is as if someone [had] given free plane tickets to all the online criminals of the world."[217] The use of techniques such as dynamic DNS, fast flux and bullet proof servers adds to the difficulty of investigation and enforcement. The role of the government is to make regulations to force companies and organizations to protect their systems, infrastructure and information from any cyberattacks, but also to protect its own national infrastructure, such as the national power grid.[219] The government's regulatory role in cyberspace is complicated. For some, cyberspace was seen as a virtual space that was to remain free of government intervention, as can be seen in many of today's libertarian blockchain and bitcoin discussions.[220] Many government officials and experts think that the government should do more and that there is a crucial need for improved regulation, mainly due to the failure of the private sector to solve the cybersecurity problem efficiently. R. Clarke said during a panel discussion at the RSA Security Conference in San Francisco that he believes the "industry only responds when you threaten regulation. If the industry doesn't respond (to the threat), you have to follow through."[221] On the other hand, executives from the private sector agree that improvements are necessary, but think that government intervention would affect their ability to innovate efficiently. Daniel R. McCarthy analyzed this public-private partnership in cybersecurity and reflected on the role of cybersecurity in the broader constitution of political order.[222] On 22 May 2020, the UN Security Council held its second ever informal meeting on cybersecurity to focus on cyber challenges to international peace. According to UN Secretary-General António Guterres, new technologies are too often used to violate rights.[223] Many different teams and organizations exist. On 14 April 2016, the European Parliament and the Council of the European Union adopted the General Data Protection Regulation (GDPR). The GDPR, which came into force on 25 May 2018, grants individuals within the European Union (EU) and the European Economic Area (EEA) the right to the protection of personal data.
The regulation requires that any entity that processes personal data incorporate data protection by design and by default. It also requires that certain organizations appoint a Data Protection Officer (DPO).
The IT security association TeleTrusT, an international competence network for IT security, has existed in Germany since June 1986.
Most countries have their own computer emergency response team to protect network security.
Since 2010, Canada has had a cybersecurity strategy.[229][230] This functions as a counterpart document to the National Strategy and Action Plan for Critical Infrastructure.[231] The strategy has three main pillars: securing government systems, securing vital private cyber systems, and helping Canadians to be secure online.[230][231] There is also a Cyber Incident Management Framework to provide a coordinated response in the event of a cyber incident.[232][233]
The Canadian Cyber Incident Response Centre (CCIRC) is responsible for mitigating and responding to threats to Canada's critical infrastructure and cyber systems. It provides support to mitigate cyber threats, technical support to respond to and recover from targeted cyber attacks, and online tools for members of Canada's critical infrastructure sectors.[234] It posts regular cybersecurity bulletins[235] and operates an online reporting tool where individuals and organizations can report a cyber incident.[236]
To inform the general public on how to protect themselves online, Public Safety Canada has partnered with STOP.THINK.CONNECT, a coalition of non-profit, private sector, and government organizations,[237] and launched the Cyber Security Cooperation Program.[238][239] They also run the GetCyberSafe portal for Canadian citizens, and Cyber Security Awareness Month during October.[240]
Public Safety Canada aimed to begin an evaluation of Canada's cybersecurity strategy in early 2015.[231]
The Australian federal government announced an $18.2 million investment to strengthen the cybersecurity resilience of small and medium enterprises (SMEs) and enhance their capabilities in responding to cyber threats. This funding is an integral component of the 2023–2030 Australian Cyber Security Strategy, which was slated for release in the same week. A substantial allocation of $7.2 million is earmarked for the establishment of a voluntary cyber health check program, allowing businesses to conduct a comprehensive and tailored self-assessment of their cybersecurity posture. The health check serves as a diagnostic tool, enabling enterprises to assess how their cybersecurity measures up against Australia's cyber security regulations. It also gives them access to a repository of educational resources and materials for building the skills needed to improve their cybersecurity posture. The initiative was jointly announced by Minister for Cyber Security Clare O'Neil and Minister for Small Business Julie Collins.[241]
Some provisions for cybersecurity have been incorporated into rules framed under the Information Technology Act 2000.[242]
The National Cyber Security Policy 2013 is a policy framework by the Ministry of Electronics and Information Technology (MeitY) which aims to protect the public and private infrastructure from cyberattacks and safeguard "information, such as personal information (of web users), financial and banking information and sovereign data". CERT-In is the nodal agency which monitors cyber threats in the country.
The post of National Cyber Security Coordinator has also been created in the Prime Minister's Office (PMO).
The Indian Companies Act 2013 has also introduced cyber law and cybersecurity obligations on the part of Indian directors. Some provisions for cybersecurity have been incorporated into rules framed under the 2013 update to the Information Technology Act 2000.[243]
Following cyberattacks in the first half of 2013, when the government, news media, television stations, and bank websites were compromised, the national government committed to the training of 5,000 new cybersecurity experts by 2017. The South Korean government blamed its northern counterpart for these attacks, as well as incidents that occurred in 2009, 2011,[244] and 2012, but Pyongyang denies the accusations.[245]
As a result of the release of the National Cyber Strategy, the United States has its first fully formed cyber plan in 15 years.[246] In this policy, the US says it will: protect the country by keeping networks, systems, functions, and data safe; promote American wealth by building a strong digital economy and encouraging strong domestic innovation; preserve peace and safety by making it easier for the US, working with friends and partners, to stop people from misusing computer tools; and increase the United States' impact around the world to support the main ideas behind an open, safe, reliable, and compatible Internet.[247]
The new U.S. cyber strategy[248] seeks to allay some of those concerns by promoting responsible behavior in cyberspace, urging nations to adhere to a set of norms, both through international law and voluntary standards. It also calls for specific measures to harden U.S. government networks from attacks, like the June 2015 intrusion into the U.S. Office of Personnel Management (OPM), which compromised the records of about 4.2 million current and former government employees. And the strategy calls for the U.S. to continue to name and shame bad cyber actors, calling them out publicly for attacks when possible, along with the use of economic sanctions and diplomatic pressure.[249]
The Computer Fraud and Abuse Act of 1986, codified at 18 U.S.C. § 1030, is the key legislation. It prohibits unauthorized access to or damage of protected computers as defined in 18 U.S.C. § 1030(e)(2). Although various other measures have been proposed,[250][251] none have succeeded.
In 2013, Executive Order 13636, Improving Critical Infrastructure Cybersecurity, was signed, which prompted the creation of the NIST Cybersecurity Framework.
In response to the Colonial Pipeline ransomware attack,[252] President Joe Biden signed Executive Order 14028[253] on May 12, 2021, to increase software security standards for sales to the government, tighten detection and security on existing systems, improve information sharing and training, establish a Cyber Safety Review Board, and improve incident response.
The General Services Administration (GSA) has[when?] standardized the penetration test service as a pre-vetted support service, to rapidly address potential vulnerabilities and stop adversaries before they impact US federal, state and local governments. These services are commonly referred to as Highly Adaptive Cybersecurity Services (HACS).
TheDepartment of Homeland Securityhas a dedicated division responsible for the response system,risk managementprogram and requirements for cybersecurity in the United States called theNational Cyber Security Division.[254][255]The division is home to US-CERT operations and the National Cyber Alert System.[255]The National Cybersecurity and Communications Integration Center brings together government organizations responsible for protecting computer networks and networked infrastructure.[256] The third priority of the FBI is to: "Protect the United States against cyber-based attacks and high-technology crimes",[257]and they, along with theNational White Collar Crime Center(NW3C), and theBureau of Justice Assistance(BJA) are part of the multi-agency task force, TheInternet Crime Complaint Center, also known as IC3.[258] In addition to its own specific duties, the FBI participates alongside non-profit organizations such asInfraGard.[259][260] TheComputer Crime and Intellectual Property Section(CCIPS) operates in theUnited States Department of Justice Criminal Division. The CCIPS is in charge of investigatingcomputer crimeandintellectual propertycrime and is specialized in the search and seizure ofdigital evidencein computers andnetworks.[261]In 2017, CCIPS published A Framework for a Vulnerability Disclosure Program for Online Systems to help organizations "clearly describe authorized vulnerability disclosure and discovery conduct, thereby substantially reducing the likelihood that such described activities will result in a civil or criminal violation of law under the Computer Fraud and Abuse Act (18 U.S.C. § 1030)."[262] TheUnited States Cyber Command, also known as USCYBERCOM, "has the mission to direct, synchronize, and coordinate cyberspace planning and operations to defend and advance national interests in collaboration with domestic and international partners."[263]It has no role in the protection of civilian networks.[264][265] The U.S.Federal Communications Commission's role in cybersecurity is to strengthen the protection of critical communications infrastructure, to assist in maintaining the reliability of networks during disasters, to aid in swift recovery after, and to ensure that first responders have access to effective communications services.[266] TheFood and Drug Administrationhas issued guidance for medical devices,[267]and theNational Highway Traffic Safety Administration[268]is concerned with automotive cybersecurity. After being criticized by theGovernment Accountability Office,[269]and following successful attacks on airports and claimed attacks on airplanes, theFederal Aviation Administrationhas devoted funding to securing systems on board the planes of private manufacturers, and theAircraft Communications Addressing and Reporting System.[270]Concerns have also been raised about the futureNext Generation Air Transportation System.[271] The US Department of Defense (DoD) issued DoD Directive 8570 in 2004, supplemented by DoD Directive 8140, requiring all DoD employees and all DoD contract personnel involved in information assurance roles and activities to earn and maintain various industry Information Technology (IT) certifications in an effort to ensure that all DoD personnel involved in network infrastructure defense have minimum levels of IT industry recognized knowledge, skills and abilities (KSA). 
Andersson and Reimers (2019) report these certifications range from CompTIA's A+ and Security+ through the ICS2.org's CISSP, etc.[272] Computer emergency response teamis a name given to expert groups that handle computer security incidents. In the US, two distinct organizations exist, although they do work closely together. In the context ofU.S. nuclear power plants, theU.S. Nuclear Regulatory Commission (NRC)outlines cybersecurity requirements under10 CFR Part 73, specifically in §73.54.[274] TheNuclear Energy Institute's NEI 08-09 document,Cyber Security Plan for Nuclear Power Reactors,[275]outlines a comprehensive framework forcybersecurityin thenuclear power industry. Drafted with input from theU.S. NRC, this guideline is instrumental in aidinglicenseesto comply with theCode of Federal Regulations (CFR), which mandates robust protection of digital computers and equipment and communications systems at nuclear power plants against cyber threats.[276] There is growing concern that cyberspace will become the next theater of warfare. As Mark Clayton fromThe Christian Science Monitorwrote in a 2015 article titled "The New Cyber Arms Race": In the future, wars will not just be fought by soldiers with guns or with planes that drop bombs. They will also be fought with the click of a mouse a half a world away that unleashes carefully weaponized computer programs that disrupt or destroy critical industries like utilities, transportation, communications, and energy. Such attacks could also disable military networks that control the movement of troops, the path of jet fighters, the command and control of warships.[277] This has led to new terms such ascyberwarfareandcyberterrorism. TheUnited States Cyber Commandwas created in 2009[278]and many other countrieshave similar forces. There are a few critical voices that question whether cybersecurity is as significant a threat as it is made out to be.[279][280][281] Cybersecurity is a fast-growing field ofITconcerned with reducing organizations' risk of hack or data breaches.[282]According to research from the Enterprise Strategy Group, 46% of organizations say that they have a "problematic shortage" of cybersecurity skills in 2016, up from 28% in 2015.[283]Commercial, government and non-governmental organizations all employ cybersecurity professionals. The fastest increases in demand for cybersecurity workers are in industries managing increasing volumes of consumer data such as finance, health care, and retail.[284]However, the use of the termcybersecurityis more prevalent in government job descriptions.[285] Typical cybersecurity job titles and descriptions include:[286] Student programs are also available for people interested in beginning a career in cybersecurity.[290][291]Meanwhile, a flexible and effective option for information security professionals of all experience levels to keep studying is online security training, including webcasts.[292][293]A wide range of certified courses are also available.[294] In the United Kingdom, a nationwide set of cybersecurity forums, known as theU.K Cyber Security Forum, were established supported by the Government's cybersecurity strategy[295]in order to encourage start-ups and innovation and to address the skills gap[296]identified by theU.K Government. In Singapore, theCyber Security Agencyhas issued a Singapore Operational Technology (OT) Cybersecurity Competency Framework (OTCCF). The framework defines emerging cybersecurity roles in Operational Technology. 
The OTCCF was endorsed by theInfocomm Media Development Authority(IMDA). It outlines the different OT cybersecurity job positions as well as the technical skills and core competencies necessary. It also depicts the many career paths available, including vertical and lateral advancement opportunities.[297] The following terms used with regards to computer security are explained below: Since theInternet's arrival and with the digital transformation initiated in recent years, the notion of cybersecurity has become a familiar subject in both our professional and personal lives. Cybersecurity and cyber threats have been consistently present for the last 60 years of technological change. In the 1970s and 1980s, computer security was mainly limited toacademiauntil the conception of the Internet, where, with increased connectivity, computer viruses and network intrusions began to take off. After the spread of viruses in the 1990s, the 2000s marked the institutionalization of organized attacks such asdistributed denial of service.[301]This led to the formalization of cybersecurity as a professional discipline.[302] TheApril 1967 sessionorganized byWillis Wareat theSpring Joint Computer Conference, and the later publication of theWare Report, were foundational moments in the history of the field of computer security.[303]Ware's work straddled the intersection of material, cultural, political, and social concerns.[303] A 1977NISTpublication[304]introduced theCIA triadof confidentiality, integrity, and availability as a clear and simple way to describe key security goals.[305]While still relevant, many more elaborate frameworks have since been proposed.[306][307] However, in the 1970s and 1980s, there were no grave computer threats because computers and the internet were still developing, and security threats were easily identifiable. More often, threats came from malicious insiders who gained unauthorized access to sensitive documents and files. Although malware and network breaches existed during the early years, they did not use them for financial gain. By the second half of the 1970s, established computer firms likeIBMstarted offering commercial access control systems and computer security software products.[308] One of the earliest examples of an attack on a computer network was thecomputer wormCreeperwritten by Bob Thomas atBBN, which propagated through theARPANETin 1971.[309]The program was purely experimental in nature and carried no malicious payload. A later program,Reaper, was created byRay Tomlinsonin 1972 and used to destroy Creeper.[citation needed] Between September 1986 and June 1987, a group of German hackers performed the first documented case of cyber espionage.[310]The group hacked into American defense contractors, universities, and military base networks and sold gathered information to the Soviet KGB. The group was led byMarkus Hess, who was arrested on 29 June 1987. He was convicted of espionage (along with two co-conspirators) on 15 Feb 1990. In 1988, one of the first computer worms, called theMorris worm, was distributed via the Internet. 
It gained significant mainstream media attention.[311]
Netscape started developing the protocol SSL shortly after the National Center for Supercomputing Applications (NCSA) launched Mosaic 1.0, the first popular web browser, in 1993.[312][313] Netscape had SSL version 1.0 ready in 1994, but it was never released to the public due to many serious security vulnerabilities.[312] However, in 1995, Netscape launched version 2.0.[314]
The National Security Agency (NSA) is responsible for the protection of U.S. information systems and also for collecting foreign intelligence.[315] The agency analyzes commonly used software and system configurations to find security flaws, which it can use for offensive purposes against competitors of the United States.[316]
NSA contractors created and sold click-and-shoot attack tools to US agencies and close allies, but eventually, the tools made their way to foreign adversaries.[317] In 2016, the NSA's own hacking tools were hacked, and they have been used by Russia and North Korea.[citation needed] The NSA's employees and contractors have been recruited at high salaries by adversaries anxious to compete in cyberwarfare.[citation needed] In 2007, the United States and Israel began exploiting security flaws in the Microsoft Windows operating system to attack and damage equipment used in Iran to refine nuclear materials. Iran responded by heavily investing in its own cyberwarfare capability, which it began using against the United States.[316]
https://en.wikipedia.org/wiki/Software_security#Mitigations
Software testing is the act of checking whether software satisfies expectations.
Software testing can provide objective, independent information about the quality of software and the risk of its failure to a user or sponsor.[1]
Software testing can determine the correctness of software for specific scenarios but cannot determine correctness for all scenarios.[2][3] It cannot find all bugs.
Based on the criteria for measuring correctness from an oracle, software testing employs principles and mechanisms that might recognize a problem. Examples of oracles include specifications, contracts,[4] comparable products, past versions of the same product, inferences about intended or expected purpose, user or customer expectations, relevant standards, and applicable laws.
Software testing is often dynamic in nature: running the software to verify that actual output matches expected output. It can also be static in nature: reviewing code and its associated documentation.
Software testing is often used to answer the question: Does the software do what it is supposed to do and what it needs to do?
Information learned from software testing may be used to improve the process by which software is developed.[5]: 41–43
Software testing should follow a "pyramid" approach in which most tests are unit tests, a smaller proportion are integration tests, and end-to-end (e2e) tests make up the lowest proportion.[6][7][8]
A study conducted by NIST in 2002 reported that software bugs cost the U.S. economy $59.5 billion annually. More than a third of this cost could be avoided if better software testing was performed.[9][dubious–discuss]
Outsourcing software testing because of costs is very common, with China, the Philippines, and India being preferred destinations.[citation needed]
Glenford J. Myers initially introduced the separation of debugging from testing in 1979.[10] Although his attention was on breakage testing ("A successful test case is one that detects an as-yet undiscovered error."[10]: 16), it illustrated the desire of the software engineering community to separate fundamental development activities, such as debugging, from that of verification.
Software testing is typically goal driven.
Software testing typically includes handling software bugs – a defect in the code that causes an undesirable result.[11]: 31 Bugs generally slow testing progress and involve programmer assistance to debug and fix.
Not all defects cause a failure. For example, a defect in dead code will not be considered a failure.
A defect that does not cause failure at one point in time may lead to failure later due to environmental changes. Examples of environment change include running on new computer hardware, changes in data, and interacting with different software.[12]
A single defect may result in multiple failure symptoms.
Software testing may involve a requirements gap – an omission from the design for a requirement.[5]: 426 Requirement gaps can often be non-functional requirements such as testability, scalability, maintainability, performance, and security.
A fundamental limitation of software testing is that testing under all combinations of inputs and preconditions (initial state) is not feasible, even with a simple product.[3]: 17–18[13] Defects that manifest in unusual conditions are difficult to find in testing. Also, non-functional dimensions of quality (how it is supposed to be versus what it is supposed to do) – usability, scalability, performance, compatibility, and reliability – can be subjective; something that constitutes sufficient value to one person may not be sufficient to another.
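As a minimal, hypothetical sketch of the dynamic, oracle-based checking described above, the following Python test compares a function's actual output against expected values derived from its specification; the `slugify` function, its behaviour, and the use of pytest as the runner are illustrative assumptions, not drawn from any source cited here.

```python
# test_slugify.py -- a minimal, hypothetical example of a dynamic unit test.
# The "oracle" is the specification-derived expected value in each assert.
import re

def slugify(title: str) -> str:
    """Hypothetical function under test: lower-case a title and replace
    runs of non-alphanumeric characters with single hyphens."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

def test_slugify_basic():
    # The expected string acts as the test oracle.
    assert slugify("Software Testing 101") == "software-testing-101"

def test_slugify_strips_leading_and_trailing_separators():
    assert slugify("  --Hello, World!--  ") == "hello-world"
```

Run with a test runner such as pytest, each test passes or fails by comparing actual output with the expected value; small, fast checks of this kind form the base of the test pyramid described above.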
Although testing for every possible input is not feasible, testing can use combinatorics to maximize coverage while minimizing tests.[14]
Testing can be categorized many ways.[15]
Software testing can be categorized into levels based on how much of the software system is the focus of a test.[18][19][20][21]
There are many approaches to software testing. Reviews, walkthroughs, or inspections are referred to as static testing, whereas executing programmed code with a given set of test cases is referred to as dynamic testing.[23][24]
Static testing is often implicit, like proofreading; it also occurs when programming tools or text editors check source code structure, or when compilers (pre-compilers) check syntax and data flow as static program analysis. Dynamic testing takes place when the program itself is run. Dynamic testing may begin before the program is 100% complete in order to test particular sections of code; it can be applied to discrete functions or modules.[23][24] Typical techniques for these are either using stubs/drivers or execution from a debugger environment.[24]
Static testing involves verification, whereas dynamic testing also involves validation.[24]
Passive testing means verifying the system's behavior without any interaction with the software product. Contrary to active testing, testers do not provide any test data but look at system logs and traces. They mine for patterns and specific behavior in order to make certain kinds of decisions.[25] This is related to offline runtime verification and log analysis.
The type of testing strategy to be performed depends on whether the tests to be applied to the implementation under test (IUT) should be decided before the testing plan starts to be executed (preset testing[28]) or whether each input to be applied to the IUT can be dynamically dependent on the outputs obtained during the application of the previous tests (adaptive testing[29][30]).
Software testing can often be divided into white-box and black-box. These two approaches are used to describe the point of view that the tester takes when designing test cases. A hybrid approach called grey-box testing, which includes aspects of both, may also be applied to software testing methodology.[31][32]
White-box testing (also known as clear box testing, glass box testing, transparent box testing, and structural testing) verifies the internal structures or workings of a program, as opposed to the functionality exposed to the end-user. In white-box testing, an internal perspective of the system (the source code), as well as programming skills, are used to design test cases. The tester chooses inputs to exercise paths through the code and determines the appropriate outputs.[31][32] This is analogous to testing nodes in a circuit, e.g., in-circuit testing (ICT).
While white-box testing can be applied at the unit, integration, and system levels of the software testing process, it is usually done at the unit level.[33] It can test paths within a unit, paths between units during integration, and between subsystems during a system-level test. Though this method of test design can uncover many errors or problems, it might not detect unimplemented parts of the specification or missing requirements.
Techniques used in white-box testing include:[32][34]
Code coverage tools can evaluate the completeness of a test suite that was created with any method, including black-box testing.
This allows the software team to examine parts of a system that are rarely tested and ensures that the most important function points have been tested.[35] Code coverage as a software metric can be reported as a percentage for:[31][35][36]
100% statement coverage ensures that all code statements are executed at least once. This is helpful in ensuring correct functionality, but not sufficient since the same code may process different inputs correctly or incorrectly.[37]
Black-box testing (also known as functional testing) describes designing test cases without knowledge of the implementation, without reading the source code. The testers are only aware of what the software is supposed to do, not how it does it.[38] Black-box testing methods include: equivalence partitioning, boundary value analysis, all-pairs testing, state transition tables, decision table testing, fuzz testing, model-based testing, use case testing, exploratory testing, and specification-based testing.[31][32][36]
Specification-based testing aims to test the functionality of software according to the applicable requirements.[39] This level of testing usually requires thorough test cases to be provided to the tester, who then can simply verify that, for a given input, the output value (or behavior) either "is" or "is not" the same as the expected value specified in the test case. Test cases are built around specifications and requirements, i.e., what the application is supposed to do. It uses external descriptions of the software, including specifications, requirements, and designs, to derive test cases. These tests can be functional or non-functional, though usually functional.
Specification-based testing may be necessary to assure correct functionality, but it is insufficient to guard against complex or high-risk situations.[40]
Black-box testing can be used at any level of testing, although usually not at the unit level.[33]
Component interface testing is a variation of black-box testing, with the focus on the data values beyond just the related actions of a subsystem component.[41] The practice of component interface testing can be used to check the handling of data passed between various units, or subsystem components, beyond full integration testing between those units.[42][43] The data being passed can be considered as "message packets", and the range or data types can be checked for data generated from one unit and tested for validity before being passed into another unit. One option for interface testing is to keep a separate log file of data items being passed, often with a timestamp logged to allow analysis of thousands of cases of data passed between units for days or weeks. Tests can include checking the handling of some extreme data values while other interface variables are passed as normal values.[42] Unusual data values in an interface can help explain unexpected performance in the next unit.
The aim of visual testing is to provide developers with the ability to examine what was happening at the point of software failure by presenting the data in such a way that the developer can easily find the information he or she requires, and the information is expressed clearly.[44][45]
At the core of visual testing is the idea that showing someone a problem (or a test failure), rather than just describing it, greatly increases clarity and understanding.
Visual testing, therefore, requires the recording of the entire test process – capturing everything that occurs on the test system in video format. Output videos are supplemented by real-time tester input via picture-in-a-picture webcam and audio commentary from microphones. Visual testing provides a number of advantages. The quality of communication is increased drastically because testers can show the problem (and the events leading up to it) to the developer as opposed to just describing it, and the need to replicate test failures will cease to exist in many cases. The developer will have all the evidence he or she requires of a test failure and can instead focus on the cause of the fault and how it should be fixed. Ad hoc testingandexploratory testingare important methodologies for checking software integrity because they require less preparation time to implement, while the important bugs can be found quickly.[46]In ad hoc testing, where testing takes place in an improvised impromptu way, the ability of the tester(s) to base testing off documented methods and then improvise variations of those tests can result in a more rigorous examination of defect fixes.[46]However, unless strict documentation of the procedures is maintained, one of the limits of ad hoc testing is lack of repeatability.[46] Grey-box testing (American spelling: gray-box testing) involves using knowledge of internal data structures and algorithms for purposes of designing tests while executing those tests at the user, or black-box level. The tester will often have access to both "the source code and the executable binary."[47]Grey-box testing may also includereverse engineering(using dynamic code analysis) to determine, for instance, boundary values or error messages.[47]Manipulating input data and formatting output do not qualify as grey-box, as the input and output are clearly outside of the "black box" that we are calling the system under test. This distinction is particularly important when conductingintegration testingbetween two modules of code written by two different developers, where only the interfaces are exposed for the test. By knowing the underlying concepts of how the software works, the tester makes better-informed testing choices while testing the software from outside. Typically, a grey-box tester will be permitted to set up an isolated testing environment with activities, such as seeding adatabase. The tester can observe the state of the product being tested after performing certain actions such as executingSQLstatements against the database and then executing queries to ensure that the expected changes have been reflected. Grey-box testing implements intelligent test scenarios based on limited information. This will particularly apply to data type handling,exception handling, and so on.[48] With the concept of grey-box testing, this "arbitrary distinction" between black- and white-box testing has faded somewhat.[33] Most software systems have installation procedures that are needed before they can be used for their main purpose. Testing these procedures to achieve an installed software system that may be used is known asinstallation testing.[49]: 139These procedures may involve full or partial upgrades, and install/uninstall processes. 
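As a minimal sketch of the grey-box, database-seeding style of testing described in the preceding paragraphs, the following Python example seeds an in-memory SQLite database, drives the code under test only through its public function, and then queries the database directly to verify the resulting state; the schema and the `deactivate_user` function are hypothetical, not taken from any cited source.

```python
# A hypothetical grey-box test: the tester uses knowledge of the internal
# schema to seed the database and to verify state, while exercising the
# feature only through its public function.
import sqlite3

def deactivate_user(conn: sqlite3.Connection, user_id: int) -> None:
    """Hypothetical code under test: mark a user as inactive."""
    conn.execute("UPDATE users SET active = 0 WHERE id = ?", (user_id,))
    conn.commit()

def test_deactivate_user_marks_row_inactive():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, active INTEGER)")
    conn.execute("INSERT INTO users (id, name, active) VALUES (1, 'alice', 1)")  # seed
    conn.commit()

    deactivate_user(conn, 1)  # drive the public interface only

    # Grey-box verification: query the database directly to confirm the state change.
    (active,) = conn.execute("SELECT active FROM users WHERE id = 1").fetchone()
    assert active == 0
```

The direct SQL check at the end is what makes this grey-box rather than black-box: the test relies on knowledge of the internal schema even though the feature itself is exercised through its public interface.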
A common cause of software failure (real or perceived) is a lack of itscompatibilitywith otherapplication software,operating systems(or operating systemversions, old or new), or target environments that differ greatly from the original (such as aterminalorGUIapplication intended to be run on thedesktopnow being required to become aWeb application, which must render in aWeb browser). For example, in the case of a lack ofbackward compatibility, this can occur because the programmers develop and test software only on the latest version of the target environment, which not all users may be running. This results in the unintended consequence that the latest work may not function on earlier versions of the target environment, or on older hardware that earlier versions of the target environment were capable of using. Sometimes such issues can be fixed by proactivelyabstractingoperating system functionality into a separate programmoduleorlibrary. Sanity testingdetermines whether it is reasonable to proceed with further testing. Smoke testingconsists of minimal attempts to operate the software, designed to determine whether there are any basic problems that will prevent it from working at all. Such tests can be used asbuild verification test. Regression testing focuses on finding defects after a major code change has occurred. Specifically, it seeks to uncoversoftware regressions, as degraded or lost features, including old bugs that have come back. Such regressions occur whenever software functionality that was previously working correctly, stops working as intended. Typically, regressions occur as anunintended consequenceof program changes, when the newly developed part of the software collides with the previously existing code. Regression testing is typically the largest test effort in commercial software development,[50]due to checking numerous details in prior software features, and even new software can be developed while using some old test cases to test parts of the new design to ensure prior functionality is still supported. Common methods of regression testing include re-running previous sets of test cases and checking whether previously fixed faults have re-emerged. The depth of testing depends on the phase in the release process and theriskof the added features. They can either be complete, for changes added late in the release or deemed to be risky, or be very shallow, consisting of positive tests on each feature, if the changes are early in the release or deemed to be of low risk. Acceptance testing is system-level testing to ensure the software meets customer expectations.[51][52][53][54] Acceptance testing may be performed as part of the hand-off process between any two phases of development.[citation needed] Tests are frequently grouped into these levels by where they are performed in the software development process, or by the level of specificity of the test.[54] Sometimes, UAT is performed by the customer, in their environment and on their own hardware. OAT is used to conduct operational readiness (pre-release) of a product, service or system as part of aquality management system. OAT is a common type of non-functional software testing, used mainly insoftware developmentandsoftware maintenanceprojects. This type of testing focuses on the operational readiness of the system to be supported, or to become part of the production environment. 
Hence, it is also known as operational readiness testing (ORT) or operations readiness and assurance (OR&A) testing. Functional testing within OAT is limited to those tests that are required to verify the non-functional aspects of the system.
In addition, software testing should ensure that the system is portable and works as expected, and also that it does not damage or partially corrupt its operating environment or cause other processes within that environment to become inoperative.[55]
Contractual acceptance testing is performed based on the contract's acceptance criteria defined during the agreement of the contract, while regulatory acceptance testing is performed based on the regulations relevant to the software product. Both of these tests can be performed by users or independent testers. Regulation acceptance testing sometimes involves the regulatory agencies auditing the test results.[54]
Alpha testing is simulated or actual operational testing by potential users/customers or an independent test team at the developers' site. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing before the software goes to beta testing.[56]
Beta testing comes after alpha testing and can be considered a form of external user acceptance testing. Versions of the software, known as beta versions, are released to a limited audience outside of the programming team known as beta testers. The software is released to groups of people so that further testing can ensure the product has few faults or bugs. Beta versions can be made available to the open public to increase the feedback field to a maximal number of future users and to deliver value earlier, for an extended or even indefinite period of time (perpetual beta).[57]
Functional testing refers to activities that verify a specific action or function of the code. These are usually found in the code requirements documentation, although some development methodologies work from use cases or user stories. Functional tests tend to answer the question of "can the user do this" or "does this particular feature work."
Non-functional testing refers to aspects of the software that may not be related to a specific function or user action, such as scalability or other performance, behavior under certain constraints, or security. Testing will determine the breaking point, the point at which extremes of scalability or performance lead to unstable execution. Non-functional requirements tend to be those that reflect the quality of the product, particularly in the context of the suitability perspective of its users.
Continuous testing is the process of executing automated tests as part of the software delivery pipeline to obtain immediate feedback on the business risks associated with a software release candidate.[58][59] Continuous testing includes the validation of both functional requirements and non-functional requirements; the scope of testing extends from validating bottom-up requirements or user stories to assessing the system requirements associated with overarching business goals.[60][61]
Destructive testing attempts to cause the software or a sub-system to fail. It verifies that the software functions properly even when it receives invalid or unexpected inputs, thereby establishing the robustness of input validation and error-management routines.[citation needed] Software fault injection, in the form of fuzzing, is an example of failure testing.
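As a minimal sketch of this kind of fuzzing, the following Python snippet feeds randomly generated byte strings to a small parser and requires only that it either succeeds or fails with its documented exception type; the `parse_config` function and its `ConfigError` exception are hypothetical stand-ins for real code under test.

```python
# A hypothetical, minimal fuzz test: hammer a parser with random bytes and
# require that it either succeeds or raises only its documented error type.
import random

class ConfigError(Exception):
    """Hypothetical error type that the parser is allowed to raise."""

def parse_config(data: bytes) -> dict:
    """Hypothetical code under test: parse 'key=value' lines from UTF-8 bytes."""
    try:
        text = data.decode("utf-8")
    except UnicodeDecodeError as exc:
        raise ConfigError("not valid UTF-8") from exc
    result = {}
    for line in text.splitlines():
        if line and "=" not in line:
            raise ConfigError(f"malformed line: {line!r}")
        if line:
            key, _, value = line.partition("=")
            result[key.strip()] = value.strip()
    return result

def test_fuzz_parse_config_never_crashes():
    rng = random.Random(0)  # fixed seed keeps the test reproducible
    for _ in range(1000):
        blob = bytes(rng.randrange(256) for _ in range(rng.randrange(64)))
        try:
            parse_config(blob)   # any returned dict is acceptable
        except ConfigError:
            pass                 # documented, controlled failure is acceptable
        # Any other exception propagates and fails the test, signalling a robustness bug.
```

Coverage-guided fuzzers such as AFL or libFuzzer apply the same idea at much larger scale, generating inputs based on which code paths earlier inputs exercised rather than purely at random.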
Various commercial non-functional testing tools are linked from the software fault injection page; there are also numerous open-source and free software tools available that perform destructive testing.
Performance testing is generally executed to determine how a system or sub-system performs in terms of responsiveness and stability under a particular workload. It can also serve to investigate, measure, validate or verify other quality attributes of the system, such as scalability, reliability and resource usage.
Load testing is primarily concerned with testing that the system can continue to operate under a specific load, whether that be large quantities of data or a large number of users. This is generally referred to as software scalability. The related load testing activity, when performed as a non-functional activity, is often referred to as endurance testing. Volume testing is a way to test software functions even when certain components (for example a file or database) increase radically in size. Stress testing is a way to test reliability under unexpected or rare workloads. Stability testing (often referred to as load or endurance testing) checks to see if the software can continuously function well for or beyond an acceptable period.
There is little agreement on what the specific goals of performance testing are. The terms load testing, performance testing, scalability testing, and volume testing are often used interchangeably.
Real-time software systems have strict timing constraints. To test if timing constraints are met, real-time testing is used.
Usability testing checks whether the user interface is easy to use and understand. It is concerned mainly with the use of the application. This is not a kind of testing that can be automated; actual human users are needed, monitored by skilled UI designers.
Usability testing can use structured models to check how well an interface works. The Stanton, Theofanos, and Joshi (2015) model looks at user experience, and the Al-Sharafat and Qadoumi (2016) model is for expert evaluation, helping to assess usability in digital applications.[62]
Accessibility testing is done to ensure that the software is accessible to persons with disabilities. Some of the common web accessibility tests are
Security testing is essential for software that processes confidential data to prevent system intrusion by hackers. The International Organization for Standardization (ISO) defines this as a "type of testing conducted to evaluate the degree to which a test item, and associated data and information, are protected so that unauthorised persons or systems cannot use, read or modify them, and authorized persons or systems are not denied access to them."[63]
Testing for internationalization and localization validates that the software can be used with different languages and geographic regions. The process of pseudolocalization is used to test the ability of an application to be translated to another language, and to make it easier to identify when the localization process may introduce new bugs into the product. Globalization testing verifies that the software is adapted for a new culture, such as different currencies or time zones.[64]
Actual translation to human languages must be tested, too. Possible localization and globalization failures include:
Development testing is a software development process that involves the synchronized application of a broad spectrum of defect prevention and detection strategies in order to reduce software development risks, time, and costs.
It is performed by the software developer or engineer during the construction phase of the software development lifecycle. Development testing aims to eliminate construction errors before code is promoted to other testing; this strategy is intended to increase the quality of the resulting software as well as the efficiency of the overall development process.
Depending on the organization's expectations for software development, development testing might include static code analysis, data flow analysis, metrics analysis, peer code reviews, unit testing, code coverage analysis, traceability, and other software testing practices.
A/B testing is a method of running a controlled experiment to determine if a proposed change is more effective than the current approach. Customers are routed either to a current version (control) of a feature or to a modified version (treatment), and data is collected to determine which version is better at achieving the desired outcome.
Concurrent or concurrency testing assesses the behaviour and performance of software and systems that use concurrent computing, generally under normal usage conditions. Typical problems this type of testing will expose are deadlocks, race conditions and problems with shared memory/resource handling.
In software testing, conformance testing verifies that a product performs according to its specified standards. Compilers, for instance, are extensively tested to determine whether they meet the recognized standard for that language.
Creating a display of expected output, whether as a data comparison of text or screenshots of the UI,[3]: 195 is sometimes called snapshot testing or Golden Master Testing. Unlike many other forms of testing, this cannot detect failures automatically and instead requires that a human evaluate the output for inconsistencies.
Property testing is a testing technique where, instead of asserting that specific inputs produce specific expected outputs, the practitioner randomly generates many inputs, runs the program on all of them, and asserts the truth of some "property" that should be true for every pair of input and output. For example, every output from a serialization function should be accepted by the corresponding deserialization function, and every output from a sort function should be a non-decreasing list containing exactly the same elements as its input.
Property testing libraries allow the user to control the strategy by which random inputs are constructed, to ensure coverage of degenerate cases, or inputs featuring specific patterns that are needed to fully exercise aspects of the implementation under test.
Property testing is also sometimes known as "generative testing" or "QuickCheck testing" since it was introduced and popularized by the Haskell library QuickCheck.[65]
Metamorphic testing (MT) is a property-based software testing technique, which can be an effective approach for addressing the test oracle problem and the test case generation problem. The test oracle problem is the difficulty of determining the expected outcomes of selected test cases or of determining whether the actual outputs agree with the expected outcomes.
VCR testing, also known as "playback testing" or "record/replay" testing, is a testing technique for increasing the reliability and speed of regression tests that involve a component that is slow or unreliable to communicate with, often a third-party API outside of the tester's control.
It involves making a recording ("cassette") of the system's interactions with the external component, and then replaying the recorded interactions as a substitute for communicating with the external system on subsequent runs of the test. The technique was popularized in web development by the Ruby library vcr.
In an organization, testers may be in a separate team from the rest of the software development team or they may be integrated into one team. Software testing can also be performed by non-dedicated software testers.
In the 1980s, the term software tester started to be used to denote a separate profession.
Notable software testing roles and titles include:[66] test manager, test lead, test analyst, test designer, tester, automation developer, and test administrator.[67]
Organizations that develop software perform testing differently, but there are common patterns.[2]
In waterfall development, testing is generally performed after the code is completed but before the product is shipped to the customer.[68] This practice often results in the testing phase being used as a project buffer to compensate for project delays, thereby compromising the time devoted to testing.[10]: 145–146
Some contend that the waterfall process allows for testing to start when the development project starts and to be a continuous process until the project finishes.[69]
Agile software development commonly involves testing while the code is being written and organizing teams with both programmers and testers and with team members performing both programming and testing.
One agile practice, test-driven software development (TDD), is a way of unit testing such that unit-level testing is performed while writing the product code.[70] Test code is updated as new features are added and failure conditions are discovered (bugs fixed). Commonly, the unit test code is maintained with the project code, integrated in the build process, and run on each build and as part of regression testing. The goals of this continuous integration are to support development and reduce defects.[71][70]
Even in organizations that separate teams by programming and testing functions, many often have the programmers perform unit testing.[72]
The sample below is common for waterfall development. The same activities are commonly found in other development models, but might be described differently.
Software testing is used in association with verification and validation:[73]
The terms verification and validation are commonly used interchangeably in the industry; it is also common to see these two terms defined with contradictory definitions. According to the IEEE Standard Glossary of Software Engineering Terminology:[11]: 80–81
And, according to the ISO 9000 standard:
The contradiction is caused by the use of the concepts of requirements and specified requirements but with different meanings.
In the case of IEEE standards, the specified requirements, mentioned in the definition of validation, are the set of problems, needs and wants of the stakeholders that the software must solve and satisfy. Such requirements are documented in a Software Requirements Specification (SRS). And the products mentioned in the definition of verification are the output artifacts of every phase of the software development process. These products are, in fact, specifications such as Architectural Design Specification, Detailed Design Specification, etc. The SRS is also a specification, but it cannot be verified (at least not in the sense used here, more on this subject below).
But, for the ISO 9000, the specified requirements are the set of specifications, as just mentioned above, that must be verified. A specification, as previously explained, is the product of a software development process phase that receives another specification as input. A specification is verified successfully when it correctly implements its input specification. All the specifications can be verified except the SRS because it is the first one (it can be validated, though). Examples: The Design Specification must implement the SRS; and, the Construction phase artifacts must implement the Design Specification. So, when these words are defined in common terms, the apparent contradiction disappears. Both the SRS and the software must be validated. The SRS can be validated statically by consulting with the stakeholders. Nevertheless, running some partial implementation of the software or a prototype of any kind (dynamic testing) and obtaining positive feedback from them, can further increase the certainty that the SRS is correctly formulated. On the other hand, the software, as a final and running product (not its artifacts and documents, including the source code) must be validated dynamically with the stakeholders by executing the software and having them to try it. Some might argue that, for SRS, the input is the words of stakeholders and, therefore, SRS validation is the same as SRS verification. Thinking this way is not advisable as it only causes more confusion. It is better to think of verification as a process involving a formal and technical input document. In some organizations, software testing is part of asoftware quality assurance(SQA) process.[3]: 347In SQA, software process specialists and auditors are concerned with the software development process rather than just the artifacts such as documentation, code and systems. They examine and change thesoftware engineeringprocess itself to reduce the number of faults that end up in the delivered software: the so-called defect rate. What constitutes an acceptable defect rate depends on the nature of the software; a flight simulator video game would have much higher defect tolerance than software for an actual airplane. Although there are close links with SQA, testing departments often exist independently, and there may be no SQA function in some companies.[citation needed] Software testing is an activity to investigate software under test in order to provide quality-related information to stakeholders. By contrast, QA (quality assurance) is the implementation of policies and procedures intended to prevent defects from reaching customers. Quality measures include such topics ascorrectness, completeness,securityandISO/IEC 9126requirements such as capability,reliability,efficiency,portability,maintainability, compatibility, andusability. There are a number of frequently usedsoftware metrics, or measures, which are used to assist in determining the state of the software or the adequacy of the testing. A software testing process can produce severalartifacts. The actual artifacts produced are a factor of the software development model used, stakeholder and organisational needs. Atest planis a document detailing the approach that will be taken for intended test activities. 
The plan may include aspects such as objectives, scope, processes and procedures, personnel requirements, and contingency plans.[51]The test plan could come in the form of a single plan that includes all test types (like an acceptance or system test plan) and planning considerations, or it may be issued as a master test plan that provides an overview of more than one detailed test plan (a plan of a plan).[51]A test plan can be, in some cases, part of a wide "test strategy" which documents overall testing approaches, which may itself be a master test plan or even a separate artifact. Atest casenormally consists of a unique identifier, requirement references from a design specification, preconditions, events, a series of steps (also known as actions) to follow, input, output, expected result, and the actual result. Clinically defined, a test case is an input and an expected result.[75]This can be as terse as "for condition x your derived result is y", although normally test cases describe in more detail the input scenario and what results might be expected. It can occasionally be a series of steps (but often steps are contained in a separate test procedure that can be exercised against multiple test cases, as a matter of economy) but with one expected result or expected outcome. The optional fields are a test case ID, test step, or order of execution number, related requirement(s), depth, test category, author, and check boxes for whether the test is automatable and has been automated. Larger test cases may also contain prerequisite states or steps, and descriptions. A test case should also contain a place for the actual result. These steps can be stored in a word processor document, spreadsheet, database, or other common repositories. In a database system, you may also be able to see past test results, who generated the results, and what system configuration was used to generate those results. These past results would usually be stored in a separate table. Atest scriptis a procedure or programming code that replicates user actions. Initially, the term was derived from the product of work created by automated regression test tools. A test case will be a baseline to create test scripts using a tool or a program. In most cases, multiple sets of values or data are used to test the same functionality of a particular feature. All the test values and changeable environmental components are collected in separate files and stored as test data. It is also useful to provide this data to the client and with the product or a project. There are techniques to generate Test data. The software, tools, samples of data input and output, and configurations are all referred to collectively as atest harness. A test run is a collection of test cases or test suites that the user is executing and comparing the expected with the actual results. Once complete, a report or all executed tests may be generated. Several certification programs exist to support the professional aspirations of software testers and quality assurance specialists. A few practitioners argue that the testing field is not ready for certification, as mentioned in thecontroversysection. Some of the majorsoftware testing controversiesinclude: It is commonly believed that the earlier a defect is found, the cheaper it is to fix it. 
The following table shows the cost of fixing the defect depending on the stage it was found.[85]For example, if a problem in the requirements is found only post-release, then it would cost 10–100 times more to fix than if it had already been found by the requirements review. With the advent of moderncontinuous deploymentpractices and cloud-based services, the cost of re-deployment and maintenance may lessen over time. The data from which this table is extrapolated is scant. Laurent Bossavit says in his analysis: The "smaller projects" curve turns out to be from only two teams of first-year students, a sample size so small that extrapolating to "smaller projects in general" is totally indefensible. The GTE study does not explain its data, other than to say it came from two projects, one large and one small. The paper cited for the Bell Labs "Safeguard" project specifically disclaims having collected the fine-grained data that Boehm's data points suggest. The IBM study (Fagan's paper) contains claims that seem to contradict Boehm's graph and no numerical results that clearly correspond to his data points. Boehm doesn't even cite a paper for the TRW data, except when writing for "Making Software" in 2010, and there he cited the original 1976 article. There exists a large study conducted at TRW at the right time for Boehm to cite it, but that paper doesn't contain the sort of data that would support Boehm's claims.[86]
https://en.wikipedia.org/wiki/Software_testing
In the context ofhardwareandsoftwaresystems,formal verificationis the act ofprovingor disproving thecorrectnessof a system with respect to a certainformal specificationor property, usingformal methodsofmathematics.[1]Formal verification is a key incentive forformal specificationof systems, and is at the core offormal methods. It represents an important dimension ofanalysis and verificationinelectronic design automationand is one approach tosoftware verification. The use of formal verification enables the highestEvaluation Assurance Level(EAL7) in the framework ofcommon criteriaforcomputer securitycertification.[2] Formal verification can be helpful in proving the correctness of systems such as:cryptographic protocols,combinational circuits,digital circuitswith internal memory, and software expressed assource codein aprogramming language. Prominent examples of verified software systems include theCompCertverifiedCcompilerand theseL4high-assuranceoperating system kernel. The verification of these systems is done by ensuring the existence of aformal proofof amathematical modelof the system.[3]Examples of mathematical objects used to model systems are:finite-state machines,labelled transition systems,Horn clauses,Petri nets,vector addition systems,timed automata,hybrid automata,process algebra, formal semantics of programming languages such asoperational semantics,denotational semantics,axiomatic semanticsandHoare logic.[4] Model checkinginvolves a systematic and exhaustive exploration of the mathematical model. Such exploration is possible forfinite models, but also for some infinite models, where infinite sets of states can be effectively represented finitely by using abstraction or taking advantage of symmetry. Usually, this consists of exploring all states and transitions in the model, by using smart and domain-specific abstraction techniques to consider whole groups of states in a single operation and reduce computing time. Implementation techniques includestate space enumeration, symbolic state space enumeration,abstract interpretation,symbolic simulation, abstraction refinement.[citation needed]The properties to be verified are often described intemporal logics, such aslinear temporal logic(LTL),Property Specification Language(PSL),SystemVerilogAssertions (SVA),[5]orcomputational tree logic(CTL). The great advantage of model checking is that it is often fully automatic; its primary disadvantage is that it does not in general scale to large systems; symbolic models are typically limited to a few hundred bits of state, while explicit state enumeration requires the state space being explored to be relatively small. Another approach is deductive verification.[6][7]It consists of generating from the system and its specifications (and possibly other annotations) a collection of mathematicalproof obligations, the truth of which imply conformance of the system to its specification, and discharging these obligations using eitherproof assistants(interactive theorem provers) (such asHOL,ACL2,Isabelle,Rocq(previously known asCoq) orPVS), orautomatic theorem provers, including in particularsatisfiability modulo theories(SMT) solvers. This approach has the disadvantage that it may require the user to understand in detail why the system works correctly, and to convey this information to the verification system, either in the form of a sequence of theorems to be proved or in the form of specifications (invariants, preconditions, postconditions) of system components (e.g. 
functions or procedures) and perhaps subcomponents (such as loops or data structures). Formal verification of software programs involves proving that a program satisfies a formal specification of its behavior. Subareas of formal verification include deductive verification (see above),abstract interpretation,automated theorem proving,type systems, andlightweight formal methods. A promising type-based verification approach isdependently typed programming, in which the types of functions include (at least part of) those functions' specifications, and type-checking the code establishes its correctness against those specifications. Fully featured dependently typed languages support deductive verification as a special case. Another complementary approach isprogram derivation, in which efficient code is produced fromfunctionalspecifications by a series of correctness-preserving steps. An example of this approach is theBird–Meertens formalism, and this approach can be seen as another form ofprogram synthesis. These techniques can besound, meaning that the verified properties can be logically deduced from the semantics, orunsound, meaning that there is no such guarantee. A sound technique yields a result only once it has covered the entire space of possibilities. An example of an unsound technique is one that covers only a subset of the possibilities, for instance only integers up to a certain number, and give a "good-enough" result. Techniques can also bedecidable, meaning that their algorithmic implementations areguaranteed to terminatewith an answer, or undecidable, meaning that they may never terminate. By bounding the scope of possibilities, unsound techniques that are decidable might be able to be constructed when no decidable sound techniques are available. Verification is one aspect of testing a product's fitness for purpose. Validation is the complementary aspect. Often one refers to the overall checking process as V & V. The verification process consists of static/structural and dynamic/behavioral aspects. E.g., for a software product one can inspect the source code (static) and run against specific test cases (dynamic). Validation usually can be done only dynamically, i.e., the product is tested by putting it through typical and atypical usages ("Does it satisfactorily meet alluse cases?"). Program repair is performed with respect to anoracle, encompassing the desired functionality of the program which is used for validation of the generated fix. A simple example is a test-suite—the input/output pairs specify the functionality of the program. A variety of techniques are employed, most notably usingsatisfiability modulo theories(SMT) solvers, andgenetic programming,[8]using evolutionary computing to generate and evaluate possible candidates for fixes. The former method is deterministic, while the latter is randomized. Program repair combines techniques from formal verification andprogram synthesis. Fault-localization techniques in formal verification are used to compute program points which might be possible bug-locations, which can be targeted by the synthesis modules. Repair systems often focus on a small pre-defined class of bugs in order to reduce the search space. Industrial use is limited owing to the computational cost of existing techniques. 
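To make the idea of exhaustive state exploration concrete, the sketch below is a minimal explicit-state check of a hypothetical two-process mutual-exclusion protocol defined inline; it is an illustration of the model-checking approach described above under assumed toy semantics, not a production tool. It enumerates every reachable global state by breadth-first search and checks a safety property (the two processes are never in their critical sections at the same time) in each state.

    from collections import deque

    # Toy two-process mutual-exclusion model (hypothetical, defined only for this sketch).
    # A global state is (phase of process 0, phase of process 1, lock holder or None),
    # where a phase is "idle", "trying", or "critical".
    INITIAL = ("idle", "idle", None)

    def _with(phases, lock, pid, new_phase):
        updated = list(phases)
        updated[pid] = new_phase
        return (updated[0], updated[1], lock)

    def successors(state):
        """All states reachable in one step, encoding the protocol's transition rules."""
        phases, lock = list(state[:2]), state[2]
        for pid in (0, 1):
            phase = phases[pid]
            if phase == "idle":                       # start competing for the lock
                yield _with(phases, lock, pid, "trying")
            elif phase == "trying" and lock is None:  # acquire the lock if it is free
                yield _with(phases, pid, pid, "critical")
            elif phase == "critical":                 # leave and release the lock
                yield _with(phases, None, pid, "idle")

    def safe(state):
        """Safety property: the two processes are never critical at the same time."""
        return not (state[0] == "critical" and state[1] == "critical")

    def model_check():
        """Breadth-first exploration of every reachable state, checking `safe` in each."""
        seen, frontier = {INITIAL}, deque([INITIAL])
        while frontier:
            state = frontier.popleft()
            if not safe(state):
                return f"property violated in state {state}"
            for nxt in successors(state):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
        return f"property holds in all {len(seen)} reachable states"

    if __name__ == "__main__":
        print(model_check())

Real model checkers rely on the symbolic representations and temporal-logic specifications (such as LTL or CTL) discussed above to scale far beyond what naive state space enumeration can handle.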
The growth in complexity of designs increases the importance of formal verification techniques in the hardware industry.[9][10] At present, formal verification is used by most or all leading hardware companies,[11] but its use in the software industry is still languishing. This could be attributed to the greater need in the hardware industry, where errors have greater commercial significance. Because of the potential subtle interactions between components, it is increasingly difficult to exercise a realistic set of possibilities by simulation. Important aspects of hardware design are amenable to automated proof methods, making formal verification easier to introduce and more productive.[12]
As of 2011, several operating systems have been formally verified: NICTA's Secure Embedded L4 microkernel, sold commercially as seL4 by OK Labs;[13] the OSEK/VDX-based real-time operating system ORIENTAIS by East China Normal University; Green Hills Software's Integrity operating system; and SYSGO's PikeOS.[14][15] In 2016, a team led by Zhong Shao at Yale developed a formally verified operating system kernel called CertiKOS.[16][17]
As of 2017, formal verification has been applied to the design of large computer networks through a mathematical model of the network,[18] and as part of a new network technology category, intent-based networking.[19] Network software vendors that offer formal verification solutions include Cisco,[20] Forward Networks,[21][22] and Veriflow Systems.[23]
The SPARK programming language provides a toolset which enables software development with formal verification and is used in several high-integrity systems. The CompCert C compiler is a formally verified C compiler implementing the majority of ISO C.[24][25]
https://en.wikipedia.org/wiki/Formal_verification
Software testing is the act of checking whether software satisfies expectations. Software testing can provide objective, independent information about the quality of software and the risk of its failure to a user or sponsor.[1]
Software testing can determine the correctness of software for specific scenarios but cannot determine correctness for all scenarios.[2][3] It cannot find all bugs. Based on the criteria for measuring correctness from an oracle, software testing employs principles and mechanisms that might recognize a problem. Examples of oracles include specifications, contracts,[4] comparable products, past versions of the same product, inferences about intended or expected purpose, user or customer expectations, relevant standards, and applicable laws.
Software testing is often dynamic in nature: running the software to verify that actual output matches expected output. It can also be static in nature: reviewing code and its associated documentation. Software testing is often used to answer the question: Does the software do what it is supposed to do and what it needs to do? Information learned from software testing may be used to improve the process by which software is developed.[5]: 41–43
Software testing should follow a "pyramid" approach wherein most tests are unit tests, followed by integration tests, with end-to-end (e2e) tests making up the smallest proportion.[6][7][8]
A study conducted by NIST in 2002 reported that software bugs cost the U.S. economy $59.5 billion annually; more than a third of this cost could be avoided if better software testing was performed.[9] Outsourcing software testing because of costs is very common, with China, the Philippines, and India being preferred destinations.
Glenford J. Myers initially introduced the separation of debugging from testing in 1979.[10] Although his attention was on breakage testing ("A successful test case is one that detects an as-yet undiscovered error."[10]: 16), it illustrated the desire of the software engineering community to separate fundamental development activities, such as debugging, from that of verification.
Software testing is typically goal driven. Software testing typically includes handling software bugs – a bug being a defect in the code that causes an undesirable result.[11]: 31 Bugs generally slow testing progress and involve programmer assistance to debug and fix. Not all defects cause a failure. For example, a defect in dead code will not be considered a failure. A defect that does not cause failure at one point in time may lead to failure later due to environmental changes. Examples of environment change include running on new computer hardware, changes in data, and interacting with different software.[12] A single defect may result in multiple failure symptoms.
Software testing may involve a requirements gap – an omission from the design for a requirement.[5]: 426 Requirement gaps can often be non-functional requirements such as testability, scalability, maintainability, performance, and security.
A fundamental limitation of software testing is that testing under all combinations of inputs and preconditions (initial state) is not feasible, even with a simple product.[3]: 17–18[13] Defects that manifest in unusual conditions are difficult to find in testing. Also, non-functional dimensions of quality (how it is supposed to be versus what it is supposed to do) – usability, scalability, performance, compatibility, and reliability – can be subjective; something that constitutes sufficient value to one person may not be sufficient to another.
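Because testing all combinations of inputs is infeasible, combinatorial test design tries to cover the most defect-prone interactions with far fewer cases. The sketch below is illustrative only: a hypothetical pairwise_suite helper greedily builds a suite in which every pair of parameter values appears in at least one test, which is typically much smaller than the full cartesian product.

    from itertools import combinations, product

    def pairwise_suite(parameters):
        """Greedily build a small test suite covering every pair of parameter values.

        `parameters` maps a parameter name to the list of values it can take.
        """
        names = list(parameters)
        # Every (param_a, value_a, param_b, value_b) pair that must appear in some test.
        uncovered = {
            (a, va, b, vb)
            for a, b in combinations(names, 2)
            for va in parameters[a]
            for vb in parameters[b]
        }
        suite = []
        while uncovered:
            best_row, best_gain = None, -1
            # Pick the full combination that covers the most still-uncovered pairs.
            for combo in product(*(parameters[n] for n in names)):
                row = dict(zip(names, combo))
                gain = sum(1 for a, va, b, vb in uncovered if row[a] == va and row[b] == vb)
                if gain > best_gain:
                    best_row, best_gain = row, gain
            suite.append(best_row)
            uncovered = {p for p in uncovered
                         if not (best_row[p[0]] == p[1] and best_row[p[2]] == p[3])}
        return suite

    if __name__ == "__main__":
        params = {
            "browser": ["Firefox", "Chrome", "Safari"],
            "os": ["Windows", "macOS", "Linux"],
            "locale": ["en", "de"],
        }
        tests = pairwise_suite(params)
        print(f"{len(tests)} pairwise tests instead of 18 exhaustive combinations")
        for t in tests:
            print(t)

For the three example parameters, exhaustive testing needs 18 combinations, while the greedy pairwise suite typically needs around nine to eleven.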
Although testing for every possible input is not feasible, testing can usecombinatoricsto maximize coverage while minimizing tests.[14] Testing can be categorized many ways.[15] Software testing can be categorized into levels based on how much of thesoftware systemis the focus of a test.[18][19][20][21] There are many approaches to software testing.Reviews,walkthroughs, orinspectionsare referred to as static testing, whereas executing programmed code with a given set oftest casesis referred to asdynamic testing.[23][24] Static testing is often implicit, like proofreading, plus when programming tools/text editors check source code structure or compilers (pre-compilers) check syntax and data flow asstatic program analysis. Dynamic testing takes place when the program itself is run. Dynamic testing may begin before the program is 100% complete in order to test particular sections of code and are applied to discretefunctionsor modules.[23][24]Typical techniques for these are either usingstubs/drivers or execution from adebuggerenvironment.[24] Static testing involvesverification, whereas dynamic testing also involvesvalidation.[24] Passive testing means verifying the system's behavior without any interaction with the software product. Contrary to active testing, testers do not provide any test data but look at system logs and traces. They mine for patterns and specific behavior in order to make some kind of decisions.[25]This is related to offlineruntime verificationandlog analysis. The type of testing strategy to be performed depends on whether the tests to be applied to the IUT should be decided before the testing plan starts to be executed (preset testing[28]) or whether each input to be applied to the IUT can be dynamically dependent on the outputs obtained during the application of the previous tests (adaptive testing[29][30]). Software testing can often be divided into white-box and black-box. These two approaches are used to describe the point of view that the tester takes when designing test cases. A hybrid approach called grey-box that includes aspects of both boxes may also be applied to software testing methodology.[31][32] White-box testing (also known as clear box testing, glass box testing, transparent box testing, and structural testing) verifies the internal structures or workings of a program, as opposed to the functionality exposed to the end-user. In white-box testing, an internal perspective of the system (the source code), as well as programming skills are used to design test cases. The tester chooses inputs to exercise paths through the code and determines the appropriate outputs.[31][32]This is analogous to testing nodes in a circuit, e.g.,in-circuit testing(ICT). While white-box testing can be applied at theunit,integration, andsystemlevels of the software testing process, it is usually done at the unit level.[33]It can test paths within a unit, paths between units during integration, and between subsystems during a system–level test. Though this method of test design can uncover many errors or problems, it might not detect unimplemented parts of the specification or missing requirements. Techniques used in white-box testing include:[32][34] Code coverage tools can evaluate the completeness of a test suite that was created with any method, including black-box testing. 
This allows the software team to examine parts of a system that are rarely tested and ensures that the most important function points have been tested.[35] Code coverage as a software metric can be reported as a percentage for measures such as function, statement, and branch coverage.[31][35][36] 100% statement coverage ensures that all code paths or branches (in terms of control flow) are executed at least once. This is helpful in ensuring correct functionality, but not sufficient since the same code may process different inputs correctly or incorrectly.[37]
Black-box testing (also known as functional testing) describes designing test cases without knowledge of the implementation, without reading the source code. The testers are only aware of what the software is supposed to do, not how it does it.[38] Black-box testing methods include: equivalence partitioning, boundary value analysis, all-pairs testing, state transition tables, decision table testing, fuzz testing, model-based testing, use case testing, exploratory testing, and specification-based testing.[31][32][36]
Specification-based testing aims to test the functionality of software according to the applicable requirements.[39] This level of testing usually requires thorough test cases to be provided to the tester, who then can simply verify that for a given input, the output value (or behavior) either "is" or "is not" the same as the expected value specified in the test case. Test cases are built around specifications and requirements, i.e., what the application is supposed to do. It uses external descriptions of the software, including specifications, requirements, and designs, to derive test cases. These tests can be functional or non-functional, though usually functional. Specification-based testing may be necessary to assure correct functionality, but it is insufficient to guard against complex or high-risk situations.[40] Black-box testing can be used at any level of testing, although usually not at the unit level.[33]
Component interface testing is a variation of black-box testing, with the focus on the data values beyond just the related actions of a subsystem component.[41] The practice of component interface testing can be used to check the handling of data passed between various units, or subsystem components, beyond full integration testing between those units.[42][43] The data being passed can be considered as "message packets" and the range or data types can be checked for data generated from one unit and tested for validity before being passed into another unit. One option for interface testing is to keep a separate log file of data items being passed, often with a timestamp logged to allow analysis of thousands of cases of data passed between units for days or weeks. Tests can include checking the handling of some extreme data values while other interface variables are passed as normal values.[42] Unusual data values in an interface can help explain unexpected performance in the next unit.
The aim of visual testing is to provide developers with the ability to examine what was happening at the point of software failure by presenting the data in such a way that the developer can easily find the information he or she requires, and the information is expressed clearly.[44][45] At the core of visual testing is the idea that showing someone a problem (or a test failure), rather than just describing it, greatly increases clarity and understanding.
Visual testing, therefore, requires the recording of the entire test process – capturing everything that occurs on the test system in video format. Output videos are supplemented by real-time tester input via picture-in-a-picture webcam and audio commentary from microphones. Visual testing provides a number of advantages. The quality of communication is increased drastically because testers can show the problem (and the events leading up to it) to the developer as opposed to just describing it, and the need to replicate test failures will cease to exist in many cases. The developer will have all the evidence he or she requires of a test failure and can instead focus on the cause of the fault and how it should be fixed. Ad hoc testingandexploratory testingare important methodologies for checking software integrity because they require less preparation time to implement, while the important bugs can be found quickly.[46]In ad hoc testing, where testing takes place in an improvised impromptu way, the ability of the tester(s) to base testing off documented methods and then improvise variations of those tests can result in a more rigorous examination of defect fixes.[46]However, unless strict documentation of the procedures is maintained, one of the limits of ad hoc testing is lack of repeatability.[46] Grey-box testing (American spelling: gray-box testing) involves using knowledge of internal data structures and algorithms for purposes of designing tests while executing those tests at the user, or black-box level. The tester will often have access to both "the source code and the executable binary."[47]Grey-box testing may also includereverse engineering(using dynamic code analysis) to determine, for instance, boundary values or error messages.[47]Manipulating input data and formatting output do not qualify as grey-box, as the input and output are clearly outside of the "black box" that we are calling the system under test. This distinction is particularly important when conductingintegration testingbetween two modules of code written by two different developers, where only the interfaces are exposed for the test. By knowing the underlying concepts of how the software works, the tester makes better-informed testing choices while testing the software from outside. Typically, a grey-box tester will be permitted to set up an isolated testing environment with activities, such as seeding adatabase. The tester can observe the state of the product being tested after performing certain actions such as executingSQLstatements against the database and then executing queries to ensure that the expected changes have been reflected. Grey-box testing implements intelligent test scenarios based on limited information. This will particularly apply to data type handling,exception handling, and so on.[48] With the concept of grey-box testing, this "arbitrary distinction" between black- and white-box testing has faded somewhat.[33] Most software systems have installation procedures that are needed before they can be used for their main purpose. Testing these procedures to achieve an installed software system that may be used is known asinstallation testing.[49]: 139These procedures may involve full or partial upgrades, and install/uninstall processes. 
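The grey-box scenario described above (seeding a database, driving the system from the outside, and then querying the database directly to confirm the expected state change) can be sketched as follows. The schema and the deactivate_user function are hypothetical; the example uses Python's standard sqlite3 module and unittest framework purely for illustration.

    import sqlite3
    import unittest

    def deactivate_user(conn, user_id):
        """Hypothetical function under test: part of the system's public behaviour."""
        conn.execute("UPDATE users SET active = 0 WHERE id = ?", (user_id,))
        conn.commit()

    class GreyBoxUserTest(unittest.TestCase):
        def setUp(self):
            # Grey-box setup: seed an isolated in-memory test database with known rows.
            self.conn = sqlite3.connect(":memory:")
            self.conn.execute(
                "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, active INTEGER)")
            self.conn.executemany(
                "INSERT INTO users (id, name, active) VALUES (?, ?, ?)",
                [(1, "alice", 1), (2, "bob", 1)],
            )
            self.conn.commit()

        def test_deactivation_is_reflected_in_the_database(self):
            # Drive the system through its public interface ...
            deactivate_user(self.conn, 1)
            # ... then use knowledge of the internal schema to verify the state change.
            active_flags = dict(self.conn.execute("SELECT id, active FROM users"))
            self.assertEqual(active_flags[1], 0)   # targeted row was deactivated
            self.assertEqual(active_flags[2], 1)   # unrelated row untouched

        def tearDown(self):
            self.conn.close()

    if __name__ == "__main__":
        unittest.main()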
A common cause of software failure (real or perceived) is a lack of itscompatibilitywith otherapplication software,operating systems(or operating systemversions, old or new), or target environments that differ greatly from the original (such as aterminalorGUIapplication intended to be run on thedesktopnow being required to become aWeb application, which must render in aWeb browser). For example, in the case of a lack ofbackward compatibility, this can occur because the programmers develop and test software only on the latest version of the target environment, which not all users may be running. This results in the unintended consequence that the latest work may not function on earlier versions of the target environment, or on older hardware that earlier versions of the target environment were capable of using. Sometimes such issues can be fixed by proactivelyabstractingoperating system functionality into a separate programmoduleorlibrary. Sanity testingdetermines whether it is reasonable to proceed with further testing. Smoke testingconsists of minimal attempts to operate the software, designed to determine whether there are any basic problems that will prevent it from working at all. Such tests can be used asbuild verification test. Regression testing focuses on finding defects after a major code change has occurred. Specifically, it seeks to uncoversoftware regressions, as degraded or lost features, including old bugs that have come back. Such regressions occur whenever software functionality that was previously working correctly, stops working as intended. Typically, regressions occur as anunintended consequenceof program changes, when the newly developed part of the software collides with the previously existing code. Regression testing is typically the largest test effort in commercial software development,[50]due to checking numerous details in prior software features, and even new software can be developed while using some old test cases to test parts of the new design to ensure prior functionality is still supported. Common methods of regression testing include re-running previous sets of test cases and checking whether previously fixed faults have re-emerged. The depth of testing depends on the phase in the release process and theriskof the added features. They can either be complete, for changes added late in the release or deemed to be risky, or be very shallow, consisting of positive tests on each feature, if the changes are early in the release or deemed to be of low risk. Acceptance testing is system-level testing to ensure the software meets customer expectations.[51][52][53][54] Acceptance testing may be performed as part of the hand-off process between any two phases of development.[citation needed] Tests are frequently grouped into these levels by where they are performed in the software development process, or by the level of specificity of the test.[54] Sometimes, UAT is performed by the customer, in their environment and on their own hardware. OAT is used to conduct operational readiness (pre-release) of a product, service or system as part of aquality management system. OAT is a common type of non-functional software testing, used mainly insoftware developmentandsoftware maintenanceprojects. This type of testing focuses on the operational readiness of the system to be supported, or to become part of the production environment. 
Hence, it is also known as operational readiness testing (ORT) oroperations readiness and assurance(OR&A) testing.Functional testingwithin OAT is limited to those tests that are required to verify thenon-functionalaspects of the system. In addition, the software testing should ensure that the portability of the system, as well as working as expected, does not also damage or partially corrupt its operating environment or cause other processes within that environment to become inoperative.[55] Contractual acceptance testing is performed based on the contract's acceptance criteria defined during the agreement of the contract, while regulatory acceptance testing is performed based on the relevant regulations to the software product. Both of these two tests can be performed by users or independent testers. Regulation acceptance testing sometimes involves the regulatory agencies auditing the test results.[54] Alpha testing is simulated or actual operational testing by potential users/customers or an independent test team at the developers' site. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing before the software goes to beta testing.[56] Beta testing comes after alpha testing and can be considered a form of externaluser acceptance testing. Versions of the software, known asbeta versions, are released to a limited audience outside of the programming team known as beta testers. The software is released to groups of people so that further testing can ensure the product has few faults orbugs. Beta versions can be made available to the open public to increase thefeedbackfield to a maximal number of future users and to deliver value earlier, for an extended or even indefinite period of time (perpetual beta).[57] Functional testingrefers to activities that verify a specific action or function of the code. These are usually found in the code requirements documentation, although some development methodologies work from use cases or user stories. Functional tests tend to answer the question of "can the user do this" or "does this particular feature work." Non-functional testingrefers to aspects of the software that may not be related to a specific function or user action, such asscalabilityor otherperformance, behavior under certainconstraints, orsecurity. Testing will determine the breaking point, the point at which extremes of scalability or performance leads to unstable execution. Non-functional requirements tend to be those that reflect the quality of the product, particularly in the context of the suitability perspective of its users. Continuous testing is the process of executingautomated testsas part of the software delivery pipeline to obtain immediate feedback on the business risks associated with a software release candidate.[58][59]Continuous testing includes the validation of bothfunctional requirementsandnon-functional requirements; the scope of testing extends from validating bottom-up requirements or user stories to assessing the system requirements associated with overarching business goals.[60][61] Destructive testing attempts to cause the software or a sub-system to fail. It verifies that the software functions properly even when it receives invalid or unexpected inputs, thereby establishing therobustnessof input validation and error-management routines.[citation needed]Software fault injection, in the form offuzzing, is an example of failure testing. 
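As a minimal illustration of fuzzing, the sketch below feeds randomly generated text to a hypothetical parse_config function and records any input that raises an unexpected exception; rejecting malformed input with a ValueError is treated as acceptable behaviour. Real fuzzers are typically coverage-guided and far more sophisticated, but the underlying idea is the same.

    import random
    import string

    def parse_config(text):
        """Hypothetical function under test: parses 'key=value' lines into a dict."""
        config = {}
        for line in text.splitlines():
            if not line.strip():
                continue
            key, value = line.split("=", 1)   # raises ValueError on malformed input
            config[key.strip()] = value.strip()
        return config

    def random_input(max_len=80):
        """Generate a random, possibly malformed, configuration snippet."""
        return "".join(random.choice(string.printable)
                       for _ in range(random.randint(0, max_len)))

    def fuzz(iterations=10_000, seed=0):
        """Feed random inputs to the parser; collect inputs that crash it unexpectedly."""
        random.seed(seed)
        failures = []
        for _ in range(iterations):
            text = random_input()
            try:
                parse_config(text)
            except ValueError:
                pass                      # rejecting malformed input is acceptable
            except Exception as exc:      # anything else indicates a robustness defect
                failures.append((text, exc))
        return failures

    if __name__ == "__main__":
        crashes = fuzz()
        print(f"{len(crashes)} unexpected crashes found")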
Various commercial non-functional testing tools are linked from the software fault injection page; there are also numerous open-source and free software tools available that perform destructive testing.
Performance testing is generally executed to determine how a system or sub-system performs in terms of responsiveness and stability under a particular workload. It can also serve to investigate, measure, validate or verify other quality attributes of the system, such as scalability, reliability and resource usage. Load testing is primarily concerned with testing that the system can continue to operate under a specific load, whether that be large quantities of data or a large number of users. This is generally referred to as software scalability. Load testing, when performed as a sustained non-functional activity, is often referred to as endurance testing. Volume testing is a way to test software functions even when certain components (for example a file or database) increase radically in size. Stress testing is a way to test reliability under unexpected or rare workloads. Stability testing (often referred to as load or endurance testing) checks to see if the software can continuously function well in or above an acceptable period. There is little agreement on what the specific goals of performance testing are. The terms load testing, performance testing, scalability testing, and volume testing are often used interchangeably.
Real-time software systems have strict timing constraints. To test whether timing constraints are met, real-time testing is used.
Usability testing checks whether the user interface is easy to use and understand. It is concerned mainly with the use of the application. This is not a kind of testing that can be automated; actual human users are needed, monitored by skilled UI designers. Usability testing can use structured models to check how well an interface works. The Stanton, Theofanos, and Joshi (2015) model looks at user experience, and the Al-Sharafat and Qadoumi (2016) model is for expert evaluation, helping to assess usability in digital applications.[62]
Accessibility testing is done to ensure that the software is accessible to persons with disabilities.
Security testing is essential for software that processes confidential data to prevent system intrusion by hackers. The International Organization for Standardization (ISO) defines this as a "type of testing conducted to evaluate the degree to which a test item, and associated data and information, are protected so that unauthorised persons or systems cannot use, read or modify them, and authorized persons or systems are not denied access to them."[63]
Testing for internationalization and localization validates that the software can be used with different languages and geographic regions. The process of pseudolocalization is used to test the ability of an application to be translated to another language, and make it easier to identify when the localization process may introduce new bugs into the product. Globalization testing verifies that the software is adapted for a new culture, such as different currencies or time zones.[64] Actual translation to human languages must be tested, too. Possible localization and globalization failures include untranslated or clipped text and incorrect handling of dates, currencies, or character sets.
Development testing is a software development process that involves the synchronized application of a broad spectrum of defect prevention and detection strategies in order to reduce software development risks, time, and costs.
It is performed by the software developer or engineer during the construction phase of the software development lifecycle. Development testing aims to eliminate construction errors before code is promoted to other testing; this strategy is intended to increase the quality of the resulting software as well as the efficiency of the overall development process. Depending on the organization's expectations for software development, development testing might includestatic code analysis, data flow analysis, metrics analysis, peer code reviews, unit testing, code coverage analysis,traceability, and other software testing practices. A/B testing is a method of running a controlled experiment to determine if a proposed change is more effective than the current approach. Customers are routed to either a current version (control) of a feature, or to a modified version (treatment) and data is collected to determine which version is better at achieving the desired outcome. Concurrent or concurrency testing assesses the behaviour and performance of software and systems that useconcurrent computing, generally under normal usage conditions. Typical problems this type of testing will expose are deadlocks, race conditions and problems with shared memory/resource handling. In software testing, conformance testing verifies that a product performs according to its specified standards. Compilers, for instance, are extensively tested to determine whether they meet the recognized standard for that language. Creating a display expected output, whether asdata comparisonof text or screenshots of the UI,[3]: 195is sometimes called snapshot testing or Golden Master Testing unlike many other forms of testing, this cannot detect failures automatically and instead requires that a human evaluate the output for inconsistencies. Property testing is a testing technique where, instead of asserting that specific inputs produce specific expected outputs, the practitioner randomly generates many inputs, runs the program on all of them, and asserts the truth of some "property" that should be true for every pair of input and output. For example, every output from a serialization function should be accepted by the corresponding deserialization function, and every output from a sort function should be a monotonically increasing list containing exactly the same elements as its input. Property testing libraries allow the user to control the strategy by which random inputs are constructed, to ensure coverage of degenerate cases, or inputs featuring specific patterns that are needed to fully exercise aspects of the implementation under test. Property testing is also sometimes known as "generative testing" or "QuickCheck testing" since it was introduced and popularized by the Haskell libraryQuickCheck.[65] Metamorphic testing (MT) is a property-based software testing technique, which can be an effective approach for addressing the test oracle problem and test case generation problem. The test oracle problem is the difficulty of determining the expected outcomes of selected test cases or to determine whether the actual outputs agree with the expected outcomes. VCR testing, also known as "playback testing" or "record/replay" testing, is a testing technique for increasing the reliability and speed of regression tests that involve a component that is slow or unreliable to communicate with, often a third-party API outside of the tester's control. 
It involves making a recording ("cassette") of the system's interactions with the external component, and then replaying the recorded interactions as a substitute for communicating with the external system on subsequent runs of the test. The technique was popularized in web development by the Ruby libraryvcr. In an organization, testers may be in a separate team from the rest of thesoftware developmentteam or they may be integrated into one team. Software testing can also be performed by non-dedicated software testers. In the 1980s, the termsoftware testerstarted to be used to denote a separate profession. Notable software testing roles and titles include:[66]test manager,test lead,test analyst,test designer,tester,automation developer, andtest administrator.[67] Organizations that develop software, perform testing differently, but there are common patterns.[2] Inwaterfall development, testing is generally performed after the code is completed, but before the product is shipped to the customer.[68]This practice often results in the testing phase being used as aprojectbuffer to compensate for project delays, thereby compromising the time devoted to testing.[10]: 145–146 Some contend that the waterfall process allows for testing to start when the development project starts and to be a continuous process until the project finishes.[69] Agile software developmentcommonly involves testing while the code is being written and organizing teams with both programmers and testers and with team members performing both programming and testing. One agile practice,test-driven software development(TDD), is a way ofunit testingsuch that unit-level testing is performed while writing the product code.[70]Test code is updated as new features are added and failure conditions are discovered (bugs fixed). Commonly, the unit test code is maintained with the project code, integrated in the build process, and run on each build and as part of regression testing. Goals of thiscontinuous integrationis to support development and reduce defects.[71][70] Even in organizations that separate teams by programming and testing functions, many often have the programmers performunit testing.[72] The sample below is common for waterfall development. The same activities are commonly found in other development models, but might be described differently. Software testing is used in association withverification and validation:[73] The terms verification and validation are commonly used interchangeably in the industry; it is also common to see these two terms defined with contradictory definitions. According to theIEEE StandardGlossary of Software Engineering Terminology:[11]: 80–81 And, according to the ISO 9000 standard: The contradiction is caused by the use of the concepts of requirements and specified requirements but with different meanings. In the case of IEEE standards, the specified requirements, mentioned in the definition of validation, are the set of problems, needs and wants of the stakeholders that the software must solve and satisfy. Such requirements are documented in a Software Requirements Specification (SRS). And, the products mentioned in the definition of verification, are the output artifacts of every phase of the software development process. These products are, in fact, specifications such as Architectural Design Specification, Detailed Design Specification, etc. The SRS is also a specification, but it cannot be verified (at least not in the sense used here, more on this subject below). 
But, for the ISO 9000, the specified requirements are the set of specifications, as just mentioned above, that must be verified. A specification, as previously explained, is the product of a software development process phase that receives another specification as input. A specification is verified successfully when it correctly implements its input specification. All the specifications can be verified except the SRS because it is the first one (it can be validated, though). Examples: The Design Specification must implement the SRS; and, the Construction phase artifacts must implement the Design Specification. So, when these words are defined in common terms, the apparent contradiction disappears. Both the SRS and the software must be validated. The SRS can be validated statically by consulting with the stakeholders. Nevertheless, running some partial implementation of the software or a prototype of any kind (dynamic testing) and obtaining positive feedback from them, can further increase the certainty that the SRS is correctly formulated. On the other hand, the software, as a final and running product (not its artifacts and documents, including the source code) must be validated dynamically with the stakeholders by executing the software and having them to try it. Some might argue that, for SRS, the input is the words of stakeholders and, therefore, SRS validation is the same as SRS verification. Thinking this way is not advisable as it only causes more confusion. It is better to think of verification as a process involving a formal and technical input document. In some organizations, software testing is part of asoftware quality assurance(SQA) process.[3]: 347In SQA, software process specialists and auditors are concerned with the software development process rather than just the artifacts such as documentation, code and systems. They examine and change thesoftware engineeringprocess itself to reduce the number of faults that end up in the delivered software: the so-called defect rate. What constitutes an acceptable defect rate depends on the nature of the software; a flight simulator video game would have much higher defect tolerance than software for an actual airplane. Although there are close links with SQA, testing departments often exist independently, and there may be no SQA function in some companies.[citation needed] Software testing is an activity to investigate software under test in order to provide quality-related information to stakeholders. By contrast, QA (quality assurance) is the implementation of policies and procedures intended to prevent defects from reaching customers. Quality measures include such topics ascorrectness, completeness,securityandISO/IEC 9126requirements such as capability,reliability,efficiency,portability,maintainability, compatibility, andusability. There are a number of frequently usedsoftware metrics, or measures, which are used to assist in determining the state of the software or the adequacy of the testing. A software testing process can produce severalartifacts. The actual artifacts produced are a factor of the software development model used, stakeholder and organisational needs. Atest planis a document detailing the approach that will be taken for intended test activities. 
The plan may include aspects such as objectives, scope, processes and procedures, personnel requirements, and contingency plans.[51] The test plan could come in the form of a single plan that includes all test types (like an acceptance or system test plan) and planning considerations, or it may be issued as a master test plan that provides an overview of more than one detailed test plan (a plan of a plan).[51] A test plan can be, in some cases, part of a wider "test strategy" which documents overall testing approaches, which may itself be a master test plan or even a separate artifact.
A test case normally consists of a unique identifier, requirement references from a design specification, preconditions, events, a series of steps (also known as actions) to follow, input, output, expected result, and the actual result. Clinically defined, a test case is an input and an expected result.[75] This can be as terse as "for condition x your derived result is y", although normally test cases describe in more detail the input scenario and what results might be expected. It can occasionally be a series of steps (but often steps are contained in a separate test procedure that can be exercised against multiple test cases, as a matter of economy) but with one expected result or expected outcome. The optional fields are a test case ID, test step or order of execution number, related requirement(s), depth, test category, author, and check boxes for whether the test is automatable and has been automated. Larger test cases may also contain prerequisite states or steps, and descriptions. A test case should also contain a place for the actual result. These steps can be stored in a word processor document, spreadsheet, database, or other common repositories. In a database system, it may also be possible to see past test results, who generated the results, and what system configuration was used to generate those results. These past results would usually be stored in a separate table.
A test script is a procedure or programming code that replicates user actions. Initially, the term was derived from the product of work created by automated regression test tools. A test case serves as a baseline from which to create test scripts using a tool or a program.
In most cases, multiple sets of values or data are used to test the same functionality of a particular feature. All the test values and changeable environmental components are collected in separate files and stored as test data. There are techniques to generate such test data, and it is also useful to provide this data to the client along with the product or project.
The software, tools, samples of data input and output, and configurations are all referred to collectively as a test harness. A test run is a collection of test cases or test suites that the user is executing, comparing the expected with the actual results. Once complete, a report of all executed tests may be generated.
Several certification programs exist to support the professional aspirations of software testers and quality assurance specialists, although a few practitioners argue that the testing field is not ready for certification; this is one of several ongoing software testing controversies.
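As an illustration of the artifacts described above, the sketch below models a test case as a small record with an identifier, requirement references, preconditions, steps, input, and expected result, and a test run as the comparison of expected with actual results. The field names and the normalise_email function under test are hypothetical, not a standard.

    from dataclasses import dataclass
    from typing import Any, Callable, List, Optional

    @dataclass
    class TestCase:
        """One row of a test suite: identifier, traceability, inputs, and expected result."""
        case_id: str
        requirement_refs: List[str]
        preconditions: List[str]
        steps: List[str]
        test_input: Any
        expected: Any
        actual: Any = None                # filled in by the test run
        passed: Optional[bool] = None     # filled in by the test run

    def run_suite(cases: List[TestCase], function_under_test: Callable[[Any], Any]) -> None:
        """A tiny 'test run': execute each case and compare expected with actual results."""
        for case in cases:
            case.actual = function_under_test(case.test_input)
            case.passed = (case.actual == case.expected)
            status = "PASS" if case.passed else "FAIL"
            print(f"{case.case_id}: {status} (expected {case.expected!r}, got {case.actual!r})")

    def normalise_email(address: str) -> str:
        """Hypothetical function under test: normalise a user-supplied email address."""
        return address.strip().lower()

    if __name__ == "__main__":
        suite = [
            TestCase("TC-001", ["REQ-12"], ["user exists"], ["submit form"],
                     "  Alice@Example.COM ", "alice@example.com"),
            TestCase("TC-002", ["REQ-12"], ["user exists"], ["submit form"],
                     "bob@example.com", "bob@example.com"),
        ]
        run_suite(suite, normalise_email)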
It is commonly believed that the earlier a defect is found, the cheaper it is to fix it. Widely cited figures attributed to Boehm give the cost of fixing a defect depending on the stage at which it was found.[85] For example, if a problem in the requirements is found only post-release, it is claimed to cost 10–100 times more to fix than if it had already been found by the requirements review. With the advent of modern continuous deployment practices and cloud-based services, the cost of re-deployment and maintenance may lessen over time.
The data from which these figures are extrapolated is scant. Laurent Bossavit says in his analysis: The "smaller projects" curve turns out to be from only two teams of first-year students, a sample size so small that extrapolating to "smaller projects in general" is totally indefensible. The GTE study does not explain its data, other than to say it came from two projects, one large and one small. The paper cited for the Bell Labs "Safeguard" project specifically disclaims having collected the fine-grained data that Boehm's data points suggest. The IBM study (Fagan's paper) contains claims that seem to contradict Boehm's graph and no numerical results that clearly correspond to his data points. Boehm doesn't even cite a paper for the TRW data, except when writing for "Making Software" in 2010, and there he cited the original 1976 article. There exists a large study conducted at TRW at the right time for Boehm to cite it, but that paper doesn't contain the sort of data that would support Boehm's claims.[86]
https://en.wikipedia.org/wiki/Software_testing#Fuzz_testing
Open-source software(OSS) iscomputer softwarethat is released under alicensein which thecopyrightholder grants users the rights to use, study, change, anddistribute the softwareand itssource codeto anyone and for any purpose.[1][2]Open-source software may be developed in a collaborative, public manner. Open-source software is a prominent example ofopen collaboration, meaning any capable user is able toparticipate onlinein development, making the number of possible contributors indefinite. The ability to examine the code facilitates public trust in the software.[3] Open-source software developmentcan bring in diverse perspectives beyond those of a single company. A 2024 estimate of the value of open-source software to firms is $8.8 trillion, as firms would need to spend 3.5 times the amount they currently do without the use of open source software.[4] Open-source code can be used forstudyingand allows capable end users to adapt software to their personal needs in a similar wayuser scriptsand customstyle sheetsallow for web sites, and eventually publish the modification as aforkfor users with similar preferences, and directly submit possible improvements aspull requests. TheOpen Source Initiative's (OSI) definition is recognized by several governments internationally[5]as the standard orde factodefinition. OSI usesThe Open Source Definitionto determine whether it considers a software license open source. The definition was based on theDebian Free Software Guidelines, written and adapted primarily byBruce Perens.[6][7][8]Perens did not base his writing on the "four freedoms" from theFree Software Foundation(FSF), which were only widely available later.[9] Under Perens' definition,open sourceis a broad software license that makes source code available to the general public with relaxed or non-existent restrictions on the use and modification of the code. It is an explicit "feature" of open source that it puts very few restrictions on the use or distribution by any organization or user, in order to enable the rapid evolution of the software.[10] According to Feller et al. (2005), the terms "free software" and "open-source software" should be applied to any "software products distributed under terms that allow users" to use, modify, and redistribute the software "in any manner they see fit, without requiring that they pay the author(s) of the software a royalty or fee for engaging in the listed activities."[11] Despite initially accepting it,[12]Richard Stallmanof the FSF now flatly opposes the term "Open Source" being applied to what they refer to as "free software". Although he agrees that the two terms describe "almost the same category of software", Stallman considers equating the terms incorrect and misleading.[13]Stallman also opposes the professed pragmatism of theOpen Source Initiative, as he fears that the free software ideals of freedom and community are threatened by compromising on the FSF's idealistic standards for software freedom.[14]The FSF considers free software to be asubsetof open-source software, and Richard Stallman explained thatDRMsoftware, for example, can be developed as open source, despite that it does not give its users freedom (it restricts them), and thus does not qualify as free software.[13] In his 1997 essayThe Cathedral and the Bazaar, open-source influential contributorEric S. 
Raymondsuggests a model for developing OSS known as thebazaarmodel.[15]Raymond likens the development of software by traditional methodologies to building a cathedral, with careful isolated work by individuals or small groups.[15]He suggests that all software should be developed using the bazaar style, with differing agendas and approaches.[15] In the traditional model of development, which he called thecathedralmodel, development takes place in a centralized way.[15]Roles are clearly defined.[15]Roles include people dedicated to designing (the architects), people responsible for managing the project, and people responsible for implementation.[15]Traditional software engineering follows the cathedral model.[15] The bazaar model, however, is different.[15]In this model, roles are not clearly defined.[15]Some proposed characteristics of software developed using the bazaar model should exhibit the following patterns:[16] Users should be treated as co-developers:The users are treated like co-developers and so they should have access to the source code of the software.[16]Furthermore, users are encouraged to submit additions to the software, code fixes for the software,bug reports, documentation, etc. Having more co-developers increases the rate at which the software evolves.[16]Linus's lawstates that given enough eyeballs all bugs are shallow.[16]This means that if many users view the source code, they will eventually find all bugs and suggest how to fix them.[16]Some users have advanced programming skills, and furthermore, each user's machine provides an additional testing environment.[16]This new testing environment offers the ability to find and fix a new bug.[16] Early releases:The first version of the software should be released as early as possible so as to increase one's chances of finding co-developers early.[16] Frequent integration:Code changes should be integrated (merged into a shared code base) as often as possible so as to avoid the overhead of fixing a large number of bugs at the end of the project life cycle.[16][17]Some open-source projects have nightly builds whereintegration is done automatically.[16] Several versions:There should be at least two versions of the software.[16]There should be a buggier version with more features and a more stable version with fewer features.[16]The buggy version (also called the development version) is for users who want the immediate use of the latest features and are willing to accept the risk of using code that is not yet thoroughly tested.[16]The users can then act as co-developers, reporting bugs and providing bug fixes.[16][18] High modularization:The general structure of the software should be modular allowing for parallel development on independent components.[16] Dynamic decision-making structure:There is a need for a decision-making structure, whether formal or informal, that makes strategic decisions depending on changing user requirements and other factors.[16]Compare withextreme programming.[16] The process of Open source development begins with arequirements elicitationwhere developers consider if they should add new features or if a bug needs to be fixed in their project.[18]This is established by communicating with the OSS community through avenues such asbug reporting and trackingormailing listsand project pages.[18]Next, OSS developers select or are assigned to a task and identify a solution. 
Because there are often many different possible routes for solutions in OSS, the best solution must be chosen with careful consideration and sometimes even peer feedback.[18] The developer then begins to develop and commit the code.[18] The code is then tested and reviewed by peers.[18] Developers can edit and evolve their code through feedback from continuous integration.[18] Once the leadership and community are satisfied with the whole project, it can be partially released and user instruction can be documented.[18] If the project is ready to be released, it is frozen, with only serious bug fixes or security repairs occurring.[18] Finally, the project is fully released and only changed through minor bug fixes.[18]
Open-source implementation of a standard can increase adoption of that standard.[19] This creates developer loyalty as developers feel empowered and have a sense of ownership of the end product.[20] Moreover, lower costs of marketing and logistical services are needed for OSS.[21] OSS can be a tool to promote a company's image, including its commercial products.[22] The OSS development approach has helped produce reliable, high-quality software quickly and inexpensively.[21]
Open-source development offers the potential to quicken innovation and create social value.[23] In France, for instance, a policy that incentivized government to favor free open-source software increased OSS contributions to nearly 600,000 per year, generating social value by increasing the quantity and quality of open-source software.[23] This policy also led to an estimated increase of up to 18% in the number of tech startups and a 14% increase in the number of people employed in the IT sector.[23]
OSS can be highly reliable when it has thousands of independent programmers testing and fixing bugs in the software.[16] Open source is not dependent on the company or author that originally created it.[24] Even if the company fails, the code continues to exist and be developed by its users.[24] OSS is flexible because modular systems allow programmers to build custom interfaces or add new abilities to it, and it is innovative since open-source programs are the product of collaboration among a large number of different programmers.[16] The mix of divergent perspectives, corporate objectives, and personal goals speeds up innovation.[25] Moreover, free software can be developed in accordance with purely technical requirements.[26] It does not require thinking about commercial pressure that often degrades the quality of the software.[26] Commercial pressures make traditional software developers pay more attention to customers' requirements than to security requirements, since such features are somewhat invisible to the customer.[26]
In open-source software development, tools are used to support the development of the product and the development process itself.[18] Version control systems, whether centralized (CVCS) or distributed (DVCS), are examples of tools, often themselves open source, that help manage the source code files and the changes to those files for a software project in order to foster collaboration.[27] A CVCS has a single central repository, while in a DVCS every user has a local repository.[27] Concurrent Versions System (CVS) and later Subversion (SVN) are examples of CVCS, while Git is a widely used DVCS.[27] The repositories are hosted and published on source-code-hosting facilities such as GitHub.[27] Open-source projects use utilities such as issue trackers to organize open-source software
development. Commonly used bug trackers include Bugzilla and Redmine.[18] Tools such as mailing lists and IRC provide means of coordination and discussion of bugs among developers.[18] Project web pages, wiki pages, roadmap lists and newsgroups allow for the distribution of project information that focuses on end users.[18]

OSS participants can fall into several basic roles, beginning with the leadership at the center of the project, who have control over its execution.[28] Next are the core contributors, who have a great deal of experience and authority in the project and may guide the other contributors.[28] Non-core contributors have less experience and authority, but regularly contribute and are vital to the project's development.[28] New contributors are the least experienced, but with mentorship and guidance they can become regular contributors.[28]

Some possible ways of contributing to open-source software include such roles as programming, user interface design and testing, web design, bug triage, accessibility design and testing, UX design, code testing, and security review and testing.[28] However, there are several ways of contributing to OSS projects even without coding skills.[28] For example, some less technical ways of participating are documentation writing and editing, translation, project management, event organization and coordination, marketing, release management, community management, and public relations and outreach.[28]

Funding is another way that individuals and organizations choose to contribute to open source projects. Groups like Open Collective provide a means for individuals to contribute monthly to supporting their favorite projects.[29] Organizations such as the Sovereign Tech Fund contribute millions to supporting the tools the German Government uses.[30] The National Science Foundation established a Pathways to Enable Open-Source Ecosystems (POSE) program to support open source innovation.[31]

The adoption of open-source software by industry is increasing over time.[32] OSS is popular in several industries such as telecommunications, aerospace, healthcare, and media & entertainment due to the benefits it provides.[33] Adoption of OSS is more likely in larger organizations and is dependent on the company's IT usage, operating efficiencies, and the productivity of employees.[32] Industries are likely to use OSS for back-office functionality, sales support, research and development, software features, quick deployment, portability across platforms and avoidance of commercial license management.[32] Additionally, lower costs for hardware and ownership are also important benefits.[32]

Organizations that contribute to the development and expansion of free and open-source software movements exist all over the world.[28] These organizations are dedicated to goals such as teaching and spreading technology.[28] As listed by a former vice president of the Open Source Initiative, some American organizations include the Free Software Foundation, Software Freedom Conservancy, the Open Source Initiative and Software in the Public Interest.[28] Within Europe some notable organizations are Free Software Foundation Europe, open-source projects EU (OSP) and OpenForum Europe (OFE).[28] One Australian organization is Linux Australia, while Asia has Open source Asia and FOSSAsia.[28] Free and open source software for Africa (FOSSFA) and OpenAfrica are African organizations, and Central and South Asia has such organizations as FLISOL and GRUP de usuarios de software libre Peru.[28] Outside of these, many more organizations dedicated to
the advancement of open-source software exist.[28] FOSS products are generally licensed under two types of licenses:permissive licensingandcopyleft licensing.[34]Both of these types of licenses are different thanproprietary licensingin that they can allow more users access to the software and allow for the creation ofderivative worksas specified by the terms of the specific license, as each license has its own rules.[34]Permissive licenses allow recipients of the software to implement the author'scopyright rightswithout having to use the same license for distribution.[34]Examples of this type of license include theBSD,MIT, andApache licenses.[34]Copyleft licenses are different in that they require recipients to use the same license for at least some parts of the distribution of their works.[34]Strong copyleft licenses require all derivative works to use the same license while weak copyleft licenses require the use of the same license only under certain conditions.[34]Examples of this type of license include theGNU family of licenses, and theMPLandEPLlicenses.[34]The similarities between these two categories of licensing include that they provide a broad grant of copyright rights, require that recipients preserve copyright notices, and that a copy of the license is provided to recipients with the code.[34] One important legal precedent for open-source software was created in 2008, when the Jacobson v Katzer case enforced terms of theArtistic license, including attribution and identification of modifications.[34]The ruling of this case cemented enforcement under copyright law when the conditions of the license were not followed.[34]Because of the similarity of theArtistic licenseto other open-source software licenses, the ruling created a precedent that applied widely.[34] Examples offree-software license/open-source licensesincludeApache licenses,BSD licenses,GNU General Public Licenses,GNU Lesser General Public License,MIT License,Eclipse Public LicenseandMozilla Public License.[34] Several gray areas exist within software regulation that have great impact on open-source software, such as if software is a good or service, what can be considered a modification, governance through contract vs license, ownership and right of use.[34]While there have been developments on these issues, they often lead to even more questions.[34]The existence of these uncertainties in regulation has a negative impact on industries involved in technologies as a whole.[34] Within the legal history of software as a whole, there was much debate on whether to protect it asintellectual propertyunderpatent law,copyright lawor establishing a unique regulation.[34]Ultimately,copyright lawbecame the standard with computer programs being considered a form of literary work, with some tweaks of unique regulation.[34] Software is generally consideredsource codeandobject code, with both being protectable, though there is legal variety in this definition.[34]Some jurisdictions attempt to expand or reduce this conceptualization for their own purposes.[34]For example, The European Court of Justice defines a computer program as not including the functionality of a program, theprograming language, or the format of data files.[34]By limiting protections of the different aspects of software, the law favors an open-source approach to software use.[34]The US especially has an open approach to software, with mostopen-source licensesoriginating there.[34]However, this has increased the focus onpatent rightswithin these licenses, which 
has seen backlash from the OSS community, who prefer other forms ofIPprotection.[34] Another issue includestechnological protection measures(TPM) anddigital rights management(DRM) techniques which were internationally legally recognized and protected in the1996 World Intellectual Property Organization (WIPO) Treaty.[34]Open source software proponents disliked these technologies as they constrained end-users potentially beyond copyright law.[34]Europe responded to such complaints by putting TPM under legal controls, representing a victory for OSS supporters.[34] In open-source communities, instead of owning the software produced, the producer owns the development of the evolving software.[35]In this way, the future of the software is open, making ownership orintellectual propertydifficult within OSS.[35]Licensingand branding can prevent others from stealing it, preserving its status as apublic good.[35]Open source software can be considered a public good as it is available to everyone and does not decrease in value for others when downloaded by one person.[35]Open source software is unique in that it becomes more valuable as it is used and contributed to, instead of diminishing the resource. This is explained by concepts such as investment in reputation andnetwork effects.[35] The economic model of open-source software can be explained as developers contribute work to projects, creating public benefits.[35]Developers choose projects based on the perceived benefits or costs, such as improved reputation or value of the project.[35]The motivations of developers can come from many different places and reasons, but the important takeaway is that money is not the only or even most importantincentivization.[35] Because economic theory mainly focuses on the consumption of scarce resources, the OSS dynamic can be hard to understand. 
In OSS, producers become consumers by reaping the rewards of contributing to a project.[35]For example, a developer becomes well regarded by their peers for a successful contribution to an OSS project.[35]The social benefits and interactions of OSS are difficult to account for in economic models as well.[35]Furthermore, the innovation of technology creates constantly changing value discussions and outlooks, making economic model unable to predict social behavior.[35] Although OSS is theoretically challenging in economic models, it is explainable as a sustainable social activity that requires resources.[35]These resources include time, money, technology and contributions.[35]Many developers have used technology funded by organizations such as universities and governments, though these same organizations benefit from the work done by OSS.[35]As OSS grows, hybrid systems containing OSS and proprietary systems are becoming more common.[35] Throughout the mid 2000s, more and more tech companies have begun to use OSS.[24]For example,Dell'smove of selling computers withGNU/Linuxalready installed.[24]Microsoftitself has launched aLinux-based operating systemdespite previous animosity with the OSS movement.[24]Despite these developments, these companies tend to only use OSS for certain purposes, leading to worries that OSS is being taken advantage of by corporations and not given anything in return.[24] While many governments are interested in implementing and promoting open-source software due to the many benefits provided, a huge issue to be considered iscybersecurity.[36]While accidental vulnerabilities are possible, so are attacks by outside agents.[36]Because of these fears, governmental interest in contributing to the governance of software has become more prominent.[36]However, these are the broad strokes of the issue, with each country having their own specific politicized interactions with open-source software and their goals for its implementation.[36]For example, the United States has focused onnational securityin regard to open-source software implementation due to the perceived threat of the increase of open-source software activity in countries like China and Russia, with the Department of Defense considering multiple criteria for using OSS.[36]These criteria include: if it comes from and is maintained by trusted sources, whether it will continue to be maintained, if there are dependencies on sub-components in the software, component security and integrity, and foreign governmental influence.[36] Another issue for governments in regard to open source is their investments in technologies such asoperating systems,semiconductors,cloud, andartificial intelligence.[36]These technologies all have implications for global cooperation, again opening up security issues and political consequences.[36]Many countries have to balance technological innovation with technological dependence in these partnerships.[36]For example, after China's open-source dependent companyHuaweiwas prevented from usingGoogle's Android systemin 2019, they began to create their own alternative operating system:Harmony OS.[36] Germany recently established aSovereign Tech Fund, to help support the governance and maintenance of the software that they use. 
In the early days ofcomputing, such as the 1950s and into the 1960s, programmers and developers shared software to learn from each other and evolve the field of computing.[37]For example,Unixincluded theoperating systemsource codefor users.[37]Eventually, thecommercialization of softwarein the years 1970–1980 began to prevent this practice.[37]However, academics still often developed software collaboratively.[37] In response, the open-source movement was born out of the work of skilled programmer enthusiasts, widely referred to ashackersorhacker culture.[38]One of these enthusiasts,Richard Stallman, was a driving force behind thefree software movement, which would later allow for theopen-source movement.[17]In 1984, he resigned from MIT to create a free operating system,GNU, after the programmer culture in his lab was stifled byproprietary softwarepreventing source code from being shared and improved upon.[17]GNU was UNIX compatible, meaning that the programmer enthusiasts would still be familiar with how it worked.[17]However, it quickly became apparent that there was some confusion with the label Stallman had chosen offree software, which he described as free as in free speech, not free beer, referring to the meaning of free as freedom rather than price.[17]He later expanded this concept of freedom to the four essential freedoms.[17]Through GNU, open-source norms of incorporating others' source code, community bug fixes and suggestions of code for new features appeared.[17]In 1985, Stallman founded theFree Software Foundation(FSF) to promote changes in software and to help write GNU.[17]In order to prevent his work from being used in proprietary software, Stallman created the concept ofcopyleft, which allowed the use of his work by anyone, but under specific terms.[17]To do this, he created theGNU General Public License(GNU GPL) in 1989, which was updated in 1991.[17]In 1991, GNU was combined with theLinux kernelwritten byLinus Torvalds, as a kernel was missing in GNU.[39]The operating system is now usually referred to asLinux.[17]Throughout this whole period, there were many other free software projects and licenses around at the time, all with different ideas of what the concept of free software was and should be, as well as the morality of proprietary software, such asBerkeley Software Distribution,TeX, and theX Window System.[40] As free software developed, theFree Software Foundationbegan to look how to bring free software ideas and perceived benefits to thecommercial software industry.[40]It was concluded that FSF'ssocial activismwas not appealing to companies and they needed a way to rebrand thefree software movementto emphasize the business potential of sharing and collaborating on software source code.[40]The term open source was suggested byChristine Petersonin 1998 at a meeting of supporters of free software.[17]Many in the group felt the name free software was confusing to newcomers and holding back industry interest and they readily accepted the new designation of open source, creating theOpen Source Initiative(OSI) and the OSI definition of what open source software is.[17]TheOpen Source Initiative's (OSI) definition is now recognized by several governments internationally as the standard orde factodefinition.[39]The definition was based on theDebian Free Software Guidelines, written and adapted primarily by Bruce Perens.[41]The OSI definition differed from thefree software definitionin that it allows the inclusion of proprietary software and allows more liberties in its 
licensing.[17] Some, such as Stallman, agree more with the original concept of free software as a result, because it takes a strong moral stance against proprietary software, though there is much overlap between the two movements in terms of the operation of the software.[17]

While the Open Source Initiative sought to encourage the use of the new term and evangelize the principles it adhered to, commercial software vendors found themselves increasingly threatened by the concept of freely distributed software and universal access to an application's source code, with an executive of Microsoft calling open source an intellectual property destroyer in 2001.[42] However, while free and open-source software (FOSS) has historically played a role outside of mainstream private software development, companies as large as Microsoft have begun to develop official open source presences on the Internet.[42] IBM, Oracle, and State Farm are just a few of the companies with a serious public stake in today's competitive open source market, marking a significant shift in the corporate philosophy concerning the development of FOSS.[42]

The open source software community, and the free software community by extension, has become successful, if somewhat confused about what it stands for.[24] Android and Ubuntu, for example, are milestones in open source software's rise to prominence from the sidelines of technological innovation, where it sat in the early 2000s.[24] However, some in the community consider them failures in their representation of OSS due to issues such as the downplaying of the OSS center of Android by Google and its partners, the use of an Apache license that allowed forking and resulted in a loss of opportunities for collaboration within Android, the prioritization of convenience over freedom in Ubuntu, and features within Ubuntu that track users for marketing purposes.[24]

The use of OSS has become more common in business, with 78% of companies reporting that they run all or part of their operations on FOSS.[24] The popularity of OSS has risen to the point that Microsoft, once a detractor of OSS, has included its use in its own systems.[24] However, this success has raised concerns that will determine the future of OSS, as the community must answer questions such as what OSS is, what it should be, and what should be done to protect it, if it even needs protecting.[24] All in all, while the free and open source revolution has slowed to a perceived equilibrium in the marketplace, that does not mean it is over, as many theoretical discussions must still take place to determine its future.[24]

Open source software differs from proprietary software in that it is publicly available, its license requires no fees, and modification and distribution are allowed under the license's terms.[43] All of this works to prevent a monopoly on any OSS product, which is a goal of proprietary software.[43] Proprietary software limits its customers' choices to committing to that software, upgrading it, or switching to other software, forcing customers' software preferences to be shaped by the monetary cost of each option.[43] The ideal scenario for the proprietary software vendor is a lock-in, where the customer does not or cannot switch software due to these costs and continues to buy products from that vendor.[43]

Within proprietary software, bug fixes can only be provided by the vendor, moving platforms requires another purchase, and the existence of the product relies on the vendor, who can
discontinue it at any point.[38]Additionally, proprietary software does not provide its source code and cannot be altered by users.[17]For businesses, this can pose a security risk and source of frustration, as they cannot specialize the product to their needs, and there may be hidden threats or information leaks within the software that they cannot access or change.[17] Under OSI's definition, open source is a broad software license that makes source code available to the general public with relaxed or non-existent restrictions on the use and modification of the code.[44]It is an explicit feature of open source that it puts very few restrictions on the use or distribution by any organization or user, in order to enable the rapid evolution of the software.[44] Richard Stallman, leader of the Free software movement and member of the free software foundation opposes the term open source being applied to what they refer to as free software.[13]Although he agrees that the two terms describe almost the same category of software, Stallman considers equating the terms incorrect and misleading.[13]He believes that the main difference is that by choosing one term over the other lets others know about what one's goals are: development (open source) or a social stance (free software).[45]Nevertheless, there is significant overlap between open source software and free software.[13]Stallman also opposes the professed pragmatism of theOpen Source Initiative, as he fears that the free software ideals of freedom and community are threatened by compromising on the FSF's idealistic standards for software freedom.[45]The FSF considers free software to be asubsetof open-source software, and Richard Stallman explained thatDRMsoftware, for example, can be developed as open source, despite how it restricts its users, and thus does not qualify as free software.[13] The FSF said that the term open source fosters an ambiguity of a different kind such that it confuses the mere availability of the source with the freedom to use, modify, and redistribute it.[13]On the other hand, the term free software was criticized for the ambiguity of the word free, which was seen as discouraging for business adoption, and for the historical ambiguous usage of the term.[45] Developers have used thealternative termsFree and Open Source Software(FOSS), orFree/Libre and Open Source Software(FLOSS), consequently, to describe open-source software that is alsofree software.[28] Software can be distributed withsource code, which is a code that is readable.[46]Software issource availablewhen this source code is available to be seen.[46]However to be source available orFOSS, the source code does not need to be accessible to all, just the users of that software.[46]While all FOSS software is source available because this is a requirement made by theOpen Source Definition, not all source available software is FOSS.[46]For example, if the software does not meet other aspects of the Open Source Definition such as permitted modification or redistribution, even if the source code is available, the software is not FOSS.[46] A recent trend within software companies is open sourcing, or transitioning their previousproprietary softwareinto open source software through releasing it under anopen-source license.[47][48]Examples of companies who have done this are Google, Microsoft and Apple.[47]Additionally, open sourcing can refer to programming open source software or installing open source software.[48]Open sourcing can be beneficial in multiple 
ways, such as attracting more external contributors who bring new perspectives and problem-solving capabilities.[47] The downsides of open sourcing include the work that has to be done to maintain the new community, such as making the base code easily understandable, setting up communication channels for new developers and creating documentation so that new developers can easily join.[47] However, a review of several open sourced projects found that although a newly open sourced project attracts many newcomers, a large share of them are likely to leave the project soon, and their forks are likely to have little impact.[47]

Other concepts that share some similarities with open source are shareware, public domain software, freeware, and software viewers/readers that are freely available but do not provide source code.[17] However, these differ from open source software in access to source code, licensing, copyright and fees.[17]

Despite being able to collaborate internationally, open source software contributors were found to be mostly located in large clusters such as Silicon Valley that largely collaborate within themselves.[49] Possible reasons for this phenomenon may be that the OSS contributor demographic largely works in software, meaning that the geographic distribution of OSS closely follows that of the software industry, and that collaborations could be encouraged through work and social networks.[49] Code acceptance can be impacted by status within these social network clusters, creating unfair predispositions in code acceptance based on location.[50] Barriers to international collaboration also include linguistic or cultural differences.[51] Furthermore, each country has been shown to have a higher acceptance rate for code from contributors within that country, except India, indicating a bias for culturally similar collaborators.[51]

In 2021, the countries with the highest open source software contributions included the United States, China, Germany, India, and the UK, in that order.[49] The countries with the highest number of OSS developers per capita, from a study in 2021, include, in order, Iceland, Switzerland, Norway, Sweden, and Finland, while in 2008 the countries with the largest estimated numbers of contributors on SourceForge were the United States, Germany, the United Kingdom, Canada and France.[49][51] Though there have been several studies done on the distribution and contributions of OSS developers, this is still an open field that can be measured in several different ways.[51] For instance, information and communication technology participation, population, wealth and proportion of access to the internet have been shown to be correlated with OSS contributions.[51]

Although gender diversity has been found to enhance team productivity, women still face biases while contributing to open source software projects when their gender is identifiable.[52] In 2002, only 1.5% of international open-source software developers were women, while women made up 28% of tech industry roles, demonstrating their low representation in the software field.[53] Despite OSS contributions having no prerequisites, this gender bias may continue to exist due to the common belief of contributors that gender should not matter and that the quality of code should be the only consideration for code acceptance, preventing the community from addressing the systemic disparities in female representation.[38] However, a more recent figure for female OSS participation internationally, calculated across 2005 to 2021, is 9.8%, with most being recent contributors, indicating that
female participation may be growing.[54]

There are many motivations for contributing to the OSS community.[28] For one, it is an opportunity to learn and practice many skills: not only coding and other technology-related abilities, but also fundamental skills such as communication and collaboration, and practical skills needed to excel in technology-related fields, such as issue tracking or version control.[28] Instead of learning through a classroom or a job, learning through contributing to OSS allows participants to learn at their own pace and follow what interests them.[28] When contributing to OSS, the contributor can learn current industry best practices, technology and trends, and even have the opportunity to contribute to the next big innovation as OSS grows increasingly popular within the tech field.[28] Contributing to OSS without payment means there is no threat of being fired, though reputations can take a hit.[28] On the other hand, a huge motivation to contribute to OSS is the reputation gained as one grows one's public portfolio.[28]

Even though programming was originally seen as a female profession, there remains a large gender gap in computing.[55] Social identity tends to be a large concern, as women in the tech industry face insecurity about attracting unwanted male attention and harassment or about being seen as unfeminine in their technology knowledge, which has a large impact on confidence.[38] Some male tech participants make clear that they believe it is impossible for women to fit in within the culture, furthering women's insecurity about their place in the tech industry.[52] Additionally, even in a voluntary contribution environment like open source software, women tend to end up doing the less technical aspects of projects, such as manual testing or documentation, despite women and men showing the same productivity in OSS contributions.[52] Explicit biases include longer feedback times, greater scrutiny of code and lower acceptance rates for code.[52] Specifically in the open-source software community, women report that sexually offensive language is common and that their identity as women is given more attention than their identity as OSS contributors.[38] Bias is hard to address due to the belief that gender should not matter, with most contributors feeling that special treatment for women would be unfair and that success should depend on skill, which prevents changes that would make the community more inclusive.[38]

Open source software projects are built and maintained by a network of programmers, who are often volunteers, and are widely used in free as well as commercial products.[56]

Unix: Unix is an operating system created by AT&T that acted as a precursor to open source software, in that the free and open-source software revolution began when developers started trying to create operating systems without Unix code.[24] Unix was created in the 1960s, before the commercialization of software and before the concept of open source software was necessary; it is therefore not considered a true open source software project.[24] It started as a research project before being commercialized in the mid-1980s.[24] Before its commercialization, it represented many of the ideals held by the free and open source software revolution, including the decentralized collaboration of global users, rolling releases and a community culture of distaste towards proprietary software.[24]

BSD: Berkeley Software Distribution (BSD) is an operating system that began in 1978 as a variant of Unix that mixed Unix code with code from Berkeley labs to increase functionality.[24] As BSD was
focused on increasing functionality, it publicly shared its greatest innovations with the main Unix operating system.[24] This is an example of the free public code sharing that is a central characteristic of FOSS today.[24] As Unix became commercialized in the 1980s, developers and members of the community who did not support proprietary software began to focus on BSD, turning it into an operating system that did not include any of Unix's code.[24] The final version of BSD was released in 1995.[24]

GNU: GNU is a free operating system created by Richard Stallman in 1984, its name standing for "GNU's Not Unix".[24] The idea was to create an alternative to the Unix operating system that would be available for anyone to use and would allow programmers to share code freely among themselves.[24] However, the goal of GNU was not only to replace Unix, but to make a superior version with more technological capabilities.[24] It was released before the philosophical beliefs of the free and open source software revolution were truly defined.[24] Because of its creation by prominent FOSS programmer Richard Stallman, GNU was heavily involved in FOSS activism, with one of the greatest achievements of GNU being the creation of the GNU General Public License, or GPL, which allowed developers to release software that could be legally shared and modified.[24]

Linux: Linux is an operating system kernel that was introduced in 1991 by Linus Torvalds.[24] Linux was inspired by the goal of making a better version of Minix, an operating system that at the time was not freely licensed.[24] It was radically different from what other hackers were producing at the time, because it was entirely free of cost and developed in a decentralized way.[24] Later, Linux was put under the GPL license, allowing people to make money with Linux and bringing Linux into the FOSS community.[24]

Apache: Apache began in 1995 as a collaboration among a group of developers who released their own web server out of frustration with the NCSA HTTPd code base.[24] The name Apache was chosen because of the several patches they applied to this code base.[24] Within a year of its release, it became the worldwide leading web server.[24] Soon, Apache came out with its own license, creating discord in the greater FOSS community, though ultimately proving successful.[24] The Apache license allowed members to directly access the source code, a marked difference from the approaches of GNU and Linux.[24]

While the term open source originally applied only to the source code of software, it is now being applied to many other areas, such as open-source ecology, a movement to decentralize technologies so that any human can use them.[13][57] However, it is often misapplied to other areas that have different and competing principles, which overlap only partially.[38]

The same principles that underlie open-source software can be found in many other ventures, such as open source, open content, and open collaboration.[58][3] This "culture" or ideology takes the view that the principles apply more generally to facilitate concurrent input of different agendas, approaches, and priorities, in contrast with more centralized models of development such as those typically used in commercial companies.[15]

More than 90 percent of companies use open-source software as a component of their proprietary software.[59] The decision to use open-source software, or even engage with open-source projects to improve existing open-source software, is typically a pragmatic business decision.[60][61] When proprietary software is in direct competition with an open-source alternative,
research has found conflicting results on the effect of the competition on the proprietary product's price and quality.[62] For decades, some companies have made servicing of an open-source software product for enterprise users their business model. These companies control an open-source software product, and instead of charging for licensing or use, charge for improvements, integration, and other servicing.[63]Software as a service(SaaS) products based on open-source components are increasingly common.[64] Open-source software is preferred for scientific applications, because it increases transparency and aids in the validation and acceptance of scientific results.[65]
https://en.wikipedia.org/wiki/Open-source_software
Insoftware engineering,code coverage, also calledtest coverage, is a percentage measure of the degree to which thesource codeof aprogramis executed when a particulartest suiteis run. A program with high code coverage has more of its source code executed during testing, which suggests it has a lower chance of containing undetectedsoftware bugscompared to a program with low code coverage.[1][2]Many different metrics can be used to calculate test coverage. Some of the most basic are the percentage of programsubroutinesand the percentage of programstatementscalled during execution of the test suite. Code coverage was among the first methods invented for systematicsoftware testing. The first published reference was by Miller and Maloney inCommunications of the ACM, in 1963.[3] To measure what percentage of code has been executed by atest suite, one or morecoverage criteriaare used. These are usually defined as rules or requirements, which a test suite must satisfy.[4] There are a number of coverage criteria, but the main ones are:[5] For example, consider the followingCfunction: Assume this function is a part of some bigger program and this program was run with some test suite. In programming languages that do not performshort-circuit evaluation, condition coverage does not necessarily imply branch coverage. For example, consider the followingPascalcode fragment: Condition coverage can be satisfied by two tests: However, this set of tests does not satisfy branch coverage since neither case will meet theifcondition. Fault injectionmay be necessary to ensure that all conditions and branches ofexception-handlingcode have adequate coverage during testing. A combination of function coverage and branch coverage is sometimes also calleddecision coverage. This criterion requires that everypoint of entry and exitin the program has been invoked at least once, and every decision in the program has taken on all possible outcomes at least once. In this context, the decision is aBoolean expressioncomprising conditions and zero or more Boolean operators. This definition is not the same as branch coverage,[6]however, the termdecision coverageis sometimes used as a synonym for it.[7] Condition/decision coveragerequires that both decision and condition coverage be satisfied. However, forsafety-criticalapplications (such asavionics software) it is often required thatmodified condition/decision coverage (MC/DC)be satisfied. This criterion extends condition/decision criteria with requirements that each condition should affect the decision outcome independently. For example, consider the following code: The condition/decision criteria will be satisfied by the following set of tests: However, the above tests set will not satisfy modified condition/decision coverage, since in the first test, the value of 'b' and in the second test the value of 'c' would not influence the output. So, the following test set is needed to satisfy MC/DC: This criterion requires that all combinations of conditions inside each decision are tested. For example, the code fragment from the previous section will require eight tests: Parameter value coverage(PVC) requires that in a method taking parameters, all the common values for such parameters be considered. The idea is that all common possible values for a parameter are tested.[8]For example, common values for a string are: 1)null, 2) empty, 3) whitespace (space, tabs, newline), 4) valid string, 5) invalid string, 6) single-byte string, 7) double-byte string. 
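To make parameter value coverage concrete, here is a minimal C sketch (not part of the original article; the routine validate_name, its behaviour, and the chosen inputs are all invented for illustration) that feeds one representative value from each of the common string categories listed above into a function under test; a multi-byte UTF-8 string stands in for the double-byte case.

#include <stdio.h>

/* Hypothetical routine under test: returns 1 if the name is usable, 0 otherwise. */
static int validate_name(const char *name) {
    if (name == NULL || name[0] == '\0')
        return 0;                              /* reject null and empty input */
    for (size_t i = 0; name[i] != '\0'; i++)
        if (name[i] != ' ' && name[i] != '\t' && name[i] != '\n')
            return 1;                          /* at least one non-blank character */
    return 0;                                  /* whitespace only */
}

int main(void) {
    /* One representative input per common parameter-value category. */
    const char *cases[] = {
        NULL,                 /* 1) null                          */
        "",                   /* 2) empty                         */
        " \t\n",              /* 3) whitespace only               */
        "Ada Lovelace",       /* 4) valid string                  */
        "\x01\x02",           /* 5) invalid (control characters)  */
        "abc",                /* 6) single-byte characters        */
        "\xC3\xA9t\xC3\xA9"   /* 7) multi-byte (UTF-8) characters */
    };
    for (size_t k = 0; k < sizeof cases / sizeof cases[0]; k++)
        printf("case %zu -> %d\n", k + 1, validate_name(cases[k]));
    return 0;
}
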
It may also be appropriate to use very long strings. Failure to test each possible parameter value may result in a bug. Testing only one of these could result in 100% code coverage as each line is covered, but as only one of seven options are tested, there is only 14.2% PVC. There are further coverage criteria, which are used less often: Safety-criticalordependableapplications are often required to demonstrate 100% of some form of test coverage. For example, theECSS-E-ST-40C standard demands 100% statement and decision coverage for two out of four different criticality levels; for the other ones, target coverage values are up to negotiation between supplier and customer.[11]However, setting specific target values - and, in particular, 100% - has been criticized by practitioners for various reasons (cf.[12])Martin Fowlerwrites: "I would be suspicious of anything like 100% - it would smell of someone writing tests to make the coverage numbers happy, but not thinking about what they are doing".[13] Some of the coverage criteria above are connected. For instance, path coverage implies decision, statement and entry/exit coverage. Decision coverage implies statement coverage, because every statement is part of a branch. Full path coverage, of the type described above, is usually impractical or impossible. Any module with a succession ofn{\displaystyle n}decisions in it can have up to2n{\displaystyle 2^{n}}paths within it; loop constructs can result in an infinite number of paths. Many paths may also be infeasible, in that there is no input to the program under test that can cause that particular path to be executed. However, a general-purpose algorithm for identifying infeasible paths has been proven to be impossible (such an algorithm could be used to solve thehalting problem).[14]Basis path testingis for instance a method of achieving complete branch coverage without achieving complete path coverage.[15] Methods for practical path coverage testing instead attempt to identify classes of code paths that differ only in the number of loop executions, and to achieve "basis path" coverage the tester must cover all the path classes.[citation needed][clarification needed] The target software is built with special options or libraries and run under a controlled environment, to map every executed function to the function points in the source code. This allows testing parts of the target software that are rarely or never accessed under normal conditions, and helps reassure that the most important conditions (function points) have been tested. The resulting output is then analyzed to see what areas of code have not been exercised and the tests are updated to include these areas as necessary. Combined with other test coverage methods, the aim is to develop a rigorous, yet manageable, set of regression tests. In implementing test coverage policies within a software development environment, one must consider the following: Software authors can look at test coverage results to devise additional tests and input or configuration sets to increase the coverage over vital functions. Two common forms of test coverage are statement (or line) coverage and branch (or edge) coverage. Line coverage reports on the execution footprint of testing in terms of which lines of code were executed to complete the test. Edge coverage reports which branches or code decision points were executed to complete the test. They both report a coverage metric, measured as a percentage. 
The meaning of this depends on what form(s) of coverage have been used, as 67% branch coverage is more comprehensive than 67% statement coverage. Generally, test coverage tools incur computation and logging in addition to the actual program thereby slowing down the application, so typically this analysis is not done in production. As one might expect, there are classes of software that cannot be feasibly subjected to these coverage tests, though a degree of coverage mapping can be approximated through analysis rather than direct testing. There are also some sorts of defects which are affected by such tools. In particular, somerace conditionsor similarreal timesensitive operations can be masked when run under test environments; though conversely, some of these defects may become easier to find as a result of the additional overhead of the testing code. Most professional software developers use C1 and C2 coverage. C1 stands for statement coverage and C2 for branch or condition coverage. With a combination of C1 and C2, it is possible to cover most statements in a code base. Statement coverage would also cover function coverage with entry and exit, loop, path, state flow, control flow and data flow coverage. With these methods, it is possible to achieve nearly 100% code coverage in most software projects.[17] Test coverage is one consideration in the safety certification of avionics equipment. The guidelines by which avionics gear is certified by theFederal Aviation Administration(FAA) is documented inDO-178B[16]andDO-178C.[18] Test coverage is also a requirement in part 6 of the automotive safety standardISO 26262Road Vehicles - Functional Safety.[19]
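As a hedged illustration of why identical percentages can mean different things under different criteria, consider the following invented C function and tests: a single call can execute every statement while leaving one branch of the decision untested.

#include <assert.h>

/* Invented example: 100% statement coverage does not imply 100% branch coverage. */
static int abs_diff(int a, int b) {
    int d = a - b;
    if (d < 0)
        d = -d;          /* executed only when the condition is true */
    return d;
}

int main(void) {
    /* This single call already executes every statement of abs_diff
       (100% statement coverage) but takes only the true branch of the
       if (50% branch coverage). */
    assert(abs_diff(1, 3) == 2);

    /* Adding this call exercises the false branch as well, bringing
       branch (decision) coverage to 100%. */
    assert(abs_diff(3, 1) == 2);
    return 0;
}
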
https://en.wikipedia.org/wiki/Code_coverage
Inmathematics, thegreatest common divisor(GCD), also known asgreatest common factor (GCF), of two or moreintegers, which are not all zero, is the largest positive integer thatdivideseach of the integers. For two integersx,y, the greatest common divisor ofxandyis denotedgcd(x,y){\displaystyle \gcd(x,y)}. For example, the GCD of 8 and 12 is 4, that is,gcd(8, 12) = 4.[1][2] In the name "greatest common divisor", the adjective "greatest" may be replaced by "highest", and the word "divisor" may be replaced by "factor", so that other names includehighest common factor, etc.[3][4][5][6]Historically, other names for the same concept have includedgreatest common measure.[7] This notion can be extended to polynomials (seePolynomial greatest common divisor) and othercommutative rings(see§ In commutative ringsbelow). Thegreatest common divisor(GCD) ofintegersaandb, at least one of which is nonzero, is the greatestpositive integerdsuch thatdis adivisorof bothaandb; that is, there are integerseandfsuch thata=deandb=df, anddis the largest such integer. The GCD ofaandbis generally denotedgcd(a,b).[8] When one ofaandbis zero, the GCD is the absolute value of the nonzero integer:gcd(a, 0) = gcd(0,a) = |a|. This case is important as the terminating step of theEuclidean algorithm. The above definition is unsuitable for defininggcd(0, 0), since there is no greatest integernsuch that0 ×n= 0. However, zero is its own greatest divisor ifgreatestis understood in the context of the divisibility relation, sogcd(0, 0)is commonly defined as0. This preserves the usual identities for GCD, and in particularBézout's identity, namely thatgcd(a,b)generatesthe sameidealas{a,b}.[9][10][11]This convention is followed by manycomputer algebra systems.[12]Nonetheless, some authors leavegcd(0, 0)undefined.[13] The GCD ofaandbis theirgreatestpositive common divisor in thepreorderrelation ofdivisibility. This means that the common divisors ofaandbare exactly the divisors of their GCD. This is commonly proved by using eitherEuclid's lemma, thefundamental theorem of arithmetic, or theEuclidean algorithm. This is the meaning of "greatest" that is used for the generalizations of the concept of GCD. The number 54 can be expressed as a product of two integers in several different ways: Thus the complete list ofdivisorsof 54 is 1, 2, 3, 6, 9, 18, 27, 54. Similarly, the divisors of 24 are 1, 2, 3, 4, 6, 8, 12, 24. The numbers that these two lists havein commonare thecommon divisorsof 54 and 24, that is, Of these, the greatest is 6, so it is thegreatest common divisor: Computing all divisors of the two numbers in this way is usually not efficient, especially for large numbers that have many divisors. Much more efficient methods are described in§ Calculation. Two numbers are called relatively prime, orcoprime, if their greatest common divisor equals1.[14]For example, 9 and 28 are coprime. For example, a 24-by-60 rectangular area can be divided into a grid of: 1-by-1 squares, 2-by-2 squares, 3-by-3 squares, 4-by-4 squares, 6-by-6 squares or 12-by-12 squares. Therefore, 12 is the greatest common divisor of 24 and 60. A 24-by-60 rectangular area can thus be divided into a grid of 12-by-12 squares, with two squares along one edge (24/12 = 2) and five squares along the other (60/12 = 5). 
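The hand computation above can be mirrored by a short brute-force sketch in C (an illustration only, assuming positive inputs): it tries every candidate divisor of 54 and 24 and keeps the largest one that divides both. As the article notes, this approach is far less efficient than the methods described under § Calculation.

#include <stdio.h>

/* Naive GCD for positive integers: test every candidate divisor,
   exactly as in the 54/24 example above. */
static unsigned naive_gcd(unsigned a, unsigned b) {
    unsigned best = 1;
    for (unsigned d = 1; d <= a && d <= b; d++)
        if (a % d == 0 && b % d == 0)
            best = d;            /* d is a common divisor; remember the largest */
    return best;
}

int main(void) {
    printf("gcd(54, 24) = %u\n", naive_gcd(54, 24));   /* prints 6 */
    return 0;
}
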
The greatest common divisor is useful for reducing fractions to the lowest terms.[15] For example, gcd(42, 56) = 14, therefore 42/56 = (3 · 14)/(4 · 14) = 3/4. The least common multiple of two integers that are not both zero can be computed from their greatest common divisor, by using the relation gcd(a, b) · lcm(a, b) = |a · b|.

Greatest common divisors can be computed by determining the prime factorizations of the two numbers and comparing factors. For example, to compute gcd(48, 180), we find the prime factorizations 48 = 2^4 · 3^1 and 180 = 2^2 · 3^2 · 5^1; the GCD is then 2^min(4,2) · 3^min(1,2) · 5^min(0,1) = 2^2 · 3^1 · 5^0 = 12. The corresponding LCM is then 2^max(4,2) · 3^max(1,2) · 5^max(0,1) = 2^4 · 3^2 · 5^1 = 720. In practice, this method is only feasible for small numbers, as computing prime factorizations takes too long.

The method introduced by Euclid for computing greatest common divisors is based on the fact that, given two positive integers a and b such that a > b, the common divisors of a and b are the same as the common divisors of a – b and b. So, Euclid's method for computing the greatest common divisor of two positive integers consists of replacing the larger number with the difference of the numbers, and repeating this until the two numbers are equal: that is their greatest common divisor. For example, to compute gcd(48, 18), one proceeds as follows: (48, 18) → (30, 18) → (12, 18) → (12, 6) → (6, 6). So gcd(48, 18) = 6. This method can be very slow if one number is much larger than the other. So, the variant that follows is generally preferred.

A more efficient method is the Euclidean algorithm, a variant in which the difference of the two numbers a and b is replaced by the remainder of the Euclidean division (also called division with remainder) of a by b. Denoting this remainder as a mod b, the algorithm replaces (a, b) with (b, a mod b) repeatedly until the pair is (d, 0), where d is the greatest common divisor. For example, to compute gcd(48, 18), the computation is as follows: (48, 18) → (18, 12) → (12, 6) → (6, 0). This again gives gcd(48, 18) = 6.

The binary GCD algorithm is a variant of Euclid's algorithm that is specially adapted to the binary representation of the numbers, which is used in most computers. The binary GCD algorithm differs from Euclid's algorithm essentially by dividing by two every even number that is encountered during the computation. Its efficiency results from the fact that, in binary representation, testing parity consists of testing the right-most digit, and dividing by two consists of removing the right-most digit. The method is as follows, starting with a and b, the two positive integers whose GCD is sought:

1. If a and b are both even, divide both by two and add one to d (which starts at 0); repeat until at least one of them is odd.
2. If a is even, divide it by two; likewise, if b is even, divide it by two; repeat until both are odd.
3. If a ≠ b, replace the larger of the two by their difference (which is even) and return to step 2.
4. When a = b, the greatest common divisor is 2^d · a.

Step 1 determines 2^d as the highest power of 2 that divides a and b, and thus the highest power of 2 dividing their greatest common divisor. None of the steps changes the set of the odd common divisors of a and b. This shows that when the algorithm stops, the result is correct. The algorithm stops eventually, since each step divides at least one of the operands by at least 2. Moreover, the number of divisions by 2, and thus the number of subtractions, is at most the total number of digits. Example: (a, b, d) = (48, 18, 0) → (24, 9, 1) → (12, 9, 1) → (6, 9, 1) → (3, 9, 1) → (3, 3, 1); the original GCD is thus the product 6 of 2^d = 2^1 and a = b = 3.

The binary GCD algorithm is particularly easy to implement and particularly efficient on binary computers. Its computational complexity is O((log a + log b)^2). The square in this complexity comes from the fact that division by 2 and subtraction take a time that is proportional to the number of bits of the input. The computational complexity is usually given in terms of the length n of the input.
Here, this length isn= loga+ logb, and the complexity is thus Lehmer's algorithm is based on the observation that the initial quotients produced by Euclid's algorithm can be determined based on only the first few digits; this is useful for numbers that are larger than acomputer word. In essence, one extracts initial digits, typically forming one or two computer words, and runs Euclid's algorithms on these smaller numbers, as long as it is guaranteed that the quotients are the same with those that would be obtained with the original numbers. The quotients are collected into a small 2-by-2 transformation matrix (a matrix of single-word integers) to reduce the original numbers. This process is repeated until numbers are small enough that the binary algorithm (see below) is more efficient. This algorithm improves speed, because it reduces the number of operations on very large numbers, and can use hardware arithmetic for most operations. In fact, most of the quotients are very small, so a fair number of steps of the Euclidean algorithm can be collected in a 2-by-2 matrix of single-word integers. When Lehmer's algorithm encounters a quotient that is too large, it must fall back to one iteration of Euclidean algorithm, with aEuclidean divisionof large numbers. Ifaandbare both nonzero, the greatest common divisor ofaandbcan be computed by usingleast common multiple(LCM) ofaandb: but more commonly the LCM is computed from the GCD. UsingThomae's functionf, which generalizes toaandbrational numbersorcommensurablereal numbers. Keith Slavin has shown that for odda≥ 1: which is a function that can be evaluated for complexb.[16]Wolfgang Schramm has shown that is anentire functionin the variablebfor all positive integersawherecd(k) isRamanujan's sum.[17] Thecomputational complexityof the computation of greatest common divisors has been widely studied.[18]If one uses theEuclidean algorithmand the elementary algorithms for multiplication and division, the computation of the greatest common divisor of two integers of at mostnbitsisO(n2).[citation needed]This means that the computation of greatest common divisor has, up to a constant factor, the same complexity as the multiplication. However, if a fastmultiplication algorithmis used, one may modify the Euclidean algorithm for improving the complexity, but the computation of a greatest common divisor becomes slower than the multiplication. More precisely, if the multiplication of two integers ofnbits takes a time ofT(n), then the fastest known algorithm for greatest common divisor has a complexityO(T(n) logn). This implies that the fastest known algorithm has a complexity ofO(n(logn)2). Previous complexities are valid for the usualmodels of computation, specificallymultitape Turing machinesandrandom-access machines. The computation of the greatest common divisors belongs thus to the class of problems solvable inquasilinear time.A fortiori, the correspondingdecision problembelongs to the classPof problems solvable in polynomial time. The GCD problem is not known to be inNC, and so there is no known way toparallelizeit efficiently; nor is it known to beP-complete, which would imply that it is unlikely to be possible to efficiently parallelize GCD computation. Shallcross et al. 
showed that a related problem (EUGCD, determining the remainder sequence arising during the Euclidean algorithm) is NC-equivalent to the problem ofinteger linear programmingwith two variables; if either problem is inNCor isP-complete, the other is as well.[19]SinceNCcontainsNL, it is also unknown whether a space-efficient algorithm for computing the GCD exists, even for nondeterministic Turing machines. Although the problem is not known to be inNC, parallel algorithmsasymptotically fasterthan the Euclidean algorithm exist; the fastest known deterministic algorithm is byChorandGoldreich, which (in theCRCW-PRAMmodel) can solve the problem inO(n/logn)time withn1+εprocessors.[20]Randomized algorithmscan solve the problem inO((logn)2)time onexp⁡(O(nlog⁡n)){\displaystyle \exp \left(O\left({\sqrt {n\log n}}\right)\right)}processors[clarification needed](this issuperpolynomial).[21] ∑k=1ngcd(k,n)=∑d|ndφ(nd)=n∑d|nφ(d)d=n∏p|n(1+νp(n)(1−1p)){\displaystyle \sum _{k=1}^{n}\gcd(k,n)=\sum _{d|n}d\varphi \left({\frac {n}{d}}\right)=n\sum _{d|n}{\frac {\varphi (d)}{d}}=n\prod _{p|n}\left(1+\nu _{p}(n)\left(1-{\frac {1}{p}}\right)\right)}whereνp(n){\displaystyle \nu _{p}(n)}is thep-adic valuation. (sequenceA018804in theOEIS) In 1972, James E. Nymann showed thatkintegers, chosen independently and uniformly from{1, ...,n}, are coprime with probability1/ζ(k)asngoes to infinity, whereζrefers to theRiemann zeta function.[24](Seecoprimefor a derivation.) This result was extended in 1987 to show that the probability thatkrandom integers have greatest common divisordisd−k/ζ(k).[25] Using this information, theexpected valueof the greatest common divisor function can be seen (informally) to not exist whenk= 2. In this case the probability that the GCD equalsdisd−2/ζ(2), and sinceζ(2) = π2/6we have This last summation is theharmonic series, which diverges. However, whenk≥ 3, the expected value is well-defined, and by the above argument, it is Fork= 3, this is approximately equal to 1.3684. Fork= 4, it is approximately 1.1106. The notion of greatest common divisor can more generally be defined for elements of an arbitrarycommutative ring, although in general there need not exist one for every pair of elements.[26] With this definition, two elementsaandbmay very well have several greatest common divisors, or none at all. IfRis anintegral domain, then any two GCDs ofaandbmust beassociate elements, since by definition either one must divide the other. Indeed, if a GCD exists, any one of its associates is a GCD as well. Existence of a GCD is not assured in arbitrary integral domains. However, ifRis aunique factorization domainor any otherGCD domain, then any two elements have a GCD. IfRis aEuclidean domainin which euclidean division is given algorithmically (as is the case for instance whenR=F[X]whereFis afield, or whenRis the ring ofGaussian integers), then greatest common divisors can be computed using a form of the Euclidean algorithm based on the division procedure. The following is an example of an integral domain with two elements that do not have a GCD: The elements2and1 +√−3are twomaximal common divisors(that is, any common divisor which is a multiple of2is associated to2, the same holds for1 +√−3, but they are not associated, so there is no greatest common divisor ofaandb. Corresponding to the Bézout property we may, in any commutative ring, consider the collection of elements of the formpa+qb, wherepandqrange over the ring. This is theidealgenerated byaandb, and is denoted simply(a,b). 
In a ring all of whose ideals are principal (aprincipal ideal domainor PID), this ideal will be identical with the set of multiples of some ring elementd; then thisdis a greatest common divisor ofaandb. But the ideal(a,b)can be useful even when there is no greatest common divisor ofaandb. (Indeed,Ernst Kummerused this ideal as a replacement for a GCD in his treatment ofFermat's Last Theorem, although he envisioned it as the set of multiples of some hypothetical, orideal, ring elementd, whence the ring-theoretic term.)
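As a practical complement to the calculation methods described earlier in this article, the following C sketch (an illustration, not a reference implementation) gives the remainder-based Euclidean algorithm and the binary GCD algorithm for unsigned integers, using the convention gcd(0, 0) = 0 discussed above.

#include <stdio.h>

/* Euclidean algorithm: repeatedly replace (a, b) by (b, a mod b). */
static unsigned long long gcd_euclid(unsigned long long a, unsigned long long b) {
    while (b != 0) {
        unsigned long long r = a % b;
        a = b;
        b = r;
    }
    return a;                            /* gcd(a, 0) = a, so gcd(0, 0) = 0 */
}

/* Binary GCD: factor out shared powers of two, then subtract odd values. */
static unsigned long long gcd_binary(unsigned long long a, unsigned long long b) {
    if (a == 0) return b;
    if (b == 0) return a;
    unsigned shift = 0;
    while (((a | b) & 1u) == 0) {        /* both even: 2 divides the gcd */
        a >>= 1; b >>= 1; ++shift;
    }
    while ((a & 1u) == 0) a >>= 1;       /* remaining factors of 2 in a alone do not matter */
    while (b != 0) {
        while ((b & 1u) == 0) b >>= 1;
        if (a > b) { unsigned long long t = a; a = b; b = t; }
        b -= a;                          /* difference of two odd numbers is even */
    }
    return a << shift;
}

int main(void) {
    printf("%llu %llu\n", gcd_euclid(48, 18), gcd_binary(48, 18));   /* prints 6 6 */
    return 0;
}
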
https://en.wikipedia.org/wiki/Greatest_common_divisor
Inabstract algebra, agroup isomorphismis afunctionbetween twogroupsthat sets up abijectionbetween the elements of the groups in a way that respects the given group operations. If there exists anisomorphismbetween two groups, then the groups are calledisomorphic. From the standpoint ofgroup theory, isomorphic groups have the same properties and need not be distinguished.[1] Given two groups(G,∗){\displaystyle (G,*)}and(H,⊙),{\displaystyle (H,\odot ),}agroup isomorphismfrom(G,∗){\displaystyle (G,*)}to(H,⊙){\displaystyle (H,\odot )}is abijectivegroup homomorphismfromG{\displaystyle G}toH.{\displaystyle H.}Spelled out, this means that a group isomorphism is a bijective functionf:G→H{\displaystyle f:G\to H}such that for allu{\displaystyle u}andv{\displaystyle v}inG{\displaystyle G}it holds thatf(u∗v)=f(u)⊙f(v).{\displaystyle f(u*v)=f(u)\odot f(v).} The two groups(G,∗){\displaystyle (G,*)}and(H,⊙){\displaystyle (H,\odot )}are isomorphic if there exists an isomorphism from one to the other.[1][2]This is written(G,∗)≅(H,⊙).{\displaystyle (G,*)\cong (H,\odot ).} Often shorter and simpler notations can be used. When the relevant group operations are understood, they are omitted and one writesG≅H.{\displaystyle G\cong H.} Sometimes one can even simply writeG=H.{\displaystyle G=H.}Whether such a notation is possible without confusion or ambiguity depends on context. For example, the equals sign is not very suitable when the groups are bothsubgroupsof the same group. See also the examples. Conversely, given a group(G,∗),{\displaystyle (G,*),}a setH,{\displaystyle H,}and abijectionf:G→H,{\displaystyle f:G\to H,}we can makeH{\displaystyle H}a group(H,⊙){\displaystyle (H,\odot )}by definingf(u)⊙f(v)=f(u∗v).{\displaystyle f(u)\odot f(v)=f(u*v).} IfH=G{\displaystyle H=G}and⊙=∗{\displaystyle \odot =*}then the bijection is anautomorphism(q.v.). Intuitively, group theorists view two isomorphic groups as follows: For every elementg{\displaystyle g}of a groupG,{\displaystyle G,}there exists an elementh{\displaystyle h}ofH{\displaystyle H}such thath{\displaystyle h}"behaves in the same way" asg{\displaystyle g}(operates with other elements of the group in the same way asg{\displaystyle g}). For instance, ifg{\displaystyle g}generatesG,{\displaystyle G,}then so doesh.{\displaystyle h.}This implies, in particular, thatG{\displaystyle G}andH{\displaystyle H}are in bijective correspondence. Thus, the definition of an isomorphism is quite natural. An isomorphism of groups may equivalently be defined as aninvertiblegroup homomorphism (the inverse function of a bijective group homomorphism is also a group homomorphism). In this section some notable examples of isomorphic groups are listed. Some groups can be proven to be isomorphic, relying on theaxiom of choice, but the proof does not indicate how to construct a concrete isomorphism. Examples: Thekernelof an isomorphism from(G,∗){\displaystyle (G,*)}to(H,⊙){\displaystyle (H,\odot )}is always {eG}, where eGis theidentityof the group(G,∗){\displaystyle (G,*)} If(G,∗){\displaystyle (G,*)}and(H,⊙){\displaystyle (H,\odot )}are isomorphic, thenG{\displaystyle G}isabelianif and only ifH{\displaystyle H}is abelian. 
Iff{\displaystyle f}is an isomorphism from(G,∗){\displaystyle (G,*)}to(H,⊙),{\displaystyle (H,\odot ),}then for anya∈G,{\displaystyle a\in G,}theorderofa{\displaystyle a}equals the order off(a).{\displaystyle f(a).} If(G,∗){\displaystyle (G,*)}and(H,⊙){\displaystyle (H,\odot )}are isomorphic, then(G,∗){\displaystyle (G,*)}is alocally finite groupif and only if(H,⊙){\displaystyle (H,\odot )}is locally finite. The number of distinct groups (up to isomorphism) ofordern{\displaystyle n}is given bysequenceA000001 in theOEIS. The first few numbers are 0, 1, 1, 1 and 2 meaning that 4 is the lowest order with more than one group. All cyclic groups of a given order are isomorphic to(Zn,+n),{\displaystyle (\mathbb {Z} _{n},+_{n}),}where+n{\displaystyle +_{n}}denotes additionmodulon.{\displaystyle n.} LetG{\displaystyle G}be a cyclic group andn{\displaystyle n}be the order ofG.{\displaystyle G.}Lettingx{\displaystyle x}be a generator ofG{\displaystyle G},G{\displaystyle G}is then equal to⟨x⟩={e,x,…,xn−1}.{\displaystyle \langle x\rangle =\left\{e,x,\ldots ,x^{n-1}\right\}.}We will show thatG≅(Zn,+n).{\displaystyle G\cong (\mathbb {Z} _{n},+_{n}).} Defineφ:G→Zn={0,1,…,n−1},{\displaystyle \varphi :G\to \mathbb {Z} _{n}=\{0,1,\ldots ,n-1\},}so thatφ(xa)=a.{\displaystyle \varphi (x^{a})=a.}Clearly,φ{\displaystyle \varphi }is bijective. Thenφ(xa⋅xb)=φ(xa+b)=a+b=φ(xa)+nφ(xb),{\displaystyle \varphi (x^{a}\cdot x^{b})=\varphi (x^{a+b})=a+b=\varphi (x^{a})+_{n}\varphi (x^{b}),}which proves thatG≅(Zn,+n).{\displaystyle G\cong (\mathbb {Z} _{n},+_{n}).} From the definition, it follows that any isomorphismf:G→H{\displaystyle f:G\to H}will map the identity element ofG{\displaystyle G}to the identity element ofH,{\displaystyle H,}f(eG)=eH,{\displaystyle f(e_{G})=e_{H},}that it will mapinversesto inverses,f(u−1)=f(u)−1for allu∈G,{\displaystyle f(u^{-1})=f(u)^{-1}\quad {\text{ for all }}u\in G,}and more generally,n{\displaystyle n}th powers ton{\displaystyle n}th powers,f(un)=f(u)nfor allu∈G,{\displaystyle f(u^{n})=f(u)^{n}\quad {\text{ for all }}u\in G,}and that the inverse mapf−1:H→G{\displaystyle f^{-1}:H\to G}is also a group isomorphism. Therelation"being isomorphic" is anequivalence relation. Iff{\displaystyle f}is an isomorphism between two groupsG{\displaystyle G}andH,{\displaystyle H,}then everything that is true aboutG{\displaystyle G}that is only related to the group structure can be translated viaf{\displaystyle f}into a true ditto statement aboutH,{\displaystyle H,}and vice versa. An isomorphism from a group(G,∗){\displaystyle (G,*)}to itself is called anautomorphismof the group. Thus it is a bijectionf:G→G{\displaystyle f:G\to G}such thatf(u)∗f(v)=f(u∗v).{\displaystyle f(u)*f(v)=f(u*v).} Theimageunder an automorphism of aconjugacy classis always a conjugacy class (the same or another). Thecompositionof two automorphisms is again an automorphism, and with this operation the set of all automorphisms of a groupG,{\displaystyle G,}denoted byAut⁡(G),{\displaystyle \operatorname {Aut} (G),}itself forms a group, theautomorphism groupofG.{\displaystyle G.} For all abelian groups there is at least the automorphism that replaces the group elements by their inverses. However, in groups where all elements are equal to their inverses this is thetrivial automorphism, e.g. in theKlein four-group. For that group allpermutationsof the three non-identity elements are automorphisms, so the automorphism group is isomorphic toS3{\displaystyle S_{3}}(which itself is isomorphic toDih3{\displaystyle \operatorname {Dih} _{3}}). 
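The isomorphism φ(x^a) = a in the argument above can be checked by brute force for a concrete cyclic group. The sketch below takes G to be the multiplicative group modulo 7 with generator 3 (an illustrative choice, giving order n = 6) and verifies both bijectivity and the homomorphism property.

```python
# A small sketch checking the map phi(x^a) = a from the cyclic-group argument
# above.  Here G is taken to be the multiplicative group modulo 7, which is
# cyclic of order n = 6 with generator x = 3, and the target is (Z_6, +_6).

n, x, m = 6, 3, 7                           # order, generator, modulus
powers = [pow(x, a, m) for a in range(n)]   # G = {x^0, ..., x^(n-1)} = {1, 3, 2, 6, 4, 5}
phi = {powers[a]: a for a in range(n)}      # phi(x^a) = a

# phi is a bijection onto {0, ..., n-1} ...
assert sorted(phi.values()) == list(range(n))

# ... and a homomorphism: phi(u * v) = phi(u) +_n phi(v) for all u, v in G.
for u in powers:
    for v in powers:
        assert phi[(u * v) % m] == (phi[u] + phi[v]) % n

print("G = <3> mod 7 is isomorphic to (Z_6, +_6)")
```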
InZp{\displaystyle \mathbb {Z} _{p}}for aprime numberp,{\displaystyle p,}one non-identity element can be replaced by any other, with corresponding changes in the other elements. The automorphism group is isomorphic toZp−1{\displaystyle \mathbb {Z} _{p-1}}For example, forn=7,{\displaystyle n=7,}multiplying all elements ofZ7{\displaystyle \mathbb {Z} _{7}}by 3, modulo 7, is an automorphism of order 6 in the automorphism group, because36≡1(mod7),{\displaystyle 3^{6}\equiv 1{\pmod {7}},}while lower powers do not give 1. Thus this automorphism generatesZ6.{\displaystyle \mathbb {Z} _{6}.}There is one more automorphism with this property: multiplying all elements ofZ7{\displaystyle \mathbb {Z} _{7}}by 5, modulo 7. Therefore, these two correspond to the elements 1 and 5 ofZ6,{\displaystyle \mathbb {Z} _{6},}in that order or conversely. The automorphism group ofZ6{\displaystyle \mathbb {Z} _{6}}is isomorphic toZ2,{\displaystyle \mathbb {Z} _{2},}because only each of the two elements 1 and 5 generateZ6,{\displaystyle \mathbb {Z} _{6},}so apart from the identity we can only interchange these. The automorphism group ofZ2⊕Z2⊕Z2=Dih2⊕Z2{\displaystyle \mathbb {Z} _{2}\oplus \mathbb {Z} _{2}\oplus \mathbb {Z} _{2}=\operatorname {Dih} _{2}\oplus \mathbb {Z} _{2}}has order 168, as can be found as follows. All 7 non-identity elements play the same role, so we can choose which plays the role of(1,0,0).{\displaystyle (1,0,0).}Any of the remaining 6 can be chosen to play the role of (0,1,0). This determines which element corresponds to(1,1,0).{\displaystyle (1,1,0).}For(0,0,1){\displaystyle (0,0,1)}we can choose from 4, which determines the rest. Thus we have7×6×4=168{\displaystyle 7\times 6\times 4=168}automorphisms. They correspond to those of theFano plane, of which the 7 points correspond to the 7non-identityelements. The lines connecting three points correspond to the group operation:a,b,{\displaystyle a,b,}andc{\displaystyle c}on one line meansa+b=c,{\displaystyle a+b=c,}a+c=b,{\displaystyle a+c=b,}andb+c=a.{\displaystyle b+c=a.}See alsogeneral linear group over finite fields. For abelian groups, all non-trivial automorphisms areouter automorphisms. Non-abelian groups have a non-trivialinner automorphismgroup, and possibly also outer automorphisms.
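The count of 168 can be reproduced computationally: an automorphism of Z_2 ⊕ Z_2 ⊕ Z_2 is the same thing as an invertible 3 × 3 matrix over the two-element field, so it suffices to count the invertible matrices among all 2^9 = 512 binary 3 × 3 matrices, as in the brute-force sketch below.

```python
# A brute-force check of the count of 168 automorphisms of Z_2 + Z_2 + Z_2,
# obtained by counting invertible 3x3 matrices over the two-element field.

from itertools import product

def det_mod2(m):
    """Determinant of a 3x3 matrix, reduced modulo 2."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return (a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)) % 2

count = sum(
    det_mod2((row0, row1, row2)) == 1
    for row0 in product((0, 1), repeat=3)
    for row1 in product((0, 1), repeat=3)
    for row2 in product((0, 1), repeat=3)
)
print(count)  # 168, matching |GL(3, F_2)| = (8-1)(8-2)(8-4)
```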
https://en.wikipedia.org/wiki/Group_isomorphism
Incoding theory,block codesare a large and important family oferror-correcting codesthat encode data in blocks. There is a vast number of examples for block codes, many of which have a wide range of practical applications. The abstract definition of block codes is conceptually useful because it allows coding theorists,mathematicians, andcomputer scientiststo study the limitations ofallblock codes in a unified way. Such limitations often take the form ofboundsthat relate different parameters of the block code to each other, such as its rate and its ability to detect and correct errors. Examples of block codes areReed–Solomon codes,Hamming codes,Hadamard codes,Expander codes,Golay codes,Reed–Muller codesandPolar codes. These examples also belong to the class oflinear codes, and hence they are calledlinear block codes. More particularly, these codes are known as algebraic block codes, or cyclic block codes, because they can be generated using Boolean polynomials. Algebraic block codes are typicallyhard-decodedusing algebraic decoders.[jargon] The termblock codemay also refer to any error-correcting code that acts on a block ofk{\displaystyle k}bits of input data to producen{\displaystyle n}bits of output data(n,k){\displaystyle (n,k)}. Consequently, the block coder is amemorylessdevice. Under this definition codes such asturbo codes, terminated convolutional codes and other iteratively decodable codes (turbo-like codes) would also be considered block codes. A non-terminated convolutional encoder would be an example of a non-block (unframed) code, which hasmemoryand is instead classified as atree code. This article deals with "algebraic block codes". Error-correcting codesare used toreliablytransmitdigital dataover unreliablecommunication channelssubject tochannel noise. When a sender wants to transmit a possibly very long data stream using a block code, the sender breaks the stream up into pieces of some fixed size. Each such piece is calledmessageand the procedure given by the block code encodes each message individually into a codeword, also called ablockin the context of block codes. The sender then transmits all blocks to the receiver, who can in turn use some decoding mechanism to (hopefully) recover the original messages from the possibly corrupted received blocks. The performance and success of the overall transmission depends on the parameters of the channel and the block code. Formally, a block code is aninjectivemapping Here,Σ{\displaystyle \Sigma }is a finite and nonemptysetandk{\displaystyle k}andn{\displaystyle n}are integers. The meaning and significance of these three parameters and other parameters related to the code are described below. The data stream to be encoded is modeled as astringover somealphabetΣ{\displaystyle \Sigma }. The size|Σ|{\displaystyle |\Sigma |}of the alphabet is often written asq{\displaystyle q}. Ifq=2{\displaystyle q=2}, then the block code is called abinaryblock code. In many applications it is useful to considerq{\displaystyle q}to be aprime power, and to identifyΣ{\displaystyle \Sigma }with thefinite fieldFq{\displaystyle \mathbb {F} _{q}}. Messages are elementsm{\displaystyle m}ofΣk{\displaystyle \Sigma ^{k}}, that is, strings of lengthk{\displaystyle k}. Hence the numberk{\displaystyle k}is called themessage lengthordimensionof a block code. Theblock lengthn{\displaystyle n}of a block code is the number of symbols in a block. 
Hence, the elementsc{\displaystyle c}ofΣn{\displaystyle \Sigma ^{n}}are strings of lengthn{\displaystyle n}and correspond to blocks that may be received by the receiver. Hence they are also called received words. Ifc=C(m){\displaystyle c=C(m)}for some messagem{\displaystyle m}, thenc{\displaystyle c}is called the codeword ofm{\displaystyle m}. Therateof a block code is defined as the ratio between its message length and its block length: A large rate means that the amount of actual message per transmitted block is high. In this sense, the rate measures the transmission speed and the quantity1−R{\displaystyle 1-R}measures the overhead that occurs due to the encoding with the block code. It is a simpleinformation theoreticalfact that the rate cannot exceed1{\displaystyle 1}since data cannot in general be losslessly compressed. Formally, this follows from the fact that the codeC{\displaystyle C}is an injective map. Thedistanceorminimum distancedof a block code is the minimum number of positions in which any two distinct codewords differ, and therelative distanceδ{\displaystyle \delta }is the fractiond/n{\displaystyle d/n}. Formally, for received wordsc1,c2∈Σn{\displaystyle c_{1},c_{2}\in \Sigma ^{n}}, letΔ(c1,c2){\displaystyle \Delta (c_{1},c_{2})}denote theHamming distancebetweenc1{\displaystyle c_{1}}andc2{\displaystyle c_{2}}, that is, the number of positions in whichc1{\displaystyle c_{1}}andc2{\displaystyle c_{2}}differ. Then the minimum distanced{\displaystyle d}of the codeC{\displaystyle C}is defined as Since any code has to beinjective, any two codewords will disagree in at least one position, so the distance of any code is at least1{\displaystyle 1}. Besides, thedistanceequals theminimum weightfor linear block codes because:[citation needed] A larger distance allows for more error correction and detection. For example, if we only consider errors that may change symbols of the sent codeword but never erase or add them, then the number of errors is the number of positions in which the sent codeword and the received word differ. A code with distancedallows the receiver to detect up tod−1{\displaystyle d-1}transmission errors since changingd−1{\displaystyle d-1}positions of a codeword can never accidentally yield another codeword. Furthermore, if no more than(d−1)/2{\displaystyle (d-1)/2}transmission errors occur, the receiver can uniquely decode the received word to a codeword. This is because every received word has at most one codeword at distance(d−1)/2{\displaystyle (d-1)/2}. If more than(d−1)/2{\displaystyle (d-1)/2}transmission errors occur, the receiver cannot uniquely decode the received word in general as there might be several possible codewords. One way for the receiver to cope with this situation is to uselist decoding, in which the decoder outputs a list of all codewords in a certain radius. The notation(n,k,d)q{\displaystyle (n,k,d)_{q}}describes a block code over an alphabetΣ{\displaystyle \Sigma }of sizeq{\displaystyle q}, with a block lengthn{\displaystyle n}, message lengthk{\displaystyle k}, and distanced{\displaystyle d}. If the block code is a linear block code, then the square brackets in the notation[n,k,d]q{\displaystyle [n,k,d]_{q}}are used to represent that fact. For binary codes withq=2{\displaystyle q=2}, the index is sometimes dropped. Formaximum distance separable codes, the distance is alwaysd=n−k+1{\displaystyle d=n-k+1}, but sometimes the precise distance is not known, non-trivial to prove or state, or not needed. 
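These definitions are straightforward to compute for a code given as an explicit list of codewords. The sketch below implements the Hamming distance and the minimum distance and applies them to a small toy code (an arbitrary illustrative example), reporting how many errors it can detect and uniquely correct.

```python
# A minimal sketch of the definitions above: Hamming distance between words
# and the minimum distance d of a code given as an explicit list of codewords.
# The toy code at the end is an illustrative example only.

from itertools import combinations

def hamming_distance(c1, c2):
    """Number of positions in which two equal-length words differ."""
    assert len(c1) == len(c2)
    return sum(a != b for a, b in zip(c1, c2))

def minimum_distance(code):
    """Minimum Hamming distance over all pairs of distinct codewords."""
    return min(hamming_distance(u, v) for u, v in combinations(code, 2))

toy_code = ["00000", "11100", "00111", "11011"]
d = minimum_distance(toy_code)
print(d)                      # 3 for this toy code
print("detects up to", d - 1, "errors; corrects up to", (d - 1) // 2)
```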
In such cases, thed{\displaystyle d}-component may be missing. Sometimes, especially for non-block codes, the notation(n,M,d)q{\displaystyle (n,M,d)_{q}}is used for codes that containM{\displaystyle M}codewords of lengthn{\displaystyle n}. For block codes with messages of lengthk{\displaystyle k}over an alphabet of sizeq{\displaystyle q}, this number would beM=qk{\displaystyle M=q^{k}}. As mentioned above, there are a vast number of error-correcting codes that are actually block codes. The first error-correcting code was theHamming(7,4)code, developed byRichard W. Hammingin 1950. This code transforms a message consisting of 4 bits into a codeword of 7 bits by adding 3 parity bits. Hence this code is a block code. It turns out that it is also a linear code and that it has distance 3. In the shorthand notation above, this means that the Hamming(7,4) code is a[7,4,3]2{\displaystyle [7,4,3]_{2}}code. Reed–Solomon codesare a family of[n,k,d]q{\displaystyle [n,k,d]_{q}}codes withd=n−k+1{\displaystyle d=n-k+1}andq{\displaystyle q}being aprime power.Rank codesare family of[n,k,d]q{\displaystyle [n,k,d]_{q}}codes withd≤n−k+1{\displaystyle d\leq n-k+1}.Hadamard codesare a family of[n,k,d]2{\displaystyle [n,k,d]_{2}}codes withn=2k−1{\displaystyle n=2^{k-1}}andd=2k−2{\displaystyle d=2^{k-2}}. A codewordc∈Σn{\displaystyle c\in \Sigma ^{n}}could be considered as a point in then{\displaystyle n}-dimension spaceΣn{\displaystyle \Sigma ^{n}}and the codeC{\displaystyle {\mathcal {C}}}is the subset ofΣn{\displaystyle \Sigma ^{n}}. A codeC{\displaystyle {\mathcal {C}}}has distanced{\displaystyle d}means that∀c∈C{\displaystyle \forall c\in {\mathcal {C}}}, there is no other codeword in theHamming ballcentered atc{\displaystyle c}with radiusd−1{\displaystyle d-1}, which is defined as the collection ofn{\displaystyle n}-dimension words whoseHamming distancetoc{\displaystyle c}is no more thand−1{\displaystyle d-1}. Similarly,C{\displaystyle {\mathcal {C}}}with (minimum) distanced{\displaystyle d}has the following properties: C={Ci}i≥1{\displaystyle C=\{C_{i}\}_{i\geq 1}}is calledfamily of codes, whereCi{\displaystyle C_{i}}is an(ni,ki,di)q{\displaystyle (n_{i},k_{i},d_{i})_{q}}code with monotonic increasingni{\displaystyle n_{i}}. Rateof family of codesCis defined asR(C)=limi→∞kini{\displaystyle R(C)=\lim _{i\to \infty }{k_{i} \over n_{i}}} Relative distanceof family of codesCis defined asδ(C)=limi→∞dini{\displaystyle \delta (C)=\lim _{i\to \infty }{d_{i} \over n_{i}}} To explore the relationship betweenR(C){\displaystyle R(C)}andδ(C){\displaystyle \delta (C)}, a set of lower and upper bounds of block codes are known. The Singleton bound is that the sum of the rate and the relative distance of a block code cannot be much larger than 1: In other words, every block code satisfies the inequalityk+d≤n+1{\displaystyle k+d\leq n+1}.Reed–Solomon codesare non-trivial examples of codes that satisfy the singleton bound with equality. Forq=2{\displaystyle q=2},R+2δ≤1{\displaystyle R+2\delta \leq 1}. In other words,k+2d≤n{\displaystyle k+2d\leq n}. 
For the general case, the following Plotkin bounds holds for anyC⊆Fqn{\displaystyle C\subseteq \mathbb {F} _{q}^{n}}with distanced: For anyq-ary code with distanceδ{\displaystyle \delta },R≤1−(qq−1)δ+o(1){\displaystyle R\leq 1-\left({q \over {q-1}}\right)\delta +o\left(1\right)} R≥1−Hq(δ)−ϵ{\displaystyle R\geq 1-H_{q}\left(\delta \right)-\epsilon }, where0≤δ≤1−1q,0≤ϵ≤1−Hq(δ){\displaystyle 0\leq \delta \leq 1-{1 \over q},0\leq \epsilon \leq 1-H_{q}\left(\delta \right)},Hq(x)=def−x⋅logq⁡xq−1−(1−x)⋅logq⁡(1−x){\displaystyle H_{q}\left(x\right)~{\overset {\underset {\mathrm {def} }{}}{=}}~-x\cdot \log _{q}{x \over {q-1}}-\left(1-x\right)\cdot \log _{q}{\left(1-x\right)}}is theq-ary entropy function. DefineJq(δ)=def(1−1q)(1−1−qδq−1){\displaystyle J_{q}\left(\delta \right)~{\overset {\underset {\mathrm {def} }{}}{=}}~\left(1-{1 \over q}\right)\left(1-{\sqrt {1-{q\delta \over {q-1}}}}\right)}.LetJq(n,d,e){\displaystyle J_{q}\left(n,d,e\right)}be the maximum number of codewords in a Hamming ball of radiusefor any codeC⊆Fqn{\displaystyle C\subseteq \mathbb {F} _{q}^{n}}of distanced. Then we have theJohnson Bound:Jq(n,d,e)≤qnd{\displaystyle J_{q}\left(n,d,e\right)\leq qnd}, ifen≤q−1q(1−1−qq−1⋅dn)=Jq(dn){\displaystyle {e \over n}\leq {{q-1} \over q}\left({1-{\sqrt {1-{q \over {q-1}}\cdot {d \over n}}}}\,\right)=J_{q}\left({d \over n}\right)} Block codes are tied to thesphere packing problemwhich has received some attention over the years. In two dimensions, it is easy to visualize. Take a bunch of pennies flat on the table and push them together. The result is a hexagon pattern like a bee's nest. But block codes rely on more dimensions which cannot easily be visualized. The powerfulGolay codeused in deep space communications uses 24 dimensions. If used as a binary code (which it usually is), the dimensions refer to the length of the codeword as defined above. The theory of coding uses theN-dimensional sphere model. For example, how many pennies can be packed into a circle on a tabletop or in 3 dimensions, how many marbles can be packed into a globe. Other considerations enter the choice of a code. For example, hexagon packing into the constraint of a rectangular box will leave empty space at the corners. As the dimensions get larger, the percentage of empty space grows smaller. But at certain dimensions, the packing uses all the space and these codes are the so-called perfect codes. There are very few of these codes. Another property is the number of neighbors a single codeword may have.[1]Again, consider pennies as an example. First we pack the pennies in a rectangular grid. Each penny will have 4 near neighbors (and 4 at the corners which are farther away). In a hexagon, each penny will have 6 near neighbors. Respectively, in three and four dimensions, the maximum packing is given by the12-faceand24-cellwith 12 and 24 neighbors, respectively. When we increase the dimensions, the number of near neighbors increases very rapidly. In general, the value is given by thekissing numbers. The result is that the number of ways for noise to make the receiver choose a neighbor (hence an error) grows as well. This is a fundamental limitation of block codes, and indeed all codes. It may be harder to cause an error to a single neighbor, but the number of neighbors can be large enough so the total error probability actually suffers.[1]
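The [7,4,3]₂ Hamming code mentioned above is one of the perfect codes: its 16 Hamming balls of radius 1 exactly tile the 2^7 binary words of length 7, since 16 × (1 + 7) = 128. The sketch below builds the code from one standard-form generator matrix G = [I₄ | P] (the particular P is an illustrative choice among several equivalent ones) and verifies the minimum distance and the perfect-packing property.

```python
# A sketch of a [7,4,3] Hamming code from a standard-form generator matrix
# G = [I_4 | P], verifying the minimum distance and the "perfect code"
# property: the 16 radius-1 Hamming balls around the codewords exactly tile
# all 2^7 binary words of length 7.

from itertools import product

P = [(1, 1, 0), (1, 0, 1), (0, 1, 1), (1, 1, 1)]     # one common choice of P
G = [tuple(int(i == j) for j in range(4)) + P[i] for i in range(4)]

def encode(msg):
    """Codeword m*G over GF(2) for a 4-bit message m."""
    return tuple(sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G))

codewords = [encode(m) for m in product((0, 1), repeat=4)]

dist = lambda u, v: sum(a != b for a, b in zip(u, v))
d = min(dist(u, v) for u in codewords for v in codewords if u != v)
print(len(codewords), d)       # 16 codewords, minimum distance 3

# Radius-1 balls around the codewords cover every one of the 2^7 words,
# and 16 * (1 + 7) = 128 = 2^7, so the covering is a perfect packing.
covered = {w for c in codewords
             for w in [c] + [c[:i] + ((c[i] + 1) % 2,) + c[i+1:] for i in range(7)]}
print(len(covered) == 2 ** 7)  # True
```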
https://en.wikipedia.org/wiki/Block_code
Ininformation theoryandcoding theorywith applications incomputer scienceandtelecommunications,error detection and correction(EDAC) orerror controlare techniques that enablereliable deliveryofdigital dataover unreliablecommunication channels. Many communication channels are subject tochannel noise, and thus errors may be introduced during transmission from the source to a receiver. Error detection techniques allow detecting such errors, while error correction enables reconstruction of the original data in many cases. Error detectionis the detection of errors caused by noise or other impairments during transmission from the transmitter to the receiver. Error correctionis the detection of errors and reconstruction of the original, error-free data. In classical antiquity,copyistsof theHebrew Biblewere paid for their work according to the number ofstichs(lines of verse). As the prose books of the Bible were hardly ever written in stichs, the copyists, in order to estimate the amount of work, had to count the letters.[1]This also helped ensure accuracy in the transmission of the text with the production of subsequent copies.[2][3]Between the 7th and 10th centuries CE agroup of Jewish scribesformalized and expanded this to create theNumerical Masorahto ensure accurate reproduction of the sacred text. It included counts of the number of words in a line, section, book and groups of books, noting the middle stich of a book, word use statistics, and commentary.[1]Standards became such that a deviation in even a single letter in a Torah scroll was considered unacceptable.[4]The effectiveness of their error correction method was verified by the accuracy of copying through the centuries demonstrated by discovery of theDead Sea Scrollsin 1947–1956, dating fromc.150 BCE-75 CE.[5] The modern development oferror correction codesis credited toRichard Hammingin 1947.[6]A description ofHamming's codeappeared inClaude Shannon'sA Mathematical Theory of Communication[7]and was quickly generalized byMarcel J. E. Golay.[8] All error-detection and correction schemes add someredundancy(i.e., some extra data) to a message, which receivers can use to check consistency of the delivered message and to recover data that has been determined to be corrupted. Error detection and correction schemes can be eithersystematicor non-systematic. In a systematic scheme, the transmitter sends the original (error-free) data and attaches a fixed number ofcheck bits(orparity data), which are derived from the data bits by some encoding algorithm. If error detection is required, a receiver can simply apply the same algorithm to the received data bits and compare its output with the received check bits; if the values do not match, an error has occurred at some point during the transmission. If error correction is required, a receiver can apply the decoding algorithm to the received data bits and the received check bits to recover the original error-free data. In a system that uses a non-systematic code, the original message is transformed into an encoded message carrying the same information and that has at least as many bits as the original message. Good error control performance requires the scheme to be selected based on the characteristics of the communication channel. Commonchannel modelsincludememorylessmodels where errors occur randomly and with a certain probability, and dynamic models where errors occur primarily inbursts. 
Consequently, error-detecting and -correcting codes can be generally distinguished betweenrandom-error-detecting/correctingandburst-error-detecting/correcting. Some codes can also be suitable for a mixture of random errors and burst errors. If the channel characteristics cannot be determined, or are highly variable, an error-detection scheme may be combined with a system for retransmissions of erroneous data. This is known asautomatic repeat request(ARQ), and is most notably used in the Internet. An alternate approach for error control ishybrid automatic repeat request(HARQ), which is a combination of ARQ and error-correction coding. There are three major types of error correction:[9] Automatic repeat request(ARQ) is an error control method for data transmission that makes use of error-detection codes, acknowledgment and/or negative acknowledgment messages, andtimeoutsto achieve reliable data transmission. Anacknowledgmentis a message sent by the receiver to indicate that it has correctly received adata frame. Usually, when the transmitter does not receive the acknowledgment before the timeout occurs (i.e., within a reasonable amount of time after sending the data frame), it retransmits the frame until it is either correctly received or the error persists beyond a predetermined number of retransmissions. Three types of ARQ protocols areStop-and-wait ARQ,Go-Back-N ARQ, andSelective Repeat ARQ. ARQ is appropriate if the communication channel has varying or unknowncapacity, such as is the case on the Internet. However, ARQ requires the availability of aback channel, results in possibly increasedlatencydue to retransmissions, and requires the maintenance of buffers and timers for retransmissions, which in the case ofnetwork congestioncan put a strain on the server and overall network capacity.[10] For example, ARQ is used on shortwave radio data links in the form ofARQ-E, or combined with multiplexing asARQ-M. Forward error correction(FEC) is a process of addingredundant datasuch as anerror-correcting code(ECC) to a message so that it can be recovered by a receiver even when a number of errors (up to the capability of the code being used) are introduced, either during the process of transmission or on storage. Since the receiver does not have to ask the sender for retransmission of the data, abackchannelis not required in forward error correction. Error-correcting codes are used inlower-layercommunication such ascellular network, high-speedfiber-optic communicationandWi-Fi,[11][12]as well as for reliable storage in media such asflash memory,hard diskandRAM.[13] Error-correcting codes are usually distinguished betweenconvolutional codesandblock codes: Shannon's theoremis an important theorem in forward error correction, and describes the maximuminformation rateat which reliable communication is possible over a channel that has a certain error probability orsignal-to-noise ratio(SNR). This strict upper limit is expressed in terms of thechannel capacity. More specifically, the theorem says that there exist codes such that with increasing encoding length the probability of error on adiscrete memoryless channelcan be made arbitrarily small, provided that thecode rateis smaller than the channel capacity. The code rate is defined as the fractionk/nofksource symbols andnencoded symbols. The actual maximum code rate allowed depends on the error-correcting code used, and may be lower. 
This is because Shannon's proof was only of existential nature, and did not show how to construct codes that are both optimal and haveefficientencoding and decoding algorithms. Hybrid ARQis a combination of ARQ and forward error correction. There are two basic approaches:[10] The latter approach is particularly attractive on anerasure channelwhen using arateless erasure code. Error detection is most commonly realized using a suitablehash function(or specifically, achecksum,cyclic redundancy checkor other algorithm). A hash function adds a fixed-lengthtagto a message, which enables receivers to verify the delivered message by recomputing the tag and comparing it with the one provided. There exists a vast variety of different hash function designs. However, some are of particularly widespread use because of either their simplicity or their suitability for detecting certain kinds of errors (e.g., the cyclic redundancy check's performance in detectingburst errors). A random-error-correcting code based onminimum distance codingcan provide a strict guarantee on the number of detectable errors, but it may not protect against apreimage attack. Arepetition codeis a coding scheme that repeats the bits across a channel to achieve error-free communication. Given a stream of data to be transmitted, the data are divided into blocks of bits. Each block is transmitted some predetermined number of times. For example, to send the bit pattern1011, the four-bit block can be repeated three times, thus producing1011 1011 1011. If this twelve-bit pattern was received as1010 1011 1011– where the first block is unlike the other two – an error has occurred. A repetition code is very inefficient and can be susceptible to problems if the error occurs in exactly the same place for each group (e.g.,1010 1010 1010in the previous example would be detected as correct). The advantage of repetition codes is that they are extremely simple, and are in fact used in some transmissions ofnumbers stations.[14][15] Aparity bitis a bit that is added to a group of source bits to ensure that the number of set bits (i.e., bits with value 1) in the outcome is even or odd. It is a very simple scheme that can be used to detect single or any other odd number (i.e., three, five, etc.) of errors in the output. An even number of flipped bits will make the parity bit appear correct even though the data is erroneous. Parity bits added to eachwordsent are calledtransverse redundancy checks, while those added at the end of a stream ofwordsare calledlongitudinal redundancy checks. For example, if each of a series of m-bitwordshas a parity bit added, showing whether there were an odd or even number of ones in that word, any word with a single error in it will be detected. It will not be known where in the word the error is, however. If, in addition, after each stream of n words a parity sum is sent, each bit of which shows whether there were an odd or even number of ones at that bit-position sent in the most recent group, the exact position of the error can be determined and the error corrected. This method is only guaranteed to be effective, however, if there are no more than 1 error in every group of n words. With more error correction bits, more errors can be detected and in some cases corrected. There are also other bit-grouping techniques. Achecksumof a message is amodular arithmeticsum of message code words of a fixed word length (e.g., byte values). 
The sum may be negated by means of aones'-complementoperation prior to transmission to detect unintentional all-zero messages. Checksum schemes include parity bits,check digits, andlongitudinal redundancy checks. Some checksum schemes, such as theDamm algorithm, theLuhn algorithm, and theVerhoeff algorithm, are specifically designed to detect errors commonly introduced by humans in writing down or remembering identification numbers. Acyclic redundancy check(CRC) is a non-securehash functiondesigned to detect accidental changes to digital data in computer networks. It is not suitable for detecting maliciously introduced errors. It is characterized by specification of agenerator polynomial, which is used as thedivisorin apolynomial long divisionover afinite field, taking the input data as thedividend. Theremainderbecomes the result. A CRC has properties that make it well suited for detectingburst errors. CRCs are particularly easy to implement in hardware and are therefore commonly used incomputer networksand storage devices such ashard disk drives. The parity bit can be seen as a special-case 1-bit CRC. The output of acryptographic hash function, also known as amessage digest, can provide strong assurances aboutdata integrity, whether changes of the data are accidental (e.g., due to transmission errors) or maliciously introduced. Any modification to the data will likely be detected through a mismatching hash value. Furthermore, given some hash value, it is typically infeasible to find some input data (other than the one given) that will yield the same hash value. If an attacker can change not only the message but also the hash value, then akeyed hashormessage authentication code(MAC) can be used for additional security. Without knowing the key, it is not possible for the attacker to easily or conveniently calculate the correct keyed hash value for a modified message. Digital signatures can provide strong assurances about data integrity, whether the changes of the data are accidental or maliciously introduced. Digital signatures are perhaps most notable for being part of the HTTPS protocol for securely browsing the web. Any error-correcting code can be used for error detection. A code withminimumHamming distance,d, can detect up tod− 1 errors in a code word. Using minimum-distance-based error-correcting codes for error detection can be suitable if a strict limit on the minimum number of errors to be detected is desired. Codes with minimum Hamming distanced= 2 are degenerate cases of error-correcting codes and can be used to detect single errors. The parity bit is an example of a single-error-detecting code. Applications that require low latency (such as telephone conversations) cannot useautomatic repeat request(ARQ); they must useforward error correction(FEC). By the time an ARQ system discovers an error and re-transmits it, the re-sent data will arrive too late to be usable. Applications where the transmitter immediately forgets the information as soon as it is sent (such as most television cameras) cannot use ARQ; they must use FEC because when an error occurs, the original data is no longer available. Applications that use ARQ must have areturn channel; applications having no return channel cannot use ARQ. Applications that require extremely low error rates (such as digital money transfers) must use ARQ due to the possibility of uncorrectable errors with FEC. 
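The polynomial long division behind a CRC, described above, reduces to shift-and-XOR operations on the message bits. The sketch below uses the small generator polynomial x⁴ + x + 1 purely for illustration; real protocols fix their own polynomials, bit orderings, and initial values, so this is a sketch of the idea rather than of any particular standard.

```python
# A bit-level sketch of the CRC idea: append r zero bits to the message,
# divide by a generator polynomial over GF(2) (XOR-based long division), and
# use the r-bit remainder as the check value.

def crc_remainder(message_bits, generator_bits):
    """Remainder of message * x^r divided by the generator, over GF(2)."""
    r = len(generator_bits) - 1
    work = list(message_bits) + [0] * r          # message shifted left by r
    for i in range(len(message_bits)):
        if work[i] == 1:                         # leading term present: subtract
            for j, g in enumerate(generator_bits):
                work[i + j] ^= g                 # subtraction over GF(2) is XOR
    return work[-r:]                             # the r-bit remainder

gen = [1, 0, 0, 1, 1]                            # x^4 + x + 1 (illustrative choice)
msg = [1, 0, 1, 1, 0, 1]
crc = crc_remainder(msg, gen)

# The transmitted word is the message followed by the CRC.  A receiver can
# re-run the division on the whole word; a nonzero remainder signals an error.
received = msg + crc
print(crc, crc_remainder(received, gen) == [0] * 4)   # ..., True (no error)

received[2] ^= 1                                 # flip one bit in transit
print(crc_remainder(received, gen) == [0] * 4)   # False: error detected
```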
Reliability and inspection engineering also make use of the theory of error-correcting codes,[16]as well as natural language.[17] In a typicalTCP/IPstack, error control is performed at multiple levels: The development of error-correction codes was tightly coupled with the history of deep-space missions due to the extreme dilution of signal power over interplanetary distances, and the limited power availability aboard space probes. Whereas early missions sent their data uncoded, starting in 1968, digital error correction was implemented in the form of (sub-optimally decoded)convolutional codesandReed–Muller codes.[18]The Reed–Muller code was well suited to the noise the spacecraft was subject to (approximately matching abell curve), and was implemented for the Mariner spacecraft and used on missions between 1969 and 1977. TheVoyager 1andVoyager 2missions, which started in 1977, were designed to deliver color imaging and scientific information fromJupiterandSaturn.[19]This resulted in increased coding requirements, and thus, the spacecraft were supported by (optimallyViterbi-decoded) convolutional codes that could beconcatenatedwith an outerGolay (24,12,8) code. The Voyager 2 craft additionally supported an implementation of aReed–Solomon code. The concatenated Reed–Solomon–Viterbi (RSV) code allowed for very powerful error correction, and enabled the spacecraft's extended journey toUranusandNeptune. After ECC system upgrades in 1989, both crafts used V2 RSV coding. TheConsultative Committee for Space Data Systemscurrently recommends usage of error correction codes with performance similar to the Voyager 2 RSV code as a minimum. Concatenated codes are increasingly falling out of favor with space missions, and are replaced by more powerful codes such asTurbo codesorLDPC codes. The different kinds of deep space and orbital missions that are conducted suggest that trying to find a one-size-fits-all error correction system will be an ongoing problem. For missions close to Earth, the nature of thenoisein thecommunication channelis different from that which a spacecraft on an interplanetary mission experiences. Additionally, as a spacecraft increases its distance from Earth, the problem of correcting for noise becomes more difficult. The demand for satellitetransponderbandwidth continues to grow, fueled by the desire to deliver television (including new channels andhigh-definition television) and IP data. Transponder availability and bandwidth constraints have limited this growth. Transponder capacity is determined by the selectedmodulationscheme and the proportion of capacity consumed by FEC. Error detection and correction codes are often used to improve the reliability of data storage media.[20]A parity track capable of detecting single-bit errors was present on the firstmagnetic tape data storagein 1951. Theoptimal rectangular codeused ingroup coded recordingtapes not only detects but also corrects single-bit errors. Somefile formats, particularlyarchive formats, include a checksum (most oftenCRC32) to detect corruption and truncation and can employ redundancy orparity filesto recover portions of corrupted data.Reed-Solomon codesare used incompact discsto correct errors caused by scratches. Modern hard drives use Reed–Solomon codes to detect and correct minor errors in sector reads, and to recover corrupted data from failing sectors and store that data in the spare sectors.[21]RAIDsystems use a variety of error correction techniques to recover data when a hard drive completely fails. 
Filesystems such asZFSorBtrfs, as well as someRAIDimplementations, supportdata scrubbingand resilvering, which allows bad blocks to be detected and (hopefully) recovered before they are used.[22]The recovered data may be re-written to exactly the same physical location, to spare blocks elsewhere on the same piece of hardware, or the data may be rewritten onto replacement hardware. Dynamic random-access memory(DRAM) may provide stronger protection againstsoft errorsby relying on error-correcting codes. Such error-correcting memory, known asECCorEDAC-protectedmemory, is particularly desirable for mission-critical applications, such as scientific computing, financial, medical, etc. as well as extraterrestrial applications due to the increasedradiationin space. Error-correcting memory controllers traditionally useHamming codes, although some usetriple modular redundancy.Interleavingallows distributing the effect of a single cosmic ray potentially upsetting multiple physically neighboring bits across multiple words by associating neighboring bits to different words. As long as asingle-event upset(SEU) does not exceed the error threshold (e.g., a single error) in any particular word between accesses, it can be corrected (e.g., by a single-bit error-correcting code), and the illusion of an error-free memory system may be maintained.[23] In addition to hardware providing features required for ECC memory to operate,operating systemsusually contain related reporting facilities that are used to provide notifications when soft errors are transparently recovered. One example is theLinux kernel'sEDACsubsystem (previously known asBluesmoke), which collects the data from error-checking-enabled components inside a computer system; besides collecting and reporting back the events related to ECC memory, it also supports other checksumming errors, including those detected on thePCI bus.[24][25][26]A few systems[specify]also supportmemory scrubbingto catch and correct errors early before they become unrecoverable.
https://en.wikipedia.org/wiki/Error_detection_and_correction#Error_detection
Incoding theory, agenerator matrixis amatrixwhose rows form abasisfor alinear code. The codewords are all of thelinear combinationsof the rows of this matrix, that is, the linear code is therow spaceof its generator matrix. IfGis a matrix, it generates thecodewordsof a linear codeCby wherewis a codeword of the linear codeC, andsis any input vector. Bothwandsare assumed to be row vectors.[1]A generator matrix for a linear[n,k,d]q{\displaystyle [n,k,d]_{q}}-code has formatk×n{\displaystyle k\times n}, wherenis the length of a codeword,kis the number of information bits (the dimension ofCas a vector subspace),dis the minimum distance of the code, andqis size of thefinite field, that is, the number of symbols in the alphabet (thus,q= 2 indicates abinary code, etc.). The number ofredundant bitsis denoted byr=n−k{\displaystyle r=n-k}. Thestandardform for a generator matrix is,[2] whereIk{\displaystyle I_{k}}is thek×k{\displaystyle k\times k}identity matrixand P is ak×(n−k){\displaystyle k\times (n-k)}matrix. When the generator matrix is in standard form, the codeCissystematicin its firstkcoordinate positions.[3] A generator matrix can be used to construct theparity check matrixfor a code (and vice versa). If the generator matrixGis in standard form,G=[Ik|P]{\displaystyle G={\begin{bmatrix}I_{k}|P\end{bmatrix}}}, then the parity check matrix forCis[4] whereP⊤{\displaystyle P^{\top }}is thetransposeof the matrixP{\displaystyle P}. This is a consequence of the fact that a parity check matrix ofC{\displaystyle C}is a generator matrix of thedual codeC⊥{\displaystyle C^{\perp }}. G is ak×n{\displaystyle k\times n}matrix, while H is a(n−k)×n{\displaystyle (n-k)\times n}matrix. CodesC1andC2areequivalent(denotedC1~C2) if one code can be obtained from the other via the following two transformations:[5] Equivalent codes have the same minimum distance. The generator matrices of equivalent codes can be obtained from one another via the followingelementary operations:[6] Thus, we can performGaussian eliminationonG. Indeed, this allows us to assume that the generator matrix is in the standard form. More precisely, for any matrixGwe can find aninvertible matrixUsuch thatUG=[Ik|P]{\displaystyle UG={\begin{bmatrix}I_{k}|P\end{bmatrix}}}, whereGand[Ik|P]{\displaystyle {\begin{bmatrix}I_{k}|P\end{bmatrix}}}generate equivalent codes. t=G⊺s{\displaystyle \mathbf {t} =\mathbf {G} ^{\intercal }\mathbf {s} } t=sG{\displaystyle \mathbf {t} =\mathbf {sG} }
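The relationship between a standard-form generator matrix and its parity check matrix can be verified directly for a small binary code. In the sketch below, the matrix P is an arbitrary illustrative choice; since the code is binary, −P⊤ equals P⊤, and every codeword sG is checked to lie in the null space of H.

```python
# A sketch of the standard-form relationship above: for a binary code with
# generator matrix G = [I_k | P], the parity check matrix is
# H = [-P^T | I_{n-k}] (over GF(2) the sign is irrelevant), and every
# codeword c = s G satisfies H c^T = 0.

from itertools import product

k, P = 3, [(1, 1), (1, 0), (0, 1)]                 # k = 3, n - k = 2, n = 5
n = k + len(P[0])

G = [tuple(int(i == j) for j in range(k)) + P[i] for i in range(k)]
H = [tuple(P[i][j] for i in range(k)) + tuple(int(j == l) for l in range(n - k))
     for j in range(n - k)]
# G is k x n, H is (n-k) x n.

def mat_vec(M, v):
    """Matrix-vector product over GF(2)."""
    return tuple(sum(a * b for a, b in zip(row, v)) % 2 for row in M)

for s in product((0, 1), repeat=k):
    c = mat_vec(list(zip(*G)), s)                  # codeword c = s G (row vector)
    assert mat_vec(H, c) == (0,) * (n - k)         # H c^T = 0

print("every codeword of G lies in the null space of H")
```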
https://en.wikipedia.org/wiki/Generator_matrix
Incoding theory, alinear codeis anerror-correcting codefor which anylinear combinationofcodewordsis also a codeword. Linear codes are traditionally partitioned intoblock codesandconvolutional codes, althoughturbo codescan be seen as a hybrid of these two types.[1]Linear codes allow for more efficient encoding and decoding algorithms than other codes (cf.syndrome decoding).[citation needed] Linear codes are used inforward error correctionand are applied in methods for transmitting symbols (e.g.,bits) on acommunications channelso that, if errors occur in the communication, some errors can be corrected or detected by the recipient of a message block. The codewords in a linear block code are blocks of symbols that are encoded using more symbols than the original value to be sent.[2]A linear code of lengthntransmits blocks containingnsymbols. For example, the [7,4,3]Hamming codeis a linearbinary codewhich represents 4-bit messages using 7-bit codewords. Two distinct codewords differ in at least three bits. As a consequence, up to two errors per codeword can be detected while a single error can be corrected.[3]This code contains 24= 16 codewords. Alinear codeof lengthnand dimensionkis alinear subspaceCwithdimensionkof thevector spaceFqn{\displaystyle \mathbb {F} _{q}^{n}}whereFq{\displaystyle \mathbb {F} _{q}}is thefinite fieldwithqelements. Such a code is called aq-ary code. Ifq= 2 orq= 3, the code is described as abinary code, or aternary coderespectively. The vectors inCare calledcodewords. Thesizeof a code is the number of codewords and equalsqk. Theweightof a codeword is the number of its elements that are nonzero and thedistancebetween two codewords is theHamming distancebetween them, that is, the number of elements in which they differ. The distancedof the linear code is the minimum weight of its nonzero codewords, or equivalently, the minimum distance between distinct codewords. A linear code of lengthn, dimensionk, and distancedis called an [n,k,d] code (or, more precisely,[n,k,d]q{\displaystyle [n,k,d]_{q}}code). We want to giveFqn{\displaystyle \mathbb {F} _{q}^{n}}the standard basis because each coordinate represents a "bit" that is transmitted across a "noisy channel" with some small probability of transmission error (abinary symmetric channel). If some other basis is used then this model cannot be used and the Hamming metric does not measure the number of errors in transmission, as we want it to. As alinear subspaceofFqn{\displaystyle \mathbb {F} _{q}^{n}}, the entire codeC(which may be very large) may be represented as thespanof a set ofk{\displaystyle k}codewords (known as abasisinlinear algebra). These basis codewords are often collated in the rows of a matrix G known as agenerating matrixfor the codeC. When G has the block matrix formG=[Ik∣P]{\displaystyle {\boldsymbol {G}}=[I_{k}\mid P]}, whereIk{\displaystyle I_{k}}denotes thek×k{\displaystyle k\times k}identity matrix and P is somek×(n−k){\displaystyle k\times (n-k)}matrix, then we say G is instandard form. A matrixHrepresenting a linear functionϕ:Fqn→Fqn−k{\displaystyle \phi :\mathbb {F} _{q}^{n}\to \mathbb {F} _{q}^{n-k}}whosekernelisCis called acheck matrixofC(or sometimes a parity check matrix). Equivalently,His a matrix whosenull spaceisC. IfCis a code with a generating matrixGin standard form,G=[Ik∣P]{\displaystyle {\boldsymbol {G}}=[I_{k}\mid P]}, thenH=[−PT∣In−k]{\displaystyle {\boldsymbol {H}}=[-P^{T}\mid I_{n-k}]}is a check matrix for C. The code generated byHis called thedual codeof C. 
It can be verified that G is ak×n{\displaystyle k\times n}matrix, while H is a(n−k)×n{\displaystyle (n-k)\times n}matrix. Linearity guarantees that the minimumHamming distancedbetween a codewordc0and any of the other codewordsc≠c0is independent ofc0. This follows from the property that the differencec−c0of two codewords inCis also a codeword (i.e., anelementof the subspaceC), and the property thatd(c, c0) =d(c−c0, 0). These properties imply that In other words, in order to find out the minimum distance between the codewords of a linear code, one would only need to look at the non-zero codewords. The non-zero codeword with the smallest weight has then the minimum distance to the zero codeword, and hence determines the minimum distance of the code. The distancedof a linear codeCalso equals the minimum number of linearly dependent columns of the check matrixH. Proof:BecauseH⋅cT=0{\displaystyle {\boldsymbol {H}}\cdot {\boldsymbol {c}}^{T}={\boldsymbol {0}}}, which is equivalent to∑i=1n(ci⋅Hi)=0{\displaystyle \sum _{i=1}^{n}(c_{i}\cdot {\boldsymbol {H_{i}}})={\boldsymbol {0}}}, whereHi{\displaystyle {\boldsymbol {H_{i}}}}is theith{\displaystyle i^{th}}column ofH{\displaystyle {\boldsymbol {H}}}. Remove those items withci=0{\displaystyle c_{i}=0}, thoseHi{\displaystyle {\boldsymbol {H_{i}}}}withci≠0{\displaystyle c_{i}\neq 0}are linearly dependent. Therefore,d{\displaystyle d}is at least the minimum number of linearly dependent columns. On another hand, consider the minimum set of linearly dependent columns{Hj∣j∈S}{\displaystyle \{{\boldsymbol {H_{j}}}\mid j\in S\}}whereS{\displaystyle S}is the column index set.∑i=1n(ci⋅Hi)=∑j∈S(cj⋅Hj)+∑j∉S(cj⋅Hj)=0{\displaystyle \sum _{i=1}^{n}(c_{i}\cdot {\boldsymbol {H_{i}}})=\sum _{j\in S}(c_{j}\cdot {\boldsymbol {H_{j}}})+\sum _{j\notin S}(c_{j}\cdot {\boldsymbol {H_{j}}})={\boldsymbol {0}}}. Now consider the vectorc′{\displaystyle {\boldsymbol {c'}}}such thatcj′=0{\displaystyle c_{j}'=0}ifj∉S{\displaystyle j\notin S}. Notec′∈C{\displaystyle {\boldsymbol {c'}}\in C}becauseH⋅c′T=0{\displaystyle {\boldsymbol {H}}\cdot {\boldsymbol {c'}}^{T}={\boldsymbol {0}}}. Therefore, we haved≤wt(c′){\displaystyle d\leq wt({\boldsymbol {c'}})}, which is the minimum number of linearly dependent columns inH{\displaystyle {\boldsymbol {H}}}. The claimed property is therefore proven. As the first class of linear codes developed for error correction purpose,Hamming codeshave been widely used in digital communication systems. For any positive integerr≥2{\displaystyle r\geq 2}, there exists a[2r−1,2r−r−1,3]2{\displaystyle [2^{r}-1,2^{r}-r-1,3]_{2}}Hamming code. Sinced=3{\displaystyle d=3}, this Hamming code can correct a 1-bit error. Example :The linear block code with the following generator matrix and parity check matrix is a[7,4,3]2{\displaystyle [7,4,3]_{2}}Hamming code. Hadamard codeis a[2r,r,2r−1]2{\displaystyle [2^{r},r,2^{r-1}]_{2}}linear code and is capable of correcting many errors. Hadamard code could be constructed column by column : theith{\displaystyle i^{th}}column is the bits of the binary representation of integeri{\displaystyle i}, as shown in the following example. Hadamard code has minimum distance2r−1{\displaystyle 2^{r-1}}and therefore can correct2r−2−1{\displaystyle 2^{r-2}-1}errors. 
Example:The linear block code with the following generator matrix is a[8,3,4]2{\displaystyle [8,3,4]_{2}}Hadamard code:GHad=(000011110011001101010101){\displaystyle {\boldsymbol {G}}_{\mathrm {Had} }={\begin{pmatrix}0&0&0&0&1&\ 1&1&1\\0&0&1&1&0&0&1&1\\0&1&0&1&0&1&0&1\end{pmatrix}}}. Hadamard codeis a special case ofReed–Muller code. If we take the first column (the all-zero column) out fromGHad{\displaystyle {\boldsymbol {G}}_{\mathrm {Had} }}, we get[7,3,4]2{\displaystyle [7,3,4]_{2}}simplex code, which is thedual codeof Hamming code. The parameter d is closely related to the error correcting ability of the code. The following construction/algorithm illustrates this (called the nearest neighbor decoding algorithm): Input: Areceived vectorv inFqn.{\displaystyle \mathbb {F} _{q}^{n}.} Output: A codewordw{\displaystyle w}inC{\displaystyle C}closest tov{\displaystyle v}, if any. We say that a linearC{\displaystyle C}ist{\displaystyle t}-error correcting if there is at most one codeword inBt(v){\displaystyle B_{t}(v)}, for eachv{\displaystyle v}inFqn{\displaystyle \mathbb {F} _{q}^{n}}. Codesin general are often denoted by the letterC, and a code of lengthnand ofrankk(i.e., havingncode words in its basis andkrows in itsgenerating matrix) is generally referred to as an (n,k) code. Linear block codes are frequently denoted as [n,k,d] codes, wheredrefers to the code's minimum Hamming distance between any two code words. (The [n,k,d] notation should not be confused with the (n,M,d) notation used to denote anon-linearcode of lengthn, sizeM(i.e., havingMcode words), and minimum Hamming distanced.) Lemma(Singleton bound): Every linear [n,k,d] code C satisfiesk+d≤n+1{\displaystyle k+d\leq n+1}. A codeCwhose parameters satisfyk+d=n+ 1 is calledmaximum distance separableorMDS. Such codes, when they exist, are in some sense best possible. IfC1andC2are two codes of lengthnand if there is a permutationpin thesymmetric groupSnfor which (c1,...,cn) inC1if and only if (cp(1),...,cp(n)) inC2, then we sayC1andC2arepermutation equivalent. In more generality, if there is ann×n{\displaystyle n\times n}monomial matrixM:Fqn→Fqn{\displaystyle M\colon \mathbb {F} _{q}^{n}\to \mathbb {F} _{q}^{n}}which sendsC1isomorphically toC2then we sayC1andC2areequivalent. Lemma: Any linear code is permutation equivalent to a code which is in standard form. A code is defined to beequidistantif and only if there exists some constantdsuch that the distance between any two of the code's distinct codewords is equal tod.[4]In 1984 Arrigo Bonisoli determined the structure of linear one-weight codes over finite fields and proved that every equidistant linear code is a sequence ofdualHamming codes.[5] Some examples of linear codes include: Hamming spacesover non-field alphabets have also been considered, especially overfinite rings, most notablyGalois ringsoverZ4. This gives rise tomodulesinstead of vector spaces andring-linear codes(identified withsubmodules) instead of linear codes. The typical metric used in this case theLee distance. There exist aGray isometrybetweenZ22m{\displaystyle \mathbb {Z} _{2}^{2m}}(i.e. 
GF(2^{2m})) with the Hamming distance andZ4m{\displaystyle \mathbb {Z} _{4}^{m}}(also denoted as GR(4,m)) with the Lee distance; its main attraction is that it establishes a correspondence between some "good" codes that are not linear overZ22m{\displaystyle \mathbb {Z} _{2}^{2m}}as images of ring-linear codes fromZ4m{\displaystyle \mathbb {Z} _{4}^{m}}.[6][7][8] Some authors have referred to such codes over rings simply as linear codes as well.[9]
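For a concrete check of the statement that the minimum distance of a linear code equals its minimum nonzero weight, the sketch below enumerates the eight codewords of the [8,3,4]₂ Hadamard code generated by the matrix G_Had given in the example above and computes both quantities.

```python
# A sketch that enumerates the 8 codewords of the [8,3,4] Hadamard code from
# the generator matrix G_Had given in the example above, and checks that the
# minimum distance equals the minimum weight of a nonzero codeword (both 4).

from itertools import product

G_Had = [(0, 0, 0, 0, 1, 1, 1, 1),
         (0, 0, 1, 1, 0, 0, 1, 1),
         (0, 1, 0, 1, 0, 1, 0, 1)]

def encode(m):
    """Codeword m * G_Had over GF(2)."""
    return tuple(sum(a * b for a, b in zip(m, col)) % 2 for col in zip(*G_Had))

codewords = [encode(m) for m in product((0, 1), repeat=3)]

weight = lambda c: sum(c)
dist = lambda u, v: sum(a != b for a, b in zip(u, v))

min_weight = min(weight(c) for c in codewords if any(c))
min_dist = min(dist(u, v) for u in codewords for v in codewords if u != v)
print(min_weight, min_dist)    # 4 4
```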
https://en.wikipedia.org/wiki/Linear_code
Incoding theory, asystematic codeis anyerror-correcting codein which the input data are embedded in the encoded output. Conversely, in anon-systematic codethe output does not contain the input symbols. Systematic codes have the advantage that the parity data can simply be appended to the source block, and receivers do not need to recover the original source symbols if received correctly – this is useful for example if error-correction coding is combined with a hash function for quickly determining the correctness of the received source symbols, or in cases where errors occur inerasuresand a received symbol is thus always correct. Furthermore, for engineering purposes such as synchronization and monitoring, it is desirable to get reasonable good estimates of the received source symbols without going through the lengthy decoding process which may be carried out at a remote site at a later time.[1] Every non-systematic linear code can be transformed into a systematic code with essentially the same properties (i.e., minimum distance).[1][2]Because of the advantages cited above,linearerror-correcting codes are therefore generally implemented as systematic codes. However, for certain decoding algorithms such as sequential decoding or maximum-likelihood decoding, a non-systematic structure can increase performance in terms of undetected decoding error probability when the minimumfreedistance of the code is larger.[1][3] For a systematiclinear code, thegenerator matrix,G{\displaystyle G}, can always be written asG=[Ik|P]{\displaystyle G=[I_{k}|P]}, whereIk{\displaystyle I_{k}}is theidentity matrixof sizek{\displaystyle k}.
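The practical meaning of the standard form G = [I_k | P] is that each codeword consists of the message symbols followed by the parity symbols mP. The sketch below shows this for an arbitrary illustrative parity matrix P.

```python
# A tiny sketch of systematic encoding with G = [I_k | P]: the codeword is the
# message itself followed by the parity part m*P, so a receiver can read the
# data symbols directly.  The matrix P is an illustrative assumption.

P = [(1, 1, 0), (0, 1, 1), (1, 0, 1)]             # k = 3, three parity symbols

def encode_systematic(m):
    parity = tuple(sum(a * b for a, b in zip(m, col)) % 2 for col in zip(*P))
    return tuple(m) + parity                      # codeword = [ message | parity ]

m = (1, 0, 1)
c = encode_systematic(m)
print(c, c[:3] == m)                              # (1, 0, 1, 0, 1, 1) True
```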
https://en.wikipedia.org/wiki/Systematic_code
The term minimum distance may refer to:
https://en.wikipedia.org/wiki/Minimum_distance
Inmathematics, particularly in the area ofarithmetic, amodular multiplicative inverseof anintegerais an integerxsuch that the productaxiscongruentto 1 with respect to the modulusm.[1]In the standard notation ofmodular arithmeticthis congruence is written as which is the shorthand way of writing the statement thatmdivides (evenly) the quantityax− 1, or, put another way, the remainder after dividingaxby the integermis 1. Ifadoes have an inverse modulom, then there is an infinite number of solutions of this congruence, which form acongruence classwith respect to this modulus. Furthermore, any integer that is congruent toa(i.e., ina's congruence class) has any element ofx's congruence class as a modular multiplicative inverse. Using the notation ofw¯{\displaystyle {\overline {w}}}to indicate the congruence class containingw, this can be expressed by saying that themodulo multiplicative inverseof the congruence classa¯{\displaystyle {\overline {a}}}is the congruence classx¯{\displaystyle {\overline {x}}}such that: where the symbol⋅m{\displaystyle \cdot _{m}}denotes the multiplication of equivalence classes modulom.[2]Written in this way, the analogy with the usual concept of amultiplicative inversein the set ofrationalorreal numbersis clearly represented, replacing the numbers by congruence classes and altering thebinary operationappropriately. As with the analogous operation on the real numbers, a fundamental use of this operation is in solving, when possible, linear congruences of the form Finding modular multiplicative inverses also has practical applications in the field ofcryptography, e.g.public-key cryptographyand theRSA algorithm.[3][4][5]A benefit for the computer implementation of these applications is that there exists a very fast algorithm (theextended Euclidean algorithm) that can be used for the calculation of modular multiplicative inverses. For a given positive integerm, two integers,aandb, are said to becongruent modulomifmdivides their difference. Thisbinary relationis denoted by, This is anequivalence relationon the set of integers,Z{\displaystyle \mathbb {Z} }, and the equivalence classes are calledcongruence classes modulomorresidue classes modulom. Leta¯{\displaystyle {\overline {a}}}denote the congruence class containing the integera,[6]then Alinear congruenceis a modular congruence of the form Unlike linear equations over the reals, linear congruences may have zero, one or several solutions. Ifxis a solution of a linear congruence then every element inx¯{\displaystyle {\overline {x}}}is also a solution, so, when speaking of the number of solutions of a linear congruence we are referring to the number of different congruence classes that contain solutions. Ifdis thegreatest common divisorofaandmthen the linear congruenceax≡b(modm)has solutions if and only ifddividesb. Ifddividesb, then there are exactlydsolutions.[7] A modular multiplicative inverse of an integerawith respect to the modulusmis a solution of the linear congruence The previous result says that a solution exists if and only ifgcd(a,m) = 1, that is,aandmmust berelatively prime(i.e. coprime). Furthermore, when this condition holds, there is exactly one solution, i.e., when it exists, a modular multiplicative inverse is unique:[8]Ifbandb'are both modular multiplicative inverses ofarespect to the modulusm, then therefore Ifa≡ 0 (modm), thengcd(a,m) =m, andawon't even have a modular multiplicative inverse. Therefore,b ≡ b'(modm). 
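The extended Euclidean algorithm mentioned above computes integers x and y with ax + my = gcd(a, m); when gcd(a, m) = 1, the coefficient x reduced modulo m is the modular multiplicative inverse. The sketch below is a minimal iterative version that tracks only the coefficient of a.

```python
# A sketch of computing a modular multiplicative inverse with the extended
# Euclidean algorithm: it maintains the invariant r = a*x (mod m) for each
# remainder r, so when the gcd reaches 1 the tracked coefficient is an inverse.

def modular_inverse(a, m):
    old_r, r = a % m, m          # remainders
    old_x, x = 1, 0              # coefficients of a (mod m)
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_x, x = x, old_x - q * x
    if old_r != 1:               # gcd(a, m) != 1: no inverse exists
        raise ValueError(f"{a} has no inverse modulo {m}")
    return old_x % m

print(modular_inverse(3, 10))    # 7, since 3 * 7 = 21 = 1 (mod 10)
print(modular_inverse(7, 9))     # 4, since 7 * 4 = 28 = 1 (mod 9)
```

In Python 3.8 and later, the built-in pow(a, -1, m) returns the same value and raises an error when no inverse exists.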
Whenax≡ 1 (modm)has a solution it is often denoted in this way − but this can be considered anabuse of notationsince it could be misinterpreted as thereciprocalofa{\displaystyle a}(which, contrary to the modular multiplicative inverse, is not an integer except whenais 1 or −1). The notation would be proper ifais interpreted as a token standing for the congruence classa¯{\displaystyle {\overline {a}}}, as the multiplicative inverse of a congruence class is a congruence class with the multiplication defined in the next section. The congruence relation, modulom, partitions the set of integers intomcongruence classes. Operations of addition and multiplication can be defined on thesemobjects in the following way: To either add or multiply two congruence classes, first pick a representative (in any way) from each class, then perform the usual operation for integers on the two representatives and finally take the congruence class that the result of the integer operation lies in as the result of the operation on the congruence classes. In symbols, with+m{\displaystyle +_{m}}and⋅m{\displaystyle \cdot _{m}}representing the operations on congruence classes, these definitions are and These operations arewell-defined, meaning that the end result does not depend on the choices of representatives that were made to obtain the result. Themcongruence classes with these two defined operations form aring, called thering of integers modulom. There are several notations used for these algebraic objects, most oftenZ/mZ{\displaystyle \mathbb {Z} /m\mathbb {Z} }orZ/m{\displaystyle \mathbb {Z} /m}, but several elementary texts and application areas use a simplified notationZm{\displaystyle \mathbb {Z} _{m}}when confusion with other algebraic objects is unlikely. The congruence classes of the integers modulomwere traditionally known asresidue classes modulo m, reflecting the fact that all the elements of a congruence class have the same remainder (i.e., "residue") upon being divided bym. Any set ofmintegers selected so that each comes from a different congruence class modulo m is called acomplete system of residues modulom.[9]Thedivision algorithmshows that the set of integers,{0, 1, 2, ...,m− 1}form a complete system of residues modulom, known as theleast residue system modulom. In working with arithmetic problems it is sometimes more convenient to work with a complete system of residues and use the language of congruences while at other times the point of view of the congruence classes of the ringZ/mZ{\displaystyle \mathbb {Z} /m\mathbb {Z} }is more useful.[10] Not every element of a complete residue system modulomhas a modular multiplicative inverse, for instance, zero never does. After removing the elements of a complete residue system that are not relatively prime tom, what is left is called areduced residue system, all of whose elements have modular multiplicative inverses. The number of elements in a reduced residue system isϕ(m){\displaystyle \phi (m)}, whereϕ{\displaystyle \phi }is theEuler totient function, i.e., the number of positive integers less thanmthat are relatively prime tom. In a generalring with unitynot every element has amultiplicative inverseand those that do are calledunits. As the product of two units is a unit, the units of a ring form agroup, thegroup of units of the ringand often denoted byR×ifRis the name of the ring. The group of units of the ring of integers modulomis called themultiplicative group of integers modulom, and it isisomorphicto a reduced residue system. 
In particular, it hasorder(size),ϕ(m){\displaystyle \phi (m)}. In the case thatmis aprime, sayp, thenϕ(p)=p−1{\displaystyle \phi (p)=p-1}and all the non-zero elements ofZ/pZ{\displaystyle \mathbb {Z} /p\mathbb {Z} }have multiplicative inverses, thusZ/pZ{\displaystyle \mathbb {Z} /p\mathbb {Z} }is afinite field. In this case, the multiplicative group of integers modulopform acyclic groupof orderp− 1. For any integern>1{\displaystyle n>1}, it's always the case thatn2−n+1{\displaystyle n^{2}-n+1}is the modular multiplicative inverse ofn+1{\displaystyle n+1}with respect to the modulusn2{\displaystyle n^{2}}, since(n+1)(n2−n+1)=n3+1{\displaystyle (n+1)(n^{2}-n+1)=n^{3}+1}. Examples are3×3≡1(mod4){\displaystyle 3\times 3\equiv 1{\pmod {4}}},4×7≡1(mod9){\displaystyle 4\times 7\equiv 1{\pmod {9}}},5×13≡1(mod16){\displaystyle 5\times 13\equiv 1{\pmod {16}}}and so on. The following example uses the modulus 10: Two integers are congruent mod 10 if and only if their difference is divisible by 10, for instance Some of the ten congruence classes with respect to this modulus are: The linear congruence4x≡ 5 (mod 10)has no solutions since the integers that are congruent to 5 (i.e., those in5¯{\displaystyle {\overline {5}}}) are all odd while4xis always even. However, the linear congruence4x≡ 6 (mod 10)has two solutions, namely,x= 4andx= 9. Thegcd(4, 10) = 2and 2 does not divide 5, but does divide 6. Sincegcd(3, 10) = 1, the linear congruence3x≡ 1 (mod 10)will have solutions, that is, modular multiplicative inverses of 3 modulo 10 will exist. In fact, 7 satisfies this congruence (i.e., 21 − 1 = 20). However, other integers also satisfy the congruence, for instance 17 and −3 (i.e., 3(17) − 1 = 50 and 3(−3) − 1 = −10). In particular, every integer in7¯{\displaystyle {\overline {7}}}will satisfy the congruence since these integers have the form7 + 10rfor some integerrand is divisible by 10. This congruence has only this one congruence class of solutions. The solution in this case could have been obtained by checking all possible cases, but systematic algorithms would be needed for larger moduli and these will be given in the next section. The product of congruence classes5¯{\displaystyle {\overline {5}}}and8¯{\displaystyle {\overline {8}}}can be obtained by selecting an element of5¯{\displaystyle {\overline {5}}}, say 25, and an element of8¯{\displaystyle {\overline {8}}}, say −2, and observing that their product (25)(−2) = −50 is in the congruence class0¯{\displaystyle {\overline {0}}}. Thus,5¯⋅108¯=0¯{\displaystyle {\overline {5}}\cdot _{10}{\overline {8}}={\overline {0}}}. Addition is defined in a similar way. The ten congruence classes together with these operations of addition and multiplication of congruence classes form the ring of integers modulo 10, i.e.,Z/10Z{\displaystyle \mathbb {Z} /10\mathbb {Z} }. A complete residue system modulo 10 can be the set {10, −9, 2, 13, 24, −15, 26, 37, 8, 9} where each integer is in a different congruence class modulo 10. The unique least residue system modulo 10 is {0, 1, 2, ..., 9}. A reduced residue system modulo 10 could be {1, 3, 7, 9}. The product of any two congruence classes represented by these numbers is again one of these four congruence classes. This implies that these four congruence classes form a group, in this case the cyclic group of order four, having either 3 or 7 as a (multiplicative) generator. The represented congruence classes form the group of units of the ringZ/10Z{\displaystyle \mathbb {Z} /10\mathbb {Z} }. 
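The claims made in this modulus-10 example are easy to verify directly. The following Python sketch (illustrative only) checks the solvability of the linear congruences above and confirms that the reduced residue system {1, 3, 7, 9} is closed under multiplication and generated by 3.

```python
from math import gcd

m = 10

# 4x ≡ 5 (mod 10) has no solution; 4x ≡ 6 (mod 10) has two solution classes
print([x for x in range(m) if (4 * x) % m == 5])   # []
print([x for x in range(m) if (4 * x) % m == 6])   # [4, 9]

# 3x ≡ 1 (mod 10): the inverse class of 3
print([x for x in range(m) if (3 * x) % m == 1])   # [7]

# Reduced residue system mod 10 and closure under multiplication
units = [x for x in range(m) if gcd(x, m) == 1]
print(units)                                        # [1, 3, 7, 9]
products = {(a * b) % m for a in units for b in units}
print(sorted(products))                             # [1, 3, 7, 9] -- a group

# 3 generates the whole group of units, so the group is cyclic of order 4
print(sorted({pow(3, k, m) for k in range(1, 5)}))  # [1, 3, 7, 9]
```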
These congruence classes are precisely the ones which have modular multiplicative inverses. A modular multiplicative inverse ofamodulomcan be found by using the extended Euclidean algorithm. TheEuclidean algorithmdetermines the greatest common divisor (gcd) of two integers, sayaandm. Ifahas a multiplicative inverse modulom, this gcd must be 1. The last of several equations produced by the algorithm may be solved for this gcd. Then, using a method called "back substitution", an expression connecting the original parameters and this gcd can be obtained. In other words, integersxandycan be found to satisfyBézout's identity, Rewritten, this is that is, so, a modular multiplicative inverse ofahas been calculated. A more efficient version of the algorithm is the extended Euclidean algorithm, which, by using auxiliary equations, reduces two passes through the algorithm (back substitution can be thought of as passing through the algorithm in reverse) to just one. Inbig O notation, this algorithm runs in timeO(log2(m)), assuming|a| <m, and is considered to be very fast and generally more efficient than its alternative, exponentiation. As an alternative to the extended Euclidean algorithm, Euler's theorem may be used to compute modular inverses.[11] According toEuler's theorem, ifaiscoprimetom, that is,gcd(a,m) = 1, then whereϕ{\displaystyle \phi }isEuler's totient function. This follows from the fact thatabelongs to the multiplicative group(Z/mZ){\displaystyle (\mathbb {Z} /m\mathbb {Z} )}×if and only ifaiscoprimetom. Therefore, a modular multiplicative inverse can be found directly: In the special case wheremis a prime,ϕ(m)=m−1{\displaystyle \phi (m)=m-1}and a modular inverse is given by This method is generally slower than the extended Euclidean algorithm, but is sometimes used when an implementation for modular exponentiation is already available. Some disadvantages of this method include: One notableadvantageof this technique is that there are no conditional branches which depend on the value ofa, and thus the value ofa, which may be an important secret inpublic-key cryptography, can be protected fromside-channel attacks. For this reason, the standard implementation ofCurve25519uses this technique to compute an inverse. It is possible to compute the inverse of multiple numbersai, modulo a commonm, with a single invocation of the Euclidean algorithm and three multiplications per additional input.[12]The basic idea is to form the product of all theai, invert that, then multiply byajfor allj≠ito leave only the desireda−1i. More specifically, the algorithm is (all arithmetic performed modulom): It is possible to perform the multiplications in a tree structure rather than linearly to exploitparallel computing. Finding a modular multiplicative inverse has many applications in algorithms that rely on the theory of modular arithmetic. For instance, in cryptography the use of modular arithmetic permits some operations to be carried out more quickly and with fewer storage requirements, while other operations become more difficult.[13]Both of these features can be used to advantage. In particular, in the RSA algorithm, encrypting and decrypting a message is done using a pair of numbers that are multiplicative inverses with respect to a carefully selected modulus. One of these numbers is made public and can be used in a rapid encryption procedure, while the other, used in the decryption procedure, is kept hidden. 
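As a sketch of the computational methods just described, the following Python code implements a modular inverse via the extended Euclidean algorithm, together with the batch-inversion trick that replaces many inversions by a single inversion plus roughly three multiplications per additional input. Function names and the sample modulus are illustrative, not canonical.

```python
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x1, y1 = extended_gcd(b, a % b)
    return g, y1, x1 - (a // b) * y1

def modinv(a, m):
    """Modular inverse via Bezout's identity a*x + m*y = 1."""
    g, x, _ = extended_gcd(a % m, m)
    if g != 1:
        raise ValueError("inverse does not exist: gcd(a, m) != 1")
    return x % m

def batch_modinv(values, m):
    """Invert several values modulo m with a single modular inversion.
    prefix[i] holds values[0]*...*values[i-1] (mod m); after inverting the
    total product, each individual inverse is peeled off from the right."""
    prefix = [1]
    for v in values:
        prefix.append(prefix[-1] * v % m)
    inv_running = modinv(prefix[-1], m)   # the only "expensive" inversion
    out = [0] * len(values)
    for i in range(len(values) - 1, -1, -1):
        out[i] = inv_running * prefix[i] % m
        inv_running = inv_running * values[i] % m
    return out

m = 97
vals = [3, 10, 25, 60]
invs = batch_modinv(vals, m)
assert all(v * w % m == 1 for v, w in zip(vals, invs))
print(invs)
```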
Determining the hidden number from the public number is considered to be computationally infeasible, and this is what makes the system work to ensure privacy.[14] As another example in a different context, consider the exact division problem in computer science, where one has a list of odd word-sized numbers, each divisible by k, and wishes to divide them all by k. One solution is as follows: compute, once, the modular inverse of k with respect to the word size (a power of two; k, which divides odd numbers, is itself odd and therefore coprime to it), and then multiply each number on the list by this inverse. Because every number is an exact multiple of k, the truncated product is the desired quotient. On many machines, particularly those without hardware support for division, division is a slower operation than multiplication, so this approach can yield a considerable speedup. The first step is relatively slow but only needs to be done once. Modular multiplicative inverses are also used to obtain the solution of a system of linear congruences that is guaranteed by the Chinese remainder theorem. For example, a system of congruences with the pairwise coprime moduli 5, 7 and 11 is guaranteed to have a common solution; the solution is expressed in terms of the modular multiplicative inverses of the partial products of the moduli, and it is unique in its reduced form modulo 385, the LCM (here equal to the product) of 5, 7 and 11. A computational sketch of this construction is given below. Also, the modular multiplicative inverse figures prominently in the definition of the Kloosterman sum.
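The Chinese-remainder construction referred to above can be sketched as follows. Since the original congruences of the example are not reproduced here, the right-hand sides used in this demonstration are assumed values chosen only for illustration.

```python
from math import prod

def crt(residues, moduli):
    """Solve x ≡ residues[i] (mod moduli[i]) for pairwise-coprime moduli,
    using modular multiplicative inverses of the partial products."""
    N = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Ni = N // m
        # inverse of Ni modulo m -- this is where the modular inverse appears
        x += r * Ni * pow(Ni, -1, m)
    return x % N, N

# Illustrative system (the right-hand sides are assumptions, not from the text):
# x ≡ 3 (mod 5), x ≡ 4 (mod 7), x ≡ 6 (mod 11)
x, N = crt([3, 4, 6], [5, 7, 11])
print(x, N)                      # the solution, reduced modulo 385
assert x % 5 == 3 and x % 7 == 4 and x % 11 == 6
```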
https://en.wikipedia.org/wiki/Modular_inverse
Ininformation theoryandcoding theory,Reed–Solomon codesare a group oferror-correcting codesthat were introduced byIrving S. ReedandGustave Solomonin 1960.[1]They have many applications, including consumer technologies such asMiniDiscs,CDs,DVDs,Blu-raydiscs,QR codes,Data Matrix,data transmissiontechnologies such asDSLandWiMAX,broadcastsystems such as satellite communications,DVBandATSC, and storage systems such asRAID 6. Reed–Solomon codes operate on a block of data treated as a set offinite-fieldelements called symbols. Reed–Solomon codes are able to detect and correct multiple symbol errors. By addingt=n−kcheck symbols to the data, a Reed–Solomon code can detect (but not correct) any combination of up toterroneous symbols,orlocate and correct up to⌊t/2⌋erroneous symbols at unknown locations. As anerasure code, it can correct up toterasures at locations that are known and provided to the algorithm, or it can detect and correct combinations of errors and erasures. Reed–Solomon codes are also suitable as multiple-burstbit-error correcting codes, since a sequence ofb+ 1consecutive bit errors can affect at most two symbols of sizeb. The choice oftis up to the designer of the code and may be selected within wide limits. There are two basic types of Reed–Solomon codes – original view andBCHview – with BCH view being the most common, as BCH view decoders are faster and require less working storage than original view decoders. Reed–Solomon codes were developed in 1960 byIrving S. ReedandGustave Solomon, who were then staff members ofMIT Lincoln Laboratory. Their seminal article was titled "Polynomial Codes over Certain Finite Fields".[1]The original encoding scheme described in the Reed and Solomon article used a variable polynomial based on the message to be encoded where only a fixed set of values (evaluation points) to be encoded are known to encoder and decoder. The original theoretical decoder generated potential polynomials based on subsets ofk(unencoded message length) out ofn(encoded message length) values of a received message, choosing the most popular polynomial as the correct one, which was impractical for all but the simplest of cases. This was initially resolved by changing the original scheme to aBCH-code-like scheme based on a fixed polynomial known to both encoder and decoder, but later, practical decoders based on the original scheme were developed, although slower than the BCH schemes. The result of this is that there are two main types of Reed–Solomon codes: ones that use the original encoding scheme and ones that use the BCH encoding scheme. Also in 1960, a practical fixed polynomial decoder forBCH codesdeveloped byDaniel Gorensteinand Neal Zierler was described in anMIT Lincoln Laboratoryreport by Zierler in January 1960 and later in an article in June 1961.[2]The Gorenstein–Zierler decoder and the related work on BCH codes are described in a book "Error-Correcting Codes" byW. Wesley Peterson(1961).[3][page needed]By 1963 (or possibly earlier), J.J. Stone (and others)[who?]recognized that Reed–Solomon codes could use the BCH scheme of using a fixed generator polynomial, making such codes a special class of BCH codes,[4]but Reed–Solomon codes based on the original encoding scheme are not a class of BCH codes, and depending on the set of evaluation points, they are not evencyclic codes. In 1969, an improved BCH scheme decoder was developed byElwyn BerlekampandJames Masseyand has since been known as theBerlekamp–Massey decoding algorithm. 
In 1975, another improved BCH scheme decoder was developed by Yasuo Sugiyama, based on theextended Euclidean algorithm.[5] In 1977, Reed–Solomon codes were implemented in theVoyager programin the form ofconcatenated error correction codes. The first commercial application in mass-produced consumer products appeared in 1982 with thecompact disc, where twointerleavedReed–Solomon codes are used. Today, Reed–Solomon codes are widely implemented indigital storagedevices anddigital communicationstandards, though they are being slowly replaced byBose–Chaudhuri–Hocquenghem (BCH) codes. For example, Reed–Solomon codes are used in theDigital Video Broadcasting(DVB) standardDVB-S, in conjunction with aconvolutionalinner code, but BCH codes are used withLDPCin its successor,DVB-S2. In 1986, an original scheme decoder known as theBerlekamp–Welch algorithmwas developed. In 1996, variations of original scheme decoders called list decoders or soft decoders were developed by Madhu Sudan and others, and work continues on these types of decoders (seeGuruswami–Sudan list decoding algorithm). In 2002, another original scheme decoder was developed by Shuhong Gao, based on theextended Euclidean algorithm.[6] Reed–Solomon coding is very widely used in mass storage systems to correct the burst errors associated with media defects. Reed–Solomon coding is a key component of thecompact disc. It was the first use of strong error correction coding in a mass-produced consumer product, andDATandDVDuse similar schemes. In the CD, two layers of Reed–Solomon coding separated by a 28-wayconvolutionalinterleaveryields a scheme called Cross-Interleaved Reed–Solomon Coding (CIRC). The first element of a CIRC decoder is a relatively weak inner (32,28) Reed–Solomon code, shortened from a (255,251) code with 8-bit symbols. This code can correct up to 2 byte errors per 32-byte block. More importantly, it flags as erasures any uncorrectable blocks, i.e., blocks with more than 2 byte errors. The decoded 28-byte blocks, with erasure indications, are then spread by the deinterleaver to different blocks of the (28,24) outer code. Thanks to the deinterleaving, an erased 28-byte block from the inner code becomes a single erased byte in each of 28 outer code blocks. The outer code easily corrects this, since it can handle up to 4 such erasures per block. The result is a CIRC that can completely correct error bursts up to 4000 bits, or about 2.5 mm on the disc surface. This code is so strong that most CD playback errors are almost certainly caused by tracking errors that cause the laser to jump track, not by uncorrectable error bursts.[7] DVDs use a similar scheme, but with much larger blocks, a (208,192) inner code, and a (182,172) outer code. Reed–Solomon error correction is also used inparchivefiles which are commonly posted accompanying multimedia files onUSENET. The distributed online storage serviceWuala(discontinued in 2015) also used Reed–Solomon when breaking up files. Almost all two-dimensional bar codes such asPDF-417,MaxiCode,Datamatrix,QR Code,Aztec CodeandHan Xin codeuse Reed–Solomon error correction to allow correct reading even if a portion of the bar code is damaged. When the bar code scanner cannot recognize a bar code symbol, it will treat it as an erasure. Reed–Solomon coding is less common in one-dimensional bar codes, but is used by thePostBarsymbology. 
Specialized forms of Reed–Solomon codes, specificallyCauchy-RS andVandermonde-RS, can be used to overcome the unreliable nature of data transmission overerasure channels. The encoding process assumes a code of RS(N,K) which results inNcodewords of lengthNsymbols each storingKsymbols of data, being generated, that are then sent over an erasure channel. Any combination ofKcodewords received at the other end is enough to reconstruct all of theNcodewords. The code rate is generally set to 1/2 unless the channel's erasure likelihood can be adequately modelled and is seen to be less. In conclusion,Nis usually 2K, meaning that at least half of all the codewords sent must be received in order to reconstruct all of the codewords sent. Reed–Solomon codes are also used inxDSLsystems andCCSDS'sSpace Communications Protocol Specificationsas a form offorward error correction. One significant application of Reed–Solomon coding was to encode the digital pictures sent back by theVoyager program. Voyager introduced Reed–Solomon codingconcatenatedwithconvolutional codes, a practice that has since become very widespread in deep space and satellite (e.g., direct digital broadcasting) communications. Viterbi decoderstend to produce errors in short bursts. Correcting these burst errors is a job best done by short or simplified Reed–Solomon codes. Modern versions of concatenated Reed–Solomon/Viterbi-decoded convolutional coding were and are used on theMars Pathfinder,Galileo,Mars Exploration RoverandCassinimissions, where they perform within about 1–1.5dBof the ultimate limit, theShannon capacity. These concatenated codes are now being replaced by more powerfulturbo codes: The Reed–Solomon code is actually a family of codes, where every code is characterised by three parameters: analphabetsizeq, ablock lengthn, and amessage lengthk,withk<n≤q{\displaystyle k<n\leq q}. The set of alphabet symbols is interpreted as thefinite fieldF{\displaystyle F}of orderq{\displaystyle q}, and thus,q{\displaystyle q}must be aprime power. In the most useful parameterizations of the Reed–Solomon code, the block length is usually some constant multiple of the message length, that is, therateR=kn{\displaystyle R={\frac {k}{n}}}is some constant, and furthermore, the block length is either equal to the alphabet size or one less than it, i.e.,n=q{\displaystyle n=q}orn=q−1{\displaystyle n=q-1}.[citation needed] There are different encoding procedures for the Reed–Solomon code, and thus, there are different ways to describe the set of all codewords. In the original view of Reed and Solomon, every codeword of the Reed–Solomon code is a sequence of function values of a polynomial of degree less thank{\displaystyle k}.[1]In order to obtain a codeword of the Reed–Solomon code, the message symbols (each within the q-sized alphabet) are treated as the coefficients of a polynomialp{\displaystyle p}of degree less thank{\displaystyle k}, over the finite fieldF{\displaystyle F}withq{\displaystyle q}elements. In turn, the polynomialp{\displaystyle p}is evaluated atn≤q{\displaystyle n\leq q}distinct pointsa1,…,an{\displaystyle a_{1},\dots ,a_{n}}of the fieldF{\displaystyle F}, and the sequence of values is the corresponding codeword. Common choices for a set of evaluation points include{0,1,2,…,n−1}{\displaystyle \{0,1,2,\dots ,n-1\}},{0,1,α,α2,…,αn−2}{\displaystyle \{0,1,\alpha ,\alpha ^{2},\dots ,\alpha ^{n-2}\}}, or forn<q{\displaystyle n<q},{1,α,α2,…,αn−1}{\displaystyle \{1,\alpha ,\alpha ^{2},\dots ,\alpha ^{n-1}\}}, ... 
, whereα{\displaystyle \alpha }is aprimitive elementofF{\displaystyle F}. Formally, the setC{\displaystyle \mathbf {C} }of codewords of the Reed–Solomon code is defined as follows:C={(p(a1),p(a2),…,p(an))|pis a polynomial overFof degree<k}.{\displaystyle \mathbf {C} ={\Bigl \{}\;{\bigl (}p(a_{1}),p(a_{2}),\dots ,p(a_{n}){\bigr )}\;{\Big |}\;p{\text{ is a polynomial over }}F{\text{ of degree }}<k\;{\Bigr \}}\,.}Since any twodistinctpolynomials of degree less thank{\displaystyle k}agree in at mostk−1{\displaystyle k-1}points, this means that any two codewords of the Reed–Solomon code disagree in at leastn−(k−1)=n−k+1{\displaystyle n-(k-1)=n-k+1}positions. Furthermore, there are two polynomials that do agree ink−1{\displaystyle k-1}points but are not equal, and thus, thedistanceof the Reed–Solomon code is exactlyd=n−k+1{\displaystyle d=n-k+1}. Then the relative distance isδ=d/n=1−k/n+1/n=1−R+1/n∼1−R{\displaystyle \delta =d/n=1-k/n+1/n=1-R+1/n\sim 1-R}, whereR=k/n{\displaystyle R=k/n}is the rate. This trade-off between the relative distance and the rate is asymptotically optimal since, by theSingleton bound,everycode satisfiesδ+R≤1+1/n{\displaystyle \delta +R\leq 1+1/n}. Being a code that achieves this optimal trade-off, the Reed–Solomon code belongs to the class ofmaximum distance separable codes. While the number of different polynomials of degree less thankand the number of different messages are both equal toqk{\displaystyle q^{k}}, and thus every message can be uniquely mapped to such a polynomial, there are different ways of doing this encoding. The original construction of Reed & Solomon interprets the messagexas thecoefficientsof the polynomialp, whereas subsequent constructions interpret the message as thevaluesof the polynomial at the firstkpointsa1,…,ak{\displaystyle a_{1},\dots ,a_{k}}and obtain the polynomialpby interpolating these values with a polynomial of degree less thank. The latter encoding procedure, while being slightly less efficient, has the advantage that it gives rise to asystematic code, that is, the original message is always contained as a subsequence of the codeword.[1] In the original construction of Reed and Solomon, the messagem=(m0,…,mk−1)∈Fk{\displaystyle m=(m_{0},\dots ,m_{k-1})\in F^{k}}is mapped to the polynomialpm{\displaystyle p_{m}}withpm(a)=∑i=0k−1miai.{\displaystyle p_{m}(a)=\sum _{i=0}^{k-1}m_{i}a^{i}\,.}The codeword ofm{\displaystyle m}is obtained by evaluatingpm{\displaystyle p_{m}}atn{\displaystyle n}different pointsa0,…,an−1{\displaystyle a_{0},\dots ,a_{n-1}}of the fieldF{\displaystyle F}.[1]Thus the classical encoding functionC:Fk→Fn{\displaystyle C:F^{k}\to F^{n}}for the Reed–Solomon code is defined as follows:C(m)=[pm(a0)pm(a1)⋯pm(an−1)]{\displaystyle C(m)={\begin{bmatrix}p_{m}(a_{0})\\p_{m}(a_{1})\\\cdots \\p_{m}(a_{n-1})\end{bmatrix}}}This functionC{\displaystyle C}is alinear mapping, that is, it satisfiesC(m)=Am{\displaystyle C(m)=Am}for the followingn×k{\displaystyle n\times k}-matrixA{\displaystyle A}with elements fromF{\displaystyle F}:C(m)=Am=[1a0a02…a0k−11a1a12…a1k−1⋮⋮⋮⋱⋮1an−1an−12…an−1k−1][m0m1⋮mk−1]{\displaystyle C(m)=Am={\begin{bmatrix}1&a_{0}&a_{0}^{2}&\dots &a_{0}^{k-1}\\1&a_{1}&a_{1}^{2}&\dots &a_{1}^{k-1}\\\vdots &\vdots &\vdots &\ddots &\vdots \\1&a_{n-1}&a_{n-1}^{2}&\dots &a_{n-1}^{k-1}\end{bmatrix}}{\begin{bmatrix}m_{0}\\m_{1}\\\vdots \\m_{k-1}\end{bmatrix}}} This matrix is aVandermonde matrixoverF{\displaystyle F}. 
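As an illustration of the original (polynomial-evaluation) view, the following Python sketch encodes a message over a small prime field. The parameters n = 7, k = 3, the prime 929 and the evaluation points 0, …, n − 1 are chosen to match the conventions of the worked example later in the article, but the code itself is only a toy model of the construction.

```python
p = 929                       # a prime, so GF(p) is just integer arithmetic mod p
n, k = 7, 3
eval_points = list(range(n))  # a_1, ..., a_n = 0, 1, ..., n-1

def poly_eval(coeffs, x):
    """Evaluate a polynomial given low-order-first coefficients, mod p."""
    acc = 0
    for c in reversed(coeffs):       # Horner's rule
        acc = (acc * x + c) % p
    return acc

def rs_encode_original_view(message):
    """Message symbols are the coefficients of p_m(x), degree < k;
    the codeword is the list of values p_m(a_i)."""
    assert len(message) == k
    return [poly_eval(message, a) for a in eval_points]

codeword = rs_encode_original_view([1, 2, 3])   # p_m(x) = 1 + 2x + 3x^2
print(codeword)
# Any two distinct polynomials of degree < k agree on at most k-1 = 2 points,
# so distinct messages give codewords differing in at least n-k+1 = 5 places.
```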
In other words, the Reed–Solomon code is alinear code, and in the classical encoding procedure, itsgenerator matrixisA{\displaystyle A}. There are alternative encoding procedures that produce asystematicReed–Solomon code. One method usesLagrange interpolationto compute polynomialpm{\displaystyle p_{m}}such thatpm(ai)=mifor alli∈{0,…,k−1}.{\displaystyle p_{m}(a_{i})=m_{i}{\text{ for all }}i\in \{0,\dots ,k-1\}.}Thenpm{\displaystyle p_{m}}is evaluated at the other pointsak,…,an−1{\displaystyle a_{k},\dots ,a_{n-1}}. C(m)=[pm(a0)pm(a1)⋯pm(an−1)]{\displaystyle C(m)={\begin{bmatrix}p_{m}(a_{0})\\p_{m}(a_{1})\\\cdots \\p_{m}(a_{n-1})\end{bmatrix}}} This functionC{\displaystyle C}is a linear mapping. To generate the corresponding systematic encoding matrix G, multiply the Vandermonde matrix A by the inverse of A's left square submatrix. G=(A's left square submatrix)−1⋅A=[100…0g1,k+1…g1,n010…0g2,k+1…g2,n001…0g3,k+1…g3,n⋮⋮⋮⋮⋮⋮0…0…1gk,k+1…gk,n]{\displaystyle G=(A{\text{'s left square submatrix}})^{-1}\cdot A={\begin{bmatrix}1&0&0&\dots &0&g_{1,k+1}&\dots &g_{1,n}\\0&1&0&\dots &0&g_{2,k+1}&\dots &g_{2,n}\\0&0&1&\dots &0&g_{3,k+1}&\dots &g_{3,n}\\\vdots &\vdots &\vdots &&\vdots &\vdots &&\vdots \\0&\dots &0&\dots &1&g_{k,k+1}&\dots &g_{k,n}\end{bmatrix}}} C(m)=Gm{\displaystyle C(m)=Gm}for the followingn×k{\displaystyle n\times k}-matrixG{\displaystyle G}with elements fromF{\displaystyle F}:C(m)=Gm=[100…0g1,k+1…g1,n010…0g2,k+1…g2,n001…0g3,k+1…g3,n⋮⋮⋮⋮⋮⋮0…0…1gk,k+1…gk,n][m0m1⋮mk−1]{\displaystyle C(m)=Gm={\begin{bmatrix}1&0&0&\dots &0&g_{1,k+1}&\dots &g_{1,n}\\0&1&0&\dots &0&g_{2,k+1}&\dots &g_{2,n}\\0&0&1&\dots &0&g_{3,k+1}&\dots &g_{3,n}\\\vdots &\vdots &\vdots &&\vdots &\vdots &&\vdots \\0&\dots &0&\dots &1&g_{k,k+1}&\dots &g_{k,n}\end{bmatrix}}{\begin{bmatrix}m_{0}\\m_{1}\\\vdots \\m_{k-1}\end{bmatrix}}} Adiscrete Fourier transformis essentially the same as the encoding procedure; it uses the generator polynomialpm{\displaystyle p_{m}}to map a set of evaluation points into the message values as shown above:C(m)=[pm(a0)pm(a1)⋯pm(an−1)]{\displaystyle C(m)={\begin{bmatrix}p_{m}(a_{0})\\p_{m}(a_{1})\\\cdots \\p_{m}(a_{n-1})\end{bmatrix}}} The inverse Fourier transform could be used to convert an error free set ofn<qmessage values back into the encoding polynomial ofkcoefficients, with the constraint that in order for this to work, the set of evaluation points used to encode the message must be a set of increasing powers ofα:ai=αi{\displaystyle a_{i}=\alpha ^{i}}a0,…,an−1={1,α,α2,…,αn−1}{\displaystyle a_{0},\dots ,a_{n-1}=\{1,\alpha ,\alpha ^{2},\dots ,\alpha ^{n-1}\}} However, Lagrange interpolation performs the same conversion without the constraint on the set of evaluation points or the requirement of an error free set of message values and is used for systematic encoding, and in one of the steps of theGao decoder. In this view, the message is interpreted as the coefficients of a polynomialp(x){\displaystyle p(x)}. The sender computes a related polynomials(x){\displaystyle s(x)}of degreen−1{\displaystyle n-1}wheren≤q−1{\displaystyle n\leq q-1}and sends the polynomials(x){\displaystyle s(x)}. The polynomials(x){\displaystyle s(x)}is constructed by multiplying the message polynomialp(x){\displaystyle p(x)}, which has degreek−1{\displaystyle k-1}, with agenerator polynomialg(x){\displaystyle g(x)}of degreen−k{\displaystyle n-k}that is known to both the sender and the receiver. 
The generator polynomialg(x){\displaystyle g(x)}is defined as the polynomial whose roots are sequential powers of the Galois field primitiveα{\displaystyle \alpha }g(x)=(x−αi)(x−αi+1)⋯(x−αi+n−k−1)=g0+g1x+⋯+gn−k−1xn−k−1+xn−k{\displaystyle g(x)=\left(x-\alpha ^{i}\right)\left(x-\alpha ^{i+1}\right)\cdots \left(x-\alpha ^{i+n-k-1}\right)=g_{0}+g_{1}x+\cdots +g_{n-k-1}x^{n-k-1}+x^{n-k}} For a "narrow sense code",i=1{\displaystyle i=1}. C={(s1,s2,…,sn)|s(a)=∑i=1nsiaiis a polynomial that has at least the rootsα1,α2,…,αn−k}.{\displaystyle \mathbf {C} =\left\{\left(s_{1},s_{2},\dots ,s_{n}\right)\;{\Big |}\;s(a)=\sum _{i=1}^{n}s_{i}a^{i}{\text{ is a polynomial that has at least the roots }}\alpha ^{1},\alpha ^{2},\dots ,\alpha ^{n-k}\right\}.} The encoding procedure for the BCH view of Reed–Solomon codes can be modified to yield asystematic encoding procedure, in which each codeword contains the message as a prefix, and simply appends error correcting symbols as a suffix. Here, instead of sendings(x)=p(x)g(x){\displaystyle s(x)=p(x)g(x)}, the encoder constructs the transmitted polynomials(x){\displaystyle s(x)}such that the coefficients of thek{\displaystyle k}largest monomials are equal to the corresponding coefficients ofp(x){\displaystyle p(x)}, and the lower-order coefficients ofs(x){\displaystyle s(x)}are chosen exactly in such a way thats(x){\displaystyle s(x)}becomes divisible byg(x){\displaystyle g(x)}. Then the coefficients ofp(x){\displaystyle p(x)}are a subsequence of the coefficients ofs(x){\displaystyle s(x)}. To get a code that is overall systematic, we construct the message polynomialp(x){\displaystyle p(x)}by interpreting the message as the sequence of its coefficients. Formally, the construction is done by multiplyingp(x){\displaystyle p(x)}byxt{\displaystyle x^{t}}to make room for thet=n−k{\displaystyle t=n-k}check symbols, dividing that product byg(x){\displaystyle g(x)}to find the remainder, and then compensating for that remainder by subtracting it. Thet{\displaystyle t}check symbols are created by computing the remaindersr(x){\displaystyle s_{r}(x)}:sr(x)=p(x)⋅xtmodg(x).{\displaystyle s_{r}(x)=p(x)\cdot x^{t}\ {\bmod {\ }}g(x).} The remainder has degree at mostt−1{\displaystyle t-1}, whereas the coefficients ofxt−1,xt−2,…,x1,x0{\displaystyle x^{t-1},x^{t-2},\dots ,x^{1},x^{0}}in the polynomialp(x)⋅xt{\displaystyle p(x)\cdot x^{t}}are zero. Therefore, the following definition of the codewords(x){\displaystyle s(x)}has the property that the firstk{\displaystyle k}coefficients are identical to the coefficients ofp(x){\displaystyle p(x)}:s(x)=p(x)⋅xt−sr(x).{\displaystyle s(x)=p(x)\cdot x^{t}-s_{r}(x)\,.} As a result, the codewordss(x){\displaystyle s(x)}are indeed elements ofC{\displaystyle \mathbf {C} }, that is, they are divisible by the generator polynomialg(x){\displaystyle g(x)}:[10]s(x)≡p(x)⋅xt−sr(x)≡sr(x)−sr(x)≡0modg(x).{\displaystyle s(x)\equiv p(x)\cdot x^{t}-s_{r}(x)\equiv s_{r}(x)-s_{r}(x)\equiv 0\mod g(x)\,.} This functions{\displaystyle s}is a linear mapping. 
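The BCH-view construction can be sketched in a few lines of Python over the prime field GF(929) with α = 3, the parameters of the RS(7,3) worked example given later in this article; if the arithmetic is correct, the sketch should reproduce that example's generator polynomial and systematic codeword. The polynomial helpers below are illustrative, not a reference implementation.

```python
p = 929            # prime field of the RS(7,3) worked example
alpha = 3
n, k = 7, 3
t = n - k          # number of check symbols

def poly_mul(a, b):
    """Multiply two polynomials (low-order-first coefficient lists) mod p."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

def poly_mod(a, g):
    """Remainder of a(x) divided by the monic polynomial g(x), mod p."""
    a = list(a)
    for i in range(len(a) - 1, len(g) - 2, -1):   # clear terms of degree >= deg g
        coef = a[i]
        if coef:
            for j, gj in enumerate(g):
                a[i - (len(g) - 1) + j] = (a[i - (len(g) - 1) + j] - coef * gj) % p
    return a[:len(g) - 1]

# Generator polynomial g(x) = (x - alpha)(x - alpha^2)...(x - alpha^t), i.e. i = 1
g = [1]
for j in range(1, t + 1):
    g = poly_mul(g, [(-pow(alpha, j, p)) % p, 1])
print(g)        # low-order first; should be [522, 568, 723, 809, 1]

# Systematic encoding of the message polynomial 1 + 2x + 3x^2
msg = [1, 2, 3]
shifted = [0] * t + msg                    # message polynomial times x^t
sr = poly_mod(shifted, g)                  # the t check symbols s_r(x)
codeword = [(c - r) % p for c, r in zip(shifted, sr + [0] * k)]
print(codeword)  # should be [474, 487, 191, 382, 1, 2, 3], matching s(x) below
assert all(c == 0 for c in poly_mod(codeword, g))   # s(x) is divisible by g(x)
```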
To generate the corresponding systematic encoding matrix G, set G's left square submatrix to the identity matrix and then encode each row: G=[100…0g1,k+1…g1,n010…0g2,k+1…g2,n001…0g3,k+1…g3,n⋮⋮⋮⋮⋮⋮0…0…1gk,k+1…gk,n]{\displaystyle G={\begin{bmatrix}1&0&0&\dots &0&g_{1,k+1}&\dots &g_{1,n}\\0&1&0&\dots &0&g_{2,k+1}&\dots &g_{2,n}\\0&0&1&\dots &0&g_{3,k+1}&\dots &g_{3,n}\\\vdots &\vdots &\vdots &&\vdots &\vdots &&\vdots \\0&\dots &0&\dots &1&g_{k,k+1}&\dots &g_{k,n}\end{bmatrix}}}Ignoring leading zeroes, the last row =g(x){\displaystyle g(x)}. C(m)=mG{\displaystyle C(m)=mG}for the followingn×k{\displaystyle n\times k}-matrixG{\displaystyle G}with elements fromF{\displaystyle F}:C(m)=mG=[m0m1…mk−1][100…0g1,k+1…g1,n010…0g2,k+1…g2,n001…0g3,k+1…g3,n⋮⋮⋮⋮⋮⋮0…0…1gk,k+1…gk,n]{\displaystyle C(m)=mG={\begin{bmatrix}m_{0}&m_{1}&\ldots &m_{k-1}\end{bmatrix}}{\begin{bmatrix}1&0&0&\dots &0&g_{1,k+1}&\dots &g_{1,n}\\0&1&0&\dots &0&g_{2,k+1}&\dots &g_{2,n}\\0&0&1&\dots &0&g_{3,k+1}&\dots &g_{3,n}\\\vdots &\vdots &\vdots &&\vdots &\vdots &&\vdots \\0&\dots &0&\dots &1&g_{k,k+1}&\dots &g_{k,n}\end{bmatrix}}} The Reed–Solomon code is a [n,k,n−k+ 1] code; in other words, it is alinear block codeof lengthn(overF) withdimensionkand minimumHamming distancedmin=n−k+1.{\textstyle d_{\min }=n-k+1.}The Reed–Solomon code is optimal in the sense that the minimum distance has the maximum value possible for a linear code of size (n,k); this is known as theSingleton bound. Such a code is also called amaximum distance separable (MDS) code. The error-correcting ability of a Reed–Solomon code is determined by its minimum distance, or equivalently, byn−k{\displaystyle n-k}, the measure of redundancy in the block. If the locations of the error symbols are not known in advance, then a Reed–Solomon code can correct up to(n−k)/2{\displaystyle (n-k)/2}erroneous symbols, i.e., it can correct half as many errors as there are redundant symbols added to the block. Sometimes error locations are known in advance (e.g., "side information" indemodulatorsignal-to-noise ratios)—these are callederasures. A Reed–Solomon code (like anyMDS code) is able to correct twice as many erasures as errors, and any combination of errors and erasures can be corrected as long as the relation2E+S≤n−kis satisfied, whereE{\displaystyle E}is the number of errors andS{\displaystyle S}is the number of erasures in the block. The theoretical error bound can be described via the following formula for theAWGNchannel forFSK:[11]Pb≈2m−12m−11n∑ℓ=t+1nℓ(nℓ)Psℓ(1−Ps)n−ℓ{\displaystyle P_{b}\approx {\frac {2^{m-1}}{2^{m}-1}}{\frac {1}{n}}\sum _{\ell =t+1}^{n}\ell {n \choose \ell }P_{s}^{\ell }(1-P_{s})^{n-\ell }}and for other modulation schemes:Pb≈1m1n∑ℓ=t+1nℓ(nℓ)Psℓ(1−Ps)n−ℓ{\displaystyle P_{b}\approx {\frac {1}{m}}{\frac {1}{n}}\sum _{\ell =t+1}^{n}\ell {n \choose \ell }P_{s}^{\ell }(1-P_{s})^{n-\ell }}wheret=12(dmin−1){\textstyle t={\frac {1}{2}}(d_{\min }-1)},Ps=1−(1−s)h{\displaystyle P_{s}=1-(1-s)^{h}},h=mlog2⁡M{\displaystyle h={\frac {m}{\log _{2}M}}},s{\displaystyle s}is the symbol error rate in uncoded AWGN case andM{\displaystyle M}is the modulation order. For practical uses of Reed–Solomon codes, it is common to use a finite fieldF{\displaystyle F}with2m{\displaystyle 2^{m}}elements. In this case, each symbol can be represented as anm{\displaystyle m}-bit value. The sender sends the data points as encoded blocks, and the number of symbols in the encoded block isn=2m−1{\displaystyle n=2^{m}-1}. 
Thus a Reed–Solomon code operating on 8-bit symbols hasn=28−1=255{\displaystyle n=2^{8}-1=255}symbols per block. (This is a very popular value because of the prevalence ofbyte-orientedcomputer systems.) The numberk{\displaystyle k}, withk<n{\displaystyle k<n}, ofdatasymbols in the block is a design parameter. A commonly used code encodesk=223{\displaystyle k=223}eight-bit data symbols plus 32 eight-bit parity symbols in ann=255{\displaystyle n=255}-symbol block; this is denoted as a(n,k)=(255,223){\displaystyle (n,k)=(255,223)}code, and is capable of correcting up to 16 symbol errors per block. The Reed–Solomon code properties discussed above make them especially well-suited to applications where errors occur inbursts. This is because it does not matter to the code how many bits in a symbol are in error — if multiple bits in a symbol are corrupted it only counts as a single error. Conversely, if a data stream is not characterized by error bursts or drop-outs but by random single bit errors, a Reed–Solomon code is usually a poor choice compared to a binary code. The Reed–Solomon code, like theconvolutional code, is a transparent code. This means that if the channel symbols have beeninvertedsomewhere along the line, the decoders will still operate. The result will be the inversion of the original data. However, the Reed–Solomon code loses its transparency when the code is shortened (see 'Remarks' at the end of this section). The "missing" bits in a shortened code need to be filled by either zeros or ones, depending on whether the data is complemented or not. (To put it another way, if the symbols are inverted, then the zero-fill needs to be inverted to a one-fill.) For this reason it is mandatory that the sense of the data (i.e., true or complemented) be resolved before Reed–Solomon decoding. Whether the Reed–Solomon code iscyclicor not depends on subtle details of the construction. In the original view of Reed and Solomon, where the codewords are the values of a polynomial, one can choose the sequence of evaluation points in such a way as to make the code cyclic. In particular, ifα{\displaystyle \alpha }is aprimitive rootof the fieldF{\displaystyle F}, then by definition all non-zero elements ofF{\displaystyle F}take the formαi{\displaystyle \alpha ^{i}}fori∈{1,…,q−1}{\displaystyle i\in \{1,\dots ,q-1\}}, whereq=|F|{\displaystyle q=|F|}. Each polynomialp{\displaystyle p}overF{\displaystyle F}gives rise to a codeword(p(α1),…,p(αq−1)){\displaystyle (p(\alpha ^{1}),\dots ,p(\alpha ^{q-1}))}. Since the functiona↦p(αa){\displaystyle a\mapsto p(\alpha a)}is also a polynomial of the same degree, this function gives rise to a codeword(p(α2),…,p(αq)){\displaystyle (p(\alpha ^{2}),\dots ,p(\alpha ^{q}))}; sinceαq=α1{\displaystyle \alpha ^{q}=\alpha ^{1}}holds, this codeword is thecyclic left-shiftof the original codeword derived fromp{\displaystyle p}. So choosing a sequence of primitive root powers as the evaluation points makes the original view Reed–Solomon codecyclic. Reed–Solomon codes in the BCH view are always cyclic becauseBCH codes are cyclic. Designers are not required to use the "natural" sizes of Reed–Solomon code blocks. A technique known as "shortening" can produce a smaller code of any desired size from a larger code. For example, the widely used (255,223) code can be converted to a (160,128) code by padding the unused portion of the source block with 95 binary zeroes and not transmitting them. At the decoder, the same portion of the block is loaded locally with binary zeroes. 
The QR code, Ver 3 (29×29) uses interleaved blocks. The message has 26 data bytes and is encoded using two Reed-Solomon code blocks. Each block is a (255,233) Reed Solomon code shortened to a (35,13) code. The Delsarte–Goethals–Seidel[12]theorem illustrates an example of an application of shortened Reed–Solomon codes. In parallel to shortening, a technique known aspuncturingallows omitting some of the encoded parity symbols. The decoders described in this section use the BCH view of a codeword as a sequence of coefficients. They use a fixed generator polynomial known to both encoder and decoder. Daniel Gorenstein and Neal Zierler developed a decoder that was described in a MIT Lincoln Laboratory report by Zierler in January 1960 and later in a paper in June 1961.[13][14]The Gorenstein–Zierler decoder and the related work on BCH codes are described in the bookError Correcting CodesbyW. Wesley Peterson(1961).[3][page needed] The transmitted message,(c0,…,ci,…,cn−1){\displaystyle (c_{0},\ldots ,c_{i},\ldots ,c_{n-1})}, is viewed as the coefficients of a polynomials(x)=∑i=0n−1cixi.{\displaystyle s(x)=\sum _{i=0}^{n-1}c_{i}x^{i}.} As a result of the Reed–Solomon encoding procedure,s(x) is divisible by the generator polynomialg(x)=∏j=1n−k(x−αj),{\displaystyle g(x)=\prod _{j=1}^{n-k}(x-\alpha ^{j}),}whereαis a primitive element. Sinces(x) is a multiple of the generatorg(x), it follows that it "inherits" all its roots:s(x)mod(x−αj)=g(x)mod(x−αj)=0.{\displaystyle s(x){\bmod {(}}x-\alpha ^{j})=g(x){\bmod {(}}x-\alpha ^{j})=0.}Therefore,s(αj)=0,j=1,2,…,n−k.{\displaystyle s(\alpha ^{j})=0,\ j=1,2,\ldots ,n-k.} The transmitted polynomial is corrupted in transit by an error polynomiale(x)=∑i=0n−1eixi{\displaystyle e(x)=\sum _{i=0}^{n-1}e_{i}x^{i}}to produce the received polynomialr(x)=s(x)+e(x).{\displaystyle r(x)=s(x)+e(x).} Coefficienteiwill be zero if there is no error at that power ofx, and nonzero if there is an error. If there areνerrors at distinct powersikofx, thene(x)=∑k=1νeikxik.{\displaystyle e(x)=\sum _{k=1}^{\nu }e_{i_{k}}x^{i_{k}}.} The goal of the decoder is to find the number of errors (ν), the positions of the errors (ik), and the error values at those positions (eik). From those,e(x) can be calculated and subtracted fromr(x) to get the originally sent messages(x). The decoder starts by evaluating the polynomial as received at pointsα1…αn−k{\displaystyle \alpha ^{1}\dots \alpha ^{n-k}}. We call the results of that evaluation the "syndromes"Sj. They are defined asSj=r(αj)=s(αj)+e(αj)=0+e(αj)=e(αj)=∑k=1νeik(αj)ik,j=1,2,…,n−k.{\displaystyle {\begin{aligned}S_{j}&=r(\alpha ^{j})=s(\alpha ^{j})+e(\alpha ^{j})=0+e(\alpha ^{j})\\&=e(\alpha ^{j})\\&=\sum _{k=1}^{\nu }e_{i_{k}}{(\alpha ^{j})}^{i_{k}},\quad j=1,2,\ldots ,n-k.\end{aligned}}}Note thats(αj)=0{\displaystyle s(\alpha ^{j})=0}becauses(x){\displaystyle s(x)}has roots atαj{\displaystyle \alpha ^{j}}, as shown in the previous section. The advantage of looking at the syndromes is that the message polynomial drops out. In other words, the syndromes only relate to the error and are unaffected by the actual contents of the message being transmitted. If the syndromes are all zero, the algorithm stops here and reports that the message was not corrupted in transit. 
For convenience, define theerror locatorsXkanderror valuesYkasXk=αik,Yk=eik.{\displaystyle X_{k}=\alpha ^{i_{k}},\quad Y_{k}=e_{i_{k}}.} Then the syndromes can be written in terms of these error locators and error values asSj=∑k=1νYkXkj.{\displaystyle S_{j}=\sum _{k=1}^{\nu }Y_{k}X_{k}^{j}.} This definition of the syndrome values is equivalent to the previous since(αj)ik=αj⋅ik=(αik)j=Xkj{\displaystyle {(\alpha ^{j})}^{i_{k}}=\alpha ^{j\cdot i_{k}}={(\alpha ^{i_{k}})}^{j}=X_{k}^{j}}. The syndromes give a system ofn−k≥ 2νequations in 2νunknowns, but that system of equations is nonlinear in theXkand does not have an obvious solution. However, if theXkwere known (see below), then the syndrome equations provide a linear system of equations[X11X21⋯Xν1X12X22⋯Xν2⋮⋮⋱⋮X1n−kX2n−k⋯Xνn−k][Y1Y2⋮Yν]=[S1S2⋮Sn−k],{\displaystyle {\begin{bmatrix}X_{1}^{1}&X_{2}^{1}&\cdots &X_{\nu }^{1}\\X_{1}^{2}&X_{2}^{2}&\cdots &X_{\nu }^{2}\\\vdots &\vdots &\ddots &\vdots \\X_{1}^{n-k}&X_{2}^{n-k}&\cdots &X_{\nu }^{n-k}\\\end{bmatrix}}{\begin{bmatrix}Y_{1}\\Y_{2}\\\vdots \\Y_{\nu }\end{bmatrix}}={\begin{bmatrix}S_{1}\\S_{2}\\\vdots \\S_{n-k}\end{bmatrix}},}which can easily be solved for theYkerror values. Consequently, the problem is finding theXk, because then the leftmost matrix would be known, and both sides of the equation could be multiplied by its inverse, yielding Yk In the variant of this algorithm where the locations of the errors are already known (when it is being used as anerasure code), this is the end. The error locations (Xk) are already known by some other method (for example, in an FM transmission, the sections where the bitstream was unclear or overcome with interference are probabilistically determinable from frequency analysis). In this scenario, up ton−k{\displaystyle n-k}errors can be corrected. The rest of the algorithm serves to locate the errors and will require syndrome values up to2ν{\displaystyle 2\nu }, instead of just theν{\displaystyle \nu }used thus far. This is why twice as many error-correcting symbols need to be added as can be corrected without knowing their locations. There is a linear recurrence relation that gives rise to a system of linear equations. Solving those equations identifies those error locationsXk. Define theerror locator polynomialΛ(x)asΛ(x)=∏k=1ν(1−xXk)=1+Λ1x1+Λ2x2+⋯+Λνxν.{\displaystyle \Lambda (x)=\prod _{k=1}^{\nu }(1-xX_{k})=1+\Lambda _{1}x^{1}+\Lambda _{2}x^{2}+\cdots +\Lambda _{\nu }x^{\nu }.} The zeros ofΛ(x)are the reciprocalsXk−1{\displaystyle X_{k}^{-1}}. This follows from the above product notation construction, since ifx=Xk−1{\displaystyle x=X_{k}^{-1}}, then one of the multiplied terms will be zero,(1−Xk−1⋅Xk)=1−1=0{\displaystyle (1-X_{k}^{-1}\cdot X_{k})=1-1=0}, making the whole polynomial evaluate to zero:Λ(Xk−1)=0.{\displaystyle \Lambda (X_{k}^{-1})=0.} Letj{\displaystyle j}be any integer such that1≤j≤ν{\displaystyle 1\leq j\leq \nu }. 
Multiply both sides byYkXkj+ν{\displaystyle Y_{k}X_{k}^{j+\nu }}, and it will still be zero:YkXkj+νΛ(Xk−1)=0,YkXkj+ν(1+Λ1Xk−1+Λ2Xk−2+⋯+ΛνXk−ν)=0,YkXkj+ν+Λ1YkXkj+νXk−1+Λ2YkXkj+νXk−2+⋯+ΛνYkXkj+νXk−ν=0,YkXkj+ν+Λ1YkXkj+ν−1+Λ2YkXkj+ν−2+⋯+ΛνYkXkj=0.{\displaystyle {\begin{aligned}&Y_{k}X_{k}^{j+\nu }\Lambda (X_{k}^{-1})=0,\\&Y_{k}X_{k}^{j+\nu }(1+\Lambda _{1}X_{k}^{-1}+\Lambda _{2}X_{k}^{-2}+\cdots +\Lambda _{\nu }X_{k}^{-\nu })=0,\\&Y_{k}X_{k}^{j+\nu }+\Lambda _{1}Y_{k}X_{k}^{j+\nu }X_{k}^{-1}+\Lambda _{2}Y_{k}X_{k}^{j+\nu }X_{k}^{-2}+\cdots +\Lambda _{\nu }Y_{k}X_{k}^{j+\nu }X_{k}^{-\nu }=0,\\&Y_{k}X_{k}^{j+\nu }+\Lambda _{1}Y_{k}X_{k}^{j+\nu -1}+\Lambda _{2}Y_{k}X_{k}^{j+\nu -2}+\cdots +\Lambda _{\nu }Y_{k}X_{k}^{j}=0.\end{aligned}}} Sum fork= 1 toν, and it will still be zero:∑k=1ν(YkXkj+ν+Λ1YkXkj+ν−1+Λ2YkXkj+ν−2+⋯+ΛνYkXkj)=0.{\displaystyle \sum _{k=1}^{\nu }(Y_{k}X_{k}^{j+\nu }+\Lambda _{1}Y_{k}X_{k}^{j+\nu -1}+\Lambda _{2}Y_{k}X_{k}^{j+\nu -2}+\cdots +\Lambda _{\nu }Y_{k}X_{k}^{j})=0.} Collect each term into its own sum:(∑k=1νYkXkj+ν)+(∑k=1νΛ1YkXkj+ν−1)+(∑k=1νΛ2YkXkj+ν−2)+⋯+(∑k=1νΛνYkXkj)=0.{\displaystyle \left(\sum _{k=1}^{\nu }Y_{k}X_{k}^{j+\nu }\right)+\left(\sum _{k=1}^{\nu }\Lambda _{1}Y_{k}X_{k}^{j+\nu -1}\right)+\left(\sum _{k=1}^{\nu }\Lambda _{2}Y_{k}X_{k}^{j+\nu -2}\right)+\cdots +\left(\sum _{k=1}^{\nu }\Lambda _{\nu }Y_{k}X_{k}^{j}\right)=0.} Extract the constant values ofΛ{\displaystyle \Lambda }that are unaffected by the summation:(∑k=1νYkXkj+ν)+Λ1(∑k=1νYkXkj+ν−1)+Λ2(∑k=1νYkXkj+ν−2)+⋯+Λν(∑k=1νYkXkj)=0.{\displaystyle \left(\sum _{k=1}^{\nu }Y_{k}X_{k}^{j+\nu }\right)+\Lambda _{1}\left(\sum _{k=1}^{\nu }Y_{k}X_{k}^{j+\nu -1}\right)+\Lambda _{2}\left(\sum _{k=1}^{\nu }Y_{k}X_{k}^{j+\nu -2}\right)+\cdots +\Lambda _{\nu }\left(\sum _{k=1}^{\nu }Y_{k}X_{k}^{j}\right)=0.} These summations are now equivalent to the syndrome values, which we know and can substitute in. This therefore reduces toSj+ν+Λ1Sj+ν−1+⋯+Λν−1Sj+1+ΛνSj=0.{\displaystyle S_{j+\nu }+\Lambda _{1}S_{j+\nu -1}+\cdots +\Lambda _{\nu -1}S_{j+1}+\Lambda _{\nu }S_{j}=0.} SubtractingSj+ν{\displaystyle S_{j+\nu }}from both sides yieldsSjΛν+Sj+1Λν−1+⋯+Sj+ν−1Λ1=−Sj+ν.{\displaystyle S_{j}\Lambda _{\nu }+S_{j+1}\Lambda _{\nu -1}+\cdots +S_{j+\nu -1}\Lambda _{1}=-S_{j+\nu }.} Recall thatjwas chosen to be any integer between 1 andvinclusive, and this equivalence is true for all such values. Therefore, we havevlinear equations, not just one. This system of linear equations can therefore be solved for the coefficients Λiof the error-location polynomial:[S1S2⋯SνS2S3⋯Sν+1⋮⋮⋱⋮SνSν+1⋯S2ν−1][ΛνΛν−1⋮Λ1]=[−Sν+1−Sν+2⋮−Sν+ν].{\displaystyle {\begin{bmatrix}S_{1}&S_{2}&\cdots &S_{\nu }\\S_{2}&S_{3}&\cdots &S_{\nu +1}\\\vdots &\vdots &\ddots &\vdots \\S_{\nu }&S_{\nu +1}&\cdots &S_{2\nu -1}\end{bmatrix}}{\begin{bmatrix}\Lambda _{\nu }\\\Lambda _{\nu -1}\\\vdots \\\Lambda _{1}\end{bmatrix}}={\begin{bmatrix}-S_{\nu +1}\\-S_{\nu +2}\\\vdots \\-S_{\nu +\nu }\end{bmatrix}}.}The above assumes that the decoder knows the number of errorsν, but that number has not been determined yet. The PGZ decoder does not determineνdirectly but rather searches for it by trying successive values. The decoder first assumes the largest value for a trialνand sets up the linear system for that value. If the equations can be solved (i.e., the matrix determinant is nonzero), then that trial value is the number of errors. 
If the linear system cannot be solved, then the trialνis reduced by one and the next smaller system is examined.[15] Use the coefficients Λifound in the last step to build the error location polynomial. The roots of the error location polynomial can be found by exhaustive search. The error locatorsXkare the reciprocals of those roots. The order of coefficients of the error location polynomial can be reversed, in which case the roots of that reversed polynomial are the error locatorsXk{\displaystyle X_{k}}(not their reciprocalsXk−1{\displaystyle X_{k}^{-1}}).Chien searchis an efficient implementation of this step. Once the error locatorsXkare known, the error values can be determined. This can be done by direct solution forYkin theerror equationsmatrix given above, or using theForney algorithm. Calculateikby taking the log baseα{\displaystyle \alpha }ofXk. This is generally done using a precomputed lookup table. Finally,e(x) is generated fromikandeikand then is subtracted fromr(x) to get the originally sent messages(x), with errors corrected. Consider the Reed–Solomon code defined inGF(929)withα= 3andt= 4(this is used inPDF417barcodes) for a RS(7,3) code. The generator polynomial isg(x)=(x−3)(x−32)(x−33)(x−34)=x4+809x3+723x2+568x+522.{\displaystyle g(x)=(x-3)(x-3^{2})(x-3^{3})(x-3^{4})=x^{4}+809x^{3}+723x^{2}+568x+522.}If the message polynomial isp(x) = 3x2+ 2x+ 1, then a systematic codeword is encoded as follows:sr(x)=p(x)xtmodg(x)=547x3+738x2+442x+455,{\displaystyle s_{r}(x)=p(x)\,x^{t}{\bmod {g}}(x)=547x^{3}+738x^{2}+442x+455,}s(x)=p(x)xt−sr(x)=3x6+2x5+1x4+382x3+191x2+487x+474.{\displaystyle s(x)=p(x)\,x^{t}-s_{r}(x)=3x^{6}+2x^{5}+1x^{4}+382x^{3}+191x^{2}+487x+474.}Errors in transmission might cause this to be received instead:r(x)=s(x)+e(x)=3x6+2x5+123x4+456x3+191x2+487x+474.{\displaystyle r(x)=s(x)+e(x)=3x^{6}+2x^{5}+123x^{4}+456x^{3}+191x^{2}+487x+474.}The syndromes are calculated by evaluatingrat powers ofα:S1=r(31)=3⋅36+2⋅35+123⋅34+456⋅33+191⋅32+487⋅3+474=732,{\displaystyle S_{1}=r(3^{1})=3\cdot 3^{6}+2\cdot 3^{5}+123\cdot 3^{4}+456\cdot 3^{3}+191\cdot 3^{2}+487\cdot 3+474=732,}S2=r(32)=637,S3=r(33)=762,S4=r(34)=925,{\displaystyle S_{2}=r(3^{2})=637,\quad S_{3}=r(3^{3})=762,\quad S_{4}=r(3^{4})=925,}yielding the system[732637637762][Λ2Λ1]=[−762−925]=[167004].{\displaystyle {\begin{bmatrix}732&637\\637&762\end{bmatrix}}{\begin{bmatrix}\Lambda _{2}\\\Lambda _{1}\end{bmatrix}}={\begin{bmatrix}-762\\-925\end{bmatrix}}={\begin{bmatrix}167\\004\end{bmatrix}}.} UsingGaussian elimination,[001000000001][Λ2Λ1]=[329821],{\displaystyle {\begin{bmatrix}001&000\\000&001\end{bmatrix}}{\begin{bmatrix}\Lambda _{2}\\\Lambda _{1}\end{bmatrix}}={\begin{bmatrix}329\\821\end{bmatrix}},}soΛ(x)=329x2+821x+001,{\displaystyle \Lambda (x)=329x^{2}+821x+001,}with rootsx1= 757 = 3−3andx2= 562 = 3−4. The coefficients can be reversed:R(x)=001x2+821x+329,{\displaystyle R(x)=001x^{2}+821x+329,}to produce roots 27 = 33and 81 = 34with positive exponents, but typically this isn't used. The logarithm of the inverted roots corresponds to the error locations (right to left, location 0 is the last term in the codeword). 
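The syndrome computation, the 2 × 2 linear system and the root search of this worked example can be reproduced with a short Python sketch (illustrative only; a practical decoder would use Chien search and typically operate over GF(2^m) rather than a prime field).

```python
p = 929
alpha = 3
# Received polynomial from the worked example, low-order coefficients first:
# r(x) = 3x^6 + 2x^5 + 123x^4 + 456x^3 + 191x^2 + 487x + 474
r = [474, 487, 191, 456, 123, 2, 3]

def poly_eval(coeffs, x):
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % p
    return acc

# Syndromes S_j = r(alpha^j), j = 1..n-k
S = [poly_eval(r, pow(alpha, j, p)) for j in range(1, 5)]
print(S)                       # expected [732, 637, 762, 925]

# Assume nu = 2 errors and solve, by Cramer's rule over GF(p),
#   [S1 S2] [L2]   [-S3]
#   [S2 S3] [L1] = [-S4]
det = (S[0] * S[2] - S[1] * S[1]) % p
det_inv = pow(det, -1, p)
L2 = ((-S[2]) * S[2] - S[1] * (-S[3])) * det_inv % p
L1 = (S[0] * (-S[3]) - (-S[2]) * S[1]) * det_inv % p
print(L1, L2)                  # expected 821 and 329

# Error locator Lambda(x) = 1 + L1*x + L2*x^2; find its roots by brute force
roots = [x for x in range(p) if (1 + L1 * x + L2 * x * x) % p == 0]
print(roots)                   # expected [562, 757]

# Error positions are the discrete logs (base alpha) of the reciprocal roots
positions = []
for root in roots:
    x_k = pow(root, -1, p)     # X_k = alpha^(i_k)
    positions.append(next(i for i in range(p) if pow(alpha, i, p) == x_k))
print(sorted(positions))       # expected [3, 4]
```

The reported positions 3 and 4 correspond to the corrupted coefficients of x^3 and x^4 in r(x), in agreement with the worked example.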
To calculate the error values, apply theForney algorithm:Ω(x)=S(x)Λ(x)modx4=546x+732,{\displaystyle \Omega (x)=S(x)\Lambda (x){\bmod {x}}^{4}=546x+732,}Λ′(x)=658x+821,{\displaystyle \Lambda '(x)=658x+821,}e1=−Ω(x1)/Λ′(x1)=074,{\displaystyle e_{1}=-\Omega (x_{1})/\Lambda '(x_{1})=074,}e2=−Ω(x2)/Λ′(x2)=122.{\displaystyle e_{2}=-\Omega (x_{2})/\Lambda '(x_{2})=122.} Subtractinge1x3+e2x4=74x3+122x4{\displaystyle e_{1}x^{3}+e_{2}x^{4}=74x^{3}+122x^{4}}from the received polynomialr(x) reproduces the original codewords. TheBerlekamp–Massey algorithmis an alternate iterative procedure for finding the error locator polynomial. During each iteration, it calculates a discrepancy based on a current instance of Λ(x) with an assumed number of errorse:Δ=Si+Λ1Si−1+⋯+ΛeSi−e{\displaystyle \Delta =S_{i}+\Lambda _{1}\ S_{i-1}+\cdots +\Lambda _{e}\ S_{i-e}}and then adjusts Λ(x) andeso that a recalculated Δ would be zero. The articleBerlekamp–Massey algorithmhas a detailed description of the procedure. In the following example,C(x) is used to represent Λ(x). Using the same data as the Peterson Gorenstein Zierler example above: The final value ofCis the error locator polynomial, Λ(x). Another iterative method for calculating both the error locator polynomial and the error value polynomial is based on Sugiyama's adaptation of theextended Euclidean algorithm. DefineS(x), Λ(x), and Ω(x) fortsyndromes andeerrors:S(x)=Stxt−1+St−1xt−2+⋯+S2x+S1Λ(x)=Λexe+Λe−1xe−1+⋯+Λ1x+1Ω(x)=Ωexe+Ωe−1xe−1+⋯+Ω1x+Ω0{\displaystyle {\begin{aligned}S(x)&=S_{t}x^{t-1}+S_{t-1}x^{t-2}+\cdots +S_{2}x+S_{1}\\[1ex]\Lambda (x)&=\Lambda _{e}x^{e}+\Lambda _{e-1}x^{e-1}+\cdots +\Lambda _{1}x+1\\[1ex]\Omega (x)&=\Omega _{e}x^{e}+\Omega _{e-1}x^{e-1}+\cdots +\Omega _{1}x+\Omega _{0}\end{aligned}}} The key equation is:Λ(x)S(x)=Q(x)xt+Ω(x){\displaystyle \Lambda (x)S(x)=Q(x)x^{t}+\Omega (x)} Fort= 6 ande= 3:[Λ3S6x8Λ2S6+Λ3S5x7Λ1S6+Λ2S5+Λ3S4x6S6+Λ1S5+Λ2S4+Λ3S3x5S5+Λ1S4+Λ2S3+Λ3S2x4S4+Λ1S3+Λ2S2+Λ3S1x3S3+Λ1S2+Λ2S1x2S2+Λ1S1xS1]=[Q2x8Q1x7Q0x6000Ω2x2Ω1xΩ0]{\displaystyle {\begin{bmatrix}\Lambda _{3}S_{6}&x^{8}\\\Lambda _{2}S_{6}+\Lambda _{3}S_{5}&x^{7}\\\Lambda _{1}S_{6}+\Lambda _{2}S_{5}+\Lambda _{3}S_{4}&x^{6}\\S_{6}+\Lambda _{1}S_{5}+\Lambda _{2}S_{4}+\Lambda _{3}S_{3}&x^{5}\\S_{5}+\Lambda _{1}S_{4}+\Lambda _{2}S_{3}+\Lambda _{3}S_{2}&x^{4}\\S_{4}+\Lambda _{1}S_{3}+\Lambda _{2}S_{2}+\Lambda _{3}S_{1}&x^{3}\\S_{3}+\Lambda _{1}S_{2}+\Lambda _{2}S_{1}&x^{2}\\S_{2}+\Lambda _{1}S_{1}&x\\S_{1}\end{bmatrix}}={\begin{bmatrix}Q_{2}x^{8}\\Q_{1}x^{7}\\Q_{0}x^{6}\\0\\0\\0\\\Omega _{2}x^{2}\\\Omega _{1}x\\\Omega _{0}\end{bmatrix}}} The middle terms are zero due to the relationship between Λ and syndromes. The extended Euclidean algorithm can find a series of polynomials of the form where the degree ofRdecreases asiincreases. Once the degree ofRi(x) <t/2, then B(x) andQ(x) don't need to be saved, so the algorithm becomes: to set low order term of Λ(x) to 1, divide Λ(x) and Ω(x) byAi(0): Ai(0) is the constant (low order) term of Ai. Using the same data as the Peterson–Gorenstein–Zierler example above: A discrete Fourier transform can be used for decoding.[16]To avoid conflict with syndrome names, letc(x) =s(x) the encoded codeword.r(x) ande(x) are the same as above. DefineC(x),E(x), andR(x) as the discrete Fourier transforms ofc(x),e(x), andr(x). Sincer(x) =c(x) +e(x), and since a discrete Fourier transform is a linear operator,R(x) =C(x) +E(x). Transformr(x) toR(x) using discrete Fourier transform. 
Since the calculation for a discrete Fourier transform is the same as the calculation for syndromes,tcoefficients ofR(x) andE(x) are the same as the syndromes:Rj=Ej=Sj=r(αj)for1≤j≤t{\displaystyle R_{j}=E_{j}=S_{j}=r(\alpha ^{j})\qquad {\text{for }}1\leq j\leq t} UseR1{\displaystyle R_{1}}throughRt{\displaystyle R_{t}}as syndromes (they're the same) and generate the error locator polynomial using the methods from any of the above decoders. Letv= number of errors. GenerateE(x) using the known coefficientsE1{\displaystyle E_{1}}toEt{\displaystyle E_{t}}, the error locator polynomial, and these formulasE0=−1Λv(Ev+Λ1Ev−1+⋯+Λv−1E1)Ej=−(Λ1Ej−1+Λ2Ej−2+⋯+ΛvEj−v)fort<j<n{\displaystyle {\begin{aligned}E_{0}&=-{\frac {1}{\Lambda _{v}}}(E_{v}+\Lambda _{1}E_{v-1}+\cdots +\Lambda _{v-1}E_{1})\\E_{j}&=-(\Lambda _{1}E_{j-1}+\Lambda _{2}E_{j-2}+\cdots +\Lambda _{v}E_{j-v})&{\text{for }}t<j<n\end{aligned}}} Then calculateC(x) =R(x) −E(x) and take the inverse transform (polynomial interpolation) ofC(x) to producec(x). TheSingleton boundstates that the minimum distancedof a linear block code of size (n,k) is upper-bounded byn-k+ 1. The distancedwas usually understood to limit the error-correction capability to⌊(d- 1) / 2⌋. The Reed–Solomon code achieves this bound with equality, and can thus correct up to⌊(n-k) / 2⌋errors. However, this error-correction bound is not exact. In 1999,Madhu SudanandVenkatesan Guruswamiat MIT published "Improved Decoding of Reed–Solomon and Algebraic-Geometry Codes" introducing an algorithm that allowed for the correction of errors beyond half the minimum distance of the code.[17]It applies to Reed–Solomon codes and more generally toalgebraic geometric codes. This algorithm produces a list of codewords (it is alist-decodingalgorithm) and is based on interpolation and factorization of polynomials overGF(2m)and its extensions. In 2023, building on three exciting[according to whom?]works,[18][19][20]coding theorists showed that Reed-Solomon codes defined over random evaluation points can actually achievelist decodingcapacity (up ton-kerrors) over linear size alphabets with high probability. However, this result is combinatorial rather than algorithmic.[citation needed] The algebraic decoding methods described above are hard-decision methods, which means that for every symbol a hard decision is made about its value. For example, a decoder could associate with each symbol an additional value corresponding to the channeldemodulator's confidence in the correctness of the symbol. The advent ofLDPCandturbo codes, which employ iteratedsoft-decisionbelief propagation decoding methods to achieve error-correction performance close to thetheoretical limit, has spurred interest in applying soft-decision decoding to conventional algebraic codes. In 2003, Ralf Koetter andAlexander Vardypresented a polynomial-time soft-decision algebraic list-decoding algorithm for Reed–Solomon codes, which was based upon the work by Sudan and Guruswami.[21]In 2016, Steven J. Franke and Joseph H. Taylor published a novel soft-decision decoder.[22] Here we present a simpleMATLABimplementation for an encoder. Now the decoding part: The decoders described in this section use the Reed Solomon original view of a codeword as a sequence of polynomial values where the polynomial is based on the message to be encoded. The same set of fixed values are used by the encoder and decoder, and the decoder recovers the encoding polynomial (and optionally an error locating polynomial) from the received message. 
Reed and Solomon described a theoretical decoder that corrected errors by finding the most popular message polynomial.[1]The decoder only knows the set of valuesa1{\displaystyle a_{1}}toan{\displaystyle a_{n}}and which encoding method was used to generate the codeword's sequence of values. The original message, the polynomial, and any errors are unknown. A decoding procedure could use a method like Lagrange interpolation on various subsets of n codeword values taken k at a time to repeatedly produce potential polynomials, until a sufficient number of matching polynomials are produced to reasonably eliminate any errors in the received codeword. Once a polynomial is determined, then any errors in the codeword can be corrected, by recalculating the corresponding codeword values. Unfortunately, in all but the simplest of cases, there are too many subsets, so the algorithm is impractical. The number of subsets is thebinomial coefficient,(nk)=n!(n−k)!k!{\textstyle {\binom {n}{k}}={n! \over (n-k)!k!}}, and the number of subsets is infeasible for even modest codes. For a(255,249)code that can correct 3 errors, the naïve theoretical decoder would examine 359 billion subsets.[citation needed] In 1986, a decoder known as theBerlekamp–Welch algorithmwas developed as a decoder that is able to recover the original message polynomial as well as an error "locator" polynomial that produces zeroes for the input values that correspond to errors, with time complexityO(n3), wherenis the number of values in a message. The recovered polynomial is then used to recover (recalculate as needed) the original message. Using RS(7,3), GF(929), and the set of evaluation pointsai=i− 1 If the message polynomial is The codeword is Errors in transmission might cause this to be received instead. The key equation is: Assume maximum number of errors:e= 2. The key equation becomes: [001000928000000000000006006928928928928928123246928927925921913456439928926920902848057228928925913865673086430928924904804304121726928923893713562][e0e1q0q1q2q3q4]=[000923437541017637289]{\displaystyle {\begin{bmatrix}001&000&928&000&000&000&000\\006&006&928&928&928&928&928\\123&246&928&927&925&921&913\\456&439&928&926&920&902&848\\057&228&928&925&913&865&673\\086&430&928&924&904&804&304\\121&726&928&923&893&713&562\end{bmatrix}}{\begin{bmatrix}e_{0}\\e_{1}\\q_{0}\\q_{1}\\q_{2}\\q_{3}\\q_{4}\end{bmatrix}}={\begin{bmatrix}000\\923\\437\\541\\017\\637\\289\end{bmatrix}}} UsingGaussian elimination: [001000000000000000000000001000000000000000000000001000000000000000000000001000000000000000000000001000000000000000000000001000000000000000000000001][e0e1q0q1q2q3q4]=[006924006007009916003]{\displaystyle {\begin{bmatrix}001&000&000&000&000&000&000\\000&001&000&000&000&000&000\\000&000&001&000&000&000&000\\000&000&000&001&000&000&000\\000&000&000&000&001&000&000\\000&000&000&000&000&001&000\\000&000&000&000&000&000&001\end{bmatrix}}{\begin{bmatrix}e_{0}\\e_{1}\\q_{0}\\q_{1}\\q_{2}\\q_{3}\\q_{4}\end{bmatrix}}={\begin{bmatrix}006\\924\\006\\007\\009\\916\\003\end{bmatrix}}} RecalculateP(x)whereE(x) = 0 : {2, 3}to correctbresulting in the corrected codeword: In 2002, an improved decoder was developed by Shuhong Gao, based on the extended Euclid algorithm.[23] To duplicate the polynomials generated by Berlekamp Welsh, divideQ(x) andE(x) by most significant coefficient ofE(x) = 708. RecalculateP(x)whereE(x) = 0 : {2, 3}to correctbresulting in the corrected codeword:
https://en.wikipedia.org/wiki/Reed%E2%80%93Solomon_code
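The linear system used by the Berlekamp–Welch decoder above can also be generated and solved programmatically. The Python sketch below is illustrative rather than a complete decoder: it assumes the maximum number of errors e so that the square system has a unique solution (a full decoder retries with a smaller e when it does not), builds the equations b_i·E(a_i) = Q(a_i) with E(x) monic of degree e and deg Q < e + k, and solves them by Gaussian elimination modulo p. With these conventions the row generated for a_i = 2 and received value b_i = 123 is [123, 246, 928, 927, 925, 921, 913], matching the third row of the 7 by 7 system shown above; the recovered message polynomial is then the quotient Q(x)/E(x).

def solve_mod(A, y, p):
    """Gauss-Jordan elimination over GF(p) for a square system with a unique solution."""
    n = len(A)
    M = [row[:] + [y[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col])   # find a nonzero pivot
        M[col], M[piv] = M[piv], M[col]
        inv = pow(M[col][col], -1, p)
        M[col] = [v * inv % p for v in M[col]]
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col]
                M[r] = [(a - f * b) % p for a, b in zip(M[r], M[col])]
    return [M[i][n] for i in range(n)]

def berlekamp_welch_system(points, p, k, e):
    """Build and solve b_i*E(a_i) = Q(a_i) for a received word given as
    (a_i, b_i) pairs; the unknowns are e_0..e_{e-1} and q_0..q_{e+k-1}.
    Returns (E, Q), both lowest degree first, with E monic of degree e."""
    rows, rhs = [], []
    for a, b in points:
        row = [b * pow(a, j, p) % p for j in range(e)]        # e_j columns
        row += [(-pow(a, j, p)) % p for j in range(e + k)]    # q_j columns
        rows.append(row)
        rhs.append((-b * pow(a, e, p)) % p)
    sol = solve_mod(rows, rhs, p)
    return sol[:e] + [1], sol[e:]

# Hypothetical usage for RS(7,3) over GF(929) with a_i = 0..6:
# E, Q = berlekamp_welch_system(list(zip(range(7), received)), p=929, k=3, e=2)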
Incoding theory, theKraft–McMillan inequalitygives a necessary and sufficient condition for the existence of aprefix code[1](in Leon G. Kraft's version) or a uniquely decodable code (inBrockway McMillan's version) for a given set ofcodewordlengths. Its applications to prefix codes and trees often find use incomputer scienceandinformation theory. The prefix code can contain either finitely many or infinitely many codewords. Kraft's inequality was published inKraft (1949). However, Kraft's paper discusses only prefix codes, and attributes the analysis leading to the inequality toRaymond Redheffer. The result was independently discovered inMcMillan (1956). McMillan proves the result for the general case of uniquely decodable codes, and attributes the version for prefix codes to a spoken observation in 1955 byJoseph Leo Doob. Kraft's inequality limits the lengths of codewords in aprefix code: if one takes anexponentialof the length of each valid codeword, the resulting set of values must look like aprobability mass function, that is, it must have total measure less than or equal to one. Kraft's inequality can be thought of in terms of a constrained budget to be spent on codewords, with shorter codewords being more expensive. Among the useful properties following from the inequality are the following statements: Let each source symbol from the alphabet be encoded into a uniquely decodable code over an alphabet of sizer{\displaystyle r}with codeword lengths Then Conversely, for a given set of natural numbersℓ1,ℓ2,…,ℓn{\displaystyle \ell _{1},\ell _{2},\ldots ,\ell _{n}}satisfying the above inequality, there exists a uniquely decodable code over an alphabet of sizer{\displaystyle r}with those codeword lengths. Anybinary treecan be viewed as defining a prefix code for theleavesof the tree. Kraft's inequality states that Here the sum is taken over the leaves of the tree, i.e. the nodes without any children. The depth is the distance to the root node. In the tree to the right, this sum is First, let us show that the Kraft inequality holds whenever the code forS{\displaystyle S}is a prefix code. Suppose thatℓ1⩽ℓ2⩽⋯⩽ℓn{\displaystyle \ell _{1}\leqslant \ell _{2}\leqslant \cdots \leqslant \ell _{n}}. LetA{\displaystyle A}be the fullr{\displaystyle r}-ary tree of depthℓn{\displaystyle \ell _{n}}(thus, every node ofA{\displaystyle A}at level<ℓn{\displaystyle <\ell _{n}}hasr{\displaystyle r}children, while the nodes at levelℓn{\displaystyle \ell _{n}}are leaves). Every word of lengthℓ⩽ℓn{\displaystyle \ell \leqslant \ell _{n}}over anr{\displaystyle r}-ary alphabet corresponds to a node in this tree at depthℓ{\displaystyle \ell }. Thei{\displaystyle i}th word in theprefix codecorresponds to a nodevi{\displaystyle v_{i}}; letAi{\displaystyle A_{i}}be the set of all leaf nodes (i.e. of nodes at depthℓn{\displaystyle \ell _{n}}) in the subtree ofA{\displaystyle A}rooted atvi{\displaystyle v_{i}}. That subtree being of heightℓn−ℓi{\displaystyle \ell _{n}-\ell _{i}}, we have Since the code is a prefix code, those subtrees cannot share any leaves, which means that Thus, given that the total number of nodes at depthℓn{\displaystyle \ell _{n}}isrℓn{\displaystyle r^{\ell _{n}}}, we have from which the result follows. 
Conversely, given any ordered sequence ofn{\displaystyle n}natural numbers, satisfying the Kraft inequality, one can construct a prefix code with codeword lengths equal to eachℓi{\displaystyle \ell _{i}}by choosing a word of lengthℓi{\displaystyle \ell _{i}}arbitrarily, then ruling out all words of greater length that have it as a prefix. There again, we shall interpret this in terms of leaf nodes of anr{\displaystyle r}-ary tree of depthℓn{\displaystyle \ell _{n}}. First choose any node from the full tree at depthℓ1{\displaystyle \ell _{1}}; it corresponds to the first word of our new code. Since we are building a prefix code, all the descendants of this node (i.e., all words that have this first word as a prefix) become unsuitable for inclusion in the code. We consider the descendants at depthℓn{\displaystyle \ell _{n}}(i.e., the leaf nodes among the descendants); there arerℓn−ℓ1{\displaystyle r^{\ell _{n}-\ell _{1}}}such descendant nodes that are removed from consideration. The next iteration picks a (surviving) node at depthℓ2{\displaystyle \ell _{2}}and removesrℓn−ℓ2{\displaystyle r^{\ell _{n}-\ell _{2}}}further leaf nodes, and so on. Aftern{\displaystyle n}iterations, we have removed a total of nodes. The question is whether we need to remove more leaf nodes than we actually have available —rℓn{\displaystyle r^{\ell _{n}}}in all — in the process of building the code. Since the Kraft inequality holds, we have indeed and thus a prefix code can be built. Note that as the choice of nodes at each step is largely arbitrary, many different suitable prefix codes can be built, in general. Now we will prove that the Kraft inequality holds wheneverS{\displaystyle S}is a uniquely decodable code. (The converse needs not be proven, since we have already proven it for prefix codes, which is a stronger claim.) The proof is by Jack I. Karush.[3][4] We need only prove it when there are finitely many codewords. If there are infinitely many codewords, then any finite subset of it is also uniquely decodable, so it satisfies the Kraft–McMillan inequality. Taking the limit, we have the inequality for the full code. DenoteC=∑i=1nr−li{\displaystyle C=\sum _{i=1}^{n}r^{-l_{i}}}. The idea of the proof is to get an upper bound onCm{\displaystyle C^{m}}form∈N{\displaystyle m\in \mathbb {N} }and show that it can only hold for allm{\displaystyle m}ifC≤1{\displaystyle C\leq 1}. RewriteCm{\displaystyle C^{m}}as Consider allm-powersSm{\displaystyle S^{m}}, in the form of wordssi1si2…sim{\displaystyle s_{i_{1}}s_{i_{2}}\dots s_{i_{m}}}, wherei1,i2,…,im{\displaystyle i_{1},i_{2},\dots ,i_{m}}are indices between 1 andn{\displaystyle n}. Note that, sinceSwas assumed to uniquely decodable,si1si2…sim=sj1sj2…sjm{\displaystyle s_{i_{1}}s_{i_{2}}\dots s_{i_{m}}=s_{j_{1}}s_{j_{2}}\dots s_{j_{m}}}impliesi1=j1,i2=j2,…,im=jm{\displaystyle i_{1}=j_{1},i_{2}=j_{2},\dots ,i_{m}=j_{m}}. This means that each summand corresponds to exactly one word inSm{\displaystyle S^{m}}. This allows us to rewrite the equation to whereqℓ{\displaystyle q_{\ell }}is the number of codewords inSm{\displaystyle S^{m}}of lengthℓ{\displaystyle \ell }andℓmax{\displaystyle \ell _{max}}is the length of the longest codeword inS{\displaystyle S}. For anr{\displaystyle r}-letter alphabet there are onlyrℓ{\displaystyle r^{\ell }}possible words of lengthℓ{\displaystyle \ell }, soqℓ≤rℓ{\displaystyle q_{\ell }\leq r^{\ell }}. 
Using this, we upper boundCm{\displaystyle C^{m}}: Taking them{\displaystyle m}-th root, we get This bound holds for anym∈N{\displaystyle m\in \mathbb {N} }. The right side is 1 asymptotically, so∑i=1nr−li≤1{\displaystyle \sum _{i=1}^{n}r^{-l_{i}}\leq 1}must hold (otherwise the inequality would be broken for a large enoughm{\displaystyle m}). Given a sequence ofn{\displaystyle n}natural numbers, satisfying the Kraft inequality, we can construct a prefix code as follows. Define theithcodeword,Ci, to be the firstℓi{\displaystyle \ell _{i}}digits after theradix point(e.g. decimal point) in the baserrepresentation of Note that by Kraft's inequality, this sum is never more than 1. Hence the codewords capture the entire value of the sum. Therefore, forj>i, the firstℓi{\displaystyle \ell _{i}}digits ofCjform a larger number thanCi, so the code is prefix free. The following generalization is found in.[5] Theorem—IfC,D{\textstyle C,D}are uniquely decodable, and every codeword inC{\textstyle C}is a concatenation of codewords inD{\textstyle D}, then∑c∈Cr−|c|≤∑c∈Dr−|c|{\displaystyle \sum _{c\in C}r^{-|c|}\leq \sum _{c\in D}r^{-|c|}} The previous theorem is the special case whenD={a1,…,ar}{\displaystyle D=\{a_{1},\dots ,a_{r}\}}. LetQC(x){\textstyle Q_{C}(x)}be thegenerating functionfor the code. That is,QC(x):=∑c∈Cx|c|{\displaystyle Q_{C}(x):=\sum _{c\in C}x^{|c|}} By a counting argument, thek{\textstyle k}-th coefficient ofQCn{\textstyle Q_{C}^{n}}is the number of strings of lengthn{\textstyle n}with code lengthk{\textstyle k}. That is,QCn(x)=∑k≥0xk#(strings of lengthnwithC-codes of lengthk){\displaystyle Q_{C}^{n}(x)=\sum _{k\geq 0}x^{k}\#({\text{strings of length }}n{\text{ with }}C{\text{-codes of length }}k)}Similarly,11−QC(x)=1+QC(x)+QC(x)2+⋯=∑k≥0xk#(strings withC-codes of lengthk){\displaystyle {\frac {1}{1-Q_{C}(x)}}=1+Q_{C}(x)+Q_{C}(x)^{2}+\cdots =\sum _{k\geq 0}x^{k}\#({\text{strings with }}C{\text{-codes of length }}k)} Since the code is uniquely decodable, any power ofQC{\textstyle Q_{C}}is absolutely bounded byr|x|+r2|x|2+⋯=r|x|1−r|x|{\textstyle r|x|+r^{2}|x|^{2}+\cdots ={\frac {r|x|}{1-r|x|}}}, so each ofQC,QC2,…{\textstyle Q_{C},Q_{C}^{2},\dots }and11−QC(x){\textstyle {\frac {1}{1-Q_{C}(x)}}}is analytic in the disk|x|<1/r{\textstyle |x|<1/r}. We claim that for allx∈(0,1/r){\textstyle x\in (0,1/r)},QCn≤QDn+QDn+1+⋯{\displaystyle Q_{C}^{n}\leq Q_{D}^{n}+Q_{D}^{n+1}+\cdots } The left side is∑k≥0xk#(strings of lengthnwithC-codes of lengthk){\displaystyle \sum _{k\geq 0}x^{k}\#({\text{strings of length }}n{\text{ with }}C{\text{-codes of length }}k)}and the right side is ∑k≥0xk#(strings of length≥nwithD-codes of lengthk){\displaystyle \sum _{k\geq 0}x^{k}\#({\text{strings of length}}\geq n{\text{ with }}D{\text{-codes of length }}k)} Now, since every codeword inC{\textstyle C}is a concatenation of codewords inD{\textstyle D}, andD{\textstyle D}is uniquely decodable, each string of lengthn{\textstyle n}withC{\textstyle C}-codec1…cn{\textstyle c_{1}\dots c_{n}}of lengthk{\textstyle k}corresponds to a unique stringsc1…scn{\textstyle s_{c_{1}}\dots s_{c_{n}}}whoseD{\textstyle D}-code isc1…cn{\textstyle c_{1}\dots c_{n}}. The string has length at leastn{\textstyle n}. Therefore, the coefficients on the left are less or equal to the coefficients on the right. 
Thus, for allx∈(0,1/r){\textstyle x\in (0,1/r)}, and alln=1,2,…{\textstyle n=1,2,\dots }, we haveQC≤QD(1−QD)1/n{\displaystyle Q_{C}\leq {\frac {Q_{D}}{(1-Q_{D})^{1/n}}}}Takingn→∞{\textstyle n\to \infty }limit, we haveQC(x)≤QD(x){\textstyle Q_{C}(x)\leq Q_{D}(x)}for allx∈(0,1/r){\textstyle x\in (0,1/r)}. SinceQC(1/r){\textstyle Q_{C}(1/r)}andQD(1/r){\textstyle Q_{D}(1/r)}both converge, we haveQC(1/r)≤QD(1/r){\textstyle Q_{C}(1/r)\leq Q_{D}(1/r)}by taking the limit and applyingAbel's theorem. There is a generalization toquantum code.[6]
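The radix-point construction described above is short enough to state as code. The sketch below uses exact rational arithmetic, checks the Kraft inequality first, and processes the lengths in non-decreasing order (the same ordering assumed in the earlier proofs) so that the prefix-freeness argument applies; it is an illustration of the construction, not a reference implementation.

from fractions import Fraction

def prefix_code_from_lengths(lengths, r=2):
    """Construct a prefix code whose i-th codeword is the first l_i base-r
    digits after the radix point of the running sum of r^(-l_j), j < i."""
    assert sum(Fraction(1, r ** l) for l in lengths) <= 1, "Kraft inequality violated"
    total, out = Fraction(0), []
    for l in sorted(lengths):              # non-decreasing lengths
        digits, frac = [], total
        for _ in range(l):                 # expand `total` to l base-r digits
            frac *= r
            digits.append(str(int(frac)))
            frac -= int(frac)
        out.append("".join(digits))
        total += Fraction(1, r ** l)
    return out

print(prefix_code_from_lengths([3, 1, 3, 2]))   # ['0', '10', '110', '111']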
https://en.wikipedia.org/wiki/Kraft's_inequality
Ininformation theory,Shannon's source coding theorem(ornoiseless coding theorem) establishes the statistical limits to possibledata compressionfor data whose source is anindependent identically-distributed random variable, and the operational meaning of theShannon entropy. Named afterClaude Shannon, thesource coding theoremshows that, in the limit, as the length of a stream ofindependent and identically-distributed random variable (i.i.d.)data tends to infinity, it is impossible to compress such data such that the code rate (average number of bits per symbol) is less than the Shannon entropy of the source, without it being virtually certain that information will be lost. However it is possible to get the code rate arbitrarily close to the Shannon entropy, with negligible probability of loss. Thesource coding theorem for symbol codesplaces an upper and a lower bound on the minimal possible expected length of codewords as a function of theentropyof the input word (which is viewed as arandom variable) and of the size of the target alphabet. Note that, for data that exhibits more dependencies (whose source is not an i.i.d. random variable), theKolmogorov complexity, which quantifies the minimal description length of an object, is more suitable to describe the limits of data compression. Shannon entropy takes into account only frequency regularities while Kolmogorov complexity takes into account all algorithmic regularities, so in general the latter is smaller. On the other hand, if an object is generated by a random process in such a way that it has only frequency regularities, entropy is close to complexity with high probability (Shen et al. 2017).[1] Source codingis a mapping from (a sequence of) symbols from an informationsourceto a sequence of alphabet symbols (usually bits) such that the source symbols can be exactly recovered from the binary bits (lossless source coding) or recovered within some distortion (lossy source coding). This is one approach todata compression. In information theory, the source coding theorem (Shannon 1948)[2]informally states that (MacKay 2003, pg. 81,[3]Cover 2006, Chapter 5[4]): Ni.i.d.random variables each with entropyH(X)can be compressed into more thanN H(X)bitswith negligible risk of information loss, asN→ ∞; but conversely, if they are compressed into fewer thanN H(X)bits it is virtually certain that information will be lost. TheNH(X){\displaystyle NH(X)}coded sequence represents the compressed message in a biunivocal way, under the assumption that the decoder knows the source. From a practical point of view, this hypothesis is not always true. Consequently, when the entropy encoding is applied the transmitted message isNH(X)+(inf.source){\displaystyle NH(X)+(inf.source)}. Usually, the information that characterizes the source is inserted at the beginning of the transmitted message. LetΣ1, Σ2denote two finite alphabets and letΣ∗1andΣ∗2denote theset of all finite wordsfrom those alphabets (respectively). Suppose thatXis a random variable taking values inΣ1and letfbe auniquely decodablecode fromΣ∗1toΣ∗2where|Σ2| =a. LetSdenote the random variable given by the length of codewordf(X). Iffis optimal in the sense that it has the minimal expected word length forX, then (Shannon 1948): WhereE{\displaystyle \mathbb {E} }denotes theexpected valueoperator. GivenXis ani.i.d.source, itstime seriesX1, ...,Xnis i.i.d. withentropyH(X)in the discrete-valued case anddifferential entropyin the continuous-valued case. The Source coding theorem states that for anyε> 0, i.e. 
for anyrateH(X) +εlarger than theentropyof the source, there is large enoughnand an encoder that takesni.i.d. repetition of the source,X1:n, and maps it ton(H(X) +ε)binary bits such that the source symbolsX1:nare recoverable from the binary bits with probability of at least1 −ε. Proof of Achievability.Fix someε> 0, and let Thetypical set,Aεn, is defined as follows: Theasymptotic equipartition property(AEP) shows that for large enoughn, the probability that a sequence generated by the source lies in the typical set,Aεn, as defined approaches one. In particular, for sufficiently largen,P((X1,X2,⋯,Xn)∈Anε){\displaystyle P((X_{1},X_{2},\cdots ,X_{n})\in A_{n}^{\varepsilon })}can be made arbitrarily close to 1, and specifically, greater than1−ε{\displaystyle 1-\varepsilon }(SeeAEPfor a proof). The definition of typical sets implies that those sequences that lie in the typical set satisfy: Since|Anε|≤2n(H(X)+ε),n(H(X)+ε){\displaystyle \left|A_{n}^{\varepsilon }\right|\leq 2^{n(H(X)+\varepsilon )},n(H(X)+\varepsilon )}bits are enough to point to any string in this set. The encoding algorithm: the encoder checks if the input sequence lies within the typical set; if yes, it outputs the index of the input sequence within the typical set; if not, the encoder outputs an arbitraryn(H(X) +ε)digit number. As long as the input sequence lies within the typical set (with probability at least1 −ε), the encoder does not make any error. So, the probability of error of the encoder is bounded above byε. Proof of converse: the converse is proved by showing that any set of size smaller thanAεn(in the sense of exponent) would cover a set of probability bounded away from1. For1 ≤i≤nletsidenote the word length of each possiblexi. Defineqi=a−si/C{\displaystyle q_{i}=a^{-s_{i}}/C}, whereCis chosen so thatq1+ ... +qn= 1. Then where the second line follows fromGibbs' inequalityand the fifth line follows fromKraft's inequality: sologC≤ 0. For the second inequality we may set so that and so and and so by Kraft's inequality there exists a prefix-free code having those word lengths. Thus the minimalSsatisfies Define typical setAεnas: Then, for givenδ> 0, fornlarge enough,Pr(Aεn) > 1 −δ. Now we just encode the sequences in the typical set, and usual methods in source coding show that the cardinality of this set is smaller than2n(Hn¯(X)+ε){\displaystyle 2^{n({\overline {H_{n}}}(X)+\varepsilon )}}. Thus, on an average,Hn(X) +εbits suffice for encoding with probability greater than1 −δ, whereεandδcan be made arbitrarily small, by makingnlarger.
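The word lengths used in the symbol-code argument above, s_i = ⌈log_a(1/p_i)⌉, are easy to compute, and doing so illustrates both inequalities H(X) ≤ E[S] < H(X) + 1 for the binary case. The distribution below is an arbitrary illustration; the construction itself follows the proof.

from math import ceil, log2

def shannon_code_lengths(probs):
    """Word lengths l_i = ceil(log2(1/p_i)); they satisfy Kraft's inequality,
    so a binary prefix code with these lengths exists."""
    return [ceil(-log2(p)) for p in probs]

probs = [0.5, 0.25, 0.15, 0.10]
lengths = shannon_code_lengths(probs)                       # [1, 2, 3, 4]
expected = sum(p * l for p, l in zip(probs, lengths))       # E[S] = 1.85
entropy = -sum(p * log2(p) for p in probs)                  # H(X) is about 1.74
assert entropy <= expected < entropy + 1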
https://en.wikipedia.org/wiki/Source_coding_theorem
Incoding theory, avariable-length codeis acodewhich maps source symbols to avariablenumber ofbits. The equivalent concept incomputer scienceisbit string. Variable-length codes can allow sources to becompressedand decompressed withzeroerror (lossless data compression) and still be read back symbol by symbol. With the right coding strategy, anindependent and identically-distributed sourcemay be compressed almost arbitrarily close to itsentropy. This is in contrast to fixed-length coding methods, for which data compression is only possible for large blocks of data, and any compression beyond the logarithm of the total number of possibilities comes with a finite (though perhaps arbitrarily small) probability of failure. Some examples of well-known variable-length coding strategies areHuffman coding,Lempel–Ziv coding,arithmetic coding, andcontext-adaptive variable-length coding. The extension of a code is the mapping of finite length source sequences to finite length bit strings, that is obtained by concatenating for each symbol of the source sequence the corresponding codeword produced by the original code. Using terms fromformal language theory, the precise mathematical definition is as follows: LetS{\displaystyle S}andT{\displaystyle T}be two finite sets, called the source and targetalphabets, respectively. AcodeC:S→T∗{\displaystyle C:S\to T^{*}}is a total function[1]mapping each symbol fromS{\displaystyle S}to asequence of symbolsoverT{\displaystyle T}, and the extension ofC{\displaystyle C}to ahomomorphismofS∗{\displaystyle S^{*}}intoT∗{\displaystyle T^{*}}, which naturally maps each sequence of source symbols to a sequence of target symbols, is referred to as itsextension. Variable-length codes can be strictly nested in order of decreasing generality as non-singular codes, uniquely decodable codes, and prefix codes. Prefix codes are always uniquely decodable, and these in turn are always non-singular: A code isnon-singularif each source symbol is mapped to a different non-empty bit string; that is, the mapping from source symbols to bit strings isinjective. A code isuniquely decodableif its extension is§ non-singular. Whether a given code is uniquely decodable can be decided with theSardinas–Patterson algorithm. A code is aprefix codeif no target bit string in the mapping is a prefix of the target bit string of a different source symbol in the same mapping. This means that symbols can be decoded instantaneously after their entire codeword is received. Other commonly used names for this concept areprefix-free code,instantaneous code, orcontext-free code. A special case of prefix codes areblock codes. Here, all codewords must have the same length. The latter are not very useful in the context ofsource coding, but often serve asforward error correctionin the context ofchannel coding. Another special case of prefix codes areLEB128andvariable-length quantity(VLQ) codes, which encode arbitrarily large integers as a sequence of octets—i.e., every codeword is a multiple of 8 bits. The advantage of a variable-length code is that unlikely source symbols can be assigned longer codewords and likely source symbols can be assigned shorter codewords, thus giving a lowexpectedcodeword length. 
For the above example, if the probabilities of (a, b, c, d) were (1/2, 1/4, 1/8, 1/8), the expected number of bits used to represent a source symbol using the code above would be 1.75 bits (the computation is worked out below). As the entropy of this source is 1.75 bits per symbol, this code compresses the source as much as possible, so that the source can be recovered with zero error.
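The code table itself did not survive extraction; assuming the usual assignment of codeword lengths (1, 2, 3, 3) to (a, b, c, d), for instance the prefix code {0, 10, 110, 111}, the computation referred to above is:

{\displaystyle \mathbb {E} [L]={\tfrac {1}{2}}\cdot 1+{\tfrac {1}{4}}\cdot 2+{\tfrac {1}{8}}\cdot 3+{\tfrac {1}{8}}\cdot 3=1.75{\text{ bits}},\qquad H={\tfrac {1}{2}}\log _{2}2+{\tfrac {1}{4}}\log _{2}4+{\tfrac {1}{8}}\log _{2}8+{\tfrac {1}{8}}\log _{2}8=1.75{\text{ bits}}.}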
https://en.wikipedia.org/wiki/Uniquely_decodable_code
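The Sardinas–Patterson algorithm mentioned above admits a compact implementation in terms of sets of dangling suffixes. The following Python sketch is a minimal version of that test, with small illustrative codes, and is not tuned for efficiency.

def is_uniquely_decodable(code):
    """Sardinas-Patterson test: a code is uniquely decodable if and only if
    no set of dangling suffixes ever contains a codeword."""
    code = set(code)

    def dangling(A, B):
        # suffixes left over when a word of A is a proper prefix of a word of B
        return {b[len(a):] for a in A for b in B
                if len(b) > len(a) and b.startswith(a)}

    current, seen = dangling(code, code), set()
    while current:
        if current & code:
            return False
        seen |= current
        current = (dangling(code, current) | dangling(current, code)) - seen
    return True

assert is_uniquely_decodable({"0", "10", "11"})        # a prefix code
assert is_uniquely_decodable({"0", "01"})              # uniquely decodable, not prefix
assert not is_uniquely_decodable({"0", "01", "10"})    # "010" parses two ways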
Ininformation theory, thebinary entropy function, denotedH⁡(p){\displaystyle \operatorname {H} (p)}orHb⁡(p){\displaystyle \operatorname {H} _{\text{b}}(p)}, is defined as theentropyof aBernoulli process(i.i.d.binary variable) withprobabilityp{\displaystyle p}of one of two values, and is given by the formula: The base of the logarithm corresponds to the choice ofunits of information; baseecorresponds tonatsand is mathematically convenient, while base 2 (binary logarithm) corresponds toshannonsand is conventional (as shown in the graph); explicitly: Note that the values at 0 and 1 are given by the limit0log⁡0:=limx→0+xlog⁡x=0{\displaystyle \textstyle 0\log 0:=\lim _{x\to 0^{+}}x\log x=0}(byL'Hôpital's rule); and that "binary" refers to two possible values for the variable, not the units of information. Whenp=1/2{\displaystyle p=1/2}, the binary entropy function attains its maximum value, 1 shannon (1 binary unit of information); this is the case of anunbiased coin flip. Whenp=0{\displaystyle p=0}orp=1{\displaystyle p=1}, the binary entropy is 0 (in any units), corresponding to no information, since there is no uncertainty in the variable. Binary entropyH⁡(p){\displaystyle \operatorname {H} (p)}is a special case ofH(X){\displaystyle \mathrm {H} (X)}, theentropy function.H⁡(p){\displaystyle \operatorname {H} (p)}is distinguished from theentropy functionH(X){\displaystyle \mathrm {H} (X)}in that the former takes a single real number as aparameterwhereas the latter takes a distribution or random variable as a parameter. Thus the binary entropy (ofp) is the entropy of the distributionBer⁡(p){\displaystyle \operatorname {Ber} (p)}, soH⁡(p)=H(Ber⁡(p)){\displaystyle \operatorname {H} (p)=\mathrm {H} (\operatorname {Ber} (p))}. Writing the probability of each of the two values beingpandq, sop+q=1{\displaystyle p+q=1}andq=1−p{\displaystyle q=1-p}, this corresponds to Sometimes the binary entropy function is also written asH2⁡(p){\displaystyle \operatorname {H} _{2}(p)}. However, it is different from and should not be confused with theRényi entropy, which is denoted asH2(X){\displaystyle \mathrm {H} _{2}(X)}. In terms of information theory,entropyis considered to be a measure of the uncertainty in a message. To put it intuitively, supposep=0{\displaystyle p=0}. At this probability, the event is certain never to occur, and so there is no uncertainty at all, leading to an entropy of 0. Ifp=1{\displaystyle p=1}, the result is again certain, so the entropy is 0 here as well. Whenp=1/2{\displaystyle p=1/2}, the uncertainty is at a maximum; if one were to place a fair bet on the outcome in this case, there is no advantage to be gained with prior knowledge of the probabilities. In this case, the entropy is maximum at a value of 1 bit. Intermediate values fall between these cases; for instance, ifp=1/4{\displaystyle p=1/4}, there is still a measure of uncertainty on the outcome, but one can still predict the outcome correctly more often than not, so the uncertainty measure, or entropy, is less than 1 full bit. Thederivativeof thebinary entropy functionmay be expressed as the negative of thelogitfunction: Theconvex conjugate(specifically, theLegendre transform) of the binary entropy (with basee) is the negativesoftplusfunction. This is because (following the definition of the Legendre transform: the derivatives are inverse functions) the derivative of negative binary entropy is the logit, whose inverse function is thelogistic function, which is the derivative of softplus. 
Softplus can be interpreted aslogistic loss, so byduality, minimizing logistic loss corresponds to maximizing entropy. This justifies theprinciple of maximum entropyas loss minimization. TheTaylor seriesof the binary entropy function at 1/2 is which converges to the binary entropy function for all values0≤p≤1{\displaystyle 0\leq p\leq 1}. The following bounds hold for0<p<1{\displaystyle 0<p<1}:[1] and whereln{\displaystyle \ln }denotes natural logarithm.
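The definition and the derivative identity above can be checked numerically with a few lines of Python; the probe point p = 0.3 and the finite-difference step are arbitrary illustrative choices.

from math import log2

def binary_entropy(p):
    """H(p) = -p*log2(p) - (1-p)*log2(1-p) in shannons, with 0*log(0) := 0."""
    if p == 0 or p == 1:
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

assert binary_entropy(0.5) == 1.0                      # unbiased coin: 1 shannon
assert binary_entropy(0) == binary_entropy(1) == 0.0   # no uncertainty

# dH/dp should equal the negative base-2 logit, as stated above.
p, h = 0.3, 1e-6
finite_diff = (binary_entropy(p + h) - binary_entropy(p - h)) / (2 * h)
assert abs(finite_diff + log2(p / (1 - p))) < 1e-6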
https://en.wikipedia.org/wiki/Binary_entropy_function
Aprefix codeis a type ofcodesystem distinguished by its possession of theprefix property, which requires that there is no wholecode wordin the system that is aprefix(initial segment) of any other code word in the system. It is trivially true for fixed-length codes, so only a point of consideration forvariable-length codes. For example, a code with code {9, 55} has the prefix property; a code consisting of {9, 5, 59, 55} does not, because "5" is a prefix of "59" and also of "55". A prefix code is auniquely decodable code: given a complete and accurate sequence, a receiver can identify each word without requiring a special marker between words. However, there are uniquely decodable codes that are not prefix codes; for instance, the reverse of a prefix code is still uniquely decodable (it is a suffix code), but it is not necessarily a prefix code. Prefix codes are also known asprefix-free codes,prefix condition codesandinstantaneous codes. AlthoughHuffman codingis just one of many algorithms for deriving prefix codes, prefix codes are also widely referred to as "Huffman codes", even when the code was not produced by a Huffman algorithm. The termcomma-free codeis sometimes also applied as a synonym for prefix-free codes[1][2]but in most mathematical books and articles (e.g.[3][4]) a comma-free code is used to mean aself-synchronizing code, a subclass of prefix codes. Using prefix codes, a message can be transmitted as a sequence of concatenated code words, without anyout-of-bandmarkers or (alternatively) special markers between words toframethe words in the message. The recipient can decode the message unambiguously, by repeatedly finding and removing sequences that form valid code words. This is not generally possible with codes that lack the prefix property, for example {0, 1, 10, 11}: a receiver reading a "1" at the start of a code word would not know whether that was the complete code word "1", or merely the prefix of the code word "10" or "11"; so the string "10" could be interpreted either as a single codeword or as the concatenation of the words "1" then "0". The variable-lengthHuffman codes,country calling codes, the country and publisher parts ofISBNs, the Secondary Synchronization Codes used in theUMTSW-CDMA3G Wireless Standard, and theinstruction sets(machine language) of most computer microarchitectures are prefix codes. Prefix codes are noterror-correcting codes. In practice, a message might first be compressed with a prefix code, and then encoded again withchannel coding(including error correction) before transmission. For anyuniquely decodablecode there is a prefix code that has the same code word lengths.[5]Kraft's inequalitycharacterizes the sets of code word lengths that are possible in auniquely decodablecode.[6] If every word in the code has the same length, the code is called afixed-length code, or ablock code(though the termblock codeis also used for fixed-sizeerror-correcting codesinchannel coding). For example,ISO 8859-15letters are always 8 bits long.UTF-32/UCS-4letters are always 32 bits long.ATM cellsare always 424 bits (53 bytes) long. A fixed-length code of fixed lengthkbits can encode up to2k{\displaystyle 2^{k}}source symbols. A fixed-length code is necessarily a prefix code. It is possible to turn any code into a fixed-length code by padding fixed symbols to the shorter prefixes in order to meet the length of the longest prefixes. Alternately, such padding codes may be employed to introduce redundancy that allows autocorrection and/or synchronisation. 
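Decoding by repeatedly finding and removing valid code words, as described above, amounts to a single left-to-right scan. The Python sketch below illustrates this with an arbitrary four-symbol prefix code; the prefix property guarantees that emitting a symbol as soon as the buffered bits form a codeword is unambiguous.

def decode_prefix(bits, code):
    """Greedy decoder for a prefix code given as a symbol -> codeword mapping."""
    inverse = {w: s for s, w in code.items()}
    out, buf = [], ""
    for b in bits:
        buf += b
        if buf in inverse:           # at most one codeword can ever match
            out.append(inverse[buf])
            buf = ""
    if buf:
        raise ValueError("input ends in the middle of a codeword")
    return out

code = {"a": "0", "b": "10", "c": "110", "d": "111"}     # illustrative prefix code
assert decode_prefix("0110111", code) == ["a", "c", "d"]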
However, fixed length encodings are inefficient in situations where some words are much more likely to be transmitted than others. Truncated binary encodingis a straightforward generalization of fixed-length codes to deal with cases where the number of symbolsnis not a power of two. Source symbols are assigned codewords of lengthkandk+1, wherekis chosen so that2k< n ≤ 2k+1. Huffman codingis a more sophisticated technique for constructing variable-length prefix codes. The Huffman coding algorithm takes as input the frequencies that the code words should have, and constructs a prefix code that minimizes the weighted average of the code word lengths. (This is closely related to minimizing the entropy.) This is a form oflossless data compressionbased onentropy encoding. Some codes mark the end of a code word with a special "comma" symbol (also called aSentinel value), different from normal data.[7]This is somewhat analogous to the spaces between words in a sentence; they mark where one word ends and another begins. If every code word ends in a comma, and the comma does not appear elsewhere in a code word, the code is automatically prefix-free. However, reserving an entire symbol only for use as a comma can be inefficient, especially for languages with a small number of symbols.Morse codeis an everyday example of a variable-length code with a comma. The long pauses between letters, and the even longer pauses between words, help people recognize where one letter (or word) ends, and the next begins. Similarly,Fibonacci codinguses a "11" to mark the end of every code word. Self-synchronizing codesare prefix codes that allowframe synchronization. Asuffix codeis a set of words none of which is a suffix of any other; equivalently, a set of words which are the reverse of a prefix code. As with a prefix code, the representation of a string as a concatenation of such words is unique. Abifix codeis a set of words which is both a prefix and a suffix code.[8]Anoptimal prefix codeis a prefix code with minimal average length. That is, assume an alphabet ofnsymbols with probabilitiesp(Ai){\displaystyle p(A_{i})}for a prefix codeC. IfC'is another prefix code andλi′{\displaystyle \lambda '_{i}}are the lengths of the codewords ofC', then∑i=1nλip(Ai)≤∑i=1nλi′p(Ai){\displaystyle \sum _{i=1}^{n}{\lambda _{i}p(A_{i})}\leq \sum _{i=1}^{n}{\lambda '_{i}p(A_{i})}\!}.[9] Examples of prefix codes include: Commonly used techniques for constructing prefix codes includeHuffman codesand the earlierShannon–Fano codes, anduniversal codessuch as:
https://en.wikipedia.org/wiki/Prefix_code
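The truncated binary construction described above can be sketched as follows; this uses the standard formulation in which the first 2^(k+1) − n symbols receive the short k-bit codewords, and it is an illustration rather than a reference implementation.

from math import floor, log2

def truncated_binary(n):
    """Truncated binary code for n >= 2 symbols: u = 2^(k+1) - n symbols get
    k-bit codewords, the remaining n - u symbols get (k+1)-bit codewords."""
    k = floor(log2(n))
    u = 2 ** (k + 1) - n
    return [format(x, f"0{k}b") if x < u else format(x + u, f"0{k + 1}b")
            for x in range(n)]

assert truncated_binary(5) == ["00", "01", "10", "110", "111"]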
Inmathematics, anidentity elementorneutral elementof abinary operationis an element that leaves unchanged every element when the operation is applied.[1][2]For example, 0 is an identity element of theadditionofreal numbers. This concept is used inalgebraic structuressuch asgroupsandrings. The termidentity elementis often shortened toidentity(as in the case of additive identity and multiplicative identity)[3]when there is no possibility of confusion, but the identity implicitly depends on the binary operation it is associated with. Let(S, ∗)be a setSequipped with abinary operation∗. Then an elementeofSis called aleftidentityife∗s=sfor allsinS, and arightidentityifs∗e=sfor allsinS.[4]Ifeis both a left identity and a right identity, then it is called atwo-sided identity, or simply anidentity.[5][6][7][8][9] An identity with respect to addition is called anadditive identity(often denoted as 0) and an identity with respect to multiplication is called amultiplicative identity(often denoted as 1).[3]These need not be ordinary addition and multiplication—as the underlying operation could be rather arbitrary. In the case of agroupfor example, the identity element is sometimes simply denoted by the symbole{\displaystyle e}. The distinction between additive and multiplicative identity is used most often for sets that support both binary operations, such asrings,integral domains, andfields. The multiplicative identity is often calledunityin the latter context (a ring with unity).[10][11][12]This should not be confused with aunitin ring theory, which is any element having amultiplicative inverse. By its own definition, unity itself is necessarily a unit.[13][14] In the exampleS= {e,f} with the equalities given,Sis asemigroup. It demonstrates the possibility for(S, ∗)to have several left identities. In fact, every element can be a left identity. In a similar manner, there can be several right identities. But if there is both a right identity and a left identity, then they must be equal, resulting in a single two-sided identity. To see this, note that iflis a left identity andris a right identity, thenl=l∗r=r. In particular, there can never be more than one two-sided identity: if there were two, sayeandf, thene∗fwould have to be equal to botheandf. It is also quite possible for(S, ∗)to havenoidentity element,[15]such as the case of even integers under the multiplication operation.[3]Another common example is thecross productofvectors, where the absence of an identity element is related to the fact that thedirectionof any nonzero cross product is alwaysorthogonalto any element multiplied. That is, it is not possible to obtain a non-zero vector in the same direction as the original. Yet another example of structure without identity element involves the additivesemigroupofpositivenatural numbers.
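The two-element example above can be checked mechanically. The table of "the equalities given" is not reproduced in this text, so the sketch below assumes the usual choice x ∗ y = y for all x, y in S = {e, f}; under that assumption every element is a left identity and none is a right identity.

def left_identities(elements, op):
    """Elements e with op(e, s) == s for every s in the set."""
    return [e for e in elements if all(op(e, s) == s for s in elements)]

def right_identities(elements, op):
    """Elements e with op(s, e) == s for every s in the set."""
    return [e for e in elements if all(op(s, e) == s for s in elements)]

S = ["e", "f"]
op = lambda x, y: y                            # assumed operation table: x * y = y

assert left_identities(S, op) == ["e", "f"]    # several left identities
assert right_identities(S, op) == []           # but no right identity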
https://en.wikipedia.org/wiki/Identity_element
Inmathematics, abinary operationiscommutativeif changing the order of theoperandsdoes not change the result. It is a fundamental property of many binary operations, and manymathematical proofsdepend on it. Perhaps most familiar as a property of arithmetic, e.g."3 + 4 = 4 + 3"or"2 × 5 = 5 × 2", the property can also be used in more advanced settings. The name is needed because there are operations, such asdivisionandsubtraction, that do not have it (for example,"3 − 5 ≠ 5 − 3"); such operations arenotcommutative, and so are referred to asnoncommutative operations. The idea that simple operations, such as themultiplicationandadditionof numbers, are commutative was for many centuries implicitly assumed. Thus, this property was not named until the 19th century, when newalgebraic structuresstarted to be studied.[1] Abinary operation∗{\displaystyle *}on asetSiscommutativeifx∗y=y∗x{\displaystyle x*y=y*x}for allx,y∈S{\displaystyle x,y\in S}.[2]An operation that is not commutative is said to benoncommutative.[3] One says thatxcommuteswithyor thatxandycommuteunder∗{\displaystyle *}if[4]x∗y=y∗x.{\displaystyle x*y=y*x.} So, an operation is commutative if every two elements commute.[4]An operation is noncommutative if there are two elements such thatx∗y≠y∗x.{\displaystyle x*y\neq y*x.}This does not exclude the possibility that some pairs of elements commute.[3] Some types ofalgebraic structuresinvolve an operation that does not require commutativity. If this operation is commutative for a specific structure, the structure is often said to becommutative. So, However, in the case ofalgebras, the phrase "commutative algebra" refers only toassociative algebrasthat have a commutative multiplication.[18] Records of the implicit use of the commutative property go back to ancient times. TheEgyptiansused the commutative property ofmultiplicationto simplify computingproducts.[19]Euclidis known to have assumed the commutative property of multiplication in his bookElements.[20]Formal uses of the commutative property arose in the late 18th and early 19th centuries when mathematicians began to work on a theory of functions. Nowadays, the commutative property is a well-known and basic property used in most branches of mathematics.[2] The first recorded use of the termcommutativewas in a memoir byFrançois Servoisin 1814, which used the wordcommutativeswhen describing functions that have what is now called the commutative property.[21]Commutativeis the feminine form of the French adjectivecommutatif, which is derived from the French nouncommutationand the French verbcommuter, meaning "to exchange" or "to switch", a cognate ofto commute. The term then appeared in English in 1838. inDuncan Gregory's article entitled "On the real nature of symbolical algebra" published in 1840 in theTransactions of the Royal Society of Edinburgh.[22]
https://en.wikipedia.org/wiki/Commutative_property
Incoding theory,block codesare a large and important family oferror-correcting codesthat encode data in blocks. There is a vast number of examples for block codes, many of which have a wide range of practical applications. The abstract definition of block codes is conceptually useful because it allows coding theorists,mathematicians, andcomputer scientiststo study the limitations ofallblock codes in a unified way. Such limitations often take the form ofboundsthat relate different parameters of the block code to each other, such as its rate and its ability to detect and correct errors. Examples of block codes areReed–Solomon codes,Hamming codes,Hadamard codes,Expander codes,Golay codes,Reed–Muller codesandPolar codes. These examples also belong to the class oflinear codes, and hence they are calledlinear block codes. More particularly, these codes are known as algebraic block codes, or cyclic block codes, because they can be generated using Boolean polynomials. Algebraic block codes are typicallyhard-decodedusing algebraic decoders.[jargon] The termblock codemay also refer to any error-correcting code that acts on a block ofk{\displaystyle k}bits of input data to producen{\displaystyle n}bits of output data(n,k){\displaystyle (n,k)}. Consequently, the block coder is amemorylessdevice. Under this definition codes such asturbo codes, terminated convolutional codes and other iteratively decodable codes (turbo-like codes) would also be considered block codes. A non-terminated convolutional encoder would be an example of a non-block (unframed) code, which hasmemoryand is instead classified as atree code. This article deals with "algebraic block codes". Error-correcting codesare used toreliablytransmitdigital dataover unreliablecommunication channelssubject tochannel noise. When a sender wants to transmit a possibly very long data stream using a block code, the sender breaks the stream up into pieces of some fixed size. Each such piece is calledmessageand the procedure given by the block code encodes each message individually into a codeword, also called ablockin the context of block codes. The sender then transmits all blocks to the receiver, who can in turn use some decoding mechanism to (hopefully) recover the original messages from the possibly corrupted received blocks. The performance and success of the overall transmission depends on the parameters of the channel and the block code. Formally, a block code is aninjectivemapping Here,Σ{\displaystyle \Sigma }is a finite and nonemptysetandk{\displaystyle k}andn{\displaystyle n}are integers. The meaning and significance of these three parameters and other parameters related to the code are described below. The data stream to be encoded is modeled as astringover somealphabetΣ{\displaystyle \Sigma }. The size|Σ|{\displaystyle |\Sigma |}of the alphabet is often written asq{\displaystyle q}. Ifq=2{\displaystyle q=2}, then the block code is called abinaryblock code. In many applications it is useful to considerq{\displaystyle q}to be aprime power, and to identifyΣ{\displaystyle \Sigma }with thefinite fieldFq{\displaystyle \mathbb {F} _{q}}. Messages are elementsm{\displaystyle m}ofΣk{\displaystyle \Sigma ^{k}}, that is, strings of lengthk{\displaystyle k}. Hence the numberk{\displaystyle k}is called themessage lengthordimensionof a block code. Theblock lengthn{\displaystyle n}of a block code is the number of symbols in a block. 
Hence, the elementsc{\displaystyle c}ofΣn{\displaystyle \Sigma ^{n}}are strings of lengthn{\displaystyle n}and correspond to blocks that may be received by the receiver. Hence they are also called received words. Ifc=C(m){\displaystyle c=C(m)}for some messagem{\displaystyle m}, thenc{\displaystyle c}is called the codeword ofm{\displaystyle m}. Therateof a block code is defined as the ratio between its message length and its block length: A large rate means that the amount of actual message per transmitted block is high. In this sense, the rate measures the transmission speed and the quantity1−R{\displaystyle 1-R}measures the overhead that occurs due to the encoding with the block code. It is a simpleinformation theoreticalfact that the rate cannot exceed1{\displaystyle 1}since data cannot in general be losslessly compressed. Formally, this follows from the fact that the codeC{\displaystyle C}is an injective map. Thedistanceorminimum distancedof a block code is the minimum number of positions in which any two distinct codewords differ, and therelative distanceδ{\displaystyle \delta }is the fractiond/n{\displaystyle d/n}. Formally, for received wordsc1,c2∈Σn{\displaystyle c_{1},c_{2}\in \Sigma ^{n}}, letΔ(c1,c2){\displaystyle \Delta (c_{1},c_{2})}denote theHamming distancebetweenc1{\displaystyle c_{1}}andc2{\displaystyle c_{2}}, that is, the number of positions in whichc1{\displaystyle c_{1}}andc2{\displaystyle c_{2}}differ. Then the minimum distanced{\displaystyle d}of the codeC{\displaystyle C}is defined as Since any code has to beinjective, any two codewords will disagree in at least one position, so the distance of any code is at least1{\displaystyle 1}. Besides, thedistanceequals theminimum weightfor linear block codes because:[citation needed] A larger distance allows for more error correction and detection. For example, if we only consider errors that may change symbols of the sent codeword but never erase or add them, then the number of errors is the number of positions in which the sent codeword and the received word differ. A code with distancedallows the receiver to detect up tod−1{\displaystyle d-1}transmission errors since changingd−1{\displaystyle d-1}positions of a codeword can never accidentally yield another codeword. Furthermore, if no more than(d−1)/2{\displaystyle (d-1)/2}transmission errors occur, the receiver can uniquely decode the received word to a codeword. This is because every received word has at most one codeword at distance(d−1)/2{\displaystyle (d-1)/2}. If more than(d−1)/2{\displaystyle (d-1)/2}transmission errors occur, the receiver cannot uniquely decode the received word in general as there might be several possible codewords. One way for the receiver to cope with this situation is to uselist decoding, in which the decoder outputs a list of all codewords in a certain radius. The notation(n,k,d)q{\displaystyle (n,k,d)_{q}}describes a block code over an alphabetΣ{\displaystyle \Sigma }of sizeq{\displaystyle q}, with a block lengthn{\displaystyle n}, message lengthk{\displaystyle k}, and distanced{\displaystyle d}. If the block code is a linear block code, then the square brackets in the notation[n,k,d]q{\displaystyle [n,k,d]_{q}}are used to represent that fact. For binary codes withq=2{\displaystyle q=2}, the index is sometimes dropped. Formaximum distance separable codes, the distance is alwaysd=n−k+1{\displaystyle d=n-k+1}, but sometimes the precise distance is not known, non-trivial to prove or state, or not needed. 
In such cases, thed{\displaystyle d}-component may be missing. Sometimes, especially for non-block codes, the notation(n,M,d)q{\displaystyle (n,M,d)_{q}}is used for codes that containM{\displaystyle M}codewords of lengthn{\displaystyle n}. For block codes with messages of lengthk{\displaystyle k}over an alphabet of sizeq{\displaystyle q}, this number would beM=qk{\displaystyle M=q^{k}}. As mentioned above, there are a vast number of error-correcting codes that are actually block codes. The first error-correcting code was theHamming(7,4)code, developed byRichard W. Hammingin 1950. This code transforms a message consisting of 4 bits into a codeword of 7 bits by adding 3 parity bits. Hence this code is a block code. It turns out that it is also a linear code and that it has distance 3. In the shorthand notation above, this means that the Hamming(7,4) code is a[7,4,3]2{\displaystyle [7,4,3]_{2}}code. Reed–Solomon codesare a family of[n,k,d]q{\displaystyle [n,k,d]_{q}}codes withd=n−k+1{\displaystyle d=n-k+1}andq{\displaystyle q}being aprime power.Rank codesare family of[n,k,d]q{\displaystyle [n,k,d]_{q}}codes withd≤n−k+1{\displaystyle d\leq n-k+1}.Hadamard codesare a family of[n,k,d]2{\displaystyle [n,k,d]_{2}}codes withn=2k−1{\displaystyle n=2^{k-1}}andd=2k−2{\displaystyle d=2^{k-2}}. A codewordc∈Σn{\displaystyle c\in \Sigma ^{n}}could be considered as a point in then{\displaystyle n}-dimension spaceΣn{\displaystyle \Sigma ^{n}}and the codeC{\displaystyle {\mathcal {C}}}is the subset ofΣn{\displaystyle \Sigma ^{n}}. A codeC{\displaystyle {\mathcal {C}}}has distanced{\displaystyle d}means that∀c∈C{\displaystyle \forall c\in {\mathcal {C}}}, there is no other codeword in theHamming ballcentered atc{\displaystyle c}with radiusd−1{\displaystyle d-1}, which is defined as the collection ofn{\displaystyle n}-dimension words whoseHamming distancetoc{\displaystyle c}is no more thand−1{\displaystyle d-1}. Similarly,C{\displaystyle {\mathcal {C}}}with (minimum) distanced{\displaystyle d}has the following properties: C={Ci}i≥1{\displaystyle C=\{C_{i}\}_{i\geq 1}}is calledfamily of codes, whereCi{\displaystyle C_{i}}is an(ni,ki,di)q{\displaystyle (n_{i},k_{i},d_{i})_{q}}code with monotonic increasingni{\displaystyle n_{i}}. Rateof family of codesCis defined asR(C)=limi→∞kini{\displaystyle R(C)=\lim _{i\to \infty }{k_{i} \over n_{i}}} Relative distanceof family of codesCis defined asδ(C)=limi→∞dini{\displaystyle \delta (C)=\lim _{i\to \infty }{d_{i} \over n_{i}}} To explore the relationship betweenR(C){\displaystyle R(C)}andδ(C){\displaystyle \delta (C)}, a set of lower and upper bounds of block codes are known. The Singleton bound is that the sum of the rate and the relative distance of a block code cannot be much larger than 1: In other words, every block code satisfies the inequalityk+d≤n+1{\displaystyle k+d\leq n+1}.Reed–Solomon codesare non-trivial examples of codes that satisfy the singleton bound with equality. Forq=2{\displaystyle q=2},R+2δ≤1{\displaystyle R+2\delta \leq 1}. In other words,k+2d≤n{\displaystyle k+2d\leq n}. 
For the general case, the following Plotkin bounds holds for anyC⊆Fqn{\displaystyle C\subseteq \mathbb {F} _{q}^{n}}with distanced: For anyq-ary code with distanceδ{\displaystyle \delta },R≤1−(qq−1)δ+o(1){\displaystyle R\leq 1-\left({q \over {q-1}}\right)\delta +o\left(1\right)} R≥1−Hq(δ)−ϵ{\displaystyle R\geq 1-H_{q}\left(\delta \right)-\epsilon }, where0≤δ≤1−1q,0≤ϵ≤1−Hq(δ){\displaystyle 0\leq \delta \leq 1-{1 \over q},0\leq \epsilon \leq 1-H_{q}\left(\delta \right)},Hq(x)=def−x⋅logq⁡xq−1−(1−x)⋅logq⁡(1−x){\displaystyle H_{q}\left(x\right)~{\overset {\underset {\mathrm {def} }{}}{=}}~-x\cdot \log _{q}{x \over {q-1}}-\left(1-x\right)\cdot \log _{q}{\left(1-x\right)}}is theq-ary entropy function. DefineJq(δ)=def(1−1q)(1−1−qδq−1){\displaystyle J_{q}\left(\delta \right)~{\overset {\underset {\mathrm {def} }{}}{=}}~\left(1-{1 \over q}\right)\left(1-{\sqrt {1-{q\delta \over {q-1}}}}\right)}.LetJq(n,d,e){\displaystyle J_{q}\left(n,d,e\right)}be the maximum number of codewords in a Hamming ball of radiusefor any codeC⊆Fqn{\displaystyle C\subseteq \mathbb {F} _{q}^{n}}of distanced. Then we have theJohnson Bound:Jq(n,d,e)≤qnd{\displaystyle J_{q}\left(n,d,e\right)\leq qnd}, ifen≤q−1q(1−1−qq−1⋅dn)=Jq(dn){\displaystyle {e \over n}\leq {{q-1} \over q}\left({1-{\sqrt {1-{q \over {q-1}}\cdot {d \over n}}}}\,\right)=J_{q}\left({d \over n}\right)} Block codes are tied to thesphere packing problemwhich has received some attention over the years. In two dimensions, it is easy to visualize. Take a bunch of pennies flat on the table and push them together. The result is a hexagon pattern like a bee's nest. But block codes rely on more dimensions which cannot easily be visualized. The powerfulGolay codeused in deep space communications uses 24 dimensions. If used as a binary code (which it usually is), the dimensions refer to the length of the codeword as defined above. The theory of coding uses theN-dimensional sphere model. For example, how many pennies can be packed into a circle on a tabletop or in 3 dimensions, how many marbles can be packed into a globe. Other considerations enter the choice of a code. For example, hexagon packing into the constraint of a rectangular box will leave empty space at the corners. As the dimensions get larger, the percentage of empty space grows smaller. But at certain dimensions, the packing uses all the space and these codes are the so-called perfect codes. There are very few of these codes. Another property is the number of neighbors a single codeword may have.[1]Again, consider pennies as an example. First we pack the pennies in a rectangular grid. Each penny will have 4 near neighbors (and 4 at the corners which are farther away). In a hexagon, each penny will have 6 near neighbors. Respectively, in three and four dimensions, the maximum packing is given by the12-faceand24-cellwith 12 and 24 neighbors, respectively. When we increase the dimensions, the number of near neighbors increases very rapidly. In general, the value is given by thekissing numbers. The result is that the number of ways for noise to make the receiver choose a neighbor (hence an error) grows as well. This is a fundamental limitation of block codes, and indeed all codes. It may be harder to cause an error to a single neighbor, but the number of neighbors can be large enough so the total error probability actually suffers.[1]
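Several of the quantities defined above can be computed directly for a small code. The Python sketch below builds the Hamming(7,4) code from one common generator matrix (other equivalent conventions exist), measures its minimum distance by brute force, and confirms the [7,4,3]_2 parameters, hence the ability to correct floor((d − 1)/2) = 1 error.

from itertools import combinations, product

# One common generator matrix for Hamming(7,4); rows are basis codewords over GF(2).
G = [
    [1, 0, 0, 0, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 1, 1, 0],
    [0, 0, 0, 1, 1, 1, 1],
]

def encode(msg):
    """c = m * G over GF(2): each codeword symbol is a parity of message bits."""
    return tuple(sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G))

def hamming_distance(x, y):
    return sum(a != b for a, b in zip(x, y))

codewords = [encode(m) for m in product([0, 1], repeat=4)]
d = min(hamming_distance(x, y) for x, y in combinations(codewords, 2))
assert len(set(codewords)) == 16 and d == 3        # an injective [7,4,3]_2 block code
assert (d - 1) // 2 == 1                           # corrects a single error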
https://en.wikipedia.org/wiki/Linear_block_code
Incomputer scienceandinformation theory, aHuffman codeis a particular type of optimalprefix codethat is commonly used forlossless data compression. The process of finding or using such a code isHuffman coding, an algorithm developed byDavid A. Huffmanwhile he was aSc.D.student atMIT, and published in the 1952 paper "A Method for the Construction of Minimum-Redundancy Codes".[1] The output from Huffman's algorithm can be viewed as avariable-length codetable for encoding a source symbol (such as a character in a file). The algorithm derives this table from the estimated probability or frequency of occurrence (weight) for each possible value of the source symbol. As in otherentropy encodingmethods, more common symbols are generally represented using fewer bits than less common symbols. Huffman's method can be efficiently implemented, finding a code in timelinearto the number of input weights if these weights are sorted.[2]However, although optimal among methods encoding symbols separately, Huffman codingis not always optimalamong all compression methods – it is replaced witharithmetic coding[3]orasymmetric numeral systems[4]if a better compression ratio is required. In 1951,David A. Huffmanand hisMITinformation theoryclassmates were given the choice of a term paper or a finalexam. The professor,Robert M. Fano, assigned aterm paperon the problem of finding the most efficient binary code. Huffman, unable to prove any codes were the most efficient, was about to give up and start studying for the final when he hit upon the idea of using a frequency-sortedbinary treeand quickly proved this method the most efficient.[5] In doing so, Huffman outdid Fano, who had worked withClaude Shannonto develop a similar code. Building the tree from the bottom up guaranteed optimality, unlike the top-down approach ofShannon–Fano coding. Huffman coding uses a specific method for choosing the representation for each symbol, resulting in aprefix code(sometimes called "prefix-free codes", that is, the bit string representing some particular symbol is never a prefix of the bit string representing any other symbol). Huffman coding is such a widespread method for creating prefix codes that the term "Huffman code" is widely used as a synonym for "prefix code" even when such a code is not produced by Huffman's algorithm. Input.AlphabetA=(a1,a2,…,an){\displaystyle A=(a_{1},a_{2},\dots ,a_{n})}, which is the symbol alphabet of sizen{\displaystyle n}.TupleW=(w1,w2,…,wn){\displaystyle W=(w_{1},w_{2},\dots ,w_{n})}, which is the tuple of the (positive) symbol weights (usually proportional to probabilities), i.e.wi=weight⁡(ai),i∈{1,2,…,n}{\displaystyle w_{i}=\operatorname {weight} \left(a_{i}\right),\,i\in \{1,2,\dots ,n\}}.Output.CodeC(W)=(c1,c2,…,cn){\displaystyle C\left(W\right)=(c_{1},c_{2},\dots ,c_{n})}, which is the tuple of (binary) codewords, whereci{\displaystyle c_{i}}is the codeword forai,i∈{1,2,…,n}{\displaystyle a_{i},\,i\in \{1,2,\dots ,n\}}.Goal.LetL(C(W))=∑i=1nwilength⁡(ci){\textstyle L(C(W))=\sum _{i=1}^{n}w_{i}\operatorname {length} (c_{i})}be the weighted path length of codeC{\displaystyle C}. Condition:L(C(W))≤L(T(W)){\displaystyle L(C(W))\leq L(T(W))}for any codeT(W){\displaystyle T(W)}. We give an example of the result of Huffman coding for a code with five characters and given weights. We will not verify that it minimizesLover all codes, but we will computeLand compare it to theShannon entropyHof the given set of weights; the result is nearly optimal. 
For any code that isbiunique, meaning that the code isuniquely decodeable, the sum of the probability budgets across all symbols is always less than or equal to one. In this example, the sum is strictly equal to one; as a result, the code is termed acompletecode. If this is not the case, one can always derive an equivalent code by adding extra symbols (with associated null probabilities), to make the code complete while keeping itbiunique. As defined byShannon (1948), the information contenth(in bits) of each symbolaiwith non-null probability is TheentropyH(in bits) is the weighted sum, across all symbolsaiwith non-zero probabilitywi, of the information content of each symbol: (Note: A symbol with zero probability has zero contribution to the entropy, sincelimw→0+wlog2⁡w=0{\displaystyle \lim _{w\to 0^{+}}w\log _{2}w=0}. So for simplicity, symbols with zero probability can be left out of the formula above.) As a consequence ofShannon's source coding theorem, the entropy is a measure of the smallest codeword length that is theoretically possible for the given alphabet with associated weights. In this example, the weighted average codeword length is 2.25 bits per symbol, only slightly larger than the calculated entropy of 2.205 bits per symbol. So not only is this code optimal in the sense that no other feasible code performs better, but it is very close to the theoretical limit established by Shannon. In general, a Huffman code need not be unique. Thus the set of Huffman codes for a given probability distribution is a non-empty subset of the codes minimizingL(C){\displaystyle L(C)}for that probability distribution. (However, for each minimizing codeword length assignment, there exists at least one Huffman code with those lengths.) The technique works by creating abinary treeof nodes. These can be stored in a regulararray, the size of which depends on the number of symbols,n{\displaystyle n}. A node can be either aleaf nodeor aninternal node. Initially, all nodes are leaf nodes, which contain thesymbolitself, theweight(frequency of appearance) of the symbol and optionally, a link to aparentnode which makes it easy to read the code (in reverse) starting from a leaf node. Internal nodes contain aweight, links totwo child nodesand an optional link to aparentnode. As a common convention, bit '0' represents following the left child and bit '1' represents following the right child. A finished tree has up ton{\displaystyle n}leaf nodes andn−1{\displaystyle n-1}internal nodes. A Huffman tree that omits unused symbols produces the most optimal code lengths. The process begins with the leaf nodes containing the probabilities of the symbol they represent. Then, the process takes the two nodes with smallest probability, and creates a new internal node having these two nodes as children. The weight of the new node is set to the sum of the weight of the children. We then apply the process again, on the new internal node and on the remaining nodes (i.e., we exclude the two leaf nodes), we repeat this process until only one node remains, which is the root of the Huffman tree. The simplest construction algorithm uses apriority queuewhere the node with lowest probability is given highest priority: Since efficient priority queue data structures require O(logn) time per insertion, and a tree withnleaves has 2n−1 nodes, this algorithm operates in O(nlogn) time, wherenis the number of symbols. 
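As a minimal sketch of the priority-queue construction just described, the following builds a Huffman code for five symbols and reports the weighted codeword length L and the entropy H. The specific weights are an assumption chosen to be consistent with the figures quoted above (L = 2.25 bits, H ≈ 2.205 bits); ties in the queue are broken by insertion order, so the individual codewords may differ from any particular published table even though the lengths match.

```python
import heapq
from math import log2

# Illustrative five-symbol weights, consistent with the averages quoted in the
# text (L = 2.25, H ~ 2.205), but not necessarily the article's own table.
weights = {"a": 0.10, "b": 0.15, "c": 0.30, "d": 0.16, "e": 0.29}

def huffman_codes(weights):
    """Build a Huffman code with a priority queue (O(n log n))."""
    # Each heap entry: (subtree weight, tie-breaker, {symbol: partial code}).
    heap = [(w, i, {sym: ""}) for i, (sym, w) in enumerate(weights.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)      # take the two lowest-weight nodes
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (w1 + w2, counter, merged))  # their new parent
        counter += 1
    return heap[0][2]

codes = huffman_codes(weights)
L = sum(weights[s] * len(c) for s, c in codes.items())
H = -sum(w * log2(w) for w in weights.values())
print(codes)
print(f"weighted length L = {L:.3f} bits/symbol, entropy H = {H:.3f} bits/symbol")
# Expected: L = 2.250, H = 2.205 (codeword lengths 2, 2, 2, 3, 3 in some order)
```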
If the symbols are sorted by probability, there is alinear-time(O(n)) method to create a Huffman tree using twoqueues, the first one containing the initial weights (along with pointers to the associated leaves), and combined weights (along with pointers to the trees) being put in the back of the second queue. This assures that the lowest weight is always kept at the front of one of the two queues: Once the Huffman tree has been generated, it is traversed to generate a dictionary which maps the symbols to binary codes as follows: The final encoding of any symbol is then read by a concatenation of the labels on the edges along the path from the root node to the symbol. In many cases, time complexity is not very important in the choice of algorithm here, sincenhere is the number of symbols in the alphabet, which is typically a very small number (compared to the length of the message to be encoded); whereas complexity analysis concerns the behavior whenngrows to be very large. It is generally beneficial to minimize the variance of codeword length. For example, a communication buffer receiving Huffman-encoded data may need to be larger to deal with especially long symbols if the tree is especially unbalanced. To minimize variance, simply break ties between queues by choosing the item in the first queue. This modification will retain the mathematical optimality of the Huffman coding while both minimizing variance and minimizing the length of the longest character code. Generally speaking, the process of decompression is simply a matter of translating the stream of prefix codes to individual byte values, usually by traversing the Huffman tree node by node as each bit is read from the input stream (reaching a leaf node necessarily terminates the search for that particular byte value). Before this can take place, however, the Huffman tree must be somehow reconstructed. In the simplest case, where character frequencies are fairly predictable, the tree can be preconstructed (and even statistically adjusted on each compression cycle) and thus reused every time, at the expense of at least some measure of compression efficiency. Otherwise, the information to reconstruct the tree must be sent a priori. A naive approach might be to prepend the frequency count of each character to the compression stream. Unfortunately, the overhead in such a case could amount to several kilobytes, so this method has little practical use. If the data is compressed usingcanonical encoding, the compression model can be precisely reconstructed with justB⋅2B{\displaystyle B\cdot 2^{B}}bits of information (whereBis the number of bits per symbol). Another method is to simply prepend the Huffman tree, bit by bit, to the output stream. For example, assuming that the value of 0 represents a parent node and 1 a leaf node, whenever the latter is encountered the tree building routine simply reads the next 8 bits to determine the character value of that particular leaf. The process continues recursively until the last leaf node is reached; at that point, the Huffman tree will thus be faithfully reconstructed. The overhead using such a method ranges from roughly 2 to 320 bytes (assuming an 8-bit alphabet). Many other techniques are possible as well. In any case, since the compressed data can include unused "trailing bits" the decompressor must be able to determine when to stop producing output. 
This can be accomplished by either transmitting the length of the decompressed data along with the compression model or by defining a special code symbol to signify the end of input (the latter method can adversely affect code length optimality, however). The probabilities used can be generic ones for the application domain that are based on average experience, or they can be the actual frequencies found in the text being compressed. This requires that afrequency tablemust be stored with the compressed text. See the Decompression section above for more information about the various techniques employed for this purpose. Huffman's original algorithm is optimal for a symbol-by-symbol coding with a known input probability distribution, i.e., separately encoding unrelated symbols in such a data stream. However, it is not optimal when the symbol-by-symbol restriction is dropped, or when theprobability mass functionsare unknown. Also, if symbols are notindependent and identically distributed, a single code may be insufficient for optimality. Other methods such asarithmetic codingoften have better compression capability. Although both aforementioned methods can combine an arbitrary number of symbols for more efficient coding and generally adapt to the actual input statistics, arithmetic coding does so without significantly increasing its computational or algorithmic complexities (though the simplest version is slower and more complex than Huffman coding). Such flexibility is especially useful when input probabilities are not precisely known or vary significantly within the stream. However, Huffman coding is usually faster and arithmetic coding was historically a subject of some concern overpatentissues. Thus many technologies have historically avoided arithmetic coding in favor of Huffman and other prefix coding techniques. As of mid-2010, the most commonly used techniques for this alternative to Huffman coding have passed into the public domain as the early patents have expired. For a set of symbols with a uniform probability distribution and a number of members which is apower of two, Huffman coding is equivalent to simple binaryblock encoding, e.g.,ASCIIcoding. This reflects the fact that compression is not possible with such an input, no matter what the compression method, i.e., doing nothing to the data is the optimal thing to do. Huffman coding is optimal among all methods in any case where each input symbol is a known independent and identically distributed random variable having a probability that isdyadic. Prefix codes, and thus Huffman coding in particular, tend to have inefficiency on small alphabets, where probabilities often fall between these optimal (dyadic) points. The worst case for Huffman coding can happen when the probability of the most likely symbol far exceeds 2−1= 0.5, making the upper limit of inefficiency unbounded. There are two related approaches for getting around this particular inefficiency while still using Huffman coding. Combining a fixed number of symbols together ("blocking") often increases (and never decreases) compression. As the size of the block approaches infinity, Huffman coding theoretically approaches the entropy limit, i.e., optimal compression.[7]However, blocking arbitrarily large groups of symbols is impractical, as the complexity of a Huffman code is linear in the number of possibilities to be encoded, a number that is exponential in the size of a block. This limits the amount of blocking that is done in practice. 
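The effect of blocking can be illustrated numerically. The sketch below is illustrative only; the two-symbol source with probabilities 0.9 and 0.1 is an assumption rather than an example from the text. It computes optimal codeword lengths for blocks of one to four symbols and shows the rate in bits per original symbol approaching the entropy, while the number of cases to encode grows exponentially with the block size.

```python
import heapq
from itertools import product
from math import log2

def huffman_lengths(weights):
    """Optimal prefix-code (Huffman) codeword lengths for the given weights."""
    heap = [(w, i, [i]) for i, w in enumerate(weights)]
    lengths = [0] * len(weights)
    heapq.heapify(heap)
    counter = len(weights)
    while len(heap) > 1:
        w1, _, ids1 = heapq.heappop(heap)
        w2, _, ids2 = heapq.heappop(heap)
        for i in ids1 + ids2:     # every symbol under the merged node
            lengths[i] += 1       # sits one level deeper in the tree
        heapq.heappush(heap, (w1 + w2, counter, ids1 + ids2))
        counter += 1
    return lengths

p = {"A": 0.9, "B": 0.1}                      # an illustrative, skewed source
entropy = -sum(w * log2(w) for w in p.values())

for block in (1, 2, 3, 4):
    symbols = list(product(p, repeat=block))
    weights = []
    for t in symbols:
        w = 1.0
        for s in t:
            w *= p[s]
        weights.append(w)
    lengths = huffman_lengths(weights)
    rate = sum(w * l for w, l in zip(weights, lengths)) / block
    print(f"block size {block}: {rate:.3f} bits per source symbol "
          f"(entropy {entropy:.3f})")
```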
A practical alternative, in widespread use, isrun-length encoding. This technique adds one step in advance of entropy coding, specifically counting (runs) of repeated symbols, which are then encoded. For the simple case ofBernoulli processes,Golomb codingis optimal among prefix codes for coding run length, a fact proved via the techniques of Huffman coding.[8]A similar approach is taken by fax machines usingmodified Huffman coding. However, run-length coding is not as adaptable to as many input types as other compression technologies. Many variations of Huffman coding exist,[9]some of which use a Huffman-like algorithm, and others of which find optimal prefix codes (while, for example, putting different restrictions on the output). Note that, in the latter case, the method need not be Huffman-like, and, indeed, need not even bepolynomial time. Then-ary Huffmanalgorithm uses an alphabet of sizen, typically {0, 1, ..., n-1}, to encode messages and build ann-ary tree. This approach was considered by Huffman in his original paper. The same algorithm applies as for binary (n=2{\displaystyle n=2}) codes, but instead of combining the two least likely symbols, thenleast likely symbols are grouped together. Note that forn> 2, not all sets of source words can properly form a completen-ary tree for Huffman coding. In these cases, additional placeholder symbols with 0 probability may need to be added. This is because the structure of the tree needs to repeatedly joinnbranches into one - also known as an "nto 1" combination. For binary coding, this is a "2 to 1" combination, which works with any number of symbols. Forn-ary coding, a complete tree is only possible when the total number of symbols (real + placeholders) leaves a remainder of 1 when divided by (n-1).[1] A variation calledadaptive Huffman codinginvolves calculating the probabilities dynamically based on recent actual frequencies in the sequence of source symbols, and changing the coding tree structure to match the updated probability estimates. It is used rarely in practice, since the cost of updating the tree makes it slower than optimizedadaptive arithmetic coding, which is more flexible and has better compression. Most often, the weights used in implementations of Huffman coding represent numeric probabilities, but the algorithm given above does not require this; it requires only that the weights form atotally orderedcommutative monoid, meaning a way to order weights and to add them. TheHuffman template algorithmenables one to use any kind of weights (costs, frequencies, pairs of weights, non-numerical weights) and one of many combining methods (not just addition). Such algorithms can solve other minimization problems, such as minimizingmaxi[wi+length(ci)]{\displaystyle \max _{i}\left[w_{i}+\mathrm {length} \left(c_{i}\right)\right]}, a problem first applied to circuit design. Length-limited Huffman codingis a variant where the goal is still to achieve a minimum weighted path length, but there is an additional restriction that the length of each codeword must be less than a given constant. Thepackage-merge algorithmsolves this problem with a simplegreedyapproach very similar to that used by Huffman's algorithm. Its time complexity isO(nL){\displaystyle O(nL)}, whereL{\displaystyle L}is the maximum length of a codeword. No algorithm is known to solve this problem inO(n){\displaystyle O(n)}orO(nlog⁡n){\displaystyle O(n\log n)}time, unlike the presorted and unsorted conventional Huffman problems, respectively. 
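The padding rule for the n-ary case mentioned above can be written as a small helper; the function name and the ternary example are illustrative assumptions.

```python
def nary_placeholders(num_symbols: int, n: int) -> int:
    """Number of dummy (zero-probability) symbols needed so that a complete
    n-ary Huffman tree exists, i.e. so the total count is congruent to 1
    modulo (n - 1)."""
    if n < 2:
        raise ValueError("code alphabet must have at least 2 symbols")
    if num_symbols <= 1:
        return 0
    remainder = (num_symbols - 1) % (n - 1)
    return 0 if remainder == 0 else (n - 1) - remainder

# Example: 6 source symbols with a ternary (n = 3) code need 1 placeholder,
# since 7 % 2 == 1 allows repeated "3 to 1" merges down to a single root.
print(nary_placeholders(6, 3))  # 1
print(nary_placeholders(5, 3))  # 0 (5 already leaves remainder 1 mod 2)
print(nary_placeholders(9, 2))  # 0 (binary coding never needs padding)
```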
In the standard Huffman coding problem, it is assumed that each symbol in the set that the code words are constructed from has an equal cost to transmit: a code word whose length isNdigits will always have a cost ofN, no matter how many of those digits are 0s, how many are 1s, etc. When working under this assumption, minimizing the total cost of the message and minimizing the total number of digits are the same thing. Huffman coding with unequal letter costsis the generalization without this assumption: the letters of the encoding alphabet may have non-uniform lengths, due to characteristics of the transmission medium. An example is the encoding alphabet ofMorse code, where a 'dash' takes longer to send than a 'dot', and therefore the cost of a dash in transmission time is higher. The goal is still to minimize the weighted average codeword length, but it is no longer sufficient just to minimize the number of symbols used by the message. No algorithm is known to solve this in the same manner or with the same efficiency as conventional Huffman coding, though it has been solved byRichard M. Karp[10]whose solution has been refined for the case of integer costs by Mordecai J. Golin.[11] In the standard Huffman coding problem, it is assumed that any codeword can correspond to any input symbol. In the alphabetic version, the alphabetic order of inputs and outputs must be identical. Thus, for example,A={a,b,c}{\displaystyle A=\left\{a,b,c\right\}}could not be assigned codeH(A,C)={00,1,01}{\displaystyle H\left(A,C\right)=\left\{00,1,01\right\}}, but instead should be assigned eitherH(A,C)={00,01,1}{\displaystyle H\left(A,C\right)=\left\{00,01,1\right\}}orH(A,C)={0,10,11}{\displaystyle H\left(A,C\right)=\left\{0,10,11\right\}}. This is also known as theHu–Tuckerproblem, afterT. C. HuandAlan Tucker, the authors of the paper presenting the firstO(nlog⁡n){\displaystyle O(n\log n)}-timesolution to this optimal binary alphabetic problem,[12]which has some similarities to Huffman algorithm, but is not a variation of this algorithm. A later method, theGarsia–Wachs algorithmofAdriano GarsiaandMichelle L. Wachs(1977), uses simpler logic to perform the same comparisons in the same total time bound. These optimal alphabetic binary trees are often used asbinary search trees.[13] If weights corresponding to the alphabetically ordered inputs are in numerical order, the Huffman code has the same lengths as the optimal alphabetic code, which can be found from calculating these lengths, rendering Hu–Tucker coding unnecessary. The code resulting from numerically (re-)ordered input is sometimes called thecanonical Huffman codeand is often the code used in practice, due to ease of encoding/decoding. The technique for finding this code is sometimes calledHuffman–Shannon–Fano coding, since it is optimal like Huffman coding, but alphabetic in weight probability, likeShannon–Fano coding. The Huffman–Shannon–Fano code corresponding to the example is{000,001,01,10,11}{\displaystyle \{000,001,01,10,11\}}, which, having the same codeword lengths as the original solution, is also optimal. But incanonical Huffman code, the result is{110,111,00,01,10}{\displaystyle \{110,111,00,01,10\}}. Arithmetic codingand Huffman coding produce equivalent results — achieving entropy — when every symbol has a probability of the form 1/2k. 
In other circumstances, arithmetic coding can offer better compression than Huffman coding because, intuitively, its "code words" can have effectively non-integer bit lengths, whereas code words in prefix codes such as Huffman codes can only have an integer number of bits. Therefore, a code word of length $k$ only optimally matches a symbol of probability $1/2^{k}$, and other probabilities are not represented optimally; whereas the code word length in arithmetic coding can be made to exactly match the true probability of the symbol. This difference is especially striking for small alphabet sizes.[citation needed] Prefix codes nevertheless remain in wide use because of their simplicity, high speed, and lack of patent coverage. They are often used as a "back-end" to other compression methods. Deflate (PKZIP's algorithm) and multimedia codecs such as JPEG and MP3 have a front-end model and quantization followed by the use of prefix codes; these are often called "Huffman codes" even though most applications use pre-defined variable-length codes rather than codes designed using Huffman's algorithm.
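As a sketch of the canonical construction mentioned earlier, the following assigns canonical codewords from code lengths alone, processing shorter lengths first and breaking ties alphabetically. Assuming lengths (3, 3, 2, 2, 2) for symbols a through e, consistent with the five-symbol example discussed above, it reproduces a code of the form quoted in the text.

```python
def canonical_codes(lengths):
    """Assign canonical Huffman codewords given per-symbol code lengths.
    Symbols are processed shortest-first (ties broken alphabetically), with
    consecutive binary values assigned and left-shifted when the length grows."""
    ordered = sorted(lengths.items(), key=lambda kv: (kv[1], kv[0]))
    codes, value, prev_len = {}, 0, ordered[0][1]
    for symbol, length in ordered:
        value <<= (length - prev_len)            # pad with zeros for longer codes
        codes[symbol] = format(value, f"0{length}b")
        value += 1
        prev_len = length
    return codes

# Illustrative lengths (3, 3, 2, 2, 2) for symbols a..e, consistent with the
# five-symbol example discussed earlier in the text.
print(canonical_codes({"a": 3, "b": 3, "c": 2, "d": 2, "e": 2}))
# {'c': '00', 'd': '01', 'e': '10', 'a': '110', 'b': '111'}
```

Transmitting only the code lengths and rebuilding the codewords this way on both ends is the design choice that makes canonical codes attractive in practice.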
https://en.wikipedia.org/wiki/Huffman_coding
Arandom variable(also calledrandom quantity,aleatory variable, orstochastic variable) is amathematicalformalization of a quantity or object which depends onrandomevents.[1]The term 'random variable' in its mathematical definition refers to neither randomness nor variability[2]but instead is a mathematicalfunctionin which Informally, randomness typically represents some fundamental element of chance, such as in the roll of adie; it may also represent uncertainty, such asmeasurement error.[1]However, theinterpretation of probabilityis philosophically complicated, and even in specific cases is not always straightforward. The purely mathematical analysis of random variables is independent of such interpretational difficulties, and can be based upon a rigorousaxiomaticsetup. In the formal mathematical language ofmeasure theory, a random variable is defined as ameasurable functionfrom aprobability measure space(called thesample space) to ameasurable space. This allows consideration of thepushforward measure, which is called thedistributionof the random variable; the distribution is thus aprobability measureon the set of all possible values of the random variable. It is possible for two random variables to have identical distributions but to differ in significant ways; for instance, they may beindependent. It is common to consider the special cases ofdiscrete random variablesandabsolutely continuous random variables, corresponding to whether a random variable is valued in a countable subset or in an interval ofreal numbers. There are other important possibilities, especially in the theory ofstochastic processes, wherein it is natural to considerrandom sequencesorrandom functions. Sometimes arandom variableis taken to be automatically valued in the real numbers, with more general random quantities instead being calledrandom elements. According toGeorge Mackey,Pafnuty Chebyshevwas the first person "to think systematically in terms of random variables".[3] Arandom variableX{\displaystyle X}is ameasurable functionX:Ω→E{\displaystyle X\colon \Omega \to E}from a sample spaceΩ{\displaystyle \Omega }as a set of possibleoutcomesto ameasurable spaceE{\displaystyle E}. The technical axiomatic definition requires the sample spaceΩ{\displaystyle \Omega }to belong to aprobability triple(Ω,F,P){\displaystyle (\Omega ,{\mathcal {F}},\operatorname {P} )}(see themeasure-theoretic definition). A random variable is often denoted by capitalRoman letterssuch asX,Y,Z,T{\displaystyle X,Y,Z,T}.[4] The probability thatX{\displaystyle X}takes on a value in a measurable setS⊆E{\displaystyle S\subseteq E}is written as In many cases,X{\displaystyle X}isreal-valued, i.e.E=R{\displaystyle E=\mathbb {R} }. In some contexts, the termrandom element(seeextensions) is used to denote a random variable not of this form. When theimage(or range) ofX{\displaystyle X}is finite orcountablyinfinite, the random variable is called adiscrete random variable[5]: 399and its distribution is adiscrete probability distribution, i.e. can be described by aprobability mass functionthat assigns a probability to each value in the image ofX{\displaystyle X}. 
If the image is uncountably infinite (usually aninterval) thenX{\displaystyle X}is called acontinuous random variable.[6][7]In the special case that it isabsolutely continuous, its distribution can be described by aprobability density function, which assigns probabilities to intervals; in particular, each individual point must necessarily have probability zero for an absolutely continuous random variable. Not all continuous random variables are absolutely continuous.[8] Any random variable can be described by itscumulative distribution function, which describes the probability that the random variable will be less than or equal to a certain value. The term "random variable" in statistics is traditionally limited to thereal-valuedcase (E=R{\displaystyle E=\mathbb {R} }). In this case, the structure of the real numbers makes it possible to define quantities such as theexpected valueandvarianceof a random variable, itscumulative distribution function, and themomentsof its distribution. However, the definition above is valid for anymeasurable spaceE{\displaystyle E}of values. Thus one can consider random elements of other setsE{\displaystyle E}, such as randomBoolean values,categorical values,complex numbers,vectors,matrices,sequences,trees,sets,shapes,manifolds, andfunctions. One may then specifically refer to arandom variable oftypeE{\displaystyle E}, or anE{\displaystyle E}-valued random variable. This more general concept of arandom elementis particularly useful in disciplines such asgraph theory,machine learning,natural language processing, and other fields indiscrete mathematicsandcomputer science, where one is often interested in modeling the random variation of non-numericaldata structures. In some cases, it is nonetheless convenient to represent each element ofE{\displaystyle E}, using one or more real numbers. In this case, a random element may optionally be represented as avector of real-valued random variables(all defined on the same underlying probability spaceΩ{\displaystyle \Omega }, which allows the different random variables tocovary). For example: If a random variableX:Ω→R{\displaystyle X\colon \Omega \to \mathbb {R} }defined on the probability space(Ω,F,P){\displaystyle (\Omega ,{\mathcal {F}},\operatorname {P} )}is given, we can ask questions like "How likely is it that the value ofX{\displaystyle X}is equal to 2?". This is the same as the probability of the event{ω:X(ω)=2}{\displaystyle \{\omega :X(\omega )=2\}\,\!}which is often written asP(X=2){\displaystyle P(X=2)\,\!}orpX(2){\displaystyle p_{X}(2)}for short. Recording all these probabilities of outputs of a random variableX{\displaystyle X}yields theprobability distributionofX{\displaystyle X}. The probability distribution "forgets" about the particular probability space used to defineX{\displaystyle X}and only records the probabilities of various output values ofX{\displaystyle X}. Such a probability distribution, ifX{\displaystyle X}is real-valued, can always be captured by itscumulative distribution function and sometimes also using aprobability density function,fX{\displaystyle f_{X}}. Inmeasure-theoreticterms, we use the random variableX{\displaystyle X}to "push-forward" the measureP{\displaystyle P}onΩ{\displaystyle \Omega }to a measurepX{\displaystyle p_{X}}onR{\displaystyle \mathbb {R} }. 
The measurepX{\displaystyle p_{X}}is called the "(probability) distribution ofX{\displaystyle X}" or the "law ofX{\displaystyle X}".[9]The densityfX=dpX/dμ{\displaystyle f_{X}=dp_{X}/d\mu }, theRadon–Nikodym derivativeofpX{\displaystyle p_{X}}with respect to some reference measureμ{\displaystyle \mu }onR{\displaystyle \mathbb {R} }(often, this reference measure is theLebesgue measurein the case of continuous random variables, or thecounting measurein the case of discrete random variables). The underlying probability spaceΩ{\displaystyle \Omega }is a technical device used to guarantee the existence of random variables, sometimes to construct them, and to define notions such ascorrelation and dependenceorindependencebased on ajoint distributionof two or more random variables on the same probability space. In practice, one often disposes of the spaceΩ{\displaystyle \Omega }altogether and just puts a measure onR{\displaystyle \mathbb {R} }that assigns measure 1 to the whole real line, i.e., one works with probability distributions instead of random variables. See the article onquantile functionsfor fuller development. Consider an experiment where a person is chosen at random. An example of a random variable may be the person's height. Mathematically, the random variable is interpreted as a function which maps the person to their height. Associated with the random variable is a probability distribution that allows the computation of the probability that the height is in any subset of possible values, such as the probability that the height is between 180 and 190 cm, or the probability that the height is either less than 150 or more than 200 cm. Another random variable may be the person's number of children; this is a discrete random variable with non-negative integer values. It allows the computation of probabilities for individual integer values – the probability mass function (PMF) – or for sets of values, including infinite sets. For example, the event of interest may be "an even number of children". For both finite and infinite event sets, their probabilities can be found by adding up the PMFs of the elements; that is, the probability of an even number of children is the infinite sumPMF⁡(0)+PMF⁡(2)+PMF⁡(4)+⋯{\displaystyle \operatorname {PMF} (0)+\operatorname {PMF} (2)+\operatorname {PMF} (4)+\cdots }. In examples such as these, thesample spaceis often suppressed, since it is mathematically hard to describe, and the possible values of the random variables are then treated as a sample space. But when two random variables are measured on the same sample space of outcomes, such as the height and number of children being computed on the same random persons, it is easier to track their relationship if it is acknowledged that both height and number of children come from the same random person, for example so that questions of whether such random variables are correlated or not can be posed. If{an},{bn}{\textstyle \{a_{n}\},\{b_{n}\}}are countable sets of real numbers,bn>0{\textstyle b_{n}>0}and∑nbn=1{\textstyle \sum _{n}b_{n}=1}, thenF=∑nbnδan(x){\textstyle F=\sum _{n}b_{n}\delta _{a_{n}}(x)}is a discrete distribution function. Hereδt(x)=0{\displaystyle \delta _{t}(x)=0}forx<t{\displaystyle x<t},δt(x)=1{\displaystyle \delta _{t}(x)=1}forx≥t{\displaystyle x\geq t}. Taking for instance an enumeration of all rational numbers as{an}{\displaystyle \{a_{n}\}}, one gets a discrete function that is not necessarily astep function(piecewiseconstant). 
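The number-of-children example above involves an infinite sum of PMF values. As a concrete sketch, the following assumes a Poisson distribution for the count (purely an illustrative choice, not one made in the text) and sums the even-indexed PMF values, comparing against the closed form for that particular case.

```python
from math import exp, factorial

# The "number of children" example leaves the distribution unspecified;
# a Poisson(1.4) PMF is assumed here purely for illustration.
lam = 1.4

def pmf(k):
    return exp(-lam) * lam**k / factorial(k)

# P(even number of children) = PMF(0) + PMF(2) + PMF(4) + ...
# The infinite sum is truncated once the remaining terms are negligible.
p_even = sum(pmf(k) for k in range(0, 50, 2))
print(f"P(even) ~= {p_even:.6f}")

# Closed form for a Poisson(lam): (1 + exp(-2*lam)) / 2, as a sanity check.
print(f"closed form  {(1 + exp(-2 * lam)) / 2:.6f}")
```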
The possible outcomes for one coin toss can be described by the sample spaceΩ={heads,tails}{\displaystyle \Omega =\{{\text{heads}},{\text{tails}}\}}. We can introduce a real-valued random variableY{\displaystyle Y}that models a $1 payoff for a successful bet on heads as follows:Y(ω)={1,ifω=heads,0,ifω=tails.{\displaystyle Y(\omega )={\begin{cases}1,&{\text{if }}\omega ={\text{heads}},\\[6pt]0,&{\text{if }}\omega ={\text{tails}}.\end{cases}}} If the coin is afair coin,Yhas aprobability mass functionfY{\displaystyle f_{Y}}given by:fY(y)={12,ify=1,12,ify=0,{\displaystyle f_{Y}(y)={\begin{cases}{\tfrac {1}{2}},&{\text{if }}y=1,\\[6pt]{\tfrac {1}{2}},&{\text{if }}y=0,\end{cases}}} A random variable can also be used to describe the process of rolling dice and the possible outcomes. The most obvious representation for the two-dice case is to take the set of pairs of numbersn1andn2from {1, 2, 3, 4, 5, 6} (representing the numbers on the two dice) as the sample space. The total number rolled (the sum of the numbers in each pair) is then a random variableXgiven by the function that maps the pair to the sum:X((n1,n2))=n1+n2{\displaystyle X((n_{1},n_{2}))=n_{1}+n_{2}}and (if the dice arefair) has a probability mass functionfXgiven by:fX(S)=min(S−1,13−S)36,forS∈{2,3,4,5,6,7,8,9,10,11,12}{\displaystyle f_{X}(S)={\frac {\min(S-1,13-S)}{36}},{\text{ for }}S\in \{2,3,4,5,6,7,8,9,10,11,12\}} Formally, a continuous random variable is a random variable whosecumulative distribution functioniscontinuouseverywhere.[10]There are no "gaps", which would correspond to numbers which have a finite probability ofoccurring. Instead, continuous random variablesalmost nevertake an exact prescribed valuec(formally,∀c∈R:Pr(X=c)=0{\textstyle \forall c\in \mathbb {R} :\;\Pr(X=c)=0}) but there is a positive probability that its value will lie in particularintervalswhich can bearbitrarily small. Continuous random variables usually admitprobability density functions(PDF), which characterize their CDF andprobability measures; such distributions are also calledabsolutely continuous; but some continuous distributions aresingular, or mixes of an absolutely continuous part and a singular part. An example of a continuous random variable would be one based on a spinner that can choose a horizontal direction. Then the values taken by the random variable are directions. We could represent these directions by North, West, East, South, Southeast, etc. However, it is commonly more convenient to map the sample space to a random variable which takes values which are real numbers. This can be done, for example, by mapping a direction to a bearing in degrees clockwise from North. The random variable then takes values which are real numbers from the interval [0, 360), with all parts of the range being "equally likely". In this case,X= the angle spun. Any real number has probability zero of being selected, but a positive probability can be assigned to anyrangeof values. For example, the probability of choosing a number in [0, 180] is1⁄2. Instead of speaking of a probability mass function, we say that the probabilitydensityofXis 1/360. The probability of a subset of [0, 360) can be calculated by multiplying the measure of the set by 1/360. In general, the probability of a set for a given continuous random variable can be calculated by integrating the density over the given set. 
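Before the formal definition that follows, the spinner example can be checked numerically. The short sketch below (illustrative) computes the probability of a range of directions from the constant density 1/360 and cross-checks it by sampling; it also shows that any single exact value is, in practice, never hit.

```python
import random

# Spinner direction X, uniform on [0, 360): density f(x) = 1/360 on that range.
def prob_interval(c, d):
    """P(c <= X <= d), obtained by integrating the constant density 1/360."""
    return (d - c) / 360.0

print(prob_interval(0, 180))      # 0.5, as stated in the text

# Monte Carlo cross-check (purely illustrative).
random.seed(0)
samples = [random.uniform(0, 360) for _ in range(100_000)]
print(sum(0 <= x <= 180 for x in samples) / len(samples))  # ~0.5

# Any single exact value has probability zero:
print(sum(x == 123.456 for x in samples) / len(samples))   # 0.0
```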
More formally, given anyintervalI=[a,b]={x∈R:a≤x≤b}{\textstyle I=[a,b]=\{x\in \mathbb {R} :a\leq x\leq b\}}, a random variableXI∼U⁡(I)=U⁡[a,b]{\displaystyle X_{I}\sim \operatorname {U} (I)=\operatorname {U} [a,b]}is called a "continuous uniformrandom variable" (CURV) if the probability that it takes a value in asubintervaldepends only on the length of the subinterval. This implies that the probability ofXI{\displaystyle X_{I}}falling in any subinterval[c,d]⊆[a,b]{\displaystyle [c,d]\subseteq [a,b]}isproportionalto thelengthof the subinterval, that is, ifa≤c≤d≤b, one has Pr(XI∈[c,d])=d−cb−a{\displaystyle \Pr \left(X_{I}\in [c,d]\right)={\frac {d-c}{b-a}}} where the last equality results from theunitarity axiomof probability. Theprobability density functionof a CURVX∼U⁡[a,b]{\displaystyle X\sim \operatorname {U} [a,b]}is given by theindicator functionof its interval ofsupportnormalized by the interval's length:fX(x)={1b−a,a≤x≤b0,otherwise.{\displaystyle f_{X}(x)={\begin{cases}\displaystyle {1 \over b-a},&a\leq x\leq b\\0,&{\text{otherwise}}.\end{cases}}}Of particular interest is the uniform distribution on theunit interval[0,1]{\displaystyle [0,1]}. Samples of any desiredprobability distributionD{\displaystyle \operatorname {D} }can be generated by calculating thequantile functionofD{\displaystyle \operatorname {D} }on arandomly-generated numberdistributed uniformly on the unit interval. This exploitsproperties of cumulative distribution functions, which are a unifying framework for all random variables. Amixed random variableis a random variable whosecumulative distribution functionis neitherdiscretenoreverywhere-continuous.[10]It can be realized as a mixture of a discrete random variable and a continuous random variable; in which case theCDFwill be the weighted average of the CDFs of the component variables.[10] An example of a random variable of mixed type would be based on an experiment where a coin is flipped and the spinner is spun only if the result of the coin toss is heads. If the result is tails,X= −1; otherwiseX= the value of the spinner as in the preceding example. There is a probability of1⁄2that this random variable will have the value −1. Other ranges of values would have half the probabilities of the last example. Most generally, every probability distribution on the real line is a mixture of discrete part, singular part, and an absolutely continuous part; seeLebesgue's decomposition theorem § Refinement. The discrete part is concentrated on a countable set, but this set may be dense (like the set of all rational numbers). The most formal,axiomaticdefinition of a random variable involvesmeasure theory. Continuous random variables are defined in terms ofsetsof numbers, along with functions that map such sets to probabilities. Because of various difficulties (e.g. theBanach–Tarski paradox) that arise if such sets are insufficiently constrained, it is necessary to introduce what is termed asigma-algebrato constrain the possible sets over which probabilities can be defined. Normally, a particular such sigma-algebra is used, theBorel σ-algebra, which allows for probabilities to be defined over any sets that can be derived either directly from continuous intervals of numbers or by a finite orcountably infinitenumber ofunionsand/orintersectionsof such intervals.[11] The measure-theoretic definition is as follows. Let(Ω,F,P){\displaystyle (\Omega ,{\mathcal {F}},P)}be aprobability spaceand(E,E){\displaystyle (E,{\mathcal {E}})}ameasurable space. 
Then an(E,E){\displaystyle (E,{\mathcal {E}})}-valued random variableis a measurable functionX:Ω→E{\displaystyle X\colon \Omega \to E}, which means that, for every subsetB∈E{\displaystyle B\in {\mathcal {E}}}, itspreimageisF{\displaystyle {\mathcal {F}}}-measurable;X−1(B)∈F{\displaystyle X^{-1}(B)\in {\mathcal {F}}}, whereX−1(B)={ω:X(ω)∈B}{\displaystyle X^{-1}(B)=\{\omega :X(\omega )\in B\}}.[12]This definition enables us to measure any subsetB∈E{\displaystyle B\in {\mathcal {E}}}in the target space by looking at its preimage, which by assumption is measurable. In more intuitive terms, a member ofΩ{\displaystyle \Omega }is a possible outcome, a member ofF{\displaystyle {\mathcal {F}}}is a measurable subset of possible outcomes, the functionP{\displaystyle P}gives the probability of each such measurable subset,E{\displaystyle E}represents the set of values that the random variable can take (such as the set of real numbers), and a member ofE{\displaystyle {\mathcal {E}}}is a "well-behaved" (measurable) subset ofE{\displaystyle E}(those for which the probability may be determined). The random variable is then a function from any outcome to a quantity, such that the outcomes leading to any useful subset of quantities for the random variable have a well-defined probability. WhenE{\displaystyle E}is atopological space, then the most common choice for theσ-algebraE{\displaystyle {\mathcal {E}}}is theBorel σ-algebraB(E){\displaystyle {\mathcal {B}}(E)}, which is the σ-algebra generated by the collection of all open sets inE{\displaystyle E}. In such case the(E,E){\displaystyle (E,{\mathcal {E}})}-valued random variable is called anE{\displaystyle E}-valued random variable. Moreover, when the spaceE{\displaystyle E}is the real lineR{\displaystyle \mathbb {R} }, then such a real-valued random variable is called simply arandom variable. In this case the observation space is the set of real numbers. Recall,(Ω,F,P){\displaystyle (\Omega ,{\mathcal {F}},P)}is the probability space. For a real observation space, the functionX:Ω→R{\displaystyle X\colon \Omega \rightarrow \mathbb {R} }is a real-valued random variable if This definition is a special case of the above because the set{(−∞,r]:r∈R}{\displaystyle \{(-\infty ,r]:r\in \mathbb {R} \}}generates the Borel σ-algebra on the set of real numbers, and it suffices to check measurability on any generating set. Here we can prove measurability on this generating set by using the fact that{ω:X(ω)≤r}=X−1((−∞,r]){\displaystyle \{\omega :X(\omega )\leq r\}=X^{-1}((-\infty ,r])}. The probability distribution of a random variable is often characterised by a small number of parameters, which also have a practical interpretation. For example, it is often enough to know what its "average value" is. This is captured by the mathematical concept ofexpected valueof a random variable, denotedE⁡[X]{\displaystyle \operatorname {E} [X]}, and also called thefirstmoment.In general,E⁡[f(X)]{\displaystyle \operatorname {E} [f(X)]}is not equal tof(E⁡[X]){\displaystyle f(\operatorname {E} [X])}. Once the "average value" is known, one could then ask how far from this average value the values ofX{\displaystyle X}typically are, a question that is answered by thevarianceandstandard deviationof a random variable.E⁡[X]{\displaystyle \operatorname {E} [X]}can be viewed intuitively as an average obtained from an infinite population, the members of which are particular evaluations ofX{\displaystyle X}. 
Mathematically, this is known as the (generalised)problem of moments: for a given class of random variablesX{\displaystyle X}, find a collection{fi}{\displaystyle \{f_{i}\}}of functions such that the expectation valuesE⁡[fi(X)]{\displaystyle \operatorname {E} [f_{i}(X)]}fully characterise thedistributionof the random variableX{\displaystyle X}. Moments can only be defined for real-valued functions of random variables (or complex-valued, etc.). If the random variable is itself real-valued, then moments of the variable itself can be taken, which are equivalent to moments of the identity functionf(X)=X{\displaystyle f(X)=X}of the random variable. However, even for non-real-valued random variables, moments can be taken of real-valued functions of those variables. For example, for acategoricalrandom variableXthat can take on thenominalvalues "red", "blue" or "green", the real-valued function[X=green]{\displaystyle [X={\text{green}}]}can be constructed; this uses theIverson bracket, and has the value 1 ifX{\displaystyle X}has the value "green", 0 otherwise. Then, theexpected valueand other moments of this function can be determined. A new random variableYcan be defined byapplyinga realBorel measurable functiong:R→R{\displaystyle g\colon \mathbb {R} \rightarrow \mathbb {R} }to the outcomes of areal-valuedrandom variableX{\displaystyle X}. That is,Y=g(X){\displaystyle Y=g(X)}. Thecumulative distribution functionofY{\displaystyle Y}is then If functiong{\displaystyle g}is invertible (i.e.,h=g−1{\displaystyle h=g^{-1}}exists, whereh{\displaystyle h}isg{\displaystyle g}'sinverse function) and is eitherincreasing or decreasing, then the previous relation can be extended to obtain With the same hypotheses of invertibility ofg{\displaystyle g}, assuming alsodifferentiability, the relation between theprobability density functionscan be found by differentiating both sides of the above expression with respect toy{\displaystyle y}, in order to obtain[10] If there is no invertibility ofg{\displaystyle g}but eachy{\displaystyle y}admits at most a countable number of roots (i.e., a finite, or countably infinite, number ofxi{\displaystyle x_{i}}such thaty=g(xi){\displaystyle y=g(x_{i})}) then the previous relation between theprobability density functionscan be generalized with wherexi=gi−1(y){\displaystyle x_{i}=g_{i}^{-1}(y)}, according to theinverse function theorem. The formulas for densities do not demandg{\displaystyle g}to be increasing. In the measure-theoretic,axiomatic approachto probability, if a random variableX{\displaystyle X}onΩ{\displaystyle \Omega }and aBorel measurable functiong:R→R{\displaystyle g\colon \mathbb {R} \rightarrow \mathbb {R} }, thenY=g(X){\displaystyle Y=g(X)}is also a random variable onΩ{\displaystyle \Omega }, since the composition of measurable functionsis also measurable. (However, this is not necessarily true ifg{\displaystyle g}isLebesgue measurable.[citation needed]) The same procedure that allowed one to go from a probability space(Ω,P){\displaystyle (\Omega ,P)}to(R,dFX){\displaystyle (\mathbb {R} ,dF_{X})}can be used to obtain the distribution ofY{\displaystyle Y}. LetX{\displaystyle X}be a real-valued,continuous random variableand letY=X2{\displaystyle Y=X^{2}}. Ify<0{\displaystyle y<0}, thenP(X2≤y)=0{\displaystyle P(X^{2}\leq y)=0}, so Ify≥0{\displaystyle y\geq 0}, then so SupposeX{\displaystyle X}is a random variable with a cumulative distribution whereθ>0{\displaystyle \theta >0}is a fixed parameter. 
Consider the random variableY=log(1+e−X).{\displaystyle Y=\mathrm {log} (1+e^{-X}).}Then, The last expression can be calculated in terms of the cumulative distribution ofX,{\displaystyle X,}so which is thecumulative distribution function(CDF) of anexponential distribution. SupposeX{\displaystyle X}is a random variable with astandard normal distribution, whose density is Consider the random variableY=X2.{\displaystyle Y=X^{2}.}We can find the density using the above formula for a change of variables: In this case the change is notmonotonic, because every value ofY{\displaystyle Y}has two corresponding values ofX{\displaystyle X}(one positive and negative). However, because of symmetry, both halves will transform identically, i.e., The inverse transformation is and its derivative is Then, This is achi-squared distributionwith onedegree of freedom. SupposeX{\displaystyle X}is a random variable with anormal distribution, whose density is Consider the random variableY=X2.{\displaystyle Y=X^{2}.}We can find the density using the above formula for a change of variables: In this case the change is notmonotonic, because every value ofY{\displaystyle Y}has two corresponding values ofX{\displaystyle X}(one positive and negative). Differently from the previous example, in this case however, there is no symmetry and we have to compute the two distinct terms: The inverse transformation is and its derivative is Then, This is anoncentral chi-squared distributionwith onedegree of freedom. There are several different senses in which random variables can be considered to be equivalent. Two random variables can be equal, equal almost surely, or equal in distribution. In increasing order of strength, the precise definition of these notions of equivalence is given below. If the sample space is a subset of the real line, random variablesXandYareequal in distribution(denotedX=dY{\displaystyle X{\stackrel {d}{=}}Y}) if they have the same distribution functions: To be equal in distribution, random variables need not be defined on the same probability space. Two random variables having equalmoment generating functionshave the same distribution. This provides, for example, a useful method of checking equality of certain functions ofindependent, identically distributed (IID) random variables. However, the moment generating function exists only for distributions that have a definedLaplace transform. Two random variablesXandYareequalalmost surely(denotedX=a.s.Y{\displaystyle X\;{\stackrel {\text{a.s.}}{=}}\;Y}) if, and only if, the probability that they are different iszero: For all practical purposes in probability theory, this notion of equivalence is as strong as actual equality. It is associated to the following distance: where "ess sup" represents theessential supremumin the sense ofmeasure theory. Finally, the two random variablesXandYareequalif they are equal as functions on their measurable space: This notion is typically the least useful in probability theory because in practice and in theory, the underlyingmeasure spaceof theexperimentis rarely explicitly characterized or even characterizable. Since we rarely explicitly construct the probability space underlying a random variable, the difference between these notions of equivalence is somewhat subtle. 
Essentially, two random variables consideredin isolationare "practically equivalent" if they are equal in distribution -- but once we relate them tootherrandom variables defined on the same probability space, then they only remain "practically equivalent" if they are equal almost surely. For example, consider the real random variablesA,B,C, andDall defined on the same probability space. Suppose thatAandBare equal almost surely (A=a.s.B{\displaystyle A\;{\stackrel {\text{a.s.}}{=}}\;B}), butAandCare only equal in distribution (A=dC{\displaystyle A{\stackrel {d}{=}}C}). ThenA+D=a.s.B+D{\displaystyle A+D\;{\stackrel {\text{a.s.}}{=}}\;B+D}, but in generalA+D≠C+D{\displaystyle A+D\;\neq \;C+D}(not even in distribution). Similarly, we have that the expectation valuesE(AD)=E(BD){\displaystyle \mathbb {E} (AD)=\mathbb {E} (BD)}, but in generalE(AD)≠E(CD){\displaystyle \mathbb {E} (AD)\neq \mathbb {E} (CD)}. Therefore, two random variables that are equal in distribution (but not equal almost surely) can have differentcovarianceswith a third random variable. A significant theme in mathematical statistics consists of obtaining convergence results for certainsequencesof random variables; for instance thelaw of large numbersand thecentral limit theorem. There are various senses in which a sequenceXn{\displaystyle X_{n}}of random variables can converge to a random variableX{\displaystyle X}. These are explained in the article onconvergence of random variables.
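The A, B, C, D discussion above is abstract, so here is one concrete instantiation, an assumed example rather than one from the text: take A = D to be the same standard normal draw, B = A, and C = −A, which equals A in distribution by symmetry but not almost surely. A Monte Carlo sketch then shows the differing covariances and sums.

```python
import random
from statistics import mean, pstdev

# An assumed instantiation of the A, B, C, D discussion (not from the text):
#   A = D = a standard normal draw,  B = A (equal almost surely),
#   C = -A (equal to A in distribution only, by symmetry of N(0, 1)).
random.seed(1)
A = [random.gauss(0, 1) for _ in range(200_000)]
B = A[:]                      # B = A almost surely
C = [-a for a in A]           # C has the same N(0, 1) distribution as A
D = A[:]                      # D, the "third" random variable they relate to

print(mean(a * d for a, d in zip(A, D)))    # E[AD] ~  1
print(mean(c * d for c, d in zip(C, D)))    # E[CD] ~ -1  (different covariance)
print(pstdev(a + d for a, d in zip(A, D)))  # std of A + D ~ 2  (A + D = 2A)
print(pstdev(c + d for c, d in zip(C, D)))  # std of C + D = 0  (C + D = 0)
```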
https://en.wikipedia.org/wiki/Random_variable
Thepropositional calculus[a]is a branch oflogic.[1]It is also calledpropositional logic,[2]statement logic,[1]sentential calculus,[3]sentential logic,[4][1]or sometimeszeroth-order logic.[b][6][7][8]Sometimes, it is calledfirst-orderpropositional logic[9]to contrast it withSystem F, but it should not be confused withfirst-order logic. It deals withpropositions[1](which can betrue or false)[10]and relations between propositions,[11]including the construction of arguments based on them.[12]Compound propositions are formed by connecting propositions bylogical connectivesrepresenting thetruth functionsofconjunction,disjunction,implication,biconditional, andnegation.[13][14][15][16]Some sources include other connectives, as in the table below. Unlikefirst-order logic, propositional logic does not deal with non-logical objects, predicates about them, orquantifiers. However, all the machinery of propositional logic is included in first-order logic and higher-order logics. In this sense, propositional logic is the foundation of first-order logic and higher-order logic. Propositional logic is typically studied with aformal language,[c]in which propositions are represented by letters, which are calledpropositional variables. These are then used, together with symbols for connectives, to makepropositional formula. Because of this, the propositional variables are calledatomic formulasof a formal propositional language.[14][2]While the atomic propositions are typically represented by letters of thealphabet,[d][14]there is a variety of notations to represent the logical connectives. The following table shows the main notational variants for each of the connectives in propositional logic. The most thoroughly researched branch of propositional logic isclassical truth-functional propositional logic,[1]in which formulas are interpreted as having precisely one of two possibletruth values, the truth value oftrueor the truth value offalse.[19]Theprinciple of bivalenceand thelaw of excluded middleare upheld. By comparison withfirst-order logic, truth-functional propositional logic is considered to bezeroth-order logic.[7][8] Although propositional logic (also called propositional calculus) had been hinted by earlier philosophers, it was developed into a formal logic (Stoic logic) byChrysippusin the 3rd century BC[20]and expanded by his successorStoics. The logic was focused onpropositions. This was different from the traditionalsyllogistic logic, which focused onterms. However, most of the original writings were lost[21]and, at some time between the 3rd and 6th century CE, Stoic logic faded into oblivion, to be resurrected only in the 20th century, in the wake of the (re)-discovery of propositional logic.[22] Symbolic logic, which would come to be important to refine propositional logic, was first developed by the 17th/18th-century mathematicianGottfried Leibniz, whosecalculus ratiocinatorwas, however, unknown to the larger logical community. Consequently, many of the advances achieved by Leibniz were recreated by logicians likeGeorge BooleandAugustus De Morgan, completely independent of Leibniz.[23] Gottlob Frege'spredicate logicbuilds upon propositional logic, and has been described as combining "the distinctive features of syllogistic logic and propositional logic."[24]Consequently, predicate logic ushered in a new era in logic's history; however, advances in propositional logic were still made after Frege, includingnatural deduction,truth treesandtruth tables. 
Natural deduction was invented byGerhard GentzenandStanisław Jaśkowski. Truth trees were invented byEvert Willem Beth.[25]The invention of truth tables, however, is of uncertain attribution. Within works by Frege[26]andBertrand Russell,[27]are ideas influential to the invention of truth tables. The actual tabular structure (being formatted as a table), itself, is generally credited to eitherLudwig WittgensteinorEmil Post(or both, independently).[26]Besides Frege and Russell, others credited with having ideas preceding truth tables include Philo, Boole,Charles Sanders Peirce,[28]andErnst Schröder. Others credited with the tabular structure includeJan Łukasiewicz,Alfred North Whitehead,William Stanley Jevons,John Venn, andClarence Irving Lewis.[27]Ultimately, some have concluded, like John Shosky, that "It is far from clear that any one person should be given the title of 'inventor' of truth-tables".[27] Propositional logic, as currently studied in universities, is a specification of a standard oflogical consequencein which only the meanings ofpropositional connectivesare considered in evaluating the conditions for the truth of a sentence, or whether a sentence logically follows from some other sentence or group of sentences.[2] Propositional logic deals withstatements, which are defined as declarative sentences having truth value.[29][1]Examples of statements might include: Declarative sentences are contrasted withquestions, such as "What is Wikipedia?", andimperativestatements, such as "Please addcitationsto support the claims in this article.".[30][31]Such non-declarative sentences have notruth value,[32]and are only dealt with innonclassical logics, callederoteticandimperative logics. In propositional logic, a statement can contain one or more other statements as parts.[1]Compound sentencesare formed from simpler sentences and express relationships among the constituent sentences.[33]This is done by combining them withlogical connectives:[33][34]the main types of compound sentences arenegations,conjunctions,disjunctions,implications, andbiconditionals,[33]which are formed by using the corresponding connectives to connect propositions.[35][36]InEnglish, these connectives are expressed by the words "and" (conjunction), "or" (disjunction), "not" (negation), "if" (material conditional), and "if and only if" (biconditional).[1][13]Examples of such compound sentences might include: If sentences lack any logical connectives, they are calledsimple sentences,[1]oratomic sentences;[34]if they contain one or more logical connectives, they are calledcompound sentences,[33]ormolecular sentences.[34] Sentential connectivesare a broader category that includes logical connectives.[2][34]Sentential connectives are any linguistic particles that bind sentences to create a new compound sentence,[2][34]or that inflect a single sentence to create a new sentence.[2]Alogical connective, orpropositional connective, is a kind of sentential connective with the characteristic feature that, when the original sentences it operates on are (or express)propositions, the new sentence that results from its application also is (or expresses) aproposition.[2]Philosophers disagree about what exactly a proposition is,[10][2]as well as about which sentential connectives in natural languages should be counted as logical connectives.[34][2]Sentential connectives are also calledsentence-functors,[37]and logical connectives are also calledtruth-functors.[37] Anargumentis defined as apairof things, namely a set of sentences, called 
thepremises,[g]and a sentence, called theconclusion.[38][34][37]The conclusion is claimed tofollow fromthe premises,[37]and the premises are claimed tosupportthe conclusion.[34] The following is an example of an argument within the scope of propositional logic: Thelogical formof this argument is known asmodus ponens,[39]which is aclassicallyvalidform.[40]So, in classical logic, the argument isvalid, although it may or may not besound, depending on themeteorologicalfacts in a given context. Thisexample argumentwill be reused when explaining§ Formalization. An argument isvalidif, and only if, it isnecessarythat, if all its premises are true, its conclusion is true.[38][41][42]Alternatively, an argument is valid if, and only if, it isimpossiblefor all the premises to be true while the conclusion is false.[42][38] Validity is contrasted withsoundness.[42]An argument issoundif, and only if, it is valid and all its premises are true.[38][42]Otherwise, it isunsound.[42] Logic, in general, aims to precisely specify valid arguments.[34]This is done by defining a valid argument as one in which its conclusion is alogical consequenceof its premises,[34]which, when this is understood assemantic consequence, means that there is nocasein which the premises are true but the conclusion is not true[34]– see§ Semanticsbelow. Propositional logic is typically studied through aformal systemin whichformulasof aformal languageareinterpretedto representpropositions. This formal language is the basis forproof systems, which allow a conclusion to be derived from premises if, and only if, it is alogical consequenceof them. This section will show how this works by formalizing the§ Example argument. The formal language for a propositional calculus will be fully specified in§ Language, and an overview of proof systems will be given in§ Proof systems. Since propositional logic is not concerned with the structure of propositions beyond the point where they cannot be decomposed any more by logical connectives,[39][1]it is typically studied by replacing suchatomic(indivisible) statements with letters of the alphabet, which are interpreted as variables representing statements (propositional variables).[1]With propositional variables, the§ Example argumentwould then be symbolized as follows: WhenPis interpreted as "It's raining" andQas "it's cloudy" these symbolic expressions correspond exactly with the original expression in natural language. Not only that, but they will also correspond with any other inference with the samelogical form. When a formal system is used to represent formal logic, only statement letters (usually capital roman letters such asP{\displaystyle P},Q{\displaystyle Q}andR{\displaystyle R}) are represented directly. The natural language propositions that arise when they're interpreted are outside the scope of the system, and the relation between the formal system and its interpretation is likewise outside the formal system itself. 
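As a minimal illustration (the function name and the particular truth values below are our own choices, not part of the formalization above), the symbolized argument can be evaluated under a single interpretation:

```python
# A minimal sketch: the Example argument "P -> Q, P, therefore Q",
# with P read as "it's raining" and Q as "it's cloudy", checked under one interpretation.

def implies(p: bool, q: bool) -> bool:
    """Material conditional: false only when p is true and q is false."""
    return (not p) or q

# One interpretation: it's raining (P = True) and it's cloudy (Q = True).
P, Q = True, True

premise_1 = implies(P, Q)   # "If it's raining, then it's cloudy."
premise_2 = P               # "It's raining."
conclusion = Q              # "It's cloudy."

# Under any interpretation in which both premises are true, the conclusion is true as well.
if premise_1 and premise_2:
    assert conclusion
```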
If we assume that the validity ofmodus ponenshas been accepted as anaxiom, then the same§ Example argumentcan also be depicted like this: This method of displaying it isGentzen's notation fornatural deductionandsequent calculus.[43]The premises are shown above a line, called theinference line,[15]separated by acomma, which indicatescombinationof premises.[44]The conclusion is written below the inference line.[15]The inference line representssyntactic consequence,[15]sometimes calleddeductive consequence,[45]> which is also symbolized with ⊢.[46][45]So the above can also be written in one line asP→Q,P⊢Q{\displaystyle P\to Q,P\vdash Q}.[h] Syntactic consequence is contrasted withsemantic consequence,[47]which is symbolized with ⊧.[46][45]In this case, the conclusion followssyntacticallybecause thenatural deductioninference ruleofmodus ponenshas been assumed. For more on inference rules, see the sections on proof systems below. Thelanguage(commonly calledL{\displaystyle {\mathcal {L}}})[45][48][34]of a propositional calculus is defined in terms of:[2][14] Awell-formed formulais any atomic formula, or any formula that can be built up from atomic formulas by means of operator symbols according to the rules of the grammar. The languageL{\displaystyle {\mathcal {L}}}, then, is defined either as beingidentical toits set of well-formed formulas,[48]or ascontainingthat set (together with, for instance, its set of connectives and variables).[14][34] Usually the syntax ofL{\displaystyle {\mathcal {L}}}is defined recursively by just a few definitions, as seen next; some authors explicitly includeparenthesesas punctuation marks when defining their language's syntax,[34][51]while others use them without comment.[2][14] Given a set of atomic propositional variablesp1{\displaystyle p_{1}},p2{\displaystyle p_{2}},p3{\displaystyle p_{3}}, ..., and a set of propositional connectivesc11{\displaystyle c_{1}^{1}},c21{\displaystyle c_{2}^{1}},c31{\displaystyle c_{3}^{1}}, ...,c12{\displaystyle c_{1}^{2}},c22{\displaystyle c_{2}^{2}},c32{\displaystyle c_{3}^{2}}, ...,c13{\displaystyle c_{1}^{3}},c23{\displaystyle c_{2}^{3}},c33{\displaystyle c_{3}^{3}}, ..., a formula of propositional logic isdefined recursivelyby these definitions:[2][14][50][i] Writing the result of applyingcnm{\displaystyle c_{n}^{m}}to⟨{\displaystyle \langle }A, B, C, …⟩{\displaystyle \rangle }in functional notation, ascnm{\displaystyle c_{n}^{m}}(A, B, C, …), we have the following as examples of well-formed formulas: What was given asDefinition 2above, which is responsible for the composition of formulas, is referred to byColin Howsonas theprinciple of composition.[39][j]It is thisrecursionin the definitionof a language's syntax which justifies the use of the word "atomic" to refer to propositional variables, since all formulas in the languageL{\displaystyle {\mathcal {L}}}are built up from the atoms as ultimate building blocks.[2]Composite formulas (all formulas besides atoms) are calledmolecules,[49]ormolecular sentences.[34](This is an imperfect analogy withchemistry, since a chemical molecule may sometimes have only one atom, as inmonatomic gases.)[49] The definition that "nothing else is a formula", given above asDefinition 3, excludes any formula from the language which is not specifically required by the other definitions in the syntax.[37]In particular, it excludesinfinitely longformulas from beingwell-formed.[37]It is sometimes called theClosure Clause.[53] An alternative to the syntax definitions given above is to write 
acontext-free (CF) grammarfor the languageL{\displaystyle {\mathcal {L}}}inBackus-Naur form(BNF).[54][55]This is more common incomputer sciencethan inphilosophy.[55]It can be done in many ways,[54]of which a particularly brief one, for the common set of five connectives, is this single clause:[55][56] This clause, due to itsself-referentialnature (sinceϕ{\displaystyle \phi }is in some branches of the definition ofϕ{\displaystyle \phi }), also acts as arecursive definition, and therefore specifies the entire language. To expand it to addmodal operators, one need only add …|◻ϕ|◊ϕ{\displaystyle |~\Box \phi ~|~\Diamond \phi }to the end of the clause.[55] Mathematicians sometimes distinguish between propositional constants,propositional variables, and schemata.Propositional constantsrepresent some particular proposition,[57]whilepropositional variablesrange over the set of all atomic propositions.[57]Schemata, orschematic letters, however, range over all formulas.[37][1](Schematic letters are also calledmetavariables.)[38]It is common to represent propositional constants byA,B, andC, propositional variables byP,Q, andR, and schematic letters are often Greek letters, most oftenφ,ψ, andχ.[37][1] However, some authors recognize only two "propositional constants" in their formal system: the special symbol⊤{\displaystyle \top }, called "truth", which always evaluates toTrue, and the special symbol⊥{\displaystyle \bot }, called "falsity", which always evaluates toFalse.[58][59][60]Other authors also include these symbols, with the same meaning, but consider them to be "zero-place truth-functors",[37]or equivalently, "nullaryconnectives".[50] To serve as a model of the logic of a givennatural language, a formal language must be semantically interpreted.[34]Inclassical logic, all propositions evaluate to exactly one of twotruth-values:TrueorFalse.[1][61]For example, "Wikipediais afreeonline encyclopediathat anyone can edit"evaluates toTrue,[62]while "Wikipedia is apaperencyclopedia"evaluates toFalse.[63] In other respects, the following formal semantics can apply to the language of any propositional logic, but the assumptions that there are only two semantic values (bivalence), that only one of the two is assigned to each formula in the language (noncontradiction), and that every formula gets assigned a value (excluded middle), are distinctive features of classical logic.[61][64][37]To learn aboutnonclassical logicswith more than two truth-values, and their unique semantics, one may consult the articles on "Many-valued logic", "Three-valued logic", "Finite-valued logic", and "Infinite-valued logic". 
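Returning to the grammar clause above, the following is a rough, self-contained sketch of how such a recursively defined language can be represented and given its classical two-valued semantics; the class and function names are ours, not anything prescribed by the sources cited.

```python
# Formulas as a recursive data type, roughly following the BNF clause
# phi ::= p | ~phi | (phi & phi) | (phi | phi) | (phi -> phi) | (phi <-> phi),
# plus a classical two-valued evaluation under an interpretation.
from dataclasses import dataclass

@dataclass(frozen=True)
class Var:                      # atomic propositional variable, e.g. Var("p1")
    name: str

@dataclass(frozen=True)
class Not:                      # negation
    operand: object

@dataclass(frozen=True)
class BinOp:                    # "and", "or", "implies", or "iff"
    connective: str
    left: object
    right: object

def evaluate(formula, interpretation):
    """interpretation: dict mapping variable names to True/False."""
    if isinstance(formula, Var):
        return interpretation[formula.name]
    if isinstance(formula, Not):
        return not evaluate(formula.operand, interpretation)
    left = evaluate(formula.left, interpretation)
    right = evaluate(formula.right, interpretation)
    return {"and": left and right,
            "or": left or right,
            "implies": (not left) or right,
            "iff": left == right}[formula.connective]

# (p1 -> p2) <-> (~p1 | p2), built up recursively from the atoms p1 and p2:
phi = BinOp("iff", BinOp("implies", Var("p1"), Var("p2")),
            BinOp("or", Not(Var("p1")), Var("p2")))
print(evaluate(phi, {"p1": True, "p2": False}))    # True (the two sides agree)
```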
For a given languageL{\displaystyle {\mathcal {L}}}, aninterpretation,[65]valuation,[51]Boolean valuation,[66]orcase,[34][k]is anassignmentofsemantic valuesto each formula ofL{\displaystyle {\mathcal {L}}}.[34]For a formal language of classical logic, a case is defined as anassignment, to each formula ofL{\displaystyle {\mathcal {L}}}, of one or the other, but not both, of thetruth values, namelytruth(T, or 1) andfalsity(F, or 0).[67][68]An interpretation that follows the rules of classical logic is sometimes called aBoolean valuation.[51][69]An interpretation of a formal language for classical logic is often expressed in terms oftruth tables.[70][1]Since each formula is only assigned a single truth-value, an interpretation may be viewed as afunction, whosedomainisL{\displaystyle {\mathcal {L}}}, and whoserangeis its set of semantic valuesV={T,F}{\displaystyle {\mathcal {V}}=\{{\mathsf {T}},{\mathsf {F}}\}},[2]orV={1,0}{\displaystyle {\mathcal {V}}=\{1,0\}}.[34] Forn{\displaystyle n}distinct propositional symbols there are2n{\displaystyle 2^{n}}distinct possible interpretations. For any particular symbola{\displaystyle a}, for example, there are21=2{\displaystyle 2^{1}=2}possible interpretations: eithera{\displaystyle a}is assignedT, ora{\displaystyle a}is assignedF. And for the paira{\displaystyle a},b{\displaystyle b}there are22=4{\displaystyle 2^{2}=4}possible interpretations: either both are assignedT, or both are assignedF, ora{\displaystyle a}is assignedTandb{\displaystyle b}is assignedF, ora{\displaystyle a}is assignedFandb{\displaystyle b}is assignedT.[70]SinceL{\displaystyle {\mathcal {L}}}hasℵ0{\displaystyle \aleph _{0}}, that is,denumerablymany propositional symbols, there are2ℵ0=c{\displaystyle 2^{\aleph _{0}}={\mathfrak {c}}}, and thereforeuncountably manydistinct possible interpretations ofL{\displaystyle {\mathcal {L}}}as a whole.[70] WhereI{\displaystyle {\mathcal {I}}}is an interpretation andφ{\displaystyle \varphi }andψ{\displaystyle \psi }represent formulas, the definition of anargument, given in§ Arguments, may then be stated as a pair⟨{φ1,φ2,φ3,...,φn},ψ⟩{\displaystyle \langle \{\varphi _{1},\varphi _{2},\varphi _{3},...,\varphi _{n}\},\psi \rangle }, where{φ1,φ2,φ3,...,φn}{\displaystyle \{\varphi _{1},\varphi _{2},\varphi _{3},...,\varphi _{n}\}}is the set of premises andψ{\displaystyle \psi }is the conclusion. The definition of an argument'svalidity, i.e. its property that{φ1,φ2,φ3,...,φn}⊨ψ{\displaystyle \{\varphi _{1},\varphi _{2},\varphi _{3},...,\varphi _{n}\}\models \psi }, can then be stated as itsabsence of a counterexample, where acounterexampleis defined as a caseI{\displaystyle {\mathcal {I}}}in which the argument's premises{φ1,φ2,φ3,...,φn}{\displaystyle \{\varphi _{1},\varphi _{2},\varphi _{3},...,\varphi _{n}\}}are all true but the conclusionψ{\displaystyle \psi }is not true.[34][39]As will be seen in§ Semantic truth, validity, consequence, this is the same as to say that the conclusion is asemantic consequenceof the premises. 
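To make the counting of interpretations and the "absence of a counterexample" formulation of validity concrete, here is a minimal sketch (the function names are ours) that simply enumerates all 2^n assignments for the symbols occurring in an argument:

```python
# For n propositional symbols there are 2**n interpretations; an argument is valid
# exactly when no interpretation is a counterexample (all premises true, conclusion not true).
from itertools import product

def interpretations(symbols):
    """Yield every assignment of True/False to the given symbols."""
    for values in product([True, False], repeat=len(symbols)):
        yield dict(zip(symbols, values))

def is_valid(premises, conclusion, symbols):
    """premises/conclusion are functions from an interpretation (dict) to bool."""
    for i in interpretations(symbols):
        if all(p(i) for p in premises) and not conclusion(i):
            return False          # found a counterexample
    return True

# Modus ponens: {P -> Q, P} |= Q
premises = [lambda i: (not i["P"]) or i["Q"], lambda i: i["P"]]
conclusion = lambda i: i["Q"]
print(is_valid(premises, conclusion, ["P", "Q"]))   # True

# Affirming the consequent: {P -> Q, Q} |= P fails (counterexample: P false, Q true)
print(is_valid([lambda i: (not i["P"]) or i["Q"], lambda i: i["Q"]],
               lambda i: i["P"], ["P", "Q"]))       # False
```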
An interpretation assigns semantic values toatomic formulasdirectly.[65][34]Molecular formulas are assigned afunctionof the value of their constituent atoms, according to the connective used;[65][34]the connectives are defined in such a way that thetruth-valueof a sentence formed from atoms with connectives depends on the truth-values of the atoms that they're applied to, andonlyon those.[65][34]This assumption is referred to byColin Howsonas the assumption of thetruth-functionalityof theconnectives.[39] Since logical connectives are defined semantically only in terms of thetruth valuesthat they take when thepropositional variablesthat they're applied to take either of thetwo possibletruth values,[1][34]the semantic definition of the connectives is usually represented as atruth tablefor each of the connectives,[1][34][71]as seen below: This table covers each of the main fivelogical connectives:[13][14][15][16]conjunction(here notatedp∧q{\displaystyle p\land q}),disjunction(p∨q),implication(p→q),biconditional(p↔q) andnegation, (¬p, or ¬q, as the case may be). It is sufficient for determining the semantics of each of these operators.[1][72][34]For more truth tables for more different kinds of connectives, see the article "Truth table". Some authors (viz., all the authors cited in this subsection) write out the connective semantics using a list of statements instead of a table. In this format, whereI(φ){\displaystyle {\mathcal {I}}(\varphi )}is the interpretation ofφ{\displaystyle \varphi }, the five connectives are defined as:[37][51] Instead ofI(φ){\displaystyle {\mathcal {I}}(\varphi )}, the interpretation ofφ{\displaystyle \varphi }may be written out as|φ|{\displaystyle |\varphi |},[37][73]or, for definitions such as the above,I(φ)=T{\displaystyle {\mathcal {I}}(\varphi )={\mathsf {T}}}may be written simply as the English sentence "φ{\displaystyle \varphi }is given the valueT{\displaystyle {\mathsf {T}}}".[51]Yet other authors[74][75]may prefer to speak of aTarskian modelM{\displaystyle {\mathfrak {M}}}for the language, so that instead they'll use the notationM⊨φ{\displaystyle {\mathfrak {M}}\models \varphi }, which is equivalent to sayingI(φ)=T{\displaystyle {\mathcal {I}}(\varphi )={\mathsf {T}}}, whereI{\displaystyle {\mathcal {I}}}is the interpretation function forM{\displaystyle {\mathfrak {M}}}.[75] Some of these connectives may be defined in terms of others: for instance, implication,p→q{\displaystyle p\rightarrow q}, may be defined in terms of disjunction and negation, as¬p∨q{\displaystyle \neg p\lor q};[76]and disjunction may be defined in terms of negation and conjunction, as¬(¬p∧¬q{\displaystyle \neg (\neg p\land \neg q}.[51]In fact, atruth-functionally completesystem,[l]in the sense that all and only the classical propositional tautologies are theorems, may be derived using only disjunction and negation (asRussell,Whitehead, andHilbertdid), or using only implication and negation (asFregedid), or using only conjunction and negation, or even using only a single connective for "not and" (theSheffer stroke),[3]asJean Nicoddid.[2]Ajoint denialconnective (logical NOR) will also suffice, by itself, to define all other connectives. Besides NOR and NAND, no other connectives have this property.[51][m] Some authors, namelyHowson[39]and Cunningham,[78]distinguish equivalence from the biconditional. (As to equivalence, Howson calls it "truth-functional equivalence", while Cunningham calls it "logical equivalence".) 
Equivalence is symbolized with ⇔ and is a metalanguage symbol, while a biconditional is symbolized with ↔ and is a logical connective in the object languageL{\displaystyle {\mathcal {L}}}. Regardless, an equivalence or biconditional is true if, and only if, the formulas connected by it are assigned the same semantic value under every interpretation. Other authors often do not make this distinction, and may use the word "equivalence",[15]and/or the symbol ⇔,[79]to denote their object language's biconditional connective. Givenφ{\displaystyle \varphi }andψ{\displaystyle \psi }asformulas(or sentences) of a languageL{\displaystyle {\mathcal {L}}}, andI{\displaystyle {\mathcal {I}}}as an interpretation (or case)[n]ofL{\displaystyle {\mathcal {L}}}, then the following definitions apply:[70][68] For interpretations (cases)I{\displaystyle {\mathcal {I}}}ofL{\displaystyle {\mathcal {L}}}, these definitions are sometimes given: Forclassical logic, which assumes that all cases are complete and consistent,[34]the following theorems apply: Proof systems in propositional logic can be broadly classified intosemantic proof systemsandsyntactic proof systems,[88][89][90]according to the kind oflogical consequencethat they rely on: semantic proof systems rely on semantic consequence (φ⊨ψ{\displaystyle \varphi \models \psi }),[91]whereas syntactic proof systems rely on syntactic consequence (φ⊢ψ{\displaystyle \varphi \vdash \psi }).[92]Semantic consequence deals with the truth values of propositions in all possible interpretations, whereas syntactic consequence concerns the derivation of conclusions from premises based on rules and axioms within a formal system.[93]This section gives a very brief overview of the kinds of proof systems, withanchorsto the relevant sections of this article on each one, as well as to the separate Wikipedia articles on each one. Semantic proof systems rely on the concept of semantic consequence, symbolized asφ⊨ψ{\displaystyle \varphi \models \psi }, which indicates that ifφ{\displaystyle \varphi }is true, thenψ{\displaystyle \psi }must also be true in every possible interpretation.[93] Atruth tableis a semantic proof method used to determine the truth value of a propositional logic expression in every possible scenario.[94]By exhaustively listing the truth values of its constituent atoms, a truth table can show whether a proposition is true, false, tautological, or contradictory.[95]See§ Semantic proof via truth tables. Asemantic tableauis another semantic proof technique that systematically explores the truth of a proposition.[96]It constructs a tree where each branch represents a possible interpretation of the propositions involved.[97]If every branch leads to a contradiction, the original proposition is considered to be a contradiction, and its negation is considered atautology.[39]See§ Semantic proof via tableaux. Syntactic proof systems, in contrast, focus on the formal manipulation of symbols according to specific rules. The notion of syntactic consequence,φ⊢ψ{\displaystyle \varphi \vdash \psi }, signifies thatψ{\displaystyle \psi }can be derived fromφ{\displaystyle \varphi }using the rules of the formal system.[93] Anaxiomatic systemis a set of axioms or assumptions from which other statements (theorems) are logically derived.[98]In propositional logic, axiomatic systems define a base set of propositions considered to be self-evidently true, and theorems are proved by applying deduction rules to these axioms.[99]See§ Syntactic proof via axioms. 
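As a rough illustration of the semantic material above (a sketch with helper names of our own choosing, not anything prescribed by the cited sources), the truth-functional connectives can be written out directly and checked by brute force over all interpretations, which is exactly what a truth table does:

```python
# Connectives as truth functions, plus brute-force checks over every interpretation.
from itertools import product

NOT  = lambda p: not p
AND  = lambda p, q: p and q
OR   = lambda p, q: p or q
IMP  = lambda p, q: (not p) or q        # material conditional
IFF  = lambda p, q: p == q              # biconditional
NAND = lambda p, q: not (p and q)       # Sheffer stroke

pairs = list(product([True, False], repeat=2))

# Interdefinability claims from above: p -> q as ~p v q, and p v q as ~(~p & ~q).
assert all(IMP(p, q) == OR(NOT(p), q) for p, q in pairs)
assert all(OR(p, q) == NOT(AND(NOT(p), NOT(q))) for p, q in pairs)

# NAND alone is truth-functionally complete; negation and conjunction recovered from it.
assert all(NOT(p) == NAND(p, p) for p, q in pairs)
assert all(AND(p, q) == NAND(NAND(p, q), NAND(p, q)) for p, q in pairs)

def is_tautology(formula, n):
    """formula: a function of n truth values; is it true in every interpretation?"""
    return all(formula(*vals) for vals in product([True, False], repeat=n))

print(is_tautology(lambda p: OR(p, NOT(p)), 1))            # True: excluded middle
print(is_tautology(lambda p, q: IMP(AND(p, q), p), 2))     # True
print(is_tautology(lambda p, q: IMP(p, AND(p, q)), 2))     # False: merely contingent
```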
Natural deductionis a syntactic method of proof that emphasizes the derivation of conclusions from premises through the use of intuitive rules reflecting ordinary reasoning.[100]Each rule reflects a particular logical connective and shows how it can be introduced or eliminated.[100]See§ Syntactic proof via natural deduction. Thesequent calculusis a formal system that represents logical deductions as sequences or "sequents" of formulas.[101]Developed byGerhard Gentzen, this approach focuses on the structural properties of logical deductions and provides a powerful framework for proving statements within propositional logic.[101][102] Taking advantage of the semantic concept of validity (truth in every interpretation), it is possible to prove a formula's validity by using atruth table, which gives every possible interpretation (assignment of truth values to variables) of a formula.[95][49][37]If, and only if, all the lines of a truth table come out true, the formula is semantically valid (true in every interpretation).[95][49]Further, if (and only if)¬φ{\displaystyle \neg \varphi }is valid, thenφ{\displaystyle \varphi }is inconsistent.[83][84][85] For instance, this table shows that "p→ (q∨r→ (r→ ¬p))" is not valid:[49] The computation of the last column of the third line may be displayed as follows:[49] Further, using the theorem thatφ⊨ψ{\displaystyle \varphi \models \psi }if, and only if,(φ→ψ){\displaystyle (\varphi \to \psi )}is valid,[70][80]we can use a truth table to prove that a formula is a semantic consequence of a set of formulas:{φ1,φ2,φ3,...,φn}⊨ψ{\displaystyle \{\varphi _{1},\varphi _{2},\varphi _{3},...,\varphi _{n}\}\models \psi }if, and only if, we can produce a truth table that comes out all true for the formula((⋀i=1nφi)→ψ){\displaystyle \left(\left(\bigwedge _{i=1}^{n}\varphi _{i}\right)\rightarrow \psi \right)}(that is, if⊨((⋀i=1nφi)→ψ){\displaystyle \models \left(\left(\bigwedge _{i=1}^{n}\varphi _{i}\right)\rightarrow \psi \right)}).[103][104] Since truth tables have 2nlines for n variables, they can be tiresomely long for large values of n.[39]Analytic tableaux are a more efficient, but nevertheless mechanical,[71]semantic proof method; they take advantage of the fact that "we learn nothing about the validity of the inference from examining the truth-value distributions which make either the premises false or the conclusion true: the only relevant distributions when considering deductive validity are clearly just those which make the premises true or the conclusion false."[39] Analytic tableaux for propositional logic are fully specified by the rules that are stated in schematic form below.[51]These rules use "signed formulas", where a signed formula is an expressionTX{\displaystyle TX}orFX{\displaystyle FX}, whereX{\displaystyle X}is a (unsigned) formula of the languageL{\displaystyle {\mathcal {L}}}.[51](Informally,TX{\displaystyle TX}is read "X{\displaystyle X}is true", andFX{\displaystyle FX}is read "X{\displaystyle X}is false".)[51]Their formal semantic definition is that "under any interpretation, a signed formulaTX{\displaystyle TX}is called true ifX{\displaystyle X}is true, and false ifX{\displaystyle X}is false, whereas a signed formulaFX{\displaystyle FX}is called false ifX{\displaystyle X}is true, and true ifX{\displaystyle X}is false."[51] 1)T∼XFXF∼XTXspacer2)T(X∧Y)TXTYF(X∧Y)FX|FYspacer3)T(X∨Y)TX|TYF(X∨Y)FXFYspacer4)T(X⊃Y)FX|TYF(X⊃Y)TXFY{\displaystyle {\begin{aligned}&1)\quad {\frac {T\sim X}{FX}}\quad &&{\frac {F\sim X}{TX}}\\{\phantom {spacer}}\\&2)\quad 
{\frac {T(X\land Y)}{\begin{matrix}TX\\TY\end{matrix}}}\quad &&{\frac {F(X\land Y)}{FX|FY}}\\{\phantom {spacer}}\\&3)\quad {\frac {T(X\lor Y)}{TX|TY}}\quad &&{\frac {F(X\lor Y)}{\begin{matrix}FX\\FY\end{matrix}}}\\{\phantom {spacer}}\\&4)\quad {\frac {T(X\supset Y)}{FX|TY}}\quad &&{\frac {F(X\supset Y)}{\begin{matrix}TX\\FY\end{matrix}}}\end{aligned}}} In this notation, rule 2 means thatT(X∧Y){\displaystyle T(X\land Y)}yields bothTX,TY{\displaystyle TX,TY}, whereasF(X∧Y){\displaystyle F(X\land Y)}branchesintoFX,FY{\displaystyle FX,FY}. The notation is to be understood analogously for rules 3 and 4.[51]Often, in tableaux forclassical logic, thesigned formulanotation is simplified so thatTφ{\displaystyle T\varphi }is written simply asφ{\displaystyle \varphi }, andFφ{\displaystyle F\varphi }as¬φ{\displaystyle \neg \varphi }, which accounts for naming rule 1 the "Rule of Double Negation".[39][71] One constructs a tableau for a set of formulas by applying the rules to produce more lines and tree branches until every line has been used, producing acompletetableau. In some cases, a branch can come to contain bothTX{\displaystyle TX}andFX{\displaystyle FX}for someX{\displaystyle X}, which is to say, a contradiction. In that case, the branch is said toclose.[39]If every branch in a tree closes, the tree itself is said to close.[39]In virtue of the rules for construction of tableaux, a closed tree is a proof that the original formula, or set of formulas, used to construct it was itself self-contradictory, and therefore false.[39]Conversely, a tableau can also prove that a logical formula istautologous: if a formula is tautologous, its negation is a contradiction, so a tableau built from its negation will close.[39] To construct a tableau for an argument⟨{φ1,φ2,φ3,...,φn},ψ⟩{\displaystyle \langle \{\varphi _{1},\varphi _{2},\varphi _{3},...,\varphi _{n}\},\psi \rangle }, one first writes out the set of premise formulas,{φ1,φ2,φ3,...,φn}{\displaystyle \{\varphi _{1},\varphi _{2},\varphi _{3},...,\varphi _{n}\}}, with one formula on each line, signed withT{\displaystyle T}(that is,Tφ{\displaystyle T\varphi }for eachTφ{\displaystyle T\varphi }in the set);[71]and together with those formulas (the order is unimportant), one also writes out the conclusion,ψ{\displaystyle \psi }, signed withF{\displaystyle F}(that is,Fψ{\displaystyle F\psi }).[71]One then produces a truth tree (analytic tableau) by using all those lines according to the rules.[71]A closed tree will be proof that the argument was valid, in virtue of the fact thatφ⊨ψ{\displaystyle \varphi \models \psi }if, and only if,{φ,∼ψ}{\displaystyle \{\varphi ,\sim \psi \}}is inconsistent (also written asφ,∼ψ⊨{\displaystyle \varphi ,\sim \psi \models }).[71] Using semantic checking methods, such as truth tables or semantic tableaux, to check for tautologies and semantic consequences, it can be shown that, in classical logic, the following classical argument forms are semantically valid, i.e., these tautologies and semantic consequences hold.[37]We useφ{\displaystyle \varphi }⟚ψ{\displaystyle \psi }to denote equivalence ofφ{\displaystyle \varphi }andψ{\displaystyle \psi }, that is, as an abbreviation for bothφ⊨ψ{\displaystyle \varphi \models \psi }andψ⊨φ{\displaystyle \psi \models \varphi };[37]as an aid to reading the symbols, a description of each formula is given. 
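As a rough computational sketch of the tableau rules stated above (the tuple encoding of formulas and the function name are our own choices, and real tableau provers are considerably more refined), one can grow the tree recursively and report whether every branch closes:

```python
# Signed-formula tableaux: formulas are tuples like ("var","p"), ("not",X), ("and",X,Y),
# ("or",X,Y), ("imp",X,Y); a signed formula is a pair (sign, formula), sign True for "T".

def closes(branch):
    """Return True if every branch of the tableau grown from these signed formulas closes."""
    for i, (sign, f) in enumerate(branch):
        rest = branch[:i] + branch[i + 1:]
        kind = f[0]
        if kind == "var":
            continue                                   # literals are handled at the end
        if kind == "not":                              # rule 1: T~X / FX  and  F~X / TX
            return closes(rest + [(not sign, f[1])])
        if kind == "and":                              # rule 2: T(X&Y) stacks, F(X&Y) branches
            if sign:
                return closes(rest + [(True, f[1]), (True, f[2])])
            return closes(rest + [(False, f[1])]) and closes(rest + [(False, f[2])])
        if kind == "or":                               # rule 3: T(XvY) branches, F(XvY) stacks
            if sign:
                return closes(rest + [(True, f[1])]) and closes(rest + [(True, f[2])])
            return closes(rest + [(False, f[1]), (False, f[2])])
        if kind == "imp":                              # rule 4: T(X->Y) branches into FX | TY
            if sign:
                return closes(rest + [(False, f[1])]) and closes(rest + [(True, f[2])])
            return closes(rest + [(True, f[1]), (False, f[2])])
    # only literals remain: the branch closes iff some TX and FX both occur on it
    return any((True, f) in branch and (False, f) in branch for _, f in branch)

# Validity of modus ponens: sign the premises with T, the conclusion with F,
# and check that the whole tree closes.
P, Q = ("var", "p"), ("var", "q")
print(closes([(True, ("imp", P, Q)), (True, P), (False, Q)]))   # True: the argument is valid
```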
The description reads the symbol ⊧ (called the "double turnstile") as "therefore", which is a common reading of it,[37][105]although many authors prefer to read it as "entails",[37][106]or as "models".[107] Natural deduction, since it is a method of syntactical proof, is specified by providinginference rules(also calledrules of proof)[38]for a language with the typical set of connectives{−,&,∨,→,↔}{\displaystyle \{-,\&,\lor ,\to ,\leftrightarrow \}}; no axioms are used other than these rules.[110]The rules are covered below, and a proof example is given afterwards. Different authors vary to some extent regarding which inference rules they give, which will be noted. More striking to the look and feel of a proof, however, is the variation in notation styles. The§ Gentzen notation, which was covered earlier for a short argument, can actually be stacked to produce large tree-shaped natural deduction proofs[43][15]—not to be confused with "truth trees", which is another name foranalytic tableaux.[71]There is also a style due toStanisław Jaśkowski, where the formulas in the proof are written inside various nested boxes,[43]and there is a simplification of Jaśkowski's style due toFredric Fitch(Fitch notation), where the boxes are simplified to simple horizontal lines beneath the introductions of suppositions, and vertical lines to the left of the lines that are under the supposition.[43]Lastly, there is the only notation style which will actually be used in this article, which is due toPatrick Suppes,[43]but was much popularized byE.J. LemmonandBenson Mates.[111]This method has the advantage that, graphically, it is the least intensive to produce and display, which made it a natural choice for theeditorwho wrote this part of the article, who did not understand the complexLaTeXcommands that would be required to produce proofs in the other methods. Aproof, then, laid out in accordance with theSuppes–Lemmon notationstyle,[43]is a sequence of lines containing sentences,[38]where each sentence is either an assumption, or the result of applying a rule of proof to earlier sentences in the sequence.[38]Eachline of proofis made up of asentence of proof, together with itsannotation, itsassumption set, and the currentline number.[38]The assumption set lists the assumptions on which the given sentence of proof depends, which are referenced by the line numbers.[38]The annotation specifies which rule of proof was applied, and to which earlier lines, to yield the current sentence.[38]See the§ Natural deduction proof example. Natural deduction inference rules, due ultimately toGentzen, are given below.[110]There are ten primitive rules of proof, which are the ruleassumption, plus four pairs of introduction and elimination rules for the binary connectives, and the rulereductio ad adbsurdum.[38]Disjunctive Syllogism can be used as an easier alternative to the proper ∨-elimination,[38]and MTT and DN are commonly given rules,[110]although they are not primitive.[38] The proof below[38]derives−P{\displaystyle -P}fromP→Q{\displaystyle P\to Q}and−Q{\displaystyle -Q}using onlyMPPandRAA, which shows thatMTTis not a primitive rule, since it can be derived from those two other rules. 
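One way the derivation can be laid out in the Suppes–Lemmon style described above is sketched here; annotation conventions vary somewhat from author to author, so this should be read as an illustrative reconstruction rather than as the cited source's own table:

\begin{array}{rcll}
1   & (1) & P \to Q & \text{A} \\
2   & (2) & -Q      & \text{A} \\
3   & (3) & P       & \text{A (for RAA)} \\
1,3 & (4) & Q       & 1,3\ \text{MPP} \\
1,2 & (5) & -P      & 2,4\ \text{RAA}(3)
\end{array}

Line 4 applies MPP to lines 1 and 3; since lines 2 and 4 are contradictory, RAA discharges the assumption made on line 3, leaving the conclusion −P dependent only on assumptions 1 and 2.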
It is possible to perform proofs axiomatically, which means that certaintautologiesare taken as self-evident and various others are deduced from them usingmodus ponensas aninference rule, as well as arule ofsubstitution, which permits replacing anywell-formed formulawith anysubstitution-instanceof it.[113]Alternatively, one uses axiom schemas instead of axioms, and no rule of substitution is used.[113] This section gives the axioms of some historically notable axiomatic systems for propositional logic. For more examples, as well as metalogical theorems that are specific to such axiomatic systems (such as their completeness and consistency), see the articleAxiomatic system (logic). Although axiomatic proof has been used since the famousAncient Greektextbook,Euclid'sElements of Geometry, in propositional logic it dates back toGottlob Frege's1879Begriffsschrift.[37][113]Frege's system used onlyimplicationandnegationas connectives.[2]It had six axioms:[113][114][115] These were used by Frege together with modus ponens and a rule of substitution (which was used but never precisely stated) to yield a complete and consistent axiomatization of classical truth-functional propositional logic.[114] Jan Łukasiewiczshowed that, in Frege's system, "the third axiom is superfluous since it can be derived from the preceding two axioms, and that the last three axioms can be replaced by the single sentenceCCNpNqCpq{\displaystyle CCNpNqCpq}".[115]Which, taken out of Łukasiewicz'sPolish notationinto modern notation, means(¬p→¬q)→(p→q){\displaystyle (\neg p\rightarrow \neg q)\rightarrow (p\rightarrow q)}. Hence, Łukasiewicz is credited[113]with this system of three axioms: Just like Frege's system, this system uses a substitution rule and uses modus ponens as an inference rule.[113]The exact same system was given (with an explicit substitution rule) byAlonzo Church,[116]who referred to it as the system P2[116][117]and helped popularize it.[117] One may avoid using the rule of substitution by giving the axioms in schematic form, using them to generate an infinite set of axioms. Hence, using Greek letters to represent schemata (metalogical variables that may stand for anywell-formed formulas), the axioms are given as:[37][117] The schematic version of P2is attributed toJohn von Neumann,[113]and is used in theMetamath"set.mm" formal proof database.[117]It has also been attributed toHilbert,[118]and namedH{\displaystyle {\mathcal {H}}}in this context.[118] As an example, a proof ofA→A{\displaystyle A\to A}in P2is given below. First, the axioms are given names: And the proof is as follows: One notable difference between propositional calculus and predicate calculus is that satisfiability of a propositional formula isdecidable.[119]: 81Deciding satisfiability of propositional logic formulas is anNP-completeproblem. However, practical methods exist (e.g.,DPLL algorithm, 1962;Chaff algorithm, 2001) that are very fast for many useful cases. Recent work has extended theSAT solveralgorithms to work with propositions containingarithmetic expressions; these are theSMT solvers.
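The DPLL procedure mentioned above is, at heart, backtracking search with unit propagation. The following is a deliberately simplified sketch, not the algorithm as originally published, using a DIMACS-like encoding of CNF clauses chosen here for convenience:

```python
# Simplified DPLL-style satisfiability check on conjunctive normal form:
# each clause is a list of nonzero integers, where k means variable k and -k its negation.

def dpll(clauses):
    """Return True if the clause set is satisfiable, False otherwise."""
    unit = next((c[0] for c in clauses if len(c) == 1), None)
    if unit is not None:
        return dpll(assign(clauses, unit))      # unit propagation
    if not clauses:
        return True                             # no clauses left: everything satisfied
    if any(len(c) == 0 for c in clauses):
        return False                            # an empty clause cannot be satisfied
    lit = clauses[0][0]                         # branch on some literal
    return dpll(assign(clauses, lit)) or dpll(assign(clauses, -lit))

def assign(clauses, lit):
    """Simplify the clause set under the assumption that lit is true."""
    result = []
    for c in clauses:
        if lit in c:
            continue                            # clause already satisfied
        result.append([x for x in c if x != -lit])
    return result

# (p or q) and (not p or q) and (not q or r): satisfiable, e.g. with q and r true.
print(dpll([[1, 2], [-1, 2], [-2, 3]]))      # True
# p and not p: unsatisfiable.
print(dpll([[1], [-1]]))                     # False
```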
https://en.wikipedia.org/wiki/Propositional_calculus
Inmathematical logic, auniversal quantificationis a type ofquantifier, alogical constantwhich isinterpretedas "given any", "for all", "for every", or "given anarbitraryelement". It expresses that apredicatecan besatisfiedby everymemberof adomain of discourse. In other words, it is thepredicationof apropertyorrelationto every member of the domain. Itassertsthat a predicate within thescopeof a universal quantifier is true of everyvalueof apredicate variable. It is usually denoted by theturned A(∀)logical operatorsymbol, which, when used together with a predicate variable, is called auniversal quantifier("∀x", "∀(x)", or sometimes by "(x)" alone). Universal quantification is distinct fromexistentialquantification("there exists"), which only asserts that the property or relation holds for at least one member of the domain. Quantification in general is covered in the article onquantification (logic). The universal quantifier is encoded asU+2200∀FOR ALLinUnicode, and as\forallinLaTeXand related formula editors. Suppose it is given that 2·0 = 0 + 0, and 2·1 = 1 + 1, and2·2 = 2 + 2, ..., and 2 · 100 = 100 + 100, and ..., etc. This would seem to be an infinitelogical conjunctionbecause of the repeated use of "and". However, the "etc." cannot be interpreted as a conjunction informal logic, Instead, the statement must be rephrased: For all natural numbersn, one has 2·n=n+n. This is a single statement using universal quantification. This statement can be said to be more precise than the original one. While the "etc." informally includesnatural numbers, and nothing more, this was not rigorously given. In the universal quantification, on the other hand, the natural numbers are mentioned explicitly. This particular example istrue, because any natural number could be substituted fornand the statement "2·n=n+n" would be true. In contrast, For all natural numbersn, one has 2·n> 2 +n isfalse, because ifnis substituted with, for instance, 1, the statement "2·1 > 2 + 1" is false. It is immaterial that "2·n> 2 +n" is true formostnatural numbersn: even the existence of a singlecounterexampleis enough to prove the universal quantification false. On the other hand, for allcomposite numbersn, one has 2·n> 2 +nis true, because none of the counterexamples are composite numbers. This indicates the importance of thedomain of discourse, which specifies which valuesncan take.[note 1]In particular, note that if the domain of discourse is restricted to consist only of those objects that satisfy a certain predicate, then for universal quantification this requires alogical conditional. For example, For all composite numbersn, one has 2·n> 2 +n islogically equivalentto For all natural numbersn, ifnis composite, then 2·n> 2 +n. Here the "if ... then" construction indicates the logical conditional. Insymbolic logic, the universal quantifier symbol∀{\displaystyle \forall }(a turned "A" in asans-seriffont, Unicode U+2200) is used to indicate universal quantification. It was first used in this way byGerhard Gentzenin 1935, by analogy withGiuseppe Peano's∃{\displaystyle \exists }(turned E) notation forexistential quantificationand the later use of Peano's notation byBertrand Russell.[1] For example, ifP(n) is the predicate "2·n> 2 +n" andNis thesetof natural numbers, then is the (false) statement Similarly, ifQ(n) is the predicate "nis composite", then is the (true) statement Several variations in the notation for quantification (which apply to all forms) can be found in theQuantifierarticle. 
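On a finite slice of the domain, the examples above can be checked mechanically. The following sketch is ours, with Python's all() standing in for the universal quantifier; it also illustrates how restricting the domain corresponds to a logical conditional:

```python
# Universal quantification over a finite slice of the natural numbers.
domain = range(101)                                   # 0, 1, ..., 100

# "For all natural numbers n (in this slice), 2*n = n + n" -- true.
print(all(2 * n == n + n for n in domain))            # True

# "For all natural numbers n, 2*n > 2 + n" -- false; n = 1 is a counterexample.
print(all(2 * n > 2 + n for n in domain))             # False

# Restricting the domain to composite numbers is the same as using a conditional:
def is_composite(n):
    return n > 1 and any(n % d == 0 for d in range(2, n))

print(all(2 * n > 2 + n for n in domain if is_composite(n)))            # True
print(all((not is_composite(n)) or (2 * n > 2 + n) for n in domain))    # True
```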
The negation of a universally quantified function is obtained by changing the universal quantifier into anexistential quantifierand negating the quantified formula. That is, where¬{\displaystyle \lnot }denotesnegation. For example, ifP(x)is thepropositional function"xis married", then, for thesetXof all living human beings, the universal quantification Given any living personx, that person is married is written This statement is false. Truthfully, it is stated that It is not the case that, given any living personx, that person is married or, symbolically: If the functionP(x)is not true foreveryelement ofX, then there must be at least one element for which the statement is false. That is, the negation of∀x∈XP(x){\displaystyle \forall x\in X\,P(x)}is logically equivalent to "There exists a living personxwho is not married", or: It is erroneous to confuse "all persons are not married" (i.e. "there exists no person who is married") with "not all persons are married" (i.e. "there exists a person who is not married"): The universal (and existential) quantifier moves unchanged across thelogical connectives∧,∨,→, and↚, as long as the other operand is not affected;[2]that is: Conversely, for the logical connectives↑,↓,↛, and←, the quantifiers flip: Arule of inferenceis a rule justifying a logical step from hypothesis to conclusion. There are several rules of inference which utilize the universal quantifier. Universal instantiationconcludes that, if the propositional function is known to be universally true, then it must be true for any arbitrary element of the universe of discourse. Symbolically, this is represented as wherecis a completely arbitrary element of the universe of discourse. Universal generalizationconcludes the propositional function must be universally true if it is true for any arbitrary element of the universe of discourse. Symbolically, for an arbitraryc, The elementcmust be completely arbitrary; else, the logic does not follow: ifcis not arbitrary, and is instead a specific element of the universe of discourse, then P(c) only implies an existential quantification of the propositional function. By convention, the formula∀x∈∅P(x){\displaystyle \forall {x}{\in }\emptyset \,P(x)}is always true, regardless of the formulaP(x); seevacuous truth. Theuniversal closureof a formula φ is the formula with nofree variablesobtained by adding a universal quantifier for every free variable in φ. For example, the universal closure of is Incategory theoryand the theory ofelementary topoi, the universal quantifier can be understood as theright adjointof afunctorbetweenpower sets, theinverse imagefunctor of a function between sets; likewise, theexistential quantifieris theleft adjoint.[3] For a setX{\displaystyle X}, letPX{\displaystyle {\mathcal {P}}X}denote itspowerset. For any functionf:X→Y{\displaystyle f:X\to Y}between setsX{\displaystyle X}andY{\displaystyle Y}, there is aninverse imagefunctorf∗:PY→PX{\displaystyle f^{*}:{\mathcal {P}}Y\to {\mathcal {P}}X}between powersets, that takes subsets of the codomain offback to subsets of its domain. The left adjoint of this functor is the existential quantifier∃f{\displaystyle \exists _{f}}and the right adjoint is the universal quantifier∀f{\displaystyle \forall _{f}}. 
That is, ∃f:PX→PY{\displaystyle \exists _{f}\colon {\mathcal {P}}X\to {\mathcal {P}}Y} is a functor that, for each subset S⊂X{\displaystyle S\subset X}, gives the subset ∃fS⊂Y{\displaystyle \exists _{f}S\subset Y} consisting of those y{\displaystyle y} in the image of S{\displaystyle S} under f{\displaystyle f}. Similarly, the universal quantifier ∀f:PX→PY{\displaystyle \forall _{f}\colon {\mathcal {P}}X\to {\mathcal {P}}Y} is a functor that, for each subset S⊂X{\displaystyle S\subset X}, gives the subset ∀fS⊂Y{\displaystyle \forall _{f}S\subset Y} consisting of those y{\displaystyle y} whose preimage under f{\displaystyle f} is contained in S{\displaystyle S}. The more familiar form of the quantifiers as used in first-order logic is obtained by taking f to be the unique function !:X→1{\displaystyle !:X\to 1}, so that P(1)={T,F}{\displaystyle {\mathcal {P}}(1)=\{T,F\}} is the two-element set holding the values true and false. A predicate on X corresponds to the subset S{\displaystyle S} of elements at which it holds (i.e., for which S(x){\displaystyle S(x)} is true); then ∃!S{\displaystyle \exists _{!}S} is true exactly when S{\displaystyle S} is nonempty, and ∀!S{\displaystyle \forall _{!}S} is true exactly when S{\displaystyle S} is all of X{\displaystyle X}. The universal and existential quantifiers given above generalize to the presheaf category.
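The following is a small finite sketch of this picture, with a set X, a set Y, and a function f chosen purely for illustration; it computes the image-style quantifier ∃f, the universal quantifier ∀f, and the inverse image f*, and checks the two adjunctions as inclusions of subsets:

```python
# Finite powerset quantifiers for f: X -> Y, with the adjunctions
# exists_f(S) <= T  iff  S <= f*(T),   and   f*(T) <= S  iff  T <= forall_f(S).
from itertools import chain, combinations

X = {1, 2, 3, 4}
Y = {"a", "b", "c"}
f = {1: "a", 2: "a", 3: "b", 4: "c"}          # a function X -> Y as a dict

def preimage(T):                               # f*(T) = {x in X : f(x) in T}
    return {x for x in X if f[x] in T}

def exists_f(S):                               # {y : some x in S has f(x) = y}, the image of S
    return {f[x] for x in S}

def forall_f(S):                               # {y : every x with f(x) = y lies in S}
    return {y for y in Y if all(x in S for x in X if f[x] == y)}

S = {1, 3}
print(exists_f(S))    # {'a', 'b'} (order may vary)
print(forall_f(S))    # {'b'}  ('a' fails because 2 maps to 'a' but 2 is not in S)

def subsets(A):
    return [set(c) for c in chain.from_iterable(combinations(A, r) for r in range(len(A) + 1))]

assert all((exists_f(S) <= T) == (S <= preimage(T)) for S in subsets(X) for T in subsets(Y))
assert all((preimage(T) <= S) == (T <= forall_f(S)) for S in subsets(X) for T in subsets(Y))
```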
https://en.wikipedia.org/wiki/Universal_quantification
Logical consequence(alsoentailmentorlogical implication) is a fundamentalconceptinlogicwhich describes the relationship betweenstatementsthat hold true when one statement logicallyfollows fromone or more statements. Avalidlogicalargumentis one in which theconclusionis entailed by thepremises, because the conclusion is the consequence of the premises. Thephilosophical analysisof logical consequence involves the questions: In what sense does a conclusion follow from its premises? and What does it mean for a conclusion to be a consequence of premises?[1]All ofphilosophical logicis meant to provide accounts of the nature of logical consequence and the nature oflogical truth.[2] Logical consequence isnecessaryandformal, by way of examples that explain withformal proofandmodels of interpretation.[1]A sentence is said to be a logical consequence of a set of sentences, for a givenlanguage,if and only if, using only logic (i.e., without regard to anypersonalinterpretations of the sentences) the sentence must be true if every sentence in the set is true.[3] Logicians make precise accounts of logical consequence regarding a givenlanguageL{\displaystyle {\mathcal {L}}}, either by constructing adeductive systemforL{\displaystyle {\mathcal {L}}}or by formalintended semanticsfor languageL{\displaystyle {\mathcal {L}}}. The Polish logicianAlfred Tarskiidentified three features of an adequate characterization of entailment: (1) The logical consequence relation relies on thelogical formof the sentences: (2) The relation isa priori, i.e., it can be determined with or without regard toempirical evidence(sense experience); and (3) The logical consequence relation has amodalcomponent.[3] The most widely prevailing view on how best to account for logical consequence is to appeal to formality. This is to say that whether statements follow from one another logically depends on the structure orlogical formof the statements without regard to the contents of that form. Syntactic accounts of logical consequence rely onschemesusinginference rules. For instance, we can express the logical form of a valid argument as: This argument is formally valid, because everyinstanceof arguments constructed using this scheme is valid. This is in contrast to an argument like "Fred is Mike's brother's son. Therefore Fred is Mike's nephew." Since this argument depends on the meanings of the words "brother", "son", and "nephew", the statement "Fred is Mike's nephew" is a so-calledmaterial consequenceof "Fred is Mike's brother's son", not a formal consequence. A formal consequence must be truein all cases, however this is an incomplete definition of formal consequence, since even the argument "PisQ's brother's son, thereforePisQ's nephew" is valid in all cases, but is not aformalargument.[1] If it is known thatQ{\displaystyle Q}follows logically fromP{\displaystyle P}, then no information about the possible interpretations ofP{\displaystyle P}orQ{\displaystyle Q}will affect that knowledge. Our knowledge thatQ{\displaystyle Q}is a logical consequence ofP{\displaystyle P}cannot be influenced byempirical knowledge.[1]Deductively valid arguments can be known to be so without recourse to experience, so they must be knowable a priori.[1]However, formality alone does not guarantee that logical consequence is not influenced by empirical knowledge. 
So the a priori property of logical consequence is considered to be independent of formality.[1] The two prevailing techniques for providing accounts of logical consequence involve expressing the concept in terms ofproofsand viamodels. The study of the syntactic consequence (of a logic) is called (its)proof theorywhereas the study of (its) semantic consequence is called (its)model theory.[4] A formulaA{\displaystyle A}is asyntactic consequence[5][6][7][8][9]within someformal systemFS{\displaystyle {\mathcal {FS}}}of a setΓ{\displaystyle \Gamma }of formulas if there is aformal proofinFS{\displaystyle {\mathcal {FS}}}ofA{\displaystyle A}from the setΓ{\displaystyle \Gamma }. This is denotedΓ⊢FSA{\displaystyle \Gamma \vdash _{\mathcal {FS}}A}. The turnstile symbol⊢{\displaystyle \vdash }was originally introduced by Frege in 1879, but its current use only dates back to Rosser and Kleene (1934–1935).[9] Syntactic consequence does not depend on anyinterpretationof the formal system.[10] A formulaA{\displaystyle A}is asemantic consequencewithin some formal systemFS{\displaystyle {\mathcal {FS}}}of a set of statementsΓ{\displaystyle \Gamma }if and only if there is no modelI{\displaystyle {\mathcal {I}}}in which all members ofΓ{\displaystyle \Gamma }are true andA{\displaystyle A}is false.[11]This is denotedΓ⊨FSA{\displaystyle \Gamma \models _{\mathcal {FS}}A}. Or, in other words, the set of the interpretations that make all members ofΓ{\displaystyle \Gamma }true is a subset of the set of the interpretations that makeA{\displaystyle A}true. Modalaccounts of logical consequence are variations on the following basic idea: Alternatively (and, most would say, equivalently): Such accounts are called "modal" because they appeal to the modal notions oflogical necessityandlogical possibility. 'It is necessary that' is often expressed as auniversal quantifieroverpossible worlds, so that the accounts above translate as: Consider the modal account in terms of the argument given as an example above: The conclusion is a logical consequence of the premises because we can not imagine a possible world where (a) all frogs are green; (b) Kermit is a frog; and (c) Kermit is not green. Modal-formal accounts of logical consequence combine the modal and formal accounts above, yielding variations on the following basic idea: The accounts considered above are all "truth-preservational", in that they all assume that the characteristic feature of a good inference is that it never allows one to move from true premises to an untrue conclusion. As an alternative, some have proposed "warrant-preservational" accounts, according to which the characteristic feature of a good inference is that it never allows one to move from justifiably assertible premises to a conclusion that is not justifiably assertible. This is (roughly) the account favored byintuitionists. The accounts discussed above all yieldmonotonicconsequence relations, i.e. ones such that ifA{\displaystyle A}is a consequence ofΓ{\displaystyle \Gamma }, thenA{\displaystyle A}is a consequence of any superset ofΓ{\displaystyle \Gamma }. It is also possible to specify non-monotonic consequence relations to capture the idea that, e.g., 'Tweety can fly' is a logical consequence of but not of
https://en.wikipedia.org/wiki/Logical_implication
Inlogicandmathematics, atruth value, sometimes called alogical value, is a value indicating the relation of apropositiontotruth, which inclassical logichas only two possible values (trueorfalse).[1][2]Truth values are used incomputingas well as various types oflogic. In some programming languages, anyexpressioncan be evaluated in a context that expects aBoolean data type. Typically (though this varies by programming language) expressions like the numberzero, theempty string, empty lists, andnullare treated as false, and strings with content (like "abc"), other numbers, and objects evaluate to true. Sometimes these classes of expressions are calledfalsyandtruthy. For example, inLisp,nil, the empty list, is treated as false, and all other values are treated as true. InC, the number 0 or 0.0 is false, and all other values are treated as true. InJavaScript, the empty string (""),null,undefined,NaN, +0,−0andfalse[3]are sometimes calledfalsy(of which thecomplementistruthy) to distinguish between strictlytype-checkedandcoercedBooleans (see also:JavaScript syntax#Type conversion).[4]As opposed to Python, empty containers (Arrays, Maps, Sets) are considered truthy. Languages such asPHPalso use this approach. Inclassical logic, with its intended semantics, the truth values aretrue(denoted by1or theverum⊤), anduntrueorfalse(denoted by0or thefalsum⊥); that is, classical logic is atwo-valued logic. This set of two values is also called theBoolean domain. Corresponding semantics oflogical connectivesaretruth functions, whose values are expressed in the form oftruth tables.Logical biconditionalbecomes theequalitybinary relation, andnegationbecomes abijectionwhichpermutestrue and false. Conjunction and disjunction aredualwith respect to negation, which is expressed byDe Morgan's laws: Propositional variablesbecomevariablesin the Boolean domain. Assigning values for propositional variables is referred to asvaluation. Whereas in classical logic truth values form aBoolean algebra, inintuitionistic logic, and more generally,constructive mathematics, the truth values form aHeyting algebra. Such truth values may express various aspects of validity, including locality, temporality, or computational content. For example, one may use theopen setsof a topological space as intuitionistic truth values, in which case the truth value of a formula expresseswherethe formula holds, not whether it holds. Inrealizabilitytruth values are sets of programs, which can be understood as computational evidence of validity of a formula. For example, the truth value of the statement "for every number there is a prime larger than it" is the set of all programs that take as input a numbern{\displaystyle n}, and output a prime larger thann{\displaystyle n}. Incategory theory, truth values appear as the elements of thesubobject classifier. In particular, in atoposevery formula ofhigher-order logicmay be assigned a truth value in the subobject classifier. Even though a Heyting algebra may have many elements, this should not be understood as there being truth values that are neither true nor false, because intuitionistic logic proves¬(p≠⊤∧p≠⊥){\displaystyle \neg (p\neq \top \land p\neq \bot )}("it is not the case thatp{\displaystyle p}is neither true nor false").[5] Inintuitionistic type theory, theCurry-Howard correspondenceexhibits an equivalence of propositions and types, according to which validity is equivalent to inhabitation of a type. 
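Returning to the programming-language notion of truthy and falsy values described at the beginning of this article, here is a short illustration of how the distinction plays out in Python specifically; the example values are ours, and other languages, as noted, draw the line differently:

```python
# "Falsy" and "truthy" values as Python evaluates them in a Boolean context.
falsy_examples = [0, 0.0, "", [], {}, set(), None]
truthy_examples = [1, -1, "abc", " ", [0], {"k": 0}]

print([bool(x) for x in falsy_examples])    # all False
print([bool(x) for x in truthy_examples])   # all True

# Any expression can stand in a Boolean context:
items = []
if not items:
    print("empty list is treated as false")
```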
For other notions of intuitionistic truth values, see the Brouwer–Heyting–Kolmogorov interpretation and Intuitionistic logic § Semantics. Multi-valued logics (such as fuzzy logic and relevance logic) allow for more than two truth values, possibly containing some internal structure. For example, on the unit interval [0,1] such structure is a total order; this may be expressed as the existence of various degrees of truth. Not all logical systems are truth-valuational in the sense that logical connectives may be interpreted as truth functions. For example, intuitionistic logic lacks a complete set of truth values because its semantics, the Brouwer–Heyting–Kolmogorov interpretation, is specified in terms of provability conditions, and not directly in terms of the necessary truth of formulae. But even non-truth-valuational logics can associate values with logical formulae, as is done in algebraic semantics. The algebraic semantics of intuitionistic logic is given in terms of Heyting algebras, compared to Boolean algebra semantics of classical propositional calculus.
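As a hedged illustration of degrees of truth on the unit interval, the following sketch uses one common (Zadeh-style) choice of operations, min for conjunction, max for disjunction, and 1 − x for negation; other many-valued semantics make different choices:

```python
# Degrees of truth on [0, 1] with one common (Zadeh-style) choice of operations.
def f_and(a, b): return min(a, b)
def f_or(a, b):  return max(a, b)
def f_not(a):    return 1.0 - a

tall, warm = 0.7, 0.4            # made-up degrees of truth
print(f_and(tall, warm))         # 0.4
print(f_or(tall, warm))          # 0.7
print(f_not(tall))               # 0.3 (approximately, as a float)

# The two classical values sit at the endpoints and behave classically:
assert f_and(1.0, 0.0) == 0.0 and f_or(1.0, 0.0) == 1.0 and f_not(0.0) == 1.0
```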
https://en.wikipedia.org/wiki/Truth_value
Inpredicate logic, anexistential quantificationis a type ofquantifier, alogical constantwhich isinterpretedas "there exists", "there is at least one", or "for some". It is usually denoted by thelogical operatorsymbol∃, which, when used together with a predicate variable, is called anexistential quantifier("∃x" or "∃(x)" or "(∃x)"[1]). Existential quantification is distinct fromuniversal quantification("for all"), which asserts that the property or relation holds forallmembers of the domain.[2][3]Some sources use the termexistentializationto refer to existential quantification.[4] Quantification in general is covered in the article onquantification (logic). The existential quantifier is encoded asU+2203∃THERE EXISTSinUnicode, and as\existsinLaTeXand related formula editors. Consider theformalsentence This is a single statement using existential quantification. It is roughly analogous to the informal sentence "Either0×0=25{\displaystyle 0\times 0=25}, or1×1=25{\displaystyle 1\times 1=25}, or2×2=25{\displaystyle 2\times 2=25}, or... and so on," but more precise, because it doesn't need us to infer the meaning of the phrase "and so on." (In particular, the sentence explicitly specifies itsdomain of discourseto be the natural numbers, not, for example, thereal numbers.) This particular example is true, because 5 is a natural number, and when we substitute 5 forn, we produce the true statement5×5=25{\displaystyle 5\times 5=25}. It does not matter that "n×n=25{\displaystyle n\times n=25}" is true only for that single natural number, 5; the existence of a singlesolutionis enough to prove this existential quantification to be true. In contrast, "For someeven numbern{\displaystyle n},n×n=25{\displaystyle n\times n=25}" is false, because there are no even solutions. Thedomain of discourse, which specifies the values the variablenis allowed to take, is therefore critical to a statement's trueness or falseness.Logical conjunctionsare used to restrict the domain of discourse to fulfill a given predicate. For example, the sentence islogically equivalentto the sentence Themathematical proofof an existential statement about "some" object may be achieved either by aconstructive proof, which exhibits an object satisfying the "some" statement, or by anonconstructive proof, which shows that there must be such an object without concretely exhibiting one. Insymbolic logic, "∃" (a turned letter "E" in asans-seriffont, Unicode U+2203) is used to indicate existential quantification. For example, the notation∃n∈N:n×n=25{\displaystyle \exists {n}{\in }\mathbb {N} :n\times n=25}represents the (true) statement The symbol's first usage is thought to be byGiuseppe PeanoinFormulario mathematico(1896). Afterwards,Bertrand Russellpopularised its use as the existential quantifier. Through his research in set theory, Peano also introduced the symbols∩{\displaystyle \cap }and∪{\displaystyle \cup }to respectively denote the intersection and union of sets.[5] A quantified propositional function is a statement; thus, like statements, quantified functions can be negated. The¬{\displaystyle \lnot \ }symbol is used to denote negation. For example, ifP(x) is the predicate "xis greater than 0 and less than 1", then, for a domain of discourseXof all natural numbers, the existential quantification "There exists a natural numberxwhich is greater than 0 and less than 1" can be symbolically stated as: This can be demonstrated to be false. 
Truthfully, it must be said, "It is not the case that there is a natural numberxthat is greater than 0 and less than 1", or, symbolically: If there is no element of the domain of discourse for which the statement is true, then it must be false for all of those elements. That is, the negation of is logically equivalent to "For any natural numberx,xis not greater than 0 and less than 1", or: Generally, then, the negation of apropositional function's existential quantification is auniversal quantificationof that propositional function's negation; symbolically, (This is a generalization ofDe Morgan's lawsto predicate logic.) A common error is stating "all persons are not married" (i.e., "there exists no person who is married"), when "not all persons are married" (i.e., "there exists a person who is not married") is intended: Negation is also expressible through a statement of "for no", as opposed to "for some": Unlike the universal quantifier, the existential quantifier distributes over logical disjunctions: ∃x∈XP(x)∨Q(x)→(∃x∈XP(x)∨∃x∈XQ(x)){\displaystyle \exists {x}{\in }\mathbf {X} \,P(x)\lor Q(x)\to \ (\exists {x}{\in }\mathbf {X} \,P(x)\lor \exists {x}{\in }\mathbf {X} \,Q(x))} Arule of inferenceis a rule justifying a logical step from hypothesis to conclusion. There are several rules of inference which utilize the existential quantifier. Existential introduction(∃I) concludes that, if the propositional function is known to be true for a particular element of the domain of discourse, then it must be true that there exists an element for which the proposition function is true. Symbolically, Existential instantiation, when conducted in a Fitch style deduction, proceeds by entering a new sub-derivation while substituting an existentially quantified variable for a subject—which does not appear within any active sub-derivation. If a conclusion can be reached within this sub-derivation in which the substituted subject does not appear, then one can exit that sub-derivation with that conclusion. The reasoning behind existential elimination (∃E) is as follows: If it is given that there exists an element for which the proposition function is true, and if a conclusion can be reached by giving that element an arbitrary name, that conclusion isnecessarily true, as long as it does not contain the name. Symbolically, for an arbitrarycand for a propositionQin whichcdoes not appear: P(c)→Q{\displaystyle P(c)\to \ Q}must be true for all values ofcover the same domainX; else, the logic does not follow: Ifcis not arbitrary, and is instead a specific element of the domain of discourse, then statingP(c) might unjustifiably give more information about that object. The formula∃x∈∅P(x){\displaystyle \exists {x}{\in }\varnothing \,P(x)}is always false, regardless ofP(x). This is because∅{\displaystyle \varnothing }denotes theempty set, and noxof any description – let alone anxfulfilling a given predicateP(x) – exist in the empty set. See alsoVacuous truthfor more information. Incategory theoryand the theory ofelementary topoi, the existential quantifier can be understood as theleft adjointof afunctorbetweenpower sets, theinverse imagefunctor of a function between sets; likewise, theuniversal quantifieris theright adjoint.[6]
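Over a finite domain these facts can be checked directly; in the following sketch (ours), Python's any() plays the role of the existential quantifier:

```python
# Existential quantification over a finite slice of the natural numbers.
domain = range(101)

# "There is a natural number n with n*n = 25" -- true (n = 5 witnesses it).
print(any(n * n == 25 for n in domain))                     # True
# "There is an even n with n*n = 25" -- false.
print(any(n * n == 25 for n in domain if n % 2 == 0))       # False

# Negation turns the existential into a universal over the negated predicate ...
P = lambda x: 0 < x < 1
assert (not any(P(x) for x in domain)) == all(not P(x) for x in domain)

# ... and the existential quantifier distributes over disjunction.
Q = lambda x: x > 50
lhs = any(P(x) or Q(x) for x in domain)
rhs = any(P(x) for x in domain) or any(Q(x) for x in domain)
assert lhs == rhs
```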
https://en.wikipedia.org/wiki/Existential_quantification
In mathematics and logic, the term "uniqueness" refers to the property of being the one and only object satisfying a certain condition.[1] This sort of quantification is known as uniqueness quantification or unique existential quantification, and is often denoted with the symbols "∃!"[2] or "∃=1". It is defined to mean there exists an object with the given property, and all objects with this property are equal.

For example, the formal statement

∃!n ∈ ℕ : n − 2 = 4

may be read as "there is exactly one natural number n such that n − 2 = 4".

The most common technique to prove the unique existence of an object is to first prove the existence of the entity with the desired condition, and then to prove that any two such entities (say, a and b) must be equal to each other (i.e. a = b). For example, to show that the equation x + 2 = 5 has exactly one solution, one would first start by establishing that at least one solution exists, namely 3; the proof of this part is simply the verification that the equation below holds:

3 + 2 = 5.

To establish the uniqueness of the solution, one would proceed by assuming that there are two solutions, namely a and b, satisfying x + 2 = 5. That is,

a + 2 = 5 and b + 2 = 5.

Then since equality is a transitive relation,

a + 2 = b + 2.

Subtracting 2 from both sides then yields

a = b,

which completes the proof that 3 is the unique solution of x + 2 = 5.

In general, both existence (there exists at least one object) and uniqueness (there exists at most one object) must be proven in order to conclude that there exists exactly one object satisfying a given condition. An alternative way to prove uniqueness is to prove that there exists an object a satisfying the condition, and then to prove that every object satisfying the condition must be equal to a.

Uniqueness quantification can be expressed in terms of the existential and universal quantifiers of predicate logic, by defining the formula ∃!x P(x) to mean[3]

∃x (P(x) ∧ ¬∃y (P(y) ∧ y ≠ x)),

which is logically equivalent to

∃x (P(x) ∧ ∀y (P(y) → y = x)).

An equivalent definition that separates the notions of existence and uniqueness into two clauses, at the expense of brevity, is

∃x P(x) ∧ ∀y ∀z ((P(y) ∧ P(z)) → y = z).

Another equivalent definition, which has the advantage of brevity, is

∃x ∀y (P(y) ↔ y = x).

The uniqueness quantification can be generalized into counting quantification (or numerical quantification[4]). This includes both quantification of the form "exactly k objects exist such that …" as well as "infinitely many objects exist such that …" and "only finitely many objects exist such that …". The first of these forms is expressible using ordinary quantifiers, but the latter two cannot be expressed in ordinary first-order logic.[5]

Uniqueness depends on a notion of equality. Loosening this to a coarser equivalence relation yields quantification of uniqueness up to that equivalence (under this framework, regular uniqueness is "uniqueness up to equality"); an object with this property is said to be essentially unique. For example, many concepts in category theory are defined to be unique up to isomorphism.

The exclamation mark ! can also be used as a separate quantification symbol, so (∃!x. P(x)) ↔ ((∃x. P(x)) ∧ (!x. P(x))), where (!x. P(x)) := (∀a ∀b. (P(a) ∧ P(b)) → a = b). For example, it can be safely used in the replacement axiom, instead of ∃!.
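The definitions above can also be mirrored computationally. The following hypothetical Python sketch (the helper names and the finite stand-in domain are assumptions for illustration only) counts witnesses in a finite domain and compares the result with the "existence plus at-most-one" two-clause formulation.

```python
# Uniqueness quantification over a finite domain: "exactly one x satisfies P".

def exists_unique(domain, P):
    """True if exactly one element of `domain` satisfies P (the E! quantifier)."""
    return sum(1 for x in domain if P(x)) == 1

def exists_unique_two_clause(domain, P):
    """Existence plus at-most-one, mirroring the two-clause definition above."""
    elems = list(domain)
    existence = any(P(x) for x in elems)
    at_most_one = all(y == z for y in elems if P(y) for z in elems if P(z))
    return existence and at_most_one

naturals = range(0, 100)                      # finite stand-in for the naturals
P = lambda n: n - 2 == 4                      # satisfied only by n = 6

print(exists_unique(naturals, P))             # True
print(exists_unique_two_clause(naturals, P))  # True: the formulations agree
```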
https://en.wikipedia.org/wiki/Uniqueness_quantifier
Elephantsare thelargestliving land animals. Three livingspeciesare currently recognised: theAfrican bush elephant(Loxodontaafricana), theAfrican forest elephant(L. cyclotis), and theAsian elephant(Elephasmaximus). They are the only surviving members of thefamilyElephantidaeand theorderProboscidea; extinct relatives includemammothsandmastodons. Distinctive features of elephants include a longprobosciscalled a trunk,tusks, large ear flaps, pillar-like legs, and tough but sensitive grey skin. The trunk isprehensile, bringing food and water to the mouth and grasping objects. Tusks, which are derived from the incisor teeth, serve both as weapons and as tools for moving objects and digging. The large ear flaps assist in maintaining a constant body temperature as well as in communication. African elephants have larger ears and concave backs, whereas Asian elephants have smaller ears and convex or level backs. Elephants are scattered throughoutsub-Saharan Africa, South Asia, and Southeast Asia and are found in different habitats, includingsavannahs, forests, deserts, andmarshes. They areherbivorous, and they stay near water when it is accessible. They are considered to bekeystone species, due to their impact on their environments. Elephants have afission–fusion society, in which multiple family groups come together to socialise. Females (cows) tend to live in family groups, which can consist of one female with her calves or several related females with offspring. The leader of a female group, usually the oldest cow, is known as thematriarch. Males (bulls) leave their family groups when they reach puberty and may live alone or with other males. Adult bulls mostly interact with family groups when looking for a mate. They enter a state of increasedtestosteroneand aggression known asmusth, which helps them gaindominanceover other males as well as reproductive success. Calves are the centre of attention in their family groups and rely on their mothers for as long as three years. Elephants can live up to 70 years in the wild. Theycommunicateby touch, sight, smell, and sound; elephants useinfrasoundandseismic communicationover long distances.Elephant intelligencehas been compared with that ofprimatesandcetaceans. They appear to haveself-awareness, and possibly show concern for dying and dead individuals of their kind. African bush elephants and Asian elephants are listed asendangeredand African forest elephants ascritically endangeredon theIUCN Red Lists. One of the biggest threats to elephant populations is theivory trade, as the animals arepoachedfor their ivory tusks. Other threats to wild elephants includehabitat destructionand conflicts with local people. Elephants are used asworking animalsin Asia. In the past, they were used in war; today, they are often controversially put on display in zoos, or employed for entertainment incircuses. Elephants have an iconic statusin human cultureand have been widely featured in art, folklore, religion, literature, and popular culture. 
The wordelephantis derived from theLatinwordelephas(genitiveelephantis)'elephant', which is theLatinisedform of theancient Greekἐλέφας(elephas) (genitiveἐλέφαντος(elephantos,[1])) probably from a non-Indo-European language, likelyPhoenician.[2]It is attested inMycenaean Greekase-re-pa(genitivee-re-pa-to) inLinear Bsyllabic script.[3][4]As in Mycenaean Greek,Homerused the Greek word to meanivory, but after the time ofHerodotus, it also referred to the animal.[1]The wordelephantappears inMiddle Englishasolyfauntinc.1300and was borrowed fromOld Frencholiphantin the 12th century.[2] Orycteropodidae Macroscelididae Chrysochloridae Tenrecidae Procaviidae Elephantidae Dugongidae Trichechidae Elephants belong to the familyElephantidae, the sole remaining family within the orderProboscidea. Their closestextantrelatives are thesirenians(dugongsandmanatees) and thehyraxes, with which they share thecladePaenungulatawithin the superorderAfrotheria.[6]Elephants and sirenians are further grouped in the cladeTethytheria.[7] Three species of living elephants are recognised; theAfrican bush elephant(Loxodontaafricana),forest elephant(Loxodonta cyclotis), andAsian elephant(Elephasmaximus).[8]African elephantswere traditionally considered a single species,Loxodonta africana, but molecular studies have affirmed their status as separate species.[9][10][11]Mammoths(Mammuthus) are nested within living elephants as they are more closely related to Asian elephants than to African elephants.[12]Another extinct genus of elephant,Palaeoloxodon, is also recognised, which appears to have close affinities with African elephants and to have hybridised with African forest elephants.[13]Some species of the extinctPalaeoloxodonwere even larger, all exceeding 4 metres in height and 10 tonnes in body mass, withP. namadicusbeing a contender for the largest land mammal to have ever existed.[14] The earliest members of Proboscidea likeEritheriumare known from thePaleoceneof Africa, around 60 million years ago, the earliest proboscideans were much smaller than living elephants, withEritheriumhaving a body mass of around 3–8 kg (6.6–17.6 lb).[15]By the late Eocene, some members of Proboscidea likeBarytheriumhad reached considerable size, with an estimated mass of around 2 tonnes,[14]while others likeMoeritheriumare suggested to have been semi-aquatic.[16] early proboscideans, e.g.Moeritherium Deinotheriidae Mammutidae Gomphotheriidae Stegodontidae Loxodonta Palaeoloxodon Mammuthus Elephas A major event in proboscidean evolution was the collision of Afro-Arabia with Eurasia, during the Early Miocene, around 18–19 million years ago, allowing proboscideans to disperse from their African homeland across Eurasia and later, around 16–15 million years ago into North America across theBering Land Bridge. 
Proboscidean groups prominent during the Miocene include thedeinotheres, along with the more advancedelephantimorphs, includingmammutids(mastodons),gomphotheres,amebelodontids(which includes the "shovel tuskers" likePlatybelodon),choerolophodontidsandstegodontids.[19]Around 10 million years ago, the earliest members of the familyElephantidaeemerged in Africa, having originated from gomphotheres.[20] Elephantids are distinguished from earlier proboscideans by a major shift in the molar morphology to parallel lophs rather than the cusps of earlier proboscideans, allowing them to become higher-crowned (hypsodont) and more efficient in consuming grass.[21]The Late Miocene saw major climactic changes, which resulted in the decline and extinction of many proboscidean groups.[19]The earliest members of the modern genera of elephants (Elephas,Loxodonta) as well as mammoths, appeared in Africa during the latest Miocene–early Pliocene around 7-4 million years ago.[22]The elephantid generaElephas(which includes the living Asian elephant) andMammuthus(mammoths) migrated out of Africa during the late Pliocene, around 3.6 to 3.2 million years ago.[23] Over the course of theEarly Pleistocene, all non-elephantid probobscidean genera outside of the Americas became extinct with the exception ofStegodon,[19]with gomphotheres dispersing into South America as part of theGreat American interchange,[24]and mammoths migrating into North America around 1.5 million years ago.[25]At the end of the Early Pleistocene, around 800,000 years ago the elephantid genusPalaeoloxodondispersed outside of Africa, becoming widely distributed in Eurasia.[26]Proboscideans were represented by around 23 species at the beginning of theLate Pleistocene. Proboscideans underwent a dramatic decline during the Late Pleistocene as part of theLate Pleistocene extinctionsof most large mammals globally, with all remaining non-elephantid proboscideans (includingStegodon,mastodons, and the American gomphotheresCuvieroniusandNotiomastodon) andPalaeoloxodonbecoming extinct, with mammoths only surviving inrelictpopulations on islands around theBering Straitinto the Holocene, with their latest survival being onWrangel Island, where they persisted until around 4,000 years ago.[19][27] Over the course of their evolution, probobscideans grew in size. With that came longer limbs and wider feet with a moredigitigradestance, along with a larger head and shorter neck. The trunk evolved and grew longer to provide reach. The number of premolars, incisors, and canines decreased, and the cheek teeth (molars and premolars) became longer and more specialised. The incisors developed into tusks of different shapes and sizes.[28]Several species of proboscideans became isolated on islands and experiencedinsular dwarfism,[29]some dramatically reducing in body size, such as the 1 m (3 ft 3 in) talldwarf elephantspeciesPalaeoloxodon falconeri.[30] Elephants are the largest living terrestrial animals. The skeleton is made up of 326–351 bones.[34]The vertebrae are connected by tight joints, which limit the backbone's flexibility. African elephants have 21 pairs of ribs, while Asian elephants have 19 or 20 pairs.[35]The skull contains air cavities (sinuses) that reduce the weight of the skull while maintaining overall strength. These cavities give the inside of the skull ahoneycomb-like appearance. By contrast, the lower jaw is dense. 
The cranium is particularly large and provides enough room for the attachment of muscles to support the entire head.[34]The skull is built to withstand great stress, particularly when fighting or using the tusks. The brain is surrounded by arches in the skull, which serve as protection.[36]Because of the size of the head, the neck is relatively short to provide better support.[28]Elephants arehomeothermsand maintain their average body temperature at ~ 36 °C (97 °F), with a minimum of 35.2 °C (95.4 °F) during the cool season, and a maximum of 38.0 °C (100.4 °F) during the hot dry season.[37] Elephant ear flaps, orpinnae, are 1–2 mm (0.039–0.079 in) thick in the middle with a thinner tip and supported by a thicker base. They contain numerous blood vessels calledcapillaries. Warm blood flows into the capillaries, releasing excess heat into the environment. This effect is increased by flapping the ears back and forth. Larger ear surfaces contain more capillaries, and more heat can be released. Of all the elephants, African bush elephants live in the hottest climates and have the largest ear flaps.[34][38]Theossiclesare adapted for hearing low frequencies, being most sensitive at 1kHz.[39] Lacking alacrimal apparatus(tear duct), the eye relies on theharderian glandin the orbit to keep it moist. A durablenictitating membraneshields the globe. The animal'sfield of visionis compromised by the location and limited mobility of the eyes.[40]Elephants aredichromats[41]and they can see well in dim light but not in bright light.[42] The elongated andprehensiletrunk, orproboscis, consists of both the nose and upper lip, which fuse in earlyfetaldevelopment.[28]This versatile appendage contains up to 150,000 separatemuscle fascicles, with no bone and little fat. These paired muscles consist of two major types: superficial (surface) and internal. The former are divided intodorsal, ventral, andlateralmuscles, while the latter are divided intotransverseandradiatingmuscles. The muscles of the trunk connect to a bony opening in the skull. Thenasal septumconsists of small elastic muscles between the nostrils, which are divided bycartilageat the base.[43]A unique proboscis nerve – a combination of themaxillaryandfacial nerves– lines each side of the appendage.[44] As amuscular hydrostat, the trunk moves through finely controlled muscle contractions, working both with and against each other.[44]Using three basic movements: bending, twisting, and longitudinal stretching or retracting, the trunk has near unlimited flexibility. Objects grasped by the end of the trunk can be moved to the mouth by curving the appendage inward. The trunk can also bend at different points by creating stiffened "pseudo-joints". The tip can be moved in a way similar to the human hand.[45]The skin is more elastic on the dorsal side of the elephant trunk than underneath; allowing the animal to stretch and coil while maintaining a strong grasp.[46]The flexibility of the trunk is aided by the numerous wrinkles in the skin.[47]The African elephants have two finger-like extensions at the tip of the trunk that allow them to pluck small food. The Asian elephant has only one and relies more on wrapping around a food item.[31]Asian elephant trunks have bettermotor coordination.[43] The trunk's extreme flexibility allows it to forage and wrestle other elephants with it. It is powerful enough to lift up to 350 kg (770 lb), but it also has the precision to crack a peanut shell without breaking the seed. 
With its trunk, an elephant can reach items up to 7 m (23 ft) high and dig for water in the mud or sand below. It also uses it to clean itself.[48]Individuals may show lateral preference when grasping with their trunks: some prefer to twist them to the left, others to the right.[44]Elephant trunks are capable of powerful siphoning. They can expand their nostrils by 30%, leading to a 64% greater nasal volume, and can breathe in almost 30 times faster than a human sneeze, at over 150 m/s (490 ft/s).[49]They suck up water, which is squirted into the mouth or over the body.[28][49]The trunk of an adult Asian elephant is capable of retaining 8.5 L (2.2 US gal) of water.[43]They will also sprinkle dust or grass on themselves.[28]When underwater, the elephant uses its trunk as asnorkel.[50] The trunk also acts as a sense organ. Its sense of smell may be four times greater than abloodhound's nose.[51]Theinfraorbital nerve, which makes the trunk sensitive to touch, is thicker than both theopticandauditorynerves.Whiskersgrow all along the trunk, and are particularly packed at the tip, where they contribute to its tactile sensitivity. Unlike those of many mammals, such as cats and rats, elephant whiskers do not move independently ("whisk") to sense the environment; the trunk itself must move to bring the whiskers into contact with nearby objects. Whiskers grow in rows along each side on the ventral surface of the trunk, which is thought to be essential in helping elephants balance objects there, whereas they are more evenly arranged on the dorsal surface. The number and patterns of whiskers are distinctly different between species.[52] Damaging the trunk would be detrimental to an elephant's survival,[28]although in rare cases, individuals have survived with shortened ones. One trunkless elephant has been observed to graze using its lips with its hind legs in the air and balancing on its front knees.[43]Floppy trunk syndromeis a condition of trunkparalysisrecorded in African bush elephants and involves the degeneration of theperipheral nervesand muscles. The disorder has been linked to lead poisoning.[53] Elephants usually have 26 teeth: theincisors, known as thetusks; 12deciduouspremolars; and 12molars. Unlike most mammals, teeth are not replaced by new ones emerging from the jaws vertically. Instead, new teeth start at the back of the mouth and push out the old ones. The first chewing tooth on each side of the jaw falls out when the elephant is two to three years old. This is followed by four more tooth replacements at the ages of four to six, 9–15, 18–28, and finally in their early 40s. The final (usually sixth) set must last the elephant the rest of its life. Elephant teeth have loop-shaped dental ridges, which are more diamond-shaped in African elephants.[54] The tusks of an elephant are modified second incisors in the upper jaw. They replace deciduousmilk teethat 6–12 months of age and keep growing at about 17 cm (7 in) a year. As the tusk develops, it is topped with smooth, cone-shapedenamelthat eventually wanes. Thedentineis known asivoryand has across-sectionof intersecting lines, known as "engine turning", which create diamond-shaped patterns. Being living tissue, tusks are fairly soft and about as dense as the mineralcalcite. The tusk protrudes from a socket in the skull, and most of it is external. At least one-third of the tusk contains thepulp, and some have nerves that stretch even further. Thus, it would be difficult to remove it without harming the animal. 
When removed, ivory will dry up and crack if not kept cool and wet. Tusks function in digging, debarking, marking, moving objects, and fighting.[55] Elephants are usually right- or left-tusked, similar to humans, who are typicallyright- or left-handed. The dominant, or "master" tusk, is typically more worn down, as it is shorter and blunter. For African elephants, tusks are present in both males and females and are around the same length in both sexes, reaching up to 300 cm (9 ft 10 in),[55]but those of males tend to be more massive.[56]In the Asian species, only the males have large tusks. Female Asians have very small tusks, or none at all.[55]Tuskless males exist and are particularly common amongSri Lankan elephants.[57]Asian males can have tusks as long as Africans', but they are usually slimmer and lighter; the largest recorded was 302 cm (9 ft 11 in) long and weighed 39 kg (86 lb). Hunting for elephant ivory in Africa[58]and Asia[59]has resulted in an effectiveselection pressurefor shorter tusks[60][61]and tusklessness.[62][63] An elephant's skin is generally very tough, at 2.5 cm (1 in) thick on the back and parts of the head. The skin around the mouth,anus, and inside of the ear is considerably thinner. Elephants are typically grey, but African elephants look brown or reddish after rolling in coloured mud. Asian elephants have some patches of depigmentation, particularly on the head. Calves have brownish or reddish hair, with the head and back being particularly hairy. As elephants mature, their hair darkens and becomes sparser, but dense concentrations of hair and bristles remain on the tip of the tail and parts of the head and genitals. Normally, the skin of an Asian elephant is covered with more hair than its African counterpart.[64]Their hair is thought to help them lose heat in their hot environments.[65] Although tough, an elephant's skin is very sensitive and requiresmud bathsto maintain moisture and protection from burning and insect bites. After bathing, the elephant will usually use its trunk to blow dust onto its body, which dries into a protective crust. Elephants have difficulty releasing heat through the skin because of their lowsurface-area-to-volume ratio, which is many times smaller than that of a human. They have even been observed lifting up their legs to expose their soles to the air.[64]Elephants only havesweat glandsbetween the toes,[66]but the skin allows water to disperse and evaporate, cooling the animal.[67][68]In addition, cracks in the skin may reduce dehydration and allow for increased thermal regulation in the long term.[69] To support the animal's weight, an elephant's limbs are positioned more vertically under the body than in most other mammals. The long bones of the limbs havecancellous bonesin place ofmedullary cavities. This strengthens the bones while still allowinghaematopoiesis(blood cell creation).[70]Both the front and hind limbs can support an elephant's weight, although 60% is borne by the front.[71]The position of the limbs and leg bones allows an elephant to stand still for extended periods of time without tiring. 
Elephants are incapable of turning theirmanusas theulnaandradiusof the front legs are secured inpronation.[70]Elephants may also lack thepronator quadratusandpronator teresmuscles or have very small ones.[72]The circular feet of an elephant have soft tissues, or "cushion pads" beneath the manus orpes, which allow them to bear the animal's great mass.[71]They appear to have asesamoid, an extra "toe" similar in placement to agiant panda's extra "thumb", that also helps in weight distribution.[73]As many as five toenails can be found on both the front and hind feet.[31] Elephants can move both forward and backward, but are incapable oftrotting,jumping, orgalloping. They can move on land only by walking orambling: a faster gait similar to running.[70][74]In walking, the legs act as pendulums, with the hips and shoulders moving up and down while the foot is planted on the ground. The fast gait does not meet all the criteria of running, since there is no point where all the feet are off the ground, although the elephant uses its legs much like other running animals, and can move faster by quickening its stride. Fast-moving elephants appear to 'run' with their front legs, but 'walk' with their hind legs and can reach a top speed of 25 km/h (16 mph). At this speed, most otherquadrupedsare well into a gallop, even accounting for leg length. Spring-like kinetics could explain the difference between the motion of elephants and other animals.[74][75]The cushion pads expand and contract, and reduce both the pain and noise that would come from a very heavy animal moving.[71]Elephants are capable swimmers: they can swim for up to six hours while completely waterborne, moving at 2.1 km/h (1 mph) and traversing up to 48 km (30 mi) continuously.[76] The brain of an elephant weighs 4.5–5.5 kg (10–12 lb) compared to 1.6 kg (4 lb) for a human brain.[77]It is the largest of all terrestrial mammals.[78]While the elephant brain is larger overall, it isproportionally smaller than the human brain. At birth, an elephant's brain already weighs 30–40% of its adult weight. Thecerebrumandcerebellumare well developed, and thetemporal lobesare so large that they bulge out laterally.[77]Their temporal lobes are proportionally larger than those of other animals, including humans.[78]The throat of an elephant appears to contain a pouch where it can store water for later use.[28]Thelarynxof the elephant is the largest known among mammals. Thevocal foldsare anchored close to theepiglottisbase. When comparing an elephant's vocal folds to those of a human, an elephant's are proportionally longer, thicker, with a greater cross-sectional area. In addition, they are located further up the vocal tract with an acute slope.[79] The heart of an elephant weighs 12–21 kg (26–46 lb). Itsapexhas two pointed ends, an unusual trait among mammals.[77]In addition, theventriclesof the heart split towards the top, a trait also found in sirenians.[80]When upright, the elephant's heart beats around 28 beats per minute and actually speeds up to 35 beats when it lies down.[77]The blood vessels are thick and wide and can hold up under high blood pressure.[80]The lungs are attached to thediaphragm, and breathing relies less on the expanding of the ribcage.[77]Connective tissueexists in place of thepleural cavity. This may allow the animal to deal with the pressure differences when its body is underwater and its trunk is breaking the surface for air.[50]Elephants breathe mostly with the trunk but also with the mouth. 
They have ahindgut fermentationsystem, and their large and small intestines together reach 35 m (115 ft) in length. Less than half of an elephant's food intake gets digested, despite the process lasting a day.[77]An elephant's bladder can store up to 18 litres of urine[81]and itskidneyscan produce more than 50litresof urine per day.[82] A male elephant's testes, like otherAfrotheria,[83]are internally located near the kidneys.[84]Thepeniscan be as long as 100 cm (39 in) with a 16 cm (6 in) wide base. It curves to an 'S' when fully erect and has anorificeshaped like a Y. The female'sclitorismay be 40 cm (16 in). Thevulvais found lower than in other herbivores, between the hind legs instead of under the tail. Determining pregnancy status can be difficult due to the animal's large belly. The female'smammary glandsoccupy the space between the front legs, which puts the suckling calf within reach of the female's trunk.[77]Elephants have a unique organ, thetemporal gland, located on both sides of the head. This organ is associated with sexual behaviour, and males secrete a fluid from it when inmusth.[85]Females have also been observed with these secretions.[51] Elephants areherbivorousand will eat leaves, twigs, fruit, bark, grass, and roots. African elephants mostlybrowse, while Asian elephants mainlygraze.[32]They can eat as much as 300 kg (660 lb) of food and drink 40 L (11 US gal) of water in a day. Elephants tend to stay near water sources.[32][86]They have morning, afternoon, and nighttime feeding sessions. At midday, elephants rest under trees and may doze off while standing. Sleeping occurs at night while the animal is lying down.[86]Elephants average 3–4 hours of sleep per day.[87]Both males and family groups typically move no more than 20 km (12 mi) a day, but distances as far as 180 km (112 mi) have been recorded in theEtosharegion of Namibia.[88]Elephants go on seasonal migrations in response to changes in environmental conditions.[89]In northern Botswana, they travel 325 km (202 mi) to theChobe Riverafter the local waterholes dry up in late August.[90] Because of their large size, elephants have a huge impact on their environments and are consideredkeystone species. Their habit of uprooting trees and undergrowth can transform savannah into grasslands;[91]smaller herbivores can access trees mowed down by elephants.[86]When they dig for water during droughts, they create waterholes that can be used by other animals. When they use waterholes, they end up making them bigger.[91]AtMount Elgon, elephants dig through caves and pave the way forungulates, hyraxes, bats, birds, and insects.[91]Elephants are importantseed dispersers; African forest elephants consume and deposit many seeds over great distances, with either no effect or a positive effect ongermination.[92]In Asian forests, large seeds require giant herbivores like elephants andrhinocerosfor transport and dispersal. This ecological niche cannot be filled by the smallerMalayan tapir.[93]Because most of the food elephants eat goes undigested, their dung can provide food for other animals, such asdung beetlesand monkeys.[91]Elephants can have a negative impact on ecosystems. AtMurchison Falls National Parkin Uganda, elephant numbers have threatened several species of small birds that depend on woodlands. Their weight causes the soil to compress, leading torunoffanderosion.[86] Elephants typically coexist peacefully with other herbivores, which will usually stay out of their way. 
Some aggressive interactions between elephants and rhinoceros have been recorded.[86]The size of adult elephants makes them nearly invulnerable topredators.[33]Calves may be preyed on bylions,spotted hyenas, andwild dogsin Africa[94]andtigersin Asia.[33]The lions ofSavuti, Botswana, have adapted to hunting elephants, targeting calves, juveniles or even sub-adults.[95][96]There are rare reports of adult Asian elephants falling prey to tigers.[97]Elephants tend to have high numbers of parasites, particularlynematodes, compared to many other mammals. This may be due to elephants being less vulnerable to predation; in other mammal species, individuals weakened by significantparasite loadsare easily killed off by predators, removing them from the population.[98] Elephants are generallygregariousanimals. African bush elephants in particular have a complex, stratified social structure.[99]Female elephants spend their entire lives in tight-knitmatrilinealfamily groups.[100]They are led by thematriarch, who is often the eldest female.[101]She remains leader of the group until death[94]or if she no longer has the energy for the role;[102]a study on zoo elephants found that the death of the matriarch led to greater stress in the surviving elephants.[103]When her tenure is over, the matriarch's eldest daughter takes her place instead of her sister (if present).[94]One study found that younger matriarchs take potential threats less seriously.[104]Large family groups may split if they cannot be supported by local resources.[105] AtAmboseli National Park, Kenya, female groups may consist of around ten members, including four adults and their dependent offspring. Here, a cow's life involves interaction with those outside her group. Two separate families may associate and bond with each other, forming what are known as bond groups. During the dry season, elephant families may aggregate into clans. These may number around nine groups, in which clans do not form strong bonds but defend their dry-season ranges against other clans. The Amboseli elephant population is further divided into the "central" and "peripheral" subpopulations.[100] Female Asian elephants tend to have more fluid social associations.[99]In Sri Lanka, there appear to be stable family units or "herds" and larger, looser "groups". They have been observed to have "nursing units" and "juvenile-care units". In southern India, elephant populations may contain family groups, bond groups, and possibly clans. Family groups tend to be small, with only one or two adult females and their offspring. A group containing more than two cows and their offspring is known as a "joint family". Malay elephant populations have even smaller family units and do not reach levels above a bond group. Groups of African forest elephants typically consist of one cow with one to three offspring. These groups appear to interact with each other, especially at forest clearings.[100] Adult males live separate lives. As he matures, a bull associates more with outside males or even other families. At Amboseli, young males may be away from their families 80% of the time by 14–15 years of age. When males permanently leave, they either live alone or with other males. The former is typical of bulls in dense forests. Adominance hierarchyexists among males, whether they are social or solitary. 
Dominance depends on age, size, and sexual condition.[106]Male elephants can be quite sociable when not competing for mates and form vast and fluid social networks.[107][108]Older bulls act as the leaders of these groups.[109]The presence of older males appears to subdue the aggression and "deviant" behaviour of younger ones.[110]The largest all-male groups can reach close to 150 individuals. Adult males and females come together to breed. Bulls will accompany family groups if a cow is inoestrous.[106] Adult males enter a state of increasedtestosteroneknown asmusth. In a population in southern India, males first enter musth at 15 years old, but it is not very intense until they are older than 25. At Amboseli, no bulls under 24 were found to be in musth, while half of those aged 25–35 and all those over 35 were. In some areas, there may be seasonal influences on the timing of musths. The main characteristic of a bull's musth is a fluid discharged from thetemporal glandthat runs down the side of his face. Behaviours associated with musth include walking with a high and swinging head, nonsynchronous ear flapping, picking at the ground with the tusks, marking, rumbling, and urinating in thesheath. The length of this varies between males of different ages and conditions, lasting from days to months.[111] Males become extremely aggressive during musth. Size is the determining factor inagonisticencounters when the individuals have the same condition. In contests between musth and non-musth individuals, musth bulls win the majority of the time, even when the non-musth bull is larger. A male may stop showing signs of musth when he encounters a musth male of higher rank. Those of equal rank tend to avoid each other. Agonistic encounters typically consist of threat displays, chases, and minor sparring. Rarely do they full-on fight.[111] There is at least one documented case ofinfanticideamong Asian elephants at Dong Yai Wildlife Sanctuary, with the researchers describing it as most likely normal behaviour among aggressive musth elephants.[112] Elephants arepolygynousbreeders,[113]and mostcopulationsoccur during rainfall.[114]An oestrous cow usespheromonesin her urine and vaginal secretions to signal her readiness to mate. A bull will follow a potential mate and assess her condition with theflehmen response, which requires him to collect a chemical sample with his trunk and taste it with thevomeronasal organat the roof of the mouth.[115]The oestrous cycle of a cow lasts 14–16 weeks, with thefollicular phaselasting 4–6 weeks and theluteal phaselasting 8–10 weeks. While most mammals have one surge ofluteinizing hormoneduring the follicular phase, elephants have two. 
The first (or anovulatory) surge, appears to change the female's scent, signaling to males that she is in heat, butovulationdoes not occur until the second (or ovulatory) surge.[116]Cows over 45–50 years of age are less fertile.[102] Bulls engage in a behaviour known as mate-guarding, where they follow oestrous females and defend them from other males.[117]Most mate-guarding is done by musth males, and females seek them out, particularly older ones.[118]Musth appears to signal to females the condition of the male, as weak or injured males do not have normal musths.[119]For young females, the approach of an older bull can be intimidating, so her relatives stay nearby for comfort.[120]During copulation, the male rests his trunk on the female.[121]The penis is mobile enough to move without the pelvis.[82]Before mounting, it curves forward and upward. Copulation lasts about 45 seconds and does not involvepelvic thrustingor an ejaculatory pause.[122] Homosexual behaviourhas been observed in both sexes. As in heterosexual interactions, this involves mounting. Male elephants sometimes stimulate each other by playfighting, and "championships" may form between old bulls and younger males. Female same-sex behaviours have been documented only in captivity, where they engage inmutual masturbationwith their trunks.[123] Gestationin elephants typically lasts between one and a half and two years and the female will not give birth again for at least four years.[124]The relatively long pregnancy is supported by severalcorpus luteumsand gives the foetus more time to develop, particularly the brain and trunk.[125]Births tend to take place during the wet season.[114]Typically, only a single young is born, but twins sometimes occur.[125]Calves are born roughly 85 cm (33 in) tall and with a weight of around 120 kg (260 lb).[120]They areprecocialand quickly stand and walk to follow their mother and family herd.[126]A newborn calf will attract the attention of all the herd members. Adults and most of the other young will gather around the newborn, touching and caressing it with their trunks. For the first few days, the mother limits access to her young.Alloparenting– where a calf is cared for by someone other than its mother – takes place in some family groups. Allomothers are typically aged two to twelve years.[120] For the first few days, the newborn is unsteady on its feet and needs its mother's help. It relies on touch, smell, and hearing, as its eyesight is less developed. With little coordination in its trunk, it can only flop it around which may cause it to trip. When it reaches its second week, the calf can walk with more balance and has more control over its trunk. After its first month, the trunk can grab and hold objects but still lacks sucking abilities, and the calf must bend down to drink. It continues to stay near its mother as it is still reliant on her. For its first three months, a calf relies entirely on its mother's milk, after which it begins to forage for vegetation and can use its trunk to collect water. At the same time, there is progress in lip and leg movements. By nine months, mouth, trunk, and foot coordination are mastered. Suckling bouts tend to last 2–4 min/hr for a calf younger than a year. After a year, a calf is fully capable of grooming, drinking, and feeding itself. It still needs its mother's milk and protection until it is at least two years old. 
Suckling after two years may improve growth, health, and fertility.[126] Play behaviour in calves differs between the sexes; females run or chase each other while males play-fight. The former aresexually matureby the age of nine years[120]while the latter become mature around 14–15 years.[106]Adulthood starts at about 18 years of age in both sexes.[127][128]Elephants have long lifespans, reaching 60–70 years of age.[54]Lin Wang, a captive male Asian elephant, lived for 86 years.[129] Elephants communicate in various ways. Individuals greet one another by touching each other on the mouth, temporal glands, and genitals. This allows them to pick up chemical cues. Older elephants use trunk-slaps, kicks, and shoves to control younger ones. Touching is especially important for mother–calf communication. When moving, elephant mothers will touch their calves with their trunks or feet when side-by-side or with their tails if the calf is behind them. A calf will press against its mother's front legs to signal it wants to rest and will touch her breast or leg when it wants to suckle.[130] Visual displays mostly occur in agonistic situations. Elephants will try to appear more threatening by raising their heads and spreading their ears. They may add to the display by shaking their heads and snapping their ears, as well as tossing around dust and vegetation. They are usually bluffing when performing these actions. Excited elephants also raise their heads and spread their ears but additionally may raise their trunks. Submissive elephants will lower their heads and trunks, as well as flatten their ears against their necks, while those that are ready to fight will bend their ears in a V shape.[131] Elephants produce several vocalisations—some of which pass through the trunk[132]—for both short and long range communication. This includes trumpeting,bellowing,roaring,growling,barking, snorting, andrumbling.[132][133]Elephants can produceinfrasonicrumbles.[134]For Asian elephants, these calls have a frequency of 14–24Hz, withsound pressurelevels of 85–90dBand last 10–15 seconds.[135]For African elephants, calls range from 15 to 35 Hz with sound pressure levels as high as 117 dB, allowing communication for many kilometres, possibly over 10 km (6 mi).[136]Elephants are known tocommunicate with seismics, vibrations produced by impacts on the earth's surface or acoustical waves that travel through it. An individual foot stomping or mock charging can create seismic signals that can be heard at travel distances of up to 32 km (20 mi). Seismic waveforms produced by rumbles travel 16 km (10 mi).[137][138] Elephants are among the most intelligent animals. They exhibitmirror self-recognition, an indication ofself-awarenessandcognitionthat has also been demonstrated in someapesanddolphins.[139]One study of a captive female Asian elephant suggested the animal was capable of learning and distinguishing between several visual and some acoustic discrimination pairs. This individual was even able to score a high accuracy rating when re-tested with the same visual pairs a year later.[140]Elephants are among thespecies known to use tools. An Asian elephant has been observed fine-tuning branches for use asflyswatters.[141]Tool modification by these animals is not as advanced as that ofchimpanzees. Elephants are popularly thought of as having an excellent memory. This could have a factual basis; they possibly havecognitive mapswhich give them long lasting memories of their environment on a wide scale. 
Individuals may be able to remember where their family members are located.[42] Scientists debate the extent to which elephants feelemotion. They are attracted to the bones of their own kind, regardless of whether they are related.[142]As with chimpanzees and dolphins, a dying or dead elephant may elicit attention and aid from others, including those from other groups. This has been interpreted as expressing "concern";[143]however, theOxford Companion to Animal Behaviour(1987) said that "one is well advised to study the behaviour rather than attempting to get at any underlying emotion".[144] African bush elephants were listed asEndangeredby theInternational Union for Conservation of Nature(IUCN) in 2021,[145]and African forest elephants were listed asCritically Endangeredin the same year.[146]In 1979, Africa had an estimated population of at least 1.3 million elephants, possibly as high as 3.0 million. A decade later, the population was estimated to be 609,000; with 277,000 in Central Africa, 110,000 in Eastern Africa, 204,000 in Southern Africa, and 19,000 in Western Africa. The population of rainforest elephants was lower than anticipated, at around 214,000 individuals. Between 1977 and 1989, elephant populations declined by 74% in East Africa. After 1987, losses in elephant numbers hastened, and savannah populations from Cameroon to Somalia experienced a decline of 80%. African forest elephants had a total loss of 43%. Population trends in southern Africa were various, with unconfirmed losses in Zambia, Mozambique and Angola while populations grew in Botswana and Zimbabwe and were stable in South Africa.[147]The IUCN estimated that total population in Africa is estimated at to 415,000 individuals for both species combined as of 2016.[148] African elephants receive at least some legal protection in every country where they are found. Successful conservation efforts in certain areas have led to high population densities while failures have led to declines as high as 70% or more of the course of ten years. As of 2008, local numbers were controlled by contraception ortranslocation. Large-scalecullingsstopped in the late 1980s and early 1990s. In 1989, the African elephant was listed under Appendix I by theConvention on International Trade in Endangered Species of Wild Fauna and Flora(CITES), making trade illegal. Appendix II status (which allows restricted trade) was given to elephants in Botswana, Namibia, and Zimbabwe in 1997 and South Africa in 2000. In some countries,sport huntingof the animals is legal; Botswana, Cameroon, Gabon, Mozambique, Namibia, South Africa, Tanzania, Zambia, and Zimbabwe have CITES export quotas for elephant trophies.[145] In 2020, the IUCN listed the Asian elephant asendangereddue to the population declining by half over "the last three generations".[149]Asian elephants once ranged fromWesterntoEast Asiaand south toSumatra.[150]and Java. It is now extinct in these areas,[149]and the current range of Asian elephants is highly fragmented.[150]The total population of Asian elephants is estimated to be around 40,000–50,000, although this may be a loose estimate. Around 60% of the population is in India. 
Although Asian elephants are declining in numbers overall, particularly in Southeast Asia, the population in theWestern Ghatsmay have stabilised.[149] Thepoachingof elephants for their ivory, meat and hides has been one of the major threats to their existence.[149]Historically, numerous cultures made ornaments and other works of art from elephant ivory, and its use was comparable to that of gold.[151]The ivory trade contributed to the fall of the African elephant population in the late 20th century.[145]This prompted international bans on ivory imports, starting with the United States in June 1989, and followed by bans in other North American countries, western European countries, and Japan.[151]Around the same time, Kenya destroyed all its ivory stocks.[152]Ivory was banned internationally by CITES in 1990. Following the bans, unemployment rose in India and China, where the ivory industry was important economically. By contrast, Japan and Hong Kong, which were also part of the industry, were able to adapt and were not as badly affected.[151]Zimbabwe, Botswana, Namibia, Zambia, and Malawi wanted to continue the ivory trade and were allowed to, since their local populations were healthy, but only if their supplies were from culled individuals or those that died of natural causes.[152] The ban allowed the elephant to recover in parts of Africa.[151]In February 2012, 650 elephants inBouba Njida National Park, Cameroon, were slaughtered by Chadian raiders.[153]This has been called "one of the worst concentrated killings" since the ivory ban.[152]Asian elephants are potentially less vulnerable to the ivory trade, as females usually lack tusks. Still, members of the species have been killed for their ivory in some areas, such asPeriyar National Parkin India.[149]China was the biggest market for poached ivory but announced they would phase out the legal domestic manufacture and sale of ivory products in May 2015, and in September 2015, China and the United States said "they would enact a nearly complete ban on the import and export of ivory" due to causes of extinction.[154] Other threats to elephants includehabitat destructionandfragmentation. The Asian elephant lives in areas with some of the highest human populations and may be confined to small islands of forest among human-dominated landscapes. Elephants commonly trample and consume crops, which contributes to conflicts with humans, and both elephants and humans have died by the hundreds as a result. Mitigating these conflicts is important for conservation. One proposed solution is the protection ofwildlife corridorswhich give populations greater interconnectivity and space.[149]Chili pepper products as well as guarding with defense tools have been found to be effective in preventing crop-raiding by elephants. Less effective tactics includebeehiveandelectric fences.[155] Elephants have beenworking animalssince at least theIndus Valley civilizationover 4,000 years ago[156]and continue to be used in modern times. There were 13,000–16,500 working elephants employed in Asia in 2000. These animals are typically captured from the wild when they are 10–20 years old, the age range when they are both more trainable and can work for more years.[157]They weretraditionally captured with traps and lassos, but since 1950,tranquillisershave been used.[158]Individuals of the Asian species have often been trained as working animals. 
Asian elephants are used to carry and pull both objects and people in and out of areas as well as lead people in religious celebrations. They are valued over mechanised tools as they can perform the same tasks but in more difficult terrain, with strength, memory, and delicacy. Elephants can learn over 30 commands.[157]Musth bulls are difficult and dangerous to work with and so are chained up until their condition passes.[159] In India, many working elephants are alleged to have been subject to abuse. They and other captive elephants are thus protected underThe Prevention of Cruelty to Animals Act of 1960.[160]In both Myanmar and Thailand,deforestationand other economic factors have resulted in sizable populations of unemployed elephants resulting in health problems for the elephants themselves as well as economic and safety problems for the people amongst whom they live.[161][162] The practice of working elephants has also been attempted in Africa. The taming of African elephants in theBelgian Congobegan by decree ofLeopold II of Belgiumduring the 19th century and continues to the present with theApi Elephant Domestication Centre.[163] Historically, elephants were considered formidable instruments of war. They were described inSanskrittexts as far back as 1500 BC. From South Asia, the use of elephants in warfare spread west to Persia[164]and east to Southeast Asia.[165]The Persians used them during theAchaemenid Empire(between the 6th and 4th centuries BC)[164]while Southeast Asian states first used war elephants possibly as early as the 5th century BC and continued to the 20th century.[165]War elephants were also employed in the Mediterranean and North Africa throughout theclassical periodsince the reign ofPtolemy IIin Egypt. TheCarthaginiangeneralHannibalfamously took African elephants across theAlpsduring his war with the Romans and reached thePo Valleyin 218 BC with all of them alive, but died of disease and combat a year later.[164] An elephant's head and sides were equipped with armour, the trunk may have had a sword tied to it and tusks were sometimes covered with sharpened iron or brass. Trained elephants would attack both humans and horses with their tusks. They might have grasped an enemy soldier with the trunk and tossed him to theirmahout, or pinned the soldier to the ground and speared him. Some shortcomings of war elephants included their great visibility, which made them easy to target, and limited maneuverability compared to horses.Alexander the Greatachieved victory over armies with war elephants by having his soldiers injure the trunks and legs of the animals which caused them to panic and become uncontrollable.[164] Elephants have traditionally been a major part ofzoosandcircusesaround the world. In circuses, they are trained to perform tricks. The most famous circus elephant was probablyJumbo(1861 – 15 September 1885), who was a major attraction in theBarnum & Bailey Circus.[166][167]These animals do not reproduce well in captivity due to the difficulty of handling musth bulls and limited understanding of female oestrous cycles. Asian elephants were always more common than their African counterparts in modern zoos and circuses. After CITES listed the Asian elephant under Appendix I in 1975, imports of the species almost stopped by the end of the 1980s. Subsequently, the US received many captive African elephants from Zimbabwe, which had an overabundance of the animals.[167] Keeping elephants in zoos has met with some controversy. 
Proponents of zoos argue that they allow easy access to the animals and provide fund and knowledge for preserving their natural habitats, as well as safekeeping for the species. Opponents claim that animals in zoos are under physical and mental stress.[168]Elephants have been recorded displayingstereotypical behavioursin the form of wobbling the body or head and pacing the same route both forwards and backwards. This has been observed in 54% of individuals in UK zoos.[169]One study claims wild elephants in protected areas of Africa and Asia live more than twice as long as those in European zoos; the median lifespan of elephants in European zoos being 17 years. Other studies suggest that elephants in zoos live a similar lifespan as those in the wild.[170] The use of elephants in circuses has also been controversial; theHumane Society of the United Stateshas accused circuses of mistreating and distressing their animals.[171]In testimony to a US federal court in 2009, Barnum & Bailey Circus CEOKenneth Feldacknowledged that circus elephants are struck behind their ears, under their chins, and on their legs with metal-tipped prods, calledbull hooksor ankus. Feld stated that these practices are necessary to protect circus workers and acknowledged that an elephant trainer was rebuked for using an electric prod on an elephant. Despite this, he denied that any of these practices hurt the animals.[172]Some trainers have tried to train elephants without the use of physical punishment.Ralph Helferis known to have relied on positive reinforcement when training his animals.[173]Barnum and Bailey circus retired its touring elephants in May 2016.[174] Elephants can exhibit bouts of aggressive behaviour and engage in destructive actions against humans.[175]In Africa, groups of adolescent elephants damaged homes in villages after cullings in the 1970s and 1980s. Because of the timing, these attacks have been interpreted as vindictive.[176][177]In parts of India, male elephants have entered villages at night, destroying homes and killing people. From 2000 to 2004, 300 people died inJharkhand, and inAssam, 239 people were reportedly killed between 2001 and 2006.[175]Throughout the country, 1,500 people were killed by elephants between 2019 and 2022, which led to 300 elephants being killed in kind.[178]Local people have reported that some elephants were drunk during the attacks, though officials have disputed this.[179][180]Purportedly drunk elephants attacked an Indian village in December 2002, killing six people, which led to the retaliatory slaughter of about 200 elephants by locals.[181] Elephants have a universal presence in global culture. They have been represented in art sincePaleolithictimes. Africa, in particular, contains many examples of elephantrock art, especially in theSaharaand southern Africa.[182]In Asia, the animals are depicted asmotifsinHinduandBuddhistshrines and temples.[183]Elephants were often difficult to portray by people with no first-hand experience of them.[184]Theancient Romans, who kept the animals in captivity, depicted elephants more accurately thanmedievalEuropeans who portrayed them more like fantasy creatures, with horse, bovine, and boar-like traits, and trumpet-like trunks. As Europeans gained more access to captive elephants during the 15th century, depictions of them became more accurate, including one made byLeonardo da Vinci.[185] Elephants have been the subject of religious beliefs. 
TheMbuti peopleof central Africa believe that the souls of their dead ancestors resided in elephants.[183]Similar ideas existed among other African societies, who believed that their chiefs would bereincarnatedas elephants. During the 10th century AD, the people ofIgbo-Ukwu, in modern-day Nigeria, placed elephant tusks underneath their dead leader's feet in the grave.[186]The animals' importance is onlytotemicin Africa but is much more significant in Asia.[187]In Sumatra, elephants have been associated with lightning. Likewise, in Hinduism, they are linked with thunderstorms asAiravata, the father of all elephants, represents both lightning and rainbows.[183]One of the most important Hindu deities, the elephant-headedGanesha, is ranked equal with the supreme godsShiva,Vishnu, andBrahmain some traditions.[188]Ganesha is associated with writers and merchants, and it is believed that he can give people success as well as grant them their desires, but could also take these things away.[183]In Buddhism,Buddhais said to have taken the form of awhite elephantwhen he entered hismother'swomb to be reincarnated as a human.[189] In Western popular culture, elephants symbolise the exotic, especially since – as with thegiraffe,hippopotamus, andrhinoceros– there are no similar animals familiar to Western audiences. As characters, elephants are most common inchildren's stories, where they are portrayed positively. They are typically surrogates for humans with ideal human values. Many stories tell of isolated young elephants returning to or finding a family, such as "The Elephant's Child" fromRudyard Kipling'sJust So Stories,Disney'sDumbo,and Kathryn and Byron Jackson'sThe Saggy Baggy Elephant. Other elephant heroesgiven human qualitiesincludeJean de Brunhoff'sBabar,David McKee'sElmer, andDr. Seuss'sHorton.[190] Several cultural references emphasise the elephant's size and strangeness. For instance, a "white elephant" is a byword for something that is weird, unwanted, and has no value.[190]The expression "elephant in the room" refers to something that is being ignored but ultimately must be addressed.[191]The story of theblind men and an elephantinvolves blind men touching different parts of an elephant and trying to figure out what it is.[192]
https://en.wikipedia.org/wiki/Elephant
Quantifier may refer to:
https://en.wikipedia.org/wiki/Quantifier
First-order logic, also calledpredicate logic,predicate calculus, orquantificational logic, is a collection offormal systemsused inmathematics,philosophy,linguistics, andcomputer science. First-order logic usesquantified variablesover non-logical objects, and allows the use of sentences that contain variables. Rather than propositions such as "all men are mortal", in first-order logic one can have expressions in the form "for allx, ifxis a man, thenxis mortal"; where "for allx"is a quantifier,xis a variable, and "...is a man" and "...is mortal" are predicates.[1]This distinguishes it frompropositional logic, which does not use quantifiers orrelations;[2]: 161in this sense, propositional logic is the foundation of first-order logic. A theory about a topic, such asset theory, a theory for groups,[3]or a formal theory ofarithmetic, is usually a first-order logic together with a specifieddomain of discourse(over which the quantified variables range), finitely many functions from that domain to itself, finitely manypredicatesdefined on that domain, and a set of axioms believed to hold about them. "Theory" is sometimes understood in a more formal sense as just a set of sentences in first-order logic. The term "first-order" distinguishes first-order logic fromhigher-order logic, in which there are predicates having predicates or functions as arguments, or in which quantification over predicates, functions, or both, are permitted.[4]: 56In first-order theories, predicates are often associated with sets. In interpreted higher-order theories, predicates may be interpreted as sets of sets. There are manydeductive systemsfor first-order logic which are bothsound, i.e. all provable statements are true in all models; andcomplete, i.e. all statements which are true in all models are provable. Although thelogical consequencerelation is onlysemidecidable, much progress has been made inautomated theorem provingin first-order logic. First-order logic also satisfies severalmetalogicaltheorems that make it amenable to analysis inproof theory, such as theLöwenheim–Skolem theoremand thecompactness theorem. First-order logic is the standard for the formalization of mathematics intoaxioms, and is studied in thefoundations of mathematics.Peano arithmeticandZermelo–Fraenkel set theoryare axiomatizations ofnumber theoryand set theory, respectively, into first-order logic. No first-order theory, however, has the strength to uniquely describe a structure with an infinite domain, such as thenatural numbersor thereal line. Axiom systems that do fully describe these two structures, i.e.categoricalaxiom systems, can be obtained in stronger logics such assecond-order logic. The foundations of first-order logic were developed independently byGottlob FregeandCharles Sanders Peirce.[5]For a history of first-order logic and how it came to dominate formal logic, see José Ferreirós (2001). Whilepropositional logicdeals with simple declarative propositions, first-order logic additionally coverspredicatesandquantification. A predicate evaluates totrueorfalsefor an entity or entities in thedomain of discourse. Consider the two sentences "Socratesis a philosopher" and "Platois a philosopher". Inpropositional logic, these sentences themselves are viewed as the individuals of study, and might be denoted, for example, by variables such aspandq. 
They are not viewed as an application of a predicate, such asisPhil{\displaystyle {\text{isPhil}}}, to any particular objects in the domain of discourse, instead viewing them as purely an utterance which is either true or false.[6]However, in first-order logic, these two sentences may be framed as statements that a certain individual or non-logical object has a property. In this example, both sentences happen to have the common formisPhil(x){\displaystyle {\text{isPhil}}(x)}for some individualx{\displaystyle x}, in the first sentence the value of the variablexis "Socrates", and in the second sentence it is "Plato". Due to the ability to speak about non-logical individuals along with the original logical connectives, first-order logic includes propositional logic.[7]: 29–30 The truth of a formula such as "xis a philosopher" depends on which object is denoted byxand on the interpretation of the predicate "is a philosopher". Consequently, "xis a philosopher" alone does not have a definite truth value of true or false, and is akin to a sentence fragment.[8]Relationships between predicates can be stated usinglogical connectives. For example, the first-order formula "ifxis a philosopher, thenxis a scholar", is aconditionalstatement with "xis a philosopher" as its hypothesis, and "xis a scholar" as its conclusion, which again needs specification ofxin order to have a definite truth value. Quantifiers can be applied to variables in a formula. The variablexin the previous formula can be universally quantified, for instance, with the first-order sentence "For everyx, ifxis a philosopher, thenxis a scholar". Theuniversal quantifier"for every" in this sentence expresses the idea that the claim "ifxis a philosopher, thenxis a scholar" holds forallchoices ofx. Thenegationof the sentence "For everyx, ifxis a philosopher, thenxis a scholar" is logically equivalent to the sentence "There existsxsuch thatxis a philosopher andxis not a scholar". Theexistential quantifier"there exists" expresses the idea that the claim "xis a philosopher andxis not a scholar" holds forsomechoice ofx. The predicates "is a philosopher" and "is a scholar" each take a single variable. In general, predicates can take several variables. In the first-order sentence "Socrates is the teacher of Plato", the predicate "is the teacher of" takes two variables. An interpretation (or model) of a first-order formula specifies what each predicate means, and the entities that can instantiate the variables. These entities form thedomain of discourseor universe, which is usually required to be a nonempty set. For example, consider the sentence "There existsxsuch thatxis a philosopher." This sentence is seen as being true in an interpretation such that the domain of discourse consists of all human beings, and that the predicate "is a philosopher" is understood as "was the author of theRepublic." It is true, as witnessed by Plato in that text.[clarification needed] There are two key parts of first-order logic. Thesyntaxdetermines which finite sequences of symbols are well-formed expressions in first-order logic, while thesemanticsdetermines the meanings behind these expressions. Unlike natural languages, such as English, the language of first-order logic is completely formal, so that it can be mechanically determined whether a given expression iswell formed. There are two key types of well-formed expressions:terms, which intuitively represent objects, andformulas, which intuitively express statements that can be true or false. 
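Anticipating the formal syntax defined below, the examples above can be written compactly (here Phil, Schol, and TeacherOf are stand-in predicate symbols chosen only for illustration): the universally quantified sentence is {\displaystyle \forall x\,({\text{Phil}}(x)\rightarrow {\text{Schol}}(x))}, its negation is logically equivalent to {\displaystyle \exists x\,({\text{Phil}}(x)\land \lnot {\text{Schol}}(x))}, and "Socrates is the teacher of Plato" becomes the atomic formula {\displaystyle {\text{TeacherOf}}({\text{Socrates}},{\text{Plato}})} with two arguments.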
The terms and formulas of first-order logic are strings ofsymbols, where all the symbols together form thealphabetof the language. As with allformal languages, the nature of the symbols themselves is outside the scope of formal logic; they are often regarded simply as letters and punctuation symbols. It is common to divide the symbols of the alphabet intological symbols, which always have the same meaning, andnon-logical symbols, whose meaning varies by interpretation.[9]For example, the logical symbol∧{\displaystyle \land }always represents "and"; it is never interpreted as "or", which is represented by the logical symbol∨{\displaystyle \lor }. However, a non-logical predicate symbol such as Phil(x) could be interpreted to mean "xis a philosopher", "xis a man named Philip", or any other unary predicate depending on the interpretation at hand. Logical symbols are a set of characters that vary by author, but usually include the following:[10] Not all of these symbols are required in first-order logic. Either one of the quantifiers along with negation, conjunction (or disjunction), variables, brackets, and equality suffices. Other logical symbols include the following: Non-logical symbolsrepresent predicates (relations), functions and constants. It used to be standard practice to use a fixed, infinite set of non-logical symbols for all purposes: When the arity of a predicate symbol or function symbol is clear from context, the superscriptnis often omitted. In this traditional approach, there is only one language of first-order logic.[13]This approach is still common, especially in philosophically oriented books. A more recent practice is to use different non-logical symbols according to the application one has in mind. Therefore, it has become necessary to name the set of all non-logical symbols used in a particular application. This choice is made via asignature.[14] Typical signatures in mathematics are {1, ×} or just {×} forgroups,[3]or {0, 1, +, ×, <} forordered fields. There are no restrictions on the number of non-logical symbols. The signature can beempty, finite, or infinite, evenuncountable. Uncountable signatures occur for example in modern proofs of theLöwenheim–Skolem theorem. Though signatures might in some cases imply how non-logical symbols are to be interpreted,interpretationof the non-logical symbols in the signature is separate (and not necessarily fixed). Signatures concern syntax rather than semantics. In this approach, every non-logical symbol is of one of the following types: The traditional approach can be recovered in the modern approach, by simply specifying the "custom" signature to consist of the traditional sequences of non-logical symbols. Theformation rulesdefine the terms and formulas of first-order logic.[16]When terms and formulas are represented as strings of symbols, these rules can be used to write aformal grammarfor terms and formulas. These rules are generallycontext-free(each production has a single symbol on the left side), except that the set of symbols may be allowed to be infinite and there may be many start symbols, for example the variables in the case ofterms. The set oftermsisinductively definedby the following rules:[17] Only expressions which can be obtained by finitely many applications of rules 1 and 2 are terms. For example, no expression involving a predicate symbol is a term. 
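As a concrete, informal companion to the inductive definition of terms, the short Python sketch below represents variables and function applications as a small data structure; the class names and the treatment of constants as 0-ary function symbols are choices of this sketch, not part of any standard library.

```python
from dataclasses import dataclass

# Rule 1: a variable symbol on its own is a term.
@dataclass(frozen=True)
class Var:
    name: str

# Rule 2: an n-ary function symbol applied to n terms is a term;
# constant symbols are treated here as 0-ary function symbols.
@dataclass(frozen=True)
class Func:
    symbol: str
    args: tuple = ()

# Example: the term f(x, c) over a signature with a binary f and a constant c.
t = Func("f", (Var("x"), Func("c")))
```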
The set offormulas(also calledwell-formed formulas[18]orWFFs) is inductively defined by the following rules: Only expressions which can be obtained by finitely many applications of rules 1–5 are formulas. The formulas obtained from the first two rules are said to beatomic formulas. For example: is a formula, iffis a unary function symbol,Pa unary predicate symbol, and Q a ternary predicate symbol. However,∀xx→{\displaystyle \forall x\,x\rightarrow }is not a formula, although it is a string of symbols from the alphabet. The role of the parentheses in the definition is to ensure that any formula can only be obtained in one way—by following the inductive definition (i.e., there is a uniqueparse treefor each formula). This property is known asunique readabilityof formulas. There are many conventions for where parentheses are used in formulas. For example, some authors use colons or full stops instead of parentheses, or change the places in which parentheses are inserted. Each author's particular definition must be accompanied by a proof of unique readability. For convenience, conventions have been developed about the precedence of the logical operators, to avoid the need to write parentheses in some cases. These rules are similar to theorder of operationsin arithmetic. A common convention is: Moreover, extra punctuation not required by the definition may be inserted—to make formulas easier to read. Thus the formula: might be written as: In a formula, a variable may occurfreeorbound(or both). One formalization of this notion is due to Quine, first the concept of a variable occurrence is defined, then whether a variable occurrence is free or bound, then whether a variable symbol overall is free or bound. In order to distinguish different occurrences of the identical symbolx, each occurrence of a variable symbolxin a formula φ is identified with the initial substring of φ up to the point at which said instance of the symbolxappears.[8]p.297Then, an occurrence ofxis said to be bound if that occurrence ofxlies within the scope of at least one of either∃x{\displaystyle \exists x}or∀x{\displaystyle \forall x}. Finally,xis bound in φ if all occurrences ofxin φ are bound.[8]pp.142--143 Intuitively, a variable symbol is free in a formula if at no point is it quantified:[8]pp.142--143in∀yP(x,y), the sole occurrence of variablexis free while that ofyis bound. The free and bound variable occurrences in a formula are defined inductively as follows. For example, in∀x∀y(P(x) →Q(x,f(x),z)),xandyoccur only bound,[19]zoccurs only free, andwis neither because it does not occur in the formula. Free and bound variables of a formula need not be disjoint sets: in the formulaP(x) → ∀xQ(x), the first occurrence ofx, as argument ofP, is free while the second one, as argument ofQ, is bound. A formula in first-order logic with no free variable occurrences is called afirst-ordersentence. These are the formulas that will have well-definedtruth valuesunder an interpretation. For example, whether a formula such as Phil(x) is true must depend on whatxrepresents. But the sentence∃xPhil(x)will be either true or false in a given interpretation. In mathematics, the language of orderedabelian groupshas one constant symbol 0, one unary function symbol −, one binary function symbol +, and one binary relation symbol ≤. Then: The axioms for ordered abelian groups can be expressed as a set of sentences in the language. 
For example, the axiom stating that the group is commutative is usually written(∀x)(∀y)[x+y=y+x].{\displaystyle (\forall x)(\forall y)[x+y=y+x].} Aninterpretationof a first-order language assigns a denotation to each non-logical symbol (predicate symbol, function symbol, or constant symbol) in that language. It also determines adomain of discoursethat specifies the range of the quantifiers. The result is that each term is assigned an object that it represents, each predicate is assigned a property of objects, and each sentence is assigned a truth value. In this way, an interpretation provides semantic meaning to the terms, predicates, and formulas of the language. The study of the interpretations of formal languages is calledformal semantics. What follows is a description of the standard orTarskiansemantics for first-order logic. (It is also possible to definegame semantics for first-order logic, but aside from requiring theaxiom of choice, game semantics agree with Tarskian semantics for first-order logic, so game semantics will not be elaborated herein.) The most common way of specifying an interpretation (especially in mathematics) is to specify astructure(also called amodel; see below). The structure consists of a domain of discourseDand an interpretation functionImapping non-logical symbols to predicates, functions, and constants. The domain of discourseDis a nonempty set of "objects" of some kind. Intuitively, given an interpretation, a first-order formula becomes a statement about these objects; for example,∃xP(x){\displaystyle \exists xP(x)}states the existence of some object inDfor which the predicatePis true (or, more precisely, for which the predicate assigned to the predicate symbolPby the interpretation is true). For example, one can takeDto be the set ofintegers. Non-logical symbols are interpreted as follows: A formula evaluates to true or false given an interpretation and avariable assignmentμ that associates an element of the domain of discourse with each variable. The reason that a variable assignment is required is to give meanings to formulas with free variables, such asy=x{\displaystyle y=x}. The truth value of this formula changes depending on the values thatxandydenote. First, the variable assignment μ can be extended to all terms of the language, with the result that each term maps to a single element of the domain of discourse. The following rules are used to make this assignment: Next, each formula is assigned a truth value. The inductive definition used to make this assignment is called theT-schema. If a formula does not contain free variables, and so is a sentence, then the initial variable assignment does not affect its truth value. In other words, a sentence is true according toMandμ{\displaystyle \mu }if and only if it is true according toMand every other variable assignmentμ′{\displaystyle \mu '}. There is a second common approach to defining truth values that does not rely on variable assignment functions. Instead, given an interpretationM, one first adds to the signature a collection of constant symbols, one for each element of the domain of discourse inM; say that for eachdin the domain the constant symbolcdis fixed. The interpretation is extended so that each new constant symbol is assigned to its corresponding element of the domain. One now defines truth for quantified formulas syntactically, as follows: This alternate approach gives exactly the same truth values to all sentences as the approach via variable assignments. 
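The Tarskian truth definition can also be made concrete for finite interpretations. The Python sketch below is a minimal illustration, not a standard implementation: formulas are represented by a handful of illustrative constructor classes, atomic formulas take only variables as arguments for brevity, and the quantifier clauses of the T-schema become loops over the finite domain of discourse.

```python
from dataclasses import dataclass

# Illustrative formula constructors (not a standard library).
@dataclass(frozen=True)
class Pred:            # atomic formula P(x1, ..., xn); arguments are variable names
    name: str
    args: tuple

@dataclass(frozen=True)
class Not:
    sub: object

@dataclass(frozen=True)
class And:
    left: object
    right: object

@dataclass(frozen=True)
class Implies:
    left: object
    right: object

@dataclass(frozen=True)
class ForAll:
    var: str
    body: object

@dataclass(frozen=True)
class Exists:
    var: str
    body: object

def holds(phi, domain, preds, mu):
    """T-schema-style evaluation: domain is a finite set, preds maps each
    predicate name to the set of argument tuples where it is true, and mu
    is a variable assignment (variable name -> domain element)."""
    if isinstance(phi, Pred):
        return tuple(mu[a] for a in phi.args) in preds[phi.name]
    if isinstance(phi, Not):
        return not holds(phi.sub, domain, preds, mu)
    if isinstance(phi, And):
        return holds(phi.left, domain, preds, mu) and holds(phi.right, domain, preds, mu)
    if isinstance(phi, Implies):
        return (not holds(phi.left, domain, preds, mu)) or holds(phi.right, domain, preds, mu)
    if isinstance(phi, ForAll):
        return all(holds(phi.body, domain, preds, {**mu, phi.var: d}) for d in domain)
    if isinstance(phi, Exists):
        return any(holds(phi.body, domain, preds, {**mu, phi.var: d}) for d in domain)
    raise TypeError("unknown formula")

# Example: over domain {1, 2, 3} with Phil true of 1 and Schol true of 1 and 2,
# check the sentence  "for all x, if Phil(x) then Schol(x)".
domain = {1, 2, 3}
preds = {"Phil": {(1,)}, "Schol": {(1,), (2,)}}
sentence = ForAll("x", Implies(Pred("Phil", ("x",)), Pred("Schol", ("x",))))
print(holds(sentence, domain, preds, {}))  # True
```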
If a sentence φ evaluates totrueunder a given interpretationM, one says thatMsatisfiesφ; this is denoted[20]M⊨φ{\displaystyle M\vDash \varphi }. A sentence issatisfiableif there is some interpretation under which it is true. This is a bit different from the symbol⊨{\displaystyle \vDash }from model theory, whereM⊨ϕ{\displaystyle M\vDash \phi }denotes satisfiability in a model, i.e. "there is a suitable assignment of values inM{\displaystyle M}'s domain to variable symbols ofϕ{\displaystyle \phi }".[21] Satisfiability of formulas with free variables is more complicated, because an interpretation on its own does not determine the truth value of such a formula. The most common convention is that a formula φ with free variablesx1{\displaystyle x_{1}}, ...,xn{\displaystyle x_{n}}is said to be satisfied by an interpretation if the formula φ remains true regardless which individuals from the domain of discourse are assigned to its free variablesx1{\displaystyle x_{1}}, ...,xn{\displaystyle x_{n}}. This has the same effect as saying that a formula φ is satisfied if and only if itsuniversal closure∀x1…∀xnϕ(x1,…,xn){\displaystyle \forall x_{1}\dots \forall x_{n}\phi (x_{1},\dots ,x_{n})}is satisfied. A formula islogically valid(or simplyvalid) if it is true in every interpretation.[22]These formulas play a role similar totautologiesin propositional logic. A formula φ is alogical consequenceof a formula ψ if every interpretation that makes ψ true also makes φ true. In this case one says that φ is logically implied by ψ. An alternate approach to the semantics of first-order logic proceeds viaabstract algebra. This approach generalizes theLindenbaum–Tarski algebrasof propositional logic. There are three ways of eliminating quantified variables from first-order logic that do not involve replacing quantifiers with other variable binding term operators: Thesealgebrasare alllatticesthat properly extend thetwo-element Boolean algebra. Tarski and Givant (1987) showed that the fragment of first-order logic that has noatomic sentencelying in the scope of more than three quantifiers has the same expressive power asrelation algebra.[23]: 32–33This fragment is of great interest because it suffices forPeano arithmeticand mostaxiomatic set theory, including the canonicalZermelo–Fraenkel set theory(ZFC). They also prove that first-order logic with a primitiveordered pairis equivalent to a relation algebra with two ordered pairprojection functions.[24]: 803 Afirst-order theoryof a particular signature is a set ofaxioms, which are sentences consisting of symbols from that signature. The set of axioms is often finite orrecursively enumerable, in which case the theory is calledeffective. Some authors require theories to also include all logical consequences of the axioms. The axioms are considered to hold within the theory and from them other sentences that hold within the theory can be derived. A first-order structure that satisfies all sentences in a given theory is said to be amodelof the theory. Anelementary classis the set of all structures satisfying a particular theory. These classes are a main subject of study inmodel theory. Many theories have anintended interpretation, a certain model that is kept in mind when studying the theory. For example, the intended interpretation ofPeano arithmeticconsists of the usualnatural numberswith their usual operations. However, the Löwenheim–Skolem theorem shows that most first-order theories will also have other,nonstandard models. 
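A small standard example: over the signature consisting of a single binary relation symbol R, the theory of equivalence relations has the three axioms {\displaystyle \forall x\,R(x,x)}, {\displaystyle \forall x\forall y\,(R(x,y)\rightarrow R(y,x))}, and {\displaystyle \forall x\forall y\forall z\,(R(x,y)\land R(y,z)\rightarrow R(x,z))}; its models are exactly the sets equipped with an equivalence relation, and these models form an elementary class.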
A theory isconsistent(within adeductive system) if it is not possible to prove a contradiction from the axioms of the theory. A theory iscompleteif, for every formula in its signature, either that formula or its negation is a logical consequence of the axioms of the theory.Gödel's incompleteness theoremshows that effective first-order theories that include a sufficient portion of the theory of the natural numbers can never be both consistent and complete. The definition above requires that the domain of discourse of any interpretation must be nonempty. There are settings, such asinclusive logic, where empty domains are permitted. Moreover, if a class of algebraic structures includes an empty structure (for example, there is an emptyposet), that class can only be an elementary class in first-order logic if empty domains are permitted or the empty structure is removed from the class. There are several difficulties with empty domains, however: Thus, when the empty domain is permitted, it must often be treated as a special case. Most authors, however, simply exclude the empty domain by definition. Adeductive systemis used to demonstrate, on a purely syntactic basis, that one formula is a logical consequence of another formula. There are many such systems for first-order logic, includingHilbert-style deductive systems,natural deduction, thesequent calculus, thetableaux method, andresolution. These share the common property that a deduction is a finite syntactic object; the format of this object, and the way it is constructed, vary widely. These finite deductions themselves are often calledderivationsin proof theory. They are also often calledproofsbut are completely formalized unlike natural-languagemathematical proofs. A deductive system issoundif any formula that can be derived in the system is logically valid. Conversely, a deductive system iscompleteif every logically valid formula is derivable. All of the systems discussed in this article are both sound and complete. They also share the property that it is possible to effectively verify that a purportedly valid deduction is actually a deduction; such deduction systems are calledeffective. A key property of deductive systems is that they are purely syntactic, so that derivations can be verified without considering any interpretation. Thus, a sound argument is correct in every possible interpretation of the language, regardless of whether that interpretation is about mathematics, economics, or some other area. In general, logical consequence in first-order logic is onlysemidecidable: if a sentence A logically implies a sentence B then this can be discovered (for example, by searching for a proof until one is found, using some effective, sound, complete proof system). However, if A does not logically imply B, this does not mean that A logically implies the negation of B. There is no effective procedure that, given formulas A and B, always correctly decides whether A logically implies B. Arule of inferencestates that, given a particular formula (or set of formulas) with a certain property as a hypothesis, another specific formula (or set of formulas) can be derived as a conclusion. The rule is sound (or truth-preserving) if it preserves validity in the sense that whenever any interpretation satisfies the hypothesis, that interpretation also satisfies the conclusion. For example, one common rule of inference is therule of substitution. 
Iftis a term and φ is a formula possibly containing the variablex, then φ[t/x] is the result of replacing all free instances ofxbytin φ. The substitution rule states that for any φ and any termt, one can conclude φ[t/x] from φ provided that no free variable oftbecomes bound during the substitution process. (If some free variable oftbecomes bound, then to substitutetforxit is first necessary to change the bound variables of φ to differ from the free variables oft.) To see why the restriction on bound variables is necessary, consider the logically valid formula φ given by∃x(x=y){\displaystyle \exists x(x=y)}, in the signature of (0,1,+,×,=) of arithmetic. Iftis the term "x + 1", the formula φ[t/y] is∃x(x=x+1){\displaystyle \exists x(x=x+1)}, which will be false in many interpretations. The problem is that the free variablexoftbecame bound during the substitution. The intended replacement can be obtained by renaming the bound variablexof φ to something else, sayz, so that the formula after substitution is∃z(z=x+1){\displaystyle \exists z(z=x+1)}, which is again logically valid. The substitution rule demonstrates several common aspects of rules of inference. It is entirely syntactical; one can tell whether it was correctly applied without appeal to any interpretation. It has (syntactically defined) limitations on when it can be applied, which must be respected to preserve the correctness of derivations. Moreover, as is often the case, these limitations are necessary because of interactions between free and bound variables that occur during syntactic manipulations of the formulas involved in the inference rule. A deduction in a Hilbert-style deductive system is a list of formulas, each of which is alogical axiom, a hypothesis that has been assumed for the derivation at hand or follows from previous formulas via a rule of inference. The logical axioms consist of severalaxiom schemasof logically valid formulas; these encompass a significant amount of propositional logic. The rules of inference enable the manipulation of quantifiers. Typical Hilbert-style systems have a small number of rules of inference, along with several infinite schemas of logical axioms. It is common to have onlymodus ponensanduniversal generalizationas rules of inference. Natural deduction systems resemble Hilbert-style systems in that a deduction is a finite list of formulas. However, natural deduction systems have no logical axioms; they compensate by adding additional rules of inference that can be used to manipulate the logical connectives in formulas in the proof. The sequent calculus was developed to study the properties of natural deduction systems.[25]Instead of working with one formula at a time, it usessequents, which are expressions of the form: where A1, ..., An, B1, ..., Bkare formulas and the turnstile symbol⊢{\displaystyle \vdash }is used as punctuation to separate the two halves. Intuitively, a sequent expresses the idea that(A1∧⋯∧An){\displaystyle (A_{1}\land \cdots \land A_{n})}implies(B1∨⋯∨Bk){\displaystyle (B_{1}\lor \cdots \lor B_{k})}. Unlike the methods just described the derivations in the tableaux method are not lists of formulas. Instead, a derivation is a tree of formulas. To show that a formula A is provable, the tableaux method attempts to demonstrate that the negation of A is unsatisfiable. The tree of the derivation has¬A{\displaystyle \lnot A}at its root; the tree branches in a way that reflects the structure of the formula. 
For example, to show thatC∨D{\displaystyle C\lor D}is unsatisfiable requires showing that C and D are each unsatisfiable; this corresponds to a branching point in the tree with parentC∨D{\displaystyle C\lor D}and children C and D. Theresolution ruleis a single rule of inference that, together withunification, is sound and complete for first-order logic. As with the tableaux method, a formula is proved by showing that the negation of the formula is unsatisfiable. Resolution is commonly used in automated theorem proving. The resolution method works only with formulas that are disjunctions of atomic formulas; arbitrary formulas must first be converted to this form throughSkolemization. The resolution rule states that from the hypothesesA1∨⋯∨Ak∨C{\displaystyle A_{1}\lor \cdots \lor A_{k}\lor C}andB1∨⋯∨Bl∨¬C{\displaystyle B_{1}\lor \cdots \lor B_{l}\lor \lnot C}, the conclusionA1∨⋯∨Ak∨B1∨⋯∨Bl{\displaystyle A_{1}\lor \cdots \lor A_{k}\lor B_{1}\lor \cdots \lor B_{l}}can be obtained. Many identities can be proved, which establish equivalences between particular formulas. These identities allow for rearranging formulas by moving quantifiers across other connectives and are useful for putting formulas inprenex normal form. Some provable identities include: There are several different conventions for using equality (or identity) in first-order logic. The most common convention, known asfirst-order logic with equality, includes the equality symbol as a primitive logical symbol which is always interpreted as the real equality relation between members of the domain of discourse, such that the "two" given members are the same member. This approach also adds certain axioms about equality to the deductive system employed. These equality axioms are:[26]: 198–200 These areaxiom schemas, each of which specifies an infinite set of axioms. The third schema is known asLeibniz's law, "the principle of substitutivity", "the indiscernibility of identicals", or "the replacement property". The second schema, involving the function symbolf, is (equivalent to) a special case of the third schema, using the formula: Then Sincex=yis given, andf(...,x, ...) =f(...,x, ...) true by reflexivity, we havef(...,x, ...) =f(...,y, ...) Many other properties of equality are consequences of the axioms above, for example: An alternate approach considers the equality relation to be a non-logical symbol. This convention is known asfirst-order logic without equality. If an equality relation is included in the signature, the axioms of equality must now be added to the theories under consideration, if desired, instead of being considered rules of logic. The main difference between this method and first-order logic with equality is that an interpretation may now interpret two distinct individuals as "equal" (although, by Leibniz's law, these will satisfy exactly the same formulas under any interpretation). That is, the equality relation may now be interpreted by an arbitraryequivalence relationon the domain of discourse that iscongruentwith respect to the functions and relations of the interpretation. When this second convention is followed, the termnormal modelis used to refer to an interpretation where no distinct individualsaandbsatisfya=b. In first-order logic with equality, only normal models are considered, and so there is no term for a model other than a normal model. 
When first-order logic without equality is studied, it is necessary to amend the statements of results such as theLöwenheim–Skolem theoremso that only normal models are considered. First-order logic without equality is often employed in the context ofsecond-order arithmeticand other higher-order theories of arithmetic, where the equality relation between sets of natural numbers is usually omitted. If a theory has a binary formulaA(x,y) which satisfies reflexivity and Leibniz's law, the theory is said to have equality, or to be a theory with equality. The theory may not have all instances of the above schemas as axioms, but rather as derivable theorems. For example, in theories with no function symbols and a finite number of relations, it is possible todefineequality in terms of the relations, by defining the two termssandtto be equal if any relation is unchanged by changingstotin any argument. Some theories allow otherad hocdefinitions of equality: One motivation for the use of first-order logic, rather thanhigher-order logic, is that first-order logic has manymetalogicalproperties that stronger logics do not have. These results concern general properties of first-order logic itself, rather than properties of individual theories. They provide fundamental tools for the construction of models of first-order theories. Gödel's completeness theorem, proved byKurt Gödelin 1929, establishes that there are sound, complete, effective deductive systems for first-order logic, and thus the first-order logical consequence relation is captured by finite provability. Naively, the statement that a formula φ logically implies a formula ψ depends on every model of φ; these models will in general be of arbitrarily large cardinality, and so logical consequence cannot be effectively verified by checking every model. However, it is possible to enumerate all finite derivations and search for a derivation of ψ from φ. If ψ is logically implied by φ, such a derivation will eventually be found. Thus first-order logical consequence issemidecidable: it is possible to make an effective enumeration of all pairs of sentences (φ,ψ) such that ψ is a logical consequence of φ. Unlikepropositional logic, first-order logic isundecidable(although semidecidable), provided that the language has at least one predicate of arity at least 2 (other than equality). This means that there is nodecision procedurethat determines whether arbitrary formulas are logically valid. This result was established independently byAlonzo ChurchandAlan Turingin 1936 and 1937, respectively, giving a negative answer to theEntscheidungsproblemposed byDavid HilbertandWilhelm Ackermannin 1928. Their proofs demonstrate a connection between the unsolvability of the decision problem for first-order logic and the unsolvability of thehalting problem. There are systems weaker than full first-order logic for which the logical consequence relation is decidable. These include propositional logic andmonadic predicate logic, which is first-order logic restricted to unary predicate symbols and no function symbols. Other logics with no function symbols which are decidable are theguarded fragmentof first-order logic, as well astwo-variable logic. TheBernays–Schönfinkel classof first-order formulas is also decidable. Decidable subsets of first-order logic are also studied in the framework ofdescription logics. 
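The proof-search idea behind semidecidability, together with the resolution rule described earlier, can be illustrated with a toy refutation search on ground (variable-free) clauses. The sketch below is only a propositional-level caricature: its saturation loop always terminates, whereas a genuine first-order prover adds unification and Skolemization and is only guaranteed to terminate when the input is unsatisfiable. The clause encoding (frozensets of string literals, with a leading "~" marking negation) is an assumption of the sketch.

```python
from itertools import combinations

def resolve(c1, c2):
    """Return all resolvents of two ground clauses (clauses are frozensets of
    literals; a literal is a string, with a leading '~' marking negation)."""
    out = set()
    for lit in c1:
        comp = lit[1:] if lit.startswith("~") else "~" + lit
        if comp in c2:
            out.add(frozenset((c1 - {lit}) | (c2 - {comp})))
    return out

def unsatisfiable(clauses):
    """Saturate the clause set under resolution; report True if the empty
    clause is derived, i.e. the set is refutable and hence unsatisfiable."""
    clauses = set(map(frozenset, clauses))
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolve(c1, c2):
                if not r:          # empty clause: contradiction found
                    return True
                new.add(r)
        if new <= clauses:         # nothing new: saturated without refutation
            return False
        clauses |= new

# Example: from P(a), the clause ~P(a) | Q(a) (i.e. P(a) -> Q(a)), and ~Q(a),
# the empty clause is derivable, so the set is unsatisfiable.
print(unsatisfiable([{"P(a)"}, {"~P(a)", "Q(a)"}, {"~Q(a)"}]))  # True
```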
TheLöwenheim–Skolem theoremshows that if a first-order theory ofcardinalityλ has an infinite model, then it has models of every infinite cardinality greater than or equal to λ. One of the earliest results inmodel theory, it implies that it is not possible to characterizecountabilityor uncountability in a first-order language with a countable signature. That is, there is no first-order formula φ(x) such that an arbitrary structure M satisfies φ if and only if the domain of discourse of M is countable (or, in the second case, uncountable). The Löwenheim–Skolem theorem implies that infinite structures cannot becategoricallyaxiomatized in first-order logic. For example, there is no first-order theory whose only model is the real line: any first-order theory with an infinite model also has a model of cardinality larger than the continuum. Since the real line is infinite, any theory satisfied by the real line is also satisfied by somenonstandard models. When the Löwenheim–Skolem theorem is applied to first-order set theories, the nonintuitive consequences are known asSkolem's paradox. Thecompactness theoremstates that a set of first-order sentences has a model if and only if every finite subset of it has a model.[29]This implies that if a formula is a logical consequence of an infinite set of first-order axioms, then it is a logical consequence of some finite number of those axioms. This theorem was proved first by Kurt Gödel as a consequence of the completeness theorem, but many additional proofs have been obtained over time. It is a central tool in model theory, providing a fundamental method for constructing models. The compactness theorem has a limiting effect on which collections of first-order structures are elementary classes. For example, the compactness theorem implies that any theory that has arbitrarily large finite models has an infinite model. Thus, the class of all finitegraphsis not an elementary class (the same holds for many other algebraic structures). There are also more subtle limitations of first-order logic that are implied by the compactness theorem. For example, in computer science, many situations can be modeled as adirected graphof states (nodes) and connections (directed edges). Validating such a system may require showing that no "bad" state can be reached from any "good" state. Thus, one seeks to determine if the good and bad states are in differentconnected componentsof the graph. However, the compactness theorem can be used to show that connected graphs are not an elementary class in first-order logic, and there is no formula φ(x,y) of first-order logic, in thelogic of graphs, that expresses the idea that there is a path fromxtoy. Connectedness can be expressed insecond-order logic, however, but not with only existential set quantifiers, asΣ11{\displaystyle \Sigma _{1}^{1}}also enjoys compactness. Per Lindströmshowed that the metalogical properties just discussed actually characterize first-order logic in the sense that no stronger logic can also have those properties (Ebbinghaus and Flum 1994, Chapter XIII). Lindström defined a class of abstract logical systems, and a rigorous definition of the relative strength of a member of this class. He established two theorems for systems of this type: Although first-order logic is sufficient for formalizing much of mathematics and is commonly used in computer science and other fields, it has certain limitations. 
These include limitations on its expressiveness and limitations of the fragments of natural languages that it can describe. For instance, first-order logic is undecidable, meaning a sound, complete and terminating decision algorithm for provability is impossible. This has led to the study of interesting decidable fragments, such as C2: first-order logic with two variables and thecounting quantifiers∃≥n{\displaystyle \exists ^{\geq n}}and∃≤n{\displaystyle \exists ^{\leq n}}.[30] TheLöwenheim–Skolem theoremshows that if a first-order theory has any infinite model, then it has infinite models of every cardinality. In particular, no first-order theory with an infinite model can becategorical. Thus, there is no first-order theory whose only model has the set of natural numbers as its domain, or whose only model has the set of real numbers as its domain. Many extensions of first-order logic, including infinitary logics and higher-order logics, are more expressive in the sense that they do permit categorical axiomatizations of the natural numbers or real numbers[clarification needed]. This expressiveness comes at a metalogical cost, however: byLindström's theorem, the compactness theorem and the downward Löwenheim–Skolem theorem cannot hold in any logic stronger than first-order. First-order logic is able to formalize many simple quantifier constructions in natural language, such as "every person who lives in Perth lives in Australia". Hence, first-order logic is used as a basis forknowledge representation languages, such asFO(.). Still, there are complicated features of natural language that cannot be expressed in first-order logic. "Any logical system which is appropriate as an instrument for the analysis of natural language needs a much richer structure than first-order predicate logic".[31] There are many variations of first-order logic. Some of these are inessential in the sense that they merely change notation without affecting the semantics. Others change the expressive power more significantly, by extending the semantics through additional quantifiers or other new logical symbols. For example, infinitary logics permit formulas of infinite size, and modal logics add symbols for possibility and necessity. First-order logic can be studied in languages with fewer logical symbols than were described above: Restrictions such as these are useful as a technique to reduce the number of inference rules or axiom schemas in deductive systems, which leads to shorter proofs of metalogical results. The cost of the restrictions is that it becomes more difficult to express natural-language statements in the formal system at hand, because the logical connectives used in the natural language statements must be replaced by their (longer) definitions in terms of the restricted collection of logical connectives. Similarly, derivations in the limited systems may be longer than derivations in systems that include additional connectives. There is thus a trade-off between the ease of working within the formal system and the ease of proving results about the formal system. It is also possible to restrict the arities of function symbols and predicate symbols, in sufficiently expressive theories. One can in principle dispense entirely with functions of arity greater than 2 and predicates of arity greater than 1 in theories that include apairing function. This is a function of arity 2 that takes pairs of elements of the domain and returns anordered paircontaining them. 
It is also sufficient to have two predicate symbols of arity 2 that define projection functions from an ordered pair to its components. In either case it is necessary that the natural axioms for a pairing function and its projections are satisfied. Ordinary first-order interpretations have a single domain of discourse over which all quantifiers range.Many-sorted first-order logicallows variables to have differentsorts, which have different domains. This is also calledtyped first-order logic, and the sorts calledtypes(as indata type), but it is not the same as first-ordertype theory. Many-sorted first-order logic is often used in the study ofsecond-order arithmetic.[33] When there are only finitely many sorts in a theory, many-sorted first-order logic can be reduced to single-sorted first-order logic.[34]: 296–299One introduces into the single-sorted theory a unary predicate symbol for each sort in the many-sorted theory and adds an axiom saying that these unary predicates partition the domain of discourse. For example, if there are two sorts, one adds predicate symbolsP1(x){\displaystyle P_{1}(x)}andP2(x){\displaystyle P_{2}(x)}and the axiom: Then the elements satisfyingP1{\displaystyle P_{1}}are thought of as elements of the first sort, and elements satisfyingP2{\displaystyle P_{2}}as elements of the second sort. One can quantify over each sort by using the corresponding predicate symbol to limit the range of quantification. For example, to say there is an element of the first sort satisfying formulaφ(x){\displaystyle \varphi (x)}, one writes: Additional quantifiers can be added to first-order logic. Infinitary logic allows infinitely long sentences. For example, one may allow a conjunction or disjunction of infinitely many formulas, or quantification over infinitely many variables. Infinitely long sentences arise in areas of mathematics includingtopologyandmodel theory. Infinitary logic generalizes first-order logic to allow formulas of infinite length. The most common way in which formulas can become infinite is through infinite conjunctions and disjunctions. However, it is also possible to admit generalized signatures in which function and relation symbols are allowed to have infinite arities, or in which quantifiers can bind infinitely many variables. Because an infinite formula cannot be represented by a finite string, it is necessary to choose some other representation of formulas; the usual representation in this context is a tree. Thus, formulas are, essentially, identified with their parse trees, rather than with the strings being parsed. The most commonly studied infinitary logics are denotedLαβ, where α and β are each eithercardinal numbersor the symbol ∞. In this notation, ordinary first-order logic isLωω. In the logicL∞ω, arbitrary conjunctions or disjunctions are allowed when building formulas, and there is an unlimited supply of variables. More generally, the logic that permits conjunctions or disjunctions with less than κ constituents is known asLκω. For example,Lω1ωpermitscountableconjunctions and disjunctions. The set of free variables in a formula ofLκωcan have any cardinality strictly less than κ, yet only finitely many of them can be in the scope of any quantifier when a formula appears as a subformula of another.[35]In other infinitary logics, a subformula may be in the scope of infinitely many quantifiers. For example, inLκ∞, a single universal or existential quantifier may bind arbitrarily many variables simultaneously. 
Similarly, the logicLκλpermits simultaneous quantification over fewer than λ variables, as well as conjunctions and disjunctions of size less than κ. Fixpoint logic extends first-order logic by adding the closure under the least fixed points of positive operators.[36] The characteristic feature of first-order logic is that individuals can be quantified, but not predicates. Thus is a legal first-order formula, but is not, in most formalizations of first-order logic.Second-order logicextends first-order logic by adding the latter type of quantification. Otherhigher-order logicsallow quantification over even highertypesthan second-order logic permits. These higher types include relations between relations, functions from relations to relations between relations, and other higher-type objects. Thus the "first" in first-order logic describes the type of objects that can be quantified. Unlike first-order logic, for which only one semantics is studied, there are several possible semantics for second-order logic. The most commonly employed semantics for second-order and higher-order logic is known asfull semantics. The combination of additional quantifiers and the full semantics for these quantifiers makes higher-order logic stronger than first-order logic. In particular, the (semantic) logical consequence relation for second-order and higher-order logic is not semidecidable; there is no effective deduction system for second-order logic that is sound and complete under full semantics. Second-order logic with full semantics is more expressive than first-order logic. For example, it is possible to create axiom systems in second-order logic that uniquely characterize the natural numbers and the real line. The cost of this expressiveness is that second-order and higher-order logics have fewer attractive metalogical properties than first-order logic. For example, the Löwenheim–Skolem theorem and compactness theorem of first-order logic become false when generalized to higher-order logics with full semantics. Automated theorem provingrefers to the development of computer programs that search and find derivations (formal proofs) of mathematical theorems.[37]Finding derivations is a difficult task because thesearch spacecan be very large; an exhaustive search of every possible derivation is theoretically possible butcomputationally infeasiblefor many systems of interest in mathematics. Thus complicatedheuristic functionsare developed to attempt to find a derivation in less time than a blind search.[38] The related area of automatedproof verificationuses computer programs to check that human-created proofs are correct. Unlike complicated automated theorem provers, verification systems may be small enough that their correctness can be checked both by hand and through automated software verification. This validation of the proof verifier is needed to give confidence that any derivation labeled as "correct" is actually correct. Some proof verifiers, such asMetamath, insist on having a complete derivation as input. Others, such asMizarandIsabelle, take a well-formatted proof sketch (which may still be very long and detailed) and fill in the missing pieces by doing simple proof searches or applying known decision procedures: the resulting derivation is then verified by a small core "kernel". Many such systems are primarily intended for interactive use by human mathematicians: these are known asproof assistants. They may also use formal logics that are stronger than first-order logic, such as type theory. 
Because a full derivation of any nontrivial result in a first-order deductive system will be extremely long for a human to write,[39]results are often formalized as a series of lemmas, for which derivations can be constructed separately. Automated theorem provers are also used to implementformal verificationin computer science. In this setting, theorem provers are used to verify the correctness of programs and of hardware such asprocessorswith respect to aformal specification. Because such analysis is time-consuming and thus expensive, it is usually reserved for projects in which a malfunction would have grave human or financial consequences. For the problem ofmodel checking, efficientalgorithmsare known todecidewhether an input finite structure satisfies a first-order formula, in addition tocomputational complexitybounds: seeModel checking § First-order logic.
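As a small, concrete taste of automated theorem proving, the sketch below uses the Python bindings of the Z3 solver (a real, widely used prover, though the sort and symbol names here are invented for the example) to check a first-order entailment by asserting the premises together with the negated conclusion and testing for unsatisfiability.

```python
# pip install z3-solver   (Python bindings for the Z3 SMT solver)
from z3 import (DeclareSort, Const, Function, BoolSort,
                ForAll, Implies, Not, Solver, unsat)

# Illustrative signature: one sort, two unary predicates, one constant.
S = DeclareSort("S")
Phil = Function("Phil", S, BoolSort())
Schol = Function("Schol", S, BoolSort())
x = Const("x", S)
socrates = Const("socrates", S)

# To show  forall x (Phil(x) -> Schol(x)),  Phil(socrates)  |=  Schol(socrates),
# assert the premises and the negated conclusion, then check for unsatisfiability.
s = Solver()
s.add(ForAll([x], Implies(Phil(x), Schol(x))))
s.add(Phil(socrates))
s.add(Not(Schol(socrates)))
print(s.check() == unsat)  # True: the entailment holds
```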
https://en.wikipedia.org/wiki/Predicate_logic
Inlogicandphilosophy, aformal fallacy[a]is a pattern ofreasoningrenderedinvalidby a flaw in its logical structure.Propositional logic,[2]for example, is concerned with the meanings of sentences and the relationships between them. It focuses on the role of logical operators, called propositional connectives, in determining whether a sentence is true. An error in the sequence will result in adeductiveargumentthat is invalid. The argument itself could have truepremises, but still have a falseconclusion.[3]Thus, a formal fallacy is afallacyin which deduction goes wrong, and is no longer alogicalprocess. This may not affect the truth of the conclusion, since validity and truth are separate in formal logic. While a logical argument is anon sequiturif, and only if, it is invalid, the term "non sequitur" typically refers to those types of invalid arguments which do not constitute formal fallacies covered by particular terms (e.g.,affirming the consequent). In other words, in practice, "non sequitur" refers to an unnamed formal fallacy. A special case is amathematical fallacy, an intentionally invalidmathematical proof, often with the error subtle and somehow concealed. Mathematical fallacies are typically crafted and exhibited for educational purposes, usually taking the form of spurious proofs of obviouscontradictions. A formal fallacy is contrasted with aninformal fallacywhich may have a validlogical formand yet beunsoundbecause one or morepremisesare false. A formal fallacy, however, may have a true premise, but a false conclusion. The term 'logical fallacy' is sometimes used in everyday conversation, and refers to a formal fallacy. "Some of your key evidence is missing, incomplete, or even faked! That proves I'm right!"[4] "The vet can't find any reasonable explanation for why my dog died. See! See! That proves that you poisoned him! There’s no other logical explanation!"[5] In the strictest sense, a logical fallacy is the incorrect application of a valid logical principle or an application of a nonexistent principle: This is fallacious. Indeed, there is no logical principle that states: An easy way to show the above inference as invalid is by usingVenn diagrams. In logical parlance, the inference is invalid, since under at least one interpretation of the predicates it is not validity preserving. People often have difficulty applying the rules of logic. For example, a person may say the followingsyllogismis valid, when in fact it is not: "That creature" may well be a bird, but theconclusiondoes not follow from the premises. Certain other animals also have beaks, for example: anoctopusand asquidboth have beaks, someturtlesandcetaceanshave beaks. Errors of this type occur because people reverse a premise.[6]In this case, "All birds have beaks" is converted to "All beaked animals are birds." The reversed premise is plausible because few people are aware of any instances ofbeaked creaturesbesides birds—but this premise is not the one that was given. In this way, the deductive fallacy is formed by points that may individually appear logical, but when placed together are shown to be incorrect. In everyday speech, a non sequitur is a statement in which the final part is totally unrelated to the first part, for example: Life is life and fun is fun, but it's all so quiet when the goldfish die.
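Schematically, the valid rule of modus ponens licenses inferring {\displaystyle q} from {\displaystyle p\rightarrow q} and {\displaystyle p}, whereas the formal fallacy of affirming the consequent infers {\displaystyle p} from {\displaystyle p\rightarrow q} and {\displaystyle q}; the latter is invalid because {\displaystyle q} may hold for reasons other than {\displaystyle p}. The bird-and-beak syllogism above is essentially an instance of this invalid pattern.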
https://en.wikipedia.org/wiki/Logical_fallacy#Fallacies_of_composition_and_division
Inmathematics, aninjective function(also known asinjection, orone-to-one function[1]) is afunctionfthat mapsdistinctelements of its domain to distinct elements of its codomain; that is,x1≠x2impliesf(x1) ≠f(x2)(equivalently bycontraposition,f(x1) =f(x2)impliesx1=x2). In other words, every element of the function'scodomainis theimageofat mostone element of itsdomain.[2](There may be some elements in the codomain that are not mapped from elements in the domain.) The termone-to-one functionmust not be confused withone-to-one correspondencethat refers tobijective functions, which are functions such that each element in the codomain is an image of exactly one element in the domain. Ahomomorphismbetweenalgebraic structuresis a function that is compatible with the operations of the structures. For all common algebraic structures, and, in particular forvector spaces, aninjective homomorphismis also called amonomorphism. However, in the more general context ofcategory theory, the definition of a monomorphism differs from that of an injective homomorphism.[3]This is thus a theorem that they are equivalent for algebraic structures; seeHomomorphism § Monomorphismfor more details. A functionf{\displaystyle f}that is not injective is sometimes called many-to-one.[2] Letf{\displaystyle f}be a function whose domain is a setX.{\displaystyle X.}The functionf{\displaystyle f}is said to beinjectiveprovided that for alla{\displaystyle a}andb{\displaystyle b}inX,{\displaystyle X,}iff(a)=f(b),{\displaystyle f(a)=f(b),}thena=b{\displaystyle a=b}; that is,f(a)=f(b){\displaystyle f(a)=f(b)}impliesa=b.{\displaystyle a=b.}Equivalently, ifa≠b,{\displaystyle a\neq b,}thenf(a)≠f(b){\displaystyle f(a)\neq f(b)}in thecontrapositivestatement. Symbolically,∀a,b∈X,f(a)=f(b)⇒a=b,{\displaystyle \forall a,b\in X,\;\;f(a)=f(b)\Rightarrow a=b,}which is logically equivalent to thecontrapositive,[4]∀a,b∈X,a≠b⇒f(a)≠f(b).{\displaystyle \forall a,b\in X,\;\;a\neq b\Rightarrow f(a)\neq f(b).}An injective function (or, more generally, a monomorphism) is often denoted by using the specialized arrows ↣ or ↪ (for example,f:A↣B{\displaystyle f:A\rightarrowtail B}orf:A↪B{\displaystyle f:A\hookrightarrow B}), although some authors specifically reserve ↪ for aninclusion map.[5] For visual examples, readers are directed to thegallery section. More generally, whenX{\displaystyle X}andY{\displaystyle Y}are both thereal lineR,{\displaystyle \mathbb {R} ,}then an injective functionf:R→R{\displaystyle f:\mathbb {R} \to \mathbb {R} }is one whose graph is never intersected by any horizontal line more than once. This principle is referred to as thehorizontal line test.[2] Functions withleft inversesare always injections. That is, givenf:X→Y,{\displaystyle f:X\to Y,}if there is a functiong:Y→X{\displaystyle g:Y\to X}such that for everyx∈X{\displaystyle x\in X},g(f(x))=x{\displaystyle g(f(x))=x}, thenf{\displaystyle f}is injective. In this case,g{\displaystyle g}is called aretractionoff.{\displaystyle f.}Conversely,f{\displaystyle f}is called asectionofg.{\displaystyle g.} Conversely, every injectionf{\displaystyle f}with a non-empty domain has a left inverseg{\displaystyle g}. 
It can be defined by choosing an elementa{\displaystyle a}in the domain off{\displaystyle f}and settingg(y){\displaystyle g(y)}to the unique element of the pre-imagef−1[y]{\displaystyle f^{-1}[y]}(if it is non-empty) or toa{\displaystyle a}(otherwise).[6] The left inverseg{\displaystyle g}is not necessarily aninverseoff,{\displaystyle f,}because the composition in the other order,f∘g,{\displaystyle f\circ g,}may differ from the identity onY.{\displaystyle Y.}In other words, an injective function can be "reversed" by a left inverse, but is not necessarilyinvertible, which requires that the function is bijective. In fact, to turn an injective functionf:X→Y{\displaystyle f:X\to Y}into a bijective (hence invertible) function, it suffices to replace its codomainY{\displaystyle Y}by its actual imageJ=f(X).{\displaystyle J=f(X).}That is, letg:X→J{\displaystyle g:X\to J}such thatg(x)=f(x){\displaystyle g(x)=f(x)}for allx∈X{\displaystyle x\in X}; theng{\displaystyle g}is bijective. Indeed,f{\displaystyle f}can be factored asInJ,Y∘g,{\displaystyle \operatorname {In} _{J,Y}\circ g,}whereInJ,Y{\displaystyle \operatorname {In} _{J,Y}}is theinclusion functionfromJ{\displaystyle J}intoY.{\displaystyle Y.} More generally, injectivepartial functionsare calledpartial bijections. A proof that a functionf{\displaystyle f}is injective depends on how the function is presented and what properties the function holds. For functions that are given by some formula there is a basic idea. We use the definition of injectivity, namely that iff(x)=f(y),{\displaystyle f(x)=f(y),}thenx=y.{\displaystyle x=y.}[7] Here is an example:f(x)=2x+3{\displaystyle f(x)=2x+3} Proof: Letf:X→Y.{\displaystyle f:X\to Y.}Supposef(x)=f(y).{\displaystyle f(x)=f(y).}So2x+3=2y+3{\displaystyle 2x+3=2y+3}implies2x=2y,{\displaystyle 2x=2y,}which impliesx=y.{\displaystyle x=y.}Therefore, it follows from the definition thatf{\displaystyle f}is injective. There are multiple other methods of proving that a function is injective. For example, in calculus iff{\displaystyle f}is a differentiable function defined on some interval, then it is sufficient to show that the derivative is always positive or always negative on that interval. In linear algebra, iff{\displaystyle f}is a linear transformation it is sufficient to show that the kernel off{\displaystyle f}contains only the zero vector. Iff{\displaystyle f}is a function with finite domain it is sufficient to look through the list of images of each domain element and check that no image occurs twice on the list. A graphical approach for a real-valued functionf{\displaystyle f}of a real variablex{\displaystyle x}is thehorizontal line test. If every horizontal line intersects the curve off(x){\displaystyle f(x)}in at most one point, thenf{\displaystyle f}is injective or one-to-one.
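For functions on small finite sets, both the definition of injectivity and the left-inverse construction described above can be checked directly. The Python sketch below is illustrative only; representing a function as a dictionary, and the fallback element a for points outside the image, mirror the construction in the text.

```python
def is_injective(f):
    """f is a dict mapping domain elements to codomain elements.
    Injective: no two domain elements share an image."""
    return len(set(f.values())) == len(f)

def left_inverse(f, a):
    """For an injective f with non-empty domain, build g with g(f(x)) = x
    for every x; codomain points outside the image are sent to a."""
    assert is_injective(f) and f, "f must be injective with non-empty domain"
    inverse = {v: k for k, v in f.items()}
    return lambda y: inverse.get(y, a)

# Example: f = {1: 'a', 2: 'b', 3: 'c'} is injective, and g undoes it.
f = {1: "a", 2: "b", 3: "c"}
g = left_inverse(f, a=1)
print(is_injective(f), [g(f[x]) == x for x in f])  # True [True, True, True]
```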
https://en.wikipedia.org/wiki/Injective_function
Inmathematics, asurjective function(also known assurjection, oronto function/ˈɒn.tuː/) is afunctionfsuch that, for every elementyof the function'scodomain, there existsat leastone elementxin the function'sdomainsuch thatf(x) =y. In other words, for a functionf:X→Y, the codomainYis theimageof the function's domainX.[1][2]It is not required thatxbeunique; the functionfmay map one or more elements ofXto the same element ofY. The termsurjectiveand the related termsinjectiveandbijectivewere introduced byNicolas Bourbaki,[3][4]a group of mainlyFrench20th-centurymathematicianswho, under this pseudonym, wrote a series of books presenting an exposition of modern advanced mathematics, beginning in 1935. The French wordsurmeansoverorabove, and relates to the fact that theimageof the domain of a surjective function completely covers the function's codomain. Any function induces a surjection byrestrictingits codomain to the image of its domain. Every surjective function has aright inverseassuming theaxiom of choice, and every function with a right inverse is necessarily a surjection. Thecompositionof surjective functions is always surjective. Any function can be decomposed into a surjection and an injection. Asurjective functionis afunctionwhoseimageis equal to itscodomain. Equivalently, a functionf{\displaystyle f}withdomainX{\displaystyle X}and codomainY{\displaystyle Y}is surjective if for everyy{\displaystyle y}inY{\displaystyle Y}there exists at least onex{\displaystyle x}inX{\displaystyle X}withf(x)=y{\displaystyle f(x)=y}.[1]Surjections are sometimes denoted by a two-headed rightwards arrow (U+21A0↠RIGHTWARDS TWO HEADED ARROW),[5]as inf:X↠Y{\displaystyle f\colon X\twoheadrightarrow Y}. Symbolically, A function isbijectiveif and only if it is both surjective andinjective. If (as is often done) a function is identified with itsgraph, then surjectivity is not a property of the function itself, but rather a property of themapping.[7]This is, the function together with its codomain. Unlike injectivity, surjectivity cannot be read off of the graph of the function alone. The functiong:Y→Xis said to be aright inverseof the functionf:X→Yiff(g(y)) =yfor everyyinY(gcan be undone byf). In other words,gis a right inverse offif thecompositionfogofgandfin that order is theidentity functionon the domainYofg. The functiongneed not be a completeinverseoffbecause the composition in the other order,gof, may not be the identity function on the domainXoff. In other words,fcan undo or "reverse"g, but cannot necessarily be reversed by it. Every function with a right inverse is necessarily a surjection. The proposition that every surjective function has a right inverse is equivalent to theaxiom of choice. Iff:X→Yis surjective andBis asubsetofY, thenf(f−1(B)) =B. Thus,Bcan be recovered from itspreimagef−1(B). For example, in the first illustration in thegallery, there is some functiongsuch thatg(C) = 4. There is also some functionfsuch thatf(4) =C. It doesn't matter thatgis not unique (it would also work ifg(C) equals 3); it only matters thatf"reverses"g. A functionf:X→Yis surjective if and only if it isright-cancellative:[8]given any functionsg,h:Y→Z, whenevergof=hof, theng=h. This property is formulated in terms of functions and theircompositionand can be generalized to the more general notion of themorphismsof acategoryand their composition. Right-cancellative morphisms are calledepimorphisms. Specifically, surjective functions are precisely the epimorphisms in thecategory of sets. 
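For concreteness (and not part of the original article), here is a minimal Python sketch of these ideas on finite sets: a surjectivity test, and a right inverse obtained by choosing one preimage for each codomain element, the finite analogue of the appeal to the axiom of choice above. The names is_surjective and right_inverse are hypothetical.

```python
# Illustrative sketch: testing surjectivity onto a finite codomain, and building
# a right inverse by choosing one (arbitrary) preimage for each codomain element.

def is_surjective(f, domain, codomain):
    return set(map(f, domain)) >= set(codomain)

def right_inverse(f, domain):
    choice = {}
    for x in domain:                  # record one preimage per image
        choice.setdefault(f(x), x)
    return lambda y: choice[y]        # defined on the whole codomain when f is surjective

domain = range(-3, 4)
codomain = [0, 1, 4, 9]
f = lambda x: x * x
print(is_surjective(f, domain, codomain))      # True
g = right_inverse(f, domain)
print(all(f(g(y)) == y for y in codomain))     # True: f composed with g fixes the codomain
print(g(4))                                    # -2, one arbitrary choice among {-2, 2}
```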
The prefixepiis derived from the Greek prepositionἐπίmeaningover,above,on. Any morphism with a right inverse is an epimorphism, but the converse is not true in general. A right inversegof a morphismfis called asectionoff. A morphism with a right inverse is called asplit epimorphism. Any function with domainXand codomainYcan be seen as aleft-totalandright-uniquebinary relation betweenXandYby identifying it with itsfunction graph. A surjective function with domainXand codomainYis then a binary relation betweenXandYthat is right-unique and both left-total andright-total. Thecardinalityof the domain of a surjective function is greater than or equal to the cardinality of its codomain: Iff:X→Yis a surjective function, thenXhas at least as many elements asY, in the sense ofcardinal numbers. (The proof appeals to theaxiom of choiceto show that a functiong:Y→Xsatisfyingf(g(y)) =yfor allyinYexists.gis easily seen to be injective, thus theformal definitionof |Y| ≤ |X| is satisfied.) Specifically, if bothXandYarefinitewith the same number of elements, thenf:X→Yis surjective if and only iffisinjective. Given two setsXandY, the notationX≤*Yis used to say that eitherXis empty or that there is a surjection fromYontoX. Using the axiom of choice one can show thatX≤*YandY≤*Xtogether imply that|Y| = |X|,a variant of theSchröder–Bernstein theorem. Thecompositionof surjective functions is always surjective: Iffandgare both surjective, and the codomain ofgis equal to the domain off, thenfogis surjective. Conversely, iffogis surjective, thenfis surjective (butg, the function applied first, need not be). These properties generalize from surjections in thecategory of setsto anyepimorphismsin anycategory. Any function can be decomposed into a surjection and aninjection: For any functionh:X→Zthere exist a surjectionf:X→Yand an injectiong:Y→Zsuch thath=gof. To see this, defineYto be the set ofpreimagesh−1(z)wherezis inh(X). These preimages aredisjointandpartitionX. Thenfcarries eachxto the element ofYwhich contains it, andgcarries each element ofYto the point inZto whichhsends its points. Thenfis surjective since it is a projection map, andgis injective by definition. Any function induces a surjection by restricting its codomain to its range. Any surjective function induces a bijection defined on aquotientof its domain by collapsing all arguments mapping to a given fixed image. More precisely, every surjectionf:A→Bcan be factored as a projection followed by a bijection as follows. LetA/~ be theequivalence classesofAunder the followingequivalence relation:x~yif and only iff(x) =f(y). Equivalently,A/~ is the set of all preimages underf. LetP(~) :A→A/~ be theprojection mapwhich sends eachxinAto its equivalence class [x]~, and letfP:A/~ →Bbe the well-defined function given byfP([x]~) =f(x). Thenf=fPoP(~). Given fixed finite setsAandB, one can form the set of surjectionsA↠B. Thecardinalityof this set is one of the twelve aspects of Rota'sTwelvefold way, and is given by|B|!{|A||B|}{\textstyle |B|!{\begin{Bmatrix}|A|\\|B|\end{Bmatrix}}}, where{|A||B|}{\textstyle {\begin{Bmatrix}|A|\\|B|\end{Bmatrix}}}denotes aStirling number of the second kind.
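The counting formula above can be checked by brute force for small sets. The sketch below is illustrative only (the helpers stirling2 and count_surjections are hypothetical); it enumerates all functions from a 5-element set to a 3-element set and compares the number of surjections with |B|!·S(|A|,|B|).

```python
# Illustrative check of the surjection count |B|! * S(|A|, |B|), with S a
# Stirling number of the second kind, against a brute-force enumeration.

from itertools import product
from math import factorial

def stirling2(n, k):
    # Recurrence S(n, k) = k*S(n-1, k) + S(n-1, k-1)
    if n == k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def count_surjections(a, b):
    B = range(b)
    return sum(1 for values in product(B, repeat=a) if set(values) == set(B))

a, b = 5, 3
print(count_surjections(a, b))            # 150
print(factorial(b) * stirling2(a, b))     # 150, matching the formula above
```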
https://en.wikipedia.org/wiki/Surjective_function
Inmathematics, abijection,bijective function, orone-to-one correspondenceis afunctionbetween twosetssuch that each element of the second set (thecodomain) is the image of exactly one element of the first set (thedomain). Equivalently, a bijection is arelationbetween two sets such that each element of either set is paired with exactly one element of the other set. A function is bijectiveif and only ifit isinvertible; that is, a functionf:X→Y{\displaystyle f:X\to Y}is bijective if and only if there is a functiong:Y→X,{\displaystyle g:Y\to X,}theinverseoff, such that each of the two ways forcomposingthe two functions produces anidentity function:g(f(x))=x{\displaystyle g(f(x))=x}for eachx{\displaystyle x}inX{\displaystyle X}andf(g(y))=y{\displaystyle f(g(y))=y}for eachy{\displaystyle y}inY.{\displaystyle Y.} For example, themultiplication by twodefines a bijection from theintegersto theeven numbers, which has thedivision by twoas its inverse function. A function is bijective if and only if it is bothinjective(orone-to-one)—meaning that each element in the codomain is mapped from at most one element of the domain—andsurjective(oronto)—meaning that each element of the codomain is mapped from at least one element of the domain. The termone-to-one correspondencemust not be confused withone-to-one function, which means injective but not necessarily surjective. The elementary operation ofcountingestablishes a bijection from somefinite setto the firstnatural numbers(1, 2, 3, ...), up to the number of elements in the counted set. It results that two finite sets have the same number of elements if and only if there exists a bijection between them. More generally, two sets are said to have the samecardinal numberif there exists a bijection between them. A bijective function from a set to itself is also called apermutation,[1]and the set of all permutations of a set forms itssymmetric group. Some bijections with further properties have received specific names, which includeautomorphisms,isomorphisms,homeomorphisms,diffeomorphisms,permutation groups, and mostgeometric transformations.Galois correspondencesare bijections between sets ofmathematical objectsof apparently very different nature. For abinary relationpairing elements of setXwith elements of setYto be a bijection, four properties must hold: Satisfying properties (1) and (2) means that a pairing is afunctionwithdomainX. It is more common to see properties (1) and (2) written as a single statement: Every element ofXis paired with exactly one element ofY. Functions which satisfy property (3) are said to be "ontoY" and are calledsurjections(orsurjective functions). Functions which satisfy property (4) are said to be "one-to-one functions" and are calledinjections(orinjective functions).[2]With this terminology, a bijection is a function which is both a surjection and an injection, or using other words, a bijection is a function which is both "one-to-one" and "onto".[3] Consider thebatting line-upof a baseball orcricketteam (or any list of all the players of any sports team where every player holds a specific spot in a line-up). The setXwill be the players on the team (of size nine in the case of baseball) and the setYwill be the positions in the batting order (1st, 2nd, 3rd, etc.) The "pairing" is given by which player is in what position in this order. Property (1) is satisfied since each player is somewhere in the list. Property (2) is satisfied since no player bats in two (or more) positions in the order. 
Property (3) says that for each position in the order, there is some player batting in that position and property (4) states that two or more players are never batting in the same position in the list. In a classroom there are a certain number of seats. A group of students enter the room and the instructor asks them to be seated. After a quick look around the room, the instructor declares that there is a bijection between the set of students and the set of seats, where each student is paired with the seat they are sitting in. What the instructor observed in order to reach this conclusion was that: The instructor was able to conclude that there were just as many seats as there were students, without having to count either set. A bijectionfwith domainX(indicated byf:X → Yinfunctional notation) also defines aconverse relationstarting inYand going toX(by turning the arrows around). The process of "turning the arrows around" for an arbitrary function does not,in general, yield a function, but properties (3) and (4) of a bijection say that this inverse relation is a function with domainY. Moreover, properties (1) and (2) then say that this inversefunctionis a surjection and an injection, that is, theinverse functionexists and is also a bijection. Functions that have inverse functions are said to beinvertible. A function is invertible if and only if it is a bijection. Stated in concise mathematical notation, a functionf:X → Yis bijective if and only if it satisfies the condition Continuing with the baseball batting line-up example, the function that is being defined takes as input the name of one of the players and outputs the position of that player in the batting order. Since this function is a bijection, it has an inverse function which takes as input a position in the batting order and outputs the player who will be batting in that position. Thecompositiong∘f{\displaystyle g\,\circ \,f}of two bijectionsf:X → Yandg:Y → Zis a bijection, whose inverse is given byg∘f{\displaystyle g\,\circ \,f}is(g∘f)−1=(f−1)∘(g−1){\displaystyle (g\,\circ \,f)^{-1}\;=\;(f^{-1})\,\circ \,(g^{-1})}. Conversely, if the compositiong∘f{\displaystyle g\,\circ \,f}of two functions is bijective, it only follows thatfisinjectiveandgissurjective. IfXandYarefinite sets, then there exists a bijection between the two setsXandYif and only ifXandYhave the same number of elements. Indeed, inaxiomatic set theory, this is taken as the definition of "same number of elements" (equinumerosity), and generalising this definition toinfinite setsleads to the concept ofcardinal number, a way to distinguish the various sizes of infinite sets. Bijections are precisely theisomorphismsin thecategorySetofsetsand set functions. However, the bijections are not always the isomorphisms for more complex categories. For example, in the categoryGrpofgroups, the morphisms must behomomorphismssince they must preserve the group structure, so the isomorphisms aregroup isomorphismswhich are bijective homomorphisms. The notion of one-to-one correspondence generalizes topartial functions, where they are calledpartial bijections, although partial bijections are only required to be injective. The reason for this relaxation is that a (proper) partial function is already undefined for a portion of its domain; thus there is no compelling reason to constrain its inverse to be atotal function, i.e. defined everywhere on its domain. 
The set of all partial bijections on a given base set is called the symmetric inverse semigroup.[4] Another way of defining the same notion is to say that a partial bijection from A to B is any relation R (which turns out to be a partial function) with the property that R is the graph of a bijection f: A′ → B′, where A′ is a subset of A and B′ is a subset of B.[5] When the partial bijection is on the same set, it is sometimes called a one-to-one partial transformation.[6] An example is the Möbius transformation simply defined on the complex plane, rather than its completion to the extended complex plane.[7] This topic is a basic concept in set theory and can be found in any text that includes an introduction to set theory; almost all texts that deal with an introduction to writing proofs also include a section on set theory.
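As an illustration of the inverse and composition properties discussed above (not part of the original article), the following sketch represents bijections between finite sets as Python dictionaries and checks that the inverse of g∘f is f⁻¹∘g⁻¹; the helpers compose and inverse are hypothetical.

```python
# Illustrative sketch: bijections on finite sets as dictionaries, showing that
# the inverse of a composition g ∘ f is f⁻¹ ∘ g⁻¹.

f = {1: 'a', 2: 'b', 3: 'c'}          # bijection X -> Y
g = {'a': 'z', 'b': 'y', 'c': 'x'}    # bijection Y -> Z

def compose(outer, inner):
    return {k: outer[inner[k]] for k in inner}

def inverse(bij):
    inv = {v: k for k, v in bij.items()}
    assert len(inv) == len(bij)       # would fail if bij were not injective
    return inv

g_of_f = compose(g, f)                                    # {1: 'z', 2: 'y', 3: 'x'}
print(inverse(g_of_f) == compose(inverse(f), inverse(g))) # True
```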
https://en.wikipedia.org/wiki/Bijection
Inmathematics, afunctionfrom asetXto a setYassigns to each element ofXexactly one element ofY.[1]The setXis called thedomainof the function[2]and the setYis called thecodomainof the function.[3] Functions were originally the idealization of how a varying quantity depends on another quantity. For example, the position of aplanetis afunctionof time.Historically, the concept was elaborated with theinfinitesimal calculusat the end of the 17th century, and, until the 19th century, the functions that were considered weredifferentiable(that is, they had a high degree of regularity). The concept of a function was formalized at the end of the 19th century in terms ofset theory, and this greatly increased the possible applications of the concept. A function is often denoted by a letter such asf,gorh. The value of a functionfat an elementxof its domain (that is, the element of the codomain that is associated withx) is denoted byf(x); for example, the value offatx= 4is denoted byf(4). Commonly, a specific function is defined by means of anexpressiondepending onx, such asf(x)=x2+1;{\displaystyle f(x)=x^{2}+1;}in this case, some computation, calledfunction evaluation, may be needed for deducing the value of the function at a particular value; for example, iff(x)=x2+1,{\displaystyle f(x)=x^{2}+1,}thenf(4)=42+1=17.{\displaystyle f(4)=4^{2}+1=17.} Given its domain and its codomain, a function is uniquely represented by the set of allpairs(x,f(x)), called thegraph of the function, a popular means of illustrating the function.[note 1][4]When the domain and the codomain are sets of real numbers, each such pair may be thought of as theCartesian coordinatesof a point in the plane. Functions are widely used inscience,engineering, and in most fields of mathematics. It has been said that functions are "the central objects of investigation" in most fields of mathematics.[5] The concept of a function has evolved significantly over centuries, from its informal origins in ancient mathematics to its formalization in the 19th century. SeeHistory of the function conceptfor details. Afunctionffrom asetXto a setYis an assignment of one element ofYto each element ofX. The setXis called thedomainof the function and the setYis called thecodomainof the function. If the elementyinYis assigned toxinXby the functionf, one says thatfmapsxtoy, and this is commonly writteny=f(x).{\displaystyle y=f(x).}In this notation,xis theargumentorvariableof the function. A specific elementxofXis avalue of the variable, and the corresponding element ofYis thevalue of the functionatx, or theimageofxunder the function. Theimage of a function, sometimes called itsrange, is the set of the images of all elements in the domain.[6][7][8][9] A functionf, its domainX, and its codomainYare often specified by the notationf:X→Y.{\displaystyle f:X\to Y.}One may writex↦y{\displaystyle x\mapsto y}instead ofy=f(x){\displaystyle y=f(x)}, where the symbol↦{\displaystyle \mapsto }(read 'maps to') is used to specify where a particular elementxin the domain is mapped to byf. This allows the definition of a function without naming. For example, thesquare functionis the functionx↦x2.{\displaystyle x\mapsto x^{2}.} The domain and codomain are not always explicitly given when a function is defined. In particular, it is common that one might only know, without some (possibly difficult) computation, that the domain of a specific function is contained in a larger set. 
For example, iff:R→R{\displaystyle f:\mathbb {R} \to \mathbb {R} }is areal function, the determination of the domain of the functionx↦1/f(x){\displaystyle x\mapsto 1/f(x)}requires knowing thezerosoff.This is one of the reasons for which, inmathematical analysis, "a functionfromXtoY"may refer to a function having a proper subset ofXas a domain.[note 2]For example, a "function from the reals to the reals" may refer to areal-valuedfunction of areal variablewhose domain is a proper subset of thereal numbers, typically a subset that contains a non-emptyopen interval. Such a function is then called apartial function. A functionfon a setSmeans a function from the domainS, without specifying a codomain. However, some authors use it as shorthand for saying that the function isf:S→S. The above definition of a function is essentially that of the founders ofcalculus,Leibniz,NewtonandEuler. However, it cannot beformalized, since there is no mathematical definition of an "assignment". It is only at the end of the 19th century that the first formal definition of a function could be provided, in terms ofset theory. This set-theoretic definition is based on the fact that a function establishes arelationbetween the elements of the domain and some (possibly all) elements of the codomain. Mathematically, abinary relationbetween two setsXandYis asubsetof the set of allordered pairs(x,y){\displaystyle (x,y)}such thatx∈X{\displaystyle x\in X}andy∈Y.{\displaystyle y\in Y.}The set of all these pairs is called theCartesian productofXandYand denotedX×Y.{\displaystyle X\times Y.}Thus, the above definition may be formalized as follows. Afunctionwith domainXand codomainYis a binary relationRbetweenXandYthat satisfies the two following conditions:[10] This definition may be rewritten more formally, without referring explicitly to the concept of a relation, but using more notation (includingset-builder notation): A function is formed by three sets, thedomainX,{\displaystyle X,}thecodomainY,{\displaystyle Y,}and thegraphR{\displaystyle R}that satisfy the three following conditions. Partial functions are defined similarly to ordinary functions, with the "total" condition removed. That is, apartial functionfromXtoYis a binary relationRbetweenXandYsuch that, for everyx∈X,{\displaystyle x\in X,}there isat most oneyinYsuch that(x,y)∈R.{\displaystyle (x,y)\in R.} Using functional notation, this means that, givenx∈X,{\displaystyle x\in X,}eitherf(x){\displaystyle f(x)}is inY, or it is undefined. The set of the elements ofXsuch thatf(x){\displaystyle f(x)}is defined and belongs toYis called thedomain of definitionof the function. A partial function fromXtoYis thus a ordinary function that has as its domain a subset ofXcalled the domain of definition of the function. If the domain of definition equalsX, one often says that the partial function is atotal function. In several areas of mathematics the term "function" refers to partial functions rather than to ordinary functions. This is typically the case when functions may be specified in a way that makes difficult or even impossible to determine their domain. Incalculus, areal-valued function of a real variableorreal functionis a partial function from the setR{\displaystyle \mathbb {R} }of thereal numbersto itself. Given a real functionf:x↦f(x){\displaystyle f:x\mapsto f(x)}itsmultiplicative inversex↦1/f(x){\displaystyle x\mapsto 1/f(x)}is also a real function. 
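A minimal sketch of the multiplicative inverse as a partial function (illustrative only; the names are hypothetical): 1/f(x) is left undefined at the zeros of f, and the domain of definition is what remains once those zeros are removed.

```python
# Illustrative sketch: x -> 1/f(x) as a partial function whose domain of
# definition excludes the zeros of f.

def f(x):
    return x * x - 1          # zeros at x = -1 and x = 1

def reciprocal_of_f(x):
    if f(x) == 0:
        return None           # undefined: x is outside the domain of definition
    return 1 / f(x)

candidates = [-2, -1, 0, 1, 2]
domain_of_definition = [x for x in candidates if f(x) != 0]
print(domain_of_definition)                      # [-2, 0, 2]
print([reciprocal_of_f(x) for x in candidates])  # [0.333..., None, -1.0, None, 0.333...]
```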
The determination of the domain of definition of a multiplicative inverse of a (partial) function amounts to compute thezerosof the function, the values where the function is defined but not its multiplicative inverse. Similarly, afunction of a complex variableis generally a partial function with a domain of definition included in the setC{\displaystyle \mathbb {C} }of thecomplex numbers. The difficulty of determining the domain of definition of acomplex functionis illustrated by the multiplicative inverse of theRiemann zeta function: the determination of the domain of definition of the functionz↦1/ζ(z){\displaystyle z\mapsto 1/\zeta (z)}is more or less equivalent to the proof or disproof of one of the major open problems in mathematics, theRiemann hypothesis. Incomputability theory, ageneral recursive functionis a partial function from the integers to the integers whose values can be computed by analgorithm(roughly speaking). The domain of definition of such a function is the set of inputs for which the algorithm does not run forever. A fundamental theorem of computability theory is that there cannot exist an algorithm that takes an arbitrary general recursive function as input and tests whether0belongs to its domain of definition (seeHalting problem). Amultivariate function,multivariable function, orfunction of several variablesis a function that depends on several arguments. Such functions are commonly encountered. For example, the position of a car on a road is a function of the time travelled and its average speed. Formally, a function ofnvariables is a function whose domain is a set ofn-tuples.[note 3]For example, multiplication ofintegersis a function of two variables, orbivariate function, whose domain is the set of allordered pairs(2-tuples) of integers, and whose codomain is the set of integers. The same is true for everybinary operation. The graph of a bivariate surface over a two-dimensional real domain may be interpreted as defining aparametric surface, as used in, e.g.,bivariate interpolation. Commonly, ann-tuple is denoted enclosed between parentheses, such as in(1,2,…,n).{\displaystyle (1,2,\ldots ,n).}When usingfunctional notation, one usually omits the parentheses surrounding tuples, writingf(x1,…,xn){\displaystyle f(x_{1},\ldots ,x_{n})}instead off((x1,…,xn)).{\displaystyle f((x_{1},\ldots ,x_{n})).} GivennsetsX1,…,Xn,{\displaystyle X_{1},\ldots ,X_{n},}the set of alln-tuples(x1,…,xn){\displaystyle (x_{1},\ldots ,x_{n})}such thatx1∈X1,…,xn∈Xn{\displaystyle x_{1}\in X_{1},\ldots ,x_{n}\in X_{n}}is called theCartesian productofX1,…,Xn,{\displaystyle X_{1},\ldots ,X_{n},}and denotedX1×⋯×Xn.{\displaystyle X_{1}\times \cdots \times X_{n}.} Therefore, a multivariate function is a function that has a Cartesian product or aproper subsetof a Cartesian product as a domain. f:U→Y,{\displaystyle f:U\to Y,} where the domainUhas the form U⊆X1×⋯×Xn.{\displaystyle U\subseteq X_{1}\times \cdots \times X_{n}.} If all theXi{\displaystyle X_{i}}are equal to the setR{\displaystyle \mathbb {R} }of thereal numbersor to the setC{\displaystyle \mathbb {C} }of thecomplex numbers, one talks respectively of afunction of several real variablesor of afunction of several complex variables. There are various standard ways for denoting functions. The most commonly used notation is functional notation, which is the first notation described below. The functional notation requires that a name is given to the function, which, in the case of a unspecified function is often the letterf. 
Then, the application of the function to an argument is denoted by its name followed by its argument (or, in the case of a multivariate functions, its arguments) enclosed between parentheses, such as in f(x),sin⁡(3),orf(x2+1).{\displaystyle f(x),\quad \sin(3),\quad {\text{or}}\quad f(x^{2}+1).} The argument between the parentheses may be avariable, oftenx, that represents an arbitrary element of the domain of the function, a specific element of the domain (3in the above example), or anexpressionthat can be evaluated to an element of the domain (x2+1{\displaystyle x^{2}+1}in the above example). The use of a unspecified variable between parentheses is useful for defining a function explicitly such as in "letf(x)=sin⁡(x2+1){\displaystyle f(x)=\sin(x^{2}+1)}". When the symbol denoting the function consists of several characters and no ambiguity may arise, the parentheses of functional notation might be omitted. For example, it is common to writesinxinstead ofsin(x). Functional notation was first used byLeonhard Eulerin 1734.[11]Some widely used functions are represented by a symbol consisting of several letters (usually two or three, generally an abbreviation of their name). In this case, aroman typeis customarily used instead, such as "sin" for thesine function, in contrast to italic font for single-letter symbols. The functional notation is often used colloquially for referring to a function and simultaneously naming its argument, such as in "letf(x){\displaystyle f(x)}be a function". This is anabuse of notationthat is useful for a simpler formulation. Arrow notation defines the rule of a function inline, without requiring a name to be given to the function. It uses the ↦ arrow symbol, pronounced "maps to". For example,x↦x+1{\displaystyle x\mapsto x+1}is the function which takes a real number as input and outputs that number plus 1. Again, a domain and codomain ofR{\displaystyle \mathbb {R} }is implied. The domain and codomain can also be explicitly stated, for example: sqr:Z→Zx↦x2.{\displaystyle {\begin{aligned}\operatorname {sqr} \colon \mathbb {Z} &\to \mathbb {Z} \\x&\mapsto x^{2}.\end{aligned}}} This defines a functionsqrfrom the integers to the integers that returns the square of its input. As a common application of the arrow notation, supposef:X×X→Y;(x,t)↦f(x,t){\displaystyle f:X\times X\to Y;\;(x,t)\mapsto f(x,t)}is a function in two variables, and we want to refer to apartially applied functionX→Y{\displaystyle X\to Y}produced by fixing the second argument to the valuet0without introducing a new function name. The map in question could be denotedx↦f(x,t0){\displaystyle x\mapsto f(x,t_{0})}using the arrow notation. The expressionx↦f(x,t0){\displaystyle x\mapsto f(x,t_{0})}(read: "the map takingxtofofxcommatnought") represents this new function with just one argument, whereas the expressionf(x0,t0)refers to the value of the functionfat thepoint(x0,t0). Index notation may be used instead of functional notation. That is, instead of writingf(x), one writesfx.{\displaystyle f_{x}.} This is typically the case for functions whose domain is the set of thenatural numbers. Such a function is called asequence, and, in this case the elementfn{\displaystyle f_{n}}is called thenth element of the sequence. The index notation can also be used for distinguishing some variables calledparametersfrom the "true variables". In fact, parameters are specific variables that are considered as being fixed during the study of a problem. 
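For readers who think in code, here is an illustrative Python rendering (not from the article) of the notation examples above: an anonymous x ↦ x + 1, an explicit sqr from the integers to the integers, and the partially applied map x ↦ f(x, t₀) obtained by fixing the second argument; all names are hypothetical.

```python
# Illustrative sketch of arrow notation and partial application in Python.

from functools import partial

successor = lambda x: x + 1           # x -> x + 1, no function name required
def sqr(x: int) -> int:               # sqr : Z -> Z, x -> x**2
    return x * x

def f(x, t):
    return x ** t

t0 = 3
slice_at_t0 = lambda x: f(x, t0)      # the map x -> f(x, t0)
print(successor(4), sqr(5))           # 5 25
print(slice_at_t0(2), partial(f, t=t0)(2))   # 8 8
```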
For example, the mapx↦f(x,t){\displaystyle x\mapsto f(x,t)}(see above) would be denotedft{\displaystyle f_{t}}using index notation, if we define the collection of mapsft{\displaystyle f_{t}}by the formulaft(x)=f(x,t){\displaystyle f_{t}(x)=f(x,t)}for allx,t∈X{\displaystyle x,t\in X}. In the notationx↦f(x),{\displaystyle x\mapsto f(x),}the symbolxdoes not represent any value; it is simply aplaceholder, meaning that, ifxis replaced by any value on the left of the arrow, it should be replaced by the same value on the right of the arrow. Therefore,xmay be replaced by any symbol, often aninterpunct"⋅". This may be useful for distinguishing the functionf(⋅)from its valuef(x)atx. For example,a(⋅)2{\displaystyle a(\cdot )^{2}}may stand for the functionx↦ax2{\displaystyle x\mapsto ax^{2}}, and∫a(⋅)f(u)du{\textstyle \int _{a}^{\,(\cdot )}f(u)\,du}may stand for a function defined by anintegralwith variable upper bound:x↦∫axf(u)du{\textstyle x\mapsto \int _{a}^{x}f(u)\,du}. There are other, specialized notations for functions in sub-disciplines of mathematics. For example, inlinear algebraandfunctional analysis,linear formsand thevectorsthey act upon are denoted using adual pairto show the underlyingduality. This is similar to the use ofbra–ket notationin quantum mechanics. Inlogicand thetheory of computation, the function notation oflambda calculusis used to explicitly express the basic notions of functionabstractionandapplication. Incategory theoryandhomological algebra, networks of functions are described in terms of how they and their compositionscommutewith each other usingcommutative diagramsthat extend and generalize the arrow notation for functions described above. In some cases the argument of a function may be an ordered pair of elements taken from some set or sets. For example, a functionfcan be defined as mapping any pair of real numbers(x,y){\displaystyle (x,y)}to the sum of their squares,x2+y2{\displaystyle x^{2}+y^{2}}. Such a function is commonly written asf(x,y)=x2+y2{\displaystyle f(x,y)=x^{2}+y^{2}}and referred to as "a function of two variables". Likewise one can have a function of three or more variables, with notations such asf(w,x,y){\displaystyle f(w,x,y)},f(w,x,y,z){\displaystyle f(w,x,y,z)}. A function may also be called amapor amapping, but some authors make a distinction between the term "map" and "function". For example, the term "map" is often reserved for a "function" with some sort of special structure (e.g.maps of manifolds). In particularmapmay be used in place ofhomomorphismfor the sake of succinctness (e.g.,linear mapormap fromGtoHinstead ofgroup homomorphismfromGtoH). Some authors[14]reserve the wordmappingfor the case where the structure of the codomain belongs explicitly to the definition of the function. Some authors, such asSerge Lang,[13]use "function" only to refer to maps for which thecodomainis a subset of therealorcomplexnumbers, and use the termmappingfor more general functions. In the theory ofdynamical systems, a map denotes anevolution functionused to creatediscrete dynamical systems. See alsoPoincaré map. Whichever definition ofmapis used, related terms likedomain,codomain,injective,continuoushave the same meaning as for a function. Given a functionf{\displaystyle f}, by definition, to each elementx{\displaystyle x}of the domain of the functionf{\displaystyle f}, there is a unique element associated to it, the valuef(x){\displaystyle f(x)}off{\displaystyle f}atx{\displaystyle x}. 
There are several ways to specify or describe howx{\displaystyle x}is related tof(x){\displaystyle f(x)}, both explicitly and implicitly. Sometimes, a theorem or anaxiomasserts the existence of a function having some properties, without describing it more precisely. Often, the specification or description is referred to as the definition of the functionf{\displaystyle f}. On a finite set a function may be defined by listing the elements of the codomain that are associated to the elements of the domain. For example, ifA={1,2,3}{\displaystyle A=\{1,2,3\}}, then one can define a functionf:A→R{\displaystyle f:A\to \mathbb {R} }byf(1)=2,f(2)=3,f(3)=4.{\displaystyle f(1)=2,f(2)=3,f(3)=4.} Functions are often defined by anexpressionthat describes a combination ofarithmetic operationsand previously defined functions; such a formula allows computing the value of the function from the value of any element of the domain. For example, in the above example,f{\displaystyle f}can be defined by the formulaf(n)=n+1{\displaystyle f(n)=n+1}, forn∈{1,2,3}{\displaystyle n\in \{1,2,3\}}. When a function is defined this way, the determination of its domain is sometimes difficult. If the formula that defines the function contains divisions, the values of the variable for which a denominator is zero must be excluded from the domain; thus, for a complicated function, the determination of the domain passes through the computation of thezerosof auxiliary functions. Similarly, ifsquare rootsoccur in the definition of a function fromR{\displaystyle \mathbb {R} }toR,{\displaystyle \mathbb {R} ,}the domain is included in the set of the values of the variable for which the arguments of the square roots are nonnegative. For example,f(x)=1+x2{\displaystyle f(x)={\sqrt {1+x^{2}}}}defines a functionf:R→R{\displaystyle f:\mathbb {R} \to \mathbb {R} }whose domain isR,{\displaystyle \mathbb {R} ,}because1+x2{\displaystyle 1+x^{2}}is always positive ifxis a real number. On the other hand,f(x)=1−x2{\displaystyle f(x)={\sqrt {1-x^{2}}}}defines a function from the reals to the reals whose domain is reduced to the interval[−1, 1]. (In old texts, such a domain was called thedomain of definitionof the function.) Functions can be classified by the nature of formulas that define them: A functionf:X→Y,{\displaystyle f:X\to Y,}with domainXand codomainY, isbijective, if for everyyinY, there is one and only one elementxinXsuch thaty=f(x). In this case, theinverse functionoffis the functionf−1:Y→X{\displaystyle f^{-1}:Y\to X}that mapsy∈Y{\displaystyle y\in Y}to the elementx∈X{\displaystyle x\in X}such thaty=f(x). For example, thenatural logarithmis a bijective function from the positive real numbers to the real numbers. It thus has an inverse, called theexponential function, that maps the real numbers onto the positive numbers. If a functionf:X→Y{\displaystyle f:X\to Y}is not bijective, it may occur that one can select subsetsE⊆X{\displaystyle E\subseteq X}andF⊆Y{\displaystyle F\subseteq Y}such that therestrictionofftoEis a bijection fromEtoF, and has thus an inverse. Theinverse trigonometric functionsare defined this way. For example, thecosine functioninduces, by restriction, a bijection from theinterval[0,π]onto the interval[−1, 1], and its inverse function, calledarccosine, maps[−1, 1]onto[0,π]. The other inverse trigonometric functions are defined similarly. More generally, given abinary relationRbetween two setsXandY, letEbe a subset ofXsuch that, for everyx∈E,{\displaystyle x\in E,}there is somey∈Y{\displaystyle y\in Y}such thatx R y. 
If one has a criterion allowing selecting such ayfor everyx∈E,{\displaystyle x\in E,}this defines a functionf:E→Y,{\displaystyle f:E\to Y,}called animplicit function, because it is implicitly defined by the relationR. For example, the equation of theunit circlex2+y2=1{\displaystyle x^{2}+y^{2}=1}defines a relation on real numbers. If−1 <x< 1there are two possible values ofy, one positive and one negative. Forx= ± 1, these two values become both equal to 0. Otherwise, there is no possible value ofy. This means that the equation defines two implicit functions with domain[−1, 1]and respective codomains[0, +∞)and(−∞, 0]. In this example, the equation can be solved iny, givingy=±1−x2,{\displaystyle y=\pm {\sqrt {1-x^{2}}},}but, in more complicated examples, this is impossible. For example, the relationy5+y+x=0{\displaystyle y^{5}+y+x=0}definesyas an implicit function ofx, called theBring radical, which hasR{\displaystyle \mathbb {R} }as domain and range. The Bring radical cannot be expressed in terms of the four arithmetic operations andnth roots. Theimplicit function theoremprovides milddifferentiabilityconditions for existence and uniqueness of an implicit function in the neighborhood of a point. Many functions can be defined as theantiderivativeof another function. This is the case of thenatural logarithm, which is the antiderivative of1/xthat is 0 forx= 1. Another common example is theerror function. More generally, many functions, including mostspecial functions, can be defined as solutions ofdifferential equations. The simplest example is probably theexponential function, which can be defined as the unique function that is equal to its derivative and takes the value 1 forx= 0. Power seriescan be used to define functions on the domain in which they converge. For example, theexponential functionis given byex=∑n=0∞xnn!{\textstyle e^{x}=\sum _{n=0}^{\infty }{x^{n} \over n!}}. However, as the coefficients of a series are quite arbitrary, a function that is the sum of a convergent series is generally defined otherwise, and the sequence of the coefficients is the result of some computation based on another definition. Then, the power series can be used to enlarge the domain of the function. Typically, if a function for a real variable is the sum of itsTaylor seriesin some interval, this power series allows immediately enlarging the domain to a subset of thecomplex numbers, thedisc of convergenceof the series. Thenanalytic continuationallows enlarging further the domain for including almost the wholecomplex plane. This process is the method that is generally used for defining thelogarithm, theexponentialand thetrigonometric functionsof a complex number. Functions whose domain are the nonnegative integers, known assequences, are sometimes defined byrecurrence relations. Thefactorialfunction on the nonnegative integers (n↦n!{\displaystyle n\mapsto n!}) is a basic example, as it can be defined by the recurrence relation n!=n(n−1)!forn>0,{\displaystyle n!=n(n-1)!\quad {\text{for}}\quad n>0,} and the initial condition 0!=1.{\displaystyle 0!=1.} Agraphis commonly used to give an intuitive picture of a function. As an example of how a graph helps to understand a function, it is easy to see from its graph whether a function is increasing or decreasing. Some functions may also be represented bybar charts. 
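As an illustrative aside (not part of the article), the recurrence-defined factorial and the power series for the exponential function given above can be evaluated directly; the helper names below are hypothetical and the series is truncated to finitely many terms.

```python
# Illustrative sketch: the recurrence 0! = 1, n! = n*(n-1)!, and a partial sum
# of the series e**x = sum of x**n / n! used to approximate the exponential.

import math

def fact(n):
    return 1 if n == 0 else n * fact(n - 1)     # the recurrence given above

def exp_partial_sum(x, terms=20):
    return sum(x ** n / fact(n) for n in range(terms))

print(fact(5))                          # 120
print(exp_partial_sum(1.0), math.e)     # both approximately 2.718281828...
```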
Given a functionf:X→Y,{\displaystyle f:X\to Y,}itsgraphis, formally, the set G={(x,f(x))∣x∈X}.{\displaystyle G=\{(x,f(x))\mid x\in X\}.} In the frequent case whereXandYare subsets of thereal numbers(or may be identified with such subsets, e.g.intervals), an element(x,y)∈G{\displaystyle (x,y)\in G}may be identified with a point having coordinatesx,yin a 2-dimensional coordinate system, e.g. theCartesian plane. Parts of this may create aplotthat represents (parts of) the function. The use of plots is so ubiquitous that they too are called thegraph of the function. Graphic representations of functions are also possible in other coordinate systems. For example, the graph of thesquare function x↦x2,{\displaystyle x\mapsto x^{2},} consisting of all points with coordinates(x,x2){\displaystyle (x,x^{2})}forx∈R,{\displaystyle x\in \mathbb {R} ,}yields, when depicted in Cartesian coordinates, the well knownparabola. If the same quadratic functionx↦x2,{\displaystyle x\mapsto x^{2},}with the same formal graph, consisting of pairs of numbers, is plotted instead inpolar coordinates(r,θ)=(x,x2),{\displaystyle (r,\theta )=(x,x^{2}),}the plot obtained isFermat's spiral. A function can be represented as a table of values. If the domain of a function is finite, then the function can be completely specified in this way. For example, the multiplication functionf:{1,…,5}2→R{\displaystyle f:\{1,\ldots ,5\}^{2}\to \mathbb {R} }defined asf(x,y)=xy{\displaystyle f(x,y)=xy}can be represented by the familiarmultiplication table On the other hand, if a function's domain is continuous, a table can give the values of the function at specific values of the domain. If an intermediate value is needed,interpolationcan be used to estimate the value of the function. For example, a portion of a table for the sine function might be given as follows, with values rounded to 6 decimal places: Before the advent of handheld calculators and personal computers, such tables were often compiled and published for functions such as logarithms and trigonometric functions. A bar chart can represent a function whose domain is a finite set, thenatural numbers, or theintegers. In this case, an elementxof the domain is represented by anintervalof thex-axis, and the corresponding value of the function,f(x), is represented by arectanglewhose base is the interval corresponding toxand whose height isf(x)(possibly negative, in which case the bar extends below thex-axis). This section describes general properties of functions, that are independent of specific properties of the domain and the codomain. There are a number of standard functions that occur frequently: Given two functionsf:X→Y{\displaystyle f:X\to Y}andg:Y→Z{\displaystyle g:Y\to Z}such that the domain ofgis the codomain off, theircompositionis the functiong∘f:X→Z{\displaystyle g\circ f:X\rightarrow Z}defined by (g∘f)(x)=g(f(x)).{\displaystyle (g\circ f)(x)=g(f(x)).} That is, the value ofg∘f{\displaystyle g\circ f}is obtained by first applyingftoxto obtainy=f(x)and then applyinggto the resultyto obtaing(y) =g(f(x)). In this notation, the function that is applied first is always written on the right. The compositiong∘f{\displaystyle g\circ f}is anoperationon functions that is defined only if the codomain of the first function is the domain of the second one. 
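A minimal sketch of composition as an operation on functions (illustrative only; compose is a hypothetical helper, not standard notation):

```python
# Illustrative sketch: (g ∘ f)(x) = g(f(x)), meaningful only when f produces
# values that g accepts.

def compose(g, f):
    return lambda x: g(f(x))

f = lambda x: x * x            # f : R -> R
g = lambda x: x + 1            # g : R -> R

g_of_f = compose(g, f)         # x -> x**2 + 1
print(g_of_f(3))               # 10
print(compose(f, g)(3))        # 16: applying g first gives (3 + 1)**2 instead
```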
Even when bothg∘f{\displaystyle g\circ f}andf∘g{\displaystyle f\circ g}satisfy these conditions, the composition is not necessarilycommutative, that is, the functionsg∘f{\displaystyle g\circ f}andf∘g{\displaystyle f\circ g}need not be equal, but may deliver different values for the same argument. For example, letf(x) =x2andg(x) =x+ 1, theng(f(x))=x2+1{\displaystyle g(f(x))=x^{2}+1}andf(g(x))=(x+1)2{\displaystyle f(g(x))=(x+1)^{2}}agree just forx=0.{\displaystyle x=0.} The function composition isassociativein the sense that, if one of(h∘g)∘f{\displaystyle (h\circ g)\circ f}andh∘(g∘f){\displaystyle h\circ (g\circ f)}is defined, then the other is also defined, and they are equal, that is,(h∘g)∘f=h∘(g∘f).{\displaystyle (h\circ g)\circ f=h\circ (g\circ f).}Therefore, it is usual to just writeh∘g∘f.{\displaystyle h\circ g\circ f.} Theidentity functionsidX{\displaystyle \operatorname {id} _{X}}andidY{\displaystyle \operatorname {id} _{Y}}are respectively aright identityand aleft identityfor functions fromXtoY. That is, iffis a function with domainX, and codomainY, one hasf∘idX=idY∘f=f.{\displaystyle f\circ \operatorname {id} _{X}=\operatorname {id} _{Y}\circ f=f.} Letf:X→Y.{\displaystyle f:X\to Y.}Theimageunderfof an elementxof the domainXisf(x).[6]IfAis any subset ofX, then theimageofAunderf, denotedf(A), is the subset of the codomainYconsisting of all images of elements ofA,[6]that is, f(A)={f(x)∣x∈A}.{\displaystyle f(A)=\{f(x)\mid x\in A\}.} Theimageoffis the image of the whole domain, that is,f(X).[17]It is also called therangeoff,[6][7][8][9]although the termrangemay also refer to the codomain.[9][17][18] On the other hand, theinverse imageorpreimageunderfof an elementyof the codomainYis the set of all elements of the domainXwhose images underfequaly.[6]In symbols, the preimage ofyis denoted byf−1(y){\displaystyle f^{-1}(y)}and is given by the equation f−1(y)={x∈X∣f(x)=y}.{\displaystyle f^{-1}(y)=\{x\in X\mid f(x)=y\}.} Likewise, the preimage of a subsetBof the codomainYis the set of the preimages of the elements ofB, that is, it is the subset of the domainXconsisting of all elements ofXwhose images belong toB.[6]It is denoted byf−1(B){\displaystyle f^{-1}(B)}and is given by the equation f−1(B)={x∈X∣f(x)∈B}.{\displaystyle f^{-1}(B)=\{x\in X\mid f(x)\in B\}.} For example, the preimage of{4,9}{\displaystyle \{4,9\}}under thesquare functionis the set{−3,−2,2,3}{\displaystyle \{-3,-2,2,3\}}. By definition of a function, the image of an elementxof the domain is always a single element of the codomain. However, the preimagef−1(y){\displaystyle f^{-1}(y)}of an elementyof the codomain may beemptyor contain any number of elements. For example, iffis the function from the integers to themselves that maps every integer to 0, thenf−1(0)=Z{\displaystyle f^{-1}(0)=\mathbb {Z} }. Iff:X→Y{\displaystyle f:X\to Y}is a function,AandBare subsets ofX, andCandDare subsets ofY, then one has the following properties: The preimage byfof an elementyof the codomain is sometimes called, in some contexts, thefiberofyunderf. If a functionfhas an inverse (see below), this inverse is denotedf−1.{\displaystyle f^{-1}.}In this casef−1(C){\displaystyle f^{-1}(C)}may denote either the image byf−1{\displaystyle f^{-1}}or the preimage byfofC. This is not a problem, as these sets are equal. 
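The image and preimage of subsets defined above translate directly into set comprehensions. The following sketch (illustrative, with hypothetical helper names) reproduces the example f⁻¹({4, 9}) = {−3, −2, 2, 3} for the square function.

```python
# Illustrative sketch: images and preimages of subsets under a function.

def image(f, A):
    return {f(x) for x in A}

def preimage(f, domain, B):
    return {x for x in domain if f(x) in B}

square = lambda x: x * x
domain = range(-10, 11)
print(image(square, {1, 2, 3}))            # {1, 4, 9}
print(preimage(square, domain, {4, 9}))    # {-3, -2, 2, 3}, as in the example above
```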
The notationf(A){\displaystyle f(A)}andf−1(C){\displaystyle f^{-1}(C)}may be ambiguous in the case of sets that contain some subsets as elements, such as{x,{x}}.{\displaystyle \{x,\{x\}\}.}In this case, some care may be needed, for example, by using square bracketsf[A],f−1[C]{\displaystyle f[A],f^{-1}[C]}for images and preimages of subsets and ordinary parentheses for images and preimages of elements. Letf:X→Y{\displaystyle f:X\to Y}be a function. The functionfisinjective(orone-to-one, or is aninjection) iff(a) ≠f(b)for every two different elementsaandbofX.[17][19]Equivalently,fis injective if and only if, for everyy∈Y,{\displaystyle y\in Y,}the preimagef−1(y){\displaystyle f^{-1}(y)}contains at most one element. An empty function is always injective. IfXis not the empty set, thenfis injective if and only if there exists a functiong:Y→X{\displaystyle g:Y\to X}such thatg∘f=idX,{\displaystyle g\circ f=\operatorname {id} _{X},}that is, iffhas aleft inverse.[19]Proof: Iffis injective, for definingg, one chooses an elementx0{\displaystyle x_{0}}inX(which exists asXis supposed to be nonempty),[note 6]and one definesgbyg(y)=x{\displaystyle g(y)=x}ify=f(x){\displaystyle y=f(x)}andg(y)=x0{\displaystyle g(y)=x_{0}}ify∉f(X).{\displaystyle y\not \in f(X).}Conversely, ifg∘f=idX,{\displaystyle g\circ f=\operatorname {id} _{X},}andy=f(x),{\displaystyle y=f(x),}thenx=g(y),{\displaystyle x=g(y),}and thusf−1(y)={x}.{\displaystyle f^{-1}(y)=\{x\}.} The functionfissurjective(oronto, or is asurjection) if its rangef(X){\displaystyle f(X)}equals its codomainY{\displaystyle Y}, that is, if, for each elementy{\displaystyle y}of the codomain, there exists some elementx{\displaystyle x}of the domain such thatf(x)=y{\displaystyle f(x)=y}(in other words, the preimagef−1(y){\displaystyle f^{-1}(y)}of everyy∈Y{\displaystyle y\in Y}is nonempty).[17][20]If, as usual in modern mathematics, theaxiom of choiceis assumed, thenfis surjective if and only if there exists a functiong:Y→X{\displaystyle g:Y\to X}such thatf∘g=idY,{\displaystyle f\circ g=\operatorname {id} _{Y},}that is, iffhas aright inverse.[20]The axiom of choice is needed, because, iffis surjective, one definesgbyg(y)=x,{\displaystyle g(y)=x,}wherex{\displaystyle x}is anarbitrarily chosenelement off−1(y).{\displaystyle f^{-1}(y).} The functionfisbijective(or is abijectionor aone-to-one correspondence) if it is both injective and surjective.[17][21]That is,fis bijective if, for everyy∈Y,{\displaystyle y\in Y,}the preimagef−1(y){\displaystyle f^{-1}(y)}contains exactly one element. The functionfis bijective if and only if it admits aninverse function, that is, a functiong:Y→X{\displaystyle g:Y\to X}such thatg∘f=idX{\displaystyle g\circ f=\operatorname {id} _{X}}andf∘g=idY.{\displaystyle f\circ g=\operatorname {id} _{Y}.}[21](Contrarily to the case of surjections, this does not require the axiom of choice; the proof is straightforward). Every functionf:X→Y{\displaystyle f:X\to Y}may befactorizedas the compositioni∘s{\displaystyle i\circ s}of a surjection followed by an injection, wheresis the canonical surjection ofXontof(X)andiis the canonical injection off(X)intoY. This is thecanonical factorizationoff. 
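As an illustration of the canonical factorization just described (not part of the article), the sketch below factors a function on a finite domain as the canonical surjection onto its image followed by the inclusion of that image into the codomain; the helper name is hypothetical.

```python
# Illustrative sketch: f = i ∘ s, with s the canonical surjection of X onto f(X)
# and i the canonical injection of f(X) into Y.

def canonical_factorization(f, domain):
    image = {f(x) for x in domain}      # f(X), a subset of the codomain
    s = lambda x: f(x)                  # s : X -> f(X), surjective by construction
    i = lambda y: y                     # i : f(X) -> Y, the inclusion
    return s, i, image

f = lambda n: n % 3
domain = range(6)
s, i, image = canonical_factorization(f, domain)
print(sorted(image))                            # [0, 1, 2]
print(all(i(s(x)) == f(x) for x in domain))     # True: f = i ∘ s
```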
"One-to-one" and "onto" are terms that were more common in the older English language literature; "injective", "surjective", and "bijective" were originally coined as French words in the second quarter of the 20th century by theBourbaki groupand imported into English.[22]As a word of caution, "a one-to-one function" is one that is injective, while a "one-to-one correspondence" refers to a bijective function. Also, the statement "fmapsXontoY" differs from "fmapsXintoB", in that the former implies thatfis surjective, while the latter makes no assertion about the nature off. In a complicated reasoning, the one letter difference can easily be missed. Due to the confusing nature of this older terminology, these terms have declined in popularity relative to the Bourbakian terms, which have also the advantage of being more symmetrical. Iff:X→Y{\displaystyle f:X\to Y}is a function andSis a subset ofX, then therestrictionoff{\displaystyle f}toS, denotedf|S{\displaystyle f|_{S}}, is the function fromStoYdefined by f|S(x)=f(x){\displaystyle f|_{S}(x)=f(x)} for allxinS. Restrictions can be used to define partialinverse functions: if there is asubsetSof the domain of a functionf{\displaystyle f}such thatf|S{\displaystyle f|_{S}}is injective, then the canonical surjection off|S{\displaystyle f|_{S}}onto its imagef|S(S)=f(S){\displaystyle f|_{S}(S)=f(S)}is a bijection, and thus has an inverse function fromf(S){\displaystyle f(S)}toS. One application is the definition ofinverse trigonometric functions. For example, thecosinefunction is injective when restricted to theinterval[0,π]. The image of this restriction is the interval[−1, 1], and thus the restriction has an inverse function from[−1, 1]to[0,π], which is calledarccosineand is denotedarccos. Function restriction may also be used for "gluing" functions together. LetX=⋃i∈IUi{\textstyle X=\bigcup _{i\in I}U_{i}}be the decomposition ofXas aunionof subsets, and suppose that a functionfi:Ui→Y{\displaystyle f_{i}:U_{i}\to Y}is defined on eachUi{\displaystyle U_{i}}such that for each pairi,j{\displaystyle i,j}of indices, the restrictions offi{\displaystyle f_{i}}andfj{\displaystyle f_{j}}toUi∩Uj{\displaystyle U_{i}\cap U_{j}}are equal. Then this defines a unique functionf:X→Y{\displaystyle f:X\to Y}such thatf|Ui=fi{\displaystyle f|_{U_{i}}=f_{i}}for alli. This is the way that functions onmanifoldsare defined. Anextensionof a functionfis a functiongsuch thatfis a restriction ofg. A typical use of this concept is the process ofanalytic continuation, that allows extending functions whose domain is a small part of thecomplex planeto functions whose domain is almost the whole complex plane. Here is another classical example of a function extension that is encountered when studyinghomographiesof thereal line. Ahomographyis a functionh(x)=ax+bcx+d{\displaystyle h(x)={\frac {ax+b}{cx+d}}}such thatad−bc≠ 0. Its domain is the set of allreal numbersdifferent from−d/c,{\displaystyle -d/c,}and its image is the set of all real numbers different froma/c.{\displaystyle a/c.}If one extends the real line to theprojectively extended real lineby including∞, one may extendhto a bijection from the extended real line to itself by settingh(∞)=a/c{\displaystyle h(\infty )=a/c}andh(−d/c)=∞{\displaystyle h(-d/c)=\infty }. The idea of function, starting in the 17th century, was fundamental to the newinfinitesimal calculus. At that time, onlyreal-valuedfunctions of areal variablewere considered, and all functions were assumed to besmooth. 
But the definition was soon extended tofunctions of several variablesand tofunctions of a complex variable. In the second half of the 19th century, the mathematically rigorous definition of a function was introduced, and functions with arbitrary domains and codomains were defined. Functions are now used throughout all areas of mathematics. In introductorycalculus, when the wordfunctionis used without qualification, it means a real-valued function of a single real variable. The more general definition of a function is usually introduced to second or third year college students withSTEMmajors, and in their senior year they are introduced to calculus in a larger, more rigorous setting in courses such asreal analysisandcomplex analysis. Areal functionis areal-valuedfunction of a real variable, that is, a function whose codomain is thefield of real numbersand whose domain is a set ofreal numbersthat contains aninterval. In this section, these functions are simply calledfunctions. The functions that are most commonly considered in mathematics and its applications have some regularity, that is they arecontinuous,differentiable, and evenanalytic. This regularity insures that these functions can be visualized by theirgraphs. In this section, all functions are differentiable in some interval. Functions enjoypointwise operations, that is, iffandgare functions, their sum, difference and product are functions defined by (f+g)(x)=f(x)+g(x)(f−g)(x)=f(x)−g(x)(f⋅g)(x)=f(x)⋅g(x).{\displaystyle {\begin{aligned}(f+g)(x)&=f(x)+g(x)\\(f-g)(x)&=f(x)-g(x)\\(f\cdot g)(x)&=f(x)\cdot g(x)\\\end{aligned}}.} The domains of the resulting functions are theintersectionof the domains offandg. The quotient of two functions is defined similarly by fg(x)=f(x)g(x),{\displaystyle {\frac {f}{g}}(x)={\frac {f(x)}{g(x)}},} but the domain of the resulting function is obtained by removing thezerosofgfrom the intersection of the domains offandg. Thepolynomial functionsare defined bypolynomials, and their domain is the whole set of real numbers. They includeconstant functions,linear functionsandquadratic functions.Rational functionsare quotients of two polynomial functions, and their domain is the real numbers with a finite number of them removed to avoiddivision by zero. The simplest rational function is the functionx↦1x,{\displaystyle x\mapsto {\frac {1}{x}},}whose graph is ahyperbola, and whose domain is the wholereal lineexcept for 0. Thederivativeof a real differentiable function is a real function. Anantiderivativeof a continuous real function is a real function that has the original function as a derivative. For example, the functionx↦1x{\textstyle x\mapsto {\frac {1}{x}}}is continuous, and even differentiable, on the positive real numbers. Thus one antiderivative, which takes the value zero forx= 1, is a differentiable function called thenatural logarithm. A real functionfismonotonicin an interval if the sign off(x)−f(y)x−y{\displaystyle {\frac {f(x)-f(y)}{x-y}}}does not depend of the choice ofxandyin the interval. If the function is differentiable in the interval, it is monotonic if the sign of the derivative is constant in the interval. If a real functionfis monotonic in an intervalI, it has aninverse function, which is a real function with domainf(I)and imageI. This is howinverse trigonometric functionsare defined in terms oftrigonometric functions, where the trigonometric functions are monotonic. 
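As an illustrative aside (not from the article), a monotonic real function on an interval can be inverted numerically, in the spirit of the inverse trigonometric functions above; the bisection helper below is hypothetical and assumes monotonicity on the given interval.

```python
# Illustrative sketch: inverting a monotonic function on [lo, hi] by bisection,
# e.g. recovering arccosine from cosine restricted to [0, pi].

import math

def inverse_on_interval(f, lo, hi, y, steps=80):
    """Solve f(x) = y for x in [lo, hi], assuming f is monotonic there."""
    increasing = f(lo) < f(hi)
    for _ in range(steps):
        mid = (lo + hi) / 2
        if (f(mid) < y) == increasing:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

x = inverse_on_interval(math.cos, 0.0, math.pi, 0.5)
print(x, math.acos(0.5))        # both approximately 1.0471975 (pi/3)
```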
Another example: the natural logarithm is monotonic on the positive real numbers, and its image is the whole real line; therefore it has an inverse function that is abijectionbetween the real numbers and the positive real numbers. This inverse is theexponential function. Many other real functions are defined either by theimplicit function theorem(the inverse function is a particular instance) or as solutions ofdifferential equations. For example, thesineand thecosinefunctions are the solutions of thelinear differential equation y″+y=0{\displaystyle y''+y=0} such that sin⁡0=0,cos⁡0=1,∂sin⁡x∂x(0)=1,∂cos⁡x∂x(0)=0.{\displaystyle \sin 0=0,\quad \cos 0=1,\quad {\frac {\partial \sin x}{\partial x}}(0)=1,\quad {\frac {\partial \cos x}{\partial x}}(0)=0.} When the elements of the codomain of a function arevectors, the function is said to be a vector-valued function. These functions are particularly useful in applications, for example modeling physical properties. For example, the function that associates to each point of a fluid itsvelocity vectoris a vector-valued function. Some vector-valued functions are defined on a subset ofRn{\displaystyle \mathbb {R} ^{n}}or other spaces that share geometric ortopologicalproperties ofRn{\displaystyle \mathbb {R} ^{n}}, such asmanifolds. These vector-valued functions are given the namevector fields. Inmathematical analysis, and more specifically infunctional analysis, afunction spaceis a set ofscalar-valuedorvector-valued functions, which share a specific property and form atopological vector space. For example, the realsmooth functionswith acompact support(that is, they are zero outside somecompact set) form a function space that is at the basis of the theory ofdistributions. Function spaces play a fundamental role in advanced mathematical analysis, by allowing the use of their algebraic andtopologicalproperties for studying properties of functions. For example, all theorems of existence and uniqueness of solutions ofordinaryorpartial differential equationsresult of the study of function spaces. Several methods for specifying functions of real or complex variables start from a local definition of the function at a point or on aneighbourhoodof a point, and then extend by continuity the function to a much larger domain. Frequently, for a starting pointx0,{\displaystyle x_{0},}there are several possible starting values for the function. For example, in defining thesquare rootas the inverse function of the square function, for any positive real numberx0,{\displaystyle x_{0},}there are two choices for the value of the square root, one of which is positive and denotedx0,{\displaystyle {\sqrt {x_{0}}},}and another which is negative and denoted−x0.{\displaystyle -{\sqrt {x_{0}}}.}These choices define two continuous functions, both having the nonnegative real numbers as a domain, and having either the nonnegative or the nonpositive real numbers as images. When looking at the graphs of these functions, one can see that, together, they form a singlesmooth curve. It is therefore often useful to consider these two square root functions as a single function that has two values for positivex, one value for 0 and no value for negativex. In the preceding example, one choice, the positive square root, is more natural than the other. This is not the case in general. For example, let consider theimplicit functionthat mapsyto arootxofx3−3x−y=0{\displaystyle x^{3}-3x-y=0}(see the figure on the right). 
For y = 0 one may choose either 0, √3, or −√3{\displaystyle 0,{\sqrt {3}},{\text{ or }}-{\sqrt {3}}} for x. By the implicit function theorem, each choice defines a function; for the first one, the (maximal) domain is the interval [−2, 2] and the image is [−1, 1]; for the second one, the domain is [−2, ∞) and the image is [1, ∞); for the last one, the domain is (−∞, 2] and the image is (−∞, −1]. As the three graphs together form a smooth curve, and there is no reason for preferring one choice, these three functions are often considered as a single multi-valued function of y that has three values for −2 < y < 2, and only one value for y ≤ −2 and y ≥ 2. The usefulness of the concept of multi-valued functions is clearer when considering complex functions, typically analytic functions. The domain to which a complex function may be extended by analytic continuation generally consists of almost the whole complex plane. However, when extending the domain through two different paths, one often gets different values. For example, when extending the domain of the square root function along a path of complex numbers with positive imaginary parts, one gets i for the square root of −1; while, when extending through complex numbers with negative imaginary parts, one gets −i. There are generally two ways of solving the problem. One may define a function that is not continuous along some curve, called a branch cut. Such a function is called the principal value of the function. The other way is to consider that one has a multi-valued function, which is analytic everywhere except for isolated singularities, but whose value may "jump" if one follows a closed loop around a singularity. This jump is called the monodromy. The definition of a function that is given in this article requires the concept of set, since the domain and the codomain of a function must be a set. This is not a problem in usual mathematics, as it is generally not difficult to consider only functions whose domain and codomain are sets, which are well defined, even if the domain is not explicitly defined. However, it is sometimes useful to consider more general functions. For example, the singleton set may be considered as a function x↦{x}.{\displaystyle x\mapsto \{x\}.} Its domain would include all sets, and therefore would not be a set. In usual mathematics, one avoids this kind of problem by specifying a domain, which means that one has many singleton functions. However, when establishing foundations of mathematics, one may have to use functions whose domain, codomain or both are not specified, and some authors, often logicians, give precise definitions for these weakly specified functions.[23] These generalized functions may be critical in the development of a formalization of the foundations of mathematics. For example, Von Neumann–Bernays–Gödel set theory is an extension of set theory in which the collection of all sets is a class. This theory includes the replacement axiom, which may be stated as: if X is a set and F is a function, then F[X] is a set. In alternative formulations of the foundations of mathematics using type theory rather than set theory, functions are taken as primitive notions rather than defined from other kinds of object. They are the inhabitants of function types, and may be constructed using expressions in the lambda calculus.[24] In computer programming, a function is, in general, a subroutine which implements the abstract concept of function.
That is, it is a program unit that produces an output for each input. Functional programming is the programming paradigm consisting of building programs by using only subroutines that behave like mathematical functions, meaning that they have no side effects and depend only on their arguments: they are referentially transparent. For example, if_then_else is a function that takes three (nullary) functions as arguments and, depending on the value of the first argument (true or false), returns the value of either the second or the third argument. An important advantage of functional programming is that it makes program proofs easier, since it rests on a well-founded theory, the lambda calculus (see below). However, side effects are generally necessary for practical programs, that is, programs that perform input/output. There is a class of purely functional languages, such as Haskell, which encapsulate the possibility of side effects in the type of a function. Others, such as the ML family, simply allow side effects. In many programming languages, every subroutine is called a function, even when there is no output but only side effects, and when the functionality consists simply of modifying some data in the computer memory. Outside the context of programming languages, "function" has the usual mathematical meaning in computer science. In this area, a property of major interest is the computability of a function. To give a precise meaning to this concept, and to the related concept of algorithm, several models of computation have been introduced, the oldest being general recursive functions, lambda calculus, and Turing machines. The fundamental theorem of computability theory is that these three models of computation define the same set of computable functions, and that all the other models of computation that have ever been proposed define the same set of computable functions or a smaller one. The Church–Turing thesis is the claim that every philosophically acceptable definition of a computable function also defines the same functions. General recursive functions are partial functions from integers to integers that can be defined from a few basic functions by means of operators such as composition, primitive recursion, and minimization. Although defined only for functions from integers to integers, they can model any computable function, essentially because other kinds of data can be encoded by integers. Lambda calculus is a theory that defines computable functions without using set theory, and is the theoretical background of functional programming. It consists of terms that are either variables, function definitions (𝜆-terms), or applications of functions to terms. Terms are manipulated by interpreting the axioms of the theory (α-equivalence, β-reduction, and η-conversion) as rewriting rules, which can be used for computation. In its original form, lambda calculus does not include the concepts of domain and codomain of a function. Roughly speaking, they have been introduced in the theory under the name of type in typed lambda calculus. Most kinds of typed lambda calculi can define fewer functions than untyped lambda calculus.
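As a small illustration of the functional style discussed above, the if_then_else function can be sketched by passing the condition and both branches as nullary functions (thunks). This is a sketch in Python; the variable names are illustrative.

```python
# Sketch of if_then_else taking three nullary functions (thunks): the
# condition and the two branches. Only the selected branch is evaluated,
# which is what lets the construct be used without side effects.

def if_then_else(cond, then_branch, else_branch):
    return then_branch() if cond() else else_branch()

# Referentially transparent use: the result depends only on the arguments.
x = 7
sign = if_then_else(lambda: x >= 0,
                    lambda: "non-negative",
                    lambda: "negative")
print(sign)  # non-negative

# Because the branches are thunks, the untaken branch is never run:
safe_inverse = if_then_else(lambda: x != 0,
                            lambda: 1 / x,
                            lambda: float("inf"))  # never evaluated here
print(safe_inverse)  # 0.142857...
```

Because only the selected branch is evaluated, the construct behaves like a mathematical function of its three arguments.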
https://en.wikipedia.org/wiki/Function_(mathematics)
Acounterexampleis any exception to ageneralization. Inlogica counterexample disproves the generalization, and does sorigorouslyin the fields ofmathematicsandphilosophy.[1]For example, the fact that "student John Smith is not lazy" is a counterexample to the generalization "students are lazy", and both a counterexample to, and disproof of, theuniversal quantification"all students are lazy."[2] In mathematics, counterexamples are often used to prove the boundaries of possible theorems. By using counterexamples to show that certain conjectures are false, mathematical researchers can then avoid going down blind alleys and learn to modify conjectures to produce provable theorems. It is sometimes said that mathematical development consists primarily in finding (and proving) theorems and counterexamples.[3] Suppose that a mathematician is studyinggeometryandshapes, and she wishes to prove certain theorems about them. Sheconjecturesthat "Allrectanglesaresquares", and she is interested in knowing whether this statement is true or false. In this case, she can either attempt toprovethe truth of the statement usingdeductive reasoning, or she can attempt to find a counterexample of the statement if she suspects it to be false. In the latter case, a counterexample would be a rectangle that is not a square, such as a rectangle with two sides of length 5 and two sides of length 7. However, despite having found rectangles that were not squares, all the rectangles she did find had four sides. She then makes the new conjecture "All rectangles have four sides". This is logically weaker than her original conjecture, since every square has four sides, but not every four-sided shape is a square. The above example explained — in a simplified way — how a mathematician might weaken her conjecture in the face of counterexamples, but counterexamples can also be used to demonstrate the necessity of certain assumptions andhypothesis. For example, suppose that after a while, the mathematician above settled on the new conjecture "All shapes that are rectangles and have four sides of equal length are squares". This conjecture has two parts to the hypothesis: the shape must be 'a rectangle' and must have 'four sides of equal length'. The mathematician then would like to know if she can remove either assumption, and still maintain the truth of her conjecture. This means that she needs to check the truth of the following two statements: A counterexample to (1) was already given above, and a counterexample to (2) is a non-squarerhombus. Thus, the mathematician now knows that each assumption by itself is insufficient. A counterexample to the statement "allprime numbersareodd numbers" is the number 2, as it is a prime number but is not an odd number.[1]Neither of the numbers 7 or 10 is a counterexample, as neither of them are enough to contradict the statement. In this example, 2 is in fact the only possible counterexample to the statement, even though that alone is enough to contradict the statement. In a similar manner, the statement "Allnatural numbersare eitherprimeorcomposite" has the number 1 as a counterexample, as 1 is neither prime nor composite. Euler's sum of powers conjecturewas disproved by counterexample. It asserted that at leastnnthpowers were necessary to sum to anothernthpower. 
This conjecture was disproved in 1966,[4]with a counterexample involvingn= 5; othern= 5 counterexamples are now known, as well as somen= 4 counterexamples.[5] Witsenhausen's counterexampleshows that it is not always true (forcontrol problems) that a quadraticloss functionand a linear equation of evolution of thestate variableimply optimal control laws that are linear. AllEuclidean plane isometriesare mappings that preservearea, but theconverseis false as shown by counterexamplesshear mappingandsqueeze mapping. Other examples include the disproofs of theSeifert conjecture, thePólya conjecture, the conjecture ofHilbert's fourteenth problem,Tait's conjecture, and theGanea conjecture. Inphilosophy, counterexamples are usually used to argue that a certain philosophical position is wrong by showing that it does not apply in certain cases. Alternatively, the first philosopher can modify their claim so that the counterexample no longer applies; this is analogous to when a mathematician modifies a conjecture because of a counterexample. For example, inPlato'sGorgias,Callicles, trying to define what it means to say that some people are "better" than others, claims that those who are stronger are better.Socratesreplies that, because of their strength of numbers, the class of common rabble is stronger than the propertied class of nobles, even though the masses areprima facieof worse character. Thus Socrates has proposed a counterexample to Callicles' claim, by looking in an area that Callicles perhaps did not expect — groups of people rather than individual persons. Callicles might challenge Socrates' counterexample, arguing perhaps that the common rabble really are better than the nobles, or that even in their large numbers, they still are not stronger. But if Callicles accepts the counterexample, then he must either withdraw his claim, or modify it so that the counterexample no longer applies. For example, he might modify his claim to refer only to individual persons, requiring him to think of the common people as a collection of individuals rather than as a mob. As it happens, he modifies his claim to say "wiser" instead of "stronger", arguing that no amount of numerical superiority can make people wiser.
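Both uses of counterexamples described above, searching for one and verifying a known one, are easy to mechanize. The sketch below finds the counterexample 2 to "all prime numbers are odd" and checks the 1966 Lander–Parkin counterexample to Euler's conjecture for n = 5; the specific numbers 27, 84, 110, 133, and 144 are supplied here as the well-known published example and do not appear in the text above.

```python
# Two small counterexample checks. The first searches for a counterexample
# to "all prime numbers are odd"; the second verifies the well-known
# Lander-Parkin counterexample (1966) to Euler's sum of powers conjecture
# for n = 5 (the specific numbers are supplied here, not taken from the text).

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# Search: the generalization "all primes are odd" fails at 2.
counterexample = next(n for n in range(2, 100) if is_prime(n) and n % 2 == 0)
print(counterexample)  # 2

# Verification: four fifth powers summing to a fifth power.
lhs = 27**5 + 84**5 + 110**5 + 133**5
rhs = 144**5
print(lhs == rhs)  # True -- so five fifth powers are not always needed
```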
https://en.wikipedia.org/wiki/Counterexample
Inmathematics, arelationdenotes some kind ofrelationshipbetween twoobjectsin aset, which may or may not hold.[1]As an example, "is less than" is a relation on the set ofnatural numbers; it holds, for instance, between the values1and3(denoted as1 < 3), and likewise between3and4(denoted as3 < 4), but not between the values3and1nor between4and4, that is,3 < 1and4 < 4both evaluate to false. As another example, "is sister of"is a relation on the set of all people, it holds e.g. betweenMarie CurieandBronisława Dłuska, and likewise vice versa. Set members may not be in relation "to a certain degree" – either they are in relation or they are not. Formally, a relationRover a setXcan be seen as a set ofordered pairs(x,y)of members ofX.[2]The relationRholds betweenxandyif(x,y)is a member ofR. For example, the relation "is less than" on the natural numbers is aninfinite setRlessof pairs of natural numbers that contains both(1,3)and(3,4), but neither(3,1)nor(4,4). The relation "is anontrivial divisorof"on the set of one-digit natural numbers is sufficiently small to be shown here:Rdv= { (2,4), (2,6), (2,8), (3,6), (3,9), (4,8) }; for example2is a nontrivial divisor of8, but not vice versa, hence(2,8) ∈Rdv, but(8,2) ∉Rdv. IfRis a relation that holds forxandy, one often writesxRy. For most common relations in mathematics, special symbols are introduced, like "<" for"is less than", and "|" for"is a nontrivial divisor of", and, most popular "=" for"is equal to". For example, "1 < 3", "1is less than3", and "(1,3) ∈Rless" mean all the same; some authors also write "(1,3) ∈ (<)". Various properties of relations are investigated. A relationRis reflexive ifxRxholds for allx, and irreflexive ifxRxholds for nox. It is symmetric ifxRyalways impliesyRx, and asymmetric ifxRyimplies thatyRxis impossible. It is transitive ifxRyandyRzalways impliesxRz. For example, "is less than" is irreflexive, asymmetric, and transitive, but neither reflexive nor symmetric. "is sister of"is transitive, but neither reflexive (e.g.Pierre Curieis not a sister of himself), nor symmetric, nor asymmetric; while being irreflexive or not may be a matter of definition (is every woman a sister of herself?), "is ancestor of"is transitive, while "is parent of"is not. Mathematical theorems are known about combinations of relation properties, such as "a transitive relation is irreflexive if, and only if, it is asymmetric". Of particular importance are relations that satisfy certain combinations of properties. Apartial orderis a relation that is reflexive, antisymmetric, and transitive,[3]anequivalence relationis a relation that is reflexive, symmetric, and transitive,[4]afunctionis a relation that is right-unique and left-total (seebelow).[5][6] Since relations are sets, they can be manipulated using set operations, includingunion,intersection, andcomplementation, leading to thealgebra of sets. Furthermore, thecalculus of relationsincludes the operations of taking theconverseandcomposing relations.[7][8][9] The above concept of relation[a]has been generalized to admit relations between members of two different sets (heterogeneous relation, like "lies on" between the set of allpointsand that of alllinesin geometry), relations between three or more sets (finitary relation, like"personxlives in townyat timez"), and relations betweenclasses[b](like "is an element of"on the class of all sets, seeBinary relation § Sets versus classes). 
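Since a relation is just a set of ordered pairs, the properties listed above can be checked mechanically. The following sketch uses Python sets; the helper names are illustrative, and the relation R_dv is rebuilt from the description of "is a nontrivial divisor of" given above.

```python
# A relation R over a set X represented as a set of ordered pairs, with
# checks for the properties discussed above. Function names are illustrative.

def is_reflexive(R, X):   return all((x, x) in R for x in X)
def is_irreflexive(R, X): return all((x, x) not in R for x in X)
def is_symmetric(R):      return all((y, x) in R for (x, y) in R)
def is_asymmetric(R):     return all((y, x) not in R for (x, y) in R)
def is_transitive(R):
    return all((x, w) in R for (x, y) in R for (z, w) in R if y == z)

# "is less than" on {1, 2, 3, 4}:
X = {1, 2, 3, 4}
R_less = {(x, y) for x in X for y in X if x < y}

print(is_irreflexive(R_less, X), is_asymmetric(R_less), is_transitive(R_less))
# True True True
print(is_reflexive(R_less, X), is_symmetric(R_less))
# False False

# "is a nontrivial divisor of" on the one-digit natural numbers, as listed above:
R_dv = {(x, y) for x in range(2, 10) for y in range(2, 10)
        if y % x == 0 and x != y}
print(R_dv == {(2, 4), (2, 6), (2, 8), (3, 6), (3, 9), (4, 8)})  # True
```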
Given a setX, a relationRoverXis a set ofordered pairsof elements fromX, formally:R⊆ { (x,y) |x,y∈X}.[2][10] The statement(x,y) ∈Rreads "xisR-related toy" and is written ininfix notationasxRy.[7][8]The order of the elements is important; ifx≠ythenyRxcan be true or false independently ofxRy. For example,3divides9, but9does not divide3. A relationRon a finite setXmay be represented as: A transitive[c]relationRon a finite setXmay be also represented as For example, on the set of all divisors of12, define the relationRdivby Formally,X= { 1, 2, 3, 4, 6, 12 }andRdiv= { (1,2), (1,3), (1,4), (1,6), (1,12), (2,4), (2,6), (2,12), (3,6), (3,12), (4,12), (6,12) }. The representation ofRdivas a Boolean matrix is shown in the middle table; the representation both as a Hasse diagram and as a directed graph is shown in the left picture. The following are equivalent: As another example, define the relationRelonRby The representation ofRelas a 2D-plot obtains an ellipse, see right picture. SinceRis not finite, neither a directed graph, nor a finite Boolean matrix, nor a Hasse diagram can be used to depictRel. Some important properties that a relationRover a setXmay have are: The previous 2 alternatives are not exhaustive; e.g., the red relationy=x2given in the diagram below is neither irreflexive, nor reflexive, since it contains the pair(0,0), but not(2,2), respectively. Again, the previous 3 alternatives are far from being exhaustive; as an example over the natural numbers, the relationxRydefined byx> 2is neither symmetric (e.g.5R1, but not1R5) nor antisymmetric (e.g.6R4, but also4R6), let alone asymmetric. Uniqueness properties: Totality properties: Relations that satisfy certain combinations of the above properties are particularly useful, and thus have received names by their own. Orderings: Uniqueness properties: Uniqueness and totality properties: A relationRover setsXandYis said to becontained ina relationSoverXandY, writtenR⊆S, ifRis a subset ofS, that is, for allx∈Xandy∈Y, ifxRy, thenxSy. IfRis contained inSandSis contained inR, thenRandSare calledequalwrittenR=S. IfRis contained inSbutSis not contained inR, thenRis said to besmallerthanS, writtenR⊊S. For example, on therational numbers, the relation>is smaller than≥, and equal to the composition> ∘ >. The above concept of relation has been generalized to admit relations between members of two different sets. Given setsXandY, aheterogeneous relationRoverXandYis a subset of{ (x,y) |x∈X,y∈Y}.[2][22]WhenX=Y, the relation concept described above is obtained; it is often calledhomogeneous relation(orendorelation)[23][24]to distinguish it from its generalization. The above properties and operations that are marked "[d]" and "[e]", respectively, generalize to heterogeneous relations. An example of a heterogeneous relation is "oceanxborders continenty". The best-known examples arefunctions[f]with distinct domains and ranges, such assqrt :N→R+.
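The divisibility example above can be reconstructed in code. The defining condition of Rdiv is not shown in the text (the original formula was lost), so the sketch below infers it from the listed pairs as "x divides y and x ≠ y", and also prints the Boolean-matrix representation mentioned above.

```python
# Reconstructing R_div on the divisors of 12 from the listed pairs; the
# defining condition used here ("x divides y and x != y") is inferred from
# that list. The Boolean-matrix representation is printed as 0/1 rows.

X = [1, 2, 3, 4, 6, 12]                       # divisors of 12
R_div = {(x, y) for x in X for y in X if y % x == 0 and x != y}

expected = {(1, 2), (1, 3), (1, 4), (1, 6), (1, 12), (2, 4), (2, 6),
            (2, 12), (3, 6), (3, 12), (4, 12), (6, 12)}
print(R_div == expected)                      # True

# Boolean matrix: row x, column y, entry 1 iff (x, y) is in R_div.
for x in X:
    print(x, [1 if (x, y) in R_div else 0 for y in X])
```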
https://en.wikipedia.org/wiki/Relation_(mathematics)#Properties_of_relations
Set theoryis the branch ofmathematical logicthat studiessets, which can be informally described as collections of objects. Although objects of any kind can be collected into a set, set theory – as a branch ofmathematics– is mostly concerned with those that are relevant to mathematics as a whole. The modern study of set theory was initiated by the German mathematiciansRichard DedekindandGeorg Cantorin the 1870s. In particular, Georg Cantor is commonly considered the founder of set theory. The non-formalized systems investigated during this early stage go under the name ofnaive set theory. After the discovery ofparadoxes within naive set theory(such asRussell's paradox,Cantor's paradoxand theBurali-Forti paradox), variousaxiomatic systemswere proposed in the early twentieth century, of whichZermelo–Fraenkel set theory(with or without theaxiom of choice) is still the best-known and most studied. Set theory is commonly employed as a foundational system for the whole of mathematics, particularly in the form of Zermelo–Fraenkel set theory with the axiom of choice. Besides its foundational role, set theory also provides the framework to develop a mathematical theory ofinfinity, and has various applications incomputer science(such as in the theory ofrelational algebra),philosophy,formal semantics, andevolutionary dynamics. Its foundational appeal, together with itsparadoxes, and its implications for the concept of infinity and its multiple applications have made set theory an area of major interest forlogiciansandphilosophers of mathematics. Contemporary research into set theory covers a vast array of topics, ranging from the structure of thereal numberline to the study of theconsistencyoflarge cardinals. The basic notion of grouping objects has existed since at least theemergence of numbers, and the notion of treating sets as their own objects has existed since at least theTree of Porphyry, 3rd-century AD. The simplicity and ubiquity of sets makes it hard to determine the origin of sets as now used in mathematics, however,Bernard Bolzano'sParadoxes of the Infinite(Paradoxien des Unendlichen, 1851) is generally considered the first rigorous introduction of sets to mathematics. In his work, he (among other things) expanded onGalileo's paradox, and introducedone-to-one correspondenceof infinite sets, for example between theintervals[0,5]{\displaystyle [0,5]}and[0,12]{\displaystyle [0,12]}by the relation5y=12x{\displaystyle 5y=12x}. However, he resisted saying these sets wereequinumerous, and his work is generally considered to have been uninfluential in mathematics of his time.[1][2] Before mathematical set theory, basic concepts ofinfinitywere considered to be solidly in the domain of philosophy (see:Infinity (philosophy)andInfinity § History). Since the 5th century BC, beginning with Greek philosopherZeno of Eleain the West (and earlyIndian mathematiciansin the East), mathematicians had struggled with the concept of infinity. With thedevelopment of calculusin the late 17th century, philosophers began to generally distinguish betweenactual and potential infinity, wherein mathematics was only considered in the latter.[3]Carl Friedrich Gaussfamously stated: "Infinity is nothing more than a figure of speech which helps us talk about limits. 
The notion of a completed infinity doesn't belong in mathematics."[4] Development of mathematical set theory was motivated by several mathematicians.Bernhard Riemann's lectureOn the Hypotheses which lie at the Foundations of Geometry(1854) proposed new ideas abouttopology, and about basing mathematics (especially geometry) in terms of sets ormanifoldsin the sense of aclass(which he calledMannigfaltigkeit) now calledpoint-set topology. The lecture was published byRichard Dedekindin 1868, along with Riemann's paper ontrigonometric series(which presented theRiemann integral), The latter was a starting point a movement inreal analysisfor the study of “seriously”discontinuous functions. A youngGeorg Cantorentered into this area, which led him to the study ofpoint-sets. Around 1871, influenced by Riemann, Dedekind began working with sets in his publications, which dealt very clearly and precisely withequivalence relations,partitions of sets, andhomomorphisms. Thus, many of the usual set-theoretic procedures of twentieth-century mathematics go back to his work. However, he did not publish a formal explanation of his set theory until 1888. Set theory, as understood by modern mathematicians, is generally considered to be founded by a single paper in 1874 byGeorg CantortitledOn a Property of the Collection of All Real Algebraic Numbers.[5][6][7]In his paper, he developed the notion ofcardinality, comparing the sizes of two sets by setting them in one-to-one correspondence. His "revolutionary discovery" was that the set of allreal numbersisuncountable, that is, one cannot put all real numbers in a list. This theorem is proved usingCantor's first uncountability proof, which differs from the more familiar proof using hisdiagonal argument. Cantor introduced fundamental constructions in set theory, such as thepower setof a setA, which is the set of all possiblesubsetsofA. He later proved that the size of the power set ofAis strictly larger than the size ofA, even whenAis an infinite set; this result soon became known asCantor's theorem. Cantor developed a theory oftransfinite numbers, calledcardinalsandordinals, which extended the arithmetic of the natural numbers. His notation for the cardinal numbers was the Hebrew letterℵ{\displaystyle \aleph }(ℵ,aleph) with a natural number subscript; for the ordinals he employed the Greek letterω{\displaystyle \omega }(ω,omega). Set theory was beginning to become an essential ingredient of the new “modern” approach to mathematics. Originally, Cantor's theory of transfinite numbers was regarded as counter-intuitive – even shocking. This caused it to encounter resistance from mathematical contemporaries such asLeopold KroneckerandHenri Poincaréand later fromHermann WeylandL. E. J. Brouwer, whileLudwig Wittgensteinraisedphilosophical objections(see:Controversy over Cantor's theory).[a]Dedekind's algebraic style only began to find followers in the 1890s Despite the controversy, Cantor's set theory gained remarkable ground around the turn of the 20th century with the work of several notable mathematicians and philosophers. Richard Dedekind, around the same time, began working with sets in his publications, and famously constructing the real numbers usingDedekind cuts. He also worked withGiuseppe Peanoin developing thePeano axioms, which formalized natural-number arithmetic, using set-theoretic ideas, which also introduced theepsilonsymbol forset membership. Possibly most prominently,Gottlob Fregebegan to develop hisFoundations of Arithmetic. 
In his work, Frege tries to ground all mathematics in terms of logical axioms using Cantor's cardinality. For example, the sentence "the number of horses in the barn is four" means that four objects fall under the concepthorse in the barn. Frege attempted to explain our grasp of numbers through cardinality ('the number of...', orNx:Fx{\displaystyle Nx:Fx}), relying onHume's principle. However, Frege's work was short-lived, as it was found byBertrand Russellthat his axioms lead to acontradiction. Specifically, Frege'sBasic Law V(now known as theaxiom schema of unrestricted comprehension). According toBasic Law V, for any sufficiently well-definedproperty, there is the set of all and only the objects that have that property. The contradiction, calledRussell's paradox, is shown as follows: LetRbe the set of all sets that are not members of themselves. (This set is sometimes called "the Russell set".) IfRis not a member of itself, then its definition entails that it is a member of itself; yet, if it is a member of itself, then it is not a member of itself, since it is the set of all sets that are not members of themselves. The resulting contradiction is Russell's paradox. In symbols: This came around a time of severalparadoxesor counter-intuitive results. For example, that theparallel postulatecannot be proved, the existence ofmathematical objectsthat cannot be computed or explicitly described, and the existence of theorems of arithmetic that cannot be proved withPeano arithmetic. The result was afoundational crisis of mathematics. Set theory begins with a fundamentalbinary relationbetween an objectoand a setA. Ifois amember(orelement) ofA, the notationo∈Ais used. A set is described by listing elements separated by commas, or by a characterizing property of its elements, within braces { }.[8]Since sets are objects, the membership relation can relate sets as well, i.e., sets themselves can be members of other sets. A derived binary relation between two sets is the subset relation, also calledset inclusion. If all the members of setAare also members of setB, thenAis asubsetofB, denotedA⊆B. For example,{1, 2}is a subset of{1, 2, 3}, and so is{2}but{1, 4}is not. As implied by this definition, a set is a subset of itself. For cases where this possibility is unsuitable or would make sense to be rejected, the termproper subsetis defined, variously denotedA⊂B{\displaystyle A\subset B},A⊊B{\displaystyle A\subsetneq B}, orA⫋B{\displaystyle A\subsetneqq B}(note however that the notationA⊂B{\displaystyle A\subset B}is sometimes used synonymously withA⊆B{\displaystyle A\subseteq B}; that is, allowing the possibility thatAandBare equal). We callAaproper subsetofBif and only ifAis a subset ofB, butAis not equal toB. Also, 1, 2, and 3 are members (elements) of the set{1, 2, 3}, but are not subsets of it; and in turn, the subsets, such as{1}, are not members of the set{1, 2, 3}. More complicated relations can exist; for example, the set{1}is both a member and a proper subset of the set{1, {1}}. Just asarithmeticfeaturesbinary operationsonnumbers, set theory features binary operations on sets.[9]The following is a partial list of them: Some basic sets of central importance are the set ofnatural numbers, the set ofreal numbersand theempty set– the unique set containing no elements. The empty set is also occasionally called thenull set,[15]though this name is ambiguous and can lead to several interpretations. 
The empty set can be denoted with empty braces "{}{\displaystyle \{\}}" or the symbol "∅{\displaystyle \varnothing }" or "∅{\displaystyle \emptyset }". Thepower setof a setA, denotedP(A){\displaystyle {\mathcal {P}}(A)}, is the set whose members are all of the possible subsets ofA. For example, the power set of{1, 2}is{ {}, {1}, {2}, {1, 2} }. Notably,P(A){\displaystyle {\mathcal {P}}(A)}contains bothAand the empty set. A set ispureif all of its members are sets, all members of its members are sets, and so on. For example, the set containing only the empty set is a nonempty pure set. In modern set theory, it is common to restrict attention to thevon Neumann universeof pure sets, and many systems of axiomatic set theory are designed to axiomatize the pure sets only. There are many technical advantages to this restriction, and little generality is lost, because essentially all mathematical concepts can be modeled by pure sets. Sets in the von Neumann universe are organized into acumulative hierarchy, based on how deeply their members, members of members, etc. are nested. Each set in this hierarchy is assigned (bytransfinite recursion) anordinal numberα{\displaystyle \alpha }, known as itsrank.The rank of a pure setX{\displaystyle X}is defined to be the least ordinal that is strictly greater than the rank of any of its elements. For example, the empty set is assigned rank 0, while the setcontaining only the empty set is assigned rank 1. For each ordinalα{\displaystyle \alpha }, the setVα{\displaystyle V_{\alpha }}is defined to consist of all pure sets with rank less thanα{\displaystyle \alpha }. The entire von Neumann universe is denotedV{\displaystyle V}. Elementary set theory can be studied informally and intuitively, and so can be taught in primary schools usingVenn diagrams. The intuitive approach tacitly assumes that a set may be formed from the class of all objects satisfying any particular defining condition. This assumption gives rise to paradoxes, the simplest and best known of which areRussell's paradoxand theBurali-Forti paradox.Axiomatic set theorywas originally devised to rid set theory of such paradoxes.[note 1] The most widely studied systems of axiomatic set theory imply that all sets form acumulative hierarchy.[b]Such systems come in two flavors, those whoseontologyconsists of: The above systems can be modified to allowurelements, objects that can be members of sets but that are not themselves sets and do not have any members. TheNew Foundationssystems ofNFU(allowingurelements) andNF(lacking them), associate withWillard Van Orman Quine, are not based on a cumulative hierarchy. NF and NFU include a "set of everything", relative to which every set has a complement. In these systems urelements matter, because NF, but not NFU, produces sets for which theaxiom of choicedoes not hold. Despite NF's ontology not reflecting the traditional cumulative hierarchy and violating well-foundedness,Thomas Forsterhas argued that it does reflect aniterative conception of set.[16] Systems ofconstructive set theory, such as CST, CZF, and IZF, embed their set axioms inintuitionisticinstead ofclassical logic. Yet other systems accept classical logic but feature a nonstandard membership relation. These includerough set theoryandfuzzy set theory, in which the value of anatomic formulaembodying the membership relation is not simplyTrueorFalse. TheBoolean-valued modelsofZFCare a related subject. 
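The membership, subset, and power-set notions described above can be illustrated with a short sketch; Python frozensets stand in for sets so that sets can themselves be elements, and the helper names are illustrative.

```python
# Membership vs. subset, and the power set, sketched with Python frozensets
# (frozenset is used so that sets can themselves be elements of sets).
from itertools import chain, combinations

def power_set(A):
    A = list(A)
    return {frozenset(c) for c in chain.from_iterable(
        combinations(A, r) for r in range(len(A) + 1))}

A = frozenset({1, 2})
P = power_set(A)
print(len(P), frozenset() in P, A in P)   # 4 True True: P(A) contains {} and A

# {1} is both a member and a proper subset of {1, {1}}:
one = frozenset({1})
S = frozenset({1, one})
print(one in S)    # True  (membership)
print(one < S)     # True  (proper subset: every element of {1} is in S, and {1} != S)

# 1 is a member of {1, 2, 3}, but it is {1}, not 1, that is a subset of it:
print(1 in {1, 2, 3}, {1} <= {1, 2, 3})   # True True
```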
An enrichment of ZFC calledinternal set theorywas proposed byEdward Nelsonin 1977.[17] Many mathematical concepts can be defined precisely using only set theoretic concepts. For example, mathematical structures as diverse asgraphs,manifolds,rings,vector spaces, andrelational algebrascan all be defined as sets satisfying various (axiomatic) properties.Equivalenceandorder relationsare ubiquitous in mathematics, and the theory of mathematicalrelationscan be described in set theory.[18][19] Set theory is also a promising foundational system for much of mathematics. Since the publication of the first volume ofPrincipia Mathematica, it has been claimed that most (or even all) mathematical theorems can be derived using an aptly designed set of axioms for set theory, augmented with many definitions, usingfirstorsecond-order logic. For example, properties of thenaturalandreal numberscan be derived within set theory, as each of these number systems can be defined by representing their elements as sets of specific forms.[20] Set theory as a foundation formathematical analysis,topology,abstract algebra, anddiscrete mathematicsis likewise uncontroversial; mathematicians accept (in principle) that theorems in these areas can be derived from the relevant definitions and the axioms of set theory. However, it remains that few full derivations of complex mathematical theorems from set theory have been formally verified, since such formal derivations are often much longer than the natural language proofs mathematicians commonly present. One verification project,Metamath, includes human-written, computer-verified derivations of more than 12,000 theorems starting fromZFCset theory,first-order logicandpropositional logic.[21] Set theory is a major area of research in mathematics with many interrelated subfields: Combinatorial set theoryconcerns extensions of finitecombinatoricsto infinite sets. This includes the study ofcardinal arithmeticand the study of extensions ofRamsey's theoremsuch as theErdős–Rado theorem. Descriptive set theoryis the study of subsets of thereal lineand, more generally, subsets ofPolish spaces. It begins with the study ofpointclassesin theBorel hierarchyand extends to the study of more complex hierarchies such as theprojective hierarchyand theWadge hierarchy. Many properties ofBorel setscan be established in ZFC, but proving these properties hold for more complicated sets requires additional axioms related to determinacy and large cardinals. The field ofeffective descriptive set theoryis between set theory andrecursion theory. It includes the study oflightface pointclasses, and is closely related tohyperarithmetical theory. In many cases, results of classical descriptive set theory have effective versions; in some cases, new results are obtained by proving the effective version first and then extending ("relativizing") it to make it more broadly applicable. A recent area of research concernsBorel equivalence relationsand more complicated definableequivalence relations. This has important applications to the study ofinvariantsin many fields of mathematics. In set theory as Cantor defined and Zermelo and Fraenkel axiomatized, an object is either a member of a set or not. Infuzzy set theorythis condition was relaxed byLotfi A. Zadehso an object has adegree of membershipin a set, a number between 0 and 1. For example, the degree of membership of a person in the set of "tall people" is more flexible than a simple yes or no answer and can be a real number such as 0.75. 
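A minimal sketch of the fuzzy-membership idea just described, assuming the standard Zadeh operations (max for union, min for intersection, 1 − μ for complement), which are not spelled out in the text above; the names and membership degrees are illustrative.

```python
# A minimal fuzzy-set sketch: membership is a degree in [0, 1] rather than
# a yes/no answer. The max/min/1-minus rules are the standard Zadeh
# operations, assumed here; they are not spelled out in the text above.

tall   = {"Ann": 0.75, "Bob": 0.30, "Cy": 1.00}   # degrees of membership
strong = {"Ann": 0.40, "Bob": 0.90, "Cy": 0.60}

def f_union(a, b):        # membership in "tall or strong"
    return {k: max(a.get(k, 0.0), b.get(k, 0.0)) for k in a.keys() | b.keys()}

def f_intersection(a, b): # membership in "tall and strong"
    return {k: min(a.get(k, 0.0), b.get(k, 0.0)) for k in a.keys() & b.keys()}

def f_complement(a):      # membership in "not tall"
    return {k: 1.0 - v for k, v in a.items()}

print(f_union(tall, strong)["Ann"])          # 0.75
print(f_intersection(tall, strong)["Ann"])   # 0.4
print(f_complement(tall)["Bob"])             # 0.7
```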
Aninner modelof Zermelo–Fraenkel set theory (ZF) is a transitiveclassthat includes all the ordinals and satisfies all the axioms of ZF. The canonical example is theconstructible universeLdeveloped by Gödel. One reason that the study of inner models is of interest is that it can be used to prove consistency results. For example, it can be shown that regardless of whether a modelVof ZF satisfies thecontinuum hypothesisor theaxiom of choice, the inner modelLconstructed inside the original model will satisfy both the generalized continuum hypothesis and the axiom of choice. Thus the assumption that ZF is consistent (has at least one model) implies that ZF together with these two principles is consistent. The study of inner models is common in the study ofdeterminacyandlarge cardinals, especially when considering axioms such as the axiom of determinacy that contradict the axiom of choice. Even if a fixed model of set theory satisfies the axiom of choice, it is possible for an inner model to fail to satisfy the axiom of choice. For example, the existence of sufficiently large cardinals implies that there is an inner model satisfying the axiom of determinacy (and thus not satisfying the axiom of choice).[22] Alarge cardinalis a cardinal number with an extra property. Many such properties are studied, includinginaccessible cardinals,measurable cardinals, and many more. These properties typically imply the cardinal number must be very large, with the existence of a cardinal with the specified property unprovable inZermelo–Fraenkel set theory. Determinacyrefers to the fact that, under appropriate assumptions, certain two-player games of perfect information are determined from the start in the sense that one player must have a winning strategy. The existence of these strategies has important consequences in descriptive set theory, as the assumption that a broader class of games is determined often implies that a broader class of sets will have a topological property. Theaxiom of determinacy(AD) is an important object of study; although incompatible with the axiom of choice, AD implies that all subsets of the real line are well behaved (in particular, measurable and with the perfect set property). AD can be used to prove that theWadge degreeshave an elegant structure. Paul Coheninvented the method offorcingwhile searching for amodelofZFCin which thecontinuum hypothesisfails, or a model of ZF in which theaxiom of choicefails. Forcing adjoins to some given model of set theory additional sets in order to create a larger model with properties determined (i.e. "forced") by the construction and the original model. For example, Cohen's construction adjoins additional subsets of thenatural numberswithout changing any of thecardinal numbersof the original model. Forcing is also one of two methods for provingrelative consistencyby finitistic methods, the other method beingBoolean-valued models. Acardinal invariantis a property of the real line measured by a cardinal number. For example, a well-studied invariant is the smallest cardinality of a collection ofmeagre setsof reals whose union is the entire real line. These are invariants in the sense that any two isomorphic models of set theory must give the same cardinal for each invariant. Many cardinal invariants have been studied, and the relationships between them are often complex and related to axioms of set theory. 
Set-theoretic topologystudies questions ofgeneral topologythat are set-theoretic in nature or that require advanced methods of set theory for their solution. Many of these theorems are independent of ZFC, requiring stronger axioms for their proof. A famous problem is thenormal Moore space question, a question in general topology that was the subject of intense research. The answer to the normal Moore space question was eventually proved to be independent of ZFC. From set theory's inception, some mathematicians have objected to it as afoundation for mathematics. The most common objection to set theory, oneKroneckervoiced in set theory's earliest years, starts from theconstructivistview that mathematics is loosely related to computation. If this view is granted, then the treatment of infinite sets, both innaiveand in axiomatic set theory, introduces into mathematics methods and objects that are not computable even in principle. The feasibility of constructivism as a substitute foundation for mathematics was greatly increased byErrett Bishop's influential bookFoundations of Constructive Analysis.[23] A different objection put forth byHenri Poincaréis that defining sets using the axiom schemas ofspecificationandreplacement, as well as theaxiom of power set, introducesimpredicativity, a type ofcircularity, into the definitions of mathematical objects. The scope of predicatively founded mathematics, while less than that of the commonly accepted Zermelo–Fraenkel theory, is much greater than that of constructive mathematics, to the point thatSolomon Fefermanhas said that "all of scientifically applicable analysis can be developed [using predicative methods]".[24] Ludwig Wittgensteincondemned set theory philosophically for its connotations ofmathematical platonism.[25]He wrote that "set theory is wrong", since it builds on the "nonsense" of fictitious symbolism, has "pernicious idioms", and that it is nonsensical to talk about "all numbers".[26]Wittgenstein identified mathematics with algorithmic human deduction;[27]the need for a secure foundation for mathematics seemed, to him, nonsensical.[28]Moreover, since human effort is necessarily finite, Wittgenstein's philosophy required an ontological commitment to radicalconstructivismandfinitism. Meta-mathematical statements – which, for Wittgenstein, included any statement quantifying over infinite domains, and thus almost all modern set theory – are not mathematics.[29]Few modern philosophers have adopted Wittgenstein's views after a spectacular blunder inRemarks on the Foundations of Mathematics: Wittgenstein attempted to refuteGödel's incompleteness theoremsafter having only read the abstract. As reviewersKreisel,Bernays,Dummett, andGoodsteinall pointed out, many of his critiques did not apply to the paper in full. Only recently have philosophers such asCrispin Wrightbegun to rehabilitate Wittgenstein's arguments.[30] Category theoristshave proposedtopos theoryas an alternative to traditional axiomatic set theory. Topos theory can interpret various alternatives to that theory, such asconstructivism, finite set theory, andcomputableset theory.[31][32]Topoi also give a natural setting for forcing and discussions of the independence of choice from ZF, as well as providing the framework forpointless topologyandStone spaces.[33] An active area of research is theunivalent foundationsand related to ithomotopy type theory. 
Within homotopy type theory, a set may be regarded as a homotopy 0-type, withuniversal propertiesof sets arising from the inductive and recursive properties ofhigher inductive types. Principles such as theaxiom of choiceand thelaw of the excluded middlecan be formulated in a manner corresponding to the classical formulation in set theory or perhaps in a spectrum of distinct ways unique to type theory. Some of these principles may be proven to be a consequence of other principles. The variety of formulations of these axiomatic principles allows for a detailed analysis of the formulations required in order to derive various mathematical results.[34][35] As set theory gained popularity as a foundation for modern mathematics, there has been support for the idea of introducing the basics ofnaive set theoryearly inmathematics education. In the US in the 1960s, theNew Mathexperiment aimed to teach basic set theory, among other abstract concepts, toprimary schoolstudents but was met with much criticism.[36]The math syllabus in European schools followed this trend and currently includes the subject at different levels in all grades.Venn diagramsare widely employed to explain basic set-theoretic relationships to primary school students (even thoughJohn Vennoriginally devised them as part of a procedure to assess thevalidityofinferencesinterm logic). Set theory is used to introduce students tological operators(NOT, AND, OR), and semantic or rule description (technicallyintensional definition)[37]of sets (e.g. "months starting with the letterA"), which may be useful when learningcomputer programming, sinceBoolean logicis used in variousprogramming languages. Likewise, sets and other collection-like objects, such asmultisetsandlists, are commondatatypesin computer science and programming.[38] In addition to that, certain sets are commonly used in mathematical teaching, such as the setsN{\displaystyle \mathbb {N} }ofnatural numbers,Z{\displaystyle \mathbb {Z} }ofintegers,R{\displaystyle \mathbb {R} }ofreal numbers, etc.). These are commonly used when defining amathematical functionas a relation from one set (thedomain) to another set (therange).[39]
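The educational link between sets and Boolean logic mentioned above can be made concrete with a set datatype. The sketch below uses the "months starting with the letter A" rule from the text; the second rule and the printed results are illustrative.

```python
# Sets as a datatype, with an intensional (rule-based) definition and the
# correspondence between set operations and the logical operators AND, OR, NOT.

MONTHS = {"January", "February", "March", "April", "May", "June", "July",
          "August", "September", "October", "November", "December"}

# Intensional definition: "months starting with the letter A".
starts_with_a = {m for m in MONTHS if m.startswith("A")}
print(sorted(starts_with_a))            # ['April', 'August']

# A second rule, for combining: "months whose name contains an r".
has_r = {m for m in MONTHS if "r" in m.lower()}

print(sorted(starts_with_a & has_r))    # AND -> intersection: ['April']
print(len(starts_with_a | has_r))       # OR  -> union: 9 months
print(sorted(MONTHS - has_r))           # NOT -> complement within MONTHS
```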
https://en.wikipedia.org/wiki/Set_theory
Inmathematics, asetiscountableif either it isfiniteor it can be made inone to one correspondencewith the set ofnatural numbers.[a]Equivalently, a set iscountableif there exists aninjective functionfrom it into the natural numbers; this means that each element in the set may be associated to a unique natural number, or that the elements of the set can be counted one at a time, although the counting may never finish due to an infinite number of elements. In more technical terms, assuming theaxiom of countable choice, a set iscountableif itscardinality(the number of elements of the set) is not greater than that of the natural numbers. A countable set that is not finite is said to becountably infinite. The concept is attributed toGeorg Cantor, who proved the existence ofuncountable sets, that is, sets that are not countable; for example the set of thereal numbers. Although the terms "countable" and "countably infinite" as defined here are quite common, the terminology is not universal.[1]An alternative style usescountableto mean what is here called countably infinite, andat most countableto mean what is here called countable.[2][3] The termsenumerable[4]anddenumerable[5][6]may also be used, e.g. referring to countable and countably infinite respectively,[7]definitions vary and care is needed respecting the difference withrecursively enumerable.[8] A setS{\displaystyle S}iscountableif: All of these definitions are equivalent. A setS{\displaystyle S}iscountablyinfiniteif: A set isuncountableif it is not countable, i.e. its cardinality is greater thanℵ0{\displaystyle \aleph _{0}}.[9] In 1874, inhis first set theory article, Cantor proved that the set ofreal numbersis uncountable, thus showing that not all infinite sets are countable.[16]In 1878, he used one-to-one correspondences to define and compare cardinalities.[17]In 1883, he extended the natural numbers with his infiniteordinals, and used sets of ordinals to produce an infinity of sets having different infinite cardinalities.[18] Asetis a collection ofelements, and may be described in many ways. One way is simply to list all of its elements; for example, the set consisting of the integers 3, 4, and 5 may be denoted{3,4,5}{\displaystyle \{3,4,5\}}, called roster form.[19]This is only effective for small sets, however; for larger sets, this would be time-consuming and error-prone. Instead of listing every single element, sometimes an ellipsis ("...") is used to represent many elements between the starting element and the end element in a set, if the writer believes that the reader can easily guess what ... represents; for example,{1,2,3,…,100}{\displaystyle \{1,2,3,\dots ,100\}}presumably denotes the set ofintegersfrom 1 to 100. Even in this case, however, it is stillpossibleto list all the elements, because the number of elements in the set is finite. If we number the elements of the set 1, 2, and so on, up ton{\displaystyle n}, this gives us the usual definition of "sets of sizen{\displaystyle n}". Some sets areinfinite; these sets have more thann{\displaystyle n}elements wheren{\displaystyle n}is any integer that can be specified. (No matter how large the specified integern{\displaystyle n}is, such asn=101000{\displaystyle n=10^{1000}}, infinite sets have more thann{\displaystyle n}elements.) For example, the set of natural numbers, denotable by{0,1,2,3,4,5,…}{\displaystyle \{0,1,2,3,4,5,\dots \}},[a]has infinitely many elements, and we cannot use any natural number to give its size. 
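The idea that a countably infinite set can be "counted one at a time, although the counting may never finish" can be sketched with a generator; the enumeration order 0, 1, −1, 2, −2, … and the helper names are illustrative choices, not taken from the text.

```python
# Counting an infinite set one element at a time: a generator that lists all
# integers in the order 0, 1, -1, 2, -2, ... Every integer appears at some
# finite position, even though the enumeration itself never finishes.
from itertools import islice

def all_integers():
    yield 0
    n = 1
    while True:        # runs forever; callers take only as much as they need
        yield n
        yield -n
        n += 1

print(list(islice(all_integers(), 9)))   # [0, 1, -1, 2, -2, 3, -3, 4, -4]

# The position of any integer k in this listing (an injection into N):
def position(k):
    return 0 if k == 0 else 2 * k - 1 if k > 0 else -2 * k
print(position(-4))                      # 8, matching the listing above
```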
It might seem natural to divide the sets into different classes: put all the sets containing one element together; all the sets containing two elements together; ...; finally, put together all infinite sets and consider them as having the same size. This view works well for countably infinite sets and was the prevailing assumption before Georg Cantor's work. For example, there are infinitely many odd integers, infinitely many even integers, and also infinitely many integers overall. We can consider all these sets to have the same "size" because we can arrange things such that, for every integer, there is a distinct even integer:…−2→−4,−1→−2,0→0,1→2,2→4⋯{\displaystyle \ldots \,-\!2\!\rightarrow \!-\!4,\,-\!1\!\rightarrow \!-\!2,\,0\!\rightarrow \!0,\,1\!\rightarrow \!2,\,2\!\rightarrow \!4\,\cdots }or, more generally,n→2n{\displaystyle n\rightarrow 2n}(see picture). What we have done here is arrange the integers and the even integers into aone-to-one correspondence(orbijection), which is afunctionthat maps between two sets such that each element of each set corresponds to a single element in the other set. This mathematical notion of "size", cardinality, is that two sets are of the same size if and only if there is a bijection between them. We call all sets that are in one-to-one correspondence with the integerscountably infiniteand say they have cardinalityℵ0{\displaystyle \aleph _{0}}. Georg Cantorshowed that not all infinite sets are countably infinite. For example, the real numbers cannot be put into one-to-one correspondence with the natural numbers (non-negative integers). The set of real numbers has a greater cardinality than the set of natural numbers and is said to be uncountable. By definition, a setS{\displaystyle S}iscountableif there exists abijectionbetweenS{\displaystyle S}and a subset of thenatural numbersN={0,1,2,…}{\displaystyle \mathbb {N} =\{0,1,2,\dots \}}. For example, define the correspondencea↔1,b↔2,c↔3{\displaystyle a\leftrightarrow 1,\ b\leftrightarrow 2,\ c\leftrightarrow 3}Since every element ofS={a,b,c}{\displaystyle S=\{a,b,c\}}is paired withprecisely oneelement of{1,2,3}{\displaystyle \{1,2,3\}},andvice versa, this defines a bijection, and shows thatS{\displaystyle S}is countable. Similarly we can show all finite sets are countable. As for the case of infinite sets, a setS{\displaystyle S}is countably infinite if there is abijectionbetweenS{\displaystyle S}and all ofN{\displaystyle \mathbb {N} }. As examples, consider the setsA={1,2,3,…}{\displaystyle A=\{1,2,3,\dots \}}, the set of positiveintegers, andB={0,2,4,6,…}{\displaystyle B=\{0,2,4,6,\dots \}}, the set of even integers. We can show these sets are countably infinite by exhibiting a bijection to the natural numbers. This can be achieved using the assignmentsn↔n+1{\displaystyle n\leftrightarrow n+1}andn↔2n{\displaystyle n\leftrightarrow 2n}, so that0↔1,1↔2,2↔3,3↔4,4↔5,…0↔0,1↔2,2↔4,3↔6,4↔8,…{\displaystyle {\begin{matrix}0\leftrightarrow 1,&1\leftrightarrow 2,&2\leftrightarrow 3,&3\leftrightarrow 4,&4\leftrightarrow 5,&\ldots \\[6pt]0\leftrightarrow 0,&1\leftrightarrow 2,&2\leftrightarrow 4,&3\leftrightarrow 6,&4\leftrightarrow 8,&\ldots \end{matrix}}}Every countably infinite set is countable, and every infinite countable set is countably infinite. 
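The pairings shown above, n ↔ n + 1 and n ↔ 2n, can be written as functions with explicit inverses and spot-checked on an initial segment; this is only a finite illustration of the bijections, not a proof.

```python
# The pairings shown above, n <-> n + 1 (naturals vs. positive integers) and
# n <-> 2n (naturals vs. even natural numbers), written with their inverses.

f = lambda n: n + 1          # N -> positive integers
f_inv = lambda m: m - 1

g = lambda n: 2 * n          # N -> even natural numbers
g_inv = lambda m: m // 2

print([(n, f(n)) for n in range(5)])   # [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
print([(n, g(n)) for n in range(5)])   # [(0, 0), (1, 2), (2, 4), (3, 6), (4, 8)]

# Round trips on a finite initial segment illustrate that each map pairs every
# element of one set with exactly one element of the other (a bijection).
assert all(f_inv(f(n)) == n and g_inv(g(n)) == n for n in range(1000))
assert all(f(f_inv(m)) == m for m in range(1, 1001))        # positive integers
assert all(g(g_inv(m)) == m for m in range(0, 2000, 2))     # even numbers
```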
Furthermore, any subset of the natural numbers is countable, and more generally: Theorem—A subset of a countable set is countable.[20] The set of allordered pairsof natural numbers (theCartesian productof two sets of natural numbers,N×N{\displaystyle \mathbb {N} \times \mathbb {N} }is countably infinite, as can be seen by following a path like the one in the picture: The resultingmappingproceeds as follows: 0↔(0,0),1↔(1,0),2↔(0,1),3↔(2,0),4↔(1,1),5↔(0,2),6↔(3,0),…{\displaystyle 0\leftrightarrow (0,0),1\leftrightarrow (1,0),2\leftrightarrow (0,1),3\leftrightarrow (2,0),4\leftrightarrow (1,1),5\leftrightarrow (0,2),6\leftrightarrow (3,0),\ldots }This mapping covers all such ordered pairs. This form of triangular mappingrecursivelygeneralizes ton{\displaystyle n}-tuplesof natural numbers, i.e.,(a1,a2,a3,…,an){\displaystyle (a_{1},a_{2},a_{3},\dots ,a_{n})}whereai{\displaystyle a_{i}}andn{\displaystyle n}are natural numbers, by repeatedly mapping the first two elements of ann{\displaystyle n}-tuple to a natural number. For example,(0,2,3){\displaystyle (0,2,3)}can be written as((0,2),3){\displaystyle ((0,2),3)}. Then(0,2){\displaystyle (0,2)}maps to 5 so((0,2),3){\displaystyle ((0,2),3)}maps to(5,3){\displaystyle (5,3)}, then(5,3){\displaystyle (5,3)}maps to 39. Since a different 2-tuple, that is a pair such as(a,b){\displaystyle (a,b)}, maps to a different natural number, a difference between two n-tuples by a single element is enough to ensure the n-tuples being mapped to different natural numbers. So, an injection from the set ofn{\displaystyle n}-tuples to the set of natural numbersN{\displaystyle \mathbb {N} }is proved. For the set ofn{\displaystyle n}-tuples made by the Cartesian product of finitely many different sets, each element in each tuple has the correspondence to a natural number, so every tuple can be written in natural numbers then the same logic is applied to prove the theorem. Theorem—TheCartesian productof finitely many countable sets is countable.[21][b] The set of allintegersZ{\displaystyle \mathbb {Z} }and the set of allrational numbersQ{\displaystyle \mathbb {Q} }may intuitively seem much bigger thanN{\displaystyle \mathbb {N} }. But looks can be deceiving. If a pair is treated as thenumeratoranddenominatorof avulgar fraction(a fraction in the form ofa/b{\displaystyle a/b}wherea{\displaystyle a}andb≠0{\displaystyle b\neq 0}are integers), then for every positive fraction, we can come up with a distinct natural number corresponding to it. This representation also includes the natural numbers, since every natural numbern{\displaystyle n}is also a fractionn/1{\displaystyle n/1}. So we can conclude that there are exactly as many positive rational numbers as there are positive integers. This is also true for all rational numbers, as can be seen below. Theorem—Z{\displaystyle \mathbb {Z} }(the set of all integers) andQ{\displaystyle \mathbb {Q} }(the set of all rational numbers) are countable.[c] In a similar manner, the set ofalgebraic numbersis countable.[23][d] Sometimes more than one mapping is useful: a setA{\displaystyle A}to be shown as countable is one-to-one mapped (injection) to another setB{\displaystyle B}, thenA{\displaystyle A}is proved as countable ifB{\displaystyle B}is one-to-one mapped to the set of natural numbers. For example, the set of positiverational numberscan easily be one-to-one mapped to the set of natural number pairs (2-tuples) becausep/q{\displaystyle p/q}maps to(p,q){\displaystyle (p,q)}. 
Since the set of natural number pairs is one-to-one mapped (actually one-to-one correspondence or bijection) to the set of natural numbers as shown above, the positive rational number set is proved as countable. Theorem—Any finiteunionof countable sets is countable.[24][25][e] With the foresight of knowing that there are uncountable sets, we can wonder whether or not this last result can be pushed any further. The answer is "yes" and "no", we can extend it, but we need to assume a new axiom to do so. Theorem—(Assuming theaxiom of countable choice) The union of countably many countable sets is countable.[f] For example, given countable setsa,b,c,…{\displaystyle {\textbf {a}},{\textbf {b}},{\textbf {c}},\dots }, we first assign each element of each set a tuple, then we assign each tuple an index using a variant of the triangular enumeration we saw above:IndexTupleElement0(0,0)a01(0,1)a12(1,0)b03(0,2)a24(1,1)b15(2,0)c06(0,3)a37(1,2)b28(2,1)c19(3,0)d010(0,4)a4⋮{\displaystyle {\begin{array}{c|c|c }{\text{Index}}&{\text{Tuple}}&{\text{Element}}\\\hline 0&(0,0)&{\textbf {a}}_{0}\\1&(0,1)&{\textbf {a}}_{1}\\2&(1,0)&{\textbf {b}}_{0}\\3&(0,2)&{\textbf {a}}_{2}\\4&(1,1)&{\textbf {b}}_{1}\\5&(2,0)&{\textbf {c}}_{0}\\6&(0,3)&{\textbf {a}}_{3}\\7&(1,2)&{\textbf {b}}_{2}\\8&(2,1)&{\textbf {c}}_{1}\\9&(3,0)&{\textbf {d}}_{0}\\10&(0,4)&{\textbf {a}}_{4}\\\vdots &&\end{array}}} We need theaxiom of countable choiceto indexallthe setsa,b,c,…{\displaystyle {\textbf {a}},{\textbf {b}},{\textbf {c}},\dots }simultaneously. Theorem—The set of all finite-lengthsequencesof natural numbers is countable. This set is the union of the length-1 sequences, the length-2 sequences, the length-3 sequences, and so on, each of which is a countable set (finite Cartesian product). Thus the set is a countable union of countable sets, which is countable by the previous theorem. Theorem—The set of all finitesubsetsof the natural numbers is countable. The elements of any finite subset can be ordered into a finite sequence. There are only countably many finite sequences, so also there are only countably many finite subsets. Theorem—LetS{\displaystyle S}andT{\displaystyle T}be sets. These follow from the definitions of countable set as injective / surjective functions.[g] Cantor's theoremasserts that ifA{\displaystyle A}is a set andP(A){\displaystyle {\mathcal {P}}(A)}is itspower set, i.e. the set of all subsets ofA{\displaystyle A}, then there is no surjective function fromA{\displaystyle A}toP(A){\displaystyle {\mathcal {P}}(A)}. A proof is given in the articleCantor's theorem. As an immediate consequence of this and the Basic Theorem above we have: Proposition—The setP(N){\displaystyle {\mathcal {P}}(\mathbb {N} )}is not countable; i.e. it isuncountable. For an elaboration of this result seeCantor's diagonal argument. The set ofreal numbersis uncountable,[h]and so is the set of all infinitesequencesof natural numbers. If there is a set that is a standard model (seeinner model) of ZFC set theory, then there is a minimal standard model (seeConstructible universe). TheLöwenheim–Skolem theoremcan be used to show that this minimal model is countable. The fact that the notion of "uncountability" makes sense even in this model, and in particular that this modelMcontains elements that are: was seen as paradoxical in the early days of set theory; seeSkolem's paradoxfor more. The minimal standard model includes all thealgebraic numbersand all effectively computabletranscendental numbers, as well as many other kinds of numbers. 
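The triangular enumeration described above corresponds to the standard Cantor-style pairing (a, b) ↦ (a + b)(a + b + 1)/2 + b, which is assumed here because it reproduces the path and the values quoted in the text, including (0, 2) ↦ 5 and (5, 3) ↦ 39; the helper names are illustrative.

```python
# The triangular (Cantor-style) pairing matching the path described above:
# pairs are listed diagonal by diagonal, so (a, b) gets index
# (a + b)(a + b + 1)/2 + b. The checks reproduce the values quoted in the text.

def pair(a, b):
    d = a + b
    return d * (d + 1) // 2 + b

assert [pair(*t) for t in [(0, 0), (1, 0), (0, 1), (2, 0),
                           (1, 1), (0, 2), (3, 0)]] == [0, 1, 2, 3, 4, 5, 6]
assert pair(0, 2) == 5 and pair(5, 3) == 39    # so (0, 2, 3) -> (5, 3) -> 39

def encode_tuple(t):               # fold an n-tuple into one natural number
    n = t[0]
    for x in t[1:]:
        n = pair(n, x)
    return n
print(encode_tuple((0, 2, 3)))     # 39

# An injection from the positive rationals into N: send p/q (in lowest terms)
# to the index of the pair (p, q). Distinct rationals get distinct numbers.
from fractions import Fraction
def rational_index(r):
    return pair(r.numerator, r.denominator)
print(rational_index(Fraction(3, 4)), rational_index(Fraction(1, 2)))  # 32 8
```

The last two lines sketch the argument above for the countability of the positive rationals: each fraction in lowest terms is sent to the index of its numerator–denominator pair.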
Countable sets can be totally ordered in various ways. Two examples of well orders are the usual order on the natural numbers (0, 1, 2, 3, …) and the order on the integers that lists all the non-negative integers first and all the negative integers afterwards (0, 1, 2, 3, …; −1, −2, −3, …). Two examples of orders that are not well orders are the usual order on the integers (…, −3, −2, −1, 0, 1, 2, 3, …) and the usual order on the rational numbers. In both examples of well orders here, any non-empty subset has a least element; and in both examples of non-well orders, some subsets do not have a least element. This is the key property that determines whether a total order is also a well order.
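As a small illustration (a sketch; key_well is an ad hoc name), the second well order above can be realized as a sort key in Python, so that min finds the least element of any finite subset under that order:

```python
def key_well(z):
    """Sort key realizing the well order 0 < 1 < 2 < ... < -1 < -2 < ...:
    non-negative integers first in their usual order, then the negative
    integers ordered by absolute value."""
    return (0, z) if z >= 0 else (1, -z)

for subset in [{-3, -1, 4}, {-7, -2}, {0, 5, -5}]:
    print(sorted(subset, key=key_well), "-> least element:", min(subset, key=key_well))
# [4, -1, -3] -> least element: 4
# [-2, -7] -> least element: -2
# [0, 5, -5] -> least element: 0
```

Under this order even the infinite set of all negative integers has a least element (namely −1), whereas under the usual order of ℤ it has none.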
https://en.wikipedia.org/wiki/Countable_set
Inmathematics, anuncountable set, informally, is aninfinite setthat contains too manyelementsto becountable. The uncountability of a set is closely related to itscardinal number: a set is uncountable if its cardinal number is larger thanaleph-null, the cardinality of thenatural numbers. Examples of uncountable sets include the set⁠R{\displaystyle \mathbb {R} }⁠of allreal numbersand set of all subsets of the natural numbers. There are many equivalent characterizations of uncountability. A setXis uncountable if and only if any of the following conditions hold: The first three of these characterizations can be proven equivalent inZermelo–Fraenkel set theorywithout theaxiom of choice, but the equivalence of the third and fourth cannot be proved without additional choice principles. If an uncountable setXis a subset of setY, thenYis uncountable. The best known example of an uncountable set is the set⁠R{\displaystyle \mathbb {R} }⁠of allreal numbers;Cantor's diagonal argumentshows that this set is uncountable. The diagonalization proof technique can also be used to show that several other sets are uncountable, such as the set of all infinitesequencesofnatural numbers⁠N{\displaystyle \mathbb {N} }⁠(see: (sequenceA102288in theOEIS)), and theset of all subsetsof the set of natural numbers. The cardinality of⁠R{\displaystyle \mathbb {R} }⁠is often called thecardinality of the continuum, and denoted byc{\displaystyle {\mathfrak {c}}}, or2ℵ0{\displaystyle 2^{\aleph _{0}}}, orℶ1{\displaystyle \beth _{1}}(beth-one). TheCantor setis an uncountablesubsetof⁠R{\displaystyle \mathbb {R} }⁠. The Cantor set is afractaland hasHausdorff dimensiongreater than zero but less than one (⁠R{\displaystyle \mathbb {R} }⁠has dimension one). This is an example of the following fact: any subset of⁠R{\displaystyle \mathbb {R} }⁠of Hausdorff dimension strictly greater than zero must be uncountable. Another example of an uncountable set is the set of allfunctionsfrom⁠R{\displaystyle \mathbb {R} }⁠to⁠R{\displaystyle \mathbb {R} }⁠. This set is even "more uncountable" than⁠R{\displaystyle \mathbb {R} }⁠in the sense that the cardinality of this set isℶ2{\displaystyle \beth _{2}}(beth two), which is larger thanℶ1{\displaystyle \beth _{1}}. A more abstract example of an uncountable set is the set of all countableordinal numbers, denoted by Ω or ω1.[1]The cardinality of Ω is denotedℵ1{\displaystyle \aleph _{1}}(aleph-one). It can be shown, using theaxiom of choice, thatℵ1{\displaystyle \aleph _{1}}is thesmallestuncountable cardinal number. Thus eitherℶ1{\displaystyle \beth _{1}}, the cardinality of the reals, is equal toℵ1{\displaystyle \aleph _{1}}or it is strictly larger.Georg Cantorwas the first to propose the question of whetherℶ1{\displaystyle \beth _{1}}is equal toℵ1{\displaystyle \aleph _{1}}. In 1900,David Hilbertposed this question as the first of his23 problems. The statement thatℵ1=ℶ1{\displaystyle \aleph _{1}=\beth _{1}}is now called thecontinuum hypothesis, and is known to be independent of theZermelo–Fraenkel axiomsforset theory(including theaxiom of choice). Without theaxiom of choice, there might exist cardinalitiesincomparabletoℵ0{\displaystyle \aleph _{0}}(namely, the cardinalities ofDedekind-finiteinfinite sets). Sets of these cardinalities satisfy the first three characterizations above, but not the fourth characterization. Since these sets are not larger than the natural numbers in the sense of cardinality, some may not want to call them uncountable. 
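Assuming the axiom of choice, the cardinal numbers mentioned above line up as follows (a summary of the relations already stated in this section, in LaTeX):

```latex
\aleph_0 \;<\; \aleph_1 \;\le\; \mathfrak{c} \;=\; 2^{\aleph_0} \;=\; \beth_1 \;<\; 2^{\beth_1} \;=\; \beth_2
```

The continuum hypothesis is precisely the statement that the middle inequality is an equality, i.e. that ℵ1 = 𝔠.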
If the axiom of choice holds, the following conditions on a cardinal κ are equivalent: κ ≰ ℵ0; κ > ℵ0; and κ ≥ ℵ1. However, these may all be different if the axiom of choice fails. So it is not obvious which one is the appropriate generalization of "uncountability" when the axiom fails. It may be best to avoid using the word in this case and specify which of these one means.
https://en.wikipedia.org/wiki/Uncountable_set
Cantor's diagonal argument(among various similar names[note 1]) is amathematical proofthat there areinfinite setswhich cannot be put intoone-to-one correspondencewith the infinite set ofnatural numbers– informally, that there aresetswhich in some sense contain more elements than there are positive integers. Such sets are now calleduncountable sets, and the size of infinite sets is treated by the theory ofcardinal numbers, which Cantor began. Georg Cantorpublished this proof in 1891,[1][2]: 20–[3]but it was nothis first proofof the uncountability of thereal numbers, which appeared in 1874.[4][5]However, it demonstrates a general technique that has since been used in a wide range of proofs,[6]including the first ofGödel's incompleteness theorems[2]and Turing's answer to theEntscheidungsproblem. Diagonalization arguments are often also the source of contradictions likeRussell's paradox[7][8]andRichard's paradox.[2]: 27 Cantor considered the setTof all infinitesequencesofbinary digits(i.e. each digit is zero or one).[note 2]He begins with aconstructive proofof the followinglemma: The proof starts with an enumeration of elements fromT, for example Next, a sequencesis constructed by choosing the 1st digit ascomplementaryto the 1st digit ofs1(swapping0s for1s and vice versa), the 2nd digit as complementary to the 2nd digit ofs2, the 3rd digit as complementary to the 3rd digit ofs3, and generally for everyn, then-th digit as complementary to then-th digit ofsn. For the example above, this yields By construction,sis a member ofTthat differs from eachsn, since theirn-th digits differ (highlighted in the example). Hence,scannot occur in the enumeration. Based on this lemma, Cantor then uses aproof by contradictionto show that: The proof starts by assuming thatTiscountable. Then all its elements can be written in an enumerations1,s2, ... ,sn, ... . Applying the previous lemma to this enumeration produces a sequencesthat is a member ofT, but is not in the enumeration. However, ifTis enumerated, then every member ofT, including thiss, is in the enumeration. This contradiction implies that the original assumption is false. Therefore,Tis uncountable.[1] The uncountability of thereal numberswas already established byCantor's first uncountability proof, but it also follows from the above result. To prove this, aninjectionwill be constructed from the setTof infinite binary strings to the setRof real numbers. SinceTis uncountable, theimageof this function, which is a subset ofR, is uncountable. Therefore,Ris uncountable. Also, by using a method of construction devised by Cantor, abijectionwill be constructed betweenTandR. Therefore,TandRhave the same cardinality, which is called the "cardinality of the continuum" and is usually denoted byc{\displaystyle {\mathfrak {c}}}or2ℵ0{\displaystyle 2^{\aleph _{0}}}. An injection fromTtoRis given by mapping binary strings inTtodecimal fractions, such as mappingt= 0111... to the decimal 0.0111.... This function, defined byf(t) = 0.t, is an injection because it maps different strings to different numbers.[note 4] Constructing a bijection betweenTandRis slightly more complicated. Instead of mapping 0111... to the decimal 0.0111..., it can be mapped to thebase-bnumber: 0.0111...b. This leads to the family of functions:fb(t) = 0.tb. The functionsfb(t)are injections, except forf2(t). This function will be modified to produce a bijection betweenTandR. This construction uses a method devised by Cantor that was published in 1878. 
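The diagonal construction can be tried out on a finite prefix of an enumeration (a minimal Python sketch; the sample rows and the helper name diagonal_complement are illustrative, not taken from the article):

```python
def diagonal_complement(rows):
    """Given the first len(rows) terms of an enumeration s1, s2, ... of binary
    sequences (each row a string of '0'/'1' at least as long as the list),
    return the prefix of the diagonal sequence s: its n-th digit is the
    complement of the n-th digit of the n-th row."""
    return "".join("1" if rows[n][n] == "0" else "0" for n in range(len(rows)))

rows = ["0000000",
        "1111111",
        "0101010",
        "1010101",
        "1101011",
        "0011011",
        "1000100"]
s = diagonal_complement(rows)
print(s)                                         # 1011101
assert all(s[n] != rows[n][n] for n in range(len(rows)))
```

In the infinite setting this is exactly Cantor's argument: s differs from each sn in the n-th digit, so s cannot occur anywhere in the enumeration.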
He used it to construct a bijection between theclosed interval[0, 1] and theirrationalsin theopen interval(0, 1). He first removed acountably infinitesubset from each of these sets so that there is a bijection between the remaining uncountable sets. Since there is a bijection between the countably infinite subsets that have been removed, combining the two bijections produces a bijection between the original sets.[9] Cantor's method can be used to modify the functionf2(t) = 0.t2to produce a bijection fromTto (0, 1). Because some numbers have two binary expansions,f2(t)is not eveninjective. For example,f2(1000...) =0.1000...2= 1/2 andf2(0111...) =0.0111...2=1/4 + 1/8 + 1/16 + ...=1/2, so both 1000... and 0111... map to the same number, 1/2. To modifyf2(t), observe that it is a bijection except for a countably infinite subset of (0, 1) and a countably infinite subset ofT. It is not a bijection for the numbers in (0, 1) that have twobinary expansions. These are calleddyadicnumbers and have the formm/2nwheremis an odd integer andnis a natural number. Put these numbers in the sequence:r= (1/2, 1/4, 3/4, 1/8, 3/8, 5/8, 7/8, ...). Also,f2(t)is not a bijection to (0, 1) for the strings inTappearing after thebinary pointin the binary expansions of 0, 1, and the numbers in sequencer. Put these eventually-constant strings in the sequence:s= (000...,111..., 1000..., 0111..., 01000..., 00111..., 11000..., 10111..., ...). Define the bijectiong(t) fromTto (0, 1): Iftis thenthstring in sequences, letg(t) be thenthnumber in sequencer; otherwise,g(t) = 0.t2. To construct a bijection fromTtoR, start with thetangent functiontan(x), which is a bijection from (−π/2, π/2) toR(see the figure shown on the right). Next observe that thelinear functionh(x) =πx– π/2is a bijection from (0, 1) to (−π/2, π/2) (see the figure shown on the left). Thecomposite functiontan(h(x)) =tan(πx– π/2)is a bijection from (0, 1) toR. Composing this function withg(t) produces the function tan(h(g(t))) =tan(πg(t) – π/2), which is a bijection fromTtoR. A generalized form of the diagonal argument was used by Cantor to proveCantor's theorem: for everysetS, thepower setofS—that is, the set of allsubsetsofS(here written asP(S))—cannot be inbijectionwithSitself. This proof proceeds as follows: Letfbe anyfunctionfromStoP(S). It suffices to prove thatfcannot besurjective. This means that some memberTofP(S), i.e. some subset ofS, is not in theimageoff. As a candidate consider the set For everysinS, eithersis inTor not. Ifsis inT, then by definition ofT,sis not inf(s), soTis not equal tof(s). On the other hand, ifsis not inT, then by definition ofT,sis inf(s), so againTis not equal tof(s); see picture. For a more complete account of this proof, seeCantor's theorem. With equality defined as the existence of a bijection between their underlying sets, Cantor also defines binary predicate of cardinalities|S|{\displaystyle |S|}and|T|{\displaystyle |T|}in terms of theexistence of injectionsbetweenS{\displaystyle S}andT{\displaystyle T}. It has the properties of apreorderand is here written "≤{\displaystyle \leq }". One can embed the naturals into the binary sequences, thus proving variousinjection existencestatements explicitly, so that in this sense|N|≤|2N|{\displaystyle |{\mathbb {N} }|\leq |2^{\mathbb {N} }|}, where2N{\displaystyle 2^{\mathbb {N} }}denotes the function spaceN→{0,1}{\displaystyle {\mathbb {N} }\to \{0,1\}}. But following from the argument in the previous sections, there isno surjectionand so also no bijection, i.e. 
the set is uncountable. For this one may write|N|<|2N|{\displaystyle |{\mathbb {N} }|<|2^{\mathbb {N} }|}, where "<{\displaystyle <}" is understood to mean the existence of an injection together with the proven absence of a bijection (as opposed to alternatives such as the negation of Cantor's preorder, or a definition in terms ofassignedordinals). Also|S|<|P(S)|{\displaystyle |S|<|{\mathcal {P}}(S)|}in this sense, as has been shown, and at the same time it is the case that¬(|P(S)|≤|S|){\displaystyle \neg (|{\mathcal {P}}(S)|\leq |S|)}, for all setsS{\displaystyle S}. Assuming thelaw of excluded middle,characteristic functionssurject onto powersets, and then|2S|=|P(S)|{\displaystyle |2^{S}|=|{\mathcal {P}}(S)|}. So the uncountable2N{\displaystyle 2^{\mathbb {N} }}is also not enumerable and it can also be mapped ontoN{\displaystyle {\mathbb {N} }}. Classically, theSchröder–Bernstein theoremis valid and says that any two sets which are in the injective image of one another are in bijection as well. Here, every unbounded subset ofN{\displaystyle {\mathbb {N} }}is then in bijection withN{\displaystyle {\mathbb {N} }}itself, and everysubcountableset (a property in terms of surjections) is then already countable, i.e. in the surjective image ofN{\displaystyle {\mathbb {N} }}. In this context the possibilities are then exhausted, making "≤{\displaystyle \leq }" anon-strict partial order, or even atotal orderwhen assumingchoice. The diagonal argument thus establishes that, although both sets under consideration are infinite, there are actuallymoreinfinite sequences of ones and zeros than there are natural numbers. Cantor's result then also implies that the notion of theset of all setsis inconsistent: IfS{\displaystyle S}were the set of all sets, thenP(S){\displaystyle {\mathcal {P}}(S)}would at the same time be bigger thanS{\displaystyle S}and a subset ofS{\displaystyle S}. Also inconstructive mathematics, there is no surjection from the full domainN{\displaystyle {\mathbb {N} }}onto the space of functionsNN{\displaystyle {\mathbb {N} }^{\mathbb {N} }}or onto the collection of subsetsP(N){\displaystyle {\mathcal {P}}({\mathbb {N} })}, which is to say these two collections are uncountable. Again using "<{\displaystyle <}" for proven injection existence in conjunction with bijection absence, one hasN<2N{\displaystyle {\mathbb {N} }<2^{\mathbb {N} }}andS<P(S){\displaystyle S<{\mathcal {P}}(S)}. Further,¬(P(S)≤S){\displaystyle \neg ({\mathcal {P}}(S)\leq S)}, as previously noted. Likewise,2N≤NN{\displaystyle 2^{\mathbb {N} }\leq {\mathbb {N} }^{\mathbb {N} }},2S≤P(S){\displaystyle 2^{S}\leq {\mathcal {P}}(S)}and of courseS≤S{\displaystyle S\leq S}, also inconstructive set theory. It is however harder or impossible to order ordinals and also cardinals, constructively. For example, the Schröder–Bernstein theorem requires the law of excluded middle.[10]In fact, the standard ordering on the reals, extending the ordering of the rational numbers, is not necessarily decidable either. Neither are most properties of interesting classes of functions decidable, byRice's theorem, i.e. the set of counting numbers for the subcountable sets may not berecursiveand can thus fail to be countable. The elaborate collection of subsets of a set is constructively not exchangeable with the collection of its characteristic functions. 
In an otherwise constructive context (in which the law of excluded middle is not taken as axiom), it is consistent to adopt non-classical axioms that contradict consequences of the law of excluded middle. Uncountable sets such as2N{\displaystyle 2^{\mathbb {N} }}orNN{\displaystyle {\mathbb {N} }^{\mathbb {N} }}may be asserted to besubcountable.[11][12]This is a notion of size that is redundant in the classical context, but otherwise need not imply countability. The existence of injections from the uncountable2N{\displaystyle 2^{\mathbb {N} }}orNN{\displaystyle {\mathbb {N} }^{\mathbb {N} }}intoN{\displaystyle {\mathbb {N} }}is here possible as well.[13]So the cardinal relation fails to beantisymmetric. Consequently, also in the presence of function space sets that are even classically uncountable,intuitionistsdo not accept this relation to constitute a hierarchy of transfinite sizes.[14]When theaxiom of powersetis not adopted, in a constructive framework even the subcountability of all sets is then consistent. That all said, in common set theories, the non-existence of a set of all sets also already follows fromPredicative Separation. In a set theory, theories of mathematics aremodeled. Weaker logical axioms mean fewer constraints and so allow for a richer class of models. A set may be identified as amodel of the field of real numberswhen it fulfills someaxioms of real numbersor aconstructive rephrasingthereof. Various models have been studied, such as theCauchy realsor theDedekind reals, among others. The former relate to quotients of sequences while the later are well-behaved cuts taken from a powerset, if they exist. In the presence of excluded middle, those are all isomorphic and uncountable. Otherwise,variantsof the Dedekind reals can be countable[15]or inject into the naturals, but not jointly. When assumingcountable choice, constructive Cauchy reals even without an explicitmodulus of convergenceare thenCauchy-complete[16]and Dedekind reals simplify so as to become isomorphic to them. Indeed, here choice also aids diagonal constructions and when assuming it, Cauchy-complete models of the reals are uncountable. Russell's paradoxhas shown that set theory that includes anunrestricted comprehensionscheme is contradictory. Note that there is a similarity between the construction ofTand the set in Russell's paradox. Therefore, depending on how we modify the axiom scheme of comprehension in order to avoid Russell's paradox, arguments such as the non-existence of a set of all sets may or may not remain valid. Analogues of the diagonal argument are widely used in mathematics to prove the existence or nonexistence of certain objects. For example, the conventional proof of the unsolvability of thehalting problemis essentially a diagonal argument. Also, diagonalization was originally used to show the existence of arbitrarily hardcomplexity classesand played a key role in early attempts to proveP does not equal NP. The above proof fails forW. V. Quine's "New Foundations" set theory (NF). In NF, thenaive axiom scheme of comprehensionis modified to avoid the paradoxes by introducing a kind of "local"type theory. In this axiom scheme, isnota set — i.e., does not satisfy the axiom scheme. On the other hand, we might try to create a modified diagonal argument by noticing that isa set in NF. In which case, ifP1(S) is the set of one-element subsets ofSandfis a proposed bijection fromP1(S) toP(S), one is able to useproof by contradictionto prove that |P1(S)| < |P(S)|. 
The proof follows from the fact that if f were indeed a map onto P(S), then we could find r in S such that f({r}) coincides with the modified diagonal set, above. We would conclude that if r is not in f({r}), then r is in f({r}), and vice versa. It is not possible to put P1(S) in a one-to-one relation with S, as the two have different types, and so any function so defined would violate the typing rules for the comprehension scheme.
https://en.wikipedia.org/wiki/Cantor%27s_diagonal_argument
Inmathematics, areal numberis anumberthat can be used tomeasureacontinuousone-dimensionalquantitysuch as adurationortemperature. Here,continuousmeans that pairs of values can have arbitrarily small differences.[a]Every real number can be almost uniquely represented by an infinitedecimal expansion.[b][1] The real numbers are fundamental incalculus(and in many other branches of mathematics), in particular by their role in the classical definitions oflimits,continuityandderivatives.[c] The set of real numbers, sometimes called "the reals", is traditionallydenotedby a boldR, often usingblackboard bold,⁠R{\displaystyle \mathbb {R} }⁠.[2][3]The adjectivereal, used in the 17th century byRené Descartes, distinguishes real numbers fromimaginary numberssuch as thesquare rootsof−1.[4] The real numbers include therational numbers, such as theinteger−5and thefraction4 / 3. The rest of the real numbers are calledirrational numbers. Some irrational numbers (as well as all the rationals) are therootof apolynomialwith integer coefficients, such as the square root√2= 1.414...; these are calledalgebraic numbers. There are also real numbers which are not, such asπ= 3.1415...; these are calledtranscendental numbers.[4] Real numbers can be thought of as all points on alinecalled thenumber lineorreal line, where the points corresponding to integers (..., −2, −1, 0, 1, 2, ...) are equally spaced. The informal descriptions above of the real numbers are not sufficient for ensuring the correctness of proofs oftheoremsinvolving real numbers. The realization that a better definition was needed, and the elaboration of such a definition was a major development of19th-century mathematicsand is the foundation ofreal analysis, the study ofreal functionsand real-valuedsequences. A currentaxiomaticdefinition is that real numbers form theunique(up toanisomorphism)Dedekind-completeordered field.[d]Other common definitions of real numbers includeequivalence classesofCauchy sequences(of rational numbers),Dedekind cuts, and infinitedecimal representations. All these definitions satisfy the axiomatic definition and are thus equivalent. Real numbers are completely characterized by their fundamental properties that can be summarized by saying that they form anordered fieldthat isDedekind complete. Here, "completely characterized" means that there is a uniqueisomorphismbetween any two Dedekind complete ordered fields, and thus that their elements have exactly the same properties. This implies that one can manipulate real numbers and compute with them, without knowing how they can be defined; this is what mathematicians and physicists did during several centuries before the first formal definitions were provided in the second half of the 19th century. SeeConstruction of the real numbersfor details about these formal definitions and the proof of their equivalence. The real numbers form anordered field. Intuitively, this means that methods and rules ofelementary arithmeticapply to them. More precisely, there are twobinary operations,additionandmultiplication, and atotal orderthat have the following properties. Many other properties can be deduced from the above ones. In particular: Several other operations are commonly used, which can be deduced from the above ones. Thetotal orderthat is considered above is denoteda<b{\displaystyle a<b}and read as "aisless thanb". Three otherorder relationsare also commonly used: The real numbers0and1are commonly identified with thenatural numbers0and1. 
This allows identifying any natural numbernwith the sum ofnreal numbers equal to1. This identification can be pursued by identifying a negative integer−n{\displaystyle -n}(wheren{\displaystyle n}is a natural number) with the additive inverse−n{\displaystyle -n}of the real number identified withn.{\displaystyle n.}Similarly arational numberp/q{\displaystyle p/q}(wherepandqare integers andq≠0{\displaystyle q\neq 0}) is identified with the division of the real numbers identified withpandq. These identifications make the setQ{\displaystyle \mathbb {Q} }of the rational numbers an orderedsubfieldof the real numbersR.{\displaystyle \mathbb {R} .}TheDedekind completenessdescribed below implies that some real numbers, such as2,{\displaystyle {\sqrt {2}},}are not rational numbers; they are calledirrational numbers. The above identifications make sense, since natural numbers, integers and real numbers are generally not defined by their individual nature, but by defining properties (axioms). So, the identification of natural numbers with some real numbers is justified by the fact thatPeano axiomsare satisfied by these real numbers, with the addition with1taken as thesuccessor function. Formally, one has an injectivehomomorphismofordered monoidsfrom the natural numbersN{\displaystyle \mathbb {N} }to the integersZ,{\displaystyle \mathbb {Z} ,}an injective homomorphism ofordered ringsfromZ{\displaystyle \mathbb {Z} }to the rational numbersQ,{\displaystyle \mathbb {Q} ,}and an injective homomorphism ofordered fieldsfromQ{\displaystyle \mathbb {Q} }to the real numbersR.{\displaystyle \mathbb {R} .}The identifications consist of not distinguishing the source and the image of each injective homomorphism, and thus to write These identifications are formallyabuses of notation(since, formally, a rational number is an equivalence class of pairs of integers, and a real number is an equivalence class of Cauchy series), and are generally harmless. It is only in very specific situations, that one must avoid them and replace them by using explicitly the above homomorphisms. This is the case inconstructive mathematicsandcomputer programming. In the latter case, these homomorphisms are interpreted astype conversionsthat can often be done automatically by thecompiler. Previous properties do not distinguish real numbers fromrational numbers. This distinction is provided byDedekind completeness, which states that every set of real numbers with anupper boundadmits aleast upper bound. This means the following. A set of real numbersS{\displaystyle S}isbounded aboveif there is a real numberu{\displaystyle u}such thats≤u{\displaystyle s\leq u}for alls∈S{\displaystyle s\in S}; such au{\displaystyle u}is called anupper boundofS.{\displaystyle S.}So, Dedekind completeness means that, ifSis bounded above, it has an upper bound that is less than any other upper bound. Dedekind completeness implies other sorts of completeness (see below), but also has some important consequences. The last two properties are summarized by saying that the real numbers form areal closed field. This implies the real version of thefundamental theorem of algebra, namely that every polynomial with real coefficients can be factored into polynomials with real coefficients of degree at most two. 
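The last statement can be illustrated numerically (a sketch using NumPy's root finder; real_quadratic_factors is an ad hoc name, the polynomial is assumed monic, and the tolerances are arbitrary):

```python
import numpy as np

def real_quadratic_factors(coeffs):
    """Split a monic real polynomial (coefficients listed from the highest
    degree down) into real factors of degree at most two, by pairing each
    non-real root with its complex conjugate."""
    roots = np.roots(coeffs)
    factors, used = [], [False] * len(roots)
    for i, r in enumerate(roots):
        if used[i]:
            continue
        used[i] = True
        if abs(r.imag) < 1e-9:               # real root -> linear factor x - r
            factors.append([1.0, -r.real])
        else:                                # non-real root -> pair it with its conjugate
            j = next(k for k in range(len(roots))
                     if not used[k] and abs(roots[k] - r.conjugate()) < 1e-6)
            used[j] = True
            # (x - r)(x - conj(r)) = x^2 - 2 Re(r) x + |r|^2 has real coefficients
            factors.append([1.0, -2 * r.real, abs(r) ** 2])
    return factors

# x^4 + 1 has no real roots, yet it splits into two real quadratic factors:
for f in real_quadratic_factors([1, 0, 0, 0, 1]):
    print(np.round(f, 6))   # [1, -1.414214, 1] and [1, 1.414214, 1], in some order
```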
The most common way of describing a real number is via its decimal representation, a sequence ofdecimal digitseach representing the product of an integer between zero and nine times apower of ten, extending to finitely many positive powers of ten to the left and infinitely many negative powers of ten to the right. For a numberxwhose decimal representation extendskplaces to the left, the standard notation is the juxtaposition of the digitsbkbk−1⋯b0.a1a2⋯,{\displaystyle b_{k}b_{k-1}\cdots b_{0}.a_{1}a_{2}\cdots ,}in descending order by power of ten, with non-negative and negative powers of ten separated by adecimal point, representing theinfinite series For example, for the circle constantπ=3.14159⋯,{\displaystyle \pi =3.14159\cdots ,}kis zero andb0=3,{\displaystyle b_{0}=3,}a1=1,{\displaystyle a_{1}=1,}a2=4,{\displaystyle a_{2}=4,}etc. More formally, adecimal representationfor a nonnegative real numberxconsists of a nonnegative integerkand integers between zero and nine in theinfinite sequence (Ifk>0,{\displaystyle k>0,}then by conventionbk≠0.{\displaystyle b_{k}\neq 0.}) Such a decimal representation specifies the real number as the least upper bound of thedecimal fractionsthat are obtained bytruncatingthe sequence: given a positive integern, the truncation of the sequence at the placenis the finitepartial sum The real numberxdefined by the sequence is the least upper bound of theDn,{\displaystyle D_{n},}which exists by Dedekind completeness. Conversely, given a nonnegative real numberx, one can define a decimal representation ofxbyinduction, as follows. Definebk⋯b0{\displaystyle b_{k}\cdots b_{0}}as decimal representation of the largest integerD0{\displaystyle D_{0}}such thatD0≤x{\displaystyle D_{0}\leq x}(this integer exists because of the Archimedean property). Then, supposing byinductionthat the decimal fractionDi{\displaystyle D_{i}}has been defined fori<n,{\displaystyle i<n,}one definesan{\displaystyle a_{n}}as the largest digit such thatDn−1+an/10n≤a,{\displaystyle D_{n-1}+a_{n}/10^{n}\leq a,}and one setsDn=Dn−1+an/10n.{\displaystyle D_{n}=D_{n-1}+a_{n}/10^{n}.} One can use the defining properties of the real numbers to show thatxis the least upper bound of theDn.{\displaystyle D_{n}.}So, the resulting sequence of digits is called adecimal representationofx. Another decimal representation can be obtained by replacing≤x{\displaystyle \leq x}with<x{\displaystyle <x}in the preceding construction. These two representations are identical, unlessxis adecimal fractionof the formm10h.{\textstyle {\frac {m}{10^{h}}}.}In this case, in the first decimal representation, allan{\displaystyle a_{n}}are zero forn>h,{\displaystyle n>h,}and, in the second representation, allan{\displaystyle a_{n}}9. (see0.999...for details). In summary, there is abijectionbetween the real numbers and the decimal representations that do not end with infinitely many trailing 9. The preceding considerations apply directly for everynumeral baseB≥2,{\displaystyle B\geq 2,}simply by replacing 10 withB{\displaystyle B}and 9 withB−1.{\displaystyle B-1.} A main reason for using real numbers is so that many sequences havelimits. More formally, the reals arecomplete(in the sense ofmetric spacesoruniform spaces, which is a different sense than the Dedekind completeness of the order in the previous section): Asequence(xn) of real numbers is called aCauchy sequenceif for anyε > 0there exists an integerN(possibly depending on ε) such that thedistance|xn−xm|is less than ε for allnandmthat are both greater thanN. 
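The greedy digit-by-digit construction just described, and the Cauchy condition that follows it, can both be seen in a small sketch (Python, with exact integer arithmetic; decimal_truncations_of_sqrt2 is an ad hoc name and x = √2 is chosen only as an example):

```python
def decimal_truncations_of_sqrt2(n_digits):
    """Greedy construction from the text, specialized to x = sqrt(2):
    D_0 = 1, and D_n = D_{n-1} + a_n / 10^n with a_n the largest digit
    keeping D_n <= sqrt(2).  The comparison D_n <= sqrt(2) is carried out as
    (10^n * D_n)^2 <= 2 * 10^(2n), so no rounding is involved."""
    truncations = []
    d = 1                                     # 10^0 * D_0, since 1^2 <= 2 < 2^2
    for n in range(1, n_digits + 1):
        d = max(10 * d + a for a in range(10)
                if (10 * d + a) ** 2 <= 2 * 10 ** (2 * n))
        truncations.append(d / 10 ** n)       # D_n, converted to float for display only
    return truncations

D = decimal_truncations_of_sqrt2(8)
print(D)   # [1.4, 1.41, 1.414, 1.4142, 1.41421, 1.414213, 1.4142135, 1.41421356]

# The truncations form a Cauchy sequence: from D_5 onwards every pair of terms
# differs by less than 10^-4, and in general |D_n - D_m| <= 10^-min(n, m).
assert all(abs(D[i] - D[j]) < 1e-4 for i in range(4, 8) for j in range(4, 8))
```

Once n and m are large enough, the terms stay within any prescribed ε of one another, which is exactly the Cauchy condition just stated.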
This definition, originally provided byCauchy, formalizes the fact that thexneventually come and remain arbitrarily close to each other. A sequence (xn)converges to the limitxif its elements eventually come and remain arbitrarily close tox, that is, if for anyε > 0there exists an integerN(possibly depending on ε) such that the distance|xn−x|is less than ε forngreater thanN. Every convergent sequence is a Cauchy sequence, and the converse is true for real numbers, and this means that thetopological spaceof the real numbers is complete. The set of rational numbers is not complete. For example, the sequence (1; 1.4; 1.41; 1.414; 1.4142; 1.41421; ...), where each term adds a digit of the decimal expansion of the positivesquare rootof 2, is Cauchy but it does not converge to a rational number (in the real numbers, in contrast, it converges to the positivesquare rootof 2). The completeness property of the reals is the basis on whichcalculus, and more generallymathematical analysis, are built. In particular, the test that a sequence is a Cauchy sequence allows proving that a sequence has a limit, without computing it, and even without knowing it. For example, the standard series of theexponential function converges to a real number for everyx, because the sums can be made arbitrarily small (independently ofM) by choosingNsufficiently large. This proves that the sequence is Cauchy, and thus converges, showing thatex{\displaystyle e^{x}}is well defined for everyx. The real numbers are often described as "the complete ordered field", a phrase that can be interpreted in several ways. First, an order can belattice-complete. It is easy to see that no ordered field can be lattice-complete, because it can have nolargest element(given any elementz,z+ 1is larger). Additionally, an order can be Dedekind-complete, see§ Axiomatic approach. The uniqueness result at the end of that section justifies using the word "the" in the phrase "complete ordered field" when this is the sense of "complete" that is meant. This sense of completeness is most closely related to the construction of the reals from Dedekind cuts, since that construction starts from an ordered field (the rationals) and then forms the Dedekind-completion of it in a standard way. These two notions of completeness ignore the field structure. However, anordered group(in this case, the additive group of the field) defines auniformstructure, and uniform structures have a notion ofcompleteness; the description in§ Completenessis a special case. (We refer to the notion of completeness in uniform spaces rather than the related and better known notion formetric spaces, since the definition of metric space relies on already having a characterization of the real numbers.) It is not true thatR{\displaystyle \mathbb {R} }is theonlyuniformly complete ordered field, but it is the only uniformly completeArchimedean field, and indeed one often hears the phrase "complete Archimedean field" instead of "complete ordered field". Every uniformly complete Archimedean field must also be Dedekind-complete (and vice versa), justifying using "the" in the phrase "the complete Archimedean field". This sense of completeness is most closely related to the construction of the reals from Cauchy sequences (the construction carried out in full in this article), since it starts with an Archimedean field (the rationals) and forms the uniform completion of it in a standard way. 
But the original use of the phrase "complete Archimedean field" was byDavid Hilbert, who meant still something else by it. He meant that the real numbers form thelargestArchimedean field in the sense that every other Archimedean field is a subfield ofR{\displaystyle \mathbb {R} }. ThusR{\displaystyle \mathbb {R} }is "complete" in the sense that nothing further can be added to it without making it no longer an Archimedean field. This sense of completeness is most closely related to the construction of the reals fromsurreal numbers, since that construction starts with a proper class that contains every ordered field (the surreals) and then selects from it the largest Archimedean subfield. The set of all real numbers isuncountable, in the sense that while both the set of allnatural numbers{1, 2, 3, 4, ...}and the set of all real numbers areinfinite sets, there exists noone-to-one functionfrom the real numbers to the natural numbers. Thecardinalityof the set of all real numbers is called thecardinality of the continuumand commonly denoted byc.{\displaystyle {\mathfrak {c}}.}It is strictly greater than the cardinality of the set of all natural numbers, denotedℵ0{\displaystyle \aleph _{0}}and calledAleph-zerooraleph-nought. The cardinality of the continuum equals the cardinality of thepower setof the natural numbers, that is, the set of all subsets of the natural numbers. The statement that there is no cardinality strictly greater thanℵ0{\displaystyle \aleph _{0}}and strictly smaller thanc{\displaystyle {\mathfrak {c}}}is known as thecontinuum hypothesis(CH). It is neither provable nor refutable using the axioms ofZermelo–Fraenkel set theoryincluding theaxiom of choice(ZFC)—the standard foundation of modern mathematics. In fact, some models of ZFC satisfy CH, while others violate it.[5] As a topological space, the real numbers areseparable. This is because the set of rationals, which is countable, isdensein the real numbers. The irrational numbers are also dense in the real numbers, however they are uncountable and have the same cardinality as the reals. The real numbers form ametric space: the distance betweenxandyis defined as theabsolute value|x−y|. By virtue of being a totally ordered set, they also carry anorder topology; thetopologyarising from the metric and the one arising from the order are identical, but yield different presentations for the topology—in the order topology as ordered intervals, in the metric topology as epsilon-balls. The Dedekind cuts construction uses the order topology presentation, while the Cauchy sequences construction uses the metric topology presentation. The reals form acontractible(henceconnectedandsimply connected),separableandcompletemetric space ofHausdorff dimension1. The real numbers arelocally compactbut notcompact. There are various properties that uniquely specify them; for instance, all unbounded, connected, and separableorder topologiesare necessarilyhomeomorphicto the reals. Every nonnegative real number has asquare rootinR{\displaystyle \mathbb {R} }, although no negative number does. This shows that the order onR{\displaystyle \mathbb {R} }is determined by its algebraic structure. Also, everypolynomialof odd degree admits at least one real root: these two properties makeR{\displaystyle \mathbb {R} }the premier example of areal closed field. Proving this is the first half of one proof of thefundamental theorem of algebra. 
The reals carry a canonicalmeasure, theLebesgue measure, which is theHaar measureon their structure as atopological groupnormalized such that theunit interval[0;1] has measure 1. There exist sets of real numbers that are not Lebesgue measurable, e.g.Vitali sets. The supremum axiom of the reals refers to subsets of the reals and is therefore a second-order logical statement. It is not possible to characterize the reals withfirst-order logicalone: theLöwenheim–Skolem theoremimplies that there exists a countable dense subset of the real numbers satisfying exactly the same sentences in first-order logic as the real numbers themselves. The set ofhyperreal numberssatisfies the same first order sentences asR{\displaystyle \mathbb {R} }. Ordered fields that satisfy the same first-order sentences asR{\displaystyle \mathbb {R} }are callednonstandard modelsofR{\displaystyle \mathbb {R} }. This is what makesnonstandard analysiswork; by proving a first-order statement in some nonstandard model (which may be easier than proving it inR{\displaystyle \mathbb {R} }), we know that the same statement must also be true ofR{\displaystyle \mathbb {R} }. ThefieldR{\displaystyle \mathbb {R} }of real numbers is anextension fieldof the fieldQ{\displaystyle \mathbb {Q} }of rational numbers, andR{\displaystyle \mathbb {R} }can therefore be seen as avector spaceoverQ{\displaystyle \mathbb {Q} }.Zermelo–Fraenkel set theorywith theaxiom of choiceguarantees the existence of abasisof this vector space: there exists a setBof real numbers such that every real number can be written uniquely as a finitelinear combinationof elements of this set, using rational coefficients only, and such that no element ofBis a rational linear combination of the others. However, this existence theorem is purely theoretical, as such a base has never been explicitly described. Thewell-ordering theoremimplies that the real numbers can bewell-orderedif the axiom of choice is assumed: there exists a total order onR{\displaystyle \mathbb {R} }with the property that every nonemptysubsetofR{\displaystyle \mathbb {R} }has aleast elementin this ordering. (The standard ordering ≤ of the real numbers is not a well-ordering since e.g. anopen intervaldoes not contain a least element in this ordering.) Again, the existence of such a well-ordering is purely theoretical, as it has not been explicitly described. IfV=Lis assumed in addition to the axioms of ZF, a well ordering of the real numbers can be shown to be explicitly definable by a formula.[6] A real number may be eithercomputableor uncomputable; eitheralgorithmically randomor not; and eitherarithmetically randomor not. Simple fractionswere used by theEgyptiansaround 1000 BC; theVedic"Shulba Sutras" ("The rules of chords") inc.600 BCinclude what may be the first "use" of irrational numbers. The concept of irrationality was implicitly accepted by earlyIndian mathematicianssuch asManava(c.750–690 BC), who was aware that thesquare rootsof certain numbers, such as 2 and 61, could not be exactly determined.[7] Around 500 BC, theGreek mathematiciansled byPythagorasalso realized that thesquare root of 2is irrational. For Greek mathematicians, numbers were only thenatural numbers. Real numbers were called "proportions", being the ratios of two lengths, or equivalently being measures of a length in terms of another length, called unit length. 
Two lengths are "commensurable", if there is a unit in which they are both measured by integers, that is, in modern terminology, if their ratio is arational number.Eudoxus of Cnidus(c. 390−340 BC) provided a definition of the equality of two irrational proportions in a way that is similar toDedekind cuts(introduced more than 2,000 years later), except that he did not use anyarithmetic operationother than multiplication of a length by a natural number (seeEudoxus of Cnidus). This may be viewed as the first definition of the real numbers. TheMiddle Agesbrought about the acceptance ofzero,negative numbers, integers, andfractionalnumbers, first byIndianandChinese mathematicians, and then byArabic mathematicians, who were also the first to treat irrational numbers as algebraic objects (the latter being made possible by the development of algebra).[8]Arabic mathematicians merged the concepts of "number" and "magnitude" into a more general idea of real numbers.[9]The Egyptian mathematicianAbū Kāmil Shujā ibn Aslam(c.850–930)was the first to accept irrational numbers as solutions toquadratic equations, or ascoefficientsin anequation(often in the form of square roots,cube roots, andfourth roots).[10]In Europe, such numbers, not commensurable with the numerical unit, were calledirrationalorsurd("deaf"). In the 16th century,Simon Stevincreated the basis for moderndecimalnotation, and insisted that there is no difference between rational and irrational numbers in this regard. In the 17th century,Descartesintroduced the term "real" to describe roots of apolynomial, distinguishing them from "imaginary" numbers. In the 18th and 19th centuries, there was much work on irrational and transcendental numbers.Lambert(1761) gave a flawed proof thatπcannot be rational;Legendre(1794) completed the proof[11]and showed thatπis not the square root of a rational number.[12]Liouville(1840) showed that neitherenore2can be a root of an integerquadratic equation, and then established the existence of transcendental numbers;Cantor(1873) extended and greatly simplified this proof.[13]Hermite(1873) proved thateis transcendental, andLindemann(1882), showed thatπis transcendental. Lindemann's proof was much simplified by Weierstrass (1885),Hilbert(1893),Hurwitz,[14]andGordan.[15] The concept that many points existed between rational numbers, such as the square root of 2, was well known to the ancient Greeks. The existence of a continuous number line was considered self-evident, but the nature of this continuity, presently calledcompleteness, was not understood. The rigor developed for geometry did not cross over to the concept of numbers until the 1800s.[16] The developers ofcalculusused real numbers andlimitswithout defining them rigorously. In hisCours d'Analyse(1821),Cauchymade calculus rigorous, but he used the real numbers without defining them, and assumed without proof that everyCauchysequence has a limit and that this limit is a real number. 
In 1854Bernhard Riemannhighlighted the limitations of calculus in the method ofFourier series, showing the need for a rigorous definition of the real numbers.[17]: 672 Beginning withRichard Dedekindin 1858, several mathematicians worked on the definition of the real numbers, includingHermann Hankel,Charles Méray, andEduard Heine, leading to the publication in 1872 of two independent definitions of real numbers, one by Dedekind, asDedekind cuts, and the other one byGeorg Cantor, as equivalence classes of Cauchy sequences.[18]Several problems were left open by these definitions, which contributed to thefoundational crisis of mathematics. Firstly both definitions suppose thatrational numbersand thusnatural numbersare rigorously defined; this was done a few years later withPeano axioms. Secondly, both definitions involveinfinite sets(Dedekind cuts and sets of the elements of a Cauchy sequence), and Cantor'sset theorywas published several years later. Thirdly, these definitions implyquantificationon infinite sets, and this cannot be formalized in the classicallogicoffirst-order predicates. This is one of the reasons for whichhigher-order logicswere developed in the first half of the 20th century. In 1874 Cantor showed that the set of all real numbers isuncountably infinite, but the set of all algebraic numbers iscountably infinite.Cantor's first uncountability proofwas different from his famousdiagonal argumentpublished in 1891. The real number system(R;+;⋅;<){\displaystyle (\mathbb {R} ;{}+{};{}\cdot {};{}<{})}can be definedaxiomaticallyup to anisomorphism, which is described hereinafter. There are also many ways to construct "the" real number system, and a popular approach involves starting from natural numbers, then defining rational numbers algebraically, and finally defining real numbers as equivalence classes of theirCauchy sequencesor as Dedekind cuts, which are certain subsets of rational numbers.[19]Another approach is to start from some rigorous axiomatization of Euclidean geometry (say of Hilbert or ofTarski), and then define the real number system geometrically. All these constructions of the real numbers have been shown to be equivalent, in the sense that the resulting number systems areisomorphic. LetR{\displaystyle \mathbb {R} }denote thesetof all real numbers. Then: The last property applies to the real numbers but not to the rational numbers (or toother more exotic ordered fields). For example,{x∈Q:x2<2}{\displaystyle \{x\in \mathbb {Q} :x^{2}<2\}}has a rational upper bound (e.g., 1.42), but noleastrational upper bound, because2{\displaystyle {\sqrt {2}}}is not rational. These properties imply theArchimedean property(which is not implied by other definitions of completeness), which states that the set of integers has no upper bound in the reals. In fact, if this were false, then the integers would have a least upper boundN; then,N– 1 would not be an upper bound, and there would be an integernsuch thatn>N– 1, and thusn+ 1 >N, which is a contradiction with the upper-bound property ofN. The real numbers are uniquely specified by the above properties. More precisely, given any two Dedekind-complete ordered fieldsR1{\displaystyle \mathbb {R} _{1}}andR2{\displaystyle \mathbb {R} _{2}}, there exists a unique fieldisomorphismfromR1{\displaystyle \mathbb {R} _{1}}toR2{\displaystyle \mathbb {R_{2}} }. This uniqueness allows us to think of them as essentially the same mathematical object. For another axiomatization ofR{\displaystyle \mathbb {R} }seeTarski's axiomatization of the reals. 
The real numbers can be constructed as acompletionof the rational numbers, in such a way that a sequence defined by a decimal or binary expansion like (3; 3.1; 3.14; 3.141; 3.1415; ...)convergesto a unique real number—in this caseπ. For details and other constructions of real numbers, seeConstruction of the real numbers. In the physical sciences most physical constants, such as the universal gravitational constant, and physical variables, such as position, mass, speed, and electric charge, are modeled using real numbers. In fact the fundamental physical theories such asclassical mechanics,electromagnetism,quantum mechanics,general relativity, and thestandard modelare described using mathematical structures, typicallysmooth manifoldsorHilbert spaces, that are based on the real numbers, although actual measurements of physical quantities are of finiteaccuracy and precision. Physicists have occasionally suggested that a more fundamental theory would replace the real numbers with quantities that do not form a continuum, but such proposals remain speculative.[20] The real numbers are most often formalized using theZermelo–Fraenkelaxiomatization of set theory, but some mathematicians study the real numbers with other logical foundations of mathematics. In particular, the real numbers are also studied inreverse mathematicsand inconstructive mathematics.[21] Thehyperreal numbersas developed byEdwin Hewitt,Abraham Robinson, and others extend the set of the real numbers by introducinginfinitesimaland infinite numbers, allowing for buildinginfinitesimal calculusin a way closer to the original intuitions ofLeibniz,Euler,Cauchy, and others. Edward Nelson'sinternal set theoryenriches theZermelo–Fraenkelset theory syntactically by introducing a unary predicate "standard". In this approach, infinitesimals are (non-"standard") elements of the set of the real numbers (rather than being elements of an extension thereof, as in Robinson's theory). Thecontinuum hypothesisposits that the cardinality of the set of the real numbers isℵ1{\displaystyle \aleph _{1}}; i.e. the smallest infinitecardinal numberafterℵ0{\displaystyle \aleph _{0}}, the cardinality of the integers.Paul Cohenproved in 1963 that it is an axiom independent of the other axioms of set theory; that is: one may choose either the continuum hypothesis or its negation as an axiom of set theory, without contradiction. Electronic calculatorsandcomputerscannot operate on arbitrary real numbers, because finite computers cannot directly store infinitely many digits or other infinite representations. Nor do they usually even operate on arbitrarydefinable real numbers, which are inconvenient to manipulate. Instead, computers typically work with finite-precision approximations calledfloating-point numbers, a representation similar toscientific notation. The achievable precision is limited by thedata storage spaceallocated for each number, whether asfixed-point, floating-point, orarbitrary-precision numbers, or some other representation. Mostscientific computationusesbinaryfloating-point arithmetic, often a64-bit representationwith around 16 decimaldigits of precision. Real numbers satisfy theusual rules of arithmetic, butfloating-point numbers do not. The field ofnumerical analysisstudies thestabilityandaccuracyof numericalalgorithmsimplemented with approximate arithmetic. 
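The difference between real arithmetic and floating-point arithmetic mentioned above is easy to observe (a small Python illustration; the printed values assume a standard 64-bit IEEE 754 double):

```python
import sys
from fractions import Fraction

# Binary floating point cannot represent most decimal fractions exactly,
# so rounding error shows up even in very simple sums:
print(0.1 + 0.2 == 0.3)                    # False
print(f"{0.1 + 0.2:.20f}")                 # 0.30000000000000004441
print(sum(0.1 for _ in range(10)) == 1.0)  # False

# A 64-bit double carries 53 significand bits, roughly 15-16 decimal digits:
print(sys.float_info.mant_dig, sys.float_info.dig)   # 53 15

# Exact rational arithmetic avoids the rounding error, at the cost of
# numerators and denominators that can grow quickly:
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))   # True
```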
Alternately,computer algebra systemscan operate on irrational quantities exactly bymanipulating symbolic formulasfor them (such as2,{\textstyle {\sqrt {2}},}arctan⁡5,{\textstyle \arctan 5,}or∫01xxdx{\textstyle \int _{0}^{1}x^{x}\,dx}) rather than their rational or decimal approximation.[22]But exact and symbolic arithmetic also have limitations: for instance, they are computationally more expensive; it is not in general possible to determine whether two symbolic expressions are equal (theconstant problem); and arithmetic operations can causeexponentialexplosion in the size of representation of a single number (for instance, squaring a rational number roughly doubles the number of digits in its numerator and denominator, and squaring apolynomialroughly doubles its number of terms), overwhelming finite computer storage.[23] A real number is calledcomputableif there exists an algorithm that yields its digits. Because there are onlycountablymany algorithms,[24]but an uncountable number of reals,almost allreal numbers fail to be computable. Moreover, the equality of two computable numbers is anundecidable problem. Someconstructivistsaccept the existence of only those reals that are computable. The set ofdefinable numbersis broader, but still only countable. Inset theory, specificallydescriptive set theory, theBaire spaceis used as a surrogate for the real numbers since the latter have some topological properties (connectedness) that are a technical inconvenience. Elements of Baire space are referred to as "reals". Thesetof all real numbers is denotedR{\displaystyle \mathbb {R} }(blackboard bold) orR(upright bold). As it is naturally endowed with the structure of afield, the expressionfield of real numbersis frequently used when its algebraic properties are under consideration. The sets of positive real numbers and negative real numbers are often notedR+{\displaystyle \mathbb {R} ^{+}}andR−{\displaystyle \mathbb {R} ^{-}},[25]respectively;R+{\displaystyle \mathbb {R} _{+}}andR−{\displaystyle \mathbb {R} _{-}}are also used.[26]The non-negative real numbers can be notedR≥0{\displaystyle \mathbb {R} _{\geq 0}}but one often sees this set notedR+∪{0}.{\displaystyle \mathbb {R} ^{+}\cup \{0\}.}[25]In French mathematics, thepositive real numbersandnegative real numberscommonly includezero, and these sets are noted respectivelyR+{\displaystyle \mathbb {R_{+}} }andR−.{\displaystyle \mathbb {R} _{-}.}[26]In this understanding, the respective sets without zero are called strictly positive real numbers and strictly negative real numbers, and are notedR+∗{\displaystyle \mathbb {R} _{+}^{*}}andR−∗.{\displaystyle \mathbb {R} _{-}^{*}.}[26] The notationRn{\displaystyle \mathbb {R} ^{n}}refers to the set of then-tuplesof elements ofR{\displaystyle \mathbb {R} }(real coordinate space), which can be identified to theCartesian productofncopies ofR.{\displaystyle \mathbb {R} .}It is ann-dimensionalvector spaceover the field of the real numbers, often called thecoordinate spaceof dimensionn; this space may be identified to then-dimensionalEuclidean spaceas soon as aCartesian coordinate systemhas been chosen in the latter. In this identification, apointof the Euclidean space is identified with the tuple of itsCartesian coordinates. In mathematicsrealis used as an adjective, meaning that the underlying field is the field of the real numbers (orthe real field). For example,realmatrix,real polynomialandrealLie algebra. The word is also used as anoun, meaning a real number (as in "the set of all reals"). 
The real numbers can be generalized and extended in several different directions:
https://en.wikipedia.org/wiki/Real_number
But the original use of the phrase "complete Archimedean field" was byDavid Hilbert, who meant still something else by it. He meant that the real numbers form thelargestArchimedean field in the sense that every other Archimedean field is a subfield ofR{\displaystyle \mathbb {R} }. ThusR{\displaystyle \mathbb {R} }is "complete" in the sense that nothing further can be added to it without making it no longer an Archimedean field. This sense of completeness is most closely related to the construction of the reals fromsurreal numbers, since that construction starts with a proper class that contains every ordered field (the surreals) and then selects from it the largest Archimedean subfield. The set of all real numbers isuncountable, in the sense that while both the set of allnatural numbers{1, 2, 3, 4, ...}and the set of all real numbers areinfinite sets, there exists noone-to-one functionfrom the real numbers to the natural numbers. Thecardinalityof the set of all real numbers is called thecardinality of the continuumand commonly denoted byc.{\displaystyle {\mathfrak {c}}.}It is strictly greater than the cardinality of the set of all natural numbers, denotedℵ0{\displaystyle \aleph _{0}}and calledAleph-zerooraleph-nought. The cardinality of the continuum equals the cardinality of thepower setof the natural numbers, that is, the set of all subsets of the natural numbers. The statement that there is no cardinality strictly greater thanℵ0{\displaystyle \aleph _{0}}and strictly smaller thanc{\displaystyle {\mathfrak {c}}}is known as thecontinuum hypothesis(CH). It is neither provable nor refutable using the axioms ofZermelo–Fraenkel set theoryincluding theaxiom of choice(ZFC)—the standard foundation of modern mathematics. In fact, some models of ZFC satisfy CH, while others violate it.[5] As a topological space, the real numbers areseparable. This is because the set of rationals, which is countable, isdensein the real numbers. The irrational numbers are also dense in the real numbers, however they are uncountable and have the same cardinality as the reals. The real numbers form ametric space: the distance betweenxandyis defined as theabsolute value|x−y|. By virtue of being a totally ordered set, they also carry anorder topology; thetopologyarising from the metric and the one arising from the order are identical, but yield different presentations for the topology—in the order topology as ordered intervals, in the metric topology as epsilon-balls. The Dedekind cuts construction uses the order topology presentation, while the Cauchy sequences construction uses the metric topology presentation. The reals form acontractible(henceconnectedandsimply connected),separableandcompletemetric space ofHausdorff dimension1. The real numbers arelocally compactbut notcompact. There are various properties that uniquely specify them; for instance, all unbounded, connected, and separableorder topologiesare necessarilyhomeomorphicto the reals. Every nonnegative real number has asquare rootinR{\displaystyle \mathbb {R} }, although no negative number does. This shows that the order onR{\displaystyle \mathbb {R} }is determined by its algebraic structure. Also, everypolynomialof odd degree admits at least one real root: these two properties makeR{\displaystyle \mathbb {R} }the premier example of areal closed field. Proving this is the first half of one proof of thefundamental theorem of algebra. 
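Returning to the completeness property discussed above: the partial sums of the exponential series mentioned there form a Cauchy sequence, which can be checked numerically. The following is a minimal sketch in floating-point arithmetic with a helper name of our own choosing, intended as an illustration rather than a proof:

```python
from math import factorial

def exp_partial_sum(x: float, n_terms: int) -> float:
    """Partial sum S_N = sum of x**k / k! for k = 0 .. n_terms - 1."""
    return sum(x**k / factorial(k) for k in range(n_terms))

x = 2.0
sums = [exp_partial_sum(x, n) for n in range(1, 21)]

# Cauchy behaviour: consecutive partial sums get arbitrarily close,
# so the sequence converges even before its limit (here e**2) is named.
for n in range(1, len(sums)):
    print(n, abs(sums[n] - sums[n - 1]))
```

The successive differences x^n/n! shrink toward zero, which is exactly the Cauchy behaviour that guarantees a limit in the reals.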
The reals carry a canonicalmeasure, theLebesgue measure, which is theHaar measureon their structure as atopological groupnormalized such that theunit interval[0;1] has measure 1. There exist sets of real numbers that are not Lebesgue measurable, e.g.Vitali sets. The supremum axiom of the reals refers to subsets of the reals and is therefore a second-order logical statement. It is not possible to characterize the reals withfirst-order logicalone: theLöwenheim–Skolem theoremimplies that there exists a countable dense subset of the real numbers satisfying exactly the same sentences in first-order logic as the real numbers themselves. The set ofhyperreal numberssatisfies the same first order sentences asR{\displaystyle \mathbb {R} }. Ordered fields that satisfy the same first-order sentences asR{\displaystyle \mathbb {R} }are callednonstandard modelsofR{\displaystyle \mathbb {R} }. This is what makesnonstandard analysiswork; by proving a first-order statement in some nonstandard model (which may be easier than proving it inR{\displaystyle \mathbb {R} }), we know that the same statement must also be true ofR{\displaystyle \mathbb {R} }. ThefieldR{\displaystyle \mathbb {R} }of real numbers is anextension fieldof the fieldQ{\displaystyle \mathbb {Q} }of rational numbers, andR{\displaystyle \mathbb {R} }can therefore be seen as avector spaceoverQ{\displaystyle \mathbb {Q} }.Zermelo–Fraenkel set theorywith theaxiom of choiceguarantees the existence of abasisof this vector space: there exists a setBof real numbers such that every real number can be written uniquely as a finitelinear combinationof elements of this set, using rational coefficients only, and such that no element ofBis a rational linear combination of the others. However, this existence theorem is purely theoretical, as such a base has never been explicitly described. Thewell-ordering theoremimplies that the real numbers can bewell-orderedif the axiom of choice is assumed: there exists a total order onR{\displaystyle \mathbb {R} }with the property that every nonemptysubsetofR{\displaystyle \mathbb {R} }has aleast elementin this ordering. (The standard ordering ≤ of the real numbers is not a well-ordering since e.g. anopen intervaldoes not contain a least element in this ordering.) Again, the existence of such a well-ordering is purely theoretical, as it has not been explicitly described. IfV=Lis assumed in addition to the axioms of ZF, a well ordering of the real numbers can be shown to be explicitly definable by a formula.[6] A real number may be eithercomputableor uncomputable; eitheralgorithmically randomor not; and eitherarithmetically randomor not. Simple fractionswere used by theEgyptiansaround 1000 BC; theVedic"Shulba Sutras" ("The rules of chords") inc.600 BCinclude what may be the first "use" of irrational numbers. The concept of irrationality was implicitly accepted by earlyIndian mathematicianssuch asManava(c.750–690 BC), who was aware that thesquare rootsof certain numbers, such as 2 and 61, could not be exactly determined.[7] Around 500 BC, theGreek mathematiciansled byPythagorasalso realized that thesquare root of 2is irrational. For Greek mathematicians, numbers were only thenatural numbers. Real numbers were called "proportions", being the ratios of two lengths, or equivalently being measures of a length in terms of another length, called unit length. 
Two lengths are "commensurable", if there is a unit in which they are both measured by integers, that is, in modern terminology, if their ratio is arational number.Eudoxus of Cnidus(c. 390−340 BC) provided a definition of the equality of two irrational proportions in a way that is similar toDedekind cuts(introduced more than 2,000 years later), except that he did not use anyarithmetic operationother than multiplication of a length by a natural number (seeEudoxus of Cnidus). This may be viewed as the first definition of the real numbers. TheMiddle Agesbrought about the acceptance ofzero,negative numbers, integers, andfractionalnumbers, first byIndianandChinese mathematicians, and then byArabic mathematicians, who were also the first to treat irrational numbers as algebraic objects (the latter being made possible by the development of algebra).[8]Arabic mathematicians merged the concepts of "number" and "magnitude" into a more general idea of real numbers.[9]The Egyptian mathematicianAbū Kāmil Shujā ibn Aslam(c.850–930)was the first to accept irrational numbers as solutions toquadratic equations, or ascoefficientsin anequation(often in the form of square roots,cube roots, andfourth roots).[10]In Europe, such numbers, not commensurable with the numerical unit, were calledirrationalorsurd("deaf"). In the 16th century,Simon Stevincreated the basis for moderndecimalnotation, and insisted that there is no difference between rational and irrational numbers in this regard. In the 17th century,Descartesintroduced the term "real" to describe roots of apolynomial, distinguishing them from "imaginary" numbers. In the 18th and 19th centuries, there was much work on irrational and transcendental numbers.Lambert(1761) gave a flawed proof thatπcannot be rational;Legendre(1794) completed the proof[11]and showed thatπis not the square root of a rational number.[12]Liouville(1840) showed that neitherenore2can be a root of an integerquadratic equation, and then established the existence of transcendental numbers;Cantor(1873) extended and greatly simplified this proof.[13]Hermite(1873) proved thateis transcendental, andLindemann(1882), showed thatπis transcendental. Lindemann's proof was much simplified by Weierstrass (1885),Hilbert(1893),Hurwitz,[14]andGordan.[15] The concept that many points existed between rational numbers, such as the square root of 2, was well known to the ancient Greeks. The existence of a continuous number line was considered self-evident, but the nature of this continuity, presently calledcompleteness, was not understood. The rigor developed for geometry did not cross over to the concept of numbers until the 1800s.[16] The developers ofcalculusused real numbers andlimitswithout defining them rigorously. In hisCours d'Analyse(1821),Cauchymade calculus rigorous, but he used the real numbers without defining them, and assumed without proof that everyCauchysequence has a limit and that this limit is a real number. 
In 1854Bernhard Riemannhighlighted the limitations of calculus in the method ofFourier series, showing the need for a rigorous definition of the real numbers.[17]: 672 Beginning withRichard Dedekindin 1858, several mathematicians worked on the definition of the real numbers, includingHermann Hankel,Charles Méray, andEduard Heine, leading to the publication in 1872 of two independent definitions of real numbers, one by Dedekind, asDedekind cuts, and the other one byGeorg Cantor, as equivalence classes of Cauchy sequences.[18]Several problems were left open by these definitions, which contributed to thefoundational crisis of mathematics. Firstly both definitions suppose thatrational numbersand thusnatural numbersare rigorously defined; this was done a few years later withPeano axioms. Secondly, both definitions involveinfinite sets(Dedekind cuts and sets of the elements of a Cauchy sequence), and Cantor'sset theorywas published several years later. Thirdly, these definitions implyquantificationon infinite sets, and this cannot be formalized in the classicallogicoffirst-order predicates. This is one of the reasons for whichhigher-order logicswere developed in the first half of the 20th century. In 1874 Cantor showed that the set of all real numbers isuncountably infinite, but the set of all algebraic numbers iscountably infinite.Cantor's first uncountability proofwas different from his famousdiagonal argumentpublished in 1891. The real number system(R;+;⋅;<){\displaystyle (\mathbb {R} ;{}+{};{}\cdot {};{}<{})}can be definedaxiomaticallyup to anisomorphism, which is described hereinafter. There are also many ways to construct "the" real number system, and a popular approach involves starting from natural numbers, then defining rational numbers algebraically, and finally defining real numbers as equivalence classes of theirCauchy sequencesor as Dedekind cuts, which are certain subsets of rational numbers.[19]Another approach is to start from some rigorous axiomatization of Euclidean geometry (say of Hilbert or ofTarski), and then define the real number system geometrically. All these constructions of the real numbers have been shown to be equivalent, in the sense that the resulting number systems areisomorphic. LetR{\displaystyle \mathbb {R} }denote thesetof all real numbers. Then: The last property applies to the real numbers but not to the rational numbers (or toother more exotic ordered fields). For example,{x∈Q:x2<2}{\displaystyle \{x\in \mathbb {Q} :x^{2}<2\}}has a rational upper bound (e.g., 1.42), but noleastrational upper bound, because2{\displaystyle {\sqrt {2}}}is not rational. These properties imply theArchimedean property(which is not implied by other definitions of completeness), which states that the set of integers has no upper bound in the reals. In fact, if this were false, then the integers would have a least upper boundN; then,N– 1 would not be an upper bound, and there would be an integernsuch thatn>N– 1, and thusn+ 1 >N, which is a contradiction with the upper-bound property ofN. The real numbers are uniquely specified by the above properties. More precisely, given any two Dedekind-complete ordered fieldsR1{\displaystyle \mathbb {R} _{1}}andR2{\displaystyle \mathbb {R} _{2}}, there exists a unique fieldisomorphismfromR1{\displaystyle \mathbb {R} _{1}}toR2{\displaystyle \mathbb {R_{2}} }. This uniqueness allows us to think of them as essentially the same mathematical object. For another axiomatization ofR{\displaystyle \mathbb {R} }seeTarski's axiomatization of the reals. 
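The set {x ∈ ℚ : x² < 2} used as an example above can also be explored numerically. Exact rational bisection produces an unending run of strictly smaller rational upper bounds, illustrating (though of course not proving) that no least rational upper bound exists. A short sketch using Python's Fraction type, with example values of our own:

```python
from fractions import Fraction

# Bisection with exact rational endpoints around the square root of 2.
lo, hi = Fraction(1), Fraction(2)   # invariant: lo**2 < 2 < hi**2
for _ in range(20):
    mid = (lo + hi) / 2
    if mid * mid < 2:
        lo = mid    # mid belongs to {x in Q : x**2 < 2}
    else:
        hi = mid    # mid is another, strictly smaller rational upper bound
    print(float(hi - lo), hi)   # the upper bounds keep shrinking; no least one appears
```

Dedekind completeness is precisely what supplies the missing least upper bound, namely the real number √2.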
The real numbers can be constructed as acompletionof the rational numbers, in such a way that a sequence defined by a decimal or binary expansion like (3; 3.1; 3.14; 3.141; 3.1415; ...)convergesto a unique real number—in this caseπ. For details and other constructions of real numbers, seeConstruction of the real numbers. In the physical sciences most physical constants, such as the universal gravitational constant, and physical variables, such as position, mass, speed, and electric charge, are modeled using real numbers. In fact the fundamental physical theories such asclassical mechanics,electromagnetism,quantum mechanics,general relativity, and thestandard modelare described using mathematical structures, typicallysmooth manifoldsorHilbert spaces, that are based on the real numbers, although actual measurements of physical quantities are of finiteaccuracy and precision. Physicists have occasionally suggested that a more fundamental theory would replace the real numbers with quantities that do not form a continuum, but such proposals remain speculative.[20] The real numbers are most often formalized using theZermelo–Fraenkelaxiomatization of set theory, but some mathematicians study the real numbers with other logical foundations of mathematics. In particular, the real numbers are also studied inreverse mathematicsand inconstructive mathematics.[21] Thehyperreal numbersas developed byEdwin Hewitt,Abraham Robinson, and others extend the set of the real numbers by introducinginfinitesimaland infinite numbers, allowing for buildinginfinitesimal calculusin a way closer to the original intuitions ofLeibniz,Euler,Cauchy, and others. Edward Nelson'sinternal set theoryenriches theZermelo–Fraenkelset theory syntactically by introducing a unary predicate "standard". In this approach, infinitesimals are (non-"standard") elements of the set of the real numbers (rather than being elements of an extension thereof, as in Robinson's theory). Thecontinuum hypothesisposits that the cardinality of the set of the real numbers isℵ1{\displaystyle \aleph _{1}}; i.e. the smallest infinitecardinal numberafterℵ0{\displaystyle \aleph _{0}}, the cardinality of the integers.Paul Cohenproved in 1963 that it is an axiom independent of the other axioms of set theory; that is: one may choose either the continuum hypothesis or its negation as an axiom of set theory, without contradiction. Electronic calculatorsandcomputerscannot operate on arbitrary real numbers, because finite computers cannot directly store infinitely many digits or other infinite representations. Nor do they usually even operate on arbitrarydefinable real numbers, which are inconvenient to manipulate. Instead, computers typically work with finite-precision approximations calledfloating-point numbers, a representation similar toscientific notation. The achievable precision is limited by thedata storage spaceallocated for each number, whether asfixed-point, floating-point, orarbitrary-precision numbers, or some other representation. Mostscientific computationusesbinaryfloating-point arithmetic, often a64-bit representationwith around 16 decimaldigits of precision. Real numbers satisfy theusual rules of arithmetic, butfloating-point numbers do not. The field ofnumerical analysisstudies thestabilityandaccuracyof numericalalgorithmsimplemented with approximate arithmetic. 
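As a small illustration of the remark that floating-point numbers do not obey the usual rules of real arithmetic, the following snippet shows two classic failures of 64-bit binary floating point; the example values are our own:

```python
# 0.1 has no finite binary expansion, so ten copies of it do not sum to 1 exactly.
print(sum([0.1] * 10) == 1.0)   # False
print(sum([0.1] * 10))          # 0.9999999999999999

# Floating-point addition is not associative.
a, b, c = 1e16, -1e16, 1.0
print((a + b) + c)              # 1.0
print(a + (b + c))              # 0.0
```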
Alternately,computer algebra systemscan operate on irrational quantities exactly bymanipulating symbolic formulasfor them (such as2,{\textstyle {\sqrt {2}},}arctan⁡5,{\textstyle \arctan 5,}or∫01xxdx{\textstyle \int _{0}^{1}x^{x}\,dx}) rather than their rational or decimal approximation.[22]But exact and symbolic arithmetic also have limitations: for instance, they are computationally more expensive; it is not in general possible to determine whether two symbolic expressions are equal (theconstant problem); and arithmetic operations can causeexponentialexplosion in the size of representation of a single number (for instance, squaring a rational number roughly doubles the number of digits in its numerator and denominator, and squaring apolynomialroughly doubles its number of terms), overwhelming finite computer storage.[23] A real number is calledcomputableif there exists an algorithm that yields its digits. Because there are onlycountablymany algorithms,[24]but an uncountable number of reals,almost allreal numbers fail to be computable. Moreover, the equality of two computable numbers is anundecidable problem. Someconstructivistsaccept the existence of only those reals that are computable. The set ofdefinable numbersis broader, but still only countable. Inset theory, specificallydescriptive set theory, theBaire spaceis used as a surrogate for the real numbers since the latter have some topological properties (connectedness) that are a technical inconvenience. Elements of Baire space are referred to as "reals". Thesetof all real numbers is denotedR{\displaystyle \mathbb {R} }(blackboard bold) orR(upright bold). As it is naturally endowed with the structure of afield, the expressionfield of real numbersis frequently used when its algebraic properties are under consideration. The sets of positive real numbers and negative real numbers are often notedR+{\displaystyle \mathbb {R} ^{+}}andR−{\displaystyle \mathbb {R} ^{-}},[25]respectively;R+{\displaystyle \mathbb {R} _{+}}andR−{\displaystyle \mathbb {R} _{-}}are also used.[26]The non-negative real numbers can be notedR≥0{\displaystyle \mathbb {R} _{\geq 0}}but one often sees this set notedR+∪{0}.{\displaystyle \mathbb {R} ^{+}\cup \{0\}.}[25]In French mathematics, thepositive real numbersandnegative real numberscommonly includezero, and these sets are noted respectivelyR+{\displaystyle \mathbb {R_{+}} }andR−.{\displaystyle \mathbb {R} _{-}.}[26]In this understanding, the respective sets without zero are called strictly positive real numbers and strictly negative real numbers, and are notedR+∗{\displaystyle \mathbb {R} _{+}^{*}}andR−∗.{\displaystyle \mathbb {R} _{-}^{*}.}[26] The notationRn{\displaystyle \mathbb {R} ^{n}}refers to the set of then-tuplesof elements ofR{\displaystyle \mathbb {R} }(real coordinate space), which can be identified to theCartesian productofncopies ofR.{\displaystyle \mathbb {R} .}It is ann-dimensionalvector spaceover the field of the real numbers, often called thecoordinate spaceof dimensionn; this space may be identified to then-dimensionalEuclidean spaceas soon as aCartesian coordinate systemhas been chosen in the latter. In this identification, apointof the Euclidean space is identified with the tuple of itsCartesian coordinates. In mathematicsrealis used as an adjective, meaning that the underlying field is the field of the real numbers (orthe real field). For example,realmatrix,real polynomialandrealLie algebra. The word is also used as anoun, meaning a real number (as in "the set of all reals"). 
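To make the symbolic alternative described above concrete, here is a brief sketch that assumes the third-party SymPy library is installed; SymPy is only one computer algebra system among many, and the example values are our own:

```python
import sympy

r = sympy.sqrt(2)
print(r * r)                                        # 2: the square is exact, no rounding
print(sympy.Rational(1, 3) + sympy.Rational(1, 6))  # 1/2: exact rational arithmetic

# Growth of representations: squaring a rational roughly doubles the number
# of digits in its numerator and denominator, as noted above.
q = sympy.Rational(10007, 10009)   # already in lowest terms
print(q**2)
```

Exact arithmetic avoids rounding entirely, at the cost of larger representations and slower operations, as noted above.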
The real numbers can be generalized and extended in several different directions:
https://en.wikipedia.org/wiki/Set_of_real_numbers
In computer science, linear search or sequential search is a method for finding an element within a list. It sequentially checks each element of the list until a match is found or the whole list has been searched.[1]

A linear search runs in linear time in the worst case, and makes at most n comparisons, where n is the length of the list. If each element is equally likely to be searched, then linear search has an average case of (n + 1)/2 comparisons, but the average case can be affected if the search probabilities for each element vary. Linear search is rarely practical because other search algorithms and schemes, such as the binary search algorithm and hash tables, allow significantly faster searching for all but short lists.[2]

A linear search sequentially checks each element of the list until it finds an element that matches the target value. If the algorithm reaches the end of the list, the search terminates unsuccessfully.[1]

Given a list L of n elements with values or records L0, ..., Ln−1, and target value T, the following subroutine uses linear search to find the index of the target T in L.[3]

The basic algorithm above makes two comparisons per iteration: one to check if Li equals T, and the other to check if i still points to a valid index of the list. By adding an extra record Ln to the list (a sentinel value) that equals the target, the second comparison can be eliminated until the end of the search, making the algorithm faster. The search will reach the sentinel if the target is not contained within the list.[4]

If the list is ordered such that L0 ≤ L1 ≤ ... ≤ Ln−1, the search can establish the absence of the target more quickly by concluding the search once Li exceeds the target. This variation requires a sentinel that is greater than the target.[5]

For a list with n items, the best case is when the value is equal to the first element of the list, in which case only one comparison is needed. The worst case is when the value is not in the list (or occurs only once at the end of the list), in which case n comparisons are needed. If the value being sought occurs k times in the list, and all orderings of the list are equally likely, the expected number of comparisons is (n + 1)/(k + 1). For example, if the value being sought occurs once in the list and all orderings of the list are equally likely, the expected number of comparisons is (n + 1)/2. However, if it is known that it occurs exactly once, then at most n − 1 comparisons are needed, and the expected number of comparisons is (n + 2)(n − 1)/(2n); for n = 2 this is 1, corresponding to a single if-then-else construct. Either way, asymptotically the worst-case cost and the expected cost of linear search are both O(n).

The performance of linear search improves if the desired value is more likely to be near the beginning of the list than near its end. Therefore, if some values are much more likely to be searched than others, it is desirable to place them at the beginning of the list. In particular, when the list items are arranged in order of decreasing probability, and these probabilities are geometrically distributed, the cost of linear search is only O(1).[6]

Linear search is usually very simple to implement, and is practical when the list has only a few elements, or when performing a single search in an unordered list. When many values have to be searched in the same list, it often pays to pre-process the list in order to use a faster method. For example, one may sort the list and use binary search, or build an efficient search data structure from it.
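The basic subroutine and the sentinel variant described above can be written out as a short Python sketch; the function names are our own, and the code is illustrative rather than an optimized implementation:

```python
from typing import Any, List, Sequence

def linear_search(lst: Sequence[Any], target: Any) -> int:
    """Return the index of target in lst, or -1 if it is absent.

    Checks each element in turn: at most len(lst) comparisons with target.
    """
    for i, value in enumerate(lst):
        if value == target:
            return i
    return -1

def linear_search_sentinel(lst: List[Any], target: Any) -> int:
    """Sentinel variant: append the target so the loop needs no bounds check."""
    n = len(lst)
    lst.append(target)          # the sentinel guarantees the loop terminates
    i = 0
    while lst[i] != target:
        i += 1
    lst.pop()                   # restore the original list
    return i if i < n else -1

data = [31, 4, 15, 92, 65, 35]
print(linear_search(data, 92))           # 3
print(linear_search_sentinel(data, 7))   # -1
```

The sentinel version trades one append and one pop for the removal of the index bounds check inside the loop.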
If the contents of the list change frequently, however, repeated re-organization may be more trouble than it is worth. As a result, even though other search algorithms such as binary search are asymptotically faster, linear search often remains the practical choice for small and medium-sized arrays (on the order of 100 items or fewer). For larger arrays, switching to a faster method pays off only when enough searches are performed, because the one-time cost of preparing (sorting) the data is comparable to the cost of many linear searches.[7]
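When the same list is to be searched many times, the sort-and-binary-search strategy mentioned above can be sketched with the standard-library bisect module; this is one possible pre-processing approach, shown with example data of our own:

```python
import bisect

data = [31, 4, 15, 92, 65, 35]

# One-time preparation: sort once, then answer each membership query
# in O(log n) time instead of O(n).
prepared = sorted(data)

def contains(sorted_list, target):
    i = bisect.bisect_left(sorted_list, target)
    return i < len(sorted_list) and sorted_list[i] == target

print(contains(prepared, 65))   # True
print(contains(prepared, 7))    # False
```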
https://en.wikipedia.org/wiki/Linear_search
Abinary numberis anumberexpressed in thebase-2numeral systemorbinary numeral system, a method for representingnumbersthat uses only two symbols for thenatural numbers: typically "0" (zero) and "1" (one). Abinary numbermay also refer to arational numberthat has a finite representation in the binary numeral system, that is, the quotient of anintegerby a power of two. The base-2 numeral system is apositional notationwith aradixof2. Each digit is referred to as abit, or binary digit. Because of its straightforward implementation indigital electronic circuitryusinglogic gates, the binary system is used by almost all moderncomputers and computer-based devices, as a preferred system of use, over various other human techniques of communication, because of the simplicity of the language and the noise immunity in physical implementation.[1] The modern binary number system was studied in Europe in the 16th and 17th centuries byThomas Harriot, andGottfried Leibniz. However, systems related to binary numbers have appeared earlier in multiple cultures including ancient Egypt, China, Europe and India. The scribes of ancient Egypt used two different systems for their fractions,Egyptian fractions(not related to the binary number system) andHorus-Eyefractions (so called because many historians of mathematics believe that the symbols used for this system could be arranged to form the eye ofHorus, although this has been disputed).[2]Horus-Eye fractions are a binary numbering system for fractional quantities of grain, liquids, or other measures, in which a fraction of ahekatis expressed as a sum of the binary fractions 1/2, 1/4, 1/8, 1/16, 1/32, and 1/64. Early forms of this system can be found in documents from theFifth Dynasty of Egypt, approximately 2400 BC, and its fully developed hieroglyphic form dates to theNineteenth Dynasty of Egypt, approximately 1200 BC.[3] The method used forancient Egyptian multiplicationis also closely related to binary numbers. In this method, multiplying one number by a second is performed by a sequence of steps in which a value (initially the first of the two numbers) is either doubled or has the first number added back into it; the order in which these steps are to be performed is given by the binary representation of the second number. This method can be seen in use, for instance, in theRhind Mathematical Papyrus, which dates to around 1650 BC.[4] TheI Chingdates from the 9th century BC in China.[5]The binary notation in theI Chingis used to interpret itsquaternarydivinationtechnique.[6] It is based on taoistic duality ofyin and yang.[7]Eight trigrams (Bagua)and a set of64 hexagrams ("sixty-four" gua), analogous to the three-bit and six-bit binary numerals, were in use at least as early as theZhou dynastyof ancient China.[5] TheSong dynastyscholarShao Yong(1011–1077) rearranged the hexagrams in a format that resembles modern binary numbers, although he did not intend his arrangement to be used mathematically.[6]Viewing theleast significant biton top of single hexagrams in Shao Yong's square[8]and reading along rows either from bottom right to top left with solid lines as 0 and broken lines as 1 or from top left to bottom right with solid lines as 1 and broken lines as 0 hexagrams can be interpreted as sequence from 0 to 63.[9] Etruscansdivided the outer edge ofdivination liversinto sixteen parts, each inscribed with the name of a divinity and its region of the sky. 
Each liver region produced a binary reading which was combined into a final binary for divination.[10] Divination at Ancient GreekDodonaoracle worked by drawing from separate jars, questions tablets and "yes" and "no" pellets. The result was then combined to make a final prophecy.[11] The Indian scholarPingala(c. 2nd century BC) developed a binary system for describingprosody.[12][13]He described meters in the form of short and long syllables (the latter equal in length to two short syllables).[14]They were known aslaghu(light) andguru(heavy) syllables. Pingala's Hindu classic titledChandaḥśāstra(8.23) describes the formation of a matrix in order to give a unique value to each meter. "Chandaḥśāstra" literally translates toscience of metersin Sanskrit. The binary representations in Pingala's system increases towards the right, and not to the left like in the binary numbers of the modernpositional notation.[15]In Pingala's system, the numbers start from number one, and not zero. Four short syllables "0000" is the first pattern and corresponds to the value one. The numerical value is obtained by adding one to the sum ofplace values.[16] TheIfáis an African divination system.Similar to theI Ching, but has up to 256 binary signs,[17]unlike theI Chingwhich has 64. The Ifá originated in 15th century West Africa amongYoruba people. In 2008,UNESCOadded Ifá to its list of the "Masterpieces of the Oral and Intangible Heritage of Humanity".[18][19] The residents of the island ofMangarevainFrench Polynesiawere using a hybrid binary-decimalsystem before 1450.[20]Slit drumswith binary tones are used to encode messages across Africa and Asia.[7]Sets of binary combinations similar to theI Chinghave also been used in traditional African divination systems, such asIfáamong others, as well as inmedievalWesterngeomancy. The majority ofIndigenous Australian languagesuse a base-2 system.[21] In the late 13th centuryRamon Llullhad the ambition to account for all wisdom in every branch of human knowledge of the time. For that purpose he developed a general method or "Ars generalis" based on binary combinations of a number of simple basic principles or categories, for which he has been considered a predecessor of computing science and artificial intelligence.[22] In 1605,Francis Bacondiscussed a system whereby letters of the alphabet could be reduced to sequences of binary digits, which could then be encoded as scarcely visible variations in the font in any random text.[23]Importantly for the general theory of binary encoding, he added that this method could be used with any objects at all: "provided those objects be capable of a twofold difference only; as by Bells, by Trumpets, by Lights and Torches, by the report of Muskets, and any instruments of like nature".[23](SeeBacon's cipher.) 
In 1617,John Napierdescribed a system he calledlocation arithmeticfor doing binary calculations using a non-positional representation by letters.Thomas Harriotinvestigated several positional numbering systems, including binary, but did not publish his results; they were found later among his papers.[24]Possibly the first publication of the system in Europe was byJuan Caramuel y Lobkowitz, in 1700.[25] Leibniz wrote in excess of a hundred manuscripts on binary, most of them remaining unpublished.[26]Before his first dedicated work in 1679, numerous manuscripts feature early attempts to explore binary concepts, including tables of numbers and basic calculations, often scribbled in the margins of works unrelated to mathematics.[26] His first known work on binary,“On the Binary Progression", in 1679, Leibniz introduced conversion between decimal and binary, along with algorithms for performing basic arithmetic operations such as addition, subtraction, multiplication, and division using binary numbers. He also developed a form of binary algebra to calculate the square of a six-digit number and to extract square roots.[26] His most well known work appears in his articleExplication de l'Arithmétique Binaire(published in 1703). The full title of Leibniz's article is translated into English as the"Explanation of Binary Arithmetic, which uses only the characters 1 and 0, with some remarks on its usefulness, and on the light it throws on the ancient Chinese figures ofFu Xi".[27]Leibniz's system uses 0 and 1, like the modern binary numeral system. An example of Leibniz's binary numeral system is as follows:[27] While corresponding with the Jesuit priestJoachim Bouvetin 1700, who had made himself an expert on theI Chingwhile a missionary in China, Leibniz explained his binary notation, and Bouvet demonstrated in his 1701 letters that theI Chingwas an independent, parallel invention of binary notation. Leibniz & Bouvet concluded that this mapping was evidence of major Chinese accomplishments in the sort of philosophicalmathematicshe admired.[28]Of this parallel invention, Leibniz wrote in his "Explanation Of Binary Arithmetic" that "this restitution of their meaning, after such a great interval of time, will seem all the more curious."[29] The relation was a central idea to his universal concept of a language orcharacteristica universalis, a popular idea that would be followed closely by his successors such asGottlob FregeandGeorge Boolein formingmodern symbolic logic.[30]Leibniz was first introduced to theI Chingthrough his contact with the French JesuitJoachim Bouvet, who visited China in 1685 as a missionary. Leibniz saw theI Chinghexagrams as an affirmation of theuniversalityof his own religious beliefs as a Christian.[31]Binary numerals were central to Leibniz's theology. He believed that binary numbers were symbolic of the Christian idea ofcreatio ex nihiloor creation out of nothing.[32] [A concept that] is not easy to impart to the pagans, is the creationex nihilothrough God's almighty power. Now one can say that nothing in the world can better present and demonstrate this power than the origin of numbers, as it is presented here through the simple and unadorned presentation of One and Zero or Nothing. In 1854, British mathematicianGeorge Boolepublished a landmark paper detailing analgebraicsystem oflogicthat would become known asBoolean algebra. 
His logical calculus was to become instrumental in the design of digital electronic circuitry.[33] In 1937,Claude Shannonproduced his master's thesis atMITthat implemented Boolean algebra and binary arithmetic using electronic relays and switches for the first time in history. EntitledA Symbolic Analysis of Relay and Switching Circuits, Shannon's thesis essentially founded practicaldigital circuitdesign.[34] In November 1937,George Stibitz, then working atBell Labs, completed a relay-based computer he dubbed the "Model K" (for "Kitchen", where he had assembled it), which calculated using binary addition.[35]Bell Labs authorized a full research program in late 1938 with Stibitz at the helm. Their Complex Number Computer, completed 8 January 1940, was able to calculatecomplex numbers. In a demonstration to theAmerican Mathematical Societyconference atDartmouth Collegeon 11 September 1940, Stibitz was able to send the Complex Number Calculator remote commands over telephone lines by ateletype. It was the first computing machine ever used remotely over a phone line. Some participants of the conference who witnessed the demonstration wereJohn von Neumann,John MauchlyandNorbert Wiener, who wrote about it in his memoirs.[36][37][38] TheZ1 computer, which was designed and built byKonrad Zusebetween 1935 and 1938, usedBoolean logicand binaryfloating-point numbers.[39] Any number can be represented by a sequence ofbits(binary digits), which in turn may be represented by any mechanism capable of being in two mutually exclusive states. Any of the following rows of symbols can be interpreted as the binary numeric value of 667: The numeric value represented in each case depends on the value assigned to each symbol. In the earlier days of computing, switches, punched holes, and punched paper tapes were used to represent binary values.[40]In a modern computer, the numeric values may be represented by two differentvoltages; on amagneticdisk,magnetic polaritiesmay be used. A "positive", "yes", or "on" state is not necessarily equivalent to the numerical value of one; it depends on the architecture in use. In keeping with the customary representation of numerals usingArabic numerals, binary numbers are commonly written using the symbols0and1. When written, binary numerals are often subscripted, prefixed, or suffixed to indicate their base, orradix. The following notations are equivalent: When spoken, binary numerals are usually read digit-by-digit, to distinguish them from decimal numerals. For example, the binary numeral 100 is pronouncedone zero zero, rather thanone hundred, to make its binary nature explicit and for purposes of correctness. Since the binary numeral 100 represents the value four, it would be confusing to refer to the numeral asone hundred(a word that represents a completely different value, or amount). Alternatively, the binary numeral 100 can be read out as "four" (the correctvalue), but this does not make its binary nature explicit. Counting in binary is similar to counting in any other number system. Beginning with a single digit, counting proceeds through each symbol, in increasing order. Before examining binary counting, it is useful to briefly discuss the more familiardecimalcounting system as a frame of reference. Decimalcounting uses the ten symbols0through9. Counting begins with the incremental substitution of the least significant digit (rightmost digit) which is often called thefirst digit. 
When the available symbols for this position are exhausted, the least significant digit is reset to0, and the next digit of higher significance (one position to the left) is incremented (overflow), and incremental substitution of the low-order digit resumes. This method of reset and overflow is repeated for each digit of significance. Counting progresses as follows: Binary counting follows the exact same procedure, and again the incremental substitution begins with the least significant binary digit, orbit(the rightmost one, also called thefirst bit), except that only the two symbols0and1are available. Thus, after a bit reaches 1 in binary, an increment resets it to 0 but also causes an increment of the next bit to the left: In the binary system, each bit represents an increasing power of 2, with the rightmost bit representing 20, the next representing 21, then 22, and so on. The value of a binary number is the sum of the powers of 2 represented by each "1" bit. For example, the binary number 100101 is converted to decimal form as follows: Fractionsin binary arithmeticterminateonly if thedenominatoris apower of 2. As a result, 1/10 does not have a finite binary representation (10has prime factors2and5). This causes 10 × 1/10 not to precisely equal 1 in binaryfloating-point arithmetic. As an example, to interpret the binary expression for 1/3 = .010101..., this means: 1/3 = 0 ×2−1+ 1 ×2−2+ 0 ×2−3+ 1 ×2−4+ ... = 0.3125 + ... An exact value cannot be found with a sum of a finite number of inverse powers of two, the zeros and ones in the binary representation of 1/3 alternate forever. Arithmeticin binary is much like arithmetic in otherpositional notationnumeral systems. Addition, subtraction, multiplication, and division can be performed on binary numerals. The simplest arithmetic operation in binary is addition. Adding two single-digit binary numbers is relatively simple, using a form of carrying: Adding two "1" digits produces a digit "0", while 1 will have to be added to the next column. This is similar to what happens in decimal when certain single-digit numbers are added together; if the result equals or exceeds the value of the radix (10), the digit to the left is incremented: This is known ascarrying. When the result of an addition exceeds the value of a digit, the procedure is to "carry" the excess amount divided by the radix (that is, 10/10) to the left, adding it to the next positional value. This is correct since the next position has a weight that is higher by a factor equal to the radix. Carrying works the same way in binary: In this example, two numerals are being added together: 011012(1310) and 101112(2310). The top row shows the carry bits used. Starting in the rightmost column, 1 + 1 = 102. The 1 is carried to the left, and the 0 is written at the bottom of the rightmost column. The second column from the right is added: 1 + 0 + 1 = 102again; the 1 is carried, and 0 is written at the bottom. The third column: 1 + 1 + 1 = 112. This time, a 1 is carried, and a 1 is written in the bottom row. Proceeding like this gives the final answer 1001002(3610). When computers must add two numbers, the rule that: xxory = (x + y)mod2 for any two bits x and y allows for very fast calculation, as well. A simplification for many binary addition problems is the "long carry method" or "Brookhouse Method of Binary Addition". This method is particularly useful when one of the numbers contains a long stretch of ones. 
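Before turning to the details of the long-carry method below, the positional evaluation and the ordinary carrying procedure just illustrated can be checked with a short sketch; the helper names are our own, and int(..., 2) is used only as a cross-check:

```python
def binary_value(digits: str) -> int:
    """Sum of the powers of two selected by the '1' bits, as described above."""
    total = 0
    for position, bit in enumerate(reversed(digits)):
        if bit == "1":
            total += 2 ** position
    return total

print(binary_value("100101"))   # 37 = 32 + 4 + 1
print(int("100101", 2))         # 37, cross-check with the built-in parser

def add_binary(a: str, b: str) -> str:
    """Schoolbook addition with carries, mirroring the worked example above."""
    i, j = len(a) - 1, len(b) - 1
    carry, out = 0, []
    while i >= 0 or j >= 0 or carry:
        s = carry + (int(a[i]) if i >= 0 else 0) + (int(b[j]) if j >= 0 else 0)
        out.append(str(s % 2))   # digit written in this column
        carry = s // 2           # a 1 is carried to the next column when s >= 2
        i, j = i - 1, j - 1
    return "".join(reversed(out))

print(add_binary("01101", "10111"))   # 100100 (13 + 23 = 36)
```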
It is based on the simple premise that under the binary system, when given a stretch of digits composed entirely ofnones (wherenis any integer length), adding 1 will result in the number 1 followed by a string ofnzeros. That concept follows, logically, just as in the decimal system, where adding 1 to a string ofn9s will result in the number 1 followed by a string ofn0s: Such long strings are quite common in the binary system. From that one finds that large binary numbers can be added using two simple steps, without excessive carry operations. In the following example, two numerals are being added together: 1 1 1 0 1 1 1 1 1 02(95810) and 1 0 1 0 1 1 0 0 1 12(69110), using the traditional carry method on the left, and the long carry method on the right: The top row shows the carry bits used. Instead of the standard carry from one column to the next, the lowest-ordered "1" with a "1" in the corresponding place value beneath it may be added and a "1" may be carried to one digit past the end of the series. The "used" numbers must be crossed off, since they are already added. Other long strings may likewise be cancelled using the same technique. Then, simply add together any remaining digits normally. Proceeding in this manner gives the final answer of 1 1 0 0 1 1 1 0 0 0 12(164910). In our simple example using small numbers, the traditional carry method required eight carry operations, yet the long carry method required only two, representing a substantial reduction of effort. The binary addition table is similar to, but not the same as, thetruth tableof thelogical disjunctionoperation∨{\displaystyle \lor }. The difference is that1∨1=1{\displaystyle 1\lor 1=1}, while1+1=10{\displaystyle 1+1=10}. Subtractionworks in much the same way: Subtracting a "1" digit from a "0" digit produces the digit "1", while 1 will have to be subtracted from the next column. This is known asborrowing. The principle is the same as for carrying. When the result of a subtraction is less than 0, the least possible value of a digit, the procedure is to "borrow" the deficit divided by the radix (that is, 10/10) from the left, subtracting it from the next positional value. Subtracting a positive number is equivalent toaddinganegative numberof equalabsolute value. Computers usesigned number representationsto handle negative numbers—most commonly thetwo's complementnotation. Such representations eliminate the need for a separate "subtract" operation. Using two's complement notation, subtraction can be summarized by the following formula: Multiplicationin binary is similar to its decimal counterpart. Two numbersAandBcan be multiplied by partial products: for each digit inB, the product of that digit inAis calculated and written on a new line, shifted leftward so that its rightmost digit lines up with the digit inBthat was used. The sum of all these partial products gives the final result. Since there are only two digits in binary, there are only two possible outcomes of each partial multiplication: For example, the binary numbers 1011 and 1010 are multiplied as follows: Binary numbers can also be multiplied with bits after abinary point: See alsoBooth's multiplication algorithm. The binary multiplication table is the same as thetruth tableof thelogical conjunctionoperation∧{\displaystyle \land }. Long divisionin binary is again similar to its decimal counterpart. In the example below, thedivisoris 1012, or 5 in decimal, while thedividendis 110112, or 27 in decimal. 
The procedure is the same as that of decimallong division; here, the divisor 1012goes into the first three digits 1102of the dividend one time, so a "1" is written on the top line. This result is multiplied by the divisor, and subtracted from the first three digits of the dividend; the next digit (a "1") is included to obtain a new three-digit sequence: The procedure is then repeated with the new sequence, continuing until the digits in the dividend have been exhausted: Thus, thequotientof 110112divided by 1012is 1012, as shown on the top line, while the remainder, shown on the bottom line, is 102. In decimal, this corresponds to the fact that 27 divided by 5 is 5, with a remainder of 2. Aside from long division, one can also devise the procedure so as to allow for over-subtracting from the partial remainder at each iteration, thereby leading to alternative methods which are less systematic, but more flexible as a result. The process oftaking a binary square rootdigit by digit is essentially the same as for a decimal square root but much simpler, due to the binary nature. First group the digits in pairs, using a leading 0 if necessary so there are an even number of digits. Now at each step, consider the answer so far, extended with the digits 01. If this can be subtracted from the current remainder, do so. Then extend the remainder with the next pair of digits. If you subtracted, the next digit of the answer is 1, otherwise it's 0. Though not directly related to the numerical interpretation of binary symbols, sequences of bits may be manipulated usingBoolean logical operators. When a string of binary symbols is manipulated in this way, it is called abitwise operation; the logical operatorsAND,OR, andXORmay be performed on corresponding bits in two binary numerals provided as input. The logicalNOToperation may be performed on individual bits in a single binary numeral provided as input. Sometimes, such operations may be used as arithmetic short-cuts, and may have other computational benefits as well. For example, anarithmetic shiftleft of a binary number is the equivalent of multiplication by a (positive, integral) power of 2. To convert from a base-10integerto its base-2 (binary) equivalent, the number isdivided by two. The remainder is theleast-significant bit. The quotient is again divided by two; its remainder becomes the next least significant bit. This process repeats until a quotient of one is reached. The sequence of remainders (including the final quotient of one) forms the binary value, as each remainder must be either zero or one when dividing by two. For example, (357)10is expressed as (101100101)2.[43] Conversion from base-2 to base-10 simply inverts the preceding algorithm. The bits of the binary number are used one by one, starting with the most significant (leftmost) bit. Beginning with the value 0, the prior value is doubled, and the next bit is then added to produce the next value. This can be organized in a multi-column table. For example, to convert 100101011012to decimal: The result is 119710. The first Prior Value of 0 is simply an initial decimal value. This method is an application of theHorner scheme. The fractional parts of a number are converted with similar methods. They are again based on the equivalence of shifting with doubling or halving. In a fractional binary number such as 0.110101101012, the first digit is12{\textstyle {\frac {1}{2}}}, the second(12)2=14{\textstyle ({\frac {1}{2}})^{2}={\frac {1}{4}}}, etc. 
So if there is a 1 in the first place after the decimal, then the number is at least12{\textstyle {\frac {1}{2}}}, and vice versa. Double that number is at least 1. This suggests the algorithm: Repeatedly double the number to be converted, record if the result is at least 1, and then throw away the integer part. For example,(13)10{\textstyle ({\frac {1}{3}})_{10}}, in binary, is: Thus the repeating decimal fraction 0.3... is equivalent to the repeating binary fraction 0.01... . Or for example, 0.110, in binary, is: This is also a repeating binary fraction 0.00011... . It may come as a surprise that terminating decimal fractions can have repeating expansions in binary. It is for this reason that many are surprised to discover that 1/10 + ... + 1/10 (addition of 10 numbers) differs from 1 in binaryfloating-point arithmetic. In fact, the only binary fractions with terminating expansions are of the form of an integer divided by a power of 2, which 1/10 is not. The final conversion is from binary to decimal fractions. The only difficulty arises with repeating fractions, but otherwise the method is to shift the fraction to an integer, convert it as above, and then divide by the appropriate power of two in the decimal base. For example: x=1100.101110¯…x×26=1100101110.01110¯…x×2=11001.01110¯…x×(26−2)=1100010101x=1100010101/111110x=(789/62)10{\displaystyle {\begin{aligned}x&=&1100&.1{\overline {01110}}\ldots \\x\times 2^{6}&=&1100101110&.{\overline {01110}}\ldots \\x\times 2&=&11001&.{\overline {01110}}\ldots \\x\times (2^{6}-2)&=&1100010101\\x&=&1100010101/111110\\x&=&(789/62)_{10}\end{aligned}}} Another way of converting from binary to decimal, often quicker for a person familiar withhexadecimal, is to do so indirectly—first converting (x{\displaystyle x}in binary) into (x{\displaystyle x}in hexadecimal) and then converting (x{\displaystyle x}in hexadecimal) into (x{\displaystyle x}in decimal). For very large numbers, these simple methods are inefficient because they perform a large number of multiplications or divisions where one operand is very large. A simple divide-and-conquer algorithm is more effective asymptotically: given a binary number, it is divided by 10k, wherekis chosen so that the quotient roughly equals the remainder; then each of these pieces is converted to decimal and the two areconcatenated. Given a decimal number, it can be split into two pieces of about the same size, each of which is converted to binary, whereupon the first converted piece is multiplied by 10kand added to the second converted piece, wherekis the number of decimal digits in the second, least-significant piece before conversion. Binary may be converted to and from hexadecimal more easily. This is because theradixof the hexadecimal system (16) is a power of the radix of the binary system (2). More specifically, 16 = 24, so it takes four digits of binary to represent one digit of hexadecimal, as shown in the adjacent table. To convert a hexadecimal number into its binary equivalent, simply substitute the corresponding binary digits: To convert a binary number into its hexadecimal equivalent, divide it into groups of four bits. If the number of bits isn't a multiple of four, simply insert extra0bits at the left (calledpadding). 
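The conversion procedures described above (repeated division by two for integers, the Horner-style doubling scheme for the reverse direction, repeated doubling for fractional parts, and four-bit grouping for hexadecimal) can be collected into one short sketch; the helper names are our own, and the built-ins are used only as cross-checks:

```python
def to_binary(n: int) -> str:
    """Repeatedly divide by two; the remainders, read in reverse, are the bits."""
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        n, r = divmod(n, 2)
        bits.append(str(r))
    return "".join(reversed(bits))

def from_binary(bits: str) -> int:
    """Horner scheme: double the running value and add the next bit."""
    value = 0
    for bit in bits:
        value = 2 * value + int(bit)
    return value

def fraction_to_binary(x: float, places: int = 12) -> str:
    """Repeatedly double the fractional part and record whether it reached 1."""
    digits = []
    for _ in range(places):
        x *= 2
        digit, x = divmod(x, 1)   # the integer part is the next binary digit
        digits.append(str(int(digit)))
    return "0." + "".join(digits)

def binary_to_hex(bits: str) -> str:
    """Pad to a multiple of four bits, then translate each group of four."""
    bits = bits.zfill((len(bits) + 3) // 4 * 4)
    groups = [bits[i:i + 4] for i in range(0, len(bits), 4)]
    return "".join(format(int(g, 2), "X") for g in groups)

print(to_binary(357))              # 101100101, as in the example above
print(from_binary("10010101101"))  # 1197
print(fraction_to_binary(1 / 3))   # 0.010101010101 (the repeating pattern never terminates)
print(binary_to_hex("101100101"))  # 165, and 0x165 is indeed 357
```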
For example: To convert a hexadecimal number into its decimal equivalent, multiply the decimal equivalent of each hexadecimal digit by the corresponding power of 16 and add the resulting values: Binary is also easily converted to the octal numeral system, since octal uses a radix of 8, which is a power of two (namely, 2³, so it takes exactly three binary digits to represent an octal digit). The correspondence between octal and binary numerals is the same as for the first eight digits of hexadecimal in the table above. Binary 000 is equivalent to the octal digit 0, binary 111 is equivalent to octal 7, and so forth. Converting from octal to binary proceeds in the same fashion as it does for hexadecimal: And from binary to octal: And from octal to decimal:

Non-integers can be represented by using negative powers, which are set off from the other digits by means of a radix point (called a decimal point in the decimal system). For example, the binary number 11.01₂ means: For a total of 3.25 decimal.

All dyadic rational numbers {\displaystyle {\frac {p}{2^{a}}}} have a terminating binary numeral—the binary representation has a finite number of terms after the radix point. Other rational numbers have binary representation, but instead of terminating, they recur, with a finite sequence of digits repeating indefinitely. For instance

{\displaystyle {\frac {1_{10}}{3_{10}}}={\frac {1_{2}}{11_{2}}}=0.01010101{\overline {01}}\ldots \,_{2}}

{\displaystyle {\frac {12_{10}}{17_{10}}}={\frac {1100_{2}}{10001_{2}}}=0.1011010010110100{\overline {10110100}}\ldots \,_{2}}

The phenomenon that the binary representation of any rational is either terminating or recurring also occurs in other radix-based numeral systems. See, for instance, the explanation in decimal. Another similarity is the existence of alternative representations for any terminating representation, relying on the fact that 0.111111... is the sum of the geometric series 2⁻¹ + 2⁻² + 2⁻³ + ... which is 1. Binary numerals that neither terminate nor recur represent irrational numbers. For instance, the binary representation of the square root of 2 neither terminates nor recurs.
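The grouping rules just described (four bits per hexadecimal digit, three per octal digit, padding with leading zeros) can be sketched as follows in Python; binary_to_base and the digit strings are illustrative assumptions, not part of the article.

def binary_to_base(bits: str, group: int, digits: str) -> str:
    """Group binary digits from the right in blocks of `group` bits
    (4 for hexadecimal, 3 for octal), padding with leading zeros,
    and replace each block by the corresponding digit."""
    bits = bits.zfill(-(-len(bits) // group) * group)   # left-pad to a multiple of `group`
    return "".join(digits[int(bits[i:i + group], 2)]
                   for i in range(0, len(bits), group))

HEX_DIGITS = "0123456789ABCDEF"
OCT_DIGITS = "01234567"

assert binary_to_base("1001010", 4, HEX_DIGITS) == "4A"     # decimal 74
assert binary_to_base("1001010", 3, OCT_DIGITS) == "112"    # decimal 74
assert binary_to_base("1010111100", 3, OCT_DIGITS) == "1274"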
https://en.wikipedia.org/wiki/Binary_number
Thedecimalnumeral system(also called thebase-tenpositional numeral systemanddenary/ˈdiːnəri/[1]ordecanary) is the standard system for denotingintegerand non-integernumbers. It is the extension to non-integer numbers (decimal fractions) of theHindu–Arabic numeral system. The way of denoting numbers in the decimal system is often referred to asdecimal notation.[2] Adecimal numeral(also often justdecimalor, less correctly,decimal number), refers generally to the notation of a number in the decimal numeral system. Decimals may sometimes be identified by adecimal separator(usually "." or "," as in25.9703or3,1415).[3]Decimalmay also refer specifically to the digits after the decimal separator, such as in "3.14is the approximation ofπtotwo decimals". Zero-digits after a decimal separator serve the purpose of signifying the precision of a value. The numbers that may be represented in the decimal system are thedecimal fractions. That is,fractionsof the forma/10n, whereais an integer, andnis anon-negative integer. Decimal fractions also result from the addition of an integer and afractional part; the resulting sum sometimes is called afractional number. Decimals are commonly used toapproximatereal numbers. By increasing the number of digits after the decimal separator, one can make theapproximation errorsas small as one wants, when one has a method for computing the new digits. Originally and in most uses, a decimal has only a finite number of digits after the decimal separator. However, the decimal system has been extended toinfinite decimalsfor representing anyreal number, by using aninfinite sequenceof digits after the decimal separator (seedecimal representation). In this context, the usual decimals, with a finite number of non-zero digits after the decimal separator, are sometimes calledterminating decimals. Arepeating decimalis an infinite decimal that, after some place, repeats indefinitely the same sequence of digits (e.g.,5.123144144144144... = 5.123144).[4]An infinite decimal represents arational number, thequotientof two integers, if and only if it is a repeating decimal or has a finite number of non-zero digits. Manynumeral systemsof ancient civilizations use ten and its powers for representing numbers, possibly because there are ten fingers on two hands and people started counting by using their fingers. Examples are firstly theEgyptian numerals, then theBrahmi numerals,Greek numerals,Hebrew numerals,Roman numerals, andChinese numerals.[5]Very large numbers were difficult to represent in these old numeral systems, and only the best mathematicians were able to multiply or divide large numbers. These difficulties were completely solved with the introduction of theHindu–Arabic numeral systemfor representingintegers. This system has been extended to represent some non-integer numbers, calleddecimal fractionsordecimal numbers, for forming thedecimal numeral system.[5] For writing numbers, the decimal system uses tendecimal digits, adecimal mark, and, fornegative numbers, aminus sign"−". The decimal digits are0,1,2,3,4,5,6,7,8,9;[6]thedecimal separatoris the dot "." in many countries (mostly English-speaking),[7]and a comma "," in other countries.[3] For representing anon-negative number, a decimal numeral consists of Ifm> 0, that is, if the first sequence contains at least two digits, it is generally assumed that the first digitamis not zero. 
In some circumstances it may be useful to have one or more 0's on the left; this does not change the value represented by the decimal: for example, 3.14 = 03.14 = 003.14. Similarly, if the final digit on the right of the decimal mark is zero—that is, if bₙ = 0—it may be removed; conversely, trailing zeros may be added after the decimal mark without changing the represented number;[note 1] for example, 15 = 15.0 = 15.00 and 5.2 = 5.20 = 5.200.

For representing a negative number, a minus sign is placed before aₘ. The numeral {\displaystyle a_{m}a_{m-1}\ldots a_{0}.b_{1}b_{2}\ldots b_{n}} represents the number {\displaystyle a_{m}10^{m}+a_{m-1}10^{m-1}+\cdots +a_{0}10^{0}+b_{1}10^{-1}+b_{2}10^{-2}+\cdots +b_{n}10^{-n}}.

The integer part or integral part of a decimal numeral is the integer written to the left of the decimal separator (see also truncation). For a non-negative decimal numeral, it is the largest integer that is not greater than the decimal. The part from the decimal separator to the right is the fractional part, which equals the difference between the numeral and its integer part. When the integral part of a numeral is zero, it may occur, typically in computing, that the integer part is not written (for example, .1234, instead of 0.1234). In normal writing, this is generally avoided, because of the risk of confusion between the decimal mark and other punctuation. In brief, the contribution of each digit to the value of a number depends on its position in the numeral. That is, the decimal system is a positional numeral system.

Decimal fractions (sometimes called decimal numbers, especially in contexts involving explicit fractions) are the rational numbers that may be expressed as a fraction whose denominator is a power of ten.[8] For example, the decimal expressions 0.8, 14.89, 0.00079, 1.618, 3.14159 represent the fractions 8/10, 1489/100, 79/100000, 1618/1000 and 314159/100000, and therefore denote decimal fractions. An example of a fraction that cannot be represented by a decimal expression (with a finite number of digits) is 1/3, 3 not being a power of 10. More generally, a decimal with n digits after the separator (a point or comma) represents the fraction with denominator 10ⁿ, whose numerator is the integer obtained by removing the separator. It follows that a number is a decimal fraction if and only if it has a finite decimal representation. Expressed as fully reduced fractions, the decimal numbers are those whose denominator is a product of a power of 2 and a power of 5. Thus the smallest denominators of decimal numbers are 1, 2, 4, 5, 8, 10, 16, 20, 25, …

Decimal numerals do not allow an exact representation for all real numbers. Nevertheless, they allow approximating every real number with any desired accuracy, e.g., the decimal 3.14159 approximates π, being less than 10⁻⁵ off; so decimals are widely used in science, engineering and everyday life. More precisely, for every real number x and every positive integer n, there are two decimals L and u with at most n digits after the decimal mark such that L ≤ x ≤ u and (u − L) = 10⁻ⁿ.

Numbers are very often obtained as the result of measurement. As measurements are subject to measurement uncertainty with a known upper bound, the result of a measurement is well-represented by a decimal with n digits after the decimal mark, as soon as the absolute measurement error is bounded from above by 10⁻ⁿ. In practice, measurement results are often given with a certain number of digits after the decimal point, which indicate the error bounds.
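As a rough check of the statement that a decimal with n digits after the separator is the fraction whose numerator is the digit string without the separator and whose denominator is 10ⁿ, Python's fractions module can be used; a minimal sketch, not part of the original text:

from fractions import Fraction

assert Fraction("14.89") == Fraction(1489, 100)
assert Fraction("0.00079") == Fraction(79, 100000)
assert Fraction("5.20") == Fraction("5.2")   # trailing zeros do not change the value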
For example, although 0.080 and 0.08 denote the same number, the decimal numeral 0.080 suggests a measurement with an error less than 0.001, while the numeral 0.08 indicates an absolute error bounded by 0.01. In both cases, the true value of the measured quantity could be, for example, 0.0803 or 0.0796 (see alsosignificant figures). For areal numberxand an integern≥ 0, let[x]ndenote the (finite) decimal expansion of the greatest number that is not greater thanxthat has exactlyndigits after the decimal mark. Letdidenote the last digit of[x]i. It is straightforward to see that[x]nmay be obtained by appendingdnto the right of[x]n−1. This way one has and the difference of[x]n−1and[x]namounts to which is either 0, ifdn= 0, or gets arbitrarily small asntends to infinity. According to the definition of alimit,xis the limit of[x]nwhenntends toinfinity. This is written asx=limn→∞[x]n{\textstyle \;x=\lim _{n\rightarrow \infty }[x]_{n}\;}or which is called aninfinite decimal expansionofx. Conversely, for any integer[x]0and any sequence of digits(dn)n=1∞{\textstyle \;(d_{n})_{n=1}^{\infty }}the (infinite) expression[x]0.d1d2...dn...is aninfinite decimal expansionof a real numberx. This expansion is unique if neither alldnare equal to 9 nor alldnare equal to 0 fornlarge enough (for allngreater than some natural numberN). If alldnforn>Nequal to 9 and[x]n= [x]0.d1d2...dn, the limit of the sequence([x]n)n=1∞{\textstyle \;([x]_{n})_{n=1}^{\infty }}is the decimal fraction obtained by replacing the last digit that is not a 9, i.e.:dN, bydN+ 1, and replacing all subsequent 9s by 0s (see0.999...). Any such decimal fraction, i.e.:dn= 0forn>N, may be converted to its equivalent infinite decimal expansion by replacingdNbydN− 1and replacing all subsequent 0s by 9s (see0.999...). In summary, every real number that is not a decimal fraction has a unique infinite decimal expansion. Each decimal fraction has exactly two infinite decimal expansions, one containing only 0s after some place, which is obtained by the above definition of[x]n, and the other containing only 9s after some place, which is obtained by defining[x]nas the greatest number that islessthanx, having exactlyndigits after the decimal mark. Long divisionallows computing the infinite decimal expansion of arational number. If the rational number is adecimal fraction, the division stops eventually, producing a decimal numeral, which may be prolongated into an infinite expansion by adding infinitely many zeros. If the rational number is not a decimal fraction, the division may continue indefinitely. However, as all successive remainders are less than the divisor, there are only a finite number of possible remainders, and after some place, the same sequence of digits must be repeated indefinitely in the quotient. That is, one has arepeating decimal. For example, The converse is also true: if, at some point in the decimal representation of a number, the same string of digits starts repeating indefinitely, the number is rational. or, dividing both numerator and denominator by 6,⁠692/1665⁠. Most moderncomputerhardware and software systems commonly use abinary representationinternally (although many early computers, such as theENIACor theIBM 650, used decimal representation internally).[9]For external use by computer specialists, this binary representation is sometimes presented in the relatedoctalorhexadecimalsystems. 
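The long-division argument above (only finitely many remainders are possible, so a remainder must eventually repeat) can be sketched in Python; decimal_expansion is an illustrative name, and the repeating block is written here in parentheses:

def decimal_expansion(numerator: int, denominator: int, max_digits: int = 30) -> str:
    """Long division of numerator by denominator, stopping when a remainder repeats."""
    integer_part, remainder = divmod(numerator, denominator)
    digits, seen = [], {}
    while remainder and remainder not in seen and len(digits) < max_digits:
        seen[remainder] = len(digits)          # remember where this remainder occurred
        remainder *= 10
        digit, remainder = divmod(remainder, denominator)
        digits.append(str(digit))
    if remainder in seen:                      # repeated remainder => repeating decimal
        start = seen[remainder]
        return f"{integer_part}." + "".join(digits[:start]) + "(" + "".join(digits[start:]) + ")"
    return f"{integer_part}." + "".join(digits)

print(decimal_expansion(1, 3))        # 0.(3)
print(decimal_expansion(1, 8))        # 0.125   -- terminates, since 8 = 2**3
print(decimal_expansion(692, 1665))   # 0.4(156), the fraction mentioned above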
For most purposes, however, binary values are converted to or from the equivalent decimal values for presentation to or input from humans; computer programs express literals in decimal by default. (123.1, for example, is written as such in a computer program, even though many computer languages are unable to encode that number precisely.) Both computer hardware and software also use internal representations which are effectively decimal for storing decimal values and doing arithmetic. Often this arithmetic is done on data which are encoded using some variant ofbinary-coded decimal,[10][11]especially in database implementations, but there are other decimal representations in use (includingdecimal floating pointsuch as in newer revisions of theIEEE 754 Standard for Floating-Point Arithmetic).[12] Decimal arithmetic is used in computers so that decimal fractional results of adding (or subtracting) values with a fixed length of their fractional part always are computed to this same length of precision. This is especially important for financial calculations, e.g., requiring in their results integer multiples of the smallest currency unit for book keeping purposes. This is not possible in binary, because the negative powers of10{\displaystyle 10}have no finite binary fractional representation; and is generally impossible for multiplication (or division).[13][14]SeeArbitrary-precision arithmeticfor exact calculations. Many ancient cultures calculated with numerals based on ten, perhaps because two human hands have ten fingers.[15]Standardized weights used in theIndus Valley Civilisation(c.3300–1300 BCE) were based on the ratios: 1/20, 1/10, 1/5, 1/2, 1, 2, 5, 10, 20, 50, 100, 200, and 500, while their standardized ruler – theMohenjo-daro ruler– was divided into ten equal parts.[16][17][18]Egyptian hieroglyphs, in evidence since around 3000 BCE, used a purely decimal system,[19]as did theLinear Ascript (c.1800–1450 BCE) of theMinoans[20][21]and theLinear Bscript (c. 1400–1200 BCE) of theMycenaeans. TheÚnětice culturein central Europe (2300-1600 BC) used standardised weights and a decimal system in trade.[22]The number system ofclassical Greecealso used powers of ten, including an intermediate base of 5, as didRoman numerals.[23]Notably, the polymathArchimedes(c. 287–212 BCE) invented a decimal positional system in hisSand Reckonerwhich was based on 108.[23][24]Hittitehieroglyphs (since 15th century BCE) were also strictly decimal.[25] The Egyptian hieratic numerals, the Greek alphabet numerals, the Hebrew alphabet numerals, the Roman numerals, the Chinese numerals and early Indian Brahmi numerals are all non-positional decimal systems, and required large numbers of symbols. For instance, Egyptian numerals used different symbols for 10, 20 to 90, 100, 200 to 900, 1,000, 2,000, 3,000, 4,000, to 10,000.[26]The world's earliest positional decimal system was the Chineserod calculus.[27] Starting from the 2nd century BCE, some Chinese units for length were based on divisions into ten; by the 3rd century CE these metrological units were used to express decimal fractions of lengths, non-positionally.[28]Calculations with decimal fractions of lengths wereperformed using positional counting rods, as described in the 3rd–5th century CESunzi Suanjing. 
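As a rough illustration of the point about decimal arithmetic and bookkeeping, here is a small sketch using Python's standard decimal module; the currency figures are made-up values used only for illustration:

from decimal import Decimal

# Binary floating point cannot represent 0.10 or 0.20 exactly:
print(0.10 + 0.20)                          # 0.30000000000000004
# Decimal arithmetic keeps a fixed number of decimal places exactly:
print(Decimal("0.10") + Decimal("0.20"))    # 0.30

# Integer multiples of the smallest currency unit are another exact option:
total_cents = 1999 + 501                    # two amounts stored in cents
print(f"{total_cents // 100}.{total_cents % 100:02d}")   # 25.00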
The 5th century CE mathematicianZu Chongzhicalculated a 7-digitapproximation ofπ.Qin Jiushao's bookMathematical Treatise in Nine Sections(1247) explicitly writes a decimal fraction representing a number rather than a measurement, using counting rods.[29]The number 0.96644 is denoted Historians of Chinese science have speculated that the idea of decimal fractions may have been transmitted from China to the Middle East.[27] Al-Khwarizmiintroduced fractions to Islamic countries in the early 9th century CE, written with a numerator above and denominator below, without a horizontal bar. This form of fraction remained in use for centuries.[27][30] Positional decimal fractions appear for the first time in a book by the Arab mathematicianAbu'l-Hasan al-Uqlidisiwritten in the 10th century.[31]The Jewish mathematicianImmanuel Bonfilsused decimal fractions around 1350 but did not develop any notation to represent them.[32]The Persian mathematicianJamshid al-Kashiused, and claimed to have discovered, decimal fractions in the 15th century.[31] A forerunner of modern European decimal notation was introduced bySimon Stevinin the 16th century. Stevin's influential bookletDe Thiende("the art of tenths") was first published in Dutch in 1585 and translated into French asLa Disme.[33] John Napierintroduced using the period (.) to separate the integer part of a decimal number from the fractional part in his book on constructing tables of logarithms, published posthumously in 1620.[34]: p. 8, archive p. 32) A method of expressing every possiblenatural numberusing a set of ten symbols emerged in India.[35]Several Indian languages show a straightforward decimal system.Dravidian languageshave numbers between 10 and 20 expressed in a regular pattern of addition to 10.[36] TheHungarian languagealso uses a straightforward decimal system. All numbers between 10 and 20 are formed regularly (e.g. 11 is expressed as "tizenegy" literally "one on ten"), as with those between 20 and 100 (23 as "huszonhárom" = "three on twenty"). A straightforward decimal rank system with a word for each order (10十, 100百, 1000千, 10,000万), and in which 11 is expressed asten-oneand 23 astwo-ten-three, and 89,345 is expressed as 8 (ten thousands)万9 (thousand)千3 (hundred)百4 (tens)十5 is found inChinese, and inVietnamesewith a few irregularities.Japanese,Korean, andThaihave imported the Chinese decimal system. Many other languages with a decimal system have special words for the numbers between 10 and 20, and decades. For example, in English 11 is "eleven" not "ten-one" or "one-teen". Incan languages such asQuechuaandAymarahave an almost straightforward decimal system, in which 11 is expressed asten with oneand 23 astwo-ten with three. Some psychologists suggest irregularities of the English names of numerals may hinder children's counting ability.[37] Some cultures do, or did, use other bases of numbers.
https://en.wikipedia.org/wiki/Decimal
Octal(base 8) is anumeral systemwitheightas thebase. In the decimal system, each place is apower of ten. For example: In the octal system, each place is a power of eight. For example: By performing the calculation above in the familiar decimal system, we see why 112 in octal is equal to64+8+2=74{\displaystyle 64+8+2=74}in decimal. Octal numerals can be easily converted frombinaryrepresentations (similar to aquaternary numeral system) by grouping consecutive binary digits into groups of three (starting from the right, for integers). For example, the binary representation for decimal 74 is 1001010. Two zeroes can be added at the left:(00)1 001 010, corresponding to the octal digits1 1 2, yielding the octal representation 112. The eightbaguaor trigrams of theI Chingcorrespond to octal digits: Gottfried Wilhelm Leibnizmade the connection between trigrams, hexagrams and binary numbers in 1703.[1] Octal became widely used in computing when systems such as theUNIVAC 1050,PDP-8,ICL 1900andIBM mainframesemployed6-bit,12-bit,24-bitor36-bitwords. Octal was an ideal abbreviation of binary for these machines because their word size is divisible by three (each octal digit represents three binary digits). So two, four, eight or twelve digits could concisely display an entiremachine word. It also cut costs by allowingNixie tubes,seven-segment displays, andcalculatorsto be used for the operator consoles, where binary displays were too complex to use, decimal displays needed complex hardware to convert radices, andhexadecimaldisplays needed to display more numerals. All modern computing platforms, however, use 16-, 32-, or 64-bit words, further divided intoeight-bit bytes. On such systems three octal digits per byte would be required, with the most significant octal digit representing two binary digits (plus one bit of the next significant byte, if any). Octal representation of a 16-bit word requires 6 digits, but the most significant octal digit represents (quite inelegantly) only one bit (0 or 1). This representation offers no way to easily read the most significant byte, because it's smeared over four octal digits. Therefore, hexadecimal is more commonly used in programming languages today, since two hexadecimal digits exactly specify one byte. Some platforms with a power-of-two word size still have instruction subwords that are more easily understood if displayed in octal; this includes thePDP-11andMotorola 68000 family. The modern-day ubiquitousx86 architecturebelongs to this category as well, but octal is rarely used on this platform, although certain properties of the binary encoding of opcodes become more readily apparent when displayed in octal, e.g. the ModRM byte, which is divided into fields of 2, 3, and 3 bits, so octal can be useful in describing these encodings. Before the availability ofassemblers, some programmers would handcode programs in octal; for instance, Dick Whipple and John Arnold wroteTiny BASIC Extendeddirectly in machine code, using octal.[11] Octal is sometimes used in computing instead of hexadecimal, perhaps most often in modern times in conjunction withfile permissionsunderUnixsystems (seechmod). It has the advantage of not requiring any extra symbols as digits (the hexadecimal system is base-16 and therefore needs six additional symbols beyond 0–9). In programming languages, octalliteralsare typically identified with a variety ofprefixes, including the digit0, the lettersoorq, the digit–letter combination0o, or the symbol&[12]or$. 
InMotorola convention, octal numbers are prefixed with@, whereas a small (or capital[13]) lettero[13]orq[13]is added as apostfixfollowing theIntel convention.[14][15]InConcurrent DOS,Multiuser DOSandREAL/32as well as inDOS PlusandDR-DOSvariousenvironment variableslike$CLS,$ON,$OFF,$HEADERor$FOOTERsupport an\nnnoctal number notation,[16][17][18]and DR-DOSDEBUGutilizes\to prefix octal numbers as well. For example, the literal 73 (base 8) might be represented as073,o73,q73,0o73,\73,@73,&73,$73or73oin various languages. Newer languages have been abandoning the prefix0, as decimal numbers are often represented with leading zeroes. The prefixqwas introduced to avoid the prefixobeing mistaken for a zero, while the prefix0owas introduced to avoid starting a numerical literal with an alphabetic character (likeoorq), since these might cause the literal to be confused with a variable name. The prefix0oalso follows the model set by the prefix0xused for hexadecimal literals in theC language; it is supported byHaskell,[19]OCaml,[20]Pythonas of version 3.0,[21]Raku,[22]Ruby,[23]Tclas of version 9,[24]PHPas of version 8.1,[25]Rust[26]andECMAScriptas of ECMAScript 6[27](the prefix0originally stood for base 8 inJavaScriptbut could cause confusion,[28]therefore it has been discouraged in ECMAScript 3 and dropped in ECMAScript 5[29]). Octal numbers that are used in some programming languages (C,Perl,PostScript...) for textual/graphical representations of byte strings when some byte values (unrepresented in a code page, non-graphical, having special meaning in current context or otherwise undesired) have to be toescapedas\nnn. Octal representation may be particularly handy with non-ASCII bytes ofUTF-8, which encodes groups of 6 bits, and where any start byte has octal value\3nnand any continuation byte has octal value\2nn. Octal was also used forfloating pointin theFerranti Atlas(1962),Burroughs B5500(1964),Burroughs B5700(1971),Burroughs B6700(1971) andBurroughs B7700(1972) computers. Transpondersin aircraft transmit a "squawk"code, expressed as a four-octal-digit number, when interrogated by ground radar. This code is used to distinguish different aircraft on the radar screen. To convert integer decimals to octal,dividethe original number by the largest possible power of 8 and divide the remainders by successively smaller powers of 8 until the power is 1. The octal representation is formed by the quotients, written in the order generated by the algorithm. For example, to convert 12510to octal: Therefore, 12510= 1758. Another example: Therefore, 90010= 16048. To convert a decimal fraction to octal, multiply by 8; the integer part of the result is the first digit of the octal fraction. Repeat the process with the fractional part of the result, until it is null or within acceptable error bounds. Example: Convert 0.1640625 to octal: Therefore, 0.164062510= 0.1248. These two methods can be combined to handle decimal numbers with both integer and fractional parts, using the first on the integer part and the second on the fractional part. To convert integer decimals to octal, prefix the number with "0.". Perform the following steps for as long as digits remain on the right side of the radix: Double the value to the left side of the radix, usingoctalrules, move the radix point one digit rightward, and then place the doubled value underneath the current value so that the radix points align. 
If the moved radix point crosses over a digit that is 8 or 9, convert it to 0 or 1 and add the carry to the next leftward digit of the current value.Addoctallythose digits to the left of the radix and simply drop down those digits to the right, without modification. Example: To convert a numberkto decimal, use the formula that defines its base-8 representation: In this formula,aiis an individual octal digit being converted, whereiis the position of the digit (counting from 0 for the right-most digit). Example: Convert 7648to decimal: For double-digit octal numbers this method amounts to multiplying the lead digit by 8 and adding the second digit to get the total. Example: 658= 6 × 8 + 5 = 5310 To convert octals to decimals, prefix the number with "0.". Perform the following steps for as long as digits remain on the right side of the radix: Double the value to the left side of the radix, usingdecimalrules, move the radix point one digit rightward, and then place the doubled value underneath the current value so that the radix points align.Subtractdecimallythose digits to the left of the radix and simply drop down those digits to the right, without modification. Example: To convert octal to binary, replace each octal digit by its binary representation. Example: Convert 518to binary: Therefore, 518= 101 0012. The process is the reverse of the previous algorithm. The binary digits are grouped by threes, starting from the least significant bit and proceeding to the left and to the right. Add leading zeroes (or trailing zeroes to the right of decimal point) to fill out the last group of three if necessary. Then replace each trio with the equivalent octal digit. For instance, convert binary 1010111100 to octal: Therefore, 10101111002= 12748. Convert binary 11100.01001 to octal: Therefore, 11100.010012= 34.228. The conversion is made in two steps using binary as an intermediate base. Octal is converted to binary and then binary to hexadecimal, grouping digits by fours, which correspond each to a hexadecimal digit. For instance, convert octal 1057 to hexadecimal: Therefore, 10578= 22F16. Hexadecimal to octal conversion proceeds by first converting the hexadecimal digits to 4-bit binary values, then regrouping the binary bits into 3-bit octal digits. For example, to convert 3FA516: Therefore, 3FA516= 376458. Due to having only factors of two, many octal fractions have repeating digits, although these tend to be fairly simple: The table below gives the expansions of some commonirrational numbersin decimal and octal.
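The conversion procedures described in this article (repeated division by 8 for the integer part, repeated multiplication by 8 for the fractional part, and the route to and from hexadecimal through binary) can be sketched as follows in Python; the function names are illustrative only.

def int_to_octal(n: int) -> str:
    """Integer part: repeatedly divide by 8 and collect the remainders."""
    if n == 0:
        return "0"
    digits = []
    while n:
        n, r = divmod(n, 8)
        digits.append(str(r))
    return "".join(reversed(digits))

def frac_to_octal(x: float, places: int = 6) -> str:
    """Fractional part: repeatedly multiply by 8; each integer part is the next digit."""
    digits = []
    for _ in range(places):
        x *= 8
        d = int(x)
        digits.append(str(d))
        x -= d
        if x == 0:
            break
    return "0." + "".join(digits)

def octal_to_hex(octal: str) -> str:
    """Octal -> binary (3 bits per digit) -> hexadecimal."""
    bits = "".join(format(int(d, 8), "03b") for d in octal)
    return format(int(bits, 2), "X")

def hex_to_octal(hexadecimal: str) -> str:
    """Hexadecimal -> binary (4 bits per digit) -> octal."""
    bits = "".join(format(int(d, 16), "04b") for d in hexadecimal)
    return format(int(bits, 2), "o")

assert int_to_octal(125) == "175" and int_to_octal(900) == "1604"
assert frac_to_octal(0.1640625) == "0.124"
assert octal_to_hex("1057") == "22F"
assert hex_to_octal("3FA5") == "37645"
assert oct(125) == "0o175"          # Python's built-in spelling of the same conversion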
https://en.wikipedia.org/wiki/Octal
Hexadecimal(also known asbase-16or simplyhex) is apositional numeral systemthat represents numbers using aradix(base) of sixteen. Unlike thedecimalsystem representing numbers using ten symbols, hexadecimal uses sixteen distinct symbols, most often the symbols "0"–"9" to represent values 0 to 9 and "A"–"F" to represent values from ten to fifteen. Software developers and system designers widely use hexadecimal numbers because they provide a convenient representation ofbinary-codedvalues. Each hexadecimal digit represents fourbits(binary digits), also known as anibble(or nybble).[1]For example, an 8-bitbyteis two hexadecimal digits and its value can be written as00toFFin hexadecimal. In mathematics, a subscript is typically used to specify the base. For example, the decimal value711would be expressed in hexadecimal as 2C716. In programming, several notations denote hexadecimal numbers, usually involving a prefix. The prefix0xis used inC, which would denote this value as0x2C7. Hexadecimal is used in the transfer encodingBase 16, in which each byte of theplain textis broken into two 4-bit values and represented by two hexadecimal digits. In most current use cases, the letters A–F or a–f represent the values 10–15, while thenumerals0–9 are used to represent their decimal values. There is no universal convention to use lowercase or uppercase, so each is prevalent or preferred in particular environments by community standards or convention; even mixed case is used. Someseven-segment displaysuse mixed-case 'A b C d E F' to distinguish the digits A–F from one another and from 0–9. There is some standardization of using spaces (rather than commas or another punctuation mark) to separate hex values in a long list. For instance, in the followinghex dump, each 8-bitbyteis a 2-digit hex number, with spaces between them, while the 32-bit offset at the start is an 8-digit hex number. In contexts where thebaseis not clear, hexadecimal numbers can be ambiguous and confused with numbers expressed in other bases. There are several conventions for expressing values unambiguously. A numerical subscript (itself written in decimal) can give the base explicitly: 15910is decimal 159; 15916is hexadecimal 159, which equals 34510. Some authors prefer a text subscript, such as 159decimaland 159hex, or 159dand 159h. Donald Knuthintroduced the use of a particular typeface to represent a particular radix in his bookThe TeXbook.[2]Hexadecimal representations are written there in atypewriter typeface:5A3,C1F27ED In linear text systems, such as those used in most computer programming environments, a variety of methods have arisen: Sometimes the numbers are known to be Hex. The use of the lettersAthroughFto represent the digits above 9 was not universal in the early history of computers. Since there were no traditional numerals to represent the quantities from ten to fifteen, alphabetic letters were re-employed as a substitute. Most European languages lack non-decimal-based words for some of the numerals eleven to fifteen. Some people read hexadecimal numbers digit by digit, like a phone number, or using theNATO phonetic alphabet, theJoint Army/Navy Phonetic Alphabet, or a similarad-hocsystem. 
In the wake of the adoption of hexadecimal amongIBM System/360programmers, Magnuson (1968)[23]suggested a pronunciation guide that gave short names to the letters of hexadecimal – for instance, "A" was pronounced "ann", B "bet", C "chris", etc.[23]Another naming-system was published online by Rogers (2007)[24]that tries to make the verbal representation distinguishable in any case, even when the actual number does not contain numbers A–F. Examples are listed in the tables below. Yet another naming system was elaborated by Babb (2015), based on a joke inSilicon Valley.[25]The system proposed by Babb was further improved by Atkins-Bittner in 2015-2016.[26] Others have proposed using the verbal Morse Code conventions to express four-bit hexadecimal digits, with "dit" and "dah" representing zero and one, respectively, so that "0000" is voiced as "dit-dit-dit-dit" (....), dah-dit-dit-dah (-..-) voices the digit with a value of nine, and "dah-dah-dah-dah" (----) voices the hexadecimal digit for decimal 15. Systems of counting ondigitshave been devised for both binary and hexadecimal.Arthur C. Clarkesuggested using each finger as an on/off bit, allowing finger counting from zero to 102310on ten fingers.[27]Another system for counting up to FF16(25510) is illustrated on the right. The hexadecimal system can express negative numbers the same way as in decimal: −2A to represent −4210, −B01D9 to represent −72136910and so on. Hexadecimal can also be used to express the exact bit patterns used in theprocessor, so a sequence of hexadecimal digits may represent asignedor even afloating-pointvalue. This way, the negative number −4210can be written as FFFF FFD6 in a 32-bitCPU register(intwo's complement), as C228 0000 in a 32-bitFPUregister or C045 0000 0000 0000 in a 64-bit FPU register (in theIEEE floating-point standard). Just as decimal numbers can be represented inexponential notation, so too can hexadecimal numbers.P notationuses the letterP(orp, for "power"), whereasE(ore) serves a similar purpose in decimalE notation. The number after thePisdecimaland represents thebinaryexponent. Increasing the exponent by 1 multiplies by 2, not 16:20p0 = 10p1 = 8p2 = 4p3 = 2p4 = 1p5. Usually, the number is normalized so that the hexadecimal digits start with1.(zero is usually0with noP). Example:1.3DEp42represents1.3DE16× 24210. P notation is required by theIEEE 754-2008binary floating-point standard and can be used for floating-point literals in theC99edition of theC programming language.[28]Using the%aor%Aconversion specifiers, this notation can be produced by implementations of theprintffamily of functions following the C99 specification[29]andSingle Unix Specification(IEEE Std 1003.1)POSIXstandard.[30] Most computers manipulate binary data, but it is difficult for humans to work with a large number of digits for even a relatively small binary number. Although most humans are familiar with the base 10 system, it is much easier to map binary to hexadecimal than to decimal because each hexadecimal digit maps to a whole number of bits (410). This example converts 11112to base ten. Since eachpositionin a binary numeral can contain either a 1 or a 0, its value may be easily determined by its position from the right: Therefore: With little practice, mapping 11112to F16in one step becomes easy (see table inwritten representation). The advantage of using hexadecimal rather than decimal increases rapidly with the size of the number. When the number becomes large, conversion to decimal is very tedious. 
However, when mapping to hexadecimal, it is trivial to regard the binary string as 4-digit groups and map each to a single hexadecimal digit.[31] This example shows the conversion of a binary number to decimal, mapping each digit to the decimal value, and adding the results. Compare this to the conversion to hexadecimal, where each group of four digits can be considered independently and converted directly: The conversion from hexadecimal to binary is equally direct.[31] Althoughquaternary(base 4) is little used, it can easily be converted to and from hexadecimal or binary. Each hexadecimal digit corresponds to a pair of quaternary digits, and each quaternary digit corresponds to a pair of binary digits. In the above example 2 5 C16= 02 11 304. Theoctal(base 8) system can also be converted with relative ease, although not quite as trivially as with bases 2 and 4. Each octal digit corresponds to three binary digits, rather than four. Therefore, we can convert between octal and hexadecimal via an intermediate conversion to binary followed by regrouping the binary digits in groups of either three or four. As with all bases there is a simplealgorithmfor converting a representation of a number to hexadecimal by doing integer division and remainder operations in the source base. In theory, this is possible from any base, but for most humans, only decimal and for most computers, only binary (which can be converted by far more efficient methods) can be easily handled with this method. Let d be the number to represent in hexadecimal, and the series hihi−1...h2h1be the hexadecimal digits representing the number. "16" may be replaced with any other base that may be desired. The following is aJavaScriptimplementation of the above algorithm for converting any number to a hexadecimal in String representation. Its purpose is to illustrate the above algorithm. To work with data seriously, however, it is much more advisable to work withbitwise operators. It is also possible to make the conversion by assigning each place in the source base the hexadecimal representation of its place value — before carrying out multiplication and addition to get the final representation. For example, to convert the number B3AD to decimal, one can split the hexadecimal number into its digits: B (1110), 3 (310), A (1010) and D (1310), and then get the final result by multiplying each decimal representation by 16p(pbeing the corresponding hex digit position, counting from right to left, beginning with 0). In this case, we have that: B3AD = (11 × 163) + (3 × 162) + (10 × 161) + (13 × 160) which is 45997 in base 10. Many computer systems provide a calculator utility capable of performing conversions between the various radices frequently including hexadecimal. InMicrosoft Windows, theCalculator, on its Programmer mode, allows conversions between hexadecimal and other common programming bases. Elementary operations such as division can be carried out indirectly through conversion to an alternatenumeral system, such as the commonly used decimal system or the binary system where each hex digit corresponds to four binary digits. Alternatively, one can also perform elementary operations directly within the hex system itself — by relying on its addition/multiplication tables and its corresponding standard algorithms such aslong divisionand the traditional subtraction algorithm. 
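The division-and-remainder algorithm just described (which the article illustrates with a JavaScript listing) can be sketched in Python as follows; to_hex is an illustrative name, not part of the original text.

HEX_DIGITS = "0123456789ABCDEF"

def to_hex(d: int) -> str:
    """Convert a non-negative integer to hexadecimal by taking remainders mod 16."""
    if d == 0:
        return "0"
    out = []
    while d > 0:
        d, r = divmod(d, 16)
        out.append(HEX_DIGITS[r])      # h1 is produced first (least significant)
    return "".join(reversed(out))      # hi ... h2 h1, most significant first

assert to_hex(45997) == "B3AD"         # the worked example in the text
assert int("B3AD", 16) == 45997        # (11 * 16**3) + (3 * 16**2) + (10 * 16) + 13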
As with other numeral systems, the hexadecimal system can be used to representrational numbers, althoughrepeating expansionsare common since sixteen (1016) has only a single prime factor: two. For any base, 0.1 (or "1/10") is always equivalent to one divided by the representation of that base value in its own number system. Thus, whether dividing one by two forbinaryor dividing one by sixteen for hexadecimal, both of these fractions are written as0.1. Because the radix 16 is aperfect square(42), fractions expressed in hexadecimal have an odd period much more often than decimal ones, and there are nocyclic numbers(other than trivial single digits). Recurring digits are exhibited when the denominator in lowest terms has aprime factornot found in the radix; thus, when using hexadecimal notation, all fractions with denominators that are not apower of tworesult in an infinite string of recurring digits (such as thirds and fifths). This makes hexadecimal (and binary) less convenient thandecimalfor representing rational numbers since a larger proportion lies outside its range of finite representation. All rational numbers finitely representable in hexadecimal are also finitely representable in decimal,duodecimalandsexagesimal: that is, any hexadecimal number with a finite number of digits also has a finite number of digits when expressed in those other bases. Conversely, only a fraction of those finitely representable in the latter bases are finitely representable in hexadecimal. For example, decimal 0.1 corresponds to the infinite recurring representation 0.19in hexadecimal. However, hexadecimal is more efficient than duodecimal and sexagesimal for representing fractions with powers of two in the denominator. For example, 0.062510(one-sixteenth) is equivalent to 0.116, 0.0912, and 0;3,4560. The table below gives the expansions of some commonirrational numbersin decimal and hexadecimal. Powers of two have very simple expansions in hexadecimal. The first sixteen powers of two are shown below. The traditionalChinese units of measurementwere base-16. For example, one jīn (斤) in the old system equals sixteentaels. Thesuanpan(Chineseabacus) can be used to perform hexadecimal calculations such as additions and subtractions.[32] As with theduodecimalsystem, there have been occasional attempts to promote hexadecimal as the preferred numeral system. These attempts often propose specific pronunciation and symbols for the individual numerals.[33]Some proposals unify standard measures so that they are multiples of 16.[34][35]An early such proposal was put forward byJohn W. NystrominProject of a New System of Arithmetic, Weight, Measure and Coins: Proposed to be called the Tonal System, with Sixteen to the Base, published in 1862.[36]Nystrom among other things suggestedhexadecimal time, which subdivides a day by 16, so that there are 16 "hours" (or "10tims", pronouncedtontim) in a day.[37] The wordhexadecimalis first recorded in 1952.[38]It ismacaronicin the sense that it combinesGreekἕξ (hex) "six" withLatinate-decimal. The all-Latin alternativesexadecimal(compare the wordsexagesimalfor base 60) is older, and sees at least occasional use from the late 19th century.[39]It is still in use in the 1950s inBendixdocumentation. Schwartzman (1994) argues that use ofsexadecimalmay have been avoided because of its suggestive abbreviation tosex.[40]Many western languages since the 1960s have adopted terms equivalent in formation tohexadecimal(e.g. 
Frenchhexadécimal, Italianesadecimale, Romanianhexazecimal, Serbianхексадецимални, etc.) but others have introduced terms which substitute native words for "sixteen" (e.g. Greek δεκαεξαδικός, Icelandicsextándakerfi, Russianшестнадцатеричнойetc.) Terminology and notation did not become settled until the end of the 1960s. In 1969,Donald Knuthargued that the etymologically correct term would besenidenary, or possiblysedenary, a Latinate term intended to convey "grouped by 16" modelled onbinary,ternary,quaternary, etc. According to Knuth's argument, the correct terms fordecimalandoctalarithmetic would bedenaryandoctonary, respectively.[41]Alfred B. Taylor usedsenidenaryin his mid-1800s work on alternative number bases, although he rejected base 16 because of its "incommodious number of digits".[42][43] The now-current notation using the letters A to F establishes itself as the de facto standard beginning in 1966, in the wake of the publication of theFortran IVmanual forIBM System/360, which (unlike earlier variants of Fortran) recognizes a standard for entering hexadecimal constants.[44]As noted above, alternative notations were used byNEC(1960) and The Pacific Data Systems 1020 (1964). The standard adopted by IBM seems to have become widely adopted by 1968, when Bruce Alan Martin in his letter to the editor of theCACMcomplains that With the ridiculous choice of letters A, B, C, D, E, F as hexadecimal number symbols adding to already troublesome problems of distinguishing octal (or hex) numbers from decimal numbers (or variable names), the time is overripe for reconsideration of our number symbols. This should have been done before poor choices gelled into a de facto standard! Martin's argument was that use of numerals 0 to 9 in nondecimal numbers "imply to us a base-ten place-value scheme": "Why not use entirely new symbols (and names) for the seven or fifteen nonzero digits needed in octal or hex. Even use of the letters A through P would be an improvement, but entirely new symbols could reflect the binary nature of the system".[19]He also argued that "re-using alphabetic letters for numerical digits represents a gigantic backward step from the invention of distinct, non-alphabetic glyphs for numerals sixteen centuries ago" (asBrahmi numerals, and later in aHindu–Arabic numeral system), and that the recentASCIIstandards (ASA X3.4-1963 and USAS X3.4-1968) "should have preserved six code table positions following the ten decimal digits -- rather than needlessly filling these with punctuation characters" (":;<=>?") that might have been placed elsewhere among the 128 available positions. Base16(as a proper name without a space) can also refer to abinary to text encodingbelonging to the same family asBase32,Base58, andBase64. In this case, data is broken into 4-bit sequences, and each value (between 0 and 15 inclusively) is encoded using one of 16 symbols from theASCIIcharacter set. Although any 16 symbols from the ASCII character set can be used, in practice, the ASCII digits "0"–"9" and the letters "A"–"F" (or the lowercase "a"–"f") are always chosen in order to align with standard written notation for hexadecimal numbers. There are several advantages of Base16 encoding: The main disadvantages of Base16 encoding are: Support for Base16 encoding is ubiquitous in modern computing. It is the basis for theW3Cstandard forURL percent encoding, where a character is replaced with a percent sign "%" and its Base16-encoded form. 
Most modern programming languages directly include support for formatting and parsing Base16-encoded numbers.
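A minimal illustration of Base16 encoding and of the percent-encoding use mentioned above, using only Python's standard library; the sample data is arbitrary:

data = b"Hi!"
encoded = data.hex().upper()          # Base16: two hex digits per byte
print(encoded)                        # 486921
print(bytes.fromhex(encoded))         # b'Hi!'

from urllib.parse import quote
print(quote("a b/c"))                 # a%20b/c  (the space becomes %, then its Base16 code)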
https://en.wikipedia.org/wiki/Hexadecimal
In a positional numeral system, the radix (pl.: radices) or base is the number of unique digits, including the digit zero, used to represent numbers. For example, for the decimal system (the most common system in use today) the radix is ten, because it uses the ten digits from 0 through 9. In any standard positional numeral system, a number is conventionally written as (x)y with x as the string of digits and y as its base. For base ten, the subscript is usually assumed and omitted (together with the enclosing parentheses), as it is the most common way to express value. For example, (100)₁₀ is equivalent to 100 (the decimal system is implied in the latter) and represents the number one hundred, while (100)₂ (in the binary system with base 2) represents the number four.[1]

Radix is a Latin word for "root". Root can be considered a synonym for base, in the arithmetical sense.

Generally, in a system with radix b (b > 1), a string of digits d₁ ... dₙ denotes the number d₁bⁿ⁻¹ + d₂bⁿ⁻² + ... + dₙb⁰, where 0 ≤ dᵢ < b.[1] In contrast to decimal, or radix 10, which has a ones' place, tens' place, hundreds' place, and so on, radix b would have a ones' place, then a b¹s' place, a b²s' place, etc.[2]

For example, if b = 12, a string of digits such as 59A (where the letter "A" represents the value of ten) would represent the value 5×12² + 9×12¹ + 10×12⁰ = 838 in base 10.

Commonly used numeral systems include: The octal and hexadecimal systems are often used in computing because of their ease as shorthand for binary. Every hexadecimal digit corresponds to a sequence of four binary digits, since sixteen is the fourth power of two; for example, hexadecimal 78₁₆ is binary 1111000₂. Similarly, every octal digit corresponds to a unique sequence of three binary digits, since eight is the cube of two. This representation is unique.

Let b be a positive integer greater than 1. Then every positive integer a can be expressed uniquely in the form a = rₘbᵐ + rₘ₋₁bᵐ⁻¹ + ... + r₁b + r₀, where m is a nonnegative integer and the r's are integers such that 0 < rₘ < b and 0 ≤ rᵢ < b for i = 0, 1, ..., m − 1.

Radices are usually natural numbers. However, other positional systems are possible, for example, golden ratio base (whose radix is a non-integer algebraic number),[5] and negative base (whose radix is negative).[6] A negative base allows the representation of negative numbers without the use of a minus sign. For example, let b = −10. Then a string of digits such as 19 denotes the (decimal) number 1 × (−10)¹ + 9 × (−10)⁰ = −1.

Different bases are especially used in connection with computers. The commonly used bases are 10 (decimal), 2 (binary), 8 (octal), and 16 (hexadecimal). A byte with 8 bits can represent values from 0 to 255, often expressed with leading zeros in base 2, 8 or 16 to give the same length.[7]

The first row in the tables is the base written in decimal.
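A minimal Python sketch of the defining positional formula, applied to the base-12 and negative-base examples above; parse_in_radix and the digit alphabet are illustrative assumptions, not part of the article.

DIGITS = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def parse_in_radix(digits: str, radix: int) -> int:
    """Evaluate a digit string in the given radix (which may be negative)
    using the positional formula d1*b**(n-1) + ... + dn*b**0, via Horner's rule."""
    value = 0
    for d in digits:
        value = value * radix + DIGITS.index(d)
    return value

assert parse_in_radix("59A", 12) == 838     # 5*144 + 9*12 + 10
assert parse_in_radix("19", -10) == -1      # 1*(-10) + 9
assert parse_in_radix("100", 2) == 4        # (100) in base 2 represents four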
https://en.wikipedia.org/wiki/Number_base
Inmathematics, thenatural numbersare the numbers0,1,2,3, and so on, possibly excluding 0.[1]Some start counting with 0, defining the natural numbers as thenon-negative integers0, 1, 2, 3, ..., while others start with 1, defining them as thepositive integers1, 2, 3, ....[a]Some authors acknowledge both definitions whenever convenient.[2]Sometimes, thewhole numbersare the natural numbers as well as zero. In other cases, thewhole numbersrefer to all of theintegers, including negative integers.[3]Thecounting numbersare another term for the natural numbers, particularly in primary education, and are ambiguous as well although typically start at 1.[4] The natural numbers are used for counting things, like "there aresixcoins on the table", in which case they are calledcardinal numbers. They are also used to put things in order, like "this is thethirdlargest city in the country", which are calledordinal numbers. Natural numbers are also used as labels, likejersey numberson a sports team, where they serve asnominal numbersand do not have mathematical properties.[5] The natural numbers form aset, commonly symbolized as a boldNorblackboard bold⁠N{\displaystyle \mathbb {N} }⁠. Many othernumber setsare built from the natural numbers. For example, theintegersare made by adding 0 and negative numbers. Therational numbersadd fractions, and thereal numbersadd all infinite decimals.Complex numbersadd thesquare root of−1. This chain of extensions canonicallyembedsthe natural numbers in the other number systems.[6][7] Natural numbers are studied in different areas of math.Number theorylooks at things like how numbers divide evenly (divisibility), or howprime numbersare spread out.Combinatoricsstudies counting and arranging numbered objects, such aspartitionsandenumerations. The most primitive method of representing a natural number is to use one's fingers, as infinger counting. Putting down atally markfor each object is another primitive method. Later, a set of objects could be tested for equality, excess or shortage—by striking out a mark and removing an object from the set. The first major advance in abstraction was the use ofnumeralsto represent numbers. This allowed systems to be developed for recording large numbers. The ancientEgyptiansdeveloped a powerful system of numerals with distincthieroglyphsfor 1, 10, and all powers of 10 up to over 1 million. A stone carving fromKarnak, dating back from around 1500 BCE and now at theLouvrein Paris, depicts 276 as 2 hundreds, 7 tens, and 6 ones; and similarly for the number 4,622. TheBabylonianshad aplace-valuesystem based essentially on the numerals for 1 and 10, using base sixty, so that the symbol for sixty was the same as the symbol for one—its value being determined from context.[11] A much later advance was the development of the idea that0can be considered as a number, with its own numeral. The use of a 0digitin place-value notation (within other numbers) dates back as early as 700 BCE by the Babylonians, who omitted such a digit when it would have been the last symbol in the number.[b]TheOlmecandMaya civilizationsused 0 as a separate number as early as the1st century BCE, but this usage did not spread beyondMesoamerica.[13][14]The use of a numeral 0 in modern times originated with the Indian mathematicianBrahmaguptain 628 CE. However, 0 had been used as a number in the medievalcomputus(the calculation of the date of Easter), beginning withDionysius Exiguusin 525 CE, without being denoted by a numeral. 
StandardRoman numeralsdo not have a symbol for 0; instead,nulla(or the genitive formnullae) fromnullus, the Latin word for "none", was employed to denote a 0 value.[15] The first systematic study of numbers asabstractionsis usually credited to theGreekphilosophersPythagorasandArchimedes. Some Greek mathematicians treated the number 1 differently than larger numbers, sometimes even not as a number at all.[c]Euclid, for example, defined a unit first and then a number as a multitude of units, thus by his definition, a unit is not a number and there are no unique numbers (e.g., any two units from indefinitely many units is a 2).[17]However, in the definition ofperfect numberwhich comes shortly afterward, Euclid treats 1 as a number like any other.[18] Independent studies on numbers also occurred at around the same time inIndia, China, andMesoamerica.[19] Nicolas Chuquetused the termprogression naturelle(natural progression) in 1484.[20]The earliest known use of "natural number" as a complete English phrase is in 1763.[21][22]The 1771 Encyclopaedia Britannica defines natural numbers in the logarithm article.[22] Starting at 0 or 1 has long been a matter of definition. In 1727,Bernard Le Bovier de Fontenellewrote that his notions of distance and element led to defining the natural numbers as including or excluding 0.[23]In 1889,Giuseppe Peanoused N for the positive integers and started at 1,[24]but he later changed to using N0and N1.[25]Historically, most definitions have excluded 0,[22][26][27]but many mathematicians such asGeorge A. Wentworth,Bertrand Russell,Nicolas Bourbaki,Paul Halmos,Stephen Cole Kleene, andJohn Horton Conwayhave preferred to include 0.[28][22] Mathematicians have noted tendencies in which definition is used, such as algebra texts including 0,[22][d]number theory and analysis texts excluding 0,[22][29][30]logic and set theory texts including 0,[31][32][33]dictionaries excluding 0,[22][34]school books (through high-school level) excluding 0, and upper-division college-level books including 0.[1]There are exceptions to each of these tendencies and as of 2023 no formal survey has been conducted. Arguments raised includedivision by zero[29]and the size of theempty set.Computer languagesoftenstart from zerowhen enumerating items likeloop countersandstring-orarray-elements.[35][36]Including 0 began to rise in popularity in the 1960s.[22]TheISO 31-11standard included 0 in the natural numbers in its first edition in 1978 and this has continued through its present edition asISO 80000-2.[37] In 19th century Europe, there was mathematical and philosophical discussion about the exact nature of the natural numbers.Henri Poincaréstated that axioms can only be demonstrated in their finite application, and concluded that it is "the power of the mind" which allows conceiving of the indefinite repetition of the same act.[38]Leopold Kroneckersummarized his belief as "God made the integers, all else is the work of man".[e] Theconstructivistssaw a need to improve upon the logical rigor in thefoundations of mathematics.[f]In the 1860s,Hermann Grassmannsuggested arecursive definitionfor natural numbers, thus stating they were not really natural—but a consequence of definitions. Later, two classes of such formal definitions emerged, using set theory and Peano's axioms respectively. Later still, they were shown to be equivalent in most practical applications. Set-theoretical definitions of natural numberswere initiated byFrege. 
He initially defined a natural number as the class of all sets that are in one-to-one correspondence with a particular set. However, this definition turned out to lead to paradoxes, includingRussell's paradox. To avoid such paradoxes, the formalism was modified so that a natural number is defined as a particular set, and any set that can be put into one-to-one correspondence with that set is said to have that number of elements.[41] In 1881,Charles Sanders Peirceprovided the firstaxiomatizationof natural-number arithmetic.[42][43]In 1888,Richard Dedekindproposed another axiomatization of natural-number arithmetic,[44]and in 1889, Peano published a simplified version of Dedekind's axioms in his bookThe principles of arithmetic presented by a new method(Latin:Arithmetices principia, nova methodo exposita). This approach is now calledPeano arithmetic. It is based on anaxiomatizationof the properties ofordinal numbers: each natural number has a successor and every non-zero natural number has a unique predecessor. Peano arithmetic isequiconsistentwith several weak systems ofset theory. One such system isZFCwith theaxiom of infinityreplaced by its negation.[45]Theorems that can be proved in ZFC but cannot be proved using the Peano Axioms includeGoodstein's theorem.[46] Thesetof all natural numbers is standardly denotedNorN.{\displaystyle \mathbb {N} .}[2][47]Older texts have occasionally employedJas the symbol for this set.[48] Since natural numbers may contain0or not, it may be important to know which version is referred to. This is often specified by the context, but may also be done by using a subscript or a superscript in the notation, such as:[37][49] Alternatively, since the natural numbers naturally form asubsetof theintegers(oftendenotedZ{\displaystyle \mathbb {Z} }),they may be referred to as the positive, or the non-negative integers, respectively.[50]To be unambiguous about whether 0 is included or not, sometimes a superscript "∗{\displaystyle *}" or "+" is added in the former case, and a subscript (or superscript) "0" is added in the latter case:[37] This section uses the conventionN=N0=N∗∪{0}{\displaystyle \mathbb {N} =\mathbb {N} _{0}=\mathbb {N} ^{*}\cup \{0\}}. Given the setN{\displaystyle \mathbb {N} }of natural numbers and thesuccessor functionS:N→N{\displaystyle S\colon \mathbb {N} \to \mathbb {N} }sending each natural number to the next one, one can defineadditionof natural numbers recursively by settinga+ 0 =aanda+S(b) =S(a+b)for alla,b. Thus,a+ 1 =a+ S(0) = S(a+0) = S(a),a+ 2 =a+ S(1) = S(a+1) = S(S(a)), and so on. Thealgebraic structure(N,+){\displaystyle (\mathbb {N} ,+)}is acommutativemonoidwithidentity element0. It is afree monoidon one generator. This commutative monoid satisfies thecancellation property, so it can be embedded in agroup. The smallest group containing the natural numbers is theintegers. If 1 is defined asS(0), thenb+ 1 =b+S(0) =S(b+ 0) =S(b). That is,b+ 1is simply the successor ofb. Analogously, given that addition has been defined, amultiplicationoperator×{\displaystyle \times }can be defined viaa× 0 = 0anda× S(b) = (a×b) +a. This turns(N∗,×){\displaystyle (\mathbb {N} ^{*},\times )}into afree commutative monoidwith identity element 1; a generator set for this monoid is the set ofprime numbers. Addition and multiplication are compatible, which is expressed in thedistribution law:a× (b+c) = (a×b) + (a×c). These properties of addition and multiplication make the natural numbers an instance of acommutativesemiring. 
Semirings are an algebraic generalization of the natural numbers where multiplication is not necessarily commutative. The lack of additive inverses, which is equivalent to the fact thatN{\displaystyle \mathbb {N} }is notclosedunder subtraction (that is, subtracting one natural from another does not always result in another natural), means thatN{\displaystyle \mathbb {N} }isnotaring; instead it is asemiring(also known as arig). If the natural numbers are taken as "excluding 0", and "starting at 1", the definitions of + and × are as above, except that they begin witha+ 1 =S(a)anda× 1 =a. Furthermore,(N∗,+){\displaystyle (\mathbb {N^{*}} ,+)}has no identity element. In this section, juxtaposed variables such asabindicate the producta×b,[51]and the standardorder of operationsis assumed. Atotal orderon the natural numbers is defined by lettinga≤bif and only if there exists another natural numbercwherea+c=b. This order is compatible with thearithmetical operationsin the following sense: ifa,bandcare natural numbers anda≤b, thena+c≤b+candac≤bc. An important property of the natural numbers is that they arewell-ordered: every non-empty set of natural numbers has a least element. The rank among well-ordered sets is expressed by anordinal number; for the natural numbers, this is denoted asω(omega). In this section, juxtaposed variables such asabindicate the producta×b, and the standardorder of operationsis assumed. While it is in general not possible to divide one natural number by another and get a natural number as result, the procedure ofdivision with remainderorEuclidean divisionis available as a substitute: for any two natural numbersaandbwithb≠ 0there are natural numbersqandrsuch that The numberqis called thequotientandris called theremainderof the division ofabyb. The numbersqandrare uniquely determined byaandb. This Euclidean division is key to the several other properties (divisibility), algorithms (such as theEuclidean algorithm), and ideas in number theory. The addition (+) and multiplication (×) operations on natural numbers as defined above have several algebraic properties: Two important generalizations of natural numbers arise from the two uses of counting and ordering:cardinal numbersandordinal numbers. The least ordinal of cardinalityℵ0(that is, theinitial ordinalofℵ0) isωbut many well-ordered sets with cardinal numberℵ0have an ordinal number greater thanω. Forfinitewell-ordered sets, there is a one-to-one correspondence between ordinal and cardinal numbers; therefore they can both be expressed by the same natural number, the number of elements of the set. This number can also be used to describe the position of an element in a larger finite, or an infinite,sequence. A countablenon-standard model of arithmeticsatisfying the Peano Arithmetic (that is, the first-order Peano axioms) was developed bySkolemin 1933. Thehypernaturalnumbers are an uncountable model that can be constructed from the ordinary natural numbers via theultrapower construction. Other generalizations are discussed inNumber § Extensions of the concept. Georges Reebused to claim provocatively that "The naïve integers don't fill upN{\displaystyle \mathbb {N} }".[55] There are two standard methods for formally defining natural numbers. The first one, named forGiuseppe Peano, consists of an autonomousaxiomatic theorycalledPeano arithmetic, based on few axioms calledPeano axioms. The second definition is based onset theory. It defines the natural numbers as specificsets. 
More precisely, each natural numbernis defined as an explicitly defined set, whose elements allow counting the elements of other sets, in the sense that the sentence "a setShasnelements" means that there exists aone to one correspondencebetween the two setsnandS. The sets used to define natural numbers satisfy Peano axioms. It follows that everytheoremthat can be stated and proved in Peano arithmetic can also be proved in set theory. However, the two definitions are not equivalent, as there are theorems that can be stated in terms of Peano arithmetic and proved in set theory, which are notprovableinside Peano arithmetic. A probable example isFermat's Last Theorem. The definition of the integers as sets satisfying Peano axioms provide amodelof Peano arithmetic inside set theory. An important consequence is that, if set theory isconsistent(as it is usually guessed), then Peano arithmetic is consistent. In other words, if a contradiction could be proved in Peano arithmetic, then set theory would be contradictory, and every theorem of set theory would be both true and wrong. The five Peano axioms are the following:[56][g] These are not the original axioms published by Peano, but are named in his honor. Some forms of the Peano axioms have 1 in place of 0. In ordinary arithmetic, the successor ofx{\displaystyle x}isx+1{\displaystyle x+1}. Intuitively, the natural numbernis the common property of allsetsthat havenelements. So, it seems natural to definenas anequivalence classunder the relation "can be made inone to one correspondence". This does not work in allset theories, as such an equivalence class would not be a set[h](because ofRussell's paradox). The standard solution is to define a particular set withnelements that will be called the natural numbern. The following definition was first published byJohn von Neumann,[57]although Levy attributes the idea to unpublished work of Zermelo in 1916.[58]As this definition extends toinfinite setas a definition ofordinal number, the sets considered below are sometimes calledvon Neumann ordinals. The definition proceeds as follows: It follows that the natural numbers are defined iteratively as follows: It can be checked that the natural numbers satisfy thePeano axioms. With this definition, given a natural numbern, the sentence "a setShasnelements" can be formally defined as "there exists abijectionfromntoS." This formalizes the operation ofcountingthe elements ofS. Also,n≤mif and only ifnis asubsetofm. In other words, theset inclusiondefines the usualtotal orderon the natural numbers. This order is awell-order. It follows from the definition that each natural number is equal to the set of all natural numbers less than it. This definition, can be extended to thevon Neumann definition of ordinalsfor defining allordinal numbers, including the infinite ones: "each ordinal is the well-ordered set of all smaller ordinals." If onedoes not accept the axiom of infinity, the natural numbers may not form a set. Nevertheless, the natural numbers can still be individually defined as above, and they still satisfy the Peano axioms. There are other set theoretical constructions. In particular,Ernst Zermeloprovided a construction that is nowadays only of historical interest, and is sometimes referred to asZermelo ordinals.[58]It consists in defining0as the empty set, andS(a) = {a}. With this definition each nonzero natural number is asingleton set. 
So, the property of the natural numbers to represent cardinalities is not directly accessible; only the ordinal property (being the nth element of a sequence) is immediate. Unlike von Neumann's construction, the Zermelo ordinals do not extend to infinite ordinals.
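The contrast between the two constructions can be made concrete with a short sketch, here using Python frozensets as stand-ins for sets; the function names are illustrative only.

```python
# Sketch of the two set-theoretic encodings of the natural numbers described above.

def von_neumann(n):
    """n-th von Neumann natural: 0 = {} and S(a) = a ∪ {a}, so n has exactly n elements."""
    a = frozenset()
    for _ in range(n):
        a = a | frozenset({a})
    return a

def zermelo(n):
    """n-th Zermelo natural: 0 = {} and S(a) = {a}, so every nonzero natural is a singleton."""
    a = frozenset()
    for _ in range(n):
        a = frozenset({a})
    return a

assert len(von_neumann(3)) == 3         # the cardinality n is directly visible
assert len(zermelo(3)) == 1             # only the position in the sequence is visible
assert von_neumann(2) < von_neumann(3)  # set inclusion gives the usual order n <= m
```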
https://en.wikipedia.org/wiki/Natural_number
Inmathematics, asetiscountableif either it isfiniteor it can be made inone to one correspondencewith the set ofnatural numbers.[a]Equivalently, a set iscountableif there exists aninjective functionfrom it into the natural numbers; this means that each element in the set may be associated to a unique natural number, or that the elements of the set can be counted one at a time, although the counting may never finish due to an infinite number of elements. In more technical terms, assuming theaxiom of countable choice, a set iscountableif itscardinality(the number of elements of the set) is not greater than that of the natural numbers. A countable set that is not finite is said to becountably infinite. The concept is attributed toGeorg Cantor, who proved the existence ofuncountable sets, that is, sets that are not countable; for example the set of thereal numbers. Although the terms "countable" and "countably infinite" as defined here are quite common, the terminology is not universal.[1]An alternative style usescountableto mean what is here called countably infinite, andat most countableto mean what is here called countable.[2][3] The termsenumerable[4]anddenumerable[5][6]may also be used, e.g. referring to countable and countably infinite respectively,[7]definitions vary and care is needed respecting the difference withrecursively enumerable.[8] A setS{\displaystyle S}iscountableif: All of these definitions are equivalent. A setS{\displaystyle S}iscountablyinfiniteif: A set isuncountableif it is not countable, i.e. its cardinality is greater thanℵ0{\displaystyle \aleph _{0}}.[9] In 1874, inhis first set theory article, Cantor proved that the set ofreal numbersis uncountable, thus showing that not all infinite sets are countable.[16]In 1878, he used one-to-one correspondences to define and compare cardinalities.[17]In 1883, he extended the natural numbers with his infiniteordinals, and used sets of ordinals to produce an infinity of sets having different infinite cardinalities.[18] Asetis a collection ofelements, and may be described in many ways. One way is simply to list all of its elements; for example, the set consisting of the integers 3, 4, and 5 may be denoted{3,4,5}{\displaystyle \{3,4,5\}}, called roster form.[19]This is only effective for small sets, however; for larger sets, this would be time-consuming and error-prone. Instead of listing every single element, sometimes an ellipsis ("...") is used to represent many elements between the starting element and the end element in a set, if the writer believes that the reader can easily guess what ... represents; for example,{1,2,3,…,100}{\displaystyle \{1,2,3,\dots ,100\}}presumably denotes the set ofintegersfrom 1 to 100. Even in this case, however, it is stillpossibleto list all the elements, because the number of elements in the set is finite. If we number the elements of the set 1, 2, and so on, up ton{\displaystyle n}, this gives us the usual definition of "sets of sizen{\displaystyle n}". Some sets areinfinite; these sets have more thann{\displaystyle n}elements wheren{\displaystyle n}is any integer that can be specified. (No matter how large the specified integern{\displaystyle n}is, such asn=101000{\displaystyle n=10^{1000}}, infinite sets have more thann{\displaystyle n}elements.) For example, the set of natural numbers, denotable by{0,1,2,3,4,5,…}{\displaystyle \{0,1,2,3,4,5,\dots \}},[a]has infinitely many elements, and we cannot use any natural number to give its size. 
It might seem natural to divide the sets into different classes: put all the sets containing one element together; all the sets containing two elements together; ...; finally, put together all infinite sets and consider them as having the same size. This view works well for countably infinite sets and was the prevailing assumption before Georg Cantor's work. For example, there are infinitely many odd integers, infinitely many even integers, and also infinitely many integers overall. We can consider all these sets to have the same "size" because we can arrange things such that, for every integer, there is a distinct even integer:…−2→−4,−1→−2,0→0,1→2,2→4⋯{\displaystyle \ldots \,-\!2\!\rightarrow \!-\!4,\,-\!1\!\rightarrow \!-\!2,\,0\!\rightarrow \!0,\,1\!\rightarrow \!2,\,2\!\rightarrow \!4\,\cdots }or, more generally,n→2n{\displaystyle n\rightarrow 2n}(see picture). What we have done here is arrange the integers and the even integers into aone-to-one correspondence(orbijection), which is afunctionthat maps between two sets such that each element of each set corresponds to a single element in the other set. This mathematical notion of "size", cardinality, is that two sets are of the same size if and only if there is a bijection between them. We call all sets that are in one-to-one correspondence with the integerscountably infiniteand say they have cardinalityℵ0{\displaystyle \aleph _{0}}. Georg Cantorshowed that not all infinite sets are countably infinite. For example, the real numbers cannot be put into one-to-one correspondence with the natural numbers (non-negative integers). The set of real numbers has a greater cardinality than the set of natural numbers and is said to be uncountable. By definition, a setS{\displaystyle S}iscountableif there exists abijectionbetweenS{\displaystyle S}and a subset of thenatural numbersN={0,1,2,…}{\displaystyle \mathbb {N} =\{0,1,2,\dots \}}. For example, define the correspondencea↔1,b↔2,c↔3{\displaystyle a\leftrightarrow 1,\ b\leftrightarrow 2,\ c\leftrightarrow 3}Since every element ofS={a,b,c}{\displaystyle S=\{a,b,c\}}is paired withprecisely oneelement of{1,2,3}{\displaystyle \{1,2,3\}},andvice versa, this defines a bijection, and shows thatS{\displaystyle S}is countable. Similarly we can show all finite sets are countable. As for the case of infinite sets, a setS{\displaystyle S}is countably infinite if there is abijectionbetweenS{\displaystyle S}and all ofN{\displaystyle \mathbb {N} }. As examples, consider the setsA={1,2,3,…}{\displaystyle A=\{1,2,3,\dots \}}, the set of positiveintegers, andB={0,2,4,6,…}{\displaystyle B=\{0,2,4,6,\dots \}}, the set of even integers. We can show these sets are countably infinite by exhibiting a bijection to the natural numbers. This can be achieved using the assignmentsn↔n+1{\displaystyle n\leftrightarrow n+1}andn↔2n{\displaystyle n\leftrightarrow 2n}, so that0↔1,1↔2,2↔3,3↔4,4↔5,…0↔0,1↔2,2↔4,3↔6,4↔8,…{\displaystyle {\begin{matrix}0\leftrightarrow 1,&1\leftrightarrow 2,&2\leftrightarrow 3,&3\leftrightarrow 4,&4\leftrightarrow 5,&\ldots \\[6pt]0\leftrightarrow 0,&1\leftrightarrow 2,&2\leftrightarrow 4,&3\leftrightarrow 6,&4\leftrightarrow 8,&\ldots \end{matrix}}}Every countably infinite set is countable, and every infinite countable set is countably infinite. 
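The two assignments n ↔ n + 1 and n ↔ 2n used above are easy to check mechanically; the following sketch is illustrative only and just verifies the first few pairings.

```python
# The bijections witnessing that the positive integers and the even natural
# numbers are countably infinite: n <-> n + 1 and n <-> 2n respectively.

def to_positive(n):
    return n + 1

def to_even(n):
    return 2 * n

assert [to_positive(n) for n in range(5)] == [1, 2, 3, 4, 5]   # 0<->1, 1<->2, 2<->3, ...
assert [to_even(n) for n in range(5)] == [0, 2, 4, 6, 8]       # 0<->0, 1<->2, 2<->4, ...
# each map is invertible (m -> m - 1 and m -> m // 2 respectively), so each is a bijection
```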
Furthermore, any subset of the natural numbers is countable, and more generally: Theorem—A subset of a countable set is countable.[20] The set of allordered pairsof natural numbers (theCartesian productof two sets of natural numbers,N×N{\displaystyle \mathbb {N} \times \mathbb {N} }is countably infinite, as can be seen by following a path like the one in the picture: The resultingmappingproceeds as follows: 0↔(0,0),1↔(1,0),2↔(0,1),3↔(2,0),4↔(1,1),5↔(0,2),6↔(3,0),…{\displaystyle 0\leftrightarrow (0,0),1\leftrightarrow (1,0),2\leftrightarrow (0,1),3\leftrightarrow (2,0),4\leftrightarrow (1,1),5\leftrightarrow (0,2),6\leftrightarrow (3,0),\ldots }This mapping covers all such ordered pairs. This form of triangular mappingrecursivelygeneralizes ton{\displaystyle n}-tuplesof natural numbers, i.e.,(a1,a2,a3,…,an){\displaystyle (a_{1},a_{2},a_{3},\dots ,a_{n})}whereai{\displaystyle a_{i}}andn{\displaystyle n}are natural numbers, by repeatedly mapping the first two elements of ann{\displaystyle n}-tuple to a natural number. For example,(0,2,3){\displaystyle (0,2,3)}can be written as((0,2),3){\displaystyle ((0,2),3)}. Then(0,2){\displaystyle (0,2)}maps to 5 so((0,2),3){\displaystyle ((0,2),3)}maps to(5,3){\displaystyle (5,3)}, then(5,3){\displaystyle (5,3)}maps to 39. Since a different 2-tuple, that is a pair such as(a,b){\displaystyle (a,b)}, maps to a different natural number, a difference between two n-tuples by a single element is enough to ensure the n-tuples being mapped to different natural numbers. So, an injection from the set ofn{\displaystyle n}-tuples to the set of natural numbersN{\displaystyle \mathbb {N} }is proved. For the set ofn{\displaystyle n}-tuples made by the Cartesian product of finitely many different sets, each element in each tuple has the correspondence to a natural number, so every tuple can be written in natural numbers then the same logic is applied to prove the theorem. Theorem—TheCartesian productof finitely many countable sets is countable.[21][b] The set of allintegersZ{\displaystyle \mathbb {Z} }and the set of allrational numbersQ{\displaystyle \mathbb {Q} }may intuitively seem much bigger thanN{\displaystyle \mathbb {N} }. But looks can be deceiving. If a pair is treated as thenumeratoranddenominatorof avulgar fraction(a fraction in the form ofa/b{\displaystyle a/b}wherea{\displaystyle a}andb≠0{\displaystyle b\neq 0}are integers), then for every positive fraction, we can come up with a distinct natural number corresponding to it. This representation also includes the natural numbers, since every natural numbern{\displaystyle n}is also a fractionn/1{\displaystyle n/1}. So we can conclude that there are exactly as many positive rational numbers as there are positive integers. This is also true for all rational numbers, as can be seen below. Theorem—Z{\displaystyle \mathbb {Z} }(the set of all integers) andQ{\displaystyle \mathbb {Q} }(the set of all rational numbers) are countable.[c] In a similar manner, the set ofalgebraic numbersis countable.[23][d] Sometimes more than one mapping is useful: a setA{\displaystyle A}to be shown as countable is one-to-one mapped (injection) to another setB{\displaystyle B}, thenA{\displaystyle A}is proved as countable ifB{\displaystyle B}is one-to-one mapped to the set of natural numbers. For example, the set of positiverational numberscan easily be one-to-one mapped to the set of natural number pairs (2-tuples) becausep/q{\displaystyle p/q}maps to(p,q){\displaystyle (p,q)}. 
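The diagonal (triangular) enumeration of pairs described above admits a simple closed form consistent with the listing shown: the pair (a, b) on diagonal d = a + b receives index d(d+1)/2 + b. The sketch below is illustrative; folding an n-tuple pair by pair reproduces the example (0, 2, 3) → ((0, 2), 3) → (5, 3) → 39.

```python
def pair(a, b):
    # position of (a, b) in the listing 0 <-> (0,0), 1 <-> (1,0), 2 <-> (0,1), 3 <-> (2,0), ...
    d = a + b
    return d * (d + 1) // 2 + b

def pack(*t):
    # map an n-tuple to a natural number by repeatedly pairing the first two entries
    n = t[0]
    for x in t[1:]:
        n = pair(n, x)
    return n

assert pair(0, 2) == 5
assert pair(5, 3) == 39
assert pack(0, 2, 3) == 39
```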
Since the set of natural number pairs is one-to-one mapped (actually one-to-one correspondence or bijection) to the set of natural numbers as shown above, the positive rational number set is proved as countable. Theorem—Any finiteunionof countable sets is countable.[24][25][e] With the foresight of knowing that there are uncountable sets, we can wonder whether or not this last result can be pushed any further. The answer is "yes" and "no", we can extend it, but we need to assume a new axiom to do so. Theorem—(Assuming theaxiom of countable choice) The union of countably many countable sets is countable.[f] For example, given countable setsa,b,c,…{\displaystyle {\textbf {a}},{\textbf {b}},{\textbf {c}},\dots }, we first assign each element of each set a tuple, then we assign each tuple an index using a variant of the triangular enumeration we saw above:IndexTupleElement0(0,0)a01(0,1)a12(1,0)b03(0,2)a24(1,1)b15(2,0)c06(0,3)a37(1,2)b28(2,1)c19(3,0)d010(0,4)a4⋮{\displaystyle {\begin{array}{c|c|c }{\text{Index}}&{\text{Tuple}}&{\text{Element}}\\\hline 0&(0,0)&{\textbf {a}}_{0}\\1&(0,1)&{\textbf {a}}_{1}\\2&(1,0)&{\textbf {b}}_{0}\\3&(0,2)&{\textbf {a}}_{2}\\4&(1,1)&{\textbf {b}}_{1}\\5&(2,0)&{\textbf {c}}_{0}\\6&(0,3)&{\textbf {a}}_{3}\\7&(1,2)&{\textbf {b}}_{2}\\8&(2,1)&{\textbf {c}}_{1}\\9&(3,0)&{\textbf {d}}_{0}\\10&(0,4)&{\textbf {a}}_{4}\\\vdots &&\end{array}}} We need theaxiom of countable choiceto indexallthe setsa,b,c,…{\displaystyle {\textbf {a}},{\textbf {b}},{\textbf {c}},\dots }simultaneously. Theorem—The set of all finite-lengthsequencesof natural numbers is countable. This set is the union of the length-1 sequences, the length-2 sequences, the length-3 sequences, and so on, each of which is a countable set (finite Cartesian product). Thus the set is a countable union of countable sets, which is countable by the previous theorem. Theorem—The set of all finitesubsetsof the natural numbers is countable. The elements of any finite subset can be ordered into a finite sequence. There are only countably many finite sequences, so also there are only countably many finite subsets. Theorem—LetS{\displaystyle S}andT{\displaystyle T}be sets. These follow from the definitions of countable set as injective / surjective functions.[g] Cantor's theoremasserts that ifA{\displaystyle A}is a set andP(A){\displaystyle {\mathcal {P}}(A)}is itspower set, i.e. the set of all subsets ofA{\displaystyle A}, then there is no surjective function fromA{\displaystyle A}toP(A){\displaystyle {\mathcal {P}}(A)}. A proof is given in the articleCantor's theorem. As an immediate consequence of this and the Basic Theorem above we have: Proposition—The setP(N){\displaystyle {\mathcal {P}}(\mathbb {N} )}is not countable; i.e. it isuncountable. For an elaboration of this result seeCantor's diagonal argument. The set ofreal numbersis uncountable,[h]and so is the set of all infinitesequencesof natural numbers. If there is a set that is a standard model (seeinner model) of ZFC set theory, then there is a minimal standard model (seeConstructible universe). TheLöwenheim–Skolem theoremcan be used to show that this minimal model is countable. The fact that the notion of "uncountability" makes sense even in this model, and in particular that this modelMcontains elements that are: was seen as paradoxical in the early days of set theory; seeSkolem's paradoxfor more. The minimal standard model includes all thealgebraic numbersand all effectively computabletranscendental numbers, as well as many other kinds of numbers. 
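Conversely, the triangular indexing used in the union-of-countable-sets table earlier in this passage can be decoded: index i lies on some diagonal d, and within that diagonal the tuples run (0, d), (1, d − 1), ..., (d, 0). The sketch below is illustrative only, returning the pair (which set, which element of that set).

```python
def index_to_tuple(idx):
    # invert the diagonal listing shown in the table: 0 -> (0,0), 1 -> (0,1), 2 -> (1,0), ...
    d = 0
    while (d + 1) * (d + 2) // 2 <= idx:   # find the diagonal d containing this index
        d += 1
    k = idx - d * (d + 1) // 2             # position within diagonal d
    return (k, d - k)                      # (which set, which element of that set)

assert [index_to_tuple(i) for i in range(6)] == [(0, 0), (0, 1), (1, 0), (0, 2), (1, 1), (2, 0)]
```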
Countable sets can be totally ordered in various ways. Examples of well orders include the usual order on the natural numbers and the order on the integers obtained by listing 0, 1, 2, 3, ... followed by −1, −2, −3, ...; examples of total orders that are not well orders include the usual order on the integers and the usual order on the rational numbers. In both examples of well orders here, any non-empty subset has a least element; and in both examples of non-well orders, some subsets do not have a least element. This is the key property that determines whether a total order is also a well order.
https://en.wikipedia.org/wiki/Countability
Probabilityis a branch ofmathematicsandstatisticsconcerningeventsand numerical descriptions of how likely they are to occur. The probability of an event is a number between 0 and 1; the larger the probability, the more likely an event is to occur.[note 1][1][2]This number is often expressed as a percentage (%), ranging from 0% to 100%. A simple example is the tossing of a fair (unbiased) coin. Since the coin is fair, the two outcomes ("heads" and "tails") are both equally probable; the probability of "heads" equals the probability of "tails"; and since no other outcomes are possible, the probability of either "heads" or "tails" is 1/2 (which could also be written as 0.5 or 50%). These concepts have been given anaxiomaticmathematical formalization inprobability theory, which is used widely inareas of studysuch asstatistics,mathematics,science,finance,gambling,artificial intelligence,machine learning,computer science,game theory, andphilosophyto, for example, draw inferences about the expected frequency of events. Probability theory is also used to describe the underlying mechanics and regularities ofcomplex systems.[3] The wordprobabilityderivesfrom the Latinprobabilitas, which can also mean "probity", a measure of theauthorityof awitnessin alegal casein Europe, and often correlated with the witness'snobility. In a sense, this differs much from the modern meaning ofprobability, which in contrast is a measure of the weight ofempirical evidence, and is arrived at frominductive reasoningandstatistical inference.[4] When dealing withrandom experiments– i.e.,experimentsthat arerandomandwell-defined– in a purely theoretical setting (like tossing a coin), probabilities can be numerically described by the number of desired outcomes, divided by the total number of all outcomes. This is referred to astheoretical probability(in contrast toempirical probability, dealing with probabilities in the context of real experiments). The probability is a number between 0 and 1; the larger the probability, the more likely the desired outcome is to occur. For example, tossing a coin twice will yield "head-head", "head-tail", "tail-head", and "tail-tail" outcomes. The probability of getting an outcome of "head-head" is 1 out of 4 outcomes, or, in numerical terms, 1/4, 0.25 or 25%. The probability of getting an outcome of at least one head is 3 out of 4, or 0.75, and this event is more likely to occur. However, when it comes to practical application, there are two major competing categories of probability interpretations, whose adherents hold different views about the fundamental nature of probability: The scientific study of probability is a modern development of mathematics.Gamblingshows that there has been an interest in quantifying the ideas of probability throughout history, but exact mathematical descriptions arose much later. There are reasons for the slow development of the mathematics of probability. Whereas games of chance provided the impetus for the mathematical study of probability, fundamental issues[note 2]are still obscured by superstitions.[11] According toRichard Jeffrey, "Before the middle of the seventeenth century, the term 'probable' (Latinprobabilis) meantapprovable, and was applied in that sense, univocally, to opinion and to action. 
A probable action or opinion was one such as sensible people would undertake or hold, in the circumstances."[12]However, in legal contexts especially, 'probable' could also apply to propositions for which there was good evidence.[13] The sixteenth-centuryItalianpolymathGerolamo Cardanodemonstrated the efficacy of definingoddsas the ratio of favourable to unfavourable outcomes (which implies that the probability of an event is given by the ratio of favourable outcomes to the total number of possible outcomes[14]). Aside from the elementary work by Cardano, the doctrine of probabilities dates to the correspondence ofPierre de FermatandBlaise Pascal(1654).Christiaan Huygens(1657) gave the earliest known scientific treatment of the subject.[15]Jakob Bernoulli'sArs Conjectandi(posthumous, 1713) andAbraham de Moivre'sDoctrine of Chances(1718) treated the subject as a branch of mathematics.[16]SeeIan Hacking'sThe Emergence of Probability[4]andJames Franklin'sThe Science of Conjecture[17]for histories of the early development of the very concept of mathematical probability. Thetheory of errorsmay be traced back toRoger Cotes'sOpera Miscellanea(posthumous, 1722), but a memoir prepared byThomas Simpsonin 1755 (printed 1756) first applied the theory to the discussion of errors of observation.[18]The reprint (1757) of this memoir lays down the axioms that positive and negative errors are equally probable, and that certain assignable limits define the range of all errors. Simpson also discusses continuous errors and describes a probability curve. The first two laws of error that were proposed both originated withPierre-Simon Laplace. The first law was published in 1774, and stated that the frequency of an error could be expressed as an exponential function of the numerical magnitude of the error – disregarding sign. The second law of error was proposed in 1778 by Laplace, and stated that the frequency of the error is an exponential function of the square of the error.[19]The second law of error is called the normal distribution or the Gauss law. "It is difficult historically to attribute that law to Gauss, who in spite of his well-known precocity had probably not made this discovery before he was two years old."[19] Daniel Bernoulli(1778) introduced the principle of the maximum product of the probabilities of a system of concurrent errors. Adrien-Marie Legendre(1805) developed themethod of least squares, and introduced it in hisNouvelles méthodes pour la détermination des orbites des comètes(New Methods for Determining the Orbits of Comets).[20]In ignorance of Legendre's contribution, an Irish-American writer,Robert Adrain, editor of "The Analyst" (1808), first deduced the law of facility of error, ϕ(x)=ce−h2x2{\displaystyle \phi (x)=ce^{-h^{2}x^{2}}} whereh{\displaystyle h}is a constant depending on precision of observation, andc{\displaystyle c}is a scale factor ensuring that the area under the curve equals 1. He gave two proofs, the second being essentially the same asJohn Herschel's (1850).[citation needed]Gaussgave the first proof that seems to have been known in Europe (the third after Adrain's) in 1809. Further proofs were given by Laplace (1810, 1812), Gauss (1823),James Ivory(1825, 1826), Hagen (1837),Friedrich Bessel(1838),W.F. Donkin(1844, 1856), andMorgan Crofton(1870). Other contributors wereEllis(1844),De Morgan(1864),Glaisher(1872), andGiovanni Schiaparelli(1875).Peters's (1856) formula[clarification needed]forr, theprobable errorof a single observation, is well known. 
In the nineteenth century, authors on the general theory includedLaplace,Sylvestre Lacroix(1816), Littrow (1833),Adolphe Quetelet(1853),Richard Dedekind(1860), Helmert (1872),Hermann Laurent(1873), Liagre, Didion andKarl Pearson.Augustus De MorganandGeorge Booleimproved the exposition of the theory. In 1906,Andrey Markovintroduced[21]the notion ofMarkov chains, which played an important role instochastic processestheory and its applications. The modern theory of probability based onmeasure theorywas developed byAndrey Kolmogorovin 1931.[22] On the geometric side, contributors toThe Educational Timesincluded Miller, Crofton, McColl, Wolstenholme, Watson, andArtemas Martin.[23]Seeintegral geometryfor more information. Like othertheories, thetheory of probabilityis a representation of its concepts in formal terms – that is, in terms that can be considered separately from their meaning. These formal terms are manipulated by the rules of mathematics and logic, and any results are interpreted or translated back into the problem domain. There have been at least two successful attempts to formalize probability, namely theKolmogorovformulation and theCoxformulation. In Kolmogorov's formulation (see alsoprobability space),setsare interpreted aseventsand probability as ameasureon a class of sets. InCox's theorem, probability is taken as a primitive (i.e., not further analyzed), and the emphasis is on constructing a consistent assignment of probability values to propositions. In both cases, thelaws of probabilityare the same, except for technical details. There are other methods for quantifying uncertainty, such as theDempster–Shafer theoryorpossibility theory, but those are essentially different and not compatible with the usually-understood laws of probability. Probability theory is applied in everyday life inriskassessment andmodeling. The insurance industry andmarketsuseactuarial scienceto determine pricing and make trading decisions. Governments apply probabilistic methods inenvironmental regulation, entitlement analysis, andfinancial regulation. An example of the use of probability theory in equity trading is the effect of the perceived probability of any widespread Middle East conflict on oil prices, which have ripple effects in the economy as a whole. An assessment by a commodity trader that a war is more likely can send that commodity's prices up or down, and signals other traders of that opinion. Accordingly, the probabilities are neither assessed independently nor necessarily rationally. The theory ofbehavioral financeemerged to describe the effect of suchgroupthinkon pricing, on policy, and on peace and conflict.[24] In addition to financial assessment, probability can be used to analyze trends in biology (e.g., disease spread) as well as ecology (e.g., biologicalPunnett squares).[25]As with finance, risk assessment can be used as a statistical tool to calculate the likelihood of undesirable events occurring, and can assist with implementing protocols to avoid encountering such circumstances. Probability is used to designgames of chanceso that casinos can make a guaranteed profit, yet provide payouts to players that are frequent enough to encourage continued play.[26] Another significant application of probability theory in everyday life isreliability. Many consumer products, such asautomobilesand consumer electronics, use reliability theory in product design to reduce the probability of failure. 
Failure probability may influence a manufacturer's decisions on a product'swarranty.[27] Thecache language modeland otherstatistical language modelsthat are used innatural language processingare also examples of applications of probability theory. Consider an experiment that can produce a number of results. The collection of all possible results is called thesample spaceof the experiment, sometimes denoted asΩ{\displaystyle \Omega }. Thepower setof the sample space is formed by considering all different collections of possible results. For example, rolling a die can produce six possible results. One collection of possible results gives an odd number on the die. Thus, the subset {1,3,5} is an element of thepower setof the sample space of dice rolls. These collections are called "events". In this case, {1,3,5} is the event that the die falls on some odd number. If the results that actually occur fall in a given event, the event is said to have occurred. A probability is away of assigningevery event a value between zero and one, with the requirement that the event made up of all possible results (in our example, the event {1,2,3,4,5,6}) is assigned a value of one. To qualify as a probability, the assignment of values must satisfy the requirement that for any collection of mutually exclusive events (events with no common results, such as the events {1,6}, {3}, and {2,4}), the probability that at least one of the events will occur is given by the sum of the probabilities of all the individual events.[28] The probability of aneventAis written asP(A){\displaystyle P(A)},[29]p(A){\displaystyle p(A)}, orPr(A){\displaystyle {\text{Pr}}(A)}.[30]This mathematical definition of probability can extend to infinite sample spaces, and even uncountable sample spaces, using the concept of a measure. Theoppositeorcomplementof an eventAis the event [notA] (that is, the event ofAnot occurring), often denoted asA′,Ac{\displaystyle A',A^{c}},A¯,A∁,¬A{\displaystyle {\overline {A}},A^{\complement },\neg A}, or∼A{\displaystyle {\sim }A}; its probability is given byP(notA) = 1 −P(A).[31]As an example, the chance of not rolling a six on a six-sided die is1 – (chance of rolling a six) =1 −⁠1/6⁠=⁠5/6⁠.For a more comprehensive treatment, seeComplementary event. If two eventsAandBoccur on a single performance of an experiment, this is called the intersection orjoint probabilityofAandB, denoted asP(A∩B).{\displaystyle P(A\cap B).} If two events,AandBareindependentthen the joint probability is[29] P(AandB)=P(A∩B)=P(A)P(B).{\displaystyle P(A{\mbox{ and }}B)=P(A\cap B)=P(A)P(B).} For example, if two coins are flipped, then the chance of both being heads is12×12=14.{\displaystyle {\tfrac {1}{2}}\times {\tfrac {1}{2}}={\tfrac {1}{4}}.}[32] If either eventAor eventBcan occur but never both simultaneously, then they are called mutually exclusive events. 
If two events aremutually exclusive, then the probability ofbothoccurring is denoted asP(A∩B){\displaystyle P(A\cap B)}andP(AandB)=P(A∩B)=0{\displaystyle P(A{\mbox{ and }}B)=P(A\cap B)=0}If two events aremutually exclusive, then the probability ofeitheroccurring is denoted asP(A∪B){\displaystyle P(A\cup B)}andP(AorB)=P(A∪B)=P(A)+P(B)−P(A∩B)=P(A)+P(B)−0=P(A)+P(B){\displaystyle P(A{\mbox{ or }}B)=P(A\cup B)=P(A)+P(B)-P(A\cap B)=P(A)+P(B)-0=P(A)+P(B)} For example, the chance of rolling a 1 or 2 on a six-sided die isP(1or2)=P(1)+P(2)=16+16=13.{\displaystyle P(1{\mbox{ or }}2)=P(1)+P(2)={\tfrac {1}{6}}+{\tfrac {1}{6}}={\tfrac {1}{3}}.} If the events are not (necessarily) mutually exclusive thenP(AorB)=P(A∪B)=P(A)+P(B)−P(AandB).{\displaystyle P\left(A{\hbox{ or }}B\right)=P(A\cup B)=P\left(A\right)+P\left(B\right)-P\left(A{\mbox{ and }}B\right).}Rewritten,P(A∪B)=P(A)+P(B)−P(A∩B){\displaystyle P\left(A\cup B\right)=P\left(A\right)+P\left(B\right)-P\left(A\cap B\right)} For example, when drawing a card from a deck of cards, the chance of getting a heart or a face card (J, Q, K) (or both) is1352+1252−352=1126,{\displaystyle {\tfrac {13}{52}}+{\tfrac {12}{52}}-{\tfrac {3}{52}}={\tfrac {11}{26}},}since among the 52 cards of a deck, 13 are hearts, 12 are face cards, and 3 are both: here the possibilities included in the "3 that are both" are included in each of the "13 hearts" and the "12 face cards", but should only be counted once. This can be expanded further for multiple not (necessarily) mutually exclusive events. For three events, this proceeds as follows:P(A∪B∪C)=P((A∪B)∪C)=P(A∪B)+P(C)−P((A∪B)∩C)=P(A)+P(B)−P(A∩B)+P(C)−P((A∩C)∪(B∩C))=P(A)+P(B)+P(C)−P(A∩B)−(P(A∩C)+P(B∩C)−P((A∩C)∩(B∩C)))P(A∪B∪C)=P(A)+P(B)+P(C)−P(A∩B)−P(A∩C)−P(B∩C)+P(A∩B∩C){\displaystyle {\begin{aligned}P\left(A\cup B\cup C\right)=&P\left(\left(A\cup B\right)\cup C\right)\\=&P\left(A\cup B\right)+P\left(C\right)-P\left(\left(A\cup B\right)\cap C\right)\\=&P\left(A\right)+P\left(B\right)-P\left(A\cap B\right)+P\left(C\right)-P\left(\left(A\cap C\right)\cup \left(B\cap C\right)\right)\\=&P\left(A\right)+P\left(B\right)+P\left(C\right)-P\left(A\cap B\right)-\left(P\left(A\cap C\right)+P\left(B\cap C\right)-P\left(\left(A\cap C\right)\cap \left(B\cap C\right)\right)\right)\\P\left(A\cup B\cup C\right)=&P\left(A\right)+P\left(B\right)+P\left(C\right)-P\left(A\cap B\right)-P\left(A\cap C\right)-P\left(B\cap C\right)+P\left(A\cap B\cap C\right)\end{aligned}}}It can be seen, then, that this pattern can be repeated for any number of events. Conditional probabilityis the probability of some eventA, given the occurrence of some other eventB. Conditional probability is writtenP(A∣B){\displaystyle P(A\mid B)}, and is read "the probability ofA, givenB". It is defined by[33] P(A∣B)=P(A∩B)P(B){\displaystyle P(A\mid B)={\frac {P(A\cap B)}{P(B)}}\,} IfP(B)=0{\displaystyle P(B)=0}thenP(A∣B){\displaystyle P(A\mid B)}is formallyundefinedby this expression. In this caseA{\displaystyle A}andB{\displaystyle B}are independent, sinceP(A∩B)=P(A)P(B)=0.{\displaystyle P(A\cap B)=P(A)P(B)=0.}However, it is possible to define a conditional probability for some zero-probability events, for example by using aσ-algebraof such events (such as those arising from acontinuous random variable).[34] For example, in a bag of 2 red balls and 2 blue balls (4 balls in total), the probability of taking a red ball is1/2;{\displaystyle 1/2;}however, when taking a second ball, the probability of it being either a red ball or a blue ball depends on the ball previously taken. 
For example, if a red ball was taken, then the probability of picking a red ball again would be1/3,{\displaystyle 1/3,}since only 1 red and 2 blue balls would have been remaining. And if a blue ball was taken previously, the probability of taking a red ball will be2/3.{\displaystyle 2/3.} Inprobability theoryand applications,Bayes' rulerelates theoddsof eventA1{\displaystyle A_{1}}to eventA2,{\displaystyle A_{2},}before (prior to) and after (posterior to)conditioningon another eventB.{\displaystyle B.}The odds onA1{\displaystyle A_{1}}to eventA2{\displaystyle A_{2}}is simply the ratio of the probabilities of the two events. When arbitrarily many eventsA{\displaystyle A}are of interest, not just two, the rule can be rephrased asposterior is proportional to prior times likelihood,P(A|B)∝P(A)P(B|A){\displaystyle P(A|B)\propto P(A)P(B|A)}where the proportionality symbol means that the left hand side is proportional to (i.e., equals a constant times) the right hand side asA{\displaystyle A}varies, for fixed or givenB{\displaystyle B}(Lee, 2012; Bertsch McGrayne, 2012). In this form it goes back to Laplace (1774) and to Cournot (1843); see Fienberg (2005). In adeterministicuniverse, based onNewtonianconcepts, there would be no probability if all conditions were known (Laplace's demon) (but there are situations in whichsensitivity to initial conditionsexceeds our ability to measure them, i.e. know them). In the case of aroulettewheel, if the force of the hand and the period of that force are known, the number on which the ball will stop would be a certainty (though as a practical matter, this would likely be true only of a roulette wheel that had not been exactly levelled – as Thomas A. Bass'Newtonian Casinorevealed). This also assumes knowledge of inertia and friction of the wheel, weight, smoothness, and roundness of the ball, variations in hand speed during the turning, and so forth. A probabilistic description can thus be more useful than Newtonian mechanics for analyzing the pattern of outcomes of repeated rolls of a roulette wheel. Physicists face the same situation in thekinetic theory of gases, where the system, while deterministicin principle, is so complex (with the number of molecules typically the order of magnitude of theAvogadro constant6.02×1023) that only a statistical description of its properties is feasible.[35] Probability theoryis required to describe quantum phenomena.[36]A revolutionary discovery of early 20th centuryphysicswas the random character of all physical processes that occur at sub-atomic scales and are governed by the laws ofquantum mechanics. The objectivewave functionevolves deterministically but, according to theCopenhagen interpretation, it deals with probabilities of observing, the outcome being explained by awave function collapsewhen an observation is made. However, the loss ofdeterminismfor the sake ofinstrumentalismdid not meet with universal approval.Albert Einsteinfamouslyremarkedin a letter toMax Born: "I am convinced that God does not play dice".[37]Like Einstein,Erwin Schrödinger, whodiscoveredthe wave function, believed quantum mechanics is astatisticalapproximation of an underlying deterministicreality.[38]In some modern interpretations of the statistical mechanics of measurement,quantum decoherenceis invoked to account for the appearance of subjectively probabilistic experimental outcomes.
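Returning to the two-ball urn example earlier in this passage, the conditional probability P(red on second draw | red on first draw) = 1/3 can be checked directly from the definition P(A ∣ B) = P(A ∩ B) / P(B) by enumerating the equally likely ordered draws; the sketch below is illustrative only.

```python
from itertools import permutations
from fractions import Fraction

balls = ["R", "R", "B", "B"]                     # 2 red and 2 blue balls
draws = list(permutations(range(4), 2))          # all equally likely ordered pairs of draws

def prob(event):
    return Fraction(sum(event(a, b) for a, b in draws), len(draws))

p_first_red = prob(lambda a, b: balls[a] == "R")
p_both_red  = prob(lambda a, b: balls[a] == "R" and balls[b] == "R")

assert p_first_red == Fraction(1, 2)
assert p_both_red / p_first_red == Fraction(1, 3)   # P(second red | first red) = 1/3

p_first_blue = prob(lambda a, b: balls[a] == "B")
p_blue_then_red = prob(lambda a, b: balls[a] == "B" and balls[b] == "R")
assert p_blue_then_red / p_first_blue == Fraction(2, 3)   # P(second red | first blue) = 2/3
```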
https://en.wikipedia.org/wiki/Probability
Combinatoricsis an area ofmathematicsprimarily concerned withcounting, both as a means and as an end to obtaining results, and certain properties offinitestructures. It is closely related to many other areas of mathematics and has many applications ranging fromlogictostatistical physicsand fromevolutionary biologytocomputer science. Combinatorics is well known for the breadth of the problems it tackles. Combinatorial problems arise in many areas ofpure mathematics, notably inalgebra,probability theory,topology, andgeometry,[1]as well as in its many application areas. Many combinatorial questions have historically been considered in isolation, giving anad hocsolution to a problem arising in some mathematical context. In the later twentieth century, however, powerful and general theoretical methods were developed, making combinatorics into an independent branch of mathematics in its own right.[2]One of the oldest and most accessible parts of combinatorics isgraph theory, which by itself has numerous natural connections to other areas. Combinatorics is used frequently in computer science to obtain formulas and estimates in theanalysis of algorithms. The full scope of combinatorics is not universally agreed upon.[3]According toH. J. Ryser, a definition of the subject is difficult because it crosses so many mathematical subdivisions.[4]Insofar as an area can be described by the types of problems it addresses, combinatorics is involved with: Leon Mirskyhas said: "combinatorics is a range of linked studies which have something in common and yet diverge widely in their objectives, their methods, and the degree of coherence they have attained."[5]One way to define combinatorics is, perhaps, to describe its subdivisions with their problems and techniques. This is the approach that is used below. However, there are also purely historical reasons for including or not including some topics under the combinatorics umbrella.[6]Although primarily concerned with finite systems, some combinatorial questions and techniques can be extended to an infinite (specifically,countable) butdiscretesetting. Basic combinatorial concepts and enumerative results appeared throughout theancient world. The earliest recorded use of combinatorial techniques comes from problem 79 of theRhind papyrus, which dates to the 16th century BC. The problem concerns a certaingeometric series, and has similarities to Fibonacci's problem of counting the number ofcompositionsof 1s and 2s thatsumto a given total.[7]IndianphysicianSushrutaasserts inSushruta Samhitathat 63 combinations can be made out of 6 different tastes, taken one at a time, two at a time, etc., thus computing all 26− 1 possibilities.GreekhistorianPlutarchdiscusses an argument betweenChrysippus(3rd century BCE) andHipparchus(2nd century BCE) of a rather delicate enumerative problem, which was later shown to be related toSchröder–Hipparchus numbers.[8][9][10]Earlier, in theOstomachion,Archimedes(3rd century BCE) may have considered the number of configurations of atiling puzzle,[11]while combinatorial interests possibly were present in lost works byApollonius.[12][13] In theMiddle Ages, combinatorics continued to be studied, largely outside of theEuropean civilization. 
TheIndianmathematicianMahāvīra(c.850) provided formulae for the number ofpermutationsandcombinations,[14][15]and these formulas may have been familiar to Indian mathematicians as early as the 6th century CE.[16]ThephilosopherandastronomerRabbiAbraham ibn Ezra(c.1140) established the symmetry ofbinomial coefficients, while a closed formula was obtained later by thetalmudistandmathematicianLevi ben Gerson(better known as Gersonides), in 1321.[17]The arithmetical triangle—a graphical diagram showing relationships among the binomial coefficients—was presented by mathematicians in treatises dating as far back as the 10th century, and would eventually become known asPascal's triangle. Later, inMedieval England,campanologyprovided examples of what is now known asHamiltonian cyclesin certainCayley graphson permutations.[18][19] During theRenaissance, together with the rest of mathematics and thesciences, combinatorics enjoyed a rebirth. Works ofPascal,Newton,Jacob BernoulliandEulerbecame foundational in the emerging field. In modern times, the works ofJ.J. Sylvester(late 19th century) andPercy MacMahon(early 20th century) helped lay the foundation forenumerativeandalgebraic combinatorics.Graph theoryalso enjoyed an increase of interest at the same time, especially in connection with thefour color problem. In the second half of the 20th century, combinatorics enjoyed a rapid growth, which led to establishment of dozens of new journals and conferences in the subject.[20]In part, the growth was spurred by new connections and applications to other fields, ranging from algebra to probability, fromfunctional analysistonumber theory, etc. These connections shed the boundaries between combinatorics and parts of mathematics and theoretical computer science, but at the same time led to a partial fragmentation of the field. Enumerative combinatorics is the most classical area of combinatorics and concentrates on counting the number of certain combinatorial objects. Although counting the number of elements in a set is a rather broadmathematical problem, many of the problems that arise in applications have a relatively simple combinatorial description.Fibonacci numbersis the basic example of a problem in enumerative combinatorics. Thetwelvefold wayprovides a unified framework for countingpermutations,combinationsandpartitions. Analytic combinatoricsconcerns the enumeration of combinatorial structures using tools fromcomplex analysisandprobability theory. In contrast with enumerative combinatorics, which uses explicit combinatorial formulae andgenerating functionsto describe the results, analytic combinatorics aims at obtainingasymptotic formulae. Partition theory studies various enumeration and asymptotic problems related tointeger partitions, and is closely related toq-series,special functionsandorthogonal polynomials. Originally a part ofnumber theoryandanalysis, it is now considered a part of combinatorics or an independent field. It incorporates thebijective approachand various tools in analysis andanalytic number theoryand has connections withstatistical mechanics. Partitions can be graphically visualized withYoung diagramsorFerrers diagrams. They occur in a number of branches ofmathematicsandphysics, including the study ofsymmetric polynomialsand of thesymmetric groupand ingroup representation theoryin general. Graphs are fundamental objects in combinatorics. 
Considerations of graph theory range from enumeration (e.g., the number of graphs onnvertices withkedges) to existing structures (e.g., Hamiltonian cycles) to algebraic representations (e.g., given a graphGand two numbersxandy, does theTutte polynomialTG(x,y) have a combinatorial interpretation?). Although there are very strong connections between graph theory and combinatorics, they are sometimes thought of as separate subjects.[21]While combinatorial methods apply to many graph theory problems, the two disciplines are generally used to seek solutions to different types of problems. Design theory is a study ofcombinatorial designs, which are collections of subsets with certainintersectionproperties.Block designsare combinatorial designs of a special type. This area is one of the oldest parts of combinatorics, such as inKirkman's schoolgirl problemproposed in 1850. The solution of the problem is a special case of aSteiner system, which play an important role in theclassification of finite simple groups. The area has further connections tocoding theoryand geometric combinatorics. Combinatorial design theory can be applied to the area ofdesign of experiments. Some of the basic theory of combinatorial designs originated in the statisticianRonald Fisher's work on the design of biological experiments. Modern applications are also found in a wide gamut of areas includingfinite geometry,tournament scheduling,lotteries,mathematical chemistry,mathematical biology,algorithm design and analysis,networking,group testingandcryptography.[22] Finite geometry is the study ofgeometric systemshaving only a finite number of points. Structures analogous to those found in continuous geometries (Euclidean plane,real projective space, etc.) but defined combinatorially are the main items studied. This area provides a rich source of examples fordesign theory. It should not be confused with discrete geometry (combinatorial geometry). Order theory is the study ofpartially ordered sets, both finite and infinite. It provides a formal framework for describing statements such as "this is less than that" or "this precedes that". Various examples of partial orders appear inalgebra, geometry, number theory and throughout combinatorics and graph theory. Notable classes and examples of partial orders includelatticesandBoolean algebras. Matroid theory abstracts part ofgeometry. It studies the properties of sets (usually, finite sets) of vectors in avector spacethat do not depend on the particular coefficients in alinear dependencerelation. Not only the structure but also enumerative properties belong to matroid theory. Matroid theory was introduced byHassler Whitneyand studied as a part of order theory. It is now an independent field of study with a number of connections with other parts of combinatorics. Extremal combinatorics studies how large or how small a collection of finite objects (numbers,graphs,vectors,sets, etc.) can be, if it has to satisfy certain restrictions. Much of extremal combinatorics concernsclassesofset systems; this is called extremal set theory. For instance, in ann-element set, what is the largest number ofk-elementsubsetsthat can pairwise intersect one another? What is the largest number of subsets of which none contains any other? The latter question is answered bySperner's theorem, which gave rise to much of extremal set theory. The types of questions addressed in this case are about the largest possible graph which satisfies certain properties. 
For example, the largesttriangle-free graphon2nvertices is acomplete bipartite graphKn,n. Often it is too hard even to find the extremal answerf(n) exactly and one can only give anasymptotic estimate. Ramsey theoryis another part of extremal combinatorics. It states that anysufficiently largeconfiguration will contain some sort of order. It is an advanced generalization of thepigeonhole principle. In probabilistic combinatorics, the questions are of the following type: what is the probability of a certain property for a random discrete object, such as arandom graph? For instance, what is the average number of triangles in a random graph? Probabilistic methods are also used to determine the existence of combinatorial objects with certain prescribed properties (for which explicit examples might be difficult to find) by observing that the probability of randomly selecting an object with those properties is greater than 0. This approach (often referred to astheprobabilistic method) proved highly effective in applications to extremal combinatorics and graph theory. A closely related area is the study of finiteMarkov chains, especially on combinatorial objects. Here again probabilistic tools are used to estimate themixing time.[clarification needed] Often associated withPaul Erdős, who did the pioneering work on the subject, probabilistic combinatorics was traditionally viewed as a set of tools to study problems in other parts of combinatorics. The area recently grew to become an independent field of combinatorics. Algebraic combinatorics is an area ofmathematicsthat employs methods ofabstract algebra, notablygroup theoryandrepresentation theory, in various combinatorial contexts and, conversely, applies combinatorial techniques to problems inalgebra. Algebraic combinatorics has come to be seen more expansively as an area of mathematics where the interaction of combinatorial and algebraic methods is particularly strong and significant. Thus the combinatorial topics may beenumerativein nature or involvematroids,polytopes,partially ordered sets, orfinite geometries. On the algebraic side, besides group and representation theory,lattice theoryandcommutative algebraare common. Combinatorics on words deals withformal languages. It arose independently within several branches of mathematics, includingnumber theory,group theoryandprobability. It has applications to enumerative combinatorics,fractal analysis,theoretical computer science,automata theory, andlinguistics. While many applications are new, the classicalChomsky–Schützenberger hierarchyof classes offormal grammarsis perhaps the best-known result in the field. Geometric combinatorics is related toconvexanddiscrete geometry. It asks, for example, how many faces of each dimension aconvex polytopecan have.Metricproperties of polytopes play an important role as well, e.g. theCauchy theoremon the rigidity of convex polytopes. Special polytopes are also considered, such aspermutohedra,associahedraandBirkhoff polytopes.Combinatorial geometryis a historical name for discrete geometry. It includes a number of subareas such aspolyhedral combinatorics(the study offacesofconvex polyhedra),convex geometry(the study ofconvex sets, in particular combinatorics of their intersections), anddiscrete geometry, which in turn has many applications tocomputational geometry. The study ofregular polytopes,Archimedean solids, andkissing numbersis also a part of geometric combinatorics. 
Special polytopes are also considered, such as thepermutohedron,associahedronandBirkhoff polytope. Combinatorial analogs of concepts and methods intopologyare used to studygraph coloring,fair division,partitions,partially ordered sets,decision trees,necklace problemsanddiscrete Morse theory. It should not be confused withcombinatorial topologywhich is an older name foralgebraic topology. Arithmetic combinatorics arose out of the interplay betweennumber theory, combinatorics,ergodic theory, andharmonic analysis. It is about combinatorial estimates associated with arithmetic operations (addition, subtraction, multiplication, and division).Additive number theory(sometimes also called additive combinatorics) refers to the special case when only the operations of addition and subtraction are involved. One important technique in arithmetic combinatorics is theergodic theoryofdynamical systems. Infinitary combinatorics, or combinatorial set theory, is an extension of ideas in combinatorics to infinite sets. It is a part ofset theory, an area ofmathematical logic, but uses tools and ideas from both set theory and extremal combinatorics. Some of the things studied includecontinuous graphsandtrees, extensions ofRamsey's theorem, andMartin's axiom. Recent developments concern combinatorics of thecontinuum[23]and combinatorics on successors of singular cardinals.[24] Gian-Carlo Rotaused the namecontinuous combinatorics[25]to describegeometric probability, since there are many analogies betweencountingandmeasure. Combinatorial optimizationis the study of optimization on discrete and combinatorial objects. It started as a part of combinatorics and graph theory, but is now viewed as a branch of applied mathematics and computer science, related tooperations research,algorithm theoryandcomputational complexity theory. Coding theorystarted as a part of design theory with early combinatorial constructions oferror-correcting codes. The main idea of the subject is to design efficient and reliable methods of data transmission. It is now a large field of study, part ofinformation theory. Discrete geometry(also called combinatorial geometry) also began as a part of combinatorics, with early results onconvex polytopesandkissing numbers. With the emergence of applications of discrete geometry tocomputational geometry, these two fields partially merged and became a separate field of study. There remain many connections with geometric and topological combinatorics, which themselves can be viewed as outgrowths of the early discrete geometry. Combinatorial aspects of dynamical systemsis another emerging field. Here dynamical systems can be defined on combinatorial objects. See for examplegraph dynamical system. There are increasing interactions betweencombinatorics and physics, particularlystatistical physics. Examples include an exact solution of theIsing model, and a connection between thePotts modelon one hand, and thechromaticandTutte polynomialson the other hand.
https://en.wikipedia.org/wiki/Combinatorics
Inprobability theoryandstatistics, thehypergeometric distributionis adiscrete probability distributionthat describes the probability ofk{\displaystyle k}successes (random draws for which the object drawn has a specified feature) inn{\displaystyle n}draws,withoutreplacement, from a finitepopulationof sizeN{\displaystyle N}that contains exactlyK{\displaystyle K}objects with that feature, wherein each draw is either a success or a failure. In contrast, thebinomial distributiondescribes the probability ofk{\displaystyle k}successes inn{\displaystyle n}drawswithreplacement. The following conditions characterize the hypergeometric distribution: Arandom variableX{\displaystyle X}follows the hypergeometric distribution if itsprobability mass function(pmf) is given by[1]pX(k)=(Kk)(N−Kn−k)(Nn),{\displaystyle p_{X}(k)={\frac {{\binom {K}{k}}{\binom {N-K}{n-k}}}{\binom {N}{n}}},}whereN{\displaystyle N}is the population size,K{\displaystyle K}is the number of success states in the population,n{\displaystyle n}is the number of draws, andk{\displaystyle k}is the number of observed successes. Thepmfis positive whenmax(0,n+K−N)≤k≤min(K,n){\displaystyle \max(0,n+K-N)\leq k\leq \min(K,n)}. A random variable distributed hypergeometrically with parametersN{\displaystyle N},K{\displaystyle K}andn{\displaystyle n}is writtenX∼Hypergeometric(N,K,n){\textstyle X\sim \operatorname {Hypergeometric} (N,K,n)}and hasprobability mass functionpX(k){\textstyle p_{X}(k)}above. As required, we have∑kpX(k)=1,{\displaystyle \sum _{k}p_{X}(k)=1,}which essentially follows fromVandermonde's identityfromcombinatorics. Also note that(Kk)(N−Kn−k)(Nn)=(nk)(N−nK−k)(NK).{\displaystyle {{K \choose k}{N-K \choose n-k} \over {N \choose n}}={{{n \choose k}{{N-n} \choose {K-k}}} \over {N \choose K}}.}This identity can be shown by expressing the binomial coefficients in terms of factorials and rearranging the latter. Additionally, it follows from the symmetry of the problem, described in two different but interchangeable ways. For example, consider two rounds of drawing without replacement. In the first round,K{\displaystyle K}out ofN{\displaystyle N}neutral marbles are drawn from an urn without replacement and coloured green. Then the colored marbles are put back. In the second round,n{\displaystyle n}marbles are drawn without replacement and colored red. Then, the number of marbles with both colors on them (that is, the number of marbles that have been drawn twice) has the hypergeometric distribution. The symmetry inK{\displaystyle K}andn{\displaystyle n}stems from the fact that the two rounds are independent, and one could have started by drawingn{\displaystyle n}balls and colouring them red first. Note that we are interested in the probability ofk{\displaystyle k}successes inn{\displaystyle n}drawswithout replacement, since the probability of success on each trial is not the same, as the size of the remaining population changes as we remove each marble. Keep in mind not to confuse this with thebinomial distribution, which describes the probability ofk{\displaystyle k}successes inn{\displaystyle n}drawswith replacement. The classical application of the hypergeometric distribution issampling without replacement. Think of anurnwith two colors ofmarbles, red and green. Define drawing a green marble as a success and drawing a red marble as a failure. LetNdescribe the number ofall marbles in the urn(see contingency table below) andKdescribe the number ofgreen marbles, thenN−Kcorresponds to the number ofred marbles. Now, standing next to the urn, you close your eyes and draw n marbles without replacement. DefineXas arandom variablewhose outcome isk, the number of green marbles drawn in the experiment.
This situation is illustrated by the followingcontingency table: Indeed, we are interested in calculating the probability of drawing k green marbles in n draws, given that there are K green marbles out of a total of N marbles. For this example, assume that there are5green and45red marbles in the urn. Standing next to the urn, you close your eyes and draw10marbles without replacement. What is the probability that exactly4of the10are green? This problem is summarized by the following contingency table: To find the probability ofdrawing k green marbles in exactly n draws out of N total draws, we identify X as a hyper-geometric random variable to use the formula P(X=k)=f(k;N,K,n)=(Kk)(N−Kn−k)(Nn).{\displaystyle P(X=k)=f(k;N,K,n)={{{K \choose k}{{N-K} \choose {n-k}}} \over {N \choose n}}.} To intuitively explain the given formula, consider the two symmetric problems represented by the identity (Kk)(N−Kn−k)(Nn)=(nk)(N−nK−k)(NK){\displaystyle {{K \choose k}{N-K \choose n-k} \over {N \choose n}}={{{n \choose k}{{N-n} \choose {K-k}}} \over {N \choose K}}} Back to the calculations, we use the formula above to calculate the probability of drawing exactlykgreen marbles Intuitively we would expect it to be even more unlikely that all 5 green marbles will be among the 10 drawn. As expected, the probability of drawing 5 green marbles is roughly 35 times less likely than that of drawing 4. Swapping the roles of green and red marbles: Swapping the roles of drawn and not drawn marbles: Swapping the roles of green and drawn marbles: These symmetries generate thedihedral groupD4{\displaystyle D_{4}}. The probability of drawing any set of green and red marbles (the hypergeometric distribution) depends only on the numbers of green and red marbles, not on the order in which they appear; i.e., it is anexchangeabledistribution. As a result, the probability of drawing a green marble in theith{\displaystyle i^{\text{th}}}draw is[2] This is anex anteprobability—that is, it is based on not knowing the results of the previous draws. LetX∼Hypergeometric⁡(N,K,n){\displaystyle X\sim \operatorname {Hypergeometric} (N,K,n)}andp=K/N{\displaystyle p=K/N}. Then for0<t<K/N{\displaystyle 0<t<K/N}we can derive the following bounds:[3] where is theKullback-Leibler divergenceand it is used thatD(a∥b)≥2(a−b)2{\displaystyle D(a\parallel b)\geq 2(a-b)^{2}}.[4] Note: In order to derive the previous bounds, one has to start by observing thatX=∑i=1nYin{\displaystyle X={\frac {\sum _{i=1}^{n}Y_{i}}{n}}}whereYi{\displaystyle Y_{i}}aredependentrandom variables with a specific distributionD{\displaystyle D}. Because most of the theorems about bounds in sum of random variables are concerned withindependentsequences of them, one has to first create a sequenceZi{\displaystyle Z_{i}}ofindependentrandom variables with the same distributionD{\displaystyle D}and apply the theorems onX′=∑i=1nZin{\displaystyle X'={\frac {\sum _{i=1}^{n}Z_{i}}{n}}}. Then, it is proved from Hoeffding[3]that the results and bounds obtained via this process hold forX{\displaystyle X}as well. Ifnis larger thanN/2, it can be useful to apply symmetry to "invert" the bounds, which give you the following:[4][5] Thehypergeometric testuses the hypergeometric distribution to measure the statistical significance of having drawn a sample consisting of a specific number ofk{\displaystyle k}successes (out ofn{\displaystyle n}total draws) from a population of sizeN{\displaystyle N}containingK{\displaystyle K}successes. 
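The probabilities in the urn example above, and the tail probabilities used by the hypergeometric test described next, can be evaluated directly from the pmf. A minimal Python sketch follows; the helper name is ours (SciPy's scipy.stats.hypergeom provides the same distribution):

```python
from math import comb

def hypergeom_pmf(k, N, K, n):
    """P(X = k) for X ~ Hypergeometric(N, K, n): k successes in n draws
    without replacement from a population of N containing K successes."""
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

# Urn example above: N = 50 marbles, K = 5 green, n = 10 drawn.
p4 = hypergeom_pmf(4, 50, 5, 10)
p5 = hypergeom_pmf(5, 50, 5, 10)
print(round(p4, 6), round(p5, 6))   # roughly 0.00396 and 0.00012

# Upper-tail probability P(X >= 4), the quantity used by the test for
# over-representation described next; k can be at most min(K, n) = 5.
p_value = sum(hypergeom_pmf(k, 50, 5, 10) for k in range(4, 5 + 1))
print(round(p_value, 6))
```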
In a test for over-representation of successes in the sample, the hypergeometric p-value is calculated as the probability of randomly drawingk{\displaystyle k}or more successes from the population inn{\displaystyle n}total draws. In a test for under-representation, the p-value is the probability of randomly drawingk{\displaystyle k}or fewer successes. The test based on the hypergeometric distribution (hypergeometric test) is identical to the corresponding one-tailed version ofFisher's exact test.[6]Reciprocally, the p-value of a two-sided Fisher's exact test can be calculated as the sum of two appropriate hypergeometric tests (for more information see[7]). The test is often used to identify which sub-populations are over- or under-represented in a sample. This test has a wide range of applications. For example, a marketing group could use the test to understand their customer base by testing a set of known customers for over-representation of various demographic subgroups (e.g., women, people under 30). LetX∼Hypergeometric⁡(N,K,n){\displaystyle X\sim \operatorname {Hypergeometric} (N,K,n)}andp=K/N{\displaystyle p=K/N}. whereΦ{\displaystyle \Phi }is thestandard normal distribution function The following table describes four distributions related to the number of successes in a sequence of draws: c∈N+={1,2,…}{\displaystyle c\in \mathbb {N} _{+}=\lbrace 1,2,\ldots \rbrace }(K1,…,Kc)∈Nc{\displaystyle (K_{1},\ldots ,K_{c})\in \mathbb {N} ^{c}} Var⁡(ki)=nN−nN−1KiN(1−KiN){\displaystyle \operatorname {Var} (k_{i})=n{\frac {N-n}{N-1}}\;{\frac {K_{i}}{N}}\left(1-{\frac {K_{i}}{N}}\right)} The model of anurnwith green and red marbles can be extended to the case where there are more than two colors of marbles. If there areKimarbles of coloriin the urn and you takenmarbles at random without replacement, then the number of marbles of each color in the sample (k1,k2,...,kc) has the multivariate hypergeometric distribution: This has the same relationship to themultinomial distributionthat the hypergeometric distribution has to the binomial distribution—the multinomial distribution is the "with-replacement" distribution and the multivariate hypergeometric is the "without-replacement" distribution. The properties of this distribution are given in the adjacent table,[8]wherecis the number of different colors andN=∑i=1cKi{\displaystyle N=\sum _{i=1}^{c}K_{i}}is the total number of marbles in the urn. Suppose there are 5 black, 10 white, and 15 red marbles in an urn. If six marbles are chosen without replacement, the probability that exactly two of each color are chosen is Election auditstypically test a sample of machine-counted precincts to see if recounts by hand or machine match the original counts. Mismatches result in either a report or a larger recount. The sampling rates are usually defined by law, not statistical design, so for a legally defined sample sizen, what is the probability of missing a problem which is present inKprecincts, such as a hack or bug? This is the probability thatk= 0 .Bugs are often obscure, and a hacker can minimize detection by affecting only a few precincts, which will still affect close elections, so a plausible scenario is forKto be on the order of 5% ofN. Audits typically cover 1% to 10% of precincts (often 3%),[9][10][11]so they have a high chance of missing a problem. 
For example, if a problem is present in 5 of 100 precincts, a 3% sample has 86% probability thatk= 0so the problem would not be noticed, and only 14% probability of the problem appearing in the sample (positivek): The sample would need 45 precincts in order to have probability under 5% thatk= 0 in the sample, and thus have probability over 95% of finding the problem: Inhold'empoker players make the best hand they can combining the two cards in their hand with the 5 cards (community cards) eventually turned up on the table. The deck has 52 and there are 13 of each suit. For this example assume a player has 2 clubs in the hand and there are 3 cards showing on the table, 2 of which are also clubs. The player would like to know the probability of one of the next 2 cards to be shown being a club to complete theflush.(Note that the probability calculated in this example assumes no information is known about the cards in the other players' hands; however, experienced poker players may consider how the other players place their bets (check, call, raise, or fold) in considering the probability for each scenario. Strictly speaking, the approach to calculating success probabilities outlined here is accurate in a scenario where there is just one player at the table; in a multiplayer game this probability might be adjusted somewhat based on the betting play of the opponents.) There are 4 clubs showing so there are 9 clubs still unseen. There are 5 cards showing (2 in the hand and 3 on the table) so there are52−5=47{\displaystyle 52-5=47}still unseen. The probability that one of the next two cards turned is a club can be calculated using hypergeometric withk=1,n=2,K=9{\displaystyle k=1,n=2,K=9}andN=47{\displaystyle N=47}. (about 31.64%) The probability that both of the next two cards turned are clubs can be calculated using hypergeometric withk=2,n=2,K=9{\displaystyle k=2,n=2,K=9}andN=47{\displaystyle N=47}. (about 3.33%) The probability that neither of the next two cards turned are clubs can be calculated using hypergeometric withk=0,n=2,K=9{\displaystyle k=0,n=2,K=9}andN=47{\displaystyle N=47}. (about 65.03%) The hypergeometric distribution is indispensable for calculatingKenoodds. In Keno, 20 balls are randomly drawn from a collection of 80 numbered balls in a container, rather likeAmerican Bingo. Prior to each draw, a player selects a certain number ofspotsby marking a paper form supplied for this purpose. For example, a player mightplay a 6-spotby marking 6 numbers, each from a range of 1 through 80 inclusive. Then (after all players have taken their forms to a cashier and been given a duplicate of their marked form, and paid their wager) 20 balls are drawn. Some of the balls drawn may match some or all of the balls selected by the player. Generally speaking, the morehits(balls drawn that match player numbers selected) the greater the payoff. For example, if a customer bets ("plays") $1 for a 6-spot (not an uncommon example) and hits 4 out of the 6, the casino would pay out $4. Payouts can vary from one casino to the next, but $4 is a typical value here. The probability of this event is: Similarly, the chance for hitting 5 spots out of 6 selected is(65)(7415)(8020)≈0.003095639{\displaystyle {{{6 \choose 5}{{74} \choose {15}}} \over {80 \choose 20}}\approx 0.003095639}while a typical payout might be $88. The payout for hitting all 6 would be around $1500 (probability ≈ 0.000128985 or 7752-to-1). 
The only other nonzero payout might be $1 for hitting 3 numbers (i.e., you get your bet back), which has a probability near 0.129819548. Summing the products of each payout and its corresponding probability gives an expected return of 0.70986492, or roughly 71%, for a 6-spot ticket, corresponding to a house advantage of 29%. Other numbers of spots played have a similar expected return. This very poor return (for the player) is usually explained by the large overhead (floor space, equipment, personnel) required for the game.
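The keno figures quoted above follow directly from the hypergeometric pmf. A short sketch, using the example pay table from this section (actual pay tables vary by casino):

```python
from math import comb

def keno_prob(hits, spots=6, drawn=20, total=80):
    """Probability of matching `hits` of the player's `spots` numbers when
    `drawn` balls are drawn from `total` balls (hypergeometric pmf)."""
    return comb(drawn, hits) * comb(total - drawn, spots - hits) / comb(total, spots)

# Example pay table for a $1 six-spot ticket: 3 hits -> $1, 4 -> $4, 5 -> $88, 6 -> $1500.
paytable = {3: 1, 4: 4, 5: 88, 6: 1500}
expected_return = sum(payout * keno_prob(hits) for hits, payout in paytable.items())

print(round(keno_prob(5), 9), round(keno_prob(6), 9))  # ~0.003095639 and ~0.000128985
print(round(expected_return, 8))                       # ~0.70986492, i.e. about a 29% house edge
```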
https://en.wikipedia.org/wiki/Hypergeometric_distribution
Informal languagetheory, acontext-free grammar(CFG) is aformal grammarwhoseproduction rulescan be applied to anonterminal symbolregardless of its context. In particular, in a context-free grammar, each production rule is of the form withA{\displaystyle A}asinglenonterminal symbol, andα{\displaystyle \alpha }a string of terminals and/or nonterminals (α{\displaystyle \alpha }can be empty). Regardless of which symbols surround it, the single nonterminalA{\displaystyle A}on the left hand side can always be replaced byα{\displaystyle \alpha }on the right hand side. This distinguishes it from acontext-sensitive grammar, which can have production rules in the formαAβ→αγβ{\displaystyle \alpha A\beta \rightarrow \alpha \gamma \beta }withA{\displaystyle A}a nonterminal symbol andα{\displaystyle \alpha },β{\displaystyle \beta }, andγ{\displaystyle \gamma }strings of terminal and/or nonterminal symbols. A formal grammar is essentially a set of production rules that describe all possible strings in a given formal language. Production rules are simple replacements. For example, the first rule in the picture, replaces⟨Stmt⟩{\displaystyle \langle {\text{Stmt}}\rangle }with⟨Id⟩=⟨Expr⟩;{\displaystyle \langle {\text{Id}}\rangle =\langle {\text{Expr}}\rangle ;}. There can be multiple replacement rules for a given nonterminal symbol. The language generated by a grammar is the set of all strings of terminal symbols that can be derived, by repeated rule applications, from some particular nonterminal symbol ("start symbol"). Nonterminal symbols are used during the derivation process, but do not appear in its final result string. Languagesgenerated by context-free grammars are known ascontext-free languages(CFL). Different context-free grammars can generate the same context-free language. It is important to distinguish the properties of the language (intrinsic properties) from the properties of a particular grammar (extrinsic properties). Thelanguage equalityquestion (do two given context-free grammars generate the same language?) isundecidable. Context-free grammars arise inlinguisticswhere they are used to describe the structure of sentences and words in anatural language, and they were invented by the linguistNoam Chomskyfor this purpose. By contrast, incomputer science, as the use of recursively-defined concepts increased, they were used more and more. In an early application, grammars are used to describe the structure ofprogramming languages. In a newer application, they are used in an essential part of theExtensible Markup Language(XML) called thedocument type definition.[2] In linguistics, some authors use the termphrase structure grammarto refer to context-free grammars, whereby phrase-structure grammars are distinct fromdependency grammars. In computer science, a popular notation for context-free grammars isBackus–Naur form, or BNF. Since at least the time of the ancient Indian scholarPāṇini, linguists have described thegrammarsof languages in terms of their block structure, and described how sentences arerecursivelybuilt up from smaller phrases, and eventually individual words or word elements. An essential property of these block structures is that logical units never overlap. 
For example, the sentence: can be logically parenthesized (with the logical metasymbols[ ]) as follows: A context-free grammar provides a simple and mathematically precise mechanism for describing the methods by which phrases in some natural language are built from smaller blocks, capturing the "block structure" of sentences in a natural way. Its simplicity makes the formalism amenable to rigorous mathematical study. Important features of natural language syntax such asagreementandreferenceare not part of the context-free grammar, but the basic recursive structure of sentences, the way in which clauses nest inside other clauses, and the way in which lists of adjectives and adverbs are swallowed by nouns and verbs, is described exactly. Context-free grammars are a special form ofsemi-Thue systemsthat in their general form date back to the work ofAxel Thue. The formalism of context-free grammars was developed in the mid-1950s byNoam Chomsky,[3]and also theirclassification as a special typeofformal grammar(which he calledphrase-structure grammars).[4]Some authors, however, reserve the term for more restricted grammars in the Chomsky hierarchy: context-sensitive grammars or context-free grammars. In a broader sense,phrase structure grammarsare also known as constituency grammars. The defining trait of phrase structure grammars is thus their adherence to the constituency relation, as opposed to the dependency relation ofdependency grammars. In Chomsky'sgenerative grammarframework, the syntax of natural language was described by context-free rules combined with transformation rules.[5] Block structure was introduced into computerprogramming languagesby theAlgolproject (1957–1960), which, as a consequence, also featured a context-free grammar[6]to describe the resulting Algol syntax. This became a standard feature of computer languages, and the notation for grammars used in concrete descriptions of computer languages came to be known asBackus–Naur form, after two members of the Algol language design committee.[3]The "block structure" aspect that context-free grammars capture is so fundamental to grammar that the terms syntax and grammar are often identified with context-free grammar rules, especially in computer science. Formal constraints not captured by the grammar are then considered to be part of the "semantics" of the language. Context-free grammars are simple enough to allow the construction of efficientparsing algorithmsthat, for a given string, determine whether and how it can be generated from the grammar. AnEarley parseris an example of such an algorithm, while the widely usedLRandLL parsersare simpler algorithms that deal only with more restrictive subsets of context-free grammars. A context-free grammarGis defined by the 4-tupleG=(V,Σ,R,S){\displaystyle G=(V,\Sigma ,R,S)}, where[a] Aproduction ruleinRis formalized mathematically as a pair(α,β)∈R{\displaystyle (\alpha ,\beta )\in R}, whereα∈V{\displaystyle \alpha \in V}is a nonterminal andβ∈(V∪Σ)∗{\displaystyle \beta \in (V\cup \Sigma )^{*}}is astringof variables and/or terminals; rather than usingordered pairnotation, production rules are usually written using an arrow operator withα{\displaystyle \alpha }as its left hand side andβas its right hand side:α→β{\displaystyle \alpha \rightarrow \beta }. It is allowed forβto be theempty string, and in this case it is customary to denote it byε. 
The formα→ε{\displaystyle \alpha \rightarrow \varepsilon }is called an ε-production.[7] It is common to list all right-hand sides for the same left-hand side on the same line, using | (thevertical bar) to separate them. Rulesα→β1{\displaystyle \alpha \rightarrow \beta _{1}}andα→β2{\displaystyle \alpha \rightarrow \beta _{2}}can hence be written asα→β1∣β2{\displaystyle \alpha \rightarrow \beta _{1}\mid \beta _{2}}. In this case,β1{\displaystyle \beta _{1}}andβ2{\displaystyle \beta _{2}}are called the first and second alternative, respectively. For any stringsu,v∈(V∪Σ)∗{\displaystyle u,v\in (V\cup \Sigma )^{*}}, we sayudirectly yieldsv, written asu⇒v{\displaystyle u\Rightarrow v\,}, if∃(α,β)∈R{\displaystyle \exists (\alpha ,\beta )\in R}withα∈V{\displaystyle \alpha \in V}andu1,u2∈(V∪Σ)∗{\displaystyle u_{1},u_{2}\in (V\cup \Sigma )^{*}}such thatu=u1αu2{\displaystyle u\,=u_{1}\alpha u_{2}}andv=u1βu2{\displaystyle v\,=u_{1}\beta u_{2}}. Thus,vis a result of applying the rule(α,β){\displaystyle (\alpha ,\beta )}tou. For any stringsu,v∈(V∪Σ)∗,{\displaystyle u,v\in (V\cup \Sigma )^{*},}we sayuyieldsvorvisderivedfromuif there is a positive integerkand stringsu1,…,uk∈(V∪Σ)∗{\displaystyle u_{1},\ldots ,u_{k}\in (V\cup \Sigma )^{*}}such thatu=u1⇒u2⇒⋯⇒uk=v{\displaystyle u=u_{1}\Rightarrow u_{2}\Rightarrow \cdots \Rightarrow u_{k}=v}. This relation is denotedu⇒∗v{\displaystyle u~{\stackrel {*}{\Rightarrow }}~v}, oru⇒⇒v{\displaystyle u\Rightarrow \Rightarrow v}in some textbooks. Ifk≥2{\displaystyle k\geq 2}, the relationu⇒+v{\displaystyle u~{\stackrel {+}{\Rightarrow }}~v}holds. In other words,(⇒∗){\displaystyle ({\stackrel {*}{\Rightarrow }})}and(⇒+){\displaystyle ({\stackrel {+}{\Rightarrow }})}are thereflexive transitive closure(allowing a string to yield itself) and thetransitive closure(requiring at least one step) of(⇒){\displaystyle (\Rightarrow )}, respectively. The language of a grammarG=(V,Σ,R,S){\displaystyle G=(V,\Sigma ,R,S)}is the set of all terminal-symbol strings derivable from the start symbol. A languageLis said to be a context-free language (CFL), if there exists a CFGG, such thatL=L(G){\displaystyle L=L(G)}. Non-deterministic pushdown automatarecognize exactly the context-free languages. The grammarG=({S},{a,b},P,S){\displaystyle G=(\{S\},\{\mathrm {a} ,\mathrm {b} \},P,S)}, with productions is context-free. It is not proper since it includes anε-production. A typical derivation in this grammar is This makes it clear thatL(G)={wwR:w∈{a,b}∗}{\displaystyle L(G)=\{ww^{R}:w\in \{a,b\}^{*}\}}. The language is context-free; however, it can be proved that it is notregular. If the productions are added, a context-free grammar for the set of allpalindromesover the alphabet{a, b}is obtained.[8] The canonical example of a context-free grammar is parenthesis matching, which is representative of the general case. There are two terminal symbols(and)and one nonterminal symbolS. The production rules are The first rule allows theSsymbol to multiply; the second rule allows theSsymbol to become enclosed by matching parentheses; and the third rule terminates the recursion.[9] A second canonical example is two different kinds of matching nested parentheses, described by the productions: with terminal symbols[,],(,)and nonterminalS. The following sequence can be derived in that grammar: In a context-free grammar, we can pair up characters the way we do withbrackets. 
The simplest example: This grammar generates the language{anbn:n≥ 1}, which is notregular(according to thepumping lemma for regular languages). The special characterεstands for the empty string. By changing the above grammar to we obtain a grammar generating the language{anbn:n≥ 0}instead. This differs only in that it contains the empty string while the original grammar did not. A context-free grammar for the language consisting of all strings over {a, b} containing an unequal number ofas andbs: Here, the nonterminalTcan generate all strings with more as thanbs, the nonterminalUgenerates all strings with morebs thanas and the nonterminalVgenerates all strings with an equal number ofas andbs. Omitting the third alternative in the rules forTandUdoes not restrict the grammar's language. Another example of a non-regular language is{bnamb2n:n≥0,m≥0}{\displaystyle \{{\text{b}}^{n}{\text{a}}^{m}{\text{b}}^{2n}:n\geq 0,m\geq 0\}}. It is context-free as it can be generated by the following context-free grammar: Theformation rulesfor the terms and formulas of formal logic fit the definition of context-free grammar, except that the set of symbols may be infinite and there may be more than one start symbol. In contrast to well-formed nested parentheses and square brackets in the previous section, there is no context-free grammar for generating all sequences of two different types of parentheses, each separately balanceddisregarding the other, where the two types need not nest inside one another, for example: or The fact that this language is not context free can be proven usingpumping lemma for context-free languagesand a proof by contradiction, observing that all words of the form(n[n)n]n{\displaystyle {(}^{n}{[}^{n}{)}^{n}{]}^{n}}should belong to the language. This language belongs instead to a more general class and can be described by aconjunctive grammar, which in turn also includes other non-context-free languages, such as the language of all words of the formanbncn. Everyregular grammaris context-free, but not all context-free grammars are regular.[10]The following context-free grammar, for example, is also regular. The terminals here areaandb, while the only nonterminal isS. The language described is all nonempty strings ofas andbs that end ina. This grammar isregular: no rule has more than one nonterminal in its right-hand side, and each of these nonterminals is at the same end of the right-hand side. Every regular grammar corresponds directly to anondeterministic finite automaton, so we know that this is aregular language. Using vertical bars, the grammar above can be described more tersely as follows: Aderivationof a string for a grammar is a sequence of grammar rule applications that transform the start symbol into the string. A derivation proves that the string belongs to the grammar's language. A derivation is fully determined by giving, for each step: For clarity, the intermediate string is usually given as well. For instance, with the grammar: the string can be derived from the start symbolSwith the following derivation: Often, a strategy is followed that deterministically chooses the next nonterminal to rewrite: Given such a strategy, a derivation is completely determined by the sequence of rules applied. 
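As a concrete sketch of such a strategy, the following Python fragment rewrites the leftmost nonterminal at each step; it uses the expression grammar S → S + S | 1 | a and the string "1 + 1 + a" referred to in the next paragraphs (the encoding of rules as numbered pairs is an assumption made here for illustration, not part of the article):

```python
def leftmost_step(form, rule, nonterminals):
    """Apply one production (lhs, rhs) to the leftmost nonterminal of a
    sentential form, given as a list of symbols."""
    lhs, rhs = rule
    for i, sym in enumerate(form):
        if sym in nonterminals:
            if sym != lhs:
                raise ValueError(f"leftmost nonterminal is {sym}, not {lhs}")
            return form[:i] + list(rhs) + form[i + 1:]
    raise ValueError("no nonterminal left to rewrite")

# Expression grammar:  (1) S -> S + S   (2) S -> 1   (3) S -> a
rules = {1: ("S", ["S", "+", "S"]), 2: ("S", ["1"]), 3: ("S", ["a"])}

form = ["S"]
for r in [1, 1, 2, 2, 3]:          # one possible leftmost derivation of "1 + 1 + a"
    form = leftmost_step(form, rules[r], {"S"})
    print(" ".join(form))          # prints the intermediate sentential forms
```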
For instance, one leftmost derivation of the same string is which can be summarized as One rightmost derivation is: which can be summarized as The distinction between leftmost derivation and rightmost derivation is important because in mostparsersthe transformation of the input is defined by giving a piece of code for every grammar rule that is executed whenever the rule is applied. Therefore, it is important to know whether the parser determines a leftmost or a rightmost derivation because this determines the order in which the pieces of code will be executed. See for an exampleLL parsersandLR parsers. A derivation also imposes in some sense a hierarchical structure on the string that is derived. For example, if the string "1 + 1 + a" is derived according to the leftmost derivation outlined above, the structure of the string would be: where{...}Sdenotes a substring recognized as belonging toS. This hierarchy can also be seen as a tree: This tree is called aparse treeor "concrete syntax tree" of the string, by contrast with theabstract syntax tree. In this case the presented leftmost and the rightmost derivations define the same parse tree; however, there is another rightmost derivation of the same string which defines a string with a different structure and a different parse tree: Note however that both parse trees can be obtained by both leftmost and rightmost derivations. For example, the last tree can be obtained with the leftmost derivation as follows: If a string in the language of the grammar has more than one parsing tree, then the grammar is said to be anambiguous grammar. Such grammars are usually hard to parse because the parser cannot always decide which grammar rule it has to apply. Usually, ambiguity is a feature of the grammar, not the language, and an unambiguous grammar can be found that generates the same context-free language. However, there are certain languages that can only be generated by ambiguous grammars; such languages are calledinherently ambiguous languages. Every context-free grammar with noε-production has an equivalent grammar inChomsky normal form, and a grammar inGreibach normal form. "Equivalent" here means that the two grammars generate the same language. The especially simple form of production rules in Chomsky normal form grammars has both theoretical and practical implications. For instance, given a context-free grammar, one can use the Chomsky normal form to construct apolynomial-timealgorithm that decides whether a given string is in the language represented by that grammar or not (theCYK algorithm). Context-free languages areclosedunder the various operations, that is, if the languagesKandLare context-free, so is the result of the following operations: They are not closed under general intersection (hence neither undercomplementation) and set difference.[15] The following are some decidable problems about context-free grammars. The parsing problem, checking whether a given word belongs to the language given by a context-free grammar, is decidable, using one of the general-purpose parsing algorithms: Context-free parsing forChomsky normal formgrammars was shown byLeslie G. 
Valiantto be reducible to Booleanmatrix multiplication, thus inheriting its complexity upper bound ofO(n2.3728639).[16][17][b]Conversely,Lillian Leehas shownO(n3−ε) Boolean matrix multiplication to be reducible toO(n3−3ε) CFG parsing, thus establishing some kind of lower bound for the latter.[18] A nonterminal symbolX{\displaystyle X}is calledproductive, orgenerating, if there is a derivationX⇒∗w{\displaystyle X~{\stackrel {*}{\Rightarrow }}~w}for some stringw{\displaystyle w}of terminal symbols.X{\displaystyle X}is calledreachableif there is a derivationS⇒∗αXβ{\displaystyle S~{\stackrel {*}{\Rightarrow }}~\alpha X\beta }for some stringsα,β{\displaystyle \alpha ,\beta }of nonterminal and terminal symbols from the start symbol.X{\displaystyle X}is calleduselessif it is unreachable or unproductive.X{\displaystyle X}is callednullableif there is a derivationX⇒∗ε{\displaystyle X~{\stackrel {*}{\Rightarrow }}~\varepsilon }. A ruleX→ε{\displaystyle X\rightarrow \varepsilon }is called anε-production. A derivationX⇒+X{\displaystyle X~{\stackrel {+}{\Rightarrow }}~X}is called acycle. Algorithms are known to eliminate from a given grammar, without changing its generated language, In particular, an alternative containing a useless nonterminal symbol can be deleted from the right-hand side of a rule. Such rules and alternatives are calleduseless.[24] In the depicted example grammar, the nonterminalDis unreachable, andEis unproductive, whileC→Ccauses a cycle. Hence, omitting the last three rules does not change the language generated by the grammar, nor does omitting the alternatives "|Cc |Ee" from the right-hand side of the rule forS. A context-free grammar is said to beproperif it has neither useless symbols norε-productions nor cycles.[25]Combining the above algorithms, every context-free grammar not generatingεcan be transformed into aweakly equivalentproper one. It is decidable whether a givengrammaris aregular grammar,[f]as well as whether it is anLL(k) grammarfor a givenk≥ 0.[26]: 233Ifkis not given, the latter problem is undecidable.[26]: 252 Given a context-free grammar, it is not decidable whether its language is regular,[27]nor whether it is an LL(k) language for a givenk.[26]: 254 There are algorithms to decide whether the language of a given context-free grammar is empty, as well as whether it is finite.[28] Some questions that are undecidable for wider classes of grammars become decidable for context-free grammars; e.g. theemptiness problem(whether the grammar generates any terminal strings at all), is undecidable forcontext-sensitive grammars, but decidable for context-free grammars. However, many problems areundecidableeven for context-free grammars; the most prominent ones are handled in the following. Given a CFG, does it generate the language of all strings over the alphabet of terminal symbols used in its rules?[29][30] A reduction can be demonstrated to this problem from the well-known undecidable problem of determining whether aTuring machineaccepts a particular input (thehalting problem). The reduction uses the concept of acomputation history, a string describing an entire computation of aTuring machine. A CFG can be constructed that generates all strings that are not accepting computation histories for a particular Turing machine on a particular input, and thus it will accept all strings only if the machine does not accept that input. 
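Returning to the cleanup algorithms mentioned earlier in this section, the nullable and productive nonterminals can be found by a straightforward fixed-point iteration. A sketch, assuming a grammar is stored as a mapping from nonterminals to lists of alternatives (this representation is our own choice, not prescribed by the article):

```python
def nullable_and_productive(rules):
    """Compute the nullable and productive nonterminals of a grammar.
    `rules` maps each nonterminal to a list of alternatives; an alternative is a
    list of symbols, and symbols not occurring as keys are terminals."""
    nonterminals = set(rules)
    nullable, productive = set(), set()
    changed = True
    while changed:
        changed = False
        for lhs, alternatives in rules.items():
            for alt in alternatives:
                if lhs not in nullable and all(s in nullable for s in alt):
                    nullable.add(lhs)
                    changed = True
                if lhs not in productive and all(
                    s in productive or s not in nonterminals for s in alt
                ):
                    productive.add(lhs)
                    changed = True
    return nullable, productive

# Example: S -> A b | B,  A -> ε,  B -> B c   (A is nullable; B is unproductive)
rules = {"S": [["A", "b"], ["B"]], "A": [[]], "B": [["B", "c"]]}
print(nullable_and_productive(rules))   # nullable: {'A'}; productive: {'A', 'S'}
```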
Given two CFGs, do they generate the same language?[30][31] The undecidability of this problem is a direct consequence of the previous: it is impossible to even decide whether a CFG is equivalent to the trivial CFG defining the language of all strings. Given two CFGs, can the first one generate all strings that the second one can generate?[30][31] If this problem was decidable, then language equality could be decided too: two CFGsG1{\displaystyle G_{1}}andG2{\displaystyle G_{2}}generate the same language ifL(G1){\displaystyle L(G_{1})}is a subset ofL(G2){\displaystyle L(G_{2})}andL(G2){\displaystyle L(G_{2})}is a subset ofL(G1){\displaystyle L(G_{1})}. UsingGreibach's theorem, it can be shown that the two following problems are undecidable: Given a CFG, is itambiguous? The undecidability of this problem follows from the fact that if an algorithm to determine ambiguity existed, thePost correspondence problemcould be decided, which is known to be undecidable.[32]This may be proved byOgden's lemma.[33] Given two CFGs, is there any string derivable from both grammars? If this problem was decidable, the undecidablePost correspondence problem(PCP) could be decided, too: given stringsα1,…,αN,β1,…,βN{\displaystyle \alpha _{1},\ldots ,\alpha _{N},\beta _{1},\ldots ,\beta _{N}}over some alphabet{a1,…,ak}{\displaystyle \{a_{1},\ldots ,a_{k}\}}, let the grammar⁠G1{\displaystyle G_{1}}⁠consist of the rule whereβirev{\displaystyle \beta _{i}^{rev}}denotes thereversedstringβi{\displaystyle \beta _{i}}andb{\displaystyle b}does not occur among theai{\displaystyle a_{i}}; and let grammar⁠G2{\displaystyle G_{2}}⁠consist of the rule Then the PCP instance given byα1,…,αN,β1,…,βN{\displaystyle \alpha _{1},\ldots ,\alpha _{N},\beta _{1},\ldots ,\beta _{N}}has a solution if and only if⁠L(G1){\displaystyle L(G_{1})}⁠and⁠L(G2){\displaystyle L(G_{2})}⁠share a derivable string. The left of the string (before theb{\displaystyle b}) will represent the top of the solution for the PCP instance while the right side will be the bottom in reverse. An obvious way to extend the context-free grammar formalism is to allow nonterminals to have arguments, the values of which are passed along within the rules. This allows natural language features such asagreementandreference, and programming language analogs such as the correct use and definition of identifiers, to be expressed in a natural way. E.g. we can now easily express that in English sentences, the subject and verb must agree in number. In computer science, examples of this approach includeaffix grammars,attribute grammars,indexed grammars, and Van Wijngaardentwo-level grammars. Similar extensions exist in linguistics. Anextended context-free grammar(orregular right part grammar) is one in which the right-hand side of the production rules is allowed to be aregular expressionover the grammar's terminals and nonterminals. Extended context-free grammars describe exactly the context-free languages.[34] Another extension is to allow additional terminal symbols to appear at the left-hand side of rules, constraining their application. This produces the formalism ofcontext-sensitive grammars. There are a number of important subclasses of the context-free grammars: LR parsing extends LL parsing to support a larger range of grammars; in turn,generalized LR parsingextends LR parsing to support arbitrary context-free grammars. 
On LL grammars and LR grammars, it essentially performs LL parsing and LR parsing, respectively, while onnondeterministic grammars, it is as efficient as can be expected. Although GLR parsing was developed in the 1980s, many new language definitions andparser generatorscontinue to be based on LL, LALR or LR parsing up to the present day. Chomskyinitially hoped to overcome the limitations of context-free grammars by addingtransformation rules.[4] Such rules are another standard device in traditional linguistics; e.g.passivizationin English. Much ofgenerative grammarhas been devoted to finding ways of refining the descriptive mechanisms of phrase-structure grammar and transformation rules such that exactly the kinds of things can be expressed that natural language actually allows. Allowing arbitrary transformations does not meet that goal: they are much too powerful, beingTuring completeunless significant restrictions are added (e.g. no transformations that introduce and then rewrite symbols in a context-free fashion). Chomsky's general position regarding the non-context-freeness of natural language has held up since then,[35]although his specific examples regarding the inadequacy of context-free grammars in terms of their weak generative capacity were later disproved.[36]Gerald GazdarandGeoffrey Pullumhave argued that despite a few non-context-free constructions in natural language (such ascross-serial dependenciesinSwiss German[35]andreduplicationinBambara[37]), the vast majority of forms in natural language are indeed context-free.[36]
https://en.wikipedia.org/wiki/Context-free_grammar
Informal languagetheory, acontext-free grammar,G, is said to be inChomsky normal form(first described byNoam Chomsky)[1]if all of itsproduction rulesare of the form:[2][3] whereA,B, andCarenonterminal symbols, the letterais aterminal symbol(a symbol that represents a constant value),Sis the start symbol, and ε denotes theempty string. Also, neitherBnorCmay be thestart symbol, and the third production rule can only appear if ε is inL(G), the language produced by the context-free grammarG.[4]: 92–93, 106 Every grammar in Chomsky normal form is context-free, and conversely, every context-free grammar can be transformed into anequivalentone[note 1]which is in Chomsky normal form and has a size no larger than the square of the original grammar's size. To convert a grammar to Chomsky normal form, a sequence of simple transformations is applied in a certain order; this is described in most textbooks onautomata theory.[4]: 87–94[5][6][7]The presentation here follows Hopcroft, Ullman (1979), but is adapted to use the transformation names from Lange, Leiß (2009).[8][note 2]Each of the following transformations establishes one of the properties required for Chomsky normal form. Introduce a new start symbolS0, and a new rule whereSis the previous start symbol. This does not change the grammar's produced language, andS0will not occur on any rule's right-hand side. To eliminate each rule with a terminal symbolabeing not the only symbol on the right-hand side, introduce, for every such terminal, a new nonterminal symbolNa, and a new rule Change every rule to If several terminal symbols occur on the right-hand side, simultaneously replace each of them by its associated nonterminal symbol. This does not change the grammar's produced language.[4]: 92 Replace each rule with more than 2 nonterminalsX1,...,Xnby rules whereAiare new nonterminal symbols. Again, this does not change the grammar's produced language.[4]: 93 An ε-rule is a rule of the form whereAis notS0, the grammar's start symbol. To eliminate all rules of this form, first determine the set of all nonterminals that derive ε. Hopcroft and Ullman (1979) call such nonterminalsnullable, and compute them as follows: Obtain an intermediate grammar by replacing each rule by all versions with some nullableXiomitted. By deleting in this grammar each ε-rule, unless its left-hand side is the start symbol, the transformed grammar is obtained.[4]: 90 For example, in the following grammar, with start symbolS0, the nonterminalA, and hence alsoB, is nullable, while neitherCnorS0is. Hence the following intermediate grammar is obtained:[note 3] In this grammar, all ε-rules have been "inlinedat the call site".[note 4]In the next step, they can hence be deleted, yielding the grammar: This grammar produces the same language as the original example grammar, viz. {ab,aba,abaa,abab,abac,abb,abc,b,ba,baa,bab,bac,bb,bc,c}, but has no ε-rules. A unit rule is a rule of the form whereA,Bare nonterminal symbols. To remove it, for each rule whereX1...Xnis a string of nonterminals and terminals, add rule unless this is a unit rule which has already been (or is being) removed. The skipping of nonterminal symbolBin the resulting grammar is possible due toBbeing a member of the unit closure of nonterminal symbolA.[9] When choosing the order in which the above transformations are to be applied, it has to be considered that some transformations may destroy the result achieved by other ones. For example,STARTwill re-introduce a unit rule if it is applied afterUNIT. 
The table shows which orderings are admitted. Moreover, the worst-case bloat in grammar size[note 5]depends on the transformation order. Using |G| to denote the size of the original grammarG, the size blow-up in the worst case may range from |G|2to 22 |G|, depending on the transformation algorithm used.[8]: 7The blow-up in grammar size depends on the order betweenDELandBIN. It may be exponential whenDELis done first, but is linear otherwise.UNITcan incur a quadratic blow-up in the size of the grammar.[8]: 5The orderingsSTART,TERM,BIN,DEL,UNITandSTART,BIN,DEL,UNIT,TERMlead to the least (i.e. quadratic) blow-up. The following grammar, with start symbolExpr, describes a simplified version of the set of all syntactical valid arithmetic expressions in programming languages likeCorAlgol60. Bothnumberandvariableare considered terminal symbols here for simplicity, since in acompiler front endtheir internal structure is usually not considered by theparser. The terminal symbol "^" denotedexponentiationin Algol60. In step "START" of theaboveconversion algorithm, just a ruleS0→Expris added to the grammar. After step "TERM", the grammar looks like this: After step "BIN", the following grammar is obtained: Since there are no ε-rules, step "DEL" does not change the grammar. After step "UNIT", the following grammar is obtained, which is in Chomsky normal form: TheNaintroduced in step "TERM" arePowOp,Open, andClose. TheAiintroduced in step "BIN" areAddOp_Term,MulOp_Factor,PowOp_Primary, andExpr_Close. Another way[4]: 92[10]to define the Chomsky normal form is: Aformal grammaris inChomsky reduced formif all of its production rules are of the form: whereA{\displaystyle A},B{\displaystyle B}andC{\displaystyle C}are nonterminal symbols, anda{\displaystyle a}is aterminal symbol. When using this definition,B{\displaystyle B}orC{\displaystyle C}may be the start symbol. Only those context-free grammars which do not generate theempty stringcan be transformed into Chomsky reduced form. In a letter where he proposed a termBackus–Naur form(BNF),Donald E. Knuthimplied a BNF "syntax in which all definitions have such a form may be said to be in 'Floyd Normal Form'", where⟨A⟩{\displaystyle \langle A\rangle },⟨B⟩{\displaystyle \langle B\rangle }and⟨C⟩{\displaystyle \langle C\rangle }are nonterminal symbols, anda{\displaystyle a}is a terminal symbol, becauseRobert W. Floydfound any BNF syntax can be converted to the above one in 1961.[11]But he withdrew this term, "since doubtless many people have independently used this simple fact in their own work, and the point is only incidental to the main considerations of Floyd's note."[12]While Floyd's note cites Chomsky's original 1959 article, Knuth's letter does not. Besides its theoretical significance, CNF conversion is used in some algorithms as a preprocessing step, e.g., theCYK algorithm, abottom-up parsingfor context-free grammars, and its variant probabilistic CKY.[13]
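Because a grammar in Chomsky normal form has only rules of the forms A → BC and A → a, the CYK membership test mentioned above reduces to a short dynamic program. The sketch below is illustrative; the grammar encoding and the CNF grammar for {aⁿbⁿ : n ≥ 1} used in the demo are our own choices:

```python
def cyk(word, terminal_rules, binary_rules, start):
    """Membership test for a word (sequence of terminals) against a grammar in
    Chomsky normal form.
    terminal_rules: iterable of (A, a) pairs for rules A -> a
    binary_rules:   iterable of (A, B, C) triples for rules A -> B C"""
    n = len(word)
    if n == 0:
        return False  # the empty word is only covered by the special S -> ε rule
    table = {}        # table[(i, l)] = set of nonterminals deriving word[i:i + l]
    for i, a in enumerate(word):
        table[(i, 1)] = {A for (A, t) in terminal_rules if t == a}
    for l in range(2, n + 1):             # span length
        for i in range(n - l + 1):        # span start
            cell = set()
            for split in range(1, l):     # word[i:i+split] + word[i+split:i+l]
                left, right = table[(i, split)], table[(i + split, l - split)]
                cell |= {A for (A, B, C) in binary_rules if B in left and C in right}
            table[(i, l)] = cell
    return start in table[(0, n)]

# CNF grammar for { a^n b^n : n >= 1 }:  S -> A B | A T,  T -> S B,  A -> a,  B -> b
terminals = [("A", "a"), ("B", "b")]
binaries = [("S", "A", "B"), ("S", "A", "T"), ("T", "S", "B")]
print(cyk("aabb", terminals, binaries, "S"))   # True
print(cyk("aab", terminals, binaries, "S"))    # False
```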
https://en.wikipedia.org/wiki/Chomsky_normal_form
Inlogic,mathematics,computer science, andlinguistics, aformal languageis a set ofstringswhose symbols are taken from a set called "alphabet". The alphabet of a formal language consists of symbols that concatenate into strings (also called "words").[1]Words that belong to a particular formal language are sometimes calledwell-formed words. A formal language is often defined by means of aformal grammarsuch as aregular grammarorcontext-free grammar. In computer science, formal languages are used, among others, as the basis for defining the grammar ofprogramming languagesand formalized versions of subsets of natural languages, in which the words of the language represent concepts that are associated with meanings orsemantics. Incomputational complexity theory,decision problemsare typically defined as formal languages, andcomplexity classesare defined as the sets of the formal languages that can beparsed by machineswith limited computational power. Inlogicand thefoundations of mathematics, formal languages are used to represent the syntax ofaxiomatic systems, andmathematical formalismis the philosophy that all of mathematics can be reduced to the syntactic manipulation of formal languages in this way. The field offormal language theorystudies primarily the purelysyntacticaspects of such languages—that is, their internal structural patterns. Formal language theory sprang out of linguistics, as a way of understanding the syntactic regularities ofnatural languages. In the 17th century,Gottfried Leibnizimagined and described thecharacteristica universalis, a universal and formal language which utilisedpictographs. Later,Carl Friedrich Gaussinvestigated the problem ofGauss codes.[2] Gottlob Fregeattempted to realize Leibniz's ideas, through a notational system first outlined inBegriffsschrift(1879) and more fully developed in his 2-volume Grundgesetze der Arithmetik (1893/1903).[3]This described a "formal language of pure language."[4] In the first half of the 20th century, several developments were made with relevance to formal languages.Axel Thuepublished four papers relating to words and language between 1906 and 1914. The last of these introduced whatEmil Postlater termed 'Thue Systems', and gave an early example of anundecidable problem.[5]Post would later use this paper as the basis for a 1947 proof "that the word problem for semigroups was recursively insoluble",[6]and later devised thecanonical systemfor the creation of formal languages. In 1907,Leonardo Torres Quevedointroduced a formal language for the description of mechanical drawings (mechanical devices), inVienna. He published "Sobre un sistema de notaciones y símbolos destinados a facilitar la descripción de las máquinas" ("On a system of notations and symbols intended to facilitate the description of machines").[7]Heinz Zemanekrated it as an equivalent to aprogramming languagefor the numerical control of machine tools.[8] Noam Chomskydevised an abstract representation of formal and natural languages, known as theChomsky hierarchy.[9]In 1959John Backusdeveloped the Backus-Naur form to describe the syntax of a high level programming language, following his work in the creation ofFORTRAN.[10]Peter Naurwas the secretary/editor for the ALGOL60 Report in which he usedBackus–Naur formto describe the Formal part of ALGOL60. Analphabet, in the context of formal languages, can be anyset; its elements are calledletters. 
An alphabet may contain aninfinitenumber of elements;[note 1]however, most definitions in formal language theory specify alphabets with a finite number of elements, and many results apply only to them. It often makes sense to use analphabetin the usual sense of the word, or more generally any finitecharacter encodingsuch asASCIIorUnicode. Awordover an alphabet can be any finite sequence (i.e.,string) of letters. The set of all words over an alphabet Σ is usually denoted by Σ*(using theKleene star). The length of a word is the number of letters it is composed of. For any alphabet, there is only one word of length 0, theempty word, which is often denoted by e, ε, λ or even Λ. Byconcatenationone can combine two words to form a new word, whose length is the sum of the lengths of the original words. The result of concatenating a word with the empty word is the original word. In some applications, especially inlogic, the alphabet is also known as thevocabularyand words are known asformulasorsentences; this breaks the letter/word metaphor and replaces it by a word/sentence metaphor. Given a non-empty setΣ{\displaystyle \Sigma }, aformal languageL{\displaystyle L}overΣ{\displaystyle \Sigma }is asubsetofΣ∗{\displaystyle \Sigma ^{*}}, which is the set ofall possible finite-length words overΣ{\displaystyle \Sigma }. We call the setΣ{\displaystyle \Sigma }the alphabet ofL{\displaystyle L}. On the other hand, given a formal languageL{\displaystyle L}overΣ{\displaystyle \Sigma }, a wordw∈Σ∗{\displaystyle w\in \Sigma ^{*}}iswell-formedifw∈L{\displaystyle w\in L}. Similarly, an expressionE⊆Σ∗{\displaystyle E\subseteq \Sigma ^{*}}iswell-formedifE⊆L{\displaystyle E\subseteq L}. Sometimes, a formal languageL{\displaystyle L}overΣ{\displaystyle \Sigma }has a set of clear rules and constraints for the creation of all possible well-formed words fromΣ∗{\displaystyle \Sigma ^{*}}. In computer science and mathematics, which do not usually deal withnatural languages, the adjective "formal" is often omitted as redundant. On the other hand, we can just say "a formal languageL{\displaystyle L}" when its alphabetΣ{\displaystyle \Sigma }is clear in the context. While formal language theory usually concerns itself with formal languages that are described by some syntactic rules, the actual definition of the concept "formal language" is only as above: a (possibly infinite) set of finite-length strings composed from a given alphabet, no more and no less. In practice, there are many languages that can be described by rules, such asregular languagesorcontext-free languages. The notion of aformal grammarmay be closer to the intuitive concept of a "language", one described by syntactic rules. By an abuse of the definition, a particular formal language is often thought of as being accompanied with a formal grammar that describes it. The following rules describe a formal languageLover the alphabet Σ = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, +, =}: Under these rules, the string "23+4=555" is inL, but the string "=234=+" is not. This formal language expressesnatural numbers, well-formed additions, and well-formed addition equalities, but it expresses only what they look like (theirsyntax), not what they mean (semantics). For instance, nowhere in these rules is there any indication that "0" means the number zero, "+" means addition, "23+4=555" is false, etc. For finite languages, one can explicitly enumerate all well-formed words. For example, we can describe a languageLas justL= {a, b, ab, cba}. 
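The basic notions of this section, a finite slice of Σ*, the empty word, concatenation, and membership in a finite language, can be illustrated in a few lines of Python (the function name and the length bound are arbitrary choices made here):

```python
from itertools import product

sigma = ("a", "b")                       # a two-letter alphabet

def words_up_to(alphabet, max_len):
    """All words over `alphabet` of length 0..max_len -- a finite slice of Σ*."""
    for length in range(max_len + 1):
        for letters in product(alphabet, repeat=length):
            yield "".join(letters)       # length 0 yields the empty word ""

print(list(words_up_to(sigma, 2)))       # ['', 'a', 'b', 'aa', 'ab', 'ba', 'bb']

# A finite formal language can simply be listed:
L = {"a", "b", "ab", "cba"}
print("ab" in L, "ba" in L)              # True False

# Concatenation of two words; the empty word is the identity.
print("ab" + "ba", "ab" + "")            # abba ab
```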
Thedegeneratecase of this construction is theempty language, which contains no words at all (L=∅). However, even over a finite (non-empty) alphabet such as Σ = {a, b} there are an infinite number of finite-length words that can potentially be expressed: "a", "abb", "ababba", "aaababbbbaab", .... Therefore, formal languages are typically infinite, and describing an infinite formal language is not as simple as writingL= {a, b, ab, cba}. Here are some examples of formal languages: Formal languages are used as tools in multiple disciplines. However, formal language theory rarely concerns itself with particular languages (except as examples), but is mainly concerned with the study of various types of formalisms to describe languages. For instance, a language can be given as Typical questions asked about such formalisms include: Surprisingly often, the answer to these decision problems is "it cannot be done at all", or "it is extremely expensive" (with a characterization of how expensive). Therefore, formal language theory is a major application area ofcomputability theoryandcomplexity theory. Formal languages may be classified in theChomsky hierarchybased on the expressive power of their generative grammar as well as the complexity of their recognizingautomaton.Context-free grammarsandregular grammarsprovide a good compromise between expressivity and ease ofparsing, and are widely used in practical applications. Certain operations on languages are common. This includes the standard set operations, such as union, intersection, and complement. Another class of operation is the element-wise application of string operations. Examples: supposeL1{\displaystyle L_{1}}andL2{\displaystyle L_{2}}are languages over some common alphabetΣ{\displaystyle \Sigma }. Suchstring operationsare used to investigateclosure propertiesof classes of languages. A class of languages is closed under a particular operation when the operation, applied to languages in the class, always produces a language in the same class again. For instance, thecontext-free languagesare known to be closed under union, concatenation, and intersection withregular languages, but not closed under intersection or complement. The theory oftriosandabstract families of languagesstudies the most common closure properties of language families in their own right.[11] A compiler usually has two distinct components. Alexical analyzer, sometimes generated by a tool likelex, identifies the tokens of the programming language grammar, e.g.identifiersorkeywords, numeric and string literals, punctuation and operator symbols, which are themselves specified by a simpler formal language, usually by means ofregular expressions. At the most basic conceptual level, aparser, sometimes generated by aparser generatorlikeyacc, attempts to decide if the source program is syntactically valid, that is if it is well formed with respect to the programming language grammar for which the compiler was built. Of course, compilers do more than just parse the source code – they usually translate it into some executable format. Because of this, a parser usually outputs more than a yes/no answer, typically anabstract syntax tree. This is used by subsequent stages of the compiler to eventually generate anexecutablecontainingmachine codethat runs directly on the hardware, or someintermediate codethat requires avirtual machineto execute. Inmathematical logic, aformal theoryis a set ofsentencesexpressed in a formal language. 
Aformal system(also called alogical calculus, or alogical system) consists of a formal language together with adeductive apparatus(also called adeductive system). The deductive apparatus may consist of a set oftransformation rules, which may be interpreted as valid rules of inference, or a set ofaxioms, or have both. A formal system is used toderiveone expression from one or more other expressions. Although a formal language can be identified with its formulas, a formal system cannot be likewise identified by its theorems. Two formal systemsFS{\displaystyle {\mathcal {FS}}}andFS′{\displaystyle {\mathcal {FS'}}}may have all the same theorems and yet differ in some significant proof-theoretic way (a formula A may be a syntactic consequence of a formula B in one but not another for instance). Aformal prooforderivationis a finite sequence of well-formed formulas (which may be interpreted as sentences, orpropositions) each of which is an axiom or follows from the preceding formulas in the sequence by arule of inference. The last sentence in the sequence is a theorem of a formal system. Formal proofs are useful because their theorems can be interpreted as true propositions. Formal languages are entirely syntactic in nature, but may be givensemanticsthat give meaning to the elements of the language. For instance, in mathematicallogic, the set of possible formulas of a particular logic is a formal language, and aninterpretationassigns a meaning to each of the formulas—usually, atruth value. The study of interpretations of formal languages is calledformal semantics. In mathematical logic, this is often done in terms ofmodel theory. In model theory, the terms that occur in a formula are interpreted as objects withinmathematical structures, and fixed compositional interpretation rules determine how the truth value of the formula can be derived from the interpretation of its terms; amodelfor a formula is an interpretation of terms such that the formula becomes true.
https://en.wikipedia.org/wiki/Formal_language
Inneuropsychology,linguistics, andphilosophy of language, anatural languageorordinary languageis anylanguagethat occurs naturally in ahumancommunity by a process of use, repetition, andchange. It can take different forms, typically either aspoken languageor asign language. Natural languages are distinguished fromconstructedandformal languagessuch asthose used to program computersor to studylogic.[1] Natural language can be broadly defined as different from Allvarietiesofworld languagesare natural languages, including those that are associated withlinguistic prescriptivismorlanguage regulation. (Nonstandard dialectscan be viewed as awild typein comparison withstandard languages.) Anofficial languagewith a regulating academy such asStandard French, overseen by theAcadémie Française, is classified as a natural language (e.g. in the field ofnatural language processing), as its prescriptive aspects do not make it constructed enough to be a constructed language or controlled enough to be acontrolled natural language. Controlled natural languages are subsets of natural languages whose grammars and dictionaries have been restricted in order to reduceambiguityand complexity. This may be accomplished by decreasing usage ofsuperlativeoradverbialforms, orirregular verbs. Typical purposes for developing and implementing a controlled natural language are to aid understanding by non-native speakers or to ease computer processing. An example of a widely-used controlled natural language isSimplified Technical English, which was originally developed foraerospaceandavionicsindustry manuals. Being constructed,International auxiliary languagessuch asEsperantoandInterlinguaare not considered natural languages, with the possible exception of true native speakers of such languages.[3]Natural languages evolve, through fluctuations in vocabulary and syntax, to incrementally improve human communication. In contrast, Esperanto was created by Polish ophthalmologistL. L. Zamenhofin the late 19th century. Some natural languages have become organically "standardized" through the synthesis of two or more pre-existing natural languages over a relatively short period of time through the development of apidgin, which is not considered a language, into a stablecreole language. A creole such asHaitian Creolehas its own grammar, vocabulary and literature. It is spoken by over 10 million people worldwide and is one of the two official languages of theRepublic of Haiti. As of 1996, there were 350 attested families with one or morenative speakers of Esperanto.Latino sine flexione, another international auxiliary language, is no longer widely spoken.
https://en.wikipedia.org/wiki/Natural_language
Ahidden Markov model(HMM) is aMarkov modelin which the observations are dependent on a latent (orhidden)Markov process(referred to asX{\displaystyle X}). An HMM requires that there be an observable processY{\displaystyle Y}whose outcomes depend on the outcomes ofX{\displaystyle X}in a known way. SinceX{\displaystyle X}cannot be observed directly, the goal is to learn about state ofX{\displaystyle X}by observingY{\displaystyle Y}. By definition of being a Markov model, an HMM has an additional requirement that the outcome ofY{\displaystyle Y}at timet=t0{\displaystyle t=t_{0}}must be "influenced" exclusively by the outcome ofX{\displaystyle X}att=t0{\displaystyle t=t_{0}}and that the outcomes ofX{\displaystyle X}andY{\displaystyle Y}att<t0{\displaystyle t<t_{0}}must be conditionally independent ofY{\displaystyle Y}att=t0{\displaystyle t=t_{0}}givenX{\displaystyle X}at timet=t0{\displaystyle t=t_{0}}. Estimation of the parameters in an HMM can be performed usingmaximum likelihood estimation. For linear chain HMMs, theBaum–Welch algorithmcan be used to estimate parameters. Hidden Markov models are known for their applications tothermodynamics,statistical mechanics,physics,chemistry,economics,finance,signal processing,information theory,pattern recognition—such asspeech,[1]handwriting,gesture recognition,[2]part-of-speech tagging, musical score following,[3]partial discharges[4]andbioinformatics.[5][6] LetXn{\displaystyle X_{n}}andYn{\displaystyle Y_{n}}be discrete-timestochastic processesandn≥1{\displaystyle n\geq 1}. The pair(Xn,Yn){\displaystyle (X_{n},Y_{n})}is ahidden Markov modelif LetXt{\displaystyle X_{t}}andYt{\displaystyle Y_{t}}be continuous-time stochastic processes. The pair(Xt,Yt){\displaystyle (X_{t},Y_{t})}is ahidden Markov modelif The states of the processXn{\displaystyle X_{n}}(resp.Xt){\displaystyle X_{t})}are calledhidden states, andP⁡(Yn∈A∣Xn=xn){\displaystyle \operatorname {\mathbf {P} } {\bigl (}Y_{n}\in A\mid X_{n}=x_{n}{\bigr )}}(resp.P⁡(Yt∈A∣Xt∈Bt)){\displaystyle \operatorname {\mathbf {P} } {\bigl (}Y_{t}\in A\mid X_{t}\in B_{t}{\bigr )})}is calledemission probabilityoroutput probability. In its discrete form, a hidden Markov process can be visualized as a generalization of theurn problemwith replacement (where each item from the urn is returned to the original urn before the next step).[7]Consider this example: in a room that is not visible to an observer there is a genie. The room contains urns X1, X2, X3, ... each of which contains a known mix of balls, with each ball having a unique label y1, y2, y3, ... . The genie chooses an urn in that room and randomly draws a ball from that urn. It then puts the ball onto a conveyor belt, where the observer can observe the sequence of the balls but not the sequence of urns from which they were drawn. The genie has some procedure to choose urns; the choice of the urn for then-th ball depends only upon a random number and the choice of the urn for the(n− 1)-thball. The choice of urn does not directly depend on the urns chosen before this single previous urn; therefore, this is called aMarkov process. It can be described by the upper part of Figure 1. The Markov process cannot be observed, only the sequence of labeled balls, thus this arrangement is called ahidden Markov process. This is illustrated by the lower part of the diagram shown in Figure 1, where one can see that balls y1, y2, y3, y4 can be drawn at each state. 
Even if the observer knows the composition of the urns and has just observed a sequence of three balls,e.g.y1, y2 and y3 on the conveyor belt, the observer still cannot besurewhich urn (i.e., at which state) the genie has drawn the third ball from. However, the observer can work out other information, such as the likelihood that the third ball came from each of the urns. Consider two friends, Alice and Bob, who live far apart from each other and who talk together daily over the telephone about what they did that day. Bob is only interested in three activities: walking in the park, shopping, and cleaning his apartment. The choice of what to do is determined exclusively by the weather on a given day. Alice has no definite information about the weather, but she knows general trends. Based on what Bob tells her he did each day, Alice tries to guess what the weather must have been like. Alice believes that the weather operates as a discreteMarkov chain. There are two states, "Rainy" and "Sunny", but she cannot observe them directly, that is, they arehiddenfrom her. On each day, there is a certain chance that Bob will perform one of the following activities, depending on the weather: "walk", "shop", or "clean". Since Bob tells Alice about his activities, those are theobservations. The entire system is that of a hidden Markov model (HMM). Alice knows the general weather trends in the area, and what Bob likes to do on average. In other words, the parameters of the HMM are known. They can be represented as follows inPython: In this piece of code,start_probabilityrepresents Alice's belief about which state the HMM is in when Bob first calls her (all she knows is that it tends to be rainy on average). The particular probability distribution used here is not the equilibrium one, which is (given the transition probabilities) approximately{'Rainy': 0.57, 'Sunny': 0.43}. Thetransition_probabilityrepresents the change of the weather in the underlying Markov chain. In this example, there is only a 30% chance that tomorrow will be sunny if today is rainy. Theemission_probabilityrepresents how likely Bob is to perform a certain activity on each day. If it is rainy, there is a 50% chance that he is cleaning his apartment; if it is sunny, there is a 60% chance that he is outside for a walk. A similar example is further elaborated in theViterbi algorithmpage. The diagram below shows the general architecture of an instantiated HMM. Each oval shape represents a random variable that can adopt any of a number of values. The random variablex(t) is the hidden state at timet(with the model from the above diagram,x(t) ∈ {x1,x2,x3}). The random variabley(t) is the observation at timet(withy(t) ∈ {y1,y2,y3,y4}). The arrows in the diagram (often called atrellis diagram) denote conditional dependencies. From the diagram, it is clear that theconditional probability distributionof the hidden variablex(t) at timet, given the values of the hidden variablexat all times, dependsonlyon the value of the hidden variablex(t− 1); the values at timet− 2and before have no influence. This is called theMarkov property. Similarly, the value of the observed variabley(t) depends on only the value of the hidden variablex(t) (both at timet). In the standard type of hidden Markov model considered here, the state space of the hidden variables is discrete, while the observations themselves can either be discrete (typically generated from acategorical distribution) or continuous (typically from aGaussian distribution). 
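The Python representation referred to in the passage can be written out as follows. The prose fixes several of the numbers (a 30% rainy-to-sunny transition, a 50% chance of cleaning when it is rainy, a 60% chance of a walk when it is sunny, and a start distribution that favors rain but differs from the 0.57/0.43 equilibrium); the remaining entries below are plausible placeholders chosen to be consistent with those statements, not values taken from the text.

```python
states = ("Rainy", "Sunny")
observations = ("walk", "shop", "clean")

# Alice's belief about the weather when Bob first calls
# (favors rain, but is deliberately not the 0.57/0.43 equilibrium distribution).
start_probability = {"Rainy": 0.6, "Sunny": 0.4}

# Weather dynamics: e.g. only a 30% chance of a sunny day following a rainy one.
transition_probability = {
    "Rainy": {"Rainy": 0.7, "Sunny": 0.3},
    "Sunny": {"Rainy": 0.4, "Sunny": 0.6},
}

# How likely Bob is to do each activity in each kind of weather:
# 50% chance he cleans when it rains, 60% chance he walks when it is sunny.
emission_probability = {
    "Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
    "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1},
}
```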
The parameters of a hidden Markov model are of two types,transition probabilitiesandemission probabilities(also known asoutput probabilities). The transition probabilities control the way the hidden state at timetis chosen given the hidden state at timet−1{\displaystyle t-1}. The hidden state space is assumed to consist of one ofNpossible values, modelled as a categorical distribution. (See the section below on extensions for other possibilities.) This means that for each of theNpossible states that a hidden variable at timetcan be in, there is a transition probability from this state to each of theNpossible states of the hidden variable at timet+1{\displaystyle t+1}, for a total ofN2{\displaystyle N^{2}}transition probabilities. The set of transition probabilities for transitions from any given state must sum to 1. Thus, theN×N{\displaystyle N\times N}matrix of transition probabilities is aMarkov matrix. Because any transition probability can be determined once the others are known, there are a total ofN(N−1){\displaystyle N(N-1)}transition parameters. In addition, for each of theNpossible states, there is a set of emission probabilities governing the distribution of the observed variable at a particular time given the state of the hidden variable at that time. The size of this set depends on the nature of the observed variable. For example, if the observed variable is discrete withMpossible values, governed by acategorical distribution, there will beM−1{\displaystyle M-1}separate parameters, for a total ofN(M−1){\displaystyle N(M-1)}emission parameters over all hidden states. On the other hand, if the observed variable is anM-dimensional vector distributed according to an arbitrarymultivariate Gaussian distribution, there will beMparameters controlling themeansandM(M+1)2{\displaystyle {\frac {M(M+1)}{2}}}parameters controlling thecovariance matrix, for a total ofN(M+M(M+1)2)=NM(M+3)2=O(NM2){\displaystyle N\left(M+{\frac {M(M+1)}{2}}\right)={\frac {NM(M+3)}{2}}=O(NM^{2})}emission parameters. (In such a case, unless the value ofMis small, it may be more practical to restrict the nature of the covariances between individual elements of the observation vector, e.g. by assuming that the elements are independent of each other, or less restrictively, are independent of all but a fixed number of adjacent elements.) Severalinferenceproblems are associated with hidden Markov models, as outlined below. The task is to compute in a best way, given the parameters of the model, the probability of a particular output sequence. This requires summation over all possible state sequences: The probability of observing a sequence of lengthLis given by where the sum runs over all possible hidden-node sequences Applying the principle ofdynamic programming, this problem, too, can be handled efficiently using theforward algorithm. A number of related tasks ask about the probability of one or more of the latent variables, given the model's parameters and a sequence of observationsy(1),…,y(t){\displaystyle y(1),\dots ,y(t)}. The task is to compute, given the model's parameters and a sequence of observations, the distribution over hidden states of the last latent variable at the end of the sequence, i.e. to computeP(x(t)∣y(1),…,y(t)){\displaystyle P(x(t)\mid y(1),\dots ,y(t))}. This task is used when the sequence of latent variables is thought of as the underlying states that a process moves through at a sequence of points in time, with corresponding observations at each point. 
Then, it is natural to ask about the state of the process at the end. This problem can be handled efficiently using theforward algorithm. An example is when the algorithm is applied to a Hidden Markov Network to determineP(ht∣v1:t){\displaystyle \mathrm {P} {\big (}h_{t}\mid v_{1:t}{\big )}}. This is similar to filtering but asks about the distribution of a latent variable somewhere in the middle of a sequence, i.e. to computeP(x(k)∣y(1),…,y(t)){\displaystyle P(x(k)\mid y(1),\dots ,y(t))}for somek<t{\displaystyle k<t}. From the perspective described above, this can be thought of as the probability distribution over hidden states for a point in timekin the past, relative to timet. Theforward-backward algorithmis a good method for computing the smoothed values for all hidden state variables. The task, unlike the previous two, asks about thejoint probabilityof theentiresequence of hidden states that generated a particular sequence of observations (see illustration on the right). This task is generally applicable when HMM's are applied to different sorts of problems from those for which the tasks of filtering and smoothing are applicable. An example ispart-of-speech tagging, where the hidden states represent the underlyingparts of speechcorresponding to an observed sequence of words. In this case, what is of interest is the entire sequence of parts of speech, rather than simply the part of speech for a single word, as filtering or smoothing would compute. This task requires finding a maximum over all possible state sequences, and can be solved efficiently by theViterbi algorithm. For some of the above problems, it may also be interesting to ask aboutstatistical significance. What is the probability that a sequence drawn from somenull distributionwill have an HMM probability (in the case of the forward algorithm) or a maximum state sequence probability (in the case of the Viterbi algorithm) at least as large as that of a particular output sequence?[8]When an HMM is used to evaluate the relevance of a hypothesis for a particular output sequence, the statistical significance indicates thefalse positive rateassociated with failing to reject the hypothesis for the output sequence. The parameter learning task in HMMs is to find, given an output sequence or a set of such sequences, the best set of state transition and emission probabilities. The task is usually to derive themaximum likelihoodestimate of the parameters of the HMM given the set of output sequences. No tractable algorithm is known for solving this problem exactly, but a local maximum likelihood can be derived efficiently using theBaum–Welch algorithmor the Baldi–Chauvin algorithm. The Baum–Welch algorithm is a special case of theexpectation-maximization algorithm. If the HMMs are used for time series prediction, more sophisticated Bayesian inference methods, likeMarkov chain Monte Carlo(MCMC) sampling are proven to be favorable over finding a single maximum likelihood model both in terms of accuracy and stability.[9]Since MCMC imposes significant computational burden, in cases where computational scalability is also of interest, one may alternatively resort to variational approximations to Bayesian inference, e.g.[10]Indeed, approximate variational inference offers computational efficiency comparable to expectation-maximization, while yielding an accuracy profile only slightly inferior to exact MCMC-type Bayesian inference. 
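A minimal sketch of the forward algorithm used for filtering, written against dictionaries shaped like the weather example above; the function name forward_filter is invented, and no rescaling of the forward messages is performed, so the sketch is only suitable for short observation sequences.

```python
def forward_filter(obs_seq, states, start_p, trans_p, emit_p):
    """Return P(hidden state at time t | observations up to time t) for every t."""
    posteriors = []
    alpha = {}
    for t, obs in enumerate(obs_seq):
        if t == 0:
            # Initialization: prior over states weighted by the first observation.
            alpha = {s: start_p[s] * emit_p[s][obs] for s in states}
        else:
            # Propagate one step through the transition model, then weight each
            # state by how well it explains the current observation.
            prev = dict(alpha)
            alpha = {
                s: emit_p[s][obs] * sum(prev[r] * trans_p[r][s] for r in states)
                for s in states
            }
        total = sum(alpha.values())              # equals P(y(1), ..., y(t))
        posteriors.append({s: alpha[s] / total for s in states})
    return posteriors

# Using the parameter dictionaries from the reconstructed weather example above:
for t, dist in enumerate(forward_filter(("walk", "shop", "clean"), states,
                                        start_probability,
                                        transition_probability,
                                        emission_probability), start=1):
    print(f"after observation {t}: {dist}")
```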
HMMs can be applied in many fields where the goal is to recover a data sequence that is not immediately observable (but other data that depend on the sequence are). Applications include: Hidden Markov models were described in a series of statistical papers byLeonard E. Baumand other authors in the second half of the 1960s.[29][30][31][32][33]One of the first applications of HMMs wasspeech recognition, starting in the mid-1970s.[34][35][36][37]From the linguistics point of view, hidden Markov models are equivalent to stochastic regular grammar.[38] In the second half of the 1980s, HMMs began to be applied to the analysis of biological sequences,[39]in particularDNA. Since then, they have become ubiquitous in the field ofbioinformatics.[40] In the hidden Markov models considered above, the state space of the hidden variables is discrete, while the observations themselves can either be discrete (typically generated from acategorical distribution) or continuous (typically from aGaussian distribution). Hidden Markov models can also be generalized to allow continuous state spaces. Examples of such models are those where the Markov process over hidden variables is alinear dynamical system, with a linear relationship among related variables and where all hidden and observed variables follow aGaussian distribution. In simple cases, such as the linear dynamical system just mentioned, exact inference is tractable (in this case, using theKalman filter); however, in general, exact inference in HMMs with continuous latent variables is infeasible, and approximate methods must be used, such as theextended Kalman filteror theparticle filter. Nowadays, inference in hidden Markov models is performed innonparametricsettings, where the dependency structure enablesidentifiabilityof the model[41]and the learnability limits are still under exploration.[42] Hidden Markov models aregenerative models, in which thejoint distributionof observations and hidden states, or equivalently both theprior distributionof hidden states (thetransition probabilities) andconditional distributionof observations given states (theemission probabilities), is modeled. The above algorithms implicitly assume auniformprior distribution over the transition probabilities. However, it is also possible to create hidden Markov models with other types of prior distributions. An obvious candidate, given the categorical distribution of the transition probabilities, is theDirichlet distribution, which is theconjugate priordistribution of the categorical distribution. Typically, a symmetric Dirichlet distribution is chosen, reflecting ignorance about which states are inherently more likely than others. The single parameter of this distribution (termed theconcentration parameter) controls the relative density or sparseness of the resulting transition matrix. A choice of 1 yields a uniform distribution. Values greater than 1 produce a dense matrix, in which the transition probabilities between pairs of states are likely to be nearly equal. Values less than 1 result in a sparse matrix in which, for each given source state, only a small number of destination states have non-negligible transition probabilities. It is also possible to use a two-level prior Dirichlet distribution, in which one Dirichlet distribution (the upper distribution) governs the parameters of another Dirichlet distribution (the lower distribution), which in turn governs the transition probabilities. 
The upper distribution governs the overall distribution of states, determining how likely each state is to occur; its concentration parameter determines the density or sparseness of states. Such a two-level prior distribution, where both concentration parameters are set to produce sparse distributions, might be useful for example inunsupervisedpart-of-speech tagging, where some parts of speech occur much more commonly than others; learning algorithms that assume a uniform prior distribution generally perform poorly on this task. The parameters of models of this sort, with non-uniform prior distributions, can be learned usingGibbs samplingor extended versions of theexpectation-maximization algorithm. An extension of the previously described hidden Markov models withDirichletpriors uses aDirichlet processin place of a Dirichlet distribution. This type of model allows for an unknown and potentially infinite number of states. It is common to use a two-level Dirichlet process, similar to the previously described model with two levels of Dirichlet distributions. Such a model is called ahierarchical Dirichlet process hidden Markov model, orHDP-HMMfor short. It was originally described under the name "Infinite Hidden Markov Model"[43]and was further formalized in "Hierarchical Dirichlet Processes".[44] A different type of extension uses adiscriminative modelin place of thegenerative modelof standard HMMs. This type of model directly models the conditional distribution of the hidden states given the observations, rather than modeling the joint distribution. An example of this model is the so-calledmaximum entropy Markov model(MEMM), which models the conditional distribution of the states usinglogistic regression(also known as a "maximum entropymodel"). The advantage of this type of model is that arbitrary features (i.e. functions) of the observations can be modeled, allowing domain-specific knowledge of the problem at hand to be injected into the model. Models of this sort are not limited to modeling direct dependencies between a hidden state and its associated observation; rather, features of nearby observations, of combinations of the associated observation and nearby observations, or in fact of arbitrary observations at any distance from a given hidden state can be included in the process used to determine the value of a hidden state. Furthermore, there is no need for these features to bestatistically independentof each other, as would be the case if such features were used in a generative model. Finally, arbitrary features over pairs of adjacent hidden states can be used rather than simple transition probabilities. The disadvantages of such models are: (1) The types of prior distributions that can be placed on hidden states are severely limited; (2) It is not possible to predict the probability of seeing an arbitrary observation. This second limitation is often not an issue in practice, since many common usages of HMM's do not require such predictive probabilities. A variant of the previously described discriminative model is the linear-chainconditional random field. This uses an undirected graphical model (akaMarkov random field) rather than the directed graphical models of MEMM's and similar models. The advantage of this type of model is that it does not suffer from the so-calledlabel biasproblem of MEMM's, and thus may make more accurate predictions. The disadvantage is that training can be slower than for MEMM's. 
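To see how the concentration parameter controls the density or sparseness of the transition matrix, one can sample candidate matrices row by row from a symmetric Dirichlet distribution. The sketch below is only an illustration using NumPy's random generator; the number of states and the concentration values are arbitrary choices.

```python
# Minimal sketch: symmetric Dirichlet priors over HMM transition rows.
# alpha > 1 gives dense, nearly uniform rows; alpha < 1 gives sparse rows
# in which only a few destination states get non-negligible probability.
import numpy as np

rng = np.random.default_rng(0)
N = 5  # number of hidden states (arbitrary)

for alpha in (5.0, 1.0, 0.1):
    rows = rng.dirichlet(np.full(N, alpha), size=N)  # one row per source state
    print(f"alpha = {alpha}:")
    print(np.round(rows, 2))                         # each row sums to 1
```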
Yet another variant is thefactorial hidden Markov model, which allows for a single observation to be conditioned on the corresponding hidden variables of a set ofK{\displaystyle K}independent Markov chains, rather than a single Markov chain. It is equivalent to a single HMM, withNK{\displaystyle N^{K}}states (assuming there areN{\displaystyle N}states for each chain), and therefore, learning in such a model is difficult: for a sequence of lengthT{\displaystyle T}, a straightforward Viterbi algorithm has complexityO(N2KT){\displaystyle O(N^{2K}\,T)}. To find an exact solution, a junction tree algorithm could be used, but it results in anO(NK+1KT){\displaystyle O(N^{K+1}\,K\,T)}complexity. In practice, approximate techniques, such as variational approaches, could be used.[45] All of the above models can be extended to allow for more distant dependencies among hidden states, e.g. allowing for a given state to be dependent on the previous two or three states rather than a single previous state; i.e. the transition probabilities are extended to encompass sets of three or four adjacent states (or in generalK{\displaystyle K}adjacent states). The disadvantage of such models is that dynamic-programming algorithms for training them have anO(NKT){\displaystyle O(N^{K}\,T)}running time, forK{\displaystyle K}adjacent states andT{\displaystyle T}total observations (i.e. a length-T{\displaystyle T}Markov chain). This extension has been widely used inbioinformatics, in the modeling ofDNA sequences. Another recent extension is thetriplet Markov model,[46]in which an auxiliary underlying process is added to model some data specificities. Many variants of this model have been proposed. One should also mention the interesting link that has been established between thetheory of evidenceand thetriplet Markov models[47]and which allows to fuse data in Markovian context[48]and to model nonstationary data.[49][50]Alternative multi-stream data fusion strategies have also been proposed in recent literature, e.g.,[51] Finally, a different rationale towards addressing the problem of modeling nonstationary data by means of hidden Markov models was suggested in 2012.[52]It consists in employing a small recurrent neural network (RNN), specifically a reservoir network,[53]to capture the evolution of the temporal dynamics in the observed data. This information, encoded in the form of a high-dimensional vector, is used as a conditioning variable of the HMM state transition probabilities. Under such a setup, eventually is obtained a nonstationary HMM, the transition probabilities of which evolve over time in a manner that is inferred from the data, in contrast to some unrealistic ad-hoc model of temporal evolution. In 2023, two innovative algorithms were introduced for the Hidden Markov Model. These algorithms enable the computation of the posterior distribution of the HMM without the necessity of explicitly modeling the joint distribution, utilizing only the conditional distributions.[54][55]Unlike traditional methods such as the Forward-Backward and Viterbi algorithms, which require knowledge of the joint law of the HMM and can be computationally intensive to learn, the Discriminative Forward-Backward and Discriminative Viterbi algorithms circumvent the need for the observation's law.[56][57]This breakthrough allows the HMM to be applied as a discriminative model, offering a more efficient and versatile approach to leveraging Hidden Markov Models in various applications. 
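A useful way to see where the O(N^K T) costs mentioned above come from is that these richer models can be rewritten as ordinary first-order chains over an enlarged state space. The sketch below, with invented transition probabilities, folds a second-order chain over two states into an equivalent first-order chain over ordered pairs of states, which is exactly the kind of state-space blow-up that makes exact inference expensive.

```python
# Minimal sketch: folding a second-order Markov chain over states S into a
# first-order chain over ordered pairs (previous, current). Probabilities
# and names are illustrative only.
from itertools import product

S = ["R", "S"]  # two states

# Second-order transition table: P(next | two preceding states), invented numbers.
second_order = {
    ("R", "R"): {"R": 0.8, "S": 0.2},
    ("R", "S"): {"R": 0.4, "S": 0.6},
    ("S", "R"): {"R": 0.6, "S": 0.4},
    ("S", "S"): {"R": 0.2, "S": 0.8},
}

# Equivalent first-order chain over pair-states (prev, current):
# a move (a, b) -> (b, c) has probability P(c | a, b); any other move has probability 0.
pair_states = list(product(S, S))                    # 2**2 = 4 expanded states
first_order = {
    (a, b): {(b2, c): (second_order[(a, b)][c] if b2 == b else 0.0)
             for (b2, c) in pair_states}
    for (a, b) in pair_states
}

for src, dests in first_order.items():
    print(src, {d: p for d, p in dests.items() if p > 0})
```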
The model suitable in the context of longitudinal data is named latent Markov model.[58]The basic version of this model has been extended to include individual covariates, random effects and to model more complex data structures such as multilevel data. A complete overview of the latent Markov models, with special attention to the model assumptions and to their practical use is provided in[59] Given a Markov transition matrix and an invariant distribution on the states, a probability measure can be imposed on the set of subshifts. For example, consider the Markov chain given on the left on the statesA,B1,B2{\displaystyle A,B_{1},B_{2}}, with invariant distributionπ=(2/7,4/7,1/7){\displaystyle \pi =(2/7,4/7,1/7)}. By ignoring the distinction betweenB1,B2{\displaystyle B_{1},B_{2}}, this space of subshifts is projected onA,B1,B2{\displaystyle A,B_{1},B_{2}}into another space of subshifts onA,B{\displaystyle A,B}, and this projection also projects the probability measure down to a probability measure on the subshifts onA,B{\displaystyle A,B}. The curious thing is that the probability measure on the subshifts onA,B{\displaystyle A,B}is not created by a Markov chain onA,B{\displaystyle A,B}, not even multiple orders. Intuitively, this is because if one observes a long sequence ofBn{\displaystyle B^{n}}, then one would become increasingly sure that thePr(A∣Bn)→23{\displaystyle \Pr(A\mid B^{n})\to {\frac {2}{3}}}, meaning that the observable part of the system can be affected by something infinitely in the past.[60][61] Conversely, there exists a space of subshifts on 6 symbols, projected to subshifts on 2 symbols, such that any Markov measure on the smaller subshift has a preimage measure that is not Markov of any order (example 2.6[61]).
https://en.wikipedia.org/wiki/Hidden_Markov_model
Ingrammar, apart of speechorpart-of-speech(abbreviatedasPOSorPoS, also known asword class[1]orgrammatical category[2][a]) is a category of words (or, more generally, oflexical items) that have similargrammaticalproperties. Words that are assigned to the same part of speech generally display similarsyntacticbehavior (they play similar roles within the grammatical structure of sentences), sometimes similarmorphologicalbehavior in that they undergoinflectionfor similar properties and even similarsemanticbehavior. Commonly listedEnglishparts of speech arenoun,verb,adjective,adverb,pronoun,preposition,conjunction,interjection,numeral,article, anddeterminer. Other terms thanpart of speech—particularly in modernlinguisticclassifications, which often make more precise distinctions than the traditional scheme does—includeword class,lexical class, andlexical category. Some authors restrict the termlexical categoryto refer only to a particular type ofsyntactic category; for them the term excludes those parts of speech that are considered to befunction words, such as pronouns. The termform classis also used, although this has various conflicting definitions.[3]Word classes may be classified asopen or closed:open classes(typically including nouns, verbs and adjectives) acquire new members constantly, whileclosed classes(such as pronouns and conjunctions) acquire new members infrequently, if at all. Almost all languages have the word classes noun and verb, but beyond these two there are significant variations among different languages.[4]For example: Because of such variation in the number of categories and their identifying properties, analysis of parts of speech must be done for each individual language. Nevertheless, the labels for each category are assigned on the basis of universal criteria.[4] The classification of words into lexical categories is found from the earliest moments in thehistory of linguistics.[5] In theNirukta, written in the 6th or 5th century BCE, theSanskritgrammarianYāskadefined four main categories of words:[6] These four were grouped into two larger classes:inflectable(nouns and verbs) and uninflectable (pre-verbs and particles). The ancient work on the grammar of theTamil language,Tolkāppiyam, argued to have been written around 2nd century CE,[7]classifies Tamil words aspeyar(பெயர்; noun),vinai(வினை; verb),idai(part of speech which modifies the relationships between verbs and nouns), anduri(word that further qualifies a noun or verb).[8] A century or two after the work of Yāska, theGreekscholarPlatowrote in hisCratylusdialogue, "sentences are, I conceive, a combination of verbs [rhêma] and nouns [ónoma]".[9]Aristotleadded another class, "conjunction" [sýndesmos], which included not only the words known today asconjunctions, but also other parts (the interpretations differ; in one interpretation it ispronouns,prepositions, and thearticle).[10] By the end of the 2nd century BCE, grammarians had expanded this classification scheme into eight categories, seen in theArt of Grammar, attributed toDionysius Thrax:[11] It can be seen that these parts of speech are defined bymorphological,syntacticandsemanticcriteria. 
TheLatingrammarianPriscian(fl.500 CE) modified the above eightfold system, excluding "article" (since theLatin language, unlike Greek, does not have articles) but adding "interjection".[13][14] The Latin names for the parts of speech, from which the corresponding modern English terms derive, werenomen,verbum,participium,pronomen,praepositio,adverbium,conjunctioandinterjectio. The categorynomenincludedsubstantives(nomen substantivum, corresponding to what are today called nouns in English),adjectives(nomen adjectivum)andnumerals(nomen numerale). This is reflected in the older English terminologynoun substantive,noun adjectiveandnoun numeral. Later[15]the adjective became a separate class, as often did the numerals, and the English wordnouncame to be applied to substantives only. Works ofEnglish grammargenerally follow the pattern of the European tradition as described above, except that participles are now usually regarded as forms of verbs rather than as a separate part of speech, and numerals are often conflated with other parts of speech: nouns (cardinal numerals, e.g., "one", andcollective numerals, e.g., "dozen"), adjectives (ordinal numerals, e.g., "first", andmultiplier numerals, e.g., "single") and adverbs (multiplicative numerals, e.g., "once", anddistributive numerals, e.g., "singly"). Eight or nine parts of speech are commonly listed: Some traditional classifications consider articles to be adjectives, yielding eight parts of speech rather than nine. And some modern classifications define further classes in addition to these. For discussion see the sections below. Additionally, there are other parts of speech includingparticles(yes,no)[b]andpostpositions(ago,notwithstanding) although many fewer words are in these categories. The classification below, or slight expansions of it, is still followed in mostdictionaries: English words are not generallymarkedas belonging to one part of speech or another; this contrasts with many other European languages, which useinflectionmore extensively, meaning that a given word form can often be identified as belonging to a particular part of speech and having certain additionalgrammatical properties. In English, most words are uninflected, while the inflected endings that exist are mostly ambiguous:-edmay mark a verbal past tense, a participle or a fully adjectival form;-smay mark a plural noun, a possessive noun, or a present-tense verb form;-ingmay mark a participle,gerund, or pure adjective or noun. Although-lyis a frequent adverb marker, some adverbs (e.g.tomorrow,fast,very) do not have that ending, while many adjectives do have it (e.g.friendly,ugly,lovely), as do occasional words in other parts of speech (e.g.jelly,fly,rely). Many English words can belong to more than one part of speech. Words likeneigh,break,outlaw,laser,microwave, andtelephonemight all be either verbs or nouns. In certain circumstances, even words with primarily grammatical functions can be used as verbs or nouns, as in, "We must look to thehowsand not just thewhys." The process whereby a word comes to be used as a different part of speech is calledconversionor zero derivation. Linguistsrecognize that the above list of eight or nine word classes is drastically simplified.[17]For example, "adverb" is to some extent a catch-all class that includes words with many different functions. 
Some have even argued that the most basic of category distinctions, that of nouns and verbs, is unfounded,[18]or not applicable to certain languages.[19][20]Modern linguists have proposed many different schemes whereby the words of English or other languages are placed into more specific categories and subcategories based on a more precise understanding of their grammatical functions. Common lexical category set defined by function may include the following (not all of them will necessarily be applicable in a given language): Within a given category, subgroups of words may be identified based on more precise grammatical properties. For example, verbs may be specified according to the number and type ofobjectsor othercomplementswhich they take. This is calledsubcategorization. Many modern descriptions of grammar include not only lexical categories or word classes, but alsophrasal categories, used to classifyphrases, in the sense of groups of words that form units having specific grammatical functions. Phrasal categories may includenoun phrases(NP),verb phrases(VP) and so on. Lexical and phrasal categories together are calledsyntactic categories. Word classes may be either open or closed. Anopen classis one that commonly accepts the addition of new words, while aclosed classis one to which new items are very rarely added. Open classes normally contain large numbers of words, while closed classes are much smaller. Typical open classes found in English and many other languages arenouns,verbs(excludingauxiliary verbs, if these are regarded as a separate class),adjectives,adverbsandinterjections.Ideophonesare often an open class, though less familiar to English speakers,[21][22][c]and are often open tononce words. Typical closed classes areprepositions(or postpositions),determiners,conjunctions, andpronouns.[24] The open–closed distinction is related to the distinction betweenlexical and functional categories, and to that betweencontent wordsandfunction words, and some authors consider these identical, but the connection is not strict. Open classes are generally lexical categories in the stricter sense, containing words with greater semantic content,[25]while closed classes are normally functional categories, consisting of words that perform essentially grammatical functions. This is not universal: in many languages verbs and adjectives[26][27][28]are closed classes, usually consisting of few members, and in Japanese the formation of new pronouns from existing nouns is relatively common, though to what extent these form a distinct word class is debated. Words are added to open classes through such processes ascompounding,derivation,coining, andborrowing. When a new word is added through some such process, it can subsequently be used grammatically in sentences in the same ways as other words in its class.[29]A closed class may obtain new items through these same processes, but such changes are much rarer and take much more time. A closed class is normally seen as part of the core language and is not expected to change. In English, for example, new nouns, verbs, etc. are being added to the language constantly (including by the common process ofverbingand other types ofconversion, where an existing word comes to be used in a different part of speech). However, it is very unusual for a new pronoun, for example, to become accepted in the language, even in cases where there may be felt to be a need for one, as in the case ofgender-neutral pronouns. 
The open or closed status of word classes varies between languages, even assuming that corresponding word classes exist. Most conspicuously, in many languages verbs and adjectives form closed classes of content words. An extreme example is found inJingulu, which has only three verbs, while even the modern Indo-EuropeanPersianhas no more than a few hundred simple verbs, a great deal of which are archaic. (Some twenty Persian verbs are used aslight verbsto form compounds; this lack of lexical verbs is shared with other Iranian languages.) Japanese is similar, having few lexical verbs.[30][failed verification]Basque verbsare also a closed class, with the vast majority of verbal senses instead expressed periphrastically. InJapanese, verbs and adjectives are closed classes,[31]though these are quite large, with about 700 adjectives,[32][33]and verbs have opened slightly in recent years.Japanese adjectivesare closely related to verbs (they can predicate a sentence, for instance). New verbal meanings are nearly always expressed periphrastically by appendingsuru(する, to do)to a noun, as inundō suru(運動する, to (do) exercise), and new adjectival meanings are nearly always expressed byadjectival nouns, using the suffix-na(〜な)when an adjectival noun modifies a noun phrase, as inhen-na ojisan(変なおじさん, strange man). The closedness of verbs has weakened in recent years, and in a few cases new verbs are created by appending-ru(〜る)to a noun or using it to replace the end of a word. This is mostly in casual speech for borrowed words, with the most well-established example beingsabo-ru(サボる, cut class; play hooky), fromsabotāju(サボタージュ, sabotage).[34]This recent innovation aside, the huge contribution ofSino-Japanese vocabularywas almost entirely borrowed as nouns (often verbal nouns or adjectival nouns). Other languages where adjectives are closed class include Swahili,[28]Bemba, andLuganda. By contrast,Japanese pronounsare an open class and nouns become used as pronouns with some frequency; a recent example isjibun(自分, self), now used by some as a first-person pronoun. The status of Japanese pronouns as a distinct class is disputed, however, with some considering it only a use of nouns, not a distinct class. The case is similar in languages of Southeast Asia, including Thai and Lao, in which, like Japanese, pronouns and terms of address vary significantly based on relative social standing and respect.[35] Some word classes are universally closed, however, including demonstratives and interrogative words.[35]
https://en.wikipedia.org/wiki/Part_of_speech
Astatistical modelis amathematical modelthat embodies a set ofstatistical assumptionsconcerning the generation ofsample data(and similar data from a largerpopulation). A statistical model represents, often in considerably idealized form, thedata-generating process.[1]When referring specifically toprobabilities, the corresponding term isprobabilistic model. Allstatistical hypothesis testsand allstatistical estimatorsare derived via statistical models. More generally, statistical models are part of the foundation ofstatistical inference. A statistical model is usually specified as a mathematical relationship between one or morerandom variablesand other non-random variables. As such, a statistical model is "a formal representation of a theory" (Herman AdèrquotingKenneth Bollen).[2] Informally, a statistical model can be thought of as astatistical assumption(or set of statistical assumptions) with a certain property: that the assumption allows us to calculate the probability of anyevent. As an example, consider a pair of ordinary six-sideddice. We will study two different statistical assumptions about the dice. The first statistical assumption is this: for each of the dice, the probability of each face (1, 2, 3, 4, 5, and 6) coming up is⁠1/6⁠. From that assumption, we can calculate the probability of both dice coming up 5:⁠1/6⁠×⁠1/6⁠=⁠1/36⁠.More generally, we can calculate the probability of any event: e.g. (1 and 2) or (3 and 3) or (5 and 6). The alternative statistical assumption is this: for each of the dice, the probability of the face 5 coming up is⁠1/8⁠(because the dice areweighted). From that assumption, we can calculate the probability of both dice coming up 5:⁠1/8⁠×⁠1/8⁠=⁠1/64⁠.We cannot, however, calculate the probability of any other nontrivial event, as the probabilities of the other faces are unknown. The first statistical assumption constitutes a statistical model: because with the assumption alone, we can calculate the probability of any event. The alternative statistical assumption doesnotconstitute a statistical model: because with the assumption alone, we cannot calculate the probability of every event. In the example above, with the first assumption, calculating the probability of an event is easy. With some other examples, though, the calculation can be difficult, or even impractical (e.g. it might require millions of years of computation). For an assumption to constitute a statistical model, such difficulty is acceptable: doing the calculation does not need to be practicable, just theoretically possible. In mathematical terms, a statistical model is a pair (S,P{\displaystyle S,{\mathcal {P}}}), whereS{\displaystyle S}is the set of possible observations, i.e. thesample space, andP{\displaystyle {\mathcal {P}}}is a set ofprobability distributionsonS{\displaystyle S}.[3]The setP{\displaystyle {\mathcal {P}}}represents all of the models that are considered possible. This set is typically parameterized:P={Fθ:θ∈Θ}{\displaystyle {\mathcal {P}}=\{F_{\theta }:\theta \in \Theta \}}. The setΘ{\displaystyle \Theta }defines theparametersof the model. If a parameterization is such that distinct parameter values give rise to distinct distributions, i.e.Fθ1=Fθ2⇒θ1=θ2{\displaystyle F_{\theta _{1}}=F_{\theta _{2}}\Rightarrow \theta _{1}=\theta _{2}}(in other words, the mapping isinjective), it is said to beidentifiable.[3] In some cases, the model can be more complex. Suppose that we have a population of children, with the ages of the children distributeduniformly, in the population. 
The height of a child will bestochasticallyrelated to the age: e.g. when we know that a child is of age 7, this influences the chance of the child being 1.5 meters tall. We could formalize that relationship in alinear regressionmodel, like this: heighti=b0+b1agei+ εi, whereb0is the intercept,b1is a parameter that age is multiplied by to obtain a prediction of height, εiis the error term, andiidentifies the child. This implies that height is predicted by age, with some error. An admissible model must be consistent with all the data points. Thus, a straight line (heighti=b0+b1agei) cannot be admissible for a model of the data—unless it exactly fits all the data points, i.e. all the data points lie perfectly on the line. The error term, εi, must be included in the equation, so that the model is consistent with all the data points. To dostatistical inference, we would first need to assume some probability distributions for the εi. For instance, we might assume that the εidistributions arei.i.d.Gaussian, with zero mean. In this instance, the model would have 3 parameters:b0,b1, and the variance of the Gaussian distribution. We can formally specify the model in the form (S,P{\displaystyle S,{\mathcal {P}}}) as follows. The sample space,S{\displaystyle S}, of our model comprises the set of all possible pairs (age, height). Each possible value ofθ{\displaystyle \theta }= (b0,b1,σ2) determines a distribution onS{\displaystyle S}; denote that distribution byFθ{\displaystyle F_{\theta }}. IfΘ{\displaystyle \Theta }is the set of all possible values ofθ{\displaystyle \theta }, thenP={Fθ:θ∈Θ}{\displaystyle {\mathcal {P}}=\{F_{\theta }:\theta \in \Theta \}}. (The parameterization is identifiable, and this is easy to check.) In this example, the model is determined by (1) specifyingS{\displaystyle S}and (2) making some assumptions relevant toP{\displaystyle {\mathcal {P}}}. There are two assumptions: that height can be approximated by a linear function of age; that errors in the approximation are distributed as i.i.d. Gaussian. The assumptions are sufficient to specifyP{\displaystyle {\mathcal {P}}}—as they are required to do. A statistical model is a special class ofmathematical model. What distinguishes a statistical model from other mathematical models is that a statistical model is non-deterministic. Thus, in a statistical model specified via mathematical equations, some of the variables do not have specific values, but instead have probability distributions; i.e. some of the variables arestochastic. In the above example with children's heights, ε is a stochastic variable; without that stochastic variable, the model would be deterministic. Statistical models are often used even when the data-generating process being modeled is deterministic. For instance,coin tossingis, in principle, a deterministic process; yet it is commonly modeled as stochastic (via aBernoulli process). Choosing an appropriate statistical model to represent a given data-generating process is sometimes extremely difficult, and may require knowledge of both the process and relevant statistical analyses. 
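As a concrete sketch of this model, the three parameters b0, b1 and the error variance can be estimated from data by ordinary least squares; the ages and heights below are invented purely for illustration.

```python
# Minimal sketch of the model height_i = b0 + b1 * age_i + eps_i with
# i.i.d. Gaussian errors. The data are made up; the statistical model has
# three parameters: b0, b1 and the variance of eps.
import numpy as np

age = np.array([3, 5, 7, 9, 11, 13], dtype=float)
height = np.array([0.95, 1.10, 1.24, 1.33, 1.45, 1.56])  # metres, invented

X = np.column_stack([np.ones_like(age), age])   # design matrix [1, age]
(b0, b1), *_ = np.linalg.lstsq(X, height, rcond=None)
eps = height - (b0 + b1 * age)
sigma2 = eps.var(ddof=2)                         # estimate of Var(eps_i)

print(f"b0 = {b0:.3f}, b1 = {b1:.3f}, sigma^2 = {sigma2:.5f}")
```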
Relatedly, the statisticianSir David Coxhas said, "How [the] translation from subject-matter problem to statistical model is done is often the most critical part of an analysis".[4] There are three purposes for a statistical model, according to Konishi & Kitagawa:[5] Those three purposes are essentially the same as the three purposes indicated by Friendly & Meyer: prediction, estimation, description.[6] Suppose that we have a statistical model (S,P{\displaystyle S,{\mathcal {P}}}) withP={Fθ:θ∈Θ}{\displaystyle {\mathcal {P}}=\{F_{\theta }:\theta \in \Theta \}}. In notation, we write thatΘ⊆Rk{\displaystyle \Theta \subseteq \mathbb {R} ^{k}}wherekis a positive integer (R{\displaystyle \mathbb {R} }denotes thereal numbers; other sets can be used, in principle). Here,kis called thedimensionof the model. The model is said to beparametricifΘ{\displaystyle \Theta }has finite dimension.[citation needed]As an example, if we assume that data arise from a univariateGaussian distribution, then we are assuming that In this example, the dimension,k, equals 2. As another example, suppose that the data consists of points (x,y) that we assume are distributed according to a straight line with i.i.d. Gaussian residuals (with zero mean): this leads to the same statistical model as was used in the example with children's heights. The dimension of the statistical model is 3: the intercept of the line, the slope of the line, and the variance of the distribution of the residuals. (Note the set of all possible lines has dimension 2, even though geometrically, a line has dimension 1.) Although formallyθ∈Θ{\displaystyle \theta \in \Theta }is a single parameter that has dimensionk, it is sometimes regarded as comprisingkseparate parameters. For example, with the univariate Gaussian distribution,θ{\displaystyle \theta }is formally a single parameter with dimension 2, but it is often regarded as comprising 2 separate parameters—the mean and the standard deviation. A statistical model isnonparametricif the parameter setΘ{\displaystyle \Theta }is infinite dimensional. A statistical model issemiparametricif it has both finite-dimensional and infinite-dimensional parameters. Formally, ifkis the dimension ofΘ{\displaystyle \Theta }andnis the number of samples, both semiparametric and nonparametric models havek→∞{\displaystyle k\rightarrow \infty }asn→∞{\displaystyle n\rightarrow \infty }. Ifk/n→0{\displaystyle k/n\rightarrow 0}asn→∞{\displaystyle n\rightarrow \infty }, then the model is semiparametric; otherwise, the model is nonparametric. Parametric models are by far the most commonly used statistical models. Regarding semiparametric and nonparametric models,Sir David Coxhas said, "These typically involve fewer assumptions of structure and distributional form but usually contain strong assumptions about independencies".[7] Two statistical models arenestedif the first model can be transformed into the second model by imposing constraints on the parameters of the first model. As an example, the set of all Gaussian distributions has, nested within it, the set of zero-mean Gaussian distributions: we constrain the mean in the set of all Gaussian distributions to get the zero-mean distributions. As a second example, the quadratic model has, nested within it, the linear model —we constrain the parameterb2to equal 0. In both those examples, the first model has a higher dimension than the second model (for the first example, the zero-mean model has dimension 1). Such is often, but not always, the case. 
As an example where they have the same dimension, the set of positive-mean Gaussian distributions is nested within the set of all Gaussian distributions; they both have dimension 2. Comparing statistical models is fundamental for much ofstatistical inference.Konishi & Kitagawa (2008, p. 75) state: "The majority of the problems in statistical inference can be considered to be problems related to statistical modeling. They are typically formulated as comparisons of several statistical models." Common criteria for comparing models include the following:R2,Bayes factor,Akaike information criterion, and thelikelihood-ratio testtogether with its generalization, therelative likelihood. Another way of comparing two statistical models is through the notion ofdeficiencyintroduced byLucien Le Cam.[8]
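The following sketch compares the nested linear and quadratic models on invented data using the Akaike information criterion under the usual Gaussian-error assumption; the helper gaussian_aic and the simulated data are illustrative only, not taken from the references above.

```python
# Minimal sketch: comparing two nested models (a line nested in a quadratic,
# obtained by constraining b2 = 0) with least-squares fits and the AIC.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 40)
y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=x.size)   # the truth is linear

def gaussian_aic(y, fitted, n_params):
    # AIC = 2k - 2 log L; for Gaussian errors with the variance profiled out,
    # this is 2k + n*log(RSS/n) up to an additive constant shared by both models.
    n = y.size
    rss = np.sum((y - fitted) ** 2)
    return 2 * n_params + n * np.log(rss / n)

for degree, k in ((1, 3), (2, 4)):        # k counts coefficients plus the error variance
    coeffs = np.polyfit(x, y, degree)
    fitted = np.polyval(coeffs, x)
    print(f"degree {degree}: AIC = {gaussian_aic(y, fitted, k):.2f}")
```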
https://en.wikipedia.org/wiki/Probabilistic_model
Determiner, also calleddeterminative(abbreviatedDET), is a term used in some models of grammatical description to describe a word or affix belonging to a class of noun modifiers. A determiner combines with anounto express itsreference.[1][2]Examples in English includearticles(theanda/an),demonstratives(this,that),possessive determiners(my,their), andquantifiers(many,both). Not all languages have determiners, and not all systems of grammatical description recognize them as a distinct category. The linguistics term "determiner" was coined byLeonard Bloomfieldin 1933. Bloomfield observed that inEnglish, nouns often require a qualifying word such as anarticleoradjective. He proposed that such words belong to a distinct class which he called "determiners".[3] If a language is said to have determiners, any articles are normally included in the class. Other types of words often regarded as belonging to the determiner class include demonstratives and possessives. Some linguists extend the term to include other words in thenoun phrasesuch as adjectives and pronouns, or even modifiers in other parts of the sentence.[2] Qualifying a lexical item as a determiner may depend on a given language's rules ofsyntax. In English, for example, the wordsmy,youretc. are used without articles and so can be regarded as possessive determiners whereas theirItalianequivalentsmioetc. are used together with articles and so may be better classed as adjectives.[4]Not all languages can be said to have a lexically distinct class of determiners. In some languages, the role of certain determiners can be played byaffixes(prefixes or suffixes) attached to a noun or by other types ofinflection. For example, definite articles are represented by suffixes inRomanian,Bulgarian,Macedonian, andSwedish. In Swedish,bok("book"), when definite, becomesboken("the book"), while the Romaniancaiet("notebook") similarly becomescaietul("the notebook"). Some languages, such asFinnish, havepossessive affixeswhich play the role of possessive determiners likemyandhis. Determiners may bepredeterminers,central determinersorpostdeterminers, based on the order in which they can occur.[citation needed]For example, "all my many very young children" uses one of each. "My all many very young children" is not grammatically correct because a central determiner cannot precede a predeterminer. Determiners are distinguished frompronounsby the presence of nouns.[5] Plural personal pronouns can act as determiners in certain constructions.[6] Some theoreticians unify determiners andpronounsinto a single class. For further information, seePronoun § Linguistics. Some theoretical approaches regard determiners asheadsof their ownphrases, which are described asdeterminer phrases. In such approaches, noun phrases containing only a noun without a determiner present are called "bare noun phrases", and are considered to bedominatedby determiner phrases withnullheads.[7]For more detail on theoretical approaches to the status of determiners, seeNoun phrase § With and without determiners. Some theoreticians analyzepronounsas determiners or determiner phrases. SeePronoun: Theoretical considerations. This is consistent with the determiner phrase viewpoint, whereby a determiner, rather than the noun that follows it, is taken to be the head of the phrase. Articlesare words used (as a standalone word or a prefix or suffix) to specify the grammatical definiteness of a noun, and, in some languages, volume or numerical scope. 
Articles often include definite articles (such as English the) and indefinite articles (such as English a and an). Demonstratives are deictic words, such as this and that, used to indicate which entities are being referred to and to distinguish those entities from others. They can indicate how close the things being referenced are to the speaker, listener, or other group of people. In the English language, demonstratives express proximity of things with respect to the speaker. Possessive determiners such as my, their, Jane’s and the King of England’s modify a noun by attributing possession (or other sense of belonging) to someone or something. They are also known as possessive adjectives. Quantifiers indicate quantity. Some examples of quantifiers include: all, some, many, little, few, and no. Quantifiers only indicate a general quantity of objects, not a precise number such as twelve, first, single, or once (which are considered numerals).[8] Distributive determiners, also called distributive adjectives, consider members of a group separately, rather than collectively. Words such as each and every are examples of distributive determiners. Interrogative determiners such as which, what, and how are used to ask a question. Many functionalist linguists dispute that the determiner is a universally valid linguistic category. They argue that the concept is Anglocentric, since it was developed on the basis of the grammar of English and similar languages of north-western Europe. The linguist Thomas Payne comments that the term determiner "is not very viable as a universal natural class", because few languages consistently place all the categories described as determiners in the same place in the noun phrase.[9] The category "determiner" was developed because in languages like English traditional categories like articles, demonstratives and possessives do not occur together. But in many languages these categories freely co-occur, as Matthew Dryer observes.[10] For instance, Engenni, a Niger-Congo language of Nigeria, allows a possessive word, a demonstrative and an article all to occur as noun modifiers in the same noun phrase:[10]

ani wò âka nà
wife 2SG.POSS that the
"that wife of yours"

There are also languages in which demonstratives and articles do not normally occur together, but must be placed on opposite sides of the noun.[10] For instance, in Urak Lawoi, a language of Thailand, the demonstrative follows the noun:

rumah besal itu
house big that
"that big house"

However, the definite article precedes the noun:

koq nanaq
the children
"the children"

As Dryer observes, there is little justification for a category of determiner in such languages.[10]
https://en.wikipedia.org/wiki/Determiner
Conjunction may refer to:
https://en.wikipedia.org/wiki/Conjunction
Incorpus linguistics,part-of-speech tagging(POS tagging,PoS tagging, orPOST), also calledgrammatical tagging, is the process of marking up a word in a text (corpus) as corresponding to a particularpart of speech,[1]based on both its definition and itscontext. A simplified form of this is commonly taught to school-age children, in the identification of words asnouns,verbs,adjectives,adverbs, etc. Once performed by hand, POS tagging is now done in the context ofcomputational linguistics, usingalgorithmswhich associate discrete terms, as well as hidden parts of speech, by a set of descriptive tags. POS-tagging algorithms fall into two distinctive groups: rule-based and stochastic.E. Brill's tagger, one of the first and most widely used English POS taggers, employs rule-based algorithms. Part-of-speech tagging is harder than just having a list of words and their parts of speech, because some words can represent more than one part of speech at different times, and because some parts of speech are complex. This is not rare—innatural languages(as opposed to manyartificial languages), a large percentage of word-forms areambiguous. For example, even "dogs", which is usually thought of as just a plural noun, can also be a verb: Correctgrammaticaltagging will reflect that "dogs" is here used as a verb, not as the more common plural noun. Grammatical context is one way to determine this;semantic analysiscan also be used to infer that "sailor" and "hatch" implicate "dogs" as 1) in the nautical context and 2) an action applied to the object "hatch" (in this context, "dogs" is anauticalterm meaning "fastens (a watertight door) securely"). Schools commonly teach that there are 9parts of speechin English:noun,verb,article,adjective,preposition,pronoun,adverb,conjunction, andinterjection. However, there are clearly many more categories and sub-categories. For nouns, the plural, possessive, and singular forms can be distinguished. In many languages words are also marked for their "case" (role as subject, object, etc.),grammatical gender, and so on; while verbs are marked fortense,aspect, and other things. In some tagging systems, differentinflectionsof the same root word will get different parts of speech, resulting in a large number of tags. For example, NN for singular common nouns, NNS for plural common nouns, NP for singular proper nouns (see thePOS tagsused in the Brown Corpus). Other tagging systems use a smaller number of tags and ignore fine differences or model them asfeaturessomewhat independent from part-of-speech.[2] In part-of-speech tagging by computer, it is typical to distinguish from 50 to 150 separate parts of speech for English. Work onstochasticmethods for taggingKoine Greek(DeRose 1990) has used over 1,000 parts of speech and found that about as many words wereambiguousin that language as in English. A morphosyntactic descriptor in the case of morphologically rich languages is commonly expressed using very short mnemonics, such asNcmsanfor Category=Noun, Type = common, Gender = masculine, Number = singular, Case = accusative, Animate = no. The most popular "tag set" for POS tagging for American English is probably the Penn tag set, developed in the Penn Treebank project. It is largely similar to the earlier Brown Corpus and LOB Corpus tag sets, though much smaller. In Europe, tag sets from theEagles Guidelinessee wide use and include versions for multiple languages. POS tagging work has been done in a variety of languages, and the set of POS tags used varies greatly with language. 
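As a concrete illustration of tagging with the Penn Treebank tag set, the sketch below runs NLTK's off-the-shelf tagger (an assumed setup; NLTK and its tokenizer and tagger resources must be downloaded separately). The word "dogs" in the sample sentence is exactly the kind of ambiguous form discussed above, so the tagger may or may not resolve it to a verb.

```python
# Sketch: off-the-shelf POS tagging with NLTK and the Penn Treebank tag set.
# Assumes `pip install nltk` plus the tokenizer and tagger resources fetched
# via nltk.download(); these are assumptions, not part of the text above.
import nltk

sentence = "The old sailor dogs the hatch before the storm."
tokens = nltk.word_tokenize(sentence)
tagged = nltk.pos_tag(tokens)          # Penn Treebank tags: DT, NN, VBZ, ...
print(tagged)

# "dogs" is ambiguous between a plural noun (NNS) and a verb (VBZ);
# a statistical tagger decides from context and will not always be right.
```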
Tags usually are designed to include overt morphological distinctions, although this leads to inconsistencies such as case-marking for pronouns but not nouns in English, and much larger cross-language differences. The tag sets for heavily inflected languages such asGreekandLatincan be very large; taggingwordsinagglutinative languagessuch asInuit languagesmay be virtually impossible. At the other extreme, Petrov et al.[3]have proposed a "universal" tag set, with 12 categories (for example, no subtypes of nouns, verbs, punctuation, and so on). Whether a very small set of very broad tags or a much larger set of more precise ones is preferable, depends on the purpose at hand. Automatic tagging is easier on smaller tag-sets. Research on part-of-speech tagging has been closely tied tocorpus linguistics. The first major corpus of English for computer analysis was theBrown Corpusdeveloped atBrown UniversitybyHenry KučeraandW. Nelson Francis, in the mid-1960s. It consists of about 1,000,000 words of running English prose text, made up of 500 samples from randomly chosen publications. Each sample is 2,000 or more words (ending at the first sentence-end after 2,000 words, so that the corpus contains only complete sentences). TheBrown Corpuswas painstakingly "tagged" with part-of-speech markers over many years. A first approximation was done with a program by Greene and Rubin, which consisted of a huge handmade list of what categories could co-occur at all. For example, article then noun can occur, but article then verb (arguably) cannot. The program got about 70% correct. Its results were repeatedly reviewed and corrected by hand, and later users sent in errata so that by the late 70s the tagging was nearly perfect (allowing for some cases on which even human speakers might not agree). This corpus has been used for innumerable studies of word-frequency and of part-of-speech and inspired the development of similar "tagged" corpora in many other languages. Statistics derived by analyzing it formed the basis for most later part-of-speech tagging systems, such asCLAWSandVOLSUNGA. However, by this time (2005) it has been superseded by larger corpora such as the 100 million wordBritish National Corpus, even though larger corpora are rarely so thoroughly curated. For some time, part-of-speech tagging was considered an inseparable part ofnatural language processing, because there are certain cases where the correct part of speech cannot be decided without understanding thesemanticsor even thepragmaticsof the context. This is extremely expensive, especially because analyzing the higher levels is much harder when multiple part-of-speech possibilities must be considered for each word. In the mid-1980s, researchers in Europe began to usehidden Markov models(HMMs) to disambiguate parts of speech, when working to tag theLancaster-Oslo-Bergen Corpusof British English. HMMs involve counting cases (such as from the Brown Corpus) and making a table of the probabilities of certain sequences. For example, once you've seen an article such as 'the', perhaps the next word is a noun 40% of the time, an adjective 40%, and a number 20%. Knowing this, a program can decide that "can" in "the can" is far more likely to be a noun than a verb or a modal. The same method can, of course, be used to benefit from knowledge about the following words. More advanced ("higher-order") HMMs learn the probabilities not only of pairs but triples or even larger sequences. 
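The tagged Brown Corpus and the 12-category universal tag set of Petrov et al. mentioned above are both accessible through NLTK, which can map the fine-grained Brown tags onto the universal categories. A rough sketch, assuming NLTK plus its 'brown' and 'universal_tagset' resources are installed:

```python
# Sketch: comparing the fine-grained Brown tags with the 12-tag "universal"
# set of Petrov et al., via NLTK's built-in mapping. Assumes the 'brown' and
# 'universal_tagset' NLTK resources have been downloaded (an assumption).
from collections import Counter
from nltk.corpus import brown

fine = brown.tagged_words()                      # original Brown tags (NN, VBD, ...)
coarse = brown.tagged_words(tagset="universal")  # mapped to NOUN, VERB, DET, ...

print(fine[:5])
print(coarse[:5])

# Tag inventory sizes illustrate the trade-off discussed above
# (the universal set has 12 categories):
print("distinct Brown tags:    ", len(Counter(tag for _, tag in fine)))
print("distinct universal tags:", len(Counter(tag for _, tag in coarse)))
```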
So, for example, if you've just seen a noun followed by a verb, the next item may be very likely a preposition, article, or noun, but much less likely another verb. When several ambiguous words occur together, the possibilities multiply. However, it is easy to enumerate every combination and to assign a relative probability to each one, by multiplying together the probabilities of each choice in turn. The combination with the highest probability is then chosen. The European group developed CLAWS, a tagging program that did exactly this and achieved accuracy in the 93–95% range. Eugene Charniakpoints out inStatistical techniques for natural language parsing(1997)[4]that merely assigning the most common tag to each known word and the tag "proper noun" to all unknowns will approach 90% accuracy because many words are unambiguous, and many others only rarely represent their less-common parts of speech. CLAWS pioneered the field of HMM-based part of speech tagging but was quite expensive since it enumerated all possibilities. It sometimes had to resort to backup methods when there were simply too many options (the Brown Corpus contains a case with 17 ambiguous words in a row, and there are words such as "still" that can represent as many as 7 distinct parts of speech.[5] HMMs underlie the functioning of stochastic taggers and are used in various algorithms one of the most widely used being the bi-directional inference algorithm.[6] In 1987,Steven DeRose[7]and Kenneth W. Church[8]independently developeddynamic programmingalgorithms to solve the same problem in vastly less time. Their methods were similar to theViterbi algorithmknown for some time in other fields. DeRose used a table of pairs, while Church used a table of triples and a method of estimating the values for triples that were rare or nonexistent in the Brown Corpus (an actual measurement of triple probabilities would require a much larger corpus). Both methods achieved an accuracy of over 95%. DeRose's 1990 dissertation atBrown Universityincluded analyses of the specific error types, probabilities, and other related data, and replicated his work for Greek, where it proved similarly effective. These findings were surprisingly disruptive to the field of natural language processing. The accuracy reported was higher than the typical accuracy of very sophisticated algorithms that integrated part of speech choice with many higher levels of linguistic analysis: syntax, morphology, semantics, and so on. CLAWS, DeRose's and Church's methods did fail for some of the known cases where semantics is required, but those proved negligibly rare. This convinced many in the field that part-of-speech tagging could usefully be separated from the other levels of processing; this, in turn, simplified the theory and practice of computerized language analysis and encouraged researchers to find ways to separate other pieces as well. Markov Models became the standard method for the part-of-speech assignment. The methods already discussed involve working from a pre-existing corpus to learn tag probabilities. It is, however, also possible tobootstrapusing "unsupervised" tagging. Unsupervised tagging techniques use an untagged corpus for their training data and produce the tagset by induction. That is, they observe patterns in word use, and derive part-of-speech categories themselves. For example, statistics readily reveal that "the", "a", and "an" occur in similar contexts, while "eat" occurs in very different ones. 
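The procedure sketched earlier in this passage, enumerating every tag combination, multiplying the probabilities of each choice, and keeping the best, can be written down directly. The toy transition and emission probabilities below are invented for illustration; they are not taken from the Brown Corpus or from CLAWS.

```python
# Sketch of brute-force HMM decoding: score every tag sequence by multiplying
# transition and emission probabilities, then keep the best one. All numbers
# here are made up for illustration.
from itertools import product

tags = ["DET", "NOUN", "VERB", "MODAL"]

# P(tag | previous tag); "<s>" marks the start of the sentence.
trans = {
    ("<s>", "DET"): 0.5, ("<s>", "NOUN"): 0.3, ("<s>", "VERB"): 0.1, ("<s>", "MODAL"): 0.1,
    ("DET", "NOUN"): 0.7, ("DET", "VERB"): 0.1, ("DET", "DET"): 0.1, ("DET", "MODAL"): 0.1,
    ("NOUN", "VERB"): 0.4, ("NOUN", "NOUN"): 0.3, ("NOUN", "MODAL"): 0.2, ("NOUN", "DET"): 0.1,
    ("VERB", "DET"): 0.5, ("VERB", "NOUN"): 0.3, ("VERB", "VERB"): 0.1, ("VERB", "MODAL"): 0.1,
    ("MODAL", "VERB"): 0.8, ("MODAL", "NOUN"): 0.1, ("MODAL", "DET"): 0.05, ("MODAL", "MODAL"): 0.05,
}

# P(word | tag) for the words we care about.
emit = {
    ("the", "DET"): 0.9,
    ("can", "NOUN"): 0.3, ("can", "VERB"): 0.1, ("can", "MODAL"): 0.6,
    ("rusted", "VERB"): 0.5, ("rusted", "NOUN"): 0.01,
}

def score(words, tag_seq):
    p, prev = 1.0, "<s>"
    for w, t in zip(words, tag_seq):
        p *= trans.get((prev, t), 1e-6) * emit.get((w, t), 1e-6)
        prev = t
    return p

words = ["the", "can", "rusted"]
best = max(product(tags, repeat=len(words)), key=lambda seq: score(words, seq))
print(best, score(words, best))   # context favours NOUN for "can" after "the"
```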
With sufficient iteration, similarity classes of words emerge that are remarkably similar to those human linguists would expect; and the differences themselves sometimes suggest valuable new insights. These two categories can be further subdivided into rule-based, stochastic, and neural approaches. Some current major algorithms for part-of-speech tagging include theViterbi algorithm,Brill tagger,Constraint Grammar, and theBaum-Welch algorithm(also known as the forward-backward algorithm). Hidden Markov model andvisible Markov modeltaggers can both be implemented using the Viterbi algorithm. The rule-based Brill tagger is unusual in that it learns a set of rule patterns, and then applies those patterns rather than optimizing a statistical quantity. Manymachine learningmethods have also been applied to the problem of POS tagging. Methods such asSVM,maximum entropy classifier,perceptron, andnearest-neighborhave all been tried, and most can achieve accuracy above 95%.[citation needed] A direct comparison of several methods is reported (with references) at the ACL Wiki.[9]This comparison uses the Penn tag set on some of the Penn Treebank data, so the results are directly comparable. However, many significant taggers are not included (perhaps because of the labor involved in reconfiguring them for this particular dataset). Thus, it should not be assumed that the results reported here are the best that can be achieved with a given approach; nor even the best thathavebeen achieved with a given approach. In 2014, a paper reporting using thestructure regularization methodfor part-of-speech tagging, achieving 97.36% on a standard benchmark dataset.[10]
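The dynamic-programming alternative mentioned above (the Viterbi algorithm used by DeRose, Church, and most later HMM taggers) finds the same best sequence without enumerating every combination. A self-contained sketch with invented toy probabilities, not a reconstruction of any published tagger:

```python
# Sketch of Viterbi decoding for a bigram HMM tagger: the best path to each
# (position, tag) pair is computed once and reused, instead of enumerating all
# tag sequences. Probabilities are invented for illustration.
def viterbi(words, tags, trans, emit, start="<s>", floor=1e-6):
    # best[t] = (probability, tag sequence) of the best path ending in tag t
    best = {}
    for t in tags:
        best[t] = (trans.get((start, t), floor) * emit.get((words[0], t), floor), [t])
    for w in words[1:]:
        new_best = {}
        for t in tags:
            p, seq = max(
                (best[prev][0] * trans.get((prev, t), floor), best[prev][1])
                for prev in tags
            )
            new_best[t] = (p * emit.get((w, t), floor), seq + [t])
        best = new_best
    return max(best.values())

tags = ["DET", "NOUN", "VERB", "MODAL"]
trans = {("<s>", "DET"): 0.5, ("DET", "NOUN"): 0.7, ("DET", "MODAL"): 0.1,
         ("NOUN", "VERB"): 0.4, ("MODAL", "VERB"): 0.8}
emit = {("the", "DET"): 0.9, ("can", "NOUN"): 0.3, ("can", "MODAL"): 0.6,
        ("rusted", "VERB"): 0.5}

prob, seq = viterbi(["the", "can", "rusted"], tags, trans, emit)
print(seq, prob)   # e.g. ['DET', 'NOUN', 'VERB']
```

On a toy model like this it returns the same tag sequence as brute-force enumeration, but the work grows linearly rather than exponentially with sentence length.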
https://en.wikipedia.org/wiki/Part-of-speech_tagging
Information retrieval(IR) incomputingandinformation scienceis the task of identifying and retrievinginformation systemresources that are relevant to aninformation need. The information need can be specified in the form of a search query. In the case of document retrieval, queries can be based onfull-textor other content-based indexing. Information retrieval is thescience[1]of searching for information in a document, searching for documents themselves, and also searching for themetadatathat describes data, and fordatabasesof texts, images or sounds. Automated information retrieval systems are used to reduce what has been calledinformation overload. An IR system is a software system that provides access to books, journals and other documents; it also stores and manages those documents.Web search enginesare the most visible IR applications. An information retrieval process begins when a user enters a query into the system. Queries are formal statements of information needs, for example search strings in web search engines. In information retrieval, a query does not uniquely identify a single object in the collection. Instead, several objects may match the query, perhaps with different degrees ofrelevance. An object is an entity that is represented by information in a content collection ordatabase. User queries are matched against the database information. However, as opposed to classical SQL queries of a database, in information retrieval the results returned may or may not match the query, so results are typically ranked. Thisrankingof results is a key difference of information retrieval searching compared to database searching.[2] Depending on theapplicationthe data objects may be, for example, text documents, images,[3]audio,[4]mind maps[5]or videos. Often the documents themselves are not kept or stored directly in the IR system, but are instead represented in the system by document surrogates ormetadata. Most IR systems compute a numeric score on how well each object in the database matches the query, and rank the objects according to this value. The top ranking objects are then shown to the user. The process may then be iterated if the user wishes to refine the query.[6] there is ... a machine called the Univac ... whereby letters and figures are coded as a pattern of magnetic spots on a long steel tape. By this means the text of a document, preceded by its subject code symbol, can be recorded ... the machine ... automatically selects and types out those references which have been coded in any desired way at a rate of 120 words a minute The idea of using computers to search for relevant pieces of information was popularized in the articleAs We May ThinkbyVannevar Bushin 1945.[7]It would appear that Bush was inspired by patents for a 'statistical machine' – filed byEmanuel Goldbergin the 1920s and 1930s – that searched for documents stored on film.[8]The first description of a computer searching for information was described by Holmstrom in 1948,[9]detailing an early mention of theUnivaccomputer. Automated information retrieval systems were introduced in the 1950s: one even featured in the 1957 romantic comedyDesk Set. In the 1960s, the first large information retrieval research group was formed byGerard Saltonat Cornell. By the 1970s several different retrieval techniques had been shown to perform well on smalltext corporasuch as the Cranfield collection (several thousand documents).[7]Large-scale retrieval systems, such as the Lockheed Dialog system, came into use early in the 1970s. 
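The core loop described above, matching a free-text query against a collection, scoring each document, and returning a ranked list rather than a single exact answer, can be sketched with a toy in-memory inverted index. This is an illustration only, not the design of any particular IR system.

```python
# Toy ranked retrieval: build an inverted index over a tiny collection and
# score documents by how many query terms they contain (illustration only).
from collections import defaultdict

docs = {
    "d1": "the univac stored text on magnetic steel tape",
    "d2": "web search engines rank documents by relevance",
    "d3": "salton led an information retrieval research group at cornell",
}

# Inverted index: term -> set of document ids containing it.
index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.split():
        index[term].add(doc_id)

def search(query):
    scores = defaultdict(int)
    for term in query.lower().split():
        for doc_id in index.get(term, set()):
            scores[doc_id] += 1          # one point per matching query term
    # Ranked result list: best-matching documents first, partial matches allowed.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(search("information retrieval on the web"))
```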
In 1992, the US Department of Defense along with theNational Institute of Standards and Technology(NIST), cosponsored theText Retrieval Conference(TREC) as part of the TIPSTER text program. The aim of this was to look into the information retrieval community by supplying the infrastructure that was needed for evaluation of text retrieval methodologies on a very large text collection. This catalyzed research on methods thatscaleto huge corpora. The introduction ofweb search engineshas boosted the need for very large scale retrieval systems even further. By the late 1990s, the rise of the World Wide Web fundamentally transformed information retrieval. While early search engines such asAltaVista(1995) andYahoo!(1994) offered keyword-based retrieval, they were limited in scale and ranking refinement. The breakthrough came in 1998 with the founding ofGoogle, which introduced thePageRankalgorithm,[10]using the web’s hyperlink structure to assess page importance and improve relevance ranking. During the 2000s, web search systems evolved rapidly with the integration of machine learning techniques. These systems began to incorporate user behavior data (e.g., click-through logs), query reformulation, and content-based signals to improve search accuracy and personalization. In 2009,MicrosoftlaunchedBing, introducing features that would later incorporatesemanticweb technologies through the development of its Satori knowledge base. Academic analysis[11]have highlighted Bing’s semantic capabilities, including structured data use and entity recognition, as part of a broader industry shift toward improving search relevance and understanding user intent through natural language processing. A major leap occurred in 2018, when Google deployedBERT(BidirectionalEncoderRepresentations fromTransformers) to better understand the contextual meaning of queries and documents. This marked one of the first times deep neural language models were used at scale in real-world retrieval systems.[12]BERT’s bidirectional training enabled a more refined comprehension of word relationships in context, improving the handling of natural language queries. Because of its success, transformer-based models gained traction in academic research and commercial search applications.[13] Simultaneously, the research community began exploring neural ranking models that outperformed traditional lexical-based methods. Long-standing benchmarks such as theTextREtrievalConference (TREC), initiated in 1992, and more recent evaluation frameworks Microsoft MARCO(MAchineReadingCOmprehension) (2019)[14]became central to training and evaluating retrieval systems across multiple tasks and domains. MS MARCO has also been adopted in the TREC Deep Learning Tracks, where it serves as a core dataset for evaluating advances in neural ranking models within a standardized benchmarking environment.[15] As deep learning became integral to information retrieval systems, researchers began to categorize neural approaches into three broad classes:sparse,dense, andhybridmodels. 
Sparse models, including traditional term-based methods and learned variants like SPLADE, rely on interpretable representations and inverted indexes to enable efficient exact term matching with added semantic signals.[16]Dense models, such as dual-encoder architectures like ColBERT, use continuous vector embeddings to support semantic similarity beyond keyword overlap.[17]Hybrid models aim to combine the advantages of both, balancing the lexical (token) precision of sparse methods with the semantic depth of dense models. This way of categorizing models balances scalability, relevance, and efficiency in retrieval systems.[18] As IR systems increasingly rely on deep learning, concerns around bias, fairness, and explainability have also come to the picture. Research is now focused not just on relevance and efficiency, but on transparency, accountability, and user trust in retrieval algorithms. Areas where information retrieval techniques are employed include (the entries are in alphabetical order within each category): Methods/Techniques in which information retrieval techniques are employed include: In order to effectively retrieve relevant documents by IR strategies, the documents are typically transformed into a suitable representation. Each retrieval strategy incorporates a specific model for its document representation purposes. The picture on the right illustrates the relationship of some common models. In the picture, the models are categorized according to two dimensions: the mathematical basis and the properties of the model. In addition to the theoretical distinctions, modern information retrieval models are also categorized on how queries and documents are represented and compared, using a practical classification distinguishing between sparse, dense and hybrid models.[19] This classification has become increasingly common in both academic and the real world applications and is getting widely adopted and used in evaluation benchmarks for Information Retrieval models.[23][20] The evaluation of an information retrieval system' is the process of assessing how well a system meets the information needs of its users. In general, measurement considers a collection of documents to be searched and a search query. Traditional evaluation metrics, designed forBoolean retrieval[clarification needed]or top-k retrieval, includeprecision and recall. All measures assume aground truthnotion of relevance: every document is known to be either relevant or non-relevant to a particular query. In practice, queries may beill-posedand there may be different shades of relevance.
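One simple way to realize the hybrid idea is late fusion: take one score per document from a sparse (lexical) ranker and one from a dense (embedding) ranker, normalize both, and interpolate. The scores below are invented placeholders; a real system would plug in, for example, a BM25-style lexical score and an embedding cosine similarity.

```python
# Sketch of hybrid retrieval by score fusion: min-max normalize a sparse score
# and a dense score per document, then interpolate. All scores are invented.
def minmax(scores):
    lo, hi = min(scores.values()), max(scores.values())
    return {d: (s - lo) / (hi - lo) if hi > lo else 0.0 for d, s in scores.items()}

sparse_scores = {"d1": 12.3, "d2": 4.1, "d3": 0.2}    # e.g. lexical/BM25-style scores
dense_scores  = {"d1": 0.41, "d2": 0.78, "d3": 0.75}  # e.g. embedding cosine similarities

alpha = 0.5   # weight on the sparse (lexical) component
s, d = minmax(sparse_scores), minmax(dense_scores)
hybrid = {doc: alpha * s[doc] + (1 - alpha) * d[doc] for doc in sparse_scores}

for doc, score in sorted(hybrid.items(), key=lambda kv: kv[1], reverse=True):
    print(doc, round(score, 3))
```

The interpolation weight alpha is a tuning choice; in practice it would be set on held-out relevance judgements.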
https://en.wikipedia.org/wiki/Information_retrieval
Vector space modelorterm vector modelis an algebraic model for representing text documents (or more generally, items) asvectorssuch that the distance between vectors represents the relevance between the documents. It is used ininformation filtering,information retrieval,indexingand relevancy rankings. Its first use was in theSMART Information Retrieval System.[1] In this section we consider a particular vector space model based on thebag-of-wordsrepresentation. Documents and queries are represented as vectors. Eachdimensioncorresponds to a separate term. If a term occurs in the document, its value in the vector is non-zero. Several different ways of computing these values, also known as (term) weights, have been developed. One of the best known schemes istf-idfweighting (see the example below). The definition oftermdepends on the application. Typically terms are single words,keywords, or longer phrases. If words are chosen to be the terms, the dimensionality of the vector is the number of words in the vocabulary (the number of distinct words occurring in thecorpus). Vector operations can be used to compare documents with queries.[2] Candidate documents from the corpus can be retrieved and ranked using a variety of methods.Relevancerankingsof documents in a keyword search can be calculated, using the assumptions ofdocument similaritiestheory, by comparing the deviation of angles between each document vector and the original query vector where the query is represented as a vector with same dimension as the vectors that represent the other documents. In practice, it is easier to calculate thecosineof the angle between the vectors, instead of the angle itself: Whered2⋅q{\displaystyle \mathbf {d_{2}} \cdot \mathbf {q} }is the intersection (i.e. thedot product) of the document (d2in the figure to the right) and the query (q in the figure) vectors,‖d2‖{\displaystyle \left\|\mathbf {d_{2}} \right\|}is the norm of vector d2, and‖q‖{\displaystyle \left\|\mathbf {q} \right\|}is the norm of vector q. Thenormof a vector is calculated as such: Using the cosine the similarity between documentdjand queryqcan be calculated as: As all vectors under consideration by this model are element-wise nonnegative, a cosine value of zero means that the query and document vector areorthogonaland have no match (i.e. the query term does not exist in the document being considered). Seecosine similarityfor further information.[2] In the classic vector space model proposed bySalton, Wong and Yang[3]the term-specific weights in the document vectors are products of local and global parameters. The model is known asterm frequency-inverse document frequencymodel. The weight vector for documentdisvd=[w1,d,w2,d,…,wN,d]T{\displaystyle \mathbf {v} _{d}=[w_{1,d},w_{2,d},\ldots ,w_{N,d}]^{T}}, where and The vector space model has the following advantages over theStandard Boolean model: Most of these advantages are a consequence of the difference in the density of the document collection representation between Boolean and term frequency-inverse document frequency approaches. When using Boolean weights, any document lies in a vertex in a n-dimensionalhypercube. Therefore, the possible document representations are2n{\displaystyle 2^{n}}and the maximum Euclidean distance between pairs isn{\displaystyle {\sqrt {n}}}. As documents are added to the document collection, the region defined by the hypercube's vertices become more populated and hence denser. 
Unlike Boolean, when a document is added using term frequency-inverse document frequency weights, the inverse document frequencies of the terms in the new document decrease while those of the remaining terms increase. On average, as documents are added, the region where documents lie expands, regulating the density of the entire collection representation. This behavior models the original motivation of Salton and his colleagues: a document collection represented in a low-density region could yield better retrieval results. The vector space model has the following limitations: Many of these difficulties can, however, be overcome by the integration of various tools, including mathematical techniques such as singular value decomposition and lexical databases such as WordNet. Models based on and extending the vector space model include: The following software packages may be of interest to those wishing to experiment with vector models and implement search services based upon them.
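The tf-idf weighting and cosine ranking described in this section are available off the shelf in scikit-learn (an assumed dependency; any equivalent library would do). A minimal sketch:

```python
# Sketch: tf-idf document vectors and cosine ranking with scikit-learn
# (scikit-learn is an assumed dependency; the documents are invented).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "the quick brown fox jumps over the lazy dog",
    "a quick brown dog outpaces a lazy fox",
    "vector space models rank documents by cosine similarity",
]
query = ["quick fox"]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(docs)   # one tf-idf vector per document
query_vector = vectorizer.transform(query)     # query in the same term space

scores = cosine_similarity(query_vector, doc_vectors)[0]
ranking = sorted(zip(scores, range(len(docs))), reverse=True)
for score, i in ranking:
    print(f"doc {i}: cosine = {score:.3f}")
```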
https://en.wikipedia.org/wiki/Vector_space_model
In pattern recognition, informationretrieval,object detectionandclassification (machine learning),precisionandrecallare performance metrics that apply to data retrieved from acollection,corpusorsample space. Precision(also calledpositive predictive value) is the fraction of relevant instances among the retrieved instances. Written as a formula: Precision=Relevant retrieved instancesAllretrievedinstances{\displaystyle {\text{Precision}}={\frac {\text{Relevant retrieved instances}}{{\text{All }}{\textbf {retrieved}}{\text{ instances}}}}} Recall(also known assensitivity) is the fraction of relevant instances that were retrieved. Written as a formula: Recall=Relevant retrieved instancesAllrelevantinstances{\displaystyle {\text{Recall}}={\frac {\text{Relevant retrieved instances}}{{\text{All }}{\textbf {relevant}}{\text{ instances}}}}} Both precision and recall are therefore based onrelevance. Consider a computer program for recognizing dogs (therelevantelement) in a digital photograph. Upon processing a picture which contains ten cats and twelve dogs, the program identifies eight dogs. Of the eight elements identified as dogs, only five actually are dogs (true positives), while the other three are cats (false positives). Seven dogs were missed (false negatives), and seven cats were correctly excluded (true negatives). The program's precision is then 5/8 (true positives / selected elements) while its recall is 5/12 (true positives / relevant elements). Adopting ahypothesis-testingapproach, where in this case, thenull hypothesisis that a given item isirrelevant(not a dog), absence oftype I and type II errors(perfectspecificity and sensitivity) corresponds respectively to perfect precision (no false positives) and perfect recall (no false negatives). More generally, recall is simply the complement of the type II error rate (i.e., one minus the type II error rate). Precision is related to the type I error rate, but in a slightly more complicated way, as it also depends upon theprior distributionof seeing a relevant vs. an irrelevant item. The above cat and dog example contained 8 − 5 = 3 type I errors (false positives) out of 10 total cats (true negatives), for a type I error rate of 3/10, and 12 − 5 = 7 type II errors (false negatives), for a type II error rate of 7/12. Precision can be seen as a measure of quality, and recall as a measure of quantity. Higher precision means that an algorithm returns more relevant results than irrelevant ones, and high recall means that an algorithm returns most of the relevant results (whether or not irrelevant ones are also returned). In aclassificationtask, the precision for a class is thenumber of true positives(i.e. the number of items correctly labelled as belonging to the positive class)divided by the total number of elements labelled as belonging to the positive class(i.e. the sum of true positives andfalse positives, which are items incorrectly labelled as belonging to the class). Recall in this context is defined as thenumber of true positives divided by the total number of elements that actually belong to the positive class(i.e. the sum of true positives andfalse negatives, which are items which were not labelled as belonging to the positive class but should have been). Precision and recall are not particularly useful metrics when used in isolation. For instance, it is possible to have perfect recall by simply retrieving every single item. 
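The dog-recognition example can be reproduced directly: 12 dogs (the relevant class) and 10 cats, of which 5 dogs and 3 cats are labelled "dog" by the program. Using scikit-learn (an assumed dependency) gives the same 5/8 and 5/12 figures:

```python
# Reproducing the dog/cat example: 12 dogs (positive class) and 10 cats,
# with 5 true positives, 3 false positives, 7 false negatives, 7 true negatives.
# scikit-learn is an assumed dependency.
from sklearn.metrics import precision_score, recall_score

y_true = [1] * 12 + [0] * 10                     # 12 dogs, 10 cats
y_pred = [1] * 5 + [0] * 7 + [1] * 3 + [0] * 7   # the program's "dog" decisions

print("precision:", precision_score(y_true, y_pred))  # 5 / 8  = 0.625
print("recall:   ", recall_score(y_true, y_pred))     # 5 / 12 ≈ 0.417
```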
Likewise, it is possible to achieve perfect precision by selecting only a very small number of extremely likely items. In a classification task, a precision score of 1.0 for a class C means that every item labelled as belonging to class C does indeed belong to class C (but says nothing about the number of items from class C that were not labelled correctly) whereas a recall of 1.0 means that every item from class C was labelled as belonging to class C (but says nothing about how many items from other classes were incorrectly also labelled as belonging to class C). Often, there is an inverse relationship between precision and recall, where it is possible to increase one at the cost of reducing the other, but context may dictate if one is more valued in a given situation: A smoke detector is generally designed to commit many Type I errors (to alert in many situations when there is no danger), because the cost of a Type II error (failing to sound an alarm during a major fire) is prohibitively high. As such, smoke detectors are designed with recall in mind (to catch all real danger), even while giving little weight to the losses in precision (and making many false alarms). In the other direction,Blackstone's ratio, "It is better that ten guilty persons escape than that one innocent suffer," emphasizes the costs of a Type I error (convicting an innocent person). As such, the criminal justice system is geared toward precision (not convicting innocents), even at the cost of losses in recall (letting more guilty people go free). A brain surgeon removing a cancerous tumor from a patient's brain illustrates the tradeoffs as well: The surgeon needs to remove all of the tumor cells since any remaining cancer cells will regenerate the tumor. Conversely, the surgeon must not remove healthy brain cells since that would leave the patient with impaired brain function. The surgeon may be more liberal in the area of the brain they remove to ensure they have extracted all the cancer cells. This decision increases recall but reduces precision. On the other hand, the surgeon may be more conservative in the brain cells they remove to ensure they extract only cancer cells. This decision increases precision but reduces recall. That is to say, greater recall increases the chances of removing healthy cells (negative outcome) and increases the chances of removing all cancer cells (positive outcome). Greater precision decreases the chances of removing healthy cells (positive outcome) but also decreases the chances of removing all cancer cells (negative outcome). Usually, precision and recall scores are not discussed in isolation. Aprecision-recall curveplots precision as a function of recall; usually precision will decrease as the recall increases. Alternatively, values for one measure can be compared for a fixed level at the other measure (e.g.precision at a recall level of 0.75) or both are combined into a single measure. 
Examples of measures that are a combination of precision and recall are theF-measure(the weightedharmonic meanof precision and recall), or theMatthews correlation coefficient, which is ageometric meanof the chance-corrected variants: theregression coefficientsInformedness(DeltaP') andMarkedness(DeltaP).[1][2]Accuracyis a weighted arithmetic mean of Precision and Inverse Precision (weighted by Bias) as well as a weighted arithmetic mean of Recall and Inverse Recall (weighted by Prevalence).[1]Inverse Precision and Inverse Recall are simply the Precision and Recall of the inverse problem where positive and negative labels are exchanged (for both real classes and prediction labels).True Positive RateandFalse Positive Rate, or equivalently Recall and 1 - Inverse Recall, are frequently plotted against each other asROCcurves and provide a principled mechanism to explore operating point tradeoffs. Outside of Information Retrieval, the application of Recall, Precision and F-measure are argued to be flawed as they ignore the true negative cell of thecontingency table, and they are easily manipulated by biasing the predictions.[1]The first problem is 'solved' by usingAccuracyand the second problem is 'solved' by discounting the chance component and renormalizing toCohen's kappa, but this no longer affords the opportunity to explore tradeoffs graphically. However,InformednessandMarkednessare Kappa-like renormalizations of Recall and Precision,[3]and their geometric meanMatthews correlation coefficientthus acts like a debiased F-measure. For classification tasks, the termstrue positives,true negatives,false positives, andfalse negativescompare the results of the classifier under test with trusted external judgments. The termspositiveandnegativerefer to the classifier's prediction (sometimes known as theexpectation), and the termstrueandfalserefer to whether that prediction corresponds to the external judgment (sometimes known as theobservation). Let us define an experiment fromPpositive instances andNnegative instances for some condition. The four outcomes can be formulated in a 2×2contingency tableorconfusion matrix, as follows: Precision and recall are then defined as:[12] Precision=tptp+fpRecall=tptp+fn{\displaystyle {\begin{aligned}{\text{Precision}}&={\frac {tp}{tp+fp}}\\{\text{Recall}}&={\frac {tp}{tp+fn}}\,\end{aligned}}} Recall in this context is also referred to as the true positive rate orsensitivity, and precision is also referred to aspositive predictive value(PPV); other related measures used in classification include true negative rate andaccuracy.[12]True negative rate is also calledspecificity. True negative rate=tntn+fp{\displaystyle {\text{True negative rate}}={\frac {tn}{tn+fp}}\,} Both precision and recall may be useful in cases where there is imbalanced data. However, it may be valuable to prioritize one metric over the other in cases where the outcome of a false positive or false negative is costly. For example, in medical diagnosis, a false positive test can lead to unnecessary treatment and expenses. In this situation, it is useful to value precision over recall. In other cases, the cost of a false negative is high, and recall may be a more valuable metric. 
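The statement earlier in this passage that the Matthews correlation coefficient is a geometric mean of informedness (DeltaP') and markedness (DeltaP) can be checked numerically on the dog/cat confusion matrix from the earlier example (TP = 5, FP = 3, FN = 7, TN = 7):

```python
# Numerical check: MCC as the (signed) geometric mean of informedness and
# markedness, using the dog/cat counts from the earlier example.
from math import sqrt

TP, FP, FN, TN = 5, 3, 7, 7

recall = TP / (TP + FN)              # true positive rate
specificity = TN / (TN + FP)         # true negative rate (inverse recall)
precision = TP / (TP + FP)           # positive predictive value
npv = TN / (TN + FN)                 # negative predictive value (inverse precision)

informedness = recall + specificity - 1          # DeltaP'
markedness = precision + npv - 1                 # DeltaP

mcc = (TP * TN - FP * FN) / sqrt((TP + FP) * (TP + FN) * (TN + FP) * (TN + FN))

print(round(informedness, 4), round(markedness, 4))
print(round(mcc, 4), round(sqrt(informedness * markedness), 4))  # both ≈ 0.1208
```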
For instance, the cost of a false negative in fraud detection is high, as failing to detect a fraudulent transaction can result in significant financial loss.[13] Precision and recall can be interpreted as (estimated)conditional probabilities:[14]Precision is given byP(C=P|C^=P){\displaystyle \mathbb {P} (C=P|{\hat {C}}=P)}while recall is given byP(C^=P|C=P){\displaystyle \mathbb {P} ({\hat {C}}=P|C=P)},[15]whereC^{\displaystyle {\hat {C}}}is the predicted class andC{\displaystyle C}is the actual class (i.e.C=P{\displaystyle C=P}means the actual class is positive). Both quantities are, therefore, connected byBayes' theorem. The probabilistic interpretation allows to easily derive how a no-skill classifier would perform. A no-skill classifiers is defined by the property that the joint probabilityP(C=P,C^=P)=P(C=P)P(C^=P){\displaystyle \mathbb {P} (C=P,{\hat {C}}=P)=\mathbb {P} (C=P)\mathbb {P} ({\hat {C}}=P)}is just the product of the unconditional probabilities since the classification and the presence of the class areindependent. For example the precision of a no-skill classifier is simply a constantP(C=P|C^=P)=P(C=P,C^=P)P(C^=P)=P(C=P),{\displaystyle \mathbb {P} (C=P|{\hat {C}}=P)={\frac {\mathbb {P} (C=P,{\hat {C}}=P)}{\mathbb {P} ({\hat {C}}=P)}}=\mathbb {P} (C=P),}i.e. determined by the probability/frequency with which the class P occurs. A similar argument can be made for the recall:P(C^=P|C=P)=P(C=P,C^=P)P(C=P)=P(C^=P){\displaystyle \mathbb {P} ({\hat {C}}=P|C=P)={\frac {\mathbb {P} (C=P,{\hat {C}}=P)}{\mathbb {P} (C=P)}}=\mathbb {P} ({\hat {C}}=P)}which is the probability for a positive classification. Accuracy=TP+TNTP+TN+FP+FN{\displaystyle {\text{Accuracy}}={\frac {TP+TN}{TP+TN+FP+FN}}\,} Accuracy can be a misleading metric for imbalanced data sets. Consider a sample with 95 negative and 5 positive values. Classifying all values as negative in this case gives 0.95 accuracy score. There are many metrics that don't suffer from this problem. For example, balanced accuracy[16](bACC) normalizes true positive and true negative predictions by the number of positive and negative samples, respectively, and divides their sum by two: Balanced accuracy=TPR+TNR2{\displaystyle {\text{Balanced accuracy}}={\frac {TPR+TNR}{2}}\,} For the previous example (95 negative and 5 positive samples), classifying all as negative gives 0.5 balanced accuracy score (the maximum bACC score is one), which is equivalent to the expected value of a random guess in a balanced data set. Balanced accuracy can serve as an overall performance metric for a model, whether or not the true labels are imbalanced in the data, assuming the cost of FN is the same as FP. The TPR and FPR are a property of a given classifier operating at a specific threshold. However, the overall number of TPs, FPsetcdepend on the class imbalance in the data via the class ratior=P/N{\textstyle r=P/N}. As the recall (or TPR) depends only on positive cases, it is not affected byr{\textstyle r}, but the precision is. We have that Precision=TPTP+FP=P⋅TPRP⋅TPR+N⋅FPR=TPRTPR+1rFPR.{\displaystyle {\text{Precision}}={\frac {TP}{TP+FP}}={\frac {P\cdot TPR}{P\cdot TPR+N\cdot FPR}}={\frac {TPR}{TPR+{\frac {1}{r}}FPR}}.} Thus the precision has an explicit dependence onr{\textstyle r}.[17]Starting with balanced classes atr=1{\textstyle r=1}and gradually decreasingr{\textstyle r}, the corresponding precision will decrease, because the denominator increases. 
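The imbalanced-data example above (95 negatives, 5 positives, everything classified as negative) is easy to verify; scikit-learn (an assumed dependency) provides both metrics:

```python
# Verifying the imbalanced-data example: a classifier that predicts "negative"
# for all 100 samples scores 0.95 accuracy but only 0.5 balanced accuracy.
# scikit-learn is an assumed dependency.
from sklearn.metrics import accuracy_score, balanced_accuracy_score

y_true = [0] * 95 + [1] * 5   # 95 negative, 5 positive samples
y_pred = [0] * 100            # degenerate classifier: always predict negative

print("accuracy:         ", accuracy_score(y_true, y_pred))           # 0.95
print("balanced accuracy:", balanced_accuracy_score(y_true, y_pred))  # 0.5
```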
Another metric is the predicted positive condition rate (PPCR), which identifies the percentage of the total population that is flagged. For example, for a search engine that returns 30 results (retrieved documents) out of 1,000,000 documents, the PPCR is 0.003%. Predicted positive condition rate=TP+FPTP+FP+TN+FN{\displaystyle {\text{Predicted positive condition rate}}={\frac {TP+FP}{TP+FP+TN+FN}}\,} According to Saito and Rehmsmeier, precision-recall plots are more informative than ROC plots when evaluating binary classifiers on imbalanced data. In such scenarios, ROC plots may be visually deceptive with respect to conclusions about the reliability of classification performance.[18] Different from the above approaches, if an imbalance scaling is applied directly by weighting the confusion matrix elements, the standard metrics definitions still apply even in the case of imbalanced datasets.[19]The weighting procedure relates the confusion matrix elements to the support set of each considered class. A measure that combines precision and recall is theharmonic meanof precision and recall, the traditional F-measure or balanced F-score: F=2⋅precision⋅recallprecision+recall{\displaystyle F=2\cdot {\frac {\mathrm {precision} \cdot \mathrm {recall} }{\mathrm {precision} +\mathrm {recall} }}} This measure is approximately the average of the two when they are close, and is more generally theharmonic mean, which, for the case of two numbers, coincides with the square of thegeometric meandivided by thearithmetic mean. There are several reasons that the F-score can be criticized, in particular circumstances, due to its bias as an evaluation metric.[1]This is also known as theF1{\displaystyle F_{1}}measure, because recall and precision are evenly weighted. It is a special case of the generalFβ{\displaystyle F_{\beta }}measure (for non-negative real values ofβ{\displaystyle \beta }): Fβ=(1+β2)⋅precision⋅recallβ2⋅precision+recall{\displaystyle F_{\beta }=(1+\beta ^{2})\cdot {\frac {\mathrm {precision} \cdot \mathrm {recall} }{\beta ^{2}\cdot \mathrm {precision} +\mathrm {recall} }}} Two other commonly usedF{\displaystyle F}measures are theF2{\displaystyle F_{2}}measure, which weights recall higher than precision, and theF0.5{\displaystyle F_{0.5}}measure, which puts more emphasis on precision than recall. The F-measure was derived by van Rijsbergen (1979) so thatFβ{\displaystyle F_{\beta }}"measures the effectiveness of retrieval with respect to a user who attachesβ{\displaystyle \beta }times as much importance to recall as precision". It is based on van Rijsbergen's effectiveness measureEα=1−1αP+1−αR{\displaystyle E_{\alpha }=1-{\frac {1}{{\frac {\alpha }{P}}+{\frac {1-\alpha }{R}}}}}, the second term being the weighted harmonic mean of precision and recall with weights(α,1−α){\displaystyle (\alpha ,1-\alpha )}. Their relationship isFβ=1−Eα{\displaystyle F_{\beta }=1-E_{\alpha }}whereα=11+β2{\displaystyle \alpha ={\frac {1}{1+\beta ^{2}}}}. There are other parameters and strategies for performance metric of information retrieval system, such as the area under theROC curve(AUC)[20]orpseudo-R-squared. Precision and recall values can also be calculated for classification problems with more than two classes.[21]To obtain the precision for a given class, we divide the number of true positives by the classifier bias towards this class (number of times that the classifier has predicted the class). 
To calculate the recall for a given class, we divide the number of true positives by the prevalence of this class (number of times that the class occurs in the data sample). The class-wise precision and recall values can then be combined into an overall multi-class evaluation score, e.g., using the macro F1 metric.[21]
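The per-class divisions just described, true positives over the classifier's bias towards a class for precision and over the class prevalence for recall, are what scikit-learn's precision_recall_fscore_support returns, and macro averaging combines them. A small sketch with invented labels (scikit-learn is an assumed dependency):

```python
# Per-class precision/recall for a small 3-class example, plus their macro
# average. scikit-learn is an assumed dependency; the labels are invented.
from sklearn.metrics import precision_recall_fscore_support

y_true = ["cat", "cat", "cat", "dog", "dog", "bird", "bird", "bird", "bird", "dog"]
y_pred = ["cat", "dog", "cat", "dog", "cat", "bird", "bird", "dog", "bird", "dog"]

labels = ["bird", "cat", "dog"]
p, r, f1, support = precision_recall_fscore_support(y_true, y_pred, labels=labels)
for lab, pi, ri, ni in zip(labels, p, r, support):
    print(f"{lab:5s} precision={pi:.2f} recall={ri:.2f} support={ni}")

macro_p, macro_r, macro_f1, _ = precision_recall_fscore_support(
    y_true, y_pred, labels=labels, average="macro")
print("macro precision:", round(macro_p, 3), "macro recall:", round(macro_r, 3))
```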
https://en.wikipedia.org/wiki/Precision_and_recall
Instatisticalanalysis ofbinary classificationandinformation retrievalsystems, theF-scoreorF-measureis a measure of predictive performance. It is calculated from theprecisionandrecallof the test, where the precision is the number of true positive results divided by the number of all samples predicted to be positive, including those not identified correctly, and the recall is the number of true positive results divided by the number of all samples that should have been identified as positive. Precision is also known aspositive predictive value, and recall is also known assensitivityin diagnostic binary classification. TheF1score is theharmonic meanof the precision and recall. It thus symmetrically represents both precision and recall in one metric. The more genericFβ{\displaystyle F_{\beta }}score applies additional weights, valuing one of precision or recall more than the other. The highest possible value of an F-score is 1.0, indicating perfect precision and recall, and the lowest possible value is 0, if the precision or the recall is zero. The name F-measure is believed to be named after a different F function in Van Rijsbergen's book, when introduced to the FourthMessage Understanding Conference(MUC-4, 1992).[1] The traditional F-measure or balanced F-score (F1score) is theharmonic meanof precision and recall:[2] Withprecision = TP / (TP + FP)andrecall = TP / (TP + FN), it follows that the numerator ofF1is the sum of their numerators and the denominator ofF1is the sum of their denominators. A more general F score,Fβ{\displaystyle F_{\beta }}, that uses a positive real factorβ{\displaystyle \beta }, whereβ{\displaystyle \beta }is chosen such that recall is consideredβ{\displaystyle \beta }times as important as precision, is: In terms ofType I and type II errorsthis becomes: Two commonly used values forβ{\displaystyle \beta }are 2, which weighs recall higher than precision, and 0.5, which weighs recall lower than precision. The F-measure was derived so thatFβ{\displaystyle F_{\beta }}"measures the effectiveness of retrieval with respect to a user who attachesβ{\displaystyle \beta }times as much importance to recall as precision".[3]It is based onVan Rijsbergen's effectiveness measure Their relationship is:Fβ=1−E{\displaystyle F_{\beta }=1-E}whereα=11+β2{\displaystyle \alpha ={\frac {1}{1+\beta ^{2}}}} This is related to the field ofbinary classificationwhere recall is often termed "sensitivity". Precision-recall curve, and thus theFβ{\displaystyle F_{\beta }}score, explicitly depends on the ratior{\displaystyle r}of positive to negative test cases.[12]This means that comparison of the F-score across different problems with differing class ratios is problematic. One way to address this issue (see e.g., Siblini et al., 2020[13]) is to use a standard class ratior0{\displaystyle r_{0}}when making such comparisons. The F-score is often used in the field ofinformation retrievalfor measuringsearch,document classification, andquery classificationperformance.[14]It is particularly relevant in applications which are primarily concerned with the positive class and where the positive class is rare relative to the negative class. Earlier works focused primarily on the F1score, but with the proliferation of large scale search engines, performance goals changed to place more emphasis on either precision or recall[15]and soFβ{\displaystyle F_{\beta }}is seen in wide application. 
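The Fβ definition above is straightforward to evaluate. The sketch below computes F0.5, F1 and F2 from a precision/recall pair and cross-checks one value against scikit-learn's fbeta_score computed from labels (scikit-learn is an assumed dependency; all numbers are invented).

```python
# F-beta from a precision/recall pair, cross-checked against scikit-learn's
# fbeta_score computed from labels (scikit-learn is an assumed dependency;
# all numbers are invented for illustration).
from sklearn.metrics import fbeta_score

def f_beta(precision, recall, beta):
    return (1 + beta**2) * precision * recall / (beta**2 * precision + recall)

precision, recall = 0.6, 0.8
for beta in (0.5, 1.0, 2.0):   # beta > 1 weights recall higher, beta < 1 precision
    print(f"F_{beta} =", round(f_beta(precision, recall, beta), 4))

# Labels realising the same precision (12/20) and recall (12/15):
y_true = [1] * 15 + [0] * 8             # 15 relevant items, 8 irrelevant
y_pred = [1] * 12 + [0] * 3 + [1] * 8   # 12 TP, 3 FN, 8 FP
print("sklearn F_2 =", round(fbeta_score(y_true, y_pred, beta=2.0), 4))  # 0.75
```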
The F-score is also used inmachine learning.[16]However, the F-measures do not take true negatives into account, hence measures such as theMatthews correlation coefficient,InformednessorCohen's kappamay be preferred to assess the performance of a binary classifier.[17] The F-score has been widely used in the natural language processing literature,[18]such as in the evaluation ofnamed entity recognitionandword segmentation. The F1score is theDice coefficientof the set of retrieved items and the set of relevant items.[19] David Handand others criticize the widespread use of the F1score since it gives equal importance to precision and recall. In practice, different types of mis-classifications incur different costs. In other words, the relative importance of precision and recall is an aspect of the problem.[22] According to Davide Chicco and Giuseppe Jurman, the F1score is less truthful and informative than theMatthews correlation coefficient (MCC)in binary evaluation classification.[23] David M W Powershas pointed out that F1ignores the True Negatives and thus is misleading for unbalanced classes, while kappa and correlation measures are symmetric and assess both directions of predictability - the classifier predicting the true class and the true class predicting the classifier prediction, proposing separate multiclass measuresInformednessandMarkednessfor the two directions, noting that their geometric mean is correlation.[24] Another source of critique of F1is its lack of symmetry. It means it may change its value when dataset labeling is changed - the "positive" samples are named "negative" and vice versa. This criticism is met by theP4 metricdefinition, which is sometimes indicated as a symmetrical extension of F1.[25] Finally, Ferrer[26]and Dyrland et al.[27]argue that the expected cost (or its counterpart, the expected utility) is the only principled metric for evaluation of classification decisions, having various advantages over the F-score and the MCC. Both works show that the F-score can result in wrong conclusions about the absolute and relative quality of systems. While the F-measure is theharmonic meanof recall and precision, theFowlkes–Mallows indexis theirgeometric mean.[28] The F-score is also used for evaluating classification problems with more than two classes (Multiclass classification). A common method is to average the F-score over each class, aiming at a balanced measurement of performance.[29] Macro F1is a macro-averaged F1 score aiming at a balanced performance measurement. To calculate macro F1, two different averaging-formulas have been used: the F1 score of (arithmetic) class-wise precision and recall means or the arithmetic mean of class-wise F1 scores, where the latter exhibits more desirable properties.[30] Micro F1is the harmonic mean ofmicro precisionandmicro recall. In single-label multi-class classification, micro precision equals micro recall, thus micro F1 is equal to both. However, contrary to a common misconception, micro F1 does not generally equalaccuracy, because accuracy takes true negatives into account while micro F1 does not.[31]
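The two macro-averaging formulas mentioned above generally give different numbers. A small check on invented per-class precision and recall values; the arithmetic mean of class-wise F1 scores is the variant implemented by, for example, scikit-learn's f1_score(average='macro').

```python
# The two "macro F1" definitions compared on invented per-class precision and
# recall values for three classes.
precisions = [1.00, 0.67, 0.50]
recalls    = [0.75, 0.67, 0.67]

def f1(p, r):
    return 2 * p * r / (p + r) if (p + r) else 0.0

# Variant 1: arithmetic mean of the class-wise F1 scores.
macro_f1_mean_of_f1 = sum(f1(p, r) for p, r in zip(precisions, recalls)) / len(precisions)

# Variant 2: F1 of the class-wise mean precision and mean recall.
mean_p = sum(precisions) / len(precisions)
mean_r = sum(recalls) / len(recalls)
macro_f1_of_means = f1(mean_p, mean_r)

print(round(macro_f1_mean_of_f1, 4), round(macro_f1_of_means, 4))  # slightly different
```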
https://en.wikipedia.org/wiki/F1_score
Cohen's kappa coefficient (κ, lowercase Greek kappa) is a statistic that is used to measure inter-rater reliability (and also intra-rater reliability) for qualitative (categorical) items.[1] It is generally thought to be a more robust measure than simple percent agreement calculation, as κ takes into account the possibility of the agreement occurring by chance. There is controversy surrounding Cohen's kappa due to the difficulty in interpreting indices of agreement. Some researchers have suggested that it is conceptually simpler to evaluate disagreement between items.[2]

The first mention of a kappa-like statistic is attributed to Galton in 1892.[3][4] The seminal paper introducing kappa as a new technique was published by Jacob Cohen in the journal Educational and Psychological Measurement in 1960.[5]

Cohen's kappa measures the agreement between two raters who each classify N items into C mutually exclusive categories. The definition of κ is

\kappa = \frac{p_o - p_e}{1 - p_e}

where po is the relative observed agreement among raters, and pe is the hypothetical probability of chance agreement, using the observed data to calculate the probabilities of each observer randomly selecting each category. If the raters are in complete agreement then κ = 1. If there is no agreement among the raters other than what would be expected by chance (as given by pe), κ = 0. It is possible for the statistic to be negative,[6] which can occur by chance if there is no relationship between the ratings of the two raters, or it may reflect a real tendency of the raters to give differing ratings.

For k categories, N observations to categorize and n_{ki} the number of times rater i predicted category k:

p_e = \frac{1}{N^2} \sum_{k} n_{k1}\,n_{k2}

This is derived from the following construction:

\widehat{p_e} = \sum_{k} \widehat{p_{k12}} = \sum_{k} \widehat{p_{k1}}\,\widehat{p_{k2}} = \frac{1}{N^2} \sum_{k} n_{k1}\,n_{k2}

where \widehat{p_{k12}} is the estimated probability that both rater 1 and rater 2 will classify the same item as k, while \widehat{p_{k1}} is the estimated probability that rater 1 will classify an item as k (and similarly for rater 2). The relation \widehat{p_{k12}} = \widehat{p_{k1}}\,\widehat{p_{k2}} is based on the assumption that the ratings of the two raters are independent. The term \widehat{p_{k1}} is estimated by using the number of items classified as k by rater 1 (n_{k1}) divided by the total items to classify (N): \widehat{p_{k1}} = n_{k1}/N (and similarly for rater 2).

In the traditional 2 × 2 confusion matrix employed in machine learning and statistics to evaluate binary classifications, the Cohen's kappa formula can be written as:[7]

\kappa = \frac{2 \times (TP \times TN - FN \times FP)}{(TP + FP) \times (FP + TN) + (TP + FN) \times (FN + TN)}

where TP are the true positives, FP are the false positives, TN are the true negatives, and FN are the false negatives. In this case, Cohen's kappa is equivalent to the Heidke skill score known in meteorology.[8] The measure was first introduced by Myrick Haskell Doolittle in 1888.[9]
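As a quick check on the algebra, the following minimal Python sketch (function names and counts are illustrative) computes κ for a 2 × 2 table both from the definition (po − pe)/(1 − pe) and from the closed form above; the two routes agree.

```python
def kappa_definition(tp: int, fp: int, fn: int, tn: int) -> float:
    """kappa = (po - pe) / (1 - pe), with rater 1 on the rows and rater 2 on the columns."""
    n = tp + fp + fn + tn
    po = (tp + tn) / n
    pe = ((tp + fn) * (tp + fp) + (fp + tn) * (fn + tn)) / n**2
    return (po - pe) / (1 - pe)

def kappa_closed_form(tp: int, fp: int, fn: int, tn: int) -> float:
    """2(TP*TN - FN*FP) / ((TP+FP)(FP+TN) + (TP+FN)(FN+TN))."""
    return 2 * (tp * tn - fn * fp) / ((tp + fp) * (fp + tn) + (tp + fn) * (fn + tn))

print(kappa_definition(20, 5, 10, 15))   # 0.4
print(kappa_closed_form(20, 5, 10, 15))  # 0.4; identical, as expected
```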
Suppose that you were analyzing data related to a group of 50 people applying for a grant. Each grant proposal was read by two readers and each reader either said "Yes" or "No" to the proposal. Suppose the disagreement count data were arranged in a 2 × 2 table with reader A's ratings in the rows and reader B's ratings in the columns, where the data on the main diagonal of the matrix (a and d) count the number of agreements and the off-diagonal data (b and c) count the number of disagreements.

The observed proportionate agreement is:

p_o = \frac{a + d}{a + b + c + d}

To calculate pe (the probability of random agreement) we note that reader A said "Yes" to (a + b) proposals and "No" to (c + d) proposals, while reader B said "Yes" to (a + c) proposals and "No" to (b + d) proposals. So the expected probability that both would say yes at random is:

p_{\mathrm{Yes}} = \frac{a + b}{a + b + c + d} \cdot \frac{a + c}{a + b + c + d}

Similarly:

p_{\mathrm{No}} = \frac{c + d}{a + b + c + d} \cdot \frac{b + d}{a + b + c + d}

Overall random agreement probability is the probability that they agreed on either Yes or No, i.e.:

p_e = p_{\mathrm{Yes}} + p_{\mathrm{No}}

So now applying our formula for Cohen's kappa we get:

\kappa = \frac{p_o - p_e}{1 - p_e}

A case sometimes considered to be a problem with Cohen's kappa occurs when comparing the kappa calculated for two pairs of raters with the two raters in each pair having the same percentage agreement, but one pair gives a similar number of ratings in each class while the other pair gives a very different number of ratings in each class[10] (in the original example, B gives 70 yeses and 30 nos in the first case, and those numbers are reversed in the second). In such cases there is equal agreement between A and B (60 out of 100 in both cases) in terms of agreement in each class, so we would expect the relative values of Cohen's kappa to reflect this. However, calculating Cohen's kappa for each case shows greater similarity between A and B in the second case, compared to the first. This is because, while the percentage agreement is the same, the percentage agreement that would occur 'by chance' is significantly higher in the first case (0.54 compared to 0.46).

A p-value for kappa is rarely reported, probably because even relatively low values of kappa can nonetheless be significantly different from zero but not of sufficient magnitude to satisfy investigators.[11]: 66 Still, its standard error has been described[12] and is computed by various computer programs.[13] Confidence intervals for kappa may be constructed, for the expected kappa values if we had an infinite number of items checked, using the following formula:[1]

\kappa \pm Z_{1-\alpha/2}\,SE_\kappa

where Z_{1−α/2} = 1.960 is the standard normal percentile when α = 5%, and SE_κ is the standard error of the estimated kappa.[12]

If statistical significance is not a useful guide, what magnitude of kappa reflects adequate agreement? Guidelines would be helpful, but factors other than agreement can influence its magnitude, which makes interpretation of a given magnitude problematic. As Sim and Wright noted, two important factors are prevalence (are the codes equiprobable or do their probabilities vary) and bias (are the marginal probabilities for the two observers similar or different). Other things being equal, kappas are higher when codes are equiprobable. On the other hand, kappas are higher when codes are distributed asymmetrically by the two observers. In contrast to probability variations, the effect of bias is greater when kappa is small than when it is large.[14]: 261–262

Another factor is the number of codes. As the number of codes increases, kappas become higher. Based on a simulation study, Bakeman and colleagues concluded that for fallible observers, values for kappa were lower when codes were fewer. And, in agreement with Sim and Wright's statement concerning prevalence, kappas were higher when codes were roughly equiprobable. Thus Bakeman et al. concluded that "no one value of kappa can be regarded as universally acceptable."[15]: 357 They also provide a computer program that lets users compute values for kappa specifying number of codes, their probability, and observer accuracy. For example, given equiprobable codes and observers who are 85% accurate, values of kappa are 0.49, 0.60, 0.66, and 0.69 when the number of codes is 2, 3, 5, and 10, respectively.
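To illustrate the confidence-interval construction above, here is a rough Python sketch (counts, seed, and function names are illustrative). It estimates SE_κ with a nonparametric bootstrap over the rated items, which is one possible way to obtain the standard error, and then forms κ ± 1.960·SE_κ.

```python
import random

def kappa_from_table(a: int, b: int, c: int, d: int) -> float:
    """Cohen's kappa for a 2x2 table (rows: rater A yes/no, columns: rater B yes/no)."""
    n = a + b + c + d
    po = (a + d) / n
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2
    return (po - pe) / (1 - pe)  # assumes pe < 1, i.e. the table is not degenerate

def bootstrap_kappa_ci(a, b, c, d, n_boot=2000, z=1.960, seed=0):
    """kappa with an approximate CI: kappa +/- z * SE, SE estimated by resampling items."""
    rng = random.Random(seed)
    items = ["a"] * a + ["b"] * b + ["c"] * c + ["d"] * d
    kappas = []
    for _ in range(n_boot):
        sample = [rng.choice(items) for _ in items]
        kappas.append(kappa_from_table(sample.count("a"), sample.count("b"),
                                       sample.count("c"), sample.count("d")))
    mean = sum(kappas) / n_boot
    se = (sum((k - mean) ** 2 for k in kappas) / (n_boot - 1)) ** 0.5
    k_hat = kappa_from_table(a, b, c, d)
    return k_hat, (k_hat - z * se, k_hat + z * se)

print(bootstrap_kappa_ci(20, 5, 10, 15))  # illustrative counts for 50 rated items
```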
Nonetheless, magnitude guidelines have appeared in the literature. Perhaps the first was Landis and Koch,[16] who characterized values < 0 as indicating no agreement and 0–0.20 as slight, 0.21–0.40 as fair, 0.41–0.60 as moderate, 0.61–0.80 as substantial, and 0.81–1 as almost perfect agreement. This set of guidelines is however by no means universally accepted; Landis and Koch supplied no evidence to support it, basing it instead on personal opinion. It has been noted that these guidelines may be more harmful than helpful.[17] Fleiss's[18]: 218 equally arbitrary guidelines characterize kappas over 0.75 as excellent, 0.40 to 0.75 as fair to good, and below 0.40 as poor.

Kappa assumes its theoretical maximum value of 1 only when both observers distribute codes the same, that is, when corresponding row and column sums are identical. Anything less is less than perfect agreement. Still, the maximum value kappa could achieve given unequal distributions helps interpret the value of kappa actually obtained. The equation for κ maximum is:[19]

\kappa_{\max} = \frac{P_{\max} - P_{\exp}}{1 - P_{\exp}}

where P_{\exp} = \sum_{i=1}^{k} P_{i+}\,P_{+i}, as usual, P_{\max} = \sum_{i=1}^{k} \min(P_{i+}, P_{+i}), k = number of codes, P_{i+} are the row probabilities, and P_{+i} are the column probabilities.

Kappa is an index that considers observed agreement with respect to a baseline agreement. However, investigators must consider carefully whether kappa's baseline agreement is relevant for the particular research question. Kappa's baseline is frequently described as the agreement due to chance, which is only partially correct. Kappa's baseline agreement is the agreement that would be expected due to random allocation, given the quantities specified by the marginal totals of the square contingency table. Thus, κ = 0 when the observed allocation is apparently random, regardless of the quantity disagreement as constrained by the marginal totals. However, for many applications, investigators should be more interested in the quantity disagreement in the marginal totals than in the allocation disagreement as described by the additional information on the diagonal of the square contingency table. Thus for many applications, kappa's baseline is more distracting than enlightening. Consider the following example: in one table the disagreement proportion is 14/16 or 0.875; the disagreement is due to quantity because the allocation is optimal, and κ is 0.01. In another table the disagreement proportion is 2/16 or 0.125; the disagreement is due to allocation because the quantities are identical, and kappa is −0.07. Here, reporting quantity and allocation disagreement is informative while kappa obscures information. Furthermore, kappa introduces some challenges in calculation and interpretation because kappa is a ratio. It is possible for kappa's ratio to return an undefined value due to zero in the denominator. Furthermore, a ratio does not reveal its numerator nor its denominator. It is more informative for researchers to report disagreement in two components, quantity and allocation. These two components describe the relationship between the categories more clearly than a single summary statistic. When predictive accuracy is the goal, researchers can more easily begin to think about ways to improve a prediction by using the two components of quantity and allocation, rather than one ratio of kappa.[2]
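The κ maximum given above depends only on the two observers' marginal distributions, so it can be computed directly from them. A minimal Python sketch (the function name and the probabilities are illustrative):

```python
def kappa_max(row_probs, col_probs):
    """Maximum attainable kappa for fixed marginals:
    (P_max - P_exp) / (1 - P_exp), with P_exp = sum_i p_i+ * p_+i
    and P_max = sum_i min(p_i+, p_+i)."""
    p_exp = sum(r * c for r, c in zip(row_probs, col_probs))
    p_max = sum(min(r, c) for r, c in zip(row_probs, col_probs))
    return (p_max - p_exp) / (1 - p_exp)

print(kappa_max([0.6, 0.4], [0.7, 0.3]))  # ≈ 0.78: unequal marginals cap kappa below 1
print(kappa_max([0.5, 0.5], [0.5, 0.5]))  # 1.0: identical marginals allow perfect agreement
```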
Some researchers have expressed concern over κ's tendency to take the observed categories' frequencies as givens, which can make it unreliable for measuring agreement in situations such as the diagnosis of rare diseases. In these situations, κ tends to underestimate the agreement on the rare category.[20] For this reason, κ is considered an overly conservative measure of agreement.[21] Others[22][citation needed] contest the assertion that kappa "takes into account" chance agreement. To do this effectively would require an explicit model of how chance affects rater decisions. The so-called chance adjustment of kappa statistics supposes that, when not completely certain, raters simply guess, which is a very unrealistic scenario. Moreover, some works[23] have shown how kappa statistics can lead to a wrong conclusion for unbalanced data.

A similar statistic, called pi, was proposed by Scott (1955). Cohen's kappa and Scott's pi differ in terms of how pe is calculated. Note that Cohen's kappa measures agreement between two raters only. For a similar measure of agreement (Fleiss' kappa) used when there are more than two raters, see Fleiss (1971). The Fleiss kappa, however, is a multi-rater generalization of Scott's pi statistic, not Cohen's kappa. Kappa is also used to compare performance in machine learning, but the directional version known as Informedness or Youden's J statistic is argued to be more appropriate for supervised learning.[24]

The weighted kappa allows disagreements to be weighted differently[25] and is especially useful when codes are ordered.[11]: 66 Three matrices are involved: the matrix of observed scores, the matrix of expected scores based on chance agreement, and the weight matrix. Weight matrix cells located on the diagonal (upper-left to bottom-right) represent agreement and thus contain zeros. Off-diagonal cells contain weights indicating the seriousness of that disagreement. Often, cells one off the diagonal are weighted 1, those two off 2, etc. The equation for weighted κ is:

\kappa = 1 - \frac{\sum_{i=1}^{k} \sum_{j=1}^{k} w_{ij}\,x_{ij}}{\sum_{i=1}^{k} \sum_{j=1}^{k} w_{ij}\,m_{ij}}

where k = number of codes and w_{ij}, x_{ij}, and m_{ij} are elements in the weight, observed, and expected matrices, respectively. When diagonal cells contain weights of 0 and all off-diagonal cells weights of 1, this formula produces the same value of kappa as the calculation given above.
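The weighted-kappa formula can be implemented in a few lines. The sketch below (the count matrix and the linear weighting scheme are illustrative choices, not prescribed by the text) builds the chance-expected matrix from the observed marginals and applies the weights:

```python
def expected_matrix(observed):
    """Chance-expected counts from the marginals of an observed count matrix."""
    n = sum(map(sum, observed))
    row_totals = [sum(row) for row in observed]
    col_totals = [sum(col) for col in zip(*observed)]
    return [[r * c / n for c in col_totals] for r in row_totals]

def linear_weights(k):
    """Linear disagreement weights: 0 on the diagonal, |i - j| off it."""
    return [[abs(i - j) for j in range(k)] for i in range(k)]

def weighted_kappa(observed):
    """kappa_w = 1 - sum(w_ij * x_ij) / sum(w_ij * m_ij)."""
    expected = expected_matrix(observed)
    k = len(observed)
    w = linear_weights(k)
    num = sum(w[i][j] * observed[i][j] for i in range(k) for j in range(k))
    den = sum(w[i][j] * expected[i][j] for i in range(k) for j in range(k))
    return 1 - num / den

# Two observers assigning 35 items to three ordered codes (illustrative counts).
obs = [[10, 2, 0],
       [3, 8, 2],
       [0, 1, 9]]
print(round(weighted_kappa(obs), 3))  # ≈ 0.74
```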
https://en.wikipedia.org/wiki/Cohen%27s_kappa
Inlinguistics,syntax(/ˈsɪntæks/SIN-taks)[1][2]is the study of how words andmorphemescombine to form larger units such asphrasesandsentences. Central concerns of syntax includeword order,grammatical relations, hierarchical sentence structure (constituency),[3]agreement, the nature of crosslinguistic variation, and the relationship between form and meaning (semantics). Diverse approaches, such asgenerative grammarand functional grammar, offer unique perspectives on syntax, reflecting its complexity and centrality to understandinghuman language. The wordsyntaxcomes from theancient Greekwordσύνταξις, meaning an orderly or systematic arrangement, which consists ofσύν-(syn-, "together" or "alike"), andτάξις(táxis, "arrangement"). InHellenistic Greek, this also specifically developed a use referring to the grammatical order of words, with a slightly altered spelling:συντάσσειν. The English term, which first appeared in 1548, is partly borrowed from Latin (syntaxis) and Greek, though the Latin term developed from Greek.[4] The field of syntax contains a number of various topics that a syntactic theory is often designed to handle. The relation between the topics is treated differently in different theories, and some of them may not be considered to be distinct but instead to be derived from one another (i.e. word order can be seen as the result of movement rules derived from grammatical relations). One basic description of a language's syntax is the sequence in which thesubject(S),verb(V), andobject(O) usually appear in sentences. Over 85% of languages usually place the subject first, either in the sequenceSVOor the sequenceSOV. The other possible sequences areVSO,VOS,OVS, andOSV, the last three of which are rare. In most generative theories of syntax, the surface differences arise from a more complex clausal phrase structure, and each order may be compatible with multiple derivations. However, word order can also reflect the semantics or function of the ordered elements.[5] Another description of a language considers the set of possible grammatical relations in a language or in general and how they behave in relation to one another in themorphosyntactic alignmentof the language. The description of grammatical relations can also reflect transitivity,passivization, and head-dependent-marking or other agreement. Languages have different criteria for grammatical relations. For example, subjecthood criteria may have implications for how the subject is referred to from a relative clause or coreferential with an element in an infinite clause.[6] Constituency is the feature of being aconstituentand how words can work together to form a constituent (orphrase). Constituents are often moved as units, and the constituent can be the domain of agreement. Some languages allow discontinuous phrases in which words belonging to the same constituent are not immediately adjacent but are broken up by other constituents. Constituents may berecursive, as they may consist of other constituents, potentially of the same type. TheAṣṭādhyāyīofPāṇini, fromc.4th century BCinAncient India, is often cited as an example of a premodern work that approaches the sophistication of a modern syntactic theory since works ongrammarhad been written long before modern syntax came about.[7]In the West, the school of thought that came to be known as "traditional grammar" began with the work ofDionysius Thrax. 
For centuries, a framework known asgrammaire générale, first expounded in 1660 byAntoine ArnauldandClaude Lancelotin abook of the same title, dominated work in syntax:[8]as its basic premise the assumption that language is a direct reflection of thought processes and so there is a single most natural way to express a thought.[9] However, in the 19th century, with the development ofhistorical-comparative linguistics, linguists began to realize the sheer diversity of human language and to question fundamental assumptions about the relationship between language and logic. It became apparent that there was no such thing as the most natural way to express a thought and sologiccould no longer be relied upon as a basis for studying the structure of language.[citation needed] ThePort-Royalgrammar modeled the study of syntax upon that of logic. (Indeed, large parts ofPort-Royal Logicwere copied or adapted from theGrammaire générale.[10]) Syntactic categories were identified with logical ones, and all sentences were analyzed in terms of "subject – copula – predicate". Initially, that view was adopted even by the early comparative linguists such asFranz Bopp. The central role of syntax withintheoretical linguisticsbecame clear only in the 20th century, which could reasonably be called the "century of syntactic theory" as far as linguistics is concerned. (For a detailed and critical survey of the history of syntax in the last two centuries, see the monumental work by Giorgio Graffi (2001).[11]) There are a number of theoretical approaches to the discipline of syntax. One school of thought, founded in the works ofDerek Bickerton,[12]sees syntax as a branch of biology, since it conceives of syntax as the study of linguistic knowledge as embodied in the humanmind. Other linguists (e.g.,Gerald Gazdar) take a morePlatonisticview since they regard syntax to be the study of an abstractformal system.[13]Yet others (e.g.,Joseph Greenberg) consider syntax a taxonomical device to reach broad generalizations across languages. Syntacticians have attempted to explain the causes of word-order variation within individual languages and cross-linguistically. Much of such work has been done within the framework of generative grammar, which holds that syntax depends on agenetic endowmentcommon to the human species. In that framework and in others,linguistic typologyanduniversalshave been primary explicanda.[14] Alternative explanations, such as those byfunctional linguists, have been sought inlanguage processing. It is suggested that the brain finds it easier toparsesyntactic patternsthat are either right- or left-branchingbut not mixed. The most-widely held approach is the performance–grammar correspondence hypothesis byJohn A. Hawkins, who suggests that language is a non-innateadaptationto innatecognitivemechanisms. Cross-linguistic tendencies are considered as being based on language users' preference for grammars that are organized efficiently and on their avoidance of word orderings that cause processing difficulty. 
Some languages, however, exhibit regular inefficient patterning such as the VO languagesChinese, with theadpositional phrasebefore the verb, andFinnish, which has postpositions, but there are few other profoundly exceptional languages.[15]More recently, it is suggested that the left- versus right-branching patterns are cross-linguistically related only to the place of role-marking connectives (adpositionsandsubordinators), which links the phenomena with the semantic mapping of sentences.[16] Dependency grammaris an approach to sentence structure in which syntactic units are arranged according to the dependency relation, as opposed to the constituency relation ofphrase structure grammars. Dependencies are directed links between words. The (finite) verb is seen as the root of all clause structure and all the other words in the clause are either directly or indirectly dependent on this root (i.e. the verb). Some prominent dependency-based theories of syntax are the following: Lucien Tesnière(1893–1954) is widely seen as the father of modern dependency-based theories of syntax and grammar. He argued strongly against the binary division of the clause intosubjectandpredicatethat is associated with the grammars of his day (S → NP VP) and remains at the core of most phrase structure grammars. In place of that division, he positioned the verb as the root of all clause structure.[17] Categorial grammaris an approach in which constituents combine asfunctionandargument, according to combinatory possibilities specified in theirsyntactic categories. For example, other approaches might posit a rule that combines a noun phrase (NP) and a verb phrase (VP), but CG would posit a syntactic categoryNPand anotherNP\S, read as "a category that searches to the left (indicated by \) for an NP (the element on the left) and outputs a sentence (the element on the right)." Thus, the syntactic category for anintransitiveverb is a complex formula representing the fact that the verb acts as afunction wordrequiring an NP as an input and produces a sentence level structure as an output. The complex category is notated as (NP\S) instead of V. The category oftransitive verbis defined as an element that requires two NPs (its subject and its direct object) to form a sentence. That is notated as (NP/(NP\S)), which means, "A category that searches to the right (indicated by /) for an NP (the object) and generates a function (equivalent to the VP) which is (NP\S), which in turn represents a function that searches to the left for an NP and produces a sentence." Tree-adjoining grammaris a categorial grammar that adds in partialtree structuresto the categories. Theoretical approaches to syntax that are based uponprobability theoryare known asstochastic grammars. One common implementation of such an approach makes use of aneural networkorconnectionism. Functionalist models of grammar study the form–function interaction by performing a structural and a functional analysis. Generative syntax is the study of syntax within the overarching framework ofgenerative grammar. Generative theories of syntax typically propose analyses of grammatical patterns using formal tools such asphrase structure grammarsaugmented with additional operations such assyntactic movement. Their goal in analyzing a particular language is to specify rules which generate all and only the expressions which arewell-formedin that language. 
In doing so, they seek to identify innate domain-specific principles of linguistic cognition, in line with the wider goals of the generative enterprise. Generative syntax is among the approaches that adopt the principle of the autonomy of syntax by assuming that meaning and communicative intent is determined by the syntax, rather than the other way around. Generative syntax was proposed in the late 1950s by Noam Chomsky, building on earlier work by Zellig Harris, Louis Hjelmslev, and others. Since then, numerous theories have been proposed under its umbrella, and other theories also find their origin in the generative paradigm. The Cognitive Linguistics framework stems from generative grammar but adheres to evolutionary, rather than Chomskyan, linguistics. Cognitive models often recognise the generative assumption that the object belongs to the verb phrase, and the framework encompasses several more specific approaches.
https://en.wikipedia.org/wiki/Syntax#Syntactic_structure
Semanticsis the study of linguisticmeaning. It examines what meaning is, how words get their meaning, and how the meaning of a complex expression depends on its parts. Part of this process involves the distinction betweensense and reference. Sense is given by the ideas and concepts associated with an expression while reference is the object to which an expression points. Semantics contrasts withsyntax, which studies the rules that dictate how to creategrammaticallycorrect sentences, andpragmatics, which investigates how people use language in communication. Lexical semanticsis the branch of semantics that studiesword meaning. It examines whether words have one or several meanings and in whatlexical relationsthey stand to one another. Phrasal semantics studies the meaning of sentences by exploring the phenomenon ofcompositionalityor how new meanings can be created by arranging words.Formal semanticsrelies onlogicandmathematicsto provide precise frameworks of the relation between language and meaning.Cognitive semanticsexamines meaning from a psychological perspective and assumes a close relation between language ability and the conceptual structures used to understand the world. Other branches of semantics includeconceptual semantics,computational semantics, and cultural semantics. Theories of meaning are general explanations of the nature of meaning and how expressions are endowed with it. According toreferential theories, the meaning of an expression is the part of reality to which it points. Ideational theories identify meaning withmental stateslike the ideas that an expression evokes in the minds of language users. According to causal theories, meaning is determined by causes and effects, whichbehavioristsemantics analyzes in terms of stimulus and response. Further theories of meaning includetruth-conditional semantics,verificationisttheories, theuse theory, andinferentialist semantics. The study of semantic phenomena began during antiquity but was not recognized as an independent field of inquiry until the 19th century. Semantics is relevant to the fields of formal logic,computer science, andpsychology. Semantics is the study ofmeaninginlanguages.[1]It is a systematic inquiry that examines what linguistic meaning is and how it arises.[2]It investigates howexpressionsare built up from different layers of constituents, likemorphemes,words,clauses,sentences, andtexts, and how the meanings of the constituents affect one another.[3]Semantics can focus on a specific language, like English, but in its widest sense, it investigates meaning structures relevant to all languages.[4][a][b]As a descriptive discipline, it aims to determine how meaning works withoutprescribingwhat meaning people should associate with particular expressions.[7]Some of its key questions are "How do the meanings of words combine to create the meanings of sentences?", "How do meanings relate to the minds of language users, and to the things words refer to?", and "What is the connection between what a word means, and the contexts in which it is used?".[8]The main disciplines engaged in semantics arelinguistics,semiotics, andphilosophy.[9]Besides its meaning as a field of inquiry, semantics can also refer to theories within this field, liketruth-conditional semantics,[10]and to the meaning of particular expressions, like the semantics of the wordfairy.[11] As a field of inquiry, semantics has both an internal and an external side. 
The internal side is interested in the connection between words and themental phenomenathey evoke, like ideas and conceptual representations. The external side examines how words refer to objects in the world and under what conditions a sentence is true.[12] Many related disciplines investigate language and meaning. Semantics contrasts with other subfields of linguistics focused on distinct aspects of language.Phonologystudies the different types ofsoundsused in languages and how sounds are connected to form words whilesyntaxexaminesthe rulesthat dictate how to arrange words to create sentences. These divisions are reflected in the fact that it is possible to master some aspects of a language while lacking others, like when a person knows how to pronounce a word without knowing its meaning.[13]As a subfield of semiotics, semantics has a more narrow focus on meaning in language while semiotics studies both linguistic and non-linguistic signs. Semiotics investigates additional topics like the meaning ofnon-verbal communication, conventionalsymbols, and natural signs independent of human interaction. Examples includenoddingto signal agreement, stripes on a uniform signifyingrank, and the presence ofvulturesindicating a nearby animal carcass.[14] Semantics further contrasts withpragmatics, which is interested in how people use language in communication.[15]An expression like "That's what I'm talking about" can mean many things depending on who says it and in what situation. Semantics is interested in the possible meanings of expressions: what they can and cannot mean in general. In this regard, it is sometimes defined as the study of context-independent meaning. Pragmatics examines which of these possible meanings is relevant in a particular case. In contrast to semantics, it is interested in actual performance rather than in the generallinguistic competenceunderlying this performance.[16]This includes the topic of additional meaning that can be inferred even though it is not literally expressed, like what it means if a speaker remains silent on a certain topic.[17]A closely related distinction by the semioticianCharles W. Morrisholds that semantics studies the relation between words and the world, pragmatics examines the relation between words and users, and syntax focuses on the relation between different words.[18] Semantics is related toetymology, which studies how words and their meanings changed in the course of history.[7]Another connected field ishermeneutics, which is the art or science of interpretation and is concerned with the rightmethodologyof interpreting text in general andscripturein particular.[19]Metasemanticsexamines themetaphysicalfoundations of meaning and aims to explain where it comes from or how it arises.[20] The wordsemanticsoriginated from the Ancient Greek adjectivesemantikos, meaning 'relating to signs', which is a derivative ofsēmeion, the noun for 'sign'. It was initially used formedical symptomsand only later acquired its wider meaning regarding any type of sign, including linguistic signs. The wordsemanticsentered the English language from the French termsemantique, which the linguistMichel Bréalfirst introduced at the end of the 19th century.[21] Semantics studies meaning in language, which is limited to the meaning of linguistic expressions. It concerns how signs areinterpretedand whatinformationthey contain. 
An example is the meaning of words provided indictionarydefinitions by giving synonymous expressions or paraphrases, like defining the meaning of the termramasadult male sheep.[22]There are many forms of non-linguistic meaning that are not examined by semantics. Actions and policies can have meaning in relation to the goal they serve. Fields likereligionandspiritualityare interested in themeaning of life, which is about finding a purpose in life or the significance ofexistencein general.[23] Linguistic meaning can be analyzed on different levels.Word meaningis studied bylexical semanticsand investigates the denotation of individual words. It is often related toconceptsof entities, like how the worddogis associated with the concept of the four-legged domestic animal. Sentence meaning falls into the field of phrasal semantics and concerns the denotation of full sentences. It usually expresses a concept applying to a type of situation, as in the sentence "the dog has ruined my blue skirt".[24]The meaning of a sentence is often referred to as aproposition.[25]Different sentences can express the same proposition, like the English sentence "the tree is green" and the German sentence"der Baum ist grün".[26]Utterance meaning is studied by pragmatics and is about the meaning of an expression on a particular occasion. Sentence meaning and utterance meaning come apart in cases where expressions are used in a non-literal way, as is often the case withirony.[27] Semantics is primarily interested in the public meaning that expressions have, like the meaning found in general dictionary definitions. Speaker meaning, by contrast, is the private or subjective meaning that individuals associate with expressions. It can diverge from the literal meaning, like when a person associates the wordneedlewith pain or drugs.[28] Meaning is often analyzed in terms ofsense and reference,[30]also referred to asintension and extensionorconnotationanddenotation.[31]The referent of an expression is the object to which the expression points. The sense of an expression is the way in which it refers to that object or how the object is interpreted. For example, the expressionsmorning starandevening starrefer to the same planet, just like the expressions2 + 2and3 + 1refer to the same number. The meanings of these expressions differ not on the level of reference but on the level of sense.[32]Sense is sometimes understood as a mental phenomenon that helps people identify the objects to which an expression refers.[33]Some semanticists focus primarily on sense or primarily on reference in their analysis of meaning.[34]To grasp the full meaning of an expression, it is usually necessary to understand both to what entities in the world it refers and how it describes them.[35] The distinction between sense and reference can explainidentity statements, which can be used to show how two expressions with a different sense have the same referent. For instance, the sentence "the morning star is the evening star" is informative and people can learn something from it. The sentence "the morning star is the morning star", by contrast, is an uninformativetautologysince the expressions are identical not only on the level of reference but also on the level of sense.[36] Compositionalityis a key aspect of how languages construct meaning. It is the idea that the meaning of a complex expression is a function of the meanings of its parts. 
It is possible to understand the meaning of the sentence "Zuzana owns a dog" by understanding what the wordsZuzana,owns,aanddogmean and how they are combined.[37]In this regard, the meaning of complex expressions like sentences is different from word meaning since it is normally not possible to deduce what a word means by looking at its letters and one needs to consult a dictionary instead.[38] Compositionality is often used to explain how people can formulate and understand an almost infinite number of meanings even though the amount of words and cognitive resources is finite. Many sentences that people read are sentences that they have never seen before and they are nonetheless able to understand them.[37] When interpreted in a strong sense, the principle of compositionality states that the meaning of a complex expression is not just affected by its parts and how they are combined but fully determined this way. It is controversial whether this claim is correct or whether additional aspects influence meaning. For example, context may affect the meaning of expressions;idiomslike "kick the bucket" carryfigurative or non-literalmeanings that are not directly reducible to the meanings of their parts.[37] Truthis a property of statements that accurately present the world and true statements are in accord withreality. Whether a statement is true usually depends on the relation between the statement and the rest of the world. Thetruth conditionsof a statement are the way the world needs to be for the statement to be true. For example, it belongs to the truth conditions of the sentence "it is raining outside" that raindrops are falling from the sky. The sentence is true if it is used in a situation in which the truth conditions are fulfilled, i.e., if there is actually rain outside.[39] Truth conditions play a central role in semantics and some theories rely exclusively on truth conditions to analyze meaning. To understand a statement usually implies that one has an idea about the conditions under which it would be true. This can happen even if one does not know whether the conditions are fulfilled.[39] Thesemiotic triangle, also called the triangle of meaning, is a model used to explain the relation between language, language users, and the world, represented in the model asSymbol,Thought or Reference, andReferent. The symbol is a linguisticsignifier, either in its spoken or written form. The central idea of the model is that there is no direct relation between a linguistic expression and what it refers to, as was assumed by earlier dyadic models. This is expressed in the diagram by the dotted line between symbol and referent.[40] The model holds instead that the relation between the two is mediated through a third component. For example, the termapplestands for a type of fruit but there is no direct connection between this string of letters and the corresponding physical object. The relation is only established indirectly through the mind of the language user. When they see the symbol, it evokes a mental image or a concept, which establishes the connection to the physical object. This process is only possible if the language user learned the meaning of the symbol before. The meaning of a specific symbol is governed by the conventions of a particular language. The same symbol may refer to one object in one language, to another object in a different language, and to no object in another language.[40] Many other concepts are used to describe semantic phenomena. 
Thesemantic roleof an expression is the function it fulfills in a sentence. In the sentence "the boy kicked the ball", the boy has the role of the agent who performs an action. The ball is the theme or patient of this action as something that does not act itself but is involved in or affected by the action. The same entity can be both agent and patient, like when someone cuts themselves. An entity has the semantic role of an instrument if it is used to perform the action, for instance, when cutting something with a knife then the knife is the instrument. For some sentences, no action is described but an experience takes place, like when a girl sees a bird. In this case, the girl has the role of the experiencer. Other common semantic roles are location, source, goal, beneficiary, and stimulus.[41] Lexical relations describe how words stand to one another. Two words aresynonymsif they share the same or a very similar meaning, likecarandautomobileorbuyandpurchase.Antonymshave opposite meanings, such as the contrast betweenaliveanddeadorfastandslow.[c]One term is ahyponymof another term if the meaning of the first term is included in the meaning of the second term. For example,antis a hyponym ofinsect. Aprototypeis a hyponym that has characteristic features of the type it belongs to. Arobinis a prototype of abirdbut apenguinis not. Two words with the same pronunciation arehomophoneslikeflourandflower, while two words with the same spelling arehomonyms, like a bank of a river in contrast to a bank as a financial institution.[d]Hyponymy is closely related tomeronymy, which describes the relation between part and whole. For instance,wheelis a meronym ofcar.[44]An expression isambiguousif it has more than one possible meaning. In some cases, it is possible todisambiguatethem to discern the intended meaning.[45]The termpolysemyis used if the different meanings are closely related to one another, like the meanings of the wordhead, which can refer to the topmost part of the human body or the top-ranking person in an organization.[44] The meaning of words can often be subdivided into meaning components calledsemantic features. The wordhorsehas the semantic featureanimatebut lacks the semantic featurehuman. It may not always be possible to fully reconstruct the meaning of a word by identifying all its semantic features.[46] Asemanticor lexical field is a group of words that are all related to the same activity or subject. For instance, the semantic field ofcookingincludes words likebake,boil,spice, andpan.[47] Thecontextof an expression refers to the situation or circumstances in which it is used and includes time, location, speaker, and audience. It also encompasses other passages in a text that come before and after it.[48]Context affects the meaning of various expressions, like thedeicticexpressionhereand theanaphoricexpressionshe.[49] A syntactic environment isextensional or transparentif it is always possible to exchange expressions with the same reference without affecting the truth value of the sentence. For example, the environment of the sentence "the number 8 is even" is extensional because replacing the expression "the number 8" with "the number of planets in theSolar System" does not change its truth value. Forintensional or opaque contexts, this type of substitution is not always possible. 
For instance, theembedded clausein "Paco believes that the number 8 is even" is intensional since Paco may not know that the number of planets in the solar system is 8.[50] Semanticists commonly distinguish the language they study, called object language, from the language they use to express their findings, calledmetalanguage. When a professor uses Japanese to teach their student how to interpret the language offirst-order logicthen the language of first-order logic is the object language and Japanese is the metalanguage. The same language may occupy the role of object language and metalanguage at the same time. This is the case inmonolingual English dictionaries, in which both the entry term belonging to the object language and the definition text belonging to the metalanguage are taken from the English language.[51] Lexical semantics is the sub-field of semantics that studies word meaning.[52]It examines semantic aspects of individual words and thevocabularyas a whole. This includes the study of lexical relations between words, such as whether two terms are synonyms or antonyms.[53]Lexical semantics categorizes words based on semantic features they share and groups them into semantic fields unified by a common subject.[54]This information is used to create taxonomies to organize lexical knowledge, for example, by distinguishing between physical andabstract entitiesand subdividing physical entities intostuffandindividuated entities.[55]Further topics of interest are polysemy, ambiguity, andvagueness.[56] Lexical semantics is sometimes divided into two complementary approaches:semasiologyandonomasiology. Semasiology starts from words and examines what their meaning is. It is interested in whether words have one or several meanings and how those meanings are related to one another. Instead of going from word to meaning, onomasiology goes from meaning to word. It starts with a concept and examines what names this concept has or how it can be expressed in a particular language.[57] Some semanticists also include the study of lexical units other than words in the field of lexical semantics.Compound expressionslikebeing under the weatherhave a non-literal meaning that acts as a unit and is not a direct function of its parts. Another topic concerns the meaning of morphemes that make up words, for instance, how negativeprefixeslikein-anddis-affect the meaning of the words they are part of, as ininanimateanddishonest.[58] Phrasal semantics studies the meaning of sentences. It relies on the principle of compositionality to explore how the meaning of complex expressions arises from the combination of their parts.[59][e]The different parts can be analyzed assubject,predicate, orargument. The subject of a sentence usually refers to a specific entity while the predicate describes a feature of the subject or an event in which the subject participates. Arguments provide additional information to complete the predicate.[61]For example, in the sentence "Mary hit the ball",Maryis the subject,hitis the predicate, andthe ballis an argument.[61]A more fine-grained categorization distinguishes between different semantic roles of words, such as agent, patient, theme, location, source, and goal.[62] Verbsusually function as predicates and often help to establish connections between different expressions to form a more complex meaning structure. 
In the expression "Beethoven likes Schubert", the verblikeconnects a liker to the object of their liking.[63]Other sentence parts modify meaning rather than form new connections. For instance, theadjectiveredmodifies the color of another entity in the expressionred car.[64]A further compositional device is variable binding, which is used to determine the reference of a term. For example, the last part of the expression "the woman who likes Beethoven" specifies which woman is meant.[65]Parse treescan be used to show the underlying hierarchy employed to combine the different parts.[66]Various grammatical devices, like thegerundform, also contribute to meaning and are studied by grammatical semantics.[67] Formal semantics uses formal tools fromlogicandmathematicsto analyze meaning in natural languages.[f]It aims to develop precise logical formalisms to clarify the relation between expressions and their denotation.[69]One of its key tasks is to provide frameworks of how language represents the world, for example, usingontological modelsto show how linguistic expressions map to the entities of that model.[69]A common idea is that words refer to individual objects or groups of objects while sentences relate to events and states. Sentences are mapped to atruth valuebased on whether their description of the world is in correspondence with its ontological model.[70] Formal semantics further examines how to use formal mechanisms to represent linguistic phenomena such asquantification,intensionality,noun phrases,plurals, mass terms,tense, andmodality.[71]Montague semanticsis an early and influential theory in formal semantics that provides a detailed analysis of how the English language can be represented using mathematical logic. It relies onhigher-order logic,lambda calculus, andtype theoryto show how meaning is created through the combination of expressions belonging to different syntactic categories.[72] Dynamic semanticsis a subfield of formal semantics that focuses on how information grows over time. According to it, "meaning is context change potential": the meaning of a sentence is not given by the information it contains but by the information change it brings about relative to a context.[73] Cognitive semantics studies the problem of meaning from a psychological perspective or how the mind of the language user affects meaning. As a subdiscipline ofcognitive linguistics, it sees language as a wide cognitive ability that is closely related to the conceptual structures used to understand and represent the world.[74][g]Cognitive semanticists do not draw a sharp distinction between linguistic knowledge and knowledge of the world and see them instead as interrelated phenomena.[76]They study how the interaction between language and human cognition affects the conceptual organization in very general domains like space, time, causation, and action.[77]The contrast between profile and base is sometimes used to articulate the underlying knowledge structure. 
The profile of a linguistic expression is the aspect of the knowledge structure that it brings to the foreground while the base is the background that provides the context of this aspect without being at the center of attention.[78]For example, the profile of the wordhypotenuseis a straight line while the base is aright-angled triangleof which the hypotenuse forms a part.[79][h] Cognitive semantics further compares the conceptual patterns andlinguistic typologiesacross languages and considers to what extent the cognitive conceptual structures of humans are universal or relative to their linguistic background.[81]Another research topic concerns the psychological processes involved in the application of grammar.[82]Other investigated phenomena include categorization, which is understood as a cognitive heuristic to avoid information overload by regarding different entities in the same way,[83]andembodiment, which concerns how the language user's bodily experience affects the meaning of expressions.[84] Frame semanticsis an important subfield of cognitive semantics.[85]Its central idea is that the meaning of terms cannot be understood in isolation from each other but needs to be analyzed on the background of the conceptual structures they depend on. These structures are made explicit in terms of semantic frames. For example, words like bride, groom, and honeymoon evoke in the mind the frame of marriage.[86] Conceptual semanticsshares with cognitive semantics the idea of studying linguistic meaning from a psychological perspective by examining how humans conceptualize and experience the world. It holds that meaning is not about the objects to which expressions refer but about the cognitive structure of human concepts that connect thought, perception, and action. Conceptual semantics differs from cognitive semantics by introducing a strict distinction between meaning and syntax and by relying on various formal devices to explore the relation between meaning and cognition.[87] Computational semanticsexamines how the meaning of natural language expressions can be represented and processed on computers.[88]It often relies on the insights of formal semantics and applies them to problems that can be computationally solved.[89]Some of its key problems include computing the meaning of complex expressions by analyzing their parts, handling ambiguity, vagueness, and context-dependence, and using the extracted information inautomatic reasoning.[90]It forms part ofcomputational linguistics,artificial intelligence, andcognitive science.[88]Its applications includemachine learningandmachine translation.[91] Cultural semantics studies the relation between linguistic meaning and culture. It compares conceptual structures in different languages and is interested in how meanings evolve and change because of cultural phenomena associated withpolitics, religion, andcustoms.[92]For example, address practices encode cultural values and social hierarchies, as in the difference of politeness of expressions liketuandustedin Spanish orduandSiein German in contrast to English, which lacksthese distinctionsand uses the pronounyouin either case.[93]Closely related fields are intercultural semantics, cross-cultural semantics, and comparative semantics.[94] Pragmatic semantics studies how the meaning of an expression is shaped by the situation in which it is used. 
It is based on the idea that communicative meaning is usually context-sensitive and depends on who participates in the exchange, what information they share, and what theirintentionsand background assumptions are. It focuses on communicative actions, of which linguistic expressions only form one part. Some theorists include these topics within the scope of semantics while others consider them part of the distinct discipline of pragmatics.[95] Theories of meaning explain what meaning is, what meaning an expression has, and how the relation between expression and meaning is established.[96] Referential theories state that the meaning of an expression is the entity to which it points.[97]The meaning ofsingular termslikenamesis the individual to which they refer. For example, the meaning of the nameGeorge Washingtonis the person with this name.[98]General terms refer not to a single entity but to the set of objects to which this term applies. In this regard, the meaning of the termcatis the set of all cats.[99]Similarly, verbs usually refer to classes of actions or events and adjectives refer to properties of individuals and events.[100] Simple referential theoriesface problems for meaningful expressions that have no clear referent. Names likePegasusandSanta Claushave meaning even though they do not point to existing entities.[101]Other difficulties concern cases in which different expressions are about the same entity. For instance, the expressionsRoger Bannisterandthe first man to run a four-minute milerefer to the same person but do not mean exactly the same thing.[102]This is particularly relevant when talking about beliefs since a person may understand both expressions without knowing that they point to the same entity.[103]A further problem is given by expressions whose meaning depends on the context, like the deictic termshereandI.[104] To avoid these problems, referential theories often introduce additional devices. Some identify meaning not directly with objects but with functions that point to objects. This additional level has the advantage of taking the context of an expression into account since the same expression may point to one object in one context and to another object in a different context. For example, the reference of the wordheredepends on the location in which it is used.[105]A closely related approach ispossible worldsemantics, which allows expressions to refer not only to entities in the actual world but also to entities in other possible worlds.[i]According to this view, expressions likethe first man to run a four-minute milerefer to different persons in different worlds. This view can also be used to analyze sentences that talk about what is possible or what is necessary: possibility is what is true in some possible worlds while necessity is what is true in all possible worlds.[107] Ideational theories, also called mentalist theories, are not primarily interested in the reference of expressions and instead explain meaning in terms of the mental states of language users.[108]One historically influential approach articulated byJohn Lockeholds that expressions stand forideasin the speaker's mind. According to this view, the meaning of the worddogis the idea that people have of dogs. Language is seen as a medium used to transfer ideas from the speaker to the audience. 
After having learned the same meaning of signs, the speaker can produce a sign that corresponds to the idea in their mind and the perception of this sign evokes the same idea in the mind of the audience.[109] A closely related theory focuses not directly on ideas but onintentions.[110]This view is particularly associated withPaul Grice, who observed that people usually communicate to cause some reaction in their audience. He held that the meaning of an expression is given by the intended reaction. This means that communication is not just about decoding what the speaker literally said but requires an understanding of their intention or why they said it.[111]For example, telling someone looking for petrol that "there is a garage around the corner" has the meaning that petrol can be obtained there because of the speaker's intention to help. This goes beyond the literal meaning, which has no explicit connection to petrol.[112] Causal theories hold that the meaning of an expression depends on the causes and effects it has.[113]According tobehavioristsemantics, also referred to as stimulus-response theory, the meaning of an expression is given by the situation that prompts the speaker to use it and the response it provokes in the audience.[114]For instance, the meaning of yelling "Fire!" is given by the presence of an uncontrolled fire and attempts to control it or seek safety.[115]Behaviorist semantics relies on the idea that learning a language consists in adopting behavioral patterns in the form ofstimulus-response pairs.[116]One of its key motivations is to avoid private mental entities and define meaning instead in terms of publicly observable language behavior.[117] Another causal theory focuses on the meaning of names and holds that a naming event is required to establish the link between name and named entity. This naming event acts as a form of baptism that establishes the first link of a causal chain in which all subsequent uses of the name participate.[118]According to this view, the namePlatorefers to an ancient Greek philosopher because, at some point, he was originally named this way and people kept using this name to refer to him.[119]This view was originally formulated bySaul Kripketo apply to names only but has been extended to cover other types of speech as well.[120] Truth-conditional semanticsanalyzes the meaning of sentences in terms of their truth conditions. According to this view, to understand a sentence means to know what the world needs to be like for the sentence to be true.[121]Truth conditions can themselves be expressed throughpossible worlds. For example, the sentence "Hillary Clintonwon the2016 American presidential election" is false in the actual world but there are some possible worlds in which it is true.[122]The extension of a sentence can be interpreted as its truth value while its intension is the set of all possible worlds in which it is true.[123]Truth-conditional semantics is closely related toverificationist theories, which introduce the additional idea that there should be some kind of verification procedure to assess whether a sentence is true. 
They state that the meaning of a sentence consists in the method to verify it or in the circumstances that justify it.[124]For instance, scientific claims often make predictions, which can be used to confirm or disconfirm them usingobservation.[125]According to verificationism, sentences that can neither be verified nor falsified are meaningless.[126] Theuse theorystates that the meaning of an expression is given by the way it is utilized. This view was first introduced byLudwig Wittgenstein, who understood language as a collection oflanguage games. The meaning of expressions depends on how they are used inside a game and the same expression may have different meanings in different games.[127]Some versions of this theory identify meaning directly with patterns of regular use.[128]Others focus onsocial normsandconventionsby additionally taking into account whether a certain use is considered appropriate in a given society.[129] Inferentialist semantics, also called conceptual role semantics, holds that the meaning of an expression is given by the role it plays in the premises and conclusions of goodinferences.[130]For example, one can infer from "x is a male sibling" that "x is a brother" and one can infer from "x is a brother" that "x has parents". According to inferentialist semantics, the meaning of the wordbrotheris determined by these and all similar inferences that can be drawn.[131] Semantics was established as an independent field of inquiry in the 19th century but the study of semantic phenomena began as early as the ancient period as part of philosophy and logic.[132][j]Inancient Greece, Plato (427–347 BCE) explored the relation between names and things in hisdialogueCratylus. It considers the positions of naturalism, which holds that things have their name by nature, and conventionalism, which states that names are related to their referents by customs and conventions among language users.[134]The bookOn InterpretationbyAristotle(384–322 BCE) introduced various conceptual distinctions that greatly influenced subsequent works in semantics. He developed an early form of the semantic triangle by holding that spoken and written words evoke mental concepts, which refer to external things by resembling them. For him, mental concepts are the same for all humans, unlike the conventional words they associate with those concepts.[135]TheStoicsincorporated many of the insights of their predecessors to develop a complex theory of language through the perspective of logic. They discerned different kinds of words by their semantic and syntactic roles, such as the contrast between names, common nouns, and verbs. They also discussed the difference between statements, commands, and prohibitions.[136] Inancient India, theorthodox schoolofNyayaheld that all names refer to real objects. 
It explored how words lead to an understanding of the thing meant and what consequence this relation has to the creation of knowledge.[138]Philosophers of the orthodox school ofMīmāṃsādiscussed the relation between the meanings of individual words and full sentences while considering which one is more basic.[139]The bookVākyapadīyabyBhartṛhari(4th–5th century CE) distinguished between different types of words and considered how they can carry different meanings depending on how they are used.[140]Inancient China, theMohistsargued that names play a key role in making distinctions to guide moral behavior.[141]They inspired theSchool of Names, which explored the relation between names and entities while examining how names are required to identify and judge entities.[142] In the Middle Ages,Augustine of Hippo(354–430) developed a general conception of signs as entities that stand for other entities and convey them to the intellect. He was the first to introduce the distinction between natural and linguistic signs as different types belonging to a common genus.[143]Boethius(480–528) wrote a translation of and various comments on Aristotle's bookOn Interpretation, which popularized its main ideas and inspired reflections on semantic phenomena in thescholastic tradition.[144]An innovation in the semantics ofPeter Abelard(1079–1142) was his interest in propositions or the meaning of sentences in contrast to the focus on the meaning of individual words by many of his predecessors. He further explored the nature ofuniversals, which he understood as mere semantic phenomena ofcommon namescaused by mental abstractions thatdo not refer to any entities.[145]In the Arabic tradition,Ibn Faris(920–1004) identified meaning with the intention of the speaker whileAbu Mansur al-Azhari(895–980) held that meaning resides directly in speech and needs to be extracted through interpretation.[146] An important topic towards the end of the Middle Ages was the distinction between categorematic andsyncategorematic terms. Categorematic terms have an independent meaning and refer to some part of reality, likehorseandSocrates. Syncategorematic terms lack independent meaning and fulfill other semantic functions, such as modifying or quantifying the meaning of other expressions, like the wordssome,not, andnecessarily.[147]An early version of the causal theory of meaning was proposed byRoger Bacon(c. 1219/20 – c. 1292), who held that things get names similar to how people get names through some kind of initial baptism.[148]His ideas inspired the tradition of thespeculative grammarians, who proposed that there are certain universal structures found in all languages. 
They arrived at this conclusion by drawing an analogy between the modes of signification on the level of language, the modes of understanding on the level of mind, and the modes of being on the level of reality.[149] In the early modern period,Thomas Hobbes(1588–1679) distinguished between marks, which people use privately to recall their own thoughts, and signs, which are used publicly to communicate their ideas to others.[150]In theirPort-Royal Logic,Antoine Arnauld(1612–1694) andPierre Nicole(1625–1695) developed an early precursor of the distinction between intension and extension.[151]TheEssay Concerning Human Understandingby John Locke (1632–1704) presented an influential version of the ideational theory of meaning, according to which words stand for ideas and help people communicate by transferring ideas from one mind to another.[152]Gottfried Wilhelm Leibniz(1646–1716) understood language as the mirror of thought and tried to conceive the outlines of auniversal formal languageto express scientific and philosophical truths. This attempt inspired theoristsChristian Wolff(1679–1754),Georg Bernhard Bilfinger(1693–1750), andJohann Heinrich Lambert(1728–1777) to develop the idea of a general science of sign systems.[153]Étienne Bonnot de Condillac(1715–1780) accepted and further developed Leibniz's idea of the linguistic nature of thought. Against Locke, he held that language is involved in the creation of ideas and is not merely a medium to communicate them.[154] In the 19th century, semantics emerged and solidified as an independent field of inquiry.Christian Karl Reisig(1792–1829) is sometimes credited as the father of semantics since he clarified its concept and scope while also making various contributions to its key ideas.[155]Michel Bréal(1832–1915) followed him in providing a broad conception of the field, for which he coined the French termsémantique.[156]John Stuart Mill(1806–1873) gave great importance to the role of names to refer to things. He distinguished between the connotation and denotation of names and held that propositions are formed by combining names.[157]Charles Sanders Peirce(1839–1914) conceived semiotics as a general theory of signs with several subdisciplines, which were later identified by Charles W. Morris (1901–1979) as syntactics, semantics, and pragmatics. In his pragmatist approach to semantics, Peirce held that the meaning of conceptions consists in the entirety of their practical consequences.[158]The philosophy ofGottlob Frege(1848–1925) contributed to semantics on many different levels. Frege first introduced the distinction between sense and reference, and his development of predicate logic and the principle of compositionality formed the foundation of many subsequent developments in formal semantics.[159]Edmund Husserl(1859–1938) explored meaning from aphenomenologicalperspective by considering the mental acts that endow expressions with meaning. 
He held that meaning always implies reference to an object and expressions that lack a referent, likegreen is or, are meaningless.[160] In the 20th century,Alfred Tarski(1901–1983) defined truth in formal languages through hissemantic theory of truth, which was influential in the development of truth-conditional semantics byDonald Davidson(1917–2003).[161]Tarski's studentRichard Montague(1930–1971) formulated a complex formal framework of the semantics of the English language, which was responsible for establishing formal semantics as a major area of research.[162]According tostructural semantics,[k]which was inspired by thestructuralist philosophyofFerdinand de Saussure(1857–1913), language is a complex network of structural relations and the meanings of words are not fixed individually but depend on their position within this network.[164]The theory ofgeneral semanticswas developed byAlfred Korzybski(1879–1950) as an inquiry into how language represents reality and affects human thought.[165]The contributions ofGeorge Lakoff(1941–present) andRonald Langacker(1942–present) provided the foundation of cognitive semantics.[166]Charles J. Fillmore(1929–2014) developed frame semantics as a major approach in this area.[167]The closely related field of conceptual semantics was inaugurated byRay Jackendoff(1945–present).[168] Logicians study correctreasoningand often developformal languagesto express arguments and assess their correctness.[169]One part of this process is to provide a semantics for a formal language to precisely define what its terms mean. A semantics of a formal language is a set of rules, usually expressed as amathematical function, that assigns meanings to formal language expressions.[170]For example, the language of first-order logic uses lowercase letters forindividual constantsand uppercase letters forpredicates. To express the sentence "Bertie is a dog", the formulaD(b){\displaystyle D(b)}can be used whereb{\displaystyle b}is an individual constant for Bertie andD{\displaystyle D}is a predicate for dog. Classical model-theoretic semantics assigns meaning to these terms by defining aninterpretation functionthat maps individual constants to specific objects and predicates tosetsof objects ortuples. The function mapsb{\displaystyle b}to Bertie andD{\displaystyle D}to the set of all dogs. This way, it is possible to calculate the truth value of the sentence: it is true if Bertie is a member of the set of dogs and false otherwise.[171] Formal logic aims to determine whether arguments aredeductively valid, that is, whether the premises entail the conclusion.[172]Entailment can be defined in terms of syntax or in terms of semantics. Syntactic entailment, expressed with the symbol⊢{\displaystyle \vdash }, relies onrules of inference, which can be understood as procedures to transform premises and arrive at a conclusion. These procedures only take thelogical formof the premises on the level of syntax into account and ignore what meaning they express. Semantic entailment, expressed with the symbol⊨{\displaystyle \vDash }, looks at the meaning of the premises, in particular, at their truth value. A conclusion follows semantically from a set of premises if the truth of the premises ensures the truth of the conclusion, that is, if any semantic interpretation function that assigns the premises the valuetruealso assigns the conclusion the valuetrue.[173] In computer science, the semantics of aprogramis how it behaves when a computer runs it. 
Semantics contrasts with syntax, which is the particular form in which instructions are expressed. The same behavior can usually be described with different forms of syntax. InJavaScript, this is the case for the commandsi += 1andi = i + 1, which are syntactically different expressions to increase the value of the variableiby one. This difference is also reflected in differentprogramming languagessince they rely on different syntax but can usually be employed to create programs with the same behavior on the semantic level.[174] Static semantics focuses on semantic aspects that affect thecompilationof a program. In particular, it is concerned with detecting errors of syntactically correct programs, such astype errors, which arise when an operation receives an incompatibledata type. This is the case, for instance, if a function performing a numerical calculation is given astringinstead of a number as an argument.[175]Dynamic semantics focuses on the run time behavior of programs, that is, what happens during theexecutionof instructions.[176]The main approaches to dynamic semantics aredenotational,axiomatic, andoperational semantics. Denotational semantics relies on mathematical formalisms to describe the effects of each element of the code. Axiomatic semantics uses deductive logic to analyze which conditions must be in place before and after the execution of a program. Operational semantics interprets the execution of a program as a series of steps, each involving the transition from onestateto another state.[177] Psychological semantics examines psychological aspects of meaning. It is concerned with how meaning is represented on a cognitive level and what mental processes are involved in understanding and producing language. It further investigates how meaning interacts with other mental processes, such as the relation between language and perceptual experience.[178][l]Other issues concern how people learn new words and relate them to familiar things and concepts, how they infer the meaning of compound expressions they have never heard before, how they resolve ambiguous expressions, and how semantic illusions lead them to misinterpret sentences.[180] One key topic issemantic memory, which is a form ofgeneral knowledgeof meaning that includes the knowledge of language, concepts, and facts. It contrasts withepisodic memory, which records events that a person experienced in their life. The comprehension of language relies on semantic memory and the information it carries about word meanings.[181]According to a common view, word meanings are stored and processed in relation to their semantic features. The feature comparison model states that sentences like "a robin is a bird" are assessed on a psychological level by comparing the semantic features of the wordrobinwith the semantic features of the wordbird. The assessment process is fast if their semantic features are similar, which is the case if the example is aprototypeof the general category. For atypical examples, as in the sentence "a penguin is a bird", there is less overlap in the semantic features and the psychological process is significantly slower.[182]
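To make the feature comparison model sketched above a little more concrete, the following toy example scores category statements by the overlap of hand-coded semantic features, so that a typical member such as *robin* overlaps with *bird* more than an atypical member such as *penguin*. This is only an illustrative sketch: the feature lists and the function name `feature_overlap` are invented for the example and are not drawn from the psychological literature.

```python
# Toy illustration of the feature comparison model: statements such as
# "a robin is a bird" are assessed by comparing semantic feature sets.
# The feature lists below are invented for the example.

FEATURES = {
    "bird":    {"animate", "has_feathers", "lays_eggs", "flies", "sings"},
    "robin":   {"animate", "has_feathers", "lays_eggs", "flies", "sings", "red_breast"},
    "penguin": {"animate", "has_feathers", "lays_eggs", "swims"},
}

def feature_overlap(instance: str, category: str) -> float:
    """Share of the category's features that are also present in the instance (0..1)."""
    cat, inst = FEATURES[category], FEATURES[instance]
    return len(cat & inst) / len(cat)

for word in ("robin", "penguin"):
    score = feature_overlap(word, "bird")
    verdict = "fast acceptance" if score > 0.8 else "slower, needs further comparison"
    print(f"'a {word} is a bird': overlap = {score:.2f} -> {verdict}")
```

On this toy scoring, the prototypical example ("robin") yields a high overlap and a fast decision, while the atypical example ("penguin") yields a lower overlap, mirroring the slower psychological processing described above.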
https://en.wikipedia.org/wiki/Semantics#Pragmatics
Incomputational linguisticsandcomputer science,edit distanceis astring metric, i.e. a way of quantifying how dissimilar twostrings(e.g., words) are to one another, that is measured by counting the minimum number of operations required to transform one string into the other. Edit distances find applications innatural language processing, where automaticspelling correctioncan determine candidate corrections for a misspelled word by selecting words from a dictionary that have a low distance to the word in question. Inbioinformatics, it can be used to quantify the similarity ofDNAsequences, which can be viewed as strings of the letters A, C, G and T. Different definitions of an edit distance use different sets of like operations.Levenshtein distanceoperations are the removal, insertion, or substitution of a character in the string. Being the most common metric, the termLevenshtein distanceis often used interchangeably withedit distance.[1] Different types of edit distance allow different sets of string operations. For instance: Some edit distances are defined as a parameterizable metric calculated with a specific set of allowed edit operations, and each operation is assigned a cost (possibly infinite). This is further generalized by DNAsequence alignmentalgorithms such as theSmith–Waterman algorithm, which make an operation's cost depend on where it is applied. Given two stringsaandbon an alphabetΣ(e.g. the set ofASCIIcharacters, the set ofbytes[0..255], etc.), the edit distanced(a,b)is the minimum-weight series of edit operations that transformsaintob. One of the simplest sets of edit operations is that defined by Levenshtein in 1966:[2] In Levenshtein's original definition, each of these operations has unit cost (except that substitution of a character by itself has zero cost), so the Levenshtein distance is equal to the minimumnumberof operations required to transformatob. A more general definition associates non-negative weight functionswins(x),wdel(x) andwsub(x,y) with the operations.[2] Additional primitive operations have been suggested.Damerau–Levenshtein distancecounts as a single edit a common mistake:transpositionof two adjacent characters, formally characterized by an operation that changesuxyvintouyxv.[3][4]For the task of correctingOCRoutput,mergeandsplitoperations have been used which replace a single character into a pair of them or vice versa.[4] Other variants of edit distance are obtained by restricting the set of operations.Longest common subsequence (LCS)distance is edit distance with insertion and deletion as the only two edit operations, both at unit cost.[1]: 37Similarly, by only allowing substitutions (again at unit cost),Hamming distanceis obtained; this must be restricted to equal-length strings.[1]Jaro–Winkler distancecan be obtained from an edit distance where only transpositions are allowed. TheLevenshtein distancebetween "kitten" and "sitting" is 3. A minimal edit script that transforms the former into the latter is: LCS distance (insertions and deletions only) gives a different distance and minimal edit script: for a total cost/distance of 5 operations. Edit distance with non-negative cost satisfies the axioms of ametric, giving rise to ametric spaceof strings, when the following conditions are met:[1]: 37 With these properties, the metric axioms are satisfied as follows: Levenshtein distance and LCS distance with unit cost satisfy the above conditions, and therefore the metric axioms. 
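As a concrete illustration of the two unit-cost distances just discussed, the sketch below computes both the Levenshtein distance and the LCS distance with a straightforward dynamic program, reproducing the values 3 and 5 for "kitten" and "sitting". It is a minimal sketch assuming unit costs; the function name `edit_distance` and the flag `allow_substitution` are chosen for this example only.

```python
def edit_distance(a: str, b: str, allow_substitution: bool = True) -> int:
    """Unit-cost edit distance via dynamic programming.

    With allow_substitution=True this is the Levenshtein distance
    (insertion, deletion, substitution); with False it is the LCS distance
    (insertion and deletion only, so a mismatch effectively costs 2).
    """
    m, n = len(a), len(b)
    # dp[i][j] = distance between the prefixes a[:i] and b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i                                   # delete all of a[:i]
    for j in range(n + 1):
        dp[0][j] = j                                   # insert all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1]            # characters match: no cost
            else:
                best = min(dp[i - 1][j] + 1,           # delete a[i-1]
                           dp[i][j - 1] + 1)           # insert b[j-1]
                if allow_substitution:
                    best = min(best, dp[i - 1][j - 1] + 1)   # substitute
                dp[i][j] = best
    return dp[m][n]

print(edit_distance("kitten", "sitting"))                            # 3 (Levenshtein)
print(edit_distance("kitten", "sitting", allow_substitution=False))  # 5 (LCS distance)
```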
Variants of edit distance that are not proper metrics have also been considered in the literature.[1] Other useful properties of unit-cost edit distances include: Regardless of cost/weights, the following property holds of all edit distances: The first algorithm for computing minimum edit distance between a pair of strings was published by Damerau in 1964.[6] Using Levenshtein's original operations, the (nonsymmetric) edit distance from $a = a_1 \ldots a_m$ to $b = b_1 \ldots b_n$ is given by $d_{mn}$, defined by the recurrence[2]
$$
\begin{aligned}
d_{i0} &= \sum_{k=1}^{i} w_{\mathrm{del}}(a_{k}), && 1 \leq i \leq m,\\
d_{0j} &= \sum_{k=1}^{j} w_{\mathrm{ins}}(b_{k}), && 1 \leq j \leq n,\\
d_{ij} &= \begin{cases} d_{i-1,j-1} & \text{if } a_{i} = b_{j}\\[2pt] \min \begin{cases} d_{i-1,j} + w_{\mathrm{del}}(a_{i})\\ d_{i,j-1} + w_{\mathrm{ins}}(b_{j})\\ d_{i-1,j-1} + w_{\mathrm{sub}}(a_{i}, b_{j}) \end{cases} & \text{if } a_{i} \neq b_{j} \end{cases} && 1 \leq i \leq m,\ 1 \leq j \leq n.
\end{aligned}
$$
This algorithm can be generalized to handle transpositions by adding another term in the recursive clause's minimization.[3] The straightforward, recursive way of evaluating this recurrence takes exponential time. Therefore, it is usually computed using a dynamic programming algorithm that is commonly credited to Wagner and Fischer,[7] although it has a history of multiple invention.[2][3] After completion of the Wagner–Fischer algorithm, a minimal sequence of edit operations can be read off as a backtrace of the operations used during the dynamic programming algorithm, starting at $d_{mn}$. This algorithm has a time complexity of Θ(mn), where m and n are the lengths of the strings. When the full dynamic programming table is constructed, its space complexity is also Θ(mn); this can be improved to Θ(min(m, n)) by observing that at any instant, the algorithm only requires two rows (or two columns) in memory. However, this optimization makes it impossible to read off the minimal series of edit operations.[3] A linear-space solution to this problem is offered by Hirschberg's algorithm.[8]: 634 A general recursive divide-and-conquer framework for solving such recurrences and extracting an optimal sequence of operations cache-efficiently in space linear in the size of the input is given by Chowdhury, Le, and Ramachandran.[9] Improving on the Wagner–Fischer algorithm described above, Ukkonen describes several variants,[10] one of which takes two strings and a maximum edit distance s, and returns min(s, d). It achieves this by only computing and storing a part of the dynamic programming table around its diagonal. This algorithm takes time O(s × min(m, n)), where m and n are the lengths of the strings. Space complexity is O(s²) or O(s), depending on whether the edit sequence needs to be read off.[3] Further improvements by Landau, Myers, and Schmidt[1] give an O(s² + max(m, n)) time algorithm.[11] For a finite alphabet and edit costs which are multiples of each other, the fastest known exact algorithm is that of Masek and Paterson,[12] having a worst-case runtime of O(nm/log n). Edit distance finds applications in computational biology and natural language processing, e.g. the correction of spelling mistakes or OCR errors, and approximate string matching, where the objective is to find matches for short strings in many longer texts, in situations where a small number of differences is to be expected. Various algorithms exist that solve related problems beyond the computation of the distance between a pair of strings. A generalization of the edit distance between strings is the language edit distance between a string and a language, usually a formal language. Instead of considering the edit distance between one string and another, the language edit distance is the minimum edit distance that can be attained between a fixed string and any string taken from a set of strings.
More formally, for any language L and string x over an alphabet Σ, the language edit distance d(L, x) is given by[14] $d(L, x) = \min_{y \in L} d(x, y)$, where $d(x, y)$ is the string edit distance. When the language L is context free, there is a cubic-time dynamic programming algorithm proposed by Aho and Peterson in 1972 which computes the language edit distance.[15] For less expressive families of grammars, such as the regular grammars, faster algorithms exist for computing the edit distance.[16] Language edit distance has found many diverse applications, such as RNA folding, error correction, and solutions to the Optimum Stack Generation problem.[14][17]
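When L is a finite set of strings, the definition above can be evaluated directly by minimising the string edit distance over the members of L. The sketch below does exactly that; it is a brute-force illustration of the definition, not the Aho–Peterson algorithm for context-free languages, and the toy "language" and the helper names `levenshtein` and `language_edit_distance` are chosen for this example. The distance routine also uses the two-row space optimisation mentioned earlier.

```python
from typing import Iterable

def levenshtein(a: str, b: str) -> int:
    """Unit-cost Levenshtein distance using two-row dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,               # deletion of ca
                            curr[j - 1] + 1,           # insertion of cb
                            prev[j - 1] + (ca != cb))) # substitution or match
        prev = curr
    return prev[-1]

def language_edit_distance(language: Iterable[str], x: str) -> int:
    """d(L, x) = min over y in L of d(x, y), evaluated directly for a finite language L."""
    return min(levenshtein(x, y) for y in language)

# Toy finite language: a small command vocabulary (illustrative only).
L = {"insert", "delete", "substitute", "transpose"}
print(language_edit_distance(L, "deleet"))   # 2, attained by "delete"
```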
https://en.wikipedia.org/wiki/Edit_distance
Incomputer science, theCocke–Younger–Kasami algorithm(alternatively calledCYK, orCKY) is aparsingalgorithmforcontext-free grammarspublished by Itiroo Sakai in 1961.[1][2]The algorithm is named after some of its rediscoverers:John Cocke, Daniel Younger,Tadao Kasami, andJacob T. Schwartz. It employsbottom-up parsinganddynamic programming. The standard version of CYK operates only on context-free grammars given inChomsky normal form(CNF). However any context-free grammar may be algorithmically transformed into a CNF grammar expressing the same language (Sipser 1997). The importance of the CYK algorithm stems from its high efficiency in certain situations. UsingbigOnotation, theworst case running timeof CYK isO(n3⋅|G|){\displaystyle {\mathcal {O}}\left(n^{3}\cdot \left|G\right|\right)}, wheren{\displaystyle n}is the length of the parsed string and|G|{\displaystyle \left|G\right|}is the size of the CNF grammarG{\displaystyle G}(Hopcroft & Ullman 1979, p. 140). This makes it one of the most efficient[citation needed]parsing algorithms in terms of worst-caseasymptotic complexity, although other algorithms exist with better average running time in many practical scenarios. Thedynamic programmingalgorithm requires the context-free grammar to be rendered intoChomsky normal form(CNF), because it tests for possibilities to split the current sequence into two smaller sequences. Any context-free grammar that does not generate the empty string can be represented in CNF using onlyproduction rulesof the formsA→α{\displaystyle A\rightarrow \alpha }andA→BC{\displaystyle A\rightarrow BC}; to allow for the empty string, one can explicitly allowS→ε{\displaystyle S\to \varepsilon }, whereS{\displaystyle S}is the start symbol.[3] The algorithm inpseudocodeis as follows: Allows to recover the most probable parse given the probabilities of all productions. In informal terms, this algorithm considers every possible substring of the input string and setsP[l,s,v]{\displaystyle P[l,s,v]}to be true if the substring of lengthl{\displaystyle l}starting froms{\displaystyle s}can be generated from the nonterminalRv{\displaystyle R_{v}}. Once it has considered substrings of length 1, it goes on to substrings of length 2, and so on. For substrings of length 2 and greater, it considers every possible partition of the substring into two parts, and checks to see if there is some productionA→BC{\displaystyle A\to B\;C}such thatB{\displaystyle B}matches the first part andC{\displaystyle C}matches the second part. If so, it recordsA{\displaystyle A}as matching the whole substring. Once this process is completed, the input string is generated by the grammar if the substring containing the entire input string is matched by the start symbol. This is an example grammar: Now the sentenceshe eats a fish with a forkis analyzed using the CYK algorithm. In the following table, inP[i,j,k]{\displaystyle P[i,j,k]},iis the number of the row (starting at the bottom at 1), andjis the number of the column (starting at the left at 1). For readability, the CYK table forPis represented here as a 2-dimensional matrixMcontaining a set of non-terminal symbols, such thatRkis in⁠M[i,j]{\displaystyle M[i,j]}⁠if, and only if,⁠P[i,j,k]{\displaystyle P[i,j,k]}⁠. In the above example, since a start symbolSis in⁠M[7,1]{\displaystyle M[7,1]}⁠, the sentence can be generated by the grammar. The above algorithm is arecognizerthat will only determine if a sentence is in the language. 
It is simple to extend it into aparserthat also constructs aparse tree, by storing parse tree nodes as elements of the array, instead of the boolean 1. The node is linked to the array elements that were used to produce it, so as to build the tree structure. Only one such node in each array element is needed if only one parse tree is to be produced. However, if all parse trees of an ambiguous sentence are to be kept, it is necessary to store in the array element a list of all the ways the corresponding node can be obtained in the parsing process. This is sometimes done with a second table B[n,n,r] of so-calledbackpointers. The end result is then a shared-forest of possible parse trees, where common trees parts are factored between the various parses. This shared forest can conveniently be read as anambiguous grammargenerating only the sentence parsed, but with the same ambiguity as the original grammar, and the same parse trees up to a very simple renaming of non-terminals, as shown byLang (1994). As pointed out byLange & Leiß (2009), the drawback of all known transformations into Chomsky normal form is that they can lead to an undesirable bloat in grammar size. The size of a grammar is the sum of the sizes of its production rules, where the size of a rule is one plus the length of its right-hand side. Usingg{\displaystyle g}to denote the size of the original grammar, the size blow-up in the worst case may range fromg2{\displaystyle g^{2}}to22g{\displaystyle 2^{2g}}, depending on the transformation algorithm used. For the use in teaching, Lange and Leiß propose a slight generalization of the CYK algorithm, "without compromising efficiency of the algorithm, clarity of its presentation, or simplicity of proofs" (Lange & Leiß 2009). It is also possible to extend the CYK algorithm to parse strings usingweightedandstochastic context-free grammars. Weights (probabilities) are then stored in the table P instead of booleans, so P[i,j,A] will contain the minimum weight (maximum probability) that the substring from i to j can be derived from A. Further extensions of the algorithm allow all parses of a string to be enumerated from lowest to highest weight (highest to lowest probability). When the probabilistic CYK algorithm is applied to a long string, the splitting probability can become very small due to multiplying many probabilities together. This can be dealt with by summing log-probability instead of multiplying probabilities. Theworst case running timeof CYK isΘ(n3⋅|G|){\displaystyle \Theta (n^{3}\cdot |G|)}, wherenis the length of the parsed string and |G| is the size of the CNF grammarG. This makes it one of the most efficient algorithms for recognizing general context-free languages in practice.Valiant (1975)gave an extension of the CYK algorithm. His algorithm computes the same parsing table as the CYK algorithm; yet he showed thatalgorithms for efficient multiplicationofmatrices with 0-1-entriescan be utilized for performing this computation. Using theCoppersmith–Winograd algorithmfor multiplying these matrices, this gives an asymptotic worst-case running time ofO(n2.38⋅|G|){\displaystyle O(n^{2.38}\cdot |G|)}. However, the constant term hidden by theBig O Notationis so large that the Coppersmith–Winograd algorithm is only worthwhile for matrices that are too large to handle on present-day computers (Knuth 1997), and this approach requires subtraction and so is only suitable for recognition. 
The dependence on efficient matrix multiplication cannot be avoided altogether: Lee (2002) has proved that any parser for context-free grammars working in time $O(n^{3-\varepsilon} \cdot |G|)$ can be effectively converted into an algorithm computing the product of $(n \times n)$-matrices with 0-1 entries in time $O(n^{3-\varepsilon/3})$, and this was extended by Abboud et al.[4] to apply to a constant-size grammar.
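The following sketch implements the CYK recognizer described above for a grammar in Chomsky normal form: the cell for length l and start position s holds the set of non-terminals that derive that substring, built bottom-up by trying every split point against the binary rules. The grammar and sentence are illustrative stand-ins loosely in the spirit of the "she eats a fish with a fork" example, not the article's exact grammar, and the function name `cyk_recognize` is chosen for this sketch.

```python
from itertools import product

# A small CNF grammar, given here only as an illustrative stand-in.
# Terminal rules: A -> word; binary rules: A -> B C.
TERMINAL_RULES = {
    "she": {"NP"}, "eats": {"V", "VP"}, "a": {"Det"},
    "with": {"P"}, "fish": {"N"}, "fork": {"N"},
}
BINARY_RULES = {
    ("NP", "VP"): {"S"},
    ("VP", "PP"): {"VP"},
    ("V", "NP"): {"VP"},
    ("P", "NP"): {"PP"},
    ("Det", "N"): {"NP"},
}

def cyk_recognize(words: list[str], start: str = "S") -> bool:
    n = len(words)
    # table[l][s]: non-terminals deriving the substring of length l+1 starting at s
    table = [[set() for _ in range(n)] for _ in range(n)]
    for s, w in enumerate(words):                      # substrings of length 1
        table[0][s] = set(TERMINAL_RULES.get(w, ()))
    for length in range(2, n + 1):                     # then longer substrings
        for s in range(n - length + 1):
            for split in range(1, length):             # every split into two parts
                left = table[split - 1][s]
                right = table[length - split - 1][s + split]
                for b, c in product(left, right):
                    table[length - 1][s] |= BINARY_RULES.get((b, c), set())
    return start in table[n - 1][0]

print(cyk_recognize("she eats a fish with a fork".split()))   # True
```

As noted above, the same table structure extends to the weighted or probabilistic case by storing weights (or probabilities) in the cells instead of booleans, and to a full parser by additionally recording backpointers to the sub-cells that produced each entry.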
https://en.wikipedia.org/wiki/Cocke%E2%80%93Younger%E2%80%93Kasami_algorithm
The termphrase structure grammarwas originally introduced byNoam Chomskyas the term forgrammarstudied previously byEmil PostandAxel Thue(Post canonical systems). Some authors, however, reserve the term for more restricted grammars in theChomsky hierarchy:context-sensitive grammarsorcontext-free grammars. In a broader sense, phrase structure grammars are also known asconstituency grammars. The defining character of phrase structure grammars is thus their adherence to the constituency relation, as opposed to the dependency relation ofdependency grammars. In 1956, Chomsky wrote, "A phrase-structure grammar is defined by a finite vocabulary (alphabet) Vp, and a finite set Σ of initial strings in Vp, and a finite set F of rules of the form: X → Y, where X and Y are strings in Vp."[1] Inlinguistics, phrase structure grammars are all those grammars that are based on the constituency relation, as opposed to the dependency relation associated with dependency grammars; hence, phrase structure grammars are also known as constituency grammars.[2]Any of several related theories for theparsing of natural languagequalify as constituency grammars, and most of them have been developed from Chomsky's work, including Further grammar frameworks and formalisms also qualify as constituency-based, although they may not think of themselves as having spawned from Chomsky's work, e.g.
https://en.wikipedia.org/wiki/Phrase_structure_grammar
Dependency grammar(DG) is a class of moderngrammaticaltheories that are all based on the dependency relation (as opposed to theconstituency relationofphrase structure) and that can be traced back primarily to the work ofLucien Tesnière. Dependency is the notion that linguistic units, e.g. words, are connected to each other by directed links. The (finite) verb is taken to be the structural center of clause structure. All other syntactic units (words) are either directly or indirectly connected to the verb in terms of the directed links, which are calleddependencies. Dependency grammar differs fromphrase structure grammarin that while it can identify phrases it tends to overlook phrasal nodes. A dependency structure is determined by the relation between a word (ahead) and its dependents. Dependency structures are flatter than phrase structures in part because they lack afiniteverb phraseconstituent, and they are thus well suited for the analysis of languages with free word order, such asCzechorWarlpiri. The notion of dependencies between grammatical units has existed since the earliest recorded grammars, e.g.Pāṇini, and the dependency concept therefore arguably predates that of phrase structure by many centuries.[1]Ibn Maḍāʾ, a 12th-centurylinguistfromCórdoba, Andalusia, may have been the first grammarian to use the termdependencyin the grammatical sense that we use it today. In early modern times, the dependency concept seems to have coexisted side by side with that of phrase structure, the latter having entered Latin, French, English and other grammars from the widespread study ofterm logicof antiquity.[2]Dependency is also concretely present in the works ofSámuel Brassai(1800–1897), a Hungarian linguist,Franz Kern(1830–1894), a German philologist, and ofHeimann Hariton Tiktin(1850–1936), a Romanian linguist.[3] Modern dependency grammars, however, begin primarily with the work of Lucien Tesnière. Tesnière was a Frenchman, apolyglot, and a professor of linguistics at the universities in Strasbourg and Montpellier. His major workÉléments de syntaxe structuralewas published posthumously in 1959 – he died in 1954. The basic approach to syntax he developed has at least partially influenced the work of others in the 1960s, although it is not clear in what way these works were inspired by other sources.[4]A number of other dependency-based grammars have gained prominence since those early works.[5]DG has generated a lot of interest in Germany[6]in both theoretical syntax and language pedagogy. In recent years, the great development surrounding dependency-based theories has come fromcomputational linguisticsand is due, in part, to the influential work thatDavid Haysdid in machine translation at theRAND Corporationin the 1950s and 1960s. Dependency-based systems are increasingly being used to parse natural language and generatetree banks. Interest in dependency grammar is growing at present, international conferences on dependency linguistics being a relatively recent development (Depling 2011,Depling 2013,Depling 2015,Depling 2017,Depling 2019Archived2019-03-06 at theWayback Machine). Dependency is a one-to-one correspondence: for every element (e.g. word or morph) in the sentence, there is exactly one node in the structure of that sentence that corresponds to that element. The result of this one-to-one correspondence is that dependency grammars are word (or morph) grammars. All that exist are the elements and the dependencies that connect the elements into a structure. 
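One simple machine-readable rendering of this one-to-one correspondence lists, for each word, the word it depends on, with the finite verb as the root. The sketch below is a simplified illustration, not the notation of any particular DG framework, and the sentence, the analysis, and the function labels are invented for the example.

```python
# A dependency analysis of "The dog chased the cat", encoded as a head map:
# exactly one node per word, with the finite verb as the root (head = None).
# The analysis and labels are an illustrative simplification.
dependencies = {
    "The(1)":    ("dog(2)",    "det"),
    "dog(2)":    ("chased(3)", "subj"),
    "chased(3)": (None,        "root"),
    "the(4)":    ("cat(5)",    "det"),
    "cat(5)":    ("chased(3)", "obj"),
}

for word, (head, function) in dependencies.items():
    print(f"{word:11s} -> {head or 'ROOT':11s} ({function})")
```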
This situation should be compared withphrase structure. Phrase structure is a one-to-one-or-more correspondence, which means that, for every element in a sentence, there are one or more nodes in the structure that correspond to that element. The result of this difference is that dependency structures are minimal[7]compared to their phrase structure counterparts, since they tend to contain many fewer nodes. These trees illustrate two possible ways to render the dependency and phrase structure relations (see below). This dependency tree is an "ordered" tree, i.e. it reflects actual word order. Many dependency trees abstract away from linear order and focus just on hierarchical order, which means they do not show actual word order. This constituency (= phrase structure) tree follows the conventions ofbare phrase structure(BPS), whereby the words themselves are employed as the node labels. The distinction between dependency and phrase structure grammars derives in large part from the initial division of the clause. The phrase structure relation derives from an initial binary division, whereby the clause is split into a subjectnoun phrase(NP) and apredicateverb phrase(VP). This division is certainly present in the basic analysis of the clause that we find in the works of, for instance,Leonard BloomfieldandNoam Chomsky. Tesnière, however, argued vehemently against this binary division, preferring instead to position the verb as the root of all clause structure. Tesnière's stance was that the subject-predicate division stems fromterm logicand has no place in linguistics.[8]The importance of this distinction is that if one acknowledges the initial subject-predicate division in syntax is real, then one is likely to go down the path of phrase structure grammar, while if one rejects this division, then one must consider the verb as the root of all structure, and so go down the path of dependency grammar. The following frameworks are dependency-based: Link grammaris similar to dependency grammar, but link grammar does not include directionality between the linked words, and thus does not describe head-dependent relationships. Hybrid dependency/phrase structure grammar uses dependencies between words, but also includes dependencies between phrasal nodes – see for example theQuranic Arabic Dependency Treebank. The derivation trees oftree-adjoining grammarare dependency structures, although the full trees of TAG rendered in terms of phrase structure, so in this regard, it is not clear whether TAG should be viewed more as a dependency or phrase structure grammar. There are major differences between the grammars just listed. In this regard, the dependency relation is compatible with other major tenets of theories of grammar. Thus like phrase structure grammars, dependency grammars can be mono- or multistratal, representational or derivational, construction- or rule-based. There are various conventions that DGs employ to represent dependencies. The following schemata (in addition to the tree above and the trees further below) illustrate some of these conventions: The representations in (a–d) are trees, whereby the specific conventions employed in each tree vary. Solid lines aredependency edgesand lightly dotted lines areprojection lines. 
The only difference between tree (a) and tree (b) is that tree (a) employs the category class to label the nodes whereas tree (b) employs the words themselves as the node labels.[9]Tree (c) is a reduced tree insofar as the string of words below and projection lines are deemed unnecessary and are hence omitted. Tree (d) abstracts away from linear order and reflects just hierarchical order.[10]The arrow arcs in (e) are an alternative convention used to show dependencies and are favored byWord Grammar.[11]The brackets in (f) are seldom used, but are nevertheless quite capable of reflecting the dependency hierarchy; dependents appear enclosed in more brackets than their heads. And finally, the indentations like those in (g) are another convention that is sometimes employed to indicate the hierarchy of words.[12]Dependents are placed underneath their heads and indented. Like tree (d), the indentations in (g) abstract away from linear order. The point to these conventions is that they are just that, namely conventions. They do not influence the basic commitment to dependency as the relation that is grouping syntactic units. The dependency representations above (and further below) show syntactic dependencies. Indeed, most work in dependency grammar focuses on syntactic dependencies. Syntactic dependencies are, however, just one of three or four types of dependencies.Meaning–text theory, for instance, emphasizes the role of semantic and morphological dependencies in addition to syntactic dependencies.[13]A fourth type, prosodic dependencies, can also be acknowledged. Distinguishing between these types of dependencies can be important, in part because if one fails to do so, the likelihood that semantic, morphological, and/or prosodic dependencies will be mistaken for syntactic dependencies is great. The following four subsections briefly sketch each of these dependency types. During the discussion, the existence of syntactic dependencies is taken for granted and used as an orientation point for establishing the nature of the other three dependency types. Semantic dependencies are understood in terms ofpredicatesand theirarguments.[14]The arguments of a predicate are semantically dependent on that predicate. Often, semantic dependencies overlap with and point in the same direction as syntactic dependencies. At times, however, semantic dependencies can point in the opposite direction of syntactic dependencies, or they can be entirely independent of syntactic dependencies. The hierarchy of words in the following examples show standard syntactic dependencies, whereas the arrows indicate semantic dependencies: The two argumentsSamandSallyin tree (a) are dependent on the predicatelikes, whereby these arguments are also syntactically dependent onlikes. What this means is that the semantic and syntactic dependencies overlap and point in the same direction (down the tree). Attributive adjectives, however, are predicates that take their head noun as their argument, hencebigis a predicate in tree (b) that takesbonesas its one argument; the semantic dependency points up the tree and therefore runs counter to the syntactic dependency. A similar situation obtains in (c), where the preposition predicateontakes the two argumentsthe pictureandthe wall; one of these semantic dependencies points up the syntactic hierarchy, whereas the other points down it. 
Finally, the predicateto helpin (d) takes the one argumentJimbut is not directly connected toJimin the syntactic hierarchy, which means that semantic dependency is entirely independent of the syntactic dependencies. Morphological dependencies obtain between words or parts of words.[15]When a given word or part of a word influences the form of another word, then the latter is morphologically dependent on the former. Agreement and concord are therefore manifestations of morphological dependencies. Like semantic dependencies, morphological dependencies can overlap with and point in the same direction as syntactic dependencies, overlap with and point in the opposite direction of syntactic dependencies, or be entirely independent of syntactic dependencies. The arrows are now used to indicate morphological dependencies. The pluralhousesin (a) demands the plural of the demonstrative determiner, hencetheseappears, notthis, which means there is a morphological dependency that points down the hierarchy fromhousestothese. The situation is reversed in (b), where the singular subjectSamdemands the appearance of the agreement suffix-son the finite verbworks, which means there is a morphological dependency pointing up the hierarchy fromSamtoworks. The type of determiner in the German examples (c) and (d) influences the inflectional suffix that appears on the adjectivealt. When the indefinite articleeinis used, the strong masculine ending-erappears on the adjective. When the definite articlederis used, in contrast, the weak ending-eappears on the adjective. Thus since the choice of determiner impacts the morphological form of the adjective, there is a morphological dependency pointing from the determiner to the adjective, whereby this morphological dependency is entirely independent of the syntactic dependencies. Consider further the following French sentences: The masculine subjectle chienin (a) demands the masculine form of the predicative adjectiveblanc, whereas the feminine subjectla maisondemands the feminine form of this adjective. A morphological dependency that is entirely independent of the syntactic dependencies therefore points again across the syntactic hierarchy. Morphological dependencies play an important role intypological studies. Languages are classified as mostlyhead-marking(Sam work-s) or mostlydependent-marking(these houses), whereby most if not all languages contain at least some minor measure of both head and dependent marking.[16] Prosodic dependencies are acknowledged in order to accommodate the behavior ofclitics.[17]A clitic is a syntactically autonomous element that is prosodically dependent on a host. A clitic is therefore integrated into the prosody of its host, meaning that it forms a single word with its host. Prosodic dependencies exist entirely in the linear dimension (horizontal dimension), whereas standard syntactic dependencies exist in the hierarchical dimension (vertical dimension). Classic examples of clitics in English are reduced auxiliaries (e.g.-ll,-s,-ve) and the possessive marker-s. The prosodic dependencies in the following examples are indicated with hyphens and the lack of a vertical projection line: A hyphen that appears on the left of the clitic indicates that the clitic is prosodically dependent on the word immediately to its left (He'll,There's), whereas a hyphen that appears on the right side of the clitic (not shown here) indicates that the clitic is prosodically dependent on the word that appears immediately to its right. 
A given clitic is often prosodically dependent on its syntactic dependent (He'll,There's) or on its head (would've). At other times, it can depend prosodically on a word that is neither its head nor its immediate dependent (Florida's). Syntactic dependencies are the focus of most work in DG, as stated above. How the presence and the direction of syntactic dependencies are determined is of course often open to debate. In this regard, it must be acknowledged that the validity of syntactic dependencies in the trees throughout this article is being taken for granted. However, these hierarchies are such that many DGs can largely support them, although there will certainly be points of disagreement. The basic question about how syntactic dependencies are discerned has proven difficult to answer definitively. One should acknowledge in this area, however, that the basic task of identifying and discerning the presence and direction of the syntactic dependencies of DGs is no easier or harder than determining the constituent groupings of phrase structure grammars. A variety of heuristics are employed to this end, basictests for constituentsbeing useful tools; the syntactic dependencies assumed in the trees in this article are grouping words together in a manner that most closely matches the results of standard permutation, substitution, and ellipsis tests for constituents.Etymologicalconsiderations also provide helpful clues about the direction of dependencies. A promising principle upon which to base the existence of syntactic dependencies is distribution.[18]When one is striving to identify the root of a given phrase, the word that is most responsible for determining the distribution of that phrase as a whole is its root. Traditionally, DGs have had a different approach to linear order (word order) than phrase structure grammars. Dependency structures are minimal compared to their phrase structure counterparts, and these minimal structures allow one to focus intently on the two ordering dimensions.[19]Separating the vertical dimension (hierarchical order) from the horizontal dimension (linear order) is easily accomplished. This aspect of dependency structures has allowed DGs, starting with Tesnière (1959), to focus on hierarchical order in a manner that is hardly possible for phrase structure grammars. For Tesnière, linear order was secondary to hierarchical order insofar as hierarchical order preceded linear order in the mind of a speaker. The stemmas (trees) that Tesnière produced reflected this view; they abstracted away from linear order to focus almost entirely on hierarchical order. Many DGs that followed Tesnière adopted this practice, that is, they produced tree structures that reflect hierarchical order alone, e.g. The traditional focus on hierarchical order generated the impression that DGs have little to say about linear order, and it has contributed to the view that DGs are particularly well-suited to examine languages with free word order. A negative result of this focus on hierarchical order, however, is that there is a dearth of DG explorations of particular word order phenomena, such as of standarddiscontinuities. Comprehensive dependency grammar accounts oftopicalization,wh-fronting,scrambling, andextrapositionare mostly absent from many established DG frameworks. This situation can be contrasted with phrase structure grammars, which have devoted tremendous effort to exploring these phenomena. 
The nature of the dependency relation does not, however, prevent one from focusing on linear order. Dependency structures are as capable of exploring word order phenomena as phrase structures. The following trees illustrate this point; they represent one way of exploring discontinuities using dependency structures. The trees suggest the manner in which common discontinuities can be addressed. An example from German is used to illustrate a scramblingdiscontinuity: The a-trees on the left showprojectivityviolations (= crossing lines), and the b-trees on the right demonstrate one means of addressing these violations. The displaced constituent takes on a word as itsheadthat is not itsgovernor. The words in red mark thecatena(=chain) of words that extends from the root of the displaced constituent to thegovernorof that constituent.[20]Discontinuities are then explored in terms of these catenae. The limitations on topicalization,wh-fronting, scrambling, and extraposition can be explored and identified by examining the nature of the catenae involved. Traditionally, DGs have treated the syntactic functions (= grammatical functions,grammatical relations) as primitive. They posit an inventory of functions (e.g. subject, object, oblique, determiner, attribute, predicative, etc.). These functions can appear as labels on the dependencies in the tree structures, e.g.[21] The syntactic functions in this tree are shown in green: ATTR (attribute), COMP-P (complement of preposition), COMP-TO (complement of to), DET (determiner), P-ATTR (prepositional attribute), PRED (predicative), SUBJ (subject), TO-COMP (to complement). The functions chosen and abbreviations used in the tree here are merely representative of the general stance of DGs toward the syntactic functions. The actual inventory of functions and designations employed vary from DG to DG. As a primitive of the theory, the status of these functions is very different from that in some phrase structure grammars. Traditionally, phrase structure grammars derive the syntactic functions from the constellation. For instance, the object is identified as the NP appearing inside finite VP, and the subject as the NP appearing outside of finite VP. Since DGs reject the existence of a finite VP constituent, they were never presented with the option to view the syntactic functions in this manner. The issue is a question of what comes first: traditionally, DGs take the syntactic functions to be primitive and they then derive the constellation from these functions, whereas phrase structure grammars traditionally take the constellation to be primitive and they then derive the syntactic functions from the constellation. This question about what comes first (the functions or the constellation) is not an inflexible matter. The stances of both grammar types (dependency and phrase structure) are not narrowly limited to the traditional views. Dependency and phrase structure are both fully compatible with both approaches to the syntactic functions. Indeed, monostratal systems, that are solely based on dependency or phrase structure, will likely reject the notion that the functions are derived from the constellation or that the constellation is derived from the functions. They will take both to be primitive, which means neither can be derived from the other.
https://en.wikipedia.org/wiki/Dependency_grammar
Indata analysis,cosine similarityis ameasure of similaritybetween two non-zero vectors defined in aninner product space. Cosine similarity is thecosineof the angle between the vectors; that is, it is thedot productof the vectors divided by the product of their lengths. It follows that the cosine similarity does not depend on the magnitudes of the vectors, but only on their angle. The cosine similarity always belongs to the interval[−1,+1].{\displaystyle [-1,+1].}For example, twoproportional vectorshave a cosine similarity of +1, twoorthogonal vectorshave a similarity of 0, and twooppositevectors have a similarity of −1. In some contexts, the component values of the vectors cannot be negative, in which case the cosine similarity is bounded in[0,1]{\displaystyle [0,1]}. For example, ininformation retrievalandtext mining, each word is assigned a different coordinate and a document is represented by the vector of the numbers of occurrences of each word in the document. Cosine similarity then gives a useful measure of how similar two documents are likely to be, in terms of their subject matter, and independently of the length of the documents.[1] The technique is also used to measurecohesionwithin clusters in the field ofdata mining.[2] One advantage of cosine similarity is itslow complexity, especially forsparse vectors: only the non-zero coordinates need to be considered. Other names for cosine similarity includeOrchini similarityandTucker coefficient of congruence; theOtsuka–Ochiai similarity(see below) is cosine similarity applied tobinary data.[3] The cosine of two non-zero vectors can be derived by using theEuclidean dot productformula: Given twon-dimensionalvectorsof attributes,AandB, the cosine similarity,cos(θ), is represented using adot productandmagnitudeas whereAi{\displaystyle A_{i}}andBi{\displaystyle B_{i}}are thei{\displaystyle i}thcomponentsof vectorsA{\displaystyle \mathbf {A} }andB{\displaystyle \mathbf {B} }, respectively. The resulting similarity ranges from −1 meaning exactly opposite, to +1 meaning exactly the same, with 0 indicatingorthogonalityordecorrelation, while in-between values indicate intermediate similarity or dissimilarity. Fortext matching, the attribute vectorsAandBare usually theterm frequencyvectors of the documents. Cosine similarity can be seen as a method ofnormalizingdocument length during comparison. In the case ofinformation retrieval, the cosine similarity of two documents will range from0→1{\displaystyle 0\to 1}, since the term frequencies cannot be negative. This remains true when usingTF-IDFweights. The angle between two term frequency vectors cannot be greater than 90°. If the attribute vectors are normalized by subtracting the vector means (e.g.,A−A¯{\displaystyle A-{\bar {A}}}), the measure is called the centered cosine similarity and is equivalent to thePearson correlation coefficient. 
For an example of centering, When the distance between two unit-length vectors is defined to be the length of their vector difference thendist⁡(A,B)=(A−B)⋅(A−B)=A⋅A−2(A⋅B)+B⋅B=2(1−SC(A,B)).{\displaystyle \operatorname {dist} (\mathbf {A} ,\mathbf {B} )={\sqrt {(\mathbf {A} -\mathbf {B} )\cdot (\mathbf {A} -\mathbf {B} )}}={\sqrt {\mathbf {A} \cdot \mathbf {A} -2(\mathbf {A} \cdot \mathbf {B} )+\mathbf {B} \cdot \mathbf {B} }}={\sqrt {2(1-S_{C}(\mathbf {A} ,\mathbf {B} ))}}\,.} Nonetheless thecosine distance[4]is often defined without the square root or factor of 2: It is important to note that, by virtue of being proportional to squared Euclidean distance, the cosine distance is not a truedistance metric; it does not exhibit thetriangle inequalityproperty — or, more formally, theSchwarz inequality— and it violates the coincidence axiom. To repair the triangle inequality property while maintaining the same ordering, one can convert toEuclidean distance2(1−SC(A,B)){\textstyle {\sqrt {2(1-S_{C}(A,B))}}}or angular distanceθ= arccos(SC(A,B)). Alternatively, the triangular inequality that does work for angular distances can be expressed directly in terms of the cosines; seebelow. The normalized angle, referred to asangular distance, between any two vectorsA{\displaystyle A}andB{\displaystyle B}is a formaldistance metricand can be calculated from the cosine similarity.[5]The complement of the angular distance metric can then be used to defineangular similarityfunction bounded between 0 and 1, inclusive. When the vector elements may be positive or negative: Or, if the vector elements are always positive: Unfortunately, computing the inverse cosine (arccos) function is slow, making the use of the angular distance more computationally expensive than using the more common (but not metric) cosine distance above. Another effective proxy for cosine distance can be obtained byL2{\displaystyle L_{2}}normalisationof the vectors, followed by the application of normalEuclidean distance. Using this technique each term in each vector is first divided by the magnitude of the vector, yielding a vector of unit length. Then the Euclidean distance over the end-points of any two vectors is a proper metric which gives the same ordering as the cosine distance (amonotonic transformationof Euclidean distance; seebelow) for any comparison of vectors, and furthermore avoids the potentially expensive trigonometric operations required to yield a proper metric. Once the normalisation has occurred, the vector space can be used with the full range of techniques available to any Euclidean space, notably standarddimensionality reductiontechniques. This normalised form distance is often used within manydeep learningalgorithms. In biology, there is a similar concept known as the Otsuka–Ochiai coefficient named afterYanosuke Otsuka(also spelled as Ōtsuka, Ootsuka or Otuka,[6]Japanese:大塚 弥之助)[7]and Akira Ochiai (Japanese:落合 明),[8]also known as the Ochiai–Barkman[9]or Ochiai coefficient,[10]which can be represented as: Here,A{\displaystyle A}andB{\displaystyle B}aresets, and|A|{\displaystyle |A|}is the number of elements inA{\displaystyle A}. If sets are represented as bit vectors, the Otsuka–Ochiai coefficient can be seen to be the same as the cosine similarity. It is identical to the score introduced byGodfrey Thomson.[11] In a recent book,[12]the coefficient is tentatively misattributed to another Japanese researcher with the family name Otsuka. 
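A short sketch of the plain ("hard") cosine similarity defined earlier, applied to sparse term-frequency vectors, together with the related Euclidean ("chord") distance of the unit-normalised vectors. The example documents are invented, and the helper name `cosine_similarity` is chosen for this sketch.

```python
import math
from collections import Counter

def cosine_similarity(a: dict[str, int], b: dict[str, int]) -> float:
    """cos(theta) = (A . B) / (||A|| ||B||) for sparse term-frequency vectors."""
    dot = sum(a[t] * b.get(t, 0) for t in a)            # only shared terms contribute
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b)

doc1 = Counter("the cat sat on the mat".split())
doc2 = Counter("the cat lay on the rug".split())

sim = cosine_similarity(doc1, doc2)
chord = math.sqrt(2 * (1 - sim))   # Euclidean distance of the unit-normalised vectors
print(f"cosine similarity: {sim:.3f}, chord distance: {chord:.3f}")
```

Because term frequencies are non-negative, the similarity here falls in [0, 1]; the chord distance follows the sqrt(2(1 - S_C)) relation given above for unit-length vectors.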
The confusion arises because in 1957 Akira Ochiai attributes the coefficient only to Otsuka (no first name mentioned)[8] by citing an article by Ikuso Hamai (Japanese: 浜井 生三),[13] who in turn cites the original 1936 article by Yanosuke Otsuka.[7] The most noteworthy property of cosine similarity is that it reflects a relative, rather than absolute, comparison of the individual vector dimensions. For any positive constant a and vector V, the vectors V and aV are maximally similar. The measure is thus most appropriate for data where frequency is more important than absolute values; notably, term frequency in documents. However, more recent measures with a grounding in information theory, such as Jensen–Shannon, SED, and triangular divergence, have been shown to have improved semantics in at least some contexts.[14] Cosine similarity is related to Euclidean distance as follows. Denote Euclidean distance by the usual ‖A − B‖, and observe that by expansion. When A and B are normalized to unit length, ‖A‖² = ‖B‖² = 1, so this expression is equal to In short, the cosine distance can be expressed in terms of Euclidean distance as The Euclidean distance is called the chord distance (because it is the length of the chord on the unit circle); it is the Euclidean distance between the vectors after they have been normalized to unit sum of squared values. Null distribution: For data which can be negative as well as positive, the null distribution for cosine similarity is the distribution of the dot product of two independent random unit vectors. This distribution has a mean of zero and a variance of 1/n (where n is the number of dimensions), and although the distribution is bounded between −1 and +1, as n grows large the distribution is increasingly well approximated by the normal distribution.[15][16] For other types of data, such as bitstreams, which only take the values 0 or 1, the null distribution takes a different form and may have a nonzero mean.[17] The ordinary triangle inequality for angles (i.e., arc lengths on a unit hypersphere) gives us that Because the cosine function decreases as an angle in [0, π] radians increases, the sense of these inequalities is reversed when we take the cosine of each value: Using the cosine addition and subtraction formulas, these two inequalities can be written in terms of the original cosines, This form of the triangle inequality can be used to bound the minimum and maximum similarity of two objects A and B if the similarities to a reference object C are already known. This is used, for example, in metric data indexing, but has also been used to accelerate spherical k-means clustering[18] in the same way the Euclidean triangle inequality has been used to accelerate regular k-means. A soft cosine (or "soft" similarity) between two vectors considers similarities between pairs of features.[19] The traditional cosine similarity treats the vector space model (VSM) features as independent or completely different, while the soft cosine measure proposes considering the similarity of features in the VSM, which helps generalize the concept of cosine (and soft cosine) as well as the idea of (soft) similarity. For example, in the field of natural language processing (NLP) the similarity among features is quite intuitive. Features such as words, n-grams, or syntactic n-grams[20] can be quite similar, though formally they are considered as different features in the VSM.
For example, the words "play" and "game" are different words and are thus mapped to different points in the VSM; yet they are semantically related. In the case of n-grams or syntactic n-grams, Levenshtein distance can be applied (in fact, Levenshtein distance can be applied to words as well). For calculating soft cosine, the matrix s is used to indicate similarity between features. It can be calculated through Levenshtein distance, WordNet similarity, or other similarity measures. The soft cosine is then computed by weighting each pair of coordinates by this matrix. Given two N-dimensional vectors a and b, the soft cosine similarity is calculated as follows: where sij = similarity(feature i, feature j). If there is no similarity between features (sii = 1, sij = 0 for i ≠ j), the given equation is equivalent to the conventional cosine similarity formula. The time complexity of this measure is quadratic, which makes it applicable to real-world tasks. Note that the complexity can be reduced to subquadratic.[21] An efficient implementation of such soft cosine similarity is included in the Gensim open source library.
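Since the displayed soft-cosine formula did not survive extraction above, the following Python sketch uses the usual form of the measure, normalizing the bilinear form determined by the feature-similarity matrix s; the three-feature matrix and the two toy vectors are hypothetical.

import math

def soft_cosine(a, b, s):
    # s[i][j] = similarity(feature i, feature j); s is symmetric with s[i][i] = 1
    def bilinear(u, v):
        return sum(s[i][j] * u[i] * v[j] for i in range(len(u)) for j in range(len(v)))
    return bilinear(a, b) / (math.sqrt(bilinear(a, a)) * math.sqrt(bilinear(b, b)))

# hypothetical similarity matrix: features 0 and 1 (say, "play" and "game") are partly similar
s = [[1.0, 0.6, 0.0],
     [0.6, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
a = [1, 0, 1]
b = [0, 1, 1]
print(soft_cosine(a, b, s))   # 0.8, higher than the ordinary cosine of 0.5
identity = [[1.0 if i == j else 0.0 for j in range(3)] for i in range(3)]
print(soft_cosine(a, b, identity))   # with sii = 1, sij = 0 this reduces to the ordinary cosine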
https://en.wikipedia.org/wiki/Cosine_similarity
In mathematics, a metric space is a set together with a notion of distance between its elements, usually called points. The distance is measured by a function called a metric or distance function.[1] Metric spaces are a general setting for studying many of the concepts of mathematical analysis and geometry. The most familiar example of a metric space is 3-dimensional Euclidean space with its usual notion of distance. Other well-known examples are a sphere equipped with the angular distance and the hyperbolic plane. A metric may correspond to a metaphorical, rather than physical, notion of distance: for example, the set of 100-character Unicode strings can be equipped with the Hamming distance, which measures the number of characters that need to be changed to get from one string to another. Since they are very general, metric spaces are a tool used in many different branches of mathematics. Many types of mathematical objects have a natural notion of distance and therefore admit the structure of a metric space, including Riemannian manifolds, normed vector spaces, and graphs. In abstract algebra, the p-adic numbers arise as elements of the completion of a metric structure on the rational numbers. Metric spaces are also studied in their own right in metric geometry[2] and analysis on metric spaces.[3] Many of the basic notions of mathematical analysis, including balls, completeness, as well as uniform, Lipschitz, and Hölder continuity, can be defined in the setting of metric spaces. Other notions, such as continuity, compactness, and open and closed sets, can be defined for metric spaces, but also in the even more general setting of topological spaces. To see the utility of different notions of distance, consider the surface of the Earth as a set of points. We can measure the distance between two such points by the length of the shortest path along the surface, "as the crow flies"; this is particularly useful for shipping and aviation. We can also measure the straight-line distance between two points through the Earth's interior; this notion is, for example, natural in seismology, since it roughly corresponds to the length of time it takes for seismic waves to travel between those two points. The notion of distance encoded by the metric space axioms has relatively few requirements. This generality gives metric spaces a lot of flexibility. At the same time, the notion is strong enough to encode many intuitive facts about what distance means. This means that general results about metric spaces can be applied in many different contexts. Like many fundamental mathematical concepts, the metric on a metric space can be interpreted in many different ways. A particular metric may not be best thought of as measuring physical distance, but, instead, as the cost of changing from one state to another (as with Wasserstein metrics on spaces of measures) or the degree of difference between two objects (for example, the Hamming distance between two strings of characters, or the Gromov–Hausdorff distance between metric spaces themselves). Formally, a metric space is an ordered pair (M, d) where M is a set and d is a metric on M, i.e., a function d : M × M → ℝ satisfying the following axioms for all points x, y, z ∈ M:[4][5] (1) the distance from a point to itself is zero, d(x, x) = 0; (2) the distance between two distinct points is always positive, d(x, y) > 0 if x ≠ y; (3) symmetry, d(x, y) = d(y, x); and (4) the triangle inequality, d(x, z) ≤ d(x, y) + d(y, z). If the metric d is unambiguous, one often refers by abuse of notation to "the metric space M".
By taking all axioms except the second, one can show that distance is always non-negative:0=d(x,x)≤d(x,y)+d(y,x)=2d(x,y){\displaystyle 0=d(x,x)\leq d(x,y)+d(y,x)=2d(x,y)}Therefore the second axiom can be weakened toIfx≠y, thend(x,y)≠0{\textstyle {\text{If }}x\neq y{\text{, then }}d(x,y)\neq 0}and combined with the first to maked(x,y)=0⟺x=y{\textstyle d(x,y)=0\iff x=y}.[6] Thereal numberswith the distance functiond(x,y)=|y−x|{\displaystyle d(x,y)=|y-x|}given by theabsolute differenceform a metric space. Many properties of metric spaces and functions between them are generalizations of concepts inreal analysisand coincide with those concepts when applied to the real line. The Euclidean planeR2{\displaystyle \mathbb {R} ^{2}}can be equipped with many different metrics. TheEuclidean distancefamiliar from school mathematics can be defined byd2((x1,y1),(x2,y2))=(x2−x1)2+(y2−y1)2.{\displaystyle d_{2}((x_{1},y_{1}),(x_{2},y_{2}))={\sqrt {(x_{2}-x_{1})^{2}+(y_{2}-y_{1})^{2}}}.} ThetaxicaborManhattandistanceis defined byd1((x1,y1),(x2,y2))=|x2−x1|+|y2−y1|{\displaystyle d_{1}((x_{1},y_{1}),(x_{2},y_{2}))=|x_{2}-x_{1}|+|y_{2}-y_{1}|}and can be thought of as the distance you need to travel along horizontal and vertical lines to get from one point to the other, as illustrated at the top of the article. Themaximum,L∞{\displaystyle L^{\infty }}, orChebyshev distanceis defined byd∞((x1,y1),(x2,y2))=max{|x2−x1|,|y2−y1|}.{\displaystyle d_{\infty }((x_{1},y_{1}),(x_{2},y_{2}))=\max\{|x_{2}-x_{1}|,|y_{2}-y_{1}|\}.}This distance does not have an easy explanation in terms of paths in the plane, but it still satisfies the metric space axioms. It can be thought of similarly to the number of moves akingwould have to make on achessboardto travel from one point to another on the given space. In fact, these three distances, while they have distinct properties, are similar in some ways. Informally, points that are close in one are close in the others, too. This observation can be quantified with the formulad∞(p,q)≤d2(p,q)≤d1(p,q)≤2d∞(p,q),{\displaystyle d_{\infty }(p,q)\leq d_{2}(p,q)\leq d_{1}(p,q)\leq 2d_{\infty }(p,q),}which holds for every pair of pointsp,q∈R2{\displaystyle p,q\in \mathbb {R} ^{2}}. A radically different distance can be defined by settingd(p,q)={0,ifp=q,1,otherwise.{\displaystyle d(p,q)={\begin{cases}0,&{\text{if }}p=q,\\1,&{\text{otherwise.}}\end{cases}}}UsingIverson brackets,d(p,q)=[p≠q]{\displaystyle d(p,q)=[p\neq q]}In thisdiscrete metric, all distinct points are 1 unit apart: none of them are close to each other, and none of them are very far away from each other either. Intuitively, the discrete metric no longer remembers that the set is a plane, but treats it just as an undifferentiated set of points. All of these metrics make sense onRn{\displaystyle \mathbb {R} ^{n}}as well asR2{\displaystyle \mathbb {R} ^{2}}. Given a metric space(M,d)and asubsetA⊆M{\displaystyle A\subseteq M}, we can considerAto be a metric space by measuring distances the same way we would inM. Formally, theinduced metriconAis a functiondA:A×A→R{\displaystyle d_{A}:A\times A\to \mathbb {R} }defined bydA(x,y)=d(x,y).{\displaystyle d_{A}(x,y)=d(x,y).}For example, if we take the two-dimensional sphereS2as a subset ofR3{\displaystyle \mathbb {R} ^{3}}, the Euclidean metric onR3{\displaystyle \mathbb {R} ^{3}}induces the straight-line metric onS2described above. Two more useful examples are the open interval(0, 1)and the closed interval[0, 1]thought of as subspaces of the real line. 
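A short Python sketch, using nothing beyond the formulas just given, implements the Euclidean, taxicab, Chebyshev, and discrete metrics on the plane and spot-checks the stated comparison d∞(p, q) ≤ d2(p, q) ≤ d1(p, q) ≤ 2 d∞(p, q) on random points; the tolerance constant only absorbs floating-point error.

import math, random

def d1(p, q):    # taxicab / Manhattan distance
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d2(p, q):    # Euclidean distance
    return math.hypot(p[0] - q[0], p[1] - q[1])

def dinf(p, q):  # maximum / Chebyshev distance
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

def discrete(p, q):  # discrete metric: 0 exactly when the points coincide, else 1
    return 0 if p == q else 1

random.seed(0)
for _ in range(1000):
    p = (random.uniform(-5, 5), random.uniform(-5, 5))
    q = (random.uniform(-5, 5), random.uniform(-5, 5))
    a, b, c = dinf(p, q), d2(p, q), d1(p, q)
    eps = 1e-9
    assert a <= b + eps and b <= c + eps and c <= 2 * a + eps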
Arthur Cayley, in his article "On Distance", extended metric concepts beyond Euclidean geometry into domains bounded by a conic in a projective space. Hisdistancewas given by logarithm of across ratio. Any projectivity leaving the conic stable also leaves the cross ratio constant, so isometries are implicit. This method provides models forelliptic geometryandhyperbolic geometry, andFelix Klein, in several publications, established the field ofnon-euclidean geometrythrough the use of theCayley-Klein metric. The idea of an abstract space with metric properties was addressed in 1906 byRené Maurice Fréchet[7]and the termmetric spacewas coined byFelix Hausdorffin 1914.[8][9][10] Fréchet's work laid the foundation for understandingconvergence,continuity, and other key concepts in non-geometric spaces. This allowed mathematicians to study functions and sequences in a broader and more flexible way. This was important for the growing field of functional analysis. Mathematicians like Hausdorff andStefan Banachfurther refined and expanded the framework of metric spaces. Hausdorff introducedtopological spacesas a generalization of metric spaces. Banach's work infunctional analysisheavily relied on the metric structure. Over time, metric spaces became a central part ofmodern mathematics. They have influenced various fields includingtopology,geometry, andapplied mathematics. Metric spaces continue to play a crucial role in the study of abstract mathematical concepts. A distance function is enough to define notions of closeness and convergence that were first developed inreal analysis. Properties that depend on the structure of a metric space are referred to asmetric properties. Every metric space is also atopological space, and some metric properties can also be rephrased without reference to distance in the language of topology; that is, they are reallytopological properties. For any pointxin a metric spaceMand any real numberr> 0, theopen ballof radiusraroundxis defined to be the set of points that are strictly less than distancerfromx:Br(x)={y∈M:d(x,y)<r}.{\displaystyle B_{r}(x)=\{y\in M:d(x,y)<r\}.}This is a natural way to define a set of points that are relatively close tox. Therefore, a setN⊆M{\displaystyle N\subseteq M}is aneighborhoodofx(informally, it contains all points "close enough" tox) if it contains an open ball of radiusraroundxfor somer> 0. Anopen setis a set which is a neighborhood of all its points. It follows that the open balls form abasefor a topology onM. In other words, the open sets ofMare exactly the unions of open balls. As in any topology,closed setsare the complements of open sets. Sets may be both open and closed as well as neither open nor closed. This topology does not carry all the information about the metric space. For example, the distancesd1,d2, andd∞defined above all induce the same topology onR2{\displaystyle \mathbb {R} ^{2}}, although they behave differently in many respects. Similarly,R{\displaystyle \mathbb {R} }with the Euclidean metric and its subspace the interval(0, 1)with the induced metric arehomeomorphicbut have very different metric properties. Conversely, not every topological space can be given a metric. 
Topological spaces which are compatible with a metric are calledmetrizableand are particularly well-behaved in many ways: in particular, they areparacompact[11]Hausdorff spaces(hencenormal) andfirst-countable.[a]TheNagata–Smirnov metrization theoremgives a characterization of metrizability in terms of other topological properties, without reference to metrics. Convergence of sequencesin Euclidean space is defined as follows: Convergence of sequences in a topological space is defined as follows: In metric spaces, both of these definitions make sense and they are equivalent. This is a general pattern fortopological propertiesof metric spaces: while they can be defined in a purely topological way, there is often a way that uses the metric which is easier to state or more familiar from real analysis. Informally, a metric space iscompleteif it has no "missing points": every sequence that looks like it should converge to something actually converges. To make this precise: a sequence(xn)in a metric spaceMisCauchyif for everyε > 0there is an integerNsuch that for allm,n>N,d(xm,xn) < ε. By the triangle inequality, any convergent sequence is Cauchy: ifxmandxnare both less thanεaway from the limit, then they are less than2εaway from each other. If the converse is true—every Cauchy sequence inMconverges—thenMis complete. Euclidean spaces are complete, as isR2{\displaystyle \mathbb {R} ^{2}}with the other metrics described above. Two examples of spaces which are not complete are(0, 1)and the rationals, each with the metric induced fromR{\displaystyle \mathbb {R} }. One can think of(0, 1)as "missing" its endpoints 0 and 1. The rationals are missing all the irrationals, since any irrational has a sequence of rationals converging to it inR{\displaystyle \mathbb {R} }(for example, its successive decimal approximations). These examples show that completeness isnota topological property, sinceR{\displaystyle \mathbb {R} }is complete but the homeomorphic space(0, 1)is not. This notion of "missing points" can be made precise. In fact, every metric space has a uniquecompletion, which is a complete space that contains the given space as adensesubset. For example,[0, 1]is the completion of(0, 1), and the real numbers are the completion of the rationals. Since complete spaces are generally easier to work with, completions are important throughout mathematics. For example, in abstract algebra, thep-adic numbersare defined as the completion of the rationals under a different metric. Completion is particularly common as a tool infunctional analysis. Often one has a set of nice functions and a way of measuring distances between them. Taking the completion of this metric space gives a new set of functions which may be less nice, but nevertheless useful because they behave similarly to the original nice functions in important ways. For example,weak solutionstodifferential equationstypically live in a completion (aSobolev space) rather than the original space of nice functions for which the differential equation actually makes sense. A metric spaceMisboundedif there is anrsuch that no pair of points inMis more than distancerapart.[b]The least suchris called thediameterofM. The spaceMis calledprecompactortotally boundedif for everyr> 0there is a finitecoverofMby open balls of radiusr. Every totally bounded space is bounded. To see this, start with a finite cover byr-balls for some arbitraryr. Since the subset ofMconsisting of the centers of these balls is finite, it has finite diameter, sayD. 
By the triangle inequality, the diameter of the whole space is at mostD+ 2r. The converse does not hold: an example of a metric space that is bounded but not totally bounded isR2{\displaystyle \mathbb {R} ^{2}}(or any other infinite set) with the discrete metric. Compactness is a topological property which generalizes the properties of a closed and bounded subset of Euclidean space. There are several equivalent definitions of compactness in metric spaces: One example of a compact space is the closed interval[0, 1]. Compactness is important for similar reasons to completeness: it makes it easy to find limits. Another important tool isLebesgue's number lemma, which shows that for any open cover of a compact space, every point is relatively deep inside one of the sets of the cover. Unlike in the case of topological spaces or algebraic structures such asgroupsorrings, there is no single "right" type ofstructure-preserving functionbetween metric spaces. Instead, one works with different types of functions depending on one's goals. Throughout this section, suppose that(M1,d1){\displaystyle (M_{1},d_{1})}and(M2,d2){\displaystyle (M_{2},d_{2})}are two metric spaces. The words "function" and "map" are used interchangeably. One interpretation of a "structure-preserving" map is one that fully preserves the distance function: It follows from the metric space axioms that a distance-preserving function is injective. A bijective distance-preserving function is called anisometry.[13]One perhaps non-obvious example of an isometry between spaces described in this article is the mapf:(R2,d1)→(R2,d∞){\displaystyle f:(\mathbb {R} ^{2},d_{1})\to (\mathbb {R} ^{2},d_{\infty })}defined byf(x,y)=(x+y,x−y).{\displaystyle f(x,y)=(x+y,x-y).} If there is an isometry between the spacesM1andM2, they are said to beisometric. Metric spaces that are isometric areessentially identical. On the other end of the spectrum, one can forget entirely about the metric structure and studycontinuous maps, which only preserve topological structure. There are several equivalent definitions of continuity for metric spaces. The most important are: Ahomeomorphismis a continuous bijection whose inverse is also continuous; if there is a homeomorphism betweenM1andM2, they are said to behomeomorphic. Homeomorphic spaces are the same from the point of view of topology, but may have very different metric properties. For example,R{\displaystyle \mathbb {R} }is unbounded and complete, while(0, 1)is bounded but not complete. A functionf:M1→M2{\displaystyle f\,\colon M_{1}\to M_{2}}isuniformly continuousif for every real numberε > 0there existsδ > 0such that for all pointsxandyinM1such thatd(x,y)<δ{\displaystyle d(x,y)<\delta }, we haved2(f(x),f(y))<ε.{\displaystyle d_{2}(f(x),f(y))<\varepsilon .} The only difference between this definition and the ε–δ definition of continuity is the order of quantifiers: the choice of δ must depend only on ε and not on the pointx. However, this subtle change makes a big difference. For example, uniformly continuous maps take Cauchy sequences inM1to Cauchy sequences inM2. In other words, uniform continuity preserves some metric properties which are not purely topological. On the other hand, theHeine–Cantor theoremstates that ifM1is compact, then every continuous map is uniformly continuous. In other words, uniform continuity cannot distinguish any non-topological features of compact metric spaces. ALipschitz mapis one that stretches distances by at most a bounded factor. 
Formally, given a real numberK> 0, the mapf:M1→M2{\displaystyle f\,\colon M_{1}\to M_{2}}isK-Lipschitzifd2(f(x),f(y))≤Kd1(x,y)for allx,y∈M1.{\displaystyle d_{2}(f(x),f(y))\leq Kd_{1}(x,y)\quad {\text{for all}}\quad x,y\in M_{1}.}Lipschitz maps are particularly important in metric geometry, since they provide more flexibility than distance-preserving maps, but still make essential use of the metric.[14]For example, a curve in a metric space isrectifiable(has finite length) if and only if it has a Lipschitz reparametrization. A 1-Lipschitz map is sometimes called anonexpandingormetric map. Metric maps are commonly taken to be the morphisms of thecategory of metric spaces. AK-Lipschitz map forK< 1is called acontraction. TheBanach fixed-point theoremstates that ifMis a complete metric space, then every contractionf:M→M{\displaystyle f:M\to M}admits a uniquefixed point. If the metric spaceMis compact, the result holds for a slightly weaker condition onf: a mapf:M→M{\displaystyle f:M\to M}admits a unique fixed point ifd(f(x),f(y))<d(x,y)for allx≠y∈M1.{\displaystyle d(f(x),f(y))<d(x,y)\quad {\mbox{for all}}\quad x\neq y\in M_{1}.} Aquasi-isometryis a map that preserves the "large-scale structure" of a metric space. Quasi-isometries need not be continuous. For example,R2{\displaystyle \mathbb {R} ^{2}}and its subspaceZ2{\displaystyle \mathbb {Z} ^{2}}are quasi-isometric, even though one is connected and the other is discrete. The equivalence relation of quasi-isometry is important ingeometric group theory: theŠvarc–Milnor lemmastates that all spaces on which a groupacts geometricallyare quasi-isometric.[15] Formally, the mapf:M1→M2{\displaystyle f\,\colon M_{1}\to M_{2}}is aquasi-isometric embeddingif there exist constantsA≥ 1andB≥ 0such that1Ad2(f(x),f(y))−B≤d1(x,y)≤Ad2(f(x),f(y))+Bfor allx,y∈M1.{\displaystyle {\frac {1}{A}}d_{2}(f(x),f(y))-B\leq d_{1}(x,y)\leq Ad_{2}(f(x),f(y))+B\quad {\text{ for all }}\quad x,y\in M_{1}.}It is aquasi-isometryif in addition it isquasi-surjective, i.e. there is a constantC≥ 0such that every point inM2{\displaystyle M_{2}}is at distance at mostCfrom some point in the imagef(M1){\displaystyle f(M_{1})}. Given two metric spaces(M1,d1){\displaystyle (M_{1},d_{1})}and(M2,d2){\displaystyle (M_{2},d_{2})}: Anormed vector spaceis a vector space equipped with anorm, which is a function that measures the length of vectors. The norm of a vectorvis typically denoted by‖v‖{\displaystyle \lVert v\rVert }. Any normed vector space can be equipped with a metric in which the distance between two vectorsxandyis given byd(x,y):=‖x−y‖.{\displaystyle d(x,y):=\lVert x-y\rVert .}The metricdis said to beinducedby the norm‖⋅‖{\displaystyle \lVert {\cdot }\rVert }. Conversely,[16]if a metricdon avector spaceXis then it is the metric induced by the norm‖x‖:=d(x,0).{\displaystyle \lVert x\rVert :=d(x,0).}A similar relationship holds betweenseminormsandpseudometrics. Among examples of metrics induced by a norm are the metricsd1,d2, andd∞onR2{\displaystyle \mathbb {R} ^{2}}, which are induced by theManhattan norm, theEuclidean norm, and themaximum norm, respectively. More generally, theKuratowski embeddingallows one to see any metric space as a subspace of a normed vector space. Infinite-dimensional normed vector spaces, particularly spaces of functions, are studied infunctional analysis. Completeness is particularly important in this context: a complete normed vector space is known as aBanach space. 
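The fixed-point iteration behind the Banach fixed-point theorem is easy to sketch in Python; the particular contraction below (a 1/2-Lipschitz affine map on the real line, a complete metric space) is a hypothetical example chosen for illustration.

def banach_iterate(f, x0, tol=1e-12, max_iter=10_000):
    # successive approximation: for a contraction on a complete metric space,
    # the iterates x0, f(x0), f(f(x0)), ... converge to the unique fixed point
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx - x) < tol:
            return fx
        x = fx
    return x

f = lambda x: 0.5 * x + 1.0         # K-Lipschitz with K = 1/2 < 1, hence a contraction
print(banach_iterate(f, x0=100.0))  # converges to 2.0, the unique solution of f(x) = x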
An unusual property of normed vector spaces is thatlinear transformationsbetween them are continuous if and only if they are Lipschitz. Such transformations are known asbounded operators. Acurvein a metric space(M,d)is a continuous functionγ:[0,T]→M{\displaystyle \gamma :[0,T]\to M}. Thelengthofγis measured byL(γ)=sup0=x0<x1<⋯<xn=T{∑k=1nd(γ(xk−1),γ(xk))}.{\displaystyle L(\gamma )=\sup _{0=x_{0}<x_{1}<\cdots <x_{n}=T}\left\{\sum _{k=1}^{n}d(\gamma (x_{k-1}),\gamma (x_{k}))\right\}.}In general, this supremum may be infinite; a curve of finite length is calledrectifiable.[17]Suppose that the length of the curveγis equal to the distance between its endpoints—that is, it is the shortest possible path between its endpoints. After reparametrization by arc length,γbecomes ageodesic: a curve which is a distance-preserving function.[15]A geodesic is a shortest possible path between any two of its points.[c] Ageodesic metric spaceis a metric space which admits a geodesic between any two of its points. The spaces(R2,d1){\displaystyle (\mathbb {R} ^{2},d_{1})}and(R2,d2){\displaystyle (\mathbb {R} ^{2},d_{2})}are both geodesic metric spaces. In(R2,d2){\displaystyle (\mathbb {R} ^{2},d_{2})}, geodesics are unique, but in(R2,d1){\displaystyle (\mathbb {R} ^{2},d_{1})}, there are often infinitely many geodesics between two points, as shown in the figure at the top of the article. The spaceMis alength space(or the metricdisintrinsic) if the distance between any two pointsxandyis the infimum of lengths of paths between them. Unlike in a geodesic metric space, the infimum does not have to be attained. An example of a length space which is not geodesic is the Euclidean plane minus the origin: the points(1, 0)and(-1, 0)can be joined by paths of length arbitrarily close to 2, but not by a path of length 2. An example of a metric space which is not a length space is given by the straight-line metric on the sphere: the straight line between two points through the center of the Earth is shorter than any path along the surface. Given any metric space(M,d), one can define a new, intrinsic distance functiondintrinsiconMby setting the distance between pointsxandyto be the infimum of thed-lengths of paths between them. For instance, ifdis the straight-line distance on the sphere, thendintrinsicis the great-circle distance. However, in some casesdintrinsicmay have infinite values. For example, ifMis theKoch snowflakewith the subspace metricdinduced fromR2{\displaystyle \mathbb {R} ^{2}}, then the resulting intrinsic distance is infinite for any pair of distinct points. ARiemannian manifoldis a space equipped with a Riemannianmetric tensor, which determines lengths oftangent vectorsat every point. This can be thought of defining a notion of distance infinitesimally. In particular, a differentiable pathγ:[0,T]→M{\displaystyle \gamma :[0,T]\to M}in a Riemannian manifoldMhas length defined as the integral of the length of the tangent vector to the path:L(γ)=∫0T|γ˙(t)|dt.{\displaystyle L(\gamma )=\int _{0}^{T}|{\dot {\gamma }}(t)|dt.}On a connected Riemannian manifold, one then defines the distance between two points as the infimum of lengths of smooth paths between them. This construction generalizes to other kinds of infinitesimal metrics on manifolds, such assub-RiemannianandFinsler metrics. The Riemannian metric is uniquely determined by the distance function; this means that in principle, all information about a Riemannian manifold can be recovered from its distance function. 
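The supremum defining L(γ) can be approximated numerically: the sketch below (a hypothetical example, not tied to any library) sums the distances between successive points of ever finer partitions of a parametrized unit circle, and the resulting lower bounds approach the true length 2π.

import math

def polygonal_length(gamma, t0, t1, n):
    # length of the polygon through gamma at n + 1 equally spaced parameter values;
    # each such sum is a lower bound for the supremum L(gamma)
    ts = [t0 + (t1 - t0) * k / n for k in range(n + 1)]
    pts = [gamma(t) for t in ts]
    return sum(math.dist(pts[k - 1], pts[k]) for k in range(1, n + 1))

circle = lambda t: (math.cos(t), math.sin(t))   # unit circle in the Euclidean plane
for n in (4, 16, 256, 4096):
    print(n, polygonal_length(circle, 0.0, 2 * math.pi, n))   # approaches 2*pi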
One direction in metric geometry is finding purely metric ("synthetic") formulations of properties of Riemannian manifolds. For example, a Riemannian manifold is aCAT(k)space(a synthetic condition which depends purely on the metric) if and only if itssectional curvatureis bounded above byk.[20]ThusCAT(k)spaces generalize upper curvature bounds to general metric spaces. Real analysis makes use of both the metric onRn{\displaystyle \mathbb {R} ^{n}}and theLebesgue measure. Therefore, generalizations of many ideas from analysis naturally reside inmetric measure spaces: spaces that have both ameasureand a metric which are compatible with each other. Formally, ametric measure spaceis a metric space equipped with aBorel regular measuresuch that every ball has positive measure.[21]For example Euclidean spaces of dimensionn, and more generallyn-dimensional Riemannian manifolds, naturally have the structure of a metric measure space, equipped with theLebesgue measure. Certainfractalmetric spaces such as theSierpiński gasketcan be equipped with the α-dimensionalHausdorff measurewhere α is theHausdorff dimension. In general, however, a metric space may not have an "obvious" choice of measure. One application of metric measure spaces is generalizing the notion ofRicci curvaturebeyond Riemannian manifolds. Just asCAT(k)andAlexandrov spacesgeneralize sectional curvature bounds,RCD spacesare a class of metric measure spaces which generalize lower bounds on Ricci curvature.[22] Ametric space isdiscreteif its induced topology is thediscrete topology. Although many concepts, such as completeness and compactness, are not interesting for such spaces, they are nevertheless an object of study in several branches of mathematics. In particular,finite metric spaces(those having afinitenumber of points) are studied incombinatoricsandtheoretical computer science.[23]Embeddings in other metric spaces are particularly well-studied. For example, not every finite metric space can beisometrically embeddedin a Euclidean space or inHilbert space. On the other hand, in the worst case the required distortion (bilipschitz constant) is only logarithmic in the number of points.[24][25] For anyundirected connected graphG, the setVof vertices ofGcan be turned into a metric space by defining thedistancebetween verticesxandyto be the length of the shortest edge path connecting them. This is also calledshortest-path distanceorgeodesic distance. Ingeometric group theorythis construction is applied to theCayley graphof a (typically infinite)finitely-generated group, yielding theword metric. Up to a bilipschitz homeomorphism, the word metric depends only on the group and not on the chosen finite generating set.[15] An important area of study in finite metric spaces is the embedding of complex metric spaces into simpler ones while controlling the distortion of distances. This is particularly useful in computer science and discrete mathematics, where algorithms often perform more efficiently on simpler structures like tree metrics. A significant result in this area is that any finite metric space can be probabilistically embedded into atree metricwith an expected distortion ofO(logn){\displaystyle O(logn)}, wheren{\displaystyle n}is the number of points in the metric space.[26] This embedding is notable because it achieves the best possible asymptotic bound on distortion, matching the lower bound ofΩ(logn){\displaystyle \Omega (logn)}. 
The tree metrics produced in this embeddingdominatethe original metrics, meaning that distances in the tree are greater than or equal to those in the original space. This property is particularly useful for designing approximation algorithms, as it allows for the preservation of distance-related properties while simplifying the underlying structure. The result has significant implications for various computational problems: The technique involves constructing a hierarchical decomposition of the original metric space and converting it into a tree metric via a randomized algorithm. TheO(logn){\displaystyle O(logn)}distortion bound has led to improvedapproximation ratiosin several algorithmic problems, demonstrating the practical significance of this theoretical result. In modern mathematics, one often studies spaces whose points are themselves mathematical objects. A distance function on such a space generally aims to measure the dissimilarity between two objects. Here are some examples: The idea of spaces of mathematical objects can also be applied to subsets of a metric space, as well as metric spaces themselves.HausdorffandGromov–Hausdorff distancedefine metrics on the set of compact subsets of a metric space and the set of compact metric spaces, respectively. Suppose(M,d)is a metric space, and letSbe a subset ofM. Thedistance fromSto a pointxofMis, informally, the distance fromxto the closest point ofS. However, since there may not be a single closest point, it is defined via aninfimum:d(x,S)=inf{d(x,s):s∈S}.{\displaystyle d(x,S)=\inf\{d(x,s):s\in S\}.}In particular,d(x,S)=0{\displaystyle d(x,S)=0}if and only ifxbelongs to theclosureofS. Furthermore, distances between points and sets satisfy a version of the triangle inequality:d(x,S)≤d(x,y)+d(y,S),{\displaystyle d(x,S)\leq d(x,y)+d(y,S),}and therefore the mapdS:M→R{\displaystyle d_{S}:M\to \mathbb {R} }defined bydS(x)=d(x,S){\displaystyle d_{S}(x)=d(x,S)}is continuous. Incidentally, this shows that metric spaces arecompletely regular. Given two subsetsSandTofM, theirHausdorff distanceisdH(S,T)=max{sup{d(s,T):s∈S},sup{d(t,S):t∈T}}.{\displaystyle d_{H}(S,T)=\max\{\sup\{d(s,T):s\in S\},\sup\{d(t,S):t\in T\}\}.}Informally, two setsSandTare close to each other in the Hausdorff distance if no element ofSis too far fromTand vice versa. For example, ifSis an open set in Euclidean spaceTis anε-netinsideS, thendH(S,T)<ε{\displaystyle d_{H}(S,T)<\varepsilon }. In general, the Hausdorff distancedH(S,T){\displaystyle d_{H}(S,T)}can be infinite or zero. However, the Hausdorff distance between two distinct compact sets is always positive and finite. Thus the Hausdorff distance defines a metric on the set of compact subsets ofM. The Gromov–Hausdorff metric defines a distance between (isometry classes of) compact metric spaces. TheGromov–Hausdorff distancebetween compact spacesXandYis the infimum of the Hausdorff distance over all metric spacesZthat containXandYas subspaces. While the exact value of the Gromov–Hausdorff distance is rarely useful to know, the resulting topology has found many applications. 
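For finite subsets of the plane, both d(x, S) and the Hausdorff distance can be computed directly from the definitions above; the two point sets in this Python sketch are hypothetical.

import math

def dist_point_to_set(x, S):
    # d(x, S) = inf over s in S of d(x, s); for a finite set the infimum is a minimum
    return min(math.dist(x, s) for s in S)

def hausdorff(S, T):
    # d_H(S, T) = max( sup over s in S of d(s, T), sup over t in T of d(t, S) )
    return max(max(dist_point_to_set(s, T) for s in S),
               max(dist_point_to_set(t, S) for t in T))

S = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
T = [(0.1, 0.1), (1.0, 1.0)]
print(dist_point_to_set((5.0, 5.0), S))  # distance from a point to the closest point of S
print(hausdorff(S, T))                   # small only if each set stays close to the other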
If(M1,d1),…,(Mn,dn){\displaystyle (M_{1},d_{1}),\ldots ,(M_{n},d_{n})}are metric spaces, andNis theEuclidean normonRn{\displaystyle \mathbb {R} ^{n}}, then(M1×⋯×Mn,d×){\displaystyle {\bigl (}M_{1}\times \cdots \times M_{n},d_{\times }{\bigr )}}is a metric space, where theproduct metricis defined byd×((x1,…,xn),(y1,…,yn))=N(d1(x1,y1),…,dn(xn,yn)),{\displaystyle d_{\times }{\bigl (}(x_{1},\ldots ,x_{n}),(y_{1},\ldots ,y_{n}){\bigr )}=N{\bigl (}d_{1}(x_{1},y_{1}),\ldots ,d_{n}(x_{n},y_{n}){\bigr )},}and the induced topology agrees with theproduct topology. By the equivalence of norms in finite dimensions, a topologically equivalent metric is obtained ifNis thetaxicab norm, ap-norm, themaximum norm, or any other norm which is non-decreasing as the coordinates of a positiven-tuple increase (yielding the triangle inequality). Similarly, a metric on the topological product of countably many metric spaces can be obtained using the metricd(x,y)=∑i=1∞12idi(xi,yi)1+di(xi,yi).{\displaystyle d(x,y)=\sum _{i=1}^{\infty }{\frac {1}{2^{i}}}{\frac {d_{i}(x_{i},y_{i})}{1+d_{i}(x_{i},y_{i})}}.} The topological product of uncountably many metric spaces need not be metrizable. For example, an uncountable product of copies ofR{\displaystyle \mathbb {R} }is notfirst-countableand thus is not metrizable. IfMis a metric space with metricd, and∼{\displaystyle \sim }is anequivalence relationonM, then we can endow the quotient setM/∼{\displaystyle M/\!\sim }with a pseudometric. The distance between two equivalence classes[x]{\displaystyle [x]}and[y]{\displaystyle [y]}is defined asd′([x],[y])=inf{d(p1,q1)+d(p2,q2)+⋯+d(pn,qn)},{\displaystyle d'([x],[y])=\inf\{d(p_{1},q_{1})+d(p_{2},q_{2})+\dotsb +d(p_{n},q_{n})\},}where theinfimumis taken over all finite sequences(p1,p2,…,pn){\displaystyle (p_{1},p_{2},\dots ,p_{n})}and(q1,q2,…,qn){\displaystyle (q_{1},q_{2},\dots ,q_{n})}withp1∼x{\displaystyle p_{1}\sim x},qn∼y{\displaystyle q_{n}\sim y},qi∼pi+1,i=1,2,…,n−1{\displaystyle q_{i}\sim p_{i+1},i=1,2,\dots ,n-1}.[30]In general this will only define apseudometric, i.e.d′([x],[y])=0{\displaystyle d'([x],[y])=0}does not necessarily imply that[x]=[y]{\displaystyle [x]=[y]}. However, for some equivalence relations (e.g., those given by gluing together polyhedra along faces),d′{\displaystyle d'}is a metric. The quotient metricd′{\displaystyle d'}is characterized by the followinguniversal property. Iff:(M,d)→(X,δ){\displaystyle f\,\colon (M,d)\to (X,\delta )}is a metric (i.e. 1-Lipschitz) map between metric spaces satisfyingf(x) =f(y)wheneverx∼y{\displaystyle x\sim y}, then the induced functionf¯:M/∼→X{\displaystyle {\overline {f}}\,\colon {M/\sim }\to X}, given byf¯([x])=f(x){\displaystyle {\overline {f}}([x])=f(x)}, is a metric mapf¯:(M/∼,d′)→(X,δ).{\displaystyle {\overline {f}}\,\colon (M/\sim ,d')\to (X,\delta ).} The quotient metric does not always induce thequotient topology. For example, the topological quotient of the metric spaceN×[0,1]{\displaystyle \mathbb {N} \times [0,1]}identifying all points of the form(n,0){\displaystyle (n,0)}is not metrizable since it is notfirst-countable, but the quotient metric is a well-defined metric on the same set which induces acoarser topology. 
Moreover, different metrics on the original topological space (a disjoint union of countably many intervals) lead to different topologies on the quotient.[31] A topological space issequentialif and only if it is a (topological) quotient of a metric space.[32] There are several notions of spaces which have less structure than a metric space, but more than a topological space. There are also numerous ways of relaxing the axioms for a metric, giving rise to various notions of generalized metric spaces. These generalizations can also be combined. The terminology used to describe them is not completely standardized. Most notably, infunctional analysispseudometrics often come fromseminormson vector spaces, and so it is natural to call them "semimetrics". This conflicts with the use of the term intopology. Some authors define metrics so as to allow the distance functiondto attain the value ∞, i.e. distances are non-negative numbers on theextended real number line.[4]Such a function is also called anextended metricor "∞-metric". Every extended metric can be replaced by a real-valued metric that is topologically equivalent. This can be done using asubadditivemonotonically increasing bounded function which is zero at zero, e.g.d′(x,y)=d(x,y)/(1+d(x,y)){\displaystyle d'(x,y)=d(x,y)/(1+d(x,y))}ord″(x,y)=min(1,d(x,y)){\displaystyle d''(x,y)=\min(1,d(x,y))}. The requirement that the metric take values in[0,∞){\displaystyle [0,\infty )}can be relaxed to consider metrics with values in other structures, including: These generalizations still induce auniform structureon the space. ApseudometriconX{\displaystyle X}is a functiond:X×X→R{\displaystyle d:X\times X\to \mathbb {R} }which satisfies the axioms for a metric, except that instead of the second (identity of indiscernibles) onlyd(x,x)=0{\displaystyle d(x,x)=0}for allx{\displaystyle x}is required.[34]In other words, the axioms for a pseudometric are: In some contexts, pseudometrics are referred to assemimetrics[35]because of their relation toseminorms. Occasionally, aquasimetricis defined as a function that satisfies all axioms for a metric with the possible exception of symmetry.[36]The name of this generalisation is not entirely standardized.[37] Quasimetrics are common in real life. For example, given a setXof mountain villages, the typical walking times between elements ofXform a quasimetric because travel uphill takes longer than travel downhill. Another example is thelength of car ridesin a city with one-way streets: here, a shortest path from pointAto pointBgoes along a different set of streets than a shortest path fromBtoAand may have a different length. A quasimetric on the reals can be defined by settingd(x,y)={x−yifx≥y,1otherwise.{\displaystyle d(x,y)={\begin{cases}x-y&{\text{if }}x\geq y,\\1&{\text{otherwise.}}\end{cases}}}The 1 may be replaced, for example, by infinity or by1+y−x{\displaystyle 1+{\sqrt {y-x}}}or any othersubadditivefunction ofy-x. This quasimetric describes the cost of modifying a metal stick: it is easy to reduce its size byfiling it down, but it is difficult or impossible to grow it. Given a quasimetric onX, one can define anR-ball aroundxto be the set{y∈X|d(x,y)≤R}{\displaystyle \{y\in X|d(x,y)\leq R\}}. As in the case of a metric, such balls form a basis for a topology onX, but this topology need not be metrizable. For example, the topology induced by the quasimetric on the reals described above is the (reversed)Sorgenfrey line. 
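The quasimetric on the reals just described can be checked numerically; the Python sketch below verifies on random triples that it satisfies d(x, x) = 0 and the triangle inequality while failing symmetry (the tolerance constant only absorbs floating-point error).

import random

def d(x, y):
    # cheap to file the stick down, a flat cost of 1 to grow it
    return x - y if x >= y else 1.0

print(d(3.0, 1.0), d(1.0, 3.0))   # 2.0 versus 1.0: symmetry fails

random.seed(1)
for _ in range(10_000):
    x, y, z = (random.uniform(-10.0, 10.0) for _ in range(3))
    assert d(x, x) == 0.0
    assert d(x, z) <= d(x, y) + d(y, z) + 1e-12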
In ametametric, all the axioms of a metric are satisfied except that the distance between identical points is not necessarily zero. In other words, the axioms for a metametric are: Metametrics appear in the study ofGromov hyperbolic metric spacesand their boundaries. Thevisual metametricon such a space satisfiesd(x,x)=0{\displaystyle d(x,x)=0}for pointsx{\displaystyle x}on the boundary, but otherwised(x,x){\displaystyle d(x,x)}is approximately the distance fromx{\displaystyle x}to the boundary. Metametrics were first defined by Jussi Väisälä.[38]In other work, a function satisfying these axioms is called apartial metric[39][40]or adislocated metric.[34] AsemimetriconX{\displaystyle X}is a functiond:X×X→R{\displaystyle d:X\times X\to \mathbb {R} }that satisfies the first three axioms, but not necessarily the triangle inequality: Some authors work with a weaker form of the triangle inequality, such as: The ρ-inframetric inequality implies the ρ-relaxed triangle inequality (assuming the first axiom), and the ρ-relaxed triangle inequality implies the 2ρ-inframetric inequality. Semimetrics satisfying these equivalent conditions have sometimes been referred to asquasimetrics,[41]nearmetrics[42]orinframetrics.[43] The ρ-inframetric inequalities were introduced to modelround-trip delay timesin theinternet.[43]The triangle inequality implies the 2-inframetric inequality, and theultrametric inequalityis exactly the 1-inframetric inequality. Relaxing the last three axioms leads to the notion of apremetric, i.e. a function satisfying the following conditions: This is not a standard term. Sometimes it is used to refer to other generalizations of metrics such as pseudosemimetrics[44]or pseudometrics;[45]in translations of Russian books it sometimes appears as "prametric".[46]A premetric that satisfies symmetry, i.e. a pseudosemimetric, is also called a distance.[47] Any premetric gives rise to a topology as follows. For a positive realr{\displaystyle r}, ther{\displaystyle r}-ballcentered at a pointp{\displaystyle p}is defined as A set is calledopenif for any pointp{\displaystyle p}in the set there is anr{\displaystyle r}-ballcentered atp{\displaystyle p}which is contained in the set. Every premetric space is a topological space, and in fact asequential space. In general, ther{\displaystyle r}-ballsthemselves need not be open sets with respect to this topology. As for metrics, the distance between two setsA{\displaystyle A}andB{\displaystyle B}, is defined as This defines a premetric on thepower setof a premetric space. If we start with a (pseudosemi-)metric space, we get a pseudosemimetric, i.e. a symmetric premetric. Any premetric gives rise to apreclosure operatorcl{\displaystyle cl}as follows: The prefixespseudo-,quasi-andsemi-can also be combined, e.g., apseudoquasimetric(sometimes calledhemimetric) relaxes both the indiscernibility axiom and the symmetry axiom and is simply a premetric satisfying the triangle inequality. For pseudoquasimetric spaces the openr{\displaystyle r}-ballsform a basis of open sets. A very basic example of a pseudoquasimetric space is the set{0,1}{\displaystyle \{0,1\}}with the premetric given byd(0,1)=1{\displaystyle d(0,1)=1}andd(1,0)=0.{\displaystyle d(1,0)=0.}The associated topological space is theSierpiński space. 
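A tiny sketch of the two-point pseudoquasimetric just given, taking the r-ball around p to be {x : d(p, x) < r} (the ball definition itself was not preserved above, so that convention is an assumption), recovers the open sets of the Sierpiński space.

def d(p, q):
    # d(0, 1) = 1, d(1, 0) = 0, and d(x, x) = 0, as in the example above
    return 1 if (p, q) == (0, 1) else 0

def ball(center, r, points=(0, 1)):
    # assumed convention: the open r-ball around a point under a premetric
    return {q for q in points if d(center, q) < r}

print(ball(0, 0.5))   # {0}: a small ball around 0 excludes 1
print(ball(1, 0.5))   # {0, 1}: every ball around 1 contains 0, since d(1, 0) = 0
# hence {0} is open but {1} is not, which is exactly the topology of the Sierpinski space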
Sets equipped with an extended pseudoquasimetric were studied byWilliam Lawvereas "generalized metric spaces".[48]From acategoricalpoint of view, the extended pseudometric spaces and the extended pseudoquasimetric spaces, along with their corresponding nonexpansive maps, are the best behaved of themetric space categories. One can take arbitrary products and coproducts and form quotient objects within the given category. If one drops "extended", one can only take finite products and coproducts. If one drops "pseudo", one cannot take quotients. Lawvere also gave an alternate definition of such spaces asenriched categories. The ordered set(R,≥){\displaystyle (\mathbb {R} ,\geq )}can be seen as acategorywith onemorphisma→b{\displaystyle a\to b}ifa≥b{\displaystyle a\geq b}and none otherwise. Using+as thetensor productand 0 as theidentitymakes this category into amonoidal categoryR∗{\displaystyle R^{*}}. Every (extended pseudoquasi-)metric space(M,d){\displaystyle (M,d)}can now be viewed as a categoryM∗{\displaystyle M^{*}}enriched overR∗{\displaystyle R^{*}}: The notion of a metric can be generalized from a distance between two elements to a number assigned to a multiset of elements. Amultisetis a generalization of the notion of asetin which an element can occur more than once. Define the multiset unionU=XY{\displaystyle U=XY}as follows: if an elementxoccursmtimes inXandntimes inYthen it occursm+ntimes inU. A functiondon the set of nonempty finite multisets of elements of a setMis a metric[49]if By considering the cases of axioms 1 and 2 in which the multisetXhas two elements and the case of axiom 3 in which the multisetsX,Y, andZhave one element each, one recovers the usual axioms for a metric. That is, every multiset metric yields an ordinary metric when restricted to sets of two elements. A simple example is the set of all nonempty finite multisetsX{\displaystyle X}of integers withd(X)=max(X)−min(X){\displaystyle d(X)=\max(X)-\min(X)}. More complex examples areinformation distancein multisets;[49]andnormalized compression distance(NCD) in multisets.[50]
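The simple multiset metric d(X) = max(X) − min(X) can be sketched with Python's Counter type, whose addition realizes the multiset union XY described above (multiplicities add); the particular multisets are hypothetical.

from collections import Counter

def d(X):
    # spread of a nonempty finite multiset of integers; Counter keys are the distinct elements
    return max(X) - min(X)

A = Counter([3, 3, 7])       # the multiset {3, 3, 7}
B = Counter([1, 4])          # the multiset {1, 4}
print(d(A))                  # 4
print(d(A + B))              # the union {1, 3, 3, 4, 7} gives 6
print(d(Counter([2, 9])))    # 7: restricted to two-element multisets this is just |x - y|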
https://en.wikipedia.org/wiki/Distance_metric
In linguisticmorphology,inflection(less commonly,inflexion) is a process ofword formation[1]in which a word is modified to express differentgrammatical categoriessuch astense,case,voice,aspect,person,number,gender,mood,animacy, anddefiniteness.[2]The inflection ofverbsis calledconjugation, while the inflection ofnouns,adjectives,adverbs, etc.[a]can be calleddeclension. An inflection expresses grammatical categories withaffixation(such asprefix,suffix,infix,circumfix, andtransfix),apophony(asIndo-European ablaut), or other modifications.[3]For example, the Latin verbducam, meaning "I will lead", includes the suffix-am, expressing person (first), number (singular), and tense-mood (future indicative or present subjunctive). The use of this suffix is an inflection. In contrast, in the English clause "I will lead", the wordleadis not inflected for any of person, number, or tense; it is simply thebare formof a verb. The inflected form of a word often contains both one or morefree morphemes(a unit of meaning which can stand by itself as a word), and one or morebound morphemes(a unit of meaning which cannot stand alone as a word). For example, the English wordcarsis a noun that is inflected fornumber, specifically to express the plural; the content morphemecaris unbound because it could stand alone as a word, while the suffix-sis bound because it cannot stand alone as a word. These two morphemes together form the inflected wordcars. Words that are never subject to inflection are said to beinvariant; for example, the English verbmustis an invariant item: it never takes a suffix or changes form to signify a different grammatical category. Its categories can be determined only from its context. Languages that seldom make use of inflection, such asEnglish, are said to beanalytic. Analytic languages that do not make use ofderivational morphemes, such asStandard Chinese, are said to beisolating. Requiring the forms or inflections of more than one word in a sentence to be compatible with each other according to the rules of the language is known asconcordoragreement. For example, in "the man jumps", "man" is a singular noun, so "jump" is constrained in the present tense to use the third person singular suffix "s". Languages that have some degree of inflection aresynthetic languages. They can be highly inflected (such asGeorgianorKichwa), moderately inflected (such asRussianorLatin), weakly inflected (such asEnglish), but not uninflected (such asChinese). Languages that are so inflected that a sentence can consist of a single highly inflected word (such as manyNative American languages) are calledpolysynthetic languages. Languages in which each inflection conveys only a single grammatical category, such asFinnish, are known asagglutinative languages, while languages in which a single inflection can convey multiple grammatical roles (such as both nominative case and plural, as in Latin andGerman) are calledfusional. In English most nouns are inflected fornumberwith the inflectional pluralaffix-s(as in "dog" → "dog-s"), and most English verbs are inflected fortensewith the inflectional past tense affix-ed(as in "call" → "call-ed"). English also inflects verbs by affixation to mark the third person singular in the present tense (with-s), and the present participle (with-ing). English short adjectives are inflected to mark comparative and superlative forms (with-erand-estrespectively). 
In linguistic morphology, inflection (less commonly, inflexion) is a process of word formation[1] in which a word is modified to express different grammatical categories such as tense, case, voice, aspect, person, number, gender, mood, animacy, and definiteness.[2] The inflection of verbs is called conjugation, while the inflection of nouns, adjectives, adverbs, etc.[a] can be called declension.

An inflection expresses grammatical categories with affixation (such as prefix, suffix, infix, circumfix, and transfix), apophony (as Indo-European ablaut), or other modifications.[3] For example, the Latin verb ducam, meaning "I will lead", includes the suffix -am, expressing person (first), number (singular), and tense-mood (future indicative or present subjunctive). The use of this suffix is an inflection. In contrast, in the English clause "I will lead", the word lead is not inflected for any of person, number, or tense; it is simply the bare form of a verb.

The inflected form of a word often contains both one or more free morphemes (a unit of meaning which can stand by itself as a word), and one or more bound morphemes (a unit of meaning which cannot stand alone as a word). For example, the English word cars is a noun that is inflected for number, specifically to express the plural; the content morpheme car is unbound because it could stand alone as a word, while the suffix -s is bound because it cannot stand alone as a word. These two morphemes together form the inflected word cars.

Words that are never subject to inflection are said to be invariant; for example, the English verb must is an invariant item: it never takes a suffix or changes form to signify a different grammatical category. Its categories can be determined only from its context. Languages that seldom make use of inflection, such as English, are said to be analytic. Analytic languages that do not make use of derivational morphemes, such as Standard Chinese, are said to be isolating.

Requiring the forms or inflections of more than one word in a sentence to be compatible with each other according to the rules of the language is known as concord or agreement. For example, in "the man jumps", "man" is a singular noun, so "jump" is constrained in the present tense to use the third person singular suffix "s".

Languages that have some degree of inflection are synthetic languages. They can be highly inflected (such as Georgian or Kichwa), moderately inflected (such as Russian or Latin), weakly inflected (such as English), but not uninflected (such as Chinese). Languages that are so inflected that a sentence can consist of a single highly inflected word (such as many Native American languages) are called polysynthetic languages. Languages in which each inflection conveys only a single grammatical category, such as Finnish, are known as agglutinative languages, while languages in which a single inflection can convey multiple grammatical roles (such as both nominative case and plural, as in Latin and German) are called fusional.

In English most nouns are inflected for number with the inflectional plural affix -s (as in "dog" → "dog-s"), and most English verbs are inflected for tense with the inflectional past tense affix -ed (as in "call" → "call-ed"). English also inflects verbs by affixation to mark the third person singular in the present tense (with -s), and the present participle (with -ing). English short adjectives are inflected to mark comparative and superlative forms (with -er and -est respectively).
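To make the suffixing pattern concrete, here is a minimal Python sketch of the eight regular inflectional affix slots just listed for English. It is only an illustration: it simply concatenates strings, ignores spelling and phonological adjustments (tries, bigger, dishes) as well as irregular forms, and the dictionary keys and function name are invented for this example.

```python
# Toy illustration of English regular inflectional suffixation.
# Spelling rules and irregular forms are deliberately ignored.

REGULAR_INFLECTIONS = {
    "plural": "s",            # dog  -> dogs
    "possessive": "'s",       # dog  -> dog's
    "3sg_present": "s",       # call -> calls
    "past": "ed",             # call -> called
    "past_participle": "ed",  # call -> called
    "progressive": "ing",     # call -> calling
    "comparative": "er",      # tall -> taller
    "superlative": "est",     # tall -> tallest
}

def inflect(stem, category):
    """Attach the regular inflectional suffix for the given category."""
    return stem + REGULAR_INFLECTIONS[category]

print(inflect("dog", "plural"))        # dogs
print(inflect("call", "past"))         # called
print(inflect("tall", "superlative"))  # tallest
```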
There are eightregularinflectional affixes in the English language.[4][5] Despite the march toward regularization, modern English retains traces of its ancestry, with a minority of its words still using inflection byablaut(sound change, mostly in verbs) andumlaut(a particular type of sound change, mostly in nouns), as well as long-short vowel alternation. For example: For details, seeEnglish plural,English verbs, andEnglish irregular verbs. When a givenword classis subject to inflection in a particular language, there are generally one or more standard patterns of inflection (theparadigmsdescribed below) that words in that class may follow. Words which follow such a standard pattern are said to beregular; those that inflect differently are calledirregular. For instance, many languages that featureverbinflection have bothregular verbs and irregular verbs. In English, regular verbs form theirpast tenseandpast participlewith the ending-[e]d. Therefore, verbs likeplay,arriveandenterare regular, while verbs likesing,keepandgoare irregular. Irregular verbs often preserve patterns that were regular in past forms of the language, but which have now become anomalous; in rare cases, there are regular verbs that were irregular in past forms of the language. (For more details seeEnglish verbsandEnglish irregular verbs.) Other types of irregular inflected form include irregularpluralnouns, such as the Englishmice,childrenandwomen(seeEnglish plural) and the Frenchyeux(the plural ofœil, "eye"); and irregularcomparativeandsuperlativeforms of adjectives or adverbs, such as the Englishbetterandbest(which correspond to the positive formgoodorwell). Irregularities can have four basic causes:[citation needed] For more details on some of the considerations that apply to regularly and irregularly inflected forms, see the article onregular and irregular verbs. Two traditional grammatical terms refer to inflections of specificword classes: An organized list of the inflected forms of a givenlexemeor root word is called itsdeclensionif it is a noun, or itsconjugationif it is a verb. Below is the declension of the English pronounI, which is inflected for case and number. The pronounwhois also inflected according to case. Its declension isdefective, in the sense that it lacks a reflexive form. The following table shows the conjugation of the verbto arrivein the indicativemood:suffixesinflect it for person, number, and tense: Thenon-finite formsarrive(bare infinitive),arrived(past participle) andarriving(gerund/present participle), although not inflected for person or number, can also be regarded as part of the conjugation of the verbto arrive.Compound verb forms, such asI have arrived,I had arrived, orI will arrive, can be included also in the conjugation of the verb for didactic purposes, but they are not overt inflections ofarrive. The formula for deriving the covert form, in which the relevant inflections do not occur in the main verb, is Aninflectional paradigmrefers to a pattern (usually a set of inflectional endings), where a class of words follow the same pattern. Nominal inflectional paradigms are calleddeclensions, and verbal inflectional paradigms are termedconjugations. For instance, there are five types ofLatin declension. Words that belong to the first declension usually end in-aand are usually feminine. These words share a common inflectional framework. 
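The split described above between the regular -[e]d pattern and memorized irregular forms can be pictured as a lookup table with a rule-based fallback. The sketch below is a toy under that assumption; the tiny verb list and the final-e spelling tweak are illustrative, not an exhaustive account of English orthography.

```python
# Toy past-tense formation: irregular forms are listed exceptions,
# everything else falls back to the regular -(e)d pattern.
# Spelling rules such as "stop" -> "stopped" are ignored.

IRREGULAR_PAST = {"sing": "sang", "keep": "kept", "go": "went"}

def past_tense(verb):
    if verb in IRREGULAR_PAST:
        return IRREGULAR_PAST[verb]
    # regular pattern: add -d after a final e, otherwise -ed
    return verb + ("d" if verb.endswith("e") else "ed")

for v in ["play", "arrive", "enter", "sing", "keep", "go"]:
    print(v, "->", past_tense(v))
# play -> played, arrive -> arrived, enter -> entered,
# sing -> sang, keep -> kept, go -> went
```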
InOld English, nouns are divided into two major categories of declension, thestrongandweakones, as shown below: The terms "strong declension" and "weak declension" are primarily relevant to well-knowndependent-marking languages[citation needed](such as theIndo-European languages,[citation needed]orJapanese). In dependent-marking languages, nouns in adpositional (prepositional or postpositional) phrases can carry inflectional morphemes. Inhead-marking languages, the adpositions can carry the inflection in adpositional phrases. This means that these languages will have inflected adpositions. InWestern Apache(San Carlosdialect), the postposition-ká’'on' is inflected for person and number with prefixes: Traditional grammars have specific terms for inflections of nouns and verbs but not for those ofadpositions.[clarification needed] Inflection is the process of addinginflectionalmorphemesthat modify a verb's tense, mood, aspect, voice, person, or number or a noun's case, gender, or number, rarely affecting the word's meaning or class. Examples of applying inflectional morphemes to words are adding -sto the rootdogto formdogsand adding -edtowaitto formwaited. In contrast,derivationis the process of addingderivational morphemes, which create a new word from existing words and change the semantic meaning or the part of speech of the affected word, such as by changing a noun to a verb.[6] Distinctions between verbalmoodsare mainly indicated by derivational morphemes. Words are rarely listed in dictionaries on the basis of their inflectional morphemes (in which case they would be lexical items). However, they often are listed on the basis of their derivational morphemes. For instance, English dictionaries listreadableandreadability, words with derivational suffixes, along with their rootread. However, no traditional English dictionary listsbookas one entry andbooksas a separate entry; the same goes forjumpandjumped. Languages that add inflectional morphemes to words are sometimes calledinflectional languages, which is a synonym forinflected languages. Morphemes may be added in several different ways: Reduplicationis a morphological process where a constituent is repeated. The direct repetition of a word or root is calledtotal reduplication(orfull reduplication). The repetition of a segment is referred to aspartial reduplication. Reduplication can serve bothderivationaland inflectional functions. A few examples are given below: Palancar and Léonard provided an example withTlatepuzco Chinantec(anOto-Manguean languagespoken in SouthernMexico), where tones are able to distinguish mood, person, and number:[12][13] Case can be distinguished with tone as well, as inMaasai language(aNilo-Saharan languagespoken inKenyaandTanzania) (Hyman, 2016):[14] Because theProto-Indo-European languagewas highly inflected, all of its descendantIndo-European languages, such asAlbanian,Armenian,English,German,Ukrainian,Russian,Persian,Kurdish,Italian,Irish,Spanish,French,Hindi,Marathi,Urdu,Bengali, andNepali, are inflected to a greater or lesser extent. In general, older Indo-European languages such asLatin,Ancient Greek,Old English,Old Norse,Old Church SlavonicandSanskritare extensively inflected because of their temporal proximity to Proto-Indo-European.Deflexionhas caused modern versions of some Indo-European languages that were previously highly inflected to be much less so; an example is Modern English, as compared to Old English. 
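The dictionary-listing contrast described above (books and jumped map back to book and jump, while readable and readability get entries of their own) can be sketched as a naive headword lookup. Everything here, from the three-suffix list to the miniature lexicon, is an assumption made for illustration only.

```python
# Toy headword lookup: purely inflectional endings are stripped back to the
# base entry, while derivational forms keep entries of their own.
# The suffix handling is deliberately naive.

LEXICON = {"read", "readable", "readability", "book", "jump"}
INFLECTIONAL_SUFFIXES = ["ing", "ed", "s"]   # simplified

def headword(form):
    if form in LEXICON:                      # base and derived forms are listed
        return form
    for suffix in INFLECTIONAL_SUFFIXES:     # inflected forms are not
        if form.endswith(suffix) and form[: -len(suffix)] in LEXICON:
            return form[: -len(suffix)]
    return None

print(headword("books"))        # book  (inflected -> base entry)
print(headword("jumped"))       # jump
print(headword("readability"))  # readability (its own entry)
```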
In general, languages where deflexion occurs replace inflectional complexity with more rigorousword order, which provides the lost inflectional details. MostSlavic languagesand someIndo-Aryan languagesare an exception to the general Indo-European deflexion trend, continuing to be highly inflected (in some cases acquiring additional inflectional complexity andgrammatical genders, as inCzech&Marathi). Old Englishwas a moderately inflected language, using an extensive case system similar to that of modernIcelandic,FaroeseorGerman. Middle and Modern English lost progressively more of the Old English inflectional system. Modern English is considered a weakly inflected language, since its nouns have only vestiges of inflection (plurals, the pronouns), and its regular verbs have only four forms: an inflected form for the past indicative and subjunctive (looked), an inflected form for the third-person-singular present indicative (looks), an inflected form for the present participle (looking), and an uninflected form for everything else (look). While the English possessive indicator's(as in "Jennifer's book") is a remnant of the Old Englishgenitive casesuffix, it is now considered by syntacticians not to be a suffix but aclitic,[15]although some linguists argue that it has properties of both.[16] Old Norsewas inflected, but modernSwedish,Norwegian, andDanishhave lost much of their inflection.Grammatical casehas largely died out with the exception ofpronouns, just like English. However,adjectives,nouns,determinersandarticlesstill have different forms according to grammatical number and grammatical gender. Danish and Swedish only inflect for two different genders while Norwegian has to some degree retained the feminine forms and inflects for three grammatical genders like Icelandic. However, in comparison to Icelandic, there are considerably fewer feminine forms left in the language. In comparison,Icelandicpreserves almost all of theinflections of Old Norseand remains heavily inflected. It retains all the grammatical cases from Old Norse and is inflected for number and three different grammatical genders. Thedual number formsare however almost completely lost in comparison to Old Norse. Unlike other Germanic languages, nouns are inflected fordefinitenessin all Scandinavian languages, like in the following case forNorwegian (nynorsk): Adjectives andparticiplesare also inflected for definiteness in all Scandinavian languages like inProto-Germanic. ModernGermanremains moderately inflected, retaining four noun cases, although the genitive started falling into disuse in all but formal writing inEarly New High German. The case system ofDutch, simpler than that of German, is also simplified in common usage.Afrikaans, recognized as a distinct language in its own right rather than a Dutch dialect only in the early 20th century, has lost almost all inflection. TheRomance languages, such asSpanish,Italian,French,Portugueseand especially – with its many cases –Romanian, have more overt inflection than English, especially inverb conjugation. Adjectives, nouns and articles are considerably less inflected than verbs, but they still have different forms according to number and grammatical gender. Latin, the mother tongue of the Romance languages, was highly inflected; nouns and adjectives had different forms according to sevengrammatical cases(including five major ones) with five major patterns of declension, and three genders instead of the two found in most Romance tongues. 
There were four patterns of conjugation in six tenses, three moods (indicative, subjunctive, imperative, plus the infinitive, participle, gerund, gerundive, and supine) and two voices (passive and active), all overtly expressed by affixes (passive voice forms were periphrastic in three tenses). TheBaltic languagesare highly inflected. Nouns and adjectives are declined in up to seven overt cases. Additional cases are defined in various covert ways. For example, aninessive case, anillative case, anadessive caseandallative caseare borrowed from Finnic.Latvianhas only one overtlocative casebut itsyncretizesthe above four cases to the locative marking them by differences in the use of prepositions.[17]Lithuanian breaks them out of thegenitive case,accusative caseandlocative caseby using different postpositions.[18] Dual formis obsolete in standard Latvian and nowadays it is also considered nearly obsolete in standard Lithuanian. For instance, in standard Lithuanian it is normal to say "dvi varnos (plural) – two crows" instead of "dvi varni (dual)". Adjectives, pronouns, and numerals are declined for number, gender, and case to agree with the noun they modify or for which they substitute. Baltic verbs are inflected for tense, mood, aspect, and voice. They agree with the subject in person and number (not in all forms in modern Latvian). AllSlavic languagesmake use of a high degree of inflection, typically having six or seven cases and three genders for nouns and adjectives. However, the overt case system has disappeared almost completely in modernBulgarianandMacedonian. Most verb tenses and moods are also formed by inflection (however, some areperiphrastic, typically the future and conditional). Inflection is also present in adjective comparation and word derivation. Declensional endings depend on case (nominative, genitive, dative, accusative, locative, instrumental, vocative), number (singular, dual or plural), gender (masculine, feminine, neuter) and animacy (animate vs inanimate). Unusual in other language families, declension in most Slavic languages also depends on whether the word is a noun or an adjective. Slovene andSorbian languagesuse a rare third number, (in addition to singular and plural numbers) known asdual(in case of some words dual survived also inPolishand other Slavic languages). Modern Russian, Serbian and Czech also use a more complex form ofdual, but this misnomer applies instead to numbers 2, 3, 4, and larger numbers ending in 2, 3, or 4 (with the exception of the teens, which are handled as plural; thus, 102 is dual, but 12 or 127 are not). In addition, in some Slavic languages, such as Polish, word stems are frequently modified by the addition or absence of endings, resulting inconsonant and vowel alternation. Modern Standard Arabic(also called Literary Arabic) is an inflected language. It uses a system of independent and suffix pronouns classified by person and number and verbal inflections marking person and number. Suffix pronouns are used as markers ofpossessionand as objects of verbs and prepositions. Thetatweel(ـــ) marks where the verb stem, verb form, noun, or preposition is placed.[19] Arabicregional dialects(e.g.MoroccanArabic,EgyptianArabic,GulfArabic), used for everyday communication, tend to have less inflection than the more formal Literary Arabic. 
For example, inJordanianArabic, the second- and third-person feminine plurals (أنتنّantunnaandهنّhunna) and their respective unique conjugations are lost and replaced by the masculine (أنتمantumandهمhum), whereas in Lebanese and Syrian Arabic,همhumis replaced byهنّhunna. In addition, the system known asʾIʿrābplaces vowel suffixes on each verb, noun, adjective, and adverb, according to its function within a sentence and its relation to surrounding words.[19] TheUralic languagesareagglutinative, following from the agglutination inProto-Uralic. The largest languages areHungarian,Finnish, andEstonian—allEuropean Unionofficial languages. Uralic inflection is, or is developed from, affixing. Grammatical markers directly added to the word perform the same function as prepositions in English. Almost all words are inflected according to their roles in the sentence: verbs, nouns, pronouns, numerals, adjectives, and some particles. Hungarian and Finnish, in particular, often simply concatenate suffixes. For example, Finnishtalossanikinko"in my house, too?" consists oftalo-ssa-ni-kin-ko. However, in theFinnic languages(Finnish, Estonian etc.) and theSami languages, there are processes which affect the root, particularlyconsonant gradation. The original suffixes may disappear (and appear only by liaison), leaving behind the modification of the root. This process is extensively developed in Estonian and Sami, and makes them also inflected, not only agglutinating languages. The Estonianillative case, for example, is expressed by a modified root:maja→majja(historical form *maja-han). Though Altaic is widely considered to be asprachbundby linguists, three language families united by a small subset of linguists as theAltaic language family—Turkic,Mongolic, andManchu-Tungus—areagglutinative. The largest languages areTurkish,AzerbaijaniandUzbek—all Turkic languages. Altaic inflection is, or is developed from, affixing. Grammatical markers directly added to the word perform the same function as prepositions in English. Almost all words are inflected according to their roles in the sentence: verbs, nouns, pronouns, numerals, adjectives, and some particles. Basque, alanguage isolate, is a highly inflected language, heavily inflecting both nouns and verbs. Noun phrase morphology is agglutinative and consists of suffixes which simply attach to the end of a stem. These suffixes are in many cases fused with the article (-afor singular and-akfor plural), which in general is required to close a noun phrase in Basque if no other determiner is present, and unlike an article in many languages, it can only partially be correlated with the concept of definiteness. Proper nouns do not take an article, and indefinite nouns without the article (calledmugagabein Basque grammar) are highly restricted syntactically. Basque is an ergative language, meaning that inflectionally the single argument (subject) of an intransitive verb is marked in the same way as the direct object of a transitive verb. This is called theabsolutivecase and in Basque, as in most ergative languages, it is realized with a zero morph; in other words, it receives no special inflection. The subject of a transitive verb receives a special case suffix, called theergativecase.[20] There is no case marking concord in Basque and case suffixes, including those fused with the article, are added only to the last word in a noun phrase. Plurality is not marked on the noun and is identified only in the article or other determiner, possibly fused with a case marker. 
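The Finnish example talo-ssa-ni-kin-ko above lends itself to a small demonstration of agglutinative stacking: each grammatical marker is simply appended in order. The suffix labels and function below are illustrative, and vowel harmony and consonant gradation are deliberately ignored.

```python
# Minimal sketch of agglutinative suffix stacking, using the Finnish example
# talo-ssa-ni-kin-ko "in my house, too?". Real Finnish morphophonology
# (vowel harmony, consonant gradation) is ignored here.

suffixes = {
    "inessive": "ssa",   # 'in'
    "poss_1sg": "ni",    # 'my'
    "too": "kin",        # 'also'
    "question": "ko",    # yes/no question clitic
}

def agglutinate(stem, *markers):
    return stem + "".join(suffixes[m] for m in markers)

word = agglutinate("talo", "inessive", "poss_1sg", "too", "question")
print(word)  # talossanikinko -> "in my house, too?"
```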
The examples below are in the absolutive case with zero case marking, and include the article only:[20] The noun phrase is declined for 11 cases:Absolutive, ergative, dative, possessive-genitive, benefactive, comitative, instrumental, inessive, allative, ablative,andlocal-genitive. These are signaled by suffixes that vary according to the categories ofSingular, Plural, Indefinite,andProper Noun, and many vary depending on whether the stem ends in a consonant or vowel. The Singular and Plural categories are fused with the article, and these endings are used when the noun phrase is not closed by any other determiner. This gives a potential 88 different forms, but the Indefinite and Proper Noun categories are identical in all but the local cases (inessive, allative, ablative, local-genitive), and many other variations in the endings can be accounted for by phonological rules operating to avoid impermissible consonant clusters. Local case endings are not normally added to animate Proper Nouns. The precise meaning of the local cases can be further specified by additional suffixes added after the local case suffixes.[20] Verb forms are extremely complex, agreeing with the subject, direct object, and indirect object; and include forms that agree with a "dative of interest" for intransitive verbs as well as allocutive forms where the verb form is altered if one is speaking to a close acquaintance. These allocutive forms also have different forms depending on whether the addressee is male or female. This is the only area in Basque grammar where gender plays any role at all.[20]Subordination could also plausibly be considered an inflectional category of the Basque verb since subordination is signaled by prefixes and suffixes on the conjugated verb, further multiplying the number of potential forms.[21] Transitivity is a thoroughgoing division of Basque verbs, and it is necessary to know the transitivity of a particular verb in order to conjugate it successfully. In the spoken language only a handful of commonly used verbs are fully conjugated in the present and simple past, most verbs being conjugated by means of an auxiliary which differs according to transitivity. The literary language includes a few more such verbs, but the number is still very small. Even these few verbs require an auxiliary to conjugate other tenses besides the present and simple past.[20] The most common intransitive auxiliary isizan, which is also the verb for "to be". The most common transitive auxiliary isukan, which is also the verb for "to have". (Other auxiliaries can be used in some of the tenses and may vary by dialect.) The compound tenses use an invariable form of the main verb (which appears in different forms according to the "tense group") and a conjugated form of the auxiliary. Pronouns are normally omitted if recoverable from the verb form. A couple of examples will have to suffice to demonstrate the complexity of the Basque verb:[20] Liburu-ak Book-PL.the saldu sell dizkiegu. AUX.3PL/ABS.3PL/DAT.1PL/ERG Liburu-ak saldu dizkiegu. Book-PL.the sell AUX.3PL/ABS.3PL/DAT.1PL/ERG "We sold the books to them." Kafe-a Coffee-the gusta-tzen please-HAB zaidak. AUX.ALLOC/M.3SG/ABS.1SG/DAT Kafe-a gusta-tzen zaidak. Coffee-the please-HAB AUX.ALLOC/M.3SG/ABS.1SG/DAT "I like coffee." ("Coffee pleases me.")(Used when speaking to a male friend.) 
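As a quick sanity check on the "potential 88 different forms" figure for the Basque noun phrase given above, one way to arrive at it is 11 cases × 4 number/definiteness categories × 2 stem shapes (consonant-final versus vowel-final). The snippet below just spells out that arithmetic, on the assumption that this is how the figure is composed.

```python
# Back-of-the-envelope check of the "potential 88 different forms" figure
# for the Basque noun phrase, assuming it is the product of the three
# dimensions named in the text.

cases = 11          # absolutive, ergative, dative, ...
categories = 4      # singular, plural, indefinite, proper noun
stem_shapes = 2     # stem ends in a consonant or a vowel

print(cases * categories * stem_shapes)  # 88
```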
The morphs that represent the various tense/person/case/mood categories of Basque verbs, especially in the auxiliaries, are so highly fused that segmenting them into individual meaningful units is nearly impossible, if not pointless. Considering the multitude of forms that a particular Basque verb can take, it seems unlikely that an individual speaker would have an opportunity to utter them all in his or her lifetime.[22]

Most languages in the Mainland Southeast Asia linguistic area (such as the varieties of Chinese, Vietnamese, and Thai) are not overtly inflected, or show very little overt inflection, and are therefore considered analytic languages (also known as isolating languages).

Standard Chinese does not possess overt inflectional morphology. While some languages indicate grammatical relations with inflectional morphemes, Chinese relies on word order and particles. Consider the Latin sentences "Puer puellam videt" and "Puellam puer videt": both mean 'The boy sees the girl', because puer (boy) is singular nominative and puellam (girl) is singular accusative. Since the roles of puer and puellam are marked by case endings, the change in position does not matter. The situation is very different in Chinese: since Modern Chinese makes no use of inflection, the meanings of wǒ ('I' or 'me') and tā ('he' or 'him') must be determined by their position.

In Classical Chinese, pronouns were overtly inflected to mark case. However, these overt case forms are no longer used; most of the alternative pronouns are considered archaic in modern Mandarin Chinese. Classically, 我 (wǒ) was used solely as the first-person accusative, while 吾 (wú) was generally used as the first-person nominative.[23]

Certain varieties of Chinese are known to express meaning by means of tone change, although further investigation is required.[dubious – discuss] Note that tone change must be distinguished from tone sandhi. Tone sandhi is a compulsory change that occurs when certain tones are juxtaposed; tone change, by contrast, is a morphologically conditioned alternation used as an inflectional or a derivational strategy. Examples from Taishan and Zhongshan (both Yue dialects spoken in Guangdong Province) are shown below:[24]

The following table compares the personal pronouns of the Sixian dialect (a dialect of Taiwanese Hakka)[25] with Zaiwa and Jingpho[26] (both Tibeto-Burman languages spoken in Yunnan and Burma). The superscripted numbers indicate the Chao tone numerals.

In Shanghainese, the third-person singular pronoun is overtly inflected for case, and the first- and second-person singular pronouns exhibit a change in tone depending on case.[citation needed]

Japanese shows a high degree of overt inflection of verbs, less so of adjectives, and very little of nouns, but it is mostly strictly agglutinative and extremely regular. Fusion of morphemes also happens in colloquial speech; for example, the causative-passive 〜せられ〜 (-serare-) fuses into 〜され〜 (-sare-), as in 行かされる (ikasareru, "is made to go"), and the non-past progressive 〜ている (-teiru) fuses into 〜てる (-teru), as in 食べてる (tabeteru, "is eating"). Formally, every noun phrase must be marked for case, but this is done by invariable particles (clitic postpositions). (Many[citation needed] grammarians consider Japanese particles to be separate words, and therefore not an inflection, while others[citation needed] consider agglutination a type of overt inflection, and therefore consider Japanese nouns as overtly inflected.)

Some auxiliary languages, such as Lingua Franca Nova, Glosa, and Frater, have no inflection.
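The Latin comparison above can be caricatured in a few lines of code: in a case-marking language the grammatical role travels with the word form, so reordering the words does not change who sees whom, whereas a positional language would have to assign roles by slot. The three-entry lexicon and the analyse function are invented for this sketch.

```python
# Toy contrast between case marking and word order, based on the Latin
# example above. Roles are read off the word forms themselves, so both
# orders yield the same analysis; a positional language would instead
# assign roles by sentence position.

LATIN_FORMS = {
    "puer":    ("boy", "subject"),   # nominative singular
    "puellam": ("girl", "object"),   # accusative singular
    "videt":   ("sees", "verb"),
}

def analyse(sentence):
    roles = {}
    for word in sentence.lower().rstrip(".").split():
        gloss, role = LATIN_FORMS[word]
        roles[role] = gloss
    return roles

print(analyse("Puer puellam videt."))  # {'subject': 'boy', 'object': 'girl', 'verb': 'sees'}
print(analyse("Puellam puer videt."))  # same roles despite the different order
```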
Other auxiliary languages, such as Esperanto, Ido, and Interlingua, have comparatively simple inflectional systems.

In Esperanto, an agglutinative language, nouns and adjectives are inflected for case (nominative, accusative) and number (singular, plural), according to a simple paradigm without irregularities. Verbs are not inflected for person or number, but they are inflected for tense (past, present, future) and mood (indicative, infinitive, conditional, jussive). They also form active and passive participles, which may be past, present or future. All verbs are regular.

Ido has a different form for each verbal tense (past, present, future, volitive and imperative) plus an infinitive, and both a present and a past participle. There are, however, no verbal inflections for person or number, and all verbs are regular. Nouns are marked for number (singular and plural), and the accusative case may be shown in certain situations, typically when the direct object of a sentence precedes its verb. Adjectives, on the other hand, are unmarked for gender, number or case (unless they stand on their own, without a noun, in which case they take on the same desinences as the missing noun would have taken). The definite article "la" ("the") remains unaltered regardless of gender or case, and also of number, except when there is no other word to show plurality. Pronouns are identical in all cases, though exceptionally the accusative case may be marked, as for nouns.

Interlingua, in contrast with the Romance languages, has almost no irregular verb conjugations, and its verb forms are the same for all persons and numbers. It does, however, have compound verb tenses similar to those in the Romance, Germanic, and Slavic languages: ille ha vivite, "he has lived"; illa habeva vivite, "she had lived". Nouns are inflected by number, taking a plural -s, but rarely by gender: only when referring to a male or female being. Interlingua has no noun-adjective agreement by gender, number, or case. As a result, adjectives ordinarily have no inflections. They may take the plural form if they are being used in place of a noun: le povres, "the poor".
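Esperanto's fully regular paradigm, as summarized above, is simple enough to generate mechanically: one ending per tense or mood, plus -o, -j and -n on nouns. The sketch below assumes only those textbook endings; the sample roots (hund-, vid-) are ordinary Esperanto vocabulary used for illustration.

```python
# Sketch of Esperanto's regular paradigm: nouns take -o, plural -j,
# accusative -n; verbs take one ending per tense/mood and never inflect
# for person or number. There are no irregular stems.

VERB_ENDINGS = {"infinitive": "i", "present": "as", "past": "is",
                "future": "os", "conditional": "us", "jussive": "u"}

def noun(root, plural=False, accusative=False):
    return root + "o" + ("j" if plural else "") + ("n" if accusative else "")

def verb(root, form):
    return root + VERB_ENDINGS[form]

print(noun("hund"))                                # hundo   'dog'
print(noun("hund", plural=True, accusative=True))  # hundojn 'dogs (object)'
print(verb("vid", "past"))                         # vidis   'saw'
print(verb("vid", "future"))                       # vidos   'will see'
```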
https://en.wikipedia.org/wiki/Inflection
Aloanword(also aloan word,loan-word) is awordat least partly assimilated from onelanguage(the donor language) into another language (the recipient or target language), through the process ofborrowing.[1][2]Borrowing is a metaphorical term that is well established in the linguistic field despite its acknowledged descriptive flaws: nothing is taken away from the donor language and there is no expectation of returning anything (i.e., the loanword).[3] Loanwords may be contrasted withcalques, in which a word is borrowed into the recipient language by being directly translated from the donor language rather than being adopted in (an approximation of) its original form. They must also be distinguished fromcognates, which are words in two or morerelated languagesthat are similar because they share anetymologicalorigin in the ancestral language, rather than because one borrowed the word from the other. A loanword is distinguished from acalque(orloan translation), which is a word or phrase whosemeaningoridiomis adopted from another language by word-for-wordtranslationinto existing words or word-forming roots of the recipient language.[4]Loanwords, in contrast, arenottranslated. Examples of loanwords in theEnglish languageincludecafé(from Frenchcafé, which means "coffee"),bazaar(from Persianbāzār, which means "market"), andkindergarten(from GermanKindergarten, which literally means "children's garden"). The wordcalqueis a loanword, while the wordloanwordis a calque:calquecomes from the French nouncalque("tracing; imitation; close copy");[5]while the wordloanwordand the phraseloan translationare translated fromGermannounsLehnwort[6]andLehnübersetzung(German:[ˈleːnʔybɐˌzɛt͡sʊŋ]ⓘ).[7] Loans of multi-word phrases, such as the English use of the French termdéjà vu, are known as adoptions, adaptations, or lexical borrowings.[8][9] Although colloquial andinformal registerloanwords are typically spread by word-of-mouth, technical or academic loanwords tend to be first used in written language, often for scholarly, scientific, or literary purposes.[10][11] The termssubstrateandsuperstrateare often used when two languages interact. However, the meaning of these terms is reasonably well-defined only in second language acquisition or language replacement events, when the native speakers of a certain source language (the substrate) are somehow compelled to abandon it for another target language (the superstrate).[12][relevant?] AWanderwortis a word that has been borrowed across a wide range of languages remote from its original source; an example is the wordtea, which originated inHokkienbut has been borrowed into languages all over the world. For a sufficiently old Wanderwort, it may become difficult or impossible to determine in what language it actually originated. Most of thetechnical vocabulary of classical music(such asconcerto,allegro,tempo,aria,opera, andsoprano) is borrowed fromItalian,[13]and that ofballetfromFrench.[14]Much of theterminologyof the sport offencingalso comes from French. Many loanwords come from prepared food, drink, fruits, vegetables, seafood and more from languages around the world. In particular, many come fromFrench cuisine(crêpe,Chantilly,crème brûlée),Italian(pasta,linguine,pizza,espresso), andChinese(dim sum,chow mein,wonton). 
Loanwords are adapted from one language to another in a variety of ways.[15]The studies byWerner Betz(1971, 1901),Einar Haugen(1958, also 1956), andUriel Weinreich(1963) are regarded as the classical theoretical works on loan influence.[16]The basic theoretical statements all take Betz's nomenclature as their starting point. Duckworth (1977) enlarges Betz's scheme by the type "partial substitution" and supplements the system with English terms. A schematic illustration of these classifications is given below.[17] The phrase "foreign word" used in the image below is a mistranslation of the GermanFremdwort, which refers to loanwords whose pronunciation, spelling, inflection or gender have not been adapted to the new language such that they no longer seem foreign. Such a separation of loanwords into two distinct categories is not used by linguists in English in talking about any language. Basing such a separation mainly on spelling is (or, in fact, was) not common except amongst German linguists, and only when talking about German and sometimes other languages that tend to adapt foreign spellings, which is rare in English unless the word has been widely used for a long time. According to the linguist Suzanne Kemmer, the expression "foreign word" can be defined as follows in English: "[W]hen most speakers do not know the word and if they hear it think it is from another language, the word can be called a foreign word. There are many foreign words and phrases used in English such as bon vivant (French), mutatis mutandis (Latin), and Schadenfreude (German)."[18]This is not how the term is used in this illustration: On the basis of an importation-substitution distinction, Haugen (1950: 214f.) distinguishes three basic groups of borrowings: "(1)Loanwordsshow morphemic importation without substitution.... (2)Loanblendsshow morphemic substitution as well as importation.... (3)Loanshiftsshow morphemic substitution without importation". Haugen later refined (1956) his model in a review of Gneuss's (1955) book on Old English loan coinages, whose classification, in turn, is the one by Betz (1949) again. Weinreich (1953: 47ff.) differentiates between two mechanisms of lexical interference, namely those initiated by simple words and those initiated by compound words and phrases. Weinreich (1953: 47) definessimple words"from the point of view of the bilinguals who perform the transfer, rather than that of the descriptive linguist. Accordingly, the category 'simple' words also includes compounds that are transferred in unanalysed form". After this general classification, Weinreich then resorts to Betz's (1949) terminology. The English language has borrowed many words from other cultures or languages. For examples, seeLists of English words by country or language of originandAnglicisation. Some English loanwords remain relatively faithful to the original phonology even though a particularphonememight not exist or have contrastive status in English. For example, theHawaiianwordʻaʻāis used by geologists to specify lava that is thick, chunky, and rough. The Hawaiian spelling indicates the twoglottal stopsin the word, but the English pronunciation,/ˈɑː(ʔ)ɑː/, contains at most one. The English spelling usually removes theʻokinaandmacrondiacritics.[19] Most English affixes, such asun-,-ing, and-ly, were used in Old English. However, a few English affixes are borrowed. For example, the verbal suffix-ize(American English) orise(British English)comes from Greek -ιζειν (-izein) through Latin-izare. 
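Haugen's three-way split quoted above reduces to two yes/no features, importation and substitution, which can be restated as a tiny decision function. The example words in the comments are common textbook illustrations, not data from Haugen, and the function name is invented here.

```python
# Compact restatement of Haugen's (1950) classification of borrowings,
# keyed on whether foreign morphemes are imported and/or substituted
# by native ones.

def haugen_category(importation, substitution):
    if importation and not substitution:
        return "loanword"    # e.g. English "café" from French
    if importation and substitution:
        return "loanblend"   # part borrowed, part native material
    if substitution and not importation:
        return "loanshift"   # e.g. a calque / loan translation
    return "no borrowing"

print(haugen_category(importation=True,  substitution=False))  # loanword
print(haugen_category(importation=True,  substitution=True))   # loanblend
print(haugen_category(importation=False, substitution=True))   # loanshift
```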
Pronunciation often differs from the original language, occasionally dramatically, especially when dealing withplace names. This often leads to divergence when many speakers anglicize pronunciations as other speakers try to maintain the way the name would sound in the original language, as in thepronunciation of Louisville. During more than 600 years of theOttoman Empire, the literary and administrative language of the empire wasTurkish, with manyPersianandArabicloanwords, calledOttoman Turkish, considerably differing from the everyday spoken Turkish of the time. Many such words were adopted by other languages of the empire, such asAlbanian,Bosnian,Bulgarian,Croatian,Greek,Hungarian,Ladino,Macedonian,MontenegrinandSerbian. After the empire fell afterWorld War Iand theRepublic of Turkeywas founded, the Turkish language underwent an extensivelanguage reformled by the newly foundedTurkish Language Association, during whichmany adopted wordswere replaced with new formations derived fromTurkicroots. That was part of the ongoing cultural reform of the time, in turn a part in the broader framework ofAtatürk's Reforms, which also included the introduction of the newTurkish alphabet. Turkish also has taken many words fromFrench, such aspantolonfortrousers(from Frenchpantalon) andkomikforfunny(from Frenchcomique), most of them pronounced very similarly. Word usage in modern Turkey has acquired a political tinge:right-wingpublications tend to use more Arabic-originated words,left-wingpublications use more words adopted from Indo-European languages such as Persian and French, while centrist publications use more native Turkish root words.[20] Almost 350 years of Dutch presence in what is nowIndonesiahave left significant linguistic traces. Though very few Indonesians have a fluent knowledge of Dutch, the Indonesian language inherited many words from Dutch, both in words for everyday life (e.g.,buncisfrom Dutchboontjesfor (green) beans) and as well in administrative, scientific or technological terminology (e.g.,kantorfrom Dutchkantoorfor office).[21]The Professor of Indonesian Literature atLeiden University,[22]and of Comparative Literature atUCR,[23]argues that roughly 20% ofIndonesianwords can be traced back to Dutch words.[24] In the late 17th century, theDutch Republichad a leading position in shipbuilding. CzarPeter the Great, eager to improve his navy, studied shipbuilding inZaandamandAmsterdam. Many Dutch naval terms have been incorporated in the Russian vocabulary, such asбра́мсель(brámselʹ) from Dutchbramzeilfor thetopgallant sail,домкра́т(domkrát) from Dutchdommekrachtforjack, andматро́с(matrós) from Dutchmatroosfor sailor. A large percentage of the lexicon ofRomance languages, themselves descended fromVulgar Latin, consists of loanwords (laterlearned or scholarly borrowings) from Latin. These words can be distinguished by lack of typical sound changes and other transformations found in descended words, or by meanings taken directly fromClassicalorEcclesiastical Latinthat did not evolve or change over time as expected; in addition, there are also semi-learned terms which were adapted partially to the Romance language's character. Latin borrowings can be known by several names in Romance languages: in French, for example, they are usually referred to asmots savants, in Spanish ascultismos,[25][26]and in Italian aslatinismi. 
Latin is usually the most common source of loanwords in these languages, such as in Italian, Spanish, French, Portuguese, etc.,[27][28]and in some cases the total number of loans may even outnumber inherited terms[29][30](although the learned borrowings are less often used in common speech, with the most common vocabulary being of inherited, orally transmitted origin from Vulgar Latin). This has led to many cases of etymologicaldoubletsin these languages. For most Romance languages, these loans were initiated by scholars, clergy, or other learned people and occurred in medieval times, peaking in the late Middle Ages and early Renaissance era[28]- in Italian, the 14th century had the highest number of loans.[citation needed]In the case of Romanian, the language underwent a "re-Latinization" process later than the others (seeRomanian lexis,Romanian language § French, Italian, and English loanwords), in the 18th and 19th centuries, partially using French and Italian words (many of these themselves being earlier borrowings from Latin) as intermediaries,[31]in an effort to modernize the language, often adding concepts that did not exist until then, or replacing words of other origins. These common borrowings and features also essentially serve to raise mutual intelligibility of the Romance languages, particularly in academic/scholarly, literary, technical, and scientific domains. Many of these same words are also found in English (through its numerous borrowings from Latin and French) and other European languages. In addition to Latin loanwords, many words of Ancient Greek origin were also borrowed into Romance languages, often in part through scholarly Latin intermediates, and these also often pertained to academic, scientific, literary, and technical topics. Furthermore, to a lesser extent, Romance languages borrowed from a variety of other languages; in particular English has become an important source in more recent times. The study of the origin of these words and their function and context within the language can illuminate some important aspects and characteristics of the language, and it can reveal insights on the phenomenon of lexical borrowing in linguistics as a method of enriching a language.[32] According toHans Henrich Hockand Brian Joseph, "languages and dialects ... do not exist in a vacuum": there is always linguistic contact between groups.[33]The contact influences what loanwords are integrated into the lexicon and which certain words are chosen over others. In some cases, the original meaning shifts considerably through unexpected logical leaps, creatingfalse friends. The English wordVikingbecame Japaneseバイキング(baikingu), meaning "buffet", because the first restaurant in Japan to offerbuffet-style meals, inspired by the Nordicsmörgåsbord, was opened in 1958 by the Imperial Hotel under the name "Viking".[34]The German wordKachel, meaning "tile", became the Dutch wordkachelmeaning "stove", as a shortening ofkacheloven, from GermanKachelofen, acocklestove. The Indonesian wordmansetprimarily means "spandexclothing", "inner bolero", or "detachable sleeve", while its French etymonmanchettemeans "cuff".
https://en.wikipedia.org/wiki/Loanword
Inlinguistics, aneologism(/niˈɒləˌdʒɪzəm/; also known as acoinage) is any newly formed word, term, or phrase that has achieved popular or institutional recognition and is becoming accepted into mainstream language.[1]Most definitively, a word can be considered a neologism once it is published in a dictionary.[2] Neologisms are one facet oflexical innovation, i.e., the linguistic process of new terms and meanings entering a language'slexicon. The most precise studies intolanguage changeandword formation, in fact, identify the process of a "neological continuum": anonce wordis any single-use term that may or may not grow in popularity; aprotologismis such a term used exclusively within a small group; aprelogismis such a term that is gaining usage but is still not mainstream; and aneologismhas become accepted or recognized by social institutions.[3][4] Neologisms are often driven by changes in culture and technology.[5][6]Popular examples of neologisms can be found inscience,technology,fiction(notablyscience fiction), films and television, commercial branding,literature,jargon,cant,linguistics, thevisual arts, and popular culture.[citation needed] Examples of 20th-century neologisms include "laser" (1960), anacronymof "light amplification by stimulated emission of radiation"; "robot" (1921), fromCzechwriterKarel Čapek's playR.U.R. (Rossum's Universal Robots);[7]and "agitprop" (1930), aportmanteauof the Russian "agitatsiya" ("agitation") and "propaganda").[8] Neologisms are often formed by combining existing words (seecompound noun and adjective) or by giving words new and uniquesuffixesorprefixes.[9]Neologisms can also be formed byblendingwords, for example, "brunch" is a blend of the words "breakfast" and "lunch", or throughabbreviationoracronym, by intentionallyrhymingwith existing words or simply through playing with sounds. A relatively rare form of neologism is when proper names are used as words (e.g.,boycott, fromCharles Boycott), includingguy,dick,Chad, andKaren.[9] Neologisms can become popular throughmemetics, throughmass media, theInternet, andword of mouth, including academic discourse in many fields renowned for their use of distinctivejargon, and often become accepted parts of the language. Other times, they disappear from common use just as readily as they appeared. Whether a neologism continues as part of the language depends on many factors, probably the most important of which is acceptance by the public. It is unusual for a word to gain popularity if it does not clearly resemble other words. The term "neologism" is first attested in English in 1772, borrowed from the French "néologisme" (1734).[10]The French word derives from theGreekνέο(néo), meaning "new", andλόγος(lógos), meaning "speech, utterance". In an academic sense, there are no professional neologists, because the study of such things (e.g., of cultural or ethnic vernacular) isinterdisciplinary. Anyone such as alexicographeror anetymologistmight study neologisms, how their uses span the scope of human expression, and how, due to science and technology, they now spread more rapidly than ever.[11] The term "neologism" also has a broader meaning, of "a word which has gained anew meaning".[12][13][14]Sometimes the latter process is called "semantic shifting"[12]or "semantic extension".[15][16]Neologisms are distinct from a person'sidiolect, one's unique patterns of vocabulary, grammar, and pronunciation. 
Neologisms are usually introduced when a concept lacks a term, when an existing term lacks detail, or when a speaker is unaware of the existing term.[17] The law, governmental bodies, and technology acquire neologisms at a relatively high rate.[18][19] Another motive for coining a neologism is to disambiguate a term that has multiple meanings.[20]

Neologisms may come from a word used in the narrative of fiction such as novels and short stories. Examples include "grok" (to intuitively understand) from the science fiction novel about a Martian entitled Stranger in a Strange Land by Robert A. Heinlein; "McJob" (precarious, poorly paid employment) from Generation X: Tales for an Accelerated Culture by Douglas Coupland; "cyberspace" (widespread, interconnected digital technology) from Neuromancer by William Gibson;[21] and "quark" (Slavic slang for "rubbish"; German for a type of dairy product) from James Joyce's Finnegans Wake.

The title of a book may become a neologism, for instance, Catch-22 (from the title of Joseph Heller's novel).[22] Alternatively, the author's name may give rise to the neologism, although the term is sometimes based on only one work of that author. This includes such words as "Orwellian" (from George Orwell, referring to his dystopian novel Nineteen Eighty-Four) and "Kafkaesque" (from Franz Kafka). Names of famous characters are another source of literary neologisms. Some examples include: Quixotic, referring to a misguided romantic quest like that of the title character in Don Quixote by Miguel de Cervantes;[23] Scrooge, a pejorative for misers based on the avaricious main character in Charles Dickens' A Christmas Carol;[24] and Pollyanna, referring to people who are unfailingly optimistic like the title character of Eleanor H. Porter's Pollyanna.[25]

Neologisms are often introduced in technical writing, so-called Fachtexte or 'technical texts', through the process of lexical innovation. Technical subjects such as philosophy, sociology, and physics are especially rich in neologisms. In philosophy, for example, many terms were introduced into languages through processes of translation, e.g., from Ancient Greek to Latin, or from Latin to German or English, and so on. So Plato introduced the Greek term ποιότης (poiotēs), which Cicero rendered with the Latin qualitas, which subsequently became our notion of 'quality' in relation to epistemology, e.g., a quality or attribute of a perceived object, as opposed to its essence. In physics, new terms were sometimes introduced via nonce formation (e.g., Murray Gell-Mann's quark, taken from James Joyce) or through derivation (e.g., John von Neumann's kiloton, coined by combining the common prefix kilo- 'thousand' with the noun ton). Neologisms are therefore a vital component of scientific jargon, or termini technici.

Polari is a cant used by some actors, circus performers, and the gay subculture to communicate without outsiders understanding. Some Polari terms have crossed over into mainstream slang, in part through their usage in pop song lyrics and other works. Examples include: acdc, barney, blag, butch, camp, khazi, cottaging, hoofer, mince, ogle, scarper, slap, strides, tod, and [rough] trade (rough trade).

Verlan (French pronunciation: [vɛʁlɑ̃]; verlan is the reverse of the expression "l'envers") is a type of argot in the French language, featuring inversion of syllables in a word, and is common in slang and youth language.
It rests on a long French tradition of transposing syllables of individual words to create slang words.[26]: 50Someverlanwords, such asmeuf("femme", which means "woman" roughly backwards), have become so commonplace that they have been included in thePetit Larousse.[27]Like any slang, the purpose ofverlanis to create a somewhat secret language that only its speakers can understand. Words becoming mainstream is counterproductive. As a result, such newly common words are re-verlanised: reversed a second time. The commonmeufbecamefeumeu.[28][29] Neologism development may be spurred, or at least spread, by popular culture. Examples of pop-culture neologisms include the Americanalt-Right(2010s), the Canadianportmanteau"Snowmageddon" (2009), the Russianparody"Monstration" (c.2004),Santorum(c.2003). Neologisms spread mainly through their exposure inmass media. Thegenericizingofbrand names, such as "coke" forCoca-Cola, "kleenex" forKleenexfacial tissue, and "xerox" forXeroxphotocopying, all spread through their popular use being enhanced by mass media.[30] However, in some limited cases, words break out of their original communities and spread throughsocial media.[citation needed]"DoggoLingo", a term still below the threshold of a neologism according toMerriam-Webster,[31]is an example of the latter which has specifically spread primarily throughFacebookgroup andTwitteraccount use.[31]The suspected origin of this way of referring to dogs stems from a Facebook group founded in 2008 and gaining popularity in 2014 in Australia. In Australian English it is common to usediminutives, often ending in –o, which could be where doggo-lingo was first used.[31]The term has grown so that Merriam-Webster has acknowledged its use but notes the term needs to be found in published, edited work for a longer period of time before it can be deemed a new word, making it the perfect example of a neologism.[31] Because neologisms originate in one language, translations between languages can be difficult. In the scientific community, where English is the predominant language for published research and studies, like-sounding translations (referred to as 'naturalization') are sometimes used.[33]Alternatively, the English word is used along with a brief explanation of meaning.[33]The four translation methods are emphasized in order to translate neologisms:transliteration,transcription, the use of analogues, andloan translation.[34] When translating from English to other languages, the naturalization method is most often used.[35]The most common way that professional translators translate neologisms is through theThink aloud protocol(TAP), wherein translators find the most appropriate and natural sounding word through speech.[citation needed]As such, translators can use potential translations in sentences and test them with different structures and syntax. 
Accurate translation from English for specific purposes into other languages is crucial in various industries and legal systems.[36][37] Inaccurate translations can lead to "translation asymmetry", misunderstandings, and miscommunication.[37] Many technical glossaries of English translations exist to combat this issue in the medical, judicial, and technological fields.[38]

In psychiatry and neuroscience, the term neologism is used to describe words that have meaning only to the person who uses them, independent of their common meaning.[39][40] This can be seen in schizophrenia, where a person may replace a word with a nonsensical one of their own invention (e.g., "I got so angry I picked up a dish and threw it at the gelsinger").[41] The use of neologisms may also be due to aphasia acquired after brain damage resulting from a stroke or head injury.[42]
https://en.wikipedia.org/wiki/Neologism
Anabbreviation(fromLatinbrevis'short')[1]is a shortened form of a word or phrase, by any method includingshortening,contraction,initialism(which includesacronym), orcrasis. An abbreviation may be a shortened form of a word, usually ended with a trailing period. For example, the termetc.is the usual abbreviation for theLatin phraseet cetera. Acontractionis an abbreviation formed by replacing letters with an apostrophe. Examples includeI'mforI amandli'lforlittle. Aninitialismoracronymis an abbreviation consisting of the initial letter of a sequence of words without other punctuation. For example,FBI(/ˌɛf.biːˈaɪ/),USA(/ˌjuː.ɛsˈeɪ/),IBM(/ˌaɪ.biːˈɛm/),BBC(/ˌbiː.biːˈsiː/). When initialism is used as the preferred term, acronym refers more specifically to when the abbreviation is pronounced as a word rather than as separate letters; examples includeSWATandNASA. Initialisms, contractions and crasis share somesemanticandphoneticfunctions, and are connected by the termabbreviationin loose parlance.[2]: p167 In early times, abbreviations may have been common due to the effort involved in writing (many inscriptions were carved in stone) or to provide secrecy viaobfuscation. Reduction of a word to a single letter was common in bothGreekandRomanwriting.[3]In Roman inscriptions, "Words were commonly abbreviated by using the initial letter or letters of words, and most inscriptions have at least one abbreviation". However, "some could have more than one meaning, depending on their context. (For example,⟨A⟩can be an abbreviation for many words, such asager,amicus,annus,as,Aulus,Aurelius,aurum, andavus.)"[4]Many frequent abbreviations consisted of more than one letter: for example COS forconsuland COSS for itsnominativeetc. pluralconsules. Abbreviations were frequently used in earlyEnglish. Manuscripts of copies of theOld EnglishpoemBeowulfused many abbreviations, for example theTironian et(⁊) or&forand, andyforsince, so that "not much space is wasted".[5]The standardisation of English in the 15th through 17th centuries included a growth in the use of such abbreviations.[6]At first, abbreviations were sometimes represented with various suspension signs, not only periods. For example, sequences like⟨er⟩were replaced with⟨ɔ⟩, as inmastɔformasterandexacɔbateforexacerbate. While this may seem trivial, it was symptomatic of an attempt by people manually reproducing academic texts to reduce the copy time. Mastɔ subwardenɔ y ɔmēde me to you. And wherɔ y wrot to you the last wyke that y trouyde itt good to differrɔ thelectionɔ ovɔ to quīdenaɔ tinitatis y have be thougħt me synɔ that itt woll be thenɔ a bowte mydsomɔ. In theEarly Modern Englishperiod, between the 15th and 17th centuries, thethornÞwas used forth, as inÞe('the'). In modern times,⟨Þ⟩was often used (in the form⟨y⟩) for promotional reasons, as inYeOlde Tea Shoppe.[7] During the growth ofphilologicallinguistic theory in academic Britain, abbreviating became very fashionable. Likewise, a century earlier inBoston, a fad of abbreviation started that swept the United States, with the globally popular termOKgenerally credited as a remnant of its influence.[8][9] Over the years, however, the lack of convention in some style guides has made it difficult to determine which two-word abbreviations should be abbreviated with periods and which should not. This question is considered below. Widespread use of electronic communication through mobile phones and the Internet during the 1990s led to a marked rise in colloquial abbreviation. 
This was due largely to the increasing popularity of textual communication services such as instant and text messaging. The original SMS standard, for instance, supported message lengths of at most 160 characters (using the GSM 03.38 character set).[a] This brevity gave rise to an informal abbreviation scheme sometimes called Textese, with which 10% or more of the words in a typical SMS message are abbreviated.[10] More recently, Twitter, a popular social networking service, began driving abbreviation use with its 140-character message limit. In HTML, abbreviations can be annotated using <abbr title="Meaning of the abbreviation.">abbreviation</abbr> so that the meaning is revealed by hovering the cursor.

In modern English, there are multiple conventions for abbreviation, and there is controversy as to which should be used. One generally accepted rule is to be consistent in a body of work. To this end, publishers may express their preferences in a style guide. Some controversies that arise are described below.

If the original word was capitalized, then the first letter of its abbreviation should retain the capital, for example Lev. for Leviticus. When a word is abbreviated to more than a single letter and was originally spelled with lower-case letters, there is no need for capitalization. However, when abbreviating a phrase where only the first letter of each word is taken, all letters should be capitalized, as in YTD for year-to-date, PCB for printed circuit board, and FYI for for your information. However, see the following section regarding abbreviations that have become common vocabulary: these are no longer written with capital letters.

A period (a.k.a. full stop) is sometimes used to signify abbreviation, but opinion is divided as to when and whether this convention is best practice. According to Hart's Rules, a word shortened by dropping letters from the end terminates with a period, whereas a word shortened by dropping letters from the middle does not.[2]: p167–170 Fowler's Modern English Usage says a period is used for both of these shortened forms, but recommends against this practice, advising it only for end-shortened words and lower-case initialisms, not for middle-shortened words and upper-case initialisms.[11] Some British style guides, such as those of The Guardian and The Economist, disallow periods for all abbreviations.[12][13]

In American English, the period is usually included regardless of whether or not it is a contraction, e.g. Dr. or Mrs. In some cases, periods are optional, as in either US or U.S. for United States, EU or E.U. for European Union, and UN or U.N. for United Nations. There are some house styles, however, American ones included, that remove the periods from almost all abbreviations. Acronyms that were originally capitalized (with or without periods) but have since entered the vocabulary as generic words are no longer written with capital letters or with any periods. Examples are sonar, radar, lidar, laser, snafu, and scuba. When an abbreviation appears at the end of a sentence, only one period is used: The capital of the United States is Washington, D.C. In the past, some initialisms were styled with a period after each letter and a space between each pair, for example U. S., but today this is typically US.

There are multiple ways to pluralize an abbreviation. Sometimes this is accomplished by adding an apostrophe and an s ('s), as in "two PC's have broken screens". However, some find this confusing, since the notation can indicate the possessive case, and this style is deprecated by many style guides.
For instance, Kate Turabian, writing about style in academic writings,[14] allows for an apostrophe to form plural acronyms "only when an abbreviation contains internal periods or both capital and lowercase letters", for example "DVDs" and "URLs" but "Ph.D.'s", while the Modern Language Association[15] explicitly says, "do not use an apostrophe to form the plural of an abbreviation". Also, the American Psychological Association specifically says[16][17] "without an apostrophe". However, the 1999 style guide for The New York Times states that the addition of an apostrophe is necessary when pluralizing all abbreviations, preferring "PC's, TV's and VCR's".[18]

Forming the plural of an initialism without an apostrophe can also be used for a number or a letter.[19] For units of measure, the same form is used for both singular and plural. When an abbreviation contains more than one period, Hart's Rules recommends putting the s after the final one, although the same plurals may be rendered less formally. According to Hart's Rules, an apostrophe may be used in rare cases where clarity calls for it, for example when letters or symbols are referred to as objects; however, the apostrophe can be dispensed with if the items are set in italics or quotes. In Latin, and continuing in the derivative forms in European languages as well as English, single-letter abbreviations formed the plural by doubling the letter for note-taking. Most of these deal with writing and publishing, and a few longer abbreviations use this as well.

Publications based in the U.S. tend to follow the style guides of The Chicago Manual of Style and the Associated Press.[20] The U.S. government follows a style guide published by the U.S. Government Printing Office. The National Institute of Standards and Technology sets the style for abbreviations of units. Many British publications follow some of these guidelines in abbreviation.

Writers often use shorthand to denote units of measure. Such shorthand can be an abbreviation, such as "in" for "inch", or a symbol such as "km" for "kilometre". In the International System of Units (SI) manual,[22] the word "symbol" is used consistently to define the shorthand used to represent the various SI units of measure. The manual also defines the way in which units should be written, setting out the principal rules for doing so.

A syllabic abbreviation is usually formed from the initial syllables of several words, such as Interpol = International + police. It is a variant of the acronym. Syllabic abbreviations are usually written using lower case, sometimes starting with a capital letter, and are always pronounced as words rather than letter by letter. Syllabic abbreviations should be distinguished from portmanteaus, which combine two words without necessarily taking whole syllables from each.

Syllabic abbreviations are not widely used in English. Some UK government agencies such as Ofcom (Office of Communications) and the former Oftel (Office of Telecommunications) use this style. New York City has various neighborhoods named by syllabic abbreviation, such as Tribeca (Triangle below Canal Street) and SoHo (South of Houston Street). This usage has spread into other American cities, giving SoMa, San Francisco (South of Market) and LoDo, Denver (Lower Downtown), amongst others. Chicago-based electric service provider ComEd is a syllabic abbreviation of Commonwealth and (Thomas) Edison.
Sections ofCaliforniaare also often colloquially syllabically abbreviated, as in NorCal (Northern California), CenCal (Central California), and SoCal (Southern California). Additionally, in the context of Los Angeles, the syllabic abbreviation SoHo (Southern Hollywood) refers to the southern portion of theHollywoodneighborhood. Partially syllabic abbreviations are preferred by the US Navy, as they increase readability amidst the large number of initialisms that would otherwise have to fit into the same acronyms. HenceDESRON6is used (in the full capital form) to mean "Destroyer Squadron 6", whileCOMNAVAIRLANTwould be "Commander, Naval Air Force (in the) Atlantic". Syllabic abbreviations are a prominent feature ofNewspeak, the fictional language ofGeorge Orwell's dystopian novelNineteen Eighty-Four. The political contractions of Newspeak—Ingsoc(English Socialism),Minitrue(Ministry of Truth),Miniplenty(Ministry of Plenty)—are described by Orwell as similar to real examples of German(see below)and Russian (see below)contractions in the 20th century. The contractions in Newspeak are supposed to have a political function by virtue of their abbreviated structure itself: nice sounding and easily pronounceable, their purpose is to mask all ideological content from the speaker.[23]: 310–8 A more recent syllabic abbreviation has emerged with the diseaseCOVID-19(Corona Virus Disease 2019) caused by theSevere acute respiratory syndrome coronavirus 2(itself frequently abbreviated toSARS-CoV-2, partly an initialism). In Albanian, syllabic acronyms are sometimes used for composing a person's name, such asMigjeni—an abbreviation from his original name (Millosh Gjergj Nikolla) a famous Albanian poet and writer—orASDRENI(Aleksander Stavre Drenova), another famous Albanian poet. Other such names which are used commonly in recent decades are GETOAR, composed fromGegeria+Tosks(representing the two main dialects of the Albanian language, Gegë and Toskë), andArbanon—which is an alternative way used to describe all Albanian lands. Syllabic abbreviations were and are common inGerman; much like acronyms in English, they have a distinctly modern connotation, although contrary to popular belief, many date back to before1933, if not the end ofthe Great War.Kriminalpolizei, literallycriminal policebut idiomatically theCriminal Investigation Departmentof any German police force, begatKriPo(variously capitalised), and likewiseSchutzpolizei(protection policeoruniform department) begatSchuPo. Along the same lines, the Swiss Federal Railways' Transit Police—theTransportpolizei—are abbreviated as theTraPo. With the National Socialist German Workers' Party gaining power came a frenzy of government reorganisation, and with it a series of entirely new syllabic abbreviations. The single national police force amalgamated from theSchutzpolizeienof the various states became the OrPo (Ordnungspolizei, "order police"); the state KriPos together formed the "SiPo" (Sicherheitspolizei, "security police"); and there was also theGestapo(Geheime Staatspolizei, "secret state police"). The new order of theGerman Democratic Republicin the east brought about a consciousdenazification, but also a repudiation of earlier turns of phrase in favour of neologisms such asStasiforStaatssicherheit("state security", the secret police) andVoPoforVolkspolizei. The phrasepolitisches Büro, which may be rendered literally as "office of politics" or idiomatically as "political party steering committee", becamePolitbüro. 
Syllabic abbreviations are not only used in politics, however. Many business names, trademarks, and service marks from across Germany are created on the same pattern: for a few examples, there isAldi, fromTheo Albrecht, the name of its founder, followed bydiscount;Haribo, fromHans Riegel, the name of its founder, followed byBonn, the town of its head office; andAdidas, fromAdolf "Adi" Dassler, the nickname of its founder followed by his surname. Syllabic abbreviations are very common in Russian, Belarusian and Ukrainian languages. They are often used as names of organizations. Historically, popularization of abbreviations was a way to simplify mass-education in 1920s (seeLikbez). The wordkolkhoz(kollektívnoye khozyáystvo,collective farm) is another example. Leninist organisations such as theComintern(Communist International) andKomsomol(Kommunisticheskii Soyuz Molodyozhi, or "Communist youth union") used Russian language syllabic abbreviations. In the modern Russian language, words likeRosselkhozbank(from Rossiysky selskokhozyaystvenny bank —Russian Agricultural Bank, RusAg) andMinobrnauki(from Ministerstvo obrazovaniya i nauki — Ministry of Education and Science) are still commonly used. In nearbyBelarus, there areBeltelecom(Belarus Telecommunication) and Belsat (Belarus Satellite). Syllabic abbreviations are common inSpanish; examples abound in organization names such asPemexforPetróleos Mexicanos("Mexican Petroleums") or Fonafifo forFondo Nacional de Financimiento Forestal(National Forestry Financing Fund). In Southeast Asian languages, especially inMalay languages, abbreviations are common; examples includePetronas(forPetroliam Nasional, "National Petroleum"), its Indonesian equivalentPertamina(from its original namePerusahaan Pertambangan Minyak dan Gas Bumi Negara, "State Oil and Natural Gas Mining Company"), andKemenhub(fromKementerian Perhubungan, "Ministry of Transportation"). Malaysian abbreviation often uses letters from each word, while Indonesia usually uses syllables; although some cases do not follow the style. For example, general elections in Malaysian Malay often shortened into PRU (pilihanrayaumum) while Indonesian often shortened into pemilu (pemilihanumum). Another example is Ministry of Health in which Malaysian Malay uses KKM (KementerianKesihatanMalaysia), compared to Indonesian Kemenkes (KementerianKesehatan). East Asian languages whose writing systems useChinese charactersform abbreviations similarly by using key Chinese characters from a term or phrase. For example, in Japanese the term for theUnited Nations,kokusai rengō(国際連合) is often abbreviated tokokuren(国連). (Such abbreviations are calledryakugo(略語) in Japanese; see alsoJapanese abbreviated and contracted words). The syllabic abbreviation ofkanjiwords is frequently used for universities: for instance,Tōdai(東大) forTōkyō daigaku(東京大学,University of Tokyo) and is used similarly in Chinese:Běidà(北大) forBěijīng Dàxué(北京大学,Peking University). Korean universities often follow the same conventions, such asHongdae(홍대) as short forHongik Daehakgyo, orHongik University. The English phrase "Gung ho" originated as a Chinese abbreviation.
https://en.wikipedia.org/wiki/Abbreviation
The bag-of-words (BoW) model is a model of text which uses an unordered collection (a "bag") of words. It is used in natural language processing and information retrieval (IR). It disregards word order (and thus most of syntax or grammar) but captures multiplicity. The bag-of-words model is commonly used in methods of document classification where, for example, the (frequency of) occurrence of each word is used as a feature for training a classifier.[1] It has also been used for computer vision.[2] An early reference to "bag of words" in a linguistic context can be found in Zellig Harris's 1954 article on Distributional Structure.[3]

The following models a text document using bag-of-words. Here are two simple text documents:

(1) John likes to watch movies. Mary likes movies too.
(2) Mary also likes to watch football games.

Based on these two text documents, a list of word counts is constructed for each document. Representing each bag-of-words as a JSON object, and attributing each to the respective JavaScript variable:

BoW1 = {"John":1,"likes":2,"to":1,"watch":1,"movies":2,"Mary":1,"too":1};
BoW2 = {"Mary":1,"also":1,"likes":1,"to":1,"watch":1,"football":1,"games":1};

Each key is the word, and each value is the number of occurrences of that word in the given text document. The order of elements is free, so, for example, {"too":1,"Mary":1,"movies":2,"John":1,"watch":1,"likes":2,"to":1} is also equivalent to BoW1. It is also what we expect from a strict JSON object representation.

Note: if another document is like a union of these two, its JavaScript representation will be:

BoW3 = {"John":1,"likes":3,"to":2,"watch":2,"movies":2,"Mary":2,"too":1,"also":1,"football":1,"games":1};

So, as we see in the bag algebra, the "union" of two documents in the bag-of-words representation is, formally, the disjoint union, which sums the multiplicities of each element.

The BoW representation of a text removes all word ordering. For example, the BoW representations of "man bites dog" and "dog bites man" are the same, so any algorithm that operates on a BoW representation of text must treat them in the same way. Despite this lack of syntax or grammar, the BoW representation is fast and may be sufficient for simple tasks that do not require word order. For instance, for document classification, if words such as "stocks", "trade", and "investors" appear multiple times, then the text is likely a financial report, even though BoW cannot distinguish between "Yesterday, investors were rallying, but today, they are retreating." and "Yesterday, investors were retreating, but today, they are rallying."; the BoW representation is therefore insufficient to determine the detailed meaning of the document.

Implementations of the bag-of-words model might involve using frequencies of words in a document to represent its contents. The frequencies can be "normalized" by the inverse of document frequency, or tf–idf. Additionally, for the specific purpose of classification, supervised alternatives have been developed to account for the class label of a document.[4] Lastly, binary (presence/absence or 1/0) weighting is used in place of frequencies for some problems (e.g., this option is implemented in the WEKA machine learning software system).

A common alternative to using dictionaries is the hashing trick, where words are mapped directly to indices with a hashing function.[5] Thus, no memory is required to store a dictionary. Hash collisions are typically dealt with by using freed-up memory to increase the number of hash buckets.[clarification needed] In practice, hashing simplifies the implementation of bag-of-words models and improves scalability.
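The steps above (tokenize, count, and sum bags) can be sketched in a few lines of Python using the standard library. This is a minimal illustration, not an established API: the names bag_of_words and hashed_bow, the crude punctuation-stripping tokenizer, and the bucket count of 16 are assumptions made for the example.

from collections import Counter

def bag_of_words(text):
    # Crude tokenizer for illustration: split on whitespace and strip punctuation.
    words = [w.strip('.,!?;:"\'') for w in text.split()]
    # Counter is Python's built-in multiset: keys are words, values are counts.
    return Counter(w for w in words if w)

doc1 = "John likes to watch movies. Mary likes movies too."
doc2 = "Mary also likes to watch football games."

bow1 = bag_of_words(doc1)   # e.g. bow1["likes"] == 2 and bow1["movies"] == 2
bow2 = bag_of_words(doc2)
bow3 = bow1 + bow2          # disjoint union: multiplicities are summed, so bow3["likes"] == 3

# Word order is discarded, so these two sentences get identical representations.
assert bag_of_words("man bites dog") == bag_of_words("dog bites man")

def hashed_bow(text, num_buckets=16):
    # Hashing-trick variant: map each word to a fixed-size vector of buckets,
    # so no dictionary of words has to be stored (distinct words may collide).
    # Python's built-in hash varies between runs; a real system would use a stable hash.
    vec = [0] * num_buckets
    for word, count in bag_of_words(text).items():
        vec[hash(word) % num_buckets] += count
    return vec

print(bow3)
print(hashed_bow(doc1))

Adding two Counter objects sums the count of each key, which is exactly the bag-algebra union described above; a real pipeline would typically replace the raw counts with tf–idf weights or binary indicators before training a classifier.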
https://en.wikipedia.org/wiki/Bag-of-words_model