Unified Parallel C (UPC) is an extension of the C programming language designed for high-performance computing on large-scale parallel machines, including those with a common global address space (SMP and NUMA) and those with distributed memory (e.g. clusters). The programmer is presented with a single partitioned global address space, where shared variables may be directly read and written by any processor, but each variable is physically associated with a single processor. UPC uses a single program, multiple data (SPMD) model of computation in which the amount of parallelism is fixed at program startup time, typically with a single thread of execution per processor.

In order to express parallelism, UPC extends ISO C 99 with constructs such as an explicitly parallel execution model, a shared address space, synchronization primitives and a memory consistency model, and memory management primitives.

The UPC language evolved from experiences with three other earlier languages that proposed parallel extensions to ISO C 99: AC, Split-C, and Parallel C Preprocessor (PCP). UPC is not a superset of these three languages, but rather an attempt to distill the best characteristics of each. UPC combines the programmability advantages of the shared-memory programming paradigm with the control over data layout and performance of the message-passing programming paradigm.
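The following fragment is a minimal sketch of these ideas, assuming a UPC toolchain and the standard upc.h header; it declares a shared array distributed across threads and uses upc_forall so that each thread updates only the elements it has affinity to.

    #include <upc.h>      /* UPC extensions to ISO C99; requires a UPC compiler */
    #include <stdio.h>

    #define N 16

    /* One logical array in the partitioned global address space,
     * distributed cyclically across all threads. */
    shared int v[N * THREADS];

    int main(void)
    {
        int i;

        /* Each iteration executes on the thread with affinity to v[i]. */
        upc_forall (i = 0; i < N * THREADS; ++i; &v[i])
            v[i] = i * i;

        upc_barrier;   /* synchronize all threads before reading shared data */

        if (MYTHREAD == 0)
            printf("v[1] = %d, computed by %d threads\n", v[1], THREADS);
        return 0;
    }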
https://en.wikipedia.org/wiki/Unified_Parallel_C
The bulk synchronous parallel (BSP) abstract computer is a bridging model for designing parallel algorithms. It is similar to the parallel random access machine (PRAM) model, but unlike PRAM, BSP does not take communication and synchronization for granted. In fact, quantifying the requisite synchronization and communication is an important part of analyzing a BSP algorithm.

The BSP model was developed by Leslie Valiant of Harvard University during the 1980s. The definitive article was published in 1990.[1] Between 1990 and 1992, Leslie Valiant and Bill McColl of Oxford University worked on ideas for a distributed-memory BSP programming model, in Princeton and at Harvard. Between 1992 and 1997, McColl led a large research team at Oxford that developed various BSP programming libraries, languages and tools, as well as numerous massively parallel BSP algorithms, including many early examples of high-performance communication-avoiding parallel algorithms[2] and recursive "immortal" parallel algorithms that achieve the best possible performance and optimal parametric tradeoffs.[3] With interest and momentum growing, McColl then led a group from Oxford, Harvard, Florida, Princeton, Bell Labs, Columbia and Utrecht that developed and published the BSPlib standard for BSP programming in 1996.[4] Valiant developed an extension to the BSP model in the 2000s, leading to the publication of the Multi-BSP model in 2011.[5] In 2017, McColl developed a major new extension of the BSP model that provides fault tolerance and tail tolerance for large-scale parallel computations in AI, analytics and high-performance computing (HPC).[6][7]

A BSP computer consists of components capable of processing and/or local memory transactions (i.e., processors), a network that routes messages between pairs of such components, and a hardware facility that allows for the synchronization of all or a subset of components. This is commonly interpreted as a set of processors that may follow different threads of computation, with each processor equipped with fast local memory and interconnected by a communication network.

BSP algorithms rely heavily on the third feature; a computation proceeds in a series of global supersteps, each of which consists of three components: concurrent computation, in which every participating processor performs local computations; communication, in which the processors exchange data; and barrier synchronization, in which each process waits until all processes have reached the end of the superstep. The computation and communication actions do not have to be ordered in time. Communication typically takes the form of one-sided PUT and GET remote direct memory access (RDMA) calls rather than paired two-sided send and receive message-passing calls. The barrier synchronization concludes the superstep: it ensures that all one-sided communications are properly concluded. Systems based on two-sided communication include this synchronization cost implicitly for every message sent. The barrier synchronization method relies on the BSP computer's hardware facility. In Valiant's original paper, this facility periodically checks whether the end of the current superstep has been reached globally. The period of this check is denoted by L.[1]

The BSP model is also well suited to automatic memory management for distributed-memory computing through over-decomposition of the problem and oversubscription of the processors. The computation is divided into more logical processes than there are physical processors, and processes are randomly assigned to processors. This strategy can be shown statistically to lead to almost perfect load balancing, both of work and of communication.

In many parallel programming systems, communications are considered at the level of individual actions, such as sending and receiving a message or a memory-to-memory transfer. This is difficult to work with, since there are many simultaneous communication actions in a parallel program and their interactions are typically complex.
In particular, it is difficult to say much about the time any single communication action will take to complete. The BSP model instead considers communication actions en masse, with the effect that an upper bound on the time taken to communicate a set of data can be given. BSP considers all communication actions of a superstep as one unit and assumes all individual messages sent as part of this unit have a fixed size.

The maximum number of incoming or outgoing messages for a superstep is denoted by h. The ability of a communication network to deliver data is captured by a parameter g, defined such that it takes time hg for a processor to deliver h messages of size 1. A message of length m obviously takes longer to send than a message of size 1. However, the BSP model does not distinguish between a message of length m and m messages of length 1: in either case, the cost is mg. The parameter g depends on the characteristics of the communication network, such as the protocols used, buffer management by the processors and the network, and the routing strategy. In practice, g is determined empirically for each parallel computer. Note that g is not the normalized single-word delivery time but the single-word delivery time under continuous traffic conditions.

The one-sided communication of the BSP model requires barrier synchronization. Barriers are potentially costly but avoid the possibility of deadlock or livelock, since barriers cannot create circular data dependencies; tools to detect and deal with them are therefore unnecessary. Barriers also permit novel forms of fault tolerance.

The cost of a barrier synchronization is influenced mainly by the variation in the completion times of the participating local computations and by the cost of reaching a globally consistent state across all processors, which depends on the communication network and on whether special-purpose synchronization hardware is available. The cost of a barrier synchronization is denoted by l. Note that l < L if the synchronization mechanism of the BSP computer is as suggested by Valiant.[1] In practice, the value of l is determined empirically.

On large computers, barriers are expensive, increasingly so at large scale. There is a large body of literature on removing synchronization points from existing algorithms, in the context of BSP computing and beyond. For example, many algorithms allow for local detection of the global end of a superstep simply by comparing local information to the number of messages already received. This drives the cost of global synchronization, compared to the minimally required latency of communication, to zero.[8] Yet this minimal latency is expected to increase further for future supercomputer architectures and network interconnects; the BSP model, along with other models for parallel computation, will require adaptation to cope with this trend. Multi-BSP is one BSP-based solution.[5]

The cost of a superstep is the sum of three terms: the cost of the longest-running local computation, the cost of the global communication between the processors, and the cost of the barrier synchronization at the end of the superstep. Thus, the cost of one superstep for p processors is

$\max_{i=1}^{p}(w_i) + \max_{i=1}^{p}(h_i g) + l$

where $w_i$ is the cost of the local computation in process i, and $h_i$ is the number of messages sent or received by process i. Note that homogeneous processors are assumed here. It is more common for the expression to be written as $w + hg + l$, where w and h are maxima. The cost of an entire BSP algorithm is the sum of the costs of its supersteps.
$W + Hg + Sl = \sum_{s=1}^{S} w_s + g \sum_{s=1}^{S} h_s + Sl$

where S is the number of supersteps. W, H, and S are usually modeled as functions that vary with problem size. These three characteristics of a BSP algorithm are usually described in terms of asymptotic notation, e.g., $H \in O(n/p)$.

Interest in BSP has soared, with Google adopting it as a major technology for graph analytics at massive scale via Pregel and MapReduce. Also, with the next generation of Hadoop decoupling the MapReduce model from the rest of the Hadoop infrastructure, there are now active open-source projects to add explicit BSP programming, as well as other high-performance parallel programming models, on top of Hadoop. Examples are Apache Hama and Apache Giraph.[9]

BSP has been extended by many authors to address concerns about its unsuitability for modelling specific architectures or computational paradigms. One example is the decomposable BSP model. The model has also been used in the creation of a number of new programming languages and interfaces, such as Bulk Synchronous Parallel ML (BSML), BSPLib, Apache Hama,[9] and Pregel.[10]

Notable implementations of the BSPLib standard are the Paderborn University BSP library[11] and the Oxford BSP Toolset by Jonathan Hill.[12] Modern implementations include BSPonMPI[13] (which simulates BSP on top of the Message Passing Interface) and MulticoreBSP[14][15] (a novel implementation targeting modern shared-memory architectures). MulticoreBSP for C is especially notable for its capability of starting nested BSP runs, thus allowing explicit Multi-BSP programming.
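To make the cost model concrete, the following is a minimal C sketch (not taken from the article; the machine parameters and per-superstep values are illustrative placeholders) that evaluates the total cost W + Hg + Sl from per-superstep maxima.

    #include <stdio.h>

    /* Machine parameters, determined empirically for a given parallel computer. */
    static const double g = 4.0;    /* cost per unit of communication (h-relation) */
    static const double l = 120.0;  /* cost of one barrier synchronization */

    /* Cost of one superstep: w + h*g + l, where w and h are maxima over processes. */
    static double superstep_cost(double w_max, double h_max)
    {
        return w_max + h_max * g + l;
    }

    int main(void)
    {
        /* Illustrative per-superstep maxima w_s and h_s for S = 3 supersteps. */
        const double w[] = { 1000.0, 2500.0, 800.0 };
        const double h[] = {   40.0,   10.0,  25.0 };
        const int S = 3;

        double total = 0.0;
        for (int s = 0; s < S; ++s)
            total += superstep_cost(w[s], h[s]);

        /* Equivalent to W + H*g + S*l with W = sum(w_s) and H = sum(h_s). */
        printf("total BSP cost = %.1f\n", total);
        return 0;
    }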
https://en.wikipedia.org/wiki/Bulk_synchronous_parallel
SequenceL is a general-purpose functional programming language and auto-parallelizing compiler and tool set, whose primary design objectives are performance on multi-core processor hardware, ease of programming, platform portability/optimization, and code clarity and readability. Its main advantage is that it can be used to write straightforward code that automatically takes full advantage of all the processing power available, without programmers needing to be concerned with identifying parallelisms, specifying vectorization, avoiding race conditions, and other challenges of manual directive-based programming approaches such as OpenMP.

Programs written in SequenceL can be compiled to multithreaded code that runs in parallel, with no explicit indications from a programmer of how or what to parallelize. As of 2015, versions of the SequenceL compiler generate parallel code in C++ and OpenCL, which allows it to work with most popular programming languages, including C, C++, C#, Fortran, Java, and Python. A platform-specific runtime manages the threads safely, automatically providing parallel performance according to the number of cores available, currently supporting x86, POWER8, and ARM platforms.

SequenceL was initially developed over a 20-year period starting in 1989, mostly at Texas Tech University. Primary funding was from NASA, which originally wanted to develop a specification language that was "self-verifying"; that is, once written, the requirements could be executed, and the results verified against the desired outcome. The principal researcher on the project was initially Dr. Daniel Cooke,[2] who was soon joined by Dr. Nelson Rushton (another Texas Tech professor) and later Dr. Brad Nemanich (then a PhD student under Cooke). The goal of creating a language that was simple enough to be readable, but unambiguous enough to be executable, drove the inventors to settle on a functional, declarative language approach, in which a programmer describes desired results rather than the means to achieve them. The language is then free to solve the problem in the most efficient manner it can find.

As the language evolved, the researchers developed new computational approaches, including consume-simplify-produce (CSP).[3] In 1998, research began to apply SequenceL to parallel computing. This culminated in 2004, when the language took its more complete form with the addition of the normalize-transpose (NT) semantic,[4][5] which coincided with the major vendors of central processing units (CPUs) making a major shift to multi-core processors rather than continuing to increase clock speeds. NT is the semantic workhorse, used to simplify and decompose structures based on a dataflow-like execution strategy similar to GAMMA[6] and NESL.[7] The NT semantic achieves a goal similar to that of Lämmel and Peyton Jones' boilerplate elimination.[8][9] All other features of the language are definable from these two laws, including recursion, subscripting structures, function references, and evaluation of function bodies.[10][11]

Though it was not the original intent, these new approaches allowed the language to parallelize a large fraction of the operations it performed, transparently to the programmer. In 2006, a prototype auto-parallelizing compiler was developed at Texas Tech University. In 2009, Texas Tech licensed the intellectual property to Texas Multicore Technologies (TMT)[12] for follow-on commercial development.
In January 2017, TMT released v3, which includes a free Community Edition for download in addition to the commercial Professional Edition.

SequenceL is designed to be as simple as possible to learn and use, focusing on algorithmic code where it adds value; for example, the inventors chose not to reinvent I/O, since C already handled that well. As a result, the full language reference for SequenceL is only 40 pages, with copious examples, and its formal grammar has around 15 production rules.[13]

SequenceL is strictly evaluated (like Lisp), statically typed with type inference (like Haskell), and uses a combination of infix and prefix operators that resemble standard, informal mathematical notation (like C, Pascal, Python, etc.). It is a purely declarative language, meaning that a programmer defines functions, in the mathematical sense, without giving instructions for their implementation. For example, the mathematical definition of matrix multiplication gives the (i,j)'th entry of the product as $(AB)_{i,j} = \sum_{k} A_{i,k} B_{k,j}$, and the SequenceL definition mirrors that definition more or less exactly. The subscripts following each parameter A and B on the left-hand side of the definition indicate that A and B are depth-2 structures (i.e., lists of lists of scalars), which are here thought of as matrices. From this formal definition, SequenceL infers the dimensions of the defined product from the formula for its (i,j)'th entry (as the set of pairs (i,j) for which the right-hand side is defined) and computes each entry by the same formula as in the informal definition above. Notice that there are no explicit instructions for iteration in this definition, or for the order in which operations are to be carried out. Because of this, the SequenceL compiler can perform operations in any order (including parallel order) that satisfies the defining equation. In this example, computation of coordinates in the product will be parallelized in a way that, for large matrices, scales linearly with the number of processors.

As noted above, SequenceL has no built-in constructs for input/output (I/O), since it was designed to work in an additive manner with other programming languages. The decision to compile to multithreaded C++ and support the 20+ Simplified Wrapper and Interface Generator (SWIG) languages (C, C++, C#, Java, Python, etc.) means it easily fits into extant design flows, training, and tools. It can be used to enhance existing applications, create multicore libraries, and even create standalone applications by linking the resulting code with other code that performs I/O tasks. SequenceL functions can also be queried from an interpreter with given inputs, like Python and other interpreted languages.

The main non-scalar construct of SequenceL is the sequence, which is essentially a list. Sequences may be nested to any level. To avoid the routine use of recursion common in many purely functional languages, SequenceL uses a technique termed normalize-transpose (NT), in which scalar operations are automatically distributed over the elements of a sequence.[14] For example, adding a scalar to a sequence in SequenceL applies the addition elementwise, so that [1,2,3] + 10 evaluates to [11,12,13]. This results not from overloading the '+' operator, but from the effect of NT, which extends to all operations, both built-in and user-defined.
As another example, if f() is a 3-argument function whose arguments are scalars, then for any appropriate x and z, applying f to x, a sequence [y1, ..., yn], and z yields the sequence [f(x, y1, z), ..., f(x, yn, z)]. The NT construct can be used for multiple arguments at once; for example, applying f to the sequences [x1, ..., xn] and [z1, ..., zn] together with a scalar y yields [f(x1, y, z1), ..., f(xn, y, zn)]. It also works when the expected argument is a non-scalar of any type T and the actual argument is a list of objects of type T (or, in greater generality, any data structure whose coordinates are of type T). For example, if A is a matrix and Xs is a list of matrices [X1, ..., Xn], then, given the above definition of matrix multiplication, multiplying A by Xs in SequenceL yields the list of products [A·X1, ..., A·Xn]. As a rule, NTs eliminate the need for iteration, recursion, or higher-level functional operators to apply an operation across the elements of a data structure. This tends to account for most uses of iteration and recursion.

A good example that demonstrates the above concepts is finding prime numbers. A prime number is a positive integer greater than 1 whose only divisors are 1 and itself; so a positive integer z is prime if no number from 2 through z-1, inclusive, divides it evenly. SequenceL allows this problem to be programmed by literally transcribing that definition into the language. In SequenceL, the sequence of numbers from 2 through z-1, inclusive, is just (2...(z-1)), so a program to find all of the primes between 100 and 200 can be written as a function that returns z when no member of (2...(z-1)) divides z evenly; if that condition isn't met, the function returns nothing. The string "between 100 and 200" doesn't appear in the program; rather, a programmer will typically pass that range in as the argument. Since the program expects a scalar as an argument, passing it a sequence of numbers instead causes SequenceL to perform the operation on each member of the sequence automatically. Since the function returns empty for failing values, the result of running the program is the input sequence filtered to only those numbers that satisfy the criteria for primes: 101, 103, 107, 109, 113, 127, 131, 137, 139, 149, 151, 157, 163, 167, 173, 179, 181, 191, 193, 197, 199. In addition to solving this problem with a very short and readable program, SequenceL's evaluation of the nested sequences is performed entirely in parallel.

The following software components are available and supported by TMT for use in writing SequenceL code. All components are available on x86 platforms running Windows, macOS, and most varieties of Linux (including CentOS, RedHat, openSUSE, and Ubuntu), and on ARM and IBM Power platforms running most varieties of Linux.

A command-line interpreter allows writing code directly into a command shell, or loading code from prewritten text files. This code can be executed, and the results evaluated, to assist in checking code correctness or finding a quick answer. It is also available via the popular Eclipse integrated development environment (IDE). Code executed in the interpreter does not run in parallel; it executes in one thread.

A command-line compiler reads SequenceL code and generates highly parallelized, vectorized C++, and optionally OpenCL, which must be linked with the SequenceL runtime library to execute. The runtime environment is a pre-compiled set of libraries that works with the compiled, parallelized C++ code to execute optimally on the target platform. It builds on Intel Threading Building Blocks (TBB)[15] and handles things such as cache optimization, memory management, work queues and work stealing, and performance monitoring.

An Eclipse integrated development environment plug-in provides standard editing abilities (function rollup, chroma coding, etc.) and a SequenceL debugging environment.
This plug-in runs against the SequenceL interpreter, so it cannot be used to debug the multithreaded code; however, because parallelization is automatic, debugging parallel SequenceL code amounts to verifying the correctness of the sequential SequenceL code. That is, if it runs correctly sequentially, it should run correctly in parallel, so debugging in the interpreter is sufficient.

Various math and other standard function libraries are included as SequenceL source code to streamline the programming process and serve as best-practice examples. These may be imported in much the same way that C or C++ libraries are #included.
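The SequenceL source for the prime-finding example is not reproduced in this text; as an informal illustration of the behaviour it describes, the following C sketch applies the same trial-division test to each number in the range and keeps only the values that pass, which is what SequenceL's normalize-transpose does implicitly and in parallel.

    #include <stdio.h>
    #include <stdbool.h>

    /* Trial-division test mirroring the prose definition: z is prime if no
     * number from 2 through z-1, inclusive, divides it evenly. */
    static bool is_prime(int z)
    {
        if (z < 2)
            return false;
        for (int d = 2; d < z; ++d)
            if (z % d == 0)
                return false;
        return true;
    }

    int main(void)
    {
        /* "Filter" the range 100..200, printing only the values that pass,
         * analogous to SequenceL returning empty for failing inputs. */
        for (int z = 100; z <= 200; ++z)
            if (is_prime(z))
                printf("%d ", z);
        printf("\n");
        return 0;
    }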
https://en.wikipedia.org/wiki/SequenceL
Non-uniform memory access (NUMA) is a computer memory design used in multiprocessing, where the memory access time depends on the memory location relative to the processor. Under NUMA, a processor can access its own local memory faster than non-local memory (memory local to another processor or memory shared between processors).[1] NUMA is beneficial for workloads with high memory locality of reference and low lock contention, because a processor may operate on a subset of memory mostly or entirely within its own cache node, reducing traffic on the memory bus.[2]

NUMA architectures logically follow in scaling from symmetric multiprocessing (SMP) architectures. They were developed commercially during the 1990s by Unisys, Convex Computer (later Hewlett-Packard), Honeywell Information Systems Italy (HISI) (later Groupe Bull), Silicon Graphics (later Silicon Graphics International), Sequent Computer Systems (later IBM), Data General (later EMC, now Dell Technologies), Digital (later Compaq, then HP, now HPE) and ICL. Techniques developed by these companies later featured in a variety of Unix-like operating systems, and to an extent in Windows NT. The first commercial implementation of a NUMA-based Unix system was the Symmetrical Multi Processing XPS-100 family of servers, designed by Dan Gielan of VAST Corporation for Honeywell Information Systems Italy.

Modern CPUs operate considerably faster than the main memory they use. In the early days of computing and data processing, the CPU generally ran slower than its own memory. The performance lines of processors and memory crossed in the 1960s with the advent of the first supercomputers. Since then, CPUs increasingly have found themselves "starved for data" and having to stall while waiting for data to arrive from memory (for example, for Von Neumann architecture-based computers, see the Von Neumann bottleneck). Many supercomputer designs of the 1980s and 1990s focused on providing high-speed memory access as opposed to faster processors, allowing the computers to work on large data sets at speeds other systems could not approach.

Limiting the number of memory accesses provided the key to extracting high performance from a modern computer. For commodity processors, this meant installing an ever-increasing amount of high-speed cache memory and using increasingly sophisticated algorithms to avoid cache misses. But the dramatic increase in size of the operating systems and of the applications run on them has generally overwhelmed these cache-processing improvements. Multi-processor systems without NUMA make the problem considerably worse. Now a system can starve several processors at the same time, notably because only one processor can access the computer's memory at a time.[3]

NUMA attempts to address this problem by providing separate memory for each processor, avoiding the performance hit when several processors attempt to address the same memory. For problems involving spread data (common for servers and similar applications), NUMA can improve the performance over a single shared memory by a factor of roughly the number of processors (or separate memory banks).[4] Another approach to addressing this problem is the multi-channel memory architecture, in which a linear increase in the number of memory channels increases the memory-access concurrency linearly.[5]

Of course, not all data ends up confined to a single task, which means that more than one processor may require the same data.
To handle these cases, NUMA systems include additional hardware or software to move data between memory banks. This operation slows the processors attached to those banks, so the overall speed increase due to NUMA depends heavily on the nature of the running tasks.[4]

AMD implemented NUMA with its Opteron processor (2003), using HyperTransport. Intel announced NUMA compatibility for its x86 and Itanium servers in late 2007 with its Nehalem and Tukwila CPUs.[6] Both Intel CPU families share a common chipset; the interconnection is called Intel QuickPath Interconnect (QPI), which provides extremely high bandwidth to enable high on-board scalability, and was replaced by a new version called Intel UltraPath Interconnect with the release of Skylake (2017).[7]

Nearly all CPU architectures use a small amount of very fast non-shared memory known as cache to exploit locality of reference in memory accesses. With NUMA, maintaining cache coherence across shared memory has a significant overhead. Although simpler to design and build, non-cache-coherent NUMA systems become prohibitively complex to program in the standard von Neumann architecture programming model.[8]

Typically, ccNUMA uses inter-processor communication between cache controllers to keep a consistent memory image when more than one cache stores the same memory location. For this reason, ccNUMA may perform poorly when multiple processors attempt to access the same memory area in rapid succession. Support for NUMA in operating systems attempts to reduce the frequency of this kind of access by allocating processors and memory in NUMA-friendly ways and by avoiding scheduling and locking algorithms that make NUMA-unfriendly accesses necessary.[9] Alternatively, cache coherency protocols such as the MESIF protocol attempt to reduce the communication required to maintain cache coherency. Scalable Coherent Interface (SCI) is an IEEE standard defining a directory-based cache coherency protocol to avoid scalability limitations found in earlier multiprocessor systems. For example, SCI is used as the basis for the NumaConnect technology.[10][11]

One can view NUMA as a tightly coupled form of cluster computing. The addition of virtual memory paging to a cluster architecture can allow the implementation of NUMA entirely in software. However, the inter-node latency of software-based NUMA remains several orders of magnitude greater (slower) than that of hardware-based NUMA.[2] Since NUMA largely influences memory-access performance, certain software optimizations are needed to allow scheduling threads and processes close to their in-memory data.

As of 2011, ccNUMA systems are multiprocessor systems based on the AMD Opteron processor, which can be implemented without external logic, and the Intel Itanium processor, which requires the chipset to support NUMA. Examples of ccNUMA-enabled chipsets are the SGI Shub (Super hub), the Intel E8870, the HP sx2000 (used in the Integrity and Superdome servers), and those found in NEC Itanium-based systems. Earlier ccNUMA systems such as those from Silicon Graphics were based on MIPS processors and the DEC Alpha 21364 (EV7) processor.
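As a concrete illustration of this kind of NUMA-aware software optimization, the following is a minimal sketch using the Linux libnuma API (assuming <numa.h> is available and the program is linked with -lnuma); it pins the process to one node and allocates memory on that same node so that accesses stay local.

    #include <numa.h>     /* Linux libnuma; link with -lnuma */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        if (numa_available() < 0) {
            fprintf(stderr, "NUMA is not available on this system\n");
            return EXIT_FAILURE;
        }

        int node = 0;                   /* place both the thread and its data on node 0 */
        numa_run_on_node(node);         /* restrict this process to CPUs of that node */

        size_t n = 1 << 20;
        long *buf = numa_alloc_onnode(n * sizeof *buf, node);  /* node-local allocation */
        if (!buf) {
            perror("numa_alloc_onnode");
            return EXIT_FAILURE;
        }

        /* Accesses below hit local memory, avoiding slower remote (non-local) accesses. */
        long sum = 0;
        for (size_t i = 0; i < n; ++i) {
            buf[i] = (long)i;
            sum += buf[i];
        }
        printf("sum = %ld (highest node = %d)\n", sum, numa_max_node());

        numa_free(buf, n * sizeof *buf);
        return EXIT_SUCCESS;
    }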
https://en.wikipedia.org/wiki/Non-uniform_memory_access
Cache-only memory architecture (COMA) is a computer memory organization for use in multiprocessors in which the local memories (typically DRAM) at each node are used as cache. This is in contrast to using the local memories as actual main memory, as in NUMA organizations.

In NUMA, each address in the global address space is typically assigned a fixed home node. When processors access some data, a copy is made in their local cache, but space remains allocated in the home node. With COMA, by contrast, there is no home: an access from a remote node may cause the data to migrate. Compared to NUMA, this reduces the number of redundant copies and may allow more efficient use of memory resources. On the other hand, it raises the problems of how to find a particular piece of data (there is no longer a home node to ask) and what to do when a local memory fills up (migrating data into the local memory then requires evicting some other data, which has no home to go to). Hardware memory-coherence mechanisms are typically used to implement the migration.

A huge body of research has explored these issues. Various forms of directories, policies for maintaining free space in the local memories, migration policies, and policies for read-only copies have been developed. Hybrid NUMA-COMA organizations have also been proposed, such as Reactive NUMA, which allows pages to start in NUMA mode and switch to COMA mode if appropriate, and which is implemented in Sun Microsystems' WildFire.[1][2] A software-based hybrid NUMA-COMA implementation was proposed and implemented by ScaleMP,[3] allowing the creation of a shared-memory multiprocessor system out of a cluster of commodity nodes.
https://en.wikipedia.org/wiki/Cache-only_memory_architecture
A background process is a computer process that runs behind the scenes (i.e., in the background) and without user intervention.[1] Typical tasks for these processes include logging, system monitoring, scheduling,[2] and user notification.[3]

On a Windows system, a background process is either a computer program that does not create a user interface, or a Windows service. The former are started just as any other program is started, e.g., via the Start menu. Windows services, on the other hand, are started by the Service Control Manager. In Windows Vista and later, they are run in a separate session.

On a Unix or Unix-like system, a background process or job can be further identified as one whose process group ID differs from its terminal group ID (TGID). (The TGID of a process is the process ID of the process group leader that opened the terminal, which is typically the login shell. The TGID identifies the control terminal of the process group.) This type of process is unable to receive keyboard signals from its parent terminal, and typically will not send output to that terminal.[4] This more technical definition does not distinguish between whether or not the process can receive user intervention. Although background processes are typically used for purposes needing few resources, any process can be run in the background, and such a process will behave like any other process, with the exceptions given above.[1]

In the Windows NT family of operating systems, a Windows service is a dedicated background process.[5] A Windows service must conform to the interface rules and protocols of the Service Control Manager, the component responsible for managing Windows services.[6] Windows services can be configured to start when the operating system starts, and to run in the background as long as Windows runs. Alternatively, they can be started manually or by an event. Windows NT operating systems include numerous services which run in the context of three user accounts: System, Network Service and Local Service. These Windows components are often associated with the Host Process for Windows Services, svchost.exe. Since Windows services operate in the context of their own dedicated user accounts, they can operate when a user is not logged on. Before Windows Vista, services installed as "interactive services" could interact with the Windows desktop and show a graphical user interface. With Windows Vista, however, interactive services became deprecated and ceased operating properly, as a result of Windows Service Hardening.[7][8] The principal means of managing Windows services are the Services snap-in, the sc command-line utility, and Windows PowerShell.

A daemon is a type of background process designed to run continually in the background, waiting for events to occur or conditions to be met.[9] When launched with the daemon function, daemons are disassociated from their parent terminal.[10]

From a Unix command line, a background process can be launched using the "&" operator. The bg command can resume a suspended job (sending SIGCONT), running it in the background. Using the fg command will also reconnect standard input to its parent terminal, bringing it into the foreground. The jobs command lists all processes associated with the current terminal and can be used to bring background processes into the foreground.[4][11]

When a login session ends, via explicit logout or network disconnection, all processes, including background processes, will by default be terminated, to prevent them from becoming orphan processes.
Concretely, when the user exits the launching shell process, as part of shutdown it sends a hangup signal (SIGHUP) to all of its jobs, to terminate all the processes in the corresponding process group. To have processes continue to run, one can either not end the session, or end the session without terminating the processes. A terminal multiplexer can be used to leave a session running but detach a virtual terminal from it, leaving processes running as child processes of the session; the user can then reattach the session later. Alternatively, termination can be prevented either by starting the process via the nohup command (telling the process to ignore SIGHUP), or by subsequently running disown with the job ID, which either removes the job from the job list entirely or simply prevents SIGHUP from being sent. In the latter case, when the session ends, the child processes are not terminated, either because they are not sent SIGHUP or because they ignore it, and thus become orphan processes, which are then adopted by the init process (the kernel sets the init process as their parent) and continue running without a session; they are now called daemons.

In one example running on Unix, the sleep utility was launched into the background. Afterward, the ps tool was run in the foreground, and its output listed the backgrounded sleep process alongside the shell. Both were launched from the shell.[12]

Many newer versions of smartphone and PDA operating systems now include the ability to start background processes. Due to hardware limits, background processes on mobile operating systems are often restricted to certain tasks or consumption levels. On Android, CPU use for background processes may be bounded at 5-10%.[13] Applications on Apple's iOS are limited to a subset of functions while running in the background.[3] On both iOS and Android, background processes can be killed by the system if they are using too much memory.[3][13]
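As a rough sketch of the mechanism nohup relies on (illustrative only, not nohup's actual source; it assumes POSIX headers), the following C program ignores SIGHUP and then replaces itself with the requested command, so that the command survives the end of the login session.

    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(int argc, char *argv[])
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s command [args...]\n", argv[0]);
            return 2;
        }

        /* Ignore the hangup signal sent by the shell when the session ends. */
        signal(SIGHUP, SIG_IGN);

        /* Replace this process image with the requested command. */
        execvp(argv[1], &argv[1]);

        /* Only reached if exec failed. */
        perror("execvp");
        return 127;
    }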
https://en.wikipedia.org/wiki/Background_process
A code cave is a series of unused bytes in a process's memory. The term often refers to a section of a process's memory with enough capacity for injecting custom instructions. The concept of a code cave is often employed by hackers and reverse engineers to execute arbitrary code in a compiled program. It can be a helpful method for making modifications to a compiled program, for example adding dialog boxes, modifying variables, or even removing software key-validation checks. Often, using a call instruction commonly found on many CPU architectures, the program jumps to the new subroutine and pushes the address of the next instruction onto the stack. After the subroutine executes, a return instruction pops the saved address off the stack into the program counter. This allows the existing program to jump to the newly added code without significant changes to the program flow itself.
https://en.wikipedia.org/wiki/Code_cave
A child process (CP), in computing, is a process created by another process (the parent process). This technique pertains to multitasking operating systems, and is sometimes called a subprocess or, traditionally, a subtask. There are two major procedures for creating a child process: the fork system call (preferred in Unix-like systems and the POSIX standard) and spawn (preferred in the modern (NT) kernel of Microsoft Windows, as well as in some historical operating systems).

Child processes date to the late 1960s, with an early form in later revisions of the Multiprogramming with a Fixed number of Tasks Version II (MFT-II) form of the IBM OS/360 operating system, which introduced sub-tasking (see task). The current form in Unix draws on Multics (1969), while the Windows NT form draws on OpenVMS (1978), from RSX-11 (1972).

A child process inherits most of its attributes, such as file descriptors, from its parent. In Unix, a child process is typically created as a copy of the parent, using the fork system call. The child process can then overlay itself with a different program (using exec) as required.[1]

Each process may create many child processes but will have at most one parent process; if a process does not have a parent, this usually indicates that it was created directly by the kernel. In some systems, including Linux-based systems, the very first process (called init) is started by the kernel at booting time and never terminates (see Linux startup process); other parentless processes may be launched to carry out various daemon tasks in userspace. Another way for a process to end up without a parent is if its parent dies, leaving an orphan process; but in this case it will shortly be adopted by init.

The SIGCHLD signal is sent to the parent of a child process when it exits, is interrupted, or resumes after being interrupted. By default the signal is simply ignored.[2] When a child process terminates, some information is returned to the parent process. When a child process terminates before the parent has called wait, the kernel retains some information about the process, such as its exit status, to enable its parent to call wait later.[3] Because the child is still consuming system resources but not executing, it is known as a zombie process. The wait system call is commonly invoked in the SIGCHLD handler.

POSIX.1-2001 allows a parent process to elect for the kernel to automatically reap child processes that terminate, either by explicitly setting the disposition of SIGCHLD to SIG_IGN (although ignore is the default, automatic reaping only occurs if the disposition is set to ignore explicitly[4]), or by setting the SA_NOCLDWAIT flag for the SIGCHLD signal. Linux 2.6 kernels adhere to this behavior, and FreeBSD supports both of these methods since version 5.0.[5] However, because of historical differences between System V and BSD behaviors with regard to ignoring SIGCHLD, calling wait remains the most portable paradigm for cleaning up after forked child processes.[6]
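A minimal C sketch of this fork-then-exec-then-wait pattern (the command run by the child is an arbitrary illustration; the calls are standard POSIX) might look as follows.

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();                 /* create the child as a copy of the parent */

        if (pid == -1) {
            perror("fork");
            return EXIT_FAILURE;
        }

        if (pid == 0) {
            /* Child: overlay itself with a different program. */
            execlp("ls", "ls", "-l", (char *)NULL);
            perror("execlp");               /* only reached if exec failed */
            _exit(127);
        }

        /* Parent: wait for the child so it does not linger as a zombie. */
        int status;
        if (waitpid(pid, &status, 0) == -1) {
            perror("waitpid");
            return EXIT_FAILURE;
        }
        if (WIFEXITED(status))
            printf("child %ld exited with status %d\n", (long)pid, WEXITSTATUS(status));
        return EXIT_SUCCESS;
    }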
https://en.wikipedia.org/wiki/Child_process
On many computer operating systems, a computer process terminates its execution by making an exit system call. More generally, an exit in a multithreading environment means that a thread of execution has stopped running. For resource management, the operating system reclaims the resources (memory, files, etc.) that were used by the process. The process is said to be a dead process after it terminates.

Under Unix and Unix-like operating systems, a process is started when its parent process executes a fork system call. The parent process may then wait for the child process to terminate, or may continue execution (possibly forking off other child processes). When the child process terminates ("dies"), either normally by calling exit or abnormally due to a fatal exception or signal (e.g., SIGTERM, SIGINT, SIGKILL), an exit status is returned to the operating system and a SIGCHLD signal is sent to the parent process. The exit status can then be retrieved by the parent process via the wait system call.

Most operating systems allow the terminating process to provide a specific exit status to the system, which is made available to the parent process. Typically this is an integer value, although some operating systems (e.g., Plan 9 from Bell Labs) allow a character string to be returned. Systems returning an integer value commonly use a zero value to indicate successful execution and non-zero values to indicate error conditions. Other systems (e.g., OpenVMS) use even-numbered values for success and odd values for errors. Still other systems (e.g., IBM z/OS and its predecessors) use ranges of integer values to indicate success, warning, and error completion results.

The exit operation typically performs clean-up operations within the process space before returning control back to the operating system. Some systems and programming languages allow user subroutines to be registered so that they are invoked at program termination before the process actually terminates for good. As the final step of termination, a primitive system exit call is invoked, informing the operating system that the process has terminated and allowing it to reclaim the resources used by the process. It is sometimes possible to bypass the usual cleanup; C99 offers the _Exit() function, which terminates the current process without any extra program clean-up. This may be used, for example, in a fork-exec routine when the exec call fails to replace the child process; calling atexit routines would erroneously release resources belonging to the parent.

Some operating systems handle a child process whose parent process has terminated in a special manner. Such an orphan process becomes a child of a special root process, which then waits for the child process to terminate. Likewise, a similar strategy is used to deal with a zombie process, which is a child process that has terminated but whose exit status is ignored by its parent process. Such a process becomes the child of a special parent process, which retrieves the child's exit status and allows the operating system to complete the termination of the dead process. Dealing with these special cases keeps the system process table in a consistent state.

The following programs terminate and return a success exit status to the system.
Such programs can be written in COBOL, Fortran, Java, JavaScript (Node.js), Pascal, DR-DOS batch files,[1] Perl, PHP, Python, Rust, the Unix shell, and assembly language for DOS (where some programmers prepare everything for INT 21h at once), 32-bit x86 Linux, 64-bit x86-64 Linux (FASM), and 64-bit x86-64 OS X (NASM). On Windows, a program can terminate itself by calling the ExitProcess or RtlExitUserProcess function.
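As a simple illustration in C (a sketch, not one of the language examples referenced above), the following program registers a clean-up handler with atexit and then reports a success status via exit.

    #include <stdio.h>
    #include <stdlib.h>

    /* Registered handlers run during normal termination, before control
     * returns to the operating system. */
    static void cleanup(void)
    {
        puts("cleaning up before exit");
    }

    int main(void)
    {
        if (atexit(cleanup) != 0) {
            fputs("failed to register exit handler\n", stderr);
            return EXIT_FAILURE;
        }
        puts("doing work");
        exit(EXIT_SUCCESS);   /* success exit status reported to the parent */
    }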
https://en.wikipedia.org/wiki/Exit_(system_call)
In computing, particularly in the context of the Unix operating system and its workalikes, fork is an operation whereby a process creates a copy of itself. It is an interface that is required for compliance with the POSIX and Single UNIX Specification standards. It is usually implemented as a C standard library wrapper to the fork, clone, or other system calls of the kernel. Fork is the primary method of process creation on Unix-like operating systems.

In multitasking operating systems, processes (running programs) need a way to create new processes, e.g. to run other programs. Fork and its variants are typically the only way of doing so in Unix-like systems. For a process to start the execution of a different program, it first forks to create a copy of itself. Then, the copy, called the "child process", makes any environment changes the child will need and then calls the exec system call to overlay itself with the new program: it ceases execution of its former program in favor of the new. (Or, in rarer cases, the child forgoes the exec and continues executing, as a separate process, some other functionality of the original program.)

The fork operation creates a separate address space for the child. The child process has an exact copy of all the memory segments of the parent process. In modern UNIX variants that follow the virtual memory model from SunOS-4.0, copy-on-write semantics are implemented and the physical memory need not be actually copied. Instead, virtual memory pages in both processes may refer to the same pages of physical memory until one of them writes to such a page: then it is copied. This optimization is important in the common case where fork is used in conjunction with exec to execute a new program: typically, the child process performs only a small set of actions before it ceases execution of its program in favour of the program to be started, and it requires very few, if any, of its parent's data structures.

When a process calls fork, it is deemed the parent process and the newly created process is its child. After the fork, both processes not only run the same program, but they resume execution as though both had called the system call. They can then inspect the call's return value to determine their status, child or parent, and act accordingly.

One of the earliest references to a fork concept appeared in A Multiprocessor System Design by Melvin Conway, published in 1962.[1] Conway's paper motivated the implementation by L. Peter Deutsch of fork in the GENIE time-sharing system, where the concept was borrowed by Ken Thompson for its earliest appearance[2] in Research Unix.[3][4] Fork later became a standard interface in POSIX.[5]

The child process starts off with a copy of its parent's file descriptors.[5] For interprocess communication, the parent process will often create one or several pipes, and then after forking the processes will close the ends of the pipes that they do not need.[6]

Vfork is a variant of fork with the same calling convention and much the same semantics, but only to be used in restricted situations. It originated in the 3BSD version of Unix,[7][8][9] the first Unix to support virtual memory. It was standardized by POSIX, which permitted vfork to have exactly the same behavior as fork, but it was marked obsolescent in the 2004 edition[10] and replaced by posix_spawn() (which is typically implemented via vfork) in subsequent editions.
When a vfork system call is issued, the parent process will be suspended until the child process has either completed execution or been replaced with a new executable image via one of the "exec" family of system calls. The child borrows the memory management unit setup from the parent, and memory pages are shared among the parent and child process with no copying done, and in particular with no copy-on-write semantics;[10] hence, if the child process makes a modification in any of the shared pages, no new page will be created and the modified pages are visible to the parent process too. Since there is absolutely no page copying involved (which would consume additional memory), this technique is an optimization over plain fork in full-copy environments when used with exec. In POSIX, using vfork for any purpose except as a prelude to an immediate call to a function from the exec family (and a select few other operations) gives rise to undefined behavior.[10] As with vfork, the child borrows data structures rather than copying them. Vfork is still faster than a fork that uses copy-on-write semantics.

System V did not support this function call before System V R4 was introduced, because the memory sharing that it causes is error-prone: vfork does not copy page tables, so it is faster than the System V fork implementation. But the child process executes in the same physical address space as the parent process (until an exec or exit) and can thus overwrite the parent's data and stack. A dangerous situation could arise if a programmer uses vfork incorrectly, so the onus for calling vfork lies with the programmer. The difference between the System V approach and the BSD approach is philosophical: should the kernel hide idiosyncrasies of its implementation from users, or should it allow sophisticated users the opportunity to take advantage of the implementation to do a logical function more efficiently?

Similarly, the Linux man page for vfork strongly discourages its use, calling it "rather unfortunate that Linux revived this specter from the past".[7] The BSD man page states: "This system call will be eliminated when proper system sharing mechanisms are implemented. Users should not depend on the memory sharing semantics of vfork() as it will, in that case, be made synonymous to fork(2)."

Other problems with vfork include deadlocks that might occur in multithreaded programs due to interactions with dynamic linking.[12] As a replacement for the vfork interface, POSIX introduced the posix_spawn family of functions that combine the actions of fork and exec. These functions may be implemented as library routines in terms of fork, as is done in Linux,[12] or in terms of vfork for better performance, as is done in Solaris,[12][13] but the POSIX specification notes that they were "designed as kernel operations", especially for operating systems running on constrained hardware and real-time systems.[14]

While the 4.4BSD implementation got rid of the vfork implementation, causing vfork to have the same behavior as fork, it was later reinstated in the NetBSD operating system for performance reasons.[8]

Some embedded operating systems such as uClinux omit fork and only implement vfork, because they need to operate on devices where copy-on-write is impossible to implement due to lack of a memory management unit.
The Plan 9 operating system, created by the designers of Unix, includes fork but also a variant called "rfork" that permits fine-grained sharing of resources between parent and child processes, including the address space (except for a stack segment, which is unique to each process), environment variables and the filesystem namespace;[15] this makes it a unified interface for the creation of both processes and threads within them.[16] Both FreeBSD[17] and IRIX adopted the rfork system call from Plan 9, the latter renaming it "sproc".[18]

clone is a system call in the Linux kernel that creates a child process that may share parts of its execution context with the parent. Like FreeBSD's rfork and IRIX's sproc, Linux's clone was inspired by Plan 9's rfork and can be used to implement threads (though application programmers will typically use a higher-level interface such as pthreads, implemented on top of clone). The "separate stacks" feature from Plan 9 and IRIX has been omitted because (according to Linus Torvalds) it causes too much overhead.[18]

In the original design of the VMS operating system (1977), a copy operation with subsequent mutation of the content of a few specific addresses for the new process, as in forking, was considered risky, since errors in the current process state may be copied to a child process. Here, the metaphor of process spawning is used: each component of the memory layout of the new process is newly constructed from scratch. The spawn metaphor was later adopted in Microsoft operating systems (1993).

The POSIX-compatibility component of VM/CMS (OpenExtensions) provides a very limited implementation of fork, in which the parent is suspended while the child executes, and the child and the parent share the same address space.[19] This is essentially a vfork labelled as a fork. (This applies to the CMS guest operating system only; other VM guest operating systems, such as Linux, provide standard fork functionality.)

A variant of the "Hello, World!" program demonstrates the mechanics of the fork system call in the C programming language: the program forks into two processes, each deciding what functionality to perform based on the return value of the fork system call (a sketch of such a program is shown below the following dissection).

The first statement in main calls the fork system call to split execution into two processes. The return value of fork is recorded in a variable of type pid_t, which is the POSIX type for process identifiers (PIDs). Minus one indicates an error in fork: no new process was created, so an error message is printed. If fork was successful, then there are now two processes, both executing the main function from the point where fork has returned. To make the processes perform different tasks, the program must branch on the return value of fork to determine whether it is executing as the child process or the parent process. In the child process, the return value appears as zero (which is an invalid process identifier). The child process prints the desired greeting message, then exits. (For technical reasons, the POSIX _exit function must be used here instead of the C standard exit function.) The other process, the parent, receives from fork the process identifier of the child, which is always a positive number. The parent process passes this identifier to the waitpid system call to suspend execution until the child has exited. When this has happened, the parent resumes execution and exits by means of the return statement.
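The original listing is not reproduced in this text; the following C sketch matches the dissection above (the greeting and error strings are illustrative).

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();   /* split execution into two processes */

        if (pid == -1) {
            /* Minus one indicates an error: no child was created. */
            perror("can't fork, error occurred");
            exit(EXIT_FAILURE);
        } else if (pid == 0) {
            /* Child: fork returned zero. Print the greeting and exit. */
            printf("Hello from the child process!\n");
            _exit(EXIT_SUCCESS);   /* POSIX _exit, not the C standard exit */
        } else {
            /* Parent: fork returned the child's PID. Wait for the child. */
            int status;
            (void)waitpid(pid, &status, 0);
        }
        return EXIT_SUCCESS;
    }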
https://en.wikipedia.org/wiki/Fork_(system_call)
In computer operating systems, a light-weight process (LWP) is a means of achieving multitasking. In the traditional meaning of the term, as used in Unix System V and Solaris, an LWP runs in user space on top of a single kernel thread and shares its address space and system resources with other LWPs within the same process. Multiple user-level threads, managed by a thread library, can be placed on top of one or many LWPs, allowing multitasking to be done at the user level, which can have some performance benefits.[1]

In some operating systems, there is no separate LWP layer between kernel threads and user threads. This means that user threads are implemented directly on top of kernel threads. In those contexts, the term "light-weight process" typically refers to kernel threads, and the term "threads" can refer to user threads.[2] On Linux, user threads are implemented by allowing certain processes to share resources, which sometimes leads to these processes being called "light-weight processes".[3][4] Similarly, in SunOS version 4 onwards (prior to Solaris), "light-weight process" referred to user threads.[1]

Kernel threads are handled entirely by the kernel. They need not be associated with a process; a kernel can create them whenever it needs to perform a particular task. Kernel threads cannot execute in user mode. LWPs (in systems where they are a separate layer) bind to kernel threads and provide a user-level context. This includes a link to the shared resources of the process to which the LWP belongs. When an LWP is suspended, it needs to store its user-level registers until it resumes, and the underlying kernel thread must also store its own kernel-level registers.

LWPs are slower and more expensive to create than user threads. Whenever an LWP is created, a system call must first be made to create a corresponding kernel thread, causing a switch to kernel mode. These mode switches typically involve copying parameters between kernel and user space, and the kernel may need extra steps to verify the parameters and check for invalid behavior. A context switch between LWPs means that the LWP being pre-empted has to save its registers, then go into kernel mode so the kernel thread can save its registers, and the LWP being scheduled must restore the kernel and user registers separately as well.[1]

For this reason, some user-level thread libraries allow multiple user threads to be implemented on top of LWPs. User threads can be created, destroyed, synchronized and switched between entirely in user space without system calls and switches into kernel mode. This provides a significant performance improvement in thread creation time and context switches.[1] However, there are difficulties in implementing a user-level thread scheduler that works well together with the kernel. While the user threading library will schedule user threads, the kernel will schedule the underlying LWPs. Without coordination between the kernel and the thread library, the kernel can make sub-optimal scheduling decisions. Further, deadlock can occur when user threads distributed over several LWPs try to acquire the same resources that are used by another user thread that is not currently running.[1]

One solution to this problem is scheduler activation. This is a method for the kernel and the thread library to cooperate: the kernel notifies the thread library's scheduler about certain events (such as when a thread is about to block), and the thread library can decide what action to take.
The notification call from the kernel is called an "upcall". A user-level library has no control over the underlying mechanism; it only receives notifications from the kernel and schedules user threads onto available LWPs, not processors. The kernel's scheduler then decides how to schedule the LWPs onto the processors. This means that LWPs can be seen by the thread library as "virtual processors".[5]

Solaris has implemented a separate LWP layer since version 2.2. Prior to version 9, Solaris allowed a many-to-many mapping between LWPs and user threads. However, this was retired due to the complexities it introduced and to performance improvements in the kernel scheduler.[1][6] UNIX System V and its modern derivatives IRIX, SCO OpenServer, HP-UX and IBM AIX allow a many-to-many mapping between user threads and LWPs.[5][7]

NetBSD 5.0 introduced a new, scalable 1:1 threading model. Each user thread (pthread) has a kernel thread called a light-weight process (LWP). Inside the kernel, both processes and threads are implemented as LWPs, and are served the same by the scheduler.[8]
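As a small illustration of a 1:1 model like the one described above, the following Linux-specific C sketch (assuming pthreads and glibc's syscall interface; build with -pthread) creates a few POSIX threads and has each print both the shared process ID and its own kernel thread ID, showing that every user thread is backed by a distinct kernel-scheduled entity.

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    /* Each thread reports the process ID (shared) and its kernel thread ID (unique). */
    static void *report(void *arg)
    {
        (void)arg;
        pid_t tid = (pid_t)syscall(SYS_gettid);
        printf("pid=%ld kernel tid=%ld\n", (long)getpid(), (long)tid);
        return NULL;
    }

    int main(void)
    {
        enum { NTHREADS = 4 };
        pthread_t t[NTHREADS];

        for (int i = 0; i < NTHREADS; ++i)
            pthread_create(&t[i], NULL, report, NULL);
        for (int i = 0; i < NTHREADS; ++i)
            pthread_join(t[i], NULL);
        return 0;
    }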
https://en.wikipedia.org/wiki/Light-weight_process
An orphan process is a computer process whose parent process has finished or terminated, though it remains running itself. In a Unix-like operating system, any orphaned process is immediately adopted by an implementation-defined system process: the kernel sets the parent to this process. This operation is called re-parenting and occurs automatically. Even though technically the process has a system process as its parent, it is still called an orphan process, since the process that originally created it no longer exists. In other systems, orphaned processes are immediately terminated by the kernel. Most Unix systems have historically used init as the system process to which orphans are reparented, but in modern DragonFly BSD, FreeBSD, and Linux systems, an orphan process may be reparented to a "subreaper" process instead of init.[1][2] A process can be orphaned unintentionally, such as when the parent process terminates or crashes. The process group mechanism in most Unix-like operating systems can be used to help protect against accidental orphaning: in coordination with the user's shell, it will try to terminate all the child processes with the "hangup" signal (SIGHUP), rather than letting them continue to run as orphans. More precisely, as part of job control, when the shell exits, because it is the "session leader" (its session id equals its process id), the corresponding login session ends, and the shell sends SIGHUP to all its jobs (its internal representation of process groups). It is sometimes desirable to intentionally orphan a process, usually to allow a long-running job to complete without further user attention, or to start an indefinitely running service or agent; such processes (without an associated session) are known as daemons, particularly if they run indefinitely. A low-level approach is to fork twice, running the desired process in the grandchild and immediately terminating the child (sketched below). The grandchild process is then orphaned and is adopted not by its grandparent but by init. Higher-level alternatives circumvent the shell's hangup handling, either telling the child process to ignore SIGHUP (by using nohup), or removing the job from the job table or telling the shell not to send SIGHUP on session end (by using disown in either case). In any event, the session id (the process id of the session leader, the shell) does not change, and the process id of the session that has ended remains in use until all orphaned processes either terminate or change session id (by starting a new session via setsid(2)). To simplify system administration, it is often desirable to use a service wrapper so that processes not designed to be used as services respond correctly to system signals. An alternative way to keep processes running without orphaning them is to use a terminal multiplexer and run the processes in a detached session (or a session that becomes detached), so the session is not terminated and the process is not orphaned. A server process is also said to be orphaned when the client that initiated the request crashes unexpectedly after making the request, leaving the server process running. Such orphaned processes waste server resources and can potentially leave a server starved for them. Several approaches exist for dealing with this form of the orphan process problem.
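The double-fork technique mentioned above can be sketched as follows. This is only an illustration, not a complete daemonization routine (a real daemon would normally also redirect its standard streams, change its working directory, and reset its umask); the call to setsid(2) is the standard companion step for detaching from the controlling terminal.

/* Sketch of the double-fork technique described above: the grandchild is
 * orphaned on purpose and reparented to init (or a subreaper). */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid < 0) { perror("fork"); exit(1); }

    if (pid > 0) {                    /* original parent */
        waitpid(pid, NULL, 0);        /* reap the short-lived child */
        exit(0);
    }

    /* first child: start a new session, then fork again and exit */
    if (setsid() < 0) { perror("setsid"); exit(1); }
    pid = fork();
    if (pid < 0) { perror("fork"); exit(1); }
    if (pid > 0)
        exit(0);                      /* first child exits immediately */

    /* grandchild: now an orphan, adopted by init or a subreaper; it has no
     * controlling terminal, so it will not receive the shell's SIGHUP */
    while (1) {
        /* long-running work would go here */
        sleep(60);
    }
}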
https://en.wikipedia.org/wiki/Orphan_process
In computing, aparent processis a process that has created one or morechild processes. InUnix-likeoperating systems, every process exceptprocess 0(the swapper) is created when another process executes thefork()system call. The process that invoked fork is theparent processand the newly created process is thechild process. Every process (except process 0) has one parent process, but can have many child processes. Theoperating system kernelidentifies each process by its process identifier.Process 0is a special process that is created when the system boots; after forking a child process(process 1),process 0becomes theswapper process(sometimes also known as the "idle task").Process 1, known asinit, is the ancestor of every other process in the system.[1] In theLinux kernel, in which there is a very slim difference between processes andPOSIX threads, there are two kinds of parent processes, namely real parent and parent. Parent is the process that receives theSIGCHLDsignal on child's termination, whereas real parent is the thread that actually created this child process in a multithreaded environment. For a normal process, both these two values are same, but for a POSIX thread which acts as a process, these two values may be different.[2] The operating system maintains a table that associates every process, by means of itsprocess identifier(generally referred to as "pid") to the data necessary for its functioning. During a process's lifetime, such data might include memory segments designated to the process, the arguments it's been invoked with,environment variables, counters about resource usage, user-id, group-id and group set, and maybe other types of information. When a process terminates its execution, either by callingexit(even if implicitly, by executing areturncommand from themainfunction) or by receiving asignalthat causes it to terminate abruptly, the operating system releases most of the resources and information related to that process, but still keeps the data about resource utilization and thetermination statuscode, because a parent process might be interested in knowing if that child executed successfully (by using standard functions to decode the termination status code) and the amount of system resources it consumed during its execution. By default, the system assumes that the parent process is indeed interested in such information at the time of the child's termination, and thus sends the parent the signalSIGCHLDto alert that there is some data about a child to be collected. Such collection is done by calling a function of thewaitfamily (eitherwaititself or one of its relatives, such aswaitpid,waitidorwait4). As soon as this collection is made, the system releases those last bits of information about the child process and removes its pid from the process table. However, if the parent process lingers in collecting the child's data (or fails to do it at all), the system has no option but keep the child's pid and termination data in the process table indefinitely. Such a terminated process whose data has not been collected is called azombie process, or simply azombie, in the UNIX parlance. The name is a humorous analogy due to considering terminated process as "no longer alive" or "dead"—since it has really ceased functioning—and a lingering dead process still "incarnated" in the "world of the living" processes—the process table—which is therefore actually "undead", or "zombie". 
Zombie processes might pose problems on systems with limited resources or that have limited-size process tables, as the creation of new, active processes might be prevented by the lack of resources still used by long lasting zombies. It is, therefore, a good programming practice in any program that might spawn child processes to have code to prevent the formation of long lasting zombies from its original children. The most obvious approach is to have code that callswaitor one of its relatives somewhere after having created a new process. If the program is expected to create many child processes that may execute asynchronously and terminate in an unpredictable order, it is generally good to create ahandlerfor theSIGCHLDsignal, calling one of thewait-family function in a loop, until no uncollected child data remains. It is possible for the parent process to completely ignore the termination of its children and still not create zombies, but this requires the explicit definition of a handler forSIGCHLDthrough a call tosigactionwith the special option flagSA_NOCLDWAIT.[3] Orphan processesare an opposite situation to zombie processes, referring to the case in which a parent process terminates before its child processes, which are said to become "orphaned". Unlike the asynchronous child-to-parent notification that happens when a child process terminates (via theSIGCHLDsignal), child processes are not notified immediately when their parent finishes. Instead, the system simply redefines the "parent PID" field in the child process's data to be the process that is the "ancestor" of every other process in the system, whose PID generally has the value of 1 (one), and whose name is traditionally "init" (except in the Linux kernel 3.4 and above [more info below]). Thus, it was said that init "adopts" every orphan process on the system.[4][5] A somewhat common assumption by programmers new to UNIX was that the child processes of a terminating process will be adopted by this process's immediate parent process (hence those child processes' "grandparent"). Such assumption was incorrect – unless, of course, that "grandparent" was the init itself. After Linux kernel 3.4 this is no longer true, in fact processes can issue theprctl()system call with the PR_SET_CHILD_SUBREAPER option, and as a result they, not process #1, will become the parent of any of their orphaned descendant processes. This is the way of working of modern service managers and daemon supervision utilities including systemd, upstart, and the nosh service manager. This is an abstract of the manual page, reporting that: A subreaper fulfills the role of init(1) for its descendant processes. When a process becomes orphaned (i.e., its immediate parent terminates) then that process will be reparented to the nearest still living ancestor subreaper. Subsequently, calls to getppid() in the orphaned process will now return the PID of the subreaper process, and when the orphan terminates, it is the subreaper process that will receive a SIGCHLD signal and will be able to wait(2) on the process to discover its termination status.[6]
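The reaping strategy recommended above, a SIGCHLD handler that calls a wait-family function in a loop, might look roughly like this. It is a sketch rather than production code; the handler name and the demonstration children are illustrative, and the commented-out lines show the SA_NOCLDWAIT alternative also mentioned above.

/* Sketch of a SIGCHLD handler that reaps all terminated children. */
#include <errno.h>
#include <signal.h>
#include <sys/wait.h>
#include <unistd.h>

static void reap_children(int sig)
{
    int saved_errno = errno;             /* waitpid may clobber errno */
    (void)sig;
    /* Loop: one SIGCHLD may stand for several terminated children. */
    while (waitpid(-1, NULL, WNOHANG) > 0)
        ;
    errno = saved_errno;
}

int main(void)
{
    struct sigaction sa = {0};
    sa.sa_handler = reap_children;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = SA_RESTART;             /* restart interrupted slow syscalls */
    sigaction(SIGCHLD, &sa, NULL);

    /* Alternative mentioned above: ask the kernel not to create zombies at all:
     *   sa.sa_handler = SIG_IGN;  sa.sa_flags = SA_NOCLDWAIT;
     *   sigaction(SIGCHLD, &sa, NULL);
     */

    for (int i = 0; i < 3; i++) {
        if (fork() == 0)
            _exit(0);                     /* children exit immediately */
    }
    sleep(1);                             /* children are reaped by the handler */
    return 0;
}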
https://en.wikipedia.org/wiki/Parent_process
In aPOSIX-conformantoperating system, aprocess groupdenotes a collection of one or moreprocesses.[1]Among other things, a process group is used to control the distribution of asignal; when a signal is directed to a process group, the signal is delivered to each process that is a member of the group.[2] Similarly, asessiondenotes a collection of one or more process groups.[3]A process may not create a process group that belongs to another session; furthermore, a process is not permitted to join a process group that is a member of another session—that is, a process is not permitted to migrate from one session to another. When a process replaces its image with a new image (by calling one of theexecfunctions), the new image is subjected to the same process group (and thus session) membership as the old image. The distribution of signals to process groups forms the basis ofjob controlemployed byshell programs. TheTTY devicedriver incorporates a notion of aforeground process group, to which it sends signals generated bykeyboard interrupts, notablySIGINT("interrupt",Control+C),SIGTSTP("terminal stop",Control+Z), andSIGQUIT("quit",Control+\). It also sends theSIGTTINandSIGTTOUsignals to any processes that attempt to read from or write to the terminal and that arenotin the foreground process group. The shell, in turn, partitions the commandpipelinesthat it creates into process groups, and controls what process group is the foreground process group of itscontrolling terminal, thus determining what processes (and thus what command pipelines) may perform I/O to and from the terminal at any given time. When the shellforksa new child process for a command pipeline, both the parent shell process and thechild processimmediately make the child process the leader of the process group for the command pipeline. This ensures that the child is the leader of the process group before either the parent or child relies on this being the case. Where atextual user interfaceis being used on a Unix-like system, sessions are used to implementlogin sessions. A single process, thesession leader, interacts with the controlling terminal in order to ensure that all programs are terminated when a user "hangs up" the terminal connection. (Where a session leader is absent, the processes in the terminal's foreground process group are expected to handle hangups.) Where agraphical user interfaceis being used, the session concept is largely lost, and thekernel's notion of sessions largely ignored. Graphical user interfaces, such as where theX display manageris employed, use a different mechanism for implementing login sessions. Thesystem callsetsidis used to create a new session containing a single (new) process group, with the current process as both the session leader and theprocess group leaderof that single process group.[4]Process groups are identified by a positive integer, theprocess group ID, which is theprocess identifierof the process that is (or was) the process group leader. Process groups need not necessarily have leaders, although they always begin with one. Sessions are identified by the process group ID of the session leader. POSIX prohibits the change of the process group ID of a session leader. 
The system callsetpgidis used to set the process group ID of a process, thereby either joining the process to an existing process group, or creating a new process group within the session of the process with the process becoming the process group leader of the newly created group.[5]POSIX prohibits the re-use of a process ID where a process group with that identifier still exists (i.e. where the leader of a process group has exited, but other processes in the group still exist). It thereby guarantees that processes may not accidentally become process group leaders. Thesystem callkillis capable of directing signals either to individual processes or to process groups.[2]
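A small sketch of the calls described above: both parent and child call setpgid so that the child is a process group leader before either side depends on it, after which a single kill with a negated process group ID signals every member of the group. The details (a child that simply pauses, a one-second delay) are illustrative only.

/* Sketch: put a child into its own process group and signal the whole group.
 * Both parent and child call setpgid(), as described above, to avoid a race. */
#include <signal.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t child = fork();
    if (child < 0) { perror("fork"); return 1; }

    if (child == 0) {
        setpgid(0, 0);              /* child: become leader of a new group */
        pause();                    /* wait for a signal */
        _exit(0);
    }

    setpgid(child, child);          /* parent: same call; whichever runs first wins */
    sleep(1);

    kill(-child, SIGTERM);          /* negative PID: signal the whole process group */
    waitpid(child, NULL, 0);
    return 0;
}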
https://en.wikipedia.org/wiki/Process_group
Incomputeroperating systems, aprocess(ortask) maywaitfor another process to complete its execution. In most systems, aparent processcan create an independently executingchild process. The parent process may then issue awaitsystem call, which suspends the execution of the parent process while the child executes. When the child process terminates, it returns anexit statusto the operating system, which is then returned to the waiting parent process. The parent process then resumes execution.[1] Modern operating systems also provide system calls that allow a process'sthreadto create other threads and wait for them to terminate ("join" them) in a similar fashion. An operating system may provide variations of thewaitcall that allow a process to wait for any of its child processes toexit, or to wait for a single specific child process (identified by itsprocess ID) to exit. Some operating systems issue asignal(SIGCHLD) to the parent process when a child process terminates, notifying the parent process and allowing it to retrieve the child process's exit status. Theexit statusreturned by a child process typically indicates whether the process terminated normally orabnormally. For normal termination, this status also includes the exit code (usually an integer value) that the process returned to the system. During the first 20 years of UNIX, only the low 8 bits of the exit code were available to the waiting parent. In 1989 withSVR4,[citation needed]a new callwaitidwas introduced that returns all bits from theexitcall in a structure calledsiginfo_tin the structure membersi_status.[citation needed]Waitid has been a mandatory part of the POSIX standard since 2001. When a child process terminates, it becomes azombie process,and continues to exist as an entry in the systemprocess tableeven though it is no longer an actively executing program. Under normal operation it will typically be immediately waited on by its parent, and then reaped by the system, reclaiming the resource (the process table entry). If a child is not waited on by its parent, it continues to consume this resource indefinitely, and thus is aresource leak. Such situations are typically handled with a special "reaper" process[citation needed]that locates zombies and retrieves their exit status, allowing the operating system to then deallocate their resources. Conversely, a child process whose parent process terminates before it does becomes anorphan process. Such situations are typically handled with a special "root" (or "init") process, which is assigned as the new parent of a process when its parent process exits. This special process detects when an orphan process terminates and then retrieves its exit status, allowing the system to deallocate the terminated child process. If a child process receives a signal, a waiting parent will then continue execution leaving an orphan process behind.[citation needed]Hence it is sometimes needed to check the argument set by wait, waitpid or waitid and, in the case that WIFSIGNALED is true, wait for the child process again to deallocate resources.[citation needed]
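A minimal sketch of the status decoding described above, using the standard macros from <sys/wait.h>; the child's exit code of 42 is arbitrary and the error handling is deliberately brief.

/* Sketch: wait for a specific child and decode its termination status,
 * distinguishing normal exit from death by signal. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t child = fork();
    if (child < 0) { perror("fork"); return 1; }

    if (child == 0)
        exit(42);                        /* child terminates normally with code 42 */

    int status;
    if (waitpid(child, &status, 0) == -1) { perror("waitpid"); return 1; }

    if (WIFEXITED(status))
        printf("child exited normally, code %d\n", WEXITSTATUS(status));
    else if (WIFSIGNALED(status))
        printf("child killed by signal %d\n", WTERMSIG(status));
    return 0;
}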
https://en.wikipedia.org/wiki/Wait_(system_call)
Incomputing, theworking directoryof aprocessis adirectoryof ahierarchical file system, if any,[nb 1]dynamically associated with the process. It is sometimes called thecurrent working directory (CWD), e.g. theBSDgetcwd[1]function, or justcurrent directory.[2]When a process refers to a file using apaththat is arelative path, such as a path on aUnix-likesystem that does not begin with a/(forward slash) or a path onWindowsthat does not begin with a\(backward slash), the path is interpreted as relative to the process's working directory. So, for example a process on a Unix-like system with working directory/rabbit-shoesthat attempts to create the filefoo.txtwill end up creating the file/rabbit-shoes/foo.txt. In most computer file systems, every directory has an entry (usually named ".") which points to the directory itself. In mostDOSandUNIXcommand shells, as well as in theMicrosoft Windowscommand line interpreterscmd.exeandWindows PowerShell, the working directory can be changed by using theCDorCHDIRcommands. InUnix shells, thepwdcommand outputs a full pathname of the working directory; the equivalent command in DOS and Windows isCDorCHDIRwithoutarguments(whereas in Unix,cdused without arguments takes the user back to theirhome directory). Theenvironment variablePWD(in Unix/Linux shells), or thepseudo-environment variablesCD(in WindowsCOMMAND.COMandcmd.exe, but not in OS/2 and DOS), or_CWD,_CWDS,_CWPand_CWPS(under4DOS,4OS2,4NTetc.)[3]can be used in scripts, so that one need not start an external program.Microsoft Windowsfile shortcutshave the ability to store the working directory. COMMAND.COM inDR-DOS 7.02and higher providesECHOS, a variant of theECHOcommand omitting the terminating linefeed.[4][3]This can be used to create a temporary batchjob storing the working directory in an environment variable likeCDfor later use, for example: Alternatively, underMultiuser DOSandDR-DOS 7.02and higher, various internal and external commands support a parameter/B(for "Batch").[5]This modifies the output of commands to become suitable for direct command line input (when redirecting it into a batch file) or usage as a parameter for other commands (using it as input for another command). WhereCHDIRwould issue a directory path likeC:\DOS, a command likeCHDIR /Bwould issueCHDIR C:\DOSinstead, so thatCHDIR /B > RETDIR.BATwould create a temporary batchjob allowing to return to this directory later on. The working directory is also displayed by the$P[nb 2]token of thePROMPTcommand[6]To keep the prompt short even inside of deep subdirectory structures, the DR-DOS 7.07 COMMAND.COM supports a$W[nb 2]token to display only the deepest subdirectory level. So, where a defaultPROMPT $P$Gwould result f.e. inC:\DOS>orC:\DOS\DRDOS>, aPROMPT $N:$W$Gwould instead yieldC:DOS>andC:DRDOS>, respectively. 
A similar facility (using$Wand$w) was added to4DOSas well.[3] Under DOS, the absolute paths of the working directories of all logical volumes are internally stored in an array-like data structure called the Current Directory Structure (CDS), which gets dynamically allocated at boot time to hold the necessary number of slots for all logical drives (or as defined byLASTDRIVE).[7][8][9]This structure imposes a length-limit of 66 characters on the full path of each working directory, and thus implicitly also limits the maximum possible depth of subdirectories.[7]DOS Plusand older issues of DR DOS (up toDR DOS 6.0, withBDOS6.7 in 1991) had no such limitation[8][10][3]due to their implementation using aDOS emulationon top of aConcurrent DOS- (and thusCP/M-86-)derived kernel, which internally organized subdirectories asrelativelinks to parent directories instead of asabsolutepaths.[8][10]SincePalmDOS(with BDOS 7.0) and DR DOS 6.0 (1992 update with BDOS 7.1) and higher switched to use a CDS formaximum compatibilitywith DOS programs as well, they faced the same limitations as present in other DOSes.[8][10] Mostprogramming languagesprovide aninterfaceto thefile systemfunctions of the operating system, including the ability to set (change) the working directory of the program. In theC language, thePOSIXfunctionchdir()effects thesystem callwhich changes the working directory.[11]Its argument is atext stringwith a path to the new directory, either absolute or relative to the old one. Where available, it can be called by a process to set its working directory. There are similar functions in other languages. For example, inVisual Basicit is usually spelledCHDIR(). InJava, the working directory can be obtained through thejava.nio.file.Pathinterface, or through thejava.io.Fileclass. The working directory cannot be changed.[12]
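A short sketch of the POSIX calls mentioned above, getcwd() and chdir(); the target directory /tmp is just an example, and error handling is kept minimal.

/* Sketch: print the working directory, change it, and note how a relative
 * path is then resolved against the new working directory. */
#include <limits.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[PATH_MAX];

    if (getcwd(buf, sizeof buf) != NULL)
        printf("current working directory: %s\n", buf);

    if (chdir("/tmp") != 0) {            /* change the process's working directory */
        perror("chdir");
        return 1;
    }

    /* A relative path such as "foo.txt" now refers to /tmp/foo.txt. */
    if (getcwd(buf, sizeof buf) != NULL)
        printf("new working directory: %s\n", buf);
    return 0;
}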
https://en.wikipedia.org/wiki/Working_directory
OnUnixandUnix-likecomputeroperating systems, azombie processordefunct processis aprocessthat has completed execution (via theexitsystem call) but still has an entry in theprocess table: it is a process in the "terminated state". This occurs for thechild processes, where the entry is still needed to allow theparent processto read its child'sexit status: once the exit status is read via thewaitsystem call, the zombie's entry is removed from the process table and it is said to be "reaped". A child process initially becomes a zombie, only then being removed from the resource table. Under normal system operation, zombies are immediately waited on by their parent and then reaped by the system. Processes that stay zombies for a long time are usually an error and can cause aresource leak. Generally, the only kernel resource they occupy is the process table entry, their process ID. However, zombies can also hold buffers open, consuming memory. Zombies can hold handles to file descriptors, which prevents the space for those files from being available to the filesystem. This effect can be seen by a difference betweenduanddf. Whiledumay show a large amount of free disk space,dfwill show a full partition. If the zombies are not cleaned, this can fill the root partition and crash the system. The termzombie processderives from the common definition ofzombie— anundeadperson. In the term's metaphor, the child process has "died" but has not yet been "reaped". Unlike normal processes, thekillcommand has no effect on a zombie process. Zombie processes should not be confused withorphan processes, a process that is still executing, but whose parent has died. When the parent dies, the orphaned child process is adopted byinit. When orphan processes die, they do not remain as zombie processes; instead, they arewaited on byinit. When a process ends viaexit, all of the memory and resources associated with it are deallocated so they can be used by other processes. However, the process's entry in the process table remains. The parent can read the child's exit status by executing thewaitsystem call, whereupon the zombie is removed. Thewaitcall may be executed in sequential code, but it is commonly executed in ahandlerfor theSIGCHLDsignal, which the parent receives whenever a child has died. After the zombie is removed, itsprocess identifier(PID) and entry in the process table can then be reused. However, if a parent fails to callwait, the zombie will be left in the process table, causing aresource leak. In some situations this may be desirable – the parent process wishes to continue holding this resource – for example if the parent creates another child process it ensures that it will not be allocated the same PID. On modern UNIX-like systems (that comply withSUSv3specification in this respect), the following special case applies: if the parentexplicitlyignores SIGCHLD by setting its handler toSIG_IGN(rather than simply ignoring the signal by default) or has theSA_NOCLDWAITflag set, all child exit status information will be discarded and no zombie processes will be left.[1] Zombies can be identified in the output from the Unixpscommandby the presence of a "Z" in the "STAT" column.[2]Zombies that exist for more than a short period of time typically indicate a bug in the parent program, or just an uncommon decision to not reap children (see example). If the parent program is no longer running, zombie processes typically indicate a bug in the operating system. 
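To observe the "Z" state described above, the following sketch deliberately leaves a zombie for a short time: the child exits immediately while the parent sleeps before reaping it. The 30-second delay is arbitrary, and the exact ps options for viewing the state vary by system.

/* Sketch: deliberately create a short-lived zombie, observable with ps
 * (e.g. a listing that includes the STAT/state column) while the parent
 * is still sleeping. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t child = fork();
    if (child < 0) { perror("fork"); return 1; }

    if (child == 0)
        _exit(0);                 /* child terminates at once: becomes a zombie */

    printf("child %d is now a zombie; inspect it with ps for ~30 s\n", (int)child);
    sleep(30);                    /* parent delays reaping on purpose */

    waitpid(child, NULL, 0);      /* reap: the zombie disappears from the table */
    return 0;
}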
As with other resource leaks, the presence of a few zombies is not worrisome in itself, but may indicate a problem that would grow serious under heavier loads. Since there is no memory allocated to zombie processes – the only system memory usage is for the process table entry itself – the primary concern with many zombies is not running out of memory, but rather running out of process table entries, concretely process ID numbers. However, zombies can hold open buffers that are associated with file descriptors, and thereby cause memory to be consumed by the zombie. Zombies can also hold a file descriptor to a file that has been deleted. This prevents the file system from recovering the i-nodes for the deleted file. Therefore, the command to show disk usage will not count the deleted files whose space cannot be reused due to the zombie holding the filedescriptor. To remove zombies from a system, the SIGCHLDsignalcan be sent to the parent manually, using thekillcommand. If the parent process still refuses to reap the zombie, and if it would be fine to terminate the parent process, the next step can be to remove the parent process. When a process loses its parent,initbecomes its new parent.initperiodically executes thewaitsystem call to reap any zombies withinitas parent. Synchronouslywaiting for the specific child processes in a (specific) order may leave zombies present longer than the above-mentioned "short period of time". It is not necessarily a program bug. In the first loop, the original (parent) process forks 10 copies of itself. Each of these child processes (detected by the fact that fork() returned zero) prints a message, sleeps, and exits. All of the children are created at essentially the same time (since the parent is doing very little in the loop), so it is somewhat random when each of them gets scheduled for the first time - thus the scrambled order of their messages. During the loop, an array of child process IDs is built. There is a copy of the pids[] array in all 11 processes, but only in the parent is it complete - the copy in each child will be missing the lower-numbered child PIDs, and have zero for its own PID. (Not that this really matters, as only the parent process actually uses this array.) The second loop executes only in the parent process (because all of the children have exited before this point), and waits for each child to exit. It waits for the child that slept 10 seconds first; all the others have long since exited, so all of the messages (except the first) appear in quick succession. There is no possibility of random ordering here, since it is driven by a loop in a single process. The first parent message actually appeared before any of the children messages - the parent was able to continue into the second loop before any of the child processes were able to start. This again is just the random behavior of the process scheduler - the "parent9" message could have appeared anywhere in the sequence prior to "parent8". Child0 through Child8 spend one or more seconds in this state, between the time they exited and the time the parent did a waitpid() on them. The parent was already waiting on Child9 before it exited, so that one process spent essentially no time as a zombie.[3]
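The listing walked through above can be reconstructed roughly as follows. This is a sketch consistent with that description — child i sleeps i+1 seconds, and the parent prints its message before waiting for each child in reverse order — and not necessarily the original code.

/* Reconstruction (not the original listing) of the example described above:
 * the parent forks 10 children; child i prints a message, sleeps i+1 seconds
 * and exits; the parent then announces and waits for the children in reverse
 * order, so children 0..8 spend a while as zombies before being reaped. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define NPROC 10

int main(void)
{
    pid_t pids[NPROC] = {0};          /* only the parent's copy ends up complete */

    for (int i = 0; i < NPROC; i++) {          /* first loop: create the children */
        pids[i] = fork();
        if (pids[i] == 0) {                    /* child i */
            printf("Child%d\n", i);
            sleep(i + 1);                      /* child 9 sleeps the longest */
            _exit(0);
        }
    }

    for (int i = NPROC - 1; i >= 0; i--) {     /* second loop: parent only */
        printf("parent%d\n", i);               /* announce which child is awaited */
        waitpid(pids[i], NULL, 0);             /* child 9 first: blocks ~10 s */
    }
    return 0;
}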
https://en.wikipedia.org/wiki/Zombie_process
Programming languagesare used for controlling the behavior of a machine (often acomputer). Likenatural languages, programming languages follow rules forsyntaxandsemantics. There arethousands of programming languages[1]and new ones are created every year. Few languages ever become sufficiently popular that they are used by more than a few people, but professionalprogrammersmay use dozens of languages in a career. Most programming languages are not standardized by an international (or national) standard, even widely used ones, such asPerlorStandard ML(despite the name). Notable standardized programming languages includeALGOL,C,C++, JavaScript (under the nameECMAScript),Smalltalk,Prolog,Common Lisp,Scheme(IEEEstandard),ISLISP,Ada,Fortran,COBOL,SQL, andXQuery. The following table compares general and technical information for a selection of commonly usedprogramming languages. See the individual languages' articles for further information. Most programming languages will print anerror messageor throw anexceptionif aninput/outputoperation or othersystem call(e.g.,chmod,kill) fails, unless the programmer has explicitly arranged for different handling of these events. Thus, these languagesfail safelyin this regard. Some (mostly older) languages require that programmers explicitly add checks for these kinds of errors. Psychologically, differentcognitive biases(e.g.,optimism bias) may affect novices and experts alike and lead them to skip these checks. This can lead toerroneous behavior. Failsafe I/Ois a feature of1C:Enterprise,Ada(exceptions),ALGOL(exceptions or return value depending on function),Ballerina,C#,Common Lisp("conditions and restarts" system),Curry,D(throwing on failure),[45]Erlang,Fortran,Go(unless result explicitly ignored),Gosu,Harbour,Haskell,ISLISP,Java,Julia,Kotlin,LabVIEW,Mathematica,Objective-C(exceptions),OCaml(exceptions),OpenLisp,PHP,Python,Raku,Rebol,Rexx(with optionalsignal on... trap handling),Ring,Ruby,Rust(unless result explicitly ignored),Scala,[46]Smalltalk,Standard ML[citation needed],Swift ≥ 2.0(exceptions),Tcl,Visual Basic,Visual Basic .NET,Visual Prolog,Wolfram Language,Xojo,XPath/XQuery(exceptions), andZeek. No Failsafe I/O:AutoHotkey(global ErrorLevel must be explicitly checked),C,[47]COBOL,Eiffel(it actually depends on the library and it is not defined by the language),GLBasic(will generally cause program to crash),RPG,Lua(some functions do not warn or throw exceptions), andPerl.[48] Some I/O checking is built inC++(STL iostreamsthrow on failure but C APIs likestdioorPOSIXdo not)[47]andObject Pascal, inBash[49]it is optional. The literature on programming languages contains an abundance of informal claims about their relativeexpressive power, but there is no framework for formalizing such statements nor for deriving interesting consequences.[52]This table provides two measures of expressiveness from two different sources. An additional measure of expressiveness, in GZip bytes, can be found on the Computer Language Benchmarks Game.[53] Benchmarksare designed to mimic a particular type of workload on a component or system. The computer programs used for compiling some of the benchmark data in this section may not have been fully optimized, and the relevance of the data is disputed. The most accurate benchmarks are those that are customized to your particular situation. 
Benchmark data published by others may have some value, but proper interpretation brings many challenges. The Computer Language Benchmarks Game site warns against over-generalizing from benchmark data, yet it contains a large number of micro-benchmarks of reader-contributed code snippets, with an interface that generates various charts and tables comparing specific programming languages and types of tests.[56]
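To make the failsafe-I/O comparison above concrete, the following C sketch shows the explicit checking that stdio requires: failures are reported through return values and errno rather than exceptions. The file name is hypothetical and the handling is intentionally minimal.

/* Sketch of the explicit I/O checking C requires: stdio calls report failure
 * through return values and errno rather than by raising an exception. */
#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *f = fopen("does-not-exist.txt", "r");   /* hypothetical file name */
    if (f == NULL) {
        fprintf(stderr, "fopen failed: %s\n", strerror(errno));
        return 1;
    }

    char line[256];
    if (fgets(line, sizeof line, f) == NULL && ferror(f))
        fprintf(stderr, "read error: %s\n", strerror(errno));

    if (fclose(f) != 0)                            /* even fclose can fail */
        fprintf(stderr, "fclose failed: %s\n", strerror(errno));
    return 0;
}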
https://en.wikipedia.org/wiki/Comparison_of_programming_languages
Thehistory of programming languagesspans from documentation of early mechanical computers to modern tools forsoftware development. Early programming languages were highly specialized, relying onmathematical notationand similarly obscuresyntax.[1]Throughout the 20th century, research incompilertheory led to the creation ofhigh-level programming languages, which use a more accessible syntax to communicate instructions. The first high-level programming language wasPlankalkül, created byKonrad Zusebetween 1942 and 1945.[2]The first high-level language to have an associatedcompilerwas created byCorrado Böhmin 1951, for hisPhDthesis.[3]The first commercially available language wasFORTRAN(FORmula TRANslation), developed in 1956 (first manual appeared in 1956, but first developed in 1954) by a team led byJohn BackusatIBM. During 1842–1849,Ada Lovelacetranslated the memoir of Italian mathematicianLuigi MenabreaaboutCharles Babbage's newest proposed machine: theAnalytical Engine; she supplemented the memoir with notes that specified in detail a method for calculatingBernoulli numberswith the engine, recognized by most of historians as the world's first published computer program.[4] Jacquard Loomsand Charles Babbage'sDifference Engineboth were designed to utilizepunched cards,[5][6]which would describe the sequence of operations that their programmable machines should perform. The first computercodeswere specialized for their applications: e.g.,Alonzo Churchwas able to express thelambda calculusin a formulaic way and theTuring machinewas an abstraction of the operation of a tape-marking machine. In the 1940s, the first recognizably modern electrically powered computers were created. The limited speed andmemory capacityforced programmers to write hand-tunedassembly languageprograms. It was eventually realized that programming in assembly language required a great deal of intellectual effort.[citation needed] An early proposal for ahigh-level programming languagewasPlankalkül, developed byKonrad Zusefor hisZ1 computerbetween 1942 and 1945 but not implemented at the time.[7] The first functioning programming languages designed to communicate instructions to a computer were written in the early 1950s.John Mauchly'sShort Code, proposed in 1949, was one of the first high-level languages ever developed for anelectronic computer.[8]Unlikemachine code, Short Code statements representedmathematical expressionsin understandable form. However, the program had to beinterpretedinto machine code every time it ran, making the process much slower than running the equivalent machine code. In the early 1950s,Alick GlenniedevelopedAutocode, possibly the first compiled programming language, at theUniversity of Manchester. In 1954, a second iteration of the language, known as the "Mark 1 Autocode", was developed for theMark 1byR. A. Brooker. Brooker, with the University of Manchester, also developed an autocode for theFerranti Mercuryin the 1950s. The version for theEDSAC2 was devised byDouglas HartreeofUniversity of Cambridge Mathematical Laboratoryin 1961. Known as EDSAC 2 Autocode, it was a straight development from Mercury Autocode adapted for local circumstances and was noted for itsobject codeoptimization and source-language diagnostics which were advanced for the time. A contemporary but separate thread of development,Atlas Autocodewas developed for theUniversity of ManchesterAtlas 1machine. 
In 1954,FORTRANwas invented at IBM by a team led byJohn Backus; it was the first widely used high-level general purpose language to have a functional implementation, in contrast to only a design on paper.[9][10]When FORTRAN was first introduced, it was viewed with skepticism due to bugs, delays in development, and the comparative efficiency of "hand-coded" programs written in assembly.[11]However, in a hardware market that was rapidly evolving, the language eventually became known for its efficiency. It is still a popular language forhigh-performance computing[12]and is used for programs that benchmark and rank the world'sTOP500fastest supercomputers.[13] Another early programming language was devised byGrace Hopperin the US, namedFLOW-MATIC. It was developed for theUNIVAC IatRemington Randduring the period from 1955 until 1959. Hopper found that businessdata processingcustomers were uncomfortable withmathematical notation, and in early 1955, she and her team wrote a specification for anEnglish languageprogramming language and implemented a prototype.[14]The FLOW-MATIC compiler became publicly available in early 1958 and was substantially complete in 1959.[15]Flow-Matic was a major influence in the design ofCOBOL, since only it and its direct descendantAIMACOwere in use at the time.[16] Other languages still in use today includeLISP(1958), invented byJohn McCarthyandCOBOL(1959), created by the Short Range Committee. Another milestone in the late 1950s was the publication, by a committee of American and European computer scientists, of "a new language for algorithms"; theALGOL60 Report(the "ALGOrithmicLanguage"). This report consolidated many ideas circulating at the time and featured three key language innovations: Another innovation, related to this, was in how the language was described: ALGOL 60was particularly influential in the design of later languages, some of which soon became more popular. TheBurroughs large systemswere designed to be programmed in an extended subset of ALGOL. ALGOL's key ideas were continued, producingALGOL 68: ALGOL 68's many little-used language features (for example, concurrent and parallel blocks) and its complex system of syntactic shortcuts and automatic type coercions made it unpopular with implementers and gained it a reputation of beingdifficult.Niklaus Wirthactually walked out of the design committee to create the simplerPascallanguage. Some notable languages that were developed in this period include: The period from the late 1960s to the late 1970s brought a major flowering of programming languages. Most of the major languageparadigmsnow in use were invented in this period:[original research?] Each of these languages spawned an entire family of descendants, and most modern languages count at least one of them in their ancestry. The 1960s and 1970s also saw considerable debate over the merits of "structured programming", which essentially meant programming without the use ofgoto. A significant fraction of programmers believed that, even in languages that providegoto, it is badprogramming styleto use it except in rare circumstances. This debate was closely related to language design: some languages had nogoto, which forced the use of structured programming. To provide even faster compile times, some languages were structured for "one-pass compilers" which expect subordinate routines to be defined first, as withPascal, where the main routine, or driver function, is the final section of the program listing. 
Some notable languages that were developed in this period include: The 1980s were years of relative consolidation inimperative languages. Rather than inventing new paradigms, all of these movements elaborated upon the ideas invented in the prior decade.C++combined object-oriented and systems programming. The United States government standardizedAda, a systems programming language intended for use by defense contractors. In Japan and elsewhere, vast sums were spent investigating so-calledfifth-generation programming languagesthat incorporated logic programming constructs. The functional languages community moved to standardize ML and Lisp. Research inMiranda, a functional language withlazy evaluation, began to take hold in this decade. One important new trend in language design was an increased focus on programming for large-scale systems through the use ofmodules, or large-scale organizational units of code.Modula, Ada, and ML all developed notable module systems in the 1980s. Module systems were often wedded togeneric programmingconstructs: generics being, in essence, parametrized modules[citation needed](see alsoPolymorphism (computer science)). Although major new paradigms for imperative programming languages did not appear, many researchers expanded on the ideas of prior languages and adapted them to new contexts. For example, the languages of theArgusand Emerald systems adapted object-oriented programming todistributed computingsystems. The 1980s also brought advances in programming language implementation. Thereduced instruction set computer(RISC) movement incomputer architecturepostulated that hardware should be designed forcompilersrather than for human assembly programmers. Aided bycentral processing unit(CPU) speed improvements that enabled increasingly aggressive compiling methods, the RISC movement sparked greater interest in compiler technology for high-level languages. Language technology continued along these lines well into the 1990s. Some notable languages that were developed in this period include: The rapid growth of the Internet in the mid-1990s was the next major historic event in programming languages. By opening up a radically new platform for computer systems, the Internet created an opportunity for new languages to be adopted. In particular, theJavaScriptprogramming language rose to popularity because of its early integration with the Netscape Navigator web browser. Various other scripting languages achieved widespread use in developing customized applications for web servers such as PHP. The 1990s saw no fundamental novelty inimperative languages, but much recombination and maturation of old ideas. This era began the spread offunctional languages. A big driving philosophy was programmer productivity. Manyrapid application development(RAD) languages emerged, which usually came with anintegrated development environment(IDE),garbage collection, and were descendants of older languages. All such languages wereobject-oriented. These includedObject Pascal, Objective Caml (renamedOCaml),Visual Basic, andJava. Java in particular received much attention. More radical and innovative than the RAD languages were the newscripting languages. These did not directly descend from other languages and featured new syntaxes and more liberal incorporation of features. 
Many consider these scripting languages to be more productive than even the RAD languages, but often because of choices that make small programs simpler but large programs more difficult to write and maintain.[citation needed] Nevertheless, scripting languages came to be the most prominent ones used in connection with the Web. Some programming languages included other languages in their distribution to save development time; for example, both Python and Ruby included Tcl to support GUI programming through libraries like Tkinter. Some notable languages that were developed in this period include: Programming language evolution continues, and more programming paradigms are used in production. Some of the trends have included: Big Tech companies introduced multiple new programming languages designed to serve their needs, for example: Some notable languages developed during this period include: Programming language evolution continues with the rise of new programming domains. Many Big Tech companies continued introducing new programming languages that are designed to serve their needs and provide first-class support for their platforms, for example: Some notable languages developed during this period include:[20][21] Other new programming languages include Elm, Ballerina, Red, Crystal, V (Vlang), and Reason. The development of new programming languages continues, and some new languages appear with a focus on providing a replacement for current languages. These new languages try to provide the advantages of a known language like C++ (versatile and fast) while adding safety or reducing complexity. Other new languages aim to offer the ease of use of Python while making performance a priority. The growth of machine learning and AI tools also plays a large role in driving this development: some visual languages focus on integrating these AI tools, while other, textual languages focus on providing more suitable support for developing them.[22][23][24] Some notable new programming languages include: Some key people who helped develop programming languages:
https://en.wikipedia.org/wiki/History_of_programming_languages
This is an index to notable programming languages, in current or historical use. Dialects of BASIC (which have their own page), esoteric programming languages, and markup languages are not included. A programming language need not be imperative or Turing-complete, but it must be executable; markup languages such as HTML or XML are therefore excluded, while domain-specific languages such as SQL and its dialects are included.
https://en.wikipedia.org/wiki/List_of_programming_languages
This is a list of notableprogramming languages, grouped by type. The groupings are overlapping; not mutually exclusive. A language can be listed in multiple groupings. Agent-oriented programming allows the developer to build, extend and usesoftware agents, which are abstractions of objects that can message other agents. Array programming(also termedvectorormultidimensional) languages generalize operations on scalars to apply transparently tovectors,matrices, andhigher-dimensional arrays. Aspect-oriented programming enables developers to add new functionality to code, known as "advice", without modifying that code itself; rather, it uses apointcutto implement the advice into code blocks. Assembly languagesdirectly correspond to amachine language(seebelow), so machine code instructions appear in a form understandable by humans, although there may not be a one-to-one mapping between an individual statement and an individual instruction. Assembly languages let programmers use symbolic addresses, which theassemblerconverts to absolute orrelocatableaddresses. Most assemblers also supportmacrosandsymbolic constants. Anauthoring languageis a programming language designed for use by a non-computer expert to easily create tutorials, websites, and other interactive computer programs. Command-line interface (CLI) languages are also called batch languages or job control languages. Examples: These are languages typically processed bycompilers, though theoretically any language can be compiled or interpreted. Aconcatenative programming languageis apoint-freecomputerprogramming languagein which all expressions denotefunctions, and thejuxtapositionofexpressionsdenotesfunction composition. Message passinglanguages provide language constructs forconcurrency. The predominant paradigm for concurrency in mainstream languages such asJavaisshared memoryconcurrency. Concurrent languages that make use of message passing have generally been inspired by process calculi such ascommunicating sequential processes(CSP) or theπ-calculus. Aconstraint programminglanguage is adeclarative programminglanguage where relationships between variables are expressed asconstraints. Execution proceeds by attempting to find values for the variables which satisfy all declared constraints. Acurly bracketorcurly bracelanguage has syntax that defines a block as the statements betweencurly brackets, a.k.a. braces,{}. This syntax originated withBCPL(1966), and was popularized byC. Many curly bracket languagesdescend from or are strongly influenced by C. Examples: Dataflow programminglanguages rely on a (usually visual) representation of the flow of data to specify the program. Frequently used for reacting to discrete events or for processing streams of data. 
Examples of dataflow languages include: Data-oriented languages provide powerful ways of searching and manipulating the relations that have been described as entity relationship tables which map one set of things into other sets.[citation needed]Examples of data-oriented languages include: Decision tablescan be used as an aid to clarifying the logic before writing a program in any language, but in the 1960s a number of languages were developed where the main logic is expressed directly in the form of a decision table, including: Declarative languagesexpress the logic of a computation without describing its control flow in detail.Declarative programmingstands in contrast toimperative programmingvia imperative programming languages, where control flow is specified by serial orders (imperatives). (Pure)functionalandlogic-basedprogramming languages are also declarative, and constitute the major subcategories of the declarative category. This section lists additional examples not in those subcategories. Source embeddable languages embed small pieces of executable code inside a piece of free-form text, often a web page. Client-side embedded languages are limited by the abilities of the browser or intended client. They aim to provide dynamism to web pages without the need to recontact the server. Server-side embedded languages are much more flexible, since almost any language can be built into a server. The aim of having fragments of server-side code embedded in a web page is to generate additional markup dynamically; the code itself disappears when the page is served, to be replaced by its output. The above examples are particularly dedicated to this purpose. A large number of other languages, such asErlang,Scala,Perl,RingandRubycan be adapted (for instance, by being made intoApachemodules). A wide variety of dynamic or scripting languages can be embedded in compiled executable code. Basically, object code for the language'sinterpreterneeds to be linked into the executable. Source code fragments for the embedded language can then be passed to an evaluation function as strings. Application control languages can be implemented this way, if the source code is input by the user. Languages with small interpreters are preferred. Languages developed primarily for the purpose of teaching and learning of programming. Anesoteric programming languageis a programming language designed as a test of the boundaries of computer programming language design, as a proof of concept, or as a joke. Extension programming languagesare languages embedded into another program and used to harness its features in extension scripts. Fourth-generation programming languagesarehigh-level programming languagesbuilt arounddatabasesystems. They are generally used in commercial environments. Functional programminglanguages define programs and subroutines as mathematical functions and treat them as first-class. Many so-called functional languages are "impure", containing imperative features. Many functional languages are tied to mathematical calculation tools. Functional languages include: In electronics, ahardware description language(HDL) is a specialized computer language used to describe the structure, design, and operation of electronic circuits, and most commonly, digital logic circuits. The two most widely used and well-supported HDL varieties used in industry areVerilogandVHDL. Hardware description languages include: Imperative programming languages may be multi-paradigm and appear in other classifications. 
Here is a list of programming languages that follow theimperative paradigm: Known asREPL- Interactive mode languages act as a kind of shell: expressions or statements can be entered one at a time, and the result of their evaluation seen immediately. Interpreted languagesare programming languages in which programs may be executed from source code form, by an interpreter. Theoretically, any language can be compiled or interpreted, so the terminterpreted languagegenerally refers to languages that are usually interpreted rather than compiled. Iterative languages are built around or offeringgenerators. Garbage Collection (GC) is a form of automatic memory management. The garbage collector attempts to reclaim memory that was allocated by the program but is no longer used. Some programming languages without the inherent ability to manually manage memory, likeCython,[25]Swift,[c]andScala[26](Scala Native only), are able to import or call functions likemallocandfreefromCthrough aforeign function interface. List-based languages are a type ofdata-structured languagethat are based on thelistdata structure. Little languages[29]serve a specialized problem domain. Logic-basedlanguages specify a set of attributes that a solution must-have, rather than a set of steps to obtain a solution. Notable languages following thisprogramming paradigminclude: Machine languagesare directly executable by a computer's CPU. They are typically formulated as bit patterns, usually represented inoctalorhexadecimal. Each bit pattern causes the circuits in the CPU to execute one of the fundamental operations of the hardware. The activation of specific electrical inputs (e.g., CPU package pins for microprocessors), and logical settings for CPU state values, control the processor's computation. Individual machine languages are specific to a family of processors; machine-language code for one family of processors cannot run directly on processors in another family unless the processors in question have additional hardware to support it (for example, DEC VAX processors included a PDP-11 compatibility mode). They are (essentially) always defined by the CPU developer, not by 3rd parties.[e]The symbolic version, the processor'sassembly language, is also defined by the developer, in most cases. Some commonly used machine codeinstruction setsare: Macrolanguages transform one source code file into another. A "macro" is essentially a short piece of text that expands into a longer one (not to be confused withhygienic macros), possibly with parameter substitution. They are often used topreprocesssource code. Preprocessors can also supply facilities likefile inclusion. Macro languages may be restricted to acting on specially labeled code regions (pre-fixed with a#in the case of the C preprocessor). Alternatively, they may not, but in this case it is still often undesirable to (for instance) expand a macro embedded in astring literal, so they still need a rudimentary awareness of syntax. That being the case, they are often still applicable to more than one language. Contrast with source-embeddable languages likePHP, which are fully featured. Scripting languagessuch asTclandECMAScript(ActionScript,ECMAScript for XML,JavaScript,JScript) have been embedded into applications. These are sometimes called "macro languages", although in a somewhat different sense to textual-substitution macros likem4. 
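The macro-language description above can be illustrated with the C preprocessor, whose directives are prefixed with '#'. This is a minimal sketch; the macro names are arbitrary.

/* Sketch: the C preprocessor as a macro language. Macros expand textually
 * (with parameter substitution) before compilation, and #include performs
 * file inclusion. */
#include <stdio.h>              /* file inclusion performed by the preprocessor */

#define BUFFER_SIZE 128                 /* object-like macro: plain text substitution */
#define SQUARE(x)  ((x) * (x))          /* function-like macro with a parameter */

int main(void)
{
    char buf[BUFFER_SIZE];              /* expands to: char buf[128]; */
    (void)buf;
    printf("%d\n", SQUARE(3 + 1));      /* expands to ((3 + 1) * (3 + 1)), i.e. 16 */
    return 0;
}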
Metaprogrammingis the writing of programs that write or manipulate other programs, including themselves, as their data or that do part of the work that is otherwise done atrun timeduringcompile time. In many cases, this allows programmers to get more done in the same amount of time as they would take to write all the code manually. Multiparadigm languagessupport more than oneprogramming paradigm. They allow aprogramto use more than oneprogrammingstyle. The goal is to allow programmers to use the best tool for a job, admitting that no one paradigm solves all problems in the easiest or most efficient way. Several general-purpose programming languages, such asCandPython, are also used for technical computing, this list focuses on languages almost exclusively used for technical computing. Class-basedobject-oriented programminglanguages supportobjectsdefined by their class. Class definitions include member data.Message passingis a key concept, if not the main concept, in object-oriented languages. Polymorphic functions parameterized by the class of some of their arguments are typically calledmethods. In languages withsingle dispatch, classes typically also include method definitions. In languages withmultiple dispatch, methods are defined bygeneric functions. There are exceptions wheresingle dispatchmethods aregeneric functions(e.g.Bigloo's object system). Prototype-based languagesare object-oriented languages where the distinction between classes and instances has been removed: Off-side rulelanguages denote blocks of code by theirindentation. Procedural programminglanguages are based on the concept of the unit and scope (the data viewing range) of an executable code statement. A procedural program is composed of one or more units or modules, either user coded or provided in a code library; each module is composed of one or more procedures, also called a function, routine, subroutine, or method, depending on the language. Examples of procedural languages include: Reflective programminglanguages let programs examine and possibly modify their high-level structure at runtime or compile-time. This is most common in high-level virtual machine programming languages likeSmalltalk, and less common in lower-level programming languages likeC. Languages and platforms supporting reflection: Rule-based languages instantiate rules when activated by conditions in a set of data. Of all possible activations, some set is selected and the statements belonging to those rules execute. Rule-based languages include:[citation needed] Stack-based languages are a type ofdata-structured languagethat are based on thestackdata structure. Synchronous programming languagesare optimized for programming reactive systems, systems that are often interrupted and must respond quickly. Many such systems are also calledrealtime systems, and are used often inembedded systems. Examples: Ashading languageis a graphics programming language adapted to programming shader effects. Such language forms usually consist of special data types, like "color" and "normal". Due to the variety of target markets for 3D computer graphics. They provide both higher hardware abstraction and a more flexible programming model than previous paradigms which hardcoded transformation and shading equations. This gives the programmer greater control over the rendering process and delivers richer content at lower overhead. Shading languages used in offline rendering produce maximum image quality. Processing such shaders is time-consuming. 
The computational power required can be expensive because of their ability to produce photorealistic results. These languages assist with generatinglexical analyzersandparsersforcontext-free grammars. Thesystem programming languagesare for low-level tasks like memory management or task management. A system programming language usually refers to a programming language used for system programming; such languages are designed for writing system software, which usually requires different development approaches when compared with application software. System software is computer software designed to operate and control the computer hardware, and to provide a platform for running application software. System software includes software categories such as operating systems, utility software, device drivers, compilers, and linkers. Examples of system languages include: Transformation languagesserve the purpose of transforming (translating) source code specified in a certain formal language into a defined destination format code. It is most commonly used in intermediate components of more complex super-systems in order to adopt internal results for input into a succeeding processing routine. Visual programming languageslet users specify programs in a two-(or more)-dimensional way, instead of as one-dimensional text strings, via graphic layouts of various types. Somedataflow programminglanguages are also visual languages. Computer scientistNiklaus Wirthdesigned and implemented several influential languages. These are languages based on or that operate onXML.
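Returning to the system programming languages described above: their defining trait is direct, manual control over low-level resources such as memory. The following is a small hedged sketch in C (the buffer size and strings are arbitrary illustrations) of the explicit allocate/initialize/release discipline that such languages expose and that garbage-collected languages automate.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    size_t n = 1024;                    /* arbitrary buffer size */
    char *buf = malloc(n);              /* explicit allocation: no garbage collector */
    if (buf == NULL)
        return 1;                       /* the programmer must handle allocation failure */
    memset(buf, 0, n);                  /* explicit initialization */
    strcpy(buf, "system software manages resources explicitly");
    puts(buf);
    free(buf);                          /* explicit release; forgetting this leaks memory */
    return 0;
}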
https://en.wikipedia.org/wiki/List_of_programming_languages_by_type
In mathematics, especially inalgebraic geometryand the theory ofcomplex manifolds,coherent sheavesare a class ofsheavesclosely linked to the geometric properties of the underlying space. The definition of coherent sheaves is made with reference to asheaf of ringsthat codifies this geometric information. Coherent sheaves can be seen as a generalization ofvector bundles. Unlike vector bundles, they form anabelian category, and so they are closed under operations such as takingkernels,images, andcokernels. Thequasi-coherent sheavesare a generalization of coherent sheaves and include the locally free sheaves of infinite rank. Coherent sheaf cohomologyis a powerful technique, in particular for studying thesectionsof a given coherent sheaf. Aquasi-coherent sheafon aringed space(X,OX){\displaystyle (X,{\mathcal {O}}_{X})}is a sheafF{\displaystyle {\mathcal {F}}}ofOX{\displaystyle {\mathcal {O}}_{X}}-modulesthat has a local presentation, that is, every point inX{\displaystyle X}has anopen neighborhoodU{\displaystyle U}in which there is anexact sequence for some (possibly infinite) setsI{\displaystyle I}andJ{\displaystyle J}. Acoherent sheafon aringed space(X,OX){\displaystyle (X,{\mathcal {O}}_{X})}is a sheafF{\displaystyle {\mathcal {F}}}ofOX{\displaystyle {\mathcal {O}}_{X}}-modulessatisfying the following two properties: Morphisms between (quasi-)coherent sheaves are the same as morphisms of sheaves ofOX{\displaystyle {\mathcal {O}}_{X}}-modules. WhenX{\displaystyle X}is a scheme, the general definitions above are equivalent to more explicit ones. A sheafF{\displaystyle {\mathcal {F}}}ofOX{\displaystyle {\mathcal {O}}_{X}}-modules isquasi-coherentif and only if over each openaffine subschemeU=Spec⁡A{\displaystyle U=\operatorname {Spec} A}the restrictionF|U{\displaystyle {\mathcal {F}}|_{U}}is isomorphic to the sheafM~{\displaystyle {\tilde {M}}}associatedto the moduleM=Γ(U,F){\displaystyle M=\Gamma (U,{\mathcal {F}})}overA{\displaystyle A}. WhenX{\displaystyle X}is a locally Noetherian scheme,F{\displaystyle {\mathcal {F}}}iscoherentif and only if it is quasi-coherent and the modulesM{\displaystyle M}above can be taken to befinitely generated. On an affine schemeU=Spec⁡A{\displaystyle U=\operatorname {Spec} A}, there is anequivalence of categoriesfromA{\displaystyle A}-modules to quasi-coherent sheaves, taking a moduleM{\displaystyle M}to the associated sheafM~{\displaystyle {\tilde {M}}}. The inverse equivalence takes a quasi-coherent sheafF{\displaystyle {\mathcal {F}}}onU{\displaystyle U}to theA{\displaystyle A}-moduleF(U){\displaystyle {\mathcal {F}}(U)}of global sections ofF{\displaystyle {\mathcal {F}}}. Here are several further characterizations of quasi-coherent sheaves on a scheme.[1] Theorem—LetX{\displaystyle X}be a scheme andF{\displaystyle {\mathcal {F}}}anOX{\displaystyle {\mathcal {O}}_{X}}-module on it. Then the following are equivalent. On an arbitrary ringed space, quasi-coherent sheaves do not necessarily form an abelian category. On the other hand, the quasi-coherent sheaves on anyschemeform an abelian category, and they are extremely useful in that context.[2] On any ringed spaceX{\displaystyle X}, the coherent sheaves form an abelian category, afull subcategoryof the category ofOX{\displaystyle {\mathcal {O}}_{X}}-modules.[3](Analogously, the category ofcoherent modulesover any ringA{\displaystyle A}is a full abelian subcategory of the category of allA{\displaystyle A}-modules.) So the kernel, image, and cokernel of any map of coherent sheaves are coherent. 
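For reference, the local presentation required in the definition of quasi-coherence above is, in the standard formulation, an exact sequence of sheaves on the neighborhood U of the form (a reconstruction of the display referred to there):

{\displaystyle {\mathcal {O}}_{X}^{\oplus I}|_{U}\to {\mathcal {O}}_{X}^{\oplus J}|_{U}\to {\mathcal {F}}|_{U}\to 0,}

so that, locally, the sheaf is a cokernel of a map of (possibly infinitely generated) free O_X-modules; coherence imposes finiteness conditions on such presentations.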
Thedirect sumof two coherent sheaves is coherent; more generally, anOX{\displaystyle {\mathcal {O}}_{X}}-module that is anextensionof two coherent sheaves is coherent.[4] A submodule of a coherent sheaf is coherent if it is of finite type. A coherent sheaf is always anOX{\displaystyle {\mathcal {O}}_{X}}-module offinite presentation, meaning that each pointx{\displaystyle x}inX{\displaystyle X}has an open neighborhoodU{\displaystyle U}such that the restrictionF|U{\displaystyle {\mathcal {F}}|_{U}}ofF{\displaystyle {\mathcal {F}}}toU{\displaystyle U}is isomorphic to the cokernel of a morphismOXn|U→OXm|U{\displaystyle {\mathcal {O}}_{X}^{n}|_{U}\to {\mathcal {O}}_{X}^{m}|_{U}}for some natural numbersn{\displaystyle n}andm{\displaystyle m}. IfOX{\displaystyle {\mathcal {O}}_{X}}is coherent, then, conversely, every sheaf of finite presentation overOX{\displaystyle {\mathcal {O}}_{X}}is coherent. The sheaf of ringsOX{\displaystyle {\mathcal {O}}_{X}}is called coherent if it is coherent considered as a sheaf of modules over itself. In particular, theOka coherence theoremstates that the sheaf of holomorphic functions on a complex analytic spaceX{\displaystyle X}is a coherent sheaf of rings. The main part of the proof is the caseX=Cn{\displaystyle X=\mathbf {C} ^{n}}. Likewise, on alocally Noetherian schemeX{\displaystyle X}, the structure sheafOX{\displaystyle {\mathcal {O}}_{X}}is a coherent sheaf of rings.[5] Letf:X→Y{\displaystyle f:X\to Y}be a morphism of ringed spaces (for example, amorphism of schemes). IfF{\displaystyle {\mathcal {F}}}is a quasi-coherent sheaf onY{\displaystyle Y}, then theinverse imageOX{\displaystyle {\mathcal {O}}_{X}}-module (orpullback)f∗F{\displaystyle f^{*}{\mathcal {F}}}is quasi-coherent onX{\displaystyle X}.[10]For a morphism of schemesf:X→Y{\displaystyle f:X\to Y}and a coherent sheafF{\displaystyle {\mathcal {F}}}onY{\displaystyle Y}, the pullbackf∗F{\displaystyle f^{*}{\mathcal {F}}}is not coherent in full generality (for example,f∗OY=OX{\displaystyle f^{*}{\mathcal {O}}_{Y}={\mathcal {O}}_{X}}, which might not be coherent), but pullbacks of coherent sheaves are coherent ifX{\displaystyle X}is locally Noetherian. An important special case is the pullback of a vector bundle, which is a vector bundle. Iff:X→Y{\displaystyle f:X\to Y}is aquasi-compactquasi-separatedmorphism of schemes andF{\displaystyle {\mathcal {F}}}is a quasi-coherent sheaf onX{\displaystyle X}, then the direct image sheaf (orpushforward)f∗F{\displaystyle f_{*}{\mathcal {F}}}is quasi-coherent onY{\displaystyle Y}.[2] The direct image of a coherent sheaf is often not coherent. For example, for afieldk{\displaystyle k}, letX{\displaystyle X}be the affine line overk{\displaystyle k}, and consider the morphismf:X→Spec⁡(k){\displaystyle f:X\to \operatorname {Spec} (k)}; then the direct imagef∗OX{\displaystyle f_{*}{\mathcal {O}}_{X}}is the sheaf onSpec⁡(k){\displaystyle \operatorname {Spec} (k)}associated to the polynomial ringk[x]{\displaystyle k[x]}, which is not coherent becausek[x]{\displaystyle k[x]}has infinite dimension as ak{\displaystyle k}-vector space. On the other hand, the direct image of a coherent sheaf under aproper morphismis coherent, byresults of Grauert and Grothendieck. An important feature of coherent sheavesF{\displaystyle {\mathcal {F}}}is that the properties ofF{\displaystyle {\mathcal {F}}}at a pointx{\displaystyle x}control the behavior ofF{\displaystyle {\mathcal {F}}}in a neighborhood ofx{\displaystyle x}, more than would be true for an arbitrary sheaf. 
For example,Nakayama's lemmasays (in geometric language) that ifF{\displaystyle {\mathcal {F}}}is a coherent sheaf on a schemeX{\displaystyle X}, then thefiberFx⊗OX,xk(x){\displaystyle {\mathcal {F}}_{x}\otimes _{{\mathcal {O}}_{X,x}}k(x)}ofF{\displaystyle F}at a pointx{\displaystyle x}(a vector space over the residue fieldk(x){\displaystyle k(x)}) is zero if and only if the sheafF{\displaystyle {\mathcal {F}}}is zero on some open neighborhood ofx{\displaystyle x}. A related fact is that the dimension of the fibers of a coherent sheaf isupper-semicontinuous.[11]Thus a coherent sheaf has constant rank on an open set, while the rank can jump up on a lower-dimensional closed subset. In the same spirit: a coherent sheafF{\displaystyle {\mathcal {F}}}on a schemeX{\displaystyle X}is a vector bundle if and only if itsstalkFx{\displaystyle {\mathcal {F}}_{x}}is afree moduleover the local ringOX,x{\displaystyle {\mathcal {O}}_{X,x}}for every pointx{\displaystyle x}inX{\displaystyle X}.[12] On a general scheme, one cannot determine whether a coherent sheaf is a vector bundle just from its fibers (as opposed to its stalks). On areducedlocally Noetherian scheme, however, a coherent sheaf is a vector bundle if and only if its rank is locally constant.[13] For a morphism of schemesX→Y{\displaystyle X\to Y}, letΔ:X→X×YX{\displaystyle \Delta :X\to X\times _{Y}X}be thediagonal morphism, which is aclosed immersionifX{\displaystyle X}isseparatedoverY{\displaystyle Y}. LetI{\displaystyle {\mathcal {I}}}be the ideal sheaf ofX{\displaystyle X}inX×YX{\displaystyle X\times _{Y}X}. Then the sheaf ofdifferentialsΩX/Y1{\displaystyle \Omega _{X/Y}^{1}}can be defined as the pullbackΔ∗I{\displaystyle \Delta ^{*}{\mathcal {I}}}ofI{\displaystyle {\mathcal {I}}}toX{\displaystyle X}. Sections of this sheaf are called1-formsonX{\displaystyle X}overY{\displaystyle Y}, and they can be written locally onX{\displaystyle X}as finite sums∑fjdgj{\displaystyle \textstyle \sum f_{j}\,dg_{j}}for regular functionsfj{\displaystyle f_{j}}andgj{\displaystyle g_{j}}. IfX{\displaystyle X}is locally of finite type over a fieldk{\displaystyle k}, thenΩX/k1{\displaystyle \Omega _{X/k}^{1}}is a coherent sheaf onX{\displaystyle X}. IfX{\displaystyle X}issmoothoverk{\displaystyle k}, thenΩ1{\displaystyle \Omega ^{1}}(meaningΩX/k1{\displaystyle \Omega _{X/k}^{1}}) is a vector bundle overX{\displaystyle X}, called thecotangent bundleofX{\displaystyle X}. Then thetangent bundleTX{\displaystyle TX}is defined to be the dual bundle(Ω1)∗{\displaystyle (\Omega ^{1})^{*}}. ForX{\displaystyle X}smooth overk{\displaystyle k}of dimensionn{\displaystyle n}everywhere, the tangent bundle has rankn{\displaystyle n}. IfY{\displaystyle Y}is a smooth closed subscheme of a smooth schemeX{\displaystyle X}overk{\displaystyle k}, then there is a short exact sequence of vector bundles onY{\displaystyle Y}: which can be used as a definition of thenormal bundleNY/X{\displaystyle N_{Y/X}}toY{\displaystyle Y}inX{\displaystyle X}. For a smooth schemeX{\displaystyle X}over a fieldk{\displaystyle k}and a natural numberi{\displaystyle i}, the vector bundleΩi{\displaystyle \Omega ^{i}}ofi-formsonX{\displaystyle X}is defined as thei{\displaystyle i}-thexterior powerof the cotangent bundle,Ωi=ΛiΩ1{\displaystyle \Omega ^{i}=\Lambda ^{i}\Omega ^{1}}. For a smoothvarietyX{\displaystyle X}of dimensionn{\displaystyle n}overk{\displaystyle k}, thecanonical bundleKX{\displaystyle K_{X}}means the line bundleΩn{\displaystyle \Omega ^{n}}. 
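(The short exact sequence referred to above, which defines the normal bundle, is the standard one; written out it reads

{\displaystyle 0\to TY\to TX|_{Y}\to N_{Y/X}\to 0,}

so that N_{Y/X} is the quotient of the restricted tangent bundle by the tangent bundle of Y; equivalently, since Y is smooth in the smooth scheme X, it is the dual of the conormal bundle I/I^2 for the ideal sheaf I of Y in X.)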
Thus sections of the canonical bundle are algebro-geometric analogs ofvolume formsonX{\displaystyle X}. For example, a section of the canonical bundle of affine spaceAn{\displaystyle \mathbb {A} ^{n}}overk{\displaystyle k}can be written as wheref{\displaystyle f}is a polynomial with coefficients ink{\displaystyle k}. LetR{\displaystyle R}be a commutative ring andn{\displaystyle n}a natural number. For each integerj{\displaystyle j}, there is an important example of a line bundle on projective spacePn{\displaystyle \mathbb {P} ^{n}}overR{\displaystyle R}, calledO(j){\displaystyle {\mathcal {O}}(j)}. To define this, consider the morphism ofR{\displaystyle R}-schemes given in coordinates by(x0,…,xn)↦[x0,…,xn]{\displaystyle (x_{0},\ldots ,x_{n})\mapsto [x_{0},\ldots ,x_{n}]}. (That is, thinking of projective space as the space of 1-dimensional linear subspaces of affine space, send a nonzero point in affine space to the line that it spans.) Then a section ofO(j){\displaystyle {\mathcal {O}}(j)}over an open subsetU{\displaystyle U}ofPn{\displaystyle \mathbb {P} ^{n}}is defined to be a regular functionf{\displaystyle f}onπ−1(U){\displaystyle \pi ^{-1}(U)}that is homogeneous of degreej{\displaystyle j}, meaning that as regular functions on (A1−0)×π−1(U){\displaystyle \mathbb {A} ^{1}-0)\times \pi ^{-1}(U)}. For all integersi{\displaystyle i}andj{\displaystyle j}, there is an isomorphismO(i)⊗O(j)≅O(i+j){\displaystyle {\mathcal {O}}(i)\otimes {\mathcal {O}}(j)\cong {\mathcal {O}}(i+j)}of line bundles onPn{\displaystyle \mathbb {P} ^{n}}. In particular, everyhomogeneous polynomialinx0,…,xn{\displaystyle x_{0},\ldots ,x_{n}}of degreej{\displaystyle j}overR{\displaystyle R}can be viewed as a global section ofO(j){\displaystyle {\mathcal {O}}(j)}overPn{\displaystyle \mathbb {P} ^{n}}. Note that every closed subscheme of projective space can be defined as the zero set of some collection of homogeneous polynomials, hence as the zero set of some sections of the line bundlesO(j){\displaystyle {\mathcal {O}}(j)}.[14]This contrasts with the simpler case of affine space, where a closed subscheme is simply the zero set of some collection of regular functions. The regular functions on projective spacePn{\displaystyle \mathbb {P} ^{n}}overR{\displaystyle R}are just the "constants" (the ringR{\displaystyle R}), and so it is essential to work with the line bundlesO(j){\displaystyle {\mathcal {O}}(j)}. Serregave an algebraic description of all coherent sheaves on projective space, more subtle than what happens for affine space. Namely, letR{\displaystyle R}be a Noetherian ring (for example, a field), and consider the polynomial ringS=R[x0,…,xn]{\displaystyle S=R[x_{0},\ldots ,x_{n}]}as agraded ringwith eachxi{\displaystyle x_{i}}having degree 1. Then every finitely generated gradedS{\displaystyle S}-moduleM{\displaystyle M}has anassociatedcoherent sheafM~{\displaystyle {\tilde {M}}}onPn{\displaystyle \mathbb {P} ^{n}}overR{\displaystyle R}. Every coherent sheaf onPn{\displaystyle \mathbb {P} ^{n}}arises in this way from a finitely generated gradedS{\displaystyle S}-moduleM{\displaystyle M}. (For example, the line bundleO(j){\displaystyle {\mathcal {O}}(j)}is the sheaf associated to theS{\displaystyle S}-moduleS{\displaystyle S}with its grading lowered byj{\displaystyle j}.) 
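Written out explicitly (a reconstruction of the displays referred to in this paragraph), the morphism used to define O(j) is

{\displaystyle \pi :\mathbb {A} ^{n+1}\setminus \{0\}\to \mathbb {P} ^{n},\qquad (x_{0},\ldots ,x_{n})\mapsto [x_{0},\ldots ,x_{n}],}

and the homogeneity condition on a section f over U is

{\displaystyle f(\lambda x)=\lambda ^{j}f(x)\quad {\text{for }}(\lambda ,x)\in (\mathbb {A} ^{1}\setminus \{0\})\times \pi ^{-1}(U).}

In the notation of Serre's description, O(j) is the sheaf associated to the shifted graded module S(j), where S(j)_{d} = S_{d+j}.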
But theS{\displaystyle S}-moduleM{\displaystyle M}that yields a given coherent sheaf onPn{\displaystyle \mathbb {P} ^{n}}is not unique; it is only unique up to changingM{\displaystyle M}by graded modules that are nonzero in only finitely many degrees. More precisely, the abelian category of coherent sheaves onPn{\displaystyle \mathbb {P} ^{n}}is thequotientof the category of finitely generated gradedS{\displaystyle S}-modules by theSerre subcategoryof modules that are nonzero in only finitely many degrees.[15] The tangent bundle of projective spacePn{\displaystyle \mathbb {P} ^{n}}over a fieldk{\displaystyle k}can be described in terms of the line bundleO(1){\displaystyle {\mathcal {O}}(1)}. Namely, there is a short exact sequence, theEuler sequence: It follows that the canonical bundleKPn{\displaystyle K_{\mathbb {P} ^{n}}}(the dual of thedeterminant line bundleof the tangent bundle) is isomorphic toO(−n−1){\displaystyle {\mathcal {O}}(-n-1)}. This is a fundamental calculation for algebraic geometry. For example, the fact that the canonical bundle is a negative multiple of theample line bundleO(1){\displaystyle {\mathcal {O}}(1)}means that projective space is aFano variety. Over the complex numbers, this means that projective space has aKähler metricwith positiveRicci curvature. Consider a smooth degree-d{\displaystyle d}hypersurfaceX⊆Pn{\displaystyle X\subseteq \mathbb {P} ^{n}}defined by the homogeneous polynomialf{\displaystyle f}of degreed{\displaystyle d}. Then, there is an exact sequence where the second map is the pullback of differential forms, and the first map sends Note that this sequence tells us thatO(−d){\displaystyle {\mathcal {O}}(-d)}is the conormal sheaf ofX{\displaystyle X}inPn{\displaystyle \mathbb {P} ^{n}}. Dualizing this yields the exact sequence henceO(d){\displaystyle {\mathcal {O}}(d)}is the normal bundle ofX{\displaystyle X}inPn{\displaystyle \mathbb {P} ^{n}}. If we use the fact that given an exact sequence of vector bundles with ranksr1{\displaystyle r_{1}},r2{\displaystyle r_{2}},r3{\displaystyle r_{3}}, there is an isomorphism of line bundles, then we see that there is the isomorphism showing that One useful technique for constructing rank 2 vector bundles is the Serre construction[16][17]pg 3which establishes a correspondence between rank 2 vector bundlesE{\displaystyle {\mathcal {E}}}on a smooth projective varietyX{\displaystyle X}and codimension 2 subvarietiesY{\displaystyle Y}using a certainExt1{\displaystyle {\text{Ext}}^{1}}-group calculated onX{\displaystyle X}. This is given by a cohomological condition on the line bundle∧2E{\displaystyle \wedge ^{2}{\mathcal {E}}}(see below). The correspondence in one direction is given as follows: for a sections∈Γ(X,E){\displaystyle s\in \Gamma (X,{\mathcal {E}})}we can associated the vanishing locusV(s)⊆X{\displaystyle V(s)\subseteq X}. IfV(s){\displaystyle V(s)}is a codimension 2 subvariety, then In the other direction,[18]for a codimension 2 subvarietyY⊆X{\displaystyle Y\subseteq X}and a line bundleL→X{\displaystyle {\mathcal {L}}\to X}such that there is a canonical isomorphism Hom((ωX⊗L)|Y,ωY)≅Ext1(IY⊗L,OX){\displaystyle {\text{Hom}}((\omega _{X}\otimes {\mathcal {L}})|_{Y},\omega _{Y})\cong {\text{Ext}}^{1}({\mathcal {I}}_{Y}\otimes {\mathcal {L}},{\mathcal {O}}_{X})}, which is functorial with respect to inclusion of codimension2{\displaystyle 2}subvarieties. Moreover, any isomorphism given on the left corresponds to a locally free sheaf in the middle of the extension on the right. 
That is, fors∈Hom((ωX⊗L)|Y,ωY){\displaystyle s\in {\text{Hom}}((\omega _{X}\otimes {\mathcal {L}})|_{Y},\omega _{Y})}that is an isomorphism there is a corresponding locally free sheafE{\displaystyle {\mathcal {E}}}of rank 2 that fits into a short exact sequence 0→OX→E→IY⊗L→0{\displaystyle 0\to {\mathcal {O}}_{X}\to {\mathcal {E}}\to {\mathcal {I}}_{Y}\otimes {\mathcal {L}}\to 0} This vector bundle can then be further studied using cohomological invariants to determine if it is stable or not. This forms the basis for studyingmoduli of stable vector bundlesin many specific cases, such as onprincipally polarized abelian varieties[17]andK3 surfaces.[19] A vector bundleE{\displaystyle E}on a smooth varietyX{\displaystyle X}over a field hasChern classesin theChow ringofX{\displaystyle X},ci(E){\displaystyle c_{i}(E)}inCHi(X){\displaystyle CH^{i}(X)}fori≥0{\displaystyle i\geq 0}.[20]These satisfy the same formal properties as Chern classes in topology. For example, for any short exact sequence of vector bundles onX{\displaystyle X}, the Chern classes ofB{\displaystyle B}are given by It follows that the Chern classes of a vector bundleE{\displaystyle E}depend only on the class ofE{\displaystyle E}in theGrothendieck groupK0(X){\displaystyle K_{0}(X)}. By definition, for a schemeX{\displaystyle X},K0(X){\displaystyle K_{0}(X)}is the quotient of the free abelian group on the set of isomorphism classes of vector bundles onX{\displaystyle X}by the relation that[B]=[A]+[C]{\displaystyle [B]=[A]+[C]}for any short exact sequence as above. AlthoughK0(X){\displaystyle K_{0}(X)}is hard to compute in general,algebraic K-theoryprovides many tools for studying it, including a sequence of related groupsKi(X){\displaystyle K_{i}(X)}for integersi>0{\displaystyle i>0}. A variant is the groupG0(X){\displaystyle G_{0}(X)}(orK0′(X){\displaystyle K_{0}'(X)}), theGrothendieck groupof coherent sheaves onX{\displaystyle X}. (In topological terms,G-theory has the formal properties of aBorel–Moore homologytheory for schemes, whileK-theory is the correspondingcohomology theory.) The natural homomorphismK0(X)→G0(X){\displaystyle K_{0}(X)\to G_{0}(X)}is an isomorphism ifX{\displaystyle X}is aregularseparated Noetherian scheme, using that every coherent sheaf has a finiteresolutionby vector bundles in that case.[21]For example, that gives a definition of the Chern classes of a coherent sheaf on a smooth variety over a field. More generally, a Noetherian schemeX{\displaystyle X}is said to have theresolution propertyif every coherent sheaf onX{\displaystyle X}has a surjection from some vector bundle onX{\displaystyle X}. For example, every quasi-projective scheme over a Noetherian ring has the resolution property. Since the resolution property states that a coherent sheafE{\displaystyle {\mathcal {E}}}on a Noetherian scheme is quasi-isomorphic in the derived category to the complex of vector bundles :Ek→⋯→E1→E0{\displaystyle {\mathcal {E}}_{k}\to \cdots \to {\mathcal {E}}_{1}\to {\mathcal {E}}_{0}}we can compute the total Chern class ofE{\displaystyle {\mathcal {E}}}with For example, this formula is useful for finding the Chern classes of the sheaf representing a subscheme ofX{\displaystyle X}. If we take the projective schemeZ{\displaystyle Z}associated to the ideal(xy,xz)⊆C[x,y,z,w]{\displaystyle (xy,xz)\subseteq \mathbb {C} [x,y,z,w]}, then since there is the resolution overCP3{\displaystyle \mathbb {CP} ^{3}}. 
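Several of the displayed formulas referred to above are standard and can be written out as follows. For a short exact sequence of vector bundles 0 → A → B → C → 0, the Whitney sum formula gives

{\displaystyle c(B)=c(A)\,c(C),\qquad {\text{i.e.}}\quad c_{k}(B)=\sum _{i+j=k}c_{i}(A)\,c_{j}(C),}

and if a coherent sheaf admits a finite resolution by vector bundles E_k → ⋯ → E_1 → E_0, then its total Chern class is

{\displaystyle c({\mathcal {E}})=\prod _{i}c({\mathcal {E}}_{i})^{(-1)^{i}}.}

For the subscheme Z = V(xy, xz) ⊂ P^3 mentioned above, the resolution in question is presumably the Koszul-type resolution

{\displaystyle 0\to {\mathcal {O}}(-3)\to {\mathcal {O}}(-2)^{\oplus 2}\to {\mathcal {O}}\to {\mathcal {O}}_{Z}\to 0,}

coming from the single syzygy z·(xy) − y·(xz) = 0 between the two generators, to which the product formula above can then be applied term by term.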
When vector bundles and locally free sheaves of finite constant rank are used interchangeably, care must be given to distinguish between bundle homomorphisms and sheaf homomorphisms. Specifically, given vector bundlesp:E→X,q:F→X{\displaystyle p:E\to X,\,q:F\to X}, by definition, a bundle homomorphismφ:E→F{\displaystyle \varphi :E\to F}is ascheme morphismoverX{\displaystyle X}(i.e.,p=q∘φ{\displaystyle p=q\circ \varphi }) such that, for each geometric pointx{\displaystyle x}inX{\displaystyle X},φx:p−1(x)→q−1(x){\displaystyle \varphi _{x}:p^{-1}(x)\to q^{-1}(x)}is a linear map of rank independent ofx{\displaystyle x}. Thus, it induces the sheaf homomorphismφ~:E→F{\displaystyle {\widetilde {\varphi }}:{\mathcal {E}}\to {\mathcal {F}}}of constant rank between the corresponding locally freeOX{\displaystyle {\mathcal {O}}_{X}}-modules (sheaves of dual sections). But there may be anOX{\displaystyle {\mathcal {O}}_{X}}-module homomorphism that does not arise this way; namely, those not having constant rank. In particular, a subbundleE⊆F{\displaystyle E\subseteq F}is a subsheaf (i.e.,E{\displaystyle {\mathcal {E}}}is a subsheaf ofF{\displaystyle {\mathcal {F}}}). But the converse can fail; for example, for an effective Cartier divisorD{\displaystyle D}onX{\displaystyle X},OX(−D)⊆OX{\displaystyle {\mathcal {O}}_{X}(-D)\subseteq {\mathcal {O}}_{X}}is a subsheaf but typically not a subbundle (since any line bundle has only two subbundles). The quasi-coherent sheaves on any fixed scheme form an abelian category.Gabbershowed that, in fact, the quasi-coherent sheaves on any scheme form a particularly well-behaved abelian category, aGrothendieck category.[22]A quasi-compact quasi-separated schemeX{\displaystyle X}(such as an algebraic variety over a field) is determined up to isomorphism by the abelian category of quasi-coherent sheaves onX{\displaystyle X}, by Rosenberg, generalizing a result ofGabriel.[23] The fundamental technical tool in algebraic geometry is the cohomology theory of coherent sheaves. Although it was introduced only in the 1950s, many earlier techniques of algebraic geometry are clarified by the language ofsheaf cohomologyapplied to coherent sheaves. Broadly speaking, coherent sheaf cohomology can be viewed as a tool for producing functions with specified properties; sections of line bundles or of more general sheaves can be viewed as generalized functions. In complex analytic geometry, coherent sheaf cohomology also plays a foundational role. Among the core results of coherent sheaf cohomology are results on finite-dimensionality of cohomology, results on the vanishing of cohomology in various cases, duality theorems such asSerre duality, relations between topology and algebraic geometry such asHodge theory, and formulas forEuler characteristicsof coherent sheaves such as theRiemann–Roch theorem.
https://en.wikipedia.org/wiki/Coherent_sheaf
Inmathematics, agerbe(/dʒɜːrb/;French:[ʒɛʁb]) is a construct inhomological algebraandtopology. Gerbes were introduced byJean Giraud(Giraud 1971) following ideas ofAlexandre Grothendieckas a tool for non-commutativecohomologyin degree 2. They can be seen as an analogue offibre bundleswhere the fibre is theclassifying stackof a group. Gerbes provide a convenient, if highly abstract, language for dealing with many types ofdeformationquestions especially in modernalgebraic geometry. In addition, special cases of gerbes have been used more recently indifferential topologyanddifferential geometryto give alternative descriptions to certaincohomology classesand additional structures attached to them. "Gerbe" is a French (and archaic English) word that literally meanswheatsheaf. A gerbe on atopological spaceS{\displaystyle S}[1]: 318is astackX{\displaystyle {\mathcal {X}}}ofgroupoidsoverS{\displaystyle S}that islocally non-empty(each pointp∈S{\displaystyle p\in S}has an open neighbourhoodUp{\displaystyle U_{p}}over which thesection categoryX(Up){\displaystyle {\mathcal {X}}(U_{p})}of the gerbe is not empty) andtransitive(for any two objectsa{\displaystyle a}andb{\displaystyle b}ofX(U){\displaystyle {\mathcal {X}}(U)}for any open setU{\displaystyle U}, there is an open coveringU={Ui}i∈I{\displaystyle {\mathcal {U}}=\{U_{i}\}_{i\in I}}ofU{\displaystyle U}such that the restrictions ofa{\displaystyle a}andb{\displaystyle b}to eachUi{\displaystyle U_{i}}are connected by at least one morphism). A canonical example is the gerbeBH{\displaystyle BH}ofprincipal bundleswith a fixedstructure groupH{\displaystyle H}: the section category over an open setU{\displaystyle U}is the category of principalH{\displaystyle H}-bundles onU{\displaystyle U}with isomorphism as morphisms (thus the category is a groupoid). As principal bundles glue together (satisfy the descent condition), these groupoids form a stack. The trivial bundleX×H→X{\displaystyle X\times H\to X}shows that the local non-emptiness condition is satisfied, and finally as principal bundles are locally trivial, they become isomorphic when restricted to sufficiently small open sets; thus the transitivity condition is satisfied as well. The most general definition of gerbes are defined over asite. Given a siteC{\displaystyle {\mathcal {C}}}aC{\displaystyle {\mathcal {C}}}-gerbeG{\displaystyle G}[2][3]: 129is a category fibered in groupoidsG→C{\displaystyle G\to {\mathcal {C}}}such that Note that for a siteC{\displaystyle {\mathcal {C}}}with a final objecte{\displaystyle e}, a category fibered in groupoidsG→C{\displaystyle G\to {\mathcal {C}}}is aC{\displaystyle {\mathcal {C}}}-gerbe admits a local section, meaning satisfies the first axiom, ifOb(Ge)≠∅{\displaystyle {\text{Ob}}(G_{e})\neq \varnothing }. One of the main motivations for considering gerbes on a site is to consider the following naive question: if the Cech cohomology groupH1(U,G){\displaystyle H^{1}({\mathcal {U}},G)}for a suitable coveringU={Ui}i∈I{\displaystyle {\mathcal {U}}=\{U_{i}\}_{i\in I}}of a spaceX{\displaystyle X}gives the isomorphism classes of principalG{\displaystyle G}-bundles overX{\displaystyle X}, what does the iterated cohomology functorH1(−,H1(−,G)){\displaystyle H^{1}(-,H^{1}(-,G))}represent? Meaning, we are gluing together the groupsH1(Ui,G){\displaystyle H^{1}(U_{i},G)}via some one cocycle. Gerbes are a technical response for this question: they give geometric representations of elements in the higher cohomology groupH2(U,G){\displaystyle H^{2}({\mathcal {U}},G)}. 
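To make the gluing picture concrete in the simplest abelian case (a standard description, stated here for illustration): with respect to a sufficiently fine open cover {U_i} of X, a gerbe banded by U(1) can be presented by functions λ_{ijk} : U_{ijk} → U(1) on triple overlaps satisfying the Čech 2-cocycle condition

{\displaystyle \lambda _{jkl}\,\lambda _{ikl}^{-1}\,\lambda _{ijl}\,\lambda _{ijk}^{-1}=1\quad {\text{on }}U_{ijkl},}

just as a line bundle is presented by transition functions g_{ij} on double overlaps satisfying g_{ij}g_{jk} = g_{ik}. Changing the presentation by a Čech coboundary yields an equivalent gerbe, so equivalence classes of U(1)-banded gerbes are parametrized by the degree 2 cohomology with coefficients in U(1).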
It is expected this intuition should hold forhigher gerbes. One of the main theorems concerning gerbes is their cohomological classification whenever they have automorphism groups given by a fixed sheaf of abelian groupsL_{\displaystyle {\underline {L}}},[5][2]called a band. For a gerbeX{\displaystyle {\mathcal {X}}}on a siteC{\displaystyle {\mathcal {C}}}, an objectU∈Ob(C){\displaystyle U\in {\text{Ob}}({\mathcal {C}})}, and an objectx∈Ob(X(U)){\displaystyle x\in {\text{Ob}}({\mathcal {X}}(U))}, the automorphism group of a gerbe is defined as the automorphism groupL=Aut_X(U)(x){\displaystyle L={\underline {\text{Aut}}}_{{\mathcal {X}}(U)}(x)}. Notice this is well defined whenever the automorphism group is always the same. Given a coveringU={Ui→X}i∈I{\displaystyle {\mathcal {U}}=\{U_{i}\to X\}_{i\in I}}, there is an associated class c(L_)∈H3(X,L_){\displaystyle c({\underline {L}})\in H^{3}(X,{\underline {L}})} representing theisomorphism classof the gerbeX{\displaystyle {\mathcal {X}}}banded byL{\displaystyle L}. For example, in topology, many examples of gerbes can be constructed by considering gerbes banded by the groupU(1){\displaystyle U(1)}. As the classifying spaceB(U(1))=K(Z,2){\displaystyle B(U(1))=K(\mathbb {Z} ,2)}is the secondEilenberg–Maclanespace for the integers, a bundle gerbe banded byU(1){\displaystyle U(1)}on a topological spaceX{\displaystyle X}is constructed from a homotopy class of maps in [X,B2(U(1))]=[X,K(Z,3)]{\displaystyle [X,B^{2}(U(1))]=[X,K(\mathbb {Z} ,3)]}, which is exactly the thirdsingular homologygroupH3(X,Z){\displaystyle H^{3}(X,\mathbb {Z} )}. It has been found[6]that all gerbes representing torsion cohomology classes inH3(X,Z){\displaystyle H^{3}(X,\mathbb {Z} )}are represented by a bundle of finite dimensional algebrasEnd(V){\displaystyle {\text{End}}(V)}for a fixed complex vector spaceV{\displaystyle V}. In addition, the non-torsion classes are represented as infinite-dimensional principal bundlesPU(H){\displaystyle PU({\mathcal {H}})}of the projective group of unitary operators on a fixed infinite dimensionalseparableHilbert spaceH{\displaystyle {\mathcal {H}}}. Note this is well defined because all separable Hilbert spaces are isomorphic to the space of square-summable sequencesℓ2{\displaystyle \ell ^{2}}. The homotopy-theoretic interpretation of gerbes comes from looking at thehomotopy fiber square X→∗↓↓S→fB2U(1){\displaystyle {\begin{matrix}{\mathcal {X}}&\to &*\\\downarrow &&\downarrow \\S&\xrightarrow {f} &B^{2}U(1)\end{matrix}}} analogous to how a line bundle comes from the homotopy fiber square L→∗↓↓S→fBU(1){\displaystyle {\begin{matrix}L&\to &*\\\downarrow &&\downarrow \\S&\xrightarrow {f} &BU(1)\end{matrix}}} whereBU(1)≃K(Z,2){\displaystyle BU(1)\simeq K(\mathbb {Z} ,2)}, givingH2(S,Z){\displaystyle H^{2}(S,\mathbb {Z} )}as the group of isomorphism classes of line bundles onS{\displaystyle S}. There are natural examples of Gerbes that arise from studying the algebra of compactly supported complex valued functions on a paracompact spaceX{\displaystyle X}[7]pg 3. 
Given a coverU={Ui}{\displaystyle {\mathcal {U}}=\{U_{i}\}}ofX{\displaystyle X}there is the Cech groupoid defined as G={∐i,jUij⇉∐Ui}{\displaystyle {\mathcal {G}}=\left\{\coprod _{i,j}U_{ij}\rightrightarrows \coprod U_{i}\right\}} with source and target maps given by the inclusions s:Uij↪Ujt:Uij↪Ui{\displaystyle {\begin{aligned}s:U_{ij}\hookrightarrow U_{j}\\t:U_{ij}\hookrightarrow U_{i}\end{aligned}}} and the space of composable arrows is just ∐i,j,kUijk{\displaystyle \coprod _{i,j,k}U_{ijk}} Then a degree 2 cohomology classσ∈H2(X;U(1)){\displaystyle \sigma \in H^{2}(X;U(1))}is just a map σ:∐Uijk→U(1){\displaystyle \sigma :\coprod U_{ijk}\to U(1)} We can then form a non-commutativeC*-algebraCc(G(σ)){\displaystyle C_{c}({\mathcal {G}}(\sigma ))}, which is associated to the set of compact supported complex valued functions of the space G1=∐i,jUij{\displaystyle {\mathcal {G}}_{1}=\coprod _{i,j}U_{ij}} It has a non-commutative product given by a∗b(x,i,k):=∑ja(x,i,j)b(x,j,k)σ(x,i,j,k){\displaystyle a*b(x,i,k):=\sum _{j}a(x,i,j)b(x,j,k)\sigma (x,i,j,k)} where the cohomology classσ{\displaystyle \sigma }twists the multiplication of the standardC∗{\displaystyle C^{*}}-algebra product. LetM{\displaystyle M}be avarietyover analgebraically closed fieldk{\displaystyle k},G{\displaystyle G}analgebraic group, for exampleGm{\displaystyle \mathbb {G} _{m}}. Recall that aG-torsoroverM{\displaystyle M}is analgebraic spaceP{\displaystyle P}with an action ofG{\displaystyle G}and a mapπ:P→M{\displaystyle \pi :P\to M}, such that locally onM{\displaystyle M}(inétale topologyorfppf topology)π{\displaystyle \pi }is a direct productπ|U:G×U→U{\displaystyle \pi |_{U}:G\times U\to U}. AG-gerbe overMmay be defined in a similar way. It is anArtin stackM{\displaystyle {\mathcal {M}}}with a mapπ:M→M{\displaystyle \pi \colon {\mathcal {M}}\to M}, such that locally onM(in étale or fppf topology)π{\displaystyle \pi }is a direct productπ|U:BG×U→U{\displaystyle \pi |_{U}\colon \mathrm {B} G\times U\to U}.[8]HereBG{\displaystyle BG}denotes theclassifying stackofG{\displaystyle G}, i.e. a quotient[∗/G]{\displaystyle [*/G]}of a point by a trivialG{\displaystyle G}-action. There is no need to impose the compatibility with the group structure in that case since it is covered by the definition of a stack. The underlyingtopological spacesofM{\displaystyle {\mathcal {M}}}andM{\displaystyle M}are the same, but inM{\displaystyle {\mathcal {M}}}each point is equipped with a stabilizer group isomorphic toG{\displaystyle G}. Every two-term complex of coherent sheaves E∙=[E−1→dE0]{\displaystyle {\mathcal {E}}^{\bullet }=[{\mathcal {E}}^{-1}\xrightarrow {d} {\mathcal {E}}^{0}]} on a schemeX∈Sch{\displaystyle X\in {\text{Sch}}}has a canonical sheaf of groupoids associated to it, where on an open subsetU⊆X{\displaystyle U\subseteq X}there is a two-term complex ofX(U){\displaystyle X(U)}-modules E−1(U)→dE0(U){\displaystyle {\mathcal {E}}^{-1}(U)\xrightarrow {d} {\mathcal {E}}^{0}(U)} giving a groupoid. It has objects given by elementsx∈E0(U){\displaystyle x\in {\mathcal {E}}^{0}(U)}and a morphismx→x′{\displaystyle x\to x'}is given by an elementy∈E−1(U){\displaystyle y\in {\mathcal {E}}^{-1}(U)}such that dy+x=x′{\displaystyle dy+x=x'} In order for this stack to be a gerbe, the cohomology sheafH0(E){\displaystyle {\mathcal {H}}^{0}({\mathcal {E}})}must always have a section. This hypothesis implies the category constructed above always has objects. 
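(Two remarks on the twisted convolution algebra above, added for clarity. First, associativity of the product a*b holds precisely when σ is a Čech 2-cocycle: comparing (a*b)*c with a*(b*c) term by term yields the condition

{\displaystyle \sigma (x,i,j,k)\,\sigma (x,i,k,l)=\sigma (x,j,k,l)\,\sigma (x,i,j,l)\quad {\text{on }}U_{ijkl},}

which says exactly that the Čech differential of σ vanishes, consistent with σ defining a class in H^2(X;U(1)). Second, on a paracompact space the exponential sequence of sheaves of continuous functions

{\displaystyle 0\to \mathbb {Z} \to \mathbb {R} \to U(1)\to 0}

has acyclic middle term, so H^2(X;U(1)) ≅ H^3(X;Z); this matches the earlier homotopy-theoretic description of U(1)-banded gerbes by maps to K(Z,3).)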
Note this can be applied to the situation ofcomodules over Hopf-algebroidsto construct algebraic models of gerbes over affine or projective stacks (projectivity if a gradedHopf-algebroidis used). In addition, two-term spectra from the stabilization of thederived categoryof comodules of Hopf-algebroids(A,Γ){\displaystyle (A,\Gamma )}withΓ{\displaystyle \Gamma }flat overA{\displaystyle A}give additional models of gerbes that arenon-strict. Consider a smoothprojectivecurveC{\displaystyle C}overk{\displaystyle k}of genusg>1{\displaystyle g>1}. LetMr,ds{\displaystyle {\mathcal {M}}_{r,d}^{s}}be themoduli stackofstable vector bundlesonC{\displaystyle C}of rankr{\displaystyle r}and degreed{\displaystyle d}. It has acoarse moduli spaceMr,ds{\displaystyle M_{r,d}^{s}}, which is aquasiprojective variety. These two moduli problems parametrize the same objects, but the stacky version remembersautomorphismsof vector bundles. For any stable vector bundleE{\displaystyle E}the automorphism groupAut(E){\displaystyle Aut(E)}consists only of scalar multiplications, so each point in a moduli stack has a stabilizer isomorphic toGm{\displaystyle \mathbb {G} _{m}}. It turns out that the mapMr,ds→Mr,ds{\displaystyle {\mathcal {M}}_{r,d}^{s}\to M_{r,d}^{s}}is indeed aGm{\displaystyle \mathbb {G} _{m}}-gerbe in the sense above.[9]It is a trivial gerbe if and only ifr{\displaystyle r}andd{\displaystyle d}arecoprime. Another class of gerbes can be found using the construction of root stacks. Informally, ther{\displaystyle r}-th root stack of a line bundleL→S{\displaystyle L\to S}over aschemeis a space representing ther{\displaystyle r}-th root ofL{\displaystyle L}and is denoted L/Sr.{\displaystyle {\sqrt[{r}]{L/S}}.\,}[10]pg 52 Ther{\displaystyle r}-th root stack ofL{\displaystyle L}has the property ⨂rL/Sr≅L{\displaystyle \bigotimes ^{r}{\sqrt[{r}]{L/S}}\cong L} as gerbes. It is constructed as the stack L/Sr:(Sch⁡/S)op→Grpd{\displaystyle {\sqrt[{r}]{L/S}}:(\operatorname {Sch} /S)^{op}\to \operatorname {Grpd} } sending anS{\displaystyle S}-schemeT→S{\displaystyle T\to S}to the category whose objects are line bundles of the form {(M→T,αM):αM:M⊗r→∼L×ST}{\displaystyle \left\{(M\to T,\alpha _{M}):\alpha _{M}:M^{\otimes r}\xrightarrow {\sim } L\times _{S}T\right\}} and morphisms are commutative diagrams compatible with the isomorphismsαM{\displaystyle \alpha _{M}}. This gerbe is banded by thealgebraic groupof roots of unityμr{\displaystyle \mu _{r}}, where on a coverT→S{\displaystyle T\to S}it acts on a point(M→T,αM){\displaystyle (M\to T,\alpha _{M})}by cyclically permuting the factors ofM{\displaystyle M}inM⊗r{\displaystyle M^{\otimes r}}. Geometrically, these stacks are formed as the fiber product of stacks X×BGmBGm→BGm↓↓X→BGm{\displaystyle {\begin{matrix}X\times _{B\mathbb {G} _{m}}B\mathbb {G} _{m}&\to &B\mathbb {G} _{m}\\\downarrow &&\downarrow \\X&\to &B\mathbb {G} _{m}\end{matrix}}} where the vertical map ofBGm→BGm{\displaystyle B\mathbb {G} _{m}\to B\mathbb {G} _{m}}comes from theKummer sequence 1→μr→Gm→(⋅)rGm→1{\displaystyle 1\xrightarrow {} \mu _{r}\xrightarrow {} \mathbb {G} _{m}\xrightarrow {(\cdot )^{r}} \mathbb {G} _{m}\xrightarrow {} 1} This is becauseBGm{\displaystyle B\mathbb {G} _{m}}is the moduli space of line bundles, so the line bundleL→S{\displaystyle L\to S}corresponds to an object of the categoryBGm(S){\displaystyle B\mathbb {G} _{m}(S)}(considered as a point of the moduli space). There is another related construction of root stacks with sections. 
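(The role of the Kummer sequence here can be made explicit, as a standard observation: taking cohomology gives a boundary map

{\displaystyle \operatorname {Pic} (S)=H^{1}(S,\mathbb {G} _{m})\ {\xrightarrow {\ \cdot r\ }}\ H^{1}(S,\mathbb {G} _{m})\ {\xrightarrow {\ \partial \ }}\ H^{2}(S,\mu _{r}),}

and the root stack of L, without a chosen section, is a μ_r-gerbe over S whose class is ∂[L]; in particular it is trivial exactly when L admits an r-th root line bundle, i.e. when [L] is divisible by r in Pic(S).)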
Given the data above, lets:S→L{\displaystyle s:S\to L}be a section. Then ther{\displaystyle r}-th root stack of the pair(L→S,s){\displaystyle (L\to S,s)}is defined as the lax 2-functor[10][11] (L,s)/Sr:(Sch⁡/S)op→Grpd{\displaystyle {\sqrt[{r}]{(L,s)/S}}:(\operatorname {Sch} /S)^{op}\to \operatorname {Grpd} } sending anS{\displaystyle S}-schemeT→S{\displaystyle T\to S}to the category whose objects are line bundles of the form {(M→T,αM,t):αM:M⊗r→∼L×STt∈Γ(T,M)αM(t⊗r)=s}{\displaystyle \left\{(M\to T,\alpha _{M},t):{\begin{aligned}&\alpha _{M}:M^{\otimes r}\xrightarrow {\sim } L\times _{S}T\\&t\in \Gamma (T,M)\\&\alpha _{M}(t^{\otimes r})=s\end{aligned}}\right\}} and morphisms are given similarly. These stacks can be constructed very explicitly, and are well understood for affine schemes. In fact, these form the affine models for root stacks with sections.[11]: 4Locally, we may assumeS=Spec(A){\displaystyle S={\text{Spec}}(A)}and the line bundleL{\displaystyle L}is trivial, hence any sections{\displaystyle s}is equivalent to taking an elements∈A{\displaystyle s\in A}. Then, the stack is given by the stack quotient (L,s)/Sr=[Spec(B)/μr]{\displaystyle {\sqrt[{r}]{(L,s)/S}}=[{\text{Spec}}(B)/\mu _{r}]}[11]: 9 with B=A[x]xr−s{\displaystyle B={\frac {A[x]}{x^{r}-s}}} Ifs=0{\displaystyle s=0}then this gives an infinitesimal extension of[Spec(A)/μr]{\displaystyle [{\text{Spec}}(A)/\mu _{r}]}. These and more general kinds of gerbes arise in several contexts as both geometric spaces and as formal bookkeeping tools: Gerbes first appeared in the context ofalgebraic geometry. They were subsequently developed in a more traditional geometric framework by Brylinski (Brylinski 1993). One can think of gerbes as being a natural step in a hierarchy of mathematical objects providing geometric realizations of integralcohomologyclasses. A more specialised notion of gerbe was introduced byMurrayand calledbundle gerbes. Essentially they are asmoothversion of abelian gerbes belonging more to the hierarchy starting withprincipal bundlesthan sheaves. Bundle gerbes have been used ingauge theoryand alsostring theory. Current work by others is developing a theory ofnon-abelian bundle gerbes.
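As a concrete instance of the affine local model for root stacks with sections described earlier (a standard example, included for illustration): take S = Spec k[t] = A^1, L = O_S trivial, and s = t. Then

{\displaystyle B={\frac {k[t][x]}{(x^{r}-t)}}\cong k[x],\qquad {\sqrt[{r}]{(L,s)/S}}\cong [\mathbb {A} ^{1}/\mu _{r}],}

where μ_r acts by scaling the coordinate x; the coarse space is A^1 again (via t = x^r), and the result is the affine line with a single stacky point of order r at the origin, the basic local picture of a stacky curve.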
https://en.wikipedia.org/wiki/Gerbe
Inmathematicsastackor2-sheafis, roughly speaking, asheafthat takes values incategoriesrather than sets. Stacks are used to formalise some of the main constructions ofdescent theory, and to construct fine moduli stacks whenfine moduli spacesdo not exist. Descent theory is concerned with generalisations of situations whereisomorphic, compatible geometrical objects (such asvector bundlesontopological spaces) can be "glued together" within a restriction of the topological basis. In a more general set-up the restrictions are replaced withpullbacks;fibred categoriesthen make a good framework to discuss the possibility of such gluing. The intuitive meaning of a stack is that it is a fibred category such that "all possible gluings work". The specification of gluings requires a definition of coverings with regard to which the gluings can be considered. It turns out that the general language for describing these coverings is that of aGrothendieck topology. Thus a stack is formally given as a fibred category over anotherbasecategory, where the base has a Grothendieck topology and where the fibred category satisfies a few axioms that ensure existence and uniqueness of certain gluings with respect to the Grothendieck topology. Stacks are the underlying structure of algebraic stacks (also called Artin stacks) and Deligne–Mumford stacks, which generalizeschemesandalgebraic spacesand which are particularly useful in studyingmoduli spaces. There are inclusions: schemes ⊆ algebraic spaces ⊆ Deligne–Mumford stacks ⊆ algebraic stacks (Artin stacks) ⊆ stacks. Edidin (2003)andFantechi (2001)give a brief introductory accounts of stacks,Gómez (2001),Olsson (2007)andVistoli (2005)give more detailed introductions, andLaumon & Moret-Bailly (2000)describes the more advanced theory. La conclusion pratique à laquelle je suis arrivé dès maintenant, c'est que chaque fois que en vertu de mes critères, une variété de modules (ou plutôt, un schéma de modules) pour la classification des variations (globales, ou infinitésimales) de certaines structures (variétés complètes non singulières, fibrés vectoriels, etc.) ne peut exister, malgré de bonnes hypothèses de platitude, propreté, et non singularité éventuellement, la raison en est seulement l'existence d'automorphismes de la structure qui empêche la technique de descente de marcher. The concept of stacks has its origin in the definition of effective descent data inGrothendieck (1959). In a 1959 letter to Serre, Grothendieck observed that a fundamental obstruction to constructing good moduli spaces is the existence ofautomorphisms. A major motivation for stacks is that if a modulispacefor some problem does not exist because of the existence of automorphisms, it may still be possible to construct a modulistack. Mumford (1965)studied thePicard groupof themoduli stack of elliptic curves, before stacks had been defined. Stacks were first defined by Giraud (1966,1971), and the term "stack" was introduced byDeligne & Mumford (1969)for the original French term "champ" meaning "field". In this paper they also introducedDeligne–Mumford stacks, which they called algebraic stacks, though the term "algebraic stack" now usually refers to the more generalArtin stacksintroduced byArtin(1974). When defining quotients of schemes by group actions, it is often impossible for the quotient to be a scheme and still satisfy desirable properties for a quotient. 
For example, if a few points have non-trivial stabilisers, then thecategorical quotientwill not exist among schemes, but it will exist as a stack. In the same way,moduli spacesof curves, vector bundles, or other geometric objects are often best defined as stacks instead of schemes. Constructions of moduli spaces often proceed by first constructing a larger space parametrizing the objects in question, and thenquotienting by group actionto account for objects with automorphisms which have been overcounted. A categoryc{\displaystyle c}with afunctorto a categoryC{\displaystyle C}is called afibered categoryoverC{\displaystyle C}if for any morphismF:X→Y{\displaystyle F:X\to Y}inC{\displaystyle C}and any objecty{\displaystyle y}ofc{\displaystyle c}with imageY{\displaystyle Y}(under the functor), there is a pullbackf:x→y{\displaystyle f:x\to y}ofy{\displaystyle y}byF{\displaystyle F}. This means a morphism with imageF{\displaystyle F}such that any morphismg:z→y{\displaystyle g:z\to y}with imageG=F∘H{\displaystyle G=F\circ H}can be factored asg=f∘h{\displaystyle g=f\circ h}by a unique morphismh:z→x{\displaystyle h:z\to x}inc{\displaystyle c}such that the functor mapsh{\displaystyle h}toH{\displaystyle H}. The elementx=F∗y{\displaystyle x=F^{*}y}is called thepullbackofy{\displaystyle y}alongF{\displaystyle F}and is unique up to canonical isomorphism. The categorycis called aprestackover a categoryCwith aGrothendieck topologyif it is fibered overCand for any objectUofCand objectsx,yofcwith imageU, the functor from the over category C/U to sets takingF:V→Uto Hom(F*x,F*y) is a sheaf. This terminology is not consistent with the terminology for sheaves: prestacks are the analogues of separated presheaves rather than presheaves. Some authors require this as a property of stacks, rather than of prestacks. The categorycis called astackover the categoryCwith a Grothendieck topology if it is a prestack overCand every descent datum is effective. Adescent datumconsists roughly of a covering of an objectVofCby a familyVi, elementsxiin the fiber overVi, and morphismsfjibetween the restrictions ofxiandxjtoVij=Vi×VVjsatisfying the compatibility conditionfki=fkjfji. The descent datum is calledeffectiveif the elementsxiare essentially the pullbacks of an elementxwith imageV. A stack is called astack in groupoidsor a(2,1)-sheafif it is also fibered in groupoids, meaning that its fibers (the inverse images of objects ofC) are groupoids. Some authors use the word "stack" to refer to the more restrictive notion of a stack in groupoids. Analgebraic stackorArtin stackis a stack in groupoidsXover the fppf site such that the diagonal map ofXis representable and there exists a smooth surjection from (the stack associated to) a scheme to X. A morphismY→{\displaystyle \rightarrow }Xof stacks isrepresentableif, for every morphismS→{\displaystyle \rightarrow }Xfrom (the stack associated to) a scheme to X, thefiber productY×XSis isomorphic to (the stack associated to) analgebraic space. Thefiber productof stacks is defined using the usualuniversal property, and changing the requirement that diagrams commute to the requirement that they2-commute. See alsomorphism of algebraic stacksfor further information. 
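Spelled out in symbols (restating the prose definition above, with notation chosen here for illustration): a descent datum for a covering {V_i → V} consists of objects x_i over V_i together with isomorphisms

{\displaystyle f_{ji}:x_{i}|_{V_{ij}}\ {\xrightarrow {\ \sim \ }}\ x_{j}|_{V_{ij}},\qquad f_{ki}=f_{kj}\circ f_{ji}\ {\text{ on }}V_{ijk},}

and it is effective when there is an object x over V with isomorphisms x|_{V_i} ≅ x_i compatible with the f_{ji}. For instance, in the fibered category of vector bundles on topological spaces, effectivity is the familiar fact that a bundle can be glued from local pieces along transition isomorphisms satisfying the cocycle condition.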
The motivation behind the representability of the diagonal is the following: the diagonal morphismΔ:X→X×X{\displaystyle \Delta :{\mathfrak {X}}\to {\mathfrak {X}}\times {\mathfrak {X}}}is representable if and only if for any pair of morphisms of algebraic spacesX,Y→X{\displaystyle X,Y\to {\mathfrak {X}}}, their fiber productX×XY{\displaystyle X\times _{\mathfrak {X}}Y}is representable. ADeligne–Mumford stackis an algebraic stackXsuch that there is an étale surjection from a scheme toX. Roughly speaking, Deligne–Mumford stacks can be thought of as algebraic stacks whose objects have no infinitesimal automorphisms. Since the inception of algebraic stacks it was expected that they are locally quotient stacks of the form[Spec(A)/G]{\displaystyle [{\text{Spec}}(A)/G]}whereG{\displaystyle G}is alinearly reductive algebraic group. This was recently proved to be the case:[1]given a quasi-separated algebraic stackX{\displaystyle {\mathfrak {X}}}locally of finite type over an algebraically closed fieldk{\displaystyle k}whose stabilizers are affine, andx∈X(k){\displaystyle x\in {\mathfrak {X}}(k)}a smooth and closed point with linearly reductive stabilizer groupGx{\displaystyle G_{x}}, there exists anetale coverof theGIT quotient(U,u)→(Nx//Gx,0){\displaystyle (U,u)\to (N_{x}//G_{x},0)}, whereNx=(Jx/Jx2)∨{\displaystyle N_{x}=(J_{x}/J_{x}^{2})^{\vee }}, such that the diagram ([W/Gx],w)→([Nx/Gx],0)↓↓(U,u)→(Nx//Gx,0){\displaystyle {\begin{matrix}([W/G_{x}],w)&\to &([N_{x}/G_{x}],0)\\\downarrow &&\downarrow \\(U,u)&\to &(N_{x}//G_{x},0)\end{matrix}}} is cartesian, and there exists an etale morphism f:([W/Gx],w)→(X,x){\displaystyle f:([W/G_{x}],w)\to ({\mathfrak {X}},x)} inducing an isomorphism of the stabilizer groups atw{\displaystyle w}andx{\displaystyle x}. h:(Sch/S)op→Sets{\displaystyle h:(Sch/S)^{op}\to Sets} IfX{\displaystyle X}is a scheme(Sch/S){\displaystyle (Sch/S)}andG{\displaystyle G}is a smoothaffine groupscheme acting onX{\displaystyle X}, then there is aquotient algebraic stack[X/G]{\displaystyle [X/G]},[2]taking a schemeY→S{\displaystyle Y\to S}to the groupoid ofG{\displaystyle G}-torsors over theS{\displaystyle S}-schemeY{\displaystyle Y}withG{\displaystyle G}-equivariant maps toX{\displaystyle X}. Explicitly, given a spaceX{\displaystyle X}with aG{\displaystyle G}-action, form the stack[X/G]{\displaystyle [X/G]}, which (intuitively speaking)sendsa spaceY{\displaystyle Y}to the groupoid of pullback diagrams [X/G](Y)={Z→ΦX↓↓Y→ϕ[X/G]}{\displaystyle [X/G](Y)={\begin{Bmatrix}Z&{\xrightarrow {\Phi }}&X\\\downarrow &&\downarrow \\Y&{\xrightarrow {\phi }}&[X/G]\end{Bmatrix}}} whereΦ{\displaystyle \Phi }is aG{\displaystyle G}-equivariant morphism of spaces andZ→Y{\displaystyle Z\to Y}is a principalG{\displaystyle G}-bundle. The morphisms in this category are just morphisms of diagrams where the arrows on the right-hand side are equal and the arrows on the left-hand side are morphisms of principalG{\displaystyle G}-bundles. A special case of this whenXis a point gives theclassifying stackBGof a smooth affine group schemeG:BG:=[pt/G].{\displaystyle {\textbf {B}}G:=[pt/G].}It is named so since the categoryBG(Y){\displaystyle \mathbf {B} G(Y)}, the fiber overY, is precisely the categoryBunG⁡(Y){\displaystyle \operatorname {Bun} _{G}(Y)}of principalG{\displaystyle G}-bundles overY{\displaystyle Y}. Note thatBunG⁡(Y){\displaystyle \operatorname {Bun} _{G}(Y)}itself can be considered as a stack, themoduli stack of principalG-bundles onY. 
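(A standard remark on the classifying stack, added for orientation: the set of isomorphism classes of objects of BG(Y), that is, of principal G-bundles on Y, is the nonabelian cohomology set

{\displaystyle \pi _{0}(\mathbf {B} G(Y))\cong H^{1}(Y,G),}

computed in the topology used to define the torsors, e.g. the étale or fppf topology; the stack BG remembers in addition the automorphism group of each torsor.)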
An important subexample from this construction isBGLn{\displaystyle \mathbf {B} GL_{n}}, which is the moduli stack of principalGLn{\displaystyle GL_{n}}-bundles. Since the data of a principalGLn{\displaystyle GL_{n}}-bundle is equivalent to the data of a rankn{\displaystyle n}vector bundle, this is isomorphic to themoduli stack of rankn{\displaystyle n}vector bundlesVectn{\displaystyle Vect_{n}}. The moduli stack of line bundles isBGm{\displaystyle B\mathbb {G} _{m}}since every line bundle is canonically isomorphic to a principalGm{\displaystyle \mathbb {G} _{m}}-bundle. Indeed, given a line bundleL{\displaystyle L}over a schemeS{\displaystyle S}, the relative spec Spec_S(SymS(L∨))→S{\displaystyle {\underline {\text{Spec}}}_{S}({\text{Sym}}_{S}(L^{\vee }))\to S} gives a geometric line bundle. By removing the image of the zero section, one obtains a principalGm{\displaystyle \mathbb {G} _{m}}-bundle. Conversely, from the representationid:Gm→Aut(A1){\displaystyle id:\mathbb {G} _{m}\to {\text{Aut}}(\mathbb {A} ^{1})}, the associated line bundle can be reconstructed. Agerbeis a stack in groupoids that is locally nonempty, for example the trivial gerbeBG{\displaystyle BG}that assigns to each scheme the groupoid of principalG{\displaystyle G}-bundles over the scheme, for some groupG{\displaystyle G}. IfAis a quasi-coherentsheaf of algebrasin an algebraic stackXover a schemeS, then there is a stack Spec(A) generalizing the construction of the spectrum Spec(A) of a commutative ringA. An object of Spec(A) is given by anS-schemeT, an objectxofX(T), and a morphism of sheaves of algebras fromx*(A) to the coordinate ringO(T) ofT. IfAis a quasi-coherent sheaf of graded algebras in an algebraic stackXover a schemeS, then there is a stack Proj(A) generalizing the construction of the projective scheme Proj(A) of a graded ringA. Another widely studied class of moduli spaces are theKontsevich moduli spacesparameterizing the space of stable maps between curves of a fixed genus to a fixed spaceX{\displaystyle X}whose image represents a fixed cohomology class. These moduli spaces are denoted[3] M¯g,n(X,β){\displaystyle {\overline {\mathcal {M}}}_{g,n}(X,\beta )} and can have wild behavior, such as being reducible stacks whose components are non-equal dimension. For example,[3]the moduli stack M¯1,0(P2,3[H]){\displaystyle {\overline {\mathcal {M}}}_{1,0}(\mathbb {P} ^{2},3[H])} has smooth curves parametrized by an open subsetU⊂P9=P(Γ(P2,O(3))){\displaystyle U\subset \mathbb {P} ^{9}=\mathbb {P} (\Gamma (\mathbb {P} ^{2},{\mathcal {O}}(3)))}. On the boundary of the moduli space, where curves may degenerate to reducible curves, there is a substack parametrizing reducible curves with a genus0{\displaystyle 0}component and a genus1{\displaystyle 1}component intersecting at one point, and the map sends the genus1{\displaystyle 1}curve to a point. Since all such genus1{\displaystyle 1}curves are parametrized byU{\displaystyle U}, and there is an additional1{\displaystyle 1}dimensional choice of where these curves intersect on the genus1{\displaystyle 1}curve, the boundary component has dimension10{\displaystyle 10}. Constructingweighted projective spacesinvolves taking thequotient varietyof someAn+1−{0}{\displaystyle \mathbb {A} ^{n+1}-\{0\}}by aGm{\displaystyle \mathbb {G} _{m}}-action. 
In particular, the action sends a tuple g⋅(x0,…,xn)↦(ga0x0,…,ganxn){\displaystyle g\cdot (x_{0},\ldots ,x_{n})\mapsto (g^{a_{0}}x_{0},\ldots ,g^{a_{n}}x_{n})} and the quotient of this action gives the weighted projective spaceWP(a0,…,an){\displaystyle \mathbb {WP} (a_{0},\ldots ,a_{n})}. Since this can instead be taken as a stack quotient, the weighted projective stack[4]pg 30is WP(a0,…,an):=[An−{0}/Gm]{\displaystyle {\textbf {WP}}(a_{0},\ldots ,a_{n}):=[\mathbb {A} ^{n}-\{0\}/\mathbb {G} _{m}]} Taking the vanishing locus of a weighted polynomial in a line bundlef∈Γ(WP(a0,…,an),O(a)){\displaystyle f\in \Gamma ({\textbf {WP}}(a_{0},\ldots ,a_{n}),{\mathcal {O}}(a))}gives a stacky weightedprojective variety. Stacky curves, or orbicurves, can be constructed by taking the stack quotient of a morphism of curves by the monodromy group of the cover over the generic points. For example, take a projective morphism Proj(C[x,y,z]/(x5+y5+z5))→Proj(C[x,y]){\displaystyle {\text{Proj}}(\mathbb {C} [x,y,z]/(x^{5}+y^{5}+z^{5}))\to {\text{Proj}}(\mathbb {C} [x,y])} which is genericallyetale. The stack quotient of the domain byμ5{\displaystyle \mu _{5}}gives a stackyP1{\displaystyle \mathbb {P} ^{1}}with stacky points that have stabilizer groupZ/5{\displaystyle \mathbb {Z} /5}at the fifth roots of unity in thex/y{\displaystyle x/y}-chart. This is because these are the points where the cover ramifies.[citation needed] An example of a non-affine stack is given by the half-line with two stacky origins. This can be constructed as the colimit of two inclusion of[Gm/(Z/2)]→[A1/(Z/2)]{\displaystyle [\mathbb {G} _{m}/(\mathbb {Z} /2)]\to [\mathbb {A} ^{1}/(\mathbb {Z} /2)]}. On an algebraic stack one can construct a category of quasi-coherent sheaves similar to the category of quasi-coherent sheaves over a scheme. A quasi-coherent sheaf is roughly one that looks locally like thesheaf of a moduleover a ring. The first problem is to decide what one means by "locally": this involves the choice of a Grothendieck topology, and there are many possible choices for this, all of which have some problems and none of which seem completely satisfactory. The Grothendieck topology should be strong enough so that the stack is locally affine in this topology: schemes are locally affine in the Zariski topology so this is a good choice for schemes as Serre discovered, algebraic spaces and Deligne–Mumford stacks are locally affine in the etale topology so one usually uses the etale topology for these, while algebraic stacks are locally affine in the smooth topology so one can use the smooth topology in this case. For general algebraic stacks the etale topology does not have enough open sets: for example, if G is a smooth connected group then the only etale covers of the classifying stack BG are unions of copies of BG, which are not enough to give the right theory of quasicoherent sheaves. Instead of using the smooth topology for algebraic stacks one often uses a modification of it called theLis-Et topology(short for Lisse-Etale: lisse is the French term for smooth), which has the same open sets as the smooth topology but the open covers are given by etale rather than smooth maps. This usually seems to lead to an equivalent category of quasi-coherent sheaves, but is easier to use: for example it is easier to compare with the etale topology on algebraic spaces. The Lis-Et topology has a subtle technical problem: a morphism between stacks does not in general give a morphism between the corresponding topoi. 
(The problem is that while one can construct a pair of adjoint functors f^* and f_*, as needed for a geometric morphism of topoi, the functor f^* is not left exact in general. This problem is notorious for having caused some errors in published papers and books.[5]) This means that constructing the pullback of a quasicoherent sheaf under a morphism of stacks requires some extra effort. It is also possible to use finer topologies. Most reasonable "sufficiently large" Grothendieck topologies seem to lead to equivalent categories of quasi-coherent sheaves, but the larger a topology is, the harder it is to handle, so one generally prefers to use smaller topologies as long as they have enough open sets. For example, the big fppf topology leads to essentially the same category of quasi-coherent sheaves as the Lis-Et topology, but has a subtle problem: the natural embedding of quasi-coherent sheaves into O_X-modules in this topology is not exact (it does not preserve kernels in general). Differentiable stacks and topological stacks are defined in a way similar to algebraic stacks, except that the underlying category of affine schemes is replaced by the category of smooth manifolds or topological spaces. More generally, one can define the notion of an n-sheaf or n–1 stack, which is roughly a sort of sheaf taking values in n–1 categories. There are several inequivalent ways of doing this. 1-sheaves are the same as sheaves, and 2-sheaves are the same as stacks. They are called higher stacks. A very similar and analogous extension is to develop the stack theory on non-discrete objects (i.e., a space is really a spectrum in algebraic topology). The resulting stacky objects are called derived stacks (or spectral stacks). Jacob Lurie's under-construction book Spectral Algebraic Geometry studies a generalization that he calls a spectral Deligne–Mumford stack. By definition, it is a ringed ∞-topos that is étale-locally the étale spectrum of an E∞-ring (this notion subsumes that of a derived scheme, at least in characteristic zero). There are some minor set-theoretical problems with the usual foundation of the theory of stacks, because stacks are often defined as certain functors to the category of sets and are therefore not sets. There are several ways to deal with this problem.
https://en.wikipedia.org/wiki/Stack_(mathematics)
In algebraic topology, apresheaf of spectraon atopological spaceXis a contravariant functor from the category of open subsets ofX, where morphisms are inclusions, to thegood category of commutative ring spectra. A theorem of Jardine says that such presheaves form asimplicial model category, whereF→Gis a weak equivalence if the induced map ofhomotopy sheavesπ∗F→π∗G{\displaystyle \pi _{*}F\to \pi _{*}G}is an isomorphism. Asheaf of spectrais then a fibrant/cofibrant object in that category. The notion is used to define, for example, aderived schemein algebraic geometry. 
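The homotopy sheaves appearing in Jardine's theorem can be unwound explicitly; the following reformulation is standard and is included here only as an illustration (it is not part of the original text). For each integer q,

π_q F := sheafification of the presheaf ( U ↦ π_q(F(U)) ),   q ∈ Z,

and a map F → G of presheaves of spectra is a weak equivalence exactly when the induced map π_q F → π_q G is an isomorphism of sheaves for every q.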
https://en.wikipedia.org/wiki/Sheaf_of_spectra
The mathematical term perverse sheaves refers to the objects of certain abelian categories associated to topological spaces, which may be a real or complex manifold, or more general topologically stratified spaces, possibly singular. The concept was introduced in the work of Joseph Bernstein, Alexander Beilinson, and Pierre Deligne (1982) as a consequence of the Riemann-Hilbert correspondence, which establishes a connection between the derived categories of regular holonomic D-modules and of constructible sheaves. Perverse sheaves are the objects in the latter that correspond to individual D-modules (and not more general complexes thereof); a perverse sheaf is in general represented by a complex of sheaves. The concept of perverse sheaves is already implicit in a 1975 paper of Kashiwara on the constructibility of solutions of holonomic D-modules. A key observation was that the intersection homology of Mark Goresky and Robert MacPherson could be described using sheaf complexes that are actually perverse sheaves. It was clear from the outset that perverse sheaves are fundamental mathematical objects at the crossroads of algebraic geometry, topology, analysis and differential equations. They also play an important role in number theory, algebra, and representation theory. The name perverse sheaf comes through rough translation of the French "faisceaux pervers".[1] The justification is that perverse sheaves are complexes of sheaves which have several features in common with sheaves: they form an abelian category, they have cohomology, and to construct one, it suffices to construct it locally everywhere. The adjective "perverse" originates in the intersection homology theory,[2] and its origin was explained by Goresky (2010). The Beilinson–Bernstein–Deligne definition of a perverse sheaf proceeds through the machinery of triangulated categories in homological algebra and has a very strong algebraic flavour, although the main examples arising from Goresky–MacPherson theory are topological in nature, because the simple objects in the category of perverse sheaves are the intersection cohomology complexes. This motivated MacPherson to recast the whole theory in geometric terms on a basis of Morse theory. For many applications in representation theory, perverse sheaves can be treated as a 'black box', a category with certain formal properties. A perverse sheaf is an object C of the bounded derived category of sheaves with constructible cohomology on a space X such that the set of points x with H^{−i}(j_x^* C) ≠ 0 or H^{−i}(j_x^! C) ≠ 0 has real dimension at most 2i, for all i. Here j_x is the inclusion map of the point x. If X is a smooth complex algebraic variety, everywhere of dimension d, then the shifted complex F[d] is a perverse sheaf for any local system F.[3] If X is a flat, locally complete intersection (for example, regular) scheme over a henselian discrete valuation ring, then the constant sheaf shifted by dim X + 1 is an étale perverse sheaf.[4] Let X be a disk around the origin in C stratified so that the origin is the unique singular stratum. 
Then the category of perverse sheaves onXis equivalent to the category of diagrams of vector spacesV⇄vuW{\displaystyle V{\overset {u}{\underset {v}{\rightleftarrows }}}W}whereid−u∘v{\displaystyle \operatorname {id} -u\circ v}andid−v∘u{\displaystyle \operatorname {id} -v\circ u}are invertible.[5]More generally, quivers can be used to describe perverse sheaves.[citation needed] The category of perverse sheaves is an abelian subcategory of the (non-abelian) derived category of sheaves, equal to the core of a suitablet-structure, and is preserved byVerdier duality. The bounded derived category of perverse l-adic sheaves on a schemeXis equivalent to the derived category of constructible sheaves and similarly for sheaves on the complex analytic space associated to a schemeX/C.[6] Perverse sheaves are a fundamental tool for the geometry of singular spaces. Therefore, they are applied in a variety of mathematical areas. In theRiemann-Hilbert correspondence, perverse sheaves correspond to regular holonomicD-modules. This application establishes the notion of perverse sheaf as occurring 'in nature'. Thedecomposition theorem, a far-reaching extension of thehard Lefschetz theoremdecomposition, requires the usage of perverse sheaves.Hodge modulesare, roughly speaking, aHodge-theoreticrefinement of perverse sheaves. Thegeometric Satake equivalenceidentifies equivariant perverse sheaves on theaffine GrassmannianGrG{\displaystyle Gr_{G}}with representations of theLanglands dualgroup of areductive groupG- seeMirković & Vilonen (2007). A proof of theWeil conjecturesusing perverse sheaves is given inKiehl & Weissauer (2001). Massless fields insuperstringcompactificationshave been identified withcohomologyclasses on the target space (i.e. four-dimensionalMinkowski spacewith a six-dimensionalCalabi-Yau (CY) manifold). The determination of the matter and interaction content requires a detailed analysis of the(co)homologyof these spaces: nearly all massless fields in the effectivephysicsmodel are represented by certain (co)homology elements. However, a troubling consequence occurs when the target space issingular. A singular target space means that only the CY manifold part is singular as the Minkowski space factor is smooth. Such a singularCY manifoldis called aconifoldas it is a CY manifold that admits conicalsingularities. Andrew Stromingerobserved (A. Strominger, 1995) that conifolds correspond to masslessblackholes. Conifolds are important objects in string theory:Brian Greeneexplains the physics of conifolds in Chapter 13 of his bookThe Elegant Universe—including the fact that the space can tear near the cone, and itstopologycan change. These singular target spaces, i.e. conifolds, correspond to certain mild degenerations ofalgebraic varietieswhich appear in a large class ofsupersymmetrictheories, including superstring theory (E. Witten, 1982). Essentially, different cohomology theories on singular target spaces yield different results thereby making it difficult to determine which theory physics may favor. Several important characteristics of the cohomology, which correspond to the massless fields, are based on general properties of field theories, specifically, the (2,2)-supersymmetric 2-dimensionalworld-sheetfield theories. These properties, known as theKählerpackage (T. Hubsch, 1992), should hold for singular and smooth target spaces. Paul Green and Tristan Hubsch (P. Green & T. 
Hubsch, 1988) determined that the manner in which one moves between singular CY target spaces requires moving through either a small resolution or deformation of the singularity (T. Hubsch, 1992) and called it the 'conifold transition'. Tristan Hubsch (T. Hubsch, 1997) conjectured what this cohomology theory should be for singular target spaces. Tristan Hubsch and Abdul Rahman (T. Hubsch and A. Rahman, 2005) worked to solve the Hubsch conjecture by analyzing the non-transversal case of Witten's gauged linear sigma model (E. Witten, 1993), which induces a stratification of these algebraic varieties (termed the ground state variety) in the case of isolated conical singularities. Under certain conditions it was determined that this ground state variety was a conifold (P. Green & T. Hubsch, 1988; T. Hubsch, 1992) with isolated conic singularities over a certain base with a 1-dimensional exocurve (termed exo-strata) attached at each singular point. T. Hubsch and A. Rahman determined the (co)homology of this ground state variety in all dimensions, found it compatible with mirror symmetry and string theory, but found an obstruction in the middle dimension (T. Hubsch and A. Rahman, 2005). This obstruction required revisiting Hubsch's conjecture of a Stringy Singular Cohomology (T. Hubsch, 1997). In the winter of 2002, T. Hubsch and A. Rahman met with R. M. Goresky to discuss this obstruction, and in discussions between R. M. Goresky and R. MacPherson, R. MacPherson made the observation that there was such a perverse sheaf that could have the cohomology that satisfied Hubsch's conjecture and resolved the obstruction. R. M. Goresky and T. Hubsch advised A. Rahman's Ph.D. dissertation on the construction of a self-dual perverse sheaf (A. Rahman, 2009) using the zig-zag construction of MacPherson–Vilonen (R. MacPherson & K. Vilonen, 1986). This perverse sheaf proved the Hubsch conjecture for isolated conic singularities, satisfied Poincaré duality, and aligned with some of the properties of the Kähler package. Satisfaction of all of the Kähler package by this perverse sheaf for higher codimension strata is still an open problem. Markus Banagl (M. Banagl, 2010; M. Banagl, et al., 2014) addressed the Hubsch conjecture through intersection spaces for higher codimension strata, inspired by Hubsch's work (T. Hubsch, 1992, 1997; P. Green and T. Hubsch, 1988) and A. Rahman's original ansatz (A. Rahman, 2009) for isolated singularities.
https://en.wikipedia.org/wiki/Perverse_sheaf
Incategory theory, a branch ofmathematics, apresheafon acategoryC{\displaystyle C}is afunctorF:Cop→Set{\displaystyle F\colon C^{\mathrm {op} }\to \mathbf {Set} }. IfC{\displaystyle C}is theposetofopen setsin atopological space, interpreted as a category, then one recovers the usual notion ofpresheafon a topological space. Amorphismof presheaves is defined to be anatural transformationof functors. This makes the collection of all presheaves onC{\displaystyle C}into a category, and is an example of afunctor category. It is often written asC^=SetCop{\displaystyle {\widehat {C}}=\mathbf {Set} ^{C^{\mathrm {op} }}}and it is called thecategory of presheavesonC{\displaystyle C}. A functor intoC^{\displaystyle {\widehat {C}}}is sometimes called aprofunctor. A presheaf that isnaturally isomorphicto the contravarianthom-functorHom(–,A) for someobjectAofCis called arepresentable presheaf. Some authors refer to a functorF:Cop→V{\displaystyle F\colon C^{\mathrm {op} }\to \mathbf {V} }as aV{\displaystyle \mathbf {V} }-valued presheaf.[1] The constructionC↦C^=Fct(Cop,Set){\displaystyle C\mapsto {\widehat {C}}=\mathbf {Fct} (C^{\text{op}},\mathbf {Set} )}is called thecolimit completionofCbecause of the followinguniversal property: Proposition[3]—LetC,Dbe categories and assumeDadmits small colimits. Then each functorη:C→D{\displaystyle \eta :C\to D}factorizes as whereyis the Yoneda embedding andη~:C^→D{\displaystyle {\widetilde {\eta }}:{\widehat {C}}\to D}is a, unique up to isomorphism, colimit-preserving functor called theYoneda extensionofη{\displaystyle \eta }. Proof: Given a presheafF, by thedensity theorem, we can writeF=lim→⁡yUi{\displaystyle F=\varinjlim yU_{i}}whereUi{\displaystyle U_{i}}are objects inC. Then letη~F=lim→⁡ηUi,{\displaystyle {\widetilde {\eta }}F=\varinjlim \eta U_{i},}which exists by assumption. Sincelim→−{\displaystyle \varinjlim -}is functorial, this determines the functorη~:C^→D{\displaystyle {\widetilde {\eta }}:{\widehat {C}}\to D}. Succinctly,η~{\displaystyle {\widetilde {\eta }}}is the leftKan extensionofη{\displaystyle \eta }alongy; hence, the name "Yoneda extension". To seeη~{\displaystyle {\widetilde {\eta }}}commutes with small colimits, we showη~{\displaystyle {\widetilde {\eta }}}is a left-adjoint (to some functor). DefineHom(η,−):D→C^{\displaystyle {\mathcal {H}}om(\eta ,-):D\to {\widehat {C}}}to be the functor given by: for each objectMinDand each objectUinC, Then, for each objectMinD, sinceHom(η,M)(Ui)=Hom⁡(yUi,Hom(η,M)){\displaystyle {\mathcal {H}}om(\eta ,M)(U_{i})=\operatorname {Hom} (yU_{i},{\mathcal {H}}om(\eta ,M))}by the Yoneda lemma, we have: which is to sayη~{\displaystyle {\widetilde {\eta }}}is a left-adjoint toHom(η,−){\displaystyle {\mathcal {H}}om(\eta ,-)}.◻{\displaystyle \square } The proposition yields several corollaries. For example, the proposition implies that the constructionC↦C^{\displaystyle C\mapsto {\widehat {C}}}is functorial: i.e., each functorC→D{\displaystyle C\to D}determines the functorC^→D^{\displaystyle {\widehat {C}}\to {\widehat {D}}}. Apresheaf of spaceson an ∞-categoryCis a contravariant functor fromCto the∞-category of spaces(for example, the nerve of the category ofCW-complexes.)[4]It is an∞-categoryversion of a presheaf of sets, as a "set" is replaced by a "space". The notion is used, among other things, in the ∞-category formulation ofYoneda's lemmathat says:C→C^{\displaystyle C\to {\widehat {C}}}isfully faithful(hereCcan be just asimplicial set.)[5] Acopresheafof a categoryCis a presheaf ofCop. 
In other words, it is a covariant functor fromCtoSet.[6]
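Returning to the Yoneda extension discussed above, a classical instance may help fix ideas; it is included here as an illustration and does not appear in the text. Take C = Δ, the simplex category, so that the presheaf category is the category of simplicial sets, and let η : Δ → Top be the functor sending [n] to the standard topological n-simplex |Δ^n|. Then the Yoneda extension is geometric realization:

η̃(X) = colim_{Δ^n → X} |Δ^n| ≅ |X|,

and its right adjoint Hom(η, −) is the singular simplicial set functor Sing, recovering the familiar adjunction |−| ⊣ Sing between simplicial sets and topological spaces.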
https://en.wikipedia.org/wiki/Presheaf_of_spaces
Inmathematics, aconstructible sheafis asheafofabelian groupsover sometopological spaceX, such thatXis the union of a finite number oflocally closed subsetson each of which the sheaf is alocally constant sheaf. It has its origins inalgebraic geometry, where inétale cohomologyconstructible sheaves are defined in a similar way (Artin, Grothendieck & Verdier 1972, Exposé IX § 2). For thederived categoryof constructible sheaves, see a section inℓ-adic sheaf. Thefiniteness theoremin étale cohomology states that thehigher direct imagesof a constructible sheaf are constructible. Here we use the definition of constructibleétale sheavesfrom the book by Freitag and Kiehl referenced below. In what follows in this subsection, all sheavesF{\displaystyle {\mathcal {F}}}on schemesX{\displaystyle X}are étale sheaves unless otherwise noted. A sheafF{\displaystyle {\mathcal {F}}}is called constructible ifX{\displaystyle X}can be written as a finite union of locally closed subschemesiY:Y→X{\displaystyle i_{Y}:Y\to X}such that for each subschemeY{\displaystyle Y}of the covering, the sheafF|Y=iY∗F{\displaystyle {\mathcal {F}}|_{Y}=i_{Y}^{\ast }{\mathcal {F}}}is a finite locally constant sheaf. In particular, this means for each subschemeY{\displaystyle Y}appearing in the finite covering, there is an étale covering{Ui→Y∣i∈I}{\displaystyle \lbrace U_{i}\to Y\mid i\in I\rbrace }such that for all étale subschemes in the cover ofY{\displaystyle Y}, the sheaf(iY)∗F|Ui{\displaystyle (i_{Y})^{\ast }{\mathcal {F}}|_{U_{i}}}is constant and represented by a finite set. This definition allows us to derive, from Noetherian induction and the fact that an étale sheaf is constant if and only if its restriction fromX{\displaystyle X}toXred{\displaystyle X_{\text{red}}}is constant as well, whereXred{\displaystyle X_{\text{red}}}is the reduction of the schemeX{\displaystyle X}. It then follows that a representable étale sheafF{\displaystyle {\mathcal {F}}}is itself constructible. Of particular interest to the theory of constructible étale sheaves is the case in which one works with constructible étale sheaves of Abelian groups. The remarkable result is that constructible étale sheaves of Abelian groups are precisely the Noetherian objects in the category of all torsion étale sheaves (cf. Proposition I.4.8 of Freitag-Kiehl). Most examples of constructible sheaves come fromintersection cohomologysheaves or from the derived pushforward of alocal systemon a family of topological spaces parameterized by a base space. One nice set of examples of constructible sheaves come from the derived pushforward (with or without compact support) of a local system onU=P1−{0,1,∞}{\displaystyle U=\mathbb {P} ^{1}-\{0,1,\infty \}}. Since any loop around∞{\displaystyle \infty }ishomotopicto a loop around0,1{\displaystyle 0,1}we only have to describe themonodromyaround0{\displaystyle 0}and1{\displaystyle 1}. For example, we can set the monodromy operators to be where the stalks of our local systemL{\displaystyle {\mathcal {L}}}are isomorphic toQ⊕2{\displaystyle \mathbb {Q} ^{\oplus 2}}. Then, if we take the derived pushforwardRj∗{\displaystyle \mathbf {R} j_{*}}orRj!{\displaystyle \mathbf {R} j_{!}}ofL{\displaystyle {\mathcal {L}}}forj:U→P1{\displaystyle j:U\to \mathbb {P} ^{1}}we get a constructible sheaf where the stalks at the points0,1,∞{\displaystyle 0,1,\infty }compute the cohomology of the local systems restricted to a neighborhood of them inU{\displaystyle U}. 
For example, consider the family of degenerating elliptic curves over C. At t = 0, 1 this family of curves degenerates into a nodal curve. If we denote this family by π : X → C, then over C − {0, 1} the derived pushforward of the constant sheaf restricts to a local system L_{C−{0,1}} whose stalks are isomorphic to Q^2. The local monodromy of this local system around 0 and 1 can be computed using the Picard–Lefschetz formula.
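The stalks of these pushforwards can be expressed purely in terms of the monodromy; the following formulas are standard and are stated here only as an illustration, with T denoting the monodromy of the local system L around a puncture p and j the open inclusion of the complement of the punctures:

(R^0 j_* L)_p ≅ ker(T − id),   (R^1 j_* L)_p ≅ coker(T − id),   (j_! L)_p = 0,

so the derived pushforward records the invariants and coinvariants of the local monodromy at each puncture, while the extension by zero has vanishing stalks there.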
https://en.wikipedia.org/wiki/Constructible_sheaf
Inmathematics,de Rham cohomology(named afterGeorges de Rham) is a tool belonging both toalgebraic topologyand todifferential topology, capable of expressing basic topological information aboutsmooth manifoldsin a form particularly adapted to computation and the concrete representation ofcohomology classes. It is acohomology theorybased on the existence ofdifferential formswith prescribed properties. On any smooth manifold, everyexact formis closed, but the converse may fail to hold. Roughly speaking, this failure is related to the possible existence of"holes"in the manifold, and thede Rham cohomology groupscomprise a set oftopological invariantsof smooth manifolds that precisely quantify this relationship.[1] Thede Rham complexis thecochain complexofdifferential formson somesmooth manifoldM, with theexterior derivativeas the differential: whereΩ0(M)is the space ofsmooth functionsonM,Ω1(M)is the space of1-forms, and so forth. Forms that are the image of other forms under theexterior derivative, plus the constant0function inΩ0(M), are calledexactand forms whose exterior derivative is0are calledclosed(seeClosed and exact differential forms); the relationshipd2= 0then says that exact forms are closed. In contrast, closed forms are not necessarily exact. An illustrative case is a circle as a manifold, and the1-form corresponding to the derivative of angle from a reference point at its centre, typically written asdθ(described atClosed and exact differential forms). There is no functionθdefined on the whole circle such thatdθis its derivative; the increase of2πin going once around the circle in the positive direction implies amultivalued functionθ. Removing one point of the circle obviates this, at the same time changing the topology of the manifold. One prominent example when all closed forms are exact is when the underlying space iscontractibleto a point or, more generally, if it issimply connected(no-holes condition). In this case the exterior derivatived{\displaystyle d}restricted to closed forms has a local inverse called ahomotopy operator.[3][4]Since it is alsonilpotent,[3]it forms a dualchain complexwith the arrows reversed[5]compared to the de Rham complex. This is the situation described in thePoincaré lemma. The idea behind de Rham cohomology is to defineequivalence classesof closed forms on a manifold. One classifies two closed formsα,β∈ Ωk(M)ascohomologousif they differ by an exact form, that is, ifα−βis exact. This classification induces an equivalence relation on the space of closed forms inΩk(M). One then defines thek-thde Rham cohomology groupHdRk(M){\displaystyle H_{\mathrm {dR} }^{k}(M)}to be the set of equivalence classes, that is, the set of closed forms inΩk(M)modulo the exact forms. Note that, for any manifoldMcomposed ofmdisconnected components, each of which isconnected, we have that This follows from the fact that any smooth function onMwith zero derivative everywhere is separately constant on each of the connected components ofM. One may often find the general de Rham cohomologies of a manifold using the above fact about the zero cohomology and aMayer–Vietoris sequence. Another useful fact is that the de Rham cohomology is ahomotopyinvariant. While the computation is not given, the following are the computed de Rham cohomologies for some commontopologicalobjects: For then-sphere,Sn{\displaystyle S^{n}}, and also when taken together with a product of open intervals, we have the following. Letn> 0,m≥ 0, andIbe an open real interval. 
Then Then{\displaystyle n}-torus is the Cartesian product:Tn=S1×⋯×S1⏟n{\displaystyle T^{n}=\underbrace {S^{1}\times \cdots \times S^{1}} _{n}}. Similarly, allowingn≥1{\displaystyle n\geq 1}here, we obtain We can also find explicit generators for the de Rham cohomology of the torus directly using differential forms. Given a quotient manifoldπ:X→X/G{\displaystyle \pi :X\to X/G}and a differential formω∈Ωk(X){\displaystyle \omega \in \Omega ^{k}(X)}we can say thatω{\displaystyle \omega }isG{\displaystyle G}-invariantif given any diffeomorphism induced byG{\displaystyle G},⋅g:X→X{\displaystyle \cdot g:X\to X}we have(⋅g)∗(ω)=ω{\displaystyle (\cdot g)^{*}(\omega )=\omega }. In particular, the pullback of any form onX/G{\displaystyle X/G}isG{\displaystyle G}-invariant. Also, the pullback is an injective morphism. In our case ofRn/Zn{\displaystyle \mathbb {R} ^{n}/\mathbb {Z} ^{n}}the differential formsdxi{\displaystyle dx_{i}}areZn{\displaystyle \mathbb {Z} ^{n}}-invariant sinced(xi+k)=dxi{\displaystyle d(x_{i}+k)=dx_{i}}. But, notice thatxi+α{\displaystyle x_{i}+\alpha }forα∈R{\displaystyle \alpha \in \mathbb {R} }is not an invariant0{\displaystyle 0}-form. This with injectivity implies that Since the cohomology ring of a torus is generated byH1{\displaystyle H^{1}}, taking the exterior products of these forms gives all of the explicitrepresentativesfor the de Rham cohomology of a torus. Punctured Euclidean space is simplyRn{\displaystyle \mathbb {R} ^{n}}with the origin removed. We may deduce from the fact that theMöbius strip,M, can bedeformation retractedto the1-sphere (i.e. the real unit circle), that: Stokes' theoremis an expression ofdualitybetween de Rham cohomology and thehomologyofchains. It says that the pairing of differential forms and chains, via integration, gives ahomomorphismfrom de Rham cohomologyHdRk(M){\displaystyle H_{\mathrm {dR} }^{k}(M)}tosingular cohomology groupsHk(M;R).{\displaystyle H^{k}(M;\mathbb {R} ).}de Rham's theorem, proved byGeorges de Rhamin 1931, states that for a smooth manifoldM, this map is in fact anisomorphism. More precisely, consider the map defined as follows: for any[ω]∈HdRp(M){\displaystyle [\omega ]\in H_{\mathrm {dR} }^{p}(M)}, letI(ω)be the element ofHom(Hp(M),R)≃Hp(M;R){\displaystyle {\text{Hom}}(H_{p}(M),\mathbb {R} )\simeq H^{p}(M;\mathbb {R} )}that acts as follows: The theorem of de Rham asserts that this is an isomorphism between de Rham cohomology and singular cohomology. Theexterior productendows thedirect sumof these groups with aringstructure. A further result of the theorem is that the twocohomology ringsare isomorphic (asgraded rings), where the analogous product on singular cohomology is thecup product. For any smooth manifoldM, letR_{\textstyle {\underline {\mathbb {R} }}}be theconstant sheafonMassociated to the abelian groupR{\textstyle \mathbb {R} }; in other words,R_{\textstyle {\underline {\mathbb {R} }}}is the sheaf of locally constant real-valued functions onM.Then we have anatural isomorphism between the de Rham cohomology and thesheaf cohomologyofR_{\textstyle {\underline {\mathbb {R} }}}. (Note that this shows that de Rham cohomology may also be computed in terms ofČech cohomology; indeed, since every smooth manifold isparacompactHausdorffwe have that sheaf cohomology is isomorphic to the Čech cohomologyHˇ∗(U,R_){\textstyle {\check {H}}^{*}({\mathcal {U}},{\underline {\mathbb {R} }})}for anygood coverU{\textstyle {\mathcal {U}}}ofM.) 
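A minimal worked instance of this pairing is the circle example from earlier in the article: integrating the angular form over the fundamental 1-cycle gives

I([dθ])([S^1]) = ∫_{S^1} dθ = 2π ≠ 0,

so [dθ] is a nonzero class in H^1_dR(S^1); since H^1(S^1; R) ≅ R, this class in fact generates the first de Rham cohomology of the circle under the de Rham isomorphism.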
The standard proof proceeds by showing that the de Rham complex, when viewed as a complex of sheaves, is anacyclic resolutionofR_{\textstyle {\underline {\mathbb {R} }}}. In more detail, letmbe the dimension ofMand letΩk{\textstyle \Omega ^{k}}denote thesheaf of germsofk{\displaystyle k}-forms onM(withΩ0{\textstyle \Omega ^{0}}the sheaf ofC∞{\textstyle C^{\infty }}functions onM). By thePoincaré lemma, the following sequence of sheaves is exact (in theabelian categoryof sheaves): Thislong exact sequencenow breaks up intoshort exact sequencesof sheaves where by exactness we have isomorphismsimdk−1≅kerdk{\textstyle \mathrm {im} \,d_{k-1}\cong \mathrm {ker} \,d_{k}}for allk. Each of these induces a long exact sequence in cohomology. Since the sheafΩ0{\textstyle \Omega ^{0}}ofC∞{\textstyle C^{\infty }}functions onMadmitspartitions of unity, anyΩ0{\textstyle \Omega ^{0}}-module is afine sheaf; in particular, the sheavesΩk{\textstyle \Omega ^{k}}are all fine. Therefore, the sheaf cohomology groupsHi(M,Ωk){\textstyle H^{i}(M,\Omega ^{k})}vanish fori>0{\textstyle i>0}since all fine sheaves on paracompact spaces are acyclic. So the long exact cohomology sequences themselves ultimately separate into a chain of isomorphisms. At one end of the chain is the sheaf cohomology ofR_{\textstyle {\underline {\mathbb {R} }}}and at the other lies the de Rham cohomology. The de Rham cohomology has inspired many mathematical ideas, includingDolbeault cohomology,Hodge theory, and theAtiyah–Singer index theorem. However, even in more classical contexts, the theorem has inspired a number of developments. Firstly, theHodge theoryproves that there is an isomorphism between the cohomology consisting of harmonic forms and the de Rham cohomology consisting of closed forms modulo exact forms. This relies on an appropriate definition of harmonic forms and of the Hodge theorem. For further details seeHodge theory. IfMis acompactRiemannian manifold, then each equivalence class inHdRk(M){\displaystyle H_{\mathrm {dR} }^{k}(M)}contains exactly oneharmonic form. That is, every memberω{\displaystyle \omega }of a given equivalence class of closed forms can be written as whereα{\displaystyle \alpha }is exact andγ{\displaystyle \gamma }is harmonic:Δγ=0{\displaystyle \Delta \gamma =0}. Anyharmonic functionon a compact connected Riemannian manifold is a constant. Thus, this particular representative element can be understood to be an extremum (a minimum) of all cohomologously equivalent forms on the manifold. For example, on a2-torus, one may envision a constant1-form as one where all of the "hair" is combed neatly in the same direction (and all of the "hair" having the same length). In this case, there are two cohomologically distinct combings; all of the others are linear combinations. In particular, this implies that the 1stBetti numberof a2-torus is two. More generally, on ann{\displaystyle n}-dimensional torusTn{\displaystyle T^{n}}, one can consider the various combings ofk{\displaystyle k}-forms on the torus. There aren{\displaystyle n}choosek{\displaystyle k}such combings that can be used to form the basis vectors forHdRk(Tn){\displaystyle H_{\text{dR}}^{k}(T^{n})}; thek{\displaystyle k}-th Betti number for the de Rham cohomology group for then{\displaystyle n}-torus is thusn{\displaystyle n}choosek{\displaystyle k}. More precisely, for adifferential manifoldM, one may equip it with some auxiliaryRiemannian metric. 
Then the Laplacian Δ is defined by Δ = dδ + δd, with d the exterior derivative and δ the codifferential. The Laplacian is a homogeneous (in grading) linear differential operator acting upon the exterior algebra of differential forms: we can look at its action on each component of degree k separately. If M is compact and oriented, the dimension of the kernel of the Laplacian acting upon the space of k-forms is then equal (by Hodge theory) to that of the de Rham cohomology group in degree k: the Laplacian picks out a unique harmonic form in each cohomology class of closed forms. In particular, the space of all harmonic k-forms on M is isomorphic to H^k(M; R). The dimension of each such space is finite, and is given by the k-th Betti number. Let M be a compact oriented Riemannian manifold. The Hodge decomposition states that any k-form ω on M uniquely splits into the sum of three L^2 components: ω = α + β + γ, where α is exact, β is co-exact, and γ is harmonic. One says that a form β is co-closed if δβ = 0 and co-exact if β = δη for some form η, and that γ is harmonic if the Laplacian is zero, Δγ = 0. This follows by noting that exact and co-exact forms are orthogonal; the orthogonal complement then consists of forms that are both closed and co-closed: that is, of harmonic forms. Here, orthogonality is defined with respect to the L^2 inner product on Ω^k(M), given by (α, β) = ∫_M α ∧ ⋆β. By use of Sobolev spaces or distributions, the decomposition can be extended, for example, to a complete (oriented or not) Riemannian manifold.[6]
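For the flat torus T^n = R^n/Z^n discussed above, the harmonic representatives can be exhibited directly; this is a routine verification included here as an illustration. With respect to the flat metric, every constant-coefficient form is harmonic, and these exhaust the harmonic forms:

Δ(dx_{i_1} ∧ ⋯ ∧ dx_{i_k}) = 0,   H^k(T^n) = { Σ_{i_1<⋯<i_k} c_{i_1⋯i_k} dx_{i_1} ∧ ⋯ ∧ dx_{i_k} : c_{i_1⋯i_k} ∈ R } ≅ R^{\binom{n}{k}},

recovering the statement that the k-th Betti number of the n-torus is n choose k.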
https://en.wikipedia.org/wiki/De_Rham%27s_theorem
Chapel, the Cascade High Productivity Language, is a parallel programming language that was developed by Cray,[3] and later by Hewlett Packard Enterprise, which acquired Cray. It was being developed as part of the Cray Cascade project, a participant in DARPA's High Productivity Computing Systems (HPCS) program, which had the goal of increasing supercomputer productivity by 2010. It is being developed as an open source project, under version 2 of the Apache license.[4] The Chapel compiler is written in C and C++ (C++14). The backend (i.e. the optimizer) is LLVM, written in C++. Python 3.7 or newer is required for some optional components such as Chapel's test system and c2chapel, a tool to generate C bindings for Chapel. By default Chapel compiles to binary executables, but it can also compile to C code, in which case LLVM is not used. Chapel code can also be compiled into libraries callable from C, Fortran, or, for example, Python. Chapel supports GPU programming through code generation for NVIDIA and AMD graphics processing units.[5] Chapel aims to improve the programmability of parallel computers in general and the Cascade system in particular, by providing a higher level of expression than current programming languages do and by improving the separation between algorithmic expression and data structure implementation details. The language designers aspire for Chapel to bridge the gap between current high-performance computing (HPC) programming practitioners, whom they describe as Fortran, C or C++ users writing procedural code using technologies like OpenMP and MPI, on one side, and newly graduating computer programmers who tend to prefer Java, Python or Matlab, with only some of them having experience with C++ or C. Chapel should offer the productivity advances offered by the latter suite of languages while not alienating the users of the first.[2] Chapel supports a multithreaded parallel programming model at a high level by supporting abstractions for data parallelism, task parallelism, and nested parallelism. It enables optimizations for the locality of data and computation in the program via abstractions for data distribution and data-driven placement of subcomputations. It allows for code reuse and generality through object-oriented concepts and generic programming features. For instance, Chapel allows for the declaration of locales.[6] While Chapel borrows concepts from many preceding languages, its parallel concepts are most closely based on ideas from High Performance Fortran (HPF), ZPL, and the Cray MTA's extensions to Fortran and C.
https://en.wikipedia.org/wiki/Chapel_(programming_language)
Coarray Fortran(CAF), formerly known asF--, started as an extension ofFortran95/2003 forparallel processingcreated by Robert Numrich and John Reid in the 1990s. TheFortran 2008standard (ISO/IEC 1539-1:2010) now includescoarrays(spelled without hyphen), as decided at the May 2005 meeting of the ISO Fortran Committee; the syntax in the Fortran 2008 standard is slightly different from the original CAF proposal. A CAFprogramis interpreted as if it were replicated a number of times and all copies were executed asynchronously. Each copy has its own set of data objects and is termed animage. Thearraysyntax of Fortran is extended with additional trailing subscripts in square brackets to provide a concise representation of references to data that is spread across images. The CAF extension was implemented in some Fortrancompilerssuch as those fromCray(since release 3.1). Since the inclusion of coarrays in the Fortran 2008 standard, the number of implementations is growing. The firstopen-sourcecompiler which implemented coarrays as specified in the Fortran 2008 standard forLinux architecturesisG95. Currently,GNU Fortranprovides wide coverage of Fortran's coarray features in single- and multi-image configuration (the latter based on the OpenCoarrays library). Another implementation of coarrays and relatedparallel extensionsfrom Fortran 2008 is available in the OpenUH compiler (a branch ofOpen64) developed at theUniversity of Houston. CAF is often implemented on top of aMessage Passing Interface(MPI) library for portability. Some implementations, such as the ones available in theGNU Fortranand OpenUH compilers, may run on top of other low-level layers (for example, GASNet) designed for supportingpartitioned global address spacelanguages. A simple example is given below. CAF is used in CGPACK, an open source package for simulating polycrystalline materials developed at theUniversity of Bristol.[1] The program above scales poorly because the loop that distributes information executes sequentially. Writing scalable programs often requires a sophisticated understanding of parallel algorithms, a detailed knowledge of the underlying network characteristics, and special tuning for application characteristics such as the size of data transfers. For most application developers, letting the compiler or runtime library decide the best algorithm proves more robust and high-performing. Fortran 2018 will offer collective communication subroutines that empower compiler and runtime library teams to encapsulate efficient parallel algorithms for collective communication and distributed computation in a set of collective subroutines. These subroutines and other new parallel programming features are summarized in a technical specification[2]that the Fortran standards committee has voted to incorporate into Fortran 2018. These enable the user to write a more efficient version of the above algorithm where the lack of explicit synchronization offers the potential for higher performance due to less coordination between the images. Furthermore, TS 18508 guarantees that "A transfer from an image cannot occur before the collective subroutine has been invoked on that image." This implies some partial synchronization inside co_broadcast, but could be higher performing than the "sync all" in the prior example. TS 18508 also incorporates several other new features that address issues targeted by the CAF 2.0 effort described below. Examples include teams of images and events. 
In 2011,Rice Universitypursued an alternate vision of coarray extensions for the Fortran language.[3]Their perspective is that the Fortran 2008 standard committee's design choices were shaped more by the desire to introduce as few modifications to the language as possible than to assemble the best set of extensions to supportparallel programming. In their view, both Numrich and Reid's original design and the coarray extensions proposed for Fortran 2008 suffer from the following shortcomings: To address these shortcomings, the Rice University group is developing a clean-slate redesign of the Coarray Fortran programming model. Rice's new design for Coarray Fortran, which they call Coarray Fortran 2.0, is an expressive set of coarray-based extensions to Fortran designed to provide a productiveparallel programming model. Compared to Fortran 2008, Rice's new coarray-based language extensions include some additional features:
https://en.wikipedia.org/wiki/Coarray_Fortran
Fortressis a discontinued experimentalprogramming languageforhigh-performance computing, created bySun Microsystemswith funding fromDARPA'sHigh Productivity Computing Systemsproject. One of the language designers wasGuy L. Steele Jr., whose previous work includesScheme,Common Lisp, andJava. The name "Fortress" was intended to connote a secureFortran, i.e., "a language for high-performance computation that provides abstraction and type safety on par with modern programming language principles".[1]Language features includedimplicit parallelism,Unicodesupport and concretesyntaxsimilar tomathematical notation. The language was not designed to be similar to Fortran. Syntactically, it most resemblesScala,Standard ML, andHaskell. Fortress was designed from the outset to have multiple syntactic stylesheets. Source code can be rendered asASCIItext, inUnicode, or as a prettied image. This would allow for support of mathematical symbols and other symbols in the rendered output for easier reading. Anemacs-based tool calledfortifytransforms ASCII-based Fortress source code intoLaTeXoutput.[2] Fortress was also designed to be both highly parallel and have rich functionality contained within libraries, drawing from Java. For example, theforloop construct was a parallel operation, which would not necessarily iterate in a strictly linear manner, depending on the underlying implementation. However, theforconstruct was a library function and could be replaced by another version of the programmer's liking rather than being built into the language. Fortress' designers made its syntax as close as possible topseudocodeand analyzed hundreds ofcomputer scienceandmathematicspapers, courses, books and journals using pseudocode to extract the common usage patterns of the English language and standard mathematical notation when used to representalgorithmsin pseudocode. Then they made the compiler trying to maintain a one-to-one correspondence between pseudocode and executable Fortress.[3][better source needed] Fortress was one of three languages created with funding from theHigh Productivity Computing Systemsproject; the others wereX10from IBM andChapelfromCray, Inc. In November 2006, when DARPA approved funding for the third phase of the HPCS project, X10 and Chapel were funded, but Fortress was not,[4]leading to uncertainty about the future of Fortress. In January 2007, Fortress was released as open-source.[5]Version 1.0 of the Fortress Language Specification was released in April 2008, along with a compliant implementation targeting theJava Virtual Machine. In July 2012, Steele announced that active development on Fortress would cease after a brief winding-down period, citing complications with using Fortress's type system on existing virtual machines.[6] This is the Fortress version of the archetypalhello worldprogram, as presented in theFortress Reference Card:[2] Theexportstatement makes the programexecutableand every executable program in Fortress must implement therun()function. The file where the program is saved for compilation must have the same name as the one specified in the initialcomponentstatement. Theprintln()function is what outputs the "Hello, World!" words on the screen.
https://en.wikipedia.org/wiki/Fortress_(programming_language)
Incomputer science, analgorithmis callednon-blockingif failure orsuspensionof anythreadcannot cause failure or suspension of another thread;[1]for some operations, these algorithms provide a useful alternative to traditionalblocking implementations. A non-blocking algorithm islock-freeif there is guaranteed system-wideprogress, andwait-freeif there is also guaranteed per-thread progress. "Non-blocking" was used as a synonym for "lock-free" in the literature until the introduction of obstruction-freedom in 2003.[2] The word "non-blocking" was traditionally used to describetelecommunications networksthat could route a connection through a set of relays "without having to re-arrange existing calls"[This quote needs a citation](seeClos network). Also, if the telephone exchange "is not defective, it can always make the connection"[This quote needs a citation](seenonblocking minimal spanning switch). The traditional approach to multi-threaded programming is to uselocksto synchronize access to sharedresources. Synchronization primitives such asmutexes,semaphores, andcritical sectionsare all mechanisms by which a programmer can ensure that certain sections of code do not execute concurrently, if doing so would corrupt shared memory structures. If one thread attempts to acquire a lock that is already held by another thread, the thread will block until the lock is free. Blocking a thread can be undesirable for many reasons. An obvious reason is that while the thread is blocked, it cannot accomplish anything: if the blocked thread had been performing a high-priority orreal-timetask, it would be highly undesirable to halt its progress. Other problems are less obvious. For example, certain interactions between locks can lead to error conditions such asdeadlock,livelock, andpriority inversion. Using locks also involves a trade-off between coarse-grained locking, which can significantly reduce opportunities forparallelism, and fine-grained locking, which requires more careful design, increases locking overhead and is more prone to bugs. Unlike blocking algorithms, non-blocking algorithms do not suffer from these downsides, and in addition are safe for use ininterrupt handlers: even though thepreemptedthread cannot be resumed, progress is still possible without it. In contrast, global data structures protected by mutual exclusion cannot safely be accessed in an interrupt handler, as the preempted thread may be the one holding the lock. While this can be rectified by masking interrupt requests during the critical section, this requires the code in the critical section to have bounded (and preferably short) running time, or excessive interrupt latency may be observed.[3] A lock-free data structure can be used to improve performance. A lock-free data structure increases the amount of time spent in parallel execution rather than serial execution, improving performance on amulti-core processor, because access to the shared data structure does not need to be serialized to stay coherent.[4] With few exceptions, non-blocking algorithms useatomicread-modify-writeprimitives that the hardware must provide, the most notable of which iscompare and swap (CAS).Critical sectionsare almost always implemented using standard interfaces over these primitives (in the general case, critical sections will be blocking, even when implemented with these primitives). In the 1990s all non-blocking algorithms had to be written "natively" with the underlying primitives to achieve acceptable performance. 
However, the emerging field ofsoftware transactional memorypromises standard abstractions for writing efficient non-blocking code.[5][6] Much research has also been done in providing basicdata structuressuch asstacks,queues,sets, andhash tables. These allow programs to easily exchange data between threads asynchronously. Additionally, some non-blocking data structures are weak enough to be implemented without special atomic primitives. These exceptions include: Several libraries internally use lock-free techniques,[7][8][9]but it is difficult to write lock-free code that is correct.[10][11][12][13] Non-blocking algorithms generally involve a series of read, read-modify-write, and write instructions in a carefully designed order. Optimizing compilers can aggressively re-arrange operations. Even when they don't, many modern CPUs often re-arrange such operations (they have a "weakconsistency model"), unless amemory barrieris used to tell the CPU not to reorder.C++11programmers can usestd::atomicin<atomic>, andC11programmers can use<stdatomic.h>, both of which supply types and functions that tell the compiler not to re-arrange such instructions, and to insert the appropriate memory barriers.[14] Wait-freedom is the strongest non-blocking guarantee of progress, combining guaranteed system-wide throughput withstarvation-freedom. An algorithm is wait-free if every operation has a bound on the number of steps the algorithm will take before the operation completes.[15]This property is critical for real-time systems and is always nice to have as long as the performance cost is not too high. It was shown in the 1980s[16]that all algorithms can be implemented wait-free, and many transformations from serial code, calleduniversal constructions, have been demonstrated. However, the resulting performance does not in general match even naïve blocking designs. Several papers have since improved the performance of universal constructions, but still, their performance is far below blocking designs. Several papers have investigated the difficulty of creating wait-free algorithms. For example, it has been shown[17]that the widely available atomicconditionalprimitives,CASandLL/SC, cannot provide starvation-free implementations of many common data structures without memory costs growing linearly in the number of threads. However, these lower bounds do not present a real barrier in practice, as spending a cache line or exclusive reservation granule (up to 2 KB on ARM) of store per thread in the shared memory is not considered too costly for practical systems. Typically, the amount of store logically required is a word, but physically CAS operations on the same cache line will collide, and LL/SC operations in the same exclusive reservation granule will collide, so the amount of store physically required[citation needed]is greater.[clarification needed] Wait-free algorithms were rare until 2011, both in research and in practice. However, in 2011 Kogan andPetrank[18]presented a wait-free queue building on theCASprimitive, generally available on common hardware. Their construction expanded the lock-free queue of Michael and Scott,[19]which is an efficient queue often used in practice. A follow-up paper by Kogan and Petrank[20]provided a method for making wait-free algorithms fast and used this method to make the wait-free queue practically as fast as its lock-free counterpart. A subsequent paper by Timnat and Petrank[21]provided an automatic mechanism for generating wait-free data structures from lock-free ones. 
Thus, wait-free implementations are now available for many data-structures. Under reasonable assumptions, Alistarh, Censor-Hillel, and Shavit showed that lock-free algorithms are practically wait-free.[22]Thus, in the absence of hard deadlines, wait-free algorithms may not be worth the additional complexity that they introduce. Lock-freedom allows individual threads to starve but guarantees system-wide throughput. An algorithm is lock-free if, when the program threads are run for a sufficiently long time, at least one of the threads makes progress (for some sensible definition of progress). All wait-free algorithms are lock-free. In particular, if one thread is suspended, then a lock-free algorithm guarantees that the remaining threads can still make progress. Hence, if two threads can contend for the same mutex lock or spinlock, then the algorithm isnotlock-free. (If we suspend one thread that holds the lock, then the second thread will block.) An algorithm is lock-free if infinitely often operation by some processors will succeed in a finite number of steps. For instance, ifNprocessors are trying to execute an operation, some of theNprocesses will succeed in finishing the operation in a finite number of steps and others might fail and retry on failure. The difference between wait-free and lock-free is that wait-free operation by each process is guaranteed to succeed in a finite number of steps, regardless of the other processors. In general, a lock-free algorithm can run in four phases: completing one's own operation, assisting an obstructing operation, aborting an obstructing operation, and waiting. Completing one's own operation is complicated by the possibility of concurrent assistance and abortion, but is invariably the fastest path to completion. The decision about when to assist, abort or wait when an obstruction is met is the responsibility of acontention manager. This may be very simple (assist higher priority operations, abort lower priority ones), or may be more optimized to achieve better throughput, or lower the latency of prioritized operations. Correct concurrent assistance is typically the most complex part of a lock-free algorithm, and often very costly to execute: not only does the assisting thread slow down, but thanks to the mechanics of shared memory, the thread being assisted will be slowed, too, if it is still running. Obstruction-freedom is the weakest natural non-blocking progress guarantee. An algorithm is obstruction-free if at any point, a single thread executed in isolation (i.e., with all obstructing threads suspended) for a bounded number of steps will complete its operation.[15]All lock-free algorithms are obstruction-free. Obstruction-freedom demands only that any partially completed operation can be aborted and the changes made rolled back. Dropping concurrent assistance can often result in much simpler algorithms that are easier to validate. Preventing the system from continuallylive-lockingis the task of a contention manager. Some obstruction-free algorithms use a pair of "consistency markers" in the data structure. Processes reading the data structure first read one consistency marker, then read the relevant data into an internal buffer, then read the other marker, and then compare the markers. The data is consistent if the two markers are identical. Markers may be non-identical when the read is interrupted by another process updating the data structure. In such a case, the process discards the data in the internal buffer and tries again.
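To make the retry-on-failure behaviour described above concrete, here is a minimal C++11 sketch of a Treiber-style lock-free stack push built on compare-and-swap; it is an illustrative example, not code from any of the cited papers. A thread whose compare_exchange_weak fails simply reloads the head pointer and retries, so at least one thread always makes progress, which is exactly the lock-freedom guarantee. A matching pop is deliberately omitted, because safe memory reclamation and the ABA problem require considerably more care.

#include <atomic>

// Illustrative Treiber-style lock-free stack (push only).
template <typename T>
class LockFreeStack {
    struct Node {
        T value;
        Node* next;
    };
    std::atomic<Node*> head{nullptr};

public:
    void push(const T& value) {
        Node* node = new Node{value, head.load(std::memory_order_relaxed)};
        // CAS retry loop: if another thread updated head concurrently,
        // compare_exchange_weak fails, writes the current head into
        // node->next, and the loop simply tries again.
        while (!head.compare_exchange_weak(node->next, node,
                                           std::memory_order_release,
                                           std::memory_order_relaxed)) {
            // retry until the CAS succeeds
        }
    }

    bool empty() const {
        return head.load(std::memory_order_acquire) == nullptr;
    }
};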
https://en.wikipedia.org/wiki/Non-blocking_algorithm
Structured programming is a programming paradigm aimed at improving the clarity, quality, and development time of a computer program by making specific disciplined use of the structured control flow constructs of selection (if/then/else) and repetition (while and for), block structures, and subroutines. It emerged in the late 1950s with the appearance of the ALGOL 58 and ALGOL 60 programming languages,[1] with the latter including support for block structures. Contributing factors to its popularity and widespread acceptance, at first in academia and later among practitioners, include the discovery of what is now known as the structured program theorem in 1966,[2] and the publication of the influential "Go To Statement Considered Harmful" open letter in 1968 by Dutch computer scientist Edsger W. Dijkstra, who coined the term "structured programming".[3] Structured programming is most frequently used with deviations that allow for clearer programs in some particular cases, such as when exception handling has to be performed. Following the structured program theorem, all programs are seen as composed of three control structures: sequence (statements or subroutines executed one after another in order), selection (one of a number of statements is executed depending on the state of the program, usually expressed with keywords such as if..then..else), and iteration (a statement or block is executed until the program reaches a certain state, usually expressed with keywords such as while, repeat or for). Subroutines (callable units such as procedures, functions, methods, or subprograms) are used to allow a sequence to be referred to by a single statement. Blocks are used to enable groups of statements to be treated as if they were one statement. Block-structured languages have a syntax for enclosing structures in some formal way, such as an if-statement bracketed by if..fi as in ALGOL 68, or a code section bracketed by BEGIN..END, as in PL/I and Pascal, whitespace indentation as in Python, or the curly braces {...} of C and many later languages. It is possible to do structured programming with any programming language enabling code blocks and the three types of control structures, even though a procedural programming language is able to break the structure.[4][5] Some of the languages initially used for structured programming include ALGOL, Pascal, PL/I, Ada and RPL, but most new procedural programming languages since that time have included features to encourage structured programming, and sometimes deliberately left out features – notably GOTO – in an effort to make unstructured programming more difficult. Structured programming (sometimes known as modular programming[4]) enforces a logical structure on the program being written to make it more efficient and easier to understand and modify. The structured program theorem provides the theoretical basis of structured programming. It states that three ways of combining programs (sequencing, selection, and iteration) are sufficient to express any computable function. This observation did not originate with the structured programming movement; these structures are sufficient to describe the instruction cycle of a central processing unit, as well as the operation of a Turing machine. Therefore, a processor is always executing a "structured program" in this sense, even if the instructions it reads from memory are not part of a structured program. However, authors usually credit the result to a 1966 paper by Böhm and Jacopini, possibly because Dijkstra cited this paper himself.[6] The structured program theorem does not address how to write and analyze a usefully structured program. These issues were addressed during the late 1960s and early 1970s, with major contributions by Dijkstra, Robert W. Floyd, Tony Hoare, Ole-Johan Dahl, and David Gries. P. J. 
Plauger, anearly adopterof structured programming, described his reaction to the structured program theorem: Us converts waved this interesting bit of news under the noses of the unreconstructed assembly-language programmers who kept trotting forth twisty bits of logic and saying, 'I betcha can't structure this.' Neither the proof by Böhm and Jacopini nor our repeated successes at writing structured code brought them around one day sooner than they were ready to convince themselves.[7] Donald Knuthaccepted the principle that programs must be written with provability in mind, but he disagreed with abolishing the GOTO statement, and as of 2018[update]has continued to use it in his programs.[8]In his 1974 paper, "Structured Programming with Goto Statements",[9]he gave examples where he believed that a direct jump leads to clearer and more efficient code without sacrificing provability. Knuth proposed a looser structural constraint: It should be possible to draw a program'sflow chartwith all forward branches on the left, all backward branches on the right, and no branches crossing each other. Many of those knowledgeable incompilersandgraph theoryhave advocated allowing onlyreducible flow graphs.[when defined as?][who?] Structured programming theorists gained a major ally in the 1970s afterIBMresearcherHarlan Millsapplied his interpretation of structured programming theory to the development of an indexing system forThe New York Timesresearch file. The project was a great engineering success, and managers at other companies cited it in support of adopting structured programming, although Dijkstra criticized the ways that Mills's interpretation differed from the published work.[10] As late as 1987 it was still possible to raise the question of structured programming in a computer science journal. Frank Rubin did so in that year with an open letter titled "'GOTO Considered Harmful' Considered Harmful".[11]Numerous objections followed, including a response from Dijkstra that sharply criticized both Rubin and the concessions other writers made when responding to him. By the end of the 20th century, nearly all computer scientists were convinced that it is useful to learn and apply the concepts of structured programming. High-level programming languages that originally lacked programming structures, such asFORTRAN,COBOL, andBASIC, now have them. While goto has now largely been replaced by the structured constructs of selection (if/then/else) and repetition (while and for), few languages are purely structured. The most common deviation, found in many languages, is the use of areturn statementfor early exit from a subroutine. This results in multiple exit points, instead of the single exit point required by structured programming. There are other constructions to handle cases that are awkward in purely structured programming. The most common deviation from structured programming isearly exitfrom a function or loop. At the level of functions, this is areturnstatement. At the level of loops, this is abreakstatement (terminate the loop) orcontinuestatement (terminate the current iteration, proceed with next iteration). In structured programming, these can be replicated by adding additional branches or tests, but for returns from nested code this can add significant complexity.Cis an early and prominent example of these constructs. Some newer languages also have "labeled breaks", which allow breaking out of more than just the innermost loop. 
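The replication of an early loop exit by an extra test, mentioned above, can be made concrete with a small hypothetical C++ comparison (the function names are invented for illustration): the first version uses break, the second is the single-exit, fully structured equivalent.

#include <cstddef>
#include <vector>

// Early-exit version: leaves the loop with break as soon as the target is found.
int find_with_break(const std::vector<int>& xs, int target) {
    int index = -1;
    for (std::size_t i = 0; i < xs.size(); ++i) {
        if (xs[i] == target) {
            index = static_cast<int>(i);
            break;  // early exit from the loop
        }
    }
    return index;
}

// Structured (single-exit) version: the break is replaced by an extra
// condition in the loop test, as described in the text above.
int find_structured(const std::vector<int>& xs, int target) {
    int index = -1;
    std::size_t i = 0;
    while (i < xs.size() && index == -1) {
        if (xs[i] == target) {
            index = static_cast<int>(i);
        }
        ++i;
    }
    return index;
}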
Exceptions also allow early exit, but have further consequences, and thus are treated below. Multiple exits can arise for a variety of reasons, most often either that the subroutine has no more work to do (if returning a value, it has completed the calculation), or has encountered "exceptional" circumstances that prevent it from continuing, hence needing exception handling. The most common problem with early exit is that cleanup or final statements are not executed – for example, allocated memory is not deallocated, or open files are not closed, causing memory leaks or resource leaks. These must be done at each return site, which is brittle and can easily result in bugs. For instance, in later development, a return statement could be overlooked by a developer, and an action that should be performed at the end of a subroutine (e.g., a trace statement) might not be performed in all cases. Languages without a return statement, such as standard Pascal and Seed7, do not have this problem. Most modern languages provide language-level support to prevent such leaks;[12] see the detailed discussion at resource management. Most commonly this is done via unwind protection, which ensures that certain code is guaranteed to be run when execution exits a block; this is a structured alternative to having a cleanup block and a goto. This is most often known as try...finally, and considered a part of exception handling. When a function has multiple return statements, introducing try...finally without any use of exceptions may look strange, but it still guarantees that the cleanup code runs. Various techniques exist to encapsulate resource management. An alternative approach, found primarily in C++, is Resource Acquisition Is Initialization, which uses normal stack unwinding (variable deallocation) at function exit to call destructors on local variables to deallocate resources. Kent Beck, Martin Fowler and co-authors have argued in their refactoring books that nested conditionals may be harder to understand than a certain type of flatter structure using multiple exits predicated by guard clauses. Their 2009 book flatly states that "one exit point is really not a useful rule. Clarity is the key principle: If the method is clearer with one exit point, use one exit point; otherwise don't". They offer a cookbook solution for transforming a function consisting only of nested conditionals into a sequence of guarded return (or throw) statements, followed by a single unguarded block, which is intended to contain the code for the common case, while the guarded statements are supposed to deal with the less common ones (or with errors).[13] Herb Sutter and Andrei Alexandrescu also argue in their 2004 C++ tips book that the single-exit point is an obsolete requirement.[14] In his 2004 textbook, David Watt writes that "single-entry multi-exit control flows are often desirable". Using Tennent's framework notion of sequencer, Watt uniformly describes the control flow constructs found in contemporary programming languages and attempts to explain why certain types of sequencers are preferable to others in the context of multi-exit control flows. Watt writes that unrestricted gotos (jump sequencers) are bad because the destination of the jump is not self-explanatory to the reader of a program until the reader finds and examines the actual label or address that is the target of the jump. In contrast, Watt argues that the conceptual intent of a return sequencer is clear from its own context, without having to examine its destination.
Watt writes that a class of sequencers known as escape sequencers, defined as a "sequencer that terminates execution of a textually enclosing command or procedure", encompasses both breaks from loops (including multi-level breaks) and return statements. Watt also notes that while jump sequencers (gotos) have been somewhat restricted in languages like C, where the target must be inside the local block or an encompassing outer block, that restriction alone is not sufficient to make the intent of gotos in C self-describing, and so they can still produce "spaghetti code". Watt also examines how exception sequencers differ from escape and jump sequencers; this is explained in the next section of this article.[15] In contrast to the above, Bertrand Meyer wrote in his 2009 textbook that instructions like break and continue "are just the old goto in sheep's clothing" and strongly advised against their use.[16] Based on the coding error from the Ariane 501 disaster, software developer Jim Bonang argues that any exceptions thrown from a function violate the single-exit paradigm, and proposes that all inter-procedural exceptions should be forbidden. Bonang proposes that single-exit conforming C++ functions should catch every exception internally and funnel all outcomes through a single return statement at the end of the function. Peter Ritchie also notes that, in principle, even a single throw right before the return in a function constitutes a violation of the single-exit principle, but argues that Dijkstra's rules were written in a time before exception handling became a paradigm in programming languages, so he proposes to allow any number of throw points in addition to a single return point. He notes that solutions that wrap exceptions for the sake of creating a single exit have higher nesting depth and thus are more difficult to comprehend, and even accuses those who propose to apply such solutions to programming languages that support exceptions of engaging in cargo cult thinking.[17] David Watt also analyzes exception handling in the framework of sequencers (introduced in this article in the previous section, on early exits). Watt notes that an abnormal situation (generally exemplified with arithmetic overflows or input/output failures like file not found) is a kind of error that "is detected in some low-level program unit, but [for which] a handler is more naturally located in a high-level program unit". For example, a program might contain several calls to read files, but the action to perform when a file is not found depends on the meaning (purpose) of the file in question to the program, and thus a handling routine for this abnormal situation cannot be located in low-level system code. Watt further notes that introducing status-flag testing in the caller, as single-exit structured programming or even (multi-exit) return sequencers would entail, results in a situation where "the application code tends to get cluttered by tests of status flags" and that "the programmer might forgetfully or lazily omit to test a status flag. In fact, abnormal situations represented by status flags are by default ignored!" He notes that, in contrast to status-flag testing, exceptions have the opposite default behavior, causing the program to terminate unless the programmer explicitly deals with the exception in some way, possibly by adding code to willfully ignore it.
Based on these arguments, Watt concludes that jump sequencers or escape sequencers (discussed in the previous section) are not as suitable as a dedicated exception sequencer with the semantics discussed above.[18] The textbook by Louden and Lambert emphasizes that exception handling differs from structured programming constructs likewhileloops because the transfer of control "is set up at a different point in the program than that where the actual transfer takes place. At the point where the transfer actually occurs, there may be no syntactic indication that control will in fact be transferred."[19]Computer science professor Arvind Kumar Bansal also notes that in languages which implement exception handling, even control structures likefor, which have the single-exit property in absence of exceptions, no longer have it in presence of exceptions, because an exception can prematurely cause an early exit in any part of the control structure; for instance ifinit()throws an exception infor (init(); check(); increm()), then the usual exit point after check() is not reached.[20]Citing multiple prior studies by others (1999–2004) and their own results, Westley Weimer andGeorge Neculawrote that a significant problem with exceptions is that they "create hidden control-flow paths that are difficult for programmers to reason about".[21] The necessity to limit code to single-exit points appears in some contemporary programming environments focused onparallel computing, such asOpenMP. The various parallel constructs from OpenMP, likeparallel do, do not allow early exits from inside to the outside of the parallel construct; this restriction includes all manner of exits, frombreakto C++ exceptions, but all of these are permitted inside the parallel construct if the jump target is also inside it.[22] More rarely, subprograms allow multipleentry.This is most commonly onlyre-entry into acoroutine(orgenerator/semicoroutine), where a subprogram yields control (and possibly a value), but can then be resumed where it left off. There are a number ofcommon usesof such programming, notably forstreams(particularly input/output), state machines, and concurrency. From a code execution point of view, yielding from a coroutine is closer to structured programming than returning from a subroutine, as the subprogram has not actually terminated, and will continue when called again – it is not an early exit. However, coroutines mean that multiple subprograms have execution state – rather than a single call stack of subroutines – and thus introduce a different form of complexity. It is very rare for subprograms to allow entry to an arbitrary position in the subprogram, as in this case the program state (such as variable values) is uninitialized or ambiguous, and this is very similar to a goto. Some programs, particularlyparsersandcommunications protocols, have a number ofstatesthat follow each other in a way that is not easily reduced to the basic structures, and some programmers implement the state-changes with a jump to the new state. This type of state-switching is often used in the Linux kernel.[citation needed] However, it is possible to structure these systems by making each state-change a separate subprogram and using a variable to indicate the active state (seetrampoline). Alternatively, these can be implemented via coroutines, which dispense with the trampoline.
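As a rough illustration of the state-variable approach just described, the following C sketch (the states and handler functions are hypothetical) drives a small state machine from a single dispatch loop instead of jumping between labels:

```c
#include <stdio.h>

typedef enum { ST_START, ST_BODY, ST_DONE } state_t;

/* Each state change is a separate subprogram that returns the next state. */
static state_t do_start(void) { puts("start"); return ST_BODY; }
static state_t do_body(void)  { puts("body");  return ST_DONE; }

int main(void) {
    state_t state = ST_START;          /* variable indicating the active state */
    while (state != ST_DONE) {         /* trampoline-style dispatch loop */
        switch (state) {
        case ST_START: state = do_start(); break;
        case ST_BODY:  state = do_body();  break;
        default:       state = ST_DONE;    break;
        }
    }
    return 0;
}
```

The dispatch loop plays the role of the trampoline: every transition returns to one place, so the control flow stays structured even though the set of states can grow arbitrarily.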
https://en.wikipedia.org/wiki/Structured_programming
Data parallelism is parallelization across multiple processors in parallel computing environments. It focuses on distributing the data across different nodes, which operate on the data in parallel. It can be applied to regular data structures like arrays and matrices by working on each element in parallel. It contrasts with task parallelism, another form of parallelism. A data parallel job on an array of n elements can be divided equally among all the processors. Let us assume we want to sum all the elements of the given array and the time for a single addition operation is Ta time units. In the case of sequential execution, the time taken by the process will be n×Ta time units as it sums up all the elements of an array. On the other hand, if we execute this job as a data parallel job on 4 processors, the time taken would reduce to (n/4)×Ta + merging overhead time units. Parallel execution results in a speedup of 4 over sequential execution. The locality of data references plays an important part in evaluating the performance of a data parallel programming model. Locality of data depends on the memory accesses performed by the program as well as the size of the cache. Exploitation of the concept of data parallelism started in the 1960s with the development of the Solomon machine.[1] The Solomon machine, also called a vector processor, was developed to expedite the performance of mathematical operations by working on a large data array (operating on multiple data in consecutive time steps). Concurrency of data operations was also exploited by operating on multiple data at the same time using a single instruction. These processors were called 'array processors'.[2] In the 1980s, the term "data parallelism" was introduced[3] to describe this programming style, which was widely used to program Connection Machines in data parallel languages like C*. Today, data parallelism is best exemplified in graphics processing units (GPUs), which operate on multiple data in both space and time using a single instruction. Most data parallel hardware supports only a fixed number of parallel levels, often only one. This means that within a parallel operation it is not possible to launch more parallel operations recursively, and means that programmers cannot make use of nested hardware parallelism. The programming language NESL was an early effort at implementing a nested data-parallel programming model on flat parallel machines, and in particular introduced the flattening transformation that transforms nested data parallelism to flat data parallelism. This work was continued by other languages such as Data Parallel Haskell and Futhark, although arbitrary nested data parallelism is not widely available in current data-parallel programming languages. In a multiprocessor system executing a single set of instructions (SIMD), data parallelism is achieved when each processor performs the same task on different distributed data. In some situations, a single execution thread controls operations on all the data. In others, different threads control the operation, but they execute the same code. For instance, consider matrix multiplication and addition in a sequential manner as discussed in the example. Below is a sketch of the sequential code for multiplication and addition, with the result of the multiplication stored in the matrix C. The code for multiplication calculates the dot product of each row of A with each column of B and stores the result into the output matrix C.
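The following is a minimal C sketch of such sequential code; the n-by-n matrix shape and the separate n-element array-addition routine are illustrative assumptions, not the exact pseudocode from the original article:

```c
/* Sequential matrix multiplication: C = A * B for n x n matrices. */
void matmul(int n, double A[n][n], double B[n][n], double C[n][n]) {
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++) {
            double sum = 0.0;
            for (int k = 0; k < n; k++)
                sum += A[i][k] * B[k][j];   /* dot product of row i and column j */
            C[i][j] = sum;
        }
}

/* Sequential element-wise addition of two n-element arrays: c = a + b. */
void array_add(int n, const double a[], const double b[], double c[]) {
    for (int i = 0; i < n; i++)
        c[i] = a[i] + b[i];
}
```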
If code along these lines is executed sequentially, the time taken to calculate the result is O(n^3) (assuming the row lengths and column lengths of both matrices are n) for the multiplication and O(n) for the addition. We can exploit data parallelism in this code to execute it faster, as the arithmetic is loop-independent. Parallelization of the matrix multiplication code is achieved by using OpenMP (see the sketch at the end of this section). An OpenMP directive, "omp parallel for", instructs the compiler to execute the code in the for loop in parallel. For multiplication, we can divide matrices A and B into blocks along rows and columns respectively. This allows us to calculate every element in matrix C individually, thereby making the task parallel. For example, A[m × n] dot B[n × k] can be finished in O(n) instead of O(m·n·k) when executed in parallel using m·k processors. It can be observed from the example that many processors are required as the matrix sizes keep increasing. Keeping the execution time low is the priority, but as the matrix size increases, we are faced with other constraints such as the complexity of such a system and its associated costs. Therefore, constraining the number of processors in the system, we can still apply the same principle and divide the data into bigger chunks to calculate the product of two matrices.[4] For addition of arrays in a data parallel implementation, consider a more modest system with two central processing units (CPUs), A and B. CPU A could add all elements from the top half of the arrays, while CPU B could add all elements from the bottom half of the arrays. Since the two processors work in parallel, the job of performing array addition would take one half the time of performing the same operation in serial using one CPU alone. A program that applies some arbitrary operation, foo, to every element of an array d also illustrates data parallelism (see the sketch at the end of this section).[nb 1] In an SPMD system executed on a 2-processor system, both CPUs execute the code. Data parallelism emphasizes the distributed (parallel) nature of the data, as opposed to the processing (task parallelism). Most real programs fall somewhere on a continuum between task parallelism and data parallelism. The process of parallelizing a sequential program can be broken down into four discrete steps.[5][6] Data and task parallelism can be implemented simultaneously by combining them for the same application. This is called mixed data and task parallelism. Mixed parallelism requires sophisticated scheduling algorithms and software support. It is the best kind of parallelism when communication is slow and the number of processors is large.[7] Mixed data and task parallelism has many applications. A variety of data parallel programming environments are available today. Data parallelism finds its applications in a variety of fields ranging from physics, chemistry, biology, and materials science to signal processing. The sciences apply data parallelism to simulate models such as molecular dynamics,[9] sequence analysis of genome data,[10] and other physical phenomena. Driving forces for data parallelism in signal processing include video encoding, image and graphics processing, and wireless communications,[11] to name a few.
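A sketch of the OpenMP parallelizations discussed above follows; the loop structure mirrors the sequential sketch earlier in this section, and foo is a hypothetical per-element operation declared here only for illustration:

```c
#include <omp.h>

extern double foo(double);   /* hypothetical operation applied to each element */

/* Data-parallel matrix multiplication: iterations of the outer loop are
   independent, so "omp parallel for" distributes them across threads. */
void matmul_parallel(int n, double A[n][n], double B[n][n], double C[n][n]) {
    #pragma omp parallel for
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++) {
            double sum = 0.0;
            for (int k = 0; k < n; k++)
                sum += A[i][k] * B[k][j];
            C[i][j] = sum;
        }
}

/* Data parallelism on a 1-D array: apply foo to every element of d. */
void apply_foo(int n, double d[n]) {
    #pragma omp parallel for
    for (int i = 0; i < n; i++)
        d[i] = foo(d[i]);
}
```

With two CPUs, the runtime would typically hand the top half of the iteration range to one thread and the bottom half to the other, which is exactly the split described in the array-addition discussion above.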
https://en.wikipedia.org/wiki/Data_parallelism
DOACROSS parallelism is a parallelization technique used to achieve loop-level parallelism by utilizing synchronisation primitives between statements in a loop. This technique is used when a loop cannot be fully parallelized by DOALL parallelism due to data dependencies between loop iterations, typically loop-carried dependencies. The sections of the loop which contain loop-carried dependence are synchronized, while each section is treated as a parallel task in its own right. Therefore, DOACROSS parallelism can be used to complement DOALL parallelism to reduce loop execution times. DOACROSS parallelism is particularly useful when one statement depends on the values generated by another statement. In such a loop, DOALL parallelism cannot be implemented in a straightforward manner. If the first statement blocks the execution of the second statement until the required value has been produced, then the two statements are able to execute independently of each other, i.e., each of the aforementioned statements can be parallelized for simultaneous execution[1] using DOALL parallelism. Pseudocode illustrating the operation of DOACROSS parallelism in such a situation is sketched at the end of this section.[2] In this example, each iteration of the loop requires a value written into a by an earlier iteration. However, the entire statement is not dependent on the earlier iteration, but only a portion of it. The statement is split into two blocks to illustrate this. The first statement has no loop-carried dependence, and its result is stored in the variable temp. The post() command is used to signal that the required result has been produced for utilization by other threads. The wait(i-2) command waits for the value a[i-2] before unblocking. The execution time of DOACROSS parallelism largely depends on what fraction of the program suffers from loop-carried dependence. Larger gains are observed when a sizable portion of the loop is affected by loop-carried dependence.[2] DOACROSS parallelism suffers from significant space and granularity overheads due to the synchronization primitives used. Modern compilers often overlook this method because of this major disadvantage.[1] The overheads may be reduced by reducing the frequency of synchronization across the loop, by applying the primitives to groups of statements at a time.[2]
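A sketch of the kind of loop described above follows. The post() and wait() synchronization primitives, the array names, and the exact arithmetic are illustrative assumptions; only the split into an independent block and a synchronized block reflects the description in the text:

```c
extern void post(int i);   /* hypothetical: signal that a[i] has been produced  */
extern void wait(int i);   /* hypothetical: block until a[i] has been posted    */

/* DOACROSS-style loop: in an actual DOACROSS execution the iterations below
   would be distributed across processors, overlapping their independent parts. */
void doacross_loop(int n, double a[], const double b[], const double c[]) {
    for (int i = 2; i < n; i++) {
        double temp = b[i] + c[i];   /* block 1: no loop-carried dependence      */
        wait(i - 2);                 /* block until a[i-2] is available          */
        a[i] = a[i - 2] + temp;      /* block 2: the only part that must be ordered */
        post(i);                     /* signal that a[i] may now be consumed     */
    }
}
```

Only the second block is serialized across iterations; the work in the first block can proceed concurrently on different processors, which is where the speedup over a purely sequential loop comes from.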
https://en.wikipedia.org/wiki/DOACROSS_parallelism
Task parallelism (also known as function parallelism and control parallelism) is a form of parallelization of computer code across multiple processors in parallel computing environments. Task parallelism focuses on distributing tasks—concurrently performed by processes or threads—across different processors. In contrast to data parallelism, which involves running the same task on different components of data, task parallelism is distinguished by running many different tasks at the same time on the same data.[1] A common type of task parallelism is pipelining, which consists of moving a single set of data through a series of separate tasks where each task can execute independently of the others. In a multiprocessor system, task parallelism is achieved when each processor executes a different thread (or process) on the same or different data. The threads may execute the same or different code. In the general case, different execution threads communicate with one another as they work, but this is not a requirement. Communication usually takes place by passing data from one thread to the next as part of a workflow.[2] As a simple example, if a system is running code on a 2-processor system (CPUs "a" and "b") in a parallel environment and we wish to do tasks "A" and "B", it is possible to tell CPU "a" to do task "A" and CPU "b" to do task "B" simultaneously, thereby reducing the run time of the execution. The tasks can be assigned using conditional statements (a sketch appears at the end of this section). Task parallelism emphasizes the distributed (parallelized) nature of the processing (i.e. threads), as opposed to the data (data parallelism). Most real programs fall somewhere on a continuum between task parallelism and data parallelism.[3] Thread-level parallelism (TLP) is the parallelism inherent in an application that runs multiple threads at once. This type of parallelism is found largely in applications written for commercial servers such as databases. By running many threads at once, these applications are able to tolerate the high amounts of I/O and memory system latency their workloads can incur: while one thread is delayed waiting for a memory or disk access, other threads can do useful work. The exploitation of thread-level parallelism has also begun to make inroads into the desktop market with the advent of multi-core microprocessors. This has occurred because, for various reasons, it has become increasingly impractical to increase either the clock speed or instructions per clock of a single core. If this trend continues, new applications will have to be designed to utilize multiple threads in order to benefit from the increase in potential computing power. This contrasts with previous microprocessor innovations in which existing code was automatically sped up by running it on a newer, faster computer. Pseudocode for this style of task parallelism is sketched at the end of this section. The goal of the program is to do some net total task ("A+B"). If the code is written with such a conditional and launched on a 2-processor system, then at run time CPU "a" executes only the branch that performs task "A", while CPU "b" executes only the branch that performs task "B". This concept can now be generalized to any number of processors. Task parallelism can be supported in general-purpose languages by either built-in facilities or libraries. Examples of fine-grained task-parallel languages can be found in the realm of Hardware Description Languages like Verilog and VHDL.
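The sketch below illustrates the conditional assignment described above. The my_id(), do_task_A(), and do_task_B() functions are hypothetical placeholders for whatever the runtime environment provides to identify the executing processor and perform the work:

```c
extern int  my_id(void);        /* hypothetical: which processor runs this copy of the code */
extern void do_task_A(void);    /* hypothetical task bodies */
extern void do_task_B(void);

/* Both CPUs run the same program; the conditional assigns different tasks. */
void run(void) {
    if (my_id() == 0)           /* CPU "a" */
        do_task_A();
    else if (my_id() == 1)      /* CPU "b" */
        do_task_B();
    /* CPU "a" never executes the task "B" branch, and vice versa, so the two
       tasks proceed simultaneously and the net task "A+B" completes sooner.   */
}
```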
https://en.wikipedia.org/wiki/Task_parallelism
In computer science, distributed memory refers to a multiprocessor computer system in which each processor has its own private memory.[1] Computational tasks can only operate on local data, and if remote data are required, the computational task must communicate with one or more remote processors. In contrast, a shared memory multiprocessor offers a single memory space used by all processors. Processors do not have to be aware of where data resides, except that there may be performance penalties, and that race conditions are to be avoided. In a distributed memory system there is typically a processor, a memory, and some form of interconnection that allows programs on each processor to interact with each other. The interconnect can be organised with point-to-point links, or separate hardware can provide a switching network. The network topology is a key factor in determining how the multiprocessor machine scales. The links between nodes can be implemented using some standard network protocol (for example Ethernet), using bespoke network links (used, for example, in the transputer), or using dual-ported memories. The key issue in programming distributed memory systems is how to distribute the data over the memories. Depending on the problem solved, the data can be distributed statically, or it can be moved through the nodes. Data can be moved on demand, or data can be pushed to the new nodes in advance. As an example, if a problem can be described as a pipeline where data x is processed successively through functions f, g, h, etc. (the result is h(g(f(x)))), then this can be expressed as a distributed memory problem where the data is transmitted first to the node that performs f, which passes the result on to the second node that computes g, and finally to the third node that computes h (a sketch of such a pipeline appears at the end of this section). This is also known as systolic computation. Data can be kept statically in nodes if most computations happen locally, and only changes on edges have to be reported to other nodes. An example of this is simulation where data is modeled using a grid, and each node simulates a small part of the larger grid. On every iteration, nodes inform all neighboring nodes of the new edge data. Similarly, in distributed shared memory each node of a cluster has access to a large shared memory in addition to each node's limited non-shared private memory. Distributed shared memory hides the mechanism of communication, but it does not hide the latency of communication.
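As a rough sketch of the pipeline h(g(f(x))) described above, the following C fragment uses MPI point-to-point messages (MPI is covered later in this document) to move the data between three nodes; the stage functions f, g, and h are hypothetical, and the single-double payload is an illustrative simplification:

```c
#include <mpi.h>

extern double f(double), g(double), h(double);   /* hypothetical pipeline stages */

/* Rank 0 applies f, rank 1 applies g, rank 2 applies h; data flows 0 -> 1 -> 2. */
void pipeline(double x) {
    int rank;
    double v;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        v = f(x);                                       /* first stage, local memory only */
        MPI_Send(&v, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&v, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        v = g(v);                                       /* second stage */
        MPI_Send(&v, 1, MPI_DOUBLE, 2, 0, MPI_COMM_WORLD);
    } else if (rank == 2) {
        MPI_Recv(&v, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        v = h(v);   /* final result h(g(f(x))) now resides in rank 2's private memory */
    }
}
```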
https://en.wikipedia.org/wiki/Distributed_memory
TheMessage Passing Interface(MPI) is a portablemessage-passingstandard designed to function onparallel computingarchitectures.[1]The MPI standard defines thesyntaxandsemanticsoflibrary routinesthat are useful to a wide range of users writingportablemessage-passing programs inC,C++, andFortran. There are severalopen-sourceMPIimplementations, which fostered the development of a parallelsoftware industry, and encouraged development of portable and scalable large-scale parallel applications. The message passing interface effort began in the summer of 1991 when a small group of researchers started discussions at a mountain retreat in Austria. Out of that discussion came a Workshop on Standards for Message Passing in a Distributed Memory Environment, held on April 29–30, 1992 inWilliamsburg, Virginia.[2]Attendees at Williamsburg discussed the basic features essential to a standard message-passing interface and established a working group to continue the standardization process.Jack Dongarra,Tony Hey, and David W. Walker put forward a preliminary draft proposal, "MPI1", in November 1992. In November 1992 a meeting of the MPI working group took place in Minneapolis and decided to place the standardization process on a more formal footing. The MPI working group met every 6 weeks throughout the first 9 months of 1993. The draft MPI standard was presented at the Supercomputing '93 conference in November 1993.[3]After a period of public comments, which resulted in some changes in MPI, version 1.0 of MPI was released in June 1994. These meetings and the email discussion together constituted the MPI Forum, membership of which has been open to all members of thehigh-performance-computingcommunity. The MPI effort involved about 80 people from 40 organizations, mainly in the United States and Europe. Most of the major vendors ofconcurrent computerswere involved in the MPI effort, collaborating with researchers from universities, government laboratories, andindustry. MPI provides parallel hardware vendors with a clearly defined base set of routines that can be efficiently implemented. As a result, hardware vendors can build upon this collection of standardlow-levelroutines to createhigher-levelroutines for the distributed-memory communication environment supplied with theirparallel machines. MPI provides a simple-to-use portable interface for the basic user, yet one powerful enough to allow programmers to use the high-performance message passing operations available on advanced machines. In an effort to create a universal standard for message passing, researchers did not base it off of a single system but it incorporated the most useful features of several systems, including those designed by IBM,Intel,nCUBE,PVM, Express, P4 and PARMACS. The message-passing paradigm is attractive because of wide portability and can be used in communication for distributed-memory and shared-memory multiprocessors, networks of workstations, and a combination of these elements. The paradigm can apply in multiple settings, independent of network speed or memory architecture. Support for MPI meetings came in part fromDARPAand from the U.S.National Science Foundation(NSF) under grant ASC-9310330, NSF Science and Technology Center Cooperative agreement number CCR-8809615, and from theEuropean Commissionthrough Esprit Project P6643. TheUniversity of Tennesseealso made financial contributions to the MPI Forum. MPI is acommunication protocolfor programming[4]parallel computers. 
Both point-to-point and collective communication are supported. MPI "is a message-passing application programmer interface, together with protocol and semantic specifications for how its features must behave in any implementation."[5]MPI's goals are high performance, scalability, and portability. MPI remains the dominant model used inhigh-performance computingas of 2006.[6] MPI is not sanctioned by any major standards body; nevertheless, it has become ade factostandardforcommunicationamong processes that model aparallel programrunning on adistributed memorysystem. Actual distributed memory supercomputers such as computer clusters often run such programs. The principal MPI-1 model has noshared memoryconcept, and MPI-2 has only a limiteddistributed shared memoryconcept. Nonetheless, MPI programs are regularly run on shared memory computers, and bothMPICHandOpen MPIcan use shared memory for message transfer if it is available.[7][8]Designing programs around the MPI model (contrary to explicitshared memorymodels) has advantages when running onNUMAarchitectures since MPI encouragesmemory locality. Explicit shared memory programming was introduced in MPI-3.[9][10][11] Although MPI belongs in layers 5 and higher of theOSI Reference Model, implementations may cover most layers, withsocketsandTransmission Control Protocol(TCP) used in the transport layer. Most MPI implementations consist of a specific set of routines directly callable fromC,C++,Fortran(i.e., an API) and any language able to interface with such libraries, includingC#,JavaorPython. The advantages of MPI over older message passing libraries are portability (because MPI has been implemented for almost every distributed memory architecture) and speed (because each implementation is in principle optimized for the hardware on which it runs). MPI usesLanguage Independent Specifications(LIS) for calls and language bindings. The first MPI standard specifiedANSI Cand Fortran-77 bindings together with the LIS. The draft was presented at Supercomputing 1994 (November 1994)[12]and finalized soon thereafter. About 128 functions constitute the MPI-1.3 standard which was released as the final end of the MPI-1 series in 2008.[13] At present, the standard has several versions: version 1.3 (commonly abbreviatedMPI-1), which emphasizes message passing and has a static runtime environment, MPI-2.2 (MPI-2), which includes new features such as parallel I/O, dynamic process management and remote memory operations,[14]and MPI-3.1 (MPI-3), which includes extensions to the collective operations with non-blocking versions and extensions to the one-sided operations.[15]MPI-2's LIS specifies over 500 functions and provides language bindings for ISOC, ISOC++, andFortran 90. Object interoperability was also added to allow easier mixed-language message passing programming. A side-effect of standardizing MPI-2, completed in 1996, was clarifying the MPI-1 standard, creating the MPI-1.2. MPI-2is mostly a superset of MPI-1, although some functions have been deprecated. MPI-1.3 programs still work under MPI implementations compliant with the MPI-2 standard. MPI-3.0introduces significant updates to the MPI standard, including nonblocking versions of collective operations, enhancements to one-sided operations, and a Fortran 2008 binding. It removes deprecated C++ bindings and various obsolete routines and objects. Importantly, any valid MPI-2.2 program that avoids the removed elements is also valid in MPI-3.0. 
MPI-3.1is a minor update focused on corrections and clarifications, particularly for Fortran bindings. It introduces new functions for manipulating MPI_Aint values, nonblocking collective I/O routines, and methods for retrieving index values by name for MPI_T performance variables. Additionally, a general index was added. All valid MPI-3.0 programs are also valid in MPI-3.1. MPI-4.0is a major update that introduces large-count versions of many routines, persistent collective operations, partitioned communications, and a new MPI initialization method. It also adds application info assertions and improves error handling definitions, along with various smaller enhancements. Any valid MPI-3.1 program is compatible with MPI-4.0. MPI-4.1 is a minor update focused on corrections and clarifications to the MPI-4.0 standard. It deprecates several routines, the MPI_HOST attribute key, and the mpif.h Fortran include file. A new routine has been added to inquire about the hardware running the MPI program. Any valid MPI-4.0 program remains valid in MPI-4.1. MPI is often compared withParallel Virtual Machine(PVM), which is a popular distributed environment and message passing system developed in 1989, and which was one of the systems that motivated the need for standard parallel message passing. Threaded shared memory programming models (such asPthreadsandOpenMP) and message passing programming (MPI/PVM) can be considered complementary and have been used together on occasion in, for example, servers with multiple large shared-memory nodes. The MPI interface is meant to provide essential virtual topology,synchronization, and communication functionality between a set of processes (that have been mapped to nodes/servers/computer instances) in a language-independent way, with language-specific syntax (bindings), plus a few language-specific features. MPI programs always work with processes, but programmers commonly refer to the processes as processors. Typically, for maximum performance, eachCPU(orcorein a multi-core machine) will be assigned just a single process. This assignment happens at runtime through the agent that starts the MPI program, normally called mpirun or mpiexec. MPI library functions include, but are not limited to, point-to-point rendezvous-type send/receive operations, choosing between aCartesianorgraph-like logical process topology, exchanging data between process pairs (send/receive operations), combining partial results of computations (gather and reduce operations), synchronizing nodes (barrier operation) as well as obtaining network-related information such as the number of processes in the computing session, current processor identity that a process is mapped to, neighboring processes accessible in a logical topology, and so on. Point-to-point operations come insynchronous,asynchronous, buffered, andreadyforms, to allow both relatively stronger and weakersemanticsfor the synchronization aspects of a rendezvous-send. Many pending operations are possible in asynchronous mode, in most implementations. MPI-1 and MPI-2 both enable implementations that overlap communication and computation, but practice and theory differ. MPI also specifiesthread safeinterfaces, which havecohesionandcouplingstrategies that help avoid hidden state within the interface. It is relatively easy to write multithreaded point-to-point MPI code, and some implementations support such code.Multithreadedcollective communication is best accomplished with multiple copies of Communicators, as described below. 
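For the thread-safety levels just mentioned, the usual entry point is MPI_Init_thread rather than MPI_Init; a minimal sketch of requesting full multithreaded support might look like this (whether the request is granted depends on the implementation):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int provided;
    /* Request MPI_THREAD_MULTIPLE; the implementation reports what it actually grants. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    if (provided < MPI_THREAD_MULTIPLE)
        fprintf(stderr, "warning: full thread support not available (level %d)\n", provided);
    /* ... multithreaded point-to-point communication may be used here ... */
    MPI_Finalize();
    return 0;
}
```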
MPI provides several features. The following concepts provide context for all of those abilities and help the programmer to decide what functionality to use in their application programs. Four of MPI's eight basic concepts are unique to MPI-2. Communicator objects connect groups of processes in the MPI session. Each communicator gives each contained process an independent identifier and arranges its contained processes in an ordered topology. MPI also has explicit groups, but these are mainly good for organizing and reorganizing groups of processes before another communicator is made. MPI understands single-group intracommunicator operations and bilateral intercommunicator communication. In MPI-1, single-group operations are most prevalent. Bilateral operations mostly appear in MPI-2, where they include collective communication and dynamic in-process management. Communicators can be partitioned using several MPI commands. These commands include MPI_COMM_SPLIT, where each process joins one of several colored sub-communicators by declaring itself to have that color. A number of important MPI functions involve communication between two specific processes. A popular example is MPI_Send, which allows one specified process to send a message to a second specified process. Point-to-point operations, as these are called, are particularly useful in patterned or irregular communication, for example, a data-parallel architecture in which each processor routinely swaps regions of data with specific other processors between calculation steps, or a master–slave architecture in which the master sends new task data to a slave whenever the prior task is completed. MPI-1 specifies both blocking and non-blocking point-to-point communication mechanisms, as well as the so-called 'ready-send' mechanism whereby a send request can be made only when the matching receive request has already been made. Collective functions involve communication among all processes in a process group (which can mean the entire process pool or a program-defined subset). A typical function is the MPI_Bcast call (short for "broadcast"). This function takes data from one node and sends it to all processes in the process group. A reverse operation is the MPI_Reduce call, which takes data from all processes in a group, performs an operation (such as summing), and stores the results on one node. MPI_Reduce is often useful at the start or end of a large distributed calculation, where each processor operates on a part of the data and then combines it into a result. Other operations perform more sophisticated tasks, such as MPI_Alltoall, which rearranges n items of data such that the nth node gets the nth item of data from each. Many MPI functions require specifying the type of data which is sent between processes. This is because MPI aims to support heterogeneous environments where types might be represented differently on the different nodes[16] (for example they might be running different CPU architectures that have different endianness), in which case MPI implementations can perform data conversion.[16] Since the C language does not allow a type itself to be passed as a parameter, MPI predefines the constants MPI_INT, MPI_CHAR, MPI_DOUBLE to correspond with int, char, double, etc. Here is an example in C that passes arrays of ints from all processes to one. The one receiving process is called the "root" process, and it can be any designated process but normally it will be process 0.
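A sketch of such a program follows (not the exact listing from the original article); the 100-element array size is taken from the surrounding discussion, and error checking is omitted for brevity:

```c
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int send_array[100];
    for (int i = 0; i < 100; i++)
        send_array[i] = rank;               /* each process fills its own array */

    int *recv_array = NULL;
    if (rank == 0)                          /* only the root needs the receive buffer */
        recv_array = malloc(size * 100 * sizeof *recv_array);

    /* Every process (including the root) contributes 100 ints; the root
       assembles them, ordered by rank, into one larger array.            */
    MPI_Gather(send_array, 100, MPI_INT,
               recv_array, 100, MPI_INT,
               0 /* root */, MPI_COMM_WORLD);

    if (rank == 0)
        free(recv_array);
    MPI_Finalize();
    return 0;
}
```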
All the processes ask to send their arrays to the root with MPI_Gather, which is equivalent to having each process (including the root itself) call MPI_Send and the root make the corresponding number of ordered MPI_Recv calls to assemble all of these arrays into a larger one.[17] However, it may instead be desirable to send the data as one block rather than as 100 ints. To do this, a "contiguous block" derived data type can be defined with MPI_Type_contiguous. For passing a class or a data structure, MPI_Type_create_struct creates an MPI derived data type from MPI_predefined data types; it takes a count of blocks, an array of block lengths, an array of byte displacements (disp), and an array of element types, and returns the new derived datatype. The disp (displacements) array is needed for data structure alignment, since the compiler may pad the variables in a class or data structure. The safest way to find the distance between different fields is by obtaining their addresses in memory. This is done with MPI_Get_address, which is normally the same as C's & operator, but that might not be true when dealing with memory segmentation.[18] Passing a data structure as one block is significantly faster than passing one item at a time, especially if the operation is to be repeated. This is because fixed-size blocks do not require serialization during transfer.[19] Building such a derived datatype for an application-defined C struct amounts to filling these arrays from the struct's layout and then committing the new type with MPI_Type_commit. MPI-2 defines three one-sided communication operations, MPI_Put, MPI_Get, and MPI_Accumulate, being a write to remote memory, a read from remote memory, and a reduction operation on the same memory across a number of tasks, respectively. Also defined are three different methods to synchronize this communication (global, pairwise, and remote locks), as the specification does not guarantee that these operations have taken place until a synchronization point. These types of call can often be useful for algorithms in which synchronization would be inconvenient (e.g. distributed matrix multiplication), or where it is desirable for tasks to be able to balance their load while other processors are operating on data. The key aspect of MPI-2's dynamic process management is "the ability of an MPI process to participate in the creation of new MPI processes or to establish communication with MPI processes that have been started separately." The MPI-2 specification describes three main interfaces by which MPI processes can dynamically establish communications: MPI_Comm_spawn, MPI_Comm_accept/MPI_Comm_connect, and MPI_Comm_join. The MPI_Comm_spawn interface allows an MPI process to spawn a number of instances of the named MPI process. The newly spawned set of MPI processes forms a new MPI_COMM_WORLD intracommunicator but can communicate with the parent through the intercommunicator the function returns. MPI_Comm_spawn_multiple is an alternate interface that allows the different instances spawned to be different binaries with different arguments.[20] The parallel I/O feature is sometimes called MPI-IO,[21] and refers to a set of functions designed to abstract I/O management on distributed systems to MPI, and allow files to be easily accessed in a patterned way using the existing derived datatype functionality. The little research that has been done on this feature indicates that it may not be trivial to get high performance gains by using MPI-IO. For example, an implementation of sparse matrix-vector multiplications using the MPI I/O library shows a general behavior of minor performance gain, but these results are inconclusive.[22] It was not until the idea of collective I/O[23] was implemented into MPI-IO that MPI-IO started to reach widespread adoption.
Collective I/O substantially boosts applications' I/O bandwidth by having processes collectively transform the small and noncontiguous I/O operations into large and contiguous ones, thereby reducing the locking and disk seek overhead. Due to its vast performance benefits, MPI-IO also became the underlying I/O layer for many state-of-the-art I/O libraries, such as HDF5 and Parallel NetCDF. Its popularity also triggered research on collective I/O optimizations, such as layout-aware I/O[24] and cross-file aggregation.[25][26] Many other efforts are derivatives of MPICH, LAM, and other works, including, but not limited to, commercial implementations from HPE, Intel, Microsoft, and NEC. While the specifications mandate a C and Fortran interface, the language used to implement MPI is not constrained to match the language or languages it seeks to support at runtime. Most implementations combine C, C++ and assembly language, and target C, C++, and Fortran programmers. Bindings are available for many other languages, including Perl, Python, R, Ruby, Java, and CL (see the language bindings below). The ABIs of MPI implementations are roughly split between MPICH and Open MPI derivatives, so that a library from one family works as a drop-in replacement for one from the same family, but direct replacement across families is impossible. The French CEA maintains a wrapper interface to facilitate such switches.[27] MPI hardware research focuses on implementing MPI directly in hardware, for example via processor-in-memory, building MPI operations into the microcircuitry of the RAM chips in each node. By implication, this approach is independent of language, operating system, and CPU, but cannot be readily updated or removed. Another approach has been to add hardware acceleration to one or more parts of the operation, including hardware processing of MPI queues and using RDMA to directly transfer data between memory and the network interface controller without CPU or OS kernel intervention. mpicc (and similarly mpic++, mpif90, etc.) is a program that wraps over an existing compiler to set the necessary command-line flags when compiling code that uses MPI. Typically, it adds a few flags that enable the code to be compiled and linked against the MPI library.[28] Bindings are libraries that extend MPI support to other languages by wrapping an existing MPI implementation such as MPICH or Open MPI. The two managed Common Language Infrastructure .NET implementations are Pure Mpi.NET[29] and MPI.NET,[30] a research effort at Indiana University licensed under a BSD-style license. It is compatible with Mono, and can make full use of underlying low-latency MPI network fabrics. Although Java does not have an official MPI binding, several groups attempt to bridge the two, with different degrees of success and compatibility. One of the first attempts was Bryan Carpenter's mpiJava,[31] essentially a set of Java Native Interface (JNI) wrappers to a local C MPI library, resulting in a hybrid implementation with limited portability, which also has to be compiled against the specific MPI library being used. However, this original project also defined the mpiJava API[32] (a de facto MPI API for Java that closely followed the equivalent C++ bindings) which other subsequent Java MPI projects adopted.
One less-used API is MPJ API, which was designed to be more object-oriented and closer to Sun Microsystems' coding conventions.[33] Beyond the API, Java MPI libraries can be either dependent on a local MPI library, or implement the message passing functions in Java, while some like P2P-MPI also provide peer-to-peer functionality and allow mixed-platform operation. Some of the most challenging parts of Java/MPI arise from Java characteristics such as the lack of explicit pointers and the linear memory address space for its objects, which make transferring multidimensional arrays and complex objects inefficient. Workarounds usually involve transferring one line at a time and/or performing explicit de-serialization and casting at both the sending and receiving ends, simulating C or Fortran-like arrays by the use of a one-dimensional array, and pointers to primitive types by the use of single-element arrays, thus resulting in programming styles quite far from Java conventions. Another Java message passing system is MPJ Express.[34] Recent versions can be executed in cluster and multicore configurations. In the cluster configuration, it can execute parallel Java applications on clusters and clouds. Here Java sockets or specialized I/O interconnects like Myrinet can support messaging between MPJ Express processes. It can also utilize a native C implementation of MPI using its native device. In the multicore configuration, a parallel Java application is executed on multicore processors. In this mode, MPJ Express processes are represented by Java threads. There is a Julia language wrapper for MPI.[35] There are a few academic implementations of MPI using MATLAB. MATLAB has its own parallel extension library implemented using MPI and PVM. The OCamlMPI Module[36] implements a large subset of MPI functions and is in active use in scientific computing. An 11,000-line OCaml program was "MPI-ified" using the module, with an additional 500 lines of code and slight restructuring, and ran with excellent results on up to 170 nodes in a supercomputer.[37] PARI/GP can be built[38] to use MPI as its multi-thread engine, allowing parallel PARI and GP programs to run unmodified on MPI clusters. Actively maintained MPI wrappers for Python include mpi4py,[39] numba-mpi[40] and numba-jax.[41] Discontinued developments include pyMPI, pypar,[42] MYMPI[43] and the MPI submodule in ScientificPython. R bindings of MPI include Rmpi[44] and pbdMPI,[45] where Rmpi focuses on manager-workers parallelism while pbdMPI focuses on SPMD parallelism. Both implementations fully support Open MPI or MPICH2. A "Hello, World!" program in MPI can be written in C (a sketch of such a program appears at the end of this section). In this example, a "hello" message is sent to each processor, manipulated trivially, returned to the main process, and printed; when run with 4 processes, the root collects and prints the resulting messages.[46] Here, mpiexec is a command used to execute the example program with 4 processes, each of which is an independent instance of the program at run time and assigned ranks (i.e. numeric IDs) 0, 1, 2, and 3. The name mpiexec is recommended by the MPI standard, although some implementations provide a similar command under the name mpirun. MPI_COMM_WORLD is the communicator that consists of all the processes. A single program, multiple data (SPMD) programming model is thereby facilitated, but not required; many MPI implementations allow multiple different executables to be started in the same MPI job.
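A sketch of such a "Hello, World!" program follows; it is written along the lines described above (root greets every other rank, each rank replies, root prints), but the exact message strings and buffer sizes are illustrative assumptions rather than the original listing:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    char buf[256];
    int rank, nprocs;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    if (rank == 0) {
        /* The root sends a greeting to every other process ... */
        for (int other = 1; other < nprocs; other++) {
            snprintf(buf, sizeof buf, "Hello %d!", other);
            MPI_Send(buf, (int)sizeof buf, MPI_CHAR, other, 0, MPI_COMM_WORLD);
        }
        /* ... then collects and prints the replies. */
        printf("We have %d processes.\n", nprocs);
        for (int other = 1; other < nprocs; other++) {
            MPI_Recv(buf, (int)sizeof buf, MPI_CHAR, other, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("%s\n", buf);
        }
    } else {
        /* Every other process receives its greeting, rewrites it trivially,
           and sends the result back to the root.                            */
        MPI_Recv(buf, (int)sizeof buf, MPI_CHAR, 0, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        snprintf(buf, sizeof buf, "Process %d reporting for duty.", rank);
        MPI_Send(buf, (int)sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}
```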
Each process knows its own rank and the total number of processes in the world, and can communicate with the others either with point-to-point (send/receive) communication or by collective communication among the group. It is enough for MPI to provide an SPMD-style program with MPI_COMM_WORLD, its own rank, and the size of the world to allow algorithms to decide what to do. In more realistic situations, I/O is more carefully managed than in this example. MPI does not stipulate how standard I/O (stdin, stdout, stderr) should work on a given system. It generally works as expected on the rank-0 process, and some implementations also capture and funnel the output from other processes. MPI uses the notion of process rather than processor. Program copies are mapped to processors by the MPI runtime. In that sense, the parallel machine can map to one physical processor, or to N processors, where N is the number of available processors, or even something in between. For maximum parallel speedup, more physical processors are used. This example adjusts its behavior to the size of the world N, so it also seeks to scale to the runtime configuration without compilation for each size variation, although runtime decisions might vary depending on the absolute amount of concurrency available. Adoption of MPI-1.2 has been universal, particularly in cluster computing, but acceptance of MPI-2.1 has been more limited, for several reasons. Some aspects of MPI's future appear solid; others less so. The MPI Forum reconvened in 2007 to clarify some MPI-2 issues and explore developments for a possible MPI-3, which resulted in versions MPI-3.0 (September 2012)[47] and MPI-3.1 (June 2015).[48] The development continued with the approval of MPI-4.0 on June 9, 2021,[49] and most recently, MPI-4.1 was approved on November 2, 2023.[50] Architectures are changing, with greater internal concurrency (multi-core), better fine-grained concurrency control (threading, affinity), and more levels of memory hierarchy. Multithreaded programs can take advantage of these developments more easily than single-threaded applications. This has already yielded separate, complementary standards for symmetric multiprocessing, namely OpenMP. MPI-2 defines how standard-conforming implementations should deal with multithreaded issues, but does not require that implementations be multithreaded, or even thread-safe. MPI-3 adds the ability to use shared-memory parallelism within a node. Implementations of MPI such as Adaptive MPI, Hybrid MPI, Fine-Grained MPI, MPC and others offer extensions to the MPI standard that address different challenges in MPI. Astrophysicist Jonathan Dursi wrote an opinion piece calling MPI obsolescent, pointing to newer technologies like the Chapel language, Unified Parallel C, Hadoop, Spark and Flink.[51] At the same time, nearly all of the projects in the Exascale Computing Project build explicitly on MPI; MPI has been shown to scale to the largest machines as of the early 2020s and is widely considered to stay relevant for a long time to come.
https://en.wikipedia.org/wiki/Message_Passing_Interface
SISAL (Streams and Iteration in a Single Assignment Language) is a general-purpose single assignment functional programming language with strict semantics, implicit parallelism, and efficient array handling. SISAL outputs a dataflow graph in Intermediary Form 1 (IF1). It was derived from the Value-oriented Algorithmic Language (VAL), designed by Jack Dennis, and adds recursion and finite streams. It has a Pascal-like syntax and was designed to be a common high-level programming language for numerical programs on a variety of multiprocessors. SISAL was defined in 1983 by James McGraw et al., at the University of Manchester, Lawrence Livermore National Laboratory (LLNL), Colorado State University and Digital Equipment Corporation (DEC). It was revised in 1985, and the first compiled implementation was made in 1986. According to some sources, its performance is superior to C and rivals Fortran,[1] combined with efficient and automatic parallelization. SISAL's name came from grepping "sal" for "Single Assignment Language" from the Unix dictionary /usr/dict/words. Versions exist for the Cray X-MP, Y-MP, 2; Sequent, Encore Alliant, DEC VAX-11/784, dataflow architectures, KSR1, Inmos Transputers, and systolic arrays. The requirements for a fine-grain parallelism language are better met with a dataflow programming language than a system programming language.[citation needed] SISAL is more than just a dataflow and fine-grain language. It is a set of tools that convert a textual, human-readable dataflow language into a graph format (named IF1 - Intermediary Form 1). Part of the SISAL project also involved converting this graph format into runnable C code.[2] In 2010 SISAL saw a brief resurgence when a group of undergraduates at Worcester Polytechnic Institute investigated implementing a fine-grain parallelism backend for the SISAL language.[2] In 2018 SISAL was modernized with indent-based syntax, first-class functions, lambdas, closures and lazy semantics within the SISAL-IS project.[3]
https://en.wikipedia.org/wiki/SISAL
Inparallelcomputer architectures, asystolic arrayis a homogeneousnetworkof tightly coupleddata processing units(DPUs) called cells ornodes. Each node or DPU independently computes a partial result as a function of the data received from its upstream neighbours, stores the result within itself and passes it downstream. Systolic arrays were first used inColossus, which was an early computer used to break GermanLorenzciphers duringWorld War II.[1]Due to the classified nature of Colossus, they were independently invented or rediscovered byH. T. KungandCharles Leisersonwho described arrays for many dense linear algebra computations (matrix product, solving systems oflinear equations,LU decomposition, etc.) for banded matrices. Early applications include computinggreatest common divisorsof integers and polynomials.[2]They are sometimes classified asmultiple-instruction single-data(MISD) architectures underFlynn's taxonomy, but this classification is questionable because a strong argument can be made to distinguish systolic arrays from any of Flynn's four categories:SISD,SIMD,MISD,MIMD, as discussed later in this article. The parallel inputdataflows through a network of hard-wiredprocessornodes, which combine, process,mergeorsortthe input data into a derived result. Because thewave-like propagation of data through a systolic array resembles thepulseof the human circulatory system, the namesystolicwas coined from medical terminology. The name is derived fromsystoleas an analogy to the regular pumping of blood by the heart. Systolic arrays are often hard-wired for specific operations, such as "multiply and accumulate", to perform massivelyparallelintegration,convolution,correlation,matrix multiplicationor data sorting tasks. They are also used fordynamic programmingalgorithms, used in DNA and proteinsequence analysis. A systolic array typically consists of a largemonolithicnetworkof primitive computingnodeswhich can be hardwired or software configured for a specific application. The nodes are usually fixed and identical, while the interconnect is programmable. The more generalwavefrontprocessors, by contrast, employ sophisticated and individually programmable nodes which may or may not be monolithic, depending on the array size and design parameters. The other distinction is that systolic arrays rely onsynchronousdata transfers, whilewavefronttend to workasynchronously. Unlike the more commonVon Neumann architecture, where program execution follows a script of instructions stored in common memory,addressedand sequenced under the control of theCPU'sprogram counter(PC), the individual nodes within a systolic array are triggered by the arrival of new data and always process the data in exactly the same way. The actual processing within each node may be hard wired or blockmicro coded, in which case the common node personality can be block programmable. The systolic array paradigm with data-streams driven by datacounters, is the counterpart of the Von Neumann architecture with instruction-stream driven by a program counter. Because a systolic array usually sends and receives multiple data streams, and multiple data counters are needed to generate these data streams, it supportsdata parallelism. A major benefit of systolic arrays is that all operand data and partial results are stored within (passing through) the processor array. There is no need to access external buses, main memory or internal caches during each operation as is the case with Von Neumann orHarvardsequential machines. 
The sequential limits onparallelperformance dictated byAmdahl's Lawalso do not apply in the same way, because data dependencies are implicitly handled by the programmablenodeinterconnect and there are no sequential steps in managing the highly parallel data flow. Systolic arrays are therefore extremely good at artificial intelligence, image processing, pattern recognition, computer vision and other tasks that animal brains do particularly well. Wavefront processors in general can also be very good at machine learning by implementing self configuring neural nets in hardware. While systolic arrays are officially classified asMISD, their classification is somewhat problematic. Because the input is typically a vector of independent values, the systolic array is definitely notSISD. Since theseinputvalues are merged and combined into the result(s) and do not maintain theirindependenceas they would in aSIMDvector processing unit, thearraycannot be classified as such. Consequently, the array cannot be classified as aMIMDeither, because MIMD can be viewed as a mere collection of smaller SISD andSIMDmachines. Finally, because the dataswarmis transformed as it passes through the array fromnodeto node, the multiple nodes are not operating on the same data, which makes the MISD classification amisnomer. The other reason why a systolic array should not qualify as aMISDis the same as the one which disqualifies it from the SISD category: The input data is typically a vector not asingledata value, although one could argue that any given input vector is a single item of data. In spite of all of the above, systolic arrays are often offered as a classic example of MISD architecture in textbooks onparallel computingand in engineering classes. If the array is viewed from the outside asatomicit should perhaps be classified asSFMuDMeR= single function, multiple data, merged result(s). Systolic arrays use a pre-defined computational flow graph that connects their nodes.Kahn process networksuse a similar flow graph, but are distinguished by the nodes working in lock-step in the systolic array: in a Kahn network, there are FIFO queues between each node. A systolic array is composed of matrix-like rows ofdata processing unitscalled cells. Data processing units (DPUs) are similar tocentral processing units(CPUs), (except for the usual lack of aprogram counter,[3]since operation istransport-triggered, i.e., by the arrival of a data object). Each cell shares the information with its neighbors immediately after processing. The systolic array is often rectangular where data flows across the array between neighbourDPUs, often with different data flowing in different directions. The data streams entering and leaving the ports of the array are generated by auto-sequencing memory units, ASMs. Each ASM includes a datacounter. Inembedded systemsa data stream may also be input from and/or output to an external source. An example of a systolicalgorithmmight be designed formatrix multiplication. Onematrixis fed in a row at a time from the top of the array and is passed down the array, the other matrix is fed in a column at a time from the left hand side of the array and passes from left to right. Dummy values are then passed in until each processor has seen one whole row and one whole column. 
At this point, the result of the multiplication is stored in the array and can now be output a row or a column at a time, flowing down or across the array.[4]

Systolic arrays are arrays of DPUs which are connected to a small number of nearest-neighbour DPUs in a mesh-like topology. DPUs perform a sequence of operations on data that flows between them. Because traditional systolic array synthesis methods are based on algebraic algorithms, only uniform arrays with linear pipes can be obtained, so the architecture is the same in all DPUs. As a consequence, only applications with regular data dependencies can be implemented on classical systolic arrays. Like SIMD machines, clocked systolic arrays compute in "lock-step", with each processor undertaking alternate compute | communicate phases. Systolic arrays with asynchronous handshake between DPUs are instead called wavefront arrays. One well-known systolic array is Carnegie Mellon University's iWarp processor, which has been manufactured by Intel. An iWarp system has a linear array processor connected by data buses going in both directions.

Systolic arrays (a term sometimes used interchangeably with wavefront processors) were first described by H. T. Kung and Charles E. Leiserson, who published the first paper describing systolic arrays in 1979. However, the first machine known to have used a similar technique was the Colossus Mark II in 1944.

Horner's rule for evaluating a polynomial is p(x)=a0+x(a1+x(a2+⋯+x(an−1+xan)⋯)){\displaystyle p(x)=a_{0}+x(a_{1}+x(a_{2}+\cdots +x(a_{n-1}+xa_{n})\cdots ))}. It can be evaluated by a linear systolic array in which the processors are arranged in pairs: one multiplies its input by x{\displaystyle x} and passes the result to the right, the next adds aj{\displaystyle a_{j}} and passes the result to the right.

Consider a chain of processing elements (PEs), each performing a multiply-accumulate operation. The chain processes input data (xi{\displaystyle x_{i}}) and weights (wi{\displaystyle w_{i}}) systolically, meaning data flows through the array in a regular, rhythmic manner. The weights remain stationary within each PE, while the input data and partial sums (yi{\displaystyle y_{i}}) move in opposite directions. Each PE performs the following operation:yout=yin+w⋅xinxout=xin{\displaystyle {\begin{aligned}y_{out}&=y_{in}+w\cdot x_{in}\\x_{out}&=x_{in}\end{aligned}}}where xin{\displaystyle x_{in}} and yin{\displaystyle y_{in}} are the input sample and partial sum arriving from the neighbouring PEs, xout{\displaystyle x_{out}} and yout{\displaystyle y_{out}} are the values passed on, and w{\displaystyle w} is the weight held stationary in the PE.

From the left, the input stream is…,x3,0,x2,0,x1{\displaystyle \dots ,x_{3},0,x_{2},0,x_{1}}, and from the right, the output stream isy1,y2,y3,…{\displaystyle y_{1},y_{2},y_{3},\dots }. Ify1,x1{\displaystyle y_{1},x_{1}}enter the rightmost PE simultaneously, then the leftmost PE outputsy1=w1x1+w2x2+w3x3+⋯y2=w1x2+w2x3+w3x4+⋯⋮{\displaystyle {\begin{aligned}y_{1}&=w_{1}x_{1}+w_{2}x_{2}+w_{3}x_{3}+\cdots \\y_{2}&=w_{1}x_{2}+w_{2}x_{3}+w_{3}x_{4}+\cdots \\&\vdots \end{aligned}}}This is the 1-dimensional convolution. Similarly, n-dimensional convolution can be computed by an n-dimensional array of PEs. Many other implementations of the 1D convolution are available, with different data flows.[5]

See [5] Figure 12 for an algorithm that performs on-the-fly least-squares using one- and two-dimensional systolic arrays.
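To make the matrix-multiplication example above concrete, the following is a minimal, cycle-by-cycle software sketch (in Python) of the output-stationary design just described: one matrix streams in from the top a row at a time, the other from the left a column at a time, and each processing element keeps its own accumulator. It is only an illustration of the data flow, not a hardware description; the function name systolic_matmul and the way registers are represented are assumptions made for the example.

    def systolic_matmul(A, B):
        """Toy cycle-by-cycle simulation of an output-stationary systolic array.

        Rows of A are fed from the left edge and columns of B from the top edge,
        each skewed in time, and every processing element (PE) accumulates one
        element of C = A x B while passing its inputs on to its neighbours.
        """
        n, k = len(A), len(A[0])
        k2, m = len(B), len(B[0])
        assert k == k2, "inner dimensions must match"

        a_reg = [[0.0] * m for _ in range(n)]   # value of A moving right
        b_reg = [[0.0] * m for _ in range(n)]   # value of B moving down
        c_acc = [[0.0] * m for _ in range(n)]   # stationary partial result

        for t in range(k + n + m - 2):          # enough cycles to drain the skewed inputs
            a_in = [[0.0] * m for _ in range(n)]
            b_in = [[0.0] * m for _ in range(n)]
            for i in range(n):
                for j in range(m):
                    if j == 0:                  # row i of A enters the left edge, skewed by i cycles
                        a_in[i][j] = A[i][t - i] if 0 <= t - i < k else 0.0
                    else:
                        a_in[i][j] = a_reg[i][j - 1]
                    if i == 0:                  # column j of B enters the top edge, skewed by j cycles
                        b_in[i][j] = B[t - j][j] if 0 <= t - j < k else 0.0
                    else:
                        b_in[i][j] = b_reg[i - 1][j]
            for i in range(n):                  # every PE does one multiply-accumulate per cycle
                for j in range(m):
                    c_acc[i][j] += a_in[i][j] * b_in[i][j]
                    a_reg[i][j] = a_in[i][j]
                    b_reg[i][j] = b_in[i][j]
        return c_acc

    print(systolic_matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))   # [[19.0, 22.0], [43.0, 50.0]]

After the final cycle the product sits in the per-PE accumulators, matching the description above of the result being stored in the array and then read out a row or a column at a time.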
https://en.wikipedia.org/wiki/Systolic_array
Incomputer architecture, atransport triggered architecture(TTA) is a kind ofprocessordesign in which programs directly control the internal transportbusesof a processor. Computation happens as a side effect of data transports: writing data into atriggering portof afunctional unittriggers the functional unit to start a computation. This is similar to what happens in asystolic array. Due to its modular structure, TTA is an ideal processor template forapplication-specific instruction set processors(ASIP) with customized datapath but without the inflexibility and design cost of fixed function hardware accelerators. Typically a transport triggered processor has multiple transport buses and multiple functional units connected to the buses, which provides opportunities forinstruction level parallelism. The parallelism is statically defined by the programmer. In this respect (and obviously due to the large instruction word width), the TTA architecture resembles thevery long instruction word(VLIW) architecture. A TTA instruction word is composed of multiple slots, one slot per bus, and each slot determines the data transport that takes place on the corresponding bus. The fine-grained control allows some optimizations that are not possible in a conventional processor. For example, software can transfer data directly between functional units without using registers. Transport triggering exposes somemicroarchitecturaldetails that are normally hidden from programmers. This greatly simplifies the control logic of a processor, because many decisions normally done atrun timeare fixed atcompile time. However, it also means that a binary compiled for one TTA processor will not run on another one without recompilation if there is even a small difference in the architecture between the two. The binary incompatibility problem, in addition to the complexity of implementing a full context switch, makes TTAs more suitable forembedded systemsthan for general purpose computing. Of all theone-instruction set computerarchitectures, the TTA architecture is one of the few that has had processors based on it built, and the only one that has processors based on it sold commercially. TTAs can be seen as "exposed datapath" VLIW architectures. While VLIW is programmed using operations, TTA splits the operation execution to multiplemoveoperations. The low level programming model enables several benefits in comparison to the standard VLIW. For example, a TTA architecture can provide more parallelism with simpler register files than with VLIW. As the programmer is in control of the timing of the operand and result data transports, the complexity (the number of input and output ports) of theregister file(RF) need not be scaled according to the worst case issue/completion scenario of the multiple parallel instructions. An important unique software optimization enabled by the transport programming is calledsoftware bypassing. In case of software bypassing, the programmer bypasses the register file write back by moving data directly to the next functional unit's operand ports. When this optimization is applied aggressively, the original move that transports the result to the register file can be eliminated completely, thus reducing both the register file port pressure and freeing ageneral-purpose registerfor other temporary variables. 
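As a rough software analogy for the software bypassing optimisation just described, the sketch below (Python) contrasts routing a result through the register file with moving it straight to the next unit's operand. The register names, unit names and the idea of counting register-file writes are invented purely for illustration; a real TTA compiler performs this transformation on move instructions, not on Python code.

    regfile = {"r1": 2, "r2": 3, "r3": 0}
    rf_writes = 0

    def alu_add(a, b):            # stand-in for an ALU function unit
        return a + b

    def multiply(a, b):           # stand-in for a multiplier function unit
        return a * b

    # Without bypassing: the ALU result is written back to r3 and then read
    # again to feed the multiplier, costing a register-file write and port traffic.
    regfile["r3"] = alu_add(regfile["r1"], regfile["r2"])
    rf_writes += 1
    product = multiply(regfile["r3"], regfile["r1"])

    # With software bypassing: the result is moved directly to the multiplier's
    # operand, so the write to r3 (and the register itself) can be eliminated.
    alu_result = alu_add(regfile["r1"], regfile["r2"])      # stays on the transport bus
    product_bypassed = multiply(alu_result, regfile["r1"])

    assert product == product_bypassed and rf_writes == 1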
The reducedregister pressure, in addition to simplifying the required complexity of the RF hardware, can lead to significantCPU energy savings, an important benefit especially in mobile embedded systems.[1][2] TTA processors are built of independentfunction unitsandregister files, which are connected withtransport busesandsockets. Each function unit implements one or moreoperations, which implement functionality ranging from a simple addition of integers to a complex and arbitrary user-defined application-specific computation. Operands for operations are transferred through function unitports. Each function unit may have an independentpipeline. In case a function unit isfully pipelined, a new operation that takes multipleclock cyclesto finish can be started in every clock cycle. On the other hand, a pipeline can be such that it does not always accept new operation start requests while an old one is still executing. Data memoryaccess and communication to outside of the processor is handled by using special function units. Function units that implement memory accessing operations and connect to a memory module are often calledload/store units. Control unitis a special case of function units which controls execution of programs. Control unit has access to the instruction memory in order to fetch the instructions to be executed. In order to allow the executed programs to transfer the execution (jump) to an arbitrary position in the executed program, control unit provides control flow operations. A control unit usually has aninstruction pipeline, which consists of stages for fetching, decoding and executing program instructions. Register files containgeneral-purpose registers, which are used to store variables in programs. Like function units, also register files have input and output ports. The number of read and write ports, that is, the capability of being able to read and write multiple registers in a same clock cycle, can vary in each register file. Interconnect architectureconsists oftransport buseswhich are connected to function unit ports by means ofsockets. Due to expense of connectivity, it is usual to reduce the number of connections between units (function units and register files). A TTA is said to befully connectedin case there is a path from each unit output port to every unit's input ports. Sockets provide means for programming TTA processors by allowing to select which bus-to-port connections of the socket are enabled at any time instant. Thus, data transports taking place in a clock cycle can be programmed by defining the source and destination socket/port connection to be enabled for each bus. Some TTA implementations supportconditional execution. Conditional executionis implemented with the aid ofguards. Each data transport can be conditionalized by a guard, which is connected to a register (often a 1-bitconditional register) and to a bus. In case the value of the guarded register evaluates to false (zero), the data transport programmed for the bus the guard is connected to issquashed, that is, not written to its destination.Unconditionaldata transports are not connected to any guard and are always executed. All processors, including TTA processors, includecontrol flowinstructions that alter the program counter, which are used to implementsubroutines,if-then-else,for-loop, etc. 
The assembly language for TTA processors typically includes control flow instructions such as unconditional branches (JUMP), conditional relative branches (BNZ), subroutine call (CALL), conditional return (RETNZ), etc. that look the same as the corresponding assembly language instructions for other processors. Like all other operations on a TTA machine, these instructions are implemented as "move" instructions to a special function unit. TTA implementations that support conditional execution, such as the sTTAck and the first MOVE prototype, can implement most of these control flow instructions as a conditional move to the program counter.[3][4] TTA implementations that only support unconditional data transports, such as theMaxim IntegratedMAXQ,[5]typically have a special function unit tightly connected to the program counter that responds to a variety of destination addresses. Each such address, when used as the destination of a "move", has a different effect on the program counter—each "relative branch <condition>" instruction has a different destination address for each condition; and other destination addresses are used CALL, RETNZ, etc. In more traditional processor architectures, a processor is usually programmed by defining the executed operations and their operands. For example, an addition instruction in a RISC architecture could look like the following. This example operation adds the values of general-purpose registers r1 and r2 and stores the result in register r3. Coarsely, the execution of the instruction in the processor probably results in translating the instruction to control signals which control the interconnection network connections and function units. The interconnection network is used to transfer the current values of registers r1 and r2 to the function unit that is capable of executing the add operation, often called ALU as in Arithmetic-Logic Unit. Finally, a control signal selects and triggers the addition operation in ALU, of which result is transferred back to the register r3. TTA programs do not define the operations, but only the data transports needed to write and read the operand values. Operation itself is triggered by writing data to atriggering operandof an operation. Thus, an operation is executed as a side effect of the triggering data transport. Therefore, executing an addition operation in TTA requires three data transport definitions, also calledmoves. A move defines endpoints for a data transport taking place in a transport bus. For instance, a move can state that a data transport from function unit F, port 1, to register file R, register index 2, should take place in bus B1. In case there are multiple buses in the target processor, each bus can be utilized in parallel in the same clock cycle. Thus, it is possible to exploit data transport level parallelism by scheduling several data transports in the same instruction. An addition operation can be executed in a TTA processor as follows: The second move, a write to the second operand of the function unit called ALU, triggers the addition operation. This makes the result of addition available in the output port 'result' after the execution latency of the 'add'. The ports associated with the ALU may act as anaccumulator, allowing creation ofmacro instructionsthatabstract awaythe underlying TTA: The leading philosophy of TTAs is to move complexity from hardware to software. Due to this, several additional hazards are introduced to the programmer. 
One of them isdelay slots, the programmer visible operation latency of the function units. Timing is completely the responsibility of the programmer. The programmer has to schedule the instructions such that the result is neither read too early nor too late. There is no hardware detection to lock up the processor in case a result is read too early. Consider, for example, an architecture that has an operationaddwith latency of 1, and operationmulwith latency of 3. When triggering theaddoperation, it is possible to read the result in the next instruction (next clock cycle), but in case ofmul, one has to wait for two instructions before the result can be read. The result is ready for the 3rd instruction after the triggering instruction. Reading a result too early results in reading the result of a previously triggered operation, or in case no operation was triggered previously in the function unit, the read value is undefined. On the other hand, result must be read early enough to make sure the next operation result does not overwrite the yet unread result in the output port. Due to the abundance of programmer-visible processor context which practically includes, in addition to register file contents, also function unit pipeline register contents and/or function unit input and output ports, context saves required for external interrupt support can become complex and expensive to implement in a TTA processor. Therefore, interrupts are usually not supported by TTA processors, but their task is delegated to an external hardware (e.g., an I/O processor) or their need is avoided by using an alternative synchronization/communication mechanism such as polling.
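To make the contrast between operation-triggered and transport-triggered execution concrete, and to show the delay-slot hazard described above, here is a small Python model. The class, the port names and the explicit clock() calls are modelling assumptions for illustration only (this is not a real TTA toolchain); the latencies of 1 cycle for add and 3 cycles for mul follow the example above.

    class FunctionUnit:
        """Toy model of a transport-triggered function unit with visible latency."""

        def __init__(self, op, latency):
            self.op = op                # operation implemented by this unit
            self.latency = latency      # cycles until the result port is valid
            self.operand = 0            # non-triggering operand port
            self.result = None          # result output port
            self._inflight = []         # [cycles_left, value] operations in flight

        def move_operand(self, value):
            self.operand = value        # a plain operand move: nothing starts yet

        def move_trigger(self, value):
            # Writing the triggering port starts the operation as a side effect.
            self._inflight.append([self.latency, self.op(self.operand, value)])

        def clock(self):
            # Advance one cycle; the result becomes visible only after `latency` cycles.
            for entry in self._inflight:
                entry[0] -= 1
            while self._inflight and self._inflight[0][0] <= 0:
                self.result = self._inflight.pop(0)[1]

    alu = FunctionUnit(op=lambda a, b: a + b, latency=1)
    mul = FunctionUnit(op=lambda a, b: a * b, latency=3)
    regs = {"r1": 2, "r2": 3, "r3": 0}

    # A RISC-style "add r3, r1, r2" expressed as three transports:
    alu.move_operand(regs["r1"])    # r1 -> ALU operand port
    alu.move_trigger(regs["r2"])    # r2 -> ALU trigger port (starts the add)
    alu.clock()                     # after 1 cycle the result port is valid
    regs["r3"] = alu.result         # ALU result port -> r3

    # With the 3-cycle multiplier, reading mul.result before three clock() calls
    # returns the previous contents (None here) - the software analogue of the
    # undefined too-early read described above.
    mul.move_operand(regs["r1"])
    mul.move_trigger(regs["r3"])
    for _ in range(3):
        mul.clock()
    print(regs["r3"], mul.result)   # 5 10

The three transports that make up the addition correspond to the operand move, the triggering move and the result move described earlier; scheduling the result read neither too early nor too late is entirely the programmer's (or compiler's) responsibility, exactly as in the delay-slot discussion above.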
https://en.wikipedia.org/wiki/Transport_triggered_architecture
A system on a chip (SoC) is an integrated circuit that combines most or all key components of a computer or electronic system onto a single microchip.[1] Typically, an SoC includes a central processing unit (CPU) with memory, input/output, and data storage control functions, along with optional features like a graphics processing unit (GPU), Wi-Fi connectivity, and radio frequency processing. This high level of integration minimizes the need for separate, discrete components, thereby enhancing power efficiency and simplifying device design.

High-performance SoCs are often paired with dedicated memory, such as LPDDR, and flash storage chips, such as eUFS or eMMC, which may be stacked directly on top of the SoC in a package-on-package (PoP) configuration or placed nearby on the motherboard. Some SoCs also operate alongside specialized chips, such as cellular modems.[2]

Fundamentally, SoCs integrate one or more processor cores with critical peripherals. This comprehensive integration is conceptually similar to how a microcontroller is designed, but provides far greater computational power. The unified design delivers lower power consumption and a reduced semiconductor die area compared with traditional multi-chip architectures, though at the cost of reduced modularity and component replaceability.

SoCs are ubiquitous in mobile computing, where compact, energy-efficient designs are critical. They power smartphones, tablets, and smartwatches, and are increasingly important in edge computing, where real-time data processing occurs close to the data source. By driving the trend toward tighter integration, SoCs have reshaped the design landscape of modern computing devices.[3][4]

In general, there are three distinguishable types of SoCs:

SoCs can be applied to any computing task. However, they are typically used in mobile computing such as tablets, smartphones, smartwatches, and netbooks, as well as embedded systems and in applications where previously microcontrollers would be used.

Where previously only microcontrollers could be used, SoCs are rising to prominence in the embedded systems market. Tighter system integration offers better reliability and mean time between failure, and SoCs offer more advanced functionality and computing power than microcontrollers.[5] Applications include AI acceleration, embedded machine vision,[6] data collection, telemetry, vector processing and ambient intelligence. Often embedded SoCs target the internet of things, multimedia, networking, telecommunications and edge computing markets. Some examples of SoCs for embedded applications include the STMicroelectronics STM32, the Raspberry Pi Ltd RP2040, and the AMD Zynq 7000.

Mobile computing based SoCs always bundle processors, memories, on-chip caches, wireless networking capabilities and often digital camera hardware and firmware. As memory sizes increase, high-end SoCs often include no main memory or flash storage of their own; instead, the memory and flash memory are placed right next to, or on top of (package on package), the SoC.[7] Some examples of mobile computing SoCs include:

In 1992, Acorn Computers produced the A3010, A3020 and A4000 range of personal computers with the ARM250 SoC. It combined the original Acorn ARM2 processor with a memory controller (MEMC), video controller (VIDC), and I/O controller (IOC). In previous Acorn ARM-powered computers, these were four discrete chips.
The ARM7500 chip was their second-generation SoC, based on the ARM700, VIDC20 and IOMD controllers, and was widely licensed in embedded devices such as set-top-boxes, as well as later Acorn personal computers. Tablet and laptop manufacturers have learned lessons from embedded systems and smartphone markets about reduced power consumption, better performance and reliability from tighterintegrationof hardware andfirmwaremodules, andLTEand otherwireless networkcommunications integrated on chip (integratednetwork interface controllers).[10] On modern laptops and mini PCs, the low-power variants ofAMD RyzenandIntel Coreprocessors use SoC design integrating CPU, IGPU, chipset and other processors in a single package. However, such x86 processors still require external memory and storage chips. An SoC consists of hardwarefunctional units, includingmicroprocessorsthat runsoftware code, as well as acommunications subsystemto connect, control, direct and interface between these functional modules. An SoC must have at least oneprocessor core, but typically an SoC has more than one core. Processor cores can be amicrocontroller,microprocessor(μP),[11]digital signal processor(DSP) orapplication-specific instruction set processor(ASIP) core.[12]ASIPs haveinstruction setsthat are customized for anapplication domainand designed to be more efficient than general-purpose instructions for a specific type of workload. Multiprocessor SoCs have more than one processor core by definition. TheARM architectureis a common choice for SoC processor cores because some ARM-architecture cores aresoft processorsspecified asIP cores.[11] SoCs must havesemiconductor memoryblocks to perform their computation, as domicrocontrollersand otherembedded systems. Depending on the application, SoC memory may form amemory hierarchyandcache hierarchy. In the mobile computing market, this is common, but in manylow-powerembedded microcontrollers, this is not necessary. Memory technologies for SoCs includeread-only memory(ROM),random-access memory(RAM), Electrically Erasable Programmable ROM (EEPROM) andflash memory.[11]As in other computer systems, RAM can be subdivided into relatively faster but more expensivestatic RAM(SRAM) and the slower but cheaperdynamic RAM(DRAM). When an SoC has acachehierarchy, SRAM will usually be used to implementprocessor registersand cores'built-in cacheswhereas DRAM will be used formain memory. "Main memory" may be specific to a single processor (which can bemulti-core) when the SoChas multiple processors, in this case it isdistributed memoryand must be sent via§ Intermodule communicationon-chip to be accessed by a different processor.[12]For further discussion of multi-processing memory issues, seecache coherenceandmemory latency. SoCs include externalinterfaces, typically forcommunication protocols. These are often based upon industry standards such asUSB,Ethernet,USART,SPI,HDMI,I²C,CSI, etc. These interfaces will differ according to the intended application.Wireless networkingprotocols such asWi-Fi,Bluetooth,6LoWPANandnear-field communicationmay also be supported. When needed, SoCs includeanaloginterfaces includinganalog-to-digitalanddigital-to-analog converters, often forsignal processing. These may be able to interface with different types ofsensorsoractuators, includingsmart transducers. 
They may interface with application-specific modules or shields.[nb 1] Or they may be internal to the SoC, such as if an analog sensor is built into the SoC and its readings must be converted to digital signals for mathematical processing.

Digital signal processor (DSP) cores are often included on SoCs. They perform signal processing operations in SoCs for sensors, actuators, data collection, data analysis and multimedia processing. DSP cores typically feature very long instruction word (VLIW) and single instruction, multiple data (SIMD) instruction set architectures, and are therefore highly amenable to exploiting instruction-level parallelism through parallel processing and superscalar execution.[12]: 4 DSP cores most often feature application-specific instructions, and as such are typically application-specific instruction set processors (ASIP). Such application-specific instructions correspond to dedicated hardware functional units that compute those instructions. Typical DSP instructions include multiply-accumulate, fast Fourier transform, fused multiply-add, and convolutions.

As with other computer systems, SoCs require timing sources to generate clock signals, control execution of SoC functions and provide time context to signal processing applications of the SoC, if needed. Popular time sources are crystal oscillators and phase-locked loops. SoC peripherals include counter-timers, real-time timers and power-on reset generators. SoCs also include voltage regulators and power management circuits.

SoCs comprise many execution units. These units must often send data and instructions back and forth. Because of this, all but the most trivial SoCs require communications subsystems. Originally, as with other microcomputer technologies, data bus architectures were used, but recently designs based on sparse intercommunication networks known as networks-on-chip (NoC) have risen to prominence and are forecast to overtake bus architectures for SoC design in the near future.[13]

Historically, a shared global computer bus typically connected the different components, also called "blocks", of the SoC.[13] A very common bus for SoC communications is ARM's royalty-free Advanced Microcontroller Bus Architecture (AMBA) standard. Direct memory access controllers route data directly between external interfaces and SoC memory, bypassing the CPU or control unit, thereby increasing the data throughput of the SoC. This is similar to some device drivers of peripherals on component-based multi-chip module PC architectures.

Wire delay is not scalable due to continued miniaturization, system performance does not scale with the number of cores attached, the SoC's operating frequency must decrease with each additional core attached for power to be sustainable, and long wires consume large amounts of electrical power. These challenges are prohibitive to supporting manycore systems on chip.[13]: xiii

In the late 2010s, a trend of SoCs implementing communications subsystems in terms of a network-like topology instead of bus-based protocols has emerged. A trend towards more processor cores on SoCs has caused on-chip communication efficiency to become one of the key factors in determining the overall system performance and cost.[13]: xiii This has led to the emergence of interconnection networks with router-based packet switching known as "networks on chip" (NoCs) to overcome the bottlenecks of bus-based networks.[13]: xiii Networks-on-chip have advantages including destination- and application-specific routing, greater power efficiency and reduced possibility of bus contention.
Network-on-chip architectures take inspiration fromcommunication protocolslikeTCPand theInternet protocol suitefor on-chip communication,[13]although they typically have fewernetwork layers. Optimal network-on-chipnetwork architecturesare an ongoing area of much research interest. NoC architectures range from traditional distributed computingnetwork topologiessuch astorus,hypercube,meshesandtree networkstogenetic algorithm schedulingtorandomized algorithmssuch asrandom walks with branchingand randomizedtime to live(TTL). Many SoC researchers consider NoC architectures to be the future of SoC design because they have been shown to efficiently meet power and throughput needs of SoC designs. Current NoC architectures are two-dimensional. 2D IC design has limitedfloorplanningchoices as the number of cores in SoCs increase, so asthree-dimensional integrated circuits(3DICs) emerge, SoC designers are looking towards building three-dimensional on-chip networks known as 3DNoCs.[13] A system on a chip consists of both thehardware, described in§ Structure, and the software controlling the microcontroller, microprocessor or digital signal processor cores, peripherals and interfaces. Thedesign flowfor an SoC aims to develop this hardware and software at the same time, also known as architectural co-design. The design flow must also take into account optimizations (§ Optimization goals) and constraints. Most SoCs are developed from pre-qualified hardware componentIP core specificationsfor the hardware elements andexecution units, collectively "blocks", described above, together with softwaredevice driversthat may control their operation. Of particular importance are theprotocol stacksthat drive industry-standard interfaces likeUSB. The hardware blocks are put together usingcomputer-aided designtools, specificallyelectronic design automationtools; thesoftware modulesare integrated using a softwareintegrated development environment. SoCs components are also often designed inhigh-level programming languagessuch asC++,MATLABorSystemCand converted toRTLdesigns throughhigh-level synthesis(HLS) tools such asC to HDLorflow to HDL.[14]HLS products called "algorithmic synthesis" allow designers to use C++ to model and synthesize system, circuit, software and verification levels all in one high level language commonly known tocomputer engineersin a manner independent of time scales, which are typically specified in HDL.[15]Other components can remain software and be compiled and embedded ontosoft-core processorsincluded in the SoC as modules in HDL asIP cores. Once thearchitectureof the SoC has been defined, any new hardware elements are written in an abstracthardware description languagetermedregister transfer level(RTL) which defines the circuit behavior, or synthesized into RTL from a high level language through high-level synthesis. These elements are connected together in a hardware description language to create the full SoC design. The logic specified to connect these components and convert between possibly different interfaces provided by different vendors is calledglue logic. Chips are verified for validation correctness before being sent to asemiconductor foundry. 
This process is calledfunctional verificationand it accounts for a significant portion of the time and energy expended in thechip design life cycle, often quoted as 70%.[16][17]With the growing complexity of chips,hardware verification languageslikeSystemVerilog,SystemC,e, and OpenVera are being used.Bugsfound in the verification stage are reported to the designer. Traditionally, engineers have employed simulation acceleration,emulationor prototyping onreprogrammable hardwareto verify and debug hardware and software for SoC designs prior to the finalization of the design, known astape-out.Field-programmable gate arrays(FPGAs) are favored for prototyping SoCs becauseFPGA prototypesare reprogrammable, allowdebuggingand are more flexible thanapplication-specific integrated circuits(ASICs).[18][19] With high capacity and fast compilation time, simulation acceleration and emulation are powerful technologies that provide wide visibility into systems. Both technologies, however, operate slowly, on the order of MHz, which may be significantly slower – up to 100 times slower – than the SoC's operating frequency. Acceleration and emulation boxes are also very large and expensive at over US$1 million.[citation needed] FPGA prototypes, in contrast, use FPGAs directly to enable engineers to validate and test at, or close to, a system's full operating frequency with real-world stimuli. Tools such as Certus[20]are used to insert probes in the FPGA RTL that make signals available for observation. This is used to debug hardware, firmware and software interactions across multiple FPGAs with capabilities similar to a logic analyzer. In parallel, the hardware elements are grouped and passed through a process oflogic synthesis, during which performance constraints, such as operational frequency and expected signal delays, are applied. This generates an output known as anetlistdescribing the design as a physical circuit and its interconnections. These netlists are combined with theglue logicconnecting the components to produce the schematic description of the SoC as a circuit which can beprintedonto a chip. This process is known asplace and routeand precedestape-outin the event that the SoCs are produced asapplication-specific integrated circuits(ASIC). SoCs must optimizepower use, area ondie, communication, positioning forlocalitybetween modular units and other factors. Optimization is necessarily a design goal of SoCs. If optimization was not necessary, the engineers would use amulti-chip modulearchitecture without accounting for the area use, power consumption or performance of the system to the same extent. Common optimization targets for SoC designs follow, with explanations of each. In general, optimizing any of these quantities may be a hardcombinatorial optimizationproblem, and can indeed beNP-hardfairly easily. Therefore, sophisticatedoptimization algorithmsare often required and it may be practical to useapproximation algorithmsorheuristicsin some cases. Additionally, most SoC designs containmultiple variables to optimize simultaneously, soPareto efficientsolutions are sought after in SoC design. Oftentimes the goals of optimizing some of these quantities are directly at odds, further adding complexity to design optimization of SoCs and introducingtrade-offsin system design. For broader coverage of trade-offs andrequirements analysis, seerequirements engineering. SoCs are optimized to minimize theelectrical powerused to perform the SoC's functions. Most SoCs must use low power. 
SoC systems often require longbattery life(such assmartphones), can potentially spend months or years without a power source while needing to maintain autonomous function, and often are limited in power use by a high number ofembeddedSoCs beingnetworked togetherin an area. Additionally, energy costs can be high and conserving energy will reduce thetotal cost of ownershipof the SoC. Finally,waste heatfrom high energy consumption can damage other circuit components if too much heat is dissipated, giving another pragmatic reason to conserve energy. The amount of energy used in a circuit is theintegralofpowerconsumed with respect to time, and theaverage rateof power consumption is the product ofcurrentbyvoltage. Equivalently, byOhm's law, power is current squared times resistance or voltage squared divided byresistance: P=IV=V2R=I2R{\displaystyle P=IV={\frac {V^{2}}{R}}={I^{2}}{R}}SoCs are frequently embedded inportable devicessuch assmartphones,GPS navigation devices, digitalwatches(includingsmartwatches) andnetbooks. Customers want long battery lives formobile computingdevices, another reason that power consumption must be minimized in SoCs.Multimedia applicationsare often executed on these devices, including video games,video streaming,image processing; all of which have grown incomputational complexityin recent years with user demands and expectations for higher-qualitymultimedia. Computation is more demanding as expectations move towards3D videoathigh resolutionwithmultiple standards, so SoCs performing multimedia tasks must be computationally capable platform while being low power to run off a standard mobile battery.[12]: 3 SoCs are optimized to maximizepower efficiencyin performance per watt: maximize the performance of the SoC given a budget of power usage. Many applications such asedge computing,distributed processingandambient intelligencerequire a certain level ofcomputational performance, but power is limited in most SoC environments. SoC designs are optimized to minimizewaste heatoutputon the chip. As with otherintegrated circuits, heat generated due to highpower densityare thebottleneckto furtherminiaturizationof components.[21]: 1The power densities of high speed integrated circuits, particularly microprocessors and including SoCs, have become highly uneven. Too much waste heat can damage circuits and erodereliabilityof the circuit over time. High temperatures and thermal stress negatively impact reliability,stress migration, decreasedmean time between failures,electromigration,wire bonding,metastabilityand other performance degradation of the SoC over time.[21]: 2–9 In particular, most SoCs are in a small physical area or volume and therefore the effects of waste heat are compounded because there is little room for it to diffuse out of the system. Because of hightransistor countson modern devices, oftentimes a layout of sufficient throughput and hightransistor densityis physically realizable fromfabrication processesbut would result in unacceptably high amounts of heat in the circuit's volume.[21]: 1 These thermal effects force SoC and other chip designers to apply conservativedesign margins, creating less performant devices to mitigate the risk ofcatastrophic failure. Due to increasedtransistor densitiesas length scales get smaller, eachprocess generationproduces more heat output than the last. 
Compounding this problem, SoC architectures are usually heterogeneous, creating spatially inhomogeneous heat fluxes, which cannot be effectively mitigated by uniform passive cooling.[21]: 1

SoCs are optimized to maximize computational and communications throughput. SoCs are also optimized to minimize latency for some or all of their functions. This can be accomplished by laying out elements with proper proximity and locality to each other to minimize the interconnection delays and maximize the speed at which data is communicated between modules, functional units and memories. In general, optimizing to minimize latency is an NP-complete problem equivalent to the Boolean satisfiability problem. For tasks running on processor cores, latency and throughput can be improved with task scheduling. Some tasks run in application-specific hardware units, however, and even task scheduling may not be sufficient to optimize all software-based tasks to meet timing and throughput constraints.

Systems on chip are modeled with standard hardware verification and validation techniques, but additional techniques are used to model and optimize SoC design alternatives to make the system optimal with respect to multiple-criteria decision analysis on the above optimization targets.

Task scheduling is an important activity in any computer system with multiple processes or threads sharing a single processor core. It is important to reduce § Latency and increase § Throughput for embedded software running on an SoC's § Processor cores. Not every important computing activity in an SoC is performed in software running on on-chip processors, but scheduling can drastically improve performance of software-based tasks and other tasks involving shared resources. Software running on SoCs often schedules tasks according to network scheduling and randomized scheduling algorithms.

Hardware and software tasks are often pipelined in processor design. Pipelining is an important principle for speedup in computer architecture. Pipelines are frequently used in GPUs (graphics pipeline) and RISC processors (evolutions of the classic RISC pipeline), but are also applied to application-specific tasks such as digital signal processing and multimedia manipulations in the context of SoCs.[12]

SoCs are often analyzed through probabilistic models, queueing networks, and Markov chains. For instance, Little's law allows SoC states and NoC buffers to be modeled as arrival processes and analyzed through Poisson random variables and Poisson processes. SoCs are often modeled with Markov chains, both discrete time and continuous time variants. Markov chain modeling allows asymptotic analysis of the SoC's steady state distribution of power, heat, latency and other factors, so that design decisions can be optimized for the common case.

SoC chips are typically fabricated using metal–oxide–semiconductor (MOS) technology.[22] The netlists described above are used as the basis for the physical design (place and route) flow to convert the designers' intent into the design of the SoC. Throughout this conversion process, the design is analyzed with static timing modeling, simulation and other tools to ensure that it meets the specified operational parameters such as frequency, power consumption and dissipation, functional integrity (as described in the register transfer level code) and electrical integrity.
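Returning to the queueing-theoretic modelling mentioned above, the short sketch below applies Little's law to a single NoC router buffer under a hypothetical M/M/1 (Poisson arrivals, exponential service) assumption. The arrival and service rates are invented numbers used only to show the calculation.

    def mm1_mean_occupancy(arrival_rate, service_rate):
        """Mean number of packets resident in one router queue, modelled as M/M/1.

        Little's law: L = lambda * W, with W = 1 / (mu - lambda) for an M/M/1 queue.
        """
        assert arrival_rate < service_rate, "otherwise the queue grows without bound"
        mean_time_in_system = 1.0 / (service_rate - arrival_rate)   # W, in cycles
        return arrival_rate * mean_time_in_system                   # L = lambda * W

    # e.g. 0.6 packets injected per cycle into a router that drains 1 packet per cycle:
    print(mm1_mean_occupancy(0.6, 1.0))   # 1.5 packets on average, informing buffer sizing

Estimates of this kind feed the steady-state analysis described above, where the common case rather than the worst case drives the design decision.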
When all known bugs have been rectified and these have been re-verified and all physical design checks are done, the physical design files describing each layer of the chip are sent to the foundry's mask shop where a full set of glass lithographic masks will be etched. These are sent to a wafer fabrication plant to create the SoC dice before packaging and testing. SoCs can be fabricated by several technologies, including: ASICs consume less power and are faster than FPGAs but cannot be reprogrammed and are expensive to manufacture. FPGA designs are more suitable for lower volume designs, but after enough units of production ASICs reduce the total cost of ownership.[23] SoC designs consume less power and have a lower cost and higher reliability than the multi-chip systems that they replace. With fewer packages in the system, assembly costs are reduced as well. However, like mostvery-large-scale integration(VLSI) designs, the total cost[clarification needed]is higher for one large chip than for the same functionality distributed over several smaller chips, because oflower yields[clarification needed]and highernon-recurring engineeringcosts. When it is not feasible to construct an SoC for a particular application, an alternative is asystem in package(SiP) comprising a number of chips in a singlepackage. When produced in large volumes, SoC is more cost-effective than SiP because its packaging is simpler.[24]Another reason SiP may be preferred iswaste heatmay be too high in a SoC for a given purpose because functional components are too close together, and in an SiP heat will dissipate better from different functional modules since they are physically further apart. Some examples of systems on a chip are: SoCresearch and developmentoften compares many options. Benchmarks, such as COSMIC,[25]are developed to help such evaluations.
https://en.wikipedia.org/wiki/System_on_a_chip
The term is used for two different things: software-level in-memory processing, in which data is kept and analysed entirely in main memory rather than on disk, and hardware-level processing-in-memory (PIM), in which computation is performed within the memory itself. Extremely large datasets may be divided between co-operating systems as in-memory data grids. PIM can be implemented in several ways.[4]

In-memory processing techniques are frequently used by modern smartphones and tablets to improve application performance. This can result in speedier app loading times and more enjoyable user experiences.

With disk-based technology, data is loaded onto the computer's hard disk in the form of multiple tables and multi-dimensional structures against which queries are run. Disk-based technologies are often relational database management systems (RDBMS), often based on the structured query language (SQL), such as SQL Server, MySQL, Oracle and many others. RDBMS are designed for the requirements of transactional processing. With a database built to support insertions and updates, performing aggregations and joins (typical in BI solutions) is typically very slow. Another drawback is that SQL is designed to efficiently fetch rows of data, while BI queries usually involve fetching only parts of rows combined with heavy calculations.

To improve query performance, multidimensional databases or OLAP cubes - also called multidimensional online analytical processing (MOLAP) - may be constructed. Designing a cube may be an elaborate and lengthy process, and changing the cube's structure to adapt to dynamically changing business needs may be cumbersome. Cubes are pre-populated with data to answer specific queries and although they increase performance, they are still not optimal for answering all ad-hoc queries.[9]

Information technology (IT) staff may spend substantial development time on optimizing databases, constructing indexes and aggregates, designing cubes and star schemas, data modeling, and query analysis.[10]

Reading data from the hard disk is much slower (possibly hundreds of times) when compared to reading the same data from RAM. Especially when analyzing large volumes of data, performance is severely degraded. Though SQL is a very powerful tool, arbitrarily complex queries with a disk-based implementation take a relatively long time to execute and often bring down the performance of transactional processing. In order to obtain results within an acceptable response time, many data warehouses have been designed to pre-calculate summaries and answer specific queries only. Optimized aggregation algorithms are needed to increase performance.

With both in-memory databases and data grids, all information is initially loaded into RAM or flash memory instead of onto hard disks. With a data grid, processing occurs up to three orders of magnitude faster than with relational databases, whose advanced functionality, such as ACID guarantees, degrades performance in exchange for that additional functionality. The arrival of column-centric databases, which store similar information together, allows data to be stored more efficiently and with greater compression ratios. This allows huge amounts of data to be stored in the same physical space, reducing the amount of memory needed to perform a query and increasing processing speed. Many users and software vendors have integrated flash memory into their systems to allow systems to scale to larger data sets more economically.

Users query the data loaded into the system's memory, thereby avoiding slower database access and performance bottlenecks. This differs from caching, a very widely used method to speed up query performance, in that caches are subsets of very specific pre-defined organized data.
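As a minimal illustration of the in-memory approach, the sketch below uses Python's built-in sqlite3 module with a ':memory:' database, so both the table and the ad-hoc aggregation live entirely in RAM and no disk I/O or pre-built cube is involved. The table and column names are invented for the example; a production in-memory analytics engine would add columnar storage and compression as discussed above.

    import sqlite3

    conn = sqlite3.connect(":memory:")          # the database lives entirely in RAM
    conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
    conn.executemany(
        "INSERT INTO sales VALUES (?, ?)",
        [("north", 120.0), ("south", 75.5), ("north", 42.0)],
    )

    # Ad-hoc aggregation against the in-memory table: no disk access, no pre-built
    # OLAP cube or index is required before the question can be asked.
    totals = conn.execute(
        "SELECT region, SUM(amount) FROM sales GROUP BY region"
    ).fetchall()
    print(totals)                               # e.g. [('north', 162.0), ('south', 75.5)]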
With in-memory tools, data available for analysis can be as large as adata martor small data warehouse which is entirely in memory. This can be accessed quickly by multiple concurrent users or applications at a detailed level and offers the potential for enhanced analytics and for scaling and increasing the speed of an application. Theoretically, the improvement in data access speed is 10,000 to 1,000,000 times compared to the disk.[citation needed]It also minimizes the need for performance tuning by IT staff and provides faster service for end users. Certain developments in computer technology and business needs have tended to increase the relative advantages of in-memory technology.[11] A range of in-memory products provide ability to connect to existing data sources and access to visually rich interactive dashboards. This allows business analysts and end users to create custom reports and queries without much training or expertise. Easy navigation and ability to modify queries on the fly is of benefit to many users. Since these dashboards can be populated with fresh data, users have access to real time data and can create reports within minutes. In-memory processing may be of particular benefit incall centersand warehouse management. With in-memory processing, the source database is queried only once instead of accessing the database every time a query is run, thereby eliminating repetitive processing and reducing the burden on database servers. By scheduling to populate the in-memory database overnight, the database servers can be used for operational purposes during peak hours. With a large number of users, a large amount ofRAMis needed for an in-memory configuration, which in turn affects the hardware costs. The investment is more likely to be suitable in situations where speed of query response is a high priority, and where there is significant growth in data volume and increase in demand for reporting facilities; it may still not be cost-effective where information is not subject to rapid change.Securityis another consideration, as in-memory tools expose huge amounts of data to end users. Makers advise ensuring that only authorized users are given access to the data.
https://en.wikipedia.org/wiki/In-memory_computing
Incomputing, avector processororarray processoris acentral processing unit(CPU) that implements aninstruction setwhere itsinstructionsare designed to operate efficiently and effectively on largeone-dimensional arraysof data calledvectors. This is in contrast toscalar processors, whose instructions operate on single data items only, and in contrast to some of those same scalar processors having additionalsingle instruction, multiple data(SIMD) orSIMD within a register(SWAR) Arithmetic Units. Vector processors can greatly improve performance on certain workloads, notablynumerical simulation,compressionand similar tasks.[1]Vector processing techniques also operate invideo-game consolehardware and ingraphics accelerators. Vector machines appeared in the early 1970s and dominatedsupercomputerdesign through the 1970s into the 1990s, notably the variousCrayplatforms. The rapid fall in theprice-to-performance ratioof conventionalmicroprocessordesigns led to a decline in vector supercomputers during the 1990s. Vector processing development began in the early 1960s at theWestinghouse Electric Corporationin theirSolomonproject. Solomon's goal was to dramatically increase math performance by using a large number of simplecoprocessorsunder the control of a single masterCentral processing unit(CPU). The CPU fed a single common instruction to all of thearithmetic logic units(ALUs), one per cycle, but with a different data point for each one to work on. This allowed the Solomon machine to apply a singlealgorithmto a largedata set, fed in the form of an array.[citation needed] In 1962, Westinghouse cancelled the project, but the effort was restarted by theUniversity of Illinois at Urbana–Champaignas theILLIAC IV. Their version of the design originally called for a 1GFLOPSmachine with 256 ALUs, but, when it was finally delivered in 1972, it had only 64 ALUs and could reach only 100 to 150 MFLOPS. Nevertheless, it showed that the basic concept was sound, and, when used on data-intensive applications, such ascomputational fluid dynamics, the ILLIAC was the fastest machine in the world. The ILLIAC approach of using separate ALUs for each data element is not common to later designs, and is often referred to under a separate category,massively parallelcomputing. Around this time Flynn categorized this type of processing as an early form ofsingle instruction, multiple threads(SIMT).[citation needed] International Computers Limitedsought to avoid many of the difficulties with the ILLIAC concept with its ownDistributed Array Processor(DAP) design, categorising the ILLIAC and DAP as cellular array processors that potentially offered substantial performance benefits over conventional vector processor designs such as the CDC STAR-100 and Cray 1.[2] Acomputer for operations with functionswas presented and developed by Kartsev in 1967.[3] The first vector supercomputers are theControl Data CorporationSTAR-100andTexas InstrumentsAdvanced Scientific Computer(ASC), which were introduced in 1974 and 1972, respectively. The basic ASC (i.e., "one pipe") ALU used a pipeline architecture that supported both scalar and vector computations, with peak performance reaching approximately 20 MFLOPS, readily achieved when processing long vectors. Expanded ALU configurations supported "two pipes" or "four pipes" with a corresponding 2X or 4X performance gain. Memory bandwidth was sufficient to support these expanded modes. 
The STAR-100 was otherwise slower than CDC's own supercomputers like theCDC 7600, but at data-related tasks they could keep up while being much smaller and less expensive. However the machine also took considerable time decoding the vector instructions and getting ready to run the process, so it required very specific data sets to work on before it actually sped anything up. The vector technique was first fully exploited in 1976 by the famousCray-1. Instead of leaving the data in memory like the STAR-100 and ASC, the Cray design had eightvector registers, which held sixty-four 64-bit words each. The vector instructions were applied between registers, which is much faster than talking to main memory. Whereas the STAR-100 would apply a single operation across a long vector in memory and then move on to the next operation, the Cray design would load a smaller section of the vector into registers and then apply as many operations as it could to that data, thereby avoiding many of the much slower memory access operations. The Cray design usedpipeline parallelismto implement vector instructions rather than multiple ALUs. In addition, the design had completely separate pipelines for different instructions, for example, addition/subtraction was implemented in different hardware than multiplication. This allowed a batch of vector instructions to be pipelined into each of the ALU subunits, a technique they calledvector chaining. The Cray-1 normally had a performance of about 80 MFLOPS, but with up to three chains running it could peak at 240 MFLOPS and averaged around 150 – far faster than any machine of the era. Other examples followed.Control Data Corporationtried to re-enter the high-end market again with itsETA-10machine, but it sold poorly and they took that as an opportunity to leave the supercomputing field entirely. In the early and mid-1980s Japanese companies (Fujitsu,HitachiandNippon Electric Corporation(NEC) introduced register-based vector machines similar to the Cray-1, typically being slightly faster and much smaller.Oregon-basedFloating Point Systems(FPS) built add-on array processors forminicomputers, later building their ownminisupercomputers. Throughout, Cray continued to be the performance leader, continually beating the competition with a series of machines that led to theCray-2,Cray X-MPandCray Y-MP. Since then, the supercomputer market has focused much more onmassively parallelprocessing rather than better implementations of vector processors. However, recognising the benefits of vector processing, IBM developedVirtual Vector Architecturefor use in supercomputers coupling several scalar processors to act as a vector processor. Although vector supercomputers resembling the Cray-1 are less popular these days, NEC has continued to make this type of computer up to the present day with theirSX seriesof computers. Most recently, theSX-Aurora TSUBASAplaces the processor and either 24 or 48 gigabytes of memory on anHBM2 module within a card that physically resembles a graphics coprocessor, but instead of serving as a co-processor, it is the main computer with the PC-compatible computer into which it is plugged serving support functions. Modern graphics processing units (GPUs) include an array ofshader pipelineswhich may be driven bycompute kernels, and can be considered vector processors (using a similar strategy for hiding memory latencies). 
As shown in Flynn's 1972 paper, the key distinguishing factor of SIMT-based GPUs is that they have a single instruction decoder-broadcaster, while the cores receiving and executing that same instruction are otherwise reasonably normal: they have their own ALUs, their own register files, their own load/store units and their own independent L1 data caches. Thus, although all cores simultaneously execute the exact same instruction in lock-step with each other, they do so with completely different data from completely different memory locations. This is significantly more complex and involved than "Packed SIMD", which is strictly limited to execution of parallel pipelined arithmetic operations only. Although the exact internal details of today's commercial GPUs are proprietary secrets, the MIAOW[4] team was able to piece together anecdotal information sufficient to implement a subset of the AMDGPU architecture.[5]

Several modern CPU architectures are being designed as vector processors. The RISC-V vector extension follows similar principles as the early vector processors, and is being implemented in commercial products such as the Andes Technology AX45MPV.[6] There are also several open source vector processor architectures being developed, including ForwardCom and Libre-SOC.

As of 2016, most commodity CPUs implement architectures that feature fixed-length SIMD instructions. On first inspection these can be considered a form of vector processing because they operate on multiple (vectorized, explicit length) data sets, and borrow features from vector processors. However, by definition, the addition of SIMD cannot, by itself, qualify a processor as an actual vector processor, because SIMD is fixed-length, and vectors are variable-length. The difference is illustrated below with examples, showing and comparing the three categories: Pure SIMD, Predicated SIMD, and Pure Vector Processing.[citation needed]

Other CPU designs include some multiple instructions for vector processing on multiple (vectorized) data sets, typically known as MIMD (Multiple Instruction, Multiple Data) and realized with VLIW (Very Long Instruction Word) and EPIC (Explicitly Parallel Instruction Computing). The Fujitsu FR-V VLIW/vector processor combines both technologies.

SIMD instruction sets lack crucial features when compared to vector instruction sets. The most important of these is that vector processors, inherently by definition and design, have always been variable-length since their inception. Whereas pure (fixed-width, no predication) SIMD is often mistakenly claimed to be "vector" (because SIMD processes data which happens to be vectors), close analysis and comparison of historic and modern ISAs shows that actual vector ISAs have several features that no SIMD ISA has.[citation needed]

Predicated SIMD (part of Flynn's taxonomy), which provides comprehensive individual element-level predicate masks on every vector instruction, as is now available in ARM SVE2[10] and AVX-512, almost qualifies as a vector processor. Predicated SIMD uses fixed-width SIMD ALUs but allows locally controlled (predicated) activation of units to provide the appearance of variable length vectors. Examples below help explain these categorical distinctions.

SIMD, because it uses fixed-width batch processing, is unable by design to cope with iteration and reduction. This is illustrated further with examples, below.
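The categorical difference can be sketched in plain Python by treating each helper function below as a stand-in for one hardware style; the lane width of 4 and the MAXVL value are illustrative assumptions, not properties of any particular ISA. Note that only the vector-style version has no separately coded tail case.

    SIMD_WIDTH = 4      # fixed hardware lane count (assumed for illustration)
    MAXVL = 4           # maximum vector length of a hypothetical vector unit

    def add_pure_simd(a, b):
        """Pure fixed-width SIMD: whole groups of 4 only, plus a scalar tail loop."""
        out, i = [], 0
        while i + SIMD_WIDTH <= len(a):
            out.extend(a[i + k] + b[i + k] for k in range(SIMD_WIDTH))   # one 4-wide op
            i += SIMD_WIDTH
        while i < len(a):                    # leftover elements need separate scalar code
            out.append(a[i] + b[i])
            i += 1
        return out

    def add_predicated_simd(a, b):
        """Predicated SIMD: still 4-wide, but a mask switches off the unused lanes."""
        out = []
        for i in range(0, len(a), SIMD_WIDTH):
            mask = [i + k < len(a) for k in range(SIMD_WIDTH)]
            out.extend(a[i + k] + b[i + k] for k in range(SIMD_WIDTH) if mask[k])
        return out

    def add_vector(a, b):
        """Vector style: a setvl-like step picks the length of each batch at run time."""
        out, i = [], 0
        while i < len(a):
            vl = min(MAXVL, len(a) - i)      # hardware reports how many elements it will do
            out.extend(a[i + k] + b[i + k] for k in range(vl))
            i += vl
        return out

    xs, ys = list(range(10)), list(range(10, 20))
    assert add_pure_simd(xs, ys) == add_predicated_simd(xs, ys) == add_vector(xs, ys)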
Additionally, vector processors can be more resource-efficient by using slower hardware and saving power, but still achieving throughput and having less latency than SIMD, throughvector chaining.[11][12] Consider both a SIMD processor and a vector processor working on 4 64-bit elements, doing a LOAD, ADD, MULTIPLY and STORE sequence. If the SIMD width is 4, then the SIMD processor must LOAD four elements entirely before it can move on to the ADDs, must complete all the ADDs before it can move on to the MULTIPLYs, and likewise must complete all of the MULTIPLYs before it can start the STOREs. This is by definition and by design.[13] Having to perform 4-wide simultaneous 64-bit LOADs and 64-bit STOREs is very costly in hardware (256 bit data paths to memory). Having 4x 64-bit ALUs, especially MULTIPLY, likewise. To avoid these high costs, a SIMD processor would have to have 1-wide 64-bit LOAD, 1-wide 64-bit STORE, and only 2-wide 64-bit ALUs. As shown in the diagram, which assumes amulti-issue execution model, the consequences are that the operations now take longer to complete. If multi-issue is not possible, then the operations take even longer because the LD may not be issued (started) at the same time as the first ADDs, and so on. If there are only 4-wide 64-bit SIMD ALUs, the completion time is even worse: only when all four LOADs have completed may the SIMD operations start, and only when all ALU operations have completed may the STOREs begin. A vector processor, by contrast, even if it issingle-issueand uses no SIMD ALUs, only having 1-wide 64-bit LOAD, 1-wide 64-bit STORE (and, as in theCray-1, the ability to run MULTIPLY simultaneously with ADD), may complete the four operations faster than a SIMD processor with 1-wide LOAD, 1-wide STORE, and 2-wide SIMD. This more efficient resource utilization, due tovector chaining, is a key advantage and difference compared to SIMD. SIMD, by design and definition, cannot perform chaining except to the entire group of results.[14] In general terms, CPUs are able to manipulate one or two pieces of data at a time. For instance, most CPUs have an instruction that essentially says "add A to B and put the result in C". The data for A, B and C could be—in theory at least—encoded directly into the instruction. However, in efficient implementation things are rarely that simple. The data is rarely sent in raw form, and is instead "pointed to" by passing in an address to a memory location that holds the data. Decoding this address and getting the data out of the memory takes some time, during which the CPU traditionally would sit idle waiting for the requested data to show up. As CPU speeds have increased, thismemory latencyhas historically become a large impediment to performance; seeRandom-access memory § Memory wall. In order to reduce the amount of time consumed by these steps, most modern CPUs use a technique known asinstruction pipeliningin which the instructions pass through several sub-units in turn. The first sub-unit reads the address and decodes it, the next "fetches" the values at those addresses, and the next does the math itself. With pipelining the "trick" is to start decoding the next instruction even before the first has left the CPU, in the fashion of anassembly line, so theaddress decoderis constantly in use. 
Any particular instruction takes the same amount of time to complete, a time known as thelatency, but the CPU can process an entire batch of operations, in an overlapping fashion, much faster and more efficiently than if it did so one at a time. Vector processors take this concept one step further. Instead of pipelining just the instructions, they also pipeline the data itself. The processor is fed instructions that say not just to add A to B, but to add all of the numbers "from here to here" to all of the numbers "from there to there". Instead of constantly having to decode instructions and then fetch the data needed to complete them, the processor reads a single instruction from memory, and it is simply implied in the definition of the instructionitselfthat the instruction will operate again on another item of data, at an address one increment larger than the last. This allows for significant savings in decoding time. To illustrate what a difference this can make, consider the simple task of adding two groups of 10 numbers together. In a normal programming language one would write a "loop" that picked up each of the pairs of numbers in turn, and then added them. To the CPU, this would look something like this: But to a vector processor, this task looks considerably different: Note the complete lack of looping in the instructions, because it is thehardwarewhich has performed 10 sequential operations: effectively the loop count is on an explicitper-instructionbasis. Cray-style vector ISAs take this a step further and provide a global "count" register, called vector length (VL): There are several savings inherent in this approach.[15] Additionally, in more modern vector processor ISAs, "Fail on First" or "Fault First" has been introduced (see below) which brings even more advantages. But more than that, a high performance vector processor may have multiplefunctional unitsadding those numbers in parallel. The checking of dependencies between those numbers is not required as a vector instruction specifies multiple independent operations. This simplifies the control logic required, and can further improve performance by avoiding stalls. The math operations thus completed far faster overall, the limiting factor being the time required to fetch the data from memory. Not all problems can be attacked with this sort of solution. Including these types of instructions necessarily adds complexity to the core CPU. That complexity typically makesotherinstructions run slower—i.e., whenever it isnotadding up many numbers in a row. The more complex instructions also add to the complexity of the decoders, which might slow down the decoding of the more common instructions such as normal adding. (This can be somewhat mitigated by keeping the entire ISA toRISCprinciples: RVV only adds around 190 vector instructions even with the advanced features.[16]) Vector processors were traditionally designed to work best only when there are large amounts of data to be worked on. For this reason, these sorts of CPUs were found primarily insupercomputers, as the supercomputers themselves were, in general, found in places such as weather prediction centers and physics labs, where huge amounts of data are "crunched". However, as shown above and demonstrated by RISC-V RVV theefficiencyof vector ISAs brings other benefits which are compelling even for Embedded use-cases. The vector pseudocode example above comes with a big assumption that the vector computer can process more than ten numbers in one batch. 
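The scalar-loop and vector-instruction forms just described can be made concrete with a minimal illustrative sketch in C; the vadd mnemonic in the comment is hypothetical and the routine itself is not taken from any particular ISA:

    #include <stddef.h>

    /* Scalar form: ten separate adds, each surrounded by loop overhead
     * (compare, increment, branch) and its own instruction decode.      */
    void add_scalar(const int *a, const int *b, int *c, size_t n)
    {
        for (size_t i = 0; i < n; i++)     /* n = 10 in the example above */
            c[i] = a[i] + b[i];
    }

    /* Vector form (conceptual): a single instruction such as
     *     vadd c, a, b, #10
     * tells the hardware to add the ten elements starting at a to the ten
     * elements starting at b, storing the results at c, with no per-element
     * decode or branching.  Cray-style ISAs replace the literal count with
     * a vector-length (VL) register that is set before the instruction runs. */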
For a greater quantity of numbers in the vector register, it becomes unfeasible for the computer to have a register that large. As a result, the vector processor either gains the ability to perform loops itself, or exposes some sort of vector control (status) register to the programmer, usually known as a vector Length. The self-repeating instructions are found in early vector computers like the STAR-100, where the above action would be described in a single instruction (somewhat likevadd c, a, b, $10). They are also found in thex86architecture as theREPprefix. However, only very simple calculations can be done effectively in hardware this way without a very large cost increase. Since all operands have to be in memory for the STAR-100 architecture, the latency caused by access became huge too. Broadcom included space in all vector operations of theVideocoreIV ISA for aREPfield, but unlike the STAR-100 which uses memory for its repeats, the Videocore IV repeats are on all operations including arithmetic vector operations. The repeat length can be a small range ofpower of twoor sourced from one of the scalar registers.[17] TheCray-1introduced the idea of usingprocessor registersto hold vector data in batches. The batch lengths (vector length, VL) could be dynamically set with a special instruction, the significance compared to Videocore IV (and, crucially as will be shown below, SIMD as well) being that the repeat length does not have to be part of the instruction encoding. This way, significantly more work can be done in each batch; the instruction encoding is much more elegant and compact as well. The only drawback is that in order to take full advantage of this extra batch processing capacity, the memory load and store speed correspondingly had to increase as well. This is sometimes claimed[by whom?]to be a disadvantage of Cray-style vector processors: in reality it is part of achieving high performance throughput, as seen inGPUs, which face exactly the same issue. Modern SIMD computers claim to improve on early Cray by directly using multiple ALUs, for a higher degree of parallelism compared to only using the normal scalar pipeline. Modern vector processors (such as theSX-Aurora TSUBASA) combine both, by issuing multiple data to multiple internal pipelined SIMD ALUs, the number issued being dynamically chosen by the vector program at runtime. Masks can be used to selectively load and store data in memory locations, and use those same masks to selectively disable processing element of SIMD ALUs. Some processors with SIMD (AVX-512, ARMSVE2) are capable of this kind of selective, per-element ("predicated") processing, and it is these which somewhat deserve the nomenclature "vector processor" or at least deserve the claim of being capable of "vector processing". SIMD processors without per-element predication (MMX,SSE,AltiVec) categorically do not. Modern GPUs, which have many small compute units each with their own independent SIMD ALUs, useSingle Instruction Multiple Threads(SIMT). SIMT units run from a shared single broadcast synchronised Instruction Unit. The "vector registers" are very wide and the pipelines tend to be long. The "threading" part of SIMT involves the way data is handled independently on each of the compute units. 
In addition, GPUs such as the Broadcom Videocore IV and other external vector processors like the NEC SX-Aurora TSUBASA may use fewer vector units than the width implies: instead of having 64 units for a 64-number-wide register, the hardware might instead do a pipelined loop over 16 units for a hybrid approach. The Broadcom Videocore IV is also capable of this hybrid approach: nominally stating that its SIMD QPU Engine supports 16-long FP array operations in its instructions, it actually does them 4 at a time, as (another) form of "threads".[18] This example starts with an algorithm ("IAXPY"), first showing it in scalar instructions, then SIMD, then predicated SIMD, and finally vector instructions. This incrementally helps illustrate the difference between a traditional vector processor and a modern SIMD one. The example starts with a 32-bit integer variant of the "DAXPY" function, in C (a reconstruction of the routine is sketched below). In each iteration, every element of y has an element of x multiplied by a and added to it. The program is expressed in scalar linear form for readability. The scalar version of this would load one of each of x and y, process one calculation, store one result, and loop. The STAR-like code remains concise, but because the STAR-100's vectorisation was by design based around memory accesses, an extra slot of memory is now required to process the information. Twice the latency is also needed due to the extra requirement of memory access. A modern packed SIMD architecture, known by many names (listed in Flynn's taxonomy), can do most of the operation in batches. The code is mostly similar to the scalar version. It is assumed that both x and y are properly aligned here (only start on a multiple of 16) and that n is a multiple of 4, as otherwise some setup code would be needed to calculate a mask or to run a scalar version. It can also be assumed, for simplicity, that the SIMD instructions have an option to automatically repeat scalar operands, as ARM NEON can.[19] If they do not, a "splat" (broadcast) must be used to copy the scalar argument across a SIMD register. The time taken would be basically the same as a vector implementation of y = mx + c described above. Note that both x and y pointers are incremented by 16, because that is how long (in bytes) four 32-bit integers are. The decision was made that the algorithm shall only cope with 4-wide SIMD, therefore the constant is hard-coded into the program. Unfortunately for SIMD, the clue was in the assumptions above, "that n is a multiple of 4" and "aligned access", which is clearly a limited specialist use-case. Realistically, for general-purpose loops such as in portable libraries, where n cannot be limited in this way, the overhead of setup and cleanup for SIMD in order to cope with non-multiples of the SIMD width can far exceed the instruction count inside the loop itself. Assuming, worst-case, that the hardware cannot do misaligned SIMD memory accesses, a real-world algorithm will need additional setup and cleanup code around the main SIMD loop. Eight-wide SIMD requires repeating the inner loop algorithm first with four-wide SIMD elements, then two-wide SIMD, then one (scalar), with a test and branch in between each one, in order to cover the first and last remaining SIMD elements (0 <= n <= 7). This more than triples the size of the code; in fact, in extreme cases it results in an order of magnitude increase in instruction count. This can easily be demonstrated by compiling the iaxpy example for AVX-512, using the options "-O3 -march=knl" to gcc.
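A plausible C rendering of the 32-bit integer IAXPY routine described above (the names and layout are illustrative, not an original listing):

    #include <stddef.h>
    #include <stdint.h>

    /* 32-bit integer variant of DAXPY ("IAXPY"):
     * every element of y gets an element of x, multiplied by a, added to it. */
    void iaxpy(size_t n, int32_t a, const int32_t *x, int32_t *y)
    {
        for (size_t i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }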
Over time, as the ISA evolves to keep increasing performance, ISA architects end up adding 2-wide SIMD, then 4-wide SIMD, then 8-wide and upwards. It can therefore be seen why AVX-512 exists in x86. Without predication, the wider the SIMD width the worse the problems get, leading to massive opcode proliferation, degraded performance, extra power consumption and unnecessary software complexity.[20] Vector processors, on the other hand, are designed to issue computations of variable length for an arbitrary count, n, and thus require very little setup, and no cleanup. Even compared to those SIMD ISAs which have masks (but no setvl instruction), vector processors produce much more compact code because they do not need to perform explicit mask calculation to cover the last few elements (illustrated below). Assuming a hypothetical predicated (mask-capable) SIMD ISA, and again assuming that the SIMD instructions can cope with misaligned data, the instruction loop becomes much cleaner but a little more complex: at least, however, there is no setup or cleanup: on the last iteration of the loop, the predicate mask will be set to either 0b0000, 0b0001, 0b0011, 0b0111 or 0b1111, resulting in between 0 and 4 SIMD element operations being performed, respectively. One additional potential complication: some RISC ISAs do not have a "min" instruction, needing instead to use a branch or scalar predicated compare. It is clear how predicated SIMD at least merits the term "vector capable", because it can cope with variable-length vectors by using predicate masks. The final evolving step to a "true" vector ISA, however, is to not have any evidence in the ISA at all of a SIMD width, leaving that entirely up to the hardware. For Cray-style vector ISAs such as RVV, an instruction called "setvl" (set vector length) is used. The hardware first defines how many data values it can process in one "vector": this could be either actual registers or it could be an internal loop (the hybrid approach, mentioned above). This maximum amount (the number of hardware "lanes") is termed "MVL" (Maximum Vector Length). Note that, as seen in SX-Aurora and Videocore IV, MVL may be an actual hardware lane quantity or a virtual one. (As mentioned in the ARM SVE2 tutorial, programmers must not make the mistake of assuming a fixed vector width: consequently MVL is not a quantity that the programmer needs to know. This can be a little disconcerting after years of a SIMD mindset.) On calling setvl with the number of outstanding data elements to be processed, "setvl" is permitted (essentially required) to limit that to the Maximum Vector Length (MVL), and thus returns the actual number that can be processed by the hardware in subsequent vector instructions, and sets the internal special register, "VL", to that same amount. ARM refers to this technique as "vector length agnostic" programming in its tutorials on SVE2.[21] The Cray-style vector assembler for the same SIMD-style loop follows the same pattern (a C model of it is sketched below); t0 (which, containing a convenient copy of VL, can vary) is used instead of hard-coded constants. This is essentially not very different from the SIMD version (which processes 4 data elements per loop), or from the initial scalar version (which processes just the one). n still contains the number of data elements remaining to be processed, but t0 contains the copy of VL – the number that is going to be processed in each iteration.
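The vector-length-agnostic strip-mining pattern that "setvl" enables can be modelled in C as follows. The setvl() helper and the MVL value of 8 are hypothetical stand-ins for the real instruction (for example RVV's vsetvli) and for the hardware's actual lane count:

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical model of "setvl": asked to process up to `remaining`
     * elements, the hardware answers with how many it will actually handle
     * this iteration (at most its Maximum Vector Length, MVL).             */
    static size_t setvl(size_t remaining)
    {
        const size_t MVL = 8;              /* arbitrary illustrative value */
        return remaining < MVL ? remaining : MVL;
    }

    void iaxpy_vector(size_t n, int32_t a, const int32_t *x, int32_t *y)
    {
        while (n > 0) {
            size_t vl = setvl(n);          /* VL = min(n, MVL), chosen by hardware */
            /* In a real vector ISA, the next three steps are each a single
             * instruction operating on vl elements at once:
             * vector load, vector multiply-add, vector store.              */
            for (size_t i = 0; i < vl; i++)
                y[i] = a * x[i] + y[i];
            x += vl;  y += vl;  n -= vl;   /* advance by however many were processed */
        }
    }

Note that if n starts at zero the loop body never executes at all, matching the behaviour discussed below where a vector length of zero turns the vector instructions into no-ops.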
t0 is subtracted from n after each iteration, and if n is zero then all elements have been processed. There are a number of things to note when comparing against the predicated SIMD assembly variant; it can be seen very clearly how vector ISAs reduce the number of instructions. Also note that, just like the predicated SIMD variant, the pointers to x and y are advanced by t0 times four because they both point to 32-bit data, but that n is decremented by straight t0. Compared to the fixed-size SIMD assembler there is very little apparent difference: x and y are advanced by the hard-coded constant 16, n is decremented by a hard-coded 4, so initially it is hard to appreciate the significance. The difference comes in the realisation that the vector hardware could be capable of doing 4 simultaneous operations, or 64, or 10,000; it would be the exact same vector assembler for all of them, and there would still be no SIMD cleanup code. Even compared to the predicate-capable SIMD, it is still more compact, clearer, more elegant and uses fewer resources. Not only is it a much more compact program (saving on L1 cache size), but as previously mentioned, the vector version can issue far more data processing to the ALUs, again saving power because Instruction Decode and Issue can sit idle. Additionally, the number of elements going in to the function can start at zero. This sets the vector length to zero, which effectively disables all vector instructions, turning them into no-ops at runtime. Thus, unlike non-predicated SIMD, even when there are no elements to process there is still no wasted cleanup code. The next example is an algorithm which involves reduction. Just as with the previous example, it will be first shown in scalar instructions, then SIMD, and finally vector instructions, starting in C (a sketch of the scalar routine is given below). Here, an accumulator (y) is used to sum up all the values in the array, x. The scalar version of this would load each element of x, add it to y, and loop. This is very straightforward: "y" starts at zero, 32-bit integers are loaded one at a time into r1, added to y, and the address of the array "x" is moved on to the next element in the array. This is where the problems start. SIMD by design is incapable of doing arithmetic operations "inter-element". Element 0 of one SIMD register may be added to Element 0 of another register, but Element 0 may not be added to anything other than another Element 0. This places some severe limitations on potential implementations. For simplicity it can be assumed that n is exactly 8. After the first 4-wide pass, four partial adds have been performed: but with 4-wide SIMD being incapable by design of adding x[0]+x[1], for example, things go rapidly downhill just as they did with the general case of using SIMD for general-purpose IAXPY loops. To sum the four partial results, two-wide SIMD can be used, followed by a single scalar add, to finally produce the answer, but, frequently, the data must be transferred out of dedicated SIMD registers before the last scalar computation can be performed. Even with a general loop (n not fixed), the only way to use 4-wide SIMD is to assume four separate "streams", each offset by four elements. Finally, the four partial results have to be summed.
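A plausible C sketch of the scalar reduction described above (illustrative only):

    #include <stddef.h>
    #include <stdint.h>

    /* Scalar reduction: sum every element of x into a single accumulator.
     * One load and one add per element, plus loop overhead.               */
    int32_t sum_scalar(size_t n, const int32_t *x)
    {
        int32_t y = 0;
        for (size_t i = 0; i < n; i++)
            y += x[i];
        return y;
    }

A vector ISA with a built-in reduction operation performs the same computation in a handful of instructions (set the vector length, vector-load x, vector-reduce into a scalar), as discussed next.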
Other techniques involve shuffle: examples online can be found for AVX-512 of how to do a "Horizontal Sum".[22][23] Aside from the size of the program and the complexity, an additional potential problem arises if floating-point computation is involved: the fact that the values are not being summed in strict order (four partial results) could result in rounding errors. Vector instruction sets have arithmetic reduction operations built in to the ISA. If it is assumed that n is less than or equal to the maximum vector length, only three instructions are required. The code when n is larger than the maximum vector length is not that much more complex, and follows a similar pattern to the first example ("IAXPY"). The simplicity of the algorithm is stark in comparison to SIMD. Again, just as with the IAXPY example, the algorithm is length-agnostic (even on embedded implementations where the maximum vector length could be only one). Implementations in hardware may, if they are certain that the right answer will be produced, perform the reduction in parallel. Some vector ISAs offer a parallel reduction mode as an explicit option, for when the programmer knows that any potential rounding errors do not matter, and low latency is critical.[24] This example again highlights a key critical fundamental difference between true vector processors and those SIMD processors, including most commercial GPUs, which are inspired by features of vector processors. Compared to any SIMD processor claiming to be a vector processor, the order of magnitude reduction in program size is almost shocking. However, this level of elegance at the ISA level has quite a high price tag at the hardware level. Overall, then, there is a choice to either have complex software on simpler hardware (SIMD) or simpler software on more complex hardware (a true vector ISA). These stark differences are what distinguishes a vector processor from one that has SIMD. Where many SIMD ISAs borrow or are inspired by the list below, typical features that a vector processor will have are:[25][26][27] With many 3D shader applications needing trigonometric operations as well as short vectors for common operations (RGB, ARGB, XYZ, XYZW), support for the following is typically present in modern GPUs, in addition to those found in vector processors: Introduced in ARM SVE2 and RISC-V RVV is the concept of speculative sequential Vector Loads. ARM SVE2 has a special register named the "First Fault Register",[36] whereas RVV modifies (truncates) the Vector Length (VL).[37] The basic principle of ffirst is to attempt a large sequential Vector Load, but to allow the hardware to arbitrarily truncate the actual amount loaded, to either the amount that would succeed without raising a memory fault or simply an amount (greater than zero) that is most convenient. The important factor is that subsequent instructions are notified of, or may determine, exactly how many Loads actually succeeded, using that quantity to only carry out work on the data that has actually been loaded. Contrast this situation with SIMD, which has a fixed (inflexible) load width and fixed data processing width and is unable to cope with loads that cross page boundaries; even if it could, it is unable to adapt to what actually succeeded. Paradoxically, if the SIMD program were to even attempt to find out in advance (in each inner loop, every time) what might optimally succeed, those instructions would only serve to hinder performance because they would, by necessity, be part of the critical inner loop.
This begins to hint at the reason why ffirst is so innovative, and is best illustrated by memcpy or strcpy when implemented with standard 128-bit non-predicated non-ffirst SIMD. For IBM POWER9 the number of hand-optimised instructions to implement strncpy is in excess of 240.[38] By contrast, the same strncpy routine in hand-optimised RVV assembler is a mere 22 instructions.[39] The above SIMD example could potentially fault and fail at the end of memory, due to attempts to read too many values: it could also cause significant numbers of page or misaligned faults by similarly crossing over boundaries. In contrast, by allowing the vector architecture the freedom to decide how many elements to load, the first part of a strncpy, if beginning initially on a sub-optimal memory boundary, may return just enough loads such that on subsequent iterations of the loop the batches of vectorised memory reads are optimally aligned with the underlying caches and virtual memory arrangements. Additionally, the hardware may choose to use the opportunity to end any given loop iteration's memory reads exactly on a page boundary (avoiding a costly second TLB lookup), with speculative execution preparing the next virtual memory page whilst data is still being processed in the current loop. All of this is determined by the hardware, not the program itself.[40] Let r be the vector speed ratio and f be the vectorization ratio. If the vector unit adds an array of 64 numbers 10 times faster than its equivalent scalar counterpart, r = 10. Also, if the total number of operations in a program is 100, out of which only 10 are scalar (after vectorization), then f = 0.9, i.e., 90% of the work is done by the vector unit. The achievable speedup is then r / [(1 − f)·r + f]. So, even if the performance of the vector unit is very high (r = ∞), the speedup is less than 1 / (1 − f), which shows that the ratio f is crucial to performance. This ratio depends on the efficiency of the compilation, such as the adjacency of the elements in memory.
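As a quick check of the formula, the example numbers r = 10 and f = 0.9 can be plugged in directly; this tiny C program is purely illustrative:

    #include <stdio.h>

    /* Achievable speedup when a fraction f of the work runs on a vector unit
     * that is r times faster than scalar: r / ((1 - f) * r + f).            */
    static double vector_speedup(double r, double f)
    {
        return r / ((1.0 - f) * r + f);
    }

    int main(void)
    {
        printf("speedup = %.2f\n", vector_speedup(10.0, 0.9));  /* ~5.26 */
        return 0;
    }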
https://en.wikipedia.org/wiki/Vector_processor
Single instruction, multiple data (SIMD) is a type of parallel processing in Flynn's taxonomy. SIMD describes computers with multiple processing elements that perform the same operation on multiple data points simultaneously. SIMD can be internal (part of the hardware design) and it can be directly accessible through an instruction set architecture (ISA), but it should not be confused with an ISA. Such machines exploit data level parallelism, but not concurrency: there are simultaneous (parallel) computations, but each unit performs exactly the same instruction at any given moment (just with different data). A simple example is to add many pairs of numbers together: all of the SIMD units are performing an addition, but each one has a different pair of values to add. SIMD is particularly applicable to common tasks such as adjusting the contrast in a digital image or adjusting the volume of digital audio. Most modern CPU designs include SIMD instructions to improve the performance of multimedia use. In recent CPUs, SIMD units are tightly coupled with cache hierarchies and prefetch mechanisms, which minimize latency during large block operations. For instance, AVX-512-enabled processors can prefetch entire cache lines and apply fused multiply-add operations (FMA) in a single SIMD cycle. SIMD has three different subcategories in Flynn's 1972 taxonomy, one of which is SIMT. SIMT should not be confused with software threads or hardware threads, both of which are task time-sharing (time-slicing); SIMT is true simultaneous parallel hardware-level execution. A key distinction in SIMT is the presence of control flow mechanisms like warps (NVIDIA terminology) or wavefronts (AMD terminology). These allow divergence and convergence of threads, even under shared instruction streams, thereby offering slightly more flexibility than classical SIMD. Each hardware element (PU) working on an individual data item is sometimes also referred to as a SIMD lane or channel. Modern graphics processing units (GPUs) are often wide SIMD (typically >16 data lanes or channels) implementations.[citation needed] Some newer GPUs go beyond simple SIMD and integrate mixed-precision SIMD pipelines, which allow concurrent execution of 8-bit, 16-bit, and 32-bit operations in different lanes. This is critical for applications like AI inference, where mixed precision boosts throughput. Additionally, SIMD can exist in both fixed and scalable vector forms. Fixed-width SIMD units operate on a constant number of data points per instruction, while scalable designs, like RISC-V Vector or ARM's SVE, allow the number of data elements to vary depending on the hardware implementation. This improves forward compatibility across generations of processors. The first use of SIMD instructions was in the ILLIAC IV, which was completed in 1972. This included 64 (of an original design of 256) processors that had local memory to hold different values while performing the same instruction. Separate hardware quickly sent out the values to be processed and gathered up the results. SIMD was the basis for vector supercomputers of the early 1970s such as the CDC Star-100 and the Texas Instruments ASC, which could operate on a "vector" of data with a single instruction. Vector processing was especially popularized by Cray in the 1970s and 1980s. Vector processing architectures are now considered separate from SIMD computers: Duncan's taxonomy includes them whereas Flynn's taxonomy does not, due to Flynn's work (1966, 1972) pre-dating the Cray-1 (1977).
The first era of modern SIMD computers was characterized bymassively parallel processing-stylesupercomputerssuch as theThinking MachinesCM-1 and CM-2. These computers had many limited-functionality processors that would work in parallel. For example, each of 65,536 single-bit processors in a Thinking Machines CM-2 would execute the same instruction at the same time, allowing, for instance, to logically combine 65,536 pairs of bits at a time, using a hypercube-connected network or processor-dedicated RAM to find its operands. Supercomputing moved away from the SIMD approach when inexpensive scalarMIMDapproaches based on commodity processors such as theIntel i860 XPbecame more powerful, and interest in SIMD waned.[2] The current era of SIMD processors grew out of the desktop-computer market rather than the supercomputer market. As desktop processors became powerful enough to support real-time gaming and audio/video processing during the 1990s, demand grew for this particular type of computing power, and microprocessor vendors turned to SIMD to meet the demand.[3]This resurgence also coincided with the rise of DirectX and OpenGL shader models, which heavily leveraged SIMD under the hood. The graphics APIs encouraged programmers to adopt data-parallel programming styles, indirectly accelerating SIMD adoption in desktop software. Hewlett-Packard introducedMAXinstructions intoPA-RISC1.1 desktops in 1994 to accelerate MPEG decoding.[4]Sun Microsystems introduced SIMD integer instructions in its "VIS" instruction set extensions in 1995, in itsUltraSPARC Imicroprocessor. MIPS followed suit with their similarMDMXsystem. The first widely deployed desktop SIMD was with Intel'sMMXextensions to thex86architecture in 1996. This sparked the introduction of the much more powerfulAltiVecsystem in theMotorolaPowerPCand IBM'sPOWERsystems. Intel responded in 1999 by introducing the all-newSSEsystem. Since then, there have been several extensions to the SIMD instruction sets for both architectures. Advanced vector extensions AVX,AVX2andAVX-512are developed by Intel. AMD supports AVX,AVX2, andAVX-512in their current products.[5] All of these developments have been oriented toward support for real-time graphics, and are therefore oriented toward processing in two, three, or four dimensions, usually with vector lengths of between two and sixteen words, depending on data type and architecture. When new SIMD architectures need to be distinguished from older ones, the newer architectures are then considered "short-vector" architectures, as earlier SIMD and vector supercomputers had vector lengths from 64 to 64,000. A modern supercomputer is almost always a cluster of MIMD computers, each of which implements (short-vector) SIMD instructions. An application that may take advantage of SIMD is one where the same value is being added to (or subtracted from) a large number of data points, a common operation in manymultimediaapplications. One example would be changing the brightness of an image. Eachpixelof an image consists of three values for the brightness of the red (R), green (G) and blue (B) portions of the color. To change the brightness, the R, G and B values are read from memory, a value is added to (or subtracted from) them, and the resulting values are written back out to memory. AudioDSPswould likewise, for volume control, multiply both Left and Right channels simultaneously. With a SIMD processor there are two improvements to this process. 
For one the data is understood to be in blocks, and a number of values can be loaded all at once. Instead of a series of instructions saying "retrieve this pixel, now retrieve the next pixel", a SIMD processor will have a single instruction that effectively says "retrieve n pixels" (where n is a number that varies from design to design). For a variety of reasons, this can take much less time than retrieving each pixel individually, as with a traditional CPU design. Moreover, SIMD instructions can exploit data reuse, where the same operand is used across multiple calculations, via broadcasting features. For example, multiplying several pixels by a constant scalar value can be done more efficiently by loading the scalar once and broadcasting it across a SIMD register. Another advantage is that the instruction operates on all loaded data in a single operation. In other words, if the SIMD system works by loading up eight data points at once, theaddoperation being applied to the data will happen to all eight values at the same time. This parallelism is separate from the parallelism provided by asuperscalar processor; the eight values are processed in parallel even on a non-superscalar processor, and a superscalar processor may be able to perform multiple SIMD operations in parallel. To remedy problems 1 and 5,RISC-V's vector extension uses an alternative approach: instead of exposing the sub-register-level details to the programmer, the instruction set abstracts them out as a few "vector registers" that use the same interfaces across all CPUs with this instruction set. The hardware handles all alignment issues and "strip-mining" of loops. Machines with different vector sizes would be able to run the same code. LLVM calls this vector type "vscale".[citation needed] An order of magnitude increase in code size is not uncommon, when compared to equivalent scalar or equivalent vector code, and an order of magnitudeor greatereffectiveness (work done per instruction) is achievable with Vector ISAs.[6] ARM'sScalable Vector Extensiontakes another approach, known inFlynn's Taxonomyas "Associative Processing", more commonly known today as"Predicated" (masked)SIMD. This approach is not as compact asVector processingbut is still far better than non-predicated SIMD. Detailed comparative examples are given in theVector processingpage. In addition, all versions of the ARM architecture have offered Load and Store multiple instructions, to Load or Store a block of data from a continuous block of memory, into a range or non-continuous set of registers.[7] Small-scale (64 or 128 bits) SIMD became popular on general-purpose CPUs in the early 1990s and continued through 1997 and later with Motion Video Instructions (MVI) forAlpha. SIMD instructions can be found, to one degree or another, on most CPUs, includingIBM'sAltiVecandSPEforPowerPC,HP'sPA-RISCMultimedia Acceleration eXtensions(MAX),Intel'sMMX and iwMMXt,SSE,SSE2,SSE3SSSE3andSSE4.x,AMD's3DNow!,ARC's ARC Video subsystem,SPARC'sVISand VIS2,Sun'sMAJC,ARM'sNeontechnology,MIPS'MDMX(MaDMaX) andMIPS-3D. The IBM, Sony, Toshiba co-developedCell Processor'sSPU's instruction set is heavily SIMD based.Philips, nowNXP, developed several SIMD processors namedXetal. The Xetal has 320 16-bit processor elements especially designed for vision tasks. Apple's M1 and M2 chips also incorporate SIMD units deeply integrated with their GPU and Neural Engine, using Apple-designed SIMD pipelines optimized for image filtering, convolution, and matrix multiplication. 
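Returning to the image-brightness example above, a minimal sketch using SSE2 intrinsics shows both the block load/store and the scalar broadcast. The function name and the multiple-of-16 assumption are illustrative, and SSE2 is only one of the SIMD instruction sets mentioned in this article:

    #include <emmintrin.h>   /* SSE2 intrinsics */
    #include <stddef.h>
    #include <stdint.h>

    /* Brighten an 8-bit image: the constant is broadcast ("splat") across a
     * SIMD register once, then added to 16 pixel bytes per instruction using
     * saturating arithmetic so values clamp at 255 instead of wrapping.
     * Assumes count is a multiple of 16 purely to keep the sketch short.    */
    void brighten(uint8_t *pixels, size_t count, uint8_t amount)
    {
        __m128i vamount = _mm_set1_epi8((char)amount);         /* broadcast scalar */
        for (size_t i = 0; i < count; i += 16) {
            __m128i p = _mm_loadu_si128((const __m128i *)&pixels[i]);
            p = _mm_adds_epu8(p, vamount);                      /* 16 saturating adds */
            _mm_storeu_si128((__m128i *)&pixels[i], p);
        }
    }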
This unified memory architecture helps SIMD instructions operate on shared memory pools more efficiently. Intel'sAVX-512SIMD instructions process 512 bits of data at once. SIMD instructions are widely used to process 3D graphics, although moderngraphics cardswith embedded SIMD have largely taken over this task from the CPU. Some systems also include permute functions that re-pack elements inside vectors, making them particularly useful for data processing and compression. They are also used in cryptography.[8][9][10]The trend of general-purpose computing on GPUs (GPGPU) may lead to wider use of SIMD in the future. Recent compilers such as LLVM, GCC, and Intel's ICC offer aggressive auto-vectorization options. Developers can often enable these with flags like-O3or-ftree-vectorize, which guide the compiler to restructure loops for SIMD compatibility. Adoption of SIMD systems inpersonal computersoftware was at first slow, due to a number of problems. One was that many of the early SIMD instruction sets tended to slow overall performance of the system due to the re-use of existing floating point registers. Other systems, likeMMXand3DNow!, offered support for data types that were not interesting to a wide audience and had expensive context switching instructions to switch between using theFPUand MMXregisters. Compilers also often lacked support, requiring programmers to resort toassembly languagecoding. SIMD onx86had a slow start. The introduction of3DNow!byAMDandSSEbyIntelconfused matters somewhat, but today the system seems to have settled down (after AMD adopted SSE) and newer compilers should result in more SIMD-enabled software. Intel and AMD now both provide optimized math libraries that use SIMD instructions, and open source alternatives likelibSIMD,SIMDx86andSLEEFhave started to appear (see alsolibm).[11] Apple Computerhad somewhat more success, even though they entered the SIMD market later than the rest.AltiVecoffered a rich system and can be programmed using increasingly sophisticated compilers fromMotorola,IBMandGNU, therefore assembly language programming is rarely needed. Additionally, many of the systems that would benefit from SIMD were supplied by Apple itself, for exampleiTunesandQuickTime. However, in 2006, Apple computers moved to Intel x86 processors. Apple'sAPIsanddevelopment tools(XCode) were modified to supportSSE2andSSE3as well as AltiVec. Apple was the dominant purchaser of PowerPC chips from IBM andFreescale Semiconductor. Even though Apple has stopped using PowerPC processors in their products, further development of AltiVec is continued in several PowerPC andPower ISAdesigns from Freescale and IBM. SIMD within a register, orSWAR, is a range of techniques and tricks used for performing SIMD in general-purpose registers on hardware that does not provide any direct support for SIMD instructions. This can be used to exploit parallelism in certain algorithms even on hardware that does not support SIMD directly. It is common for publishers of the SIMD instruction sets to make their own C/C++ language extensions withintrinsic functionsor special datatypes (withoperator overloading) guaranteeing the generation of vector code. Intel, AltiVec, and ARM NEON provide extensions widely adopted by the compilers targeting their CPUs. (More complex operations are the task of vector math libraries.) 
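The auto-vectorization route mentioned above needs no intrinsics at all: given a simple enough loop, the compiler may emit SIMD instructions on its own. A minimal sketch, using the GCC flags named above (whether vectorization actually occurs depends on the target and compiler version):

    #include <stddef.h>

    /* A loop like this is a typical candidate for auto-vectorization.
     * Compiled with, e.g., gcc -O3 (or -O2 -ftree-vectorize), the compiler
     * may rewrite it to process several elements per SIMD instruction; the
     * restrict qualifiers tell it the arrays do not overlap, which helps.  */
    void scale_add(float *restrict dst, const float *restrict src, float k, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            dst[i] = dst[i] + k * src[i];
    }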
TheGNU C Compilertakes the extensions a step further by abstracting them into a universal interface that can be used on any platform by providing a way of defining SIMD datatypes.[12]TheLLVMClang compiler also implements the feature, with an analogous interface defined in the IR.[13]Rust'spacked_simdcrate (and the experimentalstd::simd) uses this interface, and so doesSwift2.0+. C++ has an experimental interfacestd::experimental::simdthat works similarly to the GCC extension. LLVM's libcxx seems to implement it.[citation needed]For GCC and libstdc++, a wrapper library that builds on top of the GCC extension is available.[14] Microsoftadded SIMD to.NETin RyuJIT.[15]TheSystem.Numerics.Vectorpackage, available on NuGet, implements SIMD datatypes.[16]Java also has a new proposed API for SIMD instructions available inOpenJDK17 in an incubator module.[17]It also has a safe fallback mechanism on unsupported CPUs to simple loops. Instead of providing an SIMD datatype, compilers can also be hinted to auto-vectorize some loops, potentially taking some assertions about the lack of data dependency. This is not as flexible as manipulating SIMD variables directly, but is easier to use.OpenMP4.0+ has a#pragma omp simdhint.[18]This OpenMP interface has replaced a wide set of nonstandard extensions, includingCilk's#pragma simd,[19]GCC's#pragma GCC ivdep, and many more.[20] Consumer software is typically expected to work on a range of CPUs covering multiple generations, which could limit the programmer's ability to use new SIMD instructions to improve the computational performance of a program. The solution is to include multiple versions of the same code that uses either older or newer SIMD technologies, and pick one that best fits the user's CPU at run-time (dynamic dispatch). There are two main camps of solutions: FMV, manually coded in assembly language, is quite commonly used in a number of performance-critical libraries such as glibc and libjpeg-turbo.Intel C++ Compiler,GNU Compiler Collectionsince GCC 6, andClangsince clang 7 allow for a simplified approach, with the compiler taking care of function duplication and selection. GCC and clang requires explicittarget_cloneslabels in the code to "clone" functions,[21]while ICC does so automatically (under the command-line option/Qax). TheRust programming languagealso supports FMV. The setup is similar to GCC and Clang in that the code defines what instruction sets to compile for, but cloning is manually done via inlining.[22] As using FMV requires code modification on GCC and Clang, vendors more commonly use library multi-versioning: this is easier to achieve as only compiler switches need to be changed.Glibcsupports LMV and this functionality is adopted by the Intel-backed Clear Linux project.[23] In 2013 John McCutchan announced that he had created a high-performance interface to SIMD instruction sets for theDartprogramming language, bringing the benefits of SIMD to web programs for the first time. The interface consists of two types:[24] Instances of these types are immutable and in optimized code are mapped directly to SIMD registers. Operations expressed in Dart typically are compiled into a single instruction without any overhead. This is similar to C and C++ intrinsics. Benchmarks for4×4matrix multiplication,3D vertex transformation, andMandelbrot setvisualization show near 400% speedup compared to scalar code written in Dart. 
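A minimal sketch of the GCC/Clang generic vector extension described above; the typedef name v4si and the function are illustrative:

    /* A portable SIMD datatype defined with __attribute__((vector_size)):
     * arithmetic operators work element-wise, and the compiler picks
     * whatever SIMD instructions the target offers, falling back to
     * scalar code otherwise.                                            */
    typedef int v4si __attribute__ ((vector_size (16)));   /* four 32-bit ints */

    v4si add4(v4si a, v4si b)
    {
        return a + b;       /* element-wise add, no target-specific intrinsics */
    }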
McCutchan's work on Dart, now called SIMD.js, has been adopted byECMAScriptand Intel announced at IDF 2013 that they are implementing McCutchan's specification for bothV8andSpiderMonkey.[25]However, by 2017, SIMD.js has been taken out of the ECMAScript standard queue in favor of pursuing a similar interface inWebAssembly.[26]As of August 2020, the WebAssembly interface remains unfinished, but its portable 128-bit SIMD feature has already seen some use in many engines.[citation needed] Emscripten, Mozilla's C/C++-to-JavaScript compiler, with extensions can enable compilation of C++ programs that make use of SIMD intrinsics or GCC-style vector code to the SIMD API of JavaScript, resulting in equivalent speedups compared to scalar code.[27]It also supports (and now prefers) the WebAssembly 128-bit SIMD proposal.[28] It has generally proven difficult to find sustainable commercial applications for SIMD-only processors. One that has had some measure of success is theGAPP, which was developed byLockheed Martinand taken to the commercial sector by their spin-offTeranex. The GAPP's recent incarnations have become a powerful tool in real-timevideo processingapplications like conversion between various video standards and frame rates (NTSCto/fromPAL, NTSC to/fromHDTVformats, etc.),deinterlacing,image noise reduction, adaptivevideo compression, and image enhancement. A more ubiquitous application for SIMD is found invideo games: nearly every modernvideo game consolesince1998has incorporated a SIMD processor somewhere in its architecture. ThePlayStation 2was unusual in that one of its vector-float units could function as an autonomousDSPexecuting its own instruction stream, or as a coprocessor driven by ordinary CPU instructions. 3D graphics applications tend to lend themselves well to SIMD processing as they rely heavily on operations with 4-dimensional vectors.Microsoft'sDirect3D 9.0now chooses at runtime processor-specific implementations of its own math operations, including the use of SIMD-capable instructions. A later processor that used vector processing is theCell Processorused in the Playstation 3, which was developed byIBMin cooperation withToshibaandSony. It uses a number of SIMD processors (aNUMAarchitecture, each with independentlocal storeand controlled by a general purpose CPU) and is geared towards the huge datasets required by 3D and video processing applications. It differs from traditional ISAs by being SIMD from the ground up with no separate scalar registers. Ziilabs produced an SIMD type processor for use on mobile devices, such as media players and mobile phones.[29] Larger scale commercial SIMD processors are available from ClearSpeed Technology, Ltd. and Stream Processors, Inc.ClearSpeed's CSX600 (2004) has 96 cores each with two double-precision floating point units while the CSX700 (2008) has 192. Stream Processors is headed by computer architectBill Dally. Their Storm-1 processor (2007) contains 80 SIMD cores controlled by a MIPS CPU.
https://en.wikipedia.org/wiki/Single_instruction,_multiple_data
Acomputer clusteris a set ofcomputersthat work together so that they can be viewed as a single system. Unlikegrid computers, computer clusters have eachnodeset to perform the same task, controlled and scheduled by software. The newest manifestation of cluster computing iscloud computing. The components of a cluster are usually connected to each other through fastlocal area networks, with eachnode(computer used as a server) running its own instance of anoperating system. In most circumstances, all of the nodes use the same hardware[1][better source needed]and the same operating system, although in some setups (e.g. usingOpen Source Cluster Application Resources(OSCAR)), different operating systems can be used on each computer, or different hardware.[2] Clusters are usually deployed to improve performance and availability over that of a single computer, while typically being much more cost-effective than single computers of comparable speed or availability.[3] Computer clusters emerged as a result of the convergence of a number of computing trends including the availability of low-cost microprocessors, high-speed networks, and software for high-performancedistributed computing.[citation needed]They have a wide range of applicability and deployment, ranging from small business clusters with a handful of nodes to some of the fastestsupercomputersin the world such asIBM's Sequoia.[4]Prior to the advent of clusters, single-unitfault tolerantmainframeswithmodular redundancywere employed; but the lower upfront cost of clusters, and increased speed of network fabric has favoured the adoption of clusters. In contrast to high-reliability mainframes, clusters are cheaper to scale out, but also have increased complexity in error handling, as in clusters error modes are not opaque to running programs.[5] The desire to get more computing power and better reliability by orchestrating a number of low-costcommercial off-the-shelfcomputers has given rise to a variety of architectures and configurations. The computer clustering approach usually (but not always) connects a number of readily available computing nodes (e.g. personal computers used as servers) via a fastlocal area network.[6]The activities of the computing nodes are orchestrated by "clustering middleware", a software layer that sits atop the nodes and allows the users to treat the cluster as by and large one cohesive computing unit, e.g. via asingle system imageconcept.[6] Computer clustering relies on a centralized management approach which makes the nodes available as orchestrated shared servers. It is distinct from other approaches such aspeer-to-peerorgrid computingwhich also use many nodes, but with a far moredistributed nature.[6] A computer cluster may be a simple two-node system which just connects two personal computers, or may be a very fastsupercomputer. A basic approach to building a cluster is that of aBeowulfcluster which may be built with a few personal computers to produce a cost-effective alternative to traditionalhigh-performance computing. An early project that showed the viability of the concept was the 133-nodeStone Soupercomputer.[7]The developers usedLinux, theParallel Virtual Machinetoolkit and theMessage Passing Interfacelibrary to achieve high performance at a relatively low cost.[8] Although a cluster may consist of just a few personal computers connected by a simple network, the cluster architecture may also be used to achieve very high levels of performance. 
TheTOP500organization's semiannual list of the 500 fastest supercomputers often includes many clusters, e.g. the world's fastest machine in 2011 was theK computerwhich has adistributed memory, cluster architecture.[9] Greg Pfister has stated that clusters were not invented by any specific vendor but by customers who could not fit all their work on one computer, or needed a backup.[10]Pfister estimates the date as some time in the 1960s. The formal engineering basis of cluster computing as a means of doing parallel work of any sort was arguably invented byGene AmdahlofIBM, who in 1967 published what has come to be regarded as the seminal paper on parallel processing:Amdahl's Law. The history of early computer clusters is more or less directly tied to the history of early networks, as one of the primary motivations for the development of a network was to link computing resources, creating a de facto computer cluster. The first production system designed as a cluster was the BurroughsB5700in the mid-1960s. This allowed up to four computers, each with either one or two processors, to be tightly coupled to a common disk storage subsystem in order to distribute the workload. Unlike standard multiprocessor systems, each computer could be restarted without disrupting overall operation. The first commercial loosely coupled clustering product wasDatapoint Corporation's"Attached Resource Computer" (ARC) system, developed in 1977, and usingARCnetas the cluster interface. Clustering per se did not really take off untilDigital Equipment Corporationreleased theirVAXclusterproduct in 1984 for theVMSoperating system. The ARC and VAXcluster products not only supportedparallel computing, but also sharedfile systemsandperipheraldevices. The idea was to provide the advantages of parallel processing, while maintaining data reliability and uniqueness. Two other noteworthy early commercial clusters were theTandem NonStop(a 1976 high-availability commercial product)[11][12]and theIBM S/390 Parallel Sysplex(circa 1994, primarily for business use). Within the same time frame, while computer clusters used parallelism outside the computer on a commodity network,supercomputersbegan to use them within the same computer. Following the success of theCDC 6600in 1964, theCray 1was delivered in 1976, and introduced internal parallelism viavector processing.[13]While early supercomputers excluded clusters and relied onshared memory, in time some of the fastest supercomputers (e.g. theK computer) relied on cluster architectures. Computer clusters may be configured for different purposes ranging from general purpose business needs such as web-service support, to computation-intensive scientific calculations. In either case, the cluster may use ahigh-availabilityapproach. Note that the attributes described below are not exclusive and a "computer cluster" may also use a high-availability approach, etc. "Load-balancing" clusters are configurations in which cluster-nodes share computational workload to provide better overall performance. For example, a web server cluster may assign different queries to different nodes, so the overall response time will be optimized.[14]However, approaches to load-balancing may significantly differ among applications, e.g. 
a high-performance cluster used for scientific computations would balance load with different algorithms from a web-server cluster which may just use a simpleround-robin methodby assigning each new request to a different node.[14] Computer clusters are used for computation-intensive purposes, rather than handlingIO-orientedoperations such as web service or databases.[15]For instance, a computer cluster might supportcomputational simulationsof vehicle crashes or weather. Very tightly coupled computer clusters are designed for work that may approach "supercomputing". "High-availability clusters" (also known asfailoverclusters, or HA clusters) improve the availability of the cluster approach. They operate by having redundantnodes, which are then used to provide service when system components fail. HA cluster implementations attempt to use redundancy of cluster components to eliminatesingle points of failure. There are commercial implementations of High-Availability clusters for many operating systems. TheLinux-HAproject is one commonly usedfree softwareHA package for theLinuxoperating system. Clusters are primarily designed with performance in mind, but installations are based on many other factors. Fault tolerance (the ability of a system to continue operating despite a malfunctioning node) enablesscalability, and in high-performance situations, allows for a low frequency of maintenance routines, resource consolidation (e.g.,RAID), and centralized management. Advantages include enabling data recovery in the event of a disaster and providing parallel data processing and high processing capacity.[16][17] In terms of scalability, clusters provide this in their ability to add nodes horizontally. This means that more computers may be added to the cluster, to improve its performance, redundancy and fault tolerance. This can be an inexpensive solution for a higher performing cluster compared to scaling up a single node in the cluster. This property of computer clusters can allow for larger computational loads to be executed by a larger number of lower performing computers. When adding a new node to a cluster, reliability increases because the entire cluster does not need to be taken down. A single node can be taken down for maintenance, while the rest of the cluster takes on the load of that individual node. If you have a large number of computers clustered together, this lends itself to the use ofdistributed file systemsandRAID, both of which can increase the reliability and speed of a cluster. One of the issues in designing a cluster is how tightly coupled the individual nodes may be. For instance, a single computer job may require frequent communication among nodes: this implies that the cluster shares a dedicated network, is densely located, and probably has homogeneous nodes. The other extreme is where a computer job uses one or few nodes, and needs little or no inter-node communication, approachinggrid computing. In aBeowulf cluster, the application programs never see the computational nodes (also called slave computers) but only interact with the "Master" which is a specific computer handling the scheduling and management of the slaves.[15]In a typical implementation the Master has two network interfaces, one that communicates with the private Beowulf network for the slaves, the other for the general purpose network of the organization.[15]The slave computers typically have their own version of the same operating system, and local memory and disk space. 
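A minimal sketch of the round-robin dispatch mentioned above (purely illustrative; real load balancers layer health checks, weighting and session affinity on top of this):

    #include <stddef.h>

    /* Round-robin dispatch: each new request is simply handed to the next
     * node in turn, wrapping back to node 0 after the last one.           */
    typedef struct {
        size_t num_nodes;   /* total nodes in the cluster (> 0) */
        size_t next;        /* node that will receive the next request */
    } round_robin_t;

    size_t round_robin_pick(round_robin_t *rr)
    {
        size_t node = rr->next;
        rr->next = (rr->next + 1) % rr->num_nodes;
        return node;        /* index of the node to receive this request */
    }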
However, the private slave network may also have a large and shared file server that stores global persistent data, accessed by the slaves as needed.[15] A special purpose 144-nodeDEGIMA clusteris tuned to running astrophysical N-body simulations using the Multiple-Walk parallel tree code, rather than general purpose scientific computations.[18] Due to the increasing computing power of each generation ofgame consoles, a novel use has emerged where they are repurposed intoHigh-performance computing(HPC) clusters. Some examples of game console clusters areSony PlayStation clustersandMicrosoftXboxclusters. Another example of consumer game product is theNvidia Tesla Personal Supercomputerworkstation, which uses multiple graphics accelerator processor chips. Besides game consoles, high-end graphics cards too can be used instead. The use of graphics cards (or rather their GPU's) to do calculations for grid computing is vastly more economical than using CPU's, despite being less precise. However, when using double-precision values, they become as precise to work with as CPU's and are still much less costly (purchase cost).[2] Computer clusters have historically run on separate physicalcomputerswith the sameoperating system. With the advent ofvirtualization, the cluster nodes may run on separate physical computers with different operating systems which are painted above with a virtual layer to look similar.[19][citation needed][clarification needed]The cluster may also be virtualized on various configurations as maintenance takes place; an example implementation isXenas the virtualization manager withLinux-HA.[19] As the computer clusters were appearing during the 1980s, so weresupercomputers. One of the elements that distinguished the three classes at that time was that the early supercomputers relied onshared memory. Clusters do not typically use physically shared memory, while many supercomputer architectures have also abandoned it. However, the use of aclustered file systemis essential in modern computer clusters.[citation needed]Examples include theIBM General Parallel File System, Microsoft'sCluster Shared Volumesor theOracle Cluster File System. Two widely used approaches for communication between cluster nodes are MPI (Message Passing Interface) and PVM (Parallel Virtual Machine).[20] PVM was developed at theOak Ridge National Laboratoryaround 1989 before MPI was available. PVM must be directly installed on every cluster node and provides a set of software libraries that paint the node as a "parallel virtual machine". PVM provides a run-time environment for message-passing, task and resource management, and fault notification. PVM can be used by user programs written in C, C++, or Fortran, etc.[20][21] MPI emerged in the early 1990s out of discussions among 40 organizations. The initial effort was supported byARPAandNational Science Foundation. Rather than starting anew, the design of MPI drew on various features available in commercial systems of the time. The MPI specifications then gave rise to specific implementations. 
MPI implementations typically useTCP/IPand socket connections.[20]MPI is now a widely available communications model that enables parallel programs to be written in languages such asC,Fortran,Python, etc.[21]Thus, unlike PVM which provides a concrete implementation, MPI is a specification which has been implemented in systems such asMPICHandOpen MPI.[21][22] One of the challenges in the use of a computer cluster is the cost of administrating it which can at times be as high as the cost of administrating N independent machines, if the cluster has N nodes.[23]In some cases this provides an advantage toshared memory architectureswith lower administration costs.[23]This has also madevirtual machinespopular, due to the ease of administration.[23] When a large multi-user cluster needs to access very large amounts of data,task schedulingbecomes a challenge. In a heterogeneous CPU-GPU cluster with a complex application environment, the performance of each job depends on the characteristics of the underlying cluster. Therefore, mapping tasks onto CPU cores and GPU devices provides significant challenges.[24]This is an area of ongoing research; algorithms that combine and extendMapReduceandHadoophave been proposed and studied.[24] When a node in a cluster fails, strategies such as "fencing" may be employed to keep the rest of the system operational.[25][26]Fencing is the process of isolating a node or protecting shared resources when a node appears to be malfunctioning. There are two classes of fencing methods; one disables a node itself, and the other disallows access to resources such as shared disks.[25] TheSTONITHmethod stands for "Shoot The Other Node In The Head", meaning that the suspected node is disabled or powered off. For instance,power fencinguses a power controller to turn off an inoperable node.[25] Theresources fencingapproach disallows access to resources without powering off the node. This may includepersistent reservation fencingvia theSCSI3, fibre channel fencing to disable thefibre channelport, orglobal network block device(GNBD) fencing to disable access to the GNBD server. Load balancing clusters such as web servers use cluster architectures to support a large number of users and typically each user request is routed to a specific node, achievingtask parallelismwithout multi-node cooperation, given that the main goal of the system is providing rapid user access to shared data. However, "computer clusters" which perform complex computations for a small number of users need to take advantage of the parallel processing capabilities of the cluster and partition "the same computation" among several nodes.[27] Automatic parallelizationof programs remains a technical challenge, butparallel programming modelscan be used to effectuate a higherdegree of parallelismvia the simultaneous execution of separate portions of a program on different processors.[27][28] Developing and debugging parallel programs on a cluster requires parallel language primitives and suitable tools such as those discussed by theHigh Performance Debugging Forum(HPDF) which resulted in the HPD specifications.[21][29]Tools such asTotalViewwere then developed to debug parallel implementations on computer clusters which useMessage Passing Interface(MPI) orParallel Virtual Machine(PVM) for message passing. 
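As a concrete illustration of the message-passing style discussed above, the following is a minimal MPI program in C in which every worker rank sends one integer to rank 0. It is a sketch only: it assumes an MPI implementation such as MPICH or Open MPI is installed and that the program is built and launched with the usual mpicc/mpirun wrappers.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        /* Rank 0 collects a token from every other rank. */
        for (int src = 1; src < size; src++) {
            int token;
            MPI_Recv(&token, 1, MPI_INT, src, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("node 0 received %d from node %d\n", token, src);
        }
    } else {
        int token = rank * 100;          /* arbitrary payload */
        MPI_Send(&token, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}
```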
The University of California, Berkeley Network of Workstations (NOW) system gathers cluster data and stores them in a database, while a system such as PARMON, developed in India, allows visually observing and managing large clusters.[21] Application checkpointing can be used to restore a given state of the system when a node fails during a long multi-node computation.[30] This is essential in large clusters, given that as the number of nodes increases, so does the likelihood of node failure under heavy computational loads. Checkpointing can restore the system to a stable state so that processing can resume without needing to recompute results.[30] The Linux world supports various cluster software. For application clustering, there are distcc and MPICH. Linux Virtual Server and Linux-HA are director-based clusters that allow incoming requests for services to be distributed across multiple cluster nodes. MOSIX, LinuxPMI, Kerrighed and OpenSSI are full-blown clusters integrated into the kernel that provide for automatic process migration among homogeneous nodes. OpenSSI, openMosix and Kerrighed are single-system image implementations. Microsoft Windows Compute Cluster Server 2003, based on the Windows Server platform, provides pieces for high-performance computing such as the job scheduler, the MS-MPI library and management tools. gLite is a set of middleware technologies created by the Enabling Grids for E-sciencE (EGEE) project. Slurm is also used to schedule and manage some of the largest supercomputer clusters (see the TOP500 list). Although most computer clusters are permanent fixtures, attempts at flash mob computing have been made to build short-lived clusters for specific computations. However, larger-scale volunteer computing systems such as BOINC-based systems have had more followers.
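A hedged sketch of the application checkpointing idea mentioned above: the loop below periodically saves its index and partial result to a file, so a restarted run resumes from the last checkpoint rather than from the beginning. The file name and checkpoint interval are arbitrary illustration choices, and error handling is omitted.

```c
#include <stdio.h>

#define CHECKPOINT_FILE "state.ckpt"   /* hypothetical checkpoint file */
#define INTERVAL 1000                  /* iterations between checkpoints */

int main(void)
{
    long i = 0;
    double sum = 0.0;

    /* Resume from a previous checkpoint if one exists. */
    FILE *f = fopen(CHECKPOINT_FILE, "rb");
    if (f) {
        fread(&i, sizeof i, 1, f);
        fread(&sum, sizeof sum, 1, f);
        fclose(f);
    }

    for (; i < 100000; i++) {
        /* Save state before starting this block of work, so "sum" always
         * reflects exactly the iterations below the saved index. */
        if (i % INTERVAL == 0) {
            f = fopen(CHECKPOINT_FILE, "wb");
            if (f) {
                fwrite(&i, sizeof i, 1, f);
                fwrite(&sum, sizeof sum, 1, f);
                fclose(f);
            }
        }
        sum += 1.0 / (i + 1);          /* the "real" work */
    }

    printf("sum = %f\n", sum);
    return 0;
}
```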
https://en.wikipedia.org/wiki/Computer_cluster
Asystem on a chip(SoC) is anintegrated circuitthat combines most or all key components of acomputerorelectronic systemonto a single microchip.[1]Typically, an SoC includes acentral processing unit(CPU) withmemory,input/output, anddata storagecontrol functions, along with optional features like agraphics processing unit(GPU),Wi-Ficonnectivity, and radio frequency processing. This high level of integration minimizes the need for separate, discrete components, thereby enhancingpower efficiencyand simplifying device design. High-performance SoCs are often paired with dedicated memory, such asLPDDR, and flash storage chips, such aseUFSoreMMC, which may be stacked directly on top of the SoC in apackage-on-package(PoP) configuration or placed nearby on the motherboard. Some SoCs also operate alongside specialized chips, such ascellular modems.[2] Fundamentally, SoCs integrate one or moreprocessor coreswith critical peripherals. This comprehensive integration is conceptually similar to how amicrocontrolleris designed, but providing far greater computational power. While this unified design delivers lower power consumption and a reducedsemiconductor diearea compared to traditional multi-chip architectures, though at the cost of reduced modularity and component replaceability. SoCs are ubiquitous in mobile computing, where compact, energy-efficient designs are critical. They powersmartphones,tablets, andsmartwatches, and are increasingly important inedge computing, where real-time data processing occurs close to the data source. By driving the trend toward tighter integration, SoCs have reshaped modern hardware design, reshaping the design landscape for modern computing devices.[3][4] In general, there are three distinguishable types of SoCs: SoCs can be applied to any computing task. However, they are typically used in mobile computing such as tablets, smartphones, smartwatches, and netbooks as well asembedded systemsand in applications where previouslymicrocontrollerswould be used. Where previously only microcontrollers could be used, SoCs are rising to prominence in the embedded systems market. Tighter system integration offers better reliability andmean time between failure, and SoCs offer more advanced functionality and computing power than microcontrollers.[5]Applications includeAI acceleration, embeddedmachine vision,[6]data collection,telemetry,vector processingandambient intelligence. Often embedded SoCs target theinternet of things, multimedia, networking, telecommunications andedge computingmarkets. Some examples of SoCs for embedded applications include theSTMicroelectronicsSTM32, theRaspberry Pi LtdRP2040, and theAMDZynq 7000. Mobile computingbased SoCs always bundle processors, memories, on-chipcaches,wireless networkingcapabilities and oftendigital camerahardware and firmware. With increasing memory sizes, high end SoCs will often have no memory and flash storage and instead, the memory andflash memorywill be placed right next to, or above (package on package), the SoC.[7]Some examples of mobile computing SoCs include: In 1992,Acorn Computersproduced theA3010, A3020 and A4000 range of personal computerswith the ARM250 SoC. It combined the original Acorn ARM2 processor with a memory controller (MEMC), video controller (VIDC), and I/O controller (IOC). In previous AcornARM-powered computers, these were four discrete chips. 
The ARM7500 chip was their second-generation SoC, based on the ARM700, VIDC20 and IOMD controllers, and was widely licensed in embedded devices such as set-top-boxes, as well as later Acorn personal computers. Tablet and laptop manufacturers have learned lessons from embedded systems and smartphone markets about reduced power consumption, better performance and reliability from tighterintegrationof hardware andfirmwaremodules, andLTEand otherwireless networkcommunications integrated on chip (integratednetwork interface controllers).[10] On modern laptops and mini PCs, the low-power variants ofAMD RyzenandIntel Coreprocessors use SoC design integrating CPU, IGPU, chipset and other processors in a single package. However, such x86 processors still require external memory and storage chips. An SoC consists of hardwarefunctional units, includingmicroprocessorsthat runsoftware code, as well as acommunications subsystemto connect, control, direct and interface between these functional modules. An SoC must have at least oneprocessor core, but typically an SoC has more than one core. Processor cores can be amicrocontroller,microprocessor(μP),[11]digital signal processor(DSP) orapplication-specific instruction set processor(ASIP) core.[12]ASIPs haveinstruction setsthat are customized for anapplication domainand designed to be more efficient than general-purpose instructions for a specific type of workload. Multiprocessor SoCs have more than one processor core by definition. TheARM architectureis a common choice for SoC processor cores because some ARM-architecture cores aresoft processorsspecified asIP cores.[11] SoCs must havesemiconductor memoryblocks to perform their computation, as domicrocontrollersand otherembedded systems. Depending on the application, SoC memory may form amemory hierarchyandcache hierarchy. In the mobile computing market, this is common, but in manylow-powerembedded microcontrollers, this is not necessary. Memory technologies for SoCs includeread-only memory(ROM),random-access memory(RAM), Electrically Erasable Programmable ROM (EEPROM) andflash memory.[11]As in other computer systems, RAM can be subdivided into relatively faster but more expensivestatic RAM(SRAM) and the slower but cheaperdynamic RAM(DRAM). When an SoC has acachehierarchy, SRAM will usually be used to implementprocessor registersand cores'built-in cacheswhereas DRAM will be used formain memory. "Main memory" may be specific to a single processor (which can bemulti-core) when the SoChas multiple processors, in this case it isdistributed memoryand must be sent via§ Intermodule communicationon-chip to be accessed by a different processor.[12]For further discussion of multi-processing memory issues, seecache coherenceandmemory latency. SoCs include externalinterfaces, typically forcommunication protocols. These are often based upon industry standards such asUSB,Ethernet,USART,SPI,HDMI,I²C,CSI, etc. These interfaces will differ according to the intended application.Wireless networkingprotocols such asWi-Fi,Bluetooth,6LoWPANandnear-field communicationmay also be supported. When needed, SoCs includeanaloginterfaces includinganalog-to-digitalanddigital-to-analog converters, often forsignal processing. These may be able to interface with different types ofsensorsoractuators, includingsmart transducers. 
They may interface with application-specificmodulesor shields.[nb 1]Or they may be internal to the SoC, such as if an analog sensor is built in to the SoC and its readings must be converted to digital signals for mathematical processing. Digital signal processor(DSP) cores are often included on SoCs. They performsignal processingoperations in SoCs forsensors,actuators,data collection,data analysisand multimedia processing. DSP cores typically featurevery long instruction word(VLIW) andsingle instruction, multiple data(SIMD)instruction set architectures, and are therefore highly amenable to exploitinginstruction-level parallelismthroughparallel processingandsuperscalar execution.[12]: 4SP cores most often feature application-specific instructions, and as such are typicallyapplication-specific instruction set processors(ASIP). Such application-specific instructions correspond to dedicated hardwarefunctional unitsthat compute those instructions. Typical DSP instructions includemultiply-accumulate,Fast Fourier transform,fused multiply-add, andconvolutions. As with other computer systems, SoCs requiretiming sourcesto generateclock signals, control execution of SoC functions and provide time context tosignal processingapplications of the SoC, if needed. Popular time sources arecrystal oscillatorsandphase-locked loops. SoCperipheralsincludingcounter-timers, real-timetimersandpower-on resetgenerators. SoCs also includevoltage regulatorsandpower managementcircuits. SoCs comprise manyexecution units. These units must often send data andinstructionsback and forth. Because of this, all but the most trivial SoCs requirecommunications subsystems. Originally, as with othermicrocomputertechnologies,data busarchitectures were used, but recently designs based on sparse intercommunication networks known asnetworks-on-chip(NoC) have risen to prominence and are forecast to overtake bus architectures for SoC design in the near future.[13] Historically, a shared globalcomputer bustypically connected the different components, also called "blocks" of the SoC.[13]A very common bus for SoC communications is ARM's royalty-free Advanced Microcontroller Bus Architecture (AMBA) standard. Direct memory accesscontrollers route data directly between external interfaces and SoC memory, bypassing the CPU orcontrol unit, thereby increasing the datathroughputof the SoC. This is similar to somedevice driversof peripherals on component-basedmulti-chip modulePC architectures. Wire delay is not scalable due to continuedminiaturization,system performancedoes not scale with the number of cores attached, the SoC'soperating frequencymust decrease with each additional core attached for power to be sustainable, and long wires consume large amounts of electrical power. These challenges are prohibitive to supportingmanycoresystems on chip.[13]: xiii In the late 2010s, a trend of SoCs implementingcommunications subsystemsin terms of a network-like topology instead ofbus-basedprotocols has emerged. A trend towards more processor cores on SoCs has caused on-chip communication efficiency to become one of the key factors in determining the overall system performance and cost.[13]: xiiiThis has led to the emergence of interconnection networks withrouter-basedpacket switchingknown as "networks on chip" (NoCs) to overcome thebottlenecksof bus-based networks.[13]: xiii Networks-on-chip have advantages including destination- and application-specificrouting, greater power efficiency and reduced possibility ofbus contention. 
Network-on-chip architectures take inspiration fromcommunication protocolslikeTCPand theInternet protocol suitefor on-chip communication,[13]although they typically have fewernetwork layers. Optimal network-on-chipnetwork architecturesare an ongoing area of much research interest. NoC architectures range from traditional distributed computingnetwork topologiessuch astorus,hypercube,meshesandtree networkstogenetic algorithm schedulingtorandomized algorithmssuch asrandom walks with branchingand randomizedtime to live(TTL). Many SoC researchers consider NoC architectures to be the future of SoC design because they have been shown to efficiently meet power and throughput needs of SoC designs. Current NoC architectures are two-dimensional. 2D IC design has limitedfloorplanningchoices as the number of cores in SoCs increase, so asthree-dimensional integrated circuits(3DICs) emerge, SoC designers are looking towards building three-dimensional on-chip networks known as 3DNoCs.[13] A system on a chip consists of both thehardware, described in§ Structure, and the software controlling the microcontroller, microprocessor or digital signal processor cores, peripherals and interfaces. Thedesign flowfor an SoC aims to develop this hardware and software at the same time, also known as architectural co-design. The design flow must also take into account optimizations (§ Optimization goals) and constraints. Most SoCs are developed from pre-qualified hardware componentIP core specificationsfor the hardware elements andexecution units, collectively "blocks", described above, together with softwaredevice driversthat may control their operation. Of particular importance are theprotocol stacksthat drive industry-standard interfaces likeUSB. The hardware blocks are put together usingcomputer-aided designtools, specificallyelectronic design automationtools; thesoftware modulesare integrated using a softwareintegrated development environment. SoCs components are also often designed inhigh-level programming languagessuch asC++,MATLABorSystemCand converted toRTLdesigns throughhigh-level synthesis(HLS) tools such asC to HDLorflow to HDL.[14]HLS products called "algorithmic synthesis" allow designers to use C++ to model and synthesize system, circuit, software and verification levels all in one high level language commonly known tocomputer engineersin a manner independent of time scales, which are typically specified in HDL.[15]Other components can remain software and be compiled and embedded ontosoft-core processorsincluded in the SoC as modules in HDL asIP cores. Once thearchitectureof the SoC has been defined, any new hardware elements are written in an abstracthardware description languagetermedregister transfer level(RTL) which defines the circuit behavior, or synthesized into RTL from a high level language through high-level synthesis. These elements are connected together in a hardware description language to create the full SoC design. The logic specified to connect these components and convert between possibly different interfaces provided by different vendors is calledglue logic. Chips are verified for validation correctness before being sent to asemiconductor foundry. 
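As an illustration of the routing that networks-on-chip perform, the following C sketch computes a dimension-order (XY) route on a 2D mesh, one common NoC topology. The coordinates are made up, and real routers additionally handle buffering, flow control and arbitration.

```c
#include <stdio.h>

/* Dimension-order (XY) routing on a 2D mesh: a packet first travels along
 * the X dimension until its column matches the destination, then along Y.
 * Only the route computation is shown. */
typedef struct { int x, y; } node;

void route_xy(node src, node dst)
{
    node cur = src;
    printf("(%d,%d)", cur.x, cur.y);
    while (cur.x != dst.x) {                 /* X dimension first */
        cur.x += (dst.x > cur.x) ? 1 : -1;
        printf(" -> (%d,%d)", cur.x, cur.y);
    }
    while (cur.y != dst.y) {                 /* then Y dimension */
        cur.y += (dst.y > cur.y) ? 1 : -1;
        printf(" -> (%d,%d)", cur.x, cur.y);
    }
    printf("\n");
}

int main(void)
{
    node src = {0, 0}, dst = {3, 2};         /* made-up mesh coordinates */
    route_xy(src, dst);
    return 0;
}
```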
This process is calledfunctional verificationand it accounts for a significant portion of the time and energy expended in thechip design life cycle, often quoted as 70%.[16][17]With the growing complexity of chips,hardware verification languageslikeSystemVerilog,SystemC,e, and OpenVera are being used.Bugsfound in the verification stage are reported to the designer. Traditionally, engineers have employed simulation acceleration,emulationor prototyping onreprogrammable hardwareto verify and debug hardware and software for SoC designs prior to the finalization of the design, known astape-out.Field-programmable gate arrays(FPGAs) are favored for prototyping SoCs becauseFPGA prototypesare reprogrammable, allowdebuggingand are more flexible thanapplication-specific integrated circuits(ASICs).[18][19] With high capacity and fast compilation time, simulation acceleration and emulation are powerful technologies that provide wide visibility into systems. Both technologies, however, operate slowly, on the order of MHz, which may be significantly slower – up to 100 times slower – than the SoC's operating frequency. Acceleration and emulation boxes are also very large and expensive at over US$1 million.[citation needed] FPGA prototypes, in contrast, use FPGAs directly to enable engineers to validate and test at, or close to, a system's full operating frequency with real-world stimuli. Tools such as Certus[20]are used to insert probes in the FPGA RTL that make signals available for observation. This is used to debug hardware, firmware and software interactions across multiple FPGAs with capabilities similar to a logic analyzer. In parallel, the hardware elements are grouped and passed through a process oflogic synthesis, during which performance constraints, such as operational frequency and expected signal delays, are applied. This generates an output known as anetlistdescribing the design as a physical circuit and its interconnections. These netlists are combined with theglue logicconnecting the components to produce the schematic description of the SoC as a circuit which can beprintedonto a chip. This process is known asplace and routeand precedestape-outin the event that the SoCs are produced asapplication-specific integrated circuits(ASIC). SoCs must optimizepower use, area ondie, communication, positioning forlocalitybetween modular units and other factors. Optimization is necessarily a design goal of SoCs. If optimization was not necessary, the engineers would use amulti-chip modulearchitecture without accounting for the area use, power consumption or performance of the system to the same extent. Common optimization targets for SoC designs follow, with explanations of each. In general, optimizing any of these quantities may be a hardcombinatorial optimizationproblem, and can indeed beNP-hardfairly easily. Therefore, sophisticatedoptimization algorithmsare often required and it may be practical to useapproximation algorithmsorheuristicsin some cases. Additionally, most SoC designs containmultiple variables to optimize simultaneously, soPareto efficientsolutions are sought after in SoC design. Oftentimes the goals of optimizing some of these quantities are directly at odds, further adding complexity to design optimization of SoCs and introducingtrade-offsin system design. For broader coverage of trade-offs andrequirements analysis, seerequirements engineering. SoCs are optimized to minimize theelectrical powerused to perform the SoC's functions. Most SoCs must use low power. 
SoC systems often require longbattery life(such assmartphones), can potentially spend months or years without a power source while needing to maintain autonomous function, and often are limited in power use by a high number ofembeddedSoCs beingnetworked togetherin an area. Additionally, energy costs can be high and conserving energy will reduce thetotal cost of ownershipof the SoC. Finally,waste heatfrom high energy consumption can damage other circuit components if too much heat is dissipated, giving another pragmatic reason to conserve energy. The amount of energy used in a circuit is theintegralofpowerconsumed with respect to time, and theaverage rateof power consumption is the product ofcurrentbyvoltage. Equivalently, byOhm's law, power is current squared times resistance or voltage squared divided byresistance: P=IV=V2R=I2R{\displaystyle P=IV={\frac {V^{2}}{R}}={I^{2}}{R}}SoCs are frequently embedded inportable devicessuch assmartphones,GPS navigation devices, digitalwatches(includingsmartwatches) andnetbooks. Customers want long battery lives formobile computingdevices, another reason that power consumption must be minimized in SoCs.Multimedia applicationsare often executed on these devices, including video games,video streaming,image processing; all of which have grown incomputational complexityin recent years with user demands and expectations for higher-qualitymultimedia. Computation is more demanding as expectations move towards3D videoathigh resolutionwithmultiple standards, so SoCs performing multimedia tasks must be computationally capable platform while being low power to run off a standard mobile battery.[12]: 3 SoCs are optimized to maximizepower efficiencyin performance per watt: maximize the performance of the SoC given a budget of power usage. Many applications such asedge computing,distributed processingandambient intelligencerequire a certain level ofcomputational performance, but power is limited in most SoC environments. SoC designs are optimized to minimizewaste heatoutputon the chip. As with otherintegrated circuits, heat generated due to highpower densityare thebottleneckto furtherminiaturizationof components.[21]: 1The power densities of high speed integrated circuits, particularly microprocessors and including SoCs, have become highly uneven. Too much waste heat can damage circuits and erodereliabilityof the circuit over time. High temperatures and thermal stress negatively impact reliability,stress migration, decreasedmean time between failures,electromigration,wire bonding,metastabilityand other performance degradation of the SoC over time.[21]: 2–9 In particular, most SoCs are in a small physical area or volume and therefore the effects of waste heat are compounded because there is little room for it to diffuse out of the system. Because of hightransistor countson modern devices, oftentimes a layout of sufficient throughput and hightransistor densityis physically realizable fromfabrication processesbut would result in unacceptably high amounts of heat in the circuit's volume.[21]: 1 These thermal effects force SoC and other chip designers to apply conservativedesign margins, creating less performant devices to mitigate the risk ofcatastrophic failure. Due to increasedtransistor densitiesas length scales get smaller, eachprocess generationproduces more heat output than the last. 
Compounding this problem, SoC architectures are usually heterogeneous, creating spatially inhomogeneousheat fluxes, which cannot be effectively mitigated by uniformpassive cooling.[21]: 1 SoCs are optimized to maximize computational and communicationsthroughput. SoCs are optimized to minimizelatencyfor some or all of their functions. This can be accomplished bylaying outelements with proper proximity andlocalityto each-other to minimize the interconnection delays and maximize the speed at which data is communicated between modules,functional unitsand memories. In general, optimizing to minimize latency is anNP-completeproblem equivalent to theBoolean satisfiability problem. Fortasksrunning on processor cores, latency and throughput can be improved withtask scheduling. Some tasks run in application-specific hardware units, however, and even task scheduling may not be sufficient to optimize all software-based tasks to meet timing and throughput constraints. Systems on chip are modeled with standard hardwareverification and validationtechniques, but additional techniques are used to model and optimize SoC design alternatives to make the system optimal with respect tomultiple-criteria decision analysison the above optimization targets. Task schedulingis an important activity in any computer system with multipleprocessesorthreadssharing a single processor core. It is important to reduce§ Latencyand increase§ Throughputforembedded softwarerunning on an SoC's§ Processor cores. Not every important computing activity in a SoC is performed in software running on on-chip processors, but scheduling can drastically improve performance of software-based tasks and other tasks involvingshared resources. Software running on SoCs often schedules tasks according tonetwork schedulingandrandomized schedulingalgorithms. Hardware and software tasks are often pipelined inprocessor design. Pipelining is an important principle forspeedupincomputer architecture. They are frequently used inGPUs(graphics pipeline) and RISC processors (evolutions of theclassic RISC pipeline), but are also applied to application-specific tasks such asdigital signal processingand multimedia manipulations in the context of SoCs.[12] SoCs are often analyzed thoughprobabilistic models,queueing networks, andMarkov chains. For instance,Little's lawallows SoC states and NoC buffers to be modeled as arrival processes and analyzed throughPoisson random variablesandPoisson processes. SoCs are often modeled withMarkov chains, bothdiscrete timeandcontinuous timevariants. Markov chain modeling allowsasymptotic analysisof the SoC'ssteady state distributionof power, heat, latency and other factors to allow design decisions to be optimized for the common case. SoC chips are typicallyfabricatedusingmetal–oxide–semiconductor(MOS) technology.[22]The netlists described above are used as the basis for the physical design (place and route) flow to convert the designers' intent into the design of the SoC. Throughout this conversion process, the design is analyzed with static timing modeling, simulation and other tools to ensure that it meets the specified operational parameters such as frequency, power consumption and dissipation, functional integrity (as described in the register transfer level code) and electrical integrity. 
When all known bugs have been rectified and these have been re-verified and all physical design checks are done, the physical design files describing each layer of the chip are sent to the foundry's mask shop where a full set of glass lithographic masks will be etched. These are sent to a wafer fabrication plant to create the SoC dice before packaging and testing. SoCs can be fabricated by several technologies, including: ASICs consume less power and are faster than FPGAs but cannot be reprogrammed and are expensive to manufacture. FPGA designs are more suitable for lower volume designs, but after enough units of production ASICs reduce the total cost of ownership.[23] SoC designs consume less power and have a lower cost and higher reliability than the multi-chip systems that they replace. With fewer packages in the system, assembly costs are reduced as well. However, like mostvery-large-scale integration(VLSI) designs, the total cost[clarification needed]is higher for one large chip than for the same functionality distributed over several smaller chips, because oflower yields[clarification needed]and highernon-recurring engineeringcosts. When it is not feasible to construct an SoC for a particular application, an alternative is asystem in package(SiP) comprising a number of chips in a singlepackage. When produced in large volumes, SoC is more cost-effective than SiP because its packaging is simpler.[24]Another reason SiP may be preferred iswaste heatmay be too high in a SoC for a given purpose because functional components are too close together, and in an SiP heat will dissipate better from different functional modules since they are physically further apart. Some examples of systems on a chip are: SoCresearch and developmentoften compares many options. Benchmarks, such as COSMIC,[25]are developed to help such evaluations.
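To make the queueing-style analysis mentioned above concrete, the short C program below applies Little's law to estimate how many flits are buffered on average in a NoC router. The arrival rate and latency figures are invented for illustration, not taken from any real SoC.

```c
#include <stdio.h>

/* Little's law: the average number of items in a system equals the arrival
 * rate multiplied by the average time an item spends in the system. */
int main(void)
{
    double arrival_rate = 2.0e9;   /* flits per second entering a router */
    double latency      = 12.0e-9; /* average time a flit stays queued (s) */

    double occupancy = arrival_rate * latency;   /* L = lambda * W */
    printf("average flits buffered: %.1f\n", occupancy);
    return 0;
}
```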
https://en.wikipedia.org/wiki/Multiprocessor_system_on_a_chip
Incomputer architecture,cache coherenceis the uniformity of shared resource data that is stored in multiplelocal caches. In a cache coherent system, if multiple clients have a cached copy of the same region of a shared memory resource, all copies are the same. Without cache coherence, a change made to the region by one client may not be seen by others, and errors can result when the data used by different clients is mismatched.[1] Acache coherence protocolis used to maintain cache coherency. The two main types aresnoopinganddirectory-basedprotocols. Cache coherence is of particular relevance inmultiprocessingsystems, where eachCPUmay have its own local cache of a shared memory resource. In ashared memorymultiprocessor system with a separate cache memory for each processor, it is possible to have many copies of shared data: one copy in the main memory and one in the local cache of each processor that requested it. When one of the copies of data is changed, the other copies must reflect that change. Cache coherence is the discipline which ensures that the changes in the values of shared operands (data) are propagated throughout the system in a timely fashion.[2] The following are the requirements for cache coherence:[3] Theoretically, coherence can be performed at the load/storegranularity. However, in practice it is generally performed at the granularity of cache blocks.[4] Coherence defines the behavior of reads and writes to a single address location.[3] In a multiprocessor system, consider that more than one processor has cached a copy of the memory location X. The following conditions are necessary to achieve cache coherence:[5] The above conditions satisfy the Write Propagation criteria required for cache coherence. However, they are not sufficient as they do not satisfy the Transaction Serialization condition. To illustrate this better, consider the following example: A multi-processor system consists of four processors - P1, P2, P3 and P4, all containing cached copies of a shared variableSwhose initial value is 0. Processor P1 changes the value ofS(in its cached copy) to 10 following which processor P2 changes the value ofSin its own cached copy to 20. If we ensure only write propagation, then P3 and P4 will certainly see the changes made toSby P1 and P2. However, P3 may see the change made by P1 after seeing the change made by P2 and hence return 10 on a read toS. P4 on the other hand may see changes made by P1 and P2 in the order in which they are made and hence return 20 on a read toS. The processors P3 and P4 now have an incoherent view of the memory. Therefore, in order to satisfy Transaction Serialization, and hence achieve Cache Coherence, the following condition along with the previous two mentioned in this section must be met: The alternative definition of a coherent system is via the definition ofsequential consistencymemory model: "the cache coherent system must appear to execute all threads’ loads and stores to asinglememory location in a total order that respects the program order of each thread".[4]Thus, the only difference between the cache coherent system and sequentially consistent system is in the number of address locations the definition talks about (single memory location for a cache coherent system, and all memory locations for a sequentially consistent system). 
Another definition is: "a multiprocessor is cache consistent if all writes to the same memory location are performed in some sequential order".[7] Rarely, but especially in algorithms, coherence can instead refer to thelocality of reference. Multiple copies of the same data can exist in different cache simultaneously and if processors are allowed to update their own copies freely, an inconsistent view of memory can result. The two most common mechanisms of ensuring coherency aresnoopinganddirectory-based, each having their own benefits and drawbacks.[8]Snooping based protocols tend to be faster, if enoughbandwidthis available, since all transactions are a request/response seen by all processors. The drawback is that snooping isn't scalable. Every request must be broadcast to all nodes in a system, meaning that as the system gets larger, the size of the (logical or physical) bus and the bandwidth it provides must grow. Directories, on the other hand, tend to have longer latencies (with a 3 hop request/forward/respond) but use much less bandwidth since messages are point to point and not broadcast. For this reason, many of the larger systems (>64 processors) use this type of cache coherence. Distributed shared memorysystems mimic these mechanisms in an attempt to maintain consistency between blocks of memory in loosely coupled systems.[11] Coherence protocols apply cache coherence in multiprocessor systems. The intention is that two clients must never see different values for the same shared data. The protocol must implement the basic requirements for coherence. It can be tailor-made for the target system or application. Protocols can also be classified as snoopy or directory-based. Typically, early systems used directory-based protocols where a directory would keep a track of the data being shared and the sharers. In snoopy protocols, the transaction requests (to read, write, or upgrade) are sent out to all processors. All processors snoop the request and respond appropriately. Write propagation in snoopy protocols can be implemented by either of the following methods: If the protocol design states that whenever any copy of the shared data is changed, all the other copies must be "updated" to reflect the change, then it is a write-update protocol. If the design states that a write to a cached copy by any processor requires other processors to discard or invalidate their cached copies, then it is a write-invalidate protocol. However, scalability is one shortcoming of broadcast protocols. Various models and protocols have been devised for maintaining coherence, such asMSI,MESI(aka Illinois),MOSI,MOESI,MERSI,MESIF,write-once, Synapse, Berkeley,FireflyandDragon protocol.[2]In 2011,ARM Ltdproposed the AMBA 4 ACE[12]for handling coherency inSoCs. The AMBA CHI (Coherent Hub Interface) specification[13]fromARM Ltd, which belongs to AMBA5 group of specifications defines the interfaces for the connection of fully coherent processors.
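The write-invalidate policy described above can be sketched in a few lines of C: a write by one cache marks every other cached copy invalid, after which those caches must re-fetch the block. This is only an illustration of the state transitions for a single block, not a real snooping controller; bus arbitration, write-back traffic and the fuller MESI states are omitted.

```c
#include <stdio.h>

#define NUM_CACHES 4

/* MSI-style states for one cache line in each cache. */
typedef enum { INVALID, SHARED, MODIFIED } line_state;

static line_state state[NUM_CACHES];

void read_block(int cache)
{
    if (state[cache] == INVALID)
        state[cache] = SHARED;           /* fetch a shared copy */
}

void write_block(int cache)
{
    for (int c = 0; c < NUM_CACHES; c++) /* broadcast invalidation */
        if (c != cache)
            state[c] = INVALID;
    state[cache] = MODIFIED;             /* this cache now owns the block */
}

int main(void)
{
    read_block(2);
    read_block(3);
    write_block(1);                      /* invalidates the copies in 2 and 3 */
    for (int c = 0; c < NUM_CACHES; c++)
        printf("cache %d state: %d\n", c, state[c]);
    return 0;
}
```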
https://en.wikipedia.org/wiki/Cache_coherency
Inparallel computing, anembarrassingly parallelworkload or problem (also calledembarrassingly parallelizable,perfectly parallel,delightfully parallelorpleasingly parallel) is one where little or no effort is needed to split the problem into a number of parallel tasks.[1]This is due to minimal or no dependency upon communication between the parallel tasks, or for results between them.[2] These differ fromdistributed computingproblems, which need communication between tasks, especially communication of intermediate results. They are easier to perform onserver farmswhich lack the special infrastructure used in a truesupercomputercluster. They are well-suited to large, Internet-basedvolunteer computingplatforms such asBOINC, and suffer less fromparallel slowdown. The opposite of embarrassingly parallel problems areinherently serial problems, which cannot be parallelized at all. A common example of an embarrassingly parallel problem is 3D video rendering handled by agraphics processing unit, where each frame (forward method) or pixel (ray tracingmethod) can be handled with no interdependency.[3]Some forms ofpassword crackingare another embarrassingly parallel task that is easily distributed oncentral processing units,CPU cores, or clusters. "Embarrassingly" is used here to refer to parallelization problems which are "embarrassingly easy".[4]The term may imply embarrassment on the part of developers or compilers: "Because so many important problems remain unsolved mainly due to their intrinsic computational complexity, it would be embarrassing not to develop parallel implementations of polynomialhomotopycontinuation methods."[5]The term is first found in the literature in a 1986 book on multiprocessors byMATLAB's creatorCleve Moler,[6]who claims to have invented the term.[7] An alternative term,pleasingly parallel, has gained some use, perhaps to avoid the negative connotations of embarrassment in favor of a positive reflection on the parallelizability of the problems: "Of course, there is nothing embarrassing about these programs at all."[8] A trivial example involves serving static data. It would take very little effort to have many processing units produce the same set of bits. Indeed, the famousHello Worldproblem could easily be parallelized with few programming considerations or computational costs. Some examples of embarrassingly parallel problems include:
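A Monte Carlo estimate of pi is a typical instance: every trial is independent, so the work can be divided among cores or cluster nodes with essentially no communication beyond the final sum. The C sketch below is sequential; the comment indicates where parallelization would attach, as an assumption rather than a prescription.

```c
#include <stdio.h>
#include <stdlib.h>

/* Monte Carlo estimate of pi: each sample is independent of every other,
 * so trials can be split across nodes or cores freely. */
int main(void)
{
    const long trials = 10 * 1000 * 1000;
    long hits = 0;

    /* On a shared-memory machine this loop could carry an OpenMP
     * "parallel for reduction(+:hits)" directive; across a cluster each
     * node would simply run its own share of the trials. */
    for (long i = 0; i < trials; i++) {
        double x = (double)rand() / RAND_MAX;
        double y = (double)rand() / RAND_MAX;
        if (x * x + y * y <= 1.0)
            hits++;
    }

    printf("pi ~ %f\n", 4.0 * hits / trials);
    return 0;
}
```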
https://en.wikipedia.org/wiki/Embarrassingly_parallel
Automatic parallelization, alsoauto parallelization, orautoparallelizationrefers to converting sequentialcodeintomulti-threadedand/orvectorizedcode in order to use multiple processors simultaneously in a shared-memorymultiprocessor(SMP) machine.[1]Fully automatic parallelization of sequential programs is a challenge because it requires complexprogram analysisand the best approach may depend upon parameter values that are not known at compilation time.[2] The programming control structures on which autoparallelization places the most focus areloops, because, in general, most of theexecution timeof a program takes place inside some form of loop. There are two main approaches to parallelization of loops: pipelined multi-threading and cyclic multi-threading.[3]For example, consider a loop that on each iteration applies a hundred operations, and runs for a thousand iterations. This can be thought of as a grid of 100 columns by 1000 rows, a total of 100,000 operations. Cyclic multi-threading assigns each row to a different thread. Pipelined multi-threading assigns each column to a different thread. This is the first stage where the scanner will read the input source files to identify all static and extern usages. Each line in the file will be checked against pre-defined patterns to segregate intotokens. These tokens will be stored in a file which will be used later by the grammar engine. The grammar engine will check patterns of tokens that match with pre-defined rules to identify variables, loops, control statements, functions etc. in the code. Theanalyzeris used to identify sections of code that can be executed concurrently. The analyzer uses the static data information provided by the scanner-parser. The analyzer will first find all the totally independent functions and mark them as individual tasks. The analyzer then finds which tasks have dependencies. Theschedulerwill list all the tasks and their dependencies on each other in terms of execution and start times. The scheduler will produce the optimal schedule in terms of number of processors to be used or the total execution time for the application. Theschedulerwill generate a list of all the tasks and the details of the cores on which they will execute along with the time that they will execute for. The code Generator will insert special constructs in the code that will be read during execution by the scheduler. These constructs will instruct the scheduler on which core a particular task will execute along with the start and end times. A cyclic multi-threading parallelizing compiler tries tosplit up a loopso that eachiterationcan be executed on a separateprocessorconcurrently. Thecompilerusually conducts two passes of analysis before actual parallelization in order to determine the following: The first pass of the compiler performs adata dependence analysisof the loop to determine whether each iteration of the loop can be executed independently of the others. Data dependence can sometimes be dealt with, but it may incur additional overhead in the form ofmessage passing, synchronization ofshared memory, or some other method of processor communication. The second pass attempts to justify the parallelization effort by comparing the theoretical execution time of the code after parallelization to the code's sequential execution time. Somewhat counterintuitively, code does not always benefit from parallel execution. The extra overhead that can be associated with using multiple processors can eat into the potential speedup of parallelized code. 
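A toy version of that second, profitability pass might look like the following C sketch. The cycle counts and overhead figure are invented purely for illustration; real compilers use far more detailed machine models.

```c
#include <stdbool.h>
#include <stdio.h>

/* Parallelize only if the estimated parallel time (work divided across the
 * cores plus a fixed startup/communication overhead) beats the sequential
 * time. */
bool worth_parallelizing(double work_cycles, int cores, double overhead_cycles)
{
    double sequential = work_cycles;
    double parallel   = work_cycles / cores + overhead_cycles;
    return parallel < sequential;
}

int main(void)
{
    printf("large loop: %s\n",
           worth_parallelizing(1.0e6, 8, 5.0e4) ? "parallelize" : "keep serial");
    printf("tiny loop:  %s\n",
           worth_parallelizing(2.0e3, 8, 5.0e4) ? "parallelize" : "keep serial");
    return 0;
}
```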
A loop is called DOALL if all of its iterations, in any given invocation, can be executed concurrently. TheFortrancode below is DOALL, and can be auto-parallelized by a compiler because each iteration is independent of the others, and the final result of arrayzwill be correct regardless of the execution order of the other iterations. There are manypleasingly parallelproblems that have such DOALL loops. For example, whenrenderinga ray-traced movie, each frame of the movie can be independently rendered, and each pixel of a single frame may be independently rendered. On the other hand, the following code cannot be auto-parallelized, because the value ofz(i)depends on the result of the previous iteration,z(i - 1). This does not mean that the code cannot be parallelized. Indeed, it is equivalent to the DOALL loop However, current parallelizing compilers are not usually capable of bringing out these parallelisms automatically, and it is questionable whether this code would benefit from parallelization in the first place. A pipelined multi-threading parallelizing compiler tries to break up the sequence of operations inside a loop into a series of code blocks, such that each code block can be executed on separateprocessorsconcurrently. There are many pleasingly parallel problems that have such relatively independent code blocks, in particular systems usingpipes and filters. For example, when producing live broadcast television, the following tasks must be performed many times a second: A pipelined multi-threading parallelizing compiler could assign each of these six operations to a different processor, perhaps arranged in asystolic array, inserting the appropriate code to forward the output of one processor to the next processor. Recent research focuses on using the power of GPU's[4]and multicore systems[5]to compute such independent code blocks( or simply independent iterations of a loop) at runtime. The memory accessed (whether direct or indirect) can be simply marked for different iterations of a loop and can be compared for dependency detection. Using this information, the iterations are grouped into levels such that iterations belonging to the same level are independent of each other, and can be executed in parallel. Automatic parallelization by compilers or tools is very difficult due to the following reasons:[6] Due to the inherent difficulties in full automatic parallelization, several easier approaches exist to get a parallel program in higher quality. One of these is to allow programmers to add "hints" to their programs to guide compiler parallelization, such asHPFfordistributed memorysystems andOpenMPorOpenHMPPforshared memorysystems. Another approach is to build an interactive system between programmers and parallelizing tools/compilers. Notable examples areVector Fabrics' Pareon,SUIFExplorer (The Stanford University Intermediate Format compiler), the Polaris compiler, and ParaWise (formally CAPTools). Finally, another approach is hardware-supportedspeculative multithreading. Most researchcompilersfor automatic parallelization considerFortranprograms,[citation needed]because Fortran makes stronger guarantees aboutaliasingthan languages such asC. Typical examples are: Recently, Aubert, Rubiano, Rusch, andSeiller[8]used a dependency analysis technique[9]to automatically parallelise loops inCcode.
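The article's loop examples are given in Fortran and are not reproduced here; the following C sketch shows the same two situations, a DOALL loop whose iterations are independent and a recurrence whose iterations are not.

```c
/* A DOALL-style loop: every iteration writes a distinct element of z and
 * reads only x and y, so iterations can run in any order or in parallel. */
void doall(float *z, const float *x, const float *y, int n)
{
    for (int i = 0; i < n; i++)
        z[i] = x[i] + y[i];
}

/* A loop-carried dependence: z[i] needs the value of z[i - 1] computed by
 * the previous iteration, so the loop cannot be split across processors
 * as written. */
void recurrence(float *z, int n)
{
    for (int i = 1; i < n; i++)
        z[i] = z[i - 1] * 2.0f;
}

/* The recurrence has a closed form, z[i] = z[0] * 2^i, which is again
 * DOALL, but compilers rarely discover such rewrites automatically. */
```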
https://en.wikipedia.org/wiki/Automatic_parallelization
Incomputer science, abridging modelis an abstract model of acomputerwhich provides aconceptual bridgebetween the physical implementation of the machine and the abstraction available to aprogrammerof that machine; in other words, it is intended to provide a common level of understanding betweenhardwareandsoftwareengineers. A successful bridging model is one which can be efficiently implemented in reality and efficiently targeted by programmers; in particular, it should be possible for acompilerto produce good code from a typical high-level language. The term was introduced byLeslie Valiant's 1990 paperA Bridging Model for Parallel Computation, which argued that the strength of thevon Neumann modelwas largely responsible for the success of computing as a whole.[1]The paper goes on to develop thebulk synchronous parallelmodel as an analogous model forparallel computing.
https://en.wikipedia.org/wiki/Bridging_model
Thedegree of parallelism(DOP) is a metric which indicates how manyoperationscan be or are being simultaneously executed by a computer. It is used as an indicator of the complexity ofalgorithms, and is especially useful for describing the performance ofparallel programsandmulti-processorsystems.[1] A program running on a parallel computer may utilize different numbers of processors at different times. For each time period, the number of processors used to execute a program is defined as the degree of parallelism. The plot of the DOP as a function of time for a given program is called theparallelism profile.[2]
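A parallelism profile can be illustrated with a short C sketch: the array below records a made-up number of busy processors per time step, and its mean is the average degree of parallelism over the run.

```c
#include <stdio.h>

int main(void)
{
    /* Illustrative parallelism profile: processors in use per time step. */
    int profile[] = {1, 4, 8, 8, 8, 2, 1};
    int steps = sizeof profile / sizeof profile[0];

    int total = 0;
    for (int t = 0; t < steps; t++)
        total += profile[t];

    printf("average DOP = %.2f\n", (double)total / steps);
    return 0;
}
```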
https://en.wikipedia.org/wiki/Degree_of_parallelism
Incomputer programming,explicit parallelismis the representation of concurrent computations using primitives in the form of operators, function calls or special-purpose directives.[1]Most parallel primitives are related to process synchronization, communication and process partitioning.[2]As they seldom contribute to actually carry out the intended computation of the program but, rather, structure it, their computational cost is often considered as overhead. The advantage of explicitparallel programmingis increased programmer control over the computation. A skilled parallel programmer may take advantage of explicit parallelism to produce efficient code for a given target computation environment. However, programming with explicit parallelism is often difficult, especially for non-computing specialists, because of the extra work and skill involved in developing it. In some instances, explicit parallelism may be avoided with the use of an optimizing compiler or runtime that automatically deduces the parallelism inherent to computations, known asimplicit parallelism. Some of the programming languages that support explicit parallelism are:
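A small C example of explicit parallelism using POSIX threads: the thread creation, the partitioning of the work into halves and the final join are all written out by the programmer. It is a sketch under the usual pthreads API, not tuned code.

```c
#include <pthread.h>
#include <stdio.h>

#define N 1000000
static double data[N];
static double partial[2];

/* Each thread sums one half of the array; synchronization happens only
 * at the explicit pthread_join calls in main(). */
static void *sum_half(void *arg)
{
    long half = (long)arg;
    long begin = half * (N / 2), end = begin + N / 2;
    double s = 0.0;
    for (long i = begin; i < end; i++)
        s += data[i];
    partial[half] = s;
    return NULL;
}

int main(void)
{
    for (long i = 0; i < N; i++)
        data[i] = 1.0;

    pthread_t t0, t1;
    pthread_create(&t0, NULL, sum_half, (void *)0L);
    pthread_create(&t1, NULL, sum_half, (void *)1L);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);

    printf("sum = %f\n", partial[0] + partial[1]);
    return 0;
}
```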
https://en.wikipedia.org/wiki/Explicit_parallelism
This article lists concurrent andparallel programming languages, categorizing them by a definingparadigm. Concurrent and parallel programming languages involve multiple timelines. Such languages providesynchronizationconstructs whose behavior is defined by a parallelexecution model. Aconcurrent programming languageis defined as one which uses the concept of simultaneously executing processes or threads of execution as a means of structuring a program. A parallel language is able to express programs that are executable on more than one processor. Both types are listed, as concurrency is a useful tool in expressing parallelism, but it is not necessary. In both cases, the features must be part of the language syntax and not an extension such as a library (libraries such as the posix-thread library implement a parallelexecution modelbut lack the syntax and grammar required to be a programming language). The following categories aim to capture the main, defining feature of the languages contained, but they are not necessarily orthogonal. These application programming interfaces support parallelism in host languages.
https://en.wikipedia.org/wiki/List_of_concurrent_and_parallel_programming_languages
An optoelectronic system is a hybrid system that exploits the advantages of both electronic and optical communication.[1][2] Various models of optoelectronic parallel computers have been proposed in recent years. Optical Multi-Trees with Shuffle Exchange (OMTSE) is one such model, using both electronic and optical links among processors. The processors are organized in the form of an n × n array of groups, each containing 3n/2 nodes. The entire network topology is almost regular, with an O(log n) diameter. In parallel computing, the interconnection network is the heart of a parallel processing system, and many systems have failed to meet their design goals because of the design of this essential component. The bandwidth limitation of electronic interconnects prompted the search for alternatives that overcome it. Optics is considered an alternative capable of providing inherent communication parallelism, high connectivity and large bandwidth. When communication distances exceed a few millimeters, optical interconnects have an advantage over electronic interconnects in terms of power, speed and crosstalk. Therefore, in the construction of very powerful and large multiprocessor systems, it is advantageous to interconnect nearby processors physically using electronic links and distant processors (kept in other packages) using optical links, as is done in optical networks such as OMTSE, OTIS and OMULT. The OMTSE network consists of two different subsystems, one optical and one electronic. The network uses two layers of TSE networks together with complete binary trees of height one, and the roots of these binary trees are connected in a shuffle-exchange fashion. The network consists of a total of 3n3/2{\displaystyle 3n^{3}/2} processors built around n2{\displaystyle n^{2}} factor networks called TSE networks. Each factor network consists of n leaf nodes. The diameter and bisection width of the OMTSE network are shown to be 6 log n − 1 and (n3)/4{\displaystyle (n^{3})/4}, respectively.
https://en.wikipedia.org/wiki/Optical_Multi-Tree_with_Shuffle_Exchange
In computer science, aparallel external memory (PEM) modelis acache-aware, external-memoryabstract machine.[1]It is the parallel-computing analogy to the single-processorexternal memory(EM) model. In a similar way, it is the cache-aware analogy to theparallel random-access machine(PRAM). The PEM model consists of a number of processors, together with their respective private caches and a shared main memory. The PEM model[1]is a combination of the EM model and the PRAM model. The PEM model is a computation model which consists ofP{\displaystyle P}processors and a two-levelmemory hierarchy. This memory hierarchy consists of a largeexternal memory(main memory) of sizeN{\displaystyle N}andP{\displaystyle P}smallinternal memories (caches). The processors share the main memory. Each cache is exclusive to a single processor. A processor can't access another’s cache. The caches have a sizeM{\displaystyle M}which is partitioned in blocks of sizeB{\displaystyle B}. The processors can only perform operations on data which are in their cache. The data can be transferred between the main memory and the cache in blocks of sizeB{\displaystyle B}. Thecomplexity measureof the PEM model is the I/O complexity,[1]which determines the number of parallel blocks transfers between the main memory and the cache. During a parallel block transfer each processor can transfer a block. So ifP{\displaystyle P}processors load parallelly a data block of sizeB{\displaystyle B}form the main memory into their caches, it is considered as an I/O complexity ofO(1){\displaystyle O(1)}notO(P){\displaystyle O(P)}. A program in the PEM model should minimize the data transfer between main memory and caches and operate as much as possible on the data in the caches. In the PEM model, there is nodirect communication networkbetween the P processors. The processors have to communicate indirectly over the main memory. If multiple processors try to access the same block in main memory concurrently read/write conflicts[1]occur. Like in the PRAM model, three different variations of this problem are considered: The following two algorithms[1]solve the CREW and EREW problem ifP≤B{\displaystyle P\leq B}processors write to the same block simultaneously. A first approach is to serialize the write operations. Only one processor after the other writes to the block. This results in a total ofP{\displaystyle P}parallel block transfers. A second approach needsO(log⁡(P)){\displaystyle O(\log(P))}parallel block transfers and an additional block for each processor. The main idea is to schedule the write operations in abinary tree fashionand gradually combine the data into a single block. In the first roundP{\displaystyle P}processors combine their blocks intoP/2{\displaystyle P/2}blocks. ThenP/2{\displaystyle P/2}processors combine theP/2{\displaystyle P/2}blocks intoP/4{\displaystyle P/4}. This procedure is continued until all the data is combined in one block. LetM={m1,...,md−1}{\displaystyle M=\{m_{1},...,m_{d-1}\}}be a vector of d-1 pivots sorted in increasing order. LetAbe an unordered set of N elements. A d-way partition[1]ofAis a setΠ={A1,...,Ad}{\displaystyle \Pi =\{A_{1},...,A_{d}\}}, where∪i=1dAi=A{\displaystyle \cup _{i=1}^{d}A_{i}=A}andAi∩Aj=∅{\displaystyle A_{i}\cap A_{j}=\emptyset }for1≤i<j≤d{\displaystyle 1\leq i<j\leq d}.Ai{\displaystyle A_{i}}is called the i-th bucket. The number of elements inAi{\displaystyle A_{i}}is greater thanmi−1{\displaystyle m_{i-1}}and smaller thanmi2{\displaystyle m_{i}^{2}}. 
In the following algorithm[1] the input is partitioned into N/P-sized contiguous segments S1,...,SP{\displaystyle S_{1},...,S_{P}} in main memory. Processor i primarily works on the segment Si{\displaystyle S_{i}}. The multiway partitioning algorithm (PEM_DIST_SORT[1]) uses a PEM prefix sum algorithm[1] to calculate the prefix sum with the optimal O(N/PB + log P){\displaystyle O\left({\frac {N}{PB}}+\log P\right)} I/O complexity. This algorithm simulates an optimal PRAM prefix sum algorithm. If the vector of d = O(M/B){\displaystyle d=O\left({\frac {M}{B}}\right)} pivots M and the input set A are located in contiguous memory, then the d-way partitioning problem can be solved in the PEM model with O(N/PB + ⌈d/B⌉ log(P) + d log(B)){\displaystyle O\left({\frac {N}{PB}}+\left\lceil {\frac {d}{B}}\right\rceil \log(P)+d\log(B)\right)} I/O complexity. The content of the final buckets has to be located in contiguous memory. The selection problem is about finding the k-th smallest item in an unordered list A of size N. The following code[1] makes use of PRAMSORT, a PRAM-optimal sorting algorithm which runs in O(log N){\displaystyle O(\log N)}, and SELECT, a cache-optimal single-processor selection algorithm. Under the assumption that the input is stored in contiguous memory, PEMSELECT has an I/O complexity of: Distribution sort partitions an input list A of size N into d disjoint buckets of similar size. Every bucket is then sorted recursively and the results are combined into a fully sorted list. If P = 1{\displaystyle P=1} the task is delegated to a cache-optimal single-processor sorting algorithm. Otherwise the following algorithm[1] is used: The I/O complexity of PEMDISTSORT is: where If the number of processors is chosen such that f(N,P,d) = O(⌈N/PB⌉){\displaystyle f(N,P,d)=O\left(\left\lceil {\tfrac {N}{PB}}\right\rceil \right)} and M < B^O(1){\displaystyle M<B^{O(1)}} the I/O complexity is then: O(N/PB log_{M/B} N/B){\displaystyle O\left({\frac {N}{PB}}\log _{M/B}{\frac {N}{B}}\right)} where sortP(N){\displaystyle {\textrm {sort}}_{P}(N)} is the time it takes to sort N items with P processors in the PEM model.
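The tree-style combining of concurrent writes described earlier (the O(log P) approach) can be sketched sequentially in C. Each pass of the outer loop corresponds to one round of parallel block transfers in the PEM model; the block contents are arbitrary illustration values.

```c
#include <stdio.h>

#define P 8          /* number of processors (a power of two here) */
#define B 4          /* block size in words */

/* In round r, every second surviving processor merges its partner's block
 * into its own, so after log2(P) rounds processor 0 holds the combined
 * block.  The sequential loops only illustrate the schedule. */
int main(void)
{
    int block[P][B];
    for (int p = 0; p < P; p++)
        for (int i = 0; i < B; i++)
            block[p][i] = p + 1;          /* made-up per-processor data */

    for (int stride = 1; stride < P; stride *= 2)        /* log2(P) rounds */
        for (int p = 0; p + stride < P; p += 2 * stride)
            for (int i = 0; i < B; i++)
                block[p][i] += block[p + stride][i];     /* combine blocks */

    for (int i = 0; i < B; i++)
        printf("%d ", block[0][i]);       /* 1 + 2 + ... + 8 = 36 per slot */
    printf("\n");
    return 0;
}
```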
https://en.wikipedia.org/wiki/Parallel_external_memory_(Model)
In databases and transaction processing, two-phase locking (2PL) is a pessimistic concurrency control method that guarantees conflict-serializability.[1][2] It is also the name of the resulting set of database transaction schedules (histories). The protocol uses locks, applied by a transaction to data, which may block (interpreted as signals to stop) other transactions from accessing the same data during the transaction's life. By the 2PL protocol, locks are applied and removed in two phases: an expanding phase, in which locks are acquired, and a shrinking phase, in which they are released. Two types of locks are used by the basic protocol: shared and exclusive locks; refinements of the basic protocol may use more lock types. Because it uses locks that block processes, 2PL (like S2PL and SS2PL) may be subject to deadlocks that result from the mutual blocking of two or more transactions. Locks are used to guarantee serializability. A transaction is holding a lock on an object if it has acquired a lock on that object which has not yet been released. For 2PL, the only data-access locks used are read-locks (shared locks) and write-locks (exclusive locks): a read-lock blocks other transactions from write-locking the same object, while a write-lock blocks other transactions from both read-locking and write-locking it. According to the two-phase locking protocol, each transaction handles its locks in two distinct, consecutive phases during its execution: during the first (expanding) phase it may acquire locks but not release any, and during the second (shrinking) phase it may release locks but not acquire any. The two-phase locking rule can therefore be summarized as: a transaction must never acquire a lock after it has released a lock. The serializability property is guaranteed for a schedule whose transactions obey this rule. Typically, without explicit knowledge within a transaction of when phase 1 ends, the rule can only be enforced safely once the transaction has completed processing and requested commit; in that case all the locks can be released at once (phase 2). Conservative two-phase locking (C2PL) differs from 2PL in that transactions obtain all the locks they need before they begin. This ensures that a transaction that already holds some locks will not block waiting for other locks; C2PL prevents deadlocks. In cases of heavy lock contention, C2PL reduces the time locks are held on average, relative to 2PL and strict 2PL, because transactions that hold locks are never blocked. Under light lock contention, C2PL holds more locks than necessary, because it is difficult to predict which locks will be needed, leading to higher overhead. A C2PL transaction will not obtain any locks if it cannot obtain all the locks it needs in its initial request. Furthermore, each transaction needs to declare its read and write set (the data items that will be read or written), which is not always possible. Because of these limitations, C2PL is not used very frequently. To comply with the strict two-phase locking (S2PL) protocol, a transaction needs to comply with 2PL and release its write (exclusive) locks only after the transaction has ended (i.e., either committed or aborted); read (shared) locks, on the other hand, are released regularly during the shrinking phase. Unlike 2PL, S2PL provides strictness (a special case of cascade-less recoverability). This protocol is not well suited to B-trees, because it creates a bottleneck: B-tree operations always start searching from the root.[citation needed] Strong strict two-phase locking (SS2PL), also called rigorousness, rigorous scheduling, or rigorous two-phase locking, requires that a transaction's read and write locks be released only after that transaction has ended (i.e., either committed or aborted). A transaction obeying SS2PL has only a phase 1 and lacks a phase 2 until the transaction has completed.
Every SS2PL schedule is also an S2PL schedule, but not vice versa.
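As a minimal sketch of the two-phase rule itself, assuming a single process, no shared/exclusive lock modes, no blocking, and no deadlock handling (the class and exception names are invented for this illustration):

class TwoPhaseLockingError(Exception):
    pass

class Transaction:
    """Tracks only the 2PL rule: once any lock has been released (shrinking
    phase), acquiring a further lock is an error. Lock modes, blocking and
    deadlock handling are omitted."""
    def __init__(self, name):
        self.name = name
        self.held = set()
        self.shrinking = False

    def lock(self, item):
        if self.shrinking:
            raise TwoPhaseLockingError(
                f"{self.name}: cannot acquire {item!r} after releasing a lock")
        self.held.add(item)

    def unlock(self, item):
        self.held.discard(item)
        self.shrinking = True            # phase 2 has begun

    def commit(self):
        # Strict variants (S2PL/SS2PL) would release their locks only here.
        self.held.clear()

t = Transaction("T1")
t.lock("x"); t.lock("y")                 # growing phase
t.unlock("x")                            # shrinking phase begins
try:
    t.lock("z")                          # violates the two-phase rule
except TwoPhaseLockingError as err:
    print(err)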
https://en.wikipedia.org/wiki/Two-phase_locking
Indatabases, andtransaction processing(transaction management),snapshot isolationis a guarantee that all reads made in atransactionwill see a consistent snapshot of the database (in practice it reads the last committed values that existed at the time it started), and the transaction itself will successfully commit only if no updates it has made conflict with any concurrent updates made since that snapshot. Snapshot isolation has been adopted by several majordatabase management systems, such asInterBase,Firebird,Oracle,MySQL,[1]PostgreSQL,SQL Anywhere,MongoDB[2]andMicrosoft SQL Server(2005 and later). The main reason for its adoption is that it allows better performance thanserializability, yet still avoids most of the concurrency anomalies that serializability avoids (but not all). In practice snapshot isolation is implemented withinmultiversion concurrency control(MVCC), where generational values of each data item (versions) are maintained: MVCC is a common way to increase concurrency and performance by generating a new version of adatabase objecteach time the object is written, and allowing transactions' read operations of several last relevant versions (of each object). Snapshot isolation has been used[3]to criticize theANSISQL-92 standard's definition ofisolationlevels, as it exhibits none of the "anomalies" that the SQL standard prohibited, yet is not serializable (the anomaly-free isolation level defined by ANSI). In spite of its distinction from serializability, snapshot isolation is sometimes referred to asserializableby Oracle. A transaction executing under snapshot isolation appears to operate on a personalsnapshotof the database, taken at the start of the transaction. When the transaction concludes, it will successfully commit only if the values updated by the transaction have not been changed externally since the snapshot was taken. Such awrite–write conflictwill cause the transaction to abort. In awrite skewanomaly, two transactions (T1 and T2) concurrently read an overlapping data set (e.g. values V1 and V2), concurrently make disjoint updates (e.g. T1 updates V1, T2 updates V2), and finally concurrently commit, neither having seen the update performed by the other. Were the system serializable, such an anomaly would be impossible, as either T1 or T2 would have to occur "first", and be visible to the other. In contrast, snapshot isolation permits write skew anomalies. As a concrete example, imagine V1 and V2 are two balances held by a single person, Phil. The bank will allow either V1 or V2 to run a deficit, provided the total held in both is never negative (i.e. V1 + V2 ≥ 0). Both balances are currently $100. Phil initiates two transactions concurrently, T1 withdrawing $200 from V1, and T2 withdrawing $200 from V2. If the database guaranteed serializable transactions, the simplest way of coding T1 is to deduct $200 from V1, and then verify that V1 + V2 ≥ 0 still holds, aborting if not. T2 similarly deducts $200 from V2 and then verifies V1 + V2 ≥ 0. Since the transactions must serialize, either T1 happens first, leaving V1 = −$100, V2 = $100, and preventing T2 from succeeding (since V1 + (V2 − $200) is now −$200), or T2 happens first and similarly prevents T1 from committing. If the database is under snapshot isolation(MVCC), however, T1 and T2 operate on private snapshots of the database: each deducts $200 from an account, and then verifies that the new total is zero, using the other account value that held when the snapshot was taken. 
Since neitherupdateconflicts, both commit successfully, leaving V1 = V2 = −$100, and V1 + V2 = −$200. Some systems built usingmultiversion concurrency control(MVCC) may support (only) snapshot isolation to allow transactions to proceed without worrying about concurrent operations, and more importantly without needing to re-verify all read operations when the transaction finally commits. This is convenient because MVCC maintains a series of recent history consistent states. The only information that must be stored during the transaction is a list of updates made, which can be scanned for conflicts fairly easily before being committed. However, MVCC systems (such as MarkLogic) will use locks to serialize writes together with MVCC to obtain some of the performance gains and still support the stronger "serializability" level of isolation. Potential inconsistency problems arising from write skew anomalies can be fixed by adding (otherwise unnecessary) updates to the transactions in order to enforce theserializabilityproperty.[4][5][6][7] In the example above, we can materialize the conflict by adding a new table which makes the hidden constraint explicit, mapping each person to theirtotal balance. Phil would start off with a total balance of $200, and each transaction would attempt to subtract $200 from this, creating a write–write conflict that would prevent the two from succeeding concurrently. However, this approach violates thenormal form. Alternatively, we can promote one of the transaction's reads to a write. For instance, T2 could set V1 = V1, creating an artificial write–write conflict with T1 and, again, preventing the two from succeeding concurrently. This solution may not always be possible. In general, therefore, snapshot isolation puts some of the problem of maintaining non-trivial constraints onto the user, who may not appreciate either the potential pitfalls or the possible solutions. The upside to this transfer is better performance. Snapshot isolation is called "serializable" mode inOracle[8][9][10]andPostgreSQLversions prior to 9.1,[11][12][13]which may cause confusion with the "realserializability" mode. There are arguments both for and against this decision; what is clear is that users must be aware of the distinction to avoid possible undesired anomalous behavior in their database system logic. Snapshot isolation arose from work onmultiversion concurrency controldatabases, where multiple versions of the database are maintained concurrently to allow readers to execute without colliding with writers. Such a system allows a natural definition and implementation of such an isolation level.[3]InterBase, later owned byBorland, was acknowledged to provide SI rather than full serializability in version 4,[3]and likely permitted write-skew anomalies since its first release in 1985.[14] Unfortunately, the ANSISQL-92standard was written with alock-based database in mind, and hence is rather vague when applied to MVCC systems. Berensonet al.wrote a paper in 1995[3]critiquing the SQL standard, and cited snapshot isolation as an example of an isolation level that did not exhibit the standard anomalies described in the ANSI SQL-92 standard, yet still had anomalous behaviour when compared withserializabletransactions. 
In 2008, Cahillet al.showed that write-skew anomalies could be prevented by detecting and aborting "dangerous" triplets of concurrent transactions.[15]This implementation of serializability is well-suited tomultiversion concurrency controldatabases, and has been adopted in PostgreSQL 9.1,[12][13][16]where it is known as Serializable Snapshot Isolation (SSI). When used consistently, this eliminates the need for the above workarounds. The downside over snapshot isolation is an increase in aborted transactions. This can perform better or worse than snapshot isolation with the above workarounds, depending on workload.
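Returning to the write-skew example with Phil's two balances, the scenario is easy to reproduce with a toy first-committer-wins model. The Python sketch below is a simulation only (Store and Snapshot are invented names and there is no real MVCC engine): each transaction reads from a private snapshot taken at its start, commits only if none of the items it wrote has since been changed by a committed transaction, and checks the V1 + V2 ≥ 0 constraint against its own snapshot.

class Store:
    def __init__(self, data):
        self.data = dict(data)
        self.versions = {k: 0 for k in data}

class Snapshot:
    """Toy snapshot-isolation transaction with first-committer-wins checks."""
    def __init__(self, store):
        self.store = store
        self.snap = dict(store.data)               # private snapshot at start
        self.start_versions = dict(store.versions)
        self.writes = {}

    def read(self, key):
        return self.writes.get(key, self.snap[key])

    def write(self, key, value):
        self.writes[key] = value

    def commit(self):
        for key in self.writes:                    # write-write conflict check only
            if self.store.versions[key] != self.start_versions[key]:
                return False                       # abort
        for key, value in self.writes.items():
            self.store.data[key] = value
            self.store.versions[key] += 1
        return True

store = Store({"V1": 100, "V2": 100})
t1, t2 = Snapshot(store), Snapshot(store)

# T1 withdraws 200 from V1, T2 from V2; each checks the constraint
# against its own snapshot, so both checks pass.
t1.write("V1", t1.read("V1") - 200)
t2.write("V2", t2.read("V2") - 200)
assert t1.read("V1") + t1.read("V2") >= 0
assert t2.read("V1") + t2.read("V2") >= 0

print(t1.commit(), t2.commit())    # True True: the write sets are disjoint
print(store.data)                  # {'V1': -100, 'V2': -100} -> write skew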
https://en.wikipedia.org/wiki/Snapshot_isolation#Making_Snapshot_Isolation_Serializable
In concurrency control of databases, transaction processing (transaction management), and other transactional distributed applications, global serializability (or modular serializability) is a property of a global schedule of transactions. A global schedule is the unified schedule of all the individual database (and other transactional object) schedules in a multidatabase environment (e.g., a federated database). Complying with global serializability means that the global schedule is serializable (has the serializability property), while each component database (module) has a serializable schedule as well. In other words, it is not enough for each component to be serializable on its own: assuming that a collection of serializable components automatically provides overall system serializability is usually incorrect. The need for correctness across databases in multidatabase systems makes global serializability a major goal for global concurrency control (or modular concurrency control). With the proliferation of the Internet, cloud computing, grid computing, and small, portable, powerful computing devices (e.g., smartphones), as well as increasing systems-management sophistication, the need for atomic distributed transactions, and thus for effective global serializability techniques to ensure correctness in and among distributed transactional applications, seems to increase. In a federated database system, or any other more loosely defined multidatabase system, which are typically distributed over a communication network, transactions span multiple (and possibly distributed) databases. Enforcing global serializability in such a system, where different databases may use different types of concurrency control, is problematic: even if every local schedule of a single database is serializable, the global schedule of the whole system is not necessarily serializable. The massive exchanges of conflict information needed between databases to reach conflict serializability globally would lead to unacceptable performance, primarily due to computer and communication latency. Achieving global serializability effectively over different types of concurrency control was an open problem for several years. The lack of an appropriate solution drove researchers to look for alternatives to serializability as a correctness criterion in a multidatabase environment (e.g., see Relaxing global serializability below), and the problem was long characterized as difficult and open; quotations from around the end of 1991, with similar remarks in numerous other articles, demonstrate the mindset about it at the time. Several solutions, some partial, have been proposed for the global serializability problem, and some techniques have also been developed for relaxed global serializability (i.e., they do not guarantee global serializability; see also Relaxing serializability). Another common reason nowadays for relaxing global serializability is the availability requirement of Internet products and services. This requirement is typically answered by large-scale data replication. The straightforward solution for synchronizing replica updates of the same database object is to include all of them in a single atomic distributed transaction. However, with many replicas such a transaction is very large, and may span several computers and networks, some of which are likely to be unavailable.
Thus such a transaction is likely to abort and miss its purpose.[4] Consequently, optimistic replication (lazy replication) is often utilized (e.g., in many products and services by Google, Amazon, Yahoo, and the like), while global serializability is relaxed and compromised for eventual consistency. In this case relaxation is done only for applications that are not expected to be harmed by it. Classes of schedules defined by relaxed global serializability properties either contain the global serializability class or are incomparable with it. What differentiates techniques for relaxed global conflict serializability (RGCSR) properties from relaxed conflict serializability (RCSR) properties that are not RGCSR is typically the different way global cycles (cycles that span two or more databases) in the global conflict graph are handled. No distinction between global and local cycles exists for RCSR properties that are not RGCSR. RCSR contains RGCSR. Typically, RGCSR techniques eliminate local cycles, i.e., they provide local serializability (which can be achieved effectively by regular, known concurrency control methods); however, they obviously do not eliminate all global cycles (which would achieve global serializability).
https://en.wikipedia.org/wiki/Global_serializability
Zero ASIC Corporation, formerlyAdapteva, Inc., is afablesssemiconductorcompanyfocusing on low powermany coremicroprocessordesign. The company was the second company to announce a design with 1,000 specialized processing cores on a singleintegrated circuit.[1][2] Adapteva was founded in 2008 with the goal of bringing a ten times advancement infloating-pointperformance per wattfor the mobile device market. Products are based on its Epiphany multi-coremultiple instruction, multiple data(MIMD) architecture and its ParallellaKickstarterproject promoting "a supercomputer for everyone" in September 2012. The company name is a combination of "adapt" and the Hebrew word "Teva" meaning nature. Adapteva was founded in March 2008, by Andreas Olofsson. The company was founded with the goal of bringing a 10× advancement infloating-pointprocessingenergy efficiencyfor themobile devicemarket. In May 2009, Olofsson had a prototype of a new type ofmassively parallelmulti-corecomputer architecture. The initial prototype was implemented in 65 nm and had 16 independent microprocessor cores. The initial prototypes enabled Adapteva to secure US$1.5 million in series-A funding from BittWare, a company fromConcord, New Hampshire, in October 2009.[3] Adapteva's first commercial chip product started sampling to customers in early May 2011 and they soon thereafter announced the capability to put up to 4,096 cores on a single chip. TheEpiphany III, was announced in October 2011 using 28 nm and 65 nm manufacturing processes. Adapteva's main product family is the Epiphany scalable multi-coreMIMDarchitecture. The Epiphany architecture could accommodate chips with up to 4,096RISCout-of-ordermicroprocessors, all sharing a single32-bitflat memory space. EachRISCprocessor in the Epiphany architecture issuperscalarwith 64× 32-bitunified register file(integer orsingle-precision) microprocessor operating up to 1GHzand capable of 2GFLOPS(single-precision). Epiphany's RISC processors use a custominstruction set architecture(ISA) optimised forsingle-precision floating-point,[4]but are programmable in high levelANSI Cusing a standardGNU-GCCtool chain. Each RISC processor (in current implementations; not fixed in the architecture) has 32KBof local memory. Code (possibly duplicated in each core) and stack space should be in thatlocal memory; in addition (most) temporary data should fit there for full speed. Data can also be used from other processor cores local memory at a speed penalty, or off-chip RAM with much larger speed penalty. The memory architecture does not employ explicit hierarchy ofhardware caches, similar to the Sony/Toshiba/IBMCell processor, but with the additional benefit of off-chip and inter-core loads and stores being supported (which simplifies porting software to the architecture). It is a hardware implementation ofpartitioned global address space.[citation needed] This eliminated the need for complexcache coherencyhardware, which places a practical limit on the number of cores in a traditionalmulticore system. The design allows the programmer to leverage greater foreknowledge of independent data access patterns to avoid the runtime cost of figuring this out. All processor nodes are connected through anetwork on chip, allowing efficient message passing.[5] The architecture is designed to scale almost indefinitely, with 4e-linksallowing multiple chips to be combined in a grid topology, allowing for systems with thousands of cores. 
On August 19, 2012, Adapteva posted some specifications and information about Epiphany multi-core coprocessors.[6] In September 2012, a 16-core version, the Epiphany-III (E16G301), was produced using 65 nm[9](11.5 mm2, 500 MHz chip[10]) and engineering samples of 64-core Epiphany-IV (E64G401) were produced using 28 nmGlobalFoundriesprocess (800 MHz).[11] The primary markets for the Epiphany multi-core architecture include: In September 2012, Adapteva started project Parallella onKickstarter, which was marketed as "A Supercomputer for everyone." Architecture reference manuals for the platform were published as part of the campaign to attract attention to the project.[12]The US$750,000 funding goal was reached in a month, with a minimum contribution of US$99 entitling backers to obtain one device; although the initial deadline was set for May 2013, the first single-board computers with 16-core Epiphany chip were finally shipped in December 2013.[13] Size of board is planned to be 86 mm × 53 mm (3.4 in × 2.1 in).[14][15][16] The Kickstarter campaign raised US$898,921.[17][18]Raising US$3 million goal was unsuccessful, so no 64-core version of Parallella will be mass-produced.[19]Kickstarter users having donated more than US$750 will get "parallella-64" variant with 64-core coprocessor (made from initialprototype manufacturingwith 50 chips yield per wafer).[20] By 2016, the firm hadtaped outa 1024-core64-bitvariant of their Epiphany architecture that featured: larger local stores (64 KB), 64-bit addressing,double-precision floating-pointarithmetic orSIMDsingle-precision, and 64-bit integer instructions, implemented in the 16 nm process node.[21]This design included instruction set enhancements aimed atdeep-learningandcryptographyapplications. In July 2017, Adapteva's founder became aDARPAMTOprogram manager[22]and announced that the Epiphany V was "unlikely" to become available as a commercial product.[23] The 16-core Parallella achieves roughly 5.0 GFLOPS/W, and the 64-core Epiphany-IV made with 28 nm estimated as 50 GFLOPS/W (single-precision),[24]and 32-board system based on them achieves 15 GFLOPS/W.[25]For comparison, top GPUs from AMD and Nvidia reached 10 GFLOPS/W for single-precision in 2009–2011 timeframe.[26]
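As a rough cross-check of these figures, and assuming the per-core rating of 2 GFLOPS at 1 GHz corresponds to 2 single-precision floating-point operations per cycle and scales linearly with clock speed, the peak throughput of the two chips mentioned above works out to:

{\displaystyle 16{\text{ cores}}\times 2{\text{ FLOPs/cycle}}\times 0.5{\text{ GHz}}=16{\text{ GFLOPS peak (Epiphany-III, single precision)}}}

{\displaystyle 64{\text{ cores}}\times 2{\text{ FLOPs/cycle}}\times 0.8{\text{ GHz}}\approx 102{\text{ GFLOPS peak (Epiphany-IV, single precision)}}}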
https://en.wikipedia.org/wiki/Adapteva
Michael David May(born 24 February 1951) is a Britishcomputer scientist. He is a Professor in theDepartment of Computer Scienceat theUniversity of Bristoland founder ofXMOS Semiconductor, serving until February 2014 as thechief technology officer.[1] May waslead architectfor thetransputer. As of 2017, he holds 56 patents, all inmicroprocessorsandmulti-processing. May was born inHolmfirth, Yorkshire, England and attendedQueen Elizabeth Grammar School, Wakefield. From 1969 to 1972 he was a student atKing's College, Cambridge,University of Cambridge, at first studying Mathematics and then Computer Science in the University of Cambridge Mathematical Laboratory, now theUniversity of Cambridge Computer Laboratory. He moved to theUniversity of Warwickand started research inrobotics. The challenges of implementing sensing and control systems led him to design and implement an earlyconcurrent programming language, EPL, which ran on a cluster ofsingle-boardmicrocomputersconnected byserial communicationlinks. This early work brought him into contact withTony HoareandIann Barron: one of the founders ofInmos. WhenInmoswas formed in 1978, May joined to work on microcomputer architecture, becoming lead architect of the transputer and designer of the associated programming languageOccam. This extended his earlier work and was also influenced byTony Hoare, who was at the time working onCSPand acting as a consultant to Inmos. The prototype of the transputer was called theSimple 42and was completed in 1982. The first production transputers, theT212andT414, followed in 1985; theT800floating point transputer in 1987. May initiated the design of one of the firstVLSIpacket switches, theC104, together with the communications system of theT9000transputer. Working closely withTony Hoareand theProgramming Research GroupatOxford University, May introduced formal verification techniques into the design of theT800floating point unitand theT9000transputer. These were some of the earliest uses offormal verificationin microprocessor design, involving specifications,correctness preserving transformationsandmodel checking, giving rise to the initial version of the FDR checker developed at Oxford. In 1995, May joined theUniversity of Bristolas a professor of computer science. He was head of the computer science department from 1995 to 2006. He continues to be a professor atBristolwhile supportingXMOS, a University spin-out he co-founded in 2005. Before XMOS, he was involved inPicochip, where he wrote the original instruction set. May is married with three sons and lives in Bristol, United Kingdom. In 1990, May received anHonorary DScfrom theUniversity of Southampton, followed in 1991 by his election as a Fellow ofThe Royal Societyand theClifford Paterson Medal and Prizeof theInstitute of Physicsin 1992. In 2010, he was elected aFellow[2]of theRoyal Academy of Engineering.[3] May's Lawstates, in reference toMoore's Law: Software efficiency halves every 18 months, compensating Moore's Law.[4]
https://en.wikipedia.org/wiki/David_May_(computer_scientist)
Ease is a general-purpose parallel programming language. It was designed by Steven Ericsson-Zenith, a researcher at Yale University, the Institute for Advanced Science & Engineering in Silicon Valley, California, the Ecole Nationale Supérieure des Mines de Paris, and the Pierre and Marie Curie University, the science department of the Sorbonne.[1] The book Process Interaction Models is the Ease language specification. Ease combines the process constructs of communicating sequential processes (CSP) with logically shared data structures called contexts. Contexts are parallel data types that are constructed by processes and provide a way for processes to interact. The language includes two process constructors. A cooperation includes an explicit barrier synchronization: if one process finishes before the others, it waits until the other processes are finished. A subordination creates a process that shares the contexts in scope when it is created and finishes when complete; it does not wait for other processes. Subordinate processes stop if they attempt to interact with a context that has completed because the parent process has stopped. This enables speculative processes to be created that will finish if their result is not needed. A powerful replication syntax allows multiple processes to be created; applied to a cooperation, for example, it creates n synchronized processes, each with a local constant i. Processes cannot share local variables and cooperate in the construction of shared contexts. Certain context types, called resources, ensure call-reply semantics. There are four functions upon contexts. Context types are Singletons, Bags or Streams, and can be subscripted arrays. Ease has a semiotic definition: it accounts for the effect the language has on the programmer and how they develop algorithms. The language was designed to ease the development of parallel programs.
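Ease's own syntax is only alluded to above, so the following is not Ease code; it is a rough Python analogy (using concurrent.futures as a stand-in for the process constructors) of the difference between a cooperation, which barrier-waits for all of its processes, and a subordination, which runs alongside the parent and is consulted only if its result is needed.

from concurrent.futures import ThreadPoolExecutor, wait

def worker(i):
    return f"process {i} finished"

with ThreadPoolExecutor() as pool:
    # Cooperation-like: start several processes and wait for all of them,
    # as a cooperation's implicit barrier synchronization would.
    futures = [pool.submit(worker, i) for i in range(4)]
    wait(futures)
    print([f.result() for f in futures])

    # Subordination-like: start a process and carry on without waiting;
    # the parent looks at its result only if and when it is needed.
    side = pool.submit(worker, "speculative")
    print("parent keeps going")
    print(side.result())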
https://en.wikipedia.org/wiki/Ease_(programming_language)
IEEE Standard 1355-1995,IEC 14575, orISO 14575is adata communicationsstandard for Heterogeneous Interconnect (HIC). IEC 14575is a low-cost, low latency, scalable serial interconnection system, originally intended for communication between large numbers of inexpensive computers. IEC 14575lacks many of the complexities of other data networks. The standard defined several different types of transmission media (including wires and optic fiber), to address different applications. Since the high-level network logic is compatible, inexpensive electronic adapters are possible. IEEE 1355 is often used in scientific laboratories. Promoters include large laboratories, such asCERN, and scientific agencies. For example, theESAadvocates a derivative standard calledSpaceWire. The protocol was designed for a simple, low cost switched network made ofpoint-to-pointlinks. This network sends variable length data packets reliably at high speed. It routes the packets usingwormhole routing. UnlikeToken Ringor other types oflocal area networks(LANs) with comparable specifications, IEEE 1355 scales beyond a thousand nodes without requiring higher transmission speeds. The network is designed to carry traffic from other types of networks, notablyInternet ProtocolandAsynchronous Transfer Mode(ATM), but does not depend on other protocols for data transfers or switching. In this, it resemblesMultiprotocol Label Switching(MPLS). IEEE 1355 had goals likeFuturebusand its derivativesScalable Coherent Interface(SCI), andInfiniBand. The packet routing system of IEEE 1355 is also similar toVPLS,[citation needed]and uses a packet labeling scheme similar to MPLS. IEEE 1355 achieves its design goals with relatively simple digital electronics and very little software. This simplicity is valued by many engineers and scientists.[which?]Paul Walker (see links[which?]) said that when implemented in anFPGA, the standard takes about a third the hardware resources of aUART(a standard serial port)[which?][citation needed], and gives one hundred times the data transmission capacity, while implementing a full switching network and being easier to program.[citation needed] Historically, IEEE 1355 derived from the asynchronous serial networks developed for theTransputermodel T9000 on-chip serial data interfaces.[1]The Transputer was amicroprocessordeveloped to inexpensively implement parallel computation. IEEE 1355 resulted from an attempt to preserve the Transputer's unusually simple data network. Thisdata strobe encodingscheme makes the links self-clocking, able to adapt automatically to different speeds. It was patented byInmosunder U.K. patent number 9011700.3, claim 16 (DS-Link bit-level encoding), and in 1991 under US patent 5341371,[2]claim 16. The patent expired in 2011. IEEE 1355 inspiredSpaceWire. It is sometimes used for digital data connections between scientific instruments, controllers and recording systems. IEEE 1355 is used in scientific instrumentation because it is easy to program and it manages most events by itself without complex real-time software. IEEE 1355 includes a definition for cheap, fast, short-distance network media, intended as the internal protocols for electronics, including network switching and routing equipment. It also includes medium, and long-distance network protocols, intended forlocal area networksandwide area networks. IEEE 1355 is designed for point-to-point use. 
It could therefore take the place of the most common use ofEthernet, if it used equivalent signaling technologies (such asLow voltage differential signaling).[3] IEEE 1355 could work well for consumer digital appliances. The protocol is simpler thanUniversal Serial Bus(USB),FireWire,Peripheral Component Interconnect(PCI) and other consumer protocols. This simplicity can reduce equipment expense and enhance reliability. IEEE 1355 does not define any message-level transactions, so these would have to be defined in auxiliary standards. A 1024 node testbed called Macramé was constructed in Europe in 1997.[4]Researchers measuring the performance and reliability of the Macramé testbed provided useful input to the working group which established the standard.[5] The work of theInstitute of Electrical and Electronics Engineerswas sponsored by the Bus Architecture Standards Committee as part of the Open Microprocessor Systems Initiative. The chair of the group was Colin Whitby-Strevens, co-chair was Roland Marbot, and editor was Andrew Cofler. The standard was approved 21 September 1995 as IEEE Standard for Heterogeneous InterConnect (HIC) (Low-Cost, Low-Latency Scalable Serial Interconnect for Parallel System Construction) and published as IEEE Std 1355-1995.[6]A trade association was formed in October 1999 and maintained a web site until 2004.[7] The family of standards use similar logic and behavior, but operate at a wide range of speeds over several types of media. The authors of the standard say that no single standard addresses all price and performance points for a network. Therefore, the standard includes slices (their words) for single-ended (cheap), differential (reliable) and high speed (fast) electrical interfaces, as well as fiber optic interfaces. Long-distance or fast interfaces are designed so that there is no net power transfer through the cable. Transmission speeds range from 10 megabits per second to 1 gigabit per second. The network's normal data consists of 8-bit bytes sent with flow control. This makes it compatible with other common transmission media, including standard telecommunications links. The maximum length of the different data transmission media range from one meter to 3 kilometers. The 3 km standard is thefastest.The others are cheaper. The connectors are defined so that if a plug fits a jack, the connection is supposed to work. Cables have the same type of plug at both ends, so that each standard has only one type of cable. "Extenders" are defined as two-ended jacks that connect two standard cables. Interface electronics perform most of the packet-handling, routing, housekeeping and protocol management. Software is not needed for these tasks. When there is an error, the two ends of a link exchange an interval of silence or a reset, and then restart the protocol as if from power-up. A switching node reads the first few bytes of a packet as an address, and then forwards the rest of the packet to the next link without reading or changing it. This is called "wormhole switching" in an annex to the standard. Wormhole switching requires no software to implement a switching fabric. Simplehardware logiccan arrange fail-overs to redundant links. Each link defines a full-duplex (continuous bidirectional transmission and reception) point-to-point connection between two communicating pieces of electronics. Every transmission path has a flow control protocol, so that when a receiver begins to get too much data, it can turn down the flow. 
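The header-stripping behaviour of the wormhole switching described above can be shown in a few lines. The sketch below is an illustration only: the route function and the one-byte-per-hop header layout are simplifications for this example, and flow control and character-level framing are ignored.

def route(packet, routers):
    """Forward a source-routed packet through a chain of switches.

    Each switch consumes the packet's first byte as its output-port number
    and forwards the rest unchanged; the payload is never inspected or
    buffered as a whole, which is the essence of wormhole source routing."""
    hops = []
    for router in routers:
        port, packet = packet[0], packet[1:]    # strip one header byte
        hops.append((router, port))
    return hops, packet                         # what remains is the payload

hops, payload = route(b"\x02\x05hello", ["R1", "R2"])
print(hops)      # [('R1', 2), ('R2', 5)]
print(payload)   # b'hello'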
Every transmission path's electronics can send link control data separately from normal data. When a link is idle, it transmits NULL characters. This maintains synchronization, finishes any remaining transmission quickly, and tests the link. Some Spacewire users are experimenting with half-duplex versions.[1]The general scheme is that half-duplex uses one transmission channel rather than two. In space, this is useful because the weight of wires is half as much. Controllers would reverse the link after sending an end-of-packet character. The scheme is most effective in the self-clocking electrical systems, such as Spacewire. In the high speed optical slices, half-duplex throughput would be limited by the synchronization time of thephase locked loopsused to recover the bit clock. This description is a brief outline. The standard defines more details, such as the connector dimensions, noise margins, and attenuation budgets. IEEE 1355 is defined in layers and slices. The layers are network features that are similar in different media and signal codings. Slices identify a vertical slice of compatible layers. The lowest layer defines signals. The highest defines packets. Combinations of packets, the application or transaction layer, are outside the standard. A slice, an interoperable implementation, is defined by a convenient descriptive code, SC-TM-dd, where: Defined slices include: Spacewireis very similar to DS-DE-02, except it uses a microminiature 9-pin "D" connector (lower-weight), andlow voltage differential signaling. It also defines some higher-level standard message formats, routing methods, and connector and wire materials that work reliably in vacuum and severe vibration. In all slices, each link can continuously transmit in both directions ("full duplex"). Each link has two transmission channels, one for each direction. In a link's cable, the channels have a "half twist" so that input and output always go to the same pins of the connector on both ends of the cable. This makes the cables "promiscuous", that is, each end of any cable will plug into any jack on a piece of equipment. Each end of a link's cable must be clearly marked with the type of link: for example "IEEE 1355 DS-DE Link Cable". Every slice defines 256 data characters. This is enough to represent 8 bits per character. These are called "normal data" or "N-chars." Every slice defines a number of special link control characters, sometimes called "L-chars." The slice cannot confuse them with N-chars. Each slice includes a flow control link-control character, or FCC, as well as L-chars for NULL (no data), ESCAPE, end of packet, and exceptional end of packet. Some slices add a few more to start-up the link, diagnose problems, etc. Every slice has error detection defined at the character layer, usually using parity. The parity is usually distributed over several characters. A flow-control-character gives a node permission to transmit a few normal data characters. The number depends on the slice, with faster slices sending more characters per FCC. Building flow control in at a low level makes the link far more reliable, and removes much of the need to retransmit packets. Once a link starts, it continuously exchanges characters. These are NULLs if there is no data to exchange. This tests the link, and ensures that the parity bits are sent quickly to finish messages. Each slice has its own start-up sequence. For example, DS-SE and DS-DE are silent, then start sending as soon as they are commanded to start. 
A received character is a command to start. In error detection, normally the two ends of the link exchange a very brief silence (e.g. a few microseconds for DS-SE), or a reset command and then try to reset and restore the link as if from power-up. A packet is a sequence of normal data with a specific order and format, ended by an "end of packet" character. Links do not interleave data from several packets. The first few characters of a packet describe its destination. Hardware can read those bytes to route the packet. Hardware does not need to store the packet, or perform any other calculations on it in order to copy it and route it. One standard way to route packets iswormhole source routingin which the first data byte always tells the router which of its outputs should carry the packet. The router then strips off the first byte, exposing the next byte for use by the next router. IEEE 1355 acknowledges that there must be sequences of packets to perform useful work. It does not define any of these sequences. DS-SE stands for "Data and Strobe, Single-ended Electrical." This is the least expensive electrical standard. It sends data at up to 200 megabits per second, for up to 1 meter, this is useful inside an instrument for reliable low-pin-count communications. A connection has two channels, one per direction. Each channel consists of two wires carrying strobe and data. The strobe line changes state whenever the data line starts a new bit with the same value as the previous bit. This scheme makes the links self-clocking, able to adapt automatically to different speeds. Data characters start with an odd parity, followed by a zero bit. This means that the character is a normal data character, followed by eight data bits. Link control characters start with odd parity, followed by a one bit, followed by two bits. Odd-1 means that the character is a link control character. 00 is the flow control character FCC, 01 is a normal end of packet EOP, 10 is an exceptional end of packet EEOP, and 11 is an escape character ESC. A NULL is the sequence "ESC FCC". An FCC gives permission to send eight (8) normal data characters. Each line can have two states: above 2.0V, and below 0.8 V -- single-ended CMOS or TTLlogic levelsignals.[8]The nominal impedance is either 50 or 100 ohms, for 3.3 V and 5 V systems respectively. Rise and fall times should be <100 ns. Capacitance should be <300pFfor 100 MBd, and <4 pF for 200 MBd. No connectors are defined because DS-SE is designed for use within electronic equipment. DS-DE stands for "Data and Strobe, Differential Electrical." This is the electrical standard that resists electrical noise the best. It sends data at up to 200 megabits per second, for up to 10 meters, which is useful for connecting instruments. The cable is thick, and the standard connectors are both heavy and expensive. Each cable has eight wires carrying data. These eight wires are divided into two channels, one for each direction. Each channel consists of four wires, two twisted pairs. One twisted pair carries differential strobe, and the other carries differential data. The encoding for the character layer and above is otherwise like the DS-SE definition. Since the cable has ten wires, and eight are used for data, a twisted pair is left over. The black/white pair optionally carries 5 V power and return. The driver rise time should be between 0.5 and 2ns. 
The differential voltage may range from 0.8 V to 1.4 V, with 1.0 V typical—differentialPECLlogic level signals.[8]The differential impedance is 95 ± 10 ohms. The common mode output voltage is 2.5–4 V. The receiver's input impedance should be 100 ohms, within 10%. the receiver input's common mode voltage must be between -1 and 7 V. The receiver's sensitivity should be at least 200 mV. The standard cable has ten wires. The connectors are IEC-61076-4-107. Plug A (pin 1 is first, pin 2 second): a:brown/blue, b:red/green, c:white/black, d:orange/yellow, e:violet/gray (Pin 1 is given first). Plug B (pin 2 is first, pin 1 second): e:brown/blue, d:red/green, c:black/white, b:orange/yellow, a:violet/gray. Note the implementation the "half twist", routing inputs and outputs to the same pins on each plug. The Pin 1C/black, may carry 5 volts, while 2C/white may carry return. If the power supply is present it must have aself-healing fuse, and may have ground fault protection. If it is absent, the pins should include a 1 MΩ resistor to ground to leak away static voltages. TS-FO stands for "Three of Six, Fiber Optical." This is a fiber optic standard designed for affordable plastic fibers operating in the near infrared. It sends 200 megabits/second about 300 meters. The wavelength should be between 760 and 900 nanometers, which is in the nearinfrared. The operating speed should be at most 250 MBd with at most 100 parts per million variation. The dynamic range should be about 12decibels. The cable for this link uses two 62.5micrometer-diametermultimodeoptic fibers. The fiber's maximum attenuation should be 4decibelsper kilometer at aninfraredwavelengthof 850 nanometers. The standard connector on each end is a duplex MU connector. Ferrule 2 is always "in", while ferrule 1 is "out". The centerlines should be on 14 mm centers, and the connector should be 13.9 mm maximum. The cable has a "half twist" to make it promiscuous. Theline code"3/6" sends a stream of six bits, of which three bits are always set. There are twenty possible characters. Sixteen are used to send four bits, two (111000 and 000111) are unused, and two are used to construct link control characters. These are shown with the first bit sent starting on the left. Such aconstant-weight codedetects all single-bit errors. Combined with alongitudinal redundancy check, it avoids the need for aCRCwhich can double the size of small packets. Normal data bytes are sent as two data characters, sent least significantnibblefirst. Special symbols are sent as pairs including at least one control character. The two control characters are called "Control" and "Control*", depending on the previous character. If the previous character ends with a 0, Control is 010101 and Control* is 101010. If the previous character ends with a 1, Control is 101010, and Control* is 010101. Data errors are detected by a longitudinal parity: all the data nibbles exclusive-ored and then the result is sent as the 4-bit checksum nibble in the end-of-packet symbol. This link transmits NULLs when idle. Each flow control character (FCC) authorizes the other end to send eight bytes, i.e. sixteen normal data characters. The link starts by sending INIT characters. After receiving them for125 μs, it switches to sending NULLs. After it sends NULLs for125 μs, it sends a single INIT. When a link has both sent and received a single INIT, it may send an FCC and start receiving data. Receiving two consecutive INITs, or many zeros or ones, indicates disconnection. 
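Because every valid character in the 3/6 line code has exactly three of its six bits set, a receiver can reject any single-bit error simply by counting set bits. A minimal validity check, with the actual codebook left out:

def valid_3of6(bits):
    """A 3-of-6 character is plausible only if exactly three bits are set;
    a single flipped bit changes the weight to two or four and is rejected."""
    return len(bits) == 6 and sum(bits) == 3

print(valid_3of6([0, 1, 0, 1, 1, 0]))   # True  (weight 3)
print(valid_3of6([0, 1, 0, 1, 1, 1]))   # False (weight 4, single-bit error)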
Like thetwo-out-of-five code, it may be decoded by assigning weights to bit positions, in this case 1-2-0-4-8-0. The two 0-weight bits are assigned to ensure there are a total of three bits set. When the nibble has one or three 1 bits, this is unambiguous. When the nibble is 0 or F (zero or four 1 bits), an exception must be made. And when the nibble has two 1 bits, there is ambiguity: HS-SE stands for "High speed, Single-ended Electrical." This is the fastest electrical slice. It sends a gigabit per second, but the 8 meter range limits its usage to instrument clusters. However, the modulation and link control features of this standard are also used by the wide-area fiber optic protocols. A link cable consists of two 2.85 mm diameter 50 Ω coaxial cables. The impedance of the whole transmission line shall be 50 ohms ±10%. The connectors shall follow IEC 1076-4-107. The coaxial cables do a "half twist" so that pin B is always "in" and pin A is always "out". The electrical link is single-ended. For 3.3 V operation, low is 1.25 V and high is 2 V. For 5 V operation, low is 2.1 V and high is 2.9 V. The signaling speed is 100 MBd to 1GBd. The maximum rise time is 300 picoseconds, and the minimum is 100 picoseconds. The HS link's 8B/12B code is a balancedpaired disparity code, so there is no net power transfer. It arranges this by keeping a running disparity, a count of the average number of ones and zeros. It uses the running disparity to selectively invert characters. An inverted character is marked with a set invert bit. 8B/12B also guarantees a clock transition on each character. 8B/12B first sends an odd parity bit, followed by 8 bits (least-significant bit first), followed by an inversion bit, followed by a 1 (which is the start bit), and a 0 which is the stop bit. When the disparity of a character is zero (that is, it has the same number of ones and zeroes, and therefore will not transfer power), it can be transmitted either inverted or noninverted with no effect on the running disparity. Link control characters have a disparity of zero, and are inverted. This defines 126 possible link characters. Every other character is a normal data character. The link characters are: 0:IDLE 5:START_REQ (start request) 1:START_ACK (start acknowledge) 2:STOP_REQ (stop request) 3:STOP_ACK (stop acknowledge) 4:STOP_NACK (stop negative acknowledge) 125:FCC (flow control character) 6:RESET When a link starts, each side has a bit "CAL" that is zero before the receiver is calibrated to the link. When CAL is zero, the receiver throws away any data it receives. During a unidirectional start up, side A sends IDLE. When side B is calibrated, it begins to send IDLE to A. When A is calibrated, it sends START_REQ. B responds with START_ACK back to A. A then sends START_REQ to B, B responds with START_ACK, and at that point, either A or B can send a flow control character and start to get data. In a bidirectional start-up both sides start sending IDLE. When side A is calibrated, it send START_REQ to side B. Side B sends START_ACK, and then A can send an FCC to start getting data. Side B does exactly the same. If the other side is not ready, it does not respond with a START_ACK. After 5 ms, side A tries again. After 50 ms, side A gives up, turns off the power, stops and reports an error. This behavior is to prevent eye-injuries from a high-powered disconnected optical fiber end. A flow control character (FCC) authorizes the receiver to send thirty-two (32) data characters. 
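Ignoring the running-disparity bookkeeping that decides when a character is sent inverted, the 12-bit frame for a normal data character in the 8B/12B code described above can be assembled as follows; the exact scope of the parity bit is an assumption of this sketch.

def frame_8b12b(byte, invert=False):
    """Build the 12-bit character from the description in the text: odd
    parity bit, eight data bits least-significant first, the inversion bit,
    then the 1/0 start/stop pair. Running-disparity tracking, which chooses
    `invert`, is omitted, and the parity is assumed to cover the data and
    inversion bits only."""
    data = [(byte >> i) & 1 for i in range(8)]      # LSB first
    if invert:
        data = [b ^ 1 for b in data]
    parity = (sum(data) + int(invert) + 1) % 2      # make the total number of ones odd
    return [parity] + data + [int(invert), 1, 0]

print(frame_8b12b(0xA5))   # [1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0]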
A reset character is echoed, and then causes a unidirectional start-up. If a receiver loses calibration, it can either send a reset command, or simply hold its transmitter low, causing a calibration failure in the other link. The link is only shutdown if both nodes request a shutdown. Side A sends STOP_REQ, side B responds with STOP_ACK if it is ready to shut down, or STOP_NACK if it is not ready. Side B must perform the same sequence. "HS-FO" stands for "High Speed Fiber Optical." This is the fastest slice, and has the longest range, as well. It sends a gigabit/second up to 3000 meters. The line code and higher levels are just like HS-SE-10. The cable is very similar to the other optical cable, TS-FO-02, except for the mandatory label and the connector, which should be IEC-1754-6. However, in older cables, it is often exactly the same as TS-FO-02, except for the label. HS-FO-10 and TS-FO-02 will not interoperate. This cable can have 62.5 micrometer multimode cable, 50 micrometer multimode cable, or 9 micrometer single-mode cable. These vary in expense and the distances they permit: 100 meters, 1000 meters, and 3000 meters respectively. For multimode fiber, the transmitter launch power is generally −12dBm. The wavelength is 760–900 nanometer (nearinfrared). On the receiver, the dynamic range is 10 dB, and the sensitivity is −21 dBm with a bit error rate of one bit in 1012bits. For single mode fiber, the transmitter launch power is generally −12 dBm. The wavelength is 1250–1340 nanometers (fartherinfrared). On the receiver, the dynamic range is 12 dB, and the sensitivity is −20 dBm with a bit error rate of one bit in 1012bits.
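Stepping back to the data/strobe signalling shared by the DS slices: because the strobe changes state exactly when the data line does not, the receiver can recover a clock as data XOR strobe. The sketch below is a behavioural illustration only (the function names are invented and no timing is modeled).

def ds_encode(bits, d0=0, s0=0):
    """Encode a bit sequence onto data/strobe line levels: if the next bit
    equals the current data level the strobe toggles, otherwise the data
    line toggles, so exactly one of the two lines changes per bit."""
    levels, d, s = [], d0, s0
    for b in bits:
        if b == d:
            s ^= 1          # data unchanged -> strobe changes state
        else:
            d = b           # data changes   -> strobe holds
        levels.append((d, s))
    return levels

def ds_decode(levels):
    """Each transition on either line marks one bit period; the transmitted
    bit is simply the data-line level during that period."""
    return [d for d, _ in levels]

bits = [1, 0, 0, 1, 1, 1, 0]
assert ds_decode(ds_encode(bits)) == bits
print(ds_encode(bits))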
https://en.wikipedia.org/wiki/IEEE_1355
Inmos International plc(trademarkINMOS) and two operating subsidiaries, Inmos Limited (UK) and Inmos Corporation (US), was a Britishsemiconductorcompany founded byIann Barron, Richard Petritz, and Paul Schroeder in July 1978. Inmos Limited’s head office and design office were atAztec Westbusiness park inBristol, England. Inmos' first products werestatic RAMdevices, followed bydynamic RAMsandEEPROMs. Despite early production difficulties, Inmos eventually captured around 60% of the world SRAM market. However, Barron's long-term aim was to produce an innovativemicroprocessorarchitecture intended forparallel processing, thetransputer.David Mayand Robert Milne were recruited to design this processor, which went into production in 1985 in the form of the T212 and T414 chips.[1][2] The transputer achieved some success as the basis for several parallelsupercomputersfrom companies such asMeiko(formed by ex-Inmos employees in 1985),Floating Point Systems,Parsytecand Parsys. It was used in a few workstations, the most notable probably being theAtari Transputer Workstation.[3]Being a relatively self-contained design, it was also used in someembedded systems. However, the unconventional nature of the transputer and its nativeoccam programming languagelimited its appeal. During the late 1980s, the transputer (even in its later T800 form) also struggled to keep up with the ever-increasing performance of its competitors.[4] Other devices produced by Inmos included the A100, A110 and A121digital signal processors,G364 framebuffer, and a line of videoRAMDACs, including the G170[5]and G171, which was adopted byIBMfor the originalVGAgraphics adapterused in theIBM PS/2.[6] The company was founded byIann Barron, a British computer consultant, Richard Petritz and Paul Schroeder, both American semiconductor industry veterans. Initial funding of £50 million was provided by the UK government via theNational Enterprise Board. AUSsubsidiary, Inmos Corporation, was also established inColorado.Semiconductor fabricationfacilities were built in the US atColorado Springs, Coloradoand in the UK atNewport,South Wales. Under theprivatizationpolicy ofMargaret Thatcherthe National Enterprise Board was merged into theBritish Technology Groupand had to sell its shares in Inmos. Offers for Inmos fromAT&Tand a Dutch consortium had been turned down.[7]In 1982, construction of themicroprocessor factoryinNewport,South Waleswas completed. By July 1984Thorn EMIhad made a £124.1m bid for the state's 76% interest in the company (the remaining 24% had been held by Inmos founders and employees).[8]Later it was raised to £192 million, approved August 1984 and finalized in September.[7] In total, Inmos had received £211 million from the government, but did not become profitable.[9]According to Iann Barron Inmoswasprofitable in 1984 "we were really profitable in 1984 ... we made revenues of £150 million, and we made a profit which was slightly less than £10 million".[10] In April 1989, Inmos was sold to SGS-Thomson (nowSTMicroelectronics). Around the same time, work was started on an enhanced transputer, the T9000. This encountered various technical problems and delays, and was eventually abandoned, signalling the end of the development of the transputer as a parallel processing platform. However, transputer derivatives such as the ST20 were later incorporated into chipsets for embedded applications such asset-top boxes. 
In December 1994, Inmos was fully assimilated into STMicroelectronics, and the usage of the Inmos brand name was discontinued.
https://en.wikipedia.org/wiki/Inmos
iWarpwas an experimentalparallelsupercomputerarchitecture developed as a joint project byIntelandCarnegie Mellon University. The project started in 1988, as a follow-up to CMU's previousWARPresearch project, in order to explore building an entire parallel-computing "node" in a singlemicroprocessor, complete with memory and communications links. In this respect the iWarp is very similar to theINMOS transputerandnCUBE.[1] Intel announced iWarp in 1989. The first iWarp prototype was delivered to Carnegie Mellon in summer of 1990, and in fall they received the first 64-cell production systems, followed by two more in 1991. With the creation of the Intel Supercomputing Systems Division in the summer of 1992, the iWarp was merged into theiPSCproduct line. Intel kept iWarp as a product but stopped actively marketing it.[2] Each iWarp CPU included a32-bitALUwith a64-bitFPUrunning at 20 MHz. It was purely scalar and completed one instruction per cycle, so the performance was 20MIPSor 20megaflopsforsingle precisionand 10 MFLOPS for double.[3][4]The communications were handled by a separate unit on the CPU that drove fourserialchannels at 40 MB/s, and included networking support in hardware that allowed for up to 20virtual channels(similar to the system added to the INMOS T9000). iWarp processors were combined onto boards along with memory, but unlike other systems Intel chose the faster, but more expensive,static RAMfor use on the iWarp. Boards typically included four CPUs and anywhere from 512 kB to 4 MB of SRAM. Another difference in the iWarp was that the systems were connected together as a n-by-mtorus, instead of the more commonhypercube. A typical system included 64 CPUs connected as an 8×8 torus, which could deliver 1.2gigaflopspeak. George Coxwas the lead architect of the iWarp project.Steven McGeady(later an Intel Vice-president and witness in theMicrosoft antitrust case) wrote an innovative development environment that allowed software to be written for the array before it was completed. Each node of the array was represented by a differentSunworkstation on aLAN, with the iWarp's unique inter-node communication protocol simulated oversockets. Unlike the chip-level simulator, which could not simulate a multi-node array, and which ran very slowly, this environment allowed in-depth development of array software to begin. The production compiler for iWarp was a C and Fortran compiler based on theAT&Tpcccompiler for UNIX, ported under contract for Intel by the Canadian firmHCR Corporationand then extensively modified and extended by Intel.[5][6]
https://en.wikipedia.org/wiki/IWarp
Meiko Scientific Ltd.was a Britishsupercomputercompany based inBristol, founded by members of the design team working on theInmostransputermicroprocessor. In 1985, when Inmos management suggested the release of the transputer be delayed, Miles Chesney, David Alden, Eric Barton, Roy Bottomley, James Cownie, and Gerry Talbot resigned and formed Meiko (Japanesefor "well-engineered") to start work onmassively parallelmachines based on the processor. Nine weeks later in July 1985, they demonstrated a transputer system based on experimental16-bittransputers at theSIGGRAPHin San Francisco. In 1986, a system based on32-bitT414 transputers was launched as theMeiko Computing Surface. By 1990, Meiko had sold more than 300 systems and grown to 125 employees. In 1993, Meiko launched the second-generationMeiko CS-2system, but the company ran into financial difficulties in the mid-1990s. The technical team and technology was transferred to a joint venture company namedQuadrics Supercomputers World Ltd.(QSW), formed byAlenia SpazioofItalyin mid-1996. At Quadrics, the CS-2 interconnect technology was developed intoQsNet. As of 2021[update], a vestigial Meiko website still exists.[1] The Meiko Computing Surface (sometimes retrospectively referred to as the CS-1) was amassively parallelsupercomputer. The system was based on theInmostransputermicroprocessor, later also usingSPARCandIntel i860processors.[2][3] The Computing Surface architecture comprised multiple boards containing transputers connected together by their communications links via Meiko-designed link switch chips. A variety of different boards were produced with different transputer variants,random-access memory(RAM) capacities and peripherals. The initial software environments provided for the Computing Surface wasOccamProgramming System(OPS), Meiko's version of Inmos's D700 Transputer Development System. This was soon superseded by amulti-userversion,MultiOPS. Later, Meiko introducedMeiko Multiple Virtual Computing Surfaces(M²VCS), a multi-user resource management system let the processors of a Computing Surface be partitioned into severaldomainsof different sizes. These domains were allocated by M²VCS to individual users, thus allowing several simultaneous users access to their own virtual Computing Surfaces. M²VCS was used in conjunction with either OPS orMeikOS, aUnix-likesingle-processoroperating system. In 1988, Meiko launched the In-Sun Computing Surface, which repackaged the Computing Surface intoVMEbusboards (designated the MK200 series) suitable for installation in largerSun-3orSun-4systems. The Sun acted asfront-endhost system for managing the transputers, running development tools and providing mass storage. A version of M²VCS running as aSunOSdaemonnamedSun Virtual Computing Surfaces(SVCS) provided access between the transputer network and the Sun host. As the performance of the transputer became less competitive toward the end of the 1980s (the follow-on T9000 transputer being beset with delays), Meiko added the ability to supplement the transputers with Intel i860 processors. Each i860 board (MK086 or MK096) contained two i860s with up to 32 MB of RAM each, and two T800s providing inter-processor communication. Sometimes known as the Concerto or simply the i860 Computing Surface, these systems had limited success. Meiko also produced a SPARC processor board, the MK083, which allowed the integration of theSunOSoperating system into the Computing Surface architecture, similarly to the In-Sun Computing Surface. 
These were usually used as front-end host processors for transputer or i860 Computing Surfaces. SVCS, or an improved version called simply VCS, was used to manage the transputer resources. Computing Surface configurations with multiple MK083 boards were also possible. A major drawback of the Computing Surface architecture was poor I/O bandwidth for general data shuffling. Although the aggregate bandwidth for special-case data shuffling could be very high, the general case had very poor performance relative to the compute bandwidth. This made the Meiko Computing Surface uneconomic for many applications.

MeikOS (also written as Meikos or MEiKOS) is a Unix-like transputer operating system developed for the Computing Surface during the late 1980s. MeikOS was derived from an early version of Minix, extensively modified for the Computing Surface architecture. Unlike HeliOS, another Unix-like transputer operating system, MeikOS is essentially a single-processor operating system with a distributed file system. MeikOS was intended for use with the Meiko Multiple Virtual Computing Surfaces (M²VCS) resource management software, which partitions the processors of a Computing Surface into domains, manages user access to these domains, and provides inter-domain communication. MeikOS has diskless and fileserver variants, the former running on the seat processor of an M²VCS domain, providing a command-line user interface for a given user; the latter running on processors with attached SCSI hard disks, providing a remote file service (named Surface File System (SFS)) to instances of diskless MeikOS. The two can communicate via M²VCS. MeikOS was made obsolete by the introduction of the In-Sun Computing Surface and the Meiko MK083 SPARC processor board, which allowed SunOS and Sun Virtual Computing Surfaces (SVCS), later developed as VCS, to take over the roles of MeikOS and M²VCS respectively. The last MeikOS release was MeikOS 3.06, in early 1991.

The original Computing Surface interconnect was based on the transputer link protocol. Meiko developed its own switch silicon on a European Silicon Systems (ES2) gate array. This application-specific integrated circuit (ASIC) provided static connectivity and limited dynamic connectivity, and was designed by Moray McLaren.

The CS-2[4][5][6] was launched in 1993 and was Meiko's second-generation system architecture, superseding the earlier Computing Surface. The CS-2 was an all-new modular architecture based around SuperSPARC or hyperSPARC processors[7] and, optionally, Fujitsu μVP vector processors.[8] These implemented an instruction set similar to the Fujitsu VP2000 vector supercomputer and had a nominal performance of 200 megaflops on double precision arithmetic and double that on single precision. The SuperSPARC processors ran at 40 MHz initially, later increased to 50 MHz. Subsequently, hyperSPARC processors were introduced at 66, 90 or 100 MHz. The CS-2 was intended to scale up to 1024 processors. The largest CS-2 system built was a 224-processor system[9] installed at Lawrence Livermore National Laboratory. The CS-2 ran a customized version of Sun's operating system Solaris, initially Solaris 2.1, later 2.3 and 2.5.1. The processors in a CS-2 were connected by a Meiko-designed multi-stage packet-switched fat tree network implemented in custom silicon.[10][11][12] This project, codenamed Elan-Elite, was started in 1990 as a speculative project to compete with the T9000 Transputer from Inmos, which Meiko intended to use as an interconnect technology.
The T9000 began to suffer massive delays, such that the internal project became the only viable interconnect choice for the CS-2. This interconnect comprised two devices, code-named Elan (adapter) and Elite (switch). Each processing element included an Elan chip, a communications co-processor based on the SPARC architecture, accessed via a Sun MBus cache-coherent interface and providing two 50 MB/s bi-directional links. The Elite chip was an 8-way link crossbar switch, used to form the packet-switched network. The switch had limited adaption based on load and priority.[13] Both ASICs were fabbed in complementary metal–oxide–semiconductor (CMOS) gate arrays by GEC Plessey in their Roborough, Plymouth semiconductor fab in 1993. After the Meiko technology was acquired by Quadrics, the Elan/Elite interconnect technology was developed into QsNet.

Meiko had hired Fred (Mark) Homewood and Moray McLaren, both of whom had been instrumental in the design of the T800. Together, they designed and developed an improved, higher-performance FPU core, owned by Meiko. This was initially targeted at the Intel 80387 instruction set. An ongoing legal battle between Intel, AMD and others over the 80387 made it clear this project was a commercial non-starter. A chance discussion between McLaren and Andy Bechtolsheim while visiting Sun Microsystems to discuss licensing Solaris caused Meiko to re-target the design for SPARC. Meiko was able to turn around the core FPU design in a short time and LSI Logic fabbed a device for the SPARCstation 1. A major difference over the T800 FPU was that it fully implemented the IEEE 754 standard for computer arithmetic. This included all rounding modes, denormalised numbers and square root in hardware, without taking any hardware exceptions to complete a computation. A SPARCstation 2 design was also developed, together with a combined part targeting the SPARCstation 2 ASIC pinout. LSI fabbed and manufactured the separate FPU L64814, as part of their SparKIT chipset.[14] The Meiko design was eventually fully licensed to Sun, which went on to use it in the MicroSPARC family of ASICs for several generations,[15] in return for a one-off payment and a full Solaris source license.
https://en.wikipedia.org/wiki/Meiko_Computing_Surface
NEC SXdescribes a series ofvectorsupercomputersdesigned, manufactured, and marketed byNEC. This computer series is notable for providing the first computer to exceed 1 gigaflop,[1][2]as well as the fastest supercomputer in the world between 1992–1993, and 2002–2004.[3]The current model, as of 2018, is theSX-Aurora TSUBASA. The first models, the SX-1 and SX-2, were announced in April 1983, and released in 1985.[2][4][5][6]The SX-2 was the first computer to exceed 1gigaflop.[1][2]The SX-1 and SX-1E were less powerful models offered by NEC. The SX-3 was announced in 1989,[7][8]and shipped in 1990.[6]The SX-3 allows parallel computing using bothSIMDandMIMD.[9]It also switched from theACOS-4based SX-OS, to theAT&T System V UNIX-basedSUPER-UXoperating system.[6]In 1992 an improved variant, the SX-3R, was announced.[6]A SX-3/44 variant was the fastest computer in the world between 1992-1993 on theTOP500list. It had LSI integrated circuits with 20,000 gates per IC with a per-gate delay time of 70 picoseconds, could house 4 arithmetic processors with up to 4 sharing the same main memory, and up to several processors to achieve up to 22 GFLOPS of performance, with 1.37 GFLOPS of performance with a single processor. 100 LSI ICs were housed in a single multi chip module to achieve 2 million gates per module. The modules were watercooled.[10] The SX-4 series was announced in 1994, and first shipped in 1995.[6]Since the SX-4, SX series supercomputers are constructed in a doubly parallel manner.[citation needed]A number ofcentral processing units(CPUs) are arranged into aparallelvector processingnode.[citation needed]These nodes are then installed in a regularSMParrangement.[citation needed] The SX-5 was announced and shipped in 1998,[6]with the SX-6 following in 2001, and the SX-7 in 2002.[11]Starting in 2001,Craymarketed the SX-5 and SX-6 exclusively in the US, and non-exclusively elsewhere for a short time.[citation needed] TheEarth Simulator, built fromSX-6nodes, was the fastest supercomputer from June 2002 to June 2004 on theLINPACK benchmark, achieving 35.86TFLOPS.[3][12][13][14]The SX-9 was introduced in 2007 and discontinued in 2015.[15] Tadashi Watanabehas been NEC's lead designer for the majority of SX supercomputer systems.[16]For this work he received theEckert–Mauchly Awardin 1998 and theSeymour Cray Computer Engineering Awardin 2006.[citation needed] The NEC SX Vector Engine (VE) is avector processor, and eachVE corehas a Scalar Processing Unit (SPU) with 64 scalar registers of 64 bits, and a Vector Processing Unit (VPU) with 64 vector registers (of up to 256 bits in theSX-Aurora TSUBASA). The SPU implements in hardware theIEEE 754'squadruple-precision floating-point format, and everyinstructionis 64-bit long.[17] Each system has multiple models, and the following table lists the most powerful variant of each system. Further certain systems have revisions, identified by a letter suffix. The SX-1 and SX-2 ran theACOS-4based SX-OS. The SX-3 onwards run theSUPER-UXoperating system(OS); the Earth Simulator runs a custom version of this OS. SUPER-UX comes withFortranandC++compilers. Cray has also developed anAdacompiler which is available as an option. Somevertical applicationsare available through NEC, but in general customers are expected to develop much of their own software. In addition to commercial applications, there is a wide body offree softwarefor the UNIX environment which can be compiled and run on SUPER-UX, such asEmacs, andVim. A port ofGCCis also available for the platform. 
The SX-Aurora TSUBASA PCIe card is running in aLinuxmachine, the Vector Host (VH), which provides operating system services to the Vector Engine (VE).[20]The VE operating system VEOS runs in user space on the VH. Applications compiled for the VE can use almost all Linux system calls, they are transparently forwarded and executed on the VH. The components of VEOS are licensed under theGNU General Public License.
https://en.wikipedia.org/wiki/SX_architecture
Duncan's taxonomy is a classification of computer architectures, proposed by Ralph Duncan in 1990.[1] Duncan suggested modifications to Flynn's taxonomy[2] to include pipelined vector processors.[3] The taxonomy was developed during 1988–1990 and was first published in 1990. Its original categories are indicated below.

This category includes all the parallel architectures that coordinate concurrent execution in lockstep fashion and do so via mechanisms such as global clocks, central control units or vector unit controllers. Further subdivision of this category is made primarily on the basis of the synchronization mechanism.[1] Pipelined vector processors are characterized by pipelined functional units that accept a sequential stream of array or vector elements, such that different stages in a filled pipeline are processing different elements of the vector at a given time.[4] Parallelism is provided both by the pipelining in the individual functional units described above and by operating multiple units of this kind in parallel and by chaining the output of one unit into another unit as input.[4] Vector architectures that stream vector elements into functional units from special vector registers are termed register-to-register architectures, while those that feed functional units from special memory buffers are designated as memory-to-memory architectures.[1] The Cray-1[5] and the Fujitsu VP-200 are examples of register-to-register architectures, while the Control Data Corporation STAR-100, CDC 205 and the Texas Instruments Advanced Scientific Computer are early examples of memory-to-memory vector architectures.[6] The late 1980s and early 1990s saw the introduction of vector architectures, such as the Cray Y-MP/4 and Nippon Electric Corporation SX-3, that supported 4–10 vector processors with a shared memory (see NEC SX architecture). RISC-V RVV may mark the beginning of a modern revival of vector processing.[speculation?]

This scheme uses the SIMD (single instruction stream, multiple data stream) category from Flynn's taxonomy as a root class for processor array and associative memory subclasses. SIMD architectures[7] are characterized by having a control unit broadcast a common instruction to all processing elements, which execute that instruction in lockstep on diverse operands from local data. Common features include the ability for individual processors to disable an instruction and the ability to propagate instruction results to immediate neighbors over an interconnection network. Systolic arrays, proposed during the 1980s,[8] are multiprocessors in which data and partial results are rhythmically pumped from processor to processor through a regular, local interconnection network.[1] Systolic architectures use a global clock and explicit timing delays to synchronize data flow from processor to processor.[1] Each processor in a systolic system executes an invariant sequence of instructions before data and results are pulsed to neighboring processors.[8]

Based on Flynn's multiple-instruction-multiple-data streams terminology, this category spans a wide spectrum of architectures in which processors execute multiple instruction sequences on (potentially) dissimilar data streams without strict synchronization. Although both instruction and data streams can be different for each processor, they need not be.
Thus, MIMD architectures can run identical programs that are in various stages of execution at any given time, run unique instruction and data streams on each processor, or execute a combination of these scenarios. This category is subdivided further, primarily on the basis of memory organization.[1] The MIMD-based paradigms category subsumes systems in which a specific programming or execution paradigm is at least as fundamental to the architectural design as structural considerations are. Thus, the design of dataflow architectures and reduction machines is as much the product of supporting their distinctive execution paradigms as it is a product of connecting processors and memories in MIMD fashion. The category's subdivisions are defined by these paradigms.[1]
https://en.wikipedia.org/wiki/Duncan%27s_taxonomy#Pipelined_vector_processors
In computing, a compute kernel is a routine compiled for high-throughput accelerators (such as graphics processing units (GPUs), digital signal processors (DSPs) or field-programmable gate arrays (FPGAs)), separate from but used by a main program (typically running on a central processing unit). They are sometimes called compute shaders, sharing execution units with vertex shaders and pixel shaders on GPUs, but are not limited to execution on one class of device or graphics API.[1][2] Compute kernels roughly correspond to inner loops when implementing algorithms in traditional languages (except there is no implied sequential operation), or to code passed to internal iterators. They may be specified by a separate programming language such as "OpenCL C" (managed by the OpenCL API), as "compute shaders" written in a shading language (managed by a graphics API such as OpenGL), or embedded directly in application code written in a high-level language, as in the case of C++ AMP. Microsoft supports this as DirectCompute.

This programming paradigm maps well to vector processors: there is an assumption that each invocation of a kernel within a batch is independent, allowing for data-parallel execution. However, atomic operations may sometimes be used for synchronization between elements (for interdependent work) in some scenarios. Individual invocations are given indices (in one or more dimensions) from which arbitrary addressing of buffer data may be performed (including scatter-gather operations), so long as the non-overlapping assumption is respected.

The Vulkan API provides the intermediate SPIR-V representation to describe both graphical shaders and compute kernels in a language-independent and machine-independent manner. The intention is to facilitate language evolution and provide a more natural ability to leverage GPU compute capabilities, in line with hardware developments such as Unified Memory Architecture and Heterogeneous System Architecture. This allows closer cooperation between a CPU and GPU.

Much work has been done in the field of kernel generation through LLMs as a means of optimizing code. KernelBench,[3] created by the Scaling Intelligence Lab at Stanford, provides a framework to evaluate the ability of LLMs to generate efficient GPU kernels. Cognition has created Kevin 32-B[4] to create efficient CUDA kernels; it is currently the highest-performing model on KernelBench.
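To make the per-invocation model concrete, the following is a minimal OpenCL C-style kernel that adds two buffers element by element; each invocation handles the single element selected by its global index. The kernel name and arguments are illustrative only.

    /* Minimal OpenCL C-style compute kernel: one invocation per element. */
    __kernel void vec_add(__global const float *a,
                          __global const float *b,
                          __global float *out)
    {
        size_t i = get_global_id(0);   /* index of this invocation in the batch */
        out[i] = a[i] + b[i];          /* invocations are independent (data parallel) */
    }

A host program (not shown) would compile this source through the OpenCL API, bind the three buffers, and enqueue one invocation per array element.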
https://en.wikipedia.org/wiki/Compute_kernel
Incomputer science,stream processing(also known asevent stream processing,data stream processing, ordistributed stream processing) is aprogramming paradigmwhich viewsstreams, or sequences of events in time, as the central input and output objects ofcomputation. Stream processing encompassesdataflow programming,reactive programming, anddistributeddata processing.[1]Stream processing systems aim to exposeparallel processingfor data streams and rely onstreaming algorithmsfor efficient implementation. Thesoftware stackfor these systems includes components such asprogramming modelsandquery languages, for expressing computation;stream management systems, for distribution andscheduling; and hardware components foraccelerationincludingfloating-point units,graphics processing units, andfield-programmable gate arrays.[2] The stream processing paradigm simplifies parallel software and hardware by restricting the parallel computation that can be performed. Given a sequence of data (astream), a series of operations (kernel functions) is applied to each element in the stream. Kernel functions are usuallypipelined, and optimal local on-chip memory reuse is attempted, in order to minimize the loss in bandwidth, associated with external memory interaction.Uniform streaming, where one kernel function is applied to all elements in the stream, is typical. Since the kernel and stream abstractions expose data dependencies, compiler tools can fully automate and optimize on-chip management tasks. Stream processing hardware can usescoreboarding, for example, to initiate adirect memory access(DMA) when dependencies become known. The elimination of manual DMA management reduces software complexity, and an associated elimination for hardware cached I/O, reduces the data area expanse that has to be involved with service by specialized computational units such asarithmetic logic units. During the 1980s stream processing was explored withindataflow programming. An example is the languageSISAL(Streams and Iteration in a Single Assignment Language). Stream processing is essentially a compromise, driven by a data-centric model that works very well for traditional DSP or GPU-type applications (such as image, video anddigital signal processing) but less so for general purpose processing with more randomized data access (such as databases). By sacrificing some flexibility in the model, the implications allow easier, faster and more efficient execution. Depending on the context,processordesign may be tuned for maximum efficiency or a trade-off for flexibility. Stream processing is especially suitable for applications that exhibit three application characteristics:[citation needed] Examples of records within streams include: For each record we can only read from the input, perform operations on it, and write to the output. It is permissible to have multiple inputs and multiple outputs, but never a piece of memory that is both readable and writable. By way of illustration, the following code fragments demonstrate detection of patterns within event streams. The first is an example of processing a data stream using a continuousSQLquery (a query that executes forever processing arriving data based on timestamps and window duration). This code fragment illustrates a JOIN of two data streams, one for stock orders, and one for the resulting stock trades. The query outputs a stream of all Orders matched by a Trade within one second of the Order being placed. 
The output stream is sorted by timestamp, in this case, the timestamp from the Orders stream. Another sample code fragment detects weddings among a flow of external "events" such as church bells ringing, the appearance of a man in a tuxedo or morning suit, a woman in a flowing white gown and rice flying through the air. A "complex" or "composite" event is what one infers from the individual simple events: a wedding is happening. Basic computers started from a sequential execution paradigm. TraditionalCPUsareSISDbased, which means they conceptually perform only one operation at a time. As the computing needs of the world evolved, the amount of data to be managed increased very quickly. It was obvious that the sequential programming model could not cope with the increased need for processing power. Various efforts have been spent on finding alternative ways to perform massive amounts of computations but the only solution was to exploit some level of parallel execution. The result of those efforts wasSIMD, a programming paradigm which allowed applying one instruction to multiple instances of (different) data. Most of the time, SIMD was being used in aSWARenvironment. By using more complicated structures, one could also haveMIMDparallelism. Although those two paradigms were efficient, real-world implementations were plagued with limitations from memory alignment problems to synchronization issues and limited parallelism. Only few SIMD processors survived as stand-alone components; most were embedded in standard CPUs. Consider a simple program adding up two arrays containing 100 4-componentvectors(i.e. 400 numbers in total). This is the sequential paradigm that is most familiar. Variations do exist (such as inner loops, structures and such), but they ultimately boil down to that construct. This is actually oversimplified. It assumes the instructionvector_sumworks. Although this is what happens withinstruction intrinsics, much information is actually not taken into account here such as the number of vector components and their data format. This is done for clarity. You can see however, this method reduces the number of decoded instructions fromnumElements * componentsPerElementtonumElements. The number of jump instructions is also decreased, as the loop is run fewer times. These gains result from the parallel execution of the four mathematical operations. What happened however is that the packed SIMD register holds a certain amount of data so it's not possible to get more parallelism. The speed up is somewhat limited by the assumption we made of performing four parallel operations (please note this is common for bothAltiVecandSSE). In this paradigm, the whole dataset is defined, rather than each component block being defined separately. Describing the set of data is assumed to be in the first two rows. After that, the result is inferred from the sources and kernel. For simplicity, there's a 1:1 mapping between input and output data but this does not need to be. Applied kernels can also be much more complex. An implementation of this paradigm can "unroll" a loop internally. This allows throughput to scale with chip complexity, easily utilizing hundreds of ALUs.[3][4]The elimination of complex data patterns makes much of this extra power available. While stream processing is a branch of SIMD/MIMD processing, they must not be confused. 
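To make the contrast concrete, here is a sketch in C of the formulations described above, using the 100 four-component vectors of the example; the packed vector_sum instruction and the apply_kernel call are the hypothetical operations assumed in the text and are shown only in comments.

    #define N 100   /* 100 four-component vectors, i.e. 400 numbers in total */

    /* Sequential paradigm: one pair of operands at a time. */
    void add_sequential(float result[N][4], float source0[N][4], float source1[N][4])
    {
        for (int i = 0; i < N; i++)
            for (int j = 0; j < 4; j++)
                result[i][j] = source0[i][j] + source1[i][j];
    }

    /* SIMD paradigm (schematic): with a packed vector_sum instruction, as assumed
     * in the text, the inner loop collapses to one operation per element:
     *
     *     for (int i = 0; i < N; i++)
     *         vector_sum(result[i], source0[i], source1[i]);
     *
     * Stream paradigm (schematic): the whole dataset and the kernel are described
     * once, and the runtime applies the kernel across every element of the stream:
     *
     *     result = apply_kernel(add4, source0, source1);
     */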
Although SIMD implementations can often work in a "streaming" manner, their performance is not comparable: the model envisions a very different usage pattern which allows far greater performance by itself. It has been noted that when applied on generic processors such as standard CPUs, only a 1.5x speedup can be reached.[5] By contrast, ad-hoc stream processors easily reach over 10x performance, mainly attributed to the more efficient memory access and higher levels of parallel processing.[6]

Although there are various degrees of flexibility allowed by the model, stream processors usually impose some limitations on the kernel or stream size. For example, consumer hardware often lacks the ability to perform high-precision math, lacks complex indirection chains or presents lower limits on the number of instructions which can be executed.

Stanford University stream processing projects included the Stanford Real-Time Programmable Shading Project started in 1999.[7] A prototype called Imagine was developed in 2002.[8] A project called Merrimac ran until about 2004.[9] AT&T also researched stream-enhanced processors as graphics processing units rapidly evolved in both speed and functionality.[1] Since these early days, dozens of stream processing languages have been developed, as well as specialized hardware.

The most immediate challenge in the realm of parallel processing does not lie as much in the type of hardware architecture used, but in how easy it will be to program the system in question in a real-world environment with acceptable performance. Machines like Imagine use a straightforward single-threaded model with automated dependencies, memory allocation and DMA scheduling. This in itself is a result of the research at MIT and Stanford in finding an optimal layering of tasks between programmer, tools and hardware. Programmers beat tools in mapping algorithms to parallel hardware, and tools beat programmers in figuring out the smartest memory allocation schemes, etc. Of particular concern are MIMD designs such as Cell, for which the programmer needs to deal with application partitioning across multiple cores and deal with process synchronization and load balancing.

A drawback of SIMD programming was the issue of array-of-structures (AoS) versus structure-of-arrays (SoA) layouts. Programmers often create representations of entities in memory, for example the location of a particle in 3D space, the colour of the ball and its size, as in the sketch below. When multiple of these structures exist in memory they are placed end to end, creating an array in an array-of-structures (AoS) topology. This means that should some algorithm be applied to the location of each particle in turn, it must skip over the memory locations holding the other attributes. If these attributes are not needed, this results in wasteful usage of the CPU cache. Additionally, a SIMD instruction will typically expect the data it will operate on to be contiguous in memory; the elements may also need to be aligned. By moving the memory location of the data out of the structure, the data can be better organised for efficient access in a stream and for SIMD instructions to operate on. A structure of arrays (SoA), also shown below, can allow this. Instead of holding the data in the structure, it holds only pointers (memory locations) for the data. Shortcomings are that if multiple attributes of an object are to be operated on, they might now be distant in memory and so result in a cache miss. The aligning and any needed padding lead to increased memory usage.
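A minimal C sketch of the two layouts just described; the particle attributes follow the text, while the struct names are illustrative.

    /* Array of structures (AoS): all attributes of one particle are adjacent,
       so a loop that touches only the positions strides over colour and size. */
    struct particle {
        float x, y, z;          /* location in 3D space */
        unsigned char color[3]; /* colour               */
        float size;             /* size                 */
    };
    struct particle particles_aos[1000];

    /* Structure of arrays (SoA): the structure holds only pointers, and each
       attribute lives in its own contiguous (and alignable) array, which suits
       streaming access and SIMD instructions. */
    struct particles_soa {
        float *x, *y, *z;
        unsigned char *color;
        float *size;
    };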
Overall, memory management may be more complicated if structures are added and removed, for example. For stream processors, the usage of structures is encouraged. From an application point of view, all the attributes can be defined with some flexibility. Taking GPUs as reference, there is a set of attributes (at least 16) available. For each attribute, the application can state the number of components and the format of the components (but only primitive data types are supported for now). The various attributes are then attached to a memory block, possibly defining a stride between 'consecutive' elements of the same attribute, effectively allowing interleaved data. When the GPU begins the stream processing, it will gather all the various attributes into a single set of parameters (usually this looks like a structure or a "magic global variable"), perform the operations and scatter the results to some memory area for later processing (or retrieving).

More modern stream processing frameworks provide a FIFO-like interface to structure data as a literal stream. This abstraction provides a means to specify data dependencies implicitly while enabling the runtime/hardware to take full advantage of that knowledge for efficient computation. One of the simplest[citation needed] and most efficient[citation needed] stream processing modalities to date for C++ is RaftLib, which enables linking independent compute kernels together as a data flow graph using C++ stream operators. Apart from specifying streaming applications in high-level languages, models of computation (MoCs) have also been widely used, such as dataflow models and process-based models.

Historically, CPUs began implementing various tiers of memory access optimizations because of their ever-increasing performance compared with the relatively slowly growing external memory bandwidth. As this gap widened, large amounts of die area were dedicated to hiding memory latencies. Since fetching information and opcodes to those few ALUs is expensive, very little die area is dedicated to actual mathematical machinery (as a rough estimation, consider it to be less than 10%). A similar architecture exists on stream processors, but thanks to the new programming model, the amount of transistors dedicated to management is actually very little.

Beginning from a whole-system point of view, stream processors usually exist in a controlled environment. GPUs do exist on an add-in board (this seems to also apply to Imagine). CPUs continue to do the job of managing system resources, running applications, and such. The stream processor is usually equipped with a fast, efficient, proprietary memory bus (crossbar switches are now common, multi-buses have been employed in the past). The exact number of memory lanes depends on the market range. As this is written, there are still 64-bit wide interconnections around (entry-level). Most mid-range models use a fast 128-bit crossbar switch matrix (4 or 2 segments), while high-end models deploy huge amounts of memory (actually up to 512 MB) with a slightly slower crossbar that is 256 bits wide. By contrast, standard processors from the Intel Pentium to some Athlon 64 models have only a single 64-bit wide data bus.

Memory access patterns are much more predictable. While arrays do exist, their dimension is fixed at kernel invocation. The thing which most closely matches a multiple pointer indirection is an indirection chain, which is however guaranteed to finally read or write from a specific memory area (inside a stream).
Because of the SIMD nature of the stream processor's execution units (ALUs clusters), read/write operations are expected to happen in bulk, so memories are optimized for high bandwidth rather than low latency (this is a difference fromRambusandDDR SDRAM, for example). This also allows for efficient memory bus negotiations. Most (90%) of a stream processor's work is done on-chip, requiring only 1% of the global data to be stored to memory. This is where knowing the kernel temporaries and dependencies pays. Internally, a stream processor features some clever communication and management circuits but what's interesting is theStream Register File(SRF). This is conceptually a large cache in which stream data is stored to be transferred to external memory in bulks. As a cache-like software-controlled structure to the variousALUs, the SRF is shared between all the various ALU clusters. The key concept and innovation here done with Stanford's Imagine chip is that the compiler is able to automate and allocate memory in an optimal way, fully transparent to the programmer. The dependencies between kernel functions and data is known through the programming model which enables the compiler to perform flow analysis and optimally pack the SRFs. Commonly, this cache and DMA management can take up the majority of a project's schedule, something the stream processor (or at least Imagine) totally automates. Tests done at Stanford showed that the compiler did an as well or better job at scheduling memory than if you hand tuned the thing with much effort. There is proof; there can be a lot of clusters because inter-cluster communication is assumed to be rare. Internally however, each cluster can efficiently exploit a much lower amount of ALUs because intra-cluster communication is common and thus needs to be highly efficient. To keep those ALUs fetched with data, each ALU is equipped with local register files (LRFs), which are basically its usable registers. This three-tiered data access pattern, makes it easy to keep temporary data away from slow memories, thus making the silicon implementation highly efficient and power-saving. Although an order of magnitude speedup can be reasonably expected (even from mainstream GPUs when computing in a streaming manner), not all applications benefit from this. Communication latencies are actually the biggest problem. AlthoughPCI Expressimproved this with full-duplex communications, getting a GPU (and possibly a generic stream processor) to work will possibly take long amounts of time. This means it's usually counter-productive to use them for small datasets. Because changing the kernel is a rather expensive operation the stream architecture also incurs penalties for small streams, a behaviour referred to as theshort stream effect. Pipeliningis a very widespread and heavily used practice on stream processors, with GPUs featuring pipelines exceeding 200 stages. The cost for switching settings is dependent on the setting being modified but it is now considered to always be expensive. To avoid those problems at various levels of the pipeline, many techniques have been deployed such as "über shaders" and "texture atlases". Those techniques are game-oriented because of the nature of GPUs, but the concepts are interesting for generic stream processing as well. Most programming languages for stream processors start with Java, C or C++ and add extensions which provide specific instructions to allow application developers to tag kernels and/or streams. 
This also applies to mostshading languages, which can be considered stream programming languages to a certain degree. Non-commercial examples of stream programming languages include: Commercial implementations are either general purpose or tied to specific hardware by a vendor. Examples of general purpose languages include: Vendor-specific languages include: Event-Based Processing Batch file-based processing (emulates some of actual stream processing, but much lower performance in general[clarification needed][citation needed]) Continuous operator stream processing[clarification needed] Stream processing services:
https://en.wikipedia.org/wiki/Stream_processing
Automatic vectorization, inparallel computing, is a special case ofautomatic parallelization, where acomputer programis converted from ascalarimplementation, which processes a single pair ofoperandsat a time, to avectorimplementation, which processes one operation on multiple pairs of operands at once. For example, modern conventional computers, including specializedsupercomputers, typically havevector operationsthat simultaneously perform operations such as the following four additions (viaSIMDorSPMDhardware): However, in mostprogramming languagesone typically writes loops that sequentially perform additions of many numbers. Here is an example of such a loop, written inC: A vectorizingcompilertransforms such loops into sequences of vector operations. These vector operations perform additions on blocks of elements from the arraysa,bandc. Automatic vectorization is a major research topic in computer science.[citation needed] Early computers usually had one logic unit, which executed one instruction on one pair of operands at a time.Computer languagesand programs therefore were designed to execute in sequence. Modern computers, though, can do many things at once. So, many optimizing compilers perform automatic vectorization, where parts of sequential programs are transformed into parallel operations. Loop vectorizationtransforms procedural loops by assigning a processing unit to each pair of operands. Programs spend most of their time within such loops. Therefore, vectorization can significantly accelerate them, especially over large data sets. Loop vectorization is implemented inIntel'sMMX,SSE, andAVX, inPower ISA'sAltiVec, inARM'sNEON,SVEand SVE2, and inRISC-V'sVector Extensioninstruction sets. Many constraints prevent or hinder vectorization. Sometimes vectorization can slow down execution, for example because ofpipelinesynchronization or data-movement timing.Loop dependence analysisidentifies loops that can be vectorized, relying on thedata dependenceof the instructions inside loops. Automatic vectorization, like anyloop optimizationor other compile-time optimization, must exactly preserve program behavior. All dependencies must be respected during execution to prevent incorrect results. In general, loop invariant dependencies andlexically forward dependenciescan be easily vectorized, and lexically backward dependencies can be transformed into lexically forward dependencies. However, these transformations must be done safely, in order to ensure that the dependence betweenall statementsremain true to the original. Cyclic dependencies must be processed independently of the vectorized instructions. Integerprecision(bit-size) must be kept during vector instruction execution. The correct vector instruction must be chosen based on the size and behavior of the internal integers. Also, with mixed integer types, extra care must be taken to promote/demote them correctly without losing precision. Special care must be taken withsign extension(because multiple integers are packed inside the same register) and during shift operations, or operations withcarry bitsthat would otherwise be taken into account. Floating-pointprecision must be kept as well, unlessIEEE-754compliance is turned off, in which case operations will be faster but the results may vary slightly. Big variations, even ignoring IEEE-754 usually signify programmer error. To vectorize a program, the compiler's optimizer must first understand the dependencies between statements and re-align them, if necessary. 
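The addition loop referred to at the start of this article, and the block-wise form a vectorizing compiler aims for, might look like the following sketch in C; the array names a, b and c follow the text, and the four-wide slice notation is only illustrative pseudo-code.

    /* Scalar loop: one addition per iteration. */
    void add_arrays(int n, const float *a, const float *b, float *c)
    {
        for (int i = 0; i < n; i++)
            c[i] = a[i] + b[i];
    }

    /* A vectorizing compiler conceptually rewrites the loop so that each vector
     * instruction adds a whole block of elements, e.g. four at a time (using the
     * slice notation that also appears later in this article):
     *
     *     for (i = 0; i + 3 < n; i += 4)
     *         c[i:i+3] = a[i:i+3] + b[i:i+3];   // one vector addition
     *     // any remaining 0-3 elements are handled by a scalar epilogue
     */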
Once the dependencies are mapped, the optimizer must properly arrange the implementing instructions changing appropriate candidates to vector instructions, which operate on multiple data items. The first step is to build thedependency graph, identifying which statements depend on which other statements. This involves examining each statement and identifying every data item that the statement accesses, mapping array access modifiers to functions and checking every access' dependency to all others in all statements.Alias analysiscan be used to certify that the different variables access (or intersect) the same region in memory. The dependency graph contains all local dependencies with distance not greater than the vector size. So, if the vector register is 128 bits, and the array type is 32 bits, the vector size is 128/32 = 4. All other non-cyclic dependencies should not invalidate vectorization, since there won't be any concurrent access in the same vector instruction. Suppose the vector size is the same as 4 ints: Using the graph, the optimizer can then cluster thestrongly connected components(SCC) and separate vectorizable statements from the rest. For example, consider a program fragment containing three statement groups inside a loop: (SCC1+SCC2), SCC3 and SCC4, in that order, in which only the second group (SCC3) can be vectorized. The final program will then contain three loops, one for each group, with only the middle one vectorized. The optimizer cannot join the first with the last without violating statement execution order, which would invalidate the necessary guarantees. Some non-obvious dependencies can be further optimized based on specific idioms. For instance, the following self-data-dependencies can be vectorized because the value of the right-hand values (RHS) are fetched and then stored on the left-hand value, so there is no way the data will change within the assignment. Self-dependence by scalars can be vectorized byvariable elimination. The general framework for loop vectorization is split into four stages: Some vectorizations cannot be fully checked at compile time. For example, library functions can defeat optimization if the data they process is supplied by the caller. Even in these cases, run-time optimization can still vectorize loops on-the-fly. This run-time check is made in thepreludestage and directs the flow to vectorized instructions if possible, otherwise reverts to standard processing, depending on the variables that are being passed on the registers or scalar variables. The following code can easily be vectorized at compile time, as it doesn't have any dependence on external parameters. Also, the language guarantees that neither will occupy the same region in memory as any other variable, as they are local variables and live only in the executionstack. On the other hand, the code below has no information on memory positions, because the references arepointersand the memory they point to may overlap. A quick run-time check on theaddressof bothaandb, plus the loop iteration space (128) is enough to tell if the arrays overlap or not, thus revealing any dependencies. (Note that from C99, qualifying the parameters with therestrictkeyword—here:int *restrict a, int *restrict b)—tells the compiler that the memory ranges pointed to byaandbdo not overlap, leading to the same outcome as the example above.) 
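A hedged C sketch of the cases just described: a loop over local arrays that the compiler can prove non-overlapping, pointer parameters that need a run-time address check, and the C99 restrict variant. The 128-element size follows the text, while the function names are illustrative.

    /* Local arrays cannot alias anything else, so this loop can be analyzed
       and vectorized entirely at compile time. */
    void sum_local(void)
    {
        int a[128] = {0}, b[128] = {0};
        for (int i = 0; i < 128; i++)
            a[i] = a[i] + b[i];
    }

    /* Pointer parameters may overlap, so a run-time check on the addresses
       chooses between the vectorized path and a safe scalar fallback. */
    void sum_ptr(int *a, int *b)
    {
        if (a + 128 <= b || b + 128 <= a) {
            for (int i = 0; i < 128; i++)   /* no overlap: vectorizable */
                a[i] = a[i] + b[i];
        } else {
            for (int i = 0; i < 128; i++)   /* possible overlap: keep scalar order */
                a[i] = a[i] + b[i];
        }
    }

    /* With C99 restrict, the programmer asserts the absence of overlap up front,
       so no run-time check is needed. */
    void sum_restrict(int *restrict a, int *restrict b)
    {
        for (int i = 0; i < 128; i++)
            a[i] = a[i] + b[i];
    }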
There exist some tools to dynamically analyze existing applications to assess the inherent latent potential for SIMD parallelism, exploitable through further compiler advances and/or via manual code changes.[1] An example would be a program to multiply two vectors of numeric data. A scalar approach would be something like: This could be vectorized to look something like: Here, c[i:i+3] represents the four array elements from c[i] to c[i+3] and the vector processor can perform four operations for a single vector instruction. Since the four vector operations complete in roughly the same time as one scalar instruction, the vector approach can run up to four times faster than the original code. There are two distinct compiler approaches: one based on the conventional vectorization technique and the other based onloop unrolling. This technique, used for conventional vector machines, tries to find and exploit SIMD parallelism at the loop level. It consists of two major steps as follows. In the first step, the compiler looks for obstacles that can prevent vectorization. A major obstacle for vectorization istrue data dependencyshorter than the vector length. Other obstacles include function calls and short iteration counts. Once the loop is determined to be vectorizable, the loop is stripmined by the vector length and each scalar instruction within the loop body is replaced with the corresponding vector instruction. Below, the component transformations for this step are shown using the above example. This relatively new technique specifically targets modern SIMD architectures with short vector lengths.[2]Although loops can be unrolled to increase the amount of SIMD parallelism in basic blocks, this technique exploits SIMD parallelism within basic blocks rather than loops. The two major steps are as follows. To show step-by-step transformations for this approach, the same example is used again. Here, sA1, sB1, ... represent scalar variables and vA, vB, and vC represent vector variables. Most automatically vectorizing commercial compilers use the conventional loop-level approach except the IBM XL Compiler,[3][obsolete source]which uses both. The presence of if-statements in the loop body requires the execution of instructions in all control paths to merge the multiple values of a variable. One general approach is to go through a sequence of code transformations: predication → vectorization(using one of the above methods) → remove vector predicates → remove scalar predicates.[4]If the following code is used as an example to show these transformations; where (P) denotes a predicate guarding the statement. Having to execute the instructions in all control paths in vector code has been one of the major factors that slow down the vector code with respect to the scalar baseline. The more complex the control flow becomes and the more instructions are bypassed in the scalar code, the larger the vectorization overhead becomes. To reduce this vectorization overhead, vector branches can be inserted to bypass vector instructions similar to the way scalar branches bypass scalar instructions.[5]Below, AltiVec predicates are used to show how this can be achieved. There are two things to note in the final code with vector branches; First, the predicate defining instruction for vPA is also included within the body of the outer vector branch by using vec_any_gt. 
Second, the profitability of the inner vector branch for vPB depends on the conditional probability of vPB having false values in all fields given vPA has false values in all fields. Consider an example where the outer branch in the scalar baseline is always taken, bypassing most instructions in the loop body. The intermediate case above, without vector branches, executes all vector instructions. The final code, with vector branches, executes both the comparison and the branch in vector mode, potentially gaining performance over the scalar baseline. In mostCandC++compilers, it is possible to useintrinsic functionsto manually vectorise, at the expense of programmer effort and maintainability.
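As a hedged illustration of manual vectorization with intrinsics (a generic sketch, not tied to any particular compiler's auto-vectorizer), the following C function uses SSE intrinsics from <immintrin.h> to add four single-precision floats per instruction; it assumes the element count is a multiple of four, so a real implementation would also need a scalar tail loop.

    #include <immintrin.h>

    /* Manual vectorization: four float additions per SSE instruction. */
    void add_sse(float *c, const float *a, const float *b, int n)
    {
        for (int i = 0; i < n; i += 4) {              /* n assumed divisible by 4 */
            __m128 va = _mm_loadu_ps(a + i);          /* load four floats from a  */
            __m128 vb = _mm_loadu_ps(b + i);          /* load four floats from b  */
            _mm_storeu_ps(c + i, _mm_add_ps(va, vb)); /* add and store the block  */
        }
    }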
https://en.wikipedia.org/wiki/Automatic_vectorization
In computing, chaining is a technique used in computer architecture in which scalar and vector registers generate interim results that can be used immediately, without the additional memory references that would otherwise reduce computational speed.[1] The chaining technique was first used by Seymour Cray in the 80 MHz Cray-1 supercomputer in 1976.[2]
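As a hedged illustration (a generic sketch, not code from the source), consider the multiply-add loop below. On a chaining vector machine such as the Cray-1, the vector multiply and the dependent vector add can overlap: each product is forwarded from the multiply unit straight into the add unit as it is produced, instead of waiting for the full product vector to be written back.

    /* Element-wise d = a*b + c. With chaining, the interim products feeding the
       vector add do not need an intermediate trip through memory; they stream
       directly into the addition pipeline. */
    void mul_add(long n, const double *a, const double *b,
                 const double *c, double *d)
    {
        for (long i = 0; i < n; i++)
            d[i] = a[i] * b[i] + c[i];
    }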
https://en.wikipedia.org/wiki/Chaining_(vector_processing)
Within computer engineering and computer science, a computer for operations with (mathematical) functions (unlike the usual computer) operates with functions at the hardware level (i.e. without programming these operations).[1][2][3] A computing machine for operations with functions was presented and developed by Mikhail Kartsev in 1967.[1] Among the operations of this computing machine were the addition, subtraction and multiplication of functions, the comparison of functions, the same operations between a function and a number, finding the maximum of a function, computing the indefinite integral, computing the definite integral of the derivative of two functions, the derivative of two functions, the shift of a function along the X-axis, etc. By its architecture this computing machine was (using the modern terminology) a vector processor or array processor, a central processing unit (CPU) that implements an instruction set containing instructions that operate on one-dimensional arrays of data called vectors. It exploited the fact that many of these operations may be interpreted as known operations on vectors: addition and subtraction of functions as addition and subtraction of vectors, computing the definite integral of the derivative of two functions as computing the vector product of two vectors, the shift of a function along the X-axis as vector rotation about axes, etc.[1] In 1966 Khmelnik proposed a function coding method,[2] i.e. the representation of a function by a "uniform" (for the function as a whole) positional code. The mentioned operations with functions are then performed as unique computer operations with such codes on a "single" arithmetic unit.[3]

Source:[2][3] The positional code of an integer number $A$ is a numeral notation of its digits $\alpha$ in a certain positional number system, of the form $A=\alpha_{n}\alpha_{n-1}\ldots\alpha_{1}\alpha_{0}$. Such a code may be called "linear". Unlike it, the positional code of a function $F(x)$ of one variable $x$ is flat and "triangular", since its digits $\alpha_{mk}$ comprise a triangle. The value of the positional number $A$ above is the sum $A=\sum_{k}\alpha_{k}\rho^{k}$, where $\rho$ is the radix of the number system. The positional code of a one-variable function corresponds to a "double" sum over its digits, where $R$ is a positive integer, the number of values that a digit $\alpha$ may take, and $y$ is a certain function of the argument $x$. Addition of positional codes of numbers is associated with carry transfer to a higher digit. Addition of positional codes of one-variable functions is likewise associated with carry transfer to higher digits, but here the same carry is transferred simultaneously to two higher digits. A triangular code is called R-nary (and is denoted $TK_{R}$) if the numbers $\alpha_{mk}$ take their values from the set $D_{R}$. For example, a triangular code is ternary, $TK_{3}$, if $\alpha_{mk}\in(-1,0,1)$, and quaternary, $TK_{4}$, if $\alpha_{mk}\in(-2,-1,0,1)$. For R-nary triangular codes certain equalities hold, where $a$ is an arbitrary number. There exists a $TK_{R}$ of an arbitrary integer real number; in particular, $TK_{R}(\alpha)=\alpha$. Also there exists a $TK_{R}$ of any function of the form $y^{k}$.
For instance, $TK_{R}(y^{2})=(0\ 0\ 1)$.

One-digit addition in R-nary triangular codes is performed digit by digit, with the carry transferred simultaneously to the two higher digits, as described above. This procedure is described (as for one-digit addition of numbers) by a table of one-digit addition, in which all the values of the terms $\alpha_{mk}\in D_{R}$ and $\beta_{mk}\in D_{R}$ must be present, together with all the values of the carries appearing at the decomposition of the sum $S_{mk}=\sigma_{mk}+Rp_{mk}$. Such a table may be synthesized for $R>2$; one can be written for the case $R=3$.

One-digit subtraction in R-nary triangular codes differs from one-digit addition only in that, in the given $(mk)$-digit, the value $S_{mk}$ is determined by the corresponding difference of digits rather than by their sum.

One-digit division by the parameter R in R-nary triangular codes is based on a relation from which it follows that dividing each digit produces carries into the two lower digits. Hence the resulting digit of this operation is the sum of the quotient from dividing the given digit by R and the two carries from the two higher digits. The procedure is described by a table of one-digit division by the parameter R, in which all the values of the terms and all the values of the carries appearing at the decomposition of the sum $S_{mk}=\sigma_{mk}+p_{mk}/R$ must be present. Such a table may be synthesized for $R>2$; one can be given for the case $R=3$.

Addition and subtraction of R-nary triangular codes consist (as in positional codes of numbers) of subsequently performed one-digit operations. Note that the one-digit operations in all digits of each column are performed simultaneously.

Multiplication of R-nary triangular codes: multiplication of a code $TK_{R}'$ by the $(mk)$-digit of another code $TK_{R}''$ consists in an $(mk)$-shift of the code $TK_{R}'$, i.e. its shift by k columns to the left and m rows up. Multiplication of the codes $TK_{R}'$ and $TK_{R}''$ consists in subsequent $(mk)$-shifts of the code $TK_{R}'$ and addition of the shifted code $TK_{R}'$ to the partial product (as in positional codes of numbers).

Derivation of R-nary triangular codes: the derivative of the function $F(x)$ defined above is $\frac{dF(x)}{dx}=\frac{\partial F(x)}{\partial y}\cdot\frac{\partial y}{\partial x}$. So the derivation of the triangular code of a function $F(x)$ consists in determining the triangular code of the partial derivative $\frac{\partial F(x)}{\partial y}$ and multiplying it by the known triangular code of the derivative $\frac{\partial y}{\partial x}$. The determination of the triangular code of the partial derivative $\frac{\partial F(x)}{\partial y}$ is based on a relation between neighbouring digits: the derivation method consists in organizing carries from the $(mk)$-digit into the $(m+1,k)$-digit and into the $(m-1,k)$-digit, and their summation in the given digit is performed in the same way as in one-digit addition.

Coding and decoding of R-nary triangular codes: a function represented by a series with integer coefficients $A_{k}$ may be represented by R-nary triangular codes, since these coefficients and the functions $y^{k}$ have R-nary triangular codes (as was mentioned at the beginning of the section).
On the other hand, an R-nary triangular code may be represented by such a series, as any term α_mk R^k y^{k−m} (1−y)^m in the positional expansion of the function (corresponding to this code) may be represented by a similar series.

Truncation of R-nary triangular codes. This is the name of the operation of reducing the number of non-zero columns. The necessity of truncation appears when carries emerge beyond the digit grid. Truncation consists in division by the parameter R: all coefficients of the series represented by the code are reduced by a factor of R, and the fractional parts of these coefficients are discarded. The first term of the series is also discarded. Such a reduction is acceptable if it is known that the series of functions converges. Truncation is performed as successive one-digit operations of division by the parameter R. The one-digit operations in all the digits of a row are performed simultaneously, and the carries from the lower row are discarded.

An R-nary triangular code is accompanied by a scale factor M, similar to the exponent of a floating-point number. The factor M makes it possible to represent all coefficients of the coded series as integers. The factor M is multiplied by R at each truncation of the code. For addition, the factors M are first aligned, for which one of the added codes must be truncated. For multiplication, the factors M are multiplied.

Source:[4] The positional code of a function of two variables is depicted in Figure 1. It corresponds to a "triple" sum of the form

F(x,v) = \sum_{k=0}^{n} \sum_{m_1=0}^{k} \sum_{m_2=0}^{k} \alpha_{m_1,m_2,k} R^{k} y^{k-m_1} (1-y)^{m_1} z^{k-m_2} (1-z)^{m_2},

where R is a positive integer, the number of values of the digit α_{m1,m2,k}, and y(x), z(v) are certain functions of the arguments x and v, correspondingly. In Figure 1 the nodes correspond to the digits α_{m1,m2,k}, and the circles show the values of the indexes m1, m2, k of the corresponding digit. The positional code of a function of two variables is called "pyramidal". A positional code is called R-nary (and is denoted PK_R) if the digits α_{m1,m2,k} assume values from the set D_R. At the addition of the codes PK_R the carry extends to four digits, and hence R ≥ 7.

A positional code of a function of several variables corresponds to a sum of the form

F(x_1, \ldots, x_a) = \sum_{k} \sum_{m_1} \cdots \sum_{m_a} \alpha_{m_1,\ldots,m_a,k} R^{k} \prod_{i=1}^{a} y_i^{k-m_i} (1-y_i)^{m_i},

where R is a positive integer, the number of values of the digit α_{m_1,…,m_a,k}, and y_i(x_i) are certain functions of the arguments x_i. A positional code of a function of several variables is called "hyperpyramidal". Figure 2 depicts, for example, a positional hyperpyramidal code of a function of three variables. In it the nodes correspond to the digits α_{m1,m2,m3,k}, and the circles contain the values of the indexes m1, m2, m3, k of the corresponding digit. A positional hyperpyramidal code is called R-nary (and is denoted GPK_R) if the digits α_{m_1,…,m_a,k} assume values from the set D_R.
At the addition of the codes GPK_R the carry extends over an a-dimensional cube containing 2^a digits, and hence R ≥ 2^{a+1} − 1.
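Returning to one-variable triangular codes, the (mk)-shift rule for multiplication described earlier can be checked in the same illustrative model. The sketch below uses the same dictionary representation and digit convention as the sketch above; the digits of the product are simply accumulated and not reduced back into the R-nary range, so it shows only why the shift rule is arithmetically consistent, not the machine's algorithm.

# Shift-and-add multiplication of triangular codes: the product of the digits
# at (m1, k1) and (m2, k2) is accumulated in digit (m1 + m2, k1 + k2).

def evaluate(code, R, y):
    return sum(a * R**k * y**(k - m) * (1 - y)**m for (m, k), a in code.items())

def multiply(code_a, code_b):
    product = {}
    for (m1, k1), a in code_a.items():
        for (m2, k2), b in code_b.items():
            key = (m1 + m2, k1 + k2)
            product[key] = product.get(key, 0) + a * b
    return product

R = 3
a = {(0, 0): 1, (1, 1): 2}       # example codes
b = {(0, 1): -1, (0, 2): 1}
p = multiply(a, b)
for y in (0.1, 0.5, 0.9):
    assert abs(evaluate(p, R, y) - evaluate(a, R, y) * evaluate(b, R, y)) < 1e-9
print("(mk)-shift multiplication matches the numeric product of the functions")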
https://en.wikipedia.org/wiki/Computer_for_operations_with_functions
RISC-V[b](pronounced "risk-five"[2]: 1) is anopen standardinstruction set architecture(ISA) based on establishedreduced instruction set computer(RISC) principles. The project commenced in 2010 at theUniversity of California, Berkeley. It transferred to the RISC-V Foundation in 2015, and from there to RISC-V International, a Swiss non-profit entity, in November 2019.[5][6]Similar to several other RISC ISAs, e.g.Amber (ARMv2)orOpenRISC, RISC-V is offered underroyalty-freeopen-source licenses.[7]The documents defining the RISC-V instruction set architecture (ISA) are offered under aCreative Commons licenseor aBSD License. Mainline support for RISC-V was added to the Linux 5.17 kernel in 2022, along with itstoolchain.[8]In July 2023, RISC-V, in its64-bitvariant called riscv64,[9]was included as an official architecture of Linux distributionDebian, in itsunstableversion.[10]The goal of this project was "to have Debian ready to install and run on systems implementing a variant of the RISC-V ISA."[11]Gentooalso supports RISC-V.[12]Fedorasupports RISC-V as an alternative architecture as of 2025.[13][14]TheopenSUSEProject added RISC-V support in 2018.[15] Some RISC-V International members, such asSiFive,Andes Technology,Synopsys,Alibaba's Damo Academy,Raspberry Pi, and Akeana,[16][17]offer or have announced commercialsystems on a chip(SoCs) that incorporate one or more RISC-V compatible CPU cores.[18] The termRISCdates from about 1980.[19]Before then, there was some knowledge (seeJohn Cocke) that simpler computers can be effective, but the design principles were not widely described. Simple, effective computers have always been of academic interest, and resulted in the RISC instruction setDLXfor the first edition ofComputer Architecture: A Quantitative Approachin 1990 of whichDavid Pattersonwas a co-author, and he later participated in the RISC-V origination. DLX was intended for educational use; academics and hobbyists implemented it usingfield-programmable gate arrays(FPGA), but it was never truly intended for commercial deployment.ARMCPUs, versions 2 and earlier, had a public-domain instruction set and are still supported by theGNU Compiler Collection(GCC), a popularfree-softwarecompiler. Three open-sourcecoresexist for this ISA, but were never manufactured.[20][21]OpenRISC,OpenPOWER, andOpenSPARC/LEONcores are offered, by a number of vendors, and have mainline GCC andLinuxkernel support.[22][23][24] Krste Asanovićat theUniversity of California, Berkeley, had a research requirement for an open-source computer system, and in 2010, he decided to develop and publish one in a "short, three-month project over the summer" with several of his graduate students. The plan was to aid both academic and industrial users.[25]David Patterson at Berkeley joined the collaboration as he was the originator of the Berkeley RISC,[19]and the RISC-V is the eponymous fifth generation of his long series of cooperative RISC-based research projects at the University of California, Berkeley (RISC-IandRISC-IIpublished in 1981 by Patterson, who refers[26]to the SOAR architecture[27]from 1984 as "RISC-III" and the SPUR architecture[28]from 1988 as "RISC-IV"). At this stage, students provided initial software, simulations, and CPU designs.[29] The RISC-V authors and their institution originally sourced the ISA documents[30]and several CPU designs underBSD licenses, which allow derivative works—such as RISC-V chip designs—to be either open and free, or closed and proprietary. 
The ISA specification itself (i.e., the encoding of the instruction set) was published in 2011 as open source,[31]with all rights reserved. The actual technical report (an expression of the specification) was later placed under aCreative Commons licenseto permit enhancement by external contributors through the RISC-V Foundation, and later RISC-V International. A full history of RISC-V has been published on the RISC-V International website.[32] Commercial users require an ISA to be stable before they can use it in a product that may last many years. To address this issue, the RISC-V Foundation was formed in 2015 to own, maintain, and publish intellectual property related to RISC-V's definition.[33]The original authors and owners have surrendered their rights to the foundation.[citation needed]The foundation is led by CEOCalista Redmond, who took on the role in 2019 after leading open infrastructure projects atIBM.[34][failed verification] The founding members of RISC-V were:Andes Technology, Antmicro,Bluespec,Ceva,Codasip, Cortus, Esperanto Technologies,Espressif Systems,ETH Zurich, Google, IBM, ICT,IIT Madras,Lattice Semiconductor,LowRISC,Microchip Technology, theMIT Computer Science and Artificial Intelligence Laboratory,Qualcomm,Rambus, Rumble Development,SiFive, Syntacore and Technolution.[35] In November 2019, the RISC-V Foundation announced that it would relocate to Switzerland, citing concerns over U.S. trade regulations.[36][37]As of March 2020, the organization was named RISC-V International, a Swiss nonprofit business association.[38] As of 2019[update], RISC-V International freely publishes the documents defining RISC-V and permits unrestricted use of the ISA for design of software and hardware. However, only members of RISC-V International can vote to approve changes, and only member organizations use thetrademarkedcompatibility logo.[39] The Linux Foundation Europe started the RISC-V Software Ecosystem (RISE) initiative on May 31, 2023. The goal of RISE is to increase the availability of software for high-performance and power-efficient RISC-V processors running high-level operating systems for a range of market segments by bringing together a large number of hardware and software vendors.Red Hat,Samsung, Qualcomm,Nvidia,MediaTek, Intel, and Google are among the initial members.[40] CPU designrequires design expertise in several specialties: electronicdigital logic,compilers, andoperating systems. To cover the costs of such a team, commercial vendors of processor intellectual property (IP), such asArm Ltd.andMIPS Technologies, chargeroyaltiesfor the use of their designs andpatents.[42][43][44]They also often requirenon-disclosure agreementsbefore releasing documents that describe their designs' detailed advantages. In many cases, they never describe the reasons for their design choices. RISC-V was begun with a goal to make a practical ISA that was open-sourced, usable academically, and deployable in any hardware or software design without royalties.[2]: 1[25]Also, justifying rationales for each design decision of the project are explained, at least in broad terms. The RISC-V authors are academics who have substantial experience in computer design, and the RISC-V ISA is a direct development from a series of academic computer-design projects, especiallyBerkeley RISC. 
RISC-V originated in part to aid all such projects.[2]: 1[25] To build a large, continuing community of users and thereby accumulate designs and software, the RISC-V ISA designers intentionally support a wide variety of practical use cases: compact, performance, and low-power real-world implementations[2]: 1–2, 153–154[45] without over-architecting for a given microarchitecture.[2]: 1[46][47][48] The requirement for a large base of contributors is part of the reason why RISC-V was engineered to address many possible uses. The designers' primary assertion is that the instruction set is the key interface in a computer, as it is situated at the boundary between the hardware and the software. If a good instruction set is open and available for use by all, it can dramatically reduce the cost of software by enabling far more reuse. It should also trigger increased competition among hardware providers, who can then devote more resources toward design and less toward software support.[25] The designers maintain that new principles are becoming rare in instruction set design, as the most successful designs of the last forty years have grown increasingly similar. Of those that failed, most did so because their sponsoring companies were financially unsuccessful, not because the instruction sets were technically poor. Thus, a well-designed open instruction set designed using well-established principles should attract long-term support from many vendors.[25] RISC-V also encourages academic usage. The simplicity of the integer subset permits basic student exercises, and is a simple enough ISA to enable software to control research machines. The variable-length ISA provides room for instruction set extensions for both student exercises and research,[2]: 7 and the separate privileged instruction set permits research in operating system support without redesigning compilers.[3] RISC-V's open intellectual property paradigm allows derivative designs to be published, reused, and modified.[49] RISC-V has a modular design, consisting of alternative base parts, with added optional extensions. The ISA base and its extensions are developed in a collective effort between industry, the research community and educational institutions. The base specifies instructions (and their encoding), control flow, registers (and their sizes), memory and addressing, logic (i.e., integer) manipulation, and ancillaries. The base alone can implement a simplified general-purpose computer, with full software support, including a general-purpose compiler. The standard extensions are specified to work with all of the standard bases, and with each other without conflict. Many RISC-V computers might implement the compressed instruction extension to reduce power consumption, code size, and memory use.[2]: 97–99 There are also future plans to support hypervisors and virtualization.[3] Together with the supervisor extension, S, an RVGC instruction set, which includes one of the RV base instruction sets, the G collection of extensions (which includes "I", meaning that the base is non-embedded), and the C extension, defines all instructions needed to conveniently support a general-purpose operating system.[2]: 129, 154 To name the combinations of functionality that may be implemented, a nomenclature is defined to specify them in Chapter 27 of the current ratified Unprivileged ISA Specification. The instruction set base is specified first, coding for RISC-V, the register bit-width, and the variant; e.g., RV64I or RV32E.
Then follow letters specifying the implemented extensions, in a canonical order defined by the specification. Each letter may be followed by a major version number, optionally followed by "p" and a minor version number. The minor version number defaults to 0 if absent, and the version defaults to 1.0 if the entire version number is absent. Thus RV64IMAFD may be written as RV64I1p0M1p0A1p0F1p0D1p0, or more simply as RV64I1M1A1F1D1. Underscores may be used between extensions for readability, for example RV32I2_M2_A2. The base, the extended integer and floating-point calculations, and the synchronization primitives for multi-core computing are considered necessary for general-purpose computing, and so have the shorthand "G". A small 32-bit computer for an embedded system might be RV32EC. A large 64-bit computer might be RV64GC; i.e., RV64IMAFDCZicsr_Zifencei. With the growth in the number of extensions, the standard now provides for extensions to be named by a single "Z" followed by an alphabetical name and an optional version number. For example, Zifencei names the instruction-fetch extension; Zifencei2 and Zifencei2p0 name version 2.0 of the same. The first letter following the "Z" by convention indicates the most closely related alphabetical extension category, IMAFDQLCBJTPVN. Thus the Zam extension, for misaligned atomics, relates to the "A" standard extension. Unlike single-character extensions, Z extensions must be separated by underscores, grouped by category and then alphabetically within each category, for example Zicsr_Zifencei_Zam. Extensions specific to supervisor privilege level are named in the same way using "S" as the prefix. Extensions specific to hypervisor level are named using "H" as the prefix. Machine-level extensions are prefixed with the three letters "Zxm". Supervisor, hypervisor and machine-level instruction set extensions are named after less-privileged extensions. RISC-V developers may create their own non-standard instruction set extensions. These follow the "Z" naming convention, but with "X" as the prefix. They should be specified after all standard extensions, and if multiple non-standard extensions are listed, they should be listed alphabetically. Profiles and platforms for standard ISA choice lists are under discussion. This flexibility can be used to highly optimize a specialized design by including only the exact set of ISA features required for an application, but the same flexibility also leads to a combinatorial explosion of possible ISA choices. Profiles specify a much smaller common set of ISA choices that capture the most value for most users, and which thereby enable the software community to focus resources on building a rich software ecosystem.[54] The platform specification defines a set of platforms that specify requirements for interoperability between software and hardware. The Platform Policy defines the various terms used in the platform specification. The platform policy also provides the needed detail regarding the scope, coverage, naming, versioning, structure, life cycle and compatibility claims for the platform specification.[55] As a RISC architecture, the RISC-V ISA is a load–store architecture. Its floating-point instructions use IEEE 754 floating-point. Notable features of the RISC-V ISA include: instruction bit-field locations chosen to simplify the use of multiplexers in a CPU,[2]: 17 a design that is architecturally neutral,[dubious–discuss] and a fixed location for the sign bit of immediate values to speed up sign extension.[2]: 17 The instruction set is designed for a wide range of uses.
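As a rough illustration of the extension-naming scheme above, the sketch below expands the common shorthands into a base and a list of extensions. It handles only simple cases (no version numbers, underscores, or non-standard "X" extensions) and is not a parser of the specification's full grammar; the expansion of "G" follows the RV64GC example given above.

# Rough expansion of simple RISC-V ISA strings (illustrative only).

def expand_isa(name):
    """Expand e.g. 'RV64GC' into (width, base, extensions)."""
    name = name.upper()
    if not name.startswith("RV"):
        raise ValueError("ISA strings start with 'RV'")
    i = 2
    while i < len(name) and name[i].isdigit():
        i += 1
    xlen, rest = name[2:i], name[i:]
    exts = []
    for ch in rest:
        if ch == "G":                          # G = IMAFD plus Zicsr and Zifencei
            exts += ["I", "M", "A", "F", "D", "Zicsr", "Zifencei"]
        else:
            exts.append(ch)
    base = exts.pop(0)                         # first entry is the base (I or E)
    return "RV" + xlen, base, exts

print(expand_isa("RV64GC"))   # ('RV64', 'I', ['M', 'A', 'F', 'D', 'Zicsr', 'Zifencei', 'C'])
print(expand_isa("RV32EC"))   # ('RV32', 'E', ['C'])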
The base instruction set has a fixed length of32-bitnaturally aligned instructions, and the ISA supports variable length extensions where each instruction can be any number of16-bitparcels in length.[2]: 7–10Extensions support smallembedded systems,personal computers,supercomputerswith vector processors, and warehouse-scaleparallel computers. The instruction set specification defines 32-bit and64-bitaddress spacevariants. The specification includes a description of a128-bitflat address space variant, as an extrapolation of 32- and 64-bit variants, but the 128-bit ISA remains "not frozen" intentionally, because as of 2023[update], there is still little practical experience with such large memory systems.[2]: 41 Unlike other academic designs which are typically optimized only for simplicity of exposition, the designers intended that the RISC-V instruction set be usable for practical computers. As of June 2019, version 2.2 of the user-space ISA[59]and version 1.11 of theprivilegedISA[3]arefrozen, permitting software and hardware development to proceed. The user-space ISA, now renamed the Unprivileged ISA, was updated, ratified and frozen as version 20191213.[2]An external debug specification is available as a draft, version 0.13.2.[60] RISC-V has 32integerregisters (or 16 in the embedded variant),[2]: 13, 33and when the floating-point extension is implemented, an additional 32floating-pointregisters.[2]: 63Except for memory access instructions, instructions address onlyregisters. The first integer register is azero register, and the remainder are general-purpose registers. A store to the zero register has no effect, and a read always provides 0. Using the zero register as a placeholder makes for a simpler instruction set. Control and status registers exist, but user-mode programs can access only those used for performance measurement and floating-point management. No instructions exist to save and restore multiple registers. Those were thought to be needless, too complex, and perhaps too slow.[49] Like many RISC designs, RISC-V is aload–store architecture: instructions address only registers, with load and store instructions conveying data to and from memory. Most load and store instructions include a 12-bit offset and two register identifiers. One register is the base register. The other register is the destination (for a load) or the source (for a store). The offset is added to a base register to get the address.[2]: 24Forming the address as a base register plus offset allows single instructions to access data structures. For example, if the base register points to the top of a stack, single instructions can access a subroutine's local variables in the stack. Likewise the load and store instructions can access a record-style structure or a memory-mapped I/O device. Using the constant zero register as a base address allows single instructions to access memory near address zero. Memory is addressed as 8-bit bytes, with instructions being inlittle-endianorder,[2]: 9–10and with data being in the byte order defined by the execution environment interface in which code is running.[2]: 3, 9–10, 24Words, up to the register size, can be accessed with the load and store instructions. RISC-V was originally specified as little-endian to resemble other familiar, successful computers, for example,x86.[2]: 9–10This also reduces a CPU's complexity and costs slightly less because it reads all sizes of words in the same order. 
For example, the RISC-V instruction set decodes starting at the lowest-addressed byte of the instruction. Big-endian and bi-endian variants were defined for support of legacy code bases that assume big-endianness.[2]: 9–10The privileged ISA defines bits in themstatusandmstatushregisters that indicate and, optionally, control whether M-mode, S-mode, and U-mode memory accesses other than instruction fetches are little-endian or big-endian; those bits may be read-only, in which case the endianness of the implementation is hardwired, or may be writable.[3]: 23–24 An execution environment interface may allow accessed memory addresses not to be aligned to their word width, but accesses to aligned addresses may be faster; for example, simple CPUs may implement unaligned accesses with slow software emulation driven from an alignment failureinterrupt.[2]: 3, 24–25 Like many RISC instruction sets (and somecomplex instruction set computer(CISC) instruction sets, such asx86andIBM System/360and its successors throughz/Architecture), RISC-V lacks address-modes that write back to the registers. For example, it does not auto-increment.[2]: 24 RISC-V manages memory systems that are shared between CPUs orthreadsby ensuring a thread of execution always sees its memory operations in the programmed order. But between threads and I/O devices, RISC-V is simplified: it doesn't guarantee the order of memory operations, except by specific instructions, such asfence. Afenceinstruction guarantees that the results of predecessor operations are visible to successor operations of other threads or I/O devices.fencecan guarantee the order of combinations of both memory and memory-mapped I/O operations. E.g. it can separate memory read and write operations, without affecting I/O operations. Or, if a system can operate I/O devices in parallel with memory,fencedoesn't force them to wait for each other. One CPU with one thread may decodefenceasnop. Some RISC CPUs (such asMIPS,PowerPC,DLX, and Berkeley's RISC-I) place 16 bits of offset in the loads and stores. They set the upper 16 bits by aload upper wordinstruction. This permits upper-halfword values to be set easily, without shifting bits. However, most use of the upper half-word instruction makes 32-bit constants, like addresses. RISC-V uses aSPARC-like combination of 12-bit offsets and 20-bitset upperinstructions. The smaller 12-bit offset helps compact, 32-bit load and store instructions select two of 32 registers yet still have enough bits to support RISC-V's variable-length instruction coding.[2]: 16 RISC-V handles 32-bit constants and addresses with instructions that set the upper 20 bits of a 32-bit register. Load upper immediateluiloads 20 bits into bits 31 through 12. Then a second instruction such asaddican set the bottom 12 bits. Small numbers or addresses can be formed by using the zero register instead oflui. This method is extended to permitposition-independent codeby adding an instruction,auipcthat generates 20 upper address bits by adding an offset to the program counter and storing the result into a base register. This permits a program to generate 32-bit addresses that are relative to the program counter. The base register can often be used as-is with the 12-bit offsets of the loads and stores. If needed,addican set the lower 12 bits of a register. 
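To make the lui/addi mechanism concrete, the sketch below splits a 32-bit constant into the 20-bit upper immediate for lui and the 12-bit lower immediate for addi. Because addi sign-extends its 12-bit immediate, the upper part must be increased by one when the low 12 bits read as a negative value; this is the standard adjustment, shown here only as an illustration.

# Split a 32-bit constant into a lui/addi immediate pair.  addi sign-extends
# its 12-bit immediate, so the lui part is rounded up when the low 12 bits
# would be read as negative.

def lui_addi_split(value):
    value &= 0xFFFFFFFF
    lo = value & 0xFFF
    if lo >= 0x800:                  # low 12 bits are negative as a signed immediate
        lo -= 0x1000
    hi = (value - lo) & 0xFFFFFFFF   # what lui must place into bits 31..12
    assert hi % 0x1000 == 0
    return hi >> 12, lo              # (lui immediate, addi immediate)

def rebuild(hi20, lo12):
    return ((hi20 << 12) + lo12) & 0xFFFFFFFF

for x in (0x12345678, 0x12345FFF, 0xFFFFFFFF, 0x00000800):
    hi, lo = lui_addi_split(x)
    assert rebuild(hi, lo) == x, hex(x)
print("each lui/addi pair reproduces the original 32-bit constant")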
In 64-bit and 128-bit ISAs,luiandauipcsign-extend the result to get the larger address.[2]: 37 Some fast CPUs may interpret combinations of instructions as singlefusedinstructions.luiorauipcare good candidates to fuse withjalr,addi, loads or stores. RISC-V's subroutine calljal(jump and link) places its return address in a register. This is faster in many computer designs, because it saves a memory access compared to systems that push a return address directly on a stack in memory.jalhas a 20-bit signed (two's complement) offset. The offset is multiplied by 2, then added to the PC (program counter) to generate a relative address to a 32-bit instruction. If the resulting address is not 32-bit aligned (i.e. evenly divisible by 4), the CPU may force anexception.[2]: 20–23, Section 2.5 RISC-V CPUs jump to calculated addresses using ajump and link-register,jalrinstruction.jalris similar tojal, but gets its destination address by adding a 12-bit offset to a base register. (In contrast,jaladds a larger 20-bit offset to the PC.) jalr's bit format is like the register-relative loads and stores. Like them,jalrcan be used with the instructions that set the upper 20 bits of a base register to make 32-bit branches, either to an absolute address (usinglui) or a PC-relative one (usingauipcfor position-independent code). (Using a constant zero base address allows single-instruction calls to a small (the offset), fixed positive or negative address.) RISC-V recyclesjalandjalrto get unconditional 20-bit PC-relative jumps and unconditional register-based 12-bit jumps. Jumps just make the linkage register 0 so that no return address is saved.[2]: 20–23, Section 2.5 RISC-V also recyclesjalrto return from a subroutine: To do this,jalr's base register is set to be the linkage register saved byjalorjalr.jalr's offset is zero and the linkage register is zero, so that there is no offset, and no return address is saved. Like many RISC designs, in a subroutine call, a RISC-V compiler must use individual instructions to save registers to the stack at the start, and then restore these from the stack on exit. RISC-V has nosave multipleorrestore multipleregister instructions. These were thought to make the CPU too complex, and possibly slow.[61]This can take more code space. Designers planned to reduce code size with library routines to save and restore registers.[62] RISC-V has nocondition code registerorcarry bit. The designers believed that condition codes make fast CPUs more complex by forcing interactions between instructions in different stages of execution. This choice makes multiple-precision arithmetic more complex. Also, a few numerical tasks need more energy. As a result,predication(the conditional execution of instructions) is not supported. The designers claim that very fast, out-of-order CPU designs do predication anyway, by doing the comparison branch and conditional code in parallel, then discarding the unused path's effects. They also claim that even in simpler CPUs, predication is less valuable thanbranch prediction, which can prevent most stalls associated with conditional branches. Code without predication is larger, with more branches, but they also claim that acompressed instruction set(such as RISC-V's setC) solves that problem in most cases.[49][failed verification] Instead, RISC-V has short branches that perform comparisons: equal, not-equal, less-than, unsigned less-than, greater-than or equal and unsigned greater-than or equal. 
Ten comparison-branch operations are implemented with only six instructions, by reversing the order of operands in the assembler. For example, branch if greater than can be done by less than with a reversed order of operands.[2]: 20–23, Section 2.5 The comparing branches have a twelve-bit signed range, and jump relative to the PC.[2]: 20–23, Section 2.5 Unlike some RISC architectures, RISC-V does not include a branch delay slot, a position after a branch instruction that can be filled with an instruction that is executed whether or not the branch is taken.[2]: 20–23, Section 2.5 RISC-V omits a branch delay slot because it complicates multicycle CPUs, superscalar CPUs, and long pipelines. Dynamic branch predictors have succeeded well enough to reduce the need for delayed branches.[49] On the first encounter with a branch, RISC-V CPUs should assume that a negative relative branch (i.e. the sign bit of the offset is "1") will be taken.[2]: 20–23, Section 2.5 This assumes that a backward branch is a loop, and provides a default direction so that simple pipelined CPUs can fill their pipeline of instructions. Other than this, RISC-V does not require branch prediction, but core implementations are allowed to add it. RV32I reserves a "HINT" instruction space that presently does not contain any hints on branches;[2]: 28–29, Section 2.9 RV64I does the same.[2]: 38–39, Section 5.4 RISC-V segregates math into a minimal set of integer instructions (set I) with add, subtract, shift, bitwise logic and comparing-branches. These can simulate most of the other RISC-V instruction sets with software. (The atomic instructions are a notable exception.) RISC-V integer instructions lack the count-leading-zero and bit-field operations normally used to speed software floating point in a pure-integer processor. However, the ratified Zbb, Zba and Zbs extensions, nominally part of the bit-manipulation extension, contain further integer instructions, including a count-leading-zero instruction. The integer multiplication instructions (set M) include signed and unsigned multiply and divide. Double-precision integer multiplies and divides are included, as multiplies and divides that produce the high word of the result. The ISA document recommends that implementors of CPUs and compilers fuse a standardized sequence of high and low multiply and divide instructions into one operation if possible.[2]: 43–45 The floating-point instructions (set F) include single-precision arithmetic and also comparison-branches similar to the integer arithmetic. They require an additional set of 32 floating-point registers, separate from the integer registers. The double-precision floating-point instructions (set D) generally assume that the floating-point registers are 64-bit (i.e., double-width), and the F subset is coordinated with the D set. A quad-precision 128-bit floating-point ISA (Q) is also defined.[2]: 63–82 RISC-V computers without floating point can use a floating-point software library. RISC-V does not cause exceptions on arithmetic errors, including overflow,[2]: 17–20 underflow, subnormal, and divide by zero.[2]: 44–45 Instead, both integer and floating-point arithmetic produce reasonable default values, and floating-point instructions set status bits.[2]: 66 Divide-by-zero can be discovered by one branch after the division.[2]: 44–45 The status bits can be tested by an operating system or a periodic interrupt. RISC-V supports computers that share memory between multiple CPUs and threads. RISC-V's standard memory consistency model is release consistency.
That is, loads and stores may generally be reordered, but some loads may be designated asacquireoperations which must precede later memory accesses, and some stores may be designated asreleaseoperations which must follow earlier memory accesses.[2]: 83–94 The base instruction set includes minimal support in the form of afenceinstructionto enforce memory ordering.[2]: 26–27Although this is sufficient (fence r, rwprovidesacquireandfence rw, wprovidesrelease), combined operations can be more efficient.[2]: Chapter 8 The atomic memory operation extension supports two types of atomic memory operations for release consistency. First, it provides general purposeload-reservedlrandstore-conditionalscinstructions.lrperforms a load, and tries to reserve that address for its thread. A later store-conditionalscto the reserved address will be performed only if the reservation is not broken by an intervening store from another source. If the store succeeds, a zero is placed in a register. If it failed, a non-zero value indicates that software needs to retry the operation. In either case, the reservation is released.[2]: Chapter 8 The second group of atomic instructions performread-modify-writesequences: a load (which is optionally a load-acquire) to a destination register, then an operation between the loaded value and a source register, then a store of the result (which may optionally be a store-release). Making the memory barriers optional permits combining the operations. The optional operations are enabled byacquireandreleasebits which are present in every atomic instruction. RISC-V defines nine possible operations: swap (use source register value directly); add; bitwise and, or, and exclusive-or; and signed and unsigned minimum and maximum.[2]: Chapter 8 A system design may optimize these combined operations more thanlrandsc. For example, if the destination register for a swap is the constant zero, the load may be skipped. If the value stored is unmodified since the load, the store may be skipped.[59]: 44 TheIBM System/370and its successors includingz/Architecture, andx86, both implement acompare-and-swap(cas) instruction, which tests and conditionally updates a location in memory: if the location contains an expected old value,casreplaces it with a given new value; it then returns an indication of whether it made the change. However, a simple load-type instruction is usually performed before thecasto fetch the old value. The classic problem is that if a thread reads (loads) a valueA, calculates a new valueC, and then uses (cas) to replaceAwithC, it has no way to know whether concurrent activity in another thread has replacedAwith some other valueBand then restored theAin between. In some algorithms (e.g., ones in which the values in memory are pointers to dynamically allocated blocks), thisABA problemcan lead to incorrect results. The most common solution employs adouble-widecasinstruction to update both the pointer and an adjacent counter; unfortunately, such an instruction requires a special instruction format to specify multiple registers, performs several reads and writes, and can have complex bus operation.[2]: 48–49 Thelr/scalternative is more efficient. It usually requires only one memory load, and minimizing slow memory operations is desirable. It's also exact: it controls all accesses to the memory cell, rather than just assuring a bit pattern. However, unlikecas, it can permitlivelock, in which two or more threads repeatedly cause each other's instructions to fail. 
RISC-V guarantees forward progress (no livelock) if the code follows rules on the timing and sequence of instructions: 1) It must use only theIsubset. 2) To prevent repetitive cache misses, the code (including the retry loop) must occupy no more than 16 consecutive instructions. 3) It must include no system or fence instructions, or taken backward branches between thelrandsc. 4) The backward branch to the retry loop must be to the original sequence.[2]: 48–49 The specification gives an example of how to use the read-modify-write atomic instructions to lock a data structure.[2]: 54 The standard RISC-V ISA specifies that all instructions are 32 bits. This makes for a particularly simple implementation, but like other RISC processors with 32-bit instruction encoding, results in larger code size than in instruction sets with variable-length instructions.[2]: 99[61] To compensate, RISC-V's32-bitinstructions are actually 30 bits;3⁄4of theopcodespace is reserved for an optional (but recommended) variable-lengthcompressedinstruction set, RVC, that includes 16-bit instructions. As inARM ThumbandMIPS16, the compressed instructions are simply alternative encodings for a subset of the larger instructions. Unlike the ARM or MIPS compressed sets, space was reserved from the start so there is no separate operating mode. Standard and compressed instructions may be intermixed freely.[2]: 97[61](Extension letter isC.)[2]: 97 Because (like Thumb-1 and MIPS16) the compressed instructions are simply alternate encodings (aliases) for a selected subset of larger instructions, the compression can be implemented in the assembler, and it is not essential for the compiler to even know about it. A prototype of RVC was tested in 2011.[61]The prototype code was 20% smaller than anx86PC andMIPScompressed code, and 2% larger than ARMThumb-2code.[61]It also substantially reduced both the needed cache memory and the estimated power use of the memory system.[61] The researcher intended to reduce the code's binary size for small computers, especiallyembedded computersystems. The prototype included 33 of the most frequently used instructions, recoded as compact 16-bit formats using operation codes previously reserved for the compressed set.[61]The compression was done in theassembler, with no changes to the compiler. Compressed instructions omitted fields that are often zero, used small immediate values or accessed subsets (16 or 8) of the registers.addiis very common and often compressible.[61] Much of the difference in size compared to ARM's Thumb set occurred because RISC-V, and the prototype, have no instructions to save and restore multiple registers. Instead, the compiler generated conventional instructions that access the stack. The prototype RVC assembler then often converted these to compressed forms that were half the size. However, this still took more code space than the ARM instructions that save and restore multiple registers. The researcher proposed to modify the compiler to call library routines to save and restore registers. These routines would tend to remain in a code cache and thus run fast, though probably not as fast as a save-multiple instruction.[61] Standard RVC requires occasional use of 32-bit instructions. 
Several nonstandard RVC proposals are complete, requiring no 32-bit instructions, and are said to have higher densities than standard RVC.[63][64]Another proposal builds on these, and claims to use less coding range as well.[65] An instruction set for the smallestembeddedCPUs (set E) is reduced in other ways: Only 16 of the 32 integer registers are supported.[2]: Chapter 4All current extensions may be used; a floating-point extension to use the integer registers for floating-point values is being considered. The privileged instruction set supports only machine mode, user mode and memory schemes that use base-and-bound address relocation.[3] Discussion has occurred for a microcontroller profile for RISC-V, to ease development of deeply embedded systems. It centers on faster, simple C-language support for interrupts, simplified security modes and a simplifiedPOSIXapplication binary interface.[66] Correspondents have also proposed smaller, non-standard, 16-bitRV16EISAs: Several serious proposals would use the 16-bitCinstructions with 8 × 16-bit registers.[64][63]An April fools' joke proposed a very practical arrangement: Utilize 16 × 16-bit integer registers, with the standardEIMCISAs (including 32-bit instructions.) The joke was to usebank switchingwhen a 32-bit CPU would be clearly superior with the larger address space.[67] RISC-V's ISA includes a separateprivilegedinstruction set specification, which mostly describes three privilege levels plus an orthogonal hypervisor mode. As of December 2021[update], version 1.12 is ratified by RISC-V International.[3] Version 1.12 of the specification supports several types of computer systems: These correspond roughly to systems with up to fourringsof privilege and security, at most: machine, hypervisor, supervisor and user. Each layer also is expected to have a thin layer of standardized supporting software that communicates to a more-privileged layer, or hardware.[3] The ISA also includes a hypervisor mode that isorthogonalto the user and supervisor modes.[68]The basic feature is a configuration bit that either permits supervisor-level code to access hypervisor registers, or causes an interrupt on accesses. This bit lets supervisor mode directly handle the hardware needed by a hypervisor. This simplifies the implementation of hypervisors that are hosted by an operating system. This is a popular mode to run warehouse-scale computers. To support non-hosted hypervisors, the bit can cause these accesses to interrupt to a hypervisor. The design also simplifies nesting of hypervisors, in which a hypervisor runs under a hypervisor, and if necessary it lets the kernel use hypervisor features within its own kernel code. As a result, the hypervisor form of the ISA supports five modes: machine, supervisor, user, supervisor-under-hypervisor and user-under-supervisor. The privileged instruction set specification explicitly defineshardwarethreads, orharts. Multiple hardware threads are a common practice in more-capable computers. When one thread is stalled, waiting for memory, others can often proceed. Hardware threads can help make better use of the large number of registers and execution units in fast out-of-order CPUs. Finally, hardware threads can be a simple, powerful way to handleinterrupts: No saving or restoring of registers is required, simply executing a different hardware thread. However, the only hardware thread required in a RISC-V computer is thread zero.[3] Interrupts and exceptions are handled together. 
Exceptions are caused by instruction execution including illegal instructions and system calls, while interrupts are caused by external events. The existing control and status register definitions support RISC-V's error and memory exceptions, and a small number of interrupts, typically via an "advanced core local interruptor" (ACLINT).[69]For systems with more interrupts, the specification also defines aplatform-level interrupt controller(PLIC) to coordinate large number of interrupts among multiple processors. Interrupts always start at the highest-privileged machine level, and the control registers of each level have explicitforwardingbits to route interrupts to less-privileged code. For example, the hypervisor need not include software that executes on each interrupt to forward an interrupt to an operating system. Instead, on set-up, it can set bits to forward the interrupt.[3] Several memory systems are supported in the specification. Physical-only is suited to the simplest embedded systems. There are also fourUNIX-stylevirtual memorysystems for memory cached in mass-storage systems. The virtual memory systems supportMMUwith four sizes, with addresses sized 32, 39, 48 and 57 bits. All virtual memory systems support 4 KiB pages, multilevel page-table trees and use very similar algorithms to walk the page table trees. All are designed for either hardware or software page-table walking. To optionally reduce the cost of page table walks, super-sized pages may be leaf pages in higher levels of a system's page table tree. SV32 is only supported on 32-bit implementations, has a two-layer page table tree and supports 4 MiB superpages. SV39 has a three level page table, and supports 2 MiB superpages and 1 GiB gigapages. SV48 is required to support SV39. It also has a 4-level page table and supports 2 MiB superpages, 1 GiB gigapages, and 512 GiB terapages. SV57 has a 5-level page table and supports 2 MiB superpages, 1 GiB gigapages, 512 GiB terapages and 256 TiB petapages. Superpages are aligned on the page boundaries for the next-lowest size of page.[3] Some bit-manipulation ISA extensions were ratified in November 2021 (Zba, Zbb, Zbc, Zbs).[51]The Zba, Zbb, and Zbs extensions are arguably extensions of the standard I integer instructions: Zba contains instructions to speed up the computation of the addresses of array elements in arrays of datatypes of size 2, 4, or 8 bytes (sh1add, sh2add, sh3add), and for 64 (and 128) bit processors when indexed with unsigned integers (add.uw, sh1add.uw, sh2add.uw, sh3add.uw and slli.uw). The Zbb instructions contains operations to count leading, trailing 0 bits or all 1 bits in a full and 32 word operations (clz, clzw, ctz, ctzw, cpop, cpopw), byte order reversion (rev8), logical instructions with negation of the second input (andn,orn, xnor), sign and zero extension (sext.b, sext.h, zext.h) that could not be provided as special cases of other instructions (andi, addiw, add.wu), min and max of (signed and unsigned) integers, (left and right) rotation of bits in a register and 32-bit words (rori,roriw, ror, rorw, rol, rolw), and a byte wise "or combine" operation which allows detection of a zero byte in a full register, useful for handling C-style null terminated strings functions. The Zbs extension allows setting, getting, clearing, and toggling individual bits in a register by their index (bseti, bset, bexti, bext, bclri, bclr, binvi,binv). 
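As a rough model of what a few of these instructions compute (an illustration of their arithmetic effect, not the formal ISA semantics), sh2add forms a base-plus-scaled-index address, clz counts leading zero bits, and bset sets a single bit selected by an index:

# Rough Python models of a few Zba/Zbb/Zbs operations on 64-bit registers
# (illustrative only).

XLEN = 64
MASK = (1 << XLEN) - 1

def sh2add(rs1, rs2):
    """Zba: rs2 + (rs1 << 2), the address of a 4-byte array element."""
    return (rs2 + (rs1 << 2)) & MASK

def clz(rs1):
    """Zbb: count leading zero bits of a 64-bit value."""
    return XLEN if rs1 == 0 else XLEN - rs1.bit_length()

def bset(rs1, index):
    """Zbs: set the bit selected by index."""
    return (rs1 | (1 << (index % XLEN))) & MASK

base, i = 0x8000_0000, 10
assert sh2add(i, base) == base + 4 * i      # element address in an int32 array
assert clz(1) == 63 and clz(0) == 64
assert bset(0, 5) == 0b100000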
The Zbc extension has instructions for "carryless multiplication", which does the multiplication ofpolynomialsover theGalois fieldGF(2) (clmul, clmulh, clmulr). These are useful for cryptography and CRC checks of data integrity. Done well, a more specialised bit-manipulation subset can aid cryptographic, graphic, and mathematical operations. Further instructions that have been discussed include instructions to shift in ones, a generalized bit-reverse, shuffle and crossbar permutations, bit-field place, extract and deposit pack two words, bytes or halfwords in one register, CRC instructions, bit-matrix operations (RV64 only), conditional mix, conditional move, funnel shifts. The criteria for inclusion documented in the draft were compliant with RISC-V philosophies and ISA formats, substantial improvements in code density or speed (i.e., at least a 3-for-1 reduction in instructions), and substantial real-world applications, including preexisting compiler support. Version 0.93 of the bit-manipulation extension includes those instructions;[70]some of them are now in version 1.0.1 of the scalar andentropy sourceinstructions cryptography extension.[53] Packed-SIMD instructions are widely used by commercial CPUs to inexpensively accelerate multimedia and otherdigital signal processing.[49]For simple, cost-reduced RISC-V systems, the base ISA's specification proposed to use the floating-point registers' bits to perform parallel single instruction, multiple data (SIMD) sub-word arithmetic. In 2017 a vendor published a more detailed proposal to the mailing list, and this can be cited as version 0.1.[71]As of 2019[update], the efficiency of this proposed ISA varies from 2x to 5x a base CPU for a variety of DSP codecs.[72]The proposal lacked instruction formats and a license assignment to RISC-V International, but it was reviewed by the mailing list.[71]Some unpopular parts of this proposal were that it added a condition code, the first in a RISC-V design, linked adjacent registers (also a first), and has a loop counter that can be difficult to implement in some microarchitectures. The proposedvector-processinginstruction set may make the packedSIMDset obsolete. The designers hope to have enough flexibility that a CPU can implement vector instructions in a standard processor's registers. This would enable minimal implementations with similar performance to a multimedia ISA, as above. However, a true vector coprocessor could execute the same code with higher performance.[73] As of 19 September 2021[update], the vector extension is at version 1.0.[74]It is a conservative, flexible design of a general-purpose mixed-precision vector processor, suitable to executecompute kernels. Code would port easily to CPUs with differing vector lengths, ideally without recompiling.[73] In contrast, short-vector SIMD extensions are less convenient. These are used inx86, ARM andPA-RISC. In these, a change in word-width forces a change to the instruction set to expand the vector registers (in the case of x86, from 64-bitMMXregisters to 128-bitStreaming SIMD Extensions(SSE), to 256-bitAdvanced Vector Extensions(AVX), andAVX-512). The result is a growing instruction set, and a need to port working code to the new instructions. In the RISC-V vector ISA, rather than fix the vector length in the architecture, instructions (vsetvli,vsetivli, andvsetvl) are available which take a requested size and sets the vector length to the minimum of the hardware limit and the requested size. 
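The effect of this on a loop can be sketched as follows. This is a simplified software model of strip-mining under an assumed hardware limit VLMAX; the real vsetvli also encodes the element width and register grouping, which are ignored here.

# Simplified model of RISC-V vector strip-mining: each pass asks for the
# remaining element count and receives a vector length vl capped by the
# hardware maximum (VLMAX is an assumed limit for this sketch).

VLMAX = 8

def vsetvl(requested):
    return min(requested, VLMAX)

def vector_add(dst, a, b):
    n = len(a)
    i = 0
    while i < n:
        vl = vsetvl(n - i)            # set vl for this strip
        for j in range(vl):           # stands in for a single vadd.vv instruction
            dst[i + j] = a[i + j] + b[i + j]
        i += vl
    return dst

a = list(range(20))
b = [10] * 20
assert vector_add([0] * 20, a, b) == [x + 10 for x in a]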
So, the RISC-V proposal is more like aCray's long-vector design or ARM's Scalable Vector Extension. That is, each vector in up to 32 vectors is the same length.[74]: 25 The application specifies the total vector width it requires, and the processor determines the vector length it can provide with available on-chip resources. This takes the form of an instruction (vsetcfg) with four immediate operands, specifying the number of vector registers of each available width needed. The total must be no more than the addressable limit of 32, but may be less if the application does not require them all. The vector length is limited by the available on-chip storage divided by the number of bytes of storage needed for each entry. (Added hardware limits may also exist, which in turn may permit SIMD-style implementations.)[73] Outside of vector loops, the application can zero the number of requested vector registers, saving the operating system the work of preserving them oncontext switches.[73] The vector length is not only architecturally variable, but designed to vary at run time also. To achieve this flexibility, the instruction set is likely to use variable-width data paths and variable-type operations using polymorphic overloading.[73]The plan is that these can reduce the size and complexity of the ISA and compiler.[73] Recent experimental vector processors with variable-width data paths also show profitable increases in operations per: second (speed), area (lower cost), and watt (longer battery life).[75] Unlike a typical moderngraphics processing unit, there are no plans to provide special hardware to supportbranch predication. Instead, lower cost compiler-based predication will be used.[73][76] There is a preliminary specification for RISC-V's hardware-assisteddebugger. The debugger will use a transport system such as Joint Test Action Group (JTAG) or Universal Serial Bus (USB) to access debug registers. A standard hardware debug interface may support either astandardized abstract interfaceorinstruction feeding.[77][78] As of January 2017[update], the exact form of theabstract interfaceremains undefined, but proposals include a memory mapped system with standardized addresses for the registers of debug devices or a command register and a data register accessible to the communication system.[77]Correspondents claim that similar systems are used byFreescale'sbackground debug mode interface(BDM) for some CPUs,ARM,OpenRISC, andAeroflex'sLEON.[77] Ininstruction feeding, the CPU will process a debug exception to execute individual instructions written to a register. This may be supplemented with a data-passing register and a module to directly access the memory. Instruction feeding lets the debugger access the computer exactly as software would. It also minimizes changes in the CPU, and adapts to many types of CPU. This was said to be especially apt for RISC-V because it is designed explicitly for many types of computers. 
https://en.wikipedia.org/wiki/RISC-V
RISC-V[b](pronounced "risk-five"[2]: 1) is anopen standardinstruction set architecture(ISA) based on establishedreduced instruction set computer(RISC) principles. The project commenced in 2010 at theUniversity of California, Berkeley. It transferred to the RISC-V Foundation in 2015, and from there to RISC-V International, a Swiss non-profit entity, in November 2019.[5][6]Similar to several other RISC ISAs, e.g.Amber (ARMv2)orOpenRISC, RISC-V is offered underroyalty-freeopen-source licenses.[7]The documents defining the RISC-V instruction set architecture (ISA) are offered under aCreative Commons licenseor aBSD License. Mainline support for RISC-V was added to the Linux 5.17 kernel in 2022, along with itstoolchain.[8]In July 2023, RISC-V, in its64-bitvariant called riscv64,[9]was included as an official architecture of Linux distributionDebian, in itsunstableversion.[10]The goal of this project was "to have Debian ready to install and run on systems implementing a variant of the RISC-V ISA."[11]Gentooalso supports RISC-V.[12]Fedorasupports RISC-V as an alternative architecture as of 2025.[13][14]TheopenSUSEProject added RISC-V support in 2018.[15] Some RISC-V International members, such asSiFive,Andes Technology,Synopsys,Alibaba's Damo Academy,Raspberry Pi, and Akeana,[16][17]offer or have announced commercialsystems on a chip(SoCs) that incorporate one or more RISC-V compatible CPU cores.[18] The termRISCdates from about 1980.[19]Before then, there was some knowledge (seeJohn Cocke) that simpler computers can be effective, but the design principles were not widely described. Simple, effective computers have always been of academic interest, and resulted in the RISC instruction setDLXfor the first edition ofComputer Architecture: A Quantitative Approachin 1990 of whichDavid Pattersonwas a co-author, and he later participated in the RISC-V origination. DLX was intended for educational use; academics and hobbyists implemented it usingfield-programmable gate arrays(FPGA), but it was never truly intended for commercial deployment.ARMCPUs, versions 2 and earlier, had a public-domain instruction set and are still supported by theGNU Compiler Collection(GCC), a popularfree-softwarecompiler. Three open-sourcecoresexist for this ISA, but were never manufactured.[20][21]OpenRISC,OpenPOWER, andOpenSPARC/LEONcores are offered, by a number of vendors, and have mainline GCC andLinuxkernel support.[22][23][24] Krste Asanovićat theUniversity of California, Berkeley, had a research requirement for an open-source computer system, and in 2010, he decided to develop and publish one in a "short, three-month project over the summer" with several of his graduate students. The plan was to aid both academic and industrial users.[25]David Patterson at Berkeley joined the collaboration as he was the originator of the Berkeley RISC,[19]and the RISC-V is the eponymous fifth generation of his long series of cooperative RISC-based research projects at the University of California, Berkeley (RISC-IandRISC-IIpublished in 1981 by Patterson, who refers[26]to the SOAR architecture[27]from 1984 as "RISC-III" and the SPUR architecture[28]from 1988 as "RISC-IV"). At this stage, students provided initial software, simulations, and CPU designs.[29] The RISC-V authors and their institution originally sourced the ISA documents[30]and several CPU designs underBSD licenses, which allow derivative works—such as RISC-V chip designs—to be either open and free, or closed and proprietary. 
The ISA specification itself (i.e., the encoding of the instruction set) was published in 2011 as open source,[31]with all rights reserved. The actual technical report (an expression of the specification) was later placed under aCreative Commons licenseto permit enhancement by external contributors through the RISC-V Foundation, and later RISC-V International. A full history of RISC-V has been published on the RISC-V International website.[32] Commercial users require an ISA to be stable before they can use it in a product that may last many years. To address this issue, the RISC-V Foundation was formed in 2015 to own, maintain, and publish intellectual property related to RISC-V's definition.[33]The original authors and owners have surrendered their rights to the foundation.[citation needed]The foundation is led by CEOCalista Redmond, who took on the role in 2019 after leading open infrastructure projects atIBM.[34][failed verification] The founding members of RISC-V were:Andes Technology, Antmicro,Bluespec,Ceva,Codasip, Cortus, Esperanto Technologies,Espressif Systems,ETH Zurich, Google, IBM, ICT,IIT Madras,Lattice Semiconductor,LowRISC,Microchip Technology, theMIT Computer Science and Artificial Intelligence Laboratory,Qualcomm,Rambus, Rumble Development,SiFive, Syntacore and Technolution.[35] In November 2019, the RISC-V Foundation announced that it would relocate to Switzerland, citing concerns over U.S. trade regulations.[36][37]As of March 2020, the organization was named RISC-V International, a Swiss nonprofit business association.[38] As of 2019[update], RISC-V International freely publishes the documents defining RISC-V and permits unrestricted use of the ISA for design of software and hardware. However, only members of RISC-V International can vote to approve changes, and only member organizations use thetrademarkedcompatibility logo.[39] The Linux Foundation Europe started the RISC-V Software Ecosystem (RISE) initiative on May 31, 2023. The goal of RISE is to increase the availability of software for high-performance and power-efficient RISC-V processors running high-level operating systems for a range of market segments by bringing together a large number of hardware and software vendors.Red Hat,Samsung, Qualcomm,Nvidia,MediaTek, Intel, and Google are among the initial members.[40] CPU designrequires design expertise in several specialties: electronicdigital logic,compilers, andoperating systems. To cover the costs of such a team, commercial vendors of processor intellectual property (IP), such asArm Ltd.andMIPS Technologies, chargeroyaltiesfor the use of their designs andpatents.[42][43][44]They also often requirenon-disclosure agreementsbefore releasing documents that describe their designs' detailed advantages. In many cases, they never describe the reasons for their design choices. RISC-V was begun with a goal to make a practical ISA that was open-sourced, usable academically, and deployable in any hardware or software design without royalties.[2]: 1[25]Also, justifying rationales for each design decision of the project are explained, at least in broad terms. The RISC-V authors are academics who have substantial experience in computer design, and the RISC-V ISA is a direct development from a series of academic computer-design projects, especiallyBerkeley RISC. 
RISC-V was originated in part to aid all such projects.[2]: 1[25] To build a large, continuing community of users and thereby accumulate designs and software, the RISC-V ISA designers intentionally support a wide variety of practical use cases: compact, performance, and low-power real-world implementations[2]: 1–2, 153–154[45]without over-architecting for a givenmicroarchitecture.[2]: 1[46][47][48]The requirements of a large base of contributors is part of the reason why RISC-V was engineered to address many possible uses. The designers' primary assertion is that the instruction set is the key interface in a computer as it is situated at the interface between the hardware and the software. If a good instruction set were open and available for use by all, then it can dramatically reduce the cost of software by enabling far more reuse. It should also trigger increased competition among hardware providers, who can then devote more resources toward design and less for software support.[25] The designers maintain that new principles are becoming rare in instruction set design, as the most successful designs of the last forty years have grown increasingly similar. Of those that failed, most did so because their sponsoring companies were financially unsuccessful, not because the instruction sets were technically poor. Thus, a well-designed open instruction set designed using well-established principles should attract long-term support by many vendors.[25] RISC-V also encourages academic usage. The simplicity of the integer subset permits basic student exercises, and is a simple enough ISA to enable software to control research machines. The variable-length ISA provides room for instruction set extensions for both student exercises and research,[2]: 7and the separatedprivilegedinstruction set permits research in operating system support without redesigning compilers.[3]RISC-V's open intellectual property paradigm allows derivative designs to be published, reused, and modified.[49] RISC-V has a modular design, consisting of alternative base parts, with added optional extensions. The ISA base and its extensions are developed in a collective effort between industry, the research community and educational institutions. The base specifies instructions (and their encoding), control flow, registers (and their sizes), memory and addressing, logic (i.e., integer) manipulation, and ancillaries. The base alone can implement a simplified general-purpose computer, with full software support, including a general-purpose compiler. The standard extensions are specified to work with all of the standard bases, and with each other without conflict. Many RISC-V computers might implement the compressed instructions extension to reduce power consumption, code size, and memory use.[2]: 97–99There are also future plans to supporthypervisorsandvirtualization.[3] Together with the supervisor extension, S, an RVGC instruction set, which includes one of the RV base instruction sets, the G collection of extensions (which includes "I", meaning that the base is non-embedded), and the C extension, defines all instructions needed to conveniently support a general purposeoperating system.[2]: 129, 154 To name the combinations of functions that may be implemented, a nomenclature is defined to specify them in Chapter 27 of the current ratified Unprivileged ISA Specification. The instruction set base is specified first, coding for RISC-V, the register bit-width, and the variant; e.g.,RV64IorRV32E. 
Letters specifying the implemented extensions then follow, in canonical order. Each letter may be followed by a major version number, optionally followed by "p" and a minor version number. A minor version defaults to 0 if absent, and the whole version defaults to 1.0 if absent. ThusRV64IMAFDmay be written asRV64I1p0M1p0A1p0F1p0D1p0or more simply asRV64I1M1A1F1D1. Underscores may be used between extensions for readability, for exampleRV32I2_M2_A2. The base, the extended integer and floating-point calculations, and the synchronization primitives for multi-core computing are considered necessary for general-purpose computing, and are given the shorthand "G". A small 32-bit computer for an embedded system might beRV32EC. A large 64-bit computer might beRV64GC; i.e.,RV64IMAFDCZicsr_Zifencei. With the growth in the number of extensions, the standard now provides for extensions to be named by a single "Z" followed by an alphabetical name and an optional version number. For example,Zifenceinames the instruction-fetch extension.Zifencei2andZifencei2p0name version 2.0 of the same. The first letter following the "Z" by convention indicates the most closely related alphabetical extension category,IMAFDQLCBJTPVN. Thus the Zam extension for misaligned atomics relates to the "A" standard extension. Unlike single-character extensions, Z extensions must be separated by underscores, grouped by category and then alphabetically within each category. For example,Zicsr_Zifencei_Zam. Extensions specific to supervisor privilege level are named in the same way using "S" for prefix. Extensions specific to hypervisor level are named using "H" for prefix. Machine level extensions are prefixed with the three letters "Zxm". Supervisor, hypervisor and machine level instruction set extensions are listed after the less privileged extensions. RISC-V developers may create their own non-standard instruction set extensions. These follow the "Z" naming convention, but with "X" as the prefix. They should be specified after all standard extensions, and if multiple non-standard extensions are listed, they should be listed alphabetically. Profiles and platforms for standard ISA choice lists are under discussion. This flexibility can be used to highly optimize a specialized design by including only the exact set of ISA features required for an application, but the same flexibility also leads to a combinatorial explosion in possible ISA choices. Profiles specify a much smaller common set of ISA choices that capture the most value for most users, and which thereby enable the software community to focus resources on building a rich software ecosystem.[54] The platform specification defines a set of platforms that specify requirements for interoperability between software and hardware. The Platform Policy defines the various terms used in this platform specification. The platform policy also provides the needed detail regarding the scope, coverage, naming, versioning, structure, life cycle and compatibility claims for the platform specification.[55] As a RISC architecture, the RISC-V ISA is aload–store architecture. Its floating-point instructions useIEEE 754floating-point. Notable features of the RISC-V ISA include: instruction bit field locations chosen to simplify the use ofmultiplexersin a CPU,[2]: 17a design that is architecturally neutral,[dubious–discuss]and a fixed location for the sign bit ofimmediate valuesto speed upsign extension.[2]: 17 The instruction set is designed for a wide range of uses.
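To make the earlier encoding remarks concrete, the following C sketch decodes the fields of a 32-bit I-type instruction word (the format used by addi, the loads, and jalr). The field positions follow the published base encoding, but the struct and function names here are purely illustrative. Note how the 12-bit immediate's sign bit is always instruction bit 31, which is what lets hardware begin sign extension before the instruction format has been fully identified.

    #include <stdint.h>

    /* Fields of an I-type instruction (addi, loads, jalr).
       Positions follow the base encoding: imm[11:0] in bits 31..20,
       rs1 in 19..15, funct3 in 14..12, rd in 11..7, opcode in 6..0. */
    typedef struct {
        unsigned opcode, rd, funct3, rs1;
        int32_t  imm;   /* sign-extended 12-bit immediate */
    } itype_fields;

    static itype_fields decode_itype(uint32_t insn) {
        itype_fields f;
        f.opcode = insn & 0x7fu;
        f.rd     = (insn >> 7)  & 0x1fu;
        f.funct3 = (insn >> 12) & 0x07u;
        f.rs1    = (insn >> 15) & 0x1fu;
        /* Portable two's-complement sign extension of the 12-bit field
           whose sign bit is instruction bit 31. */
        int32_t raw = (int32_t)(insn >> 20);      /* value in 0..0xfff */
        f.imm = (raw ^ 0x800) - 0x800;
        return f;
    }

The other instruction formats differ only in how their immediate bits are gathered; in every format that carries an immediate, the sign bit is instruction bit 31.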
The base instruction set has a fixed length of32-bitnaturally aligned instructions, and the ISA supports variable length extensions where each instruction can be any number of16-bitparcels in length.[2]: 7–10Extensions support smallembedded systems,personal computers,supercomputerswith vector processors, and warehouse-scaleparallel computers. The instruction set specification defines 32-bit and64-bitaddress spacevariants. The specification includes a description of a128-bitflat address space variant, as an extrapolation of 32- and 64-bit variants, but the 128-bit ISA remains "not frozen" intentionally, because as of 2023[update], there is still little practical experience with such large memory systems.[2]: 41 Unlike other academic designs which are typically optimized only for simplicity of exposition, the designers intended that the RISC-V instruction set be usable for practical computers. As of June 2019, version 2.2 of the user-space ISA[59]and version 1.11 of theprivilegedISA[3]arefrozen, permitting software and hardware development to proceed. The user-space ISA, now renamed the Unprivileged ISA, was updated, ratified and frozen as version 20191213.[2]An external debug specification is available as a draft, version 0.13.2.[60] RISC-V has 32integerregisters (or 16 in the embedded variant),[2]: 13, 33and when the floating-point extension is implemented, an additional 32floating-pointregisters.[2]: 63Except for memory access instructions, instructions address onlyregisters. The first integer register is azero register, and the remainder are general-purpose registers. A store to the zero register has no effect, and a read always provides 0. Using the zero register as a placeholder makes for a simpler instruction set. Control and status registers exist, but user-mode programs can access only those used for performance measurement and floating-point management. No instructions exist to save and restore multiple registers. Those were thought to be needless, too complex, and perhaps too slow.[49] Like many RISC designs, RISC-V is aload–store architecture: instructions address only registers, with load and store instructions conveying data to and from memory. Most load and store instructions include a 12-bit offset and two register identifiers. One register is the base register. The other register is the destination (for a load) or the source (for a store). The offset is added to a base register to get the address.[2]: 24Forming the address as a base register plus offset allows single instructions to access data structures. For example, if the base register points to the top of a stack, single instructions can access a subroutine's local variables in the stack. Likewise the load and store instructions can access a record-style structure or a memory-mapped I/O device. Using the constant zero register as a base address allows single instructions to access memory near address zero. Memory is addressed as 8-bit bytes, with instructions being inlittle-endianorder,[2]: 9–10and with data being in the byte order defined by the execution environment interface in which code is running.[2]: 3, 9–10, 24Words, up to the register size, can be accessed with the load and store instructions. RISC-V was originally specified as little-endian to resemble other familiar, successful computers, for example,x86.[2]: 9–10This also reduces a CPU's complexity and costs slightly less because it reads all sizes of words in the same order. 
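Byte order is easy to observe from software. The following minimal C program is a sketch that assumes a little-endian execution environment (the usual configuration for RISC-V Linux); it prints the least significant byte first because that byte sits at the lowest address.

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint32_t word = 0x11223344u;
        const uint8_t *bytes = (const uint8_t *)&word;
        /* On a little-endian execution environment the least significant
           byte is stored at the lowest address, so this prints: 44 33 22 11 */
        printf("%02x %02x %02x %02x\n", bytes[0], bytes[1], bytes[2], bytes[3]);
        return 0;
    }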
For example, the RISC-V instruction set decodes starting at the lowest-addressed byte of the instruction. Big-endian and bi-endian variants were defined for support of legacy code bases that assume big-endianness.[2]: 9–10The privileged ISA defines bits in themstatusandmstatushregisters that indicate and, optionally, control whether M-mode, S-mode, and U-mode memory accesses other than instruction fetches are little-endian or big-endian; those bits may be read-only, in which case the endianness of the implementation is hardwired, or may be writable.[3]: 23–24 An execution environment interface may allow accessed memory addresses not to be aligned to their word width, but accesses to aligned addresses may be faster; for example, simple CPUs may implement unaligned accesses with slow software emulation driven from an alignment failureinterrupt.[2]: 3, 24–25 Like many RISC instruction sets (and somecomplex instruction set computer(CISC) instruction sets, such asx86andIBM System/360and its successors throughz/Architecture), RISC-V lacks address-modes that write back to the registers. For example, it does not auto-increment.[2]: 24 RISC-V manages memory systems that are shared between CPUs orthreadsby ensuring a thread of execution always sees its memory operations in the programmed order. But between threads and I/O devices, RISC-V is simplified: it doesn't guarantee the order of memory operations, except by specific instructions, such asfence. Afenceinstruction guarantees that the results of predecessor operations are visible to successor operations of other threads or I/O devices.fencecan guarantee the order of combinations of both memory and memory-mapped I/O operations. E.g. it can separate memory read and write operations, without affecting I/O operations. Or, if a system can operate I/O devices in parallel with memory,fencedoesn't force them to wait for each other. One CPU with one thread may decodefenceasnop. Some RISC CPUs (such asMIPS,PowerPC,DLX, and Berkeley's RISC-I) place 16 bits of offset in the loads and stores. They set the upper 16 bits by aload upper wordinstruction. This permits upper-halfword values to be set easily, without shifting bits. However, most use of the upper half-word instruction makes 32-bit constants, like addresses. RISC-V uses aSPARC-like combination of 12-bit offsets and 20-bitset upperinstructions. The smaller 12-bit offset helps compact, 32-bit load and store instructions select two of 32 registers yet still have enough bits to support RISC-V's variable-length instruction coding.[2]: 16 RISC-V handles 32-bit constants and addresses with instructions that set the upper 20 bits of a 32-bit register. Load upper immediateluiloads 20 bits into bits 31 through 12. Then a second instruction such asaddican set the bottom 12 bits. Small numbers or addresses can be formed by using the zero register instead oflui. This method is extended to permitposition-independent codeby adding an instruction,auipcthat generates 20 upper address bits by adding an offset to the program counter and storing the result into a base register. This permits a program to generate 32-bit addresses that are relative to the program counter. The base register can often be used as-is with the 12-bit offsets of the loads and stores. If needed,addican set the lower 12 bits of a register. 
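The split between the two instructions is not entirely trivial, because addi sign-extends its 12-bit immediate; the upper 20-bit part therefore has to be rounded up whenever bit 11 of the constant is set. The C sketch below (illustrative only; it mirrors how the conventional %hi/%lo split is usually computed, though assembler details may vary) derives a lui/addi pair for some sample 32-bit constants and checks that each pair reconstructs the original value.

    #include <assert.h>
    #include <stdint.h>

    /* Split a 32-bit constant into the 20-bit value for lui and the
       sign-extended 12-bit value for addi.  Adding 0x800 before shifting
       rounds the upper part up whenever the low 12 bits would be
       sign-extended as negative, so (hi20 << 12) + lo12 recovers the value. */
    static void split_hi_lo(uint32_t value, uint32_t *hi20, int32_t *lo12) {
        *hi20 = (value + 0x800u) >> 12;        /* what lui would load        */
        *lo12 = (int32_t)(value & 0xfffu);     /* low 12 bits ...            */
        if (*lo12 >= 0x800) *lo12 -= 0x1000;   /* ... as addi sign-extends them */
    }

    int main(void) {
        uint32_t examples[] = { 0x12345678u, 0xdeadbeefu, 0x00000fffu, 0x7ffff800u };
        for (unsigned i = 0; i < sizeof examples / sizeof examples[0]; ++i) {
            uint32_t hi; int32_t lo;
            split_hi_lo(examples[i], &hi, &lo);
            /* lui rd, hi places hi in bits 31..12; addi rd, rd, lo adds lo. */
            assert((uint32_t)((hi << 12) + lo) == examples[i]);
        }
        return 0;
    }

auipc is handled the same way, with the adjusted upper part added to the program counter instead of being loaded directly, which is how 32-bit PC-relative addresses are formed.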
In 64-bit and 128-bit ISAs,luiandauipcsign-extend the result to get the larger address.[2]: 37 Some fast CPUs may interpret combinations of instructions as singlefusedinstructions.luiorauipcare good candidates to fuse withjalr,addi, loads or stores. RISC-V's subroutine calljal(jump and link) places its return address in a register. This is faster in many computer designs, because it saves a memory access compared to systems that push a return address directly on a stack in memory.jalhas a 20-bit signed (two's complement) offset. The offset is multiplied by 2, then added to the PC (program counter) to generate a relative address to a 32-bit instruction. If the resulting address is not 32-bit aligned (i.e. evenly divisible by 4), the CPU may force anexception.[2]: 20–23, Section 2.5 RISC-V CPUs jump to calculated addresses using ajump and link-register,jalrinstruction.jalris similar tojal, but gets its destination address by adding a 12-bit offset to a base register. (In contrast,jaladds a larger 20-bit offset to the PC.) jalr's bit format is like the register-relative loads and stores. Like them,jalrcan be used with the instructions that set the upper 20 bits of a base register to make 32-bit branches, either to an absolute address (usinglui) or a PC-relative one (usingauipcfor position-independent code). (Using a constant zero base address allows single-instruction calls to a small (the offset), fixed positive or negative address.) RISC-V recyclesjalandjalrto get unconditional 20-bit PC-relative jumps and unconditional register-based 12-bit jumps. Jumps just make the linkage register 0 so that no return address is saved.[2]: 20–23, Section 2.5 RISC-V also recyclesjalrto return from a subroutine: To do this,jalr's base register is set to be the linkage register saved byjalorjalr.jalr's offset is zero and the linkage register is zero, so that there is no offset, and no return address is saved. Like many RISC designs, in a subroutine call, a RISC-V compiler must use individual instructions to save registers to the stack at the start, and then restore these from the stack on exit. RISC-V has nosave multipleorrestore multipleregister instructions. These were thought to make the CPU too complex, and possibly slow.[61]This can take more code space. Designers planned to reduce code size with library routines to save and restore registers.[62] RISC-V has nocondition code registerorcarry bit. The designers believed that condition codes make fast CPUs more complex by forcing interactions between instructions in different stages of execution. This choice makes multiple-precision arithmetic more complex. Also, a few numerical tasks need more energy. As a result,predication(the conditional execution of instructions) is not supported. The designers claim that very fast, out-of-order CPU designs do predication anyway, by doing the comparison branch and conditional code in parallel, then discarding the unused path's effects. They also claim that even in simpler CPUs, predication is less valuable thanbranch prediction, which can prevent most stalls associated with conditional branches. Code without predication is larger, with more branches, but they also claim that acompressed instruction set(such as RISC-V's setC) solves that problem in most cases.[49][failed verification] Instead, RISC-V has short branches that perform comparisons: equal, not-equal, less-than, unsigned less-than, greater-than or equal and unsigned greater-than or equal. 
Ten comparison-branch operations are implemented with only six instructions, by reversing the order of operands in theassembler. For example,branch if greater thancan be done byless-thanwith a reversed order of operands.[2]: 20–23, Section 2.5 The comparing branches have a twelve-bit signed range, and jump relative to the PC.[2]: 20–23, Section 2.5 Unlike some RISC architectures, RISC-V does not include abranch delay slot, a position after a branch instruction that can be filled with an instruction that is executed whether or not the branch is taken.[2]: 20–23, Section 2.5RISC-V omits a branch delay slot because it complicates multicycle CPUs, superscalar CPUs, and long pipelines. Dynamicbranch predictorshave succeeded well enough to reduce the need for delayed branches.[49] On the first encounter with a branch, RISC-V CPUs should assume that a negative relative branch (i.e. the sign bit of the offset is "1") will be taken.[2]: 20–23, Section 2.5This assumes that a backward branch is a loop, and provides a default direction so that simple pipelined CPUs can fill their pipeline of instructions. Other than this, RISC-V does not requirebranch prediction, but core implementations are allowed to add it. RV32I reserves a "HINT" instruction space that presently does not contain any hints on branches;[2]: 28–29, Section 2.9RV64I does the same.[2]: 38–39, Section 5.4 RISC-V segregates math into a minimal set ofintegerinstructions (setI) withadd, subtract, shift, bitwise logicand comparing-branches. These can simulate most of the other RISC-V instruction sets with software. (The atomic instructions are a notable exception.) RISC-V integer instructions lack thecount leading zeroand bit-field operations normally used to speed software floating-point in a pure-integer processor, However, while nominally in the bit manipulation extension, the ratified Zbb, Zba and Zbs extensions contain further integer instructions including a count leading zero instruction. The integer multiplication instructions (setM) include signed and unsigned multiply and divide. Double-precision integer multiplies and divides are included, as multiplies and divides that produce thehigh wordof the result. The ISA document recommends that implementors of CPUs and compilersfusea standardized sequence of high and low multiply and divide instructions to one operation if possible.[2]: 43–45 Thefloating-pointinstructions (setF) include single-precision arithmetic and also comparison-branches similar to the integer arithmetic. It requires an additional set of 32 floating-point registers. These are separate from the integer registers. The double-precision floating point instructions (setD) generally assume that the floating-point registers are 64-bit (i.e., double-width), and theFsubset is coordinated with theDset. A quad-precision 128-bit floating-point ISA (Q) is also defined.[2]: 63–82RISC-V computers without floating-point can use a floating-point software library. RISC-V does not causeexceptionson arithmetic errors, includingoverflow,[2]: 17–20underflow, subnormal, and divide by zero.[2]: 44–45Instead, both integer and floating-point arithmetic produce reasonable default values, and floating-point instructions set status bits.[2]: 66Divide-by-zero can be discovered by one branch after the division.[2]: 44–45The status bits can be tested by an operating system or periodic interrupt. RISC-V supports computers that share memory between multiple CPUs andthreads. RISC-V's standard memory consistency model isrelease consistency. 
That is, loads and stores may generally be reordered, but some loads may be designated asacquireoperations which must precede later memory accesses, and some stores may be designated asreleaseoperations which must follow earlier memory accesses.[2]: 83–94 The base instruction set includes minimal support in the form of afenceinstructionto enforce memory ordering.[2]: 26–27Although this is sufficient (fence r, rwprovidesacquireandfence rw, wprovidesrelease), combined operations can be more efficient.[2]: Chapter 8 The atomic memory operation extension supports two types of atomic memory operations for release consistency. First, it provides general purposeload-reservedlrandstore-conditionalscinstructions.lrperforms a load, and tries to reserve that address for its thread. A later store-conditionalscto the reserved address will be performed only if the reservation is not broken by an intervening store from another source. If the store succeeds, a zero is placed in a register. If it failed, a non-zero value indicates that software needs to retry the operation. In either case, the reservation is released.[2]: Chapter 8 The second group of atomic instructions performread-modify-writesequences: a load (which is optionally a load-acquire) to a destination register, then an operation between the loaded value and a source register, then a store of the result (which may optionally be a store-release). Making the memory barriers optional permits combining the operations. The optional operations are enabled byacquireandreleasebits which are present in every atomic instruction. RISC-V defines nine possible operations: swap (use source register value directly); add; bitwise and, or, and exclusive-or; and signed and unsigned minimum and maximum.[2]: Chapter 8 A system design may optimize these combined operations more thanlrandsc. For example, if the destination register for a swap is the constant zero, the load may be skipped. If the value stored is unmodified since the load, the store may be skipped.[59]: 44 TheIBM System/370and its successors includingz/Architecture, andx86, both implement acompare-and-swap(cas) instruction, which tests and conditionally updates a location in memory: if the location contains an expected old value,casreplaces it with a given new value; it then returns an indication of whether it made the change. However, a simple load-type instruction is usually performed before thecasto fetch the old value. The classic problem is that if a thread reads (loads) a valueA, calculates a new valueC, and then uses (cas) to replaceAwithC, it has no way to know whether concurrent activity in another thread has replacedAwith some other valueBand then restored theAin between. In some algorithms (e.g., ones in which the values in memory are pointers to dynamically allocated blocks), thisABA problemcan lead to incorrect results. The most common solution employs adouble-widecasinstruction to update both the pointer and an adjacent counter; unfortunately, such an instruction requires a special instruction format to specify multiple registers, performs several reads and writes, and can have complex bus operation.[2]: 48–49 Thelr/scalternative is more efficient. It usually requires only one memory load, and minimizing slow memory operations is desirable. It's also exact: it controls all accesses to the memory cell, rather than just assuring a bit pattern. However, unlikecas, it can permitlivelock, in which two or more threads repeatedly cause each other's instructions to fail. 
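In C, these primitives are normally reached through the standard <stdatomic.h> operations rather than written by hand. The sketch below is illustrative, and the exact instruction selection depends on the compiler and target options: on a core with the A extension, an atomic fetch-and-add can typically compile to a single amoadd instruction with the acquire and release bits set, while a compare-and-swap is usually built from an lr/sc retry loop on cores without a dedicated compare-and-swap instruction.

    #include <stdatomic.h>
    #include <stdbool.h>

    /* Typically a single amoadd.w.aqrl with the A extension
       (compiler- and option-dependent). */
    int fetch_and_increment(atomic_int *counter) {
        return atomic_fetch_add_explicit(counter, 1, memory_order_acq_rel);
    }

    /* Usually an lr.w/sc.w retry loop: reserve the address, attempt the
       conditional store, and retry if the reservation was broken by
       another hart in the meantime. */
    bool try_publish(atomic_int *slot, int expected, int desired) {
        return atomic_compare_exchange_strong_explicit(
            slot, &expected, desired,
            memory_order_release, memory_order_relaxed);
    }

The compare-and-swap above is exactly the kind of lr/sc retry loop to which the forward-progress rules described next apply.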
RISC-V guarantees forward progress (no livelock) if the code follows rules on the timing and sequence of instructions: 1) It must use only theIsubset. 2) To prevent repetitive cache misses, the code (including the retry loop) must occupy no more than 16 consecutive instructions. 3) It must include no system or fence instructions, or taken backward branches between thelrandsc. 4) The backward branch to the retry loop must be to the original sequence.[2]: 48–49 The specification gives an example of how to use the read-modify-write atomic instructions to lock a data structure.[2]: 54 The standard RISC-V ISA specifies that all instructions are 32 bits. This makes for a particularly simple implementation, but like other RISC processors with 32-bit instruction encoding, results in larger code size than in instruction sets with variable-length instructions.[2]: 99[61] To compensate, RISC-V's32-bitinstructions are actually 30 bits;3⁄4of theopcodespace is reserved for an optional (but recommended) variable-lengthcompressedinstruction set, RVC, that includes 16-bit instructions. As inARM ThumbandMIPS16, the compressed instructions are simply alternative encodings for a subset of the larger instructions. Unlike the ARM or MIPS compressed sets, space was reserved from the start so there is no separate operating mode. Standard and compressed instructions may be intermixed freely.[2]: 97[61](Extension letter isC.)[2]: 97 Because (like Thumb-1 and MIPS16) the compressed instructions are simply alternate encodings (aliases) for a selected subset of larger instructions, the compression can be implemented in the assembler, and it is not essential for the compiler to even know about it. A prototype of RVC was tested in 2011.[61]The prototype code was 20% smaller than anx86PC andMIPScompressed code, and 2% larger than ARMThumb-2code.[61]It also substantially reduced both the needed cache memory and the estimated power use of the memory system.[61] The researcher intended to reduce the code's binary size for small computers, especiallyembedded computersystems. The prototype included 33 of the most frequently used instructions, recoded as compact 16-bit formats using operation codes previously reserved for the compressed set.[61]The compression was done in theassembler, with no changes to the compiler. Compressed instructions omitted fields that are often zero, used small immediate values or accessed subsets (16 or 8) of the registers.addiis very common and often compressible.[61] Much of the difference in size compared to ARM's Thumb set occurred because RISC-V, and the prototype, have no instructions to save and restore multiple registers. Instead, the compiler generated conventional instructions that access the stack. The prototype RVC assembler then often converted these to compressed forms that were half the size. However, this still took more code space than the ARM instructions that save and restore multiple registers. The researcher proposed to modify the compiler to call library routines to save and restore registers. These routines would tend to remain in a code cache and thus run fast, though probably not as fast as a save-multiple instruction.[61] Standard RVC requires occasional use of 32-bit instructions. 
Several nonstandard RVC proposals are complete, requiring no 32-bit instructions, and are said to have higher densities than standard RVC.[63][64]Another proposal builds on these, and claims to use less coding range as well.[65] An instruction set for the smallestembeddedCPUs (set E) is reduced in other ways: Only 16 of the 32 integer registers are supported.[2]: Chapter 4All current extensions may be used; a floating-point extension to use the integer registers for floating-point values is being considered. The privileged instruction set supports only machine mode, user mode and memory schemes that use base-and-bound address relocation.[3] Discussion has occurred for a microcontroller profile for RISC-V, to ease development of deeply embedded systems. It centers on faster, simple C-language support for interrupts, simplified security modes and a simplifiedPOSIXapplication binary interface.[66] Correspondents have also proposed smaller, non-standard, 16-bitRV16EISAs: Several serious proposals would use the 16-bitCinstructions with 8 × 16-bit registers.[64][63]An April fools' joke proposed a very practical arrangement: Utilize 16 × 16-bit integer registers, with the standardEIMCISAs (including 32-bit instructions.) The joke was to usebank switchingwhen a 32-bit CPU would be clearly superior with the larger address space.[67] RISC-V's ISA includes a separateprivilegedinstruction set specification, which mostly describes three privilege levels plus an orthogonal hypervisor mode. As of December 2021[update], version 1.12 is ratified by RISC-V International.[3] Version 1.12 of the specification supports several types of computer systems: These correspond roughly to systems with up to fourringsof privilege and security, at most: machine, hypervisor, supervisor and user. Each layer also is expected to have a thin layer of standardized supporting software that communicates to a more-privileged layer, or hardware.[3] The ISA also includes a hypervisor mode that isorthogonalto the user and supervisor modes.[68]The basic feature is a configuration bit that either permits supervisor-level code to access hypervisor registers, or causes an interrupt on accesses. This bit lets supervisor mode directly handle the hardware needed by a hypervisor. This simplifies the implementation of hypervisors that are hosted by an operating system. This is a popular mode to run warehouse-scale computers. To support non-hosted hypervisors, the bit can cause these accesses to interrupt to a hypervisor. The design also simplifies nesting of hypervisors, in which a hypervisor runs under a hypervisor, and if necessary it lets the kernel use hypervisor features within its own kernel code. As a result, the hypervisor form of the ISA supports five modes: machine, supervisor, user, supervisor-under-hypervisor and user-under-supervisor. The privileged instruction set specification explicitly defineshardwarethreads, orharts. Multiple hardware threads are a common practice in more-capable computers. When one thread is stalled, waiting for memory, others can often proceed. Hardware threads can help make better use of the large number of registers and execution units in fast out-of-order CPUs. Finally, hardware threads can be a simple, powerful way to handleinterrupts: No saving or restoring of registers is required, simply executing a different hardware thread. However, the only hardware thread required in a RISC-V computer is thread zero.[3] Interrupts and exceptions are handled together. 
Exceptions are caused by instruction execution including illegal instructions and system calls, while interrupts are caused by external events. The existing control and status register definitions support RISC-V's error and memory exceptions, and a small number of interrupts, typically via an "advanced core local interruptor" (ACLINT).[69]For systems with more interrupts, the specification also defines aplatform-level interrupt controller(PLIC) to coordinate large number of interrupts among multiple processors. Interrupts always start at the highest-privileged machine level, and the control registers of each level have explicitforwardingbits to route interrupts to less-privileged code. For example, the hypervisor need not include software that executes on each interrupt to forward an interrupt to an operating system. Instead, on set-up, it can set bits to forward the interrupt.[3] Several memory systems are supported in the specification. Physical-only is suited to the simplest embedded systems. There are also fourUNIX-stylevirtual memorysystems for memory cached in mass-storage systems. The virtual memory systems supportMMUwith four sizes, with addresses sized 32, 39, 48 and 57 bits. All virtual memory systems support 4 KiB pages, multilevel page-table trees and use very similar algorithms to walk the page table trees. All are designed for either hardware or software page-table walking. To optionally reduce the cost of page table walks, super-sized pages may be leaf pages in higher levels of a system's page table tree. SV32 is only supported on 32-bit implementations, has a two-layer page table tree and supports 4 MiB superpages. SV39 has a three level page table, and supports 2 MiB superpages and 1 GiB gigapages. SV48 is required to support SV39. It also has a 4-level page table and supports 2 MiB superpages, 1 GiB gigapages, and 512 GiB terapages. SV57 has a 5-level page table and supports 2 MiB superpages, 1 GiB gigapages, 512 GiB terapages and 256 TiB petapages. Superpages are aligned on the page boundaries for the next-lowest size of page.[3] Some bit-manipulation ISA extensions were ratified in November 2021 (Zba, Zbb, Zbc, Zbs).[51]The Zba, Zbb, and Zbs extensions are arguably extensions of the standard I integer instructions: Zba contains instructions to speed up the computation of the addresses of array elements in arrays of datatypes of size 2, 4, or 8 bytes (sh1add, sh2add, sh3add), and for 64 (and 128) bit processors when indexed with unsigned integers (add.uw, sh1add.uw, sh2add.uw, sh3add.uw and slli.uw). The Zbb instructions contains operations to count leading, trailing 0 bits or all 1 bits in a full and 32 word operations (clz, clzw, ctz, ctzw, cpop, cpopw), byte order reversion (rev8), logical instructions with negation of the second input (andn,orn, xnor), sign and zero extension (sext.b, sext.h, zext.h) that could not be provided as special cases of other instructions (andi, addiw, add.wu), min and max of (signed and unsigned) integers, (left and right) rotation of bits in a register and 32-bit words (rori,roriw, ror, rorw, rol, rolw), and a byte wise "or combine" operation which allows detection of a zero byte in a full register, useful for handling C-style null terminated strings functions. The Zbs extension allows setting, getting, clearing, and toggling individual bits in a register by their index (bseti, bset, bexti, bext, bclri, bclr, binvi,binv). 
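Compilers usually expose these bit-manipulation instructions through ordinary C idioms and builtins rather than through new syntax. The sketch below is illustrative; whether each function really becomes a single instruction depends on the compiler and on targeting a core that includes the extensions (for example, an ISA string along the lines of rv64gc_zbb_zbs).

    #include <stdint.h>

    /* With Zbb these can map to clz, ctz and cpop; the zero guards match
       the instructions' defined result of 64 for a zero input on RV64. */
    unsigned leading_zeros(uint64_t x)    { return x ? (unsigned)__builtin_clzll(x) : 64u; }
    unsigned trailing_zeros(uint64_t x)   { return x ? (unsigned)__builtin_ctzll(x) : 64u; }
    unsigned population_count(uint64_t x) { return (unsigned)__builtin_popcountll(x); }

    /* With Zbs each of these can become a single bset, bclr, binv or bext. */
    uint64_t bit_set(uint64_t x, unsigned i)     { return x |  (1ULL << i); }
    uint64_t bit_clear(uint64_t x, unsigned i)   { return x & ~(1ULL << i); }
    uint64_t bit_invert(uint64_t x, unsigned i)  { return x ^  (1ULL << i); }
    uint64_t bit_extract(uint64_t x, unsigned i) { return (x >> i) & 1u; }

On targets without the extensions the same functions still compile, only to longer instruction sequences.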
The Zbc extension has instructions for "carryless multiplication", which does the multiplication ofpolynomialsover theGalois fieldGF(2) (clmul, clmulh, clmulr). These are useful for cryptography and CRC checks of data integrity. Done well, a more specialised bit-manipulation subset can aid cryptographic, graphic, and mathematical operations. Further instructions that have been discussed include instructions to shift in ones, a generalized bit-reverse, shuffle and crossbar permutations, bit-field place, extract and deposit pack two words, bytes or halfwords in one register, CRC instructions, bit-matrix operations (RV64 only), conditional mix, conditional move, funnel shifts. The criteria for inclusion documented in the draft were compliant with RISC-V philosophies and ISA formats, substantial improvements in code density or speed (i.e., at least a 3-for-1 reduction in instructions), and substantial real-world applications, including preexisting compiler support. Version 0.93 of the bit-manipulation extension includes those instructions;[70]some of them are now in version 1.0.1 of the scalar andentropy sourceinstructions cryptography extension.[53] Packed-SIMD instructions are widely used by commercial CPUs to inexpensively accelerate multimedia and otherdigital signal processing.[49]For simple, cost-reduced RISC-V systems, the base ISA's specification proposed to use the floating-point registers' bits to perform parallel single instruction, multiple data (SIMD) sub-word arithmetic. In 2017 a vendor published a more detailed proposal to the mailing list, and this can be cited as version 0.1.[71]As of 2019[update], the efficiency of this proposed ISA varies from 2x to 5x a base CPU for a variety of DSP codecs.[72]The proposal lacked instruction formats and a license assignment to RISC-V International, but it was reviewed by the mailing list.[71]Some unpopular parts of this proposal were that it added a condition code, the first in a RISC-V design, linked adjacent registers (also a first), and has a loop counter that can be difficult to implement in some microarchitectures. The proposedvector-processinginstruction set may make the packedSIMDset obsolete. The designers hope to have enough flexibility that a CPU can implement vector instructions in a standard processor's registers. This would enable minimal implementations with similar performance to a multimedia ISA, as above. However, a true vector coprocessor could execute the same code with higher performance.[73] As of 19 September 2021[update], the vector extension is at version 1.0.[74]It is a conservative, flexible design of a general-purpose mixed-precision vector processor, suitable to executecompute kernels. Code would port easily to CPUs with differing vector lengths, ideally without recompiling.[73] In contrast, short-vector SIMD extensions are less convenient. These are used inx86, ARM andPA-RISC. In these, a change in word-width forces a change to the instruction set to expand the vector registers (in the case of x86, from 64-bitMMXregisters to 128-bitStreaming SIMD Extensions(SSE), to 256-bitAdvanced Vector Extensions(AVX), andAVX-512). The result is a growing instruction set, and a need to port working code to the new instructions. In the RISC-V vector ISA, rather than fix the vector length in the architecture, instructions (vsetvli,vsetivli, andvsetvl) are available which take a requested size and sets the vector length to the minimum of the hardware limit and the requested size. 
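The effect of this can be sketched in scalar C. In the sketch below, request_vl is a hypothetical stand-in for what vsetvli does in hardware: it returns how many elements the implementation will handle in this iteration, never more than were requested (and at least one when any were requested), so the same loop runs unchanged on machines with different vector lengths. Real code would use the vector instructions or compiler auto-vectorization rather than the inner scalar loop.

    #include <stddef.h>

    /* Hypothetical helper standing in for vsetvli: the hardware chooses the
       number of elements (1 <= vl <= requested, when requested > 0) that it
       will process this iteration. */
    extern size_t request_vl(size_t requested);

    /* y[i] += a * x[i] for n elements, strip-mined in vector-length-agnostic
       style: nothing in the loop depends on the machine's actual vector length. */
    void axpy(size_t n, float a, const float *x, float *y) {
        size_t i = 0;
        while (i < n) {
            size_t vl = request_vl(n - i);
            for (size_t k = 0; k < vl; ++k)   /* stands in for one vector operation */
                y[i + k] += a * x[i + k];
            i += vl;
        }
    }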
So, the RISC-V proposal is more like aCray's long-vector design or ARM's Scalable Vector Extension. That is, each vector in up to 32 vectors is the same length.[74]: 25 The application specifies the total vector width it requires, and the processor determines the vector length it can provide with available on-chip resources. This takes the form of an instruction (vsetcfg) with four immediate operands, specifying the number of vector registers of each available width needed. The total must be no more than the addressable limit of 32, but may be less if the application does not require them all. The vector length is limited by the available on-chip storage divided by the number of bytes of storage needed for each entry. (Added hardware limits may also exist, which in turn may permit SIMD-style implementations.)[73] Outside of vector loops, the application can zero the number of requested vector registers, saving the operating system the work of preserving them oncontext switches.[73] The vector length is not only architecturally variable, but designed to vary at run time also. To achieve this flexibility, the instruction set is likely to use variable-width data paths and variable-type operations using polymorphic overloading.[73]The plan is that these can reduce the size and complexity of the ISA and compiler.[73] Recent experimental vector processors with variable-width data paths also show profitable increases in operations per: second (speed), area (lower cost), and watt (longer battery life).[75] Unlike a typical moderngraphics processing unit, there are no plans to provide special hardware to supportbranch predication. Instead, lower cost compiler-based predication will be used.[73][76] There is a preliminary specification for RISC-V's hardware-assisteddebugger. The debugger will use a transport system such as Joint Test Action Group (JTAG) or Universal Serial Bus (USB) to access debug registers. A standard hardware debug interface may support either astandardized abstract interfaceorinstruction feeding.[77][78] As of January 2017[update], the exact form of theabstract interfaceremains undefined, but proposals include a memory mapped system with standardized addresses for the registers of debug devices or a command register and a data register accessible to the communication system.[77]Correspondents claim that similar systems are used byFreescale'sbackground debug mode interface(BDM) for some CPUs,ARM,OpenRISC, andAeroflex'sLEON.[77] Ininstruction feeding, the CPU will process a debug exception to execute individual instructions written to a register. This may be supplemented with a data-passing register and a module to directly access the memory. Instruction feeding lets the debugger access the computer exactly as software would. It also minimizes changes in the CPU, and adapts to many types of CPU. This was said to be especially apt for RISC-V because it is designed explicitly for many types of computers. 
The data-passing register allows a debugger to write a data-movement loop to RAM, and then execute the loop to move data into or out of the computer at a speed near the maximum speed of the debug system's data channel.[77]Correspondents say that similar systems are used byMIPS TechnologiesMIPS,Intel Quark,Tensilica'sXtensa, and forFreescalePower ISACPUs'background debug mode interface(BDM).[77] A vendor proposed a hardware trace subsystem for standardization, donated a conforming design, and initiated a review.[79][80]The proposal is for a hardware module that can trace code execution on most RISC-V CPUs. To reduce the data rate, and permit simpler or less-expensive paths for the trace data, the proposal does not generate trace data that can be calculated from a binary image of the code. It sends only data that indicates "uninferrable" paths through the program, such as which conditional branches are taken. To reduce the data rates, branches that can be calculated, such as unconditional branches, are not traced. The proposed interface between the module and the control unit is a logic signal for each uninferrable type of instruction. Addresses and other data are to be provided in a specialized bus attached to appropriate data sources in a CPU. The data structure sent to an external trace unit is a series of short messages with the needed data. The details of the data channel are intentionally not described in the proposal, because several are likely to make sense. The RISC-V organization maintains a list of RISC-V CPU and SoC implementations.[81]Due to trade wars and possible sanctions that would prevent China from accessing proprietary ISAs, as of 2023 the country was planning to shift most of its CPU and MCU architectures to RISC-V cores.[82] In 2023, the European Union was set to provide 270 million euros within a so-called Framework Partnership Agreement (FPA) to a single company that was able and willing to carry out a RISC-V CPU development project aimed at supercomputers, servers, and data centers.[83]The European Union's aim was to become independent from political developments in other countries and to "strengthen its digital sovereignty and set standards, rather than following those of others."[84] According toThe Register, Chinese media reported in March 2025 from the conference where the server-grade CPU Alibaba DAMO Xuantie C930 was launched that senior Alibaba Cloud executives had predicted that RISC-V would become a mainstream cloud architecture as early as 2030.[85]According toReuters, Chinese government bodies in 2025 were working on “guidance” that would promote widespread use of RISC-V throughout China.[85] SiFiveof Santa Clara, California, announced their first RISC-Vout-of-orderhigh performance CPU core, the U8 Series Processor IP, in 2019.[86]SiFive was established specifically for developing RISC-V hardware and began releasing processor models in 2017.[87][88]These included a quad-core, 64-bit (RV64GC)system on a chip(SoC) capable of running general-purpose operating systems such asLinux.[89] DAMO Academy,[90][91]the research arm ofAlibaba GroupofHangzhou, China, announced the 2.5 GHz 16-core 64-bit (RV64GC) Xuantie 910out-of-orderprocessor in July 2019.[92]In October 2021 the Xuantie 910 was released as an open-source design.[93]In November 2023, DAMO unveiled three updated processors: the Xuantie C920, Xuantie C907 and Xuantie R910; these processors were aimed at a variety of application areas, including autonomous vehicles, artificial intelligence (AI), enterprise 
hard drives, and network communications.[94] In a move whichThe Registersaid provided evidence that the "permissively licensed RISC-V instruction set architecture" appeared to be gaining "significant momentum in China", the server-grade CPU Xuantie C930 was launched in March 2025.[95]The C930 CPU core was advertised as ideal for servers, PCs, and autonomous cars.[95]It creates significant competition for the California-based companySiFiveand its P870 core.[96] SpacemiT, a Chinese company headquartered in Hangzhou, developed the SpacemiT Key Stone K1 in 2024, an octa-core 64-bit processor that is available in theBPI-F3computer, as well as the following other devices: LicheePi 3A, the Milk-V Jupiter, theDeepComputingDC-ROMA LAPTOP II, and the SpacemiT MUSEbook featuring the Bianbu OS operating system. The processor is based on the X60 core design, integrates an IMG BXE-2-32 GPU, and supports the vector extension RVV 1.0.[97]In January 2025, SpacemiT announced the development of a server processor with up to 64 RISC-V cores, called "VitalStone V100" and made with a 12nm-class process technology.[98][99][100] Existing proprietary implementations include: DeepComputing of Hong Kong announced the release on 13 April 2023 of the "world's first laptop with RISC-V processor"; the notebook, called "ROMA", was delivered to its first customers in August 2023[175]and came pre-installed with the ChineseopenKylinLinux operating system.[176]The device's basic model, available fromAlibaba, was still expensive at roughly US$1500[177]considering it was powered by the not very fast[178]Alibaba (DAMO) CPU "XuanTie C910". An upgrade in June 2024 doubled the core count to 8 cores and increased the clock speed to 2 GHz (from 1.5 GHz), while dropping the price to US$1,000.[179]The processor used was aSpacemiTSoC K1.[180][181]A collaboration withCanonical[182]meant that the ROMA II came pre-installed with the major international Linux distributionUbuntu.[183] In 2024, DeepComputing announced a collaboration withFramework Computerto produce amainboardfor their Framework Laptop 13.[184][185]As of 4 February 2025, it is ready to ship and mainly targeted at developers. It features a 4-core StarFive JH7110 processor.[186] A normal problem for a new instruction set is both a lack of CPU designs and of software, which limit its usability and reduce adoption.[25]In addition to already having a large number of CPU hardware designs, RISC-V is also supported by toolchains, operating systems (e.g.Linux),middleware[vague]and design software. Available RISC-V software tools include aGNU Compiler Collection(GCC) toolchain (withGDB, the debugger), anLLVMtoolchain, theOVPsimsimulator (and library of RISC-V Fast Processor Models), the Spike simulator, and a simulator inQEMU(RV32GC/RV64GC).JEP 422: Linux/RISC-V Portis already integrated into mainlineOpenJDKrepository. Java 21+ Temurin OpenJDK builds for RISC-V are available fromAdoptium. Operating system support exists for theLinuxkernel,FreeBSD,NetBSD, andOpenBSDbut the supervisor-mode instructions were unstandardized before version 1.11 of the privileged ISA specification,[3]so this support is provisional. The preliminary FreeBSD port to the RISC-V architecture was upstreamed in February 2016, and shipped in FreeBSD 11.0.[187][139] Ports of theDebian,[188][11]Fedora,[189]andopenSUSE[190]Linux distributions, and a port ofHaiku,[191]are stabilizing (all only support 64-bit RISC-V, with no plans to support the 32-bit version). 
In June 2024, Hong Kong company DeepComputing announced the commercial availability of the first RISC-V laptop in the world to run the popular Linux operating system Ubuntu in its standard form ("out of the box").[18] "As RISC-V is becoming a competitive ISA in multiple markets, porting Ubuntu to RISC-V to become the reference OS [operating system] for early adopters was a natural choice," Ubuntu developer Canonical stated in June 2024.[192] A port of Das U-Boot exists.[193] UEFI Spec v2.7 has defined the RISC-V binding, and a TianoCore port has been done by HPE engineers[194] and is expected to be upstreamed. A RISC-V boot deep dive was done as part of openSUSE Hackweek 20.[195] There is a preliminary port of the seL4 microkernel.[196][197] Hex Five released the first Secure IoT Stack for RISC-V with FreeRTOS support.[198] xv6, a modern reimplementation of Sixth Edition Unix in ANSI C used for teaching at MIT, has also been ported. Pharos RTOS has been ported to 64-bit RISC-V[199] (including time and memory protection). See also Comparison of real-time operating systems. A simulator exists to run a RISC-V Linux system in a web browser using JavaScript.[200][201][202] QEMU supports running (using binary translation) 32- and 64-bit RISC-V systems (e.g. Linux) with many emulated or virtualized devices (serial, parallel, USB, network, storage, real-time clock, watchdog, audio), as well as running RISC-V Linux binaries (translating syscalls to the host kernel). It also supports multi-core emulation (SMP).[203] The CREATOR simulator is portable and allows the user to learn various assembly languages of different processors (CREATOR has examples with an implementation of RISC-V and MIPS32 instructions).[204][205][206][207][208] Several languages have been applied to creating RISC-V IP cores, including Chisel, a Scala-based hardware description language[209] which can reduce the designs to Verilog for use in devices, and the CodAL processor description language, which has been used to describe RISC-V processor cores and to generate corresponding HDKs (RTL, testbench and UVM) and SDKs.[210] The RISC-V International Compliance Task Group has a GitHub repository for RV32IMC.[211] The extensible educational simulator WepSIM implements a microprogrammed subset of RISC-V instructions (RV32I+M) and allows the execution of subroutines at both the assembly and microprogramming levels.[212][213]
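As a concrete illustration of the toolchain and emulator support listed above, the following sketch shows a trivial C program together with one common way to cross-compile it and run it under QEMU's user-mode emulation. The tool names (riscv64-linux-gnu-gcc, qemu-riscv64) are the ones typically packaged on Debian-style hosts and are given here only as an assumption; other distributions and bare-metal toolchains use different names and flags.

/*
 * hello_riscv.c - trivial program for exercising a RISC-V cross toolchain.
 *
 * Typical build and run steps on a Debian-style host (package and tool
 * names may differ elsewhere):
 *
 *   riscv64-linux-gnu-gcc -static -O2 -o hello_riscv hello_riscv.c
 *   qemu-riscv64 ./hello_riscv     (user-mode emulation: the RISC-V
 *                                   syscalls are translated to the host kernel)
 */
#include <stdio.h>

int main(void)
{
    printf("Hello from RV64GC user space\n");
    return 0;
}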
https://en.wikipedia.org/wiki/RISC-V#Vector_set
Abarrel processoris aCPUthat switches betweenthreadsof execution on everycycle. ThisCPU designtechnique is also known as "interleaved" or "fine-grained"temporal multithreading. Unlikesimultaneous multithreadingin modernsuperscalararchitectures, it generally does not allow execution of multiple instructions in one cycle. Likepreemptive multitasking, each thread of execution is assigned its ownprogram counterand otherhardware registers(each thread'sarchitectural state). A barrel processor can guarantee that each thread will execute one instruction everyncycles, unlike apreemptive multitaskingmachine, that typically runs one thread of execution for tens of millions of cycles, while all other threads wait their turn. A technique calledC-slowingcan automatically generate a corresponding barrel processor design from a single-tasking processor design. Ann-way barrel processor generated this way acts much likenseparatemultiprocessingcopies of the original single-tasking processor, each one running at roughly 1/nthe original speed.[citation needed] One of the earliest examples of a barrel processor was the I/O processing system in theCDC 6000 seriessupercomputers. These executed oneinstruction(or a portion of an instruction) from each of 10 different virtual processors (called peripheral processors or PPs) before returning to the first processor.[1]FromCDC 6000 serieswe read that "The peripheral processors are collectively implemented as a barrel processor. Each executes routines independently of the others. They are a loose predecessor of bus mastering ordirect memory access." One motivation for barrel processors was to reduce hardware costs. In the case of the CDC 6x00 PPUs, the digital logic of the processor was much faster than the core memory, so rather than having ten separate processors, there are ten separate core memory units for the PPUs, but they all share the single set of processor logic. Another example is theHoneywell 800, which had 8 groups of registers, allowing up to 8 concurrent programs. After each instruction, the processor would (in most cases) switch to the next active program in sequence.[2] Barrel processors have also been used as large-scale central processors. TheTeraMTA(1988) was a large-scale barrel processor design with 128 threads per core.[3][4]The MTA architecture has seen continued development in successive products, such as theCray Urika-GD, originally introduced in 2012 (as the YarcData uRiKA) and targeted at data-mining applications.[5] Barrel processors are also found in embedded systems, where they are particularly useful for their deterministicreal-timethread performance. An early example is the “Dual CPU” version of thefour-bitCOP400that was introduced byNational Semiconductorin 1981. This single-chipmicrocontrollercontains two ostensibly independent CPUs that share instructions, memory, and most IO devices. In reality, the dual CPUs are a single two-thread barrel processor. It works by duplicating certain sections of the processor—those that store thearchitectural state—but not duplicating the main execution resources such asALU, buses, and memory. Separate architectural states are established with duplicated A (accumulators), B (pointer registers), C (carry flags), N (stack pointers), and PC (program counters).[6] Another example is theXMOSXCore XS1(2007), a four-stage barrel processor with eight threads per core. (Newer processors fromXMOSalso have the same type of architecture.) 
The XS1 is found in Ethernet, USB, audio, and control devices, and other applications where I/O performance is critical. When the XS1 is programmed in the 'XC' language, software-controlled direct memory access may be implemented. Barrel processors have also been used in specialized devices such as the eight-thread Ubicom IP3023 network I/O processor (2004). Some 8-bit microcontrollers by Padauk Technology feature barrel processors with up to 8 threads per core. A single-tasking processor spends a lot of time idle, not doing anything useful whenever a cache miss or pipeline stall occurs. Advantages of barrel processors over single-tasking processors therefore include keeping the execution unit busy, since an instruction from another thread can be issued on the very next cycle after a stall, and the deterministic per-thread timing noted above. The main disadvantages are that each individual thread runs at only a fraction of the speed of a comparable single-tasking processor, since it is issued at most once every n cycles, and that the architectural state must be replicated for every hardware thread.
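The round-robin interleaving described above can be modelled in a few lines of code. The following sketch is a toy software model, not a description of any particular product: it keeps a separate architectural state (program counter and one accumulator) for each hardware thread and issues exactly one "instruction" per cycle, rotating through the threads, so every thread advances exactly once every N_THREADS cycles.

/* Minimal software model of an n-way barrel processor: one shared
 * execution unit, N_THREADS copies of the architectural state, and a
 * fixed round-robin issue order.  Purely illustrative. */
#include <stdio.h>

#define N_THREADS 4
#define N_CYCLES  12

struct thread_state {    /* per-thread architectural state */
    int pc;              /* program counter                */
    int acc;             /* a single accumulator register  */
};

/* The "program": every instruction just adds its operand to the
 * accumulator; each thread uses a different operand. */
static void execute_one(struct thread_state *t, int operand)
{
    t->acc += operand;
    t->pc  += 1;
}

int main(void)
{
    struct thread_state threads[N_THREADS] = {0};

    for (int cycle = 0; cycle < N_CYCLES; cycle++) {
        int id = cycle % N_THREADS;          /* barrel: rotate every cycle */
        execute_one(&threads[id], id + 1);
        printf("cycle %2d: thread %d  pc=%d acc=%d\n",
               cycle, id, threads[id].pc, threads[id].acc);
    }
    return 0;
}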
https://en.wikipedia.org/wiki/Barrel_processor
Tensor Processing Unit(TPU) is anAI acceleratorapplication-specific integrated circuit(ASIC) developed byGoogleforneural networkmachine learning, using Google's ownTensorFlowsoftware.[2]Google began using TPUs internally in 2015, and in 2018 made them available forthird-partyuse, both as part of its cloud infrastructure and by offering a smaller version of the chip for sale. Compared to agraphics processing unit, TPUs are designed for a high volume of lowprecisioncomputation (e.g. as little as8-bitprecision)[3]with more input/output operations perjoule, without hardware for rasterisation/texture mapping.[4]The TPUASICsare mounted in a heatsink assembly, which can fit in a hard drive slot within a data centerrack, according toNorman Jouppi.[5] Different types of processors are suited for different types of machine learning models. TPUs are well suited forCNNs, while GPUs have benefits for some fully-connected neural networks, and CPUs can have advantages forRNNs.[6] According to Jonathan Ross, one of the original TPU engineers,[1]and later the founder ofGroq, three separate groups at Google were developing AI accelerators, with the TPU being the design that was ultimately selected. He was not aware ofsystolic arraysat the time and upon learning the term thought "Oh, that's called a systolic array? It just seemed to make sense."[7] The tensor processing unit was announced in May 2016 atGoogle I/O, when the company said that the TPU had already been used insidetheir data centersfor over a year.[5][4]Google's 2017 paper describing its creation cites previous systolic matrix multipliers of similar architecture built in the 1990s.[8]The chip has been specifically designed for Google'sTensorFlowframework, a symbolic math library which is used formachine learningapplications such asneural networks.[9]However, as of 2017 Google still usedCPUsandGPUsfor other types ofmachine learning.[5]OtherAI acceleratordesigns are appearing from other vendors also and are aimed atembeddedandroboticsmarkets. Google's TPUs are proprietary. Some models are commercially available, and on February 12, 2018,The New York Timesreported that Google "would allow other companies to buy access to those chips through its cloud-computing service."[10]Google has said that they were used in theAlphaGo versus Lee Sedolseries of human-versus-machineGogames,[4]as well as in theAlphaZerosystem, which producedChess,Shogiand Go playing programs from the game rules alone and went on to beat the leading programs in those games.[11]Google has also used TPUs forGoogle Street Viewtext processing and was able to find all the text in the Street View database in less than five days. InGoogle Photos, an individual TPU can process over 100 million photos a day.[5]It is also used inRankBrainwhich Google uses to provide search results.[12] Google provides third parties access to TPUs through itsCloud TPUservice as part of theGoogle Cloud Platform[13]and through itsnotebook-basedservicesKaggleandColaboratory.[14][15] The first-generation TPU is an8-bitmatrix multiplicationengine, driven withCISC instructionsby the host processor across aPCIe 3.0bus. It is manufactured on a 28nmprocess with a die size ≤ 331mm2. Theclock speedis 700MHzand it has athermal design powerof 28–40W.
It has 28MiBof on chip memory, and 4MiBof32-bitaccumulatorstaking the results of a 256×256systolic arrayof 8-bitmultipliers.[8]Within the TPU package is 8GiBofdual-channel2133 MHzDDR3 SDRAMoffering 34 GB/s of bandwidth.[18]Instructions transfer data to or from the host, perform matrix multiplications orconvolutions, and applyactivation functions.[8] The second-generation TPU was announced in May 2017.[27]Google stated the first-generation TPU design was limited bymemory bandwidthand using 16GBofHigh Bandwidth Memoryin the second-generation design increased bandwidth to 600 GB/s and performance to 45 teraFLOPS.[18]The TPUs are then arranged into four-chip modules with a performance of 180 teraFLOPS.[27]Then 64 of these modules are assembled into 256-chip pods with 11.5 petaFLOPS of performance.[27]Notably, while the first-generation TPUs were limited to integers, the second-generation TPUs can also calculate infloating point, introducing thebfloat16format invented byGoogle Brain. This makes the second-generation TPUs useful for both training and inference of machine learning models. Google has stated these second-generation TPUs will be available on theGoogle Compute Enginefor use in TensorFlow applications.[28] The third-generation TPU was announced on May 8, 2018.[29]Google announced that processors themselves are twice as powerful as the second-generation TPUs, and would be deployed in pods with four times as many chips as the preceding generation.[30][31]This results in an 8-fold increase in performance per pod (with up to 1,024 chips per pod) compared to the second-generation TPU deployment. On May 18, 2021, Google CEO Sundar Pichai spoke about TPU v4 Tensor Processing Units during his keynote at the Google I/O virtual conference. TPU v4 improved performance by more than 2x over TPU v3 chips. Pichai said "A single v4 pod contains 4,096 v4 chips, and each pod has 10x the interconnect bandwidth per chip at scale, compared to any other networking technology.”[32]An April 2023 paper by Google claims TPU v4 is 5-87% faster than an NvidiaA100at machine learningbenchmarks.[33] There is also an "inference" version, called v4i,[34]that does not requireliquid cooling.[35] In 2021, Google revealed the physical layout of TPU v5 is being designed with the assistance of a novel application ofdeep reinforcement learning.[36]Google claims TPU v5 is nearly twice as fast as TPU v4,[37]and based on that and the relative performance of TPU v4 over A100, some speculate TPU v5 as being as fast as or faster than anH100.[38] Similar to the v4i being a lighter-weight version of the v4, the fifth generation has a "cost-efficient"[39]version called v5e.[21]In December 2023, Google announced TPU v5p which is claimed to be competitive with the H100.[40] In May 2024, at theGoogle I/Oconference, Google announced TPU v6, which became available in preview in October 2024.[41]Google claimed a 4.7 times performance increase relative to TPU v5e,[42]via larger matrix multiplication units and an increased clock speed. High bandwidth memory (HBM) capacity and bandwidth have also doubled. A pod can contain up to 256 Trillium units.[43] In April 2025, at Google Cloud Next conference, Google unveiled TPU v7. This new chip, called Ironwood, will come in two configurations: a 256-chip cluster and a 9,216-chip cluster. Ironwood will have a peak computational performance rate of 4,614 TFLOP/s.[44] In July 2018, Google announced the Edge TPU. 
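The first-generation datacenter TPU described above is an 8-bit matrix multiply engine whose products are summed into 32-bit accumulators, and the Edge TPU introduced here likewise supports only 8-bit math. The following sketch illustrates only that multiply-accumulate arithmetic pattern in plain software, with a made-up matrix size; it does not model the 256×256 systolic dataflow or any actual TPU interface.

/* Illustrative int8 multiply / int32 accumulate, the arithmetic pattern
 * behind the TPU matrix unit.  N is a toy dimension instead of 256, and
 * the loop nest ignores the systolic data movement entirely. */
#include <stdint.h>
#include <stdio.h>

#define N 4

static void matmul_int8(const int8_t a[N][N], const int8_t b[N][N],
                        int32_t c[N][N])
{
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            int32_t acc = 0;                      /* 32-bit accumulator */
            for (int k = 0; k < N; k++)
                acc += (int32_t)a[i][k] * (int32_t)b[k][j];
            c[i][j] = acc;
        }
}

int main(void)
{
    int8_t a[N][N], b[N][N];
    int32_t c[N][N];

    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            a[i][j] = (int8_t)(i - j);      /* small signed test values */
            b[i][j] = (int8_t)(i + j - N);
        }

    matmul_int8(a, b, c);
    printf("c[0][0] = %d\n", c[0][0]);
    return 0;
}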
The Edge TPU is Google's purpose-built ASIC chip designed to run machine learning (ML) models for edge computing, meaning it is much smaller and consumes far less power than the TPUs hosted in Google datacenters (also known as Cloud TPUs[45]). In January 2019, Google made the Edge TPU available to developers with a line of products under the Coral brand. The Edge TPU is capable of 4 trillion operations per second with 2 W of electrical power.[46] The product offerings include a single-board computer (SBC), a system on module (SoM), a USB accessory, a mini PCI-e card, and an M.2 card. The SBC Coral Dev Board and Coral SoM both run Mendel Linux OS – a derivative of Debian.[47][48] The USB, PCI-e, and M.2 products function as add-ons to existing computer systems, and support Debian-based Linux systems on x86-64 and ARM64 hosts (including Raspberry Pi). The machine learning runtime used to execute models on the Edge TPU is based on TensorFlow Lite.[49] The Edge TPU is only capable of accelerating forward-pass operations, which means it is primarily useful for performing inferences (although it is possible to perform lightweight transfer learning on the Edge TPU[50]). The Edge TPU also supports only 8-bit math, so for a network to be compatible it must either be trained using the TensorFlow quantization-aware training technique or, since late 2019, be converted with post-training quantization. On November 12, 2019, Asus announced a pair of single-board computers (SBCs) featuring the Edge TPU: the Asus Tinker Edge T and Tinker Edge R boards, designed for IoT and edge AI. The SBCs officially support Android and Debian operating systems.[51][52] ASUS has also demonstrated a mini PC called Asus PN60T featuring the Edge TPU.[53] On January 2, 2020, Google announced the Coral Accelerator Module and Coral Dev Board Mini, to be demonstrated at CES 2020 later the same month. The Coral Accelerator Module is a multi-chip module featuring the Edge TPU, with PCIe and USB interfaces for easier integration. The Coral Dev Board Mini is a smaller SBC featuring the Coral Accelerator Module and a MediaTek 8167s SoC.[54][55] On October 15, 2019, Google announced the Pixel 4 smartphone, which contains an Edge TPU called the Pixel Neural Core. Google describes it as "customized to meet the requirements of key camera features in Pixel 4", using a neural network search that sacrifices some accuracy in favor of minimizing latency and power use.[56] Google followed the Pixel Neural Core by integrating an Edge TPU into a custom system-on-chip named Google Tensor, which was released in 2021 with the Pixel 6 line of smartphones.[57] The Google Tensor SoC demonstrated "extremely large performance advantages over the competition" in machine learning-focused benchmarks; although instantaneous power consumption was relatively high, the improved performance meant less energy was consumed overall because peak performance was needed for shorter periods.[58] In 2019, Singular Computing, founded in 2009 by Joseph Bates, a visiting professor at MIT,[59] filed suit against Google alleging patent infringement in TPU chips.[60] By 2020, Google had successfully lowered the number of claims the court would consider to just two: claim 53 of US 8407273 filed in 2012 and claim 7 of US 9218156 filed in 2013, both of which claim a dynamic range of 10⁻⁶ to 10⁶ for floating point numbers, which the standard float16 cannot do (without resorting to subnormal numbers) as it only has five bits for the exponent.
In a 2023 court filing, Singular Computing specifically called out Google's use ofbfloat16, as that exceeds the dynamic range offloat16.[61]Singular claims non-standard floating point formats werenon-obviousin 2009, but Google retorts that the VFLOAT[62]format, with configurable number of exponent bits, existed asprior artin 2002.[63]By January 2024, subsequent lawsuits by Singular had brought the number of patents being litigated up to eight. Towards the end of the trial later that month, Google agreed to a settlement with undisclosed terms.[64][65]
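To make the dynamic-range point above concrete: float16 has five exponent bits, while bfloat16 has eight (the same as float32), trading mantissa precision for range. The short program below is a sketch that simply derives each format's approximate largest and smallest positive normal values from its bit widths, rather than querying any half-precision library support.

/* Derive the largest and smallest positive *normal* values of IEEE-style
 * binary formats from their bit widths alone.  For exp_bits exponent bits
 * and man_bits stored mantissa bits:
 *   bias       = 2^(exp_bits-1) - 1
 *   max normal = (2 - 2^-man_bits) * 2^bias
 *   min normal = 2^(1 - bias)
 */
#include <math.h>
#include <stdio.h>

static void show(const char *name, int exp_bits, int man_bits)
{
    int bias = (1 << (exp_bits - 1)) - 1;
    double max_normal = (2.0 - ldexp(1.0, -man_bits)) * ldexp(1.0, bias);
    double min_normal = ldexp(1.0, 1 - bias);
    printf("%-9s max %.4g   min normal %.4g\n", name, max_normal, min_normal);
}

int main(void)
{
    show("float16",  5, 10);  /* ~6.1e-5 .. 65504: cannot reach 1e6, and 1e-6
                                 needs subnormals                            */
    show("bfloat16", 8,  7);  /* ~1.2e-38 .. 3.4e38: easily covers 1e-6..1e6 */
    return 0;
}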
https://en.wikipedia.org/wiki/Tensor_Processing_Unit
Thehistory of supercomputinggoes back to the 1960s when a series of computers atControl Data Corporation(CDC) were designed bySeymour Crayto use innovative designs and parallelism to achieve superior computational peak performance.[1]TheCDC 6600, released in 1964, is generally considered the first supercomputer.[2][3]However, some earlier computers were considered supercomputers for their day such as the 1954IBM NORCin the 1950s,[4]and in the early 1960s, theUNIVAC LARC(1960),[5]theIBM 7030 Stretch(1962),[6]and theManchesterAtlas(1962), all[specify]of which were of comparable power.[citation needed] While the supercomputers of the 1980s used only a few processors, in the 1990s, machines with thousands of processors began to appear both in the United States and in Japan, setting new computational performance records. By the end of the 20th century, massively parallel supercomputers with thousands of "off-the-shelf" processors similar to those found in personal computers were constructed and broke through theteraFLOPScomputational barrier. Progress in the first decade of the 21st century was dramatic and supercomputers with over 60,000 processors appeared, reaching petaFLOPS performance levels. The term "Super Computing" was first used in theNew York Worldin 1929[7]to refer to large custom-builttabulatorsthatIBMhad made forColumbia University.[8] There were several lines of second generation computers that were substantially faster than most contemporary mainframes. These included The second generation saw the introduction of features intended to supportmultiprogrammingandmultiprocessorconfigurations, including master/slave (supervisor/problem) mode, storage protection keys, limit registers, protection associated with address translation, andatomic instructions. In 1957, a group of engineers leftSperry Corporationto formControl Data Corporation(CDC) inMinneapolis, Minnesota.Seymour Crayleft Sperry a year later to join his colleagues at CDC.[1]In 1960, Cray completed theCDC 1604, one of the first generation of commercially successfultransistorizedcomputers and at the time of its release, the fastest computer in the world.[9]However, the sole fully transistorizedHarwell CADETwas operational in 1951, and IBM delivered its commercially successful transistorizedIBM 7090in 1959. Around 1960, Cray decided to design a computer that would be the fastest in the world by a large margin. After four years of experimentation along with Jim Thornton, and Dean Roush and about 30 other engineers, Cray completed theCDC 6600in 1964. Cray switched from germanium to silicon transistors, built byFairchild Semiconductor, that used the planar process. These did not have the drawbacks of the mesa silicon transistors. He ran them very fast, and thespeed of lightrestriction forced a very compact design with severe overheating problems, which were solved by introducing refrigeration, designed by Dean Roush.[10]The 6600 outperformed the industry's prior recordholder, theIBM 7030 Stretch,[clarification needed]by a factor of three.[11][12]With performance of up to threemegaFLOPS,[13][14]it was dubbed asupercomputerand defined the supercomputing market when two hundred computers were sold at $9 million each.[9][15] The 6600 gained speed by "farming out" work to peripheral computing elements, freeing the CPU (Central Processing Unit) to process actual data. 
The MinnesotaFORTRANcompiler for the machine was developed by Liddiard and Mundstock at theUniversity of Minnesotaand with it the 6600 could sustain 500 kiloflops on standard mathematical operations.[16]In 1968, Cray completed theCDC 7600, again the fastest computer in the world.[9]At 36MHz, the 7600 had 3.6 times theclock speedof the 6600, but ran significantly faster due to other technical innovations. They sold only about 50 of the 7600s, not quite a failure. Cray left CDC in 1972 to form his own company.[9]Two years after his departure CDC delivered theSTAR-100, which at 100 megaflops was three times the speed of the 7600. Along with theTexas Instruments ASC, the STAR-100 was one of the first machines to usevector processing⁠‍—‍the idea having been inspired around 1964 by theAPL programming language.[17][18] In 1956, a team atManchester Universityin the United Kingdom began development ofMUSE⁠‍—‍a name derived frommicrosecondengine‍—‍with the aim of eventually building a computer that could operate at processing speeds approaching one microsecond per instruction, about one millioninstructions per second.[19]Mu(the name of the Greek letterμ) is a prefix in the SI and other systems of units denoting a factor of 10−6(one millionth). At the end of 1958,Ferrantiagreed to collaborate with Manchester University on the project, and the computer was shortly afterwards renamedAtlas, with the joint venture under the control ofTom Kilburn. The first Atlas was officially commissioned on 7 December1962‍—‍nearly three years before the Cray CDC 6600 supercomputer wasintroduced‍—‍as one of the world's firstsupercomputers. It was considered at the time of its commissioning to be the most powerful computer in the world, equivalent to fourIBM 7094s. It was said that whenever Atlas went offline half of the United Kingdom's computer capacity was lost.[20]The Atlas pioneeredvirtual memoryandpagingas a way to extend its working memory by combining its 16,384 words of primarycore memorywith an additional 96K words of secondarydrum memory.[21]Atlas also pioneered theAtlas Supervisor, "considered by many to be the first recognizable modernoperating system".[20] Four years after leaving CDC, Cray delivered the 80 MHzCray-1in 1976, and it became the most successful supercomputer in history.[18][22]The Cray-1, which used integrated circuits with two gates per chip, was avector processor. It introduced a number of innovations, such aschaining, in which scalar and vector registers generate interim results that can be used immediately, without additional memory references which would otherwise reduce computational speed.[10][23]TheCray X-MP(designed bySteve Chen) was released in 1982 as a 105 MHz shared-memoryparallelvector processorwith better chaining support and multiple memory pipelines. All three floating point pipelines on the X-MP could operate simultaneously.[23]By 1983 Cray and Control Data were supercomputer leaders; despite its lead in the overall computer market, IBM was unable to produce a profitable competitor.[24] TheCray-2, released in 1985, was a four-processorliquid cooledcomputer totally immersed in a tank ofFluorinert, which bubbled as it operated.[10]It reached 1.9 gigaflops and was the world's fastest supercomputer, and the first to break the gigaflop barrier.[25]The Cray-2 was a totally new design. 
It did not use chaining and had a high memory latency, but used much pipelining and was ideal for problems that required large amounts of memory.[23]The software costs in developing a supercomputer should not be underestimated, as evidenced by the fact that in the 1980s the cost for software development at Cray came to equal what was spent on hardware.[26]That trend was partly responsible for a move away from the in-house,Cray Operating SystemtoUNICOSbased onUnix.[26] TheCray Y-MP, also designed by Steve Chen, was released in 1988 as an improvement of the X-MP and could have eightvector processorsat 167 MHz with a peak performance of 333 megaflops per processor.[23]In the late 1980s, Cray's experiment on the use ofgallium arsenidesemiconductors in theCray-3did not succeed. Seymour Cray began to work on amassively parallelcomputer in the early 1990s, but died in a car accident in 1996 before it could be completed. Cray Research did, however, produce such computers.[22][10] TheCray-2which set the frontiers of supercomputing in the mid to late 1980s had only 8 processors. In the 1990s, supercomputers with thousands of processors began to appear. Another development at the end of the 1980s was the arrival of Japanese supercomputers, some of which were modeled after the Cray-1. During the first half of theStrategic Computing Initiative, some massively parallel architectures were proven to work, such as theWARP systolic array, message-passingMIMDlike theCosmic Cubehypercube,SIMDlike theConnection Machine, etc. In 1987, a TeraOPS Computing Technology Program was proposed, with a goal of achieving 1 teraOPS (a trillion operations per second) by 1992, which was considered achievable by scaling up any of the previously proven architectures.[27] TheSX-3/44Rwas announced byNEC Corporationin 1989 and a year later earned the fastest-in-the-world title with a four-processor model.[28]However, Fujitsu'sNumerical Wind Tunnelsupercomputer used 166 vector processors to gain the top spot in 1994. It had a peak speed of 1.7 gigaflops per processor.[29][30]TheHitachi SR2201obtained a peak performance of 600 gigaflops in 1996 by using 2,048 processors connected via a fast three-dimensionalcrossbarnetwork.[31][32][33] In the same timeframe theIntel Paragoncould have 1,000 to 4,000Intel i860processors in various configurations, and was ranked the fastest in the world in 1993. The Paragon was aMIMDmachine which connected processors via a high speed two-dimensional mesh, allowing processes to execute on separate nodes; communicating via theMessage Passing Interface.[34]By 1995, Cray was also shipping massively parallel systems, e.g. theCray T3Ewith over 2,000 processors, using a three-dimensionaltorus interconnect.[35][36] The Paragon architecture soon led to the IntelASCI Redsupercomputer in the United States, which held the top supercomputing spot to the end of the 20th century as part of theAdvanced Simulation and Computing Initiative. This was also a mesh-based MIMD massively-parallel system with over 9,000 compute nodes and well over 12 terabytes of disk storage, but used off-the-shelfPentium Proprocessors that could be found in everyday personal computers. ASCI Red was the first system ever to break through the 1 teraflop barrier on the MP-Linpackbenchmark in 1996; eventually reaching 2 teraflops.[37] Significant progress was made in the first decade of the 21st century. The efficiency of supercomputers continued to increase, but not dramatically so. 
TheCray C90used 500 kilowatts of power in 1991, while by 2003 theASCI Qused 3,000 kW while being 2,000 times faster, increasing the performance per watt 300 fold.[38] In 2004, theEarth Simulatorsupercomputer built byNECat the Japan Agency for Marine-Earth Science and Technology (JAMSTEC) reached 35.9 teraflops, using 640 nodes, each with eight proprietaryvector processors.[39] TheIBMBlue Genesupercomputer architecture found widespread use in the early part of the 21st century, and 27 of the computers on theTOP500list used that architecture. The Blue Gene approach is somewhat different in that it trades processor speed for low power consumption so that a larger number of processors can be used at air cooled temperatures. It can use over 60,000 processors, with 2048 processors "per rack", and connects them via a three-dimensional torus interconnect.[40][41] Progress inChinahas been rapid, in that China placed 51st on the TOP500 list in June 2003; this was followed by 14th in November 2003, 10th in June 2004, then 5th during 2005, before gaining the top spot in 2010 with the 2.5 petaflopTianhe-Isupercomputer.[42][43] In July 2011, the 8.1 petaflop JapaneseK computerbecame the fastest in the world, using over 60,000SPARC64 VIIIfxprocessors housed in over 600 cabinets. The fact that the K computer is over 60 times faster than the Earth Simulator, and that the Earth Simulator ranks as the 68th system in the world seven years after holding the top spot, demonstrates both the rapid increase in top performance and the widespread growth of supercomputing technology worldwide.[44][45][46]By 2014, the Earth Simulator had dropped off the list and by 2018 the K computer had dropped out of the top 10. By 2018,Summithad become the world's most powerful supercomputer, at 200 petaFLOPS. In 2020, the Japanese once again took the top spot with theFugaku supercomputer, capable of 442 PFLOPS. Finally, starting in 2022 and until the present (as of December 2023[update]), theworld's fastest supercomputerhad become the Hewlett Packard EnterpriseFrontier, also known as the OLCF-5 and hosted at theOak Ridge Leadership Computing Facility(OLCF) inTennessee, United States. The Frontier is based on theCray EX, is the world's firstexascalesupercomputer, and uses onlyAMDCPUsandGPUs; it achieved anRmaxof 1.102exaFLOPS, which is 1.102 quintillion operations per second.[47][48][49][50][51] This is a list of the computers which appeared at the top of theTOP500list since 1993.[52]The "Peak speed" is given as the "Rmax" rating. TheCoComand its later replacement, theWassenaar Arrangement, legally regulated, i.e. required licensing and approval and record-keeping; or banned entirely, the export ofhigh-performance computers(HPCs) to certain countries. Such controls have become harder to justify, leading to loosening of these regulations. Some have argued these regulations were never justified.[53][54][55][56][57][58]
https://en.wikipedia.org/wiki/History_of_supercomputing
Approaches tosupercomputer architecturehave taken dramatic turns since the earliest systems were introduced in the 1960s. Earlysupercomputerarchitectures pioneered bySeymour Crayrelied on compact innovative designs and localparallelismto achieve superior computational peak performance.[1]However, in time the demand for increased computational power ushered in the age ofmassively parallelsystems. While the supercomputers of the 1970s used only a fewprocessors, in the 1990s, machines with thousands of processors began to appear and by the end of the 20th century, massively parallel supercomputers with tens of thousands ofcommercial off-the-shelfprocessors were the norm. Supercomputers of the 21st century can use over 100,000 processors (some beinggraphic units) connected by fast connections.[2][3] Throughout the decades, the management ofheat densityhas remained a key issue for most centralized supercomputers.[4][5][6]The large amount of heat generated by a system may also have other effects, such as reducing the lifetime of other system components.[7]There have been diverse approaches to heat management, from pumpingFluorinertthrough the system, to a hybrid liquid-air cooling system or air cooling with normalair conditioningtemperatures.[8][9] Systems with a massive number of processors generally take one of two paths: in one approach, e.g., ingrid computingthe processing power of a large number of computers in distributed, diverse administrative domains, is opportunistically used whenever a computer is available.[10]In another approach, a large number of processors are used in close proximity to each other, e.g., in acomputer cluster. In such a centralizedmassively parallelsystem the speed and flexibility of the interconnect becomes very important, and modern supercomputers have used various approaches ranging from enhancedInfinibandsystems to three-dimensionaltorus interconnects.[11][12] Since the late 1960s the growth in the power and proliferation of supercomputers has been dramatic, and the underlying architectural directions of these systems have taken significant turns. 
While the early supercomputers relied on a small number of closely connected processors that accessedshared memory, the supercomputers of the 21st century use over 100,000 processors connected by fast networks.[2][3] Throughout the decades, the management ofheat densityhas remained a key issue for most centralized supercomputers.[4]Seymour Cray's "get the heat out" motto was central to his design philosophy and has continued to be a key issue in supercomputer architectures, e.g., in large-scale experiments such asBlue Waters.[4][5][6]The large amount of heat generated by a system may also have other effects, such as reducing the lifetime of other system components.[7] There have been diverse approaches to heat management,e.g., theCray 2pumpedFluorinertthrough the system, whileSystem Xused a hybrid liquid-air cooling system and theBlue Gene/Pis air-cooled with normalair conditioningtemperatures.[8][13][14]The heat from theAquasarsupercomputer is used to warm a university campus.[15][16] The heat density generated by a supercomputer has a direct dependence on the processor type used in the system, with more powerful processors typically generating more heat, given similar underlyingsemiconductor technologies.[7]While early supercomputers used a few fast, closely packed processors that took advantage of local parallelism (e.g.,pipeliningandvector processing), in time the number of processors grew, and computing nodes could be placed further away, e.g., in acomputer cluster, or could be geographically dispersed ingrid computing.[2][17]As the number of processors in a supercomputer grows, "component failure rate" begins to become a serious issue. If a supercomputer uses thousands of nodes, each of which may fail once per year on the average, then the system will experience severalnode failureseach day.[9] As the price/performance ofgeneral purpose graphic processors(GPGPUs) has improved, a number ofpetaflopsupercomputers such asTianhe-IandNebulaehave started to rely on them.[18]However, other systems such as theK computercontinue to use conventional processors such asSPARC-based designs and the overall applicability of GPGPUs in general purpose high performance computing applications has been the subject of debate, in that while a GPGPU may be tuned to score well on specific benchmarks its overall applicability to everyday algorithms may be limited unless significant effort is spent to tune the application towards it.[19]However, GPUs are gaining ground and in 2012 theJaguar supercomputerwas transformed intoTitanby replacing CPUs with GPUs.[20][21][22] As the number of independent processors in a supercomputer increases, the way they access data in thefile systemand how they share and accesssecondary storageresources becomes prominent. Over the years a number of systems fordistributed file managementwere developed,e.g., theIBM General Parallel File System,BeeGFS, theParallel Virtual File System,Hadoop, etc.[23][24]A number of supercomputers on theTOP100list such as the Tianhe-I useLinux'sLustre file system.[4] TheCDC 6600series of computers were very early attempts at supercomputing and gained their advantage over the existing systems by relegating work toperipheral devices, freeing thecentral processing unit(CPU) to process actual data. 
With the MinnesotaFORTRANcompiler the 6600 could sustain 500 kiloflops on standard mathematical operations.[25] Other early supercomputers such as theCray 1andCray 2that appeared afterwards used a small number of fast processors that worked in harmony and were uniformly connected to the largest amount ofshared memorythat could be managed at the time.[3] These early architectures introducedparallel processingat the processor level, with innovations such asvector processing, in which the processor can perform several operations during oneclock cycle, rather than having to wait for successive cycles. In time, as the number of processors increased, different architectural issues emerged. Two issues that need to be addressed as the number of processors increases are the distribution of memory and processing. In the distributed memory approach, each processor is physically packaged close with some local memory. The memory associated with other processors is then "further away" based onbandwidthandlatencyparameters innon-uniform memory access. In the 1960spipeliningwas viewed as an innovation, and by the 1970s the use ofvector processorshad been well established. By the 1980s, many supercomputers used parallel vector processors.[2] The relatively small number of processors in early systems, allowed them to easily use ashared memory architecture, which allows processors to access a common pool of memory. In the early days a common approach was the use ofuniform memory access(UMA), in which access time to a memory location was similar between processors. The use ofnon-uniform memory access(NUMA) allowed a processor to access its own local memory faster than other memory locations, whilecache-only memory architectures(COMA) allowed for the local memory of each processor to be used as cache, thus requiring coordination as memory values changed.[26] As the number of processors increases, efficientinterprocessor communicationand synchronization on a supercomputer becomes a challenge. A number of approaches may be used to achieve this goal. For instance, in the early 1980s, in theCray X-MPsystem,shared registerswere used. In this approach, all processors had access toshared registersthat did not move data back and forth but were only used for interprocessor communication and synchronization. However, inherent challenges in managing a large amount of shared memory among many processors resulted in a move to moredistributed architectures.[27] During the 1980s, as the demand for computing power increased, the trend to a much larger number of processors began, ushering in the age ofmassively parallelsystems, withdistributed memoryanddistributed file systems,[2]given thatshared memory architecturescould not scale to a large number of processors.[28]Hybrid approaches such asdistributed shared memoryalso appeared after the early systems.[29] The computer clustering approach connects a number of readily available computing nodes (e.g. personal computers used as servers) via a fast, privatelocal area network.[30]The activities of the computing nodes are orchestrated by "clustering middleware", a software layer that sits atop the nodes and allows the users to treat the cluster as by and large one cohesive computing unit, e.g. via asingle system imageconcept.[30] Computer clustering relies on a centralized management approach which makes the nodes available as orchestratedshared servers. 
It is distinct from other approaches such aspeer-to-peerorgrid computingwhich also use many nodes, but with a far moredistributed nature.[30]By the 21st century, theTOP500organization's semiannual list of the 500 fastest supercomputers often includes many clusters, e.g. the world's fastest in 2011, theK computerwith adistributed memory, cluster architecture.[31][32] When a large number of local semi-independent computing nodes are used (e.g. in a cluster architecture) the speed and flexibility of the interconnect becomes very important. Modern supercomputers have taken different approaches to address this issue, e.g.Tianhe-1uses a proprietary high-speed network based on theInfinibandQDR, enhanced withFeiTeng-1000CPUs.[4]On the other hand, theBlue Gene/L system uses a three-dimensionaltorusinterconnect with auxiliary networks for global communications.[11]In this approach each node is connected to its six nearest neighbors. A similar torus was used by theCray T3E.[12] Massive centralized systems at times use special-purpose processors designed for a specific application, and may usefield-programmable gate arrays(FPGA) chips to gain performance by sacrificing generality. Examples of special-purpose supercomputers includeBelle,[33]Deep Blue,[34]andHydra,[35]for playingchess,Gravity Pipefor astrophysics,[36]MDGRAPE-3for protein structure computation molecular dynamics[37]andDeep Crack,[38]for breaking theDEScipher. Grid computinguses a large number of computers in distributed, diverse administrative domains. It is an opportunistic approach which uses resources whenever they are available.[10]An example isBOINCavolunteer-based, opportunistic grid system.[39]SomeBOINCapplications have reached multi-petaflop levels by using close to half a million computers connected on the internet, whenever volunteer resources become available.[40]However, these types of results often do not appear in theTOP500ratings because they do not run the general purposeLinpackbenchmark. 
Although grid computing has had success in parallel task execution, demanding supercomputer applications such asweather simulationsorcomputational fluid dynamicshave remained out of reach, partly due to the barriers in reliable sub-assignment of a large number of tasks as well as the reliable availability of resources at a given time.[39][41][42] Inquasi-opportunistic supercomputinga large number of geographicallydisperse computersare orchestrated withbuilt-in safeguards.[43]The quasi-opportunistic approach goes beyondvolunteer computingon a highly distributed systems such asBOINC, or generalgrid computingon a system such as Globus by allowing themiddlewareto provide almost seamless access to many computing clusters so that existing programs in languages such asFortranorCcan be distributed among multiple computing resources.[43] Quasi-opportunistic supercomputing aims to provide a higher quality of service thanopportunistic resource sharing.[44]The quasi-opportunistic approach enables the execution of demanding applications within computer grids by establishing grid-wise resource allocation agreements; andfault tolerantmessage passing to abstractly shield against the failures of the underlying resources, thus maintaining some opportunism, while allowing a higher level of control.[10][43][45] The air-cooled IBMBlue Genesupercomputer architecture trades processor speed for low power consumption so that a larger number of processors can be used at room temperature, by using normal air-conditioning.[14][46]The second-generation Blue Gene/P system has processors with integrated node-to-node communication logic.[47]It is energy-efficient, achieving 371MFLOPS/W.[48] TheK computeris awater-cooled, homogeneous processor,distributed memorysystem with acluster architecture.[32][49]It uses more than 80,000SPARC64 VIIIfxprocessors, each with eightcores, for a total of over 700,000 cores—almost twice as many as any other system. It comprises more than 800 cabinets, each with 96 computing nodes (each with 16 GB of memory), and 6 I/O nodes. 
Although it is more powerful than the next five systems on the TOP500 list combined, at 824.56 MFLOPS/W it has the lowest power to performance ratio of any current major supercomputer system.[50][51]The follow-up system for the K computer, called thePRIMEHPC FX10uses the same six-dimensional torus interconnect, but still only one processor per node.[52] Unlike the K computer, theTianhe-1Asystem uses a hybrid architecture and integrates CPUs and GPUs.[4]It uses more than 14,000Xeongeneral-purpose processors and more than 7,000Nvidia Teslageneral-purpose graphics processing units(GPGPUs) on about 3,500blades.[53]It has 112 computer cabinets and 262 terabytes of distributed memory; 2 petabytes of disk storage is implemented viaLustreclustered files.[54][55][56][4]Tianhe-1 uses a proprietary high-speed communication network to connect the processors.[4]The proprietary interconnect network was based on theInfinibandQDR, enhanced with Chinese-madeFeiTeng-1000CPUs.[4]In the case of the interconnect the system is twice as fast as the Infiniband, but slower than some interconnects on other supercomputers.[57] The limits of specific approaches continue to be tested, as boundaries are reached through large-scale experiments, e.g., in 2011 IBM ended its participation in theBlue Waterspetaflops project at the University of Illinois.[58][59]The Blue Waters architecture was based on the IBMPOWER7processor and intended to have 200,000 cores with a petabyte of "globally addressable memory" and 10 petabytes of disk space.[6]The goal of a sustained petaflop led to design choices that optimized single-core performance, and hence a lower number of cores. The lower number of cores was then expected to help performance on programs that did not scale well to a large number of processors.[6]The large globally addressable memory architecture aimed to solve memory address problems in an efficient manner, for the same type of programs.[6]Blue Waters had been expected to run at sustained speeds of at least one petaflop, and relied on the specific water-cooling approach to manage heat. In the first four years of operation, the National Science Foundation spent about $200 million on the project. IBM released thePower 775computing node derived from that project's technology soon thereafter, but effectively abandoned the Blue Waters approach.[58][59] Architectural experiments are continuing in a number of directions, e.g. theCyclops64system uses a "supercomputer on a chip" approach, in a direction away from the use of massive distributed processors.[60][61]Each 64-bit Cyclops64 chip contains 80 processors, and the entire system uses aglobally addressablememory architecture.[62]The processors are connected with non-internally blocking crossbar switch and communicate with each other via global interleaved memory. There is nodata cachein the architecture, but half of eachSRAMbank can be used as a scratchpad memory.[62]Although this type of architecture allows unstructured parallelism in a dynamically non-contiguous memory system, it also produces challenges in the efficient mapping of parallel algorithms to amany-coresystem.[61]
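Several of the interconnects mentioned above, such as those of the Blue Gene/L and the Cray T3E, are three-dimensional tori in which every node communicates with its six nearest neighbors and the coordinates wrap around at the faces. The sketch below is an illustration of that neighbor relation rather than any vendor's routing code: it computes the six neighbor coordinates of a node in an X×Y×Z torus.

/* Neighbours of node (x,y,z) in a DIM_X * DIM_Y * DIM_Z 3-D torus: one step
 * along each axis in each direction, with wrap-around at the faces, giving
 * the six nearest neighbours.  Purely illustrative; real machines encode
 * this relation in the network hardware. */
#include <stdio.h>

#define DIM_X 4
#define DIM_Y 4
#define DIM_Z 4

static void torus_neighbours(int x, int y, int z, int out[6][3])
{
    int dim[3] = { DIM_X, DIM_Y, DIM_Z };
    int pos[3] = { x, y, z };

    for (int axis = 0, n = 0; axis < 3; axis++) {
        for (int step = -1; step <= 1; step += 2, n++) {
            out[n][0] = pos[0];
            out[n][1] = pos[1];
            out[n][2] = pos[2];
            /* wrap-around: x = 0 stepping -1 lands on x = DIM_X - 1 */
            out[n][axis] = (pos[axis] + step + dim[axis]) % dim[axis];
        }
    }
}

int main(void)
{
    int nb[6][3];
    torus_neighbours(0, 2, 3, nb);
    for (int i = 0; i < 6; i++)
        printf("neighbour %d: (%d, %d, %d)\n", i, nb[i][0], nb[i][1], nb[i][2]);
    return 0;
}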
https://en.wikipedia.org/wiki/Supercomputer_architecture
This is a non-exhaustive list of hairstyles, excluding facial hairstyles. One style described here required that the hair be combed back around the sides of the head. The teeth edge of a comb was then used to define a central parting flowing from the crown to the nape at the back of the head, resembling, to many, the rear end of a duck. The hair on the top front of the head was usually styled as a pompadour, and the sides and back were styled to resemble the folded wings of the duck. Long hairstyles may be considered those which reach beyond the shoulders on women or past the chin on men, or which require long hair to create. Another listed style consists of long hair worn in several ponytails running from the front of the head to the back, resembling a mohawk; unlike a usual mohawk, hair is not cut from the sides. The word for this style is a portmanteau of 'ponytail' and 'mohawk'.
https://en.wikipedia.org/wiki/List_of_hairstyles
Aprotective hairstyleis a term predominantly used to describe hairstyles suitable forAfro-textured hairwhose purpose is to reduce the risk of hairs breaking off short. These hairstyles are designed to minimize manipulation and exposure of the hair to environmental elements. Factors such as extreme temperatures, humidity, and precipitation can adversely affect hair health. Protective hairstyles are beneficial in mitigating these effects by keeping the hair tucked away and reducing its exposure to potentially damaging conditions. Common types of protective hairstyles includebraids,wigs,locs, andtwists. These styles not only are functional in protecting the hair from weather-related damage but also aid in retaining hair length and promoting growth. The adoption of protective hairstyles can lead to a reduction in hair tangles and knots. Additionally, these styles can offer respite to the hair from constant styling, pulling, and combing, thus contributing to overall hair health. Protective hairstyles have also been recognized for their cultural and social significance. They play a role in the expression of cultural identity and can be seen as a form of artistic and personal expression. The versatility and diversity of these hairstyles reflect the rich cultural heritage associated with Afro-textured hair. Afro-textured hair is often prone to breakage or damage from the elements; protective hairstyles aim to guard against this.[1]However protective hairstyles sometimes involve tension at thescalp, like braids with weaves and wigs,[2]and can cause thinning of the hairline. They may also prevent hair from growing, which, if prolonged, may lead totraction alopecia.[3][4]This happens mainly in cases of untreated hair that is not properly maintained with the necessary oils and products. Protective styles require styling hair for a few days and using the correct styles and products. Depending on the hairstyle and how well it is taken care of, protective hairstyles can last between two weeks and two months. In theUnited States, some jurisdictions have banneddiscriminationbased on hairstyles associated withAfrican Americans, including protective hairstyles.[5]In 2007, radio hostDon Imuscaused an outrage when he called theRutgers University women's basketball team"nappy-headed." This led to cancellations of his future show. In 2020,Noah Cyrusmade a comment about "nappy hair," which led to many controversies. She later on apologized through social media, saying she didn't know the context and history behind the terms she had used.[6]A federal bill called theCrown Actof 2022 (Creating a Respectful and Open World for Natural Hair Act of 2022) was passed with the intention to prohibit race-based discrimination based on hairstyles and hair texture. In present time Black women have created blogs and YouTube channels to embrace their hairstyles in positive ways. Protective hairstyles, including various forms of braids, hold significant cultural importance in African history, with their origins tracing back thousands of years. These hairstyles are not only a reflection of aesthetic preferences but also carry deep cultural symbolism. Intricate patterns and styles in braiding often symbolize strength and creativity within African tribes and communities. 
Historically, braids served as distinguishing markers of tribal affiliation and were indicative of an individual's wealth, religious beliefs, age, marital status, and ethnicity.[7] In contemporary contexts, braids and similar hairstyles continue to be significant, often viewed as rites of passage and modes of self-expression, particularly among women of color.[8] However, issues of hair discrimination and bias present challenges. Globally, women often feel compelled to alter their natural hairstyles to conform to societal norms, especially in professional settings. This includes changing hair from its natural state to styles perceived as more acceptable, such as straightening curly hair for job interviews.[9] With the onset of slavery, many women and men from Africa were forced to shave their heads, stripping them of not only their hair but also their culture and humanity. Before that, many slaves had used their braided hairstyles as maps of the land and as storage for small grains and nuts. In addition, many laws were created to prohibit braids and other cultural and protective hairstyles. These laws were not overturned until the Black Power Movement of the 1960s and 1970s. Even after the laws were overturned, many people still faced discrimination because of their hair type and hairstyles. This shifted braids from an expression of culture to a matter of function: keeping hair manageable. Many styles were simplified, and without proper access to products and tools they were often a struggle to maintain, which led many people to use substances such as kerosene to moisturize their hair. Later cultural movements would bring back the sense of culture in wearing these protective hairstyles. The word nappy has been used to reference the "frizzy texture" of African American hair since the 1880s.[10][11] Braids and cornrows were also used to escape slavery. Since slaves were not allowed to learn how to read or write, other methods of communication were necessary. Thus came the use of cornrows to draw out maps and pass messages for escape. This method was even used within the Underground Railroad. Additionally, rice and seeds would be woven into the braids in order to grow food after escape.[12][13][14] Before adopting a protective hairstyle, the hair and scalp are thoroughly washed. Most protective styles are left in for weeks at a time, and cleansing rids the hair of product, dirt and oil buildup. (The hair and head are also washed while the hairstyle is in place.) A sulfate-free shampoo and gentle motions while shampooing are recommended, because rough washing can cause friction and lead to breakage. To prevent water damage and to restore oils and moisture to the hair after washing, the next step is to use a deep conditioner and sometimes a leave-in conditioner. These conditioners can be paired with additional oils to keep the hair healthy and minimize breakage before, during and after wearing protective hairstyles.[15] After the style is installed, there are many ways to maintain the health of the hair and of the style itself. One is to wrap the hair in satin or silk before sleeping to minimize friction and frizz caused by bedding. A lightweight hair gel can also be applied while wrapping the hair to further reduce frizz and flyaways. Because the scalp is exposed, it is important to clean it periodically with shampoo diluted with water.
After this and throughout wearing the hairstyles, it is necessary to moisturize the scalp after washing and moisturize the hair regularly. This can be done with many types of oils and leave-in conditioners.[16][17] The adaptability of protective hairstyles becomes particularly relevant for travelers transitioning between diverse climates. Changing weather conditions can pose various challenges to hair health. In colder climates, dry and frigid air increases the risk of hair breakage and dryness, while warm and humid conditions can lead to frizz and discomfort. Protective styles such as wigs, braids, twists, and updos with scarves offer practical solutions for these challenges, combining adaptability, ease of maintenance, and style. Wigs provide versatility, braids like box braids and cornrows protect natural hair from the elements, twists offer chic styling options, and scarves in updos add both protection and fashion flair in varying climates.[18] Maintenance and care of hair also vary depending on the climate. In colder regions, focus on hydration and protecting the ends from breakage is essential, whereas in warmer climates, using products to combat humidity and keep the scalp clean becomes a priority. These considerations are vital for travelers who wish to maintain healthy and stylish hair while adapting to different environmental conditions. Protective hairstyles are specifically intended to reduce hair breakage, but, if placed inappropriately, they can result intraction alopecia(hairs pulled out from the root, rather than broken off midway) andexternal-traction headaches(pain from overly tight or heavy styles).[19]
https://en.wikipedia.org/wiki/Protective_hairstyle
Braids(also referred to asplaits) are a complex hairstyle formed by interlacing three or more strands of hair.[1]Braiding has never been specific to any one part of the world, ethnic type, hair type or culture, but has been used to style and ornament human and animal hair for thousands of years world-wide[2]in various cultures around the world. The simplest and most common version is a flat, solid, three-stranded structure. More complex patterns can be constructed from an arbitrary number of strands to create a wider range of structures (such as a fishtail braid, a five-stranded braid, rope braid, a French braid and a waterfall braid). The structure is usually long and narrow with each component strand functionally equivalent in zigzagging forward through the overlapping mass of the others. Structurally, hair braiding can be compared with the process ofweaving, which usually involves two separate perpendicular groups of strands (warpandweft). The oldest known reproduction of hair braiding may go back about 30,000 years inEurope: theVenus of WillendorfinAustria, now known inacademiaas the Woman ofWillendorf, is a femalefigurineestimated to have been made between about 28,000 and 25,000BCE.[3]It has been disputed whether or not she wears braided hair or some sort of a woven basket on her head. TheVenus of BrassempouyinFranceis estimated to be about 25,000 years old and ostensibly shows a braided hairstyle.[4] Another sample of a different origin was traced back to a burial site calledSaqqaralocated on theNile River, during the first dynasty ofPharaohMenes, although the Venus' of Brassempouy and Willendorf predate these examples by some 25,000-30,000 years. During theBronze Age,Iron Ageand Greco-Roman era (a period spanning 3500 BC to 500 AD) many peoples inWest Asia,Asia Minor,Caucasus,Southeast Europe,East Mediterranean,BalkansandNorth Africabraided hair, beards and moustaches. InMesopotamia, the practice was common among theSumerians,Akkadians,Assyrians,BabyloniansandChaldeans, surviving among some Assyrians into the 18th century AD. InAncient IrantheElamites,Gutians,Lullubi,Kassites,Manneans,Persians,MedesandParthiansare depicted with braided hair and beards. ThroughoutAnatolia(Asia Minor),Hittites,Hattians,Hurrians,Mitanni,Luwians,Mycenean Greeks,UrartiansandLydiansare also depicted with these styles. In theLevant, braiding also appears among theAmorites,Eblaites,Arameans,Israelites,Phoenicians,Judeans,MoabitesUgaritesandEdomitesamong others.Arabian Peninsulaart depictsDilmunites,Arabs,Maganites,UbaritesandShebansin similar fashion. InNorth Africathe practice was common amongEgyptians,Hyksos,LibyansandBerbersand further south amongNubiansandAxumites, as well as amongColchians,ArmeniansandScythiansof theCaucasusandMinoans,Etruscans,Greeks,DaciansandPelasgiansinEurope.[5][6]There has also been foundbog bodiesinNorthern Europewearing braided hairstyles from theNorthern European Iron Age, and later still such braided styles were found among theCelts,Iberians,Germanic peoples,SlavsandVikingsin northern, western, Eastern and southwestern Europe.[7][8] In some regions, a braid was a means of communication. At a glance, one individual could distinguish a wealth of information about another, whether they were married, mourning, or of age for courtship, simply by observing their hairstyle. Braids were a means ofsocial stratification. Certain hairstyles were distinctive to particular tribes or nations. 
Other styles informed others of an individual's status in society.African peoplesuch as theHimba peopleofNamibia,Maasai peopleofKenyahave been braiding their hair for centuries. In many African tribes, hairstyles are unique and used to identify each tribe. Braid patterns or hairstyles can indicate a person's community, age, marital status, wealth, power, social position, and religion.[9] On July 3, 2019, California became the first US state to prohibit discrimination over natural hair. GovernorGavin Newsomsigned theCROWN Actinto law, banning employers and schools from discriminating against hairstyles such as dreadlocks, braids,afros, and twists.[10]Later in 2019, Assembly Bill 07797 became law in New York state; it "prohibits race discrimination based on natural hair or hairstyles."[11] Braiding is traditionally a social art. Because of the time it takes to braid hair, people have often taken time to socialize while braiding and having their hair braided. It begins with the elders making simple knots and braids for younger children. Older children watch and learn from them, start practicing on younger children, and eventually learn the traditional designs. This carries on a tradition of bonding between elders and the new generation. There are a number of different types of braided hairstyles, including, commonly,French braids,corn rows, andbox braiding.[12]Braided hairstyles may also be used in combination with or as an alternative to simpler bindings, such asponytailsorpigtails. Braiding may also be used to add ornamentation, such as beads orhair extensions, as incrochet braiding. European braids have been a cultural phenomenon for thousands of years. The Romans held braids to express status in both the Republic and Empire. Germanic cultures have also been known to have braids for centuries. The Psalter of Stuttgart in 820AD shows women with braided hair. InIndia, braiding is common in both rural and urban areas. Girls are seen in twin braids especially in schools, though now it is becoming less common. Young girls usually have one long braid. Married women have abunor a braided bun.[citation needed] Braids have been part of black culture going back generations. There are pictures going as far back as the year 1884 showing a Senegalese woman with braided hair in a similar fashion to how they are worn today.[13] Braids are normally done tighter in black culture than in others, such as incornrowsorbox braids. While this leads to the style staying in place for longer, it can also lead to initial discomfort. This is commonly accepted and managed through pain easing techniques. Some include pain killers, letting the braids hang low, and using leave-in-conditioner.[14]Alternative braiding techniques like knotless braids, which incorporate more of a person's natural hair and place less tension on the scalp, can cause less discomfort.[15] Braids are not usually worn year-round in black culture; they are instead alternated with other popular hairstyles such ashair twists,protective hairstylesand more. Curly Mohawk, Half Updo and Side-Swept Cornrows braids are some of the popular and preferred styles in black culture.[16]As long as braids are done with a person's own hair, it can be considered as part of thenatural hair movement. 
InIndia, manyHindu asceticswear dreadlocks, known asJatas.[17]Young girls and women in India often wear long braided hair at the back of their neck.[18]In theUpanishads, braided hair is mentioned as one of the primary charms of female seduction.[19]A significant tradition of braiding existed inMongolia, where it was traditionally believed that the human soul resided in the hair. Hair was only unbraided when death was imminent.[20][21]InJapan, theSamuraisported a high-bound ponytail (Chonmage), a hairstyle that is still common amongSumowrestlers today. Japanese women wore various types of braids (三つ編みmitsuami) until the late 20th century because school regulations prohibited other hairstyles, leaving braids and thebob hairstyleas the main options for girls.[22]In China, girls traditionally had straight-cutbangsand also wore braids (辮子biànzi). TheManchumen have historically braided their hair. After conquering Beijing in 1644 and establishing theQing Dynasty, they forced the men of the subjugatedHan Chineseto adopt this hairstyle as an expression of loyalty, which involved shaving the forehead and sides and leaving a longqueueat the back (剃髮易服tìfà yìfú). The Han Chinese considered this a humiliation as they had never traditionally cut their hair due toConfucian customs. The last emperor,Puyi, cut off his queue in 1912, marking the end of this male hairstyle in China, the same year when China became a republic.[23][24] Braided hairstyles were widespread among many North American indigenous peoples, with traditions varying greatly from tribe to tribe. For example, among theQuapaw, young girls adorned themselves with spiral braids, while married women wore their hair loose.[25]Among theLenape, women wore their hair very long and often braided it.[26][27]Among theBlackfoot, men wore braids, often on both sides behind the ear.[28]The men of theKiowatribe often wrapped pieces of fur around their braids, called a hair drop. Among theLakota, both men and women wore their hair in 2 braids with men’s being typically longer than women’s. Some had their hair wrapped in furs, typically bison, called ahair drop, some native groups of the Great Plains also had this hairstyle. During times of war, warriors would often have their hair unbraided as a sign of fearlessness. Among theMaya, women had intricate hairstyles with two braids, while men had a single large braid that encircled the head.[29] InJamaica, theRastafarimovement emerged in the 1930s, a Christian faith practiced by descendants of African slaves who often wear dreadlocks and untrimmed beards, in adherence to the Old Testament prohibition on cutting hair. Somefetishistsfind braids to be a strong erotic stimulus. Most commonly, the tightly wovenFrench braidis mentioned in this context. In the olderpsychiatricliterature, there are occasional references to fetishists who, in order to possess the desired object, would cut off female braids. For example, Swiss psychiatristAuguste Foreldescribed the case of a braid-cutter in Berlin in 1906, who was found in possession of 31 braids.[30]Richard von Krafft-Ebinghad already explored a deeper understanding of hair fetishism in the late 19th century.[31] In psychoanalytic literary interpretation, authors have continued to explore braid-cutters to this day. 
Notably, an episode in Ernest Hemingway's novel For Whom the Bell Tolls has aroused considerable interest.[32][33] Sigmund Freud had interpreted hair-cutting as a symbolic castration in Totem and Taboo (1913).[34] Some authors later followed him in seeing the braid as a phallic symbol.[35][36][37] Others interpreted braids as a symbol of virginity and the unbraiding or cutting of the braid as a symbol of defloration.[38] Braiding is also used to prepare horses' manes and tails for showing, such as in polo and polocrosse.[39]
https://en.wikipedia.org/wiki/Braid_(hairstyle)
Box braidsare a type of hair-braiding style that is predominantly popular among African people and theAfrican diaspora. This type of hairstyle is a "protective style" (a style which can be worn for a long period of time to let natural hair grow and protect the ends of the hair) and is "boxy", consisting of square-shaped hair divisions. Box braids are generally installed by using synthetic hair which helps to add thickness as well as helping the natural hair that is in the braid. Because they are not attached to the scalp like other similar styles such ascornrows, box braids can be styled in a number of different ways. The installation process of box braids can be lengthy, but once installed they can last for six to eight weeks. They are known for being easy to maintain.[2][3] Hair-braiding styles were used to help differentiate tribes, locations, and also possibly a symbol of wealth and power due to the amount of effort that went into styling braids.[4]Box braids were not given a specific name until the 1990s when popularized by R&B musicianJanet Jackson, but have been used for years. This style of braiding comes from the Eembuvi braids ofNamibiaor the chin-length bob braids of the women of theNile Valleyfrom over 3,000 years ago.[4]In the Mbalantu tribe of Namibia, braiding was an important social practice. Older women would gather with their girls and teach them how to braid.[5]Box braids are also commonly worn by theKhoisanpeople of South Africa[6]and theAfar peoplein the horn of Africa.[7][8]InAfrica, braid styles and patterns have been used to distinguish tribal membership, marital status, age, wealth, religion and social ranking.[9]In some countries ofAfrica, the braids were used for communication.[10]In some Caribbean islands, braid patterns were used to map routes to escape slavery.[11][12]Layers of finely chopped tree bark and oils can be used to support the hairstyle. Human hair was at one point wefted into fiber wig caps made of durable materials like wool and felt for reuse in traditional clothing as well as different rituals.[4]Cowry shells, jewels, beads and other material items adorned box braids of older women alluding to their readiness to have daughters, emulation of wealth, high priesthood and any other classifications.[4] Hair was and is a very important and symbolic part of different African communities. Africans believed that hair could help with divine communication as it was the elevated part of one's body. Hair styling was entrusted only to close relatives, as it was explained that if a strand fell into the hands of an enemy, harm could come to the hair's owner.[13]Members of royalty would often wear elaborate hairstyles as a symbol of their stature, and those in mourning, usually women, would pay some attention to their hair during the period of grieving. Hair was seen as a symbol of fertility, as thick, long tresses and neat, clean hair symbolised ability to bear healthy daughters.[13]Elaborate patterns were done for special occasions like weddings, social ceremonies or war preparations. People belonging to a tribe could easily be identified by another tribe member with the help of a braid pattern or style.[14] The U.S. Army has strong regulations and restrictions on hairstyles for both men and women. In 2014, the army updated its policies because the old regulations were too restrictive for African-American women. Army policy originally considered African American women's natural hair "not neat" and deemedprotective hairstyles"unprofessional". 
In the newer regulations, "twists, cornrows and braids can be up to1⁄2inch [13 mm] in diameter. The previous maximum was a diameter of approximately1⁄4inch [6 mm]".[15]This gives more opportunity to wear protective styles. Box braids can be worn by members of the US Army as long as they show no more than3⁄8of the scalp. The parting must be square or rectangular shape. The ends of the braids must be secured. Once the newly grown natural hair outside of the braid, also known as new growth, reaches1⁄2inch [13 mm], the style must be redone. Similar regulations apply for styles like dreadlocks, flat twists, and braids with natural hair. The hairstyles must not interfere with the wear of uniform or covers (uniform hats).[16]Though synthetic hair for box braids exists in multiple colors, the military dictates that enlisted women must have box braids in natural hair colors without any additional jewelry like hairclips or beads. Medium box braids are a popular hairstyle within the African and African American communities. They involve parting the hair into individual square-shaped sections, and then each section is braided from the scalp to the ends. These braids are termed 'medium' due to their thickness, which is typically about the width of a pencil to that of a felt tip marker.[17] The medium size of these braids strikes a balance between the delicate appearance of smaller braids and the more pronounced look of jumbo braids. They are versatile in length, often extending just beyond the wearer's natural hair length, and can be styled in various ways including buns, ponytails, and more. As a protective hairstyle, medium box braids can safeguard the hair from environmental factors and styling stress. They require routine maintenance, including scalp hydration and proper cleansing, to maintain the health of the hair and scalp. These braids can be kept in for several weeks before they need to be redone. Tight or heavy hairstyles, such as long box braids, can also cause anexternal-traction headache, previously called aponytail headache.[18]Overly tight braids may causetraction alopecia.[19]Looser braids have a lower risk than tight braids or other styles, such ascornrowsanddreadlocks.[20]
https://en.wikipedia.org/wiki/Box_braids
Infolklore,fairy-locks(orelflocks) are the result offairiestangling andknottingthe hairs of sleeping children and the manes of beasts as the fairies play in and out of their hair at night.[1] The concept is first attested in English inShakespeare'sRomeo and JulietinMercutio's speech of the many exploits ofQueen Mab, where he seems to imply the locks are only unlucky if combed out: Therefore, the appellation of elf lock or fairy lock could be attributed to any various tangles and knots of unknown origins appearing in the manes of beasts or hair of sleeping children. It can also refer to tangles of elflocks or fairy-locks in human hair. In King Lear, when Edgar impersonates a madman, "elf all my hair in knots."[2](Lear, ii. 3.) What Edgar has done, simply put, is made a mess of his hair. See alsoJane Eyre, Ch. XIX; Jane's description of Rochester disguised as a gypsy: "... elf-locks bristled out from beneath a white band ..." German counterparts of the "elf-lock" areAlpzopf,Drutenzopf,Wichtelzopf,Weichelzopf,Mahrenlocke,Elfklatte, etc. (wherealp,drude,mare, andwightare given as the beings responsible). Grimm, who compiled the list, also remarked on the similarity toFrau Holle, who entangled people's hair and herself had matted hair.[3]The use of the wordelfseems to have declined steadily in English, becoming a rural dialect term, before being revived by translations of fairy tales in the nineteenth century and fantasy fiction in the twentieth. Fairy-locks are ascribed in French traditions to thelutin.[4] In Poland and nearby countries, witches and evil spirits were often blamed forPolish plait. This can be, however, a serious medical condition or an intentional hairstyle.[citation needed]
https://en.wikipedia.org/wiki/Elflock
Cornrows(also calledcanerows) are a style of three-strandbraidsin which the hair is braided very close to the scalp, using an underhand, upward motion to make a continuous, raised row.[1]Cornrows are often done in simple, straight lines, as the term implies, but they can also be styled in elaborate geometric or curvilinear designs. They are considered a traditional hairstyle in manyAfrican cultures, as well as in theAfrican diaspora.[2][3][4]They are distinct from, but may resemble,box braids,Dutch braids, melon coiffures, and other forms ofplaited hair, and are typically tighter than braids used in other cultures.[5] The namecornrowsrefers to the layout of crops in corn and sugar cane fields in theAmericasandCaribbean,[1][6]where enslaved Africans were displaced during theAtlantic slave trade.[7]According toBlackfolklore, cornrows were often used to communicate on theUnderground Railroadand byBenkos Biohóduring his time as a slave in Colombia.[8]They often serve as a form ofBlack self-expression,[9]especially amongAfrican Americans,[1]but have been stigmatized in some cultures.[4][10]Cornrows are traditionally called "kolese" or "irun didi" inYoruba, and are often nicknamed "didi braids" in the Nigerian diaspora.[11] Cornrows are worn by both sexes, and are sometimes adorned with beads, shells, or hair cuffs.[1]The duration of braiding cornrows may take up to five hours, depending on the quantity and width.[12]Often favored for their easy maintenance, cornrows can be left in for weeks at a time if maintained through careful washing of the hair and natural oiling of the scalp. Braids are considered a protective styling on African curly hair as they allow for easy and restorative growth; braids pulled too tightly or worn for longer lengths of time and on different hair types can cause a type of hair loss known astraction alopecia.[13] Modern cornrows originated in Africa,[1]where they likely developed in response to the unique textures of African hair,[14][15]and have held significance for different cultures throughout recorded history.[16][17][18]Early depictions of women with what appear to be cornrows have been found inStone Agepaintings in theTassili Plateauof theSahara, and have been dated as far back as 3000 B.C. A similar style is also seen in depictions of the ancientCushiticpeople of theHorn of Africa, who appear to be wearing this style of braids as far back as 2000 B.C.[19]In Nubia, the remains of a young girl wearing cornrows has been dated to 550–750 A.D.[20]Cornrows have also been documented in the ancient Nok civilization in Nigeria,[21]in the Mende culture of Sierra Leone,[22]and the Dan culture of theCôte d'Ivoire.[16] Women in West Africa have been attested wearing complex hairstyles of threaded or wrapped braids since at least the 18th century. These practices likely influenced the use of cornrows and headwraps (such asdurags) among enslaved Africans taken to the Americas.[15]In Ethiopia and Eritrea, there are many braided hairstyles which may include cornrows or "shuruba", such as Habesha or Albaso braids, and Tigray shuriba.[23][24]Though such hairstyles have always been popular with women, Ethiopian men have also worn such hairstyles. In 19th centuryEthiopia, male warriors and kings such asTewodros IIandYohannes IVwere depicted wearing braided hairstyles, including the shuruba.[25][26][27] Cornrow hairstyles in Africa also cover a wide social terrain: religion, kinship, status, age, racial diversity, and other attributes of identity can all be expressed in hairstyle. 
Just as important is the act of braiding, which passes on cultural values between generations, expresses bonds between friends, and establishes the role of professional practitioner.[21][14]Braiding is traditionally a social ritual in many African cultures—as is hairstyling in general—and is often performed communally, as White and White explain: In African cultures, the grooming and styling of hair have long been important social rituals. Elaborate hair designs, reflecting tribal affiliation, status, sex, age, occupation, and the like, were common, and the cutting, shaving, wrapping, and braiding of hair were centuries-old arts. In part, it was the texture of African hair that allowed these cultural practices to develop; as the historian John Thornton has observed, "the tightly spiraled hair of Africans makes it possible to design and shape it in many ways impossible for the straighter hair of Europeans."[14] There have been a number of examples of European art and sculpture described as similar to modern cornrows,[29]such as plaits, the melon coiffure and sini crenes.[30][31][32] The oldest of these depictions are the statues known as theVenus of Brassempouy[29][33]and theVenus of Willendorf,[31][34][35]which date between 23,000 and 29,000 years ago[36]and were found in modern dayFranceandAustria. Whether these statues feature cornrows, another type of braids, headdresses, or some other styling has been a matter of vigorous debate — most historians rule out cornrows, however.[29][31][37]The Venus of Brassempouy is often said to wear a wig or a patterned hood,[37]while the Venus of Willendorf is said to be wearingplaited hairor a fibrous cap.[31] Since the early 5th century B.C., Ancient Greek and Roman art shows men and women with a characteristic melon coiffure, especially in the "Oriental Aphrodite" tradition, which may be confused with cornrows.[38][39][32]The traditional hairstyle of RomanVestal Virgins, the sini crenes, also incorporates two braids that resemble cornrows.[40][41][30] The first recorded use of the word "cornrow" was in America in 1769, referring to the corn fields of the Americas. The earliest recorded use of the term "cornrows" to refer a hairstyle was in 1902.[a][1]The name "canerows" may be more common in parts of theCaribbeandue to the historic role ofsugar plantationsin the region.[6] As in Africa, grooming was a social activity for Black people on theAmerican plantations; the enslaved Africans were reported helping each other style their hair into a wide variety of appearances. On his visit to a plantation inNatchez, Mississippi, New EnglanderJoseph lngrahamwrote, "No scene can be livelier or more interesting to a Northerner, than that which the negro quarters of a well regulated plantation present, on a Sabbath morning, just before church hour."[42]Hairstyles were so characteristic of a person, even when their appearance and behaviour was otherwise heavily regulated, that they were often used to identifyrunaways, and enslaved Africans sometimes had their hair shaved as a form of punishment. 
Generally, however, slaveholders in the British colonies gave their Black slaves a degree of latitude in how they wore their hair.[14]Thus, wearing traditional hairstyles offered a way to assert theirbodily autonomywhen they otherwise had none.[43] Enslaved Black people may have chosen to wear cornrows to keep their hair neat and flat to their scalp while working; the other styles they developed alongside cornrows blended African, European and Native American trends and traditions.[44]African-American, Afro-Latino and Caribbean folklore also relates multiple stories of cornrows being used to communicate or provide maps for slaves across the "New World".[8][45]Today, such styles retain their link with Black self-expression and creativity, and may also serve as a form of political expression.[9][46][47] Cornrows gained in popularity in the United States in the 1960s and 1970s, and again during the 1990s and 2000s. In the 2000s, some athletes wore cornrows, including NBA basketball playersAllen Iverson,Rasheed Wallace, andLatrell Sprewell.[48]Some female mixed martial artists have chosen to wear cornrows for their fights as it prevents their hair from obscuring their vision as they move.[49][50][51] Colonial attitudes and practices towards Black hairstyles have traditionally been used to reinforce racism, exclusion and inequality.[52]For example, during the 18th century, slaves would sometimes have their hair shaved as a lesser form of punishment.[14]Eurocentricbeauty standards, which often denigrate Black hairstyles, can lead to internalized racism, colorism, and marginalization, which negatively affect Black people—and Black women in particular.[47][53][54]Related valuations of hair texture—which portray straighter hair as "good hair" and curlier hair as "bad hair"—are emphasized through the media, advertising, and popular culture.[53][55]These attitudes to hair can devalue African heritage and lead to discrimination.[53][56]The unique type of discrimination that arises from prejudice towards Black women's hair is callednatural hair discrimination.[57][58][59]Despite these challenges, cornrows have gained popularity among Black people as a way to express their Blackness, creativity and individuality.[52][53][60] Over the decades, cornrows, alongsidedreadlocks, have been the subject of several disputes in U.S. workplaces, as well as universities and schools. Some employers and educational institutions[60]have considered cornrows unsuitable or "unprofessional", and have banned them.[48]Employees and civil rights groups have countered that such attitudes evidence cultural bias or racism, and some disputes have resulted in litigation.[52][61]In 1981, Renee Rogers sued American Airlines for their policy which banned cornrows and other braided hairstyles. Other cases, such as Mitchell vs Marriott Hotel and Pitts vs. Wild Adventures, soon followed.[62]Since other traditional Black hairstyles are also often banned, Black women may be forced to straighten their hair or emulate European hairstyles at significant additional cost.[56]The intersection of racialized and gendered discrimination against Black women is often calledmisogynoir.[63]InCalifornia, theCROWN Actwas passed in 2019 to prohibitdiscrimination based on hair style and hair texture.[64] In 2011, theHigh Courtof the United Kingdom, in a decision reported as atest case, ruled against a school's decision to refuse entry to a student with cornrows. 
The school claimed this was part of its policy mandating "short back and sides" haircuts, and banning styles that might be worn as indicators of gang membership. However, the court ruled that the student was expressing a tradition and that such policies, while possibly justifiable in certain cases (e.g. skinhead gangs), had to accommodate reasonable racial diversities and cultural practices.[65] In some African nations, regularly changing hairstyles can be seen as a sign of social status for a woman, while advertising continues to promote straighter hairstyles as fashionable. Braids provide a way for women to maintain their hair, and are sometimes used with Chinese or Indian wigs to rotate hairstyles.[55]
https://en.wikipedia.org/wiki/Cornrows
AFrench braid, also called aFrench plait, is a type ofbraidedhairstyle. The three-strand gathered plait includes three sections of hair that are braided together from the crown of the head to the nape of the neck. In the simplest form of three-strandbraid, all the hair is initially divided into three sections, which are then simultaneously gathered together near the scalp. In contrast, a French braid starts with three small sections of hair near the crown of the head, which are then braided together toward the nape of the neck, gradually adding more hair to each section as it crosses in from the side into the center of the braid structure. The final result incorporates all of the hair into a smoothly woven pattern over the scalp. If the main mass of hair is initially parted into two or more sections along the scalp that are kept separate from one another, multiple French braids may be created, each in its own section. One unique feature about the French braid is that an individual can braid their own hair without the help of others. (The difficulty of braiding can depend on the type of hair the individual has, however, as some styles of hair are easier to braid than others.) The length of hair also plays a role in the ability to braid; shorter hair can be more of a challenge.Bobby pinscan be useful when braiding shorter hair or hair with many different layers to keep all of the hair in the French braid in place. There are many different ways of French braiding that make it unique; for example, a person can braid at a slant, braid into a bun, or only braid the bangs (fringe). Compared to the simplest form of hair braid, a French braid has several practical advantages: it can restrain hair from the top of the head that is too short to reach the nape of the neck, and it spreads the weight and tension of the braid across a larger portion of the scalp. Its sleek appearance is often regarded as being elegant and sophisticated. A French braid is more difficult to construct than a simple braid because of its greater complexity. When performed on one's own hair, it also requires a more prolonged elevation of the hands above the back of the head, and leaves more tangled hair along the scalp when unbraiding. In this style of braid, start on top of the head and braid it till the end of the hair. Braiding in this manner can be done with different braid types but the most popular are the classic braid and the fishtail braid.[1]A sister braid to the French braid is the Spanish braid. The Spanish braid is like a French braid but in the beginning, instead of grabbing three sections of hair, only two are used. The phrase "French braid" appears in an 1871 issue ofArthur's Home Magazine, used in a piece of short fiction ("Our New Congressman" by March Westland) that describes it as a new hairstyle ("do up your hair in that new French braid").[2]However, no visual illustrations are provided for that context, making it impossible to tell whether it refers to the same hairstyle described above. Variations on thishairstyleinclude:
https://en.wikipedia.org/wiki/French_braid
Polish plait(Latin:Plica polonica,Polish:Kołtun polskiorplika,Kołtunin Polish meaning matted), less commonly known in English asplicaortrichoma, is a particular formation ofhair. This term can refer to either a hairstyle or a medical condition, depending on context. The term is connected to a system of beliefs inEuropeanfolklore, andhealingpractices intraditional medicinein medievalPolish–Lithuanian Commonwealththat believed matted hair was an amulet, or a catchment or trajectory for illness to leave the body. Larry Wolff in his bookInventing Eastern Europe: The Map of Civilization on the Mind of Enlightenmentmentions that in Poland, for about a thousand years, some people wore the hair style of theScythians.Zygmunt Glogerin hisEncyklopedia staropolskamentions that Polish plait was worn as a hair style by some people, regardless of gender, in thePinskregion and theMasoviaregion at the beginning of the 19th century. He used the term "kołtun zapuszczony" which denotes artificial formation of Polish plait. According to folklore studies today, the style was formed using liquids orwax. Among liquids, a mixture of wine and sugar was used or washing hair daily with water in which herbs were boiled. The most commonly used herb wasvinca(Vinca major), followed byLycopodium clavatumandmoss, which caused hair to mat. A similar effect can be had by rubbing hair withwax, or inserting a piece of a candle at the hair ends. Newer Polish dictionaries mentionplicaas a disease, but the old ones also mention artificially created plica. In modern times the hairstyle is also known as mono-dreadlock[1](or mono-dread for short), alluding to how its structure is comparable to a single, massive strand of adreadlockhairstyle, as well as beaver tail[2]as the mass of hair may resemble the tail of abeaver. The hairstyle can vary in size, from large beaver tails to small plaits.[citation needed] The Polish plait was quite common in Europe during past centuries when hair grooming was largely neglected. It affected mostly thepeasantry, but was not unusual among higher social classes. Due tosuperstitiousbeliefs, the Polish plait used to be particularly common in thePolish–Lithuanian Commonwealth, hence its English and Latin name. Similarly, in German it is calledWeichselzopf, orVistulabraid,zopfmeaning a braid, and theVistulabeing a river in Poland. Initially, the plait was considered anamuletto keep illness away from the body, as it was believed that when disease resolved it left the body to live in the hair, resulting in lessened suffering. For this reason, people not only allowed it to develop, but even encouraged it. According to M. Marczewska, who researched the subject from the perspective of folklore studies, animistic beliefs and long-held pagan beliefs relating to illness viewed illness as caused by an invading evil spirit, which by convalescence left the body and was less problematic when living in the hair formation, which was then shed naturally or cut and ritualistically disposed of by persons specializing infolk medicineor practitioners of folk magic. As people believed that the formation of plica was a sign of resolving of disease, plica, as a hairstyle, was also formed artificially by washing with mixtures of herbs, sweetened wine, waxing, etc. In the early 17th century, people began to believe plaits were an external symptom of an internal illness. 
A growing plait was supposed to take the illness "out" of the body, and therefore it was rarely cut off; in addition, the belief that a cut-off plait could avenge itself and bring an even greater illness discouraged some from attacking it. It was also believed that casting amagicspell on someone could cause that person to develop a Polish plait, hence also the name"elflock"was used in English, alsoHexenzopf(witches' plait) in German. These convictions were so widespread and strong that some people lived their whole lives with a Polish plait. A plait could sometimes grow very long – even up to 80 centimetres (31 in). Polish plaits could take various forms, from a ball of hair to a long tail. Plaits were even categorized; plaits were "male" and "female", "inner" and "outer", "noble" and "fake", "proper" and "parasitical". British diaristHester Thrale, in her bookObservations and Reflections Made in the Course of a Journey through France, Italy, and Germany, described a Polish plait she saw in 1786 in the collection of theElector of SaxonyinDresden: "the size and weight of it was enormous, its length four yards and a half [about 4.1 m]; the person who was killed by its growth was a Polish lady of quality well known in KingAugustus's court." During theAge of Enlightenment, it became common to use the termsplica polonica(Polish plait) andplica judaica(Jewish plait), as well as the term "Polish ringworm" in English. In addition toantisemitism, there was also prejudice against Eastern Europeans. According to Larry Wolff's bookThe Invention of Eastern Europe, Poles were considered "semi-Asians", the descendants ofTatarsand barbarians. Maurice Fishberg in his bookThe Jews: A Study of Race and Environmentmentions both terms. It was a common belief that plica was a contagious disease which originated in Poland after theMongolinvasion and later spread to other countries.Diderotwrote in hisEncyclopédie(due to his misunderstanding ofMartin Cromer's text) that the Tatar invasion of Poland was the source of plica. An example of the belief in the spread of plica as a contagious disease by foreign hosts was theVictorian-eraBritish belief that the plica was spread like a disease by Polish traders in artificial hair.George Lefevre, in his bookAn Apology for the Nerves(1844), mentions the termsplica polonicaandplica judaicaand also debunks the popular belief that wearing the Polish national costume could cause plica in the wearer. He describes the case of a woman in Berlin who did not wear the Polish national costume, yet was affected with plica. He concluded, "Neither, therefore, are strangers free from it, nor is produced by dress alone."[3] Zygmunt Glogerin hisEncyklopedia Staropolska[pl]argued that according to research done by theGrimm Brothersand Rosenbaum,plica polonicaand the idea that it spread from Poland was an error, as it was also found among theGermanicpopulation ofBavariaand Rhine River area. He said that the wordweichselzopf(Vistula plait) was a later alteration of the namewichtelzopf(plait of awight);wichtelmeanswightin German, a being or sentient thing. In the second half of the 19th century, some medical professionals waged a war against superstition and lack of hygiene among the peasantry, and traditional folk medicine. Many plaits, often to the horror of their owners, were cut off. In WesternGalicia, ProfessorJózef Dietlmade a particular effort to examine and treat Polish plaits. 
He was also a politician, and his methods of dealing with persons with plicas are controversial today: he organized an official census of people suffering from the disease, they were not allowed to receive help by charitable organizations, were forbidden entrance to some buildings such as schools and offices, and he also proposed fines, which spawned rumors that plaits would be taxed. Those practices were said to have helped eradicate the Polish plait in the region. A huge 1.5-meter long plica can be seen preserved in the Museum of the Faculty of Medicine ofJagiellonian University Medical CollegeinKraków. In the areas of the Polish–Lithuanian Commonwealth which were occupied by theRussian Empire, young men with plica were exempt from service in the tsarist army. It is unknown how many plicas were natural or how many were man-made dreadlocks. ThePolishword for the Polish plait,kołtun, is now used figuratively in Poland to denote an uneducated person with an old-fashioned mindset.[citation needed] Plica was believed to be caused by supernatural entities. The names often describe the believed cause of tangled hair. In Britain this condition was believed to be caused byelves, hence the name "elflock" (mentioned inShakespeareanpoetry and folk tales), although this term could refer to tangles much milder than a Polish plait. Folk belief in Germany associated it with witches or wights (Hexen or Wichtel) giving plica the names Hexenzopf or Wichtelzopf; in Poland, the cause was an unclean spirit. One of the names of plica in Polish was wieszczyca, "wieszcz" meansbard, specifically, a folk poet with the gift of prophesy or a vampire-like living person. In German folklore, plica appears in one of the Brothers Grimm'sfairy tales, in which a girl doesn't comb her hair for one year and finds golden coins in her plica. Many illnesses were associated with plica and were synonymous with the folk name for this condition. According to Marczewska, about thirty diseases were associated with plica in Polish folklore. In German andBohemianspells, there are as many as seventy diseases. Poles were afraid to upset the unclean spirit, and, to pacify it, inserted offerings such as coins into the plica. Kołtun (orgościec, its Polish folk name) did not necessarily describe only the hair formation, it also described the bodily illness without the presence of tangled hair. Pain (especially in joints), rheumatism, etc. were synonymous with it. If plica was present, it was blamed for whims and cravings, which needed to be satisfied promptly; people around a person with plica needed to assist the sufferer to comply with the cravings. Marczewska points out that one old Polish dictionary stated that kołtun created strong cravings, especially for wine (which was imported and expensive). Media related toPlica polonicaat Wikimedia Commons
https://en.wikipedia.org/wiki/Polish_plait
Inpolitics,gridlockordeadlockorpolitical stalemateis a situation when there is difficulty passinglawsthat satisfy the needs of the people. A government is gridlocked when the ratio betweenbillspassed and theagendaof thelegislaturedecreases. Gridlock can occur when twolegislative houses, or theexecutive branchand the legislature are controlled by differentpolitical parties, or otherwise cannotagree. The word "gridlock" is used here as a metaphor – referring to thetraffic standstillwhich results when congestion causes the flow to freeze up completely. In countries withproportional representationthe formation ofcoalition governmentsorconsensus governmentsis common. Theveto playertheory predicts that multiparty governments are likely to be gridlocked,[1]while other literature shows empirical absence of increased gridlock.[2] InUnited States politics,gridlockfrequently refers to occasions when theHouse of Representativesand theSenateare controlled by differentparties, or by a different party than the party of thepresident. Gridlock may also occur within the Senate, when no party has a three-fifthsfilibuster-proof majority of 60 seats. Political Gridlockby author Ned Witting identifies many of the causes of gridlock in the United States and outlines ways to get government working again. Law professors such asSanford LevinsonandAdrian Vermeule, as well as political commentators such asMatthew YglesiasandDebbie Parks, have criticized the U.S. Constitution and Senate voting rules for enabling situations of legislative gridlock. Along these lines, David Brady, a professor ofpolitical scienceatStanford University, and Craig Volden, a professor of public policy and politics at theUniversity of Virginia, explain gridlock by pointing to two interrelated factors: first, "the preferences of members of Congress regarding particular policies" and second, "supermajorityinstitutions – the Senatefilibusterand the presidentialveto".[3] As a result, they argue, gridlock is not determined by party control of the government, but rather by an interplay between the existing policy and the spectrum of individual preferences held by congressional representatives. They maintain, in essence, that "the policy preferences of Members of Congress at or near the median are among the crucial determinants of policy outcomes."[4] Marcus Ethridge, an emeritus professor of political science at theUniversity of Wisconsin–Milwaukee, argues in a 2011 policy analysis published by thelibertarianCato Institutethat theU.S. Constitutionwas designed to foster gridlock in order to increase "the likelihood that policies will reflect broad, unorganized interests instead of the interests of narrow, organized groups."[5]Ethridge presented an extended version of his analysis inThe Case for Gridlock: Democracy, Organized Power, and the Legal Foundations of American Government(2010), which argues that "progressive reformers sought to shift the power to shape policy from the legislative branch to the executive bureaucracy" in an attempt to limit the power of special interests, but that this strategy backfired because of "the ability of interest groups to infiltrate the bureaucracy and promote their interests, often in ways diametrically opposed to the reformers' intentions" and "the capacity of Congress to overcome the influence of groups and generate policy change." 
In order to counter this, Ethridge suggests a "return to the 'constitutional principle' of gridlock, in which special interests must compete in a legislative forum".[6] Researchers such as David R. Jones argue that "higher party polarization increase[s] the likelihood of encountering gridlock".[7]When looking at figures ofpolarizationwithin U.S. politics, "partisan antipathy is deeper and more extensive – than at any point in the last two decades" with 92% ofRepublicansbeing to therightof the medianDemocrat, and 94% of Democrats aligning to theleftof the median Republican voter.[8]This modern polarization paired with a system designed to operate onBurkean representation, not today'sparty-line voting, leads to seemingly inevitable gridlock. Inparliamentary democraciesbased on theWestminster system, political deadlock may occur when a closely-fought election returns ahung parliament, where no one party, or clear coalition of parties holds a majority. This may result in either the formation of acoalition government(if such an outcome is unusual, as in theUnited Kingdom,CanadaandAustralia, but not most of mainland Europe), aminority government, or acaretaker governmentwith a mandate to oversee new elections. In nations withbicameral parliaments, cases may arise where the government controls thelower house(which grants it confidence) but faces a hostile majority in theupper house. This may precipitate aconstitutional crisis, particularly if the upper house is so determined in its opposition as to defeatthe budget, and in a constitutional position to do so (as happened in1910 in the United Kingdomand1975 in Australia), insofar as a government unable to carry a budget cannot continue in office. Solutions to this problem include ajoint session of parliament(as in Australia), giving one house (usually the lower) the ultimate say on legislation (as inIrelandandJapan), stripping the upper house of some of its powers (as was done by the Parliament Act 1911 in the UK), or abolishing it entirely in favor of aunicameralparliament. Whereequal bicameralismis practiced, as inthe Italian Parliament, constitutional practice may require the government maintain the confidence of both houses, making the defeat of crucial legislation such as the budget avote of no confidencewhich forces the government to resign or call elections. Political deadlock may arise after elections when a party wins a majority in one chamber but fails to do so in another, as at the2013 Italian general election, which resulted in the formation of anational unity government, or where a junior coalition partner withdraws its support, denying the government a majority in one house which it possesses in the other (the situation which brought down thesecond Conte government). Insemi-presidential republics, a directly elected President appoints a Prime Minister who must maintain the confidence of at least the lower house of the legislature. Insofar as a majority supporting, or at leastnot opposingthe government is still necessary, gridlock can arise in much the same way as in parliamentary systems. Semi-presidential arrangements have an additional potential source of political friction -cohabitation. In this instance, the legislature and the President may be from opposition parties or coalitions. This may cause a variety of political outcomes depending on the constitutional arrangements and the degree of determination of both sides. On one extreme is Taiwan, where thePremieris an administrator subordinate to the President. 
In this case, a vote of no confidence would have little practical effect, since the President would simply appoint another ally. At the other end of the spectrum is Poland, where the Prime Minister is the effective chief executive. Should conflict arise, the Polish President will eventually be forced to bow to the will of parliament in appointing a cabinet, though they may still create obstructions in the process. An intermediate case is France, where the degree of independence of the Prime Minister varies greatly with circumstances. When the President and parliament are aligned, the Prime Minister acts as the President's chief deputy. Under cohabitation, the political centre of gravity tends to follow the Prime Minister rather than the President. The President may still substantially influence some policy areas directly, particularly foreign affairs, and can negotiate to force Parliament to accept more conciliatory members of the opposition as ministers.
https://en.wikipedia.org/wiki/Gridlock_(politics)
Many words in the English vocabulary are ofFrenchorigin, most coming from theAnglo-Normanspoken by theupper classesin England for several hundred years after theNorman Conquest, before the language settled into what becameModern English. Englishwords of French origin, such asart,competition,force,money, andtableare pronounced according toEnglishrules ofphonology, rather than French, and English speakers commonly use them without any awareness of their French origin. This article covers French words and phrases that have entered the English lexicon without ever losing their character as Gallicisms: they remain unmistakably "French" to an English speaker. They are most common in written English, where they retain Frenchdiacriticsand are usually printed in italics. In spoken English, at least some attempt is generally made to pronounce them as they would sound in French. An entirely English pronunciation is regarded as asolecism. Some of the entries werenever "good French", in the sense of being grammatical, idiomatic French usage. Others were once normal French but have either become very old-fashioned or have acquired different meanings and connotations in the original language, to the extent that a native French speaker would not understand them, either at all or in the intended sense. c'est la guerre:"That's war!", or... c'est la vie:"That's life!" or "Such is life!" Through the evolution of the language, many words and phrases are no longer used in modern French. Also there are expressions that, even though grammatically correct, do not have the same meaning in French as the English words derived from them. Some older word usages still appear inQuebec French. International authorities have adopted a number of words and phrases from French for use by speakers of all languages in voice communications duringair-sea rescues. Note that the "phonetic" versions of spelling are presented as shown and not theIPA. It is a serious breach in most countries, and in international zones, to use any of these phrases without justification. SeeMayday (distress signal)for a more detailed explanation.
https://en.wikipedia.org/wiki/List_of_French_expressions_in_English
A Mexican standoff is a confrontation in which no strategy exists that allows any party to achieve victory.[1][2] Anyone initiating aggression might trigger their own demise. At the same time, the parties are unable to extract themselves from the situation without either negotiating a truce or suffering a loss, maintaining strategic tension until one of those potential outcomes occurs or some outside force intervenes. The term Mexican standoff was originally used in the context of firearms, and it still commonly implies a situation in which the parties face some form of threat from one another; standoffs can range from someone holding a phone and threatening to call the police while being held in check by a blackmailer, to global confrontations. The Mexican standoff as an armed stalemate is a recurring cinematic trope. Sources claim the reference is to the Mexican–American War or post-war Mexican bandits in the 19th century.[3] The earliest known use of the phrase in print was on 19 March 1876, in a short story about Mexico, featuring the line:[4] "Go-!" said he sternly then. "We will call it a stand-off, a Mexican stand-off, you lose your money, but you save your life!" In popular culture, the term Mexican standoff references confrontations in which neither opponent appears to have a measurable advantage. Historically, commentators have used the term to reference the Soviet Union–United States nuclear confrontation during the Cold War, specifically the Cuban Missile Crisis of 1962. The key element that makes such situations Mexican standoffs is the perceived equality of power exercised among the involved parties.[3][unreliable source?] The inability of any particular party to advance its position safely is a condition common among all standoffs; in a "Mexican standoff", however, there is an additional disadvantage: no party has a safe way to withdraw from its position, thus making the standoff effectively permanent. The cliché of a Mexican standoff in which each party threatens another with a gun is now considered a movie trope, stemming from its frequent use as a plot device in cinema. A notable example is in Sergio Leone's 1966 Western The Good, the Bad and the Ugly, where the three main characters, played by Clint Eastwood, Lee Van Cleef and Eli Wallach, face each other in a showdown.[6][7] Director John Woo, considered a major influence on the action film genre, is known for his use of the "Mexican standoff" trope.[8] Director Quentin Tarantino (who has cited Woo as an influence) has featured Mexican standoff scenes in films including Inglourious Basterds (the tavern scene features multiple Mexican standoffs, including meta-discussion) and both Reservoir Dogs and Pulp Fiction, the latter of which depicts a standoff among four characters in the climactic scene.[9] Writer/director Francis Galluppi's 2023 movie The Last Stop in Yuma County features a Mexican standoff scene in the diner.[citation needed]
https://en.wikipedia.org/wiki/Mexican_standoff
In game theory, the Nash equilibrium is the most commonly used solution concept for non-cooperative games. A Nash equilibrium is a situation where no player could gain by changing their own strategy (holding all other players' strategies fixed).[1] The idea of Nash equilibrium dates back to the time of Cournot, who in 1838 applied it to his model of competition in an oligopoly.[2] If each player has chosen a strategy – an action plan based on what has happened so far in the game – and no one can increase their own expected payoff by changing their strategy while the other players keep theirs unchanged, then the current set of strategy choices constitutes a Nash equilibrium. If two players Alice and Bob choose strategies A and B, (A, B) is a Nash equilibrium if Alice has no other strategy available that does better than A at maximizing her payoff in response to Bob choosing B, and Bob has no other strategy available that does better than B at maximizing his payoff in response to Alice choosing A. In a game in which Carol and Dan are also players, (A, B, C, D) is a Nash equilibrium if A is Alice's best response to (B, C, D), B is Bob's best response to (A, C, D), and so forth. John Nash showed that there is a Nash equilibrium, possibly in mixed strategies, for every finite game.[3] Game theorists use Nash equilibrium to analyze the outcome of the strategic interaction of several decision makers. In a strategic interaction, the outcome for each decision-maker depends on the decisions of the others as well as their own. The simple insight underlying Nash's idea is that one cannot predict the choices of multiple decision makers if one analyzes those decisions in isolation. Instead, one must ask what each player would do taking into account what the player expects the others to do. Nash equilibrium requires that these choices be consistent: no player wishes to undo their decision given what the others are deciding. The concept has been used to analyze hostile situations such as wars and arms races[4] (see prisoner's dilemma), and also how conflict may be mitigated by repeated interaction (see tit-for-tat). It has also been used to study to what extent people with different preferences can cooperate (see battle of the sexes), and whether they will take risks to achieve a cooperative outcome (see stag hunt). It has been used to study the adoption of technical standards,[citation needed] and also the occurrence of bank runs and currency crises (see coordination game). Other applications include traffic flow (see Wardrop's principle), how to organize auctions (see auction theory), the outcome of efforts exerted by multiple parties in the education process,[5] regulatory legislation such as environmental regulations (see tragedy of the commons),[6] natural resource management,[7] analysing strategies in marketing,[8] penalty kicks in football (i.e. soccer; see matching pennies),[9] robot navigation in crowds,[10] energy systems, transportation systems, evacuation problems[11] and wireless communications.[12] Nash equilibrium is named after American mathematician John Forbes Nash Jr. The same idea was used in a particular application in 1838 by Antoine Augustin Cournot in his theory of oligopoly.[13] In Cournot's theory, each of several firms chooses how much output to produce to maximize its profit. The best output for one firm depends on the outputs of the others. A Cournot equilibrium occurs when each firm's output maximizes its profits given the output of the other firms, which is a pure-strategy Nash equilibrium.
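As a concrete illustration of a Cournot equilibrium, the short sketch below works through a symmetric duopoly with linear inverse demand; the particular demand and cost parameters, and the helper name best_response, are illustrative assumptions rather than anything taken from the article. Each firm's profit-maximizing output given its rival's output follows from elementary calculus, and in this linear case iterating the best responses converges to the pure-strategy Nash equilibrium quantity (a − c)/(3b).

```python
# Illustrative Cournot duopoly (assumed parameters, not from the article):
# inverse demand P = a - b*(q1 + q2), constant marginal cost c for both firms.
a, b, c = 100.0, 1.0, 10.0

def best_response(q_other):
    """Profit-maximizing output given the rival's output: maximize
    (a - b*(q + q_other) - c) * q, giving q = (a - c - b*q_other) / (2b)."""
    return max(0.0, (a - c - b * q_other) / (2 * b))

# Iterate best-response dynamics from an arbitrary starting point.
q1, q2 = 0.0, 0.0
for _ in range(50):
    q1, q2 = best_response(q2), best_response(q1)

# At the fixed point each output is a best response to the other;
# the analytic Cournot-Nash quantity here is (a - c) / (3b) = 30.
print(q1, q2)  # both approximately 30.0
```

At the fixed point neither firm can raise its profit by unilaterally changing its output, which is exactly the equilibrium condition formalized below.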
Cournot also introduced the concept of best response dynamics in his analysis of the stability of equilibrium. Cournot did not use the idea in any other applications, however, or define it generally. The modern concept of Nash equilibrium is instead defined in terms of mixed strategies, where players choose a probability distribution over possible pure strategies (which might put 100% of the probability on one pure strategy; such pure strategies are a subset of mixed strategies). The concept of a mixed-strategy equilibrium was introduced by John von Neumann and Oskar Morgenstern in their 1944 book The Theory of Games and Economic Behavior, but their analysis was restricted to the special case of zero-sum games. They showed that a mixed-strategy Nash equilibrium will exist for any zero-sum game with a finite set of actions.[14] The contribution of Nash in his 1951 article "Non-Cooperative Games" was to define a mixed-strategy Nash equilibrium for any game with a finite set of actions and prove that at least one (mixed-strategy) Nash equilibrium must exist in such a game. The key to Nash's ability to prove existence far more generally than von Neumann lay in his definition of equilibrium. According to Nash, "an equilibrium point is an n-tuple such that each player's mixed strategy maximizes [their] payoff if the strategies of the others are held fixed. Thus each player's strategy is optimal against those of the others." Putting the problem in this framework allowed Nash to employ the Kakutani fixed-point theorem in his 1950 paper to prove existence of equilibria. His 1951 paper used the simpler Brouwer fixed-point theorem for the same purpose.[15] Game theorists have discovered that in some circumstances Nash equilibrium makes invalid predictions or fails to make a unique prediction. They have proposed many solution concepts ('refinements' of Nash equilibria) designed to rule out implausible Nash equilibria. One particularly important issue is that some Nash equilibria may be based on threats that are not 'credible'. In 1965 Reinhard Selten proposed subgame perfect equilibrium as a refinement that eliminates equilibria which depend on non-credible threats. Other extensions of the Nash equilibrium concept have addressed what happens if a game is repeated, or what happens if a game is played in the absence of complete information. However, subsequent refinements and extensions of Nash equilibrium share the main insight on which Nash's concept rests: the equilibrium is a set of strategies such that each player's strategy is optimal given the choices of the others. A strategy profile is a set of strategies, one for each player. Informally, a strategy profile is a Nash equilibrium if no player can do better by unilaterally changing their strategy. To see what this means, imagine that each player is told the strategies of the others. Suppose then that each player asks themselves: "Knowing the strategies of the other players, and treating the strategies of the other players as set in stone, can I benefit by changing my strategy?" If any player answers "Yes", then that set of strategies is not a Nash equilibrium. But if every player prefers not to switch (or is indifferent between switching and not) then the strategy profile is a Nash equilibrium. Thus, each strategy in a Nash equilibrium is a best response to the other players' strategies in that equilibrium.[16] Formally, let $S_i$ be the set of all possible strategies for player $i$, where $i = 1, \ldots, N$.
Lets∗=(si∗,s−i∗){\displaystyle s^{*}=(s_{i}^{*},s_{-i}^{*})}be a strategy profile, a set consisting of one strategy for each player, wheres−i∗{\displaystyle s_{-i}^{*}}denotes theN−1{\displaystyle N-1}strategies of all the players excepti{\displaystyle i}. Letui(si,s−i∗){\displaystyle u_{i}(s_{i},s_{-i}^{*})}be playeri's payoff as a function of the strategies. The strategy profiles∗{\displaystyle s^{*}}is a Nash equilibrium ifui(si∗,s−i∗)≥ui(si,s−i∗)for allsi∈Si.{\displaystyle u_{i}(s_{i}^{*},s_{-i}^{*})\geq u_{i}(s_{i},s_{-i}^{*})\ {\text{for all}}\ s_{i}\in S_{i}.} A game can have more than one Nash equilibrium. Even if the equilibrium is unique, it might beweak: a player might be indifferent among several strategies given the other players' choices. It is unique and called astrict Nash equilibriumif the inequality is strict so one strategy is the unique best response:ui(si∗,s−i∗)>ui(si,s−i∗)for allsi∈Si,si≠si∗.{\displaystyle u_{i}(s_{i}^{*},s_{-i}^{*})>u_{i}(s_{i},s_{-i}^{*})\ {\text{for all}}\ s_{i}\in S_{i},s_{i}\neq s_{i}^{*}.} The strategy setSi{\displaystyle S_{i}}can be different for different players, and its elements can be a variety of mathematical objects. Most simply, a player might choose between two strategies, e.g.Si={Yes,No}.{\displaystyle S_{i}=\{{\text{Yes}},{\text{No}}\}.}Or the strategy set might be a finite set of conditional strategies responding to other players, e.g.Si={Yes∣p=Low,No∣p=High}.{\displaystyle S_{i}=\{{\text{Yes}}\mid p={\text{Low}},{\text{No}}\mid p={\text{High}}\}.}Or it might be an infinite set, a continuum or unbounded, e.g.Si={Price}{\displaystyle S_{i}=\{{\text{Price}}\}}such thatPrice{\displaystyle {\text{Price}}}is a non-negative real number. Nash's existing proofs assume a finite strategy set, but the concept of Nash equilibrium does not require it. A game can have apure-strategyor amixed-strategyNash equilibrium. In the latter, not every player always plays the same strategy. Instead, there is aprobability distributionover different strategies. Suppose that in the Nash equilibrium, each player asks themselves: "Knowing the strategies of the other players, and treating the strategies of the other players as set in stone, would I suffer a loss by changing my strategy?" If every player's answer is "Yes", then the equilibrium is classified as astrict Nash equilibrium.[17] If instead, for some player, there is exact equality between the strategy in Nash equilibrium and some other strategy that gives exactly the same payout (i.e. the player is indifferent between switching and not), then the equilibrium is classified as aweak[note 1]ornon-strict Nash equilibrium[citation needed][clarification needed]. The Nash equilibrium defines stability only in terms of individual player deviations. In cooperative games such a concept is not convincing enough.Strong Nash equilibriumallows for deviations by every conceivable coalition.[18]Formally, a strong Nash equilibrium is a Nash equilibrium in which no coalition, taking the actions of its complements as given, can cooperatively deviate in a way that benefits all of its members.[19]However, the strong Nash concept is sometimes perceived as too "strong" in that the environment allows for unlimited private communication. In fact, strong Nash equilibrium has to be weaklyPareto efficient. As a result of these requirements, strong Nash is too rare to be useful in many branches of game theory. 
However, in games such as elections with many more players than possible outcomes, it can be more common than a stable equilibrium. A refined Nash equilibrium known ascoalition-proof Nash equilibrium(CPNE)[18]occurs when players cannot do better even if they are allowed to communicate and make "self-enforcing" agreement to deviate. Every correlated strategy supported byiterated strict dominanceand on thePareto frontieris a CPNE.[20]Further, it is possible for a game to have a Nash equilibrium that is resilient against coalitions less than a specified size, k. CPNE is related to thetheory of the core. Nash proved that ifmixed strategies(where a player chooses probabilities of using various pure strategies) are allowed, then every game with a finite number of players in which each player can choose from finitely many pure strategies has at least one Nash equilibrium, which might be a pure strategy for each player or might be a probability distribution over strategies for each player. Nash equilibria need not exist if the set of choices is infinite and non-compact. For example: However, a Nash equilibrium exists if the set of choices iscompactwith each player's payoff continuous in the strategies of all the players.[21] Rosen[22]extended Nash's existence theorem in several ways. He considers an n-player game, in which the strategy of each playeriis a vectorsiin the Euclidean space Rmi.Denotem:=m1+...+mn; so a strategy-tuple is a vector in Rm. Part of the definition of a game is a subsetSof Rmsuch that the strategy-tuple must be inS. This means that the actions of players may potentially be constrained based on actions of other players. A common special case of the model is whenSis a Cartesian product of convex setsS1,...,Sn, such that the strategy of playerimust be inSi. This represents the case that the actions of each playeriare constrained independently of other players' actions. If the following conditions hold: Then a Nash equilibrium exists. The proof uses theKakutani fixed-point theorem. Rosen also proves that, under certain technical conditions which include strict concavity, the equilibrium is unique. Nash's result refers to the special case in which eachSiis asimplex(representing all possible mixtures of pure strategies), and the payoff functions of all players arebilinear functionsof the strategies. The Nash equilibrium may sometimes appear non-rational in a third-person perspective. This is because a Nash equilibrium is not necessarilyPareto optimal. Nash equilibrium may also have non-rational consequences insequential gamesbecause players may "threaten" each other with threats they would not actually carry out. For such games thesubgame perfect Nash equilibriummay be more meaningful as a tool of analysis. Thecoordination gameis a classic two-player, two-strategygame, as shown in the examplepayoff matrixto the right. There are two pure-strategy equilibria, (A,A) with payoff 4 for each player and (B,B) with payoff 2 for each. The combination (B,B) is a Nash equilibrium because if either player unilaterally changes their strategy from B to A, their payoff will fall from 2 to 1. A famous example of a coordination game is thestag hunt. Two players may choose to hunt a stag or a rabbit, the stag providing more meat (4 utility units, 2 for each player) than the rabbit (1 utility unit). 
The caveat is that the stag must be cooperatively hunted, so if one player attempts to hunt the stag, while the other hunts the rabbit, the stag hunter will totally fail, for a payoff of 0, whereas the rabbit hunter will succeed, for a payoff of 1. The game has two equilibria, (stag, stag) and (rabbit, rabbit), because a player's optimal strategy depends on their expectation on what the other player will do. If one hunter trusts that the other will hunt the stag, they should hunt the stag; however if they think the other will hunt the rabbit, they too will hunt the rabbit. This game is used as an analogy for social cooperation, since much of the benefit that people gain in society depends upon people cooperating and implicitly trusting one another to act in a manner corresponding with cooperation. Driving on a road against an oncoming car, and having to choose either to swerve on the left or to swerve on the right of the road, is also a coordination game. For example, with payoffs 10 meaning no crash and 0 meaning a crash, the coordination game can be defined with the following payoff matrix: In this case there are two pure-strategy Nash equilibria, when both choose to either drive on the left or on the right. If we admitmixed strategies(where a pure strategy is chosen at random, subject to some fixed probability), then there are three Nash equilibria for the same case: two we have seen from the pure-strategy form, where the probabilities are (0%, 100%) for player one, (0%, 100%) for player two; and (100%, 0%) for player one, (100%, 0%) for player two respectively. We add another where the probabilities for each player are (50%, 50%). An application of Nash equilibria is in determining the expected flow of traffic in a network. Consider the graph on the right. If we assume that there arex{\displaystyle x}"cars" traveling fromAtoD, what is the expected distribution of traffic in the network? This situation can be modeled as a "game", where every traveler has a choice of 3 strategies and where each strategy is a route fromAtoD(one ofABD,ABCD, orACD). The "payoff" of each strategy is the travel time of each route. In the graph on the right, a car travelling viaABDexperiences travel time of1+x100+2{\displaystyle 1+{\frac {x}{100}}+2}, wherex{\displaystyle x}is the number of cars traveling on edgeAB. Thus, payoffs for any given strategy depend on the choices of the other players, as is usual. However, the goal, in this case, is to minimize travel time, not maximize it. Equilibrium will occur when the time on all paths is exactly the same. When that happens, no single driver has any incentive to switch routes, since it can only add to their travel time. For the graph on the right, if, for example, 100 cars are travelling fromAtoD, then equilibrium will occur when 25 drivers travel viaABD, 50 viaABCD, and 25 viaACD. Every driver now has a total travel time of 3.75 (to see this, a total of 75 cars take theABedge, and likewise, 75 cars take theCDedge). Notice that this distribution is not, actually, socially optimal. If the 100 cars agreed that 50 travel viaABDand the other 50 throughACD, then travel time for any single car would actually be 3.5, which is less than 3.75. This is also the Nash equilibrium if the path betweenBandCis removed, which means that adding another possible route can decrease the efficiency of the system, a phenomenon known asBraess's paradox. 
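The equilibrium claim in this traffic example can be checked directly. The sketch below is illustrative only: the network diagram is not reproduced in this extract, so the individual edge costs (in particular the cost of the BC edge) are assumptions chosen so that the route times match the figures quoted above (3.75 at equilibrium, 3.5 at the socially optimal split).

# Sketch: checking the Wardrop/Nash equilibrium claim for the traffic example above.
# Edge costs are assumptions chosen to reproduce the quoted numbers.
def route_times(n_abd, n_abcd, n_acd):
    x_ab = n_abd + n_abcd          # cars on edge AB
    x_cd = n_abcd + n_acd          # cars on edge CD
    t_ab = 1 + x_ab / 100          # assumed cost of edge AB
    t_cd = 1 + x_cd / 100          # assumed cost of edge CD
    t_bd = t_ac = 2                # assumed constant edges BD and AC
    t_bc = 0.25                    # assumed cost of edge BC
    return {"ABD": t_ab + t_bd,
            "ABCD": t_ab + t_bc + t_cd,
            "ACD": t_ac + t_cd}

equilibrium = (25, 50, 25)
print(route_times(*equilibrium))   # all three routes take 3.75

# No single driver can do better by switching: moving one car from any route to
# another never lowers that car's own travel time.
routes = ["ABD", "ABCD", "ACD"]
for i in range(3):
    for j in range(3):
        if i == j:
            continue
        flow = list(equilibrium)
        flow[i] -= 1
        flow[j] += 1
        assert route_times(*flow)[routes[j]] >= 3.75 - 1e-9

print(route_times(50, 0, 50))      # ABD and ACD drop to 3.5, but this split is not stable

Under these assumed costs, moving a single car from any route to any other never reduces that car's own travel time, which is exactly the Nash condition, while the 50/0/50 split is better for everyone yet unstable, since an individual driver could cut their time by switching to ABCD through the BC edge. The two-player game described next, in which both players pick an integer, is an example of a competition game.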
This can be illustrated by a two-player game in which both players simultaneously choose an integer from 0 to 3 and they both win the smaller of the two numbers in points. In addition, if one player chooses a larger number than the other, then they have to give up two points to the other. This game has a unique pure-strategy Nash equilibrium: both players choosing 0 (highlighted in light red). Any other strategy can be improved by a player switching their number to one less than that of the other player. In the adjacent table, if the game begins at the green square, it is in player 1's interest to move to the purple square and it is in player 2's interest to move to the blue square. Although it would not fit the definition of a competition game, if the game is modified so that the two players win the named amount if they both choose the same number, and otherwise win nothing, then there are 4 Nash equilibria: (0,0), (1,1), (2,2), and (3,3). There is an easy numerical way to identify Nash equilibria on a payoff matrix. It is especially helpful in two-person games where players have more than two strategies. In this case formal analysis may become too long. This rule does not apply to the case where mixed (stochastic) strategies are of interest. The rule goes as follows: if the first payoff number, in the payoff pair of the cell, is the maximum of the column of the cell and if the second number is the maximum of the row of the cell – then the cell represents a Nash equilibrium. We can apply this rule to a 3×3 matrix: Using the rule, we can very quickly (much faster than with formal analysis) see that the Nash equilibria cells are (B,A), (A,B), and (C,C). Indeed, for cell (B,A), 40 is the maximum of the first column and 25 is the maximum of the second row. For (A,B), 25 is the maximum of the second column and 40 is the maximum of the first row; the same applies for cell (C,C). For other cells, either one or both of the duplet members are not the maximum of the corresponding rows and columns. This said, the actual mechanics of finding equilibrium cells is obvious: find the maximum of a column and check if the second member of the pair is the maximum of the row. If these conditions are met, the cell represents a Nash equilibrium. Check all columns this way to find all NE cells. An N×N matrix may have between 0 and N×Npure-strategyNash equilibria. The concept ofstability, useful in the analysis of many kinds of equilibria, can also be applied to Nash equilibria. A Nash equilibrium for a mixed-strategy game is stable if a small change (specifically, an infinitesimal change) in probabilities for one player leads to a situation where two conditions hold: If these cases are both met, then a player with the small change in their mixed strategy will return immediately to the Nash equilibrium. The equilibrium is said to be stable. If condition one does not hold then the equilibrium is unstable. If only condition one holds then there are likely to be an infinite number of optimal strategies for the player who changed. In the "driving game" example above there are both stable and unstable equilibria. The equilibria involving mixed strategies with 100% probabilities are stable. If either player changes their probabilities slightly, they will be both at a disadvantage, and their opponent will have no reason to change their strategy in turn. The (50%,50%) equilibrium is unstable. 
If either player changes their probabilities (which would neither benefit or damage theexpectationof the player who did the change, if the other player's mixed strategy is still (50%,50%)), then the other player immediately has a better strategy at either (0%, 100%) or (100%, 0%). Stability is crucial in practical applications of Nash equilibria, since the mixed strategy of each player is not perfectly known, but has to be inferred from statistical distribution of their actions in the game. In this case unstable equilibria are very unlikely to arise in practice, since any minute change in the proportions of each strategy seen will lead to a change in strategy and the breakdown of the equilibrium. Finally in the eighties, building with great depth on such ideasMertens-stable equilibriawere introduced as asolution concept. Mertens stable equilibria satisfy bothforward inductionandbackward induction. In agame theorycontextstable equilibrianow usually refer to Mertens stable equilibria.[citation needed] If a game has auniqueNash equilibrium and is played among players under certain conditions, then the NE strategy set will be adopted. Sufficient conditions to guarantee that the Nash equilibrium is played are: Examples ofgame theoryproblems in which these conditions are not met: In his Ph.D. dissertation, John Nash proposed two interpretations of his equilibrium concept, with the objective of showing how equilibrium points can be connected with observable phenomenon. (...)One interpretation is rationalistic: if we assume that players are rational, know the full structure of the game, the game is played just once, and there is just one Nash equilibrium, then players will play according to that equilibrium. This idea was formalized by R. Aumann and A. Brandenburger, 1995,Epistemic Conditions for Nash Equilibrium, Econometrica, 63, 1161-1180 who interpreted each player's mixed strategy as a conjecture about the behaviour of other players and have shown that if the game and the rationality of players is mutually known and these conjectures are commonly known, then the conjectures must be a Nash equilibrium (a common prior assumption is needed for this result in general, but not in the case of two players. In this case, the conjectures need only be mutually known). A second interpretation, that Nash referred to by the mass action interpretation, is less demanding on players: [i]t is unnecessary to assume that the participants have full knowledge of the total structure of the game, or the ability and inclination to go through any complex reasoning processes.What is assumed is that there is a population of participants for each position in the game, which will be played throughout time by participants drawn at random from the different populations. If there is a stable average frequency with which each pure strategy is employed by theaverage memberof the appropriate population, then this stable average frequency constitutes a mixed strategy Nash equilibrium. For a formal result along these lines, see Kuhn, H. and et al., 1996, "The Work of John Nash in Game Theory",Journal of Economic Theory, 69, 153–185. Due to the limited conditions in which NE can actually be observed, they are rarely treated as a guide to day-to-day behaviour, or observed in practice in human negotiations. However, as a theoretical concept ineconomicsandevolutionary biology, the NE has explanatory power. 
The payoff in economics is utility (or sometimes money), and in evolutionary biology is gene transmission; both are the fundamental bottom line of survival. Researchers who apply games theory in these fields claim that strategies failing to maximize these for whatever reason will be competed out of the market or environment, which are ascribed the ability to test all strategies. This conclusion is drawn from the "stability" theory above. In these situations the assumption that the strategy observed is actually a NE has often been borne out by research.[24] The Nash equilibrium is a superset of the subgame perfect Nash equilibrium. The subgame perfect equilibrium in addition to the Nash equilibrium requires that the strategy also is a Nash equilibrium in every subgame of that game. This eliminates allnon-credible threats, that is, strategies that contain non-rational moves in order to make the counter-player change their strategy. The image to the right shows a simple sequential game that illustrates the issue with subgame imperfect Nash equilibria. In this game player one chooses left(L) or right(R), which is followed by player two being called upon to be kind (K) or unkind (U) to player one, However, player two only stands to gain from being unkind if player one goes left. If player one goes right the rational player two would de facto be kind to her/him in that subgame. However, The non-credible threat of being unkind at 2(2) is still part of the blue (L, (U,U)) Nash equilibrium. Therefore, if rational behavior can be expected by both parties the subgame perfect Nash equilibrium may be a more meaningful solution concept when suchdynamic inconsistenciesarise. Nash's original proof (in his thesis) used Brouwer's fixed-point theorem (e.g., see below for a variant). This section presents a simpler proof via theKakutani fixed-point theorem, following Nash's 1950 paper (he creditsDavid Galewith the observation that such a simplification is possible). To prove the existence of a Nash equilibrium, letri(σ−i){\displaystyle r_{i}(\sigma _{-i})}be the best response of player i to the strategies of all other players.ri(σ−i)=argmaxσi⁡ui(σi,σ−i){\displaystyle r_{i}(\sigma _{-i})=\mathop {\underset {\sigma _{i}}{\operatorname {arg\,max} }} u_{i}(\sigma _{i},\sigma _{-i})} Here,σ∈Σ{\displaystyle \sigma \in \Sigma }, whereΣ=Σi×Σ−i{\displaystyle \Sigma =\Sigma _{i}\times \Sigma _{-i}}, is a mixed-strategy profile in the set of all mixed strategies andui{\displaystyle u_{i}}is the payoff function for player i. Define aset-valued functionr:Σ→2Σ{\displaystyle r\colon \Sigma \rightarrow 2^{\Sigma }}such thatr=ri(σ−i)×r−i(σi){\displaystyle r=r_{i}(\sigma _{-i})\times r_{-i}(\sigma _{i})}. The existence of a Nash equilibrium is equivalent tor{\displaystyle r}having a fixed point. Kakutani's fixed point theorem guarantees the existence of a fixed point if the following four conditions are satisfied. Condition 1. is satisfied from the fact thatΣ{\displaystyle \Sigma }is a simplex and thus compact. Convexity follows from players' ability to mix strategies.Σ{\displaystyle \Sigma }is nonempty as long as players have strategies. Condition 2. and 3. are satisfied by way of Berge'smaximum theorem. Becauseui{\displaystyle u_{i}}is continuous and compact,r(σi){\displaystyle r(\sigma _{i})}is non-empty andupper hemicontinuous. Condition 4. is satisfied as a result of mixed strategies. 
Supposeσi,σi′∈r(σ−i){\displaystyle \sigma _{i},\sigma '_{i}\in r(\sigma _{-i})}, thenλσi+(1−λ)σi′∈r(σ−i){\displaystyle \lambda \sigma _{i}+(1-\lambda )\sigma '_{i}\in r(\sigma _{-i})}. i.e. if two strategies maximize payoffs, then a mix between the two strategies will yield the same payoff. Therefore, there exists a fixed point inr{\displaystyle r}and a Nash equilibrium.[25] When Nash made this point toJohn von Neumannin 1949, von Neumann famously dismissed it with the words, "That's trivial, you know. That's just afixed-point theorem." (See Nasar, 1998, p. 94.) We have a gameG=(N,A,u){\displaystyle G=(N,A,u)}whereN{\displaystyle N}is the number of players andA=A1×⋯×AN{\displaystyle A=A_{1}\times \cdots \times A_{N}}is the action set for the players. All of the action setsAi{\displaystyle A_{i}}are finite. LetΔ=Δ1×⋯×ΔN{\displaystyle \Delta =\Delta _{1}\times \cdots \times \Delta _{N}}denote the set of mixed strategies for the players. The finiteness of theAi{\displaystyle A_{i}}s ensures the compactness ofΔ{\displaystyle \Delta }. We can now define the gain functions. For a mixed strategyσ∈Δ{\displaystyle \sigma \in \Delta }, we let the gain for playeri{\displaystyle i}on actiona∈Ai{\displaystyle a\in A_{i}}beGaini(σ,a)=max{0,ui(a,σ−i)−ui(σi,σ−i)}.{\displaystyle {\text{Gain}}_{i}(\sigma ,a)=\max\{0,u_{i}(a,\sigma _{-i})-u_{i}(\sigma _{i},\sigma _{-i})\}.} The gain function represents the benefit a player gets by unilaterally changing their strategy. We now defineg=(g1,…,gN){\displaystyle g=(g_{1},\dotsc ,g_{N})}wheregi(σ)(a)=σi(a)+Gaini(σ,a){\displaystyle g_{i}(\sigma )(a)=\sigma _{i}(a)+{\text{Gain}}_{i}(\sigma ,a)}forσ∈Δ,a∈Ai{\displaystyle \sigma \in \Delta ,a\in A_{i}}. We see that∑a∈Aigi(σ)(a)=∑a∈Aiσi(a)+Gaini(σ,a)=1+∑a∈AiGaini(σ,a)>0.{\displaystyle \sum _{a\in A_{i}}g_{i}(\sigma )(a)=\sum _{a\in A_{i}}\sigma _{i}(a)+{\text{Gain}}_{i}(\sigma ,a)=1+\sum _{a\in A_{i}}{\text{Gain}}_{i}(\sigma ,a)>0.} Next we define:{f=(f1,⋯,fN):Δ→Δfi(σ)(a)=gi(σ)(a)∑b∈Aigi(σ)(b)a∈Ai{\displaystyle {\begin{cases}f=(f_{1},\cdots ,f_{N}):\Delta \to \Delta \\f_{i}(\sigma )(a)={\frac {g_{i}(\sigma )(a)}{\sum _{b\in A_{i}}g_{i}(\sigma )(b)}}&a\in A_{i}\end{cases}}} It is easy to see that eachfi{\displaystyle f_{i}}is a valid mixed strategy inΔi{\displaystyle \Delta _{i}}. It is also easy to check that eachfi{\displaystyle f_{i}}is a continuous function ofσ{\displaystyle \sigma }, and hencef{\displaystyle f}is a continuous function. As the cross product of a finite number of compact convex sets,Δ{\displaystyle \Delta }is also compact and convex. Applying the Brouwer fixed point theorem tof{\displaystyle f}andΔ{\displaystyle \Delta }we conclude thatf{\displaystyle f}has a fixed point inΔ{\displaystyle \Delta }, call itσ∗{\displaystyle \sigma ^{*}}. We claim thatσ∗{\displaystyle \sigma ^{*}}is a Nash equilibrium inG{\displaystyle G}. For this purpose, it suffices to show that∀i∈{1,⋯,N},∀a∈Ai:Gaini(σ∗,a)=0.{\displaystyle \forall i\in \{1,\cdots ,N\},\forall a\in A_{i}:\quad {\text{Gain}}_{i}(\sigma ^{*},a)=0.} This simply states that each player gains no benefit by unilaterally changing their strategy, which is exactly the necessary condition for a Nash equilibrium. Now assume that the gains are not all zero. Therefore,∃i∈{1,⋯,N},{\displaystyle \exists i\in \{1,\cdots ,N\},}anda∈Ai{\displaystyle a\in A_{i}}such thatGaini(σ∗,a)>0{\displaystyle {\text{Gain}}_{i}(\sigma ^{*},a)>0}. 
Then∑a∈Aigi(σ∗,a)=1+∑a∈AiGaini(σ∗,a)>1.{\displaystyle \sum _{a\in A_{i}}g_{i}(\sigma ^{*},a)=1+\sum _{a\in A_{i}}{\text{Gain}}_{i}(\sigma ^{*},a)>1.} So letC=∑a∈Aigi(σ∗,a).{\displaystyle C=\sum _{a\in A_{i}}g_{i}(\sigma ^{*},a).} Also we shall denoteGain(i,⋅){\displaystyle {\text{Gain}}(i,\cdot )}as the gain vector indexed by actions inAi{\displaystyle A_{i}}. Sinceσ∗{\displaystyle \sigma ^{*}}is the fixed point we have:σ∗=f(σ∗)⇒σi∗=fi(σ∗)⇒σi∗=gi(σ∗)∑a∈Aigi(σ∗)(a)⇒σi∗=1C(σi∗+Gaini(σ∗,⋅))⇒Cσi∗=σi∗+Gaini(σ∗,⋅)⇒(C−1)σi∗=Gaini(σ∗,⋅)⇒σi∗=(1C−1)Gaini(σ∗,⋅).{\displaystyle {\begin{aligned}\sigma ^{*}=f(\sigma ^{*})&\Rightarrow \sigma _{i}^{*}=f_{i}(\sigma ^{*})\\&\Rightarrow \sigma _{i}^{*}={\frac {g_{i}(\sigma ^{*})}{\sum _{a\in A_{i}}g_{i}(\sigma ^{*})(a)}}\\[6pt]&\Rightarrow \sigma _{i}^{*}={\frac {1}{C}}\left(\sigma _{i}^{*}+{\text{Gain}}_{i}(\sigma ^{*},\cdot )\right)\\[6pt]&\Rightarrow C\sigma _{i}^{*}=\sigma _{i}^{*}+{\text{Gain}}_{i}(\sigma ^{*},\cdot )\\&\Rightarrow \left(C-1\right)\sigma _{i}^{*}={\text{Gain}}_{i}(\sigma ^{*},\cdot )\\&\Rightarrow \sigma _{i}^{*}=\left({\frac {1}{C-1}}\right){\text{Gain}}_{i}(\sigma ^{*},\cdot ).\end{aligned}}} SinceC>1{\displaystyle C>1}we have thatσi∗{\displaystyle \sigma _{i}^{*}}is some positive scaling of the vectorGaini(σ∗,⋅){\displaystyle {\text{Gain}}_{i}(\sigma ^{*},\cdot )}. Now we claim that∀a∈Ai:σi∗(a)(ui(ai,σ−i∗)−ui(σi∗,σ−i∗))=σi∗(a)Gaini(σ∗,a){\displaystyle \forall a\in A_{i}:\quad \sigma _{i}^{*}(a)(u_{i}(a_{i},\sigma _{-i}^{*})-u_{i}(\sigma _{i}^{*},\sigma _{-i}^{*}))=\sigma _{i}^{*}(a){\text{Gain}}_{i}(\sigma ^{*},a)} To see this, first ifGaini(σ∗,a)>0{\displaystyle {\text{Gain}}_{i}(\sigma ^{*},a)>0}then this is true by definition of the gain function. Now assume thatGaini(σ∗,a)=0{\displaystyle {\text{Gain}}_{i}(\sigma ^{*},a)=0}. By our previous statements we have thatσi∗(a)=(1C−1)Gaini(σ∗,a)=0{\displaystyle \sigma _{i}^{*}(a)=\left({\frac {1}{C-1}}\right){\text{Gain}}_{i}(\sigma ^{*},a)=0} and so the left term is zero, giving us that the entire expression is0{\displaystyle 0}as needed. So we finally have that0=ui(σi∗,σ−i∗)−ui(σi∗,σ−i∗)=(∑a∈Aiσi∗(a)ui(ai,σ−i∗))−ui(σi∗,σ−i∗)=∑a∈Aiσi∗(a)(ui(ai,σ−i∗)−ui(σi∗,σ−i∗))=∑a∈Aiσi∗(a)Gaini(σ∗,a)by the previous statements=∑a∈Ai(C−1)σi∗(a)2>0{\displaystyle {\begin{aligned}0&=u_{i}(\sigma _{i}^{*},\sigma _{-i}^{*})-u_{i}(\sigma _{i}^{*},\sigma _{-i}^{*})\\&=\left(\sum _{a\in A_{i}}\sigma _{i}^{*}(a)u_{i}(a_{i},\sigma _{-i}^{*})\right)-u_{i}(\sigma _{i}^{*},\sigma _{-i}^{*})\\&=\sum _{a\in A_{i}}\sigma _{i}^{*}(a)(u_{i}(a_{i},\sigma _{-i}^{*})-u_{i}(\sigma _{i}^{*},\sigma _{-i}^{*}))\\&=\sum _{a\in A_{i}}\sigma _{i}^{*}(a){\text{Gain}}_{i}(\sigma ^{*},a)&&{\text{ by the previous statements }}\\&=\sum _{a\in A_{i}}\left(C-1\right)\sigma _{i}^{*}(a)^{2}>0\end{aligned}}} where the last inequality follows sinceσi∗{\displaystyle \sigma _{i}^{*}}is a non-zero vector. But this is a clear contradiction, so all the gains must indeed be zero. Therefore,σ∗{\displaystyle \sigma ^{*}}is a Nash equilibrium forG{\displaystyle G}as needed. If a player A has adominant strategysA{\displaystyle s_{A}}then there exists a Nash equilibrium in which A playssA{\displaystyle s_{A}}. In the case of two players A and B, there exists a Nash equilibrium in which A playssA{\displaystyle s_{A}}and B plays a best response tosA{\displaystyle s_{A}}. IfsA{\displaystyle s_{A}}is a strictly dominant strategy, A playssA{\displaystyle s_{A}}in all Nash equilibria. 
If both A and B have strictly dominant strategies, there exists a unique Nash equilibrium in which each plays their strictly dominant strategy. In games with mixed-strategy Nash equilibria, the probability of a player choosing any particular (so pure) strategy can be computed by assigning a variable to each strategy that represents a fixed probability for choosing that strategy. In order for a player to be willing to randomize, their expected payoff for each (pure) strategy should be the same. In addition, the sum of the probabilities for each strategy of a particular player should be 1. This creates a system of equations from which the probabilities of choosing each strategy can be derived.[16] In the matching pennies game, player A loses a point to B if A and B play the same strategy and wins a point from B if they play different strategies. To compute the mixed-strategy Nash equilibrium, assign A the probabilityp{\displaystyle p}of playing H and(1−p){\displaystyle (1-p)}of playing T, and assign B the probabilityq{\displaystyle q}of playing H and(1−q){\displaystyle (1-q)}of playing T. E[payoff for A playing H]=(−1)q+(+1)(1−q)=1−2q,E[payoff for A playing T]=(+1)q+(−1)(1−q)=2q−1,E[payoff for A playing H]=E[payoff for A playing T]⟹1−2q=2q−1⟹q=12.E[payoff for B playing H]=(+1)p+(−1)(1−p)=2p−1,E[payoff for B playing T]=(−1)p+(+1)(1−p)=1−2p,E[payoff for B playing H]=E[payoff for B playing T]⟹2p−1=1−2p⟹p=12.{\displaystyle {\begin{aligned}&\mathbb {E} [{\text{payoff for A playing H}}]=(-1)q+(+1)(1-q)=1-2q,\\&\mathbb {E} [{\text{payoff for A playing T}}]=(+1)q+(-1)(1-q)=2q-1,\\&\mathbb {E} [{\text{payoff for A playing H}}]=\mathbb {E} [{\text{payoff for A playing T}}]\implies 1-2q=2q-1\implies q={\frac {1}{2}}.\\&\mathbb {E} [{\text{payoff for B playing H}}]=(+1)p+(-1)(1-p)=2p-1,\\&\mathbb {E} [{\text{payoff for B playing T}}]=(-1)p+(+1)(1-p)=1-2p,\\&\mathbb {E} [{\text{payoff for B playing H}}]=\mathbb {E} [{\text{payoff for B playing T}}]\implies 2p-1=1-2p\implies p={\frac {1}{2}}.\end{aligned}}} Thus, a mixed-strategy Nash equilibrium in this game is for each player to randomly choose H or T withp=12{\displaystyle p={\frac {1}{2}}}andq=12{\displaystyle q={\frac {1}{2}}}. In 1971, Robert Wilson came up with the "oddness theorem",[26]which says that "almost all" finite games have a finite and odd number of Nash equilibria. In 1993, Harsanyi published an alternative proof of the result.[27]"Almost all" here means that any game with an infinite or even number of equilibria is very special in the sense that if its payoffs were even slightly randomly perturbed, with probability one it would have an odd number of equilibria instead. Theprisoner's dilemma, for example, has one equilibrium, while thebattle of the sexeshas three—two pure and one mixed, and this remains true even if the payoffs change slightly. The free money game is an example of a "special" game with an even number of equilibria. In it, two players have to both vote "yes" rather than "no" to get a reward and the votes are simultaneous. There are two pure-strategy Nash equilibria, (yes, yes) and (no, no), and no mixed strategy equilibria, because the strategy "yes" weakly dominates "no". "Yes" is as good as "no" regardless of the other player's action, but if there is any chance the other player chooses "yes" then "yes" is the best reply. 
Under a small random perturbation of the payoffs, however, the probability that any two payoffs would remain tied, whether at 0 or some other number, is vanishingly small, and the game would have either one or three equilibria instead.
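The cell-scanning rule described above (the first payoff is the maximum of its column, the second payoff is the maximum of its row) is easy to mechanize. The 3×3 payoff table itself is not reproduced in this extract, so the numbers below are an assumed stand-in chosen only to be consistent with the cells quoted above ((B,A) with 40 and 25, (A,B) with 25 and 40, and (C,C) with 10 and 10); a minimal sketch in Python:

# payoffs[(row, col)] = (row player's payoff, column player's payoff); assumed stand-in values
payoffs = {
    ("A", "A"): (0, 0),   ("A", "B"): (25, 40), ("A", "C"): (5, 10),
    ("B", "A"): (40, 25), ("B", "B"): (0, 0),   ("B", "C"): (5, 15),
    ("C", "A"): (10, 5),  ("C", "B"): (15, 5),  ("C", "C"): (10, 10),
}
rows = cols = ["A", "B", "C"]

def pure_nash_cells(payoffs, rows, cols):
    # A cell is a pure-strategy Nash equilibrium iff its first entry is the maximum
    # of its column (the row player cannot do better) and its second entry is the
    # maximum of its row (the column player cannot do better).
    equilibria = []
    for r in rows:
        for c in cols:
            u_row, u_col = payoffs[(r, c)]
            best_in_column = max(payoffs[(r2, c)][0] for r2 in rows)
            best_in_row = max(payoffs[(r, c2)][1] for c2 in cols)
            if u_row == best_in_column and u_col == best_in_row:
                equilibria.append((r, c))
    return equilibria

print(pure_nash_cells(payoffs, rows, cols))   # [('A', 'B'), ('B', 'A'), ('C', 'C')]

Applied to any bimatrix laid out this way, the same scan finds every pure-strategy Nash equilibrium; mixed-strategy equilibria instead require the indifference conditions discussed above.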
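The matching pennies calculation above can likewise be checked numerically. The sketch below evaluates the expected payoffs from the formulas given earlier, verifies the indifference condition at p = q = 1/2, and confirms that the gain function used in the existence proof vanishes there:

# Matching pennies: A loses a point when both choose the same side, wins a point otherwise;
# B's payoff is the negative of A's. p and q are the probabilities of A and B playing Heads.
def expected_payoffs(p, q):
    a_heads = (-1) * q + (+1) * (1 - q)
    a_tails = (+1) * q + (-1) * (1 - q)
    b_heads = (+1) * p + (-1) * (1 - p)
    b_tails = (-1) * p + (+1) * (1 - p)
    return (a_heads, a_tails), (b_heads, b_tails)

# Indifference at the mixed equilibrium: 1 - 2q = 2q - 1 and 2p - 1 = 1 - 2p, i.e. p = q = 1/2.
(a_h, a_t), (b_h, b_t) = expected_payoffs(0.5, 0.5)
assert a_h == a_t == b_h == b_t == 0.0

# The gain max(0, u_i(a, sigma_-i) - u_i(sigma)) from the existence proof is zero
# for every pure strategy at the equilibrium ...
mix_value_a = 0.5 * a_h + 0.5 * a_t
assert max(0.0, a_h - mix_value_a) == max(0.0, a_t - mix_value_a) == 0.0

# ... but not away from it: if B leans towards Heads (q = 0.6), A gains by deviating to Tails.
(a_h, a_t), _ = expected_payoffs(0.5, 0.6)
print(a_h, a_t)   # about -0.2 versus about +0.2: Tails is now strictly better for A

Away from the equilibrium some player has a strictly positive gain for some pure strategy, which is exactly the property the fixed-point argument above exploits.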
https://en.wikipedia.org/wiki/Nash_equilibrium
An obstacle (also called a barrier, impediment, or stumbling block) is an object, thing, action or situation that causes an obstruction.[1] An obstacle blocks or hinders our way forward. Different types of obstacles include physical, economic, biopsychosocial, cultural, political, technological and military ones. Physical obstacles comprise all those physical barriers that block an action and prevent progress toward, or the achievement of, a concrete goal. In sports, a variety of physical barriers or obstacles have been introduced into competition rules to make events more difficult and competitive. Economic obstacles can be defined as the elements of material deprivation that stand between people and certain goals. People may also be prevented from achieving certain goals by biological, psychological, social or cultural barriers. Political obstacles are the obstacles or difficulties that groups of citizens, their political representatives, political parties or countries place in one another's way in order to hinder the actions of their opponents. The improvement of living conditions in any human community is constantly challenged by the need for technologies that are still inaccessible or unavailable; these can be developed internally or acquired from other communities that have already developed them, and in both cases barriers must be overcome. When different communities or countries, whether or not they share a border, cannot develop good relations for economic, cultural or political reasons, they may go beyond the limits of diplomatic negotiation and create defensive or offensive military obstacles for their opponents or enemies.
https://en.wikipedia.org/wiki/Obstacle
Stalemateis a situation inchesswhere the player whose turn it is to move is not incheckand has no legal move. Stalemate results in adraw. During theendgame, stalemate is a resource that can enable the player with the inferior position to draw the game rather than lose.[2]In more complex positions, stalemate is much rarer, usually taking the form of aswindlethat succeeds only if the superior side is inattentive.[citation needed]Stalemate is also a common theme inendgame studiesand otherchess problems. The outcome of a stalemate was standardized as a draw in the 19th century (see§ History of the stalemate rule, below). Before this standardization, its treatment varied widely, including being deemed a win for the stalemating player, a half-win for that player, or a loss for that player; not being permitted; and resulting in the stalemated player missing a turn. Stalemate rules vary invariants and other games of the chess family. The first recorded use of stalemate is from 1765. It is a compounding ofMiddle Englishstaleandmate(meaningcheckmate).Staleis probably derived from Anglo-Frenchestalemeaning "standstill", acognateof "stand" and "stall", both ultimately derived from theProto-Indo-Europeanroot*sta-. The first recorded use in a figurative sense is in 1885.[3][4] Stalemate has become a widely usedmetaphorfor other situations where there is a conflict or contest between two parties, such as war orpoliticalnegotiations, and neither side is able to achieve victory, resulting in what is also called animpasse, adeadlock, or aMexican standoff. Chess writers note that this usage is amisnomerbecause, unlike in chess, the situation is often a temporary one that is ultimately resolved, even if it seems currently intractable.[5][6][7][8]The term "stalemate" is sometimes used incorrectly as a generic term for a draw in chess. Whiledrawsare common, they are rarely the direct result of stalemate.[9] With Black to move, Black is stalemated in diagrams 1 to 5. Stalemate is an important factor in theendgame– the endgame setup in diagram 1, for example, quite frequently is relevant in play (seeKing and pawn versus king endgame). The position in diagram 1 occurred in an 1898 game betweenAmos BurnandHarry Pillsbury[10]and also in a 1925 game betweenSavielly TartakowerandRichard Réti.[11]The same position, except shifted to the e-file, occurred in a 2009 game betweenGata KamskyandVladimir Kramnik.[12] The position in diagram 3 is an example of apawndrawing against aqueen. Stalemates of this sort can often save a player from losing an apparently hopeless position (seeQueen versus pawn endgame). The position in diagram 5 is a special kind of stalemate, in which no move is possible even if one ignores the need to avoid self-check. George P. Jelliss has called this type of stalemate adeadlock. Adding a White knight on f2 would produce achecklock: a checkmate position where no moves are possible, even if one ignores the need to avoid self-check. In general, positions with no moves at all available (even ignoring the need to avoid self-check) are calledlocks.[13] In this position from the gameViswanathan Anand–Vladimir Kramnikfrom the2007 World Chess Championship,[14]Black played 65...Kxf5, stalemating White.[15](Any other move by Black loses.) 
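The definition above (the side to move is not in check and has no legal move) is easy to verify programmatically. The sketch below uses the third-party python-chess library (assumed to be installed) and a generic queen-versus-king stalemate rather than any of the diagrams discussed here: White king on f7, White queen on g6, Black king on h8, Black to move.

import chess

board = chess.Board("7k/5K2/6Q1/8/8/8/8/8 b - - 0 1")

print(board.is_check())          # False: the black king is not attacked
print(list(board.legal_moves))   # []: every square the king could move to is covered
print(board.is_stalemate())      # True
print(board.outcome().result())  # '1/2-1/2': the game is drawn

The same two checks (not in check, no legal moves) distinguish stalemate from checkmate, where the side to move is in check.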
An intentional stalemate occurred on the 124th move of the fifth game of the1978 World Championshipmatch betweenViktor KorchnoiandAnatoly Karpov.[16]The game had been a theoretical draw for many moves.[17][18]White's bishop is useless; itcannot defend the queening square at a8nor attack the black pawn on the light a4-square. If the white king heads towards the black pawn, the black king can move towards a8 and set up afortress. The players were not on speaking terms, however, so neither would offer adraw by agreement. On his 124th move, White played 124.Bg7, delivering stalemate. Korchnoi said that it gave him pleasure to stalemate Karpov and that it was slightly humiliating.[19]Until 2021, this was the longest game played in aWorld Chess Championshipfinal match, as well as the only World Championship game to end in stalemate before 2007.[20] Sometimes, a surprise stalemate saves a game. In the gameOssip Bernstein–Vasily Smyslov[21](first diagram), Black can win by sacrificing the f-pawn and using the king to support the b-pawn. However, Smyslov thought it was good to advance the b-pawn because he could win the white rook with askewerif it captured the pawn. Play went: Now 60...Rh2+ 61.Kf3! Rxb2 would be stalemate (second diagram). Smyslov played 60...Kg4, and the game was drawn after 61.Kf1 (seeRook and pawn versus rook endgame).[22] Whereas the possibility of stalemate arose in the Bernstein–Smyslov game because of ablunder, it can also arise without one, as in the gameMilan Matulović–Nikolay Minev(first diagram). Play continued: The only meaningful attempt to make progress. Now all moves by Black (like 3...Ra3+?) lose, with one exception. Now 4.Rxa6 would be stalemate. White played 4.Rc5+ instead, and the game was drawn several moves later.[23] In the gameElijah Williams–Daniel Harrwitz[24](first diagram), Black was up aknightand a pawn in an endgame. This would normally be a decisivematerialadvantage, but Black could find no way to make progress because of various stalemate resources available to White. The game continued: Avoiding the threatened 73...Nc2+. 76...Nc2+ 77.Rxc2+! Kxc2 is stalemate. 77...Kxc3 is stalemate. 79...Rd3 80.Rxd3+! leaves Black with either insufficient material to win after 80...Nxd3 81.Kxa2 or a standardfortress in a cornerdraw after 80...Kxd3. Now the playersagreed to a draw, since 84...Kxb3 or 84...Rxb3 is stalemate, as is 84...Ra8 85.Rxc3+! Kxc3. Black could still have won the game until his critical mistake on move 82. Instead of 82...Nc3, 82...Nb4 wins; for example, after 83.Rc8 Re3 84.Rb8+ Kc5 85.Rc8+ Kd5 86.Rd8+ Kc6 87.Ra8 Re1+ 88.Kb2 Kc5 89.Kc3 a1=Q+, Black wins.[citation needed] This 2007 game,Magnus Carlsen–Loek van Wely, ended in stalemate.[25]White used the second-rank defense in arook and bishop versus rook endgamefor 46 moves. Thefifty-move rulewas about to come into effect, under which White could claim a draw. The game ended: White was stalemated.[26] Although stalemate usually occurs in the endgame, it can also occur with more pieces on the board. Outside of relatively simple endgame positions, such as those above, stalemate occurs rarely, usually when the side with the superior position has overlooked the possibility of stalemate.[27]This is typically realized by the inferior side's sacrifice of one or more pieces in order to force stalemate. A piece that is offered as a sacrifice to bring about stalemate is sometimes called adesperado. 
One of the best-known examples of thedesperadois the gameLarry Evans–Samuel Reshevsky[28]that was dubbed "TheSwindleof the Century".[29]Evans sacrificed his queen on move 49 and offered his rook on move 50. White's rook has been called theeternal rook. Capturing it results in stalemate, but otherwise it stays on the seventhrankand checks Black's kingad infinitum(i.e.perpetual check). The game would inevitably end in adraw by agreement, bythreefold repetition, or by an eventual claim under thefifty-move rule.[30] After 48...Qg6! 49.Rf8 Qe6! 50.Rh8+ Kg6, Black remains a piece ahead after 51.Qxe6 Nxe6, orforces mateafter 51.gxf4 Re1+ and 52...Qa2+.[31] The position at right occurred inBoris Gelfand–Vladimir Kramnik, 1994FIDECandidates match, game 6, inSanghi Nagar, India.[32]Kramnik, down two pawns and on the defensive, would be very happy with a draw. Gelfand has just played67. Re4–e7?(first diagram), a strong-looking move that threatens 68.Qxf6, winning a third pawn, or 68.Rc7, further constricting Black. Black responded67... Qc1!If White takes Black's undefendedrookwith 68.Qxd8, Black's desperado queen forces the draw with 68...Qh1+ 69.Kg3 Qh2+!, compelling 70.Kxh2 stalemate (second diagram). If White avoids the stalemate with 68.Rxg7+ Kxg7 69.Qxd8, Black draws byperpetual checkwith 69...Qh1+ 70.Kg3 Qg1+ 71.Kf4 Qc1+! 72.Ke4 Qc6+! 73.Kd3!?(73.d5 Qc4+; 73.Qd5 Qc2+) Qxf3+! 74.Kd2 Qg2+! 75.Kc3 Qc6+ 76.Kb4 Qb5+ 77.Ka3 Qd3+. Gelfand played68. d5instead but still only drew. InTroitsky–Vogt[clarification needed: full name], 1896, the famousendgame studycomposer Alexey Troitsky pulled off an elegant swindle in actual play. After Troitsky's1. Rd1!, Black fell into the trap with the seemingly crushing1... Bh3?, threatening 2...Qg2#. The game concluded2. Rxd8+ Kxd8 3. Qd1+! Qxd1 stalemate. White's bishop, knight, and f-pawn are allpinnedand unable to move.[33] Stalemate is a frequent theme inendgame studies[34]and otherchess compositions. An example is the "White to Play and Draw" study at right, composed by theAmericanmasterFrederick Rhine[35]and published in 2006.[36]White saves a draw with1. Ne5+!Black wins after 1.Nb4+? Kb5! or 1.Qe8+? Bxe8 2.Ne5+ Kb5! 3.Rxb2+ Nb3.1... Bxe5After 1...Kb5? 2.Rxb2+ Nb3 3.Rxc4! Qxe3 (best; 3...Qb8+ 4.Kd7 Qxh8 5.Rxb3+ forcescheckmate) 4.Rxb3+! Qxb3 5.Qh1! Bf5+ 6.Kd8!, White is winning.2. Qe8+!2.Qxe5? Qb7+ 3.Kd8 Qd7#.2... Bxe8 3. Rh6+ Bd63...Kb5 4.Rxb6+ Kxb6 5.Nxc4+ also leads to a drawn endgame. Not 5.Rxb2+? Bxb2 6.Nc4+ Kb5 7.Nxb2 Bh5! trapping White's knight.4. Rxd6+! Kxd6 5. Nxc4+! Nxc4 6. Rxb6+ Nxb6+Moving the king is actually a better try, but the resulting endgame of two knights and a bishop against a rook is a well-established theoretical draw.[37][38][39][40]7. Kd8!(rightmost diagram) Black is three pieces ahead, but if White is allowed to take the bishop, thetwo knights are insufficient to force checkmate. The only way to save the bishop is to move it, resulting in stalemate. A similar idea occasionally enables the inferior side to save a draw in the ending ofbishop, knight, and king versus lone king. At right is a composition byA. J. Roycroftwhich was published in theBritish Chess Magazinein 1957. White draws with1. c7!after which there are two main lines: Somechess problemsrequire "White to move and stalemate Black innmoves" (rather than the more common "White to move and checkmate Black innmoves"). 
Problemists have also tried to construct the shortest possible game ending in stalemate.Sam Loyddevised one just ten moves long: 1.e3 a5 2.Qh5 Ra6 3.Qxa5 h5 4.Qxc7 Rah6 5.h4 f6 6.Qxd7+ Kf7 7.Qxb7 Qd3 8.Qxb8 Qh7 9.Qxc8 Kg6 10.Qe6 (first diagram). A similar stalemate is reached after: 1.d4 c5 2.dxc5 f6 3.Qxd7+ Kf7 4.Qxd8 Bf5 5.Qxb8 h5 6.Qxa8 Rh6 7.Qxb7 a6 8.Qxa6 Bh7 9.h4 Kg6 10.Qe6 (Frederick Rhine). Loyd also demonstrated that stalemate can occur with all the pieces on the board: 1.d4 d6 2.Qd2 e5 3.a4 e4 4.Qf4 f5 5.h3 Be7 6.Qh2 Be6 7.Ra3 c5 8.Rg3 Qa5+ 9.Nd2 Bh4 10.f3 Bb3 11.d5 e3 12.c4 f4 (second diagram). Games such as this are occasionally played in tournaments as a pre-arranged draw.[42] There arechess compositionsfeaturing double stalemate. To the right are two double stalemate positions, in which neither side has a legal move. An example from actual play is given below:[43] White played1. Ngxf6+ Qxf6+(if 1...exf6 then 2.Ne7#)2. Nxf6+ exf6 3. c4 c5 4. a4 a5, leaving a double stalemate position. 1.Ndxf6+ would not have worked, for then 1...exf6 is possible.[43](Under the present rules, the game would have ended after 1...Qxf6+, as the position is then dead: no sequence of legal moves leads to either side being checkmated.) The fastest known game ending in a double stalemate position was discovered by Enzo Minerva and published in the Italian newspaperl'Unitàon 14 August 2007: 1.c4 d5 2.Qb3 Bh3 3.gxh3 f5 4.Qxb7 Kf7 5.Qxa7 Kg6 6.f3 c5 7.Qxe7 Rxa2 8.Kf2 Rxb2 9.Qxg7+ Kh5 10.Qxg8 Rxb1 11.Rxb1 Kh4 12.Qxh8 h5 13.Qh6 Bxh6 14.Rxb8 Be3+ 15.dxe3 Qxb8 16.Kg2 Qf4 17.exf4 d4 18.Be3 dxe3.[45] The stalemate rule has had a convoluted history.[46]Although stalemate is universally recognized as a draw today, that was not the case for much of the game's history. In the forerunners to modern chess, such aschaturanga, delivering stalemate resulted in a loss.[47]This was changed inshatranj, however, where stalemating was a win. This practice persisted in chess as played in early 15th-century Spain.[48]Lucena(c. 1497), however, treated stalemate as an inferior form of victory;[49]it won only half the stake in games played for money, and this continued to be the case in Spain as late as 1600.[50]From about 1600 to 1800, the rule in England was that stalemate was alossfor the player administering it, a rule that the eminent chess historianH. J. R. Murraybelieves may have been adopted from Russian chess.[51]That rule disappeared in England before 1820, being replaced by the French and Italian rule that a stalemate was a drawn game.[52] Throughout history, a stalemate has at various times been: Periodically, writers have argued that stalemate should again be made a win for the side causing the stalemate, on the grounds that the goal of chess is conceptually to capture the king and checkmate merely ends it when this is inevitable.[13]GrandmasterLarry Kaufmanwrites, "In my view, calling stalemate a draw is totally illogical, since it represents the ultimatezugzwang, where any move would get your king taken".[76]The British masterT. H. 
Tylorargued in a 1940 article in theBritish Chess Magazinethat the present rule, treating stalemate as a draw, "is without historical foundation and irrational, and primarily responsible for a vast percentage of draws, and hence should be abolished".[77]Years later,Fred Reinfeldwrote, "When Tylor wrote his attack on the stalemate rule, he released about his unhappy head a swarm of peevish maledictions that are still buzzing."[78]Larry Evanscalls the proposal to make stalemate a win for the stalemating player a "crude proposal that ... would radically alter centuries of tradition and make chess boring".[79]This rule change would cause a greater emphasis onmaterial; an extra pawn would be a greater advantage than it is today. However, Kaufman tested the idea of scoring stalemates higher than draws with the chess engineKomodo, and found that the impact is quite small because it is rare to be able to force stalemate but not checkmate: while allking and pawn versus kingendgames become wins when the pawn is protected (except when the attacking king is trapped in front of its own rook pawn), this does not turn out to be common enough. The problem is that king and lone minor piece against king cannot force stalemate in general.Emanuel LaskerandRichard Rétiproposed that both stalemateandking and minor versus king (with the minor piece side to move) should give ¾ points to the superior side: this would effectively restore not only the old stalemate rule but also the oldbare kingrule. Kaufman and correspondence grandmasterArno Nickelhave proposed going even further, and giving only ¼ point as well to the side that brings about athreefold repetition(which likewise has precedents in xiangqi, shogi, andGo). According to his tests with Komodo, chess at the level of a human World Championship match would have a draw rate of 65.6%; scoring stalemate as ¾–¼ reduces the draw rate to 63.4%; scoring stalemateandbare king as ¾–¼ brings it to 55.9%; and scoring stalemate, bare king,andthreefold repetition as ¾–¼ brings it all the way down to 22.6%. (The same reduction of draws would occur if stalemate, bare king, and threefold repetition were scored as 1–0 instead of ¾–¼, but the point of the ¾–¼ scoring is to allow the weaker side to still benefit from avoiding checkmate, while giving the stronger side something to play for even when checkmate cannot be attained.)[80] Jelliss has suggested that under the logic that stalemate should be a win (since any move would get the king taken), checklock should be a draw. (In a checklock position, no forward play is possible even if exposing the king to check is valid, so the king cannot get captured. The same logic would apply to deadlock.)[13] If stalemate were a loss for the player unable to move, the outcome of someendgameswould be affected.[33]In some situations the superior side can force stalemate but not checkmate. In others, the defending player can use stalemate as a defensive technique to avoid losing (under the current rule): The effect if stalemates were to be scored as ¾–¼ would be similar but less severe, as then the weaker side would still be rewarded somewhat for avoiding checkmate via stalemate, just not as much as before.[80] Not allvariants of chessconsider the stalemate to be a draw. Many regional variants, as well some variants of Western chess, have adopted their own rules on how to treat the stalemated player. 
Inchaturanga, which is widely considered to be the common ancestor of all variants of chess, a stalemate was a win for the stalemated player.[82][83]Around the 7th century, this game was adopted in the Middle East asshatranjwith very similar rules to its predecessor; however, the stalemate rule was changed to its exact opposite: i.e. it was a win for the player delivering the stalemate.[84]This game was in turn introduced to theWestern world, where it would eventually evolve to modern-day Westernchess, although the stalemate rule for Western chess was not standardized as a draw until the 19th century (seehistory of the rule). Chaturanga also evolved into several other games in various regions ofAsia, all of which have varying rules on stalemating: The majority ofvariants of Western chessdo not specify any alterations to the rule of stalemate. There are some variants, however, where the ruleisspecified to differ from that of standard chess: There is a world of difference between no choice ... and a poor choice. Editorial writers often talk about a political stalemate when the analogy they probably have in mind is a political "zugzwang". In stalemate a player has no legal moves, period. In zugzwang he has nothing pleasant to do.
https://en.wikipedia.org/wiki/Stalemate
Adatabase transactionsymbolizes aunit of work, performed within adatabase management system(or similar system) against adatabase, that is treated in a coherent and reliable way independent of other transactions. A transaction generally represents any change in a database. Transactions in a database environment have two main purposes: In a database management system, a transaction is a single unit of logic or work, sometimes made up of multiple operations. Any logical calculation done in a consistent mode in a database is known as a transaction. One example is a transfer from one bank account to another: the complete transaction requires subtracting the amount to be transferred from one account and adding that same amount to the other. A database transaction, by definition, must beatomic(it must either be complete in its entirety or have no effect whatsoever),consistent(it must conform to existing constraints in the database),isolated(it must not affect other transactions) anddurable(it must get written to persistent storage).[1]Database practitioners often refer to these properties of database transactions using the acronymACID. Databasesand other data stores which treat theintegrityof data as paramount often include the ability to handle transactions to maintain the integrity of data. A single transaction consists of one or more independent units of work, each reading and/or writing information to a database or other data store. When this happens it is often important to ensure that all such processing leaves the database or data store in a consistent state. Examples fromdouble-entry accounting systemsoften illustrate the concept of transactions. In double-entry accounting every debit requires the recording of an associated credit. If one writes a check for $100 to buy groceries, a transactional double-entry accounting system must record the following two entries to cover the single transaction: A transactional system would make both entries pass or both entries would fail. By treating the recording of multiple entries as an atomic transactional unit of work the system maintains the integrity of the data recorded. In other words, nobody ends up with a situation in which a debit is recorded but no associated credit is recorded, or vice versa. Atransactional databaseis aDBMSthat provides theACID propertiesfor a bracketed set of database operations (begin-commit). Transactions ensure that the database is always in a consistent state, even in the event of concurrent updates and failures.[2]All the write operations within a transaction have an all-or-nothing effect, that is, either the transaction succeeds and all writes take effect, or otherwise, the database is brought to a state that does not include any of the writes of the transaction. Transactions also ensure that the effect of concurrent transactions satisfies certain guarantees, known asisolation level. The highest isolation level isserializability, which guarantees that the effect of concurrent transactions is equivalent to their serial (i.e. sequential) execution. Most modern[update]relational database management systemssupport transactions.NoSQLdatabases prioritize scalability along with supporting transactions in order to guarantee data consistency in the event of concurrent updates and accesses. In a database system, a transaction might consist of one or more data-manipulation statements and queries, each reading and/or writing information in the database. 
Users of database systems consider consistency and integrity of data as highly important. A simple transaction is usually issued to the database system in a language like SQL, wrapped between a transaction start and a commit or rollback (a sketch of this pattern is given below). A transaction commit operation persists all the results of data manipulations within the scope of the transaction to the database. A transaction rollback operation does not persist the partial results of data manipulations within the scope of the transaction to the database. In no case can a partial transaction be committed to the database, since that would leave the database in an inconsistent state. Internally, multi-user databases store and process transactions, often by using a transaction ID or XID. There are several ways for transactions to be implemented other than the simple way documented above. Nested transactions, for example, are transactions which contain statements within them that start new transactions (i.e. sub-transactions). Multi-level transactions are a variant of nested transactions where the sub-transactions take place at different levels of a layered system architecture (e.g., with one operation at the database-engine level, one operation at the operating-system level).[3] Another type of transaction is the compensating transaction. Transactions are available in most SQL database implementations, though with varying levels of robustness. For example, MySQL began supporting transactions from early version 3.23, but the InnoDB storage engine was not the default before version 5.5. The earlier default storage engine, MyISAM, does not support transactions. A transaction is typically started using the command BEGIN (although the SQL standard specifies START TRANSACTION). When the system processes a COMMIT statement, the transaction ends with successful completion. A ROLLBACK statement can also end the transaction, undoing any work performed since BEGIN. If autocommit was disabled when the transaction started, autocommit will also be re-enabled when the transaction ends. One can set the isolation level for individual transactional operations as well as globally. At higher isolation levels (READ COMMITTED and above), the results of operations performed after a transaction has started remain invisible to other database users until the transaction has ended; the highest level, SERIALIZABLE, additionally makes concurrent transactions behave as if they had run one after another. At the lowest level (READ UNCOMMITTED), which may occasionally be used to achieve high concurrency, such changes are immediately visible. Relational databases are traditionally composed of tables with fixed-size fields and records. Object databases comprise variable-sized blobs, possibly serializable or incorporating a MIME type. The fundamental similarities between relational and object databases are the start and the commit or rollback. After starting a transaction, database records or objects are locked, either read-only or read-write. Reads and writes can then occur. Once the transaction is fully defined, changes are committed or rolled back atomically, such that at the end of the transaction there is no inconsistency. Database systems implement distributed transactions[4] as transactions accessing data over multiple nodes. A distributed transaction enforces the ACID properties over multiple nodes, and might include systems such as databases, storage managers, file systems, messaging systems, and other data managers. In a distributed transaction there is typically an entity that coordinates the whole process, to ensure that all parts of the transaction are applied to all relevant systems.
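A minimal sketch of the begin/commit/rollback pattern referred to above, using Python's built-in sqlite3 module; the account table, balances and transfer amounts are invented for the illustration, and the error handling is deliberately simple:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO account VALUES ('alice', 500), ('bob', 100)")
conn.commit()

def transfer(conn, src, dst, amount):
    try:
        # Both updates belong to one transaction: either both take effect or neither does.
        conn.execute("UPDATE account SET balance = balance - ? WHERE name = ?", (amount, src))
        conn.execute("UPDATE account SET balance = balance + ? WHERE name = ?", (amount, dst))
        (balance,) = conn.execute("SELECT balance FROM account WHERE name = ?", (src,)).fetchone()
        if balance < 0:
            raise ValueError("insufficient funds")
        conn.commit()        # persist both writes
    except Exception:
        conn.rollback()      # undo the partial work; the database stays consistent
        raise

transfer(conn, "alice", "bob", 100)                   # succeeds: 400 / 200
print(conn.execute("SELECT * FROM account ORDER BY name").fetchall())

try:
    transfer(conn, "bob", "alice", 1000)              # fails and is rolled back
except ValueError:
    pass
print(conn.execute("SELECT * FROM account ORDER BY name").fetchall())   # still 400 / 200

In this sketch the two balance updates either both take effect (commit) or neither does (rollback), which is the atomic, all-or-nothing behaviour described at the start of this article.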
Moreover, the integration of Storage as a Service (StaaS) within these environments is crucial, as it offers a virtually infinite pool of storage resources, accommodating a range of cloud-based data store classes with varying availability, scalability, and ACID properties. This integration is essential for achieving higher availability, lower response time, and cost efficiency in data-intensive applications deployed across cloud-based data stores.[5]

The Namesys Reiser4 filesystem for Linux[6] supports transactions, and as of Microsoft Windows Vista, the Microsoft NTFS filesystem[7] supports distributed transactions across networks. Research is ongoing into more data-coherent filesystems, such as the Warp Transactional Filesystem (WTF).[8]
https://en.wikipedia.org/wiki/Database_transaction
Dekker's algorithmis the first known correct solution to themutual exclusionproblem inconcurrent programmingwhere processes only communicate via shared memory. The solution is attributed toDutchmathematicianTh. J. DekkerbyEdsger W. Dijkstrain an unpublished paper on sequential process descriptions[1]and his manuscript oncooperating sequential processes.[2]It allows two threads to share a single-use resource without conflict, using onlyshared memoryfor communication. It avoids the strict alternation of a naïve turn-taking algorithm, and was one of the firstmutual exclusionalgorithms to be invented. If two processes attempt to enter acritical sectionat the same time, the algorithm will allow only one process in, based on whoseturnit is. If one process is already in the critical section, the other process willbusy waitfor the first process to exit. This is done by the use of two flags,wants_to_enter[0]andwants_to_enter[1], which indicate an intention to enter the critical section on the part of processes 0 and 1, respectively, and a variableturnthat indicates who has priority between the two processes. Dekker's algorithm can be expressed inpseudocode, as follows.[3] Processes indicate an intention to enter the critical section which is tested by the outer while loop. If the other process has not flagged intent, the critical section can be entered safely irrespective of the current turn. Mutual exclusion will still be guaranteed as neither process can become critical before setting their flag (implying at least one process will enter the while loop). This also guarantees progress as waiting will not occur on a process which has withdrawn intent to become critical. Alternatively, if the other process's variable was set, the while loop is entered and the turn variable will establish who is permitted to become critical. Processes without priority will withdraw their intention to enter the critical section until they are given priority again (the inner while loop). Processes with priority will break from the while loop and enter their critical section. Dekker's algorithm guaranteesmutual exclusion, freedom fromdeadlock, and freedom fromstarvation. Let us see why the last property holds. Suppose p0 is stuck inside thewhile wants_to_enter[1]loop forever. There is freedom from deadlock, so eventually p1 will proceed to its critical section and setturn = 0(and the value of turn will remain unchanged as long as p0 doesn't progress). Eventually p0 will break out of the innerwhile turn ≠ 0loop (if it was ever stuck on it). After that it will setwants_to_enter[0]to true and settle down to waiting forwants_to_enter[1]to become false (sinceturn = 0, it will never do the actions in the while loop). The next time p1 tries to enter its critical section, it will be forced to execute the actions in itswhile wants_to_enter[0]loop. In particular, it will eventually setwants_to_enter[1]to false and get stuck in thewhile turn ≠ 1loop (since turn remains 0). The next time control passes to p0, it will exit thewhile wants_to_enter[1]loop and enter its critical section. If the algorithm were modified by performing the actions in thewhile wants_to_enter[1]loop without checking ifturn = 0, then there is a possibility of starvation. Thus all the steps in the algorithm are necessary. One advantage of this algorithm is that it doesn't require specialtest-and-set(atomic read/modify/write) instructions and is therefore highly portable between languages and machine architectures. 
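A minimal Java sketch of the two-flag, one-turn scheme described above; this is a paraphrase for illustration, not the article's own listing. The shared fields are declared volatile so that each process's writes become visible to the other under the Java memory model; the compiler and memory-ordering caveats are discussed below.

class DekkerLock {
    // Shared state: one intent flag per process, plus whose turn it is to back off.
    private volatile boolean wants0 = false;
    private volatile boolean wants1 = false;
    private volatile int turn = 0;          // 0 or 1

    // id must be 0 or 1.
    void lock(int id) {
        int other = 1 - id;
        setWants(id, true);                 // announce intent to enter
        while (getWants(other)) {           // the other process also wants in
            if (turn != id) {               // not our turn: withdraw intent and wait
                setWants(id, false);
                while (turn != id) {        // busy wait until we are given priority
                    Thread.onSpinWait();
                }
                setWants(id, true);         // re-announce intent and re-check
            }
        }
        // critical section may now be entered
    }

    void unlock(int id) {
        turn = 1 - id;                      // hand priority to the other process
        setWants(id, false);
    }

    private boolean getWants(int id) { return id == 0 ? wants0 : wants1; }
    private void setWants(int id, boolean v) {
        if (id == 0) wants0 = v; else wants1 = v;
    }
}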
One disadvantage is that it is limited to two processes and makes use of busy waiting instead of process suspension. (The use of busy waiting suggests that processes should spend a minimum amount of time inside the critical section.) Modern operating systems provide mutual exclusion primitives that are more general and flexible than Dekker's algorithm. However, in the absence of actual contention between the two processes, entry to and exit from the critical section are extremely efficient when Dekker's algorithm is used.

Many modern CPUs execute their instructions in an out-of-order fashion; even memory accesses can be reordered (see memory ordering). This algorithm won't work on SMP machines equipped with these CPUs without the use of memory barriers. Additionally, many optimizing compilers can perform transformations that will cause this algorithm to fail regardless of the platform. In many languages, it is legal for a compiler to detect that the flag variables wants_to_enter[0] and wants_to_enter[1] are never accessed in the loop. It can then remove the writes to those variables from the loop, using a process called loop-invariant code motion. It would also be possible for many compilers to detect that the turn variable is never modified by the inner loop, and perform a similar transformation, resulting in a potential infinite loop. If either of these transformations is performed, the algorithm will fail, regardless of architecture.

To alleviate this problem, the shared variables should be marked volatile, telling the compiler that they may be modified outside the scope of the currently executing context. For example, in C, C++, C# or Java, one would annotate these variables as 'volatile'. Note, however, that the C/C++ "volatile" attribute only guarantees that the compiler generates code with the proper ordering; it does not include the necessary memory barriers to guarantee in-order execution of that code. C++11 atomic variables can be used to guarantee the appropriate ordering requirements: by default, operations on atomic variables are sequentially consistent, so if the wants_to_enter and turn variables are atomic, a naive implementation will "just work". Alternatively, ordering can be guaranteed by the explicit use of separate fences, with the load and store operations using a relaxed ordering.
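In Java, the analogous "just works" route is to keep the shared state in java.util.concurrent atomic classes, whose plain get and set calls have volatile semantics. A sketch of the shared state only; the names are illustrative:

import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicIntegerArray;

class DekkerSharedState {
    // 1 = wants to enter, 0 = does not; element reads and writes behave like volatile accesses.
    final AtomicIntegerArray wantsToEnter = new AtomicIntegerArray(2);
    final AtomicInteger turn = new AtomicInteger(0);
}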
https://en.wikipedia.org/wiki/Dekker%27s_algorithm
The Eisenberg & McGuire algorithm is an algorithm for solving the critical sections problem, a general version of the dining philosophers problem. It was described in 1972 by Murray A. Eisenberg and Michael R. McGuire.

All the n processes share two variables: an n-element array flags and an integer turn. The variable turn is set arbitrarily to a number between 0 and n−1 at the start of the algorithm. The flags entry for each process is set to WAITING whenever that process intends to enter the critical section; each entry takes one of the values IDLE, WAITING or ACTIVE, and is initially IDLE.
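A sketch of just this shared state in Java; the full entry and exit protocol is omitted, and the names and the use of atomic arrays are illustrative assumptions rather than the authors' notation:

import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicIntegerArray;

class EisenbergMcGuireState {
    static final int IDLE = 0, WAITING = 1, ACTIVE = 2;

    final int n;                       // number of processes
    final AtomicIntegerArray flags;    // one entry per process, initially IDLE
    final AtomicInteger turn;          // holds some value in 0..n-1 at the start

    EisenbergMcGuireState(int n) {
        this.n = n;
        this.flags = new AtomicIntegerArray(n);  // int arrays are zero-initialised, i.e. IDLE
        this.turn = new AtomicInteger(0);        // any value in 0..n-1 is a valid start
    }
}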
https://en.wikipedia.org/wiki/Eisenberg_%26_McGuire_algorithm
Lamport's bakery algorithm is a computer algorithm devised by computer scientist Leslie Lamport, as part of his long study of the formal correctness of concurrent systems, which is intended to improve the safety in the usage of shared resources among multiple threads by means of mutual exclusion.

In computer science, it is common for multiple threads to simultaneously access the same resources. Data corruption can occur if two or more threads try to write into the same memory location, or if one thread reads a memory location before another has finished writing into it. Lamport's bakery algorithm is one of many mutual exclusion algorithms designed to prevent concurrent threads from entering critical sections of code concurrently, eliminating the risk of data corruption.

Lamport envisioned a bakery with a numbering machine at its entrance, so each customer is given a unique number. Numbers increase by one as customers enter the store. A global counter displays the number of the customer that is currently being served. All other customers must wait in a queue until the baker finishes serving the current customer and the next number is displayed. When the customer is done shopping and has disposed of his or her number, the clerk increments the number, allowing the next customer to be served. That customer must draw another number from the numbering machine in order to shop again. According to the analogy, the "customers" are threads, identified by the letter i, obtained from a global variable.

Due to the limitations of computer architecture, some parts of Lamport's analogy need slight modification. It is possible that more than one thread will get the same number n when they request it; this cannot be avoided (without first solving the mutual exclusion problem, which is the goal of the algorithm). Therefore, it is assumed that the thread identifier i is also a priority: a lower value of i means a higher priority, and threads with higher priority will enter the critical section first.

The critical section is that part of code that requires exclusive access to resources and may only be executed by one thread at a time. In the bakery analogy, it is when the customer trades with the baker that others must wait.

When a thread wants to enter the critical section, it has to check whether it is now its turn to do so. It should check the number n of every other thread to make sure that it has the smallest one. In case another thread has the same number, the thread with the smallest i will enter the critical section first. In pseudocode this comparison between threads a and b can be written as (Number[a], a) < (Number[b], b), which is equivalent to: Number[a] < Number[b], or Number[a] = Number[b] and a < b.

Once the thread ends its critical job, it gets rid of its number and enters the non-critical section. The non-critical section is the part of code that doesn't need exclusive access. It represents some thread-specific computation that doesn't interfere with other threads' resources and execution. This part is analogous to actions that occur after shopping, such as putting change back into the wallet.

In Lamport's original paper, the entering variable is known as choosing; each thread's entering flag and number are initially zero (or false), meaning the thread is not attempting to enter.

In this example, all threads execute the same "main" function, Thread. In real applications, different threads often have different "main" functions. Note that, as in the original paper, the thread checks itself before entering the critical section. Since the loop conditions will evaluate as false, this does not cause much delay. Each thread only writes its own storage; only reads are shared.
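The tie-breaking comparison can be written as a small helper, assuming number is the array of tickets; this snippet is illustrative only:

class TicketOrder {
    // Returns true when thread a's ticket (number[a], a) precedes thread b's
    // ticket (number[b], b) in lexicographic order, i.e. a enters first.
    static boolean precedes(int[] number, int a, int b) {
        return number[a] < number[b]
            || (number[a] == number[b] && a < b);
    }
}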
It is remarkable that this algorithm is not built on top of some lower level "atomic" operation, e.g.compare-and-swap. The original proof shows that for overlapping reads and writes to the same storage cell only the write must be correct.[clarification needed]The read operation can return an arbitrary number. Therefore, this algorithm can be used to implement mutual exclusion on memory that lacks synchronisation primitives, e.g., a simple SCSI disk shared between two computers. The necessity of the variableEnteringmight not be obvious as there is no 'lock' around lines 7 to 13. However, suppose the variable was removed and two processes computed the sameNumber[i]. If the higher-priority process was preempted before settingNumber[i], the low-priority process will see that the other process has a number of zero, and enters the critical section; later, the high-priority process will ignore equalNumber[i]for lower-priority processes, and also enters the critical section. As a result, two processes can enter the critical section at the same time. The bakery algorithm uses theEnteringvariable to make the assignment on line 6 look like it was atomic; processiwill never see a number equal to zero for a processjthat is going to pick the same number asi. When implementing the pseudo code in a single process system or undercooperative multitasking, it is better to replace the "do nothing" sections with code that notifies the operating system to immediately switch to the next thread. This primitive is often referred to asyield. Lamport's bakery algorithm assumes a sequential consistency memory model. Few, if any, languages or multi-core processors implement such a memory model. Therefore, correct implementation of the algorithm typically requires inserting fences to inhibit reordering.[1] We declare N to be the number of processes, and we assume that N is a natural number. We define P to be the set {1, 2, ... , N} of processes. The variables num and flag are declared as global. The following definesLL(j, i)to be true iff<<num[j], j>>is less than or equal to<<num[i], i>>in the usuallexicographical ordering. For each element in P there is a process with local variables unread, max and nxt. Steps between consecutive labels p1, ..., p7, cs are considered atomic. The statement with(x\inS){body}sets id to a nondeterministically chosen element of the set S and then executes body. A step containing the statement await expr can be executed only when the value of expr isTRUE. We use the AtomicIntegerArray class not for its built in atomic operations but because its get and set methods work like volatile reads and writes. Under theJava Memory Modelthis ensures that writes are immediately visible to all threads.
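A Java sketch in the spirit of the implementation the article alludes to, using AtomicIntegerArray so that element reads and writes behave like volatile accesses; the class and method names here are illustrative, not the article's original listing:

import java.util.concurrent.atomic.AtomicIntegerArray;

class BakeryLock {
    private final int n;                         // number of threads
    private final AtomicIntegerArray entering;   // 1 while a thread is picking its number
    private final AtomicIntegerArray number;     // 0 = not interested, otherwise the ticket

    BakeryLock(int n) {
        this.n = n;
        this.entering = new AtomicIntegerArray(n);  // get/set act like volatile reads and writes
        this.number = new AtomicIntegerArray(n);
    }

    void lock(int i) {
        entering.set(i, 1);
        int max = 0;
        for (int j = 0; j < n; j++) {
            max = Math.max(max, number.get(j));
        }
        number.set(i, max + 1);                  // take a ticket one larger than any seen
        entering.set(i, 0);
        for (int j = 0; j < n; j++) {
            while (entering.get(j) == 1) {       // wait while thread j is still choosing
                Thread.onSpinWait();
            }
            // wait while j holds a smaller ticket, or an equal ticket with higher priority
            while (number.get(j) != 0
                    && (number.get(j) < number.get(i)
                        || (number.get(j) == number.get(i) && j < i))) {
                Thread.onSpinWait();
            }
        }
        // critical section
    }

    void unlock(int i) {
        number.set(i, 0);                        // discard the ticket
    }
}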
https://en.wikipedia.org/wiki/Lamport%27s_bakery_algorithm
Peterson's algorithm(orPeterson's solution) is aconcurrent programmingalgorithmformutual exclusionthat allows two or more processes to share a single-use resource without conflict, using only shared memory forcommunication. It was formulated byGary L. Petersonin 1981.[1]While Peterson's original formulation worked with only two processes, the algorithm can be generalized for more than two.[2] The algorithm uses two variables:flagandturn. Aflag[n]value oftrueindicates that the processnwants to enter thecritical section. Entrance to the critical section is granted for process P0 if P1 does not want to enter its critical section or if P1 has given priority to P0 by settingturnto0. The algorithm satisfies the three essential criteria to solve the critical-section problem. The while condition works even with preemption.[1] The three criteria aremutual exclusion, progress, and bounded waiting.[3] Sinceturncan take on one of two values, it can be replaced by a single bit, meaning that the algorithm requires only three bits of memory.[4]: 22 P0 and P1 can never be in the critical section at the same time. If P0 is in its critical section, thenflag[0]is true. In addition, eitherflag[1]isfalse(meaning that P1 has left its critical section), orturnis0(meaning that P1 is just now trying to enter the critical section, but graciously waiting), or P1 is at labelP1_gate(trying to enter its critical section, after settingflag[1]totruebut before settingturnto0and busy waiting). So if both processes are in their critical sections, then we conclude that the state must satisfyflag[0]andflag[1]andturn = 0andturn = 1. No state can satisfy bothturn = 0andturn = 1, so there can be no state where both processes are in their critical sections. (This recounts an argument that is made rigorous in Schneider 1997.[5]) Progress is defined as the following: if no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder sections can participate in making the decision as to which process will enter its critical section next. Note that for a process or thread, the remainder sections are parts of the code that are not related to the critical section. This selection cannot be postponed indefinitely.[3]A process cannot immediately re-enter the critical section if the other process has set its flag to say that it would like to enter its critical section. Bounded waiting, orbounded bypass, means that the number of times a process is bypassed by another process after it has indicated its desire to enter the critical section is bounded by a function of the number of processes in the system.[3][4]: 11In Peterson's algorithm, a process will never wait longer than one turn for entrance to the critical section. The filter algorithm generalizes Peterson's algorithm toN> 2processes.[6]Instead of a boolean flag, it requires an integer variable per process, stored in a single-writer/multiple-reader (SWMR) atomicregister, andN− 1additional variables in similar registers. The registers can be represented inpseudocodeasarrays: Thelevelvariables take on values up toN− 1, each representing a distinct "waiting room" before the critical section.[6]Processes advance from one room to the next, finishing in roomN− 1, which is the critical section. Specifically, to acquire a lock, processiexecutes[4]: 22 To release the lock upon exiting the critical section, processisetslevel[i]to −1. 
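A minimal Java sketch of the two-process algorithm; this is illustrative rather than Peterson's original notation, and it uses atomic variables so that the flag and turn accesses have the ordering the algorithm requires:

import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicIntegerArray;

class PetersonLock {
    // flag[i] == 1 means process i wants to enter; turn records who defers.
    private final AtomicIntegerArray flag = new AtomicIntegerArray(2);
    private final AtomicInteger turn = new AtomicInteger(0);

    // id must be 0 or 1.
    void lock(int id) {
        int other = 1 - id;
        flag.set(id, 1);           // announce intent to enter
        turn.set(other);           // give the other process priority ("after you")
        while (flag.get(other) == 1 && turn.get() == other) {
            Thread.onSpinWait();   // busy-wait while the other wants in and has priority
        }
        // critical section
    }

    void unlock(int id) {
        flag.set(id, 0);
    }
}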
That this algorithm achieves mutual exclusion can be proven as follows. Processiexits the inner loop when there is either no process with a higher level thanlevel[i], so the next waiting room is free; or, wheni ≠ last_to_enter[ℓ], so another process joined its waiting room. At level zero, then, even if allNprocesses were to enter waiting room zero at the same time, no more thanN− 1will proceed to the next room, the final one finding itself the last to enter the room. Similarly, at the next level,N− 2will proceed,etc., until at the final level, only one process is allowed to leave the waiting room and enter the critical section, giving mutual exclusion.[4]: 22–24 Unlike the two-process Peterson algorithm, the filter algorithm does not guarantee bounded waiting.[4]: 25–26 When working at thehardwarelevel, Peterson's algorithm is typically not needed to achieve atomic access. Most modern processors have special instructions, which, by locking thememory bus, can be used to guaranteeatomicityand providemutual exclusioninSMPsystems. Examples includetest-and-set(XCHG) andcompare-and-swap(CMPXCHG) onx86processors andload-link/store-conditionalonAlpha,MIPS,PowerPC, and other architectures. These instructions are intended to provide a way to buildsynchronizationprimitives more efficiently than can be done with pure shared memory approaches. Most modern CPUs reorder memory accesses to improve execution efficiency (seememory orderingfor types of reordering allowed). Such processors invariably give some way to force ordering in a stream of memory accesses, typically through amemory barrierinstruction. Implementation of Peterson's and related algorithms on processors that reorder memory accesses generally requires use of such operations to work correctly to keep sequential operations from happening in an incorrect order. Note that reordering of memory accesses can happen even on processors that don't reorder instructions (such as thePowerPCprocessor in theXbox 360).[citation needed]
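For contrast with the pure shared-memory approach, the hardware-primitive route mentioned above can be sketched in Java as a simple spinlock built on compare-and-swap; this is illustrative only:

import java.util.concurrent.atomic.AtomicBoolean;

class TasSpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    void lock() {
        // compareAndSet is a single atomic read-modify-write, the kind of hardware
        // primitive (XCHG/CMPXCHG, LL/SC) discussed above.
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait();
        }
    }

    void unlock() {
        locked.set(false);
    }
}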
https://en.wikipedia.org/wiki/Peterson%27s_algorithm