General-purpose computing on graphics processing units (GPGPU, or less often GPGP) is the use of a graphics processing unit (GPU), which typically handles computation only for computer graphics, to perform computation in applications traditionally handled by the central processing unit (CPU).[1][2][3][4] The use of multiple video cards in one computer, or large numbers of graphics chips, further parallelizes the already parallel nature of graphics processing.[5]

Essentially, a GPGPU pipeline is a kind of parallel processing between one or more GPUs and CPUs that analyzes data as if it were in image or other graphic form. While GPUs operate at lower frequencies, they typically have many times the number of cores. Thus, GPUs can process far more pictures and graphical data per second than a traditional CPU. Migrating data into graphical form and then using the GPU to scan and analyze it can create a large speedup.

GPGPU pipelines were developed at the beginning of the 21st century for graphics processing (e.g., for better shaders). They were found to fit scientific computing needs well, and have since been developed in this direction. The best-known GPGPU products are Nvidia's Tesla line, used in Nvidia DGX systems, alongside AMD Instinct and Intel Gaudi.

In principle, any arbitrary Boolean function, including addition, multiplication, and other mathematical functions, can be built up from a functionally complete set of logic operators. In 1987, Conway's Game of Life became one of the first examples of general-purpose computing using an early stream processor called a blitter to invoke a special sequence of logical operations on bit vectors.[6]

General-purpose computing on GPUs became more practical and popular after about 2001, with the advent of both programmable shaders and floating-point support on graphics processors. Notably, problems involving matrices and/or vectors – especially two-, three-, or four-dimensional vectors – were easy to translate to a GPU, which acts with native speed and support on those types. A significant milestone for GPGPU was the year 2003, when two research groups independently discovered GPU-based approaches for the solution of general linear algebra problems that ran faster on GPUs than on CPUs.[7][8] These early efforts to use GPUs as general-purpose processors required reformulating computational problems in terms of graphics primitives, as supported by the two major APIs for graphics processors, OpenGL and DirectX. This cumbersome translation was obviated by the advent of general-purpose programming languages and APIs such as Sh/RapidMind, Brook and Accelerator.[9][10][11] These were followed by Nvidia's CUDA, which allowed programmers to ignore the underlying graphical concepts in favor of more common high-performance computing concepts.[12] Newer, hardware-vendor-independent offerings include Microsoft's DirectCompute and Apple/Khronos Group's OpenCL.[12] This means that modern GPGPU pipelines can leverage the speed of a GPU without requiring full and explicit conversion of the data to a graphical form.

Mark Harris, the founder of GPGPU.org, claims he coined the term GPGPU.[13]

Any language that allows the code running on the CPU to poll a GPU shader for return values can create a GPGPU framework. Programming standards for parallel computing include OpenCL (vendor-independent), OpenACC, OpenMP and OpenHMPP.
As of 2016, OpenCL is the dominant open general-purpose GPU computing language, and is an open standard defined by the Khronos Group.[citation needed] OpenCL provides a cross-platform GPGPU platform that additionally supports data-parallel compute on CPUs. OpenCL is actively supported on Intel, AMD, Nvidia, and ARM platforms. The Khronos Group has also standardised and implemented SYCL, a higher-level programming model for OpenCL as a single-source domain-specific embedded language based on pure C++11.

The dominant proprietary framework is Nvidia CUDA.[14] Nvidia launched CUDA in 2006, a software development kit (SDK) and application programming interface (API) that allows using the programming language C to code algorithms for execution on GeForce 8 series and later GPUs. ROCm, launched in 2016, is AMD's open-source response to CUDA. As of 2022, it is roughly on par with CUDA with regard to features,[citation needed] but still lacking in consumer support.[citation needed]

OpenVIDIA was developed at the University of Toronto between 2003 and 2005,[15] in collaboration with Nvidia.

Altimesh Hybridizer, created by Altimesh, compiles Common Intermediate Language to CUDA binaries.[16][17] It supports generics and virtual functions.[18] Debugging and profiling are integrated with Visual Studio and Nsight.[19] It is available as a Visual Studio extension on the Visual Studio Marketplace.

Microsoft introduced the DirectCompute GPU computing API, released with the DirectX 11 API.

Alea GPU,[20] created by QuantAlea,[21] introduces native GPU computing capabilities for the Microsoft .NET languages F#[22] and C#. Alea GPU also provides a simplified GPU programming model based on GPU parallel-for and parallel aggregate using delegates and automatic memory management.[23]

MATLAB supports GPGPU acceleration using the Parallel Computing Toolbox and MATLAB Distributed Computing Server,[24] and third-party packages like Jacket.

GPGPU processing is also used to simulate Newtonian physics by physics engines,[25] and commercial implementations include Havok Physics/FX and PhysX, both of which are typically used for computer and video games.

C++ Accelerated Massive Parallelism (C++ AMP) is a library that accelerates execution of C++ code by exploiting the data-parallel hardware on GPUs.

Due to a trend of increasing power of mobile GPUs, general-purpose programming became available also on mobile devices running major mobile operating systems. Google Android 4.2 enabled running RenderScript code on the mobile device GPU.[26] RenderScript has since been deprecated in favour of first OpenGL compute shaders[27] and later Vulkan Compute.[28] OpenCL is available on many Android devices, but is not officially supported by Android.[29] Apple introduced the proprietary Metal API for iOS applications, able to execute arbitrary code through Apple's GPU compute shaders.[citation needed]

Computer video cards are produced by various vendors, such as Nvidia and AMD. Cards from such vendors differ in implemented data-format support, such as integer and floating-point formats (32-bit and 64-bit). Microsoft introduced a Shader Model standard, to help rank the various features of graphics cards into a simple Shader Model version number (1.0, 2.0, 3.0, etc.).

Pre-DirectX 9 video cards only supported paletted or integer color types. Sometimes another alpha value is added, to be used for transparency. Common formats are:

- 8 bits per pixel – sometimes palette mode, where each value is an index into a table with the real color value specified in one of the other formats; sometimes three bits for red, three bits for green, and two bits for blue.
- 16 bits per pixel – usually five bits for red, six bits for green, and five bits for blue.
- 24 bits per pixel – eight bits each for red, green, and blue.
- 32 bits per pixel – eight bits each for red, green, blue, and alpha.

For early fixed-function or limited-programmability graphics (i.e., up to and including DirectX 8.1-compliant GPUs) this was sufficient because this is also the representation used in displays.
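To give a flavor of what these compute APIs look like in practice, here is a minimal CUDA C sketch of an element-wise vector addition; the kernel and variable names are invented for this example, and error checking is omitted for brevity:

    #include <cstdio>
    #include <cuda_runtime.h>

    // Kernel: each GPU thread adds one pair of elements.
    __global__ void vecAdd(const float* a, const float* b, float* c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;
        float *a, *b, *c;
        // Unified (managed) memory is visible to both CPU and GPU.
        cudaMallocManaged(&a, n * sizeof(float));
        cudaMallocManaged(&b, n * sizeof(float));
        cudaMallocManaged(&c, n * sizeof(float));
        for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

        int threads = 256;
        int blocks = (n + threads - 1) / threads;  // enough blocks to cover n
        vecAdd<<<blocks, threads>>>(a, b, c, n);   // launch on the GPU
        cudaDeviceSynchronize();                   // wait for the GPU to finish

        printf("c[0] = %f\n", c[0]);               // expect 3.0
        cudaFree(a); cudaFree(b); cudaFree(c);
        return 0;
    }

Note how the code expresses the computation directly over arrays and threads, with no mention of textures, shaders, or other graphics concepts.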
This representation does have certain limitations. Given sufficient graphics processing power, even graphics programmers would like to use better formats, such as floating-point data formats, to obtain effects such as high-dynamic-range imaging. Many GPGPU applications require floating-point accuracy, which came with video cards conforming to the DirectX 9 specification.

DirectX 9 Shader Model 2.x suggested the support of two precision types: full and partial precision. Full-precision support could either be FP32 or FP24 (floating point 32- or 24-bit per component) or greater, while partial precision was FP16. ATI's Radeon R300 series of GPUs supported FP24 precision only in the programmable fragment pipeline (although FP32 was supported in the vertex processors), while Nvidia's NV30 series supported both FP16 and FP32; other vendors such as S3 Graphics and XGI supported a mixture of formats up to FP24.

The implementations of floating point on Nvidia GPUs are mostly IEEE compliant; however, this is not true across all vendors.[30] This has implications for correctness which are considered important to some scientific applications. While 64-bit floating-point values (double-precision floats) are commonly available on CPUs, these are not universally supported on GPUs. Some GPU architectures sacrifice IEEE compliance, while others lack double precision altogether. Efforts have occurred to emulate double-precision floating-point values on GPUs; however, the speed tradeoff negates any benefit of offloading the computing onto the GPU in the first place.[31]

Most operations on the GPU operate in a vectorized fashion: one operation can be performed on up to four values at once. For example, if one color ⟨R1, G1, B1⟩ is to be modulated by another color ⟨R2, G2, B2⟩, the GPU can produce the resulting color ⟨R1*R2, G1*G2, B1*B2⟩ in one operation. This functionality is useful in graphics because almost every basic data type is a vector (either 2-, 3-, or 4-dimensional).[citation needed] Examples include vertices, colors, normal vectors, and texture coordinates. Many other applications can put this to good use, and because of their higher performance, vector instructions, termed single instruction, multiple data (SIMD), have long been available on CPUs.[citation needed]

Originally, data was simply passed one-way from a central processing unit (CPU) to a graphics processing unit (GPU), then to a display device. As time progressed, however, it became valuable for GPUs to store at first simple, then complex structures of data to be passed back to the CPU that analyzed an image, or a set of scientific data represented as a 2D or 3D format that a video card can understand. Because the GPU has access to every draw operation, it can analyze data in these forms quickly, whereas a CPU must poll every pixel or data element much more slowly, as the speed of access between a CPU and its larger pool of random-access memory (or, in an even worse case, a hard drive) is slower than that of GPUs and video cards, which typically contain smaller amounts of more expensive memory that is much faster to access. Transferring the portion of the data set to be actively analyzed to that GPU memory in the form of textures or other easily readable GPU forms results in a speed increase. The distinguishing feature of a GPGPU design is the ability to transfer information bidirectionally back from the GPU to the CPU; generally the data throughput in both directions is ideally high, resulting in a multiplier effect on the speed of a specific high-use algorithm.
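To make the vectorized-operation idea above concrete, here is a minimal CUDA sketch that modulates one color by another using the built-in float4 vector type; the kernel and variable names are invented for this example:

    #include <cuda_runtime.h>

    // Each thread multiplies one RGBA pixel by a tint color, component-wise:
    // <R1,G1,B1,A1> * <R2,G2,B2,A2>, mirroring the color modulation above.
    __global__ void modulate(float4* pixels, float4 tint, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) {
            float4 p = pixels[i];
            pixels[i] = make_float4(p.x * tint.x, p.y * tint.y,
                                    p.z * tint.z, p.w * tint.w);
        }
    }

On current hardware the four multiplies compile to scalar instructions within one thread; the four-wide type is a convenience that mirrors the graphics-era vector registers.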
GPGPU pipelines may improve efficiency on especially large data sets and/or data containing 2D or 3D imagery. GPGPU is used in complex graphics pipelines as well as scientific computing; more so in fields with large data sets like genome mapping, or where two- or three-dimensional analysis is useful – especially, at present, biomolecule analysis, protein study, and other complex organic chemistry. An example of such applications is Nvidia's software suite for genome analysis. Such pipelines can also vastly improve efficiency in image processing and computer vision, among other fields, as well as parallel processing generally. Some very heavily optimized pipelines have yielded speed increases of several hundred times the original CPU-based pipeline on one high-use task.

A simple example would be a GPU program that collects data about average lighting values as it renders some view from either a camera or a computer graphics program back to the main program on the CPU, so that the CPU can then make adjustments to the overall screen view. A more advanced example might use edge detection to return both numerical information and a processed image representing outlines to a computer vision program controlling, say, a mobile robot. Because the GPU has fast and local hardware access to every pixel or other picture element in an image, it can analyze and average it (for the first example) or apply a Sobel edge filter or other convolution filter (for the second) with much greater speed than a CPU, which typically must access slower random-access-memory copies of the graphic in question.

GPGPU is fundamentally a software concept, not a hardware concept; it is a type of algorithm, not a piece of equipment. Specialized equipment designs may, however, even further enhance the efficiency of GPGPU pipelines, which traditionally perform relatively few algorithms on very large amounts of data. Massively parallelized, gigantic-data-level tasks thus may be parallelized even further via specialized setups such as rack computing (many similar, highly tailored machines built into a rack), which adds a third layer – many computing units each using many CPUs to correspond to many GPUs. Some Bitcoin "miners" used such setups for high-quantity processing.

Historically, CPUs have used hardware-managed caches, but the earlier GPUs only provided software-managed local memories. However, as GPUs are being increasingly used for general-purpose applications, state-of-the-art GPUs are being designed with hardware-managed multi-level caches, which have helped the GPUs move towards mainstream computing. For example, GeForce 200 series GT200 architecture GPUs did not feature an L2 cache, while the Fermi GPU has 768 KiB of last-level cache, the Kepler GPU has 1.5 MiB,[32] the Maxwell GPU has 2 MiB, and the Pascal GPU has 4 MiB.

GPUs have very large register files, which allow them to reduce context-switching latency. Register file size is also increasing over different GPU generations, e.g., the total register file sizes on Maxwell (GM200), Pascal and Volta GPUs are 6 MiB, 14 MiB and 20 MiB, respectively.[33][34] By comparison, the size of a register file on CPUs is small, typically tens or hundreds of kilobytes.
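As a sketch of the Sobel-filter example mentioned earlier in this section (function and parameter names invented, border pixels simply skipped), a CUDA kernel for the gradient magnitude of a grayscale image might look like:

    // One thread per pixel; 'in' and 'out' are w*h grayscale buffers.
    __global__ void sobel(const float* in, float* out, int w, int h) {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x < 1 || y < 1 || x >= w - 1 || y >= h - 1) return;  // skip borders

        // Horizontal (gx) and vertical (gy) Sobel responses.
        float gx = -in[(y-1)*w + (x-1)] + in[(y-1)*w + (x+1)]
                 - 2.0f*in[y*w + (x-1)] + 2.0f*in[y*w + (x+1)]
                 -  in[(y+1)*w + (x-1)] + in[(y+1)*w + (x+1)];
        float gy = -in[(y-1)*w + (x-1)] - 2.0f*in[(y-1)*w + x] - in[(y-1)*w + (x+1)]
                 +  in[(y+1)*w + (x-1)] + 2.0f*in[(y+1)*w + x] + in[(y+1)*w + (x+1)];
        out[y*w + x] = sqrtf(gx*gx + gy*gy);  // gradient magnitude
    }

Every thread reads its 3×3 neighborhood from fast GPU memory, which is exactly the access pattern the passage above contrasts with a CPU walking a copy of the image in system RAM.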
The high performance of GPUs comes at the cost of high power consumption, which under full load can be as much as that of the rest of the PC system combined.[35] The maximum power consumption of the Pascal series GPU (Tesla P100) was specified to be 250 W.[36]

Before CUDA was published in 2007, GPGPU was "classical" and involved repurposing graphics primitives. A standard structure of such a computation was to store arrays as textures, express kernels as pixel (fragment) shaders, and run a computation by drawing geometry covering the output pixels, reading the results back from the framebuffer or a render texture. More examples are available in part 4 of GPU Gems 2.[37]

The use of GPUs for numerical linear algebra began at least in 2001.[38] It had been used for Gauss–Seidel solvers, conjugate gradients, etc.[39]

GPUs are designed specifically for graphics and thus are very restrictive in operations and programming. Due to their design, GPUs are only effective for problems that can be solved using stream processing, and the hardware can only be used in certain ways. The following discussion referring to vertices, fragments and textures concerns mainly the legacy model of GPGPU programming, where graphics APIs (OpenGL or DirectX) were used to perform general-purpose computation. With the introduction of the CUDA (Nvidia, 2007) and OpenCL (vendor-independent, 2008) general-purpose computing APIs, in new GPGPU codes it is no longer necessary to map the computation to graphics primitives. The stream processing nature of GPUs remains valid regardless of the APIs used. (See e.g.,[40])

GPUs can only process independent vertices and fragments, but can process many of them in parallel. This is especially effective when the programmer wants to process many vertices or fragments in the same way. In this sense, GPUs are stream processors – processors that can operate in parallel by running one kernel on many records in a stream at once.

A stream is simply a set of records that require similar computation. Streams provide data parallelism. Kernels are the functions that are applied to each element in the stream. In the GPUs, vertices and fragments are the elements in streams, and vertex and fragment shaders are the kernels to be run on them. For each element we can only read from the input, perform operations on it, and write to the output. It is permissible to have multiple inputs and multiple outputs, but never a piece of memory that is both readable and writable.

Arithmetic intensity is defined as the number of operations performed per word of memory transferred. It is important for GPGPU applications to have high arithmetic intensity, or else the memory access latency will limit the computational speedup.[41]

Ideal GPGPU applications have large data sets, high parallelism, and minimal dependency between data elements.

A variety of computational resources are available on the GPU, including programmable vertex and fragment processors, the rasterizer, texture units, and the framebuffer. In fact, a program can substitute a write-only texture for output instead of the framebuffer. This is done either through Render to Texture (RTT), Render-To-Backbuffer-Copy-To-Texture (RTBCTT), or the more recent stream-out.

The most common form for a stream to take in GPGPU is a 2D grid because this fits naturally with the rendering model built into GPUs. Many computations naturally map into grids: matrix algebra, image processing, physically based simulation, and so on. Since textures are used as memory, texture lookups are then used as memory reads. Certain operations can be done automatically by the GPU because of this.

Compute kernels can be thought of as the body of loops.
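To make the loop-body view concrete, the following sketch (with invented names; CUDA on the GPU side) shows the same brightness scaling written as an explicit CPU loop and as a GPU kernel that contains only the loop body:

    // CPU version: the programmer writes the loop explicitly.
    void brighten_cpu(float* img, int n, float gain) {
        for (int i = 0; i < n; ++i)
            img[i] = img[i] * gain;        // loop body
    }

    // GPU version: only the loop body is written; the hardware "loops"
    // by running one thread per element.
    __global__ void brighten_gpu(float* img, int n, float gain) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            img[i] = img[i] * gain;        // the same body as above
    }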
For example, a programmer operating on a grid on the CPU might write code like the loop sketched above. On the GPU, the programmer only specifies the body of the loop as the kernel, and what data to loop over, by invoking geometry processing.

In sequential code it is possible to control the flow of the program using if-then-else statements and various forms of loops. Such flow control structures have only recently been added to GPUs.[42] Conditional writes could be performed using a properly crafted series of arithmetic/bit operations, but looping and conditional branching were not possible. More recent GPUs allow branching, but usually with a performance penalty. Branching should generally be avoided in inner loops, whether in CPU or GPU code, and various methods, such as static branch resolution, pre-computation, predication, loop splitting,[43] and Z-cull[44] can be used to achieve branching when hardware support does not exist.

The map operation simply applies the given function (the kernel) to every element in the stream. A simple example is multiplying each value in the stream by a constant (increasing the brightness of an image). The map operation is simple to implement on the GPU. The programmer generates a fragment for each pixel on screen and applies a fragment program to each one. The result stream of the same size is stored in the output buffer.

Some computations require calculating a smaller stream (possibly a stream of only one element) from a larger stream. This is called a reduction of the stream. Generally, a reduction can be performed in multiple steps. The results from the prior step are used as the input for the current step, and the range over which the operation is applied is reduced until only one stream element remains.

Stream filtering is essentially a non-uniform reduction. Filtering involves removing items from the stream based on some criteria.

The scan operation, also termed parallel prefix sum, takes in a vector (stream) of data elements and an (arbitrary) associative binary function '+' with an identity element 'i'. If the input is [a0, a1, a2, a3, ...], an exclusive scan produces the output [i, a0, a0 + a1, a0 + a1 + a2, ...], while an inclusive scan produces the output [a0, a0 + a1, a0 + a1 + a2, a0 + a1 + a2 + a3, ...] and does not require an identity to exist. While at first glance the operation may seem inherently serial, efficient parallel scan algorithms are possible and have been implemented on graphics processing units. The scan operation has uses in, e.g., quicksort and sparse matrix–vector multiplication.[40][45][46][47]

The scatter operation is most naturally defined on the vertex processor. The vertex processor is able to adjust the position of the vertex, which allows the programmer to control where information is deposited on the grid. Other extensions are also possible, such as controlling how large an area the vertex affects. The fragment processor cannot perform a direct scatter operation because the location of each fragment on the grid is fixed at the time of the fragment's creation and cannot be altered by the programmer. However, a logical scatter operation may sometimes be recast or implemented with another gather step. A scatter implementation would first emit both an output value and an output address. An immediately following gather operation uses address comparisons to see whether the output value maps to the current output slot. In dedicated compute kernels, scatter can be performed by indexed writes.

Gather is the reverse of scatter.
After scatter reorders elements according to a map, gather can restore the order of the elements according to the map the scatter used. In dedicated compute kernels, gather may be performed by indexed reads. In other shaders, it is performed with texture lookups.

The sort operation transforms an unordered set of elements into an ordered set of elements. The most common implementation on GPUs uses radix sort for integer and floating-point data, and coarse-grained merge sort with fine-grained sorting networks for general comparable data.[48][49]

The search operation allows the programmer to find a given element within the stream, or possibly find neighbors of a specified element. The search method used is mostly binary search on sorted elements.

A variety of data structures, such as dense arrays, sparse matrices, and adjacency structures, can be represented on the GPU.

GPUs have been used for general-purpose computing in a wide range of areas, including bioinformatics.[65][90]

† Expected speedups are highly dependent on system configuration. GPU performance is compared against a multi-core x86 CPU socket. GPU performance is benchmarked on GPU-supported features and may be a kernel-to-kernel performance comparison. For details on the configuration used, view the application website. Speedups are as per Nvidia in-house testing or the ISV's documentation.

‡ Q = Quadro GPU, T = Tesla GPU. Nvidia-recommended GPUs for this application. Check with the developer or ISV to obtain certification information.
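The multi-step reduction described above can be sketched in CUDA as follows; this is a simplified shared-memory version (names invented, block size assumed to be a power of two), not a tuned library implementation:

    // Each block reduces its slice of 'in' to a single partial sum in 'out'.
    __global__ void reduce_sum(const float* in, float* out, int n) {
        extern __shared__ float buf[];               // one partial per thread
        int tid = threadIdx.x;
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        buf[tid] = (i < n) ? in[i] : 0.0f;
        __syncthreads();

        // Halve the active range each step, as in the multi-step
        // reduction described above.
        for (int s = blockDim.x / 2; s > 0; s >>= 1) {
            if (tid < s) buf[tid] += buf[tid + s];
            __syncthreads();
        }
        if (tid == 0) out[blockIdx.x] = buf[0];
    }

Each launch (with the shared-memory size passed as the third launch parameter) turns n elements into one partial sum per block; launching again on the partials shrinks the stream until a single element remains.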
https://en.wikipedia.org/wiki/General-purpose_computing_on_graphics_processing_units
In computer science, a segmented scan is a modification of the prefix sum with an equal-sized array of flag bits to denote segment boundaries on which the scan should be performed.[1]

In the following, the '1' flag bits indicate the beginning of each segment:

    input             1  2  3  4  5  6
    flag bits         1  0  0  1  0  1
    segmented scan +  1  3  6  4  9  6

An alternative method, used by High Performance Fortran (HPF), is to begin a new segment at every transition of flag value. An advantage of this representation is that it is useful with both prefix and suffix (backwards) scans without changing its interpretation. In HPF, the Fortran logical data type is used to represent segments, so the equivalent flag array for the above example would be as follows:

    input             1  2  3  4  5  6
    flag values       T  T  T  F  F  T
    segmented scan +  1  3  6  4  9  6
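A minimal sequential C++ sketch of an inclusive segmented scan with '+' (function and variable names invented for the example; a parallel version would build on the parallel prefix sum):

    #include <cstdio>
    #include <vector>

    // Inclusive segmented scan: restart the running sum wherever flag == 1.
    std::vector<int> segmented_scan(const std::vector<int>& in,
                                    const std::vector<int>& flags) {
        std::vector<int> out(in.size());
        int sum = 0;
        for (std::size_t i = 0; i < in.size(); ++i) {
            sum = flags[i] ? in[i] : sum + in[i];  // a set flag starts a new segment
            out[i] = sum;
        }
        return out;
    }

    int main() {
        std::vector<int> in    = {1, 2, 3, 4, 5, 6};
        std::vector<int> flags = {1, 0, 0, 1, 0, 1};
        for (int v : segmented_scan(in, flags))
            printf("%d ", v);                      // prints: 1 3 6 4 9 6
    }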
https://en.wikipedia.org/wiki/Segmented_scan
A summed-area table is a data structure and algorithm for quickly and efficiently generating the sum of values in a rectangular subset of a grid. In the image processing domain, it is also known as an integral image. It was introduced to computer graphics in 1984 by Frank Crow for use with mipmaps. In computer vision it was popularized by Lewis[1] and then given the name "integral image" and prominently used within the Viola–Jones object detection framework in 2001. Historically, this principle is very well known in the study of multi-dimensional probability distribution functions, namely in computing 2D (or ND) probabilities (area under the probability distribution) from the respective cumulative distribution functions.[2]

As the name suggests, the value at any point (x, y) in the summed-area table is the sum of all the pixels above and to the left of (x, y), inclusive:[3][4]

    I(x, y) = Σ_{x′ ≤ x, y′ ≤ y} i(x′, y′)

where i(x, y) is the value of the pixel at (x, y).

The summed-area table can be computed efficiently in a single pass over the image, as the value in the summed-area table at (x, y) is just:[5]

    I(x, y) = i(x, y) + I(x, y − 1) + I(x − 1, y) − I(x − 1, y − 1)

(Note that the table is accumulated starting from the top-left corner.)

Once the summed-area table has been computed, evaluating the sum of intensities over any rectangular area requires exactly four array references, regardless of the area size. That is, with corner points A = (x0, y0), B = (x1, y0), C = (x0, y1) and D = (x1, y1), the sum of i(x, y) over the rectangle spanned by A, B, C and D is:

    Σ_{x0 < x ≤ x1, y0 < y ≤ y1} i(x, y) = I(D) + I(A) − I(B) − I(C)

This method is naturally extended to continuous domains.[2]

The method can also be extended to high-dimensional images.[6] If the corners of the rectangle are x^p with p in {0, 1}^d, then the sum of image values contained in the rectangle is computed with the formula

    Σ_{p ∈ {0,1}^d} (−1)^{d − ‖p‖₁} I(x^p)

where I(x) is the integral image at x and d the image dimension. The notation x^p corresponds in the example above to d = 2, A = x^{(0,0)}, B = x^{(1,0)}, C = x^{(0,1)} and D = x^{(1,1)}. In neuroimaging, for example, the images have dimension d = 3 or d = 4, when using voxels or voxels with a time-stamp.

This method has been extended to high-order integral images, as in the work of Phan et al.,[7] who provided two, three, or four integral images for quickly and efficiently calculating the standard deviation (variance), skewness, and kurtosis of a local block in the image.
This is detailed below.

To compute the variance or standard deviation of a block, we need two integral images:

    I(x, y)  = Σ_{x′ ≤ x, y′ ≤ y} i(x′, y′)
    I²(x, y) = Σ_{x′ ≤ x, y′ ≤ y} i²(x′, y′)

The variance is given by:

    Var(X) = (1/n) Σ_{i=1}^{n} (x_i − μ)²

Let S₁ and S₂ denote the summations of block ABCD of I and I², respectively. S₁ and S₂ are computed quickly from the integral images. Now, we manipulate the variance equation as:

    Var(X) = (1/n) Σ_{i=1}^{n} (x_i² − 2μx_i + μ²)
           = (1/n) [Σ_{i=1}^{n} x_i² − 2μ Σ_{i=1}^{n} x_i + nμ²]
           = (1/n) [S₂ − 2(S₁/n)S₁ + n(S₁/n)²]
           = (1/n) [S₂ − S₁²/n]

where μ = S₁/n and S₂ = Σ_{i=1}^{n} x_i².

Similar to the estimation of the mean (μ) and variance (Var), which requires the integral images of the first and second power of the image respectively (i.e., I and I²), manipulations similar to the ones mentioned above can be made to the third and fourth powers of the image (i.e., I³(x, y) and I⁴(x, y)) to obtain the skewness and kurtosis.[7] One important implementation detail that must be kept in mind for the above methods, as mentioned by F. Shafait et al.,[8] is that of integer overflow occurring for the higher-order integral images in case 32-bit integers are used.

The data type for the sums may need to be different from and larger than the data type used for the original values, in order to accommodate the largest expected sum without overflow. For floating-point data, error can be reduced using compensated summation.
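A compact C++ sketch of building a summed-area table in one pass and querying a block sum with four lookups (the names, the zero-padded border, and the use of a wider sum type per the overflow advice above are choices made for this example):

    #include <vector>

    // Build an (h+1) x (w+1) table with a zero top row and left column, so
    // queries need no boundary checks. Uses the single-pass recurrence
    // I(x,y) = i(x,y) + I(x,y-1) + I(x-1,y) - I(x-1,y-1).
    std::vector<long long> build_sat(const std::vector<int>& img, int w, int h) {
        std::vector<long long> sat((w + 1) * (h + 1), 0);
        for (int y = 1; y <= h; ++y)
            for (int x = 1; x <= w; ++x)
                sat[y*(w+1) + x] = img[(y-1)*w + (x-1)]
                                 + sat[(y-1)*(w+1) + x]
                                 + sat[y*(w+1) + (x-1)]
                                 - sat[(y-1)*(w+1) + (x-1)];
        return sat;
    }

    // Sum over the rectangle (x0,y0) exclusive to (x1,y1) inclusive:
    // I(D) + I(A) - I(B) - I(C), exactly four array references.
    long long box_sum(const std::vector<long long>& sat, int w,
                      int x0, int y0, int x1, int y1) {
        return sat[y1*(w+1) + x1] + sat[y0*(w+1) + x0]
             - sat[y0*(w+1) + x1] - sat[y1*(w+1) + x0];
    }

A second table built from squared pixel values and queried the same way yields S₂, from which the block variance follows as (1/n)[S₂ − S₁²/n].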
https://en.wikipedia.org/wiki/Summed-area_table
In mathematics and computer science, a recursive definition, or inductive definition, is used to define the elements in a set in terms of other elements in the set (Aczel 1977:740ff). Some examples of recursively definable objects include factorials, natural numbers, Fibonacci numbers, and the Cantor ternary set.

A recursive definition of a function defines values of the function for some inputs in terms of the values of the same function for other (usually smaller) inputs. For example, the factorial function n! is defined by the rules

    0! = 1
    (n + 1)! = (n + 1) · n!

This definition is valid for each natural number n, because the recursion eventually reaches the base case of 0. The definition may also be thought of as giving a procedure for computing the value of the function n!, starting from n = 0 and proceeding onwards with n = 1, 2, 3, etc. The recursion theorem states that such a definition indeed defines a function that is unique. The proof uses mathematical induction.[1]

An inductive definition of a set describes the elements in a set in terms of other elements in the set. For example, one definition of the set ℕ of natural numbers is:

    1. 0 is in ℕ.
    2. If an element n is in ℕ, then n + 1 is in ℕ.
    3. ℕ is the intersection of all sets satisfying (1) and (2).

There are many sets that satisfy (1) and (2) – for example, the set {0, 1, 1.649, 2, 2.649, 3, 3.649, …} satisfies the definition. However, condition (3) specifies the set of natural numbers by removing the sets with extraneous members.

Properties of recursively defined functions and sets can often be proved by an induction principle that follows the recursive definition. For example, the definition of the natural numbers presented here directly implies the principle of mathematical induction for natural numbers: if a property holds of the natural number 0 (or 1), and the property holds of n + 1 whenever it holds of n, then the property holds of all natural numbers (Aczel 1977:742).

Most recursive definitions have two foundations: a base case (basis) and an inductive clause. The difference between a circular definition and a recursive definition is that a recursive definition must always have base cases – cases that satisfy the definition without being defined in terms of the definition itself – and all other instances in the inductive clauses must be "smaller" in some sense (i.e., closer to those base cases that terminate the recursion), a rule also known as "recur only with a simpler case".[2]

In contrast, a circular definition may have no base case, and may even define the value of a function in terms of that value itself, rather than in terms of other values of the function. Such a situation would lead to an infinite regress.

That recursive definitions are valid – meaning that a recursive definition identifies a unique function – is a theorem of set theory known as the recursion theorem, the proof of which is non-trivial.[3] Where the domain of the function is the natural numbers, sufficient conditions for the definition to be valid are that the value of f(0) (i.e., the base case) is given, and that for n > 0, an algorithm is given for determining f(n) in terms of n and f(0), f(1), …, f(n − 1) (i.e., the inductive clause).

More generally, recursive definitions of functions can be made whenever the domain is a well-ordered set, using the principle of transfinite recursion. The formal criteria for what constitutes a valid recursive definition are more complex for the general case. An outline of the general proof and the criteria can be found in James Munkres' Topology.
However, a specific case (the domain is restricted to the positive integers instead of any well-ordered set) of the general recursive definition will be given below.[4]

Let A be a set and let a₀ be an element of A. If ρ is a function which assigns to each function f mapping a nonempty section of the positive integers into A an element of A, then there exists a unique function h : ℤ₊ → A such that

    h(1) = a₀
    h(i) = ρ(h | {1, 2, …, i − 1}) for i > 1.

Addition is defined recursively, based on counting, as

    a + 0 = a
    a + S(n) = S(a + n).

Multiplication is defined recursively as

    a · 0 = 0
    a · S(n) = (a · n) + a.

Exponentiation is defined recursively as

    a⁰ = 1
    a^(S(n)) = aⁿ · a.

Binomial coefficients can be defined recursively as

    C(n, 0) = 1
    C(0, k + 1) = 0
    C(n + 1, k + 1) = C(n, k) + C(n, k + 1).

The set of prime numbers can be defined as the unique set of positive integers satisfying

    1. 2 is a prime number;
    2. any other positive integer is a prime number if and only if it is greater than 1 and is not divisible by any prime number smaller than it.

The primality of the integer 2 is the base case; checking the primality of any larger integer X by this definition requires knowing the primality of every integer between 2 and X, which is well defined by this definition. That last point can be proved by induction on X, for which it is essential that the second clause says "if and only if"; if it had just said "if", the primality of, for instance, the number 4 would not be clear, and the further application of the second clause would be impossible.

The even numbers can be defined as consisting of

    1. 0, which is in the set E of even numbers;
    2. for any element x of E, the numbers x + 2 and x − 2, which are also in E.

The notion of a well-formed formula (wff) in propositional logic is defined recursively as the smallest set satisfying three rules of the following form:

    1. every propositional variable is a wff;
    2. if φ is a wff, then so is its negation;
    3. if φ and ψ are wffs, then so is their combination under each binary connective.

The definition can be used to determine whether any particular string of symbols is a wff, by repeatedly decomposing the string according to these rules.

Logic programs can be understood as sets of recursive definitions.[5][6] For example, the recursive definition of even number can be written as the logic program:

    even(0).
    even(s(s(X))) :- even(X).

Here :- represents if, and s(X) represents the successor of X, namely X + 1, as in Peano arithmetic.

The logic programming language Prolog uses backward reasoning to solve goals and answer queries. For example, given the query ?- even(s(s(0))) it produces the answer true. Given the query ?- even(s(0)) it produces the answer false.

The program can be used not only to check whether a query is true, but also to generate answers that are true. For example:

    ?- even(X).
    X = 0 ;
    X = s(s(0)) ;
    X = s(s(s(s(0)))) ;
    …

Logic programs significantly extend recursive definitions by including the use of negative conditions, implemented by negation as failure.
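As a programming counterpart to these definitions, a base case plus an inductive clause translates directly into a recursive function. A minimal C++ sketch of the factorial rules above:

    #include <cstdio>

    // 0! = 1 (base case); (n+1)! = (n+1) * n! (inductive clause).
    unsigned long long factorial(unsigned n) {
        return n == 0 ? 1ULL : n * factorial(n - 1);
    }

    int main() {
        for (unsigned n = 0; n <= 5; ++n)
            printf("%u! = %llu\n", n, factorial(n));  // 1, 1, 2, 6, 24, 120
    }

The recursion terminates because each call is made on a strictly smaller argument, eventually reaching the base case – exactly the "recur only with a simpler case" rule.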
https://en.wikipedia.org/wiki/Recursive_definition
In computer programming, especially functional programming and type theory, an algebraic data type (ADT) is a kind of composite data type, i.e., a data type formed by combining other types. Two common classes of algebraic types are product types (i.e., tuples and records) and sum types (i.e., tagged or disjoint unions, coproduct types or variant types).[1]

The values of a product type typically contain several values, called fields. All values of that type have the same combination of field types. The set of all possible values of a product type is the set-theoretic product, i.e., the Cartesian product, of the sets of all possible values of its field types.

The values of a sum type are typically grouped into several classes, called variants. A value of a variant type is usually created with a quasi-functional entity called a constructor. Each variant has its own constructor, which takes a specified number of arguments with specified types. The set of all possible values of a sum type is the set-theoretic sum, i.e., the disjoint union, of the sets of all possible values of its variants. Enumerated types are a special case of sum types in which the constructors take no arguments, as exactly one value is defined for each constructor.

Values of algebraic types are analyzed with pattern matching, which identifies a value by its constructor or field names and extracts the data it contains.

Algebraic data types were introduced in Hope, a small functional programming language developed in the 1970s at the University of Edinburgh.[2]

One of the most common examples of an algebraic data type is the singly linked list. A list type is a sum type with two variants, Nil for an empty list and Cons x xs for the combination of a new element x with a list xs to create a new list. Here is an example of how a singly linked list would be declared in Haskell:

    data List = Nil | Cons Int List

or, equivalently, with explicit constructor signatures:

    data List where
      Nil  :: List
      Cons :: Int -> List -> List

Cons is an abbreviation of construct. Many languages have special syntax for lists defined in this way. For example, Haskell and ML use [] for Nil, : or :: for Cons, respectively, and square brackets for entire lists. So Cons 1 (Cons 2 (Cons 3 Nil)) would normally be written as 1:2:3:[] or [1,2,3] in Haskell, or as 1::2::3::[] or [1,2,3] in ML.

For a slightly more complex example, binary trees may be implemented in Haskell as follows:

    data Tree = Empty
              | Leaf Int
              | Node Int Tree Tree

or, again with explicit constructor signatures:

    data Tree where
      Empty :: Tree
      Leaf  :: Int -> Tree
      Node  :: Int -> Tree -> Tree -> Tree

Here, Empty represents an empty tree, Leaf represents a leaf node, and Node organizes the data into branches.

In most languages that support algebraic data types, it is possible to define parametric types. Examples are given later in this article.

Somewhat similar to a function, a data constructor is applied to arguments of an appropriate type, yielding an instance of the data type to which the type constructor belongs. For example, the data constructor Leaf is logically a function Int -> Tree, meaning that giving an integer as an argument to Leaf produces a value of the type Tree. As Node takes two arguments of the type Tree itself, the datatype is recursive.

Operations on algebraic data types can be defined by using pattern matching to retrieve the arguments. For example, consider a function to find the depth of a Tree, given here in Haskell:

    depth :: Tree -> Int
    depth Empty = 0
    depth (Leaf n) = 1
    depth (Node _ l r) = 1 + max (depth l) (depth r)

Thus, a Tree given to depth can be constructed using any of Empty, Leaf, or Node and must be matched for any of them respectively to deal with all cases. In the case of Node, the pattern extracts the subtrees l and r for further processing.

Algebraic data types are highly suited to implementing abstract syntax.
For example, the following algebraic data type describes a simple language representing numerical expressions:

    data Expression = Number Int
                    | Add Expression Expression
                    | Minus Expression Expression
                    | Mult Expression Expression

An element of such a data type would have a form such as Mult (Add (Number 4) (Minus (Number 0) (Number 1))) (Number 2). Writing an evaluation function for this language is a simple exercise; however, more complex transformations also become feasible. For example, an optimization pass in a compiler might be written as a function taking an abstract expression as input and returning an optimized form.

Algebraic data types are used to represent values that can be one of several types of things. Each type of thing is associated with an identifier called a constructor, which can be considered a tag for that kind of data. Each constructor can carry with it a different type of data. For example, considering the binary Tree example shown above, a constructor could carry no data (e.g., Empty), or one piece of data (e.g., Leaf has one Int value), or multiple pieces of data (e.g., Node has one Int value and two Tree values).

To do something with a value of this Tree algebraic data type, it is deconstructed using a process called pattern matching. This involves matching the data with a series of patterns. The example function depth above pattern-matches its argument with three patterns. When the function is called, it finds the first pattern that matches its argument, performs any variable bindings that are found in the pattern, and evaluates the expression corresponding to the pattern.

Each pattern above has a form that resembles the structure of some possible value of this datatype. The first pattern simply matches values of the constructor Empty. The second pattern matches values of the constructor Leaf. Patterns are recursive, so then the data that is associated with that constructor is matched with the pattern "n". In this case, a lowercase identifier represents a pattern that matches any value, which then is bound to a variable of that name – in this case, a variable "n" is bound to the integer value stored in the data type – to be used in the expression to evaluate.

The recursion in patterns in this example is trivial, but a possible more complex recursive pattern would be something like:

    Node i (Node j (Leaf 4) x) (Node k y (Node Empty z))

Recursive patterns several layers deep are used, for example, in balancing red–black trees, which involve cases that require looking at colors several layers deep.

The example above is operationally equivalent to pseudocode of the following form:

    switch on (data.constructor)
    case Empty:
        return 0
    case Leaf:
        return 1
    case Node:
        let l = data.field2
        let r = data.field3
        return 1 + max(depth(l), depth(r))

The advantages of algebraic data types can be highlighted by comparison of the above pseudocode with a pattern-matching equivalent. Firstly, there is type safety. In the pseudocode example above, programmer diligence is required to not access field2 when the constructor is a Leaf. The type system would have difficulties assigning a static type in a safe way for traditional record data structures. However, in pattern matching such problems are not faced. The type of each extracted value is based on the types declared by the relevant constructor. The number of values that can be extracted is known based on the constructor.

Secondly, in pattern matching, the compiler performs exhaustiveness checking to ensure all cases are handled. If one of the cases of the depth function above were missing, the compiler would issue a warning. Exhaustiveness checking may seem easy for simple patterns, but with many complex recursive patterns, the task soon becomes difficult for the average human (or compiler, if it must check arbitrary nested if-else constructs).
Similarly, there may be patterns which never match (i.e., are already covered by prior patterns). The compiler can also check and issue warnings for these, as they may indicate an error in reasoning.

Algebraic data type pattern matching should not be confused with regular expression string pattern matching. The purpose of both is similar (to extract parts from a piece of data matching certain constraints); however, the implementation is very different. Pattern matching on algebraic data types matches on the structural properties of an object rather than on the character sequence of strings.

A general algebraic data type is a possibly recursive sum type of product types. Each constructor tags a product type to separate it from the others, or if there is only one constructor, the data type is a product type. Further, the parameter types of a constructor are the factors of the product type. A parameterless constructor corresponds to the empty product. If a datatype is recursive, the entire sum of products is wrapped in a recursive type, and each constructor also rolls the datatype into the recursive type. For example, the Haskell datatype:

    data List a = Nil | Cons a (List a)

is represented in type theory as

    λα. μβ. 1 + α × β

with constructors nil_α = roll (inl ⟨⟩) and cons_α x l = roll (inr ⟨x, l⟩).

The Haskell List datatype can also be represented in type theory in a slightly different form, thus:

    μφ. λα. 1 + α × φ α

(Note how the μ and λ constructs are reversed relative to the original.) The original formation specified a type function whose body was a recursive type. The revised version specifies a recursive function on types. (The type variable φ is used to suggest a function rather than a base type like β, since φ is like a Greek f.) The function φ must also now be applied to its argument type α in the body of the type.

For the purposes of the List example, these two formulations are not significantly different; but the second form allows expressing so-called nested data types, i.e., those where the recursive type differs parametrically from the original. (For more information on nested data types, see the works of Richard Bird, Lambert Meertens, and Ross Paterson.)

In set theory the equivalent of a sum type is a disjoint union, a set whose elements are pairs consisting of a tag (equivalent to a constructor) and an object of a type corresponding to the tag (equivalent to the constructor arguments).[3]

Many programming languages incorporate algebraic data types as a first-class notion.
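For comparison outside the ML family (a sketch only, not one of the article's own examples), the Tree sum type above can be emulated in C++ with std::variant, with std::visit playing the role of pattern matching; the indirection through a wrapper struct breaks the recursive definition:

    #include <algorithm>
    #include <cstdio>
    #include <memory>
    #include <type_traits>
    #include <variant>

    struct Tree;                                          // forward declaration
    struct Empty {};
    struct Leaf  { int value; };
    struct Node  { int value; std::shared_ptr<Tree> left, right; };
    struct Tree  { std::variant<Empty, Leaf, Node> v; };  // the sum type

    int depth(const Tree& t) {
        // std::visit dispatches on the active alternative, like a match.
        return std::visit([](const auto& n) -> int {
            using T = std::decay_t<decltype(n)>;
            if constexpr (std::is_same_v<T, Empty>) return 0;
            else if constexpr (std::is_same_v<T, Leaf>) return 1;
            else return 1 + std::max(depth(*n.left), depth(*n.right));
        }, t.v);
    }

    int main() {
        Tree leaf{Leaf{4}}, empty{Empty{}};
        Tree root{Node{1, std::make_shared<Tree>(leaf),
                          std::make_shared<Tree>(empty)}};
        printf("%d\n", depth(root));                      // prints 2
    }

Unlike native pattern matching, exhaustiveness is enforced only indirectly here: a visitor that fails to handle every alternative does not compile, rather than producing a warning.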
https://en.wikipedia.org/wiki/Algebraic_data_type
In type theory, a system has inductive types if it has facilities for creating a new type from constants and functions that create terms of that type. The feature serves a role similar to data structures in a programming language and allows a type theory to add concepts like numbers, relations, and trees. As the name suggests, inductive types can be self-referential, but usually only in a way that permits structural recursion.

The standard example is encoding the natural numbers using Peano's encoding. It can be defined in Rocq (previously known as Coq) as follows:

    Inductive nat : Type :=
      | O : nat
      | S : nat -> nat.

Here, a natural number is created either from the constant "O" (zero) or by applying the function "S" to another natural number. "S" is the successor function which represents adding 1 to a number. Thus, "O" is zero, "S O" is one, "S (S O)" is two, "S (S (S O))" is three, and so on.

Since their introduction, inductive types have been extended to encode more and more structures, while still being predicative and supporting structural recursion.

Inductive types usually come with a function to prove properties about them. Thus, "nat" may come with (in Rocq syntax):

    nat_ind : forall P : nat -> Prop,
      P O ->
      (forall n : nat, P n -> P (S n)) ->
      forall n : nat, P n

In words: for any predicate "P" over natural numbers, given a proof of "P 0" and a proof of "P n -> P (n+1)", we get back a proof of "forall n, P n". This is the familiar induction principle for natural numbers.

W-types are well-founded types in intuitionistic type theory (ITT).[1] They generalize natural numbers, lists, binary trees, and other "tree-shaped" data types. Let U be a universe of types. Given a type A : U and a dependent family B : A → U, one can form a W-type W_{a:A} B(a). The type A may be thought of as "labels" for the (potentially infinitely many) constructors of the inductive type being defined, whereas B indicates the (potentially infinite) arity of each constructor.

W-types (resp. M-types) may also be understood as well-founded (resp. non-well-founded) trees with nodes labeled by elements a : A, where the node labeled by a has B(a)-many subtrees.[2] Each W-type is isomorphic to the initial algebra of a so-called polynomial functor.

Let 0, 1, 2, etc. be finite types with inhabitants 1₁ : 1, and 1₂, 2₂ : 2, etc. One may define the natural numbers as the W-type

    ℕ := W_{x:2} f(x)

with f : 2 → U defined by f(1₂) = 0 (representing the constructor for zero, which takes no arguments) and f(2₂) = 1 (representing the successor function, which takes one argument).

One may define lists over a type A : U as

    List(A) := W_{x:1+A} f(x)

where

    f(inl(1₁)) = 0
    f(inr(a)) = 1

and 1₁ is the sole inhabitant of 1. The value of f(inl(1₁)) corresponds to the constructor for the empty list, whereas the value of f(inr(a)) corresponds to the constructor that appends a to the beginning of another list.
The constructor for elements of a generic W-type W_{x:A} B(x) has type

    sup : Π_{a:A} (B(a) → W_{x:A} B(x)) → W_{x:A} B(x).

We can also write this rule in the style of a natural deduction proof:

    a : A        f : B(a) → W_{x:A} B(x)
    ------------------------------------
         sup(a, f) : W_{x:A} B(x)

The elimination rule for W-types works similarly to structural induction on trees. If, whenever a property (under the propositions-as-types interpretation) C : W_{x:A} B(x) → U holds for all subtrees of a given tree, it also holds for that tree, then it holds for all trees:

    w : W_{a:A} B(a)
    a : A, f : B(a) → W_{x:A} B(x), c : Π_{b:B(a)} C(f(b)) ⊢ h(a, f, c) : C(sup(a, f))
    ----------------------------------------------------------------------------------
    elim(w, h) : C(w)

In extensional type theories, W-types (resp. M-types) can be defined up to isomorphism as initial algebras (resp. final coalgebras) for polynomial functors. In this case, the property of initiality (resp. finality) corresponds directly to the appropriate induction principle.[3] In intensional type theories with the univalence axiom, this correspondence holds up to homotopy (propositional equality).[4][5][6]

M-types are dual to W-types, and represent coinductive (potentially infinite) data such as streams.[7] M-types can be derived from W-types.[8]

Mutual induction allows some definitions of multiple types that depend on each other. For example, defining two parity predicates on natural numbers using two mutually inductive types in Rocq:

    Inductive even : nat -> Prop :=
      | even_O : even O
      | even_S : forall n : nat, odd n -> even (S n)
    with odd : nat -> Prop :=
      | odd_S : forall n : nat, even n -> odd (S n).

Induction-recursion started as a study into the limits of ITT. Once found, the limits were turned into rules that allowed defining new inductive types. These types could depend upon a function and the function on the type, as long as both were defined simultaneously. Universe types can be defined using induction-recursion. Induction-induction allows definition of a type and a family of types at the same time, that is, a type A and a family of types B : A → Type.

Higher inductive types are a current research area in Homotopy Type Theory (HoTT). HoTT differs from ITT by its identity type (equality). Higher inductive types not only define a new type with constants and functions that create elements of the type, but also new instances of the identity type that relate them. A simple example is the circle type, which is defined with two constructors, a basepoint

    base : circle

and a loop

    loop : base = base.

The existence of a new constructor for the identity type makes circle a higher inductive type.
https://en.wikipedia.org/wiki/Inductive_type
A node is a basic unit of a data structure, such as a linked list or tree data structure. Nodes contain data and also may link to other nodes. Links between nodes are often implemented by pointers.

Nodes are often arranged into tree structures. A node represents the information contained in a single data structure. These nodes may contain a value or condition, or possibly serve as another independent data structure. Each node other than the root is linked to from a single parent node. The highest point on a tree structure is called a root node, which does not have a parent node, but serves as the parent or 'grandparent' of all of the nodes below it in the tree. The height of a node is determined by the total number of edges on the path from that node to the furthest leaf node, and the height of the tree is equal to the height of the root node.[1] Node depth is determined by the distance between that particular node and the root node. The root node is said to have a depth of zero.[2] Data can be discovered along these network paths.[3] An IP address uses this kind of system of nodes to define its location in a network.

Another common use of node trees is in web development. In programming, XML is used to communicate information between computer programmers and computers alike. For this reason XML is used to create common communication protocols used in office productivity software, and serves as the base for the development of modern web markup languages like XHTML. Though similar in how it is approached by a programmer, HTML and CSS are typically the languages used to develop website text and design. While XML, HTML and XHTML provide the language and expression, the DOM serves as a translator.[4]

Different types of nodes in a tree are represented by specific interfaces. In other words, the node type is defined by how it communicates with other nodes. Each node has a node-type property, which specifies the type of the node. So if a node's type property is the constant ELEMENT_NODE, one can know that this node object is an Element object. This object uses the Element interface to define all the methods and properties of that particular node. The W3C (World Wide Web Consortium) defines the different node types, each with its own description.

A node object is represented by a single node in a tree. It can be an element node, attribute node, text node, or any other of the node types described above. All objects can inherit properties and methods for dealing with parent and child nodes, but not all of the objects have parent or child nodes. For example, with text nodes that cannot have child nodes, trying to add child nodes results in a DOM error.

Objects in the DOM tree may be addressed and manipulated by using methods on the objects. The public interface of a DOM is specified in its application programming interface (API). The history of the Document Object Model is intertwined with the history of the "browser wars" of the late 1990s between Netscape Navigator and Microsoft Internet Explorer, as well as with that of JavaScript and JScript, the first scripting languages to be widely implemented in the layout engines of web browsers.
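A minimal C++ sketch of tree nodes with the height and depth definitions given above (type and function names are invented for the example):

    #include <algorithm>
    #include <vector>

    struct Node {
        int value;
        Node* parent = nullptr;          // the root has no parent
        std::vector<Node*> children;     // links to child nodes
    };

    // Height: number of edges on the longest path down to a leaf.
    int height(const Node* n) {
        int h = 0;
        for (const Node* c : n->children)
            h = std::max(h, 1 + height(c));
        return h;
    }

    // Depth: number of edges up to the root; the root has depth zero.
    int depth(const Node* n) {
        return n->parent ? 1 + depth(n->parent) : 0;
    }

The height of the whole tree is then height(root), matching the definition above.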
https://en.wikipedia.org/wiki/Node_(computer_science)
A hierarchical query is a type of SQL query that handles hierarchical model data. Hierarchical queries are special cases of more general recursive fixpoint queries, which compute transitive closures.

In standard SQL:1999, hierarchical queries are implemented by way of recursive common table expressions (CTEs). Unlike Oracle's earlier connect-by clause, recursive CTEs were designed with fixpoint semantics from the beginning.[1] Recursive CTEs from the standard were relatively close to the existing implementation in IBM DB2 version 2.[1] Recursive CTEs are also supported by Microsoft SQL Server (since SQL Server 2008 R2),[2] Firebird 2.1,[3] PostgreSQL 8.4+,[4] SQLite 3.8.3+,[5] IBM Informix version 11.50+, CUBRID, MariaDB 10.2+ and MySQL 8.0.1+.[6] Tableau has documentation describing how CTEs can be used. TIBCO Spotfire does not support CTEs, while Oracle 11g Release 2's implementation lacks fixpoint semantics.

Without common table expressions or connect-by clauses it is possible to achieve hierarchical queries with user-defined recursive functions.[7]

A common table expression, or CTE, (in SQL) is a temporary named result set, derived from a simple query and defined within the execution scope of a SELECT, INSERT, UPDATE, or DELETE statement. CTEs can be thought of as alternatives to derived tables (subqueries), views, and inline user-defined functions.

Common table expressions are supported by Teradata (starting with version 14), IBM Db2, Informix (starting with version 14.1), Firebird (starting with version 2.1),[8] Microsoft SQL Server (starting with version 2005), Oracle (with recursion since 11g release 2), PostgreSQL (since 8.4), MariaDB (since 10.2[9]), MySQL (since 8.0), SQLite (since 3.8.3), HyperSQL, Informix (since 14.10),[10] Google BigQuery, Sybase (starting with version 9), Vertica, H2 (experimental),[11] and many others. Oracle calls CTEs "subquery factoring".[12]

The syntax for a CTE (which may or may not be recursive) is as follows:

    WITH [RECURSIVE] with_query [, ...]
    SELECT ...

where with_query's syntax is:

    query_name [ (column_name [, ...]) ] AS (SELECT ...)

Recursive CTEs can be used to traverse relations (as graphs or trees), although the syntax is much more involved because there are no automatic pseudo-columns created (like LEVEL in Oracle's connect-by, below); if these are desired, they have to be created in the code. See MSDN documentation[2] or IBM documentation[13][14] for tutorial examples.

The RECURSIVE keyword is not usually needed after WITH in systems other than PostgreSQL.[15]

In SQL:1999 a recursive (CTE) query may appear anywhere a query is allowed.
It's possible, for example, to name the result using CREATE [RECURSIVE] VIEW.[16] Using a CTE inside an INSERT INTO, one can populate a table with data generated from a recursive query; random data generation is possible using this technique without using any procedural statements.[17]

Some databases, like PostgreSQL, support a shorter CREATE RECURSIVE VIEW format which is internally translated into WITH RECURSIVE coding.[18]

An example of a recursive query computing the factorial of numbers from 0 to 9 is the following:

    WITH RECURSIVE temp (n, fact) AS (
        SELECT 0, 1                     -- initial subquery
      UNION ALL
        SELECT n + 1, (n + 1) * fact
        FROM temp
        WHERE n < 9                     -- recursive subquery
    )
    SELECT * FROM temp;

An alternative syntax is the non-standard CONNECT BY construct; it was introduced by Oracle in the 1980s.[19] Prior to Oracle 10g, the construct was only useful for traversing acyclic graphs because it returned an error on detecting any cycles; in version 10g Oracle introduced the NOCYCLE feature (and keyword), making the traversal work in the presence of cycles as well.[20]

CONNECT BY is supported by Snowflake, EnterpriseDB,[21] Oracle database,[22] CUBRID,[23] IBM Informix[24] and IBM Db2, although only if it is enabled as a compatibility mode.[25] The syntax is as follows:

    SELECT select_list
    FROM table_expression
    [ WHERE ... ]
    [ START WITH start_expression ]
    CONNECT BY [NOCYCLE] { PRIOR parent_expr = child_expr | child_expr = PRIOR parent_expr }
    [ ORDER SIBLINGS BY column1 [ ASC | DESC ] [, column2 [ ASC | DESC ] ] ... ]

A query of this form can, for example, return the last name of each employee in department 10, each manager above that employee in the hierarchy, the number of levels between manager and employee, and the path between the two.
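As a sketch of traversing a tree-shaped relation with a recursive CTE (the employees table, its columns, and the depth column are all invented for this example), note how the depth counter is built by hand, playing the role of Oracle's automatic LEVEL pseudo-column:

    WITH RECURSIVE subordinates (id, name, depth) AS (
        SELECT id, name, 0
        FROM employees
        WHERE manager_id IS NULL                     -- anchor: root of the hierarchy
      UNION ALL
        SELECT e.id, e.name, s.depth + 1
        FROM employees e
        JOIN subordinates s ON e.manager_id = s.id   -- recursive step
    )
    SELECT * FROM subordinates;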
https://en.wikipedia.org/wiki/Hierarchical_and_recursive_queries_in_SQL
Inmathematics, theKleene–Rosser paradoxis a paradox that shows that certain systems offormal logicareinconsistent, in particular the version ofHaskell Curry'scombinatory logicintroduced in 1930, andAlonzo Church's originallambda calculus, introduced in 1932–1933, both originally intended as systems of formal logic. The paradox was exhibited byStephen KleeneandJ. B. Rosserin 1935. Kleene and Rosser were able to show that both systems are able to characterize and enumerate their provably total, definable number-theoretic functions, which enabled them to construct a term that essentially replicatesRichard's paradoxin formal language. Curry later managed to identify the crucial ingredients of the calculi that allowed the construction of this paradox, and used this to construct a much simpler paradox, now known asCurry's paradox.
https://en.wikipedia.org/wiki/Kleene%E2%80%93Rosser_paradox
this,self, andMearekeywordsused in some computerprogramming languagesto refer to the object, class, or other entity which the currently running code is a part of. The entity referred to thus depends on theexecution context(such as which object has its method called). Different programming languages use these keywords in slightly different ways. In languages where a keyword like "this" is mandatory, the keyword is the only way to access data and methods stored in the current object. Where optional, these keywords can disambiguate variables and functions with the same name. In manyobject-orientedprogramming languages,this(also calledselforMe) is a variable that is used ininstance methodsto refer to the object on which they are working. The first OO language,SIMULA 67, usedthisto explicitly reference the local object.[1]: 4.3.2.3C++and languages which derive in style from it (such asJava,C#,D, andPHP) also generally usethis.Smalltalkand others, such asObject Pascal,Perl,Python,Ruby,Rust,Objective-C,DataFlexandSwift, useself. Microsoft'sVisual BasicusesMe. The concept is similar in all languages:thisis usually an immutablereferenceorpointerwhich refers to the current object; the current object often being the code that acts as 'parent' or 'invocant' to theproperty,method, sub-routine or function that contains thethiskeyword. After an object is properly constructed, or instantiated,thisis always a valid reference. Some languages require it explicitly; others uselexical scopingto use it implicitly to make symbols within their class visible. Or alternatively, the current object referred to bythismay be an independent code object that has called the function or method containing the keywordthis. Such a thing happens, for example, when aJavaScriptevent handler attached to an HTML tag in a web page calls a function containing the keywordthisstored in the global space outside the document object; in that context,thiswill refer to the page element within the document object, not the enclosing window object.[2] In some languages, for example C++, Java, and Rakuthisorselfis akeyword, and the variable automatically exists in instance methods. In others, for example, Python, Rust, and Perl 5, the firstparameterof an instance method is such a reference. It needs to be specified explicitly. In Python and Perl, the parameter need not necessarily be namedthisorself; it can be named freely by the programmer like any other parameter. However, by informal convention, the first parameter of an instance method in Perl or Python is namedself. Rust requires the self object to be called&selforself, depending on whether the invoked function borrows the invocant, or moves it in, respectively. Static methodsin C++ or Java are not associated with instances but classes, and so cannot usethis, because there is no object. In other languages, such as Ruby, Smalltalk, Objective-C, or Swift, the method is associated with aclass objectthat is passed asthis, and they are calledclass methods. For class methods, Python usesclsto access to theclass object. When lexical scoping is used to inferthis, the use ofthisin code, while not illegal, may raise warning bells to a maintenance programmer, although there are still legitimate uses ofthisin this case, such as referring to instance variables hidden by local variables of the same name, or if the method wants to return a reference to the current object, i.e.this, itself. 
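A short Java sketch of those last two uses, with hypothetical names chosen for illustration: this.count picks out the instance variable hidden by the parameter of the same name, and returning this hands back the current object so calls can be chained:

    public class Counter {
        private int count;

        public Counter add(int count) {
            this.count += count;   // 'this.count' is the field; 'count' alone is the parameter
            return this;           // returning the current object enables chaining
        }

        public int value() {
            return count;          // no shadowing here, so 'this.' may be omitted
        }
    }

    // Usage: new Counter().add(2).add(3).value() evaluates to 5.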
In some compilers (for exampleGCC), pointers to C++ instance methods can be directly cast to a pointer of another type, with an explicitthispointer parameter.[3] The dispatch semantics ofthis, namely that method calls onthisare dynamically dispatched, is known asopen recursion, and means that these methods can beoverriddenby derived classes or objects. By contrast, direct named recursion oranonymous recursionof a function usesclosed recursion, with static dispatch. For example, in the followingPerlcode for the factorial, the token__SUB__is a reference to the current function: By contrast, in C++ (using an explicitthisfor clarity, though not necessary) thethisbinds to the object itself, but if the class method was declared "virtual", i.e. polymorphic in the base, it is resolved via dynamic dispatch so that derived classes can override it; a sketch of this example appears at the end of this section. This example is artificial since this is direct recursion, so overriding thefactorialmethod would override this function; more natural examples are when a method in a derived class calls the same method in a base class, or in cases of mutual recursion.[4][5] Thefragile base classproblem has been blamed on open recursion, with the suggestion that invoking methods onthisdefault to closed recursion (static dispatch) rather than open recursion (dynamic dispatch), only using open recursion when it is specifically requested; external calls (not usingthis) would be dynamically dispatched as usual.[6][7]The way this is solved in practice in the JDK is through a certain programmer discipline; this discipline has been formalized by C. Ruby and G. T. Leavens; it consists of the following rules:[8] Early versions of C++ would let thethispointer be changed; by doing so a programmer could change which object a method was working on. This feature was eventually removed, and nowthisin C++ is anr-value.[9] Early versions of C++ did not include references, and it has been suggested that had references been available in C++ from the beginning,thiswould have been a reference, not a pointer.[10] C++ lets objects destroy themselves with the source code statement:delete this. The keywordthisinC#works the same way as in Java, for reference types. However, within C#value types,thishas quite different semantics, being similar to an ordinary mutable variable reference, and can even occur on the left side of an assignment. One use ofthisin C# is to allow reference to an outer field variable within a method that contains a local variable that has the same name. In such a situation, for example, the statementvar n = localAndFieldname;within the method will assign the type and value of the local variablelocalAndFieldnameton, whereas the statementvar n = this.localAndFieldname;will assign the type and value of the outer field variable ton.[11] InDthisin a class, struct, or union method refers to an immutable reference of the instance of the enclosing aggregate. Classes arereferencetypes, and structs and unions are value types. In the first version of D, the keywordthisis used as a pointer to the instance of the object the method is bound to, while in D2 it has the character of an implicitreffunction argument. In the programming languageDylan, which is an object-oriented language that supportsmultimethodsand doesn't have a concept ofthis, sending a message to an object is still kept in the syntax. The two forms below work in the same way; the differences are justsyntactic sugar.
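A sketch of the C++ factorial example described above (the class name is an assumption for illustration): the method is declared virtual, so the recursive call through this is resolved by dynamic dispatch (open recursion) and a derived class could override it:

    #include <iostream>

    struct Fact {
        virtual ~Fact() = default;
        // 'virtual' makes the call below polymorphic in the base.
        virtual unsigned long factorial(unsigned n) {
            // 'this' binds to the object itself; the explicit 'this->'
            // is for clarity only and could be omitted.
            return n == 0 ? 1 : n * this->factorial(n - 1);
        }
    };

    int main() {
        Fact f;
        std::cout << f.factorial(5) << '\n';   // prints 120
    }

Within a class text, thecurrent typeis the type obtained from thecurrent class.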
Within features (routines, commands and queries) of a class, one may use the keywordCurrentto reference the current class and its features. The use of the keywordCurrentis optional as the keywordCurrentis implied by simply referring to the name of the current class feature openly. For example: One might have a feature `foo' in a class MY_CLASS and refer to it by: [12] Line #10 (above) has the implied reference toCurrentby the call to simple `foo'. Line #10 (below) has the explicit reference toCurrentby the call to `Current.foo'. Either approach is acceptable to the compiler, but the implied version (e.g.x := foo) is preferred as it is less verbose. As with other languages, there are times when the use of the keywordCurrentis mandated, such as: In the case of the code above, the call on line #11 tomake_with_somethingis passing the current class by explicitly passing the keywordCurrent. The keywordthisis aJavalanguage keyword that represents the current instance of the class in which it appears. It is used to access class variables and methods. Since all instance methods are virtual in Java,thiscan never be null.[13] In JavaScript, which is a programming orscripting languageused extensively in web browsers,thisis an important keyword, although what it evaluates to depends on where it is used. To work around the different meaning ofthisin nested functions such as DOM event handlers, it is a common idiom in JavaScript to save thethisreference of the calling object in a variable (commonly calledthatorself), and then use the variable to refer to the calling object in nested functions. For example: Notably, JavaScript makes use of boththisand the related keywordself[17](in contrast to most other languages which tend to employ one or the other), withselfbeing restricted specifically to web workers.[18] Finally, as a reliable way of specifically referencing the global (window or equivalent) object, JavaScript features theglobalThiskeyword.[19] In Lua,selfis created assyntactic sugarwhen functions are defined using the:operator.[20]When invoking a method using:, the object being indexed will be implicitly given as the first argument to the function being invoked. For example, the following two functions are equivalent: Lua itself is not object-oriented, but when combined with another feature called metatables, the use ofselflets programmers define functions in a manner resembling object-oriented programming. In PowerShell, the specialautomatic variable$_contains the current object in the pipeline. You can use this variable in commands that perform an action on every object or on selected objects in a pipeline.[21] Also starting with PowerShell 5.0, which adds a formal syntax to define classes and other user-defined types,[22]the$thisvariable describes the current instance of the object. In Python, there is no keyword forthis. When a member function is called on an object, it invokes the member function with the same name on the object's class object, with the object automatically bound to the first argument of the function. Thus, the obligatory first parameter ofinstance methodsserves asthis; this parameter is conventionally namedself, but can be named anything. In class methods (created with theclassmethoddecorator), the first argument refers to the class object itself, and is conventionally calledcls; these are primarily used for inheritable constructors,[23]where the use of the class as a parameter allows subclassing the constructor.
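A brief Python sketch of such an inheritable constructor (the class names are assumptions for illustration). Because the alternate constructor builds its result through cls rather than naming the class directly, a subclass inherits it and gets back instances of itself:

    class Point:
        def __init__(self, x, y):
            self.x = x          # 'self' is the explicit first parameter
            self.y = y

        @classmethod
        def origin(cls):
            # 'cls' is the class object on which the method was invoked.
            return cls(0, 0)

    class Point3D(Point):
        def __init__(self, x, y, z=0):
            super().__init__(x, y)
            self.z = z

    p = Point3D.origin()        # a Point3D instance, not a plain Point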
In static methods (created with thestaticmethoddecorator), no special first argument exists. In Rust, types are declared separately from the functions associated with them. Functions designed to be analogous to instance methods in more traditionally object-oriented languages must explicitly takeselfas their first parameter. These functions can then be called usinginstance.method()syntax sugar. For example, a typeFoomight be defined with four associated functions, as sketched at the end of this section. The first,Foo::new(), is not an instance function and must be specified with the type prefix. The remaining three all take aselfparameter in a variety of ways and can be called on aFooinstance using the dot-notation syntax sugar, which is equivalent to calling the type-qualified function name with an explicitselffirst parameter. TheSelflanguage is named after this use of "self". Self is strictly used within methods of a class. Another way to refer to Self is to use ::.
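A hedged sketch of the Foo type described above; the field and method names are assumptions, and only the shape (one non-self constructor plus three functions taking self by shared reference, by mutable reference, and by value) is taken from the text:

    struct Foo {
        n: i32,
    }

    impl Foo {
        fn new(n: i32) -> Foo { Foo { n } }   // no self: called as Foo::new(42)
        fn value(&self) -> i32 { self.n }     // borrows the instance immutably
        fn bump(&mut self) { self.n += 1; }   // borrows the instance mutably
        fn consume(self) -> i32 { self.n }    // moves the instance in
    }

    fn main() {
        let mut foo = Foo::new(41);
        foo.bump();                    // sugar for Foo::bump(&mut foo)
        assert_eq!(foo.value(), 42);   // sugar for Foo::value(&foo)
        println!("{}", foo.consume()); // moves foo; it can no longer be used
    }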
https://en.wikipedia.org/wiki/Open_recursion
Sierpiński curvesare arecursivelydefinedsequenceofcontinuousclosed planefractal curvesdiscovered byWacław Sierpiński, which in the limitn→∞{\displaystyle n\to \infty }completely fill the unit square: thus their limit curve, also calledthe Sierpiński curve, is an example of aspace-filling curve. Because the Sierpiński curve is space-filling, itsHausdorff dimension(in the limitn→∞{\displaystyle n\to \infty }) is2{\displaystyle 2}. TheEuclidean lengthof then{\displaystyle n}thiteration curveSn{\displaystyle S_{n}}growsexponentiallywithn{\displaystyle n}beyond any limit, whereas the limit forn→∞{\displaystyle n\to \infty }of the area enclosed bySn{\displaystyle S_{n}}is5/12{\displaystyle 5/12\,}that of the square (in Euclidean metric). The Sierpiński curve is useful in several practical applications because it is more symmetrical than other commonly studied space-filling curves. For example, it has been used as a basis for the rapid construction of an approximate solution to theTravelling Salesman Problem(which asks for the shortest tour through a given set of points): The heuristic is simply to visit the points in the same sequence as they appear on the Sierpiński curve.[3]To do this requires two steps: First compute an inverse image of each point to be visited; then sort the values. This idea has been used to build routing systems for commercial vehicles based only on Rolodex card files.[4] A space-filling curve is a continuous map of the unit interval onto a unit square and so a (pseudo) inverse maps the unit square to the unit interval. One way of constructing a pseudo-inverse is as follows. Let the lower-left corner (0, 0) of the unit square correspond to 0.0 (and 1.0). Then the upper-left corner (0, 1) must correspond to 0.25, the upper-right corner (1, 1) to 0.50, and the lower-right corner (1, 0) to 0.75. The inverse map of interior points is computed by taking advantage of the recursive structure of the curve. A function coded in Java to compute the relative position of any point on the Sierpiński curve (that is, a pseudo-inverse value) is sketched at the end of this section. It takes as input the coordinates of the point (x, y) to be inverted, and the corners of an enclosing right isosceles triangle (ax, ay), (bx, by), and (cx, cy). (The unit square is the union of two such triangles.) The remaining parameters specify the level of accuracy to which the inverse should be computed. The Sierpiński curve can be expressed by arewrite system(L-system). Here, bothFandGmean "draw forward", + means "turn left 45°", and−means "turn right 45°" (seeturtle graphics). The curve is usually drawn with different lengths for F and G. The Sierpiński square curve can be similarly expressed: TheSierpiński arrowhead curveis a fractal curve similar in appearance and identical in limit to theSierpiński triangle. The Sierpiński arrowhead curve draws an equilateral triangle with triangular holes at equal intervals. It can be described with two substituting production rules: (A → B-A-B) and (B → A+B+A). A and B recur and at the bottom do the same thing — draw a line. Plus and minus (+ and -) mean turn 60 degrees either left or right. The terminating point of the Sierpiński arrowhead curve is always the same provided you recur an even number of times and you halve the length of the line at each recursion. If you recur to an odd depth (order is odd) then you end up turned 60 degrees, at a different point in the triangle.
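A hedged Java sketch of the pseudo-inverse function described above (names and parameter layout are assumptions; only the signature shape is taken from the text). It assumes the right-angle apex is the middle corner (bx, by). Because the median from the apex is the perpendicular bisector of the hypotenuse, comparing squared distances to the two hypotenuse ends decides which half of the curve contains the point, and each recursion level contributes one binary digit of the parameter:

    public final class SierpinskiInverse {
        // Returns an approximation, accurate to about 2^-depth, of the relative
        // position in [0, 1) of (x, y) along the curve segment filling the right
        // isosceles triangle (ax,ay)-(bx,by)-(cx,cy), with apex at (bx,by).
        static double invert(double ax, double ay, double bx, double by,
                             double cx, double cy, double x, double y, int depth) {
            if (depth == 0) return 0.0;
            double mx = (ax + cx) / 2.0, my = (ay + cy) / 2.0; // midpoint of hypotenuse
            double da = (x - ax) * (x - ax) + (y - ay) * (y - ay);
            double dc = (x - cx) * (x - cx) + (y - cy) * (y - cy);
            if (da < dc)  // nearer to a: in the half-triangle the curve visits first
                return 0.5 * invert(ax, ay, mx, my, bx, by, x, y, depth - 1);
            else          // nearer to c: in the half visited second
                return 0.5 + 0.5 * invert(bx, by, mx, my, cx, cy, x, y, depth - 1);
        }

        public static void main(String[] args) {
            // Upper-left triangle of the unit square: a=(0,0), apex b=(0,1), c=(1,1),
            // covering parameters 0.0 to 0.5 (hence the result is halved).
            System.out.println(0.5 * invert(0, 0, 0, 1, 1, 1, 0.25, 0.75, 30));
        }
    }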
An alternate construction is given in the article on thede Rham curve: one uses the same technique as the de Rham curves, but instead of using a binary (base-2) expansion, one uses a ternary (base-3) expansion. Given the drawing functionsvoid draw_line(double distance);andvoid turn(int angle_in_degrees);, the code to draw an (approximate) Sierpiński arrowhead curve inC++is sketched below, after the L-system description. The Sierpiński arrowhead curve can be expressed by arewrite system(L-system). Here,Fmeans "draw forward", + means "turn left 60°", and−means "turn right 60°" (seeturtle graphics).
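A sketch of that code, under the stated assumption that draw_line and turn are provided elsewhere. A single recursive function with a signed angle stands in for the two mutually recursive production rules, flipping orientation at each level; the order-parity handling mirrors the even/odd remark at the end of the previous section:

    void draw_line(double distance);   // assumed given, as stated above
    void turn(int angle_in_degrees);

    // One symbol of the rewrite system: expands into three half-length
    // sub-curves of alternating orientation, separated by 60-degree turns.
    void curve(unsigned order, double length, int angle) {
        if (order == 0) {
            draw_line(length);
        } else {
            curve(order - 1, length / 2, -angle);
            turn(angle);
            curve(order - 1, length / 2, +angle);
            turn(angle);
            curve(order - 1, length / 2, -angle);
        }
    }

    void sierpinski_arrowhead_curve(unsigned order, double length) {
        if (order % 2 == 0) {   // even order: draw the curve directly
            curve(order, length, +60);
        } else {                // odd order: pre-turn so the endpoints line up
            turn(+60);
            curve(order, length, -60);
        }
    }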
https://en.wikipedia.org/wiki/Sierpi%C5%84ski_curve
TheMcCarthy 91 functionis arecursive function, defined by thecomputer scientistJohn McCarthyas a test case forformal verificationwithincomputer science. The McCarthy 91 function is defined asM(n) =n− 10 forn> 100, andM(n) =M(M(n+ 11)) forn≤ 100. The results of evaluating the function are given byM(n) = 91 for all integer argumentsn≤ 100, andM(n) =n− 10 forn> 100. Indeed, the result of M(101) is also 91 (101 − 10 = 91). Forn> 101 the results increase by 1 with each increment ofn, e.g. M(102) = 92, M(103) = 93. The 91 function was introduced in papers published byZohar Manna,Amir PnueliandJohn McCarthyin 1970. These papers represented early developments towards the application offormal methodstoprogram verification. The 91 function was chosen for being nested-recursive (contrasted withsingle recursion, such as definingf(n){\displaystyle f(n)}by means off(n−1){\displaystyle f(n-1)}). The example was popularized by Manna's book,Mathematical Theory of Computation(1974). As the field of Formal Methods advanced, this example appeared repeatedly in the research literature. In particular, it is viewed as a "challenge problem" for automated program verification. Because it is easier to reason abouttail-recursivecontrol flow, the following is an equivalent (extensionally equal) definition: As one of the examples used to demonstrate such reasoning, Manna's book includes a tail-recursive algorithm equivalent to the nested-recursive 91 function. Many of the papers that report an "automated verification" (ortermination proof) of the 91 function only handle the tail-recursive version. This is an equivalentmutuallytail-recursive definition: A formal derivation of the mutually tail-recursive version from the nested-recursive one was given in a 1980 article byMitchell Wand, based on the use ofcontinuations. The nested-recursive algorithm has been implemented inLisp,Haskell,OCaml,Python, andC, and the tail-recursive algorithm inOCamlandC; a Python sketch of both appears at the end of this section. Here is a proof that the McCarthy 91 functionM{\displaystyle M}is equivalent to the non-recursive algorithmM′{\displaystyle M'}defined asM′(n) = 91 forn≤ 100 andM′(n) =n− 10 forn> 100. Forn> 100, the definitions ofM′{\displaystyle M'}andM{\displaystyle M}are the same. The equality therefore follows from the definition ofM{\displaystyle M}. Forn≤ 100, astrong inductiondownward from 100 can be used. For 90 ≤n≤ 100,M(n) =M(M(n+ 11)) =M(n+ 1), sincen+ 11 > 100. Chaining these equalities givesM(n) =M(101) = 91 for 90 ≤n≤ 100, which can be used as the base case of the induction. For the downward induction step, letn≤ 89 and assumeM(i) = 91 for alln<i≤ 100; thenM(n) =M(M(n+ 11)) =M(91) = 91, sincen<n+ 11 ≤ 100 andn< 91 ≤ 100. This provesM(n) = 91 for alln≤ 100, including negative values. Donald Knuthgeneralized the 91 function to include additional parameters.[1]John Cowlesdeveloped a formal proof that Knuth's generalized function was total, using theACL2theorem prover.[2]
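As a sketch of the implementations mentioned above, here are Python versions of the nested-recursive definition and of the tail-recursive one; the latter is written with a loop, since Python does not eliminate tail calls, and the counter c tracks how many applications of M are still pending:

    def mccarthy91(n):
        """Nested-recursive definition."""
        if n > 100:
            return n - 10
        return mccarthy91(mccarthy91(n + 11))

    def mccarthy91_tail(n, c=1):
        """Tail-recursive definition, expressed iteratively."""
        while c != 0:
            if n > 100:
                n, c = n - 10, c - 1
            else:
                n, c = n + 11, c + 1
        return n

    assert all(mccarthy91(n) == 91 for n in range(-5, 101))
    assert mccarthy91(150) == mccarthy91_tail(150) == 140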
https://en.wikipedia.org/wiki/McCarthy_91_function
Inmathematical logicandcomputer science, ageneral recursive function,partial recursive function, orμ-recursive functionis apartial functionfromnatural numbersto natural numbers that is "computable" in an intuitive sense – as well as in aformal one. If the function is total, it is also called atotal recursive function(sometimes shortened torecursive function).[1]Incomputability theory, it is shown that the μ-recursive functions are precisely the functions that can be computed byTuring machines[2][4](this is one of the theorems that supports theChurch–Turing thesis). The μ-recursive functions are closely related toprimitive recursive functions, and their inductive definition (below) builds upon that of the primitive recursive functions. However, not every total recursive function is a primitive recursive function—the most famous example is theAckermann function. Other equivalent classes of functions are the functions oflambda calculusand the functions that can be computed byMarkov algorithms. The subset of alltotalrecursive functions with values in{0,1}is known incomputational complexity theoryas thecomplexity class R. Theμ-recursive functions(orgeneral recursive functions) are partial functions that take finite tuples of natural numbers and return a single natural number. They are the smallest class of partial functions that includes the initial functions and is closed under composition, primitive recursion, and theminimization operatorμ. The smallest class of functions including the initial functions and closed under composition and primitive recursion (i.e. without minimisation) is the class ofprimitive recursive functions. While all primitive recursive functions are total, this is not true of partial recursive functions; for example, the minimisation of the successor function is undefined. The primitive recursive functions are a subset of the total recursive functions, which are a subset of the partial recursive functions. For example, theAckermann functioncan be proven to be total recursive, and to be non-primitive. Primitive or "basic" functions: Operators (thedomain of a functiondefined by an operator is the set of the values of the arguments such that every function application that must be done during the computation provides a well-defined result): Intuitively, minimisation seeks—beginning the search from 0 and proceeding upwards—the smallest argument that causes the function to return zero; if there is no such argument, or if one encounters an argument for whichfis not defined, then the search never terminates, andμ(f){\displaystyle \mu (f)}is not defined for the argument(x1,…,xk).{\displaystyle (x_{1},\ldots ,x_{k}).} While some textbooks use the μ-operator as defined here,[5][6]others[7][8]demand that the μ-operator is applied tototalfunctionsfonly. Although this restricts the μ-operator as compared to the definition given here, the class of μ-recursive functions remains the same, which follows from Kleene's Normal Form Theorem (seebelow).[5][6]The only difference is, that it becomes undecidable whether a specific function definition defines a μ-recursive function, as it is undecidable whether a computable (i.e. μ-recursive) function is total.[7] Thestrong equalityrelation≃{\displaystyle \simeq }can be used to compare partial μ-recursive functions. This is defined for all partial functionsfandgso that holds if and only if for any choice of arguments either both functions are defined and their values are equal or both functions are undefined. 
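To make the minimisation operator concrete, here is a hedged Python sketch (the helper names are assumptions). mu(f) returns the partial function that searches upward from 0 for the smallest z with f(z, x1, ..., xk) = 0, and simply fails to terminate when no such z exists, mirroring the partiality discussed above:

    def mu(f):
        """Unbounded search: least z such that f(z, *xs) == 0."""
        def minimised(*xs):
            z = 0
            while f(z, *xs) != 0:   # loops forever if no zero is ever found
                z += 1
            return z
        return minimised

    # Subtraction as minimisation: the least z with z + y == x.
    diff = mu(lambda z, x, y: 0 if z + y == x else 1)

    print(diff(7, 3))   # 4
    # diff(3, 7) would never return: no natural number z satisfies
    # z + 7 == 3, so the function is undefined (partial) there.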
Examples not involving the minimization operator can be found atPrimitive recursive function#Examples. The following examples are intended just to demonstrate the use of the minimization operator; they could also be defined without it, albeit in a more complicated way, since they are all primitive recursive. The following examples define general recursive functions that are not primitive recursive; hence they cannot avoid using the minimization operator. A general recursive function is calledtotal recursive functionif it is defined for every input, or, equivalently, if it can be computed by atotal Turing machine. There is no way to computably tell if a given general recursive function is total - seeHalting problem. In theequivalence of models of computability, a parallel is drawn betweenTuring machinesthat do not terminate for certain inputs and an undefined result for that input in the corresponding partial recursive function. The unbounded search operator is not definable by the rules of primitive recursion as those do not provide a mechanism for "infinite loops" (undefined values). Anormal form theoremdue to Kleene says that for eachkthere are primitive recursive functionsU(y){\displaystyle U(y)\!}andT(y,e,x1,…,xk){\displaystyle T(y,e,x_{1},\ldots ,x_{k})\!}such that for any μ-recursive functionf(x1,…,xk){\displaystyle f(x_{1},\ldots ,x_{k})\!}withkfree variables there is anesuch that The numbereis called anindexorGödel numberfor the functionf.[10]: 52–53A consequence of this result is that any μ-recursive function can be defined using a single instance of the μ operator applied to a (total) primitive recursive function. Minskyobserves theU{\displaystyle U}defined above is in essence the μ-recursive equivalent of theuniversal Turing machine: To construct U is to write down the definition of a general-recursive function U(n, x) that correctly interprets the number n and computes the appropriate function of x. to construct U directly would involve essentially the same amount of effort,and essentially the same ideas, as we have invested in constructing the universal Turing machine[11] A number of different symbolisms are used in the literature. An advantage to using the symbolism is a derivation of a function by "nesting" of the operators one inside the other is easier to write in a compact form. In the following the string of parameters x1, ..., xnis abbreviated asx: Example: Kleene gives an example of how to perform the recursive derivation of f(b, a) = b + a (notice reversal of variables a and b). He starts with 3 initial functions He arrives at:
https://en.wikipedia.org/wiki/%CE%9C-recursive_function
Incomputability theory, aprimitive recursive functionis, roughly speaking, a function that can be computed by acomputer programwhoseloopsare all"for" loops(that is, an upper bound of the number of iterations of every loop is fixed before entering the loop). Primitive recursive functions form a strictsubsetof thosegeneral recursive functionsthat are alsototal functions. The importance of primitive recursive functions lies in the fact that mostcomputable functionsthat are studied innumber theory(and more generally in mathematics) are primitive recursive. For example,additionanddivision, thefactorialandexponential function, and the function which returns thenth prime are all primitive recursive.[1]In fact, for showing that a computable function is primitive recursive, it suffices to show that itstime complexityis bounded above by a primitive recursive function of the input size.[2]It is hence not particularly easy to devise acomputable functionthat isnotprimitive recursive; some examples are shown in section§ Limitationsbelow. The set of primitive recursive functions is known asPRincomputational complexity theory. A primitive recursive function takes a fixed number of arguments, each anatural number(nonnegative integer: {0, 1, 2, ...}), and returns a natural number. If it takesnarguments it is calledn-ary. The basic primitive recursive functions are given by theseaxioms: More complex primitive recursive functions can be obtained by applying theoperationsgiven by these axioms: Interpretation: Theprimitive recursive functionsare the basic functions and those obtained from the basic functions by applying these operations a finite number of times. A definition of the 2-ary functionAdd{\displaystyle Add}, to compute the sum of its arguments, can be obtained using the primitive recursion operatorρ{\displaystyle \rho }. To this end, the well-known equations are "rephrased in primitive recursive function terminology": In the definition ofρ(g,h){\displaystyle \rho (g,h)}, the first equation suggests to chooseg=P11{\displaystyle g=P_{1}^{1}}to obtainAdd(0,y)=g(y)=y{\displaystyle Add(0,y)=g(y)=y}; the second equation suggests to chooseh=S∘P23{\displaystyle h=S\circ P_{2}^{3}}to obtainAdd(S(x),y)=h(x,Add(x,y),y)=(S∘P23)(x,Add(x,y),y)=S(Add(x,y)){\displaystyle Add(S(x),y)=h(x,Add(x,y),y)=(S\circ P_{2}^{3})(x,Add(x,y),y)=S(Add(x,y))}. Therefore, the addition function can be defined asAdd=ρ(P11,S∘P23){\displaystyle Add=\rho (P_{1}^{1},S\circ P_{2}^{3})}. As a computation example, GivenAdd{\displaystyle Add}, the 1-ary functionAdd∘(P11,P11){\displaystyle Add\circ (P_{1}^{1},P_{1}^{1})}doubles its argument,(Add∘(P11,P11))(x)=Add(x,x)=x+x{\displaystyle (Add\circ (P_{1}^{1},P_{1}^{1}))(x)=Add(x,x)=x+x}. In a similar way as addition, multiplication can be defined byMul=ρ(C01,Add∘(P23,P33)){\displaystyle Mul=\rho (C_{0}^{1},Add\circ (P_{2}^{3},P_{3}^{3}))}. This reproduces the well-known multiplication equations: and The predecessor function acts as the "opposite" of the successor function and is recursively defined by the rulesPred(0)=0{\displaystyle Pred(0)=0}andPred(S(n))=n{\displaystyle Pred(S(n))=n}. A primitive recursive definition isPred=ρ(C00,P12){\displaystyle Pred=\rho (C_{0}^{0},P_{1}^{2})}. As a computation example, The limited subtraction function (also called "monus", and denoted "−.{\displaystyle {\stackrel {.}{-}}}") is definable from the predecessor function. 
It satisfies the equations Since the recursion runs over the second argument, we begin with a primitive recursive definition of the reversed subtraction,RSub(y,x)=x−.y{\displaystyle RSub(y,x)=x{\stackrel {.}{-}}y}. Its recursion then runs over the first argument, so its primitive recursive definition can be obtained, similar to addition, asRSub=ρ(P11,Pred∘P23){\displaystyle RSub=\rho (P_{1}^{1},Pred\circ P_{2}^{3})}. To get rid of the reversed argument order, then defineSub=RSub∘(P22,P12){\displaystyle Sub=RSub\circ (P_{2}^{2},P_{1}^{2})}. As a computation example, In some settings it is natural to consider primitive recursive functions that take as inputs tuples that mix numbers withtruth values(that ist{\displaystyle t}for true andf{\displaystyle f}for false),[citation needed]or that produce truth values as outputs.[4]This can be accomplished by identifying the truth values with numbers in any fixed manner. For example, it is common to identify the truth valuet{\displaystyle t}with the number1{\displaystyle 1}and the truth valuef{\displaystyle f}with the number0{\displaystyle 0}. Once this identification has been made, thecharacteristic functionof a setA{\displaystyle A}, which always returns1{\displaystyle 1}or0{\displaystyle 0}, can be viewed as a predicate that tells whether a number is in the setA{\displaystyle A}. Such an identification of predicates with numeric functions will be assumed for the remainder of this article. As an example for a primitive recursive predicate, the 1-ary functionIsZero{\displaystyle IsZero}shall be defined such thatIsZero(x)=1{\displaystyle IsZero(x)=1}ifx=0{\displaystyle x=0}, andIsZero(x)=0{\displaystyle IsZero(x)=0}, otherwise. This can be achieved by definingIsZero=ρ(C10,C02){\displaystyle IsZero=\rho (C_{1}^{0},C_{0}^{2})}. Then,IsZero(0)=ρ(C10,C02)(0)=C10(0)=1{\displaystyle IsZero(0)=\rho (C_{1}^{0},C_{0}^{2})(0)=C_{1}^{0}(0)=1}and e.g.IsZero(8)=ρ(C10,C02)(S(7))=C02(7,IsZero(7))=0{\displaystyle IsZero(8)=\rho (C_{1}^{0},C_{0}^{2})(S(7))=C_{0}^{2}(7,IsZero(7))=0}. Using the propertyx≤y⟺x−.y=0{\displaystyle x\leq y\iff x{\stackrel {.}{-}}y=0}, the 2-ary functionLeq{\displaystyle Leq}can be defined byLeq=IsZero∘Sub{\displaystyle Leq=IsZero\circ Sub}. ThenLeq(x,y)=1{\displaystyle Leq(x,y)=1}ifx≤y{\displaystyle x\leq y}, andLeq(x,y)=0{\displaystyle Leq(x,y)=0}, otherwise. As a computation example, Once a definition ofLeq{\displaystyle Leq}is obtained, the converse predicate can be defined asGeq=Leq∘(P22,P12){\displaystyle Geq=Leq\circ (P_{2}^{2},P_{1}^{2})}. Then,Geq(x,y)=Leq(y,x){\displaystyle Geq(x,y)=Leq(y,x)}is true (more precisely: has value 1) if, and only if,x≥y{\displaystyle x\geq y}. The 3-ary if-then-else operator known from programming languages can be defined byIf=ρ(P22,P34){\displaystyle {\textit {If}}=\rho (P_{2}^{2},P_{3}^{4})}. Then, for arbitraryx{\displaystyle x}, and That is,If(x,y,z){\displaystyle {\textit {If}}(x,y,z)}returns the then-part,y{\displaystyle y}, if the if-part,x{\displaystyle x}, is true, and the else-part,z{\displaystyle z}, otherwise. Based on theIf{\displaystyle {\textit {If}}}function, it is easy to define logical junctors. For example, definingAnd=If∘(P12,P22,C02){\displaystyle And={\textit {If}}\circ (P_{1}^{2},P_{2}^{2},C_{0}^{2})}, one obtainsAnd(x,y)=If(x,y,0){\displaystyle And(x,y)={\textit {If}}(x,y,0)}, that is,And(x,y){\displaystyle And(x,y)}is trueif, and only if, bothx{\displaystyle x}andy{\displaystyle y}are true (logical conjunctionofx{\displaystyle x}andy{\displaystyle y}). 
Similarly,Or=If∘(P12,C12,P22){\displaystyle Or={\textit {If}}\circ (P_{1}^{2},C_{1}^{2},P_{2}^{2})}andNot=If∘(P11,C01,C11){\displaystyle Not={\textit {If}}\circ (P_{1}^{1},C_{0}^{1},C_{1}^{1})}lead to appropriate definitions ofdisjunctionandnegation:Or(x,y)=If(x,1,y){\displaystyle Or(x,y)={\textit {If}}(x,1,y)}andNot(x)=If(x,0,1){\displaystyle Not(x)={\textit {If}}(x,0,1)}. Using the above functionsLeq{\displaystyle Leq},Geq{\displaystyle Geq}andAnd{\displaystyle And}, the definitionEq=And∘(Leq,Geq){\displaystyle Eq=And\circ (Leq,Geq)}implements the equality predicate. In fact,Eq(x,y)=And(Leq(x,y),Geq(x,y)){\displaystyle Eq(x,y)=And(Leq(x,y),Geq(x,y))}is true if, and only if,x{\displaystyle x}equalsy{\displaystyle y}. Similarly, the definitionLt=Not∘Geq{\displaystyle Lt=Not\circ Geq}implements the predicate "less-than", andGt=Not∘Leq{\displaystyle Gt=Not\circ Leq}implements "greater-than". Exponentiationandprimality testingare primitive recursive. Given primitive recursive functionse{\displaystyle e},f{\displaystyle f},g{\displaystyle g}, andh{\displaystyle h}, a function that returns the value ofg{\displaystyle g}whene≤f{\displaystyle e\leq f}and the value ofh{\displaystyle h}otherwise is primitive recursive. By usingGödel numberings, the primitive recursive functions can be extended to operate on other objects such as integers andrational numbers. If integers are encoded by Gödel numbers in a standard way, the arithmetic operations including addition, subtraction, and multiplication are all primitive recursive. Similarly, if the rationals are represented by Gödel numbers then thefieldoperations are all primitive recursive. The following examples and definitions are fromKleene (1952, pp. 222–231). Many appear with proofs. Most also appear with similar names, either as proofs or as examples, inBoolos, Burgess & Jeffrey (2002, pp. 63–70) they add the logarithm lo(x, y) or lg(x, y) depending on the exact derivation. In the following the mark " ' ", e.g. a', is the primitive mark meaning "the successor of", usually thought of as " +1", e.g. a +1 =defa'. The functions 16–20 and #G are of particular interest with respect to converting primitive recursive predicates to, and extracting them from, their "arithmetical" form expressed asGödel numbers. The broader class ofpartial recursive functionsis defined by introducing anunbounded search operator. The use of this operator may result in apartial function, that is, a relation withat mostone value for each argument, but does not necessarily haveanyvalue for any argument (seedomain). An equivalent definition states that a partial recursive function is one that can be computed by aTuring machine. A total recursive function is a partial recursive function that is defined for every input. Every primitive recursive function is total recursive, but not all total recursive functions are primitive recursive. TheAckermann functionA(m,n) is a well-known example of a total recursive function (in fact, provable total), that is not primitive recursive. There is a characterization of the primitive recursive functions as a subset of the total recursive functions using the Ackermann function. 
This characterization states that a function is primitive recursiveif and only ifthere is a natural numbermsuch that the function can be computed by a Turingmachine that always haltswithin A(m,n) or fewer steps, wherenis the sum of the arguments of the primitive recursive function.[5] An important property of the primitive recursive functions is that they are arecursively enumerablesubset of the set of alltotal recursive functions(which is not itself recursively enumerable). This means that there is a single computable functionf(m,n) that enumerates the primitive recursive functions, namely: fcan be explicitly constructed by iteratively enumerating all possible ways of creating primitive recursive functions. Thus, it is provably total. One can use adiagonalizationargument to show thatfis not itself primitive recursive: if it were, then so would beh(n) =f(n,n)+1. But if this equals some primitive recursive function, there is anmsuch thath(n) =f(m,n) for alln, and thenh(m) =f(m,m), leading to a contradiction. However, the set of primitive recursive functions is not thelargestrecursively enumerable subset of the set of all total recursive functions. For example, the set of provably total functions (in Peano arithmetic) is also recursively enumerable, as one can enumerate all the proofs of the theory. While all primitive recursive functions are provably total, the converse is not true. Primitive recursive functions tend to correspond very closely with our intuition of what a computable function must be. Certainly the initial functions are intuitively computable (in their very simplicity), and the two operations by which one can create new primitive recursive functions are also very straightforward. However, the set of primitive recursive functions does not include every possible total computable function—this can be seen with a variant ofCantor's diagonal argument. This argument provides a total computable function that is not primitive recursive. A sketch of the proof is as follows: enumerate the definitions of the unary primitive recursive functions asf0,f1,f2, …. Now define the "evaluator function"ev{\displaystyle ev}with two arguments, byev(i,j)=fi(j){\displaystyle ev(i,j)=f_{i}(j)}. Clearlyev{\displaystyle ev}is total and computable, since one can effectively determine the definition offi{\displaystyle f_{i}}, and being a primitive recursive functionfi{\displaystyle f_{i}}is itself total and computable, sofi(j){\displaystyle f_{i}(j)}is always defined and effectively computable. However, a diagonal argument shows that the functionev{\displaystyle ev}of two arguments is not primitive recursive. This argument can be applied to any class of computable (total) functions that can be enumerated in this way, as explained in the articleMachine that always halts. Note however that thepartialcomputable functions (those that need not be defined for all arguments) can be explicitly enumerated, for instance by enumerating Turing machine encodings. Other examples of total recursive but not primitive recursive functions are known: Instead ofCnk{\displaystyle C_{n}^{k}}, alternative definitions use just one 0-aryzero functionC00{\displaystyle C_{0}^{0}}as a primitive function that always returns zero, and build the constant functions from the zero function, the successor function and the composition operator. Robinson[6]considered various restrictions of the recursion rule.
One is the so-callediteration rulewhere the functionhdoes not have access to the parametersxi(in this case, we may assume without loss of generality that the functiongis just the identity, as the general case can be obtained by substitution): He proved that the class of all primitive recursive functions can still be obtained in this way. Another restriction considered by Robinson[6]ispure recursion, wherehdoes not have access to the induction variabley: Gladstone[7]proved that this rule is enough to generate all primitive recursive functions. Gladstone[8]improved this so that even the combination of these two restrictions, i.e., thepure iterationrule below, is enough: Further improvements are possible: Severin[9]proved that even the pure iteration rulewithout parameters, namely suffices to generate allunaryprimitive recursive functions if we extend the set of initial functions with truncated subtractionx ∸ y. We getallprimitive recursive functions if we additionally include + as an initial function. Some additional forms of recursion also define functions that are in fact primitive recursive. Definitions in these forms may be easier to find or more natural for reading or writing.Course-of-values recursiondefines primitive recursive functions. Some forms ofmutual recursionalso define primitive recursive functions. The functions that can be programmed in theLOOP programming languageare exactly the primitive recursive functions. This gives a different characterization of the power of these functions. The main limitation of the LOOP language, compared to aTuring-complete language, is that in the LOOP language the number of times that each loop will run is specified before the loop begins to run. An example of a primitive recursive programming language is one that contains basic arithmetic operators (e.g. + and −, or ADD and SUBTRACT), conditionals and comparison (IF-THEN, EQUALS, LESS-THAN), and bounded loops, such as the basicfor loop, where there is a known or calculable upper bound to all loops (FOR i FROM 1 TO n, with neither i nor n modifiable by the loop body). No control structures of greater generality, such aswhile loopsor IF-THEN plusGOTO, are admitted in a primitive recursive language. TheLOOP language, introduced in a 1967 paper byAlbert R. MeyerandDennis M. Ritchie,[10]is such a language. Its computing power coincides with the primitive recursive functions. A variant of the LOOP language isDouglas Hofstadter'sBlooPinGödel, Escher, Bach. Adding unbounded loops (WHILE, GOTO) makes the languagegeneral recursiveandTuring-complete, as are all real-world computer programming languages. The definition of primitive recursive functions implies that their computation halts on every input (after a finite number of steps). On the other hand, thehalting problemisundecidablefor general recursive functions. The primitive recursive functions are closely related to mathematicalfinitism, and are used in several contexts in mathematical logic where a particularly constructive system is desired.Primitive recursive arithmetic(PRA), a formal axiom system for the natural numbers and the primitive recursive functions on them, is often used for this purpose. PRA is much weaker thanPeano arithmetic, which is not a finitistic system. Nevertheless, many results innumber theoryand inproof theorycan be proved in PRA.
For example,Gödel's incompleteness theoremcan be formalized into PRA, giving the following theorem: Similarly, many of the syntactic results in proof theory can be proved in PRA, which implies that there are primitive recursive functions that carry out the corresponding syntactic transformations of proofs. In proof theory andset theory, there is an interest in finitisticconsistency proofs, that is, consistency proofs that themselves are finitistically acceptable. Such a proof establishes that the consistency of a theoryTimplies the consistency of a theorySby producing a primitive recursive function that can transform any proof of an inconsistency fromSinto a proof of an inconsistency fromT. One sufficient condition for a consistency proof to be finitistic is the ability to formalize it in PRA. For example, many consistency results in set theory that are obtained byforcingcan be recast as syntactic proofs that can be formalized in PRA. Recursive definitionshad been used more or less formally in mathematics before, but the construction of primitive recursion is traced back toRichard Dedekind's theorem 126 of hisWas sind und was sollen die Zahlen?(1888). This work was the first to give a proof that a certain recursive construction defines a unique function.[11][12][13] Primitive recursive arithmeticwas first proposed byThoralf Skolem[14]in 1923. The current terminology was coined byRózsa Péter(1934) afterAckermannhad proved in 1928 that the function which today is named after him was not primitive recursive, an event which prompted the need to rename what until then were simply called recursive functions.[12][13]
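To tie the formal apparatus back to the earlier examples, here is a hedged Python sketch of the basic functions and the two operators, used to rebuild Add, Pred, and IsZero exactly as they were defined above (the Python helper names are assumptions; the bounded loop inside rho computes the recursion bottom-up):

    def S(x):                    # successor
        return x + 1

    def C(n, k):                 # k-ary constant function with value n (C^k_n)
        return lambda *xs: n

    def P(k, i):                 # k-ary projection onto the i-th argument (P^k_i)
        return lambda *xs: xs[i - 1]

    def compose(f, *gs):         # composition operator
        return lambda *xs: f(*(g(*xs) for g in gs))

    def rho(g, h):               # primitive recursion operator:
        def F(y, *xs):           #   F(0, xs) = g(xs); F(y+1, xs) = h(y, F(y, xs), xs)
            acc = g(*xs)
            for i in range(y):
                acc = h(i, acc, *xs)
            return acc
        return F

    Add    = rho(P(1, 1), compose(S, P(3, 2)))   # Add    = rho(P^1_1, S . P^3_2)
    Pred   = rho(C(0, 0), P(2, 1))               # Pred   = rho(C^0_0, P^2_1)
    IsZero = rho(C(1, 0), C(0, 2))               # IsZero = rho(C^0_1, C^2_0)

    assert Add(3, 4) == 7 and Pred(8) == 7
    assert IsZero(0) == 1 and IsZero(8) == 0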
https://en.wikipedia.org/wiki/Primitive_recursive_function
Incomputer science, theTak functionis arecursive function, named afterIkuo Takeuchi[ja]. It is defined as follows: τ(x,y,z)={τ(τ(x−1,y,z),τ(y−1,z,x),τ(z−1,x,y))ify<xzotherwise{\displaystyle \tau (x,y,z)={\begin{cases}\tau (\tau (x-1,y,z),\tau (y-1,z,x),\tau (z-1,x,y))&{\text{if }}y<x\\z&{\text{otherwise}}\end{cases}}} This function is often used as abenchmarkfor languages with optimization forrecursion.[1][2][3][4] The original definition by Takeuchi was as follows: tarai is short forたらい回し(tarai mawashi, "to pass around") in Japanese. John McCarthynamed this function tak() after Takeuchi.[5] However, in certain later references, the y somehow got turned into the z. This is a small, but significant difference because the original version benefits significantly fromlazy evaluation. Though written in exactly the same manner as the others, theHaskellcode below runs much faster. One can easily accelerate this function viamemoization, yet lazy evaluation still wins. The best-known way to optimize tarai is to use a mutually recursive helper function as follows. Here is an efficient implementation of tarai() in C: Note the additional check for (x <= y) before z (the third argument) is evaluated, avoiding unnecessary recursive evaluation.
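The Haskell definition referred to above might be sketched as follows, using the original tarai form that returns y in the base case. Because Haskell evaluates function arguments lazily (call by need), the three inner calls are only forced when y < x actually requires them, which is where the speed-up comes from:

    tarai :: Int -> Int -> Int -> Int
    tarai x y z
      | x <= y    = y
      | otherwise = tarai (tarai (x - 1) y z)
                          (tarai (y - 1) z x)
                          (tarai (z - 1) x y)

    main :: IO ()
    main = print (tarai 12 6 0)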
https://en.wikipedia.org/wiki/Tak_(function)
Inmathematics, specificallycategory theory, afunctoris amappingbetweencategories. Functors were first considered inalgebraic topology, where algebraic objects (such as thefundamental group) are associated totopological spaces, and maps between these algebraic objects are associated tocontinuousmaps between spaces. Nowadays, functors are used throughout modern mathematics to relate various categories. Thus, functors are important in all areas within mathematics to whichcategory theoryis applied. The wordscategoryandfunctorwere borrowed by mathematicians from the philosophersAristotleandRudolf Carnap, respectively.[1]The latter usedfunctorin alinguisticcontext;[2]seefunction word. LetCandDbecategories. AfunctorFfromCtoDis a mapping that[3] That is, functors must preserveidentity morphismsandcompositionof morphisms. There are many constructions in mathematics that would be functors but for the fact that they "turn morphisms around" and "reverse composition". We then define acontravariant functorFfromCtoDas a mapping that Variance of functor (composite)[4] Note that contravariant functors reverse the direction of composition. Ordinary functors are also calledcovariant functorsin order to distinguish them from contravariant ones. Note that one can also define a contravariant functor as acovariantfunctor on theopposite categoryCop{\displaystyle C^{\mathrm {op} }}.[5]Some authors prefer to write all expressions covariantly. That is, instead of sayingF:C→D{\displaystyle F\colon C\to D}is a contravariant functor, they simply writeF:Cop→D{\displaystyle F\colon C^{\mathrm {op} }\to D}(or sometimesF:C→Dop{\displaystyle F\colon C\to D^{\mathrm {op} }}) and call it a functor. Contravariant functors are also occasionally calledcofunctors.[6] There is a convention which refers to "vectors"—i.e.,vector fields, elements of the space of sectionsΓ(TM){\displaystyle \Gamma (TM)}of atangent bundleTM{\displaystyle TM}—as "contravariant" and to "covectors"—i.e.,1-forms, elements of the space of sectionsΓ(T∗M){\displaystyle \Gamma {\mathord {\left(T^{*}M\right)}}}of acotangent bundleT∗M{\displaystyle T^{*}M}—as "covariant". This terminology originates in physics, and its rationale has to do with the position of the indices ("upstairs" and "downstairs") inexpressionssuch asx′i=Λjixj{\displaystyle {x'}^{\,i}=\Lambda _{j}^{i}x^{j}}forx′=Λx{\displaystyle \mathbf {x} '={\boldsymbol {\Lambda }}\mathbf {x} }orωi′=Λijωj{\displaystyle \omega '_{i}=\Lambda _{i}^{j}\omega _{j}}forω′=ωΛT.{\displaystyle {\boldsymbol {\omega }}'={\boldsymbol {\omega }}{\boldsymbol {\Lambda }}^{\textsf {T}}.}In this formalism it is observed that the coordinate transformation symbolΛij{\displaystyle \Lambda _{i}^{j}}(representing the matrixΛT{\displaystyle {\boldsymbol {\Lambda }}^{\textsf {T}}}) acts on the "covector coordinates" "in the same way" as on the basis vectors:ei=Λijej{\displaystyle \mathbf {e} _{i}=\Lambda _{i}^{j}\mathbf {e} _{j}}—whereas it acts "in the opposite way" on the "vector coordinates" (but "in the same way" as on the basis covectors:ei=Λjiej{\displaystyle \mathbf {e} ^{i}=\Lambda _{j}^{i}\mathbf {e} ^{j}}). This terminology is contrary to the one used in category theory because it is the covectors that havepullbacksin general and are thuscontravariant, whereas vectors in general arecovariantsince they can bepushed forward. See alsoCovariance and contravariance of vectors. 
Every functorF:C→D{\displaystyle F\colon C\to D}induces theopposite functorFop:Cop→Dop{\displaystyle F^{\mathrm {op} }\colon C^{\mathrm {op} }\to D^{\mathrm {op} }}, whereCop{\displaystyle C^{\mathrm {op} }}andDop{\displaystyle D^{\mathrm {op} }}are theopposite categoriestoC{\displaystyle C}andD{\displaystyle D}.[7]By definition,Fop{\displaystyle F^{\mathrm {op} }}maps objects and morphisms in the identical way as doesF{\displaystyle F}. SinceCop{\displaystyle C^{\mathrm {op} }}does not coincide withC{\displaystyle C}as a category, and similarly forD{\displaystyle D},Fop{\displaystyle F^{\mathrm {op} }}is distinguished fromF{\displaystyle F}. For example, when composingF:C0→C1{\displaystyle F\colon C_{0}\to C_{1}}withG:C1op→C2{\displaystyle G\colon C_{1}^{\mathrm {op} }\to C_{2}}, one should use eitherG∘Fop{\displaystyle G\circ F^{\mathrm {op} }}orGop∘F{\displaystyle G^{\mathrm {op} }\circ F}. Note that, following the property ofopposite category,(Fop)op=F{\displaystyle \left(F^{\mathrm {op} }\right)^{\mathrm {op} }=F}. Abifunctor(also known as abinary functor) is a functor whose domain is aproduct category. For example, theHom functoris of the typeCop×C→Set. It can be seen as a functor intwoarguments; it is contravariant in one argument, covariant in the other. Amultifunctoris a generalization of the functor concept tonvariables. So, for example, a bifunctor is a multifunctor withn= 2. Two important consequences of the functoraxiomsare: One can compose functors, i.e. ifFis a functor fromAtoBandGis a functor fromBtoCthen one can form the composite functorG∘FfromAtoC. Composition of functors is associative where defined. Identity of composition of functors is the identity functor. This shows that functors can be considered as morphisms in categories of categories, for example in thecategory of small categories. A small category with a single object is the same thing as amonoid: the morphisms of a one-object category can be thought of as elements of the monoid, and composition in the category is thought of as the monoid operation. Functors between one-object categories correspond to monoidhomomorphisms. So in a sense, functors between arbitrary categories are a kind of generalization of monoid homomorphisms to categories with more than one object. LetCandDbe categories. The collection of all functors fromCtoDforms the objects of a category: thefunctor category. Morphisms in this category arenatural transformationsbetween functors. Functors are often defined byuniversal properties; examples are thetensor product, thedirect sumanddirect productof groups or vector spaces, construction of free groups and modules,directandinverselimits. The concepts oflimit and colimitgeneralize several of the above. Universal constructions often give rise to pairs ofadjoint functors. Functors sometimes appear infunctional programming. For instance, the programming languageHaskellhas aclassFunctorwherefmapis apolytypic functionused to mapfunctions(morphismsonHask, the category of Haskell types)[10]between existing types to functions between some new types.[11]
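For illustration, a sketch of that Haskell class with an instance for a simple tree type (the Tree type is an assumption; the Functor class itself ships with the Prelude, so only the instance is defined here). fmap must preserve identity morphisms and composition, mirroring the functor axioms above:

    -- The class, as in the Prelude:
    --   class Functor f where
    --     fmap :: (a -> b) -> f a -> f b

    data Tree a = Leaf a | Node (Tree a) (Tree a)
      deriving Show

    instance Functor Tree where
      fmap g (Leaf x)   = Leaf (g x)                 -- apply g at the leaves
      fmap g (Node l r) = Node (fmap g l) (fmap g r) -- recurse on both subtrees

    main :: IO ()
    main = print (fmap (* 2) (Node (Leaf 1) (Leaf 2)))
    -- prints: Node (Leaf 2) (Leaf 4)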
https://en.wikipedia.org/wiki/Functor
Infunctional programming, anapplicative functor, or an applicative for short, is an intermediate structure betweenfunctorsandmonads. Incategory theorythey are calledclosed monoidal functors. Applicative functors allow for functorial computations to be sequenced (unlike plain functors), but don't allow using results from prior computations in the definition of subsequent ones (unlike monads). Applicative functors are the programming equivalent oflax monoidal functorswithtensorial strengthin category theory. Applicative functors were introduced in 2008 by Conor McBride and Ross Paterson in their paperApplicative programming with effects.[1] Applicative functors first appeared as a library feature inHaskell, but have since spread to other languages such asIdris,Agda,OCaml,Scala, andF#. Glasgow Haskell, Idris, and F# offer language features designed to ease programming with applicative functors. In Haskell, applicative functors are implemented in theApplicativetype class. While in languages like Haskell monads are applicative functors, this is not always the case in general settings of category theory; examples of monads which arenotstrong can be found onMath Overflow. In Haskell, an applicative is aparameterized typethat can be thought of as a container for data of the parameter type with two additional methods:pureand<*>. Thepuremethod for an applicative of parameterized typefhas typepure :: a -> f aand can be thought of as bringing values into the applicative. The<*>method for an applicative of typefhas type(<*>) :: f (a -> b) -> f a -> f band can be thought of as the equivalent of function application inside the applicative.[2] Alternatively, instead of providing<*>, one may provide a function calledliftA2. These two functions may be defined in terms of each other; therefore only one is needed for a minimally complete definition.[3] Applicatives are also required to satisfy four equational laws:[3] Every applicative is a functor. To be explicit, given the methodspureand<*>,fmapcan be implemented as[3] The commonly-used notationg<$>xis equivalent topureg<*>x. In Haskell, theMaybe typecan be made an instance of the type classApplicativeusing the following definition:[2] As stated in the Definition section,pureturns anainto aMaybea, and<*>applies a Maybe function to a Maybe value. Using the Maybe applicative for typeaallows one to operate on values of typeawith the error being handled automatically by the applicative machinery. For example, to addm::MaybeIntandn::MaybeInt, one needs only write For the non-error case, addingm=Justiandn=JustjgivesJust(i+j). If either ofmornisNothing, then the result will beNothingalso. This example also demonstrates how applicatives allow a sort of generalized function application.
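A sketch of the pieces described above. The Maybe instance already ships with the Prelude, so it is repeated here only in a comment; the add function shows the generalized application, with any Nothing on either side propagating to the result:

    -- The Prelude instance, for reference:
    --   instance Applicative Maybe where
    --     pure              = Just
    --     Just g <*> Just x = Just (g x)
    --     _      <*> _      = Nothing

    add :: Maybe Int -> Maybe Int -> Maybe Int
    add m n = pure (+) <*> m <*> n     -- equivalently: (+) <$> m <*> n

    main :: IO ()
    main = do
      print (add (Just 2) (Just 3))    -- Just 5
      print (add (Just 2) Nothing)     -- Nothing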
https://en.wikipedia.org/wiki/Applicative_functor
In manycomputer programminglanguages, ado while loopis acontrol flowstatementthat executes a block of code and then either repeats the block or exits the loop depending on a givenbooleancondition. Thedo whileconstruct consists of a process symbol and a condition. First the code within the block is executed. Then the condition is evaluated. If the condition istruethe code within the block is executed again. This repeats until the condition becomesfalse. Do-while loops check the condition after the block of code is executed; this control structure is therefore known as apost-test loopor exit-condition loop. Awhile loop, by contrast, tests the condition before the code within the block is executed. In a do-while loop the code is always executed first and the test condition is evaluated afterwards; this process repeats as long as the expression evaluates to true, and the loop terminates once it is false. A while loop makes the truth of its condition a precondition for the body's execution, whereas a do-while loop keeps executing the body until the condition is no longer true. It is possible and sometimes desirable for the condition to always evaluate to be true. This creates aninfinite loop. When an infinite loop is created intentionally there is usually another control structure that allows termination of the loop. For example, abreak statementwould allow termination of an infinite loop. Some languages may use a different naming convention for this type of loop. For example, thePascalandLualanguages have a "repeat until" loop, which continues to rununtilthe control expression is true and then terminates. In contrast, a "while" loop runswhilethe control expression is true and terminates once the expression becomes false. is equivalent to In this manner, the do ... while loop saves the initial "loop priming" withdo_work();on the line before thewhileloop. As long as thecontinuestatement is not used, the above is technically equivalent to the following (though these examples do not reflect typical modern style): or These example programs calculate thefactorialof 5 using their respective languages' syntax for a do-while loop. Early BASICs (such asGW-BASIC) used the syntax WHILE/WEND. Modern BASICs such asPowerBASICprovide both WHILE/WEND and DO/LOOP structures, with syntax such as DO WHILE/LOOP, DO UNTIL/LOOP, DO/LOOP WHILE, DO/LOOP UNTIL, and DO/LOOP (without outer testing, but with a conditional EXIT LOOP somewhere inside the loop). Typical BASIC source code: Do-while(0) statements are also commonly used in C macros as a way to wrap multiple statements into a regular (as opposed to compound) statement. This makes a semicolon necessary after the macro invocation, providing a more function-like appearance for simple parsers and programmers as well as avoiding the scoping problem withif. It is recommended inCERT C Coding Standardrule PRE10-C.[1] With legacyFortran 77there is no DO-WHILE construct but the same effect can be achieved with GOTO: Fortran 90and later supports a DO WHILE construct: Pascaluses repeat/until syntax instead of do while. ThePL/IDO statement subsumes the functions of the post-test loop (do until), the pre-test loop (do while), and thefor loop. All functions can be included in a single statement. The example shows only the "do until" syntax. Python does not have a DO-WHILE loop, but its effect can be achieved by an infinite loop with a breaking condition at the end.
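A sketch of that idiom, computing the factorial of 5 as in the earlier examples: the body runs once before the condition is ever tested, and the break at the end plays the role of the do-while exit condition:

    counter = 5
    factorial = 1
    while True:
        factorial *= counter
        counter -= 1
        if counter == 0:   # post-test: evaluated only after the body has run
            break
    print(factorial)       # 120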
In Racket, as in otherSchemeimplementations, a "named-let" is a popular way to implement loops: Compare this with the first example of thewhile loopexample for Racket. Be aware that a named let can also take arguments. Racket and Scheme also provide a proper do loop.
https://en.wikipedia.org/wiki/Do_while_loop
In computer science, a for-loop or for loop is a control flow statement for specifying iteration. Specifically, a for-loop functions by running a section of code repeatedly until a certain condition has been satisfied. For-loops have two parts: a header and a body. The header defines the iteration and the body is the code executed once per iteration. The header often declares an explicit loop counter or loop variable, which allows the body to know which iteration is being executed. For-loops are typically used when the number of iterations is known before entering the loop, and can be thought of as shorthands for while-loops which increment and test a loop variable. Various keywords are used to indicate the usage of a for loop: descendants of ALGOL use "for", while descendants of Fortran use "do". There are other possibilities; for example, COBOL uses PERFORM VARYING. The name for-loop comes from the word for, which is used as the reserved word (or keyword) in many programming languages to introduce a for-loop. The term in English dates to ALGOL 58 and was popularized in ALGOL 60. It is the direct translation of the earlier German für and was used in Superplan (1949–1951) by Heinz Rutishauser, who was involved in defining ALGOL 58 and ALGOL 60.[1] The loop body is executed "for" the given values of the loop variable. This is more explicit in ALGOL versions of the for statement, where a list of possible values and increments can be specified. In Fortran and PL/I, the keyword DO is used for the same thing and it is named a do-loop; this is different from a do while loop. A for-loop statement is available in most imperative programming languages. Even ignoring minor differences in syntax, there are many differences in how these statements work and the level of expressiveness they support. Generally, for-loops fall into one of four categories: The for-loop of languages like ALGOL, Simula, BASIC, Pascal, Modula, Oberon, Ada, MATLAB, OCaml, F#, and so on, requires a control variable with start- and end-values, which looks something like this: Depending on the language, an explicit assignment sign may be used in place of the equal sign (and some languages require the word int even in the numerical case). An optional step-value (an increment or decrement ≠ 1) may also be included, although the exact syntaxes used for this differ a bit more between the languages. Some languages require a separate declaration of the control variable; some do not. Another form was popularized by the C language. It requires three parts: the initialization (loop variant), the condition, and the advancement to the next iteration. All three parts are optional. This type of "semicolon loop" came from the B programming language and was originally invented by Stephen Johnson.[2] In the initialization part, any variables needed are declared (and usually assigned values). If multiple variables are declared, they should all be of the same type. The condition part is evaluated before each pass through the body and exits the loop if false; in particular, if it is false on entry, the body is never executed. If the condition is true, the lines of code inside the loop are executed. The advancement to the next iteration part is performed exactly once every time the loop body completes, after which the condition is evaluated again and the loop repeated if it is still true. Here is an example of the C-style traditional for-loop in Java. These loops are also sometimes named numeric for-loops when contrasted with foreach loops (see below).
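To make the three parts concrete, here is a sketch of how a C-style loop header behaves, written in Python as the while-loop it abbreviates (the header `for (int i = 0; i < 5; i++)` is an assumed example, since the article's Java listing is not reproduced here):

```python
# The three parts of a C-style loop header such as
# "for (int i = 0; i < 5; i++)", spelled out as a while-loop:
i = 0            # 1. initialization, performed once
while i < 5:     # 2. condition, tested before every iteration
    print(i)     #    loop body
    i += 1       # 3. advancement, performed after each iteration
```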
This type of for-loop is a generalization of the numeric range type of for-loop, as it allows for the enumeration of sets of items other than number sequences. It is usually characterized by the use of an implicit or explicititerator, in which the loop variable takes on each of the values in a sequence or other data collection. A representative example inPythonis: Wheresome_iterable_objectis either a data collection that supports implicit iteration (like a list of employee's names), or may be an iterator itself. Some languages have this in addition to another for-loop syntax; notably, PHP has this type of loop under the namefor each, as well as a three-expression for-loop (see below) under the namefor. Some languages offer a for-loop that acts as if processing all iterationsin parallel, such as thefor allkeyword inFortran 95which has the interpretation thatallright-hand-sideexpressions are evaluated beforeanyassignments are made, as distinct from the explicit iteration form. For example, in theforstatement in the following pseudocode fragment, when calculating the new value forA(i), except for the first (withi = 2) the reference toA(i - 1)will obtain the new value that had been placed there in the previous step. In thefor allversion, however, each calculation refers only to the original, unalteredA. The difference may be significant. Some languages (such as PL/I, Fortran 95) also offer array assignment statements, that enable many for-loops to be omitted. Thus pseudocode such asA:= 0;would set all elements of array A to zero, no matter its size or dimensionality. The example loop could be rendered as But whether that would be rendered in the style of the for-loop or the for-all-loop or something else may not be clearly described in the compiler manual. Introduced withALGOL 68and followed by PL/I, this allows the iteration of a loop to be compounded with a test, as in That is, a value is assigned to the loop variableiand only if thewhile expressionistruewill the loop body be executed. If the result werefalsethe for-loop's execution stops short. Granted that the loop variable's valueisdefined after the termination of the loop, then the above statement will find the first non-positive element in arrayA(and if no such, its value will beN + 1), or, with suitable variations, the first non-blank character in a string, and so on. Incomputer programming, aloop counteris a control variable that controls the iterations of a loop (a computerprogramming languageconstruct). It is so named because most uses of this construct result in the variable taking on a range of integer values in some orderly sequences (for example., starting at 0 and ending at 10 in increments of 1) Loop counters change with each iteration of a loop, providing a unique value for each iteration. The loop counter is used to decide when the loop should terminate and for the program flow to continue to the nextinstructionafter the loop. A commonidentifier naming conventionis for the loop counter to use the variable namesi,j, andk(and so on if needed), whereiwould be the most outer loop,jthe next inner loop, etc. The reverse order is also used by some programmers. This style is generally agreed to have originated from the early programming of Fortran[citation needed], where these variable names beginning with these letters were implicitly declared as having an integer type, and so were obvious choices for loop counters that were only temporarily required. 
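Returning to the foreach form described above: the "representative example in Python" lost in extraction would look something like this sketch (the collection contents are invented):

```python
# Foreach-style iteration: the loop variable takes on each value of the
# collection in turn; no counter is managed by hand.
employee_names = ["Alice", "Bob", "Carol"]  # any iterable object works here
for name in employee_names:
    print(name)
```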
The practice dates back further tomathematical notationwhereindicesforsumsandmultiplicationsare ofteni,j, etc. A variant convention is the use of duplicated letters for the index,ii,jj, andkk, as this allows easier searching and search-replacing than using a single letter.[3] An example of C code involving nested for loops, where the loop counter variables areiandj: Loops in C can also be used to print the reverse of a word. As: Here, if the input isapple, the output will beelppa. This C-style for-loop is commonly the source of aninfinite loopsince the fundamental steps of iteration are completely in the control of the programmer. When infinite loops are intended, this type of for-loop can be used (with empty expressions), such as: This style is used instead of infinitewhile (1)loops to avoid a type conversion warning in some C/C++ compilers.[4]Some programmers prefer the more succinctfor (;;)form over the semantically equivalent but more verbosewhile (true)form. Some languages may also provide other supporting statements, which when present can alter how the for-loop iteration proceeds. Common among these are thebreakandcontinuestatements found in C and its derivatives. The break statement causes the innermost loop to be terminated immediately when executed. The continue statement will move at once to the next iteration without further progress through the loop body for the current iteration. A for statement also terminates when a break, goto, or return statement within the statement body is executed.[Wells] Other languages may have similar statements or otherwise provide means to alter the for-loop progress; for example in Fortran 90: Some languages offer further facilities such as naming the various loop constructs so that with multiple nested loops there is no doubt as to which loop is involved. Fortran 90, for example: Thus, when "trouble" is detected in the inner loop, the CYCLE X1 (not X2) means that the skip will be to the next iteration for I,notJ. The compiler will also be checking that each END DO has the appropriate label for its position: this is not just a documentation aid. The programmer must still code the problem correctly, but some possible blunders will be blocked. Different languages specify different rules for what value the loop variable will hold on termination of its loop, and indeed some hold that it "becomes undefined". This permits acompilerto generate code that leaves any value in the loop variable, or perhaps even leaves it unchanged because the loop value was held in a register and never stored in memory. Actual behavior may even vary according to the compiler's optimization settings, as with the Honeywell Fortran66 compiler. In some languages (notCorC++) the loop variable isimmutablewithin the scope of the loop body, with any attempt to modify its value being regarded as a semantic error. Such modifications are sometimes a consequence of a programmer error, which can be very difficult to identify once made. However, only overt changes are likely to be detected by the compiler. Situations, where the address of the loop variable is passed as an argument to asubroutine, make it very difficult to check because the routine's behavior is in general unknowable to the compiler unless the language supports procedure signatures and argument intents. Some examples in the style of pre-Fortran-90: A common approach is to calculate the iteration count at the start of a loop (with careful attention to overflow as infor i:= 0: 65535 do ... 
;in sixteen-bit integer arithmetic) and with each iteration decrement this count while also adjusting the value ofI: double counting results. However, adjustments to the value ofIwithin the loop will not change the number of iterations executed. Still, another possibility is that the code generated may employ an auxiliary variable as the loop variable, possibly held in a machine register, whose value may or may not be copied toIon each iteration. Again, modifications ofIwould not affect the control of the loop, but now a disjunction is possible: within the loop, references to the value ofImight be to the (possibly altered) current value ofIor to the auxiliary variable (held safe from improper modification) and confusing results are guaranteed. For instance, within the loop a reference to elementIof an array would likely employ the auxiliary variable (especially if it were held in a machine register), but ifIis a parameter to some routine (for instance, aprint-statement to reveal its value), it would likely be a reference to the proper variableIinstead. It is best to avoid such possibilities. Just as the index variable might be modified within a for-loop, so also may its bounds and direction. But to uncertain effect. A compiler may prevent such attempts, they may have no effect, or they might even work properly - though many would declare that to do so would be wrong. Consider a statement such as If the approach to compiling such a loop was to be the evaluation offirst,lastandstepand the calculation of an iteration count via something like(last - first)/steponce only at the start, then if those items were simple variables and their values were somehow adjusted during the iterations, this would have no effect on the iteration count even if the element selected for division byA(last)changed. ALGOL 60, PL/I, and ALGOL 68, allow loops in which the loop variable is iterated over a list of ranges of values instead of a single range. The following PL/I example will execute the loop with six values of i: 1, 7, 12, 13, 14, 15: A for-loop is generally equivalent to a while-loop: Is equivalent to: As demonstrated by the output of the variables. Given an action that must be repeated, for instance, five times, different languages' for-loops will be written differently. The syntax for a three-expression for-loop is nearly identical in all languages that have it, after accounting for different styles of block termination and so on. Fortran's equivalent of theforloop is theDOloop, using the keyword do instead of for, The syntax of Fortran'sDOloop is: The following two examples behave equivalently to the three argument for-loop in other languages, initializing the counter variable to 1, incrementing by 1 each iteration of the loop, and stopping at five (inclusive). As of Fortran 90, block structuredEND DOwas added to the language. With this, the end of loop label became optional: The step part may be omitted if the step is one. Example: In Fortran 90, theGO TOmay be avoided by using anEXITstatement. Alternatively, aDO - WHILEconstruct could be used: ALGOL 58 introduced theforstatement, using the form as Superplan: For example to print 0 to 10 incremented by 1: COBOLwas formalized in late 1959 and has had many elaborations. It uses the PERFORM verb which has many options. Originally all loops had to be out-of-line with the iterated code occupying a separate paragraph. Ignoring the need for declaring and initializing variables, the COBOL equivalent of afor-loop would be. 
In the 1980s, the addition of in-line loops andstructured programmingstatements such as END-PERFORM resulted in afor-loop with a more familiar structure. If the PERFORM verb has the optional clause TEST AFTER, the resulting loop is slightly different: the loop body is executed at least once, before any test. InBASIC, a loop is sometimes named afor-next loop. The end-loop marker specifies the name of the index variable, which must correspond to the name of the index variable at the start of the for-loop. Some languages (PL/I, Fortran 95, and later) allow a statement label at the start of a for-loop that can be matched by the compiler against the same text on the corresponding end-loop statement. Fortran also allows theEXITandCYCLEstatements to name this text; in a nest of loops, this makes clear which loop is intended. However, in these languages, the labels must be unique, so successive loops involving the same index variable cannot use the same text nor can a label be the same as the name of a variable, such as the index variable for the loop. TheLEAVEstatement may be used to exit the loop. Loops can belabeled, andleavemay leave a specific labeled loop in a group of nested loops. Some PL/I dialects include theITERATEstatement to terminate the current loop iteration and begin the next. ALGOL 68 has what was consideredtheuniversal loop, the full syntax is: Further, the single iteration range could be replaced by a list of such ranges. There are several unusual aspects of the construct Subsequentextensionsto the standard ALGOL 68 allowed thetosyntactic element to be replaced withuptoanddowntoto achieve a small optimization. The same compilers also incorporated: Decrementing (counting backwards) is usingdowntokeyword instead ofto, as in: The numeric range for-loop varies somewhat more. Thestatementis often a block statement; an example of this would be: The ISO/IEC 9899:1999 publication (commonly known asC99) also allows initial declarations inforloops. All three sections in the for loop are optional, with an empty condition equivalent to true. Contrary to other languages, inSmalltalka for-loop is not alanguage constructbut is defined in the class Number as a method with two parameters, the end value and aclosure, using self as start value. Theexitstatement may be used to exit the loop. Loops can be labeled, andexitmay leave a specifically labeled loop in a group of nested loops: Maple has two forms of for-loop, one for iterating over a range of values, and the other for iterating over the contents of a container. The value range form is as follows: All parts exceptdoandodare optional. TheforIpart, if present, must come first. The remaining parts (fromf,byb,tot,whilew) can appear in any order. Iterating over a container is done using this form of loop: Theincclause specifies the container, which may be a list, set, sum, product, unevaluated function, array, or object implementing an iterator. A for-loop may be terminated byod,end, orend do. In Maxima CAS, one can use also integer values: The for-loop, written as[initial] [increment] [limit] { ... } forinitializes an internal variable, and executes the body as long as the internal variable is not more than the limit (or not less, if the increment is negative) and, at the end of each iteration, increments the internal variable. Before each iteration, the value of the internal variable is pushed onto the stack.[5] There is also a simple repeat loop. The repeat-loop, written asX { ... 
} repeat, repeats the body exactly X times.[6] After the loop,nwould be 5 in this example. Asiis used for theImaginary unit, its use as a loop variable is discouraged. "There's more than one way to do it" is a Perl programming motto. The construct corresponding to most other languages' for-loop is namedDoin Mathematica. Mathematica also has a For construct that mimics the for-loop of C-like languages. An empty loop (i.e., one with no commands betweendoanddone) is a syntax error. If the above loops contained only comments, execution would result in the message "syntax error near unexpected token 'done'". In Haskell98, the functionmapM_maps amonadicfunction over a list, as The functionmapMcollects each iteration result in a list: Haskell2010 adds functionsforM_andforM, which are equivalent tomapM_andmapM, but with their arguments flipped: When compiled with optimization, none of the expressions above will create lists. But, to save the space of the [1..5] list if optimization is turned off, aforLoop_function could be defined as and used as In the original Oberon language, the for-loop was omitted in favor of the more general Oberon loop construct. The for-loop was reintroduced in Oberon-2. Python does not contain the classical for loop, rather aforeachloop is used to iterate over the output of the built-inrange()function which returns an iterable sequence of integers. Usingrange(6)would run the loop from 0 to 5. When the loop variable is not needed, it is common practice to use an underscore (_) as a placeholder. This convention signals to other developers that the variable will not be used inside the loop. For example: This will print “Hello” five times without using the loop variable. It can also iterate through a list of items, similar to what can be done with arrays in other languages: Aexit repeatmay also be used to exit a loop at any time. Unlike other languages, AppleScript currently has no command to continue to the next iteration of a loop. So, this code will print: For-loops can also loop through a table using to iterate numerically through arrays and to iterate randomly through dictionaries. Generic for-loop making use of closures: Simple index loop: Using an array: Using a list of string values: The abovelistexample is only available in the dialect of CFML used byLuceeandRailo. Simple index loop: Using an array: Using a "list" of string values: For the extended for-loop, seeForeach loop § Java. JavaScript supports C-style "three-expression" loops. Thebreakandcontinuestatements are supported inside loops. Alternatively, it is possible to iterate over all keys of an array. This prints out a triangle of * Rubyhas several possible syntaxes, including the above samples. See expression syntax.[7] Nimhas aforeach-type loop and various operations for creating iterators.[8]
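The underscore-placeholder idiom mentioned above for Python can be reconstructed as a runnable sketch:

```python
# range(5) yields 0..4; "_" signals that the loop variable is unused.
for _ in range(5):
    print("Hello")   # printed five times
```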
https://en.wikipedia.org/wiki/For_loop
In most computer programming languages, a while loop is a control flow statement that allows code to be executed repeatedly based on a given Boolean condition. The while loop can be thought of as a repeating if statement. The while construct consists of a block of code and a condition/expression.[1] The condition/expression is evaluated, and if it is true,[1] the code within the block is executed. This repeats until the condition/expression becomes false. Because the while loop checks the condition/expression before the block is executed, the control structure is often also known as a pre-test loop. Compare this with the do while loop, which tests the condition/expression after the loop has executed. For example, in the languages C, Java, C#,[2] Objective-C, and C++ (which use the same syntax in this case), the code fragment first checks whether x is less than 5, which it is, so the {loop body} is entered, where the printf function is run and x is incremented by 1. After completing all the statements in the loop body, the condition (x < 5) is checked again, and the loop is executed again, this process repeating until the variable x has the value 5. It is possible, and in some cases desirable, for the condition to always evaluate to true, creating an infinite loop. When such a loop is created intentionally, there is usually another control structure (such as a break statement) that controls termination of the loop. For example: These while loops will calculate the factorial of the number 5: or simply Go has no while statement; a for statement takes on that role when some of its elements are omitted. The code for the loop is the same for Java, C# and D: Non-terminating while loop: Pascal has two forms of the while loop, while and repeat. While repeats one statement (unless enclosed in a begin-end block) as long as the condition is true. The repeat statement repetitively executes a block of one or more statements through an until statement, and continues repeating while the condition is false, i.e. until the condition becomes true. The main difference between the two is that the while loop may execute zero times if the condition is initially false, whereas the repeat-until loop always executes at least once. While loops are frequently used for reading data line by line (as defined by the $/ line separator) from open filehandles: Non-terminating while loop: In Racket, as in other Scheme implementations, a named-let is a popular way to implement loops: Using a macro system, implementing a while loop is a trivial exercise (commonly used to introduce macros): However, an imperative programming style is often discouraged in Scheme and Racket. Contrary to other languages, in Smalltalk a while loop is not a language construct but defined in the class BlockClosure as a method with one parameter, the body as a closure, using self as the condition. Smalltalk also has a corresponding whileFalse: method. While[3] is a simple programming language constructed from assignments, sequential composition, conditionals, and while statements, used in the theoretical analysis of imperative programming language semantics.[4][5]
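A sketch of the factorial computation described above, as a pre-test while loop in Python (the article's C-family listings are not reproduced here):

```python
# Pre-test loop: the condition is checked before each pass, so the body
# would run zero times if counter started at 0.
counter = 5
factorial = 1
while counter > 0:
    factorial *= counter
    counter -= 1
print(factorial)   # 120
```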
https://en.wikipedia.org/wiki/While_loop
Incomputer science, aprogramming languageis said to havefirst-class functionsif it treatsfunctionsasfirst-class citizens. This means the language supports passing functions as arguments to other functions, returning them as the values from other functions, and assigning them to variables or storing them in data structures.[1]Some programming language theorists require support foranonymous functions(function literals) as well.[2]In languages with first-class functions, thenamesof functions do not have any special status; they are treated like ordinaryvariableswith afunction type.[3]The term was coined byChristopher Stracheyin the context of "functions as first-class citizens" in the mid-1960s.[4] First-class functions are a necessity for thefunctional programmingstyle, in which the use ofhigher-order functionsis a standard practice. A simple example of a higher-ordered function is themapfunction, which takes, as its arguments, a function and a list, and returns the list formed by applying the function to each member of the list. For a language to supportmap, it must support passing a function as an argument. There are certain implementation difficulties in passing functions as arguments or returning them as results, especially in the presence ofnon-local variablesintroduced innestedandanonymous functions. Historically, these were termed thefunarg problems, the name coming fromfunction argument.[5]In early imperative languages these problems were avoided by either not supporting functions as result types (e.g.ALGOL 60,Pascal) or omitting nested functions and thus non-local variables (e.g.C). The early functional languageLisptook the approach ofdynamic scoping, where non-local variables refer to the closest definition of that variable at the point where the function is executed, instead of where it was defined. Proper support forlexically scopedfirst-class functions was introduced inSchemeand requires handling references to functions asclosuresinstead of barefunction pointers,[4]which in turn makesgarbage collectiona necessity.[citation needed] In this section, we compare how particular programming idioms are handled in a functional language with first-class functions (Haskell) compared to an imperative language where functions are second-class citizens (C). In languages where functions are first-class citizens, functions can be passed as arguments to other functions in the same way as other values (a function taking another function as argument is called a higher-order function). In the languageHaskell: Languages where functions are not first-class often still allow one to write higher-order functions through the use of features such asfunction pointersordelegates. In the languageC: There are a number of differences between the two approaches that arenotdirectly related to the support of first-class functions. The Haskell sample operates onlists, while the C sample operates onarrays. Both are the most natural compound data structures in the respective languages and making the C sample operate on linked lists would have made it unnecessarily complex. This also accounts for the fact that the C function needs an additional parameter (giving the size of the array.) The C function updates the arrayin-place, returning no value, whereas in Haskell data structures arepersistent(a new list is returned while the old is left intact.) The Haskell sample usesrecursionto traverse the list, while the C sample usesiteration. 
Again, this is the most natural way to express this function in both languages, but the Haskell sample could easily have been expressed in terms of afoldand the C sample in terms of recursion. Finally, the Haskell function has apolymorphictype, as this is not supported by C we have fixed all type variables to the type constantint. In languages supporting anonymous functions, we can pass such a function as an argument to a higher-order function: In a language which does not support anonymous functions, we have to bind it to a name instead: Once we have anonymous or nested functions, it becomes natural for them to refer to variables outside of their body (callednon-local variables): If functions are represented with bare function pointers, we can not know anymore how the value that is outside of the function's body should be passed to it, and because of that a closure needs to be built manually. Therefore we can not speak of "first-class" functions here. Also note that themapis now specialized to functions referring to twoints outside of their environment. This can be set up more generally, but requires moreboilerplate code. Iffwould have been anested functionwe would still have run into the same problem and this is the reason they are not supported in C.[6] When returning a function, we are in fact returning its closure. In the C example any local variables captured by the closure will go out of scope once we return from the function that builds the closure. Forcing the closure at a later point will result in undefined behaviour, possibly corrupting the stack. This is known as theupwards funarg problem. Assigningfunctions tovariablesand storing them inside (global) datastructures potentially suffers from the same difficulties as returning functions. As one can test most literals and values for equality, it is natural to ask whether a programming language can support testing functions for equality. On further inspection, this question appears more difficult and one has to distinguish between several types of function equality:[7] Intype theory, the type of functions accepting values of typeAand returning values of typeBmay be written asA→BorBA. In theCurry–Howard correspondence,function typesare related tological implication; lambda abstraction corresponds to discharging hypothetical assumptions and function application corresponds to themodus ponensinference rule. Besides the usual case of programming functions, type theory also uses first-class functions to modelassociative arraysand similardata structures. Incategory-theoreticalaccounts of programming, the availability of first-class functions corresponds to theclosed categoryassumption. For instance, thesimply typed lambda calculuscorresponds to the internal language ofCartesian closed categories. Functional programming languages, such asErlang,Scheme,ML,Haskell,F#, andScala, all have first-class functions. WhenLisp, one of the earliest functional languages, was designed, not all aspects of first-class functions were then properly understood, resulting in functions being dynamically scoped. The laterSchemeandCommon Lispdialects do have lexically scoped first-class functions. Many scripting languages, includingPerl,Python,PHP,Lua,Tcl/Tk,JavaScriptandIo, have first-class functions. For imperative languages, a distinction has to be made between Algol and its descendants such as Pascal, the traditional C family, and the modern garbage-collected variants. 
The Algol family has allowed nested functions and higher-order taking function as arguments, but not higher-order functions that return functions as results (except Algol 68, which allows this). The reason for this was that it was not known how to deal with non-local variables if a nested-function was returned as a result (and Algol 68 produces runtime errors in such cases). The C family allowed both passing functions as arguments and returning them as results, but avoided any problems by not supporting nested functions. (The gcc compiler allows them as an extension.) As the usefulness of returning functions primarily lies in the ability to return nested functions that have captured non-local variables, instead of top-level functions, these languages are generally not considered to have first-class functions. Modern imperative languages often support garbage-collection making the implementation of first-class functions feasible. First-class functions have often only been supported in later revisions of the language, including C# 2.0 and Apple's Blocks extension to C, C++, and Objective-C. C++11 has added support for anonymous functions and closures to the language, but because of the non-garbage collected nature of the language, special care has to be taken for non-local variables in functions to be returned as results (see below). Explicit partial application possible withstd::bind.
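As a minimal illustration of those last two points, returning a closure over a non-local variable and explicit partial application, here is a Python sketch (functools.partial plays the role that std::bind plays in C++):

```python
from functools import partial

# Returning a nested function that captures the non-local variable "n":
# the "upwards funarg" case that bare function pointers cannot express.
def make_adder(n):
    def add(x):
        return x + n   # n lives on in the closure after make_adder returns
    return add

add_three = make_adder(3)
print(add_three(10))       # 13

# Explicit partial application: fix one argument, get a new function.
def power(base, exponent):
    return base ** exponent

square = partial(power, exponent=2)
print(square(7))           # 49
```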
https://en.wikipedia.org/wiki/First-class_function
In computer science,function-levelprogramming refers to one of the two contrastingprogramming paradigmsidentified byJohn Backusin his work on programs as mathematical objects, the other beingvalue-level programming. In his 1977Turing Awardlecture, Backus set forth what he considered to be the need to switch to a different philosophy in programming language design:[1] Programming languages appear to be in trouble. Each successive language incorporates, with a little cleaning up, all the features of its predecessors plus a few more. [...] Each new language claims new and fashionable features... but the plain fact is that few languages make programming sufficiently cheaper or more reliable to justify the cost of producing and learning to use them. He designedFPto be the firstprogramming languageto specifically support the function-level programming style. Afunction-levelprogram isvariable-free(cf.point-freeprogramming), sinceprogram variables, which are essential in value-level definitions, are not needed in function-level programs. In the function-level style of programming, a program is built directly from programs that are given at the outset, by combining them withprogram-forming operationsorfunctionals. Thus, in contrast with the value-level approach that applies the given programs to values to form asuccession of valuesculminating in the desired result value, the function-level approach applies program-forming operations to the given programs to form asuccession of programsculminating in the desired result program. As a result, the function-level approach to programming invites study of thespace of programs under program-forming operations, looking to derive useful algebraic properties of these program-forming operations. The function-level approach offers the possibility of making the set of programs amathematical spaceby emphasizing the algebraic properties of the program-forming operations over thespace of programs. Another potential advantage of the function-level view is the ability to use onlystrict functionsand thereby havebottom-up semantics, which are the simplest kind of all. Yet another is the existence of function-level definitions that are not thelifted(that is,liftedfrom a lower value-level to a higher function-level) image of any existing value-level one: these (often terse) function-level definitions represent a more powerful style of programming not available at the value-level. When Backus studied and publicized his function-level style of programming, his message was mostly misunderstood[2]as supporting the traditionalfunctional programmingstyle languages instead of his ownFPand its successorFL. Backus calls functional programmingapplicative programming;[clarification needed]his function-level programming is a particular, constrained type. A key distinction from functional languages is that Backus' language has the following hierarchy of types: ...and the only way to generate new functions is to use one of the functional forms, which are fixed: you cannot build your own functional form (at least not within FP; you can within FFP (Formal FP)). This restriction means that functions in FP are amodule(generated by the built-in functions) over the algebra of functional forms, and are thus algebraically tractable. For instance, the general question of equality of two functions is equivalent to thehalting problem, and is undecidable, but equality of two functions in FP is just equality in the algebra, and thus (Backus imagines) easier. 
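Backus's FP itself is not shown in this excerpt; purely as a loose illustration, the function-level idea of building new programs only by applying program-forming operations to existing programs, never naming the data, might be sketched in Python as:

```python
# Program-forming operations: they take programs and return programs.
def compose(f, g):
    return lambda x: f(g(x))       # run g, then f

def construct(f, g):
    return lambda x: (f(x), g(x))  # apply both programs to the same input

def divide(pair):
    return pair[0] / pair[1]

# "mean" is defined with no mention of the list it will eventually process.
mean = compose(divide, construct(sum, len))

print(mean([1, 2, 3, 4]))          # 2.5
```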
Even today, many users oflambda stylelanguages often misinterpret Backus' function-level approach as a restrictive variant of the lambda style, which is ade factovalue-level style. In fact, Backus would not have disagreed with the 'restrictive' accusation: he argued that it waspreciselydue to such restrictions that a well-formed mathematical space could arise, in a manner analogous to the waystructured programminglimits programming to arestrictedversion of all the control-flow possibilities available in plain, unrestrictedunstructured programs. The value-free style of FP is closely related to the equational logic of acartesian-closed category. The canonical function-level programming language isFP. Others includeFL, andJ.
https://en.wikipedia.org/wiki/Function-level_programming
Inmathematical logic,category theory, andcomputer science,kappa calculusis aformal systemfor definingfirst-orderfunctions. Unlikelambda calculus, kappa calculus has nohigher-order functions; its functions are notfirst class objects. Kappa-calculus can be regarded as "a reformulation of the first-order fragment of typed lambda calculus".[1] Because its functions are not first-class objects, evaluation of kappa calculusexpressionsdoes not requireclosures. The definition below has been adapted from the diagrams on pages 205 and 207 of Hasegawa.[1] Kappa calculus consists oftypesandexpressions,given by the grammar below: In other words, The:1→τ{\displaystyle :1{\to }\tau }and the subscripts ofid,!, andlift{\displaystyle \operatorname {lift} }are sometimes omitted when they can be unambiguously determined from the context. Juxtaposition is often used as an abbreviation for a combination oflift{\displaystyle \operatorname {lift} }and composition: The presentation here uses sequents (Γ⊢e:τ{\displaystyle \Gamma \vdash e:\tau }) rather than hypothetical judgments in order to ease comparison with the simply typed lambda calculus. This requires the additional Var rule, which does not appear in Hasegawa[1] In kappa calculus an expression has two types: the type of itssourceand the type of itstarget. The notatione:τ1→τ2{\displaystyle e:\tau _{1}{\to }\tau _{2}}is used to indicate that expression e has source typeτ1{\displaystyle {\tau _{1}}}and target typeτ2{\displaystyle {\tau _{2}}}. Expressions in kappa calculus are assigned types according to the following rules: In other words, Kappa calculus obeys the following equalities: The last two equalities are reduction rules for the calculus, rewriting from left to right. The type1can be regarded as theunit type. Because of this, any two functions whose argument type is the same and whose result type is1should be equal – since there is only a single value of type1both functions must return that value for every argument (Terminality). Expressions with type1→τ{\displaystyle 1{\to }\tau }can be regarded as "constants" or values of "ground type"; this is because1is the unit type, and so a function from this type is necessarily a constant function. Note that the kappa rule allows abstractions only when the variable being abstracted has the type1→τ{\displaystyle 1{\to }\tau }for someτ. This is the basic mechanism which ensures that all functions are first-order. Kappa calculus is intended to be the internal language ofcontextually completecategories. Expressions with multiple arguments have source types which are "right-imbalanced" binary trees. For example, a function f with three arguments of types A, B, and C and result type D will have type If we define left-associative juxtapositionfc{\displaystyle f\;c}as an abbreviation for(f∘lift⁡(c)){\displaystyle (f\circ \operatorname {lift} (c))}, then – assuming thata:1→A{\displaystyle a:1{\to }A},b:1→B{\displaystyle b:1{\to }B}, andc:1→C{\displaystyle c:1{\to }C}– we can apply this function: Since the expressionfabc{\displaystyle f\;a\;b\;c}has source type1, it is a "ground value" and may be passed as an argument to another function. Ifg:(D×E)→F{\displaystyle g:(D\times E){\to }F}, then Much like a curried function of typeA→(B→(C→D)){\displaystyle A{\to }(B{\to }(C{\to }D))}in lambda calculus, partial application is possible: However no higher types (i.e.(τ→τ)→τ{\displaystyle (\tau {\to }\tau ){\to }\tau }) are involved. 
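Written out, the typing of that application chain runs as follows. The typing of lift used below (lift(c) : τ → C×τ when c : 1 → C) is an assumption drawn from Hasegawa's definition, since the rule itself is not shown in this excerpt:

```latex
\begin{align*}
f &: A \times (B \times (C \times 1)) \to D, \qquad
   a : 1 \to A, \quad b : 1 \to B, \quad c : 1 \to C \\
f\,a = f \circ \operatorname{lift}(a) &: B \times (C \times 1) \to D \\
f\,a\,b &: C \times 1 \to D \\
f\,a\,b\,c &: 1 \to D \qquad \text{(a ground value)}
\end{align*}
```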
Note that because the source type of f a is not 1, the following expression cannot be well-typed under the assumptions mentioned so far: Because successive application is used for multiple arguments, it is not necessary to know the arity of a function in order to determine its typing; for example, if we know that c : 1→C, then the expression j c is well-typed as long as j has type (C×α)→β for some types α and β. This property is important when calculating the principal type of an expression, something which can be difficult when attempting to exclude higher-order functions from typed lambda calculi by restricting the grammar of types. Barendregt originally introduced[2] the term "functional completeness" in the context of combinatory algebra. Kappa calculus arose out of efforts by Lambek[3] to formulate an appropriate analogue of functional completeness for arbitrary categories (see Hermida and Jacobs,[4] section 1). Hasegawa later developed kappa calculus into a usable (though simple) programming language including arithmetic over natural numbers and primitive recursion.[1] Connections to arrows were later investigated[5] by Power, Thielecke, and others. It is possible to explore versions of kappa calculus with substructural types such as linear, affine, and ordered types. These extensions require eliminating or restricting the !τ expression. In such circumstances the × type operator is not a true cartesian product, and is generally written ⊗ to make this clear.
https://en.wikipedia.org/wiki/Kappa_calculus
A higher order message (HOM) in a computer programming language is a form of higher-order programming that allows messages to have other messages as arguments. The concept was introduced at MacHack 2003[1][2] by Marcel Weiher and presented in a more complete form in 2005 by Marcel Weiher and Stéphane Ducasse.[3] Because loops can be written without naming the collections looped over, higher order messages can be viewed as a form of point-free or tacit programming. In ordinary Smalltalk code, without using HOM, obtaining a collection of the employees that have a salary of 1000 would be achieved with the following code: However, using HOM, it can be expressed as follows: Here, select is a higher order message, and hasSalary: is understood to be called on the select message itself, rather than on its result. The Smalltalk language was not modified to implement this feature. Instead, select returns a message that reifies the select send, which then interprets the hasSalary: message. Another example is the use of future message sends in the Croquet Project:[4] In this example, the future: message causes the addRotationAroundY: message to be sent to the cube object after 1 second. The reference implementation in Objective-C leverages the fact that in Objective-C, objects that do not understand a message sent to them still get it delivered in a special hook method, called forward:. Higher order messaging was implemented in a number of languages that share this feature, including Ruby and Smalltalk.[5][6] ECMAScript Harmony's Proxies documentation specifically mentions higher order messages as an application for their Catchall Proxies.[7] The programming language J distinguishes between verbs and adverbs. Adverbs modify the functioning of verbs, much as higher order messages (the adverbs) modify the messages that follow (the verbs). In the Croquet example above, the addRotationAroundY: message is still sent and has its normal meaning, but its delivery is modified by the future: 1000 message: it will be sent sometime in the future.
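The Smalltalk and Objective-C implementations are not reproduced here, but the mechanism, a reified message send that interprets the next message itself, can be sketched in Python with a catch-all attribute hook (all class and method names below are hypothetical):

```python
# HOM-style "select": Select reifies the send; the next attribute lookup
# is treated as the message to test every element with.
class Select:
    def __init__(self, collection):
        self._collection = collection

    def __getattr__(self, message):          # catch-all, akin to forward:
        def apply(*args):
            return [each for each in self._collection
                    if getattr(each, message)(*args)]
        return apply

class Employee:
    def __init__(self, salary):
        self.salary = salary

    def has_salary(self, amount):
        return self.salary == amount

employees = [Employee(1000), Employee(2000), Employee(1000)]
# Reads like the Smalltalk "employees select hasSalary: 1000".
rich = Select(employees).has_salary(1000)
print(len(rich))   # 2
```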
https://en.wikipedia.org/wiki/Higher_order_message
{n ∣ ∃k ∈ Z, n = 2k} In mathematics and more specifically in set theory, set-builder notation is a notation for specifying a set by a property that characterizes its members.[1] Specifying sets by member properties is allowed by the axiom schema of specification. This is also known as set comprehension and set abstraction. Set-builder notation can be used to describe a set that is defined by a predicate, that is, a logical formula that evaluates to true for an element of the set, and false otherwise.[2] In this form, set-builder notation has three parts: a variable, a colon or vertical bar separator, and a predicate. Thus there is a variable on the left of the separator, and a rule on the right of it. These three parts are contained in curly brackets: or The vertical bar (or colon) is a separator that can be read as "such that", "for which", or "with the property that". The formula Φ(x) is said to be the rule or the predicate. All values of x for which the predicate holds (is true) belong to the set being defined. All values of x for which the predicate does not hold do not belong to the set. Thus {x ∣ Φ(x)} is the set of all values of x that satisfy the formula Φ.[3] It may be the empty set, if no value of x satisfies the formula. A domain E can appear on the left of the vertical bar:[4] or by adjoining it to the predicate: The ∈ symbol here denotes set membership, while the ∧ symbol denotes the logical "and" operator, known as logical conjunction. This notation represents the set of all values of x that belong to some given set E for which the predicate is true (see "Set existence axiom" below). If Φ(x) is a conjunction Φ1(x) ∧ Φ2(x), then {x ∈ E ∣ Φ(x)} is sometimes written {x ∈ E ∣ Φ1(x), Φ2(x)}, using a comma instead of the symbol ∧. In general, it is not a good idea to consider sets without defining a domain of discourse, as this would represent the subset of all possible things that may exist for which the predicate is true. This can easily lead to contradictions and paradoxes. For example, Russell's paradox shows that the expression {x | x ∉ x}, although seemingly well formed as a set builder expression, cannot define a set without producing a contradiction.[5] In cases where the set E is clear from context, it may not be explicitly specified. It is common in the literature for an author to state the domain ahead of time, and then not specify it in the set-builder notation. For example, an author may say something such as, "Unless otherwise stated, variables are to be taken to be natural numbers," though in less formal contexts where the domain can be assumed, a written mention is often unnecessary. The following examples illustrate particular sets defined by set-builder notation via predicates. In each case, the domain is specified on the left side of the vertical bar, while the rule is specified on the right side. An extension of set-builder notation replaces the single variable x with an expression. So instead of {x ∣ Φ(x)}, we may have {f(x) ∣ Φ(x)}, which should be read For example: When inverse functions can be explicitly stated, the expression on the left can be eliminated through simple substitution.
Consider the example set{2t+1∣t∈Z}{\displaystyle \{2t+1\mid t\in \mathbb {Z} \}}. Make the substitutionu=2t+1{\displaystyle u=2t+1}, which is to sayt=(u−1)/2{\displaystyle t=(u-1)/2}, then replacetin the set builder notation to find Two sets are equal if and only if they have the same elements. Sets defined by set builder notation are equal if and only if their set builder rules, including the domain specifiers, are equivalent. That is if and only if Therefore, in order to prove the equality of two sets defined by set builder notation, it suffices to prove the equivalence of their predicates, including the domain qualifiers. For example, because the two rule predicates are logically equivalent: This equivalence holds because, for any real numberx, we havex2=1{\displaystyle x^{2}=1}if and only ifxis a rational number with|x|=1{\displaystyle |x|=1}. In particular, both sets are equal to the set{−1,1}{\displaystyle \{-1,1\}}. In many formal set theories, such asZermelo–Fraenkel set theory, set builder notation is not part of the formal syntax of the theory. Instead, there is aset existence axiom scheme, which states that ifEis a set andΦ(x)is a formula in the language of set theory, then there is a setYwhose members are exactly the elements ofEthat satisfyΦ: The setYobtained from this axiom is exactly the set described in set builder notation as{x∈E∣Φ(x)}{\displaystyle \{x\in E\mid \Phi (x)\}}. A similar notation available in a number ofprogramming languages(notablyPythonandHaskell) is thelist comprehension, which combinesmapandfilteroperations over one or morelists. In Python, the set-builder's braces are replaced with square brackets, parentheses, or curly braces, giving list,generator, and set objects, respectively. Python uses an English-based syntax. Haskell replaces the set-builder's braces with square brackets and uses symbols, including the standard set-builder vertical bar. The same can be achieved inScalausing Sequence Comprehensions, where the "for" keyword returns a list of the yielded variables using the "yield" keyword.[6] Consider these set-builder notation examples in some programming languages: The set builder notation and list comprehension notation are both instances of a more general notation known asmonad comprehensions, which permits map/filter-like operations over anymonadwith azero element.
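The programming-language examples lost in extraction can be reconstructed as Python comprehensions; finite ranges stand in for Z here:

```python
# Set-builder notation as comprehensions. Braces build a set, brackets a
# list, parentheses a generator; the domains are finite stand-ins for Z.
evens = {n for n in range(-10, 11) if n % 2 == 0}   # {n | n = 2k}
odds  = [2 * t + 1 for t in range(5)]               # {2t+1 | t in 0..4}
lazy  = (x * x for x in range(10))                  # generator, evaluated on demand

print(sorted(evens))
print(odds)          # [1, 3, 5, 7, 9]
print(list(lazy))
```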
https://en.wikipedia.org/wiki/Set-builder_notation
TheSQLSELECTstatement returns aresult setof rows, from one or moretables.[1][2] A SELECT statement retrieves zero or more rows from one or moredatabase tablesor databaseviews. In most applications,SELECTis the most commonly useddata manipulation language(DML) command. As SQL is adeclarative programminglanguage,SELECTqueries specify a result set, but do not specify how to calculate it. The database translates the query into a "query plan" which may vary between executions, database versions and database software. This functionality is called the "query optimizer" as it is responsible for finding the best possible execution plan for the query, within applicable constraints. The SELECT statement has many optional clauses: SELECTis the most common operation in SQL, called "the query".SELECTretrieves data from one or moretables, or expressions. StandardSELECTstatements have no persistent effects on the database. Some non-standard implementations ofSELECTcan have persistent effects, such as theSELECT INTOsyntax provided in some databases.[4] Queries allow the user to describe desired data, leaving thedatabase management system (DBMS)to carry outplanning,optimizing, and performing the physical operations necessary to produce that result as it chooses. A query includes a list of columns to include in the final result, normally immediately following theSELECTkeyword. An asterisk ("*") can be used to specify that the query should return all columns of all the queried tables.SELECTis the most complex statement in SQL, with optional keywords and clauses that include: The following example of aSELECTquery returns a list of expensive books. The query retrieves all rows from theBooktable in which thepricecolumn contains a value greater than 100.00. The result is sorted in ascending order bytitle. The asterisk (*) in theselect listindicates that all columns of theBooktable should be included in the result set. The example below demonstrates a query of multiple tables, grouping, and aggregation, by returning a list of books and the number of authors associated with each book. Example output might resemble the following: Under the precondition thatisbnis the only common column name of the two tables and that a column namedtitleonly exists in theBooktable, one could re-write the query above in the following form: However, many[quantify]vendors either do not support this approach, or require certain column-naming conventions for natural joins to work effectively. SQL includes operators and functions for calculating values on stored values. SQL allows the use of expressions in theselect listto project data, as in the following example, which returns a list of books that cost more than 100.00 with an additionalsales_taxcolumn containing a sales tax figure calculated at 6% of theprice. Queries can be nested so that the results of one query can be used in another query via arelational operatoror aggregation function. A nested query is also known as asubquery. While joins and other table operations provide computationally superior (i.e. faster) alternatives in many cases (all depending on implementation), the use of subqueries introduces a hierarchy in execution that can be useful or necessary. In the following example, the aggregation functionAVGreceives as input the result of a subquery: A subquery can use values from the outer query, in which case it is known as acorrelated subquery. Since 1999 the SQL standard allows WITH clauses, i.e. 
named subqueries often calledcommon table expressions(named and designed after the IBM DB2 version 2 implementation; Oracle calls thesesubquery factoring). CTEs can also berecursiveby referring to themselves;the resulting mechanismallows tree or graph traversals (when represented as relations), and more generallyfixpointcomputations. A derived table is a subquery in a FROM clause. Essentially, the derived table is a subquery that can be selected from or joined to. Derived table functionality allows the user to reference the subquery as a table. The derived table also is referred to as aninline viewor aselect in from list. In the following example, the SQL statement involves a join from the initial Books table to the derived table "Sales". This derived table captures associated book sales information using the ISBN to join to the Books table. As a result, the derived table provides the result set with additional columns (the number of items sold and the company that sold the books): Given a table T, thequerySELECT*FROMTwill result in all the elements of all the rows of the table being shown. With the same table, the querySELECTC1FROMTwill result in the elements from the column C1 of all the rows of the table being shown. This is similar to aprojectioninrelational algebra, except that in the general case, the result may contain duplicate rows. This is also known as a Vertical Partition in some database terms, restricting query output to view only specified fields or columns. With the same table, the querySELECT*FROMTWHEREC1=1will result in all the elements of all the rows where the value of column C1 is '1' being shown – inrelational algebraterms, aselectionwill be performed, because of the WHERE clause. This is also known as a Horizontal Partition, restricting rows output by a query according to specified conditions. With more than one table, the result set will be every combination of rows. So if two tables are T1 and T2,SELECT*FROMT1,T2will result in every combination of T1 rows with every T2 rows. E.g., if T1 has 3 rows and T2 has 5 rows, then 15 rows will result. Although not in standard, most DBMS allows using a select clause without a table by pretending that an imaginary table with one row is used. This is mainly used to perform calculations where a table is not needed. The SELECT clause specifies a list of properties (columns) by name, or the wildcard character (“*”) to mean “all properties”. Often it is convenient to indicate a maximum number of rows that are returned. This can be used for testing or to prevent consuming excessive resources if the query returns more information than expected. The approach to do this often varies per vendor. InISOSQL:2003, result sets may be limited by using ISOSQL:2008introduced theFETCH FIRSTclause. According to PostgreSQL v.9 documentation, an SQL window function "performs a calculation across a set of table rows that are somehow related to the current row", in a way similar to aggregate functions.[7]The name recalls signal processingwindow functions. A window function call always contains anOVERclause. ROW_NUMBER() OVERmay be used for asimple tableon the returned rows, e.g. to return no more than ten rows: ROW_NUMBER can benon-deterministic: ifsort_keyis not unique, each time you run the query it is possible to get different row numbers assigned to any rows wheresort_keyis the same. Whensort_keyis unique, each row will always get a unique row number. 
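A runnable sketch of the ROW_NUMBER pattern, using Python's built-in sqlite3 module against an in-memory database (table and rows invented; assumes an SQLite build of 3.25 or later, which added window functions):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE person (name TEXT, age INTEGER)")
con.executemany("INSERT INTO person VALUES (?, ?)",
                [("Ann", 30), ("Bob", 20), ("Cy", 25), ("Dee", 20)])

# Number rows by ascending age, then keep only the first two. Bob and Dee
# tie on age 20, so which of them is numbered 1 is not deterministic --
# exactly the caveat described above for non-unique sort keys.
query = """
    SELECT name, age
    FROM (SELECT name, age,
                 ROW_NUMBER() OVER (ORDER BY age) AS row_num
          FROM person) AS ranked
    WHERE row_num <= 2
"""
for row in con.execute(query):
    print(row)
```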
The RANK() OVER window function acts like ROW_NUMBER, but may return more or fewer than n rows in case of tie conditions, e.g. to return the top-10 youngest persons: The above code could return more than ten rows, e.g. if there are two people of the same age, it could return eleven rows. Since ISO SQL:2008, result limits can be specified as in the following example using the FETCH FIRST clause. This clause currently is supported by CA DATACOM/DB 11, IBM DB2, SAP SQL Anywhere, PostgreSQL, EffiProz, H2, HSQLDB version 2.0, Oracle 12c and Mimer SQL. Microsoft SQL Server 2008 and higher supports FETCH FIRST, but it is considered part of the ORDER BY clause. The ORDER BY, OFFSET, and FETCH FIRST clauses are all required for this usage. Some DBMSs offer non-standard syntax either instead of or in addition to SQL standard syntax. Below, variants of the simple limit query for different DBMSes are listed: Rows pagination[9] is an approach used to limit and display only a part of the total data of a query in the database. Instead of showing hundreds or thousands of rows at the same time, the server is requested only one page (a limited set of rows, for example only 10 rows), and the user starts navigating by requesting the next page, and then the next one, and so on. It is very useful, especially in web systems, where there is no dedicated connection between the client and the server, so the client does not have to wait to read and display all the rows of the server. Some databases provide specialised syntax for hierarchical data. A window function in SQL:2003 is an aggregate function applied to a partition of the result set. For example, calculates the sum of the populations of all rows having the same city value as the current row. Partitions are specified using the OVER clause which modifies the aggregate. Syntax: The OVER clause can partition and order the result set. Ordering is used for order-relative functions such as row_number. The processing of a SELECT statement according to ANSI SQL would be the following:[10] The implementation of window function features by vendors of relational databases and SQL engines differs widely. Most databases support at least some flavour of window functions, but a closer look makes clear that most vendors implement only a subset of the standard. Take the powerful RANGE clause as an example: only Oracle, DB2, Spark/Hive, and Google BigQuery fully implement this feature. More recently, vendors have added new extensions to the standard, e.g. array aggregation functions. These are particularly useful in the context of running SQL against a distributed file system (Hadoop, Spark, Google BigQuery), where there are weaker data co-locality guarantees than on a distributed relational database (MPP). Rather than evenly distributing the data across all nodes, SQL engines running queries against a distributed filesystem can achieve data co-locality guarantees by nesting data and thus avoiding potentially expensive joins involving heavy shuffling across the network. User-defined aggregate functions that can be used in window functions are another extremely powerful feature. Data can also be generated without a base table, using a UNION ALL chain; SQL Server 2008 additionally supports the "row constructor" feature, specified in the SQL:1999 standard.
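Both row-generation methods mentioned in that last paragraph can be tried from Python's sqlite3 module (SQLite happens to accept both forms; the data is invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Generating rows with UNION ALL: each SELECT contributes one row.
for row in con.execute("SELECT 1 AS n, 'one' AS name "
                       "UNION ALL SELECT 2, 'two' "
                       "UNION ALL SELECT 3, 'three'"):
    print(row)

# Generating the same rows with a row constructor (a bare VALUES list).
for row in con.execute("VALUES (1, 'one'), (2, 'two'), (3, 'three')"):
    print(row)
```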
https://en.wikipedia.org/wiki/Select_(SQL)
Structured Query Language (SQL) (pronounced /ˌɛsˌkjuˈɛl/ S-Q-L; or alternatively as /ˈsiːkwəl/ "sequel")[4][5] is a domain-specific language used to manage data, especially in a relational database management system (RDBMS). It is particularly useful in handling structured data, i.e., data incorporating relations among entities and variables. Introduced in the 1970s, SQL offered two main advantages over older read–write APIs such as ISAM or VSAM. Firstly, it introduced the concept of accessing many records with one single command. Secondly, it eliminated the need to specify how to reach a record, i.e., with or without an index.

Originally based upon relational algebra and tuple relational calculus, SQL consists of many types of statements,[6] which may be informally classed as sublanguages, commonly: Data Query Language (DQL), Data Definition Language (DDL), Data Control Language (DCL), and Data Manipulation Language (DML).[7] The scope of SQL includes data query, data manipulation (insert, update, and delete), data definition (schema creation and modification), and data access control. Although SQL is essentially a declarative language (4GL), it also includes procedural elements.

SQL was one of the first commercial languages to use Edgar F. Codd's relational model. The model was described in his influential 1970 paper, "A Relational Model of Data for Large Shared Data Banks".[8] Despite not entirely adhering to the relational model as described by Codd, SQL became the most widely used database language.[9][10] SQL became a standard of the American National Standards Institute (ANSI) in 1986 and of the International Organization for Standardization (ISO) in 1987.[11] Since then, the standard has been revised multiple times to include a larger set of features and incorporate common extensions. Despite the existence of standards, virtually no implementation adheres to them fully, and most SQL code requires at least some changes before being ported to different database systems.

SQL was initially developed at IBM by Donald D. Chamberlin and Raymond F. Boyce after learning about the relational model from Edgar F. Codd[12] in the early 1970s.[13] This version, initially called SEQUEL (Structured English Query Language), was designed to manipulate and retrieve data stored in IBM's original quasi-relational database management system, System R, which a group at IBM San Jose Research Laboratory had developed during the 1970s.[13] Chamberlin and Boyce's first attempt at a relational database language was SQUARE (Specifying Queries in A Relational Environment), but it was difficult to use due to subscript/superscript notation. After moving to the San Jose Research Laboratory in 1973, they began work on a sequel to SQUARE.[12] The original name SEQUEL, which is widely regarded as a pun on QUEL, the query language of Ingres,[14] was later changed to SQL (dropping the vowels) because "SEQUEL" was a trademark of the UK-based Hawker Siddeley Dynamics Engineering Limited company.[15] The label SQL later became the acronym for Structured Query Language.[16]

After testing SQL at customer test sites to determine the usefulness and practicality of the system, IBM began developing commercial products based on their System R prototype, including System/38, SQL/DS, and IBM Db2, which were commercially available in 1979, 1981, and 1983, respectively.[17] In the late 1970s, Relational Software, Inc. (now Oracle Corporation) saw the potential of the concepts described by Codd, Chamberlin, and Boyce, and developed their own SQL-based RDBMS with aspirations of selling it to the U.S.
Navy, Central Intelligence Agency, and other U.S. government agencies. In June 1979, Relational Software introduced one of the first commercially available implementations of SQL, Oracle V2 (Version 2) for VAX computers. By 1986, the ANSI and ISO standards groups officially adopted the "Database Language SQL" language definition. New versions of the standard were published in 1989, 1992, 1996, 1999, 2003, 2006, 2008, 2011,[12] 2016 and, most recently, 2023.[18]

SQL implementations are incompatible between vendors and do not necessarily completely follow standards. In particular, date and time syntax, string concatenation, NULLs, and comparison case sensitivity vary from vendor to vendor. PostgreSQL[19] and Mimer SQL[20] strive for standards compliance, though PostgreSQL does not adhere to the standard in all cases. For example, the folding of unquoted names to lower case in PostgreSQL is incompatible with the SQL standard,[21] which says that unquoted names should be folded to upper case.[22] Thus, according to the standard, Foo should be equivalent to FOO, not foo. Popular implementations of SQL commonly omit support for basic features of Standard SQL, such as the DATE or TIME data types. The most obvious such examples, and incidentally the most popular commercial and proprietary SQL DBMSs, are Oracle (whose DATE behaves as DATETIME,[23][24] and lacks a TIME type)[25] and MS SQL Server (before the 2008 version). As a result, SQL code can rarely be ported between database systems without modifications; there are several reasons for this lack of portability between database systems.

SQL was adopted as a standard by ANSI in 1986 as SQL-86[27] and by the ISO in 1987.[11] It is maintained by ISO/IEC JTC 1, Information technology, Subcommittee SC 32, Data management and interchange. Until 1996, the National Institute of Standards and Technology (NIST) data-management standards program certified SQL DBMS compliance with the SQL standard. Vendors now self-certify the compliance of their products.[28] The original standard declared that the official pronunciation for "SQL" was an initialism: /ˌɛsˌkjuːˈɛl/ ("ess cue el").[9] Regardless, many English-speaking database professionals (including Donald Chamberlin himself[29]) use the acronym-like pronunciation of /ˈsiːkwəl/ ("sequel"),[30] mirroring the language's prerelease development name, "SEQUEL".[13][15][29] The SQL standard has gone through a number of revisions. The standard is commonly denoted by the pattern ISO/IEC 9075-n:yyyy Part n: title, or, as a shortcut, ISO/IEC 9075. Interested parties may purchase the standards documents from ISO,[35] IEC, or ANSI. Some old drafts are freely available.[36][37] ISO/IEC 9075 is complemented by ISO/IEC 13249: SQL Multimedia and Application Packages and some technical reports.

The SQL language is subdivided into several language elements, including clauses, expressions, predicates, queries, and statements. SQL is designed for a specific purpose: to query data contained in a relational database. SQL is a set-based, declarative programming language, not an imperative programming language like C or BASIC. However, extensions to Standard SQL add procedural programming language functionality, such as control-of-flow constructs. In addition to the standard SQL/PSM extensions and proprietary SQL extensions, procedural and object-oriented programmability is available on many SQL platforms via DBMS integration with other languages.
The SQL standard defines SQL/JRT extensions (SQL Routines and Types for the Java Programming Language) to support Java code in SQL databases. Microsoft SQL Server 2005 uses the SQLCLR (SQL Server Common Language Runtime) to host managed .NET assemblies in the database, while prior versions of SQL Server were restricted to unmanaged extended stored procedures primarily written in C. PostgreSQL lets users write functions in a wide variety of languages—including Perl, Python, Tcl, JavaScript (PL/V8) and C.[39]

A distinction should be made between alternatives to SQL as a language and alternatives to the relational model itself; various relational alternatives to the SQL language have been proposed (see navigational database and NoSQL for alternatives to the relational model).

Distributed Relational Database Architecture (DRDA) was designed by a workgroup within IBM from 1988 to 1994. DRDA enables network-connected relational databases to cooperate to fulfill SQL requests.[41][42] An interactive user or program can issue SQL statements to a local RDB and receive tables of data and status indicators in reply from remote RDBs. SQL statements can also be compiled and stored in remote RDBs as packages and then invoked by package name. This is important for the efficient operation of application programs that issue complex, high-frequency queries. It is especially important when the tables to be accessed are located in remote systems. The messages, protocols, and structural components of DRDA are defined by the Distributed Data Management Architecture. Distributed SQL processing à la DRDA is distinct from contemporary distributed SQL databases.

SQL deviates in several ways from its theoretical foundation, the relational model and its tuple calculus. In that model, a table is a set of tuples, while in SQL, tables and query results are lists of rows; the same row may occur multiple times, and the order of rows can be employed in queries (e.g., in the LIMIT clause). Critics argue that SQL should be replaced with a language that returns strictly to the original foundation: for example, see The Third Manifesto by Hugh Darwen and C.J. Date (2006, ISBN 0-321-39942-0). Early specifications did not support major features, such as primary keys. Result sets could not be named, and subqueries had not been defined. These were added in 1992.[12] The lack of sum types has been described as a roadblock to full use of SQL's user-defined types. JSON support, for example, needed to be added by a new standard in 2016.[43]

The concept of Null is the subject of some debate. The Null marker indicates the absence of a value, and is distinct from a value of 0 for an integer column or an empty string for a text column. The concept of Null makes SQL a concrete implementation of general three-valued logic.[12] Another popular criticism is that SQL allows duplicate rows, which complicates integration with languages such as Python, whose data types might make it difficult to represent the data accurately,[12] both in terms of parsing and because of the absence of modularity. This is usually avoided by declaring a primary key, or a unique constraint, with one or more columns that uniquely identify a row in the table.
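The duplicate-row criticism is easy to demonstrate; the following sketch uses a hypothetical table to show a query returning duplicate rows and a key declaration that rules them out (multi-row VALUES support varies slightly by DBMS):

```sql
-- Without a key, identical rows can coexist, and projections can duplicate:
CREATE TABLE visits (person VARCHAR(40), page VARCHAR(40));
INSERT INTO visits VALUES ('alice', 'home'), ('alice', 'home');

SELECT person FROM visits;          -- returns 'alice' twice (a list, not a set)
SELECT DISTINCT person FROM visits; -- collapses duplicates, as a set would

-- Declaring a primary key makes duplicate rows impossible:
CREATE TABLE visits2 (
  person VARCHAR(40),
  page   VARCHAR(40),
  PRIMARY KEY (person, page)
);
```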
In a sense similar to object–relational impedance mismatch, a mismatch occurs between the declarative SQL language and the procedural languages in which SQL is typically embedded.[citation needed] The SQL standard defines three kinds of data types (chapter 4.1.1 of SQL/Foundation). Constructed types are one of ARRAY, MULTISET, REF(erence), or ROW. User-defined types are comparable to classes in object-oriented languages, with their own constructors, observers, mutators, methods, inheritance, overloading, overriding, interfaces, and so on. Predefined data types are intrinsically supported by the implementation.
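A short sketch of the constructed types in ISO syntax; the table is hypothetical, and actual support and spelling vary by DBMS (PostgreSQL, for instance, uses its own array syntax):

```sql
-- ROW and ARRAY constructed types (ISO SQL syntax; vendor support varies):
CREATE TABLE customers (
  id      INTEGER PRIMARY KEY,
  name    VARCHAR(60),
  phones  VARCHAR(20) ARRAY[3],                      -- ARRAY type
  address ROW(street VARCHAR(40), city VARCHAR(30))  -- ROW type
);

INSERT INTO customers
VALUES (1, 'Ada', ARRAY['555-0100', '555-0101'],
        ROW('1 Main St', 'Springfield'));
```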
https://en.wikipedia.org/wiki/SQL#Queries
In computing, algorithmic skeletons, or parallelism patterns, are a high-level parallel programming model for parallel and distributed computing. Algorithmic skeletons take advantage of common programming patterns to hide the complexity of parallel and distributed applications. Starting from a basic set of patterns (skeletons), more complex patterns can be built by combining the basic ones. The most outstanding feature of algorithmic skeletons, which differentiates them from other high-level parallel programming models, is that orchestration and synchronization of the parallel activities is implicitly defined by the skeleton patterns. Programmers do not have to specify the synchronizations between the application's sequential parts. This yields two implications. First, as the communication/data access patterns are known in advance, cost models can be applied to schedule skeleton programs.[1] Second, algorithmic skeleton programming reduces the number of errors when compared to traditional lower-level parallel programming models (Threads, MPI).

The following example is based on the Java Skandium library for parallel programming. The objective is to implement an algorithmic-skeleton-based parallel version of the QuickSort algorithm using the Divide and Conquer pattern. Notice that the high-level approach hides thread management from the programmer. The functional code in this example corresponds to four types: Condition, Split, Execute, and Merge (a sketch of this structure follows this passage). The ShouldSplit class implements the Condition interface. The function receives an input, Range r in this case, and returns true or false. In the context of Divide and Conquer, where this function will be used, this decides whether a sub-array should be subdivided again or not. The SplitList class implements the Split interface, which in this case divides a (sub-)array into smaller sub-arrays. The class uses a helper function partition(...) which implements the well-known QuickSort pivot and swap scheme. The Sort class implements the Execute interface, and is in charge of sorting the sub-array specified by Range r. In this case we simply invoke Java's default (Arrays.sort) method for the given sub-array. Finally, once a set of sub-arrays is sorted, we merge the sub-array parts into a bigger array with the MergeList class, which implements the Merge interface.

ASSIST[2][3] is a programming environment which provides programmers with a structured coordination language. The coordination language can express parallel programs as an arbitrary graph of software modules. The module graph describes how a set of modules interact with each other using a set of typed data streams. The modules can be sequential or parallel. Sequential modules can be written in C, C++, or Fortran; parallel modules are programmed with a special ASSIST parallel module (parmod). AdHoc,[4][5] a hierarchical and fault-tolerant Distributed Shared Memory (DSM) system, is used to interconnect streams of data between processing elements by providing a repository with get/put/remove/execute operations. Research around AdHoc has focused on transparency, scalability, and fault-tolerance of the data repository. While not a classical skeleton framework, in the sense that no skeletons are provided, ASSIST's generic parmod can be specialized into classical skeletons such as farm, map, etc. ASSIST also supports autonomic control of parmods, and can be subject to a performance contract by dynamically adapting the number of resources used.
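The Skandium listings referenced above were lost in extraction. The sketch below illustrates the same Divide and Conquer structure with the four functional interfaces defined locally, so it compiles and runs without the Skandium library itself; the real Skandium API's names and signatures may differ, and a skeleton framework would run the recursive calls in parallel rather than sequentially.

```java
import java.util.Arrays;

// Local stand-ins for the four Skandium-style functional interfaces.
interface Condition<P>  { boolean condition(P p); }
interface Split<P, R>   { R[] split(P p); }
interface Execute<P, R> { R execute(P p); }
interface Merge<R>      { R merge(R[] parts); }

public class DacQuickSort {
    static final int THRESHOLD = 4;  // sub-arrays this small are sorted directly

    // Condition: keep splitting while the array is still "large".
    static final Condition<int[]> shouldSplit = a -> a.length > THRESHOLD;

    // Split: three-way partition around the last element as pivot.
    static final Split<int[], int[]> splitList = a -> {
        int pivot = a[a.length - 1];
        return new int[][] {
            Arrays.stream(a).filter(x -> x < pivot).toArray(),
            Arrays.stream(a).filter(x -> x == pivot).toArray(), // already sorted
            Arrays.stream(a).filter(x -> x > pivot).toArray()
        };
    };

    // Execute: sort a small sub-array with the library sort.
    static final Execute<int[], int[]> sort = a -> {
        int[] copy = a.clone();
        Arrays.sort(copy);
        return copy;
    };

    // Merge: concatenate the sorted parts in order.
    static final Merge<int[]> mergeList = parts -> {
        int total = 0;
        for (int[] p : parts) total += p.length;
        int[] out = new int[total];
        int pos = 0;
        for (int[] p : parts) {
            System.arraycopy(p, 0, out, pos, p.length);
            pos += p.length;
        }
        return out;
    };

    // Sequential driver; a skeleton framework parallelizes the two calls.
    static int[] dac(int[] a) {
        if (!shouldSplit.condition(a)) return sort.execute(a);
        int[][] parts = splitList.split(a);
        return mergeList.merge(new int[][] { dac(parts[0]), parts[1], dac(parts[2]) });
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(dac(new int[] {5, 3, 8, 1, 9, 2, 7})));
    }
}
```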
CO2P3S (Correct Object-Oriented Pattern-based Parallel Programming System) is a pattern-oriented development environment,[6] which achieves parallelism using threads in Java. CO2P3S is concerned with the complete development process of a parallel application. Programmers interact through a programming GUI to choose a pattern and its configuration options. Then, programmers fill in the hooks required for the pattern, and new code is generated as a framework in Java for the parallel execution of the application. The generated framework uses three levels, in descending order of abstraction: patterns layer, intermediate code layer, and native code layer. Thus, advanced programmers may intervene in the generated code at multiple levels to tune the performance of their applications. The generated code is mostly type safe, using the types provided by the programmer, which do not require extension of a superclass, but it fails to be completely type safe, as in the reduce(..., Object reducer) method in the mesh pattern. The set of patterns supported in CO2P3S corresponds to method-sequence, distributor, mesh, and wavefront. Complex applications can be built by composing frameworks with their object references. Nevertheless, if no pattern is suitable, the MetaCO2P3S graphical tool addresses extensibility by allowing programmers to modify the pattern designs and introduce new patterns into CO2P3S. Support for distributed memory architectures in CO2P3S was introduced later.[7] To use a distributed memory pattern, programmers must change the pattern's memory option from shared to distributed, and generate the new code. From the usage perspective, the distributed memory version of the code requires the management of remote exceptions.

Calcium is greatly inspired by Lithium and Muskel. As such, it provides algorithmic skeleton programming as a Java library. Both task and data parallel skeletons are fully nestable, and are instantiated via parametric skeleton objects, not inheritance. Calcium supports the execution of skeleton applications on top of the ProActive environment for distributed cluster-like infrastructure. Additionally, Calcium has three distinctive features for algorithmic skeleton programming. First, a performance tuning model which helps programmers identify code responsible for performance bugs.[8] Second, a type system for nestable skeletons which is proven to guarantee subject reduction properties and is implemented using Java Generics.[9] Third, a transparent algorithmic skeleton file access model, which enables skeletons for data intensive applications.[10] Skandium is a complete re-implementation of Calcium for multi-core computing. Programs written on Skandium may take advantage of shared memory to simplify parallel programming.[11]

Eden[12] is a parallel programming language for distributed memory environments, which extends Haskell. Processes are defined explicitly to achieve parallel programming, while their communications remain implicit. Processes communicate through unidirectional channels, which connect one writer to exactly one reader. Programmers only need to specify which data a process depends on. Eden's process model provides direct control over process granularity, data distribution and communication topology. Eden is not a skeleton language in the sense that skeletons are not provided as language constructs. Instead, skeletons are defined on top of Eden's lower-level process abstraction, supporting both task and data parallelism.
So, contrary to most other approaches, Eden lets the skeletons be defined in the same language and at the same level as the skeleton instantiation: in Eden itself. Because Eden is an extension of a functional language, Eden skeletons are higher-order functions. Eden introduces the concept of an implementation skeleton, which is an architecture-independent scheme that describes a parallel implementation of an algorithmic skeleton.

The Edinburgh Skeleton Library (eSkel) is provided in C and runs on top of MPI. The first version of eSkel was described in [13], and a later version is presented in [14]. In [15], nesting-mode and interaction-mode for skeletons are defined. The nesting-mode can be either transient or persistent, while the interaction-mode can be either implicit or explicit. Transient nesting means that the nested skeleton is instantiated for each invocation and destroyed afterwards, while persistent means that the skeleton is instantiated once and the same skeleton instance will be invoked throughout the application. Implicit interaction means that the flow of data between skeletons is completely defined by the skeleton composition, while explicit means that data can be generated or removed from the flow in a way not specified by the skeleton composition. For example, a skeleton that produces an output without ever receiving an input has explicit interaction. Performance prediction for scheduling and resource mapping, mainly for pipelines, has been explored by Benoit et al.[16][17][18][19] They provided a performance model for each mapping, based on process algebra, and determined the best scheduling strategy based on the results of the model. More recent works have addressed the problem of adaptation on structured parallel programming,[20] in particular for the pipe skeleton.[21][22]

FastFlow is a skeletal parallel programming framework specifically targeted to the development of streaming and data-parallel applications. Initially developed to target multi-core platforms, it has been successively extended to target heterogeneous platforms composed of clusters of shared-memory platforms,[23][24] possibly equipped with computing accelerators such as NVidia GPGPUs, Xeon Phi, Tilera TILE64. The main design philosophy of FastFlow is to provide application designers with key features for parallel programming (e.g. time-to-market, portability, efficiency and performance portability) via suitable parallel programming abstractions and a carefully designed run-time support.[25] FastFlow is a general-purpose C++ programming framework for heterogeneous parallel platforms. Like other high-level programming frameworks, such as Intel TBB and OpenMP, it simplifies the design and engineering of portable parallel applications. However, it has a clear edge in terms of expressiveness and performance with respect to other parallel programming frameworks in specific application scenarios, including, inter alia: fine-grain parallelism on cache-coherent shared-memory platforms; streaming applications; coupled usage of multi-core and accelerators. In other cases FastFlow is typically comparable to (and in some cases slightly faster than) state-of-the-art parallel programming frameworks such as Intel TBB, OpenMP, Cilk, etc.[26]

Higher-order Divide and Conquer (HDC)[27] is a subset of the functional language Haskell. Functional programs are presented as polymorphic higher-order functions, which can be compiled into C/MPI, and linked with skeleton implementations.
The language focuses on the divide and conquer paradigm, and starting from a general kind of divide and conquer skeleton, more specific cases with efficient implementations are derived. The specific cases correspond to: fixed recursion depth, constant recursion degree, multiple block recursion, elementwise operations, and correspondent communications.[28] HDC pays special attention to the subproblem's granularity and its relation with the number of available processors. The total number of processors is a key parameter for the performance of the skeleton program, as HDC strives to estimate an adequate assignment of processors for each part of the program. Thus, the performance of the application is strongly related to the estimated number of processors, leading either to an excessive number of subproblems or to not enough parallelism to exploit the available processors.

HOC-SA is a Globus Incubator project. HOC-SA stands for Higher-Order Components-Service Architecture. Higher-Order Components (HOCs) have the aim of simplifying Grid application development. The objective of HOC-SA is to provide Globus users, who do not want to know about all the details of the Globus middleware (GRAM RSL documents, Web services and resource configuration etc.), with HOCs that provide a higher-level interface to the Grid than the core Globus Toolkit. HOCs are Grid-enabled skeletons, implemented as components on top of the Globus Toolkit, remotely accessible via Web Services.[29]

JaSkel[30] is a Java-based skeleton framework providing skeletons such as farm, pipe and heartbeat. Skeletons are specialized using inheritance. Programmers implement the abstract methods for each skeleton to provide their application-specific code (a hypothetical sketch of this style follows this passage). Skeletons in JaSkel are provided in sequential, concurrent and dynamic versions. For example, the concurrent farm can be used in shared memory environments (threads), but not in distributed environments (clusters), where the distributed farm should be used. To change from one version to the other, programmers must change their classes' signature to inherit from a different skeleton. The nesting of skeletons uses the basic Java Object class, and therefore no type system is enforced during the skeleton composition. The distribution aspects of the computation are handled in JaSkel using AOP, more specifically the AspectJ implementation. Thus, JaSkel can be deployed on both cluster and Grid-like infrastructures.[31] Nevertheless, a drawback of the JaSkel approach is that the nesting of the skeleton strictly relates to the deployment infrastructure. Thus, a double nesting of farm yields a better performance than a single farm on hierarchical infrastructures. This defeats the purpose of using AOP to separate the distribution and functional concerns of the skeleton program.

Lithium[32][33][34] and its successor Muskel are skeleton frameworks developed at University of Pisa, Italy. Both of them provide nestable skeletons to the programmer as Java libraries. The evaluation of a skeleton application follows a formal definition of operational semantics introduced by Aldinucci and Danelutto,[35][36] which can handle both task and data parallelism. The semantics describe both the functional and parallel behavior of the skeleton language using a labeled transition system. Additionally, several performance optimizations are applied, such as skeleton rewriting techniques [18, 10], task lookahead, and server-to-server lazy binding.[37] At the implementation level, Lithium exploits macro-data flow[38][39] to achieve parallelism.
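JaSkel's actual class hierarchy is not shown in the source; the following is a hypothetical sketch of inheritance-based skeleton specialization in that general style. The Farm base class, its method names, and the fixed thread count are all invented for illustration and are not JaSkel's real API.

```java
import java.util.List;
import java.util.concurrent.*;
import java.util.stream.Collectors;

// Hypothetical farm skeleton: subclasses supply the worker logic by
// overriding an abstract method, in the inheritance style of JaSkel.
abstract class Farm<I, O> {
    private final ExecutorService pool = Executors.newFixedThreadPool(4);

    /** Application-specific sequential code goes here. */
    protected abstract O compute(I input);

    /** The skeleton owns orchestration: farm each input out to a worker. */
    public List<O> process(List<I> inputs) {
        List<Future<O>> futures = inputs.stream()
                .map(in -> pool.submit(() -> compute(in)))
                .collect(Collectors.toList());
        List<O> results = futures.stream().map(f -> {
            try { return f.get(); }
            catch (Exception e) { throw new RuntimeException(e); }
        }).collect(Collectors.toList());
        pool.shutdown();
        return results;
    }
}

// Specialization by inheritance: only the sequential worker is written.
class SquareFarm extends Farm<Integer, Integer> {
    @Override protected Integer compute(Integer x) { return x * x; }
}

public class FarmDemo {
    public static void main(String[] args) {
        System.out.println(new SquareFarm().process(List.of(1, 2, 3, 4)));
    }
}
```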
When the input stream receives a new parameter, the skeleton program is processed to obtain a macro-data flow graph. The nodes of the graph are macro-data flow instructions (MDFi) which represent the sequential pieces of code provided by the programmer. Tasks are used to group together several MDFi, and are consumed by idle processing elements from a task pool (a minimal sketch of such a pool follows this passage). When the computation of the graph is concluded, the result is placed into the output stream and thus delivered back to the user. Muskel also provides non-functional features such as Quality of Service (QoS);[40] security between task pool and interpreters;[41][42] and resource discovery, load balancing, and fault tolerance when interfaced with Java / Jini Parallel Framework (JJPF),[43] a distributed execution framework. Muskel also provides support for combining structured with unstructured programming,[44] and recent research has addressed extensibility.[45]

Mallba[46] is a library for combinatorial optimization supporting exact, heuristic and hybrid search strategies.[47] Each strategy is implemented in Mallba as a generic skeleton which can be used by providing the required code. For the exact search algorithms Mallba provides branch-and-bound and dynamic-optimization skeletons. For local search heuristics Mallba supports: hill climbing, metropolis, simulated annealing, and tabu search; and also population-based heuristics derived from evolutionary algorithms such as genetic algorithms, evolution strategy, and others (CHC). The hybrid skeletons combine strategies, such as GASA, a mixture of genetic algorithm and simulated annealing, and CHCCES, which combines CHC and ES. The skeletons are provided as a C++ library and are not nestable, but they are type safe. A custom MPI abstraction layer, NetStream, takes care of primitive data type marshalling, synchronization, etc. A skeleton may have multiple lower-level parallel implementations depending on the target architectures: sequential, LAN, and WAN. For example: centralized master-slave, distributed master-slave, etc. Mallba also provides state variables which hold the state of the search skeleton. The state links the search with the environment, and can be accessed to inspect the evolution of the search and decide on future actions. For example, the state can be used to store the best solution found so far, or α, β values for branch and bound pruning.[48] Compared with other frameworks, Mallba's usage of skeleton concepts is unique. Skeletons are provided as parametric search strategies rather than parametric parallelization patterns.

Marrow[49][50] is a C++ algorithmic skeleton framework for the orchestration of OpenCL computations in, possibly heterogeneous, multi-GPU environments. It provides a set of both task and data-parallel skeletons that can be composed, through nesting, to build compound computations. The leaf nodes of the resulting composition trees represent the GPU computational kernels, while the remaining nodes denote the skeleton applied to the nested sub-tree. The framework takes upon itself the entire host-side orchestration required to correctly execute these trees in heterogeneous multi-GPU environments, including the proper ordering of the data-transfer and execution requests, and the communication required between the tree's nodes. Among Marrow's most distinctive features are a set of skeletons previously unavailable in the GPU context, such as Pipeline and Loop, and the skeleton nesting ability – a feature also new in this context.
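The task-pool execution model described above can be illustrated with a minimal, framework-free sketch: tasks sit in a shared queue and idle workers pull and run them until the pool is drained. This illustrates the general mechanism only; it is not Lithium or Muskel code.

```java
import java.util.concurrent.*;

// Minimal task pool: idle workers repeatedly take the next task.
public class TaskPoolDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Runnable> taskPool = new LinkedBlockingQueue<>();

        // Fill the pool with small tasks, standing in for grouped MDFi.
        for (int i = 0; i < 8; i++) {
            final int id = i;
            taskPool.add(() -> System.out.println(
                Thread.currentThread().getName() + " ran task " + id));
        }

        // "Idle processing elements": each worker loops, consuming tasks.
        Runnable worker = () -> {
            Runnable task;
            while ((task = taskPool.poll()) != null) {
                task.run();
            }
        };
        Thread w1 = new Thread(worker, "worker-1");
        Thread w2 = new Thread(worker, "worker-2");
        w1.start(); w2.start();
        w1.join();  w2.join();
    }
}
```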
Moreover, the framework introduces optimizations that overlap communication and computation, hence masking the latency imposed by the PCIe bus. The parallel execution of a Marrow composition tree by multiple GPUs follows a data-parallel decomposition strategy that concurrently applies the entire computational tree to different partitions of the input dataset. Other than expressing which kernel parameters may be decomposed and, when required, defining how the partial results should be merged, the programmer is completely abstracted from the underlying multi-GPU architecture. More information, as well as the source code, can be found at the Marrow website.

The Muenster Skeleton Library Muesli[51][52] is a C++ template library which re-implements many of the ideas and concepts introduced in Skil, e.g. higher order functions, currying, and polymorphic types.[1] It is built on top of MPI 1.2 and OpenMP 2.5 and supports, unlike many other skeleton libraries, both task and data parallel skeletons. Skeleton nesting (composition) is similar to the two-tier approach of P3L, i.e. task parallel skeletons can be nested arbitrarily, while data parallel skeletons cannot, but may be used at the leaves of a task parallel nesting tree.[53] C++ templates are used to render skeletons polymorphic, but no type system is enforced. However, the library implements an automated serialization mechanism inspired by [54] such that, in addition to the standard MPI data types, arbitrary user-defined data types can be used within the skeletons. The supported task parallel skeletons[55] are Branch & Bound,[56] Divide & Conquer,[57][58] Farm,[59][60] and Pipe; auxiliary skeletons are Filter, Final, and Initial. Data parallel skeletons, such as fold (reduce), map, permute, zip, and their variants, are implemented as higher order member functions of a distributed data structure. Currently, Muesli supports distributed data structures for arrays, matrices, and sparse matrices.[61] As a unique feature, Muesli's data parallel skeletons automatically scale on both single- and multi-core, multi-node cluster architectures.[62][63] Here, scalability across nodes and cores is ensured by simultaneously using MPI and OpenMP, respectively. However, this feature is optional in the sense that a program written with Muesli still compiles and runs on a single-core, multi-node cluster computer without changes to the source code, i.e. backward compatibility is guaranteed. This is ensured by providing a very thin OpenMP abstraction layer such that the support of multi-core architectures can be switched on/off by simply providing/omitting the OpenMP compiler flag when compiling the program. By doing so, virtually no overhead is introduced at runtime.

P3L[64] (Pisa Parallel Programming Language) is a skeleton-based coordination language. P3L provides skeleton constructs which are used to coordinate the parallel or sequential execution of C code. A compiler named Anacleto[65] is provided for the language. Anacleto uses implementation templates to compile P3L code into a target architecture. Thus, a skeleton can have several templates, each optimized for a different architecture. A template implements a skeleton on a specific architecture and provides a parametric process graph with a performance model. The performance model can then be used to decide program transformations which can lead to performance optimizations.[66] A P3L module corresponds to a properly defined skeleton construct with input and output streams, and other sub-modules or sequential C code.
Modules can be nested using the two-tier model, where the outer level is composed of task parallel skeletons, while data parallel skeletons may be used in the inner level.[64] Type verification is performed at the data flow level, when the programmer explicitly specifies the type of the input and output streams, and by specifying the flow of data between sub-modules.

SkIE[67] (Skeleton-based Integrated Environment) is quite similar to P3L, as it is also based on a coordination language, but provides advanced features such as debugging tools, performance analysis, visualization and a graphical user interface. Instead of directly using the coordination language, programmers interact with a graphical tool, where parallel modules based on skeletons can be composed. SKELib[68] builds upon the contributions of P3L and SkIE by inheriting, among others, the template system. It differs from them because a coordination language is no longer used; instead, skeletons are provided as a library in C, with performance similar to that achieved in P3L. Contrary to Skil, another C-like skeleton framework, type safety is not addressed in SKELib.

PAS (Parallel Architectural Skeletons) is a framework for skeleton programming developed in C++ and MPI.[69][70] Programmers use an extension of C++ to write their skeleton applications. The code is then passed through a Perl script which expands the code to pure C++, where skeletons are specialized through inheritance. In PAS, every skeleton has a Representative (Rep) object which must be provided by the programmer and is in charge of coordinating the skeleton's execution. Skeletons can be nested in a hierarchical fashion via the Rep objects. Besides the skeleton's execution, the Rep also explicitly manages the reception of data from the higher-level skeleton, and the sending of data to the sub-skeletons. A parametrized communication/synchronization protocol is used to send and receive data between parent and sub-skeletons. An extension of PAS labeled SuperPas[71] and later EPAS[72] addresses skeleton extensibility concerns. With the EPAS tool, new skeletons can be added to PAS. A Skeleton Description Language (SDL) is used to describe the skeleton pattern by specifying the topology with respect to a virtual processor grid. The SDL can then be compiled into native C++ code, which can be used as any other skeleton.

SBASCO (Skeleton-BAsed Scientific COmponents) is a programming environment oriented towards the efficient development of parallel and distributed numerical applications.[73] SBASCO aims at integrating two programming models: skeletons and components, with a custom composition language. An application view of a component provides a description of its interfaces (input and output type), while a configuration view provides, in addition, a description of the component's internal structure and processor layout. A component's internal structure can be defined using three skeletons: farm, pipe and multi-block. SBASCO addresses domain-decomposable applications through its multi-block skeleton. Domains are specified through arrays (mainly two dimensional), which are decomposed into sub-arrays with possible overlapping boundaries. The computation then takes place in an iterative BSP-like fashion. The first stage consists of local computations, while the second stage performs boundary exchanges.
A use case is presented for a reaction-diffusion problem in [74]. Two types of components are presented in [75]: Scientific Components (SC), which provide the functional code, and Communication Aspect Components (CAC), which encapsulate non-functional behavior such as communication, distribution processor layout and replication. For example, SC components are connected to a CAC component which can act as a manager at runtime by dynamically re-mapping processors assigned to a SC. A use case showing improved performance when using CAC components is shown in [76].

The Structured Coordination Language (SCL)[77] was one of the earliest skeleton programming languages. It provides a co-ordination language approach for skeleton programming over software components. SCL is considered a base language, and was designed to be integrated with a host language, for example Fortran or C, used for developing sequential software components. In SCL, skeletons are classified into three types: configuration, elementary and computation. Configuration skeletons abstract patterns for commonly used data structures such as distributed arrays (ParArray). Elementary skeletons correspond to data parallel skeletons such as map, scan, and fold. Computation skeletons abstract the control flow and correspond mainly to task parallel skeletons such as farm, SPMD, and iterateUntil. The coordination language approach was used in conjunction with performance models for programming traditional parallel machines as well as parallel heterogeneous machines that have multiple, differing cores on each processing node.[78]

SkePU[79] is a skeleton programming framework for multicore CPUs and multi-GPU systems. It is a C++ template library with six data-parallel and one task-parallel skeletons, two container types, and support for execution on multi-GPU systems both with CUDA and OpenCL. Recently, support for hybrid execution, performance-aware dynamic scheduling and load balancing has been developed in SkePU by implementing a backend for the StarPU runtime system. SkePU is being extended for GPU clusters.

SKiPPER is a domain-specific skeleton library for vision applications[80] which provides skeletons in CAML, and thus relies on CAML for type safety. Skeletons are presented in two ways: declarative and operational. Declarative skeletons are directly used by programmers, while their operational versions provide an architecture-specific target implementation. From the runtime environment, CAML skeleton specifications, and application-specific functions (provided in C by the programmer), new C code is generated and compiled to run the application on the target architecture. One of the interesting things about SKiPPER is that the skeleton program can be executed sequentially for debugging. Different approaches have been explored in SKiPPER for writing operational skeletons: static data-flow graphs, parametric process networks, hierarchical task graphs, and tagged-token data-flow graphs.[81]

QUAFF[82] is a more recent skeleton library written in C++ and MPI. QUAFF relies on template-based meta-programming techniques to reduce runtime overheads and perform skeleton expansions and optimizations at compilation time. Skeletons can be nested and sequential functions are stateful. Besides type checking, QUAFF takes advantage of C++ templates to generate, at compilation time, new C/MPI code.
QUAFF is based on the CSP model, where the skeleton program is described as a process network and production rules (single, serial, par, join).[83]

The SkeTo[84] project is a C++ library which achieves parallelization using MPI. SkeTo is different from other skeleton libraries because instead of providing nestable parallelism patterns, SkeTo provides parallel skeletons for parallel data structures such as lists, trees,[85][86] and matrices.[87] The data structures are typed using templates, and several parallel operations can be invoked on them. For example, the list structure provides parallel operations such as map, reduce, scan, zip, shift, etc. Additional research around SkeTo has also focused on optimization strategies by transformation, and more recently domain-specific optimizations.[88] For example, SkeTo provides a fusion transformation[89] which merges two successive function invocations into a single one, thus decreasing the function call overheads and avoiding the creation of intermediate data structures passed between functions (a small illustration follows this passage).

Skil[90] is an imperative language for skeleton programming. Skeletons are not directly part of the language but are implemented with it. Skil uses a subset of the C language which provides functional-language-like features such as higher-order functions, currying and polymorphic types. When Skil is compiled, such features are eliminated and regular C code is produced. Thus, Skil transforms polymorphic higher-order functions into monomorphic first-order C functions. Skil does not support nestable composition of skeletons. Data parallelism is achieved using specific data parallel structures, for example to spread arrays among available processors. Filter skeletons can be used.

In the STAPL Skeleton Framework,[91][92] skeletons are defined as parametric data flow graphs, letting them scale beyond 100,000 cores. In addition, this framework addresses composition of skeletons as point-to-point composition of their corresponding data flow graphs through the notion of ports, allowing new skeletons to be easily added to the framework. As a result, this framework eliminates the need for reimplementation and global synchronizations in composed skeletons. The STAPL Skeleton Framework supports nested composition and can switch between parallel and sequential execution at each level of nesting. This framework benefits from the scalable implementation of STAPL parallel containers[93] and can run skeletons on various containers including vectors, multidimensional arrays, and lists.

T4P was one of the first systems introduced for skeleton programming.[94] The system relied heavily on functional programming properties, and five skeletons were defined as higher-order functions: Divide-and-Conquer, Farm, Map, Pipe and RaMP. A program could have more than one implementation, each using a combination of different skeletons. Furthermore, each skeleton could have different parallel implementations. A methodology based on functional program transformations guided by performance models of the skeletons was used to select the most appropriate skeleton to be used for the program, as well as the most appropriate implementation of the skeleton.[95]
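Fusion of the kind SkeTo performs can be shown in miniature with Java streams: two successive map invocations are replaced by a single map over the composed function, avoiding the intermediate structure. This is a generic illustration of the transformation, not SkeTo code.

```java
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;

public class FusionDemo {
    public static void main(String[] args) {
        List<Integer> xs = List.of(1, 2, 3, 4);
        Function<Integer, Integer> g = x -> x + 1;
        Function<Integer, Integer> f = x -> x * 2;

        // Unfused: materializes an intermediate list of g-results.
        List<Integer> unfused = xs.stream().map(g).collect(Collectors.toList())
                                  .stream().map(f).collect(Collectors.toList());

        // Fused: one traversal with the composed function (f . g).
        List<Integer> fused =
            xs.stream().map(g.andThen(f)).collect(Collectors.toList());

        System.out.println(unfused.equals(fused)); // true: same result, one pass
    }
}
```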
https://en.wikipedia.org/wiki/Algorithmic_skeleton
In computer programming, specifically when using the imperative programming paradigm, an assertion is a predicate (a Boolean-valued function over the state space, usually expressed as a logical proposition using the variables of a program) connected to a point in the program, that should always evaluate to true at that point in code execution. Assertions can help a programmer read the code, help a compiler compile it, or help the program detect its own defects. For the latter, some programs check assertions by actually evaluating the predicate as they run. Then, if it is not in fact true – an assertion failure – the program considers itself to be broken and typically deliberately crashes or throws an assertion failure exception. The following code contains two assertions, x > 0 and x > 1, and they are indeed true at the indicated points during execution (the code is reconstructed in the sketch after this passage).

Programmers can use assertions to help specify programs and to reason about program correctness. For example, a precondition—an assertion placed at the beginning of a section of code—determines the set of states under which the programmer expects the code to execute. A postcondition—placed at the end—describes the expected state at the end of execution. For example: x > 0 { x++ } x > 1. The example above uses the notation for including assertions used by C. A. R. Hoare in his 1969 article.[1] That notation cannot be used in existing mainstream programming languages. However, programmers can include unchecked assertions using the comment feature of their programming language, for example in C++ (see the sketch below). The braces included in the comment help distinguish this use of a comment from other uses. Libraries may provide assertion features as well, for example in C using glibc with C99 support (also sketched below).

Several modern programming languages include checked assertions – statements that are checked at runtime or sometimes statically. If an assertion evaluates to false at runtime, an assertion failure results, which typically causes execution to abort. This draws attention to the location at which the logical inconsistency is detected and can be preferable to the behaviour that would otherwise result. The use of assertions helps the programmer design, develop, and reason about a program. In languages such as Eiffel, assertions form part of the design process; other languages, such as C and Java, use them only to check assumptions at runtime. In both cases, they can be checked for validity at runtime but can usually also be suppressed.

Assertions can function as a form of documentation: they can describe the state the code expects to find before it runs (its preconditions), and the state the code expects to result in when it is finished running (postconditions); they can also specify invariants of a class. Eiffel integrates such assertions into the language and automatically extracts them to document the class. This forms an important part of the method of design by contract. This approach is also useful in languages that do not explicitly support it: the advantage of using assertion statements rather than assertions in comments is that the program can check the assertions every time it runs; if the assertion no longer holds, an error can be reported. This prevents the code from getting out of sync with the assertions. An assertion may be used to verify that an assumption made by the programmer during the implementation of the program remains valid when the program is executed.
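The small examples referenced in this passage were lost in extraction; below are conventional reconstructions (the variable values are illustrative). The first block doubles as the checked-assert example from assert.h described for C with glibc/C99 support:

```c
#include <assert.h>

int main(void) {
    int x = 1;
    assert(x > 0);   /* first assertion: holds here */
    x++;
    assert(x > 1);   /* second assertion: holds here */
    return 0;
}
```

An unchecked, comment-style assertion in C++ using Hoare's brace notation:

```cpp
void f() {
    int x = 5;
    x = x + 1;
    // {x > 5}   assertion written as a comment; never checked at runtime
}
```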
For example, consider the Java code sketched at the end of this passage. In Java, % is the remainder operator (modulo), and in Java, if its first operand is negative, the result can also be negative (unlike the modulo used in mathematics). Here, the programmer has assumed that total is non-negative, so that the remainder of a division by 2 will always be 0 or 1. The assertion makes this assumption explicit: if countNumberOfUsers does return a negative value, the program may have a bug. A major advantage of this technique is that when an error does occur it is detected immediately and directly, rather than later through often obscure effects. Since an assertion failure usually reports the code location, one can often pinpoint the error without further debugging.

Assertions are also sometimes placed at points that execution is not supposed to reach. For example, assertions could be placed at the default clause of the switch statement in languages such as C, C++, and Java. Any case that the programmer does not handle intentionally will then raise an error, and the program will abort rather than silently continuing in an erroneous state. In D, such an assertion is added automatically when a switch statement doesn't contain a default clause.

In Java, assertions have been a part of the language since version 1.4. Assertion failures result in raising an AssertionError when the program is run with the appropriate flags, without which the assert statements are ignored. In C, they are provided by the standard header assert.h, which defines assert(assertion) as a macro that signals an error in the case of failure, usually terminating the program. In C++, both the assert.h and cassert headers provide the assert macro. The danger of assertions is that they may cause side effects, either by changing memory data or by changing thread timing. Assertions should be implemented carefully so they cause no side effects on program code.

Assertion constructs in a language allow for easy test-driven development (TDD) without the use of a third-party library. During the development cycle, the programmer will typically run the program with assertions enabled. When an assertion failure occurs, the programmer is immediately notified of the problem. Many assertion implementations will also halt the program's execution: this is useful, since if the program continued to run after an assertion violation occurred, it might corrupt its state and make the cause of the problem more difficult to locate. Using the information provided by the assertion failure (such as the location of the failure and perhaps a stack trace, or even the full program state if the environment supports core dumps or if the program is running in a debugger), the programmer can usually fix the problem. Thus assertions provide a very powerful tool in debugging.

When a program is deployed to production, assertions are typically turned off, to avoid any overhead or side effects they may have. In some cases assertions are completely absent from deployed code, such as in C/C++ assertions via macros. In other cases, such as Java, assertions are present in the deployed code, and can be turned on in the field for debugging.[2] Assertions may also be used to promise the compiler that a given edge condition is not actually reachable, thereby permitting certain optimizations that would not otherwise be possible. In this case, disabling the assertions could actually reduce performance. Assertions that are checked at compile time are called static assertions.
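The Java snippet promised near the start of this passage was lost; a conventional reconstruction follows, with countNumberOfUsers() standing in for the helper named in the surrounding text:

```java
int total = countNumberOfUsers();  // assumed to return a count, never negative
if (total % 2 == 0) {
    // total is even
} else {
    // total is odd and non-negative
    assert total % 2 == 1;
    // ...
}
```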
Static assertions are particularly useful in compile-time template metaprogramming, but can also be used in low-level languages like C by introducing illegal code if (and only if) the assertion fails. C11 and C++11 support static assertions directly through static_assert. In earlier C versions, a static assertion can be implemented with a switch statement, as in the first sketch following this passage. If the (BOOLEAN CONDITION) part evaluates to false then that code will not compile, because the compiler will not allow two case labels with the same constant. The boolean expression must be a compile-time constant value; for example, (sizeof(int) == 4) would be a valid expression in that context. This construct does not work at file scope (i.e. not inside a function), and so it must be wrapped inside a function.

Another popular[3] way of implementing assertions in C declares an array whose length is negative exactly when the assertion fails, as in the second sketch below. If the (BOOLEAN CONDITION) part evaluates to false then that code will not compile, because arrays may not have a negative length. If in fact the compiler allows a negative length then the initialization byte (the '!' part) should cause even such over-lenient compilers to complain. The boolean expression must be a compile-time constant value; for example, (sizeof(int) == 4) would be a valid expression in that context. Both of these methods require a way of constructing unique names. Modern compilers support a __COUNTER__ preprocessor define that facilitates the construction of unique names by returning monotonically increasing numbers for each compilation unit.[4] D provides static assertions through the use of static assert.[5]

Most languages allow assertions to be enabled or disabled globally, and sometimes independently. Assertions are often enabled during development and disabled during final testing and on release to the customer. Not checking assertions avoids the cost of evaluating the assertions while (assuming the assertions are free of side effects) still producing the same result under normal conditions. Under abnormal conditions, disabling assertion checking can mean that a program that would have aborted will continue to run. This is sometimes preferable. Some languages, including C and C++, can completely remove assertions at compile time using the preprocessor. Similarly, launching the Python interpreter with "-O" (for "optimize") as an argument will cause the Python code generator to not emit any bytecode for asserts.[6] Java requires an option to be passed to the run-time engine in order to enable assertions. Absent the option, assertions are bypassed, but they always remain in the code unless optimised away by a JIT compiler at run-time or excluded at compile time via the programmer manually placing each assertion behind an if (false) clause. Programmers can build checks into their code that are always active by bypassing or manipulating the language's normal assertion-checking mechanisms.
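Reconstructions of the two classic pre-C11 tricks described above; the macro names here are conventional choices, not mandated by any standard.

```c
/* Trick 1: duplicate case labels make the compiler reject a false condition.
   Must appear inside a function (case labels are illegal at file scope). */
#define STATIC_ASSERT_SWITCH(cond) \
    do { switch (0) { case 0: case (cond): ; } } while (0)

/* Trick 2: a negative array length makes the compiler reject a false
   condition; __COUNTER__ (or __LINE__) keeps the generated names unique,
   and the '!' initialization byte trips over-lenient compilers. */
#define SA_JOIN2(a, b) a##b
#define SA_JOIN(a, b) SA_JOIN2(a, b)
#define STATIC_ASSERT_ARRAY(cond) \
    static char SA_JOIN(static_assert_, __COUNTER__)[(cond) ? 1 : -1] = { '!' }

/* Usage: both compile only if int is four bytes wide. */
STATIC_ASSERT_ARRAY(sizeof(int) == 4);   /* file scope is fine here */

int main(void) {
    STATIC_ASSERT_SWITCH(sizeof(int) == 4);
    return 0;
}
```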
Consider the first sketch following this passage, which uses an assertion to handle an error. Here, the programmer is aware that malloc will return a NULL pointer if memory is not allocated. This is possible: the operating system does not guarantee that every call to malloc will succeed. If an out-of-memory error occurs the program will immediately abort. Without the assertion, the program would continue running until ptr was dereferenced, and possibly longer, depending on the specific hardware being used. So long as assertions are not disabled, an immediate exit is assured. But if a graceful failure is desired, the program has to handle the failure. For example, a server may have multiple clients, or may hold resources that will not be released cleanly, or it may have uncommitted changes to write to a datastore. In such cases it is better to fail a single transaction than to abort abruptly.

Another error is to rely on side effects of expressions used as arguments of an assertion. One should always keep in mind that assertions might not be executed at all, since their sole purpose is to verify that a condition which should always be true does in fact hold true. Consequently, if the program is considered to be error-free and released, assertions may be disabled and will no longer be evaluated. Consider another version of the previous example, shown in the second sketch below. It might look like a smart way to assign the return value of malloc to ptr and check if it is NULL in one step, but the malloc call and the assignment to ptr are a side effect of evaluating the expression that forms the assert condition. When the NDEBUG parameter is passed to the compiler, as when the program is considered to be error-free and released, the assert() statement is removed, so malloc() isn't called, rendering ptr uninitialised. This could potentially result in a segmentation fault or similar null pointer error much further down the line in program execution, causing bugs that may be sporadic and/or difficult to track down. Programmers sometimes use a similar VERIFY(X) define to alleviate this problem. Modern compilers may issue a warning when encountering such code.[7]

In 1947 reports by von Neumann and Goldstine[8] on their design for the IAS machine, they described algorithms using an early version of flow charts, in which they included assertions: "It may be true, that whenever C actually reaches a certain point in the flow diagram, one or more bound variables will necessarily possess certain specified values, or possess certain properties, or satisfy certain properties with each other. Furthermore, we may, at such a point, indicate the validity of these limitations. For this reason we will denote each area in which the validity of such limitations is being asserted, by a special box, which we call an assertion box." The assertional method for proving correctness of programs was advocated by Alan Turing. In a talk "Checking a Large Routine" at Cambridge, June 24, 1949 Turing suggested: "How can one check a large routine in the sense of making sure that it's right? In order that the man who checks may not have too difficult a task, the programmer should make a number of definite assertions which can be checked individually, and from which the correctness of the whole program easily follows".[9]
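The two malloc examples discussed above, reconstructed conventionally (BUFFER_SIZE is an assumed constant):

```c
#include <assert.h>
#include <stdlib.h>

#define BUFFER_SIZE 1024  /* assumed for illustration */

void version_one(void) {
    int *ptr = malloc(sizeof(int) * BUFFER_SIZE);
    assert(ptr != NULL);   /* abort immediately if allocation failed */
    /* ... use ptr ... */
    free(ptr);
}

void version_two(void) {
    int *ptr;
    /* BUG: the allocation happens inside the assertion, so compiling with
       -DNDEBUG removes the malloc call and leaves ptr uninitialised. */
    assert((ptr = malloc(sizeof(int) * BUFFER_SIZE)) != NULL);
    /* ... use ptr ... */
    free(ptr);
}
```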
https://en.wikipedia.org/wiki/Assertion_(computing)
The Guarded Command Language (GCL) is a programming language defined by Edsger Dijkstra for predicate transformer semantics in EWD472.[1] It combines programming concepts in a compact way. It makes it easier to develop a program and its proof hand-in-hand, with the proof ideas leading the way; moreover, parts of a program can actually be calculated. An important property of GCL is nondeterminism. For example, in the if-statement, several alternatives may be true, and the choice is made at runtime, when the if-statement is executed. This frees the programmer from having to make unnecessary choices and is an aid in the formal development of programs. GCL includes the multiple assignment statement. For example, execution of the statement x, y := y, x is done by first evaluating the right-hand side values and then storing them in the left-hand variables. Thus, this statement swaps the values of x and y. Several books discuss the development of programs using GCL.

A guarded command consists of a boolean condition or guard, and a statement "guarded" by it. The statement is only executed if the guard is true, so when reasoning about the statement, the condition can be assumed true. This makes it easier to prove the program meets a specification. A guarded command is a statement of the form G → S, where G is a proposition called the guard and S is a statement. skip and abort are important statements in the guarded command language. abort is the undefined instruction: do anything. It does not even need to terminate. It is used to describe the program when formulating a proof, in which case the proof usually fails. skip is the empty instruction: do nothing. It is often used when the syntax requires a statement but the state should not change. The assignment statement assigns values to variables, either one at a time (v := E) or several at once (v0, v1 := E0, E1). Statements are separated by one semicolon (;).

The selection (often called the "conditional statement" or "if statement") is a list of guarded commands, of which one is chosen to execute. If more than one guard is true, one statement whose guard is true is arbitrarily chosen to be executed. If no guard is true, the result is undefined, that is, equivalent to abort. Because at least one of the guards must be true, the empty statement skip is often needed. The statement if fi has no guarded commands, so there is never a true guard. Hence, if fi is equivalent to abort. Upon execution of a selection, the guards are evaluated. If none of the guards is true, then the selection aborts; otherwise one of the clauses with a true guard is chosen arbitrarily and its statement is executed. GCL does not specify an implementation. Since guards cannot have side effects and the choice of clause is arbitrary, an implementation may evaluate the guards in any sequence and choose the first true clause, for example.

Two small examples – setting x to 0 when an error flag is raised, and computing the maximum of a and b – are sketched, in pseudocode and in guarded command language, after this passage. In the first example, if the second guard is omitted and error is False, the result is abort. In the second, if a = b, either a or b is chosen as the new value for the maximum, with equal results. However, the implementation may find that one is easier or faster than the other. Since there is no difference to the programmer, any implementation will do.

The repetition, or loop, executes guarded commands enclosed between do and od. Execution of the repetition consists of executing 0 or more iterations, where an iteration consists of arbitrarily choosing a guarded command Gi → Si whose guard Gi is true and executing the command Si. Thus, if all guards are initially false, the repetition terminates immediately, without executing an iteration.
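The lost pseudocode/GCL example pairs are reconstructed below in the conventional notation of Dijkstra's language (□ separates alternatives); these follow the standard textbook forms rather than the exact lost listings.

```
-- pseudocode:
if error = true then x := 0

-- guarded command language (omitting the second guard gives abort
-- when error is false):
if error  → x := 0
□ ¬error  → skip
fi
```

```
-- pseudocode:
if a ≥ b then max := a else max := b

-- guarded command language (when a = b, either assignment may be chosen):
if a ≥ b → max := a
□ b ≥ a → max := b
fi
```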
Execution of the repetition do od, which has no guarded commands, executes 0 iterations, so do od is equivalent to skip.

A repetition can compute the greatest common divisor (see the sketch after this section): the repetition ends when a = b, in which case a and b hold the greatest common divisor of A and B. Dijkstra sees in this algorithm a way of synchronizing two infinite cycles a := a - b and b := b - a in such a way that a ≥ 0 and b ≥ 0 remain true. A variant of this repetition ends when b = 0, in which case the variables hold the solution to Bézout's identity: xA + yB = gcd(A,B).

Another repetition keeps on permuting elements while one of them is greater than its successor. This non-deterministic bubble sort is not more efficient than its deterministic version, but it is easier to prove: it will not stop while the elements are not sorted, and at each step it sorts at least two elements. Yet another repetition finds the value 1 ≤ y ≤ n for which a given integer function f is maximal; here not only the computation but also the final state is not necessarily uniquely determined.

Generalizing the observational congruence of Guarded Commands into a lattice has led to Refinement Calculus.[2] This has been mechanized in Formal Methods like B-Method that allow one to formally derive programs from their specifications.

Guarded commands are suitable for quasi-delay-insensitive circuit design because the repetition allows arbitrary relative delays for the selection of different commands. In this application, a logic gate driving a node y in the circuit consists of two guarded commands whose guards, PullDownGuard and PullUpGuard, are functions of the logic gate's inputs, describing when the gate pulls the output down or up, respectively. Unlike classical circuit evaluation models, the repetition for a set of guarded commands (corresponding to an asynchronous circuit) can accurately describe all possible dynamic behaviors of that circuit. Depending on the model one is willing to live with for the electrical circuit elements, additional restrictions on the guarded commands may be necessary for a guarded-command description to be entirely satisfactory. Common restrictions include stability, non-interference, and absence of self-invalidating commands.[3]

Guarded commands are used within the Promela programming language, which is used by the SPIN model checker. SPIN verifies correct operation of concurrent software applications. The Perl module Commands::Guarded implements a deterministic, rectifying variant on Dijkstra's guarded commands.
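A sketch of the greatest-common-divisor repetition discussed above; the loop terminates when a = b, at which point both variables hold gcd(A, B):

    a, b := A, B;
    do a > b → a := a - b
     □ b > a → b := b - a
    od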
https://en.wikipedia.org/wiki/Guarded_Command_Language
In concurrent programming, guarded suspension[1] is a software design pattern for managing operations that require both a lock to be acquired and a precondition to be satisfied before the operation can be executed. The guarded suspension pattern is typically applied to method calls in object-oriented programs, and involves suspending the method call, and the calling thread, until the precondition (acting as a guard) is satisfied.

Because it is blocking, the guarded suspension pattern is generally only used when the developer knows that a method call will be suspended for a finite and reasonable period of time. If a method call is suspended for too long, then the overall program will slow down or stop, waiting for the precondition to be satisfied. If the developer knows that the method call suspension will be indefinite or for an unacceptably long period, then the balking pattern may be preferred.

In Java, the Object class provides the wait() and notify() methods to assist with guarded suspension. In the implementation originally found in Kuchana (2004), if there is no precondition satisfied for the method call to be successful, then the method will wait until it finally enters a valid state. An example of an actual implementation would be a queue object with a get method that has a guard to detect when there are no items in the queue (a sketch follows below). Once the put method notifies the other methods (for example, a get method), then the get method can exit its guarded state and proceed with a call. Once the queue is empty, then the get method will enter a guarded state once again.
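A minimal Java sketch of the queue example described above (class and method names are illustrative, not Kuchana's original code):

    import java.util.ArrayDeque;
    import java.util.Queue;

    public class GuardedQueue {
        private final Queue<Integer> items = new ArrayDeque<>();

        // The guard: suspend the calling thread until the
        // precondition (queue non-empty) is satisfied.
        public synchronized Integer get() throws InterruptedException {
            while (items.isEmpty()) {
                wait();                 // releases the lock while suspended
            }
            return items.remove();
        }

        // Changing the state wakes suspended threads so they can
        // re-evaluate the guard.
        public synchronized void put(Integer item) {
            items.add(item);
            notifyAll();
        }
    }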
https://en.wikipedia.org/wiki/Guarded_suspension
In mathematics, the Iverson bracket, named after Kenneth E. Iverson, is a notation that generalises the Kronecker delta, which is the Iverson bracket of the statement x = y. It maps any statement to a function of the free variables in that statement. This function is defined to take the value 1 for the values of the variables for which the statement is true, and takes the value 0 otherwise. It is generally denoted by putting the statement inside square brackets:
$$[P]=\begin{cases}1&\text{if }P\text{ is true;}\\0&\text{otherwise.}\end{cases}$$
In other words, the Iverson bracket of a statement is the indicator function of the set of values for which the statement is true.

The Iverson bracket allows using capital-sigma notation without restriction on the summation index. That is, for any property $P(k)$ of the integer $k$, one can rewrite the restricted sum $\sum_{k:P(k)}f(k)$ in the unrestricted form $\sum_{k}f(k)\cdot[P(k)]$. With this convention, $f(k)$ does not need to be defined for the values of k for which the Iverson bracket equals 0; that is, a summand $f(k)[\textbf{false}]$ must evaluate to 0 regardless of whether $f(k)$ is defined.

The notation was originally introduced by Kenneth E. Iverson in his programming language APL,[1][2] though restricted to single relational operators enclosed in parentheses, while the generalisation to arbitrary statements, notational restriction to square brackets, and applications to summation, was advocated by Donald Knuth to avoid ambiguity in parenthesized logical expressions.[3]

There is a direct correspondence between arithmetic on Iverson brackets, logic, and set operations. For instance, let A and B be sets and $P(k_1,\dots)$ any property of integers; then we have
$$[P\land Q]=[P]\,[Q];$$
$$[P\lor Q]=[P]+[Q]-[P]\,[Q];$$
$$[\neg P]=1-[P];$$
$$[P\mathbin{\text{XOR}}Q]=\bigl|[P]-[Q]\bigr|;$$
$$[k\in A]+[k\in B]=[k\in A\cup B]+[k\in A\cap B];$$
$$[x\in A\cap B]=[x\in A]\,[x\in B];$$
$$[\forall m:P(k,m)]=\prod_m[P(k,m)];$$
$$[\exists m:P(k,m)]=\min\Bigl\{1,\sum_m[P(k,m)]\Bigr\}=1-\prod_m[\neg P(k,m)];$$
$$\#\{m\mid P(k,m)\}=\sum_m[P(k,m)].$$

The notation allows moving boundary conditions of summations (or integrals) as a separate factor into the summand, freeing up space around the summation operator, but more importantly allowing it to be manipulated algebraically.
We mechanically derive a well-known sum manipulation rule using Iverson brackets:
$$\sum_{k\in A}f(k)+\sum_{k\in B}f(k)=\sum_k f(k)\,[k\in A]+\sum_k f(k)\,[k\in B]=\sum_k f(k)\,([k\in A]+[k\in B])=\sum_k f(k)\,([k\in A\cup B]+[k\in A\cap B])=\sum_{k\in A\cup B}f(k)+\sum_{k\in A\cap B}f(k).$$

The well-known rule $\sum_{j=1}^{n}\sum_{k=1}^{j}f(j,k)=\sum_{k=1}^{n}\sum_{j=k}^{n}f(j,k)$ is likewise easily derived:
$$\sum_{j=1}^{n}\sum_{k=1}^{j}f(j,k)=\sum_{j,k}f(j,k)\,[1\leq j\leq n]\,[1\leq k\leq j]=\sum_{j,k}f(j,k)\,[1\leq k\leq j\leq n]=\sum_{j,k}f(j,k)\,[1\leq k\leq n]\,[k\leq j\leq n]=\sum_{k=1}^{n}\sum_{j=k}^{n}f(j,k).$$

For instance, Euler's totient function, which counts the number of positive integers up to n that are coprime to n, can be expressed by
$$\varphi(n)=\sum_{i=1}^{n}[\gcd(i,n)=1],\qquad\text{for }n\in\mathbb{N}^{+}.$$

Another use of the Iverson bracket is to simplify equations with special cases. For example, the formula
$$\sum_{\substack{1\leq k\leq n\\\gcd(k,n)=1}}k=\frac{1}{2}n\varphi(n)$$
is valid for n > 1 but is off by 1/2 for n = 1. To get an identity valid for all positive integers n (i.e., all values for which $\varphi(n)$ is defined), a correction term involving the Iverson bracket may be added:
$$\sum_{\substack{1\leq k\leq n\\\gcd(k,n)=1}}k=\frac{1}{2}n\bigl(\varphi(n)+[n=1]\bigr)$$

Many common functions, especially those with a natural piecewise definition, may be expressed in terms of the Iverson bracket. The Kronecker delta notation is a specific case of Iverson notation when the condition is equality. That is, $\delta_{ij}=[i=j]$.

The indicator function of a set $A$, often denoted $\mathbf{1}_A(x)$, $\mathbf{I}_A(x)$ or $\chi_A(x)$, is an Iverson bracket with set membership as its condition: $\mathbf{I}_A(x)=[x\in A]$.

The Heaviside step function, sign function,[1] and absolute value function are also easily expressed in this notation:
$$H(x)=[x\geq 0],\qquad\operatorname{sgn}(x)=[x>0]-[x<0],$$
and
$$|x|=x[x>0]-x[x<0]=x([x>0]-[x<0])=x\cdot\operatorname{sgn}(x).$$

The comparison functions max and min (returning the larger or smaller of two arguments) may be written as
$$\max(x,y)=x[x>y]+y[x\leq y]\qquad\text{and}\qquad\min(x,y)=x[x\leq y]+y[x>y].$$

The floor and ceiling functions can be expressed as
$$\lfloor x\rfloor=\sum_n n\cdot[n\leq x<n+1]\qquad\text{and}\qquad\lceil x\rceil=\sum_n n\cdot[n-1<x\leq n],$$
where the index n of summation is understood to range over all the integers.
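Because comparison operators in C already evaluate to 0 or 1, the Iverson bracket translates directly into code. The following sketch (function names are illustrative) computes φ(n) via the sum above:

    #include <stdio.h>

    /* Euclid's algorithm for the greatest common divisor. */
    static unsigned gcd(unsigned a, unsigned b) {
        while (b != 0) { unsigned t = a % b; a = b; b = t; }
        return a;
    }

    /* phi(n) = sum over 1 <= i <= n of [gcd(i,n) = 1]; the C
       expression (gcd(i, n) == 1) is the Iverson bracket itself. */
    static unsigned phi(unsigned n) {
        unsigned sum = 0;
        for (unsigned i = 1; i <= n; i++)
            sum += (gcd(i, n) == 1);
        return sum;
    }

    int main(void) {
        printf("%u\n", phi(12));   /* prints 4: 1, 5, 7, 11 */
        return 0;
    }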
The ramp function can be expressed as
$$R(x)=x\cdot[x\geq 0].$$

The trichotomy of the reals is equivalent to the following identity:
$$[a<b]+[a=b]+[a>b]=1.$$

The Möbius function has the property (and can be defined by recurrence as[4])
$$\sum_{d|n}\mu(d)=[n=1].$$

In the 1830s, Guglielmo dalla Sommaja used the expression $0^{0^{x}}$ to represent what now would be written $[x>0]$; he also used variants, such as $\left(1-0^{0^{-x}}\right)\left(1-0^{0^{x-a}}\right)$ for $[0\leq x\leq a]$.[3] Following one common convention (that $0^{0}=1$), those quantities are equal where defined: $0^{0^{x}}$ is 1 if x > 0, is 0 if x = 0, and is undefined otherwise.

In addition to the now-standard square brackets [ · ] and the original parentheses ( · ), blackboard bold brackets have also been used, e.g. ⟦ · ⟧, as well as other unusual forms of bracketing marks available in the publisher's typeface, accompanied by a marginal note.
https://en.wikipedia.org/wiki/Iverson_bracket
The material conditional (also known as material implication) is a binary operation commonly used in logic. When the conditional symbol $\to$ is interpreted as material implication, a formula $P\to Q$ is true unless $P$ is true and $Q$ is false.

Material implication is used in all the basic systems of classical logic as well as some nonclassical logics. It is assumed as a model of correct conditional reasoning within mathematics and serves as the basis for commands in many programming languages. However, many logics replace material implication with other operators such as the strict conditional and the variably strict conditional. Due to the paradoxes of material implication and related problems, material implication is not generally considered a viable analysis of conditional sentences in natural language.

In logic and related fields, the material conditional is customarily notated with an infix operator $\to$.[1] The material conditional is also notated using the infixes $\supset$ and $\Rightarrow$.[2] In the prefixed Polish notation, conditionals are notated as $Cpq$. In a conditional formula $p\to q$, the subformula $p$ is referred to as the antecedent and $q$ is termed the consequent of the conditional. Conditional statements may be nested such that the antecedent or the consequent may themselves be conditional statements, as in the formula $(p\to q)\to(r\to s)$.

In Arithmetices Principia: Nova Methodo Exposita (1889), Peano expressed the proposition "If $A$, then $B$" as $A$ Ɔ $B$ with the symbol Ɔ, which is the opposite of C.[3] He also expressed the proposition $A\supset B$ as $A$ Ɔ $B$.[4][5][6] Hilbert expressed the proposition "If A, then B" as $A\to B$ in 1918.[1] Russell followed Peano in his Principia Mathematica (1910–1913), in which he expressed the proposition "If A, then B" as $A\supset B$. Following Russell, Gentzen expressed the proposition "If A, then B" as $A\supset B$. Heyting expressed the proposition "If A, then B" as $A\supset B$ at first but later came to express it as $A\to B$ with a right-pointing arrow. Bourbaki expressed the proposition "If A, then B" as $A\to B$ in 1954.[7]

From a classical semantic perspective, material implication is the binary truth-functional operator which returns "true" unless its first argument is true and its second argument is false. This semantics can be shown graphically in the truth table below. One can also consider the equivalence $A\to B\equiv\neg(A\land\neg B)\equiv\neg A\lor B$. The conditionals $(A\to B)$ where the antecedent $A$ is false are called "vacuous truths". Examples are ...
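The truth table for material implication, with T for true and F for false:

    A   B   A → B
    T   T   T
    T   F   F
    F   T   T
    F   F   T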
Formulas over the set of connectives $\{\to,\bot\}$[8] are called f-implicational.[9] In classical logic the other connectives, such as $\neg$ (negation), $\land$ (conjunction), $\lor$ (disjunction) and $\leftrightarrow$ (equivalence), can be defined in terms of $\to$ and $\bot$ (falsity):[10]
$$\neg A\ \overset{\text{def}}{=}\ A\to\bot$$
$$A\land B\ \overset{\text{def}}{=}\ (A\to(B\to\bot))\to\bot$$
$$A\lor B\ \overset{\text{def}}{=}\ (A\to\bot)\to B$$
$$A\leftrightarrow B\ \overset{\text{def}}{=}\ \{(A\to B)\to[(B\to A)\to\bot]\}\to\bot$$

The validity of f-implicational formulas can be semantically established by the method of analytic tableaux, and their theorems can be derived in Hilbert-style proof systems. The semantic definition by truth tables does not permit the examination of structurally identical propositional forms in various logical systems, where different properties may be demonstrated. The language considered here is restricted to f-implicational formulas.

Consider the following (candidate) natural deduction rules.

Implication introduction: if assuming $A$ one can derive $B$, then one can conclude $A\to B$:
$$\frac{\begin{array}{c}[A]\\\vdots\\B\end{array}}{A\to B}\ (\to\text{I})$$
Here $[A]$ is an assumption that is discharged when applying the rule.

Implication elimination, which corresponds to modus ponens:
$$\frac{A\to B\quad A}{B}\ (\to\text{E})\qquad\frac{A\quad A\to B}{B}\ (\to\text{E})$$

Double negation elimination:
$$\frac{(A\to\bot)\to\bot}{A}\ (\neg\neg\text{E})$$

From falsum ($\bot$) one can derive any formula (ex falso quodlibet):
$$\frac{\bot}{A}\ (\bot\text{E})$$

In classical logic, material implication validates a number of rules of inference and, on classical interpretations of the other connectives, a number of entailments, and it figures in many tautologies.

Material implication does not closely match the usage of conditional sentences in natural language. For example, even though material conditionals with false antecedents are vacuously true, the natural language statement "If 8 is odd, then 3 is prime" is typically judged false. Similarly, any material conditional with a true consequent is itself true, but speakers typically reject sentences such as "If I have a penny in my pocket, then Paris is in France". These classic problems have been called the paradoxes of material implication.[16] In addition to the paradoxes, a variety of other arguments have been given against a material implication analysis. For instance, counterfactual conditionals would all be vacuously true on such an account, when in fact some are false.[17]

In the mid-20th century, a number of researchers including H. P. Grice and Frank Jackson proposed that pragmatic principles could explain the discrepancies between natural language conditionals and the material conditional.
On their accounts, conditionals denote material implication but end up conveying additional information when they interact with conversational norms such as Grice's maxims.[16][18] Recent work in formal semantics and philosophy of language has generally eschewed material implication as an analysis for natural-language conditionals.[18] In particular, such work has often rejected the assumption that natural-language conditionals are truth functional in the sense that the truth value of "If P, then Q" is determined solely by the truth values of P and Q.[16] Thus semantic analyses of conditionals typically propose alternative interpretations built on foundations such as modal logic, relevance logic, probability theory, and causal models.[18][16][19]

Similar discrepancies have been observed by psychologists studying conditional reasoning, for instance, in the notorious Wason selection task study, where less than 10% of participants reasoned according to the material conditional. Some researchers have interpreted this result as a failure of the participants to conform to normative laws of reasoning, while others interpret the participants as reasoning normatively according to nonclassical laws.[20][21][22]
https://en.wikipedia.org/wiki/Logical_conditional
In computer programming, a sentinel node is a specifically designated node used with linked lists and trees as a traversal path terminator. This type of node does not hold or reference any data managed by the data structure. Sentinels are used as an alternative to using NULL as the path terminator in order to obtain benefits such as faster and simpler traversal code.

Two versions of a subroutine for looking up a given search key in a singly linked list can be compared (both are sketched in C after this section). The first one uses the sentinel value NULL, and the second one a (pointer to the) sentinel node Sentinel, as the end-of-list indicator. The declarations of the singly linked list data structure and the outcomes of both subroutines are the same.

In the first version, the for-loop contains two tests per iteration: one against the search key and one for the end of the list. In the second version, the globally available pointer sentinel to the deliberately prepared data structure Sentinel is used as the end-of-list indicator, and the for-loop contains only one test per iteration. Note that the pointer sentinel always has to be kept at the end of the list. This has to be maintained by the insert and delete functions. It is, however, about the same effort as when using a NULL pointer.

Linked list implementations, especially one of a circular, doubly linked list, can be simplified remarkably using a sentinel node to demarcate the beginning and end of the list. In a Python implementation of a circular doubly linked list, for example, an add_node() method can take, in a parameter curnode, the node that will be displaced by the new node. For appending to the left, this is the head of a non-empty list, while for appending to the right, it is the tail. But because the linkage is set up to refer back to the sentinel, the code just works for empty lists as well, where curnode will be the sentinel node.

For binary search trees (with general declarations similar to the article Binary search tree), the globally available pointer sentinel to the single deliberately prepared data structure Sentinel = *sentinel is used to indicate the absence of a child. Note that the pointer sentinel always has to represent every leaf of the tree. This has to be maintained by the insert and delete functions. It is, however, about the same effort as when using a NULL pointer.
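A minimal C sketch of the two lookup subroutines described above (declarations are illustrative); planting the key in the sentinel removes the end-of-list test from the loop:

    struct sll_node {              /* one node of the singly linked list */
        int key;
        struct sll_node *next;
    };

    /* Version 1: NULL as end-of-list indicator; two tests per iteration. */
    struct sll_node *search_null(struct sll_node *first, int search_key) {
        struct sll_node *node;
        for (node = first; node != NULL; node = node->next)
            if (node->key == search_key)
                return node;       /* found */
        return NULL;               /* not found */
    }

    /* Version 2: a global sentinel node terminates every list, so the
       loop needs only one test per iteration. */
    struct sll_node Sentinel = {0, &Sentinel};
    struct sll_node *sentinel = &Sentinel;

    struct sll_node *search_sentinel(struct sll_node *first, int search_key) {
        struct sll_node *node;
        sentinel->key = search_key;   /* guarantees the loop terminates */
        for (node = first; node->key != search_key; node = node->next)
            ;
        return node != sentinel ? node : NULL;
    }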
https://en.wikipedia.org/wiki/Sentinel_node
In computer programming languages, a switch statement is a type of selection control mechanism used to allow the value of a variable or expression to change the control flow of program execution via search and map. Switch statements function somewhat similarly to the if statement used in programming languages like C/C++, C#, Visual Basic .NET, Java and exist in most high-level imperative programming languages such as Pascal, Ada, C/C++, C#,[1]: 374–375 Visual Basic .NET, Java,[2]: 157–167 and in many other types of language, using such keywords as switch, case, select, or inspect.

Switch statements come in two main variants: a structured switch, as in Pascal, which takes exactly one branch, and an unstructured switch, as in C, which functions as a type of goto. The main reasons for using a switch include improving clarity, by reducing otherwise repetitive coding, and (if the heuristics permit) also offering the potential for faster execution through easier compiler optimization in many cases.

In his 1952 text Introduction to Metamathematics, Stephen Kleene formally proved that the CASE function (the IF-THEN-ELSE function being its simplest form) is a primitive recursive function, where he defines the notion "definition by cases" in the following manner:

"#F. The function φ defined thus

    φ(x1, ..., xn) = φ1(x1, ..., xn)   if Q1(x1, ..., xn),
                   ...
                   = φm(x1, ..., xn)   if Qm(x1, ..., xn),
                   = φm+1(x1, ..., xn) otherwise,

where Q1, ..., Qm are mutually exclusive predicates (or φ(x1, ..., xn) shall have the value given by the first clause which applies) is primitive recursive in φ1, ..., φm+1, Q1, ..., Qm+1."

Kleene provides a proof of this in terms of the Boolean-like recursive functions "sign-of" sg( ) and "not sign of" ~sg( ) (Kleene 1952:222–223); the first returns 1 if its input is positive and −1 if its input is negative.

Boolos-Burgess-Jeffrey make the additional observation that "definition by cases" must be both mutually exclusive and collectively exhaustive. They too offer a proof of the primitive recursiveness of this function (Boolos-Burgess-Jeffrey 2002:74–75). The IF-THEN-ELSE is the basis of the McCarthy formalism: its usage replaces both primitive recursion and the mu-operator.

The earliest Fortran compilers supported the computed GOTO statement for multi-way branching. Early ALGOL compilers supported a SWITCH data type which contains a list of "designational expressions". A GOTO statement could reference a switch variable and, by providing an index, branch to the desired destination. With experience it was realized that a more formal multi-way construct, with a single point of entrance and exit, was needed. Languages such as BCPL, ALGOL-W, and ALGOL-68 introduced forms of this construct which have survived through modern languages.

In most languages, programmers write a switch statement across many individual lines using one or two keywords. A typical syntax involves the switch keyword, a control expression, and a series of alternatives. Each alternative begins with the particular value, or list of values (see below), that the control variable may match and which will cause the control to goto the corresponding sequence of statements. The value (or list/range of values) is usually separated from the corresponding statement sequence by a colon or by an implication arrow. In many languages, every case must also be preceded by a keyword such as case or when.

An optional default case is typically also allowed, specified by a default, otherwise, or else keyword. This executes when none of the other cases match the control expression. In some languages, such as C, if no case matches and the default is omitted, the switch statement simply does nothing. In others, like PL/I, an error is raised.

Semantically, there are two main forms of switch statements.
The first form is structured switches, as in Pascal, where exactly one branch is taken, and the cases are treated as separate, exclusive blocks. This functions as a generalized if–then–else conditional, here with any number of branches, not just two. The second form is unstructured switches, as in C, where the cases are treated as labels within a single block, and the switch functions as a generalized goto. This distinction is referred to as the treatment of fallthrough, which is elaborated below.

In many languages, only the matching block is executed, and then execution continues at the end of the switch statement. These include the Pascal family (Object Pascal, Modula, Oberon, Ada, etc.) as well as PL/I, modern forms of Fortran and BASIC dialects influenced by Pascal, most functional languages, and many others. To allow multiple values to execute the same code (and avoid needing to duplicate code), Pascal-type languages permit any number of values per case, given as a comma-separated list, as a range, or as a combination.

Languages derived from C, and more generally those influenced by Fortran's computed GOTO, instead feature fallthrough, where control moves to the matching case, and then execution continues ("falls through") to the statements associated with the next case in the source text. This also allows multiple values to match the same point without any special syntax: they are just listed with empty bodies. Values can be given special treatment with code in the case body. In practice, fallthrough is usually prevented with a break keyword at the end of the matching body, which exits execution of the switch block, but this can cause bugs due to unintentional fallthrough if the programmer forgets to insert the break statement (see the C sketch after this section). This is thus seen by many[4] as a language wart, and warned against in some lint tools. Syntactically, the cases are interpreted as labels, not blocks, and the switch and break statements explicitly change control flow. Some languages influenced by C, such as JavaScript, retain default fallthrough, while others remove fallthrough, or only allow it in special circumstances. Notable variations on this in the C family include C#, in which all blocks must be terminated with a break or return unless the block is empty (i.e. fallthrough is used as a way to specify multiple values).

In some cases languages provide optional fallthrough. For example, Perl does not fall through by default, but a case may explicitly do so using a continue keyword. This prevents unintentional fallthrough but allows it when desired. Similarly, Bash defaults to not falling through when terminated with ;;, but allows fallthrough[5] with ;& or ;;& instead. An example of a switch statement that relies on fallthrough is Duff's device.

Optimizing compilers such as GCC or Clang may compile a switch statement into either a branch table or a binary search through the values in the cases.[6] A branch table allows the switch statement to determine with a small, constant number of instructions which branch to execute without having to go through a list of comparisons, while a binary search takes only a logarithmic number of comparisons, measured in the number of cases in the switch statement. Normally, the only method of finding out if this optimization has occurred is by actually looking at the resultant assembly or machine code output that has been generated by the compiler.
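A minimal C sketch of the behaviors discussed above: empty case bodies group several values, and break prevents unintentional fallthrough (names are illustrative):

    #include <stdio.h>

    void describe(int ch) {
        switch (ch) {
        case ' ':                /* empty bodies: all three labels   */
        case '\t':               /* fall through to the same code,   */
        case '\n':               /* matching multiple values at once */
            puts("whitespace");
            break;               /* without this break, control would */
        case '0':                /* fall through into the next case   */
            puts("zero");
            break;
        default:
            puts("something else");
            break;
        }
    }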
In some languages and programming environments, the use of a case or switch statement is considered superior to an equivalent series of if ... else if statements because it is easier to read, debug, and maintain. Additionally, an optimized implementation may execute much faster than the alternative, because it is often implemented by using an indexed branch table.[7] For example, deciding program flow based on a single character's value, if correctly implemented, is vastly more efficient than the alternative, reducing instruction path lengths considerably. When implemented as such, a switch statement essentially becomes a perfect hash.

In terms of the control-flow graph, a switch statement consists of two nodes (entrance and exit), plus one edge between them for each option. By contrast, a sequence of "if...else if...else if" statements has an additional node for every case other than the first and last, together with a corresponding edge. The resulting control-flow graph for the sequence of "if"s thus has many more nodes and almost twice as many edges, with these not adding any useful information. However, the simple branches in the if statements are individually conceptually easier than the complex branch of a switch statement. In terms of cyclomatic complexity, both of these options increase it by k−1 if given k cases.

Switch expressions were introduced in Java SE 12, 19 March 2019, as a preview feature. Here a whole switch expression can be used to return a value. There is also a new form of case label, case L ->, where the right-hand side is a single expression. This also prevents fallthrough and requires that the cases are exhaustive. In Java SE 13 the yield statement was introduced, and in Java SE 14 switch expressions became a standard language feature (see the Java sketch after this section).[8][9][10]

Many languages evaluate expressions inside switch blocks at runtime, allowing a number of less obvious uses for the construction. This prohibits certain compiler optimizations, so it is more common in dynamic and scripting languages, where the enhanced flexibility is more important than the performance overhead. For example, in PHP, a constant can be used as the "variable" to check against, and the first case statement which evaluates to that constant will be executed. This feature is also useful for checking multiple variables against one value rather than one variable against many values. COBOL also supports this form (and other forms) in the EVALUATE statement. PL/I has an alternative form of the SELECT statement where the control expression is omitted altogether and the first WHEN that evaluates to true is executed.

In Ruby, due to its handling of === equality, the statement can be used to test for a variable's class. Ruby also returns a value that can be assigned to a variable, and doesn't actually require the case to have any parameters (acting a bit like an else if statement). Switch-style dispatch can also be written directly in assembly language.

For Python 3.10.6, PEPs 634–636 were accepted, adding the match and case keywords.[11][12][13][14] Unlike other languages, Python notably doesn't exhibit fallthrough behavior.

A number of languages implement a form of switch statement in exception handling, where if an exception is raised in a block, a separate branch is chosen, depending on the exception. In some cases a default branch, if no exception is raised, is also present. An early example is Modula-3, which uses the TRY...EXCEPT syntax, where each EXCEPT defines a case. This is also found in Delphi, Scala, and Visual Basic .NET. Alternatives to switch statements include a series of if–else conditionals, lookup tables, and polymorphic dispatch.
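A sketch of a Java SE 14 switch expression, in the spirit of the example referred to above (class and method names are illustrative):

    public class SwitchExpr {
        enum Day { MON, TUE, WED, THU, FRI, SAT, SUN }

        // A switch expression returns a value; "->" labels prevent
        // fallthrough, and the compiler checks exhaustiveness.
        static int letters(Day day) {
            return switch (day) {
                case MON, FRI, SUN -> 6;
                case TUE           -> 7;
                case THU, SAT      -> 8;
                case WED           -> {
                    int len = "WEDNESDAY".length();
                    yield len;      // yield returns a value from a block
                }
            };
        }

        public static void main(String[] args) {
            System.out.println(letters(Day.WED));   // prints 9
        }
    }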
https://en.wikipedia.org/wiki/Switch_statement
Substructural type systems are a family of type systems analogous to substructural logics where one or more of the structural rules are absent or only allowed under controlled circumstances. Such systems can constrain access to system resources such as files, locks, and memory by keeping track of changes of state and prohibiting invalid states.[1]: 4

Several type systems have emerged by discarding some of the structural rules of exchange, weakening, and contraction. The explanation for affine type systems is best understood if rephrased as "every occurrence of a variable is used at most once".

Ordered types correspond to noncommutative logic, where exchange, contraction and weakening are discarded. This can be used to model stack-based memory allocation (contrast with linear types, which can be used to model heap-based memory allocation).[1]: 30–31 Without the exchange property, an object may only be used when at the top of the modelled stack, after which it is popped off, resulting in every variable being used exactly once, in the order it was introduced.

Linear types correspond to linear logic and ensure that objects are used exactly once. This allows the system to safely deallocate an object after its use,[1]: 6 or to design software interfaces that guarantee a resource cannot be used once it has been closed or transitioned to a different state.[2] The Clean programming language makes use of uniqueness types (a variant of linear types) to help support concurrency, input/output, and in-place update of arrays.[1]: 43

Linear type systems allow references but not aliases. To enforce this, a reference goes out of scope after appearing on the right-hand side of an assignment, thus ensuring that only one reference to any object exists at once. Note that passing a reference as an argument to a function is a form of assignment, as the function parameter will be assigned the value inside the function, and therefore such use of a reference also causes it to go out of scope.

The single-reference property makes linear type systems suitable as programming languages for quantum computing, as it reflects the no-cloning theorem of quantum states. From the category theory point of view, no-cloning is a statement that there is no diagonal functor which could duplicate states; similarly, from the combinatory logic point of view, there is no K-combinator which can destroy states. From the lambda calculus point of view, a variable x can appear exactly once in a term.[3]

Linear type systems are the internal language of closed symmetric monoidal categories, much in the same way that simply typed lambda calculus is the language of Cartesian closed categories. More precisely, one may construct functors between the category of linear type systems and the category of closed symmetric monoidal categories.[4]

Affine types are a version of linear types that allow a resource to be discarded (i.e. not used), corresponding to affine logic. An affine resource can be used at most once, while a linear one must be used exactly once. Relevant types correspond to relevant logic, which allows exchange and contraction but not weakening, which translates to every variable being used at least once.

The nomenclature offered by substructural type systems is useful to characterize resource management aspects of a language. Resource management is the aspect of language safety concerned with ensuring that each allocated resource is deallocated exactly once. Thus, the resource interpretation is only concerned with uses that transfer ownership – moving, where ownership is the responsibility to free the resource.
Uses that don't transfer ownership – borrowing – are not in scope of this interpretation, but lifetime semantics further restrict these uses to be between allocation and deallocation.

Under the resource interpretation, an affine type cannot be spent more than once. As an example, the same variant of Hoare's vending machine can be expressed in English, in logic, and in Rust (see the Rust sketch after this section). What it means for Coin to be an affine type in this example (which it is, unless it implements the Copy trait) is that trying to spend the same coin twice is an invalid program that the compiler is entitled to reject. In other words, an affine type system can express the typestate pattern: functions can consume and return an object wrapped in different types, acting like state transitions in a state machine that stores its state as a type in the caller's context – a typestate. An API can exploit this to statically enforce that its functions are called in a correct order. What it doesn't mean, however, is that a variable can't be used without using it up. What Rust is not able to express is a coin type that cannot go out of scope – that would take a linear type.

Under the resource interpretation, a linear type not only can be moved, like an affine type, but must be moved – going out of scope is an invalid program. An attraction of linear types is that destructors become regular functions that can take arguments, can fail, and so on.[5] This may, for example, avoid the need to keep state that is only used for destruction. A general advantage of passing function dependencies explicitly is that the order of function calls – destruction order – becomes statically verifiable in terms of the arguments' lifetimes. Compared to internal references, this does not require lifetime annotations as in Rust. As with manual resource management, a practical problem is that any early return, as is typical of error handling, must achieve the same cleanup. This becomes pedantic in languages that have stack unwinding, where every function call is a potential early return. However, as a close analogy, the semantics of implicitly inserted destructor calls can be restored with deferred function calls.[6]

Under the resource interpretation, a normal type does not restrict how many times a variable can be moved from. C++ (specifically its nondestructive move semantics) falls in this category. A number of programming languages support linear or affine types.[citation needed]
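A minimal Rust sketch of the vending machine idea described above (type and function names are illustrative):

    struct Coin;          // does not implement Copy, so values move
    struct Candy;

    // Buying consumes the coin: ownership moves into the function.
    fn buy_candy(_c: Coin) -> Candy {
        Candy
    }

    fn main() {
        let c = Coin;
        let _candy = buy_candy(c);    // c is moved here...
        // let _more = buy_candy(c);  // ...so spending it again would be
                                      // rejected: "use of moved value: c"
    }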
https://en.wikipedia.org/wiki/Linear_type
Linear logic is a substructural logic proposed by French logician Jean-Yves Girard as a refinement of classical and intuitionistic logic, joining the dualities of the former with many of the constructive properties of the latter.[1] Although the logic has also been studied for its own sake, more broadly, ideas from linear logic have been influential in fields such as programming languages, game semantics, and quantum physics (because linear logic can be seen as the logic of quantum information theory),[2] as well as linguistics,[3] particularly because of its emphasis on resource-boundedness, duality, and interaction.

Linear logic lends itself to many different presentations, explanations, and intuitions. Proof-theoretically, it derives from an analysis of classical sequent calculus in which uses of the structural rules contraction and weakening are carefully controlled. Operationally, this means that logical deduction is no longer merely about an ever-expanding collection of persistent "truths", but also a way of manipulating resources that cannot always be duplicated or thrown away at will. In terms of simple denotational models, linear logic may be seen as refining the interpretation of intuitionistic logic by replacing cartesian (closed) categories by symmetric monoidal (closed) categories, or the interpretation of classical logic by replacing Boolean algebras by C*-algebras.[citation needed]

The language of classical linear logic (CLL) is defined inductively by the BNF notation

    A ::= p ∣ p⊥ ∣ A ⊗ A ∣ A ⅋ A ∣ A & A ∣ A ⊕ A ∣ 1 ∣ ⊥ ∣ ⊤ ∣ 0 ∣ !A ∣ ?A

Here p and p⊥ range over logical atoms. For reasons to be explained below, the connectives ⊗, ⅋, 1, and ⊥ are called multiplicatives, the connectives &, ⊕, ⊤, and 0 are called additives, and the connectives ! and ? are called exponentials. Binary connectives ⊗, ⊕, & and ⅋ are associative and commutative; 1 is the unit for ⊗, 0 is the unit for ⊕, ⊥ is the unit for ⅋ and ⊤ is the unit for &.

Every proposition A in CLL has a dual A⊥, defined as shown below. Observe that (-)⊥ is an involution, i.e., A⊥⊥ = A for all propositions. A⊥ is also called the linear negation of A. The duality suggests another way of classifying the connectives of linear logic, termed polarity: the connectives ⊗, ⊕, 1, 0, and ! are called positive, while their duals ⅋, &, ⊥, ⊤, and ? are called negative.

Linear implication is not included in the grammar of connectives, but is definable in CLL using linear negation and multiplicative disjunction, by A ⊸ B := A⊥ ⅋ B. The connective ⊸ is sometimes pronounced "lollipop", owing to its shape.

One way of defining linear logic is as a sequent calculus. We use the letters Γ and Δ to range over lists of propositions A1, ..., An, also called contexts. A sequent places a context to the left and the right of the turnstile, written Γ ⊢ Δ. Intuitively, the sequent asserts that the conjunction of Γ entails the disjunction of Δ (though we mean the "multiplicative" conjunction and disjunction, as explained below). Girard describes classical linear logic using only one-sided sequents (where the left-hand context is empty), and we follow here that more economical presentation. This is possible because any premises to the left of a turnstile can always be moved to the other side and dualised.
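Explicitly, linear negation is defined on compound propositions by the standard De Morgan-style clauses:

    (p)⊥ = p⊥              (p⊥)⊥ = p
    (A ⊗ B)⊥ = A⊥ ⅋ B⊥     (A ⅋ B)⊥ = A⊥ ⊗ B⊥
    (A ⊕ B)⊥ = A⊥ & B⊥     (A & B)⊥ = A⊥ ⊕ B⊥
    1⊥ = ⊥                 ⊥⊥ = 1
    0⊥ = ⊤                 ⊤⊥ = 0
    (!A)⊥ = ?(A⊥)          (?A)⊥ = !(A⊥)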
We now give inference rules describing how to build proofs of sequents.[4]

First, to formalize the fact that we do not care about the order of propositions inside a context, we add the structural rule of exchange: from ⊢ Γ, A, B, Δ one may infer ⊢ Γ, B, A, Δ. Note that we do not add the structural rules of weakening and contraction, because we do care about the absence of propositions in a sequent, and the number of copies present.

Next we add initial sequents and cuts (see the rule summary after this section). The cut rule can be seen as a way of composing proofs, and initial sequents serve as the units for composition. In a certain sense these rules are redundant: as we introduce additional rules for building proofs below, we will maintain the property that arbitrary initial sequents can be derived from atomic initial sequents, and that whenever a sequent is provable it can be given a cut-free proof. Ultimately, this canonical form property (which can be divided into the completeness of atomic initial sequents and the cut-elimination theorem, inducing a notion of analytic proof) lies behind the applications of linear logic in computer science, since it allows the logic to be used in proof search and as a resource-aware lambda-calculus.

Now, we explain the connectives by giving logical rules. Typically in sequent calculus one gives both "right-rules" and "left-rules" for each connective, essentially describing two modes of reasoning about propositions involving that connective (e.g., verification and falsification). In a one-sided presentation, one instead makes use of negation: the right-rules for a connective (say ⅋) effectively play the role of left-rules for its dual (⊗). So, we should expect a certain "harmony" between the rule(s) for a connective and the rule(s) for its dual.

The rules for multiplicative conjunction (⊗) and disjunction (⅋), and for their units, appear in the summary below. Observe that the rules for multiplicative conjunction and disjunction are admissible for plain conjunction and disjunction under a classical interpretation (i.e., they are admissible rules in LK).

The rules for additive conjunction (&) and disjunction (⊕), and for their units, are again admissible under a classical interpretation. But now we can explain the basis for the multiplicative/additive distinction in the rules for the two different versions of conjunction: for the multiplicative connective (⊗), the context of the conclusion (Γ, Δ) is split up between the premises, whereas for the additive connective (&) the context of the conclusion (Γ) is carried whole into both premises.

The exponentials are used to give controlled access to weakening and contraction. Specifically, we add structural rules of weakening and contraction for ?'d propositions,[5] and logical rules in which ?Γ stands for a list of propositions each prefixed with ?. One might observe that the rules for the exponentials follow a different pattern from the rules for the other connectives, resembling the inference rules governing modalities in sequent calculus formalisations of the normal modal logic S4, and that there is no longer such a clear symmetry between the duals ! and ?. This situation is remedied in alternative presentations of CLL (e.g., the LU presentation).

In addition to the De Morgan dualities described above, important equivalences in linear logic include distributivity laws relating the multiplicative and additive connectives. By the definition of A ⊸ B as A⊥ ⅋ B, the last two distributivity laws also give corresponding laws for linear implication. (Here A ≣ B is (A ⊸ B) & (B ⊸ A).)
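A standard one-sided presentation of the rules referred to above, with the inference figures flattened into "from premises, infer conclusion" form:

    Exchange:     from ⊢ Γ, A, B, Δ infer ⊢ Γ, B, A, Δ
    Identity:     ⊢ A, A⊥
    Cut:          from ⊢ Γ, A and ⊢ A⊥, Δ infer ⊢ Γ, Δ
    ⊗:            from ⊢ Γ, A and ⊢ Δ, B infer ⊢ Γ, Δ, A ⊗ B
    ⅋:            from ⊢ Γ, A, B infer ⊢ Γ, A ⅋ B
    1:            ⊢ 1
    ⊥:            from ⊢ Γ infer ⊢ Γ, ⊥
    &:            from ⊢ Γ, A and ⊢ Γ, B infer ⊢ Γ, A & B
    ⊕:            from ⊢ Γ, A infer ⊢ Γ, A ⊕ B; from ⊢ Γ, B infer ⊢ Γ, A ⊕ B
    ⊤:            ⊢ Γ, ⊤    (there is no rule for 0)
    Weakening:    from ⊢ Γ infer ⊢ Γ, ?A
    Contraction:  from ⊢ Γ, ?A, ?A infer ⊢ Γ, ?A
    Dereliction:  from ⊢ Γ, A infer ⊢ Γ, ?A
    Promotion:    from ⊢ ?Γ, A infer ⊢ ?Γ, !A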
A map that is not an isomorphism yet plays a crucial role in linear logic is the linear distribution A ⊗ (B ⅋ C) ⊸ (A ⊗ B) ⅋ C. Linear distributions are fundamental in the proof theory of linear logic. The consequences of this map were first investigated in Cockett & Seely (1997) and called a "weak distribution".[6] In subsequent work it was renamed to "linear distribution" to reflect the fundamental connection to linear logic. Certain other distributivity formulas are not in general an equivalence, only an implication.

Both intuitionistic and classical implication can be recovered from linear implication by inserting exponentials: intuitionistic implication is encoded as !A ⊸ B, while classical implication can be encoded as !?A ⊸ ?B or !A ⊸ ?!B (or a variety of alternative possible translations).[7] The idea is that exponentials allow us to use a formula as many times as we need, which is always possible in classical and intuitionistic logic. Formally, there exists a translation of formulas of intuitionistic logic to formulas of linear logic in a way that guarantees that the original formula is provable in intuitionistic logic if and only if the translated formula is provable in linear logic. Using the Gödel–Gentzen negative translation, we can thus embed classical first-order logic into linear first-order logic.

Lafont (1993) first showed how intuitionistic linear logic can be explained as a logic of resources, so providing the logical language with access to formalisms that can be used for reasoning about resources within the logic itself, rather than, as in classical logic, by means of non-logical predicates and relations. Tony Hoare (1985)'s classic example of the vending machine can be used to illustrate this idea.

Suppose we represent having a candy bar by the atomic proposition candy, and having a dollar by $1. To state the fact that a dollar will buy you one candy bar, we might write the implication $1 ⇒ candy. But in ordinary (classical or intuitionistic) logic, from A and A ⇒ B one can conclude A ∧ B. So, ordinary logic leads us to believe that we can buy the candy bar and keep our dollar! Of course, we can avoid this problem by using more sophisticated encodings,[clarification needed] although typically such encodings suffer from the frame problem. However, the rejection of weakening and contraction allows linear logic to avoid this kind of spurious reasoning even with the "naive" rule. Rather than $1 ⇒ candy, we express the property of the vending machine as a linear implication $1 ⊸ candy. From $1 and this fact, we can conclude candy, but not $1 ⊗ candy. In general, we can use the linear logic proposition A ⊸ B to express the validity of transforming resource A into resource B.

Running with the example of the vending machine, consider the "resource interpretations" of the other multiplicative and additive connectives. (The exponentials provide the means to combine this resource interpretation with the usual notion of persistent logical truth.)

Multiplicative conjunction (A ⊗ B) denotes simultaneous occurrence of resources, to be used as the consumer directs. For example, if you buy a stick of gum and a bottle of soft drink, then you are requesting gum ⊗ drink. The constant 1 denotes the absence of any resource, and so functions as the unit of ⊗.

Additive conjunction (A & B) represents alternative occurrence of resources, the choice of which the consumer controls. If in the vending machine there is a packet of chips, a candy bar, and a can of soft drink, each costing one dollar, then for that price you can buy exactly one of these products. Thus we write $1 ⊸ (candy & chips & drink).
We do not write $1 ⊸ (candy ⊗ chips ⊗ drink), which would imply that one dollar suffices for buying all three products together. However, from ($1 ⊸ (candy & chips & drink)) ⊗ ($1 ⊸ (candy & chips & drink)) ⊗ ($1 ⊸ (candy & chips & drink)), we can correctly deduce $3 ⊸ (candy ⊗ chips ⊗ drink), where $3 := $1 ⊗ $1 ⊗ $1.

The unit ⊤ of additive conjunction can be seen as a wastebasket for unneeded resources. For example, we can write $3 ⊸ (candy ⊗ ⊤) to express that with three dollars you can get a candy bar and some other stuff, without being more specific (for example, chips and a drink, or $2, or $1 and chips, etc.).

Additive disjunction (A ⊕ B) represents alternative occurrence of resources, the choice of which the machine controls. For example, suppose the vending machine permits gambling: insert a dollar and the machine may dispense a candy bar, a packet of chips, or a soft drink. We can express this situation as $1 ⊸ (candy ⊕ chips ⊕ drink). The constant 0 represents a product that cannot be made, and thus serves as the unit of ⊕ (a machine that might produce A or 0 is as good as a machine that always produces A, because it will never succeed in producing a 0). So unlike above, we cannot deduce $3 ⊸ (candy ⊗ chips ⊗ drink) from this.

Introduced by Jean-Yves Girard, proof nets have been created to avoid bureaucracy, that is, all the things that make two derivations different from a logical point of view but not from a "moral" point of view; for instance, two proofs that differ only in the order of independent rule applications are "morally" identical. The goal of proof nets is to make such proofs identical by creating a graphical representation of them.

The entailment relation in full CLL is undecidable.[8] When considering fragments of CLL, the decision problem has varying complexity, depending on which connectives are retained. Many variations of linear logic arise by further tinkering with the structural rules.

Different intuitionistic variants of linear logic have been considered. When based on a single-conclusion sequent calculus presentation, as in ILL (Intuitionistic Linear Logic), the connectives ⅋, ⊥, and ? are absent, and linear implication is treated as a primitive connective. In FILL (Full Intuitionistic Linear Logic) the connectives ⅋, ⊥, and ? are present, linear implication is a primitive connective and, similarly to what happens in intuitionistic logic, all connectives (except linear negation) are independent. There are also first- and higher-order extensions of linear logic, whose formal development is somewhat standard (see first-order logic and higher-order logic).
https://en.wikipedia.org/wiki/Linear_logic
In computer science, array-access analysis is a compiler analysis approach used to determine the read and write access patterns to elements or portions of arrays.[1]

The major data type manipulated in scientific programs is the array. Define/use analysis on a whole array is insufficient for aggressive compiler optimizations such as auto-parallelization and array privatization. Array-access analysis aims to obtain knowledge of which portions, or even which elements, of the array are accessed by a given code segment (basic block, loop, or even at the procedure level); a sketch follows below.

Array-access analysis can be largely categorized into exact (or reference-list-based) and summary methods, with different tradeoffs of accuracy and complexity. Exact methods are precise but very costly in terms of computation and space storage, while summary methods are approximate but can be computed quickly and economically. Typical exact array-access analyses include linearization and atom images. Summary methods can be further divided into array sections, bounded regular sections using triplet notation, and linear-constraint methods such as data-access descriptors and array-region analysis.
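As a hypothetical illustration (not from the article) of a summary method using triplet notation, consider the following C loop; a bounded regular section describes the accessed region as lower:upper:stride:

    /* The loop writes A[0], A[2], ..., A[2*(n-1)]. A bounded regular
       section summarizes the write set in triplet notation as
       A[0 : 2*(n-1) : 2], so a compiler can conclude that no
       odd-indexed element is touched. */
    void scale_even(double *A, int n) {
        for (int i = 0; i < n; i++)
            A[2 * i] = 2.0 * A[2 * i];
    }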
https://en.wikipedia.org/wiki/Array_access_analysis
An array database management system or array DBMS provides database services specifically for arrays (also called raster data), that is: homogeneous collections of data items (often called pixels, voxels, etc.), sitting on a regular grid of one, two, or more dimensions. Often arrays are used to represent sensor, simulation, image, or statistics data. Such arrays tend to be Big Data, with single objects frequently ranging into Terabyte and soon Petabyte sizes; for example, today's earth and space observation archives typically grow by Terabytes a day. Array databases aim at offering flexible, scalable storage and retrieval on this information category.

In the same style as standard database systems do on sets, Array DBMSs offer scalable, flexible storage and flexible retrieval/manipulation on arrays of (conceptually) unlimited size. As in practice arrays never appear standalone, such an array model normally is embedded into some overall data model, such as the relational model. Some systems implement arrays as an analogy to tables, some introduce arrays as an additional attribute type.

Management of arrays requires novel techniques, particularly because traditional database tuples and objects tend to fit well into a single database page – a unit of disk access on the server, typically 4 KB – while array objects easily can span several media. The prime task of the array storage manager is to give fast access to large arrays and sub-arrays. To this end, arrays get partitioned, during insertion, into so-called tiles or chunks of convenient size, which then act as units of access during query evaluation.

Array DBMSs offer query languages giving declarative access to such arrays, allowing one to create, manipulate, search, and delete them. Like with, e.g., SQL, expressions of arbitrary complexity can be built on top of a set of core array operations. Due to the extensions made in the data and query model, Array DBMSs sometimes are subsumed under the NoSQL category, in the sense of "not only SQL". Query optimization and parallelization are important for achieving scalability; actually, many array operators lend themselves well towards parallel evaluation, by processing each tile on separate nodes or cores.

Important application domains of Array DBMSs include Earth, Space, Life, and Social sciences, as well as the related commercial applications (such as hydrocarbon exploration in industry and OLAP in business). The variety occurring can be observed, e.g., in geo data, where 1-D environmental sensor time series, 2-D satellite images, 3-D x/y/t image time series and x/y/z geophysics data, as well as 4-D x/y/z/t climate and ocean data can be found.

The relational data model, which is prevailing today, does not directly support the array paradigm to the same extent as sets and tuples. ISO SQL lists an array-valued attribute type, but this is only one-dimensional, with almost no operational support, and not usable for the application domains of Array DBMSs. Another option is to resort to BLOBs ("binary large objects"), which are the equivalent of files: byte strings of (conceptually) unlimited length, but again without any query language functionality, such as multi-dimensional subsetting.

First significant work in going beyond BLOBs was established with PICDMS.[1] This system offers the precursor of a 2-D array query language, albeit still procedural and without suitable storage support.
A first declarative query language suitable for multiple dimensions and with an algebra-based semantics was published by Baumann, together with a scalable architecture.[2][3] Another array database language, constrained to 2-D, was presented by Marathe and Salem.[4] Seminal theoretical work was accomplished by Libkin et al.;[5] in their model, called NCRA, they extend a nested relational calculus with multidimensional arrays; among the results are important contributions on array query complexity analysis. A map algebra, suitable for 2-D and 3-D spatial raster data, was published by Mennis et al.[6]

In terms of Array DBMS implementations, the rasdaman system has the longest implementation track record of n-D arrays with full query support. Oracle GeoRaster offers chunked storage of 2-D raster maps, albeit without SQL integration. TerraLib is an open-source GIS software that extends object-relational DBMS technology to handle spatio-temporal data types; while its main focus is on vector data, there is also some support for rasters. Starting with version 2.0, PostGIS embeds raster support for 2-D rasters; a special function offers declarative raster query functionality. SciQL is an array query language being added to the MonetDB DBMS. SciDB is a more recent initiative to establish array database support. Like SciQL, arrays are seen as an equivalent to tables, rather than a new attribute type as in rasdaman and PostGIS.

For the special case of sparse data, OLAP data cubes are well established; they store cell values together with their location – an adequate compression technique in face of the few locations carrying valid information at all – and operate with SQL on them. As this technique does not scale in density, standard databases are not used today for dense data, like satellite images, where most cells carry meaningful information; rather, proprietary ad hoc implementations prevail in scientific data management and similar situations. Hence, this is where Array DBMSs can make a particular contribution.

Generally, Array DBMSs are an emerging technology. While operationally deployed systems exist, like Oracle GeoRaster, PostGIS 2.0 and rasdaman, there are still many open research questions, including query language design and formalization, query optimization, parallelization and distributed processing, and scalability issues in general. Besides, scientific communities still appear reluctant to take up array database technology and tend to favor specialized, proprietary technology.

When adding arrays to databases, all facets of database design need to be reconsidered – ranging from conceptual modeling (such as suitable operators) over storage management (such as management of arrays spanning multiple media) to query processing (such as efficient processing strategies).

Formally, an array A is given by a (total or partial) function A: X → V, where X, the domain, is a d-dimensional integer interval for some d > 0 and V, called the range, is some (non-empty) value set; in set notation, this can be rewritten as { (p, v) | p ∈ X, v ∈ V }. Each (p, v) in A denotes an array element or cell, and following common notation we write A[p] = v. Examples for X include {0..767} × {0..1023} (for XGA-sized images); examples for V include {0..255} for 8-bit greyscale images and {0..255} × {0..255} × {0..255} for standard RGB imagery.

Following established database practice, an array query language should be declarative and safe in evaluation. As iteration over an array is at the heart of array processing, declarativeness very much centers on this aspect.
The requirement, then, is that conceptually all cells should be inspected simultaneously – in other words, the query does not enforce any explicit iteration sequence over the array cells during evaluation. Evaluation safety is achieved when every query terminates after a finite number of (finite-time) steps; again, avoiding general loops and recursion is a way of achieving this. At the same time, avoiding explicit loop sequences opens up manifold optimization opportunities.

As an example of array query operators, the rasdaman algebra and query language can serve; they establish an expression language over a minimal set of array primitives. We begin with the generic core operators and then present common special cases and shorthands.

The marray operator creates an array over some given domain extent and initializes its cells; its general form is marray index-range-specification values cell-value-expression, where index-range-specification defines the result domain and binds an iteration variable to it, without specifying an iteration sequence, and cell-value-expression is evaluated at each location of the domain.

Example: "A cutout of array A given by the corner points (10,20) and (40,50)" can be written as marray p in [10:40, 20:50] values A[p]. This special case, pure subsetting, can be abbreviated as A[10:40, 20:50]. This subsetting keeps the dimension of the array; to reduce dimension by extracting slices, a single slicepoint value is indicated in the slicing dimension.

Example: "A slice through an x/y/t timeseries at position t=100, retrieving all available data in x and y" can be written as A[*:*, *:*, 100]. The wildcard operator * indicates that the current boundary of the array is to be used; note that arrays whose dimension boundaries are left open at definition time may change size in those dimensions over the array's lifetime.

The above examples have simply copied the original values; instead, these values may be manipulated. Example: "Array A, with a log() applied to each cell value" can be written as marray p in sdom(A) values log(A[p]), which can be abbreviated as log(A). Through a principle called induced operations,[7] the query language offers all operations the cell type offers on the array level, too. Hence, on numeric values all the usual unary and binary arithmetic, exponential, and trigonometric operations are available in a straightforward manner, plus the standard set of Boolean operators.

The condense operator aggregates cell values into one scalar result, similar to SQL aggregates. Its application has the general form condense condense-op over index-range-specification using cell-value-expression. As with marray before, the index-range-specification specifies the domain to be iterated over and binds an iteration variable to it – again, without specifying an iteration sequence. Likewise, cell-value-expression is evaluated at each domain location. The condense-op clause specifies the aggregating operation used to combine the cell value expressions into one single value.

Example: "The sum over all values in A" can be written as condense + over p in sdom(A) using A[p]. A shorthand for this operation is add_cells(A). In the same manner, and in analogy to SQL aggregates, a number of further shorthands are provided, including counting, average, minimum, maximum, and Boolean quantifiers.

The next example demonstrates the combination of marray and condense operators by deriving a histogram. Example: "A histogram over 8-bit greyscale image A" can be written as marray bucket in [0:255] values count_cells(A = bucket). The induced comparison, A = bucket, establishes a Boolean array of the same extent as A. The aggregation operator counts the occurrences of true for each value of bucket, which subsequently is put into the proper array cell of the 1-D histogram array. Such languages allow formulating statistical and imaging operations which can be expressed analytically without using loops.
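The same histogram can be computed in a single pass outside any query language; a minimal C sketch, assuming the image is a flat buffer of 8-bit cells:

    /* Histogram of an 8-bit greyscale image A with n cells: one pass
       that is equivalent to evaluating, for every bucket in [0:255],
       the induced comparison A = bucket followed by a counting
       condenser. */
    void histogram(const unsigned char *A, long n, long hist[256]) {
        for (int b = 0; b < 256; b++)
            hist[b] = 0;
        for (long i = 0; i < n; i++)
            hist[A[i]]++;   /* cell i falls into bucket A[i] */
    }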
It has been proven[8] that the expressive power of such array languages is in principle equivalent to that of relational query languages with ranking.

Array storage has to accommodate arrays of different dimensions and typically large sizes. A core task is to maintain spatial proximity on disk so as to reduce the number of disk accesses during subsetting. Note that an emulation of multi-dimensional arrays as nested lists (or 1-D arrays) will not per se accomplish this and, therefore, in general will not lead to scalable architectures.

Commonly, arrays are partitioned into sub-arrays which form the unit of access. Regular partitioning, where all partitions have the same size (except possibly at the boundaries), is referred to as chunking.[9] A generalization which removes the restriction to equally sized partitions by supporting any kind of partitioning is tiling.[10] Array partitioning can improve access to array subsets significantly: by adjusting the tiling to the access pattern, the server ideally can fetch all required data with only one disk access.

Compression of tiles can sometimes substantially reduce the amount of storage needed. Compression is also useful for the transmission of results, as for the large amounts of data under consideration network bandwidth often constitutes a limiting factor.

A tile-based storage structure suggests a tile-by-tile processing strategy (in rasdaman called tile streaming; a sketch appears at the end of this passage). A large class of practically relevant queries can be evaluated by loading tile after tile, thereby allowing servers to process arrays orders of magnitude beyond their main memory.

Due to the massive sizes of arrays in scientific/technical applications, in combination with often complex queries, optimization plays a central role in making array queries efficient. Both hardware and software parallelization can be applied. An example of a heuristic optimization is the rule "the maximum value of an array resulting from the cell-wise addition of two input images is equivalent to adding the maximum values of each input array". By replacing the left-hand variant with the right-hand expression, costs shrink from three (costly) array traversals to two array traversals plus one (cheap) scalar operation.

In many – if not most – cases where some phenomenon is sampled or simulated, the result is a rasterized data set which can conveniently be stored, retrieved, and forwarded as an array. Typically, the array data are ornamented with metadata describing them further; for example, geographically referenced imagery will carry its geographic position and the coordinate reference system in which it is expressed.

Large-scale multi-dimensional array data are handled in domains such as the Earth, Space, Life, and Social sciences. These are but examples; generally, arrays frequently represent sensor, simulation, image, and statistics data. More and more spatial and time dimensions are combined with abstract axes, such as sales and products; one example where such abstract axes are explicitly foreseen is the Open Geospatial Consortium (OGC) coverage model.

Many communities have established data exchange formats, such as HDF, NetCDF, and TIFF. A de facto standard in the Earth Science communities is OPeNDAP, a data transport architecture and protocol. While this is not a database specification, it offers important components that characterize a database system, such as a conceptual model and client/server implementations.
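Returning to the tile-streaming strategy mentioned above, a minimal C sketch; the tile size and the load_tile callback are illustrative assumptions, standing in for the storage manager:

    /* Tile streaming sketch: an aggregate over a huge array is computed
       by materializing one tile at a time, so memory use is bounded by
       the tile size rather than the array size. load_tile(t, buf) is a
       hypothetical callback that fetches tile t (TILE_SIZE cells) into
       buf, e.g. with one disk access. */
    #define TILE_SIZE 4096

    double streamed_sum(long num_tiles,
                        void (*load_tile)(long t, double buf[TILE_SIZE])) {
        double buf[TILE_SIZE];
        double sum = 0.0;
        for (long t = 0; t < num_tiles; t++) {
            load_tile(t, buf);                 /* one tile in memory at a time */
            for (int i = 0; i < TILE_SIZE; i++)
                sum += buf[i];
        }
        return sum;
    }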
A declarative geo raster query language, Web Coverage Processing Service (WCPS), has been standardized by the Open Geospatial Consortium (OGC). In June 2014, ISO/IEC JTC1 SC32 WG3, which maintains the SQL database standard, decided to add multi-dimensional array support to SQL as a new column type,[11] based on the initial array support available since the 2003 version of SQL. The new standard, adopted in autumn 2018, is named ISO 9075 SQL Part 15: MDA (Multi-Dimensional Arrays).
https://en.wikipedia.org/wiki/Array_database_management_system
In computer science, bounds-checking elimination is a compiler optimization useful in programming languages or runtime systems that enforce bounds checking, the practice of checking every index into an array to verify that the index is within the defined valid range of indexes.[1] Its goal is to detect which of these indexing operations do not need to be validated at runtime, and to eliminate those checks.

One common example is accessing an array element, modifying it, and storing the modified value in the same array at the same location. Normally, this example would result in a bounds check when the element is read from the array and a second bounds check when the modified element is stored using the same array index. Bounds-checking elimination can remove the second check if the compiler or runtime determines that neither the array size nor the index can change between the two array operations. Another example occurs when a programmer loops over the elements of the array, and the loop condition guarantees that the index is within the bounds of the array. It may be difficult to detect that the programmer's manual check renders the automatic check redundant; however, it may still be possible for the compiler or runtime to perform proper bounds-checking elimination in this case.

One technique for bounds-checking elimination is to use a typed static single assignment form representation and, for each array, to create a new type representing a safe index for that particular array. The first use of a value as an array index results in a runtime type cast (and appropriate check), but subsequently the safe index value can be used without a type cast, without sacrificing correctness or safety.

Just-in-time compiled languages such as Java and C# often check indexes at runtime before accessing arrays. Some just-in-time compilers, such as HotSpot, are able to eliminate some of these checks if they discover that the index is always within the correct range, or if an earlier check would have already thrown an exception.[2][3]
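C performs no automatic bounds checks, but the transformation described above can be imitated by hand; a minimal sketch (function names are illustrative) in which a single check comes to cover both the read and the write of the read-modify-write example:

    #include <stddef.h>

    /* Checked per-element access: one test for the read, one for the write. */
    int scale_checked(int *a, size_t len, size_t i) {
        if (i >= len) return -1;   /* bounds check before the read          */
        int v = a[i] * 2;
        if (i >= len) return -1;   /* redundant second check before the write */
        a[i] = v;
        return 0;
    }

    /* After elimination: neither len nor i changes between the two
       accesses, so one check dominates both of them. */
    int scale_optimized(int *a, size_t len, size_t i) {
        if (i >= len) return -1;   /* single check covers read and write */
        a[i] *= 2;
        return 0;
    }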
https://en.wikipedia.org/wiki/Bounds-checking_elimination
Formats that use delimiter-separated values (also DSV)[2]: 113 store two-dimensional arrays of data by separating the values in each row with specific delimiter characters. Most database and spreadsheet programs are able to read or save data in a delimited format. Due to their wide support, DSV files can be used in data exchange among many applications.

A delimited text file is a text file used to store data, in which each line represents a single book, company, or other thing, and each line has fields separated by the delimiter.[3] Compared to the kind of flat file that uses spaces to force every field to the same width, a delimited file has the advantage of allowing field values of any length.[4]

Any character may be used to separate the values, but the most common delimiters are the comma, tab, and colon.[2]: 113[5] The vertical bar (also referred to as pipe) and space are also sometimes used.[2]: 113 Column headers are sometimes included as the first line, and each subsequent line is a row of data. The lines are separated by newlines.

For example, fields in each record may be delimited by commas, and each record by newlines, with a double quote enclosing each field. This prevents a comma in an actual field value (Bloggs, Fred; Doe, Jane; etc.) from being interpreted as a field separator. It also necessitates a way to "escape" the field wrapper itself, in this case the double quote; it is customary to double the double quotes actually contained in a field, as with those surrounding "Hank". In this way, any ASCII text, including newlines, can be contained in a field (a C parsing sketch appears below).

ASCII and Unicode include several control characters that are intended to be used as delimiters: 28 for File Separator, 29 for Group Separator, 30 for Record Separator, and 31 for Unit Separator. An example of their use is the MARC 21 bibliographic data format.[6] Use of these characters has not achieved widespread adoption; some systems have replaced their control properties with more accepted controls such as CR/LF and TAB.[citation needed]

Due to their widespread use, comma- and tab-delimited text files can be opened by several kinds of applications, including most spreadsheet programs and statistical packages, sometimes even without the user designating which delimiter has been used.[7][8] Although each of those applications has its own database design and its own file format (for example, accdb or xlsx), they can all map the fields in a DSV file to their own data model and format.[citation needed]

Typically, a delimited file format is indicated by a specification. Some specifications provide conventions for avoiding delimiter collision; others do not. Delimiter collision is a problem that occurs when a character that is intended as part of the data gets interpreted as a delimiter instead. Comma- and space-separated formats often suffer from this problem, since in many contexts those characters are legitimate parts of a data field. Most such files avoid delimiter collision either by surrounding all data fields in double quotes, or by only quoting those data fields that contain the delimiter character.

One problem with tab-delimited text files is that tabs are difficult to distinguish from spaces; therefore, there are sometimes problems with the files being corrupted when people try to edit them by hand. Another set of problems occurs due to errors in the file structure, usually during import of the file into a database (in the example above, such an error might be a pupil's missing first name).
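The quoting convention described above can be made concrete; a minimal C sketch (read_field is a hypothetical helper, not a standard API, and well-formed input is assumed) that reads one comma-delimited field, honouring doubled double quotes:

    #include <stdio.h>

    /* Read one comma-delimited field from a stream into buf. A field may
       be wrapped in double quotes; a literal quote inside it is doubled.
       Returns the character that ended the field (',', '\n', or EOF). */
    int read_field(FILE *in, char *buf, int cap) {
        int c = fgetc(in), n = 0;
        if (c == '"') {                      /* quoted field */
            while ((c = fgetc(in)) != EOF) {
                if (c == '"') {
                    c = fgetc(in);
                    if (c != '"') break;     /* closing quote: c is the delimiter */
                    /* doubled quote: fall through and keep one '"' */
                }
                if (n < cap - 1) buf[n++] = (char)c;
            }
        } else {                             /* unquoted field */
            while (c != EOF && c != ',' && c != '\n') {
                if (n < cap - 1) buf[n++] = (char)c;
                c = fgetc(in);
            }
        }
        buf[n] = '\0';
        return c;
    }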
Depending on the data itself, it may be beneficial to use a non-standard character such as the tilde (~) as the delimiter. With the rising prevalence of web sites and other applications that store snippets of code in databases, simply quoting with a double quote, a character which occurs in every hyperlink and image source tag, is not sufficient to avoid this type of collision. Since colons (:), semicolons (;), pipes (|), and many other characters are also used, it can be quite challenging to find a character that is not being used elsewhere.
https://en.wikipedia.org/wiki/Delimiter-separated_values
In computer programming, bounds checking is any method of detecting whether a variable is within some bounds before it is used. It is usually used to ensure that a number fits into a given type (range checking), or that a variable being used as an array index is within the bounds of the array (index checking). A failed bounds check usually results in the generation of some sort of exception signal. Because performing bounds checking during each use can be time-consuming, it is not always done. Bounds-checking elimination is a compiler optimization technique that eliminates unneeded bounds checking.

A range check is a check to make sure a number is within a certain range; for example, ensuring that a value about to be assigned to a 16-bit integer is within the capacity of a 16-bit integer (i.e. checking against wrap-around). This is not quite the same as type checking. Other range checks may be more restrictive; for example, a variable holding the number of a calendar month may be declared to accept only the range 1 to 12. In Python, for instance, such a check can be written as if not 1 <= month <= 12: raise ValueError("invalid month").

Index checking means that, in all expressions indexing an array, the index value is checked against the bounds of the array (which were established when the array was defined), and if the index is out of bounds, further execution is suspended via some sort of error (a C sketch of such a check appears below). Because reading or especially writing a value outside the bounds of an array may cause the program to malfunction or crash or enable security vulnerabilities (see buffer overflow), index checking is a part of many high-level languages. Early compiled programming languages with index-checking ability included ALGOL 60, ALGOL 68 and Pascal, as well as interpreted programming languages such as BASIC.

Many programming languages, such as C, never perform automatic bounds checking, in order to raise speed. However, this leaves many off-by-one errors and buffer overflows uncaught. Many programmers believe these languages sacrifice too much for rapid execution.[1] In his 1980 Turing Award lecture, C. A. R. Hoare described his experience in the design of ALGOL 60, a language that included bounds checking, saying:

A consequence of this principle is that every occurrence of every subscript of every subscripted variable was on every occasion checked at run time against both the upper and the lower declared bounds of the array. Many years later we asked our customers whether they wished us to provide an option to switch off these checks in the interest of efficiency on production runs. Unanimously, they urged us not to—they already knew how frequently subscript errors occur on production runs where failure to detect them could be disastrous. I note with fear and horror that even in 1980, language designers and users have not learned this lesson. In any respectable branch of engineering, failure to observe such elementary precautions would have long been against the law.

Mainstream languages that enforce run-time checking include Ada, C#, Haskell, Java, JavaScript, Lisp, PHP, Python, Ruby, Rust, and Visual Basic. The D and OCaml languages have run-time bounds checking that is enabled or disabled with a compiler switch. In C++, run-time checking is not part of the language but part of the STL, and is enabled with a compiler switch (_GLIBCXX_DEBUG=1 or _LIBCPP_DEBUG=1). C# also supports unsafe regions: sections of code that (among other things) temporarily suspend bounds checking to raise efficiency. These are useful for speeding up small time-critical bottlenecks without sacrificing the safety of a whole program.
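A minimal C sketch of the explicit index check that checked languages generate automatically (checked_get is an illustrative name):

    #include <stdio.h>
    #include <stdlib.h>

    /* Explicit index check of the kind that Ada, Java, etc. insert on
       every array access: an out-of-bounds index aborts the program
       instead of reading past the array. */
    int checked_get(const int *a, size_t len, size_t i) {
        if (i >= len) {                    /* index check */
            fprintf(stderr, "index %zu out of bounds [0, %zu)\n", i, len);
            abort();                       /* analogous to raising an exception */
        }
        return a[i];
    }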
The JS++ programming language is able to analyze whether an array index or map key is out of bounds at compile time using existent types, which are nominal types describing whether the index or key is within bounds or out of bounds and which guide code generation. Existent types have been shown to add only about 1 ms of overhead to compile times.[2]

The safety added by bounds checking necessarily costs CPU time if the checking is performed in software; however, if the checks can be performed by hardware, then the safety can be provided "for free" with no runtime cost. An early system with hardware bounds checking was the ICL 2900 Series mainframe, announced in 1974.[3] The VAX computer has an INDEX assembly instruction for array index checking which takes six operands, all of which can use any VAX addressing mode. The B6500 and similar Burroughs computers performed bounds checking via hardware, irrespective of which computer language had been compiled to produce the machine code. A limited number of later CPUs have specialised instructions for checking bounds, e.g., the CHK2 instruction on the Motorola 68000 series.

Research has been underway since at least 2005 regarding methods to use x86's built-in virtual memory management unit to ensure the safety of array and buffer accesses.[4] In 2015 Intel provided their Intel MPX extensions in their Skylake processor architecture, which store bounds in a CPU register and a table in memory. As of early 2017, at least GCC supports the MPX extensions.
https://en.wikipedia.org/wiki/Index_checking
In computing, a group of parallel arrays (also known as structure of arrays or SoA) is a form of implicit data structure that uses multiple arrays to represent a singular array of records. It keeps a separate, homogeneous data array for each field of the record, each having the same number of elements. Then, objects located at the same index in each array are implicitly the fields of a single record. Pointers from one object to another are replaced by array indices. This contrasts with the normal approach of storing all fields of each record together in memory (also known as array of structures or AoS). For example, one might declare an array of 100 names, each a string, and 100 ages, each an integer, associating each name with the age that has the same index. An example in C using parallel arrays is sketched below; analogous constructions are possible in Perl (using a hash of arrays to hold references to each array) and in Python.

Parallel arrays have a number of practical advantages over the normal approach, several of which depend strongly on the particular programming language and implementation in use. However, parallel arrays also have several strong disadvantages, which serves to explain why they are not generally preferred. The bad locality of reference that arises when the fields of one record are accessed together can be alleviated in some cases: if a structure can be divided into groups of fields that are generally accessed together, an array can be constructed for each group, and its elements are records containing only these subsets of the larger structure's fields (see data-oriented design). This is a valuable way of speeding up access to very large structures with many members, while keeping the portions of the structure tied together. An alternative to tying them together using array indexes is to use references to tie the portions together, but this can be less efficient in time and space.

Another alternative is to use a single array, where each entry is a record structure. Many languages provide a way to declare actual records, and arrays of them. In other languages it may be feasible to simulate this by declaring an array of n*m size, where m is the size of all the fields together, packing the fields into what is effectively a record, even though the particular language lacks direct support for records. Some compiler optimizations, particularly for vector processors, are able to perform this transformation automatically when arrays of structures are created in the program.[citation needed]
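A minimal C sketch of the parallel name/age arrays described above (the sample data are illustrative):

    #include <stdio.h>

    #define N 100

    /* Structure of arrays: one homogeneous array per field. The i-th
       name and the i-th age together form the i-th implicit record. */
    const char *names[N] = { "Alice", "Bob" /* ... */ };
    int         ages [N] = { 30, 25 /* ... */ };

    int main(void) {
        for (int i = 0; i < 2; i++)    /* print the records filled in above */
            printf("%s is %d years old\n", names[i], ages[i]);
        return 0;
    }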
https://en.wikipedia.org/wiki/Parallel_array
Innumerical analysisandscientific computing, asparse matrixorsparse arrayis amatrixin which most of the elements are zero.[1]There is no strict definition regarding the proportion of zero-value elements for a matrix to qualify assparsebut a common criterion is that the number of non-zero elements is roughly equal to the number of rows or columns. By contrast, if most of the elements are non-zero, the matrix is considereddense.[1]The number of zero-valued elements divided by the total number of elements (e.g.,m×nfor anm×nmatrix) is sometimes referred to as thesparsityof the matrix. Conceptually, sparsity corresponds to systems with few pairwise interactions. For example, consider a line of balls connected by springs from one to the next: this is a sparse system, as only adjacent balls are coupled. By contrast, if the same line of balls were to have springs connecting each ball to all other balls, the system would correspond to a dense matrix. The concept of sparsity is useful incombinatoricsand application areas such asnetwork theoryandnumerical analysis, which typically have a low density of significant data or connections. Large sparse matrices often appear inscientificorengineeringapplications when solvingpartial differential equations. When storing and manipulating sparse matrices on acomputer, it is beneficial and often necessary to use specializedalgorithmsanddata structuresthat take advantage of the sparse structure of the matrix. Specialized computers have been made for sparse matrices,[2]as they are common in the machine learning field.[3]Operations using standard dense-matrix structures and algorithms are slow and inefficient when applied to large sparse matrices as processing andmemoryare wasted on the zeros. Sparse data is by nature more easilycompressedand thus requires significantly lessstorage. Some very large sparse matrices are infeasible to manipulate using standard dense-matrix algorithms. An important special type of sparse matrices isband matrix, defined as follows. Thelower bandwidth of a matrixAis the smallest numberpsuch that the entryai,jvanishes wheneveri>j+p. Similarly, theupper bandwidthis the smallest numberpsuch thatai,j= 0wheneveri<j−p(Golub & Van Loan 1996, §1.2.1). For example, atridiagonal matrixhas lower bandwidth1and upper bandwidth1. As another example, the following sparse matrix has lower and upper bandwidth both equal to 3. Notice that zeros are represented with dots for clarity.[XXX⋅⋅⋅⋅XX⋅XX⋅⋅X⋅X⋅X⋅⋅⋅X⋅X⋅X⋅⋅XX⋅XXX⋅⋅⋅XXX⋅⋅⋅⋅⋅X⋅X]{\displaystyle {\begin{bmatrix}X&X&X&\cdot &\cdot &\cdot &\cdot &\\X&X&\cdot &X&X&\cdot &\cdot &\\X&\cdot &X&\cdot &X&\cdot &\cdot &\\\cdot &X&\cdot &X&\cdot &X&\cdot &\\\cdot &X&X&\cdot &X&X&X&\\\cdot &\cdot &\cdot &X&X&X&\cdot &\\\cdot &\cdot &\cdot &\cdot &X&\cdot &X&\\\end{bmatrix}}} Matrices with reasonably small upper and lower bandwidth are known as band matrices and often lend themselves to simpler algorithms than general sparse matrices; or one can sometimes apply dense matrix algorithms and gain efficiency simply by looping over a reduced number of indices. By rearranging the rows and columns of a matrixAit may be possible to obtain a matrixA′with a lower bandwidth. A number of algorithms are designed forbandwidth minimization. A very efficient structure for an extreme case of band matrices, thediagonal matrix, is to store just the entries in themain diagonalas aone-dimensional array, so a diagonaln×nmatrix requires onlynentries. 
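A minimal C sketch of the diagonal-matrix storage scheme just mentioned, where only the n diagonal entries are kept:

    /* An n-by-n diagonal matrix stored as a one-dimensional array of its
       main diagonal; every off-diagonal element is zero by definition. */
    double diag_get(const double *diag, int i, int j) {
        return (i == j) ? diag[i] : 0.0;
    }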
A symmetric sparse matrix arises as theadjacency matrixof anundirected graph; it can be stored efficiently as anadjacency list. Ablock-diagonal matrixconsists of sub-matrices along its diagonal blocks. A block-diagonal matrixAhas the formA=[A10⋯00A2⋯0⋮⋮⋱⋮00⋯An],{\displaystyle \mathbf {A} ={\begin{bmatrix}\mathbf {A} _{1}&0&\cdots &0\\0&\mathbf {A} _{2}&\cdots &0\\\vdots &\vdots &\ddots &\vdots \\0&0&\cdots &\mathbf {A} _{n}\end{bmatrix}},} whereAkis a square matrix for allk= 1, ...,n. Thefill-inof a matrix are those entries that change from an initial zero to a non-zero value during the execution of an algorithm. To reduce the memory requirements and the number of arithmetic operations used during an algorithm, it is useful to minimize the fill-in by switching rows and columns in the matrix. Thesymbolic Cholesky decompositioncan be used to calculate the worst possible fill-in before doing the actualCholesky decomposition. There are other methods than theCholesky decompositionin use. Orthogonalization methods (such as QR factorization) are common, for example, when solving problems by least squares methods. While the theoretical fill-in is still the same, in practical terms the "false non-zeros" can be different for different methods. And symbolic versions of those algorithms can be used in the same manner as the symbolic Cholesky to compute worst case fill-in. Bothiterativeand direct methods exist for sparse matrix solving. Iterative methods, such asconjugate gradientmethod andGMRESutilize fast computations of matrix-vector productsAxi{\displaystyle Ax_{i}}, where matrixA{\displaystyle A}is sparse. The use ofpreconditionerscan significantly accelerate convergence of such iterative methods. A matrix is typically stored as a two-dimensional array. Each entry in the array represents an elementai,jof the matrix and is accessed by the twoindicesiandj. Conventionally,iis the row index, numbered from top to bottom, andjis the column index, numbered from left to right. For anm×nmatrix, the amount of memory required to store the matrix in this format is proportional tom×n(disregarding the fact that the dimensions of the matrix also need to be stored). In the case of a sparse matrix, substantial memory requirement reductions can be realized by storing only the non-zero entries. Depending on the number and distribution of the non-zero entries, different data structures can be used and yield huge savings in memory when compared to the basic approach. The trade-off is that accessing the individual elements becomes more complex and additional structures are needed to be able to recover the original matrix unambiguously. Formats can be divided into two groups: DOK consists of adictionarythat maps(row, column)-pairsto the value of the elements. Elements that are missing from the dictionary are taken to be zero. The format is good for incrementally constructing a sparse matrix in random order, but poor for iterating over non-zero values in lexicographical order. One typically constructs a matrix in this format and then converts to another more efficient format for processing.[4] LIL stores one list per row, with each entry containing the column index and the value. Typically, these entries are kept sorted by column index for faster lookup. This is another format good for incremental matrix construction.[5] COO stores a list of(row, column, value)tuples. Ideally, the entries are sorted first by row index and then by column index, to improve random access times. 
This is another format that is good for incremental matrix construction.[6]

The compressed sparse row (CSR) or compressed row storage (CRS) or Yale format represents a matrix M by three (one-dimensional) arrays that respectively contain nonzero values, the extents of rows, and column indices. It is similar to COO, but compresses the row indices, hence the name. This format allows fast row access and matrix-vector multiplications (Mx). The CSR format has been in use since at least the mid-1960s, with the first complete description appearing in 1967.[7]

The CSR format stores a sparse m × n matrix M in row form using three (one-dimensional) arrays (V, COL_INDEX, ROW_INDEX). Let NNZ denote the number of nonzero entries in M. (Note that zero-based indices shall be used here.) For example, the matrix
{\displaystyle {\begin{pmatrix}5&0&0&0\\0&8&0&0\\0&0&3&0\\0&6&0&0\end{pmatrix}}}
is a 4 × 4 matrix with 4 nonzero elements; hence, assuming a zero-indexed language,

    V         = [ 5 8 3 6 ]
    COL_INDEX = [ 0 1 2 1 ]
    ROW_INDEX = [ 0 1 2 3 4 ]

To extract a row, we first define row_start = ROW_INDEX[row] and row_end = ROW_INDEX[row + 1]. Then we take slices from V and COL_INDEX starting at row_start and ending at row_end. To extract row 1 (the second row) of this matrix we set row_start = 1 and row_end = 2. Then we make the slices V[1:2] = [8] and COL_INDEX[1:2] = [1]. We now know that in row 1 we have one element at column 1 with value 8 (a C sketch of this row extraction appears below). In this case the CSR representation contains 13 entries, compared to 16 in the original matrix. The CSR format saves on memory only when NNZ < (m(n − 1) − 1) / 2.

As another example, the matrix
{\displaystyle {\begin{pmatrix}10&20&0&0&0&0\\0&30&0&40&0&0\\0&0&50&60&70&0\\0&0&0&0&0&80\end{pmatrix}}}
is a 4 × 6 matrix (24 entries) with 8 nonzero elements, so

    V         = [ 10 20 30 40 50 60 70 80 ]
    COL_INDEX = [  0  1  1  3  2  3  4  5 ]
    ROW_INDEX = [  0  2  4  7  8 ]

The whole is stored as 21 entries: 8 in V, 8 in COL_INDEX, and 5 in ROW_INDEX. Note that in this format, the first value of ROW_INDEX is always zero and the last is always NNZ, so they are in some sense redundant (although in programming languages where the array length needs to be explicitly stored, NNZ would not be redundant). Nonetheless, this does avoid the need to handle an exceptional case when computing the length of each row, as it guarantees that the formula ROW_INDEX[i + 1] − ROW_INDEX[i] works for any row i. Moreover, the memory cost of this redundant storage is likely insignificant for a sufficiently large matrix.

The (old and new) Yale sparse matrix formats are instances of the CSR scheme. The old Yale format works exactly as described above, with three arrays; the new format combines ROW_INDEX and COL_INDEX into a single array and handles the diagonal of the matrix separately.[9] For logical adjacency matrices, the data array can be omitted, as the existence of an entry in the row array is sufficient to model a binary adjacency relation. The format is likely known as the Yale format because it was proposed in the 1977 Yale Sparse Matrix Package report from the Department of Computer Science at Yale University.[10]

CSC is similar to CSR except that values are read first by column, a row index is stored for each value, and column pointers are stored. For example, CSC is (val, row_ind, col_ptr), where val is an array of the (top-to-bottom, then left-to-right) non-zero values of the matrix; row_ind is the row indices corresponding to the values; and col_ptr is the list of val indexes where each column starts. The name is based on the fact that column index information is compressed relative to the COO format. One typically uses another format (LIL, DOK, COO) for construction.
This format is efficient for arithmetic operations, column slicing, and matrix-vector products. It is the traditional format for specifying a sparse matrix in MATLAB (via the sparse function).

Many software libraries support sparse matrices and provide solvers for sparse matrix equations, a number of them open-source.

The term sparse matrix was possibly coined by Harry Markowitz, who initiated some pioneering work but then left the field.[11]
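As an illustration of the CSR layout and the row-extraction rule described earlier, a minimal C sketch using the first (4 × 4) example:

    #include <stdio.h>

    /* CSR form of the 4x4 example above: V holds the nonzeros,
       COL_INDEX their columns, ROW_INDEX the extents of the rows. */
    static const int V[]         = { 5, 8, 3, 6 };
    static const int COL_INDEX[] = { 0, 1, 2, 1 };
    static const int ROW_INDEX[] = { 0, 1, 2, 3, 4 };   /* length m+1 */

    /* Print the nonzeros of one row by slicing V and COL_INDEX. */
    void print_row(int row) {
        int row_start = ROW_INDEX[row];
        int row_end   = ROW_INDEX[row + 1];
        for (int k = row_start; k < row_end; k++)
            printf("row %d, col %d: %d\n", row, COL_INDEX[k], V[k]);
    }

    int main(void) {
        print_row(1);   /* prints: row 1, col 1: 8 */
        return 0;
    }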
https://en.wikipedia.org/wiki/Sparse_array
In computer programming, a variable-length array (VLA), also called variable-sized or runtime-sized, is an array data structure whose length is determined at runtime, instead of at compile time.[1] In the language C, the VLA is said to have a variably modified data type that depends on a value (see dependent type). The main purpose of VLAs is to simplify the programming of numerical algorithms.

Programming languages that support VLAs include Ada, ALGOL 68 (for non-flexible rows), APL, C# (as unsafe-mode stack-allocated arrays), COBOL, Fortran 90, J, and Object Pascal (the language used in Delphi and Lazarus, which uses FPC). C99 introduced support for VLAs, although they were subsequently relegated in C11 to a conditional feature which implementations are not required to support;[2][3] on some platforms, VLAs could formerly be implemented with alloca() or similar functions.

Growable arrays (also called dynamic arrays) are generally more useful than VLAs because dynamic arrays can do everything VLAs can do, and also support growing the array at run time. For this reason, many programming languages (JavaScript, Java, Python, R, etc.) only support growable arrays. Even in languages that support variable-length arrays, it is often recommended to avoid using (stack-based) variable-length arrays and instead use (heap-based) dynamic arrays.[4]

A C99 function can allocate a variable-length array of a specified size, fill it with floating-point values, and then pass it to another function for processing (a sketch is given below). Because the array is declared as an automatic variable, its lifetime ends when read_and_process() returns. In C99, the length parameter must come before the variable-length array parameter in function calls.[1] In C11, a __STDC_NO_VLA__ macro is defined if VLA is not supported.[6] The C23 standard makes VLA types mandatory again; only the creation of VLA objects with automatic storage duration is optional.[7] GCC had VLAs as an extension before C99, one that also extends into its C++ dialect.

Linus Torvalds has expressed his displeasure in the past over VLA usage for arrays with predetermined small sizes, because it generates lower-quality assembly code.[8] With the Linux 4.20 kernel, the Linux kernel is effectively VLA-free.[9]

Although C11 does not explicitly name a size limit for VLAs, some believe they should have the same maximum size as all other objects, i.e. SIZE_MAX bytes.[10] However, this should be understood in the wider context of environment and platform limits, such as the typical stack-guard page size of 4 KiB, which is many orders of magnitude smaller than SIZE_MAX. It is possible to have a VLA object with dynamic storage by using a pointer to an array.

Equivalents can be written in other languages. In Ada, arrays carry their bounds with them, so there is no need to pass the length to a processing function. The same holds in Fortran 90 when utilizing its feature of checking procedure interfaces at compile time; on the other hand, if the functions use the pre-Fortran 90 call interface, the (external) functions must first be declared, and the array length must be explicitly passed as an argument (as in C). In COBOL, a fragment can declare a variable-length array of records DEPT-PERSON having a length (number of members) specified by the value of PEOPLE-CNT. The COBOL VLA, unlike that of the other languages mentioned here, is safe because COBOL requires specifying a maximum array size: with a declared maximum of, say, 20, DEPT-PERSON cannot have more than 20 items, regardless of the value of PEOPLE-CNT.
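A sketch reconstructing the kind of C99 function described above (read_and_process is named in the text; read_val and process are assumed helpers):

    /* C99: vals is a variable-length array of runtime size n with
       automatic storage; it disappears when read_and_process returns. */
    extern float read_val(void);
    extern float process(int n, float vals[n]);  /* length precedes the VLA */

    float read_and_process(int n) {
        float vals[n];                 /* VLA: size fixed only at run time */
        for (int i = 0; i < n; i++)
            vals[i] = read_val();
        return process(n, vals);
    }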
A C# fragment can likewise declare a variable-length array of integers. Before C# version 7.2, a pointer to the array is required, which requires an "unsafe" context; the "unsafe" keyword requires the assembly containing the code to be marked as unsafe. C# version 7.2 and later allow the array to be allocated without the "unsafe" keyword, through the use of the Span feature.[11]

Object Pascal dynamic arrays are allocated on the heap.[12] In this language, the construct is called a dynamic array. The declaration of such a variable is similar to the declaration of a static array, but without specifying its size; the size of the array is given at the time of its use. Removing the contents of a dynamic array is done by assigning it a size of zero.
https://en.wikipedia.org/wiki/Variable-length_array
Incomputer science, adisjoint-set data structure, also called aunion–find data structureormerge–find set, is adata structurethat stores a collection ofdisjoint(non-overlapping)sets. Equivalently, it stores apartition of a setinto disjointsubsets. It provides operations for adding new sets, merging sets (replacing them with theirunion), and finding a representative member of a set. The last operation makes it possible to determine efficiently whether any two elements belong to the same set or to different sets. While there are several ways of implementing disjoint-set data structures, in practice they are often identified with a particular implementation known as adisjoint-set forest. This specialized type offorestperforms union and find operations in near-constantamortized time. For a sequence ofmaddition, union, or find operations on a disjoint-set forest withnnodes, the total time required isO(mα(n)), whereα(n)is the extremely slow-growinginverse Ackermann function. Although disjoint-set forests do not guarantee this time per operation, each operation rebalances the structure (via tree compression) so that subsequent operations become faster. As a result, disjoint-set forests are bothasymptotically optimaland practically efficient. Disjoint-set data structures play a key role inKruskal's algorithmfor finding theminimum spanning treeof a graph. The importance of minimum spanning trees means that disjoint-set data structures support a wide variety of algorithms. In addition, these data structures find applications in symbolic computation and in compilers, especially forregister allocationproblems. Disjoint-set forests were first described byBernard A. GallerandMichael J. Fischerin 1964.[2]In 1973, their time complexity was bounded toO(log∗⁡(n)){\displaystyle O(\log ^{*}(n))}, theiterated logarithmofn{\displaystyle n}, byHopcroftandUllman.[3]In 1975,Robert Tarjanwas the first to prove theO(mα(n)){\displaystyle O(m\alpha (n))}(inverse Ackermann function) upper bound on the algorithm's time complexity.[4]He also proved it to be tight. In 1979, he showed that this was the lower bound for a certain class of algorithms, that include the Galler-Fischer structure.[5]In 1989,FredmanandSaksshowed thatΩ(α(n)){\displaystyle \Omega (\alpha (n))}(amortized) words ofO(log⁡n){\displaystyle O(\log n)}bits must be accessed byanydisjoint-set data structure per operation,[6]thereby proving the optimality of the data structure in this model. In 1991, Galil and Italiano published a survey of data structures for disjoint-sets.[7] In 1994, Richard J. Anderson and Heather Woll described a parallelized version of Union–Find that never needs to block.[8] In 2007, Sylvain Conchon and Jean-Christophe Filliâtre developed a semi-persistentversion of the disjoint-set forest data structure and formalized its correctness using theproof assistantCoq.[9]"Semi-persistent" means that previous versions of the structure are efficiently retained, but accessing previous versions of the data structure invalidates later ones. Their fastest implementation achieves performance almost as efficient as the non-persistent algorithm. They do not perform a complexity analysis. Variants of disjoint-set data structures with better performance on a restricted class of problems have also been considered. 
Gabow and Tarjan showed that if the possible unions are restricted in certain ways, then a truly linear time algorithm is possible.[10]

Each node in a disjoint-set forest consists of a pointer and some auxiliary information, either a size or a rank (but not both). The pointers are used to make parent pointer trees, where each node that is not the root of a tree points to its parent. To distinguish root nodes from others, their parent pointers have invalid values, such as a circular reference to the node or a sentinel value. Each tree represents a set stored in the forest, with the members of the set being the nodes in the tree. Root nodes provide set representatives: two nodes are in the same set if and only if the roots of the trees containing the nodes are equal.

Nodes in the forest can be stored in any way convenient to the application, but a common technique is to store them in an array. In this case, parents can be indicated by their array index. Every array entry requires Θ(log n) bits of storage for the parent pointer. A comparable or lesser amount of storage is required for the rest of the entry, so the number of bits required to store the forest is Θ(n log n). If an implementation uses fixed-size nodes (thereby limiting the maximum size of the forest that can be stored), then the necessary storage is linear in n.

Disjoint-set data structures support three operations: making a new set containing a new element; finding the representative of the set containing a given element; and merging two sets.

The MakeSet operation adds a new element into a new set containing only the new element, and the new set is added to the data structure. If the data structure is instead viewed as a partition of a set, then the MakeSet operation enlarges the set by adding the new element, and it extends the existing partition by putting the new element into a new subset containing only the new element. In a disjoint-set forest, MakeSet initializes the node's parent pointer and the node's size or rank; if a root is represented by a node that points to itself, adding an element amounts to creating a self-parented node (see the C sketch below). This operation has constant time complexity. In particular, initializing a disjoint-set forest with n nodes requires O(n) time.

Lack of a parent assigned to a node implies that the node is not present in the forest. In practice, MakeSet must be preceded by an operation that allocates memory to hold x. As long as memory allocation is an amortized constant-time operation, as it is for a good dynamic array implementation, it does not change the asymptotic performance of the disjoint-set forest.

The Find operation follows the chain of parent pointers from a specified query node x until it reaches a root element. This root element represents the set to which x belongs and may be x itself. Find returns the root element it reaches.

Performing a Find operation presents an important opportunity for improving the forest. The time in a Find operation is spent chasing parent pointers, so a flatter tree leads to faster Find operations. When a Find is executed, there is no faster way to reach the root than by following each parent pointer in succession. However, the parent pointers visited during this search can be updated to point closer to the root. Because every element visited on the way to a root is part of the same set, this does not change the sets stored in the forest. But it makes future Find operations faster, not only for the nodes between the query node and the root, but also for their descendants.
This updating is an important part of the disjoint-set forest's amortized performance guarantee. There are several algorithms for Find that achieve the asymptotically optimal time complexity. One family of algorithms, known as path compression, makes every node between the query node and the root point to the root; it can be implemented with a simple recursion (as in the C sketch below). This implementation makes two passes, one up the tree and one back down. It requires enough scratch memory to store the path from the query node to the root (in a recursive implementation, the path is implicitly represented by the call stack). This can be decreased to a constant amount of memory by performing both passes in the same direction: the constant-memory implementation walks from the query node to the root twice, once to find the root and once to update pointers.

Tarjan and Van Leeuwen also developed one-pass Find algorithms that retain the same worst-case complexity but are more efficient in practice.[4] These are called path splitting and path halving. Both of these update the parent pointers of nodes on the path between the query node and the root. Path splitting replaces every parent pointer on that path by a pointer to the node's grandparent; path halving works similarly but replaces only every other parent pointer.

The operation Union(x, y) replaces the set containing x and the set containing y with their union. Union first uses Find to determine the roots of the trees containing x and y. If the roots are the same, there is nothing more to do. Otherwise, the two trees must be merged. This is done by either setting the parent pointer of x's root to y's, or setting the parent pointer of y's root to x's.

The choice of which node becomes the parent has consequences for the complexity of future operations on the tree. If it is done carelessly, trees can become excessively tall. For example, suppose that Union always made the tree containing x a subtree of the tree containing y. Begin with a forest that has just been initialized with elements 1, 2, 3, ..., n, and execute Union(1, 2), Union(2, 3), ..., Union(n − 1, n). The resulting forest contains a single tree whose root is n, and the path from 1 to n passes through every node in the tree. For this forest, the time to run Find(1) is O(n).

In an efficient implementation, tree height is controlled using union by size or union by rank. Both of these require a node to store information besides just its parent pointer. This information is used to decide which root becomes the new parent. Both strategies ensure that trees do not become too deep.

In the case of union by size, a node stores its size, which is simply its number of descendants (including the node itself). When the trees with roots x and y are merged, the node with more descendants becomes the parent. If the two nodes have the same number of descendants, then either one can become the parent. In both cases, the size of the new parent node is set to its new total number of descendants. The number of bits necessary to store the size is clearly the number of bits necessary to store n. This adds a constant factor to the forest's required storage.

For union by rank, a node stores its rank, which is an upper bound for its height. When a node is initialized, its rank is set to zero. To merge trees with roots x and y, first compare their ranks. If the ranks are different, then the larger rank tree becomes the parent, and the ranks of x and y do not change.
If the ranks are the same, then either one can become the parent, but the new parent's rank is incremented by one. While the rank of a node is clearly related to its height, storing ranks is more efficient than storing heights: the height of a node can change during a Find operation, so storing ranks avoids the extra effort of keeping the height correct. (Union by rank appears, together with path compression, in the C sketch below.) It can be shown that every node has rank {\displaystyle \lfloor \log n\rfloor } or less.[11] Consequently, each rank can be stored in O(log log n) bits, and all the ranks can be stored in O(n log log n) bits. This makes the ranks an asymptotically negligible portion of the forest's size.

It is clear from the above implementations that the size and rank of a node do not matter unless a node is the root of a tree. Once a node becomes a child, its size and rank are never accessed again.

A disjoint-set forest implementation in which Find does not update parent pointers, and in which Union does not attempt to control tree heights, can have trees with height O(n). In such a situation, the Find and Union operations require O(n) time.

If an implementation uses path compression alone, then a sequence of n MakeSet operations, followed by up to n − 1 Union operations and f Find operations, has a worst-case running time of {\displaystyle \Theta (n+f\cdot \left(1+\log _{2+f/n}n\right))}.[11]

Using union by rank, but without updating parent pointers during Find, gives a running time of {\displaystyle \Theta (m\log n)} for m operations of any type, up to n of which are MakeSet operations.[11]

The combination of path compression, splitting, or halving, with union by size or by rank, reduces the running time for m operations of any type, up to n of which are MakeSet operations, to {\displaystyle \Theta (m\alpha (n))}.[4][5] This makes the amortized running time of each operation {\displaystyle \Theta (\alpha (n))}. This is asymptotically optimal, meaning that every disjoint-set data structure must use {\displaystyle \Omega (\alpha (n))} amortized time per operation.[6] Here, the function α(n) is the inverse Ackermann function. The inverse Ackermann function grows extraordinarily slowly, so this factor is 4 or less for any n that can actually be written in the physical universe. This makes disjoint-set operations practically amortized constant time.

The precise analysis of the performance of a disjoint-set forest is somewhat intricate. However, there is a much simpler analysis that proves that the amortized time for any m Find or Union operations on a disjoint-set forest containing n objects is O(m log* n), where log* denotes the iterated logarithm.[12][13][14][15]

Lemma 1: As the find function follows the path along to the root, the ranks of the nodes it encounters are increasing.

We claim that as Find and Union operations are applied to the data set, this fact remains true over time. Initially, when each node is the root of its own tree, it is trivially true. The only case when the rank of a node might be changed is when the union by rank operation is applied. In this case, a tree with smaller rank will be attached to a tree with greater rank, rather than vice versa. And during the find operation, all nodes visited along the path will be attached to the root, which has larger rank than its children, so this operation will not change this fact either.

Lemma 2: A node u which is the root of a subtree with rank r has at least {\displaystyle 2^{r}} nodes.

Initially, when each node is the root of its own tree, it is trivially true.
Assume that a nodeuwith rankrhas at least2rnodes. Then when two trees with rankrare merged using the operationUnion by Rank, a tree with rankr+ 1results, the root of which has at least2r+2r=2r+1{\displaystyle 2^{r}+2^{r}=2^{r+1}}nodes. Lemma 3: The maximum number of nodes of rankris at mostn2r.{\displaystyle {\frac {n}{2^{r}}}.} Fromlemma 2, we know that a nodeuwhich is root of a subtree with rankrhas at least2r{\displaystyle 2^{r}}nodes. We will get the maximum number of nodes of rankrwhen each node with rankris the root of a tree that has exactly2r{\displaystyle 2^{r}}nodes. In this case, the number of nodes of rankrisn2r.{\displaystyle {\frac {n}{2^{r}}}.} At any particular point in the execution, we can group the vertices of the graph into "buckets", according to their rank. We define the buckets' ranges inductively, as follows: Bucket 0 contains vertices of rank 0. Bucket 1 contains vertices of rank 1. Bucket 2 contains vertices of ranks 2 and 3. In general, if theB-th bucket contains vertices with ranks from interval[r,2r−1]=[r,R−1]{\displaystyle \left[r,2^{r}-1\right]=[r,R-1]}, then the (B+1)st bucket will contain vertices with ranks from interval[R,2R−1].{\displaystyle \left[R,2^{R}-1\right].} ForB∈N{\displaystyle B\in \mathbb {N} }, lettower(B)=22⋯2⏟Btimes{\displaystyle {\text{tower}}(B)=\underbrace {2^{2^{\cdots ^{2}}}} _{B{\text{ times}}}}. Then bucketB{\displaystyle B}will have vertices with ranks in the interval[tower(B−1),tower(B)−1]{\displaystyle [{\text{tower}}(B-1),{\text{tower}}(B)-1]}. We can make two observations about the buckets' sizes. LetFrepresent the list of "find" operations performed, and let T1=∑F(link to the root){\displaystyle T_{1}=\sum _{F}{\text{(link to the root)}}}T2=∑F(number of links traversed where the buckets are different){\displaystyle T_{2}=\sum _{F}{\text{(number of links traversed where the buckets are different)}}}T3=∑F(number of links traversed where the buckets are the same).{\displaystyle T_{3}=\sum _{F}{\text{(number of links traversed where the buckets are the same).}}} Then the total cost ofmfinds isT=T1+T2+T3.{\displaystyle T=T_{1}+T_{2}+T_{3}.} Since each find operation makes exactly one traversal that leads to a root, we haveT1=O(m). Also, from the bound above on the number of buckets, we haveT2=O(mlog*n). ForT3, suppose we are traversing an edge fromutov, whereuandvhave rank in the bucket[B, 2B− 1]andvis not the root (at the time of this traversing, otherwise the traversal would be accounted for inT1). Fixuand consider the sequencev1,v2,…,vk{\displaystyle v_{1},v_{2},\ldots ,v_{k}}that take the role ofvin different find operations. Because of path compression and not accounting for the edge to a root, this sequence contains only different nodes and because ofLemma 1we know that the ranks of the nodes in this sequence are strictly increasing. 
Since both of the nodes are in the same bucket, we can conclude that the length k of the sequence (the number of times node u is attached to a different root in the same bucket) is at most the number of ranks in the bucket B, that is, at most {\displaystyle 2^{B}-1-B<2^{B}.}

Therefore, {\displaystyle T_{3}\leq \sum _{[B,2^{B}-1]}\sum _{u}2^{B}.} From Observations 1 and 2, we can conclude that {\displaystyle T_{3}\leq \sum _{B}2^{B}{\frac {2n}{2^{B}}}\leq 2n\log ^{*}n.} Therefore, {\displaystyle T=T_{1}+T_{2}+T_{3}=O(m\log ^{*}n).}

The worst-case time of the Find operation in trees with union by rank or union by weight is {\displaystyle \Theta (\log n)} (i.e., it is {\displaystyle O(\log n)} and this bound is tight). In 1985, N. Blum gave an implementation of the operations that does not use path compression but compresses trees during union. His implementation runs in {\displaystyle O(\log n/\log \log n)} time per operation,[16] and thus, in comparison with Galler and Fischer's structure, it has a better worst-case time per operation but inferior amortized time. In 1999, Alstrup et al. gave a structure that has optimal worst-case time {\displaystyle O(\log n/\log \log n)} together with inverse-Ackermann amortized time.[17]

The regular implementation as disjoint-set forests does not react favorably to the deletion of elements, in the sense that the time for Find will not improve as a result of the decrease in the number of elements. However, there exist modern implementations that allow for constant-time deletion and where the time bound for Find depends on the current number of elements.[18][19]

Disjoint-set data structures model the partitioning of a set, for example to keep track of the connected components of an undirected graph. This model can then be used to determine whether two vertices belong to the same component, or whether adding an edge between them would result in a cycle. The union–find algorithm is used in high-performance implementations of unification.[20] This data structure is used by the Boost Graph Library to implement its Incremental Connected Components functionality. It is also a key component in implementing Kruskal's algorithm to find the minimum spanning tree of a graph. The Hoshen–Kopelman algorithm also makes use of union–find.
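A compact C sketch of the disjoint-set forest operations described above (MakeSet, Find with path compression, and union by rank), under the convention that a root points to itself; allocation-failure checks are omitted:

    #include <stdlib.h>

    /* Disjoint-set forest over elements 0..n-1. parent[i] == i marks a root. */
    typedef struct { int *parent, *rank; } DSF;

    DSF dsf_make(int n) {                       /* n MakeSet operations */
        DSF f = { malloc(n * sizeof(int)), calloc(n, sizeof(int)) };
        for (int i = 0; i < n; i++) f.parent[i] = i;   /* self-parented roots */
        return f;
    }

    int dsf_find(DSF *f, int x) {               /* Find with path compression */
        if (f->parent[x] != x)
            f->parent[x] = dsf_find(f, f->parent[x]);  /* point x at the root */
        return f->parent[x];
    }

    void dsf_union(DSF *f, int x, int y) {      /* union by rank */
        int rx = dsf_find(f, x), ry = dsf_find(f, y);
        if (rx == ry) return;                   /* already in the same set */
        if (f->rank[rx] < f->rank[ry]) { int t = rx; rx = ry; ry = t; }
        f->parent[ry] = rx;                     /* attach the smaller-rank root */
        if (f->rank[rx] == f->rank[ry]) f->rank[rx]++;
    }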
https://en.wikipedia.org/wiki/Disjoint_set_(data_structure)
A bitstream (or bit stream), also known as a binary sequence, is a sequence of bits. A bytestream is a sequence of bytes. Typically, each byte is an 8-bit quantity, and so the term octet stream is sometimes used interchangeably. An octet may be encoded as a sequence of 8 bits in multiple different ways (see bit numbering), so there is no unique and direct translation between bytestreams and bitstreams.

Bitstreams and bytestreams are used extensively in telecommunications and computing. For example, synchronous bitstreams are carried by SONET, and the Transmission Control Protocol transports an asynchronous bytestream. In practice, bitstreams are not used directly to encode bytestreams; a communication channel may use a signalling method that does not directly translate to bits (for instance, by transmitting signals of multiple frequencies) and typically also encodes other information, such as framing and error correction, together with its data.[citation needed]

The term bitstream is frequently used to describe the configuration data to be loaded into a field-programmable gate array (FPGA). Although most FPGAs also support a byte-parallel loading method, this usage may have originated from the common method of configuring the FPGA from a serial bit stream, typically from a serial PROM or flash memory chip. The detailed format of the bitstream for a particular FPGA is typically proprietary to the FPGA vendor.

In mathematics, several specific infinite sequences of bits have been studied for their mathematical properties; these include the Baum–Sweet sequence, Ehrenfeucht–Mycielski sequence, Fibonacci word, Kolakoski sequence, regular paperfolding sequence, Rudin–Shapiro sequence, and Thue–Morse sequence.

On most operating systems, including Unix-like systems and Windows, standard I/O libraries convert lower-level paged or buffered file access to a bytestream paradigm. In particular, in Unix-like operating systems, each process has three standard streams, which are examples of unidirectional bytestreams. The Unix pipe mechanism provides bytestream communication between different processes.

Compression algorithms often code in bitstreams, as the 8 bits offered by a byte (the smallest addressable unit of memory) may be wasteful. Although typically implemented in low-level languages, some high-level languages such as Python[1] and Java[2] offer native interfaces for bitstream I/O.

One well-known example of a communication protocol which provides a byte-stream service to its clients is the Transmission Control Protocol (TCP) of the Internet protocol suite, which provides a bidirectional bytestream. The Internet media type for an arbitrary bytestream is application/octet-stream; other media types are defined for bytestreams in well-known formats.

Often the contents of a bytestream are dynamically created, such as the data from the keyboard and other peripherals (/dev/tty), data from the pseudorandom number generator (/dev/urandom), etc. In those cases, when the destination of a bytestream (the consumer) uses bytes faster than they can be generated, the system uses process synchronization to make the destination wait until the next byte is available. When bytes are generated faster than the destination can use them and the producer is a software algorithm, the system pauses it with the same process synchronization techniques. When the producer supports flow control, the system only sends the ready signal when the consumer is ready for the next byte.
When the producer cannot be paused—a keyboard or some hardware that does not support flow control—the system typically attempts to temporarily store the data until the consumer is ready for it, usually using a queue. Often the receiver can empty the buffer before it gets completely full. A producer that continues to produce data faster than it can be consumed, even after the buffer is full, leads to unwanted buffer overflow, packet loss, network congestion, and denial of service.
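The buffering just described can be sketched with a bounded queue: a blocking put models flow control, while an unpausable producer would instead have to drop data once the queue fills. A toy Python sketch, not tied to any particular operating system facility:

    import queue
    import threading

    buf = queue.Queue(maxsize=64)      # bounded buffer between producer and consumer

    def producer():
        for item in range(1000):
            buf.put(item)              # blocks when the buffer is full: flow control
        buf.put(None)                  # sentinel marking end of stream

    def consumer():
        while (item := buf.get()) is not None:
            pass                       # process the item here

    t = threading.Thread(target=producer)
    t.start()
    consumer()
    t.join()

Relatedly, since memory is byte-addressable, bit-level I/O such as that used by the compression algorithms mentioned above is usually layered on top of a bytestream. A minimal illustrative sketch (the names BitWriter and read_bits are made up for this example, not a standard interface; MSB-first bit order is assumed):

    class BitWriter:
        """Packs individual bits into a bytestream, most significant bit first."""

        def __init__(self):
            self.buffer = bytearray()
            self.acc = 0       # bits accumulated so far
            self.nbits = 0     # number of bits currently in acc

        def write_bit(self, bit):
            self.acc = (self.acc << 1) | (bit & 1)
            self.nbits += 1
            if self.nbits == 8:        # a full byte: flush it to the bytestream
                self.buffer.append(self.acc)
                self.acc, self.nbits = 0, 0

        def getvalue(self):
            if self.nbits:             # pad the final partial byte with zeros
                return bytes(self.buffer) + bytes([self.acc << (8 - self.nbits)])
            return bytes(self.buffer)

    def read_bits(data):
        """Unpacks a bytestream back into a bitstream, most significant bit first."""
        for byte in data:
            for shift in range(7, -1, -1):
                yield (byte >> shift) & 1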
https://en.wikipedia.org/wiki/Bitstream
In computer science, coinduction is a technique for defining and proving properties of systems of concurrent interacting objects. Coinduction is the mathematical dual to structural induction.[citation needed] Coinductively defined data types are known as codata and are typically infinite data structures, such as streams.

As a definition or specification, coinduction describes how an object may be "observed", "broken down" or "destructed" into simpler objects. As a proof technique, it may be used to show that an equation is satisfied by all possible implementations of such a specification.

To generate and manipulate codata, one typically uses corecursive functions, in conjunction with lazy evaluation. Informally, rather than defining a function by pattern-matching on each of the inductive constructors, one defines each of the "destructors" or "observers" over the function result.

In programming, co-logic programming (co-LP for brevity) "is a natural generalization of logic programming and coinductive logic programming, which in turn generalizes other extensions of logic programming, such as infinite trees, lazy predicates, and concurrent communicating predicates. Co-LP has applications to rational trees, verifying infinitary properties, lazy evaluation, concurrent logic programming, model checking, bisimilarity proofs, etc."[1] Experimental implementations of co-LP are available from the University of Texas at Dallas[2] and in the languages Logtalk (for examples see[3]) and SWI-Prolog.

In his book Types and Programming Languages,[4] Benjamin C. Pierce gives a concise statement of both the principle of induction and the principle of coinduction. While this article is not primarily concerned with induction, it is useful to consider their somewhat generalized forms at once. In order to state the principles, a few preliminaries are required.

Let $U$ be a set and let $F : 2^U \to 2^U$ be a monotone function, that is:

$$X \subseteq Y \Rightarrow F(X) \subseteq F(Y)$$

Unless otherwise stated, $F$ will be assumed to be monotone. A set $X \subseteq U$ is called $F$-closed if $F(X) \subseteq X$, and $F$-consistent if $X \subseteq F(X)$; $X$ is a fixed point of $F$ when $F(X) = X$. These terms can be intuitively understood in the following way. Suppose that $X$ is a set of assertions, and $F(X)$ is the operation that yields the consequences of $X$. Then $X$ is $F$-closed when one cannot conclude any more than has already been asserted, while $X$ is $F$-consistent when all of the assertions are supported by other assertions (i.e. there are no "non-$F$-logical assumptions").

The Knaster–Tarski theorem tells us that the least fixed point of $F$ (denoted $\mu F$) is given by the intersection of all $F$-closed sets, while the greatest fixed point (denoted $\nu F$) is given by the union of all $F$-consistent sets. We can now state the principles of induction and coinduction: the principle of induction says that if $X$ is $F$-closed, then $\mu F \subseteq X$; dually, the principle of coinduction says that if $X$ is $F$-consistent, then $X \subseteq \nu F$.

The principles, as stated, are somewhat opaque, but can be usefully thought of in the following way. Suppose you wish to prove a property of $\mu F$. By the principle of induction, it suffices to exhibit an $F$-closed set $X$ for which the property holds. Dually, suppose you wish to show that $x \in \nu F$. Then it suffices to exhibit an $F$-consistent set that $x$ is known to be a member of.
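When the universe $U$ is finite, the Knaster–Tarski characterization suggests a direct way to compute both fixed points: iterate $F$ upward from the empty set for $\mu F$, and downward from $U$ for $\nu F$. A toy Python sketch under those assumptions (finite universe, monotone $F$):

    def lfp(F):
        """Least fixed point: iterate F upward from the empty set."""
        x = frozenset()
        while (nxt := frozenset(F(x))) != x:
            x = nxt
        return x

    def gfp(F, universe):
        """Greatest fixed point: iterate F downward from the whole universe."""
        x = frozenset(universe)
        while (nxt := frozenset(F(x))) != x:
            x = nxt
        return x

    # Example: the natural-number functor used later in this article,
    # F(X) = {0} ∪ {x+1 : x ∈ X}, truncated to the finite universe {0..4}.
    U = range(5)
    F = lambda X: {0} | {x + 1 for x in X if x + 1 in U}
    print(sorted(lfp(F)))     # [0, 1, 2, 3, 4]
    print(sorted(gfp(F, U)))  # [0, 1, 2, 3, 4] (here the two fixed points agree)

On an infinite universe such as the string sets below, this naive iteration no longer terminates; the two principles are what let one reason about $\mu F$ and $\nu F$ without computing them.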
Consider the following grammar of datatypes:

$$T = \bot \mid \top \mid T \times T$$

That is, the set of types includes the "bottom type" $\bot$, the "top type" $\top$, and (non-homogeneous) lists. These types can be identified with strings over the alphabet $\Sigma = \{\bot, \top, \times\}$. Let $\Sigma^{\leq\omega}$ denote all (possibly infinite) strings over $\Sigma$. Consider the function $F : 2^{\Sigma^{\leq\omega}} \to 2^{\Sigma^{\leq\omega}}$:

$$F(X) = \{\bot, \top\} \cup \{x \times y : x, y \in X\}$$

In this context, $x \times y$ means "the concatenation of string $x$, the symbol $\times$, and string $y$." We should now define our set of datatypes as a fixpoint of $F$, but it matters whether we take the least or greatest fixpoint.

Suppose we take $\mu F$ as our set of datatypes. Using the principle of induction, we can prove the following claim: every string in $\mu F$ is finite. To arrive at this conclusion, consider the set of all finite strings over $\Sigma$. Clearly $F$ cannot produce an infinite string, so it turns out this set is $F$-closed and the conclusion follows.

Now suppose that we take $\nu F$ as our set of datatypes. We would like to use the principle of coinduction to prove the following claim: $\bot \times \bot \times \cdots \in \nu F$. Here $\bot \times \bot \times \cdots$ denotes the infinite list consisting of all $\bot$. To use the principle of coinduction, consider the set

$$\{\bot \times \bot \times \cdots\}$$

This set turns out to be $F$-consistent, and therefore $\bot \times \bot \times \cdots \in \nu F$. This depends on the suspicious statement that

$$\bot \times \bot \times \cdots = (\bot \times \bot \times \cdots) \times (\bot \times \bot \times \cdots)$$

The formal justification of this is technical and depends on interpreting strings as sequences, i.e. functions $\mathbb{N} \to \Sigma$. Intuitively, the argument is similar to the argument that $0.{\bar 0}1 = 0$ (see Repeating decimal).

Consider the following definition of a stream:[5] a stream over $A$ is given by a head, which is an element of $A$, together with a tail, which is again a stream over $A$. This would seem to be a definition that is not well-founded, but it is nonetheless useful in programming and can be reasoned about. In any case, a stream is an infinite list of elements from which you may observe the first element, or in front of which you may place an element to get another stream.

Consider the endofunctor $F$ in the category of sets:

$$F(x) = A \times x \qquad F(f) = \langle \mathrm{id}_A, f \rangle$$

The final $F$-coalgebra $\nu F$ has the following morphism associated with it:

$$\mathrm{out} : \nu F \to F(\nu F) = A \times \nu F$$

This induces another coalgebra $F(\nu F)$ with associated morphism $F(\mathrm{out})$.
Because $\nu F$ is final, there is a unique morphism

$$\overline{F(\mathrm{out})} : F(\nu F) \to \nu F$$

such that

$$\mathrm{out} \circ \overline{F(\mathrm{out})} = F\left(\overline{F(\mathrm{out})}\right) \circ F(\mathrm{out}) = F\left(\overline{F(\mathrm{out})} \circ \mathrm{out}\right)$$

The composition $\overline{F(\mathrm{out})} \circ \mathrm{out}$ induces another $F$-coalgebra homomorphism $\nu F \to \nu F$. Since $\nu F$ is final, this homomorphism is unique and therefore equals $\mathrm{id}_{\nu F}$. Altogether we have:

$$\overline{F(\mathrm{out})} \circ \mathrm{out} = \mathrm{id}_{\nu F}$$
$$\mathrm{out} \circ \overline{F(\mathrm{out})} = F\left(\overline{F(\mathrm{out})} \circ \mathrm{out}\right) = F(\mathrm{id}_{\nu F}) = \mathrm{id}_{F(\nu F)}$$

This witnesses the isomorphism $\nu F \simeq F(\nu F)$, which in categorical terms indicates that $\nu F$ is a fixed point of $F$ and justifies the notation.[6][verification needed]

We will show that Stream A is the final coalgebra of the functor $F(x) = A \times x$. Consider the implementations $\mathrm{out}(s) = (\mathrm{hd}\,s, \mathrm{tl}\,s)$ and $\mathrm{out}^{-1}(a, s) = \mathrm{cons}(a, s)$; a Python rendering of this pair is sketched below. These are easily seen to be mutually inverse, witnessing the isomorphism. See the reference for more details.

We will demonstrate how the principle of induction subsumes mathematical induction. Let $P$ be some property of natural numbers. We will take the following definition of mathematical induction:

$$0 \in P \land (n \in P \Rightarrow n + 1 \in P) \Rightarrow P = \mathbb{N}$$

Now consider the function $F : 2^{\mathbb{N}} \to 2^{\mathbb{N}}$:

$$F(X) = \{0\} \cup \{x + 1 : x \in X\}$$

It should not be difficult to see that $\mu F = \mathbb{N}$. Therefore, by the principle of induction, if we wish to prove some property $P$ of $\mathbb{N}$, it suffices to show that $P$ is $F$-closed. In detail, we require:

$$F(P) \subseteq P$$

That is,

$$\{0\} \cup \{x + 1 : x \in P\} \subseteq P$$

This is precisely mathematical induction as stated.
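To make the stream discussion concrete, here is one way to render hd, tl, cons and out in Python, using generators for lazy evaluation. This is an illustrative sketch following the text's names, not a canonical implementation:

    from itertools import islice

    def cons(a, s):
        """out⁻¹: prepend an element to a stream (A × Stream A → Stream A)."""
        yield a
        yield from s

    def out(s):
        """out: observe a stream's head and tail (Stream A → A × Stream A)."""
        it = iter(s)
        return next(it), it

    def nats(n=0):
        """A corecursive stream: n, n+1, n+2, ... defined by its observations."""
        while True:
            yield n
            n += 1

    head, tail = out(nats())               # "destruct" the stream
    print(head, list(islice(tail, 4)))     # 0 [1, 2, 3, 4]
    print(next(cons(42, nats())))          # 42: cons undoes out

Because generators are evaluated lazily, the "not well-founded" self-reference in the definition of streams is harmless: only the finitely many observations actually demanded are ever computed.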
https://en.wikipedia.org/wiki/Codata_(computer_science)
In connection-oriented communication, a data stream is the transmission of a sequence of digitally encoded signals to convey information.[1] Typically, the transmitted symbols are grouped into a series of packets.[2]

Data streaming has become ubiquitous. Anything transmitted over the Internet is transmitted as a data stream. Using a mobile phone to have a conversation transmits the sound as a data stream.

Formally, a data stream is any ordered pair $(s, \Delta)$ where $s$ is a sequence of tuples and $\Delta$ is a sequence of positive real time intervals.

A data stream contains different sets of data, depending on the chosen data format. Data streams are used in a variety of application areas and are integrated with a range of core systems.

In a data stream, the device used on the user side is visible through the user agent, which shares information about that device.

A data point is a tag that collects information about a certain action performed by a user on a website. Data points exist in two types, the values of which are used to create appropriate audiences. A segment is a logical statement built on specific data points using the AND, OR or NOT operators.[9] Hybrid data is raw data drawn from both the data point and segment data formats.[10] A URL record is a set of information about a particular URL that has been visited. Information gathered from websites is based on user behavior.

Data providers deliver both personal and non-personal information; two types of user data are available in a data stream.
https://en.wikipedia.org/wiki/Data_stream
Data Stream Mining(also known asstream learning) is the process of extracting knowledge structures from continuous, rapid data records. Adata streamis an ordered sequence of instances that in many applications of data stream mining can be read only once or a small number of times using limited computing and storage capabilities.[1] In many data stream mining applications, the goal is to predict the class or value of new instances in the data stream given some knowledge about the class membership or values of previous instances in the data stream.[2]Machine learning techniques can be used to learn this prediction task from labeled examples in an automated fashion. Often, concepts from the field ofincremental learningare applied to cope with structural changes,on-line learningand real-time demands. In many applications, especially operating within non-stationary environments, the distribution underlying the instances or the rules underlying their labeling may change over time, i.e. the goal of the prediction, the class to be predicted or the target value to be predicted, may change over time.[3]This problem is referred to asconcept drift. Detectingconcept driftis a central issue to data stream mining.[4][5]Other challenges[6]that arise when applying machine learning to streaming data include: partially and delayed labeled data,[7][8]recovery from concept drifts,[1]and temporal dependencies.[9] Examples of data streams include computer network traffic, phone conversations, ATM transactions, web searches, and sensor data. Data stream mining can be considered a subfield ofdata mining,machine learning, andknowledge discovery.
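As an illustration only (not a method prescribed by the literature cited here), a single-pass learner with bounded memory can be sketched in a few lines of Python: it predicts the majority label over a sliding window, so its predictions adapt, crudely, when the label distribution drifts:

    from collections import Counter, deque

    class WindowedMajority:
        """One-pass stream learner with bounded memory: predicts the majority
        label over a sliding window, a crude response to concept drift."""

        def __init__(self, window=1000):
            self.window = deque(maxlen=window)
            self.counts = Counter()

        def predict(self):
            return self.counts.most_common(1)[0][0] if self.counts else None

        def learn(self, label):
            if len(self.window) == self.window.maxlen:
                self.counts[self.window[0]] -= 1   # oldest label is about to fall out
            self.window.append(label)
            self.counts[label] += 1

Real stream learners replace the majority vote with incremental classifiers (e.g. Hoeffding trees) and explicit drift detectors, but the constraints are the same: each instance is seen once, and memory stays bounded regardless of stream length.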
https://en.wikipedia.org/wiki/Data_stream_mining
In packet switching networks, traffic flow, packet flow or network flow is a sequence of packets from a source computer to a destination, which may be another host, a multicast group, or a broadcast domain. RFC 2722 defines traffic flow as "an artificial logical equivalent to a call or connection."[1] RFC 3697 defines traffic flow as "a sequence of packets sent from a particular source to a particular unicast, anycast, or multicast destination that the source desires to label as a flow. A flow could consist of all packets in a specific transport connection or a media stream. However, a flow is not necessarily 1:1 mapped to a transport connection."[2] Flow is also defined in RFC 3917 as "a set of IP packets passing an observation point in the network during a certain time interval."[3] Packet flow temporal efficiency can be affected by one-way delay (OWD), which is a combination of processing delay, queuing delay, transmission delay, and propagation delay.

Packets from one flow need to be handled differently from others, by means of separate queues in switches, routers and network adapters, to achieve traffic shaping, policing, fair queueing or quality of service. It is also a concept used in Queueing Network Analyzers (QNAs) or in packet tracing.

Applied to Internet routers, a flow may be a host-to-host communication path, or a socket-to-socket communication identified by a unique combination of source and destination addresses and port numbers, together with transport protocol (for example, UDP or TCP); a sketch of this flow key appears below. In the TCP case, a flow may be a virtual circuit, also known as a virtual connection or a byte stream.[4]

In packet switches, the flow may be identified by IEEE 802.1Q Virtual LAN tagging in Ethernet networks, or by a label-switched path in MPLS tag switching.

Packet flow can be represented as a path in a network to model network performance. For example, a water flow network can be used to conceptualize packet flow. Communication channels can be thought of as pipes, with the pipe capacity corresponding to bandwidth and flows corresponding to data throughput. This visualization can help to understand bottlenecks, queuing, and the unique requirements of tailored systems.
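As mentioned above, a flow at a router is conventionally identified by the 5-tuple of source and destination address, source and destination port, and transport protocol. A small Python sketch (the packet dictionary layout here is assumed purely for illustration):

    from collections import Counter

    def flow_key(pkt):
        """The conventional 5-tuple that identifies a flow."""
        return (pkt["src_ip"], pkt["dst_ip"],
                pkt["src_port"], pkt["dst_port"], pkt["proto"])

    packets = [
        {"src_ip": "10.0.0.1", "dst_ip": "192.0.2.7",
         "src_port": 40000, "dst_port": 80, "proto": "TCP"},
        {"src_ip": "10.0.0.1", "dst_ip": "192.0.2.7",
         "src_port": 40000, "dst_port": 80, "proto": "TCP"},
    ]

    # Per-flow packet counts, e.g. to drive per-flow queueing or accounting.
    flows = Counter(flow_key(p) for p in packets)
    print(flows)

Grouping by this key is what lets switches and routers give each flow its own queue for shaping, policing, or fair queueing, as described above.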
https://en.wikipedia.org/wiki/Traffic_flow_(computer_networking)
Anetwork socketis a software structure within anetwork nodeof acomputer networkthat serves as an endpoint for sending and receiving data across the network. The structure and properties of a socket are defined by anapplication programming interface(API) for the networking architecture. Sockets are created only during the lifetime of aprocessof an application running in the node. Because of thestandardizationof theTCP/IPprotocols in the development of theInternet, the termnetwork socketis most commonly used in the context of the Internet protocol suite, and is therefore often also referred to asInternet socket. In this context, a socket is externally identified to other hosts by itssocket address, which is the triad oftransport protocol,IP address, andport number. The termsocketis also used for the software endpoint of node-internalinter-process communication(IPC), which often uses the same API as a network socket. The use of the termsocketin software is analogous to the function of an electricalfemale connector, a device in hardware for communication between nodes interconnected with anelectrical cable. Similarly, the termportis used for external physical endpoints at a node or device. The application programming interface (API) for the network protocol stack creates ahandlefor each socket created by an application, commonly referred to as asocket descriptor. InUnix-like operating systems, this descriptor is a type offile descriptor. It is stored by the application process for use with every read and write operation on the communication channel. At the time of creation with the API, a network socket is bound to the combination of a type of network protocol to be used for transmissions, a network address of the host, and aport number. Ports are numbered resources that represent another type of software structure of the node. They are used as service types, and, once created by a process, serve as an externally (from the network) addressable location component, so that other hosts may establish connections. Network sockets may be dedicated for persistent connections for communication between two nodes, or they may participate inconnectionlessandmulticastcommunications. In practice, due to the proliferation of the TCP/IP protocols in use on the Internet, the termnetwork socketusually refers to use with theInternet Protocol(IP). It is therefore often also calledInternet socket. An application can communicate with a remote process by exchanging data with TCP/IP by knowing the combination of protocol type, IP address, and port number. This combination is often known as asocket address. It is the network-facing access handle to the network socket. The remote process establishes a network socket in its own instance of the protocol stack and uses the networking API to connect to the application, presenting its own socket address for use by the application. Aprotocol stack, usually provided by theoperating system(rather than as a separate library, for instance), is a set of services that allows processes to communicate over a network using the protocols that the stack implements. The operating system forwards the payload of incoming IP packets to the corresponding application by extracting the socket address information from the IP and transport protocol headers and stripping the headers from the application data. The application programming interface (API) that programs use to communicate with the protocol stack, using network sockets, is called asocket API. 
Development of application programs that utilize this API is called socket programming or network programming. Internet socket APIs are usually based on the Berkeley sockets standard. In the Berkeley sockets standard, sockets are a form of file descriptor, due to the Unix philosophy that "everything is a file", and the analogies between sockets and files. Both have functions to read, write, open, and close. In practice, the differences strain the analogy, and different interfaces (send and receive) are used on a socket. In inter-process communication, each end generally has its own socket.

In the standard Internet protocols TCP and UDP, a socket address is the combination of an IP address and a port number, much like one end of a telephone connection is the combination of a phone number and a particular extension. Sockets need not have a source address, for example, for only sending data, but if a program binds a socket to a source address, the socket can be used to receive data sent to that address. Based on this address, Internet sockets deliver incoming data packets to the appropriate application process.

Socket often refers specifically to an internet socket or TCP socket. An internet socket is minimally characterized by a local socket address (the local IP address and, for TCP and UDP, a port number) and a transport protocol such as TCP, UDP or raw IP; a socket that has been connected to another socket is additionally characterized by its remote socket address.

The distinctions between a socket (internal representation), socket descriptor (abstract identifier), and socket address (public address) are subtle, and these are not always distinguished in everyday usage. Further, specific definitions of a socket differ between authors. In IETF Requests for Comments, Internet Standards, in many textbooks, as well as in this article, the term socket refers to an entity that is uniquely identified by the socket number. In other textbooks,[1] the term socket refers to a local socket address, i.e. a "combination of an IP address and a port number". In the original definition of socket given in RFC 147,[2] as it was related to the ARPA network in 1971, "the socket is specified as a 32-bit number with even sockets identifying receiving sockets and odd sockets identifying sending sockets." Today, however, socket communications are bidirectional.

Within the operating system and the application that created a socket, a socket is referred to by a unique integer value called a socket descriptor. On Unix-like operating systems and Microsoft Windows, the command-line tools netstat or ss[3] are used to list established sockets and related information.

This example, modeled according to the Berkeley socket interface, sends the string "Hello, world!" via TCP to port 80 of the host with address 203.0.113.0. It illustrates the creation of a socket (getSocket), connecting it to the remote host, sending the string, and finally closing the socket; a Python rendering of these steps is sketched below.

Several types of Internet socket are available: datagram sockets (connectionless, using UDP), stream sockets (connection-oriented, using TCP), and raw sockets (which bypass the transport layer and expose IP packets to the application). Other socket types are implemented over other transport protocols, such as Systems Network Architecture[10] and Unix domain sockets for internal inter-process communication.

Computer processes that provide application services are referred to as servers, and create sockets on startup that are in the listening state. These sockets are waiting for initiatives from client programs. A TCP server may serve several clients concurrently by creating a unique dedicated socket for each client connection in a new child process or processing thread for each client. These are in the established state when a socket-to-socket virtual connection or virtual circuit (VC), also known as a TCP session, is established with the remote socket, providing a duplex byte stream.
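The pseudocode for the "Hello, world!" example is not reproduced in this text, so the following is a rendering of the same steps using Python's standard socket module (getSocket from the description corresponds to socket.socket here). Note that 203.0.113.0 is a documentation address, so the connect call would not actually succeed:

    import socket

    # Create a TCP/IPv4 socket: the counterpart of the example's getSocket step.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.connect(("203.0.113.0", 80))   # connect to the remote host and port
        s.sendall(b"Hello, world!")      # send the string over the byte stream
    # The socket is closed automatically when the with-block exits.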
A server may create several concurrently established TCP sockets with the same local port number and local IP address, each mapped to its own server-child process, serving its own client process. They are treated as different sockets by the operating system since the remote socket address (the client IP address or port number) is different; i.e. since they have different socket pair tuples.

UDP sockets do not have an established state, because the protocol is connectionless. A UDP server process handles incoming datagrams from all remote clients sequentially through the same socket. UDP sockets are not identified by the remote address, but only by the local address, although each message has an associated remote address that can be retrieved from each datagram with the networking application programming interface (API).

Local and remote sockets communicating over TCP are called socket pairs. Each socket pair is described by a unique 4-tuple consisting of source and destination IP addresses and port numbers, i.e. of local and remote socket addresses.[11][12] As discussed above, in the TCP case, a socket pair is associated on each end of the connection with a unique 4-tuple.

The term socket dates to the publication of RFC 147 in 1971, when it was used in the ARPANET. Most modern implementations of sockets are based on Berkeley sockets (1983), and other stacks such as Winsock (1991). The Berkeley sockets API in the Berkeley Software Distribution (BSD) originated with the 4.2BSD Unix operating system as an API. Only in 1989, however, could UC Berkeley release versions of its operating system and networking library free from the licensing constraints of AT&T's copyright-protected Unix. In c. 1987, AT&T introduced the STREAMS-based Transport Layer Interface (TLI) in UNIX System V Release 3 (SVR3),[13] which continued into Release 4 (SVR4).[14]

Other early implementations were written for TOPS-20,[15] MVS,[15] VM,[15] and IBM-DOS (PCIP).[15][16]

The socket is primarily a concept used in the transport layer of the Internet protocol suite or session layer of the OSI model. Networking equipment such as routers, which operate at the internet layer, and switches, which operate at the link layer, do not require implementations of the transport layer. However, stateful network firewalls, network address translators, and proxy servers keep track of active socket pairs. In multilayer switches and quality of service (QoS) support in routers, packet flows may be identified by extracting information about the socket pairs. Raw sockets are typically available in network equipment and are used for routing protocols such as IGRP and OSPF, and for Internet Control Message Protocol (ICMP).
https://en.wikipedia.org/wiki/Network_socket
Streaming media refers to multimedia delivered through a network for playback using a media player. Media is transferred in a stream of packets from a server to a client and is rendered in real-time;[1] this contrasts with file downloading, a process in which the end-user obtains an entire media file before consuming the content. Streaming is more commonly used for video on demand, streaming television, and music streaming services over the Internet.

While streaming is most commonly associated with multimedia from a remote server over the Internet, it also includes offline multimedia between devices on a local area network, for example using DLNA[2] and a home server, or in a personal area network between two devices using Bluetooth (which uses radio waves rather than IP).[3] Online streaming was initially popularized by RealNetworks and Microsoft in the 1990s[4] and has since grown to become the globally most popular method for consuming music and videos,[5] with numerous competing subscription services being offered since the 2010s.[6] Audio streaming to wireless speakers, often using Bluetooth, is another use that has become prevalent during that decade.[7] Live streaming is the real-time delivery of content during production, much as live television broadcasts content via television channels.[8]

Distinguishing delivery methods from the media applies specifically to telecommunications networks, as most of the traditional media delivery systems are either inherently streaming (e.g., radio, television) or inherently non-streaming (e.g., books, videotapes, audio CDs). The term "streaming media" can apply to media other than video and audio, such as live closed captioning, ticker tape, and real-time text, which are all considered "streaming text".

The term "streaming" was first used for tape drives manufactured by Data Electronics Inc. that were meant to slowly ramp up and run for the entire track; slower ramp times lowered drive costs. "Streaming" was applied in the early 1990s as a better description for video on demand and later live video on IP networks. It was first done by Starlight Networks for video streaming and Real Networks for audio streaming. Such video had previously been referred to by the misnomer "store and forward video."[9]

Beginning in 1881, Théâtrophone enabled subscribers to listen to opera and theatre performances over telephone lines. This operated until 1932. The concept of media streaming eventually came to America.[10]

In the early 1920s, George Owen Squier was granted patents for a system for the transmission and distribution of signals over electrical lines,[11] which was the technical basis for what later became Muzak, a technology for streaming continuous music to commercial customers without the use of radio.

The Telephone Music Service, a live jukebox service, began in 1929 and continued until 1997.[12][13] The clientele eventually included 120 bars and restaurants in the Pittsburgh area. A tavern customer would deposit money in the jukebox, use a telephone on top of the jukebox, and ask the operator to play a song. The operator would find the record in the studio library of more than 100,000 records, put it on a turntable, and the music would be piped over the telephone line to play in the tavern. The music media began as 78s, 33s and 45s, played on the six turntables they monitored. CDs and tapes were incorporated in later years. The business had a succession of owners, notably Bill Purse, his daughter Helen Reutzel, and finally Dotti White.
The revenue stream for each quarter was split between 60% for the music service and 40% for the tavern owner.[14]This business model eventually became unsustainable due to city permits and the cost of setting up these telephone lines.[13] Attempts to display media oncomputersdate back to the earliest days of computing in the mid-20th century. However, little progress was made for several decades, primarily due to the high cost and limited capabilities of computer hardware. From the late 1980s through the 1990s, consumer-grade personal computers became powerful enough to display various media. The primary technical issues related to streaming were having enoughCPUandbusbandwidthto support the required data rates and achieving thereal-time computingperformance required to preventbuffer underrunsand enable smooth streaming of the content. However, computer networks were still limited in the mid-1990s, and audio and video media were usually delivered over non-streaming channels, such as playback from a localhard disk driveorCD-ROMson the end user's computer. Terminology in the 1970s was at best confusing for applications such as telemetered aircraft or missile test data. By then PCM [Pulse Code Modulation] was the dominant transmission type. This PCM transmission was bit-serial and not packetized so the 'streaming' terminology was often a confusion factor. In 1969 Grumman acquired one of the first telemetry ground stations [Automated Telemetry Station, 'ATS'] which had the capability for reconstructing serial telemetered data which had been recorded on digital computer peripheral tapes. Computer peripheral tapes were inherently recorded in blocks. Reconstruction was required for continuous display purposes without time-base distortion. The Navy implemented similar capability in DoD for the first time in 1973. These implementations are the only known examples of true 'streaming' in the sense of reconstructing distortion-free serial data from packetized or blocked recordings.[15]'Real-time' terminology has also been confusing in streaming context. The most accepted definition of 'real-time' requires that all associated processing or formatting of the data must take place prior to availability of the next sample of each measurement. In the 1970s the most powerful mainframe computers were not fast enough for this task at significant overall data rates in the range of 50,000 samples per second. For that reason both the Grumman ATS and the Navy Real-time Telemetry Processing System [RTPS] employed unique special purpose digital computers dedicated to real-time processing of raw data samples. In 1990, the first commercialEthernet switchwas introduced byKalpana, which enabled the more powerful computer networks that led to the first streaming video solutions used by schools and corporations. Practical streaming media was only made possible with advances indata compressiondue to the impractically high bandwidth requirements of uncompressed media. Rawdigital audioencoded withpulse-code modulation(PCM) requires a bandwidth of 1.4Mbit/sfor uncompressedCD audio, while rawdigital videorequires a bandwidth of 168Mbit/s forSD videoand over 1000Mbit/s forFHDvideo.[16] During the late 1990s and early 2000s, users had increased access to computer networks, especially the Internet. During the early 2000s, users had access to increased networkbandwidth, especially in thelast mile. These technological improvements facilitated the streaming of audio and video content to computer users in their homes and workplaces. 
There was also an increasing use of standard protocols and formats, such asTCP/IP,HTTP, andHTML, as the Internet became increasingly commercialized, which led to an infusion of investment into the sector. The bandSevere Tire Damagewas the first group to perform live on the Internet. On 24 June 1993, the band was playing a gig atXerox PARC, while elsewhere in the building, scientists were discussing new technology (theMbone) for broadcasting on the Internet usingmulticasting. As proof of PARC's technology, the band's performance was broadcast and could be seen live in Australia and elsewhere. In a March 2017 interview, band member Russ Haines stated that the band had used approximately "half of the total bandwidth of the internet" to stream the performance, which was a152 × 76pixel video, updated eight to twelve times per second, with audio quality that was, "at best, a bad telephone connection."[17]In October 1994, a school music festival was webcast from the Michael Fowler Centre in Wellington, New Zealand. The technician who arranged the webcast, local council employee Richard Naylor, later commented: "We had 16 viewers in 12 countries."[18] RealNetworkspioneered the broadcast of a baseball game between theNew York Yankeesand theSeattle Marinersover the Internet in 1995.[19]The first symphonic concert on the Internet—a collaboration between theSeattle Symphonyand guest musiciansSlash,Matt Cameron, andBarrett Martin—took place at theParamount TheaterinSeattle, Washington, on 10 November 1995.[20] In 1996,Marc Scarpaproduced the first large-scale, online, live broadcast, theAdam Yauch–ledTibetan Freedom Concert, an event that would define the format of social change broadcasts. Scarpa continued to pioneer in the streaming media world with projects such asWoodstock '99, Townhall withPresident Clinton, and more recently Covered CA's campaign "Tell a Friend Get Covered", which was livestreamed on YouTube. Xing Technologywas founded in 1989 and developed a JPEG streaming product called "StreamWorks". Another streaming product appeared in late 1992 and was named StarWorks.[21]StarWorks enabled on-demand MPEG-1 full-motion videos to be randomly accessed on corporateEthernetnetworks. Starworks was fromStarlight Networks, which also pioneered live video streaming on Ethernet and viaInternet Protocolover satellites withHughes Network Systems.[22]Other early companies that created streaming media technology include Progressive Networks and Protocomm prior to widespread World Wide Web usage. After theNetscape IPOin 1995 (and the release ofWindows 95with built-inTCP/IPsupport), usage of the Internet expanded, andmany companies "went public", including Progressive Networks (which was renamed "RealNetworks", and listed onNasdaqas "RNWK"). As the web became even more popular in the late 90s, streaming video on the internet blossomed from startups such asVivo Software(later acquired by RealNetworks), VDOnet (acquired by RealNetworks), Precept (acquired byCisco), and Xing (acquired by RealNetworks).[23] Microsoftdeveloped a media player known asActiveMoviein 1995 that supported streaming media and included a proprietary streaming format, which was the precursor to the streaming feature later inWindows Media Player6.4 in 1999. In June 1999,Applealso introduced a streaming media format in itsQuickTime4 application. It was later also widely adopted on websites, along with RealPlayer and Windows Media streaming formats. 
The competing formats on websites required each user to download the respective applications for streaming, which resulted in many users having to have all three applications on their computer for general compatibility. In 2000, Industryview.com launched its "world's largest streaming video archive" website to help businesses promote themselves.[24] Webcasting became an emerging tool for business marketing and advertising that combined the immersive nature of television with the interactivity of the Web. The ability to collect data and feedback from potential customers caused this technology to gain momentum quickly.[25]

Around 2002, the interest in a single, unified streaming format and the widespread adoption of Adobe Flash prompted the development of a video streaming format through Flash, which was the format used in Flash-based players on video hosting sites. The first popular video streaming site, YouTube, was founded by Steve Chen, Chad Hurley, and Jawed Karim in 2005. It initially used a Flash-based player, which played MPEG-4 AVC video and AAC audio, but now defaults to HTML video.[26] Increasing consumer demand for live streaming prompted YouTube to implement a new live streaming service for users.[27] The company currently also offers a (secure) link that returns the available connection speed of the user.[28]

The Recording Industry Association of America (RIAA) revealed through its 2015 earnings report that streaming services were responsible for 34.3 percent of the year's total music industry revenue, growing 29 percent from the previous year and becoming the largest source of income, pulling in around $2.4 billion.[29][30] US streaming revenue grew 57 percent to $1.6 billion in the first half of 2016 and accounted for almost half of industry sales.[31]

The term streaming wars was coined to describe the new era (starting in the late 2010s) of competition between video streaming services such as Netflix, Amazon Prime Video, Hulu, Max, Disney+, Paramount+, Apple TV+, Peacock, and many more.[6][32]

The competition between increasingly popular online platforms, such as Netflix and Amazon, and legacy broadcasters and studios moving online, like Disney and NBC, has driven each service to find ways to differentiate from one another. A key differentiator has been offering exclusive content, often self-produced and created for a specific market segment. When Netflix first launched in 2007, it became one of the more dominant streaming platforms even though it initially offered no original content. It would be nearly a half-dozen years before Netflix began offering its own shows, such as House of Cards, Orange Is the New Black, and Hemlock Grove. The legacy services also began producing original digital-only content, but they also began restricting their back catalog of shows and movies to their platforms, one of the most notable examples being Disney+. Disney took advantage of owning popular movies and shows like Frozen, Snow White, and the Star Wars and Marvel franchises, which could draw in more subscribers and make it a more serious competitor to Netflix and Amazon.[33] Research suggests that this approach to streaming competition can be disadvantageous for consumers by increasing spending across platforms, and for the industry as a whole by diluting the subscriber base. Once specific content is made available on a streaming service, piracy searches for the same content decrease; competition or legal availability across multiple platforms appears to deter online piracy.
Exclusive content produced for subscription services such as Netflix tends to have a higher production budget than content produced exclusively forpay-per-viewservices, such as Amazon Prime Video.[34] This competition increased during the first two years of theCOVID-19 pandemicas more people stayed home and watched TV. "The COVID-19 pandemic has led to a seismic shift in the film & TV industry in terms of how films are made, distributed, and screened. Many industries have been hit by the economic effects of the pandemic" (Totaro Donato).[9]In August 2022, a CNN headline declared that "The streaming wars are over" as pandemic-era restrictions had largely ended and audience growth had stalled. This led services to focus on profit over market share by cutting production budgets, cracking down on password sharing, and introducing ad-supported tiers.[35]A December 2022 article inThe Vergeechoed this, declaring an end to the "golden age of the streaming wars".[36] In September 2023, several streaming services formed atrade associationnamed the Streaming Innovation Alliance (SIA), spearheaded byCharles Rivkinof theMotion Picture Association(MPA). FormerU.S. representativeFred Uptonand formerFederal Communications Commission(FCC) acting chairMignon Clyburnserve as senior advisors. Founding members include AfroLandTV, America Nu Network,BET+,The Africa Channel,Discovery+, FedNet, For Us By Us Network, In the Black Network,Max,Motion Picture Association, MotorTrend+,Netflix,Paramount+,Peacock,Pluto TV, Radiant, SkinsPlex,Telemundo,TelevisaUnivision, TVEI, Vault TV,Vix, andThe Walt Disney Company. Notably absent wereApple,Amazon,Roku, andTubi.[37][38] Advances incomputer networking, combined with powerful home computers and operating systems, have made streaming media affordable and easy for the public. Stand-aloneInternet radio devicesemerged to offer listeners a non-technical option for listening to audio streams. These audio-streaming services became increasingly popular; music streaming reached 4 trillion streams globally in 2023—a significant increase from 2022—jumping 34% over the year.[39] In general, multimedia content is data-intensive, so media storage and transmission costs are still significant. Media is generallycompressedfor transport and storage. Increasing consumer demand for streaminghigh-definition(HD) content has led the industry to develop technologies such asWirelessHDandG.hn, which are optimized for streaming HD content. Many developers have introduced HD streaming apps that work on smaller devices, such as tablets and smartphones, for everyday purposes. "Streaming creates the illusion—greatly magnified by headphone use, which is another matter—that music is a utility you can turn on and off; the water metaphor is intrinsic to how it works. It dematerializes music, denies it a crucial measure of autonomy, reality, and power. It makes music seem disposable, impermanent. Hence it intensifies the ebb and flow of pop fashion, the waymusical 'memes'rise up for a week or a month and are then forgotten. And it renders our experience of individual artists/groups shallower." A media stream can be streamed eitherliveoron demand. Live streams are generally provided by a method calledtrue streaming. True streaming sends the information straight to the computer or device without saving it to a local file. On-demand streaming is provided by a method calledprogressive download. Progressive download saves the received information to a local file and then plays it from that location. 
On-demand streams are often saved to files for an extended period of time, while live streams are only available once (e.g., during a football game).[41]

Streaming media is increasingly being coupled with the use of social media. For example, sites such as YouTube encourage social interaction in webcasts through features such as live chat, online surveys, user posting of comments online, and more. Furthermore, streaming media is increasingly being used for social business and e-learning.[42]

The Horowitz Research State of Pay TV, OTT, and SVOD 2017 report said that 70 percent of those viewing content did so through a streaming service and that 40 percent of TV viewing was done this way, twice the number from five years earlier. Millennials, the report said, streamed 60 percent of the content.[43]

One of the movie streaming industry's largest impacts was on the DVD industry, which drastically dropped in popularity and profitability with the mass popularization of online content.[44] The rise of media streaming caused the downfall of many DVD rental companies, such as Blockbuster. In July 2015, The New York Times published an article about Netflix's DVD services. It stated that Netflix was continuing its DVD services with 5.3 million subscribers, which was a significant drop from the previous year, while its streaming service had 65 million members.[45] The shift to streaming platforms also led to the decline of DVD rental services. In July 2024, NBC News reported that Redbox, a DVD rental service that had operated for 22 years, would shut down due to the rapid rise of streaming platforms. As rental services had been rapidly declining since 2010, the business had to file for bankruptcy, with 99% of households now subscribing to streaming services. Further reflecting the shift away from physical media, Best Buy has ceased selling DVDs.[46]

Music streaming is one of the most popular ways in which consumers interact with streaming media. In the age of digitization, the private consumption of music has transformed into a public good, largely due to one player in the market: Napster.

Napster, a peer-to-peer (P2P) file-sharing network where users could upload and download MP3 files freely, broke all music industry conventions when it launched in early 1999 in Hull, Massachusetts. The platform was developed by Shawn and John Fanning as well as Sean Parker.[47] In an interview from 2009, Shawn Fanning explained that Napster "was something that came to me as a result of seeing a sort of unmet need and the passion people had for being able to find all this music, particularly a lot of the obscure stuff, which wouldn't be something you go to a record store and purchase, so it felt like a problem worth solving."[48]

Not only did this development disrupt the music industry by making songs that previously required payment freely accessible to any Napster user, but it also demonstrated the power of P2P networks in turning any digital file into a public, shareable good. For the brief period of time that Napster existed, MP3 files fundamentally changed as a type of good. Songs were no longer financially excludable, barring access to a computer with internet access, and they were non-rival: one person downloading a song did not diminish another user's ability to do the same. Napster, like most other providers of public goods, faced the free-rider problem.
Every user benefits when an individual uploads an MP3 file, but there is no requirement or mechanism that forces all users to share their music. Generally, the platform encouraged sharing; users who downloaded files from others often had their own files available for upload as well. However, not everyone chose to share their files, and there was no built-in mechanism penalizing users who declined to share their own.[49]

This structure revolutionized the consumer's perception of ownership over digital goods; it made music freely replicable. Napster quickly garnered millions of users, growing faster than any other business in history. At the peak of its existence, Napster boasted about 80 million users globally. The site gained so much traffic that many college campuses had to block access to Napster because it created network congestion from so many students sharing music files.[50]

The advent of Napster sparked the creation of numerous other P2P sites, including LimeWire (2000), BitTorrent (2001), and The Pirate Bay (2003). The reign of P2P networks was short-lived. The first to fall was Napster in 2001. Numerous lawsuits were filed against Napster by various record labels, all of which were subsidiaries of Universal Music Group, Sony Music Entertainment, Warner Music Group, or EMI. In addition, the Recording Industry Association of America (RIAA) filed a lawsuit against Napster on the grounds of unauthorized distribution of copyrighted material, which ultimately led Napster to shut down in 2001.[50] In an interview with the New York Times, Gary Stiffelman, who represents Eminem, Aerosmith, and TLC, explained, "I'm not an opponent of artists' music being included in these services, I'm just an opponent of their revenue not being shared."[51]

The lawsuit A&M Records, Inc. v. Napster, Inc. fundamentally changed the way consumers interact with music streaming. It was argued on 2 October 2000 and was decided on 12 February 2001. The Court of Appeals for the Ninth Circuit ruled that a P2P file-sharing service could be held liable for contributory and vicarious infringement of copyright, serving as a landmark decision for intellectual property law.[52]

The first issue that the Court addressed was fair use, which says that otherwise infringing activities are permissible so long as they are for purposes "such as criticism, comment, news reporting, teaching [...] scholarship, or research."[53] Judge Beezer, the judge for this case, noted that Napster claimed that its services fit "three specific alleged fair uses: sampling, where users make temporary copies of a work before purchasing; space-shifting, where users access a sound recording through the Napster system that they already own in audio CD format; and permissive distribution of recordings by both new and established artists."[53] Judge Beezer found that Napster did not fit these criteria, instead enabling its users to repeatedly copy music, which would affect the market value of the copyrighted good.

The second claim by the plaintiffs was that Napster was actively contributing to copyright infringement since it had knowledge of widespread file sharing on its platform. Since Napster took no action to reduce infringement and financially benefited from repeated use, the court ruled against the P2P site.
The court found that "as much as eighty-seven percent of the files available on Napster may be copyrighted and more than seventy percent may be owned or administered by plaintiffs."[53]

The injunction ordered against Napster ended the brief period in which music streaming was a public good – non-rival and non-excludable in nature. Other P2P networks had some success at sharing MP3s, though they all met a similar fate in court. The ruling set the precedent that copyrighted digital content cannot be freely replicated and shared unless given consent by the owner, thereby strengthening the property rights of artists and record labels alike.[52]

Although music streaming is no longer a freely replicable public good, streaming platforms such as Spotify, Deezer, Apple Music, SoundCloud, YouTube Music, and Amazon Music have shifted music streaming to a club-type good. While some platforms, most notably Spotify, give customers access to a freemium service that enables the use of limited features for exposure to advertisements, most companies operate under a premium subscription model.[55] Under such circumstances, music streaming is financially excludable, requiring that customers pay a monthly fee for access to a music library, but non-rival, since one customer's use does not impair another's.

An article written by The New York Times in 2021 states that "streaming saved music" because it provided steady monthly revenue. Spotify, in particular, offers a free ad-supported tier alongside a paid premium tier that removes the advertisements.[56] This allows people to stream music anywhere from their devices, without having to rely on CDs.

There is competition among music streaming services that is similar to, but smaller in scale than, the streaming wars for video media. As of 2019[update], Spotify has over 207 million users in 78 countries;[57] as of 2018[update], Apple Music has about 60 million, and SoundCloud has 175 million.[58] All platforms provide varying degrees of accessibility. Apple Music and Prime Music only offer their services to paid subscribers, whereas Spotify and SoundCloud offer freemium and premium services. Napster, owned by Rhapsody since 2011, has resurfaced as a music streaming platform offering subscription-based services to over 4.5 million users as of January 2017[update].[59]

In the evolving music streaming landscape, competition among platforms is shaped by various factors, including royalty rates, exclusive content, and market expansion strategies. A notable development occurred in January 2025, when Universal Music Group (UMG) and Spotify announced a new multi-year agreement. This partnership aims to enhance opportunities for artists and consumers through innovative subscription tiers and an enriched audio-visual catalog.[60]

The music industry's response to music streaming was initially negative. Along with music piracy, streaming services disrupted the market and contributed to the fall in US revenue from $14.6 billion in 1999 to $6.3 billion in 2009. CDs and single-track downloads were not selling because content was freely available on the Internet. By 2018, however, music streaming revenue exceeded that of traditional revenue streams (e.g., record sales, album sales, downloads).[61] Streaming revenue is now one of the largest driving forces behind the growth in the music industry. By August 2020, the COVID-19 pandemic had streaming services busier than ever.
The pandemic contributed to a surge in subscriptions: in the UK alone, 12 million people joined a streaming service that they had not previously used,[62] and global subscriptions skyrocketed past 1 billion.[63] Within the first three months of 2020, nearly 15.7 million people signed up for Netflix.[64] With people stuck at home and facing lockdowns, Netflix and other streaming services provided a much-needed distraction.

An impact analysis of 2020 data by the International Confederation of Societies of Authors and Composers (CISAC) indicated that remuneration from digital streaming of music increased, with a strong rise in digital royalty collection (up 16.6% to EUR 2.4 billion), but that it would not compensate for the overall loss of income of authors from concerts, public performance and broadcast.[65] The International Federation of the Phonographic Industry (IFPI) compiled the music industry initiatives around the world related to COVID-19. In its State of the Industry report, it recorded that the global recorded music market grew by 7.4% in 2022, the 6th consecutive year of growth. This growth was driven by streaming, mostly from paid subscription streaming revenues, which increased by 18.5%, fueled by 443 million users of subscription accounts by the end of 2020.[66]

The COVID-19 pandemic has also driven an increase in misinformation and disinformation, particularly on streaming platforms like YouTube and podcasts.[67]

Streaming also refers to the offline streaming of multimedia at home. This is made possible by technologies such as DLNA, which allow devices on the same local network to connect to each other and share media.[68][69] Such capabilities are heightened using network-attached storage (NAS) devices at home, or using specialized software like Plex Media Server, Jellyfin or TwonkyMedia.[70]

A broadband speed of 2 Mbit/s or more is recommended for streaming standard-definition video,[71] for example to a Roku, Apple TV, Google TV or a Sony TV Blu-ray Disc Player. 5 Mbit/s is recommended for high-definition content and 9 Mbit/s for ultra-high-definition content.[72]

Streaming media storage size is calculated from the streaming bandwidth and length of the media using the following formula (for a single user and file): storage size in megabytes = length (in seconds) × bit rate (in bit/s) / (8 × 1024 × 1024). For example, one hour of digital video encoded at 300 kbit/s (a typical broadband video in 2005, usually encoded at 320 × 240 resolution) requires (3,600 s × 300,000 bit/s) / (8 × 1024 × 1024), or around 128 MB of storage. If the file is stored on a server for on-demand streaming and this stream is viewed by 1,000 people at the same time using a unicast protocol, the requirement is 300 kbit/s × 1,000 = 300,000 kbit/s = 300 Mbit/s of bandwidth, equivalent to around 135 GB per hour. Using a multicast protocol the server sends out only a single stream that is common to all users; such a stream would therefore use only 300 kbit/s of server bandwidth (these calculations are sketched in code below). In 2018 video was more than 60% of data traffic worldwide and accounted for 80% of growth in data usage.[73][74]

Video and audio streams are compressed to make the file size smaller. Audio coding formats include MP3, Vorbis, AAC and Opus. Video coding formats include H.264, HEVC, VP8 and VP9. Encoded audio and video streams are assembled in a container bitstream such as MP4, FLV, WebM, ASF or ISMA.
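The storage and bandwidth arithmetic above is easy to package as a pair of helpers; a small Python sketch reproducing the article's worked example:

    def stream_storage_mb(seconds, bitrate_bps):
        """Storage for one stream: length × bit rate, converted to megabytes."""
        return seconds * bitrate_bps / (8 * 1024 * 1024)

    def unicast_bandwidth_bps(bitrate_bps, viewers):
        """Unicast sends one copy per viewer; multicast would need just one."""
        return bitrate_bps * viewers

    print(stream_storage_mb(3600, 300_000))             # ~128.7 MB for one hour
    print(unicast_bandwidth_bps(300_000, 1000) / 1e6)   # 300 Mbit/s for 1,000 viewers

The gap between the two bandwidth figures is the whole argument for multicast: server load grows linearly with the audience under unicast, but stays constant under multicast.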
The bitstream is delivered from a streaming server to a streaming client (e.g., the computer user with their Internet-connected laptop) using a transport protocol, such as Adobe'sRTMPorRTP. In the 2010s, technologies such as Apple'sHLS, Microsoft's Smooth Streaming, Adobe's HDS and non-proprietary formats such asMPEG-DASHemerged to enableadaptive bitrate streamingoverHTTPas an alternative to using proprietary transport protocols. Often, a streaming transport protocol is used to send video from an event venue to acloudtranscoding service andcontent delivery network, which then uses HTTP-based transport protocols to distribute the video to individual homes and users.[75]The streaming client (the end user) may interact with the streaming server using a control protocol, such asMMSorRTSP. The quality of the interaction between servers and users is based on the workload of the streaming service; as more users attempt to access a service the quality may be affected by resource constraints in the service.[76]Deploying clusters of streaming servers is one such method where there are regional servers spread across the network, managed by a singular, central server containing copies of all the media files as well as theIP addressesof the regional servers. This central server then usesload balancingandschedulingalgorithms to redirect users to nearby regional servers capable of accommodating them. This approach also allows the central server to provide streaming data to both users as well as regional servers usingFFmpeglibraries if required, thus demanding the central server to have powerful data processing and immense storage capabilities. In return, workloads on the streaming backbone network are balanced and alleviated, allowing for optimal streaming quality.[77][needs update] Designing a network protocol to support streaming media raises many problems.Datagramprotocols, such as theUser Datagram Protocol(UDP), send the media stream as a series of small packets. This is simple and efficient; however, there is no mechanism within the protocol to guarantee delivery. It is up to the receiving application to detect loss or corruption and recover data usingerror correctiontechniques. If data is lost, the stream may suffer adropout. TheReal-Time Streaming Protocol(RTSP),Real-time Transport Protocol(RTP) and theReal-time Transport Control Protocol(RTCP) were specifically designed to stream media over networks. RTSP runs over a variety of transport protocols,[78]while the latter two are built on top of UDP. HTTP adaptive bitrate streaming is based on HTTP progressive download, but contrary to the previous approach, here the files are very small, so that they can be compared to the streaming of packets, much like the case of using RTSP and RTP.[79]Reliable protocols, such as theTransmission Control Protocol(TCP), guarantee correct delivery of each bit in the media stream. It means, however, that when there is data loss on the network, the media stream stalls while the protocol handlers detect the loss and retransmit the missing data. Clients can minimize this effect by buffering data for display. While delay due to buffering is acceptable in video-on-demand scenarios, users of interactive applications such as video conferencing will experience a loss of fidelity if the delay caused by buffering exceeds 200 ms.[80] Unicastprotocols send a separate copy of the media stream from the server to each recipient. 
Unicast is the norm for most Internet connections but does not scale well when many users want to view the same television program concurrently.Multicastprotocols were developed to reduce server and network loads resulting from duplicate data streams that occur when many recipients receive unicast content streams independently. These protocols send a single stream from the source to a group of recipients. Depending on the network infrastructure and type, multicast transmission may or may not be feasible. One potential disadvantage of multicasting is the loss ofvideo on demandfunctionality. Continuous streaming of radio or television material usually precludes the recipient's ability to control playback. However, this problem can be mitigated by elements such as caching servers, digitalset-top boxes, and bufferedmedia players. IP multicastprovides a means to send a single media stream to a group of recipients on acomputer network. A connection management protocol, usuallyInternet Group Management Protocol, is used to manage the delivery of multicast streams to the groups of recipients on a LAN. One of the challenges in deploying IP multicast is that routers and firewalls between LANs must allow the passage of packets destined to multicast groups. If the organization that is serving the content has control over the network between server and recipients (i.e., educational, government, and corporateintranets), then routing protocols such asProtocol Independent Multicastcan be used to deliver stream content to multiplelocal area networksegments. Peer-to-peer(P2P) protocols arrange for prerecorded streams to be sent between computers. This prevents the server and its network connections from becoming a bottleneck. However, it raises technical, performance, security, quality, and business issues. Content delivery networks(CDNs) use intermediate servers to distribute the load. Internet-compatible unicast delivery is used between CDN nodes and streaming destinations. Media that is livestreamed can be recorded through certain media players, such asVLC player, or through the use of ascreen recorder. Live-streaming platforms such asTwitchmay also incorporate avideo on demandsystem that allows automatic recording of live broadcasts so that they can be watched later.[81]YouTube also has recordings of live broadcasts, including television shows aired on major networks. These streams have the potential to be recorded by anyone who has access to them, whether legally or otherwise.[82] Such recordings can give people access to content they could not otherwise reach, such as a film unavailable to them or a music festival they could not get tickets to. Live-streaming platforms have also created new ways for people to interact with content: many celebrities started live streaming during COVID-19 through platforms likeInstagram,YouTube, andTikTok, offering an alternative form of entertainment when concerts were postponed, and live streaming and recording allow fans to communicate with these artists through chats and likes. Most streaming services feature arecommender systemthat suggests content based on each user's viewing history in conjunction with all viewers' aggregated viewing histories.
Rather than relying on subjective categorization of content by content curators, recommender systems assume that, with the immensity of data collected on viewing habits, the choices of those who are first to view content can be algorithmically extrapolated to the totality of the user base, with increasing probabilistic accuracy as to the likelihood of users choosing and enjoying the recommended content as more data is collected.[83] A typical application of streaming is the delivery of longvideo lecturesonline.[84]An advantage of this format is that lectures can be very long, yet they can be paused or replayed at arbitrary points. Streaming enables new content marketing concepts. For example, theBerlin Philharmonic Orchestrasells Internet live streams of whole concerts, instead of several CDs or similar fixed media, in theirDigital Concert Hall,[85]using YouTube fortrailers. These online concerts are also shown in many different venues, including cinemas around the globe. A similar concept is used by theMetropolitan Operain New York. There is alsoa livestreamfrom theInternational Space Station.[86][87]In video entertainment, video streaming platforms likeNetflix,Hulu, andDisney+are mainstream elements of the media industry.[88] Marketers have found many opportunities in streaming media and the platforms that offer it, especially in light of the significant increase in the use of streaming media duringCOVID lockdownsfrom 2020 onwards. While revenue and placement oftraditional advertisingcontinued to decrease,digital marketingincreased by 15% in 2021,[89]withdigital mediaandsearchrepresenting 65% of the expenditures. A case study commissioned by the WIPO[90]indicates that streaming services attract advertising budgets with the opportunities provided by interactivity and the use of data from users, resulting in personalization on a mass scale withcontent marketing.[91]Targeted marketing is expanding with the use ofartificial intelligence, in particular programmatic advertising, which helps advertisers set campaign parameters and decide whether or not to buy given advertising space online. One example of advertising space acquisition is real-time bidding (RTB).[92] Forover-the-top media service(OTT) platforms, original content captures additional subscribers.[93]This presents copyright issues and the potential for international exploitation through streaming,[94]widespread use of standards, and metadata in digital files.[95]The WIPO has indicated several basic copyright issues arising for those pursuing work in the film[96]and music industries[97]in the era of streaming. Streaming copyrighted content can involve making infringing copies of the works in question.
The recording and distribution of streamed content is also an issue for many companies that rely on revenue based on views or attendance.[98] The netgreenhouse gas emissionsfrom streaming music were estimated at between 0.2 and 0.35 million metric tonsCO2eq(between 200,000 and 340,000 long tons; 220,000 and 390,000 short tons) per year in theUnited States, by a 2019 study.[99]This was an increase from emissions in the pre-digital music period, which were estimated at "0.14 million metric tons (140,000 long tons; 150,000 short tons) in 1977, 0.136 million (134,000 long tons; 150,000 short tons) in 1988, and 0.157 million (155,000 long tons; 173,000 short tons) in 2000."[100]However, this is far less than other everyday activities such as eating. For examplegreenhouse gas emissions in the United Statesfrom beef cattle (burping of ruminantsonly - not including theirmanure) were 129 million metric tons (127 million long tons; 142 million short tons) in 2019.[101] A 2021 study claimed that, based on the amount of data transmitted, one hour of streaming or videoconferencing "emits 150–1,000 grams (5–35 oz) of carbon dioxide ... requires 2–12 liters (0.4–2.6 imp gal; 0.5–3.2 U.S. gal) of water and demands a land area adding up to about the size of aniPad Mini." The study suggests that turning the camera off during video calls can reduce the greenhouse gas and water use footprints by 96%, and that an 86% reduction is possible by using standard definition rather than high definition when streaming content with apps such asNetflixorHulu.[102][103]However, another study estimated a relatively low amount of 36 grams per hour (1.3 ounces per hour), and concluded that watching a Netflix video for half an hour emitted only the same amount as driving a gasoline-fuelled car for about 100 meters (330 ft), so not a significant amount.[104] One way to decrease greenhouse gas emissions associated with streaming music is to makedata centerscarbon neutralby converting to electricity produced fromrenewable sources. On an individual level, the purchase of a physical CD may be more environmentally friendly if it is to be played more than 27 times.[105][dubious–discuss]Another option for reducing energy use is downloading the music for offline listening to reduce the need for streaming over distance.[105]The Spotify service has a built-in local cache to reduce the necessity of repeating song streams.[106]
https://en.wikipedia.org/wiki/Streaming_media
Incomputing, adiscriminatoris afieldof characters designed to separate a certain element from others of the sameidentifier. As an example, suppose that a program must save two uniqueobjectsto memory, both of whose identifiers happen to befoo. To ensure the two objects are not conflated, the program may assigndiscriminatorsto the objects in the form of numbers; thus,foo (1)andfoo (2)distinguish both objects namedfoo. This has been adopted byprogramming languagesas well as digital platforms forinstant messagingandmassively multiplayer online games. A discriminator is used to disambiguate auserfrom other users who wish to identify under the same username. OnDiscord, a discriminator is a four-digit suffix added to the end of ausername. This allowed up to 10,000user accountsto take the same name. In 2023, co-founderStanislav Vishnevskiywrote in a company blog post about thetechnical debtcaused by the discriminator system, stating that the system resulted in nearly half of the company's friend requests failing to connect. The platform implemented discriminators in the early days of the service, he wrote. When the platform was initially introduced, thesoftware developers' priority was to let users take any username they wanted without receiving a "your desired username is taken" error. Discord had no friend system at first, and it let people take otherwise-identical names in differentletter cases, making usernames case-sensitive.[1] Discord also introduced a global display name system, wherein a user may input a default nickname to be shown on top of the messages they send in lieu of their platform-wide username, Vishnevskiy wrote onReddit.[2] The platform created a transition process to a system ofpseudonymswherein all new usernames would be case-insensitive lowercase and limited to theASCIIcharacters of A–Z, 0–9, thefull stopand theunderscore. The transition would happen over the course of months, with the oldest registered accounts and paid subscribers receiving the opportunity to reserve their names earlier. This change was criticized online for being a step backward, as users could be at risk of being impersonated. A notableindie gamestudio noted that it could no longer claim its own name on the platform.[3]Discord pointed to its processes for reserving a username under the new system for users with high visibility and longstanding business relationships with the company. The old discriminator-oriented system had also mitigated the rush to claim unique usernames for sale on theblack market, a practice that can lead toswattingandonline harassment.[4][2] Battle.netappends a four-digit numeric suffix to its usernames. A discriminator is also a typed tag field present in theinterface description languageof theCommon Object Request Broker Architecture, a standard of theObject Management Group. It exists as type and value definitions oftagged unionsthat determine which union member is selected in the current union instance. This is done by introducing the classicCswitch construct into the classic C union.[5][6]Unlike in some conventional programming languages offering support for unions, the discriminator in IDL is not identical to the selected field name. An example of an IDL union type definition is sketched below. The effective value of theRegistertype may contain AX as the selected field, but the discriminator value may be either 'a' or 'b' and is stored in memory separately.
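The original IDL listing is not reproduced in this text; a sketch consistent with the surrounding description (the discriminator is a char, the AX field is selected by both 'a' and 'b', and a default branch covers the remaining characters; the second field's name and the member types are assumptions) might read:

    // Hypothetical reconstruction of the Register union described above.
    union Register switch (char) {
        case 'a':
        case 'b': unsigned short AX;     // selected when the discriminator is 'a' or 'b'
        case 'c': unsigned short BX;     // assumed second field
        default:  unsigned long  whole;  // any remaining char value selects this
    };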
Therefore, IDL logically separates information about the currently selected field name and the union effective value from information about the current discriminator value. In the example above, the discriminator value may be any of the following: 'a', 'b', 'c', as well as all other characters belonging to the IDLchartype, since thedefaultbranch specified in the exampleRegistertype allows the use of the remaining characters as well. TheMicrosoft Interface Definition Languagealso supports tagged unions, allowing the discriminator to be chosen via anattributein an enclosing structure or function.[7] Afriend codeis a unique twelve-digit number that can be exchanged with friends and used to maintain individual friend lists in eachvideo game. Friend codes were generated from an identifier unique to a copy of a game and theuniversally unique identifierof a user's device.[8]
https://en.wikipedia.org/wiki/Discriminator
TheCommon Object Request Broker Architecture(CORBA) is astandarddefined by theObject Management Group(OMG) designed to facilitate the communication of systems that are deployed on diverseplatforms. CORBA enables collaboration between systems on different operating systems,programming languages, and computing hardware. CORBA uses an object-oriented model although the systems that use the CORBA do not have to be object-oriented. CORBA is an example of thedistributed objectparadigm. While briefly popular in the mid to late 1990s, CORBA's complexity, inconsistency, and high licensing costs have relegated it to being a niche technology.[1] CORBA enables communication between software written in different languages and running on different computers. Implementation details from specific operating systems, programming languages, and hardware platforms are all removed from the responsibility of developers who use CORBA. CORBA normalizes the method-call semantics between application objects residing either in the same address-space (application) or in remote address-spaces (same host, or remote host on a network). Version 1.0 was released in October 1991. CORBA uses aninterface definition language(IDL) to specify the interfaces that objects present to the outer world. CORBA then specifies amappingfrom IDL to a specific implementation language likeC++orJava. Standard mappings exist forAda,C,C++,C++11,COBOL,Java,Lisp,PL/I,Object Pascal,Python,Ruby, andSmalltalk. Non-standard mappings exist forC#,Erlang,Perl,Tcl, andVisual Basicimplemented byobject request brokers(ORBs) written for those languages. Versions of IDL have changed significantly with annotations replacing some pragmas. The CORBA specification dictates there shall be an ORB through which an application would interact with other objects. This is how it is implemented in practice: Some IDL mappings are more difficult to use than others. For example, due to the nature of Java, the IDL-Java mapping is rather straightforward and makes usage of CORBA very simple in a Java application. This is also true of the IDL to Python mapping. The C++ mapping requires the programmer to learn datatypes that predate the C++Standard Template Library(STL). By contrast, the C++11 mapping is easier to use, but requires heavy use of the STL. Since the C language is not object-oriented, the IDL to C mapping requires a C programmer to manually emulate object-oriented features. In order to build a system that uses or implements a CORBA-based distributed object interface, a developer must either obtain or write the IDL code that defines the object-oriented interface to the logic the system will use or implement. Typically, an ORB implementation includes a tool called an IDL compiler that translates the IDL interface into the target language for use in that part of the system. A traditional compiler then compiles the generated code to create the linkable-object files for use in the application. This diagram illustrates how the generated code is used within the CORBA infrastructure: This figure illustrates the high-level paradigm for remote interprocess communications using CORBA. The CORBA specification further addresses data typing, exceptions, network protocols, communication timeouts, etc. For example: Normally the server side has thePortable Object Adapter(POA) that redirects calls either to the localservantsor (to balance the load) to the other servers. 
The CORBA specification (and thus this figure) leaves various aspects of distributed system to the application to define including object lifetimes (although reference counting semantics are available to applications), redundancy/fail-over, memory management, dynamic load balancing, and application-oriented models such as the separation between display/data/control semantics (e.g. seeModel–view–controller), etc. In addition to providing users with a language and a platform-neutralremote procedure call(RPC) specification, CORBA defines commonly needed services such as transactions and security, events, time, and other domain-specific interface models. This table presents the history of CORBA standard versions.[2][3][4] Note that IDL changes have progressed with annotations (e.g. @unit, @topic) replacing some pragmas. Aservantis the invocation target containing methods for handling theremote method invocations. In the newer CORBA versions, the remote object (on the server side) is split into theobject(that is exposed to remote invocations)andservant(to which the former partforwardsthe method calls). It can be oneservantper remoteobject, or the same servant can support several (possibly all) objects, associated with the givenPortable Object Adapter. Theservantfor eachobjectcan be set or found "once and forever" (servant activation) or dynamically chosen each time the method on that object is invoked (servant location). Both servant locator and servant activator can forward the calls to another server. In total, this system provides a very powerful means to balance the load, distributing requests between several machines. In the object-oriented languages, both remoteobjectand itsservantare objects from the viewpoint of the object-oriented programming. Incarnationis the act of associating a servant with a CORBA object so that it may service requests. Incarnation provides a concrete servant form for the virtual CORBA object. Activation and deactivation refer only to CORBA objects, while the terms incarnation and etherealization refer to servants. However, the lifetimes of objects and servants are independent. You always incarnate a servant before calling activate_object(), but the reverse is also possible, create_reference() activates an object without incarnating a servant, and servant incarnation is later done on demand with a Servant Manager. ThePortable Object Adapter(POA) is the CORBA object responsible for splitting the server side remote invocation handler into the remoteobjectand itsservant. The object is exposed for the remote invocations, while the servant contains the methods that are actually handling the requests. The servant for each object can be chosen either statically (once) or dynamically (for each remote invocation), in both cases allowing the call forwarding to another server. On the server side, the POAs form a tree-like structure, where each POA is responsible for one or more objects being served. The branches of this tree can be independently activated/deactivated, have the different code for the servant location or activation and the different request handling policies. The following describes some of the most significant ways that CORBA can be used to facilitate communication among distributed objects. This reference is either acquired through a stringifiedUniform Resource Locator(URL), NameService lookup (similar toDomain Name System(DNS)), or passed-in as a method parameter during a call. 
Object references are lightweight objects matching the interface of the real object (remote or local). Method calls on the reference result in subsequent calls to the ORB and blocking on the thread while waiting for a reply, success, or failure. The parameters, return data (if any), and exception data are marshaled internally by the ORB according to the local language and OS mapping. The CORBA Interface Definition Language provides the language- and OS-neutral inter-object communication definition. CORBA Objects are passed by reference, while data (integers, doubles, structs, enums, etc.) are passed by value. The combination of Objects-by-reference and data-by-value provides the means to enforce great data typing while compiling clients and servers, yet preserve the flexibility inherent in the CORBA problem-space. Apart from remote objects, the CORBA andRMI-IIOPdefine the concept of the OBV and Valuetypes. The code inside the methods of Valuetype objects is executed locally by default. If the OBV has been received from the remote side, the needed code must be eithera prioriknown for both sides or dynamically downloaded from the sender. To make this possible, the record, defining OBV, contains the Code Base that is a space-separated list ofURLswhence this code should be downloaded. The OBV can also have the remote methods. CORBA Component Model (CCM) is an addition to the family of CORBA definitions.[5]It was introduced with CORBA 3 and it describes a standard application framework for CORBA components. Though not dependent on "language dependentEnterprise Java Beans(EJB)", it is a more general form of EJB, providing four component types instead of the two that EJB defines. It provides an abstraction of entities that can provide and accept services through well-defined named interfaces calledports. The CCM has a component container, where software components can be deployed. The container offers a set of services that the components can use. These services include (but are not limited to)notification,authentication,persistence, andtransaction processing. These are the most-used services any distributed system requires, and, by moving the implementation of these services from the software components to the component container, the complexity of the components is dramatically reduced. Portable interceptors are the "hooks", used by CORBA andRMI-IIOPto mediate the most important functions of the CORBA system. The CORBA standard defines the following types of interceptors: The interceptors can attach the specific information to the messages being sent and IORs being created. This information can be later read by the corresponding interceptor on the remote side. Interceptors can also throw forwarding exceptions, redirecting request to another target. TheGIOPis an abstract protocol by whichObject request brokers(ORBs) communicate. Standards associated with the protocol are maintained by theObject Management Group(OMG). The GIOP architecture provides several concrete protocols, including: Each standard CORBA exception includes a minor code to designate the subcategory of the exception. Minor exception codes are of type unsigned long and consist of a 20-bit "Vendor Minor Codeset ID" (VMCID), which occupies the high order 20 bits, and the minor code proper which occupies the low order 12 bits. Minor codes for the standard exceptions are prefaced by the VMCID assigned to OMG, defined as the unsigned long constant CORBA::OMGVMCID, which has the VMCID allocated to OMG occupying the high order 20 bits. 
The minor exception codes associated with the standard exceptions that are found in Table 3–13 on page 3-58 are or-ed with OMGVMCID to get the minor code value that is returned in the ex_body structure (see Section 3.17.1, "Standard Exception Definitions", on page 3-52 and Section 3.17.2, "Standard Minor Exception Codes", on page 3-58). Within a vendor-assigned space, the assignment of values to minor codes is left to the vendor. Vendors may request allocation of VMCIDs by sending email totagrequest@omg.org. A list of currently assigned VMCIDs can be found on the OMG website at:https://www.omg.org/cgi-bin/doc?vendor-tags The VMCIDs 0 and 0xfffff are reserved for experimental use. The VMCID OMGVMCID (Section 3.17.1, "Standard Exception Definitions", on page 3-52) and 1 through 0xf are reserved for OMG use. The Common Object Request Broker: Architecture and Specification (CORBA 2.3) Corba Location (CorbaLoc) refers to a stringified object reference for a CORBA object that looks similar to a URL. All CORBA products must support two OMG-defined URLs: "corbaloc:" and "corbaname:". The purpose of these is to provide a human readable and editable way to specify a location where an IOR can be obtained. An example of a corbaloc URL, here with a hypothetical host, is corbaloc::example.com:2809/NameService. A CORBA product may optionally support the "http:", "ftp:", and "file:" formats. The semantics of these are that they provide details of how to download a stringified IOR (or, recursively, download another URL that will eventually provide a stringified IOR). Some ORBs deliver additional formats that are proprietary to that ORB. CORBA's benefits include language- and OS-independence, freedom from technology-linked implementations, strong data-typing, a high level of tunability, and freedom from the details of distributed data transfers. While CORBA delivered much in the way code was written and software was constructed, it has been the subject of criticism.[8] Much of the criticism of CORBA stems from poor implementations of the standard and not deficiencies of the standard itself. Some of the failures of the standard itself were due to the process by which the CORBA specification was created and the compromises inherent in the politics and business of writing a common standard sourced by many competing implementors.
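Returning to the minor-code layout described above, a minimal C sketch of the 20-bit/12-bit packing (the OMG VMCID value 0x4f4d0 is the one allocated to OMG; the helper names here are hypothetical):

    #include <stdint.h>

    /* A minor code packs a 20-bit VMCID into the high-order bits and the
       12-bit minor code proper into the low-order bits of a 32-bit value. */
    #define VMCID_SHIFT 12
    #define OMGVMCID ((uint32_t)0x4f4d0 << VMCID_SHIFT)

    static uint32_t make_minor(uint32_t vmcid, uint32_t code)
    {
        return (vmcid << VMCID_SHIFT) | (code & 0xfffu);
    }

    static uint32_t vmcid_of(uint32_t minor)
    {
        return minor >> VMCID_SHIFT;
    }

    /* Example: the first OMG standard minor code is OMGVMCID | 1,
       equivalently make_minor(0x4f4d0, 1). */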
https://en.wikipedia.org/wiki/CORBA
Variantis adata typein certain programming languages, particularlyVisual Basic,OCaml,[1]DelphiandC++when using theComponent Object Model. It is an implementation of theeponymous conceptincomputer science. In Visual Basic (andVisual Basic for Applications) the Variant data type is atagged unionthat can be used to represent any other data type (for example,integer,floating-point,single- anddouble-precision,object, etc.) except the fixed-length string type. In Visual Basic, any variable that is not declared explicitly, or whose type is not declared explicitly, is taken to be a variant. While the use of implicitly declared variants is not recommended, they can be useful when the needed data type can only be known at runtime, when the data type is expected to vary, or when optional parameters and parameter arrays are desired. In fact, languages with adynamic type systemoften have variant as theonlyavailable type for variables. Among the major changes inVisual Basic .NET, a .NET language, the variant type was replaced with the .NETobjecttype. There are similarities in concept, but also major differences, and no direct conversions exist between these two types. For conversions, as might be needed if Visual Basic .NET code is interacting with a Visual Basic 6 COM object, the normal methodology is to use.NET marshalling. In Visual Basic, a variant named A can be declared either explicitly, as Dim A As Variant, or implicitly, as Dim A. InDelphi, a variant named A is declared as var A: Variant;. A variable of variant type, for brevity called a "variant", as defined in Visual Basic, needs 16 bytes of storage: a 2-byte type tag, three 2-byte reserved fields, and an 8-byte data area holding the value or a reference to it. In other languages other kinds of variants can be used as well. TheCollectionclass inOLE Automationcan store items of different data types. Since the data type of these items cannot be known at compile time, the methods to add items to and retrieve items from a collection use variants. If in Visual Basic theFor Eachconstruct is used, the iterator variable must be of object type, or a variant. In OLE Automation theIDispatchinterface is used when the class of an object cannot be known in advance. Hence, when calling a method on such an object, the types of the arguments and the return value are not known at compile time. The arguments are passed as an array of variants, and when the call completes a variant is returned. In Visual Basic a procedure argument can be declared to be optional by prefixing it with theOptionalkeyword. When the argument is omitted, Visual Basic passes a special value (Missing) to the procedure, indicating that the argument is missing. Since the value could either be a supplied value or a special value, a variant must be used. Similarly the keywordParamArraycan be used to pass all following arguments in a variant array.
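A minimal C sketch of the idea behind a variant, i.e. a type tag stored alongside a union of possible representations (the names and the set of cases here are illustrative, not the actual COM VARIANT definition):

    #include <stdio.h>

    enum vtype { VT_INT, VT_DOUBLE, VT_STRING };  /* the tag */

    struct variant {
        enum vtype vt;           /* which union member is currently valid */
        union {
            int         i;
            double      d;
            const char *s;
        } value;
    };

    static void print_variant(const struct variant *v)
    {
        switch (v->vt) {         /* dispatch on the tag at runtime */
        case VT_INT:    printf("%d\n", v->value.i); break;
        case VT_DOUBLE: printf("%f\n", v->value.d); break;
        case VT_STRING: printf("%s\n", v->value.s); break;
        }
    }

    int main(void)
    {
        struct variant v = { VT_DOUBLE, { .d = 3.14 } };
        print_variant(&v);
        return 0;
    }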
https://en.wikipedia.org/wiki/Variant_type_(COM)
Thenull coalescing operatoris abinary operatorthat is part of the syntax for a basicconditional expressionin severalprogramming languages, such as (in alphabetical order):C#[1]since version 2.0,[2]Dart[3]since version 1.12.0,[4]PHPsince version 7.0.0,[5]Perlsince version 5.10 aslogical defined-or,[6]PowerShellsince 7.0.0,[7]andSwift[8]asnil-coalescing operator. It is most commonly written asx ?? y, but varies across programming languages. While its behavior differs between implementations, the null coalescing operator generally returns the result of its left-most operand if it exists and is notnull, and otherwise returns the right-most operand. This behavior allows a default value to be defined for cases where a more specific value is not available. Like the binaryElvis operator, usually written asx ?: y, the null coalescing operator is ashort-circuiting operatorand thus does not evaluate the second operand if its value is not used, which is significant if its evaluation hasside-effects. InBourne shell(and derivatives), the expansion ${parameter:-word} behaves this way: "Ifparameteris unset or null, the expansion ofwordis substituted. Otherwise, the value ofparameteris substituted."[9] InC#, the null coalescing operator is??, and it is most often used to simplify null checks. For example, C# code that gives a page a default title when none is present can assign suppliedTitle ?? "Default Title" to the variable pageTitle in a single expression, instead of using a more verbose ternary expression or an if/else statement. The three forms result in the same value being stored into the variable namedpageTitle. suppliedTitleis referenced only once when using the??operator, and twice in the other two forms. The operator can also be used multiple times in the same expression; once a non-null value is assigned to number, or the final operand is reached (which may or may not be null), the expression is completed. If, for example, a variable should be changed to another value if its value evaluates to null, the??=null coalescing assignment operator, available since C# 8.0, can be used as a more concise version of an explicit null test followed by an assignment. In combination with thenull-conditional operator?.or the null-conditional element access operator?[], the null coalescing operator can be used to provide a default value if an object or an object's member is null: for example, page?.Title ?? defaultTitle returns the default title if either thepageobject is null or ifpageis not null but itsTitleproperty is. As ofColdFusion11,[10]Railo4.1,[11]CFMLsupports the null coalescing operator as a variation of the ternary operator,?:. It is functionally and syntactically equivalent to its C# counterpart, above. Missing values inApache FreeMarkerwill normally cause exceptions. However, both missing and null values can be handled with an optional default value, or the output can be left blank.[12] JavaScript's nearest operator is??, the "nullish coalescing operator", which was added to the standard inECMAScript's 11th edition.[13]In earlier versions, it could be used via aBabelplugin, and inTypeScript. It evaluates its left-hand operand and, if the result value isnot"nullish" (nullorundefined), takes that value as its result; otherwise, it evaluates the right-hand operand and takes the resulting value as its result. In the example sketched below,awill be assigned the value ofbif the value ofbis notnullorundefined, otherwise it will be assigned 3. Before the nullish coalescing operator, programmers would use the logical OR operator (||). But where??looks specifically fornullorundefined, the||operator looks for anyfalsyvalue:null,undefined,"",0,NaN, and of course,false.
In the following example,awill be assigned the value ofbif the value ofbistruthy, otherwise it will be assigned 3. Kotlinuses the?:operator.[14]This is an unusual choice of symbol, given that?:is typically used for theElvis operator, not null coalescing, but it was inspired byGroovy, where null is considered false. InObjective-C, the nil coalescing operator is?:. It can be used to provide a default for nil references, and is an abbreviation of the equivalent ternary expression. InPerl(starting with version 5.10), the operator is//, used as possibly_null_value // value_if_null. Thepossibly_null_valueis evaluated asnullornot-null(in Perl terminology,undefinedordefined). On the basis of the evaluation, the expression returns eithervalue_if_nullwhenpossibly_null_valueis null, orpossibly_null_valueotherwise. In the absence ofside-effectsthis is similar to the wayternary operators(?:statements) work in languages that support them, and the expression above is equivalent to the ternary expression defined(possibly_null_value) ? possibly_null_value : value_if_null. This operator's most common usage is to minimize the amount of code used for a simple null check. Perl additionally has a//=assignment operator: possibly_null_value //= value_if_null is largely equivalent to possibly_null_value = possibly_null_value // value_if_null. This operator differs from Perl's older||and||=operators in that it considersdefinedness,nottruth; thus they behave differently on values that are false but defined, such as 0 or "" (a zero-length string). PHP 7.0 introduced[15]a null-coalescing operator with the??syntax. This checks strictly for NULL or a non-existent variable/array index/property; in this respect, it acts similarly to PHP'sisset()pseudo-function. Version 7.4 of PHP introduced the Null Coalescing Assignment Operator with the??=syntax.[16] Since PowerShell 7, the??null coalescing operator provides this functionality.[7] SinceRversion 4.4.0 the%||%operator is included in base R (previously it was a feature of some packages likerlang).[17] While there is nonullinRust,tagged unionssuch asResult<T, E>andOption<T>are used for the same purpose. Any type implementing the Try trait can be unwrapped. unwrap_or()serves a purpose similar to the null coalescing operator in other languages; alternatively,unwrap_or_else()can be used to compute a default value from a function. In Oracle'sPL/SQL, theNVL() function provides the same outcome, as NVL(possibly_null_value, value_if_null). InSQL Server/Transact-SQLthere is the ISNULL function that follows the same prototype pattern. Care should be taken not to confuseISNULLwithIS NULL– the latter serves to evaluate whether some contents are defined to beNULLor not. The ANSI SQL-92 standard includes the COALESCE function implemented inOracle,[18]SQL Server,[19]PostgreSQL,[20]SQLite[21]andMySQL.[22]The COALESCE function returns the first argument that is not null; if all terms are null, it returns null. The difference between ISNULL and COALESCE is that the type returned by ISNULL is the type of the leftmost value while COALESCE returns the type of the first non-null value. InSwift, the nil coalescing operator is??. It is used to provide a default when unwrapping anoptional type. For example, Swift code that gives a page a default title when none is present can write suppliedTitle ?? "Default Title" instead of a more verbose conditional unwrapping.
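The two JavaScript examples referred to above are missing from this text; a minimal reconstruction (with b given a falsy but non-nullish value to show the difference between the two operators) would be:

    let b = 0;
    const a = b ?? 3;  // a === 0: ?? falls back only for null or undefined
    const c = b || 3;  // c === 3: || falls back for any falsy value, including 0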
https://en.wikipedia.org/wiki/Null_coalescing_operator
Incomputer programming, asemipredicate problemoccurs when asubroutineintended to return a useful value can fail, but the signalling of failure uses an otherwise validreturn value.[1]The problem is that the caller of the subroutine cannot tell what the result means in this case. Thedivisionoperation yields areal number, but fails when the divisor iszero. If we were to write a function that performs division, we might choose to return 0 on this invalid input. However, if the dividend is 0, the result is 0 too. This means that there is no number we can return to uniquely signal attempted division by zero, since all real numbers are in therangeof division. Early programmers handled potentially exceptional cases such as division using aconventionrequiring the calling routine to verify the inputs before calling the division function. This had two problems: first, it greatly encumbered all code that performed division (a very common operation); second, it violated theDon't repeat yourselfandencapsulationprinciples, the former of which suggests eliminating duplicated code, and the latter of which suggests that data-associated code be contained in one place (in this division example, the verification of input was done separately). For a computation more complicated than division, it could be difficult for the caller to recognize invalid input; in some cases, determining input validity may be as costly as performing the entire computation. The target function could also be modified and would then expect different preconditions than would the caller; such a modification would require changes in every place where the function was called. The semipredicate problem is not universal among functions that can fail. If therange of a functiondoes not cover the entirespacecorresponding to thedata typeof the function's return value, a value known to be impossible under normal computation can be used. For example, consider the functionindex, which takes a string and a substring, and returns theintegerindex of the substring in the main string. If the search fails, the function may be programmed to return −1 (or any other negative value), since this can never signify a successful result. This solution has its problems, though, as it overloads the natural meaning of a function with an arbitrary convention. Many languages allow, through one mechanism or another, a function to return multiple values. If this is available, the function can be redesigned to return a boolean value signalling success or failure, along with its primary return value. If multiple error modes are possible, the function may instead return an enumeratedreturn code(error code) along with its primary return value. Various techniques for returning multiple values exist, including "out" arguments. Similar to an "out" argument, aglobal variablecan store what error occurred (or simply whether an error occurred). For instance, if an error occurs, and is signalled (generally as above, by an illegal value like −1), the Unixerrnovariable is set to indicate which error occurred. Using a global has its usual drawbacks:thread safetybecomes a concern (modern operating systems use a thread-safe version of errno), and if only one error global is used, its type must be wide enough to contain all interesting information about all possible errors in the system. Exceptionsare one widely used scheme for solving this problem. An error condition is not considered a return value of the function at all; normalcontrol flowis disrupted, and handling of the error takes place automatically.
They are an example ofout-of-band signalling. InC, a common approach, when possible, is to use a data type deliberately wider than strictly needed by the function. For example, the standard functiongetchar()is defined with return typeintand returns a value in the range [0, 255] (the range ofunsigned char) on success or the valueEOF(implementation-defined, but outside the range ofunsigned char) on the end of the input or a read error. In languages with pointers or references, one solution is to return a pointer to a value, rather than the value itself. This return pointer can then be set tonullto indicate an error. It is typically suited to functions that return a pointer anyway. This has a performance advantage over the OOP style of exception handling,[4]with the drawback that negligent programmers may not check the return value, resulting in acrashwhen the invalid pointer is used. Whether a pointer is null or not is another instance of the semipredicate problem; null may be a flag indicating failure or the value of a pointer returned successfully. A common pattern in theUNIXenvironment is setting a separatevariableto indicate the cause of an error. An example of this is theC standard libraryfopen()function. Indynamically typedlanguages, such asPHPandLisp, the usual approach is to returnfalse,none, ornullwhen the function call fails. This works by returning a type different from the normal return type (thus expanding the type). It is a dynamically typed equivalent to returning a null pointer. For example, a numeric function normally returns a number (int or float), and while zero might be a valid response, false is not. Similarly, a function that normally returns a string might sometimes return the empty string as a valid response, but return false on failure. This process of type-juggling necessitates care in testing the return value: e.g., in PHP, use===(i.e., equal and of same type) rather than just==(i.e., equal, after automatic type conversion). It works only when the original function is not meant to return a boolean value, and still requires that information about the error be conveyed via other means. InHaskelland otherfunctional programminglanguages, it is common to use a data type that is just as big as it needs to be to express any possible result. For example, one can write a division function that returns the typeMaybe Real, and agetcharfunction returningEither String Char. The first is anoption type, which has only one failure value,Nothing. The second case is atagged union: a result is either some string with a descriptive error message or a successfully read character. Haskell'stype inferencesystem helps ensure that callers deal with possible errors. Since the error conditions become explicit in the function type, looking at its signature immediately tells the programmer how to treat errors. Further, tagged unions and option types formmonadswhen endowed with appropriate functions: this may be used to keep the code tidy by automatically propagating unhandled error conditions. Rusthasalgebraic data typesand comes with the built-inResult<T, E>andOption<T>types. TheC++programming language introducedstd::optional<T>in theC++17update andstd::expected<T, E>in theC++23update.
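A minimal C sketch of the index function discussed above (the name index_of and the single-character search are simplifications of the string/substring version described in the text), showing both the in-band −1 convention and the check it forces on the caller:

    #include <stdio.h>

    /* Returns the index of the first occurrence of needle, or -1 on failure.
       -1 can never be a valid index, so it is usable as a failure flag, but
       nothing forces the caller to test for it. */
    static int index_of(const char *haystack, char needle)
    {
        for (int i = 0; haystack[i] != '\0'; i++)
            if (haystack[i] == needle)
                return i;
        return -1;
    }

    int main(void)
    {
        int pos = index_of("semipredicate", 'p');
        if (pos < 0)                      /* the convention the caller must know */
            printf("not found\n");
        else
            printf("found at %d\n", pos); /* prints: found at 4 */
        return 0;
    }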
https://en.wikipedia.org/wiki/Semipredicate_problem
Incomputer science, aunionis avaluethat may have any of multiple representations or formats within the same area ofmemory, or adata structurethat consists of avariablewhich may hold such a value. Someprogramming languagessupport aunion typefor such adata type. In other words, a union type specifies the permitted types that may be stored in its instances, e.g.,floatandinteger. In contrast with arecord, which could be defined to contain both a floatandan integer, a union holds only one at a time. A union can be pictured as a chunk of memory that is used to store variables of different data types. Once a new value is assigned to a field, the existing data is overwritten with the new data. The memory area storing the value has no intrinsic type (other than justbytesorwordsof memory), but the value can be treated as one of severalabstract data types, having the type of the value that was last written to the memory area. Intype theory, a union has asum type; this corresponds todisjoint unionin mathematics. Depending on the language and type, a union value may be used in some operations, such asassignmentand comparison for equality, without knowing its specific type. Other operations may require that knowledge, either by some external information, or by the use of atagged union. Because of the limitations of their use, untagged unions are generally only provided in untyped languages or in a type-unsafe way (as inC). They have the advantage over simple tagged unions of not requiring space to store a data type tag. The name "union" stems from the type's formal definition. If a type is considered as thesetof all values that that type can take on, a union type is simply the mathematicalunionof its constituting types, since it can take on any value any of its fields can. Also, because a mathematical union discards duplicates, if more than one field of the union can take on a single common value, it is impossible to tell from the value alone which field was last written. However, one useful programming function of unions is to map smaller data elements to larger ones for easier manipulation. A data structure consisting, for example, of 4 bytes and a 32-bit integer can form a union with an unsigned 64-bit integer, and thus be more readily accessed for purposes of comparison etc. ALGOL 68has tagged unions, and uses a case clause to distinguish and extract the constituent type at runtime. A union containing another union is treated as the set of all its constituent possibilities, and if the context requires it a union is automatically coerced into the wider union. A union can explicitly contain no value, which can be distinguished at runtime. The syntax of the C/C++ union type and the notion of casts were derived from ALGOL 68, though in an untagged form.[1] InCandC++, untagged unions are expressed nearly exactly like structures (structs), except that each data member is located at the same memory address. The data members, as in structures, need not be primitive values, and in fact may be structures or even other unions. C++ (sinceC++11) also allows for a data member to be any type that has a full-fledged constructor/destructor and/or copy constructor, or a non-trivial copy assignment operator. For example, it is possible to have the standard C++stringas a member of a union. The primary use of a union is allowing access to a common location by different data types, for example hardware input/output access, bitfield and word sharing, ortype punning, as sketched below. Unions can also provide low-levelpolymorphism.
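A minimal C sketch of the type-punning use just mentioned: writing one member and reading another to inspect a float's IEEE 754 bit pattern (this assumes a 32-bit float matching uint32_t in size; the union and variable names are illustrative):

    #include <stdint.h>
    #include <stdio.h>

    union pun {
        float    f;
        uint32_t bits;
    };

    int main(void)
    {
        union pun p;
        p.f = 1.0f;
        /* Read the same four bytes back as an integer: on an IEEE 754
           platform this prints 0x3f800000 (sign 0, exponent 127, mantissa 0).
           Bit tricks such as the square-root approximations mentioned in
           the text manipulate this integer view and then read .f again. */
        printf("bits of 1.0f = 0x%08x\n", (unsigned)p.bits);
        return 0;
    }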
However, there is no checking of types, so it is up to the programmer to be sure that the proper fields are accessed in different contexts. The relevant field of a union variable is typically determined by the state of other variables, possibly in an enclosing struct. One common C programming idiom uses unions to perform what C++ calls areinterpret_cast, by assigning to one field of a union and reading from another, as is done in code which depends on the raw representation of the values. A practical example is themethod of computing square roots using the IEEE representation. This is not, however, a safe use of unions in general. Structure and union specifiers have the same form. [ . . . ] The size of a union is sufficient to contain the largest of its members. The value of at most one of the members can be stored in a unionobjectat any time. A pointer to a union object, suitably converted, points to each of its members (or if a member is a bit-field, then to the unit in which it resides), and vice versa. In C++ andC11, and as a non-standard extension in many compilers, unions can also be anonymous. Their data members do not need to be referenced through a union name; they are instead accessed directly. Anonymous unions have some restrictions compared with traditional unions: in C11, they must be a member of another structure or union,[2]and in C++, they cannot havemethodsor access specifiers. Simply omitting the class-name portion of the syntax does not make a union an anonymous union: for a union to qualify as an anonymous union, the declaration must not declare an object. Anonymous unions are also useful in Cstructdefinitions to provide a sense of namespacing.[3] In compilers such as GCC, Clang, and IBM XL C for AIX, atransparent_unionattribute is available for union types. Types contained in the union can be converted transparently to the union type itself in a function call, provided that all types have the same size. It is mainly intended for functions with multiple parameter interfaces, a use necessitated by early Unix extensions and later re-standardisation.[4] InCOBOL, union data items are defined in two ways. The first uses theRENAMES(66 level) keyword, which effectively maps a second alphanumeric data item on top of the same memory location as a preceding data item. For example, a data itemPERSON-RECcould be defined as a group containing another group and a numeric data item, withPERSON-DATAdefined as an alphanumeric data item that renamesPERSON-REC, treating the data bytes contained within it as character data. The second way to define a union type is by using theREDEFINESkeyword. For example, a data itemVERS-NUMcould be defined as a 2-byte binary integer containing a version number, and a second data itemVERS-BYTESdefined as a two-character alphanumeric variable. Since the second item isredefinedover the first item, the two items share the same address in memory, and therefore share the same underlying data bytes. The first item interprets the two data bytes as a binary value, while the second item interprets the bytes as character values. InPascal, there are two ways to create unions. One is the standard way through a variant record. The second is a nonstandard means of declaring a variable as absolute, meaning it is placed at the same memory location as another variable or at an absolute address. While all Pascal compilers support variant records, only some support absolute variables.
For the purposes of this example, the following are all integer types: abyteconsists of 8 bits, awordis 16 bits, and anintegeris 32 bits. The non-standard absolute form declares one variable at the location of another or at a fixed machine address, for example B: array[0..3] of byte absolute A; or C: integer absolute 0;. In the first example, each of the elements of the array B maps to one of the specific bytes of the variable A. In the second example, the variable C is assigned to the exact machine address 0. In a variant record, some fields share the same storage location as others. InPL/Ithe original term for a union wascell,[5]which is still accepted as a synonym for union by several compilers. The union declaration is similar to the structure definition, where elements at the same level within the union declaration occupy the same storage. Elements of the union can be any data type, including structures and arrays.[6]: pp192–193In the PL/I analogue of the COBOL example above, vers_num and vers_bytes occupy the same storage locations. An alternative to a union declaration is the DEFINED attribute, which allows alternative declarations of storage; however, the data types of the base and defined variables must match.[6]: pp.289–293 Rustimplements both tagged and untagged unions. In Rust, tagged unions are implemented using theenumkeyword. Unlikeenumerated typesin most other languages, enum variants in Rust can contain additional data in the form of a tuple or struct, making them tagged unions rather than simple enumerated types.[7] Rust also supports untagged unions using theunionkeyword. The memory layout of unions in Rust is undefined by default,[8]but a union with the#[repr(C)]attribute will be laid out in memory exactly like the equivalent union in C.[9]Reading the fields of a union can only be done within anunsafefunction or block, as the compiler cannot guarantee that the data in the union will be valid for the type of the field; if this is not the case, it will result inundefined behavior.[10] In C and C++, a union is declared with theunionkeyword, with its members listed as in a struct. A structure can also be a member of a union, as the example sketched after this section shows: a variableuvarcan be defined as a union (tagged asname1) which contains two members, a structure (tagged asname2) namedsvar(which in turn contains three members), and an integer variable namedd. Unions may occur within structures and arrays, and vice versa; in the symbol-table sketch after this section, the number ival is referred to assymtab[i].u.ivaland the first character of string sval by either of*symtab[i].u.svalorsymtab[i].u.sval[0]. Union types were introduced in PHP 8.0.[11]The values are implicitly "tagged" with a type by the language, and may be retrieved by "gettype()". Support for typing was introduced in Python 3.5.[12]The new syntax for union types was introduced in Python 3.10.[13] Union types are supported in TypeScript.[14]The values are implicitly "tagged" with a type by the language, and may be retrieved using atypeofcall for primitive values and aninstanceofcomparison for complex data types. Types with overlapping usage (e.g. a slice method exists on both strings and arrays, the plus operator works on both strings and numbers) don't need additional narrowing to use these features. Tagged unions in Rust use theenumkeyword, and can contain tuple and struct variants. Untagged unions in Rust use theunionkeyword. Reading from the fields of an untagged union results inundefined behaviorif the data in the union is not valid as the type of the field, and thus requires anunsafeblock.
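The C listings described above are missing from this text; a reconstruction consistent with the surrounding description (the three member types inside name2, the third union member fval, and the table size NSYM are assumptions) might read:

    /* A union whose members are a structure and an integer. */
    union name1 {
        struct name2 {
            int   a;     /* the three members of name2 are assumed */
            float b;
            char  c;
        } svar;
        int d;
    } uvar;

    /* A union nested within a structure, within an array: */
    #define NSYM 100     /* assumed table size */
    struct {
        union {
            int   ival;
            float fval;  /* assumed additional member */
            char *sval;
        } u;
    } symtab[NSYM];

so that symtab[i].u.ival names the integer member, and *symtab[i].u.sval or symtab[i].u.sval[0] names the first character of the string sval.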
https://en.wikipedia.org/wiki/Union_type
In the area ofmathematical logicandcomputer scienceknown astype theory, aunit typeis atypethat allows only one value (and thus can hold no information). The carrier (underlying set) associated with a unit type can be anysingleton set. There is anisomorphismbetween any two such sets, so it is customary to talk abouttheunit type and ignore the details of its value. One may also regard the unit type as the type of 0-tuples, i.e. theproductof no types. The unit type is theterminal objectin thecategoryof types and typed functions. It should not be confused with thezeroorempty type, which allowsnovalues and is theinitial objectin this category. Similarly, theBooleanis the type withtwovalues. The unit type is implemented in mostfunctional programminglanguages. Thevoid typethat is used in some imperative programming languages serves some of its functions, but because its carrier set is empty, it has some limitations (as detailed below). Several computerprogramming languagesprovide a unit type to specify the result type of afunctionwith the sole purpose of causing aside effect, and the argument type of a function that does not require arguments. InC,C++,C#,D, andPHP,voidis used to designate a function that does not return anything useful, or a function that accepts no arguments. The unit type in C is conceptually similar to an emptystruct, but a struct without members is not allowed in the C language specification (this is allowed in C++). Instead, 'void' is used in a manner that simulates some, but not all, of the properties of the unit type, as detailed below. Like most imperative languages, C allows functions that do not return a value; these are specified as having the void return type. Such functions are called procedures in other imperative languages likePascal, where a syntactic distinction, instead of type-system distinction, is made between functions and procedures. The first notable difference between a true unit type and the void type is that the unit type may always be the type of the argument to a function, but the void type cannot be the type of an argument in C, despite the fact that it may appear as the sole argument in the list. This problem is best illustrated by the following program, which is a compile-time error in C: This issue does not arise in most programming practice in C, because since thevoidtype carries no information, it is useless to pass it anyway; but it may arise ingeneric programming, such as C++templates, wherevoidmust be treated differently from other types. In C++ however, empty classes are allowed, so it is possible to implement a real unit type; the above example becomes compilable as: (For brevity, we're not worried in the above example whetherthe_unitis really asingleton; seesingleton patternfor details on that issue.) The second notable difference is that the void type is special and can never be stored in arecord type, i.e. in a struct or a class in C/C++. In contrast, the unit type can be stored in records in functional programming languages, i.e. it can appear as the type of a field; the above implementation of the unit type in C++ can also be stored. While this may seem a useless feature, it does allow one for instance to elegantly implement asetas amapto the unit type; in the absence of a unit type, one can still implement a set this way by storing some dummy value of another type for each key. In Java Generics, type parameters must be reference types. The wrapper typeVoidis often used when a unit type parameter is needed. 
Although theVoidtype can never have any instances, it does have one value,null(like all other reference types), so it acts as a unit type. In practice, any other non-instantiable type, e.g.Math, can also be used for this purpose, since they also have exactly one value,null. Statically typed languages give a type to every possible expression, so they need to associate a type with thenullexpression. A type will be defined fornull, and it will only have this value. For example, in D it is possible to declare functions that may only returnnull. nullis the only value thattypeof(null), a unit type, can have.
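The C and C++ programs described earlier in this article are missing from this text; reconstructions consistent with the description (the names f, g, and the_unit appear in the surrounding text, while the exact bodies are assumptions) might read:

    /* C: void cannot be used as an argument type, so this program
       is a compile-time error. */
    void f(void) {}
    void g(void) {}

    int main(void)
    {
        f(g());   /* error: a void value cannot be passed to f */
        return 0;
    }

In C++, where empty classes are allowed, the same shape compiles once the unit type is made explicit:

    /* C++: an empty class acts as a real unit type. */
    class unit_type {};
    const unit_type the_unit = unit_type();

    unit_type f(unit_type) { return the_unit; }
    unit_type g(unit_type) { return the_unit; }

    int main()
    {
        f(g(the_unit));   /* compiles: the unit value can be passed around */
        return 0;
    }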
https://en.wikipedia.org/wiki/Unit_type
On the x86 computer architecture, a triple fault is a special kind of exception generated by the CPU when an exception occurs while the CPU is trying to invoke the double fault exception handler, which itself handles exceptions occurring while trying to invoke a regular exception handler.

x86 processors beginning with the 80286 will cause a shutdown cycle to occur when a triple fault is encountered. This typically causes the motherboard hardware to initiate a CPU reset, which, in turn, causes the whole computer to reboot.[1][2]

Triple faults indicate a problem with the operating system kernel or device drivers. In modern operating systems, a triple fault is typically caused by a buffer overflow or underflow in a device driver which writes over the interrupt descriptor table (IDT). If the IDT is corrupted, when the next interrupt happens, the processor will be unable to call either the needed interrupt handler or the double fault handler because the descriptors in the IDT are corrupted.[citation needed]

In QEMU, a triple fault produces a dump of the virtual machine in the console, with the instruction pointer set to the instruction that triggered the first exception. In VirtualBox, a triple fault causes a Guru Meditation error to be displayed to the user. A virtual machine in this state has most features disabled and cannot be restarted. If the VirtualBox Debugger is open, a message is printed indicating a triple fault has occurred, followed by a register dump and disassembly of the last instruction executed, similar to the output of the rg debugger command. When using Intel VT-x, a triple fault causes a VM exit, with exit reason 2. The exit reason is saved to the VMCS and may be handled by the VMM software. In VMware, an error message will be displayed and the virtual machine will need to be reset.

The Intel 80286 processor was the first x86 processor to introduce the now-ubiquitous protected mode. However, the 286 could not revert to the basic 8086-compatible "real mode" without resetting the processor, which can only be done using hardware external to the CPU. On the IBM AT and compatibles, the documented method of doing this was to use a special function on the Intel 8042 keyboard controller, which would assert the RESET pin of the processor. However, intentionally triple-faulting the CPU was found to cause the transition to occur much faster (0.8 milliseconds instead of 15+ milliseconds) and more cleanly, permitting multitasking operating systems to switch back and forth at high speed.[3]

Some operating system kernels, such as Linux, still use triple faults as a last effort in their rebooting process if an ACPI reboot fails. This is done by setting the IDT register to 0 and then issuing an interrupt (as sketched below).[1] Since the table now has length 0, all attempts to access it fail and the processor generates a triple fault.
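A minimal, hypothetical sketch (our names, GCC-style inline assembly for x86-64) of the reboot technique just described. It compiles with a GCC-compatible compiler, but by design it can only take effect in kernel mode (ring 0); in user space the privileged lidt instruction itself faults:

    #include <cstdint>

    // A zero-limit IDT descriptor: any vector lookup will fail.
    struct [[gnu::packed]] IdtDescriptor {
        uint16_t limit;   // 0: the table holds no entries
        uint64_t base;    // base address is irrelevant once limit is 0
    };

    [[noreturn]] void tripleFaultReboot() {
        IdtDescriptor idt = {0, 0};
        // Load the empty IDT, then raise an interrupt. The handler lookup
        // fails, the double-fault lookup fails, and the resulting triple
        // fault makes the hardware reset the CPU.
        asm volatile("lidt %0; int3" : : "m"(idt));
        for (;;) {}   // unreachable: the machine resets
    }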
https://en.wikipedia.org/wiki/Triple_fault
In computer programming, a type system is a logical system comprising a set of rules that assigns a property called a type (for example, integer, floating point, string) to every term (a word, phrase, or other set of symbols). Usually the terms are various language constructs of a computer program, such as variables, expressions, functions, or modules.[1] A type system dictates the operations that can be performed on a term. For variables, the type system determines the allowed values of that term. Type systems formalize and enforce the otherwise implicit categories the programmer uses for algebraic data types, data structures, or other data types, such as "string", "array of float", "function returning boolean".

Type systems are often specified as part of programming languages and built into interpreters and compilers, although the type system of a language can be extended by optional tools that perform added checks using the language's original type syntax and grammar. The main purpose of a type system in a programming language is to reduce possibilities for bugs in computer programs due to type errors.[2] The given type system in question determines what constitutes a type error, but in general, the aim is to prevent operations expecting a certain kind of value from being used with values for which that operation does not make sense (validity errors). Type systems allow defining interfaces between different parts of a computer program, and then checking that the parts have been connected in a consistent way. This checking can happen statically (at compile time), dynamically (at run time), or as a combination of both. Type systems have other purposes as well, such as expressing business rules, enabling certain compiler optimizations, allowing for multiple dispatch, and providing a form of documentation.

An example of a simple type system is that of the C language. The portions of a C program are the function definitions. One function is invoked by another function. The interface of a function states the name of the function and a list of parameters that are passed to the function's code. The code of an invoking function states the name of the invoked function, along with the names of variables that hold values to pass to it. During a computer program's execution, the values are placed into temporary storage, then execution jumps to the code of the invoked function. The invoked function's code accesses the values and makes use of them. If the instructions inside the function are written with the assumption of receiving an integer value, but the calling code passed a floating-point value, then the wrong result will be computed by the invoked function. The C compiler checks the types of the arguments passed to a function when it is called against the types of the parameters declared in the function's definition. If the types do not match, the compiler throws a compile-time error or warning (see the sketch below).

A compiler may also use the static type of a value to optimize the storage it needs and the choice of algorithms for operations on the value. In many C compilers the float data type, for example, is represented in 32 bits, in accord with the IEEE specification for single-precision floating point numbers. They will thus use floating-point-specific microprocessor operations on those values (floating-point addition, multiplication, etc.). The depth of type constraints and the manner of their evaluation affect the typing of the language.
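A minimal sketch (hypothetical names) of the argument-type checking just described, written in C++, where a genuine mismatch is a hard error rather than a warning; the marked line is rejected at compile time:

    #include <string>

    // The declared interface: the parameter must be an int.
    int successor(int n) { return n + 1; }

    int main() {
        successor(41);         // OK: argument type matches the parameter
        std::string s = "41";
        successor(s);          // compile-time error: no conversion from
                               // std::string to int
        return 0;
    }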
A programming language may further associate an operation with various resolutions for each type, in the case of type polymorphism. Type theory is the study of type systems. The concrete types of some programming languages, such as integers and strings, depend on practical issues of computer architecture, compiler implementation, and language design.

Formally, type theory studies type systems. A programming language must have the opportunity to type check using the type system whether at compile time or runtime, manually annotated or automatically inferred. As Mark Manasse concisely put it:[3]

The fundamental problem addressed by a type theory is to ensure that programs have meaning. The fundamental problem caused by a type theory is that meaningful programs may not have meanings ascribed to them. The quest for richer type systems results from this tension.

Assigning a data type, termed typing, gives meaning to a sequence of bits such as a value in memory or some object such as a variable. The hardware of a general purpose computer is unable to discriminate between, for example, a memory address and an instruction code, or between a character, an integer, or a floating-point number, because it makes no intrinsic distinction between any of the possible values that a sequence of bits might mean.[note 1] Associating a sequence of bits with a type conveys that meaning to the programmable hardware to form a symbolic system composed of that hardware and some program.

A program associates each value with at least one specific type, but it also can occur that one value is associated with many subtypes. Other entities, such as objects, modules, communication channels, and dependencies can become associated with a type. Even a type can become associated with a type. An implementation of a type system could in theory associate identifications called data type (a type of a value), class (a type of an object), and kind (a type of a type, or metatype). These are the abstractions that typing can go through, on a hierarchy of levels contained in a system.

When a programming language evolves a more elaborate type system, it gains a more finely grained rule set than basic type checking, but this comes at a price when the type inferences (and other properties) become undecidable, and when more attention must be paid by the programmer to annotate code or to consider computer-related operations and functioning. It is challenging to find a sufficiently expressive type system that satisfies all programming practices in a type safe manner.

A programming language compiler can also implement a dependent type or an effect system, which enables even more program specifications to be verified by a type checker. Beyond simple value-type pairs, a virtual "region" of code is associated with an "effect" component describing what is being done with what, and enabling for example to "throw" an error report. Thus the symbolic system may be a type and effect system, which endows it with more safety checking than type checking alone. Whether automated by the compiler or specified by a programmer, a type system renders program behavior illegal if it falls outside the type-system rules.
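One conventional reading of that hierarchy of typing levels, written as judgments (the notation is ours, using the ∗ "type" and ◻ "kind" sorts common in type theory, not the article's): a value has a data type, the data type has a kind, and the kind itself is classified one level up:

\[
42 : \mathtt{int}, \qquad \mathtt{int} : *, \qquad * : \square
\]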
Advantages provided by programmer-specified type systems include:

Advantages provided by compiler-specified type systems include:

A type error occurs when an operation receives a different type of data than it expected.[4] For example, a type error would happen if a line of code divides two integers, and is passed a string of letters instead of an integer.[4] It is an unintended condition[note 2] which might manifest in multiple stages of a program's development. Thus a facility for detection of the error is needed in the type system. In some languages, such as Haskell, for which type inference is automated, lint might be available to its compiler to aid in the detection of error.

Type safety contributes to program correctness, but might only guarantee correctness at the cost of making the type checking itself an undecidable problem (as in the halting problem). In a type system with automated type checking, a program may prove to run incorrectly yet produce no compiler errors. Division by zero is an unsafe and incorrect operation, but a type checker which only runs at compile time does not scan for division by zero in most languages; that division would surface as a runtime error. To prove the absence of these defects, other kinds of formal methods, collectively known as program analyses, are in common use. Alternatively, a sufficiently expressive type system, such as in dependently typed languages, can prevent these kinds of errors (for example, expressing the type of non-zero numbers). In addition, software testing is an empirical method for finding errors that such a type checker would not detect.

The process of verifying and enforcing the constraints of types—type checking—may occur at compile time (a static check) or at run-time (a dynamic check). If a language specification requires its typing rules strongly, more or less allowing only those automatic type conversions that do not lose information, one can refer to the process as strongly typed; if not, as weakly typed. The terms are not usually used in a strict sense.

Static type checking is the process of verifying the type safety of a program based on analysis of a program's text (source code). If a program passes a static type checker, then the program is guaranteed to satisfy some set of type safety properties for all possible inputs. Static type checking can be considered a limited form of program verification (see type safety), and in a type-safe language, can also be considered an optimization. If a compiler can prove that a program is well-typed, then it does not need to emit dynamic safety checks, allowing the resulting compiled binary to run faster and to be smaller.

Static type checking for Turing-complete languages is inherently conservative. That is, if a type system is both sound (meaning that it rejects all incorrect programs) and decidable (meaning that it is possible to write an algorithm that determines whether a program is well-typed), then it must be incomplete (meaning there are correct programs, which are also rejected, even though they do not encounter runtime errors).[7] For example, consider a program containing code of the shape sketched below. Even if the expression <complex test> always evaluates to true at run-time, most type checkers will reject the program as ill-typed, because it is difficult (if not impossible) for a static analyzer to determine that the else branch will not be taken.[8] Consequently, a static type checker will quickly detect type errors in rarely used code paths.
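A hedged reconstruction of the elided example, in C++ form (the names are ours; the article's <complex test> placeholder becomes a function): the else branch is ill-typed, so the program is rejected even when that branch is dead:

    bool complex_test();   // assume: always returns true at run time

    int f() {
        if (complex_test()) {
            return 42;            // the branch that actually runs
        } else {
            return "banana" * 2;  // ill-typed: rejected at compile time,
                                  // even though this branch is never taken
        }
    }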
Without static type checking, even code coverage tests with 100% coverage may be unable to find such type errors. The tests may fail to detect such type errors, because the combination of all places where values are created and all places where a certain value is used must be taken into account.

A number of useful and common programming language features cannot be checked statically, such as downcasting. Thus, many languages will have both static and dynamic type checking; the static type checker verifies what it can, and dynamic checks verify the rest.

Many languages with static type checking provide a way to bypass the type checker. Some languages allow programmers to choose between static and dynamic type safety. For example, historically C# declares variables statically,[9]: 77, Section 3.2 but C# 4.0 introduces the dynamic keyword, which is used to declare variables to be checked dynamically at runtime.[9]: 117, Section 4.1 Other languages allow writing code that is not type-safe; for example, in C, programmers can freely cast a value between any two types that have the same size, effectively subverting the type concept.

Dynamic type checking is the process of verifying the type safety of a program at runtime. Implementations of dynamically type-checked languages generally associate each runtime object with a type tag (i.e., a reference to a type) containing its type information. This runtime type information (RTTI) can also be used to implement dynamic dispatch, late binding, downcasting, reflective programming (reflection), and similar features.

Most type-safe languages include some form of dynamic type checking, even if they also have a static type checker.[10] The reason for this is that many useful features or properties are difficult or impossible to verify statically. For example, suppose that a program defines two types, A and B, where B is a subtype of A. If the program tries to convert a value of type A to type B, which is known as downcasting, then the operation is legal only if the value being converted is actually a value of type B. Thus, a dynamic check is needed to verify that the operation is safe. This requirement is one of the criticisms of downcasting.

By definition, dynamic type checking may cause a program to fail at runtime. In some programming languages, it is possible to anticipate and recover from these failures. In others, type-checking errors are considered fatal. Programming languages that include dynamic type checking but not static type checking are often called "dynamically typed programming languages".

Certain languages allow both static and dynamic typing. For example, Java and some other ostensibly statically typed languages support downcasting types to their subtypes, querying an object to discover its dynamic type, and other type operations that depend on runtime type information. Another example is C++ RTTI. More generally, most programming languages include mechanisms for dispatching over different 'kinds' of data, such as disjoint unions, runtime polymorphism, and variant types. Even when not interacting with type annotations or type checking, such mechanisms are materially similar to dynamic typing implementations. See programming language for more discussion of the interactions between static and dynamic typing.

Objects in object-oriented languages are usually accessed by a reference whose static target type (or manifest type) is equal to either the object's run-time type (its latent type) or a supertype thereof.
This is conformant with the Liskov substitution principle, which states that all operations performed on an instance of a given type can also be performed on an instance of a subtype. This concept is also known as subsumption or subtype polymorphism. In some languages subtypes may also possess covariant or contravariant return types and argument types respectively.

Certain languages, for example Clojure, Common Lisp, or Cython, are dynamically type checked by default, but allow programs to opt into static type checking by providing optional annotations. One reason to use such hints would be to optimize the performance of critical sections of a program. This is formalized by gradual typing. The programming environment DrRacket, a pedagogic environment based on Lisp, and a precursor of the language Racket, is also soft-typed.[11]

Conversely, as of version 4.0, the C# language provides a way to indicate that a variable should not be statically type checked. A variable whose type is dynamic will not be subject to static type checking. Instead, the program relies on runtime type information to determine how the variable may be used.[12][9]: 113–119

In Rust, the dyn std::any::Any type provides dynamic typing of 'static types.[13]

The choice between static and dynamic typing requires certain trade-offs. Static typing can find type errors reliably at compile time, which increases the reliability of the delivered program. However, programmers disagree over how commonly type errors occur, resulting in further disagreements over the proportion of those bugs that are coded that would be caught by appropriately representing the designed types in code.[14][15] Static typing advocates[who?] believe programs are more reliable when they have been well type-checked, whereas dynamic-typing advocates[who?] point to distributed code that has proven reliable and to small bug databases.[citation needed] The value of static typing increases as the strength of the type system is increased. Advocates of dependent typing,[who?] implemented in languages such as Dependent ML and Epigram, have suggested that almost all bugs can be considered type errors, if the types used in a program are properly declared by the programmer or correctly inferred by the compiler.[16]

Static typing usually results in compiled code that executes faster. When the compiler knows the exact data types that are in use (which is necessary for static verification, either through declaration or inference), it can produce optimized machine code. Some dynamically typed languages such as Common Lisp allow optional type declarations for optimization for this reason. By contrast, dynamic typing may allow compilers to run faster and interpreters to dynamically load new code, because changes to source code in dynamically typed languages may result in less checking to perform and less code to revisit.[clarification needed] This too may reduce the edit-compile-test-debug cycle.

Statically typed languages that lack type inference (such as C and Java prior to version 10) require that programmers declare the types that a method or function must use. This can serve as added program documentation, that is active and dynamic, instead of static. This allows a compiler to prevent it from drifting out of synchrony, and from being ignored by programmers. However, a language can be statically typed without requiring type declarations (examples include Haskell, Scala, OCaml, F#, Swift, and to a lesser extent C# and C++), so explicit type declaration is not a necessary requirement for static typing in all languages.
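A short C++ sketch (our example) of the mix described above: the static type of the reference is the supertype, subsumption lets a subtype instance be passed wherever the supertype is expected, and dynamic_cast performs the runtime check that makes downcasting safe and recoverable:

    #include <iostream>

    struct A { virtual ~A() = default; };                    // supertype
    struct B : A { void onlyB() { std::cout << "B\n"; } };   // subtype

    void tryDowncast(A& a) {
        // Legal only if the referent really is a B; checked at run time.
        if (B* b = dynamic_cast<B*>(&a)) {
            b->onlyB();
        } else {
            std::cout << "not a B\n";  // recoverable failure, as the text notes
        }
    }

    int main() {
        A plainA;
        B someB;
        tryDowncast(plainA);  // prints "not a B"
        tryDowncast(someB);   // prints "B" (Liskov substitution in action)
        return 0;
    }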
Dynamic typing allows constructs that some (simple) static type checking would reject as illegal. For example, eval functions, which execute arbitrary data as code, become possible. An eval function is possible with static typing, but requires advanced uses of algebraic data types. Further, dynamic typing better accommodates transitional code and prototyping, such as allowing a placeholder data structure (mock object) to be transparently used in place of a full data structure (usually for the purposes of experimentation and testing).

Dynamic typing typically allows duck typing (which enables easier code reuse). Many[specify] languages with static typing also feature duck typing or other mechanisms like generic programming that also enable easier code reuse. Dynamic typing typically makes metaprogramming easier to use. For example, C++ templates are typically more cumbersome to write than the equivalent Ruby or Python code since C++ has stronger rules regarding type definitions (for both functions and variables). This forces a developer to write more boilerplate code for a template than a Python developer would need to. More advanced run-time constructs such as metaclasses and introspection are often harder to use in statically typed languages. In some languages, such features may also be used e.g. to generate new types and behaviors on the fly, based on run-time data. Such advanced constructs are often provided by dynamic programming languages; many of these are dynamically typed, although dynamic typing need not be related to dynamic programming languages.

Languages are often colloquially referred to as strongly typed or weakly typed. In fact, there is no universally accepted definition of what these terms mean. In general, there are more precise terms to represent the differences between type systems that lead people to call them "strong" or "weak".

A third way of categorizing the type system of a programming language is by the safety of typed operations and conversions. Computer scientists use the term type-safe language to describe languages that do not allow operations or conversions that violate the rules of the type system. Computer scientists use the term memory-safe language (or just safe language) to describe languages that do not allow programs to access memory that has not been assigned for their use. For example, a memory-safe language will check array bounds, or else statically guarantee (i.e., at compile time before execution) that array accesses out of the array boundaries will cause compile-time and perhaps runtime errors.

Consider the following program of a language that is both type-safe and memory-safe (both this program and the C one are reconstructed in the sketch below):[17] In this example, the variable z will have the value 42. Although this may not be what the programmer anticipated, it is a well-defined result. If y were a different string, one that could not be converted to a number (e.g. "Hello World"), the result would be well-defined as well. Note that a program can be type-safe or memory-safe and still crash on an invalid operation. This is for languages where the type system is not sufficiently advanced to precisely specify the validity of operations on all possible operands. But if a program encounters an operation that is not type-safe, terminating the program is often the only option.

Now consider a similar example in C: In this example z will point to a memory address five characters beyond y, equivalent to three characters after the terminating zero character of the string pointed to by y. This is memory that the program is not expected to access.
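A hedged reconstruction of the two elided examples. The first is pseudocode for a hypothetical type-safe, memory-safe scripting language that coerces strings to numbers, so that 5 + "37" yields 42 as the text states; the second is the C fragment whose behavior the surrounding paragraphs analyze:

    /* Type-safe language (pseudocode): well-defined, if surprising.
         var x := 5;
         var y := "37";
         var z := x + y;    // z is 42: "37" is converted to the number 37   */

    /* The similar C program: it compiles, but the addition is pointer
       arithmetic, not numeric addition. */
    int main(void) {
        int x = 5;
        char y[] = "37";
        char* z = x + y;    /* z points five characters beyond y */
        return 0;
    }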
In C terms this is simply undefined behaviour and the program may do anything; with a simple compiler it might actually print whatever byte is stored after the string "37". As this example shows, C is not memory-safe. As arbitrary data was assumed to be a character, it is also not a type-safe language.

In general, type-safety and memory-safety go hand in hand. For example, a language that supports pointer arithmetic and number-to-pointer conversions (like C) is neither memory-safe nor type-safe, because it allows arbitrary memory to be accessed as if it were valid memory of any type.

Some languages allow different levels of checking to apply to different regions of code. Examples include:

Additional tools such as lint and IBM Rational Purify can also be used to achieve a higher level of strictness.

It has been proposed, chiefly by Gilad Bracha, that the choice of type system be made independent of choice of language; that a type system should be a module that can be plugged into a language as needed. He believes this is advantageous, because what he calls mandatory type systems make languages less expressive and code more fragile.[22] The requirement that the type system does not affect the semantics of the language is difficult to fulfill. Optional typing is related to, but distinct from, gradual typing. While both typing disciplines can be used to perform static analysis of code (static typing), optional type systems do not enforce type safety at runtime (dynamic typing).[22][23]

The term polymorphism refers to the ability of code (especially, functions or classes) to act on values of multiple types, or to the ability of different instances of the same data structure to contain elements of different types. Type systems that allow polymorphism generally do so in order to improve the potential for code re-use: in a language with polymorphism, programmers need only implement a data structure such as a list or an associative array once, rather than once for each type of element with which they plan to use it. For this reason computer scientists sometimes call the use of certain forms of polymorphism generic programming. The type-theoretic foundations of polymorphism are closely related to those of abstraction, modularity and (in some cases) subtyping.

Many type systems have been created that are specialized for use in certain environments with certain types of data, or for out-of-band static program analysis. Frequently, these are based on ideas from formal type theory and are only available as part of prototype research systems. The following table gives an overview over type theoretic concepts that are used in specialized type systems. The names M, N, O range over terms and the names σ, τ range over types. The following notation will be used:

Dependent types are based on the idea of using scalars or values to more precisely describe the type of some other value. For example, matrix(3, 3) might be the type of a 3×3 matrix. We can then define typing rules such as the following rule for matrix multiplication:

    matrix_multiply : matrix(k, m) × matrix(m, n) → matrix(k, n)

where k, m, n are arbitrary positive integer values. A variant of ML called Dependent ML has been created based on this type system, but because type checking for conventional dependent types is undecidable, not all programs using them can be type-checked without some kind of limits. Dependent ML limits the sort of equality it can decide to Presburger arithmetic.
Other languages such as Epigram make the value of all expressions in the language decidable so that type checking can be decidable. However, in general proof of decidability is undecidable, so many programs require hand-written annotations that may be very non-trivial. As this impedes the development process, many language implementations provide an easy way out in the form of an option to disable this condition. This, however, comes at the cost of making the type-checker run in an infinite loop when fed programs that do not type-check, causing the compilation to fail.

Linear types, based on the theory of linear logic, and closely related to uniqueness types, are types assigned to values having the property that they have one and only one reference to them at all times. These are valuable for describing large immutable values such as files, strings, and so on, because any operation that simultaneously destroys a linear object and creates a similar object (such as str = str + "a") can be optimized "under the hood" into an in-place mutation. Normally this is not possible, as such mutations could cause side effects on parts of the program holding other references to the object, violating referential transparency. They are also used in the prototype operating system Singularity for interprocess communication, statically ensuring that processes cannot share objects in shared memory in order to prevent race conditions. The Clean language (a Haskell-like language) uses this type system in order to gain a lot of speed (compared to performing a deep copy) while remaining safe.

Intersection types are types describing values that belong to both of two other given types with overlapping value sets. For example, in most implementations of C the signed char has range −128 to 127 and the unsigned char has range 0 to 255, so the intersection type of these two types would have range 0 to 127. Such an intersection type could be safely passed into functions expecting either signed or unsigned chars, because it is compatible with both types.

Intersection types are useful for describing overloaded function types: for example, if "int → int" is the type of functions taking an integer argument and returning an integer, and "float → float" is the type of functions taking a float argument and returning a float, then the intersection of these two types can be used to describe functions that do one or the other, based on what type of input they are given. Such a function could be passed into another function expecting an "int → int" function safely; it simply would not use the "float → float" functionality. In a subclassing hierarchy, the intersection of a type and an ancestor type (such as its parent) is the most derived type. The intersection of sibling types is empty. The Forsythe language includes a general implementation of intersection types. A restricted form is refinement types.

Union types are types describing values that belong to either of two types. For example, in C, the signed char has a −128 to 127 range, and the unsigned char has a 0 to 255 range, so the union of these two types would have an overall "virtual" range of −128 to 255 that may be used partially depending on which union member is accessed. Any function handling this union type would have to deal with integers in this complete range. More generally, the only valid operations on a union type are operations that are valid on both types being unioned.
C's "union" concept is similar to union types, but is not typesafe, as it permits operations that are valid oneithertype, rather thanboth. Union types are important in program analysis, where they are used to represent symbolic values whose exact nature (e.g., value or type) is not known. In a subclassing hierarchy, the union of a type and an ancestor type (such as its parent) is the ancestor type. The union of sibling types is a subtype of their common ancestor (that is, all operations permitted on their common ancestor are permitted on the union type, but they may also have other valid operations in common). Existentialtypes are frequently used in connection withrecord typesto representmodulesandabstract data types, due to their ability to separate implementation from interface. For example, the type "T = ∃X { a: X; f: (X → int); }" describes a module interface that has a data member namedaof typeXand a function namedfthat takes a parameter of thesametypeXand returns an integer. This could be implemented in different ways; for example: These types are both subtypes of the more general existential type T and correspond to concrete implementation types, so any value of one of these types is a value of type T. Given a value "t" of type "T", we know that "t.f(t.a)" is well-typed, regardless of what the abstract typeXis. This gives flexibility for choosing types suited to a particular implementation, while clients that use only values of the interface type—the existential type—are isolated from these choices. In general it's impossible for the typechecker to infer which existential type a given module belongs to. In the above example intT { a: int; f: (int → int); } could also have the type ∃X { a: X; f: (int → int); }. The simplest solution is to annotate every module with its intended type, e.g.: Although abstract data types and modules had been implemented in programming languages for quite some time, it wasn't until 1988 thatJohn C. MitchellandGordon Plotkinestablished the formal theory under the slogan: "Abstract [data] types have existential type".[25]The theory is a second-ordertyped lambda calculussimilar toSystem F, but with existential instead of universal quantification. In a type system withGradual typing, variables may be assigned a type either atcompile-time(which is static typing), or atrun-time(which is dynamic typing).[26]This allows software developers to choose either type paradigm as appropriate, from within a single language.[26]Gradual typing uses a special type nameddynamicto represent statically unknown types; gradual typing replaces the notion of type equality with a new relation calledconsistencythat relates the dynamic type to every other type. The consistency relation is symmetric but not transitive.[27] Many static type systems, such as those of C and Java, requiretype declarations: the programmer must explicitly associate each variable with a specific type. Others, such as Haskell's, performtype inference: the compiler draws conclusions about the types of variables based on how programmers use those variables. For example, given a functionf(x,y)that addsxandytogether, the compiler can infer thatxandymust be numbers—since addition is only defined for numbers. Thus, any call tofelsewhere in the program that specifies a non-numeric type (such as a string or list) as an argument would signal an error. Numerical and string constants and expressions in code can and often do imply type in a particular context. 
For example, an expression 3.14 might imply a type of floating-point, while [1, 2, 3] might imply a list of integers—typically an array.

Type inference is in general possible, if it is computable in the type system in question. Moreover, even if inference is not computable in general for a given type system, inference is often possible for a large subset of real-world programs. Haskell's type system, a version of Hindley–Milner, is a restriction of System Fω to so-called rank-1 polymorphic types, in which type inference is computable. Most Haskell compilers allow arbitrary-rank polymorphism as an extension, but this makes type inference not computable. (Type checking is decidable, however, and rank-1 programs still have type inference; higher rank polymorphic programs are rejected unless given explicit type annotations.)

A type system that assigns types to terms in type environments using typing rules is naturally associated with the decision problems of type checking, typability, and type inhabitation.[28]

Some languages like C# or Scala have a unified type system.[29] This means that all C# types including primitive types inherit from a single root object. Every type in C# inherits from the Object class. Some languages, like Java and Raku, have a root type but also have primitive types that are not objects.[30] Java provides wrapper object types that exist together with the primitive types so developers can use either the wrapper object types or the simpler non-object primitive types. Raku automatically converts primitive types to objects when their methods are accessed.[31]

A type checker for a statically typed language must verify that the type of any expression is consistent with the type expected by the context in which that expression appears. For example, in an assignment statement of the form x := e, the inferred type of the expression e must be consistent with the declared or inferred type of the variable x. This notion of consistency, called compatibility, is specific to each programming language. If the type of e and the type of x are the same, and assignment is allowed for that type, then this is a valid expression. Thus, in the simplest type systems, the question of whether two types are compatible reduces to that of whether they are equal (or equivalent).

Different languages, however, have different criteria for when two type expressions are understood to denote the same type. These different equational theories of types vary widely, two extreme cases being structural type systems, in which any two types that describe values with the same structure are equivalent, and nominative type systems, in which no two syntactically distinct type expressions denote the same type (i.e., types must have the same "name" in order to be equal).

In languages with subtyping, the compatibility relation is more complex: if B is a subtype of A, then a value of type B can be used in a context where one of type A is expected (covariant), even if the reverse is not true. Like equivalence, the subtype relation is defined differently for each programming language, with many variations possible. The presence of parametric or ad hoc polymorphism in a language may also have implications for type compatibility.
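A hedged C++ sketch of the existential-type idea from the modules discussion above (all names are ours): the abstract base class plays the role of T = ∃X { a: X; f: (X → int); }, hiding the witness type X from clients, who can only combine a and f the way t.f(t.a) does:

    #include <cmath>
    #include <iostream>
    #include <memory>

    // The interface: the hidden type X never escapes to clients.
    struct T {
        virtual ~T() = default;
        virtual int apply() = 0;   // corresponds to computing f(a)
    };

    struct IntT : T {              // X = int: a = 5, f = successor
        int a = 5;
        int apply() override { return a + 1; }
    };

    struct FloatT : T {            // X = float: a = 1.4f, f = rounding
        float a = 1.4f;
        int apply() override { return static_cast<int>(std::lround(a)); }
    };

    int main() {
        std::unique_ptr<T> t = std::make_unique<FloatT>();
        std::cout << t->apply() << "\n";   // client code never sees X
        return 0;
    }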
https://en.wikipedia.org/wiki/Existential_types
In mathematical logic, System U and System U− are pure type systems, i.e. special forms of a typed lambda calculus with an arbitrary number of sorts, axioms and rules (or dependencies between the sorts). System U was proved inconsistent by Jean-Yves Girard in 1972[1] (and the question of consistency of System U− was formulated). This result led to the realization that Martin-Löf's original 1971 type theory was inconsistent, as it allowed the same "Type in Type" behaviour that Girard's paradox exploits.

System U is defined[2]: 352 as a pure type system with three sorts {∗, ◻, △}, two axioms {∗ : ◻, ◻ : △}, and five rules {(∗, ∗), (◻, ∗), (◻, ◻), (△, ◻), (△, ∗)}. System U− is defined the same with the exception of the (△, ∗) rule.

The sorts ∗ and ◻ are conventionally called "Type" and "Kind", respectively; the sort △ doesn't have a specific name. The two axioms describe the containment of types in kinds (∗ : ◻) and kinds in △ (◻ : △). Intuitively, the sorts describe a hierarchy in the nature of the terms. The rules govern the dependencies between the sorts: (∗, ∗) says that values may depend on values (functions), (◻, ∗) allows values to depend on types (polymorphism), (◻, ◻) allows types to depend on types (type operators), and so on.

The definitions of System U and U− allow the assignment of polymorphic kinds to generic constructors in analogy to polymorphic types of terms in classical polymorphic lambda calculi, such as System F. An example of such a generic constructor might be one abstracting over a kind variable k.[2]: 353 This mechanism is sufficient to construct a term with the type ∀p : ∗. p (equivalent to the type ⊥), which implies that every type is inhabited. By the Curry–Howard correspondence, this is equivalent to all logical propositions being provable, which makes the system inconsistent. Girard's paradox is the type-theoretic analogue of Russell's paradox in set theory.
https://en.wikipedia.org/wiki/System_U
In mathematical logic and type theory, the λ-cube (also written lambda cube) is a framework introduced by Henk Barendregt[1] to investigate the different dimensions in which the calculus of constructions is a generalization of the simply typed λ-calculus. Each dimension of the cube corresponds to a new kind of dependency between terms and types. Here, "dependency" refers to the capacity of a term or type to bind a term or type. The respective dimensions of the λ-cube correspond to terms that may depend on types (polymorphism), types that may depend on terms (dependent types), and types that may depend on types (type operators). The different ways to combine these three dimensions yield the 8 vertices of the cube, each corresponding to a different kind of typed system. The λ-cube can be generalized into the concept of a pure type system.

The simplest system found in the λ-cube is the simply typed lambda calculus, also called λ→. In this system, the only way to construct an abstraction is by making a term depend on a term, with the typing rule:

\[ \frac{\Gamma ,\,x:\sigma \;\vdash \;t:\tau }{\Gamma \;\vdash \;\lambda x.t:\sigma \to \tau } \]

In System F (also named λ2 for the "second-order typed lambda calculus")[2] there is another type of abstraction, written with a Λ, that allows terms to depend on types, with the following rule:

\[ \frac{\Gamma \;\vdash \;t:\sigma }{\Gamma \;\vdash \;\Lambda \alpha .t:\Pi \alpha .\sigma }\quad \text{if } \alpha \text{ does not occur free in } \Gamma \]

The terms beginning with a Λ are called polymorphic, as they can be applied to different types to get different functions, similarly to polymorphic functions in ML-like languages. For instance, the polymorphic identity of OCaml has type 'a -> 'a, meaning it can take an argument of any type 'a and return an element of that type. This type corresponds in λ2 to the type Πα. α → α.

In System Fω̲ (written with an underlined ω: the variant with type operators but without polymorphism) a construction is introduced to supply types that depend on other types. This is called a type constructor and provides a way to build "a function with a type as a value".[3] An example of such a type constructor is the type of binary trees with leaves labeled by data of a given type A:

\[ \mathsf{TREE} := \lambda A:*.\,\Pi B.\,(A\to B)\to (B\to B\to B)\to B \]

where "A : ∗" informally means "A is a type". This is a function that takes a type parameter A as an argument and returns the type of TREEs of values of type A. In concrete programming, this feature corresponds to the ability to define type constructors inside the language, rather than considering them as primitives. The previous type constructor roughly corresponds to the following definition of a tree with labeled leaves in OCaml (a plausible reconstruction of the elided snippet):

    type 'a tree = Leaf of 'a | Node of 'a tree * 'a tree

This type constructor can be applied to other types to obtain new types; e.g., applying it to int yields the type of trees of integers. System Fω̲ is generally not used on its own, but is useful to isolate the independent feature of type constructors.[4]

In the λP system, also named λΠ, and closely related to the LF Logical Framework, one has so-called dependent types. These are types that are allowed to depend on terms. The crucial introduction rule of the system is

\[ \frac{\Gamma ,\,x:A\;\vdash \;B:*}{\Gamma \;\vdash \;(\Pi x:A.B):*} \]

where ∗ represents valid types.
The new type constructor Π corresponds via the Curry–Howard isomorphism to a universal quantifier, and the system λP as a whole corresponds to first-order logic with implication as only connective. An example of these dependent types in concrete programming is the type of vectors of a certain length: the length is a term, on which the type depends.

System Fω combines both the Λ constructor of System F and the type constructors from System Fω̲. Thus System Fω provides both terms that depend on types and types that depend on types.

In the calculus of constructions, denoted as λC in the cube or as λPω,[1]: 130 these four features cohabit, so that both types and terms can depend on types and terms. The clear border that exists in λ→ between terms and types is somewhat abolished, as all types except the universal ◻ are themselves terms with a type.

As for all systems based upon the simply typed lambda calculus, all systems in the cube are given in two steps: first, raw terms, together with a notion of β-reduction, and then typing rules that allow to type those terms. The set of sorts is defined as S := {∗, ◻}; sorts are represented with the letter s. There is also a set V of variables, represented by the letters x, y, …. The raw terms of the eight systems of the cube are given by the following syntax:

\[ A ::= x \mid s \mid A~A \mid \lambda x:A.\,A \mid \Pi x:A.\,A \]

with A → B denoting Πx:A.B when x does not occur free in B. The environments, as is usual in typed systems, are given by

\[ \Gamma ::= \emptyset \mid \Gamma ,\,x:A \]

The notion of β-reduction is common to all systems in the cube. It is written →β and given by the rules

\[ \frac{}{(\lambda x:A.B)~C\to _{\beta }B[C/x]} \]
\[ \frac{B\to _{\beta }B'}{\lambda x:A.B\to _{\beta }\lambda x:A.B'} \qquad \frac{A\to _{\beta }A'}{\lambda x:A.B\to _{\beta }\lambda x:A'.B} \]
\[ \frac{B\to _{\beta }B'}{\Pi x:A.B\to _{\beta }\Pi x:A.B'} \qquad \frac{A\to _{\beta }A'}{\Pi x:A.B\to _{\beta }\Pi x:A'.B} \]

Its reflexive, transitive closure is written =β.
The following typing rules are also common to all systems in the cube:

\[ \frac{}{\vdash *:\square } \quad \text{(Axiom)} \]
\[ \frac{\Gamma \vdash A:s}{\Gamma ,\,x:A\vdash x:A}\;x\not \in \Gamma \quad \text{(Start)} \]
\[ \frac{\Gamma \vdash A:B\quad \Gamma \vdash C:s}{\Gamma ,\,x:C\vdash A:B}\;x\not \in \Gamma \quad \text{(Weakening)} \]
\[ \frac{\Gamma \vdash C:\Pi x:A.B\quad \Gamma \vdash D:A}{\Gamma \vdash C\,D:B[D/x]} \quad \text{(Application)} \]
\[ \frac{\Gamma \vdash A:B\quad B=_{\beta }B'\quad \Gamma \vdash B':s}{\Gamma \vdash A:B'} \quad \text{(Conversion)} \]

The difference between the systems is in the pairs of sorts (s₁, s₂) that are allowed in the following two typing rules:

\[ \frac{\Gamma \vdash A:s_{1}\quad \Gamma ,\,x:A\vdash B:s_{2}}{\Gamma \vdash \Pi x:A.B:s_{2}} \quad \text{(Product)} \]
\[ \frac{\Gamma \vdash A:s_{1}\quad \Gamma ,\,x:A\vdash B:C\quad \Gamma ,\,x:A\vdash C:s_{2}}{\Gamma \vdash \lambda x:A.B:\Pi x:A.C} \quad \text{(Abstraction)} \]

The correspondence between the systems and the pairs (s₁, s₂) allowed in the rules is the following:

    λ→      (∗, ∗)
    λP      (∗, ∗), (∗, ◻)
    λ2      (∗, ∗), (◻, ∗)
    λω̲      (∗, ∗), (◻, ◻)
    λP2     (∗, ∗), (∗, ◻), (◻, ∗)
    λPω̲     (∗, ∗), (∗, ◻), (◻, ◻)
    λω      (∗, ∗), (◻, ∗), (◻, ◻)
    λC      (∗, ∗), (∗, ◻), (◻, ∗), (◻, ◻)

Each direction of the cube corresponds to one pair (excluding the pair (∗, ∗) shared by all systems), and in turn each pair corresponds to one possibility of dependency between terms and types, as described above.

A typical derivation that can be obtained is

\[ \alpha :*\vdash \lambda x:\alpha .x:\Pi x:\alpha .\alpha \]

or with the arrow shortcut

\[ \alpha :*\vdash \lambda x:\alpha .x:\alpha \to \alpha \]

closely resembling the identity (of type α) of the usual λ→. Note that all types used must appear in the context, because the only derivation that can be done in an empty context is ⊢ ∗ : ◻. The computing power is quite weak; it corresponds to the extended polynomials (polynomials together with a conditional operator).[5]

In λ2, such terms can be obtained as

\[ \vdash (\lambda \beta :*.\lambda x:\bot .x\beta ):\Pi \beta :*.\bot \to \beta \]

with ⊥ = Πα:∗.α. If one reads Π as a universal quantification, via the Curry–Howard isomorphism, this can be seen as a proof of the principle of explosion. In general, λ2 adds the possibility to have impredicative types such as ⊥, that is, terms quantifying over all types including themselves. The polymorphism also allows the construction of functions that were not constructible in λ→. More precisely, the functions definable in λ2 are those provably total in second-order Peano arithmetic.[6] In particular, all primitive recursive functions are definable.

In λP, the ability to have types depending on terms means one can express logical predicates.
For instance, the following is derivable:

\[ \alpha :*,\,a_{0}:\alpha ,\,p:\alpha \to *,\,q:*\;\vdash \;\lambda z:(\Pi x:\alpha .px\to q).\,\lambda y:(\Pi x:\alpha .px).\,(z\,a_{0})(y\,a_{0})\;:\;(\Pi x:\alpha .px\to q)\to (\Pi x:\alpha .px)\to q \]

which corresponds, via the Curry–Howard isomorphism, to a proof of (∀x:A, Px → Q) → (∀x:A, Px) → Q. From the computational point of view, however, having dependent types does not enhance computational power, only the possibility to express more precise type properties.[7]

The conversion rule is strongly needed when dealing with dependent types, because it allows to perform computation on the terms in the type. For instance, if one has Γ ⊢ A : P((λx.x)y) and Γ ⊢ B : Πx:P(y).C, one needs to apply the conversion rule[a] to obtain Γ ⊢ A : P(y) to be able to type Γ ⊢ B A : C.

In λω, the following operator

\[ AND := \lambda \alpha :*.\lambda \beta :*.\Pi \gamma :*.(\alpha \to \beta \to \gamma )\to \gamma \]

is definable, that is ⊢ AND : ∗ → ∗ → ∗. The derivation

\[ \alpha :*,\beta :*\vdash \Pi \gamma :*.(\alpha \to \beta \to \gamma )\to \gamma :* \]

can be obtained already in λ2; however, the polymorphic AND can only be defined if the rule (◻, ∗) is also present. From a computing point of view, λω is extremely strong, and has been considered as a basis for programming languages.[10]

The calculus of constructions has both the predicate expressiveness of λP and the computational power of λω, hence why λC is also called λPω;[1]: 130 it is very powerful, both on the logical side and on the computational side.

The system Automath is similar to λ2 from a logical point of view. The ML-like languages, from a typing point of view, lie somewhere between λ→ and λ2, as they admit a restricted kind of polymorphic types, that is, the types in prenex normal form. However, because they feature some recursion operators, their computing power is greater than that of λ2.[7] The Coq system is based on an extension of λC with a linear hierarchy of universes, rather than only one untypable ◻, and the ability to construct inductive types.

Pure type systems can be seen as a generalization of the cube, with an arbitrary set of sorts, axioms, product and abstraction rules. Conversely, the systems of the lambda cube can be expressed as pure type systems with two sorts {∗, ◻}, the only axiom ∗ : ◻, and a set of rules R such that {(∗, ∗, ∗)} ⊆ R ⊆ {(∗, ∗, ∗), (∗, ◻, ◻), (◻, ∗, ∗), (◻, ◻, ◻)}.[1]

Via the Curry–Howard isomorphism, there is a one-to-one correspondence between the systems in the lambda cube and logical systems.[1] All the logics are implicative (i.e. the only connectives are → and ∀); however, one can define other connectives such as ∧ or ¬ in an impredicative way in second and higher order logics (see the sketch below).
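A short formal sketch of the impredicative encodings just mentioned; these are the standard second-order definitions, not taken verbatim from this article:

\[
A \wedge B \;:=\; \Pi \gamma :*.\;(A \to B \to \gamma) \to \gamma,
\qquad
\neg A \;:=\; A \to \bot,
\qquad
\bot \;:=\; \Pi \alpha :*.\;\alpha
\]

The conjunction encoding is exactly the AND operator defined above, applied to propositions A and B; negation reuses the impredicative ⊥ from the λ2 discussion.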
In the weak higher order logics, there are variables for higher order predicates, but no quantification on those can be done. All systems in the cube enjoy a number of good metatheoretic properties, among them the Church–Rosser property of β-reduction, subject reduction, and uniqueness of types; all of these can be proven on generic pure type systems.[11] Any term well-typed in a system of the cube is strongly normalizing,[1] although this property is not common to all pure type systems. No system in the cube is Turing complete.[7]

Subtyping, however, is not represented in the cube, even though systems like $F^{\omega}_{<:}$, known as higher-order bounded quantification, which combine subtyping and polymorphism, are of practical interest, and can be further generalized to bounded type operators. Further extensions to $F^{\omega}_{<:}$ allow the definition of purely functional objects; these systems were generally developed after the lambda cube paper was published.[12]

The idea of the cube is due to the mathematician Henk Barendregt (1991). The framework of pure type systems generalizes the lambda cube in the sense that all corners of the cube, as well as many other systems, can be represented as instances of this general framework.[13] This framework predates the lambda cube by a couple of years. In his 1991 paper, Barendregt also defines the corners of the cube in this framework.
https://en.wikipedia.org/wiki/Lambda_cube
In mathematics, even and odd ordinals extend the concept of parity from the natural numbers to the ordinal numbers. They are useful in some transfinite induction proofs. The literature contains a few equivalent definitions of the parity of an ordinal α:

Unlike the case of even integers, one cannot go on to characterize even ordinals as ordinal numbers of the form β2 = β + β. Ordinal multiplication is not commutative, so in general 2β ≠ β2. In fact, the even ordinal ω + 4 cannot be expressed as β + β, and the ordinal number is not even.

A simple application of ordinal parity is the idempotence law for cardinal addition (given the well-ordering theorem). Given an infinite cardinal κ, or generally any limit ordinal κ, κ is order-isomorphic to both its subset of even ordinals and its subset of odd ordinals. Hence one has the cardinal sum κ + κ = κ.[2][7]
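One standard characterization (our phrasing, consistent with the examples above): every ordinal α can be written uniquely as α = λ + n with λ a limit ordinal or zero and n a natural number, and the parity of α is the parity of n:

\[
\alpha = \lambda + n, \qquad
\alpha \text{ even} \iff n \equiv 0 \pmod 2, \qquad
\alpha \text{ odd} \iff n \equiv 1 \pmod 2
\]

On this definition ω + 4 is even, even though, as noted above, it is not of the form β + β.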
https://en.wikipedia.org/wiki/Even_and_odd_ordinals
In mathematics, an order topology is a specific topology that can be defined on any totally ordered set. It is a natural generalization of the topology of the real numbers to arbitrary totally ordered sets.

If X is a totally ordered set, the order topology on X is generated by the subbase of "open rays" {x | x < a} and {x | x > b} for all a, b in X. Provided X has at least two elements, this is equivalent to saying that the open intervals (a, b) = {x | a < x < b} together with the above rays form a base for the order topology. The open sets in X are the sets that are a union of (possibly infinitely many) such open intervals and rays.

A topological space X is called orderable or linearly orderable[1] if there exists a total order on its elements such that the order topology induced by that order and the given topology on X coincide. The order topology makes X into a completely normal Hausdorff space. The standard topologies on R, Q, Z, and N are the order topologies.

If Y is a subset of X, X a totally ordered set, then Y inherits a total order from X. The set Y therefore has an order topology, the induced order topology. As a subset of X, Y also has a subspace topology. The subspace topology is always at least as fine as the induced order topology, but they are not in general the same.

For example, consider the subset Y = {−1} ∪ {1/n}n∈N of the rationals. Under the subspace topology, the singleton set {−1} is open in Y, but under the induced order topology, any open set containing −1 must contain all but finitely many members of the space.

Though the subspace topology of Y = {−1} ∪ {1/n}n∈N in the section above is shown not to be generated by the induced order on Y, it is nonetheless an order topology on Y; indeed, in the subspace topology every point is isolated (i.e., singleton {y} is open in Y for every y in Y), so the subspace topology is the discrete topology on Y (the topology in which every subset of Y is open), and the discrete topology on any set is an order topology. To define a total order on Y that generates the discrete topology on Y, simply modify the induced order on Y by defining −1 to be the greatest element of Y and otherwise keeping the same order for the other points, so that in this new order (call it say <₁) we have 1/n <₁ −1 for all n ∈ N. Then, in the order topology on Y generated by <₁, every point of Y is isolated in Y.

We wish to define here a subset Z of a linearly ordered topological space X such that no total order on Z generates the subspace topology on Z, so that the subspace topology will not be an order topology even though it is the subspace topology of a space whose topology is an order topology. Let Z = {−1} ∪ (0, 1) in the real line. The same argument as before shows that the subspace topology on Z is not equal to the induced order topology on Z, but one can show that the subspace topology on Z cannot be equal to any order topology on Z.

An argument follows. Suppose by way of contradiction that there is some strict total order < on Z such that the order topology generated by < is equal to the subspace topology on Z (note that we are not assuming that < is the induced order on Z, but rather an arbitrarily given total order on Z that generates the subspace topology). Let M = Z \ {−1} = (0, 1); then M is connected, so M is dense on itself and has no gaps, in regards to <. If −1 is not the smallest or the largest element of Z, then the rays (−∞, −1) and (−1, ∞) separate M, a contradiction. Assume without loss of generality that −1 is the smallest element of Z. Since {−1} is open in Z, there is some point p in M such that the interval (−1, p) is empty, so p is the minimum of M.
Then M \ {p} = (0, p) ∪ (p, 1) is not connected with respect to the subspace topology inherited from R. On the other hand, the subspace topology of M \ {p} inherited from the order topology of Z coincides with the order topology of M \ {p} induced by <, which is connected since M \ {p} has no gaps and is dense in itself. This is a contradiction.

Several variants of the order topology can be given: the right order topology on X is the topology whose open sets consist of the whole space, the empty set, and the rays of the form {x : x > a}; the left order topology is defined symmetrically from the rays {x : x < a}. These topologies naturally arise when working with semicontinuous functions, in that a real-valued function on a topological space is lower semicontinuous if and only if it is continuous when the reals are equipped with the right order topology.[3] The (natural) compact open topology on the resulting set of continuous functions is sometimes referred to as the semicontinuous topology.[4] Additionally, these topologies can be used to give counterexamples in general topology. For example, the left or right order topology on a bounded set provides an example of a compact space that is not Hausdorff. The left order topology is the standard topology used for many set-theoretic purposes on a Boolean algebra.[clarification needed]

For any ordinal number λ one can consider the spaces of ordinal numbers [0, λ) and [0, λ] together with the natural order topology. These spaces are called ordinal spaces. (Note that in the usual set-theoretic construction of ordinal numbers we have λ = [0, λ) and λ + 1 = [0, λ].) Obviously, these spaces are mostly of interest when λ is an infinite ordinal; for finite ordinals, the order topology is simply the discrete topology. When λ = ω (the first infinite ordinal), the space [0, ω) is just N with the usual (still discrete) topology, while [0, ω] is the one-point compactification of N.

Of particular interest is the case when λ = ω1, the set of all countable ordinals and the first uncountable ordinal. The element ω1 is a limit point of the subset [0, ω1) even though no sequence of elements in [0, ω1) has the element ω1 as its limit. In particular, [0, ω1] is not first-countable. The subspace [0, ω1) is first-countable, however, since the only point in [0, ω1] without a countable local base is ω1. Some further properties include: neither [0, ω1) nor [0, ω1] is separable or second-countable, and while [0, ω1] is compact, [0, ω1) is only sequentially compact and countably compact, not compact.

Any ordinal number can be viewed as a topological space by endowing it with the order topology (indeed, ordinals are well-ordered, so in particular totally ordered). Unless otherwise specified, this is the usual topology given to ordinals. Moreover, if we are willing to accept a proper class as a topological space, then we may similarly view the class of all ordinals as a topological space with the order topology.

The set of limit points of an ordinal α is precisely the set of limit ordinals less than α. Successor ordinals (and zero) less than α are isolated points in α. In particular, the finite ordinals and ω are discrete topological spaces, and no ordinal beyond that is discrete. The ordinal α is compact as a topological space if and only if α is either a successor ordinal or zero.

The closed sets of a limit ordinal α are just the closed sets in the sense that we have already defined, namely, those that contain a limit ordinal whenever they contain all sufficiently large ordinals below it. Any ordinal is, of course, an open subset of any larger ordinal. We can also define the topology on the ordinals in the following inductive way: 0 is the empty topological space, α + 1 is obtained by taking the one-point compactification of α, and for δ a limit ordinal, δ is equipped with the inductive limit topology. Note that if α is a successor ordinal, then α is compact, in which case its one-point compactification α + 1 is the disjoint union of α and a point.
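Since the paragraph above leans on the equivalence between lower semicontinuity and continuity into the right order topology, a worked instance may help; the two indicator functions are my own examples, not the article's:

```latex
% Lower semicontinuity = openness of every ray preimage f^{-1}((a,\infty)):
f=\chi_{(0,\infty)}:\quad f^{-1}\bigl((a,\infty)\bigr)=
\begin{cases}\mathbb{R},&a<0,\\(0,\infty),&0\le a<1,\\\emptyset,&a\ge1,\end{cases}
\quad\text{all open}\ \Rightarrow\ f\ \text{is lower semicontinuous;}
\\[4pt]
g=\chi_{[0,\infty)}:\quad g^{-1}\bigl((\tfrac12,\infty)\bigr)=[0,\infty)
\ \text{is not open}\ \Rightarrow\ g\ \text{is not.}
```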
As topological spaces, all the ordinals are Hausdorff and even normal. They are also totally disconnected (connected components are points), scattered (every non-empty subspace has an isolated point; in this case, just take the smallest element), and zero-dimensional (the topology has a clopen basis: here, write an open interval (β, γ) as the union of the clopen intervals (β, γ′ + 1) = [β + 1, γ′] for γ′ < γ). However, they are not extremally disconnected in general (there are open sets, for example the even numbers in ω, whose closure is not open).

The topological spaces ω1 and its successor ω1 + 1 are frequently used as textbook examples of uncountable topological spaces. For example, in the topological space ω1 + 1, the element ω1 is in the closure of the subset ω1 even though no sequence of elements in ω1 has the element ω1 as its limit: an element in ω1 is a countable set; for any sequence of such sets, the union of these sets is the union of countably many countable sets, so still countable; this union is an upper bound of the elements of the sequence, and therefore of the limit of the sequence, if it has one.

The space ω1 is first-countable but not second-countable, and ω1 + 1 has neither of these two properties, despite being compact. It is also worthy of note that any continuous function from ω1 to R (the real line) is eventually constant: so the Stone–Čech compactification of ω1 is ω1 + 1, just as its one-point compactification (in sharp contrast to ω, whose Stone–Čech compactification is much larger than ω).

If α is a limit ordinal and X is a set, an α-indexed sequence of elements of X merely means a function from α to X. This concept, a transfinite sequence or ordinal-indexed sequence, is a generalization of the concept of a sequence. An ordinary sequence corresponds to the case α = ω.

If X is a topological space, we say that an α-indexed sequence of elements of X converges to a limit x when it converges as a net, in other words, when given any neighborhood U of x there is an ordinal β < α such that xι is in U for all ι ≥ β.

Ordinal-indexed sequences are more powerful than ordinary (ω-indexed) sequences to determine limits in topology: for example, ω1 is a limit point of ω1 + 1 (because it is a limit ordinal), and, indeed, it is the limit of the ω1-indexed sequence which maps any ordinal less than ω1 to itself; however, it is not the limit of any ordinary (ω-indexed) sequence in ω1, since any such limit is less than or equal to the union of its elements, which is a countable union of countable sets, hence itself countable.

However, ordinal-indexed sequences are not powerful enough to replace nets (or filters) in general: for example, on the Tychonoff plank (the product space (ω1 + 1) × (ω + 1)), the corner point (ω1, ω) is a limit point (it is in the closure) of the open subset ω1 × ω, but it is not the limit of an ordinal-indexed sequence.

This article incorporates material from Order topology on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
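For ordinals below ω² the isolated-point description is simple enough to compute with, writing each such ordinal as ω·a + b; the encoding and helper names below are ad hoc:

```python
# Ordinals below omega^2 encoded as pairs (a, b) meaning omega*a + b.
# Successor ordinals and 0 are isolated in the order topology; limit
# ordinals (b == 0, a > 0) are not, since every interval (beta, gamma]
# around them contains infinitely many smaller ordinals.
def is_isolated(o):
    a, b = o
    return b > 0 or (a, b) == (0, 0)

def is_limit(o):
    a, b = o
    return b == 0 and a > 0

print(is_isolated((0, 5)))   # True:  5 is a successor ordinal
print(is_isolated((1, 0)))   # False: omega is a limit ordinal
print(is_limit((2, 0)))      # True:  omega*2
```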
https://en.wikipedia.org/wiki/Order_topology#Ordinal_space
In mathematics, the surreal number system is a totally ordered proper class containing not only the real numbers but also infinite and infinitesimal numbers, respectively larger or smaller in absolute value than any positive real number. Research on the Go endgame by John Horton Conway led to the original definition and construction of surreal numbers. Conway's construction was introduced in Donald Knuth's 1974 book Surreal Numbers: How Two Ex-Students Turned On to Pure Mathematics and Found Total Happiness.

The surreals share many properties with the reals, including the usual arithmetic operations (addition, subtraction, multiplication, and division); as such, they form an ordered field.[a] If formulated in von Neumann–Bernays–Gödel set theory, the surreal numbers are a universal ordered field in the sense that all other ordered fields, such as the rationals, the reals, the rational functions, the Levi-Civita field, the superreal numbers (including the hyperreal numbers), can be realized as subfields of the surreals.[1] The surreals also contain all transfinite ordinal numbers; the arithmetic on them is given by the natural operations. It has also been shown (in von Neumann–Bernays–Gödel set theory) that the maximal class hyperreal field is isomorphic to the maximal class surreal field.

Research on the Go endgame by John Horton Conway led to the original definition and construction of the surreal numbers.[2] Conway's construction was introduced in Donald Knuth's 1974 book Surreal Numbers: How Two Ex-Students Turned On to Pure Mathematics and Found Total Happiness. In his book, which takes the form of a dialogue, Knuth coined the term surreal numbers for what Conway had called simply numbers.[3] Conway later adopted Knuth's term, and used surreals for analyzing games in his 1976 book On Numbers and Games.

A separate route to defining the surreals began in 1907, when Hans Hahn introduced Hahn series as a generalization of formal power series, and Felix Hausdorff introduced certain ordered sets called ηα-sets for ordinals α and asked if it was possible to find a compatible ordered group or field structure. In 1962, Norman Alling used a modified form of Hahn series to construct such ordered fields associated to certain ordinals α and, in 1987, he showed that taking α to be the class of all ordinals in his construction gives a class that is an ordered field isomorphic to the surreal numbers.[4]

If the surreals are considered as 'just' a proper-class-sized real closed field, Alling's 1962 paper handles the case of strongly inaccessible cardinals, which can naturally be considered as proper classes by cutting off the cumulative hierarchy of the universe one stage above the cardinal, and Alling accordingly deserves much credit for the discovery/invention of the surreals in this sense. There is an important additional field structure on the surreals that is not visible through this lens, however, namely the notion of a 'birthday' and the corresponding natural description of the surreals as the result of a cut-filling process along their birthdays given by Conway. This additional structure has become fundamental to a modern understanding of the surreal numbers, and Conway is thus given credit for discovering the surreals as we know them today; Alling himself gives Conway full credit in a 1985 paper preceding his book on the subject.[5]

In the context of surreal numbers, an ordered pair of sets L and R, which is written as (L, R) in many other mathematical contexts, is instead written {L | R}, including the extra space adjacent to each brace.
When a set is empty, it is often simply omitted. When a set is explicitly described by its elements, the pair of braces that encloses the list of elements is often omitted. When a union of sets is taken, the operator that represents that is often a comma. For example, instead of (L1 ∪ L2 ∪ {0, 1, 2}, ∅), which is common notation in other contexts, we typically write {L1, L2, 0, 1, 2 | }.

In the Conway construction,[6] the surreal numbers are constructed in stages, along with an ordering ≤ such that for any two surreal numbers a and b, a ≤ b or b ≤ a. (Both may hold, in which case a and b are equivalent and denote the same number.) Each number is formed from an ordered pair of subsets of numbers already constructed: given subsets L and R of numbers such that all the members of L are strictly less than all the members of R, the pair {L | R} represents a number intermediate in value between all the members of L and all the members of R.

Different subsets may end up defining the same number: {L | R} and {L′ | R′} may define the same number even if L ≠ L′ and R ≠ R′. (A similar phenomenon occurs when rational numbers are defined as quotients of integers: 1/2 and 2/4 are different representations of the same rational number.) So strictly speaking, the surreal numbers are equivalence classes of representations of the form {L | R} that designate the same number.

In the first stage of construction, there are no previously existing numbers, so the only representation must use the empty set: { | }. This representation, where L and R are both empty, is called 0. Subsequent stages yield forms like

{ 0 | } = 1, { 1 | } = 2, { 2 | } = 3

and

{ | 0 } = −1, { | −1 } = −2, { | −2 } = −3.

The integers are thus contained within the surreal numbers. (The above identities are definitions, in the sense that the right-hand side is a name for the left-hand side. That the names are actually appropriate will be evident when the arithmetic operations on surreal numbers are defined, as in the section below.) Similarly, representations such as

{ 0 | 1 } = 1/2, { 0 | 1/2 } = 1/4, { 1/2 | 1 } = 3/4

arise, so that the dyadic rationals (rational numbers whose denominators are powers of 2) are contained within the surreal numbers.

After an infinite number of stages, infinite subsets become available, so that any real number a can be represented by {La | Ra}, where La is the set of all dyadic rationals less than a and Ra is the set of all dyadic rationals greater than a (reminiscent of a Dedekind cut). Thus the real numbers are also embedded within the surreals. There are also representations like

ω = { 1, 2, 3, 4, … | } and ε = { 0 | 1, 1/2, 1/4, 1/8, … },

where ω is a transfinite number greater than all integers and ε is an infinitesimal greater than 0 but less than any positive real number. Moreover, the standard arithmetic operations (addition, subtraction, multiplication, and division) can be extended to these non-real numbers in a manner that turns the collection of surreal numbers into an ordered field, so that one can talk about 2ω or ω − 1 and so forth.

Surreal numbers are constructed inductively as equivalence classes of pairs of sets of surreal numbers, restricted by the condition that each element of the first set is smaller than each element of the second set. The construction consists of three interdependent parts: the construction rule, the comparison rule and the equivalence rule.

A form is a pair of sets of surreal numbers, called its left set and its right set. A form with left set L and right set R is written {L | R}. When L and R are given as lists of elements, the braces around them are omitted. Either or both of the left and right set of a form may be the empty set. The form { { } | { } } with both left and right set empty is also written { | }.
Construction rule: A form {L | R} is numeric if L and R are sets of numbers and every member of L is strictly less than every member of R, according to the ordering given by the comparison rule below.

The numeric forms are placed in equivalence classes; each such equivalence class is a surreal number. The elements of the left and right sets of a form are drawn from the universe of the surreal numbers (not of forms, but of their equivalence classes).

Equivalence rule: Two numeric forms x and y are forms of the same number (lie in the same equivalence class) if and only if both x ≤ y and y ≤ x.

An ordering relationship must be antisymmetric, i.e., it must have the property that x = y (i.e., x ≤ y and y ≤ x are both true) only when x and y are the same object. This is not the case for surreal number forms, but is true by construction for surreal numbers (equivalence classes). The equivalence class containing { | } is labeled 0; in other words, { | } is a form of the surreal number 0.

The recursive definition of surreal numbers is completed by defining comparison: given numeric forms x = {XL | XR} and y = {YL | YR}, x ≤ y if and only if both: there is no xL ∈ XL with y ≤ xL (every element in the left part of x is strictly smaller than y), and there is no yR ∈ YR with yR ≤ x (every element in the right part of y is strictly larger than x).

Surreal numbers can be compared to each other (or to numeric forms) by choosing a numeric form from its equivalence class to represent each surreal number. This group of definitions is recursive, and requires some form of mathematical induction to define the universe of objects (forms and numbers) that occur in them. The only surreal numbers reachable via finite induction are the dyadic fractions; a wider universe is reachable given some form of transfinite induction.

Induction rule: There is a generation S0 = { 0 }, in which 0 consists of the single form { | }, and for every ordinal n > 0, Sn is the set of all surreal numbers generated by the construction rule from subsets of ⋃i<n Si.

The base case is actually a special case of the induction rule, with 0 taken as a label for the "least ordinal". Since there exists no Si with i < 0, the expression ⋃i<0 Si is the empty set; the only subset of the empty set is the empty set, and therefore S0 consists of a single surreal form { | } lying in a single equivalence class 0.

For every finite ordinal number n, Sn is well-ordered by the ordering induced by the comparison rule on the surreal numbers.

The first iteration of the induction rule produces the three numeric forms { | 0 } < { | } < { 0 | } (the form { 0 | 0 } is non-numeric because 0 ≤ 0). The equivalence class containing { 0 | } is labeled 1 and the equivalence class containing { | 0 } is labeled −1. These three labels have a special significance in the axioms that define a ring; they are the additive identity (0), the multiplicative identity (1), and the additive inverse of 1 (−1). The arithmetic operations defined below are consistent with these labels.

For every i < n, since every valid form in Si is also a valid form in Sn, all of the numbers in Si also appear in Sn (as supersets of their representation in Si). (The set union expression appears in our construction rule, rather than the simpler form Sn−1, so that the definition also makes sense when n is a limit ordinal.) Numbers in Sn that are a superset of some number in Si are said to have been inherited from generation i. The smallest value of α for which a given surreal number appears in Sα is called its birthday. For example, the birthday of 0 is 0, and the birthday of −1 is 1.

A second iteration of the construction rule yields the following ordering of equivalence classes:

{ | −1 } = { | −1, 0 } = { | −1, 1 } = { | −1, 0, 1 }
< { | 0 } = { | 0, 1 }
< { −1 | 0 } = { −1 | 0, 1 }
< { | } = { −1 | } = { | 1 } = { −1 | 1 }
< { 0 | 1 } = { −1, 0 | 1 }
< { 0 | } = { −1, 0 | }
< { 1 | } = { 0, 1 | } = { −1, 1 | } = { −1, 0, 1 | }

Comparison of these equivalence classes is consistent, irrespective of the choice of form. Three observations follow. First, S2 contains four new surreal numbers: two with extremal forms ({ | −1, 0, 1 } contains all numbers from previous generations in its right set, and { −1, 0, 1 | } contains them all in its left set), and two whose forms partition the numbers from previous generations into two non-empty sets. Second, every surreal number x that existed in the previous generation also exists in this generation and gains at least one new form: the partition of all numbers other than x from previous generations into a left set (those less than x) and a right set (those greater than x). Third, the equivalence class of a number depends only on the maximal element of its left set and the minimal element of its right set.

The informal interpretations of { 1 | } and { | −1 } are "the number just after 1" and "the number just before −1" respectively; their equivalence classes are labeled 2 and −2. The informal interpretations of { 0 | 1 } and { −1 | 0 } are "the number halfway between 0 and 1" and "the number halfway between −1 and 0" respectively; their equivalence classes are labeled 1/2 and −1/2. These labels will also be justified by the rules for surreal addition and multiplication below.
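The construction, equivalence and comparison rules are directly executable for forms with finite left and right sets. The following sketch is mine (class and helper names are ad hoc), and handles only such finite forms:

```python
# Conway's comparison rule on finite surreal-number forms.
class Form:
    def __init__(self, left=(), right=()):
        self.L = tuple(left)    # left set
        self.R = tuple(right)   # right set
    def __repr__(self):
        return "{ %s | %s }" % (", ".join(map(repr, self.L)),
                                ", ".join(map(repr, self.R)))

def leq(x, y):
    """x <= y iff no xL in XL has y <= xL, and no yR in YR has yR <= x."""
    return (all(not leq(y, xl) for xl in x.L) and
            all(not leq(yr, x) for yr in y.R))

def same_number(x, y):
    """Equivalence rule: x and y are forms of the same number."""
    return leq(x, y) and leq(y, x)

def numeric(x):
    """Construction rule: every member of L must be < every member of R."""
    return all(not leq(r, l) for l in x.L for r in x.R)

zero = Form()                 # { | }   = 0
one  = Form(left=[zero])      # { 0 | } = 1
mone = Form(right=[zero])     # { | 0 } = -1

print(leq(zero, zero))                    # True: 0 <= 0 (and 0 >= 0)
print(leq(mone, one), leq(one, mone))     # True False: -1 < 1
print(numeric(Form([zero], [zero])))      # False: { 0 | 0 } is non-numeric
```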
The equivalence classes at each stage n of induction may be characterized by their n-complete forms (each containing as many elements as possible of previous generations in its left and right sets). Either this complete form contains every number from previous generations in its left or right set, in which case this is the first generation in which this number occurs; or it contains all numbers from previous generations but one, in which case it is a new form of this one number. We retain the labels from the previous generation for these "old" numbers, and write the ordering above using the old and new labels:

−2 < −1 < −1/2 < 0 < 1/2 < 1 < 2.

The third observation extends to all surreal numbers with finite left and right sets. (For infinite left or right sets, this is valid in an altered form, since infinite sets might not contain a maximal or minimal element.) The number { 1, 2 | 5, 8 } is therefore equivalent to { 2 | 5 }; one can establish that these are forms of 3 by using the birthday property, which is a consequence of the rules above.

A form x = {L | R} occurring in generation n represents a number inherited from an earlier generation i < n if and only if there is some number in Si that is greater than all elements of L and less than all elements of R. (In other words, if L and R are already separated by a number created at an earlier stage, then x does not represent a new number but one already constructed.) If x represents a number from any generation earlier than n, there is a least such generation i, and exactly one number c with this least i as its birthday that lies between L and R; x is a form of this c. In other words, it lies in the equivalence class in Sn that is a superset of the representation of c in generation i.

The addition, negation (additive inverse), and multiplication of surreal number forms x = {XL | XR} and y = {YL | YR} are defined by three recursive formulas.

Negation of a given number x = {XL | XR} is defined by

−x = −{XL | XR} = { −XR | −XL },

where the negation of a set S of numbers is given by the set of the negated elements of S: −S = { −s : s ∈ S }.

This formula involves the negation of the surreal numbers appearing in the left and right sets of x, which is to be understood as the result of choosing a form of the number, evaluating the negation of this form, and taking the equivalence class of the resulting form. This makes sense only if the result is the same, irrespective of the choice of form of the operand. This can be proved inductively using the fact that the numbers occurring in XL and XR are drawn from generations earlier than that in which the form x first occurs, and observing the special case −0 = −{ | } = { | } = 0.

The definition of addition is also a recursive formula:

x + y = {XL | XR} + {YL | YR} = { XL + y, x + YL | XR + y, x + YR },

where X + y = { x′ + y : x′ ∈ X } and x + Y = { x + y′ : y′ ∈ Y }.

This formula involves sums of one of the original operands and a surreal number drawn from the left or right set of the other.
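For finite forms the birthday property amounts to a computable rule: the value of { L | R } is the earliest-born dyadic rational strictly between max(L) and min(R). The sketch below reuses Form and zero from the previous sketch; all other names are mine, and only finite numeric forms are handled:

```python
# Evaluate a finite numeric form via the birthday/simplicity rule.
import math
from fractions import Fraction

def simplest_between(lo, hi):
    """Earliest-birthday dyadic q with lo < q < hi (None = unbounded)."""
    if (lo is None or lo < 0) and (hi is None or hi > 0):
        return Fraction(0)                      # 0 is the simplest number
    if lo is not None and lo >= 0:              # positive side
        n = Fraction(math.floor(lo) + 1)        # smallest integer > lo
        if hi is None or n < hi:
            return n
    else:                                       # negative side (hi <= 0)
        n = Fraction(math.ceil(hi) - 1)         # largest integer < hi
        if lo is None or n > lo:
            return n
    k = 1
    while True:                                 # first dyadic m/2^k inside
        m = math.floor(lo * 2 ** k) + 1
        if Fraction(m, 2 ** k) < hi:
            return Fraction(m, 2 ** k)
        k += 1

def value(x):
    lo = max((value(l) for l in x.L), default=None)
    hi = min((value(r) for r in x.R), default=None)
    return simplest_between(lo, hi)

ints = {0: zero}
for n in range(1, 9):
    ints[n] = Form(left=[ints[n - 1]])          # n = { n-1 | }

a = Form(left=[ints[1], ints[2]], right=[ints[5], ints[8]])
b = Form(left=[ints[2]], right=[ints[5]])
print(value(a), value(b))                       # 3 3: both are forms of 3
```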
It can be proved inductively with the special cases:

0 + 0 = { | } + { | } = { | } = 0
x + 0 = x + { | } = { XL + 0 | XR + 0 } = { XL | XR } = x
0 + y = { | } + y = { 0 + YL | 0 + YR } = { YL | YR } = y

For example, 1/2 + 1/2 = { 0 | 1 } + { 0 | 1 } = { 1/2 | 3/2 }, which by the birthday property is a form of 1. This justifies the label used in the previous section.

Subtraction is defined with addition and negation:

x − y = {XL | XR} + { −YR | −YL } = { XL − y, x − YR | XR − y, x − YL }.

Multiplication can be defined recursively as well, beginning from the special cases involving 0, the multiplicative identity 1, and its additive inverse −1:

xy = {XL | XR}·{YL | YR} = { XL·y + x·YL − XL·YL, XR·y + x·YR − XR·YR | XL·y + x·YR − XL·YR, x·YL + XR·y − XR·YL }

The formula contains arithmetic expressions involving the operands and their left and right sets, such as the expression XR·y + x·YR − XR·YR that appears in the left set of the product of x and y. This is understood as { x′y + xy′ − x′y′ : x′ ∈ XR, y′ ∈ YR }, the set of numbers generated by picking all possible combinations of members of XR and YR and substituting them into the expression.

For example, to show that the square of 1/2 is 1/4:

1/2 · 1/2 = { 0 | 1 } · { 0 | 1 } = { 0·(1/2) + (1/2)·0 − 0·0, 1·(1/2) + (1/2)·1 − 1·1 | 0·(1/2) + (1/2)·1 − 0·1, (1/2)·0 + 1·(1/2) − 1·0 } = { 0 | 1/2 },

which by the birthday property is a form of 1/4.

The definition of division is done in terms of the reciprocal and multiplication: x/y = x · (1/y), where[6]: 21

1/y = { 0, (1 + (yR − y)(1/y)L)/yR, (1 + (yL − y)(1/y)R)/yL | (1 + (yL − y)(1/y)L)/yL, (1 + (yR − y)(1/y)R)/yR }

for positive y; here (1/y)L and (1/y)R range over the members of the left and right sets of 1/y that have already been found. Only positive yL are permitted in the formula, with any nonpositive terms being ignored (and yR are always positive). This formula involves not only recursion in terms of being able to divide by numbers from the left and right sets of y, but also recursion over the members of the left and right sets of 1/y itself. 0 is always a member of the left set of 1/y, and that can be used to find more terms in a recursive fashion. For example, if y = 3 = { 2 | }, then we know a left term of 1/3 will be 0. This in turn means (1 + (2 − 3)·0)/2 = 1/2 is a right term. This means (1 + (2 − 3)·(1/2))/2 = 1/4 is a left term. This means (1 + (2 − 3)·(1/4))/2 = 3/8 will be a right term. Continuing, this gives

1/3 = { 0, 1/4, 5/16, … | 1/2, 3/8, … }

For negative y, 1/y is given by 1/y = −(1/(−y)). If y = 0, then 1/y is undefined.
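Negation, addition and multiplication translate directly into code for finite forms; continuing the sketches above (all helper names mine), the two worked examples can be checked mechanically:

```python
# Conway's recursive arithmetic on finite forms (division omitted:
# its recursion also runs over the left/right sets of 1/y itself).
def neg(x):
    return Form([neg(r) for r in x.R], [neg(l) for l in x.L])

def add(x, y):
    return Form([add(xl, y) for xl in x.L] + [add(x, yl) for yl in y.L],
                [add(xr, y) for xr in x.R] + [add(x, yr) for yr in y.R])

def mul(x, y):
    def part(xs, ys):   # { u*y + x*v - u*v : u in xs, v in ys }
        return [add(add(mul(u, y), mul(x, v)), neg(mul(u, v)))
                for u in xs for v in ys]
    return Form(part(x.L, y.L) + part(x.R, y.R),
                part(x.L, y.R) + part(x.R, y.L))

half    = Form(left=[zero], right=[one])      # { 0 | 1 }   = 1/2
quarter = Form(left=[zero], right=[half])     # { 0 | 1/2 } = 1/4
print(same_number(add(half, half), one))      # True: 1/2 + 1/2 is a form of 1
print(same_number(mul(half, half), quarter))  # True: (1/2)^2 is a form of 1/4
```

The reciprocal recursion for 1/3 can likewise be replayed with plain Fractions, reproducing the left and right members quoted above (y = 3, with the single left option yL = 2):

```python
from fractions import Fraction

y, yL = Fraction(3), Fraction(2)
left, right = [Fraction(0)], []
for _ in range(2):
    right.append((1 + (yL - y) * left[-1]) / yL)   # new right member
    left.append((1 + (yL - y) * right[-1]) / yL)   # new left member
print(left, right)   # [0, 1/4, 5/16] [1/2, 3/8]
```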
It can be shown that the definitions of negation, addition and multiplication are consistent, in the sense that the result does not depend on the choice of forms for the operands, so that the operations are well defined on the equivalence classes (the surreal numbers themselves). With these rules one can now verify that the numbers found in the first few generations were properly labeled. The construction rule is repeated to obtain more generations of surreals.

For each natural number (finite ordinal) n, all numbers generated in Sn are dyadic fractions, i.e., can be written as an irreducible fraction a/2^b, where a and b are integers and 0 ≤ b < n.

The set of all surreal numbers that are generated in some Sn for finite n may be denoted as S∗ = ⋃n∈N Sn. One may form the three classes

S0 = { 0 }
S+ = { x ∈ S∗ : x > 0 }
S− = { x ∈ S∗ : x < 0 }

of which S∗ is the union. No individual Sn is closed under addition and multiplication (except S0), but S∗ is; it is the subring of the rationals consisting of all dyadic fractions.

There are infinite ordinal numbers β for which the set of surreal numbers with birthday less than β is closed under the different arithmetic operations.[7] For any ordinal α, the set of surreal numbers with birthday less than β = ω^α (using powers of ω) is closed under addition and forms a group; for birthday less than ω^(ω^α) it is closed under multiplication and forms a ring;[b] and for birthday less than an (ordinal) epsilon number εα it is closed under multiplicative inverse and forms a field. The latter sets are also closed under the exponential function as defined by Kruskal and Gonshor.[7][8]: ch. 10

However, it is always possible to construct a surreal number that is greater than any member of a set of surreals (by including the set on the left side of the constructor), and thus the collection of surreal numbers is a proper class. With their ordering and algebraic operations they constitute an ordered field, with the caveat that they do not form a set. In fact it is the biggest ordered field, in that every ordered field is a subfield of the surreal numbers.[1] The class of all surreal numbers is denoted by the symbol No.

Define Sω as the set of all surreal numbers generated by the construction rule from subsets of S∗. (This is the same inductive step as before, since the ordinal number ω is the smallest ordinal that is larger than all natural numbers; however, the set union appearing in the inductive step is now an infinite union of finite sets, and so this step can be performed only in a set theory that allows such a union.) A unique infinitely large positive number occurs in Sω:

ω = { S∗ | } = { 1, 2, 3, 4, … | }.

Sω also contains objects that can be identified as the rational numbers. For example, the ω-complete form of the fraction 1/3 is given by

1/3 = { y ∈ S∗ : 3y < 1 | y ∈ S∗ : 3y > 1 }.

The product of this form of 1/3 with any form of 3 is a form whose left set contains only numbers less than 1 and whose right set contains only numbers greater than 1; the birthday property implies that this product is a form of 1.

Not only do all the rest of the rational numbers appear in Sω; the remaining finite real numbers do too. For example,

π = { 3, 25/8, 201/64, … | 4, 7/2, 13/4, 51/16, … }.

The only infinities in Sω are ω and −ω; but there are other non-real numbers in Sω interspersed among the reals.
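Instantiating the closure thresholds at the smallest relevant ordinals (these instances just specialize the statements above):

```latex
\beta=\omega^{1}:\ \{x:\mathrm{birthday}(x)<\omega\}=S_{*}\ \text{is an additive group};\quad
\beta=\omega^{\omega^{0}}=\omega:\ S_{*}\ \text{is a ring (the dyadic fractions)};\quad
\beta=\varepsilon_{0}:\ \{x:\mathrm{birthday}(x)<\varepsilon_{0}\}\ \text{is a field}.
```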
Consider the smallest positive number in Sω:

ε = { S− ∪ S0 | S+ } = { 0 | 1, 1/2, 1/4, 1/8, … } = { 0 | y ∈ S∗ : y > 0 }

This number is larger than zero but less than all positive dyadic fractions. It is therefore an infinitesimal number, often labeled ε. The ω-complete form of ε (respectively −ε) is the same as the ω-complete form of 0, except that 0 is included in the left (respectively right) set. The only "pure" infinitesimals in Sω are ε and its additive inverse −ε; adding them to any dyadic fraction y produces the numbers y ± ε, which also lie in Sω.

One can determine the relationship between ω and ε by multiplying particular forms of them to obtain a form of the product ω · ε. This expression is well-defined only in a set theory which permits transfinite induction up to Sω·2. In such a system, one can demonstrate that all the elements of the left set of ω · ε are positive infinitesimals and all the elements of the right set are positive infinities, and therefore ω · ε is the oldest positive finite number, 1. Consequently, 1/ε = ω. Some authors systematically use ω^(−1) in place of the symbol ε.

Given any x = {L | R} in Sω, exactly one of the following is true: x is a real number; x is ±ω; or x is y ± ε for some dyadic fraction y (the case y = 0 giving the pure infinitesimals ±ε).

Sω is not a field, because it is not closed under arithmetic operations; consider ω + 1, whose form

ω + 1 = { 1, 2, 3, 4, ... | } + { 0 | } = { 1, 2, 3, 4, …, ω | }

does not denote any number in Sω. The maximal subset of Sω that is closed under (finite series of) arithmetic operations is the field of real numbers, obtained by leaving out the infinities ±ω, the infinitesimals ±ε, and the infinitesimal neighbors y ± ε of each nonzero dyadic fraction y.

This construction of the real numbers differs from the Dedekind cuts of standard analysis in that it starts from dyadic fractions rather than general rationals and naturally identifies each dyadic fraction in Sω with its forms in previous generations. (The ω-complete forms of real elements of Sω are in one-to-one correspondence with the reals obtained by Dedekind cuts, under the proviso that Dedekind reals corresponding to rational numbers are represented by the form in which the cut point is omitted from both left and right sets.)

The rationals are not an identifiable stage in the surreal construction; they are merely the subset Q of Sω containing all elements x such that x·b = a for some a and some nonzero b, both drawn from S∗. By demonstrating that Q is closed under individual repetitions of the surreal arithmetic operations, one can show that it is a field; and by showing that every element of Q is reachable from S∗ by a finite series (no longer than two, actually) of arithmetic operations including multiplicative inversion, one can show that Q is strictly smaller than the subset of Sω identified with the reals.

The set Sω has the same cardinality as the real numbers R. This can be demonstrated by exhibiting surjective mappings from Sω to the closed unit interval I of R and vice versa. Mapping Sω onto I is routine; map numbers less than or equal to ε (including −ω) to 0, numbers greater than or equal to 1 − ε (including ω) to 1, and numbers between ε and 1 − ε to their equivalent in I (mapping the infinitesimal neighbors y ± ε of each dyadic fraction y, along with y itself, to y). To map I onto Sω, map the (open) central third (1/3, 2/3) of I onto { | } = 0; the central third (7/9, 8/9) of the upper third to { 0 | } = 1; and so forth. This maps a nonempty open interval of I onto each element of S∗, monotonically.
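The central-thirds map just described can be simulated exactly with rationals; the helper below is my own rendering of the rule (an upper third contributes a '+' step, a lower third a '−' step, and landing in a central third means the image number is born at that finite stage):

```python
# Map a point of I = [0,1] toward its surreal image: record third-steps
# until the point lands in a central third (finite birthday) or the
# depth limit is hit (a Cantor-set point, i.e. birthday omega).
from fractions import Fraction

def third_signs(x, max_depth=12):
    signs = []
    for _ in range(max_depth):
        if Fraction(1, 3) < x < Fraction(2, 3):
            return signs                        # born at a finite stage
        if x >= Fraction(2, 3):
            signs.append('+'); x = 3 * x - 2    # upper third
        else:
            signs.append('-'); x = 3 * x        # lower third
    return signs + ['...']                      # never lands: birthday omega

print(third_signs(Fraction(1, 2)))   # []    -> the number 0
print(third_signs(Fraction(5, 6)))   # ['+'] -> the number 1
print(third_signs(Fraction(1, 4)))   # alternating '-','+',...: Cantor point
```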
The residue of I consists of the Cantor set 2^ω, each point of which is uniquely identified by a partition of the central-third intervals into left and right sets, corresponding precisely to a form {L | R} in Sω. This places the Cantor set in one-to-one correspondence with the set of surreal numbers with birthday ω.

Continuing to perform transfinite induction beyond Sω produces more ordinal numbers α, each represented as the largest surreal number having birthday α. (This is essentially a definition of the ordinal numbers resulting from transfinite induction.) The first such ordinal is ω + 1 = { ω | }. There is another positive infinite number in generation ω + 1:

ω − 1 = { 1, 2, 3, 4, … | ω }.

The surreal number ω − 1 is not an ordinal; the ordinal ω is not the successor of any ordinal. This is a surreal number with birthday ω + 1, which is labeled ω − 1 on the basis that it coincides with the sum of ω = { 0, 1, 2, 3, 4, ... | } and −1 = { | 0 }. Similarly, there are two new infinitesimal numbers in generation ω + 1:

2ε = ε + ε = { ε | 1 + ε, 1/2 + ε, 1/4 + ε, … } and ε/2 = ε · (1/2) = { 0 | ε }.

At a later stage of transfinite induction, there is a number larger than ω + k for all natural numbers k:

ω + ω = { ω + 1, ω + 2, ω + 3, … | }.

This number may be labeled ω + ω both because its birthday is ω + ω (the first ordinal number not reachable from ω by the successor operation) and because it coincides with the surreal sum of ω and ω; it may also be labeled 2ω because it coincides with the product of ω = { 1, 2, 3, 4, ... | } and 2 = { 1 | }. It is the second limit ordinal; reaching it from ω via the construction step requires a transfinite induction on ⋃k<ω Sω+k. This involves an infinite union of infinite sets, which is a "stronger" set-theoretic operation than the previous transfinite induction required.

Note that the conventional addition and multiplication of ordinals does not always coincide with these operations on their surreal representations. The sum of ordinals 1 + ω equals ω, but the surreal sum is commutative and produces 1 + ω = ω + 1 > ω. The addition and multiplication of the surreal numbers associated with ordinals coincides with the natural sum and natural product of ordinals.

Just as 2ω is bigger than ω + n for any natural number n, there is a surreal number ω/2 that is infinite but smaller than ω − n for any natural number n. That is, ω/2 is defined by

ω/2 = { S∗ | ω − S∗ },

where on the right-hand side the notation x − Y is used to mean { x − y : y ∈ Y }. It can be identified as the product of ω and the form { 0 | 1 } of 1/2. The birthday of ω/2 is the limit ordinal ω·2.

To classify the "orders" of infinite and infinitesimal surreal numbers, also known as archimedean classes, Conway associated to each surreal number x the surreal number

ω^x = { 0, r·ω^(xL) | s·ω^(xR) },

where r and s range over the positive real numbers. If x < y then ω^y is "infinitely greater" than ω^x, in that it is greater than r·ω^x for all real numbers r. Powers of ω also satisfy the conditions ω^x · ω^y = ω^(x+y) and ω^(−x) = 1/ω^x, so they behave the way one would expect powers to behave.

Each power of ω also has the redeeming feature of being the simplest surreal number in its archimedean class; conversely, every archimedean class within the surreal numbers contains a unique simplest member. Thus, for every positive surreal number x there will always exist some positive real number r and some surreal number y so that x − r·ω^y is "infinitely smaller" than x. The exponent y is the "base ω logarithm" of x, defined on the positive surreals; it can be demonstrated that logω maps the positive surreals onto the surreals and that logω(ω^x) = x.

This gets extended by transfinite induction so that every surreal number has a "normal form" analogous to the Cantor normal form for ordinal numbers.
This is the Conway normal form: every surreal number x may be uniquely written as

x = Σα<β rα·ω^(yα),

where every rα is a nonzero real number and the yα form a strictly decreasing sequence of surreal numbers. This "sum", however, may have infinitely many terms, and in general has the length of an arbitrary ordinal number. (Zero corresponds of course to the case of an empty sequence, and is the only surreal number with no leading exponent.)

Looked at in this manner, the surreal numbers resemble a power series field, except that the decreasing sequences of exponents must be bounded in length by an ordinal and are not allowed to be as long as the class of ordinals. This is the basis for the formulation of the surreal numbers as a Hahn series.

In contrast to the real numbers, a (proper) subset of the surreal numbers does not have a least upper (or lower) bound unless it has a maximal (minimal) element. Conway defines[6] a gap as {L | R} such that every element of L is less than every element of R, and L ∪ R = No; this is not a number because at least one of the sides is a proper class. Though similar, gaps are not quite the same as Dedekind cuts,[c] but we can still talk about a completion No𝔇 of the surreal numbers with the natural ordering, which is a (proper-class-sized) linear continuum.[9]

For instance there is no least positive infinite surreal, but the gap

∞ = { x : ∃n ∈ N : x < n | x : ∀n ∈ N : x > n }

is greater than all real numbers and less than all positive infinite surreals, and is thus the least upper bound of the reals in No𝔇. Similarly the gap On = { No | } is larger than all surreal numbers. (This is an esoteric pun: in the general construction of ordinals, α "is" the set of ordinals smaller than α, and we can use this equivalence to write α = { α | } in the surreals; On denotes the class of ordinal numbers, and because On is cofinal in No we have { No | } = { On | } = On by extension.)

With a bit of set-theoretic care,[d] No can be equipped with a topology where the open sets are unions of open intervals (indexed by proper sets) and continuous functions can be defined.[9] An equivalent of Cauchy sequences can be defined as well, although they have to be indexed by the class of ordinals; these will always converge, but the limit may be either a number or a gap that can be expressed as Σα<On rα·ω^(aα) with aα decreasing and having no lower bound in No. (All such gaps can be understood as Cauchy sequences themselves, but there are other types of gap that are not limits, such as ∞ and On.)[9]

Based on unpublished work by Kruskal, a construction (by transfinite induction) that extends the real exponential function exp(x) (with base e) to the surreals was carried through by Gonshor.[8]: ch. 10

The powers of ω function is also an exponential function, but does not have the properties desired for an extension of the function on the reals. It will, however, be needed in the development of the base-e exponential, and it is this function that is meant whenever the notation ω^x is used in the following.
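Two simple normal forms written out under the scheme above (standard examples, not drawn from the text):

```latex
\omega-1=\omega^{1}\cdot 1+\omega^{0}\cdot(-1),
\qquad
\varepsilon=\omega^{-1}\cdot 1\quad(\text{consistent with }1/\varepsilon=\omega).
```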
When y is a dyadic fraction, the power function on x ∈ No, x ↦ x^y, may be composed from multiplication, multiplicative inverse and square root, all of which can be defined inductively. Its values are completely determined by the basic relation x^(y+z) = x^y · x^z, and where defined it necessarily agrees with any other exponentiation that can exist.

The induction steps for the surreal exponential are based on the series expansion for the real exponential,

exp x = Σn≥0 x^n / n!,

more specifically those partial sums that can be shown by basic algebra to be positive but less than all later ones. For x positive these are denoted [x]n and include all partial sums; for x negative but finite, [x]2n+1 denotes the odd steps in the series starting from the first one with a positive real part (which always exists). For x negative infinite the odd-numbered partial sums are strictly decreasing and the [x]2n+1 notation denotes the empty set, but it turns out that the corresponding elements are not needed in the induction.

The relations that hold for real x < y are then exp x · [y − x]n < exp y and exp y · [x − y]2n+1 < exp x, and this can be extended to the surreals with the definition

exp z = { 0, exp zL · [z − zL]n, exp zR · [z − zR]2n+1 | exp zR / [zR − z]n, exp zL / [zL − z]2n+1 }.

This is well-defined for all surreal arguments (the value exists and does not depend on the choice of zL and zR). Using this definition, the expected properties hold:[e] exp is a strictly increasing map of the surreals onto the positive surreals satisfying exp(x + y) = exp(x) · exp(y), and it agrees with the usual exponential function on the reals.

The surreal exponential is essentially given by its behaviour on positive powers of ω, i.e., the function g(a), combined with well-known behaviour on finite numbers. Only examples of the former will be given. In addition, g(a) = a holds for a large part of its range, for instance for any finite number with positive real part and any infinite number that is less than some iterated power of ω (ω^(ω^(⋯^ω)) for some finite number of levels).

A general exponentiation can be defined as x^y = exp(y · log x), giving an interpretation to expressions like 2^ω = exp(ω · log 2) = ω^(log 2 · ω). Again it is essential to distinguish this definition from the "powers of ω" function, especially if ω may occur as the base.

A surcomplex number is a number of the form a + bi, where a and b are surreal numbers and i is the square root of −1.[10][11] The surcomplex numbers form an algebraically closed field (except for being a proper class), isomorphic to the algebraic closure of the field generated by extending the rational numbers by a proper class of algebraically independent transcendental elements. Up to field isomorphism, this fact characterizes the field of surcomplex numbers within any fixed set theory.[6]: Th.27

The definition of surreal numbers contained one restriction: each element of L must be strictly less than each element of R. If this restriction is dropped we can generate a more general class known as games. All games are constructed according to this rule: if L and R are two sets of games, then { L | R } is a game.

Addition, negation, and comparison are all defined the same way for both surreal numbers and games. Every surreal number is a game, but not all games are surreal numbers, e.g. the game { 0 | 0 } is not a surreal number. The class of games is more general than the surreals, and has a simpler definition, but lacks some of the nicer properties of surreal numbers. The class of surreal numbers forms a field, but the class of games does not. The surreals have a total order: given any two surreals, they are either equal, or one is greater than the other.
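One standard computed value under this definition, stated here from the general literature rather than from this text (and consistent with g(a) = a on this range):

```latex
\exp(\omega)=\omega^{\omega},
\qquad
\exp\bigl(\omega^{1/2}\bigr)=\omega^{\omega^{1/2}}.
```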
The games have only a partial order: there exist pairs of games that are neither equal, greater than, nor less than each other. Each surreal number is either positive, negative, or zero. Each game is either positive, negative, zero, or fuzzy (incomparable with zero, such as { 1 | −1 }).

A move in a game involves the player whose move it is choosing a game from those available in L (for the left player) or R (for the right player) and then passing this chosen game to the other player. A player who cannot move because the choice is from the empty set has lost. A positive game represents a win for the left player, a negative game for the right player, a zero game for the second player to move, and a fuzzy game for the first player to move.

If x, y, and z are surreals, and x = y, then xz = yz. However, if x, y, and z are games, and x = y, then it is not always true that xz = yz. Note that "=" here means equality, not identity.

The surreal numbers were originally motivated by studies of the game Go,[2] and there are numerous connections between popular games and the surreals. In this section, we will use a capitalized Game for the mathematical object {L | R}, and the lowercase game for recreational games like Chess or Go.

We consider games with these properties: there are two players, called Left and Right; play is deterministic (no dice or shuffling) with no hidden information; the players alternate turns; every play must end after finitely many moves; and a player who has no moves available on their turn loses.

For most games, the initial board position gives no great advantage to either player. As the game progresses and one player starts to win, board positions will occur in which that player has a clear advantage. For analyzing games, it is useful to associate a Game with every board position. The value of a given position will be the Game {L | R}, where L is the set of values of all the positions that can be reached in a single move by Left. Similarly, R is the set of values of all the positions that can be reached in a single move by Right.

The zero Game (called 0) is the Game where L and R are both empty, so the player to move next (L or R) immediately loses. The sum of two Games G = { L1 | R1 } and H = { L2 | R2 } is defined as the Game G + H = { L1 + H, G + L2 | R1 + H, G + R2 }, where the player to move chooses which of the Games to play in at each stage, and the loser is still the player who ends up with no legal move. One can imagine two chess boards between two players, with players making moves alternately, but with complete freedom as to which board to play on. If G is the Game {L | R}, −G is the Game {−R | −L}, i.e. with the roles of the two players reversed. It is easy to show G − G = 0 for all Games G (where G − H is defined as G + (−H)).

This simple way to associate Games with games yields a very interesting result. Suppose two perfect players play a game starting with a given position whose associated Game is x. We can classify all Games into four classes as follows: if x > 0 then Left will win, regardless of who plays first; if x < 0 then Right will win, regardless of who plays first; if x = 0 then the player who goes second will win; and if x || 0 then the player who goes first will win.

More generally, we can define G > H as G − H > 0, and similarly for <, = and ||. The notation G || H means that G and H are incomparable. G || H is equivalent to G − H || 0, i.e. that G > H, G < H and G = H are all false. Incomparable games are sometimes said to be confused with each other, because one or the other may be preferred by a player depending on what is added to it. A game confused with zero is said to be fuzzy, as opposed to positive, negative, or zero. An example of a fuzzy game is star (*).

Sometimes when a game nears the end, it will decompose into several smaller games that do not interact, except in that each player's turn allows moving in only one of them. For example, in Go, the board will slowly fill up with pieces until there are just a few small islands of empty space where a player can move.
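Because comparison is defined identically for Games, the four outcome classes can be computed with the leq() from the earlier sketch (again only for finite forms; classify() is my own name):

```python
def classify(g):
    ge0, le0 = leq(zero, g), leq(g, zero)
    if ge0 and le0: return "zero: second player wins"
    if ge0:         return "positive: Left wins"
    if le0:         return "negative: Right wins"
    return "fuzzy: first player wins"

star = Form(left=[zero], right=[zero])   # { 0 | 0 }
print(classify(star))                    # fuzzy: first player wins
print(classify(one), classify(mone))    # positive: Left wins  negative: Right wins
```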
Each island is like a separate game of Go, played on a very small board. It would be useful if each subgame could be analyzed separately, and then the results combined to give an analysis of the entire game. This doesn't appear to be easy to do. For example, there might be two subgames where whoever moves first wins, but when they are combined into one big game, it is no longer the first player who wins. Fortunately, there is a way to do this analysis, using the following theorem: if a big game decomposes into two smaller games, and the smaller games have associated Games x and y, then the big game has the associated Game x + y.

A game composed of smaller games is called the disjunctive sum of those smaller games, and the theorem states that the method of addition we defined is equivalent to taking the disjunctive sum of the addends.

Historically, Conway developed the theory of surreal numbers in the reverse order of how it has been presented here. He was analyzing Go endgames, and realized that it would be useful to have some way to combine the analyses of non-interacting subgames into an analysis of their disjunctive sum. From this he invented the concept of a Game and the addition operator for it. From there he moved on to developing a definition of negation and comparison. Then he noticed that a certain class of Games had interesting properties; this class became the surreal numbers. Finally, he developed the multiplication operator, and proved that the surreals are actually a field, and that it includes both the reals and ordinals.

Alternative approaches to the surreal numbers complement the original exposition by Conway in terms of games.

In what is now called the sign-expansion or sign-sequence of a surreal number, a surreal number is a function whose domain is an ordinal and whose codomain is { −1, +1 }.[8]: ch. 2 This notion was introduced by Conway himself in the equivalent formulation of L-R sequences.[6]

Define the binary predicate "simpler than" on numbers by: x is simpler than y if x is a proper subset of y, i.e. if dom(x) < dom(y) and x(α) = y(α) for all α < dom(x).

For surreal numbers define the binary relation < to be lexicographic order (with the convention that "undefined values" are greater than −1 and less than 1). So x < y if one of the following holds: x is simpler than y and y(dom(x)) = +1; y is simpler than x and x(dom(y)) = −1; or there exists a number z simpler than both x and y with x(dom(z)) = −1 and y(dom(z)) = +1.

Equivalently, let δ(x, y) = min({ dom(x), dom(y) } ∪ { α : α < dom(x) ∧ α < dom(y) ∧ x(α) ≠ y(α) }), so that x = y if and only if δ(x, y) = dom(x) = dom(y). Then, for numbers x and y, x < y if and only if one of the following holds: δ(x, y) = dom(x) < dom(y) and y(δ(x, y)) = +1; δ(x, y) = dom(y) < dom(x) and x(δ(x, y)) = −1; or δ(x, y) < dom(x), δ(x, y) < dom(y), x(δ(x, y)) = −1 and y(δ(x, y)) = +1.

For numbers x and y, x ≤ y if and only if x < y ∨ x = y, and x > y if and only if y < x. Also x ≥ y if and only if y ≤ x.

The relation < is transitive, and for all numbers x and y, exactly one of x < y, x = y, x > y holds (law of trichotomy). This means that < is a linear order (except that < is a proper class).

For sets of numbers L and R such that ∀x ∈ L ∀y ∈ R (x < y), there exists a unique number z such that x < z for all x ∈ L, z < y for all y ∈ R, and z is simpler than every other number with the same property. Furthermore, z is constructible from L and R by transfinite induction. z is the simplest number between L and R. Let the unique number z be denoted by σ(L, R).

For a number x, define its left set L(x) and right set R(x) by L(x) = { x↾α : α < dom(x) ∧ x(α) = +1 } and R(x) = { x↾α : α < dom(x) ∧ x(α) = −1 }, where x↾α denotes the restriction of x to α; then σ(L(x), R(x)) = x.

One advantage of this alternative realization is that equality is identity, not an inductively defined relation. Unlike Conway's original realization of the surreal numbers, however, the sign-expansion requires a prior construction of the ordinals, while in Conway's realization, the ordinals are constructed as particular cases of surreals. However, similar definitions can be made that eliminate the need for prior construction of the ordinals.
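The lexicographic comparison of sign expansions is short enough to execute directly; below, sequences are tuples of ±1 and the end-of-sequence "undefined" value is coded as 0, strictly between −1 and +1 (an ad hoc but faithful rendering for finite sequences):

```python
def val(x, i):
    return x[i] if i < len(x) else 0   # "undefined" sits between -1 and +1

def less(x, y):
    """x < y in the sign-expansion order (finite sequences only)."""
    for i in range(max(len(x), len(y)) + 1):
        if val(x, i) != val(y, i):
            return val(x, i) < val(y, i)
    return False

minus1, zero_, half, one_, two = (-1,), (), (1, -1), (1,), (1, 1)
chain = [minus1, zero_, half, one_, two]      # -1 < 0 < 1/2 < 1 < 2
print(all(less(a, b) for a, b in zip(chain, chain[1:])))   # True
```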
For instance, we could let the surreals be the (recursively defined) class of functions whose domain is a subset of the surreals satisfying the transitivity rule ∀g ∈ dom f (∀h ∈ dom g (h ∈ dom f)) and whose range is { −, + }. "Simpler than" is very simply defined now: x is simpler than y if x ∈ dom y. The total ordering is defined by considering x and y as sets of ordered pairs (as a function is normally defined): either x = y, or else the surreal number z = x ∩ y is in the domain of x or the domain of y (or both, but in this case the signs must disagree). We then have x < y if x(z) = − or y(z) = + (or both). Converting these functions into sign sequences is a straightforward task; arrange the elements of dom f in order of simplicity (i.e., inclusion), and then write down the signs that f assigns to each of these elements in order. The ordinals then occur naturally as those surreal numbers whose range is { + }.

The sum x + y of two numbers x and y is defined by induction on dom(x) and dom(y) by x + y = σ(L, R), where

L = { u + y : u ∈ L(x) } ∪ { x + v : v ∈ L(y) } and R = { u + y : u ∈ R(x) } ∪ { x + v : v ∈ R(y) }.

The additive identity is given by the number 0 = { }, i.e. the number 0 is the unique function whose domain is the ordinal 0, and the additive inverse of the number x is the number −x, given by dom(−x) = dom(x), and, for α < dom(x), (−x)(α) = −1 if x(α) = +1, and (−x)(α) = +1 if x(α) = −1.

It follows that a number x is positive if and only if 0 < dom(x) and x(0) = +1, and x is negative if and only if 0 < dom(x) and x(0) = −1.

The product xy of two numbers, x and y, is defined by induction on dom(x) and dom(y) by xy = σ(L, R), where

L = { uy + xv − uv : u ∈ L(x), v ∈ L(y) } ∪ { uy + xv − uv : u ∈ R(x), v ∈ R(y) } and R = { uy + xv − uv : u ∈ L(x), v ∈ R(y) } ∪ { uy + xv − uv : u ∈ R(x), v ∈ L(y) }.

The multiplicative identity is given by the number 1 = { (0, +1) }, i.e. the number 1 has domain equal to the ordinal 1, and 1(0) = +1.

The map from Conway's realization to sign expansions is given by f({L | R}) = σ(M, S), where M = { f(x) : x ∈ L } and S = { f(x) : x ∈ R }. The inverse map from the alternative realization to Conway's realization is given by g(x) = {L | R}, where L = { g(y) : y ∈ L(x) } and R = { g(y) : y ∈ R(x) }.

In another approach to the surreals, given by Alling,[11] explicit construction is bypassed altogether. Instead, a set of axioms is given that any particular approach to the surreals must satisfy. Much like the axiomatic approach to the reals, these axioms guarantee uniqueness up to isomorphism. A triple ⟨No, <, b⟩ is a surreal number system if and only if it satisfies axioms stating, roughly, that < is a total ordering of No, that b is a "birthday" function from No onto the class of all ordinals, and that cuts are filled minimally: for subsets A and B of No with A < B, there is a unique element of least birthday lying strictly between them. Both Conway's original construction and the sign-expansion construction of surreals satisfy these axioms.

Given these axioms, Alling[11] derives Conway's original definition of ≤ and develops surreal arithmetic.

A construction of the surreal numbers as a maximal binary pseudo-tree with simplicity (ancestor) and ordering relations is due to Philip Ehrlich.[12] The difference from the usual definition of a tree is that the set of ancestors of a vertex is well-ordered, but may not have a maximal element (immediate predecessor); in other words the order type of that set is a general ordinal number, not just a natural number. This construction fulfills Alling's axioms as well and can easily be mapped to the sign-sequence representation.

Ehrlich additionally constructed an isomorphism between Conway's maximal surreal number field and the maximal hyperreals in von Neumann–Bernays–Gödel set theory.[12]

Alling[11]: th. 6.55, p. 246 also proves that the field of surreal numbers is isomorphic (as an ordered field) to the field of Hahn series with real coefficients on the value group of surreal numbers themselves (the series representation corresponding to the normal form of a surreal number, as defined above).
This provides a connection between surreal numbers and more conventional mathematical approaches to ordered field theory. This isomorphism makes the surreal numbers into a valued field where the valuation is the additive inverse of the exponent of the leading term in the Conway normal form, e.g., ν(ω) = −1. The valuation ring then consists of the finite surreal numbers (numbers with a real and/or an infinitesimal part). The reason for the sign inversion is that the exponents in the Conway normal form constitute a reverse well-ordered set, whereas Hahn series are formulated in terms of (non-reversed) well-ordered subsets of the value group.
https://en.wikipedia.org/wiki/Surreal_number
In the mathematical discipline of graph theory, a 3-dimensional matching is a generalization of bipartite matching (also known as 2-dimensional matching) to 3-partite hypergraphs, which consist of hyperedges each of which contains 3 vertices (instead of edges containing 2 vertices in a usual graph). 3-dimensional matching, often abbreviated as 3DM, is also the name of a well-known computational problem: finding a largest 3-dimensional matching in a given hypergraph. 3DM is one of the first problems that were proved to be NP-hard.

Let X, Y, and Z be finite sets, and let T be a subset of X × Y × Z. That is, T consists of triples (x, y, z) such that x ∈ X, y ∈ Y, and z ∈ Z. Now M ⊆ T is a 3-dimensional matching if the following holds: for any two distinct triples (x1, y1, z1) ∈ M and (x2, y2, z2) ∈ M, we have x1 ≠ x2, y1 ≠ y2, and z1 ≠ z2.

The figure on the right illustrates 3-dimensional matchings. The set X is marked with red dots, Y is marked with blue dots, and Z is marked with green dots. Figure (a) shows the set T (gray areas). Figure (b) shows a 3-dimensional matching M with |M| = 2, and Figure (c) shows a 3-dimensional matching M with |M| = 3. The matching M illustrated in Figure (c) is a maximum 3-dimensional matching, i.e., it maximises |M|. The matchings illustrated in Figures (b)–(c) are maximal 3-dimensional matchings, i.e., they cannot be extended by adding more elements from T.

A 2-dimensional matching can be defined in a completely analogous manner. Let X and Y be finite sets, and let T be a subset of X × Y. Now M ⊆ T is a 2-dimensional matching if the following holds: for any two distinct pairs (x1, y1) ∈ M and (x2, y2) ∈ M, we have x1 ≠ x2 and y1 ≠ y2. In the case of 2-dimensional matching, the set T can be interpreted as the set of edges in a bipartite graph G = (X, Y, T); each edge in T connects a vertex in X to a vertex in Y. A 2-dimensional matching is then a matching in the graph G, that is, a set of pairwise non-adjacent edges.

Hence 3-dimensional matchings can be interpreted as a generalization of matchings to hypergraphs: the sets X, Y, and Z contain the vertices, each element of T is a hyperedge, and the set M consists of pairwise non-adjacent edges (edges that do not have a common vertex). A 3-dimensional matching is a special case of a set packing: we can interpret each element (x, y, z) of T as a subset {x, y, z} of X ∪ Y ∪ Z; then a 3-dimensional matching M consists of pairwise disjoint subsets.

In computational complexity theory, 3-dimensional matching (3DM) is the name of the following decision problem: given a set T and an integer k, decide whether there exists a 3-dimensional matching M ⊆ T with |M| ≥ k. This decision problem is known to be NP-complete; it is one of Karp's 21 NP-complete problems.[1] It is NP-complete even in the special case that k = |X| = |Y| = |Z| and when each element is contained in at most 3 sets, i.e., when we want a perfect matching in a 3-regular hypergraph.[1][2][3] In this case, a 3-dimensional matching is not only a set packing, but also an exact cover: the set M covers each element of X, Y, and Z exactly once.[4] The proof is by reduction from 3SAT: in the standard textbook construction, a 3SAT instance is turned into a 3DM instance using, for each variable, a gadget of triples that can be matched internally in exactly two ways (corresponding to the truth values), a gadget for each clause that can only be matched using an element left free by an appropriately set literal, and a "garbage collection" gadget that absorbs the remaining free elements.[2][5]

There exist polynomial time algorithms for solving 3DM in dense hypergraphs.[6][7]

A maximum 3-dimensional matching is a largest 3-dimensional matching. In computational complexity theory, this is also the name of the following optimization problem: given a set T, find a 3-dimensional matching M ⊆ T that maximizes |M|.
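The definitions translate into a brute-force search that is fine for toy instances (the instance below is my own, not the article's figure; the search is exponential, in line with the hardness discussion for the general problem):

```python
# 3-dimensional matchings as pairwise coordinate-disjoint triple sets.
from itertools import combinations

def is_matching(M):
    # each of the three coordinates must be pairwise distinct within M
    return all(len({t[d] for t in M}) == len(M) for d in range(3))

def maximum_matching(T):
    for k in range(len(T), 0, -1):          # try the largest sizes first
        for M in combinations(T, k):
            if is_matching(M):
                return list(M)
    return []

T = [(1, 1, 1), (2, 2, 2), (3, 3, 3), (1, 2, 3)]
print(maximum_matching(T))   # [(1, 1, 1), (2, 2, 2), (3, 3, 3)]: size 3
```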
Since the decision problem described above is NP-complete, this optimization problem is NP-hard, and hence it seems that there is no polynomial-time algorithm for finding a maximum 3-dimensional matching. However, there are efficient polynomial-time algorithms for finding a maximum bipartite matching (maximum 2-dimensional matching), for example, the Hopcroft–Karp algorithm.

There is a very simple polynomial-time 3-approximation algorithm for 3-dimensional matching: find any maximal 3-dimensional matching.[8] Just like a maximal matching is within factor 2 of a maximum matching,[9] a maximal 3-dimensional matching is within factor 3 of a maximum 3-dimensional matching. For any constant ε > 0 there is a polynomial-time (4/3 + ε)-approximation algorithm for 3-dimensional matching.[10]

However, attaining better approximation factors is probably hard: the problem is APX-complete, that is, it is hard to approximate within some constant.[11][12][8] It is NP-hard to achieve an approximation factor of 95/94 for maximum 3-d matching, and an approximation factor of 48/47 for maximum 4-d matching. The hardness remains even when restricted to instances with exactly two occurrences of each element.[13]

There are various algorithms for 3-d matching in the massively parallel communication model.[14]
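The factor-3 guarantee comes from maximality alone: every triple of an optimal matching shares a coordinate with some chosen triple, and a chosen triple can block at most three of them. A greedy sketch (same toy conventions as above) also shows the bound is tight:

```python
def greedy_maximal_matching(T):
    used = [set(), set(), set()]            # occupied X, Y, Z values
    M = []
    for t in T:
        if all(t[d] not in used[d] for d in range(3)):
            M.append(t)                     # t conflicts with nothing chosen
            for d in range(3):
                used[d].add(t[d])
    return M                                # maximal: no triple can be added

T = [(1, 2, 3), (1, 1, 1), (2, 2, 2), (3, 3, 3)]
print(greedy_maximal_matching(T))   # [(1, 2, 3)]: size 1, maximum is 3
```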
https://en.wikipedia.org/wiki/3-dimensional_matching
In graph theory, a vertex cover in a hypergraph is a set of vertices, such that every hyperedge of the hypergraph contains at least one vertex of that set. It is an extension of the notion of vertex cover in a graph.[1]: 466–470[2] An equivalent term is a hitting set: given a collection of sets, a set which intersects all sets in the collection in at least one element is called a hitting set. The equivalence can be seen by mapping the sets in the collection onto hyperedges. Another equivalent term, used more in a combinatorial context, is transversal. However, some definitions of transversal require that every hyperedge of the hypergraph contains precisely one vertex from the set. Recall that a hypergraph H is a pair (V, E), where V is a set of vertices and E is a set of subsets of V called hyperedges. Each hyperedge may contain one or more vertices. A vertex-cover (aka hitting set or transversal) in H is a set T ⊆ V such that, for all hyperedges e ∈ E, it holds that T ∩ e ≠ ∅. The vertex-cover number (aka transversal number) of a hypergraph H is the smallest size of a vertex cover in H. It is often denoted by τ(H).[1]: 466 For example, a 3-uniform hypergraph H may admit several vertex-covers of size 2 while no subset of size 1 hits all of its hyperedges; the vertex-cover number of such an H is 2. Note that we get back the case of vertex covers for simple graphs if the maximum size of the hyperedges is 2. The computational problems minimum hitting set and hitting set are defined as in the case of graphs. If the maximum size of a hyperedge is restricted to d, then the problem of finding a minimum d-hitting set permits a d-approximation algorithm. Assuming the unique games conjecture, this is the best constant-factor algorithm that is possible and otherwise there is the possibility of improving the approximation to d − 1.[3] For the hitting set problem, different parametrizations make sense.[4] The hitting set problem is W[2]-complete for the parameter OPT, that is, it is unlikely that there is an algorithm that runs in time f(OPT)·n^O(1) where OPT is the cardinality of the smallest hitting set. The hitting set problem is fixed-parameter tractable for the parameter OPT + d, where d is the size of the largest edge of the hypergraph. More specifically, there is an algorithm for hitting set that runs in time d^OPT·n^O(1). The hitting set problem is equivalent to the set cover problem: An instance of set cover can be viewed as an arbitrary bipartite graph, with sets represented by vertices on the left, elements of the universe represented by vertices on the right, and edges representing the inclusion of elements in sets. The task is then to find a minimum cardinality subset of left-vertices which covers all of the right-vertices. In the hitting set problem, the objective is to cover the left-vertices using a minimum subset of the right vertices. Converting from one problem to the other is therefore achieved by interchanging the two sets of vertices. An example of a practical application involving the hitting set problem arises in efficient dynamic detection of race conditions.[5][6] In this case, each time global memory is written, the current thread and set of locks held by that thread are stored. Under lockset-based detection, if later another thread writes to that location and there is not a race, it must be because it holds at least one lock in common with each of the previous writes. Thus the size of the hitting set represents the minimum lock set size to be race-free.
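The d-approximation mentioned above can be sketched as follows (a hedged illustration, not from any particular library): whenever some hyperedge is not yet hit, take all of its at most d vertices. The hyperedges chosen this way are pairwise disjoint, so any optimal hitting set must contain at least one vertex of each, giving the factor d.

```python
def hitting_set_d_approximation(edges):
    """edges: iterable of vertex sets, each of size at most d."""
    T = set()
    for e in edges:
        if not T & set(e):   # hyperedge e is not yet hit
            T |= set(e)      # add all of its vertices
    return T

# Example: a 3-uniform hypergraph; the optimum {1, 4} has size 2,
# and the greedy answer has size at most 3 * 2 = 6.
print(hitting_set_d_approximation([{1, 2, 3}, {1, 4, 5}, {4, 5, 6}]))
```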
This is useful in eliminating redundant write events, since large lock sets are considered unlikely in practice. A fractional vertex-cover is a function assigning a weight in [0,1] to each vertex in V, such that for every hyperedge e in E, the sum of fractions of vertices in e is at least 1. A vertex cover is a special case of a fractional vertex cover in which all weights are either 0 or 1. The size of a fractional vertex-cover is the sum of fractions of all vertices. The fractional vertex-cover number of a hypergraph H is the smallest size of a fractional vertex-cover in H. It is often denoted by τ*(H). Since a vertex-cover is a special case of a fractional vertex-cover, for every hypergraph H: fractional-vertex-cover-number(H) ≤ vertex-cover-number(H); in symbols: τ*(H) ≤ τ(H). The fractional-vertex-cover-number of a hypergraph is, in general, smaller than its vertex-cover-number. A theorem of László Lovász provides an upper bound on the ratio between them, where d is the maximum degree of the hypergraph:[7] τ(H)/τ*(H) ≤ 1 + ln(d). A finite projective plane is a hypergraph in which every two hyperedges intersect. Every finite projective plane is r-uniform for some integer r. Denote by Hr the r-uniform projective plane; it is known to exist whenever r − 1 is a power of a prime. When Hr exists, it has several known extremal properties.[8] A vertex-cover (transversal) T is called minimal if no proper subset of T is a transversal. The transversal hypergraph of H is the hypergraph (X, F) whose hyperedge set F consists of all minimal transversals of H. Computing the transversal hypergraph has applications in combinatorial optimization, in game theory, and in several fields of computer science such as machine learning, indexing of databases, the satisfiability problem, data mining, and computer program optimization.
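Since τ*(H) is the optimum of a linear program, it can be computed directly with an off-the-shelf LP solver. The following sketch uses scipy.optimize.linprog (the function name fractional_vertex_cover and the input conventions are ours): minimize the total weight subject to each hyperedge receiving weight at least 1.

```python
import numpy as np
from scipy.optimize import linprog

def fractional_vertex_cover(vertices, edges):
    """Compute tau*(H).  vertices: list; edges: list of vertex sets."""
    index = {v: i for i, v in enumerate(vertices)}
    n, m = len(vertices), len(edges)
    # linprog solves min c @ x with A_ub @ x <= b_ub, so the covering
    # constraint sum_{v in e} x_v >= 1 becomes -sum_{v in e} x_v <= -1.
    A = np.zeros((m, n))
    for j, e in enumerate(edges):
        for v in e:
            A[j, index[v]] = -1.0
    res = linprog(c=np.ones(n), A_ub=A, b_ub=-np.ones(m),
                  bounds=[(0, 1)] * n)
    return res.fun

# The triangle (a 2-uniform hypergraph) has tau* = 3/2 but tau = 2,
# illustrating that the fractional number can be strictly smaller.
print(fractional_vertex_cover([0, 1, 2], [{0, 1}, {1, 2}, {0, 2}]))
```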
https://en.wikipedia.org/wiki/Vertex_cover_in_hypergraphs
In graph theory, the term bipartite hypergraph describes several related classes of hypergraphs, all of which are natural generalizations of a bipartite graph. The weakest definition of bipartiteness is also called 2-colorability. A hypergraph H = (V, E) is called 2-colorable if its vertex set V can be partitioned into two sets, X and Y, such that each hyperedge meets both X and Y. Equivalently, the vertices of H can be 2-colored so that no hyperedge is monochromatic. Every bipartite graph G = (X + Y, E) is 2-colorable: each edge contains exactly one vertex of X and one vertex of Y, so e.g. X can be colored blue and Y can be colored yellow and no edge is monochromatic. The property of 2-colorability was first introduced by Felix Bernstein in the context of set families;[1] therefore it is also called Property B. A stronger definition of bipartiteness is: a hypergraph is called bipartite if its vertex set V can be partitioned into two sets, X and Y, such that each hyperedge contains exactly one element of X.[2][3] Every bipartite graph is also a bipartite hypergraph. Every bipartite hypergraph is 2-colorable, but bipartiteness is stronger than 2-colorability. Let H be a hypergraph on the vertices {1, 2, 3, 4} with the following hyperedges: { {1,2,3} , {1,2,4} , {1,3,4} , {2,3,4} } This H is 2-colorable, for example by the partition X = {1,2} and Y = {3,4}. However, it is not bipartite, since every set X with one element has an empty intersection with one hyperedge, and every set X with two or more elements has an intersection of size 2 or more with at least two hyperedges. Hall's marriage theorem has been generalized from bipartite graphs to bipartite hypergraphs; see Hall-type theorems for hypergraphs. A stronger definition is: given an integer n, a hypergraph is called n-uniform if all its hyperedges contain exactly n vertices. An n-uniform hypergraph is called n-partite if its vertex set V can be partitioned into n subsets such that each hyperedge contains exactly one element from each subset.[4] An alternative term is rainbow-colorable.[5] Every n-partite hypergraph is bipartite, but n-partiteness is stronger than bipartiteness. Let H be a hypergraph on the vertices {1, 2, 3, 4} with the following hyperedges: { {1,2,3} , {1,2,4} , {1,3,4} } This H is 3-uniform. It is bipartite by the partition X = {1} and Y = {2,3,4}. However, it is not 3-partite: in every partition of V into 3 subsets, at least one subset contains two vertices, and thus at least one hyperedge contains two vertices from this subset. A 3-partite hypergraph is often called a "tripartite hypergraph". However, a 2-partite hypergraph is not the same as a bipartite hypergraph; it is equivalent to a bipartite graph. There are other natural generalizations of bipartite graphs. A hypergraph is called balanced if it is essentially 2-colorable, and remains essentially 2-colorable upon deleting any number of vertices (see Balanced hypergraph). The properties of bipartiteness and balance do not imply each other. Bipartiteness does not imply balance. For example, let H be the hypergraph with vertices {1,2,3,4} and edges: { {1,2,3} , {1,2,4} , {1,3,4} } It is bipartite by the partition X = {1}, Y = {2,3,4}. But it is not balanced. For example, if vertex 1 is removed, we get the restriction of H to {2,3,4}, which has the following hyperedges: { {2,3} , {2,4} , {3,4} } It is not 2-colorable, since in any 2-coloring there are at least two vertices with the same color, and thus at least one of the hyperedges is monochromatic.
Another way to see that H is not balanced is that it contains the odd-length cycle C = (2 - {1,2,3} - 3 - {1,3,4} - 4 - {1,2,4} - 2), and no edge of C contains all three vertices 2, 3, 4 of C. Balance does not imply bipartiteness. Let H be the hypergraph:[citation needed] { {1,2} , {3,4} , {1,2,3,4} } It is 2-colorable and remains 2-colorable upon removing any number of vertices from it. However, it is not bipartite, since to have exactly one green vertex in each of the first two hyperedges, we must have two green vertices in the last hyperedge.
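Both properties are easy to test by exhaustive search on small instances such as the examples above. A minimal sketch (function names ours; exponential in |V|, so for toy hypergraphs only):

```python
from itertools import product

def is_2_colorable(V, E):
    """Property B: some 2-coloring leaves no hyperedge monochromatic."""
    V = list(V)
    for coloring in product((0, 1), repeat=len(V)):
        color = dict(zip(V, coloring))
        if all(len({color[v] for v in e}) == 2 for e in E):
            return True
    return False

def is_bipartite_hypergraph(V, E):
    """Some X subset of V meets every hyperedge in exactly one vertex."""
    V = list(V)
    for mask in product((0, 1), repeat=len(V)):
        X = {v for v, chosen in zip(V, mask) if chosen}
        if all(len(X & set(e)) == 1 for e in E):
            return True
    return False

V = {1, 2, 3, 4}
E = [{1, 2, 3}, {1, 2, 4}, {1, 3, 4}, {2, 3, 4}]
print(is_2_colorable(V, E), is_bipartite_hypergraph(V, E))  # True False
```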
https://en.wikipedia.org/wiki/Bipartite_hypergraph
In the mathematical discipline of graph theory, a rainbow matching in an edge-colored graph is a matching in which all the edges have distinct colors. Given an edge-colored graph G = (V, E), a rainbow matching M in G is a set of pairwise non-adjacent edges, that is, no two edges share a common vertex, such that all the edges in the set have distinct colors. A maximum rainbow matching is a rainbow matching that contains the largest possible number of edges. Rainbow matchings are of particular interest given their connection to transversals of Latin squares. Denote by Kn,n the complete bipartite graph on n + n vertices. Every proper n-edge coloring of Kn,n corresponds to a Latin square of order n. A rainbow matching then corresponds to a transversal of the Latin square, meaning a selection of n positions, one in each row and each column, containing distinct entries. This connection between transversals of Latin squares and rainbow matchings in Kn,n has inspired additional interest in the study of rainbow matchings in triangle-free graphs.[1] An edge-coloring is called proper if each edge has a single color, and each two edges of the same color have no vertex in common. A proper edge-coloring does not guarantee the existence of a perfect rainbow matching. For example, consider the graph K2,2: the complete bipartite graph on 2 + 2 vertices. Suppose the edges (x1,y1) and (x2,y2) are colored green, and the edges (x1,y2) and (x2,y1) are colored blue. This is a proper coloring, but there are only two perfect matchings, and each of them is colored by a single color. This raises the question: when is a large rainbow matching guaranteed to exist? Much of the research on this question was published using the terminology of Latin transversals in Latin squares; translated into rainbow matching terminology, the Brualdi–Stein conjecture asserts that every properly n-edge-colored Kn,n has a rainbow matching of size n − 1. A more general conjecture of Stein is that a rainbow matching of size n − 1 exists not only for a proper edge-coloring, but for any coloring in which each color appears on exactly n edges.[2] Some weaker versions of these conjectures have been proved. Wang asked if there is a function f(d) such that every properly edge-colored graph G with minimum degree d and at least f(d) vertices must have a rainbow matching of size d.[9] Obviously at least 2d vertices are necessary, but how many are sufficient? Suppose that each edge may have several different colors, while each two edges of the same color must still have no vertex in common. In other words, each color is a matching. How many colors are needed in order to guarantee the existence of a rainbow matching? Drisko[12] studied this question using the terminology of Latin rectangles. He proved that, for any n ≤ k, in the complete bipartite graph Kn,k, any family of 2n − 1 matchings (= colors) of size n has a perfect rainbow matching (of size n). He applied this theorem to questions about group actions and difference sets. Drisko also showed that 2n − 1 matchings may be necessary: consider a family of 2n − 2 matchings, of which n − 1 are { (x1,y1), (x2,y2), ..., (xn,yn) } and the other n − 1 are { (x1,y2), (x2,y3), …, (xn,y1) }. Then the largest rainbow matching is of size n − 1 (e.g. take one edge from each of the first n − 1 matchings). Alon[13] showed that Drisko's theorem implies an older result[14] in additive number theory. Aharoni and Berger[15] generalized Drisko's theorem to any bipartite graph, namely: any family of 2n − 1 matchings of size n in a bipartite graph has a rainbow matching of size n. Aharoni, Kotlar and Ziv[16] showed that Drisko's extremal example is unique in any bipartite graph. In general graphs, 2n − 1 matchings are no longer sufficient.
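For intuition, the definition can be tested directly by exhaustive search. A minimal sketch (names ours; exponential in the number of edges, illustrative only), applied to the K2,2 example above:

```python
from itertools import combinations

def maximum_rainbow_matching(colored_edges):
    """colored_edges: list of ((u, v), color) pairs.
    Returns a largest set of vertex-disjoint edges with distinct colors."""
    def is_rainbow_matching(S):
        vertices, colors = set(), set()
        for (u, v), c in S:
            if u in vertices or v in vertices or c in colors:
                return False
            vertices.update((u, v))
            colors.add(c)
        return True

    for k in range(len(colored_edges), 0, -1):
        for S in combinations(colored_edges, k):
            if is_rainbow_matching(S):
                return list(S)
    return []

K22 = [(("x1", "y1"), "green"), (("x2", "y2"), "green"),
       (("x1", "y2"), "blue"), (("x2", "y1"), "blue")]
# Prints a single edge: no two disjoint edges here have distinct colors.
print(maximum_rainbow_matching(K22))
```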
When n is even, one can add to Drisko's example the matching { (x1,x2), (y1,y2), (x3,x4), (y3,y4), … } and get a family of 2n − 1 matchings without any rainbow matching. Aharoni, Berger, Chudnovsky, Howard and Seymour[17] proved that, in a general graph, 3n − 2 matchings (= colors) are always sufficient. It is not known whether this is tight: currently the best lower bound for even n is 2n and for odd n it is 2n − 1.[18] A fractional matching is a set of edges with a non-negative weight assigned to each edge, such that the sum of weights adjacent to each vertex is at most 1. The size of a fractional matching is the sum of weights of all edges. It is a generalization of a matching, and can be used to generalize both the colors and the rainbow matching. It is known that, in a bipartite graph, the maximum fractional matching size equals the maximum matching size. Therefore, the theorem of Aharoni and Berger[15] is equivalent to the following. Let n be any positive integer. Given any family of 2n − 1 fractional-matchings (= colors) of size n in a bipartite graph, there exists a rainbow-fractional-matching of size n. Aharoni, Holzman and Jiang extend this theorem to arbitrary graphs as follows. Let n be any positive integer or half-integer. Any family of 2n fractional-matchings (= colors) of size at least n in an arbitrary graph has a rainbow-fractional-matching of size n.[18]: Thm.1.5 The 2n is the smallest possible for fractional matchings in arbitrary graphs: the extremal case is constructed using an odd-length cycle. For the case of perfect fractional matchings, both the above theorems can be derived from the colorful Caratheodory theorem. For every edge e in E, let 1e be a vector of size |V|, where for each vertex v in V, element v in 1e equals 1 if e is adjacent to v, and 0 otherwise (so each vector 1e has 2 ones and |V| − 2 zeros). Every fractional matching corresponds to a conical combination of edges, in which each element is at most 1. A conical combination in which each element is exactly 1 corresponds to a perfect fractional matching. In other words, a collection F of edges admits a perfect fractional matching, if and only if 1v (the vector of |V| ones) is contained in the conical hull of the vectors 1e for e in F. Consider a graph with 2n vertices, and suppose there are 2n subsets of edges, each of which admits a perfect fractional matching (of size n). This means that the vector 1v is in the conical hull of each of these 2n subsets. By the colorful Caratheodory theorem, there exists a selection of 2n edges, one from each subset, such that their conical hull contains 1v. This corresponds to a rainbow perfect fractional matching. The expression 2n is the dimension of the vectors 1e: each vector has 2n elements. Now, suppose that the graph is bipartite. In a bipartite graph, there is a constraint on the vectors 1e: the sum of elements corresponding to each part of the graph must be 1. Therefore, the vectors 1e live in a (2n − 1)-dimensional space. Therefore, the same argument as above holds when there are only 2n − 1 subsets of edges. An r-uniform hypergraph is a set of hyperedges each of which contains exactly r vertices (so a 2-uniform hypergraph is just a graph without self-loops). Aharoni, Holzman and Jiang extend their theorem to such hypergraphs as follows. Let n be any positive rational number. Any family of ⌈r⋅n⌉ fractional-matchings (= colors) of size at least n in an r-uniform hypergraph has a rainbow-fractional-matching of size n.[18]: Thm.1.6 The ⌈r⋅n⌉ is the smallest possible when n is an integer.
An r-partite hypergraph is an r-uniform hypergraph in which the vertices are partitioned into r disjoint sets and each hyperedge contains exactly one vertex of each set (so a 2-partite hypergraph is just a bipartite graph). Let n be any positive integer. Any family of rn − r + 1 fractional-matchings (= colors) of size at least n in an r-partite hypergraph has a rainbow-fractional-matching of size n.[18]: Thm.1.7 The rn − r + 1 is the smallest possible: the extremal case is when n = r − 1 is a prime power, and all colors are edges of the truncated projective plane of order n. So each color has n² = rn − r + 1 edges and a fractional matching of size n, but any fractional matching of that size requires all rn − r + 1 edges.[19] For the case of perfect fractional matchings, both the above theorems can be derived from the colorful Caratheodory theorem in the previous section. For a general r-uniform hypergraph (admitting a perfect matching of size n), the vectors 1e live in an (rn)-dimensional space. For an r-uniform r-partite hypergraph, the r-partiteness constraints imply that the vectors 1e live in an (rn − r + 1)-dimensional space. The above results hold only for rainbow fractional matchings. In contrast, the case of rainbow integral matchings in r-uniform hypergraphs is much less understood. The number of required matchings for a rainbow matching of size n grows at least exponentially with n. Garey and Johnson have shown that computing a maximum rainbow matching is NP-complete even for edge-colored bipartite graphs.[20] Rainbow matchings have been applied for solving packing problems.[21]
https://en.wikipedia.org/wiki/Rainbow_matching#hypergraphs
In graph theory, a d-interval hypergraph is a kind of hypergraph constructed using intervals of real lines. The parameter d is a positive integer. The vertices of a d-interval hypergraph are the points of d disjoint lines (thus there are uncountably many vertices). The edges of the graph are d-tuples of intervals, one interval in every real line.[1] The simplest case is d = 1. The vertex set of a 1-interval hypergraph is the set of real numbers; each edge in such a hypergraph is an interval of the real line. For example, the set { [−2, −1], [0, 5], [3, 7] } defines a 1-interval hypergraph. Note the difference from an interval graph: in an interval graph, the vertices are the intervals (a finite set); in a 1-interval hypergraph, the vertices are all points in the real line (an uncountable set). As another example, in a 2-interval hypergraph, the vertex set is the disjoint union of two real lines, and each edge is a union of two intervals: one in line #1 and one in line #2. The following two concepts are defined for d-interval hypergraphs just like for finite hypergraphs: the matching number ν(H), the largest number of pairwise-disjoint edges, and the covering (piercing) number τ(H), the smallest number of points that together intersect every edge. ν(H) ≤ τ(H) is true for any hypergraph H. Tibor Gallai proved that, in a 1-interval hypergraph, they are equal: τ(H) = ν(H). This is analogous to Kőnig's theorem for bipartite graphs. Gabor Tardos[1] proved that, in a 2-interval hypergraph, τ(H) ≤ 2ν(H), and it is tight (i.e., every 2-interval hypergraph with a matching of size m can be covered by 2m points). Kaiser[2] proved that, in a d-interval hypergraph, τ(H) ≤ d(d − 1)ν(H), and moreover, every d-interval hypergraph with a matching of size m can be covered by at most d(d − 1)m points, (d − 1)m points on each line. Frick and Zerbib[3] proved a colorful ("rainbow") version of this theorem.
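For d = 1, Gallai's equality τ(H) = ν(H) is realized by the classic greedy sweep: sort the intervals by right endpoint and stab each still-unstabbed interval at its right endpoint. A short sketch (assuming closed intervals given as (left, right) pairs; the function name is ours):

```python
def min_piercing_points(intervals):
    """Greedy stabbing of closed intervals.  The returned points hit
    every interval, and the intervals that triggered a new point are
    pairwise disjoint, so the answer equals both tau and nu."""
    points = []
    last_point = None
    for left, right in sorted(intervals, key=lambda iv: iv[1]):
        if last_point is None or left > last_point:   # not yet stabbed
            last_point = right
            points.append(right)
    return points

print(min_piercing_points([(-2, -1), (0, 5), (3, 7)]))  # [-1, 5]
```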
https://en.wikipedia.org/wiki/D-interval_hypergraph
In mathematics, theErdős–Ko–Rado theoremlimits the number ofsetsin afamily of setsfor which every two sets have at least one element in common.Paul Erdős,Chao Ko, andRichard Radoproved the theorem in 1938, but did not publish it until 1961. It is part of the field ofcombinatorics, and one of the central results ofextremal set theory.[1] The theorem applies to families of sets that all have the samesize,r{\displaystyle r},and are all subsets of some larger set of sizen{\displaystyle n}.One way to construct a family of sets with these parameters, each two sharing an element, is to choose a single element to belong to all the subsets, and then form all of the subsets that contain the chosen element. The Erdős–Ko–Rado theorem states that whenn{\displaystyle n}is large enough for the problem to be nontrivial(n≥2r{\displaystyle n\geq 2r})this construction produces the largest possible intersecting families. Whenn=2r{\displaystyle n=2r}there are other equally-large families, but for larger values ofn{\displaystyle n}only the families constructed in this way can be largest. The Erdős–Ko–Rado theorem can also be described in terms ofhypergraphsorindependent setsinKneser graphs. Several analogous theorems apply to other kinds of mathematical object than sets, includinglinear subspaces,permutations, andstrings. They again describe the largest possible intersecting families as being formed by choosing an element and forming the family of all objects that contain the chosen element. Suppose thatA{\displaystyle {\mathcal {A}}}is a family of distinctr{\displaystyle r}-elementsubsetsof ann{\displaystyle n}-elementsetwithn≥2r{\displaystyle n\geq 2r},and that each two subsets share at least one element. Then the theorem states that the number of sets inA{\displaystyle {\mathcal {A}}}is at most thebinomial coefficient(n−1r−1).{\displaystyle {\binom {n-1}{r-1}}.}The requirement thatn≥2r{\displaystyle n\geq 2r}is necessary for the problem to be nontrivial:whenn<2r{\displaystyle n<2r},allr{\displaystyle r}-elementsets intersect, and the largest intersecting family consists of allr{\displaystyle r}-elementsets, withsize(nr){\displaystyle {\tbinom {n}{r}}}.[2] The same result can be formulated as part of the theory ofhypergraphs. A family of sets may also be called a hypergraph, and when all the sets (which are called "hyperedges" in this context) are the samesizer{\displaystyle r},it is called anr{\displaystyle r}-uniformhypergraph. The theorem thus gives an upper bound for the number of pairwise overlapping hyperedges in anr{\displaystyle r}-uniformhypergraph withn{\displaystyle n}verticesandn≥2r{\displaystyle n\geq 2r}.[3] The theorem may also be formulated in terms ofgraph theory: theindependence numberof theKneser graphKGn,r{\displaystyle KG_{n,r}}forn≥2r{\displaystyle n\geq 2r}isα(KGn,r)=(n−1r−1).{\displaystyle \alpha (KG_{n,r})={\binom {n-1}{r-1}}.}This is a graph with a vertex for eachr{\displaystyle r}-elementsubset of ann{\displaystyle n}-elementset, and an edge between every pair ofdisjoint sets. 
An independent set is a collection of vertices that has no edges between its pairs, and the independence number is the size of the largest independent set.[4] Because Kneser graphs have symmetries taking any vertex to any other vertex (they are vertex-transitive graphs), their fractional chromatic number equals the ratio of their number of vertices to their independence number, so another way of expressing the Erdős–Ko–Rado theorem is that these graphs have fractional chromatic number exactly n/r.[5] Paul Erdős, Chao Ko, and Richard Rado proved this theorem in 1938 after working together on it in England. Rado had moved from Berlin to the University of Cambridge and Erdős from Hungary to the University of Manchester, both escaping the influence of Nazi Germany; Ko was a student of Louis J. Mordell at Manchester.[6] However, they did not publish the result until 1961,[7] with the long delay occurring in part because of a lack of interest in combinatorial set theory in the 1930s, and increased interest in the topic in the 1960s.[6] The 1961 paper stated the result in an apparently more general form, in which the subsets were only required to be of size at most r, and to satisfy the additional requirement that no subset be contained in any other.[7] A family of subsets meeting these conditions can be enlarged to subsets of size exactly r either by an application of Hall's marriage theorem,[8] or by choosing each enlarged subset from the same chain in a symmetric chain decomposition of sets.[9] A simple way to construct an intersecting family of r-element sets whose size exactly matches the Erdős–Ko–Rado bound is to choose any fixed element x, and let A consist of all r-element subsets that include x. For instance, for 2-element subsets of the 4-element set {1,2,3,4}, with x = 1, this produces the family { {1,2}, {1,3}, {1,4} }. Any two sets in this family intersect, because they both include 1. The number of sets is (n−1 choose r−1), because after the fixed element is chosen there remain n − 1 other elements to choose, and each set chooses r − 1 of these remaining elements.[10] When n > 2r this is the only intersecting family of this size. However, when n = 2r, there is a more general construction. Each r-element set can be matched up to its complement, the only r-element set from which it is disjoint. Then, choose one set from each of these complementary pairs. For instance, for the same parameters above, this more general construction can be used to form the family { {2,3}, {2,4}, {3,4} }, where every two sets intersect despite no element belonging to all three sets. In this example, all of the sets have been complemented from the ones in the first example, but it is also possible to complement only some of the sets.[10] When n > 2r, families of the first type (variously known as stars,[1] dictatorships,[11] juntas,[11] centered families,[12] or principal families[13]) are the unique maximum families.
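A quick computational check of the star construction (function names ours): build the star, confirm that it is intersecting, and compare its size with the binomial-coefficient bound.

```python
from itertools import combinations
from math import comb

def star_family(n, r, x=1):
    """All r-element subsets of {1, ..., n} that contain the element x."""
    others = [i for i in range(1, n + 1) if i != x]
    return [frozenset({x}) | frozenset(S) for S in combinations(others, r - 1)]

def is_intersecting(family):
    return all(A & B for A, B in combinations(family, 2))

n, r = 6, 3
F = star_family(n, r)
assert is_intersecting(F)
assert len(F) == comb(n - 1, r - 1)   # the Erdos-Ko-Rado bound, here 10
```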
In this case, a family of nearly-maximum size has an element which is common to almost all of its sets.[14] This property has been called stability,[13] although the same term has also been used for a different property, the fact that (for a wide range of parameters) deleting randomly-chosen edges from the Kneser graph does not increase the size of its independent sets.[15] An intersecting family of r-element sets may be maximal, in that no further set can be added (even by extending the ground set) without destroying the intersection property, but not of maximum size. An example with n = 7 and r = 3 is the set of seven lines of the Fano plane, much less than the Erdős–Ko–Rado bound of 15.[16] More generally, the lines of any finite projective plane of order q form a maximal intersecting family that includes only n sets, for the parameters r = q + 1 and n = q² + q + 1. The Fano plane is the case q = 2 of this construction.[17] The smallest possible size of a maximal intersecting family of r-element sets, in terms of r, is unknown but at least 3r for r ≥ 4.[18] Projective planes produce maximal intersecting families whose number of sets is r² − r + 1, but for infinitely many choices of r there exist smaller maximal intersecting families of size (3/4)r².[17] The largest intersecting families of r-element sets that are maximal but not maximum have size (n−1 choose r−1) − (n−r−1 choose r−1) + 1. They are formed from an element x and an r-element set Y not containing x, by adding Y to the family of r-element sets that include both x and at least one element of Y. This result is called the Hilton–Milner theorem, after its proof by Anthony Hilton and Eric Charles Milner in 1967.[19] The original proof of the Erdős–Ko–Rado theorem used induction on n. The base case, for n = 2r, follows easily from the facts that an intersecting family cannot include both a set and its complement, and that in this case the bound of the Erdős–Ko–Rado theorem is exactly half the number of all r-element sets. The induction step for larger n uses a method called shifting, of substituting elements in intersecting families to make the family smaller in lexicographic order and reduce it to a canonical form that is easier to analyze.[20] In 1972, Gyula O. H. Katona gave the following short proof using double counting.[21] Arrange the n elements in a cyclic order, and call a set of r elements an interval of that order if its elements appear consecutively in it; each cyclic order contains exactly n such intervals, and each set in A is an interval of some cyclic orders. However, only some of these intervals can belong to A, because they do not all intersect. Katona's key observation is that at most r intervals from a single cyclic order may belong to A. This is because, if (a1, a2, …, ar) is one of these intervals, then every other interval of the same cyclic order that belongs to A separates ai from ai+1, for some i, by containing precisely one of these two elements.
The two intervals that separate these elements are disjoint, so at most one of them can belong to A. Thus, the number of intervals in A is at most one plus the number r − 1 of pairs that can be separated.[21] Based on this idea, it is possible to count the pairs (S, C), where S is a set in A and C is a cyclic order for which S is an interval, in two ways. First, for each set S one may generate C by choosing one of r! permutations of S and (n − r)! permutations of the remaining elements, showing that the number of pairs is |A|·r!·(n − r)!. And second, there are (n − 1)! cyclic orders, each of which has at most r intervals of A, so the number of pairs is at most r·(n − 1)!. Comparing these two counts gives the inequality |A|·r!·(n − r)! ≤ r·(n − 1)!, and dividing both sides by r!·(n − r)! gives the result[21] |A| ≤ r·(n − 1)!/(r!·(n − r)!) = (n−1 choose r−1). It is also possible to derive the Erdős–Ko–Rado theorem as a special case of the Kruskal–Katona theorem, another important result in extremal set theory.[22] Many other proofs are known.[23] A generalization of the theorem applies to subsets that are required to have large intersections. This version of the theorem has three parameters: n, the number of elements the subsets are drawn from, r, the size of the subsets, as before, and t, the minimum size of the intersection of any two subsets. For the original form of the Erdős–Ko–Rado theorem, t = 1. In general, for n large enough with respect to the other two parameters, the generalized theorem states that the size of a t-intersecting family of subsets is at most[24] (n−t choose r−t). More precisely, this bound holds when n ≥ (t + 1)(r − t + 1), and does not hold for smaller values of n. When n > (t + 1)(r − t + 1), the only t-intersecting families of this size are obtained by designating t elements as the common intersection of all the subsets, and constructing the family of all r-element subsets that include the t designated elements.[25] The maximal size of a t-intersecting family when n < (t + 1)(r − t + 1) was determined by Ahlswede and Khachatrian, in their Ahlswede–Khachatrian theorem.[26] The corresponding graph-theoretic formulation of this generalization involves Johnson graphs in place of Kneser graphs.[27] For large enough values of n and in particular for n > (1/2)r², both the Erdős–Ko–Rado theorem and its generalization can be strengthened from the independence number to the Shannon capacity of a graph: the Johnson graph corresponding to the t-intersecting r-element subsets has Shannon capacity (n−t choose r−t).[28] The theorem can also be generalized to families in which every h subsets have a common intersection.
Because this strengthens the condition that every pair intersects (for whichh=2{\displaystyle h=2}),these families have the same bound on their maximum size,(n−1r−1){\displaystyle {\tbinom {n-1}{r-1}}}whenn{\displaystyle n}is sufficiently large. However, in this case the meaning of "sufficiently large" can be relaxed fromn≥2r{\displaystyle n\geq 2r}ton≥hh−1r{\displaystyle n\geq {\tfrac {h}{h-1}}r}.[29] Many results analogous to the Erdős–Ko–Rado theorem, but for other classes of objects than finite sets, are known. These generally involve a statement that the largest families of intersecting objects, for some definition of intersection, are obtained by choosing an element and constructing the family of all objects that include that chosen element. Examples include the following: There is aq-analogof the Erdős–Ko–Rado theorem for intersecting families oflinear subspacesoverfinite fields. IfS{\displaystyle {\mathcal {S}}}is an intersecting family ofr{\displaystyle r}-dimensionalsubspaces of ann{\displaystyle n}-dimensionalvector spaceover a finite field oforderq{\displaystyle q},andn≥2r{\displaystyle n\geq 2r},then|S|≤(n−1r−1)q,{\displaystyle |{\mathcal {S}}|\leq {\binom {n-1}{r-1}}_{q},}where the subscriptqmarks the notation for theGaussian binomial coefficient, the number of subspaces of a given dimension within avector spaceof a larger dimension over a finite field oforderq.In this case, a largest intersecting family of subspaces may be obtained by choosing any nonzero vector and constructing the family of subspaces of the given dimension that all contain the chosenvector.[30] Twopermutationson the same set of elements are defined to be intersecting if there is some element that has the same image under both permutations. On ann{\displaystyle n}-elementset, there is an obvious family of(n−1)!{\displaystyle (n-1)!}intersecting permutations, the permutations that fix one of the elements (thestabilizer subgroupof this element). The analogous theorem is that no intersecting family of permutations can be larger, and that the only intersecting families of size(n−1)!{\displaystyle (n-1)!}are thecosetsof one-element stabilizers. These can be described more directly as the families of permutations that map some fixed element to another fixed element. More generally, for anyt{\displaystyle t}and sufficiently largen{\displaystyle n}, a family of permutations each pair of which hast{\displaystyle t}elements in common has maximum size(n−t)!{\displaystyle (n-t)!}, and the only families of this size are cosets of pointwisestabilizers.[31]Alternatively, in graph theoretic terms, then{\displaystyle n}-elementpermutations correspond to theperfect matchingsof acomplete bipartite graphKn,n{\displaystyle K_{n,n}}and the theorem states that, among families of perfect matchings each pair of which sharet{\displaystyle t}edges, the largest families are formed by the matchings that all containt{\displaystyle t}chosenedges.[32]Another analog of the theorem, forpartitions of a set, includes as a special case the perfect matchings of acomplete graphKn{\displaystyle K_{n}}(withn{\displaystyle n}even). There are(n−1)!!{\displaystyle (n-1)!!}matchings, where!!{\displaystyle !!}denotes thedouble factorial. 
The largest family of matchings that pairwise intersect (meaning that they have an edge in common) has size (n−3)!! and is obtained by fixing one edge and choosing all ways of matching the remaining n − 2 vertices.[33] A partial geometry is a system of finitely many abstract points and lines, satisfying certain axioms including the requirement that all lines contain the same number of points and all points belong to the same number of lines. In a partial geometry, a largest system of pairwise-intersecting lines can be obtained from the set of lines through any single point.[34] A signed set consists of a set together with a sign function that maps each element to {1, −1}. Two signed sets may be said to intersect when they have a common element that has the same sign in each of them. Then an intersecting family of r-element signed sets, drawn from an n-element universe, consists of at most 2^(r−1)·(n−1 choose r−1) signed sets. This number of signed sets may be obtained by fixing one element and its sign and letting the remaining r − 1 elements and signs vary.[35] For strings of length n over an alphabet of size q, two strings can be defined to intersect if they have a position where both share the same symbol. The largest intersecting families are obtained by choosing one position and a fixed symbol for that position, and letting the rest of the positions vary arbitrarily. These families consist of q^(n−1) strings, and are the only pairwise intersecting families of this size. More generally, the largest families of strings in which every two have t positions with equal symbols are obtained by choosing t + 2i positions and symbols for those positions, for a number i that depends on n, q, and t, and constructing the family of strings that each have at least t + i of the chosen symbols. These results can be interpreted graph-theoretically in terms of the Hamming scheme.[36] An unproven conjecture, posed by Gil Kalai and Karen Meagher, concerns another analog for the family of triangulations of a convex polygon with n vertices. The number of all triangulations is a Catalan number C_{n−2}, and the conjecture states that a family of triangulations every pair of which shares an edge has maximum size C_{n−3}. An intersecting family of size exactly C_{n−3} may be obtained by cutting off a single vertex of the polygon by a triangle, and choosing all ways of triangulating the remaining (n − 1)-vertex polygon.[37] The Erdős–Ko–Rado theorem can be used to prove the following result in probability theory. Let xi be independent 0–1 random variables with probability p ≥ 1/2 of being one, and let c(x→) be any fixed convex combination of these variables.
ThenPr[c(x→)≥12]≥p.{\displaystyle \Pr \left[c({\vec {x}})\geq {\tfrac {1}{2}}\right]\geq p.}The proof involves observing that subsets of variables whoseindicator vectorshave large convex combinations must be non-disjoint and using the Erdős–Ko–Rado theorem to bound the number of thesesubsets.[38] The stability properties of the Erdős–Ko–Rado theorem play a key role in an efficientalgorithmfor finding monochromatic edges inimproper coloringsof Kneser graphs.[39]The Erdős–Ko–Rado theorem has also been used to characterize the symmetries of the space ofphylogenetic trees.[40]
https://en.wikipedia.org/wiki/Erd%C5%91s%E2%80%93Ko%E2%80%93Rado_theorem
Fractional coloring is a topic in a branch of graph theory known as fractional graph theory. It is a generalization of ordinary graph coloring. In a traditional graph coloring, each vertex in a graph is assigned some color, and adjacent vertices (those connected by edges) must be assigned different colors. In a fractional coloring however, a set of colors is assigned to each vertex of a graph. The requirement about adjacent vertices still holds, so if two vertices are joined by an edge, they must have no colors in common. Fractional graph coloring can be viewed as the linear programming relaxation of traditional graph coloring. Indeed, fractional coloring problems are much more amenable to a linear programming approach than traditional coloring problems. A b-fold coloring of a graph G is an assignment of sets of size b to vertices of a graph such that adjacent vertices receive disjoint sets. An a:b-coloring is a b-fold coloring out of a available colors. Equivalently, it can be defined as a homomorphism to the Kneser graph KGa,b. The b-fold chromatic number χb(G) is the least a such that an a:b-coloring exists. The fractional chromatic number χf(G) is defined to be: χf(G) = lim_{b→∞} χb(G)/b = inf_b χb(G)/b. Note that the limit exists because χb(G) is subadditive, meaning: χa+b(G) ≤ χa(G) + χb(G). The fractional chromatic number can equivalently be defined in probabilistic terms. χf(G) is the smallest k for which there exists a probability distribution over the independent sets of G such that for each vertex v, given an independent set S drawn from the distribution: Pr[v ∈ S] ≥ 1/k. We have: χf(G) ≥ n(G)/α(G), with equality for vertex-transitive graphs, where n(G) is the order of G, α(G) is the independence number.[1] Moreover: ω(G) ≤ χf(G) ≤ χ(G), where ω(G) is the clique number, and χ(G) is the chromatic number. Furthermore, the fractional chromatic number approximates the chromatic number within a logarithmic factor,[2] in fact: χ(G) ≤ (1 + ln α(G))·χf(G). Kneser graphs give examples where χ(G)/χf(G) is arbitrarily large, since: χ(KGm,n) = m − 2n + 2, while χf(KGm,n) = m/n. The fractional chromatic number χf(G) of a graph G can be obtained as a solution to a linear program. Let I(G) be the set of all independent sets of G, and let I(G, x) be the set of all those independent sets which include vertex x. For each independent set I, define a nonnegative real variable xI. Then χf(G) is the minimum value of ∑_{I ∈ I(G)} xI, subject to ∑_{I ∈ I(G,x)} xI ≥ 1 for each vertex x. The dual of this linear program computes the "fractional clique number", a relaxation to the rationals of the integer concept of clique number. That is, it seeks a weighting of the vertices of G, of maximum total weight, such that the total weight assigned to any independent set is at most 1. The strong duality theorem of linear programming guarantees that the optimal solutions to both linear programs have the same value. Note however that each linear program may have size that is exponential in the number of vertices of G, and that computing the fractional chromatic number of a graph is NP-hard.[3] This stands in contrast to the problem of fractionally coloring the edges of a graph, which can be solved in polynomial time.
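The exponential-size LP just described can be written down verbatim for small graphs. A sketch with scipy (function name ours; it enumerates all independent sets, so it is for toy instances only):

```python
import numpy as np
from itertools import chain, combinations
from scipy.optimize import linprog

def fractional_chromatic_number(V, E):
    """Solve the covering LP: minimize the sum of x_I over independent
    sets I, subject to every vertex being covered with weight >= 1."""
    V = list(V)
    edge_set = {frozenset(e) for e in E}
    def independent(S):
        return all(frozenset(p) not in edge_set for p in combinations(S, 2))
    ind_sets = [S for S in chain.from_iterable(
        combinations(V, k) for k in range(1, len(V) + 1)) if independent(S)]
    A = np.zeros((len(V), len(ind_sets)))   # -1 encodes >= as <= for linprog
    for j, S in enumerate(ind_sets):
        for v in S:
            A[V.index(v), j] = -1.0
    res = linprog(np.ones(len(ind_sets)), A_ub=A, b_ub=-np.ones(len(V)))
    return res.fun

# The 5-cycle: chi = 3, but chi_f = 5/2.
print(fractional_chromatic_number(range(5), [(i, (i + 1) % 5) for i in range(5)]))
```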
This is a straightforward consequence of Edmonds' matching polytope theorem.[4][5] Applications of fractional graph coloring includeactivity scheduling. In this case, the graphGis aconflict graph: an edge inGbetween the nodesuandvdenotes thatuandvcannot be active simultaneously. Put otherwise, the set of nodes that are active simultaneously must be an independent set in graphG. An optimal fractional graph coloring inGthen provides a shortest possible schedule, such that each node is active for (at least) 1 time unit in total, and at any point in time the set of active nodes is an independent set. If we have a solutionxto the above linear program, we simply traverse all independent setsIin an arbitrary order. For eachI, we let the nodes inIbe active forxI{\displaystyle x_{I}}time units; meanwhile, each node not inIis inactive. In more concrete terms, each node ofGmight represent aradio transmissionin a wireless communication network; the edges ofGrepresentinterferencebetween radio transmissions. Each radio transmission needs to be active for 1 time unit in total; an optimal fractional graph coloring provides a minimum-length schedule (or, equivalently, a maximum-bandwidth schedule) that is conflict-free. If one further required that each node must be activecontinuouslyfor 1 time unit (without switching it off and on every now and then), then traditional graphvertex coloringwould provide an optimal schedule: first the nodes of color 1 are active for 1 time unit, then the nodes of color 2 are active for 1 time unit, and so on. Again, at any point in time, the set of active nodes is an independent set. In general, fractional graph coloring provides a shorter schedule than non-fractional graph coloring; there is anintegrality gap. It may be possible to find a shorter schedule, at the cost of switching devices (such as radio transmitters) on and off more than once.
https://en.wikipedia.org/wiki/Fractional_coloring
In graph theory, a path in an edge-colored graph is said to be rainbow if no color repeats on it. A graph is said to be rainbow-connected (or rainbow colored) if there is a rainbow path between each pair of its vertices. If there is a rainbow shortest path between each pair of vertices, the graph is said to be strongly rainbow-connected (or strongly rainbow colored).[1] The rainbow connection number of a graph G is the minimum number of colors needed to rainbow-connect G, and is denoted by rc(G). Similarly, the strong rainbow connection number of a graph G is the minimum number of colors needed to strongly rainbow-connect G, and is denoted by src(G). Clearly, each strong rainbow coloring is also a rainbow coloring, while the converse is not true in general. It is easy to observe that to rainbow-connect any connected graph G, we need at least diam(G) colors, where diam(G) is the diameter of G (i.e. the length of the longest shortest path). On the other hand, we can never use more than m colors, where m denotes the number of edges in G. Finally, because each strongly rainbow-connected graph is rainbow-connected, we have that diam(G) ≤ rc(G) ≤ src(G) ≤ m. The extremal cases of these inequalities are known.[1] In particular, in terms of the number of vertices, the upper bound rc(G) ≤ n − 1 is the best possible in general. In fact, a rainbow coloring using n − 1 colors can be constructed by coloring the edges of a spanning tree of G in distinct colors. The remaining uncolored edges are colored arbitrarily, without introducing new colors. When G is 2-connected, we have that rc(G) ≤ ⌈n/2⌉.[2] Moreover, this is tight as witnessed by e.g. odd cycles. The rainbow or the strong rainbow connection number has also been determined for several structured graph classes. The problem of deciding whether rc(G) = 2 for a given graph G is NP-complete.[3] Because rc(G) = 2 if and only if src(G) = 2,[1] it follows that deciding if src(G) = 2 is NP-complete for a given graph G. Chartrand, Okamoto and Zhang[4] generalized the rainbow connection number as follows. Let G be an edge-colored nontrivial connected graph of order n. A tree T is a rainbow tree if no two edges of T are assigned the same color. Let k be a fixed integer with 2 ≤ k ≤ n. An edge coloring of G is called a k-rainbow coloring if for every set S of k vertices of G, there is a rainbow tree in G containing the vertices of S. The k-rainbow index rxk(G) of G is the minimum number of colors needed in a k-rainbow coloring of G. A k-rainbow coloring using rxk(G) colors is called a minimum k-rainbow coloring.
Thusrx2(G){\displaystyle {\text{rx}}_{2}(G)}is the rainbow connection number ofG{\displaystyle G}. Rainbow connection has also been studied in vertex-colored graphs. This concept was introduced by Krivelevich andYuster.[5]Here, therainbow vertex-connection numberof a graphG{\displaystyle G}, denoted byrvc(G){\displaystyle {\text{rvc}}(G)}, is the minimum number of colors needed to colorG{\displaystyle G}such that for each pair of vertices, there is a path connecting them whose internal vertices are assigned distinct colors.
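Rainbow connectivity of a small edge-colored graph can be checked by searching for a rainbow path between every pair of vertices. A minimal sketch (names ours; worst-case exponential, as expected given the NP-completeness result above):

```python
def is_rainbow_connected(V, colored_edges):
    """colored_edges: dict mapping frozenset({u, v}) -> color."""
    adjacency = {v: [] for v in V}
    for e, c in colored_edges.items():
        u, w = tuple(e)
        adjacency[u].append((w, c))
        adjacency[w].append((u, c))

    def rainbow_path_exists(u, target, visited, used_colors):
        if u == target:
            return True
        return any(
            rainbow_path_exists(w, target, visited | {w}, used_colors | {c})
            for w, c in adjacency[u]
            if w not in visited and c not in used_colors)

    V = list(V)
    return all(rainbow_path_exists(s, t, {s}, set())
               for i, s in enumerate(V) for t in V[i + 1:])

# A path a-b-c with both edges the same color is not rainbow-connected.
edges = {frozenset({"a", "b"}): 1, frozenset({"b", "c"}): 1}
print(is_rainbow_connected({"a", "b", "c"}, edges))  # False
```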
https://en.wikipedia.org/wiki/Rainbow_coloring
In graph theory, a rainbow-independent set (ISR) is an independent set in a graph, in which each vertex has a different color. Formally, let G = (V, E) be a graph, and suppose vertex set V is partitioned into m subsets V1, …, Vm, called "colors". A set U of vertices is called a rainbow-independent set if it satisfies both the following conditions: it is an independent set in G, and it contains at most one vertex from each color class Vi.[1] Other terms used in the literature are independent set of representatives,[2] independent transversal,[3] and independent system of representatives.[4] As an example application, consider a faculty with m departments, where some faculty members dislike each other. The dean wants to construct a committee with m members, one member per department, but without any pair of members who dislike each other. This problem can be presented as finding an ISR in a graph in which the nodes are the faculty members, the edges describe the "dislike" relations, and the subsets V1, …, Vm are the departments.[3] It is assumed for convenience that the sets V1, …, Vm are pairwise-disjoint. In general the sets may intersect, but this case can be easily reduced to the case of disjoint sets: for every vertex x, form a copy of x for each i such that Vi contains x. In the resulting graph, connect all copies of x to each other. In the new graph, the Vi are disjoint, and each ISR corresponds to an ISR in the original graph.[4] ISR generalizes the concept of a system of distinct representatives (SDR, also known as transversal). Every transversal is an ISR in the graph in which two vertices are connected exactly when they are copies of the same element in different sets. There are various sufficient conditions for the existence of an ISR. Intuitively, when the departments Vi are larger, and there is less conflict between faculty members, an ISR should be more likely to exist. The "less conflict" condition is represented by the vertex degree of the graph. This is formalized by the following theorem:[5]: Thm.2 If the degree of every vertex in G is at most d, and the size of each color-set is at least 2d, then G has an ISR. The 2d is best possible: there are graphs with vertex degree d and colors of size 2d − 1 without an ISR.[6] But there is a more precise version in which the bound depends both on d and on m.[7] Below, given a subset S of colors (a subset of {V1, ..., Vm}), we denote by US the union of all subsets in S (all vertices whose color is one of the colors in S), and by GS the subgraph of G induced by US.[8] The following theorem describes the structure of graphs that have no ISR but are edge-minimal, in the sense that whenever any edge is removed from them, the remaining graph has an ISR. If G has no ISR, but for every edge e in E, G − e has an ISR, then for every edge e = (x, y) in E, there exists a subset S of the colors {V1, …, Vm} and a set Z of edges of GS with certain structural properties. Below, given a subset S of colors (a subset of {V1, …, Vm}), an independent set IS of GS is called special for S if for every independent subset J of vertices of GS of size at most |S| − 1, there exists some v in IS such that J ∪ {v} is also independent. Figuratively, IS is a team of "neutral members" for the set S of departments, that can augment any sufficiently small set of non-conflicting members, to create a larger such set. The following theorem is analogous to Hall's marriage theorem:[9] If, for every subset S of colors, the graph GS contains an independent set IS that is special for S, then G has an ISR. Proof idea. The theorem is proved using Sperner's lemma.[3]: Thm.4.2 The standard simplex with m endpoints is assigned a triangulation with some special properties.
Each endpoint i of the simplex is associated with the color-set Vi, each face {i1, …, ik} of the simplex is associated with a set S = {Vi1, …, Vik} of colors. Each point x of the triangulation is labeled with a vertex g(x) of G such that: (a) For each point x on a face S, g(x) is an element of IS – the special independent set of S. (b) If points x and y are adjacent in the 1-skeleton of the triangulation, then g(x) and g(y) are not adjacent in G. By Sperner's lemma, there exists a sub-simplex in which, for each point x, g(x) belongs to a different color-set; the set of these g(x) is an ISR. The above theorem implies Hall's marriage condition. To see this, it is useful to state the theorem for the special case in which G is the line graph of some other graph H; this means that every vertex of G is an edge of H, and every independent set of G is a matching in H. The vertex-coloring of G corresponds to an edge-coloring of H, and a rainbow-independent-set in G corresponds to a rainbow-matching in H. A matching IS in HS is special for S, if for every matching J in HS of size at most |S| − 1, there is an edge e in IS such that J ∪ {e} is still a matching in HS. Let H be a graph with an edge-coloring. If, for every subset S of colors, the graph HS contains a matching MS that is special for S, then H has a rainbow-matching. Let H = (X + Y, E) be a bipartite graph satisfying Hall's condition. For each vertex i of X, assign a unique color Vi to all edges of H adjacent to i. For every subset S of colors, Hall's condition implies that S has at least |S| neighbors in Y, and therefore there are at least |S| edges of H adjacent to distinct vertices of Y. Let IS be a set of |S| such edges. For any matching J of size at most |S| − 1 in H, some element e of IS has a different endpoint in Y than all elements of J, and thus J ∪ {e} is also a matching, so IS is special for S. The above theorem implies that H has a rainbow matching MR. By definition of the colors, MR is a perfect matching in H. Another corollary of the above theorem is the following condition, which involves both vertex degree and cycle length:[3]: Thm.4.3 If the degree of every vertex in G is at most 2, and the length of each cycle of G is divisible by 3, and the size of each color-set is at least 3, then G has an ISR. Proof. For every subset S of colors, the graph GS contains at least 3|S| vertices, and it is a union of cycles of length divisible by 3 and paths. Let IS be an independent set in GS containing every third vertex in each cycle and each path. So IS contains at least 3|S|/3 = |S| vertices. Let J be an independent set in GS of size at most |S| − 1. Since the distance between each two vertices of IS is at least 3, every vertex of J is adjacent to at most one vertex of IS. Therefore, there is at least one vertex of IS which is not adjacent to any vertex of J. Therefore IS is special for S. By the previous theorem, G has an ISR. One family of conditions is based on the homological connectivity of the independence complex of subgraphs. To state the conditions, the following notation is used: The following condition is implicit in[9] and proved explicitly in.[10] If, for all subsets J of [m]: then the partition V1, …, Vm admits an ISR. As an example,[4] suppose G is a bipartite graph, and its parts are exactly V1 and V2.
In this case [m] = {1, 2}, so there are four options for J: the empty set, {1}, {2}, and {1, 2}. Every properly coloured triangle-free graph of chromatic number x contains a rainbow-independent set of size at least x⁄2.[11] Several authors have studied conditions for the existence of large rainbow-independent sets in various classes of graphs.[1][12] The ISR decision problem is the problem of deciding whether a given graph G = (V, E) and a given partition of V into m colors admit a rainbow-independent set. This problem is NP-complete. The proof is by reduction from the 3-dimensional matching problem (3DM).[4] The input to 3DM is a tripartite hypergraph (X + Y + Z, F), where X, Y, Z are vertex-sets of size m, and F is a set of triplets, each of which contains a single vertex of each of X, Y, Z. An input to 3DM can be converted into an input to ISR such that, in the resulting graph G = (V, E), an ISR corresponds to a set of m pairwise-disjoint triplets (x, y, z). Therefore, the resulting graph admits an ISR if and only if the original hypergraph admits a 3DM. An alternative proof is by reduction from SAT.[3] If G is the line graph of some other graph H, then the independent sets in G are the matchings in H. Hence, a rainbow-independent set in G is a rainbow matching in H. See also matching in hypergraphs. Another related concept is a rainbow cycle, which is a cycle in which each vertex has a different color.[13] When an ISR exists, a natural question is whether there exist other ISRs, such that the entire set of vertices is partitioned into disjoint ISRs (assuming the number of vertices in each color is the same). Such a partition is called strong coloring. Using the faculty metaphor, a strong coloring partitions all faculty members into committees, each containing exactly one member of every department and no two members who dislike each other.[3] A rainbow clique or a colorful clique is a clique in which every vertex has a different color.[10] Every clique in a graph corresponds to an independent set in its complement graph. Therefore, every rainbow clique in a graph corresponds to a rainbow-independent set in its complement graph.
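Since the ISR decision problem is NP-complete, exhaustive search is the natural baseline for small instances. The following is a minimal brute-force sketch in Python; the function name, graph encoding, and example are illustrative assumptions, not from the literature. It tries every choice of one vertex per color class and returns the first choice that is independent.

from itertools import product

def find_isr(adj, colors):
    """Brute-force search for a full rainbow-independent set (illustrative).

    adj    -- dict mapping each vertex to the set of its neighbors
    colors -- list of pairwise-disjoint color classes V_1, ..., V_m
    Tries one vertex per color class; exponential time, as expected
    since the ISR decision problem is NP-complete.
    """
    for choice in product(*colors):
        independent = all(u not in adj[v]
                          for i, u in enumerate(choice)
                          for v in choice[i + 1:])
        if independent:
            return list(choice)
    return None

# Faculty example: two departments; a1 and b1 dislike each other.
adj = {"a1": {"b1"}, "a2": set(), "b1": {"a1"}, "b2": set()}
print(find_isr(adj, [["a1", "a2"], ["b1", "b2"]]))  # ['a1', 'b2']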
https://en.wikipedia.org/wiki/Rainbow-independent_set
In combinatorics and computer science, covering problems are computational problems that ask whether a certain combinatorial structure 'covers' another, or how large the structure has to be to do that. Covering problems are minimization problems and usually integer linear programs, whose dual problems are called packing problems. The most prominent examples of covering problems are the set cover problem, which is equivalent to the hitting set problem, and its special cases, the vertex cover problem and the edge cover problem. Covering problems allow the covering primitives to overlap; the process of covering something with non-overlapping primitives is called decomposition. In the context of linear programming, one can think of any minimization linear program as a covering problem if the coefficients in the constraint matrix, the objective function, and the right-hand side are nonnegative.[1] More precisely, consider the following general integer linear program: minimize ∑i ci xi subject to ∑i aji xi ≥ bj for j = 1, …, m, with every xi a nonnegative integer. Such an integer linear program is called a covering problem if aji, bj, ci ≥ 0 for all i = 1, …, n and j = 1, …, m. Intuition: assume there are n types of objects, and each object of type i has an associated cost of ci. The number xi indicates how many objects of type i we buy. If the constraints Ax ≥ b are satisfied, it is said that x is a covering (the structures that are covered depend on the combinatorial context). Finally, an optimal solution to the above integer linear program is a covering of minimal cost. There are various kinds of covering problems in graph theory, computational geometry and more; see Category:Covering problems. Stochastic versions of the problem have also been studied.[2] For Petri nets, the covering problem is defined as the question whether, for a given marking, there exists a run of the net such that some larger (or equal) marking can be reached. Larger means here that all components are at least as large as the ones of the given marking and at least one is strictly larger. In some covering problems, the covering should satisfy some additional requirements. In particular, in the rainbow covering problem, each of the original objects has a "color", and it is required that the covering contains exactly one (or at most one) object of each color. Rainbow covering was studied, e.g., for covering points by intervals.[5] The problem is NP-hard (by reduction from linear SAT). A more general notion is conflict-free covering.[6] In this problem, there is a set P of objects to cover, a set O of covering objects, and a conflict graph on O; a subset of O is conflict-free if it contains no two objects that are adjacent in the conflict graph. Conflict-free set cover is the problem of finding a conflict-free subset of O that is a covering of P. Banik, Panolan, Raman, Sahlot and Saurabh[7] prove algorithmic results for the special case in which the conflict-graph has bounded arboricity.
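The covering ILP above is NP-hard to solve exactly in general, but the classical greedy heuristic gives a ln(n)-factor approximation for weighted set cover. The sketch below is an illustration of that textbook heuristic, not an algorithm from this article; function and variable names are invented. It repeatedly buys the set with the best cost per newly covered element, assuming the given subsets can actually cover the universe.

def greedy_set_cover(universe, subsets, costs):
    """Greedy ln(n)-approximation for weighted set cover (illustrative).

    Repeatedly picks the subset minimizing cost per still-uncovered
    element. Assumes the union of subsets covers the universe.
    """
    uncovered = set(universe)
    chosen = []
    while uncovered:
        i = min((j for j in range(len(subsets)) if subsets[j] & uncovered),
                key=lambda j: costs[j] / len(subsets[j] & uncovered))
        chosen.append(i)
        uncovered -= subsets[i]
    return chosen

subsets = [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}]
print(greedy_set_cover({1, 2, 3, 4, 5}, subsets, costs=[5, 10, 3, 1]))
# [3, 0]: buy {4, 5} (cost 1), then {1, 2, 3} (cost 5)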
https://en.wikipedia.org/wiki/Rainbow_covering
In themathematicalfield ofgraph theory,Hall-type theorems for hypergraphsare severalgeneralizationsofHall's marriage theoremfromgraphstohypergraphs. Such theorems were proved by Ofra Kessler,[1][2]Ron Aharoni,[3][4]Penny Haxell,[5][6]Roy Meshulam,[7]and others. Hall's marriage theorem provides a condition guaranteeing that abipartite graph(X+Y,E)admits aperfect matching, or - more generally - amatchingthat saturates allverticesofY. The condition involves the number ofneighborsofsubsetsofY. Generalizing Hall's theorem to hypergraphs requires a generalization of the concepts of bipartiteness, perfect matching, and neighbors. 1.Bipartiteness: The notion of a bipartiteness can be extended to hypergraphs in many ways (seebipartite hypergraph). Here we define a hypergraph as bipartite if it isexactly 2-colorable, i.e., its vertices can be 2-colored such that each hyperedge contains exactly one yellow vertex. In other words,Vcan be partitioned into two setsXandY, such that each hyperedge contains exactly one vertex ofY.[1]Abipartite graphis a special case in which each edge contains exactly one vertex ofYand also exactly one vertex ofX; in a bipartite hypergraph, each hyperedge contains exactly one vertex ofYbut may contain zero or more vertices ofX. For example, the hypergraph(V,E)withV= {1,2,3,4,5,6}andE= { {1,2,3}, {1,2,4}, {1,3,4}, {5,2}, {5,3,4,6} }is bipartite withY= {1,5}andX= {2,3,4,6}. 2.Perfect matching: Amatching in a hypergraphH= (V,E)is a subsetFofE, such that every two hyperedges ofFare disjoint. IfHis bipartite with partsXandY, then the size of each matching is obviously at most|Y|. A matching is calledY-perfect(orY-saturating) if its size is exactly|Y|. In other words: every vertex ofYappears in exactly one hyperedge ofM. This definition reduces to the standard definition of aY-perfect matching in a bipartite graph. 3.Neighbors:Given a bipartite hypergraphH= (X+Y,E)and a subsetY0ofY, the neighbors ofY0are the subsets ofXthat share hyperedges with vertices ofY0. Formally: For example, in the hypergraph from point 1, we have:NH({1}) = { {2,3}, {2,4}, {3,4} }andNH({5}) = { {2}, {3,4,6} }andNH({1,5}) = { {2,3}, {2,4}, {3,4}, {2}, {3,4,6} }.Note that, in a bipartite graph, each neighbor is a singleton - the neighbors are just the vertices ofXthat are adjacent to one or more vertices ofY0. In a bipartite hypergraph, each neighbor is a set - the neighbors are the subsets ofXthat are "adjacent" to one or more vertices ofY0. SinceNH(Y0)contains only subsets ofX, one can define a hypergraph in which the vertex set isXand the edge set isNH(Y0). We call it the neighborhood-hypergraph ofY0and denote it: Note that, ifHis a simple bipartite graph, the neighborhood-hypergraph of everyY0contains just the neighbors ofY0inX, each of which with a self-loop. Hall's condition requires that, for each subsetY0ofY, the set of neighbors ofY0is sufficiently large. With hypergraphs this condition is insufficient. For example, consider the tripartite hypergraph with edges: { {1, a, A}, {2, a, B} } LetY= {1,2}.Every vertex inYhas a neighbor, andYitself has two neighbors:NH(Y) = { {a,A}, {a,B} }.But there is noY-perfect matching since both edges overlap. One could try to fix it by requiring thatNH(Y0)contain at least|Y0|disjointedges, rather than just|Y0|edges. In other words:HH(Y0)should contain amatchingof size at least|Y0|. The largest size of a matching in a hypergraphHis called its matching number and denoted byν(H)(thusHadmits aY-perfect matching if and only ifν(H) = |Y|). 
However, this fix is insufficient, as shown by the following tripartite hypergraph: { {1, a, A}, {1, b, B}, {2, a, B}, {2, b, A} }. Let Y = {1, 2}. Again every vertex in Y has a neighbor, and Y itself has four neighbors: NH(Y) = { {a,A}, {a,B}, {b,A}, {b,B} }. Moreover, ν(HH(Y)) = 2, since HH(Y) admits a matching of size 2, e.g. { {a,A}, {b,B} } or { {a,B}, {b,A} }. However, H does not admit a Y-perfect matching, since every hyperedge that contains 1 overlaps every hyperedge that contains 2. Thus, to guarantee a perfect matching, a stronger condition is needed. Various such conditions have been suggested. Let H = (X + Y, E) be a bipartite hypergraph (as defined in 1. above), in which the size of every hyperedge is exactly r, for some integer r > 1. Suppose that, for every subset Y0 of Y, the following inequality holds: ν(HH(Y0)) > (r − 1)(|Y0| − 1). In words: the neighborhood-hypergraph of Y0 admits a matching larger than (r − 1)(|Y0| − 1). Then H admits a Y-perfect matching (as defined in 2. above). This was first conjectured by Aharoni.[3] It was proved with Ofra Kessler for bipartite hypergraphs in which |Y| ≤ 4[1] and for |Y| = 5.[2] It was later proved for all r-uniform hypergraphs.[6]: Corollary 1.2 For a bipartite simple graph, r = 2, and Aharoni's condition becomes ν(HH(Y0)) > |Y0| − 1, i.e., ν(HH(Y0)) ≥ |Y0|. Moreover, the neighborhood-hypergraph (as defined in 3. above) contains just singletons - a singleton for every neighbor of Y0. Since singletons do not intersect, the entire set of singletons is a matching. Hence, ν(HH(Y0)) = |NH(Y0)| = the number of neighbors of Y0. Thus, Aharoni's condition becomes, for every subset Y0 of Y: |NH(Y0)| ≥ |Y0|. This is exactly Hall's marriage condition. The following example shows that the factor (r − 1) cannot be improved. Choose some integer m > 1. Let H = (X + Y, E) be an r-uniform bipartite hypergraph composed of edge-sets E1, …, Em, in which edge i in Em meets all edges in Ei. This H does not admit a Y-perfect matching, since every hyperedge that contains m intersects all hyperedges in Ei for some i < m. However, every subset Y0 of Y satisfies the inequality ν(HH(Y0)) ≥ (r − 1)(|Y0| − 1), since HH(Y0 \ {m}) contains at least (r − 1) ⋅ (|Y0| − 1) hyperedges, and they are all disjoint. The largest size of a fractional matching in H is denoted by ν*(H). Clearly ν*(H) ≥ ν(H). Suppose that, for every subset Y0 of Y, the following weaker inequality holds: ν*(HH(Y0)) > (r − 1)(|Y0| − 1). It was conjectured that in this case, too, H admits a Y-perfect matching. This stronger conjecture was proved for bipartite hypergraphs in which |Y| = 2.[4] Later it was proved[4] that, if the above condition holds, then H admits a Y-perfect fractional matching, i.e., ν*(H) = |Y|. This is weaker than having a Y-perfect matching, which is equivalent to ν(H) = |Y|. A transversal (also called vertex-cover or hitting-set) in a hypergraph H = (V, E) is a subset U of V such that every hyperedge in E contains at least one vertex of U. The smallest size of a transversal in H is denoted by τ(H). Let H = (X + Y, E) be a bipartite hypergraph in which the size of every hyperedge is at most r, for some integer r > 1. Suppose that, for every subset Y0 of Y, the following inequality holds: τ(HH(Y0)) > (2r − 3)(|Y0| − 1). In words: the neighborhood-hypergraph of Y0 has no transversal of size (2r − 3)(|Y0| − 1) or less. Then H admits a Y-perfect matching (as defined in 2. above).[5]: Theorem 3 For a bipartite simple graph, r = 2, so 2r − 3 = 1, and Haxell's condition becomes τ(HH(Y0)) > |Y0| − 1. Moreover, the neighborhood-hypergraph (as defined in 3. above) contains just singletons - a singleton for every neighbor of Y0. In a hypergraph of singletons, a transversal must contain all vertices. Hence, τ(HH(Y0)) = |NH(Y0)| = the number of neighbors of Y0. Thus, Haxell's condition becomes, for every subset Y0 of Y: |NH(Y0)| ≥ |Y0|. This is exactly Hall's marriage condition.
Thus, Haxell's theorem implies Hall's marriage theorem for bipartite simple graphs. The following example shows that the factor (2r − 3) cannot be improved. Let H = (X + Y, E) be an r-uniform bipartite hypergraph with Y = {0, 1}, whose structure is described below through the neighborhood-hypergraph of Y. This H does not admit a Y-perfect matching, since every hyperedge that contains 0 intersects every hyperedge that contains 1. However, every subset Y0 of Y satisfies the inequality τ(HH(Y0)) ≥ (2r − 3)(|Y0| − 1). It is only slightly weaker (by 1) than required by Haxell's theorem. To verify this, it is sufficient to check the subset Y0 = Y, since it is the only subset for which the right-hand side is larger than 0. The neighborhood-hypergraph of Y is (X, E00 ∪ E11), described as follows. One can visualize the vertices of X as arranged on an (r − 1) × (r − 1) grid. The hyperedges of E00 are the r − 1 rows. The hyperedges of E11 are the (r − 1)^(r−1) selections of a single element in each row. To cover the hyperedges of E00 we need r − 1 vertices - one vertex in each row. Since all columns are symmetric in the construction, we can assume that we take all the vertices in column 1 (i.e., vi1 for each i in {1, …, r − 1}). Now, since E11 contains all columns, we need at least r − 2 additional vertices - one vertex for each of the columns {2, …, r − 1}. All in all, each transversal requires at least 2r − 3 vertices. Haxell's proof is not constructive. However, Chidambaram Annamalai proved that a perfect matching can be found efficiently under a slightly stronger condition.[8] For every fixed choice of r ≥ 2 and ε > 0, there exists an algorithm that finds a Y-perfect matching in every r-uniform bipartite hypergraph satisfying, for every subset Y0 of Y, a slightly stronger form of Haxell's inequality, with the constant depending on ε. In fact, in any r-uniform hypergraph, the algorithm finds either a Y-perfect matching or a subset Y0 violating the above inequality. The algorithm runs in time polynomial in the size of H, but exponential in r and 1⁄ε. It is an open question whether there exists an algorithm with run-time polynomial in either r or 1⁄ε (or both). Similar algorithms have been applied for solving problems of fair item allocation, in particular the santa-claus problem.[9][10][11] We say that a set K of edges pins another set F of edges if every edge in F intersects some edge in K.[6] The width of a hypergraph H = (V, E), denoted w(H), is the smallest size of a subset of E that pins E.[7] The matching width of a hypergraph H, denoted mw(H), is the maximum, over all matchings M in H, of the minimum size of a subset of E that pins M.[12] Since a subset of E that pins all of E also pins every matching in E, the width of H is at least as large as its matching-width. Aharoni and Haxell proved the following condition. Let H = (X + Y, E) be a bipartite hypergraph. Suppose that, for every subset Y0 of Y, the following inequality holds: mw(NH(Y0)) ≥ |Y0| [in other words: NH(Y0) contains a matching M(Y0) such that at least |Y0| disjoint edges from NH(Y0) are required for pinning M(Y0)]. Then H admits a Y-perfect matching.[6]: Theorem 1.1 They later extended this condition in several ways, which were subsequently extended by Meshulam as follows. Let H = (X + Y, E) be a bipartite hypergraph. Suppose that, for every subset Y0 of Y, at least one of the following conditions holds: mw(NH(Y0)) ≥ |Y0| or w(NH(Y0)) ≥ 2|Y0| − 1. Then H admits a Y-perfect matching.[7]: Theorem 1.4 In a bipartite simple graph, the neighborhood-hypergraph contains just singletons - a singleton for every neighbor of Y0.
Since singletons do not intersect, the entire set of neighbors NH(Y0) is a matching, and its only pinning-set is the set NH(Y0) itself; i.e., the matching-width of NH(Y0) is |NH(Y0)|, and its width is the same: mw(NH(Y0)) = w(NH(Y0)) = |NH(Y0)|. Thus, both the above conditions are equivalent to Hall's marriage condition. We consider several bipartite graphs with Y = {1, 2} and X = {A, B; a, b, c}. The Aharoni–Haxell condition trivially holds for the empty set. It holds for subsets of size 1 if and only if each vertex in Y is contained in at least one edge, which is easy to check. It remains to check the subset Y itself. Consider a bipartite hypergraph H = (X + Y, E) where Y = {1, …, m}. The Hall-type theorems do not care about the set Y itself - they only care about the neighbors of elements of Y. Therefore H can be represented as a collection of families of sets {H1, …, Hm}, where for each i in [m], Hi := NH({i}) = the set-family of neighbors of i. For every subset Y0 of Y, the set-family NH(Y0) is the union of the set-families Hi for i in Y0. A perfect matching in H is a set-family of size m, where for each i in [m], the set-family Hi is represented by a set Ri in Hi, and the representative sets Ri are pairwise-disjoint. In this terminology, the Aharoni–Haxell theorem can be stated as follows. Let A = {H1, …, Hm} be a collection of families of sets. For every sub-collection B of A, consider the set-family ∪B - the union of all the Hi in B. Suppose that, for every sub-collection B of A, this ∪B contains a matching M(B) such that at least |B| disjoint subsets from ∪B are required for pinning M(B). Then A admits a system of disjoint representatives. There is also a necessary-and-sufficient version of this condition: H admits a Y-perfect matching if and only if one can assign to every subset Y0 of Y a matching M(Y0) in NH(Y0) such that pinning M(Y0) requires at least |Y0| disjoint edges taken from the union of the matchings M(Y1) over subsets Y1 of Y0.[6]: Theorem 4.1 The same statement can be made in the set-family formulation for a collection A = {H1, …, Hm} of families of sets. Consider example #3 above: H = { {1,A,a}, {1,A,b}; {1,B,a}, {1,B,b}; {2,A,a}, {2,A,b}; {2,B,a}, {2,B,b} }. Since it admits a Y-perfect matching, it must satisfy the necessary condition. Indeed, consider the assignment M({1}) = { {A,a} }, M({2}) = { {B,b} }, M({1,2}) = { {A,a}, {B,b} }. In the sufficient condition, pinning M({1,2}) required at least two edges from NH(Y) = { {A,a}, {B,b}, {A,b}, {B,a} }; it did not hold. But in the necessary condition, pinning M({1,2}) required at least two edges from M({1}) ∪ M({2}) ∪ M({1,2}) = { {A,a}, {B,b} }; it does hold. Hence, the necessary+sufficient condition is satisfied. The proof is topological and uses Sperner's lemma. Interestingly, it implies a new topological proof for the original Hall theorem.[13] First, assume that no two vertices in Y have exactly the same neighbor (this is without loss of generality, since for each element y of Y, one can add a dummy vertex to all neighbors of y). Let Y = {1, …, m}. They consider an m-vertex simplex, and prove that it admits a triangulation T with some special properties that they call an economically-hierarchic triangulation. Then they label each vertex of T with a hyperedge from NH(Y), subject to two conditions, (a) and (b); their sufficient condition implies that such a labeling exists. Then, they color each vertex v of T with a color i such that the hyperedge assigned to v is a neighbor of i. Conditions (a) and (b) guarantee that this coloring satisfies Sperner's boundary condition. Therefore, a fully-labeled simplex exists. In this simplex there are m hyperedges, each of which is a neighbor of a different element of Y, and so they must be disjoint. This is the desired Y-perfect matching. The Aharoni–Haxell theorem has a deficiency version. It is used to prove Ryser's conjecture for r = 3.[12] Let V be a set of vertices. Let C be an abstract simplicial complex on V. Let Vy (for y in Y) be subsets of V.
A C-V-transversal is a set in C (an element of C) whose intersection with each Vy contains exactly one vertex. For every subset Y0 of Y, let VY0 denote the union of the sets Vy for y in Y0. Suppose that, for every subset Y0 of Y, the homological connectivity plus 2 of the sub-complex induced by VY0 is at least |Y0|, that is, η(C[VY0]) ≥ |Y0|, where η denotes the homological connectivity plus 2. Then there exists a C-V-transversal; that is, there is a set in C that intersects each Vy by exactly one element.[14] This theorem has a deficiency version:[15] if, for every subset Y0 of Y, η(C[VY0]) ≥ |Y0| − d, then there exists a partial C-transversal that intersects some |Y| − d sets by exactly 1 element, and the rest by at most 1 element. More generally, if g is a function on positive integers satisfying g(z + 1) ≤ g(z) + 1, and for every subset Y0 of Y, η(C[VY0]) ≥ g(|Y0|), then there is a set in C that intersects at least g(|Y|) of the Vy by exactly one element, and the others by at most one element. Using the above theorem requires some lower bounds on homological connectivity. One such lower bound is given by Meshulam's game. This is a game played by two players on a graph. One player - CON - wants to prove that the graph has a high homological connectivity. The other player - NON - wants to prove otherwise. CON offers edges to NON one by one; NON can either disconnect an edge, or explode it; an explosion deletes the edge endpoints and all their neighbors. CON's score is the number of explosions when all vertices are gone, or infinity if some isolated vertices remain. The value of the game on a given graph G (the score of CON when both players play optimally) is denoted by Ψ(G). This number can be used to get a lower bound on the homological connectivity of the independence complex of G, denoted I(G): η(I(G)) ≥ Ψ(G). Therefore, the above theorem implies the following. Let V be a set of vertices, and let G be a graph on V. Suppose that, for every subset Y0 of Y, Ψ(G[VY0]) ≥ |Y0|. Then there is an independent set in G that intersects each Vy by exactly one element. Let H be a bipartite graph with parts X and Y. Let V be the set of edges of H. Let G = L(H) = the line graph of H. Then, the independence complex I(L(H)) is equal to the matching complex of H, denoted M(H). It is a simplicial complex on the edges of H, whose elements are all the matchings on H. For each vertex y in Y, let Vy be the set of edges adjacent to y (note that Vy is a subset of V). Then, for every subset Y0 of Y, the induced subgraph G[VY0] contains a clique for every neighbor of Y0 (all edges adjacent to Y0 that meet at the same vertex of X form a clique in the line-graph). So there are |NH(Y0)| disjoint cliques. Therefore, when Meshulam's game is played, NON needs |NH(Y0)| explosions to destroy all of L(NH(Y0)), so Ψ(L(NH(Y0))) = |NH(Y0)|. Thus, Meshulam's condition is equivalent to Hall's marriage condition. Here, the sets Vy are pairwise-disjoint, so a C-transversal contains a unique element from each Vy, which is equivalent to a Y-saturating matching. Let H be a bipartite hypergraph, and suppose C is its matching complex M(H). Let Hy (for y in Y) be sets of edges of H. For every subset Y0 of Y, M(HY0) is the set of matchings in the sub-hypergraph formed by the edges in the sets Hy for y in Y0. If, for every subset Y0 of Y, η(M(HY0)) ≥ |Y0|, then there exists a matching that intersects each set Hy exactly once (it is also called a rainbow matching, since each Hy can be treated as a color). This is true, in particular, if we define Hy as the set of edges of H containing the vertex y of Y.
In this case, M(HY0) is equivalent to NH(Y0) - the multi-hypergraph of neighbors of Y0 ("multi" - since each neighbor is allowed to appear several times, for several different y). The matching complex of a hypergraph is exactly the independence complex of its line graph, denoted L(H). This is a graph in which the vertices are the edges of H, and two such vertices are connected if and only if their corresponding edges intersect in H. Therefore, the above theorem yields a condition phrased in terms of the independence complex of L(HY0), and combining the previous inequalities leads to a condition phrased in terms of the game value Ψ. We consider several bipartite hypergraphs with Y = {1, 2} and X = {A, B; a, b, c}. The Meshulam condition trivially holds for the empty set. It holds for subsets of size 1 if and only if the neighbor-graph of each vertex in Y is non-empty (so it requires at least one explosion to destroy), which is easy to check. It remains to check the subset Y itself. No necessary-and-sufficient condition using Ψ is known. A rainbow matching is a matching in a simple graph in which each edge has a different "color". By treating the colors as vertices in the set Y, one can see that a rainbow matching is in fact a matching in a bipartite hypergraph. Thus, several sufficient conditions for the existence of a large rainbow matching can be translated to conditions for the existence of a large matching in a hypergraph. Some known results pertain to tripartite hypergraphs in which each of the 3 parts contains exactly n vertices, the degree of each vertex is exactly n, and the set of neighbors of every vertex is a matching (henceforth "n-tripartite-hypergraphs"); other results pertain to more general bipartite hypergraphs. A balanced hypergraph is an alternative generalization of a bipartite graph: it is a hypergraph in which every odd cycle C of H has an edge containing at least three vertices of C. Let H = (V, E) be a balanced hypergraph. Then H admits a perfect matching if and only if, for all disjoint vertex-sets X0, Y0 such that |e ∩ X0| ≥ |e ∩ Y0| for all edges e in E, it holds that |X0| ≥ |Y0| (the CCKV condition, after Conforti, Cornuéjols, Kapoor and Vušković).[24][25] A simple graph is bipartite if and only if it is balanced (it contains no odd cycles and no edges with three vertices). Let G = (X + Y, E) be a bipartite graph. Let X0 be a subset of X and Y0 a subset of Y. The condition "|e ∩ X0| ≥ |e ∩ Y0| for all edges e in E" means that X0 contains all the neighbors of vertices of Y0. Hence, the CCKV condition becomes: "If a subset X0 of X contains the set NH(Y0), then |X0| ≥ |Y0|". This is equivalent to Hall's condition.
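To make the neighbor-set definitions concrete, here is a small Python sketch (an illustration of the definitions, with invented function names) that computes NH(Y0) and the matching number ν of the resulting neighborhood-hypergraph, applied to the example bipartite hypergraph given earlier in the article.

from itertools import combinations

def neighbors(edges, Y0):
    """N_H(Y0): the X-parts of the hyperedges that meet Y0 (in a bipartite
    hypergraph, each hyperedge contains exactly one vertex of Y)."""
    return [e - Y0 for e in edges if e & Y0]

def matching_number(sets):
    """nu: the largest number of pairwise-disjoint sets, by brute force."""
    for k in range(len(sets), 0, -1):
        for combo in combinations(sets, k):
            if all(a.isdisjoint(b) for a, b in combinations(combo, 2)):
                return k
    return 0

# The example from the article: Y = {1, 5}, X = {2, 3, 4, 6}.
E = [{1, 2, 3}, {1, 2, 4}, {1, 3, 4}, {5, 2}, {5, 3, 4, 6}]
print(neighbors(E, {1}))                       # [{2, 3}, {2, 4}, {3, 4}]
print(matching_number(neighbors(E, {1, 5})))   # 2, e.g. {2} and {3, 4}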
https://en.wikipedia.org/wiki/Hall-type_theorems_for_hypergraphs
Ineconomicsandsocial choice theory, anenvy-free matching (EFM)is a matching between people to "things", which isenvy-freein the sense that no person would like to switch their "thing" with that of another person. This term has been used in several different contexts. In an unweightedbipartite graphG = (X+Y,E), anenvy-free matchingis amatchingin which no unmatched vertex inXis adjacent to a matched vertex inY.[1]Suppose the vertices ofXrepresent people, the vertices ofYrepresent houses, and an edge between a personxand a houseyrepresents the fact thatxis willing to live iny. Then, an EFM is a partial allocation of houses to people such that each house-less person does not envy any person with a house, since they do not like any allocated house anyway. Every matching that saturatesXis envy-free, and every empty matching is envy-free. Moreover, if |NG(X)| ≥ |X| ≥ 1 (whereNG(X) is the set of neighbors ofXinY), thenGadmits a nonempty EFM.[1]This is a relaxation ofHall's marriage condition, which says that, if |NG(X')| ≥ |X'| forevery subset X' ofX, then anX-saturating matching exists. Consider a market in which there are several buyers and several goods, and each good may have a price. Given a price-vector, each buyer has ademand set- a set of bundles that maximize the buyer's utility over all bundles (this set might include the empty bundle, in case the buyer considers all bundles as too expensive). Aprice-envy-free matching(given a price-vector) is a matching in which each agent receives a bundle from his demand-set. This means that no agent would prefer to get another bundle with the same prices.[2]An example of this setting is therental harmonyproblem - matching tenants (the agents) to rooms (the items) while setting a price to each room. Anenvy-free priceis a price-vector for which an envy-free matching exists. It is a relaxation of aWalrasian equilibrium: aWalrasian equilibriumconsists of an EF price and EF matching, and in addition, every item must either be matched or have zero price. It is known that, in a Walrasian equilibrium, the matching maximizes the sum of values, i.e., it is amaximum-weight matching. However, the seller's revenue might be low. This motivates the relaxation to EF pricing, in which the seller may use reserve prices to increase the revenue; seeenvy-free pricingfor more details. The term envy-free matching is often used to denote a weaker condition -no-justified-envy matching. The termenvy-free matchinghas also been used in a different context: an algorithm for improving the efficiency ofenvy-free cake-cutting.[3]
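The defining condition of an EFM in the unweighted bipartite setting is easy to check directly. Below is a minimal Python sketch (function, variable, and example names are invented for illustration): given each person's set of acceptable houses and a partial allocation, it verifies that no house-less person is adjacent to an allocated house.

def is_envy_free(adj, matching):
    """Check the EFM condition: no unmatched person is adjacent to a
    house that the matching has allocated to someone else.

    adj      -- dict: person -> set of acceptable houses
    matching -- dict: person -> allocated house (a partial matching)
    """
    allocated = set(matching.values())
    return all(not (adj[x] & allocated) for x in adj if x not in matching)

adj = {"p1": {"h1"}, "p2": {"h2"}, "p3": {"h2"}}
print(is_envy_free(adj, {}))                         # True: empty matching
print(is_envy_free(adj, {"p1": "h1"}))               # True: nobody envies
print(is_envy_free(adj, {"p1": "h1", "p2": "h2"}))   # False: p3 envies p2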
https://en.wikipedia.org/wiki/Envy-free_matching
Inmathematics,economics, andcomputer science, thestable matching polytopeorstable marriage polytopeis aconvex polytopederived from the solutions to an instance of thestable matching problem.[1][2] The stable matching polytope is theconvex hullof theindicator vectorsof the stable matchings of the given problem. It has a dimension for each pair of elements that can be matched, and avertexfor each stable matching. For each vertex, theCartesian coordinatesare one for pairs that are matched in the corresponding matching, and zero for pairs that are not matched.[1] The stable matching polytope has a polynomial number offacets. These include the conventional inequalities describing matchings without the requirement of stability (each coordinate must be between 0 and 1, and for each element to be matched the sum of coordinates for the pairs involving that element must be exactly one), together with inequalities constraining the resulting matching to be stable (for each potential matched pair elements, the sum of coordinates for matches that are at least as good for one of the two elements must be at least one). The points satisfying all of these constraints can be thought of as the fractional solutions of alinear programming relaxationof the stable matching problem. It is a theorem ofVande Vate (1989)that the polytope described by the facet constraints listed above has only the vertices described above. In particular it is anintegral polytope. This can be seen as an analogue of the theorem ofGarrett Birkhoffthat an analogous polytope, theBirkhoff polytopedescribing the set of all fractional matchings between two sets, is integral.[3] An equivalent way of stating the same theorem is that every fractional matching can be expressed as aconvex combinationof integral matchings.Teo & Sethuraman (1998)prove this by constructing a probability distribution on integral matchings whoseexpected valuecan be set equal to any given fractional matching. To do so, they perform the following steps: The resulting randomly chosen stable matching chooses any particular matched pair with probability equal to the fractional coordinate value of that pair. Therefore, theprobability distributionover stable matchings constructed in this way provides a representation of the given fractional matching as a convex combination of integral stable matchings.[4] The family of all stable matchings forms adistributive lattice, thelattice of stable matchings, in which thejoinof two matchings gives all doctors their preference among their assigned hospitals in the two matchings, and the meet gives all hospitals their preference.[5]The same is true of the family of all fractional stable matchings, the points of the stable matching polytope.[3] In the stable matching polytope, one can define one matching to dominate another if, for every doctor and hospital, the total fractional value assigned to matches for that doctor that are at least as good (for the doctor) as that hospital are at least as large in the first matching as in the second. This defines apartial orderon the fractional matchings. This partial order has a unique largest element, the integer stable matching found by a version of theGale–Shapley algorithmin which the doctors propose matches and the hospitals respond to the proposals. 
It also has a unique smallest element, the integer stable matching found by a version of the Gale–Shapley algorithm in which the hospitals make the proposals.[3] Consistently with this partial order, one can define the meet of two fractional matchings to be a fractional matching that is as low as possible in the partial order while dominating the two matchings. For each doctor and hospital, it assigns to that potential matched pair a weight that makes the total weight of that pair and all better pairs for the same doctor equal to the larger of the corresponding totals from the two given matchings. The join is defined symmetrically.[3] By applyinglinear programmingto the stable matching polytope, one can find the minimum or maximum weight stable matching.[1]Alternative methods for the same problem include applying theclosure problemto apartially ordered setderived from thelattice of stable matchings,[6]or applying linear programming to theorder polytopeof this partial order. The property of the stable matching polytope, of defining a continuous distributive lattice is analogous to the defining property of adistributive polytope, a polytope in which coordinatewise maximization and minimization form the meet and join operations of a lattice.[7]However, the meet and join operations for the stable matching polytope are defined in a different way than coordinatewise maximization and minimization. Instead, theorder polytopeof the underlying partial order of thelattice of stable matchingsprovides a distributive polytope associated with the set of stable matchings, but one for which it is more difficult to read off the fractional value associated with each matched pair. In fact, the stable matching polytope and the order polytope of the underlying partial order are very closely related to each other: each is anaffine transformationof the other.[8]
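As a concrete illustration of the vertex description, the following Python sketch (a brute-force illustration of the definitions, only sensible for tiny instances; names are invented) enumerates the stable matchings of a 2 × 2 instance and prints the 0/1 indicator vector of each, one coordinate per doctor-hospital pair. The convex hull of the printed vectors is the stable matching polytope of the instance.

from itertools import permutations

def is_stable(match, pref_d, pref_h):
    """match[d] = hospital assigned to doctor d; stable iff no doctor and
    hospital mutually prefer each other to their assigned partners."""
    rank_d = [{h: r for r, h in enumerate(p)} for p in pref_d]
    rank_h = [{d: r for r, d in enumerate(p)} for p in pref_h]
    holder = {h: d for d, h in enumerate(match)}
    n = len(match)
    return not any(rank_d[d][h] < rank_d[d][match[d]] and
                   rank_h[h][d] < rank_h[h][holder[h]]
                   for d in range(n) for h in range(n))

pref_d = [[0, 1], [1, 0]]   # doctors' preference lists over hospitals
pref_h = [[1, 0], [0, 1]]   # hospitals' preference lists over doctors
for m in permutations(range(2)):
    if is_stable(m, pref_d, pref_h):
        vec = [int(m[d] == h) for d in range(2) for h in range(2)]
        print(m, vec)
# Two vertices are printed: (0, 1) -> [1, 0, 0, 1] and (1, 0) -> [0, 1, 1, 0];
# any fractional stable matching of this instance is a convex combination.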
https://en.wikipedia.org/wiki/Stable_matching_polytope
Inmathematics,economics, andcomputer science, thelattice of stable matchingsis adistributive latticewhose elements arestable matchings. For a given instance of the stable matching problem, this lattice provides analgebraicdescription of the family of all solutions to the problem. It was originally described in the 1970s byJohn Horton ConwayandDonald Knuth.[1][2] ByBirkhoff's representation theorem, this lattice can be represented as thelower setsof an underlyingpartially ordered set. The elements of this set can be given a concrete structure as rotations, withcycle graphsdescribing the changes between adjacent stable matchings in the lattice. The family of all rotations and their partial order can be constructed inpolynomial time, leading to polynomial time solutions for other problems on stable matching including the minimum or maximum weight stable matching. TheGale–Shapley algorithmcan be used to construct two special lattice elements, its top and bottom element. Every finite distributive lattice can be represented as a lattice of stable matchings. The number of elements in the lattice can vary from an average case ofe−1nln⁡n{\displaystyle e^{-1}n\ln n}to a worst-case of exponential. Computing the number of elements is#P-complete. In its simplest form, an instance of the stable matching problem consists of two sets of the same number of elements to be matched to each other, for instance doctors and positions at hospitals. Each element has a preference ordering on the elements of the other type: the doctors each have different preferences for which hospital they would like to work at (for instance based on which cities they would prefer to live in), and the hospitals each have preferences for which doctors they would like to work for them (for instance based on specialization or recommendations). The goal is to find a matching that isstable: no pair of a doctor and a hospital prefer each other to their assigned match. Versions of this problem are used, for instance, by theNational Resident Matching Programto match American medical students to hospitals.[3] In general, there may be many different stable matchings. For example, suppose there are three doctors (A,B,C) and three hospitals (X,Y,Z) which have preferences of: There are three stable solutions to this matching arrangement: The lattice of stable matchings organizes this collection of solutions, for any instance of stable matching, giving it the structure of adistributive lattice.[1] The lattice of stable matchings is based on the following weaker structure, apartially ordered setwhose elements are the stable matchings. Define a comparison operation≤{\displaystyle \leq }on the stable matchings, whereP≤Q{\displaystyle P\leq Q}if and only if all doctors prefer matchingQ{\displaystyle Q}to matchingP{\displaystyle P}: either they have the same assigned hospital in both matchings, or they are assigned a better hospital inQ{\displaystyle Q}than they are inP{\displaystyle P}. If the doctors disagree on which matching they prefer, thenP{\displaystyle P}andQ{\displaystyle Q}are incomparable: neither one is≤{\displaystyle \leq }the other. The same comparison operation can be defined in the same way for any two sets of elements, not just doctors and hospitals. The choice of which of the two sets of elements to use in the role of the doctors is arbitrary. 
Swapping the roles of the doctors and hospitals reverses the ordering of every pair of elements, but does not otherwise change the structure of the partial order.[1] This ordering gives the matchings the structure of a partially ordered set. To do so, it must obey three properties: reflexivity, antisymmetry, and transitivity. For stable matchings, all three properties follow directly from the definition of the comparison operation. Define the best match of an element x of a stable matching instance to be the element y that x most prefers, among all the elements that can be matched to x in a stable matching, and define the worst match analogously. Then no two elements can have the same best match. For, suppose to the contrary that doctors x and x′ both have y as their best match, and that y prefers x to x′. Then, in the stable matching that matches x′ to y (which must exist by the definition of the best match of x′), x and y would be an unstable pair, because y prefers x to x′ and x prefers y to any other partner in any stable matching. This contradiction shows that assigning all doctors to their best matches gives a matching. It is a stable matching, because any unstable pair would also be unstable for one of the matchings used to define best matches. As well as assigning all doctors to their best matches, it assigns all hospitals to their worst matches. In the partial ordering on the matchings, it is greater than all other stable matchings.[1] Symmetrically, assigning all doctors to their worst matches and assigning all hospitals to their best matches gives another stable matching. In the partial order on the matchings, it is less than all other stable matchings.[1] The Gale–Shapley algorithm gives a process for constructing stable matchings, which can be described as follows: until a matching is reached, the algorithm chooses an arbitrary hospital with an unfilled position, and that hospital makes a job offer to the doctor it most prefers among the ones it has not already made offers to. If the doctor is unemployed or has a less-preferred assignment, the doctor accepts the offer (and resigns from their other assignment if it exists). The process always terminates, because each doctor and hospital interact only once. When it terminates, the result is a stable matching: the one that assigns each hospital to its best match and all doctors to their worst matches. An algorithm that swaps the roles of the doctors and hospitals (in which unemployed doctors send job applications to their next preference among the hospitals, and hospitals accept applications either when they have an unfilled position or when they prefer the new applicant, firing the doctor they had previously accepted) instead produces the stable matching that assigns all doctors to their best matches and each hospital to its worst match.[1] Given any two stable matchings P and Q for the same input, one can form two more matchings P ∨ Q and P ∧ Q in the following way: P ∨ Q assigns each doctor the hospital they prefer between their two assignments in P and Q, while P ∧ Q assigns each doctor the less preferred of the two. (The same operations can be defined in the same way for any two sets of elements, not just doctors and hospitals.)[1] Then both P ∨ Q and P ∧ Q are matchings.
It is not possible, for instance, for two doctors to have the same best choice and be matched to the same hospital in P ∨ Q, for regardless of which of the two doctors is preferred by the hospital, that doctor and hospital would form an unstable pair in whichever of P and Q they are not already matched in. Because the doctors are matched in P ∨ Q, the hospitals must also be matched. The same reasoning applies symmetrically to P ∧ Q.[1] Additionally, both P ∨ Q and P ∧ Q are stable. There cannot be a pair of a doctor and hospital who prefer each other to their match, because the same pair would necessarily also be an unstable pair for at least one of P and Q.[1] The two operations P ∨ Q and P ∧ Q form the join and meet operations of a finite distributive lattice. In this context, a finite lattice is defined as a partially ordered finite set in which there is a unique minimum element and a unique maximum element, in which every two elements have a unique least element greater than or equal to both of them (their join) and every two elements have a unique greatest element less than or equal to both of them (their meet).[1] In the case of the operations P ∨ Q and P ∧ Q defined above, the join P ∨ Q is greater than or equal to both P and Q because it was defined to give each doctor their preferred choice, and because these preferences of the doctors are how the ordering on matchings is defined. It is below any other matching that is also above both P and Q, because any such matching would have to give each doctor an assigned match that is at least as good. Therefore, it fits the requirements for the join operation of a lattice. Symmetrically, the operation P ∧ Q fits the requirements for the meet operation.[1] Because they are defined using an element-wise minimum or element-wise maximum in the preference ordering, these two operations obey the same distributive laws obeyed by the minimum and maximum operations on linear orderings: for every three different matchings P, Q, and R, P ∨ (Q ∧ R) = (P ∨ Q) ∧ (P ∨ R) and P ∧ (Q ∨ R) = (P ∧ Q) ∨ (P ∧ R). Therefore, the lattice of stable matchings is a distributive lattice.[1] Birkhoff's representation theorem states that any finite distributive lattice can be represented by a family of finite sets, with intersection and union as the meet and join operations, and with the relation of being a subset as the comparison operation for the associated partial order. More specifically, these sets can be taken to be the lower sets of an associated partial order. In the general form of Birkhoff's theorem, this partial order can be taken as the induced order on a subset of the elements of the lattice, the join-irreducible elements (elements that cannot be formed as joins of two other elements).[4] For the lattice of stable matchings, the elements of the partial order can instead be described in terms of structures called rotations, described by Irving & Leather (1986).[5] Suppose that two different stable matchings P and Q are comparable and have no third stable matching between them in the partial order. (That is, P and Q form a pair of the covering relation of the partial order of stable matchings.)
Then the set of pairs of elements that are matched in one but not both ofP{\displaystyle P}andQ{\displaystyle Q}(thesymmetric differenceof their sets of matched pairs) is called a rotation. It forms acycle graphwhose edges alternate between the two matchings. Equivalently, the rotation can be described as the set of changes that would need to be performed to change the lower of the two matchings into the higher one (with lower and higher determined using the partial order). If two different stable matchings are separately the higher matching for the same rotation, then so is their meet. It follows that for any rotation, the set of stable matchings that can be the higher of a pair connected by the rotation has a unique lowest element. This lowest matching is join irreducible, and this gives a one-to-one correspondence between rotations and join-irreducible stable matchings.[5] If the rotations are given the same partial ordering as their corresponding join-irreducible stable matchings, then Birkhoff's representation theorem gives a one-to-one correspondence between lower sets of rotations and all stable matchings. The set of rotations associated with any given stable matching can be obtained by changing the given matching by rotations downward in the partial ordering, choosing arbitrarily which rotation to perform at each step, until reaching the bottom element, and listing the rotations used in this sequence of changes. The stable matching associated with any lower set of rotations can be obtained by applying the rotations to the bottom element of the lattice of stable matchings, choosing arbitrarily which rotation to apply when more than one can apply.[5] Every pair(x,y){\displaystyle (x,y)}of elements of a given stable matching instance belongs to at most two rotations: one rotation that, when applied to the lower of two matchings, removes other assignments tox{\displaystyle x}andy{\displaystyle y}and instead assigns them to each other, and a second rotation that, when applied to the lower of two matchings, removes pair(x,y){\displaystyle (x,y)}from the matching and finds other assignments for those two elements. Because there aren2{\displaystyle n^{2}}pairs of elements, there areO(n2){\displaystyle O(n^{2})}rotations.[5] Beyond being a finite distributive lattice, there are no other constraints on the lattice structure of stable matchings. This is because, for every finite distributive latticeL{\displaystyle L}, there exists a stable matching instance whose lattice of stable matchings is isomorphic toL{\displaystyle L}.[6]More strongly, if a finite distributive lattice hask{\displaystyle k}elements, then it can be realized using a stable matching instance with at mostk2−k+4{\displaystyle k^{2}-k+4}doctors and hospitals.[7] The lattice of stable matchings can be used to study thecomputational complexityof counting the number of stable matchings of a given instance. From the equivalence between lattices of stable matchings and arbitrary finite distributive lattices, it follows that this problem has equivalent computational complexity to counting the number of elements in an arbitrary finite distributive lattice, or to counting theantichainsin an arbitrary partially ordered set. 
Computing the number of stable matchings is #P-complete.[5] In a uniformly-random instance of the stable marriage problem with n doctors and n hospitals, the average number of stable matchings is asymptotically e^(−1) n ln n.[8] In a stable marriage instance chosen to maximize the number of different stable matchings, this number can be at least 2^(n−1),[5] and is also upper-bounded by an exponential function of n (significantly smaller than the naive factorial bound on the number of matchings).[9] The family of rotations and their partial ordering can be constructed in polynomial time from a given instance of stable matching, and provides a concise representation of the family of all stable matchings, which can for some instances be exponentially larger when listed explicitly. This allows several other computations on stable matching instances to be performed efficiently.[10] If each pair of elements in a stable matching instance is assigned a real-valued weight, it is possible to find the minimum or maximum weight stable matching in polynomial time. One possible method for this is to apply linear programming to the order polytope of the partial order of rotations, or to the stable matching polytope.[11] An alternative, combinatorial algorithm is possible, based on the same partial order.[12] From the weights on pairs of elements, one can assign weights to each rotation, where a rotation that changes a given stable matching to another one higher in the partial ordering of stable matchings is assigned the change in weight that it causes: the total weight of the higher matching minus the total weight of the lower matching. By the correspondence between stable matchings and lower sets of rotations, the total weight of any matching is then equal to the total weight of its corresponding lower set, plus the weight of the bottom element of the lattice of matchings. The problem of finding the minimum or maximum weight stable matching becomes in this way equivalent to the problem of finding the minimum or maximum weight lower set in a partially ordered set of polynomial size, the partially ordered set of rotations.[12] This optimal lower set problem is equivalent to an instance of the closure problem, a problem on vertex-weighted directed graphs in which the goal is to find a subset of vertices of optimal weight with no outgoing edges. The optimal lower set is an optimal closure of a directed acyclic graph that has the elements of the partial order as its vertices, with an edge from α to β whenever α ≤ β in the partial order. The closure problem can, in turn, be solved in polynomial time by transforming it into an instance of the maximum flow problem.[12] Gusfield (1987) defines the regret of a participant in a stable matching to be the distance of their assigned match from the top of their preference list, and the regret of a stable matching to be the maximum regret of any participant.
Then one can find the minimum-regret stable matching by a simple greedy algorithm that starts at the bottom element of the lattice of matchings and then repeatedly applies any rotation that reduces the regret of a participant with maximum regret, until this would cause some other participant to have greater regret.[10] The elements of any distributive lattice form a median graph, a structure in which any three elements P, Q, and R (here, stable matchings) have a unique median element m(P, Q, R) that lies on a shortest path between any two of them. It can be defined as:[13] m(P, Q, R) = (P ∧ Q) ∨ (P ∧ R) ∨ (Q ∧ R) = (P ∨ Q) ∧ (P ∨ R) ∧ (Q ∨ R). For the lattice of stable matchings, this median can instead be taken element-wise, by assigning each doctor the median in the doctor's preferences of the three hospitals matched to that doctor in P, Q, and R, and similarly by assigning each hospital the median of the three doctors matched to it. More generally, any set of an odd number of elements of any distributive lattice (or median graph) has a median, a unique element minimizing its sum of distances to the given set.[14] For the median of an odd number of stable matchings, each participant is matched to the median element of the multiset of their matches from the given matchings. For an even set of stable matchings, this can be disambiguated by choosing the assignment that matches each doctor to the higher of the two median elements, and each hospital to the lower of the two median elements. In particular, this leads to a definition for the median matching in the set of all stable matchings.[15] However, for some instances of the stable matching problem, finding this median of all stable matchings is NP-hard.[16]
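The doctor-proposing form of the Gale–Shapley process described above, together with the element-wise join, is short enough to sketch directly. The Python below is an illustrative implementation with invented names, not the canonical code of any library: gale_shapley returns the doctor-optimal stable matching, i.e., the top element of the lattice, and join computes P ∨ Q by giving each doctor the better of their two assignments.

def gale_shapley(pref_d, pref_h):
    """Doctor-proposing deferred acceptance; returns the doctor-optimal
    stable matching (the top of the lattice) as a dict doctor -> hospital."""
    n = len(pref_d)
    rank_h = [{d: r for r, d in enumerate(p)} for p in pref_h]
    next_choice = [0] * n          # index of the next hospital to try
    holder = {}                    # hospital -> doctor currently holding it
    free = list(range(n))
    while free:
        d = free.pop()
        h = pref_d[d][next_choice[d]]
        next_choice[d] += 1
        if h not in holder:
            holder[h] = d
        elif rank_h[h][d] < rank_h[h][holder[h]]:
            free.append(holder[h])     # the previously held doctor is fired
            holder[h] = d
        else:
            free.append(d)             # rejected; d will try the next hospital
    return {d: h for h, d in holder.items()}

def join(P, Q, pref_d):
    """P v Q: give each doctor the hospital they prefer among P[d] and Q[d].
    The meet is symmetric (take the less-preferred hospital instead)."""
    rank = [{h: r for r, h in enumerate(p)} for p in pref_d]
    return {d: min((P[d], Q[d]), key=lambda h: rank[d][h]) for d in P}

print(gale_shapley([[0, 1], [1, 0]], [[1, 0], [0, 1]]))  # {1: 1, 0: 0}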
https://en.wikipedia.org/wiki/Lattice_of_stable_matchings
Thesecretary problemdemonstrates a scenario involvingoptimal stoppingtheory[1][2]that is studied extensively in the fields ofapplied probability,statistics, anddecision theory. It is also known as themarriage problem, thesultan's dowry problem, thefussy suitor problem, thegoogol game, and thebest choice problem. Its solution is also known as the37% rule.[3] The basic form of the problem is the following: imagine an administrator who wants to hire the best secretary out ofn{\displaystyle n}rankable applicants for a position. The applicants are interviewed one by one in random order. A decision about each particular applicant is to be made immediately after the interview. Once rejected, an applicant cannot be recalled. During the interview, the administrator gains information sufficient to rank the applicant among all applicants interviewed so far, but is unaware of the quality of yet unseen applicants. The question is about the optimal strategy (stopping rule) to maximize the probability of selecting the best applicant. If the decision can be deferred to the end, this can be solved by the simple maximumselection algorithmof tracking the running maximum (and who achieved it), and selecting the overall maximum at the end. The difficulty is that the decision must be made immediately. The shortest rigorous proof known so far is provided by theodds algorithm. It implies that the optimal win probability is always at least1/e{\displaystyle 1/e}(whereeis the base of thenatural logarithm), and that the latter holds even in a much greater generality. The optimal stopping rule prescribes always rejecting the first∼n/e{\displaystyle \sim n/e}applicants that are interviewed and then stopping at the first applicant who is better than every applicant interviewed so far (or continuing to the last applicant if this never occurs). Sometimes this strategy is called the1/e{\displaystyle 1/e}stopping rule, because the probability of stopping at the best applicant with this strategy is already about1/e{\displaystyle 1/e}for moderate values ofn{\displaystyle n}. One reason why the secretary problem has received so much attention is that the optimal policy for the problem (the stopping rule) is simple and selects the single best candidate about 37% of the time, irrespective of whether there are 100 or 100 million applicants. The secretary problem is anexploration–exploitation dilemma. Although there are many variations, the basic problem can be stated as follows: Acandidateis defined as an applicant who, when interviewed, is better than all the applicants interviewed previously.Skipis used to mean "reject immediately after the interview". Since the objective in the problem is to select the single best applicant, only candidates will be considered for acceptance. The "candidate" in this context corresponds to the concept of record in permutation. The optimal policy for the problem is astopping rule. Under it, the interviewer rejects the firstr− 1 applicants (let applicantMbe the best applicant among theser− 1 applicants), and then selects the first subsequent applicant that is better than applicantM. It can be shown that the optimal strategy lies in this class of strategies.[citation needed](Note that we should never choose an applicant who is not the best we have seen so far, since they cannot be the best overall applicant.) 
For an arbitrary cutoff r, the probability that the best applicant is selected is P(r) = ((r − 1)/n) ∑i=r..n 1/(i − 1). The sum is not defined for r = 1, but in this case the only feasible policy is to select the first applicant, and hence P(1) = 1/n. This sum is obtained by noting that if applicant i is the best applicant, then it is selected if and only if the best applicant among the first i − 1 applicants is among the first r − 1 applicants that were rejected. Letting n tend to infinity, writing x as the limit of (r − 1)/n, using t for (i − 1)/n and dt for 1/n, the sum can be approximated by the integral P(x) = x ∫x..1 (1/t) dt = −x ln x. Taking the derivative of P(x) with respect to x, setting it to 0, and solving for x, we find that the optimal x is equal to 1/e. Thus, the optimal cutoff tends to n/e as n increases, and the best applicant is selected with probability 1/e. For small values of n, the optimal r can also be obtained by standard dynamic programming methods. The probability of selecting the best applicant in the classical secretary problem converges toward 1/e ≈ 0.368. This problem and several modifications can be solved (including the proof of optimality) in a straightforward manner by the odds algorithm, which also has other applications. Modifications of the secretary problem that can be solved by this algorithm include random availabilities of applicants, more general hypotheses for applicants to be of interest to the decision maker, group interviews for applicants, as well as certain models for a random number of applicants.[citation needed] The solution of the secretary problem is only meaningful if it is justified to assume that the applicants have no knowledge of the decision strategy employed, because early applicants have no chance at all and may not show up otherwise. One important drawback for applications of the solution of the classical secretary problem is that the number of applicants n must be known in advance, which is rarely the case. One way to overcome this problem is to suppose that the number of applicants is a random variable N with a known distribution P(N = k), k = 1, 2, ⋯ (Presman and Sonin, 1972). For this model, the optimal solution is in general much harder, however. Moreover, the optimal success probability is now no longer around 1/e but typically lower. This can be understood in the context of having a "price" to pay for not knowing the number of applicants; however, in this model the price is high. Depending on the choice of the distribution of N, the optimal win probability can approach zero. Looking for ways to cope with this new problem led to a new model yielding the so-called 1/e-law of best choice. The essence of the model is based on the idea that life is sequential and that real-world problems pose themselves in real time. Also, it is easier to estimate times at which specific events (arrivals of applicants) should occur more frequently (if they do) than to estimate the distribution of the number of specific events which will occur. This idea led to the following approach, the so-called unified approach (1984). The model is defined as follows: an applicant must be selected on some time interval [0, T] from an unknown number N of rankable applicants.
The goal is to maximize the probability of selecting only the best under the hypothesis that all arrival orders of different ranks are equally likely. Suppose that all applicants have the same arrival time density f on [0,T], independently of one another, and let F denote the corresponding arrival time distribution function, that is, {\displaystyle F(t)=\int _{0}^{t}f(s)\,ds,\quad 0\leq t\leq T.} Let τ be such that F(τ) = 1/e. Consider the strategy to wait and observe all applicants up to time τ and then to select, if possible, the first candidate after time τ who is better than all preceding ones. Then this strategy, called the 1/e-strategy, has the following properties: (i) it yields, for all N, a success probability of at least 1/e, and (ii) if at least one applicant arrives, it selects no applicant at all with probability exactly 1/e. The 1/e-law, proved in 1984 by F. Thomas Bruss, came as a surprise. The reason was that a value of about 1/e had been considered before as being out of reach in a model for unknown N, whereas this value 1/e was now achieved as a lower bound for the success probability, and this in a model with arguably much weaker hypotheses (see e.g. Math. Reviews 85:m). However, there are many other strategies that achieve (i) and (ii) and, moreover, perform strictly better than the 1/e-strategy simultaneously for all N > 2. A simple example is the strategy which selects (if possible) the first relatively best candidate after time τ provided that at least one applicant arrived before this time, and otherwise selects (if possible) the second relatively best candidate after time τ.[4] The 1/e-law is sometimes confused with the solution for the classical secretary problem described above because of the similar role of the number 1/e. However, in the 1/e-law, this role is more general. The result is also stronger, since it holds for an unknown number of applicants and since the model based on an arrival time distribution F is more tractable for applications. In the article "Who solved the Secretary problem?" (Ferguson, 1989),[1] it is claimed that the secretary problem first appeared in print in Martin Gardner's February 1960 Mathematical Games column in Scientific American: Ask someone to take as many slips of paper as he pleases, and on each slip write a different positive number. The numbers may range from small fractions of 1 to a number the size of a googol (1 followed by a hundred zeroes) or even larger. These slips are turned face down and shuffled over the top of a table. One at a time you turn the slips face up. The aim is to stop turning when you come to the number that you guess to be the largest of the series. You cannot go back and pick a previously turned slip. If you turn over all the slips, then of course you must pick the last one turned.[5] Ferguson pointed out that the secretary game remained unsolved, as a zero-sum game with two antagonistic players.[1] In this game, Alice writes n distinct numbers on slips of paper, and Bob turns the shuffled slips over one at a time, winning if he stops on the largest number. There are two differences from the basic secretary problem: Bob observes the actual numerical values written on the slips, not merely their relative ranks, and Alice chooses the numbers adversarially, trying to minimize Bob's probability of winning. Alice first writes down n numbers, which are then shuffled. So, their ordering does not matter, meaning that Alice's numbers must be an exchangeable random variable sequence X1, X2, ..., Xn. Alice's strategy is then just picking the trickiest exchangeable random variable sequence. Bob's strategy is formalizable as a stopping rule τ for the sequence X1, X2, ..., Xn.
We say that a stopping rule τ for Bob is a relative rank stopping strategy if it depends only on the relative ranks of X1, X2, ..., Xn, and not on their numerical values. In other words, it is as if someone secretly intervened after Alice picked her numbers, and changed each number in X1, X2, ..., Xn into its relative rank (breaking ties randomly). For example, 0.2, 0.3, 0.3, 0.1 is changed to 2, 3, 4, 1 or 2, 4, 3, 1 with equal probability. This makes it as if Alice played an exchangeable random permutation on {1, 2, ..., n}. Now, since the only exchangeable random permutation on {1, 2, ..., n} is just the uniform distribution over all permutations on {1, 2, ..., n}, the optimal relative rank stopping strategy is the optimal stopping rule for the secretary problem, given above, with a winning probability {\displaystyle \Pr(X_{\tau }=\max _{i\in 1:n}X_{i})=\max _{r\in 1:n}{\frac {r-1}{n}}\sum _{i=r}^{n}{\frac {1}{i-1}}.} Alice's goal then is to make sure Bob cannot do better than the relative-rank stopping strategy. By the rules of the game, Alice's sequence must be exchangeable, but to do well in the game, Alice should not pick it to be independent. If Alice samples the numbers independently from some fixed distribution, it would allow Bob to do better. To see this intuitively, imagine if n = 2, and Alice is to pick both numbers from the normal distribution N(0,1), independently. Then if Bob turns over one number and sees −3, then he can quite confidently turn over the second number, and if Bob turns over one number and sees +3, then he can quite confidently pick the first number. Alice can do better by picking X1, X2 that are positively correlated. So the fully formal statement is as below: Does there exist an exchangeable sequence of random variables X1, ..., Xn, such that for any stopping rule τ, {\displaystyle \Pr(X_{\tau }=\max _{i\in 1:n}X_{i})\leq \max _{r\in 1:n}{\frac {r-1}{n}}\sum _{i=r}^{n}{\frac {1}{i-1}}}? For n = 2, if Bob plays the optimal relative-rank stopping strategy, then Bob has a winning probability of 1/2. Surprisingly, Alice has no minimax strategy, a fact which is closely related to a paradox of T. Cover[6] and the two envelopes paradox. Concretely, Bob can play this strategy: sample a random number Y. If X1 > Y, then pick X1, else pick X2. Now, Bob can win with probability strictly greater than 1/2. Suppose Alice's numbers are distinct. Then, conditional on Y ∉ [min(X1, X2), max(X1, X2)], Bob wins with probability 1/2, while conditional on Y ∈ [min(X1, X2), max(X1, X2)], Bob wins with probability 1. Note that Y can be sampled from any distribution, as long as the event Y ∈ [min(X1, X2), max(X1, X2)] has nonzero probability.
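Bob's randomized pivot strategy is easy to check numerically. In the sketch below (illustrative; the normal distributions are arbitrary choices, and an i.i.d. Alice is deliberately suboptimal for her), Bob wins roughly 2/3 of the time:

```python
import random

def bob_wins(x1: float, x2: float) -> bool:
    """Bob's randomized rule: draw a pivot Y; keep X1 if X1 > Y, else take X2."""
    y = random.gauss(0.0, 1.0)   # any distribution whose support covers [x1, x2]
    chosen = x1 if x1 > y else x2
    return chosen == max(x1, x2)

# Alice here plays i.i.d. standard normals, which is bad for her:
# Y then falls strictly between her two numbers with probability 1/3.
trials = 200_000
wins = sum(bob_wins(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(trials))
print(f"Bob's win rate: {wins / trials:.3f} (1/2 + 1/6 = {1/2 + 1/6:.3f})")
```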
However, for any ε > 0, Alice can construct an exchangeable sequence X1, X2 such that Bob's winning probability is at most 1/2 + ε.[1] But for n > 2, the answer is yes: Alice can choose random numbers (which are dependent random variables) in such a way that Bob cannot play better than using the classical stopping strategy based on the relative ranks.[7] The remainder of the article deals again with the secretary problem for a known number of applicants. Stein, Seale & Rapoport 2003 derived the expected success probabilities for several psychologically plausible heuristics that might be employed in the secretary problem. The heuristics they examined were the cutoff rule, which rejects the first y applicants and accepts the first candidate thereafter (the optimal policy is of this form); the candidate count rule, which accepts the y-th candidate to be encountered; and the successive non-candidate rule, which accepts the first candidate to appear after y consecutive non-candidate applicants have been observed since the last candidate. Each heuristic has a single parameter y. The original study displays the expected success probabilities for each heuristic as a function of y for problems with n = 80. Finding the single best applicant might seem like a rather strict objective. One can imagine that the interviewer would rather hire a higher-valued applicant than a lower-valued one, and not only be concerned with getting the best. That is, the interviewer will derive some value from selecting an applicant that is not necessarily the best, and the derived value increases with the value of the one selected. To model this problem, suppose that the n applicants have "true" values that are random variables X drawn i.i.d. from a uniform distribution on [0, 1]. Similar to the classical problem described above, the interviewer only observes whether each applicant is the best so far (a candidate), must accept or reject each on the spot, and must accept the last one if he/she is reached. (To be clear, the interviewer does not learn the actual relative rank of each applicant. He/she learns only whether the applicant has relative rank 1.) However, in this version the payoff is given by the true value of the selected applicant. For example, if he/she selects an applicant whose true value is 0.8, then he/she will earn 0.8. The interviewer's objective is to maximize the expected value of the selected applicant. Since the applicant's values are i.i.d. draws from a uniform distribution on [0, 1], the expected value of the t-th applicant given that {\displaystyle x_{t}=\max \left\{x_{1},x_{2},\ldots ,x_{t}\right\}} is given by {\displaystyle E_{t}={\frac {t}{t+1}}.} As in the classical problem, the optimal policy is given by a threshold, which for this problem we will denote by c, at which the interviewer should begin accepting candidates. Bearden showed that c is either {\displaystyle \lfloor {\sqrt {n}}\rfloor } or {\displaystyle \lceil {\sqrt {n}}\rceil }.[8] (In fact, whichever is closest to √n.) This follows from the fact that, given a problem with n applicants, the expected payoff for some arbitrary threshold 1 ≤ c ≤ n is {\displaystyle V_{n}(c)=\sum _{t=c}^{n}{\frac {c-1}{(t-1)t}}\cdot {\frac {t}{t+1}}+{\frac {c-1}{2(n+1)}}=1-{\frac {1}{2c}}-{\frac {c-1}{2n}}.} Differentiating V_n(c) with respect to c, one gets {\displaystyle {\frac {\partial V_{n}(c)}{\partial c}}={\frac {1}{2c^{2}}}-{\frac {1}{2n}}.} Since {\displaystyle \partial ^{\,2}V/\partial c^{\,2}<0} for all permissible values of c, we find that V is maximized at {\displaystyle c={\sqrt {n}}}. Since V is concave in c, the optimal integer-valued threshold must be either {\displaystyle \lfloor {\sqrt {n}}\rfloor } or {\displaystyle \lceil {\sqrt {n}}\rceil }.
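A brute-force check of the √n threshold is straightforward. This sketch (illustrative; n = 25 and uniform values as assumed above) estimates the expected payoff of every threshold c by simulation; the estimates are noisy, but the best c lands near √n:

```python
import math
import random

def expected_payoff(n: int, c: int, trials: int = 40_000) -> float:
    """Estimate E[value of the selected applicant] when we begin accepting
    best-so-far applicants at position c (taking the last one if forced)."""
    total = 0.0
    for _ in range(trials):
        xs = [random.random() for _ in range(n)]
        best_before = max(xs[:c - 1], default=-1.0)
        # First applicant from position c onward beating everyone before it,
        # or the last applicant if no such candidate appears.
        total += next((x for x in xs[c - 1:] if x > best_before), xs[-1])
    return total / trials

n = 25
payoffs = {c: expected_payoff(n, c) for c in range(1, n + 1)}
best_c = max(payoffs, key=payoffs.get)   # noisy, but lands near sqrt(n)
print(f"estimated best threshold c = {best_c}, sqrt({n}) = {math.sqrt(n):.1f}")
```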
Thus, for most values ofn{\displaystyle n}the interviewer will begin accepting applicants sooner in the cardinal payoff version than in the classical version where the objective is to select the single best applicant. Note that this is not an asymptotic result: It holds for alln{\displaystyle n}. Interestingly, if each of then{\displaystyle n}secretaries has a fixed, distinct value from1{\displaystyle 1}ton{\displaystyle n}, thenV{\displaystyle V}is maximized atc=n−1{\displaystyle c={\sqrt {n}}-1}, with the same convexity claims as before.[9]For other known distributions, optimal play can be calculated via dynamic programming. A more general form of this problem introduced by Palley and Kremer (2014)[10]assumes that as each new applicant arrives, the interviewer observes their rank relative to all of the applicants that have been observed previously. This model is consistent with the notion of an interviewerlearningas they continue the search process by accumulating a set of past data points that they can use to evaluate new candidates as they arrive. A benefit of this so-called partial-information model is that decisions and outcomes achieved given the relative rank information can be directly compared to the corresponding optimal decisions and outcomes if the interviewer had been given full information about the value of each applicant. This full-information problem, in which applicants are drawn independently from a known distribution and the interviewer seeks to maximize the expected value of the applicant selected, was originally solved by Moser (1956),[11]Sakaguchi (1961),[12]and Karlin (1962). There are several variants of the secretary problem that also have simple and elegant solutions. One variant replaces the desire to pick the best with the desire to pick the second-best.[13][14][15]For this problem, the probability of success for an even number of applicants is exactly0.25n2n(n−1){\displaystyle {\frac {0.25n^{2}}{n(n-1)}}}. This probability tends to 1/4 as n tends to infinity illustrating the fact that it is easier to pick the best than the second-best. Consider the problem of picking the k best secretaries out of n candidates, using k tries. In general, the optimal decision method starts by observingr=⌊nke1/k⌋{\displaystyle r=\left\lfloor {\frac {n}{ke^{1/k}}}\right\rfloor }candidates without picking any one of them, then pick every candidate that is better than those firstr{\displaystyle r}candidates until we run out of candidates or picks. Ifk{\displaystyle k}is held constant whilen→∞{\displaystyle n\to \infty }, then the probability of success converges to1ek{\displaystyle {\frac {1}{ek}}}.[16]ByVanderbei 1980, ifk=n/2{\displaystyle k=n/2}, then the probability of success is1n/2+1{\displaystyle {\frac {1}{n/2+1}}}. In this variant, a player is allowedr{\displaystyle r}choices and wins if any choice is the best. An optimal strategy for this problem belongs to the class of strategies defined by a set of threshold numbers(a1,a2,...,ar){\displaystyle (a_{1},a_{2},...,a_{r})}, wherea1>a2>⋯>ar{\displaystyle a_{1}>a_{2}>\cdots >a_{r}}. Specifically, imagine that you haver{\displaystyle r}letters of acceptance labelled from1{\displaystyle 1}tor{\displaystyle r}. You would haver{\displaystyle r}application officers, each holding one letter. You keep interviewing the candidates and rank them on a chart that every application officer can see. 
Now officer i would send their letter of acceptance to the first candidate that is better than all candidates 1 to a_i. (Unsent letters of acceptance are by default given to the last applicants, the same as in the standard secretary problem.)[17] In the n → ∞ limit, each {\displaystyle a_{i}\sim ne^{-k_{i}}}, for some rational number k_i.[18] When r = 2, the probability of winning converges to {\displaystyle e^{-1}+e^{-{\frac {3}{2}}}} as n → ∞. More generally, for positive integers r, the probability of winning converges to p_1 + p_2 + ⋯ + p_r, where {\displaystyle p_{i}=\lim _{n\rightarrow \infty }{\frac {a_{i}}{n}}}.[18] Values up to r = 4 have been computed:[17] {\displaystyle e^{-1}+e^{-{\frac {3}{2}}}+e^{-{\frac {47}{24}}}+e^{-{\frac {2761}{1152}}}}. Matsui & Ano 2016 gave a general algorithm. For example, {\displaystyle p_{5}=e^{-{\frac {4162637}{1474560}}}}. Experimental psychologists and economists have studied the decision behavior of actual people in secretary problem situations.[19] In large part, this work has shown that people tend to stop searching too soon. This may be explained, at least in part, by the cost of evaluating candidates. In real world settings, this might suggest that people do not search enough whenever they are faced with problems where the decision alternatives are encountered sequentially. For example, when trying to decide at which gas station along a highway to stop for gas, people might not search enough before stopping. If true, then they would tend to pay more for gas than if they had searched longer. The same may be true when people search online for airline tickets. Experimental research on problems such as the secretary problem is sometimes referred to as behavioral operations research. While there is a substantial body of neuroscience research on information integration, or the representation of belief, in perceptual decision-making tasks using both animal[20][21] and human subjects,[22] there is relatively little known about how the decision to stop gathering information is arrived at. Researchers have studied the neural bases of solving the secretary problem in healthy volunteers using functional MRI.[23] A Markov decision process (MDP) was used to quantify the value of continuing to search versus committing to the current option. Decisions to take versus decline an option engaged parietal and dorsolateral prefrontal cortices, as well as ventral striatum, anterior insula, and anterior cingulate. Therefore, brain regions previously implicated in evidence integration and reward representation encode threshold crossings that trigger decisions to commit to a choice. The secretary problem was apparently introduced in 1949 by Merrill M. Flood, who called it the fiancée problem in a lecture he gave that year. He referred to it several times during the 1950s, for example, in a conference talk at Purdue on 9 May 1958, and it eventually became widely known in the folklore although nothing was published at the time. In 1958 he sent a letter to Leonard Gillman, with copies to a dozen friends including Samuel Karlin and J. Robbins, outlining a proof of the optimum strategy, with an appendix by R.
Palermo who proved that all strategies are dominated by a strategy of the form "reject the first p unconditionally, then accept the next candidate who is better".[24] The first publication was apparently by Martin Gardner in Scientific American, February 1960. He had heard about it from John H. Fox Jr. and L. Gerald Marnie, who had independently come up with an equivalent problem in 1958; they called it the "game of googol". Fox and Marnie did not know the optimum solution; Gardner asked for advice from Leo Moser, who (together with J. R. Pounder) provided a correct analysis for publication in the magazine. Soon afterwards, several mathematicians wrote to Gardner to tell him about the equivalent problem they had heard via the grapevine, all of which can most likely be traced to Flood's original work.[25] The 1/e-law of best choice is due to F. Thomas Bruss.[26] Ferguson has an extensive bibliography and points out that a similar (but different) problem had been considered by Arthur Cayley in 1875 and even, long before that, by Johannes Kepler, who spent two years (1611–1613), after the death of his first wife, investigating 11 candidates for marriage.[27] The secretary problem can be generalized to the case where there are multiple different jobs. Again, there are n applicants coming in random order. When a candidate arrives, she reveals a set of nonnegative numbers. Each value specifies her qualification for one of the jobs. The administrator not only has to decide whether or not to take the applicant but, if so, also has to assign her permanently to one of the jobs. The objective is to find an assignment where the sum of qualifications is as large as possible. This problem is identical to finding a maximum-weight matching in an edge-weighted bipartite graph where the n nodes of one side arrive online in random order. Thus, it is a special case of the online bipartite matching problem. By a generalization of the classic algorithm for the secretary problem, it is possible to obtain an assignment where the expected sum of qualifications is only a factor of e less than an optimal (offline) assignment.[28]
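Returning to the multiple-choice variant discussed earlier, the limiting win probability for r = 2 can be checked by simulation. The sketch below assumes the limiting thresholds a1 = n/e and a2 = n·e^(−3/2), taken from p1 = e^(−1) and p2 = e^(−3/2):

```python
import math
import random

def two_choice_trial(n: int) -> bool:
    """One trial of the two-letters variant with the assumed limiting
    thresholds a1 = n/e and a2 = n * e**-1.5 (with a1 > a2)."""
    a1, a2 = int(n / math.e), int(n * math.exp(-1.5))
    xs = list(range(n))
    random.shuffle(xs)
    running_max, records = -1, []      # positions of best-so-far applicants
    for i, x in enumerate(xs):
        if x > running_max:
            running_max = x
            records.append(i)
    pick2 = next((i for i in records if i >= a2), n - 1)
    pick1 = next((i for i in records if i >= a1 and i != pick2), n - 1)
    return xs[pick2] == n - 1 or xs[pick1] == n - 1

n, trials = 500, 50_000
wins = sum(two_choice_trial(n) for _ in range(trials))
print(f"win rate {wins / trials:.3f}; "
      f"e^-1 + e^-3/2 = {math.exp(-1) + math.exp(-1.5):.3f}")
```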
https://en.wikipedia.org/wiki/Secretary_problem
In graph theory, graph coloring is a methodic assignment of labels traditionally called "colors" to elements of a graph. The assignment is subject to certain constraints, such as that no two adjacent elements have the same color. Graph coloring is a special case of graph labeling. In its simplest form, it is a way of coloring the vertices of a graph such that no two adjacent vertices are of the same color; this is called a vertex coloring. Similarly, an edge coloring assigns a color to each edge so that no two adjacent edges are of the same color, and a face coloring of a planar graph assigns a color to each face (or region) so that no two faces that share a boundary have the same color. Vertex coloring is often used to introduce graph coloring problems, since other coloring problems can be transformed into a vertex coloring instance. For example, an edge coloring of a graph is just a vertex coloring of its line graph, and a face coloring of a plane graph is just a vertex coloring of its dual. However, non-vertex coloring problems are often stated and studied as-is. This is partly pedagogical, and partly because some problems are best studied in their non-vertex form, as in the case of edge coloring. The convention of using colors originates from coloring the countries in a political map, where each face is literally colored. This was generalized to coloring the faces of a graph embedded in the plane. By planar duality it became coloring the vertices, and in this form it generalizes to all graphs. In mathematical and computer representations, it is typical to use the first few positive or non-negative integers as the "colors". In general, one can use any finite set as the "color set". The nature of the coloring problem depends on the number of colors but not on what they are. Graph coloring enjoys many practical applications as well as theoretical challenges. Besides the classical types of problems, different limitations can also be set on the graph, or on the way a color is assigned, or even on the color itself. It has even reached the general public in the form of the popular number puzzle Sudoku. Graph coloring is still a very active field of research. Note: Many terms used in this article are defined in Glossary of graph theory. The first results about graph coloring deal almost exclusively with planar graphs in the form of map coloring. While trying to color a map of the counties of England, Francis Guthrie postulated the four color conjecture, noting that four colors were sufficient to color the map so that no regions sharing a common border received the same color. Guthrie's brother passed on the question to his mathematics teacher Augustus De Morgan at University College, who mentioned it in a letter to William Hamilton in 1852. Arthur Cayley raised the problem at a meeting of the London Mathematical Society in 1879. The same year, Alfred Kempe published a paper that claimed to establish the result, and for a decade the four color problem was considered solved. For his accomplishment Kempe was elected a Fellow of the Royal Society and later President of the London Mathematical Society.[1] In 1890, Percy John Heawood pointed out that Kempe's argument was wrong. However, in that paper he proved the five color theorem, saying that every planar map can be colored with no more than five colors, using ideas of Kempe.
In the following century, a vast amount of work was done and theories were developed to reduce the number of colors to four, until the four color theorem was finally proved in 1976 byKenneth AppelandWolfgang Haken. The proof went back to the ideas of Heawood and Kempe and largely disregarded the intervening developments.[2]The proof of the four color theorem is noteworthy, aside from its solution of a century-old problem, for being the first major computer-aided proof. In 1912,George David Birkhoffintroduced thechromatic polynomialto study the coloring problem, which was generalised to theTutte polynomialbyW. T. Tutte, both of which are important invariants inalgebraic graph theory. Kempe had already drawn attention to the general, non-planar case in 1879,[3]and many results on generalisations of planar graph coloring to surfaces of higher order followed in the early 20th century. In 1960,Claude Bergeformulated another conjecture about graph coloring, thestrong perfect graph conjecture, originally motivated by aninformation-theoreticconcept called thezero-error capacityof a graph introduced byShannon. The conjecture remained unresolved for 40 years, until it was established as the celebratedstrong perfect graph theorembyChudnovsky,Robertson,Seymour, andThomasin 2002. Graph coloring has been studied as an algorithmic problem since the early 1970s: the chromatic number problem (see section§ Vertex coloringbelow) is one ofKarp's 21 NP-complete problemsfrom 1972, and at approximately the same time various exponential-time algorithms were developed based on backtracking and on the deletion-contraction recurrence ofZykov (1949). One of the major applications of graph coloring,register allocationin compilers, was introduced in 1981. When used without any qualification, acoloringof a graph almost always refers to aproper vertex coloring, namely a labeling of the graph's vertices with colors such that no two vertices sharing the sameedgehave the same color. Since a vertex with aloop(i.e. a connection directly back to itself) could never be properly colored, it is understood that graphs in this context are loopless. The terminology of usingcolorsfor vertex labels goes back to map coloring. Labels likeredandblueare only used when the number of colors is small, and normally it is understood that the labels are drawn from theintegers{1, 2, 3, ...}. A coloring using at mostkcolors is called a (proper)k-coloring. The smallest number of colors needed to color a graphGis called itschromatic number, and is often denotedχ(G).[4]Sometimesγ(G)is used, sinceχ(G)is also used to denote theEuler characteristicof a graph.[5]A graph that can be assigned a (proper)k-coloring isk-colorable, and it isk-chromaticif its chromatic number is exactlyk. A subset of vertices assigned to the same color is called acolor class; every such class forms anindependent set. Thus, ak-coloring is the same as a partition of the vertex set intokindependent sets, and the termsk-partiteandk-colorablehave the same meaning. Thechromatic polynomialcounts the number of ways a graph can be colored using some of a given number of colors. For example, using three colors, the graph in the adjacent image can be colored in 12 ways. With only two colors, it cannot be colored at all. With four colors, it can be colored in 24 + 4 × 12 = 72 ways: using all four colors, there are 4! 
= 24 valid colorings (every assignment of four colors to any 4-vertex graph is a proper coloring); and for every choice of three of the four colors, there are 12 valid 3-colorings. So, for the graph in the example, a table of the number of valid colorings would start like this:

available colors:     1   2   3    4
number of colorings:  0   0   12   72

The chromatic polynomial is a function P(G, t) that counts the number of t-colorings of G. As the name indicates, for a given G the function is indeed a polynomial in t. For the example graph, P(G, t) = t(t − 1)²(t − 2), and indeed P(G, 4) = 72. The chromatic polynomial includes more information about the colorability of G than does the chromatic number. Indeed, χ is the smallest positive integer that is not a zero of the chromatic polynomial: χ(G) = min{k : P(G, k) > 0}. An edge coloring of a graph is a proper coloring of the edges, meaning an assignment of colors to edges so that no vertex is incident to two edges of the same color. An edge coloring with k colors is called a k-edge-coloring and is equivalent to the problem of partitioning the edge set into k matchings. The smallest number of colors needed for an edge coloring of a graph G is the chromatic index, or edge chromatic number, χ′(G). A Tait coloring is a 3-edge coloring of a cubic graph. The four color theorem is equivalent to the assertion that every planar cubic bridgeless graph admits a Tait coloring. Total coloring is a type of coloring on the vertices and edges of a graph. When used without any qualification, a total coloring is always assumed to be proper in the sense that no adjacent vertices, no adjacent edges, and no edge and its end-vertices are assigned the same color. The total chromatic number χ″(G) of a graph G is the fewest colors needed in any total coloring of G. For a graph with a strong embedding on a surface, the face coloring is the dual of the vertex coloring problem. For a graph G with a strong embedding on an orientable surface, William T. Tutte[6][7][8] discovered that if the graph is k-face-colorable then G admits a nowhere-zero k-flow. The equivalence holds if the surface is a sphere. An unlabeled coloring of a graph is an orbit of a coloring under the action of the automorphism group of the graph. The colors remain labeled; it is the graph that is unlabeled. There is an analogue of the chromatic polynomial which counts the number of unlabeled colorings of a graph from a given finite color set. If we interpret a coloring of a graph on d vertices as a vector in {\displaystyle \mathbb {Z} ^{d}}, the action of an automorphism is a permutation of the coefficients in the coloring vector. Assigning distinct colors to distinct vertices always yields a proper coloring, so {\displaystyle 1\leq \chi (G)\leq n.} The only graphs that can be 1-colored are edgeless graphs. A complete graph {\displaystyle K_{n}} of n vertices requires {\displaystyle \chi (K_{n})=n} colors. In an optimal coloring there must be at least one of the graph's m edges between every pair of color classes, so {\displaystyle {\frac {\chi (G)(\chi (G)-1)}{2}}\leq m.} More generally, a family {\displaystyle {\mathcal {F}}} of graphs is χ-bounded if there is some function c such that the graphs G in {\displaystyle {\mathcal {F}}} can be colored with at most c(ω(G)) colors, where ω(G) is the clique number of G. For the family of the perfect graphs this function is c(ω(G)) = ω(G). The 2-colorable graphs are exactly the bipartite graphs, including trees and forests. By the four color theorem, every planar graph can be 4-colored.
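The counts for the example graph above are easy to reproduce by brute force. The sketch below assumes the example graph is a triangle with one pendant vertex, an assumption chosen because it matches the stated polynomial t(t − 1)²(t − 2):

```python
from itertools import product

# Assumed example graph: triangle {0, 1, 2} with a pendant vertex 3
# attached to vertex 2; this choice reproduces the counts quoted above.
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
n = 4

def count_colorings(k: int) -> int:
    """Brute-force evaluation of the chromatic polynomial P(G, k)."""
    return sum(
        all(c[u] != c[v] for u, v in edges)
        for c in product(range(k), repeat=n)
    )

for k in range(1, 5):
    print(k, count_colorings(k))   # 1:0, 2:0, 3:12, 4:72 = k(k-1)^2(k-2)
```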
A greedy coloring shows that every graph can be colored with one more color than the maximum vertex degree: {\displaystyle \chi (G)\leq \Delta (G)+1.} Complete graphs have χ(G) = n and Δ(G) = n − 1, and odd cycles have χ(G) = 3 and Δ(G) = 2, so for these graphs this bound is best possible. In all other cases, the bound can be slightly improved; Brooks' theorem[9] states that {\displaystyle \chi (G)\leq \Delta (G)} for every connected simple graph G that is neither a complete graph nor an odd cycle. Several lower bounds for the chromatic number have been discovered over the years: If G contains a clique of size k, then at least k colors are needed to color that clique; in other words, the chromatic number is at least the clique number: {\displaystyle \chi (G)\geq \omega (G).} For perfect graphs this bound is tight. Finding cliques is known as the clique problem. Hoffman's bound: Let W be a real symmetric matrix such that {\displaystyle W_{i,j}=0} whenever (i,j) is not an edge in G. Define {\displaystyle \chi _{W}(G)=1-{\tfrac {\lambda _{\max }(W)}{\lambda _{\min }(W)}}}, where {\displaystyle \lambda _{\max }(W),\lambda _{\min }(W)} are the largest and smallest eigenvalues of W. Define {\displaystyle \chi _{H}(G)=\max _{W}\chi _{W}(G)}, with W as above. Then: {\displaystyle \chi _{H}(G)\leq \chi (G).} Vector chromatic number: Let W be a positive semi-definite matrix such that {\displaystyle W_{i,j}\leq -{\tfrac {1}{k-1}}} whenever (i,j) is an edge in G. Define {\displaystyle \chi _{V}(G)} to be the least k for which such a matrix W exists. Then {\displaystyle \chi _{V}(G)\leq \chi (G).} Lovász number: The Lovász number of a complementary graph is also a lower bound on the chromatic number: {\displaystyle \vartheta ({\bar {G}})\leq \chi (G).} Fractional chromatic number: The fractional chromatic number of a graph is a lower bound on the chromatic number as well: {\displaystyle \chi _{f}(G)\leq \chi (G).} These bounds are ordered as follows: {\displaystyle \chi _{H}(G)\leq \chi _{V}(G)\leq \vartheta ({\bar {G}})\leq \chi _{f}(G)\leq \chi (G).} Graphs with large cliques have a high chromatic number, but the opposite is not true. The Grötzsch graph is an example of a 4-chromatic graph without a triangle, and the example can be generalized to the Mycielskians. To prove this, Mycielski and Zykov each gave a construction of an inductively defined family of triangle-free graphs with arbitrarily large chromatic number.[11] Burling (1965) constructed axis-aligned boxes in {\displaystyle \mathbb {R} ^{3}} whose intersection graph is triangle-free and requires arbitrarily many colors to be properly colored. This family of graphs is then called the Burling graphs. The same class of graphs is used for the construction of a family of triangle-free line segments in the plane, given by Pawlik et al. (2014).[12] It shows that the chromatic number of its intersection graph is arbitrarily large as well. Hence, this implies that axis-aligned boxes in {\displaystyle \mathbb {R} ^{3}} as well as line segments in {\displaystyle \mathbb {R} ^{2}} are not χ-bounded.[12] From Brooks's theorem, graphs with high chromatic number must have high maximum degree. But colorability is not an entirely local phenomenon: a graph with high girth looks locally like a tree, because all cycles are long, but its chromatic number need not be 2: by a probabilistic construction of Erdős, there exist graphs of arbitrarily high girth and arbitrarily high chromatic number. An edge coloring of G is a vertex coloring of its line graph {\displaystyle L(G)}, and vice versa. Thus, {\displaystyle \chi '(G)=\chi (L(G)).} There is a strong relationship between edge colorability and the graph's maximum degree Δ(G).
Since all edges incident to the same vertex need their own color, we have {\displaystyle \chi '(G)\geq \Delta (G).} Moreover, by Kőnig's theorem, {\displaystyle \chi '(G)=\Delta (G)} if G is bipartite. In general, the relationship is even stronger than what Brooks's theorem gives for vertex coloring: by Vizing's theorem, {\displaystyle \Delta (G)\leq \chi '(G)\leq \Delta (G)+1.} A graph has a k-coloring if and only if it has an acyclic orientation for which the longest path has length at most k; this is the Gallai–Hasse–Roy–Vitaver theorem (Nešetřil & Ossona de Mendez 2012). For planar graphs, vertex colorings are essentially dual to nowhere-zero flows. About infinite graphs, much less is known. One of the few results about infinite graph coloring is the De Bruijn–Erdős theorem: if every finite subgraph of an infinite graph G is k-colorable, then G is k-colorable as well (assuming the axiom of choice). As stated above, {\displaystyle \omega (G)\leq \chi (G)\leq \Delta (G)+1.} A conjecture of Reed from 1998 is that the value is essentially closer to the lower bound, {\displaystyle \chi (G)\leq \left\lceil {\frac {\omega (G)+\Delta (G)+1}{2}}\right\rceil .} The chromatic number of the plane, where two points are adjacent if they have unit distance, is unknown, although it is one of 5, 6, or 7. Other open problems concerning the chromatic number of graphs include the Hadwiger conjecture stating that every graph with chromatic number k has a complete graph on k vertices as a minor, the Erdős–Faber–Lovász conjecture bounding the chromatic number of unions of complete graphs that have at most one vertex in common to each pair, and the Albertson conjecture that among k-chromatic graphs the complete graphs are the ones with smallest crossing number. When Birkhoff and Lewis introduced the chromatic polynomial in their attack on the four-color theorem, they conjectured that for planar graphs G, the polynomial P(G, t) has no zeros in the region [4, ∞). Although it is known that such a chromatic polynomial has no zeros in the region [5, ∞) and that P(G, 4) ≠ 0, their conjecture is still unresolved. It also remains an unsolved problem to characterize graphs which have the same chromatic polynomial and to determine which polynomials are chromatic. Determining if a graph can be colored with 2 colors is equivalent to determining whether or not the graph is bipartite, and thus computable in linear time using breadth-first search or depth-first search. More generally, the chromatic number and a corresponding coloring of perfect graphs can be computed in polynomial time using semidefinite programming. Closed formulas for chromatic polynomials are known for many classes of graphs, such as forests, chordal graphs, cycles, wheels, and ladders, so these can be evaluated in polynomial time. If the graph is planar and has low branch-width (or is nonplanar but with a known branch-decomposition), then it can be solved in polynomial time using dynamic programming. In general, the time required is polynomial in the graph size, but exponential in the branch-width. Brute-force search for a k-coloring considers each of the {\displaystyle k^{n}} assignments of k colors to n vertices and checks for each if it is legal. To compute the chromatic number and the chromatic polynomial, this procedure is used for every k = 1, …, n − 1, which is impractical for all but the smallest input graphs. Using dynamic programming and a bound on the number of maximal independent sets, k-colorability can be decided in time and space {\displaystyle O(2.4423^{n})}.[15] Using the principle of inclusion–exclusion and Yates's algorithm for the fast zeta transform, k-colorability can be decided in time {\displaystyle O(2^{n}n)}[14][16][17][18] for any k.
Faster algorithms are known for 3- and 4-colorability, which can be decided in time {\displaystyle O(1.3289^{n})}[19] and {\displaystyle O(1.7272^{n})},[20] respectively. Exponentially faster algorithms are also known for 5- and 6-colorability, as well as for restricted families of graphs, including sparse graphs.[21] The contraction G/uv of a graph G is the graph obtained by identifying the vertices u and v, and removing any edges between them. The remaining edges originally incident to u or v are now incident to their identification (i.e., the new fused node uv). This operation plays a major role in the analysis of graph coloring. The chromatic number satisfies the recurrence relation {\displaystyle \chi (G)=\min\{\chi (G+uv),\chi (G/uv)\}} due to Zykov (1949), where u and v are non-adjacent vertices, and G + uv is the graph with the edge uv added. Several algorithms are based on evaluating this recurrence and the resulting computation tree is sometimes called a Zykov tree. The running time depends on the heuristic for choosing the vertices u and v. The chromatic polynomial satisfies the following recurrence relation: {\displaystyle P(G,k)=P(G-uv,k)-P(G/uv,k),} where u and v are adjacent vertices, and G − uv is the graph with the edge uv removed. P(G − uv, k) represents the number of possible proper colorings of the graph, where u and v may have the same or different colors. Then the proper colorings arise from two different graphs. To explain, if the vertices u and v have different colors, then we might as well consider a graph where u and v are adjacent. If u and v have the same colors, we might as well consider a graph where u and v are contracted. Tutte's curiosity about which other graph properties satisfied this recurrence led him to discover a bivariate generalization of the chromatic polynomial, the Tutte polynomial. These expressions give rise to a recursive procedure called the deletion–contraction algorithm, which forms the basis of many algorithms for graph coloring. The running time satisfies the same recurrence relation as the Fibonacci numbers, so in the worst case the algorithm runs in time within a polynomial factor of {\displaystyle \left({\tfrac {1+{\sqrt {5}}}{2}}\right)^{n+m}=O(1.6180^{n+m})} for n vertices and m edges.[22] The analysis can be improved to within a polynomial factor of the number t(G) of spanning trees of the input graph.[23] In practice, branch and bound strategies and graph isomorphism rejection are employed to avoid some recursive calls. The running time depends on the heuristic used to pick the vertex pair.
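A direct, if exponential, implementation of the deletion–contraction recurrence might look as follows (an illustrative sketch, reusing the assumed triangle-plus-pendant example graph):

```python
def chromatic_polynomial(vertices: frozenset, edges: frozenset, k: int) -> int:
    """Evaluate P(G, k) via P(G, k) = P(G - uv, k) - P(G / uv, k)."""
    if not edges:
        return k ** len(vertices)      # empty graph: every assignment is proper
    u, v = min(edges)                  # pick any edge (edges are sorted tuples)
    deleted = edges - {(u, v)}
    def merge(x):                      # contraction sends v to u
        return u if x == v else x
    contracted = frozenset(
        tuple(sorted((merge(x), merge(y))))
        for x, y in deleted
        if merge(x) != merge(y)        # drop loops created by contracting uv
    )
    return (chromatic_polynomial(vertices, deleted, k)
            - chromatic_polynomial(vertices - {v}, contracted, k))

# Same assumed example graph as before: triangle 0-1-2 plus pendant vertex 3.
V = frozenset({0, 1, 2, 3})
E = frozenset({(0, 1), (0, 2), (1, 2), (2, 3)})
print([chromatic_polynomial(V, E, k) for k in range(5)])   # [0, 0, 0, 12, 72]
```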
The greedy algorithm considers the vertices in a specific order v_1, ..., v_n and assigns to v_i the smallest available color not used by v_i's neighbours among v_1, ..., v_{i−1}, adding a fresh color if needed. The quality of the resulting coloring depends on the chosen ordering. There exists an ordering that leads to a greedy coloring with the optimal number χ(G) of colors. On the other hand, greedy colorings can be arbitrarily bad; for example, the crown graph on n vertices can be 2-colored, but has an ordering that leads to a greedy coloring with n/2 colors. For chordal graphs, and for special cases of chordal graphs such as interval graphs and indifference graphs, the greedy coloring algorithm can be used to find optimal colorings in polynomial time, by choosing the vertex ordering to be the reverse of a perfect elimination ordering for the graph. The perfectly orderable graphs generalize this property, but it is NP-hard to find a perfect ordering of these graphs. If the vertices are ordered according to their degrees, the resulting greedy coloring uses at most {\displaystyle \max _{i}\min\{d(x_{i})+1,i\}} colors, at most one more than the graph's maximum degree. This heuristic is sometimes called the Welsh–Powell algorithm.[24] Another heuristic due to Brélaz establishes the ordering dynamically while the algorithm proceeds, choosing next the vertex adjacent to the largest number of different colors.[25] Many other graph coloring heuristics are similarly based on greedy coloring for a specific static or dynamic strategy of ordering the vertices; these algorithms are sometimes called sequential coloring algorithms. The maximum (worst) number of colors that can be obtained by the greedy algorithm, by using a vertex ordering chosen to maximize this number, is called the Grundy number of a graph. Two well-known polynomial-time heuristics for graph coloring are the DSatur and recursive largest first (RLF) algorithms. Similarly to the greedy coloring algorithm, DSatur colors the vertices of a graph one after another, introducing a previously unused color when needed. Once a new vertex has been colored, the algorithm determines which of the remaining uncolored vertices has the highest number of different colors in its neighborhood and colors this vertex next. This is defined as the degree of saturation of a given vertex. The recursive largest first algorithm operates in a different fashion by constructing each color class one at a time. It does this by identifying a maximal independent set of vertices in the graph using specialized heuristic rules. It then assigns these vertices to the same color and removes them from the graph. These actions are repeated on the remaining subgraph until no vertices remain. The worst-case complexity of DSatur is {\displaystyle O(n^{2})}, where n is the number of vertices in the graph. The algorithm can also be implemented using a binary heap to store saturation degrees, operating in {\displaystyle O((n+m)\log n)} where m is the number of edges in the graph.[26] This produces much faster runs with sparse graphs. The overall complexity of RLF is slightly higher than DSatur at {\displaystyle O(mn)}.[26] DSatur and RLF are exact for bipartite, cycle, and wheel graphs.[26]
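A compact sketch of the DSatur heuristic described above (illustrative; ties in saturation are broken by degree, as in Brélaz's rule):

```python
def dsatur(adj: dict) -> dict:
    """DSatur: repeatedly color the uncolored vertex whose neighborhood
    currently shows the most distinct colors (highest saturation degree)."""
    color = {}
    saturation = {v: set() for v in adj}       # colors seen among neighbors
    while len(color) < len(adj):
        # Highest saturation first; break ties by degree (Brelaz's rule).
        v = max((u for u in adj if u not in color),
                key=lambda u: (len(saturation[u]), len(adj[u])))
        used = {color[w] for w in adj[v] if w in color}
        color[v] = next(c for c in range(len(adj)) if c not in used)
        for w in adj[v]:
            saturation[w].add(color[v])
    return color

# A 5-cycle needs 3 colors; DSatur is exact on cycles.
cycle5 = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
coloring = dsatur(cycle5)
print(coloring, "-> colors used:", len(set(coloring.values())))
```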
It is known that a χ-chromatic graph can be c-colored in the deterministic LOCAL model in {\displaystyle O(n^{1/\alpha })} rounds, with {\displaystyle \alpha =\left\lfloor {\frac {c-1}{\chi -1}}\right\rfloor }. A matching lower bound of {\displaystyle \Omega (n^{1/\alpha })} rounds is also known. This lower bound holds even if quantum computers that can exchange quantum information, possibly with a pre-shared entangled state, are allowed. In the field of distributed algorithms, graph coloring is closely related to the problem of symmetry breaking. The current state-of-the-art randomized algorithms are faster for sufficiently large maximum degree Δ than deterministic algorithms. The fastest randomized algorithms employ the multi-trials technique by Schneider and Wattenhofer.[27] In a symmetric graph, a deterministic distributed algorithm cannot find a proper vertex coloring. Some auxiliary information is needed in order to break symmetry. A standard assumption is that initially each node has a unique identifier, for example, from the set {1, 2, ..., n}. Put otherwise, we assume that we are given an n-coloring. The challenge is to reduce the number of colors from n to, e.g., Δ + 1. The more colors are employed, e.g. O(Δ) instead of Δ + 1, the fewer communication rounds are required.[27] A straightforward distributed version of the greedy algorithm for (Δ + 1)-coloring requires Θ(n) communication rounds in the worst case – information may need to be propagated from one side of the network to another side. The simplest interesting case is an n-cycle. Richard Cole and Uzi Vishkin[28] show that there is a distributed algorithm that reduces the number of colors from n to O(log n) in one synchronous communication step. By iterating the same procedure, it is possible to obtain a 3-coloring of an n-cycle in O(log* n) communication steps (assuming that we have unique node identifiers). The function log*, the iterated logarithm, is an extremely slowly growing function, "almost constant". Hence the result by Cole and Vishkin raised the question of whether there is a constant-time distributed algorithm for 3-coloring an n-cycle. Linial (1992) showed that this is not possible: any deterministic distributed algorithm requires Ω(log* n) communication steps to reduce an n-coloring to a 3-coloring in an n-cycle. The technique by Cole and Vishkin can be applied in arbitrary bounded-degree graphs as well; the running time is poly(Δ) + O(log* n).[29] The technique was extended to unit disk graphs by Schneider and Wattenhofer.[30] The fastest deterministic algorithms for (Δ + 1)-coloring for small Δ are due to Leonid Barenboim, Michael Elkin and Fabian Kuhn.[31] The algorithm by Barenboim et al. runs in time O(Δ) + log*(n)/2, which is optimal in terms of n since the constant factor 1/2 cannot be improved due to Linial's lower bound. Panconesi & Srinivasan (1996) use network decompositions to compute a Δ + 1 coloring in time {\displaystyle 2^{O\left({\sqrt {\log n}}\right)}}. The problem of edge coloring has also been studied in the distributed model. Panconesi & Rizzi (2001) achieve a (2Δ − 1)-coloring in O(Δ + log* n) time in this model. The lower bound for distributed vertex coloring due to Linial (1992) applies to the distributed edge coloring problem as well. Decentralized algorithms are ones where no message passing is allowed (in contrast to distributed algorithms where local message passing takes place), and efficient decentralized algorithms exist that will color a graph if a proper coloring exists. These assume that a vertex is able to sense whether any of its neighbors are using the same color as the vertex, i.e., whether a local conflict exists. This is a mild assumption in many applications, e.g., in wireless channel allocation it is usually reasonable to assume that a station will be able to detect whether other interfering transmitters are using the same channel (e.g., by measuring the SINR). This sensing information is sufficient to allow algorithms based on learning automata to find a proper graph coloring with probability one.[32]
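One synchronous round of the Cole–Vishkin color reduction described above can be sketched as follows (illustrative; the nodes of a directed cycle start with their unique identifiers as colors):

```python
def cole_vishkin_step(colors: list) -> list:
    """One synchronous round on a directed n-cycle: each node compares its
    color with its successor's, finds the lowest bit position b where they
    differ, and adopts 2*b + (its own bit at b) as its new color."""
    n = len(colors)
    out = []
    for i in range(n):
        mine, succ = colors[i], colors[(i + 1) % n]
        diff = mine ^ succ                     # nonzero for a proper coloring
        b = (diff & -diff).bit_length() - 1    # index of lowest differing bit
        out.append(2 * b + ((mine >> b) & 1))
    return out

colors = list(range(1000))                     # unique identifiers as colors
for round_no in range(1, 5):
    colors = cole_vishkin_step(colors)
    assert all(colors[i] != colors[(i + 1) % len(colors)]
               for i in range(len(colors)))    # coloring stays proper
    print(f"after round {round_no}: {len(set(colors))} distinct colors")
```

Each round shrinks a palette of size about 2^b to about 2b, which is why O(log* n) rounds suffice to reach a constant number of colors.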
Graph coloring is computationally hard. It is NP-complete to decide if a given graph admits a k-coloring for a given k except for the cases k ∈ {0, 1, 2}. In particular, it is NP-hard to compute the chromatic number.[33] The 3-coloring problem remains NP-complete even on 4-regular planar graphs.[34] On graphs with maximal degree 3 or less, however, Brooks' theorem implies that the 3-coloring problem can be solved in linear time. Further, for every k > 3, a k-coloring of a planar graph exists by the four color theorem, and it is possible to find such a coloring in polynomial time. However, finding the lexicographically smallest 4-coloring of a planar graph is NP-complete.[35] The best known approximation algorithm computes a coloring of size at most within a factor {\displaystyle O(n(\log \log n)^{2}(\log n)^{-3})} of the chromatic number.[36] For all ε > 0, approximating the chromatic number within {\displaystyle n^{1-\varepsilon }} is NP-hard.[37] It is also NP-hard to color a 3-colorable graph with 5 colors,[38] a 4-colorable graph with 7 colors,[38] and a k-colorable graph with {\displaystyle \textstyle {\binom {k}{\lfloor k/2\rfloor }}-1} colors for k ≥ 5.[39] Computing the coefficients of the chromatic polynomial is ♯P-hard. In fact, even computing the value of the chromatic polynomial {\displaystyle P(G,k)} is ♯P-hard at any rational point k except for k = 1 and k = 2.[40] There is no FPRAS for evaluating the chromatic polynomial at any rational point k ≥ 1.5 except for k = 2 unless NP = RP.[41] For edge coloring, the proof of Vizing's result gives an algorithm that uses at most Δ + 1 colors. However, deciding between the two candidate values for the edge chromatic number is NP-complete.[42] In terms of approximation algorithms, Vizing's algorithm shows that the edge chromatic number can be approximated to within 4/3, and the hardness result shows that no (4/3 − ε)-algorithm exists for any ε > 0 unless P = NP. These are among the oldest results in the literature of approximation algorithms, even though neither paper makes explicit use of that notion.[43] Vertex coloring models a number of scheduling problems.[44] In the cleanest form, a given set of jobs needs to be assigned to time slots, each job requiring one such slot. Jobs can be scheduled in any order, but pairs of jobs may be in conflict in the sense that they may not be assigned to the same time slot, for example because they both rely on a shared resource. The corresponding graph contains a vertex for every job and an edge for every conflicting pair of jobs. The chromatic number of the graph is exactly the minimum makespan, the optimal time to finish all jobs without conflicts. Details of the scheduling problem define the structure of the graph. For example, when assigning aircraft to flights, the resulting conflict graph is an interval graph, so the coloring problem can be solved efficiently. In bandwidth allocation to radio stations, the resulting conflict graph is a unit disk graph, so the coloring problem is 3-approximable. A compiler is a computer program that translates one computer language into another. To improve the execution time of the resulting code, one of the techniques of compiler optimization is register allocation, where the most frequently used values of the compiled program are kept in the fast processor registers. Ideally, values are assigned to registers so that they can all reside in the registers when they are used. The textbook approach to this problem is to model it as a graph coloring problem.[45] The compiler constructs an interference graph, where vertices are variables and an edge connects two vertices if they are needed at the same time. If the graph can be colored with k colors then any set of variables needed at the same time can be stored in at most k registers.
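The interference-graph approach can be illustrated with a toy example (the live ranges below are hypothetical, and the greedy degree ordering is just one of many possible heuristics):

```python
# Hypothetical live ranges (start, end) for six variables of a toy program.
live = {"a": (0, 4), "b": (1, 3), "c": (2, 6), "d": (5, 8), "e": (7, 9), "f": (0, 9)}

def overlaps(p, q):
    """Two variables interfere when their live ranges overlap in time."""
    return p[0] < q[1] and q[0] < p[1]

interference = {v: {w for w in live if w != v and overlaps(live[v], live[w])}
                for v in live}

# Greedy coloring in order of decreasing degree (Welsh-Powell style);
# each color class corresponds to one processor register.
registers = {}
for v in sorted(interference, key=lambda u: -len(interference[u])):
    taken = {registers[w] for w in interference[v] if w in registers}
    registers[v] = next(r for r in range(len(live)) if r not in taken)

print(registers, "->", max(registers.values()) + 1, "registers")
```

Because live ranges form an interval graph, a coloring with the minimum number of registers can in fact be found exactly in polynomial time, as noted above for interval graphs.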
The problem of coloring a graph arises in many practical areas such as sports scheduling,[46] designing seating plans,[47] exam timetabling,[48] the scheduling of taxis,[49] and solving Sudoku puzzles.[50] An important class of improper coloring problems is studied in Ramsey theory, where the graph's edges are assigned to colors, and there is no restriction on the colors of incident edges. A simple example is the theorem on friends and strangers, which states that in any coloring of the edges of {\displaystyle K_{6}}, the complete graph of six vertices, there will be a monochromatic triangle; often illustrated by saying that any group of six people either has three mutual strangers or three mutual acquaintances. Ramsey theory is concerned with generalisations of this idea to seek regularity amid disorder, finding general conditions for the existence of monochromatic subgraphs with given structure. Modular coloring is a type of graph coloring in which each vertex is assigned a color from the integers modulo k, and each vertex is then relabeled with the sum, modulo k, of its neighbors' colors. Let k ≥ 2 be a number of colors, where {\displaystyle \mathbb {Z} _{k}} is the set of integers modulo k consisting of the elements (or colors) 0, 1, 2, ..., k−2, k−1. First, we color each vertex in G using the elements of {\displaystyle \mathbb {Z} _{k}}, allowing two adjacent vertices to be assigned the same color. In other words, we want c to be a coloring such that c: V(G) → {\displaystyle \mathbb {Z} _{k}} where adjacent vertices can be assigned the same color. For each vertex v in G, the color sum of v, σ(v), is the sum of the colors of all of the vertices adjacent to v, taken mod k: {\displaystyle \sigma (v)=\sum _{u\in N(v)}c(u){\pmod {k}},} where N(v) is the neighborhood of v. We then label each vertex with the new color determined by the sum of the colors of its adjacent vertices. The graph G has a modular k-coloring if, for every pair of adjacent vertices a, b, σ(a) ≠ σ(b). The modular chromatic number of G, mc(G), is the minimum value of k such that there exists a modular k-coloring of G. For example, let there be a vertex v adjacent to vertices with the assigned colors 0, 1, 1, and 3 mod 4 (k = 4). The color sum would be σ(v) = 0 + 1 + 1 + 3 mod 4 = 5 mod 4 = 1. This would be the new label of vertex v. We would repeat this process for every vertex in G. If two adjacent vertices have equal color sums, the assignment c is not a modular 4-coloring of G; if no two adjacent vertices have equal color sums, it is. Coloring can also be considered for signed graphs and gain graphs.
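The modular coloring definitions are easy to check mechanically; in the sketch below, the path graph and the color assignment are hypothetical examples (they show that mc of the 4-vertex path is at most 2):

```python
def color_sums(adj: dict, c: dict, k: int) -> dict:
    """sigma(v) = sum of the colors of v's neighbors, mod k."""
    return {v: sum(c[u] for u in adj[v]) % k for v in adj}

def is_modular_coloring(adj: dict, c: dict, k: int) -> bool:
    """A modular k-coloring requires sigma(a) != sigma(b) on every edge."""
    sigma = color_sums(adj, c, k)
    return all(sigma[v] != sigma[u] for v in adj for u in adj[v])

# Hypothetical example: the path 0-1-2-3 with k = 2 and the colors c below.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
c = {0: 0, 1: 1, 2: 0, 3: 0}
print(color_sums(adj, c, 2))            # {0: 1, 1: 0, 2: 1, 3: 0}
print(is_modular_coloring(adj, c, 2))   # True: adjacent sums always differ
```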
https://en.wikipedia.org/wiki/Vertex_coloring
A complex adaptive system (CAS) is a system that is complex in that it is a dynamic network of interactions, but the behavior of the ensemble may not be predictable according to the behavior of the components. It is adaptive in that the individual and collective behavior mutate and self-organize corresponding to the change-initiating micro-event or collection of events.[1][2][3] It is a "complex macroscopic collection" of relatively "similar and partially connected micro-structures" formed in order to adapt to the changing environment and increase their survivability as a macro-structure.[1][2][4] The complex adaptive systems approach builds on replicator dynamics.[5] The study of complex adaptive systems, a subset of nonlinear dynamical systems,[6] is an interdisciplinary matter that attempts to blend insights from the natural and social sciences to develop system-level models and insights that allow for heterogeneous agents, phase transition, and emergent behavior.[7] The term complex adaptive systems, or complexity science, is often used to describe the loosely organized academic field that has grown up around the study of such systems. Complexity science is not a single theory; it encompasses more than one theoretical framework and is interdisciplinary, seeking the answers to some fundamental questions about living, adaptable, changeable systems. The study of complex adaptive systems may adopt hard or soft approaches.[8] Hard theories use formal language that is precise, tend to see agents as having tangible properties, and usually see objects in a behavioral system that can be manipulated in some way. Softer theories use natural language and narratives that may be imprecise, and agents are subjects having both tangible and intangible properties. Examples of hard complexity theories include complex adaptive systems (CAS) and viability theory, and a class of softer theory is Viable System Theory. Many of the propositional considerations made in hard theory are also of relevance to softer theory. From here on, interest will center on CAS. The study of CAS focuses on complex, emergent and macroscopic properties of the system.[4][9][10] John H. Holland said that CAS "are systems that have a large number of components, often called agents, that interact and adapt or learn."[11] Typical examples of complex adaptive systems include: climate; cities; firms; markets; governments; industries; ecosystems; social networks; power grids; animal swarms; traffic flows; social insect (e.g. ant) colonies;[12] the brain and the immune system; and the cell and the developing embryo. Human social group-based endeavors, such as political parties, communities, geopolitical organizations, war, and terrorist networks are also considered CAS.[12][13][14] The internet and cyberspace, composed of and managed by a complex mix of human–computer interactions, is also regarded as a complex adaptive system.[15][16][17] CAS can be hierarchical, but more often exhibit aspects of "self-organization".[18] The term complex adaptive system was coined in 1968 by sociologist Walter F. Buckley[19][20] who proposed a model of cultural evolution which regards psychological and socio-cultural systems as analogous with biological species.[21] In the modern context, complex adaptive system is sometimes linked to memetics,[22] or proposed as a reformulation of memetics.[23] Michael D.
Cohen and Robert Axelrod, however, argue that the approach is not social Darwinism or sociobiology because, even though the concepts of variation, interaction and selection can be applied to modelling 'populations of business strategies', for example, the detailed evolutionary mechanisms are often distinctly unbiological.[24] As such, complex adaptive system is more similar to Richard Dawkins's idea of replicators.[24][25][26] What distinguishes a complex adaptive system (CAS) from a pure multi-agent system (MAS) is the focus on top-level properties and features like self-similarity, complexity, emergence and self-organization. Theorists define an MAS as a system composed of multiple interacting agents; whereas in CAS, the agents as well as the system are adaptive and the system is self-similar. A CAS is a complex, self-similar collectivity of interacting, adaptive agents. Complex adaptive systems feature a high degree of adaptive capacity, giving them resilience in the face of perturbation. Other important properties include adaptation (or homeostasis), communication, cooperation, specialization, spatial and temporal organization, and reproduction. Such properties can manifest themselves on all levels: cells specialize, adapt and reproduce themselves just like larger organisms do. Communication and cooperation take place on all levels, from the agent- to the system-level. In some cases the forces driving co-operation between agents in such a system can be analyzed using game theory. Some of the most important characteristics of complex adaptive systems have been catalogued in the literature.[27] Robert Axelrod & Michael D. Cohen identify a series of key terms from a modeling perspective.[28] Turner and Baker synthesized the characteristics of complex adaptive systems from the literature and tested these characteristics in the context of creativity and innovation.[29] Each of these eight characteristics had been shown to be present in creativity and innovation processes. The organization of a complex adaptive system relies on the use of internal models, mental models or schemas guiding the behaviors of the system. We can distinguish three levels of adaptation of a system. CAS are occasionally modeled by means of agent-based models and complex network-based models.[35] Agent-based models are developed using various methods and tools, primarily by first identifying the different agents inside the model.[36] Another method of developing models for CAS involves developing complex network models by using interaction data of various CAS components.[37] In 2013 SpringerOpen/BioMed Central launched an online open-access journal on the topic of complex adaptive systems modeling (CASM). Publication of the journal ceased in 2020.[38] Living organisms are complex adaptive systems. Although complexity is hard to quantify in biology, evolution has produced some remarkably complex organisms.[39] This observation has led to the common misconception of evolution being progressive and leading towards what are viewed as "higher organisms".[40] If this were generally true, evolution would possess an active trend towards complexity.
In such an active process, the value of the most common level of complexity (the mode) would increase over time.[41] Indeed, some artificial life simulations have suggested that the generation of CAS is an inescapable feature of evolution.[42][43] However, the idea of a general trend towards complexity in evolution can also be explained through a passive process.[41] This involves an increase in variance, while the most common value, the mode, does not change. Thus, the maximum level of complexity increases over time, but only as an indirect product of there being more organisms in total. This type of random process is also called a bounded random walk. In this hypothesis, the apparent trend towards more complex organisms is an illusion resulting from concentrating on the small number of large, very complex organisms that inhabit the right-hand tail of the complexity distribution and ignoring simpler and much more common organisms. This passive model emphasizes that the overwhelming majority of species are microscopic prokaryotes,[44] which comprise about half the world's biomass[45] and constitute the vast majority of Earth's biodiversity.[46] Therefore, simple life remains dominant on Earth, and complex life appears more diverse only because of sampling bias. If there is no overall trend towards complexity in biology, this would not preclude the existence of forces driving systems towards complexity in a subset of cases. These minor trends would be balanced by other evolutionary pressures that drive systems towards less complex states.
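The passive explanation is easy to reproduce in a toy simulation. The sketch below is our own illustration, not taken from the cited studies: each lineage's "complexity" performs an unbiased random walk with a reflecting lower bound, and the mode of the distribution stays near the bound while the maximum drifts upward.

```python
import random

# Passive-trend sketch: every lineage's "complexity" performs a random walk
# with a reflecting lower bound at 1 (a lineage cannot get simpler than a
# minimal organism). No walk is biased upward, yet the maximum grows.
def simulate(n_lineages=10_000, steps=500, seed=0):
    rng = random.Random(seed)
    complexity = [1] * n_lineages
    for _ in range(steps):
        for i in range(n_lineages):
            complexity[i] = max(1, complexity[i] + rng.choice((-1, 1)))
    return complexity

c = simulate()
mode = max(set(c), key=c.count)
print("modal complexity:", mode, "  maximum complexity:", max(c))
# Typical outcome: the mode stays near the lower bound while the maximum
# keeps increasing over time, an apparent trend with no driving force.
```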
https://en.wikipedia.org/wiki/Complex_adaptive_system
Dual phase evolution (DPE) is a process that drives self-organization within complex adaptive systems.[1] It arises in response to phase changes within the network of connections formed by a system's components. DPE occurs in a wide range of physical, biological and social systems. Its applications to technology include methods for manufacturing novel materials and algorithms to solve complex problems in computation. Dual phase evolution (DPE) is a process that promotes the emergence of large-scale order in complex systems. It occurs when a system repeatedly switches between various kinds of phases, and in each phase different processes act on the components or connections in the system. DPE arises because of a property of graphs and networks: the connectivity avalanche that occurs in graphs as the number of edges increases.[2] Social networks provide a familiar example. In a social network the nodes of the network are people and the network connections (edges) are relationships or interactions between people. For any individual, social activity alternates between a local phase, in which they interact only with people they already know, and a global phase in which they can interact with a wide pool of people not previously known to them. Historically, these phases have been forced on people by constraints of time and space. People spend most of their time in a local phase and interact only with those immediately around them (family, neighbors, colleagues). However, intermittent activities such as parties, holidays, and conferences involve a shift into a global phase where they can interact with different people they do not know. Different processes dominate each phase. Essentially, people make new social links when in the global phase, and refine or break them (by ceasing contact) while in the local phase. The following features are necessary for DPE to occur.[1] DPE occurs where a system has an underlying network. That is, the system's components form a set of nodes and there are connections (edges) that join them. For example, a family tree is a network in which the nodes are people (with names) and the edges are relationships such as "mother of" or "married to". The nodes in the network can take physical form, such as atoms held together by atomic forces, or they may be dynamic states or conditions, such as positions on a chess board with moves by the players defining the edges. In mathematical terms (graph theory), a graph $G = \langle N, E \rangle$ is a set of nodes $N$ and a set of edges $E \subset \{(x, y) \mid x, y \in N\}$. Each edge $(x, y)$ provides a link between a pair of nodes $x$ and $y$. A network is a graph in which values are assigned to the nodes and/or edges. Graphs and networks have two phases: disconnected (fragmented) and connected. In the connected phase every node is connected by an edge to at least one other node and, for any pair of nodes, there is at least one path (sequence of edges) joining them. The Erdős–Rényi model shows that random graphs undergo a connectivity avalanche as the density of edges in a graph increases.[2] This avalanche amounts to a sudden phase change in the size of the largest connected subgraph. In effect, a graph has two phases: connected (most nodes are linked by pathways of interaction) and fragmented (nodes are either isolated or form small subgraphs).
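The connectivity avalanche is easy to observe numerically. The following sketch (a minimal illustration using networkx; the graph size and degree values are our own choices) grows Erdős–Rényi graphs while increasing the mean degree and reports the fraction of nodes in the largest connected subgraph:

```python
import networkx as nx

# As the mean degree c crosses 1, the largest connected component of an
# Erdos-Renyi graph jumps from a vanishing fraction to a giant component.
n = 2000
for c in (0.5, 0.9, 1.0, 1.1, 1.5, 2.0):
    G = nx.gnp_random_graph(n, c / n, seed=42)
    giant = max(nx.connected_components(G), key=len)
    print(f"mean degree {c:.1f}: largest component holds {len(giant)/n:.1%} of nodes")
```

Below the threshold the graph sits in the fragmented phase; above it, almost all nodes merge into a single connected phase.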
These are often referred to as global and local phases, respectively. An essential feature of DPE is that the system undergoes repeated shifts between the two phases. In many cases, one phase is the system's normal state and it remains in that phase until shocked into the alternate phase by a disturbance, which may be external in origin. In each of the two phases, the network is dominated by different processes.[1] In a local phase, the nodes behave as individuals; in the global phase, nodes are affected by interactions with other nodes. Most commonly the two processes at work can be interpreted as variation and selection. Variation refers to new features, which typically appear in one of the two phases. These features may be new nodes, new edges, or new properties of the nodes or edges. Selection here refers to ways in which the features are modified, refined, selected or removed. A simple example would be new edges being added at random in the global phase and edges being selectively removed in the local phase. The effects of changes in one phase carry over into the other phase. This means that the processes acting in each phase can modify or refine patterns formed in the other phase. For instance, in a social network, if a person makes new acquaintances during a global phase, then some of these new social connections might survive into the local phase to become long-term friends. In this way, DPE can create effects that may be impossible if both processes act at the same time. DPE has been found to occur in many natural and artificial systems.[3] DPE is capable of producing social networks with known topologies, notably small-world networks and scale-free networks.[3] Small-world networks, which are common in traditional societies, are a natural consequence of alternating local and global phases: new, long-distance links are formed during the global phase and existing links are reinforced (or removed) during the local phase. The advent of social media has decreased the constraining influence that space used to impose on social communication, so time has become the chief constraint for many people. The alternation between local and global phases in social networks occurs in many different guises. Some transitions between phases occur regularly, such as the daily cycle of people moving between home and work. This alternation can influence shifts in public opinion.[4] In the absence of social interaction, the uptake of an opinion promoted by media is a Markov process. The effect of social interaction under DPE is to retard the initial uptake until the number converted reaches a critical point, after which uptake accelerates rapidly. DPE models of socio-economics interpret the economy as networks of economic agents.[5] Several studies have examined the way socioeconomics evolve when DPE acts on different parts of the network. One model[6] interpreted society as a network of occupations with inhabitants matched to those occupations. In this model social dynamics become a process of DPE within the network, with regular transitions between a development phase, during which the network settles into an equilibrium state, and a mutating phase, during which the network is transformed in random ways by the creation of new occupations. Another model[7] interpreted growth and decline in socioeconomic activity as a conflict between cooperators and defectors. The cooperators form networks that lead to prosperity.
However, the network is unstable and invasions by defectors intermittently fragment the network, reducing prosperity, until invasions of new cooperators rebuild networks again. Thus prosperity is seen as a dual phase process of alternating highly prosperous, connected phases and unprosperous, fragmented phases. In a forest, the landscape can be regarded as a network of sites where trees might grow.[8] Some sites are occupied by living trees; other sites are empty. In the local phase, sites free of trees are few and they are surrounded by forest, so the network of free sites is fragmented. In competition for these free sites, local seed sources have a massive advantage, and seeds from distant trees are virtually excluded.[1] Major fires (or other disturbances) clear away large tracts of land, so the network of free sites becomes connected and the landscape enters a global phase. In the global phase, competition for free sites is reduced, so the main competitive advantage is adaptation to the environment. Most of the time a forest is in the local phase, as described above. The net effect is that established tree populations largely exclude invading species.[9] Even if a few isolated trees do find free ground, their population is prevented from expanding by established populations, even if the invaders are better adapted to the local environment. A fire in such conditions leads to an explosion of the invading population, and possibly to a sudden change in the character of the entire forest. This dual phase process in the landscape explains the consistent appearance of pollen zones in the postglacial forest history of North America and Europe, as well as the suppression of widespread taxa, such as beech and hemlock, followed by huge population explosions. Similar patterns, pollen zones truncated by fire-induced boundaries, have been recorded in most parts of the world. Dual phases also occur in the course of foraging by animals. Many species (e.g. ants) exhibit two modes of foraging: exploration and exploitation.[10] In ant colonies, for instance, individuals search at random (exploration) until food is found. They lay pheromone trails to the source. Other ants follow these trails, switching their behaviour from searching to gathering (exploitation). Dual phase evolution is also the name of a family of search algorithms that exploit phase changes in the search space to mediate between local and global search. In this way they control the way algorithms explore a search space, so they can be regarded as a family of metaheuristic methods. Problems such as optimization can typically be interpreted as finding the tallest peak (optimum) within a search space of possibilities. The task can be approached in two ways: local search (e.g. hill climbing) involves tracing a path from point to point, always moving "uphill"; global search involves sampling at wide-ranging points in the search space to find high points. Many search algorithms involve a transition between phases of global search and local search.[3] A simple example is the Great Deluge algorithm, in which the searcher can move at random across the landscape, but cannot enter low-lying areas that are flooded. At first the searcher can wander freely, but rising water levels eventually confine the search to a local area. Many other nature-inspired algorithms adopt similar approaches. Simulated annealing achieves a transition between phases via its cooling schedule. The cellular genetic algorithm places solutions in a pseudo landscape in which they breed only with local neighbours.
Intermittent disasters clear patches, flipping the system into a global phase until the gaps are filled again. Some variations on the memetic algorithm involve alternating between selection at different levels. These are related to the Baldwin effect, which arises when processes acting on phenotypes (e.g. learning) influence selection at the level of genotypes. In this sense, the Baldwin effect alternates between global search (genotypes) and local search (phenotypes). Dual phase evolution is related to the well-known phenomenon of self-organized criticality (SOC). Both concern processes in which critical phase changes promote adaptation and organization within a system. However, SOC differs from DPE in several fundamental ways.[1] Under SOC, a system's natural condition is to be in a critical state; in DPE a system's natural condition is a non-critical state. In SOC the size of disturbances follows a power law; in DPE disturbances are not necessarily distributed the same way. In SOC a system is not necessarily subject to other processes; in DPE different processes (e.g. selection and variation) operate in the two phases.
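To make the alternation of phases concrete, here is a toy rendering of the social-network example described earlier. This is our own sketch; the phase lengths and the retention rule are illustrative assumptions, not taken from the DPE literature. The global phase creates random long-range ties (variation); the local phase retains only ties embedded in a shared neighbourhood (selection):

```python
import random
import networkx as nx

# Toy dual phase evolution on a social network.
def dpe(n=200, cycles=60, new_links=40, drop_prob=0.8, seed=1):
    rng = random.Random(seed)
    G = nx.empty_graph(n)
    for _ in range(cycles):
        # global phase (variation): meet strangers at random
        fresh = []
        for _ in range(new_links):
            u, v = rng.sample(range(n), 2)
            if not G.has_edge(u, v):
                G.add_edge(u, v)
                fresh.append((u, v))
        # local phase (selection): ties without a common neighbour mostly decay
        for u, v in fresh:
            common = (set(G[u]) & set(G[v])) - {u, v}
            if not common and rng.random() < drop_prob:
                G.remove_edge(u, v)
    return G

G = dpe()
print(G.number_of_edges(), "edges, clustering:", round(nx.average_clustering(G), 3))
```

Ties that close triangles survive the local phase, so clustered, small-world-like structure accumulates over repeated cycles.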
https://en.wikipedia.org/wiki/Dual-phase_evolution
The study of interdependent networks is a subfield of network science dealing with phenomena caused by the interactions between complex networks. Though there may be a wide variety of interactions between networks, dependency focuses on the scenario in which the nodes in one network require support from nodes in another network.[1] In nature, networks rarely appear in isolation. They are typically elements in larger systems and can have non-trivial effects on one another. For example, infrastructure networks exhibit interdependency to a large degree. The power stations which form the nodes of the power grid require fuel delivered via a network of roads or pipes and are also controlled via the nodes of the communications network. Though the transportation network does not depend on the power network to function, the communications network does. Thus the deactivation of a critical number of nodes in either the power network or the communication network can lead to a series of cascading failures across the system with potentially catastrophic repercussions. If the two networks were treated in isolation, this important feedback effect would not be seen and predictions of network robustness would be greatly overestimated. Links in a standard network represent connectivity, providing information about how one node can be reached from another. Dependency links represent a need for support from one node to another. This relationship is often, though not necessarily, mutual and thus the links can be directed or undirected. Crucially, a node loses its ability to function as soon as the node it is dependent on ceases to function, while it may not be so severely affected by losing a node it is connected to. In statistical physics, phase transitions can only appear in many-particle systems. Though phase transitions are well known in network science, in single networks they are second order only. With the introduction of internetwork dependency, first-order transitions emerge. This is a new phenomenon and one with profound implications for systems engineering. Where system dissolution takes place after steady (if steep) degradation for second-order transitions, the existence of a first-order transition implies that the system can go from a relatively healthy state to complete collapse with no advance warning.
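The first-order character of the transition can be glimpsed with a small simulation. The sketch below is our own simplified toy version of the standard mutual-percolation setup (sizes, degrees and attack fractions are arbitrary choices): two random networks are coupled one-to-one by dependency links, and the cascade iterates until it stops.

```python
import random
import networkx as nx

# Mutual percolation on two interdependent networks: node i in A depends on
# node i in B and vice versa, and a node also fails when it falls out of the
# giant component of its own network.
def surviving_fraction(n=2000, k=4.0, attack_frac=0.4, seed=0):
    rng = random.Random(seed)
    A = nx.gnp_random_graph(n, k / n, seed=1)
    B = nx.gnp_random_graph(n, k / n, seed=2)
    alive = set(range(n)) - set(rng.sample(range(n), int(attack_frac * n)))

    def giant(G):
        comps = list(nx.connected_components(G.subgraph(alive)))
        return max(comps, key=len) if comps else set()

    while True:
        new_alive = giant(A) & giant(B)   # a node must function in both networks
        if new_alive == alive:
            return len(alive) / n
        alive = new_alive

for f in (0.3, 0.4, 0.5, 0.6):
    print(f"attack {f:.0%}: surviving fraction {surviving_fraction(attack_frac=f):.3f}")
```

Near a critical attack size the surviving fraction should collapse abruptly rather than shrink smoothly, which is the signature of the first-order transition described above.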
https://en.wikipedia.org/wiki/Interdependent_networks
In network theory, multidimensional networks, a special type of multilayer network, are networks with multiple kinds of relations.[1][2][3][4][5][6][7] Increasingly sophisticated attempts to model real-world systems as multidimensional networks have yielded valuable insight in the fields of social network analysis,[3][4][8][9][10][11][12] economics, urban and international transport,[13][14][15] ecology,[16][17][18][19] psychology,[20][21] medicine, biology,[22] commerce, climatology, physics,[23] computational neuroscience,[24][25][26][27] operations management, and finance. The rapid exploration of complex networks in recent years has been dogged by a lack of standardized naming conventions, as various groups use overlapping and contradictory[28][29] terminology to describe specific network configurations (e.g., multiplex, multilayer, multilevel, multidimensional, multirelational, interconnected). To fully leverage the dataset information on the directional nature of the communications, some authors consider only directed networks without any labels on vertices, and introduce the definition of edge-labeled multigraphs, which can cover many multidimensional situations.[30] The term "fully multidimensional" has also been used to refer to a multipartite edge-labeled multigraph.[31] Multidimensional networks have also recently been reframed as specific instances of multilayer networks.[1][5][6][32] In this case, there are as many layers as there are dimensions, and the links between nodes within each layer are simply all the links for a given dimension. In elementary network theory, a network is represented by a graph $G = (V, E)$ in which $V$ is the set of nodes and $E$ the links between nodes, typically represented as a tuple of nodes $u, v \in V$. While this basic formalization is useful for analyzing many systems, real-world networks often have added complexity in the form of multiple types of relations between system elements. An early formalization of this idea came through its application in the field of social network analysis (see, e.g.,[33] and papers on relational algebras in social networks), in which multiple forms of social connection between people were represented by multiple types of links.[34] To accommodate the presence of more than one type of link, a multidimensional network is represented by a triple $G = (V, E, D)$, where $D$ is a set of dimensions (or layers), each member of which is a different type of link, and $E$ consists of triples $(u, v, d)$ with $u, v \in V$ and $d \in D$.[6] Note that as in all directed graphs, the links $(u, v, d)$ and $(v, u, d)$ are distinct. By convention, the number of links between two nodes in a given dimension is either 0 or 1 in a multidimensional network. However, the total number of links between two nodes across all dimensions is less than or equal to $|D|$. In the case of a weighted network, this triplet is expanded to a quadruplet $e = (u, v, d, w)$, where $w$ is the weight on the link between $u$ and $v$ in the dimension $d$. Further, as is often useful in social network analysis, link weights may take on positive or negative values.
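In code, the triple $(V, E, D)$ maps naturally onto an edge-labeled multigraph. A minimal sketch (our own, using networkx; the dimension names are invented for illustration):

```python
import networkx as nx

# A multidimensional network as an edge-labeled multigraph: each link carries
# a dimension label d and, optionally, a (possibly negative) weight w.
G = nx.MultiDiGraph()
G.add_edge("u", "v", d="friendship", w=1.0)
G.add_edge("u", "v", d="coworker", w=0.5)     # same pair, second dimension
G.add_edge("v", "u", d="friendship", w=-1.0)  # negative weight: enmity

def degree_in_dim(G, node, dim):
    # out-degree of `node` restricted to one dimension
    return sum(1 for _, _, a in G.out_edges(node, data=True) if a["d"] == dim)

def overlapping_degree(G, node):
    # sum of the node's out-degrees over all dimensions
    return G.out_degree(node)

print(degree_in_dim(G, "u", "friendship"), overlapping_degree(G, "u"))
```

Here the sign of a relation simply rides along as a weight on the link.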
Such signed networks can better reflect relations like amity and enmity in social networks.[31] Alternatively, link signs may be figured as dimensions themselves,[35] e.g. $G = (V, E, D)$ where $D = \{-1, 0, 1\}$ and $E = \{(u, v, d) \mid u, v \in V, d \in D\}$. This approach has particular value when considering unweighted networks. This conception of dimensionality can be expanded should attributes in multiple dimensions need specification. In this instance, links are $n$-tuples $e = (u, v, d_1 \dots d_{n-2})$. Such an expanded formulation, in which links may exist within multiple dimensions, is uncommon but has been used in the study of multidimensional time-varying networks.[36] Whereas unidimensional networks have two-dimensional adjacency matrices of size $V \times V$, in a multidimensional network with $D$ dimensions the adjacency matrix becomes a multilayer adjacency tensor, a four-dimensional matrix of size $(V \times D) \times (V \times D)$.[3] By using index notation, adjacency matrices can be indicated by $A_j^i$, to encode connections between nodes $i$ and $j$, whereas multilayer adjacency tensors are indicated by $M_{j\beta}^{i\alpha}$, to encode connections between node $i$ in layer $\alpha$ and node $j$ in layer $\beta$. As in unidimensional matrices, directed links, signed links, and weights are all easily accommodated by this framework. In the case of multiplex networks, which are special types of multilayer networks where nodes cannot be interconnected with other nodes in other layers, a three-dimensional matrix of size $(V \times V) \times D$ with entries $A_{ij}^{\alpha}$ is enough to represent the structure of the system[8][37] by encoding connections between nodes $i$ and $j$ in layer $\alpha$. In a multidimensional network, the neighbors of some node $v$ are all nodes connected to $v$ across dimensions. A path between two nodes in a multidimensional network can be represented by a vector $\mathbf{r} = (r_1, \dots, r_{|D|})$ in which the $i$th entry in $\mathbf{r}$ is the number of links traversed in the $i$th dimension of $G$.[38] As with overlapping degree, the sum of these elements can be taken as a rough measure of the path length between two nodes. The existence of multiple layers (or dimensions) allows one to introduce the new concept of network of layers,[3] peculiar to multilayer networks. In fact, layers might be interconnected in such a way that their structure can be described by a network. The network of layers is usually weighted (and might be directed), although, in general, the weights depend on the application of interest. A simple approach is, for each pair of layers, to sum all of the weights in the connections between their nodes to obtain edge weights that can be encoded into a matrix $q_{\alpha\beta}$.
The rank-2 adjacency tensor representing the underlying network of layers in the space $\mathbb{R}^{L \times L}$ is given by

$$\Psi_{\delta}^{\gamma} = \sum_{\alpha,\beta=1}^{L} q_{\alpha\beta} \, E_{\delta}^{\gamma}(\alpha\beta)$$

where $E_{\delta}^{\gamma}(\alpha\beta)$ is the canonical matrix with all components equal to zero except for the entry corresponding to row $\alpha$ and column $\beta$, which is equal to one. Using the tensorial notation, it is possible to obtain the (weighted) network of layers from the multilayer adjacency tensor as $\Psi_{\delta}^{\gamma} = M_{j\delta}^{i\gamma} U_i^j$.[3] In a non-interconnected multidimensional network, where interlayer links are absent, the degree of a node is represented by a vector of length $|D|$: $\mathbf{k} = (k_i^1, \dots, k_i^{|D|})$. Here $|D|$ is an alternative way to denote the number of layers $L$ in multilayer networks. However, for some computations it may be more useful to simply sum the number of links adjacent to a node across all dimensions.[3][39] This is the overlapping degree:[4] $\sum_{\alpha=1}^{|D|} k_i^{\alpha}$. As with unidimensional networks, a distinction may similarly be drawn between incoming links and outgoing links. If interlayer links are present, the above definition must be adapted to account for them, and the multilayer degree is given by

$$k^i = M_{j\beta}^{i\alpha} U_{\alpha}^{\beta} u^j = \sum_{\alpha,\beta=1}^{L} \sum_{j=1}^{N} M_{j\beta}^{i\alpha}$$

where the tensors $U_{\alpha}^{\beta}$ and $u^j$ have all components equal to 1. The heterogeneity in the number of connections of a node across the different layers can be taken into account through the participation coefficient.[4] When extended to interconnected multilayer networks, i.e. those systems where nodes are connected across layers, the concept of centrality is better understood in terms of versatility.[10] Nodes that are not central in each layer might be the most important for multilayer systems in certain scenarios. For instance, this is the case when two layers encode different networks with only one node in common: it is very likely that such a node will have the highest centrality score because it is responsible for the information flow across layers. As for unidimensional networks, eigenvector versatility can be defined as the solution of the eigenvalue problem given by $M_{j\beta}^{i\alpha} \Theta_{i\alpha} = \lambda_1 \Theta_{j\beta}$, where the Einstein summation convention is used for the sake of simplicity. Here, $\Theta_{j\beta} = \lambda_1^{-1} M_{j\beta}^{i\alpha} \Theta_{i\alpha}$ gives the multilayer generalization of Bonacich's eigenvector centrality per node per layer.
The overall eigenvector versatility is simply obtained by summing up the scores across layers as $\theta_i = \Theta_{i\alpha} u^{\alpha}$.[3][10] As with its unidimensional counterpart, the Katz versatility is obtained as the solution $\Phi_{j\beta} = [(\delta - aM)^{-1}]_{j\beta}^{i\alpha} U_{i\alpha}$ of the tensorial equation $\Phi_{j\beta} = a M_{j\beta}^{i\alpha} \Phi_{i\alpha} + b u_{j\beta}$, where $\delta_{j\beta}^{i\alpha} = \delta_j^i \delta_{\beta}^{\alpha}$, $a$ is a constant smaller than the largest eigenvalue, and $b$ is another constant, generally equal to 1. The overall Katz versatility is simply obtained by summing up the scores across layers as $\phi_i = \Phi_{i\alpha} u^{\alpha}$.[10] For unidimensional networks, the HITS algorithm was originally introduced by Jon Kleinberg to rate web pages. The basic assumption of the algorithm is that relevant pages, named authorities, are pointed to by special web pages, named hubs. This mechanism can be mathematically described by two coupled equations which reduce to two eigenvalue problems. When the network is undirected, authority and hub centrality are equivalent to eigenvector centrality. These properties are preserved by the natural extension of the equations proposed by Kleinberg to the case of interconnected multilayer networks, given by $(MM^t)_{j\beta}^{i\alpha} \Gamma_{i\alpha} = \lambda_1 \Gamma_{j\beta}$ and $(M^t M)_{j\beta}^{i\alpha} \Upsilon_{i\alpha} = \lambda_1 \Upsilon_{j\beta}$, where $t$ indicates the transpose operator, and $\Gamma_{i\alpha}$ and $\Upsilon_{i\alpha}$ indicate hub and authority centrality, respectively. By contracting the hub and authority tensors, one obtains the overall versatilities as $\gamma_i = \Gamma_{i\alpha} u^{\alpha}$ and $\upsilon_i = \Upsilon_{i\alpha} u^{\alpha}$, respectively.[10] PageRank, originally introduced to rank web pages, can also be considered as a measure of centrality for interconnected multilayer networks. It is worth remarking that PageRank can be seen as the steady-state solution of a special Markov process on top of the network. Random walkers explore the network according to a special transition matrix and their dynamics are governed by a random walk master equation. It is easy to show that the solution of this equation is equivalent to the leading eigenvector of the transition matrix. Random walks have been defined also in the case of interconnected multilayer networks[15] and edge-colored multigraphs (also known as multiplex networks).[40] For interconnected multilayer networks, the transition tensor governing the dynamics of the random walkers within and across layers is given by $R_{j\beta}^{i\alpha} = r T_{j\beta}^{i\alpha} + \frac{1-r}{NL} u_{j\beta}^{i\alpha}$, where $r$ is a constant, generally set to 0.85, $N$ is the number of nodes, and $L$ is the number of layers or dimensions. Here, $R_{j\beta}^{i\alpha}$ might be named the Google tensor and $u_{j\beta}^{i\alpha}$ is the rank-4 tensor with all components equal to 1.
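As a concrete illustration, the steady state of this walk can be found by simple power iteration on the Google tensor. The sketch below is our own (the transition tensor is a randomly generated placeholder); it represents the rank-4 tensors as numpy arrays of shape $(N, L, N, L)$:

```python
import numpy as np

# Multilayer PageRank by power iteration on the Google tensor
# R[i, a, j, b]: probability of moving from (node i, layer a) to (node j, layer b).
def pagerank_versatility(T, r=0.85, iters=200):
    N, L = T.shape[0], T.shape[1]
    R = r * T + (1 - r) / (N * L)              # Google tensor with teleportation
    p = np.full((N, L), 1.0 / (N * L))         # uniform starting occupation
    for _ in range(iters):
        p = np.einsum("iajb,ia->jb", R, p)     # one step of the master equation
    return p.sum(axis=1)                       # contract over layers: versatility

rng = np.random.default_rng(1)
T = rng.random((3, 2, 3, 2))                   # toy tensor: 3 nodes, 2 layers
T /= T.sum(axis=(2, 3), keepdims=True)         # outgoing probabilities sum to 1
print(pagerank_versatility(T))
```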
Like its unidimensional counterpart, PageRank versatility consists of two contributions: one encoding a classical random walk with rate $r$ and one encoding teleportation across nodes and layers with rate $1-r$. If we indicate by $\Omega_{i\alpha}$ the eigentensor of the Google tensor $R_{j\beta}^{i\alpha}$, denoting the steady-state probability of finding the walker in node $i$ and layer $\alpha$, the multilayer PageRank is obtained by summing the eigentensor over layers: $\omega_i = \Omega_{i\alpha} u^{\alpha}$.[10] Like many other network statistics, the meaning of a clustering coefficient becomes ambiguous in multidimensional networks, due to the fact that triples may be closed in different dimensions than the one they originated in.[4][41][42] Several attempts have been made to define local clustering coefficients, but these attempts have highlighted the fact that the concept must be fundamentally different in higher dimensions: some groups have based their work on non-standard definitions,[42] while others have experimented with different definitions of random walks and 3-cycles in multidimensional networks.[4][41] While cross-dimensional structures have been studied previously,[43][44] they fail to detect more subtle associations found in some networks. Adopting a slightly different definition of "community" in the case of multidimensional networks allows for reliable identification of communities without the requirement that nodes be in direct contact with each other.[3][8][9][45] For instance, two people who never communicate directly yet still browse many of the same websites would be viable candidates for this sort of algorithm. A generalization of the well-known modularity maximization method for community discovery was originally proposed by Mucha et al.[8] This multiresolution method assumes a three-dimensional tensor representation of the network connectivity within layers, as for edge-colored multigraphs, and a three-dimensional tensor representation of the network connectivity across layers. It depends on the resolution parameter $\gamma$ and the weight $\omega$ of interlayer connections. In a more compact, tensorial notation, modularity can be written as $Q \propto S_{i\alpha}^a B_{j\beta}^{i\alpha} S_a^{j\beta}$, where $B_{j\beta}^{i\alpha} = M_{j\beta}^{i\alpha} - P_{j\beta}^{i\alpha}$, $M_{j\beta}^{i\alpha}$ is the multilayer adjacency tensor, $P_{j\beta}^{i\alpha}$ is the tensor encoding the null model, and the components of $S_a^{i\alpha}$ are defined to be 1 when node $i$ in layer $\alpha$ belongs to the particular community labeled by index $a$, and 0 when it does not.[3] Non-negative matrix factorization has been proposed to extract the community-activity structure of temporal networks.[46] The multilayer network is represented by a three-dimensional tensor $T_{ij}^{\tau}$, like an edge-colored multigraph, where the order of layers encodes the arrow of time. Tensor factorization by means of Kruskal decomposition is thus applied to $T_{ij}^{\tau}$ to assign each node to a community across time.
Methods based on statistical inference, generalizing existing approaches introduced for unidimensional networks, have been proposed. The stochastic block model is the most used generative model, appropriately generalized to the case of multilayer networks.[47][48] As for unidimensional networks, principled methods like minimum description length can be used for model selection in community detection methods based on information flow.[9] Given the higher complexity of multilayer networks with respect to unidimensional networks, an active field of research is devoted to simplifying the structure of such systems by employing some kind of dimensionality reduction.[22][49] A popular method is based on the calculation of the quantum Jensen–Shannon divergence between all pairs of layers, which is then exploited for its metric properties to build a distance matrix and hierarchically cluster the layers. Layers are successively aggregated according to the resulting hierarchical tree, and the aggregation procedure is stopped when the objective function, based on the entropy of the network, reaches a global maximum. This greedy approach is necessary because the underlying problem would require verifying all possible layer groups of any size, a huge number of combinations (given by the Bell number, which scales super-exponentially with the number of units). Nevertheless, for multilayer systems with a small number of layers, it has been shown that the method performs optimally in the majority of cases.[22] The question of degree correlations in unidimensional networks is fairly straightforward: do nodes of similar degree tend to connect to each other? In multidimensional networks, what this question means becomes less clear. When we refer to a node's degree, are we referring to its degree in one dimension, or collapsed over all? When we seek to probe connectivity between nodes, are we comparing the same nodes across dimensions, or different nodes within dimensions, or a combination?[6] What are the consequences of variations in each of these statistics on other network properties? In one study, assortativity was found to decrease robustness in a duplex network.[50] Given two multidimensional paths, $\mathbf{r}$ and $\mathbf{s}$, we say that $\mathbf{r}$ dominates $\mathbf{s}$ if and only if $r_d \leq s_d$ for all $d \in \{1, \dots, |D|\}$ and $r_d < s_d$ for at least one $d$.[38] Among other network statistics, many centrality measures rely on the ability to assess shortest paths from node to node. Extending these analyses to a multidimensional network requires incorporating additional connections between nodes into the algorithms currently used (e.g., Dijkstra's). Current approaches include collapsing multi-link connections between nodes in a preprocessing step before performing variations on a breadth-first search of the network.[28] One way to assess the distance between two nodes in a multidimensional network is by comparing all the multidimensional paths between them and choosing the subset that we define as shortest via path dominance: let $MP(u,v)$ be the set of all paths between $u$ and $v$. Then the distance between $u$ and $v$ is a set of paths $P \subseteq MP$ such that for every $p \in P$ there is no $p' \in MP$ that dominates $p$.
The length of the elements in the set of shortest paths between two nodes is therefore defined as the multidimensional distance.[38] In a multidimensional network $G = (V, E, D)$, the relevance of a given dimension (or set of dimensions) $D'$ for one node can be assessed by the ratio $\frac{\mathrm{Neighbors}(v, D')}{\mathrm{Neighbors}(v, D)}$.[39] In a multidimensional network in which different dimensions of connection have different real-world values, statistics characterizing the distribution of links to the various classes are of interest. Thus it is useful to consider two metrics that assess this: dimension connectivity and edge-exclusive dimension connectivity. The former is simply the ratio of the total number of links in a given dimension to the total number of links in every dimension: $\frac{|\{(u,v,d) \in E \mid u,v \in V\}|}{|E|}$. The latter assesses, for a given dimension, the fraction of pairs of nodes connected only by a link in that dimension: $\frac{|\{(u,v,d) \in E \mid u,v \in V \wedge \forall j \in D, j \neq d : (u,v,j) \notin E\}|}{|\{(u,v,d) \in E \mid u,v \in V\}|}$.[39] Burstiness is a well-known phenomenon in many real-world networks, e.g. email or other human communication networks. Additional dimensions of communication provide a more faithful representation of reality and may highlight these patterns or diminish them. Therefore, it is of critical importance that methods for detecting bursty behavior in networks accommodate multidimensional networks.[51] Diffusion processes are widely used in physics to explore physical systems, as well as in other disciplines such as the social sciences, neuroscience, urban and international transportation, and finance. Recently, simple and more complex diffusive processes have been generalized to multilayer networks.[23][52] One result common to many studies is that diffusion in multiplex networks, a special type of multilayer system, exhibits two regimes: 1) the weight of the inter-layer links connecting layers to each other is not high enough, and the multiplex system behaves like two (or more) uncoupled networks; 2) the weight of the inter-layer links is high enough that the layers are coupled to each other, giving rise to unexpected physical phenomena.[23] It has been shown that there is an abrupt transition between these two regimes.[53] In fact, all network descriptors depending on some diffusive process, from centrality measures to community detection, are affected by the layer–layer coupling. For instance, in the case of community detection, low coupling (where information from each layer separately is more relevant than the overall structure) favors clusters within layers, whereas high coupling (where information from all layers simultaneously is more relevant than each layer separately) favors cross-layer clusters.[8][9] As for unidimensional networks, it is possible to define random walks on top of multilayer systems. However, given the underlying multilayer structure, random walkers are not limited to moving from one node to another within the same layer (jump), but are also allowed to move across layers (switch).[15] Random walks can be used to explore a multilayer system with the ultimate goal of unraveling its mesoscale organization, i.e.
to partition it into communities,[8][9] and have recently been used to better understand the navigability of multilayer networks and their resilience to random failures,[15] as well as to efficiently explore this type of topology.[54] In the case of interconnected multilayer systems, the probability of moving from a node $i$ in layer $\alpha$ to node $j$ in layer $\beta$ can be encoded into the rank-4 transition tensor $T_{j\beta}^{i\alpha}$, and the discrete-time walk can be described by the master equation

$$p_{j\beta}(t+1) = \sum_{\alpha=1}^{L} \sum_{i=1}^{N} T_{j\beta}^{i\alpha} p_{i\alpha}(t) = \sum_{\alpha=1}^{L} \sum_{i=1}^{N} (T^t)_{j\beta}^{i\alpha} p_{i\alpha}(0)$$

where $p_{i\alpha}(t)$ indicates the probability of finding the walker in node $i$ in layer $\alpha$ at time $t$.[3][15] There are many different types of walks that can be encoded into the transition tensor $T_{j\beta}^{i\alpha}$, depending on how the walkers are allowed to jump and switch. For instance, the walker might either jump or switch in a single time step without distinguishing between inter- and intra-layer links (classical random walk), or it can choose either to stay in the current layer and jump, or to switch layer and then jump to another node in the same time step (physical random walk). More complicated rules, corresponding to specific problems to solve, can be found in the literature.[23] In some cases, it is possible to find, analytically, the stationary solution of the master equation.[15][54] The problem of classical diffusion in complex networks is to understand how a quantity will flow through the system and how much time it will take to reach the stationary state. Classical diffusion in multiplex networks has recently been studied by introducing the concept of the supra-adjacency matrix,[55] later recognized as a special flattening of the multilayer adjacency tensor.[3] In tensorial notation, the diffusion equation on top of a general multilayer system can be written, concisely, as

$$\frac{dX_{j\beta}(t)}{dt} = -L_{j\beta}^{i\alpha} X_{i\alpha}(t)$$

where $X_{i\alpha}(t)$ is the amount of the diffusing quantity at time $t$ in node $i$ in layer $\alpha$. The rank-4 tensor governing the equation is the Laplacian tensor, generalizing the combinatorial Laplacian matrix of unidimensional networks. It is worth remarking that in non-tensorial notation the equation takes a more complicated form. Many of the properties of this diffusion process are completely understood in terms of the second-smallest eigenvalue of the Laplacian tensor. Interestingly, diffusion in a multiplex system can be faster than diffusion in each layer separately, or in their aggregation, provided that certain spectral properties are satisfied.[55] Recently, how information (or diseases) spreads through a multilayer system has been the subject of intense research.[56][1][57][58][59] Several software programs focusing on the analysis and visualization of multilayer networks have been introduced.
Some popular solutions include multinet (C++ / Python / R), MuxViz (R), and Pymnet (Python), with each package typically specializing in different analytical functions.[60] However, most of them currently struggle to process very large multilayer networks, and interoperability between packages also needs improvement.
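Even without dedicated libraries, the supra-adjacency construction mentioned above takes only a few lines of numpy. The sketch below is our own (graph sizes and coupling weights are arbitrary choices): it builds the supra-adjacency matrix of a two-layer multiplex and tracks the algebraic connectivity of its Laplacian as the inter-layer coupling grows, one simple way to see the two diffusion regimes.

```python
import numpy as np

# Supra-adjacency matrix of a two-layer multiplex: block-diagonal intra-layer
# adjacencies A1, A2 plus identity-like inter-layer coupling of weight Dx.
def supra_adjacency(A1, A2, Dx):
    N = A1.shape[0]
    I = np.eye(N)
    return np.block([[A1, Dx * I], [Dx * I, A2]])

def algebraic_connectivity(A):
    L = np.diag(A.sum(axis=1)) - A             # combinatorial Laplacian
    return np.sort(np.linalg.eigvalsh(L))[1]   # second-smallest eigenvalue

def random_adjacency(N, p, seed):
    rng = np.random.default_rng(seed)
    A = np.triu((rng.random((N, N)) < p).astype(float), 1)
    return A + A.T                             # symmetric, no self-loops

A1 = random_adjacency(50, 0.1, seed=1)
A2 = random_adjacency(50, 0.1, seed=2)
for Dx in (0.01, 0.1, 1.0, 10.0):
    print(Dx, round(algebraic_connectivity(supra_adjacency(A1, A2, Dx)), 4))
```

For weak coupling the algebraic connectivity is dominated by the inter-layer links (uncoupled regime); once the coupling is strong enough it saturates and the layers diffuse as one system.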
https://en.wikipedia.org/wiki/Multidimensional_network
In mathematics, random graph is the general term referring to probability distributions over graphs. Random graphs may be described simply by a probability distribution, or by a random process which generates them.[1][2] The theory of random graphs lies at the intersection between graph theory and probability theory. From a mathematical perspective, random graphs are used to answer questions about the properties of typical graphs. Their practical applications are found in all areas in which complex networks need to be modeled; many random graph models are thus known, mirroring the diverse types of complex networks encountered in different areas. In a mathematical context, random graph refers almost exclusively to the Erdős–Rényi random graph model. In other contexts, any graph model may be referred to as a random graph. A random graph is obtained by starting with a set of $n$ isolated vertices and adding successive edges between them at random. The aim of study in this field is to determine at what stage a particular property of the graph is likely to arise.[3] Different random graph models produce different probability distributions on graphs. Most commonly studied is the one proposed by Edgar Gilbert but often called the Erdős–Rényi model, denoted $G(n, p)$. In it, every possible edge occurs independently with probability $0 < p < 1$. The probability of obtaining any one particular random graph with $m$ edges is $p^m (1-p)^{N-m}$, with the notation $N = \binom{n}{2}$.[4] A closely related model, also called the Erdős–Rényi model and denoted $G(n, M)$, assigns equal probability to all graphs with exactly $M$ edges. With $0 \leq M \leq N$, $G(n, M)$ has $\binom{N}{M}$ elements and every element occurs with probability $1 / \binom{N}{M}$.[3] The $G(n, M)$ model can be viewed as a snapshot at a particular time ($M$) of the random graph process $\tilde{G}_n$, a stochastic process that starts with $n$ vertices and no edges, and at each step adds one new edge chosen uniformly from the set of missing edges. If instead we start with an infinite set of vertices, and again let every possible edge occur independently with probability $0 < p < 1$, then we get an object $G$ called an infinite random graph. Except in the trivial cases when $p$ is 0 or 1, such a $G$ almost surely has the following property: given any $n + m$ elements $a_1, \ldots, a_n, b_1, \ldots, b_m \in V$, there is a vertex $c$ in $V$ that is adjacent to each of $a_1, \ldots, a_n$ and is not adjacent to any of $b_1, \ldots, b_m$. It turns out that if the vertex set is countable then there is, up to isomorphism, only a single graph with this property, namely the Rado graph. Thus any countably infinite random graph is almost surely the Rado graph, which for this reason is sometimes called simply the random graph. However, the analogous result is not true for uncountable graphs, of which there are many (nonisomorphic) graphs satisfying the above property. Another model, which generalizes Gilbert's random graph model, is the random dot-product model. A random dot-product graph associates with each vertex a real vector. The probability of an edge $uv$ between any vertices $u$ and $v$ is some function of the dot product $\mathbf{u} \cdot \mathbf{v}$ of their respective vectors. The network probability matrix models random graphs through edge probabilities, which represent the probability $p_{i,j}$ that a given edge $e_{i,j}$ exists for a specified time period.
This model is extensible to directed and undirected, weighted and unweighted, and static or dynamic graph structures. For $M \simeq pN$, where $N$ is the maximal number of edges possible, the two most widely used models, $G(n, M)$ and $G(n, p)$, are almost interchangeable.[5] Random regular graphs form a special case, with properties that may differ from random graphs in general. Once we have a model of random graphs, every function on graphs becomes a random variable. The study of such a model aims to determine whether, or at least estimate the probability that, a property may occur.[4] The term 'almost every' in the context of random graphs refers to a sequence of spaces and probabilities such that the error probabilities tend to zero.[4] The theory of random graphs studies typical properties of random graphs, those that hold with high probability for graphs drawn from a particular distribution. For example, we might ask, for a given value of $n$ and $p$, what the probability is that $G(n, p)$ is connected. In studying such questions, researchers often concentrate on the asymptotic behavior of random graphs: the values that various probabilities converge to as $n$ grows very large. Percolation theory characterizes the connectedness of random graphs, especially infinitely large ones. Percolation is related to the robustness of the graph (also called a network). Given a random graph of $n$ nodes and average degree $\langle k \rangle$, remove a randomly chosen fraction $1 - p$ of the nodes, leaving only a fraction $p$. There exists a critical percolation threshold $p_c = \tfrac{1}{\langle k \rangle}$ below which the network becomes fragmented, while above $p_c$ a giant connected component exists.[1][5][6][7][8] Localized percolation refers to removing a node, its neighbors, next-nearest neighbors, etc., until a fraction $1 - p$ of the nodes in the network is removed. It was shown that for random graphs with a Poisson distribution of degrees, $p_c = \tfrac{1}{\langle k \rangle}$ exactly as for random removal. Random graphs are widely used in the probabilistic method, where one tries to prove the existence of graphs with certain properties. The existence of a property on a random graph can often imply, via the Szemerédi regularity lemma, the existence of that property on almost all graphs. Random regular graphs $G(n, r\text{-reg})$ are the set of $r$-regular graphs with $r = r(n)$ such that $n$ and $m$ are natural numbers, $3 \leq r < n$, and $rn = 2m$ is even.[3] The degree sequence of a graph $G$ in $G^n$ depends only on the number of edges in the sets[3] If the number of edges $M$ in a random graph $G_M$ is large enough to ensure that almost every $G_M$ has minimum degree at least 1, then almost every $G_M$ is connected and, if $n$ is even, almost every $G_M$ has a perfect matching.
In particular, the moment the last isolated vertex vanishes in almost every random graph, the graph becomes connected.[3] Almost every graph process on an even number of vertices with the edge raising the minimum degree to 1, or a random graph with slightly more than $\tfrac{n}{4} \log(n)$ edges and with probability close to 1, ensures that the graph has a complete matching, with the exception of at most one vertex. For some constant $c$, almost every labeled graph with $n$ vertices and at least $cn \log(n)$ edges is Hamiltonian. With probability tending to 1, the particular edge that increases the minimum degree to 2 makes the graph Hamiltonian. Properties of random graphs may change or remain invariant under graph transformations. Mashaghi et al., for example, demonstrated that a transformation which converts random graphs to their edge-dual graphs (or line graphs) produces an ensemble of graphs with nearly the same degree distribution, but with degree correlations and a significantly higher clustering coefficient.[9] Given a random graph $G$ of order $n$ with vertex set $V(G) = \{1, \ldots, n\}$, by the greedy algorithm on the number of colors, the vertices can be colored with colors 1, 2, ... (vertex 1 is colored 1, vertex 2 is colored 1 if it is not adjacent to vertex 1, otherwise it is colored 2, etc.).[3] The number of proper colorings of random graphs given a number of $q$ colors, called its chromatic polynomial, remains unknown so far. The scaling of zeros of the chromatic polynomial of random graphs with parameters $n$ and the number of edges $m$ or the connection probability $p$ has been studied empirically using an algorithm based on symbolic pattern matching.[10] A random tree is a tree or arborescence that is formed by a stochastic process. In a large range of random graphs of order $n$ and size $M(n)$, the distribution of the number of tree components of order $k$ is asymptotically Poisson. Types of random trees include the uniform spanning tree, random minimum spanning tree, random binary tree, treap, rapidly exploring random tree, Brownian tree, and random forest. Consider a given random graph model defined on the probability space $(\Omega, \mathcal{F}, P)$ and let $\mathcal{P}(G) : \Omega \rightarrow \mathbb{R}^m$ be a real-valued function which assigns to each graph in $\Omega$ a vector of $m$ properties. For a fixed $\mathbf{p} \in \mathbb{R}^m$, conditional random graphs are models in which the probability measure $P$ assigns zero probability to all graphs such that $\mathcal{P}(G) \neq \mathbf{p}$. Special cases are conditionally uniform random graphs, where $P$ assigns equal probability to all the graphs having specified properties. They can be seen as a generalization of the Erdős–Rényi model $G(n, M)$, when the conditioning information is not necessarily the number of edges $M$ but whatever other arbitrary graph property $\mathcal{P}(G)$. In this case very few analytical results are available and simulation is required to obtain empirical distributions of average properties.
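The percolation threshold $p_c = 1/\langle k \rangle$ quoted above is easy to check numerically. A quick sketch (ours; the sizes are arbitrary choices) removes nodes from an Erdős–Rényi graph with mean degree 4, so the predicted threshold is at a keeping fraction of 0.25:

```python
import random
import networkx as nx

# Keep each node with probability `keep` and measure the relative size of the
# largest surviving component; theory predicts a threshold at 1/<k> = 0.25.
def giant_fraction(G, keep, seed=0):
    rng = random.Random(seed)
    kept = [v for v in G if rng.random() < keep]
    H = G.subgraph(kept)
    if H.number_of_nodes() == 0:
        return 0.0
    return len(max(nx.connected_components(H), key=len)) / G.number_of_nodes()

n, k = 5000, 4.0
G = nx.gnp_random_graph(n, k / n, seed=7)
for keep in (0.10, 0.20, 0.25, 0.30, 0.50, 0.90):
    print(f"keep {keep:.2f}: giant component fraction {giant_fraction(G, keep):.3f}")
```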
The earliest use of a random graph model was by Helen Hall Jennings and Jacob Moreno in 1938, where a "chance sociogram" (a directed Erdős–Rényi model) was considered in comparing the fraction of reciprocated links in their network data with the random model.[11] Another use, under the name "random net", was by Ray Solomonoff and Anatol Rapoport in 1951, using a model of directed graphs with fixed out-degree and randomly chosen attachments to other vertices.[12] The Erdős–Rényi model of random graphs was first defined by Paul Erdős and Alfréd Rényi in their 1959 paper "On Random Graphs"[8] and independently by Gilbert in his paper "Random graphs".[6]
https://en.wikipedia.org/wiki/Random_graph
Random graph theory of gelation is a mathematical theory for sol–gel processes. The theory is a collection of results that generalise the Flory–Stockmayer theory, and allow identification of the gel point, gel fraction, size distribution of polymers, molar mass distribution and other characteristics for a set of many polymerising monomers carrying arbitrary numbers and types of reactive functional groups. The theory builds upon the notion of the random graph, introduced by mathematicians Paul Erdős and Alfréd Rényi, and independently by Edgar Gilbert in the late 1950s, as well as on the generalisation of this concept known as the random graph with a fixed degree sequence.[1] The theory was originally developed[2] to explain step-growth polymerisation, and adaptations to other types of polymerisation now exist. Along with providing theoretical results, the theory is also constructive. It indicates that the graph-like structures resulting from polymerisation can be sampled with an algorithm using the configuration model, which makes these structures available for further examination with computer experiments. At a given point in time, the degree distribution $u(n)$ is the probability that a randomly chosen monomer has $n$ connected neighbours. The central idea of the random graph theory of gelation is that a cross-linked or branched polymer can be studied separately at two levels: 1) monomer reaction kinetics that predicts $u(n)$, and 2) a random graph with a given degree distribution. The advantage of such a decoupling is that the approach allows one to study the monomer kinetics with relatively simple rate equations, and then deduce the degree distribution serving as input for a random graph model. In several cases the aforementioned rate equations have a known analytical solution. In the case of step-growth polymerisation of monomers carrying functional groups of the same type (so-called $A_1 + A_2 + A_3 + \cdots$ polymerisation), the degree distribution is given by

$$u(n,t) = \sum_{m=n}^{\infty} \binom{m}{n} c(t)^n \big(1 - c(t)\big)^{m-n} f_m,$$

where $c(t) = \frac{\mu t}{1 + \mu t}$ is the bond conversion, $\mu = \sum_{m=1}^{k} m f_m$ is the average functionality, and $f_m$ is the initial fraction of monomers of functionality $m$. In the latter expression, a unit reaction rate is assumed without loss of generality. According to the theory,[3] the system is in the gel state when $c(t) > c_g$, where the gelation conversion is

$$c_g = \frac{\sum_{m=1}^{\infty} m f_m}{\sum_{m=1}^{\infty} (m^2 - m) f_m}.$$

Analytical expressions for the average molecular weight and molar mass distribution are known too.[3] When more complex reaction kinetics are involved, for example chemical substitution, side reactions or degradation, one may still apply the theory by computing $u(n,t)$ using numerical integration.[3] In that case, $\sum_{n=1}^{\infty} (n^2 - 2n) u(n,t) > 0$ signifies that the system is in the gel state at time $t$ (or in the sol state when the inequality sign is flipped). When monomers with two types of functional groups A and B undergo step-growth polymerisation by virtue of a reaction between A and B groups, similar analytical results are known.[4]
In this case, $f_{m,k}$ is the fraction of initial monomers with $m$ groups A and $k$ groups B. Suppose that A is the group that is depleted first. Random graph theory states that gelation takes place when $c(t) > c_g$, where the gelation conversion is

$$c_g = \frac{\nu_{10}}{\nu_{11} + \sqrt{(\nu_{20} - \nu_{10})(\nu_{02} - \nu_{01})}}$$

and $\nu_{i,j} = \sum_{m,k=1}^{\infty} m^i k^j f_{m,k}$. The molecular size distribution, the molecular weight averages, and the distribution of gyration radii have known formal analytical expressions.[5] When the degree distribution $u(n,l,t)$, giving the fraction of monomers in the network with $n$ neighbours connected via A groups and $l$ connected via B groups at time $t$, is solved numerically, the gel state is detected[2] when

$$2\mu\mu_{11} - \mu\mu_{02} - \mu\mu_{20} + \mu_{02}\mu_{20} - \mu_{11}^2 > 0,$$

where $\mu_{i,j} = \sum_{n,l=1}^{\infty} n^i l^j u(n,l,t)$ and $\mu = \mu_{01} = \mu_{10}$. Known generalisations include monomers with an arbitrary number of functional group types,[6] crosslinking polymerisation,[7] and complex reaction networks.[8]
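As a worked example, take the simplest single-type system of purely trifunctional monomers, $f_3 = 1$; the general formula above should reduce to Flory's classical result $c_g = 1/(f-1) = 1/2$. A short sketch (ours):

```python
# Gelation conversion for A-type step-growth polymerisation, using the
# formula c_g = sum(m f_m) / sum((m^2 - m) f_m) from the text.
f = {3: 1.0}                                   # all monomers trifunctional
mu = sum(m * fm for m, fm in f.items())        # average functionality mu = 3
cg = (sum(m * fm for m, fm in f.items())
      / sum((m * m - m) * fm for m, fm in f.items()))
print("gelation conversion c_g =", cg)         # 3/6 = 0.5 = 1/(3-1)

# Inverting c(t) = mu t / (1 + mu t) gives the gel time t_g = c_g / (mu (1 - c_g))
t_gel = cg / (mu * (1 - cg))
print("gel time t_g =", t_gel)                 # 1/3 in units of the reaction rate
```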
https://en.wikipedia.org/wiki/Random_graph_theory_of_gelation
A scale-free network is a network whose degree distribution follows a power law, at least asymptotically. That is, the fraction P(k) of nodes in the network having k connections to other nodes goes for large values of k as P(k)∼k−γ{\displaystyle P(k)\sim k^{-\gamma }}, where γ{\displaystyle \gamma } is a parameter whose value is typically in the range 2<γ<3{\textstyle 2<\gamma <3} (wherein the second moment (scale parameter) of k−γ{\displaystyle k^{\boldsymbol {-\gamma }}} is infinite but the first moment is finite), although occasionally it may lie outside these bounds.[1][2] The name "scale-free" can be explained by the fact that some moments of the degree distribution are not defined, so that the network does not have a characteristic scale or "size". Preferential attachment and the fitness model have been proposed as mechanisms to explain the power law degree distributions in real networks. Alternative models such as super-linear preferential attachment and second-neighbour preferential attachment may appear to generate transient scale-free networks, but the degree distribution deviates from a power law as networks become very large.[3][4] In studies of citations between scientific papers, Derek de Solla Price showed in 1965 that the number of citations a paper receives had a heavy-tailed distribution following a Pareto distribution or power law. In a later paper in 1976, Price also proposed a mechanism to explain the occurrence of power laws in citation networks, which he called "cumulative advantage". However, both works treated citations as scalar quantities, rather than as a fundamental feature of a new class of networks. The interest in scale-free networks started in 1999 with work by Albert-László Barabási and Réka Albert at the University of Notre Dame who mapped the topology of a portion of the World Wide Web,[5] finding that some nodes, which they called "hubs", had many more connections than others and that the network as a whole had a power-law distribution of the number of links connecting to a node. In a subsequent paper[6] Barabási and Albert showed that the power laws are not a unique property of the WWW but are present in many real networks, prompting them to coin the term "scale-free network" to describe the class of networks that exhibit a power-law degree distribution. Barabási and Réka Albert proposed a generative mechanism[6] to explain the appearance of power-law distributions, which they called "preferential attachment". Analytic solutions for this mechanism were presented in 2000 by Dorogovtsev, Mendes and Samukhin[7] and independently by Krapivsky, Redner, and Leyvraz, and later rigorously proved by mathematician Béla Bollobás.[8] When the concept of "scale-free" was initially introduced in the context of networks,[6] it primarily referred to a specific trait: a power-law distribution for a given variable k{\displaystyle k}, expressed as f(k)∝k−γ{\displaystyle f(k)\propto k^{-\gamma }}. This property maintains its form when subjected to a continuous scale transformation k→k+ϵk{\displaystyle k\to k+\epsilon k}, evoking parallels with the renormalization group techniques in statistical field theory.[9][10] However, there is a key difference. In statistical field theory, the term "scale" often pertains to system size. In the realm of networks, the "scale" k{\displaystyle k} is a measure of connectivity, generally quantified by a node's degree—that is, the number of links attached to it. Networks featuring a higher number of high-degree nodes are deemed to have greater connectivity.
The power-law degree distribution enables us to make "scale-free" assertions about the prevalence of high-degree nodes.[11] For instance, we can say that "nodes with triple the average connectivity occur half as frequently as nodes with average connectivity". The specific numerical value of what constitutes "average connectivity" becomes irrelevant, whether it is a hundred or a million.[12] The most notable characteristic in a scale-free network is the relative commonness of vertices with a degree that greatly exceeds the average. The highest-degree nodes are often called "hubs", and are thought to serve specific purposes in their networks, although this depends greatly on the domain. In a random network the maximum degree, or the expected largest hub, scales as kmax∼log⁡N{\displaystyle k_{\text{max}}\sim \log N}, where N is the network size, a very slow dependence. In contrast, in scale-free networks the largest hub scales as kmax∼N1/(γ−1){\displaystyle k_{\text{max}}\sim N^{1/(\gamma -1)}}, indicating that the hubs increase polynomially with the size of the network. A key feature of scale-free networks is their high degree heterogeneity, κ=⟨k2⟩/⟨k⟩{\displaystyle \kappa =\langle k^{2}\rangle /\langle k\rangle }, which governs multiple network-based processes, from network robustness to epidemic spreading and network synchronization. While for a random network κ=⟨k⟩+1{\displaystyle \kappa =\langle k\rangle +1}, i.e. the ratio is independent of the network size N, for a scale-free network we have κ∼N(3−γ)/(γ−1){\displaystyle \kappa \sim N^{(3-\gamma )/(\gamma -1)}}, increasing with the network size, indicating that for these networks the degree heterogeneity increases. Another important characteristic of scale-free networks is the clustering coefficient distribution, which decreases as the node degree increases. This distribution also follows a power law. This implies that the low-degree nodes belong to very dense sub-graphs and those sub-graphs are connected to each other through hubs. Consider a social network in which nodes are people and links are acquaintance relationships between people. It is easy to see that people tend to form communities, i.e., small groups in which everyone knows everyone (one can think of such a community as a complete graph). In addition, the members of a community also have a few acquaintance relationships to people outside that community. Some people, however, are connected to a large number of communities (e.g., celebrities, politicians). Those people may be considered the hubs responsible for the small-world phenomenon. At present, the more specific characteristics of scale-free networks vary with the generative mechanism used to create them. For instance, networks generated by preferential attachment typically place the high-degree vertices in the middle of the network, connecting them together to form a core, with progressively lower-degree nodes making up the regions between the core and the periphery. The random removal of even a large fraction of vertices impacts the overall connectedness of the network very little, suggesting that such topologies could be useful for security, while targeted attacks destroy the connectedness very quickly. Other scale-free networks, which place the high-degree vertices at the periphery, do not exhibit these properties. Similarly, the clustering coefficient of scale-free networks can vary significantly depending on other topological details. The question of how to efficiently immunize scale-free networks which represent realistic networks such as the Internet and social networks has been studied extensively.
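The contrast between random failures and targeted attacks described above can be reproduced with a small simulation. A minimal Python sketch, using a purely illustrative hub-dominated toy graph (none of the numbers come from the article); it anticipates the hub-immunization strategy discussed next:

```python
# Sketch: random node removal vs. targeted removal of the highest-degree
# nodes, measured by the size of the largest surviving connected component.
import random

def largest_component(adj, removed):
    seen, best = set(removed), 0
    for start in adj:
        if start in seen:
            continue
        stack, size = [start], 0
        seen.add(start)
        while stack:                      # depth-first search
            v = stack.pop()
            size += 1
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        best = max(best, size)
    return best

def attack(adj, n_remove, targeted):
    if targeted:                          # remove (immunize) hubs first
        order = sorted(adj, key=lambda v: len(adj[v]), reverse=True)
    else:
        order = random.sample(list(adj), len(adj))
    return largest_component(adj, set(order[:n_remove]))

# Toy "star of stars": node 0 linked to ten hubs, each hub to 20 leaves.
adj, nid = {0: set()}, 1
for _ in range(10):
    hub = nid; nid += 1
    adj[hub] = {0}; adj[0].add(hub)
    for _ in range(20):
        leaf = nid; nid += 1
        adj[leaf] = {hub}; adj[hub].add(leaf)

print("random removal :", attack(adj, 11, targeted=False))  # ~200 survive
print("targeted removal:", attack(adj, 11, targeted=True))  # graph shatters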
One such strategy is to immunize the largest degree nodes, i.e., targeted (intentional) attacks, since in this case pc{\displaystyle p_{c}} is relatively high and fewer nodes need to be immunized. However, in many realistic cases the global structure is not available and the largest degree nodes are not known. Properties of random graphs may change or remain invariant under graph transformations. A. Mashaghi et al., for example, demonstrated that a transformation which converts random graphs to their edge-dual graphs (or line graphs) produces an ensemble of graphs with nearly the same degree distribution, but with degree correlations and a significantly higher clustering coefficient. Scale-free graphs, as such, remain scale-free under such transformations.[13] Examples of networks reported to be scale-free include the World Wide Web, citation networks, and some social and biological networks. Scale-free topology has also been found in high temperature superconductors.[17] The qualities of a high-temperature superconductor — a compound in which electrons obey the laws of quantum physics, and flow in perfect synchrony, without friction — appear linked to the fractal arrangements of seemingly random oxygen atoms and lattice distortion.[18] Scale-free networks do not arise by chance alone. Erdős and Rényi (1960) studied a model of growth for graphs in which, at each step, two nodes are chosen uniformly at random and a link is inserted between them. The properties of these random graphs are different from the properties found in scale-free networks, and therefore a model for this growth process is needed. The most widely known generative model for a subset of scale-free networks is Barabási and Albert's (1999) rich get richer generative model in which each new Web page creates links to existing Web pages with a probability distribution which is not uniform, but proportional to the current in-degree of Web pages. According to this process, a page with many in-links will attract more in-links than a regular page. This generates a power law, but the resulting graph differs from the actual Web graph in other properties such as the presence of small tightly connected communities. More general models and network characteristics have been proposed and studied. For example, Pachon et al. (2018) proposed a variant of the rich get richer generative model which takes into account two different attachment rules: a preferential attachment mechanism and a uniform choice only for the most recent nodes.[19] For a review see the book by Dorogovtsev and Mendes.[citation needed] Some mechanisms such as super-linear preferential attachment and second neighbour attachment generate networks which are transiently scale-free, but deviate from a power law as networks grow large.[3][4] A somewhat different generative model for Web links has been suggested by Pennock et al. (2002). They examined communities with interests in a specific topic such as the home pages of universities, public companies, newspapers or scientists, and discarded the major hubs of the Web. In this case, the distribution of links was no longer a power law but resembled a normal distribution. Based on these observations, the authors proposed a generative model that mixes preferential attachment with a baseline probability of gaining a link. Another generative model is the copy model studied by Kumar et al.[20] (2000), in which new nodes choose an existent node at random and copy a fraction of the links of the existent node. This also generates a power law.
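The power-law degree sequences that these mechanisms produce, and the degree heterogeneity κ discussed earlier, can be checked numerically. A minimal sketch with illustrative parameter choices (γ = 2.5, k_min = 2; these values are assumptions, not taken from the article), comparing a power-law sequence against a Poisson sequence of similar mean:

```python
# Sketch: kappa = <k^2>/<k> for a power-law degree sequence vs. a Poisson one.
import math, random

def poisson(lam):
    """Poisson sample via Knuth's method."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return k - 1

def kappa(degrees):
    mean = sum(degrees) / len(degrees)
    mean_sq = sum(d * d for d in degrees) / len(degrees)
    return mean_sq / mean

a = 1.0 - 2.5                        # exponent gamma = 2.5
k_min, k_max, N = 2.0, 2_000.0, 100_000
# Inverse-transform sampling of a (continuous) power law, then flooring.
draw = lambda: int((k_min**a + random.random() * (k_max**a - k_min**a)) ** (1.0 / a))

scale_free = [draw() for _ in range(N)]
poisson_seq = [poisson(6.0) for _ in range(N)]
print("power-law kappa:", kappa(scale_free))   # large, grows with k_max
print("Poisson kappa:  ", kappa(poisson_seq))  # close to <k> + 1 = 7
```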
There are two major components that explain the emergence of the power-law distribution in the Barabási–Albert model: the growth and the preferential attachment.[21] By "growth" is meant a growth process where, over an extended period of time, new nodes join an already existing system, a network (like the World Wide Web, which has grown by billions of web pages over 10 years). Finally, by "preferential attachment" is meant that new nodes prefer to connect to nodes that already have a high number of links with others. Thus, there is a higher probability that more and more nodes will link themselves to the one which already has many links, eventually leading this node to become a hub.[6] Depending on the network, the hubs might either be assortative or disassortative. Assortativity would be found in social networks in which well-connected/famous people would tend to know each other better. Disassortativity would be found in technological (Internet, World Wide Web) and biological (protein interaction, metabolism) networks.[21] However, the growth of the networks (adding new nodes) is not a necessary condition for creating a scale-free network (see Dangalchev[22]). One possibility (Caldarelli et al. 2002) is to consider the structure as static and draw a link between vertices according to a particular property of the two vertices involved. Once the statistical distribution of these vertex properties (fitnesses) is specified, it turns out that in some circumstances static networks also develop scale-free properties. There has been a burst of activity in the modeling of scale-free complex networks. The recipe of Barabási and Albert[23] has been followed by several variations and generalizations[24][25][26][27][19] and the revamping of previous mathematical works.[28] In today's terms, if a complex network has a power-law distribution of any of its metrics, it is generally considered a scale-free network. Similarly, any model with this feature is called a scale-free model.[11] Many real networks are (approximately) scale-free and hence require scale-free models to describe them. In Price's scheme, there are two ingredients needed to build up a scale-free model: 1. Adding or removing nodes. Usually we concentrate on growing the network, i.e. adding nodes. 2. Preferential attachment: the probability Π{\displaystyle \Pi } that new nodes will be connected to the "old" node. Note that some models (see Dangalchev[22] and the Fitness model below) can also work statically, without changing the number of nodes. It should also be kept in mind that the fact that "preferential attachment" models give rise to scale-free networks does not prove that this is the mechanism underlying the evolution of real-world scale-free networks, as there might exist different mechanisms at work in real-world systems that nevertheless give rise to scaling. There have been several attempts to generate scale-free network properties. Here are some examples: The Barabási–Albert model, an undirected version of Price's model, has a linear preferential attachment Π(ki)=ki∑jkj{\displaystyle \Pi (k_{i})={\frac {k_{i}}{\sum _{j}k_{j}}}} and adds one new node at every time step. (Note, another general feature of Π(k){\displaystyle \Pi (k)} in real networks is that Π(0)≠0{\displaystyle \Pi (0)\neq 0}, i.e. there is a nonzero probability that a new node attaches to an isolated node. Thus in general Π(k){\displaystyle \Pi (k)} has the form Π(k)=A+kα{\displaystyle \Pi (k)=A+k^{\alpha }}, where A{\displaystyle A} is the initial attractiveness of the node.)
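The linear preferential attachment rule Π(k_i) = k_i / Σ_j k_j can be implemented compactly with the standard "repeated targets" trick: keeping a list in which each node appears once per unit of degree makes a uniform draw from that list automatically degree-proportional. A minimal sketch in the spirit of the Barabási–Albert model (parameter values are illustrative):

```python
# Sketch: growth + linear preferential attachment (BA-style).
import random

def barabasi_albert(n, m):
    # Fully connected seed of m + 1 nodes.
    adj = {i: set() for i in range(m + 1)}
    targets = []                  # each node appears once per unit of degree
    for i in range(m + 1):
        for j in range(i + 1, m + 1):
            adj[i].add(j); adj[j].add(i)
            targets += [i, j]
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:    # uniform draw from targets = degree-proportional
            chosen.add(random.choice(targets))
        adj[new] = set(chosen)
        for t in chosen:
            adj[t].add(new)
            targets += [new, t]
    return adj

g = barabasi_albert(10_000, m=3)
degrees = sorted((len(v) for v in g.values()), reverse=True)
print("largest hubs:", degrees[:5])   # a few nodes far above the mean degree
```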
Dangalchev (see[22]) builds a 2-L model by considering the importance of each of the neighbours of a target node in preferential attachment. The attractiveness of a node in the 2-L model depends not only on the number of nodes linked to it but also on the number of links in each of these nodes: Π(ki)∝ki+C∑(i,j)∈Ekj{\displaystyle \Pi (k_{i})\propto k_{i}+C\sum _{(i,j)\in E}k_{j}}, where C is a coefficient between 0 and 1. A variant of the 2-L model, the k2 model, where first and second neighbour nodes contribute equally to a target node's attractiveness, demonstrates the emergence of transient scale-free networks.[4] In the k2 model, the degree distribution appears approximately scale-free as long as the network is relatively small, but significant deviations from the scale-free regime emerge as the network grows larger. This results in the relative attractiveness of nodes with different degrees changing over time, a feature also observed in real networks. In the mediation-driven attachment (MDA) model, a new node coming with m{\displaystyle m} edges picks an existing connected node at random and then connects itself, not with that one, but with m{\displaystyle m} of its neighbors, also chosen at random. The probability Π(i){\displaystyle \Pi (i)} that a node i{\displaystyle i}, a neighbour of the picked node, gains one of the new edges is Π(i)=1N∑j=1ki1kj{\displaystyle \Pi (i)={\frac {1}{N}}\sum _{j=1}^{k_{i}}{\frac {1}{k_{j}}}}. The factor ∑j=1ki1kjki{\displaystyle {\frac {\sum _{j=1}^{k_{i}}{\frac {1}{k_{j}}}}{k_{i}}}} is the inverse of the harmonic mean (IHM) of degrees of the ki{\displaystyle k_{i}} neighbors of a node i{\displaystyle i}. Extensive numerical investigation suggests that for approximately m>14{\displaystyle m>14} the mean IHM value in the large N{\displaystyle N} limit becomes a constant, which means Π(i)∝ki{\displaystyle \Pi (i)\propto k_{i}}. It implies that the more links (the higher degree) a node has, the higher its chance of gaining more links, since they can be reached in a larger number of ways through mediators, which essentially embodies the intuitive idea of the rich get richer mechanism (or the preferential attachment rule of the Barabási–Albert model). Therefore, the MDA network can be seen to follow the PA rule, but in disguise.[29] However, for m=1{\displaystyle m=1} it describes the winner-takes-all mechanism, as we find that almost 99%{\displaystyle 99\%} of the total nodes have degree one and one node is super-rich in degree. As the value of m{\displaystyle m} increases, the disparity between the super-rich and the poor decreases, and for m>14{\displaystyle m>14} we find a transition from the rich-get-super-richer to the rich-get-richer mechanism. The Barabási–Albert model assumes that the probability Π(k){\displaystyle \Pi (k)} that a node attaches to node i{\displaystyle i} is proportional to the degree k{\displaystyle k} of node i{\displaystyle i}. This assumption involves two hypotheses: first, that Π(k){\displaystyle \Pi (k)} depends on k{\displaystyle k}, in contrast to random graphs in which Π(k)=p{\displaystyle \Pi (k)=p}, and second, that the functional form of Π(k){\displaystyle \Pi (k)} is linear in k{\displaystyle k}. In non-linear preferential attachment, the form of Π(k){\displaystyle \Pi (k)} is not linear, and recent studies have demonstrated that the degree distribution depends strongly on the shape of the function Π(k){\displaystyle \Pi (k)}. Krapivsky, Redner, and Leyvraz[26] demonstrate that the scale-free nature of the network is destroyed for nonlinear preferential attachment. The only case in which the topology of the network is scale-free is that in which the preferential attachment is asymptotically linear, i.e. Π(ki)∼a∞ki{\displaystyle \Pi (k_{i})\sim a_{\infty }k_{i}} as ki→∞{\displaystyle k_{i}\to \infty }.
In this case the rate equation leads to a power-law degree distribution P(k)∼k−γ{\displaystyle P(k)\sim k^{-\gamma }}. This way, the exponent of the degree distribution can be tuned to any value between 2 and ∞{\displaystyle \infty }.[clarification needed] Hierarchical network models are, by design, scale-free and have high clustering of nodes.[30] The iterative construction leads to a hierarchical network. Starting from a fully connected cluster of five nodes, we create four identical replicas, connecting the peripheral nodes of each cluster to the central node of the original cluster. From this, we get a network of 25 nodes (N = 25). Repeating the same process, we can create four more replicas of the original cluster – the four peripheral nodes of each one connect to the central node of the nodes created in the first step. This gives N = 125, and the process can continue indefinitely. The idea is that the link between two vertices is not assigned randomly with a probability p equal for every pair of vertices. Rather, for every vertex j there is an intrinsic fitness xj and a link between vertices i and j is created with a probability p(xi,xj){\displaystyle p(x_{i},x_{j})}.[31] In the case of the World Trade Web it is possible to reconstruct all the properties by using as fitnesses of the countries their GDPs, and taking p(xi,xj)=δxixj1+δxixj{\displaystyle p(x_{i},x_{j})={\frac {\delta x_{i}x_{j}}{1+\delta x_{i}x_{j}}}}. Assuming that a network has an underlying hyperbolic geometry, one can use the framework of spatial networks to generate scale-free degree distributions. This heterogeneous degree distribution then simply reflects the negative curvature and metric properties of the underlying hyperbolic geometry.[33] Starting with scale-free graphs with low degree correlation and clustering coefficient, one can generate new graphs with much higher degree correlations and clustering coefficients by applying edge-dual transformation.[13] The UPA model is a variant of the preferential attachment model (proposed by Pachon et al.) which takes into account two different attachment rules: a preferential attachment mechanism (with probability 1−p) that stresses the rich get richer system, and a uniform choice (with probability p) for the most recent nodes. This modification is interesting to study the robustness of the scale-free behavior of the degree distribution. It is proved analytically that the asymptotically power-law degree distribution is preserved.[19] In the context of network theory a scale-free ideal network is a random network with a degree distribution following the scale-free ideal gas density distribution. These networks are able to reproduce city-size distributions and electoral results by unraveling the size distribution of social groups with information theory on complex networks when a competitive cluster growth process is applied to the network.[34][35] In models of scale-free ideal networks it is possible to demonstrate that Dunbar's number is the cause of the phenomenon known as the 'six degrees of separation'. For a scale-free network with n{\displaystyle n} nodes and power-law exponent γ>3{\displaystyle \gamma >3}, the induced subgraph constructed by vertices with degrees larger than log⁡n×log∗⁡n{\displaystyle \log {n}\times \log ^{*}{n}} is a scale-free network with γ′=2{\displaystyle \gamma '=2}, almost surely.[36] On a theoretical level, refinements to the abstract definition of scale-free have been proposed. For example, Li et al. (2005) offered a potentially more precise "scale-free metric". Briefly, let G be a graph with edge set E, and denote the degree of a vertex v{\displaystyle v} (that is, the number of edges incident to v{\displaystyle v}) by deg⁡(v){\displaystyle \deg(v)}.
Define s(G)=∑(u,v)∈Edeg⁡(u)⋅deg⁡(v){\displaystyle s(G)=\sum _{(u,v)\in E}\deg(u)\cdot \deg(v)}. This is maximized when high-degree nodes are connected to other high-degree nodes. Now define S(G)=s(G)smax{\displaystyle S(G)={\frac {s(G)}{s_{\max }}}}, where smax is the maximum value of s(H) for H in the set of all graphs with a degree distribution identical to that of G. This gives a metric between 0 and 1, where a graph G with small S(G) is "scale-rich", and a graph G with S(G) close to 1 is "scale-free". This definition captures the notion of self-similarity implied in the name "scale-free". Estimating the power-law exponent γ{\displaystyle \gamma } of a scale-free network is typically done by using maximum likelihood estimation with the degrees of a few uniformly sampled nodes.[37] However, since uniform sampling does not obtain enough samples from the important heavy tail of the power-law degree distribution, this method can yield a large bias and variance. It has recently been proposed to sample random friends (i.e., random ends of random links), who are more likely to come from the tail of the degree distribution as a result of the friendship paradox.[38][39] Theoretically, maximum likelihood estimation with random friends leads to a smaller bias and a smaller variance compared to the classical approach based on uniform sampling.[39]
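A minimal sketch of the estimation step, assuming the standard continuous maximum-likelihood (Hill) estimator and an illustrative synthetic degree sample (the estimator form and all parameter values are assumptions for illustration, not taken from the cited works):

```python
# Sketch: maximum-likelihood estimate of the power-law exponent gamma.
import math, random

def gamma_mle(degrees, k_min):
    """Continuous MLE / Hill estimate: 1 + n / sum(ln(k / k_min)), k >= k_min."""
    ks = [k for k in degrees if k >= k_min]
    return 1.0 + len(ks) / sum(math.log(k / k_min) for k in ks)

def random_friend_degrees(adj, samples):
    """Degrees of 'random friends': uniformly random ends of random links,
    which are reached with probability proportional to their degree."""
    ends = [v for u in adj for v in adj[u]]
    return [len(adj[random.choice(ends)]) for _ in range(samples)]

# Demo on synthetic degrees drawn from p(k) ~ k^(-2.5), k in [2, 10^4]:
a = 1.0 - 2.5
draw = lambda: int((2.0**a + random.random() * (1e4**a - 2.0**a)) ** (1.0 / a))
sample = [draw() for _ in range(50_000)]
print(gamma_mle(sample, k_min=2))   # roughly 2.5 (flooring adds a small bias)
```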
https://en.wikipedia.org/wiki/Scale-free_networks
A small-world network is a graph characterized by a high clustering coefficient and low distances. In the example of a social network, high clustering implies a high probability that two friends of one person are friends themselves. The low distances, on the other hand, mean that there is a short chain of social connections between any two people (this effect is known as six degrees of separation).[1] Specifically, a small-world network is defined to be a network where the typical distance L between two randomly chosen nodes (the number of steps required) grows proportionally to the logarithm of the number of nodes N in the network, that is,[2] L∝log⁡N{\displaystyle L\propto \log N}, while the global clustering coefficient is not small. In the context of a social network, this results in the small world phenomenon of strangers being linked by a short chain of acquaintances. Many empirical graphs show the small-world effect, including social networks, wikis such as Wikipedia, gene networks, and even the underlying architecture of the Internet. It is the inspiration for many network-on-chip architectures in contemporary computer hardware.[3] A certain category of small-world networks was identified as a class of random graphs by Duncan Watts and Steven Strogatz in 1998.[4] They noted that graphs could be classified according to two independent structural features, namely the clustering coefficient and the average node-to-node distance (also known as the average shortest path length). Purely random graphs, built according to the Erdős–Rényi (ER) model, exhibit a small average shortest path length (varying typically as the logarithm of the number of nodes) along with a small clustering coefficient. Watts and Strogatz measured that in fact many real-world networks have a small average shortest path length, but also a clustering coefficient significantly higher than expected by random chance. Watts and Strogatz then proposed a novel graph model, currently named the Watts and Strogatz model, with (i) a small average shortest path length, and (ii) a large clustering coefficient. The crossover in the Watts–Strogatz model between a "large world" (such as a lattice) and a small world was first described by Barthelemy and Amaral in 1999.[5] This work was followed by many studies, including exact results (Barrat and Weigt, 1999; Dorogovtsev and Mendes; Barmpoutis and Murray, 2010). Small-world networks tend to contain cliques, and near-cliques, meaning sub-networks which have connections between almost any two nodes within them. This follows from the defining property of a high clustering coefficient. Secondly, most pairs of nodes will be connected by at least one short path. This follows from the defining property that the mean shortest-path length is small. Several other properties are often associated with small-world networks. Typically there is an over-abundance of hubs – nodes in the network with a high number of connections (known as high degree nodes). These hubs serve as the common connections mediating the short path lengths between other edges. By analogy, the small-world network of airline flights has a small mean path length (i.e. between any two cities you are likely to have to take three or fewer flights) because many flights are routed through hub cities. This property is often analyzed by considering the fraction of nodes in the network that have a particular number of connections going into them (the degree distribution of the network).
Networks with a greater than expected number of hubs will have a greater fraction of nodes with high degree, and consequently the degree distribution will be enriched at high degree values. This is known colloquially as a fat-tailed distribution. Graphs of very different topology qualify as small-world networks as long as they satisfy the two definitional requirements above. Network small-worldness has been quantified by a small-world coefficient σ{\displaystyle \sigma }, calculated by comparing the clustering and path length of a given network to an equivalent Erdős–Rényi model with the same degree on average.[6][7] Another method for quantifying network small-worldness utilizes the original definition of the small-world network, comparing the clustering of a given network to an equivalent lattice network and its path length to an equivalent random network. The small-world measure (ω{\displaystyle \omega }) is defined as[8] ω=LrL−CCℓ{\displaystyle \omega ={\frac {L_{r}}{L}}-{\frac {C}{C_{\ell }}}}, where the characteristic path length L and clustering coefficient C are calculated from the network being tested, Cℓ is the clustering coefficient for an equivalent lattice network and Lr is the characteristic path length for an equivalent random network. Still another method for quantifying small-worldness normalizes both the network's clustering and path length relative to these characteristics in equivalent lattice and random networks. The Small World Index (SWI) is defined as[9] SWI=L−LℓLr−Lℓ×C−CrCℓ−Cr{\displaystyle {\text{SWI}}={\frac {L-L_{\ell }}{L_{r}-L_{\ell }}}\times {\frac {C-C_{r}}{C_{\ell }-C_{r}}}}. Both ω′ (defined as 1 − |ω|) and SWI range between 0 and 1, and have been shown to capture aspects of small-worldness. However, they adopt slightly different conceptions of ideal small-worldness. For a given set of constraints (e.g. size, density, degree distribution), there exists a network for which ω′ = 1, and thus ω′ aims to capture the extent to which a network with given constraints is as small-world as possible. In contrast, there may not exist a network for which SWI = 1, thus SWI aims to capture the extent to which a network with given constraints approaches the theoretical small-world ideal of a network where C ≈ Cℓ and L ≈ Lr.[9] Small-world properties are found in many real-world phenomena, including websites with navigation menus, food webs, electric power grids, metabolite processing networks, networks of brain neurons, voter networks, telephone call graphs, and airport networks.[10] Cultural networks[11] and word co-occurrence networks[12] have also been shown to be small-world networks. Networks of connected proteins have small-world properties such as power-law obeying degree distributions.[13] Similarly transcriptional networks, in which the nodes are genes, and they are linked if one gene has an up- or down-regulatory genetic influence on the other, have small-world network properties.[14] In another example, the famous theory of "six degrees of separation" between people tacitly presumes that the domain of discourse is the set of people alive at any one time. The number of degrees of separation between Albert Einstein and Alexander the Great is almost certainly greater than 30[15] and this network does not have small-world properties. A similarly constrained network would be the "went to school with" network: if two people went to the same college ten years apart from one another, it is unlikely that they have acquaintances in common amongst the student body. Similarly, the number of relay stations through which a message must pass was not always small. In the days when the post was carried by hand or on horseback, the number of times a letter changed hands between its source and destination would have been much greater than it is today.
The number of times a message changed hands in the days of the visual telegraph (circa 1800–1850) was determined by the requirement that two stations be connected by line-of-sight. Tacit assumptions, if not examined, can cause a bias in the literature on graphs in favor of finding small-world networks (an example of the file drawer effect resulting from publication bias). It is hypothesized by some researchers, such as Albert-László Barabási, that the prevalence of small-world networks in biological systems may reflect an evolutionary advantage of such an architecture. One possibility is that small-world networks are more robust to perturbations than other network architectures. If this were the case, it would provide an advantage to biological systems that are subject to damage by mutation or viral infection. In a small-world network with a degree distribution following a power law, deletion of a random node rarely causes a dramatic increase in mean shortest-path length (or a dramatic decrease in the clustering coefficient). This follows from the fact that most shortest paths between nodes flow through hubs, and if a peripheral node is deleted it is unlikely to interfere with passage between other peripheral nodes. As the fraction of peripheral nodes in a small-world network is much higher than the fraction of hubs, the probability of deleting an important node is very low. For example, if the small airport in Sun Valley, Idaho was shut down, it would not increase the average number of flights that other passengers traveling in the United States would have to take to arrive at their respective destinations. However, if random deletion of a node hits a hub by chance, the average path length can increase dramatically. This can be observed annually when northern hub airports, such as Chicago's O'Hare airport, are shut down because of snow; many people have to take additional flights. By contrast, in a random network, in which all nodes have roughly the same number of connections, deleting a random node is likely to increase the mean shortest-path length slightly but significantly for almost any node deleted. In this sense, random networks are vulnerable to random perturbations, whereas small-world networks are robust. However, small-world networks are vulnerable to targeted attack of hubs, whereas random networks cannot be targeted for catastrophic failure. The main mechanism to construct small-world networks is the Watts–Strogatz mechanism. Small-world networks can also be introduced with time-delay,[16] which will not only produce fractals but also chaos[17] under the right conditions, or transition to chaos in dynamic networks.[18] Soon after the publication of the Watts–Strogatz mechanism, approaches were developed by Mashaghi and co-workers to generate network models that exhibit high degree correlations, while preserving the desired degree distribution and small-world properties. These approaches are based on edge-dual transformation and can be used to generate analytically solvable small-world network models for research into these systems.[19] Degree–diameter graphs are constructed such that the number of neighbors each vertex in the network has is bounded, while the distance from any given vertex in the network to any other vertex (the diameter of the network) is minimized. Constructing such small-world networks is done as part of the effort to find graphs of order close to the Moore bound.
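The Watts–Strogatz mechanism named above is simple to implement: start from a ring lattice, which is highly clustered, and rewire a small fraction of edges to random endpoints, which shortens path lengths. A minimal sketch (parameter values are illustrative):

```python
# Sketch: Watts-Strogatz construction. Each node is first joined to its
# k nearest neighbours on a ring; each edge is then rewired with
# probability p to a uniformly random new endpoint.
import random

def watts_strogatz(n, k, p):
    adj = {i: set() for i in range(n)}
    for i in range(n):                        # ring lattice
        for d in range(1, k // 2 + 1):
            j = (i + d) % n
            adj[i].add(j); adj[j].add(i)
    for i in range(n):                        # rewire each edge once
        for j in list(adj[i]):
            if j > i and random.random() < p:
                choices = [w for w in range(n) if w != i and w not in adj[i]]
                if choices:
                    new = random.choice(choices)
                    adj[i].discard(j); adj[j].discard(i)
                    adj[i].add(new); adj[new].add(i)
    return adj

g = watts_strogatz(1_000, k=10, p=0.1)
print(sum(len(v) for v in g.values()) // 2, "edges")
```

Even small p (around 0.01 to 0.1) already collapses the average distance to near-logarithmic scaling while leaving the clustering coefficient close to that of the lattice, which is the crossover described earlier.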
Another way to construct a small-world network from scratch is given in Barmpoutis et al.,[20] where a network with very small average distance and very large average clustering is constructed. A fast algorithm of constant complexity is given, along with measurements of the robustness of the resulting graphs. Depending on the application of each network, one can start with one such "ultra small-world" network, and then rewire some edges, or use several small such networks as subgraphs to a larger graph. Small-world properties can arise naturally in social networks and other real-world systems via the process of dual-phase evolution. This is particularly common where time or spatial constraints limit the addition of connections between vertices. The mechanism generally involves periodic shifts between phases, with connections being added during a "global" phase and being reinforced or removed during a "local" phase. Small-world networks can change from the scale-free class to the broad-scale class, whose connectivity distribution has a sharp cutoff following a power law regime, due to constraints limiting the addition of new links.[21] For strong enough constraints, scale-free networks can even become single-scale networks whose connectivity distribution is characterized as fast decaying.[21] It was also shown analytically that scale-free networks are ultra-small, meaning that the distance scales according to L∝log⁡log⁡N{\displaystyle L\propto \log \log N}.[22] The advantages to small-world networking for social movement groups are their resistance to change due to the filtering apparatus of using highly connected nodes, and their better effectiveness in relaying information while keeping the number of links required to connect a network to a minimum.[23] The small-world network model is directly applicable to affinity group theory represented in sociological arguments by William Finnegan. Affinity groups are social movement groups that are small and semi-independent, pledged to a larger goal or function. Though largely unaffiliated at the node level, a few members of high connectivity function as connectivity nodes, linking the different groups through networking. This small-world model has proven an extremely effective protest organization tactic against police action.[24] Clay Shirky argues that the larger the social network created through small-world networking, the more valuable the nodes of high connectivity within the network.[23] The same can be said for the affinity group model, where the few people within each group connected to outside groups allowed for a large amount of mobilization and adaptation. A practical example of this is small-world networking through affinity groups that William Finnegan outlines in reference to the 1999 Seattle WTO protests. Many networks studied in geology and geophysics have been shown to have characteristics of small-world networks. Networks defined in fracture systems and porous substances have demonstrated these characteristics.[25] The seismic network in the Southern California region may be a small-world network.[26] The examples above occur on very different spatial scales, demonstrating the scale invariance of the phenomenon in the earth sciences. Small-world networks have been used to estimate the usability of information stored in large databases. The measure is termed the Small World Data Transformation Measure.[27][28] The more closely the database links align to a small-world network, the more likely a user is to be able to extract information in the future.
This usability typically comes at the cost of the amount of information that can be stored in the same repository. The Freenet peer-to-peer network has been shown to form a small-world network in simulation,[29] allowing information to be stored and retrieved in a manner that scales efficiently as the network grows. Nearest neighbor search solutions like HNSW use small-world networks to efficiently find the information in large item corpora.[30][31] Both anatomical connections in the brain[32] and the synchronization networks of cortical neurons[33] exhibit small-world topology. Structural and functional connectivity in the brain has also been found to reflect the small-world topology of short path length and high clustering.[34] The network structure has been found in the mammalian cortex across species as well as in large-scale imaging studies in humans.[35] Advances in connectomics and network neuroscience have found the small-worldness of neural networks to be associated with efficient communication.[36] In neural networks, short path length between nodes and high clustering at network hubs support efficient communication between brain regions at the lowest energetic cost.[36] The brain is constantly processing and adapting to new information, and the small-world network model supports the intense communication demands of neural networks.[37] High clustering of nodes forms local networks which are often functionally related. Short path length between these hubs supports efficient global communication.[38] This balance enables the efficiency of the global network while simultaneously equipping the brain to handle disruptions and maintain homeostasis, due to local subsystems being isolated from the global network.[39] Loss of small-world network structure has been found to indicate changes in cognition and an increased risk of psychological disorders.[9] In addition to characterizing whole-brain functional and structural connectivity, specific neural systems, such as the visual system, exhibit small-world network properties.[6] A small-world network of neurons can exhibit short-term memory. A computer model developed by Sara Solla[40][41] had two stable states, a property (called bistability) thought to be important in memory storage. An activating pulse generated self-sustaining loops of communication activity among the neurons. A second pulse ended this activity. The pulses switched the system between stable states: flow (recording a "memory"), and stasis (holding it). Small-world neuronal networks have also been used as models to understand seizures.[42]
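As a compact restatement of the quantitative small-worldness measures defined earlier in this article, the following sketch computes σ and ω from a network's clustering coefficient C and characteristic path length L together with lattice (C_ℓ) and random (C_r, L_r) reference values. The input numbers below are purely illustrative:

```python
# Sketch: the sigma coefficient and the omega measure from the definitions above.

def sigma(C, L, C_r, L_r):
    """sigma = (C/C_r) / (L/L_r); values well above 1 are usually
    read as small-world (the common form of the sigma coefficient)."""
    return (C / C_r) / (L / L_r)

def omega(C, L, C_l, L_r):
    """omega = L_r/L - C/C_l; values near 0 indicate small-worldness,
    negative values a more lattice-like, positive a more random graph."""
    return L_r / L - C / C_l

# Illustrative numbers only:
print(sigma(C=0.3, L=3.0, C_r=0.05, L_r=2.5))   # 5.0
print(omega(C=0.3, L=3.0, C_l=0.5, L_r=2.5))    # ~0.23
```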
https://en.wikipedia.org/wiki/Small_world_networks
A spatial network (sometimes also geometric graph) is a graph in which the vertices or edges are spatial elements associated with geometric objects, i.e., the nodes are located in a space equipped with a certain metric.[1][2] The simplest mathematical realization of a spatial network is a lattice or a random geometric graph, where nodes are distributed uniformly at random over a two-dimensional plane; a pair of nodes are connected if the Euclidean distance is smaller than a given neighborhood radius. Transportation and mobility networks, the Internet, mobile phone networks, power grids, social and contact networks and biological neural networks are all examples where the underlying space is relevant and where the graph's topology alone does not contain all the information. Characterizing and understanding the structure, resilience and the evolution of spatial networks is crucial for many different fields ranging from urbanism to epidemiology. An urban spatial network can be constructed by abstracting intersections as nodes and streets as links, which is referred to as a transportation network. One might think of the 'space map' as being the negative image of the standard map, with the open space cut out of the background buildings or walls.[3] The following aspects are some of the characteristics used to examine a spatial network.[1] In many applications, such as railways, roads, and other transportation networks, the network is assumed to be planar. Planar networks build up an important group out of the spatial networks, but not all spatial networks are planar. Indeed, the airline passenger network is a non-planar example: many large airports in the world are connected through direct flights. There are examples of networks which seem not to be "directly" embedded in space. Social networks, for instance, connect individuals through friendship relations. But in this case, space intervenes in the fact that the connection probability between two individuals usually decreases with the distance between them. A spatial network can be represented by a Voronoi diagram, which is a way of dividing space into a number of regions. The dual graph for a Voronoi diagram corresponds to the Delaunay triangulation for the same set of points. Voronoi tessellations are interesting for spatial networks in the sense that they provide a natural representation model to which one can compare a real-world network. Examining the topology of the nodes and edges itself is another way to characterize networks. The distribution of the degree of the nodes is often considered; regarding the structure of edges, it is useful to find the minimum spanning tree, or its generalizations, the Steiner tree and the relative neighborhood graph. In the "real" world, many aspects of networks are not deterministic; randomness plays an important role. For example, new links, representing friendships, in social networks appear in a certain manner at random. It is therefore natural to model spatial networks with stochastic operations. In many cases the spatial Poisson process is used to approximate data sets of processes on spatial networks. Other stochastic aspects are also of interest. Another definition of spatial network derives from the theory of space syntax. It can be notoriously difficult to decide what a spatial element should be in complex spaces involving large open areas or many interconnected paths. The originators of space syntax, Bill Hillier and Julienne Hanson, use axial lines and convex spaces as the spatial elements.
Loosely, an axial line is the 'longest line of sight and access' through open space, and a convex space the 'maximal convex polygon' that can be drawn in open space. Each of these elements is defined by the geometry of the local boundary in different regions of the space map. Decomposition of a space map into a complete set of intersecting axial lines or overlapping convex spaces produces the axial map or overlapping convex map respectively. Algorithmic definitions of these maps exist, and this allows the mapping from an arbitrarily shaped space map to a network amenable to graph mathematics to be carried out in a relatively well-defined manner. Axial maps are used to analyse urban networks, where the system generally comprises linear segments, whereas convex maps are more often used to analyse building plans where space patterns are often more convexly articulated; however, both convex and axial maps may be used in either situation. Currently, there is a move within the space syntax community to integrate better with geographic information systems (GIS), and much of the software they produce interlinks with commercially available GIS systems. While networks and graphs had already long been the subject of many studies in mathematics, physics, mathematical sociology and computer science, spatial networks were also studied intensively during the 1970s in quantitative geography. Objects of study in geography are, inter alia, locations, activities and flows of individuals, but also networks evolving in time and space.[4] Most of the important problems, such as the location of nodes of a network, the evolution of transportation networks and their interaction with population and activity density, are addressed in these earlier studies. On the other hand, many important points still remain unclear, partly because at that time datasets of large networks and larger computer capabilities were lacking. Recently, spatial networks have been the subject of studies in statistics, to connect probabilities and stochastic processes with networks in the real world.[5]
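The random geometric graph mentioned at the start of this article, the simplest realization of a spatial network, can be generated in a few lines. A minimal sketch (node count and radius are illustrative choices):

```python
# Sketch: random geometric graph on the unit square. Nodes are placed
# uniformly at random; two nodes are linked whenever their Euclidean
# distance is below the neighborhood radius r.
import math, random

def random_geometric_graph(n, r):
    pts = [(random.random(), random.random()) for _ in range(n)]
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(pts[i], pts[j]) < r:
                adj[i].add(j); adj[j].add(i)
    return pts, adj

pts, g = random_geometric_graph(500, r=0.08)
# Away from the boundary the expected mean degree is about n * pi * r^2.
print("mean degree:", sum(len(v) for v in g.values()) / len(g))
```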
https://en.wikipedia.org/wiki/Spatial_network
Trophic coherence is a property of directed graphs (or directed networks).[1] It is based on the concept of trophic levels used mainly in ecology,[2] but which can be defined for directed networks in general and provides a measure of hierarchical structure among nodes. Trophic coherence is the tendency of nodes to fall into well-defined trophic levels. It has been related to several structural and dynamical properties of directed networks, including the prevalence of cycles[3] and network motifs,[4] ecological stability,[1] intervality,[5] and spreading processes like epidemics and neuronal avalanches.[6] Consider a directed network defined by the N×N{\displaystyle N\times N} adjacency matrix A=(aij){\displaystyle A=(a_{ij})}. Each node i{\displaystyle i} can be assigned a trophic level si{\displaystyle s_{i}} according to si=1+1kiin∑jaijsj,{\displaystyle s_{i}=1+{\frac {1}{k_{i}^{\text{in}}}}\sum _{j}a_{ij}s_{j},} where kiin=∑jaij{\displaystyle k_{i}^{\text{in}}=\sum _{j}a_{ij}} is i{\displaystyle i}'s in-degree, and nodes with kiin=0{\displaystyle k_{i}^{\text{in}}=0} (basal nodes) have si=1{\displaystyle s_{i}=1} by convention. Each edge has an associated trophic difference, defined as xij=si−sj{\displaystyle x_{ij}=s_{i}-s_{j}}. The trophic coherence of the network is a measure of how tightly peaked the distribution of trophic distances, p(x){\displaystyle p(x)}, is around its mean value, which is always ⟨x⟩=1{\displaystyle \langle x\rangle =1}. This can be captured by an incoherence parameter q{\displaystyle q}, equal to the standard deviation of p(x){\displaystyle p(x)}: q=1L∑ijaijxij2−1,{\displaystyle q={\sqrt {{\frac {1}{L}}\sum _{ij}a_{ij}x_{ij}^{2}-1}},} where L=∑ijaij{\displaystyle L=\sum _{ij}a_{ij}} is the number of edges in the network.[1] The figure shows two networks which differ in their trophic coherence. The position of the nodes on the vertical axis corresponds to their trophic level. In the network on the left, nodes fall into distinct (integer) trophic levels, so the network is maximally coherent (q=0){\displaystyle (q=0)}. In the one on the right, many of the nodes have fractional trophic levels, and the network is more incoherent (q=0.49){\displaystyle (q=0.49)}.[6] The degree to which empirical networks are trophically coherent (or incoherent) can be investigated by comparison with a null model. This is provided by the basal ensemble, which comprises networks in which all non-basal nodes have the same proportion of basal nodes for in-neighbours.[3] Expected values in this ensemble converge to those of the widely used configuration ensemble[7] in the limit N→∞{\displaystyle N\rightarrow \infty }, L/N→∞{\displaystyle L/N\rightarrow \infty } (with N{\displaystyle N} and L{\displaystyle L} the numbers of nodes and edges), and can be shown numerically to be a good approximation for finite random networks. The basal ensemble expectation for the incoherence parameter is q~=LLB−1,{\displaystyle {\tilde {q}}={\sqrt {{\frac {L}{L_{B}}}-1}},} where LB{\displaystyle L_{B}} is the number of edges connected to basal nodes.[3] The ratio q/q~{\displaystyle q/{\tilde {q}}} measured in empirical networks reveals whether they are more or less coherent than the random expectation. For instance, Johnson and Jones[3] find in a set of networks that food webs are significantly coherent (q/q~=0.44±0.17){\displaystyle (q/{\tilde {q}}=0.44\pm 0.17)}, metabolic networks are significantly incoherent (q/q~=1.81±0.11){\displaystyle (q/{\tilde {q}}=1.81\pm 0.11)}, and gene regulatory networks are close to the random expectation (q/q~=0.99±0.05){\displaystyle (q/{\tilde {q}}=0.99\pm 0.05)}.
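The definitions above lend themselves to a direct numerical implementation. A minimal sketch (the fixed-point solver is one simple choice among several; the three-node chain is an illustrative example):

```python
# Sketch: trophic levels and the incoherence parameter q.
import math

def trophic_levels(a, iters=100):
    """Fixed-point iteration for s_i = 1 + (1/k_i^in) * sum_j a_ij * s_j.
    Convention as in the text: a[i][j] = 1 denotes an edge from j to i;
    basal nodes (in-degree 0) are pinned at s = 1. The iteration converges
    when every node can be reached from some basal node."""
    n = len(a)
    k_in = [sum(row) for row in a]
    s = [1.0] * n
    for _ in range(iters):
        s = [1.0 if k_in[i] == 0
             else 1.0 + sum(a[i][j] * s[j] for j in range(n)) / k_in[i]
             for i in range(n)]
    return s

def incoherence(a):
    """q = sqrt(<x^2> - 1), using <x> = 1 for the trophic distances x."""
    s = trophic_levels(a)
    n = len(a)
    edges = [(i, j) for i in range(n) for j in range(n) if a[i][j]]
    mean_sq = sum((s[i] - s[j]) ** 2 for i, j in edges) / len(edges)
    return math.sqrt(max(mean_sq - 1.0, 0.0))   # guard against float error

# A perfectly coherent 3-level chain (0 -> 1 -> 2) gives q = 0.
chain = [[0, 0, 0],
         [1, 0, 0],
         [0, 1, 0]]
print(trophic_levels(chain), incoherence(chain))   # [1.0, 2.0, 3.0] 0.0
```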
There is as yet little understanding of the mechanisms which might lead to particular kinds of networks becoming significantly coherent or incoherent.[3] However, in systems which present correlations between trophic level and other features of nodes, processes which tended to favour the creation of edges between nodes with particular characteristics could induce coherence or incoherence. In the case of food webs, predators tend to specialise on consuming prey with certain biological properties (such as size, speed or behaviour) which correlate with their diet, and hence with trophic level. This has been suggested as the reason for food-web coherence.[1] However, food-web models based on a niche axis do not reproduce realistic trophic coherence,[1] which may mean either that this explanation is insufficient, or that several niche dimensions need to be considered.[8] The relation between trophic level and node function can be seen in networks other than food webs. The figure shows a word adjacency network derived from the book Green Eggs and Ham, by Dr. Seuss.[3] The height of nodes represents their trophic levels (according here to the edge direction which is the opposite of that suggested by the arrows, which indicate the order in which words are concatenated in sentences). The syntactic function of words is also shown with node colour. There is a clear relationship between syntactic function and trophic level: the mean trophic level of common nouns (blue) is snoun=1.4±1.2{\displaystyle s_{noun}=1.4\pm 1.2}, whereas that of verbs (red) is sverb=7.0±2.7{\displaystyle s_{verb}=7.0\pm 2.7}. This example illustrates how trophic coherence or incoherence might emerge from node function, and also that the trophic structure of networks provides a means of identifying node function in certain systems. There are various ways of generating directed networks with specified trophic coherence, all based on gradually introducing new edges to the system in such a way that the probability of each new candidate edge being accepted depends on the expected trophic difference it would have. The preferential preying model is an evolving network model similar to the Barabási–Albert model of preferential attachment, but inspired by an ecosystem that grows through immigration of new species.[1] One begins with B{\displaystyle B} basal nodes and proceeds to introduce new nodes up to a total of N{\displaystyle N}. Each new node i{\displaystyle i} is assigned a first in-neighbour j{\displaystyle j} (a prey species in the food-web context) and a new edge is placed from j{\displaystyle j} to i{\displaystyle i}. The new node is given a temporary trophic level sit=sj+1{\displaystyle s_{i}^{t}=s_{j}+1}. Then a further κi{\displaystyle \kappa _{i}} new in-neighbours l{\displaystyle l} are chosen for i{\displaystyle i} from among those in the network according to their trophic levels. Specifically, for a new candidate in-neighbour l{\displaystyle l}, the probability of being chosen is a function of xilt=sit−sl{\displaystyle x_{il}^{t}=s_{i}^{t}-s_{l}}. Johnson et al.[1] use a probability proportional to exp⁡(−(xilt−1)22T2){\displaystyle \exp \left(-{\frac {(x_{il}^{t}-1)^{2}}{2T^{2}}}\right)}, where T{\displaystyle T} is a parameter which tunes the trophic coherence: for T=0{\displaystyle T=0} maximally coherent networks are generated, and q{\displaystyle q} increases monotonically with T{\displaystyle T} for T>0{\displaystyle T>0}. The choice of κi{\displaystyle \kappa _{i}} is arbitrary.
One possibility is to set κi=zini{\displaystyle \kappa _{i}=z_{i}n_{i}}, where ni{\displaystyle n_{i}} is the number of nodes already in the network when i{\displaystyle i} arrives, and zi{\displaystyle z_{i}} is a random variable drawn from a Beta distribution with parameter α=1{\displaystyle \alpha =1} and with the second parameter chosen so as to obtain the desired number of edges (Ld{\displaystyle L_{d}} being the desired number of edges). This way, the generalised cascade model[9][10] is recovered in the limit T→∞{\displaystyle T\rightarrow \infty }, and the degree distributions are as in the niche model[11] and generalised niche model.[10] This algorithm, as described above, generates networks with no cycles (except for self-cycles, if the new node i{\displaystyle i} is itself considered among its candidate in-neighbours l{\displaystyle l}). In order for cycles of all lengths to be possible, one can consider new candidate edges in which the new node i{\displaystyle i} is the in-neighbour as well as those in which it would be the out-neighbour. The probability of acceptance of these edges, Pli{\displaystyle P_{li}}, then depends on xlit=sl−sit{\displaystyle x_{li}^{t}=s_{l}-s_{i}^{t}}. The generalised preferential preying model[6] is similar to the one described above, but has certain advantages. In particular, it is more analytically tractable, and one can generate networks with a precise number of edges L{\displaystyle L}. The network begins with B{\displaystyle B} basal nodes, and then a further N−B{\displaystyle N-B} new nodes are added in the following way. When each enters the system, it is assigned a single in-neighbour randomly from among those already there. Every node then has an integer temporary trophic level sit{\displaystyle s_{i}^{t}}. The remaining L−N+B{\displaystyle L-N+B} edges are introduced as follows. Each pair of nodes (i,j){\displaystyle (i,j)} has two temporary trophic distances associated, xijt=sit−sjt{\displaystyle x_{ij}^{t}=s_{i}^{t}-s_{j}^{t}} and xjit=sjt−sit{\displaystyle x_{ji}^{t}=s_{j}^{t}-s_{i}^{t}}. Each of these candidate edges is accepted with a probability that depends on this temporary distance. Klaise and Johnson[6] use Pij∝exp⁡(−(xijt−1)22T2),{\displaystyle P_{ij}\propto \exp \left(-{\frac {(x_{ij}^{t}-1)^{2}}{2T^{2}}}\right),} because they find the distribution of trophic distances in several kinds of networks to be approximately normal, and this choice leads to a range of the parameter T{\displaystyle T} in which q≃T{\displaystyle q\simeq T}. Once all the edges have been introduced, one must recalculate the trophic levels of all nodes, since these will differ from the temporary ones originally assigned unless T≃0{\displaystyle T\simeq 0}. As with the preferential preying model, the average incoherence parameter q{\displaystyle q} of the resulting networks is a monotonically increasing function of T{\displaystyle T} for T≥0{\displaystyle T\geq 0}. The figure above shows two networks with different trophic coherence generated with this algorithm.
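The core of the generalised preferential preying model is the Gaussian acceptance rule for candidate edges. A minimal sketch of just that step (here the unnormalised weight is used directly as an acceptance probability, a simplification of the model's normalised selection; the numbers are illustrative):

```python
# Sketch: acceptance of a candidate edge with temporary trophic distance x,
# with weight proportional to exp(-(x - 1)^2 / (2 T^2)) as described above.
import math, random

def accept(x, T):
    return random.random() < math.exp(-(x - 1.0) ** 2 / (2.0 * T * T))

# Small T accepts only distances near 1 (coherent networks);
# larger T accepts a broad range of distances (incoherent networks).
print(sum(accept(2.0, 0.2) for _ in range(10_000)) / 10_000)  # ~0
print(sum(accept(2.0, 2.0) for _ in range(10_000)) / 10_000)  # ~0.88
```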
https://en.wikipedia.org/wiki/Trophic_coherence
The German nobility (deutscher Adel) and royalty were status groups of the medieval society in Central Europe, which enjoyed certain privileges relative to other people under the laws and customs in the German-speaking area, until the beginning of the 20th century. Historically, German entities that recognized or conferred nobility included the Holy Roman Empire (962–1806), the German Confederation (1814–1866), and the German Empire (1871–1918). Chancellor Otto von Bismarck in the German Empire had a policy of expanding his political base by ennobling nouveau riche industrialists and businessmen who had no noble ancestors.[1] The nobility flourished during the dramatic industrialization and urbanization of Germany after 1850. Landowners modernized their estates, and oriented their business to an international market. Many younger sons were positioned in the rapidly growing national and regional civil service bureaucracies, as well as in the officer corps of the military. They acquired not only the technical skills but the necessary education in high-prestige German universities that facilitated their success. Many became political leaders of new reform organizations such as agrarian leagues and pressure groups. The Roman Catholic nobility played a major role in forming the new Centre Party in resistance to Bismarck's anti-Catholic Kulturkampf, while Protestant nobles were similarly active in the Conservative Party.[2] In August 1919, at the beginning of the Weimar Republic (1918–1933), Germany's new constitution officially abolished royalty and nobility, and the respective legal privileges and immunities appertaining to an individual, a family or any heirs. Today, German nobility is no longer conferred by the Federal Republic of Germany (1949–present), and constitutionally the descendants of German noble families do not enjoy legal privileges. Hereditary titles are permitted as part of the surname (e.g., the aristocratic particles von and zu), and these surnames can then be inherited by a person's children. Later developments distinguished the Austrian nobility, which came to be associated with the Austrian Empire and Austria-Hungary. The nobility system of the German Empire was similar to nobility in the Austrian Empire; both developed during the Holy Roman Empire and both ended in 1919 when they were abolished, and legal status and privileges were revoked. In April 1919, Austrian nobility was abolished under the First Austrian Republic (1919–1934) and, contrary to Germany, the subsequent use and legal recognition of hereditary titles and aristocratic particles, and their use as part of surnames, was banned. Today, Austrian nobility is no longer conferred by the Republic of Austria (1945–present), and the public or official use of noble titles as a title or part of the surname is a minor offence under Austrian law for Austrian citizens. In Germany, nobility and titles pertaining to it were recognised or bestowed upon individuals by emperors, kings and lesser ruling royalty, and were then inherited by the legitimate, male-line descendants of the ennobled person. Families that had been considered noble as early as pre-1400s Germany (i.e., the Uradel or "ancient nobility") were usually eventually recognised by a sovereign, confirming their entitlement to whatever legal privileges nobles enjoyed in that sovereign's realm. Noble rank was usually granted to men by letters patent (see Briefadel), whereas women were members of the nobility by descent or by marriage to a nobleman.
Nobility was inherited equally by all legitimate descendants in the male line. German titles of nobility were usually inherited by all male-line descendants, although some descended by male primogeniture, especially in 19th- and 20th-century Prussia (e.g., Otto von Bismarck, born a baronial Junker (not a title), was granted the title of count (Graf) extending to all his male-line descendants, and later that of prince (Fürst) in primogeniture).

Upon promulgation of the Weimar Constitution on 11 August 1919, all Germans were declared equal before the law.[3] An exceptional practice remained regarding surnames borne by former members of the nobility: whereas the gender differentiation in German surnames, widespread until the 18th century and colloquially retained in some dialects, was abolished in Germany with the introduction of officially registered invariable surnames by the late 19th century, former noble titles transformed into parts of the surname in 1919 continue to appear in female and male forms.[4]

Altogether abolished were titles of sovereigns, such as emperor/empress, king/queen, grand duke/grand duchess, etc. However, former titles shared and inherited by all members of the family were retained but incorporated into the surname. For instance, members of the former royal families of Prussia and Bavaria were allowed use of Prinz/Prinzessin.[5] The former kings/queens of Saxony and Württemberg retained Herzog/Herzogin, the ducal title borne by non-ruling cadets of their dynasties before 1919, as did the six deposed grand dukes (i.e., the former rulers of Baden, Hesse, Mecklenburg-Schwerin, Mecklenburg-Strelitz, Oldenburg, and Saxe-Weimar-Eisenach) and their consorts. Any dynast who did not reign prior to 1918 but had held a specific title as heir to one of Germany's former thrones (e.g., Erbprinz ("hereditary prince"))—along with any heir to a title of nobility inherited via primogeniture, and their wives—were permitted to incorporate those titles into elements of the personal surname. However, these titles became extinct upon their deaths, not being heritable.[a] With the demise of all persons styled "crown prince" before 1918, the term Kronprinz no longer exists as a legal surname element. Traditional titles exclusively used for unmarried noblewomen, such as Baronesse, Freiin and Freifräulein, were also transformed into parts of the legal surname, subject to change at marriage or upon request.[6]

All other former titles and nobiliary particles are now inherited as part of the surname, and remain protected as private names under the laws. Whereas the title previously prefixed the given name and surname (e.g., Graf Kasimir von der Recke), the legal usage moves the former title to the surname (i.e., Kasimir Graf von der Recke). However, the pre-1919 style sometimes continues in colloquial usage. In Austria, by contrast, not only were the privileges of the nobility abolished, but their titles and nobiliary particles as well.[b]

German nobility was not simply distinguished by noble ranks and titles, but was also seen as a distinctive ethos. Title 9, §1 of the General State Laws for the Prussian States declared that the nobility's responsibility "as the first social class in the state" was "the defence of the country, as well as the supporting of the exterior dignity and the interior constitution thereof". Most German states had strict laws concerning proper conduct, employment, or marriage of nobles. Violating these laws could result in temporary or permanent Adelsverlust ("loss of the status of nobility").
Until the late 19th century, for example, it was usually forbidden for nobles, theoretically on pain of Adelsverlust, to marry persons "of low birth". Moreover, nobles employed in menial labour and lowly trades or wage labour could lose their nobility, as could nobles convicted of capital crimes. Adelsverlust only concerned the individual who had violated nobility codes of conduct. Their kin, spouse, and living children were not affected, but children born to a man after an Adelsverlust were commoners and did not inherit the father's former nobility.

Various organisations[citation needed] perpetuate the historical legacy of the former nobility, documenting genealogy, chronicling the history of noble families and sometimes declining to acknowledge persons who acquired noble surnames in ways impossible before 1919.

Many German states, however, required a marriage to a woman of elevated social status in order for a nobleman to pass on his titles and privileges to his children. In this respect, the General State Laws for the Prussian States of 1794 spoke of marriage (and children) "to the right hand". This excluded marriages with women of the lower social classes, but did not mean a woman had to come from nobility herself. Especially towards the end of the 19th century and beyond, when a new upper class of wealthy common people had emerged following industrialization, marriages with commoners were becoming more widespread. However, with few exceptions, this did not apply to higher nobility, who largely continued to marry among themselves. Upwardly mobile German families typically followed marriage strategies involving men of lower rank marrying women of higher status who brought a major dowry.[7][8]

Most, but not all, surnames of the German nobility were preceded by or contained the preposition von (meaning "of") or zu (meaning "at") as a nobiliary particle.[9] The two were occasionally combined into von und zu (meaning "of and at").[9] In general, the von form indicates the family's place of origin, while the zu form indicates the family's continued possession of the estate from which the surname is drawn. Therefore, von und zu indicates a family which is both named for and continues to own their original feudal holding or residence. However, the zu particle can also hint at the split of a dynasty, providing information on the adopted new home of a split-off branch: a senior branch owning, and perhaps still residing at, the place of the dynasty's origin might continue to be called of A-Town [and at A-Town], while a new, junior branch could then adopt the style of, say, of A-Town [and] at B-ville, sometimes even dropping the [and] at and simply hyphenating the names of the two places. Other forms also exist as combinations with the definite article: e.g. von der or von dem → "vom" ("of the"), zu der → "zur" or zu dem → "zum" ("of the", "in the", "at the").[10] Particularly between the late 18th and early 20th century, when an increasing number of unlanded commoners were ennobled, the "von" was typically simply put in front of a person's surname. When a person with the common occupational surname of "Meyer" received nobility, they would thus simply become "von Meyer".
When sorting noble—as well as non-noble—names in alphabetic sequence, any prepositions or (former) titles are ignored.[11] Name elements which have developed from honorary functions, such as Schenk (short for Mundschenk, i.e., "cup-bearer"), are also overlooked.[12] Nobiliary particles are not capitalised unless they begin a sentence, and then they are usually skipped,[13] unless this creates confusion. In this, German-language practice differs from Dutch in the Netherlands, where the particle van is usually capitalised when mentioned without preceding given names or initials, and from Dutch in Belgium, where the name particle Van is always capitalised.

Although nobility as a class is no longer recognised in Germany and enjoys no legal privileges, institutions exist that carry on the legal tradition of pre-1919 nobiliary law, which in Germany today is subsumed under Sonderprivatrecht, 'special private law'. The Deutscher Adelsrechtsausschuss, 'German Commission on Nobiliary Law', can decide matters such as lineage, legitimacy, and a person's right to bear a name of nobility, in accordance with codified nobiliary law as it existed prior to 1919. The Commission's rulings are generally non-binding for individuals and establish no rights or privileges that German authorities or courts would have to consider or observe. However, they are binding for all German nobility associations recognized by CILANE (Commission d'information et de liaison des associations nobles d'Europe).

In 1919, nobiliary particles and titles became part of the surname. Therefore, they can be transmitted according to civil law, for example from wife to husband, to illegitimate children and by way of adoption. The only difference from normal surnames is that noble surnames are inflected according to gender. Some impoverished nobles offered adoptions for money in the 20th century, and the adoptees in turn adopted others extensively, creating a "flood" of fake nobility. A noble or noble-sounding surname does not convey nobility to those not born legitimately of a noble father, and these persons are not allowed to join a nobility association. Persons who bear a noble or noble-sounding surname without belonging to the historical nobility according to Salic law are classified as Nichtadelige Namensträger, 'non-noble name-carriers'. The inflation of fake nobility is one of the major concerns of the Adelsrechtsausschuss, and it is up to the commission to determine whether a person should be considered noble or non-noble. For instance, the German-American businessman Frédéric Prinz von Anhalt was born as Hans Robert Lichtenberg in Germany. He was married to Zsa Zsa Gabor and was adopted by Princess Marie-Auguste of Anhalt in 1980, allegedly arranged by the title dealer Hans Hermann Weyer; hence he is one of the 'non-noble name-carriers'.

In special cases, for example when a family is about to die out or when a daughter inherits the family estate and marries a commoner, the Adelsrechtsausschuss can grant a dispensation from Salic law, allowing for a one-time transfer of a noble surname, contrary to nobiliary law, to a person considered non-noble. Several criteria are most important in such cases. The Adelsrechtsausschuss does not recognize ennoblements made by heads of formerly ruling houses, but the associations of the formerly ruling and mediatized houses of Germany send representatives to the commission.
Such a dispensation, the so-called Nichtbeanstandung ('non-objection'), results in the factual ennoblement of the recipient (even though the term is not applied), making Germany one of the few republics where it is still possible for non-nobles to join the ranks of the nobility even though there is no monarch who can ennoble anymore. However, dispensations are granted only in the most exceptional cases, as they infringe on the rights of a theoretical future monarch. When a person is granted a dispensation by the Adelsrechtsausschuss, he becomes the progenitor of a new noble family, which consists of all of his legitimate male-line descendants in accordance with nobiliary law. They are considered equal to nobles in all regards, and allowed to join nobility associations.[14]

A family whose nobility dates back to at least the 14th century may be called Uradel, or Alter Adel ("ancient nobility",[15] or "old nobility"). This contrasts with Briefadel ("patent nobility"): nobility granted by letters patent. The first known such document is from September 30, 1360, for Wyker Frosch in Mainz.[16] The term Uradel was not without controversy, and the concept was seen by some[who?] as an arbitrary distinction invented by the Kingdom of Prussia.

Hochadel ("upper nobility", or "high nobility") were those noble houses which ruled sovereign states within the Holy Roman Empire and, later, in the German Confederation and the German Empire. They were royalty; the heads of these families were entitled to be addressed by some form of "Majesty" or "Highness". These were the families of kings (Bavaria, Hanover, Prussia, Saxony, and Württemberg), grand dukes (Baden, Hesse and by Rhine, Luxembourg, Mecklenburg-Schwerin, Mecklenburg-Strelitz, Oldenburg and Saxe-Weimar-Eisenach), reigning dukes (Anhalt, Brunswick, Schleswig-Holstein, Nassau, Saxe-Altenburg, Saxe-Coburg and Gotha, Saxe-Meiningen), and reigning princes (Hohenzollern-Hechingen, Hohenzollern-Sigmaringen, Liechtenstein, Lippe, Reuss, Schaumburg-Lippe, Schwarzburg, and Waldeck-Pyrmont).

The Hochadel also included the Empire's formerly quasi-sovereign families whose domains had been mediatised within the German Confederation by 1815, yet preserved the legal right to continue royal intermarriage with still-reigning dynasties (Ebenbürtigkeit). These quasi-sovereign families comprised mostly princely and comital families, but also included a few ducal families of Belgian and Dutch origin (Arenberg, Croÿ, Looz-Corswarem). Information on these families constituted the second section of Justus Perthes' entries on reigning, princely, and ducal families in the Almanach de Gotha.

During the unification of Germany, mainly from 1866 to 1871, the states of Hanover, Hesse-Kassel, Hohenzollern-Hechingen, Hohenzollern-Sigmaringen (in 1850), Schleswig-Holstein and Nassau were absorbed into Prussia. The former ruling houses of these states were still considered Hochadel under laws adopted by the German Empire. In addition, the ruling families of Hohenzollern-Hechingen and Hohenzollern-Sigmaringen were accorded the dynastic rights of a cadet branch of the Royal House of Prussia after yielding sovereignty to their royal kinsmen. The exiled heirs to Hanover and Nassau eventually regained sovereignty by being allowed to inherit, respectively, the crowns of Brunswick (1914) and Luxembourg (1890).

Nobility that held legal privileges until 1918 greater than those enjoyed by commoners, but less than those enjoyed by the Hochadel, were considered part of the lower nobility or Niederer Adel.
Most were untitled, only making use of the particle von in their surnames. Higher-ranking noble families of the Niederer Adel bore such hereditary titles as Edler (lord), Ritter (knight), Freiherr (baron) and Graf (count). Although most German counts belonged officially to the lower nobility, those who were mediatised belonged to the Hochadel, the heads of their families being entitled to be addressed as Erlaucht ("Illustrious Highness"), rather than simply as Hochgeboren ("High-born"). There were also some German noble families, especially in Austria, Prussia and Bavaria, whose heads bore the titles of Fürst (prince) or Herzog (duke); however, never having exercised a degree of sovereignty, they were accounted members of the lower nobility (e.g., Bismarck, Blücher, Putbus, Hanau, Henckel von Donnersmarck, Pless, Wrede).

The titles of elector, grand duke, archduke, duke, landgrave, margrave, count palatine, prince and Reichsgraf were borne by rulers who belonged to Germany's Hochadel. The titles of other counts, as well as those of barons (Freiherren), lords (Herren) and landed knights (Ritter),[c] were borne by noble, non-reigning families. The vast majority of the German nobility, however, inherited no titles, and were usually distinguishable only by the nobiliary particle von in their surnames.
https://en.wikipedia.org/wiki/German_nobility
The concept of Germany as a distinct region in Central Europe can be traced to Julius Caesar, who referred to the unconquered area east of the Rhine as Germania, thus distinguishing it from Gaul. The victory of the Germanic tribes in the Battle of the Teutoburg Forest (AD 9) prevented annexation by the Roman Empire, although the Roman provinces of Germania Superior and Germania Inferior were established along the Rhine. Following the Fall of the Western Roman Empire, the Franks conquered the other West Germanic tribes. When the Frankish Empire was divided among Charles the Great's heirs in 843, the eastern part became East Francia, and later the Kingdom of Germany. In 962, Otto I became the first Holy Roman Emperor of the Holy Roman Empire, the medieval German state.

During the High Middle Ages, the Hanseatic League, dominated by German port cities, established itself along the Baltic and North Seas. The growth of a crusading element within German Christendom led to the State of the Teutonic Order along the Baltic coast in what would later become Prussia. In the Investiture Controversy, the German Emperors resisted Catholic Church authority. In the Late Middle Ages, the regional dukes, princes, and bishops gained power at the expense of the emperors. Martin Luther led the Protestant Reformation within the Catholic Church after 1517, as the northern and eastern states became Protestant, while most of the southern and western states remained Catholic. The Thirty Years' War, a civil war from 1618 to 1648, brought tremendous destruction to the Holy Roman Empire. The estates of the empire attained great autonomy in the Peace of Westphalia, the most important being Austria, Prussia, Bavaria and Saxony.

With the Napoleonic Wars, feudalism fell away and the Holy Roman Empire was dissolved in 1806. Napoleon established the Confederation of the Rhine, a union of German client states, but after the French defeat, the German Confederation was established under Austrian presidency. The German revolutions of 1848–1849 failed, but the Industrial Revolution modernized the German economy, leading to rapid urban growth and the emergence of the socialist movement. Prussia, with its capital Berlin, grew in power. German universities became world-class centers for science and humanities, while music and art flourished. The unification of Germany was achieved under the leadership of the Chancellor Otto von Bismarck with the formation of the German Empire in 1871. The new Reichstag, an elected parliament, had only a limited role in the imperial government. Germany joined the other powers in colonial expansion in Africa and the Pacific.

By 1900, Germany was the dominant power on the European continent and its rapidly expanding industry had surpassed Britain's while provoking it in a naval arms race. Germany led the Central Powers in World War I, but was defeated, partly occupied, forced to pay war reparations, and stripped of its colonies and significant territory along its borders. The German Revolution of 1918–1919 ended the German Empire with the abdication of Wilhelm II in 1918 and established the Weimar Republic, an ultimately unstable parliamentary democracy. In January 1933, Adolf Hitler, leader of the Nazi Party, used the economic hardships of the Great Depression along with popular resentment over the terms imposed on Germany at the end of World War I to establish a totalitarian regime. This Nazi Germany made racism, especially antisemitism, a central tenet of its policies, and became increasingly aggressive with its territorial demands, threatening war if they were not met.
Germany quickly remilitarized, annexed its German-speaking neighbors and invaded Poland, triggering World War II. During the war, the Nazis established a systematic genocide program known as the Holocaust, which killed 11 million people, including 6 million Jews (representing two-thirds of the European Jewish population). By 1944, the German Army was pushed back on all fronts until finally collapsing in May 1945. Under occupation by the Allies, denazification efforts took place, large populations were displaced from formerly German-occupied territories, and German territories were split up by the victorious powers, with those in the east annexed by Poland and the Soviet Union. Germany spent the entirety of the Cold War era divided into the NATO-aligned West Germany and Warsaw Pact-aligned East Germany. Germans also fled from Communist areas into West Germany, which experienced rapid economic expansion and became the dominant economy in Western Europe.

In 1989, the Berlin Wall was opened, the Eastern Bloc collapsed, and East and West Germany were reunited in 1990. The Franco-German friendship became the basis for the political integration of Western Europe in the European Union. In 1998–1999, Germany was one of the founding countries of the eurozone. Germany remains one of the economic powerhouses of Europe, contributing about a quarter of the eurozone's annual gross domestic product. In the early 2010s, Germany played a critical role in trying to resolve the escalating euro crisis, especially concerning Greece and other Southern European nations. In 2015, Germany faced the European migrant crisis as the main receiver of asylum seekers from Syria and other troubled regions. Germany opposed Russia's 2022 invasion of Ukraine and decided to strengthen its armed forces.

Pre-human apes such as Danuvius guggenmosi, who were present in Germany over 11 million years ago, are theorized to be among the earliest apes to walk on two legs, before other species and genera such as Australopithecus.[1] The discovery of the Homo heidelbergensis mandible in 1907 affirms archaic human presence in Germany by at least 600,000 years ago,[2] while stone tools have been dated to as far back as 1.33 million years ago.[3] The oldest complete set of hunting weapons ever found anywhere in the world was excavated from a coal mine in Schöningen, Lower Saxony. Between 1994 and 1998, eight 380,000-year-old wooden javelins between 1.82 and 2.25 m (5.97 and 7.38 ft) in length were eventually unearthed.[4][5] One of the oldest buildings in the world and one of the oldest pieces of art were found in Bilzingsleben.[6]

In 1856, the fossilized bones of an extinct human species were salvaged from a limestone grotto in the Neander valley near Düsseldorf, North Rhine-Westphalia. The archaic nature of the fossils, now known to be around 40,000 years old, was recognized and the characteristics published in the first-ever paleoanthropologic species description in 1858 by Hermann Schaaffhausen.[7] The species was named Homo neanderthalensis (Neanderthal man) in 1864.

The oldest traces of Homo sapiens in Germany were found in the cave Ilsenhöhle in Ranis, where up to 47,500-year-old remains were discovered, among the oldest in Europe.[8] The remains of Paleolithic early modern human occupation uncovered and documented in several caves in the Swabian Jura include various mammoth ivory sculptures that rank among the oldest uncontested works of art, and several flutes, made of bird bone and mammoth ivory, that are confirmed to be the oldest musical instruments ever found.
The 41,000-year-old Löwenmensch figurine represents the oldest uncontested figurative work of art, and the 40,000-year-old Venus of Hohle Fels has been asserted as the oldest uncontested object of human figurative art ever discovered.[9][10][11][12] These artefacts are attributed to the Aurignacian culture.

Between 12,900 and 11,700 years ago, north-central Germany was part of the Ahrensburg culture (named for Ahrensburg). The first groups of early farmers to migrate into Europe, distinct from the indigenous hunter-gatherers, came from a population in western Anatolia at the beginning of the Neolithic period, between 10,000 and 8,000 years ago.[13]

Central Germany was one of the primary areas of the Linear Pottery culture (c. 5500 BC – c. 4500 BC), which was partially contemporary with the Ertebølle culture (c. 5300 BC – c. 3950 BC) of Denmark and northern Germany. The construction of the Central European Neolithic circular enclosures falls in this time period, with the best-known and oldest being the Goseck circle, constructed c. 4900 BC. Afterwards, Germany was part of the Rössen culture, Michelsberg culture and Funnelbeaker culture (c. 4600 BC – c. 2800 BC). The oldest traces of the use of the wheel and wagon ever found are located at a northern German Funnelbeaker culture site and date to around 3400 BC.[14]

The settlers of the Corded Ware culture (c. 2900 BC – c. 2350 BC), which had spread all over the fertile plains of Central Europe during the Late Neolithic, were of Indo-European ancestry. The Indo-Europeans had, via mass migration, arrived in the heartland of Europe around 4,500 years ago.[16]

By the late Bronze Age, the Urnfield culture (c. 1300 BC – c. 750 BC) had replaced the Bell Beaker, Unetice and Tumulus cultures in central Europe,[17] whilst the Nordic Bronze Age had developed in Scandinavia and northern Germany. The name comes from the custom of cremating the dead and placing their ashes in urns, which were then buried in fields. The first usage of the name occurred in publications about grave sites in southern Germany in the late 19th century.[18][19] Over much of Europe, the Urnfield culture followed the Tumulus culture and was succeeded by the Hallstatt culture.[20] The Italic peoples, including the Latins, from whom the Romans emerged, came from the Urnfield culture of central Europe.[21][22][23]

The Hallstatt culture, which had developed from the Urnfield culture, was the predominant Western and Central European culture from the 12th to 8th centuries BC and during the early Iron Age (8th to 6th centuries BC). It was followed by the La Tène culture (5th to 1st centuries BC). The people who had adopted these cultural characteristics in central and southern Germany are regarded as Celts. Whether and how the Celts are related to the Urnfield culture remains disputed. However, Celtic cultural centres developed in central Europe during the late Bronze Age (c. 1200 BC until 700 BC). Some, like the Heuneburg, the oldest city north of the Alps,[24] grew to become important cultural centres of the Iron Age in Central Europe that maintained trade routes to the Mediterranean. In the 5th century BC the Greek historian Herodotus mentioned a Celtic city at the Danube – Pyrene – which historians attribute to the Heuneburg.

Beginning around 700 BC (or later), Germanic peoples (Germanic tribes) from southern Scandinavia and northern Germany expanded south and gradually replaced the Celtic peoples in Central Europe.[25][26][27][28][29][30] The ethnogenesis of the Germanic tribes remains debated.
However, for author Averil Cameron "it is obvious that a steady process" occurred during the Nordic Bronze Age, or at the latest during the Pre-Roman Iron Age[33] (Jastorf culture). From their homes in southern Scandinavia and northern Germany the tribes began expanding south, east and west during the 1st century BC,[34] and came into contact with the Celtic tribes of Gaul, as well as with Iranic,[35] Baltic,[36] and Slavic cultures in Central/Eastern Europe.[37]

Factual and detailed knowledge about the early history of the Germanic tribes is rare. Researchers have to be content with the records of the tribes' affairs with the Romans, linguistic conclusions, archaeological discoveries and the relatively new yet promising results of archaeogenetic study.[38] In the mid-1st century BC, Republican Roman statesman Julius Caesar erected the first known bridges across the Rhine during his campaign in Gaul and led a military contingent across and into the territories of the local Germanic tribes. After several days, and having made no contact with Germanic troops (who had retreated inland), Caesar returned to the west of the river.[39] By 60 BC, the Suebi tribe under chieftain Ariovistus had conquered lands of the Gallic Aedui tribe to the west of the Rhine. Consequent plans to populate the region with Germanic settlers from the east were vehemently opposed by Caesar, who had already launched his ambitious campaign to subjugate all Gaul. Julius Caesar defeated the Suebi forces in 58 BC in the Battle of Vosges and forced Ariovistus to retreat across the Rhine.[40][41]

Augustus, first Roman emperor, considered conquest beyond the Rhine and the Danube not only regular foreign policy but also necessary to counter Germanic incursions into a still rebellious Gaul. Forts and commercial centers were established along the rivers. Some tribes, such as the Ubii, consequently allied with Rome and readily adopted advanced Roman culture. During the 1st century CE Roman legions conducted extended campaigns into Germania magna, the area north of the Upper Danube and east of the Rhine, attempting to subdue the various tribes. Roman ideas of administration, the imposition of taxes and a legal framework were frustrated by the total absence of an infrastructure. Germanicus's campaigns, for example, were almost exclusively characterized by frequent massacres of villagers and indiscriminate pillaging. The tribes, however, maintained their elusive identities. A coalition of tribes under the Cherusci chieftain Arminius, who was familiar with Roman tactical doctrines, defeated a large Roman force in the Battle of the Teutoburg Forest. Consequently, Rome resolved to permanently establish the Rhine/Danube border and refrain from further territorial advance into Germania.[42][43] By AD 100 the frontier along the Rhine and the Danube and the Limes Germanicus was firmly established. Several Germanic tribes lived under Roman rule south and west of the border, as described in Tacitus's Germania.
Austria formed the regular provinces of Noricum and Raetia.[44][45][46] The provinces Germania Inferior (with the capital situated at Colonia Claudia Ara Agrippinensium, modern Cologne) and Germania Superior (with its capital at Mogontiacum, modern Mainz) were formally established in 85 AD, after long campaigns, as lasting military control was confined to the lands surrounding the rivers.[47] Christianity was introduced to Roman-controlled western Germania before the Middle Ages, with Christian religious structures such as the Aula Palatina of Trier built during the reign of Constantine I (r. 306–337).[48]

Rome's Third Century Crisis coincided with the emergence of a number of large West Germanic tribes: the Alamanni, Franks, Bavarii, Chatti, Saxons, Frisii, Sicambri, and Thuringii. By the 3rd century the Germanic-speaking peoples began to migrate beyond the limes and the Danube frontier.[49] Several large tribes – the Visigoths, Ostrogoths, Vandals, Burgundians, Lombards, Saxons and Franks – migrated and played their part in the decline of the Roman Empire and the transformation of the old Western Roman Empire.[50] By the end of the 4th century the Huns invaded eastern and central Europe, establishing the Hunnic Empire. The event triggered the Migration Period.[51] Hunnic hegemony over a vast territory in central and eastern Europe lasted until the death of Attila's son Dengizich in 469.[52] Another pivotal moment in the Migration Period was the Crossing of the Rhine in December of 406 by a large group of tribes including Vandals, Alans and Suebi who settled permanently within the crumbling Western Roman Empire.[53]

Stem duchies (German: Stammesherzogtümer) in Germany refer to the traditional territory of the various Germanic tribes. The concept of such duchies survived especially in the areas which by the 9th century would constitute East Francia,[54] which included the Duchy of Bavaria, the Duchy of Swabia, the Duchy of Saxony, the Duchy of Franconia and the Duchy of Thuringia,[55] unlike the County of Burgundy or Lorraine in Middle Francia further west.[56][57]

The Salian emperors (reigned 1027–1125) retained the stem duchies as the major divisions of Germany, but they became increasingly obsolete during the early high-medieval period under the Hohenstaufen, and Frederick Barbarossa finally abolished them in 1180 in favour of more numerous territorial duchies. Successive kings of Germany founded a series of border counties or marches in the east and the north. These included Lusatia, the North March (which would become Brandenburg and the heart of the future Prussia), and the Billung March. In the south, the marches included Carniola, Styria, and the March of Austria that would become Austria.

The Western Roman Empire fell in 476 with the deposition of Romulus Augustus by the Germanic foederati leader Odoacer, who became the first King of Italy.[58] Afterwards, the Franks, like other post-Roman Western Europeans, emerged as a tribal confederacy in the Middle Rhine-Weser region, in the territory soon to be called Austrasia (the "eastern land"), the northeastern portion of the future Kingdom of the Merovingian Franks. As a whole, Austrasia comprised parts of present-day France, Germany, Belgium, Luxembourg and the Netherlands. Unlike the Alamanni to their south in Swabia, they absorbed large swaths of former Roman territory as they spread west into Gaul, beginning in 250. Clovis I of the Merovingian dynasty conquered northern Gaul in 486 and in the Battle of Tolbiac in 496 the Alemanni tribe in Swabia, which eventually became the Duchy of Swabia.
By 500, Clovis had united all the Frankish tribes, ruled all of Gaul[59] and was proclaimed King of the Franks between 509 and 511.[60] Clovis, unlike most Germanic rulers of the time, was baptized directly into Roman Catholicism instead of Arianism. His successors would cooperate closely with papal missionaries, among them Saint Boniface. After the death of Clovis in 511, his four sons partitioned his kingdom, including Austrasia. Authority over Austrasia passed back and forth from autonomy to royal subjugation, as successive Merovingian kings alternately united and subdivided the Frankish lands.[61]

During the 5th and 6th centuries the Merovingian kings conquered the Thuringii (531 to 532), the Kingdom of the Burgundians and the principality of Metz and defeated the Danes, the Saxons and the Visigoths.[62] King Chlothar I (558 to 561) ruled the greater part of what is now Germany and undertook military expeditions into Saxony, while the South-east of what is modern Germany remained under the influence of the Ostrogoths. Saxons controlled the area from the northern seaboard to the Harz Mountains and the Eichsfeld in the south.[63]

The Merovingians placed the various regions of their Frankish Empire under the control of semi-autonomous dukes – either Franks or local rulers,[64] and followed imperial Roman strategic traditions of social and political integration of the newly conquered territories.[65][66] While allowed to preserve their own legal systems,[67] the conquered Germanic tribes were pressured to abandon the Arian Christian faith.[68]

In 718 Charles Martel waged war against the Saxons in support of the Neustrians. In 743 his son Carloman in his role as Mayor of the Palace renewed the war against the Saxons, who had allied with and aided the duke Odilo of Bavaria.[69] The Catholic Franks, who by 750 controlled a vast territory in Gaul, north-western Germany, Swabia, Burgundy and western Switzerland that included the alpine passes, allied with the Curia in Rome against the Lombards, who posed a permanent threat to the Holy See.[59] Pressed by Liutprand, King of the Lombards, a Papal envoy for help had already been sent to the de facto ruler Charles Martel after his victory in 732 over the forces of the Umayyad Caliphate at the Battle of Tours; however, a lasting and mutually beneficial alliance would only materialize after Charles's death, under his successor as Duke of the Franks, Pepin the Short.[70]

In 751 Pippin III, Mayor of the Palace under the Merovingian king, himself assumed the title of king and was anointed by the Church. Pope Stephen II bestowed upon him the hereditary title of Patricius Romanorum as protector of Rome and St. Peter[71] in response to the Donation of Pepin, which guaranteed the sovereignty of the Papal States. Charles the Great (who ruled the Franks from 768 to 814) launched a decades-long military campaign against the Franks' heathen rivals, the Saxons and the Avars. The campaigns and insurrections of the Saxon Wars lasted from 772 to 804. The Franks eventually overwhelmed the Saxons and Avars, forcibly converted the people to Christianity, and annexed their lands to the Carolingian Empire.

After the death of Frankish king Pepin the Short in 768, his oldest son "Charlemagne" ("Charles the Great") consolidated his power over and expanded the kingdom. Charlemagne ended 200 years of Royal Lombard rule with the Siege of Pavia, and in 774 he installed himself as King of the Lombards.
Loyal Frankish nobles replaced the old Lombard aristocracy following a rebellion in 776.[72] The next 30 years of his reign were spent ruthlessly strengthening his power in Francia and on the conquest of the Slavs and Pannonian Avars in the east and of tribes such as the Saxons and the Bavarians.[73][74] On Christmas Day, 800 AD, Charlemagne was crowned Imperator Romanorum (Emperor of the Romans) in Rome by Pope Leo III.[74]

Fighting among Charlemagne's three grandsons over the continuation of the custom of partible inheritance or the introduction of primogeniture caused the Carolingian empire to be partitioned into three parts by the Treaty of Verdun of 843.[75] Louis the German received the Eastern portion of the kingdom, East Francia, all lands east of the Rhine river and to the north of Italy. This encompassed the territories of the German stem duchies – Franks, Saxons, Swabians, and Bavarians – that were united in a federation under the first non-Frankish king Henry the Fowler, who ruled from 919 to 936.[76] The royal court permanently moved between a series of strongholds, called Kaiserpfalzen, which developed into economic and cultural centers. Aachen Palace played a central role, as the local Palatine Chapel served as the official site for all royal coronation ceremonies during the entire medieval period until 1531.[74][77]

In 936, Otto I was crowned German king at Aachen, in 961 King of Italy in Pavia, and crowned emperor by Pope John XII in Rome in 962. The tradition of the German King as protector of the Kingdom of Italy and the Latin Church resulted in the term Holy Roman Empire in the 12th century. The name, which came to be identified with Germany, continued to be used officially, with the extension Nationis Germanicæ (of the German Nation) added after the last imperial coronation in Rome in 1452, until the Empire's dissolution in 1806.[76] Otto strengthened the royal authority by re-asserting the old Carolingian rights over ecclesiastical appointments.[78] Otto wrested from the nobles the powers of appointment of the bishops and abbots, who controlled large land holdings. Additionally, Otto revived the old Carolingian program of appointing missionaries in the border lands. Otto continued to support celibacy for the higher clergy, so ecclesiastical appointments never became hereditary. By granting lands to the abbots and bishops he appointed, Otto actually turned these bishops into "princes of the Empire" (Reichsfürsten).[79] In this way, Otto was able to establish a national church. Outside threats to the kingdom were contained with the decisive defeat of the Hungarian Magyars at the Battle of Lechfeld in 955. The Slavs between the Elbe and the Oder rivers were also subjugated. Otto marched on Rome and drove John XII from the papal throne and for years controlled the election of the pope, setting a firm precedent for imperial control of the papacy.[80][81]

Otto I was followed on the throne by his son Otto II (955–983), emperor 973–983, Otto II's wife Theophanu (955–991), regent 983–991, Otto I's widow Adelaide of Italy (931–999), regent 991–995, and his grandson Otto III (980–1002), emperor 996–1002. Otto III died childless and was succeeded by his second cousin Henry II, who likewise died childless as the last emperor of the Ottonian dynasty. Henry II was succeeded by Conrad II, a great-great-grandson of Otto I and the first emperor of the Salian dynasty.
During the reign of Conrad II's son, Henry III (1039 to 1056), the empire supported the Cluniac reforms of the Church, the Peace of God, prohibition of simony (the purchase of clerical offices), and required celibacy of priests. Imperial authority over the Pope reached its peak. However, Rome reacted with the creation of the College of Cardinals and Pope Gregory VII's series of clerical reforms. Pope Gregory insisted in his Dictatus Papae on absolute papal authority over appointments to ecclesiastical offices. The subsequent conflict, in which emperor Henry IV was compelled to submit to the Pope at Canossa in 1077 after having been excommunicated, came to be known as the Investiture Controversy. In 1122, a temporary reconciliation was reached between Henry V and the Pope with the Concordat of Worms. With the conclusion of the dispute the Roman church and the papacy regained supreme control over all religious affairs.[83][84] Consequently, the imperial Ottonian church system (Reichskirche) declined. The dispute's conclusion also ended the royal/imperial tradition of appointing selected powerful clerical leaders to counter the Imperial secular princes.[85]

Between 1095 and 1291 the various campaigns of the crusades to the Holy Land took place. Knightly religious orders were established, including the Knights Templar, the Knights of St John (Knights Hospitaller), and the Teutonic Order.[86][87]

The term sacrum imperium (Holy Empire) was first used officially by Friedrich I in 1157,[88] but the words Sacrum Romanum Imperium, Holy Roman Empire, were only combined in July 1180 and would appear consistently on official documents only from 1254 onwards.[89]

The Hanseatic League was a commercial and defensive alliance of the merchant guilds of towns and cities in northern and central Europe that dominated marine trade in the Baltic Sea, the North Sea and along the connected navigable rivers during the Late Middle Ages (12th to 15th centuries). Each of the affiliated cities retained the legal system of its sovereign and, with the exception of the Free imperial cities, had only a limited degree of political autonomy.[90] Beginning with an agreement of the cities of Lübeck and Hamburg, guilds cooperated in order to strengthen and combine their economic assets, like securing trading routes and tax privileges, to control prices and better protect and market their local commodities. Important centers of commerce within the empire, such as Cologne on the Rhine river and Bremen on the North Sea, joined the union, which resulted in greater diplomatic esteem.[91] Recognizing the great economic potential, the various regional princes granted favorable, often exclusive, charters for commercial operations.[92] During its zenith the alliance maintained trading posts and kontors in virtually all cities from London and Edinburgh in the west to Novgorod in the east and Bergen in Norway. By the late 14th century the powerful league enforced its interests with military means, if necessary. This culminated in a war with the sovereign Kingdom of Denmark from 1361 to 1370. The principal city of the Hanseatic League remained Lübeck, where in 1356 the first general diet was held and its official structure was announced. The league declined after 1450 due to a number of factors, such as the 15th-century crisis, the territorial lords' shifting policies towards greater commercial control, the silver crisis and marginalization in the wider Eurasian trade network, among others.[93][94]

The Ostsiedlung (lit.
Eastern settlement) is the term for a process of largely uncoordinated immigration and chartering of settlement structures by ethnic Germans into territories already inhabited by Slavs and Balts east of the Saale and Elbe rivers, such as modern Poland and Silesia, and to the south into Bohemia, modern Hungary and Romania during the High Middle Ages, from the 11th to the 14th century.[95][96] The primary purpose of the early imperial military campaigns into the lands to the east during the 10th and 11th century was to punish and subjugate the local heathen tribes. Conquered territories were mostly lost after the troops had retreated, but eventually were incorporated into the empire as marches, fortified borderlands with garrisoned troops in strongholds and castles, who were to ensure military control and enforce the exaction of tributes. Contemporary sources do not support the idea of policies or plans for the organized settlement of civilians.[97]

Emperor Lothair II re-established feudal sovereignty over Poland, Denmark and Bohemia from 1135 and appointed margraves to turn the borderlands into hereditary fiefs and install a civilian administration. There is no discernible chronology of the immigration process as it took place in many individual efforts and stages, often even encouraged by the Slavic regional lords. However, the new communities were subjected to German law and customs. Total numbers of settlers were generally rather low and, depending on who held a numerical majority, populations usually assimilated into each other. In many regions only enclaves would persist, like Hermannstadt, founded by the Transylvanian Saxons in the medieval Hungarian Kingdom (today in Romania), who were called on by Geza II to repopulate the area as part of the Ostsiedlung, having arrived there and founded the city in 1147 (Saxons called these parts of Transylvania "Altland" to distinguish them from later immigrant Saxon settlements established in about 1220 by the Teutonic Order).[98][99]

In 1230, the Catholic monastic order of the Teutonic Knights launched the Prussian Crusade. The campaign, which was supported by the forces of Polish duke Konrad I of Masovia and initially intended to Christianize the Baltic Old Prussians, succeeded primarily in the conquest of large territories. The order, emboldened by imperial approval, quickly resolved to establish an independent state, without the consent of duke Konrad. Recognizing only papal authority and based on a solid economy, the order steadily expanded the Teutonic state during the following 150 years, engaging in several land disputes with its neighbors. Permanent conflicts with the Kingdom of Poland, the Grand Duchy of Lithuania, and the Novgorod Republic eventually led to military defeat and containment by the mid-15th century. The last Grand Master Albert of Brandenburg converted to Lutheranism in 1525 and turned the remaining lands of the order into the secular Duchy of Prussia.[100][101]

Henry V, great-grandson of Conrad II, who had overthrown his father Henry IV, became Holy Roman Emperor in 1111. Hoping to gain greater control over the church inside the Empire, Henry V appointed Adalbert of Saarbrücken as the powerful archbishop of Mainz in the same year. Adalbert began to assert the powers of the Church against secular authorities, that is, the Emperor. This precipitated the "Crisis of 1111" as yet another chapter of the long-term Investiture Controversy.[102] In 1137, the prince-electors turned back to the Hohenstaufen family for a candidate, Conrad III.
Conrad tried to divest his rival Henry the Proud of his two duchies, Bavaria and Saxony, which led to war in southern Germany as the empire was divided into two powerful factions. The faction of the Welfs or Guelphs (in Italian) supported the House of Welf of Henry the Proud, which was the ruling dynasty in the Duchy of Bavaria. The rival faction of the Waiblings or Ghibellines (in Italian) pledged allegiance to the Swabian House of Hohenstaufen. During this early period, the Welfs generally maintained ecclesiastical independence under the papacy and political particularism (the focus on ducal interests against the central imperial authority). The Waiblings, on the other hand, championed strict control of the church and a strong central imperial government.[103]

During the reign of the Hohenstaufen emperor Frederick I (Barbarossa), an accommodation was reached in 1156 between the two factions. The Duchy of Bavaria was returned to Henry the Proud's son Henry the Lion, duke of Saxony, who represented the Guelph party. However, the Margraviate of Austria was separated from Bavaria and turned into the independent Duchy of Austria by virtue of the Privilegium Minus in 1156.[104]

Having become wealthy through trade, the confident cities of Northern Italy, supported by the Pope, increasingly opposed Barbarossa's claim of feudal rule (Honor Imperii) over Italy. The cities united in the Lombard League and finally defeated Barbarossa in the Battle of Legnano in 1176. The following year a reconciliation was reached between the emperor and Pope Alexander III in the Treaty of Venice.[105] The 1183 Peace of Constance eventually settled that the Italian cities remained loyal to the empire but were granted local jurisdiction and full regal rights in their territories.[106]

In 1180, Henry the Lion was outlawed, Saxony was divided, and Bavaria was given to Otto of Wittelsbach, who founded the Wittelsbach dynasty, which was to rule Bavaria until 1918. From 1184 to 1186, the empire under Frederick I Barbarossa reached its cultural peak with the Diet of Pentecost held at Mainz and the marriage of his son Henry in Milan to the Norman princess Constance of Sicily.[107] The power of the feudal lords was undermined by the appointment of ministerials (unfree servants of the Emperor) as officials. Chivalry and court life flowered, as expressed in the scholastic philosophy of Albertus Magnus and the literature of Wolfram von Eschenbach.[108]

Between 1212 and 1250, Frederick II established a modern, professionally administered state from his base in Sicily. He resumed the conquest of Italy, leading to further conflict with the Papacy. In the Empire, extensive sovereign powers were granted to ecclesiastical and secular princes, leading to the rise of independent territorial states. The struggle with the Pope sapped the Empire's strength, as Frederick II was excommunicated three times. After his death, the Hohenstaufen dynasty fell, followed by an interregnum during which there was no Emperor (1250–1273). This interregnum came to an end with the election of a minor Swabian count, Rudolf of Habsburg, as emperor.[109][110]

The failure of negotiations between Emperor Louis IV and the papacy led to the 1338 Declaration at Rhense by six princes of the Imperial Estate to the effect that election by all or the majority of the electors automatically conferred the royal title and rule over the empire, without papal confirmation. As a result, the monarch was no longer subject to papal approbation and became increasingly dependent on the favour of the electors.
Between 1346 and 1378 Emperor Charles IV of Luxembourg, king of Bohemia, sought to restore imperial authority. The 1356 decree of the Golden Bull stipulated that all future emperors were to be chosen by a college of only seven – four secular and three clerical – electors. The secular electors were the King of Bohemia, the Count Palatine of the Rhine, the Duke of Saxony, and the Margrave of Brandenburg; the clerical electors were the Archbishops of Mainz, Trier, and Cologne.[111]

Between 1347 and 1351 Germany and almost the entire European continent were consumed by the most severe outbreak of the Black Death pandemic. Estimated to have caused the abrupt death of 30 to 60% of Europe's population, it led to widespread social and economic disruption and deep religious disaffection and fanaticism. Minority groups, and Jews in particular, were blamed, singled out and attacked. As a consequence, many Jews fled and resettled in Eastern Europe.[112][113]

Total population estimates of the German territories range around 5 to 6 million by the end of Henry III's reign in 1056 and about 7 to 8 million after Friedrich Barbarossa's rule in 1190.[114][115] The vast majority were farmers, typically in a state of serfdom under feudal lords and monasteries.[103] Towns gradually emerged and in the 12th century many new cities were founded along the trading routes and near imperial strongholds and castles. The towns were subjected to the municipal legal system. Cities such as Cologne, which had acquired the status of Imperial Free Cities, were no longer answerable to the local landlords or bishops, but immediate subjects of the Emperor and enjoyed greater commercial and legal liberties.[116] The towns were ruled by a council of the – usually mercantile – elite, the patricians. Craftsmen formed guilds, governed by strict rules, which sought to obtain control of the towns; a few were open to women. Society had diversified, but was divided into sharply demarcated classes of the clergy, physicians, merchants, various guilds of artisans, unskilled day labourers and peasants. Full citizenship was not available to paupers. Political tensions arose from issues of taxation, public spending, regulation of business, and market supervision, as well as the limits of corporate autonomy.[117]

Cologne's central location on the Rhine river placed it at the intersection of the major trade routes between east and west and was the basis of Cologne's growth.[118] The economic structures of medieval and early modern Cologne were characterized by the city's status as a major harbor and transport hub upon the Rhine. It was the seat of an archbishop, under whose patronage construction of the vast Cologne Cathedral began in 1240. The cathedral houses sacred Christian relics and it has since become a well-known pilgrimage destination. By 1288 the city had secured its independence from the archbishop (who relocated to Bonn), and was ruled by its burghers.[119]

Benedictine abbess Hildegard von Bingen wrote several influential theological, botanical, and medicinal texts, as well as letters, liturgical songs, poems, and arguably the oldest surviving morality play, Ordo Virtutum, while supervising brilliant miniature illuminations. About 100 years later, Walther von der Vogelweide became the most celebrated of the Minnesänger, who were Middle High German lyric poets.

Around 1439, Johannes Gutenberg of Mainz used movable-type printing and issued the Gutenberg Bible. He was the inventor of the printing press, thereby starting the Printing Revolution.
Cheap printed books and pamphlets played central roles in the spread of the Reformation and the Scientific Revolution.

Around the transition from the 15th to the 16th century, Albrecht Dürer from Nuremberg established his reputation across Europe as painter, printmaker, mathematician, engraver, and theorist when he was still in his twenties, and secured his standing as one of the most important figures of the Northern Renaissance.

Early modern European society gradually developed after the disasters of the 14th century as religious obedience and political loyalties declined in the wake of the Great Plague, the schism of the Church and prolonged dynastic wars. The rise of the cities and the emergence of the new burgher class eroded the societal, legal and economic order of feudalism.[127]

The commercial enterprises of the mercantile elites in the quickly developing cities in South Germany (such as Augsburg and Nuremberg), with the most prominent families being the Gossembrots, Fuggers (the wealthiest family in Europe during the fifteenth and sixteenth centuries[130]), Welsers, Hochstetters and Imholts, generated unprecedented financial means. As financiers to both the leading ecclesiastical and secular rulers, these families fundamentally influenced the political affairs in the empire during the fifteenth and sixteenth century.[131][132][133][134] The increasingly money-based economy also provoked social discontent among knights and peasants, and predatory "robber knights" became common.[135]

From 1438 the Habsburg dynasty, which had acquired control in the south-eastern empire over the Duchy of Austria, Bohemia and Hungary (the latter two after the death of King Louis II in 1526), managed to permanently occupy the position of the Holy Roman Emperor until 1806 (with the exception of the years between 1742 and 1745).

Some Europe-wide revolutions were born in the Empire: the combination of the first modern postal system established by Maximilian (with the management under the Taxis family) with the printing system invented by Gutenberg produced a communication revolution[136][137][138] – the Empire's decentralized nature made censorship difficult, and this combined with the new communication system to facilitate free expression, thus elevating cultural life. The system also helped the authorities to disseminate orders and policies, boosted the Empire's coherence in general, and helped reformers like Luther to broadcast their views and communicate with each other effectively, thus contributing to the religious Reformation.[139][140][141] Maximilian's military reforms, especially his development of the Landsknechte, caused a military revolution that broke the back of the knight class[142][143] and spread all over Europe shortly after his death.[144][145]

During his reign from 1493 to 1519, Maximilian I, in a combined effort with the Estates (who sometimes acted as opponents and sometimes as collaborators), his officials and his humanists, reformed the empire.
A dual system of Supreme Courts (the Reichskammergericht and the Reichshofrat) was established (with the Reichshofrat playing a more efficient role during the Early Modern period),[150] together with the formalized Reception of Roman Law;[151][152][153][154] the Imperial Diet (Reichstag) became the all-important political forum and the supreme legal and constitutional institution, which would act as a guarantee for the preservation of the Empire in the long run;[155][156] a Perpetual Public Peace (Ewiger Landfriede) was declared in 1495, with regional leagues and unions providing the supporting structure, together with the creation of the Reichskreise (Imperial Circles, which would serve to organize imperial armies, collect taxes and enforce orders of the imperial institutions);[157][158][159] the Imperial and Court Chanceries were combined to become the decisive government institution;[160][161] the Landsknechte that Maximilian created became a form of imperial army;[162] a national political culture began to emerge;[163][164] and the German language began to attain a unified form.[165][166] The political structure remained incomplete and piecemeal though, mainly due to the failure of the Common Penny (an imperial tax) that the Estates resisted.[150][a] Through many compromises between emperor and estates, however, a flexible, future-oriented problem-solving mechanism for the Empire was formed, together with a monarchy through which the emperor shared power with the Estates.[168][b] Whether the Reform also equated to a (successful or unsuccessful) nation-building process remains a matter of debate.[170]

The addition Nationis Germanicæ (of German Nation) to the emperor's title appeared first in the 15th century: in a 1486 law decreed by Frederick III and in 1512 in reference to the Imperial Diet in Cologne by Maximilian I. In 1525, the Heilbronn reform plan – the most advanced document of the German Peasants' War (Deutscher Bauernkrieg) – referred to the Reich as von Teutscher Nation (of German nation). During the fifteenth century, the term "German nation" witnessed a rise in use due to the growth of a "community of interests". The Estates also increasingly distinguished between their German Reich and the wider, "universal" Reich.[171]

In order to manage their ever-growing expenses, the Renaissance Popes of the 15th and early 16th century promoted the excessive sale of indulgences and offices and titles of the Roman Curia. In 1517, the monk Martin Luther published a pamphlet with 95 Theses that he posted in the town square of Wittenberg and handed copies of to feudal lords. Whether he nailed them to a church door at Wittenberg remains unclear. The list detailed 95 assertions that, he argued, represented corrupt practice of the Christian faith and misconduct within the Catholic Church. Although perhaps not Luther's chief concern, he received popular support for his condemnation of the sale of indulgences and clerical offices, the pope's and higher clergy's abuse of power, and his doubts of the very idea of the institution of the Church and the papacy.[172]

The Protestant Reformation was the first successful challenge to the Catholic Church and began in 1521 as Luther was outlawed at the Diet of Worms after his refusal to repent.
The ideas of the Reformation spread rapidly, as the new technology of the modern printing press ensured cheap mass copies and distribution of the theses; the spread was also helped by Emperor Charles V's wars with France and the Turks.[172] Hiding in the Wartburg Castle, Luther translated the Bible into German, thereby greatly contributing to the establishment of the modern German language. This is all the more remarkable since Luther spoke only a local dialect of minor importance at that time. After the publication of his Bible, his dialect supplanted others and constitutes to a great extent what is now modern German. With the protestation of the Lutheran princes at the Imperial Diet of Speyer in 1529 and the acceptance and adoption of the Lutheran Augsburg Confession by the Lutheran princes beginning in 1530, the separate Lutheran church was established.[173]

The German Peasants' War, which began in the southwest in Alsace and Swabia and spread further east into Franconia, Thuringia and Austria, was a series of economic and religious revolts of the rural lower classes, encouraged by the rhetoric of various radical religious reformers and Anabaptists against the ruling feudal lords. Although occasionally assisted by war-experienced noblemen like Götz von Berlichingen and Florian Geyer (in Franconia) and the theologian Thomas Müntzer (in Thuringia), the peasant forces lacked military structure, skill, logistics and equipment, and as many as 100,000 insurgents were eventually defeated and massacred by the territorial princes.[174]

The Catholic Counter-Reformation, initiated in 1545 at the Council of Trent, was spearheaded by the scholarly Jesuit order, which had been founded just five years prior by several clerics around Ignatius of Loyola. Its intent was to challenge and contain the Protestant Reformation via apologetic and polemical writings and decrees, ecclesiastical reconfiguration, wars and imperial political maneuverings. In 1547, Emperor Charles V defeated the Schmalkaldic League, a military alliance of Protestant rulers.[175] The 1555 Peace of Augsburg decreed the recognition of the Lutheran faith and the religious division of the empire. It also stipulated the ruler's right to determine the official confession in his principality (Cuius regio, eius religio). The Counter-Reformation ultimately failed to reintegrate the central and northern German Lutheran states. In 1608/1609 the Protestant Union and the Catholic League were formed.

The Thirty Years' War (1618–1648), which took place almost exclusively in the Holy Roman Empire, has its origins, which remain widely debated, in the unsolved and recurring conflicts of the Catholic and Protestant factions. The Catholic emperor Ferdinand II attempted to achieve the religious and political unity of the empire, while the opposing Protestant Union forces were determined to defend their religious rights. The religious motive served as the universal justification for the various territorial and foreign princes, who over the course of several stages joined either of the two warring parties in order to gain land and power.[176][177]

The conflict was sparked by the revolt of the Protestant nobility of Bohemia against emperor Matthias' succession policies. After the imperial triumph at the Battle of White Mountain and a short-lived peace, the war grew into a political European conflict through the intervention of King Christian IV of Denmark from 1625 to 1630, Gustavus Adolphus of Sweden from 1630 to 1648, and France under Cardinal Richelieu from 1635 to 1648.
The conflict increasingly evolved into a struggle between the French House of Bourbon and the House of Habsburg for predominance in Europe, for which the central German territories of the empire served as the battleground.[178]

The war ranks among the most catastrophic in history, as three decades of constant warfare and destruction left the land devastated. Marauding armies incessantly pillaged the countryside, seized and levied heavy taxes on cities, and indiscriminately plundered the food stocks of the peasantry. There were also countless bands of murderous outlaws and masses of sick, homeless and displaced people and invalid soldiers. Overall social and economic disruption caused a dramatic decline in population, the result of indiscriminate killings and rape, endemic infectious diseases, crop failures, famine, declining birth rates, wanton burglary, witch-hunts and the emigration of terrified people. Estimates vary between a 38% drop, from 16 million people in 1618 to 10 million by 1650, and a mere 20% drop, from 20 million to 16 million. The Altmark and Württemberg regions were especially hard hit; there it took generations to fully recover.[176][179]

The war was the last major religious struggle in mainland Europe and ended in 1648 with the Peace of Westphalia. It resulted in increased autonomy for the constituent states of the Holy Roman Empire, limiting the power of the emperor. Most of Alsace was ceded to France, Western Pomerania and Bremen-Verden were given to Sweden as Imperial fiefs, and the Netherlands officially left the Empire.[180]

The population of Germany reached about twenty million people by the mid-16th century, the great majority of whom were peasant farmers.[182]

The Protestant Reformation was a triumph for literacy and the new printing press.[183][c][185][186] Luther's translation of the Bible into High German (the New Testament was published in 1522; the Old Testament was published in parts and completed in 1534) was a decisive impulse for the increase of literacy in early modern Germany,[181] and stimulated the printing and distribution of religious books and pamphlets. From 1517 onward, religious pamphlets flooded Germany and much of Europe. The Reformation instigated a media revolution: by 1530 over 10,000 individual works had been published, with a total of ten million copies. Luther strengthened his attacks on Rome by depicting a "good" church against a "bad" one. It soon became clear that print could be used for propaganda in the Reformation for particular agendas. Reform writers used pre-Reformation styles, clichés, and stereotypes, and changed items as needed for their own purposes.[187] Especially effective were Luther's Small Catechism, for use by parents teaching their children, and Larger Catechism, for pastors.[188] Using the German vernacular, they expressed the Apostles' Creed in simpler, more personal, Trinitarian language. Illustrations in the newly translated Bible and in many tracts popularized Luther's ideas. Lucas Cranach the Elder, the painter patronized by the electors of Wittenberg, was a close friend of Luther, and illustrated Luther's theology for a popular audience.
He dramatized Luther's views on the relationship between the Old and New Testaments, while remaining mindful of Luther's careful distinctions about proper and improper uses of visual imagery.[189]

Luther's translation of the Bible into High German was also decisive for the German language and its evolution from Early New High German to Modern Standard German.[181] The publication of Luther's Bible was a decisive moment in the spread of literacy in early modern Germany,[181] and promoted the development of non-local forms of language and exposed all speakers to forms of German from outside their own area.[190]

Notable late fifteenth to early eighteenth-century polymaths include: Johannes Trithemius, one of the founders of modern cryptography, founder of steganography, as well as of bibliography and literary studies as branches of knowledge;[191][192][193] Conrad Celtes, the first and foremost German cartographic writer and "the greatest lyric genius and certainly the greatest organizer and popularizer of German Humanism";[194][195][196][197] Athanasius Kircher, described by Fletcher as "a founder figure of various disciplines—of geology (certainly vulcanology), musicology (as a surveyor of musical forms), museum curatorship, Coptology, to name a few—and might be claimed today as the first theorist of gravity and a long-term originator of the moving pictures (with his magic lantern shows). Through his many enthusiasms, moreover, he was the conduit of others' pursuits in the rapidly widening horizon of knowledge that marks the later Renaissance.";[198] and Gottfried Wilhelm Leibniz, one of the greatest, if not the greatest, "universal geniuses" of all time.[199][200]

Cartography developed strongly at the beginning of the sixteenth century, with Nuremberg as its center. Martin Waldseemüller and Matthias Ringmann's Universalis Cosmographia and the 1513 edition of Geography marked the climax of a cartographic revolution.[201][202] The emperor himself dabbled in cartography.[203] In 1515, Johannes Stabius (court astronomer under Maximilian I), Albrecht Dürer and the astronomer Konrad Heinfogel produced the first planispheres of both the southern and northern hemispheres, as well as the first printed celestial maps. These maps prompted a revival of interest in the field of uranometry throughout Europe.[204][205][206][207]

Astronomer Johannes Kepler from Weil der Stadt was one of the pioneering minds of empirical and rational research. Through rigorous application of the principles of the scientific method he formulated his laws of planetary motion. His ideas influenced the contemporary Italian scientist Galileo Galilei and provided fundamental mechanical principles for Isaac Newton's theory of universal gravitation.[208]

German colonies in the Americas existed because the Free Imperial Cities of Augsburg and Nuremberg were granted colonial rights in the Province of Venezuela, in the north of South America, in return for debts owed by the Holy Roman Emperor Charles V, who was also King of Spain. In 1528, Charles V issued a charter by which the Welser family received the rights to explore, rule and colonize the area, motivated also by the search for the legendary golden city of El Dorado. Their principal colony was Klein-Venedig. A colonial project that was never realized was Hanauish-Indies, intended by Friedrich Casimir, Count of Hanau-Lichtenberg, as a fief of the Dutch West India Company. The project failed due to a lack of funds and the outbreak of the Franco-Dutch War in 1672.
Frederick William, ruler of Brandenburg-Prussia from 1640 and later called the Great Elector, acquired East Pomerania via the Peace of Westphalia in 1648. He reorganized his loose and scattered territories and managed to throw off the vassalage of Prussia under the Kingdom of Poland during the Second Northern War.[212] In order to address the demographic problem of Prussia's largely rural population of about three million, he attracted the immigration and settlement of French Huguenots in urban areas. Many became craftsmen and entrepreneurs.[213] King Frederick William I, known as the Soldier King, who reigned from 1713 to 1740, established the structures for the highly centralized Prussian state and raised a professional army that was to play a central role.[214] He also successfully operated a command economy that some historians consider mercantilist.[215][216]

The total population of Germany (in its 1914 territorial extent) grew from 16 million in 1700 to 17 million in 1750 and reached 24 million in 1800. The 18th-century economy noticeably profited from the widespread practical application of the scientific method, as greater yields, a more reliable agricultural production and the introduction of hygienic standards positively affected the birth rate – death rate balance.[217]

Louis XIV of France waged a series of successful wars in order to extend the French territory. He occupied Lorraine (1670) and annexed the remainder of Alsace (1678–1681), which included the free imperial city of Straßburg. At the start of the Nine Years' War, he also invaded the Electorate of the Palatinate (1688–1697).[218] Louis established a number of courts whose sole function was to reinterpret historic decrees and treaties, the Treaties of Nijmegen (1678) and the Peace of Westphalia (1648) in particular, in favor of his policies of conquest. He considered the conclusions of these courts, the Chambres de réunion, sufficient justification for his boundless annexations. Louis' forces operated inside the Holy Roman Empire largely unopposed, because all available imperial contingents fought in Austria in the Great Turkish War. The Grand Alliance of 1689 took up arms against France and countered any further military advances of Louis. The conflict ended in 1697, as both parties agreed to peace talks after each side had realized that a total victory was financially unattainable. The Treaty of Ryswick provided for the return of Lorraine and Luxembourg to the empire and the abandonment of French claims to the Palatinate.[219]

After the last-minute relief of Vienna from a siege and imminent seizure by a Turkish force in 1683, the combined troops of the Holy League, which had been founded the following year, embarked on the military containment of the Ottoman Empire and reconquered Hungary in 1687.[220] The Papal States, the Holy Roman Empire, the Polish–Lithuanian Commonwealth, the Republic of Venice and, from 1686, Russia had joined the league under the leadership of Pope Innocent XI. Prince Eugene of Savoy, who served under Emperor Leopold I, took supreme command in 1697 and decisively defeated the Ottomans in a series of spectacular battles and manoeuvres. The 1699 Treaty of Karlowitz marked the end of the Great Turkish War, and Prince Eugene continued his service for the Habsburg monarchy as president of the War Council. He effectively ended Turkish rule over most of the territorial states in the Balkans during the Austro-Turkish War of 1716–1718.
The Treaty of Passarowitz left Austria free to establish royal domains in Serbia and the Banat and to maintain hegemony in Southeast Europe, on which the future Austrian Empire was based.[221][222]

Frederick II "the Great" is best known for his military genius, his unique utilisation of the highly organized army to make Prussia one of the great powers in Europe, and his escape from almost certain national disaster at the last minute. He was also an artist, author and philosopher, who conceived and promoted the concept of enlightened absolutism.[223][224]

Austrian empress Maria Theresa succeeded in bringing the 1740 to 1748 war for recognition of her succession to the throne to a favorable conclusion. However, Silesia was permanently lost to Prussia as a consequence of the Silesian Wars and the Seven Years' War. The 1763 Treaty of Hubertusburg ruled that Austria and Saxony had to relinquish all claims to Silesia. Prussia, which had nearly doubled its territory, was eventually recognized as a great European power, with the consequence that the politics of the following century were fundamentally influenced by German dualism, the rivalry of Austria and Prussia for supremacy in Central Europe.[225]

The concept of enlightened absolutism, although rejected by the nobility and citizenry, was advocated in Prussia and Austria and implemented from 1763. Prussian king Frederick II defended the idea in an essay and argued that the benevolent monarch simply is the first servant of the state, who exercises his absolute political power for the benefit of the population as a whole. A number of legal reforms (e.g. the abolition of torture and the emancipation of the rural population and the Jews), the reorganization of the Prussian Academy of Sciences, the introduction of compulsory education for boys and girls, and the promotion of religious tolerance, among others, caused rapid social and economic development.[226]

From 1772 to 1795 Prussia instigated the partitions of Poland by occupying the western territories of the former Polish–Lithuanian Commonwealth. Austria and Russia resolved to acquire the remaining lands, with the effect that Poland ceased to exist as a sovereign state until 1918.[227]

The smaller German states were overshadowed by Prussia and Austria. Bavaria had a rural economy. Saxony was in economically good shape, although numerous wars had taken their toll. During the time when Prussia rose rapidly within Germany, Saxony was distracted by foreign affairs: the House of Wettin concentrated on acquiring and then holding on to the Polish throne, an effort that was ultimately unsuccessful.[228][clarification needed]

Many of the smaller states of Germany were run by bishops, who in reality were from powerful noble families and showed scant interest in religion. While none of the later ecclesial rulers reached the outstanding reputation of Mainz's Johann Philipp von Schönborn or Münster's Christoph Bernhard von Galen, some of them promoted the Enlightenment, like the benevolent and progressive Franz Ludwig von Erthal in Würzburg and Bamberg.[229]

In Hesse-Kassel, the Landgrave Frederick II ruled from 1760 to 1785 as an enlightened despot, and raised money by renting soldiers (called "Hessians") to Great Britain to help fight the American Revolutionary War.
He combined Enlightenment ideas with Christian values, cameralist plans for central control of the economy, and a militaristic approach toward diplomacy.[230]

Hanover did not have to support a lavish court: its rulers were also kings of England and resided in London. George III, elector (ruler) from 1760 to 1820, never once visited Hanover. The local nobility who ran the country opened the University of Göttingen in 1737; it soon became a world-class intellectual center. Baden had perhaps the best government of the smaller states. Karl Friedrich ruled for 73 years and was an enthusiast for the Enlightenment; he abolished serfdom in 1783.[231] The smaller states failed to form coalitions with each other and were eventually overwhelmed by Prussia, which swallowed up many of them between 1807 and 1871.[232]

Prussia underwent major social change between the mid-17th and mid-18th centuries. The nobility declined as the traditional aristocracy struggled to compete with the rising merchant class,[233] which developed into a new bourgeois middle class,[234][235][236] while the emancipation of the serfs granted the rural peasantry land-purchasing rights and freedom of movement,[237] and a series of agrarian reforms in northwestern Germany abolished feudal obligations and divided up feudal land, giving rise to wealthier peasants and paving the way for a more efficient rural economy.[238]

During the mid-18th century, the recognition and application of Enlightenment cultural, intellectual and spiritual ideals and standards led to a flourishing of art, music, philosophy, science and literature. The philosopher Christian Wolff was a pioneering author in a vast number of fields of Enlightenment rationality, and established German as the prevailing language of philosophical reasoning, scholarly instruction and research.[239]

In 1685, Margrave Frederick William of Prussia issued the Edict of Potsdam within a week of French king Louis XIV's Edict of Fontainebleau, which decreed the abolition of the 1598 concession to free religious practice for Protestants. Frederick William offered his co-religionists, who are oppressed and assailed for the sake of the Holy Gospel and its pure doctrine...a secure and free refuge in all Our Lands.[240] Around 20,000 Huguenot refugees arrived in an immediate wave and settled in the cities, 40% of them in Berlin, the ducal residence, alone. The French Lyceum in Berlin was established in 1689, and by the end of the 17th century the French language had replaced Latin as the universal language of international diplomacy. The nobility and the educated middle class of Prussia and the various German states increasingly used the French language in public conversation, in combination with universal cultivated manners. Like no other German state, Prussia had access to and the skill set for the application of pan-European Enlightenment ideas to develop more rational political and administrative institutions.[241] The princes of Saxony carried out a comprehensive series of fundamental fiscal, administrative, judicial, educational, cultural and general economic reforms.
The reforms were aided by the country's strong urban structure and influential commercial groups, who modernized pre-1789 Saxony along the lines of classic Enlightenment principles.[242]

Johann Gottfried von Herder broke new ground in philosophy and poetry as a leader of the Sturm und Drang movement of proto-Romanticism. Weimar Classicism ("Weimarer Klassik") was a cultural and literary movement based in Weimar that sought to establish a new humanism by synthesizing Romantic, classical, and Enlightenment ideas. The movement, from 1772 until 1805, involved Herder as well as the polymath Johann Wolfgang von Goethe and Friedrich Schiller, a poet and historian. Herder argued that every folk had its own particular identity, which was expressed in its language and culture. This legitimized the promotion of German language and culture and helped shape the development of German nationalism. Schiller's plays expressed the restless spirit of his generation, depicting the hero's struggle against social pressures and the force of destiny.[243]

German music, sponsored by the upper classes, came of age under the composers Johann Sebastian Bach, Joseph Haydn, and Wolfgang Amadeus Mozart.[244]

Königsberg philosopher Immanuel Kant tried to reconcile rationalism and religious belief, individual freedom, and political authority. Kant's work contained basic tensions that would continue to shape German thought – and indeed all of European philosophy – well into the 20th century.[245][246] The ideas of the Enlightenment and their implementation received general approval and recognition as a principal cause of widespread cultural progress.[247]

German reaction to the French Revolution was mixed at first. German intellectuals celebrated the outbreak, hoping to see the triumph of Reason and the Enlightenment. The royal courts in Vienna and Berlin denounced the overthrow of the king and the threatened spread of notions of liberty, equality, and fraternity. By 1793, the execution of the French king and the onset of the Terror disillusioned the Bildungsbürgertum (educated middle classes). Reformers said the solution was to have faith in the ability of Germans to reform their laws and institutions in peaceful fashion.[248]

Europe was racked by two decades of war revolving around France's efforts to spread its revolutionary ideals, and the opposition of reactionary royalty. War broke out in 1792 as Austria and Prussia invaded France, but they were defeated at the Battle of Valmy. The German lands saw armies marching back and forth, bringing devastation (albeit on a far lower scale than the Thirty Years' War, almost two centuries before), but also bringing new ideas of liberty and civil rights for the people. Prussia and Austria ended their failed wars with France but (with Russia) partitioned Poland among themselves in 1793 and 1795.

France took control of the Rhineland, imposed French-style reforms, abolished feudalism, established constitutions, promoted freedom of religion, emancipated Jews, opened the bureaucracy to ordinary citizens of talent, and forced the nobility to share power with the rising middle class. Napoleon created the Kingdom of Westphalia as a model state.[249] These reforms proved largely permanent and modernized the western parts of Germany. When the French tried to impose the French language, German opposition grew in intensity. A Second Coalition of Britain, Russia, and Austria then attacked France but failed.
Napoleon established direct or indirect control over most of western Europe, including the German states apart from Prussia and Austria. The old Holy Roman Empire was little more than a farce; Napoleon simply abolished it in 1806 while forming new countries under his control. In Germany Napoleon set up the Confederation of the Rhine, comprising most of the German states except Prussia and Austria.[250]

Under Frederick William II's weak rule (1786–1797), Prussia had undergone a serious economic, political and military decline. His successor, King Frederick William III, tried to remain neutral during the War of the Third Coalition and French emperor Napoleon's dissolution of the Holy Roman Empire and reorganisation of the German principalities. Urged on by the queen and a pro-war party, Frederick William joined the Fourth Coalition in October 1806. Napoleon easily defeated the Prussian army at the Battle of Jena and occupied Berlin. Prussia lost its recently acquired territories in western Germany, its army was reduced to 42,000 men, no trade with Britain was allowed, and Berlin had to pay Paris high reparations and fund the French army of occupation. Saxony changed sides to support Napoleon and joined the Confederation of the Rhine. Its ruler, Frederick Augustus I, was rewarded with the title of king and given a part of Poland taken from Prussia, which became known as the Duchy of Warsaw.[251]

After Napoleon's military fiasco in Russia in 1812, Prussia allied with Russia in the Sixth Coalition. A series of battles followed, and Austria joined the alliance. Napoleon was decisively defeated in the Battle of Leipzig in late 1813. The German states of the Confederation of the Rhine defected to the Coalition against Napoleon, who rejected any peace terms. Coalition forces invaded France in early 1814, Paris fell, and in April Napoleon surrendered. Prussia, as one of the winners at the Congress of Vienna, gained extensive territory.[217]

In 1815, continental Europe was in a state of overall turbulence and exhaustion as a consequence of the French Revolutionary and Napoleonic Wars. The liberal spirit of the Enlightenment and Revolutionary era gave way to Romanticism.[252] The victorious members of the Coalition had negotiated a new peaceful balance of powers in Vienna and agreed to maintain a stable German heartland that would keep French imperialism at bay. However, the idea of reforming the defunct Holy Roman Empire was discarded. Napoleon's reorganization of the German states was continued, and the remaining princes were allowed to keep their titles. Already in 1813, in return for guarantees from the Allies that the sovereignty and integrity of the southern German states (Baden, Württemberg, and Bavaria) would be preserved, they had broken with France.[253]

During the 1815 Congress of Vienna, the 39 former states of the Confederation of the Rhine joined the German Confederation, a loose agreement for mutual defense. Attempts at economic integration and customs coordination were frustrated by repressive anti-national policies. Great Britain approved of the union, convinced that a stable, peaceful entity in central Europe could discourage aggressive moves by France or Russia. Most historians, however, concluded that the Confederation was weak and ineffective, and an obstacle to German nationalism.
The union was undermined by the creation of the Zollverein in 1834, the 1848 revolutions and the rivalry between Prussia and Austria; it was finally dissolved in the wake of the Austro-Prussian War of 1866,[254] to be replaced by the North German Confederation in the same year.[254]

Increasingly after 1815, a centralized Prussian government based in Berlin took over the powers of the nobles, which in terms of control over the peasantry had been almost absolute. To help the nobility avoid indebtedness, Berlin set up a credit institution to provide capital loans in 1809, and extended the loan network to peasants in 1849. When the German Empire was established in 1871, the Junker nobility controlled the army and the navy, the bureaucracy, and the royal court; they generally set governmental policies.[255]

Between 1815 and 1865, the population of the German Confederation (excluding Austria) grew around 60%, from 21 million to 34 million.[256] Simultaneously, the Demographic Transition took place, as the high birth rates and high death rates of the pre-industrial country shifted to the low birth and death rates of the fast-growing industrialized urban economic and agricultural system. Increased agricultural productivity secured a steady food supply, as famines and epidemics declined. This allowed people to marry earlier and have more children. The high birthrate was offset by a very high rate of infant mortality and, after 1840, by large-scale emigration to the United States. Emigration totaled 480,000 in the 1840s, 1,200,000 in the 1850s, and 780,000 in the 1860s. The upper and middle classes first practiced birth control, which was soon universally adopted.[257]

In 1800, Germany's social structure was poorly suited to entrepreneurship or economic development. Domination by France during the French Revolution (1790s to 1815), however, produced important institutional reforms, which included the abolition of feudal restrictions on the sale of large landed estates, the reduction of the power of the guilds in the cities, and the introduction of a new, more efficient commercial law. The extent to which these reforms were beneficial for industrialization remains a subject of debate among historians.[258]

In the early 19th century, the Industrial Revolution was in full swing in Britain, France, and Belgium. The various small federal states in Germany developed only slowly and autonomously, as competition was strong. Early investments in the railway network during the 1830s came almost exclusively from private hands. Without a central regulatory agency, construction projects were quickly realized. Actual industrialization only took off after 1850, in the wake of the railroad construction.[259] The textile industry grew rapidly, profiting from the elimination of tariff barriers by the Zollverein.[260][261] During the second half of the 19th century, German industry grew exponentially, and by 1900 Germany was an industrial world leader along with Britain and the United States.[262]

In 1800, the population was predominantly rural; only 10% lived in communities of 5,000 or more people, and only 2% lived in cities of more than 100,000 people. After 1815, the urban population grew rapidly, due to the influx of young people from the rural areas.
Berlin grew from 172,000 inhabitants in 1800 to 826,000 in 1870, Hamburg from 130,000 to 290,000, Munich from 40,000 to 269,000 and Dresden from 60,000 to 177,000.[263]

The initial stage of economic development came with the railroad revolution in the 1840s, which opened up new markets for local products, created a pool of middle managers, increased the demand for engineers, architects and skilled machinists, and stimulated investments in coal and iron. Political disunity among three dozen states and a pervasive conservatism made it difficult to build railways in the 1830s. However, by the 1840s, trunk lines did link the major cities; each German state was responsible for the lines within its own borders. The economist Friedrich List summed up the advantages to be derived from the development of the railway system in 1841.

Lacking a technological base at first, engineering and hardware were imported from Britain. In many cities, the new railway shops were the centres of technological awareness and training, so that by 1850 Germany was self-sufficient in meeting the demands of railroad construction, and the railways were a major impetus for the growth of the new steel industry. Observers found that even as late as 1890, German engineering was inferior to Britain's. However, German unification in 1870 stimulated consolidation, nationalisation into state-owned companies, and further rapid growth. Unlike the situation in France, the goal was the support of industrialisation. Eventually numerous lines criss-crossed the Ruhr area and other industrial centers and provided good connections to the major ports of Hamburg and Bremen. By 1880, 9,400 locomotives pulled 43,000 passengers and 30,000 tons of freight a day.[259]

While no national newspaper existed, the many states issued a great variety of printed media, although they rarely exceeded regional significance. A typical town had one or two outlets; urban centers such as Berlin and Leipzig had dozens. The audience was limited to a few per cent of male adults, chiefly from the aristocratic and upper middle class. Liberal publishers outnumbered conservative ones by a wide margin. Foreign governments bribed editors to guarantee a favorable image.[265] Censorship was strict, and the imperial government issued the political news that was supposed to be published. After 1871, strict press laws were enforced by Bismarck to contain the Socialists and hostile editors. Editors focused on political commentary, culture, the arts, and popular serialized novels. Magazines were politically more influential and attracted intellectual authors.[266]

19th-century artists and intellectuals were greatly inspired by the ideas of the French Revolution and by the great poets and writers Johann Wolfgang von Goethe, Gotthold Ephraim Lessing and Friedrich Schiller. The Sturm und Drang romantic movement was embraced, and emotion was given free expression in reaction to the perceived rationalism of the Enlightenment. Philosophical principles and methods were revolutionized by Immanuel Kant's paradigm shift. Ludwig van Beethoven was the most influential composer of the period from classical to Romantic music. His use of tonal architecture in such a way as to allow significant expansion of musical forms and structures was immediately recognized as bringing a new dimension to music. His later piano music and string quartets, especially, showed the way to a completely unexplored musical universe, and influenced Franz Schubert and Robert Schumann.
In opera, a new Romantic atmosphere combining supernatural terror and melodramatic plot in a folkloric context was first successfully achieved by Carl Maria von Weber and perfected by Richard Wagner in his Ring Cycle. The Brothers Grimm collected folk stories into the popular Grimm's Fairy Tales and are ranked among the founding fathers of German studies, inasmuch as they initiated the work on the Deutsches Wörterbuch ("The German Dictionary"), the most comprehensive work on the German language.[267]

University professors developed international reputations, especially in subjects from the humanities such as history and philology, which brought a new historical perspective to the study of political history, theology, philosophy, language, and literature. With Georg Wilhelm Friedrich Hegel, Friedrich Wilhelm Joseph Schelling, Arthur Schopenhauer, Friedrich Nietzsche, Max Weber, Karl Marx and Friedrich Engels in philosophy, Friedrich Schleiermacher in theology and Leopold von Ranke in history, German scholars became famous. The University of Berlin, founded in 1810, became the world's leading university. Von Ranke, for example, professionalized history and set the world standard for historiography. By the 1830s, mathematics, physics, chemistry, and biology had emerged as world-class science, led by Alexander von Humboldt in natural science and Carl Friedrich Gauss in mathematics. Young intellectuals often turned to politics, but their support for the failed revolution of 1848 forced many into exile.[217]

Two main developments reshaped religion in Germany. Across the land, there was a movement to unite the larger Lutheran and the smaller Reformed Protestant churches. The churches themselves brought this about in Baden, Nassau, and Bavaria. However, in Prussia King Frederick William III was determined to handle unification entirely on his own terms, without consultation. His goal was to unify the Protestant churches and to impose a single standardized liturgy, organization, and even architecture. The long-term goal was fully centralized royal control of all the Protestant churches. In a series of proclamations over several decades, the Church of the Prussian Union was formed, bringing together the more numerous Lutherans and the less numerous Reformed Protestants. The government of Prussia now had full control over church affairs, with the king himself recognized as the leading bishop. Opposition to unification came from the "Old Lutherans" in Silesia, who clung tightly to the theological and liturgical forms they had followed since the days of Luther. The government attempted to crack down on them, so they went underground. Tens of thousands migrated to South Australia and, especially, to the United States, where they formed the Missouri Synod, which is still in operation as a conservative denomination. Finally, in 1845, the new king Frederick William IV offered a general amnesty and allowed the Old Lutherans to form a separate church association with only nominal government control.[268][269][270]

From the religious point of view of the typical Catholic or Protestant, major changes were underway in terms of a much more personalized religiosity that focused on the individual more than on the church or the ceremony. The rationalism of the late 18th century faded away, and there was a new emphasis on the psychology and feeling of the individual, especially in terms of contemplating sinfulness, redemption, and the mysteries and revelations of Christianity. Pietistic revivals were common among Protestants.
Among Catholics there was a sharp increase in popular pilgrimages. In 1844 alone, half a million pilgrims made a pilgrimage to the city of Trier in the Rhineland to view the Seamless Robe of Jesus, said to be the robe that Jesus wore on the way to his crucifixion. Catholic bishops in Germany had historically been largely independent of Rome, but now the Vatican exerted increasing control, a new "ultramontanism" of Catholics highly loyal to Rome.[271] A heated controversy erupted in 1837–1838 in the largely Catholic Rhineland over the religious education of children of mixed marriages, where the mother was Catholic and the father Protestant. The government passed laws to require that these children always be raised as Protestants, contrary to the Napoleonic law that had previously prevailed and had allowed the parents to make the decision. The government put the Catholic Archbishop under house arrest. In 1840, the new King Frederick William IV sought reconciliation and defused the controversy by agreeing to most of the Catholic demands. However, Catholic memories remained deep and led to a sense that Catholics always needed to stick together in the face of a hostile government.[272]

After the fall of Napoleon, Europe's statesmen convened in Vienna in 1815 for the reorganisation of European affairs, under the leadership of the Austrian Prince Metternich. The political principles agreed upon at this Congress of Vienna included the restoration, legitimacy and solidarity of rulers for the repression of revolutionary and nationalist ideas. The German Confederation (German: Deutscher Bund) was founded, a loose union of 39 states (35 ruling princes and 4 free cities) under Austrian leadership, with a Federal Diet (German: Bundestag) meeting in Frankfurt am Main. It was a loose coalition that failed to satisfy most nationalists. The member states largely went their own way, and Austria had its own interests.

In 1819, a student radical assassinated the reactionary playwright August von Kotzebue, who had scoffed at liberal student organisations. In one of the few major actions of the German Confederation, Prince Metternich called a conference that issued the repressive Carlsbad Decrees, designed to suppress liberal agitation against the conservative governments of the German states.[273] The Decrees terminated the fast-fading nationalist fraternities (German: Burschenschaften), removed liberal university professors, and expanded the censorship of the press. The decrees began the "persecution of the demagogues", which was directed against individuals who were accused of spreading revolutionary and nationalist ideas. Among the persecuted were the poet Ernst Moritz Arndt, the publisher Johann Joseph Görres and the "Father of Gymnastics" Ludwig Jahn.[274]

In 1834, the Zollverein was established, a customs union between Prussia and most other German states, but excluding Austria. As industrialisation developed, the need for a unified German state with a uniform currency, legal system, and government became more and more obvious. Growing discontent with the political and social order imposed by the Congress of Vienna led to the outbreak, in 1848, of the March Revolution in the German states. In May the German National Assembly (the Frankfurt Parliament) met in Frankfurt to draw up a national German constitution.
But the 1848 revolution turned out to be unsuccessful: King Frederick William IV of Prussia refused the imperial crown, the Frankfurt parliament was dissolved, the ruling princes repressed the risings by military force, and the German Confederation was re-established by 1850. Many leaders went into exile, including a number who went to the United States and became a political force there.[275]

The 1850s were a period of extreme political reaction. Dissent was vigorously suppressed, and many Germans emigrated to America following the collapse of the 1848 uprisings. Frederick William IV became extremely depressed and melancholic during this period, and was surrounded by men who advocated clericalism and absolute divine monarchy. The Prussian people once again lost interest in politics. Prussia not only expanded its territory but began to industrialize rapidly, while maintaining a strong agricultural base.

In 1857, the Prussian king Frederick William IV suffered a stroke, and his brother William served as regent until 1861, when he became King William I. Although conservative, William was very pragmatic. His most significant accomplishment was the naming of Otto von Bismarck as Prussian minister president in 1862. The cooperation of Bismarck, Defense Minister Albrecht von Roon, and Field Marshal Helmut von Moltke set the stage for the military victories over Denmark, Austria, and France that led to the unification of Germany.[276][277]

In 1863–1864, disputes escalated between Prussia and Denmark over Schleswig, which was not part of the German Confederation and which Danish nationalists wanted to incorporate into the Danish kingdom. The conflict led to the Second War of Schleswig in 1864. Prussia, joined by Austria, easily defeated Denmark and occupied Jutland. The Danes were forced to cede both the Duchy of Schleswig and the Duchy of Holstein to Austria and Prussia. The subsequent management of the two duchies led to tensions between Austria and Prussia. Austria wanted the duchies to become an independent entity within the German Confederation, while Prussia intended to annex them. The disagreement served as a pretext for the Seven Weeks War between Austria and Prussia that broke out in June 1866. In July, the two armies clashed at Sadowa-Königgrätz (Bohemia) in an enormous battle involving half a million men. Prussia's superior logistics and the superiority of its then-modern breech-loading needle guns over the slow muzzle-loading rifles of the Austrians proved to be essential for Prussia's victory. The battle also decided the struggle for hegemony in Germany, and Bismarck was deliberately lenient with a defeated Austria, which was to play only a subordinate role in future German affairs.[278][279]

After the Seven Weeks War, the German Confederation was dissolved and the North German Federation (German: Norddeutscher Bund) was established under the leadership of Prussia. Austria was excluded, and its immense influence over Germany finally came to an end. The North German Federation was a transitional organisation that existed from 1867 to 1871, between the dissolution of the German Confederation and the founding of the German Empire.[280]

Chancellor Otto von Bismarck determined the political course of the German Empire until 1890. He fostered alliances in Europe to contain France on the one hand and aspired to consolidate Germany's influence in Europe on the other. His principal domestic policies focused on the suppression of socialism and the reduction of the strong influence of the Roman Catholic Church on its adherents.
He issued a series of anti-socialist laws in tandem with a set of social laws that included universal health care, pension plans and other social security programs. His Kulturkampf policies were vehemently resisted by Catholics, who organized political opposition in the Center Party (Zentrum). German industrial and economic power had grown to match Britain's by 1900.

In 1888, the young and ambitious Kaiser Wilhelm II became emperor. He rejected advice from experienced politicians and ordered Bismarck's resignation in 1890. He opposed Bismarck's carefully considered foreign policy and was determined to pursue colonialist policies, as Britain and France had been doing for centuries. The Kaiser promoted the active colonization of Africa and Asia in the lands that were not already colonies of other European powers. The Kaiser took a mostly unilateral approach in Europe, allied only with the Austro-Hungarian Empire, and embarked on a dangerous naval arms race with Britain. His aggressive and ill-considered policies greatly contributed to the situation in which the assassination of the Austro-Hungarian crown prince would spark World War I.

Bismarck was the dominant personality not just in Germany but in all of Europe, and indeed in the entire diplomatic world, from 1870 to 1890. Historians continue to debate his goals. Lothar Gall and Ernst Engelberg consider Bismarck to have been a future-oriented modernizer. In sharp contrast, Jonathan Steinberg concluded that he was basically a traditional Prussian whose highest priorities were to reinforce the monarchy, the Army, and the social and economic dominance of his own Junker class, and that he thereby bore responsibility for a tragic history after his removal in 1890.[281]

In 1868, the Spanish queen Isabella II was deposed in the Glorious Revolution, leaving the country's throne vacant. When Prussia suggested the Hohenzollern candidate, Prince Leopold, as successor, France vehemently objected. The matter evolved into a diplomatic scandal, and in July 1870 France resolved to end it in a full-scale war. The conflict was quickly decided, as Prussia, joined by forces of a pan-German alliance, never gave up the tactical initiative. A series of victories in north-eastern France followed, and another French army group was simultaneously encircled at Metz. A few weeks later, the French army contingent under Emperor Napoleon III's personal command was finally forced to capitulate in the fortress of Sedan.[282][283] Napoleon was taken prisoner and a provisional government hastily proclaimed in Paris. The new government resolved to fight on and tried to reorganize the remaining armies while the Germans settled down to besiege Paris. The starving city surrendered in January 1871, and Jules Favre signed the surrender at Versailles. France was forced to pay indemnities of 5 billion francs and cede Alsace-Lorraine to Germany. This conclusion left the French national psyche deeply humiliated and further aggravated the French–German enmity.

During the Siege of Paris, the German princes assembled in the Hall of Mirrors of the Palace of Versailles on 18 January 1871 and announced the establishment of the German Empire, proclaiming the Prussian King Wilhelm I as German Emperor. The act unified all ethnic German states, with the exception of Austria, in the Little German solution of a federal economic, political and administrative unit. Bismarck was appointed to serve as Chancellor. The new empire was a federal union of 25 states that varied considerably in size, demography, constitution, economy, culture, religion and socio-political development.
However, even Prussia itself, which accounted for two-thirds of the territory as well as of the population, had emerged from the empire's periphery as a newcomer. It also faced colossal cultural and economic internal divisions. The Prussian provinces of Westphalia and the Rhineland, for example, had been under French control during the previous decades. The local people, who had benefited from the liberal civil reforms derived from the ideas of the French Revolution, had little in common with the predominantly rural communities in the authoritarian and disjointed Junker estates of Pomerania.[284] The inhabitants of the smaller territorial lands, especially in central and southern Germany, greatly rejected the Prussianized concept of the nation and preferred to associate such terms with their individual home state. The Hanseatic port cities of Hamburg, Bremen and Lübeck ranked among the most ferocious opponents of the so-called contract with Prussia. As advocates of free trade, they objected to Prussian ideas of economic integration and refused to sign the renewed Zollverein (Customs Union) treaties until 1888.[285] The Hanseatic merchants' overseas economic success corresponded with their globalist mindset. The citizens of Hamburg, whom Bismarck characterized as extremely irritating and whom the German ambassador in London called the worst Germans we have, were particularly appalled by Prussian militarism and its unopposed growing influence.[286][unreliable source?]

The Prusso-German authorities were aware of the need for integration concepts, as the results and the 52% voter turnout of the first imperial elections had clearly demonstrated. Historians increasingly argue that the nation-state was forged through empire.[287] National identity was expressed in bombastic imperial stone iconography; it was to be achieved as an imperial people, with an emperor as head of state, and it was to develop imperial ambitions – domestic, European and global.[288][287]

Bismarck's domestic policies as Chancellor of Germany were based on his effort to universally adopt the idea of the Protestant Prussian state and to achieve a clear separation of church and state in all imperial principalities. In the Kulturkampf (lit.: culture struggle) from 1871 to 1878, he tried to minimize the influence of the Roman Catholic Church and its political arm, the Catholic Centre Party, via the secularization of all education and the introduction of civil marriage, but without success. The Kulturkampf antagonised many Protestants as well as Catholics and was eventually abandoned. The millions of non-German imperial subjects, such as the Polish, Danish and French minorities, were left with no choice but to endure discrimination or accept[289][290] the policies of Germanisation.

The new Empire provided attractive top-level career opportunities for the national nobility in the various branches of the consular and civil services and the army. As a consequence, the aristocracy's near-total control of the civil sector guaranteed it a dominant voice in decision making in the universities and the churches. The 1914 German diplomatic corps consisted of 8 princes, 29 counts, 20 barons, 54 representatives of the lower nobility and a mere 11 commoners, the latter recruited from elite industrialist and banking families.
The consular corps employed numerous commoners, who, however, occupied positions of little to no executive power.[291] The Prussian tradition of reserving the highest military ranks for young aristocrats was adopted, and the new constitution put all military affairs under the direct control of the Emperor and beyond the control of the Reichstag.[292] With its large corps of reserve officers across Germany, the military strengthened its role as "The estate which upheld the nation", and historian Hans-Ulrich Wehler added: "it became an almost separate, self-perpetuating caste".[293]

Power was increasingly centralized among the 7,000 aristocrats who resided in the national capital of Berlin and neighboring Potsdam. Berlin's rapidly growing wealthy middle class copied the aristocracy and tried to marry into it. A peerage could permanently boost a rich industrial family into the upper reaches of the establishment.[294] However, the process tended to work in the other direction as well, as the nobility became industrialists. For example, 221 of the 243 mines in Silesia were owned by nobles or by the King of Prussia himself.[295]

The middle class in the cities grew exponentially, although it never acquired the powerful parliamentary representation and legislative rights enjoyed by its counterparts in France, Britain or the United States. The Association of German Women's Organizations, or BDF, was established in 1894 to encompass the proliferating women's organizations that had emerged since the 1860s. From the beginning the BDF was a bourgeois organization, its members working toward equality with men in such areas as education, financial opportunities, and political life. Working-class women were not welcome and were organized by the Socialists.[296]

The rising Socialist Workers' Party (later known as the Social Democratic Party of Germany, SPD) aimed to peacefully establish a socialist order through the transformation of the existing political and social conditions. From 1878, Bismarck tried to oppose the growing social democratic movement by outlawing the party's organisation, its assemblies and most of its newspapers. Nonetheless, the Social Democrats grew stronger, and Bismarck initiated his social welfare program in 1883 in order to appease the working class.[297]

Bismarck built on a tradition of welfare programs in Prussia and Saxony that began as early as the 1840s. In the 1880s he introduced old age pensions, accident insurance, medical care, and unemployment insurance that formed the basis of the modern European welfare state. His paternalistic programs won the support of German industry because their goals were to win the support of the working classes for the Empire and to reduce the outflow of emigrants to America, where wages were higher but welfare did not exist.[298][299] Bismarck further won the support of both industry and skilled workers through his high tariff policies, which protected profits and wages from American competition, although they alienated the liberal intellectuals who wanted free trade.[300][301]

Bismarck would not tolerate any power outside Germany (as in Rome) having a say in domestic affairs. He launched the Kulturkampf ("culture war") against the power of the pope and the Catholic Church in 1873, but only in the state of Prussia. This gained strong support from German liberals, who saw the Catholic Church as the bastion of reaction and their greatest enemy.
The Catholic element, in turn, saw the National-Liberals as its worst enemy and formed the Center Party.[302]

Catholics, although nearly a third of the national population, were seldom allowed to hold major positions in the Imperial government or the Prussian government. After 1871, there was a systematic purge of the remaining Catholics; in the powerful interior ministry, which handled all police affairs, the only Catholic was a messenger boy. Jews were likewise heavily discriminated against.[303][304]

Most of the Kulturkampf was fought out in Prussia, but Imperial Germany passed the Pulpit Law, which made it a crime for any cleric to discuss public issues in a way that displeased the government. Nearly all Catholic bishops, clergy, and laymen rejected the legality of the new laws and defiantly faced the increasingly heavy penalties and imprisonments imposed by Bismarck's government. Historian Anthony Steinhoff reports the casualty totals:

As of 1878, only three of eight Prussian dioceses still had bishops, some 1,125 of 4,600 parishes were vacant, and nearly 1,800 priests ended up in jail or in exile ... Finally, between 1872 and 1878, numerous Catholic newspapers were confiscated, Catholic associations and assemblies were dissolved, and Catholic civil servants were dismissed merely on the pretence of having Ultramontane sympathies.[305]

Bismarck underestimated the resolve of the Catholic Church and did not foresee the extremes that this struggle would attain.[306][307] The Catholic Church denounced the harsh new laws as anti-Catholic and mustered the support of its rank-and-file voters across Germany. In the following elections, the Center Party won a quarter of the seats in the Imperial Diet.[308] The conflict ended after 1879 because Pope Pius IX had died in 1878 and Bismarck broke with the Liberals to put his main emphasis on tariffs, foreign policy, and attacking socialists. Bismarck negotiated with the conciliatory new pope Leo XIII.[309] Peace was restored: the bishops returned and the jailed clerics were released. Laws were toned down or taken back, but the laws concerning education, the civil registry of marriages and religious disaffiliation remained in place. The Center Party gained strength and became an ally of Bismarck, especially when he attacked socialism.[310]

Historians have cited the campaign against the Catholic church, as well as a similar campaign against the Social Democratic Party, as leaving a lasting influence on the German consciousness, whereby national unity could be encouraged by excluding or persecuting a minority. This strategy, later referred to as "negative integration", set a tone of either being loyal to the government or an enemy of the state, which directly influenced German nationalist sentiment and the later Nazi movement.[311]

Chancellor Bismarck's imperial foreign policy basically aimed at security and the prevention of a Franco-Russian alliance, in order to avoid a likely two-front war. The League of Three Emperors was signed in 1873 by Russia, Austria, and Germany. It stated that republicanism and socialism were common enemies and that the three powers would discuss any matters concerning foreign policy. Bismarck needed good relations with Russia in order to keep France isolated.
Russia fought a victorious war against the Ottoman Empire from 1877 to 1878 and attempted to establish the Principality of Bulgaria, a move strongly opposed by France and Britain in particular, as both were long concerned with the preservation of the Ottoman Empire and with Russian containment at the Bosphorus Strait and the Black Sea. Germany hosted the Congress of Berlin in 1878, where a more moderate peace settlement was agreed upon.
In 1879, Germany formed the Dual Alliance with Austria-Hungary, an agreement of mutual military assistance in the case of an attack from Russia, which was not satisfied with the settlement of the Congress of Berlin. The establishment of the Dual Alliance led Russia to take a more conciliatory stance, and in 1887 the so-called Reinsurance Treaty was signed between Germany and Russia. In it, the two powers agreed on mutual military support in the case of a French attack on Germany or an Austrian attack on Russia. Russia turned its attention eastward to Asia and remained largely inactive in European politics for the next 25 years.
In 1882, Italy, seeking supporters for its interests in North Africa against France's colonial policy, joined the Dual Alliance, which became the Triple Alliance. In return for German and Austrian support, Italy committed itself to assisting Germany in the case of a French attack.[312]
Bismarck had always argued that the acquisition of overseas colonies was impractical and that the burden of administration and maintenance would outweigh the benefits. Eventually, Bismarck gave way, and a number of colonies were established in Africa (Togo, the Cameroons, German South-West Africa, and German East Africa) and in Oceania (German New Guinea, the Bismarck Archipelago, and the Marshall Islands). In this context, Bismarck initiated the Berlin Conference of 1885, a formal meeting of the European colonial powers, which sought to "establish international guidelines for the acquisition of African territory" (see Colonisation of Africa). Its outcome, the General Act of the Berlin Conference, can be seen as the formalisation of the "Scramble for Africa" and of the "New Imperialism".[313]
Emperor William I died in 1888. His son Frederick III, open to a more liberal political course, reigned for only ninety-nine days, as he was stricken with throat cancer and died three months after his accession. His son Wilhelm II followed him on the throne at the age of 29. Wilhelm rejected the liberal ideas of his parents and embarked on a conservative autocratic rule. He decided early on to replace the political elite, and in March 1890 he forced Chancellor Bismarck into retirement.[314] Following his principle of "Personal Regiment", Wilhelm was determined to exercise maximum influence on all government affairs.[315][316][317]
The young Kaiser Wilhelm set out to apply his imperialist ideas of Weltpolitik (German: [ˈvɛltpoliˌtiːk], "world politics"), as he envisaged a gratuitously aggressive political course to increase the empire's influence in and control over the world. After the removal of Bismarck, foreign policy was handled by the Kaiser and the Federal Foreign Office under Friedrich von Holstein.
Wilhelm's increasingly erratic and reckless conduct was unmistakably related to character deficits and a lack of diplomatic skill.[318][319] The Foreign Office's rather sketchy assessment of the situation and its recommendations for the empire's most suitable course of action rested on three assumptions: first, that the long-term coalition between France and Russia had to fall apart; secondly, that Russia and Britain would never get together; and finally, that Britain would eventually seek an alliance with Germany. Subsequently, Wilhelm refused to renew the Reinsurance Treaty with Russia. Russia promptly formed a closer relationship with France in the Dual Alliance of 1894, as both countries were concerned about the new assertiveness of Germany. Furthermore, Anglo-German relations provided, from a British point of view, no basis for any consensus, as the Kaiser refused to divert from his aggressive imperial engagement – however desperate and anachronistic – and from the naval arms race in particular. Holstein's analysis proved to be mistaken on every point, and Wilhelm failed too, as he did not adopt a nuanced political dialogue. Germany was left gradually isolated and dependent on the Triple Alliance with Austria-Hungary and Italy. This agreement was hampered by differences between Austria and Italy, and in 1915 Italy left the alliance.[250]
In 1897, Admiral Alfred von Tirpitz, state secretary of the German Imperial Naval Office, devised his initially rather practical, yet nonetheless ambitious plan to build a sizeable naval force. Although it posed basically only an indirect threat as a fleet in being, Tirpitz theorized that its mere existence would force Great Britain, dependent on unrestricted movement on the seas, to agree to diplomatic compromises.[320] Tirpitz started the program of warship construction in 1898 and enjoyed the full support of Kaiser Wilhelm. Wilhelm entertained less rational ideas about the fleet, which circled around his romantic childhood dream of having a "fleet of [his] own some day" and his obsessive adherence to Alfred Thayer Mahan's work The Influence of Sea Power upon History.[321] In exchange for its claims to the East African island of Zanzibar, Germany had obtained the island of Heligoland in the German Bight from Britain in 1890, and it converted the island into a naval base and installed immense coastal defense batteries. Britain considered the imperial German endeavours to be a dangerous infringement on the century-old delicate balance of global affairs and trade on the seas under British control. The British, however, resolved to keep up the naval arms race and introduced the highly advanced new Dreadnought battleship concept in 1906. Germany quickly adopted the concept, and by 1910 the arms race had escalated again.[322][323]
In the First Moroccan Crisis of 1905, Germany nearly clashed with Britain and France when the latter attempted to establish a protectorate over Morocco. Kaiser Wilhelm II, upset at not having been informed about French intentions, declared his support for Moroccan independence and made a highly provocative speech on the matter. The following year, a conference was held at which all of the European powers except Austria-Hungary (by now little more than a German satellite) sided with France. A compromise was brokered by the United States whereby the French relinquished some, but not all, control over Morocco.[324]
The Second Moroccan Crisis of 1911 saw another dispute over Morocco erupt when France tried to suppress a revolt there.
Germany, still smarting from the previous quarrel, agreed to a settlement whereby the French ceded some territory in central Africa in exchange for Germany's renouncing any right to intervene in Moroccan affairs. This confirmed French control over Morocco, which became a full protectorate of that country in 1912.[325]
After 1890, the economy continued to industrialize and grew at an even higher rate than during the previous two decades, increasing dramatically in the years leading up to World War I. Growth rates for the individual branches and sectors often varied considerably, and the periodic figures provided by the Kaiserliches Statistisches Amt ("Imperial Statistical Bureau") are often disputed or mere estimates. The classification and naming of internationally traded commodities and exported goods were still works in progress, and the structure of production and export had changed over four decades. Published documents provide figures such as these: the proportion of goods manufactured by modern industry was approximately 25% in 1900, while the proportion of consumer-related products in manufactured exports stood at 40%.[326] Reasonably exact are the figures for total industrial production between 1870 and 1914, which increased about 500%.[327]
Historian J. A. Perkins argued that more important than Bismarck's new tariff on imported grain was the introduction of the sugar beet as a main crop. Farmers quickly abandoned traditional, inefficient practices in favor of modern methods, including the use of artificial fertilizers and mechanical tools. Intensive methodical farming of sugar and other root crops made Germany the most efficient agricultural producer in Europe by 1914. Even so, farms were usually small in size, and women did much of the field work. An unintended consequence was the increased dependence on migratory, especially foreign, labor.[328][329]
The basics of the modern chemical research laboratory layout and the introduction of essential equipment and instruments such as Bunsen burners, the Petri dish and the Erlenmeyer flask, together with task-oriented working principles and team research, originated in 19th-century Germany and France. The organisation of knowledge acquisition was further refined by integrating laboratories into the research institutes of the universities and the industries. Through this strictly organized methodology, Germany acquired the leading role in the world's chemical industry by the late 19th century. In 1913, the German chemical industry produced almost 90 per cent of the global supply of dyestuffs and sold about 80 per cent of its production abroad.[330][331]
Germany became Europe's leading steel-producing nation in the 1890s, thanks in large part to the protection from American and British competition afforded by tariffs and cartels.[332] The leading firm was "Friedrich Krupp AG Hoesch-Krupp", run by the Krupp family.[333] The merger of several major firms into the Vereinigte Stahlwerke (United Steel Works) in 1926 was modeled on the U.S. Steel corporation in the United States. The new company emphasized the rationalization of management structures and the modernization of technology; it employed a multi-divisional structure and used return on investment as its measure of success. By 1913, American and German exports dominated the world steel market, as Britain slipped to third place.[334]
In machinery, iron and steel, and other industries, German firms avoided cut-throat competition and instead relied on trade associations.
Germany was a world leader because of its prevailing "corporatist mentality", its strong bureaucratic tradition, and the encouragement of the government. These associations regulated competition and allowed small firms to function in the shadow of much larger companies.[335]
By the 1890s, German colonial expansion in Asia and the Pacific (Kiautschou in China, the Marianas, the Caroline Islands, Samoa) led to frictions with Britain, Russia, Japan and the United States.[336] The construction of the Baghdad Railway, financed by German banks, was designed to eventually connect Germany with the Turkish Empire and the Persian Gulf, but it also collided with British and Russian geopolitical interests.[337]
The largest colonial enterprises were in Africa.[338] The harsh treatment of the Nama and Herero in what is now Namibia in 1906–1907 led to charges of genocide against the Germans. Historians are examining the links and precedents between the Herero and Namaqua genocide and the Holocaust of the 1940s.[339][340][341]
Other territories claimed for the German colonial empire included Bear Island (occupied in 1899),[342] the Togo hinterlands,[343] the German Somali Coast,[344] the Katanga territories, Pondoland (a failed attempt by Emil Nagel),[345] Nyassaland (Mozambique), southwestern Madagascar,[346] Santa Lucia Bay in South Africa (a failed attempt in 1884),[347] and the Farasan Islands.[348]
Ethnic demands for nation states upset the balance between the empires that dominated Europe, leading to World War I, which started in August 1914. Germany stood behind its ally Austria in a confrontation with Serbia, but Serbia was under the protection of Russia, which was allied to France. Germany was the leader of the Central Powers, which included Austria-Hungary, the Ottoman Empire, and later Bulgaria; arrayed against them were the Allies, consisting chiefly of Russia, France, Britain, and, from 1915, Italy.
In explaining why neutral Britain went to war with Germany, author Paul M. Kennedy recognized that Germany's becoming economically more powerful than Britain was critical to the coming of war, but he downplayed the disputes over economic trade imperialism, the Baghdad Railway, confrontations in Central and Eastern Europe, highly charged political rhetoric and domestic pressure groups. Germany's reliance time and again on sheer power, while Britain increasingly appealed to moral sensibilities, played a role, especially in the question of whether the invasion of Belgium was a necessary military tactic or a profound moral crime. In Kennedy's view, the German invasion of Belgium was not in itself decisive, because the British decision had already been made and the British were more concerned with the fate of France. Kennedy argues that by far the main reason was London's fear that a repeat of 1870 – when Prussia and the German states smashed France – would mean that Germany, with a powerful army and navy, would control the English Channel and northwest France. British policy makers insisted that would be a catastrophe for British security.[349]
In the west, Germany sought a quick victory by encircling Paris using the Schlieffen Plan. But it failed due to Belgian resistance, Berlin's diversion of troops, and very stiff French resistance on the Marne, north of Paris. The Western Front became an extremely bloody battleground of trench warfare. The stalemate lasted from 1914 until early 1918, with ferocious battles that moved forces a few hundred yards at best along a line that stretched from the North Sea to the Swiss border.
The British imposed a tight naval blockade in the North Sea which lasted until 1919, sharply reducing Germany's overseas access to raw materials and foodstuffs. Food scarcity became a serious problem by 1917.[350] The United States joined the Allies in April 1917. The entry of the United States into the war – following Germany's declaration of unrestricted submarine warfare – marked a decisive turning point against Germany.[351] Total casualties on the Western Front were 3,528,610 killed and 7,745,920 wounded.[352]
The fighting on the Eastern Front was more open. In the east, there were decisive victories against the Russian army, beginning with the trapping and defeat of large parts of the Russian contingent at the Battle of Tannenberg, followed by huge Austrian and German successes. The breakdown of Russian forces – exacerbated by internal turmoil caused by the 1917 Russian Revolution – led to the Treaty of Brest-Litovsk, which the Bolsheviks were forced to sign on 3 March 1918 as Russia withdrew from the war. It gave Germany control of Eastern Europe. Spencer Tucker says, "The German General Staff had formulated extraordinarily harsh terms that shocked even the German negotiator."[353] When Germany later complained that the Treaty of Versailles of 1919 was too harsh on it, the Allies responded that it was more benign than Brest-Litovsk.[354]
By defeating Russia in 1917, Germany was able to bring hundreds of thousands of combat troops from the east to the Western Front, giving it a numerical advantage over the Allies. By retraining the soldiers in new storm-trooper tactics, the Germans expected to unfreeze the battlefield and win a decisive victory before the American army arrived in strength.[355] However, the spring offensives all failed, as the Allies fell back and regrouped and the Germans lacked the reserves necessary to consolidate their gains. In the summer, with the Americans arriving at 10,000 a day and the German reserves exhausted, it was only a matter of time before multiple Allied offensives destroyed the German army.[356]
Although war had not been expected in 1914, Germany rapidly mobilized its civilian economy for the war effort, but the economy was handicapped by the British blockade, which cut off food supplies.[357] Conditions on the home front deteriorated rapidly, with severe food shortages reported in all urban areas. Causes included the transfer of many farmers and food workers into the military, an overburdened railroad system, shortages of coal, and especially the British blockade that cut off imports from abroad. The winter of 1916–1917 was known as the "turnip winter", because that vegetable, usually fed to livestock, was used by people as a substitute for potatoes and meat, which were increasingly scarce. Thousands of soup kitchens were opened to feed the hungry, who grumbled that the farmers were keeping the food for themselves. Even the army had to cut the rations for soldiers.[358] According to historian William H. McNeill, the morale of both civilians and soldiers continued to sink.
1918 was also the year of the deadly 1918 Spanish flu pandemic, which struck hard at a population weakened by years of malnutrition. In October 1918, General Ludendorff, who wanted to protect the reputation of the Imperial Army by placing responsibility for the capitulation on the democratic parties and the Imperial Reichstag, pushed for the government to be democratised.
A new chancellor was appointed, members of the Reichstag's majority parties were brought into the cabinet for the first time, and the constitution was modified.[360] The moves did not, however, satisfy either the Allies or the majority of German citizens.
The German revolution of 1918–1919 began on 3 November with a sailors' mutiny at Kiel, which spread rapidly, and all but bloodlessly, across Germany. Within a week, workers' and soldiers' councils were in control of government and military institutions across most of the Reich.[361] On 9 November, Germany was declared a republic. The following day, the Council of the People's Deputies, formed from members of Germany's two main socialist parties, began acting as the provisional government. By the end of the month, all of Germany's ruling monarchs, including Emperor Wilhelm II, who had fled into exile in the Netherlands, had been forced to abdicate.[362]
In early January 1919, the Spartacist uprising led by the newly founded Communist Party of Germany attempted to take power in Berlin, but it was quashed by government and Freikorps troops. Into the spring there were additional, violently suppressed efforts to push the revolution further in the direction of a council republic, such as the short-lived local soviet republics, notably in Bavaria (Munich). They too were put down with considerable loss of life.[363]
The revolution's end is generally set at 11 August 1919, the day the Weimar Constitution was signed following its adoption by the popularly elected Weimar National Assembly. Even though the widespread violence largely ended in 1919, the revolution remained in many ways incomplete. A large number of its opponents had been left in positions of power in the military and the Reich administration, and it failed to resolve the fracture in the Left between moderate socialists and communists. The Weimar Republic as a result was beset from the beginning by opponents from both the Left and – to a greater degree – the Right.[364]
Under the peace terms of the Treaty of Versailles, Germany's first democracy began its fourteen-year life facing territorial losses, reparations to the victors of World War I and stringent limitations on its military. Political violence from those on the Right who wanted a return to the monarchy and those on the Left who wanted a soviet-style regime repeatedly threatened the moderate socialist government through 1923. Ongoing problems with state finances, worsened by war debt and by the funding of striking workers in the Ruhr, fuelled the hyperinflation of 1923 that impoverished many Germans and left them bitter enemies of the Republic. A period of relative political and economic stability that lasted until the onset of the Great Depression in 1929 was followed by the rapid growth of parties on the extremes – the Communists on the Left and the Nazis on the Right – that left the Reichstag (parliament) all but unable to function. In quick succession, four chancellors tried and failed to govern by decree before President Hindenburg named Adolf Hitler chancellor in 1933. In only a few months, Hitler turned the Republic into a Nazi dictatorship.
The Armistice of 11 November 1918 ended the fighting in World War I, and on 28 June 1919 Germany reluctantly signed the peace terms laid out in the Treaty of Versailles.
Germany had to renounce sovereignty over its colonies[365] and in Europe lost 65,000 km2 (25,000 sq mi), or about 13% of its former territory – including 48% of its iron and 10% of its coal resources – along with 7 million people, or 12% of its population.[366] Allied troops occupied the Rhineland, and it, along with an area stretching 50 kilometres east of the Rhine, was demilitarized.[367] The German army was limited to no more than 100,000 men with 4,000 officers and no general staff; the navy could have at most 15,000 men and 1,500 officers. Germany was prohibited from having an air force, submarines or dreadnoughts. A large number of its ships and all of its air-related armaments were to be surrendered.[368][369] The most contentious article of the treaty, the so-called War Guilt Clause (Article 231), stated that Germany accepted responsibility for the loss and damage the war had caused to the Allies, and therefore had to pay reparations for the damage caused to the Allied Powers.[370]
The treaty was reviled as a dictated rather than a negotiated peace. Philipp Scheidemann, the Social Democratic minister president of Germany, said to the Weimar National Assembly on 12 May 1919, "What hand should not wither that puts this fetter on itself and on us?"[371]
The Weimar Constitution established a federal semi-presidential republic with a chancellor dependent on the confidence of the Reichstag (parliament), a strong president who had considerable powers to govern by decree,[372] and a substantial set of individual rights.[373] The Social Democrat Friedrich Ebert was the Republic's first president.
The Left accused the Social Democrats of betraying the ideals of the labour movement because of their alliance with the old elites in the military and administration, while the Right held the supporters of the Republic responsible for Germany's defeat in the war.[374] In early 1920, the right-wing Kapp Putsch, backed by units of the paramilitary Freikorps, briefly took control of the government in Berlin, but the putsch quickly collapsed due to a general strike and passive resistance by civil servants.[375] In the putsch's wake, workers in the industrial Ruhr district, where dissatisfaction with the lack of nationalisation of key industries was particularly high, rose up and attempted to take control of the region. Reichswehr and Freikorps units suppressed the Ruhr uprising with the loss of over 1,000 lives.[376] The unstable political conditions of the period were reflected in the Reichstag election of 1920, in which the centre-left Weimar Coalition, which until then had held a three-quarters majority, lost 125 seats to parties on both the Left and the Right.[377]
Political violence continued at a high level through 1923. A right-wing extremist group assassinated former finance minister Matthias Erzberger in August 1921 and Walther Rathenau, the Jewish foreign minister, in June 1922.[378] 1923 saw the communist-led takeover attempt known as the German October, the right-wing Küstrin Putsch and Adolf Hitler's Beer Hall Putsch.
Germany was the first state to establish diplomatic relations with the new Soviet Union, in the 1922 Treaty of Rapallo.[379] In October 1925, Germany, France, Belgium, Britain and Italy signed the Treaty of Locarno, which recognised Germany's borders with France and Belgium but left its eastern borders open to negotiation.
The treaty paved the way for Germany's admission to the League of Nations in 1926.[380]
In May 1921 the Allied Powers set Germany's reparations liability under the terms of the Treaty of Versailles at 132 billion gold marks, to be paid either in gold or in commodities such as iron, steel and coal.[381] After a series of German defaults, French and Belgian troops occupied the Ruhr in January 1923. The German government responded with a policy of passive resistance: it underwrote the costs of idled factories and mines and paid the workers who were on strike. Unable to meet the enormous costs by any other means, it resorted to printing money. Along with the debts the state had incurred during the war, this was one of the major causes of the 1923 peak in Germany's post-war hyperinflation.[382] The passive resistance was called off in September 1923, and the occupation ended in August 1925, following an agreement (the Dawes Plan) to restructure Germany's reparations.[383] In November 1923 the government introduced a new currency, the Rentenmark (later the Reichsmark). Together with other measures, it quickly stopped the hyperinflation, but many Germans who had lost their life savings became bitter enemies of the Weimar Republic and supporters of the anti-democratic Right.[384] During the following six years the economic situation improved. In 1928 Germany's industrial production surpassed the pre-war level of 1913.[385]
In 1925, following the death in office of President Ebert, the conservative Field Marshal Paul von Hindenburg was elected to replace him. His presidency, coming after a campaign that emphasised nationalism and Hindenburg's ties to the fallen German Empire, marked the beginning of a significant shift to the right in German politics.[386]
The Wall Street crash of 1929 marked the beginning of the worldwide Great Depression, which hit Germany as hard as any nation. In 1931 several major banks failed, and by early 1932 the number of unemployed had soared to more than six million.[387] In the Reichstag election of September 1930, the Communist Party of Germany (KPD) gained 23 seats, while the National Socialist German Workers' Party (NSDAP, Nazi Party), until then a minor far-right party, gained 95 seats, becoming Germany's second largest party behind the Social Democrats.[388] The Nazis were particularly successful among Protestants, unemployed young voters, the lower middle class in the cities and the rural population; they were weakest in Catholic areas and in the large cities.[389] The shift to the political extremes made the unstable coalition system by which every Weimar chancellor had governed increasingly unworkable.
The last years of the Weimar Republic were marred by even more systemic political instability than the previous years, and political violence increased. Four chancellors (Heinrich Brüning, Franz von Papen, Kurt von Schleicher and, from 30 January to 23 March 1933, Adolf Hitler) governed through presidential decree rather than parliamentary consultation.[381] This effectively rendered the Reichstag powerless as a means of enforcing constitutional checks and balances. Hindenburg was re-elected president in 1932, out-polling Hitler by almost 6 million votes in the second round.[390] The Nazi Party became the largest party in the Reichstag following the election of July 1932, in which it received 37% of the vote, with the SPD second (22%) and the Communist KPD third (14%). The Nazis dropped to 33% in another election four months later, but they remained the largest party.
The splintered Reichstag was still unable to form a stable coalition. On 30 January 1933, seeing no other viable option and pressured by former chancellor Franz von Papen and other conservatives, President Hindenburg appointed Hitler chancellor.[391]
The Weimar years saw a flowering of German science and high culture, before the Nazi regime brought a decline in the scientific and cultural life of Germany and forced many renowned scientists and writers to flee. German recipients dominated the Nobel Prizes in science.[392] Germany dominated the world of physics before 1933, led by Hermann von Helmholtz, Wilhelm Conrad Röntgen, Albert Einstein, Otto Hahn, Max Planck and Werner Heisenberg. Chemistry likewise was dominated by German professors and researchers at the great chemical companies such as BASF and Bayer, and by figures like Justus von Liebig, Fritz Haber and Emil Fischer. Theoretical mathematics was shaped by Georg Cantor in the 19th century and David Hilbert in the 20th century. Karl Benz, the inventor of the automobile, and Rudolf Diesel were pivotal figures of engineering, as was the rocket engineer Wernher von Braun. Ferdinand Cohn, Robert Koch and Rudolph Virchow were three key figures in microbiology. Among the most important German writers were Thomas Mann, Hermann Hesse and Bertolt Brecht. The reactionary historian Oswald Spengler wrote The Decline of the West (1918–1923) on the inevitable decay of Western civilization, and influenced intellectuals in Germany such as Martin Heidegger, Max Scheler and the Frankfurt School, as well as intellectuals around the world.[393]
After 1933, Nazi proponents of "Aryan physics", led by the Nobel Prize winners Johannes Stark and Philipp Lenard, attacked Einstein's theory of relativity as a degenerate example of Jewish materialism in the realm of science. Many scientists and humanists emigrated; Einstein moved permanently to the U.S., though some of the others returned after 1945.[394][395]
The Nazi regime suppressed labor unions and strikes, contributing to a prosperity that gave the Nazi Party popularity, with only minor, isolated and ultimately unsuccessful cases of resistance among the German population to its rule. The Gestapo (secret police) destroyed the political opposition and persecuted the Jews, trying to force them into exile. The Party took control of the courts, local government, and all civic organizations except the Christian churches. All expressions of public opinion were controlled by the propaganda ministry, which used film, mass rallies, and Hitler's hypnotic speaking. The Nazi state idolized Hitler as its Führer (leader), putting all powers in his hands. Nazi propaganda centered on Hitler and created the "Hitler Myth" – that Hitler was all-wise and that any mistakes or failures by others would be corrected when brought to his attention.[396] In fact Hitler had a narrow range of interests, and decision making was diffused among overlapping, feuding power centers; on some issues he was passive, simply assenting to pressures from whoever had his ear. All top officials reported to Hitler and followed his basic policies, but they had considerable autonomy on a daily basis.[397]
To secure a Reichstag majority for his party, Hitler called for new elections. After the Reichstag fire of 27 February 1933, Hitler swiftly blamed an alleged Communist uprising and convinced President Hindenburg to approve the Reichstag Fire Decree, rescinding civil liberties. Four thousand communists were arrested[398] and Communist agitation was banned.
Communists and Socialists were brought into hastily prepared Nazi concentration camps, where they were at the mercy of the Gestapo, the newly established secret police force. Communist Reichstag deputies were taken into "protective custody". Despite the terror and unprecedented propaganda, the last free general elections, held on 5 March 1933, gave the Nazis 43.9% of the vote and thus failed to deliver their desired majority. Together with the German National People's Party (DNVP), however, Hitler was able to form a slim majority government.
On 23 March 1933, the Enabling Act marked the beginning of Nazi Germany,[399] allowing Hitler and his cabinet to enact laws on their own, without the President or the Reichstag.[400] The Enabling Act formed the basis for the dictatorship and for the dissolution of the Länder. Trade unions and all political parties other than the Nazi Party were suppressed. A centralised totalitarian state was established, no longer based on the liberal Weimar constitution. Germany withdrew from the League of Nations shortly thereafter. The coalition parliament was rigged by defining the absence of arrested and murdered deputies as voluntary and therefore cause for their exclusion as wilful absentees. The Centre Party was voluntarily dissolved in a quid pro quo with the anti-communist Pope Pius XI for the Reichskonkordat; by these manoeuvres Hitler achieved the movement of these Catholic voters into the Nazi Party and a long-awaited international diplomatic acceptance of his regime. The Nazis had gained a larger share of the vote in Protestant areas than in Catholic areas.[401] The Communist Party was proscribed in April 1933.
Hitler used the SS and the Gestapo to purge the entire SA leadership – along with a number of his political adversaries – in the Night of the Long Knives from 30 June to 2 July 1934.[402] As a reward, the SS became an independent organisation under the command of the Reichsführer-SS Heinrich Himmler. Upon Hindenburg's death on 2 August 1934, Hitler's cabinet passed a law proclaiming the presidency to be vacant and transferred the role and powers of the head of state to Hitler.
The Nazi regime was particularly hostile towards Jews, who became the target of unending antisemitic propaganda attacks. The Nazis attempted to convince the German people to view and treat Jews as "subhumans",[403] and immediately after the 1933 federal elections they imposed a nationwide boycott of Jewish businesses. In March 1933 the first Nazi concentration camp was established at Dachau,[404] and from 1933 to 1935 the Nazi regime consolidated its power. The Law for the Restoration of the Professional Civil Service forced all Jewish civil servants to retire from the legal profession and the civil service.[405] The Nuremberg Laws banned sexual relations between Jews and Germans, and only those of German or related blood were eligible to be considered citizens; the remainder were classed as state subjects without citizenship rights.[406] This stripped Jews, Romani and others of their legal rights.[407] Jews continued to suffer persecution under the Nazi regime, exemplified by the Kristallnacht pogrom of 1938, and about half of Germany's 500,000 Jews fled the country before 1939, after which escape became almost impossible.[408]
In 1941, the Nazi leadership decided to implement a plan that they called the "Final Solution", which came to be known as the Holocaust.
Under the plan, Jews and other "lesser races", along with political opponents from Germany and the occupied countries, were systematically murdered at murder sites and, starting in 1942, at extermination camps.[409] Between 1941 and 1945, Jews, Gypsies, Slavs, communists, homosexuals, the mentally and physically disabled and members of other groups were targeted and methodically murdered – the origin of the word "genocide". In total approximately 11 million people were killed during the Holocaust.[410]
In 1935, Hitler officially re-established the Luftwaffe (air force) and reintroduced universal military service, in breach of the Treaty of Versailles; Britain, France and Italy formally protested. Hitler had the officers swear their personal allegiance to him.[411] In 1936, German troops marched into the demilitarised Rhineland.[412] As the territory was part of Germany, the British and French governments did not feel that attempting to enforce the treaty was worth the risk of war.[413] The move strengthened Hitler's standing in Germany. His reputation swelled further with the 1936 Summer Olympics in Berlin, which proved another great propaganda success for the regime, as orchestrated by master propagandist Joseph Goebbels.[414]
Hitler's diplomatic strategy in the 1930s was to make seemingly reasonable demands, threatening war if they were not met. When opponents tried to appease him, he accepted the gains that were offered, then went to the next target. That aggressive strategy worked as Germany pulled out of the League of Nations, rejected the Versailles Treaty and began to re-arm, won back the Saar, remilitarized the Rhineland, formed an alliance with Mussolini's Italy, sent massive military aid to Franco in the Spanish Civil War, annexed Austria, took over Czechoslovakia after the British and French appeasement of the Munich Agreement, formed a peace pact with Joseph Stalin's Soviet Union, and finally invaded Poland. Britain and France then declared war on Germany, and World War II in Europe began.[415][416]
Having established a "Rome-Berlin axis" with Benito Mussolini and signed the Anti-Comintern Pact with Japan – which Italy joined a year later, in 1937 – Hitler felt able to take the offensive in foreign policy. On 12 March 1938, German troops marched into Austria, where an attempted Nazi coup had been unsuccessful in 1934. When the Austrian-born Hitler entered Vienna, he was greeted by loud cheers, and Austrians subsequently voted in favour of the annexation of their country. After Austria, Hitler turned to Czechoslovakia, where the Sudeten German minority was demanding equal rights and self-government. At the Munich Conference of September 1938, Hitler, Mussolini, British Prime Minister Neville Chamberlain and French Prime Minister Édouard Daladier agreed upon the cession of Sudeten territory to the German Reich by Czechoslovakia. Hitler thereupon declared that all of the German Reich's territorial claims had been fulfilled. However, hardly six months after the Munich Agreement, Hitler used the smoldering quarrel between Slovaks and Czechs as a pretext for taking over the rest of Czechoslovakia. He then secured the return of Memel from Lithuania to Germany. Chamberlain was forced to acknowledge that his policy of appeasement towards Hitler had failed.
At first Germany was successful in its military operations. In less than three months (April – June 1940), Germany conquered Denmark, Norway, the Low Countries, and France.
The unexpectedly swift defeat of France resulted in an upswing in Hitler's popularity and an upsurge in war fever.[417][418] Hitler made peace overtures to the new British leader Winston Churchill in July 1940, but Churchill remained dogged in his defiance, with major help from US president Franklin D. Roosevelt. Hitler's bombing campaign against Britain (September 1940 – May 1941) failed. Some 43,000 British civilians were killed and 139,000 wounded in the Blitz; much of London was destroyed. Germany's armed forces invaded the Soviet Union in June 1941 and swept forward until they reached the gates of Moscow. The Einsatzgruppen (Nazi mobile death squads) executed all Soviet Jews that they located, while the Germans went to Jewish households and forced the families into concentration camps for labor or into extermination camps for death.
The tide began to turn in December 1941, when the invasion of the Soviet Union hit determined resistance in the Battle of Moscow and Hitler declared war on the United States in the wake of the Japanese attack on Pearl Harbor. After the surrender in North Africa and the loss of the Battle of Stalingrad in 1942–1943, the Germans were forced onto the defensive. By late 1944, the United States, Canada, France, and Great Britain were closing in on Germany in the West, while the Soviets were victoriously advancing in the East. In 1944–1945, Soviet forces completely or partially liberated Romania, Bulgaria, Hungary, Yugoslavia, Poland, Czechoslovakia, Austria, Denmark, and Norway.
Nazi Germany collapsed as Berlin was taken by the Soviet Union's Red Army in a fight to the death on the city streets. Some 2,000,000 Soviet troops took part in the assault, facing 750,000 German defenders. Between 78,000 and 305,000 Soviets were killed, along with 325,000 German civilians and soldiers.[419] Hitler committed suicide on 30 April 1945. The final German Instrument of Surrender was signed on 8 May 1945, marking the end of Nazi Germany.
By September 1945, Nazi Germany and its Axis partners (mainly Italy and Japan) had all been defeated, chiefly by the forces of the Soviet Union, the United States, and Britain. Much of Europe lay in ruins, and over 60 million people worldwide had been killed (most of them civilians), including approximately 6 million Jews and 11 million non-Jews in what became known as the Holocaust. World War II destroyed Germany's political and economic infrastructure and led to its partition, considerable loss of territory (especially in the East), and a historical legacy of guilt and shame.[420]
As a consequence of the defeat of Nazi Germany in 1945 and the onset of the Cold War in 1947, the country's territory was shrunk and split between the two global blocs in the East and West, a period known as the division of Germany. Millions of refugees from Central and Eastern Europe moved west, most of them to West Germany. Two countries emerged: West Germany was a parliamentary democracy, a NATO member, a founding member of what has since become the European Union and one of the world's largest economies, under Allied military control until 1955,[421] while East Germany was a totalitarian Communist dictatorship controlled by the Soviet Union as a satellite of Moscow. With the collapse of Communism in Europe in 1989, reunification followed.
No one doubted Germany's economic and engineering prowess; the question was how long bitter memories of the war would cause Europeans to distrust Germany, and whether Germany could demonstrate that it had rejected totalitarianism and militarism and embraced democracy and human rights.[422]
At the Potsdam Conference, Germany was divided into four military occupation zones by the Allies and did not regain independence until 1949. The provinces east of the Oder and Neisse rivers (the Oder-Neisse line) were transferred to Poland and the Soviet Union (the Kaliningrad oblast), while the Saarland was separated from Germany to become a French protectorate on 17 December 1947 (it joined West Germany on 1 January 1957), pending a final peace conference with Germany, which never took place.[423] Most of the remaining German population was expelled: around 6.7 million Germans living in "west-shifted" Poland, mostly within previously German lands, and 3 million in German-settled regions of Czechoslovakia were deported west.[424]
The total of German war dead was 8% to 10% of the prewar population of 69,000,000, or between 5.5 million and 7 million people. This included 4.5 million in the military and between 1 and 2 million civilians. There was chaos as 11 million foreign workers and POWs left, while soldiers returned home and more than 14 million displaced German-speaking refugees from the eastern provinces and East-Central and Eastern Europe were expelled from their native lands and came to the western German lands, which were often foreign to them.[425] During the Cold War, the West German government estimated a death toll of 2.2 million civilians due to the flight and expulsion of Germans and to forced labour in the Soviet Union.[426][427] This figure remained unchallenged until the 1990s, when some historians put the death toll at 500,000–600,000 confirmed deaths.[428] In 2006, the German government reaffirmed its position that 2.0–2.5 million deaths occurred.
Denazification removed, imprisoned, or executed most top officials of the old regime, but most middle and lower ranks of civilian officialdom were not seriously affected. In accordance with the Allied agreement made at the Yalta Conference, millions of POWs were used as forced labor by the Soviet Union and other European countries.[429]
In the East, the Soviets crushed dissent and imposed another police state, often employing ex-Nazis in the dreaded Stasi. The Soviets extracted about 23% of the East German GNP for reparations, while in the West reparations were a minor factor.[430]
In 1945–1946 housing and food conditions were bad, as the disruption of transport, markets, and finances slowed the return to normal. In the West, bombing had destroyed a fourth of the housing stock,[431] and over 10 million refugees from the east had crowded in, most living in camps.[432] Food production in 1946–1948 was only two-thirds of the prewar level, while grain and meat shipments – which had usually supplied 25% of the food – no longer arrived from the East. Furthermore, the end of the war brought the end of the large shipments of food seized from occupied nations that had sustained Germany during the war. Coal production was down 60%, which had cascading negative effects on railroads, heavy industry, and heating.[433] Industrial production fell by more than half and reached prewar levels only at the end of 1949.[434]
Allied economic policy originally was one of industrial disarmament plus the building up of the agricultural sector.
In the western sectors, most of the industrial plants had sustained minimal bomb damage, and the Allies dismantled 5% of the industrial plants for reparations.[435] However, deindustrialization became impractical, and the U.S. instead called for a strong industrial base in Germany so that it could stimulate European economic recovery.[436] The U.S. shipped food in 1945–1947 and made a $600 million loan in 1947 to rebuild German industry. By May 1946 the removal of machinery had ended, thanks to lobbying by the U.S. Army. The Truman administration had finally realised that economic recovery in Europe could not go forward without the reconstruction of the German industrial base on which it had previously been dependent. Washington decided that an "orderly, prosperous Europe requires the economic contributions of a stable and productive Germany".[437][438]
In 1945, the occupying powers took over all newspapers in Germany and purged them of Nazi influence. The American occupation headquarters, the Office of Military Government, United States (OMGUS), began its own newspaper based in Munich, Die Neue Zeitung. It was edited by German and Jewish émigrés who had fled to the United States before the war. Its mission was to encourage democracy by exposing Germans to how American culture operated. The paper was filled with details on American sports, politics, business, Hollywood, and fashions, as well as international affairs.[439]
On 7 October 1949, the Soviet zone became the "Deutsche Demokratische Republik" ("DDR"; "German Democratic Republic" or "GDR", often simply "East Germany"), under the control of the Socialist Unity Party. Neither country had a significant army until the 1950s, but East Germany built the Stasi into a powerful secret police force that infiltrated every aspect of its society.[440]
East Germany was an Eastern bloc state under the political and military control of the Soviet Union through its occupation forces and the Warsaw Treaty. Political power was exercised solely by leading members (the Politburo) of the communist-controlled Socialist Unity Party (SED). A Soviet-style command economy was set up; later, the GDR became the most advanced Comecon state. While East German propaganda was based on the benefits of the GDR's social programs and the alleged constant threat of a West German invasion, many of its citizens looked to the West for political freedoms and economic prosperity.[441]
Walter Ulbricht was the party boss from 1950 to 1971. In 1933, Ulbricht had fled to Moscow, where he served as a Comintern agent loyal to Stalin. As World War II was ending, Stalin assigned him the job of designing the postwar German system that would centralize all power in the Communist Party. Ulbricht became deputy prime minister in 1949 and secretary (chief executive) of the Socialist Unity (Communist) Party in 1950.[442] Some 2.6 million people had fled East Germany by 1961, when Ulbricht built the Berlin Wall to stop them – those who attempted to cross were shot. What the GDR called the "Anti-Fascist Protective Wall" was a major embarrassment for the regime during the Cold War, but it did stabilize East Germany and postpone its collapse.[443][444] Ulbricht lost power in 1971 but was kept on as nominal head of state. He was replaced because he had failed to solve growing national crises, such as the worsening economy in 1969–1970, the fear of another popular uprising like that of 1953, and the disgruntlement between Moscow and Berlin caused by Ulbricht's détente policies toward the West.
The transition to Erich Honecker (General Secretary from 1971 to 1989) led to a change in the direction of national policy and to efforts by the Politburo to pay closer attention to the grievances of the proletariat. Honecker's plans were not successful, however, and dissent grew among East Germany's population. In 1989, the socialist regime collapsed after 40 years, despite its omnipresent secret police, the Stasi. The main reasons for its collapse included severe economic problems and growing emigration towards the West.
East Germany's culture was shaped by Communism, and particularly by Stalinism. It was characterized by the East German psychoanalyst Hans-Joachim Maaz in 1990 as having produced a "congested feeling" among Germans in the East, the result of Communist policies that criminalized personal expression deviating from government-approved ideals, and of the enforcement of Communist principles by physical force and intellectual repression by government agencies, particularly the Stasi.[445] Critics of the East German state have claimed that its commitment to communism was a hollow and cynical tool of a ruling elite. This argument has been challenged by some scholars, who claim that the Party was committed to the advance of scientific knowledge, economic development, and social progress. However, the vast majority regarded the state's Communist ideals as nothing more than a deceptive method of government control.[445]
On 23 May 1949, the three western occupation zones (American, British, and French) were combined into the Federal Republic of Germany (FRG, West Germany). The government was formed under Chancellor Konrad Adenauer and his conservative CDU/CSU coalition.[447] The CDU/CSU was in power during most of the period since 1949. The capital was Bonn until it was moved to Berlin in 1990. In 1990, the FRG absorbed East Germany and gained full sovereignty over Berlin. At all points West Germany was much larger and richer than East Germany, which became a dictatorship under the control of the Communist Party, closely monitored by Moscow. Germany, especially Berlin, was a cockpit of the Cold War, with NATO and the Warsaw Pact assembling major military forces in west and east. However, there was never any combat.[448]
West Germany enjoyed prolonged economic growth beginning in the early 1950s (the Wirtschaftswunder or "economic miracle").[449] Industrial production doubled from 1950 to 1957, and gross national product grew at a rate of 9 or 10% per year, providing the engine for the economic growth of all of Western Europe. Labor unions supported the new policies with postponed wage increases, minimized strikes, support for technological modernization, and a policy of co-determination (Mitbestimmung), which involved a satisfactory grievance-resolution system as well as requiring the representation of workers on the boards of large corporations.[450] The recovery was accelerated by the currency reform of June 1948, U.S. gifts of $1.4 billion as part of the Marshall Plan, the breaking down of old trade barriers and traditional practices, and the opening of the global market.[451] West Germany gained legitimacy and respect, as it shed the horrible reputation Germany had gained under the Nazis. West Germany played a central role in the creation of European cooperation; it joined NATO in 1955 and was a founding member of the European Economic Community in 1958.
The most dramatic and successful policy event was the currency reform of 1948.[452] Since the 1930s, prices and wages had been controlled, but money had been plentiful. That meant that people had accumulated large paper assets, and that official prices and wages did not reflect reality, as the black market dominated the economy and more than half of all transactions were taking place unofficially. On 21 June 1948, the Western Allies withdrew the old currency and replaced it with the new Deutsche Mark at the rate of 1 new mark per 10 old. This wiped out 90% of government and private debt, as well as private savings. Prices were decontrolled, and labor unions agreed to accept a 15% wage increase, despite the 25% rise in prices. The result was that the prices of German export products held steady, while profits and earnings from exports soared and were poured back into the economy. The currency reform was simultaneous with the $1.4 billion in Marshall Plan money coming in from the United States, which was used primarily for investment. In addition, the Marshall Plan forced German companies, as well as those in all of Western Europe, to modernize their business practices and take account of the international market. Marshall Plan funding helped overcome bottlenecks in the surging economy caused by remaining controls (which were removed in 1949), and Marshall Plan business reforms opened up a greatly expanded market for German exports. Overnight, consumer goods appeared in the stores, because they could be sold at realistic prices, emphasizing to Germans that their economy had turned a corner.[432]
The success of the currency reform angered the Soviets, who cut off all road, rail, and canal links between the western zones and West Berlin. This was the Berlin Blockade, which lasted from 24 June 1948 to 12 May 1949. In response, the U.S. and Britain launched an airlift of food and coal and distributed the new currency in West Berlin as well. The city thereby became economically integrated into West Germany.[453] Until the mid-1960s, West Berlin served as "America's Berlin", symbolizing the United States' commitment to defending its freedom, which John F. Kennedy underscored during his visit in June 1963.[454]
Konrad Adenauer was the dominant leader in West Germany.[455] He was the first chancellor (top official) of the FRG and a founder and long-time leader of the Christian Democratic Union (CDU), a coalition of conservatives, ordoliberals, and adherents of Protestant and Catholic social teaching that dominated West German politics for most of its history. During his chancellorship, the West German economy grew quickly, and West Germany established friendly relations with France, participated in what would become the European Union, established the country's armed forces (the Bundeswehr), and became a pillar of NATO as well as a firm ally of the United States. Adenauer's government also commenced the long process of reconciliation with the Jews and Israel after the Holocaust.[456]
Ludwig Erhard was in charge of economic policy as economics director for the British and American occupation zones and was Adenauer's long-time economics minister.
Erhard's decision to lift many price controls in 1948 (despite opposition from both the social democratic opposition and the Allied authorities), together with his advocacy of free markets, helped set the Federal Republic on its path of strong growth out of wartime devastation.[457] Norbert Walter, a former chief economist at Deutsche Bank, argues that "Germany owes its rapid economic advance after World War II to the system of the Social Market Economy, established by Ludwig Erhard."[458][459] Erhard was politically less successful when he served as CDU Chancellor from 1963 until 1966. Erhard followed the concept of a social market economy and was in close touch with professional economists. He viewed the market itself as social and supported only a minimum of welfare legislation. However, Erhard suffered a series of decisive defeats in his effort to create a free, competitive economy in 1957; he had to compromise on such key issues as the anti-cartel legislation. Thereafter, the West German economy evolved into a conventional west European welfare state.[460]
Meanwhile, in adopting the Godesberg Program in 1959, the Social Democratic Party of Germany (SPD) largely abandoned Marxist ideas and embraced the concept of the market economy and the welfare state. It now sought to move beyond its old working-class base to appeal to the full spectrum of potential voters, including the middle class and professionals. Labor unions cooperated increasingly with industry, achieving labor representation on corporate boards and increases in wages and benefits.[461]
In 1966, Erhard lost support, and Kurt Kiesinger was elected Chancellor by a new CDU/CSU-SPD alliance combining the two largest parties. Social Democratic (SPD) leader Willy Brandt was Deputy Federal Chancellor and Foreign Minister. The 1966–1969 Grand Coalition reduced tensions with the Soviet bloc nations and established diplomatic relations with Czechoslovakia, Romania and Yugoslavia.
With a booming economy short of unskilled workers, especially after the Berlin Wall cut off the steady flow of East Germans, the FRG negotiated migration agreements with Italy (1955), Spain (1960), Greece (1960), and Turkey (1961) that brought in hundreds of thousands of temporary guest workers, called Gastarbeiter. In 1968, the FRG signed a guest worker agreement with Yugoslavia that brought in additional guest workers. The Gastarbeiter were young men who were paid full-scale wages and benefits but were expected to return home within a few years.[462] The agreement with Turkey ended in 1973, but few workers returned, because there were few good jobs in Turkey.[463] By 2010 there were about 4 million people of Turkish descent in Germany. The generation born in Germany attended German schools, but many had a poor command of both German and Turkish, and had either low-skilled jobs or were unemployed.[464][465]
Willy Brandt was the leader of the Social Democratic Party from 1964 to 1987 and West German Chancellor from 1969 to 1974. Under his leadership, the German government sought to reduce tensions with the Soviet Union and improve relations with the German Democratic Republic, a policy known as Ostpolitik.[449] Relations between the two German states had been icy at best, with propaganda barrages in each direction. The heavy outflow of talent from East Germany had prompted the building of the Berlin Wall in 1961, which worsened Cold War tensions and prevented East Germans from travelling.
Although anxious to relieve serious hardships for divided families and to reduce friction, Brandt's Ostpolitik was intent on holding to its concept of "two German states in one German nation". Ostpolitik was opposed by conservative elements in Germany, but won Brandt an international reputation and the Nobel Peace Prize in 1971.[466] In September 1973, both West and East Germany were admitted to the United Nations. The two countries exchanged permanent representatives in 1974, and, in 1987, East Germany's leader Erich Honecker paid an official state visit to West Germany.[467]

After 1973, Germany was hard hit by a worldwide economic crisis, soaring oil prices, and stubbornly high unemployment, which jumped from 300,000 in 1973 to 1.1 million in 1975. The Ruhr region was hardest hit, as its easy-to-reach coal mines petered out, and expensive German coal was no longer competitive. Likewise the Ruhr steel industry went into sharp decline, as its prices were undercut by lower-cost suppliers such as Japan. The welfare system provided a safety net for the large number of unemployed workers, and many factories reduced their labor force and began to concentrate on high-profit specialty items. After 1990 the Ruhr moved into service industries and high technology. Cleaning up the heavy air and water pollution became a major industry in its own right. Meanwhile, formerly rural Bavaria became a high-tech center of industry.[435]

A spy scandal forced Brandt to step down as Chancellor, though he remained party leader. He was replaced by Helmut Schmidt (b. 1918), of the SPD, who served as Chancellor from 1974 to 1982. Schmidt continued the Ostpolitik with less enthusiasm. He had a PhD in economics and was more interested in domestic issues, such as reducing inflation. The debt grew rapidly as he borrowed to cover the cost of the ever more expensive welfare state.[468] After 1979, foreign policy issues became central as the Cold War turned hot again. The German peace movement mobilized hundreds of thousands of demonstrators to protest against the American deployment in Europe of new medium-range ballistic missiles. Schmidt supported the deployment but was opposed by the left wing of the SPD and by Brandt.

The pro-business Free Democratic Party (FDP) had been in coalition with the SPD, but now it changed direction.[469] Led by Finance Minister Otto Graf Lambsdorff, the FDP adopted the market-oriented "Kiel Theses" in 1977; it rejected the Keynesian emphasis on consumer demand and proposed to reduce social welfare spending and to introduce policies to stimulate production and create jobs. Lambsdorff argued that the result would be economic growth, which would itself solve both the social and the financial problems. As a consequence, the FDP switched allegiance to the CDU, and Schmidt lost his parliamentary majority in 1982.
For the only time in West Germany's history, the government fell on a vote of no confidence.[432][470]

Helmut Kohl brought the conservatives back to power with a CDU/CSU-FDP coalition in 1982, and served as Chancellor until 1998.[449] He orchestrated reunification with the approval of all the Four Powers from World War II, who still had a voice in German affairs.[471] He lost in the left's biggest landslide victory, in 1998, and was succeeded by the SPD's Gerhard Schröder.[472]

During the summer of 1989, rapid changes known as the peaceful revolution or Die Wende took place in East Germany, which quickly led to German reunification.[449] Growing numbers of East Germans emigrated to West Germany, many via Hungary after Hungary's reformist government opened its borders. The opening of the Iron Curtain between Austria and Hungary at the Pan-European Picnic in August 1989 then triggered a chain reaction, at the end of which there was no longer a GDR and the Eastern Bloc had disintegrated. The picnic, which grew out of an idea of Otto von Habsburg, produced the greatest mass exodus since the construction of the Berlin Wall and showed that the USSR and the rulers of the Eastern European satellite states were not prepared to keep the Iron Curtain sealed. It made their loss of power visible and made clear that the GDR no longer received effective support from the other communist Eastern Bloc countries.[473][474][475] Thousands of East Germans then tried to reach the West by staging sit-ins at West German diplomatic facilities in other East European capitals, most notably in Prague. The exodus generated demands within East Germany for political change, and mass demonstrations in several cities continued to grow.[476]

Unable to stop the growing civil unrest, Erich Honecker was forced to resign in October, and on 9 November, East German authorities unexpectedly allowed East German citizens to enter West Berlin and West Germany. Hundreds of thousands of people took advantage of the opportunity; new crossing points were opened in the Berlin Wall and along the border with West Germany. This led to the acceleration of the process of reforms in East Germany that ended with the dissolution of East Germany and the German reunification that came into force on 3 October 1990.[477]

The SPD/Green coalition won the 1998 elections, and SPD leader Gerhard Schröder positioned himself as a centrist "Third Way" candidate in the mold of U.K. Prime Minister Tony Blair and U.S. President Bill Clinton. Schröder proposed Agenda 2010, a significant downsizing of the welfare state with five goals: tax cuts; labor market deregulation, especially relaxing rules protecting workers from dismissal and setting up Hartz concept job training; modernizing the welfare state by reducing entitlements; decreasing bureaucratic obstacles for small businesses; and providing new low-interest loans to local governments.[478]

On 26 December 2004, some 540 Germans died and thousands more went missing while vacationing in southern Thailand, when the Indian Ocean tsunami, triggered by an earthquake off Indonesia, struck the coast.[citation needed]

In 2005, after the SPD lost to the Christian Democratic Union (CDU) in North Rhine-Westphalia, Gerhard Schröder announced he would call federal elections "as soon as possible". A motion of confidence was subsequently defeated after Schröder urged members not to vote for his government, in order to trigger new elections.
In response, a grouping of left-wing SPD dissidents and the neo-communist Party of Democratic Socialism agreed to run on a joint ticket in the general election, with Schröder's rival Oskar Lafontaine leading the new group. In the 2005 elections, Angela Merkel became the first female chancellor.

In 2009 the German government approved a €50 billion stimulus plan.[479] Among the major German political projects of the early 21st century are the advancement of European integration, the energy transition (Energiewende) toward a sustainable energy supply, the debt brake for balanced budgets, measures to increase the fertility rate (pronatalism), and high-tech strategies for the transition of the German economy, summarised as Industry 4.0.[480] From 2005 to 2009 and 2013 to 2021, Germany was ruled by a grand coalition led by the CDU's Angela Merkel as chancellor. From 2009 to 2013, Merkel headed a centre-right government of the CDU/CSU and FDP.[481]

Together with France, Italy, the Netherlands, and other EU member nations, Germany has played the leading role in the European Union. Germany (especially under Chancellor Helmut Kohl) was one of the main supporters of admitting many East European countries to the EU. Germany is at the forefront of European states seeking to exploit the momentum of monetary union to advance the creation of a more unified and capable European political, defence and security apparatus. Chancellor Schröder expressed an interest in a permanent seat for Germany on the UN Security Council, identifying France, Russia, and Japan as countries that explicitly backed Germany's bid. Germany formally adopted the euro on 1 January 1999, after permanently fixing the Deutsche Mark rate on 31 December 1998.[482][483]

Since 1990, the German Bundeswehr has participated in a number of peacekeeping and disaster relief operations abroad. Since 2002, German troops have formed part of the International Security Assistance Force in the War in Afghanistan, resulting in the first German casualties in combat missions since World War II.

In light of the worldwide Great Recession that began in 2008, Germany did not experience as much economic hardship as other European nations. Germany later sponsored a massive financial rescue in the wake of the Eurozone crisis, which affected the German economy. Following the 2011 earthquake and tsunami in Japan, which led to the Fukushima nuclear disaster, German public opinion turned sharply against nuclear power in Germany, which at the time produced a quarter of the electricity supply. In response, Merkel announced plans to close down the nuclear power plants over the following decade and a commitment to rely more heavily on wind and other alternative energy sources, in addition to coal and natural gas.[484]

Germany was affected by the European migrant crisis in 2015 as it became the final destination of choice for many asylum seekers from Africa and the Middle East entering the EU. The country took in over a million refugees and migrants and developed a quota system which redistributed migrants around its federal states based on their tax income and existing population density.[485] The decision by Merkel to authorize unrestricted entry led to heavy criticism in Germany as well as within Europe.[486][487] This was a major factor in the rise of the far-right party Alternative for Germany, which entered the Bundestag in the 2017 federal election.[488]

In January 2020, Germany confirmed its first case of the novel coronavirus, which had emerged in Wuhan, China.
In March 2020, Germany entered a national lockdown. The pandemic had a severe impact on the German economy, healthcare system, and society. Germany was initially commended as an effective model for curbing infections and deaths, but lost this status by the end of the year amid rising numbers of cases, hospitalizations, and deaths. In December 2020, COVID-19 vaccines began to be administered in Germany. From June 2021 to the end of March 2022, Germany saw a new surge of COVID-19 infections, fueled by the highly transmissible Delta and, later, Omicron variants; vaccine shortages in the first quarter of 2021 had limited access to vaccination. As of May 2022, Germany had reported 140,292 COVID-19-related deaths, the fifth-highest toll in Europe (behind Russia, the United Kingdom, Italy, and France), out of some 2 million deaths across the continent.[489] On 8 April 2022, Germany, like France, Italy, the Netherlands, Belgium, Luxembourg, Austria, Switzerland, Greece, Turkey, and Cyprus, lifted its remaining COVID-19 restrictions, measures, and states of emergency.[citation needed]

On 8 December 2021, three months after Germany's centre-left Social Democrats (SPD) narrowly won the federal election, ending 16 years of conservative-led rule under Angela Merkel, Social Democrat Olaf Scholz was sworn in as Germany's new chancellor. He formed a coalition government with the Green Party and the liberal Free Democrats.[490][491]

In February 2022, Frank-Walter Steinmeier was elected for a second five-year term as Germany's president. Although the post is largely ceremonial, he has been seen as a symbol of consensus and continuity.[492]

After Russia's invasion of Ukraine on 24 February 2022, Germany's previous foreign policy towards Russia (traditional Ostpolitik) was severely criticized for having been too credulous and soft.[493] In response to the invasion, Germany announced a major shift in policy, pledging a €100 billion special fund for the Bundeswehr, to remedy years of underinvestment, along with raising the defence budget to above 2% of GDP.[494] As of April 2023, over 1.06 million refugees from Ukraine were recorded in Germany.[495]

As of December 2023, Germany is the fourth largest economy in the world, after the United States, China and Japan, and the largest economy in Europe. It is the third largest export nation in the world.[496]

In February 2025, the conservative CDU/CSU won Germany's 2025 federal election, becoming the biggest group in parliament. The far-right Alternative for Germany (AfD) doubled its support to become the second biggest party in parliament with 20.8% of the vote, while the Social Democrats (SPD) had their worst performance in decades with 16.4% of the vote.[497]

On 6 May 2025, Friedrich Merz was sworn in as Germany's next chancellor by President Frank-Walter Steinmeier. Merz formed a coalition of his Christian Democrats, their sister party the Christian Social Union, and the Social Democrats.[498]
https://en.wikipedia.org/wiki/History_of_Germany
The Holy Roman Emperor, originally and officially the Emperor of the Romans (Latin: Imperator Romanorum; German: Kaiser der Römer) during the Middle Ages, and also known as the Roman-German Emperor since the early modern period[1] (Latin: Imperator Germanorum; German: Römisch-Deutscher Kaiser), was the ruler and head of state of the Holy Roman Empire. The title was held in conjunction with the title of King of Italy (Rex Italiae) from the 8th to the 16th century, and, almost without interruption, with the title of King of Germany (Rex Teutonicorum, lit. 'King of the Teutons') throughout the 12th to 18th centuries.[2]

The Holy Roman Emperor title provided the highest prestige among medieval Catholic monarchs, because the empire was considered by the Catholic Church to be the only successor of the Roman Empire during the Middle Ages and the early modern period. Thus, in theory and diplomacy, the emperors were considered primus inter pares, first among equals, among other Catholic monarchs across Europe.[3]

From an autocracy in Carolingian times (AD 800–924), the title by the 13th century evolved into an elective monarchy, with the emperor chosen by the prince-electors. Various royal houses of Europe, at different times, became de facto hereditary holders of the title, notably the Ottonians (962–1024) and the Salians (1027–1125). Following the late medieval crisis of government, the Habsburgs kept possession of the title (with only one interruption) from 1440 to 1806. The final emperors were from the House of Habsburg-Lorraine, from 1765 to 1806. The Holy Roman Empire was dissolved by Francis II, after a devastating defeat by Napoleon at the Battle of Austerlitz.

The emperor was widely perceived to rule by divine right, though he often contradicted or rivaled the pope, most notably during the Investiture controversy. The Holy Roman Empire never had an empress regnant, though women such as Theophanu and Maria Theresa exerted strong influence. Throughout its history, the position was viewed as a defender of the Catholic faith. Until Maximilian I in 1508, the Emperor-elect (Imperator electus) was required to be crowned by the pope before assuming the imperial title. Charles V was the last to be crowned by the pope, in 1530. There were short periods in history when the electoral college was dominated by Protestants, and the electors usually voted in their own political interest. However, even after the Reformation, the elected emperor was always a Catholic.

From the time of Constantine I (r. 306–337), the Roman Emperors had, with very few exceptions, taken on a role as promoters and defenders of Christianity. The reign of Constantine established a precedent for the position of the Christian emperor in the Great Church. Emperors considered themselves responsible to God for the spiritual health of their subjects, and after Constantine they had a duty to help the Church define and maintain orthodoxy. The emperor's role was to enforce doctrine, root out heresies, and uphold ecclesiastical unity.[4] Both the title and the connection between Emperor and Church continued in the Eastern Roman Empire throughout the medieval period (in exile during 1204–1261). The ecumenical councils of the 5th to 8th centuries were convoked by the Eastern Roman Emperors.[5]

In Western Europe, the title of Emperor in the West lapsed after the death of Julius Nepos in 480, although the rulers of the barbarian kingdoms continued to recognize the authority of the Eastern Emperor at least nominally well into the 6th century.
While the reconquest of Justinian I had re-established Byzantine presence in the Italian Peninsula, religious frictions existed with the Papacy, which sought dominance over the Church of Constantinople. Toward the end of the 8th century, the Papacy still recognised the ruler at Constantinople as the Roman Emperor, though Byzantine military support in Italy had increasingly waned, leading the Papacy to look to the Franks for protection. In 800, Pope Leo III owed a great debt to Charlemagne, the King of the Franks and King of Italy, for securing his life and position. By this time, the Eastern Emperor Constantine VI had been deposed in 797 and replaced as monarch by his mother, Irene.[6]

Under the pretext that a woman could not rule the empire, Pope Leo III declared the throne vacant and crowned Charlemagne Emperor of the Romans (Imperator Romanorum), the successor of Constantine VI as Roman emperor, using the concept of translatio imperii.[6] On his coins, the name and title used by Charlemagne is Karolus Imperator Augustus. In documents, he used Imperator Augustus Romanum gubernans Imperium ("Emperor Augustus, governing the Roman Empire") and serenissimus Augustus a Deo coronatus, magnus pacificus Imperator Romanorum gubernans Imperium ("most serene Augustus crowned by God, great peaceful emperor governing the empire of the Romans"). The Eastern Empire eventually relented and recognized Charlemagne and his successors as emperors, but only as "Frankish" and "German" emperors, at no point referring to them as Roman, a label they reserved for themselves.[7]

The title of emperor in the West implied recognition by the pope. As the power of the papacy grew during the Middle Ages, popes and emperors came into conflict over church administration. The best-known and most bitter conflict was that known as the investiture controversy, fought during the 11th century between Henry IV and Pope Gregory VII.

After the coronation of Charlemagne, his successors maintained the title until the death of Berengar I of Italy in 924. The comparatively brief interregnum between 924 and the coronation of Otto the Great in 962 is taken as marking the transition from the Frankish Empire to the Holy Roman Empire. Under the Ottonians, much of the former Carolingian kingdom of Eastern Francia fell within the boundaries of the Holy Roman Empire.

Since 911, the various German princes had elected the King of the Germans from among their peers. The King of the Germans would then be crowned as emperor following the precedent set by Charlemagne, during the period of 962–1530. Charles V was the last emperor to be crowned by the pope, and his successor, Ferdinand I, merely adopted the title of "Emperor elect" in 1558. The final Holy Roman emperor-elect, Francis II, abdicated in 1806 during the Napoleonic Wars that saw the Empire's final dissolution.

The term sacrum (i.e., "holy") in connection with the German Roman Empire was first used in 1157 under Frederick I Barbarossa.[8]

The Holy Roman Emperor's standard designation was "August Emperor of the Romans" (Romanorum Imperator Augustus). When Charlemagne was crowned in 800, he was styled as "most serene Augustus, crowned by God, great and pacific emperor, governing the Roman Empire," thus constituting the elements of "Holy" and "Roman" in the imperial title.[9]

The word Roman was a reflection of the principle of translatio imperii (or in this case restauratio imperii) that regarded the Holy Roman emperors as the inheritors of the title of emperor of the Western Roman Empire.
In German-language historiography, the term Römisch-deutscher Kaiser ("Roman-German emperor") is used to distinguish the title from that of Roman emperor on the one hand, and that of German emperor (Deutscher Kaiser) on the other. The English term "Holy Roman Emperor" is a modern shorthand for "emperor of the Holy Roman Empire" not corresponding to the historical style or title; that is, the adjective "holy" is not intended as modifying "emperor". The English term gained currency in the interbellum period (the 1920s to 1930s); formerly the title had also been rendered as "German-Roman emperor" in English.[1]

The elective monarchy of the Kingdom of Germany goes back to the early 10th century and the election of Conrad I of Germany in 911, following the death without issue of Louis the Child, the last Carolingian ruler of Germany. Elections meant the kingship of Germany was only partially hereditary, unlike the kingship of England, although sovereignty frequently remained in a dynasty until there were no more male successors. The process of election meant that the leading candidate had to make concessions to keep the electors on his side, a practice known as Wahlkapitulationen (electoral capitulations).

Conrad was elected by the German dukes, and it is not known precisely when the system of seven prince-electors was established. The papal decree Venerabilem by Innocent III (1202), addressed to Berthold V, Duke of Zähringen, establishes the election procedure by (unnamed) princes of the realm, reserving for the pope the right to approve the candidates. A letter of Pope Urban IV (1263), in the context of the disputed vote of 1256 and the subsequent interregnum, suggests that by "immemorial custom", seven princes had the right to elect the king and future emperor. The seven prince-electors are named in the Golden Bull of 1356: the archbishop of Mainz, the archbishop of Trier, the archbishop of Cologne, the king of Bohemia, the count palatine of the Rhine, the duke of Saxony and the margrave of Brandenburg.

After 1438, the title remained in the House of Habsburg and Habsburg-Lorraine, with the brief exception of Charles VII, who was a Wittelsbach. Maximilian I (emperor 1508–1519) and his successors no longer traveled to Rome to be crowned as emperor by the pope. Maximilian therefore named himself Elected Roman Emperor (Erwählter Römischer Kaiser) in 1508 with papal approval. This title was used by all his uncrowned successors. Of his successors, only Charles V, the immediate one, received a papal coronation.

The elector palatine's seat was conferred on the duke of Bavaria in 1621, but in 1648, in the wake of the Thirty Years' War, the elector palatine was restored, as the eighth elector. The Electorate of Hanover was added as a ninth elector in 1692, confirmed by the Imperial Diet in 1708. The whole college was reshuffled in the German mediatization of 1803 with a total of ten electors, a mere three years before the dissolution of the Empire.

This list includes all 47 German monarchs crowned from Charlemagne until the dissolution of the Holy Roman Empire (800–1806). Several rulers were crowned king of the Romans (king of Germany) but not emperor, although they styled themselves thus, among whom were: Conrad I and Henry the Fowler in the 10th century, and Conrad IV, Rudolf I, Adolf and Albert I during the interregnum of the late 13th century.
Traditional historiography assumes a continuity between the Carolingian Empire and the Holy Roman Empire, while a modern convention takes the coronation of Otto I in 962 as the starting point of the Holy Roman Empire (although the term Sacrum Imperium Romanum was not in use before the 13th century).

On Christmas Day, 800, Charlemagne, King of the Franks, was crowned Emperor of the Romans (Imperator Romanorum) by Pope Leo III, in opposition to Empress Irene, who was then ruling the Roman Empire from Constantinople. Charlemagne's descendants from the Carolingian Dynasty continued to be crowned Emperor until 899, excepting a brief period when the Imperial crown was awarded to the Widonid Dukes of Spoleto. There is some contention as to whether the Holy Roman Empire dates as far back as Charlemagne; some histories consider the Carolingian Empire to be a distinct polity from the later Holy Roman Empire as established under Otto I in 962.

While earlier Frankish and Italian monarchs had been crowned as Roman emperors, the actual Holy Roman Empire is sometimes dated from the crowning of Frederick Barbarossa, who called the empire "the holy empire"; in general, however, its beginning is attributed to Otto I, who at the time was Duke of Saxony and King of Germany. Because the King of Germany was an elected position, being elected King of Germany was functionally a prerequisite to being crowned Holy Roman Emperor. By the 13th century, the Prince-electors had become formalized as a specific body of seven electors, consisting of three bishops and four secular princes. Until the mid-15th century, the electors chose freely from among a number of dynasties. A period of dispute during the second half of the 13th century over the kingship of Germany led to there being no emperor crowned for several decades, though this ended in 1312 with the coronation of Henry VII, Holy Roman Emperor. The period of free election ended with the ascension of the Austrian House of Habsburg, as an unbroken line of Habsburgs held the imperial throne until the 18th century. Later a cadet branch known as the House of Habsburg-Lorraine passed it from father to son until the abolition of the Empire in 1806. Notably, from the 16th century, the Habsburgs dispensed with the requirement that emperors be crowned by the pope before exercising their office. Starting with Ferdinand I, all successive emperors forwent the traditional coronation.

The interregnum of the Holy Roman Empire is taken to have lasted from the deposition of Frederick II by Pope Innocent IV in 1245 (or alternatively from Frederick's death in 1250, or from the death of Conrad IV in 1254) to the election of Rudolf I of Germany (1273). Rudolf was not crowned emperor, nor were his successors Adolf and Albert. The next emperor was Henry VII, crowned on 29 June 1312 by Pope Clement V.

In 1508, Pope Julius II allowed Maximilian I to use the title of Emperor without coronation in Rome, though the title was qualified as Electus Romanorum Imperator ("elected Emperor of the Romans"). Maximilian's successors each adopted the same titulature, usually on becoming the sole ruler of the Holy Roman Empire. Maximilian's predecessor Frederick III was the last to be crowned Emperor by the Pope in Rome, while Maximilian's successor Charles V was the last to be crowned by the pope, though in Bologna, in 1530.[12]

The Emperor was crowned in a special ceremony, traditionally performed by the Pope in Rome. Without that coronation, no king, despite exercising all powers, could call himself Emperor.
https://en.wikipedia.org/wiki/Holy_Roman_Emperor
This is a list of monarchs who ruled over East Francia and the Kingdom of Germany (Latin: Regnum Teutonicum), from the division of the Frankish Empire in 843 and the collapse of the Holy Roman Empire in 1806 until the collapse of the German Empire in 1918.

The title "King of the Romans", used in the Holy Roman Empire, was, from the coronation of Henry II, considered equivalent to King of Germany. A king was chosen by the German electors and would then proceed to Rome to be crowned emperor by the pope.

(The list itself is tabular and is not reproduced here; in it, emperors are listed in bold, while rival kings, anti-kings, and junior co-regents are italicized.)
https://en.wikipedia.org/wiki/List_of_German_monarchs
The Imperial Diet (Latin: Dieta Imperii or Comitium Imperiale; German: Reichstag) was the deliberative body of the Holy Roman Empire. It was not a legislative body in the contemporary sense; its members envisioned it more as a central forum, where it was more important to negotiate than to decide.[1]

Its members were the Imperial Estates, divided into three colleges. The diet as a permanent, regularized institution evolved from the Hoftage (court assemblies) of the Middle Ages. From 1663 until the end of the empire in 1806, it was in permanent session at Regensburg.

All Imperial Estates enjoyed immediacy and, therefore, had no authority above them besides the Holy Roman Emperor himself. While all the estates were entitled to a seat and vote, only the higher temporal and spiritual princes of the College of Princes enjoyed an individual vote (Virilstimme), while lesser estates, such as imperial counts and imperial abbots, were merely entitled to a collective vote (Kuriatstimme) within their particular bench (Curia), as were the free imperial cities belonging to the College of Towns.[2]

The right to vote rested essentially on a territorial entitlement, with the result that when a given prince acquired new territories through inheritance or otherwise, he also acquired their voting rights in the diet.[3] In general, members did not attend the permanent diet at Regensburg, but sent representatives instead. The late imperial diet was in effect a permanent meeting of ambassadors between the estates.

The role and function of the Imperial Diet evolved over the centuries, like the Empire itself, with the estates and separate territories increasing control of their own affairs at the expense of imperial power. Initially, there was neither a fixed time nor location for the Diet. It began as a convention of the dukes of the old Germanic tribes that formed the Frankish kingdom when important decisions had to be made, probably based on the old Germanic law whereby each leader relied on the support of his leading men. In the early and high Middle Ages, these assemblies were not yet institutionalized, but were held as needed at the decision of the king or emperor. They were not yet called diets, but Hoftage (court days), and were usually held in the imperial palaces (Kaiserpfalzen). For example, under Charlemagne during the Saxon Wars, a Hoftag met at Paderborn in 777, according to the Royal Frankish Annals, and determined laws over the subdued Saxons and other tribes. In 803 Charlemagne, by then crowned as emperor of the Franks, issued the final version of the Lex Saxonum.

At the Diet of 919 in Fritzlar the dukes elected the first King of the Germans who was a Saxon, Henry the Fowler, thus overcoming the longstanding rivalry between Franks and Saxons and laying the foundation for the German realm. After the conquest of Italy, the 1158 Diet of Roncaglia finalized four laws that would significantly alter the (never formally written) constitution of the Empire, marking the beginning of the steady decline of the central power in favour of the local dukes. The Golden Bull of 1356 cemented the concept of "territorial rule" (Landesherrschaft), the largely independent rule of the dukes over their respective territories, and also limited the number of electors to seven. The Pope, contrary to modern myth, was never involved in the electoral process but only in the process of ratification and coronation of whomever the Prince-Electors chose.

Until the late 15th century the Diet was not formalized as an institution.
Instead, the dukes and other princes would irregularly convene at the court of the Emperor. These assemblies were usually referred to as Hoftage (from German Hof, "court"). Only beginning in 1489 was the Diet called the Reichstag, and it was formally divided into collegia ("colleges"). Initially, the two colleges were those of the prince-electors and of the remaining dukes and princes. Later, the imperial cities with Imperial immediacy, which had become oligarchic republics independent of a local ruler and subject only to the Emperor himself, managed to be accepted as a third party. Motions passed if two of the colleges approved. Generally, the princely and electoral colleges would agree with each other rather than rely on the cities to make a decision, but the cities still had influence.[4]

Several attempts to reform the Empire and end its slow disintegration, starting with the Diet of 1495, did not have much effect. In contrast, this process was hastened with the Peace of Westphalia of 1648, which formally bound the Emperor to accept all decisions made by the Diet, in effect depriving him of his few remaining powers. Nonetheless, the Emperor still had substantial influence in the Diet. The Habsburg Emperors possessed a large number of votes, and even held command over the Reichsarmee (Imperial Army) if the Diet decided to raise it.[4]

Probably the most famous Diets were those held in Worms in 1495, where the Imperial Reform was enacted, and in 1521, where Martin Luther was banned (see Edict of Worms), the Diets of Speyer in 1526 and 1529 (see Protestation at Speyer), and several in Nuremberg (Diet of Nuremberg). Only with the introduction of the Perpetual Diet of Regensburg in 1663 did the Diet permanently convene at a fixed location. The Imperial Diet of Constance opened on 27 April 1507;[5] it recognized the unity of the Holy Roman Empire and founded the Imperial Chamber, the empire's supreme court.

From 1489, the Diet comprised three colleges.

The Electoral College (Kurfürstenrat) was led by the Prince-Archbishop of Mainz in his capacity as Archchancellor of Germany. The seven Prince-electors were designated by the Golden Bull of 1356. The number increased to eight when, in 1623, the Duke of Bavaria took over the electoral dignity of the Count Palatine, who himself received a separate vote in the electoral college according to the 1648 Peace of Westphalia (Causa Palatina), including the high office of an Archtreasurer. In 1692 the Elector of Hanover (formally Brunswick-Lüneburg) became the ninth Prince-elector, as Archbannerbearer, during the Nine Years' War. In the War of the Bavarian Succession, the electoral dignities of the Palatinate and Bavaria were merged, approved by the 1779 Treaty of Teschen. The German Mediatisation of 1803 entailed the dissolution of the Cologne and Trier Prince-archbishoprics. At the same time, the Prince-Archbishop of Mainz and German Archchancellor received, as compensation for his lost territory occupied by Revolutionary France, the newly established Principality of Regensburg. In turn, four secular princes were elevated to prince-electors. These changes, however, had little effect, as with the abdication of Francis II as Holy Roman Emperor the Empire was dissolved only three years later.

The college of Imperial Princes (Reichsfürstenrat or Fürstenbank) incorporated the Imperial Counts as well as immediate lords, Prince-Bishops and Imperial abbots. Strong in members, though often discordant, the second college tried to preserve its interests against the dominance of the Prince-electors.
The House of Princes was again subdivided into an ecclesiastical and a secular bench. Remarkably, the ecclesiastical bench was headed by the (secular) Archduke of Austria and the Burgundian duke of the Habsburg Netherlands (held by Habsburg Spain from 1556). As the Austrian House of Habsburg had failed to assume the leadership of the secular bench, it received the guidance of the ecclesiastical princes. The first ecclesiastical prince was the Archbishop of Salzburg, as Primas Germaniae; the Prince-Archbishop of Besançon, though officially a member until the 1678 Treaty of Nijmegen, did not attend the Diet's meetings.

The ecclesiastical bench also comprised the Grand Master and Deutschmeister of the Teutonic Knights, as well as the Grand Prior of the Monastic State of the Knights Hospitaller at Heitersheim. The Prince-Bishopric of Lübeck remained an ecclesiastical member even after it had turned Protestant, ruled by diocesan administrators from the House of Holstein-Gottorp from 1586. The Prince-Bishopric of Osnabrück, according to the 1648 Peace of Westphalia, was under the alternating rule of a Catholic bishop and a Lutheran bishop from the House of Hanover.

Each member of the Princes' College held either a single vote (Virilstimme) or a collective vote (Kuriatstimme). From 1582, a prince's single vote depended strictly on his immediate fiefs; this principle led to an accumulation of votes when one ruler held several territories in personal union. Counts and lords were only entitled to collective votes; they therefore formed separate colleges, such as the Wetterau Association of Imperial Counts, and mergers within the Swabian, the Franconian and the Lower Rhenish–Westphalian Circles. Likewise, on the ecclesiastical bench, the Imperial abbots joined a Swabian or Rhenish college.

In the German Mediatisation of 1803, numerous ecclesiastical territories were annexed by secular estates. However, a reform of the Princes' college was not carried out before the Empire's dissolution in 1806.

The college of Imperial Cities (Reichsstädtekollegium) evolved from 1489 onwards. It contributed greatly to the development of the Imperial Diets as a political institution. Nevertheless, the collective vote of the cities was of inferior importance until a 1582 Recess of the Augsburg Diet. The college was led by the city council of the actual venue until the Perpetual Diet in 1663, when the chair passed to Regensburg. The Imperial cities were also divided into a Swabian and a Rhenish bench. The Swabian cities were led by Nuremberg, Augsburg and Regensburg, the Rhenish cities by Cologne, Aachen and Frankfurt.

For a complete list of members of the Imperial Diet from 1792, near the end of the Empire, see List of Reichstag participants (1792).

After the Peace of Westphalia, religious matters could no longer be decided by a majority vote of the colleges. Instead, the Reichstag would separate into Catholic and Protestant bodies, which would discuss the matter separately and then negotiate an agreement with each other, a procedure called the itio in partes.[6] The Catholic body, or corpus catholicorum, was headed by the Archbishop-Elector of Mainz.[7]

The Protestant body, or corpus evangelicorum, was headed by the Elector of Saxony. At meetings of the Protestant body, Saxony would introduce each topic of discussion, after which Brandenburg-Prussia and Hanover would speak, followed by the remaining states in order of size. When all the states had spoken, Saxony would weigh the votes and announce a consensus.
Frederick Augustus I, Elector of Saxony, converted to Catholicism in 1697 in order to become King of Poland, but the Electorate itself remained officially Protestant and retained the directorship of the Protestant body. When the Elector's son also converted to Catholicism, Prussia and Hanover attempted to take over the directorship in 1717–1720, but without success. The Electors of Saxony would head the Protestant body until the end of the Holy Roman Empire.[7]

After the formation of the new German Empire in 1871, the Historical Commission of the Bavarian Academy of Sciences started to collect imperial records (Reichsakten) and imperial diet records (Reichstagsakten). In 1893 the commission published the first volume. At present the years 1524–1527, and years up to 1544, are being collected and researched. A volume dealing with the 1532 Diet of Regensburg, including the peace negotiations with the Protestants in Schweinfurt and Nuremberg, by Rosemarie Aulinger of Vienna, was published in 1992.
https://en.wikipedia.org/wiki/Reichstag_(Holy_Roman_Empire)
A missus dominicus (plural missi dominici), Latin for "envoy[s] of the lord [ruler]", also known in Dutch as Zendgraaf (German: Sendgraf), meaning "sent Graf", was an official commissioned by the Frankish king or Holy Roman Emperor to supervise the administration, mainly of justice, in parts of his dominions too remote for frequent personal visits.[1] As such, the missus performed important intermediary functions between royal and local administrations. There are superficial points of comparison with the original Roman corrector, except that the missus was sent out on a regular basis. Four points made the missi effective as instruments of the centralized monarchy: the personal character of the missus, yearly change, isolation from local interests, and the free choice of the king.[2]

Based on Merovingian ad hoc arrangements,[3] using the form missus regis (the "king's envoy") and sending a layman and an ecclesiastic in pairs,[4] the use of missi dominici was fully exploited by Charlemagne (ruling 768–814), who made them a regular part of his administration,[5][6][7] "a highly intelligent and plausible innovation in Carolingian government", Norman F. Cantor observes,[8] "and a tribute to the administrative skill of the ecclesiastics, such as Alcuin and Einhard".

The missi were at first chosen from Charlemagne's personal, most trusted entourage, of whatever social degree. Soon they were selected only from the secular and ecclesiastical nobility: the entry for 802 in the so-called Lorsch Annals (794–803) states that instead of relying on "poorer vassals",[9] Charlemagne "chose from the kingdom archbishops and bishops and abbots, with dukes and counts, who now had no need to receive gifts from the innocent, and sent them throughout his kingdom, so that they might administer justice to the churches, to widows, orphans and the poor, and to all the people."[10] Presumably the same year, the capitulary usually known as the Capitulare missorum generale was issued, which gives a detailed account of their duties and responsibilities. They were to execute justice, to ensure respect for the king, to control the government of the military dukes and administrative counts (then still royal officials), to receive their oath of allegiance,[11] to let the king's will be known, at times by distributing capitularies around the empire, and to supervise the clergy of their assigned region.[5] In short, they were the direct representatives of the king or Holy Roman Emperor.
The inhabitants of the district they administered had to provide for their subsistence, and at times they led the host to battle.[5] The missi were protected by a triple wergeld, and resistance to them was punishable by death.[12] In addition, special instructions were given to various missi, and many of these have been preserved.[5]

As missi became a conventional part of court machinery, missus ad hoc came to signify a missus sent out for some particular purpose.[13][14] The districts placed under the ordinary missi, which it was their duty to visit for a month at a time, four times a year, were called missatici or legationes[5] (a term illustrating the analogy with a papal legate); the missatica (singular missaticum) avoided division along the lines of the existing dioceses or provinces.[12][15] The missi were not permanent officials, but were generally selected from the ranks of officials at the court, and during the reign of Charlemagne high-standing personages undertook this work.[5] They were sent out collegially, usually in twos, an ecclesiastic and a layman, and were generally complete strangers to the district which they administered,[5] to deter them from putting down local roots and acting on their own initiative, as the counts were doing. In addition, extraordinary missi represented the emperor on special occasions, and at times beyond the limits of his dominions.[16] Even under the strong rule of Charlemagne it was difficult to find men to discharge these duties impartially, and after his death in 814 it became almost impossible.[5][17]

Under Charlemagne's surviving legitimate son, Louis the Pious (ruling 813–840), the process of disintegration was hastened.[18] Once the king associated the choice of missi with the assembly of nobles, the nobles interfered in the appointment of the missi. The missi were later selected from the district in which their duties lay,[19][20][5] which led to their association with local hereditary filiations and, in general, a focus upon their own interests rather than those of the king.[21][22] The 825 list of missi reveals that the circuits of the missatica now corresponded with provinces, strengthening local powers. The duties of the missi, who gradually increased in number, became merged in the ordinary work of the bishops and counts,[23] and under the emperor Charles the Bald[5] (ruling 843–877), who was repeatedly pressured by bishops to send out missi, they took control of associations for the preservation of the peace.[5] Louis the German (ruling 843–876) is not known to have sent out missi.[24] About the end of the ninth century, with the implosion of Carolingian power, the missi disappeared from France, and during the 10th century from Italy.[5][25]

The missi were the last attempt to preserve centralised control in the Holy Roman Empire. In the course of the ninth century, the forces which were making for feudalism tended to produce inherited fiefdoms as the only way to ensure stability, especially in the face of renewed external aggression in the form of Viking attacks, against which the impaired central power was shown to be impotent.
https://en.wikipedia.org/wiki/Sendgraf