Szymański's Mutual Exclusion Algorithm is a mutual exclusion algorithm devised by computer scientist Dr. Bolesław Szymański, which has many favorable properties including linear wait,[1][2] and whose extension[3] solved the open problem posted by Leslie Lamport[4] of whether there is an algorithm with a constant number of communication bits per process that satisfies every reasonable fairness and failure-tolerance requirement that Lamport conceived of (Lamport's solution used n factorial communication variables vs. Szymański's 5). The algorithm is modeled on a waiting room with an entry and exit doorway.[1] Initially the entry door is open and the exit door is closed. All processes which request entry into the critical section at roughly the same time enter the waiting room; the last of them closes the entry door and opens the exit door. The processes then enter the critical section one by one (or in larger groups if the critical section permits this). The last process to leave the critical section closes the exit door and reopens the entry door, so the next batch of processes may enter. The implementation consists of each process having a flag variable which is written by that process and read by all others (this single-writer property is desirable for efficient cache usage). The flag variable assumes one of five values/states (0: noncritical; 1: declaring intention to enter; 2: waiting for other processes to enter the waiting room; 3: standing in the doorway; 4: inside the waiting room with the entry door closed). The status of the entry door is computed by reading the flags of all N processes. Pseudo-code is given below. Note that the order of the "all" and "any" tests must be uniform. Despite the intuitive explanation, the algorithm was not easy to prove correct; however, due to its favorable properties a proof of correctness was desirable, and multiple proofs have been presented.[2][5]
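A minimal Java sketch of the protocol follows, assuming N processes with IDs 0..N-1 sharing an AtomicIntegerArray of flags; the class and helper names are illustrative, not part of the original algorithm description.

    import java.util.concurrent.atomic.AtomicIntegerArray;

    class Szymanski {
        final int n;
        final AtomicIntegerArray flag; // flag.get(i) is process i's state, initially 0

        Szymanski(int n) { this.n = n; this.flag = new AtomicIntegerArray(n); }

        // true iff every flag in [from, to) is one of the allowed values
        private boolean all(int from, int to, int... allowed) {
            for (int i = from; i < to; i++) {
                boolean ok = false;
                for (int a : allowed) ok |= flag.get(i) == a;
                if (!ok) return false;
            }
            return true;
        }

        // true iff some flag in [from, to) equals value
        private boolean any(int from, int to, int value) {
            for (int i = from; i < to; i++) if (flag.get(i) == value) return true;
            return false;
        }

        void enter(int self) {
            flag.set(self, 1);                      // declare intention to enter
            while (!all(0, n, 0, 1, 2)) { }         // wait until the entry door is open
            flag.set(self, 3);                      // stand in the doorway
            if (any(0, n, 1)) {                     // another process still wants to enter:
                flag.set(self, 2);                  //   step back and wait for it
                while (!any(0, n, 4)) { }           //   until some process closes the door
            }
            flag.set(self, 4);                      // the entry door is now closed
            while (!all(0, self, 0, 1)) { }         // wait for lower-ID processes to exit first
            // critical section follows
        }

        void exit(int self) {
            while (!all(self + 1, n, 0, 1, 4)) { }  // ensure later processes saw the closed door
            flag.set(self, 0);                      // leave; the last to leave reopens the entry door
        }
    }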
https://en.wikipedia.org/wiki/Szyma%C5%84ski%27s_algorithm
In computer architecture, the test-and-set CPU instruction (or instruction sequence) is designed to implement mutual exclusion in multiprocessor environments. Although a correct lock can be implemented with test-and-set, the test and test-and-set optimization lowers resource contention caused by bus locking, especially cache coherency protocol overhead on contended locks. Given a lock, the entry and exit protocols are sketched below. The difference from the simple test-and-set protocol is the additional spin loop (the test in test and test-and-set) at the start of the entry protocol, which uses ordinary load instructions. The load in this loop executes with less overhead than an atomic operation (or a load-exclusive instruction). For example, on a system using the MESI cache coherency protocol, the cache line being loaded is moved to the Shared state, whereas a test-and-set instruction or a load-exclusive instruction moves it into the Exclusive state. This is particularly advantageous if multiple processors are contending for the same lock: whereas an atomic instruction or load-exclusive instruction requires a coherency-protocol transaction to give that processor exclusive access to the cache line (causing that line to ping-pong between the involved processors), ordinary loads on a line in the Shared state require no protocol transactions at all: processors spinning in the inner loop operate purely locally. Cache-coherency protocol transactions are used only in the outer loop, after the initial check has ascertained that they have a reasonable likelihood of success. If the programming language used supports short-circuit evaluation, the entry protocol can be collapsed into a single loop condition, as shown in the sketch below. Although this optimization is useful in system programming, test-and-set is to be avoided in high-level concurrent programming: spinning in applications deprives the operating system scheduler of the knowledge of who is blocking on what. Consequently, the scheduler has to guess how to allocate CPU time among the threads, typically just allowing the threads to use up their time quota. Threads end up spinning unproductively, waiting for threads that are not scheduled. By using operating-system-provided lock objects, such as mutexes, the OS can schedule exactly the unblocked threads.
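A minimal Java sketch of the two protocols, using AtomicBoolean (its plain get() serves as the test; its getAndSet(true) serves as the atomic test-and-set); the class name is illustrative.

    import java.util.concurrent.atomic.AtomicBoolean;

    class TTASLock {
        private final AtomicBoolean locked = new AtomicBoolean(false);

        // Entry protocol: spin with ordinary reads, then attempt the atomic operation.
        void lock() {
            while (true) {
                while (locked.get()) { }        // inner "test" loop: cache-friendly plain loads
                if (!locked.getAndSet(true)) {  // outer step: atomic test-and-set
                    return;                     // acquired
                }
            }
        }

        // With short-circuit evaluation, the entry protocol collapses to:
        //   while (locked.get() || locked.getAndSet(true)) { }

        // Exit protocol: simply clear the lock.
        void unlock() {
            locked.set(false);
        }
    }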
https://en.wikipedia.org/wiki/Test_and_Test-and-set
In computer programming, a programming idiom, code idiom or simply idiom is a code fragment having a semantic role[1] which recurs frequently across software projects. It often expresses a special feature of a recurring construct in one or more programming languages, frameworks or libraries. This definition is rooted in the linguistic definition of "idiom". The idiom can be seen by developers as an action on a programming concept underlying a pattern in code, which is represented in implementation by contiguous or scattered code snippets. Generally speaking, a programming idiom's semantic role is a natural language expression of a simple task, algorithm, or data structure that is not a built-in feature in the programming language being used, or, conversely, the use of an unusual or notable feature that is built into a programming language. Knowing the idioms associated with a programming language and how to use them is an important part of gaining fluency in that language. It also helps to transfer knowledge in the form of analogies from one language or framework to another. Such idiomatic knowledge is widely used in crowdsourced repositories to help developers overcome programming barriers.[2] Mapping code idioms to idiosyncrasies can be a helpful way to navigate the tradeoffs between generalization and specificity. By identifying common patterns and idioms, developers can create mental models and schemata that help them quickly understand and navigate new code. Furthermore, by mapping these idioms to idiosyncrasies and specific use cases, developers can ensure that they are applying the correct approach and not overgeneralizing it. One way to do this is by creating a reference or documentation that maps common idioms to specific use cases, highlighting where they may need to be adapted or modified to fit a particular project or development team. This can help ensure that developers are working with a shared understanding of best practices and can make informed decisions about when to use established idioms and when to adapt them to fit their specific needs. A common misconception is to use the adverbial or adjectival form of the term to mean "using a programming language in a typical way", which really refers to an idiosyncrasy. An idiom implies that the semantics of some code in a programming language have similarities to other languages or frameworks. For example, an idiosyncratic way to manage dynamic memory in C would be to use the C standard library functions malloc and free, whereas idiomatic refers to manual memory management as a recurring semantic role that can be achieved with the code fragment malloc in C, or pointer = new type [number_of_elements] in C++. In both cases, the semantics of the code are intelligible to developers familiar with C or C++, once the idiomatic or idiosyncratic rationale is exposed to them. However, while idiomatic rationale is often general to the programming domain, idiosyncratic rationale is frequently tied to specific API terminology. A classic example, one of the most common starting points for learning to program or for noticing the syntax differences between a known language and a new one, is a minimal program that prints a fixed greeting (classically "Hello, World!");[3] it has several implementations, among them code fragments for C++ and for Java. Another idiom helps developers understand how to manipulate collections in a given language, particularly inserting an element x at a position i in a list s and moving the elements to its right;[4] code fragments exist for Python, for JavaScript, and for Perl. Java sketches of both idioms follow.
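As a single-language illustration (the per-language fragments mentioned above are not reproduced here; the class and variable names are illustrative):

    import java.util.ArrayList;
    import java.util.List;

    public class Idioms {
        public static void main(String[] args) {
            // Idiom 1: the greeting program
            System.out.println("Hello, World!");

            // Idiom 2: insert element x at position i of list s,
            // moving the elements to its right
            List<String> s = new ArrayList<>(List.of("a", "b", "d"));
            int i = 2;
            String x = "c";
            s.add(i, x);              // s is now ["a", "b", "c", "d"]
            System.out.println(s);
        }
    }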
https://en.wikipedia.org/wiki/Programming_idiom
In software engineering, the initialization-on-demand holder (design pattern) idiom is a lazy-loaded singleton. In all versions of Java, the idiom enables safe, highly concurrent lazy initialization of static fields with good performance.[1][2] The implementation of the idiom relies on the initialization phase of execution within the Java Virtual Machine (JVM) as specified by the Java Language Specification (JLS).[3] When the class Something is loaded by the JVM, the class goes through initialization. Since the class does not have any static variables to initialize, the initialization completes trivially. The static class definition LazyHolder within it is not initialized until the JVM determines that LazyHolder must be executed. The static class LazyHolder is only executed when the static method getInstance is invoked on the class Something, and the first time this happens the JVM will load and initialize the LazyHolder class. The initialization of the LazyHolder class results in the static variable INSTANCE being initialized by executing the (private) constructor of the outer class Something. Since the class initialization phase is guaranteed by the JLS to be sequential, i.e., non-concurrent, no further synchronization is required in the static getInstance method during loading and initialization. And since the initialization phase writes the static variable INSTANCE in a sequential operation, all subsequent concurrent invocations of getInstance will return the same correctly initialized INSTANCE without incurring any additional synchronization overhead. While the implementation is an efficient thread-safe "singleton" cache without synchronization overhead, and better performing than uncontended synchronization,[4] the idiom can only be used when the construction of Something is guaranteed not to fail. In most JVM implementations, if construction of Something fails, subsequent attempts to initialize it from the same class loader will result in a NoClassDefFoundError failure.
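Concretely, the idiom described above has the following shape, using the class names Something and LazyHolder from the text:

    public class Something {
        private Something() { }                 // private constructor: only LazyHolder can call it

        // Not initialized until first referenced, per the JLS initialization rules.
        private static class LazyHolder {
            static final Something INSTANCE = new Something();
        }

        public static Something getInstance() {
            return LazyHolder.INSTANCE;         // first call triggers LazyHolder initialization
        }
    }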
https://en.wikipedia.org/wiki/Initialization-on-demand_holder_idiom
In computer science, a readers–writer lock (also known as a single-writer lock,[1] a multi-reader lock,[2] a push lock,[3] or an MRSW lock) is a synchronization primitive that solves one of the readers–writers problems. An RW lock allows concurrent access for read-only operations, whereas write operations require exclusive access. This means that multiple threads can read the data in parallel but an exclusive lock is needed for writing or modifying data. When a writer is writing the data, all other writers and readers will be blocked until the writer is finished writing. A common use might be to control access to a data structure in memory that cannot be updated atomically and is invalid (and should not be read by another thread) until the update is complete. Readers–writer locks are usually constructed on top of mutexes and condition variables, or on top of semaphores. Some RW locks allow the lock to be atomically upgraded from being locked in read-mode to write-mode, as well as being downgraded from write-mode to read-mode.[1] Upgrading a lock from read-mode to write-mode is prone to deadlocks, since whenever two threads holding reader locks both attempt to upgrade to writer locks, a deadlock is created that can only be broken by one of the threads releasing its reader lock. The deadlock can be avoided by allowing only one thread to acquire the lock in "read-mode with intent to upgrade to write" while there are no threads in write mode and possibly non-zero threads in read-mode. RW locks can be designed with different priority policies for reader vs. writer access. The lock can either be designed to always give priority to readers (read-preferring), to always give priority to writers (write-preferring), or be unspecified with regard to priority. These policies lead to different tradeoffs with regard to concurrency and starvation. Several implementation strategies for readers–writer locks exist, reducing them to synchronization primitives that are assumed to pre-exist. Raynal demonstrates how to implement an R/W lock using two mutexes and a single integer counter. The counter, b, tracks the number of blocking readers. One mutex, r, protects b and is only used by readers; the other, g (for "global"), ensures mutual exclusion of writers. This requires that a mutex acquired by one thread can be released by another. This implementation is read-preferring.[4]: 76 Alternatively, an RW lock can be implemented in terms of a condition variable, cond, an ordinary (mutex) lock, g, and various counters and flags describing the threads that are currently active or waiting.[7][8][9] For a write-preferring RW lock one can use two integer counters (num_readers_active and num_writers_waiting) and one Boolean flag (writer_active). Initially num_readers_active and num_writers_waiting are zero and writer_active is false. Sketches of both implementations are given below. The read-copy-update (RCU) algorithm is one solution to the readers–writers problem. RCU is wait-free for readers. The Linux kernel implements a special solution for few writers called seqlock.
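A Java sketch of Raynal's read-preferring implementation, using binary semaphores for the two mutexes since (unlike Java's built-in monitors) a semaphore permit acquired by one thread may be released by another; the names follow the text:

    import java.util.concurrent.Semaphore;

    class ReadPreferringRWLock {
        private int b = 0;                            // number of blocking readers
        private final Semaphore r = new Semaphore(1); // protects b, used only by readers
        private final Semaphore g = new Semaphore(1); // "global": mutual exclusion of writers

        void beginRead() throws InterruptedException {
            r.acquire();
            if (++b == 1) g.acquire();                // first reader locks out writers
            r.release();
        }

        void endRead() throws InterruptedException {
            r.acquire();
            if (--b == 0) g.release();                // last reader readmits writers
            r.release();
        }

        void beginWrite() throws InterruptedException { g.acquire(); }
        void endWrite() { g.release(); }
    }

And a sketch of the write-preferring variant, built on a mutex g, a condition variable cond, and the two counters and flag named above:

    import java.util.concurrent.locks.Condition;
    import java.util.concurrent.locks.ReentrantLock;

    class WritePreferringRWLock {
        private final ReentrantLock g = new ReentrantLock();
        private final Condition cond = g.newCondition();
        private int numReadersActive = 0;
        private int numWritersWaiting = 0;
        private boolean writerActive = false;

        void beginRead() throws InterruptedException {
            g.lock();
            try {
                while (numWritersWaiting > 0 || writerActive) cond.await();
                numReadersActive++;
            } finally { g.unlock(); }
        }

        void endRead() {
            g.lock();
            try {
                if (--numReadersActive == 0) cond.signalAll();
            } finally { g.unlock(); }
        }

        void beginWrite() throws InterruptedException {
            g.lock();
            try {
                numWritersWaiting++;
                while (numReadersActive > 0 || writerActive) cond.await();
                numWritersWaiting--;
                writerActive = true;
            } finally { g.unlock(); }
        }

        void endWrite() {
            g.lock();
            try {
                writerActive = false;
                cond.signalAll();
            } finally { g.unlock(); }
        }
    }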
https://en.wikipedia.org/wiki/Readers%E2%80%93writer_lock
In concurrent computing, deadlock is any situation in which no member of some group of entities can proceed because each waits for another member, including itself, to take action, such as sending a message or, more commonly, releasing a lock.[1] Deadlocks are a common problem in multiprocessing systems, parallel computing, and distributed systems, because in these contexts systems often use software or hardware locks to arbitrate shared resources and implement process synchronization.[2] In an operating system, a deadlock occurs when a process or thread enters a waiting state because a requested system resource is held by another waiting process, which in turn is waiting for another resource held by another waiting process.[3] If a process remains indefinitely unable to change its state because resources requested by it are being used by another process that itself is waiting, then the system is said to be in a deadlock.[4] In a communications system, deadlocks occur mainly due to loss or corruption of signals rather than contention for resources.[5] A deadlock situation on a resource can arise only if all of the following conditions occur simultaneously in a system:[6] mutual exclusion (at least one resource is held in a non-shareable mode), hold and wait (a process holds at least one resource while waiting for another), no preemption (a resource can be released only voluntarily by the process holding it), and circular wait (each process waits for a resource held by the next process in a cycle). These four conditions are known as the Coffman conditions from their first description in a 1971 article by Edward G. Coffman, Jr.[9] While these conditions are sufficient to produce a deadlock on single-instance resource systems, they only indicate the possibility of deadlock on systems having multiple instances of resources.[10] Most current operating systems cannot prevent deadlocks.[11] When a deadlock occurs, different operating systems respond to it in different non-standard manners. Most approaches work by preventing one of the four Coffman conditions from occurring, especially the fourth one.[12] Major approaches are as follows. In the first approach, it is assumed that a deadlock will never occur. This is also an application of the Ostrich algorithm.[12][13] This approach was initially used by MINIX and UNIX.[9] It is used when the time intervals between occurrences of deadlocks are large and the data loss incurred each time is tolerable. Ignoring deadlocks can be safely done if deadlocks are formally proven to never occur. An example is the RTIC framework.[14] Under deadlock detection, deadlocks are allowed to occur. Then the state of the system is examined to detect that a deadlock has occurred, and it is subsequently corrected. An algorithm is employed that tracks resource allocation and process states; it rolls back and restarts one or more of the processes in order to remove the detected deadlock. Detecting a deadlock that has already occurred is easily possible, since the resources that each process has locked and/or currently requested are known to the resource scheduler of the operating system.[13] After a deadlock is detected, it can be corrected by terminating one or more of the involved processes or by preempting resources from them.[citation needed] Deadlock prevention works by preventing one of the four Coffman conditions from occurring. Similar to deadlock prevention, the deadlock avoidance approach ensures that deadlock will not occur in a system. The term "deadlock avoidance" appears to be very close to "deadlock prevention" in a linguistic context, but the two are very different in the context of deadlock handling. Deadlock avoidance does not impose any conditions as seen in prevention; instead, each resource request is carefully analyzed to see whether it could be safely fulfilled without causing deadlock.
Deadlock avoidance requires that the operating system be given in advance additional information concerning which resources a process will request and use during its lifetime. The deadlock avoidance algorithm analyzes each request by examining whether there is any possibility of deadlock occurring in the future if the requested resource is allocated. The drawback of this approach is its requirement for advance information about how resources will be requested in the future. One of the most commonly used deadlock avoidance algorithms is the Banker's algorithm.[17] A minimal example of the circular-wait condition is sketched below. A livelock is similar to a deadlock, except that the states of the processes involved in the livelock constantly change with regard to one another, with none progressing. The term was coined by Edward A. Ashcroft in a 1975 paper[18] in connection with an examination of airline booking systems.[19] Livelock is a special case of resource starvation; the general definition only states that a specific process is not progressing.[20] Livelock is a risk with some algorithms that detect and recover from deadlock. If more than one process takes action, the deadlock detection algorithm can be repeatedly triggered. This can be avoided by ensuring that only one process (chosen arbitrarily or by priority) takes action.[21] Distributed deadlocks can occur in distributed systems when distributed transactions or concurrency control is being used. Distributed deadlocks can be detected either by constructing a global wait-for graph from local wait-for graphs at a deadlock detector or by a distributed algorithm like edge chasing. Phantom deadlocks are deadlocks that are falsely detected in a distributed system due to internal system delays but do not actually exist. For example, if a process releases a resource R1 and issues a request for R2, and the first message is lost or delayed, a coordinator (detector of deadlocks) could falsely conclude a deadlock (if the request for R2 while holding R1 would cause a deadlock).
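A minimal Java sketch of the circular-wait condition: two threads acquire the same two locks in opposite orders, so each can end up holding one lock while waiting forever for the other (all names are illustrative):

    public class DeadlockDemo {
        static final Object lockA = new Object();
        static final Object lockB = new Object();

        public static void main(String[] args) {
            new Thread(() -> {
                synchronized (lockA) {
                    pause();                    // give the other thread time to take lockB
                    synchronized (lockB) { System.out.println("t1 got both"); }
                }
            }).start();
            new Thread(() -> {
                synchronized (lockB) {          // opposite acquisition order: the cycle
                    pause();
                    synchronized (lockA) { System.out.println("t2 got both"); }
                }
            }).start();
        }

        static void pause() {
            try { Thread.sleep(100); } catch (InterruptedException ignored) { }
        }
    }

Imposing a global lock order (every thread takes lockA before lockB) removes the cycle and thus prevents this deadlock, which is one standard way of breaking the fourth Coffman condition.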
https://en.wikipedia.org/wiki/Deadlock_(computer_science)
The Java programming language's Java Collections Framework version 1.5 and later defines and implements the original regular single-threaded Maps, and also new thread-safe Maps implementing the java.util.concurrent.ConcurrentMap interface among other concurrent interfaces.[1] In Java 1.6, the java.util.NavigableMap interface was added, extending java.util.SortedMap, and the java.util.concurrent.ConcurrentNavigableMap interface was added as a subinterface combination. Sets can be considered sub-cases of corresponding Maps in which the values are always a particular constant which can be ignored, although the Set API uses corresponding but differently named methods. At the bottom of the interface hierarchy is java.util.concurrent.ConcurrentNavigableMap, which uses multiple inheritance. For unordered access as defined in the java.util.Map interface, java.util.concurrent.ConcurrentHashMap implements java.util.concurrent.ConcurrentMap.[2] The mechanism is hash access to a hash table with lists of entries, each entry holding a key, a value, the hash, and a next reference. Prior to Java 8, there were multiple locks, each serializing access to a 'segment' of the table. In Java 8, native synchronization is used on the heads of the lists themselves, and the lists can mutate into small trees when they threaten to grow too large due to unfortunate hash collisions. Also, Java 8 uses the compare-and-set primitive optimistically to place the initial heads in the table, which is very fast. Performance is O(1), but there are occasional delays when rehashing is necessary. After the hash table expands, it never shrinks, possibly leading to a memory 'leak' after entries are removed. For ordered access as defined by the java.util.NavigableMap interface, java.util.concurrent.ConcurrentSkipListMap was added in Java 1.6,[1] and implements java.util.concurrent.ConcurrentMap and also java.util.concurrent.ConcurrentNavigableMap. It is a skip list which uses lock-free techniques to make a tree. Performance is O(log n). One problem solved by the Java 1.5 java.util.concurrent package is that of concurrent modification. The collection classes it provides may be reliably used by multiple threads. All thread-shared non-concurrent Maps and other collections need to use some form of explicit locking, such as native synchronization, in order to prevent concurrent modification, or else there must be a way to prove from the program logic that concurrent modification cannot occur. Concurrent modification of a Map by multiple threads will sometimes destroy the internal consistency of the data structures inside the Map, leading to bugs which manifest rarely or unpredictably, and which are difficult to detect and fix. Also, concurrent modification by one thread with read access by another thread or threads will sometimes give unpredictable results to the reader, although the Map's internal consistency will not be destroyed. Using external program logic to prevent concurrent modification increases code complexity and creates an unpredictable risk of errors in existing and future code, although it enables non-concurrent collections to be used. However, neither locks nor program logic can coordinate external threads which may come in contact with the collection.
To help with the concurrent modification problem, the non-concurrent Map implementations and other Collections use internal modification counters which are consulted before and after a read to watch for changes: the writers increment the modification counters. A concurrent modification is supposed to be detected by this mechanism, throwing a java.util.ConcurrentModificationException,[3] but detection is not guaranteed to occur in all cases and should not be relied on. The counter maintenance also reduces performance. For performance reasons, the counters are not volatile, so it is not guaranteed that changes to them will be propagated between threads. One solution to the concurrent modification problem is using a particular wrapper class provided by a factory in java.util.Collections: public static <K,V> Map<K,V> synchronizedMap(Map<K,V> m), which wraps an existing non-thread-safe Map with methods that synchronize on an internal mutex.[4] There are also wrappers for the other kinds of collections. This is a partial solution, because it is still possible that the underlying Map can be inadvertently accessed by threads which keep or obtain unwrapped references. Also, all collections implement java.lang.Iterable, but the synchronized-wrapped Maps and other wrapped collections do not provide synchronized iterators, so the synchronization is left to the client code, which is slow and error-prone and cannot be expected to be duplicated by other consumers of the synchronized Map. The entire duration of the iteration must be protected as well. Furthermore, a Map which is wrapped twice in different places will have different internal mutex objects on which the synchronizations operate, allowing overlap. The delegation reduces performance, but modern just-in-time compilers often inline heavily, limiting the performance reduction. The first sketch below shows how the wrapping works inside the wrapper: the mutex is just a final Object and m is the final wrapped Map. The synchronization of the iteration is recommended as in the second sketch; however, this synchronizes on the wrapper rather than on the internal mutex, allowing overlap.[5] Any Map can be used safely in a multi-threaded system by ensuring that all accesses to it are handled by the Java synchronization mechanism. Code using a java.util.concurrent.ReentrantReadWriteLock is similar to that for native synchronization; however, for safety, the locks should be used in a try/finally block so that early exit, such as a thrown java.lang.Exception or break/continue, is sure to pass through the unlock (third sketch below). This technique is better than using synchronization[6] because reads can overlap each other, though there is a new issue in deciding how to prioritize the writes with respect to the reads. For simplicity, a java.util.concurrent.ReentrantLock can be used instead, which makes no read/write distinction. More operations on the locks are possible than with synchronization, such as tryLock() and tryLock(long timeout, TimeUnit unit). Mutual exclusion has a lock convoy problem, in which threads may pile up on a lock, causing the JVM to need to maintain expensive queues of waiters and to 'park' the waiting threads. It is expensive to park and unpark a thread, and a slow context switch may occur. Context switches require from microseconds to milliseconds, while the Map's own basic operations normally take nanoseconds. Performance can drop to a small fraction of a single thread's throughput as contention increases.
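First, a sketch of how the wrapping works inside the wrapper (the real java.util.Collections class delegates every Map method this way; the class name here is illustrative):

    class SynchronizedMapSketch<K, V> {
        private final Object mutex = new Object();
        private final java.util.Map<K, V> m;          // the wrapped non-thread-safe Map

        SynchronizedMapSketch(java.util.Map<K, V> m) { this.m = m; }

        V get(Object key)     { synchronized (mutex) { return m.get(key); } }
        V put(K key, V value) { synchronized (mutex) { return m.put(key, value); } }
        // ... every other Map method is delegated the same way ...
    }

Second, the recommended iteration idiom, which must hold the monitor for the whole traversal (note that this locks the wrapper object itself, not the internal mutex):

    import java.util.Collections;
    import java.util.HashMap;
    import java.util.Map;

    class IterationExample {
        public static void main(String[] args) {
            Map<String, Integer> map = Collections.synchronizedMap(new HashMap<>());
            map.put("a", 1);
            synchronized (map) {                       // protect the entire iteration
                for (Map.Entry<String, Integer> e : map.entrySet()) {
                    System.out.println(e.getKey() + "=" + e.getValue());
                }
            }
        }
    }

Third, a sketch of guarding a plain Map with a ReentrantReadWriteLock, using try/finally so early exits still pass through the unlock (class and method names illustrative):

    import java.util.HashMap;
    import java.util.Map;
    import java.util.concurrent.locks.ReadWriteLock;
    import java.util.concurrent.locks.ReentrantReadWriteLock;

    class GuardedMap {
        private final ReadWriteLock rw = new ReentrantReadWriteLock();
        private final Map<String, Integer> plain = new HashMap<>();

        Integer read(String key) {
            rw.readLock().lock();
            try { return plain.get(key); }             // reads may overlap each other
            finally { rw.readLock().unlock(); }
        }

        void write(String key, int value) {
            rw.writeLock().lock();
            try { plain.put(key, value); }             // writes are exclusive
            finally { rw.writeLock().unlock(); }
        }
    }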
When there is little or no contention for the lock, there is little performance impact apart from the lock's contention test itself. Modern JVMs will inline most of the lock code, reducing it to only a few instructions, keeping the no-contention case very fast. Reentrant techniques like native synchronization or java.util.concurrent.ReentrantReadWriteLock, however, have extra performance-reducing baggage in the maintenance of the reentrancy depth, affecting the no-contention case as well. The convoy problem seems to be easing with modern JVMs, but it can be hidden by slow context switching: in this case, latency will increase, but throughput will continue to be acceptable. With hundreds of threads, a context switch time of 10 ms produces latency measured in seconds. Mutual exclusion solutions also fail to take advantage of all of the computing power of a multiple-core system, because only one thread is allowed inside the Map code at a time. The implementations of the particular concurrent Maps provided by the Java Collections Framework and others sometimes take advantage of multiple cores using lock-free programming techniques. Lock-free techniques use operations like the compareAndSet() intrinsic method available on many of the Java classes, such as AtomicReference, to do conditional updates of some Map-internal structures atomically. The compareAndSet() primitive is augmented in the JCF classes by native code that can do compareAndSet on special internal parts of some objects for some algorithms (using 'unsafe' access). The techniques are complex, relying often on the rules of inter-thread communication provided by volatile variables, the happens-before relation, and special kinds of lock-free 'retry loops' (which are unlike spin locks in that they always produce progress). The compareAndSet() relies on special processor-specific instructions. It is possible for any Java code to use the compareAndSet() method on various concurrent classes for other purposes to achieve lock-free or even wait-free concurrency, which provides finite latency. Lock-free techniques are simple in many common cases and with some simple collections like stacks. A benchmark diagram (a throughput plot not reproduced in this extract) indicates how synchronizing using Collections.synchronizedMap(java.util.Map) wrapping a regular HashMap (purple) may not scale as well as ConcurrentHashMap (red); the others plotted are the ordered ConcurrentNavigableMaps AirConcurrentMap (blue) and ConcurrentSkipListMap (CSLM, green). (The flat spots may be rehashes producing tables that are bigger than the nursery, and ConcurrentHashMap takes more space. The y axis should read 'puts K'. The system is an 8-core i7 at 2.5 GHz, with -Xms5000m to prevent GC.) GC and JVM process expansion change the curves considerably, and some internal lock-free techniques generate garbage on contention. Yet another problem with mutual exclusion approaches is that the assumption of complete atomicity made by some single-threaded code creates sporadic, unacceptably long inter-thread delays in a concurrent environment. In particular, iterators and bulk operations like putAll() can take a length of time proportional to the Map size, delaying other threads that expect predictably low latency for non-bulk operations. For example, a multi-threaded web server cannot allow some responses to be delayed by long-running iterations of other threads executing other requests that are searching for a particular value.
Related to this is the fact that threads that lock the Map do not actually have any requirement ever to relinquish the lock, and an infinite loop in the owner thread may propagate permanent blocking to other threads. Slow owner threads can sometimes be interrupted. Hash-based Maps are also subject to spontaneous delays during rehashing. The java.util.concurrent packages' solution to the concurrent modification problem, the convoy problem, the predictable latency problem, and the multi-core problem includes an architectural choice called weak consistency. This choice means that reads like get(java.lang.Object) will not block even when updates are in progress, and it is allowable even for updates to overlap with themselves and with reads. Weak consistency allows, for example, the contents of a ConcurrentMap to change during an iteration of it by a single thread.[7] The iterators are designed to be used by one thread at a time. So, for example, a Map containing two entries that are inter-dependent may be seen in an inconsistent way by a reader thread during modification by another thread. An update that is supposed to change the key of an entry (k1,v) to an entry (k2,v) atomically would need to do a remove(k1) and then a put(k2, v), while an iteration might miss the entry or see it in two places. Retrievals return the value for a given key that reflects the latest previous completed update for that key. Thus there is a 'happens-before' relation. There is no way for ConcurrentMaps to lock the entire table. There is no possibility of ConcurrentModificationException as there is with inadvertent concurrent modification of non-concurrent Maps. The size() method may take a long time, as opposed to the corresponding non-concurrent Maps and other collections, which usually include a size field for fast access, because ConcurrentMaps may need to scan the entire Map in some way. When concurrent modifications are occurring, the results reflect the state of the Map at some time, but not necessarily a single consistent state, hence size(), isEmpty() and containsValue(java.lang.Object) may be best used only for monitoring. There are some operations provided by ConcurrentMap that are not in Map (which it extends) to allow atomicity of modifications. The replace(K, v1, v2) will test for the existence of v1 in the entry identified by K and, only if found, replace v1 by v2 atomically. The replace(k,v) will do a put(k,v) only if k is already in the Map. Also, putIfAbsent(k,v) will do a put(k,v) only if k is not already in the Map, and remove(k, v) will remove the entry for k only if it is currently mapped to v. This atomicity can be important for some multi-threaded use cases, but is not related to the weak-consistency constraint. For ConcurrentMaps, putIfAbsent(k, v), replace(k, v), replace(k, v1, v2) and remove(k, v) are atomic: each behaves like a non-atomic check-then-act sequence (sketched below) but executes as a single atomic step. Because Map and ConcurrentMap are interfaces, new methods cannot be added to them without breaking implementations. However, Java 1.8 added the capability for default interface implementations, and it added to the Map interface default implementations of some new methods: getOrDefault(Object, V), forEach(BiConsumer), replaceAll(BiFunction), computeIfAbsent(K, Function), computeIfPresent(K, BiFunction), compute(K, BiFunction), and merge(K, V, BiFunction).
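The non-atomic equivalents can be sketched as follows, matching the ConcurrentMap javadoc; the real methods perform each check-then-act sequence as one atomic action:

    import java.util.Map;
    import java.util.Objects;

    class Equivalents {
        // putIfAbsent(k, v) is atomic but equivalent to:
        static <K, V> V putIfAbsent(Map<K, V> m, K k, V v) {
            if (!m.containsKey(k)) return m.put(k, v);
            else return m.get(k);
        }

        // replace(k, v) is atomic but equivalent to:
        static <K, V> V replace(Map<K, V> m, K k, V v) {
            if (m.containsKey(k)) return m.put(k, v);
            else return null;
        }

        // replace(k, v1, v2) is atomic but equivalent to:
        static <K, V> boolean replace(Map<K, V> m, K k, V v1, V v2) {
            if (m.containsKey(k) && Objects.equals(m.get(k), v1)) { m.put(k, v2); return true; }
            else return false;
        }

        // remove(k, v) is atomic but equivalent to:
        static <K, V> boolean remove(Map<K, V> m, K k, V v) {
            if (m.containsKey(k) && Objects.equals(m.get(k), v)) { m.remove(k); return true; }
            else return false;
        }
    }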
The default implementations in Map do not guarantee atomicity, but the overriding defaults in ConcurrentMap use lock-free techniques to achieve atomicity, so existing ConcurrentMap implementations are automatically atomic. The lock-free techniques may be slower than overrides in the concrete classes, so concrete classes may choose to implement them atomically or not and document the concurrency properties. It is possible to use lock-free techniques with ConcurrentMaps because they include methods of a sufficiently high consensus number, namely infinity, meaning that any number of threads may be coordinated. The following example could be implemented with the Java 8 merge(), but it shows the overall lock-free pattern, which is more general. The example is not related to the internals of the ConcurrentMap but to the client code's use of the ConcurrentMap: suppose we want to multiply a value in the Map by a constant C atomically (see the sketch below). The putIfAbsent(k, v) is also useful when the entry for the key is allowed to be absent; this case could likewise be implemented with the Java 8 compute(), but again the retry loop shows the overall lock-free pattern. The replace(k,v1,v2) does not accept null parameters, so sometimes a combination of the two is necessary: if v1 is null, then putIfAbsent(k, v2) is invoked; otherwise, replace(k,v1,v2) is invoked. The Java collections framework was designed and developed primarily by Joshua Bloch, and was introduced in JDK 1.2.[8] The original concurrency classes came from Doug Lea's[9] collection package.
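A sketch of the client-side lock-free retry loop for multiplying the value under key k by a constant C atomically; the treatment of an absent entry (installing C, i.e. treating the missing value as 1) is an illustrative assumption, not part of the original text:

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;

    class MultiplyExample {
        static final int C = 10;

        static void multiplyByC(ConcurrentMap<String, Integer> m, String k) {
            while (true) {                              // retry loop: always makes progress overall
                Integer oldValue = m.get(k);
                if (oldValue == null) {
                    // entry allowed to be absent: replace(k, null, v) is not permitted,
                    // so use putIfAbsent instead
                    if (m.putIfAbsent(k, C) == null) return;
                } else {
                    // fails (returns false) if another thread changed the value meanwhile
                    if (m.replace(k, oldValue, oldValue * C)) return;
                }
            }
        }

        public static void main(String[] args) {
            ConcurrentMap<String, Integer> m = new ConcurrentHashMap<>();
            m.put("x", 3);
            multiplyByC(m, "x");
            System.out.println(m.get("x"));             // 30
        }
    }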
https://en.wikipedia.org/wiki/Java_ConcurrentMap#Lock-free_atomicity
Properties of an execution of a computer program—particularly for concurrent and distributed systems—have long been formulated by giving safety properties ("bad things don't happen") and liveness properties ("good things do happen").[1] A program is totally correct with respect to a precondition P and postcondition Q if any execution started in a state satisfying P terminates in a state satisfying Q. Total correctness is a conjunction of a safety property and a liveness property.[2] Note that a bad thing is discrete,[3] since it happens at a particular place during execution. A "good thing" need not be discrete, but the liveness property of termination is discrete. Formal definitions that were ultimately proposed for safety properties[4] and liveness properties[5] demonstrated that this decomposition is not only intuitively appealing but is also complete: all properties of an execution are a conjunction of safety and liveness properties.[5] Moreover, undertaking the decomposition can be helpful, because the formal definitions enable a proof that different methods must be used for verifying safety properties versus for verifying liveness properties.[6][7] A safety property proscribes discrete bad things from occurring during an execution.[1] A safety property thus characterizes what is permitted by stating what is prohibited. The requirement that the bad thing be discrete means that a bad thing occurring during execution necessarily occurs at some identifiable point.[5] Examples of a discrete bad thing that could be used to define a safety property include a deadlock and a violation of mutual exclusion (two processes simultaneously inside their critical sections).[5] An execution of a program can be described formally by giving the infinite sequence of program states that results as execution proceeds, where the last state for a terminating program is repeated infinitely. For a program of interest, let S denote the set of possible program states, S* denote the set of finite sequences of program states, and S^ω denote the set of infinite sequences of program states. The relation σ ≤ τ holds for sequences σ and τ iff σ is a prefix of τ or σ equals τ.[5] A property of a program is the set of allowed executions. The essential characteristic of a safety property SP is: if some execution σ does not satisfy SP, then the defining bad thing for that safety property occurs at some point in σ. Notice that after such a bad thing, if further execution results in an execution σ′, then σ′ also does not satisfy SP, since the bad thing in σ also occurs in σ′. We take this inference about the irremediability of bad things to be the defining characteristic for SP to be a safety property. Formalizing this in predicate logic gives a formal definition for SP being a safety property (rendered below).[5] This formal definition for safety properties implies that if an execution σ satisfies a safety property SP then every prefix of σ (with the last state repeated) also satisfies SP.
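A common rendering of that formal definition, following Alpern and Schneider's formulation and using the notation above, is:

    SP is a safety property  iff  ∀σ ∈ S^ω: σ ∉ SP ⟹ (∃α ∈ S*, α ≤ σ: ∀τ ∈ S^ω: ατ ∉ SP)

that is, every execution violating SP has some finite prefix α that is already irremediable: no continuation τ of α can yield an execution satisfying SP.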
A liveness property prescribes good things for every execution or, equivalently, describes something that must happen during an execution.[1] The good thing need not be discrete—it might involve an infinite number of steps. Examples of a good thing used to define a liveness property include termination, guaranteed service (every request eventually receives a response), and starvation freedom (every process eventually enters its critical section).[5] The good thing in the first example is discrete but not in the others. Producing an answer within a specified real-time bound is a safety property rather than a liveness property. This is because a discrete bad thing is being proscribed: a partial execution that reaches a state where the answer still has not been produced and the value of the clock (a state variable) violates the bound. Deadlock freedom is a safety property: the "bad thing" is a deadlock (which is discrete). Most of the time, knowing that a program eventually does some "good thing" is not satisfactory; we want to know that the program performs the "good thing" within some number of steps or before some deadline. A property that gives a specific bound to the "good thing" is a safety property (as noted above), whereas the weaker property that merely asserts the bound exists is a liveness property. Proving such a liveness property is likely to be easier than proving the tighter safety property, because proving the liveness property doesn't require the kind of detailed accounting that is required for proving the safety property. To differ from a safety property, a liveness property LP cannot rule out any finite prefix α ∈ S* of an execution[8] (since such an α would be a "bad thing" and, thus, would be defining a safety property). That leads to defining a liveness property LP to be a property that does not rule out any finite prefix (formalized below).[5] This definition does not restrict a good thing to being discrete—the good thing can involve all of τ, which is an infinite-length execution. Lamport used the terms safety property and liveness property in his 1977 paper[1] on proving the correctness of multiprocess (concurrent) programs. He borrowed the terms from Petri net theory, which was using the terms liveness and boundedness for describing how the assignment of a Petri net's "tokens" to its "places" could evolve; Petri net safety was a specific form of boundedness. Lamport subsequently developed a formal definition of safety for a NATO short course on distributed systems in Munich.[9] It assumed that properties are invariant under stuttering. The formal definition of safety given above appears in a paper by Alpern and Schneider;[5] the connection between the two formalizations of safety properties appears in a paper by Alpern, Demers, and Schneider.[10] Alpern and Schneider[5] give the formal definition for liveness, accompanied by a proof that all properties can be constructed using safety properties and liveness properties. That proof was inspired by Gordon Plotkin's insight that safety properties correspond to closed sets and liveness properties correspond to dense sets in a natural topology on the set S^ω of infinite sequences of program states.[11] Subsequently, Alpern and Schneider[12] not only gave a Büchi automaton characterization for the formal definitions of safety properties and liveness properties but used these automata formulations to show that verification of safety properties requires an invariant and verification of liveness properties requires a well-foundedness argument.
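The formal definition of liveness alluded to above can be rendered, in the same notation, as:

    LP is a liveness property  iff  ∀α ∈ S*: ∃τ ∈ S^ω: ατ ∈ LP

that is, every finite prefix α can be extended to some complete execution ατ satisfying LP, so no finite prefix is ever ruled out.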
The correspondence between the kind of property (safety vs. liveness) and the kind of proof (invariance vs. well-foundedness) was a strong argument that the decomposition of properties into safety and liveness (as opposed to some other partitioning) was a useful one: knowing the type of property to be proved dictated the type of proof that is required.
https://en.wikipedia.org/wiki/Liveness
In computer science, resource starvation is a problem encountered in concurrent computing where a process is perpetually denied necessary resources to process its work.[1] Starvation may be caused by errors in a scheduling or mutual exclusion algorithm, but can also be caused by resource leaks, and can be intentionally caused via a denial-of-service attack such as a fork bomb. When starvation is impossible in a concurrent algorithm, the algorithm is called starvation-free, lockout-free,[2] or said to have finite bypass.[3] This property is an instance of liveness, and is one of the two requirements for any mutual exclusion algorithm, the other being correctness. The name "finite bypass" means that any process (concurrent part) of the algorithm is bypassed at most a finite number of times before being allowed access to the shared resource.[3] Starvation is usually caused by an overly simplistic scheduling algorithm. For example, if a (poorly designed) multi-tasking system always switches between the first two tasks while a third never gets to run, then the third task is being starved of CPU time. The scheduling algorithm, which is part of the kernel, is supposed to allocate resources equitably; that is, the algorithm should allocate resources so that no process perpetually lacks necessary resources. Many operating system schedulers employ the concept of process priority. A high-priority process A will run before a low-priority process B. If the high-priority process (process A) blocks and never yields, the low-priority process (B) will (in some systems) never be scheduled—it will experience starvation. If there is an even higher-priority process X, which is dependent on a result from process B, then process X might never finish, even though it is the most important process in the system. This condition is called a priority inversion. Modern scheduling algorithms normally contain code to guarantee that all processes will receive a minimum amount of each important resource (most often CPU time) in order to prevent any process from being subjected to starvation. In computer networks, especially wireless networks, scheduling algorithms may suffer from scheduling starvation. An example is maximum throughput scheduling. Starvation is normally distinguished from deadlock, though both cause a process to freeze. Two or more processes become deadlocked when each of them is doing nothing while waiting for a resource occupied by another program in the same set. On the other hand, a process is in starvation when it is waiting for a resource that is continuously given to other processes. Starvation-freedom is a stronger guarantee than the absence of deadlock: a mutual exclusion algorithm that must choose to allow one of two processes into a critical section and picks one arbitrarily is deadlock-free, but not starvation-free.[3] A possible solution to starvation is to use a scheduling algorithm with a priority queue that also uses the aging technique. Aging is a technique of gradually increasing the priority of processes that wait in the system for a long time,[4] as illustrated in the sketch below.
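A toy Java sketch of aging (all names and the boost policy are illustrative assumptions): each scheduling round picks the waiting task with the highest effective priority and boosts the rest, so a low-priority task is bypassed at most a bounded number of times.

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;

    class AgingScheduler {
        static class Task {
            final String name;
            final int basePriority;   // higher = more urgent
            int age = 0;              // rounds spent waiting
            Task(String name, int basePriority) { this.name = name; this.basePriority = basePriority; }
            int effectivePriority() { return basePriority + age; }
        }

        private final List<Task> waiting = new ArrayList<>();

        void submit(Task t) { waiting.add(t); }

        // Pick the task with the highest effective priority, then age the rest:
        // every bypass raises a waiting task's priority, so it cannot starve.
        Task next() {
            if (waiting.isEmpty()) return null;
            Task chosen = waiting.stream()
                    .max(Comparator.comparingInt(Task::effectivePriority))
                    .get();
            waiting.remove(chosen);
            for (Task t : waiting) t.age++;
            return chosen;
        }
    }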
https://en.wikipedia.org/wiki/Resource_starvation
In computer science, communicating sequential processes (CSP) is a formal language for describing patterns of interaction in concurrent systems.[1] It is a member of the family of mathematical theories of concurrency known as process algebras, or process calculi, based on message passing via channels. CSP was highly influential in the design of the occam programming language[1][2] and also influenced the design of programming languages such as Limbo,[3] RaftLib, Erlang,[4] Go,[5][3] Crystal, and Clojure's core.async.[6] CSP was first described by Tony Hoare in a 1978 article,[7] and has since evolved substantially.[8] CSP has been practically applied in industry as a tool for specifying and verifying the concurrent aspects of a variety of different systems, such as the T9000 Transputer,[9] as well as a secure e-commerce system.[10] The theory of CSP itself is also still the subject of active research, including work to increase its range of practical applicability (e.g., increasing the scale of the systems that can be tractably analyzed).[11] The version of CSP presented in Hoare's original 1978 article was essentially a concurrent programming language rather than a process calculus. It had a substantially different syntax than later versions of CSP, did not possess mathematically defined semantics,[12] and was unable to represent unbounded nondeterminism.[13] Programs in the original CSP were written as a parallel composition of a fixed number of sequential processes communicating with each other strictly through synchronous message-passing. In contrast to later versions of CSP, each process was assigned an explicit name, and the source or destination of a message was defined by specifying the name of the intended sending or receiving process. For example, a COPY process repeatedly receives a character from the process named west and sends that character to the process named east. A parallel composition assigns the name west to a DISASSEMBLE process, X to the COPY process, and east to an ASSEMBLE process, and executes these three processes concurrently.[7] Following the publication of the original version of CSP, Hoare, Stephen Brookes, and A. W. Roscoe developed and refined the theory of CSP into its modern, process algebraic form. The approach taken in developing CSP into a process algebra was influenced by Robin Milner's work on the Calculus of Communicating Systems (CCS), and vice versa. The theoretical version of CSP was initially presented in a 1984 article by Brookes, Hoare, and Roscoe,[14] and later in Hoare's book Communicating Sequential Processes,[12] which was published in 1985. In September 2006, that book was still the third-most cited computer science reference of all time according to Citeseer[citation needed] (albeit an unreliable source due to the nature of its sampling). The theory of CSP has undergone a few minor changes since the publication of Hoare's book. Most of these changes were motivated by the advent of automated tools for CSP process analysis and verification. Roscoe's The Theory and Practice of Concurrency[1] describes this newer version of CSP. An early and important application of CSP was its use for the specification and verification of elements of the INMOS T9000 Transputer, a complex superscalar pipelined processor designed to support large-scale multiprocessing.
CSP was employed in verifying the correctness of both the processor pipeline and the Virtual Channel Processor, which managed off-chip communications for the processor.[9] Industrial application of CSP to software design has usually focused on dependable and safety-critical systems. For example, the Bremen Institute for Safe Systems and Daimler-Benz Aerospace modeled in CSP a fault-management system and avionics interface (consisting of about 23,000 lines of code) intended for use on the International Space Station, and analyzed the model to confirm that their design was free of deadlock and livelock.[15][16] The modeling and analysis process was able to uncover a number of errors that would have been difficult to detect using testing alone. Similarly, Praxis High Integrity Systems applied CSP modeling and analysis during the development of software (approximately 100,000 lines of code) for a secure smart-card certification authority to verify that their design was secure and free of deadlock. Praxis claims that the system has a much lower defect rate than comparable systems.[10] Since CSP is well suited to modeling and analyzing systems that incorporate complex message exchanges, it has also been applied to the verification of communications and security protocols. A prominent example of this sort of application is Lowe's use of CSP and the FDR refinement-checker to discover a previously unknown attack on the Needham–Schroeder public-key authentication protocol, and then to develop a corrected protocol able to defeat the attack.[17] As its name suggests, CSP allows the description of systems in terms of component processes that operate independently and interact with each other solely through message-passing communication. However, the "Sequential" part of the CSP name is now something of a misnomer, since modern CSP allows component processes to be defined both as sequential processes and as the parallel composition of more primitive processes. The relationships between different processes, and the way each process communicates with its environment, are described using various process algebraic operators. Using this algebraic approach, quite complex process descriptions can be easily constructed from a few primitive elements. CSP provides two classes of primitives in its process algebra: events and primitive processes. Events represent communications or interactions. They are assumed to be instantaneous, and their communication is all that an external 'environment' can know about processes. An event is communicated only if the environment allows it. If a process does offer an event and the environment allows it, then that event must be communicated. Events may be atomic names (e.g. on, off), compound names (e.g. valve.open, valve.close), or input/output events (e.g. mouse?xy, screen!bitmap). The set of all events is denoted Σ.[18] Primitive processes represent fundamental behaviors: examples include STOP (the process that immediately deadlocks) and SKIP (the process that immediately terminates successfully).[18] CSP has a wide range of algebraic operators; the principal ones are informally described as follows. The prefix operator combines an event and a process to produce a new process. For example, a → P is the process that is willing to communicate event a with its environment and, after a, behaves like the process P.[18] Processes can be defined using recursion.
Where F(P) is any CSP term involving P, the process μP.F(P) defines a recursive process given by the equation P = F(P). Recursions can also be defined mutually, such as P_u = up → P_d and P_d = down → P_u, which defines a pair of mutually recursive processes that alternate between communicating up and down.[18] The deterministic (or external) choice operator allows the future evolution of a process to be defined as a choice between two component processes and allows the environment to resolve the choice by communicating an initial event for one of the processes. For example, (a → P) □ (b → Q) is the process that is willing to communicate the initial events a and b and subsequently behaves as either P or Q, depending on which initial event the environment chooses to communicate.[18] The nondeterministic (or internal) choice operator allows the future evolution of a process to be defined as a choice between two component processes, but does not allow the environment any control over which one of the component processes will be selected. For example, (a → P) ⊓ (b → Q) can behave like either a → P or b → Q. It can refuse to accept a or b and is only obliged to communicate if the environment offers both a and b. Nondeterminism can be inadvertently introduced into a nominally deterministic choice if the initial events of both sides of the choice are identical. So, for example, (a → a → STOP) □ (a → b → STOP) and a → ((a → STOP) ⊓ (b → STOP)) are equivalent.[18] The interleaving operator represents completely independent concurrent activity. The process P ||| Q behaves as both P and Q simultaneously. The events from both processes are arbitrarily interleaved in time. Interleaving can introduce nondeterminism even if P and Q are both deterministic: if P and Q can both communicate the same event, then P ||| Q nondeterministically chooses which of the two processes communicated that event.[18] The interface parallel (or generalized parallel) operator represents concurrent activity that requires synchronization between the component processes: for P |[X]| Q, any event in the interface set X ⊆ Σ can only occur when both P and Q are able to engage in that event.[18] For example, the process P |[{a}]| Q requires that P and Q must both be able to perform event a before that event can occur. So, the process (a → P) |[{a}]| (a → Q) is equivalent to a → (P |[{a}]| Q), while (a → P) |[{a,b}]| (b → Q) is equivalent to STOP (i.e. the process deadlocks).
The hiding operator provides a way to abstract processes by making some events unobservable by the environment. P \ X is the process P with the event set X hidden. A trivial example of hiding is (a → P) \ {a} which, assuming that the event a doesn't appear in P, simply reduces to P. Hidden events are internalized as τ actions, which are invisible to and uncontrollable by the environment. The existence of hiding introduces an additional behaviour called divergence, in which an infinite sequence of τ actions is performed. This is captured by the process div, whose behaviour is solely to perform τ actions forever.[18] For example, (μP.a → P) \ {a} is equivalent to div. One of the archetypal CSP examples is an abstract representation of a chocolate vending machine and its interactions with a person wishing to buy some chocolate. This vending machine might be able to carry out two different events, "coin" and "choc", which represent the insertion of payment and the delivery of a chocolate respectively. A machine which demands payment (only in cash) before offering a chocolate can be written as:

    VendingMachine = coin → choc → STOP

A person who might choose to use a coin or card to make payments could be modelled as:

    Person = (coin → STOP) □ (card → STOP)

These two processes can be put in parallel, so that they can interact with each other. The behaviour of the composite process depends on the events that the two component processes must synchronise on. Thus,

    VendingMachine |[{coin, card}]| Person ≡ coin → choc → STOP

whereas if synchronization was only required on "coin", we would obtain

    VendingMachine |[{coin}]| Person ≡ (coin → choc → STOP) □ (card → STOP)

If we abstract this latter composite process by hiding the "coin" and "card" events, i.e.

    ((coin → choc → STOP) □ (card → STOP)) \ {coin, card}

we get the nondeterministic process

    (choc → STOP) ⊓ STOP

This is a process which either offers a "choc" event and then stops, or just stops. In other words, if we treat the abstraction as an external view of the system (e.g., someone who does not see the decision reached by the person), nondeterminism has been introduced. The syntax of CSP defines the "legal" ways in which processes and events may be combined. Let e be an event, and X be a set of events.
The syntax of CSP defines the “legal” ways in which processes and events may be combined. Let e be an event, and X be a set of events. Then the basic syntax of CSP can be defined as:

    Proc ::= STOP
           | SKIP
           | e → Proc                      (prefixing)
           | Proc ◻ Proc                   (external choice)
           | Proc ⊓ Proc                   (nondeterministic choice)
           | Proc ||| Proc                 (interleaving)
           | Proc |[{X}]| Proc             (interface parallel)
           | Proc \ X                      (hiding)
           | Proc ; Proc                   (sequential composition)
           | if b then Proc else Proc      (boolean conditional)
           | Proc ▹ Proc                   (timeout)
           | Proc △ Proc                   (interrupt)

Note that, in the interests of brevity, the syntax presented above omits the div process, which represents divergence, as well as various operators such as alphabetized parallel, piping, and indexed choices.

CSP has been imbued with several different formal semantics, which define the meaning of syntactically correct CSP expressions. The theory of CSP includes mutually consistent denotational semantics, algebraic semantics, and operational semantics. The three major denotational models of CSP are the traces model, the stable failures model, and the failures/divergences model. Semantic mappings from process expressions to each of these three models provide the denotational semantics for CSP.[1]

The traces model defines the meaning of a process expression as the set of sequences of events (traces) that the process can be observed to perform. For example, traces(a → b → STOP) = {⟨⟩, ⟨a⟩, ⟨a,b⟩}: the process can be observed to have performed no events, the event a alone, or a followed by b.

More formally, the traces model T is defined as the set of non-empty, prefix-closed subsets of Σ*. The meaning of a process P in the traces model is defined as traces(P) ⊆ Σ* such that:

1. ⟨⟩ ∈ traces(P) (every process can be observed to have performed the empty trace), and
2. traces(P) is prefix-closed: if s ⌢ t ∈ traces(P), then s ∈ traces(P),

where Σ* is the set of all possible finite sequences of events.

The stable failures model extends the traces model with refusal sets, which are sets of events X ⊆ Σ that a process can refuse to perform. A failure is a pair (s, X), consisting of a trace s and a refusal set X which identifies the events that a process may refuse once it has executed the trace s. The observed behavior of a process in the stable failures model is described by the pair (traces(P), failures(P)).
For example,

    failures((a → STOP) ◻ (b → STOP)) = { (⟨⟩, ∅), (⟨a⟩, {a,b}), (⟨b⟩, {a,b}) }
    failures((a → STOP) ⊓ (b → STOP)) = { (⟨⟩, {a}), (⟨⟩, {b}), (⟨a⟩, {a,b}), (⟨b⟩, {a,b}) }

The failures/divergences model further extends the failures model to handle divergence. The semantics of a process in the failures/divergences model is a pair (failures⊥(P), divergences(P)), where divergences(P) is defined as the set of all traces that can lead to divergent behavior and failures⊥(P) = failures(P) ∪ {(s, X) | s ∈ divergences(P)}.

One of the most important principles in CSP is the Unique Fixed Points (UFP) rule. A version for single recursions in the traces model states that if F : T → T is the function on trace sets induced by a guarded recursive process X (one whose recursive calls are preceded by at least one event prefix), and Y is a process where traces(Y) is a fixed point of F, then X is equivalent to Y in the traces model.[18] UFP can also be extended to mutual recursions and other models of CSP.
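As a small worked instance of the UFP rule (in the notation above): the guarded recursion P = a → P induces the trace function F(T) = {⟨⟩} ∪ {⟨a⟩ ⌢ s | s ∈ T}. By induction, the only fixed point of F is {⟨⟩, ⟨a⟩, ⟨a,a⟩, …}, the set of all finite sequences of a's, so any process Y satisfying traces(Y) = F(traces(Y)) is trace-equivalent to μP.a → P. Guardedness (the a before the recursive call) is what makes the fixed point unique: the unguarded equation P = P is satisfied by every process.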
Over the years, a number of tools for analyzing and understanding systems described using CSP have been produced. Early tool implementations used a variety of machine-readable syntaxes for CSP, making input files written for different tools incompatible. However, most CSP tools have now standardized on the machine-readable dialect of CSP devised by Bryan Scattergood, sometimes referred to as CSPM.[19] The CSPM dialect of CSP possesses a formally defined operational semantics, which includes an embedded functional programming language.

The most well-known CSP tool is probably Failures-Divergences Refinement (FDR), a commercial product originally developed by Formal Systems (Europe) Ltd. FDR is often described as a model checker, but is technically a refinement checker, in that it converts two CSP process expressions into Labelled Transition Systems (LTSs), and then determines whether one of the processes is a refinement of the other within some specified semantic model (traces, failures, or failures/divergences).[20] FDR applies various state-space compression algorithms to the process LTSs in order to reduce the size of the state-space that must be explored during a refinement check. FDR was succeeded by FDR2, FDR3 and FDR4.[21]

The Adelaide Refinement Checker (ARC)[22] is a CSP refinement checker developed by the Formal Modelling and Verification Group at The University of Adelaide. ARC differs from FDR2 in that it internally represents CSP processes as Ordered Binary Decision Diagrams (OBDDs), which alleviates the state explosion problem of explicit LTS representations without requiring the use of state-space compression algorithms such as those used in FDR2.

The ProB project,[23] which is hosted by the Institut für Informatik, Heinrich-Heine-Universität Düsseldorf, was originally created to support analysis of specifications constructed in the B method. However, it also includes support for analysis of CSP processes both through refinement checking and LTL model-checking. ProB can also be used to verify properties of combined CSP and B specifications. A ProBE CSP Animator is integrated in FDR3.

The Process Analysis Toolkit (PAT)[24][25] is a CSP analysis tool developed in the School of Computing at the National University of Singapore. PAT is able to perform refinement checking, LTL model-checking, and simulation of CSP and Timed CSP processes. The PAT process language extends CSP with support for mutable shared variables, asynchronous message passing, and a variety of fairness and quantitative time related process constructs such as deadline and waituntil. The underlying design principle of the PAT process language is to combine a high-level specification language with procedural programs (e.g. an event in PAT may be a sequential program or even an external C# library call) for greater expressiveness. Mutable shared variables and asynchronous channels provide convenient syntactic sugar for well-known process modelling patterns used in standard CSP. The PAT syntax is similar, but not identical, to CSPM.[26] The principal differences between the PAT syntax and standard CSPM are the use of semicolons to terminate process expressions, the inclusion of syntactic sugar for variables and assignments, and the use of slightly different syntax for internal choice and parallel composition.

VisualNets[27] produces animated visualisations of CSP systems from specifications, and supports timed CSP.

CSPsim[28] is a lazy simulator. It does not model check CSP, but is useful for exploring very large (potentially infinite) systems.

SyncStitch is a CSP refinement checker with an interactive modeling and analyzing environment. It has a graphical state-transition diagram editor. The user can model the behavior of processes not only as CSP expressions but also as state-transition diagrams. The results of checking are also reported graphically as computation trees and can be analyzed interactively with peripheral inspecting tools. In addition to refinement checks, it can perform deadlock and livelock checks.

Several other specification languages and formalisms have been derived from, or inspired by, the classic untimed CSP, including:

In as much as it is concerned with concurrent processes that exchange messages, the actor model is broadly similar to CSP. However, the two models make some fundamentally different choices with regard to the primitives they provide:

Note that the aforementioned properties do not necessarily refer to the original CSP paper by Hoare, but rather the modern incarnation of the idea as seen in implementations such as Go and Clojure's core.async. In the original paper, channels were not a central part of the specification, and the sender and receiver processes actually identify each other by name.

In 1990, “A Queen’s Award for Technological Achievement [was] conferred ... on [Oxford University] Computing Laboratory.
The award recognises a successful collaboration between the laboratory and Inmos Ltd. … Inmos’ flagship product is the ‘transputer’, a microprocessor with many of the parts that would normally be needed in addition built into the same single component.”[30] According to Tony Hoare,[31] “The INMOS Transputer was an embodiment of the ideas … of building microprocessors that could communicate with each other along wires that would stretch between their terminals. The founder had the vision that the CSP ideas were ripe for industrial exploitation, and he made that the basis of the language for programming Transputers, which was called Occam. … The company estimated it enabled them to deliver the hardware one year earlier than would otherwise have happened. They applied for and won a Queen’s award for technological achievement, in conjunction with Oxford University Computing Laboratory.”
https://en.wikipedia.org/wiki/Communicating_sequential_processes
Sir Charles Antony Richard Hoare(/hɔːr/; born 11 January 1934), also known asC. A. R. Hoare, is a Britishcomputer scientistwho has made foundational contributions toprogramming languages,algorithms,operating systems,formal verification, andconcurrent computing.[3]His work earned him theTuring Award, usually regarded as the highest distinction in computer science, in 1980. Hoare developed thesorting algorithmquicksortin 1959–1960.[4]He developedHoare logic, anaxiomaticbasis for verifyingprogram correctness.[5]In the semantics ofconcurrency, he introduced the formal languagecommunicating sequential processes(CSP) to specify the interactions of concurrent processes, and along withEdsger Dijkstra, formulated thedining philosophers problem.[6][7][8][9][10][11]Since 1977, he has held positions at theUniversity of OxfordandMicrosoft ResearchinCambridge. Tony Hoare was born inColombo, Ceylon (nowSri Lanka) to British parents; his father was a colonialcivil servantand his mother was the daughter of a tea planter. Hoare was educated inEnglandat theDragon SchoolinOxfordand theKing's SchoolinCanterbury.[12]He then studiedClassics and Philosophy("Greats") atMerton College, Oxford.[13]On graduating in 1956 he did 18 monthsNational Servicein theRoyal Navy,[13]where he learned Russian.[14]He returned to theUniversity of Oxfordin 1958 to study for a postgraduate certificate instatistics,[13]and it was here that he begancomputer programming, having been taughtAutocodeon theFerranti MercurybyLeslie Fox.[15]He then went toMoscow State Universityas aBritish Councilexchange student,[13]where he studiedmachine translationunderAndrey Kolmogorov.[14] In 1960, Hoare left theSoviet Unionand began working atElliott Brothers Ltd,[13]a small computer manufacturing firm located in London. There, he implemented the languageALGOL 60and began developing majoralgorithms.[16][17] He was involved with developinginternational standardsin programming and informatics, as a member of theInternational Federation for Information Processing(IFIP)Working Group 2.1on Algorithmic Languages and Calculi,[18]whichspecified, maintains, and supports the languages ALGOL 60 andALGOL 68.[19] He became the Professor ofComputing Scienceat theQueen's University of Belfastin 1968, and in 1977 returned to Oxford as the Professor of Computing to lead theProgramming Research Groupin theOxford University Computing Laboratory(nowDepartment of Computer Science, University of Oxford), following the death ofChristopher Strachey. He became the firstChristopher Strachey Professor of Computingon its establishment in 1988 until his retirement at Oxford in 2000.[20]He is now anEmeritus Professorthere, and is also a principal researcher atMicrosoft ResearchinCambridge, England.[21][22][23] Hoare's most significant work has been in the following areas: his sorting and selection algorithm (QuicksortandQuickselect),Hoare logic, the formal languagecommunicating sequential processes(CSP) used to specify the interactions betweenconcurrent processes(and implemented in various programming languages such asoccam), structuring computeroperating systemsusing themonitorconcept, and theaxiomaticspecification ofprogramming languages.[24][25] Speaking at a software conference in 2009, Tony Hoare hyperbolically apologized for inventing thenull reference:[26][27] I call it my billion-dollar mistake. It was the invention of the null reference in 1965. At that time, I was designing the first comprehensive type system for references in an object oriented language (ALGOL W). 
My goal was to ensure that all use of references should be absolutely safe, with checking performed automatically by the compiler. But I couldn't resist the temptation to put in a null reference, simply because it was so easy to implement. This has led to innumerable errors, vulnerabilities, and system crashes, which have probably caused a billion dollars of pain and damage in the last forty years.[28] For many years under his leadership, Hoare's Oxford department worked on formal specification languages such asCSPandZ. These did not achieve the expected take-up by industry, and in 1995 Hoare was led to reflect upon the original assumptions:[29] Ten years ago, researchers into formal methods (and I was the most mistaken among them) predicted that the programming world would embrace with gratitude every assistance promised by formalisation to solve the problems of reliability that arise when programs get large and more safety-critical. Programs have now got very large and very critical – well beyond the scale which can be comfortably tackled by formal methods. There have been many problems and failures, but these have nearly always been attributable to inadequate analysis of requirements or inadequate management control. It has turned out that the world just does not suffer significantly from the kind of problem that our research was originally intended to solve. A commemorative article was written in tribute to Hoare for his 90th birthday.[30] In 1962, Hoare marriedJill Pym, a member of his research team.[46] This article incorporatestextavailable under theCC BY 4.0license.
https://en.wikipedia.org/wiki/C._A._R._Hoare
In concurrent programming, an operation (or set of operations) is linearizable if it consists of an ordered list of invocation and response events, that may be extended by adding response events such that:

1. the extended list can be re-expressed as a sequential history (i.e. it is serializable), and
2. that sequential history is a subset of the original unextended list.

Informally, this means that the unmodified list of events is linearizable if and only if its invocations were serializable, but some of the responses of the serial schedule have yet to return.[1]

In a concurrent system, processes can access a shared object at the same time. Because multiple processes are accessing a single object, a situation may arise in which while one process is accessing the object, another process changes its contents. Making a system linearizable is one solution to this problem. In a linearizable system, although operations overlap on a shared object, each operation appears to take place instantaneously. Linearizability is a strong correctness condition, which constrains what outputs are possible when an object is accessed by multiple processes concurrently. It is a safety property which ensures that operations do not complete unexpectedly or unpredictably. If a system is linearizable it allows a programmer to reason about the system.[2]

Linearizability was first introduced as a consistency model by Herlihy and Wing in 1987. It encompassed more restrictive definitions of atomic, such as "an atomic operation is one which cannot be (or is not) interrupted by concurrent operations", which are usually vague about when an operation is considered to begin and end.

An atomic object can be understood immediately and completely from its sequential definition, as a set of operations run in parallel which always appear to occur one after the other; no inconsistencies may emerge. Specifically, linearizability guarantees that the invariants of a system are observed and preserved by all operations: if all operations individually preserve an invariant, the system as a whole will.

A concurrent system consists of a collection of processes communicating through shared data structures or objects. Linearizability is important in these concurrent systems where objects may be accessed by multiple processes at the same time and a programmer needs to be able to reason about the expected results. An execution of a concurrent system results in a history, an ordered sequence of completed operations.

A history is a sequence of invocations and responses made of an object by a set of threads or processes. An invocation can be thought of as the start of an operation, and the response being the signaled end of that operation. Each invocation of a function will have a subsequent response. This can be used to model any use of an object. Suppose, for example, that two threads, A and B, both attempt to grab a lock, backing off if it's already taken. This would be modeled as both threads invoking the lock operation, then both threads receiving a response, one successful, one not.

A sequential history is one in which all invocations have immediate responses; that is, the invocation and response are considered to take place instantaneously. A sequential history should be trivial to reason about, as it has no real concurrency; the previous example was not sequential, and thus is hard to reason about. This is where linearizability comes in.

A history is linearizable if there is a linear order σ of the completed operations such that:

1. for every completed operation in σ, the operation returns the same result as it would if the operations were all executed one at a time in the order σ; and
2. if an operation op1 completes (receives its response) before another operation op2 begins (is invoked), then op1 precedes op2 in σ.

In other words:

- its invocations and responses can be reordered to yield a sequential history;
- that sequential history is correct according to the sequential definition of the object; and
- if a response preceded an invocation in the original history, it must still precede it in the sequential reordering.

Note that the first two bullet points here match serializability: the operations appear to happen in some order.
It is the last point which is unique to linearizability, and is thus the major contribution of Herlihy and Wing.[1] Consider two ways of reordering the locking example above. Reordering B's invocation after A's response yields a sequential history. This is easy to reason about, as all operations now happen in an obvious order. However, it does not match the sequential definition of the object (it doesn't match the semantics of the program): A should have successfully obtained the lock, and B should have subsequently aborted. This is another correct sequential history. It is also a linearization since it matches the sequential definition. Note that the definition of linearizability only precludes responses that precede invocations from being reordered; since the original history had no responses before invocations, they can be reordered. Hence the original history is indeed linearizable. An object (as opposed to a history) is linearizable if all valid histories of its use can be linearized. This is a much harder assertion to prove. Consider the following history, again of two objects interacting with a lock: This history is not valid because there is a point at which both A and B hold the lock; moreover, it cannot be reordered to a valid sequential history without violating the ordering rule. Therefore, it is not linearizable. However, under serializability, B's unlock operation may be moved tobeforeA's original lock, which is a valid history (assuming the object begins the history in a locked state): This reordering is sensible provided there is no alternative means of communicating between A and B. Linearizability is better when considering individual objects separately, as the reordering restrictions ensure that multiple linearizable objects are, considered as a whole, still linearizable. This definition of linearizability is equivalent to the following: This alternative is usually much easier to prove. It is also much easier to reason about as a user, largely due to its intuitiveness. This property of occurring instantaneously, or indivisibly, leads to the use of the termatomicas an alternative to the longer "linearizable".[1] In the examples below, the linearization point of the counter built on compare-and-swap is the linearization point of the first (and only) successful compare-and-swap update. The counter built using locking can be considered to linearize at any moment while the locks are held, since any potentially conflicting operations are excluded from running during that period. Processors haveinstructionsthat can be used to implementlockingandlock-free and wait-free algorithms. The ability to temporarily inhibitinterrupts, ensuring that the currently runningprocesscannot becontext switched, also suffices on auniprocessor. These instructions are used directly by compiler and operating system writers but are also abstracted and exposed as bytecodes and library functions in higher-level languages: Mostprocessorsinclude store operations that are not atomic with respect to memory. These include multiple-word stores and string operations. Should a high priority interrupt occur when a portion of the store is complete, the operation must be completed when the interrupt level is returned. The routine that processes the interrupt must not modify the memory being changed. It is important to take this into account when writing interrupt routines. 
When there are multiple instructions which must be completed without interruption, a CPU instruction which temporarily disables interrupts is used. This must be kept to only a few instructions and the interrupts must be re-enabled to avoid unacceptable response time to interrupts or even losing interrupts. This mechanism is not sufficient in a multi-processor environment since each CPU can interfere with the process regardless of whether interrupts occur or not. Further, in the presence of aninstruction pipeline, uninterruptible operations present a security risk, as they can potentially be chained in aninfinite loopto create adenial of service attack, as in theCyrix coma bug. TheC standardandSUSv3providesig_atomic_tfor simple atomic reads and writes; incrementing or decrementing is not guaranteed to be atomic.[3]More complex atomic operations are available inC11, which providesstdatomic.h. Compilers use the hardware features or more complex methods to implement the operations; an example is libatomic of GCC. TheARM instruction setprovidesLDREXandSTREXinstructions which can be used to implement atomic memory access by usingexclusive monitorsimplemented in the processor to track memory accesses for a specific address.[4]However, if acontext switchoccurs between calls toLDREXandSTREX, the documentation notes thatSTREXwill fail, indicating the operation should be retried. In the case of 64-bit ARMv8-A architecture, it providesLDXRandSTXRinstructions for byte, half-word, word, and double-word size.[5] The easiest way to achieve linearizability is running groups of primitive operations in acritical section. Strictly, independent operations can then be carefully permitted to overlap their critical sections, provided this does not violate linearizability. Such an approach must balance the cost of large numbers oflocksagainst the benefits of increased parallelism. Another approach, favoured by researchers (but not yet widely used in the software industry), is to design a linearizable object using the native atomic primitives provided by the hardware. This has the potential to maximise available parallelism and minimise synchronisation costs, but requires mathematical proofs which show that the objects behave correctly. A promising hybrid of these two is to provide atransactional memoryabstraction. As with critical sections, the user marks sequential code that must be run in isolation from other threads. The implementation then ensures the code executes atomically. This style of abstraction is common when interacting with databases; for instance, when using theSpring Framework, annotating a method with @Transactional will ensure all enclosed database interactions occur in a singledatabase transaction. Transactional memory goes a step further, ensuring that all memory interactions occur atomically. As with database transactions, issues arise regarding composition of transactions, especially database and in-memory transactions. A common theme when designing linearizable objects is to provide an all-or-nothing interface: either an operation succeeds completely, or it fails and does nothing. (ACIDdatabases refer to this principle asatomicity.) If the operation fails (usually due to concurrent operations), the user must retry, usually performing a different operation. For example: To demonstrate the power and necessity of linearizability we will consider a simple counter which different processes can increment. We would like to implement a counter object which multiple processes can access. 
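Before turning to the counter, the atomic primitives just described can be sketched with Go's sync/atomic, a cross-language analogue of C11's stdatomic.h rather than the C API itself. As with sig_atomic_t, a plain increment of a shared variable is not atomic; the atomic read-modify-write must be requested explicitly:

    package main

    import (
        "fmt"
        "sync/atomic"
    )

    var flag int32  // analogous to a sig_atomic_t: safe only for load/store
    var count int64

    func main() {
        atomic.StoreInt32(&flag, 1) // atomic write
        _ = atomic.LoadInt32(&flag) // atomic read
        count++                     // plain load, add, store: NOT atomic under concurrency
        atomic.AddInt64(&count, 1)  // atomic increment, requested explicitly
        fmt.Println(atomic.LoadInt64(&count))
    }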
Many common systems make use of counters to keep track of the number of times an event has occurred. The counter object can be accessed by multiple processes and has two available operations: increment, which adds one to the counter's value, and read, which returns the counter's current value.

We will attempt to implement this counter object using shared registers. Our first attempt, which we will see is non-linearizable, has the following implementation using one shared register R among the processes.

The naive, non-atomic implementation:

Increment:
1. Read the value in the register R
2. Add one to the value
3. Write the new value back into register R

Read: Read register R

This simple implementation is not linearizable, as is demonstrated by the following example. Imagine two processes are running accessing the single counter object initialized to have value 0: the first process reads the value 0 from the register and is then paused before it can add one; the second process starts running, reads the value 0, adds one to it, and writes 1 back into the register. The second process is finished running and the first process continues running from where it left off: it adds one to the stale value 0 it read earlier and writes 1 back into the register. In the above example, two processes invoked an increment command; however, the value of the object only increased from 0 to 1, instead of 2 as it should have. One of the increment operations was lost as a result of the system not being linearizable.

The above example shows the need for carefully thinking through implementations of data structures and how linearizability can have an effect on the correctness of the system. To implement a linearizable or atomic counter object we will modify our previous implementation so each process Pi will use its own register Ri. Each process increments and reads according to the following algorithm:

Increment: Add one to the value in the process's own register Ri (a single-writer register, so no other process writes to it).

Read: Read all the registers R1, …, Rn and return the sum of their values.

This implementation solves the problem with our original implementation. In this system the increment operations are linearized at the write step. The linearization point of an increment operation is when that operation writes the new value in its register Ri. The read operations are linearized to a point in the system when the value returned by the read is equal to the sum of all the values stored in each register Ri.

This is a trivial example. In a real system, the operations can be more complex and the errors introduced extremely subtle. For example, reading a 64-bit value from memory may actually be implemented as two sequential reads of two 32-bit memory locations. If a process has only read the first 32 bits, and before it reads the second 32 bits the value in memory gets changed, it will have neither the original value nor the new value but a mixed-up value. Furthermore, the specific order in which the processes run can change the results, making such an error difficult to detect, reproduce and debug.

Most systems provide an atomic compare-and-swap instruction that reads from a memory location, compares the value with an "expected" one provided by the user, and writes out a "new" value if the two match, returning whether the update succeeded. We can use this to fix the non-atomic counter algorithm as follows:

1. Read the value in the memory location;
2. add one to the value;
3. use compare-and-swap to write the incremented value back, with the value originally read as the expected value;
4. retry if the compare-and-swap failed (the value read in step 1 had become stale).

Since the compare-and-swap occurs (or appears to occur) instantaneously, if another process updates the location while we are in-progress, the compare-and-swap is guaranteed to fail.

Many systems provide an atomic fetch-and-increment instruction that reads from a memory location, unconditionally writes a new value (the old value plus one), and returns the old value. We can use this to fix the non-atomic counter algorithm as follows: increment the memory location with a single fetch-and-increment, which performs the read, the addition, and the write as one atomic step.

Using fetch-and-increment is always better (requires fewer memory references) for some algorithms, such as the one shown here, than compare-and-swap,[6] even though Herlihy earlier proved that compare-and-swap is better for certain other algorithms that can't be implemented at all using only fetch-and-increment. So CPU designs with both fetch-and-increment and compare-and-swap (or equivalent instructions) may be a better choice than ones with only one or the other.[6]
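Concretely, both fixes can be sketched in Go (with invented names; the source describes them abstractly). atomic.CompareAndSwapInt64 plays the role of compare-and-swap and atomic.AddInt64 the role of fetch-and-increment:

    package main

    import (
        "fmt"
        "sync"
        "sync/atomic"
    )

    var counter int64

    // casIncrement retries until its compare-and-swap succeeds, mirroring
    // the read / add-one / compare-and-swap loop described above.
    func casIncrement() {
        for {
            old := atomic.LoadInt64(&counter)
            if atomic.CompareAndSwapInt64(&counter, old, old+1) {
                return
            }
            // CAS failed: another goroutine updated the counter; retry.
        }
    }

    // faiIncrement uses fetch-and-increment (an atomic add): one atomic step.
    func faiIncrement() {
        atomic.AddInt64(&counter, 1)
    }

    func main() {
        var wg sync.WaitGroup
        for i := 0; i < 1000; i++ {
            wg.Add(2)
            go func() { defer wg.Done(); casIncrement() }()
            go func() { defer wg.Done(); faiIncrement() }()
        }
        wg.Wait()
        // Always 2000; the naive read-modify-write version could lose updates.
        fmt.Println(atomic.LoadInt64(&counter))
    }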
Another approach is to turn the naive algorithm into a critical section, preventing other threads from disrupting it, using a lock. Once again fixing the non-atomic counter algorithm:

1. Acquire a lock, excluding other threads from running the critical section at the same time;
2. read the value in the memory location;
3. add one to the value;
4. write the new value back into the memory location;
5. release the lock.

This strategy works as expected; the lock prevents other threads from updating the value until it is released. However, when compared with direct use of atomic operations, it can suffer from significant overhead due to lock contention. To improve program performance, it may therefore be a good idea to replace simple critical sections with atomic operations for non-blocking synchronization (as we have just done for the counter with compare-and-swap and fetch-and-increment), instead of the other way around, but unfortunately a significant improvement is not guaranteed and lock-free algorithms can easily become too complicated to be worth the effort.
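The lock-based repair looks like this in Go, using sync.Mutex for the critical section (an illustrative sketch):

    package main

    import (
        "fmt"
        "sync"
    )

    // lockedCounter runs the naive read/add/write sequence inside a critical
    // section, so concurrent increments cannot interleave and lose updates.
    type lockedCounter struct {
        mu sync.Mutex
        n  int64
    }

    func (c *lockedCounter) Increment() {
        c.mu.Lock()
        defer c.mu.Unlock()
        c.n = c.n + 1 // read, add one, write back: now atomic as a unit
    }

    func main() {
        c := &lockedCounter{}
        var wg sync.WaitGroup
        for i := 0; i < 1000; i++ {
            wg.Add(1)
            go func() { defer wg.Done(); c.Increment() }()
        }
        wg.Wait()
        fmt.Println(c.n) // always 1000, at the cost of lock contention
    }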
https://en.wikipedia.org/wiki/Atomicity_(programming)
In logic and probability theory, two events (or propositions) are mutually exclusive or disjoint if they cannot both occur at the same time. A clear example is the set of outcomes of a single coin toss, which can result in either heads or tails, but not both. In the coin-tossing example, both outcomes are, in theory, collectively exhaustive, which means that at least one of the outcomes must happen, so these two possibilities together exhaust all the possibilities.[1] However, not all mutually exclusive events are collectively exhaustive. For example, the outcomes 1 and 4 of a single roll of a six-sided die are mutually exclusive (both cannot happen at the same time) but not collectively exhaustive (there are other possible outcomes; 2, 3, 5, 6).

In logic, two propositions ϕ and ψ are mutually exclusive if it is not logically possible for them to be true at the same time; that is, ¬(ϕ ∧ ψ) is a tautology. To say that more than two propositions are mutually exclusive, depending on the context, means either 1. "¬(ϕ1 ∧ ϕ2) ∧ ¬(ϕ1 ∧ ϕ3) ∧ ¬(ϕ2 ∧ ϕ3) is a tautology" (it is not logically possible for more than one proposition to be true) or 2. "¬(ϕ1 ∧ ϕ2 ∧ ϕ3) is a tautology" (it is not logically possible for all propositions to be true at the same time). The term pairwise mutually exclusive always means the former.

In probability theory, events E1, E2, ..., En are said to be mutually exclusive if the occurrence of any one of them implies the non-occurrence of the remaining n − 1 events. Therefore, two mutually exclusive events cannot both occur. Formally said, X is a set of mutually exclusive events if and only if given any Ei, Ej ∈ X, if Ei ≠ Ej then Ei ∩ Ej = ∅. As a consequence, mutually exclusive events have the property: P(A ∩ B) = 0.[2]

For example, in a standard 52-card deck with two colors it is impossible to draw a card that is both red and a club because clubs are always black. If just one card is drawn from the deck, either a red card (heart or diamond) or a black card (club or spade) will be drawn. When A and B are mutually exclusive, P(A ∪ B) = P(A) + P(B).[3] To find the probability of drawing a red card or a club, for example, add together the probability of drawing a red card and the probability of drawing a club. In a standard 52-card deck, there are twenty-six red cards and thirteen clubs: 26/52 + 13/52 = 39/52 or 3/4.

One would have to draw at least two cards in order to draw both a red card and a club. The probability of doing so in two draws depends on whether the first card drawn was replaced before the second drawing, since without replacement there is one fewer card after the first card was drawn. The probabilities of the individual events (red, and club) are multiplied rather than added. The probability of drawing a red and a club in two drawings without replacement is then 26/52 × 13/51 × 2 = 676/2652, or 13/51. With replacement, the probability would be 26/52 × 13/52 × 2 = 676/2704, or 13/52.

In probability theory, the word or allows for the possibility of both events happening.
The probability of one or both events occurring is denoted P(A∪B) and in general, it equals P(A) + P(B) – P(A∩B).[3]Therefore, in the case of drawing a red card or a king, drawing any of a red king, a red non-king, or a black king is considered a success. In a standard 52-card deck, there are twenty-six red cards and four kings, two of which are red, so the probability of drawing a red or a king is 26/52 + 4/52 – 2/52 = 28/52. Events arecollectively exhaustiveif all the possibilities for outcomes are exhausted by those possible events, so at least one of those outcomes must occur. The probability that at least one of the events will occur is equal to one.[4]For example, there are theoretically only two possibilities for flipping a coin. Flipping a head and flipping a tail are collectively exhaustive events, and there is a probability of one of flipping either a head or a tail. Events can be both mutually exclusive and collectively exhaustive.[4]In the case of flipping a coin, flipping a head and flipping a tail are also mutually exclusive events. Both outcomes cannot occur for a single trial (i.e., when a coin is flipped only once). The probability of flipping a head and the probability of flipping a tail can be added to yield a probability of 1: 1/2 + 1/2 =1.[5] Instatisticsandregression analysis, anindependent variablethat can take on only two possible values is called adummy variable. For example, it may take on the value 0 if an observation is of a white subject or 1 if the observation is of a black subject. The two possible categories associated with the two possible values are mutually exclusive, so that no observation falls into more than one category, and the categories are exhaustive, so that every observation falls into some category. Sometimes there are three or more possible categories, which are pairwise mutually exclusive and are collectively exhaustive — for example, under 18 years of age, 18 to 64 years of age, and age 65 or above. In this case a set of dummy variables is constructed, each dummy variable having two mutually exclusive and jointly exhaustive categories — in this example, one dummy variable (called D1) would equal 1 if age is less than 18, and would equal 0otherwise; a second dummy variable (called D2) would equal 1 if age is in the range 18–64, and 0 otherwise. In this set-up, the dummy variable pairs (D1, D2) can have the values (1,0) (under 18), (0,1) (between 18 and 64), or (0,0) (65 or older) (but not (1,1), which would nonsensically imply that an observed subject is both under 18 and between 18 and 64). Then the dummy variables can be included as independent (explanatory) variables in a regression. The number of dummy variables is always one less than the number of categories: with the two categories black and white there is a single dummy variable to distinguish them, while with the three age categories two dummy variables are needed to distinguish them. Suchqualitative datacan also be used fordependent variables. For example, a researcher might want to predict whether someone gets arrested or not, using family income or race, as explanatory variables. Here the variable to be explained is a dummy variable that equals 0 if the observed subject does not get arrested and equals 1 if the subject does get arrested. In such a situation,ordinary least squares(the basic regression technique) is widely seen as inadequate; insteadprobit regressionorlogistic regressionis used. 
Further, sometimes there are three or more categories for the dependent variable — for example, no charges, charges, and death sentences. In this case, themultinomial probitormultinomial logittechnique is used.
https://en.wikipedia.org/wiki/Mutually_exclusive_events
In computer science, the reentrant mutex (recursive mutex, recursive lock) is a particular type of mutual exclusion (mutex) device that may be locked multiple times by the same process/thread, without causing a deadlock.

While any attempt to perform the "lock" operation on an ordinary mutex (lock) would either fail or block when the mutex is already locked, on a recursive mutex this operation will succeed if and only if the locking thread is the one that already holds the lock. Typically, a recursive mutex tracks the number of times it has been locked, and requires equally many unlock operations to be performed before other threads may lock it.

Recursive mutexes solve the problem of non-reentrancy with regular mutexes: if a function that takes a lock and executes a callback is itself called by the callback, deadlock ensues.[1] In pseudocode, that is the following situation: a function lock_and_call(i) locks a mutex m, invokes callback(i), and then unlocks m, while callback(i) itself calls lock_and_call(i + 1) for small i. Given these definitions, the function call lock_and_call(1) will cause the following sequence of events: m is locked, callback(1) runs and calls lock_and_call(2), and the nested m.lock() blocks forever, because the thread is waiting on a lock it already holds. Replacing the mutex with a recursive one solves the problem, because the final m.lock() will succeed without blocking.

W. Richard Stevens notes that recursive locks are "tricky" to use correctly, and recommends their use for adapting single-threaded code without changing APIs, but "only when no other solution is possible".[2]

The Java language's native synchronization mechanism, monitor, uses recursive locks. Syntactically, a lock is a block of code with the 'synchronized' keyword preceding it and any Object reference in parentheses that will be used as the mutex. Inside the synchronized block, the given object can be used as a condition variable by doing a wait(), notify(), or notifyAll() on it. Thus all Objects are both recursive mutexes and condition variables.[3]

Software emulation can be accomplished with the following structure: an owner field recording which thread holds the lock, a count of nested acquisitions, and an inner non-recursive mutex and condition variable protecting both. A sketch of this structure is given below.
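Here is that sketch in Go. Go deliberately exposes no goroutine identity, so this version makes the caller pass an explicit owner token id; the RecursiveMutex type and its fields are assumptions for illustration, not a standard library facility:

    package main

    import (
        "fmt"
        "sync"
    )

    // RecursiveMutex emulates a reentrant lock: an inner mutex guards the
    // owner/count fields, and a condition variable parks non-owners.
    type RecursiveMutex struct {
        inner sync.Mutex
        cond  *sync.Cond
        owner int // 0 means unowned; otherwise the caller's nonzero token
        count int // how many times the owner has locked
    }

    func NewRecursiveMutex() *RecursiveMutex {
        m := &RecursiveMutex{}
        m.cond = sync.NewCond(&m.inner)
        return m
    }

    func (m *RecursiveMutex) Lock(id int) {
        m.inner.Lock()
        defer m.inner.Unlock()
        for m.owner != 0 && m.owner != id {
            m.cond.Wait() // held by someone else: block
        }
        m.owner = id
        m.count++ // the same owner relocking just bumps the count
    }

    func (m *RecursiveMutex) Unlock(id int) {
        m.inner.Lock()
        defer m.inner.Unlock()
        if m.owner != id {
            panic("unlock by non-owner")
        }
        m.count--
        if m.count == 0 {
            m.owner = 0
            m.cond.Broadcast() // fully released: wake waiters
        }
    }

    func main() {
        m := NewRecursiveMutex()
        m.Lock(1)
        m.Lock(1) // same owner: succeeds instead of deadlocking
        m.Unlock(1)
        m.Unlock(1)
        fmt.Println("locked twice and released")
    }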
https://en.wikipedia.org/wiki/Reentrant_mutex
In software engineering, a spinlock is a lock that causes a thread trying to acquire it to simply wait in a loop ("spin") while repeatedly checking whether the lock is available. Since the thread remains active but is not performing a useful task, the use of such a lock is a kind of busy waiting. Once acquired, spinlocks will usually be held until they are explicitly released, although in some implementations they may be automatically released if the thread being waited on (the one that holds the lock) blocks or "goes to sleep".

Because they avoid overhead from operating system process rescheduling or context switching, spinlocks are efficient if threads are likely to be blocked for only short periods. For this reason, operating-system kernels often use spinlocks. However, spinlocks become wasteful if held for longer durations, as they may prevent other threads from running and require rescheduling. The longer a thread holds a lock, the greater the risk that the thread will be interrupted by the OS scheduler while holding the lock. If this happens, other threads will be left "spinning" (repeatedly trying to acquire the lock), while the thread holding the lock is not making progress towards releasing it. The result is an indefinite postponement until the thread holding the lock can finish and release it. This is especially true on a single-processor system, where each waiting thread of the same priority is likely to waste its quantum (allocated time where a thread can run) spinning until the thread that holds the lock is finally finished.

Implementing spinlocks correctly is challenging because programmers must take into account the possibility of simultaneous access to the lock, which could cause race conditions. Generally, such an implementation is possible only with special assembly language instructions, such as atomic (i.e. un-interruptible) test-and-set operations, and cannot be easily implemented in programming languages not supporting truly atomic operations.[1] On architectures without such operations, or if a high-level language implementation is required, a non-atomic locking algorithm may be used, e.g. Peterson's algorithm. However, such an implementation may require more memory than a spinlock, be slower to allow progress after unlocking, and may not be implementable in a high-level language if out-of-order execution is allowed.

A spinlock is straightforward to implement in x86 assembly using an atomic exchange instruction, and such an implementation will work on any Intel 80386 compatible processor; a high-level sketch of the same idea appears at the end of this passage. The simple test-and-set implementation works on all CPUs using the x86 architecture. However, a number of performance optimizations are possible:

On later implementations of the x86 architecture, spin_unlock can safely use an unlocked MOV instead of the slower locked XCHG. This is due to subtle memory ordering rules which support this, even though MOV is not a full memory barrier. However, some processors (some Cyrix processors, some revisions of the Intel Pentium Pro (due to bugs), and earlier Pentium and i486 SMP systems) will do the wrong thing and data protected by the lock could be corrupted. On most non-x86 architectures, explicit memory barrier or atomic instructions must be used. On some systems, such as IA-64, there are special "unlock" instructions which provide the needed memory ordering.

To reduce inter-CPU bus traffic, code trying to acquire a lock should loop reading without trying to write anything until it reads a changed value.
Because of MESI caching protocols, this causes the cache line for the lock to become "Shared"; then there is remarkably no bus traffic while a CPU waits for the lock. This optimization is effective on all CPU architectures that have a cache per CPU, because MESI is so widespread. On Hyper-Threading CPUs, pausing with rep nop gives additional performance by hinting to the core that it can work on the other thread while the lock spins waiting.[2]

Transactional Synchronization Extensions (TSX) and other hardware transactional memory instruction sets serve to replace locks in most cases. Although locks are still required as a fallback, they have the potential to greatly improve performance by having the processor handle entire blocks of atomic operations. This feature is built into some mutex implementations, for example in glibc. Hardware Lock Elision (HLE) in x86 is a weakened but backwards-compatible version of TSX, and we can use it here for locking without losing any compatibility. In this particular case, the processor can choose to not lock until two threads actually conflict with each other.[3]

A simpler version of the test can use the cmpxchg instruction on x86, or the __sync_bool_compare_and_swap built into many Unix compilers. With the optimizations applied, a sample would look like the test-and-test-and-set sketch given below, which also includes the basic test-and-set version for comparison.

On any multi-processor system that uses the MESI contention protocol, such a test-and-test-and-set lock (TTAS) performs much better than the simple test-and-set lock (TAS) approach.[4]

With large numbers of processors, adding a random exponential backoff delay before re-checking the lock performs even better than TTAS.[4][5]

A few multi-core processors have a "power-conscious spin-lock" instruction that puts a processor to sleep, then wakes it up on the next cycle after the lock is freed. A spin-lock using such instructions is more efficient and uses less energy than spin locks with or without a back-off loop.[6]

The primary disadvantage of a spinlock is that, while waiting to acquire a lock, it wastes time that might be productively spent elsewhere. There are two ways to avoid this: the waiting thread can yield the processor so the operating system runs a different thread, or the lock can be avoided altogether by restructuring the code to use non-blocking synchronization.

Most operating systems (including Solaris, Mac OS X and FreeBSD) use a hybrid approach called "adaptive mutex". The idea is to use a spinlock when trying to access a resource locked by a currently-running thread, but to sleep if the thread is not currently running. (The latter is always the case on single-processor systems.)[8]

OpenBSD attempted to replace spinlocks with ticket locks, which enforced first-in-first-out behaviour; however, this resulted in more CPU usage in the kernel, and larger applications, such as Firefox, became much slower.[9][10]
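Here is the promised sketch, in Go rather than x86 assembly (illustrative only; real Go code should normally use sync.Mutex). atomic.SwapInt32 plays the role of the XCHG instruction; the TTAS variant spins on plain atomic loads, keeping the cache line in the Shared state as described above, and adds randomized exponential backoff:

    package main

    import (
        "fmt"
        "math/rand"
        "runtime"
        "sync/atomic"
        "time"
    )

    // TASLock is the bare test-and-set lock: 0 = free, 1 = held.
    type TASLock struct{ state int32 }

    func (l *TASLock) Lock() {
        for atomic.SwapInt32(&l.state, 1) != 0 {
            runtime.Gosched() // held: yield and retry
        }
    }

    func (l *TASLock) Unlock() { atomic.StoreInt32(&l.state, 0) }

    // TTASLock spins on reads and attempts the atomic update only when the
    // lock looks free, backing off randomly after a lost race.
    type TTASLock struct{ state int32 }

    func (l *TTASLock) Lock() {
        backoff := time.Microsecond
        for {
            for atomic.LoadInt32(&l.state) != 0 {
                runtime.Gosched() // test: spin locally while held
            }
            if atomic.CompareAndSwapInt32(&l.state, 0, 1) {
                return // test-and-set succeeded
            }
            // Lost the race: sleep for a random, geometrically growing delay.
            time.Sleep(time.Duration(rand.Int63n(int64(backoff)) + 1))
            if backoff < time.Millisecond {
                backoff *= 2
            }
        }
    }

    func (l *TTASLock) Unlock() { atomic.StoreInt32(&l.state, 0) }

    func main() {
        var l TTASLock
        counter := 0
        done := make(chan struct{})
        for i := 0; i < 4; i++ {
            go func() {
                for j := 0; j < 1000; j++ {
                    l.Lock()
                    counter++ // protected by the spinlock
                    l.Unlock()
                }
                done <- struct{}{}
            }()
        }
        for i := 0; i < 4; i++ {
            <-done
        }
        fmt.Println(counter) // always 4000
    }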
https://en.wikipedia.org/wiki/Spinlock
In computer science, load-linked/store-conditional[1] (LL/SC), sometimes known as load-reserved/store-conditional[2] (LR/SC), are a pair of instructions used in multithreading to achieve synchronization. Load-link returns the current value of a memory location, while a subsequent store-conditional to the same memory location will store a new value only if no updates have occurred to that location since the load-link. Together, this implements a lock-free, atomic, read-modify-write operation.

"Load-linked" is also known as load-link,[3] load-reserved,[2] and load-locked.

LL/SC was originally[4] proposed by Jensen, Hagensen, and Broughton for the S-1 AAP multiprocessor[1] at Lawrence Livermore National Laboratory.

If any updates have occurred, the store-conditional is guaranteed to fail, even if the value read by the load-link has since been restored. As such, an LL/SC pair is stronger than a read followed by a compare-and-swap (CAS), which will not detect updates if the old value has been restored (see ABA problem).

Real implementations of LL/SC do not always succeed even if there are no concurrent updates to the memory location in question. Any exceptional events between the two operations, such as a context switch, another load-link, or even (on many platforms) another load or store operation, will cause the store-conditional to spuriously fail. Older implementations will fail if there are any updates broadcast over the memory bus. This is called weak LL/SC by researchers, as it breaks many theoretical LL/SC algorithms.[5] Weakness is relative, and some weak implementations can be used for some algorithms.

LL/SC is more difficult to emulate than CAS. Additionally, stopping running code between paired LL/SC instructions, such as when single-stepping through code, can prevent forward progress, making debugging tricky.[6]

Nevertheless, LL/SC is equivalent to CAS in the sense that either primitive can be implemented in terms of the other, in O(1) and in a wait-free manner.[7]

LL/SC instructions are supported by:

Some CPUs require the address being accessed exclusively to be configured in write-through mode. Typically, CPUs track the load-linked address at a cache-line or other granularity, such that any modification to any portion of the cache line (whether via another core's store-conditional or merely by an ordinary store) is sufficient to cause the store-conditional to fail.

All of these platforms provide weak LL/SC. The PowerPC implementation allows an LL/SC pair to wrap loads and even stores to other cache lines (although this approach is vulnerable to false cache line sharing). This allows it to implement, for example, lock-free reference counting in the face of changing object graphs with arbitrary counter reuse (which otherwise requires double compare-and-swap, DCAS). RISC-V provides an architectural guarantee of eventual progress for LL/SC sequences of limited length. Some ARM implementations define platform-dependent blocks, ranging from 8 bytes to 2048 bytes, and an LL/SC attempt in any given block fails if there is between the LL and SC a normal memory access inside the same block. Other ARM implementations fail if there is a modification anywhere in the whole address space. The former implementation is the stronger and most practical.
LL/SC has two advantages over CAS when designing aload–store architecture: reads and writes are separate instructions, as required by the design philosophy (andpipeline architecture); and both instructions can be performed using only tworegisters(address and value), fitting naturally into common2-operand ISAs. CAS, on the other hand, requires three registers (address, old value, new value) and a dependency between the value read and the value written.x86, being aCISCarchitecture, does not have this constraint; though modern chips may well translate a CAS instruction into separate LL/SCmicro-operationsinternally. Hardware LL/SC implementations typically do not allow nesting of LL/SC pairs.[17]A nesting LL/SC mechanism can be used to provide a MCAS primitive (multi-word CAS, where the words can be scattered).[18]In 2013, Trevor Brown,Faith Ellen, and Eric Ruppert implemented in software a multi-address LL/SC extension (which they call LLX/SCX) that relies on automated code generation;[19]they have used it to implement one of the best-performing concurrentbinary search tree(actually achromatic tree), slightly beating theJDKCAS-basedskip listimplementation.[20]
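As noted above, a bare CAS cannot detect an ABA-style restoration, but pairing the value with a version stamp recovers LL/SC's update-detection behaviour. The Go sketch below packs a 32-bit value and a 32-bit version into one uint64; it is an illustrative emulation of the semantics, not a rendering of any CPU's instructions, and stamp wraparound is ignored for brevity:

    package main

    import (
        "fmt"
        "sync/atomic"
    )

    // cell packs a 32-bit value (high bits) and a version stamp (low bits).
    var cell uint64

    // loadLink returns the current value together with the snapshot observed.
    func loadLink() (value uint32, snapshot uint64) {
        snapshot = atomic.LoadUint64(&cell)
        return uint32(snapshot >> 32), snapshot
    }

    // storeConditional succeeds only if no store happened since loadLink:
    // the version stamp, not just the value, must be unchanged.
    func storeConditional(snapshot uint64, newValue uint32) bool {
        version := uint32(snapshot) + 1 // bump the stamp on every store
        next := uint64(newValue)<<32 | uint64(version)
        return atomic.CompareAndSwapUint64(&cell, snapshot, next)
    }

    func main() {
        for { // a lock-free increment built from the emulated pair
            v, snap := loadLink()
            if storeConditional(snap, v+1) {
                break
            }
        }
        v, _ := loadLink()
        fmt.Println(v) // 1
    }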
https://en.wikipedia.org/wiki/Load-link/store-conditional
Incomputing,schedulingis the action of assigningresourcesto performtasks. The resources may beprocessors,network linksorexpansion cards. The tasks may bethreads,processesor dataflows. The scheduling activity is carried out by a mechanism called ascheduler. Schedulers are often designed so as to keep all computer resources busy (as inload balancing), allow multiple users to share system resources effectively, or to achieve a targetquality-of-service. Scheduling is fundamental to computation itself, and an intrinsic part of theexecution modelof a computer system; the concept of scheduling makes it possible to havecomputer multitaskingwith a singlecentral processing unit(CPU). A scheduler may aim at one or more goals, for example: In practice, these goals often conflict (e.g. throughput versus latency), thus a scheduler will implement a suitable compromise. Preference is measured by any one of the concerns mentioned above, depending upon the user's needs and objectives. Inreal-timeenvironments, such asembedded systemsforautomatic controlin industry (for examplerobotics), the scheduler also must ensure that processes can meetdeadlines; this is crucial for keeping the system stable. Scheduled tasks can also be distributed to remote devices across a network andmanagedthrough an administrative back end. The scheduler is an operating system module that selects the next jobs to be admitted into the system and the next process to run. Operating systems may feature up to three distinct scheduler types: along-term scheduler(also known as an admission scheduler or high-level scheduler), amid-term or medium-term scheduler, and ashort-term scheduler. The names suggest the relative frequency with which their functions are performed. The process scheduler is a part of the operating system that decides which process runs at a certain point in time. It usually has the ability to pause a running process, move it to the back of the running queue and start a new process; such a scheduler is known as apreemptivescheduler, otherwise it is acooperativescheduler.[5] We distinguish betweenlong-term scheduling,medium-term scheduling, andshort-term schedulingbased on how often decisions must be made.[6] Thelong-term scheduler, oradmission scheduler, decides which jobs or processes are to be admitted to the ready queue (in main memory); that is, when an attempt is made to execute a program, its admission to the set of currently executing processes is either authorized or delayed by the long-term scheduler. Thus, this scheduler dictates what processes are to run on a system, the degree of concurrency to be supported at any one time – whether many or few processes are to be executed concurrently, and how the split between I/O-intensive and CPU-intensive processes is to be handled. The long-term scheduler is responsible for controlling the degree of multiprogramming. In general, most processes can be described as eitherI/O-boundorCPU-bound. An I/O-bound process is one that spends more of its time doing I/O than it spends doing computations. A CPU-bound process, in contrast, generates I/O requests infrequently, using more of its time doing computations. It is important that a long-term scheduler selects a good process mix of I/O-bound and CPU-bound processes. If all processes are I/O-bound, the ready queue will almost always be empty, and the short-term scheduler will have little to do. 
On the other hand, if all processes are CPU-bound, the I/O waiting queue will almost always be empty, devices will go unused, and again the system will be unbalanced. The system with the best performance will thus have a combination of CPU-bound and I/O-bound processes. In modern operating systems, this is used to make sure that real-time processes get enough CPU time to finish their tasks.[7]

Long-term scheduling is also important in large-scale systems such as batch processing systems, computer clusters, supercomputers, and render farms. For example, in concurrent systems, coscheduling of interacting processes is often required to prevent them from blocking due to waiting on each other. In these cases, special-purpose job scheduler software is typically used to assist these functions, in addition to any underlying admission scheduling support in the operating system. Some operating systems only allow new tasks to be added if it is sure all real-time deadlines can still be met. The specific heuristic algorithm used by an operating system to accept or reject new tasks is the admission control mechanism.[8]

The medium-term scheduler temporarily removes processes from main memory and places them in secondary memory (such as a hard disk drive) or vice versa, which is commonly referred to as swapping out or swapping in (also incorrectly as paging out or paging in). The medium-term scheduler may decide to swap out a process that has not been active for some time, a process that has a low priority, a process that is page faulting frequently, or a process that is taking up a large amount of memory, in order to free up main memory for other processes, swapping the process back in later when more memory is available, or when the process has been unblocked and is no longer waiting for a resource.

In many systems today (those that support mapping virtual address space to secondary storage other than the swap file), the medium-term scheduler may actually perform the role of the long-term scheduler, by treating binaries as swapped-out processes upon their execution. In this way, when a segment of the binary is required it can be swapped in on demand, or lazy loaded, also called demand paging.

The short-term scheduler (also known as the CPU scheduler) decides which of the ready, in-memory processes is to be executed (allocated a CPU) after a clock interrupt, an I/O interrupt, an operating system call or another form of signal. Thus the short-term scheduler makes scheduling decisions much more frequently than the long-term or mid-term schedulers – a scheduling decision will at a minimum have to be made after every time slice, and these are very short. This scheduler can be preemptive, implying that it is capable of forcibly removing processes from a CPU when it decides to allocate that CPU to another process, or non-preemptive (also known as voluntary or co-operative), in which case the scheduler is unable to force processes off the CPU. A preemptive scheduler relies upon a programmable interval timer which invokes an interrupt handler that runs in kernel mode and implements the scheduling function.

Another component that is involved in the CPU-scheduling function is the dispatcher, which is the module that gives control of the CPU to the process selected by the short-term scheduler. It receives control in kernel mode as the result of an interrupt or system call. The functions of a dispatcher involve the following: switching context, switching to user mode, and jumping to the proper location in the user program to restart that program. The dispatcher should be as fast as possible, since it is invoked during every process switch.
During context switches, the processor is virtually idle for a fraction of time, thus unnecessary context switches should be avoided. The time it takes for the dispatcher to stop one process and start another is known as the dispatch latency.[7]: 155 A scheduling discipline (also called scheduling policy or scheduling algorithm) is an algorithm used for distributing resources among parties which simultaneously and asynchronously request them. Scheduling disciplines are used in routers (to handle packet traffic) as well as in operating systems (to share CPU time among both threads and processes), disk drives (I/O scheduling), printers (print spooler), most embedded systems, etc. The main purposes of scheduling algorithms are to minimize resource starvation and to ensure fairness amongst the parties utilizing the resources. Scheduling deals with the problem of deciding which of the outstanding requests is to be allocated resources. There are many different scheduling algorithms; this section introduces several of them. In packet-switched computer networks and other statistical multiplexing, the notion of a scheduling algorithm is used as an alternative to first-come first-served queuing of data packets. The simplest best-effort scheduling algorithms are round-robin, fair queuing (a max-min fair scheduling algorithm), proportional-fair scheduling and maximum throughput. If differentiated or guaranteed quality of service is offered, as opposed to best-effort communication, weighted fair queuing may be utilized. In advanced packet radio wireless networks such as the HSDPA (High-Speed Downlink Packet Access) 3.5G cellular system, channel-dependent scheduling may be used to take advantage of channel state information. If the channel conditions are favourable, the throughput and system spectral efficiency may be increased. In even more advanced systems such as LTE, the scheduling is combined with channel-dependent packet-by-packet dynamic channel allocation, or by assigning OFDMA multi-carriers or other frequency-domain equalization components to the users that best can utilize them.[9] First in, first out (FIFO), also known as first come, first served (FCFS), is the simplest scheduling algorithm. FIFO simply queues processes in the order that they arrive in the ready queue. This is commonly used for a task queue, for example as illustrated in this section. Earliest deadline first (EDF) or least time to go is a dynamic scheduling algorithm used in real-time operating systems to place processes in a priority queue. Whenever a scheduling event occurs (a task finishes, a new task is released, etc.), the queue is searched for the process closest to its deadline, which will be the next to be scheduled for execution. Similar to shortest job first (SJF), with the shortest-remaining-time strategy the scheduler arranges processes with the least estimated processing time remaining to be next in the queue. This requires advance knowledge of, or estimations about, the time required for a process to complete. With fixed-priority preemptive scheduling, the operating system assigns a fixed priority rank to every process, and the scheduler arranges the processes in the ready queue in order of their priority; lower-priority processes get interrupted by incoming higher-priority processes. With round-robin scheduling, the scheduler assigns a fixed time unit per process and cycles through them; if a process completes within that time slice it terminates, otherwise it is rescheduled after every other process has been given a chance. Multilevel queue scheduling is used for situations in which processes are easily divided into different groups.
For example, a common division is made between foreground (interactive) processes and background (batch) processes. These two types of processes have different response-time requirements and so may have different scheduling needs. It is very useful for shared memory problems. A work-conserving scheduler is a scheduler that always tries to keep the scheduled resources busy if there are submitted jobs ready to be scheduled. In contrast, a non-work-conserving scheduler may, in some cases, leave the scheduled resources idle despite the presence of jobs ready to be scheduled. There are several scheduling problems in which the goal is to decide which job goes to which station at what time, such that the total makespan is minimized. A very common method in embedded systems is to schedule jobs manually. This can for example be done in a time-multiplexed fashion. Sometimes the kernel is divided into three or more parts: manual scheduling, preemptive, and interrupt level. Exact methods for scheduling jobs are often proprietary. When designing an operating system, a programmer must consider which scheduling algorithm will perform best for the use the system is going to see. There is no universal best scheduling algorithm, and many operating systems use extended or combined versions of the scheduling algorithms above. For example, Windows NT/XP/Vista uses a multilevel feedback queue, a combination of fixed-priority preemptive scheduling, round-robin, and first in, first out algorithms. In this system, threads can dynamically increase or decrease in priority depending on whether they have been serviced already, or have been waiting extensively. Every priority level is represented by its own queue, with round-robin scheduling among the high-priority threads and FIFO among the lower-priority ones. In this sense, response time is short for most threads, and short but critical system threads get completed very quickly. Since threads can only use one time unit of the round-robin in the highest-priority queue, starvation can be a problem for longer high-priority threads. The algorithm used may be as simple as round-robin, in which each process is given equal time (for instance 1 ms, usually between 1 ms and 100 ms) in a cycling list. So, process A executes for 1 ms, then process B, then process C, then back to process A. More advanced algorithms take into account process priority, or the importance of the process. This allows some processes to use more time than other processes. The kernel always uses whatever resources it needs to ensure proper functioning of the system, and so can be said to have infinite priority. In SMP systems, processor affinity is considered to increase overall system performance, even if it may cause a process itself to run more slowly. This generally improves performance by reducing cache thrashing. IBM OS/360 was available with three different schedulers; the differences were such that the variants were often considered three different operating systems. Later virtual storage versions of MVS added a Workload Manager feature to the scheduler, which schedules processor resources according to an elaborate scheme defined by the installation. Very early MS-DOS and Microsoft Windows systems were non-multitasking, and as such did not feature a scheduler. Windows 3.1x used a non-preemptive scheduler, meaning that it did not interrupt programs. It relied on the program to end or to tell the OS that it didn't need the processor, so that it could move on to another process.
This is usually called cooperative multitasking. Windows 95 introduced a rudimentary preemptive scheduler; however, for legacy support it opted to let 16-bit applications run without preemption.[10] Windows NT-based operating systems use a multilevel feedback queue. 32 priority levels are defined, 0 through 31, with priorities 0 through 15 being normal priorities and priorities 16 through 31 being soft real-time priorities, requiring privileges to assign. 0 is reserved for the operating system. User interfaces and APIs work with priority classes for the process and the threads in the process, which are then combined by the system into the absolute priority level. The kernel may change the priority level of a thread depending on its I/O and CPU usage and whether it is interactive (i.e. accepts and responds to input from humans), raising the priority of interactive and I/O-bound processes and lowering that of CPU-bound processes, to increase the responsiveness of interactive applications.[11] The scheduler was modified in Windows Vista to use the cycle counter register of modern processors to keep track of exactly how many CPU cycles a thread has executed, rather than just using an interval-timer interrupt routine.[12] Vista also uses a priority scheduler for the I/O queue so that disk defragmenters and other such programs do not interfere with foreground operations.[13] Mac OS 9 uses cooperative scheduling for threads, where one process controls multiple cooperative threads, and also provides preemptive scheduling for multiprocessing tasks. The kernel schedules multiprocessing tasks using a preemptive scheduling algorithm. All Process Manager processes run within a special multiprocessing task, called the blue task. Those processes are scheduled cooperatively, using a round-robin scheduling algorithm; a process yields control of the processor to another process by explicitly calling a blocking function such as WaitNextEvent. Each process has its own copy of the Thread Manager that schedules that process's threads cooperatively; a thread yields control of the processor to another thread by calling YieldToAnyThread or YieldToThread.[14] macOS uses a multilevel feedback queue, with four priority bands for threads: normal, system high priority, kernel mode only, and real-time.[15] Threads are scheduled preemptively; macOS also supports cooperatively scheduled threads in its implementation of the Thread Manager in Carbon.[14] In AIX Version 4 there are three possible values for the thread scheduling policy. Threads are primarily of interest for applications that currently consist of several asynchronous processes. These applications might impose a lighter load on the system if converted to a multithreaded structure. AIX 5 implements the following scheduling policies: FIFO, round robin, and a fair round robin. The FIFO policy has three different implementations: FIFO, FIFO2, and FIFO3. The round robin policy is named SCHED_RR in AIX, and the fair round robin is called SCHED_OTHER.[16] Linux 1.2 used a round-robin scheduling policy.[17] Linux 2.2 added scheduling classes and support for symmetric multiprocessing (SMP).[17] In Linux 2.4,[17] an O(n) scheduler with a multilevel feedback queue with priority levels ranging from 0 to 140 was used; 0–99 are reserved for real-time tasks and 100–140 are considered nice task levels.
For real-time tasks, the time quantum for switching processes was approximately 200 ms, and for nice tasks approximately 10 ms.[citation needed] The scheduler ran through the run queue of all ready processes, letting the highest-priority processes go first and run through their time slices, after which they were placed in an expired queue. When the active queue is empty, the expired queue becomes the active queue and vice versa. However, some enterprise Linux distributions such as SUSE Linux Enterprise Server replaced this scheduler with a backport of the O(1) scheduler (which was maintained by Alan Cox in his Linux 2.4-ac kernel series) to the Linux 2.4 kernel used by the distribution. In versions 2.6.0 to 2.6.22, the kernel used an O(1) scheduler developed by Ingo Molnár and many other kernel developers during the Linux 2.5 development. For many kernels in that time frame, Con Kolivas developed patch sets which improved interactivity with this scheduler or even replaced it with his own schedulers. Con Kolivas' work, most significantly his implementation of fair scheduling named Rotating Staircase Deadline (RSDL), inspired Ingo Molnár to develop the Completely Fair Scheduler (CFS) as a replacement for the earlier O(1) scheduler, crediting Kolivas in his announcement.[18] CFS is the first implementation of a fair-queuing process scheduler widely used in a general-purpose operating system.[19] CFS uses a well-studied, classic scheduling algorithm called fair queuing, originally invented for packet networks. Fair queuing had been previously applied to CPU scheduling under the name stride scheduling. The fair-queuing CFS scheduler has a scheduling complexity of O(log N), where N is the number of tasks in the run queue. Choosing a task can be done in constant time, but reinserting a task after it has run requires O(log N) operations, because the run queue is implemented as a red–black tree. The Brain Fuck Scheduler, also created by Con Kolivas, is an alternative to the CFS. In 2023, Peter Zijlstra proposed replacing CFS with an earliest eligible virtual deadline first (EEVDF) process scheduler.[20][21] The aim was to remove the need for CFS latency nice patches.[22] Linux 6.12 added support for userspace scheduler extensions, also known as sched_ext.[23] These schedulers can be installed and replace the default scheduler.[24] FreeBSD uses a multilevel feedback queue with priorities ranging from 0–255. 0–63 are reserved for interrupts, 64–127 for the top half of the kernel, 128–159 for real-time user threads, 160–223 for time-shared user threads, and 224–255 for idle user threads. Also, like Linux, it uses the active-queue setup, but it also has an idle queue.[25] NetBSD uses a multilevel feedback queue with priorities ranging from 0–223. 0–63 are reserved for time-shared threads (default, SCHED_OTHER policy), 64–95 for user threads which entered kernel space, 96–128 for kernel threads, 128–191 for user real-time threads (SCHED_FIFO and SCHED_RR policies), and 192–223 for software interrupts. Solaris uses a multilevel feedback queue with priorities ranging between 0 and 169. Priorities 0–59 are reserved for time-shared threads, 60–99 for system threads, 100–159 for real-time threads, and 160–169 for low-priority interrupts. Unlike Linux,[25] when a process is done using its time quantum, it is given a new priority and put back in the queue. Solaris 9 introduced two new scheduling classes, namely the fixed-priority class and the fair-share class.
The threads with fixed priority have the same priority range as that of the time-sharing class, but their priorities are not dynamically adjusted. The fair scheduling class uses CPU shares to prioritize threads for scheduling decisions. CPU shares indicate the entitlement to CPU resources. They are allocated to a set of processes, which are collectively known as a project.[7]
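The earliest-deadline-first policy described above amounts to keeping ready tasks in a priority queue ordered by absolute deadline and always dispatching the head. Below is a minimal, illustrative Java sketch of that selection step; the Task record and the nanosecond deadlines are assumptions made for this example, not part of any particular kernel's API.

import java.util.Comparator;
import java.util.PriorityQueue;

// Minimal EDF sketch: ready tasks are ordered by absolute deadline,
// and the scheduler always dispatches the task whose deadline is nearest.
public class EdfScheduler {
    // Hypothetical task descriptor: a name and an absolute deadline (in ns).
    record Task(String name, long deadlineNanos) {}

    private final PriorityQueue<Task> readyQueue =
            new PriorityQueue<>(Comparator.comparingLong(Task::deadlineNanos));

    // Called on a scheduling event such as a task release.
    public void release(Task t) {
        readyQueue.add(t);          // O(log N) insertion into the queue
    }

    // Called on a scheduling event such as a task finishing:
    // pick the ready task closest to its deadline, or null if none is ready.
    public Task pickNext() {
        return readyQueue.poll();   // head of the queue = earliest deadline
    }

    public static void main(String[] args) {
        EdfScheduler s = new EdfScheduler();
        long now = System.nanoTime();
        s.release(new Task("logger",  now + 50_000_000));  // 50 ms from now
        s.release(new Task("control", now + 10_000_000));  // 10 ms from now
        System.out.println(s.pickNext().name());           // prints "control"
    }
}

The same queue structure, with the comparator swapped for estimated remaining time or a fixed priority rank, yields the shortest-remaining-time and fixed-priority disciplines discussed above.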
https://en.wikipedia.org/wiki/Scheduler_pattern
The balking pattern is a software design pattern that only executes an action on an object when the object is in a particular state. For example, if an object reads ZIP files and a calling method invokes a get method on the object when the ZIP file is not open, the object would "balk" at the request. In the Java programming language, for example, an IllegalStateException might be thrown under these circumstances; in C# it would be InvalidOperationException. There are some specialists[who?] in this field who consider balking more of an anti-pattern than a design pattern: if an object cannot support its API, it should either limit the API so that the offending call is not available, or so that the call can be made without limitation. Objects that use this pattern are generally only in a state that is prone to balking temporarily, but for an unknown amount of time.[citation needed] If objects are to remain in a state which is prone to balking for a known, finite period of time, then the guarded suspension pattern may be preferred. Below is a general, simple example of an implementation of the balking pattern.[1] As demonstrated by the definition above, notice how the synchronized keyword is utilized: if there are multiple calls to the job method, only one will proceed while the other calls will return with nothing. Another thing to note is the jobCompleted() method. The reason it is synchronized is that the only way to guarantee another thread will see a change to a field is to synchronize all access to it. Actually, since it is a boolean variable, it could be left not explicitly synchronized and only declared volatile, to guarantee that the other thread will not read an obsolete cached value.
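The listing referred to above did not survive extraction; the following Java sketch matches the behaviour the text describes. The class and member names (BalkingExample, job, jobCompleted, jobInProgress) mirror the description and are illustrative rather than the article's original code.

public class BalkingExample {
    private boolean jobInProgress = false;

    // Balking entry point: if a job is already in progress, the call
    // "balks" and returns immediately without doing anything.
    public void job() {
        synchronized (this) {
            if (jobInProgress) {
                return;             // balk: object is in the wrong state
            }
            jobInProgress = true;   // claim the job while holding the lock
        }
        // ... perform the actual work outside the lock ...
    }

    // Synchronized so the state change is visible to other threads; as the
    // text notes, declaring the boolean volatile would also suffice here.
    public synchronized void jobCompleted() {
        jobInProgress = false;
    }
}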
https://en.wikipedia.org/wiki/Balking_pattern
In computer science, the readers–writers problems are examples of a common computing problem in concurrency.[1] There are at least three variations of the problems, which deal with situations in which many concurrent threads of execution try to access the same shared resource at one time. Some threads may read and some may write, with the constraint that no thread may access the shared resource for either reading or writing while another thread is in the act of writing to it. (In particular, we want to prevent more than one thread modifying the shared resource simultaneously and allow for two or more readers to access the shared resource at the same time.) A readers–writer lock is a data structure that solves one or more of the readers–writers problems. The basic reader–writers problem was first formulated and solved by Courtois et al.[2][3] Suppose we have a shared memory area (critical section) with the basic constraints detailed above. It is possible to protect the shared data behind a mutual exclusion mutex, in which case no two threads can access the data at the same time. However, this solution is sub-optimal, because it is possible that a reader R1 might have the lock, and then another reader R2 requests access. It would be foolish for R2 to wait until R1 was done before starting its own read operation; instead, R2 should be allowed to read the resource alongside R1, because reads don't modify data, so concurrent reads are safe. This is the motivation for the first readers–writers problem, in which the constraint is added that no reader shall be kept waiting if the share is currently opened for reading. This is also called readers-preference, with its solution: in this solution of the readers/writers problem, the first reader must lock the resource (shared file) if such is available. Once the file is locked from writers, it may be used by many subsequent readers without having to re-lock it. Before entering the critical section, every new reader must go through the entry section. However, there may only be a single reader in the entry section at a time. This is done to avoid race conditions on the readers (in this context, a race condition is a condition in which two or more threads are waking up simultaneously and trying to enter the critical section; without further constraint, the behavior is nondeterministic, e.g. two readers increment the readcount at the same time, and both try to lock the resource, causing one reader to block). To accomplish this, every reader which enters the <ENTRY Section> will lock the <ENTRY Section> for themselves until they are done with it. At this point the readers are not locking the resource. They are only locking the entry section so no other reader can enter it while they are in it. Once the reader is done executing the entry section, it will unlock it by signaling the mutex; signaling it is equivalent to mutex.V() in the above code. The same holds for the <EXIT Section>: there can be no more than a single reader in the exit section at a time, therefore every reader must claim and lock the exit section for themselves before using it. Once the first reader is in the entry section, it will lock the resource. Doing this will prevent any writers from accessing it. Subsequent readers can just utilize the locked (from writers) resource. The reader to finish last (indicated by the readcount variable) must unlock the resource, thus making it available to writers. In this solution, every writer must claim the resource individually.
This means that a stream of readers can subsequently lock all potential writers out and starve them. This is because after the first reader locks the resource, no writer can lock it before it gets released, and it will only be released by the last reader. Hence, this solution does not satisfy fairness. The first solution is suboptimal, because it is possible that a reader R1 might have the lock, a writer W be waiting for the lock, and then a reader R2 request access. It would be unfair for R2 to jump in immediately, ahead of W; if that happened often enough, W would starve. Instead, W should start as soon as possible. This is the motivation for the second readers–writers problem, in which the constraint is added that no writer, once added to the queue, shall be kept waiting longer than absolutely necessary. This is also called writers-preference. A solution to the writers-preference scenario is:[2] in this solution, preference is given to the writers. This is accomplished by forcing every reader to lock and release the readtry semaphore individually. The writers, on the other hand, don't need to lock it individually. Only the first writer will lock the readtry, and then all subsequent writers can simply use the resource as it gets freed by the previous writer. The very last writer must release the readtry semaphore, thus opening the gate for readers to try reading. No reader can engage in the entry section if the readtry semaphore has been set by a writer previously. The reader must wait for the last writer to unlock the resource and readtry semaphores. On the other hand, if a particular reader has locked the readtry semaphore, this will indicate to any potential concurrent writer that there is a reader in the entry section. So the writer will wait for the reader to release the readtry, and then the writer will immediately lock it for itself and all subsequent writers. However, the writer will not be able to access the resource until the current reader has released the resource, which only occurs after the reader is finished with the resource in the critical section. The resource semaphore can be locked by both the writer and the reader in their entry section. They are only able to do so after first locking the readtry semaphore, which can only be done by one of them at a time. It will then take control over the resource as soon as the current reader is done reading and lock all future readers out. All subsequent readers will hang up at the readtry semaphore, waiting for the writers to be finished with the resource and to open the gate by releasing readtry. The rmutex and wmutex are used in exactly the same way as in the first solution. Their sole purpose is to avoid race conditions on the readers and writers while they are in their entry or exit sections. In fact, the solutions implied by both problem statements can result in starvation: the first one may starve writers in the queue, and the second one may starve readers. Therefore, the third readers–writers problem is sometimes proposed, which adds the constraint that no thread shall be allowed to starve; that is, the operation of obtaining a lock on the shared data will always terminate in a bounded amount of time. A solution with fairness for both readers and writers can be constructed; it can satisfy the condition that "no thread shall be allowed to starve" only if semaphores preserve first-in first-out ordering when blocking and releasing threads.
Otherwise, a blocked writer, for example, may remain blocked indefinitely, with a cycle of other writers decrementing the semaphore before it can. The simplest reader–writer solution uses only two semaphores and doesn't need an array of readers to read the data in the buffer. Note that this solution is simpler than the general case because it is made equivalent to the bounded buffer problem, and therefore only N readers are allowed to enter in parallel, N being the size of the buffer. The initial values of the read and write semaphores are 0 and N, respectively. In the writer, the value of the write semaphore is given to the read semaphore, and in the reader, the value of read is given to write on completion of the loop.
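As a concrete illustration of the readers-preference solution described first, here is a compact Java sketch using counting semaphores: resource plays the role of the writers' lock and mutex protects readcount, matching the entry/exit sections above (acquire() corresponds to P/wait and release() to V/signal in the pseudocode convention). The shared-data details are stand-ins.

import java.util.concurrent.Semaphore;

// Readers-preference solution: the first reader locks the resource
// against writers and the last reader releases it; `mutex` serializes
// access to `readcount`, so the entry/exit sections hold one reader each.
public class ReadersWriters {
    private final Semaphore resource = new Semaphore(1); // writers' lock
    private final Semaphore mutex = new Semaphore(1);    // protects readcount
    private int readcount = 0;

    public void read() throws InterruptedException {
        mutex.acquire();               // <ENTRY Section>: one reader at a time
        if (++readcount == 1) {
            resource.acquire();        // first reader locks out writers
        }
        mutex.release();

        // ... read the shared data (many readers may be here at once) ...

        mutex.acquire();               // <EXIT Section>
        if (--readcount == 0) {
            resource.release();        // last reader readmits writers
        }
        mutex.release();
    }

    public void write() throws InterruptedException {
        resource.acquire();            // each writer claims the resource alone
        // ... write the shared data ...
        resource.release();
    }
}

As the text explains, a steady stream of readers keeps readcount above zero and so can starve writers; the writers-preference variant adds the readtry gate in front of this entry section.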
https://en.wikipedia.org/wiki/Readers%E2%80%93writers_problem
A light-weight Linux distribution is a Linux distribution that has lower memory and processor-speed requirements than a more "feature-rich" Linux distribution. The lower demands on hardware ideally result in a more responsive machine, and allow devices with fewer system resources (e.g. older or embedded hardware) to be used productively. The lower memory and processor-speed requirements are achieved by avoiding software bloat, i.e. by leaving out features that are perceived to have little or no practical use or advantage, or for which there is no or low demand. The perceived weight of a Linux distribution is strongly influenced by the desktop environment included with that distribution.[1][2] Accordingly, many Linux distributions offer a choice of editions. For example, Canonical hosts several variants ("flavors") of the Ubuntu distribution that include desktop environments other than the default GNOME or the deprecated Unity. These variants include the Xubuntu and Lubuntu distributions for the comparatively light-weight Xfce and LXDE/LXQt desktop environments. The demands that a desktop environment places on a system may be seen in a comparison of the minimum system requirements of Ubuntu 10.10 and Lubuntu 10.10 desktop editions, where the only significant difference between the two was their desktop environment. Ubuntu 10.10 included the Unity desktop, which had minimum system requirements of a 2 GHz processor with 2 GB of RAM,[3] while Lubuntu 10.10 included LXDE, which required at least a Pentium II with 128 MB of RAM.[4] (Table: minimum system requirements (CPU, RAM and drive space) for individual light-weight distributions.)
https://en.wikipedia.org/wiki/Light-weight_Linux_distribution
The usage share of an operating system is the percentage of computers running that operating system (OS). These statistics are estimates, as wide-scale OS usage data is difficult to obtain and measure; reliable primary sources are limited, and data collection methodology is not formally agreed. Currently, devices connected to the internet allow for web data collection to approximately measure OS usage. As of March 2025[update], Android, which uses the Linux kernel, is the world's most popular operating system with 46% of the global market, followed by Windows with 25%, iOS with 18%, macOS with 6%, and other operating systems with 5%.[1] This is for all device types excluding embedded devices. Linux is also most used for web servers, and the most common Linux distribution is Ubuntu, followed by Debian. Linux has almost caught up with the second-most popular (desktop) OS, macOS, in some regions, such as in South America,[7] and in Asia it's at 6.4% (7% with ChromeOS) vs 9.7% for macOS.[8] In the US, ChromeOS is third at 5.5%, followed by (desktop) Linux at 4.3%, which can arguably be combined into a single number, 9.8%.[9][10] The most numerous type of device with an operating system are embedded systems. Not all embedded systems have operating systems, instead running their application code on the "bare metal"; of those that do have operating systems, a high percentage are standalone or do not have a web browser, which makes their usage share difficult to measure. Some operating systems used in embedded systems are more widely used than some of those mentioned above; for example, modern Intel microprocessors contain an embedded management processor running a version of the Minix operating system.[11] According to Gartner, the following are the worldwide device shipments (referring to wholesale) by operating system, which includes smartphones, tablets, laptops and PCs together. Shipments (to stores) do not necessarily translate to sales to consumers, therefore suggesting the numbers indicate popularity and/or usage could be misleading. Not only do smartphones sell in higher numbers than PCs, but also a lot more by dollar value, with the gap only projected to widen, to well over double.[19] On 27 January 2016, Paul Thurrott summarized the operating system market, the day after Apple announced "one billion devices": Apple's "active installed base" is now one billion devices. [..] Granted, some of those Apple devices were probably sold into the marketplace years ago. But that 1 billion figure can and should be compared to the numbers Microsoft touts for Windows 10 (200 million, most recently) or Windows more generally (1.5 billion active users, a number that hasn't moved, magically, in years), and that Google touts for Android (over 1.4 billion, as of September). My understanding of iOS is that the user base was previously thought to be around 800 million strong, and when you factor out Macs and other non-iOS Apple devices, that's probably about right. But as you can see, there are three big personal computing platforms.
For 2015 (and earlier), Gartner reports that "the year, worldwide PC shipments declined for the fourth consecutive year, which started in 2012 with the launch of tablets", with an 8% decline in PC sales for 2015 (not including the cumulative decline in sales over the previous years).[21] Microsoft backed away from its goal of one billion Windows 10 devices in three years (or "by the middle of 2018")[22] and reported on 26 September 2016 that Windows 10 was running on over 400 million devices,[23] and in March 2019 on more than 800 million.[24] In May 2020, Gartner predicted further decline in all market segments for 2020 due to COVID-19, predicting a decline of 13.6% for all devices, while the "Work from Home Trend Saved PC Market from Collapse", with only a decline of 10.5% predicted for PCs. However, in the end, according to Gartner, PC shipments grew 10.7% in the fourth quarter of 2020 and reached 275 million units in 2020, a 4.8% increase from 2019 and the highest growth in ten years. Apple, in 4th place for PCs, had the largest growth in shipments for a company in Q4, of 31.3%, while "the fourth quarter of 2020 was another remarkable period of growth for Chromebooks, with shipments increasing around 200% year over year to reach 11.7 million units. In 2020, Chromebook shipments increased over 80% to total nearly 30 million units, largely due to demand from the North American education market." Chromebooks sold more (30 million) than Apple's Macs worldwide (22.5 million) in pandemic year 2020.[25] According to the Catalyst group, the year 2021 had record-high PC shipments, with total shipments of 341 million units (including Chromebooks), 15% higher than 2020 and 27% higher than 2019, the largest shipment total since 2012.[26] According to Gartner, worldwide PC shipments declined by 16.2% in 2022, the largest annual decrease since the mid-1990s, due to geopolitical, economic, and supply chain challenges.[27] In 2015, eMarketer estimated at the beginning of the year that the tablet installed base would hit one billion[28] for the first time (with China's use at 328 million, which Google Play doesn't serve or track, and the United States's use second at 156 million). At the end of the year, because of cheap tablets (not counted by all analysts), that goal was met (even excluding cumulative sales of previous years) as: Sales quintupled to an expected 1 billion units worldwide this year, from 216 million units in 2014, according to projections from the Envisioneering Group. While that number is far higher than the 200-plus million units globally projected by research firms IDC, Gartner and Forrester, Envisioneering analyst Richard Doherty says the rival estimates miss all the cheap Asian knockoff tablets that have been churning off assembly lines. [..] Forrester says its definition of tablets "is relatively narrow" while IDC says it includes some tablets by Amazon — but not all. [..] The top tech purchase of the year continued to be the smartphone, with an expected 1.5 billion sold worldwide, according to projections from researcher IDC. Last year saw some 1.2 billion sold. [..] Computers didn't fare as well, despite the introduction of Microsoft's latest software upgrade, Windows 10, and the expected but not realized bump it would provide for consumers looking to skip the upgrade and just get a new computer instead. Some 281 million PCs were expected to be sold, according to IDC, down from 308 million in 2014.
Folks tend to be happy with their older computers and keep them for longer, as more of our daily computing activities have moved to the smartphone. [..] While Windows 10 got good reviews from tech critics, only 11% of the 1-billion-plus Windows user base opted to do the upgrade, according to Microsoft. This suggests Microsoft has a ways to go before the software gets "hit" status. Apple's new operating system El Capitan has been downloaded by 25% of Apple's user base, according to Apple. This conflicts with statistics from IDC that say the tablet market contracted by 10% in 2015, with only Huawei, ranked fifth, making big gains, more than doubling its share; for the fourth quarter of 2015, the five biggest vendors were the same except that Amazon Fire tablets ranked third worldwide, new on the list, enabled by its nearly tripling of market share to 7.9% with its Fire OS Android derivative.[30] Gartner excludes some devices from its tablet shipment statistic and includes them in a different category called "premium ultramobiles", with screen sizes of more than 10 inches.[35] There are more mobile phone owners than toothbrush owners,[36] with mobile phones the fastest-growing technology in history.[citation needed] There are a billion more active mobile phones in the world than people (and many more than 10 billion sold so far, with less than half still in use), explained by the fact that some people have more than one, such as an extra for work.[37] All the phones have an operating system, but only a fraction of them are smartphones with an OS capable of running modern applications. In 2018, 3.1 billion smartphones and tablets were in use across the world (with tablets, a small fraction of the total, generally running the same operating systems, Android or iOS, the latter being more popular on tablets; in 2019, a variant of iOS called iPadOS built for iPad tablets was released). On 28 May 2015, Google announced that there were 1.4 billion Android users and 1 billion Google Play users active during that month.[38][39] This changed to 2 billion monthly active users in May 2017.[40][41] By late 2016, Android had been said to be "killing" Apple's iOS market share (i.e. its declining sales of smartphones, not just relatively but also by number of units, when the whole market was increasing).[42] Gartner's press release stated: "Apple continued its downward trend with a decline of 7.7 percent in the second quarter of 2016",[43] which is their decline based on absolute number of units, which understates the relative decline (with the market increasing), along with the misleading "1.7 percent [point]" decline. That point decline means an 11.6% relative decline (from 14.6% down to 12.9%). Although by units sold Apple was declining in the late 2010s, the company was almost the only vendor making any profit in the smartphone sector from hardware sales alone. In Q3 2016, for example, it captured 103.6% of the market profits.[44] In May 2019, the biggest smartphone companies (by market share) were Samsung, Huawei and Apple, respectively.[45] In November 2024, a new competitor to Android and iOS emerged when sales of the Huawei Mate 70 started with the all-new operating system HarmonyOS NEXT installed[46] on the flagship device.
Future Huawei devices are to be sold mainly with this operating system, creating a third player in the market for smartphone operating systems.[47] The following table shows worldwide smartphone sales to end users by operating system, as measured by Gartner, International Data Corporation (IDC) and others: Data from various sources published over the 2021/2022 period is summarized in the table below. All of these sources monitor a substantial number of websites; any statistics that relate to only one website have been excluded. Android currently ranks highest,[67] above Windows (incl. Xbox console) systems. Windows Phone accounted for 0.51% of the web usage before it was discontinued.[68] Considering all personal computers, Microsoft Windows is well below 50% usage share on every continent, and at 30% in the US (24% single-day low) and lower in many countries, e.g. China, and in India at 19% (12% some days); Windows' lowest share globally was 29% in May 2022 (25% some days), and 29% in the US.[69] For a short time, iOS was slightly more popular than Windows in the US, but this is no longer the case. Worldwide, Android holds 45.49%, more than Windows at 25.35%, with iOS third at 18.26%. In Africa, Android is at 66.07% and Windows at 13.46% (with iOS third at 10.24%).[70] Before iOS became the most popular operating system in any independent country, it was most popular in Guam, an unincorporated territory of the United States, for four consecutive quarters in 2017–18,[71][72] although Android is now the most popular there.[73] iOS has been the highest-ranked OS in Jersey (a British Crown dependency in Europe) for years, by a wide margin, and iOS was also highest ranked in the Falkland Islands, a British Overseas Territory, for one quarter in 2019, before being overtaken by Android in the following quarter.[74][75] iOS is competitive with Windows in Sweden, where on some days it is more used.[76] The designation of an "Unknown" operating system is strangely high in a few countries such as Madagascar, where it was at 32.44% (no longer nearly as high).[77] This may be due to the fact that StatCounter uses browser detection to get OS statistics, and the browsers most common there are not widely recognized. The version breakdown for browsers in Madagascar shows "Other" at 34.9%,[78] and Opera Mini 4.4 is the most popular known browser at 22.1% (plus e.g. 3.34% for Opera 7.6). However, browser statistics without version breakdown have Opera at 48.11%, with the "Other" category very small.[79][clarification needed] In China, Android became the highest-ranked operating system in July 2016 (Windows has occasionally topped it since then, while since April 2016 neither it nor all non-mobile operating systems together have outranked mobile operating systems, meaning Android plus iOS).[80] In the Asian continent as a whole, Android has been ranked highest since February 2016, and Android alone has the majority share,[81] because of a large majority in all the most populous countries of the continent, up to 84% in Bangladesh, where it has had over 70% share for over four years.[82] Since August 2015, Android has ranked first in the African continent, at 48.36% in May 2016, when it took a big jump ahead of Windows 7,[83] and thereby Africa joined Asia as a mobile-majority continent. China is no longer a desktop-majority country,[84] joining India, which has a mobile majority of 71%, confirming Asia's significant mobile majority. Online usage of Linux kernel derivatives (Android + ChromeOS + other Linux) exceeds that of Windows.
This has been true since some time between January and April 2016, according to W3Counter[85] and StatCounter.[86] However, even before that, the figure for all Unix-like OSes, including those from Apple, was higher than that for Windows. Windows is still the dominant desktop OS, but the dominance varies by region, and it has gradually lost market share to other desktop operating systems (not just to mobile), with the slide very noticeable in the US, where macOS usage more than quadrupled from January 2009 to December 2020, to 30.62% (i.e. in the Christmas month; 34.72% in April 2020, in the middle of COVID-19, and iOS was more popular overall that year;[97] globally, Windows lost to Android that year,[98] as in the two years prior), with Windows down to 61.136% and ChromeOS at 5.46%, plus traditional Linux at 1.73%.[99] There is little openly published information on the device shipments of desktop and laptop computers. Gartner publishes estimates, but the way the estimates are calculated is not openly published. Another source of market share of various operating systems is StatCounter,[100] basing its estimate on web use (although this may not be very accurate). Also, sales may overstate usage. Most computers are sold with a pre-installed operating system, with some users replacing that OS with a different one due to personal preference, or installing another OS alongside it and using both. Conversely, sales underestimate usage by not counting unauthorized copies. For example, in 2009, approximately 80% of software sold in China consisted of illegitimate copies.[101] In 2007, the statistics from an automated update of IE7 for registered Windows computers differed from the observed web browser share, leading one writer to estimate that 25–35% of all Windows XP installations were unlicensed.[102] The usage share of Microsoft's (then latest operating system version) Windows 10 slowly increased from July/August 2016, reaching around 27.15% (of all Windows versions, not all desktop or all operating systems) in December 2016. It eventually reached 79.79% on 5 October 2021, the same day on which its successor Windows 11 was released. In the United States, usage of Windows XP has dropped to 0.38% (of all Windows versions), and its global average to 0.59%, while in Africa it is still at 2.71% and in Armenia it is more than 70%, as of 2017.[103] StatCounter web usage data of desktop or laptop operating systems varies significantly by country. For example, in 2017, macOS usage in North America was at 16.82%[104] (17.52% in the US[105]), whereas in Asia it was only 4.4%.[106] As of July 2023, macOS usage has increased to 30.81% in North America[107] (31.77% in the US)[108] and to 9.64% in Asia.[109] The 2023 Stack Overflow developer survey counted 87,222 responses; however, usage of a particular system as a desktop or as a server was not differentiated in the responses. The operating system share among those identifying as professional developers is given in the survey results.[126] In June 2016, Microsoft claimed Windows 10 had half the market share of all Windows installations in the US and UK, as quoted by BetaNews: Microsoft's Windows trends page [shows] Windows 10 hit 50 percent in the US (51 percent in the UK, 39 percent globally), while ... Windows 7 was on 38 percent (36 percent in the UK, 46 percent globally). A big reason for the difference in numbers comes down to how they are recorded. ...
actual OS usage (based on web browsing), while Microsoft records the number of devices Windows 10 is installed on. ... Microsoft also only records Windows 7, Windows 8, Windows 8.1 and Windows 10, while NetMarketShare includes both XP and Vista. The digital video game distribution platform Steam publishes a monthly "Hardware & Software Survey". ^† These figures, as reported by Steam, do not include SteamOS statistics.[137] By Q1 2018, mobile operating systems on smartphones included Google's dominant Android (and variants) and Apple's iOS, which combined had an almost 100% market share.[138] Smartphone penetration vs. desktop use differs substantially by country. Some countries, like Russia, still have smartphone use as low as 22.35% (as a fraction of all web use),[139] but in most western countries, smartphone use is close to 50% of all web use. This doesn't mean that only half of the population has a smartphone; it could mean that almost all have one, just that other platforms see about equal use. Smartphone usage share in developing countries is much higher: in Bangladesh, for example, Android smartphones had up to 84% and currently have a 70% share,[82] and in Mali smartphones had over 90% (up to 95%) share for almost two years.[140][141] (A section below has more information on regional trends in the move to smartphones.) There is a clear correlation between the GDP per capita of a country and that country's respective smartphone OS market share, with users in the richest countries being much more likely to choose Apple's iPhone, and Google's Android being predominant elsewhere.[142][143][144] Tablet computers, or simply tablets, became a significant OS market share category starting with Apple's iPad. In Q1 2018, iOS had 65.03% market share and Android had 34.58% market share.[155] Windows tablets may not get classified as such by some analysts, and thus barely register; e.g. 2-in-1 PCs may get classified as "desktops", not tablets. Since 2016, in South America (and Cuba[156] in North America), Android tablets have gained the majority,[157] and in Asia in 2017 Android was slightly more popular than the iPad, which was at 49.05% usage share in October 2015.[158][159][160] In Africa, Android tablets are much more popular, while elsewhere the iPad has a safe margin. As of March 2015[update], Android has made steady gains towards becoming the most popular tablet operating system:[161] that is the trend in many countries, having already gained the majority in large countries (India at 63.25%,[162] and Indonesia at 62.22%[163]) and in the African continent, with Android at 62.22% (first to gain an Android majority, in late 2014),[164] with steady gains from 20.98% in August 2012[165] (Egypt at 62.37%,[166] Zimbabwe at 62.04%[166]), and South America at 51.09% in July 2015[167] (Peru at 52.96%[168]). Asia is at 46%.[169] In Nepal, Android gained the majority lead in November 2014 but lost it, down to 41.35%, with iOS at 56.51%.[170] In Taiwan, as of October 2016, Android, after having gained a confident majority, has been on a losing streak.[171] China is a major exception to Android gaining market share in Asia (there, Android phablets are much more popular than Android tablets, while similar devices get classified as smartphones), where the iPad/iOS was at 82.84% in March 2015.[172] According to StatCounter web use statistics (a proxy for all use), smartphones are more popular than desktop computers globally (and Android in particular more popular than Windows).
Including tablets with mobiles/smartphones (as they also run so-called mobile operating systems), even in the United States (and most countries) mobiles including tablets are more popular than the other operating systems (older ones originally made for desktops, such as Windows and macOS). Windows in the US (at 33.42%) has only an 8% head start (2.55 percentage points) over iOS alone; with Android added, those mobile operating systems hold a 52.14% majority.[180] Alternatively, Apple, with iOS plus its non-mobile macOS (9.33%), has 20% more share (6.7 percentage points more) than Microsoft's Windows in the country where both companies were built. Although desktop computers are still popular in many countries (while overall down to 44.9% in the first quarter of 2017[181]), smartphones are more popular even in many developed countries. A few countries on all continents are desktop-minority with Android more popular than Windows: many, e.g. Poland in Europe, about half of the countries in South America, and many in North America, e.g. Guatemala, Honduras and Haiti, up to most countries in Asia and Africa,[182] are smartphone-majority because of Android, with Poland and Turkey highest in Europe at 57.68% and 62.33%, respectively. In Ireland, smartphone use at 45.55% outnumbers desktop use, and mobile as a whole gains the majority when including the tablet share of 9.12%.[183][184] Spain was also slightly desktop-minority. As of July 2019, Sweden had been desktop-minority for eight weeks in a row.[185] The range of measured mobile web use varies a lot by country, and a StatCounter press release recognizes "India amongst world leaders in use of mobile to surf the internet"[186] (of the big countries), where the share is around (or over) 80%[187] and desktop is at 19.56%, with Russia trailing at 17.8% mobile use (and desktop the rest). Smartphones (discounting tablets) first gained majority in December 2016 (desktop majority was lost the month before),[where?] and it wasn't a Christmas-time fluke: while close to majority afterwards, smartphone majority happened again in March 2017.[188][clarification needed] In the week of 7–13 November 2016, smartphones alone (without tablets) overtook desktop for the first time, albeit for a short period.[189] Examples of mobile-majority countries include Paraguay in South America, Poland in Europe, and Turkey and most of Asia and Africa. Some of the world is still desktop-majority, with for example the United States at 54.89% (but not on all days).[190] However, in some territories of the United States, such as Puerto Rico,[191] desktop is significantly under majority, with Windows just under 25%, overtaken by Android. On 22 October 2016 (and subsequent weekends), mobile showed majority.[192] Since 27 October, the desktop hasn't had a majority, including on weekdays. Smartphones alone have shown majority since 23 December to the end of the year, with the share topping out at 58.22% on Christmas Day.[193] To the "mobile"-majority share of smartphones, tablets could be added, giving a 63.22% majority. While an unusually high top, a similar high also occurred on Monday 17 April 2017, with the smartphone share slightly lower and the tablet share slightly higher, combining to 62.88%. Formerly, according to a StatCounter press release, the world turned desktop-minority;[194] as of October 2016[update], at about 49% desktop use for that month, but mobile wasn't ranked higher; tablet share had to be added to it to exceed desktop share. For the Christmas season (i.e.
temporarily, while desktop-minority remained and smartphone-majority held on weekends[195][196]), the last two weeks in December 2016, Australia (and Oceania in general)[197] was desktop-minority for the first time for an extended period, i.e. every day from 23 December.[198] In South America, smartphones alone took the majority from desktops on Christmas Day,[196] but for a full-week average, desktop was still at least at 58%.[199] The UK desktop share dropped to a minority of 44.02% on Christmas Day and for the eight days to the end of the year.[200] Ireland joined some other European countries with smartphone-majority for three days after Christmas, topping out that day at 55.39%.[201][202] In the US, desktop-minority happened for three days on and around Christmas (while a longer four-day stretch happened in November, and it happens frequently on weekends).[203] According to StatCounter web use statistics (a proxy for all use), in the week from 7–13 November 2016, "mobile" (meaning smartphones) alone (without tablets) overtook desktop for the first time, ranked highest at 52.13% (on 27 November 2016)[204] or up to 49.02% for a full week.[205][206] Mobile-majority applies to countries such as Paraguay in South America, Poland in Europe and Turkey, and the continents Asia and Africa. Large regions of the rest of the world are still desktop-majority, while on some days the United States[207] (and North America as a whole)[208] isn't; the US is desktop-minority up to four days in a row,[209] and up to a five-day average.[210] Other examples of desktop-minority on some days include the UK,[208] Ireland,[211] and Australia[212] (and Oceania as a whole); in fact, at least one country on every continent[213][214][215] has turned desktop-minority (for at least a month). On 22 October 2016 (and subsequent weekends), mobile has shown majority.[216] Previously, according to a StatCounter press release, the world had turned desktop-minority;[217] as of October 2016[update], at about 49% desktop use for that month,[218][219] with desktop-minority stretching up to an 18-week/4-month period from 28 June to 31 October 2016,[220][221] while the whole of July, August or September 2016 showed desktop-majority (and many other long sub-periods in the long stretch showed desktop-minority; similarly, only Fridays, Saturdays and Sundays are desktop-minority). The biggest continents, Asia and Africa, have shown vast mobile-majority for a long time (any day of the week), and several individual countries elsewhere have also turned mobile-majority: Poland, Albania (and Turkey)[222] in Europe, and Paraguay and Bolivia[223] in South America.[224] According to StatCounter's web use statistics, Saturday 28 May 2016 was the day when smartphones ("mobile" at StatCounter, which now counts tablets separately) became the most used platform, ranking first at 47.27%, above desktops.[225][226] The next day, desktops slightly outnumbered "mobile" (unless counting tablets: some analysts count tablets with smartphones or separately, while others count them with desktops, even though most tablets are iPad or Android, not Windows, devices).[227] Since Sunday 27 March 2016, the first day the world dipped to desktop-minority,[228] it has happened almost every week, and by the week of 11–17 July 2016, the world was desktop-minority,[229] followed by the next week, and thereafter also for a three-week period.[230] The trend is still stronger on weekends, with e.g.
17 July 2016 showing desktop at 44.67%, "mobile" at 49.5%, plus tablets at 5.7%.[231] Recent weekly data shows a downward trend for desktops.[232][233] According to StatCounter web use statistics (a proxy for overall use), on weekends desktops worldwide lose about 5 percentage points, e.g. down to 51.46% on 15 August 2015, with the loss in (relative) web use going to mobile (and also a minuscule increase for tablets),[234] mostly because Windows 7, ranked 1st on workdays, declines in web use, shifting to Android and to a lesser degree iOS.[235] Two continents have already crossed over to mobile-majority (because of Android), based on StatCounter's web use statistics. In June 2015, Asia became the first continent where mobile overtook desktop[236] (followed by Africa in August;[237] while Nigeria had a mobile majority in October 2011,[238][239] because of Symbian, which later had a 51% share, then Series 40 dominating, followed by Android as the dominant operating system[240]), and as far back as October 2014 they had reported this trend on a large scale in a press release: "Mobile usage has already overtaken desktop in several countries including India, South Africa and Saudi Arabia".[241] In India, desktop went from majority, in July 2012, down to 32%.[242] In Bangladesh, desktop went from majority, in May 2013, down to 17%, with Android alone now accounting for majority web use.[243] Only a few African countries were still desktop-majority,[244] and many have a large mobile majority, including Ethiopia and Kenya, where mobile usage is over 72%.[245] The popularity of mobile use worldwide has been driven by the huge popularity increase of Android in Asian countries, where Android is the highest-ranked operating system statistically in virtually every south-east Asian country,[246] while it also ranks most popular in almost every African country. Poland has been desktop-minority since April 2015,[247] because of Android being vastly more popular there,[248] and other European countries, such as Albania (and Turkey), have also crossed over. The South American continent is somewhat far from losing desktop-majority, but Paraguay had lost it as of March 2015[update].[249] Android and mobile browsing in general have also become hugely popular in all other continents, where desktop still has a large base and the trend to mobile is not as clear as a fraction of the total web use. While some analysts count tablets with desktops (as some of them run Windows), others count them with mobile phones (as the vast majority of tablets run so-called mobile operating systems, such as Android or iOS on the iPad). The iPad has a clear lead globally, but has clearly lost the majority to Android in South America[250] and a number of Eastern European countries such as Poland; it has lost virtually all African countries, and has lost the majority twice in Asia but gained it back (while many individual countries, e.g.
India and most of the Middle East, have a clear Android majority on tablets).[251] Android on tablets is thus the second most popular, after the iPad.[252] In March 2015, for the first time in the US, the number of mobile-only adult internet users exceeded the number of desktop-only internet users, with 11.6% of the digital population only using mobile compared to 10.6% only using desktop; this also means the majority, 78%, use both desktop and mobile to access the internet.[253] A few smaller countries in North America, such as Haiti (because of Android), have gone mobile-majority (mobile went up to 72.35%, and was at 64.43% in February 2016).[254] The region with the largest Android usage[67] also has the largest mobile revenue.[255] Internet-based servers' market share can be measured with statistical surveys of publicly accessible servers, such as web servers, mail servers[257] or DNS servers on the Internet: the operating systems powering such servers are found by inspecting raw response messages. This method gives insight only into the market share of operating systems that are publicly accessible on the Internet. There will be differences in the result depending on how the sample is done and how observations are weighted. Usually the surveys are not based on a random sample of all IP addresses, domain names, hosts or organisations, but on servers found by some other method.[citation needed] Additionally, many domains and IP addresses may be served by one host, and some domains may be served by several hosts or by one host with several IP addresses. Mainframes are larger and more powerful than most servers, but not supercomputers. They are used to process large sets of data, for example enterprise resource planning or credit card transactions. The most common operating system for mainframes is IBM's z/OS.[265][citation needed] Operating systems for IBM Z generation hardware include IBM's proprietary z/OS,[266] Linux on IBM Z, z/TPF, z/VSE and z/VM. Gartner reported on 23 December 2008 that Linux on System z was used on approximately 28% of the "customer z base" and that they expected this to increase to over 50% in the following five years.[267] For Linux on IBM Z, Red Hat and Micro Focus compete to sell RHEL and SLES respectively. Echoing today's trend from personal computers to mobile devices,[253] in 1984 for the first time estimated sales of desktop computers ($11.6 billion) exceeded those of mainframe computers ($11.4 billion). IBM received the vast majority of mainframe revenue.[269] From 1991 to 1996, AT&T Corporation briefly owned NCR, one of the major original mainframe producers. During the same period, companies found that servers based on microcomputer designs could be deployed at a fraction of the acquisition price and offer local users much greater control over their own systems, given the IT policies and practices at that time. Terminals used for interacting with mainframe systems were gradually replaced by personal computers. Consequently, demand plummeted and new mainframe installations were restricted mainly to financial services and government.
In the early 1990s, there was a rough consensus among industry analysts that the mainframe was a dying market, as mainframe platforms were increasingly replaced by personal computer networks.[270] In 2012, NASA powered down its last mainframe, an IBM System z9.[271] However, IBM's successor to the z9, the z10, had led a New York Times reporter to state four years earlier that "mainframe technology—hardware, software and services—remains a large and lucrative business for IBM, and mainframes are still the back-office engines behind the world's financial markets and much of global commerce".[272] As of 2010[update], while mainframe technology represented less than 3% of IBM's revenues, it "continue[d] to play an outsized role in Big Blue's results".[273]

The TOP500 project lists and ranks the 500 fastest supercomputers for which benchmark results are submitted. Since the early 1990s, the field of supercomputers has been dominated by Unix or Unix-like operating systems, and since 2017 every one of the 500 fastest supercomputers has used Linux as its supercomputer operating system. The last supercomputer to rank #1 while using an operating system other than Linux was ASCI White, which ran AIX. It held the title from November 2000 to November 2001,[274] and was decommissioned in 2006. Then in June 2017, two AIX computers held ranks 493 and 494,[275] the last non-Linux systems before they dropped off the list. Historically, all kinds of Unix operating systems dominated; ultimately, Linux alone remains.
https://en.wikipedia.org/wiki/Usage_share_of_operating_systems
The Pick Operating System, also known as the Pick System or simply Pick,[1] is a demand-paged, multi-user, virtual memory, time-sharing computer operating system based around a MultiValue database. Pick is used primarily for business data processing. It is named after one of its developers, Dick Pick.[2][3]

The term "Pick system" has also come to be used as the general name of all operating environments which employ this multivalued database and have some implementation of Pick/BASIC and ENGLISH/Access queries. Although Pick started on a variety of minicomputers, the system and its various implementations eventually spread to a large assortment of microcomputers, personal computers[4] and mainframe computers.[5]

The Pick Operating System is an integrated computing platform with a database, query and procedural operation languages, peripheral and multi-user management, and BASIC programming capabilities. Its database uses a hash-file system, enabling efficient data storage and retrieval by organizing data into dynamic associative arrays managed by associative files. Data within the Pick system is organized into a hierarchical structure of accounts, dictionaries, files, and sub-files based on a hash-table model with linear probing. This structure comprises variable-length records, fields, and sub-fields, with unique naming conventions that reflect its multivalued database characteristics. Records are identified by unique keys that facilitate direct access to their storage locations.[6] Initially constrained by the era's technological limitations, the Pick system's capacity has expanded over time, removing earlier record-size limits and introducing dynamic file allocation and B-tree indexing to enhance data management capabilities. The Pick database operates without explicit data types,[6] treating all data as character strings, which places the onus of data integrity on the applications developed for the system. This flexibility allows Pick to store data in non-first-normal-form, avoiding the need for join operations by containing all related data within single records. This approach can optimize storage and retrieval efficiency for specific kinds of datasets.

Pick was originally implemented as the Generalized Information Retrieval Language System (GIRLS) on an IBM System/360 in 1965 by Don Nelson and Dick Pick at TRW, whose government contract for the Cheyenne Helicopter project required developing a database.[5] It was intended to be used by the U.S. Army to control the inventory of Cheyenne helicopter parts.[7] Pick was subsequently commercially released in 1973 by Microdata Corporation (and its British distributor CMC) as the Reality Operating System, now supplied by Northgate Information Solutions.[8] McDonnell Douglas bought Microdata in 1981.[5]

The first Microdata implementation, called Reality, came only with a procedural language (PROC) and a query language (ENGLISH). In 1975, Ken Simms of Pick Systems created an implementation of Dartmouth BASIC for Reality, with numerous syntax extensions for smart terminal interfaces and database operations, called Data/BASIC. At or near the same time, SMI of Chicago created an extended procedural language called RPL. PROC, the procedure language, was provided for executing scripts. An SQL-style language called ENGLISH allowed database retrieval and reporting, but not updates (although later, the ENGLISH command "REFORMAT" allowed updates on a batch basis). ENGLISH did not fully allow manipulating the three-dimensional multivalued structure of data records, nor did it directly provide common relational capabilities such as joins; however, powerful data dictionary redefinitions for a field allowed joins via the execution of a calculated lookup in another file.
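To make the hash-file and multivalued-record ideas above concrete, here is a small Python sketch. All names are invented for illustration; the only Pick-specific detail borrowed is the value mark (character 0xFD), which Pick uses to separate multiple values within a field. It models a string-typed record in non-first-normal-form and a toy hashed file with linear probing:

    # A Pick-style record: one key, string-typed fields, one of which is
    # multivalued (values separated by the value mark, character 0xFD).
    record_key = "SKU1001"
    record = [
        "WIDGET",                # field 1: description
        "4.99",                  # field 2: price, stored as a string
        "RED\xFDGREEN\xFDBLUE",  # field 3: multivalued colours
    ]

    # A toy hashed file with linear probing, echoing Pick's hash-table model.
    TABLE_SIZE = 8
    table = [None] * TABLE_SIZE  # each slot holds (key, record) or None

    def store(key, rec):
        slot = hash(key) % TABLE_SIZE
        # toy code: assumes the table never fills up
        while table[slot] is not None and table[slot][0] != key:
            slot = (slot + 1) % TABLE_SIZE  # linear probing
        table[slot] = (key, rec)

    def lookup(key):
        slot = hash(key) % TABLE_SIZE
        while table[slot] is not None:
            if table[slot][0] == key:
                return table[slot][1]
            slot = (slot + 1) % TABLE_SIZE
        return None

    store(record_key, record)
    print(lookup("SKU1001")[2].split("\xFD"))  # ['RED', 'GREEN', 'BLUE']

Because all related values live inside the one record, retrieving the item needs a single hashed lookup rather than a relational join, which is the storage trade-off the article describes.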
The system included a spooler. A simple text editor for file-system records was provided, but the editor was only suitable[6] for system maintenance and could not lock records, so most applications were written with the other tools, such as Batch, RPL, or the BASIC language, so as to ensure data validation and allow record locking.

By the early 1980s observers saw the Pick Operating System as a strong competitor to Unix.[9] BYTE in 1984 stated that "Pick is simple and powerful, and it seems to be efficient and reliable, too ... because it works well as a multiuser system, it's probably the most cost-effective way to use an XT".[10] Dick Pick founded Pick & Associates, later renamed Pick Systems, then Raining Data, then (as of 2011[update]) TigerLogic, and finally Rocket Software. He licensed "Pick" to a large variety of manufacturers and vendors who have produced different "flavors" of Pick. The database flavors sold by TigerLogic were D3, mvBase, and mv Enterprise. Those previously sold by IBM under the "U2" umbrella are known as UniData and UniVerse. Rocket Software purchased IBM's U2 family of products in 2010 and TigerLogic's D3 and mvBase family of products in 2014. In 2021, Rocket acquired OpenQM and jBASE as well. Dick Pick died of stroke complications in October 1994, at age 56.[3][11]

Pick Systems often became tangled in licensing litigation, and devoted relatively little effort to marketing[12][13] and improving its software. Subsequent ports of Pick to other platforms generally offered the same tools and capabilities for many years, usually with relatively minor improvements, simply renamed (for example, Data/BASIC became Pick/BASIC and ENGLISH became ACCESS).[6] Licensees often developed proprietary variations and enhancements; for example, Microdata created an input processor called ScreenPro.

The Pick database was licensed to roughly three dozen licensees between 1978 and 1984. Application-compatible implementations evolved into derivatives and also inspired similar systems. Through the implementations above, and others, Pick-like systems became available as database, programming, and emulation environments running under many variants of Unix and Microsoft Windows.
https://en.wikipedia.org/wiki/PICK_OS
This is an alphabetical list of notable technology terms. It includes terms with notable applications in computing, networking, and other technological fields.
https://en.wikipedia.org/wiki/List_of_technology_terms
Free Range Routing or FRRouting or FRR is a network routing software suite running on Unix-like platforms, particularly Linux, Solaris, OpenBSD, FreeBSD and NetBSD. It was created as a fork from Quagga, which itself was a fork of GNU Zebra. FRRouting is distributed under the terms of the GNU General Public License v2 (GPLv2). FRR provides implementations of a number of routing protocols, including BGP, OSPF, RIP and IS-IS, along with alpha implementations of several others. FRRouting broke away from the free routing software Quagga: several Quagga contributors, including Cumulus Networks, 6WIND, and Big Switch Networks, citing frustration about the pace of development, decided to fork the software and form their own community.[2]
https://en.wikipedia.org/wiki/FRRouting
NCOS is the graphical user interface-based operating system developed for use in Oracle Corporation's Network Computers, which are discontinued.[1] It was adapted by Acorn Computers from its own RISC OS,[2][3] which was originally developed for their range of Archimedes desktop computers. It shares with RISC OS the same 4 MB ROM size and suitability for use with TV displays.

In 1999, Pace acquired the set-top box (STB) division of Acorn Computers,[4]: 5:9 this being a component in the disposal of assets around the takeover of Acorn by MSDW Investment Holdings.[5] This gave Pace the rights to use and develop NCOS.[6] RISCOS Ltd later announced Embedded RISC OS, which was to have similarities with NCOS.[7]

NCOS originated in connection with the Network Computer project. It was used on various STB products.[8] It branched from RISC OS 3.60 and was called RISC OS 3.61[citation needed] before being named after the Network Computer Operating System.[9] It was merged back into the HEAD (the main line of RISC OS development) whilst at Pace,[citation needed] where it was known as RISC OS-NC[10] and RO-STB.[11]

NCOS was designed in accordance with the Network Computer Reference Profile and thus supports internet standards of the time.[citation needed] Being closely based on RISC OS, it can also run many of that operating system's applications.[12]: 13 Reporting on the launch of the Network Computer in 1996, it was noted that NCOS was essentially the same as RISC OS but with some features removed, such as "support for local file systems", whereas other features such as network support had been added to ROM.[13] The actual differences involved the absence of "modules significant to the operation and networking" of existing RISC OS versions, including the Filer, TaskManager and Pinboard modules, plus a range of networking modules. The use of files stored on a server and accessed using the Network File System (NFS) also imposed restrictions on the files used by applications; recommended techniques for the deployment of applications involved the transfer of files over NFS from RISC OS clients, or the use of archives in the largely Acorn-specific Spark format, with these being unpacked on the server using an appropriate tool.[12]
https://en.wikipedia.org/wiki/Network_Computer_Operating_System
Network functions virtualization (NFV)[1] is a network architecture concept that leverages IT virtualization technologies to virtualize entire classes of network node functions into building blocks that may connect, or chain together, to create and deliver communication services.

NFV relies upon traditional server-virtualization techniques such as those used in enterprise IT. A virtualized network function, or VNF, is implemented within one or more virtual machines or containers running different software and processes, on top of commercial off-the-shelf (COTS) high-volume servers, switches and storage devices, or even cloud computing infrastructure, instead of having custom hardware appliances for each network function, thereby avoiding vendor lock-in. For example, a virtual session border controller could be deployed to protect a network without the typical cost and complexity of obtaining and installing physical network protection units. Other examples of NFV include virtualized load balancers, firewalls, intrusion detection devices and WAN accelerators, to name a few.[2] The decoupling of the network function software from the customized hardware platform yields a flexible network architecture that enables agile network management and fast roll-out of new services, with significant reductions in CAPEX and OPEX.

Product development within the telecommunication industry has traditionally followed rigorous standards for stability, protocol adherence and quality, reflected by the use of the term carrier grade to designate equipment demonstrating this high reliability and performance factor.[3] While this model worked well in the past, it inevitably led to long product cycles, a slow pace of development and reliance on proprietary or specific hardware, e.g., bespoke application-specific integrated circuits (ASICs). This development model resulted in significant delays when rolling out new services, posed complex interoperability challenges, and significantly increased CAPEX/OPEX when scaling network systems and infrastructure and enhancing network service capabilities to meet increasing network load and performance demands. Moreover, the rise of significant competition in communication service offerings from agile organizations operating at large scale on the public Internet (such as Google Talk, Skype, Netflix) has spurred service providers to look for innovative ways to disrupt the status quo and increase revenue streams.

In October 2012, a group of telecom operators published a white paper[4] at a conference in Darmstadt, Germany, on software-defined networking (SDN) and OpenFlow. The Call for Action concluding the white paper led to the creation of the Network Functions Virtualization (NFV) Industry Specification Group (ISG)[5] within the European Telecommunications Standards Institute (ETSI). The ISG was made up of representatives from the telecommunication industry from Europe and beyond.[6][7] ETSI ISG NFV addresses many aspects, including functional architecture, information model, data model, protocols, APIs, testing, reliability, security and future evolutions. The ETSI ISG NFV announced Release 5 of its specifications in May 2021, aiming to produce new specifications and to extend the already published specifications with new features and enhancements. Since the publication of the white paper, the group has produced over 100 publications,[8] which have gained wide acceptance in the industry and are being implemented in prominent open source projects such as OpenStack, ONAP and Open Source MANO (OSM).
Due to active cross-liaison activities, the ETSI NFV specifications are also referenced in other SDOs such as 3GPP, IETF and ETSI MEC.

The NFV framework consists of three main components:[9] the virtualized network functions (VNFs), the NFV infrastructure (NFVI), and the NFV management and orchestration (NFV-MANO) framework. The building block for both the NFVI and the NFV-MANO is the NFV platform. In the NFVI role, it consists of both virtual and physical processing and storage resources, and virtualization software. In its NFV-MANO role it consists of VNF and NFVI managers and virtualization software operating on a hardware controller. The NFV platform implements carrier-grade features used to manage and monitor the platform components, recover from failures and provide effective security – all required for the public carrier network.

A service provider that follows the NFV design implements one or more virtualized network functions, or VNFs. A VNF by itself does not automatically provide a usable product or service to the provider's customers. To build more complex services, the notion of service chaining is used, where multiple VNFs are used in sequence to deliver a service.

Another aspect of implementing NFV is the orchestration process. To build highly reliable and scalable services, NFV requires that the network be able to instantiate VNF instances, monitor them, repair them, and (most important for a service provider business) bill for the services rendered. These attributes, referred to as carrier-grade[11] features, are allocated to an orchestration layer in order to provide high availability and security, and low operation and maintenance costs. Importantly, the orchestration layer must be able to manage VNFs irrespective of the underlying technology within the VNF. For example, an orchestration layer must be able to manage an SBC VNF from vendor X running on VMware vSphere just as well as an IMS VNF from vendor Y running on KVM.

The initial perception of NFV was that virtualized capability should be implemented in data centers. This approach works in many – but not all – cases. NFV presumes and emphasizes the widest possible flexibility as to the physical location of the virtualized functions. Ideally, therefore, virtualized functions should be located where they are the most effective and least expensive. That means a service provider should be free to locate NFV in all possible locations, from the data center to the network node to the customer premises. This approach, known as distributed NFV, has been emphasized from the beginning as NFV was being developed and standardized, and is prominent in the recently released NFV ISG documents.[12] For some cases there are clear advantages for a service provider to locate this virtualized functionality at the customer premises. These advantages range from economics to performance to the feasibility of the functions being virtualized.[13]

The first ETSI NFV ISG-approved public multi-vendor proof of concept (PoC) of D-NFV was conducted by Cyan, Inc., RAD, Fortinet and Certes Networks in Chicago in June 2014, and was sponsored by CenturyLink.
It was based on RAD's dedicated customer-edge D-NFV equipment running Fortinet's Next Generation Firewall (NGFW) and Certes Networks' virtual encryption/decryption engine as Virtual Network Functions (VNFs), with Cyan's Blue Planet system orchestrating the entire ecosystem.[14] RAD's D-NFV solution, a Layer 2/Layer 3 network termination unit (NTU) equipped with a D-NFV x86 server module that functions as a virtualization engine at the customer edge, became commercially available by the end of that month.[15] During 2014 RAD also organized a D-NFV Alliance, an ecosystem of vendors and international systems integrators specializing in new NFV applications.[16]

When designing and developing the software that provides the VNFs, vendors may structure that software into software components (the implementation view of a software architecture) and package those components into one or more images (the deployment view of a software architecture). These vendor-defined software components are called VNF Components (VNFCs). VNFs are implemented with one or more VNFCs, and it is assumed, without loss of generality, that VNFC instances map 1:1 to VM images. VNFCs should in general be able to scale up and/or scale out. By allocating flexible numbers of (virtual) CPUs to each of the VNFC instances, the network management layer can scale up (i.e., scale vertically) the VNFC to meet throughput/performance and scalability expectations on a single system or platform. Similarly, the network management layer can scale out (i.e., scale horizontally) a VNFC by activating multiple instances of the VNFC over multiple platforms, thereby meeting the performance and architecture specifications without compromising the stability of the other VNFC functions. Early adopters of such architecture blueprints have already implemented the NFV modularity principles.[17]
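A minimal Python sketch of the scale-up versus scale-out choice just described, with invented thresholds and names (real systems make this decision in the management and orchestration layer):

    # Toy autoscaling logic for a VNFC: scale up (more vCPUs per instance)
    # until a per-instance ceiling is reached, then scale out (more instances).

    MAX_VCPUS_PER_INSTANCE = 8

    class VNFC:
        def __init__(self):
            self.instances = [2]  # vCPUs allocated to each running instance

        def scale(self, utilization):
            if utilization < 0.8:              # headroom left: do nothing
                return
            if self.instances[-1] < MAX_VCPUS_PER_INSTANCE:
                self.instances[-1] *= 2        # scale up (vertical)
            else:
                self.instances.append(2)       # scale out (horizontal)

    vnfc = VNFC()
    for load in (0.9, 0.9, 0.9, 0.9):
        vnfc.scale(load)
    print(vnfc.instances)  # [8, 4]: one maxed-out instance plus a second one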
Network functions virtualization is highly complementary to SDN.[4] In essence, SDN is an approach to building data networking equipment and software that separates and abstracts elements of these systems. It does this by decoupling the control plane and data plane from each other, such that the control plane resides centrally and the forwarding components remain distributed. The control plane interacts both northbound and southbound. In the northbound direction the control plane provides a common abstracted view of the network to higher-level applications and programs, using high-level APIs and novel management paradigms such as intent-based networking. In the southbound direction the control plane programs the forwarding behavior of the data plane, using device-level APIs of the physical network equipment distributed around the network. Thus, NFV is not dependent on SDN or SDN concepts, but NFV and SDN can cooperate to enhance the management of an NFV infrastructure and to create a more dynamic network environment. It is entirely possible to implement a virtualized network function (VNF) as a standalone entity using existing networking and orchestration paradigms. However, there are inherent benefits in leveraging SDN concepts to implement and manage an NFV infrastructure, particularly when looking at the management and orchestration of Network Services (NSs), composed of different types of Network Functions (NFs), such as Physical Network Functions (PNFs) and VNFs, and placed across different geo-located NFV infrastructures; that is why multivendor platforms are being defined that incorporate SDN and NFV in concerted ecosystems.[18]

An NFV system needs a central orchestration and management system that takes operator requests associated with an NS or a VNF and translates them into the appropriate processing, storage and network configuration needed to bring the NS or VNF into operation. Once in operation, the VNF and the networks it is connected to potentially must be monitored for capacity and utilization, and adapted if necessary.[19]

All network control functions in an NFV infrastructure can be accomplished using SDN concepts, and NFV could be considered one of the primary SDN use cases in service provider environments.[20] For example, within each NFV infrastructure site, a VIM could rely upon an SDN controller to set up and configure the overlay networks interconnecting (e.g. via VXLAN) the VNFs and PNFs composing an NS. The SDN controller would then configure the NFV infrastructure switches and routers, as well as the network gateways, as needed. Similarly, a Wide Area Infrastructure Manager (WIM) could rely upon an SDN controller to set up overlay networks to interconnect NSs that are deployed to different geo-located NFV infrastructures. It is also apparent that many SDN use cases could incorporate concepts introduced in the NFV initiative. Examples include those where a centralized controller controls a distributed forwarding function that could in fact also be virtualized on existing processing or routing equipment.

NFV has proven a popular standard even in its infancy. Its immediate applications are numerous, such as virtualization of mobile base stations, platform as a service (PaaS), content delivery networks (CDNs), fixed access and home environments.[21] The potential benefits of NFV are anticipated to be significant. Virtualization of network functions deployed on general-purpose standardized hardware is expected to reduce capital and operational expenditures, and service and product introduction times.[22][23] Many major network equipment vendors have announced support for NFV.[24] This has coincided with NFV announcements from major software suppliers who provide the NFV platforms used by equipment suppliers to build their NFV products.[25][26]

However, to realize the anticipated benefits of virtualization, network equipment vendors are improving IT virtualization technology to incorporate the carrier-grade attributes required to achieve high availability, scalability, performance, and effective network management capabilities.[27] To minimize the total cost of ownership (TCO), carrier-grade features must be implemented as efficiently as possible. This requires that NFV solutions make efficient use of redundant resources to achieve five-nines (99.999%) availability,[28] and of computing resources, without compromising performance predictability. The NFV platform is the foundation for achieving efficient carrier-grade NFV solutions.[29] It is a software platform running on standard multi-core hardware, built using open source software that incorporates carrier-grade features.
The NFV platform software is responsible for dynamically reassigning VNFs due to failures and changes in traffic load, and therefore plays an important role in achieving high availability. There are numerous initiatives underway to specify, align and promote NFV carrier-grade capabilities, such as the ETSI NFV Proof of Concept,[30] the ATIS[31] Open Platform for NFV Project,[32] the Carrier Network Virtualization Awards[33] and various supplier ecosystems.[34]

The vSwitch, a key component of NFV platforms, is responsible for providing connectivity both VM-to-VM (between VMs) and between VMs and the outside network. Its performance determines both the bandwidth of the VNFs and the cost-efficiency of NFV solutions. The standard Open vSwitch (OVS) has performance shortcomings that must be resolved to meet the needs of NFVI solutions.[35] Significant performance improvements are being reported by NFV suppliers for both OVS and Accelerated Open vSwitch (AVS) versions.[36][37]

Virtualization is also changing the way availability is specified, measured and achieved in NFV solutions. As VNFs replace traditional function-dedicated equipment, there is a shift from equipment-based availability to a service-based, end-to-end, layered approach.[38][39] Virtualizing network functions breaks the explicit coupling with specific equipment; availability is therefore defined by the availability of VNF services. Because NFV technology can virtualize a wide range of network function types, each with its own service availability expectations, NFV platforms should support a wide range of fault tolerance options. This flexibility enables CSPs to optimize their NFV solutions to meet any VNF availability requirement.

ETSI has already indicated that an important part of controlling the NFV environment should be done through automated orchestration. NFV Management and Orchestration (NFV-MANO) refers to the set of functions within an NFV system that manage and orchestrate the allocation of virtual infrastructure resources to virtualized network functions (VNFs) and network services (NSs). They are the brains of the NFV system and a key automation enabler. The main functional blocks within the NFV-MANO architectural framework (ETSI GS NFV-006) are the NFV Orchestrator (NFVO), the VNF Manager (VNFM) and the Virtualised Infrastructure Manager (VIM). The entry point into NFV-MANO for external operations support systems (OSS) and business support systems (BSS) is the NFVO, which is in charge of managing the lifecycle of NS instances. The management of the lifecycle of the VNF instances constituting an NS instance is delegated by the NFVO to one or more VNFMs. Both the NFVO and the VNFMs use the services exposed by one or more VIMs for allocating virtual infrastructure resources to the objects they manage. Additional functions are used for managing containerized VNFs: the Container Infrastructure Service Management (CISM) and the Container Image Registry (CIR) functions. The CISM is responsible for maintaining the containerized workloads, while the CIR is responsible for storing and maintaining information about OS container software images.

The behavior of the NFVO and VNFM is driven by the contents of deployment templates (a.k.a. NFV descriptors) such as a Network Service Descriptor (NSD) and a VNF Descriptor (VNFD). ETSI delivers a full set of standards enabling an open ecosystem where Virtualized Network Functions (VNFs) can be interoperable with independently developed management and orchestration systems, and where the components of a management and orchestration system are themselves interoperable.
This includes a set of RESTful API specifications[40] as well as specifications of a packaging format for delivering VNFs to service providers and of the deployment templates to be packaged with the software images to enable managing the lifecycle of VNFs. Deployment templates can be based on TOSCA or YANG.[41][42] An OpenAPI (a.k.a. Swagger) representation of the API specifications is available and maintained on the ETSI Forge server, along with TOSCA and YANG definition files to be used when creating deployment templates. The published specifications cover, among other areas, OS container management and orchestration. An overview of the different versions of the OpenAPI representations of NFV-MANO APIs is available on the ETSI NFV wiki. The OpenAPI files, as well as the TOSCA YAML definition files and YANG modules applicable to NFV descriptors, are available on the ETSI Forge. Additional studies are ongoing within ETSI on possible enhancements to the NFV-MANO framework to improve its automation capabilities and introduce autonomous management mechanisms (see ETSI GR NFV-IFA 041).

Recent performance studies on NFV have focused on the throughput, latency and jitter of virtualized network functions (VNFs), as well as NFV scalability in terms of the number of VNFs a single physical server can support.[43] Open source NFV platforms are available; one representative is openNetVM.[44] openNetVM is a high-performance NFV platform based on DPDK and Docker containers. openNetVM provides a flexible framework for deploying network functions and interconnecting them to build service chains. openNetVM is an open source version of the NetVM platform described in NSDI 2014 and HotMiddlebox 2016 papers, released under the BSD license. The source code can be found at GitHub: openNetVM[45]

From 2018, many VNF providers began to migrate their VNFs to a container-based architecture. Such VNFs, also known as cloud-native network functions (CNFs), utilize many innovations deployed commonly on internet infrastructure. These include auto-scaling, support for a continuous delivery/DevOps deployment model, and efficiency gains from sharing common services across platforms. Through service discovery and orchestration, a network based on CNFs will be more resilient to infrastructure resource failures. Utilizing containers, and thus dispensing with the overhead inherent in traditional virtualization through the elimination of the guest OS, can greatly increase infrastructure resource efficiency.[46]
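As a toy illustration of the service chaining and lifecycle-management ideas described in this section, the following Python sketch (hypothetical names; not an ETSI-defined interface) passes packets through an ordered chain of VNF instances and restarts failed ones:

    # Minimal sketch of NFV service chaining and lifecycle management.

    class VNF:
        def __init__(self, name, handler):
            self.name = name
            self.handler = handler   # per-packet processing function
            self.healthy = True

        def process(self, packet):
            return self.handler(packet)

    def firewall(packet):
        # drop packets from a blocked source, pass everything else
        return None if packet.get("src") == "10.0.0.66" else packet

    def nat(packet):
        packet["src"] = "192.0.2.1"  # rewrite the source address
        return packet

    # A network service as an ordered chain of VNFs (service chaining).
    service_chain = [VNF("fw", firewall), VNF("nat", nat)]

    def run_chain(chain, packet):
        for vnf in chain:
            if not vnf.healthy:      # a trivial "orchestrator" action:
                vnf.healthy = True   # restart the failed instance
            packet = vnf.process(packet)
            if packet is None:       # dropped by a VNF in the chain
                return None
        return packet

    print(run_chain(service_chain, {"src": "203.0.113.7", "dst": "198.51.100.2"}))

A real orchestrator would instantiate each VNF on infrastructure resources, monitor it, and bill for usage; the sketch only shows why the chain, not any single VNF, defines the delivered service.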
https://en.wikipedia.org/wiki/Network_functions_virtualization
The Software for Open Networking in the Cloud, alternatively abbreviated and stylized as SONiC, is a free and open source network operating system based on Linux. It was originally developed by Microsoft and the Open Compute Project. In 2022, Microsoft ceded oversight of the project to the Linux Foundation, which will continue to work with the Open Compute Project for continued ecosystem and developer growth.[1][2][3][4] SONiC includes the networking software components necessary for a fully functional L3 device[5] and was designed to meet the requirements of a cloud data center. It allows cloud operators to share the same software stack across hardware from different switch vendors and works on over 100 different platforms.[3][5][6] There are multiple companies offering enterprise service and support for SONiC.

SONiC was developed and open sourced by Microsoft in 2016.[2] The software decouples network software from the underlying hardware and is built on the Switch Abstraction Interface API.[1] It runs on network switches and ASICs from multiple vendors.[2] Notable supported network features include Border Gateway Protocol (BGP), remote direct memory access (RDMA), QoS, and various other Ethernet/IP technologies.[2] Much of the protocol support is provided through the inclusion of the FRRouting suite of routing daemons.[7]

The SONiC community includes cloud providers, service providers, and silicon and component suppliers, as well as networking hardware OEMs and ODMs. It has more than 850 members.[2] The source code is licensed under a mix of open source licenses, including the GNU General Public License and the Apache License, and is available on GitHub.[8][9]
https://en.wikipedia.org/wiki/SONiC_(operating_system)
Object-oriented analysis and design (OOAD) is a technical approach for analyzing and designing an application, system, or business by applying object-oriented programming, as well as using visual modeling throughout the software development process to guide stakeholder communication and product quality. OOAD in modern software engineering is typically conducted in an iterative and incremental way. The outputs of OOAD activities are analysis models (for OOA) and design models (for OOD) respectively. The intention is for these to be continuously refined and evolved, driven by key factors like risks and business value.

In the early days of object-oriented technology, before the mid-1990s, there were many different competing methodologies for software development and object-oriented modeling, often tied to specific Computer-Aided Software Engineering (CASE) tool vendors. The lack of standard notations, consistent terminology and process guidance was a major concern at the time, degrading communication efficiency and lengthening learning curves. Some of the well-known early object-oriented methodologies were from and inspired by gurus such as Grady Booch, James Rumbaugh, Ivar Jacobson (the Three Amigos), Robert Martin, Peter Coad, Sally Shlaer, Stephen Mellor, and Rebecca Wirfs-Brock. In 1994, the Three Amigos of Rational Software started working together to develop the Unified Modeling Language (UML). Later, together with Philippe Kruchten and Walker Royce (eldest son of Winston Royce), they led a successful mission to merge their own methodologies, OMT, OOSE and the Booch method, with various insights and experiences from other industry leaders, into the Rational Unified Process (RUP), a comprehensive iterative and incremental process guide and framework for learning industry best practices of software development and project management.[1] Since then, the Unified Process family has become probably the most popular methodology and reference model for object-oriented analysis and design.

An object contains encapsulated data and procedures grouped to represent an entity. The 'object interface' defines how the object can be interacted with. An object-oriented program is described by the interaction of these objects. Object-oriented design is the discipline of defining the objects and their interactions to solve a problem that was identified and documented during object-oriented analysis. What follows is a description of the class-based subset of object-oriented design, which does not include object prototype-based approaches, where objects are not typically obtained by instantiating classes but by cloning other (prototype) objects. Object-oriented design is a method of design encompassing the process of object-oriented decomposition and a notation for depicting logical and physical as well as state and dynamic models of the system under design.

The software life cycle is typically divided into stages, going from abstract descriptions of the problem, to designs, then to code and testing, and finally to deployment. The earliest stages of this process are analysis and design. The analysis phase is also often called "requirements acquisition". In some approaches to software development—known collectively as waterfall models—the boundaries between stages are meant to be fairly rigid and sequential.
The term "waterfall" was coined for such methodologies to signify that progress went sequentially in one direction only, i.e., once analysis was complete then and only then was design begun and it was rare (and considered a source of error) when a design issue required a change in the analysis model or when a coding issue required a change in design. The alternative to waterfall models are iterative models. This distinction was popularized byBarry Boehmin a very influential paper on his Spiral Model for iterative software development. With iterative models it is possible to do work in various stages of the model in parallel. So for example it is possible—and not seen as a source of error—to work on analysis, design, and even code all on the same day and to have issues from one stage impact issues from another. The emphasis on iterative models is that software development is a knowledge-intensive process and that things like analysis can't really be completely understood without understanding design issues, that coding issues can affect design, that testing can yield information about how the code or even the design should be modified, etc.[2] Although it is possible to do object-oriented development using a waterfall model, in practice most object-oriented systems are developed with an iterative approach. As a result, in object-oriented processes "analysis and design" are often considered at the same time. The object-oriented paradigm emphasizes modularity and re-usability. The goal of an object-oriented approach is to satisfy the"open–closed principle". A module is open if it supports extension, or if the module provides standardized ways to add new behaviors or describe new states. In the object-oriented paradigm this is often accomplished by creating a new subclass of an existing class. A module is closed if it has a well defined stable interface that all other modules must use and that limits the interaction and potential errors that can be introduced into one module by changes in another. In the object-oriented paradigm this is accomplished by defining methods that invoke services on objects. Methods can be either public or private, i.e., certain behaviors that are unique to the object are not exposed to other objects. This reduces a source of many common errors in computer programming.[3] The software life cycle is typically divided up into stages going from abstract descriptions of the problem to designs then to code and testing and finally to deployment. The earliest stages of this process are analysis and design. The distinction between analysis and design is often described as "what vs. how". In analysis developers work with users and domain experts to define what the system is supposed to do. Implementation details are supposed to be mostly or totally (depending on the particular method) ignored at this phase. The goal of the analysis phase is to create a functional model of the system regardless of constraints such as appropriate technology. In object-oriented analysis this is typically done via use cases and abstract definitions of the most important objects. The subsequent design phase refines the analysis model and makes the needed technology and other implementation choices. In object-oriented design the emphasis is on describing the various objects, their data, behavior, and interactions. 
The purpose of any analysis activity in the software life cycle is to create a model of the system's functional requirements that is independent of implementation constraints. The main difference between object-oriented analysis and other forms of analysis is that the object-oriented approach organizes requirements around objects, which integrate both behaviors (processes) and states (data), modeled after the real-world objects that the system interacts with. In other, more traditional analysis methodologies, the two aspects, processes and data, are considered separately. For example, data may be modeled by ER diagrams, and behaviors by flow charts or structure charts.

Common models used in OOA are use cases and object models. Use cases describe scenarios for standard domain functions that the system must accomplish. Object models describe the names, class relations (e.g. Circle is a subclass of Shape), operations, and properties of the main objects. User-interface mockups or prototypes can also be created to help understanding.[5]

Object-oriented design (OOD) is the process of planning a system of interacting objects to solve a software problem. It is a method for software design. By defining classes and their functionality for their children (instantiated objects), each object can run the same implementation of the class with its own state.

During OOD, a developer applies implementation constraints to the conceptual model produced in object-oriented analysis. Such constraints could include the hardware and software platforms, the performance requirements, persistent storage and transactions, the usability of the system, and limitations imposed by budgets and time. Concepts in the analysis model, which is technology independent, are mapped onto implementing classes and interfaces, resulting in a model of the solution domain, i.e., a detailed description of how the system is to be built on concrete technologies.[6]

Important topics during OOD also include the design of software architectures by applying architectural patterns and design patterns with the object-oriented design principles. The input for object-oriented design is provided by the output of object-oriented analysis. Realize that an output artifact does not need to be completely developed to serve as input to object-oriented design; analysis and design may occur in parallel, and in practice the results of one activity can feed the other in a short feedback cycle through an iterative process. Both analysis and design can be performed incrementally, and the artifacts can be continuously grown instead of completely developed in one shot. Typical input artifacts for object-oriented design include the conceptual model, use cases and user-interface documentation produced during analysis.

The five basic concepts of object-oriented design are the implementation-level features built into the programming language: object/class, information hiding, inheritance, interface, and polymorphism. The main advantage of using a design pattern is that it can be reused in multiple applications. It can also be thought of as a template for how to solve a problem, one that can be used in many different situations and applications. Object-oriented design patterns typically show relationships and interactions between classes or objects, without specifying the final application classes or objects involved.
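To show what "relationships and interactions between classes, without the final application classes" means in practice, here is a small Observer-pattern sketch in Python (all names invented for this example): the subject depends only on an update interface, not on Logger or Alerter specifically.

    # A minimal Observer pattern: the subject knows only the observer
    # interface, not the concrete application classes that implement it.

    class Subject:
        def __init__(self):
            self._observers = []

        def attach(self, observer):
            self._observers.append(observer)

        def notify(self, event):
            for observer in self._observers:
                observer.update(event)

    class Logger:
        def update(self, event):
            print("logged:", event)

    class Alerter:
        def update(self, event):
            print("alert raised for:", event)

    subject = Subject()
    subject.attach(Logger())
    subject.attach(Alerter())
    subject.notify("threshold exceeded")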
Object-oriented modeling (OOM) is a common approach to modeling applications, systems, and business domains by using the object-oriented paradigm throughout the entire development life cycle. OOM is a main technique heavily used by both OOD and OOA activities in modern software engineering. Object-oriented modeling typically divides into two aspects of work: the modeling of dynamic behaviors like business processes and use cases, and the modeling of static structures like classes and components. OOA and OOD are the two distinct abstract levels (i.e. the analysis level and the design level) of OOM. The Unified Modeling Language (UML) and SysML are the two popular international standard languages used for object-oriented modeling.[9] The benefits of OOM are:

Efficient and effective communication. Users typically have difficulty understanding comprehensive documents and programming-language code well. Visual model diagrams can be more understandable and can allow users and stakeholders to give developers feedback on the appropriate requirements and structure of the system. A key goal of the object-oriented approach is to decrease the "semantic gap" between the system and the real world, and to have the system be constructed using terminology that is almost the same as the one stakeholders use in everyday business. Object-oriented modeling is an essential tool to facilitate this.

Useful and stable abstraction. Modeling helps coding. A goal of most modern software methodologies is to first address "what" questions and then address "how" questions, i.e. first determine the functionality the system is to provide without consideration of implementation constraints, and then consider how to turn these abstract requirements into specific solutions, refining them into detailed designs and code under constraints such as technology and budget. Object-oriented modeling enables this by producing abstract and accessible descriptions of both system requirements and designs, i.e. models that define their essential structures and behaviors, such as processes and objects, which are important and valuable development assets with higher abstraction levels above concrete and complex source code.
https://en.wikipedia.org/wiki/Object-oriented_design
ICAD (corporate history: ICAD, Inc.; Concentra, a name change at IPO in 1995; KTI, a name change in 1998; Dassault Systèmes, purchase in 2001)[1] is a knowledge-based engineering (KBE) system that enables users to encode design knowledge using a semantic representation that can be evaluated to produce Parasolid output. ICAD has an open architecture that can utilize all the power and flexibility of the underlying language.

KBE, as implemented via ICAD, received a lot of attention due to the remarkable results that appeared to take little effort.[citation needed] ICAD allowed one example of end-user computing that is, in a sense, unparalleled. Most ICAD developers were degreed engineers. Systems developed by ICAD users were non-trivial and consisted of highly complicated code. In the sense of end-user computing, ICAD was the first to put the power of a domain tool in the hands of the user, while at the same time being open to extensions as identified and defined by the domain expert or subject-matter expert (SME).[citation needed] A COE article[2] looked at the resulting explosion of expectations (see AI winter), which were not sustainable. However, such a bubble burst does not diminish the ability that would exist were expectations and use reasonable or properly managed.[citation needed]

The original implementation of ICAD was on a Lisp machine (Symbolics). Some of the principals involved with the development were Larry Rosenfeld,[3] Avrum Belzer, Patrick M. O'Keefe, Philip Greenspun, and David F. Place. The time frame was 1984–85.[4][5] ICAD started on special-purpose Symbolics Lisp hardware and was then ported to Unix when Common Lisp became portable to general-purpose workstations.

The original domain for ICAD was mechanical design, with many application successes. However, ICAD found use in other domains, such as electrical design, shape modeling, etc. Example projects include wind tunnel design and the development of a support tool for aircraft multidisciplinary design.[6][7][8] Further examples can be found in the presentations at the annual IIUG (International ICAD Users Group) published in the KTI Vault[dead link] (1999 through 2002).[9] Boeing and Airbus used ICAD extensively to develop various components in the 1990s and early 21st century. As of 2003, ICAD featured strongly in several areas, as evidenced by the Product Vision and Strategy[dead link] presentation. After 2003, ICAD use diminished.

At the end of 2001, the KTI company faced financial difficulties and laid off most of its best staff. It was eventually bought out by Dassault, which effectively scuppered the ICAD product (see IIUG at COE, 2003, the first meeting held under Dassault via KTI). The ICAD system was, relatively speaking, very expensive, in the price range of high-end systems. Market dynamics couldn't support this, as there may not have been sufficient differentiating factors between ICAD and the lower-end systems (or the promises from Dassault). KTI was absorbed by Dassault Systèmes, and ICAD is no longer considered the go-forward tool for knowledge-based engineering (KBE) applications by that company. Dassault Systèmes is promoting a suite of tools oriented around version 5 of its popular CATIA CAD application, with Knowledgeware as the replacement for ICAD. As of 2005[update], things were still a bit unclear. ICAD 8.3 was delivered. A recent COE Aerospace Conference had a discussion about the future of KBE. One issue involves the stacking of 'meta' issues within a computer model.
How this is resolved, whether by more icons or the availability of an external language, remains to be seen. The Genworks GDL product (including kernel technology from the Gendl Project) is the nearest functional equivalent to ICAD currently available.

ICAD provided a declarative language (IDL) using New Flavors (never converted to the Common Lisp Object System (CLOS)) that supported a mechanism for relating parts (defpart) via a hierarchical set of relationships. Technically, the ICAD defpart was a Lisp macro; the ICAD defpart list was a set of generic classes that could be instantiated with specific properties depending upon what was represented. This defpart list was extendible via composited parts that represented domain entities. Along with the part–subpart relations, ICAD supported generic relations via the object modeling abilities of Lisp.

Example applications of ICAD range from a small collection of defparts that represents a part or component to a larger collection that represents an assembly. In terms of power, an ICAD system, when fully specified, can generate thousands of instances of parts on a major assembly design. One example of an application driving thousands of instances of parts is an aircraft wing, where fastener type and placement may number in the thousands, each instance requiring evaluation of several factors driving the design parameters.

One role for ICAD may be serving as the defining prototype for KBE, which would require knowing more about what has occurred over the past 15 years (much information is tied up behind corporate firewalls and under proprietary walls). With the rise of functional programming languages (an example is Haskell) in the markets, perhaps some of the power attributable to Lisp may be replicated.
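To give a flavor of the defpart-style, declarative part–subpart modeling described above, the following Python sketch is a loose analogue (invented names; it is not IDL or actual ICAD syntax). A parent part declares its subparts, and instantiating the root generates the whole tree, including thousands of fastener instances from a few lines of specification:

    # A loose analogue of ICAD's defpart hierarchy: a part declares
    # subparts and attributes; building the root builds the tree.

    class Part:
        def __init__(self, name, **attrs):
            self.name = name
            self.attrs = attrs
            self.children = []

        def add(self, child):
            self.children.append(child)
            return child

        def count(self):  # total number of part instances in the tree
            return 1 + sum(c.count() for c in self.children)

    # A wing assembly that generates fastener instances per computed position.
    wing = Part("wing", span_m=30.0)
    rib_pitch = 0.6
    for i in range(int(wing.attrs["span_m"] / rib_pitch)):
        rib = wing.add(Part(f"rib-{i}", station_m=i * rib_pitch))
        for j in range(40):  # fasteners per rib (illustrative figure)
            rib.add(Part(f"fastener-{i}-{j}", diameter_mm=4.8))

    print(wing.count())  # 2051 instances from a small declarative spec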
https://en.wikipedia.org/wiki/ICAD_(software)
Orphaned technology refers to computer technologies that have been abandoned by their original developers. As opposed to deprecation, which tends to be a gradual shift away from an older technology to a newer one, orphaned technology is usually abandoned immediately or with no direct replacement.[1] Unlike abandonware, orphaned technology refers to both software and hardware and the practices around them. Users of orphaned technologies must often make a choice: continue to use the technology, which may become harder to maintain over time, or switch to other supported technologies, possibly losing capabilities unique to the orphaned technology.[citation needed]

While technology can be abandoned due to an unfavourable design or poor implementation, abandoning a technology can happen for a variety of reasons.[1] There are instances where products are phased out of the market because they are no longer viable as business ventures, such as certain medical technologies.[2] Some orphaned technologies do not suffer complete abandonment or obsolescence.[citation needed] For instance, there is the case of IBM's silicon-germanium (SiGe) technology, a program that produced an in situ doped alloy as a replacement for the conventional implantation step in a silicon semiconductor bipolar process. The technology was previously orphaned but was continued by a small team at IBM, so that it emerged as a leading product in the high-volume communications marketplace.[3] Technologies orphaned due to failure on the part of their startup developers can be picked up by another investor. One example is Wink, an IoT technology orphaned when its parent company Quirky filed for bankruptcy. The platform, however, continued after it was purchased by another company, Flex.[4]

One example of orphaned technology is Symbolics Inc's operating systems, Genera and Open Genera, which were twice orphaned, as they were ported from Lisp machines to computers using the Alpha 64-bit CPU.[further explanation needed] User groups often exist for specific orphaned technologies, such as The Hong Kong Newton User Group,[8] the Symbolics Lisp [Machines] Users' Group (now known as the Association of Lisp Users),[9] and Newton Reference.[10] The Save Sibelius group sprang into existence because Sibelius (scorewriter) users feared the application would be orphaned after its owner Avid Tech fired most of the development team, who were thereafter hired by Steinberg to develop the competing product, Dorico.[11][12][13]
https://en.wikipedia.org/wiki/Orphaned_technology
This is a list of utilities for performing disk partitioning.
https://en.wikipedia.org/wiki/List_of_disk_partitioning_software
An optical disc image (or ISO image, from the ISO 9660 file system used with CD-ROM media) is a disk image that contains everything that would be written to an optical disc, disc sector by disc sector, including the optical disc file system.[3] ISO images contain the binary image of an optical media file system (usually ISO 9660 and its extensions, or UDF), including the data in its files in binary format, copied exactly as they were stored on the disc. The data inside the ISO image will be structured according to the file system that was used on the optical disc from which it was created. ISO images can be created from optical discs by disk imaging software, from a collection of files by optical disc authoring software, or from a different disk image file by means of conversion. Software distributed on bootable discs is often available for download in ISO image format; like any other ISO image, it may be written to an optical disc such as a CD, DVD or Blu-ray.

Optical disc images are uncompressed and do not use a particular container format; they are a sector-by-sector copy of the data on an optical disc, stored inside a binary file. Besides ISO 9660 media, an ISO image might also contain a UDF (ISO/IEC 13346) file system, commonly used by DVDs and Blu-ray discs.

The .iso file extension is the one most commonly used for this type of disc image. The .img extension can also be found on some ISO image files, such as in some images from Microsoft DreamSpark; however, IMG files, which also use the .img extension, tend to have slightly different contents. The .udf file extension is sometimes used to indicate that the file system inside the ISO image is actually UDF and not ISO 9660.

ISO files store only the user data from each sector on an optical disc, ignoring the control headers and error correction data, and are therefore slightly smaller than a raw disc image of optical media. Since the size of the user-data portion of a sector (the logical sector) on data optical discs is 2,048 bytes, the size of an ISO image will be a multiple of 2,048.

Any single-track CD-ROM, DVD or Blu-ray disc can be archived in ISO format as a true digital copy of the original. Unlike a physical optical disc, an image can be transferred over any data link or removable storage medium. An ISO image can be opened with almost every multi-format file archiver. Native support for handling ISO images varies from operating system to operating system. With suitable driver software, an ISO can be "mounted", allowing the operating system to interface with it just as if the ISO were a physical optical disc. Most Unix-based operating systems, including Linux and macOS, have this built-in capability to mount an ISO. Versions of Windows beginning with Windows 8 also have such a capability.[4] For other operating systems, separately available software drivers can be installed to achieve the same objective.

A CD can have multiple tracks, which can contain computer data, audio, or video. File systems such as ISO 9660 are stored inside one of these tracks. Since ISO images are expected to contain a binary copy of the file system and its contents, there is no concept of a "track" inside an ISO image; a track is a container for the contents of an ISO image.
This means that CDs with multiple tracks cannot be stored inside a single ISO image; at most, an ISO image will contain the data inside one of those tracks, and only if it is stored inside a standard file system. This also means that audio CDs, which are usually composed of multiple tracks, cannot be stored inside an ISO image. Furthermore, not even a single track of an audio CD can be stored as an ISO image, since audio tracks do not contain a file system, only a continuous stream of encoded audio data. This audio is stored in sectors of 2,352 bytes, different from those that store a file system, and it is not stored inside files; it is addressed with track numbers, index points and a CD time code that are encoded into the lead-in of each session of the CD-Audio disc. Video CDs and Super Video CDs require at least two tracks on a CD, so it is also not possible to store an image of one of these discs inside an ISO image file; an .IMG file, however, can achieve this.

Formats such as CUE/BIN, CCD/IMG and MDS/MDF can be used to store multi-track disc images, including audio CDs. These formats store a raw disc image of the complete disc, including information from all tracks, along with a companion file describing the multiple tracks and the characteristics of each of those tracks. This gives an optical media burning tool all the information required to correctly burn the image onto a new disc. For audio CDs, one can also transfer the audio data into uncompressed audio files like WAV or AIFF, optionally preserving the metadata (see CD ripping).

Most software that is capable of writing from ISO images to hard disks or recordable media (CD/DVD/BD) is generally not able to write from ISO disk images to flash drives. This limitation is more related to the availability of software tools able to perform this task than to problems in the format itself. Since 2011, however, various software has existed to write raw image files to USB flash drives.[5][6]

.ISO files are commonly used in emulators to replicate a CD image. Emulators such as Dolphin and PCSX2 use .iso files to emulate Wii and GameCube games, and PlayStation 2 games, respectively.[7][8] They can also be used as virtual CD-ROMs for hypervisors such as VMware Workstation or VirtualBox. Another use is burning disk images of operating systems to physical installation media.
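Two of the facts above can be checked programmatically: the size of an ISO image is a multiple of 2,048 bytes, and an ISO 9660 volume descriptor begins at sector 16 (byte offset 32,768), with the standard identifier "CD001" in bytes 1–5 of that sector. A minimal Python sketch (the file path is a placeholder):

    import os

    def looks_like_iso9660(path):
        size = os.path.getsize(path)
        if size % 2048 != 0:       # user-data sectors are 2,048 bytes each
            return False
        with open(path, "rb") as f:
            f.seek(16 * 2048)      # volume descriptors start at sector 16
            sector = f.read(2048)
        # byte 0 is the descriptor type; bytes 1-5 hold the identifier "CD001"
        return sector[1:6] == b"CD001"

    print(looks_like_iso9660("example.iso"))  # placeholder path

Note that this only recognizes ISO 9660 images; a UDF-only image would fail the identifier check even though it is a valid optical disc image.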
https://en.wikipedia.org/wiki/ISO_image
In mathematics, a multiset (or bag, or mset) is a modification of the concept of a set that, unlike a set,[1] allows for multiple instances of each of its elements. The number of instances given for each element is called the multiplicity of that element in the multiset. As a consequence, an infinite number of multisets exist that contain only elements a and b, but vary in the multiplicities of their elements: {a, b}, {a, a, b}, {a, a, b, b}, {a, a, a, b}, and so on. These objects are all different when viewed as multisets, although they are the same set, since they all consist of the same elements. As with sets, and in contrast to tuples, the order in which elements are listed does not matter in discriminating multisets, so {a, a, b} and {a, b, a} denote the same multiset. To distinguish between sets and multisets, a notation that incorporates square brackets is sometimes used: the multiset {a, a, b} can be denoted by [a, a, b].[2]

The cardinality or "size" of a multiset is the sum of the multiplicities of all its elements. For example, in the multiset {a, a, b, b, b, c} the multiplicities of the members a, b, and c are respectively 2, 3, and 1, and therefore the cardinality of this multiset is 6.

Nicolaas Govert de Bruijn coined the word multiset in the 1970s, according to Donald Knuth.[3]: 694 However, the concept of multisets predates the coinage of the word multiset by many centuries. Knuth himself attributes the first study of multisets to the Indian mathematician Bhāskarāchārya, who described permutations of multisets around 1150. Other names have been proposed or used for this concept, including list, bunch, bag, heap, sample, weighted set, collection, and suite.[3]: 694

Wayne Blizard traced multisets back to the very origin of numbers, arguing that "in ancient times, the number n was often represented by a collection of n strokes, tally marks, or units."[4] These and similar collections of objects can be regarded as multisets, because strokes, tally marks, or units are considered indistinguishable. This shows that people implicitly used multisets even before mathematics emerged.

Practical needs for this structure have caused multisets to be rediscovered several times, appearing in literature under different names.[5]: 323 For instance, they were important in early AI languages, such as QA4, where they were referred to as bags, a term attributed to Peter Deutsch.[6] A multiset has also been called an aggregate, heap, bunch, sample, weighted set, occurrence set, and fireset (finitely repeated element set).[5]: 320[7]

Although multisets were used implicitly from ancient times, their explicit exploration happened much later. The first known study of multisets is attributed to the Indian mathematician Bhāskarāchārya circa 1150, who described permutations of multisets.[3]: 694 The work of Marius Nizolius (1498–1576) contains another early reference to the concept of multisets.[8] Athanasius Kircher found the number of multiset permutations when one element can be repeated.[9] Jean Prestet published a general rule for multiset permutations in 1675.[10] John Wallis explained this rule in more detail in 1685.[11]

Multisets appeared explicitly in the work of Richard Dedekind.[12][13]

Other mathematicians formalized multisets and began to study them as precise mathematical structures in the 20th century.
For example, Hassler Whitney (1933) described generalized sets ("sets" whose characteristic functions may take any integer value: positive, negative or zero).[5]: 326[14]: 405 Monro (1987) investigated the category Mul of multisets and their morphisms, defining a multiset as a set with an equivalence relation between elements "of the same sort", and a morphism between multisets as a function that respects sorts. He also introduced a multinumber: a function f(x) from a multiset to the natural numbers, giving the multiplicity of element x in the multiset. Monro argued that the concepts of multiset and multinumber are often mixed indiscriminately, though both are useful.[5]: 327–328[15]

One of the simplest and most natural examples is the multiset of prime factors of a natural number n. Here the underlying set of elements is the set of prime factors of n. For example, the number 120 has the prime factorization {\displaystyle 120=2^{3}3^{1}5^{1},} which gives the multiset {2, 2, 2, 3, 5}.

A related example is the multiset of solutions of an algebraic equation. A quadratic equation, for example, has two solutions. However, in some cases they are both the same number. Thus the multiset of solutions of the equation could be {3, 5}, or it could be {4, 4}. In the latter case it has a solution of multiplicity 2. More generally, the fundamental theorem of algebra asserts that the complex solutions of a polynomial equation of degree d always form a multiset of cardinality d.

A special case of the above is the multiset of eigenvalues of a matrix, whose multiplicity is usually defined as their multiplicity as roots of the characteristic polynomial. However, two other multiplicities are naturally defined for eigenvalues: their multiplicities as roots of the minimal polynomial, and the geometric multiplicity, which is defined as the dimension of the kernel of A − λI (where λ is an eigenvalue of the matrix A). These three multiplicities define three multisets of eigenvalues, which may all be different: Let A be an n×n matrix in Jordan normal form that has a single eigenvalue. Its multiplicity is n, its multiplicity as a root of the minimal polynomial is the size of the largest Jordan block, and its geometric multiplicity is the number of Jordan blocks.

A multiset may be formally defined as an ordered pair (U, m) where U is a set called a universe or the underlying set, and {\displaystyle m\colon U\to \mathbb {Z} _{\geq 0}} is a function from U to the nonnegative integers. The value m(a) for an element a ∈ U is called the multiplicity of a in the multiset and is interpreted as the number of occurrences of a in the multiset. The support of a multiset is the subset of U formed by the elements a ∈ U such that m(a) > 0. A finite multiset is a multiset with a finite support.

Most authors define multisets as finite multisets. This is the case in this article, where, unless otherwise stated, all multisets are finite multisets. Some authors[who?] define multisets with the additional constraint that m(a) > 0 for every a, or, equivalently, that the support equals the underlying set. Multisets with infinite multiplicities have also been studied;[16] they are not considered in this article.
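For readers who like executable illustrations, Python's collections.Counter behaves as a finite multiset whose multiplicity function is the mapping from elements to counts. A rough sketch (ours, with illustrative names) reproducing the multiplicity, support, cardinality, and prime-factor examples above, plus the multiset operations the article discusses next:

    from collections import Counter

    # The multiset {a, a, b, b, b, c}: Counter maps each element to its multiplicity.
    m = Counter({"a": 2, "b": 3, "c": 1})
    assert m["a"] == 2                       # multiplicity of a
    support = {x for x, k in m.items() if k > 0}
    assert support == {"a", "b", "c"}        # the support
    assert sum(m.values()) == 6              # cardinality = sum of multiplicities

    # Order of listing does not matter: {a, a, b} and {a, b, a} are equal.
    assert Counter("aab") == Counter("aba")

    # Set-like operations extend by multiplicity: + adds, & takes minima, | maxima.
    assert Counter("aab") + Counter("ab") == Counter("aaabb")
    assert Counter("aab") & Counter("ab") == Counter("ab")
    assert Counter("aab") | Counter("ab") == Counter("aab")

    def prime_factor_multiset(n: int) -> Counter:
        """Multiset of prime factors of n, e.g. 120 -> {2, 2, 2, 3, 5}."""
        factors, p = Counter(), 2
        while p * p <= n:
            while n % p == 0:
                factors[p] += 1
                n //= p
            p += 1
        if n > 1:
            factors[n] += 1
        return factors

    assert prime_factor_multiset(120) == Counter({2: 3, 3: 1, 5: 1})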
Some authors[who?] define a multiset in terms of a finite index set I and a function {\displaystyle f\colon I\rightarrow U} where the multiplicity of an element a ∈ U is {\displaystyle |f^{-1}(a)|}, the number of elements of I that are mapped to a by f.

Multisets may be represented as sets, with some elements repeated. For example, the multiset with support {a, b} and multiplicity function such that m(a) = 2 and m(b) = 1 can be represented as {a, a, b}. A more compact notation, useful in the case of high multiplicities, is {(a, 2), (b, 1)} for the same multiset. If {\displaystyle A=\{a_{1},\ldots ,a_{n}\},} a multiset with support included in A is often represented as {\displaystyle a_{1}^{m(a_{1})}\cdots a_{n}^{m(a_{n})},} to which the computation rules of indeterminates can be applied; that is, exponents 1 and factors with exponent 0 can be removed, and the multiset does not depend on the order of the factors. This allows extending the notation to infinite underlying sets as {\displaystyle \prod _{a\in U}a^{m(a)}.} An advantage of this notation is that it can be used without knowing the exact support. For example, the prime factors of a natural number n form a multiset such that {\displaystyle n=\prod _{p\;{\text{prime}}}p^{m(p)}=2^{m(2)}3^{m(3)}5^{m(5)}\cdots .}

The finite subsets of a set U are exactly the multisets with underlying set U such that m(a) ≤ 1 for every a ∈ U.

Elements of a multiset are generally taken in a fixed set U, sometimes called a universe, which is often the set of natural numbers. An element of U that does not belong to a given multiset is said to have a multiplicity 0 in this multiset. This extends the multiplicity function of the multiset to a function from U to the set {\displaystyle \mathbb {N} } of non-negative integers. This defines a one-to-one correspondence between these functions and the multisets that have their elements in U.

This extended multiplicity function is commonly called simply the multiplicity function, and suffices for defining multisets when the universe containing the elements has been fixed. This multiplicity function is a generalization of the indicator function of a subset, and shares some properties with it.

The support of a multiset A in a universe U is the underlying set of the multiset. Using the multiplicity function m, it is characterized as {\displaystyle \operatorname {Supp} (A):=\{x\in U\mid m_{A}(x)>0\}.}

A multiset is finite if its support is finite, or, equivalently, if its cardinality {\displaystyle |A|=\sum _{x\in \operatorname {Supp} (A)}m_{A}(x)=\sum _{x\in U}m_{A}(x)} is finite. The empty multiset is the unique multiset with an empty support (underlying set), and thus a cardinality 0.

The usual operations of sets may be extended to multisets by using the multiplicity function, in a similar way to using the indicator function for subsets. In the following, A and B are multisets in a given universe U, with multiplicity functions m_A and m_B.

Two multisets are disjoint if their supports are disjoint sets.
This is equivalent to saying that their intersection is the empty multiset or that their sum equals their union. There is an inclusion–exclusion principle for finite multisets (similar to the one for sets), stating that a finite union of finite multisets is the difference of two sums of multisets: in the first sum we consider all possible intersections of an odd number of the given multisets, while in the second sum we consider all possible intersections of an even number of the given multisets.[citation needed]

The number of multisets of cardinality k, with elements taken from a finite set of cardinality n, is sometimes called the multiset coefficient or multiset number. This number is written by some authors as {\displaystyle \textstyle \left(\!\!{n \choose k}\!\!\right)}, a notation that is meant to resemble that of binomial coefficients; it is used for instance in (Stanley, 1997), and could be pronounced "n multichoose k" to resemble "n choose k" for {\displaystyle {\tbinom {n}{k}}.} Like the binomial distribution that involves binomial coefficients, there is a negative binomial distribution in which the multiset coefficients occur. Multiset coefficients should not be confused with the unrelated multinomial coefficients that occur in the multinomial theorem.

The value of multiset coefficients can be given explicitly as {\displaystyle \left(\!\!{n \choose k}\!\!\right)={n+k-1 \choose k}={\frac {(n+k-1)!}{k!\,(n-1)!}}={n(n+1)(n+2)\cdots (n+k-1) \over k!},} where the second expression is as a binomial coefficient;[a] many authors in fact avoid separate notation and just write binomial coefficients. So, the number of such multisets is the same as the number of subsets of cardinality k of a set of cardinality n + k − 1. The analogy with binomial coefficients can be stressed by writing the numerator in the above expression as a rising factorial power {\displaystyle \left(\!\!{n \choose k}\!\!\right)={n^{\overline {k}} \over k!},} to match the expression of binomial coefficients using a falling factorial power: {\displaystyle {n \choose k}={n^{\underline {k}} \over k!}.}

For example, there are 4 multisets of cardinality 3 with elements taken from the set {1, 2} of cardinality 2 (n = 2, k = 3), namely {1, 1, 1}, {1, 1, 2}, {1, 2, 2}, {2, 2, 2}. There are also 4 subsets of cardinality 3 in the set {1, 2, 3, 4} of cardinality 4 (n + k − 1), namely {1, 2, 3}, {1, 2, 4}, {1, 3, 4}, {2, 3, 4}.

One simple way to prove the equality of multiset coefficients and binomial coefficients given above involves representing multisets in the following way. First, consider the notation for multisets that would represent {a, a, a, a, a, a, b, b, c, c, c, d, d, d, d, d, d, d} (6 as, 2 bs, 3 cs, 7 ds) in this form:

• • • • • • | • • | • • • | • • • • • • •

This is a multiset of cardinality k = 18 made of elements of a set of cardinality n = 4. The number of characters, including both dots and vertical lines, used in this notation is 18 + 4 − 1. The number of vertical lines is 4 − 1. The number of multisets of cardinality 18 is then the number of ways to arrange the 4 − 1 vertical lines among the 18 + 4 − 1 characters, and is thus the number of subsets of cardinality 4 − 1 of a set of cardinality 18 + 4 − 1. Equivalently, it is the number of ways to arrange the 18 dots among the 18 + 4 − 1 characters, which is the number of subsets of cardinality 18 of a set of cardinality 18 + 4 − 1.
This is {\displaystyle {4+18-1 \choose 4-1}={4+18-1 \choose 18}=1330,} which is therefore the value of the multiset coefficient, with the following equivalent forms: {\displaystyle {\begin{aligned}\left(\!\!{4 \choose 18}\!\!\right)&={21 \choose 18}={\frac {21!}{18!\,3!}}={21 \choose 3},\\[1ex]&={\frac {{\color {red}{\mathfrak {4\cdot 5\cdot 6\cdot 7\cdot 8\cdot 9\cdot 10\cdot 11\cdot 12\cdot 13\cdot 14\cdot 15\cdot 16\cdot 17\cdot 18}}}\cdot \mathbf {19\cdot 20\cdot 21} }{\mathbf {1\cdot 2\cdot 3} \cdot {\color {red}{\mathfrak {4\cdot 5\cdot 6\cdot 7\cdot 8\cdot 9\cdot 10\cdot 11\cdot 12\cdot 13\cdot 14\cdot 15\cdot 16\cdot 17\cdot 18}}}}},\\[1ex]&={\frac {1\cdot 2\cdot 3\cdot 4\cdot 5\cdots 16\cdot 17\cdot 18\;\mathbf {\cdot \;19\cdot 20\cdot 21} }{\,1\cdot 2\cdot 3\cdot 4\cdot 5\cdots 16\cdot 17\cdot 18\;\mathbf {\cdot \;1\cdot 2\cdot 3\quad } }},\\[1ex]&={\frac {19\cdot 20\cdot 21}{1\cdot 2\cdot 3}}.\end{aligned}}}

From the relation between binomial coefficients and multiset coefficients, it follows that the number of multisets of cardinality k in a set of cardinality n can be written {\displaystyle \left(\!\!{n \choose k}\!\!\right)=(-1)^{k}{-n \choose k}.} Additionally, {\displaystyle \left(\!\!{n \choose k}\!\!\right)=\left(\!\!{k+1 \choose n-1}\!\!\right).}

A recurrence relation for multiset coefficients may be given as {\displaystyle \left(\!\!{n \choose k}\!\!\right)=\left(\!\!{n \choose k-1}\!\!\right)+\left(\!\!{n-1 \choose k}\!\!\right)\quad {\mbox{for }}n,k>0} with {\displaystyle \left(\!\!{n \choose 0}\!\!\right)=1,\quad n\in \mathbb {N} ,\quad {\mbox{and}}\quad \left(\!\!{0 \choose k}\!\!\right)=0,\quad k>0.}

The above recurrence may be interpreted as follows. Let {\displaystyle [n]:=\{1,\dots ,n\}} be the source set. There is always exactly one (empty) multiset of size 0, and if n = 0 there are no larger multisets, which gives the initial conditions.

Now, consider the case in which n, k > 0. A multiset of cardinality k with elements from [n] might or might not contain any instance of the final element n. If it does appear, then by removing n once, one is left with a multiset of cardinality k − 1 of elements from [n], and every such multiset can arise, which gives a total of {\displaystyle \left(\!\!{n \choose k-1}\!\!\right)} possibilities. If n does not appear, then our original multiset is equal to a multiset of cardinality k with elements from [n − 1], of which there are {\displaystyle \left(\!\!{n-1 \choose k}\!\!\right).} Thus, {\displaystyle \left(\!\!{n \choose k}\!\!\right)=\left(\!\!{n \choose k-1}\!\!\right)+\left(\!\!{n-1 \choose k}\!\!\right).}

The generating function of the multiset coefficients is very simple, being {\displaystyle \sum _{d=0}^{\infty }\left(\!\!{n \choose d}\!\!\right)t^{d}={\frac {1}{(1-t)^{n}}}.} As multisets are in one-to-one correspondence with monomials, {\displaystyle \left(\!\!{n \choose d}\!\!\right)} is also the number of monomials of degree d in n indeterminates.
Thus, the above series is also theHilbert seriesof thepolynomial ringk[x1,…,xn].{\displaystyle k[x_{1},\ldots ,x_{n}].} As((nd)){\displaystyle \left(\!\!{n \choose d}\!\!\right)}is a polynomial inn, it and the generating function are well defined for anycomplexvalue ofn. The multiplicative formula allows the definition of multiset coefficients to be extended by replacingnby an arbitrary numberα(negative,real, or complex):((αk))=αk¯k!=α(α+1)(α+2)⋯(α+k−1)k(k−1)(k−2)⋯1fork∈Nand arbitraryα.{\displaystyle \left(\!\!{\alpha \choose k}\!\!\right)={\frac {\alpha ^{\overline {k}}}{k!}}={\frac {\alpha (\alpha +1)(\alpha +2)\cdots (\alpha +k-1)}{k(k-1)(k-2)\cdots 1}}\quad {\text{for }}k\in \mathbb {N} {\text{ and arbitrary }}\alpha .} With this definition one has a generalization of the negative binomial formula (with one of the variables set to 1), which justifies calling the((αk)){\displaystyle \left(\!\!{\alpha \choose k}\!\!\right)}negative binomial coefficients:(1−X)−α=∑k=0∞((αk))Xk.{\displaystyle (1-X)^{-\alpha }=\sum _{k=0}^{\infty }\left(\!\!{\alpha \choose k}\!\!\right)X^{k}.} ThisTaylor seriesformula is valid for all complex numbersαandXwith|X| < 1. It can also be interpreted as anidentityofformal power seriesinX, where it actually can serve as definition of arbitrary powers of series with constant coefficient equal to 1; the point is that with this definition all identities hold that one expects forexponentiation, notably (1−X)−α(1−X)−β=(1−X)−(α+β)and((1−X)−α)−β=(1−X)−(−αβ),{\displaystyle (1-X)^{-\alpha }(1-X)^{-\beta }=(1-X)^{-(\alpha +\beta )}\quad {\text{and}}\quad ((1-X)^{-\alpha })^{-\beta }=(1-X)^{-(-\alpha \beta )},}and formulas such as these can be used to prove identities for the multiset coefficients. Ifαis a nonpositive integern, then all terms withk> −nare zero, and the infinite series becomes a finite sum. However, for other values ofα, including positive integers andrational numbers, the series is infinite. Multisets have various applications.[7]They are becoming fundamental incombinatorics.[17][18][19][20]Multisets have become an important tool in the theory ofrelational databases, which often uses the synonymbag.[21][22][23]For instance, multisets are often used to implement relations in database systems. In particular, a table (without a primary key) works as a multiset, because it can have multiple identical records. Similarly,SQLoperates on multisets and returns identical records. For instance, consider "SELECT name from Student". In the case that there are multiple records with name "Sara" in the student table, all of them are shown. That means the result of an SQL query is a multiset; if the result were instead a set, the repetitive records in the result set would have been eliminated. Another application of multisets is in modelingmultigraphs. In multigraphs there can be multiple edges between any two givenvertices. As such, the entity that specifies the edges is a multiset, and not a set. There are also other applications. For instance,Richard Radoused multisets as a device to investigate the properties of families of sets. He wrote, "The notion of a set takes no account of multiple occurrence of any one of its members, and yet it is just this kind of information that is frequently of importance. We need only think of the set of roots of a polynomialf(x) or thespectrumof alinear operator."[5]: 328–329 Different generalizations of multisets have been introduced, studied and applied to solving problems.
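As a closing numerical sanity check of the counting identities discussed above, the following sketch (ours, standard library only) compares the closed form, the recurrence, and direct enumeration:

    from math import comb
    from functools import lru_cache
    from itertools import combinations_with_replacement

    def multichoose(n: int, k: int) -> int:
        """Multiset coefficient via the closed form ((n k)) = C(n + k - 1, k)."""
        return comb(n + k - 1, k)

    @lru_cache(maxsize=None)
    def multichoose_rec(n: int, k: int) -> int:
        """Multiset coefficient via the recurrence ((n k)) = ((n k-1)) + ((n-1 k))."""
        if k == 0:
            return 1          # one empty multiset
        if n == 0:
            return 0          # no nonempty multisets from an empty set
        return multichoose_rec(n, k - 1) + multichoose_rec(n - 1, k)

    # The worked example: 4 multisets of cardinality 3 from a 2-element set.
    assert multichoose(2, 3) == 4
    assert len(list(combinations_with_replacement("ab", 3))) == 4

    # The stars-and-bars example: ((4 18)) = C(21, 3) = 1330.
    assert multichoose(4, 18) == comb(21, 3) == 1330

    # Closed form and recurrence agree on a small range.
    assert all(multichoose(n, k) == multichoose_rec(n, k)
               for n in range(1, 8) for k in range(0, 8))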
https://en.wikipedia.org/wiki/Multisets
Quantum mechanicsis the fundamental physicaltheorythat describes the behavior of matter and of light; its unusual characteristics typically occur at and below the scale ofatoms.[2]: 1.1It is the foundation of allquantum physics, which includesquantum chemistry,quantum field theory,quantum technology, andquantum information science. Quantum mechanics can describe many systems thatclassical physicscannot. Classical physics can describe many aspects of nature at an ordinary (macroscopicand(optical) microscopic) scale, but is not sufficient for describing them at very smallsubmicroscopic(atomic andsubatomic) scales. Classical mechanics can be derived from quantum mechanics as an approximation that is valid at ordinary scales.[3] Quantum systems haveboundstates that arequantizedtodiscrete valuesofenergy,momentum,angular momentum, and other quantities, in contrast to classical systems where these quantities can be measured continuously. Measurements of quantum systems show characteristics of bothparticlesandwaves(wave–particle duality), and there are limits to how accurately the value of a physical quantity can be predicted prior to its measurement, given a complete set of initial conditions (theuncertainty principle). Quantum mechanicsarose graduallyfrom theories to explain observations that could not be reconciled with classical physics, such asMax Planck's solution in 1900 to theblack-body radiationproblem, and the correspondence between energy and frequency inAlbert Einstein's1905 paper, which explained thephotoelectric effect. These early attempts to understand microscopic phenomena, now known as the "old quantum theory", led to the full development of quantum mechanics in the mid-1920s byNiels Bohr,Erwin Schrödinger,Werner Heisenberg,Max Born,Paul Diracand others. The modern theory is formulated in variousspecially developed mathematical formalisms. In one of them, a mathematical entity called thewave functionprovides information, in the form ofprobability amplitudes, about what measurements of a particle's energy, momentum, and other physical properties may yield. Quantum mechanics allows the calculation of properties and behaviour ofphysical systems. It is typically applied to microscopic systems:molecules,atomsandsubatomic particles. It has been demonstrated to hold for complex molecules with thousands of atoms,[4]but its application to human beings raises philosophical problems, such asWigner's friend, and its application to the universe as a whole remains speculative.[5]Predictions of quantum mechanics have been verified experimentally to an extremely high degree ofaccuracy. For example, the refinement of quantum mechanics for the interaction of light and matter, known asquantum electrodynamics(QED), has beenshown to agree with experimentto within 1 part in 1012when predicting the magnetic properties of an electron.[6] A fundamental feature of the theory is that it usually cannot predict with certainty what will happen, but only give probabilities. Mathematically, a probability is found by taking the square of the absolute value of acomplex number, known as a probability amplitude. This is known as theBorn rule, named after physicistMax Born. For example, a quantum particle like anelectroncan be described by a wave function, which associates to each point in space a probability amplitude. Applying the Born rule to these amplitudes gives aprobability density functionfor the position that the electron will be found to have when an experiment is performed to measure it. 
This is the best the theory can do; it cannot say for certain where the electron will be found. TheSchrödinger equationrelates the collection of probability amplitudes that pertain to one moment of time to the collection of probability amplitudes that pertain to another.[7]: 67–87 One consequence of the mathematical rules of quantum mechanics is a tradeoff in predictability between measurable quantities. The most famous form of thisuncertainty principlesays that no matter how a quantum particle is prepared or how carefully experiments upon it are arranged, it is impossible to have a precise prediction for a measurement of its position and also at the same time for a measurement of itsmomentum.[7]: 427–435 Another consequence of the mathematical rules of quantum mechanics is the phenomenon ofquantum interference, which is often illustrated with thedouble-slit experiment. In the basic version of this experiment, acoherent light source, such as alaserbeam, illuminates a plate pierced by two parallel slits, and the light passing through the slits is observed on a screen behind the plate.[8]: 102–111[2]: 1.1–1.8The wave nature of light causes the light waves passing through the two slits tointerfere, producing bright and dark bands on the screen – a result that would not be expected if light consisted of classical particles.[8]However, the light is always found to be absorbed at the screen at discrete points, as individual particles rather than waves; the interference pattern appears via the varying density of these particle hits on the screen. Furthermore, versions of the experiment that include detectors at the slits find that each detectedphotonpasses through one slit (as would a classical particle), and not through both slits (as would a wave).[8]: 109[9][10]However,such experimentsdemonstrate that particles do not form the interference pattern if one detects which slit they pass through. This behavior is known aswave–particle duality. In addition to light,electrons,atoms, andmoleculesare all found to exhibit the same dual behavior when fired towards a double slit.[2] Another non-classical phenomenon predicted by quantum mechanics isquantum tunnelling: a particle that goes up against apotential barriercan cross it, even if its kinetic energy is smaller than the maximum of the potential.[11]In classical mechanics this particle would be trapped. Quantum tunnelling has several important consequences, enablingradioactive decay,nuclear fusionin stars, and applications such asscanning tunnelling microscopy,tunnel diodeandtunnel field-effect transistor.[12][13] When quantum systems interact, the result can be the creation ofquantum entanglement: their properties become so intertwined that a description of the whole solely in terms of the individual parts is no longer possible. Erwin Schrödinger called entanglement "...thecharacteristic trait of quantum mechanics, the one that enforces its entire departure from classical lines of thought".[14]Quantum entanglement enablesquantum computingand is part of quantum communication protocols, such asquantum key distributionandsuperdense coding.[15]Contrary to popular misconception, entanglement does not allow sending signalsfaster than light, as demonstrated by theno-communication theorem.[15] Another possibility opened by entanglement is testing for "hidden variables", hypothetical properties more fundamental than the quantities addressed in quantum theory itself, knowledge of which would allow more exact predictions than quantum theory provides. 
A collection of results, most significantly Bell's theorem, has demonstrated that broad classes of such hidden-variable theories are in fact incompatible with quantum physics. According to Bell's theorem, if nature actually operates in accord with any theory of local hidden variables, then the results of a Bell test will be constrained in a particular, quantifiable way. Many Bell tests have been performed and they have shown results incompatible with the constraints imposed by local hidden variables.[16][17]

It is not possible to present these concepts in more than a superficial way without introducing the mathematics involved; understanding quantum mechanics requires not only manipulating complex numbers, but also linear algebra, differential equations, group theory, and other more advanced subjects.[18][19] Accordingly, this article will present a mathematical formulation of quantum mechanics and survey its application to some useful and oft-studied examples.

In the mathematically rigorous formulation of quantum mechanics, the state of a quantum mechanical system is a vector ψ belonging to a (separable) complex Hilbert space {\displaystyle {\mathcal {H}}}. This vector is postulated to be normalized under the Hilbert space inner product, that is, it obeys {\displaystyle \langle \psi ,\psi \rangle =1}, and it is well-defined up to a complex number of modulus 1 (the global phase), that is, ψ and {\displaystyle e^{i\alpha }\psi } represent the same physical system. In other words, the possible states are points in the projective space of a Hilbert space, usually called the complex projective space. The exact nature of this Hilbert space is dependent on the system – for example, for describing position and momentum the Hilbert space is the space of complex-valued square-integrable functions {\displaystyle L^{2}(\mathbb {R} )}, while the Hilbert space for the spin of a single proton is simply the space of two-dimensional complex vectors {\displaystyle \mathbb {C} ^{2}} with the usual inner product.

Physical quantities of interest – position, momentum, energy, spin – are represented by observables, which are Hermitian (more precisely, self-adjoint) linear operators acting on the Hilbert space. A quantum state can be an eigenvector of an observable, in which case it is called an eigenstate, and the associated eigenvalue corresponds to the value of the observable in that eigenstate. More generally, a quantum state will be a linear combination of the eigenstates, known as a quantum superposition. When an observable is measured, the result will be one of its eigenvalues with probability given by the Born rule: in the simplest case the eigenvalue λ is non-degenerate and the probability is given by {\displaystyle |\langle {\vec {\lambda }},\psi \rangle |^{2}}, where {\displaystyle {\vec {\lambda }}} is its associated unit-length eigenvector. More generally, the eigenvalue is degenerate and the probability is given by {\displaystyle \langle \psi ,P_{\lambda }\psi \rangle }, where {\displaystyle P_{\lambda }} is the projector onto its associated eigenspace. In the continuous case, these formulas give instead the probability density. After the measurement, if result λ was obtained, the quantum state is postulated to collapse to {\displaystyle {\vec {\lambda }}}, in the non-degenerate case, or to {\textstyle P_{\lambda }\psi {\big /}\!{\sqrt {\langle \psi ,P_{\lambda }\psi \rangle }}}, in the general case.
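The measurement postulate just stated is easy to experiment with numerically. A minimal sketch, assuming NumPy; the observable and state below are arbitrary examples of ours, not tied to any particular physical system:

    import numpy as np

    # An observable: a Hermitian operator on a 2-dimensional Hilbert space.
    A = np.array([[1.0, 1.0 - 1.0j],
                  [1.0 + 1.0j, -1.0]])
    assert np.allclose(A, A.conj().T)  # Hermitian

    # A normalized state vector psi.
    psi = np.array([1.0, 1.0j]) / np.sqrt(2)
    assert np.isclose(np.vdot(psi, psi).real, 1.0)

    # Eigenvalues are the possible outcomes; the Born rule gives the
    # probability |<lambda, psi>|^2 for each (non-degenerate) eigenvector.
    eigvals, eigvecs = np.linalg.eigh(A)
    probs = np.abs(eigvecs.conj().T @ psi) ** 2
    assert np.isclose(probs.sum(), 1.0)  # probabilities sum to 1

    # Collapse: after obtaining outcome eigvals[i], the state becomes the
    # normalized projection of psi onto the corresponding eigenvector.
    i = int(np.argmax(probs))
    post = eigvecs[:, i] * np.vdot(eigvecs[:, i], psi)
    post /= np.linalg.norm(post)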
Theprobabilisticnature of quantum mechanics thus stems from the act of measurement. This is one of the most difficult aspects of quantum systems to understand. It was the central topic in the famousBohr–Einstein debates, in which the two scientists attempted to clarify these fundamental principles by way ofthought experiments. In the decades after the formulation of quantum mechanics, the question of what constitutes a "measurement" has been extensively studied. Newerinterpretations of quantum mechanicshave been formulated that do away with the concept of "wave function collapse" (see, for example, themany-worlds interpretation). The basic idea is that when a quantum system interacts with a measuring apparatus, their respective wave functions becomeentangledso that the original quantum system ceases to exist as an independent entity (seeMeasurement in quantum mechanics[20]). The time evolution of a quantum state is described by the Schrödinger equation:iℏ∂∂tψ(t)=Hψ(t).{\displaystyle i\hbar {\frac {\partial }{\partial t}}\psi (t)=H\psi (t).}HereH{\displaystyle H}denotes theHamiltonian, the observable corresponding to thetotal energyof the system, andℏ{\displaystyle \hbar }is the reducedPlanck constant. The constantiℏ{\displaystyle i\hbar }is introduced so that the Hamiltonian is reduced to theclassical Hamiltonianin cases where the quantum system can be approximated by a classical system; the ability to make such an approximation in certain limits is called thecorrespondence principle. The solution of this differential equation is given byψ(t)=e−iHt/ℏψ(0).{\displaystyle \psi (t)=e^{-iHt/\hbar }\psi (0).}The operatorU(t)=e−iHt/ℏ{\displaystyle U(t)=e^{-iHt/\hbar }}is known as the time-evolution operator, and has the crucial property that it isunitary. This time evolution isdeterministicin the sense that – given an initial quantum stateψ(0){\displaystyle \psi (0)}– it makes a definite prediction of what the quantum stateψ(t){\displaystyle \psi (t)}will be at any later time.[21] Some wave functions produce probability distributions that are independent of time, such aseigenstatesof the Hamiltonian.[7]: 133–137Many systems that are treated dynamically in classical mechanics are described by such "static" wave functions. For example, a single electron in an unexcited atom is pictured classically as a particle moving in a circular trajectory around theatomic nucleus, whereas in quantum mechanics, it is described by a static wave function surrounding the nucleus. For example, the electron wave function for an unexcited hydrogen atom is a spherically symmetric function known as ansorbital(Fig. 1). Analytic solutions of the Schrödinger equation are known forvery few relatively simple model Hamiltoniansincluding thequantum harmonic oscillator, theparticle in a box, thedihydrogen cation, and thehydrogen atom. Even theheliumatom – which contains just two electrons – has defied all attempts at a fully analytic treatment, admitting no solution inclosed form.[22][23][24] However, there are techniques for finding approximate solutions. One method, calledperturbation theory, uses the analytic result for a simple quantum mechanical model to create a result for a related but more complicated model by (for example) the addition of a weakpotential energy.[7]: 793Another approximation method applies to systems for which quantum mechanics produces only small deviations from classical behavior. 
These deviations can then be computed based on the classical motion.[7]: 849 One consequence of the basic quantum formalism is the uncertainty principle. In its most familiar form, this states that no preparation of a quantum particle can imply simultaneously precise predictions both for a measurement of its position and for a measurement of its momentum.[25][26]Both position and momentum are observables, meaning that they are represented byHermitian operators. The position operatorX^{\displaystyle {\hat {X}}}and momentum operatorP^{\displaystyle {\hat {P}}}do not commute, but rather satisfy thecanonical commutation relation:[X^,P^]=iℏ.{\displaystyle [{\hat {X}},{\hat {P}}]=i\hbar .}Given a quantum state, the Born rule lets us compute expectation values for bothX{\displaystyle X}andP{\displaystyle P}, and moreover for powers of them. Defining the uncertainty for an observable by astandard deviation, we haveσX=⟨X2⟩−⟨X⟩2,{\displaystyle \sigma _{X}={\textstyle {\sqrt {\left\langle X^{2}\right\rangle -\left\langle X\right\rangle ^{2}}}},}and likewise for the momentum:σP=⟨P2⟩−⟨P⟩2.{\displaystyle \sigma _{P}={\sqrt {\left\langle P^{2}\right\rangle -\left\langle P\right\rangle ^{2}}}.}The uncertainty principle states thatσXσP≥ℏ2.{\displaystyle \sigma _{X}\sigma _{P}\geq {\frac {\hbar }{2}}.}Either standard deviation can in principle be made arbitrarily small, but not both simultaneously.[27]This inequality generalizes to arbitrary pairs of self-adjoint operatorsA{\displaystyle A}andB{\displaystyle B}. Thecommutatorof these two operators is[A,B]=AB−BA,{\displaystyle [A,B]=AB-BA,}and this provides the lower bound on the product of standard deviations:σAσB≥12|⟨[A,B]⟩|.{\displaystyle \sigma _{A}\sigma _{B}\geq {\tfrac {1}{2}}\left|{\bigl \langle }[A,B]{\bigr \rangle }\right|.} Another consequence of the canonical commutation relation is that the position and momentum operators areFourier transformsof each other, so that a description of an object according to its momentum is the Fourier transform of its description according to its position. The fact that dependence in momentum is the Fourier transform of the dependence in position means that the momentum operator is equivalent (up to ani/ℏ{\displaystyle i/\hbar }factor) to taking the derivative according to the position, since in Fourier analysisdifferentiation corresponds to multiplication in the dual space. This is why in quantum equations in position space, the momentumpi{\displaystyle p_{i}}is replaced by−iℏ∂∂x{\displaystyle -i\hbar {\frac {\partial }{\partial x}}}, and in particular in thenon-relativistic Schrödinger equation in position spacethe momentum-squared term is replaced with a Laplacian times−ℏ2{\displaystyle -\hbar ^{2}}.[25] When two different quantum systems are considered together, the Hilbert space of the combined system is thetensor productof the Hilbert spaces of the two components. For example, letAandBbe two quantum systems, with Hilbert spacesHA{\displaystyle {\mathcal {H}}_{A}}andHB{\displaystyle {\mathcal {H}}_{B}}, respectively. 
The Hilbert space of the composite system is thenHAB=HA⊗HB.{\displaystyle {\mathcal {H}}_{AB}={\mathcal {H}}_{A}\otimes {\mathcal {H}}_{B}.}If the state for the first system is the vectorψA{\displaystyle \psi _{A}}and the state for the second system isψB{\displaystyle \psi _{B}}, then the state of the composite system isψA⊗ψB.{\displaystyle \psi _{A}\otimes \psi _{B}.}Not all states in the joint Hilbert spaceHAB{\displaystyle {\mathcal {H}}_{AB}}can be written in this form, however, because the superposition principle implies that linear combinations of these "separable" or "product states" are also valid. For example, ifψA{\displaystyle \psi _{A}}andϕA{\displaystyle \phi _{A}}are both possible states for systemA{\displaystyle A}, and likewiseψB{\displaystyle \psi _{B}}andϕB{\displaystyle \phi _{B}}are both possible states for systemB{\displaystyle B}, then12(ψA⊗ψB+ϕA⊗ϕB){\displaystyle {\tfrac {1}{\sqrt {2}}}\left(\psi _{A}\otimes \psi _{B}+\phi _{A}\otimes \phi _{B}\right)}is a valid joint state that is not separable. States that are not separable are calledentangled.[28][29] If the state for a composite system is entangled, it is impossible to describe either component systemAor systemBby a state vector. One can instead definereduced density matricesthat describe the statistics that can be obtained by making measurements on either component system alone. This necessarily causes a loss of information, though: knowing the reduced density matrices of the individual systems is not enough to reconstruct the state of the composite system.[28][29]Just as density matrices specify the state of a subsystem of a larger system, analogously,positive operator-valued measures(POVMs) describe the effect on a subsystem of a measurement performed on a larger system. POVMs are extensively used in quantum information theory.[28][30] As described above, entanglement is a key feature of models of measurement processes in which an apparatus becomes entangled with the system being measured. Systems interacting with the environment in which they reside generally become entangled with that environment, a phenomenon known asquantum decoherence. This can explain why, in practice, quantum effects are difficult to observe in systems larger than microscopic.[31] There are many mathematically equivalent formulations of quantum mechanics. One of the oldest and most common is the "transformation theory" proposed byPaul Dirac, which unifies and generalizes the two earliest formulations of quantum mechanics –matrix mechanics(invented byWerner Heisenberg) and wave mechanics (invented byErwin Schrödinger).[32]An alternative formulation of quantum mechanics isFeynman'spath integral formulation, in which a quantum-mechanical amplitude is considered as a sum over all possible classical and non-classical paths between the initial and final states. This is the quantum-mechanical counterpart of theaction principlein classical mechanics.[33] The HamiltonianH{\displaystyle H}is known as thegeneratorof time evolution, since it defines a unitary time-evolution operatorU(t)=e−iHt/ℏ{\displaystyle U(t)=e^{-iHt/\hbar }}for each value oft{\displaystyle t}. 
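The unitarity of U(t) is also easy to verify numerically. A quick sketch of ours, assuming NumPy and SciPy and setting ħ = 1 for convenience; the Hamiltonian here is just a random Hermitian matrix:

    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(0)
    hbar, t = 1.0, 0.7  # units chosen so that hbar = 1

    # A random Hermitian "Hamiltonian" on a 4-dimensional Hilbert space.
    M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
    H = (M + M.conj().T) / 2

    # Time-evolution operator U(t) = exp(-i H t / hbar).
    U = expm(-1j * H * t / hbar)

    # U is unitary: U† U = I, so evolution preserves the norm of any state.
    assert np.allclose(U.conj().T @ U, np.eye(4))
    psi0 = rng.normal(size=4) + 1j * rng.normal(size=4)
    psi0 /= np.linalg.norm(psi0)
    psi_t = U @ psi0
    assert np.isclose(np.linalg.norm(psi_t), 1.0)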
From this relation betweenU(t){\displaystyle U(t)}andH{\displaystyle H}, it follows that any observableA{\displaystyle A}that commutes withH{\displaystyle H}will beconserved: its expectation value will not change over time.[7]: 471This statement generalizes, as mathematically, any Hermitian operatorA{\displaystyle A}can generate a family of unitary operators parameterized by a variablet{\displaystyle t}. Under the evolution generated byA{\displaystyle A}, any observableB{\displaystyle B}that commutes withA{\displaystyle A}will be conserved. Moreover, ifB{\displaystyle B}is conserved by evolution underA{\displaystyle A}, thenA{\displaystyle A}is conserved under the evolution generated byB{\displaystyle B}. This implies a quantum version of the result proven byEmmy Noetherin classical (Lagrangian) mechanics: for everydifferentiablesymmetryof a Hamiltonian, there exists a correspondingconservation law. The simplest example of a quantum system with a position degree of freedom is a free particle in a single spatial dimension. A free particle is one which is not subject to external influences, so that its Hamiltonian consists only of its kinetic energy:H=12mP2=−ℏ22md2dx2.{\displaystyle H={\frac {1}{2m}}P^{2}=-{\frac {\hbar ^{2}}{2m}}{\frac {d^{2}}{dx^{2}}}.}The general solution of the Schrödinger equation is given byψ(x,t)=12π∫−∞∞ψ^(k,0)ei(kx−ℏk22mt)dk,{\displaystyle \psi (x,t)={\frac {1}{\sqrt {2\pi }}}\int _{-\infty }^{\infty }{\hat {\psi }}(k,0)e^{i(kx-{\frac {\hbar k^{2}}{2m}}t)}\mathrm {d} k,}which is a superposition of all possibleplane wavesei(kx−ℏk22mt){\displaystyle e^{i(kx-{\frac {\hbar k^{2}}{2m}}t)}}, which are eigenstates of the momentum operator with momentump=ℏk{\displaystyle p=\hbar k}. The coefficients of the superposition areψ^(k,0){\displaystyle {\hat {\psi }}(k,0)}, which is the Fourier transform of the initial quantum stateψ(x,0){\displaystyle \psi (x,0)}. It is not possible for the solution to be a single momentum eigenstate, or a single position eigenstate, as these are not normalizable quantum states.[note 1]Instead, we can consider a Gaussianwave packet:ψ(x,0)=1πa4e−x22a{\displaystyle \psi (x,0)={\frac {1}{\sqrt[{4}]{\pi a}}}e^{-{\frac {x^{2}}{2a}}}}which has Fourier transform, and therefore momentum distributionψ^(k,0)=aπ4e−ak22.{\displaystyle {\hat {\psi }}(k,0)={\sqrt[{4}]{\frac {a}{\pi }}}e^{-{\frac {ak^{2}}{2}}}.}We see that as we makea{\displaystyle a}smaller the spread in position gets smaller, but the spread in momentum gets larger. Conversely, by makinga{\displaystyle a}larger we make the spread in momentum smaller, but the spread in position gets larger. This illustrates the uncertainty principle. As we let the Gaussian wave packet evolve in time, we see that its center moves through space at a constant velocity (like a classical particle with no forces acting on it). However, the wave packet will also spread out as time progresses, which means that the position becomes more and more uncertain. The uncertainty in momentum, however, stays constant.[34] The particle in a one-dimensional potential energy box is the most mathematically simple example where restraints lead to the quantization of energy levels. 
The box is defined as having zero potential energy everywhereinsidea certain region, and therefore infinite potential energy everywhereoutsidethat region.[25]: 77–78For the one-dimensional case in thex{\displaystyle x}direction, the time-independent Schrödinger equation may be written−ℏ22md2ψdx2=Eψ.{\displaystyle -{\frac {\hbar ^{2}}{2m}}{\frac {d^{2}\psi }{dx^{2}}}=E\psi .} With the differential operator defined byp^x=−iℏddx{\displaystyle {\hat {p}}_{x}=-i\hbar {\frac {d}{dx}}}the previous equation is evocative of theclassic kinetic energy analogue,12mp^x2=E,{\displaystyle {\frac {1}{2m}}{\hat {p}}_{x}^{2}=E,}with stateψ{\displaystyle \psi }in this case having energyE{\displaystyle E}coincident with the kinetic energy of the particle. The general solutions of the Schrödinger equation for the particle in a box areψ(x)=Aeikx+Be−ikxE=ℏ2k22m{\displaystyle \psi (x)=Ae^{ikx}+Be^{-ikx}\qquad \qquad E={\frac {\hbar ^{2}k^{2}}{2m}}}or, fromEuler's formula,ψ(x)=Csin⁡(kx)+Dcos⁡(kx).{\displaystyle \psi (x)=C\sin(kx)+D\cos(kx).\!} The infinite potential walls of the box determine the values ofC,D,{\displaystyle C,D,}andk{\displaystyle k}atx=0{\displaystyle x=0}andx=L{\displaystyle x=L}whereψ{\displaystyle \psi }must be zero. Thus, atx=0{\displaystyle x=0},ψ(0)=0=Csin⁡(0)+Dcos⁡(0)=D{\displaystyle \psi (0)=0=C\sin(0)+D\cos(0)=D}andD=0{\displaystyle D=0}. Atx=L{\displaystyle x=L},ψ(L)=0=Csin⁡(kL),{\displaystyle \psi (L)=0=C\sin(kL),}in whichC{\displaystyle C}cannot be zero as this would conflict with the postulate thatψ{\displaystyle \psi }has norm 1. Therefore, sincesin⁡(kL)=0{\displaystyle \sin(kL)=0},kL{\displaystyle kL}must be an integer multiple ofπ{\displaystyle \pi },k=nπLn=1,2,3,….{\displaystyle k={\frac {n\pi }{L}}\qquad \qquad n=1,2,3,\ldots .} This constraint onk{\displaystyle k}implies a constraint on the energy levels, yieldingEn=ℏ2π2n22mL2=n2h28mL2.{\displaystyle E_{n}={\frac {\hbar ^{2}\pi ^{2}n^{2}}{2mL^{2}}}={\frac {n^{2}h^{2}}{8mL^{2}}}.} Afinite potential wellis the generalization of the infinite potential well problem to potential wells having finite depth. The finite potential well problem is mathematically more complicated than the infinite particle-in-a-box problem as the wave function is not pinned to zero at the walls of the well. Instead, the wave function must satisfy more complicated mathematical boundary conditions as it is nonzero in regions outside the well. Another related problem is that of therectangular potential barrier, which furnishes a model for thequantum tunnelingeffect that plays an important role in the performance of modern technologies such asflash memoryandscanning tunneling microscopy. As in the classical case, the potential for the quantum harmonic oscillator is given by[7]: 234V(x)=12mω2x2.{\displaystyle V(x)={\frac {1}{2}}m\omega ^{2}x^{2}.} This problem can either be treated by directly solving the Schrödinger equation, which is not trivial, or by using the more elegant "ladder method" first proposed by Paul Dirac. 
Theeigenstatesare given byψn(x)=12nn!⋅(mωπℏ)1/4⋅e−mωx22ℏ⋅Hn(mωℏx),{\displaystyle \psi _{n}(x)={\sqrt {\frac {1}{2^{n}\,n!}}}\cdot \left({\frac {m\omega }{\pi \hbar }}\right)^{1/4}\cdot e^{-{\frac {m\omega x^{2}}{2\hbar }}}\cdot H_{n}\left({\sqrt {\frac {m\omega }{\hbar }}}x\right),\qquad }n=0,1,2,….{\displaystyle n=0,1,2,\ldots .}whereHnare theHermite polynomialsHn(x)=(−1)nex2dndxn(e−x2),{\displaystyle H_{n}(x)=(-1)^{n}e^{x^{2}}{\frac {d^{n}}{dx^{n}}}\left(e^{-x^{2}}\right),}and the corresponding energy levels areEn=ℏω(n+12).{\displaystyle E_{n}=\hbar \omega \left(n+{1 \over 2}\right).} This is another example illustrating the discretization of energy forbound states. TheMach–Zehnder interferometer(MZI) illustrates the concepts of superposition and interference with linear algebra in dimension 2, rather than differential equations. It can be seen as a simplified version of the double-slit experiment, but it is of interest in its own right, for example in thedelayed choice quantum eraser, theElitzur–Vaidman bomb tester, and in studies of quantum entanglement.[35][36] We can model a photon going through the interferometer by considering that at each point it can be in a superposition of only two paths: the "lower" path which starts from the left, goes straight through both beam splitters, and ends at the top, and the "upper" path which starts from the bottom, goes straight through both beam splitters, and ends at the right. The quantum state of the photon is therefore a vectorψ∈C2{\displaystyle \psi \in \mathbb {C} ^{2}}that is a superposition of the "lower" pathψl=(10){\displaystyle \psi _{l}={\begin{pmatrix}1\\0\end{pmatrix}}}and the "upper" pathψu=(01){\displaystyle \psi _{u}={\begin{pmatrix}0\\1\end{pmatrix}}}, that is,ψ=αψl+βψu{\displaystyle \psi =\alpha \psi _{l}+\beta \psi _{u}}for complexα,β{\displaystyle \alpha ,\beta }. In order to respect the postulate that⟨ψ,ψ⟩=1{\displaystyle \langle \psi ,\psi \rangle =1}we require that|α|2+|β|2=1{\displaystyle |\alpha |^{2}+|\beta |^{2}=1}. Bothbeam splittersare modelled as the unitary matrixB=12(1ii1){\displaystyle B={\frac {1}{\sqrt {2}}}{\begin{pmatrix}1&i\\i&1\end{pmatrix}}}, which means that when a photon meets the beam splitter it will either stay on the same path with a probability amplitude of1/2{\displaystyle 1/{\sqrt {2}}}, or be reflected to the other path with a probability amplitude ofi/2{\displaystyle i/{\sqrt {2}}}. The phase shifter on the upper arm is modelled as the unitary matrixP=(100eiΔΦ){\displaystyle P={\begin{pmatrix}1&0\\0&e^{i\Delta \Phi }\end{pmatrix}}}, which means that if the photon is on the "upper" path it will gain a relative phase ofΔΦ{\displaystyle \Delta \Phi }, and it will stay unchanged if it is in the lower path. 
A photon that enters the interferometer from the left will then be acted upon with a beam splitterB{\displaystyle B}, a phase shifterP{\displaystyle P}, and another beam splitterB{\displaystyle B}, and so end up in the stateBPBψl=ieiΔΦ/2(−sin⁡(ΔΦ/2)cos⁡(ΔΦ/2)),{\displaystyle BPB\psi _{l}=ie^{i\Delta \Phi /2}{\begin{pmatrix}-\sin(\Delta \Phi /2)\\\cos(\Delta \Phi /2)\end{pmatrix}},}and the probabilities that it will be detected at the right or at the top are given respectively byp(u)=|⟨ψu,BPBψl⟩|2=cos2⁡ΔΦ2,{\displaystyle p(u)=|\langle \psi _{u},BPB\psi _{l}\rangle |^{2}=\cos ^{2}{\frac {\Delta \Phi }{2}},}p(l)=|⟨ψl,BPBψl⟩|2=sin2⁡ΔΦ2.{\displaystyle p(l)=|\langle \psi _{l},BPB\psi _{l}\rangle |^{2}=\sin ^{2}{\frac {\Delta \Phi }{2}}.}One can therefore use the Mach–Zehnder interferometer to estimate thephase shiftby estimating these probabilities. It is interesting to consider what would happen if the photon were definitely in either the "lower" or "upper" paths between the beam splitters. This can be accomplished by blocking one of the paths, or equivalently by removing the first beam splitter (and feeding the photon from the left or the bottom, as desired). In both cases, there will be no interference between the paths anymore, and the probabilities are given byp(u)=p(l)=1/2{\displaystyle p(u)=p(l)=1/2}, independently of the phaseΔΦ{\displaystyle \Delta \Phi }. From this we can conclude that the photon does not take one path or another after the first beam splitter, but rather that it is in a genuine quantum superposition of the two paths.[37] Quantum mechanics has had enormous success in explaining many of the features of our universe, with regard to small-scale and discrete quantities and interactions which cannot be explained byclassical methods.[note 2]Quantum mechanics is often the only theory that can reveal the individual behaviors of the subatomic particles that make up all forms of matter (electrons,protons,neutrons,photons, and others).Solid-state physicsandmaterials scienceare dependent upon quantum mechanics.[38] In many aspects, modern technology operates at a scale where quantum effects are significant. Important applications of quantum theory includequantum chemistry,quantum optics,quantum computing,superconducting magnets,light-emitting diodes, theoptical amplifierand the laser, thetransistorandsemiconductorssuch as themicroprocessor,medical and research imagingsuch asmagnetic resonance imagingandelectron microscopy.[39]Explanations for many biological and physical phenomena are rooted in the nature of the chemical bond, most notably the macro-moleculeDNA. The rules of quantum mechanics assert that the state space of a system is a Hilbert space and that observables of the system are Hermitian operators acting on vectors in that space – although they do not tell us which Hilbert space or which operators. These can be chosen appropriately in order to obtain a quantitative description of a quantum system, a necessary step in making physical predictions. An important guide for making these choices is thecorrespondence principle, a heuristic which states that the predictions of quantum mechanics reduce to those ofclassical mechanicsin the regime of largequantum numbers.[40]One can also start from an established classical model of a particular system, and then try to guess the underlying quantum model that would give rise to the classical model in the correspondence limit. 
This approach is known asquantization.[41]: 299[42] When quantum mechanics was originally formulated, it was applied to models whose correspondence limit wasnon-relativisticclassical mechanics. For instance, the well-known model of thequantum harmonic oscillatoruses an explicitly non-relativistic expression for thekinetic energyof the oscillator, and is thus a quantum version of theclassical harmonic oscillator.[7]: 234 Complications arise withchaotic systems, which do not have good quantum numbers, andquantum chaosstudies the relationship between classical and quantum descriptions in these systems.[41]: 353 Quantum decoherenceis a mechanism through which quantum systems losecoherence, and thus become incapable of displaying many typically quantum effects:quantum superpositionsbecome simply probabilistic mixtures, and quantum entanglement becomes simply classical correlations.[7]: 687–730Quantum coherence is not typically evident at macroscopic scales, though at temperatures approachingabsolute zeroquantum behavior may manifest macroscopically.[note 3] Many macroscopic properties of a classical system are a direct consequence of the quantum behavior of its parts. For example, the stability of bulk matter (consisting of atoms andmoleculeswhich would quickly collapse under electric forces alone), the rigidity of solids, and the mechanical, thermal, chemical, optical and magnetic properties of matter are all results of the interaction ofelectric chargesunder the rules of quantum mechanics.[43] Early attempts to merge quantum mechanics withspecial relativityinvolved the replacement of the Schrödinger equation with a covariant equation such as theKlein–Gordon equationor theDirac equation. While these theories were successful in explaining many experimental results, they had certain unsatisfactory qualities stemming from their neglect of the relativistic creation and annihilation of particles. A fully relativistic quantum theory required the development of quantum field theory, which applies quantization to a field (rather than a fixed set of particles). The first complete quantum field theory,quantum electrodynamics, provides a fully quantum description of theelectromagnetic interaction. Quantum electrodynamics is, along withgeneral relativity, one of the most accurate physical theories ever devised.[44][45] The full apparatus of quantum field theory is often unnecessary for describing electrodynamic systems. A simpler approach, one that has been used since the inception of quantum mechanics, is to treatchargedparticles as quantum mechanical objects being acted on by a classicalelectromagnetic field. For example, the elementary quantum model of thehydrogen atomdescribes theelectric fieldof the hydrogen atom using a classical−e2/(4πϵ0r){\displaystyle \textstyle -e^{2}/(4\pi \epsilon _{_{0}}r)}Coulomb potential.[7]: 285Likewise, in aStern–Gerlach experiment, a charged particle is modeled as a quantum system, while the background magnetic field is described classically.[41]: 26This "semi-classical" approach fails if quantum fluctuations in the electromagnetic field play an important role, such as in the emission of photons bycharged particles. Quantum fieldtheories for thestrong nuclear forceand theweak nuclear forcehave also been developed. The quantum field theory of the strong nuclear force is calledquantum chromodynamics, and describes the interactions of subnuclear particles such asquarksandgluons. 
The weak nuclear force and the electromagnetic force were unified, in their quantized forms, into a single quantum field theory (known as electroweak theory), by the physicists Abdus Salam, Sheldon Glashow and Steven Weinberg.[46]

Even though the predictions of both quantum theory and general relativity have been supported by rigorous and repeated empirical evidence, their abstract formalisms contradict each other and they have proven extremely difficult to incorporate into one consistent, cohesive model. Gravity is negligible in many areas of particle physics, so that unification between general relativity and quantum mechanics is not an urgent issue in those particular applications. However, the lack of a correct theory of quantum gravity is an important issue in physical cosmology and the search by physicists for an elegant "Theory of Everything" (TOE). Consequently, resolving the inconsistencies between both theories has been a major goal of 20th- and 21st-century physics. This TOE would not only combine the models of subatomic physics but also derive the four fundamental forces of nature from a single force or phenomenon.[47]

One proposal for doing so is string theory, which posits that the point-like particles of particle physics are replaced by one-dimensional objects called strings. String theory describes how these strings propagate through space and interact with each other. On distance scales larger than the string scale, a string looks just like an ordinary particle, with its mass, charge, and other properties determined by the vibrational state of the string. In string theory, one of the many vibrational states of the string corresponds to the graviton, a quantum mechanical particle that carries gravitational force.[48][49]

Another popular theory is loop quantum gravity (LQG), which describes quantum properties of gravity and is thus a theory of quantum spacetime. LQG is an attempt to merge and adapt standard quantum mechanics and standard general relativity. This theory describes space as an extremely fine fabric "woven" of finite loops called spin networks. The evolution of a spin network over time is called a spin foam. The characteristic length scale of a spin foam is the Planck length, approximately 1.616×10⁻³⁵ m, and so lengths shorter than the Planck length are not physically meaningful in LQG.[50]

Since its inception, the many counter-intuitive aspects and results of quantum mechanics have provoked strong philosophical debates and many interpretations. The arguments centre on the probabilistic nature of quantum mechanics, the difficulties with wavefunction collapse and the related measurement problem, and quantum nonlocality. Perhaps the only consensus that exists about these issues is that there is no consensus. Richard Feynman once said, "I think I can safely say that nobody understands quantum mechanics."[51] According to Steven Weinberg, "There is now in my opinion no entirely satisfactory interpretation of quantum mechanics."[52]

The views of Niels Bohr, Werner Heisenberg and other physicists are often grouped together as the "Copenhagen interpretation".[53][54] According to these views, the probabilistic nature of quantum mechanics is not a temporary feature which will eventually be replaced by a deterministic theory, but is instead a final renunciation of the classical idea of "causality".
Bohr in particular emphasized that any well-defined application of the quantum mechanical formalism must always make reference to the experimental arrangement, due to the complementary nature of evidence obtained under different experimental situations. Copenhagen-type interpretations were adopted by Nobel laureates in quantum physics, including Bohr,[55] Heisenberg,[56] Schrödinger,[57] Feynman,[2] and Zeilinger[58] as well as 21st-century researchers in quantum foundations.[59]

Albert Einstein, himself one of the founders of quantum theory, was troubled by its apparent failure to respect some cherished metaphysical principles, such as determinism and locality. Einstein's long-running exchanges with Bohr about the meaning and status of quantum mechanics are now known as the Bohr–Einstein debates. Einstein believed that underlying quantum mechanics must be a theory that explicitly forbids action at a distance. He argued that quantum mechanics was incomplete, a theory that was valid but not fundamental, analogous to how thermodynamics is valid, but the fundamental theory behind it is statistical mechanics. In 1935, Einstein and his collaborators Boris Podolsky and Nathan Rosen published an argument that the principle of locality implies the incompleteness of quantum mechanics, a thought experiment later termed the Einstein–Podolsky–Rosen paradox.[note 4] In 1964, John Bell showed that EPR's principle of locality, together with determinism, was actually incompatible with quantum mechanics: they implied constraints on the correlations produced by distant systems, now known as Bell inequalities, that can be violated by entangled particles.[64] Since then several experiments have been performed to test these correlations, with the result that they do in fact violate Bell inequalities, and thus falsify the conjunction of locality with determinism.[16][17]

Bohmian mechanics shows that it is possible to reformulate quantum mechanics to make it deterministic, at the price of making it explicitly nonlocal. It attributes not only a wave function to a physical system, but in addition a real position that evolves deterministically under a nonlocal guiding equation. The evolution of a physical system is given at all times by the Schrödinger equation together with the guiding equation; there is never a collapse of the wave function. This solves the measurement problem.[65]

Everett's many-worlds interpretation, formulated in 1956, holds that all the possibilities described by quantum theory simultaneously occur in a multiverse composed of mostly independent parallel universes.[66] This is a consequence of removing the axiom of the collapse of the wave packet. All possible states of the measured system and the measuring apparatus, together with the observer, are present in a real physical quantum superposition. While the multiverse is deterministic, we perceive non-deterministic behavior governed by probabilities, because we do not observe the multiverse as a whole, but only one parallel universe at a time. Exactly how this is supposed to work has been the subject of much debate.
Several attempts have been made to make sense of this and derive the Born rule,[67][68] with no consensus on whether they have been successful.[69][70][71]

Relational quantum mechanics appeared in the late 1990s as a modern derivative of Copenhagen-type ideas,[72][73] and QBism was developed some years later.[74][75]

Quantum mechanics was developed in the early decades of the 20th century, driven by the need to explain phenomena that, in some cases, had been observed in earlier times. Scientific inquiry into the wave nature of light began in the 17th and 18th centuries, when scientists such as Robert Hooke, Christiaan Huygens and Leonhard Euler proposed a wave theory of light based on experimental observations.[76] In 1803 English polymath Thomas Young described the famous double-slit experiment.[77] This experiment played a major role in the general acceptance of the wave theory of light.

During the early 19th century, chemical research by John Dalton and Amedeo Avogadro lent weight to the atomic theory of matter, an idea that James Clerk Maxwell, Ludwig Boltzmann and others built upon to establish the kinetic theory of gases. The successes of kinetic theory gave further credence to the idea that matter is composed of atoms, yet the theory also had shortcomings that would only be resolved by the development of quantum mechanics.[78] While the early conception of atoms from Greek philosophy had been that they were indivisible units – the word "atom" deriving from the Greek for 'uncuttable' – the 19th century saw the formulation of hypotheses about subatomic structure. One important discovery in that regard was Michael Faraday's 1838 observation of a glow caused by an electrical discharge inside a glass tube containing gas at low pressure. Julius Plücker, Johann Wilhelm Hittorf and Eugen Goldstein carried on and improved upon Faraday's work, leading to the identification of cathode rays, which J. J. Thomson found to consist of subatomic particles that would be called electrons.[79][80]

The black-body radiation problem was discovered by Gustav Kirchhoff in 1859. In 1900, Max Planck proposed the hypothesis that energy is radiated and absorbed in discrete "quanta" (or energy packets), yielding a calculation that precisely matched the observed patterns of black-body radiation.[81] The word quantum derives from the Latin, meaning "how great" or "how much".[82] According to Planck, quantities of energy could be thought of as divided into "elements" whose size (E) would be proportional to their frequency (ν): E = hν, where h is the Planck constant. Planck cautiously insisted that this was only an aspect of the processes of absorption and emission of radiation and was not the physical reality of the radiation.[83] In fact, he considered his quantum hypothesis a mathematical trick to get the right answer rather than a sizable discovery.[84] However, in 1905 Albert Einstein interpreted Planck's quantum hypothesis realistically and used it to explain the photoelectric effect, in which shining light on certain materials can eject electrons from the material.
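Planck's relation E = hν is easy to check numerically. The sketch below computes the energy of a single quantum of visible light; the constants are standard rounded values, and the 500 nm wavelength is an arbitrary illustrative choice, not a figure from the text.

```python
# Illustrative check of Planck's relation E = h * nu.
# Constants are standard rounded values (not taken from the text).
h = 6.626e-34   # Planck constant, J*s
c = 2.998e8     # speed of light, m/s

wavelength = 500e-9        # green light, 500 nm (arbitrary example)
nu = c / wavelength        # frequency, Hz
E = h * nu                 # energy of one quantum, J

print(f"nu = {nu:.3e} Hz")   # ~5.996e14 Hz
print(f"E  = {E:.3e} J")     # ~3.97e-19 J, about 2.48 eV
```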
Niels Bohr then developed Planck's ideas about radiation into a model of the hydrogen atom that successfully predicted the spectral lines of hydrogen.[85] Einstein further developed this idea to show that an electromagnetic wave such as light could also be described as a particle (later called the photon), with a discrete amount of energy that depends on its frequency.[86] In his paper "On the Quantum Theory of Radiation", Einstein expanded on the interaction between energy and matter to explain the absorption and emission of energy by atoms. Although overshadowed at the time by his general theory of relativity, this paper articulated the mechanism underlying the stimulated emission of radiation,[87] which became the basis of the laser.[88]

This phase is known as the old quantum theory. Never complete or self-consistent, the old quantum theory was rather a set of heuristic corrections to classical mechanics.[89][90] The theory is now understood as a semi-classical approximation to modern quantum mechanics.[91][92] Notable results from this period include, in addition to the work of Planck, Einstein and Bohr mentioned above, Einstein and Peter Debye's work on the specific heat of solids, Bohr and Hendrika Johanna van Leeuwen's proof that classical physics cannot account for diamagnetism, and Arnold Sommerfeld's extension of the Bohr model to include special-relativistic effects.[89][93]

In the mid-1920s quantum mechanics was developed to become the standard formulation for atomic physics. In 1923, the French physicist Louis de Broglie put forward his theory of matter waves by stating that particles can exhibit wave characteristics and vice versa. Building on de Broglie's approach, modern quantum mechanics was born in 1925, when the German physicists Werner Heisenberg, Max Born, and Pascual Jordan[94][95] developed matrix mechanics and the Austrian physicist Erwin Schrödinger invented wave mechanics. Born introduced the probabilistic interpretation of Schrödinger's wave function in July 1926.[96] Thus, the entire field of quantum physics emerged, leading to its wider acceptance at the Fifth Solvay Conference in 1927.[97]

By 1930, quantum mechanics had been further unified and formalized by David Hilbert, Paul Dirac and John von Neumann,[98] with greater emphasis on measurement, the statistical nature of our knowledge of reality, and philosophical speculation about the 'observer'. It has since permeated many disciplines, including quantum chemistry, quantum electronics, quantum optics, and quantum information science. It also provides a useful framework for many features of the modern periodic table of elements, and describes the behavior of atoms during chemical bonding and the flow of electrons in computer semiconductors, and therefore plays a crucial role in many modern technologies. While quantum mechanics was constructed to describe the world of the very small, it is also needed to explain some macroscopic phenomena such as superconductors[99] and superfluids.[100]
https://en.wikipedia.org/wiki/Quantum_physics
Set theory is the branch of mathematical logic that studies sets, which can be informally described as collections of objects. Although objects of any kind can be collected into a set, set theory – as a branch of mathematics – is mostly concerned with those that are relevant to mathematics as a whole.

The modern study of set theory was initiated by the German mathematicians Richard Dedekind and Georg Cantor in the 1870s. In particular, Georg Cantor is commonly considered the founder of set theory. The non-formalized systems investigated during this early stage go under the name of naive set theory. After the discovery of paradoxes within naive set theory (such as Russell's paradox, Cantor's paradox and the Burali-Forti paradox), various axiomatic systems were proposed in the early twentieth century, of which Zermelo–Fraenkel set theory (with or without the axiom of choice) is still the best-known and most studied.

Set theory is commonly employed as a foundational system for the whole of mathematics, particularly in the form of Zermelo–Fraenkel set theory with the axiom of choice. Besides its foundational role, set theory also provides the framework to develop a mathematical theory of infinity, and has various applications in computer science (such as in the theory of relational algebra), philosophy, formal semantics, and evolutionary dynamics. Its foundational appeal, together with its paradoxes, its implications for the concept of infinity, and its multiple applications have made set theory an area of major interest for logicians and philosophers of mathematics. Contemporary research into set theory covers a vast array of topics, ranging from the structure of the real number line to the study of the consistency of large cardinals.

The basic notion of grouping objects has existed since at least the emergence of numbers, and the notion of treating sets as their own objects has existed since at least the Tree of Porphyry in the 3rd century AD. The simplicity and ubiquity of sets make it hard to determine the origin of sets as now used in mathematics; however, Bernard Bolzano's Paradoxes of the Infinite (Paradoxien des Unendlichen, 1851) is generally considered the first rigorous introduction of sets to mathematics. In his work, he (among other things) expanded on Galileo's paradox, and introduced one-to-one correspondence of infinite sets, for example between the intervals [0, 5] and [0, 12] by the relation 5y = 12x. However, he resisted saying these sets were equinumerous, and his work is generally considered to have been uninfluential in the mathematics of his time.[1][2]

Before mathematical set theory, basic concepts of infinity were considered to be solidly in the domain of philosophy (see: Infinity (philosophy) and Infinity § History). Since the 5th century BC, beginning with the Greek philosopher Zeno of Elea in the West (and early Indian mathematicians in the East), mathematicians had struggled with the concept of infinity. With the development of calculus in the late 17th century, philosophers began to generally distinguish between actual and potential infinity, wherein mathematics was only considered in the latter.[3] Carl Friedrich Gauss famously stated: "Infinity is nothing more than a figure of speech which helps us talk about limits.
The notion of a completed infinity doesn't belong in mathematics."[4]

Development of mathematical set theory was motivated by several mathematicians. Bernhard Riemann's lecture On the Hypotheses which lie at the Foundations of Geometry (1854) proposed new ideas about topology, and about basing mathematics (especially geometry) in terms of sets or manifolds in the sense of a class (which he called Mannigfaltigkeit), now called point-set topology. The lecture was published by Richard Dedekind in 1868, along with Riemann's paper on trigonometric series (which presented the Riemann integral). The latter was a starting point of a movement in real analysis for the study of "seriously" discontinuous functions. A young Georg Cantor entered into this area, which led him to the study of point-sets. Around 1871, influenced by Riemann, Dedekind began working with sets in his publications, which dealt very clearly and precisely with equivalence relations, partitions of sets, and homomorphisms. Thus, many of the usual set-theoretic procedures of twentieth-century mathematics go back to his work. However, he did not publish a formal explanation of his set theory until 1888.

Set theory, as understood by modern mathematicians, is generally considered to be founded by a single paper in 1874 by Georg Cantor titled On a Property of the Collection of All Real Algebraic Numbers.[5][6][7] In his paper, he developed the notion of cardinality, comparing the sizes of two sets by setting them in one-to-one correspondence. His "revolutionary discovery" was that the set of all real numbers is uncountable, that is, one cannot put all real numbers in a list. This theorem is proved using Cantor's first uncountability proof, which differs from the more familiar proof using his diagonal argument.

Cantor introduced fundamental constructions in set theory, such as the power set of a set A, which is the set of all possible subsets of A. He later proved that the size of the power set of A is strictly larger than the size of A, even when A is an infinite set; this result soon became known as Cantor's theorem. Cantor developed a theory of transfinite numbers, called cardinals and ordinals, which extended the arithmetic of the natural numbers. His notation for the cardinal numbers was the Hebrew letter ℵ (aleph) with a natural number subscript; for the ordinals he employed the Greek letter ω (omega). Set theory was beginning to become an essential ingredient of the new "modern" approach to mathematics.

Originally, Cantor's theory of transfinite numbers was regarded as counter-intuitive – even shocking. This caused it to encounter resistance from mathematical contemporaries such as Leopold Kronecker and Henri Poincaré and later from Hermann Weyl and L. E. J. Brouwer, while Ludwig Wittgenstein raised philosophical objections (see: Controversy over Cantor's theory).[a] Dedekind's algebraic style only began to find followers in the 1890s.

Despite the controversy, Cantor's set theory gained remarkable ground around the turn of the 20th century with the work of several notable mathematicians and philosophers. Richard Dedekind, around the same time, began working with sets in his publications, and famously constructed the real numbers using Dedekind cuts. He also worked with Giuseppe Peano in developing the Peano axioms, which formalized natural-number arithmetic using set-theoretic ideas, and which also introduced the epsilon symbol for set membership. Possibly most prominently, Gottlob Frege began to develop his Foundations of Arithmetic.
In his work, Frege tries to ground all mathematics in terms of logical axioms using Cantor's cardinality. For example, the sentence "the number of horses in the barn is four" means that four objects fall under the concept horse in the barn. Frege attempted to explain our grasp of numbers through cardinality ('the number of ...', or Nx:Fx), relying on Hume's principle.

However, Frege's work was short-lived, as it was found by Bertrand Russell that his axioms lead to a contradiction. The culprit was Frege's Basic Law V (now known as the axiom schema of unrestricted comprehension): according to it, for any sufficiently well-defined property, there is the set of all and only the objects that have that property. The contradiction, called Russell's paradox, is shown as follows:

Let R be the set of all sets that are not members of themselves. (This set is sometimes called "the Russell set".) If R is not a member of itself, then its definition entails that it is a member of itself; yet, if it is a member of itself, then it is not a member of itself, since it is the set of all sets that are not members of themselves. The resulting contradiction is Russell's paradox. In symbols:

R = {x | x ∉ x} implies R ∈ R ⟺ R ∉ R.

This came around the time of several paradoxes or counter-intuitive results. Examples include the unprovability of the parallel postulate, the existence of mathematical objects that cannot be computed or explicitly described, and the existence of theorems of arithmetic that cannot be proved with Peano arithmetic. The result was a foundational crisis of mathematics.

Set theory begins with a fundamental binary relation between an object o and a set A. If o is a member (or element) of A, the notation o ∈ A is used. A set is described by listing elements separated by commas, or by a characterizing property of its elements, within braces { }.[8] Since sets are objects, the membership relation can relate sets as well, i.e., sets themselves can be members of other sets.

A derived binary relation between two sets is the subset relation, also called set inclusion. If all the members of set A are also members of set B, then A is a subset of B, denoted A ⊆ B. For example, {1, 2} is a subset of {1, 2, 3}, and so is {2}, but {1, 4} is not. As implied by this definition, a set is a subset of itself. For cases where this possibility is unsuitable or would make sense to be rejected, the term proper subset is defined, variously denoted A ⊂ B, A ⊊ B, or A ⫋ B (note however that the notation A ⊂ B is sometimes used synonymously with A ⊆ B; that is, allowing the possibility that A and B are equal). We call A a proper subset of B if and only if A is a subset of B, but A is not equal to B. Also, 1, 2, and 3 are members (elements) of the set {1, 2, 3}, but are not subsets of it; and in turn, the subsets, such as {1}, are not members of the set {1, 2, 3}. More complicated relations can exist; for example, the set {1} is both a member and a proper subset of the set {1, {1}}.

Just as arithmetic features binary operations on numbers, set theory features binary operations on sets, such as union, intersection, and set difference.[9] Some basic sets of central importance are the set of natural numbers, the set of real numbers and the empty set – the unique set containing no elements. The empty set is also occasionally called the null set,[15] though this name is ambiguous and can lead to several interpretations.
The empty set can be denoted with empty braces "{}" or the symbol "∅".

The power set of a set A, denoted P(A), is the set whose members are all of the possible subsets of A. For example, the power set of {1, 2} is { {}, {1}, {2}, {1, 2} }. Notably, P(A) contains both A and the empty set.

A set is pure if all of its members are sets, all members of its members are sets, and so on. For example, the set containing only the empty set is a nonempty pure set. In modern set theory, it is common to restrict attention to the von Neumann universe of pure sets, and many systems of axiomatic set theory are designed to axiomatize the pure sets only. There are many technical advantages to this restriction, and little generality is lost, because essentially all mathematical concepts can be modeled by pure sets. Sets in the von Neumann universe are organized into a cumulative hierarchy, based on how deeply their members, members of members, etc. are nested. Each set in this hierarchy is assigned (by transfinite recursion) an ordinal number α, known as its rank. The rank of a pure set X is defined to be the least ordinal that is strictly greater than the rank of any of its elements. For example, the empty set is assigned rank 0, while the set containing only the empty set is assigned rank 1. For each ordinal α, the set Vα is defined to consist of all pure sets with rank less than α. The entire von Neumann universe is denoted V.

Elementary set theory can be studied informally and intuitively, and so can be taught in primary schools using Venn diagrams. The intuitive approach tacitly assumes that a set may be formed from the class of all objects satisfying any particular defining condition. This assumption gives rise to paradoxes, the simplest and best known of which are Russell's paradox and the Burali-Forti paradox. Axiomatic set theory was originally devised to rid set theory of such paradoxes.[note 1]

The most widely studied systems of axiomatic set theory imply that all sets form a cumulative hierarchy.[b] Such systems come in two flavors: those whose ontology consists of sets alone (as in Zermelo–Fraenkel set theory), and those that also admit proper classes (as in von Neumann–Bernays–Gödel set theory). The above systems can be modified to allow urelements, objects that can be members of sets but that are not themselves sets and do not have any members. The New Foundations systems of NFU (allowing urelements) and NF (lacking them), associated with Willard Van Orman Quine, are not based on a cumulative hierarchy. NF and NFU include a "set of everything", relative to which every set has a complement. In these systems urelements matter, because NF, but not NFU, produces sets for which the axiom of choice does not hold. Despite NF's ontology not reflecting the traditional cumulative hierarchy and violating well-foundedness, Thomas Forster has argued that it does reflect an iterative conception of set.[16]

Systems of constructive set theory, such as CST, CZF, and IZF, embed their set axioms in intuitionistic instead of classical logic. Yet other systems accept classical logic but feature a nonstandard membership relation. These include rough set theory and fuzzy set theory, in which the value of an atomic formula embodying the membership relation is not simply True or False. The Boolean-valued models of ZFC are a related subject.
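The membership and subset relations, and the power-set construction described above, can be illustrated directly with Python's built-in set types. This is only an illustrative sketch: frozenset stands in for a mathematical set so that sets can be elements of other sets, and the power_set helper is a standard itertools recipe, not something defined in this article.

```python
from itertools import chain, combinations

# Membership vs. inclusion: {1} is both a member and a proper
# subset of {1, {1}}, as noted in the text.
one = frozenset({1})
B = frozenset({1, one})
print(one in B)   # True: {1} is a member of B
print(one < B)    # True: {1} is also a proper subset of B

# Power set of a finite set (standard itertools recipe).
def power_set(s):
    s = list(s)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

A = {1, 2}
P = power_set(A)
print(P)                      # frozenset(), {1}, {2}, {1, 2}
print(len(P) == 2 ** len(A))  # True: |P(A)| = 2^|A| for finite A
```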
An enrichment of ZFC calledinternal set theorywas proposed byEdward Nelsonin 1977.[17] Many mathematical concepts can be defined precisely using only set theoretic concepts. For example, mathematical structures as diverse asgraphs,manifolds,rings,vector spaces, andrelational algebrascan all be defined as sets satisfying various (axiomatic) properties.Equivalenceandorder relationsare ubiquitous in mathematics, and the theory of mathematicalrelationscan be described in set theory.[18][19] Set theory is also a promising foundational system for much of mathematics. Since the publication of the first volume ofPrincipia Mathematica, it has been claimed that most (or even all) mathematical theorems can be derived using an aptly designed set of axioms for set theory, augmented with many definitions, usingfirstorsecond-order logic. For example, properties of thenaturalandreal numberscan be derived within set theory, as each of these number systems can be defined by representing their elements as sets of specific forms.[20] Set theory as a foundation formathematical analysis,topology,abstract algebra, anddiscrete mathematicsis likewise uncontroversial; mathematicians accept (in principle) that theorems in these areas can be derived from the relevant definitions and the axioms of set theory. However, it remains that few full derivations of complex mathematical theorems from set theory have been formally verified, since such formal derivations are often much longer than the natural language proofs mathematicians commonly present. One verification project,Metamath, includes human-written, computer-verified derivations of more than 12,000 theorems starting fromZFCset theory,first-order logicandpropositional logic.[21] Set theory is a major area of research in mathematics with many interrelated subfields: Combinatorial set theoryconcerns extensions of finitecombinatoricsto infinite sets. This includes the study ofcardinal arithmeticand the study of extensions ofRamsey's theoremsuch as theErdős–Rado theorem. Descriptive set theoryis the study of subsets of thereal lineand, more generally, subsets ofPolish spaces. It begins with the study ofpointclassesin theBorel hierarchyand extends to the study of more complex hierarchies such as theprojective hierarchyand theWadge hierarchy. Many properties ofBorel setscan be established in ZFC, but proving these properties hold for more complicated sets requires additional axioms related to determinacy and large cardinals. The field ofeffective descriptive set theoryis between set theory andrecursion theory. It includes the study oflightface pointclasses, and is closely related tohyperarithmetical theory. In many cases, results of classical descriptive set theory have effective versions; in some cases, new results are obtained by proving the effective version first and then extending ("relativizing") it to make it more broadly applicable. A recent area of research concernsBorel equivalence relationsand more complicated definableequivalence relations. This has important applications to the study ofinvariantsin many fields of mathematics. In set theory as Cantor defined and Zermelo and Fraenkel axiomatized, an object is either a member of a set or not. Infuzzy set theorythis condition was relaxed byLotfi A. Zadehso an object has adegree of membershipin a set, a number between 0 and 1. For example, the degree of membership of a person in the set of "tall people" is more flexible than a simple yes or no answer and can be a real number such as 0.75. 
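The degree-of-membership idea behind fuzzy set theory is easy to sketch in code. In the toy example below, membership in the fuzzy set of "tall people" ramps linearly from 0 to 1; the 160–190 cm thresholds are invented for illustration and are not part of the theory.

```python
# Toy fuzzy membership function for "tall people".
# Thresholds are made up for illustration, not from the text.
def tall_membership(height_cm: float) -> float:
    """Degree of membership in the fuzzy set of tall people, in [0, 1]."""
    if height_cm <= 160:
        return 0.0
    if height_cm >= 190:
        return 1.0
    return (height_cm - 160) / 30  # linear ramp between 160 and 190 cm

print(tall_membership(150))    # 0.0  -> clearly not tall
print(tall_membership(182.5))  # 0.75 -> mostly tall, as in the text's example
print(tall_membership(195))    # 1.0  -> clearly tall
```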
Aninner modelof Zermelo–Fraenkel set theory (ZF) is a transitiveclassthat includes all the ordinals and satisfies all the axioms of ZF. The canonical example is theconstructible universeLdeveloped by Gödel. One reason that the study of inner models is of interest is that it can be used to prove consistency results. For example, it can be shown that regardless of whether a modelVof ZF satisfies thecontinuum hypothesisor theaxiom of choice, the inner modelLconstructed inside the original model will satisfy both the generalized continuum hypothesis and the axiom of choice. Thus the assumption that ZF is consistent (has at least one model) implies that ZF together with these two principles is consistent. The study of inner models is common in the study ofdeterminacyandlarge cardinals, especially when considering axioms such as the axiom of determinacy that contradict the axiom of choice. Even if a fixed model of set theory satisfies the axiom of choice, it is possible for an inner model to fail to satisfy the axiom of choice. For example, the existence of sufficiently large cardinals implies that there is an inner model satisfying the axiom of determinacy (and thus not satisfying the axiom of choice).[22] Alarge cardinalis a cardinal number with an extra property. Many such properties are studied, includinginaccessible cardinals,measurable cardinals, and many more. These properties typically imply the cardinal number must be very large, with the existence of a cardinal with the specified property unprovable inZermelo–Fraenkel set theory. Determinacyrefers to the fact that, under appropriate assumptions, certain two-player games of perfect information are determined from the start in the sense that one player must have a winning strategy. The existence of these strategies has important consequences in descriptive set theory, as the assumption that a broader class of games is determined often implies that a broader class of sets will have a topological property. Theaxiom of determinacy(AD) is an important object of study; although incompatible with the axiom of choice, AD implies that all subsets of the real line are well behaved (in particular, measurable and with the perfect set property). AD can be used to prove that theWadge degreeshave an elegant structure. Paul Coheninvented the method offorcingwhile searching for amodelofZFCin which thecontinuum hypothesisfails, or a model of ZF in which theaxiom of choicefails. Forcing adjoins to some given model of set theory additional sets in order to create a larger model with properties determined (i.e. "forced") by the construction and the original model. For example, Cohen's construction adjoins additional subsets of thenatural numberswithout changing any of thecardinal numbersof the original model. Forcing is also one of two methods for provingrelative consistencyby finitistic methods, the other method beingBoolean-valued models. Acardinal invariantis a property of the real line measured by a cardinal number. For example, a well-studied invariant is the smallest cardinality of a collection ofmeagre setsof reals whose union is the entire real line. These are invariants in the sense that any two isomorphic models of set theory must give the same cardinal for each invariant. Many cardinal invariants have been studied, and the relationships between them are often complex and related to axioms of set theory. 
Set-theoretic topologystudies questions ofgeneral topologythat are set-theoretic in nature or that require advanced methods of set theory for their solution. Many of these theorems are independent of ZFC, requiring stronger axioms for their proof. A famous problem is thenormal Moore space question, a question in general topology that was the subject of intense research. The answer to the normal Moore space question was eventually proved to be independent of ZFC. From set theory's inception, some mathematicians have objected to it as afoundation for mathematics. The most common objection to set theory, oneKroneckervoiced in set theory's earliest years, starts from theconstructivistview that mathematics is loosely related to computation. If this view is granted, then the treatment of infinite sets, both innaiveand in axiomatic set theory, introduces into mathematics methods and objects that are not computable even in principle. The feasibility of constructivism as a substitute foundation for mathematics was greatly increased byErrett Bishop's influential bookFoundations of Constructive Analysis.[23] A different objection put forth byHenri Poincaréis that defining sets using the axiom schemas ofspecificationandreplacement, as well as theaxiom of power set, introducesimpredicativity, a type ofcircularity, into the definitions of mathematical objects. The scope of predicatively founded mathematics, while less than that of the commonly accepted Zermelo–Fraenkel theory, is much greater than that of constructive mathematics, to the point thatSolomon Fefermanhas said that "all of scientifically applicable analysis can be developed [using predicative methods]".[24] Ludwig Wittgensteincondemned set theory philosophically for its connotations ofmathematical platonism.[25]He wrote that "set theory is wrong", since it builds on the "nonsense" of fictitious symbolism, has "pernicious idioms", and that it is nonsensical to talk about "all numbers".[26]Wittgenstein identified mathematics with algorithmic human deduction;[27]the need for a secure foundation for mathematics seemed, to him, nonsensical.[28]Moreover, since human effort is necessarily finite, Wittgenstein's philosophy required an ontological commitment to radicalconstructivismandfinitism. Meta-mathematical statements – which, for Wittgenstein, included any statement quantifying over infinite domains, and thus almost all modern set theory – are not mathematics.[29]Few modern philosophers have adopted Wittgenstein's views after a spectacular blunder inRemarks on the Foundations of Mathematics: Wittgenstein attempted to refuteGödel's incompleteness theoremsafter having only read the abstract. As reviewersKreisel,Bernays,Dummett, andGoodsteinall pointed out, many of his critiques did not apply to the paper in full. Only recently have philosophers such asCrispin Wrightbegun to rehabilitate Wittgenstein's arguments.[30] Category theoristshave proposedtopos theoryas an alternative to traditional axiomatic set theory. Topos theory can interpret various alternatives to that theory, such asconstructivism, finite set theory, andcomputableset theory.[31][32]Topoi also give a natural setting for forcing and discussions of the independence of choice from ZF, as well as providing the framework forpointless topologyandStone spaces.[33] An active area of research is theunivalent foundationsand related to ithomotopy type theory. 
Within homotopy type theory, a set may be regarded as a homotopy 0-type, withuniversal propertiesof sets arising from the inductive and recursive properties ofhigher inductive types. Principles such as theaxiom of choiceand thelaw of the excluded middlecan be formulated in a manner corresponding to the classical formulation in set theory or perhaps in a spectrum of distinct ways unique to type theory. Some of these principles may be proven to be a consequence of other principles. The variety of formulations of these axiomatic principles allows for a detailed analysis of the formulations required in order to derive various mathematical results.[34][35] As set theory gained popularity as a foundation for modern mathematics, there has been support for the idea of introducing the basics ofnaive set theoryearly inmathematics education. In the US in the 1960s, theNew Mathexperiment aimed to teach basic set theory, among other abstract concepts, toprimary schoolstudents but was met with much criticism.[36]The math syllabus in European schools followed this trend and currently includes the subject at different levels in all grades.Venn diagramsare widely employed to explain basic set-theoretic relationships to primary school students (even thoughJohn Vennoriginally devised them as part of a procedure to assess thevalidityofinferencesinterm logic). Set theory is used to introduce students tological operators(NOT, AND, OR), and semantic or rule description (technicallyintensional definition)[37]of sets (e.g. "months starting with the letterA"), which may be useful when learningcomputer programming, sinceBoolean logicis used in variousprogramming languages. Likewise, sets and other collection-like objects, such asmultisetsandlists, are commondatatypesin computer science and programming.[38] In addition to that, certain sets are commonly used in mathematical teaching, such as the setsN{\displaystyle \mathbb {N} }ofnatural numbers,Z{\displaystyle \mathbb {Z} }ofintegers,R{\displaystyle \mathbb {R} }ofreal numbers, etc.). These are commonly used when defining amathematical functionas a relation from one set (thedomain) to another set (therange).[39]
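Since the passage above notes that a mathematical function is defined as a relation from a domain to a range, it may help to see a function modeled literally as a set of ordered pairs. The following sketch uses Python tuples for pairs; the functionality check simply verifies that no first coordinate is repeated.

```python
# Modeling a function as a set of ordered pairs - a sketch.
f = {(1, 'a'), (2, 'b'), (3, 'a')}

domain = {x for (x, y) in f}
range_ = {y for (x, y) in f}

# Functionality check: each domain element maps to exactly one value,
# i.e. no first coordinate occurs in two different pairs.
is_function = len(domain) == len(f)

print(domain)       # {1, 2, 3}
print(range_)       # {'a', 'b'}
print(is_function)  # True
```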
https://en.wikipedia.org/wiki/Axiomatic_set_theory
Inmathematics, for a functionf:X→Y{\displaystyle f:X\to Y}, theimageof an input valuex{\displaystyle x}is the single output value produced byf{\displaystyle f}when passedx{\displaystyle x}. Thepreimageof an output valuey{\displaystyle y}is the set of input values that producey{\displaystyle y}. More generally, evaluatingf{\displaystyle f}at eachelementof a given subsetA{\displaystyle A}of itsdomainX{\displaystyle X}produces a set, called the "imageofA{\displaystyle A}under (or through)f{\displaystyle f}". Similarly, theinverse image(orpreimage) of a given subsetB{\displaystyle B}of thecodomainY{\displaystyle Y}is the set of all elements ofX{\displaystyle X}that map to a member ofB.{\displaystyle B.} Theimageof the functionf{\displaystyle f}is the set of all output values it may produce, that is, the image ofX{\displaystyle X}. Thepreimageoff{\displaystyle f}, that is, the preimage ofY{\displaystyle Y}underf{\displaystyle f}, always equalsX{\displaystyle X}(thedomainoff{\displaystyle f}); therefore, the former notion is rarely used. Image and inverse image may also be defined for generalbinary relations, not just functions. The word "image" is used in three related ways. In these definitions,f:X→Y{\displaystyle f:X\to Y}is afunctionfrom thesetX{\displaystyle X}to the setY.{\displaystyle Y.} Ifx{\displaystyle x}is a member ofX,{\displaystyle X,}then the image ofx{\displaystyle x}underf,{\displaystyle f,}denotedf(x),{\displaystyle f(x),}is thevalueoff{\displaystyle f}when applied tox.{\displaystyle x.}f(x){\displaystyle f(x)}is alternatively known as the output off{\displaystyle f}for argumentx.{\displaystyle x.} Giveny,{\displaystyle y,}the functionf{\displaystyle f}is said totake the valuey{\displaystyle y}ortakey{\displaystyle y}as a valueif there exists somex{\displaystyle x}in the function's domain such thatf(x)=y.{\displaystyle f(x)=y.}Similarly, given a setS,{\displaystyle S,}f{\displaystyle f}is said totake a value inS{\displaystyle S}if there existssomex{\displaystyle x}in the function's domain such thatf(x)∈S.{\displaystyle f(x)\in S.}However,f{\displaystyle f}takes [all] values inS{\displaystyle S}andf{\displaystyle f}is valued inS{\displaystyle S}means thatf(x)∈S{\displaystyle f(x)\in S}foreverypointx{\displaystyle x}in the domain off{\displaystyle f}. Throughout, letf:X→Y{\displaystyle f:X\to Y}be a function. Theimageunderf{\displaystyle f}of a subsetA{\displaystyle A}ofX{\displaystyle X}is the set of allf(a){\displaystyle f(a)}fora∈A.{\displaystyle a\in A.}It is denoted byf[A],{\displaystyle f[A],}or byf(A){\displaystyle f(A)}when there is no risk of confusion. Usingset-builder notation, this definition can be written as[1][2]f[A]={f(a):a∈A}.{\displaystyle f[A]=\{f(a):a\in A\}.} This induces a functionf[⋅]:P(X)→P(Y),{\displaystyle f[\,\cdot \,]:{\mathcal {P}}(X)\to {\mathcal {P}}(Y),}whereP(S){\displaystyle {\mathcal {P}}(S)}denotes thepower setof a setS;{\displaystyle S;}that is the set of allsubsetsofS.{\displaystyle S.}See§ Notationbelow for more. 
Theimageof a function is the image of its entiredomain, also known as therangeof the function.[3]This last usage should be avoided because the word "range" is also commonly used to mean thecodomainoff.{\displaystyle f.} IfR{\displaystyle R}is an arbitrarybinary relationonX×Y,{\displaystyle X\times Y,}then the set{y∈Y:xRyfor somex∈X}{\displaystyle \{y\in Y:xRy{\text{ for some }}x\in X\}}is called the image, or the range, ofR.{\displaystyle R.}Dually, the set{x∈X:xRyfor somey∈Y}{\displaystyle \{x\in X:xRy{\text{ for some }}y\in Y\}}is called the domain ofR.{\displaystyle R.} Letf{\displaystyle f}be a function fromX{\displaystyle X}toY.{\displaystyle Y.}Thepreimageorinverse imageof a setB⊆Y{\displaystyle B\subseteq Y}underf,{\displaystyle f,}denoted byf−1[B],{\displaystyle f^{-1}[B],}is the subset ofX{\displaystyle X}defined byf−1[B]={x∈X:f(x)∈B}.{\displaystyle f^{-1}[B]=\{x\in X\,:\,f(x)\in B\}.} Other notations includef−1(B){\displaystyle f^{-1}(B)}andf−(B).{\displaystyle f^{-}(B).}[4]The inverse image of asingleton set, denoted byf−1[{y}]{\displaystyle f^{-1}[\{y\}]}or byf−1(y),{\displaystyle f^{-1}(y),}is also called thefiberor fiber overy{\displaystyle y}or thelevel setofy.{\displaystyle y.}The set of all the fibers over the elements ofY{\displaystyle Y}is a family of sets indexed byY.{\displaystyle Y.} For example, for the functionf(x)=x2,{\displaystyle f(x)=x^{2},}the inverse image of{4}{\displaystyle \{4\}}would be{−2,2}.{\displaystyle \{-2,2\}.}Again, if there is no risk of confusion,f−1[B]{\displaystyle f^{-1}[B]}can be denoted byf−1(B),{\displaystyle f^{-1}(B),}andf−1{\displaystyle f^{-1}}can also be thought of as a function from the power set ofY{\displaystyle Y}to the power set ofX.{\displaystyle X.}The notationf−1{\displaystyle f^{-1}}should not be confused with that forinverse function, although it coincides with the usual one for bijections in that the inverse image ofB{\displaystyle B}underf{\displaystyle f}is the image ofB{\displaystyle B}underf−1.{\displaystyle f^{-1}.} The traditional notations used in the previous section do not distinguish the original functionf:X→Y{\displaystyle f:X\to Y}from the image-of-sets functionf:P(X)→P(Y){\displaystyle f:{\mathcal {P}}(X)\to {\mathcal {P}}(Y)}; likewise they do not distinguish the inverse function (assuming one exists) from the inverse image function (which again relates the powersets). Given the right context, this keeps the notation light and usually does not cause confusion. But if needed, an alternative[5]is to give explicit names for the image and preimage as functions between power sets: For every functionf:X→Y{\displaystyle f:X\to Y}and all subsetsA⊆X{\displaystyle A\subseteq X}andB⊆Y,{\displaystyle B\subseteq Y,}the following properties hold: Also: For functionsf:X→Y{\displaystyle f:X\to Y}andg:Y→Z{\displaystyle g:Y\to Z}with subsetsA⊆X{\displaystyle A\subseteq X}andC⊆Z,{\displaystyle C\subseteq Z,}the following properties hold: For functionf:X→Y{\displaystyle f:X\to Y}and subsetsA,B⊆X{\displaystyle A,B\subseteq X}andS,T⊆Y,{\displaystyle S,T\subseteq Y,}the following properties hold: The results relating images and preimages to the (Boolean) algebra ofintersectionandunionwork for any collection of subsets, not just for pairs of subsets: (Here,S{\displaystyle S}can be infinite, evenuncountably infinite.) With respect to the algebra of subsets described above, the inverse image function is alattice homomorphism, while the image function is only asemilatticehomomorphism (that is, it does not always preserve intersections). 
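The definitions of image and preimage translate almost verbatim into code. The sketch below uses f(x) = x² (the article's own example, where the preimage of {4} is {−2, 2}) and also illustrates the closing remark that the image function need not preserve intersections; the helper names image and preimage are ad hoc.

```python
# Image and preimage of sets under a function (illustrative sketch).
def image(f, A):
    return {f(a) for a in A}

def preimage(f, X, B):
    # X is the ambient domain we search over.
    return {x for x in X if f(x) in B}

f = lambda x: x * x
X = set(range(-3, 4))             # domain {-3, ..., 3}

print(image(f, {-2, 2}))          # {4}
print(preimage(f, X, {4}))        # {-2, 2}, the fiber over 4

# Images need not preserve intersections:
A, B = {-1}, {1}
print(image(f, A & B))            # set():  f[A n B] is empty
print(image(f, A) & image(f, B))  # {1}:    f[A] n f[B] is not
```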
This article incorporates material from Fibre on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
https://en.wikipedia.org/wiki/Image_(mathematics)#Properties
Naive set theoryis any of several theories of sets used in the discussion of thefoundations of mathematics.[3]Unlikeaxiomatic set theories, which are defined usingformal logic, naive set theory is defined informally, innatural language. It describes the aspects ofmathematical setsfamiliar indiscrete mathematics(for exampleVenn diagramsand symbolic reasoning about theirBoolean algebra), and suffices for the everyday use of set theory concepts in contemporary mathematics.[4] Sets are of great importance in mathematics; in modern formal treatments, most mathematical objects (numbers,relations,functions, etc.) are defined in terms of sets. Naive set theory suffices for many purposes, while also serving as a stepping stone towards more formal treatments. Anaive theoryin the sense of "naive set theory" is a non-formalized theory, that is, a theory that usesnatural languageto describe sets and operations on sets. Such theory treats sets as platonic absolute objects. The wordsand,or,if ... then,not,for some,for everyare treated as in ordinary mathematics. As a matter of convenience, use of naive set theory and its formalism prevails even in higher mathematics – including in more formal settings of set theory itself. The first development ofset theorywas a naive set theory. It was created at the end of the 19th century byGeorg Cantoras part of his study ofinfinite sets[5]and developed byGottlob Fregein hisGrundgesetze der Arithmetik. Naive set theory may refer to several very distinct notions. It may refer to The assumption that any property may be used to form a set, without restriction, leads toparadoxes. One common example isRussell's paradox: there is no set consisting of "all sets that do not contain themselves". Thus consistent systems of naive set theory must include some limitations on the principles which can be used to form sets. Some believe thatGeorg Cantor's set theory was not actually implicated in the set-theoretic paradoxes (see Frápolli 1991). One difficulty in determining this with certainty is that Cantor did not provide an axiomatization of his system. By 1899, Cantor was aware of some of the paradoxes following from unrestricted interpretation of his theory, for instanceCantor's paradox[8]and theBurali-Forti paradox,[9]and did not believe that they discredited his theory.[10]Cantor's paradox can actually be derived from the above (false) assumption—that any propertyP(x)may be used to form a set—using forP(x)"xis acardinal number". Frege explicitly axiomatized a theory in which a formalized version of naive set theory can be interpreted, and it isthisformal theory whichBertrand Russellactually addressed when he presented his paradox, not necessarily a theory Cantor—who, as mentioned, was aware of several paradoxes—presumably had in mind. Axiomatic set theory was developed in response to these early attempts to understand sets, with the goal of determining precisely what operations were allowed and when. A naive set theory is notnecessarilyinconsistent, if it correctly specifies the sets allowed to be considered. This can be done by the means of definitions, which are implicit axioms. It is possible to state all the axioms explicitly, as in the case of Halmos'Naive Set Theory, which is actually an informal presentation of the usual axiomaticZermelo–Fraenkel set theory. It is "naive" in that the language and notations are those of ordinary informal mathematics, and in that it does not deal with consistency or completeness of the axiom system. 
Likewise, an axiomatic set theory is not necessarily consistent: not necessarily free of paradoxes. It follows fromGödel's incompleteness theoremsthat a sufficiently complicatedfirst-order logicsystem (which includes most common axiomatic set theories) cannot be proved consistent from within the theory itself – even if it actually is consistent. However, the common axiomatic systems are generally believed to be consistent; by their axioms they do excludesomeparadoxes, likeRussell's paradox. Based onGödel's theorem, it is just not known – and never can be – if there arenoparadoxes at all in these theories or in any first-order set theory. The termnaive set theoryis still today also used in some literature[11]to refer to the set theories studied by Frege and Cantor, rather than to the informal counterparts of modern axiomatic set theory. The choice between an axiomatic approach and other approaches is largely a matter of convenience. In everyday mathematics the best choice may be informal use of axiomatic set theory. References to particular axioms typically then occur only when demanded by tradition, e.g. theaxiom of choiceis often mentioned when used. Likewise, formal proofs occur only when warranted by exceptional circumstances. This informal usage of axiomatic set theory can have (depending on notation) precisely theappearanceof naive set theory as outlined below. It is considerably easier to read and write (in the formulation of most statements, proofs, and lines of discussion) and is less error-prone than a strictly formal approach. In naive set theory, asetis described as a well-defined collection of objects. These objects are called theelementsormembersof the set. Objects can be anything: numbers, people, other sets, etc. For instance, 4 is a member of the set of all evenintegers. Clearly, the set of even numbers is infinitely large; there is no requirement that a set be finite. The definition of sets goes back toGeorg Cantor. He wrote in his 1915 articleBeiträge zur Begründung der transfiniten Mengenlehre: Unter einer 'Menge' verstehen wir jede Zusammenfassung M von bestimmten wohlunterschiedenen Objekten unserer Anschauung oder unseres Denkens (welche die 'Elemente' von M genannt werden) zu einem Ganzen. A set is a gathering together into a whole of definite, distinct objects of our perception or of our thought—which are called elements of the set. It doesnotfollow from this definitionhowsets can be formed, and what operations on sets again will produce a set. The term "well-defined" in "well-defined collection of objects" cannot, by itself, guarantee the consistency and unambiguity of what exactly constitutes and what does not constitute a set. Attempting to achieve this would be the realm of axiomatic set theory or of axiomaticclass theory. The problem, in this context, with informally formulated set theories, not derived from (and implying) any particular axiomatic theory, is that there may be several widely differing formalized versions, that have both different sets and different rules for how new sets may be formed, that all conform to the original informal definition. For example, Cantor's verbatim definition allows for considerable freedom in what constitutes a set. On the other hand, it is unlikely that Cantor was particularly interested in sets containing cats and dogs, but rather only in sets containing purely mathematical objects. An example of such a class of sets could be thevon Neumann universe. 
But even when fixing the class of sets under consideration, it is not always clear which rules for set formation are allowed without introducing paradoxes. For the purpose of fixing the discussion below, the term "well-defined" should instead be interpreted as anintention, with either implicit or explicit rules (axioms or definitions), to rule out inconsistencies. The purpose is to keep the often deep and difficult issues of consistency away from the, usually simpler, context at hand. An explicit ruling out ofallconceivable inconsistencies (paradoxes) cannot be achieved for an axiomatic set theory anyway, due to Gödel's second incompleteness theorem, so this does not at all hamper the utility of naive set theory as compared to axiomatic set theory in the simple contexts considered below. It merely simplifies the discussion. Consistency is henceforth taken for granted unless explicitly mentioned. Ifxis a member of a setA, then it is also said thatxbelongs toA, or thatxis inA. This is denoted byx∈A. The symbol ∈ is a derivation from the lowercase Greek letterepsilon, "ε", introduced byGiuseppe Peanoin 1889 and is the first letter of the wordἐστί(means "is"). The symbol ∉ is often used to writex∉A, meaning "x is not in A". Two setsAandBare defined to beequalwhen they have precisely the same elements, that is, if every element ofAis an element ofBand every element ofBis an element ofA. (Seeaxiom of extensionality.) Thus a set is completely determined by its elements; the description is immaterial. For example, the set with elements 2, 3, and 5 is equal to the set of allprime numbersless than 6. If the setsAandBare equal, this is denoted symbolically asA=B(as usual). Theempty set, denoted as∅{\displaystyle \varnothing }and sometimes{}{\displaystyle \{\}}, is a set with no members at all. Because a set is determined completely by its elements, there can be only one empty set. (Seeaxiom of empty set.)[12]Although the empty set has no members, it can be a member of other sets. Thus∅≠{∅}{\displaystyle \varnothing \neq \{\varnothing \}}, because the former has no members and the latter has one member.[13] The simplest way to describe a set is to list its elements between curly braces (known as defining a setextensionally). Thus{1, 2}denotes the set whose only elements are1and2. (Seeaxiom of pairing.) Note the following points: (These are consequences of the definition of equality in the previous section.) This notation can be informally abused by saying something like{dogs}to indicate the set of all dogs, but this example would usually be read by mathematicians as "the set containing the single elementdogs". An extreme (but correct) example of this notation is{}, which denotes the empty set. The notation{x:P(x)}, or sometimes{x|P(x)}, is used to denote the set containing all objects for which the conditionPholds (known as defining a setintensionally). For example,{x|x∈R}denotes the set ofreal numbers,{x|xhas blonde hair}denotes the set of everything with blonde hair. This notation is calledset-builder notation(or "set comprehension", particularly in the context ofFunctional programming). Some variants of set builder notation are: Given two setsAandB,Ais asubsetofBif every element ofAis also an element ofB. In particular, each setBis a subset of itself; a subset ofBthat is not equal toBis called aproper subset. IfAis a subset ofB, then one can also say thatBis asupersetofA, thatAiscontained inB, or thatBcontainsA. In symbols,A⊆Bmeans thatAis a subset ofB, andB⊇Ameans thatBis a superset ofA. 
Some authors use the symbols ⊂ and ⊃ for subsets, and others use these symbols only forpropersubsets. For clarity, one can explicitly use the symbols ⊊ and ⊋ to indicate non-equality. As an illustration, letRbe the set of real numbers, letZbe the set of integers, letObe the set of odd integers, and letPbe the set of current or formerU.S. Presidents. ThenOis a subset ofZ,Zis a subset ofR, and (hence)Ois a subset ofR, where in all casessubsetmay even be read asproper subset. Not all sets are comparable in this way. For example, it is not the case either thatRis a subset ofPnor thatPis a subset ofR. It follows immediately from the definition of equality of sets above that, given two setsAandB,A=Bif and only ifA⊆BandB⊆A. In fact this is often given as the definition of equality. Usually when trying toprovethat two sets are equal, one aims to show these two inclusions. Theempty setis a subset of every set (the statement that all elements of the empty set are also members of any setAisvacuously true). The set of all subsets of a given setAis called thepower setofAand is denoted by2A{\displaystyle 2^{A}}orP(A){\displaystyle P(A)}; the "P" is sometimes in ascriptfont:⁠℘(A){\displaystyle \wp (A)}⁠. If the setAhasnelements, thenP(A){\displaystyle P(A)}will have2n{\displaystyle 2^{n}}elements. In certain contexts, one may consider all sets under consideration as being subsets of some givenuniversal set. For instance, when investigating properties of thereal numbersR(and subsets ofR),Rmay be taken as the universal set. A true universal set is not included in standard set theory (seeParadoxesbelow), but is included in some non-standard set theories. Given a universal setUand a subsetAofU, thecomplementofA(inU) is defined as In other words,AC("A-complement"; sometimes simplyA', "A-prime" ) is the set of all members ofUwhich are not members ofA. Thus withR,ZandOdefined as in the section on subsets, ifZis the universal set, thenOCis the set of even integers, while ifRis the universal set, thenOCis the set of all real numbers that are either even integers or not integers at all. Given two setsAandB, theirunionis the set consisting of all objects which are elements ofAor ofBor of both (seeaxiom of union). It is denoted byA∪B. TheintersectionofAandBis the set of all objects which are both inAand inB. It is denoted byA∩B. Finally, therelative complementofBrelative toA, also known as theset theoretic differenceofAandB, is the set of all objects that belong toAbutnottoB. It is written asA∖BorA−B. Symbolically, these are respectively The setBdoesn't have to be a subset ofAforA∖Bto make sense; this is the difference between the relative complement and the absolute complement (AC=U∖A) from the previous section. To illustrate these ideas, letAbe the set of left-handed people, and letBbe the set of people with blond hair. ThenA∩Bis the set of all left-handed blond-haired people, whileA∪Bis the set of all people who are left-handed or blond-haired or both.A∖B, on the other hand, is the set of all people that are left-handed but not blond-haired, whileB∖Ais the set of all people who have blond hair but aren't left-handed. Now letEbe the set of all human beings, and letFbe the set of all living things over 1000 years old. What isE∩Fin this case? No living human being isover 1000 years old, soE∩Fmust be theempty set{}. For any setA, the power setP(A){\displaystyle P(A)}is aBoolean algebraunder the operations of union and intersection. 
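The operations just described — union, intersection, relative complement, and the absolute complement with respect to a universal set — map directly onto Python's set operators. The small sketch below (the universe U = {0, …, 9} is an arbitrary choice) also spot-checks one of the De Morgan laws underlying the Boolean-algebra remark above.

```python
# Set algebra on a small universal set U (illustrative values).
U = set(range(10))
A = {1, 3, 5, 7, 9}   # the odd numbers in U
B = {0, 1, 2, 3, 4}

print(A | B)          # union
print(A & B)          # intersection: {1, 3}
print(A - B)          # relative complement: {5, 7, 9}
Ac = U - A            # absolute complement of A in U
print(Ac)             # the even numbers in U

# De Morgan: the complement of a union is the intersection of complements.
print(U - (A | B) == (U - A) & (U - B))   # True
```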
Intuitively, an ordered pair is simply a collection of two objects such that one can be distinguished as the first element and the other as the second element, and having the fundamental property that two ordered pairs are equal if and only if their first elements are equal and their second elements are equal.

Formally, an ordered pair with first coordinate a and second coordinate b, usually denoted by (a, b), can be defined as the set {{a}, {a, b}}. It follows that two ordered pairs (a, b) and (c, d) are equal if and only if a = c and b = d. Alternatively, an ordered pair can be formally thought of as a set {a, b} with a total order. (The notation (a, b) is also used to denote an open interval on the real number line, but the context should make it clear which meaning is intended. Otherwise, the notation ]a, b[ may be used to denote the open interval, whereas (a, b) is used for the ordered pair.)

If A and B are sets, then the Cartesian product (or simply product) is defined to be

A × B = {(a, b) : a ∈ A and b ∈ B}.

That is, A × B is the set of all ordered pairs whose first coordinate is an element of A and whose second coordinate is an element of B. This definition may be extended to a set A × B × C of ordered triples, and more generally to sets of ordered n-tuples for any positive integer n. It is even possible to define infinite Cartesian products, but this requires a more recondite definition of the product.

Cartesian products were first developed by René Descartes in the context of analytic geometry. If R denotes the set of all real numbers, then R² := R × R represents the Euclidean plane and R³ := R × R × R represents three-dimensional Euclidean space.

There are some ubiquitous sets for which the notation is almost universal, such as the set N of natural numbers, the set Z of integers, and the set R of real numbers.

The unrestricted formation principle of sets, referred to as the axiom schema of unrestricted comprehension, is the source of several early-appearing paradoxes, such as Russell's paradox, Cantor's paradox, and the Burali-Forti paradox. If the axiom schema of unrestricted comprehension is weakened to the axiom schema of specification or axiom schema of separation, then all the above paradoxes disappear.[14] There is a corollary. With the axiom schema of separation as an axiom of the theory, it follows, as a theorem of the theory, that the set of all sets does not exist. Or, more spectacularly (Halmos' phrasing[15]): There is no universe. Proof: Suppose that it exists and call it U. Now apply the axiom schema of separation with X = U and for P(x) use x ∉ x. This leads to Russell's paradox again. Hence U cannot exist in this theory.[14]

Related to the above constructions is formation of the set

Y = {x : (x ∈ x) → {} ≠ {}},

where the statement following the implication certainly is false. It follows, from the definition of Y, using the usual inference rules (and some afterthought when reading the proof in the linked article below), both that Y ∈ Y → {} ≠ {} and Y ∈ Y hold, hence {} ≠ {}. This is Curry's paradox.

It is (perhaps surprisingly) not the possibility of x ∈ x that is problematic. It is again the axiom schema of unrestricted comprehension allowing (x ∈ x) → {} ≠ {} for P(x). With the axiom schema of specification instead of unrestricted comprehension, the conclusion Y ∈ Y does not hold and hence {} ≠ {} is not a logical consequence. Nonetheless, the possibility of x ∈ x is often removed explicitly[16] or, e.g.
Nonetheless, the possibility of x ∈ x is often removed explicitly[16] or, e.g. in ZFC, implicitly,[17] by demanding the axiom of regularity to hold.[17] One consequence of it is

$\forall x\, (x \notin x),$

or, in other words, no set is an element of itself.[18]

The axiom schema of separation is simply too weak (while unrestricted comprehension is a very strong axiom, too strong for set theory) to develop set theory with its usual operations and constructions outlined above.[14] The axiom of regularity is of a restrictive nature as well. Therefore, one is led to the formulation of other axioms to guarantee the existence of enough sets to form a set theory. Some of these have been described informally above and many others are possible. Not all conceivable axioms can be combined freely into consistent theories. For example, the axiom of choice of ZFC is incompatible with the conceivable statement "every set of reals is Lebesgue measurable". The former implies the latter is false.
https://en.wikipedia.org/wiki/Naive_set_theory
In mathematics, a topological space is, roughly speaking, a geometrical space in which closeness is defined but cannot necessarily be measured by a numeric distance. More specifically, a topological space is a set whose elements are called points, along with an additional structure called a topology, which can be defined as a set of neighbourhoods for each point that satisfy some axioms formalizing the concept of closeness. There are several equivalent definitions of a topology, the most commonly used of which is the definition through open sets, which is easier than the others to manipulate.

A topological space is the most general type of mathematical space that allows for the definition of limits, continuity, and connectedness.[1][2] Common types of topological spaces include Euclidean spaces, metric spaces and manifolds. Although very general, the concept of topological spaces is fundamental, and used in virtually every branch of modern mathematics. The study of topological spaces in their own right is called general topology (or point-set topology).

Around 1735, Leonhard Euler discovered the formula $V - E + F = 2$ relating the number of vertices (V), edges (E) and faces (F) of a convex polyhedron, and hence of a planar graph. The study and generalization of this formula, specifically by Cauchy (1789–1857) and L'Huilier (1750–1840), boosted the study of topology. In 1827, Carl Friedrich Gauss published General investigations of curved surfaces, which in section 3 defines the curved surface in a similar manner to the modern topological understanding: "A curved surface is said to possess continuous curvature at one of its points A, if the direction of all the straight lines drawn from A to points of the surface at an infinitesimal distance from A are deflected infinitesimally from one and the same plane passing through A."[3]

Yet, "until Riemann's work in the early 1850s, surfaces were always dealt with from a local point of view (as parametric surfaces) and topological issues were never considered".[4] "Möbius and Jordan seem to be the first to realize that the main problem about the topology of (compact) surfaces is to find invariants (preferably numerical) to decide the equivalence of surfaces, that is, to decide whether two surfaces are homeomorphic or not."[4]

The subject is clearly defined by Felix Klein in his "Erlangen Program" (1872): the invariants of arbitrary continuous transformations, a kind of geometry. The term "topology" was introduced by Johann Benedict Listing in 1847, although he had used the term in correspondence some years earlier instead of the previously used "Analysis situs". The foundation of this science, for a space of any dimension, was created by Henri Poincaré. His first article on this topic appeared in 1894.[5] In the 1930s, James Waddell Alexander II and Hassler Whitney first expressed the idea that a surface is a topological space that is locally like a Euclidean plane.

Topological spaces were first defined by Felix Hausdorff in 1914 in his seminal "Principles of Set Theory". Metric spaces had been defined earlier in 1906 by Maurice Fréchet, though it was Hausdorff who popularised the term "metric space" (German: metrischer Raum).[6][7]

The utility of the concept of a topology is shown by the fact that there are several equivalent definitions of this mathematical structure. Thus one chooses the axiomatization suited for the application.
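Euler's formula is easy to verify on concrete polyhedra. A minimal Python check using the standard vertex, edge and face counts:

```python
# V - E + F = 2 for convex polyhedra (equivalently, for planar graphs).
polyhedra = {
    "tetrahedron": (4, 6, 4),
    "cube": (8, 12, 6),
    "octahedron": (6, 12, 8),
}
for name, (v, e, f) in polyhedra.items():
    print(name, v - e + f)  # prints 2 for each
```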
The most commonly used is that in terms of open sets, but perhaps more intuitive is that in terms of neighbourhoods and so this is given first. This axiomatization is due to Felix Hausdorff. Let $X$ be a (possibly empty) set. The elements of $X$ are usually called points, though they can be any mathematical object. Let $\mathcal{N}$ be a function assigning to each $x$ (point) in $X$ a non-empty collection $\mathcal{N}(x)$ of subsets of $X$. The elements of $\mathcal{N}(x)$ will be called neighbourhoods of $x$ with respect to $\mathcal{N}$ (or, simply, neighbourhoods of $x$). The function $\mathcal{N}$ is called a neighbourhood topology if the axioms below[8] are satisfied; and then $X$ with $\mathcal{N}$ is called a topological space.

1. If $N$ is a neighbourhood of $x$ (i.e., $N \in \mathcal{N}(x)$), then $x \in N$.
2. If $N$ is a subset of $X$ and includes a neighbourhood of $x$, then $N$ is a neighbourhood of $x$.
3. The intersection of two neighbourhoods of $x$ is a neighbourhood of $x$.
4. Any neighbourhood $N$ of $x$ includes a neighbourhood $M$ of $x$ such that $N$ is a neighbourhood of each point of $M$.

The first three axioms for neighbourhoods have a clear meaning. The fourth axiom has a very important use in the structure of the theory, that of linking together the neighbourhoods of different points of $X$.

A standard example of such a system of neighbourhoods is for the real line $\mathbb{R}$, where a subset $N$ of $\mathbb{R}$ is defined to be a neighbourhood of a real number $x$ if it includes an open interval containing $x$.

Given such a structure, a subset $U$ of $X$ is defined to be open if $U$ is a neighbourhood of all points in $U$. The open sets then satisfy the axioms given below in the next definition of a topological space. Conversely, when given the open sets of a topological space, the neighbourhoods satisfying the above axioms can be recovered by defining $N$ to be a neighbourhood of $x$ if $N$ includes an open set $U$ such that $x \in U$.[9]

A topology on a set $X$ may be defined as a collection $\tau$ of subsets of $X$, called open sets and satisfying the following axioms:[10]

1. The empty set and $X$ itself belong to $\tau$.
2. Any arbitrary (finite or infinite) union of members of $\tau$ belongs to $\tau$.
3. The intersection of any finite number of members of $\tau$ belongs to $\tau$.

As this definition of a topology is the most commonly used, the set $\tau$ of the open sets is commonly called a topology on $X$.

A subset $C \subseteq X$ is said to be closed in $(X, \tau)$ if its complement $X \setminus C$ is an open set. Using de Morgan's laws, the above axioms defining open sets become axioms defining closed sets:

1. The empty set and $X$ are closed.
2. The intersection of any collection of closed sets is closed.
3. The union of any finite number of closed sets is closed.

Using these axioms, another way to define a topological space is as a set $X$ together with a collection $\tau$ of closed subsets of $X$. Thus the sets in the topology $\tau$ are the closed sets, and their complements in $X$ are the open sets.

There are many other equivalent ways to define a topological space: in other words the concepts of neighbourhood, or that of open or closed sets can be reconstructed from other starting points and satisfy the correct axioms. Another way to define a topological space is by using the Kuratowski closure axioms, which define the closed sets as the fixed points of an operator on the power set of $X$.

A net is a generalisation of the concept of sequence. A topology is completely determined if for every net in $X$ the set of its accumulation points is specified. Many topologies can be defined on a set to form a topological space.
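On a finite set the open-set axioms can be checked exhaustively (for finite families, closure under pairwise unions and intersections suffices). A minimal Python sketch with illustrative data:

```python
def is_topology(X, tau):
    """Check the open-set axioms for a candidate topology tau on a finite set X."""
    tau = {frozenset(u) for u in tau}
    X = frozenset(X)
    if frozenset() not in tau or X not in tau:
        return False
    for u in tau:
        for v in tau:
            if u | v not in tau or u & v not in tau:  # unions and intersections
                return False                          # (finite, so pairwise suffices)
    return True

X = {1, 2, 3}
print(is_topology(X, [set(), {1}, {1, 2}, X]))   # True: a chain of open sets
print(is_topology(X, [set(), {1}, {2}, X]))      # False: {1} ∪ {2} = {1, 2} missing
```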
When every open set of a topology $\tau_1$ is also open for a topology $\tau_2$, one says that $\tau_2$ is finer than $\tau_1$, and $\tau_1$ is coarser than $\tau_2$. A proof that relies only on the existence of certain open sets will also hold for any finer topology, and similarly a proof that relies only on certain sets not being open applies to any coarser topology. The terms larger and smaller are sometimes used in place of finer and coarser, respectively. The terms stronger and weaker are also used in the literature, but with little agreement on the meaning, so one should always be sure of an author's convention when reading.

The collection of all topologies on a given fixed set $X$ forms a complete lattice: if $F = \{\tau_\alpha : \alpha \in A\}$ is a collection of topologies on $X$, then the meet of $F$ is the intersection of $F$, and the join of $F$ is the meet of the collection of all topologies on $X$ that contain every member of $F$.

A function $f : X \to Y$ between topological spaces is called continuous if for every $x \in X$ and every neighbourhood $N$ of $f(x)$ there is a neighbourhood $M$ of $x$ such that $f(M) \subseteq N$. This relates easily to the usual definition in analysis. Equivalently, $f$ is continuous if the inverse image of every open set is open.[11] This is an attempt to capture the intuition that there are no "jumps" or "separations" in the function. A homeomorphism is a bijection that is continuous and whose inverse is also continuous. Two spaces are called homeomorphic if there exists a homeomorphism between them. From the standpoint of topology, homeomorphic spaces are essentially identical.[12]

In category theory, one of the fundamental categories is Top, which denotes the category of topological spaces whose objects are topological spaces and whose morphisms are continuous functions. The attempt to classify the objects of this category (up to homeomorphism) by invariants has motivated areas of research, such as homotopy theory, homology theory, and K-theory.

A given set may have many different topologies. If a set is given a different topology, it is viewed as a different topological space. Any set can be given the discrete topology in which every subset is open. The only convergent sequences or nets in this topology are those that are eventually constant. Also, any set can be given the trivial topology (also called the indiscrete topology), in which only the empty set and the whole space are open. Every sequence and net in this topology converges to every point of the space. This example shows that in general topological spaces, limits of sequences need not be unique. However, often topological spaces must be Hausdorff spaces where limit points are unique.

There exist numerous topologies on any given finite set. Such spaces are called finite topological spaces. Finite spaces are sometimes used to provide examples or counterexamples to conjectures about topological spaces in general. Any set can be given the cofinite topology in which the open sets are the empty set and the sets whose complement is finite. This is the smallest T1 topology on any infinite set.[13]
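The open-set characterization of continuity is directly checkable for finite spaces. A small Python sketch (the spaces and maps are illustrative; tau_X and tau_Y are assumed to already be topologies):

```python
def preimage(f, X, V):
    return {x for x in X if f(x) in V}

def is_continuous(f, X, tau_X, tau_Y):
    """f : X -> Y is continuous iff the preimage of every open set is open."""
    opens = {frozenset(u) for u in tau_X}
    return all(frozenset(preimage(f, X, V)) in opens for V in tau_Y)

X = {1, 2, 3}
tau_X = [set(), {1}, {1, 2}, X]           # a topology on X
Y = {'a', 'b'}
tau_Y = [set(), {'a'}, Y]                 # a Sierpinski-like topology on Y

f = lambda x: 'a' if x == 1 else 'b'
print(is_continuous(f, X, tau_X, tau_Y))  # True: preimage of {'a'} is {1}, open

g = lambda x: 'a' if x == 3 else 'b'
print(is_continuous(g, X, tau_X, tau_Y))  # False: preimage of {'a'} is {3}, not open
```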
Any set can be given the cocountable topology, in which a set is defined as open if it is either empty or its complement is countable. When the set is uncountable, this topology serves as a counterexample in many situations.

The real line can also be given the lower limit topology. Here, the basic open sets are the half open intervals $[a, b)$. This topology on $\mathbb{R}$ is strictly finer than the Euclidean topology defined above; a sequence converges to a point in this topology if and only if it converges from above in the Euclidean topology. This example shows that a set may have many distinct topologies defined on it.

If $\gamma$ is an ordinal number, then the set $\gamma = [0, \gamma)$ may be endowed with the order topology generated by the intervals $(\alpha, \beta)$, $[0, \beta)$, and $(\alpha, \gamma)$ where $\alpha$ and $\beta$ are elements of $\gamma$.

Every manifold has a natural topology since it is locally Euclidean. Similarly, every simplex and every simplicial complex inherits a natural topology from Euclidean space.

The Sierpiński space is the simplest non-discrete topological space. It has important relations to the theory of computation and semantics.

Every subset of a topological space can be given the subspace topology in which the open sets are the intersections of the open sets of the larger space with the subset. For any indexed family of topological spaces, the product can be given the product topology, which is generated by the inverse images of open sets of the factors under the projection mappings. For example, in finite products, a basis for the product topology consists of all products of open sets. For infinite products, there is the additional requirement that in a basic open set, all but finitely many of its projections are the entire space. This construction is a special case of an initial topology.

A quotient space is defined as follows: if $X$ is a topological space and $Y$ is a set, and if $f : X \to Y$ is a surjective function, then the quotient topology on $Y$ is the collection of subsets of $Y$ that have open inverse images under $f$. In other words, the quotient topology is the finest topology on $Y$ for which $f$ is continuous. A common example of a quotient topology is when an equivalence relation is defined on the topological space $X$. The map $f$ is then the natural projection onto the set of equivalence classes. This construction is a special case of a final topology.

The Vietoris topology on the set of all non-empty subsets of a topological space $X$, named for Leopold Vietoris, is generated by the following basis: for every $n$-tuple $U_1, \ldots, U_n$ of open sets in $X$, we construct a basis set consisting of all subsets of the union of the $U_i$ that have non-empty intersections with each $U_i$.
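For finite spaces the quotient topology described above can be computed exactly as defined: a subset of Y is open iff its preimage is open in X. A minimal Python sketch with illustrative data:

```python
from itertools import chain, combinations

def quotient_topology(X, tau_X, Y, f):
    """V ⊆ Y is open in the quotient topology iff f^{-1}(V) is open in X."""
    tau_X = {frozenset(u) for u in tau_X}
    all_subsets = chain.from_iterable(combinations(sorted(Y), r)
                                      for r in range(len(Y) + 1))
    return [set(v) for v in all_subsets
            if frozenset(x for x in X if f(x) in v) in tau_X]

X = {1, 2, 3, 4}
tau_X = [set(), {1, 2}, {3, 4}, X]
f = lambda x: 'a' if x <= 2 else 'b'   # collapse {1,2} and {3,4} to single points
print(quotient_topology(X, tau_X, {'a', 'b'}, f))
# [set(), {'a'}, {'b'}, {'a', 'b'}] -- the discrete topology on the two classes
```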
The Fell topology on the set of all non-empty closed subsets of a locally compact Polish space $X$ is a variant of the Vietoris topology, and is named after mathematician James Fell. It is generated by the following basis: for every $n$-tuple $U_1, \ldots, U_n$ of open sets in $X$ and for every compact set $K$, the set of all subsets of $X$ that are disjoint from $K$ and have nonempty intersections with each $U_i$ is a member of the basis.

Metric spaces embody a metric, a precise notion of distance between points. Every metric space can be given a metric topology, in which the basic open sets are open balls defined by the metric. This is the standard topology on any normed vector space. On a finite-dimensional vector space this topology is the same for all norms.

There are many ways of defining a topology on $\mathbb{R}$, the set of real numbers. The standard topology on $\mathbb{R}$ is generated by the open intervals. The set of all open intervals forms a base or basis for the topology, meaning that every open set is a union of some collection of sets from the base. In particular, this means that a set is open if there exists an open interval of non-zero radius about every point in the set. More generally, the Euclidean spaces $\mathbb{R}^n$ can be given a topology. In the usual topology on $\mathbb{R}^n$ the basic open sets are the open balls. Similarly, $\mathbb{C}$, the set of complex numbers, and $\mathbb{C}^n$ have a standard topology in which the basic open sets are open balls.

For any algebraic objects we can introduce the discrete topology, under which the algebraic operations are continuous functions. For any such structure that is not finite, we often have a natural topology compatible with the algebraic operations, in the sense that the algebraic operations are still continuous. This leads to concepts such as topological groups, topological rings, topological fields and topological vector spaces over the latter. Local fields are topological fields important in number theory.

The Zariski topology is defined algebraically on the spectrum of a ring or an algebraic variety. On $\mathbb{R}^n$ or $\mathbb{C}^n$, the closed sets of the Zariski topology are the solution sets of systems of polynomial equations.

If $\Gamma$ is a filter on a set $X$ then $\{\varnothing\} \cup \Gamma$ is a topology on $X$.

Many sets of linear operators in functional analysis are endowed with topologies that are defined by specifying when a particular sequence of functions converges to the zero function.

A linear graph has a natural topology that generalizes many of the geometric aspects of graphs with vertices and edges. Outer space of a free group $F_n$ consists of the so-called "marked metric graph structures" of volume 1 on $F_n$.[14]

Topological spaces can be broadly classified, up to homeomorphism, by their topological properties. A topological property is a property of spaces that is invariant under homeomorphisms. To prove that two spaces are not homeomorphic it is sufficient to find a topological property not shared by them. Examples of such properties include connectedness, compactness, and various separation axioms. For algebraic invariants see algebraic topology.
https://en.wikipedia.org/wiki/Topological_space#Definitions
Non-well-founded set theories are variants of axiomatic set theory that allow sets to be elements of themselves and otherwise violate the rule of well-foundedness. In non-well-founded set theories, the foundation axiom of ZFC is replaced by axioms implying its negation.

The study of non-well-founded sets was initiated by Dmitry Mirimanoff in a series of papers between 1917 and 1920, in which he formulated the distinction between well-founded and non-well-founded sets; he did not regard well-foundedness as an axiom. Although a number of axiomatic systems of non-well-founded sets were proposed afterwards, they did not find much in the way of applications until the book Non-Well-Founded Sets by Peter Aczel introduced hyperset theory in 1988.[1][2][3]

The theory of non-well-founded sets has been applied in the logical modelling of non-terminating computational processes in computer science (process algebra and final semantics), linguistics and natural language semantics (situation theory), philosophy (work on the Liar Paradox), and in a different setting, non-standard analysis.[4]

In 1917, Dmitry Mirimanoff introduced[5][6][7][8] the concept of well-foundedness of a set: a set $x_0$ is well-founded if it has no infinite descending membership sequence $\cdots \in x_2 \in x_1 \in x_0$. In ZFC, there is no infinite descending ∈-sequence by the axiom of regularity. In fact, the axiom of regularity is often called the foundation axiom since it can be proved within ZFC⁻ (that is, ZFC without the axiom of regularity) that well-foundedness implies regularity. In variants of ZFC without the axiom of regularity, the possibility of non-well-founded sets with set-like ∈-chains arises. For example, a set A such that A ∈ A is non-well-founded.

Although Mirimanoff also introduced a notion of isomorphism between possibly non-well-founded sets, he considered neither an axiom of foundation nor of anti-foundation.[7] In 1926, Paul Finsler introduced the first axiom that allowed non-well-founded sets. After Zermelo adopted Foundation into his own system in 1930 (from previous work of von Neumann 1925–1929), interest in non-well-founded sets waned for decades.[9] An early non-well-founded set theory was Willard Van Orman Quine's New Foundations, although it is not merely ZF with a replacement for Foundation.

Several proofs of the independence of Foundation from the rest of ZF were published in the 1950s, particularly by Paul Bernays (1954), following an announcement of the result in an earlier paper of his from 1941, and by Ernst Specker, who gave a different proof in his Habilitationsschrift of 1951, a proof which was published in 1957. Then in 1957 Rieger's theorem was published, which gave a general method for such proofs to be carried out, rekindling some interest in non-well-founded axiomatic systems.[10] The next axiom proposal came in a 1960 congress talk of Dana Scott (never published as a paper), proposing an alternative axiom now called SAFA.[11] Another axiom proposed in the late 1960s was Maurice Boffa's axiom of superuniversality, described by Aczel as the highpoint of research of its decade.[12] Boffa's idea was to make foundation fail as badly as it can (or rather, as extensionality permits): Boffa's axiom implies that every extensional set-like relation is isomorphic to the elementhood predicate on a transitive class.

A more recent approach to non-well-founded set theory, pioneered by M. Forti and F. Honsell in the 1980s, borrows from computer science the concept of a bisimulation. Bisimilar sets are considered indistinguishable and thus equal, which leads to a strengthening of the axiom of extensionality.
In this context, axioms contradicting the axiom of regularity are known as anti-foundation axioms, and a set that is not necessarily well-founded is called a hyperset. Four mutually independent anti-foundation axioms are well-known, sometimes abbreviated by the first letter in the following list:

- AFA ("Anti-Foundation Axiom"), due to M. Forti and F. Honsell (also known as Aczel's anti-foundation axiom);
- SAFA ("Scott's AFA"), due to Dana Scott;
- FAFA ("Finsler's AFA"), due to Paul Finsler;
- BAFA ("Boffa's AFA"), due to Maurice Boffa.

They essentially correspond to four different notions of equality for non-well-founded sets. The first of these, AFA, is based on accessible pointed graphs (apg) and states that two hypersets are equal if and only if they can be pictured by the same apg. Within this framework, it can be shown that the equation x = {x} has one and only one solution, the unique Quine atom of the theory. Each of the axioms given above extends the universe of the previous, so that: V ⊆ A ⊆ S ⊆ F ⊆ B. In the Boffa universe, the distinct Quine atoms form a proper class.[13]

It is worth emphasizing that hyperset theory is an extension of classical set theory rather than a replacement: the well-founded sets within a hyperset domain conform to classical set theory.

In published research, non-well-founded sets are also called hypersets, in parallel to the hyperreal numbers of nonstandard analysis.[14][15] The hypersets were extensively used by Jon Barwise and John Etchemendy in their 1987 book The Liar, on the liar's paradox. The book's proposals contributed to the theory of truth.[14] The book is also a good introduction to the topic of non-well-founded sets.[14]
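AFA's criterion of equality, bisimilarity of accessible pointed graphs, is algorithmically checkable on finite graphs. The following Python sketch is a naive greatest-fixed-point computation (the graph encoding and node names are illustrative, not from the literature):

```python
def bisimilar(edges, a, b):
    """Nodes a, b of a finite graph (children given by edges[n]) are
    bisimilar iff they survive repeated refinement of the full relation."""
    nodes = list(edges)
    rel = {(x, y) for x in nodes for y in nodes}  # start with everything related
    changed = True
    while changed:
        changed = False
        for (x, y) in list(rel):
            ok = (all(any((c, d) in rel for d in edges[y]) for c in edges[x]) and
                  all(any((c, d) in rel for c in edges[x]) for d in edges[y]))
            if not ok:
                rel.discard((x, y))
                changed = True
    return (a, b) in rel

# Two pictures of the unique Quine atom x = {x}: a self-loop and a two-cycle.
edges = {'p': ['p'], 'q': ['r'], 'r': ['q']}
print(bisimilar(edges, 'p', 'q'))  # True: both apgs depict x = {x}
```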
https://en.wikipedia.org/wiki/Non-well-founded_set_theory
In first-order logic, a first-order theory is given by a set of axioms in some language. This entry lists some of the more common examples used in model theory and some of their properties.

For every natural mathematical structure there is a signature σ listing the constants, functions, and relations of the theory together with their arities, so that the object is naturally a σ-structure. Given a signature σ there is a unique first-order language Lσ that can be used to capture the first-order expressible facts about the σ-structure. There are two common ways to specify theories: one can list a set of axioms, or one can fix a σ-structure and take the theory to be the set of sentences satisfied by it. An Lσ theory may have various properties, such as being consistent, complete, or decidable.

The signature of the pure identity theory is empty, with no functions, constants, or relations. Pure identity theory has no (non-logical) axioms. It is decidable. One of the few interesting properties that can be stated in the language of pure identity theory is that of being infinite. This is given by an infinite set of axioms stating there are at least 2 elements, there are at least 3 elements, and so on:

$\exists x_1 \exists x_2\, \neg(x_1 = x_2), \quad \exists x_1 \exists x_2 \exists x_3\, (\neg(x_1 = x_2) \land \neg(x_1 = x_3) \land \neg(x_2 = x_3)), \quad \ldots$

These axioms define the theory of an infinite set. The opposite property of being finite cannot be stated in first-order logic for any theory that has arbitrarily large finite models: in fact any such theory has infinite models by the compactness theorem. In general if a property can be stated by a finite number of sentences of first-order logic then the opposite property can also be stated in first-order logic, but if a property needs an infinite number of sentences then its opposite property cannot be stated in first-order logic.

Any statement of pure identity theory is equivalent to either σ(N) or to ¬σ(N) for some finite subset N of the non-negative integers, where σ(N) is the statement that the number of elements is in N. It is even possible to describe all possible theories in this language as follows. Any theory is either the theory of all sets of cardinality in N for some finite subset N of the non-negative integers, or the theory of all sets whose cardinality is not in N, for some finite or infinite subset N of the non-negative integers. (There are no theories whose models are exactly sets of cardinality N if N is an infinite subset of the integers.) The complete theories are the theories of sets of cardinality n for some finite n, and the theory of infinite sets.

One special case of this is the inconsistent theory defined by the axiom ∃x ¬x = x. It is a perfectly good theory with many good properties: it is complete, decidable, finitely axiomatizable, and so on. The only problem is that it has no models at all. By Gödel's completeness theorem, it is the only theory (for any given language) with no models.[1] It is not the same as the theory of the empty set (in versions of first-order logic that allow a model to be empty): the theory of the empty set has exactly one model, which has no elements.

A set of unary relations Pi for i in some set I is called independent if for every two disjoint finite subsets A and B of I there is some element x such that Pi(x) is true for i in A and false for i in B. Independence can be expressed by a set of first-order statements. The theory of a countable number of independent unary relations is complete, but has no atomic models. It is also an example of a theory that is superstable but not totally transcendental.

The signature of equivalence relations has one binary infix relation symbol ~, no constants, and no functions.
Equivalence relations satisfy the axioms of reflexivity ($\forall x\, x \sim x$), symmetry ($\forall x \forall y\, (x \sim y \to y \sim x)$), and transitivity ($\forall x \forall y \forall z\, ((x \sim y \land y \sim z) \to x \sim z)$). Some first-order properties of equivalence relations are that ~ has an infinite number of equivalence classes, that ~ has exactly n equivalence classes, that all equivalence classes are infinite, or that all equivalence classes have size exactly n. The theory of an equivalence relation with exactly 2 infinite equivalence classes is an easy example of a theory which is ω-categorical but not categorical for any larger cardinal.

The equivalence relation ~ should not be confused with the identity symbol '=': if x = y then x ~ y, but the converse is not necessarily true. Theories of equivalence relations are not all that difficult or interesting, but often give easy examples or counterexamples for various statements.

The following constructions are sometimes used to produce examples of theories with certain spectra; in fact by applying them to a small number of explicit theories T one gets examples of complete countable theories with all possible uncountable spectra. If T is a theory in some language, we define a new theory 2T by adding a new binary relation to the language, and adding axioms stating that it is an equivalence relation, such that there are an infinite number of equivalence classes all of which are models of T. It is possible to iterate this construction transfinitely: given an ordinal α, define a new theory by adding an equivalence relation Eβ for each β < α, together with axioms stating that whenever β < γ then each Eγ equivalence class is the union of infinitely many Eβ equivalence classes, and each E0 equivalence class is a model of T. Informally, one can visualize models of this theory as infinitely branching trees of height α with models of T attached to all leaves.

The signature of orders has no constants or functions, and one binary relation symbol ≤. (It is of course possible to use ≥, < or > instead as the basic relation, with the obvious minor changes to the axioms.) We define x ≥ y, x < y, x > y as abbreviations for y ≤ x, x ≤ y ∧ ¬(y ≤ x), y < x, respectively. Some first-order properties of orders include transitivity, antisymmetry, totality ($\forall x \forall y\, (x \leq y \lor y \leq x)$), and density (between any two distinct elements there is another).

The theory DLO of dense linear orders without endpoints (i.e. no smallest or largest element) is complete, ω-categorical, but not categorical for any uncountable cardinal. There are three other very similar theories: the theory of dense linear orders with a smallest but no largest element, with a largest but no smallest element, or with both a smallest and a largest element. Being well ordered ("any non-empty subset has a minimal element") is not a first-order property; the usual definition involves quantifying over all subsets.

Lattices can be considered either as special sorts of partially ordered sets, with a signature consisting of one binary relation symbol ≤, or as algebraic structures with a signature consisting of two binary operations ∧ and ∨. The two approaches can be related by defining a ≤ b to mean a ∧ b = a. For two binary operations the axioms for a lattice are commutativity and associativity of ∧ and ∨, together with the absorption laws $a \vee (a \wedge b) = a$ and $a \wedge (a \vee b) = a$. For one relation ≤ the axioms state that ≤ is a partial order in which every pair of elements has a least upper bound and a greatest lower bound. First-order properties include distributivity and modularity. Heyting algebras can be defined as lattices with certain extra first-order properties. Completeness is not a first-order property of lattices.

The signature of graphs has no constants or functions, and one binary relation symbol R, where R(x,y) is read as "there is an edge from x to y". The axioms for the theory of graphs state that R is symmetric ($\forall x \forall y\, (R(x,y) \to R(y,x))$) and irreflexive ($\forall x\, \neg R(x,x)$). The theory of random graphs has the following extra axioms for each positive integer n: for any two disjoint sets of n vertices, there is a vertex joined to every vertex of the first set and to no vertex of the second. The theory of random graphs is ω-categorical, complete, and decidable, and its countable model is called the Rado graph. A statement in the language of graphs is true in this theory if and only if the probability that an n-vertex random graph models the statement tends to 1 in the limit as n goes to infinity.

There are several different signatures and conventions used for Boolean algebras; a common choice has two binary functions ∧ and ∨, a unary function ¬, and two constants 0 and 1, with the axioms being those of a complemented distributive lattice. Tarski proved that the theory of Boolean algebras is decidable.
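Stepping back to the equivalence relation axioms listed earlier: on a finite structure they can be verified mechanically. A small Python sketch with an illustrative relation (congruence mod 2):

```python
def is_equivalence(elements, rel):
    """Check reflexivity, symmetry and transitivity of a finite relation."""
    r = set(rel)
    reflexive = all((x, x) in r for x in elements)
    symmetric = all((y, x) in r for (x, y) in r)
    transitive = all((x, w) in r for (x, y) in r for (z, w) in r if y == z)
    return reflexive and symmetric and transitive

E = {1, 2, 3, 4}
mod2 = {(x, y) for x in E for y in E if (x - y) % 2 == 0}
print(is_equivalence(E, mod2))              # True: two classes, {1,3} and {2,4}
print(is_equivalence(E, {(1, 2), (2, 1)}))  # False: not reflexive
```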
We write x ≤ y as an abbreviation for x ∧ y = x, and atom(x) as an abbreviation for ¬x = 0 ∧ ∀y (y ≤ x → y = 0 ∨ y = x), read as "x is an atom", in other words a non-zero element with nothing between it and 0. Some first-order properties of Boolean algebras are being atomic (every non-zero element has an atom below it) and being atomless (there are no atoms). The theory of atomless Boolean algebras is ω-categorical and complete.

For any Boolean algebra B, several invariants l, m, and n can be defined. Two Boolean algebras are elementarily equivalent if and only if their invariants l, m, and n are the same. In other words, the values of these invariants classify the possible completions of the theory of Boolean algebras, so the possible complete theories correspond to the possible values of these invariants.

The signature of group theory has one constant 1 (the identity), one function of arity 1 (the inverse) whose value on t is denoted by t⁻¹, and one function of arity 2, which is usually omitted from terms. For any integer n, tⁿ is an abbreviation for the obvious term for the nth power of t.

Groups are defined by the axioms of associativity ($\forall x \forall y \forall z\, (xy)z = x(yz)$), identity ($\forall x\, (1x = x \land x1 = x)$), and inverse ($\forall x\, (x x^{-1} = 1 \land x^{-1} x = 1)$). Some properties of groups that can be defined in the first-order language of groups are being abelian ($\forall x \forall y\, xy = yx$), being divisible, and having exponent n ($\forall x\, x^n = 1$).

The theory of abelian groups is decidable.[2] The theory of infinite divisible torsion-free abelian groups is complete, as is the theory of infinite abelian groups of exponent p (for p prime). The theory of finite groups is the set of first-order statements in the language of groups that are true in all finite groups (there are plenty of infinite models of this theory). It is not completely trivial to find any such statement that is not true for all groups: one example is "given two elements of order 2, either they are conjugate or there is a non-trivial element commuting with both of them". The properties of being finite, or free, or simple, or torsion are not first-order. More precisely, the first-order theory of all groups with one of these properties has models that do not have this property.

The signature of (unital) rings has two constants 0 and 1, two binary functions + and ×, and, optionally, one unary negation function −.

Rings. Axioms: addition makes the ring into an abelian group, multiplication is associative and has an identity 1, and multiplication is left and right distributive.

Commutative rings. The axioms for rings plus ∀x∀y xy = yx.

Fields. The axioms for commutative rings plus ∀x (¬x = 0 → ∃y xy = 1) and ¬1 = 0. Many of the examples given here have only universal, or algebraic, axioms. The class of structures satisfying such a theory has the property of being closed under substructure. For example, a subset of a group closed under the group actions of multiplication and inverse is again a group. Since the signature of fields does not usually include multiplicative and additive inverse, the axioms for inverses are not universal, and therefore a substructure of a field closed under addition and multiplication is not always a field. This can be remedied by adding unary inverse functions to the language.

For any positive integer n the property that all equations of degree n have a root can be expressed by a single first-order sentence:

$\forall a_1 \forall a_2 \cdots \forall a_n\, \exists x\, (x^n + a_1 x^{n-1} + \cdots + a_{n-1} x + a_n = 0).$

Perfect fields. The axioms for fields, plus axioms for each prime number p stating that if p1 = 0 (i.e. the field has characteristic p), then every field element has a pth root.
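The group axioms given above can be checked exhaustively on a finite Cayley table. A minimal Python sketch, using the cyclic group of order 3 as illustrative data:

```python
def is_group(elements, op, e):
    """Check the group axioms on a finite Cayley table op[(a, b)] = a*b."""
    closed = all(op[(a, b)] in elements for a in elements for b in elements)
    assoc = all(op[(op[(a, b)], c)] == op[(a, op[(b, c)])]
                for a in elements for b in elements for c in elements)
    ident = all(op[(e, a)] == a and op[(a, e)] == a for a in elements)
    inv = all(any(op[(a, b)] == e and op[(b, a)] == e for b in elements)
              for a in elements)
    return closed and assoc and ident and inv

Z3 = {0, 1, 2}
add_mod3 = {(a, b): (a + b) % 3 for a in Z3 for b in Z3}
print(is_group(Z3, add_mod3, 0))  # True: the cyclic group of order 3
```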
Algebraically closed fields of characteristic p. The axioms for fields, plus for every positive n the axiom that all polynomials of degree n have a root, plus axioms fixing the characteristic. These are the classical examples of complete theories, categorical in all uncountable cardinals. The theory ACFp has a universal domain property, in the sense that every structure N satisfying the universal axioms of ACFp is a substructure of a sufficiently large algebraically closed field $M \models \mathrm{ACF}_p$, and additionally any two such embeddings N → M induce an automorphism of M.

Finite fields. The theory of finite fields is the set of all first-order statements that are true in all finite fields. Significant examples of such statements can, for example, be given by applying the Chevalley–Warning theorem over the prime fields. The name is a little misleading as the theory has plenty of infinite models. Ax proved that the theory is decidable.

Formally real fields. The axioms for fields plus, for every positive integer n, the axiom

$\forall x_1 \cdots \forall x_n\, (x_1^2 + \cdots + x_n^2 = 0 \to x_1 = 0 \land \cdots \land x_n = 0).$

That is, 0 is not a non-trivial sum of squares.

Real closed fields. The axioms for formally real fields plus axioms asserting that for every element x, either x or −x has a square root, and that every polynomial of odd degree has a root. The theory of real closed fields is effective and complete and therefore decidable (the Tarski–Seidenberg theorem). The addition of further function symbols (e.g., the exponential function, the sine function) may change decidability.

p-adic fields. Ax & Kochen (1965) showed that the theory of p-adic fields is decidable and gave a set of axioms for it.[3]

Axioms for various systems of geometry usually use a typed language, with the different types corresponding to different geometric objects such as points, lines, circles, planes, and so on. The signature will often consist of binary incidence relations between objects of different types; for example, the relation that a point lies on a line. The signature may have more complicated relations; for example ordered geometry might have a ternary "betweenness" relation for 3 points, which says whether one lies between two others, or a "congruence" relation between 2 pairs of points. Some examples of axiomatized systems of geometry include ordered geometry, absolute geometry, affine geometry, Euclidean geometry, projective geometry, and hyperbolic geometry. For each of these geometries there are many different and inequivalent systems of axioms for various dimensions. Some of these axiom systems include "completeness" axioms that are not first order.

As a typical example, the axioms for projective geometry use 2 types, points and lines, and a binary incidence relation between points and lines. If point and line variables are indicated by small and capital letters, and a incident to A is written as aA, then one set of axioms can be given in terms of this incidence relation. Euclid did not state all the axioms for Euclidean geometry explicitly, and the first complete list was given by Hilbert in Hilbert's axioms. This is not a first-order axiomatization as one of Hilbert's axioms is a second order completeness axiom. Tarski's axioms are a first-order axiomatization of Euclidean geometry. Tarski showed this axiom system is complete and decidable by relating it to the complete and decidable theory of real closed fields.
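The theory of finite fields above has concrete finite models in which the field axiom for inverses, ∀x(¬x = 0 → ∃y xy = 1), can be verified exhaustively. A small Python sketch over GF(7), using Fermat's little theorem to compute the inverses:

```python
p = 7  # arithmetic modulo a prime gives the finite field GF(p)
for x in range(1, p):
    y = pow(x, p - 2, p)          # Fermat: x^(p-2) is the inverse of x mod p
    assert (x * y) % p == 1
print("every nonzero element of GF(%d) has a multiplicative inverse" % p)
```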
The signature is that of fields (0, 1, +, −, ×) together with a unary function ∂, the derivation. The axioms are those for fields together with the rules $\partial(u + v) = \partial u + \partial v$ and $\partial(uv) = u\,\partial v + v\,\partial u$. For this theory one can add the condition that the characteristic is p, a prime or zero, to get the theory DFp of differential fields of characteristic p (and similarly with the other theories below).

If K is a differential field then the field of constants is $k = \{u \in K : \partial(u) = 0\}$. The theory of differentially perfect fields is the theory of differential fields together with the condition that the field of constants is perfect; in other words, for each prime p it has the axiom stating that if the field has characteristic p, then every constant has a pth root that is a constant. (There is little point in demanding that the whole field should be a perfect field, because in non-zero characteristic this implies the differential is 0.) For technical reasons to do with quantifier elimination, it is sometimes more convenient to force the constant field to be perfect by adding a new symbol r to the signature, with axioms making r pick out such pth roots.

The theory of the natural numbers with a successor function has signature consisting of a constant 0 and a unary function S ("successor": S(x) is interpreted as x + 1), and has axioms:

1. $\forall x\, \neg(Sx = 0)$;
2. $\forall x \forall y\, (Sx = Sy \to x = y)$;
3. the axiom scheme of induction.

The last axiom (induction) can be replaced by the axioms stating that every element other than 0 is a successor ($\forall x\, (\neg x = 0 \to \exists y\, Sy = x)$) and, for each positive n, that $S^n x \neq x$ (no cycles). The theory of the natural numbers with a successor function is complete and decidable, and is κ-categorical for uncountable κ but not for countable κ.

Presburger arithmetic is the theory of the natural numbers under addition, with signature consisting of a constant 0, a unary function S, and a binary function +. It is complete and decidable. The axioms are:

1. $\forall x\, \neg(Sx = 0)$;
2. $\forall x \forall y\, (Sx = Sy \to x = y)$;
3. $\forall x\, (x + 0 = x)$;
4. $\forall x \forall y\, (x + Sy = S(x + y))$;
5. the axiom scheme of induction.

Many of the first-order theories described above can be extended to complete recursively enumerable consistent theories. This is no longer true for most of the following theories; they can usually encode both multiplication and addition of natural numbers, and this gives them enough power to encode themselves, which implies that Gödel's incompleteness theorem applies and the theories can no longer be both complete and recursively enumerable (unless they are inconsistent).

The signature of a theory of arithmetic has the constant 0, the unary function S (successor), and the two binary functions + and ×. Some authors take the signature to contain a constant 1 instead of the function S, then define S in the obvious way as St = 1 + t.

Robinson arithmetic (also called Q) has the axioms:

1. $\forall x\, \neg(Sx = 0)$;
2. $\forall x\, (\neg x = 0 \to \exists y\, Sy = x)$;
3. $\forall x \forall y\, (Sx = Sy \to x = y)$;
4. $\forall x\, (x + 0 = x)$;
5. $\forall x \forall y\, (x + Sy = S(x + y))$;
6. $\forall x\, (x \times 0 = 0)$;
7. $\forall x \forall y\, (x \times Sy = x \times y + x)$.

Axioms (1) and (2) govern the distinguished element 0. (3) assures that S is an injection. Axioms (4) and (5) are the standard recursive definition of addition; (6) and (7) do the same for multiplication. Robinson arithmetic can be thought of as Peano arithmetic without induction. Q is a weak theory for which Gödel's incompleteness theorem holds.

IΣn is first-order Peano arithmetic with induction restricted to Σn formulas (for n = 0, 1, 2, ...). The theory IΣ0 is often denoted by IΔ0. This is a series of more and more powerful fragments of Peano arithmetic. The case n = 1 has about the same strength as primitive recursive arithmetic (PRA). Exponential function arithmetic (EFA) is IΣ0 with an axiom stating that xʸ exists for all x and y (with the usual properties).

First-order Peano arithmetic, PA. The "standard" theory of arithmetic. The axioms are the axioms of Robinson arithmetic above, together with the axiom scheme of induction:

$(\phi(0) \land \forall x\, (\phi(x) \to \phi(Sx))) \to \forall x\, \phi(x)$, for any formula $\phi$ in the language of PA.

Kurt Gödel's 1931 paper proved that PA is incomplete, and has no consistent recursively enumerable completions.

Complete arithmetic (also known as true arithmetic) is the theory of the standard model of arithmetic, the natural numbers N. It is complete but does not have a recursively enumerable set of axioms.

For the real numbers, the situation is slightly different: the case that includes just addition and multiplication cannot encode the integers, and hence Gödel's incompleteness theorem does not apply. Complications arise when adding further function symbols (e.g., exponentiation).
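Axioms (4)–(7) of Robinson arithmetic above are exactly the recursion equations one would program. A minimal Python sketch computing addition and multiplication from 0 and successor alone:

```python
def S(x):
    """Successor."""
    return x + 1

def add(x, y):
    """Addition by the recursion x + 0 = x, x + S(y) = S(x + y)."""
    return x if y == 0 else S(add(x, y - 1))

def mul(x, y):
    """Multiplication by the recursion x*0 = 0, x*S(y) = x*y + x."""
    return 0 if y == 0 else add(mul(x, y - 1), x)

print(add(3, 4), mul(3, 4))  # 7 12 -- computed purely from 0 and successor
```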
Second-order arithmetic can refer to a first-order theory (in spite of the name) with two types of variables, thought of as varying over integers and subsets of the integers. (There is also a theory of arithmetic in second order logic that is called second order arithmetic. It has only one model, unlike the corresponding theory in first-order logic, which is incomplete.) The signature will typically be the signature 0, S, +, × of arithmetic, together with a membership relation ∈ between integers and subsets (though there are numerous minor variations). The axioms are those of Robinson arithmetic, together with axiom schemes of induction and comprehension. There are many different subtheories of second order arithmetic that differ in which formulas are allowed in the induction and comprehension schemes. In order of increasing strength, five of the most common systems are RCA0, WKL0, ACA0, ATR0, and $\Pi^1_1$-CA0. These are defined in detail in the articles on second order arithmetic and reverse mathematics.

The usual signature of set theory has one binary relation ∈, no constants, and no functions. Some of the theories below are "class theories" which have two sorts of object, sets and classes. There are three common ways of handling this in first-order logic: using two-sorted first-order logic, using ordinary first-order logic with a unary predicate distinguishing the sets among the classes, or taking all objects to be classes and defining sets as those classes that are elements of some class. Some first-order set theories include weak theories lacking powersets (such as Kripke–Platek set theory), Zermelo set theory, Zermelo–Fraenkel set theory (ZF, and ZFC with choice), and class theories such as Von Neumann–Bernays–Gödel set theory and Morse–Kelley set theory. Some extra first-order axioms that can be added to one of these (usually ZF) include the axiom of choice, the generalized continuum hypothesis, the axiom of constructibility (V = L), Martin's axiom, and various large cardinal axioms.
https://en.wikipedia.org/wiki/List_of_first-order_theories#Set_theories
In mathematics, the category of topological spaces, often denoted Top, is the category whose objects are topological spaces and whose morphisms are continuous maps. This is a category because the composition of two continuous maps is again continuous, and the identity function is continuous. The study of Top and of properties of topological spaces using the techniques of category theory is known as categorical topology.

N.B. Some authors use the name Top for the category with topological manifolds as objects, for the category with compactly generated spaces as objects and continuous maps as morphisms, or for the category of compactly generated weak Hausdorff spaces.

Like many categories, the category Top is a concrete category, meaning its objects are sets with additional structure (i.e. topologies) and its morphisms are functions preserving this structure. There is a natural forgetful functor U : Top → Set to the category of sets which assigns to each topological space the underlying set and to each continuous map the underlying function.

The forgetful functor U has both a left adjoint D : Set → Top, which equips a given set with the discrete topology, and a right adjoint I : Set → Top, which equips a given set with the indiscrete topology. Both of these functors are, in fact, right inverses to U (meaning that UD and UI are equal to the identity functor on Set). Moreover, since any function between discrete or between indiscrete spaces is continuous, both of these functors give full embeddings of Set into Top.

Top is also fiber-complete, meaning that the category of all topologies on a given set X (called the fiber of U above X) forms a complete lattice when ordered by inclusion. The greatest element in this fiber is the discrete topology on X, while the least element is the indiscrete topology.

Top is the model of what is called a topological category. These categories are characterized by the fact that every structured source $(X \to U A_i)_I$ has a unique initial lift $(A \to A_i)_I$. In Top the initial lift is obtained by placing the initial topology on the source. Topological categories have many properties in common with Top (such as fiber-completeness, discrete and indiscrete functors, and unique lifting of limits).

The category Top is both complete and cocomplete, which means that all small limits and colimits exist in Top. In fact, the forgetful functor U : Top → Set uniquely lifts both limits and colimits and preserves them as well. Therefore, (co)limits in Top are given by placing topologies on the corresponding (co)limits in Set. Specifically, if F is a diagram in Top and (L, φ : L → F) is a limit of UF in Set, the corresponding limit of F in Top is obtained by placing the initial topology on (L, φ : L → F). Dually, colimits in Top are obtained by placing the final topology on the corresponding colimits in Set. Unlike many algebraic categories, the forgetful functor U : Top → Set does not create or reflect limits, since there will typically be non-universal cones in Top covering universal cones in Set. Examples of limits and colimits in Top include products (with the product topology), equalizers (with the subspace topology), coproducts (disjoint unions with their natural topologies), and coequalizers (with the quotient topology).
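The discrete and indiscrete constructions are easy to realize for finite sets, and they illustrate the adjunctions: every function out of a discrete space is continuous, while a non-constant function out of an indiscrete space into a discrete one generally is not. A self-contained Python sketch (all names are illustrative):

```python
from itertools import chain, combinations

def discrete(X):
    """D(X): every subset is open -- the left adjoint to the forgetful functor."""
    return [set(s) for s in chain.from_iterable(
        combinations(sorted(X), r) for r in range(len(X) + 1))]

def indiscrete(X):
    """I(X): only the empty set and X are open -- the right adjoint."""
    return [set(), set(X)]

def is_continuous(f, X, tau_X, tau_Y):
    """Preimage of every open set is open."""
    opens = {frozenset(u) for u in tau_X}
    return all(frozenset(x for x in X if f(x) in V) in opens for V in tau_Y)

X, Y = {1, 2}, {'a', 'b'}
f = lambda x: 'a' if x == 1 else 'b'
print(is_continuous(f, X, discrete(X), indiscrete(Y)))   # True
print(is_continuous(f, X, indiscrete(X), discrete(Y)))   # False: {1} is not open
```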
https://en.wikipedia.org/wiki/Category_of_topological_spaces
In category theory, a small set is one in a fixed universe of sets (as the word universe is used in mathematics in general). Thus, the category of small sets is the category of all sets one cares to consider. This is used when one does not wish to bother with set-theoretic concerns of what is and what is not considered a set, concerns which would arise if one tried to speak of the category of "all sets". A small set is not to be confused with a small category, which is a category in which the collection of arrows (and therefore also the collection of objects) is a set.

In other choices of foundations, such as Grothendieck universes, there exist both sets that belong to the universe, called "small sets", and sets that do not, such as the universe itself; the latter are "large sets". This gives an intermediate notion of moderate set: a subset of the universe, which may be small or large. Every small set is moderate, but not conversely. Since in many cases the choice of foundations is irrelevant, it makes sense to always say "small set" for emphasis, even if one has in mind a foundation where all sets are small. Similarly, a small family is a family indexed by a small set; the axiom of replacement (if it applies in the foundation in question) then says that the image of the family is also small.
https://en.wikipedia.org/wiki/Small_set_(category_theory)
In mathematics, the category of measurable spaces, often denoted Meas, is the category whose objects are measurable spaces and whose morphisms are measurable maps.[1][2][3][4] This is a category because the composition of two measurable maps is again measurable, and the identity function is measurable.

N.B. Some authors reserve the name Meas for categories whose objects are measure spaces, and denote the category of measurable spaces as Mble, or other notations. Some authors also restrict the category only to particular well-behaved measurable spaces, such as standard Borel spaces.

Like many categories, the category Meas is a concrete category, meaning its objects are sets with additional structure (i.e. sigma-algebras) and its morphisms are functions preserving this structure. There is a natural forgetful functor U : Meas → Set to the category of sets which assigns to each measurable space the underlying set and to each measurable map the underlying function.

The forgetful functor U has both a left adjoint D, which equips a given set with the discrete sigma-algebra, and a right adjoint I, which equips a given set with the indiscrete or trivial sigma-algebra. Both of these functors are, in fact, right inverses to U (meaning that UD and UI are equal to the identity functor on Set). Moreover, since any function between discrete or between indiscrete spaces is measurable, both of these functors give full embeddings of Set into Meas.

The category Meas is both complete and cocomplete, which means that all small limits and colimits exist in Meas. In fact, the forgetful functor U : Meas → Set uniquely lifts both limits and colimits and preserves them as well. Therefore, (co)limits in Meas are given by placing particular sigma-algebras on the corresponding (co)limits in Set. Examples of limits and colimits in Meas include products (with the product sigma-algebra) and coproducts (with the disjoint-union sigma-algebra).
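On a finite set, the sigma-algebra axioms reduce to containing X and being closed under complements and pairwise unions (countable unions collapse to finite ones). A minimal Python sketch with illustrative data:

```python
def is_sigma_algebra(X, F):
    """Check the sigma-algebra axioms for a finite family F on a finite set X."""
    F = {frozenset(a) for a in F}
    X = frozenset(X)
    if X not in F:
        return False
    if any(X - a not in F for a in F):        # closure under complement
        return False
    return all(a | b in F for a in F for b in F)  # closure under (finite) union

X = {1, 2, 3}
print(is_sigma_algebra(X, [set(), {1}, {2, 3}, X]))  # True
print(is_sigma_algebra(X, [set(), {1}, X]))          # False: {2, 3} missing
```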
https://en.wikipedia.org/wiki/Category_of_measurable_spaces
In mathematics, the Elementary Theory of the Category of Sets or ETCS is a set of axioms for set theory proposed by William Lawvere in 1964.[1] Although it was originally stated in the language of category theory, as Leinster pointed out, the axioms can be stated without reference to category theory. ETCS is a basic example of structural set theory, an approach to set theory that emphasizes sets as abstract structures (as opposed to collections of elements).

The real message is this: simply by writing down a few mundane, uncontroversial statements about sets and functions, we arrive at an axiomatization that reflects how sets are used in everyday mathematics.

Informally, the axioms (taking set, function, and composition of functions as primitives) state that sets and functions behave as expected in everyday mathematics: for instance, composition is associative and has identities, a function is determined by its effect on elements, and function sets, a set of natural numbers, and choice functions exist.[3]

The resulting theory is weaker than ZFC. If the axiom schema of replacement is added as another axiom, the resulting theory is equivalent to ZFC.[4]
https://en.wikipedia.org/wiki/Elementary_Theory_of_the_Category_of_Sets
In formal semantics, a generalized quantifier (GQ) is an expression that denotes a set of sets. This is the standard semantics assigned to quantified noun phrases. For example, the generalized quantifier every boy denotes the set of sets of which every boy is a member:

$\{X \mid \forall x\, (x \text{ is a boy} \to x \in X)\}$

This treatment of quantifiers has been essential in achieving a compositional semantics for sentences containing quantifiers.[1][2]

A version of type theory is often used to make the semantics of different kinds of expressions explicit. The standard construction defines the set of types recursively as follows: e and t are types, and if a and b are types, then so is ⟨a, b⟩. Given this definition, we have the simple types e and t, but also a countable infinity of complex types, some of which include:

⟨e, t⟩; ⟨t, t⟩; ⟨⟨e, t⟩, t⟩; ⟨e, ⟨e, t⟩⟩; ⟨⟨e, t⟩, ⟨⟨e, t⟩, t⟩⟩; …

We can now assign types to the words in our sentence above (Every boy sleeps) as follows: boy and sleeps both have type ⟨e, t⟩, and every has type ⟨⟨e, t⟩, ⟨⟨e, t⟩, t⟩⟩, and so we can see that the generalized quantifier in our example is of type ⟨⟨e, t⟩, t⟩.

Thus, every denotes a function from a set to a function from a set to a truth value. Put differently, it denotes a function from a set to a set of sets. It is that function which for any two sets A, B, every(A)(B) = 1 if and only if $A \subseteq B$.

A useful way to write complex functions is the lambda calculus. For example, one can write the meaning of sleeps as the following lambda expression, which is a function from an individual x to the proposition that x sleeps:

$\lambda x.\, \mathrm{sleep}'(x)$

Such lambda terms are functions whose domain is what precedes the period, and whose range are the type of thing that follows the period. If x is a variable that ranges over elements of $D_e$, then the following lambda term denotes the identity function on individuals:

$\lambda x.\, x$

We can now write the meaning of every with the following lambda term, where X, Y are variables of type ⟨e, t⟩:

$\lambda X.\, \lambda Y.\, X \subseteq Y$

If we abbreviate the meaning of boy and sleeps as "B" and "S", respectively, we have that the sentence every boy sleeps now means the following:

$(\lambda X.\, \lambda Y.\, X \subseteq Y)(B)(S)$

By β-reduction, this becomes $(\lambda Y.\, B \subseteq Y)(S)$ and then $B \subseteq S$.

The expression every is a determiner. Combined with a noun, it yields a generalized quantifier of type ⟨⟨e, t⟩, t⟩.

A generalized quantifier GQ is said to be monotone increasing (also called upward entailing) if, for every pair of sets X and Y, the following holds: if X ⊆ Y, then GQ(X) entails GQ(Y). The GQ every boy is monotone increasing. For example, the set of things that run fast is a subset of the set of things that run. Therefore, the first sentence below entails the second:

Every boy runs fast.
Every boy runs.

A GQ is said to be monotone decreasing (also called downward entailing) if, for every pair of sets X and Y, the following holds: if X ⊆ Y, then GQ(Y) entails GQ(X). An example of a monotone decreasing GQ is no boy. For this GQ we have that the first sentence below entails the second:

No boy runs.
No boy runs fast.

The lambda term for the determiner no is the following.
It says that the two sets have an empty intersection:

$\lambda X.\, \lambda Y.\, X \cap Y = \emptyset$

Monotone decreasing GQs are among the expressions that can license a negative polarity item, such as any. Monotone increasing GQs do not license negative polarity items.

A GQ is said to be non-monotone if it is neither monotone increasing nor monotone decreasing. An example of such a GQ is exactly three boys. Neither of the following sentences entails the other:

Exactly three students ran.
Exactly three students ran fast.

The first sentence does not entail the second. The fact that the number of students that ran is exactly three does not entail that each of these students ran fast, so the number of students that did that can be smaller than 3. Conversely, the second sentence does not entail the first. The sentence exactly three students ran fast can be true, even though the number of students who merely ran (i.e. not so fast) is greater than 3. The lambda term for the (complex) determiner exactly three is the following. It says that the cardinality of the intersection between the two sets equals 3:

$\lambda X.\, \lambda Y.\, |X \cap Y| = 3$

A determiner D is said to be conservative if the following equivalence holds:

$D(A)(B) \leftrightarrow D(A)(A \cap B)$

For example, the following two sentences are equivalent:

Every boy sleeps.
Every boy is a boy who sleeps.

It has been proposed that all determiners, in every natural language, are conservative.[2] The expression only is not conservative; the two sentences "Only boys sleep" and "Only boys are boys who sleep" are not equivalent. But it is, in fact, not common to analyze only as a determiner. Rather, it is standardly treated as a focus-sensitive adverb.
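The set-theoretic denotations of these determiners are one-liners in Python, which also makes the monotonicity and conservativity facts mechanically checkable. A sketch with illustrative extensions for boy, sleep, and run:

```python
def every(A):
    return lambda B: A <= B            # every(A)(B) = 1 iff A ⊆ B

def no(A):
    return lambda B: not (A & B)       # no(A)(B) = 1 iff A ∩ B = ∅

def exactly_three(A):
    return lambda B: len(A & B) == 3   # |A ∩ B| = 3

boys = {'al', 'bo', 'cy'}
sleepers = {'al', 'bo', 'cy', 'di'}
print(every(boys)(sleepers))           # True: boys ⊆ sleepers

# Upward monotonicity: run_fast ⊆ run, so every(boys)(run_fast)
# entails every(boys)(run).
run_fast, run = {'al', 'bo', 'cy'}, {'al', 'bo', 'cy', 'di'}
assert (not every(boys)(run_fast)) or every(boys)(run)

# Conservativity: D(A)(B) iff D(A)(A ∩ B).
A, B = boys, sleepers
assert every(A)(B) == every(A)(A & B)
assert no(A)(B) == no(A)(A & B)
```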
https://en.wikipedia.org/wiki/Generalized_quantifier
In mathematics, a family, or indexed family, is informally a collection of objects, each associated with an index from some index set. For example, a family of real numbers, indexed by the set of integers, is a collection of real numbers, where a given function selects one real number for each integer (possibly the same) as indexing.

More formally, an indexed family is a mathematical function together with its domain $I$ and image $X$ (that is, indexed families and mathematical functions are technically identical, just points of view are different). Often the elements of the set $X$ are referred to as making up the family. In this view, indexed families are interpreted as collections of indexed elements instead of functions. The set $I$ is called the index set of the family, and $X$ is the indexed set.

Sequences are one type of families, indexed by the natural numbers. In general, the index set $I$ is not restricted to be countable. For example, one could consider an uncountable family of subsets of the natural numbers indexed by the real numbers.

Let $I$ and $X$ be sets and $f$ a function such that

$f : I \to X, \quad i \mapsto x_i = f(i),$

where $i$ is an element of $I$ and the image $f(i)$ of $i$ under the function $f$ is denoted by $x_i$. For example, $f(3)$ is denoted by $x_3$. The symbol $x_i$ is used to indicate that $x_i$ is the element of $X$ indexed by $i \in I$. The function $f$ thus establishes a family of elements in $X$ indexed by $I$, which is denoted by $(x_i)_{i \in I}$, or simply $(x_i)$ if the index set is assumed to be known. Sometimes angle brackets or braces are used instead of parentheses, although the use of braces risks confusing indexed families with sets.

Functions and indexed families are formally equivalent, since any function $f$ with a domain $I$ induces a family $(f(i))_{i \in I}$ and conversely. Being an element of a family is equivalent to being in the range of the corresponding function. In practice, however, a family is viewed as a collection, rather than a function.

Any set $X$ gives rise to a family $(x_t)_{t \in X}$, where $X$ is indexed by itself (meaning that $f$ is the identity function). However, families differ from sets in that the same object can appear multiple times with different indices in a family, whereas a set is a collection of distinct objects. A family contains any element exactly once if and only if the corresponding function is injective.
An indexed family $(x_i)_{i \in I}$ defines a set $\mathcal{X} = \{x_i : i \in I\}$, that is, the image of $I$ under $f$. Since the mapping $f$ is not required to be injective, there may exist $i, j \in I$ with $i \neq j$ such that $x_i = x_j$. Thus, $|\mathcal{X}| \leq |I|$, where $|A|$ denotes the cardinality of the set $A$. For example, the sequence $((-1)^i)_{i \in \mathbb{N}}$ indexed by the natural numbers $\mathbb{N} = \{1, 2, 3, \ldots\}$ has image set $\{(-1)^i : i \in \mathbb{N}\} = \{-1, 1\}$. In addition, the set $\{x_i : i \in I\}$ does not carry information about any structures on $I$. Hence, by using a set instead of the family, some information might be lost. For example, an ordering on the index set of a family induces an ordering on the family, but no ordering on the corresponding image set.

An indexed family $(B_i)_{i \in J}$ is a subfamily of an indexed family $(A_i)_{i \in I}$ if and only if $J$ is a subset of $I$ and $B_i = A_i$ holds for all $i \in J$.

For example, consider the following sentence: "The vectors $v_1, \ldots, v_n$ are linearly independent." Here $(v_i)_{i \in \{1, \ldots, n\}}$ denotes a family of vectors. The $i$-th vector $v_i$ only makes sense with respect to this family, as sets are unordered so there is no $i$-th vector of a set. Furthermore, linear independence is defined as a property of a collection; it therefore matters whether those vectors are linearly independent as a set or as a family. For example, if we consider $n = 2$ and $v_1 = v_2 = (1, 0)$ as the same vector, then the set of them consists of only one element (as a set is a collection of unordered distinct elements) and is linearly independent, but the family contains the same element twice (since indexed differently) and is linearly dependent (identical vectors are linearly dependent).

Suppose a text states the following: "A square matrix $A$ is invertible if and only if the rows of $A$ are linearly independent." As in the previous example, it is important that the rows of $A$ are linearly independent as a family, not as a set. For example, consider the matrix

$A = \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}.$

The set of the rows consists of the single element $(1, 1)$, as a set is made of unique elements, so it is linearly independent, but the matrix is not invertible as the matrix determinant is 0. On the other hand, the family of the rows contains two elements indexed differently, the 1st row $(1, 1)$ and the 2nd row $(1, 1)$, so it is linearly dependent. The statement is therefore correct if it refers to the family of rows, but wrong if it refers to the set of rows. (The statement is also correct when "the rows" is interpreted as referring to a multiset, in which the elements are also kept distinct but which lacks some of the structure of an indexed family.)
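The family-versus-set distinction is easy to see in code: a dictionary keyed by the index set models a family faithfully, while taking its set of values collapses repeats. A minimal Python sketch using the matrix example above:

```python
# A family is a function from an index set; a dict models this directly.
family = {1: (1, 1), 2: (1, 1)}        # the rows of the matrix [[1, 1], [1, 1]]
image_set = set(family.values())       # collapses repeats: {(1, 1)}

print(len(family), len(image_set))     # 2 1 -- |image| <= |index set|

# As a *family* the rows are linearly dependent (determinant 0), even though
# the *set* of rows, having a single element, is linearly independent.
a, b = family[1], family[2]
det = a[0] * b[1] - a[1] * b[0]
print(det)                             # 0
```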
Let $\mathbf{n}$ be the finite set $\{1, 2, \ldots, n\}$, where $n$ is a positive integer.

Index sets are often used in sums and other similar operations. For example, if $(a_i)_{i \in I}$ is an indexed family of numbers, the sum of all those numbers is denoted by $\sum_{i \in I} a_i$. When $(A_i)_{i \in I}$ is a family of sets, the union of all those sets is denoted by $\bigcup_{i \in I} A_i$. Likewise for intersections and Cartesian products.

The analogous concept in category theory is called a diagram. A diagram is a functor giving rise to an indexed family of objects in a category C, indexed by another category J, and related by morphisms depending on two indices.
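These index-set operations are easy to mirror in code; a minimal Python sketch with invented values:

```python
# Sum and union over a finite index set I = {1, 2, 3}.
I = {1, 2, 3}
a = {1: 2.0, 2: 3.5, 3: -1.0}        # family of numbers (a_i), i in I
A = {1: {0, 1}, 2: {1, 2}, 3: {5}}   # family of sets (A_i), i in I

total = sum(a[i] for i in I)              # the sum of a_i over i in I
union = set().union(*(A[i] for i in I))   # the union of A_i over i in I

print(total)  # 4.5
print(union)  # {0, 1, 2, 5}
```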
https://en.wikipedia.org/wiki/Indexed_family
In mathematical logic, Russell's paradox (also known as Russell's antinomy) is a set-theoretic paradox published by the British philosopher and mathematician Bertrand Russell in 1901.[1][2] Russell's paradox shows that every set theory that contains an unrestricted comprehension principle leads to contradictions.[3] According to the unrestricted comprehension principle, for any sufficiently well-defined property, there is the set of all and only the objects that have that property.

Let R be the set of all sets that are not members of themselves. (This set is sometimes called "the Russell set".) If R is not a member of itself, then its definition entails that it is a member of itself; yet, if it is a member of itself, then it is not a member of itself, since it is the set of all sets that are not members of themselves. The resulting contradiction is Russell's paradox. In symbols: let $R = \{x : x \notin x\}$; then $R \in R \iff R \notin R$.

Russell also showed that a version of the paradox could be derived in the axiomatic system constructed by the German philosopher and mathematician Gottlob Frege, thereby undermining Frege's attempt to reduce mathematics to logic and calling into question the logicist programme.

Two influential ways of avoiding the paradox were both proposed in 1908: Russell's own type theory and Zermelo set theory. In particular, Zermelo's axioms restricted the unlimited comprehension principle. With the additional contributions of Abraham Fraenkel, Zermelo set theory developed into the now-standard Zermelo–Fraenkel set theory (commonly known as ZFC when it includes the axiom of choice). The main difference between Russell's and Zermelo's solutions to the paradox is that Zermelo modified the axioms of set theory while maintaining a standard logical language, while Russell modified the logical language itself. The language of ZFC, with the help of Thoralf Skolem, turned out to be that of first-order logic.[4]

The paradox had already been discovered independently in 1899 by the German mathematician Ernst Zermelo.[5] However, Zermelo did not publish the idea, which remained known only to David Hilbert, Edmund Husserl, and other academics at the University of Göttingen. At the end of the 1890s, Georg Cantor – considered the founder of modern set theory – had already realized that his theory would lead to a contradiction, as he told Hilbert and Richard Dedekind by letter.[6]

Most sets commonly encountered are not members of themselves. Let us call a set "normal" if it is not a member of itself, and "abnormal" if it is a member of itself. Clearly every set must be either normal or abnormal. For example, consider the set of all squares in a plane. This set is not itself a square in the plane; thus it is not a member of itself and is therefore normal. In contrast, the complementary set that contains everything which is not a square in the plane is itself not a square in the plane, and so it is one of its own members and is therefore abnormal. Now we consider the set of all normal sets, R, and try to determine whether R is normal or abnormal. If R were normal, it would be contained in the set of all normal sets (itself), and therefore be abnormal; on the other hand, if R were abnormal, it would not be contained in the set of all normal sets (itself), and therefore be normal. This leads to the conclusion that R is neither normal nor abnormal: Russell's paradox.

The term "naive set theory" is used in various ways.
In one usage, naive set theory is a formal theory that is formulated in a first-order language with a binary non-logical predicate $\in$, and that includes the axiom of extensionality:
$$\forall x \, \forall y \, \bigl( \forall z \, (z \in x \iff z \in y) \implies x = y \bigr)$$
and the axiom schema of unrestricted comprehension:
$$\exists y \, \forall x \, \bigl( x \in y \iff \varphi(x) \bigr)$$
for any predicate $\varphi$ with $x$ as a free variable inside $\varphi$. Substitute $x \notin x$ for $\varphi(x)$ to get
$$\exists y \, \forall x \, \bigl( x \in y \iff x \notin x \bigr).$$
Then by existential instantiation (reusing the symbol $y$) and universal instantiation we have
$$y \in y \iff y \notin y,$$
a contradiction. Therefore, this naive set theory is inconsistent.[7]

Prior to Russell's paradox (and to other similar paradoxes discovered around the time, such as the Burali-Forti paradox), a common conception of the idea of set was the "extensional concept of set", as recounted by von Neumann and Morgenstern:[8]

A set is an arbitrary collection of objects, absolutely no restriction being placed on the nature and number of these objects, the elements of the set in question. The elements constitute and determine the set as such, without any ordering or relationship of any kind between them.

In particular, there was no distinction between sets and proper classes as collections of objects. Additionally, the existence of each of the elements of a collection was seen as sufficient for the existence of the set of said elements. However, paradoxes such as Russell's and Burali-Forti's showed the impossibility of this conception of set, by exhibiting collections of objects that do not form sets, despite all said objects being existent.

From the principle of explosion of classical logic, any proposition can be proved from a contradiction. Therefore, the presence of contradictions like Russell's paradox in an axiomatic set theory is disastrous: if any formula can be proved true, it destroys the conventional meaning of truth and falsity. Further, since set theory was seen as the basis for an axiomatic development of all other branches of mathematics, Russell's paradox threatened the foundations of mathematics as a whole. This motivated a great deal of research around the turn of the 20th century to develop a consistent (contradiction-free) set theory.

In 1908, Ernst Zermelo proposed an axiomatization of set theory that avoided the paradoxes of naive set theory by replacing arbitrary set comprehension with weaker existence axioms, such as his axiom of separation (Aussonderung). (Avoiding the paradoxes was not Zermelo's original intention; rather, it was to document which assumptions he used in proving the well-ordering theorem.)[9] Modifications to this axiomatic theory proposed in the 1920s by Abraham Fraenkel, Thoralf Skolem, and by Zermelo himself resulted in the axiomatic set theory called ZFC. This theory became widely accepted once Zermelo's axiom of choice ceased to be controversial, and ZFC has remained the canonical axiomatic set theory down to the present day.

ZFC does not assume that, for every property, there is a set of all things satisfying that property. Rather, it asserts that given any set X, any subset of X definable using first-order logic exists. The object R defined by Russell's paradox above cannot be constructed as a subset of any set X, and is therefore not a set in ZFC. In some extensions of ZFC, notably in von Neumann–Bernays–Gödel set theory, objects like R are called proper classes. ZFC is silent about types, although the cumulative hierarchy has a notion of layers that resemble types.

Zermelo himself never accepted Skolem's formulation of ZFC using the language of first-order logic.
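The derivation above can also be checked mechanically. Here is a minimal sketch in Lean 4; the relation `mem` is an arbitrary stand-in for set membership, not a library definition:

```lean
-- No R can satisfy the comprehension instance `mem x R ↔ ¬ mem x x`
-- for all x: instantiating at R itself yields a contradiction.
theorem russell {α : Type} (mem : α → α → Prop) :
    ¬ ∃ R : α, ∀ x : α, mem x R ↔ ¬ mem x x :=
  fun ⟨R, hR⟩ =>
    have h : mem R R ↔ ¬ mem R R := hR R
    have hn : ¬ mem R R := fun hm => (h.mp hm) hm
    hn (h.mpr hn)
```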
As José Ferreirós notes, Zermelo insisted instead that "propositional functions (conditions or predicates) used for separating off subsets, as well as the replacement functions, can be 'entirely arbitrary' [ganz beliebig]"; the modern interpretation given to this statement is that Zermelo wanted to include higher-order quantification in order to avoid Skolem's paradox. Around 1930, Zermelo also introduced (apparently independently of von Neumann) the axiom of foundation; thus, as Ferreirós observes, "by forbidding 'circular' and 'ungrounded' sets, it [ZFC] incorporated one of the crucial motivations of TT [type theory]—the principle of the types of arguments". This second-order ZFC preferred by Zermelo, including the axiom of foundation, allowed a rich cumulative hierarchy. Ferreirós writes that "Zermelo's 'layers' are essentially the same as the types in the contemporary versions of simple TT [type theory] offered by Gödel and Tarski. One can describe the cumulative hierarchy into which Zermelo developed his models as the universe of a cumulative TT in which transfinite types are allowed. (Once we have adopted an impredicative standpoint, abandoning the idea that classes are constructed, it is not unnatural to accept transfinite types.) Thus, simple TT and ZFC could now be regarded as systems that 'talk' essentially about the same intended objects. The main difference is that TT relies on a strong higher-order logic, while Zermelo employed second-order logic, and ZFC can also be given a first-order formulation. The first-order 'description' of the cumulative hierarchy is much weaker, as is shown by the existence of countable models (Skolem's paradox), but it enjoys some important advantages."[10]

In ZFC, given a set A, it is possible to define a set B that consists of exactly the sets in A that are not members of themselves. B cannot be in A, by the same reasoning as in Russell's paradox. This variation of Russell's paradox shows that no set contains everything.

Through the work of Zermelo and others, especially John von Neumann, the structure of what some see as the "natural" objects described by ZFC eventually became clear: they are the elements of the von Neumann universe, V, built up from the empty set by transfinitely iterating the power set operation. It is thus now possible again to reason about sets in a non-axiomatic fashion without running afoul of Russell's paradox, namely by reasoning about the elements of V. Whether it is appropriate to think of sets in this way is a point of contention among the rival points of view on the philosophy of mathematics.

Other solutions to Russell's paradox, with an underlying strategy closer to that of type theory, include Quine's New Foundations and Scott–Potter set theory. Yet another approach is to define a multiple membership relation with an appropriately modified comprehension scheme, as in Double extension set theory.

Russell discovered the paradox in May[11] or June 1901.[12] By his own account in his 1919 Introduction to Mathematical Philosophy, he "attempted to discover some flaw in Cantor's proof that there is no greatest cardinal".[13] In a 1902 letter,[14] he announced the discovery to Gottlob Frege of the paradox in Frege's 1879 Begriffsschrift, and framed the problem in terms of both logic and set theory, and in particular in terms of Frege's definition of function:[a][b]

There is just one point where I have encountered a difficulty. You state (p. 17 [p. 23 above]) that a function, too, can act as the indeterminate element.
This I formerly believed, but now this view seems doubtful to me because of the following contradiction. Let w be the predicate: to be a predicate that cannot be predicated of itself. Can w be predicated of itself? From each answer its opposite follows. Therefore we must conclude that w is not a predicate. Likewise there is no class (as a totality) of those classes which, each taken as a totality, do not belong to themselves. From this I conclude that under certain circumstances a definable collection [Menge] does not form a totality.

Russell would go on to cover it at length in his 1903 The Principles of Mathematics, where he repeated his first encounter with the paradox:[15]

Before taking leave of fundamental questions, it is necessary to examine more in detail the singular contradiction, already mentioned, with regard to predicates not predicable of themselves. ... I may mention that I was led to it in the endeavour to reconcile Cantor's proof....

Russell wrote to Frege about the paradox just as Frege was preparing the second volume of his Grundgesetze der Arithmetik.[16] Frege responded to Russell very quickly; his letter dated 22 June 1902 appeared, with van Heijenoort's commentary, in Heijenoort 1967:126–127. Frege then wrote an appendix admitting to the paradox,[17] and proposed a solution that Russell would endorse in his Principles of Mathematics,[18] but which was later considered by some to be unsatisfactory.[19] For his part, Russell had his work at the printers, and he added an appendix on the doctrine of types.[20]

Ernst Zermelo, in his (1908) A new proof of the possibility of a well-ordering (published at the same time he published "the first axiomatic set theory"),[21] laid claim to prior discovery of the antinomy in Cantor's naive set theory. He states: "And yet, even the elementary form that Russell⁹ gave to the set-theoretic antinomies could have persuaded them [J. König, Jourdain, F. Bernstein] that the solution of these difficulties is not to be sought in the surrender of well-ordering but only in a suitable restriction of the notion of set".[22] Footnote 9 is where he stakes his claim:

⁹ 1903, pp. 366–368. I had, however, discovered this antinomy myself, independently of Russell, and had communicated it prior to 1903 to Professor Hilbert among others.[23]

Frege sent a copy of his Grundgesetze der Arithmetik to Hilbert; as noted above, Frege's last volume mentioned the paradox that Russell had communicated to Frege. After receiving Frege's last volume, on 7 November 1903, Hilbert wrote a letter to Frege in which he said, referring to Russell's paradox, "I believe Dr. Zermelo discovered it three or four years ago". A written account of Zermelo's actual argument was discovered in the Nachlass of Edmund Husserl.[24]

In 1923, Ludwig Wittgenstein proposed to "dispose" of Russell's paradox as follows:

The reason why a function cannot be its own argument is that the sign for a function already contains the prototype of its argument, and it cannot contain itself. For let us suppose that the function F(fx) could be its own argument: in that case there would be a proposition F(F(fx)), in which the outer function F and the inner function F must have different meanings, since the inner one has the form O(fx) and the outer one has the form Y(O(fx)). Only the letter 'F' is common to the two functions, but the letter by itself signifies nothing. This immediately becomes clear if instead of F(Fu) we write (do) : F(Ou) . Ou = Fu. That disposes of Russell's paradox.
(Tractatus Logico-Philosophicus, 3.333)

Russell and Alfred North Whitehead wrote their three-volume Principia Mathematica hoping to achieve what Frege had been unable to do. They sought to banish the paradoxes of naive set theory by employing a theory of types they devised for this purpose. While they succeeded in grounding arithmetic in a fashion, it is not at all evident that they did so by purely logical means. While Principia Mathematica avoided the known paradoxes and allows the derivation of a great deal of mathematics, its system gave rise to new problems. In any event, Kurt Gödel in 1930–31 proved that while the logic of much of Principia Mathematica, now known as first-order logic, is complete, Peano arithmetic is necessarily incomplete if it is consistent. This is very widely—though not universally—regarded as having shown the logicist program of Frege to be impossible to complete.

In 2001, a Centenary International Conference celebrating the first hundred years of Russell's paradox was held in Munich, and its proceedings have been published.[12]

There are some versions of this paradox that are closer to real-life situations and may be easier to understand for non-logicians. For example, the barber paradox supposes a barber who shaves all men who do not shave themselves, and only men who do not shave themselves. When one thinks about whether the barber should shave himself or not, a similar paradox begins to emerge.[25]

An easy refutation of the "layman's versions" such as the barber paradox seems to be that no such barber exists, or that the barber is not a man, and so can exist without paradox. The whole point of Russell's paradox is that the answer "such a set does not exist" means the definition of the notion of set within a given theory is unsatisfactory. Note the difference between the statements "such a set does not exist" and "it is an empty set". It is like the difference between saying "There is no bucket" and saying "The bucket is empty".

A notable exception to the above may be the Grelling–Nelson paradox, in which words and meaning are the elements of the scenario rather than people and hair-cutting. Though it is easy to refute the barber's paradox by saying that such a barber does not (and cannot) exist, it is impossible to say something similar about a meaningfully defined word.

One way that the paradox has been dramatised is as follows: Suppose that every public library has to compile a catalogue of all its books. Since the catalogue is itself one of the library's books, some librarians include it in the catalogue for completeness, while others leave it out, as it is self-evident that it is one of the library's books. Now imagine that all these catalogues are sent to the national library. Some of them include themselves in their listings, others do not. The national librarian compiles two master catalogues—one of all the catalogues that list themselves, and one of all those that do not.[26]

The question is: should these master catalogues list themselves? The 'catalogue of all catalogues that list themselves' is no problem. If the librarian does not include it in its own listing, it remains a true catalogue of those catalogues that do list themselves. If he does include it, it remains a true catalogue of those that list themselves. However, just as the librarian cannot go wrong with the first master catalogue, he is doomed to fail with the second.
When it comes to the 'catalogue of all catalogues that do not list themselves', the librarian cannot include it in its own listing, because then it would include itself, and so belong in the other catalogue, that of catalogues that do include themselves. However, if the librarian leaves it out, the catalogue is incomplete. Either way, it can never be a true master catalogue of catalogues that do not list themselves.[26]

As illustrated above for the barber paradox, Russell's paradox is not hard to extend. Take a transitive verb ⟨V⟩ that can be applied to its substantive form, and form the sentence: The ⟨V⟩er that ⟨V⟩s all (and only those) who don't ⟨V⟩ themselves. Sometimes the "all" is replaced by "all ⟨V⟩ers". An example would be "paint": The painter that paints all (and only those) that don't paint themselves; or "elect": The elector that elects all (and only those) that don't elect themselves.

In the Season 8 episode of The Big Bang Theory, "The Skywalker Incursion", Sheldon Cooper analyzes the song "Play That Funky Music", concluding that the lyrics present a musical example of Russell's paradox.[27]

Paradoxes that fall in this scheme include the barber paradox (with "shave") and the Grelling–Nelson paradox (with "describe": the word that describes all words that don't describe themselves).
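The ⟨V⟩ scheme is mechanical enough to instantiate in code; a tiny illustrative Python sketch (the helper name is invented):

```python
def russell_sentence(verb, agent):
    """Instantiate the Russell scheme for a transitive verb and its agent noun."""
    return (f"The {agent} that {verb}s all (and only those) "
            f"who don't {verb} themselves")

print(russell_sentence("shave", "barber"))
print(russell_sentence("paint", "painter"))
print(russell_sentence("elect", "elector"))
```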
https://en.wikipedia.org/wiki/Russell%27s_paradox
Defuzzification is the process of producing a quantifiable result in crisp logic, given fuzzy sets and corresponding membership degrees. It is the process that maps a fuzzy set to a crisp set. It is typically needed in fuzzy control systems. These systems will have a number of rules that transform a number of variables into a fuzzy result, that is, the result is described in terms of membership in fuzzy sets. For example, rules designed to decide how much pressure to apply might result in "Decrease Pressure (15%), Maintain Pressure (34%), Increase Pressure (72%)". Defuzzification is the interpretation of the membership degrees of the fuzzy sets as a specific decision or real value.

The simplest but least useful defuzzification method is to choose the set with the highest membership (in this case "Increase Pressure", since it has a 72% membership), ignore the others, and convert this 72% to some number. The problem with this approach is that it loses information. The rules that called for decreasing or maintaining pressure might as well not have been there in this case.

A common and useful defuzzification technique is the center of gravity. First, the results of the rules must be added together in some way. The most typical fuzzy set membership function has the graph of a triangle. Now, if this triangle were cut by a straight horizontal line somewhere between the top and the bottom, and the top portion were removed, the remaining portion would form a trapezoid. The first step of defuzzification typically "chops off" parts of the graphs to form trapezoids (or other shapes if the initial shapes were not triangles). For example, if the output has "Decrease Pressure (15%)", then the corresponding triangle will be cut 15% of the way up from the bottom. In the most common technique, all of these trapezoids are then superimposed one upon another, forming a single geometric shape. Then, the centroid of this shape, called the fuzzy centroid, is calculated. The x coordinate of the centroid is the defuzzified value.

There are many different methods of defuzzification available, including maxima methods, distribution methods, and area methods.[1] The maxima methods are good candidates for fuzzy reasoning systems. The distribution methods and the area methods exhibit the property of continuity that makes them suitable for fuzzy controllers.[1]
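As a rough illustration of the center-of-gravity technique just described, here is a minimal Python sketch using NumPy; the membership-function shapes and the output range are invented for the example:

```python
import numpy as np

def triangle(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Output universe: pressure change, here from -10 to +10 units (invented).
x = np.linspace(-10.0, 10.0, 1001)

# Three output fuzzy sets with invented shapes.
decrease = triangle(x, -10.0, -5.0, 0.0)
maintain = triangle(x, -5.0, 0.0, 5.0)
increase = triangle(x, 0.0, 5.0, 10.0)

# Rule activations from the example: 15%, 34%, 72%.
# Each triangle is "chopped off" at its rule's activation level...
clipped = [np.minimum(decrease, 0.15),
           np.minimum(maintain, 0.34),
           np.minimum(increase, 0.72)]

# ...and the clipped shapes are superimposed (pointwise maximum).
aggregate = np.maximum.reduce(clipped)

# The defuzzified value is the x coordinate of the centroid.
crisp = np.sum(x * aggregate) / np.sum(aggregate)
print(round(crisp, 2))  # roughly 2.1 -- pulled toward "Increase Pressure"
```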
https://en.wikipedia.org/wiki/Defuzzification
A fuzzy concept is an idea of which the boundaries of application can vary considerably according to context or conditions, instead of being fixed once and for all.[1] This means the idea is somewhat vague or imprecise.[2] Yet it is not unclear or meaningless. It has a definite meaning, which can be made more exact only through further elaboration and specification - including a closer definition of the context in which the concept is used. The study of the characteristics of fuzzy concepts and fuzzy language is called fuzzy semantics.[3] The inverse of a "fuzzy concept" is a "crisp concept" (i.e. a precise concept).

For engineers, "Fuzziness is imprecision or vagueness of definition."[4] For computer scientists, a fuzzy concept is an idea which is "to an extent applicable" in a situation. It means that the concept can have gradations of significance or unsharp (variable) boundaries of application; a fuzzy statement is a statement which is true "to some extent", and that extent can often be represented by a scaled value (a score). For mathematicians, a "fuzzy concept" is usually a fuzzy set or a combination of such sets (see fuzzy mathematics and fuzzy set theory). In cognitive linguistics, the things that belong to a "fuzzy category" exhibit gradations of family resemblance, and the borders of the category are not clearly defined.[5] In a more general, popular sense - contrasting with its technical meanings - a "fuzzy concept" refers to an imprecise idea which is "somewhat vague" for any kind of reason, or which is "approximately true" in a situation. Fuzzy concepts are often used to navigate imprecision in the real world, when exact information is not available.

In the past, the very idea of reasoning with fuzzy concepts faced considerable resistance from academic elites.[6] They did not want to endorse the use of imprecise concepts in research or argumentation, and regarded fuzzy logic with suspicion or even hostility. Yet although people might not be aware of it, the use of fuzzy concepts has risen enormously in all walks of life from the 1970s onward.[7] That is mainly due to advances in electronic engineering, fuzzy mathematics and digital computer programming. The new technology allows very complex inferences about "variations on a theme" to be anticipated and fixed in a program.[8] The Perseverance Mars rover, a driverless NASA vehicle used to explore the Jezero crater on the planet Mars, features fuzzy logic programming that steers it through rough terrain.[9] Similarly, to the north, the Chinese Mars rover Zhurong used fuzzy logic algorithms to calculate its travel route in Utopia Planitia from sensor data.[10]

New neuro-fuzzy computational methods[11] make it possible for machines to identify, measure, correct/adjust for and respond to fine gradations of significance with great precision.[12] It means that practically useful concepts can be coded and applied to all kinds of tasks, even if ordinarily these concepts are never precisely defined. Nowadays engineers, statisticians and programmers often represent fuzzy concepts mathematically, using fuzzy logic, fuzzy values, fuzzy variables and fuzzy sets[13] (see also fuzzy set theory).
Fuzzy logic can play a significant role in artificial intelligence programming, for example because it can model human cognitive processes more easily than other methods.[14]

Problems of vagueness and fuzziness have probably always existed in human experience.[15] In the West, ancient texts show that philosophers and scientists were already thinking about those kinds of problems in classical antiquity. According to the Daoist thought of Laozi and Zhuang Zhou in ancient China, "vagueness is not regarded with suspicion, but is simply an acknowledged characteristic of the world around us" - a subject for meditation and a source of insight.[16]

The ancient Sorites paradox first raised the logical problem of how we could exactly define the threshold at which a change in quantitative gradation turns into a qualitative or categorical difference.[17] With some physical processes, this threshold seems relatively easy to identify. For example, water turns into steam at 100 °C or 212 °F. Of course, the boiling point depends partly on atmospheric pressure, which decreases at higher altitudes; it is also affected by the level of humidity - in that sense, the boiling point is "somewhat fuzzy", because it can vary under different conditions.[18] Nevertheless, for every altitude and humidity level, we can predict accurately what the boiling point will be, if we know the conditions. With many other processes and gradations, however, the point of change is much more difficult to locate, and remains somewhat vague. Thus, the boundaries between qualitatively different things may be unsharp: we know that there are boundaries, but we cannot define them exactly. For example, to identify "the oldest city in the world", we have to define what counts as a city, and at what point a growing human settlement becomes a city.[19]

According to the modern idea of the continuum fallacy, the fact that a statement is to an extent vague does not automatically mean that it has no validity. The question then arises of how (by what method or approach) we could ascertain and define the validity that the fuzzy statement does have.

The Nordic myth of Loki's wager suggested that concepts which lack precise meanings or precise boundaries of application cannot be usefully discussed at all, because they evade any clear definition.[20] However, the 20th-century idea of "fuzzy concepts" proposes that "somewhat vague terms" can be operated with, because we can explicate and define the variability of their application - by assigning numbers to gradations of applicability. This idea sounds simple enough, but it had large implications.

In Western civilization, the intellectual recognition of fuzzy concepts has been traced back to a diversity of famous and less well-known thinkers,[21] including (among many others) Eubulides,[22] Epicurus,[23] Plato,[24] Cicero,[25] Georg Wilhelm Friedrich Hegel,[26] Karl Marx and Friedrich Engels,[27] Friedrich Nietzsche,[28] William James,[29] Hugh MacColl,[30] Charles S. Peirce,[31] Carl Gustav Hempel,[32] Max Black,[33] Arto Salomaa,[34] Ludwig Wittgenstein,[35] Jan Łukasiewicz,[36] Emil Leon Post,[37] Alfred Tarski,[38] Georg Cantor,[39] Nicolai A. Vasiliev,[40] Kurt Gödel,[41] Stanisław Jaśkowski,[42] Willard Van Orman Quine,[43] Petr Hájek,[44] Joseph Goguen,[45] Jan Pavelka,[46] George J. Klir,[47] Didier Dubois,[48] and Donald Knuth.[49] Across at least two and a half millennia, all of them had something to say about graded concepts with unsharp boundaries.
This suggests at least that the awareness of the existence of concepts with "fuzzy" characteristics, in one form or another, has a very long history in human thought. Quite a few 20th-century logicians, mathematicians and philosophers also tried to analyze the characteristics of fuzzy concepts as a recognized species, sometimes with the aid of some kind of many-valued logic or substructural logic.[50]

An early attempt in the post-WW2 era to create a mathematical theory of sets with gradations of set membership was made by Abraham Kaplan and Hermann F. Schott in 1951. They intended to apply the idea to empirical research. Kaplan and Schott expressed the degree of membership of empirical classes using real numbers between 0 and 1, and they defined corresponding notions of intersection, union, complementation and subset.[51] However, at the time, their idea "fell on stony ground".[52] J. Barkley Rosser Sr. published a treatise on many-valued logics in 1952, anticipating "many-valued sets".[53] Another treatise was published in 1963 by Alexander Zinoviev and others.[54]

In 1964, the American philosopher William Alston introduced the term "degree vagueness" to describe vagueness in an idea that results from the absence of a definite cut-off point along an implied scale (in contrast to "combinatory vagueness" caused by a term that has a number of logically independent conditions of application).[55]

The German mathematician Dieter Klaua published a German-language paper on fuzzy sets in 1965,[56] but he used a different terminology (he referred to "many-valued sets", not "fuzzy sets").[57]

In the late 1960s, two popular introductions to many-valued logic were published by Robert J. Ackermann and Nicholas Rescher.[58] Rescher's book includes a bibliography on fuzzy theory up to 1965, which was extended by Robert Wolf and Joseph De Kerf for 1966–1975.[59] Haack provides references to significant works after 1974.[60] In 1980, Didier Dubois and Henri Prade published a detailed annotated bibliography on the field of fuzzy set theory.[61] George J. Klir and Bo Yuan provided an overview of the subject in Fuzzy Sets and Fuzzy Logic during the mid-1990s.[62] Merrie Bergmann provides a more recent (2008) introduction to fuzzy reasoning.[63] A standard modern reference work is Fuzzy Logic and Mathematics: A Historical Perspective (2017) by Radim Bělohlávek, Joseph W. Dauben and George J. Klir.[64]

The Iranian-born American computer scientist Lotfi A. Zadeh (1921–2017) is usually credited with inventing the specific idea of a "fuzzy concept" in his seminal 1965 paper on fuzzy sets, because he presented a mathematical formalization of the phenomenon that was widely accepted by scholars.[65] It was also Zadeh who played a decisive role in developing the field of fuzzy logic, fuzzy sets and fuzzy systems, with a large number of scholarly papers.[66] Unlike most philosophical theories of vagueness, Zadeh's engineering approach had the advantage that it could be directly applied to computer programming.[67] Zadeh's seminal 1965 paper is acknowledged to be one of the most-cited scholarly articles of the 20th century.[68] In 2014, it was placed 46th in the list of the world's 100 most-cited research papers of all time.[69] Since the mid-1960s, many scholars have contributed to elaborating the theory of reasoning with graded concepts, and the research field continues to expand.[70]

The ordinary scholarly definition of a concept as "fuzzy" has been in use from the 1970s onward.
Radim Bělohlávek explains: "There exists strong evidence, established in the 1970s in the psychology of concepts... that human concepts have a graded structure in that whether or not a concept applies to a given object is a matter of degree, rather than a yes-or-no question, and that people are capable of working with the degrees in a consistent way. This finding is intuitively quite appealing, because people say "this product is more or less good" or "to a certain degree, he is a good athlete", implying the graded structure of concepts. In his classic paper, Zadeh called the concepts with a graded structure fuzzy concepts and argued that these concepts are a rule rather than an exception when it comes to how people communicate knowledge. Moreover, he argued that to model such concepts mathematically is important for the tasks of control, decision making, pattern recognition, and the like. Zadeh proposed the notion of a fuzzy set that gave birth to the field of fuzzy logic..."[71]

Hence, a concept is generally regarded as "fuzzy" in a logical sense if its defining characteristics apply to it only as a matter of degree, or if the boundaries of its applicability are unsharp and can vary according to context or conditions.

The fact that a concept is fuzzy does not prevent its use in logical reasoning; it merely affects the type of reasoning which can be applied (see fuzzy logic). If the concept has gradations of meaningful significance, it may be necessary to specify and formalize what those gradations are, if they can make an important difference. Not all fuzzy concepts have the same logical structure, but they can often be formally described or reconstructed using fuzzy logic or other substructural logics.[73] The advantage of this approach is that numerical notation enables a potentially infinite number of truth-values between complete truth and complete falsehood, and thus it enables - in theory, at least - the greatest precision in stating the degree of applicability of a logical rule.

One of the first scholars who pointed out the need to distinguish the theory of fuzzy sets from probability theory was Zadeh's pupil Joseph Goguen.[74] Petr Hájek, writing about the foundations of fuzzy logic, likewise sharply distinguished between "fuzziness" and "uncertainty": "The sentence 'The patient is young' is true to some degree – the lower the age of the patient (measured e.g. in years), the more the sentence is true. Truth of a fuzzy proposition is a matter of degree. I recommend to everybody interested in fuzzy logic that they sharply distinguish fuzziness from uncertainty as a degree of belief (e.g. probability). Compare the last proposition with the proposition 'The patient will survive next week'. This may well be considered as a crisp proposition which is either (absolutely) true or (absolutely) false; but we do not know which is the case. We may have some probability (chance, degree of belief) that the sentence is true; but probability is not a degree of truth."[75]

In metrology (the science of measurement), it is acknowledged that for any measure we care to make, there exists an amount of uncertainty about its accuracy, but this degree of uncertainty is conventionally expressed with a magnitude of likelihood, and not as a degree of truth. In 1975, Lotfi A. Zadeh introduced a distinction between "Type 1 fuzzy sets" without uncertainty and "Type 2 fuzzy sets" with uncertainty, which has been widely accepted.[76] Simply put, in the former case, each fuzzy number is linked to a non-fuzzy (natural) number, while in the latter case, each fuzzy number is linked to another fuzzy number.
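Hájek's "young" example, and the graded structure Bělohlávek describes, can be made concrete with a membership function and the standard min/max/complement connectives; here is a minimal Python sketch (the age cut-offs and values are invented for illustration):

```python
def young(age):
    """Degree of truth of 'the patient is young' on a 0..1 scale.
    Fully true up to 25, fully false from 60, linear in between
    (the cut-offs 25 and 60 are arbitrary choices for this sketch)."""
    if age <= 25:
        return 1.0
    if age >= 60:
        return 0.0
    return (60 - age) / 35.0

# Standard fuzzy connectives: AND = min, OR = max, NOT = 1 - x.
def f_and(p, q): return min(p, q)
def f_or(p, q):  return max(p, q)
def f_not(p):    return 1.0 - p

a, b = young(30), young(50)    # about 0.86 and 0.29
print(round(f_and(a, b), 2))   # 0.29 -- 'both patients are young'
print(round(f_or(a, b), 2))    # 0.86 -- 'at least one patient is young'
print(round(f_not(a), 2))      # 0.14 -- 'the first patient is not young'
```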
In philosophical logic and linguistics, fuzzy concepts are often regarded as vague or imprecise ideas which, in their application, or strictly speaking, are neither completely true nor completely false.[77] Such ideas require further elaboration, specification or qualification to understand their applicability (the conditions under which they truly make sense).[78] Kit Fine states that "when a philosopher talks of vagueness he has in mind a certain kind of indeterminacy in the relation of something to the world".[79] The "fuzzy area" can also refer simply to a residual number of cases which cannot be allocated to a known and identifiable group, class or set, if strict criteria are used. The French thinkers Gilles Deleuze and Félix Guattari referred occasionally to fuzzy sets in connection with their phenomenological concept of multiplicities. In A Thousand Plateaus, they state that "a set is fuzzy if its elements belong to it only by virtue of specific operations of consistency and consolidation, which themselves follow a special logic".[80] In their book What Is Philosophy?, which deals with the functions of concepts, they suggest that all philosophical concepts could be regarded as "vague or fuzzy sets, simple aggregates of perceptions and affections, which form within the lived as immanent to a subject, to a consciousness [and which] are qualitative or intensive multiplicities, like 'redness' or 'baldness,' where we cannot decide whether certain elements do or do not belong to the set."[81]

In mathematics and statistics, a fuzzy variable (such as "the temperature", "hot" or "cold") is a value which could lie in a probable range defined by some quantitative limits or parameters, and which can be usefully described with imprecise categories (such as "high", "medium" or "low") using some kind of scale or conceptual hierarchy.

In mathematics and computer science, the gradations of applicable meaning of a fuzzy concept are described in terms of quantitative relationships defined by logical operators. Such an approach is sometimes called "degree-theoretic semantics" by logicians and philosophers,[82] but the more usual term is fuzzy logic or many-valued logic.[83] The novelty of fuzzy logic is that it "breaks with the traditional principle that formalisation should correct and avoid, but not compromise with, vagueness".[84] The basic idea of fuzzy logic is that a real number is assigned to each statement written in a language, within a range from 0 to 1, where 1 means that the statement is completely true, and 0 means that the statement is completely false, while values less than 1 but greater than 0 represent that the statement is "partly true", to a given, quantifiable extent. Susan Haack comments: "Whereas in classical set theory an object either is or is not a member of a given set, in fuzzy set theory membership is a matter of degree; the degree of membership of an object in a fuzzy set is represented by some real number between 0 and 1, with 0 denoting no membership and 1 full membership."[85]

"Truth" in this mathematical context usually means simply that "something is the case", or that "something is applicable".
This makes it possible to analyze a distribution of statements for their truth-content, identify data patterns, make inferences and predictions, and model how processes operate. Petr Hájek claimed that "fuzzy logic is not just some 'applied logic'", but may bring "new light to classical logical problems", and therefore might be well classified as a distinct branch of "philosophical logic", similar to e.g. modal logics.[86]

Fuzzy logic does not abolish the "hard and soft science" distinction, but modifies it, by redefining what scientific rigour means in many fields of research. Fuzzy logic offers computationally oriented systems of concepts and methods to formalize types of reasoning which are ordinarily approximate only, and not exact. In principle, this allows us to give a definite, precise answer to the question "To what extent is something the case?", or "To what extent is something applicable?". Via a series of switches, this kind of reasoning can be built into electronic devices. That was already happening before fuzzy logic was invented, but using fuzzy logic in modelling has become an important aid in design, which creates many new technical possibilities. Fuzzy reasoning (i.e., reasoning with graded concepts) turns out to have many practical uses, and is nowadays in widespread use.[87]

It looks like fuzzy logic will eventually be applied in almost every aspect of life, even if people are not aware of it, and in that sense fuzzy logic is an astonishingly successful invention.[93] The scientific and engineering literature on the subject is constantly increasing. Originally, a lot of research on fuzzy logic was done by Japanese pioneers inventing new machinery, electronic equipment and appliances (see also Fuzzy control system).[94] The idea became so popular in Japan that the English word entered the Japanese language (ファジィ概念). "Fuzzy theory" (ファジー理論) is a recognized field in Japanese scientific research.

Since that time, the movement has spread worldwide; nearly every country nowadays has its own fuzzy systems association, although some are larger and more developed than others. In some cases, the local body is a branch of an international one. In other cases, the fuzzy systems program falls under artificial intelligence or soft computing. There are also some emerging networks of researchers which do not yet have their own website.

Lotfi A. Zadeh estimated around 2014 that there were more than 50,000 fuzzy logic–related patented inventions. He listed 28 journals at that time dealing with fuzzy reasoning, and 21 journal titles on soft computing. His searches found close to 100,000 publications with the word "fuzzy" in their titles, but perhaps there are even 300,000.[118] In March 2018, Google Scholar found 2,870,000 titles which included the word "fuzzy". When he died on 11 September 2017 at age 96, Professor Zadeh had received more than 50 engineering and academic awards in recognition of his work.[119]

The technique of fuzzy concept lattices is increasingly used in programming for the formatting, relating and analysis of fuzzy data sets. According to the computer scientist Andrei Popescu at Middlesex University London,[120] a concept can be operationally defined to consist of a set of objects, a set of attributes, and a context which relates the objects to the attributes. Once the context is defined, we can specify relationships of sets of objects with sets of attributes which they do, or do not share.
Whether an object belongs to a concept, and whether an object does or does not have an attribute, can often be a matter of degree. Thus, for example, "many attributes are fuzzy rather than crisp".[121] To overcome this issue, a numerical value is assigned to each attribute along a scale, and the results are placed in a table which links each assigned object-value within the given range to a numerical value (a score) denoting a given degree of applicability (a toy code sketch of such a table appears at the end of this passage). This is the basic idea of a "fuzzy concept lattice", which can also be graphed; different fuzzy concept lattices can be connected to each other as well (for example, in "fuzzy conceptual clustering" techniques used to group data, originally invented by Enrique H. Ruspini). Fuzzy concept lattices are a useful programming tool for the exploratory analysis of big data, for example in cases where sets of linked behavioural responses are broadly similar, but can nevertheless vary in important ways, within certain limits. They can help to find out what the structure and dimensions are of a behaviour that occurs with an important but limited amount of variation in a large population.[122]

Coding with fuzzy lattices can be useful, for instance, in the psephological analysis of big data about voter behaviour, where researchers want to explore the characteristics and associations involved in "somewhat vague" opinions; gradations in voter attitudes; and variability in voter behaviour (or personal characteristics) within a set of parameters.[123] The basic programming techniques for this kind of fuzzy concept mapping and deep learning are by now well established,[124] and big data analytics had a strong influence on the US elections of 2016.[125] A US study concluded in 2015 that for 20% of undecided voters, Google's secret search algorithm had the power to change the way they voted.[126]

Very large quantities of data can now be explored using computers with fuzzy logic programming[127] and open-source architectures such as Apache Hadoop, Apache Spark, and MongoDB. One author claimed in 2016 that it is now possible to obtain, link and analyze "400 data points" for each voter in a population, using Oracle systems (a "data point" is a number linked to one or more categories, which represents a characteristic).[128]

However, NBC News reported in 2016 that the Anglo-American firm Cambridge Analytica, which profiled voters for Donald Trump (Steve Bannon was a board member),[129] did not have 400, but 4,000 data points for each of 230 million US adults.[130] Cambridge Analytica's own website claimed that "up to 5,000 data points" were collected for each of 220 million Americans, a data set of more than 1 trillion bits of formatted data.[131] The Guardian later claimed that Cambridge Analytica in fact had, according to its own company information, "up to 7,000 data points" on 240 million American voters.[132]

Harvard University professor Latanya Sweeney calculated that if a U.S. company knows just your date of birth, your ZIP code and your sex, the company has an 87% chance of identifying you by name, simply by using linked data sets from various sources.[133] With 4,000–7,000 data points instead of three, a very comprehensive personal profile becomes possible for almost every voter, and many behavioural patterns can be inferred by linking together different data sets. It also becomes possible to identify and measure gradations in personal characteristics which, in aggregate, have very large effects.
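A toy version of the object-attribute table of degrees mentioned above might look as follows in Python (all object names, attributes and values here are invented):

```python
# A tiny fuzzy formal context: the degree in [0, 1] to which each object
# has each attribute. This is the kind of table a fuzzy concept lattice
# is built from; real contexts would have thousands of rows.
objects = ["voter_a", "voter_b", "voter_c"]

context = {
    "voter_a": {"undecided": 0.9, "swayable": 0.7, "engaged": 0.2},
    "voter_b": {"undecided": 0.1, "swayable": 0.3, "engaged": 0.8},
    "voter_c": {"undecided": 0.6, "swayable": 0.8, "engaged": 0.5},
}

def extent(attribute, threshold):
    """Objects having `attribute` to at least the given degree."""
    return [o for o in objects if context[o][attribute] >= threshold]

print(extent("swayable", 0.7))  # ['voter_a', 'voter_c']
```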
Some researchers argue that this kind of big data analysis has severe limitations, and that the analytical results can only be regarded as indicative, and not as definitive.[134] This was confirmed by Kellyanne Conway, Donald Trump's campaign advisor and counselor in 2016, who emphasized the importance of human judgement and common sense in drawing conclusions from fuzzy data.[135] Conway candidly admitted that much of her own research would "never see the light of day", because it was client confidential.[136] Another Trump adviser criticized Conway, claiming that she "produces an analysis that buries every terrible number and highlights every positive number".[137]

In a video interview published by The Guardian in March 2018, whistleblower Christopher Wylie called Cambridge Analytica a "full-service propaganda machine" rather than a bona fide data science company. Its own site revealed with "case studies" that it had been active in political campaigns in numerous different countries, influencing attitudes and opinions.[138] Wylie explained that "we spent a million dollars harvesting tens of millions of Facebook profiles, and those profiles were used as the basis of the algorithms that became the foundation of Cambridge Analytica itself. The company itself was founded on using Facebook data".[139]

On 19 March 2018, Facebook announced it had hired the digital forensics firm Stroz Friedberg to conduct a "comprehensive audit" of Cambridge Analytica, while Facebook shares plummeted 7 percent overnight (erasing roughly $40 billion in market capitalization).[140] Cambridge Analytica had not just used the profiles of Facebook users to compile data sets. According to Christopher Wylie's testimony, the company also harvested the data of each user's network of friends, leveraging the original data set. It then converted, combined and migrated its results into new data sets, which can in principle survive in some format, even if the original data sources are destroyed. It created and applied algorithms using data to which - critics argue - it could not have been entitled. This was denied by Cambridge Analytica, which stated on its website that it legitimately "uses data to change audience behavior" among customers and voters (who choose to view and provide information). If advertisers can do that, why not a data company? Where should the line be drawn? Legally, it remained a "fuzzy" area.

The tricky legal issue then became what kind of data Cambridge Analytica (or any similar company) is actually allowed to have and keep.[141] Facebook itself became the subject of another U.S. Federal Trade Commission inquiry, to establish whether Facebook violated the terms of a 2011 consent decree governing its handling of user data (data which was allegedly transferred to Cambridge Analytica without Facebook's and users' knowledge).[142] Wired journalist Jessi Hempel commented in a CNBC panel discussion that "now there is this fuzziness from the top of the company [i.e. Facebook] that I have never seen in the fifteen years that I have covered it."[143]

Interrogating Facebook's CEO Mark Zuckerberg before the U.S. House Energy and Commerce Committee in April 2018, New Mexico Congressman Rep. Ben Ray Luján put it to him that the Facebook corporation might well have "29,000 data points" on each Facebook user. Zuckerberg claimed that he "did not really know".
Luján's figure was based on ProPublica research, which in fact suggested that Facebook may even have 52,000 data points for many Facebook users.[144] When Zuckerberg replied to his critics, he stated that because the revolutionary technology of Facebook (with 2.2 billion users worldwide at that time) had ventured into previously unknown territory, it was unavoidable that mistakes would be made, despite the best of intentions. He justified himself by saying: "For the first ten or twelve years of the company, I viewed our responsibility primarily as building tools, that if we could put those tools in people's hands, then that would empower people to do good things. What we have learnt now... is that we need to take a more proactive role and a broader view of our responsibility."[145]

In July 2018, Facebook and Instagram barred access from Crimson Hexagon, a company that advises corporations and governments using one trillion scraped social media posts, which it mined and processed with artificial intelligence and image analysis.[146]

It remained "fuzzy" what was more important to Zuckerberg: making money from users' information, or real corporate integrity in the use of personal information.[147] Zuckerberg implied that he believed that, on balance, Facebook had done more good than harm, and that, if he had believed that wasn't the case, he would never have persevered with the business. Thus, "the good" was itself a fuzzy concept, because it was a matter of degree ("more good than bad"). He had to sell stuff to keep the business growing. If people did not like Facebook, then they simply should not join it, or should opt out; they have the choice. Many critics, however, feel that people really are in no position to make an informed choice, because they have no idea of how exactly their information will or might be used by third parties contracting with Facebook; and because the company legally owns the information that users provide online, they have no control over that either, except to restrict themselves in what they write online (the same applies to many other online services).

After The New York Times broke the news on 17 March 2018 that copies of the Facebook data set scraped by Cambridge Analytica could still be downloaded from the Internet, Facebook was severely criticized by government representatives.[148] When questioned, Zuckerberg admitted that "In general we collect data on people who are not signed up for Facebook for security purposes", with the aim "to help prevent malicious actors from collecting public information from Facebook users, such as names".[149] From 2018 onward, Facebook faced many more lawsuits brought against the company, alleging data breaches, security breaches and misuse of personal information (see Lawsuits involving Meta Platforms and Facebook Federal Litigation Filings).[150] There still exists no standard international regulatory framework for social network information,[151] and it is often unclear what happens to stored information after a provider company closes down or is taken over by another company. Zuckerberg's Meta company also reports its own legal actions.[152]

On 2 May 2018, it was reported that the Cambridge Analytica company was shutting down and starting bankruptcy proceedings, after losing clients and facing escalating legal costs.[153] The reputational damage which the company had suffered or caused had become too great.

A traditional objection to big data is that it cannot cope with rapid change: events move faster than the statistics can keep up with.
Yet the technology now exists for corporations like Amazon, Google, Apple Inc. and Microsoft to pump cloud-based data streams from app users straight into big data analytics programmes, in real time.[154] Provided that the right kinds of analytical concepts are used, it is now technically possible to draw definite and important conclusions about gradations of human and natural behaviour using very large fuzzy data sets and fuzzy programming - and increasingly it can be done very fast.

This achievement has become highly topical in military technology, in areas such as cybersecurity; tracking and monitoring systems; guidance systems (for firearms, explosive launchers, vehicles, planes, vessels, artillery, missiles, satellites, drones and bombs); threat identification/evaluation systems; risk and strategy appraisal; arms transfer and arms race impact assessments; and targeting methods. The identification of a threat and the response to it often have to happen very fast, with a high degree of accuracy, for which comprehensive artificial intelligence is essential.[155] Dr Tal Mimran, a lecturer at Hebrew University in Jerusalem and a former legal adviser to the Israeli Defence Force (IDF), stated: "During the period in which I served in the target room [between 2010 and 2015], you needed a team of around 20 intelligence officers to work for around 250 days to gather something between 200 and 250 targets. Today, the AI will do that in a week."[156]

Although no comprehensive overviews appear to be publicly available, a large amount of scientific research on fuzzy systems was funded or sponsored by the military.[157] However, military uses of fuzzy systems research can also have spin-offs for medical applications.[158]

There have been many academic debates about the meaning, relevance and utility of fuzzy concepts, as well as their appropriate use.[159] Rudolf E. Kálmán stated in 1972 that "there is no such thing as a fuzzy concept... We do talk about fuzzy things but they are not scientific concepts".[160] The suggestion is that, to qualify as a concept, a concept must always be clear and precise, without any fuzziness. A vague notion would be at best a prologue to formulating a concept.[161] In 2011, three Chinese engineers alleged that "Fuzzy set, its t-norm, s-norm and fuzzy supplement theories have already become the academic virus in the world".[162]

Lotfi A. Zadeh himself confessed: "I knew that just by choosing the label fuzzy I was going to find myself in the midst of a controversy... If it weren't called fuzzy logic, there probably wouldn't be articles on it on the front page of the New York Times. So let us say it has a certain publicity value. Of course, many people don't like that publicity value, and when they see it in the New York Times, it doesn't sit well with them."[163]

However, the impact of the invention of fuzzy reasoning went far beyond names and labels. When Zadeh gave his acceptance speech in Japan for the 1989 Honda Foundation prize, which he received for inventing fuzzy theory, he stated that "The concept of a fuzzy set has had an upsetting effect on the established order."[164]

According to The Foundations of Arithmetic by the logician Gottlob Frege, "A definition of a concept... must be complete; it must unambiguously determine, as regards any object, whether or not it falls under the concept... the concept must have a sharp boundary... a concept that is not sharply defined is wrongly termed a concept. Such quasi-conceptual constructions cannot be recognized as concepts by logic.
The law of the excluded middle is really just another form of the requirement that the concept should have a sharp boundary."[165]

In his notes on language games, Ludwig Wittgenstein replied to Frege's argument as follows: "One can say that the concept of a game is a concept with blurred edges. 'But is a blurred concept a concept at all?' Is a photograph that is not sharp a picture of a person at all? Is it even always an advantage to replace a picture that is not sharp by one that is? Isn't one that isn't sharp often just what we need? Frege compares a concept to a region, and says that a region without clear boundaries can't be called a region at all. This presumably means that we can't do anything with it. But is it senseless to say 'Stay roughly here'? Imagine that I were standing with someone in a city square and said that. As I say it, I do not bother drawing any boundary, but just make a pointing gesture as if I were indicating a particular spot. And this is just how one might explain what a game is."[166]

There is no general agreement among philosophers and scientists about how the notion of a "concept" (and in particular, a scientific concept) should be defined.[167] A concept could be defined as a mental representation, as a cognitive capacity, as an abstract object, as a cluster of linked phenomena, etc.[168] Edward E. Smith and Douglas L. Medin stated that "there will likely be no crucial experiments or analyses that will establish one view of concepts as correct and rule out all others irrevocably."[169] Of course, scientists also quite often do use imprecise analogies in their models to help in understanding an issue.[170] A concept can be clear enough, but not (or not sufficiently) precise.

Rather uniquely, terminology scientists at the German national standards institute (Deutsches Institut für Normung) provided an official standard definition of what a concept is (under the terminology standards DIN 2330 of 1957, completely revised in 1974 and last revised in 2022; and DIN 2342 of 1986, also last revised in 2022).[171] According to the official German definition, a concept is a unit of thought which is created through abstraction for a set of objects, and which identifies shared (or related) characteristics of those objects.

The subsequent ISO definition is very similar. Under the ISO 1087 terminology standard of the International Organization for Standardization (first published in October 2000, reviewed in 2005 and revised in 2019), a concept is defined as a unit of thought or an idea constituted through abstraction on the basis of properties common to a set of objects.[172] It is acknowledged that although a concept usually has one definition or one meaning, it may have multiple designations, terms of expression, symbolizations or representations. Thus, for example, the same concept can have different names in different languages. Both verbs and nouns can express concepts. A concept can also be thought of as "a way of looking at the world".

The official terminological standards are useful for many practical purposes, but for more complex concepts the standards may not be so helpful. The reason is that complex concepts do not necessarily denote only a collection of objects which have something in common. A complex concept may, for example, express a Gestalt, i.e. it may express a totality which is more, means more, and does more than the sum of its parts (as recognized in Aristotle's Metaphysics).
It may be that the parts cannot exist other than within the totality.[173] The totality could also be a "totality of totalities". In such cases, the definition of the complex concept is not (or not fully) reducible to what its parts have in common. Modelling such a concept requires more than identifying and enumerating the parts that are included in (and excluded from) the concept. It also requires a specification of what all the parts together "add up to", or what they constitute collectively. In some respects at least, the totality differs qualitatively from any of its parts. The Gestalt could be a fuzzy object, figure or shape. Reasoning with fuzzy concepts is often viewed as a kind of "logical corruption" or scientific perversion because, it is claimed, fuzzy reasoning rarely reaches a definite "yes" or a definite "no". A clear, precise and logically rigorous conceptualization is no longer a necessary prerequisite for carrying out a procedure, a project, or an inquiry, since "somewhat vague ideas" can always be accommodated, formalized and programmed with the aid of fuzzy expressions. The purist idea is that either a rule applies, or it does not apply. When a rule is said to apply only "to some extent", then in truth the rule does not apply. Thus, a compromise with vagueness or indefiniteness is, on this view, effectively a compromise with error – an error of conceptualization, an error in the inferential system, or an error in physically carrying out a task. The computer scientist William Kahan argued in 1975 that "the danger of fuzzy theory is that it will encourage the sort of imprecise thinking that has brought us so much trouble."[174] He said subsequently, "With traditional logic there is no guaranteed way to find that something is contradictory, but once it is found, you'd be obliged to do something. But with fuzzy sets, the existence of contradictory sets can't cause things to malfunction. Contradictory information doesn't lead to a clash. You just keep computing. (...) Life affords many instances of getting the right answer for the wrong reasons... It is in the nature of logic to confirm or deny. The fuzzy calculus blurs that. (...) Logic isn't following the rules of Aristotle blindly. It takes the kind of pain known to the runner. He knows he is doing something. When you are thinking about something hard, you'll feel a similar sort of pain. Fuzzy logic is marvellous. It insulates you from pain. It's the cocaine of science."[175] According to Kahan, statements of a degree of probability are usually verifiable. There are standard tests one can do. By contrast, there is no conclusive procedure which can decide the validity of assigning particular fuzzy truth values to a data set in the first instance. It is just assumed that a model or program will work, "if" particular fuzzy values are accepted and used, perhaps based on some statistical comparisons or try-outs. In programming, a problem can usually be solved in several different ways, not just one way, but an important issue is which solution works best in the short term and in the long term. Kahan implies that fuzzy solutions may create more problems in the long term than they solve in the short term. For example, if one starts off designing a procedure not with well thought-out, precise concepts, but rather by using fuzzy or approximate expressions which conveniently patch up (or compensate for) badly formulated ideas, the ultimate result could be a complicated, malformed mess that does not achieve the intended goal.
Had the reasoning and conceptualization been much sharper at the start, then the design of the procedure might have been much simpler, more efficient and effective – and fuzzy expressions or approximations would not be necessary, or would be required much less. Thus, by allowing the use of fuzzy or approximate expressions, one might actually foreclose more rigorous thinking about design, and one might build something that ultimately does not meet expectations. If (say) an entity X turns out to belong for 65% to category Y, and for 35% to category Z, how should X be allocated? One could plausibly decide to allocate X to Y, making a rule that, if an entity belongs for 65% or more to Y, it is to be treated as an instance of category Y, and never as an instance of category Z. One could, however, alternatively decide to change the definitions of the categorization system, to ensure that all entities such as X fall 100% in one category only. This kind of argument claims that boundary problems can be resolved (or vastly reduced) simply by using better categorization or conceptualization methods. If we treat X "as if" it belongs 100% to Y, while in truth it only belongs 65% to Y, then arguably we are really misrepresenting things. If we keep doing that with a lot of related variables, we can greatly distort the true situation, and make it look like something that it isn't. In a "fuzzy permissive" environment, it might become far too easy to formalize and use a concept which is itself badly defined, and which could have been defined much better. In that environment, there is always a quantitative way out for concepts that do not quite fit, or which don't quite do the job for which they are intended. The cumulative adverse effect of the discrepancies might, in the end, be much larger than ever anticipated. A typical reply to Kahan's objections is that fuzzy reasoning never "rules out" ordinary binary logic, but instead presupposes ordinary true-or-false logic. Lotfi Zadeh stated that "fuzzy logic is not fuzzy. In large measure, fuzzy logic is precise."[176] It is a precise logic of imprecision. Fuzzy logic is not a replacement of, or substitute for, ordinary logic, but an enhancement of it, with many practical uses. Fuzzy thinking does oblige action, but primarily in response to a change in quantitative gradation, not in response to a contradiction. One could say, for example, that ultimately one is either "alive" or "dead", which is perfectly true. In the meantime, though, one is "living", which is also a significant truth – yet "living" is a fuzzy concept.[177] It is true that fuzzy logic by itself usually cannot eliminate inadequate conceptualization or bad design. Yet it can at least make explicit what exactly the variations are in the applicability of a concept which has unsharp boundaries. If one always had perfectly crisp concepts available, perhaps no fuzzy expressions would be necessary. In reality though, one often does not have all the crisp concepts to start off with. One might not have them yet for a long time, or ever – or several successive "fuzzy" approximations might be needed to get there. A "fuzzy permissive" environment may be appropriate and useful, precisely because it permits things to be actioned that would never have been achieved if there had been crystal clarity about all the consequences from the start, or if people insisted on absolute precision prior to doing anything. Scientists often try things out on the basis of "hunches", and processes like serendipity can play a role.
Learning something new, or trying to create something new, is rarely a completely formal-logical or linear process. There are not only "knowns" and "unknowns" involved, but also "partly known" phenomena, i.e., things which are known or unknown "to some degree". Even if, ideally, we would prefer to eliminate fuzzy ideas, we might need them initially to get there, further down the track. Any method of reasoning is a tool. If its application has bad results, it is not the tool itself that is to blame, but its inappropriate use. It would be better to educate people in the best use of the tool, if necessary with appropriate authorization, than to ban the tool pre-emptively, on the ground that it "could" or "might" be abused. Exceptions to this rule would include things like computer viruses and illegal weapons that can only cause great harm if they are used. There is no evidence though that fuzzy concepts as a species are intrinsically harmful, even if some bad concepts can cause harm if used in inappropriate contexts. Susan Haack once claimed that a many-valued logic requires neither intermediate terms between true and false, nor a rejection of bivalence.[178] She implied that the intermediate terms (i.e. the gradations of truth) can always be restated as conditional if-then statements, and by implication, that fuzzy logic is fully reducible to binary true-or-false logic. This interpretation is disputed (it assumes that the knowledge already exists to fit the intermediate terms to a logical sequence), but even if it were correct, assigning a number to the applicability of a statement is often enormously more efficient than a long string of if-then statements that would have the same intended meaning. That point is obviously of great importance to computer programmers, educators and administrators seeking to code a process, activity, message or operation as simply as possible, according to logically consistent rules. Prof. Haack is, of course, quite correct when she argues that fuzzy logic does not do away with binary logic.[179] It may be wonderful to have an unlimited number of distinctions available to define what one means, but not all scholars would agree that any concept is equal to, or reducible to, a mathematical set.[180] Some phenomena are difficult or impossible to quantify and count, in particular if they lack discrete boundaries (for example, clouds). George Lakoff emphasized that it is not true that fuzzy-set theory is the only or necessarily the most appropriate way to start modelling concepts.[181] Qualities may not be fully reducible to quantities[182] – if there are no qualities, it may become impossible to say what the numbers are numbers of, or what they refer to, except that they refer to other numbers or numerical expressions such as algebraic equations. A measure requires a counting unit defined by a category, but the definition of that category is essentially qualitative; a language which is used to communicate data is difficult to operate without any qualitative distinctions and categories. We may, for example, transmit a text in binary code, but the binary code does not tell us directly what the text intends. It has to be translated, decoded or converted first, before it becomes comprehensible.
In creating a formalization or formal specification of a concept, for example for the purpose of measurement, administrative procedure or programming, part of the meaning of the concept may be changed or lost.[183] For example, if we deliberately program an event according to a concept, it might kill off the spontaneity, spirit, authenticity and motivational pattern which is ordinarily associated with that type of event. Quantification is not an unproblematic process.[184] To quantify a phenomenon, we may have to introduce special assumptions and definitions which disregard part of the totality of the phenomenon. Programmers, statisticians or logicians are concerned in their work with the main operational or technical significance of a concept which is specifiable in objective, quantifiable terms. They are not primarily concerned with all kinds of imaginative frameworks associated with the concept, or with those aspects of the concept which seem to have no particular functional purpose – however entertaining they might be. However, some of the qualitative characteristics of the concept may not be quantifiable or measurable at all, at least not directly. The temptation exists to ignore them, or to try to infer them from data results. If, for example, we want to count the number of trees in a forest area with any precision, we have to define what counts as one tree, and perhaps distinguish them from saplings, split trees, dead trees, fallen trees, etc. Soon enough it becomes apparent that the quantification of trees involves a degree of abstraction – we decide to disregard some timber, dead or alive, from the population of trees, in order to count those trees that conform to our chosen concept of a tree. We operate in fact with an abstract concept of what a tree is, which diverges to some extent from the true diversity of trees there are. Even so, there may be some trees for which it is not very clear whether they should be counted as a tree or not. It may be difficult to define the exact boundary where the forest begins and ends.[189] The forest boundary might also change somewhat in the course of time.[190] A certain amount of "fuzziness" in the definition of a tree and of the forest may therefore remain. The implication is that the seemingly "exact" number offered for the total quantity of trees in the forest may be much less exact than one might think – it is probably more an estimate or indication of magnitude than an exact description.[191] Yet – and this is the point – the imprecise measure can be very useful and sufficient for all intended purposes. It is tempting to think that if something can be measured, it must exist, and that if we cannot measure it, it does not exist. Neither might be true. Researchers try to measure such things as intelligence or gross domestic product, without much scientific agreement about what these things actually are, how they exist, and what the correct measures might be.[192] When one wants to count and quantify distinct objects using numbers, one needs to be able to distinguish between all of those separate objects as countable units. If this is difficult or impossible, then, although this may not invalidate a quantitative procedure as such, quantification is not really possible in practice. At best, we may be able to assume or infer indirectly a certain distribution of quantities that must be there.
In this sense, scientists often use proxy variables to substitute as measures for variables which are known (or thought) to be there, but which themselves cannot be observed or measured directly. The exact relationship between vagueness and fuzziness is disputed. Philosophers often regard fuzziness as a particular kind of vagueness,[193] and consider that "no specific assignment of semantic values to vague predicates, not even a fuzzy one, can fully satisfy our conception of what the extensions of vague predicates are like".[194] Surveying recent literature on how to characterize vagueness, Matti Eklund states that appeal to lack of sharp boundaries, borderline cases and "sorites-susceptible" predicates are the three informal characterizations of vagueness which are most common in the literature.[195] However, Lotfi A. Zadeh claimed that "vagueness connotes insufficient specificity, whereas fuzziness connotes unsharpness of class boundaries". Thus, he argued, a sentence like "I will be back in a few minutes" is fuzzy but not vague, whereas a sentence such as "I will be back sometime" is fuzzy and vague. His suggestion was that fuzziness and vagueness are logically quite different qualities, rather than fuzziness being a type or subcategory of vagueness. Zadeh claimed that "inappropriate use of the term 'vague' is still a common practice in the literature of philosophy".[196] In the scholarly inquiry about ethics and meta-ethics, vague or fuzzy concepts and borderline cases are standard topics of controversy. Central to ethics are theories of "value", what is "good" or "bad" for people and why that is, and the idea of "rule following" as a condition for moral integrity, consistency and non-arbitrary behaviour. Yet, if human valuations or moral rules are only vague or fuzzy, then they may not be able to orient or guide behaviour. It may become impossible to operationalize rules. Evaluations may not permit definite moral judgements in that case. Hence, clarifying fuzzy moral notions is usually considered to be critical for the ethical endeavour as a whole.[197] Nevertheless, Scott Soames has made the case that vagueness or fuzziness can be valuable to rule-makers, because "their use of it is valuable to the people to whom rules are addressed".[198] It may be more practical and effective to allow for some leeway (and personal responsibility) in the interpretation of how a rule should be applied – bearing in mind the overall purpose which the rule intends to achieve. If a rule or procedure is stipulated too exactly, it can sometimes have a result which is contrary to the aim which it was intended to help achieve. For example, "The Children and Young Persons Act could have specified a precise age below which a child may not be left unsupervised. But doing so would have incurred quite substantial forms of arbitrariness (for various reasons, and particularly because of the different capacities of children of the same age)".[199] A related sort of problem is that if the application of a legal concept is pursued too exactly and rigorously, it may have consequences that cause a serious conflict with another legal concept. This is not necessarily a matter of bad law-making. When a law is made, it may not be possible to anticipate all the cases and events to which it will apply later (even if 95% of possible cases are predictable). The longer a law is in force, the more likely it is that people will run into problems with it that were not foreseen when the law was made.
So, the further implications of one rule may conflict with another rule. "Common sense" might not be able to resolve things. In that scenario, too much precision can get in the way of justice. Very likely a special court ruling will have to set a norm. The general problem for jurists is whether "the arbitrariness resulting from precision is worse than the arbitrariness resulting from the application of a vague standard".[200] David Lanius has examined nine arguments for the "value of vagueness" in different contexts.[201] The definitional disputes about fuzziness remain unresolved so far, mainly because, as anthropologists and psychologists have documented, different languages (or symbol systems) that have been created by people to signal meanings suggest different ontologies.[202] Put simply: it is not merely that describing "what is there" involves symbolic representations of some kind. How distinctions are drawn influences perceptions of "what is there", and vice versa, perceptions of "what is there" influence how distinctions are drawn.[203] This is an important reason why, as Alfred Korzybski noted, people frequently confuse the symbolic representation of reality, conveyed by languages and signs, with reality itself.[204] Fuzziness implies that there exists a potentially infinite number of truth values between complete truth and complete falsehood. If that is the case, it creates the foundational issue of what, in that case, can justify or prove the existence of the categorical absolutes which are assumed by logical or quantitative inference. If there is an infinite number of shades of grey, how do we know what is totally black and white, and how could we identify that? To illustrate the ontological issues, cosmologist Max Tegmark argues boldly that the universe consists of math: "If you accept the idea that both space itself, and all the stuff in space, have no properties at all except mathematical properties," then the idea that everything is mathematical "starts to sound a little bit less insane."[205] Tegmark moves from the epistemic claim that mathematics is the only known symbol system which can in principle express absolutely everything, to the methodological claim that everything is reducible to mathematical relationships, and then to the ontological claim that ultimately everything that exists is mathematical (the mathematical universe hypothesis). The argument is then reversed, so that because everything is mathematical in reality, mathematics is necessarily the ultimate universal symbol system. The main criticisms of Tegmark's approach are that (1) the steps in this argument do not necessarily follow, (2) no conclusive proof or test is possible for the claim that a total reduction of everything to mathematics is feasible, among other things because qualitative categories remain indispensable to understand and navigate what quantities mean, and (3) it may be that a complete reduction to mathematics cannot be accomplished without at least partly altering, negating or deleting a non-mathematical significance of phenomena, experienced perhaps as qualia.[206] In his meta-mathematical metaphysics, Edward N. Zalta has claimed that for every set of properties of a concrete object, there always exists exactly one abstract object that encodes exactly that set of properties and no others – a foundational assumption or axiom for his ontology of abstract objects.[207] By implication, for every fuzzy object there always exists at least one defuzzified concept which encodes it exactly.
It is a modern interpretation of Plato's metaphysics of knowledge,[208] which expresses confidence in the ability of science to conceptualize the world exactly. The Platonic-style interpretation was critiqued by Hartry H. Field.[209] Mark Balaguer argues that we do not really know whether mind-independent abstract objects exist or not; so far, we cannot prove whether Platonic realism is definitely true or false.[210] Defending a cognitive realism, Scott Soames argues that the reason why this unsolvable conundrum has persisted is that the ultimate constitution of the meaning of concepts and propositions was misconceived. Traditionally, it was thought that concepts can be truly representational because ultimately they are related to intrinsically representational Platonic complexes of universals and particulars (see theory of forms). However, once concepts and propositions are regarded as cognitive-event types, it is possible to claim that they are able to be representational because they are constitutively related to intrinsically representational cognitive acts in the real world.[211] As another philosopher put it, "The question of how we can know the world around us is not entirely unlike the question of how it is that the food our environment provides happens to agree with our stomachs. Either can become a mystery if we forget that minds, like stomachs, originated in and have been conditioned by a pre-existent natural order."[212] Along these lines, it could be argued that reality, and the human cognition of reality, will inevitably contain some fuzzy characteristics, which can perhaps be represented only by concepts which are themselves fuzzy to some or other extent. Even using ordinary set theory and binary logic to reason something out, logicians have discovered that it is possible to generate statements which are, logically speaking, not completely true or which imply a paradox,[213] even though in other respects they conform to logical rules (see Russell's paradox). If a margin of indeterminacy therefore persists, then binary logic cannot totally remove fuzziness. David Hilbert concluded that the existence of logical paradoxes tells us "that we must develop a meta-mathematical analysis of the notions of proof and of the axiomatic method; their importance is methodological as well as epistemological".[214] The idea of fuzzy concepts has also been applied in the philosophical, sociological and linguistic analysis of human behaviour.[215] In a 1973 paper, George Lakoff analyzed hedges in the interpretation of the meaning of categories.[216] Charles Ragin and others have applied the idea to sociological analysis.[217] For example, fuzzy set qualitative comparative analysis ("fsQCA") has been used by German researchers to study problems posed by ethnic diversity in Latin America.[218] In New Zealand, Taiwan, Iran, Malaysia, the European Union and Croatia, economists have used fuzzy concepts to model and measure the underground economy of their country.[219] Kofi Kissi Dompere applied methods of fuzzy decision, approximate reasoning, negotiation games and fuzzy mathematics to analyze the role of money, information and resources in a "political economy of rent-seeking", viewed as a game played between powerful corporations and the government.[220] The German researcher Thomas Kron has used fuzzy methods to model sociological theory, creating an integral action-theoretical model with the aid of fuzzy logic.
With Lars Winter, Kron developed the system theory of Niklas Luhmann further, using the so-called "Kosko-Cube".[221] Kron studies transnational terrorism and other contemporary phenomena using fuzzy logic, to understand conditions involving uncertainty, hybridity, violence and cultural systems.[222] A concept may be deliberately created by sociologists as an ideal type to understand something imaginatively, without any strong claim that it is a "true and complete description" or a "true and complete reflection" of whatever is being conceptualized.[223] In a more general sociological or journalistic sense, a "fuzzy concept" has come to mean a concept which is meaningful but inexact, implying that it does not exhaustively or completely define the meaning of the phenomenon to which it refers – often because it is too abstract. In this context, it is said that fuzzy concepts "lack clarity and are difficult to test or operationalize".[224] To specify the relevant meaning more precisely, additional distinctions, conditions and/or qualifiers would be required. A few examples can illustrate this kind of usage: The main reason why the term "fuzzy concept" is now often used in describing human behaviour is that human interaction has many characteristics which are difficult to quantify and measure precisely (although we know that they have magnitudes and proportions), among other things because they are interactive and reflexive (the observers and the observed mutually influence the meaning of events).[229] Those human characteristics can be usefully expressed only in an approximate way (see reflexivity (social theory)).[230] Newspaper stories frequently contain fuzzy concepts, which are readily understood and used, even though they are far from exact. Thus, many of the meanings which people ordinarily use to negotiate their way through life in reality turn out to be "fuzzy concepts". While people often do need to be exact about some things (e.g. money or time), many areas of their lives involve expressions which are far from exact. Sometimes the term is also used in a pejorative sense. For example, a New York Times journalist wrote that Prince Sihanouk "seems unable to differentiate between friends and enemies, a disturbing trait since it suggests that he stands for nothing beyond the fuzzy concept of peace and prosperity in Cambodia".[231] The use of fuzzy logic in the social sciences and humanities has remained limited until recently. Lotfi A. Zadeh said in a 1994 interview that: "I expected people in the social sciences – economics, psychology, philosophy, linguistics, politics, sociology, religion and numerous other areas to pick up on it. It's been somewhat of a mystery to me why even to this day, so few social scientists have discovered how useful it could be."[232] Two decades later, after a digital information explosion due to the growing use of the internet and mobile phones worldwide, fuzzy concepts and fuzzy logic were increasingly being applied in big data analysis of social, commercial and psychological phenomena. Many sociometric and psychometric indicators are based partly on fuzzy concepts and fuzzy variables. Jaakko Hintikka once claimed that "the logic of natural language we are in effect already using can serve as a 'fuzzy logic' better than its trade name variant without any additional assumptions or constructions."[233] That might help to explain why fuzzy logic has not been used much to formalize concepts in the "soft" social sciences. Lotfi A.
Zadeh rejected such an interpretation, on the ground that in many human endeavours as well as technologies it is highly important to define more exactly "to what extent" something is applicable or true, when it is known that its applicability can vary to some important extent among large populations. Reasoning which accepts and uses fuzzy concepts can be shown to be perfectly valid with the aid of fuzzy logic, because the degrees of applicability of a concept can be more precisely and efficiently defined with the aid of numerical notation. Another possible explanation for the traditional lack of use of fuzzy logic by social scientists is simply that, beyond basic statistical analysis (using programs such as SPSS and Excel), the mathematical knowledge of social scientists is often rather limited; they may not know how to formalize and code a fuzzy concept using the conventions of fuzzy logic. The standard software packages used provide only a limited capacity to analyze fuzzy data sets, if at all, and considerable skills are required. Yet Jaakko Hintikka may be correct, in the sense that it can be much more efficient to use natural language to denote a complex idea than to formalize it in logical terms. The quest for formalization might introduce much more complexity, which is not wanted, and which detracts from communicating the relevant issue. Some concepts used in social science may be impossible to formalize exactly, even though they are quite useful and people understand their appropriate application quite well. Fuzzy concepts can generate uncertainty because they are imprecise (especially if they refer to a process in motion, or a process of transformation where something is "in the process of turning into something else"). In that case, they do not provide a clear orientation for action or decision-making ("what does X really mean, intend or imply?"); reducing fuzziness, perhaps by applying fuzzy logic,[234] might generate more certainty. However, this is not necessarily always so.[235] A concept, even though it is not fuzzy at all, and even though it is very exact, could equally well fail to capture the meaning of something adequately. That is, a concept can be very precise and exact, but not – or insufficiently – applicable or relevant in the situation to which it refers. In this sense, a definition can be "very precise", but "miss the point" altogether. A fuzzy concept may indeed provide more security, because it provides a meaning for something when an exact concept is unavailable – which is better than not being able to denote it at all. A concept such as God, although not easily definable, can for instance provide security to the believer.[236] In physics, the observer effect and Heisenberg's uncertainty principle[237] indicate that there is a physical limit to the amount of precision that is knowable, with regard to the movements of subatomic particles and waves. That is, features of physical reality exist where we can know that they vary in magnitude, but of which we can never know or predict exactly how big or small the variations are. This insight suggests that, in some areas of our experience of the physical world, fuzziness is inevitable and can never be totally removed. Since the physical universe itself is incredibly large and diverse, it is not easy to imagine it, grasp it or describe it without using fuzzy concepts.
Ordinary language, which uses symbolic conventions and associations which are often not logical, inherently contains many fuzzy concepts[238] – "knowing what you mean" in this case depends partly on knowing the context (or being familiar with the way in which a term is normally used, or what it is associated with).[239] This can be easily verified, for instance, by consulting a dictionary, a thesaurus or an encyclopedia, which show the multiple meanings of words, or by observing the behaviours involved in ordinary relationships which rely on mutually understood meanings (see also Imprecise language). Bertrand Russell regarded ordinary language (in contrast to logic) as intrinsically vague.[240] To communicate, receive or convey a message, an individual somehow has to bridge his own intended meaning and the meanings which are understood by others, i.e., the message has to be conveyed in a way that it will be socially understood, preferably in the intended manner. Thus, people might state: "you have to say it in a way that I understand". Even if the message is clear and precise, it may nevertheless not be received in the way it was intended. Bridging meanings may be done instinctively, habitually or unconsciously, but it usually involves a choice of terms, assumptions or symbols whose meanings are not completely fixed, but which depend among other things on how the receivers of the message respond to it, or the context. In this sense, meaning is often "negotiated" or "interactive" (or, more cynically, manipulated). This gives rise to many fuzzy concepts. The semantic challenge of conveying meanings to an audience was explored in detail, and analyzed logically, by the British philosopher Paul Grice – using, among other things, the concept of implicature.[241] Implicature refers to what is suggested by a message to the recipient, without being either explicitly expressed or logically entailed by its content. The suggestion could be very clear to the recipient (perhaps a sort of code), but it could also be vague or fuzzy. Various different aspects of human experience commonly generate concepts with fuzzy characteristics. The formation of fuzzy concepts is partly due to the fact that the human brain does not operate like a computer (see also Chinese room).[242] According to fuzzy-trace theory, partly inspired by Gestalt psychology, human intuition is a non-arbitrary, reasonable and rational process of cognition; it literally "makes sense" (see also: Problem of multiple generality).[248] In part, fuzzy concepts arise also because learning or the growth of understanding involves a transition from a vague awareness, which cannot orient behaviour greatly, to clearer insight, which can orient behaviour. At the first encounter with an idea, the sense of the idea may be rather hazy. When more experience with the idea has occurred, a clearer and more precise grasp of the idea results, as well as a better understanding of how and when to use the idea (or not). In his study of implicit learning, Arthur S.
Reber affirms that there does not exist a very sharp boundary between the conscious and the unconscious, and "there are always going to be lots of fuzzy borderline cases of material that is marginally conscious and lots of elusive instances of functions and processes that seem to slip in and out of personal awareness".[249] Thus, an inevitable component of fuzziness exists and persists in human consciousness, because of continual variation of gradations in awareness, along a continuum from the conscious, the preconscious, and the subconscious to the unconscious. The hypnotherapist Milton H. Erickson similarly noted that the conscious mind and the unconscious normally interact.[250] Some psychologists and logicians argue that fuzzy concepts are a necessary consequence of the reality that any kind of distinction we might like to draw has limits of application. At a certain level of generality, a distinction works fine. But if we pursue its application in a very exact and rigorous manner, or overextend its application, it appears that the distinction simply does not apply in some areas or contexts, or that we cannot fully specify how it should be drawn. An analogy might be that zooming a telescope, camera, or microscope in and out reveals that a pattern which is sharply focused at a certain distance becomes blurry at another distance, or disappears altogether. Faced with any large, complex and continually changing phenomenon, any short statement made about that phenomenon is likely to be "fuzzy", i.e., it is meaningful, but – strictly speaking – incorrect and imprecise.[251] It will not really do full justice to the reality of what is happening with the phenomenon. A correct, precise statement would require a lot of elaborations and qualifiers. Nevertheless, the "fuzzy" description turns out to be a useful shorthand that saves a lot of time in communicating what is going on ("you know what I mean"). In psychophysics, it was discovered that the perceptual distinctions we draw in the mind are often more definite than they are in the real world. Thus, the brain actually tends to "sharpen up" or "enhance" our perceptions of differences in the external world. If there are more gradations and transitions in reality than our conceptual or perceptual distinctions can capture in our minds, then it could be argued that how those distinctions will actually apply must necessarily become vaguer at some point. In interacting with the external world, the human mind may often encounter new, or partly new, phenomena or relationships which cannot (yet) be sharply defined given the background knowledge available, and by known distinctions, associations or generalizations. "Crisis management plans cannot be put 'on the fly' after the crisis occurs. At the outset, information is often vague, even contradictory. Events move so quickly that decision makers experience a sense of loss of control. Often denial sets in, and managers unintentionally cut off information flow about the situation" – L. Paul Bremer.[254] It can also be argued that fuzzy concepts are generated by a certain sort of lifestyle or way of working which evades definite distinctions, makes them impossible or inoperable, or which is in some way chaotic. To obtain concepts which are not fuzzy, it must be possible to test out their application in some way. But in the absence of any relevant clear distinctions, lacking an orderly environment, or when everything is "in a state of flux" or in transition, it may not be possible to do so, so that the amount of fuzziness increases.
Fuzzy concepts often play a role in the creative process of forming new concepts to understand something. In the most primitive sense, this can be observed in infants who, through practical experience, learn to identify, distinguish and generalise the correct application of a concept, and relate it to other concepts.[255] However, fuzzy concepts may also occur in scientific, journalistic, programming and philosophical activity, when a thinker is in the process of clarifying and defining a newly emerging concept which is based on distinctions which, for one reason or another, cannot (yet) be more exactly specified or validated. Fuzzy concepts are often used to denote complex phenomena, or to describe something which is developing and changing, which might involve shedding some old meanings and acquiring new ones. Many concepts which are used fairly universally in daily life (such as "love",[274] "God",[275] "health",[276] "social",[277] "sustainability",[278] "tolerance", etc.) are considered to be intrinsically fuzzy concepts, to the extent that their meaning usually cannot be completely and exactly specified with logical operators or objective terms, and can have multiple interpretations and personal (subjective) meanings. Yet such concepts are not at all meaningless. People keep using the concepts, even if they are difficult to define precisely. It may also be possible to specify one personal meaning for the concept, without however placing restrictions on a different use of the concept in other contexts (as when, for example, one says "this is what I mean by X" in contrast to other possible meanings). In ordinary speech, concepts may sometimes also be uttered purely randomly; for example a child may repeat the same idea in completely unrelated contexts, or an expletive term may be uttered arbitrarily. A feeling or sense is conveyed, without it being fully clear what it is about. Happiness may be an example of a word with variable meanings depending on context or timing.[279] Fuzzy concepts can be used deliberately to create ambiguity and vagueness, as an evasive tactic, or to bridge what would otherwise be immediately recognized as a contradiction of terms. They might be used to indicate that there is definitely a connection between two things, without giving a complete specification of what the connection is, for some or other reason. This could be due to a failure or refusal to be more precise. It could be academic bluff or pretense of knowledge. But it could also be a prologue to a more exact formulation of a concept, or to a better understanding of it.[280] Fuzzy concepts can be used as a practical method to describe something of which a complete description would be an unmanageably large undertaking, or very time-consuming; thus, a simplified indication of what is at issue is regarded as sufficient, although it is not exact. There is also such a thing as an "economy of distinctions", meaning that it is not helpful or efficient to use more detailed definitions than are really necessary for a given purpose. In this sense, Karl Popper rejected pedantry and commented that: "...it is always undesirable to make an effort to increase precision for its own sake – especially linguistic precision – since this usually leads to loss of clarity, and to a waste of time and effort on preliminaries which often turn out to be useless, because they are bypassed by the real advance of the subject: one should never try to be more precise than the problem situation demands. I might perhaps state my position as follows.
Every increase in clarity is of intellectual value in itself; an increase in precision or exactness has only a pragmatic value as a means to some definite end..."[281] The provision of "too many details" could be disorienting and confusing, instead of being enlightening, while a fuzzy term might be sufficient to provide an orientation. The reason for using fuzzy concepts can therefore be purely pragmatic, if it is not feasible or desirable (for practical purposes) to provide "all the details" about the meaning of a shared symbol or sign. Thus people might say "I realize this is not exact, but you know what I mean" – they assume practically that stating all the details is not required for the purpose of the communication. Lotfi A. Zadeh picked up this point, and drew attention to a "major misunderstanding" about applying fuzzy logic. It is true that the basic aim of fuzzy logic is to make what is imprecise more precise. Yet in many cases, fuzzy logic is used paradoxically to "imprecisiate what is precise", meaning that there is a deliberate tolerance for imprecision for the sake of simplicity of procedure and economy of expression. In such uses, there is a tolerance for imprecision because making ideas more precise would be unnecessary and costly, while "imprecisiation reduces cost and enhances tractability" (tractability means "being easy to manage or operationalize"). Zadeh calls this approach the "Fuzzy Logic Gambit" (a gambit means giving up something now, to achieve a better position later). In the Fuzzy Logic Gambit, "what is sacrificed is precision in [quantitative] value, but not precision in meaning", and more concretely, "imprecisiation in value is followed by precisiation in meaning". Zadeh cited as an example Takeshi Yamakawa's programming for an inverted pendulum, where differential equations are replaced by fuzzy if-then rules in which words are used in place of numbers.[282] Common use of this sort of approach (combining words and numbers in programming) has led some logicians to regard fuzzy logic merely as an extension of Boolean logic (a two-valued logic or binary logic is simply replaced with a many-valued logic). However, Boolean concepts have a logical structure which differs from fuzzy concepts. An important feature of Boolean logic is that an element of a set can also belong to any number of other sets; even so, the element either does, or does not, belong to a set (or sets). By contrast, whether an element belongs to a fuzzy set is a matter of degree, and not always a definite yes-or-no question. All the same, the Greek mathematician Costas Drossos suggests in various papers that, using a "non-standard" mathematical approach, we could also construct fuzzy sets with Boolean characteristics and Boolean sets with fuzzy characteristics.[283] This would imply that, in practice, the boundary between fuzzy sets and Boolean sets is itself fuzzy, rather than absolute. For a simplified example, we might be able to state that a concept X is definitely applicable to a finite set of phenomena, and definitely not applicable to all other phenomena. Yet, within the finite set of relevant items, X might be fully applicable to one subset of the included phenomena, while it is applicable only "to some varying extent or degree" to another subset of phenomena which are also included in the set. Following ordinary set theory, this generates logical problems, if e.g. overlapping subsets within sets are related to other overlapping subsets within other sets.
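The contrast between Boolean and graded membership can be made concrete in a few lines of code. The following is a minimal sketch in Python; the function names and the 160–190 cm "tall" ramp are illustrative assumptions, not a standard definition:

# Crisp (Boolean) membership: a height either is or is not "tall".
def tall_crisp(height_cm):
    return height_cm >= 180  # True or False, nothing in between

# Fuzzy membership: "tall" applies to a degree between 0 and 1.
# The 160-190 cm ramp is an assumption chosen only for the example.
def tall_degree(height_cm):
    if height_cm <= 160:
        return 0.0
    if height_cm >= 190:
        return 1.0
    return (height_cm - 160) / 30.0  # linear ramp between the anchors

for h in (155, 172, 183, 195):
    print(h, tall_crisp(h), round(tall_degree(h), 2))

The crisp function draws one sharp boundary at 180 cm; the fuzzy function replaces that boundary with a gradation, which is exactly the "precision in meaning" that the numerical degree makes explicit.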
In mathematical logic, computer programming, philosophy and linguistics, fuzzy concepts can be defined more accurately by describing the concepts using the terms of fuzzy logic or other substructural logics. With the rapid development of computer programming languages and digital processing capacity since the 1970s, it is now accepted in the sciences that there isn't just one "correct" way to formalize items of knowledge. Innovators realized that concepts and processes can be represented using many different kinds of tools, methods and systems – according to what happens to be the most useful, effective or efficient method for a given purpose.[284] Aided by new software and artificial intelligence, many traditional and new sorts of techniques can be applied to clarify ideas. In this way, we can obtain a more exact understanding of the meaning and use of a fuzzy concept, and possibly decrease the amount of fuzziness. It may not be possible to specify all the possible meanings or applications of a concept completely and exhaustively, but if it is possible to capture the majority of them, statistically or otherwise, this may be useful enough for practical purposes. A process of defuzzification is said to occur when fuzzy concepts can be logically described in terms of fuzzy sets, or the relationships between fuzzy sets, which makes it possible to define variations in the meaning or applicability of concepts as quantities. Effectively, qualitative differences are in that case described more precisely as quantitative variations, or quantitative variability. Assigning a numerical value then denotes the magnitude of variation along a scale from zero to one. The difficulty that can occur in judging the fuzziness of a concept can be illustrated with the question "Is this one of those?". If it is not possible to clearly answer this question, that could be because "this" (the object) is itself fuzzy and evades definition, or because "one of those" (the concept of the object) is fuzzy and inadequately defined. Thus, the source of fuzziness may be in (1) the nature of the reality being dealt with, (2) the concepts used to interpret it, or (3) the way in which the two are being related by a person.[287] It may be that the personal meanings which people attach to something are quite clear to the persons themselves, but that it is not possible to communicate those meanings to others except as fuzzy concepts.
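As a hedged illustration of such defuzzification, the sketch below computes the centroid (centre of gravity), one common way of reducing a fuzzy membership curve to a single representative quantity; the triangular membership function peaking at 0.6 is an arbitrary assumption chosen only for the example:

# Centroid defuzzification: reduce a fuzzy membership curve to one number.
def centroid(xs, mu):
    """xs: sample points of the variable; mu(x): membership degree in [0, 1]."""
    num = sum(x * mu(x) for x in xs)
    den = sum(mu(x) for x in xs)
    return num / den if den else None  # undefined for an empty fuzzy set

# An assumed triangular concept peaking at 0.6 on a 0..1 scale.
tri = lambda x: max(0.0, 1.0 - abs(x - 0.6) / 0.3)

xs = [i / 100 for i in range(101)]
print(round(centroid(xs, tri), 3))  # a single "defuzzified" value near 0.6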
https://en.wikipedia.org/wiki/Fuzzy_concept
Fuzzy mathematics is the branch of mathematics, including fuzzy set theory and fuzzy logic, that deals with partial inclusion of elements in a set on a spectrum, as opposed to simple binary "yes" or "no" (0 or 1) inclusion. It started in 1965 after the publication of Lotfi Asker Zadeh's seminal work Fuzzy sets.[1] Linguistics is an example of a field that utilizes fuzzy set theory. A fuzzy subset A of a set X is a function A: X → L, where L is the interval [0, 1]. This function is also called a membership function. A membership function is a generalization of an indicator function (also called a characteristic function) of a subset, defined for L = {0, 1}. More generally, one can use any complete lattice L in a definition of a fuzzy subset A.[2] The evolution of the fuzzification of mathematical concepts can be broken down into three stages.[3] Usually, a fuzzification of mathematical concepts is based on a generalization of these concepts from characteristic functions to membership functions. Let A and B be two fuzzy subsets of X. The intersection A ∩ B and union A ∪ B are defined as follows: (A ∩ B)(x) = min(A(x), B(x)) and (A ∪ B)(x) = max(A(x), B(x)) for all x in X. Instead of min and max one can use a t-norm and t-conorm, respectively;[4] for example, min(a, b) can be replaced by the multiplication ab. A straightforward fuzzification is usually based on the min and max operations, because in this case more properties of traditional mathematics can be extended to the fuzzy case. An important generalization principle used in the fuzzification of algebraic operations is the closure property. Let * be a binary operation on X. The closure property for a fuzzy subset A of X is that for all x, y in X, A(x * y) ≥ min(A(x), A(y)). Let (G, *) be a group and A a fuzzy subset of G. Then A is a fuzzy subgroup of G if for all x, y in G, A(x * y⁻¹) ≥ min(A(x), A(y⁻¹)). A similar generalization principle is used, for example, for fuzzification of the transitivity property. Let R be a fuzzy relation on X, i.e. R is a fuzzy subset of X × X. Then R is (fuzzy-)transitive if for all x, y, z in X, R(x, z) ≥ min(R(x, y), R(y, z)). Fuzzy subgroupoids and fuzzy subgroups were introduced in 1971 by A. Rosenfeld.[5][6][7] Analogues of other mathematical subjects have been translated to fuzzy mathematics, such as fuzzy field theory and fuzzy Galois theory,[8] fuzzy topology,[9][10] fuzzy geometry,[11][12][13][14] fuzzy orderings,[15] and fuzzy graphs.[16][17][18]
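The fuzzy subgroup condition can be checked mechanically on a small finite group. Below is a minimal sketch verifying A(x * y⁻¹) ≥ min(A(x), A(y⁻¹)) on the group Z₄ under addition modulo 4; the particular membership values are an assumption chosen for the example:

# Check the fuzzy subgroup condition on (Z4, +): A(x*y^-1) >= min(A(x), A(y^-1)).
n = 4
mul = lambda x, y: (x + y) % n        # group operation
inv = lambda x: (-x) % n              # group inverse

# Assumed membership function; constant on the crisp subgroup {0, 2} and lower
# outside it, which is the typical way a fuzzy subgroup "layers" crisp subgroups.
A = {0: 1.0, 1: 0.4, 2: 0.7, 3: 0.4}

ok = all(A[mul(x, inv(y))] >= min(A[x], A[inv(y)])
         for x in range(n) for y in range(n))
print(ok)  # True: A is a fuzzy subgroup of Z4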
https://en.wikipedia.org/wiki/Fuzzy_mathematics
Fuzzy set operations are a generalization of crisp set operations for fuzzy sets. There is in fact more than one possible generalization. The most widely used operations are called standard fuzzy set operations; they comprise: fuzzy complements, fuzzy intersections, and fuzzy unions. Let A and B be fuzzy sets such that A, B ⊆ U, and let u be any element (e.g. a value) in the universe U: u ∈ U. The complement is sometimes denoted by ∁A or A∁ instead of ¬A. In general, the triple (i, u, n) is called a De Morgan triplet iff u(x, y) = n(i(n(x), n(y))) for all x, y ∈ [0, 1] (the generalized De Morgan relation).[1] This implies the axioms provided below in detail. μA(x) is defined as the degree to which x belongs to A. Let ∁A denote a fuzzy complement of A of type c. Then μ∁A(x) is the degree to which x belongs to ∁A, and the degree to which x does not belong to A. (μA(x) is therefore the degree to which x does not belong to ∁A.) Let a complement ∁A be defined by a function c: [0, 1] → [0, 1], with μ∁A(x) = c(μA(x)); such a c satisfying the complement axioms is a strong negator (aka fuzzy complement). A function c satisfying axioms c1 and c3 has at least one fixpoint a* with c(a*) = a*, and if axiom c2 is fulfilled as well there is exactly one such fixpoint. For the standard negator c(x) = 1 − x the unique fixpoint is a* = 0.5.[2] The intersection of two fuzzy sets A and B is specified in general by a binary operation on the unit interval, a function of the form i: [0, 1] × [0, 1] → [0, 1]. Axioms i1 up to i4 define a t-norm (aka fuzzy intersection). The standard t-norm min is the only idempotent t-norm (that is, i(a, a) = a for all a ∈ [0, 1]).[2] The union of two fuzzy sets A and B is specified in general by a binary operation on the unit interval, a function of the form u: [0, 1] × [0, 1] → [0, 1]. Axioms u1 up to u4 define a t-conorm (aka s-norm or fuzzy union). The standard t-conorm max is the only idempotent t-conorm (i.e. u(a, a) = a for all a ∈ [0, 1]).[2] Aggregation operations on fuzzy sets are operations by which several fuzzy sets are combined in a desirable way to produce a single fuzzy set. An aggregation operation on n fuzzy sets (2 ≤ n) is defined by a function h: [0, 1]ⁿ → [0, 1].
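These standard operations can be verified numerically. The following minimal sketch checks, on a sampled grid, that (min, max, 1 − x) satisfies the generalized De Morgan relation and that the standard negator's unique fixpoint is 0.5:

# Standard fuzzy operations and the generalized De Morgan relation.
i = min                      # standard t-norm (fuzzy intersection)
u = max                      # standard t-conorm (fuzzy union)
n = lambda a: 1.0 - a        # standard negator (fuzzy complement)

grid = [k / 20 for k in range(21)]

# u(x, y) == n(i(n(x), n(y))) for all sampled x, y  ->  De Morgan triplet.
demorgan = all(abs(u(x, y) - n(i(n(x), n(y)))) < 1e-12
               for x in grid for y in grid)
print(demorgan)              # True

# The unique fixpoint a* with n(a*) = a* is 0.5 for the standard negator.
print([a for a in grid if abs(n(a) - a) < 1e-12])  # [0.5]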
https://en.wikipedia.org/wiki/Fuzzy_set_operations
Fuzzy subalgebras theory is a chapter of fuzzy set theory. It is obtained from an interpretation, in a multi-valued logic, of axioms usually expressing the notion of subalgebra of a given algebraic structure. Consider a first-order language for algebraic structures with a monadic predicate symbol S. Then a fuzzy subalgebra is a fuzzy model of a theory containing, for any n-ary operation h, the axiom ∀x₁, ..., ∀xₙ (S(x₁) ∧ ... ∧ S(xₙ) → S(h(x₁, ..., xₙ))) and, for any constant c, S(c). The first axiom expresses the closure of S with respect to the operation h, and the second expresses the fact that c is an element in S. As an example, assume that the valuation structure is defined in [0, 1] and denote by ⊙ the operation in [0, 1] used to interpret the conjunction. Then a fuzzy subalgebra of an algebraic structure whose domain is D is defined by a fuzzy subset s: D → [0, 1] of D such that, for every d₁, ..., dₙ in D, if h is the interpretation of the n-ary operation symbol h, then s(h(d₁, ..., dₙ)) ≥ s(d₁) ⊙ ... ⊙ s(dₙ). Moreover, if c is the interpretation of a constant c, then s(c) = 1. A largely studied class of fuzzy subalgebras is the one in which the operation ⊙ coincides with the minimum. In such a case it is immediate to prove the following proposition. Proposition. A fuzzy subset s of an algebraic structure defines a fuzzy subalgebra if and only if for every λ in [0, 1], the closed cut {x ∈ D : s(x) ≥ λ} of s is a subalgebra. The fuzzy subgroups and the fuzzy submonoids are particularly interesting classes of fuzzy subalgebras. In such a case, a fuzzy subset s of a monoid (M, •, u) is a fuzzy submonoid if and only if s(x • y) ≥ s(x) ∧ s(y) for all x, y in M and s(u) = 1, where u is the neutral element in M. Given a group G, a fuzzy subgroup of G is a fuzzy submonoid s of G such that s(x⁻¹) ≥ s(x) for every x in G. It is possible to prove that the notion of fuzzy subgroup is strictly related with the notion of fuzzy equivalence. In fact, assume that S is a set, G a group of transformations in S, and (G, s) a fuzzy subgroup of G. Then, by setting e(x, y) = sup{s(h) : h ∈ G and h(x) = y}, we obtain a fuzzy equivalence. Conversely, let e be a fuzzy equivalence in S and, for every transformation h of S, set s(h) = inf{e(x, h(x)) : x ∈ S}. Then s defines a fuzzy subgroup of transformations in S. In a similar way we can relate the fuzzy submonoids with the fuzzy orders.
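The proposition can be illustrated computationally. The sketch below (the monoid and the membership values are assumptions chosen for the example) checks both the fuzzy submonoid condition with ⊙ = min and the crisp condition that every closed cut is a submonoid, on ({0, 1, 2, 3}, + mod 4):

# Illustrate the cut proposition on the monoid ({0,1,2,3}, + mod 4, neutral 0):
# s is a fuzzy submonoid (with min as conjunction) iff every closed cut
# {x : s(x) >= lam} is a submonoid. The membership values are assumed.
n = 4
op = lambda x, y: (x + y) % n
s = {0: 1.0, 1: 0.3, 2: 0.6, 3: 0.3}

# Fuzzy submonoid conditions: s(x op y) >= min(s(x), s(y)) and s(0) = 1.
fuzzy_ok = s[0] == 1.0 and all(
    s[op(x, y)] >= min(s[x], s[y]) for x in range(n) for y in range(n))

# Crisp check: every closed cut contains the neutral element and is closed under op.
def cut_is_submonoid(lam):
    cut = {x for x in range(n) if s[x] >= lam}
    return 0 in cut and all(op(x, y) in cut for x in cut for y in cut)

cuts_ok = all(cut_is_submonoid(lam) for lam in (0.0, 0.3, 0.6, 1.0))
print(fuzzy_ok, cuts_ok)  # True True, as the proposition predicts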
https://en.wikipedia.org/wiki/Fuzzy_subalgebra
Linear partial information (LPI) is a method of making decisions based on insufficient or fuzzy information. LPI was introduced in 1970 by the Polish–Swiss mathematician Edward Kofler (1911–2007) to simplify decision processes. Compared to other methods, the LPI-fuzziness is algorithmically simple and, particularly in decision making, more practically oriented. Instead of an indicator function, the decision maker linearizes any fuzziness by establishing linear restrictions for fuzzy probability distributions or normalized weights. In the LPI-procedure the decision maker linearizes any fuzziness instead of applying a membership function. This can be done by establishing stochastic and non-stochastic LPI-relations. A mixed stochastic and non-stochastic fuzzification is often a basis for the LPI-procedure. By using the LPI-methods, any fuzziness in any decision situation can be considered on the basis of the linear fuzzy logic. Any stochastic partial information SPI(p), which can be considered as a solution of a linear inequality system, is called linear partial information LPI(p) about the probability p. It can be considered as an LPI-fuzzification of the probability p corresponding to the concepts of linear fuzzy logic. Despite the fuzziness of information, it is often necessary to choose the optimal, most cautious strategy, for example in economic planning, in conflict situations or in daily decisions. This is impossible without the concept of fuzzy equilibrium. The concept of fuzzy stability is considered as an extension into a time interval, taking into account the corresponding stability area of the decision maker. The more complex the model is, the softer a choice has to be considered. The idea of fuzzy equilibrium is based on the optimization principles. Therefore, the MaxEmin-, MaxGmin- and PDP-stability have to be analyzed. The violation of these principles often leads to wrong predictions and decisions. Considering a given LPI-decision model as a convolution of the corresponding fuzzy states or a disturbance set, the fuzzy equilibrium strategy remains the most cautious one, despite the presence of the fuzziness. Any deviation from this strategy can cause a loss for the decision maker.
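As a hedged numerical sketch of the MaxEmin idea under an LPI: suppose three states of nature whose probabilities are known only to satisfy the linear restrictions p₁ ≥ p₂ ≥ p₃ (together with summing to one), and three actions with invented payoffs. For each action, a linear program finds the worst expected payoff over the LPI polytope; the most cautious strategy maximizes that minimum. The payoff table and the use of scipy's linprog are assumptions for illustration, not part of Kofler's original formulation:

# MaxEmin under Linear Partial Information: for each action, minimize the
# expected payoff over all probability vectors satisfying the LPI, then
# choose the action whose worst case is best. Payoffs are illustrative.
from scipy.optimize import linprog

payoffs = {"a1": [5.0, 3.0, 1.0],   # payoff of the action in states 1..3
           "a2": [4.0, 4.0, 2.0],
           "a3": [8.0, 1.0, 0.0]}

# LPI: p1 >= p2 >= p3 >= 0 and p1 + p2 + p3 = 1, written as A_ub @ p <= b_ub.
A_ub = [[-1, 1, 0],   # p2 - p1 <= 0
        [0, -1, 1]]   # p3 - p2 <= 0
b_ub = [0, 0]
A_eq, b_eq = [[1, 1, 1]], [1]

def worst_case(pay):
    # linprog minimizes pay @ p, which is exactly the worst expected payoff.
    res = linprog(c=pay, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, 1)] * 3)
    return res.fun

emin = {a: worst_case(pay) for a, pay in payoffs.items()}
best = max(emin, key=emin.get)      # the MaxEmin (most cautious) action
print(emin, "->", best)

With these invented numbers, action a2 wins: its worst expected payoff over the LPI polytope is higher than that of the actions with more extreme payoffs.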
https://en.wikipedia.org/wiki/Linear_partial_information
In the field of artificial intelligence, the designation neuro-fuzzy refers to combinations of artificial neural networks and fuzzy logic. Neuro-fuzzy hybridization results in a hybrid intelligent system that combines the human-like reasoning style of fuzzy systems with the learning and connectionist structure of neural networks. Neuro-fuzzy hybridization is widely termed fuzzy neural network (FNN) or neuro-fuzzy system (NFS) in the literature. A neuro-fuzzy system (the more popular term, used henceforth) incorporates the human-like reasoning style of fuzzy systems through the use of fuzzy sets and a linguistic model consisting of a set of IF-THEN fuzzy rules. The main strength of neuro-fuzzy systems is that they are universal approximators with the ability to solicit interpretable IF-THEN rules. The strength of neuro-fuzzy systems involves two contradictory requirements in fuzzy modeling: interpretability versus accuracy. In practice, one of the two properties prevails. The neuro-fuzzy research field in fuzzy modeling is divided into two areas: linguistic fuzzy modeling, which is focused on interpretability, mainly the Mamdani model; and precise fuzzy modeling, which is focused on accuracy, mainly the Takagi-Sugeno-Kang (TSK) model. Although generally assumed to be the realization of a fuzzy system through connectionist networks, the term is also used to describe some other configurations. It must be pointed out that the interpretability of Mamdani-type neuro-fuzzy systems can be lost. To improve the interpretability of neuro-fuzzy systems, certain measures must be taken, wherein important aspects of interpretability of neuro-fuzzy systems are also discussed.[2] A recent research line addresses the data stream mining case, where neuro-fuzzy systems are sequentially updated with new incoming samples on demand and on-the-fly. Thereby, system updates not only include a recursive adaptation of model parameters, but also a dynamic evolution and pruning of model components (neurons, rules), in order to handle concept drift and dynamically changing system behavior adequately and to keep the systems/models "up-to-date" anytime. Comprehensive surveys of various evolving neuro-fuzzy systems approaches can be found in [3] and [4]. Pseudo outer product-based fuzzy neural networks (POPFNN) are a family of neuro-fuzzy systems that are based on the linguistic fuzzy model.[5] Three members of POPFNN exist in the literature. The POPFNN architecture is a five-layer neural network where the layers from 1 to 5 are called: input linguistic layer, condition layer, rule layer, consequent layer, output linguistic layer. The fuzzification of the inputs and the defuzzification of the outputs are respectively performed by the input linguistic and output linguistic layers, while the fuzzy inference is collectively performed by the rule, condition and consequence layers. The learning process of POPFNN consists of three phases. Various fuzzy membership generation algorithms can be used: Learning Vector Quantization (LVQ), Fuzzy Kohonen Partitioning (FKP) or Discrete Incremental Clustering (DIC). Generally, the POP algorithm and its variant LazyPOP are used to identify the fuzzy rules.
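To make the fuzzy IF-THEN machinery concrete, the sketch below shows a zero-order Takagi-Sugeno-Kang inference step of the kind a neuro-fuzzy network embeds in its layers and tunes by learning. It is not POPFNN itself, and the membership ramps and rule consequents are invented for illustration:

# A zero-order Takagi-Sugeno-Kang (TSK) fuzzy inference step: the kind of
# fuzzification -> rule firing -> weighted output that a neuro-fuzzy network
# realizes with its layers and then tunes by learning. Parameters are assumed.
def mu_cold(t):  return max(0.0, min(1.0, (18.0 - t) / 10.0))
def mu_hot(t):   return max(0.0, min(1.0, (t - 22.0) / 10.0))

def heater_power(t):
    # Rule 1: IF temperature is cold THEN power = 0.9
    # Rule 2: IF temperature is hot  THEN power = 0.1
    w1, w2 = mu_cold(t), mu_hot(t)       # rule firing strengths
    if w1 + w2 == 0.0:                   # neither rule fires: fall back
        return 0.5                       # assumed neutral output
    return (w1 * 0.9 + w2 * 0.1) / (w1 + w2)  # weighted average of consequents

for t in (5, 20, 30):
    print(t, round(heater_power(t), 2))

In a neuro-fuzzy system, the ramp breakpoints and the consequent values 0.9 and 0.1 would be the trainable parameters, adjusted from data rather than fixed by hand.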
https://en.wikipedia.org/wiki/Neuro-fuzzy
Rough fuzzy hybridization is a method of hybrid intelligent system or soft computing in which fuzzy set theory is used for the linguistic representation of patterns, leading to a fuzzy granulation of the feature space, and rough set theory is used to obtain dependency rules which model the informative regions in the granulated feature space.
https://en.wikipedia.org/wiki/Rough_fuzzy_hybridization
In computer science, a rough set, first described by Polish computer scientist Zdzisław I. Pawlak, is a formal approximation of a crisp set (i.e., conventional set) in terms of a pair of sets which give the lower and the upper approximation of the original set. In the standard version of rough set theory described in Pawlak (1991),[1] the lower- and upper-approximation sets are crisp sets, but in other variations the approximating sets may be fuzzy sets.

The following section contains an overview of the basic framework of rough set theory, as originally proposed by Zdzisław I. Pawlak, along with some of the key definitions. More formal properties and boundaries of rough sets can be found in Pawlak (1991) and cited references. The initial and basic theory of rough sets is sometimes referred to as "Pawlak rough sets" or "classical rough sets", as a means to distinguish it from more recent extensions and generalizations.

Let I=(U,A){\displaystyle I=(\mathbb {U} ,\mathbb {A} )} be an information system (attribute–value system), where U{\displaystyle \mathbb {U} } is a non-empty, finite set of objects (the universe) and A{\displaystyle \mathbb {A} } is a non-empty, finite set of attributes such that I:U→Va{\displaystyle I:\mathbb {U} \rightarrow V_{a}} for every a∈A{\displaystyle a\in \mathbb {A} }. Va{\displaystyle V_{a}} is the set of values that attribute a{\displaystyle a} may take. The information table assigns a value a(x){\displaystyle a(x)} from Va{\displaystyle V_{a}} to each attribute a{\displaystyle a} and object x{\displaystyle x} in the universe U{\displaystyle \mathbb {U} }.

With any P⊆A{\displaystyle P\subseteq \mathbb {A} } there is an associated equivalence relation IND(P){\displaystyle \mathrm {IND} (P)}: IND(P)={(x,y)∈U2∣∀a∈P,a(x)=a(y)}{\displaystyle \mathrm {IND} (P)=\left\{(x,y)\in \mathbb {U} ^{2}\mid \forall a\in P,\,a(x)=a(y)\right\}}. The relation IND(P){\displaystyle \mathrm {IND} (P)} is called a P{\displaystyle P}-indiscernibility relation. The partition of U{\displaystyle \mathbb {U} } is a family of all equivalence classes of IND(P){\displaystyle \mathrm {IND} (P)} and is denoted by U/IND(P){\displaystyle \mathbb {U} /\mathrm {IND} (P)} (or U/P{\displaystyle \mathbb {U} /P}). If (x,y)∈IND(P){\displaystyle (x,y)\in \mathrm {IND} (P)}, then x{\displaystyle x} and y{\displaystyle y} are indiscernible (or indistinguishable) by attributes from P{\displaystyle P}. The equivalence classes of the P{\displaystyle P}-indiscernibility relation are denoted [x]P{\displaystyle [x]_{P}}.

For example, consider the following information table:

    Object  P1  P2  P3  P4  P5
    O1      1   2   0   1   1
    O2      1   2   0   1   1
    O3      2   0   0   1   0
    O4      0   0   1   2   1
    O5      2   1   0   2   1
    O6      0   0   1   2   2
    O7      2   0   0   1   0
    O8      0   1   2   2   1
    O9      2   1   0   2   2
    O10     2   0   0   1   0

When the full set of attributes P={P1,P2,P3,P4,P5}{\displaystyle P=\{P_{1},P_{2},P_{3},P_{4},P_{5}\}} is considered, we see that we have the following seven equivalence classes: {O1,O2}, {O3,O7,O10}, {O4}, {O5}, {O6}, {O8}, {O9}. Thus, the two objects within the first equivalence class, {O1,O2}{\displaystyle \{O_{1},O_{2}\}}, cannot be distinguished from each other based on the available attributes, and the three objects within the second equivalence class, {O3,O7,O10}{\displaystyle \{O_{3},O_{7},O_{10}\}}, cannot be distinguished from one another based on the available attributes. The remaining five objects are each discernible from all other objects. It is apparent that different attribute subset selections will in general lead to different indiscernibility classes.
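A minimal sketch of the indiscernibility partition U/IND(P) (plain Python; the four-object, two-attribute table is hypothetical and much smaller than the example table above):

    from collections import defaultdict

    table = {                      # object -> attribute values (a0, a1)
        "O1": (1, 0), "O2": (1, 0), "O3": (2, 1), "O4": (2, 0),
    }

    def partition(table, attrs):
        # Group objects whose values agree on every attribute index in attrs.
        classes = defaultdict(set)
        for obj, values in table.items():
            classes[tuple(values[a] for a in attrs)].add(obj)
        return list(classes.values())

    print(partition(table, [0, 1]))   # {O1,O2} indiscernible; O3, O4 separate
    print(partition(table, [0]))      # coarser: {O1,O2} and {O3,O4}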
For example, if attribute P={P1}{\displaystyle P=\{P_{1}\}} alone is selected, we obtain the following, much coarser, equivalence-class structure: {O1,O2}, {O3,O5,O7,O9,O10}, {O4,O6,O8}.

Let X⊆U{\displaystyle X\subseteq \mathbb {U} } be a target set that we wish to represent using attribute subset P{\displaystyle P}; that is, we are told that an arbitrary set of objects X{\displaystyle X} comprises a single class, and we wish to express this class (i.e., this subset) using the equivalence classes induced by attribute subset P{\displaystyle P}. In general, X{\displaystyle X} cannot be expressed exactly, because the set may include and exclude objects which are indistinguishable on the basis of attributes P{\displaystyle P}.

For example, consider the target set X={O1,O2,O3,O4}{\displaystyle X=\{O_{1},O_{2},O_{3},O_{4}\}}, and let attribute subset P={P1,P2,P3,P4,P5}{\displaystyle P=\{P_{1},P_{2},P_{3},P_{4},P_{5}\}}, the full available set of features. The set X{\displaystyle X} cannot be expressed exactly, because in [x]P{\displaystyle [x]_{P}} the objects {O3,O7,O10}{\displaystyle \{O_{3},O_{7},O_{10}\}} are indiscernible. Thus, there is no way to represent any set X{\displaystyle X} which includes O3{\displaystyle O_{3}} but excludes objects O7{\displaystyle O_{7}} and O10{\displaystyle O_{10}}.

However, the target set X{\displaystyle X} can be approximated using only the information contained within P{\displaystyle P} by constructing the P{\displaystyle P}-lower and P{\displaystyle P}-upper approximations of X{\displaystyle X}. The P{\displaystyle P}-lower approximation, or positive region, is the union of all equivalence classes in [x]P{\displaystyle [x]_{P}} which are contained by (i.e., are subsets of) the target set – in the example, P_X={O1,O2}∪{O4}{\displaystyle {\underline {P}}X=\{O_{1},O_{2}\}\cup \{O_{4}\}}, the union of the two equivalence classes in [x]P{\displaystyle [x]_{P}} which are contained in the target set. The lower approximation is the complete set of objects in U/P{\displaystyle \mathbb {U} /P} that can be positively (i.e., unambiguously) classified as belonging to target set X{\displaystyle X}.

The P{\displaystyle P}-upper approximation is the union of all equivalence classes in [x]P{\displaystyle [x]_{P}} which have non-empty intersection with the target set – in the example, P¯X={O1,O2}∪{O4}∪{O3,O7,O10}{\displaystyle {\overline {P}}X=\{O_{1},O_{2}\}\cup \{O_{4}\}\cup \{O_{3},O_{7},O_{10}\}}, the union of the three equivalence classes in [x]P{\displaystyle [x]_{P}} that have non-empty intersection with the target set. The upper approximation is the complete set of objects in U/P{\displaystyle \mathbb {U} /P} that cannot be positively (i.e., unambiguously) classified as belonging to the complement (X¯{\displaystyle {\overline {X}}}) of the target set X{\displaystyle X}. In other words, the upper approximation is the complete set of objects that are possibly members of the target set X{\displaystyle X}.

The set U−P¯X{\displaystyle \mathbb {U} -{\overline {P}}X} therefore represents the negative region, containing the set of objects that can be definitely ruled out as members of the target set. The boundary region, given by set difference P¯X−P_X{\displaystyle {\overline {P}}X-{\underline {P}}X}, consists of those objects that can neither be ruled in nor ruled out as members of the target set X{\displaystyle X}.

In summary, the lower approximation of a target set is a conservative approximation consisting of only those objects which can positively be identified as members of the set. (These objects have no indiscernible "clones" which are excluded by the target set.)
The upper approximation is a liberal approximation which includes all objects that might be members of the target set. (Some objects in the upper approximation may not be members of the target set.) From the perspective of U/P{\displaystyle \mathbb {U} /P}, the lower approximation contains objects that are members of the target set with certainty (probability = 1), while the upper approximation contains objects that are members of the target set with non-zero probability (probability > 0).

The tuple ⟨P_X,P¯X⟩{\displaystyle \langle {\underline {P}}X,{\overline {P}}X\rangle } composed of the lower and upper approximation is called a rough set; thus, a rough set is composed of two crisp sets, one representing a lower boundary of the target set X{\displaystyle X}, and the other representing an upper boundary of the target set X{\displaystyle X}.

The accuracy of the rough-set representation of the set X{\displaystyle X} can be given[1] by the following: αP(X)=|P_X|/|P¯X|{\displaystyle \alpha _{P}(X)={\frac {\left|{\underline {P}}X\right|}{\left|{\overline {P}}X\right|}}}. That is, the accuracy of the rough set representation of X{\displaystyle X}, αP(X){\displaystyle \alpha _{P}(X)}, 0≤αP(X)≤1{\displaystyle 0\leq \alpha _{P}(X)\leq 1}, is the ratio of the number of objects which can positively be placed in X{\displaystyle X} to the number of objects that can possibly be placed in X{\displaystyle X} – this provides a measure of how closely the rough set is approximating the target set. Clearly, when the upper and lower approximations are equal (i.e., the boundary region is empty), then αP(X)=1{\displaystyle \alpha _{P}(X)=1}, and the approximation is perfect; at the other extreme, whenever the lower approximation is empty, the accuracy is zero (regardless of the size of the upper approximation).

Rough set theory is one of many methods that can be employed to analyse uncertain (including vague) systems, although it is less common than more traditional methods of probability, statistics, entropy and Dempster–Shafer theory. However, a key difference, and a unique strength, of using classical rough set theory is that it provides an objective form of analysis.[2] Unlike other methods, such as those given above, classical rough set analysis requires no additional information, external parameters, models, functions, grades or subjective interpretations to determine set membership – instead it only uses the information presented within the given data.[3] More recent adaptations of rough set theory, such as dominance-based, decision-theoretic and fuzzy rough sets, have introduced more subjectivity into the analysis.

In general, the upper and lower approximations are not equal; in such cases, we say that the target set X{\displaystyle X} is undefinable or roughly definable on attribute set P{\displaystyle P}. When the upper and lower approximations are equal (i.e., the boundary is empty), P¯X=P_X{\displaystyle {\overline {P}}X={\underline {P}}X}, the target set X{\displaystyle X} is definable on attribute set P{\displaystyle P}. Several special cases of undefinability can be distinguished.

An interesting question is whether there are attributes in the information system (attribute–value table) which are more important to the knowledge represented in the equivalence class structure than other attributes. Often, we wonder whether there is a subset of attributes which can, by itself, fully characterize the knowledge in the database; such an attribute set is called a reduct. Formally, a reduct is a subset of attributes RED⊆P{\displaystyle \mathrm {RED} \subseteq P} such that [x]RED=[x]P{\displaystyle [x]_{\mathrm {RED} }=[x]_{P}}, that is, the equivalence classes induced by RED{\displaystyle \mathrm {RED} } are the same as those induced by the full attribute set, and RED{\displaystyle \mathrm {RED} } is minimal in this respect: removing any attribute from RED{\displaystyle \mathrm {RED} } changes the induced equivalence-class structure. A reduct can be thought of as a sufficient set of features – sufficient, that is, to represent the category structure.
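The lower and upper approximations and the accuracy measure can be computed directly from the partition, as in the following sketch (plain Python; it reuses the hypothetical four-object table from the previous sketch so that it runs on its own):

    from collections import defaultdict

    table = {"O1": (1, 0), "O2": (1, 0), "O3": (2, 1), "O4": (2, 0)}

    def partition(table, attrs):
        classes = defaultdict(set)
        for obj, values in table.items():
            classes[tuple(values[a] for a in attrs)].add(obj)
        return list(classes.values())

    def approximations(table, attrs, target):
        blocks = partition(table, attrs)
        lower = set().union(*(b for b in blocks if b <= target))  # subsets of X
        upper = set().union(*(b for b in blocks if b & target))   # overlap with X
        return lower, upper

    X = {"O1", "O3"}                              # target set to approximate
    lower, upper = approximations(table, [0, 1], X)
    print(lower, upper, len(lower) / len(upper))  # accuracy alpha = 1/3 here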
In the example table above, attribute set {P3,P4,P5}{\displaystyle \{P_{3},P_{4},P_{5}\}} is a reduct – the information system projected on just these attributes possesses the same equivalence-class structure as that expressed by the full attribute set: {O1,O2}, {O3,O7,O10}, {O4}, {O5}, {O6}, {O8}, {O9}. Attribute set {P3,P4,P5}{\displaystyle \{P_{3},P_{4},P_{5}\}} is a reduct because eliminating any of these attributes causes a collapse of the equivalence-class structure, with the result that [x]RED≠[x]P{\displaystyle [x]_{\mathrm {RED} }\neq [x]_{P}}.

The reduct of an information system is not unique: there may be many subsets of attributes which preserve the equivalence-class structure (i.e., the knowledge) expressed in the information system. In the example information system above, another reduct is {P1,P2,P5}{\displaystyle \{P_{1},P_{2},P_{5}\}}, producing the same equivalence-class structure as [x]P{\displaystyle [x]_{P}}.

The set of attributes which is common to all reducts is called the core: the core is the set of attributes which is possessed by every reduct, and therefore consists of attributes which cannot be removed from the information system without causing collapse of the equivalence-class structure. The core may be thought of as the set of necessary attributes – necessary, that is, for the category structure to be represented. In the example, the only such attribute is {P5}{\displaystyle \{P_{5}\}}; any one of the other attributes can be removed singly without damaging the equivalence-class structure, and hence these are all dispensable. However, removing {P5}{\displaystyle \{P_{5}\}} by itself does change the equivalence-class structure, and thus {P5}{\displaystyle \{P_{5}\}} is the indispensable attribute of this information system, and hence the core. It is possible for the core to be empty, which means that there is no indispensable attribute: any single attribute in such an information system can be deleted without altering the equivalence-class structure. In such cases, there is no essential or necessary attribute which is required for the class structure to be represented.

One of the most important aspects of database analysis or data acquisition is the discovery of attribute dependencies; that is, we wish to discover which variables are strongly related to which other variables. Generally, it is these strong relationships that will warrant further investigation, and that will ultimately be of use in predictive modeling.

In rough set theory, the notion of dependency is defined very simply. Let us take two (disjoint) sets of attributes, set P{\displaystyle P} and set Q{\displaystyle Q}, and inquire what degree of dependency obtains between them. Each attribute set induces an (indiscernibility) equivalence class structure: the equivalence classes induced by P{\displaystyle P} are given by [x]P{\displaystyle [x]_{P}}, and the equivalence classes induced by Q{\displaystyle Q} are given by [x]Q{\displaystyle [x]_{Q}}. Let [x]Q={Q1,Q2,Q3,…,QN}{\displaystyle [x]_{Q}=\{Q_{1},Q_{2},Q_{3},\dots ,Q_{N}\}}, where Qi{\displaystyle Q_{i}} is a given equivalence class from the equivalence-class structure induced by attribute set Q{\displaystyle Q}. Then, the dependency of attribute set Q{\displaystyle Q} on attribute set P{\displaystyle P}, γP(Q){\displaystyle \gamma _{P}(Q)}, is given by γP(Q)=∑i=1N|P_Qi|/|U|{\displaystyle \gamma _{P}(Q)={\frac {\sum _{i=1}^{N}\left|{\underline {P}}Q_{i}\right|}{\left|\mathbb {U} \right|}}}. That is, for each equivalence class Qi{\displaystyle Q_{i}} in [x]Q{\displaystyle [x]_{Q}}, we add up the size of its lower approximation by the attributes in P{\displaystyle P}, i.e., P_Qi{\displaystyle {\underline {P}}Q_{i}}.
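On small tables, reducts and the core can be found by brute force, checking every attribute subset against the partition induced by the full attribute set; this is exponential in the number of attributes, so the sketch below (plain Python; a hypothetical table whose third attribute duplicates the information in the first) is only an illustration.

    from collections import defaultdict
    from itertools import combinations

    table = {"O1": (1, 0, 0), "O2": (1, 1, 0), "O3": (2, 1, 1), "O4": (2, 0, 1)}
    all_attrs = (0, 1, 2)

    def partition(attrs):
        classes = defaultdict(set)
        for obj, values in table.items():
            classes[tuple(values[a] for a in attrs)].add(obj)
        return frozenset(frozenset(c) for c in classes.values())

    full = partition(all_attrs)
    # Subsets preserving the full partition, then keep only the minimal ones.
    keeping = [set(s) for r in range(1, len(all_attrs) + 1)
               for s in combinations(all_attrs, r) if partition(s) == full]
    reducts = [s for s in keeping if not any(t < s for t in keeping)]
    core = set.intersection(*reducts) if reducts else set()
    print(reducts, core)   # reducts {0,1} and {1,2}; core {1}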
This approximation (as above, for an arbitrary set X{\displaystyle X}) is the number of objects which, on attribute set P{\displaystyle P}, can be positively identified as belonging to target set Qi{\displaystyle Q_{i}}. Added across all equivalence classes in [x]Q{\displaystyle [x]_{Q}}, the numerator above represents the total number of objects which – based on attribute set P{\displaystyle P} – can be positively categorized according to the classification induced by attributes Q{\displaystyle Q}. The dependency ratio therefore expresses the proportion (within the entire universe) of such classifiable objects. The dependency γP(Q){\displaystyle \gamma _{P}(Q)} "can be interpreted as a proportion of such objects in the information system for which it suffices to know the values of attributes in P{\displaystyle P} to determine the values of attributes in Q{\displaystyle Q}".

Another, intuitive, way to consider dependency is to take the partition induced by Q{\displaystyle Q} as the target class C{\displaystyle C}, and consider P{\displaystyle P} as the attribute set we wish to use in order to "re-construct" the target class C{\displaystyle C}. If P{\displaystyle P} can completely reconstruct C{\displaystyle C}, then Q{\displaystyle Q} depends totally upon P{\displaystyle P}; if P{\displaystyle P} results in a poor and perhaps random reconstruction of C{\displaystyle C}, then Q{\displaystyle Q} does not depend upon P{\displaystyle P} at all. Thus, this measure of dependency expresses the degree of functional (i.e., deterministic) dependency of attribute set Q{\displaystyle Q} on attribute set P{\displaystyle P}; it is not symmetric. The relationship of this notion of attribute dependency to more traditional information-theoretic (i.e., entropic) notions of attribute dependence has been discussed in a number of sources, e.g. Pawlak, Wong, & Ziarko (1988),[4] Yao & Yao (2002),[5] Wong, Ziarko, & Ye (1986),[6] and Quafafou & Boussouf (2000).[7]

The category representations discussed above are all extensional in nature; that is, a category or complex class is simply the sum of all its members. To represent a category is, then, just to be able to list or identify all the objects belonging to that category. However, extensional category representations have very limited practical use, because they provide no insight for deciding whether novel (never-before-seen) objects are members of the category. What is generally desired is an intensional description of the category, a representation of the category based on a set of rules that describe the scope of the category. The choice of such rules is not unique, and therein lies the issue of inductive bias. See Version space and Model selection for more about this issue.

There are a few rule-extraction methods. We will start from a rule-extraction procedure based on Ziarko & Shan (1995).[8] Let us say that we wish to find the minimal set of consistent rules (logical implications) that characterize our sample system. For a set of condition attributes P={P1,P2,P3,…,Pn}{\displaystyle {\mathcal {P}}=\{P_{1},P_{2},P_{3},\dots ,P_{n}\}} and a decision attribute Q,Q∉P{\displaystyle Q,Q\notin {\mathcal {P}}}, these rules should have the form PiaPjb…Pkc→Qd{\displaystyle P_{i}^{a}P_{j}^{b}\dots P_{k}^{c}\to Q^{d}}, or, spelled out, (Pi=a)∧(Pj=b)∧⋯∧(Pk=c)→(Q=d){\displaystyle (P_{i}=a)\land (P_{j}=b)\land \dots \land (P_{k}=c)\to (Q=d)}, where {a,b,c,…}{\displaystyle \{a,b,c,\dots \}} are legitimate values from the domains of their respective attributes. This is a form typical of association rules, and the number of items in U{\displaystyle \mathbb {U} } which match the condition/antecedent is called the support for the rule.
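The dependency degree can be computed as the total size of the lower approximations of the Q-classes divided by the size of the universe, as the following sketch shows (plain Python; the two-attribute table is hypothetical, with attribute 0 playing the role of P and attribute 1 that of Q):

    from collections import defaultdict

    table = {"O1": (1, 0), "O2": (1, 1), "O3": (2, 1), "O4": (2, 1)}

    def partition(attrs):
        classes = defaultdict(set)
        for obj, values in table.items():
            classes[tuple(values[a] for a in attrs)].add(obj)
        return list(classes.values())

    def gamma(P, Q):
        p_blocks = partition(P)
        positive = sum(len(b) for q_class in partition(Q)
                       for b in p_blocks if b <= q_class)  # lower approximations
        return positive / len(table)

    print(gamma([0], [1]))   # 0.5: only {O3,O4} is classified unambiguously by P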
The method for extracting such rules given in Ziarko & Shan (1995) is to form a decision matrix corresponding to each individual value d{\displaystyle d} of decision attribute Q{\displaystyle Q}. Informally, the decision matrix for value d{\displaystyle d} of decision attribute Q{\displaystyle Q} lists all attribute–value pairs that differ between objects having Q=d{\displaystyle Q=d} and Q≠d{\displaystyle Q\neq d}. This is best explained by example (which also avoids a lot of notation).

Consider the table above, and let P4{\displaystyle P_{4}} be the decision variable (i.e., the variable on the right side of the implications) and let {P1,P2,P3}{\displaystyle \{P_{1},P_{2},P_{3}\}} be the condition variables (on the left side of the implication). We note that the decision variable P4{\displaystyle P_{4}} takes on two different values, namely {1,2}{\displaystyle \{1,2\}}. We treat each case separately.

First, we look at the case P4=1{\displaystyle P_{4}=1}, and we divide up U{\displaystyle \mathbb {U} } into objects that have P4=1{\displaystyle P_{4}=1} and those that have P4≠1{\displaystyle P_{4}\neq 1}. (Note that objects with P4≠1{\displaystyle P_{4}\neq 1} in this case are simply the objects that have P4=2{\displaystyle P_{4}=2}, but in general P4≠1{\displaystyle P_{4}\neq 1} would include all objects having any value for P4{\displaystyle P_{4}} other than P4=1{\displaystyle P_{4}=1}, and there may be several such classes of objects, for example those having P4=2,3,4,etc.{\displaystyle P_{4}=2,3,4,etc.}) In this case, the objects having P4=1{\displaystyle P_{4}=1} are {O1,O2,O3,O7,O10}{\displaystyle \{O_{1},O_{2},O_{3},O_{7},O_{10}\}} while the objects which have P4≠1{\displaystyle P_{4}\neq 1} are {O4,O5,O6,O8,O9}{\displaystyle \{O_{4},O_{5},O_{6},O_{8},O_{9}\}}. The decision matrix for P4=1{\displaystyle P_{4}=1} lists all the differences between the objects having P4=1{\displaystyle P_{4}=1} and those having P4≠1{\displaystyle P_{4}\neq 1}; that is, the decision matrix lists all the differences between {O1,O2,O3,O7,O10}{\displaystyle \{O_{1},O_{2},O_{3},O_{7},O_{10}\}} and {O4,O5,O6,O8,O9}{\displaystyle \{O_{4},O_{5},O_{6},O_{8},O_{9}\}}. We put the "positive" objects (P4=1{\displaystyle P_{4}=1}) as the rows, and the "negative" objects (P4≠1{\displaystyle P_{4}\neq 1}) as the columns.

To read this decision matrix, look, for example, at the intersection of row O3{\displaystyle O_{3}} and column O6{\displaystyle O_{6}}, showing P12,P30{\displaystyle P_{1}^{2},P_{3}^{0}} in the cell. This means that with regard to decision value P4=1{\displaystyle P_{4}=1}, object O3{\displaystyle O_{3}} differs from object O6{\displaystyle O_{6}} on attributes P1{\displaystyle P_{1}} and P3{\displaystyle P_{3}}, and the particular values on these attributes for the positive object O3{\displaystyle O_{3}} are P1=2{\displaystyle P_{1}=2} and P3=0{\displaystyle P_{3}=0}. This tells us that the correct classification of O3{\displaystyle O_{3}} as belonging to decision class P4=1{\displaystyle P_{4}=1} rests on attributes P1{\displaystyle P_{1}} and P3{\displaystyle P_{3}}; although one or the other might be dispensable, we know that at least one of these attributes is indispensable.

Next, from each decision matrix we form a set of Boolean expressions, one expression for each row of the matrix. The items within each cell are aggregated disjunctively, and the individual cells are then aggregated conjunctively. A small sketch of the matrix construction follows.
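A minimal sketch of the matrix construction (plain Python; a tiny hypothetical table with one positive and two negative objects, not the ten-object example table):

    table = {"O1": (1, 2), "O2": (1, 0), "O3": (2, 0)}
    positive = ["O1"]           # objects carrying the chosen decision value
    negative = ["O2", "O3"]     # all remaining objects

    # Each cell lists the (attribute, value-on-positive-object) pairs on which
    # the row (positive) object differs from the column (negative) object.
    matrix = {p: {n: [(a, table[p][a])
                      for a in range(len(table[p]))
                      if table[p][a] != table[n][a]]
                  for n in negative}
              for p in positive}
    print(matrix)   # {'O1': {'O2': [(1, 2)], 'O3': [(0, 1), (1, 2)]}}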
Thus, for the example table we obtain five Boolean expressions, one for each of the positive objects O1, O2, O3, O7 and O10 (the expressions for O1 and O2 coincide, as do those for O3, O7 and O10). Each statement here is essentially a highly specific (probably too specific) rule governing the membership in class P4=1{\displaystyle P_{4}=1} of the corresponding object. For example, the statement corresponding to object O10{\displaystyle O_{10}} requires the condition in every one of its cells to be satisfied simultaneously.

It is clear that there is a large amount of redundancy here, and the next step is to simplify using traditional Boolean algebra. The statement (P11∨P22∨P30)∧(P11∨P22)∧(P11∨P22∨P30)∧(P11∨P22∨P30)∧(P11∨P22){\displaystyle (P_{1}^{1}\lor P_{2}^{2}\lor P_{3}^{0})\land (P_{1}^{1}\lor P_{2}^{2})\land (P_{1}^{1}\lor P_{2}^{2}\lor P_{3}^{0})\land (P_{1}^{1}\lor P_{2}^{2}\lor P_{3}^{0})\land (P_{1}^{1}\lor P_{2}^{2})} corresponding to objects {O1,O2}{\displaystyle \{O_{1},O_{2}\}} simplifies to P11∨P22{\displaystyle P_{1}^{1}\lor P_{2}^{2}}, which yields the implication "if P1=1 or P2=2, then P4=1". Likewise, the statement (P12∨P30)∧(P20)∧(P12∨P30)∧(P12∨P20∨P30)∧(P20){\displaystyle (P_{1}^{2}\lor P_{3}^{0})\land (P_{2}^{0})\land (P_{1}^{2}\lor P_{3}^{0})\land (P_{1}^{2}\lor P_{2}^{0}\lor P_{3}^{0})\land (P_{2}^{0})} corresponding to objects {O3,O7,O10}{\displaystyle \{O_{3},O_{7},O_{10}\}} simplifies to P12P20∨P30P20{\displaystyle P_{1}^{2}P_{2}^{0}\lor P_{3}^{0}P_{2}^{0}}. This gives us the implication "if (P1=2 and P2=0) or (P3=0 and P2=0), then P4=1".

The above implications can also be written as the following rule set: (P1=1) → (P4=1); (P2=2) → (P4=1); (P1=2) ∧ (P2=0) → (P4=1); (P3=0) ∧ (P2=0) → (P4=1). It can be noted that each of the first two rules has a support of 1 (i.e., the antecedent matches two objects), while each of the last two rules has a support of 2. To finish writing the rule set for this knowledge system, the same procedure as above (starting with writing a new decision matrix) should be followed for the case of P4=2{\displaystyle P_{4}=2}, thus yielding a new set of implications for that decision value (i.e., a set of implications with P4=2{\displaystyle P_{4}=2} as the consequent). In general, the procedure will be repeated for each possible value of the decision variable.

The data system LERS (Learning from Examples based on Rough Sets)[9] may induce rules from inconsistent data, i.e., data with conflicting objects. Two objects are conflicting when they are characterized by the same values of all attributes, but they belong to different concepts (classes). LERS uses rough set theory to compute lower and upper approximations for concepts involved in conflicts with other concepts. Rules induced from the lower approximation of the concept certainly describe the concept, hence such rules are called certain. On the other hand, rules induced from the upper approximation of the concept describe the concept possibly, so these rules are called possible. For rule induction LERS uses three algorithms: LEM1, LEM2, and IRIM.

The LEM2 algorithm of LERS is frequently used for rule induction and is used not only in LERS but also in other systems, e.g., in RSES.[10] LEM2 explores the search space of attribute–value pairs. Its input data set is a lower or upper approximation of a concept, so its input data set is always consistent. In general, LEM2 computes a local covering and then converts it into a rule set. We will quote a few definitions to describe the LEM2 algorithm.

The LEM2 algorithm is based on the idea of an attribute–value pair block. Let X{\displaystyle X} be a nonempty lower or upper approximation of a concept represented by a decision-value pair (d,w){\displaystyle (d,w)}.
Set X{\displaystyle X} depends on a set T{\displaystyle T} of attribute–value pairs t=(a,v){\displaystyle t=(a,v)} if and only if ∅≠[T]=⋂t∈T[t]⊆X{\displaystyle \emptyset \neq [T]=\bigcap _{t\in T}[t]\subseteq X}, where the block [t]{\displaystyle [t]} of an attribute–value pair t=(a,v){\displaystyle t=(a,v)} is the set of objects taking value v{\displaystyle v} on attribute a{\displaystyle a}.

Set T{\displaystyle T} is a minimal complex of X{\displaystyle X} if and only if X{\displaystyle X} depends on T{\displaystyle T} and no proper subset S{\displaystyle S} of T{\displaystyle T} exists such that X{\displaystyle X} depends on S{\displaystyle S}. Let T{\displaystyle \mathbb {T} } be a nonempty collection of nonempty sets of attribute–value pairs. Then T{\displaystyle \mathbb {T} } is a local covering of X{\displaystyle X} if and only if the following three conditions are satisfied: each member T{\displaystyle T} of T{\displaystyle \mathbb {T} } is a minimal complex of X{\displaystyle X}; the union of the sets [T]{\displaystyle [T]} over all members of T{\displaystyle \mathbb {T} } equals X{\displaystyle X}; and T{\displaystyle \mathbb {T} } is minimal, i.e., it has the smallest possible number of members. For our sample information system, LEM2 will induce a corresponding set of such rules. Other rule-learning methods can be found, e.g., in Pawlak (1991),[1] Stefanowski (1998),[11] Bazan et al. (2004),[10] etc.

Rough set theory is useful for rule induction from incomplete data sets. Using this approach we can distinguish between three types of missing attribute values: lost values (the values that were recorded but currently are unavailable), attribute-concept values (these missing attribute values may be replaced by any attribute value limited to the same concept), and "do not care" conditions (the original values were irrelevant). A concept (class) is a set of all objects classified (or diagnosed) the same way. Two special data sets with missing attribute values were extensively studied: in the first case, all missing attribute values were lost,[12] in the second case, all missing attribute values were "do not care" conditions.[13]

In the attribute-concept value interpretation of a missing attribute value, the missing attribute value may be replaced by any value of the attribute domain restricted to the concept to which the object with the missing attribute value belongs.[14] For example, if for a patient the value of the attribute Temperature is missing, this patient is sick with flu, and all remaining patients sick with flu have values high or very-high for Temperature, then, using the interpretation of the missing attribute value as an attribute-concept value, we will replace the missing attribute value with high and very-high. Additionally, the characteristic relation (see, e.g., Grzymala-Busse & Grzymala-Busse (2007)) makes it possible to process data sets with all three kinds of missing attribute values at the same time: lost, "do not care" conditions, and attribute-concept values.

Rough set methods can be applied as a component of hybrid solutions in machine learning and data mining. They have been found to be particularly useful for rule induction and feature selection (semantics-preserving dimensionality reduction). Rough set-based data analysis methods have been successfully applied in bioinformatics, economics and finance, medicine, multimedia, web and text mining, signal and image processing, software engineering, robotics, and engineering (e.g. power systems and control engineering). Recently the three regions of rough sets have been interpreted as regions of acceptance, rejection and deferment. This leads to a three-way decision-making approach, with a model which can potentially lead to interesting future applications.

The idea of the rough set was proposed by Pawlak (1981) as a new mathematical tool to deal with vague concepts. Comer, Grzymala-Busse, Iwinski, Nieminen, Novotny, Pawlak, Obtulowicz, and Pomykala have studied algebraic properties of rough sets. Different algebraic semantics have been developed by P. Pagliani, I. Duntsch, M. K. Chakraborty, M.
Banerjee and A. Mani; these have been extended to more generalized rough sets by D. Cattaneo and A. Mani, in particular. Rough sets can be used to represent ambiguity, vagueness and general uncertainty.

Since the development of rough sets, extensions and generalizations have continued to evolve. Initial developments focused on the relationship, both similarities and differences, with fuzzy sets. While some literature contends these concepts are different, other literature considers that rough sets are a generalization of fuzzy sets, as represented through either fuzzy rough sets or rough fuzzy sets. Pawlak (1995) considered that fuzzy and rough sets should be treated as being complementary to each other, addressing different aspects of uncertainty and vagueness. Three notable extensions of classical rough sets are the dominance-based, decision-theoretic and fuzzy rough set models mentioned above.

Rough sets can also be defined, as a generalisation, by employing a rough membership function instead of objective approximation. The rough membership function expresses a conditional probability that x{\displaystyle x} belongs to X{\displaystyle X} given R{\displaystyle \textstyle \mathbb {R} }. This can be interpreted as a degree that x{\displaystyle x} belongs to X{\displaystyle X} in terms of information about x{\displaystyle x} expressed by R{\displaystyle \textstyle \mathbb {R} }. Rough membership primarily differs from fuzzy membership in that the membership of the union and intersection of sets cannot, in general, be computed from the memberships of their constituents, as is the case for fuzzy sets. In this, rough membership is a generalization of fuzzy membership. Furthermore, the rough membership function is grounded more in probability than the conventionally held concepts of the fuzzy membership function. Several further generalizations of rough sets have been introduced, studied and applied to solving problems.
https://en.wikipedia.org/wiki/Rough_set
The Dice-Sørensen coefficient (see below for other names) is a statistic used to gauge the similarity of two samples. It was independently developed by the botanists Lee Raymond Dice[1] and Thorvald Sørensen,[2] who published in 1945 and 1948 respectively. The index is known by several other names, especially Sørensen–Dice index,[3] Sørensen index and Dice's coefficient. Other variations include the "similarity coefficient" or "index", such as Dice similarity coefficient (DSC). Common alternate spellings for Sørensen are Sorenson, Soerenson and Sörenson, and all three can also be seen with the –sen ending (the Danish letter ø is phonetically equivalent to the German/Swedish ö, which can be written as oe in ASCII).

Sørensen's original formula was intended to be applied to discrete data. Given two sets, X and Y, it is defined as DSC=2|X∩Y|/(|X|+|Y|){\displaystyle DSC={\frac {2|X\cap Y|}{|X|+|Y|}}}, where |X| and |Y| are the cardinalities of the two sets (i.e. the number of elements in each set). The Sørensen index equals twice the number of elements common to both sets divided by the sum of the number of elements in each set. Equivalently, the index is the size of the intersection as a fraction of the average size of the two sets. When applied to Boolean data, using the definitions of true positive (TP), false positive (FP), and false negative (FN), it can be written as DSC=2TP/(2TP+FP+FN){\displaystyle DSC={\frac {2TP}{2TP+FP+FN}}}. It is different from the Jaccard index, which counts true positives only once in both the numerator and denominator. DSC is the quotient of similarity and ranges between 0 and 1.[9] It can be viewed as a similarity measure over sets.

Similarly to the Jaccard index, the set operations can be expressed in terms of vector operations over binary vectors a and b, which gives the same outcome over binary vectors and also gives a more general similarity metric over vectors in general terms. For sets X and Y of keywords used in information retrieval, the coefficient may be defined as twice the shared information (intersection) over the sum of cardinalities.[10]

When taken as a string similarity measure, the coefficient may be calculated for two strings, x and y, using bigrams as follows:[11] s=2nt/(nx+ny){\displaystyle s={\frac {2n_{t}}{n_{x}+n_{y}}}}, where nt is the number of character bigrams found in both strings, nx is the number of bigrams in string x and ny is the number of bigrams in string y. For example, to calculate the similarity between "night" and "nacht", we would find the set of bigrams in each word: {ni, ig, gh, ht} and {na, ac, ch, ht}. Each set has four elements, and the intersection of these two sets has only one element: ht. Inserting these numbers into the formula, we calculate s = (2 · 1) / (4 + 4) = 0.25.

For a discrete (binary) ground truth A{\displaystyle A} and continuous measures B{\displaystyle B} in the interval [0,1], the following formula can be used:[12] cDC=2|A∩B|c∗|A|+|B|{\displaystyle cDC={\frac {2|A\cap B|}{c*|A|+|B|}}}, where |A∩B|=Σiaibi{\displaystyle |A\cap B|=\Sigma _{i}a_{i}b_{i}} and |B|=Σibi{\displaystyle |B|=\Sigma _{i}b_{i}}. The constant c can be computed as c=ΣiaibiΣiaisign⁡(bi){\displaystyle c={\frac {\Sigma _{i}a_{i}b_{i}}{\Sigma _{i}a_{i}\operatorname {sign} {(b_{i})}}}}. If Σiaisign⁡(bi)=0{\displaystyle \Sigma _{i}a_{i}\operatorname {sign} {(b_{i})}=0}, which means there is no overlap between A and B, c is set to 1 arbitrarily.

This coefficient is not very different in form from the Jaccard index. In fact, both are equivalent in the sense that given a value for the Sørensen–Dice coefficient S{\displaystyle S}, one can calculate the respective Jaccard index value J{\displaystyle J} and vice versa, using the equations J=S/(2−S){\displaystyle J=S/(2-S)} and S=2J/(1+J){\displaystyle S=2J/(1+J)}.
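A minimal sketch of the set and bigram forms of the coefficient (plain Python), reproducing the s = 0.25 example above:

    def dice(x: set, y: set) -> float:
        return 2 * len(x & y) / (len(x) + len(y))

    def bigrams(s: str) -> set:
        return {s[i:i + 2] for i in range(len(s) - 1)}

    print(dice({1, 2, 3}, {2, 3, 4}))                # 2*2 / (3+3) = 0.666...
    print(dice(bigrams("night"), bigrams("nacht")))  # shared bigram "ht": 0.25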
Since the Sørensen–Dice coefficient does not satisfy the triangle inequality, it can be considered a semimetric version of the Jaccard index.[4] The function ranges between zero and one, like Jaccard. Unlike Jaccard, the corresponding difference function (d=1−DSC{\displaystyle d=1-DSC}) is not a proper distance metric, as it does not satisfy the triangle inequality.[4] The simplest counterexample is given by the three sets X={a}{\displaystyle X=\{a\}}, Y={b}{\displaystyle Y=\{b\}} and Z=X∪Y={a,b}{\displaystyle Z=X\cup Y=\{a,b\}}. We have d(X,Y)=1{\displaystyle d(X,Y)=1} and d(X,Z)=d(Y,Z)=1/3{\displaystyle d(X,Z)=d(Y,Z)=1/3}. To satisfy the triangle inequality, the sum of any two sides must be greater than or equal to the remaining side. However, d(X,Z)+d(Y,Z)=2/3<1=d(X,Y){\displaystyle d(X,Z)+d(Y,Z)=2/3<1=d(X,Y)}.

The Sørensen–Dice coefficient is useful for ecological community data (e.g. Looman & Campbell, 1960[13]). Justification for its use is primarily empirical rather than theoretical (although it can be justified theoretically as the intersection of two fuzzy sets[14]). Compared to Euclidean distance, the Sørensen distance retains sensitivity in more heterogeneous data sets and gives less weight to outliers.[15] Recently the Dice score (and its variations, e.g. logDice, taking a logarithm of it) has become popular in computer lexicography for measuring the lexical association score of two given words.[16] logDice is also used as part of the Mash distance for genome and metagenome distance estimation.[17] Finally, Dice is used in image segmentation, in particular for comparing algorithm output against reference masks in medical applications.[8]

The expression is easily extended to abundance instead of presence/absence of species. This quantitative version is known by several names.
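Both the Dice–Jaccard conversion and the triangle-inequality counterexample above can be checked mechanically; the following is a quick sketch in plain Python:

    def dice(x, y):
        return 2 * len(x & y) / (len(x) + len(y))

    def jaccard(x, y):
        return len(x & y) / len(x | y)

    X, Y = {"a"}, {"b"}
    Z = X | Y
    S, J = dice(X, Z), jaccard(X, Z)
    assert abs(J - S / (2 - S)) < 1e-12        # J = S/(2-S)

    d = lambda a, b: 1 - dice(a, b)            # Dice "distance"
    print(d(X, Z) + d(Y, Z), "<", d(X, Y))     # 2/3 < 1: triangle inequality fails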
https://en.wikipedia.org/wiki/S%C3%B8rensen_similarity_index
Type-2 fuzzy sets and systems generalize standard type-1 fuzzy sets and systems so that more uncertainty can be handled. From the beginning of fuzzy sets, criticism was made about the fact that the membership function of a type-1 fuzzy set has no uncertainty associated with it, something that seems to contradict the word fuzzy, since that word has the connotation of much uncertainty. So, what does one do when there is uncertainty about the value of the membership function? The answer to this question was provided in 1975 by the inventor of fuzzy sets, Lotfi A. Zadeh,[1] when he proposed more sophisticated kinds of fuzzy sets, the first of which he called a "type-2 fuzzy set". A type-2 fuzzy set lets us incorporate uncertainty about the membership function into fuzzy set theory, and is a way to address the above criticism of type-1 fuzzy sets head-on. And, if there is no uncertainty, then a type-2 fuzzy set reduces to a type-1 fuzzy set, which is analogous to probability reducing to determinism when unpredictability vanishes.

Type-1 fuzzy systems work with a fixed membership function, while in type-2 fuzzy systems the membership function is fluctuating. A fuzzy set determines how input values are converted into fuzzy variables.[2]

In order to symbolically distinguish between a type-1 fuzzy set and a type-2 fuzzy set, a tilde symbol is put over the symbol for the fuzzy set; so, A denotes a type-1 fuzzy set, whereas Ã denotes the comparable type-2 fuzzy set. The resulting type-2 fuzzy set is called a "general type-2 fuzzy set" (to distinguish it from the special interval type-2 fuzzy set). Zadeh didn't stop with type-2 fuzzy sets, because in that 1975 paper[1] he also generalized all of this to type-n fuzzy sets. The present article focuses only on type-2 fuzzy sets because they are the next step in the logical progression from type-1 to type-n fuzzy sets, where n = 1, 2, ... . Although some researchers are beginning to explore higher than type-2 fuzzy sets, as of early 2009 this work is in its infancy.

The membership function of a general type-2 fuzzy set, Ã, is three-dimensional (Fig. 1), where the third dimension is the value of the membership function at each point on its two-dimensional domain, which is called its "footprint of uncertainty" (FOU). For an interval type-2 fuzzy set that third-dimension value is the same (e.g., 1) everywhere, which means that no new information is contained in the third dimension of an interval type-2 fuzzy set. So, for such a set, the third dimension is ignored, and only the FOU is used to describe it. It is for this reason that an interval type-2 fuzzy set is sometimes called a first-order uncertainty fuzzy set model, whereas a general type-2 fuzzy set (with its useful third dimension) is sometimes referred to as a second-order uncertainty fuzzy set model.

The FOU represents the blurring of a type-1 membership function, and is completely described by its two bounding functions (Fig. 2), a lower membership function (LMF) and an upper membership function (UMF), both of which are type-1 fuzzy sets! Consequently, it is possible to use type-1 fuzzy set mathematics to characterize and work with interval type-2 fuzzy sets. This means that engineers and scientists who already know type-1 fuzzy sets will not have to invest a lot of time learning about general type-2 fuzzy set mathematics in order to understand and use interval type-2 fuzzy sets.
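A minimal sketch of an interval type-2 fuzzy set represented by its two bounding type-1 membership functions (plain Python; the triangular shapes and their parameters are hypothetical):

    def tri(x, a, b, c):
        # Type-1 triangular membership function with feet a, c and peak b.
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def it2_membership(x):
        lmf = tri(x, 2.0, 5.0, 8.0)   # lower membership function
        umf = tri(x, 1.0, 5.0, 9.0)   # upper membership function
        return (lmf, umf)             # the FOU slice at x is the interval [lmf, umf]

    print(it2_membership(3.0))        # membership is an interval, not a single number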
Work on type-2 fuzzy sets languished during the 1980s and early-to-mid 1990s, although a small number of articles were published about them. People were still trying to figure out what to do with type-1 fuzzy sets, so even though Zadeh proposed type-2 fuzzy sets in 1975, the time was not right for researchers to drop what they were doing with type-1 fuzzy sets to focus on type-2 fuzzy sets. This changed in the latter part of the 1990s as a result of Jerry Mendel and his students' work on type-2 fuzzy sets and systems.[3] Since then, more and more researchers around the world have been writing articles about type-2 fuzzy sets and systems. Interval type-2 fuzzy sets have received the most attention because the mathematics that is needed for such sets—primarily interval arithmetic—is much simpler than the mathematics that is needed for general type-2 fuzzy sets. So, the literature about interval type-2 fuzzy sets is large, whereas the literature about general type-2 fuzzy sets is much smaller. Both kinds of fuzzy sets are being actively researched by an ever-growing number of researchers around the world and have been employed successfully in a variety of domains, such as robot control.[4] Formally, a substantial body of mathematics has already been worked out for interval type-2 fuzzy sets.

Type-2 fuzzy sets are finding very wide applicability in rule-based fuzzy logic systems (FLSs) because they let uncertainties be modeled that cannot be modeled by type-1 fuzzy sets. A block diagram of a type-2 FLS is depicted in Fig. 3. This kind of FLS is used in fuzzy logic control, fuzzy logic signal processing, rule-based classification, etc., and is sometimes referred to as a function approximation application of fuzzy sets, because the FLS is designed to minimize an error function. The following discussions, about the four components in the Fig. 3 rule-based FLS, are given for an interval type-2 FLS, because to date they are the most popular kind of type-2 FLS; however, most of the discussion is also applicable to a general type-2 FLS.

Rules, which are either provided by subject experts or extracted from numerical data, are expressed as a collection of IF-THEN statements, e.g. a rule whose consequent is "Rotate the valve a bit to the right" (referred to again below). Fuzzy sets are associated with the terms that appear in the antecedents (IF-part) or consequents (THEN-part) of rules, and with the inputs to and the outputs of the FLS. Membership functions are used to describe these fuzzy sets; in a type-1 FLS they are all type-1 fuzzy sets, whereas in an interval type-2 FLS at least one membership function is an interval type-2 fuzzy set. An interval type-2 FLS lets several kinds of uncertainty be quantified.

In Fig. 3, measured (crisp) inputs are first transformed into fuzzy sets in the Fuzzifier block, because it is fuzzy sets and not numbers that activate the rules, which are described in terms of fuzzy sets and not numbers. Three kinds of fuzzifiers are possible in an interval type-2 FLS, depending on the nature of the measurements.

In Fig. 3, after measurements are fuzzified, the resulting input fuzzy sets are mapped into fuzzy output sets by the Inference block. This is accomplished by first quantifying each rule using fuzzy set theory, and by then using the mathematics of fuzzy sets to establish the output of each rule, with the help of an inference mechanism. If there are M rules, then the fuzzy input sets to the Inference block will activate only a subset of those rules, where the subset contains at least one rule and usually far fewer than M rules.
The inference is done one rule at a time, so at the output of the Inference block there will be one or more fired-rule fuzzy output sets. In most engineering applications of an FLS, a number (and not a fuzzy set) is needed as the final output; e.g., the consequent of the rule given above is "Rotate the valve a bit to the right." No automatic valve will know what this means, because "a bit to the right" is a linguistic expression, and a valve must be turned by numerical values, i.e. by a certain number of degrees. Consequently, the fired-rule output fuzzy sets have to be converted into a number, and this is done in the Fig. 3 Output Processing block.

In a type-1 FLS, output processing, called "defuzzification", maps a type-1 fuzzy set into a number. There are many ways of doing this, e.g., compute the union of the fired-rule output fuzzy sets (the result is another type-1 fuzzy set) and then compute the center of gravity of the membership function for that set; compute a weighted average of the centers of gravity of each of the fired-rule consequent membership functions; etc.

Things are somewhat more complicated for an interval type-2 FLS, because to go from an interval type-2 fuzzy set to a number (usually) requires two steps (Fig. 3). The first step, called "type-reduction", is where an interval type-2 fuzzy set is reduced to an interval-valued type-1 fuzzy set. There are as many type-reduction methods as there are type-1 defuzzification methods. An algorithm developed by Karnik and Mendel,[6][3] now known as the "KM algorithm", is used for type-reduction. Although this algorithm is iterative, it is very fast. The second step of Output Processing, which occurs after type-reduction, is still called "defuzzification". Because the type-reduced set of an interval type-2 fuzzy set is always a finite interval of numbers, the defuzzified value is just the average of the two end-points of this interval.

It is clear from Fig. 3 that there can be two outputs of an interval type-2 FLS—crisp numerical values and the type-reduced set. The latter provides a measure of the uncertainties that have flowed through the interval type-2 FLS, due to the (possibly) uncertain input measurements that have activated rules whose antecedents or consequents or both are uncertain. Just as standard deviation is widely used in probability and statistics to provide a measure of unpredictable uncertainty about a mean value, the type-reduced set can provide a measure of uncertainty about the crisp output of an interval type-2 FLS.

Another application for fuzzy sets has also been inspired by Zadeh[23][24][25]—"computing with words". Different acronyms have been used for "computing with words", e.g., CW and CWW. According to Zadeh, CWW is a methodology in which the objects of computation are words and propositions drawn from a natural language. Of course, he did not mean that computers would actually compute using words—single words or phrases—rather than numbers. He meant that computers would be activated by words, which would be converted into a mathematical representation using fuzzy sets, and that these fuzzy sets would be mapped by a CWW engine into some other fuzzy set, after which the latter would be converted back into a word. A natural question to ask is: which kind of fuzzy set—type-1 or type-2—should be used as a model for a word? Mendel[26][27] has argued, on the basis of Karl Popper's concept of "falsificationism",[28][25] that using a type-1 fuzzy set as a model for a word is scientifically incorrect, and that an interval type-2 fuzzy set should be used as a (first-order uncertainty) model for a word. Much research is underway about CWW.
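The following sketch (Python with numpy; the membership functions are hypothetical) illustrates centroid type-reduction followed by defuzzification for an interval type-2 set. For simplicity it brute-forces the switch point over a discretized domain instead of running the iterative KM algorithm; on a small grid this yields the same interval endpoints.

    import numpy as np

    x = np.linspace(0.0, 10.0, 101)                     # discretised domain
    umf = np.maximum(0.0, 1.0 - np.abs(x - 5.0) / 4.0)  # upper MF (triangular)
    lmf = 0.6 * umf                                     # lower MF (scaled down)

    def centroid(theta):
        return float(np.dot(x, theta) / theta.sum())

    # Right endpoint: lower weights left of the switch point, upper weights to
    # its right; the left endpoint is the mirror image.
    c_r = max(centroid(np.where(x <= s, lmf, umf)) for s in x)
    c_l = min(centroid(np.where(x <= s, umf, lmf)) for s in x)

    print((c_l, c_r))            # the type-reduced interval
    print((c_l + c_r) / 2)       # defuzzified output: midpoint of the interval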
Type-2 fuzzy sets have been applied in a number of areas. Freeware MATLAB implementations, which cover general and interval type-2 fuzzy sets and systems, as well as type-1 fuzzy systems, are available at http://sipi.usc.edu/~mendel/software. Software supporting discrete interval type-2 fuzzy logic systems is available as the DIT2FLS Toolbox (http://dit2fls.com/projects/dit2fls-toolbox/) and the DIT2FLS Library Package (http://dit2fls.com/projects/dit2fls-library-package/). Java libraries including source code for type-1, interval and general type-2 fuzzy systems are available at http://juzzy.wagnerweb.net/. A Python library for type-1 and type-2 fuzzy sets is available at https://github.com/carmelgafa/type2fuzzy, and a Python library for interval type-2 fuzzy sets and systems is available at https://github.com/Haghrah/PyIT2FLS. An open-source Matlab/Simulink toolbox for interval type-2 fuzzy logic systems is available at http://web.itu.edu.tr/kumbasart/type2fuzzy.htm. There are also two IEEE Expert Now multimedia modules available from the IEEE.
https://en.wikipedia.org/wiki/Type-2_fuzzy_sets_and_systems
This is a glossary of mereology. Mereology is the philosophical study of part-whole relationships, also called parthood relationships.[1]
https://en.wikipedia.org/wiki/Glossary_of_mereology
In mereology, an area of metaphysics, the term gunk applies to any whole whose parts all have further proper parts. That is, a gunky object is not made of indivisible atoms or simples. Because parthood is transitive, any part of gunk is itself gunk.

The term was first used by David Lewis in his work Parts of Classes (1991),[1] in which he conceived of the possibility of "atomless gunk",[2] which was shortened to "gunk" by later writers. Dean W. Zimmerman defends the possibility of atomless gunk.[3]

If point-sized objects are always simple, then a gunky object does not have any point-sized parts, and may be best described by an approach such as Whitehead's point-free geometry. By usual accounts of gunk, such as Alfred Tarski's in 1929,[4] three-dimensional gunky objects also do not have other degenerate parts shaped like one-dimensional curves or two-dimensional surfaces.

Gunk is an important test case for accounts of the composition of material objects: for instance, Ted Sider has challenged Peter van Inwagen's account of composition because it is inconsistent with the possibility of gunk. Sider's argument also applies to a simpler view than van Inwagen's: mereological nihilism, the view that only material simples exist. If nihilism is necessarily true, then gunk is impossible. But, as Sider argues, because gunk is both conceivable and possible, nihilism is false, or at best a contingent truth.[5]

Gunk has also played an important role in the history of topology[6] in recent debates concerning change, contact, and the structure of physical space. The composition of space and the composition of material objects are related by receptacles, regions of space that could harbour a material object. (The term "receptacles" was coined by Richard Cartwright.)[7] It seems reasonable to assume that if space is gunky, a receptacle is gunky, and then a material object is possibly gunky.

Arguably, discussions of material gunk run all the way back to at least Aristotle and possibly as far back as Anaxagoras, and include such thinkers as William of Ockham, René Descartes, and Alfred Tarski.[5][8] However, the first contemporary mentions of gunk are found in the writings of A. N. Whitehead and Bertrand Russell, and later in the writings of David Lewis.[8] Elements of gunk thought are present in Zeno's famous paradoxes of plurality. Zeno argued that if there were such things as discrete instants of time, then objects could never move through time. Aristotle's solution to Zeno's paradoxes involves the idea that time is not made out of durationless instants, but of ever smaller temporal intervals. Every interval of time can be divided into smaller and smaller intervals, without ever terminating in some privileged set of durationless instants.[9] In other words, motion is possible because time is gunky.

Despite having been a relatively common position in metaphysics, after Cantor's discovery of the distinction between denumerable and non-denumerable infinite cardinalities, and mathematical work by Adolf Grünbaum, gunk theory was no longer seen as a necessary alternative to a topology of space made out of points.[8] Recent mathematical work in the topology of spacetime by scholars such as Peter Roeper and Frank Arntzenius has reopened the question of whether a gunky spacetime is a feasible framework for doing physics.[9][10] Possibly the most influential formulation of a theory of gunky spacetime comes from A. N.
Whitehead in his seminal work Process and Reality.[11] Whitehead argues that there are no point regions of space and that every region of space has some three-dimensional extension. Under a Whiteheadian conception of spacetime, points, lines, planes, and other less-than-three-dimensional objects are constructed out of a method of "extensive abstraction", in which points, lines, and planes are identified with infinitely converging abstract sets of nested extended regions.[11]

Ted Sider has argued that even the possibility of gunk undermines another position, that of mereological nihilism.[5] Sider's argument runs roughly as follows: (1) if nihilism is true, then it is true of necessity; (2) gunk is metaphysically possible; (3) if gunk is possible, then nihilism is possibly false, and hence not necessarily true; (4) therefore, nihilism is false. This argument only depends on whether or not gunk is even possible, not on whether or not the actual world is a gunky one. Sider defends premise #1 by appealing to the fact that since nihilism is a metaphysical thesis, it must be true or false of necessity.[5] In defense of premise #2, Sider argues that since a gunk world is conceivable, that is, we can imagine a gunky world without any internal contradiction, gunk must be possible. Premise #3 follows from an understanding of necessity and possibility that stems from possible-world semantics. Simply put, a proposition P is necessarily false if and only if it is false in all possible worlds, and if a proposition P is possible, it is true in at least one possible world. Thus, if a proposition is possible, then it is not necessarily false, as it is not false in all possible worlds. The conclusion, #4, follows deductively from the other premises.

Sider's argument is valid, so most strategies to resist the argument have focused on denying one or more of his premises. Strategies that deny #1 have been called the "contingency defense". Deniers of #1 say that the facts that determine the composition of objects are not necessary facts, but can differ in different possible worlds. As such, nihilism is a contingent matter of fact, and the possibility of gunk does not undermine the possibility of nihilism. This is the strategy endorsed by Cameron[12] and Miller.[13]

Alternatively, one could deny #2 and say that gunk is metaphysically impossible. Most strategies that take this route deny #2 by denying another relatively common intuition: that conceivability entails metaphysical possibility. Although this metaphysical principle dates back at least to the works of Descartes, recent work by philosophers such as Marcus[14] and Roca-Royes[15] has shed some doubt on the reliability of conceivability as a guide to metaphysical possibility. Furthermore, Sider's own arguments in defense of #1 seem to undermine the argument: gunk is also a metaphysical thesis, and thus it seems that (like #1) it would also have to be either necessarily true or necessarily false. The argument would only work if gunk were necessarily true, but this would amount to question-begging.
https://en.wikipedia.org/wiki/Gunk_(mereology)
Holism is the interdisciplinary idea that systems possess properties as wholes apart from the properties of their component parts.[1][2][3] The aphorism "The whole is greater than the sum of its parts", typically attributed to Aristotle, is often given as a summary of this proposal.[4] The concept of holism can inform the methodology for a broad array of scientific fields and lifestyle practices. When applications of holism are said to reveal properties of a whole system beyond those of its parts, these qualities are referred to as emergent properties of that system. Holism in all contexts is often placed in opposition to reductionism, a dominant notion in the philosophy of science according to which systems containing parts contain no unique properties beyond those parts. Proponents of holism consider the search for emergent properties within systems to be demonstrative of their perspective.[5]

The term "holism" was coined by Jan Smuts (1870–1950) in his 1926 book Holism and Evolution.[6] While he never assigned a consistent meaning to the word, Smuts used holism to represent at least three features of reality.[7] First, holism claims that every scientifically measurable thing, either physical or psychological, does possess a nature as a whole beyond its parts. His examples include atoms, cells, or an individual's personality. Smuts discussed this sense of holism in his claim that an individual's body and mind are not completely separated but instead connect and represent the holistic idea of a person.

In his second sense, Smuts referred to holism as the cause of evolution. He argued that evolution is neither an accident nor is it brought about by the actions of some transcendent force, such as a God. Smuts criticized writers who emphasized Darwinian concepts of natural selection and genetic variation to support an accidental view of natural processes within the universe. Smuts perceived evolution as the process of nature correcting itself creatively and intentionally. In this way, holism is described as the tendency of a whole system to creatively respond to environmental stressors, a process in which parts naturally work together to bring the whole into more advanced states. Smuts used Pavlovian studies to argue that the inheritance of behavioral changes supports his idea of creative evolution as opposed to purely accidental development in nature.[8][9] Smuts believed that this creative process was intrinsic within all physical systems of parts and ruled out indirect, transcendent forces.[7][10]

Finally, Smuts used holism to explain the concrete (nontranscendent) nature of the universe in general. In his words, holism is "the ultimate synthetic, ordering, organizing, regulative activity in the universe which accounts for all the structural groupings and syntheses in it."[11] Smuts argued that a holistic view of the universe explains its processes and their evolution more effectively than a reductive view.
Professional philosophers of science and linguistics did not consider Holism and Evolution seriously upon its initial publication in 1926, and the work has received criticism for a lack of theoretical coherence.[7][10][12] Some biological scientists, however, did offer favorable assessments shortly after its first printing.[13] Over time, the meaning of the word holism became most closely associated with Smuts' first conception of the term, yet without any metaphysical commitments to monism, dualism, or similar concepts which can be inferred from his work.[7]

The advent of holism in the 20th century coincided with the gradual development of quantum mechanics. Holism in physics is the nonseparability of physical systems from their parts, especially quantum phenomena. Classical physics cannot be regarded as holistic, as the behavior of individual parts represents the whole. However, the state of a system in quantum theory resists a certain kind of reductive analysis. For example, two spatially separated quantum systems are described as "entangled", or nonseparable from each other, when a meaningful analysis of one system is indistinguishable from that of the other.[14] There are different conceptions of nonseparability in physics, and its exploration is considered to broadly present insight into the ontological problem.[14][15]

In one sense, holism for physics is a perspective about the best way to understand the nature of a physical system. In this sense, holism is the methodological claim that systems are accurately understood according to their properties as a whole. A methodological reductionist in physics might seek to explain, for example, the behavior of a liquid by examining its component molecules, atoms, ions or electrons. A methodological holist, on the other hand, believes there is something misguided about this approach; as one proponent, a condensed matter physicist, puts it: "the most important advances in this area come about by the emergence of qualitatively new concepts at the intermediate or macroscopic levels—concepts which, one hopes, will be compatible with one's information about the microscopic constituents, but which are in no sense logically dependent on it."[16] This perspective is considered a conventional attitude among contemporary physicists.[14]

In another sense, holism is a metaphysical claim that the nature of a system is not determined by the properties of its component parts. There are three varieties of this sense of physical holism. The metaphysical claim does not assert that physical systems involve abstract properties beyond the composition of their physical parts, but that there are concrete properties aside from those of their basic physical parts. Theoretical physicist David Bohm (1917–1992) supported this view head-on. Bohm believed that a complete description of the universe would have to go beyond a simple list of all its particles and their positions; there would also have to be a physical quantum field associated with the properties of those particles, guiding their trajectories.[17][18] Bohm's ontological holism concerning the nature of whole physical systems was literal.[19] Niels Bohr (1885–1962), on the other hand, held ontological holism from an epistemological angle rather than a literal one.[20] Bohr saw an observational apparatus as a part of a system under observation, besides the basic physical parts themselves.
His theory agreed with Bohm's in holding that whole systems are not merely composed of their parts, and it identified properties such as position and momentum as belonging to whole systems beyond their components.[21] But Bohr held that these holistic properties are only meaningful in experimental contexts, when physical systems are under observation, and that such systems, when not under observation, cannot be said to have meaningful properties at all. While Bohr claimed these holistic properties exist only insofar as they can be observed, Bohm took his ontological holism one step further by claiming these properties must exist regardless.

Semantic holism suggests that the meaning of individual words depends on the meaning of other words, forming a large web of interconnections. In general, meaning holism states that the properties which determine the meaning of a word are connected such that if the meaning of one word changes, the meaning of every other word in the web changes as well.[22][23] The set of words that alter in meaning due to a change in the meaning of some other is not necessarily specified in meaning holism, but typically such a change is taken straightforwardly to affect the meaning of every word in the language.[24][25][26][27] In scientific disciplines, reductionism is the opposing viewpoint to holism. But in the context of linguistics or the philosophy of language, reductionism is typically referred to as atomism. Specifically, atomism states that each word's meaning is independent, and so there are no emergent properties within a language. Additionally, there is meaning molecularism, which states that a change in one word alters the meaning of only a relatively small set of other words. The linguistic perspective of meaning holism is traced back to Quine[28] but was subsequently formalized by analytic philosophers Michael Dummett, Jerry Fodor, and Ernest Lepore.[29] While this holistic approach attempts to resolve a classical problem for the philosophy of language concerning how words convey meaning, there is debate over its validity, mostly from two angles of criticism: opposition to compositionality and, especially, instability of meaning. The first claims that meaning holism conflicts with the compositionality of language. Meaning in some languages is compositional in that meaning comes from the structure of an expression's parts.[29][30] Meaning holism suggests that the meaning of words plays an inferential role in the meaning of other words: "pet fish" might imply a meaning of "less than 3 ounces." Since holistic views of meaning assume meaning depends on which words are used and how those words confer meaning onto other words, rather than how they are structured, meaning holism stands in conflict with compositionalism and leaves statements with potentially ambiguous meanings.[29][31] The second criticism claims that meaning holism makes meaning in language unstable. If some words must be used to infer the meaning of other words, then in order to communicate a message, the sender and the receiver must share an identical set of inferential assumptions or beliefs.
If these beliefs were different, meaning may be lost.[32][33] Many types of communication would be directly affected by the principles of meaning holism such as informative communication,[32]language learning,[34][35][36]and communication about psychological states.[37][38]Nevertheless, some meaning holists maintain that the instability of meaning holism is an acceptable feature from several different angles.[14]In one example,contextualholists make this point simply by suggesting we often do not actually share identical inferential assumptions but instead rely on context to counter differences of inference and support communication.[39] Scientific applications of holism within biology are referred to assystems biology. The opposing analytical approach of systems biology isbiological organizationwhich modelsbiological systemsand structures only in terms of their component parts. "The reductionist approach has successfully identified most of the components and many of the interactions but, unfortunately, offers no convincing concepts or methods to understand how system properties emerge...the pluralism of causes and effects in biological networks is better addressed by observing, through quantitative measures, multiple components simultaneously and by rigorous data integration with mathematical models."[40]The objective in systems biology is to advance models of the interactions in a system. Holistic approaches to modelling have involved cellular modelling strategies,[41]genomic interaction analysis,[42]and phenotype prediction.[43] Systems medicineis a practical approach tosystems biologyand accepts its holistic assumptions. Systems medicine takes the systems of the human body as made up of a complete whole and uses this as a starting point in its research and, ultimately, treatment. The term holism[44]is also sometimes used in the context of variouslifestylepractices, such asdieting, education, and healthcare, to refer to ways of life that either supplement or replace conventional practices. In these contexts, holism is not necessarily a rigorous or well-defined methodology for obtaining a particular lifestyle outcome. It is sometimes simply an adjective to describe practices which account for factors that standard forms of these practices may discount, especially in the context ofalternative medicine.
https://en.wikipedia.org/wiki/Holism
A holon is something that is simultaneously a whole in and of itself, as well as a part of a larger whole. In this way, a holon can be considered a subsystem within a larger hierarchical system.[1] The holon represents a way to overcome the dichotomy between parts and wholes, as well as a way to account for both the self-assertive and the integrative tendencies of organisms.[2] Holons are sometimes discussed in the context of self-organizing holarchic open (SOHO) systems.[2][1] The word holon (Ancient Greek: ὅλον) is a combination of the Greek holos (ὅλος), meaning 'whole', with the suffix -on, which denotes a particle or part (as in proton and neutron).

Holons are self-reliant units that possess a degree of independence and can handle contingencies without asking higher authorities for instructions (i.e., they have a degree of autonomy). These holons are also simultaneously subject to control from one or more of these higher authorities. The first property ensures that holons are stable forms that are able to withstand disturbances, while the latter property signifies that they are intermediate forms, providing a context for the proper functionality of the larger whole.

The term holon was coined by Arthur Koestler in The Ghost in the Machine (1967), though Koestler first articulated the concept in The Act of Creation (1964), in which he refers to the relationship between the searches for subjective and objective knowledge:

Einstein's space is no closer to reality than Van Gogh's sky. The glory of science is not in a truth more absolute than the truth of Bach or Tolstoy, but in the act of creation itself. The scientist's discoveries impose his own order on chaos, as the composer or painter imposes his; an order that always refers to limited aspects of reality, and is based on the observer's frame of reference, which differs from period to period as a Rembrandt nude differs from a nude by Manet.[3]

Koestler would finally propose the term holon in The Ghost in the Machine (1967), using it to describe natural organisms as composed of semi-autonomous sub-wholes (or, parts) that are linked in a form of hierarchy, a holarchy, to form a whole.[2][4][5] The title of the book itself points to the notion that the entire 'machine' of life and of the universe itself is ever-evolving toward more and more complex states, as if a ghost were operating the machine.[6]

Two observations underpin the concept. The first was influenced by a story told to him by Herbert A. Simon—the 'parable of the two watchmakers'—in which Simon concludes that complex systems evolve from simple systems much more rapidly when there are stable intermediate forms present in the evolutionary process compared to when they are not present:[7]

There once were two watchmakers, named Bios and Mekhos, who made very fine watches. The phones in their workshops rang frequently; new customers were constantly calling them. However, Bios prospered while Mekhos became poorer and poorer. In the end, Mekhos lost his shop and worked as a mechanic for Bios. What was the reason behind this? The watches consisted of about 1000 parts each. The watches that Mekhos made were designed such that, when he had to put down a partly assembled watch (for instance, to answer the phone), it immediately fell into pieces and had to be completely reassembled from the basic elements. On the other hand, Bios designed his watches so that he could put together subassemblies of about ten components each. Ten of these subassemblies could be put together to make a larger subassembly. Finally, ten of the larger subassemblies constituted the whole watch.
When Bios had to put his watches down to attend to some interruption, they did not break up into their elemental parts but only into their sub-assemblies. Now, the watchmakers were each disturbed at the same rate of once per hundred assembly operations. However, due to their different assembly methods, it took Mekhos four thousand times longer than Bios to complete a single watch.

The second observation was made by Koestler himself in his analysis of hierarchies and stable intermediate forms in non-living matter (atomic and molecular structure), living organisms, and social organizations.
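The arithmetic of the parable invites a quick experiment. The sketch below is a Monte Carlo simplification of mine, not Simon's analysis: it collapses the whole hierarchy of sub-assemblies into a single checkpoint interval, keeping only the 1000 parts and the one-in-a-hundred interruption rate given in the text.

import random

def ops_to_finish(checkpoint, parts=1000, p_interrupt=0.01):
    # Count assembly operations until one watch is complete; an
    # interruption loses all work since the last finished sub-assembly.
    done = ops = 0
    while done < parts:
        ops += 1
        if random.random() < p_interrupt:
            done -= done % checkpoint  # fall back to the last checkpoint
        else:
            done += 1
    return ops

random.seed(0)
# Bios checkpoints every 10 joins; Mekhos effectively never checkpoints.
bios = sum(ops_to_finish(checkpoint=10) for _ in range(20)) / 20
mekhos = sum(ops_to_finish(checkpoint=1000) for _ in range(5)) / 5  # a few seconds
print(f"Bios: ~{bios:,.0f} ops; Mekhos: ~{mekhos:,.0f} ops; "
      f"ratio ~{mekhos / bios:,.0f}x")

Even this flattened version reproduces the moral: Mekhos needs on the order of a thousand times more operations per watch, the same order of magnitude as the four-thousandfold figure in the parable.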
https://en.wikipedia.org/wiki/Holon_(philosophy)
Implicate order and explicate order are ontological concepts for quantum theory coined by theoretical physicist David Bohm during the early 1980s. They are used to describe two different frameworks for understanding the same phenomenon or aspect of reality. In particular, the concepts were developed in order to explain the bizarre behaviors of subatomic particles, which quantum physics describes and predicts with elegant precision but struggles to explain.[1]

In Bohm's Wholeness and the Implicate Order, he used these notions to describe how the same phenomenon might appear differently, or might be characterized by different principal factors, depending on contexts such as scales.[2] The implicate (also referred to as the "enfolded") order is seen as a deeper and more fundamental order of reality. In contrast, the explicate or "unfolded" order includes the abstractions that humans normally perceive. As he wrote:

The notion of implicate and explicate orders emphasizes the primacy of structure and process over individual objects. The latter are seen as mere approximations of an underlying process. In this approach, quantum particles and other objects are understood to have only a limited degree of stability and autonomy.[3]

Bohm believed that the weirdness of the behavior of quantum particles is caused by unobserved forces, maintaining that space and time might actually be derived from an even deeper level of objective reality. In the words of F. David Peat, Bohm considered that what we take for reality are "surface phenomena, explicate forms that have temporarily unfolded out of an underlying implicate order." That is, the implicate order is the ground from which reality emerges.[4]

Bohm, his colleague Basil Hiley, and other physicists of Birkbeck College worked toward a model of quantum physics in which the implicate order is represented in the form of an appropriate algebra or other pregeometry. They considered spacetime itself as part of an explicate order that is connected to an implicate order that they called pre-space. The spacetime manifold and the properties of locality and nonlocality all arise from an order in such pre-space. A. M. Frescura and Hiley suggested that an implicate order could be carried by an algebra, with the explicate order being contained in the various representations of this algebra.[5][6]

In analogy to Alfred North Whitehead's notion of "actual occasion,"[7] Bohm considered the notion of moment – a moment being a not entirely localizable event, with events being allowed to overlap[8] and being connected in an overall implicate order:[9]

I propose that each moment of time is a projection from the total implicate order. The term projection is a particularly happy choice here, not only because its common meaning is suitable for what is needed, but also because its mathematical meaning as a projection operation, P, is just what is required for working out these notions in terms of the quantum theory.

Bohm emphasized the primary role of the implicate order's structure:[10]

My attitude is that the mathematics of the quantum theory deals primarily with the structure of the implicate pre-space and with how an explicate order of space and time emerges from it, rather than with movements of physical entities, such as particles and fields. (This is a kind of extension of what is done in general relativity, which deals primarily with geometry and only secondarily with the entities that are described within this geometry.)
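Bohm's aside about the mathematical meaning of projection can be made concrete. The following is a sketch of the standard property he invokes; the rank-one example is an illustration of mine, not drawn from Bohm's text.

% An (orthogonal) projection operator P on a Hilbert space is exactly an
% idempotent, self-adjoint operator:
\[
  P^{2} = P, \qquad P^{\dagger} = P .
\]
% For instance, P = |\psi\rangle\langle\psi| projects any state onto its
% component along |\psi\rangle; applying P a second time adds nothing
% new, which echoes Bohm's image of each moment as a projection of the
% total implicate order.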
Central to Bohm's schema are correlations betweenobservablesof entities which seem separated by great distances in the explicate order (such as a particular electron here on Earth and analpha particlein one of the stars in theAbell 1835 galaxy, then a possible candidate for farthest galaxy from Earth known to humans), manifestations of the implicate order. Within quantum theory, there isentanglementof such objects. This view of order necessarily departs from any notion which entails signalling, and therefore causality. The correlation of observables does not imply a causal influence, and in Bohm's schema, the latter represents 'relatively' independent events in spacetime; and therefore explicate order. The implicate order represents the proposal of a generalmetaphysicalconcept in terms of which it is claimed thatmatterandconsciousnessmight both be understood, in the sense that it is proposed that both matter and consciousness: (i) enfold the structure of the whole within each region, and (ii) involve continuous processes of enfoldment and unfoldment. For example, in the case of matter, entities such as atoms may represent continuous enfoldment and unfoldment which manifests as a relatively stable and autonomous entity that can be observed to follow a relatively well-defined path in spacetime. In the case of consciousness, Bohm pointed toward evidence presented byKarl Pribramthatmemoriesmay be enfolded within every region of thebrainrather than being localized (for example, in particular regions of the brain, cells, or atoms). Bohm went on to say: As in our discussion of matter in general, it is now necessary to go into the question of how in consciousness the explicate order is what is manifest ... the manifest content of consciousness is based essentially on memory, which is what allows such content to be held in a fairly constant form. Of course, to make possible such constancy it is also necessary that this content be organized, not only through relatively fixed association but also with the aid of the rules of logic, and of our basic categories of space, time, causality, universality, etc. ... there will be a strong background of recurrent, stable, and separable features, against which the transitory and changing aspects of the unbroken flow of experience will be seen as fleeting impressions that tend to be arranged and ordered mainly in terms of the vast totality of the relatively static and fragmented content of [memories].[11] Bohm also claimed that "as with consciousness, each moment has a certain explicate order, and in addition it enfolds all the others, though in its own way. So the relationship of each moment in the whole to all the others is implied by its total content: the way in which it 'holds' all the others enfolded within it." Bohm characterises consciousness as a process in which at each moment, content that was previously implicate is presently explicate, and content which was previously explicate has become implicate. One may indeed say that our memory is a special case of the process described above, for all that is recorded is held enfolded within the brain cells and these are part of matter in general. The recurrence and stability of our own memory as a relatively independent sub-totality is thus brought about as part of the very same process that sustains the recurrence and stability in the manifest order of matter in general. 
It follows, then, that the explicate and manifest order of consciousness is not ultimately distinct from that of matter in general.[12] Bohm also used the termunfoldmentto characterise processes in which the explicate order becomes relevant (or "relevated"). Bohm likens unfoldment also to the decoding of a televisionsignalto produce a sensibleimageon ascreen. The signal, screen, and television electronics in this analogy represent the implicate order, while the image produced represents the explicate order. He also uses an example in which an ink droplet can be introduced into a highlyviscoussubstance(such asglycerine), and the substance rotated very slowly, such that there is negligiblediffusionof the substance. In this example, the droplet becomes a thread, which in turn eventually becomes invisible. However, by rotating the substance in the reverse direction, the droplet can essentially reform. When it is invisible, according to Bohm, the order of the ink droplet as a pattern can be said to beimplicatewithin the substance. In another analogy, Bohm asks us to consider a pattern produced by making small cuts in a folded piece of paper and then, literally, unfolding it. Widely separated elements of the pattern are, in actuality, produced by the same original cut in the folded piece of paper. Here, the cuts in the folded paper represent the implicate order, and the unfolded pattern represents the explicate order. Bohm employed thehologramas a means of characterising implicate order, noting that eachregionof aphotographicplate in which a hologram is observable contains within it the whole three-dimensional image, which can be viewed from a range of perspectives. That is, each region contains a whole and undivided image. In Bohm's words: There is the germ of a new notion of order here. This order is not to be understood solely in terms of a regular arrangement of objects (e.g., in rows) or as a regular arrangement of events (e.g., in a series). Rather, a total order is contained, in some implicit sense, in each region of space and time. Now, the word 'implicit' is based on the verb 'to implicate'. This means 'to fold inward' ... so we may be led to explore the notion that in some sense each region contains a total structure 'enfolded' within it".[13] Bohm noted that, although the hologram conveys undivided wholeness, it is nevertheless static. In this view of order, laws represent invariant relationships between explicate entities and structures, and thus Bohm maintained that, in physics, the explicate order generally reveals itself within well-constructed experimental contexts as, for example, in the sensibly observable results of instruments. With respect to implicate order, however, Bohm asked us to consider the possibility instead "that physical law should refer primarily to an order of undivided wholeness of the content of description similar to that indicated by the hologram rather than to an order of analysis of such content into separate parts...".[14] In the workScience, Order, and Creativity(Bohm and Peat, 1987), examples of implicate orders in science are laid out, as well as implicate orders which relate to painting, poetry and music. Bohm and Peat emphasize the role of orders of varying complexity, which influence the perception of a work of art as a whole. They note that implicate orders are accessible to humanexperience. 
They refer, for instance, to earlier notes which reverberate when listening to music, or various resonances of words and images which are perceived when reading or hearing poetry. Christopher Alexanderdiscussed his work in person with Bohm, and pointed out connections among his work and Bohm's notion of an implicate order inThe Nature of Order.[15] Bohm features as a fictional character in the novelThe Waveby British authorLochlan Bloom. The novel includes multiple narratives and explores many of the concepts of Bohm's work on implicate and explicate orders.[16] In proposing this new notion of order, Bohm explicitly challenged a number of tenets that he believed are fundamental to much scientific work: Bohm's proposals have at times been dismissed largely on the basis of such tenets. Hisparadigmis generally opposed toreductionism, and some view it as a form ofontologicalholism. On this, Bohm noted of prevailing views among physicists that "the world is assumed to be constituted of a set of separately existent, indivisible, and unchangeable 'elementary particles', which are the fundamental 'building blocks' of the entire universe ... there seems to be an unshakable faith among physicists that either such particles, or some other kind yet to be discovered, will eventually make possible a complete and coherent explanation of everything" (Bohm 1980, p. 173). In Bohm's conception of order, primacy is given to the undivided whole, and the implicate order inherent within the whole, rather than to parts of the whole, such as particles, quantum states, and continua. This whole encompasses all things,structures, abstractions, and processes, including processes that result in (relatively) stable structures as well as those that involve a metamorphosis of structures or things. In this view, parts may be entities normally regarded asphysical, such asatomsorsubatomic particles, but they may also beabstractentities, such as quantum states. Whatever their nature and character, according to Bohm, these parts are considered in terms of the whole, and in such terms, they constitute relatively separate and independent "sub-totalities." The implication of the view is, therefore, that nothing isfundamentallyseparate or independent. Bohm 1980, p. 11, said: "The new form of insight can perhaps best be called Undivided Wholeness in Flowing Movement. This view implies that flow is in some sense prior to that of the ‘things’ that can be seen to form and dissolve in this flow." According to Bohm, a vivid image of this sense of analysis of the whole is afforded byvortexstructures in a flowingstream. Such vortices can be relatively stablepatternswithin a continuous flow, but such an analysis does not imply that the flow patterns have any sharp division, or that they are literally separate and independently existent entities; rather, they are most fundamentally undivided. Thus, according to Bohm’s view, the whole is in continuousflux, and hence is referred to as theholomovement(movement of the whole). A key motivation for Bohm in proposing a new notion of order was thewell-known incompatibilityofquantum theorywithrelativity theory.Bohm 1980, p. xv summarised the state of affairs he perceived to exist: ...in relativity, movement is continuous, causally determinate and well defined, while in quantum mechanics it is discontinuous, not causally determinate and not well-defined. 
Each theory is committed to its own notions of essentially static and fragmentary modes of existence (relativity to that of separate events connectible by signals, and quantum mechanics to a well-defined quantum state). One thus sees that a new kind of theory is needed which drops these basic commitments and at most recovers some essential features of the older theories as abstract forms derived from a deeper reality in which what prevails is unbroken wholeness.

Bohm maintained that relativity and quantum theories are in basic contradiction in these essential respects, and that a new concept of order should begin with that toward which both theories point: undivided wholeness. This should not be taken to mean that he advocated such powerful theories be discarded. He argued that each was relevant in a certain context—i.e., a set of interrelated conditions within the explicate order—rather than having unlimited scope, and that apparent contradictions stem from attempts to overgeneralize by superposing the theories on one another, implying greater generality or broader relevance than is ultimately warranted. Thus, Bohm 1980, pp. 156–167 argued: "... in sufficiently broad contexts such analytic descriptions cease to be adequate ... 'the law of the whole' will generally include the possibility of describing the 'loosening' of aspects from each other, so that they will be relatively autonomous in limited contexts ... however, any form of relative autonomy (and heteronomy) is ultimately limited by holonomy, so that in a broad enough context such forms are seen to be merely aspects, relevated in the holomovement, rather than disjoint and separately existent things in interaction."

Before developing his implicate order approach, Bohm had proposed a hidden variable theory of quantum physics (see Bohm interpretation). According to Bohm, a key motivation for doing so had been purely to show the possibility of such theories. On this, Bohm 1980, p. 81 said, "... it should be kept in mind that before this proposal was made there had existed the widespread impression that no conception of any hidden variable at all, not even if it were abstract and hypothetical, could possibly be consistent with the quantum theory." Bohm 1980, p. 110 also claimed that "the demonstration of the possibility of theories of hidden variables may serve in a more general philosophical sense to remind us of the unreliability of conclusions based on the assumption of the complete universality of certain features of a given theory, however general their domain of validity seems to be." Another aspect of Bohm's motivation had been to point out a confusion he perceived to exist in quantum theory. On the dominant approaches in quantum theory, he said: "...we wish merely to point out that this whole line of approach re-establishes at the abstract level of statistical potentialities the same kind of analysis into separate and autonomous components in interaction that is denied at the more concrete level of individual objects" (Bohm 1980, p. 174).
https://en.wikipedia.org/wiki/Implicate_and_explicate_order
George Spencer-Brown(2 April 1923 – 25 August 2016) was an Englishpolymathbest known as the author ofLaws of Form. He described himself as a "mathematician, consulting engineer,psychologist, educational consultant and practitioner, consultingpsychotherapist, author, and poet".[1] Born inGrimsby, Lincolnshire, England, Spencer-Brown attendedMill Hill Schooland then passed the First M.B. in 1940 atLondon Hospital Medical College[2](now part ofBarts and The London School of Medicine and Dentistry). After serving in the Royal Navy (1943–47), he studied atTrinity College, Cambridge, earning Honours in Philosophy (1950) and Psychology (1951), and where he metBertrand Russell. From 1952 to 1958, he taught philosophy atChrist Church, Oxford, took M.A. degrees in 1954 from both Oxford and Cambridge, and wrote his doctorate thesisProbability and Scientific Inferenceunder the supervision ofWilliam Knealewhich was published as a book in 1957.[3][4] During the 1960s, he became a disciple of the innovative Scottish psychiatristR. D. Laing, frequently cited inLaws of Form. In 1964, onBertrand Russell's recommendation, he became a lecturer in formal mathematics at theUniversity of London. From 1969 onward, he was affiliated with the Department of Pure Mathematics and Mathematical Statistics at theUniversity of Cambridge. In the 1970s and 1980s, he was visiting professor at theUniversity of Western Australia,Stanford University, and at theUniversity of Maryland, College Park.[citation needed] Laws of Form, at once a work of mathematics and of philosophy, emerged from work in electronic engineering Spencer-Brown did around 1960, and from lectures onmathematical logiche later gave under the auspices of the University of London's extension program. First published in 1969, it has never been out of print. Spencer-Brown referred to the mathematical system ofLaws of Formas the "primary algebra" and the "calculus of indications"; others have termed it "boundary algebra". The primary algebra is essentially an elegant minimalist notation for thetwo-element Boolean algebra. One core aspect of the text is the 'observer dilemma' that arises from the very situation of the observer to have decided on the object of observation - while inevitably leaving aside other objects. Such an un-observed object is attributed the 'unmarked state', the realm of all 'unmarked space'.[5] Laws of Formhas influenced, among others,Heinz von Foerster,Louis Kauffman,Niklas Luhmann,Humberto Maturana,Francisco Varela,Leon Conrad,[6]and William Bricken. Some of these authors have modified and extended the primary algebra, with interesting consequences. In a 1976 letter to the Editor ofNature, Spencer-Brown claimed a proof of thefour-color theorem, which is not computer-assisted.[7]The preface of the 1979 edition ofLaws of Formrepeats that claim, and further states that the generally accepted computational proof by Appel, Haken, and Koch has 'failed' (page xii). Spencer-Brown's claimed proof of the four-color theorem has yet to find any defenders; Kauffman provides a detailed review of parts of that work.[8][9] The 6th edition ofLaws of Formadvertises that it includes "the first-ever proof ofRiemann's hypothesis".[10] During his time at Cambridge,[clarification needed]Spencer-Brown was a chesshalf-blue. He held two world records as aglider pilot, and was a sportscorrespondentto theDaily Express.[11]He also wrote some novels and poems, sometimes employing the pen nameJames Keys. 
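The Boolean reading of the primary algebra mentioned above can be made concrete with a short program. The sketch below follows one common interpretation (the unmarked state as False, the mark as True, juxtaposition as disjunction, enclosure as negation); the tuple encoding is my own convenience, not Spencer-Brown's notation.

def value(expr):
    # An expression is a tuple of enclosed subexpressions ("crosses").
    # Juxtaposed crosses are OR-ed together; a cross negates its content;
    # the empty tuple is the unmarked state.
    return any(not value(inner) for inner in expr)

VOID = ()      # the unmarked state, read as False
MARK = ((),)   # a single empty cross, read as True

assert value(VOID) == False
assert value(MARK) == True
# Law of calling: a mark beside a mark condenses to a single mark.
assert value(((), ())) == value(MARK)
# Law of crossing: a mark within a mark vanishes into the unmarked state.
assert value((((),),)) == value(VOID)

Under this reading, the two initial equations of the calculus reduce to familiar Boolean identities, which is the sense in which the primary algebra is a minimalist notation for the two-element Boolean algebra.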
Spencer-Brown died on 25 August 2016.[citation needed] He was buried at the London Necropolis, Brookwood, Surrey.[citation needed]

While not denying some of his talent, not all critics of Spencer-Brown's claims and writings have been willing to assess them at his own valuation; the poetry is at the most charitable reading an idiosyncratic taste, and some prominent voices have been decidedly dismissive of the value of his formal material. For example, Martin Gardner wrote in his essay "M-Pire Maps":

In December of 1976 G. Spencer-Brown, the maverick British mathematician, startled his colleagues by announcing he had a proof of the four-color theorem that did not require computer checking. Spencer-Brown's supreme confidence and his reputation as a mathematician brought him an invitation to give a seminar on his proof at Stanford University. At the end of three months all the experts who attended the seminar agreed that the proof's logic was laced with holes, but Spencer-Brown returned to England still sure of its validity. The "proof" has not yet been published. Spencer-Brown is the author of a curious little book called Laws of Form,[12] which is essentially a reconstruction of the propositional calculus by means of an eccentric notation. The book, which the British mathematician John Horton Conway once described as beautifully written but "content-free," has a large circle of counterculture devotees.[13]
https://en.wikipedia.org/wiki/G._Spencer-Brown
In philosophy, mereological essentialism is a mereological thesis about the relationship between wholes, their parts, and the conditions of their persistence. According to mereological essentialism, objects have their parts necessarily. If an object were to lose or gain a part, it would no longer be the original object. Mereological essentialism is typically taken to be a thesis about concrete material objects, but it may also be applied to abstract objects, such as a set or proposition. If mereological essentialism is correct, a proposition, or thought, has its parts essentially; in other words, it has ontological commitments to all its conceptual components.

The two prominent, competing material models of mereological essentialism are endurantism and perdurantism. Neither endurantism nor perdurantism implies mereological essentialism; one may advocate for either model without being committed to accepting mereological essentialism. Within an endurantist framework, objects are extended within space; they are collections of spatial parts. Objects persist through change (endure) by being wholly present at every instant of time. According to mereological essentialism, enduring objects have only their spatial parts essentially. Within a perdurantist framework, objects are extended through space-time; they have parts in both space and time. Under a framework that combines mereological essentialism and perdurantism, objects have both their temporal parts and spatial parts essentially.

Essentiality can be explained by referencing necessity and/or possible worlds. Mereological essentialism is then the thesis that objects have their parts necessarily, or that objects have their parts in every possible world in which the object exists. In other words, an object X composed of two parts a and b ceases to exist if it loses either part. Additionally, X ceases to exist if it gains a new part, c.

Mereological essentialism is a position defended[by whom?] in the debate regarding material constitution. For instance, several answers have been proposed regarding the question: "What is the relationship between a statue and the lump of clay from which it is made?" Coincidentalism is the view that the statue and the lump of clay are two objects located at the same place. The lump of clay should be distinguished from the statue because they have different persistence conditions. The lump would not survive the loss of a bit of clay, but the statue would. The statue would not survive being squashed into a ball, but the lump of clay would.

The following philosophers have thought mereological essentialism to be true:

Pre-20th century: Peter Abelard; Gottfried Leibniz
20th century: G. E. Moore; Roderick Chisholm; James Van Cleve
21st century: Michael Jubien; Mark Heller

Chisholm and van Cleve consider objects as enduring. Michael Jubien and Mark Heller defend mereological essentialism for perduring objects.

There are several arguments for mereological essentialism. Some are more formal; others use mereological essentialism as a solution to various philosophical puzzles or paradoxes. (This approach is mentioned in Olson (2006).) What would be the opposite of mereological essentialism? It would be that objects would survive the loss of any part. We can call this mereological inessentialism. But mereological inessentialism means that a table would survive replacement or loss of any of its parts. By successive replacement we could change the parts of the table so in the end it would look like a chair.
This is theShip of Theseusparadox. Because it is difficult to justify a clearly defined point at which the table is destroyed and replaced by the chair, the best solution to this puzzle may be mereological essentialism (Chisholm 1973). Imagine a person called Deon. He has a proper part, his foot. One day he loses his foot. The resulting entity is then known as Theon. But it seems that Theon existed when Deon existed by being a proper part of Deon. Did Deon survive? If he did, then Deon and Theon must be identical. But Theon is a proper part of Deon. This is paradoxical. One way to solve this puzzle is to deny that Deon has any proper parts. Defending this view is rejecting the principle of arbitrary undetached parts (Van Inwagen 1981). It means that a cup in front of you doesn't have a left part, a right part, a part where the ear of the cup is or a part where the coffee is stored (if the hole of the cup is a part of the cup). Some philosophers reject the existence of individual objects, orsimples. According to such authors, the world does not contain single, individualizable objects which we can use logic to quantify. Instead the world only contains stuff, or masses ofmatterwhich come in different quantities. We have for instance a gram of gold. There is a grammatical difference between stuff and things. It would not make sense to say, "take a gold," but instead we must specify a lump of gold (Simons 1987). Standard methods of quantification are methods of invoking thinghood on the world; it is then argued that if the world is made only of stuff, mereological essentialism must be true. The argument from a world made only of stuff was first noted by van Cleve (1986). Defenders of a stuff ontology are Michael Jubien (1993) and Mark Heller (1990). Because mereology is a new branch offormal systems, clear arguments against mereological essentialism have not yet been raised. The most common counterargument is that mereological essentialismentailsthat an object which undergoes a subtle change is not the same object. This seems to be directly contrary to common sense. For example, if my car gets a flat tire and I then replace the tire, mereological essentialism entails that it is not the same car. The most common argument against mereological essentialism is the view that it cannot be universally true. Take us, for example. Ashumans, which areliving organisms, we survive by having our parts replaced bymetabolic processesor evenorgan transplantation. We might have our hair or fingernails cut. All of these procedures do not seem to cause the nonexistence of the person or, for that matter, the nonexistence of any living organism. Therefore, mereological essentialism cannot be universally true (Plantinga 1975). This argument may fail if the mereological essentialist believes inpresentism, that the present is the only relevantly true world. This view is a response to the problem of Qualitative Change.
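The core thesis lends itself to a compact modal statement. The rendering below is a sketch of one common formalization (exact formulations vary by author); the existence predicate E is my notational convenience.

% Mereological essentialism: if x is part of y, then necessarily,
% whenever y exists, x is part of y.
\[
  \forall x\,\forall y\,\bigl( Pxy \rightarrow \Box\,(\mathrm{E}y \rightarrow Pxy) \bigr)
\]
% Pxy: "x is part of y"; \Box: metaphysical necessity; Ey: "y exists".
% Mereological inessentialism, as described above, denies this
% conditional for at least some parts.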
https://en.wikipedia.org/wiki/Mereological_essentialism
Inphilosophy,mereological nihilism(also calledcompositional nihilism) is themetaphysicalthesis that there are no objects withproper parts. Equivalently, mereological nihilism says that mereologicalsimples, or objects without any proper parts, are the only material objects that exist.[dubious–discuss]Mereological nihilism is distinct from ordinarynihilisminsofar as ordinary nihilism typically focuses on the nonexistence of common metaphysical assumptions such as ethical truths and objective meaning, rather than the nonexistence of composite objects. Our everyday perceptual experience suggests that we are surrounded by macrophysical objects that have other, smaller objects as their proper parts. For example, there seem to be such objects as tables, which appear to be composed of various other objects, such as the table-legs, a flat surface, and perhaps the nails or bolts holding those pieces together. Those latter objects, in turn, appear to be composed of still smaller objects. And so on. Indeed,everyputative material object our perceptual faculties are capable of representing appears to be composed of smaller parts. Mereological nihilists claim that there are no composite material objects. According to mereological nihilism, there are only fundamental physical simples arranged in various spatial patterns. For example, the mereological nihilist claims that, despite appearances to the contrary, there really are no tables. There are only fundamental physical simples spatially arranged and causally interrelated in such a way as to jointly cause perceptual faculties like ours to have table-like perceptual experiences. Nihilists often abbreviate claims like this one as follows: there are fundamental physical simples arranged table-wise. Ted Sider argued in a 2013 article that we should think of composition as arrangement.[1]According to Sider, when we say "there is a table", we mean there are mereological simples arranged table-wise. Mereological nihilism entails the denial of what is called classicalmereology, which is succinctly defined by philosopherAchille Varzi:[2] Mereology (from the Greek μερος, 'part') is the theory of parthood relations: of the relations of part to whole and the relations of part to part within a whole. Its roots can be traced back to the early days of philosophy, beginning with thePresocratic atomistsand continuing throughout the writings ofPlato(especially theParmenidesand theTheaetetus),Aristotle(especially theMetaphysics, but also thePhysics, theTopics, andDe partibus animalium), andBoethius(especiallyIn Ciceronis Topica). As can be seen from Varzi's passage, classical mereology depends on the idea that there aremetaphysicalrelations that connect part(s) to whole. Mereological nihilists maintain that such relations between part and whole do not exist. Nihilists typically claim that our senses give us the (false) impression that there are composite material objects, and then attempt to explain why nonetheless our thought and talk about such objects is 'close enough' to the truth to be innocuous and reasonable in most conversational contexts.[citation needed]Sider's linguistic revision that reformulates the existence of composite objects as merely the existence of arrangements of mereological simples is an example of this.[1]Tallant (2013) has argued against this maneuver. 
Tallant has argued that mereological nihilism is committed to answering the following question: when is it that a group of mereological simples is arranged in a particular way?[3] What relations must obtain among a group of mereological simples such that they are arranged table-wise? The nihilist can say when a group of objects composes another object: for them, never. But the nihilist, if he is committed to Sider's view, is committed to answering how mereological simples can be arranged in particular ways.

Another objection that can be raised against nihilism is that it posits the existence of far fewer objects than we typically think exist. The nihilist's ontology has been criticized for being too sparse, as it only includes mereological simples and denies the existence of composite objects that we intuitively take to exist, like tables, planets, and animals.

A further challenge that nihilists face arises when composition is examined in the context of contemporary physics. According to findings in quantum physics, there are multiple kinds of decomposition in different physical contexts. For example, there is no single decomposition of light; light can be said to be either composed of particles or waves depending on the context.[4] This empirical perspective poses a problem for nihilism because it does not seem like all material objects perfectly decompose to mereological simples.[dubious–discuss] In addition, some philosophers have speculated that there may not be a "bottom level" of reality. Atoms used to be understood as the most fundamental material objects, but were later discovered to be composed of subatomic particles and quarks. It is then possible that the most fundamental entities of current physics can actually be decomposed further than what is currently considered their base form, and their parts can be further decomposed.[dubious–discuss] If matter is infinitely decomposable in this respect, then mereological simples do not exist as an absolute entity. This conflicts with mereological nihilism's starting assumption that only mereological simples exist.[5]

Philosophers in favor of something close to pure mereological nihilism include Peter Unger, Cian Dorr, and Ross Cameron. There are a few philosophers who argue for what could be considered a partial nihilism, or what has been called quasi-nihilism, which is the position that only objects of a certain kind have parts. One such position is organicism: the view that living beings exist, but there are no other objects with parts, and all other objects that we believe to be composite—chairs, planets, etc.—therefore do not exist. Rather, other than living beings, which are composites (objects that have parts), there are only true atoms, or basic building blocks (which they call simples). Organicist philosophers include Trenton Merricks and Peter van Inwagen.[6][7]

Peter van Inwagen maintains that all material objects are mereological simples, with the exception of living things, so that the only composite objects are living things. Van Inwagen's view can be formulated like this: "Necessarily, for any non-overlapping xs, there is an object composed of the xs if either (i) the activities of the xs constitute a life or (ii) there is only one of the xs."
In other words, Van Inwagen contends that mereological atoms form a composite object when they engage in a sort of special, complex activity which amounts to a life.[8] One reason why Van Inwagen's solution to the Special Composition Question is so attractive is that it allows us to account for a conscious subject as a composite object. Nihilists have to maintain that the subject of a single consciousness is somehow the product of many discrete mereological atoms. Van Inwagen's argument against nihilism can be characterized along these lines: he himself exists; he is not a mereological simple; therefore at least one composite object exists.[9] In addition to allowing for the existence of trees, cats, and human beings, Van Inwagen's view is attractive because it inherits nihilism's elegant solutions to traditional problems in mereology like the Ship of Theseus and the problem of the many.

One objection that can be offered against Van Inwagen's view is the vagueness of the category of life and the ambiguity of when something gets "caught up" in a life. For example, if a cat takes a breath and inhales a carbon atom, it is unclear at what point that atom becomes officially incorporated into the cat's body.[10]

Even though there are no tables or chairs on this view, Van Inwagen thinks that it is still permissible to assert sentences such as 'there are tables'. This is because such a sentence can be paraphrased as 'there are simples arranged tablewise'; it is appropriate to assert it when there are simples arranged a certain way. It is a common mistake to hold that Van Inwagen's view is that tables are identical to simples arranged tablewise. This is not his view: Van Inwagen would reject the claim that tables are identical to simples arranged tablewise because he rejects the claim that composition is identity. Nonetheless, he maintains that an ordinary speaker who asserts, for instance, "There are four chairs in that room" will speak truly if there are, indeed, simples in the room arranged in the appropriate way (so as to make up, in the ordinary view, four chairs). He claims that the statement and its paraphrase "describe the same fact". Van Inwagen suggests an analogy with the motion of the sun: an ordinary speaker who asserts that "the sun has moved behind the elms" will still speak truly, even though we accept the Copernican claim that this is not, strictly speaking, literally true. (For details, see his book "Material Beings".)[citation needed]
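Van Inwagen's answer to the Special Composition Question, quoted above in prose, can be transcribed compactly. The rendering below is a sketch with notation of my own (Comp and Life are not standard symbols), and it reads the condition as a biconditional.

% For any non-overlapping xs: something is composed of the xs exactly
% when the activities of the xs constitute a life, or there is only
% one of the xs.
\[
  \Box\,\bigl( \exists y\,\mathrm{Comp}(y, xs) \leftrightarrow
      \mathrm{Life}(xs) \lor \text{there is only one of the } xs \bigr)
\]
% So read, the condition yields exactly Van Inwagen's composites
% (living things) plus the simples that nihilism already admits.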
https://en.wikipedia.org/wiki/Mereological_nihilism
In formal ontology, a branch of metaphysics, and in ontological computer science, mereotopology is a first-order theory, embodying mereological and topological concepts, of the relations among wholes, parts, parts of parts, and the boundaries between parts.

Mereotopology begins in philosophy with theories articulated by A. N. Whitehead in several books and articles he published between 1916 and 1929, drawing in part on the mereogeometry of De Laguna (1922). The first to have proposed the idea of a point-free definition of the concept of topological space in mathematics was Karl Menger, in his book Dimensionstheorie (1928); see also his (1940). The early historical background of mereotopology is documented in Bélanger and Marquis (2013), and Whitehead's early work is discussed in Kneebone (1963: ch. 13.5) and Simons (1987: 2.9.1).[1] The theory of Whitehead's 1929 Process and Reality augmented the part-whole relation with topological notions such as contiguity and connection. Despite Whitehead's acumen as a mathematician, his theories were insufficiently formal, even flawed. By showing how Whitehead's theories could be fully formalized and repaired, Clarke (1981, 1985) founded contemporary mereotopology.[2] The theories of Clarke and Whitehead are discussed in Simons (1987: 2.10.2) and Lucas (2000: ch. 10). The entry Whitehead's point-free geometry includes two contemporary treatments of Whitehead's theories, due to Giangiacomo Gerla, each different from the theory set out in the next section.

Although mereotopology is a mathematical theory, we owe its subsequent development to logicians and theoretical computer scientists. Lucas (2000: ch. 10) and Casati and Varzi (1999: ch. 4, 5) are introductions to mereotopology that can be read by anyone having done a course in first-order logic. More advanced treatments of mereotopology include Cohn and Varzi (2003) and, for the mathematically sophisticated, Roeper (1997). For a mathematical treatment of point-free geometry, see Gerla (1995). Lattice-theoretic (algebraic) treatments of mereotopology as contact algebras have been applied to separate the topological from the mereological structure; see Stell (2000), Düntsch and Winter (2004).

Barry Smith,[3] Anthony Cohn, Achille Varzi, and their co-authors have shown that mereotopology can be useful in formal ontology and computer science, by allowing the formalization of relations such as contact, connection, boundaries, interiors, holes, and so on. Mereotopology has been applied also as a tool for qualitative spatial-temporal reasoning, with constraint calculi such as the Region Connection Calculus (RCC). It provides the starting point for the theory of fiat boundaries developed by Smith and Varzi,[4] which grew out of the attempt to distinguish formally between boundaries that correspond to genuine physical discontinuities (bona fide boundaries) and boundaries induced by human demarcation (fiat boundaries). Mereotopology is being applied by Salustri in the domain of digital manufacturing (Salustri, 2002) and by Smith and Varzi to the formalization of basic notions of ecology and environmental biology (Smith and Varzi, 1999,[7] 2002[8]). It has been applied also to deal with vague boundaries in geography (Smith and Mark, 2003[9]), and in the study of vagueness and granularity (Smith and Brogaard, 2002,[10] Bittner and Smith, 2001,[11] 2001a[12]).

Casati and Varzi (1999: ch. 4) set out a variety of mereotopological theories in a consistent notation. This section sets out several nested theories that culminate in their preferred theory GEMTC, and follows their exposition closely. The mereological part of GEMTC is the conventional theory GEM.
Casati and Varzi do not say if the models of GEMTC include any conventional topological spaces. We begin with some domain of discourse, whose elements are called individuals (a synonym for mereology is "the calculus of individuals"). Casati and Varzi prefer limiting the ontology to physical objects, but others freely employ mereotopology to reason about geometric figures and events, and to solve problems posed by research in machine intelligence.

An upper case Latin letter denotes both a relation and the predicate letter referring to that relation in first-order logic. Lower case letters from the end of the alphabet denote variables ranging over the domain; letters from the start of the alphabet are names of arbitrary individuals. If a formula begins with an atomic formula followed by the biconditional, the subformula to the right of the biconditional is a definition of the atomic formula, whose variables are unbound. Otherwise, variables not explicitly quantified are tacitly universally quantified. The axiom Cn below corresponds to axiom C.n in Casati and Varzi (1999: ch. 4).

We begin with a topological primitive, a binary relation called connection; the atomic formula Cxy denotes that "x is connected to y." Connection is governed, at minimum, by the axioms:

C1. Cxx. (reflexivity)
C2. Cxy → Cyx. (symmetry)

Let E, the binary relation of enclosure, be defined as:

Exy ↔ (Czx → Czy).

Exy is read as "y encloses x" and is also topological in nature. A consequence of C1–2 is that E is reflexive and transitive, and hence a preorder. If E is also assumed extensional, so that:

(Exa ↔ Exb) ↔ (a = b),

then E can be proved antisymmetric and thus becomes a partial order. Enclosure, notated xKy, is the single primitive relation of the theories in Whitehead (1919, 1920), the starting point of mereotopology.

Let parthood be the defining primitive binary relation of the underlying mereology, and let the atomic formula Pxy denote that "x is part of y". We assume that P is a partial order. Call the resulting minimalist mereological theory M. If x is part of y, we postulate that y encloses x:

C3. Pxy → Exy.

C3 nicely connects mereological parthood to topological enclosure. Let O, the binary relation of mereological overlap, be defined as:

Oxy ↔ ∃z(Pzx ∧ Pzy).

Oxy is read as "x and y overlap." With O in hand, a consequence of C3 is:

Oxy → Cxy.

Note that the converse does not necessarily hold. While things that overlap are necessarily connected, connected things do not necessarily overlap. If this were not the case, topology would merely be a model of mereology (in which "overlap" is always either primitive or defined).

Ground mereotopology (MT) is the theory consisting of the primitives C and P, the defined E and O, the axioms C1–3, and axioms assuring that P is a partial order. Replacing the M in MT with the standard extensional mereology GEM results in the theory GEMT.

Let IPxy denote that "x is an internal part of y." IP is defined as:

IPxy ↔ (Pxy ∧ (Czx → Ozy)).

Let σx φ(x) denote the mereological sum (fusion) of all individuals in the domain satisfying φ(x). σ is a variable-binding prefix operator. The axioms of GEM assure that this sum exists if φ(x) is a first-order formula. With σ and the relation IP in hand, we can define the interior of x, ix, as the mereological sum of all interior parts z of x:

ix =df σz(IPzx).

Two easy consequences of this definition are:

iW = W,

where W is the universal individual, and

C5. P(ix)x. (Inclusion)[13]

The operator i has two more axiomatic properties:

C6. i(ix) = ix. (Idempotence)
C7. i(x × y) = ix × iy,

where a × b is the mereological product of a and b, not defined when Oab is false. i distributes over product.

It can now be seen that i is isomorphic to the interior operator of topology. Hence the dual of i, the topological closure operator c, can be defined in terms of i, and Kuratowski's axioms for c are theorems. Likewise, given an axiomatization of c that is analogous to C5–7, i may be defined in terms of c, and C5–7 become theorems. Adding C5–7 to GEMT results in Casati and Varzi's preferred mereotopological theory, GEMTC.

x is self-connected if it satisfies the following predicate:

SCx ↔ ((Owx ↔ (Owy ∨ Owz)) → Cyz).

Note that the primitive and defined predicates of MT alone suffice for this definition. The predicate SC enables formalizing the necessary condition given in Whitehead's Process and Reality for the mereological sum of two individuals to exist: they must be connected. Formally:

C8. Cxy → ∃z(SCz ∧ Ozx ∧ Ozy ∧ (Pwz → (Owx ∨ Owy))).

Given some mereotopology X, adding C8 to X results in what Casati and Varzi call the Whiteheadian extension of X, denoted WX. Hence the theory whose axioms are C1–8 is WGEMTC. The converse of C8 is a GEMTC theorem. Hence, given the axioms of GEMTC, C is a defined predicate if O and SC are taken as primitive predicates.

If the underlying mereology is atomless and weaker than GEM, the axiom that assures the absence of atoms (P9 in Casati and Varzi 1999) may be replaced by C9, which postulates that no individual has a topological boundary:

C9. ∀x∃y(Pyx ∧ (Czy → Ozx) ∧ ¬(Pxy ∧ (Czx → Ozy))).

When the domain consists of geometric figures, the boundaries can be points, curves, and surfaces. What boundaries could mean, given other ontologies, is not an easy matter and is discussed in Casati and Varzi (1999: ch. 5).
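The first layer of this hierarchy, MT, is small enough to check mechanically. The sketch below builds one toy finite model of my own choosing (individuals as non-empty subsets of a four-point space, parthood as inclusion, connection as non-empty intersection) and brute-forces C1–C3 and the theorem Oxy → Cxy; it is an illustration, not the only model the axioms admit.

from itertools import chain, combinations

points = {1, 2, 3, 4}
# Individuals: all non-empty subsets of the point space.
individuals = [frozenset(s) for s in chain.from_iterable(
    combinations(points, r) for r in range(1, len(points) + 1))]

def C(x, y):   # connection: sharing at least one point
    return bool(x & y)

def P(x, y):   # parthood: set inclusion (a partial order)
    return x <= y

def E(x, y):   # enclosure: everything connected to x is connected to y
    return all(C(z, y) for z in individuals if C(z, x))

def O(x, y):   # overlap: some individual is part of both
    return any(P(z, x) and P(z, y) for z in individuals)

pairs = [(x, y) for x in individuals for y in individuals]
assert all(C(x, x) for x in individuals)              # C1: reflexivity
assert all(C(y, x) for x, y in pairs if C(x, y))      # C2: symmetry
assert all(E(x, y) for x, y in pairs if P(x, y))      # C3: parthood implies enclosure
assert all(C(x, y) for x, y in pairs if O(x, y))      # theorem: Oxy → Cxy
print("C1-C3 and Oxy → Cxy hold in this toy model.")

In this crude model connection and overlap happen to coincide, since a shared point is itself an individual; the interest of mereotopology lies in models where two individuals can be connected, for example by touching along a boundary, without overlapping.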
https://en.wikipedia.org/wiki/Mereotopology
A meronomy or partonomy is a hierarchical taxonomy that deals with part–whole relationships. For example, a car has parts that include engine, body, and wheels; and the body has parts that include doors and windows. These conceptual structures are used in linguistics and computer science, with applications in biology. The part–whole relationship is sometimes referred to as HAS-A, and corresponds to object composition in object-oriented programming.[1] The study of meronomy is known as mereology, and in linguistics a meronym is the name given to a constituent part of, a substance of, or a member of something. "X" is a meronym of "Y" if an X is a part of a Y.[2] The unit of organisation that corresponds to the taxonomical taxon is the meron. In formal terms, in the context of knowledge representation and ontologies, a meronomy is a partial order of concept types by the part–whole relation.[3] In the classic study of parts and wholes, mereology, the three defining properties of a partial order serve as axioms:[4] the part-of relation is, respectively, reflexive, antisymmetric, and transitive. Meronomies may be represented in semantic web languages such as OWL and SKOS. In natural languages they are represented by meronyms and holonyms.
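The HAS-A reading mentioned above is easy to picture in code. The sketch below composes the car example from the text; the class and method names are my own, not from any particular library.

from dataclasses import dataclass, field

@dataclass
class Part:
    name: str
    parts: list["Part"] = field(default_factory=list)  # HAS-A: composition

    def all_parts(self):
        # Walk the hierarchy: the transitive closure of the part-of relation.
        for p in self.parts:
            yield p
            yield from p.all_parts()

car = Part("car", [
    Part("engine"),
    Part("body", [Part("door"), Part("window")]),
    Part("wheel"),
])

# Transitivity: "door" is part of "body" and "body" is part of "car",
# so "door" appears among the parts of "car".
print([p.name for p in car.all_parts()])
# -> ['engine', 'body', 'door', 'window', 'wheel']

Reflexivity and antisymmetry, the other two partial-order axioms, are properties of the part-of relation itself rather than of this data structure; here they correspond to treating each part as trivially part of itself and to ruling out cycles in the hierarchy.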
https://en.wikipedia.org/wiki/Meronomy
In linguistics, meronymy (from Ancient Greek μέρος (méros) 'part' and ὄνυμα (ónuma) 'name') is a semantic relation between a meronym denoting a part and a holonym denoting a whole. In simpler terms, a meronym is in a part-of relationship with its holonym. For example, finger is a meronym of hand, which is its holonym. Similarly, engine is a meronym of car, which is its holonym. Fellow meronyms (naming the various fellow parts of any particular whole) are called comeronyms (for example, leaves, branches, trunk, and roots are comeronyms under the holonym of tree).

Holonymy (from Ancient Greek ὅλος (hólos) 'whole' and ὄνυμα (ónuma) 'name') is the converse of meronymy. A closely related concept is that of mereology, which specifically deals with part–whole relations and is used in logic. It is formally expressed in terms of first-order logic. A meronymy can also be considered a partial order.

Meronym and holonym refer to part and whole respectively, which is not to be confused with hypernym, which refers to type. For example, a holonym of leaf might be tree (a leaf is a part of a tree), whereas a hypernym of oak tree might be tree (an oak tree is a type of tree).
https://en.wikipedia.org/wiki/Meronymy
The term monad (from Ancient Greek μονάς (monas) 'unity' and μόνος (monos) 'alone')[1] is used in some cosmic philosophy and cosmogony to refer to a most basic or original substance. As originally conceived by the Pythagoreans, the Monad is therefore Supreme Being, divinity, or the totality of all things. According to some philosophers of the early modern period, most notably Gottfried Wilhelm Leibniz, there are infinite monads, which are the basic and immense forces, elementary particles, or simplest units that make up the universe.[2]

According to Hippolytus, this worldview was inspired by the Pythagoreans, who called the first thing that came into existence the "monad", which begat (bore) the dyad (from the Greek word for two), which begat the numbers, which begat the point, begetting lines or finiteness, etc.[3] It meant divinity, the first being, or the totality of all beings, referring in cosmogony (creation theories) variously to a source acting alone and/or an indivisible origin and equivalent comparators.[4]

Pythagorean and Neoplatonic philosophers like Plotinus and Porphyry condemned Gnosticism (see Neoplatonism and Gnosticism) for its treatment of the monad.

In his Latin treatise Maximae theologiae, Alan of Lille affirms that "God is an intelligible sphere, whose center is everywhere and whose circumference is nowhere." The French philosopher Rabelais ascribed this proposition to Hermes Trismegistus.[5]

The symbolism is a free exegesis related to the Christian Trinity.[5] Alan of Lille mentions Trismegistus' Book of the Twenty-Four Philosophers, where it says a Monad can uniquely beget another Monad; in this, followers of the religion saw the coming into being of God the Son from God the Father, whether by way of generation or by way of creation.[5] This statement is also shared by the pagan author of the Asclepius,[5] which has sometimes been identified with Trismegistus. The Book of the Twenty-Four Philosophers completes the scheme, adding that the ardor of the second Monad towards the first Monad would be the Holy Ghost.[5] It closes a physical circle in a logical triangle (with a retroaction).

The Euclidean symbolism of the centered sphere also concerns the secular debate on the existence of a center of the universe. The idea of the monad is also reflected in the demiurge, or the belief in one supreme being that brought about the creation of the universe.

For the Pythagoreans, the generation of the number series was related to objects of geometry as well as cosmogony.[6] According to Diogenes Laërtius, from the monad evolved the dyad; from it, numbers; from numbers, points; then lines, two-dimensional entities, three-dimensional entities, bodies, culminating in the four elements earth, water, fire and air, from which the rest of our world is built up.[7][a]

The term monad was adopted from Greek philosophy by modern philosophers Giordano Bruno, Anne Conway, Gottfried Wilhelm Leibniz (Monadology), John Dee (The Hieroglyphic Monad), and others. The concept of the monad as a universal substance is also used by Theosophists as a synonym for the Sanskrit term "svabhavat"; the Mahatma Letters make frequent use of the term.[9]
https://en.wikipedia.org/wiki/Monad_(philosophy)
In mathematics and logic, plural quantification is the theory that an individual variable x may take on plural, as well as singular, values. As well as substituting individual objects such as Alice, the number 1, the tallest building in London, etc. for x, we may substitute both Alice and Bob, or all the numbers between 0 and 10, or all the buildings in London over 20 stories. The point of the theory is to give first-order logic the power of set theory, but without any "existential commitment" to such objects as sets. The classic expositions are Boolos 1984 and Lewis 1991.

The view is commonly associated with George Boolos, though it is older (see notably Simons 1982), and is related to the view of classes defended by John Stuart Mill and other nominalist philosophers. Mill argued that universals or "classes" are not a peculiar kind of thing, having an objective existence distinct from the individual objects that fall under them; a class "is neither more nor less than the individual things in the class" (Mill 1904, II. ii. 2, also I. iv. 3). A similar position was also discussed by Bertrand Russell in chapter VI of Russell (1903), but later dropped in favour of a "no-classes" theory. See also Gottlob Frege 1895 for a critique of an earlier view defended by Ernst Schroeder. The general idea can be traced back to Leibniz (Levey 2011, pp. 129–133).

Interest in plurals revived with work in linguistics in the 1970s by Remko Scha, Godehard Link, Fred Landman, Friederike Moltmann, Roger Schwarzschild, Peter Lasersohn and others, who developed ideas for a semantics of plurals.

Sentences like "Alice, Bob and Carol cooperate" are said to involve a multigrade (also known as variably polyadic, also anadic) predicate or relation ("cooperate" in this example), meaning that they stand for the same concept even though they don't have a fixed arity (cf. Linnebo & Nicolas 2008). The notion of a multigrade relation/predicate appeared as early as the 1940s and was notably used by Quine (cf. Morton 1975). Plural quantification deals with formalizing the quantification over the variable-length arguments of such predicates, e.g. "xx cooperate" where xx is a plural variable. Note that in this example it makes no sense, semantically, to instantiate xx with the name of a single person.

Broadly speaking, nominalism denies the existence of universals (abstract entities), like sets, classes, relations, properties, etc. Plural logics were thus developed as an attempt to formalize reasoning about plurals, such as those involved in multigrade predicates, apparently without resorting to notions that nominalists deny, e.g. sets.

Standard first-order logic has difficulties in representing some sentences with plurals. Most well known is the Geach–Kaplan sentence "some critics admire only one another". Kaplan proved that it is nonfirstorderizable (the proof can be found in that article). Hence its paraphrase into a formal language commits us to quantification over (i.e. the existence of) sets.

Boolos argued that second-order monadic quantification may be systematically interpreted in terms of plural quantification, and that, therefore, second-order monadic quantification is "ontologically innocent".[1]

Later, Oliver & Smiley (2001), Rayo (2002), Yi (2005) and McKay (2006) argued that sentences containing predicates such as "are shipmates", "are meeting together", or "are surrounding a building" also cannot be interpreted in monadic second-order logic. This is because such predicates are not distributive. A predicate F is distributive if, whenever some things are F, each one of them is F.
But in standard logic, every monadic predicate is distributive. Yet such sentences also seem innocent of any existential assumptions, and do not involve quantification. So one can propose a unified account of plural terms that allows for both distributive and non-distributive satisfaction of predicates, while defending this position against the "singularist" assumption that such predicates are predicates of sets of individuals (or of mereological sums).

Several writers have suggested that plural logic opens the prospect of simplifying the foundations of mathematics, avoiding the paradoxes of set theory, and simplifying the complex and unintuitive axiom sets needed in order to avoid them.

Recently, Linnebo & Nicolas (2008) have suggested that natural languages often contain superplural variables (and associated quantifiers), as in "these people, those people, and these other people compete against each other" (e.g. as teams in an online game), while Nicolas (2008) has argued that plural logic should be used to account for the semantics of mass nouns such as "wine" and "furniture".

This section presents a simple formulation of plural logic/quantification, approximately the same as that given by Boolos in Nominalist Platonism (Boolos 1985). The sub-sentential units are the predicate names, the singular variable symbols x, y, ..., and the plural variable symbols x̄, ȳ, .... Full sentences are built in the usual way: atomic predications F(x₁, ..., xₙ) of singular variables, negations and conjunctions of sentences, and singular quantifications ∃x φ, together with two new forms, plural membership x ≺ ȳ ("x is one of the ȳ") and plural quantification ∃ȳ φ. The last two forms are the only essentially new components of the syntax of plural logic. Other logical symbols definable in terms of these can be used freely as notational shorthands. This logic turns out to be equi-interpretable with monadic second-order logic.

The model theory/semantics of plural logic is where the logic's lack of sets is cashed out. A model is defined as a tuple (D, V, s, R) where D is the domain, V is a collection of valuations V_F for each predicate name F in the usual sense, and s is a Tarskian sequence (assignment of values to variables) in the usual sense (i.e. a map from singular variable symbols to elements of D). The new component R is a binary relation relating values in the domain to plural variable symbols.

Satisfaction is given by the usual Tarskian clauses for atomic sentences, connectives, and singular quantifiers, together with two new clauses: x ≺ ȳ is satisfied by (s, R) just in case s(x) stands in R to ȳ, and ∃ȳ φ is satisfied just in case φ is satisfied by some R′ with R′ ≈ȳ R. Here, for singular variable symbols, s ≈x s′ means that for all singular variable symbols y other than x, it holds that s_y = s′_y; and for plural variable symbols, R ≈x̄ R′ means that for all plural variable symbols ȳ other than x̄, and for all objects of the domain d, it holds that dRȳ = dR′ȳ. As in the syntax, the clauses for ≺ and for the plural quantifier are the only truly new parts of plural logic.

Boolos observes that by using assignment relations R, the domain does not have to include sets, and therefore plural logic achieves ontological innocence while still retaining the ability to talk about the extensions of a predicate. Thus, the plural-logic comprehension schema ∃x̄ ∀y (y ≺ x̄ ↔ F(y)) does not yield Russell's paradox, because quantification over plural variables does not quantify over the domain.
Another aspect of the logic as Boolos defines it, crucial to this bypassing of Russell's paradox, is the fact that sentences of the form F(x̄) are not well-formed: predicate names can only combine with singular variable symbols, not plural variable symbols. This can be taken as the simplest and most obvious argument that plural logic as Boolos defined it is ontologically innocent.
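To illustrate how the formalism is used, here is one standard-style rendering of the Geach–Kaplan sentence discussed above, offered as an illustrative sketch: the predicate letters C ("is a critic") and A ("admires") are choices made here, not notation from the article:

% "Some critics admire only one another", with a plural variable xx:
\exists xx \, \bigl[ \exists y \,(y \prec xx)
  \;\wedge\; \forall y \,\bigl(y \prec xx \rightarrow Cy\bigr)
  \;\wedge\; \forall y \forall z \,\bigl((y \prec xx \wedge Ayz) \rightarrow (z \prec xx \wedge y \neq z)\bigr) \bigr]

Since xx ranges plurally over individuals, no set of critics is quantified over; this is exactly the ontological innocence claimed for the plural reading.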
https://en.wikipedia.org/wiki/Plural_quantification
In contemporary mereology, a simple is any thing that has no proper parts. Sometimes the term "atom" is used, although in recent years the term "simple" has become the standard. Simples are to be contrasted with atomless gunk (where something is "gunky" if it is such that every proper part has a further proper part; a potential omnidivisible). Necessarily, given the definitions, everything is either composed of simples, gunk, or a mixture of the two. Classical mereology is consistent with both the existence of gunk and either finite or infinite simples (see Hodges and Lewis 1968).

Mirroring the special composition question is the Simple Question.[1] It asks what the jointly necessary and sufficient conditions are for x to be a mereological simple. In the literature this question explicitly concerns what it is for a material object to lack proper parts, although there is no reason why similar questions cannot be asked of things from other ontological categories. There have been many suggested answers to the Simple Question. Answers include that x is a simple if and only if it is a point-sized object; that x is a simple if and only if it is indivisible; or that x is a simple if and only if it is maximally continuous. Kris McDaniel has argued that what it is for an object to be a simple is a matter of brute fact, and that there is no non-trivial answer to the Simple Question (2007b).

Of those philosophers who believe the material world contains simples, there has recently been debate over whether there can be extended simples (see Braddon-Mitchell and Miller 2006, Hudson 2006, Markosian 1998, 2004, McDaniel 2007a, 2007b, McKinnon 2003, Parsons 2000, Sider 2006, Simons 2004 inter alia). An extended simple is (i) a material object that is (ii) simple and (iii) occupies an extended region of space. Various reasons have been offered in favor of the claim that extended simples are possible, including: (a) that they are conceivable (Markosian 1998); (b) that purportedly plausible modal principles claiming, roughly, that there are no necessary connections between distinct existences entail their possibility (McDaniel 2007a, Saucedo 2009, Sider 2006); and (c) that contemporary physical theories entail that there are extended simples (Braddon-Mitchell and Miller 2006). One might also argue in favor of the possibility of extended simples by noting that their existence is consistent with one's preferred answer to the Simple Question. In the literature, however, the reasoning is often reversed: those who think that extended simples are possible often use their purported possibility to argue against answers to the Simple Question that entail their impossibility, and those who think that they are impossible use their purported impossibility to argue against answers to the Simple Question that entail (or strongly suggest) their possibility.

There have also been arguments against extended simples. These include variants on Lewis' argument from temporary intrinsics, as well as arguments that, intuitively, an extended object must have, for instance, a right half and a left half, and thus have parts (cf. Zimmerman 1996: 10). Similarly, one who endorses the Doctrine of Arbitrarily Undetached Parts, which states that, necessarily, if an object occupies region R then every occupiable proper sub-region of R is exactly occupied by a proper part of that object (see van Inwagen 1981), might use that principle in an argument against the possibility of extended simples.
If there are no extended simples, the only remaining options are material objects made of unextended simples (objects whose space-time extension is zero) or atomless gunk. Some philosophers seem to have held that the whole universe is one enormous extended simple; according to some interpretations, Descartes and Spinoza, for instance, held this view. More recently, this view has been defended in Schaffer 2007.

The use of 'simple' is not restricted to material objects. Anything, no matter what ontological category it is from, is a simple if and only if it has no proper parts. Thus Lewis has argued that singletons are simples (Lewis 1991), and spacetime points are often thought to be simples (although in some non-standard spacetimes, points have proper parts). Similarly, there is a question of whether things from other categories – for instance, fictional characters and properties, if there are such things – are simples. Furthermore, just as every material object may be made of atomless gunk rather than simples, so too for objects from other ontological categories. For instance, some have held that spacetime is gunky, claiming that every region of spacetime has a proper sub-region.
https://en.wikipedia.org/wiki/Simple_(philosophy)
Compositional objects are wholes instantiated by collections of parts. If an ontology wishes to permit the inclusion of compositional objects, it must define which collections of objects are to be considered parts composing a whole. Mereology, the study of relationships between parts and their wholes, provides specifications on how parts must relate to one another in order to compose a whole.

Ontological disputes do not revolve around what particular matter is present; rather, the center of disputation is what objects can be said to be instantiated by a given collection of matter. The token objects posited by a given ontology may be classified as instances of one or more distinct object types. As the types of objects accepted proliferate, so do the possible tokens that a given collection of matter can be said to instantiate. This creates variations in size between ontologies, which serve as an arena for disputes among philosophers. The ontologies of present concern are those that include compositional objects among their posited types.

Compositional objects are objects made of a collection of one or more parts. These objects seem to be included in any intuitively constructed ontology, as objects ordinarily encountered are doubtless composed of parts. For example, any ontology that affirms the existence of tables, rabbits, or rocks necessarily commits to the inclusion of some compositional objects. The qualification 'some' compositional objects foretells the point of attack suffered by these theories. Clarification demands that they provide a means to account for which compositional objects are included and which are excluded. One may include tables and, presumably, chairs, but what about the composition of the table and surrounding chairs? What characteristics of a collection of parts determine that they form a whole?[1]

Mereological nihilism is an extreme eliminative position. It denies that any objects actually instantiate the parthood relation appealed to in theoretical descriptions of mereology. If there are no relationships that count as parthood relationships, then there are no composite objects. One may initially seek to reject such a position by pointing to its counterintuitive conclusions. However, there are other mereological positions that prove equally counterintuitive, and so a more substantial rebuttal is required. A principled rejection of mereological nihilism is put forward by those committed to atomless gunk. A mereology is gunky if every part is itself a whole composed of further parts. There is no end to the decomposition of objects, no fundamental part or mereological atom. There is no place for the atoms posited by mereological nihilism in gunky ontologies. This causes a problem because if all that exists are atoms, but there is nothing like an atom within the ontology, then nothing can be said to exist (Van Cleve, 2008). Noting the appeal of accepting that things do exist, one must reject mereological nihilism in order to maintain a gunky ontology. Not everyone will strive to maintain a gunky ontology, and so mereological nihilism is still potentially a viable position.

There are various attempts to conserve the existence of parthood relationships. These theories all attempt to specify characteristics that a collection of objects must possess in order to compose a whole. Characteristics may derive from some principle or be proposed as brute fact.
A principled account of the composition relationship will appeal to a general characteristic which is sufficient to instantiate the relationship. Many of these accounts appeal to characteristics derived from intuitive notions about what does or does not allow objects to function as parts in a whole. Two such proposed restricting characteristics are connection and cohesion (Van Cleve, 2008).

First, connection is the stipulation that objects must be spatially continuous to some degree in order to be considered parts composing a whole. Objects like tables are made of legs connected to tops. Table tops and legs are in direct contact with one another; the parts are spatially contiguous. Yet the chairs are only in proximity to the table and so do not compose a table set. In order to maintain the standard of absolute contiguity, one would have to recruit the air molecules bridging the span between the table and chairs. This is unsatisfactory, though, because it fails to exclude extraordinary objects such as the table, the air molecules, and the dog's nose as he begs for food. It seems necessary to redefine connection as some degree of proximity between parts within a whole. By abandoning the extreme of direct contact, any account of connection acquires the burden of defining what degree of proximity instantiates composition. It will not do to leave specification of degree for future theorists if one cannot even show it is possible to provide such a determination in a principled manner.

The spatial continuum can be treated as three axes composed of distinct ordered points. Suppose absolute succession of points along a dimension corresponds to direct contact of parts. According to a moderate formulation of connection, composition is instantiated by two objects separated by a countable number of discrete points (x), where (x) need not be one, but cannot be unbounded. Unfortunately, even the more moderate formulation is untenable. Criticizing the possibility of bounding degree, Sider (2001) takes as given these premises: (1) on a continuum of discrete points, if there are instances both of composition and of its absence, then any series of points instantiating composition (e.g. (1, 2, 3, 4)) is continuous with some series that does not (e.g. (5, 6, 7)); (2) there is no principled way to determine a cutoff for composition along such continuums (no non-arbitrary way to decide between (1, 2, 3) and (1, 2, 3, 4)); (3) since the nature of existence does not allow for indeterminacy, a cutoff must be specified (a failure to decide between (1, 2, 3) and (1, 2, 3, 4) leaves (4) in a position between existence and non-existence, which does not exist). Conclusion: if composition is to be non-arbitrary, then it must either always occur or never occur.

Sider's rejection of any bounding of degree is not particular to spatial proximity. Degree of cohesion can also be represented as a continuum. Much as absolute spatial contiguity was deemed too strict, absolute cohesion is also rejected. To illustrate, Van Cleve (2008) describes how a rod and line compose a fishing rod. The line must move with the rod to some degree. In order to accomplish this, knots of line are tied around the rod. As the knots are tightened, the line becomes more and more fixed to the rod. There is a cutoff where the line could be tighter, yet is tight enough to compose the fishing rod. Any variable represented on a continuum will fail to provide a principled determination of this cutoff.
According to Van Inwagen, a collection of objects is considered parts composing a whole when that whole demonstrates life (Van Cleve, 2008). This approach guarantees the existence of you and me, while ruling out extraordinary objects that are consistent with other conservative theories. Detractors of the 'life' criterion point out the difficulty of defining when life is present. It is not clear whether a virion, a virus particle composed of nucleic acid and a surrounding capsid, is a compositional object or not. Additionally, in some formerly paradigmatic cases of life it can be difficult to identify when life is no longer present, and thus when the compositional object is no longer extant (e.g. brain death).

Mereological universalism is an extreme permissive position. Essentially, mereological universalism contends that any collection of objects constitutes a whole. This secures the existence of any compositional objects intuitively thought to exist. However, by the same light that ordinary objects exist, so do much stranger ones. For example, there exists both the object composed of my key ring and keys and the object composed of the moon and six pennies located on James Van Cleve's desk (Van Cleve, 2008). Motivation for such a counterintuitive position is not immediately apparent, but arises from the ability to reject all alternatives. Despite little intuitive appeal, mereological universalism seems less susceptible to principled rejection than any of its alternatives.

"Ordinary Objects", The Stanford Encyclopedia of Philosophy (Spring 2016 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/spr2016/entries/ordinary-objects/>.

Van Cleve, J. (2008). The moon and sixpence: a defense of mereological universalism.

Varzi, Achille, "Mereology", The Stanford Encyclopedia of Philosophy (Winter 2016 Edition), Edward N. Zalta (ed.), forthcoming, URL = <https://plato.stanford.edu/archives/win2016/entries/mereology/>.
https://en.wikipedia.org/wiki/Composition_(objects)
Information Processing Language (IPL) is a programming language created by Allen Newell, Cliff Shaw, and Herbert A. Simon at RAND Corporation and the Carnegie Institute of Technology around 1956. Newell had the job of language specifier-application programmer, Shaw was the system programmer, and Simon had the job of application programmer-user. The language includes features intended to help with programs that perform simple problem-solving actions such as lists, dynamic memory allocation, data types, recursion, functions as arguments, generators, and cooperative multitasking. IPL invented the concept of list processing, albeit in an assembly-language style.

The data structure of IPL is the list, but lists are more intricate structures than in many languages. A list consists of a singly linked sequence of symbols, as might be expected, plus some description lists, which are subsidiary singly linked lists interpreted as alternating attribute names and values. IPL provides primitives to access and mutate attribute values by name. The description lists are given local names (of the form 9–1). So a list named L1 containing the symbols S4 and S5, and described by associating value V1 to attribute A1 and V2 to A2, would be stored as a chain of cells (a sketch of this cell structure in a modern notation is given below). 0 indicates the end of a list; the cell names 100, 101, etc. are automatically generated internal symbols whose values are irrelevant. These cells can be scattered throughout memory; only L1, which uses a regional name that must be globally known, needs to reside in a specific place.

IPL is an assembly language for manipulating lists. It has a few cells which are used as special-purpose registers. H1, for example, is the program counter. The SYMB field of H1 is the name of the current instruction. However, H1 is interpreted as a list; the LINK of H1 is, in modern terms, a pointer to the beginning of the call stack. For example, subroutine calls push the SYMB of H1 onto this stack.

H2 is the free list. Procedures which need to allocate memory grab cells off of H2; procedures which are finished with memory put cells back on H2. On entry to a function, the list of parameters is given in H0; on exit, the results should be returned in H0. Many procedures return a boolean result indicating success or failure, which is put in H5. Ten cells, W0–W9, are reserved for public working storage. Procedures are "morally bound" (to quote the CACM article) to save and restore the values of these cells.

There are eight instructions, based on the values of P: subroutine call, push/pop S to H0, push/pop the symbol in S to the list attached to S, copy value to S, and conditional branch. In these instructions, S is the target. S is either the value of the SYMB field if Q=0, the symbol in the cell named by SYMB if Q=1, or the symbol in the cell named by the symbol in the cell named by SYMB if Q=2. In all cases but conditional branch, the LINK field of the cell tells which instruction to execute next.

IPL has a library of some 150 basic operations, including tests on symbols, operations for finding, setting, and erasing attributes, operations for inserting, copying, and erasing lists, and arithmetic and input/output operations.

IPL was first utilized to demonstrate that the theorems in Principia Mathematica which were proven laboriously by hand by Bertrand Russell and Alfred North Whitehead could in fact be proven by computation. According to Simon's autobiography Models of My Life, this application was originally developed by hand simulation, using his children as the computing elements, writing on and holding up note cards as the registers which contained the state variables of the program.
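To make the cell layout concrete, here is a minimal Haskell sketch of the L1 example above. The representation (a name-to-cell association list with SYMB and LINK fields, and a head cell whose SYMB names the description list) follows the description in the text; the layout details, field names, and function names are illustrative assumptions, not IPL syntax:

-- An IPL cell: a symbol field (SYMB) and the name of the next cell (LINK).
-- Sym "0" in LINK marks the end of a list.
data Symbol = Sym String deriving (Eq, Show)
data Cell = Cell { symb :: Symbol, link :: Symbol } deriving Show

type Memory = [(Symbol, Cell)]

-- L1 = (S4, S5), with description list 9-1 pairing A1 with V1 and A2 with V2.
-- 100, 101, ... stand for internal, automatically generated cell names.
memory :: Memory
memory =
  [ (Sym "L1",  Cell (Sym "9-1") (Sym "100"))  -- head cell names the description list
  , (Sym "100", Cell (Sym "S4")  (Sym "101"))
  , (Sym "101", Cell (Sym "S5")  (Sym "0"))
  , (Sym "9-1", Cell (Sym "0")   (Sym "200"))
  , (Sym "200", Cell (Sym "A1")  (Sym "201"))  -- attribute name
  , (Sym "201", Cell (Sym "V1")  (Sym "202"))  -- its value
  , (Sym "202", Cell (Sym "A2")  (Sym "203"))
  , (Sym "203", Cell (Sym "V2")  (Sym "0"))
  ]

-- Follow LINK fields from a list's head cell, collecting the member symbols.
listSymbols :: Memory -> Symbol -> [Symbol]
listSymbols mem name = maybe [] (go . link) (lookup name mem)
  where
    go (Sym "0") = []
    go n = case lookup n mem of
      Nothing -> []
      Just c  -> symb c : go (link c)

-- listSymbols memory (Sym "L1") yields [Sym "S4", Sym "S5"].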
IPL was used to implement several early artificial intelligence programs, also by the same authors: the Logic Theorist (1956), the General Problem Solver (1957), and their computer chess program NSS (1958).

Several versions of IPL were created: IPL-I (never implemented), IPL-II (1957, for the JOHNNIAC), IPL-III (existed briefly), IPL-IV, and IPL-V (1958, for the IBM 650, IBM 704, IBM 7090, Philco model 212, and many others; widely used). IPL-VI was a proposal for IPL hardware.[1][2][3] A co-processor, "IPL-VC", which could run IPL-V commands, was developed for the CDC 3600 at Argonne National Laboratory.[4][5] It was used to implement another checker-playing program.[6] This hardware implementation did not improve running times sufficiently to "compete favorably with a language more directly oriented to the structure of present-day machines".[7]

IPL was soon displaced by Lisp, which had much more powerful features, a simpler syntax, and the benefit of automatic garbage collection. IPL arguably introduced several programming language features; many of these were generalized, rationalized, and incorporated into Lisp[8] and from there into many other programming languages during the next several decades.
https://en.wikipedia.org/wiki/Information_Processing_Language
Introduction to Mathematical Philosophy is a book (1919 first edition) by the philosopher Bertrand Russell, in which the author seeks to create an accessible introduction to various topics within the foundations of mathematics. According to the preface, the book is intended for those with only limited knowledge of mathematics and no prior experience with the mathematical logic it deals with.[1] Accordingly, it is often used in introductory philosophy of mathematics courses at institutions of higher education.[2][3]

Introduction to Mathematical Philosophy was written while Russell was serving time in Brixton Prison due to his anti-war activities.[4]

The book deals with a wide variety of topics within the philosophy of mathematics and mathematical logic, including the logical basis and definition of natural numbers, real and complex numbers, limits and continuity, and classes.[5]
https://en.wikipedia.org/wiki/Introduction_to_Mathematical_Philosophy
In theoretical computer science, a bisimulation is a binary relation between state transition systems, associating systems that behave in the same way, in the sense that each system simulates the other and vice versa. Intuitively, two systems are bisimilar if, viewed as players of a game played according to some rules, they can match each other's moves. In this sense, neither system can be distinguished from the other by an observer.

Given a labelled state transition system (S, Λ, →), where S is a set of states, Λ is a set of labels and → is a set of labelled transitions (i.e., a subset of S × Λ × S, writing p →λ q when (p, λ, q) ∈ →), a bisimulation is a binary relation R ⊆ S × S such that both R and its converse Rᵀ are simulations. From this it follows that the symmetric closure of a bisimulation is a bisimulation, and that each symmetric simulation is a bisimulation. Thus some authors define bisimulation as a symmetric simulation.[1]

Equivalently, R is a bisimulation if and only if, for every pair of states (p, q) in R and all labels λ in Λ: whenever p →λ p′, there is q′ with q →λ q′ and (p′, q′) ∈ R; and whenever q →λ q′, there is p′ with p →λ p′ and (p′, q′) ∈ R.

Given two states p and q in S, p is bisimilar to q, written p ∼ q, if and only if there is a bisimulation R such that (p, q) ∈ R. This means that the bisimilarity relation ∼ is the union of all bisimulations: (p, q) ∈ ∼ precisely when (p, q) ∈ R for some bisimulation R.

The set of bisimulations is closed under union;[Note 1] therefore, the bisimilarity relation is itself a bisimulation. Since it is the union of all bisimulations, it is the unique largest bisimulation. Bisimulations are also closed under reflexive, symmetric, and transitive closure; therefore, the largest bisimulation must be reflexive, symmetric, and transitive. From this it follows that the largest bisimulation, bisimilarity, is an equivalence relation.[2]

Bisimulation can be defined in terms of composition of relations as follows. Given a labelled state transition system (S, Λ, →), a bisimulation relation is a binary relation R over S (i.e., R ⊆ S × S) such that, for all λ ∈ Λ,

R ; →λ ⊆ →λ ; R and R⁻¹ ; →λ ⊆ →λ ; R⁻¹

From the monotonicity and continuity of relation composition, it follows immediately that the set of bisimulations is closed under unions (joins in the poset of relations), and a simple algebraic calculation shows that the relation of bisimilarity, the join of all bisimulations, is an equivalence relation. This definition, and the associated treatment of bisimilarity, can be interpreted in any involutive quantale.

Bisimilarity can also be defined in order-theoretical fashion, in terms of fixpoint theory; more precisely, as the greatest fixed point of a certain function, defined below.
Given a labelled state transition system (S, Λ, →), define F : 𝒫(S × S) → 𝒫(S × S) to be a function from binary relations over S to binary relations over S, as follows. Let R be any binary relation over S. F(R) is defined to be the set of all pairs (p, q) in S × S such that

∀λ ∈ Λ. ∀p′ ∈ S. p →λ p′ ⇒ ∃q′ ∈ S. q →λ q′ ∧ (p′, q′) ∈ R

and

∀λ ∈ Λ. ∀q′ ∈ S. q →λ q′ ⇒ ∃p′ ∈ S. p →λ p′ ∧ (p′, q′) ∈ R

Bisimilarity is then defined to be the greatest fixed point of F.

Bisimulation can also be thought of in terms of a game between two players: attacker and defender. The attacker goes first and may choose any valid transition, λ, from (p, q); that is, (p, q) →λ (p′, q) or (p, q) →λ (p, q′). The defender must then attempt to match that transition, λ, from either (p′, q) or (p, q′), depending on the attacker's move; i.e., they must find a λ such that (p′, q) →λ (p′, q′) or (p, q′) →λ (p′, q′). Attacker and defender continue to take alternating turns until either the defender cannot match the attacker's move, in which case the attacker wins, or the game continues forever, in which case the defender wins. By the above definition, the system is a bisimulation if and only if there exists a winning strategy for the defender.

A bisimulation for state transition systems is a special case of coalgebraic bisimulation for the type of the covariant powerset functor. Note that every state transition system (S, Λ, →) can be mapped bijectively to a function ξ→ from S to the powerset of S indexed by Λ, written 𝒫(Λ × S), defined by p ↦ {(λ, q) ∈ Λ × S : p →λ q}.

Let πᵢ : S × S → S be the i-th projection, mapping (p, q) to p and to q respectively for i = 1, 2; and let 𝒫(Λ × π₁) be the forward image of π₁, defined by dropping the third component, P ↦ {(λ, p) ∈ Λ × S : ∃q. (λ, p, q) ∈ P}, where P is a subset of Λ × S × S. Similarly for 𝒫(Λ × π₂).
Using the above notations, a relation R ⊆ S × S is a bisimulation on a transition system (S, Λ, →) if and only if there exists a transition system γ : R → 𝒫(Λ × R) on the relation R such that the diagram commutes, i.e. for i = 1, 2 the equations

ξ→ ∘ πᵢ = 𝒫(Λ × πᵢ) ∘ γ

hold, where ξ→ is the functional representation of (S, Λ, →).

In special contexts the notion of bisimulation is sometimes refined by adding additional requirements or constraints. An example is that of stutter bisimulation, in which one transition of one system may be matched with multiple transitions of the other, provided that the intermediate states are equivalent to the starting state ("stutters").[3]

A different variant applies if the state transition system includes a notion of silent (or internal) action, often denoted τ, i.e. actions that are not visible to external observers. Then bisimulation can be relaxed to weak bisimulation: if two states p and q are bisimilar and some number of internal actions lead from p to some state p′, then there must exist a state q′ such that some number (possibly zero) of internal actions lead from q to q′. A relation ℛ on processes is a weak bisimulation if the following holds (with 𝒮 ∈ {ℛ, ℛ⁻¹}, and a and τ being an observable and a mute transition respectively):

∀p, q. (p, q) ∈ 𝒮 ⇒ (p →τ p′ ⇒ ∃q′. q →τ* q′ ∧ (p′, q′) ∈ 𝒮)

∀p, q. (p, q) ∈ 𝒮 ⇒ (p →a p′ ⇒ ∃q′. q →τ*aτ* q′ ∧ (p′, q′) ∈ 𝒮)

This is closely related to the notion of bisimulation "up to" a relation.[4]

Typically, if the state transition system gives the operational semantics of a programming language, then the precise definition of bisimulation will be specific to the restrictions of the programming language. Therefore, in general, there may be more than one kind of bisimulation (respectively bisimilarity) relationship, depending on the context.

Since Kripke models are a special case of (labelled) state transition systems, bisimulation is also a topic in modal logic. In fact, modal logic is the fragment of first-order logic invariant under bisimulation (van Benthem's theorem).

Checking whether two finite transition systems are bisimilar can be done in polynomial time.[5] The fastest algorithms run in quasilinear time, using partition refinement through a reduction to the coarsest partition problem.
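For finite systems, the greatest-fixed-point characterization above suggests a naive checking procedure: start from the full relation S × S and repeatedly discard pairs whose moves cannot be matched, until the relation is stable. The following Haskell sketch implements that idea; the names and the list-based representation are choices made here, and real tools use the partition-refinement algorithms mentioned above instead:

type State = Int
type Label = Char
type Trans = [(State, Label, State)]
type Rel   = [(State, State)]

-- One refinement step: keep only the pairs (p, q) in r satisfying the two
-- clauses of F, i.e. each side's moves can be matched inside r itself.
step :: Trans -> Rel -> Rel
step ts r = [ pq | pq <- r, ok pq ]
  where
    succs s = [ (l, s') | (s0, l, s') <- ts, s0 == s ]
    ok (p, q) =
         and [ or [ (p', q') `elem` r | (l', q') <- succs q, l' == l ]
             | (l, p') <- succs p ]
      && and [ or [ (p', q') `elem` r | (l', p') <- succs p, l' == l ]
             | (l, q') <- succs q ]

-- Greatest fixed point: iterate from the full relation until stable.
-- This terminates on finite systems because each step shrinks the relation.
bisimilarity :: [State] -> Trans -> Rel
bisimilarity states ts = go [ (p, q) | p <- states, q <- states ]
  where
    go r = let r' = step ts r in if r' == r then r else go r'

-- Example: two one-state 'a'-loops. bisimilarity [1,2] [(1,'a',1),(2,'a',2)]
-- returns all four pairs, so states 1 and 2 are bisimilar.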
https://en.wikipedia.org/wiki/Bisimulation
In computer science, coinduction is a technique for defining and proving properties of systems of concurrent interacting objects. Coinduction is the mathematical dual to structural induction. Coinductively defined data types are known as codata and are typically infinite data structures, such as streams.

As a definition or specification, coinduction describes how an object may be "observed", "broken down" or "destructed" into simpler objects. As a proof technique, it may be used to show that an equation is satisfied by all possible implementations of such a specification.

To generate and manipulate codata, one typically uses corecursive functions, in conjunction with lazy evaluation. Informally, rather than defining a function by pattern-matching on each of the inductive constructors, one defines each of the "destructors" or "observers" over the function result.

In programming, co-logic programming (co-LP for brevity) "is a natural generalization of logic programming and coinductive logic programming, which in turn generalizes other extensions of logic programming, such as infinite trees, lazy predicates, and concurrent communicating predicates. Co-LP has applications to rational trees, verifying infinitary properties, lazy evaluation, concurrent logic programming, model checking, bisimilarity proofs, etc."[1] Experimental implementations of co-LP are available from the University of Texas at Dallas[2] and in the language Logtalk (for examples see[3]) and SWI-Prolog.

In his book Types and Programming Languages,[4] Benjamin C. Pierce gives a concise statement of both the principle of induction and the principle of coinduction. While this article is not primarily concerned with induction, it is useful to consider their somewhat generalized forms at once. In order to state the principles, a few preliminaries are required.

Let U be a set and F be a monotone function 2^U → 2^U, that is:

X ⊆ Y ⇒ F(X) ⊆ F(Y)

Unless otherwise stated, F will be assumed to be monotone. A set X is F-closed if F(X) ⊆ X, and F-consistent if X ⊆ F(X). These terms can be intuitively understood in the following way. Suppose that X is a set of assertions, and F(X) is the operation that yields the consequences of X. Then X is F-closed when one cannot conclude any more than has already been asserted, while X is F-consistent when all of the assertions are supported by other assertions (i.e. there are no "non-F-logical assumptions").

The Knaster–Tarski theorem tells us that the least fixed point of F (denoted μF) is given by the intersection of all F-closed sets, while the greatest fixed point (denoted νF) is given by the union of all F-consistent sets. We can now state the principles of induction and coinduction: the principle of induction states that if X is F-closed, then μF ⊆ X; dually, the principle of coinduction states that if X is F-consistent, then X ⊆ νF.

The principles, as stated, are somewhat opaque, but can be usefully thought of in the following way. Suppose you wish to prove a property of μF. By the principle of induction, it suffices to exhibit an F-closed set X for which the property holds. Dually, suppose you wish to show that x ∈ νF. Then it suffices to exhibit an F-consistent set that x is known to be a member of.
Consider the following grammar of datatypes:

T = ⊥ | ⊤ | T × T

That is, the set of types includes the "bottom type" ⊥, the "top type" ⊤, and (non-homogeneous) lists. These types can be identified with strings over the alphabet Σ = {⊥, ⊤, ×}. Let Σ^≤ω denote all (possibly infinite) strings over Σ. Consider the function F : 2^(Σ^≤ω) → 2^(Σ^≤ω):

F(X) = {⊥, ⊤} ∪ {x × y : x, y ∈ X}

In this context, x × y means "the concatenation of string x, the symbol ×, and string y." We should now define our set of datatypes as a fixpoint of F, but it matters whether we take the least or greatest fixpoint.

Suppose we take μF as our set of datatypes. Using the principle of induction, we can prove the following claim: every datatype in μF is a finite string. To arrive at this conclusion, consider the set of all finite strings over Σ. Clearly F cannot produce an infinite string, so this set is F-closed, and the conclusion follows.

Now suppose that we take νF as our set of datatypes. We would like to use the principle of coinduction to prove the following claim: ⊥ × ⊥ × ⋯ ∈ νF. Here ⊥ × ⊥ × ⋯ denotes the infinite list consisting of all ⊥. To use the principle of coinduction, consider the set:

{⊥ × ⊥ × ⋯}

This set turns out to be F-consistent, and therefore ⊥ × ⊥ × ⋯ ∈ νF. This depends on the suspicious statement that

⊥ × ⊥ × ⋯ = (⊥ × ⊥ × ⋯) × (⊥ × ⊥ × ⋯)

The formal justification of this is technical and depends on interpreting strings as sequences, i.e. functions from ℕ → Σ. Intuitively, the argument is similar to the argument that 0.0̄1 = 0 (see Repeating decimal).

Consider the following definition of a stream: a stream over a type A is given by a head, which is an element of A, together with a tail, which is itself a stream over A (a Haskell sketch appears at the end of this section).[5] This would seem to be a definition that is not well-founded, but it is nonetheless useful in programming and can be reasoned about. In any case, a stream is an infinite list of elements from which you may observe the first element, or in front of which you may place an element to obtain another stream.

Consider the endofunctor F in the category of sets:

F(x) = A × x
F(f) = ⟨id_A, f⟩

The final F-coalgebra νF has the following morphism associated with it:

out : νF → F(νF) = A × νF

This induces another coalgebra, F(νF), with associated morphism F(out).
Because νF is final, there is a unique coalgebra morphism g : F(νF) → νF (written with an overline over F(out) in some presentations) such that

out ∘ g = F(g) ∘ F(out) = F(g ∘ out)

The composition g ∘ out induces another F-coalgebra homomorphism νF → νF. Since νF is final, this homomorphism is unique and therefore equal to id_νF. Altogether we have:

g ∘ out = id_νF
out ∘ g = F(g) ∘ F(out) = F(g ∘ out) = F(id_νF) = id_F(νF)

This witnesses the isomorphism νF ≃ F(νF), which in categorical terms indicates that νF is a fixed point of F and justifies the notation.[6]

We will show that Stream A is the final coalgebra of the functor F(x) = A × x. Consider the evident implementations of out and its inverse: out sends a stream to the pair of its head and its tail, and the inverse reassembles a stream from such a pair (see the Haskell sketch below). These are easily seen to be mutually inverse, witnessing the isomorphism. See the reference for more details.

We will demonstrate how the principle of induction subsumes mathematical induction. Let P be some property of natural numbers. We will take the following definition of mathematical induction:

0 ∈ P ∧ (n ∈ P ⇒ n + 1 ∈ P) ⇒ P = ℕ

Now consider the function F : 2^ℕ → 2^ℕ:

F(X) = {0} ∪ {x + 1 : x ∈ X}

It should not be difficult to see that μF = ℕ. Therefore, by the principle of induction, if we wish to prove some property P of ℕ, it suffices to show that P is F-closed. In detail, we require:

F(P) ⊆ P

That is,

{0} ∪ {x + 1 : x ∈ P} ⊆ P

This is precisely mathematical induction as stated.
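The stream definition and the out/inverse implementations referred to above can be sketched in Haskell as follows; this is an illustrative reconstruction under the assumptions just stated, not the article's original code:

-- Codata: a stream is an element together with another stream (no nil case).
data Stream a = S a (Stream a)

-- The destructor out : Stream a -> (a, Stream a) ...
out :: Stream a -> (a, Stream a)
out (S a rest) = (a, rest)

-- ... and its inverse, rebuilding a stream from a head and a tail.
out' :: (a, Stream a) -> Stream a
out' (a, rest) = S a rest

-- Corecursion witnessing finality: for any coalgebra step : s -> (a, s),
-- there is a (unique) map from s into Stream a.
unfoldStream :: (s -> (a, s)) -> s -> Stream a
unfoldStream step seed =
  let (a, seed') = step seed
  in S a (unfoldStream step seed')

-- Example: the stream of natural numbers, usable thanks to lazy evaluation.
nats :: Stream Integer
nats = unfoldStream (\n -> (n, n + 1)) 0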
https://en.wikipedia.org/wiki/Coinduction
In computer programming, an anamorphism is a function that generates a sequence by repeated application of the function to its previous result. You begin with some value A and apply a function f to it to get B; then you apply f to B to get C, and so on, until some terminating condition is reached. The anamorphism is the function that generates the list of A, B, C, etc. You can think of the anamorphism as unfolding the initial value into a sequence.

The above layman's description can be stated more formally in category theory: the anamorphism of a coinductive type denotes the assignment of a coalgebra to its unique morphism to the final coalgebra of an endofunctor. These objects are used in functional programming as unfolds. The categorical dual (a.k.a. opposite) of the anamorphism is the catamorphism.

In functional programming, an anamorphism is a generalization of the concept of unfolds on coinductive lists. Formally, anamorphisms are generic functions that can corecursively construct a result of a certain type and which are parameterized by functions that determine the next single step of the construction.

The data type in question is defined as the greatest fixed point ν X . F X of a functor F. By the universal property of final coalgebras, there is a unique coalgebra morphism A → ν X . F X for any other F-coalgebra a : A → F A. Thus, one can define functions from a type A into a coinductive datatype by specifying a coalgebra structure a on A.

As an example, the type of potentially infinite lists (with elements of a fixed type value) is given as the fixed point [value] = ν X . value × X + 1, i.e. a list consists either of a value and a further list, or it is empty. [value] is the fixed point of the base functor F value, which maps x to value × x + 1; one can easily check that the type [value] is indeed isomorphic to F value [value]. (Note also that in Haskell the least and greatest fixed points of functors coincide; therefore inductive lists are the same as coinductive, potentially infinite lists.)

The anamorphism for lists (then usually known as unfold) builds a (potentially infinite) list from a state value. Typically, the unfold takes a state value x and a function f that yields either a pair of a value and a new state, or a singleton to mark the end of the list. The anamorphism then begins with a first seed, computes whether the list continues or ends, and, in the case of a nonempty list, prepends the computed value to the recursive call to the anamorphism. A Haskell definition of an unfold for lists, called ana, is given in the sketch following this section, together with an example: a countdown function that decrements an integer and outputs it at the same time, until it goes negative, at which point it marks the end of the list. Correspondingly, applying ana to this function and the seed 3 computes the list [2,1,0].

An anamorphism can be defined for any recursive type, according to a generic pattern, generalizing the version of ana for lists; there is, for example, an analogous unfold for the tree data structure. To better see the relationship between a recursive type and its anamorphism, note that the argument of the type's constructor has the same shape as the return type of the first argument of ana, with the recursive mentions of the type replaced with the state type b.
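The following Haskell sketch reconstructs the definitions the text refers to. The original code was not preserved in this extract, so these are standard renderings written to match the behaviour described above (e.g. ana f 3 = [2,1,0]); the names zip' and iterate' avoid shadowing the Prelude functions:

-- The base functor of lists: [value] is (isomorphic to) its fixed point.
data F value x = Cons value x | Nil

-- The anamorphism (unfold) for lists: the step function either stops
-- (Nothing) or yields the next element together with the new state.
ana :: (b -> Maybe (a, b)) -> b -> [a]
ana step state = case step state of
  Nothing              -> []
  Just (value, state') -> value : ana step state'

-- Countdown: decrement and emit, stopping when the value would go negative.
f :: Int -> Maybe (Int, Int)
f x = let x' = x - 1
      in if x' < 0 then Nothing else Just (x', x')
-- ana f 3 == [2,1,0]

-- zip and iterate, expressed as anamorphisms:
zip' :: [a] -> [b] -> [(a, b)]
zip' = curry (ana step)
  where step (a : as, b : bs) = Just ((a, b), (as, bs))
        step _                = Nothing

iterate' :: (a -> a) -> a -> [a]
iterate' g = ana (\x -> Just (x, g x))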
One of the first publications to introduce the notion of an anamorphism in the context of programming was the paper Functional Programming with Bananas, Lenses, Envelopes and Barbed Wire,[1] by Erik Meijer et al., written in the context of the Squiggol programming language.

Functions like zip and iterate are examples of anamorphisms. zip takes a pair of lists, say ['a','b','c'] and [1,2,3], and returns a list of pairs, [('a',1),('b',2),('c',3)]. iterate takes a thing, x, and a function, f, from such things to such things, and returns the infinite list that comes from repeated application of f, i.e. the list [x, f x, f (f x), f (f (f x)), ...]. Both can be implemented in terms of the generic unfold ana by a simple recursive routine, as in the sketch above. In a language like Haskell, even the abstract functions fold, unfold and ana are merely defined terms, as we have seen from the definitions given above.

In category theory, anamorphisms are the categorical dual of catamorphisms (and catamorphisms are the categorical dual of anamorphisms). That means the following. Suppose (A, fin) is a final F-coalgebra for some endofunctor F of some category into itself. Thus, fin is a morphism from A to FA, and since it is assumed to be final we know that whenever (X, f) is another F-coalgebra (a morphism f from X to FX), there will be a unique homomorphism h from (X, f) to (A, fin), that is, a morphism h from X to A such that fin ∘ h = Fh ∘ f. Then for each such f we denote by ana f that uniquely specified morphism h. In other words, given some fixed F, A, and fin as above, we have the defining relationship: h = ana f if and only if fin ∘ h = Fh ∘ f.

A notation for ana f found in the literature is [(f)]. The brackets used are known as lens brackets, after which anamorphisms are sometimes referred to as lenses.
https://en.wikipedia.org/wiki/Anamorphism
Decoding Chomsky: Science and Revolutionary Politics is a 2016 book by the anthropologist Chris Knight on Noam Chomsky's approach to politics and science. Knight admires Chomsky's politics, but argues that his linguistic theories were influenced in damaging ways by his immersion since the early 1950s in an intellectual culture heavily dominated by US military priorities, an immersion deepened when Chomsky secured employment in a Pentagon-funded electronics laboratory in the Massachusetts Institute of Technology.[1]

In October 2016, Chomsky dismissed the book, telling The New York Times that it was based on a false assumption since, in fact, no military "work was being done on campus" during his time at MIT.[2] In a subsequent public comment, Chomsky on similar grounds denounced Knight's entire narrative as a "wreck ... complete nonsense throughout".[3] In contrast, a reviewer for the US Chronicle of Higher Education described Decoding Chomsky as perhaps "the most in-depth meditation on 'the Chomsky problem' ever published".[4] In the UK, the New Scientist described Knight's account as "trenchant and compelling".[5] The controversy continued in the London Review of Books, where the sociologist of science Hilary Rose cited Decoding Chomsky approvingly, provoking Chomsky to denounce what he called "Knight's astonishing performance" in two subsequent letters.[6] The debate around Decoding Chomsky then continued in Open Democracy, with contributions from Frederick Newmeyer, Randy Allen Harris and others.[7]

Since the book was published, Knight has published what he claims is evidence that Chomsky worked on a military-sponsored "command and control" project for the MITRE Corporation in the early 1960s.[8]

Decoding Chomsky begins with Chomsky's claim that his political and scientific outputs have little connection with each other. For example, asked in 2006 whether his science and his politics are in any way related to one another, Chomsky replied that the connection is "almost non-existent ... There is a kind of loose, abstract connection in the background. But if you look for practical connections, they're non-existent."[9]

Knight accepts that scientific research and political involvement are distinct kinds of activity serving very different purposes. But he claims that, in Chomsky's case, the conflicts intrinsic to his institutional situation forced him to drive an unusually deep and damaging wedge between his politics and his science. Knight points out that Chomsky began his career working in an electronics laboratory whose primary technological mission he detested on moral and political grounds. Funded by the Pentagon, the Research Laboratory of Electronics at MIT was involved in contributing to the basic research required for hi-tech weapons systems.[10] Suggesting that he was well aware of MIT's role at the time, Chomsky himself recalls:

There was extensive [military] research on the MIT campus. ... In fact, a good deal of the [nuclear] missile guidance technology was developed right on the MIT campus and in laboratories run by the university.[11]

It was because of his anti-militarist conscience, Knight argues, that such research priorities were experienced by him as deeply troubling. By way of evidence, Knight cites George Steiner in a 1967 The New York Review of Books article: "Will Noam Chomsky announce that he will stop teaching at MIT or anywhere in this country so long as torture and napalm go on? ...
Will he even resign from a university very largely implicated in the kind of 'strategic studies' he so rightly scorns?" Chomsky said, "I have given a good bit of thought to the specific suggestions that you put forth... leaving the country or resigning from MIT, which is, more than any other university, associated with activities of the department of 'defense.' ... As to MIT, I think that its involvement in the war effort is tragic and indefensible."[12] Chomsky's situation at MIT, according to Knight, is summed up by Chomsky when he describes some of his colleagues this way: It is appalling that a person can come through an MIT education and say the kinds of things that were quoted in theNew York Timesarticle on Sunday, November 9 [1969]... One student said, right along straight Nazi scientist lines:What I'm designing may one day be used to kill millions of people. I don't care. That's not my responsibility. I'm given an interesting technological problem and I get enjoyment out of solving it.You know perfectly well that we can name twenty faculty members who've said the same thing. ... This is an attitude that is very widely held and very widely expressed.[13] In order to maintain his moral and political integrity, Knight argues, Chomsky resolved to limit his cooperation to pure linguistic theory of such an abstract kind that it could not conceivably have any military use. With this aim in mind, Chomsky's already highly abstract theoretical modelling became so unusually abstract that not even language's practical function in social communication could be acknowledged or explored. One damaging consequence, according to Knight, was that scientific investigation of the ways in which real human beings use language became divorced from what quickly became the prevailing MIT school of formal linguistic theory. Knight argues that the conflicting pressures Chomsky experienced had the effect of splitting his intellectual output in two, prompting him to ensure that any work he conducted for the military was purely theoretical—of no practical use to anyone—while his activism, being directed relentlesslyagainstthe military, was preserved free of any obvious connection with his science. To an unprecedented extent, according to Knight, mind in this way became divorced from body, thought from action, and knowledge from its practical applications, these disconnects characterizing a philosophical paradigm which came to dominate much of intellectual life for half a century across the Western world. InCurrent Affairs,Norbert HornsteinandNathan J. Robinsondismiss the book as exhibiting a complete misunderstanding of Chomsky's linguistic theories and beliefs. They question the motives of Yale University Press, asking why Yale considered it appropriate to publish Knight's critique, which they say attacks Chomsky through political conjecture rather than addressing his linguistic ideas. Comparing Knight's Marxist criticism to a conservative criticism that was released in the same year byTom Wolfe, they speculate that both were published with similar motivations – that Chomsky's criticisms were a threat to the power behind the publishers.[14] InMoment,Robert Barskywrote that, since Knight was never formally trained in Chomsky's conception of theoretical linguistics, he has no right to comment on whether it stands up as science.Decoding Chomsky,writes Barsky, offers no original insights, consisting only of "a weak rehash of critiques from naysayers to Chomsky's approach". 
While Barsky concedes that Chomsky did work in a military laboratory, he writes that this is not significant, since virtually all US scientists receive Pentagon funding one way or another.[15]

Peter Stone wrote that Knight clearly hates Chomsky and "for that reason he wrote Decoding Chomsky – a nasty, mean-spirited, vitriolic, ideologically-driven hatchet job". Stone states that, although Knight aligns himself with the political Left, "the level of venom on display here exceeds that of all but the most unhinged of Chomsky's detractors on the Right." He states that "Knight spares no opportunity to paint Chomsky's every thought and deed in the blackest possible terms" and that: "Decoding Chomsky is not a critique of a body of work in linguistics; it is an attempt to demonise a man for his perceived political deviations, even though that man happens to be on the same side of the political spectrum as the man who is demonising him. Reading Decoding Chomsky taught me something about the mindset of the prosecutors in the Moscow Show Trials."[16]

The linguist Frederick Newmeyer concedes that the Pentagon expected to be able to use Chomsky's findings for military purposes, but says that the idea that Chomsky promoted very abstract approaches to linguistics in order to prevent such military use is "implausible, to put it mildly", and states that Knight's portrayal of Chomsky's attitude to the study of language as a system of social communication is "a gross oversimplification".[17]

Decoding Chomsky has been positively received by a wide range of scientists, intellectual historians and commentators, including Michael Tomasello, Daniel Everett, David Hawkes, Luc Steels, Sarah Blaffer Hrdy and Frederick Newmeyer.[18] Reviewing the book in The Times Literary Supplement, Houman Barekat commended Knight for an "engaging and thought-provoking intellectual history".[19] In The American Ethnologist, Sean O'Neill said of the book: "History comes alive via compelling narrative. ... Knight is indeed an impressive historian when it comes to recounting the gripping personal histories behind Chomsky's groundbreaking contributions to science and philosophy."[20]

In his book The Anarchist Imagination, the political scientist Carl Levy commends Knight for documenting how Chomsky's notion of Universal Grammar echoed ideas put forward earlier by Claude Lévi-Strauss, the linguist Roman Jakobson and Jakobson's muse, the anarchist poet and Russian revolutionary visionary Velimir Khlebnikov.[21]

The linguist Daniel Everett wrote that "Knight's exploration is unparalleled.
No other study has provided such a full understanding of Chomsky's background, intellectual foibles, objectives, inconsistencies, and genius."[22] The linguist Gary Lupyan wrote that Knight "makes a compelling case for the scientific vacuousness of [Chomsky's linguistic] ideas."[23] According to Bruce Nevin in The Brooklyn Rail, "Knight shows how Chomsky has acquiesced in – more than that, has participated in and abetted – a radical post-war transformation of the relation of science to society, legitimating one of the significant political achievements of the right, the pretense that science is apolitical."[24]

The philosopher Thomas Klikauer wrote that Decoding Chomsky is "an insightful book and, one might say, a-pleasure-to-read kind of book."[25] Another philosopher, Rupert Read, described the book as "a brilliant, if slightly harsh, disquisition".[26] In the Chronicle of Higher Education, Tom Bartlett described the book as a "compelling read".[27] In Anarchist Studies, Peter Seyferth said the book "focuses on all the major phases of Chomsky's linguistic theories, their institutional preconditions and their ideological and political ramifications. And it is absolutely devastating."[28]

"The overriding responsibility of the scientist who proposes a hypothesis or theory is to subject it to every imaginable experimental test that might disprove it, and if an idea cannot be tested, it has no more worth than the claims in an advertisement for toothpaste. Knight reviews how Chomsky's proposals are notoriously inaccessible to empirical test and have become more so with each successive revision."

David Golumbia has described himself as "a huge admirer of Decoding Chomsky", while Les Levidow described the book as "impressive". The linguist Randy Allen Harris praises the book as one which everyone interested in Chomsky should read, although he qualifies this by commenting, "I don't think very much of its deliberate unusability theory."[29] In defense of the author, however, Harris expresses his bemusement at Chomsky's "breathtaking" misreading of Knight's theory on this score. "Knight's whole argument", writes Harris, "depends on the premise that Chomsky 'was at all times refusing to collude' with the military", making it astonishing that "Chomsky seems to think that Knight slanderously accuses him of complicity with the U.S. military - that his active resistance to the war in Vietnam refutes Knight's position rather than, as it actually does, supports Knight's position."[30] In line with Knight and also with his fellow expert in Chomskyan linguistics, Frederick Newmeyer, Harris acknowledges that "… the military investment in Chomskyan theory, whether at MIT or elsewhere, was expected to produce results for such military applications as encryption, machine translation, information retrieval, and command-and-control systems for jets and weapon delivery."[31]

Sarah Blaffer Hrdy is today widely regarded as one of the greatest Darwinian thinkers since Darwin himself.[32] In her most recent book, Father Time: A Natural History of Men and Babies, she praises Knight for dismissing as "a kind of madness" Chomsky's idea that language somehow emerged in our species suddenly and independently of previous Darwinian evolution.
In defence of her conviction that selection pressures favoring 'other-regarding sensibilities in infants' must have preceded language's emergence, she recommends her own previous publications, a 2011 article by primatologist Klaus Zuberbühler 'and especially Chapter 22,Before Language, in Chris Knight's 2016 bookDecoding Chomsky'.[33] In his book, Knight writes that the US military initially funded Chomsky's linguistics because they were interested inmachine translation. Later their focus shifted and Knight cites Air Force Colonel Edmund Gaines’ statement that: "We sponsored linguistic research in order to learn how to build command and control systems that could understand English queries directly."[34] From 1963, Chomsky worked as a consultant to theMITRE Corporation, a military research institute set up by the US Air Force. According to one of Chomsky's former students,Barbara Partee, MITRE's justification for sponsoring Chomsky's approach to linguistics was "that in the event of a nuclear war, the generals would be underground with some computers trying to manage things, and that it would probably be easier to teach computers to understand English than to teach the generals to program."[35] Chomsky made his most detailed response to Knight in the 2019 book,The Responsibility of Intellectuals: Reflections by Noam Chomsky and others after 50 years. In this response, Chomsky dismissed Knight’s claims as a "vulgar exercise of defamation" and a "web of deceit and misinformation".[36] Knight, in turn, responded to Chomsky citing more documents, including one that states that MITRE's work to support "US Air Force-supplied command and control systems ... involves the application of a logico-mathematical formulation of linguistic structure developed by Noam Chomsky." Knight cites other documents that he claims show that Chomsky's student, LieutenantSamuel Jay Keyser, did apply Chomskyan theory to the control of military aircraft, including theB-58nuclear-armed bomber.[37]
https://en.wikipedia.org/wiki/Decoding_Chomsky
Generative grammaris a research tradition inlinguisticsthat aims to explain thecognitivebasis of language by formulating and testing explicit models of humans' subconscious grammatical knowledge. Generative linguists, orgenerativists(/ˈdʒɛnərətɪvɪsts/),[1]tend to share certain working assumptions such as thecompetence–performancedistinction and the notion that somedomain-specificaspects of grammar are partly innate in humans. These assumptions are rejected in non-generative approaches such asusage-based models of language. Generative linguistics includes work in core areas such assyntax,semantics,phonology,psycholinguistics, andlanguage acquisition, with additional extensions to topics includingbiolinguisticsandmusic cognition. Generative grammar began in the late 1950s with the work ofNoam Chomsky, having roots in earlier approaches such asstructural linguistics. The earliest version of Chomsky's model was calledTransformational grammar, with subsequent iterations known asGovernment and binding theoryand theMinimalist program. Other present-day generative models includeOptimality theory,Categorial grammar, andTree-adjoining grammar. Generative grammar is an umbrella term for a variety of approaches to linguistics. What unites these approaches is the goal of uncovering the cognitive basis of language by formulating and testing explicit models of humans' subconscious grammatical knowledge.[2][3] Generative grammar studies language as part ofcognitive science. Thus, research in the generative tradition involves formulating and testing hypotheses about the mental processes that allow humans to use language.[4][5][6] Like other approaches in linguistics, generative grammar engages inlinguistic descriptionrather thanlinguistic prescription.[7][8] Generative grammar proposes models of language consisting of explicit rule systems, which make testablefalsifiablepredictions. This is different fromtraditional grammarwhere grammatical patterns are often described more loosely.[9][10]These models are intended to be parsimonious, capturing generalizations in the data with as few rules as possible. For example, because Englishimperativetag questionsobey the same restrictions that second personfuturedeclarativetags do,Paul Postalproposed that the two constructions are derived from the same underlying structure. By adopting this hypothesis, he was able to capture the restrictions on tags with a single rule. This kind of reasoning is commonplace in generative research.[9] Particular theories within generative grammar have been expressed using a variety offormal systems, many of which are modifications or extensions ofcontext free grammars.[9] Generative grammar generally distinguisheslinguistic competenceandlinguistic performance.[11]Competence is the collection of subconscious rules that one knows when one knows a language; performance is the system which puts these rules to use.[11][12]This distinction is related to the broader notion ofMarr's levelsused in other cognitive sciences, with competence corresponding to Marr's computational level.[13] For example, generative theories generally provide competence-based explanations for whyEnglishspeakers would judge the sentence in (1) asodd. 
In these explanations, the sentence would beungrammaticalbecause the rules of English only generate sentences wheredemonstrativesagreewith thegrammatical numberof their associatednoun.[14] By contrast, generative theories generally provide performance-based explanations for the oddness ofcenter embeddingsentences like one in (2). According to such explanations, the grammar of English could in principle generate such sentences, but doing so in practice is so taxing onworking memorythat the sentence ends up beingunparsable.[14][15] In general, performance-based explanations deliver a simpler theory of grammar at the cost of additional assumptions about memory and parsing. As a result, the choice between a competence-based explanation and a performance-based explanation for a given phenomenon is not always obvious and can require investigating whether the additional assumptions are supported by independent evidence.[15][16]For example, while many generative models of syntax explainisland effectsby positing constraints within the grammar, it has also been argued that some or all of these constraints are in fact the result of limitations on performance.[17][18] Non-generative approaches often do not posit any distinction between competence and performance. For instance,usage-based models of languageassume that grammatical patterns arise as the result of usage.[19] A major goal of generative research is to figure out which aspects of linguistic competence are innate and which are not. Within generative grammar, it is generally accepted that at least somedomain-specificaspects are innate, and the term "universal grammar" is often used as a placeholder for whichever those turn out to be.[20][21] The idea that at least some aspects are innate is motivated bypoverty of the stimulusarguments.[22][23]For example, one famous poverty of the stimulus argument concerns the acquisition ofyes-no questionsin English. This argument starts from the observation that children only make mistakes compatible with rules targetinghierarchical structureeven though the examples which they encounter could have been generated by a simpler rule that targets linear order. In other words, children seem to ignore the possibility that the question rule is as simple as "switch the order of the first two words" and immediately jump to alternatives that rearrangeconstituentsintree structures. This is taken as evidence that children are born knowing that grammatical rules involve hierarchical structure, even though they have to figure out what those rules are.[22][23][24]The empirical basis of poverty of the stimulus arguments has been challenged byGeoffrey Pullumand others, leading to back-and-forth debate in thelanguage acquisitionliterature.[25][26]Recent work has also suggested that somerecurrent neural networkarchitectures are able to learn hierarchical structure without an explicit constraint.[27] Within generative grammar, there are a variety of theories about what universal grammar consists of. One notable hypothesis proposed byHagit Borerholds that the fundamental syntactic operations are universal and that all variation arises from differentfeature-specifications in thelexicon.[21][28]On the other hand, a strong hypothesis adopted in some variants ofOptimality Theoryholds that humans are born with a universal set of constraints, and that all variation arises from differences in how these constraints are ranked.[21][29]In a 2002 paper,Noam Chomsky,Marc HauserandW. 
Tecumseh Fitchproposed that universal grammar consists solely of the capacity for hierarchical phrase structure.[30] In day-to-day research, the notion that universal grammar exists motivates analyses in terms of general principles. As much as possible, facts about particular languages are derived from these general principles rather than from language-specific stipulations.[20] Research in generative grammar spans a number of subfields. These subfields are also studied in non-generative approaches. Syntax studies the rule systems which combine smaller units such asmorphemesinto larger units such asphrasesandsentences.[31]Within generative syntax, prominent approaches includeMinimalism,Government and binding theory,Lexical-functional grammar(LFG), andHead-driven phrase structure grammar(HPSG).[3] Phonology studies the rule systems which organize linguistic sounds. For example, research in phonology includes work onphonotacticrules which govern whichphonemescan be combined, as well as those that determine the placement ofstress,tone, and othersuprasegmentalelements.[32]Within generative grammar, a prominent approach to phonology isOptimality Theory.[29] Semantics studies the rule systems that determine expressions' meanings. Within generative grammar, semantics is a species offormal semantics, providingcompositionalmodels of how thedenotationsof sentences are computed on the basis of the meanings of the individualmorphemesand their syntactic structure.[33] Generative grammar has been applied tomusic theoryandanalysissince the 1980s.[34]One notable approach isFred LerdahlandRay Jackendoff'sGenerative theory of tonal music, which formalized and extended ideas fromSchenkerian analysis.[35] Recent work in generative-inspiredbiolinguisticshas proposed that universal grammar consists solely of syntacticrecursion, and that it arose recently in humans as the result of a random genetic mutation.[36]Generative-inspired biolinguistics has not uncovered any particular genes responsible for language. While some prospects were raised at the discovery of theFOXP2gene,[37][38]there is not enough support for the idea that it is 'the grammar gene' or that it had much to do with the relatively recent emergence of syntactical speech.[39] Analytical models based on semantics anddiscoursepragmaticswere rejected by theBloomfieldianschool of linguistics[40]whose derivatives place theobjectinto theverb phrase, following fromWilhelm Wundt'sVölkerpsychologie. Formalisms based on this convention were constructed in the 1950s byZellig HarrisandCharles Hockett. These gave rise to modern generative grammar.[41] As a distinct research tradition, generative grammar began in the late 1950s with the work ofNoam Chomsky.[42]However, its roots include earlierstructuralistapproaches such asglossematicswhich themselves had older roots, for instance in the work of the ancient Indian grammarianPāṇini.[43][44][45]Military funding to generative research was an important factor in its early spread in the 1960s.[46] The initial version of generative syntax was calledtransformational grammar. In transformational grammar, rules called transformations mapped a level of representation calleddeep structuresto another level of representation called surface structure. The semantic interpretation of a sentence was represented by its deep structure, while the surface structure provided its pronunciation. 
For example, an active sentence such as "The doctor examined the patient" and its passive counterpart "The patient was examined by the doctor" had the same deep structure. The difference in surface structures arises from the application of the passivization transformation, which was assumed not to affect meaning. This assumption was challenged in the 1960s by the discovery of examples such as "Everyone in the room knows two languages" and "Two languages are known by everyone in the room"[47] (the two sentences differ in their preferred quantifier-scope readings, so a transformation that preserves meaning cannot relate them).

After the Linguistics wars of the late 1960s and early 1970s, Chomsky developed a revised model of syntax called Government and binding theory, which eventually grew into Minimalism. In the aftermath of those disputes, a variety of other generative models of syntax were proposed, including relational grammar, Lexical-functional grammar (LFG), and Head-driven phrase structure grammar (HPSG).[48]

Generative phonology originally focused on rewrite rules, in a system commonly known as SPE Phonology after the 1968 book The Sound Pattern of English by Chomsky and Morris Halle. In the 1990s, this approach was largely replaced by Optimality theory, which was able to capture generalizations called conspiracies which needed to be stipulated in SPE phonology.[29]

Semantics emerged as a subfield of generative linguistics during the late 1970s, with the pioneering work of Richard Montague. Montague proposed a system called Montague grammar which consisted of interpretation rules mapping expressions from a bespoke model of syntax to formulas of intensional logic. Subsequent work by Barbara Partee, Irene Heim, Tanya Reinhart, and others showed that the key insights of Montague Grammar could be incorporated into more syntactically plausible systems.[49][50]
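The "explicit rule systems" at the heart of this tradition can be made concrete in a few lines of code. The following Haskell sketch is only an illustration of the general idea, not an example from the article: it encodes a tiny context-free fragment and uses it to make the kind of falsifiable grammaticality predictions described above.

```haskell
-- A toy context-free grammar: S -> NP VP, NP -> Det N, VP -> V NP.
-- The explicit rules make a testable prediction: exactly the strings
-- the grammar generates should be judged acceptable by speakers.

dets, nouns, verbs :: [String]
dets  = ["the", "a"]
nouns = ["doctor", "patient"]
verbs = ["examined", "saw"]

np :: [[String]]
np = [[d, n] | d <- dets, n <- nouns]                -- NP -> Det N

vp :: [[String]]
vp = [v : obj | v <- verbs, obj <- np]               -- VP -> V NP

sentences :: [[String]]
sentences = [subj ++ pred | subj <- np, pred <- vp]  -- S -> NP VP

-- A word string is predicted grammatical iff the grammar generates it.
grammatical :: [String] -> Bool
grammatical ws = ws `elem` sentences

main :: IO ()
main = do
  print (grammatical (words "the doctor examined a patient"))  -- True
  print (grammatical (words "doctor the examined patient a"))  -- False
```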
https://en.wikipedia.org/wiki/Generative_linguistics
"The Library of Babel" (Spanish:La biblioteca de Babel) is ashort storybyArgentineauthor and librarianJorge Luis Borges(1899–1986), conceiving of a universe in the form of a vast library containing all possible 410-page books of a certain format andcharacter set. The story was originally published in Spanish in Borges'1941collection of storiesEl jardín de senderos que se bifurcan(The Garden of Forking Paths). That entire book was, in turn, included within his much-reprintedFicciones(1944). TwoEnglish-languagetranslations appeared approximately simultaneously in1962, one by James E. Irby in a diverse collection of Borges's works titledLabyrinthsand the other by Anthony Kerrigan as part of a collaborative translation of the entirety ofFicciones. Borges' narrator describes how his universe consists of an enormous expanse of adjacenthexagonalrooms. In each room, there is an opening in the floor to the hexagons above and below, four walls of bookshelves, and two junctions between hexagons each containing a latrine, a sleeping closet, and a stairwell. Though the order and content of the books are random and apparently completely meaningless, the inhabitants believe that the books contain every possible ordering of just 25 basiccharacters(22 letters, the period, the comma, and space). Though the vast majority of the books in this universe are puregibberish, the laws of probability dictate that the library also must contain, somewhere, every coherent book ever written, or that might ever be written, and every possiblepermutationor slightly erroneous version of every one of those books. The narrator notes that the library must contain all useful information, including predictions of the future, biographies of any person, and translations of every book in alllanguages. Conversely, for many of the texts, some language could be devised that would make it readable with any of a vast number of different contents. Despite these theories, all books are functionally totally useless to the reader, as any correct, legible text that can exist occurs due to pure chance and must exist alongside countless completely incorrect writings. This leads to manysuperstitions,cults, andheresieswithin the widerorganized religionof the library; The "Purifiers" arbitrarily destroy books they deem nonsense as they scour through the library seeking the "Crimson Hexagon" and its illustrated, magical books. Others believe that since all books exist in the library, somewhere one of the books must be a perfect index of the library's contents; some even believe that amessianic figureknown as the "Man of the Book" has read it, and they travel through the library seeking him. The narrator notes the population of the library has been gravely decimated by centuries of religious conflict and disease, but maintains his faith in the beauty and organization of the library as undeniable proof of aGodor otherdemiurge, reaffirming his own attempts to find some ultimate meaning to the library and humanity's existence within it. The story repeats the theme of Borges'1939essay "The Total Library" ("La Biblioteca Total"), which in turn acknowledges the earlier development of this theme byKurd Lasswitzin his1901story "The Universal Library" ("Die Universalbibliothek"): Certain examples thatAristotleattributes toDemocritusandLeucippusclearly prefigure it, but its belated inventor isGustav Theodor Fechner, and its first exponent,Kurd Lasswitz. [...] 
In his bookThe Race with the Tortoise(Berlin, 1919), DrTheodor Wolffsuggests that it is a derivation from, or a parody of,Ramón Llull's thinking machine [...] The elements of his game are the universal orthographic symbols, not the words of a language [...] Lasswitz arrives at twenty-five symbols (twenty-two letters, the space, the period, the comma), whose recombinations and repetitions encompass everything possible to express in all languages. The totality of such variations would form a Total Library of astronomical size. Lasswitz urges mankind to construct that inhuman library, which chance would organize and which would eliminate intelligence. (Wolff'sThe Race with the Tortoiseexpounds the execution and the dimensions of that impossible enterprise.)[1] Many of Borges' signature motifs are featured in the story, includinginfinity,reality,cabalistic reasoning, andlabyrinths. The concept of the library is often compared toBorel's dactylographic monkey theorem. There is no reference to monkeys or typewriters in "The Library of Babel", although Borges had mentioned that analogy in "The Total Library": "[A] half-dozen monkeys provided with typewriters would, in a few eternities, produce all the books in theBritish Museum." In this story, the closest equivalent is the line, "A blasphemous sect suggested [...] that all men should juggle letters and symbols until they constructed, by an improbable gift of chance, these canonical books." Borges makes an oblique reference to reproducing Shakespeare, as the only decipherable sentence in one of the books in the library, "O time thy pyramids", is surely taken from Shakespeare'sSonnet 123which opens with the lines "No Time, thou shalt not boast that I do change, Thy pyramids...". Borges would examine a similar idea in his 1976 story, "The Book of Sand", in which there is an infinite book (or book with an indefinite number of pages) rather than an infinite library. Moreover, the story'sBook of Sandis said to be written in an unknown alphabet and its content is not obviously random. In The Library of Babel, Borges interpolates Italian mathematicianBonaventura Cavalieri's suggestion that any solid body could be conceptualized as the superimposition of an infinite number of planes. The concept of the library is also overtly analogous to the view of the universe as aspherehaving its center everywhere and itscircumferencenowhere. ThemathematicianandphilosopherBlaise Pascalemployed thismetaphor, and in an earlier essay Borges noted that Pascal's manuscript called the sphereeffroyable,or "frightful". The quote at the beginning of the story, "By this art you may contemplate the variation of the twenty-three letters," is fromRobert Burton's 1621The Anatomy of Melancholy. In mainstream theories of natural language syntax, every syntactically valid utterance can be extended to produce a new, longer one, because ofrecursion.[2]However, the books in the Library of Babel are of bounded length ("each book is of four hundred and ten pages; each page, of forty lines, each line, of some eighty letters"), so the Library can only contain a finite number of distinct strings. Borges' narrator notes this fact, but believes that the Library is nevertheless infinite; he speculates that it repeats itself periodically, giving an eventual "order" to the "disorder" of the seemingly random arrangement of books. 
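The "astronomical size" is easy to make concrete from the format the narrator gives: 410 pages × 40 lines × 80 characters, over a 25-symbol alphabet. A short Haskell calculation, offered here purely as an illustration:

```haskell
-- Each book is a fixed string of 410 * 40 * 80 = 1,312,000 characters
-- drawn from 25 symbols, so the Library holds exactly 25^1312000
-- distinct books: a number of roughly 1.83 million decimal digits.
main :: IO ()
main = do
  let charsPerBook = 410 * 40 * 80 :: Integer  -- 1,312,000
      books        = 25 ^ charsPerBook         -- exact, via Integer
  print charsPerBook
  -- Printing the full number would fill megabytes; report how many
  -- decimal digits it has instead.
  print (length (show books))
```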
Mathematics professor William Goldbloom Bloch confirms the narrator's intuition, deducing in his popular mathematics bookThe Unimaginable Mathematics of Borges' Library of Babelthat the library's structure necessarily has at least one room whose shelves are not full (because the number of books per room does not divide the total number of books evenly), and the rooms on each floor of the library must either be connected into a singleHamiltonian cycle, or possibly be disconnected into subsets that cannot reach each other.[3] W. V. O. Quinenotes that the Library of Babel is finite, and that any text that does not fit in a single book can be reconstructed by finding a second book with the continuation. The size of the alphabet can be reduced by usingMorse codeeven though it makes the books more verbose; the size of the books can also be reduced by splitting each into multiple volumes and discarding the duplicates. Writes Quine, "The ultimate absurdity is now staring us in the face: a universal library of two volumes, one containing a single dot and the other a dash. Persistent repetition and alternation of the two are sufficient, we well know, for spelling out any and every truth. The miracle of the finite but universal library is a mere inflation of the miracle of binary notation: everything worth saying, and everything else as well, can be said with two characters."[4] The full possible set of protein sequences (protein sequence space) has been compared to the Library of Babel.[5][6]In theLibrary of Babel, finding any book that made sense was almost impossible due to the sheer number and lack of order. The same would be true of protein sequences if it were not for natural selection, which has picked out only protein sequences that make sense. Additionally, each protein sequence is surrounded by a set of neighbors (point mutants) that are likely to have at least some function.Daniel Dennett's 1995 bookDarwin's Dangerous Ideaincludes an elaboration of the Library of Babel concept to imagine the set of all possible genetic sequences, which he calls the Library of Mendel, in order to illustrate the mathematics ofgenetic variation. Dennett uses this concept again later in the book to imagine all possible algorithms that can be included in hisToshibacomputer, which he calls the Library of Toshiba. He describes the Library of Mendel and the Library of Toshiba as subsets within the Library of Babel.
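Quine's two-character reduction is ordinary binary coding. A small Haskell sketch (illustrative only; the 25-character alphabet below is an assumption modeled on the story's description) respells any such text with a dot and a dash, at the cost of making it five times longer:

```haskell
import Data.List (elemIndex)
import Data.Maybe (fromJust)
import Numeric (showIntAtBase)

-- A stand-in for the Library's 25 symbols: 22 letters, comma, period, space.
alphabet :: String
alphabet = "abcdefghijklmnopqrstuv,. "

-- Encode each symbol as a fixed-width 5-"bit" code word ('.' = 0, '-' = 1).
-- Five positions suffice because 2^5 = 32 >= 25.
encodeChar :: Char -> String
encodeChar c = pad (showIntAtBase 2 bit (fromJust (elemIndex c alphabet)) "")
  where
    bit 0 = '.'
    bit 1 = '-'
    bit _ = error "base-2 digits are only 0 or 1"
    pad s = replicate (5 - length s) '.' ++ s

encode :: String -> String
encode = concatMap encodeChar

main :: IO ()
main = putStrLn (encode "babel")  -- five dots/dashes per letter
```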
https://en.wikipedia.org/wiki/The_Library_of_Babel
Beyond the Infinite Two Minutes (Japanese: ドロステのはてで僕ら, romanized: Dorosute no hate de bokura, lit. 'We at the end of the Droste') is a 2020 Japanese science fiction comedy film written by Makoto Ueda and shot and directed by Junta Yamaguchi in his directorial debut.

Café owner Kato (Kazunari Tosa) discovers that his computer's monitor shows what will happen two minutes into the future from the perspective of the television in the café, which itself displays what happened two minutes into the past. The computer is brought down to face the television, creating a Droste effect, allowing the characters to see several minutes into the future. Kato's friends and coworkers discover this. Persuaded by a future version of himself, he decides to ask his love interest, Megumi, on a date; she declines, but Kato is forced to pretend to encourage his past self to prevent a paradox. Kato's friends also attempt to take advantage of the time window, getting caught up in a gang rivalry in the process. Kato uses his knowledge of the near future, as well as objects the group obtained throughout the film, to attack the gang members and save Megumi, who has been taken upstairs. Returning downstairs, the pair find that two time cops have sedated everyone else. The cops try to force the pair to ingest memory-wiping powder, but they sneeze it away, causing the cops to disappear from reality as a result of a paradox. Megumi and Kato sit down and discuss their lives together.

The film was shot over the course of seven days in a Kyoto café by members of the Europe Kikaku theater troupe, on a budget of 3 million Japanese yen.[1] The film, which is edited to appear as if it was shot in one long take, is an example of nagamawashi, a microgenre of mostly low-budget one-shot Japanese films that gained popularity after the success of One Cut of the Dead in 2017.

The film premiered at Tollywood, a small Tokyo cinema, to an audience of twelve. However, with the COVID-19 pandemic severely limiting the production and release of mainstream films, the film was selected to be screened by the major theater chain Toho Cinemas.[1]

It went on to pick up a number of awards and nominations at festivals in Sitges, Brussels and Montreal.[1] The film was praised for its playful energy and industrious low-budget spirit.[citation needed] On the review aggregator website Rotten Tomatoes, 99% of 69 critics' reviews are positive, with an average rating of 8.3/10. The website's consensus reads: "A time-travel comedy shot through with infectious energy, Beyond the Infinite Two Minutes offers sci-fi fans a low-key -- and highly entertaining -- treat."[2] The New York Times was relatively negative, arguing that the film "rarely surmounts the twistiness of its premise and the repetitiveness of its setups."[3]
https://en.wikipedia.org/wiki/Beyond_the_Infinite_Two_Minutes
Chinese boxes (Chinese: 套盒; pinyin: tàohé) are a set of boxes of graduated size, each fitting inside the next larger box. A traditional style in Chinese design, nested boxes have proved a popular packaging option in the West for novelty or display reasons. Chinese nested boxes have inspired similar forms of packaging around the world, but have also found use as a figurative description, providing an illustrative example to demonstrate situations of conceptually nested or recursive arrangements.

In literature, a Chinese box structure refers to a frame narrative,[1] where a novel or drama is told in the form of a narrative inside a narrative (and so on), giving views from different perspectives. Examples include Plato's dialogue Symposium, Mary Shelley's 1818 novel Frankenstein, Jostein Gaarder's The Solitaire Mystery, Emily Brontë's Wuthering Heights,[2] and Joseph Conrad's Heart of Darkness.
https://en.wikipedia.org/wiki/Chinese_boxes
A false awakening is a vivid and convincing dream about awakening from sleep, while the dreamer in reality continues to sleep. After a false awakening, subjects often dream they are performing their daily morning routine such as showering or eating breakfast. False awakenings, mainly those in which one dreams that they have awoken from a sleep that featured dreams, take on aspects of a double dream or a dream within a dream. A classic example in fiction is the double false awakening of the protagonist in Gogol's Portrait (1835).

Studies have shown that false awakenings are closely related to lucid dreams, and that the two often transform into one another. The only differentiating feature between them is that the dreamer has a logical understanding of the dream in a lucid dream, while that is not the case in a false awakening.[1] Once dreamers realize they have falsely awakened, they either wake up or begin lucid dreaming.[1]

A false awakening may occur following a dream or following a lucid dream (one in which the dreamer has been aware of dreaming). Particularly, if the false awakening follows a lucid dream, the false awakening may turn into a "pre-lucid dream",[2] that is, one in which the dreamer may start to wonder if they are really awake and may or may not come to the correct conclusion. In a study by Harvard psychologist Deirdre Barrett, 2,000 dreams from 200 subjects were examined, and it was found that false awakenings and lucidity were significantly more likely to occur within the same dream or within different dreams of the same night. False awakenings often preceded lucidity as a cue, but they could also follow the realization of lucidity, with lucidity often being lost in the process.[3]

Because the mind still dreams after a false awakening, there may be more than one false awakening in a single dream. Subjects may dream they wake up, eat breakfast, brush their teeth, and so on; suddenly awake again in bed (still in a dream), begin morning rituals again, awaken again, and so forth. The philosopher Bertrand Russell claimed to have experienced "about a hundred" false awakenings in succession while coming around from a general anesthetic.[4]

Giorgio Buzzi suggests that false awakenings (FAs) may indicate the occasional reappearance of a vestigial (or otherwise anomalous) form of REM sleep in the context of disturbed or hyperaroused sleep (lucid dreaming, sleep paralysis, or situations of high anticipation). This peculiar form of REM sleep permits the replay of unaltered experiential memories, thus providing a unique opportunity to study how waking experiences interact with the hypothesized predictive model of the world. In particular, it could permit a glimpse of the protoconscious world without the distorting effect of ordinary REM sleep.[5] In accordance with the proposed hypothesis, a high prevalence of FAs could be expected in children, whose "REM sleep machinery" might be less developed.[5]

Hobson's dream protoconsciousness theory states that false awakening is shaped by fixed patterns depicting real activities, especially the day-to-day routine. False awakening is often associated with highly realistic environmental details of familiar events, such as day-to-day activities or autobiographical and episodic moments.[5] Certain aspects of life may be dramatized or out of place in false awakenings.
Things may seem wrong: details may be off, like the painting on a wall; the dreamer may be unable to talk, or may have difficulty reading (reportedly, reading in lucid dreams is often difficult or impossible).[6] A common theme in false awakenings is visiting the bathroom, upon which the dreamer will see that their reflection in the mirror is distorted (which can be an opportunity for lucidity, but usually results in wakefulness).

Celia Green suggested a distinction should be made between two types of false awakening:[2]

Type 1 is the more common, in which the dreamer seems to wake up, but not necessarily in realistic surroundings; that is, not in their own bedroom. A pre-lucid dream may ensue. More commonly, dreamers will believe they have awakened, and then either genuinely wake up in their own bed or "fall back asleep" in the dream. A common false awakening is a "late for work" scenario. A person may "wake up" in a typical room, with most things looking normal, and realize they overslept and missed the start time at work or school. Clocks, if found in the dream, will show time indicating that fact. The resulting panic is often strong enough to truly awaken the dreamer (much like from a nightmare). Another common Type 1 example of false awakening can result in bedwetting. In this scenario, the dreamer has had a false awakening and, while in the state of dream, has performed all the traditional behaviors that precede urinating – arising from bed, walking to the bathroom, and sitting down on the toilet or walking up to a urinal. The dreamer may then urinate and suddenly wake up to find they have wet themselves.

The Type 2 false awakening seems to be considerably less common. Green characterized it as follows:

The subject appears to wake up in a realistic manner but to an atmosphere of suspense.... The dreamer's surroundings may at first appear normal, and they may gradually become aware of something uncanny in the atmosphere, and perhaps of unwanted [unusual] sounds and movements, or they may "awake" immediately to a "stressed" and "stormy" atmosphere. In either case, the end result would appear to be characterized by feelings of suspense, excitement or apprehension.[7]

Charles McCreery draws attention to the similarity between this description and the description by the German psychopathologist Karl Jaspers (1923) of the so-called "primary delusionary experience" (a general feeling that precedes more specific delusory belief).[8] Jaspers wrote:

Patients feel uncanny and that there is something suspicious afoot. Everything gets a new meaning. The environment is somehow different—not to a gross degree—perception is unaltered in itself but there is some change which envelops everything with a subtle, pervasive and strangely uncertain light.... Something seems in the air which the patient cannot account for, a distrustful, uncomfortable, uncanny tension invades him.[9]

McCreery suggests this phenomenological similarity is not coincidental and results from the idea that both phenomena, the Type 2 false awakening and the primary delusionary experience, are phenomena of sleep.[10] He suggests that the primary delusionary experience, like other phenomena of psychosis such as hallucinations and secondary or specific delusions, represents an intrusion into waking consciousness of processes associated with stage 1 sleep.
It is suggested that the reason for these intrusions is that the psychotic subject is in a state of hyperarousal, a state that can lead to what Ian Oswald called "microsleeps" in waking life.[11]

Other researchers doubt that these are clearly distinguished types, as opposed to being points on a subtle spectrum.[12]

Clinical and neurophysiological descriptions of false awakening are rare. One notable report, by Takeuchi et al.,[13] was considered by some experts to be a case of false awakening. It describes a hypnagogic hallucination involving an unpleasant and fearful feeling of a presence in the sleep laboratory, together with the perception of having risen from the bed. The polysomnography showed abundant trains of alpha rhythm on EEG (sometimes blocked by REMs mixed with slow eye movements and low muscle tone). Conversely, the two experiences of false awakening monitored in that study were close to regular REM sleep. Quantitative analysis showed predominantly theta waves, suggesting that these two experiences were the product of a dreaming rather than a fully conscious brain.[14]
https://en.wikipedia.org/wiki/Dream_within_a_dream
The homunculus argument is an informal fallacy whereby a concept is explained in terms of the concept itself, recursively, without first defining or explaining the original concept.[1] This fallacy arises most commonly in the theory of vision. One may explain human vision by noting that light from the outside world forms an image on the retinas in the eyes and something (or someone) in the brain looks at these images as if they are images on a movie screen (a picture of vision sometimes termed the theory of the Cartesian theater, a label coined by the philosopher Daniel Dennett in criticism of such views). The question arises as to the nature of this internal viewer. The assumption here is that there is a "little man" or "homunculus" inside the brain "looking at" the movie.

The reason why this is a fallacy may be understood by asking how the homunculus "sees" the internal movie. The answer[citation needed] is that there is another homunculus inside the first homunculus's "head" or "brain" looking at this "movie". But that raises the question of how this homunculus sees the "outside world". To answer that seems to require positing another homunculus inside this second homunculus's head, and so forth. In other words, a situation of infinite regress is created. The problem with the homunculus argument is that it tries to account for a phenomenon in terms of the very phenomenon that it is supposed to explain.[2]

Another example involves cognitivist theories that argue that the human brain uses "rules" to carry out operations (these rules often conceptualised as being like the algorithms of a computer program). For example, in his work of the 1950s, 1960s and 1970s, Noam Chomsky argued that (in the words of one of his books) human beings use Rules and Representations (or, to be more specific, rules acting on representations) in order to cognize (more recently Chomsky has abandoned this view; cf. the Minimalist Program).

Now, in terms of (say) chess, the players are given "rules" (i.e., the rules of chess) to follow. So: who uses these rules? The answer is self-evident: the players of the game (of chess) use the rules; it is not the case that the rules themselves play chess. The rules themselves are merely inert marks on paper until a human being reads, understands and uses them. But what about the "rules" that are, allegedly, inside our head (brain)? Who reads, understands and uses them? Again, the implicit answer is, and some would argue must be, a "homunculus": a little man who reads the rules of the world and then gives orders to the body to act on them. But again we are in a situation of infinite regress, because this implies that the homunculus uses cognitive processes that are also rule-bound, which presupposes another homunculus inside its head, and so on and so forth. Therefore, so the argument goes, theories of mind that imply or state explicitly that cognition is rule-bound cannot be correct unless some way is found to "ground" the regress.

This is important because it is often assumed in cognitive science that rules and algorithms are essentially the same: in other words, the theory that cognition is rule-bound is often believed to imply that thought (cognition) is essentially the manipulation of algorithms, and this is one of the key assumptions of some varieties of artificial intelligence. Homunculus arguments are always fallacious unless some way can be found to "ground" the regress.
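The regress has the shape of a recursive definition with no base case. A Haskell caricature (purely illustrative; the types and names are invented for this sketch) makes the structural problem visible: the "explanation" type-checks, but evaluating it never terminates, because each viewer's seeing is explained by another act of seeing.

```haskell
-- Invented types standing in for "an image" and "a perception of it".
data Image   = Image String
data Percept = Percept String

-- The inner "movie screen" just presents the image again.
innerScreen :: Image -> Image
innerScreen img = img

-- The homunculus "theory" of seeing: to see an image is for an inner
-- viewer to see it on an inner screen. There is no grounding clause,
-- so the recursion never bottoms out: an infinite regress.
see :: Image -> Percept
see img = see (innerScreen img)

main :: IO ()
main = putStrLn "Evaluating `see (Image \"tree\")` would loop forever."
```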
Inpsychologyandphilosophy of mind, "homunculus arguments" (or the "homunculus fallacies") are extremely useful for detecting where theories ofmindfail or are incomplete. The homunculus fallacy is closely related toRyle's regress.
https://en.wikipedia.org/wiki/Homunculus_argument
Matryoshka dolls(Russian:матрёшка,romanized:matryoshka/ˌmætriˈɒʃkə/), also known asstacking dolls,nesting dolls,Russian tea dolls, orRussian dolls,[1]are a set of woodendollsof decreasing size placed one inside another. The nameMatryoshkais adiminutiveform ofMatryosha(Матрёша), in turn ahypocorismof the Russian female first nameMatryona(Матрёна).[2] A set of matryoshkas consists of a wooden figure, which separates at the middle, top from bottom, to reveal a smaller figure of the same sort inside, which has, in turn, another figure inside of it, and so on. The first Russian nested doll set was made in 1890 bywood turning craftsmanandwood carverVasily Zvyozdochkinfrom a design bySergey Malyutin, who was a folk crafts painter atAbramtsevo. Traditionally the outer layer is a woman, dressed in a Russiansarafandress. The figures inside may be of any gender; the smallest, innermost doll is typically a baby turned from a single piece of wood. Much of the artistry is in the painting of each doll, which can be very elaborate. The dolls often follow a theme; the themes may vary, fromfairy talecharacters toSoviet leaders. In some countries, matryoshka dolls are often referred to asbabushka dolls, though they are not known by this name in Russian;babushka(бабушка) means'grandmother; old woman'.[3] The first Russian nested doll set was carved in 1890 at the Children's Education Workshop by Vasily Zvyozdochkin and designed by Sergey Malyutin, who was a folk crafts painter in the Abramtsevo estate ofSavva Mamontov, a Russian industrialist and patron of arts.[4][5]Mamontov's brother, Anatoly Ivanovich Mamontov (1839–1905), created the Children's Education Workshop to make and sell children's toys. The doll set was painted by Malyutin. Malyutin's doll set consisted of eight dolls—the outermost was a mother in a traditional dress holding a red-combedrooster. The inner dolls were her children, girls and a boy, and the innermost a baby. The Children's Education Workshop was closed in the late 1890s, but the tradition of the matryoshka simply relocated toSergiyev Posad, the Russian city known as a toy-making center since the fourteenth century.[6][4] The inspiration for matryoshka dolls is not clear. Matryoshka dolls may have been inspired by a nesting doll imported from Japan.[5][7]The Children's Education workshop where Zvyozdochkin was a lathe operator received a five piece, cylinder-shaped nesting doll featuring Fukuruma (Fukurokuju) in the late 1890s,[8]which is now part of the collection at the Sergiev Posad Museum of Toys.[8]Other east Asian dolls share similarities with matryoshka dolls such as theKokeshidolls,[4][9]originating in NorthernHonshū, themain islandofJapan, although they cannot be placed one inside another, and the round hollowdaruma dolldepicting a Buddhist monk.[9][10]Another possible source of inspiration is the nesting Easter eggs produced on a lathe by Russian woodworkers during the late 19th Century.[3][11] Savva Mamontov's wife presented a set of matryoshka dolls at theExposition Universellein Paris in 1900, and the toy earned a bronze medal. Soon after, matryoshka dolls were being made in several places in Russia and shipped around the world. 
The first matryoshka dolls were produced in the Children's Education (Detskoye vospitanie) workshop in Moscow.[12] After it closed in 1904, production was transferred to the city of Sergiev Posad (Сергиев Посад),[12] known as Sergiev (Сергиев) from 1919 to 1930 and Zagorsk from 1930 to 1991.[13] Matryoshka factories were later established in other cities and villages. Following the collapse of the Soviet Union, the closure of many matryoshka factories, and the loosening of restrictions, independent artists began to produce matryoshka dolls in homes and art studios.[22]

Ordinarily, matryoshka dolls are crafted from linden wood. There is a popular misconception that they are carved from one piece of wood. Rather, they are produced using a lathe equipped with a balance bar; four distinct types of heavy chisels, each about 2 feet (0.61 m) long (hook, knife, pipe, and spoon); and a "set of handmade wooden calipers particular to a size of the doll". The tools are hand-forged by a village blacksmith from car axles or other salvage. A wood carver uniquely crafts each set of wooden calipers. Multiple pieces of wood are meticulously carved into the nesting set.[23]

The standard shape approximates a human silhouette[24] with a flared base on the largest doll for stability.[25] Other shapes include potbelly, cone, bell, egg, bottle, sphere, and cylinder.[24]

The size and number of pieces varies widely. The industry standard from the Soviet period, which accounts for approximately 50% of all matryoshkas produced, is six inches tall and consists of 5 dolls; the exception is matryoshka dolls manufactured in Semenov, whose standard is five inches tall and consists of 6 pieces.[24][25] Other common sets are the 3-piece, the 7-piece, and the 10-piece.[25]

Matryoshka dolls painted in the traditional style share common elements. They depict female figures wearing a peasant dress (sarafan) and scarf or shawl, usually with an apron and flowers.[24][25] Each successively smaller doll is identical or nearly so.[3][24] Distinctive regional styles developed in different areas of matryoshka manufacture.

Matryoshka dolls[26] are often designed to follow a particular theme; for instance, peasant girls in traditional dress. Originally, themes were often drawn from tradition or fairy tale characters, in keeping with the craft tradition—but since the late 20th century, they have embraced a larger range, including Russian leaders and popular culture. Common themes of matryoshkas are floral and relate to nature. Often Christmas, Easter, and religion are used as themes for the doll. Modern artists create many new styles of nesting dolls, mostly as an alternative purchase option for tourism. These include animal collections, portraits, and caricatures of famous politicians, musicians, athletes, astronauts, "robots", and popular movie stars. Today, some Russian artists specialize in painting themed matryoshka dolls that feature specific categories of subjects, people, or nature. Areas with notable matryoshka styles include Sergiyev Posad, Semionovo (now the town of Semyonov),[17] Polkhovsky Maydan, and the city of Kirov.

The largest set of matryoshka dolls in the world is a 51-piece set hand-painted by Youlia Bereznitskaia of Russia, completed in 2003. The tallest doll in the set measures 53.97 centimetres (21.25 in); the smallest, 0.31 centimetres (0.12 in).
Arranged side-by-side, the dolls span 3.41 metres (11 ft 2.25 in).[27] Matryoshkas are also usedmetaphorically, as adesign paradigm, known as the "matryoshka principle" or "nested doll principle".[citation needed]It denotes a recognizable relationship of "object-within-similar-object" that appears in the design of many other natural and crafted objects. Examples of this use include thematrioshka brain,[citation needed]theMatroskamedia-container format,[citation needed]and the Russian Doll model ofmulti-walled carbon nanotubes.[citation needed] Theonion metaphoris similar. If the outer layer is peeled off an onion, a similar onion exists within. This structure is employed by designers in applications such as the layering of clothes or the design of tables, where a smaller table nests within a larger table, and a smaller one within that. The metaphor of the matryoshka doll (or its onion equivalent) is also used in the description of shell companies and similar corporate structures that are used in the context of tax-evasion schemes in low-tax jurisdictions (for example, offshore tax havens).[28]It has also been used to describesatellitesand suspected weapons in space.[29] Matryoshka is often seen as a symbol of the feminine side of Russian culture.[30]Matryoshka is associated in Russia with family and fertility.[31]Matryoshka is used as the symbol for the epithet Mother Russia.[32]Matryoshka dolls are a traditional representation of the mother carrying a child within her and can be seen as a representation of a chain of mothers carrying on the family legacy through the child in their wombs. Furthermore, matryoshka dolls are used to illustrate the unity of body, soul, mind, heart, and spirit.[33][34][35] In 2020, theUnicode Consortiumapproved the matryoshka doll (🪆) as one of the newemojicharacters in release v.13.[36]The matryoshka or nesting doll emoji was submitted to the consortium by Jef Gray and Samantha Sunne,[37]as a non-religious, apolitical symbol of Russian-East European-Far East Asian culture.[38]
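The "nested doll principle" described above is exactly the shape of a recursive data type. A minimal Haskell sketch, offered only as an illustration of the design paradigm:

```haskell
-- A matryoshka is either the solid innermost doll or a shell
-- containing a smaller matryoshka: object-within-similar-object.
data Doll = Innermost | Shell Doll
  deriving Show

-- Build a set with n shells around the innermost doll.
nest :: Int -> Doll
nest n
  | n <= 0    = Innermost
  | otherwise = Shell (nest (n - 1))

-- Opening the set counts the pieces, innermost doll included.
pieces :: Doll -> Int
pieces Innermost = 1
pieces (Shell d) = 1 + pieces d

main :: IO ()
main = print (pieces (nest 50))  -- 51, like the record-holding set
```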
https://en.wikipedia.org/wiki/Matryoshka_doll
In physics, mathematics and statistics, scale invariance is a feature of objects or laws that do not change if scales of length, energy, or other variables are multiplied by a common factor, and thus represent a universality. The technical term for this transformation is a dilatation (also known as dilation). Dilatations can form part of a larger conformal symmetry.

In mathematics, one can consider the scaling properties of a function or curve f(x) under rescalings of the variable x. That is, one is interested in the shape of f(λx) for some scale factor λ, which can be taken to be a length or size rescaling. The requirement for f(x) to be invariant under all rescalings is usually taken to be

f(λx) = λ^Δ f(x)

for some choice of exponent Δ, and for all dilations λ. This is equivalent to f being a homogeneous function of degree Δ.

Examples of scale-invariant functions are the monomials f(x) = x^n, for which Δ = n, in that clearly

f(λx) = (λx)^n = λ^n f(x).

An example of a scale-invariant curve is the logarithmic spiral, a kind of curve that often appears in nature. In polar coordinates (r, θ), the spiral can be written as

θ(r) = (1/b) ln(r/a).

Allowing for rotations of the curve, it is invariant under all rescalings λ; that is, θ(λr) is identical to a rotated version of θ(r).

The idea of scale invariance of a monomial generalizes in higher dimensions to the idea of a homogeneous polynomial, and more generally to a homogeneous function. Homogeneous functions are the natural denizens of projective space, and homogeneous polynomials are studied as projective varieties in projective geometry. Projective geometry is a particularly rich field of mathematics; in its most abstract forms, the geometry of schemes, it has connections to various topics in string theory.

It is sometimes said that fractals are scale-invariant, although more precisely, one should say that they are self-similar. A fractal is equal to itself typically for only a discrete set of values λ, and even then a translation and rotation may have to be applied to match the fractal up to itself. Thus, for example, the Koch curve scales with Δ = 1, but the scaling holds only for values of λ = 1/3^n for integer n. In addition, the Koch curve scales not only at the origin, but, in a certain sense, "everywhere": miniature copies of itself can be found all along the curve. Some fractals may have multiple scaling factors at play at once; such scaling is studied with multi-fractal analysis. Periodic external and internal rays are invariant curves.

If P(f) is the average, expected power at frequency f, then noise scales as

P(f) ∝ f^Δ,

with Δ = 0 for white noise, Δ = −1 for pink noise, and Δ = −2 for Brownian noise (and more generally, Brownian motion). More precisely, scaling in stochastic systems concerns itself with the likelihood of choosing a particular configuration out of the set of all possible random configurations. This likelihood is given by the probability distribution. Examples of scale-invariant distributions are the Pareto distribution and the Zipfian distribution.

Tweedie distributions are a special case of exponential dispersion models, a class of statistical models used to describe error distributions for the generalized linear model and characterized by closure under additive and reproductive convolution as well as under scale transformation.[1] These include a number of common distributions: the normal distribution, Poisson distribution and gamma distribution, as well as more unusual distributions like the compound Poisson-gamma distribution, positive stable distributions, and extreme stable distributions.
Consequent to their inherent scale invariance, Tweedie random variables Y demonstrate a variance var(Y) to mean E(Y) power law:

$\operatorname{var}(Y) = a\,[\operatorname{E}(Y)]^{p},$

where a and p are positive constants. This variance to mean power law is known in the physics literature as fluctuation scaling,[2] and in the ecology literature as Taylor's law.[3]

Random sequences governed by the Tweedie distributions and evaluated by the method of expanding bins exhibit a biconditional relationship between the variance to mean power law and power-law autocorrelations. The Wiener–Khinchin theorem further implies that any sequence exhibiting a variance to mean power law under these conditions will also manifest 1/f noise.[4]

The Tweedie convergence theorem provides a hypothetical explanation for the wide manifestation of fluctuation scaling and 1/f noise.[5] It requires, in essence, that any exponential dispersion model that asymptotically manifests a variance to mean power law must express a variance function that comes within the domain of attraction of a Tweedie model. Almost all distribution functions with finite cumulant generating functions qualify as exponential dispersion models, and most exponential dispersion models manifest variance functions of this form. Hence many probability distributions have variance functions that express this asymptotic behavior, and the Tweedie distributions become foci of convergence for a wide range of data types.[4]

Much as the central limit theorem requires certain kinds of random variables to have as a focus of convergence the Gaussian distribution and express white noise, the Tweedie convergence theorem requires certain non-Gaussian random variables to express 1/f noise and fluctuation scaling.[4]

In physical cosmology, the power spectrum of the spatial distribution of the cosmic microwave background is near to being a scale-invariant function. Although in mathematics this means that the spectrum is a power law, in cosmology the term "scale-invariant" indicates that the amplitude, P(k), of primordial fluctuations as a function of wave number, k, is approximately constant, i.e. a flat spectrum. This pattern is consistent with the proposal of cosmic inflation.

Classical field theory is generically described by a field, or set of fields, φ, that depend on coordinates, x. Valid field configurations are then determined by solving differential equations for φ, and these equations are known as field equations. For a theory to be scale-invariant, its field equations should be invariant under a rescaling of the coordinates, combined with some specified rescaling of the fields,

$x \to \lambda x, \qquad \varphi \to \lambda^{-\Delta} \varphi.$

The parameter Δ is known as the scaling dimension of the field, and its value depends on the theory under consideration. Scale invariance will typically hold provided that no fixed length scale appears in the theory. Conversely, the presence of a fixed length scale indicates that a theory is not scale-invariant.

A consequence of scale invariance is that given a solution of a scale-invariant field equation, we can automatically find other solutions by rescaling both the coordinates and the fields appropriately. In technical terms, given a solution, φ(x), one always has other solutions of the form

$\lambda^{\Delta} \varphi(\lambda x).$

For a particular field configuration, φ(x), to be scale-invariant, we require that

$\varphi(x) = \lambda^{\Delta} \varphi(\lambda x),$

where Δ is, again, the scaling dimension of the field. We note that this condition is rather restrictive. In general, solutions even of scale-invariant field equations will not be scale-invariant, and in such cases the symmetry is said to be spontaneously broken.
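The restrictiveness of this last condition can be checked symbolically. A minimal sketch (ours, assuming sympy; the value of Δ is arbitrary) confirms that a pure power law satisfies φ(x) = λ^Δ φ(λx), while a profile with a built-in length scale does not:

```python
import sympy as sp

x, lam = sp.symbols('x lambda_', positive=True)
Delta = sp.Rational(3, 2)      # an arbitrary scaling dimension
phi = x**(-Delta)              # candidate scale-invariant configuration

# scale-invariance condition: lambda**Delta * phi(lambda*x) - phi(x) == 0
diff = lam**Delta * phi.subs(x, lam * x) - phi
print(sp.simplify(sp.expand_power_base(diff)))   # 0

# a profile with a fixed length scale L fails the same condition
L = sp.symbols('L', positive=True)
psi = sp.exp(-x / L)
print(sp.simplify(lam**Delta * psi.subs(x, lam * x) - psi))  # nonzero
```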
An example of a scale-invariant classical field theory is electromagnetism with no charges or currents. The fields are the electric and magnetic fields, E(x, t) and B(x, t), while their field equations are Maxwell's equations. With no charges or currents, these field equations take the form of wave equations

$\nabla^{2} \mathbf{E} = \frac{1}{c^{2}} \frac{\partial^{2} \mathbf{E}}{\partial t^{2}}, \qquad \nabla^{2} \mathbf{B} = \frac{1}{c^{2}} \frac{\partial^{2} \mathbf{B}}{\partial t^{2}},$

where c is the speed of light. These field equations are invariant under the transformation x → λx, t → λt. Moreover, given solutions of Maxwell's equations, E(x, t) and B(x, t), it holds that E(λx, λt) and B(λx, λt) are also solutions.

Another example of a scale-invariant classical field theory is the massless scalar field (note that the name scalar is unrelated to scale invariance). The scalar field, φ(x, t), is a function of a set of spatial variables, x, and a time variable, t. Consider first the linear theory. Like the electromagnetic field equations above, the equation of motion for this theory is also a wave equation, and is invariant under the transformation x → λx, t → λt. The name massless refers to the absence of a term $\propto m^{2}\varphi$ in the field equation. Such a term is often referred to as a "mass" term, and would break the invariance under the above transformation. In relativistic field theories, a mass-scale, m, is physically equivalent to a fixed length scale through

$L = \frac{\hbar}{mc},$

and so it should not be surprising that massive scalar field theory is not scale-invariant.

The field equations in the examples above are all linear in the fields, which has meant that the scaling dimension, Δ, has not been so important. However, one usually requires that the scalar field action is dimensionless, and this fixes the scaling dimension of φ. In particular,

$\Delta = \frac{D-2}{2},$

where D is the combined number of spatial and time dimensions.

Given this scaling dimension for φ, there are certain nonlinear modifications of massless scalar field theory which are also scale-invariant. One example is massless φ⁴ theory for D = 4. The field equation is

$\partial^{\mu}\partial_{\mu}\varphi + g\varphi^{3} = 0.$

(Note that the name φ⁴ derives from the form of the Lagrangian, which contains the fourth power of φ.) When D = 4 (e.g. three spatial dimensions and one time dimension), the scalar field scaling dimension is Δ = 1. The field equation is then invariant under the transformation

$x \to \lambda x, \qquad t \to \lambda t, \qquad \varphi(x) \to \lambda^{-1}\varphi(x).$

The key point is that the parameter g must be dimensionless, otherwise one introduces a fixed length scale into the theory; for φ⁴ theory, this is only the case in D = 4. Note that under these transformations the argument of the function φ is unchanged.

The scale-dependence of a quantum field theory (QFT) is characterised by the way its coupling parameters depend on the energy-scale of a given physical process. This energy dependence is described by the renormalization group, and is encoded in the beta-functions of the theory. For a QFT to be scale-invariant, its coupling parameters must be independent of the energy-scale, and this is indicated by the vanishing of the beta-functions of the theory. Such theories are also known as fixed points of the corresponding renormalization group flow.[6]

A simple example of a scale-invariant QFT is the quantized electromagnetic field without charged particles. This theory actually has no coupling parameters (since photons are massless and non-interacting) and is therefore scale-invariant, much like the classical theory. However, in nature the electromagnetic field is coupled to charged particles, such as electrons. The QFT describing the interactions of photons and charged particles is quantum electrodynamics (QED), and this theory is not scale-invariant. We can see this from the QED beta-function.
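The beta-function itself did not survive extraction here; for orientation, the standard one-loop result (our addition, in a common normalization) is

$\beta(e) = \frac{e^{3}}{12\pi^{2}} + O(e^{5}) > 0,$

and it is this positivity that the next sentence relies on.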
This tells us that the electric charge (which is the coupling parameter in the theory) increases with increasing energy. Therefore, while the quantized electromagnetic field without charged particles is scale-invariant, QED is not scale-invariant.

Free, massless quantized scalar field theory has no coupling parameters. Therefore, like the classical version, it is scale-invariant. In the language of the renormalization group, this theory is known as the Gaussian fixed point. However, even though the classical massless φ⁴ theory is scale-invariant in D = 4, the quantized version is not scale-invariant. We can see this from the beta-function for the coupling parameter, g, which does not vanish. Even though the quantized massless φ⁴ is not scale-invariant, there do exist scale-invariant quantized scalar field theories other than the Gaussian fixed point. One example is the Wilson–Fisher fixed point, below.

Scale-invariant QFTs are almost always invariant under the full conformal symmetry, and the study of such QFTs is conformal field theory (CFT). Operators in a CFT have a well-defined scaling dimension, analogous to the scaling dimension, Δ, of a classical field discussed above. However, the scaling dimensions of operators in a CFT typically differ from those of the fields in the corresponding classical theory. The additional contributions appearing in the CFT are known as anomalous scaling dimensions.

The φ⁴ theory example above demonstrates that the coupling parameters of a quantum field theory can be scale-dependent even if the corresponding classical field theory is scale-invariant (or conformally invariant). If this is the case, the classical scale (or conformal) invariance is said to be anomalous. A classically scale-invariant field theory, where scale invariance is broken by quantum effects, provides an explanation of the nearly exponential expansion of the early universe called cosmic inflation, as long as the theory can be studied through perturbation theory.[7]

In statistical mechanics, as a system undergoes a phase transition, its fluctuations are described by a scale-invariant statistical field theory. For a system in equilibrium (i.e. time-independent) in D spatial dimensions, the corresponding statistical field theory is formally similar to a D-dimensional CFT. The scaling dimensions in such problems are usually referred to as critical exponents, and one can in principle compute these exponents in the appropriate CFT.

An example that links together many of the ideas in this article is the phase transition of the Ising model, a simple model of ferromagnetic substances. This is a statistical mechanics model, which also has a description in terms of conformal field theory. The system consists of an array of lattice sites, which form a D-dimensional periodic lattice. Associated with each lattice site is a magnetic moment, or spin, and this spin can take either the value +1 or −1. (These states are also called up and down, respectively.)

The key point is that the Ising model has a spin-spin interaction, making it energetically favourable for two adjacent spins to be aligned. On the other hand, thermal fluctuations typically introduce a randomness into the alignment of spins. At some critical temperature, Tc, spontaneous magnetization is said to occur. This means that below Tc the spin-spin interaction will begin to dominate, and there is some net alignment of spins in one of the two directions.

An example of the kind of physical quantities one would like to calculate at this critical temperature is the correlation between spins separated by a distance r.
This has the generic behaviour

$G(r) \propto \frac{1}{r^{D-2+\eta}},$

for some particular value of η, which is an example of a critical exponent.

The fluctuations at temperature Tc are scale-invariant, and so the Ising model at this phase transition is expected to be described by a scale-invariant statistical field theory. In fact, this theory is the Wilson–Fisher fixed point, a particular scale-invariant scalar field theory. In this context, G(r) is understood as a correlation function of scalar fields,

$G(r) = \langle \varphi(0)\,\varphi(r) \rangle.$

Now we can fit together a number of the ideas seen already. From the above, one sees that the critical exponent, η, for this phase transition, is also an anomalous dimension. This is because the classical dimension of the scalar field,

$\Delta = \frac{D-2}{2},$

is modified to become

$\Delta = \frac{D-2+\eta}{2},$

where D is the number of dimensions of the Ising model lattice. So this anomalous dimension in the conformal field theory is the same as a particular critical exponent of the Ising model phase transition.

Note that for dimension D ≡ 4 − ε, η can be calculated approximately using the epsilon expansion; to leading order one finds

$\eta = \frac{\varepsilon^{2}}{54} + O(\varepsilon^{3}).$

In the physically interesting case of three spatial dimensions, we have ε = 1, and so this expansion is not strictly reliable. However, a semi-quantitative prediction is that η is numerically small in three dimensions. On the other hand, in the two-dimensional case the Ising model is exactly soluble. In particular, it is equivalent to one of the minimal models, a family of well-understood CFTs, and it is possible to compute η (and the other critical exponents) exactly; one finds η = 1/4.

The anomalous dimensions in certain two-dimensional CFTs can be related to the typical fractal dimensions of random walks, where the random walks are defined via Schramm–Loewner evolution (SLE). As we have seen above, CFTs describe the physics of phase transitions, and so one can relate the critical exponents of certain phase transitions to these fractal dimensions. Examples include the 2d critical Ising model and the more general 2d critical Potts model. Relating other 2d CFTs to SLE is an active area of research.

A phenomenon known as universality is seen in a large variety of physical systems. It expresses the idea that different microscopic physics can give rise to the same scaling behaviour at a phase transition. A canonical example involves two systems with completely different microscopic physics, a fluid at its liquid–vapour critical point and a uniaxial ferromagnet at its Curie point: their critical exponents turn out to be the same. Moreover, one can calculate these exponents using the same statistical field theory. The key observation is that at a phase transition or critical point, fluctuations occur at all length scales, and thus one should look for a scale-invariant statistical field theory to describe the phenomena. In a sense, universality is the observation that there are relatively few such scale-invariant theories.

The set of different microscopic theories described by the same scale-invariant theory is known as a universality class. Many other systems belong to such universality classes; the key observation is that, for all of these different systems, the behaviour resembles a phase transition, and that the language of statistical mechanics and scale-invariant statistical field theory may be applied to describe them.

Under certain circumstances, fluid mechanics is a scale-invariant classical field theory.
The fields are the velocity of the fluid flow, $\mathbf{u}(\mathbf{x}, t)$, the fluid density, $\rho(\mathbf{x}, t)$, and the fluid pressure, $P(\mathbf{x}, t)$. These fields must satisfy both the Navier–Stokes equation and the continuity equation. For a Newtonian fluid these take the respective forms

$\rho \frac{\partial \mathbf{u}}{\partial t} + \rho \mathbf{u} \cdot \nabla \mathbf{u} = -\nabla P + \mu \left( \nabla^{2}\mathbf{u} + \frac{1}{3}\nabla\left(\nabla \cdot \mathbf{u}\right) \right), \qquad \frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{u}) = 0,$

where μ is the dynamic viscosity.

In order to deduce the scale invariance of these equations, we specify an equation of state, relating the fluid pressure to the fluid density. The equation of state depends on the type of fluid and the conditions to which it is subjected. For example, we consider the isothermal ideal gas, which satisfies

$P = c_{s}^{2}\,\rho,$

where $c_s$ is the speed of sound in the fluid. Given this equation of state, Navier–Stokes and the continuity equation are invariant under a combined rescaling of coordinates and fields: given the solutions $\mathbf{u}(\mathbf{x}, t)$ and $\rho(\mathbf{x}, t)$, we automatically have that $\lambda \mathbf{u}(\lambda \mathbf{x}, \lambda^{2} t)$ and $\lambda \rho(\lambda \mathbf{x}, \lambda^{2} t)$ are also solutions.

In computer vision and biological vision, scaling transformations arise because of the perspective image mapping and because objects have different physical size in the world. In these areas, scale invariance refers to local image descriptors or visual representations of the image data that remain invariant when the local scale in the image domain is changed.[8] Detecting local maxima over scales of normalized derivative responses provides a general framework for obtaining scale invariance from image data.[9][10] Examples of applications include blob detection, corner detection, ridge detection, and object recognition via the scale-invariant feature transform.
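The scale-selection idea in the last paragraph can be sketched compactly. The following illustration (ours; the names and the synthetic test image are arbitrary, and it assumes scipy) computes scale-normalized Laplacian-of-Gaussian responses, whose maximum over σ picks out a blob's characteristic scale:

```python
import numpy as np
from scipy import ndimage

def normalized_log_stack(image, sigmas):
    # Multiplying the Laplacian-of-Gaussian response by sigma**2 makes
    # responses comparable across scales (scale normalization).
    return np.stack([s**2 * np.abs(ndimage.gaussian_laplace(image, s))
                     for s in sigmas])

# Synthetic Gaussian blob of standard deviation 8 px: the normalized
# response at its centre should peak near sigma = 8.
yy, xx = np.mgrid[0:64, 0:64]
img = np.exp(-((yy - 32.0)**2 + (xx - 32.0)**2) / (2 * 8.0**2))

sigmas = [2, 4, 6, 8, 12]
stack = normalized_log_stack(img, sigmas)
print(sigmas[stack[:, 32, 32].argmax()])  # 8
```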
https://en.wikipedia.org/wiki/Scale_invariance
In mathematics, a self-similar object is exactly or approximately similar to a part of itself (i.e., the whole has the same shape as one or more of the parts). Many objects in the real world, such as coastlines, are statistically self-similar: parts of them show the same statistical properties at many scales.[2] Self-similarity is a typical property of fractals. Scale invariance is an exact form of self-similarity where at any magnification there is a smaller piece of the object that is similar to the whole. For instance, a side of the Koch snowflake is both symmetrical and scale-invariant; it can be continually magnified 3× without changing shape. The non-trivial similarity evident in fractals is distinguished by their fine structure, or detail on arbitrarily small scales. As a counterexample, whereas any portion of a straight line may resemble the whole, further detail is not revealed.

Peitgen et al. explain the concept as such:

If parts of a figure are small replicas of the whole, then the figure is called self-similar.... A figure is strictly self-similar if the figure can be decomposed into parts which are exact replicas of the whole. Any arbitrary part contains an exact replica of the whole figure.[3]

Since mathematically, a fractal may show self-similarity under arbitrary magnification, it is impossible to recreate this physically. Peitgen et al. suggest studying self-similarity using approximations:

In order to give an operational meaning to the property of self-similarity, we are necessarily restricted to dealing with finite approximations of the limit figure. This is done using the method which we will call box self-similarity where measurements are made on finite stages of the figure using grids of various sizes.[4]

This vocabulary was introduced by Benoit Mandelbrot in 1964.[5]

In mathematics, self-affinity is a feature of a fractal whose pieces are scaled by different amounts in the x and y directions. This means that to appreciate the self-similarity of these fractal objects, they have to be rescaled using an anisotropic affine transformation.

A compact topological space X is self-similar if there exists a finite set S indexing a set of non-surjective homeomorphisms $\{f_s : s \in S\}$ for which

$X = \bigcup_{s \in S} f_s(X).$

If $X \subset Y$, we call X self-similar if it is the only non-empty subset of Y such that the equation above holds for $\{f_s : s \in S\}$. We call $(X, S, \{f_s : s \in S\})$ a self-similar structure.

The homeomorphisms may be iterated, resulting in an iterated function system. The composition of functions creates the algebraic structure of a monoid. When the set S has only two elements, the monoid is known as the dyadic monoid. The dyadic monoid can be visualized as an infinite binary tree; more generally, if the set S has p elements, then the monoid may be represented as a p-adic tree. The automorphisms of the dyadic monoid form the modular group; the automorphisms can be pictured as hyperbolic rotations of the binary tree. A more general notion than self-similarity is self-affinity.

The Mandelbrot set is also self-similar around Misiurewicz points.

Self-similarity has important consequences for the design of computer networks, as typical network traffic has self-similar properties. For example, in teletraffic engineering, packet-switched data traffic patterns seem to be statistically self-similar.[6] This property means that simple models using a Poisson distribution are inaccurate, and networks designed without taking self-similarity into account are likely to function in unexpected ways.
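One common way to quantify such statistical self-similarity is the Hurst exponent H. As a minimal sketch (ours; the function name and block sizes are arbitrary), the aggregated-variance method exploits the fact that for a self-similar series the variance of m-aggregated block means scales as m^(2H−2):

```python
import numpy as np

def hurst_aggregated_variance(x, block_sizes):
    # Variance of block means ~ m**(2H - 2) for self-similar series,
    # so H is read off the slope of a log-log fit.
    variances = []
    for m in block_sizes:
        n = len(x) // m
        variances.append(x[:n * m].reshape(n, m).mean(axis=1).var())
    slope = np.polyfit(np.log(block_sizes), np.log(variances), 1)[0]
    return 1.0 + slope / 2.0

rng = np.random.default_rng(0)
iid_counts = rng.poisson(10.0, size=200_000).astype(float)
print(hurst_aggregated_variance(iid_counts, [10, 20, 50, 100, 200, 500]))
# ~0.5 for memoryless (Poisson-like) traffic; measured LAN traces are
# famously closer to H ~ 0.7-0.9, the signature of self-similar traffic.
```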
Similarly, stock market movements are described as displaying self-affinity, i.e. they appear self-similar when transformed via an appropriate affine transformation for the level of detail being shown.[7] Andrew Lo describes stock market log return self-similarity in econometrics.[8]

Finite subdivision rules are a powerful technique for building self-similar sets, including the Cantor set and the Sierpinski triangle. Some space-filling curves, such as the Peano curve and Moore curve, also feature properties of self-similarity.[9]

The viable system model of Stafford Beer is an organizational model with an affine self-similar hierarchy, where a given viable system is one element of the System One of a viable system one recursive level higher up, and whose own System One elements are viable systems one recursive level lower down.

Self-similarity can be found in nature as well. Plants, such as Romanesco broccoli, exhibit strong self-similarity.
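The iterated-function-system definition above also gives a compact way to draw such sets. A minimal "chaos game" sketch (ours; the maps are the standard ratio-1/2 contractions) converges to the Sierpinski triangle mentioned earlier:

```python
import numpy as np

# Three contractions f_s(p) = (p + v_s) / 2 toward the triangle's vertices;
# iterating them at random converges to the self-similar attractor
# X = f_1(X) U f_2(X) U f_3(X), the Sierpinski triangle.
vertices = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])
rng = np.random.default_rng(42)

p = rng.random(2)
points = []
for _ in range(50_000):
    p = (p + vertices[rng.integers(3)]) / 2.0
    points.append(p)

cloud = np.array(points[100:])   # discard transients before the attractor
print(cloud.min(axis=0), cloud.max(axis=0))  # spans the unit triangle
```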
https://en.wikipedia.org/wiki/Self-similarity
A story within a story, also referred to as an embedded narrative, is a literary device in which a character within a story becomes the narrator of a second story (within the first one).[1] Multiple layers of stories within stories are sometimes called nested stories. A play may have a brief play within it, such as in Shakespeare's play Hamlet; a film may show the characters watching a short film; or a novel may contain a short story within the novel. A story within a story can be used in all types of narration, including poems and songs.

Stories within stories can be used simply to enhance entertainment for the reader or viewer, or can act as examples to teach lessons to other characters.[2] The inner story often has a symbolic and psychological significance for the characters in the outer story. There is often some parallel between the two stories, and the fiction of the inner story is used to reveal the truth in the outer story.[3] Often the stories within a story are used to satirize views, not only in the outer story, but also in the real world. When a story is told within another instead of being told as part of the plot, it allows the author to play on the reader's perceptions of the characters: the motives and the reliability of the storyteller are automatically in question.[2]

Stories within a story may disclose the background of characters or events, tell of myths and legends that influence the plot, or even seem to be extraneous diversions from the plot. In some cases, the story within a story is involved in the action of the plot of the outer story. In others, the inner story is independent, and could either be skipped or stand separately, although many subtle connections may be lost. Often there is more than one level of internal stories, leading to deeply nested fiction. Mise en abyme is the French term for a similar literary device (also referring to the practice in heraldry of placing the image of a small shield on a larger shield).

The literary device of stories within a story dates back to a device known as a "frame story", where a supplemental story is used to help tell the main story. Typically, the outer story or "frame" does not have much matter, and most of the work consists of one or more complete stories told by one or more storytellers.

The earliest examples of "frame stories" and "stories within stories" were in ancient Egyptian and Indian literature, such as the Egyptian "Tale of the Shipwrecked Sailor"[4] and Indian epics like the Ramayana, Seven Wise Masters, Hitopadesha and Vikrama and Vethala. In Vishnu Sarma's Panchatantra, an inter-woven series of colorful animal tales are told with one narrative opening within another, sometimes three or four layers deep, and then unexpectedly snapping shut in irregular rhythms to sustain attention. In the epic Mahabharata, the Kurukshetra War is narrated by a character in Vyasa's Jaya, which itself is narrated by a character in Vaisampayana's Bharata, which itself is narrated by a character in Ugrasrava's Mahabharata.

Both The Golden Ass by Apuleius and Metamorphoses by Ovid extend the depths of framing to several degrees. Another early example is the One Thousand and One Nights (Arabian Nights), where the general story is narrated by an unknown narrator, and in this narration the stories are told by Scheherazade. In many of Scheherazade's narrations there are also stories narrated, and even in some of these there are some other stories.[5] An example of this is "The Three Apples", a murder mystery narrated by Scheherazade.
Within the story, after the murderer reveals himself, he narrates a flashback of events leading up to the murder. Within this flashback, an unreliable narrator tells a story to mislead the would-be murderer, who later discovers that he was misled after another character narrates the truth to him.[6] As the story concludes, the "Tale of Núr al-Dín Alí and his Son" is narrated within it. This perennially popular work can be traced back to Arabic, Persian, and Indian storytelling traditions.

Mary Shelley's Frankenstein has a deeply nested frame-story structure that features the narration of Walton, who records the narration of Victor Frankenstein, who recounts the narration of his creation, who narrates the story of a cabin-dwelling family he secretly observes. Another classic novel with a frame story is Wuthering Heights, the majority of which is recounted by the central family's housekeeper to a boarder.

Similarly, Roald Dahl's story The Wonderful Story of Henry Sugar is about a rich bachelor who finds an essay written by someone who learned to "see" playing cards from the reverse side. The full text of this essay is included in the story, and itself includes a lengthy sub-story told as a true experience by one of the essay's protagonists, Imhrat Khan.

Lewis Carroll's Alice books, Alice's Adventures in Wonderland (1865) and Through the Looking-Glass (1871), contain several poems that are mostly recited by various characters to the titular character. The most notable examples are "You Are Old, Father William", "'Tis the Voice of the Lobster", "Jabberwocky", and "The Walrus and the Carpenter".

Chaucer's The Canterbury Tales and Boccaccio's Decameron are also classic frame stories. In Chaucer's Canterbury Tales, the characters tell tales suited to their personalities and tell them in ways that highlight their personalities. The noble knight tells a noble story, the boring character tells a very dull tale, and the rude miller tells a smutty tale. Homer's Odyssey too makes use of this device; Odysseus' adventures at sea are all narrated by Odysseus to the court of king Alcinous in Scheria. Other shorter tales, many of them false, account for much of the Odyssey.

Many modern children's story collections are essentially anthology works connected by this device, such as Arnold Lobel's Mouse Tales, Paula Fox's The Little Swineherd, and Phillip and Hillary Sherlock's Ears and Tails and Common Sense.

A well-known modern example of framing is the fantasy genre work The Princess Bride (both the book and the film). In the film, a grandfather is reading the story of The Princess Bride to his grandson. In the book, a more detailed frame story has a father editing a much longer (but fictive) work for his son, creating his own "Good Parts Version" (as the book called it) by leaving out all the parts that would bore or displease a young boy. Both the book and the film assert that the central story is from a book called The Princess Bride by a nonexistent author named S. Morgenstern.

In the Welsh novel Aelwyd F'Ewythr Robert (1852) by Gwilym Hiraethog, a visitor to a farm in north Wales tells the story of Uncle Tom's Cabin to those gathered around the hearth.

Sometimes a frame story exists in the same setting as the main story. On the television series The Young Indiana Jones Chronicles, each episode was framed as though it were being told by Indy when he was older (usually acted by George Hall, but once by Harrison Ford).
The same device of an adult narrator representing the older version of a young protagonist is used in the films Stand by Me and A Christmas Story, and the television shows The Wonder Years and How I Met Your Mother.

The Amory Wars, a tale told through the music of Coheed and Cambria, tells a story across the first two albums but reveals in the third that the story is being actively written by a character called the Writer. During the album, the Writer delves into his own story and kills one of the characters, much to the dismay of the main character.

The critically acclaimed Beatles album Sgt. Pepper's Lonely Hearts Club Band is presented as a stage show by the fictional eponymous band, and one of its songs, "A Day in the Life", is in the form of a story within a dream. Similarly, the Fugees album The Score is presented as the soundtrack to a fictional film, as are several other notable concept albums, while Wyclef Jean's The Carnival is presented as testimony at a trial. The majority of Ayreon's albums outline a sprawling, loosely interconnected science fiction narrative, as do the albums of Janelle Monae.

On Tom Waits's concept album Alice (consisting of music he wrote for the musical of the same name), most of the songs are (very) loosely inspired by both Alice in Wonderland and the book's real-life author, Lewis Carroll, and his inspiration, Alice Liddell. The song "Poor Edward", however, is presented as a story told by a narrator about Edward Mordrake, and the song "Fish and Bird" is presented as a retold story that the narrator heard from a sailor.

In his 1895 historical novel Pharaoh, Bolesław Prus introduces a number of stories within the story, ranging in length from vignettes to full-blown stories, many of them drawn from ancient Egyptian texts, that further the plot, illuminate characters, and even inspire the fashioning of individual characters. Jan Potocki's The Manuscript Found in Saragossa (1797–1805) has an interlocking structure with stories-within-stories reaching several levels of depth.

The provenance of the story is sometimes explained internally, as in The Lord of the Rings by J. R. R. Tolkien, which depicts the Red Book of Westmarch (a story-internal version of the book itself) as a history compiled by several of the characters. The subtitle of The Hobbit ("There and Back Again") is depicted as part of a rejected title of this book within a book, and The Lord of the Rings is a part of the final title.[7]

An example of an interconnected inner story is "The Mad Trist" in Edgar Allan Poe's Fall of the House of Usher, where through somewhat mystical means the narrator's reading of the story within a story influences the reality of the story he has been telling, so that what happens in "The Mad Trist" begins happening in "The Fall of the House of Usher". Also, in Don Quixote by Miguel de Cervantes, there are many stories within the story that influence the hero's actions (there are others that even the author himself admits are purely digressive). Most of the first part is presented as a translation of a found manuscript by the fictional Cide Hamete Benengeli.

A commonly independently anthologised story is "The Grand Inquisitor" by Dostoevsky from his long psychological novel The Brothers Karamazov, which is told by one brother to another to explain, in part, his view on religion and morality. It also, in a succinct way, dramatizes many of Dostoevsky's interior conflicts.
An example of a "bonus material" style inner story is the chapter "The Town-Ho's Story" in Herman Melville's novel Moby-Dick; that chapter tells a fully formed story of an exciting mutiny and contains many plot ideas that Melville had conceived during the early stages of writing Moby-Dick, ideas originally intended to be used later in the novel. As the writing progressed, these plot ideas eventually proved impossible to fit around the characters that Melville went on to create and develop. Instead of discarding the ideas altogether, Melville wove them into a coherent short story and had the character Ishmael demonstrate his eloquence and intelligence by telling the story to his impressed friends.

One of the most complicated structures of a story within a story was used by Vladimir Nabokov in his novel The Gift. There, both the poems and short stories of the main character, Fyodor Cherdyntsev, function as inner stories, as does the whole of Chapter IV, a critical biography of Nikolay Chernyshevsky (also written by Fyodor). This novel is considered one of the first metanovels in literature.

With the rise of literary modernism, writers experimented with ways in which multiple narratives might nest imperfectly within each other. A particularly ingenious example of nested narratives is James Merrill's 1974 modernist poem "Lost in Translation".

In Rabih Alameddine's novel The Hakawati, or The Storyteller, the protagonist describes coming home to the funeral of his father, one of a long line of traditional Arabic storytellers. Throughout the narrative, the author becomes a hakawati (an Arabic word for a teller of traditional tales) himself, weaving the tale of his own life and that of his family with folkloric versions of tales from the Qur'an, the Old Testament, Ovid, and One Thousand and One Nights. Both the tales he tells of his family (going back to his grandfather) and the embedded folk tales themselves embed other tales, often two or more layers deep.

In Sue Townsend's Adrian Mole: The Wilderness Years, Adrian writes the book Lo! The Flat Hills of My Homeland, in which the character Jake Westmorland writes a book called Sparg of Kronk, where the character Sparg writes a book with no language.

In Anthony Horowitz's Magpie Murders, a significant proportion of the book consists of a fictional but authentically formatted mystery novel by Alan Conway, titled Magpie Murders. The secondary novel ends before its conclusion, returning the narrative to the primary story, in which the protagonist and reviewer of the book attempts to find the final chapter. As this progresses, characters and messages within the fictional Magpie Murders manifest themselves within the primary narrative, and the final chapter's content reveals the reason for its original absence.

Dreams are a common way of including stories inside stories, and can sometimes go several levels deep. Both the book The Arabian Nightmare and the curse of "eternal waking" from the Neil Gaiman series The Sandman feature an endless series of waking from one dream into another dream. In Charles Maturin's novel Melmoth the Wanderer, the use of vast stories-within-stories creates a sense of dream-like quality in the reader.

The 2023 Christian fiction novel Just Once by Karen Kingsbury features a series of three nested stories, all centering on the main characters, Hank and Irvel Myers.

This structure is also found in classic religious and philosophical texts.
The structure of The Symposium and Phaedo, attributed to Plato, is of a story within a story within a story. In the Christian Bible, the gospels are accounts of the life and ministry of Jesus. However, they also include within them the parables that Jesus told. In more modern philosophical works, Jostein Gaarder's books often feature this device. Examples are The Solitaire Mystery, where the protagonist receives a small book from a baker, in which the baker tells the story of a sailor who tells the story of another sailor, and Sophie's World, about a girl who is actually a character in a book that is being read by Hilde, a girl in another dimension. Later on in the book Sophie questions this idea, and realizes that Hilde too could be a character in a story that in turn is being read by another. Mahabharata, an Indian epic that is also the world's longest epic, has a nested structure.[8]

The experimental modernist works that incorporate multiple narratives into one story are quite often science fiction or science-fiction influenced. These include most of the various novels written by the American author Kurt Vonnegut. Vonnegut includes the recurring character Kilgore Trout in many of his novels. Trout acts as the mysterious science fiction writer who enhances the morals of the novels through plot descriptions of his stories. Books such as Breakfast of Champions and God Bless You, Mr. Rosewater are sprinkled with these plot descriptions. Stanisław Lem's Tale of the Three Storytelling Machines of King Genius from The Cyberiad has several levels of storytelling; all levels tell stories of the same person, Trurl.

House of Leaves, the tale of a man who finds a manuscript telling the story of a documentary that may or may not have ever existed, contains multiple layers of plot. The book includes footnotes and letters that tell their own stories only vaguely related to the events in the main narrative of the book, and footnotes for fake books.

Robert A. Heinlein's later books (The Number of the Beast, The Cat Who Walks Through Walls and To Sail Beyond the Sunset) propose the idea that every real universe is a fiction in another universe. This hypothesis enables many writers who are characters in the books to interact with their own creations. Margaret Atwood's novel The Blind Assassin is interspersed with excerpts from a novel written by one of the main characters; the novel-within-a-novel itself contains a science fiction story written by one of that novel's characters.

In Philip K. Dick's novel The Man in the High Castle, each character comes into interaction with a book called The Grasshopper Lies Heavy, which was written by the Man in the High Castle. As Dick's novel details a world in which the Axis Powers of World War II had succeeded in dominating the known world, the novel within the novel details an alternative to this history in which the Allies overcome the Axis and bring stability to the world – a victory which itself is quite different from real history.

In Red Orc's Rage by Philip J. Farmer, a doubly recursive method is used to intertwine its fictional layers. This novel is part of a science fiction series, the World of Tiers. Farmer collaborated in the writing of this novel with an American psychiatrist, A. James Giannini, who had previously used the World of Tiers series in treating patients in group therapy. During these therapeutic sessions, the content and process of the text and novelist were discussed rather than the lives of the patients. In this way subconscious defenses could be circumvented.
Farmer took the real-life case studies and melded these with adventures of his characters in the series.[9]

The Quantum Leap novel Knights of the Morningstar also features a character who writes a book by that name. In Matthew Stover's Star Wars novel Shatterpoint, the protagonist Mace Windu narrates the story within his journal, while the main story is being told from the third-person limited point of view. Several Star Trek tales are stories or events within stories, such as Gene Roddenberry's novelization of Star Trek: The Motion Picture, J. A. Lawrence's Mudd's Angels, John M. Ford's The Final Reflection, Margaret Wander Bonanno's Strangers from the Sky (which adopts the conceit that it is a book from the future by an author called Gen Jaramet-Sauner), and J. R. Rasmussen's "Research" in the anthology Star Trek: Strange New Worlds II. Steven Barnes's novelization of the Star Trek: Deep Space Nine episode "Far Beyond the Stars" partners with Greg Cox's The Eugenics Wars: The Rise and Fall of Khan Noonien Singh (Volume Two) to tell us that the fictional story "Far Beyond the Stars" (whose setting and cast closely resemble Deep Space Nine), and, by extension, all of Star Trek itself, is the creation of 1950s writer Benny Russell.

The book Cloud Atlas (later adapted into a film by The Wachowskis and Tom Tykwer) consists of six interlinked stories nested inside each other in a Russian-doll fashion. The first story (that of Adam Ewing in the 1850s befriending an escaped slave) is interrupted halfway through and revealed to be part of a journal being read by composer Robert Frobisher in 1930s Belgium. His own story of working for a more famous composer is told in a series of letters to his lover Rufus Sixsmith, which are interrupted halfway through and revealed to be in the possession of an investigative journalist named Luisa Rey, and so on. Each of the first five tales is interrupted in the middle, with the sixth tale being told in full, before the preceding five tales are finished in reverse order. Each layer of the story either challenges the veracity of the previous layer, or is challenged by the succeeding layer. Presuming each layer to be a true telling within the overall story, a chain of events is created linking Adam Ewing's embrace of the abolitionist movement in the 1850s to the religious redemption of a post-apocalyptic tribal man over a century after the fall of modern civilization. The characters in each nested layer take inspiration or lessons from the stories of their predecessors in a manner that validates a belief stated in the sixth tale that "Our lives are not our own. We are bound to others, past and present and by each crime, and every kindness, we birth our future."

The Crying of Lot 49 by Thomas Pynchon has several characters seeing a play called The Courier's Tragedy by the fictitious Jacobean playwright Richard Wharfinger. The events of the play broadly mirror those of the novel and give the character Oedipa Maas a greater context to consider her predicament; the play concerns a feud between two rival mail-distribution companies, which appears to be ongoing to the present day, and in which, if this is the case, Oedipa has found herself involved. As in Hamlet, the director makes changes to the original script; in this instance, a couplet that was added, possibly by religious zealots intent on giving the play extra moral gravity, is said only on the night that Oedipa sees the play.
From what Pynchon relates, this is the only mention in the play of the name of Thurn and Taxis' rivals, Trystero, and it is the seed for the conspiracy that unfurls.

A significant portion of Walter Moers' Labyrinth of Dreaming Books is an ekphrasis on the subject of an epic puppet-theater presentation. Another example is found in Samuel Delany's Trouble on Triton, which features a theater company that produces elaborate staged spectacles for randomly selected single-person audiences. Plays produced by the "Caws of Art" theater company also feature in Russell Hoban's modern fable, The Mouse and His Child. Raina Telgemeier's best-selling Drama is a graphic novel about a middle-school musical production and the tentative romantic fumblings of its cast members.

In Manuel Puig's Kiss of the Spider Woman, ekphrases on various old movies, some real and some fictional, make up a substantial portion of the narrative. In Paul Russell's Boys of Life, descriptions of movies by director/antihero Carlos (loosely inspired by controversial director Pier Paolo Pasolini) provide a narrative counterpoint and add a touch of surrealism to the main narrative. They additionally raise the question of whether works of artistic genius justify or atone for the sins and crimes of their creators. Auster's The Book of Illusions (2002) and Theodore Roszak's Flicker (1991) also rely heavily on fictional films within their respective narratives.

This dramatic device was probably first used by Thomas Kyd in The Spanish Tragedy around 1587, where the play is presented before an audience of two of the characters, who comment upon the action.[10][11] From references in other contemporary works, Kyd is also assumed to have been the writer of an early, lost version of Hamlet (the so-called Ur-Hamlet), with a play-within-a-play interlude.[12] William Shakespeare's Hamlet retains this device by having Hamlet ask some strolling players to perform The Murder of Gonzago. The action and characters in The Murder mirror the murder of Hamlet's father in the main action, and Prince Hamlet writes additional material to emphasize this. Hamlet wishes to provoke the murderer, his uncle, and sums this up by saying "the play's the thing wherein I'll catch the conscience of the king." Hamlet calls this new play The Mouse-trap (a title that Agatha Christie later took for the long-running play The Mousetrap).

Christie's work was parodied in Tom Stoppard's The Real Inspector Hound, in which two theater critics are drawn into the murder mystery they are watching. The audience is similarly absorbed into the action in Woody Allen's play God, which is about two failed playwrights in Ancient Greece. The phrase "The Conscience of the King" also became the title of a Star Trek episode featuring a production of Hamlet which leads to the exposure of a murderer (although not a king). The play I Hate Hamlet and the movie A Midwinter's Tale are about a production of Hamlet, which in turn includes a production of The Murder of Gonzago, as does the Hamlet-based film Rosencrantz & Guildenstern Are Dead, which even features a third-level puppet-theatre version within the play.

Similarly, in Anton Chekhov's The Seagull there are specific allusions to Hamlet: in the first act a son stages a play to impress his mother, a professional actress, and her new lover; the mother responds by comparing her son to Hamlet. Later he tries to come between them, as Hamlet had done with his mother and her new husband.
The tragic developments in the plot follow in part from the scorn the mother shows for her son's play.[13]

Shakespeare adopted the play-within-a-play device for many of his other plays as well, including A Midsummer Night's Dream and Love's Labours Lost. Almost the whole of The Taming of the Shrew is a play-within-a-play, presented to convince Christopher Sly, a drunken tinker, that he is a nobleman watching a private performance, but the device has no relevance to the plot (unless Katharina's subservience to her "lord" in the last scene is intended to strengthen the deception against the tinker[14]) and is often dropped in modern productions. The musical Kiss Me, Kate is about the production of a fictitious musical, The Taming of the Shrew, based on the comedy The Taming of the Shrew by William Shakespeare, and features several scenes from it. Pericles, Prince of Tyre draws in part on the 14th-century Confessio Amantis (itself a frame story) by John Gower, and Shakespeare has the ghost of Gower "assume man's infirmities" to introduce his work to the contemporary audience and comment on the action of the play.[15]

In Francis Beaumont's Knight of the Burning Pestle (c. 1608) a supposed common citizen from the audience, actually a "planted" actor, condemns the play that has just started and "persuades" the players to present something about a shopkeeper. The citizen's "apprentice" then acts, pretending to extemporise, in the rest of the play. This is a satirical tilt at Beaumont's playwright contemporaries and their current fashion for offering plays about London life.[16]

The opera Pagliacci is about a troupe of actors who perform a play about marital infidelity that mirrors their own lives,[17] and composer Richard Rodney Bennett and playwright-librettist Beverley Cross's The Mines of Sulphur features a ghostly troupe of actors who perform a play about murder that similarly mirrors the lives of their hosts, from whom they depart, leaving them with the plague as nemesis.[18] John Adams' Nixon in China (1985–1987) features a surreal version of Madam Mao's Red Detachment of Women, illuminating the ascendance of human values over the disillusionment of high politics in the meeting.[19]

In Bertolt Brecht's The Caucasian Chalk Circle, a play is staged as a parable to villagers in the Soviet Union to justify the re-allocation of their farmland: the tale describes how a child is awarded to a servant-girl rather than its natural mother, an aristocrat, as the woman most likely to care for it well. This kind of play-within-a-play, which appears at the beginning of the main play and acts as a "frame" for it, is called an "induction". Brecht's one-act play The Elephant Calf (1926) is a play-within-a-play performed in the foyer of the theatre during his Man Equals Man.

In Jean Giraudoux's play Ondine, all of act two is a series of scenes within scenes, sometimes two levels deep. This increases the dramatic tension and also makes more poignant the inevitable failure of the relationship between the mortal Hans and the water sprite Ondine.

The Two-Character Play by Tennessee Williams has a concurrent double plot with the convention of a play within a play. Felice and Clare are siblings and are both actor/producers touring The Two-Character Play. They have supposedly been abandoned by their crew and have been left to put on the play by themselves. The characters in the play are also brother and sister, and are also named Clare and Felice.
The Mysteries, a modern reworking of the medieval mystery plays, remains faithful to its roots by having the modern actors play the sincere, naïve tradesmen and women as they take part in the original performances.[20]

Alternatively, a play might be about the production of a play, and include the performance of all or part of the play, as in Noises Off, A Chorus of Disapproval, or Lilies. Similarly, the musical Man of La Mancha presents the story of Don Quixote as an impromptu play staged in prison by Quixote's author, Miguel de Cervantes.

In most stagings of the musical Cats, which include the song "Growltiger's Last Stand" – a recollection of an old play by Gus the Theatre Cat – the character of Lady Griddlebone sings "The Ballad of Billy McCaw". (However, many productions of the show omit "Growltiger's Last Stand", and "The Ballad of Billy McCaw" has at times been replaced with a mock aria, so this metastory is not always seen.) Depending on the production, there is another musical scene called "The Awful Battle of the Pekes and the Pollicles", where the Jellicles put on a show for their leader.

In Lestat: The Musical, there are three plays within the play: first, when Lestat visits his childhood friend Nicolas, who works in a theater, where Lestat discovers his love for the stage; and two more when the Theater of the Vampires performs. One of these is used as a plot mechanism to explain the vampire god, Marius, which sparks an interest in Lestat to find him.

A play within a play occurs in the musical The King and I, where Princess Tuptim and the royal dancers give a performance of Small House of Uncle Thomas (or Uncle Tom's Cabin) to their English guests. The play mirrors Tuptim's situation, as she wishes to run away from slavery to be with her lover, Lun Tha. In stagings of Dina Rubina's play Always the Same Dream, the story is about staging a school play based on a poem by Pushkin. Joseph Heller's 1967 play We Bombed in New Haven is about actors engaged in a play about military airmen; the actors themselves become at times unsure whether they are actors or actual airmen.

The 1937 musical Babes in Arms is about a group of kids putting on a musical to raise money. The central plot device was retained for the popular 1939 film version with Judy Garland and Mickey Rooney. A similar plot was recycled for the films White Christmas and The Blues Brothers.

The 1946 film noir The Locket contains a nested flashback structure, with a screenplay by Sheridan Gibney based on the story "What Nancy Wanted" by Norma Barzman.

The François Truffaut film Day for Night is about the making of a fictitious movie called Meet Pamela (Je vous présente Pamela) and shows the interactions of the actors as they are making this movie about a woman who falls for her husband's father. The story of Pamela involves lust, betrayal, death, sorrow, and change, events that are mirrored in the experiences of the actors portrayed in Day for Night. There are a wealth of other movies that revolve around the film industry itself, even if not centering exclusively on one nested film. These include the darkly satirical classic Sunset Boulevard, about an aging star and her parasitic victim, and the Coen Brothers' farce Hail, Caesar!

The script to Karel Reisz's movie The French Lieutenant's Woman (1981), written by Harold Pinter, is a film-within-a-film adaptation of John Fowles's book. In addition to the Victorian love story of the book, Pinter creates a present-day background story that shows a love affair between the main actors.
The Muppet Movie begins with the Muppets sitting down in a theater to watch the eponymous movie, which Kermit the Frog claims to be a semi-biographical account of how they all met. In Buster Keaton's Sherlock Jr., Keaton's protagonist actually enters into a film while it is playing in a cinema, as does the main character in the Arnold Schwarzenegger film The Last Action Hero. A similar device is used in the music video for the song "Take On Me" by A-ha, which features a woman entering a pencil sketch. Conversely, Woody Allen's Purple Rose of Cairo is about a film character exiting the film to interact with the real world. Allen's earlier film Play It Again, Sam featured liberal use of characters, dialogue and clips from the film classic Casablanca as a central device.

The 2002 Pedro Almodóvar film Talk to Her (Hable con ella) has the chief character Benigno tell a story called The Shrinking Lover to Alicia, a long-term comatose patient whom Benigno, a male nurse, is assigned to care for. The film presents The Shrinking Lover in the form of a black-and-white silent melodrama. To prove his love to a scientist girlfriend, The Shrinking Lover's protagonist drinks a potion that makes him progressively smaller. The resulting seven-minute scene, which is readily intelligible and enjoyable as a stand-alone short subject, is considerably more overtly comic than the rest of Talk to Her: the protagonist climbs giant breasts as if they were rock formations and even ventures his way inside a (compared to him) gigantic vagina. Critics have noted that The Shrinking Lover essentially is a sex metaphor. Later in Talk to Her, the comatose Alicia is discovered to be pregnant and Benigno is sentenced to jail for rape. The Shrinking Lover was named Best Scene of 2002 in the Skandies, an annual survey of online cinephiles and critics invited each year by critic Mike D'Angelo.[21]

Tropic Thunder (2008) is a comedy film revolving around a group of prima donna actors making a Vietnam War film (itself also named Tropic Thunder) when their fed-up writer and director decide to abandon them in the middle of the jungle, forcing them to fight their way out. The concept was perhaps inspired by the 1986 comedy Three Amigos, where three washed-up silent film stars are expected to live out a real-life version of their old hit movies. The same idea of life being forced to imitate art is also reprised in the Star Trek parody Galaxy Quest.

The first episode of the anime series The Melancholy of Haruhi Suzumiya consists almost entirely of a poorly made film that the protagonists created, complete with Kyon's typical, sarcastic commentary.

Chuck Jones's 1953 cartoon Duck Amuck shows Daffy Duck trapped in a cartoon that an unseen animator repeatedly manipulates. At the end, it is revealed that the whole cartoon was being controlled by Bugs Bunny. The Duck Amuck plot was essentially replicated in one of Jones' later cartoons, Rabbit Rampage (1955), in which Bugs Bunny turns out to be the victim of the sadistic animator (Elmer Fudd). A similar plot was also included in an episode of New Looney Tunes, in which Bugs is the victim, Daffy is the animator, and it was made on a computer instead of a pencil and paper. In 2007, the Duck Amuck sequence was parodied on Drawn Together ("Nipple Ring-Ring Goes to Foster Care").

All feature-length films by Jörg Buttgereit except Schramm feature a film within the film. In Nekromantik, the protagonist goes to the cinema to see the fictional slasher film Vera.
In Der Todesking, one of the characters watches a video of the fictional Nazi exploitation film Vera – Todesengel der Gestapo, and in Nekromantik 2 the characters go to see a film called Mon déjeuner avec Vera, which is a parody of Louis Malle's My Dinner with André.

Quentin Tarantino's Inglourious Basterds depicts a Nazi propaganda film called Nation's Pride, which glorifies a soldier in the German army. Nation's Pride is directed by Eli Roth.

Joe Dante's Matinee depicts Mant, an early-1960s sci-fi/horror movie about a man who turns into an ant. In one scene, the protagonists see a Disney-style family movie called The Shook-Up Shopping Cart.

The 2002 martial arts epic Hero presented the same narrative several different times, as recounted by different storytellers, but with both factual and aesthetic differences. Similarly, in the whimsical 1988 Terry Gilliam film The Adventures of Baron Munchausen and the 2003 Tim Burton film Big Fish, the bulk of the film is a series of stories told by an (extremely) unreliable narrator. In the 2006 Tarsem film The Fall, an injured silent-movie stuntman tells heroic fantasy stories to a little girl with a broken arm to pass time in the hospital, which the film visualizes and presents with the stuntman's voice becoming voiceover narration. The fantasy tale bleeds back into and comments on the film's "present-tense" story. There are often incongruities based on the fact that the stuntman is an American and the girl Persian: the stuntman's voiceover refers to "Indians", "a squaw" and "a teepee", but the visuals show a Bollywood-style devi and a Taj Mahal-like castle. The same conceit of an unreliable narrator was used to very different effect in the 1995 crime drama The Usual Suspects (which garnered an Oscar for Kevin Spacey's performance).

Walt Disney's 1946 live-action drama film Song of the South has three animated sequences, all based on the Br'er Rabbit stories, told as moral fables by Uncle Remus (James Baskett) to seven-year-old Johnny (Bobby Driscoll) and his friends Ginny (Luana Patten) and Toby (Glenn Leedy).

The seminal 1950 Japanese film Rashomon, based on the Japanese short story "In a Grove" (1921), utilizes the flashback-within-a-flashback technique. The story unfolds in flashback as the four witnesses in the story (the bandit, the murdered samurai, his wife, and the nameless woodcutter) recount the events of one afternoon in a grove. But it is also a flashback within a flashback, because the accounts of the witnesses are being retold by a woodcutter and a priest to a ribald commoner as they wait out a rainstorm in a ruined gatehouse.

The film Inception has a deeply nested structure that is itself part of the setting, as the characters travel deeper and deeper into layers of dreams within dreams. Similarly, at the beginning of the music video for the Michael Jackson song "Thriller", the heroine is terrorized by her monster boyfriend in what turns out to be a film within a dream.

The film The Grand Budapest Hotel has four layers of narration: starting with a young girl at the author's memorial reading his book, it cuts to the old author in 1985 telling of an incident in 1968 when he, as a young author, stayed at the hotel and met the owner, old Zero. He was then told the story of young Zero and M. Gustave, from 1932, which makes up most of the narrative. The 2025 film Dog Man adapts the Dog Man comics, which are themselves framed as comics created by characters from the Captain Underpants series.

The 2001 film Moulin Rouge! features a fictitious musical within the film, called "Spectacular Spectacular".
The 1942 Ernst Lubitsch comedy To Be or Not to Be confuses the audience in the opening scenes with a play, "The Naughty Nazis", about Adolf Hitler, which appears to be taking place within the actual plot of the film. Thereafter, the acting company's players serve as the protagonists of the film and frequently use acting and costumes to deceive various characters in the film. Hamlet also serves as an important throughline in the film, as suggested by the title. Laurence Olivier sets the opening scene of his 1944 film of Henry V in the tiring room of the old Globe Theatre as the actors prepare for their roles on stage. The early part of the film follows the actors in these "stage" performances, and only later does the action almost imperceptibly expand to the full realism of the Battle of Agincourt. By way of increasingly more artificial sets (based on mediaeval paintings) the film finally returns to the Globe. Mel Brooks' film The Producers revolves around a scheme to make money by producing a disastrously bad Broadway musical, Springtime for Hitler. Ironically, the film itself was later made into its own Broadway musical (although a more intentionally successful one). The Outkast music video for the song "Roses" is a short film about a high school musical. In Diary of a Wimpy Kid, the middle-schoolers put on a play of The Wizard of Oz, while High School Musical is a romantic comedy about the eponymous musical itself. A high school production is also featured in the gay teen romantic comedy Love, Simon. A 2012 Italian film, Caesar Must Die, stars real-life Italian prisoners who rehearse Shakespeare's Julius Caesar in Rebibbia prison, playing fictional Italian prisoners rehearsing the same play in the same prison. In addition, the film itself becomes a Julius Caesar adaptation of sorts, as the scenes are frequently acted all around the prison, outside of rehearsals, and the prison life becomes indistinguishable from the play.[22] The main plot device in Repo! The Genetic Opera is an opera which is going to be held the night of the events of the film. All of the principal characters of the film play a role in the opera, though the audience watching the opera is unaware that some of the events portrayed are more than drama. The 1990 biopic Korczak, about the last days of a Jewish children's orphanage in Nazi-occupied Poland, features an amateur production of Rabindranath Tagore's The Post Office, which was selected by the orphanage's visionary leader as a way of preparing his charges for their own impending death. That same production is also featured in the stage play Korczak's Children, also inspired by the same historical events. The 1973 film The National Health, an adaptation of the 1969 play The National Health by Peter Nichols, features a send-up of a typical American hospital soap opera being shown on a television situated in an underfunded, unmistakably British NHS hospital. The Jim Carrey film The Truman Show is about a person who grows to adulthood without ever realizing that he is the unwitting hero of the immersive eponymous television show. In Toy Story 2, the lead character Woody learns that he is based on the lead character of the same name of a 1950s Western show known as Woody's Roundup, which was seemingly cancelled due to the rise of science fiction, though this account is eventually debunked when the final episode of the show is seen playing.
The first example of a video game within a video game is almost certainly Tim Stryker's 1980s-era text-only game Fazuul (also the world's first online multiplayer game), in which one of the objects that the player can create is a minigame. Another early use of this trope was in Cliff Johnson's 1987 hit The Fool's Errand, a thematically linked narrative puzzle game in which several of the puzzles were semi-independent games played against NPCs. Power Factor has been cited as a rare example of a video game in which the entire concept is a video game within a video game: the player takes on the role of a character who is playing a "Virtual Reality Simulator", in which he in turn takes on the role of the hero Redd Ace.[23] The .hack franchise also gives the concept a central role. It features a narrative in which internet advancements have created an MMORPG franchise called The World. Protagonists Kite and Haseo try to uncover the mysteries of the events surrounding The World. Characters in .hack are aware that they are video game characters. More commonly, however, the video-game-within-a-video-game device takes the form of minigames that are non-plot-oriented and optional to the completion of the game. For example, in the Yakuza and Shenmue franchises, there are playable arcade machines featuring other Sega games scattered throughout the game world. In Final Fantasy VII there are several video games that can be played in an arcade in the Gold Saucer theme park. In Animal Crossing, the player can acquire individual NES emulations through various means and place them within their house, where they are playable in their entirety. When placed in the house, the games take the form of a Nintendo Entertainment System. In Fallout 4 and Fallout 76, the protagonist can find several cartridges throughout the wasteland that can be played on their Pip-Boy (an electronic device that exists only in the world of the game) or any computer terminal. In Celeste, there is a hidden room in which the protagonist can play the original PICO-8 prototype of the game. In Remedy's video game Max Payne, players can chance upon a number of ongoing television shows when activating or happening upon various television sets within the game environs, depending on where they are within the unfolding game narrative. Among them are Lords & Ladies, Captain Baseball Bat Boy, Dick Justice, and the pinnacle television serial Address Unknown; heavily inspired by David Lynch-style film narrative, particularly Twin Peaks, Address Unknown sometimes prophesies events or character motives yet to occur in the Max Payne narrative. In Grand Theft Auto IV, the player can watch several TV channels which include many programs: reality shows, cartoons, and even game shows.[24] Terrance & Phillip from South Park comments on the levels of violence and acceptable behaviour in the media and allows criticism of the outer cartoon to be addressed in the cartoon itself. Similarly, on the long-running animated sitcom The Simpsons, Bart's favorite cartoon, Itchy and Scratchy (a parody of Tom & Jerry), often echoes the plotlines of the main show. The Simpsons also parodied this structure with numerous "layers" of sub-stories in the Season 17 episode "The Seemingly Never-Ending Story". The animated series SpongeBob SquarePants features numerous fictional shows, most notably The Adventures of Mermaid Man and Barnacle Boy, which stars the titular elderly superheroes Mermaid Man (Ernest Borgnine) and Barnacle Boy (Tim Conway).
On the show Dear White People, the Scandal parody Defamation offers an ironic commentary on the main show's theme of interracial relationships. Similarly, each season of the HBO show Insecure has featured a different fictional show, including the slavery-era soap opera Due North, the rebooted black 1990s sitcom Kev'yn, and the investigative documentary series Looking for LaToya. The Irish television series Father Ted features a television show, Father Ben, which has characters and storylines almost identical to those of Father Ted. The television shows 30 Rock, Studio 60 on the Sunset Strip, Sonny with a Chance, and Kappa Mikey feature a sketch show within the TV show. An extended plotline on the semi-autobiographical sitcom Seinfeld dealt with the main characters developing a sitcom about their lives. The gag was reprised on Curb Your Enthusiasm, another semi-autobiographical show by and about Seinfeld co-creator Larry David, when the long-anticipated Seinfeld reunion was staged entirely inside the new show. The "USS Callister" episode of the Black Mirror anthology television series is about a man who is obsessed with a Star Trek-like show and recreates it as part of a virtual reality game. The concept of a film within a television series is employed in the Macross universe. The Super Dimension Fortress Macross: Do You Remember Love? (1984) was originally intended as an alternative theatrical re-telling of the television series The Super Dimension Fortress Macross (1982), but was later "retconned" into the Macross canon as a popular film within the television series Macross 7 (1994). The Stargate SG-1 episode "Wormhole X-Treme!" features a fictional TV show with an almost identical premise to Stargate SG-1. A later episode, "200", depicts ideas for a possible reboot of Wormhole X-Treme!, including using a "younger and edgier" cast, or even Thunderbirds-style puppets. The Glee episode "Extraordinary Merry Christmas" features the members of New Directions starring in a black-and-white Christmas television special that is presented within the episode itself. The special is a homage to both the Star Wars Holiday Special and the Judy Garland Christmas special. The British TV series Don't Hug Me I'm Scared, based on the web series of the same name, is notable for being a puppet show that includes a fictional claymation TV series within the show: Grolton & Hovris, a parody of Wallace and Gromit. Seinfeld had a number of recurring fictional films, including a sci-fi film called The Flaming Globes of Sigmund and, most notably, Rochelle, Rochelle, a parody of artsy but exploitative foreign films. The trippy, metaphysically loopy thriller Death Castle is a central element of the Master of None episode "New York, I Love You". The series finale of Barry features a biopic of the titular character called The Mask Collector, whose production serves as the catalyst for the last four episodes of Barry's final season. Stories inside stories can allow for genre changes. Arthur Ransome uses the device to let his young characters in the Swallows and Amazons series of children's books, set in the recognisable everyday world, take part in fantastic adventures of piracy in distant lands: two of the twelve books, Peter Duck and Missee Lee (and some would include Great Northern? as a third), are adventures supposedly made up by the characters.[25] Similarly, the film version of Chitty Chitty Bang Bang uses a story-within-a-story format to tell a purely fantastic fairy tale within a relatively more realistic frame story.
The film version of The Wizard of Oz does the same thing by making its inner story into a dream. Lewis Carroll's celebrated Alice books use the same device of a dream as an excuse for fantasy, while Carroll's less well-known Sylvie and Bruno subverts the trope by allowing the dream figures to enter and interact with the "real" world. In each episode of Mister Rogers' Neighborhood, the main story was realistic fiction, with live-action human characters, while an inner story took place in the Neighborhood of Make-Believe, in which most characters were puppets, except Lady Aberlin and occasionally Mr. McFeely, played by Betty Aberlin and David Newell in both realms. Some stories feature what might be called a literary version of the Droste effect, where an image contains a smaller version of itself (also a common feature in many fractals). An early version is found in an ancient Chinese proverb, in which an old monk situated in a temple found on a high mountain recursively tells the same story to a younger monk about an old monk who tells a younger monk a story regarding an old monk sitting in a temple located on a high mountain, and so on.[26] The same concept is at the heart of Michael Ende's classic children's novel The Neverending Story, which prominently features a book of the same title. This is later revealed to be the same book the audience is reading, when it begins to be retold again from the beginning, thus creating an infinite regression that features as a plot element. Another story that includes versions of itself is Neil Gaiman's The Sandman: Worlds' End, which contains several instances of multiple storytelling levels, including "Cerements" (issue #55), where one of the inmost levels corresponds to one of the outer levels, turning the story-within-a-story structure into an infinite regression. Jesse Ball's The Way Through Doors features a deeply nested set of stories within stories, most of which explore alternate versions of the main characters. The frame device is that the main character is telling stories to a woman in a coma (similar to Almodóvar's Talk to Her, mentioned above). Richard Adams' classic Watership Down includes several memorable tales about the legendary prince of rabbits, El-Ahraira, as told by the master storyteller Dandelion. Samuel Delany's surrealist sci-fi classic Dhalgren features the main character discovering a diary apparently written by a version of himself, with incidents that usually reflect, but sometimes contrast with, the main narrative. The last section of the book is taken up entirely by journal entries, which readers must decide whether to take as completing the narrator's own story. Similarly, in Kiese Laymon's Long Division, the main character discovers a book, also called Long Division, featuring what appears to be himself, but living twenty years earlier. The title book in Charles Yu's How to Live Safely in a Science Fictional Universe exists within itself as a stable creation of a closed loop in time. Likewise, in the Will Ferrell comedy Stranger than Fiction the main character discovers he is a character in a book that (along with its author) also exists in the same universe.
The 1979 book Gödel, Escher, Bach by Douglas Hofstadter includes a narrative between Achilles and the Tortoise (characters borrowed from Lewis Carroll, who in turn borrowed them from Zeno), and within this story they find the book "Provocative Adventures of Achilles and the Tortoise Taking Place in Sundry Spots of the Globe", which they begin to read, the Tortoise taking the part of the Tortoise, and Achilles taking the part of Achilles. Within this self-referential narrative, the two characters find the book "Provocative Adventures of Achilles and the Tortoise Taking Place in Sundry Spots of the Globe", which they begin to read, this time each taking the other's part. The 1979 experimental novel If on a winter's night a traveler by Italo Calvino follows a reader, addressed in the second person, trying to read the very same book, but being interrupted by ten other recursively nested incomplete stories. Robert Altman's satirical Hollywood noir The Player ends with the antihero being pitched a movie version of his own story, complete with an unlikely happy ending. The long-running musical A Chorus Line dramatizes its own creation, and the life stories of its own original cast members. The famous final number does double duty as the showstopper for both the musical the audience is watching and the one the characters are appearing in. Austin Powers in Goldmember begins with an action film opening, which turns out to be a sequence being filmed by Steven Spielberg. Near the ending, the events of the film itself are revealed to be a movie being enjoyed by the characters. Jim Henson's The Muppet Movie is framed as a screening of the movie itself, and the screenplay for the movie is present inside the movie, which ends with an abstracted, abbreviated re-staging of its own events. The 1985 Tim Burton film Pee-wee's Big Adventure ends with the main characters watching a film version of their own adventures, reimagined as a Hollywood blockbuster action film, with James Brolin as a more stereotypically manly version of the Paul Reubens title character. Episode 14 of the anime series Martian Successor Nadesico is essentially a clip show, but has several newly animated segments based on Gekigangar III, an anime that exists within its universe and of which many characters are fans; the new segments show the characters of Gekigangar watching Nadesico. The episode ends with the crew of the Nadesico watching that very same episode of Gekigangar, causing a paradox. Mel Brooks's 1974 comedy Blazing Saddles leaves its Western setting when the climactic fight scene breaks out, revealing the setting to have been a set on the Warner Bros. studio lot; the fight spills out onto an adjacent musical set, then into the studio canteen, and finally onto the streets. The two protagonists arrive at Grauman's Chinese Theatre, which is showing the "premiere" of Blazing Saddles; they enter the cinema to watch the conclusion of their own film. Brooks recycled the gag in his 1987 Star Wars parody, Spaceballs, in which the villains are able to locate the heroes by watching a copy of the movie they are in on VHS tape (a comic exaggeration of the phenomenon of films being available on video before their theatrical release). Brooks also made the 1976 parody Silent Movie, about a buffoonish team of filmmakers trying to make the first Hollywood silent film in forty years, which is essentially that film itself (another forty years later, life imitated art imitating art when an actual modern silent movie became a hit, the Oscar winner The Artist).
The film-within-a-film format is used in the Scream horror series. In Scream 2, the opening scene takes place in a movie theater at a screening of Stab, a film depicting the events of the first film. Between the events of Scream 2 and Scream 3, a second film was released, called Stab 2. Scream 3 is about the actors filming a fictional third installment in the Stab series. The actors playing the trilogy's characters end up getting killed, much in the same way as the characters they are playing on screen, and in the same order. Between the events of Scream 3 and Scream 4, four other Stab films are released. In the opening sequence of Scream 4, two characters are watching Stab 7 before they get killed. There is also a party at which all seven Stab movies were going to be shown. References are also made to Stab 5 involving time travel as a plot device. In the fifth installment of the series, also named Scream, an eighth Stab film is mentioned as having been released before the film takes place. The characters in the film, several of whom are fans of the series, heavily criticize the film, similar to how Scream 4 was criticized. Additionally, late in the film, Mindy watches the first Stab by herself; as Stab depicts Ghostface sneaking up behind Randy on the couch, as in the first film, Ghostface sneaks up on Mindy and attacks and stabs her. Director Spike Jonze's Adaptation is a fictionalized version of screenwriter Charlie Kaufman's struggles to adapt the non-cinematic book The Orchid Thief into a Hollywood blockbuster. As his onscreen self succumbs to the temptation to commercialize the narrative, Kaufman incorporates those techniques into the script, including tropes such as an invented romance, a car chase, a drug-running sequence, and an imaginary identical twin for the protagonist. (The movie also features scenes about the making of Being John Malkovich, previously written by Kaufman and directed by Jonze.) Similarly, in Kaufman's self-directed 2008 film Synecdoche, New York, the main character Caden Cotard is a skilled director of plays who receives a grant and ends up creating a remarkable theater piece intended as a carbon copy of the outside world. The copies of the world end up nested several layers deep. The same conceit was previously used by frequent Kaufman collaborator Michel Gondry in his music video for the Björk song "Bachelorette", which features a musical that is about, in part, the creation of that musical. A mini-theater and small audience appear on stage to watch the musical-within-a-musical, and at some point, within that second musical, a yet-smaller theater and audience appear. Fractal fiction is sometimes utilized in video games to play with the concept of player choice: in the first chapter of Stories Untold, the player is required to play a text adventure which, it eventually becomes apparent, is happening in the same environment the player is in; in Superhot, the narrative itself is constructed around the player playing a game called Superhot. Occasionally, a story within a story becomes such a popular element that the producers decide to develop it autonomously as a separate and distinct work. This is an example of a spin-off. Such spin-offs may be produced as a way of providing additional information on the fictional world for fans. In Homestuck by Andrew Hussie, there is a comic called Sweet Bro and Hella Jeff, created by one of the characters, Dave Strider. It was later adapted into its own ongoing series.
In the Toy Story film universe, Buzz Lightyear is an animated toy action figure based on a fictitious cartoon series, Buzz Lightyear of Star Command, which did not exist in the real world except for snippets seen within Toy Story. Later, Buzz Lightyear of Star Command was produced in the real world and was itself later joined by Lightyear, a film described as the source material for the toy and cartoon series. Kujibiki Unbalance, a series in the Genshiken universe, has spawned merchandise of its own and been remade as a series in its own right. The popular Dog Man series of children's graphic novels is presented as a creation of the main characters of author Dav Pilkey's earlier series, Captain Underpants. In the animated online franchise Homestar Runner, many of the best-known features were spun off from each other. The best known was "Strong Bad Emails", which depicted the villain of the original story giving snarky answers to fan emails, but that in turn spawned several other long-running features which started out as figments of Strong Bad's imagination, including the teen-oriented cartoon parody "Teen Girl Squad" and the anime parody "20X6". In the Harry Potter series, three such supplemental books have been produced: Fantastic Beasts and Where to Find Them, a guidebook used by the characters; Quidditch Through the Ages, a book from the school library; and The Tales of Beedle the Bard, presenting fairy tales told to children of the wizarding world. In the works of Kurt Vonnegut, Kilgore Trout has written a novel called Venus on the Half-Shell. In 1975, real-world author Philip José Farmer wrote a science-fiction novel called Venus on the Half-Shell, published under the name Kilgore Trout. Captain Proton: Defender of the Earth, a story by Dean Wesley Smith, was adapted from the holonovel Captain Proton in the Star Trek universe. One unique example is the Tyler Perry comedy/horror hit Boo! A Madea Halloween, which originated as a parody of Tyler Perry films in the Chris Rock film Top 5.
https://en.wikipedia.org/wiki/Story_within_a_story#Fractal_fiction
Video feedback is the process that starts and continues when a video camera is pointed at its own playback video monitor. The loop delay from camera to display back to camera is at least one video frame time, due to the input and output scanning processes; it can be more if there is more processing in the loop. First discovered shortly after Charlie Ginsburg invented the first video recorder for Ampex in 1956, video feedback was considered a nuisance and unwanted noise. Technicians and studio camera operators were chastised for allowing a video camera to see its own monitor, as the overload of self-amplified video signal caused significant problems with 1950s video pickups, often ruining the pickup. It could also cause screen burn-in on television screens and monitors of the time, by generating static, brightly illuminated display patterns. In the 1960s, early examples of video feedback art were introduced into the psychedelic art scene in New York City. Nam June Paik is often cited as the first video artist; he had clips of video feedback on display in New York City at the Greenwich Cafe in the mid-1960s. Early video feedback works were produced by media artist experimenters on the East and West Coasts of the United States in the late 1960s and early 1970s. Video feedback artists Steina and Woody Vasulka, with Richard Lowenberg and others, formed The Kitchen, which was located in the kitchen of a broken-down hotel in lower Manhattan, while Skip Sweeney and others founded Video Free America in San Francisco, to nurture their video art and feedback experiments. David Sohn mentions video feedback in his 1970 book Film, the Creative Eye. This book was part of the base curriculum for Richard Lederer of St. Paul's School in Concord, New Hampshire, when he made video feedback part of an English curriculum in his 1970s course Creative Eye in Film. Several students in this class participated regularly in the making and recording of video feedback. By this time Sony had released the VuMax series of recording video cameras and manually "hand-looped" video tape decks, which did two things: they increased the resolution of the video image, improving picture quality, and they made video tape recording technology available to the general public for the first time, allowing anyone to experiment with video. During the 1980s and into the 1990s, video technology improved and evolved into high-quality, high-definition video recording. Michael C. Andersen generated the first known mathematical formula of the video feedback process,[1] and he has also generated a Mendeleev-style square showing the gradual, progressive, formulaic change of the video image as certain parameters are adjusted.[2] In the 1990s, the rave scene and a social return to art of a more psychedelic nature brought back displays of video feedback on large disco dance floor video screens around the world. Many non-linear video editors include filters that either mimic or directly utilize video feedback, recognizable by their vortex-like, phantasmagoric manipulation of the original recorded image. Many artists have used optical feedback. A famous example is Queen's music video for "Bohemian Rhapsody" (1975). The effect (in this simple case) can be compared to looking at oneself between two mirrors.
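The hall-of-mirrors comparison suggests a simple way to illustrate the process numerically. The following Python sketch is only an illustration, not a published model of video feedback (and is unrelated to Andersen's formula): it treats each camera-to-monitor round trip as one discrete step that re-captures the previous frame slightly shrunk and re-amplified, so nested copies of the seed image accumulate one frame of delay at a time. The scale and gain values are arbitrary assumptions chosen for the demonstration.

    import numpy as np

    def feedback_step(frame, scale=0.9, gain=1.05):
        # One loop iteration: the "monitor" shows the previous frame, and the
        # "camera" re-captures it zoomed out and re-amplified, one frame late.
        h, w = frame.shape
        ys = (np.arange(int(h * scale)) / scale).astype(int)  # nearest-neighbour
        xs = (np.arange(int(w * scale)) / scale).astype(int)  # resampling indices
        small = frame[np.ix_(ys, xs)]                 # the camera's smaller view
        nxt = np.zeros_like(frame)
        y0 = (h - small.shape[0]) // 2                # centre the captured image
        x0 = (w - small.shape[1]) // 2
        nxt[y0:y0 + small.shape[0], x0:x0 + small.shape[1]] = small
        return np.clip(nxt * gain, 0.0, 1.0)          # gain models re-amplification

    frame = np.zeros((200, 200))
    frame[95:105, 40:160] = 1.0                       # seed: a single bright bar
    for _ in range(30):                               # each pass adds one frame of delay
        frame = feedback_step(frame)
    # "frame" now holds ever-smaller nested copies of the bar, the
    # tunnel-of-mirrors pattern described above.

In this toy model, a gain high enough to outweigh the losses of the shrinking step mimics the runaway self-amplification that troubled 1950s pickups, while a lower gain makes the nested copies fade away instead.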
Other videos have used variations of video feedback. Under the name "howl-around", the technique was employed for the opening title sequence of the British science fiction series Doctor Who,[3] which used it from 1963 to 1973. Initially the sequence was in black and white; it was redone in 1967 to showcase the show's new 625-line broadcast resolution and to feature the Doctor's face (Patrick Troughton at that time), and redone again, in colour this time, in 1970. The next title sequence for the show, which debuted in 1973, abandoned this technique in favour of slit-scan photography. An example of optical feedback in science is the optical cavity found in almost every laser, which typically consists of two mirrors between which light is amplified. In the late 1990s it was found that so-called unstable-cavity lasers produce light beams whose cross-sections present a fractal pattern.[4] Optical feedback in science is often closely related to video feedback, so an understanding of video feedback can be useful for other applications of optical feedback. Video feedback has been used to explain the fractal structure of unstable-cavity laser beams.[5] Video feedback is also useful as an experimental-mathematics tool. Examples of its use include the making of fractal patterns using multiple monitors, and multiple images produced using mirrors. Optical feedback is also found in the image intensifier tube and its variants. Here the feedback is usually an undesirable phenomenon, in which the light generated by the phosphor screen "feeds back" to the photocathode, causing the tube to oscillate and ruining the image. This is typically suppressed by an aluminium reflective screen deposited on the back of the phosphor screen, or by incorporating a microchannel plate detector. Optical feedback has been used experimentally in these tubes to amplify an image, in the manner of the cavity laser, but this technique has had limited use. Optical feedback has also been experimented with as an electron source, since a photocathode-phosphor cell will "latch" when triggered, providing a steady stream of electrons. Douglas Hofstadter discusses video feedback in his book I Am a Strange Loop, about the human mind and consciousness. He devotes a chapter to describing his experiments with video feedback: "At some point during the session, I accidentally stuck my hand momentarily in front of the camera's lens. Of course the screen went all dark, but when I removed my hand, the previous pattern did not just pop right back onto the screen, as expected. Instead I saw a different pattern on the screen, but this pattern, unlike anything I'd seen before, was not stationary."[6]
https://en.wikipedia.org/wiki/Video_feedback
Inception is a 2010 science fiction action heist film written and directed by Christopher Nolan, who also produced it with Emma Thomas, his wife. The film stars Leonardo DiCaprio as a professional thief who steals information by infiltrating the subconscious of his targets. He is offered a chance to have his criminal history erased as payment for the implantation of another person's idea into a target's subconscious. The ensemble cast includes Ken Watanabe, Joseph Gordon-Levitt, Marion Cotillard, Elliot Page,[a] Tom Hardy, Cillian Murphy, Tom Berenger, Dileep Rao, and Michael Caine. After the 2002 completion of Insomnia, Nolan presented to Warner Bros. a written 80-page treatment for a horror film envisioning "dream stealers," based on lucid dreaming. Deciding he needed more experience before tackling a production of this magnitude and complexity, Nolan shelved the project and instead worked on 2005's Batman Begins, 2006's The Prestige, and 2008's The Dark Knight. The treatment was revised over six months and was purchased by Warner in February 2009. Inception was filmed in six countries, beginning in Tokyo on June 19 and ending in Canada on November 22. Its official budget was $160 million, split between Warner Bros. and Legendary. Nolan's reputation and success with The Dark Knight helped secure the film's US$100 million in advertising expenditure. Inception's premiere was held in London on July 8, 2010; it was released in both conventional and IMAX theaters beginning on July 16, 2010. Inception grossed over $837 million worldwide, becoming the fourth-highest-grossing film of 2010. Considered one of the best films of the 2010s,[4] Inception, among its numerous accolades, won four Oscars (Best Cinematography, Best Sound Editing, Best Sound Mixing, Best Visual Effects) and was nominated for four more (Best Picture, Best Original Screenplay, Best Art Direction, Best Original Score) at the 83rd Academy Awards. Dom Cobb and Arthur are "extractors" who perform corporate espionage using experimental dream-sharing technology to infiltrate their targets' subconscious and extract information. Their latest target, Saito, is impressed with Cobb's ability to layer multiple dreams within each other. He offers to hire Cobb for the ostensibly impossible job of implanting an idea into a person's subconscious; performing "inception" on Robert Fischer, the son of Saito's competitor Maurice Fischer, with the idea to dissolve his father's company. In return, Saito promises to clear Cobb's criminal status, allowing him to return home to his children. Cobb accepts the offer and assembles his team: a forger named Eames, a chemist named Yusuf, and a college student named Ariadne. Ariadne is tasked with designing the dream's architecture, something Cobb himself cannot do for fear of being sabotaged by his mind's projection of his late wife, Mal. Maurice Fischer dies, and the team sedates Robert Fischer into a three-layer shared dream on an airplane to America bought by Saito. Time on each layer runs slower than the layer above, with one member staying behind on each to perform a music-synchronized "kick" (using the French song "Non, je ne regrette rien") to awaken dreamers on all three levels simultaneously. The team abducts Robert in a city on the first level, but his trained subconscious projections attack them. After Saito is wounded, Cobb reveals that while dying in the dream would usually awaken dreamers, Yusuf's sedatives will instead send them into "Limbo": a world of infinite subconscious.
Eames impersonates Robert's godfather, Peter Browning, to introduce the idea of an alternate will to dissolve the company. Cobb tells Ariadne that he and Mal entered Limbo while experimenting with dream-sharing, experiencing fifty years in one night due to the time dilation with reality. After waking up, Mal still believed she was dreaming. Attempting to "wake up," she committed suicide and framed Cobb for her murder to force him to do the same. Cobb fled the U.S., leaving his children behind. Yusuf drives the team around the first level as they are sedated into the second level, a hotel dreamed by Arthur. Cobb persuades Robert that Browning has kidnapped him to stop the dissolution and that Cobb is a defensive projection, leading Robert a third level deeper as part of a ruse to enter Robert's subconscious. In the third level, the team infiltrates an alpine fortress with a projection of Maurice inside, where the inception itself can be performed. However, Yusuf performs his kick too soon by driving off a bridge, forcing Arthur and Eames to improvise a new set of kicks synchronized with the van hitting the water, by rigging an elevator and the fortress, respectively, with explosives. Mal then appears and kills Robert before he can be subjected to the inception, and he and Saito are lost in Limbo, forcing Cobb and Ariadne to rescue them in time for Robert's inception and Eames's kick. Cobb reveals that during their time in Limbo, Mal refused to return to reality; Cobb had to convince her it was only a dream, accidentally incepting in her the belief that the real world was still a dream. Cobb makes peace with his part in Mal's death. Ariadne kills Mal's projection and wakes Robert up with a kick. Revived on the third level, Robert discovers the planted idea: his dying father telling him to create something for himself. While Cobb searches for Saito in Limbo, the others ride the synced kicks back to reality. Cobb finds an aged Saito and reminds him of their agreement. The dreamers all awaken on the plane, and Saito makes a phone call. Arriving in Los Angeles, Cobb passes the immigration checkpoint, and his father-in-law accompanies him to his home. Cobb uses Mal's "totem" – a top that spins indefinitely in a dream – to test whether he is indeed in the real world, but he chooses not to observe the result and instead joins his children. Initially, Christopher Nolan wrote an 80-page treatment about dream-stealers.[20] Nolan had originally envisioned Inception as a horror film,[20] but eventually wrote it as a heist film, even though he found that "traditionally [they] are very deliberately superficial in emotional terms."[21] Upon revisiting his script, he decided that basing it in that genre did not work because the story "relies so heavily on the idea of the interior state, the idea of dream and memory. I realized I needed to raise the emotional stakes."[21] Nolan worked on the script for nine to ten years.[5] When he first started thinking about making the film, Nolan was influenced by "that era of movies where you had The Matrix (1999), you had Dark City (1998), you had The Thirteenth Floor (1999) and, to a certain extent, you had Memento (2000), too. They were based in the principles that the world around you might not be real."[21][22] Nolan first pitched the film to Warner Bros.
in 2001, but decided that he needed more experience making large-scale films, and embarked on Batman Begins and The Dark Knight.[23] He soon realized that a film like Inception needed a large budget because "as soon as you're talking about dreams, the potential of the human mind is infinite. And so the scale of the film has to feel infinite. It has to feel like you could go anywhere by the end of the film. And it has to work on a massive scale."[23] After making The Dark Knight, Nolan decided to make Inception and spent six months completing the script.[23] Nolan said that the key to completing the script was wondering what would happen if several people shared the same dream. "Once you remove the privacy, you've created an infinite number of alternative universes in which people can meaningfully interact, with validity, with weight, with dramatic consequences."[24] Nolan had been trying to work with Leonardo DiCaprio for years and met him several times, but was unable to recruit him for any of his films until Inception.[12] DiCaprio finally agreed because he was "intrigued by this concept—this dream-heist notion and how this character's going to unlock his dreamworld and ultimately affect his real life."[25]: 93–94 He read the script and found it to be "very well written, comprehensive but you really had to have Chris in person, to try to articulate some of the things that have been swirling around his head for the last eight years."[23] DiCaprio and Nolan spent months talking about the screenplay. Nolan took a long time re-writing the script in order "to make sure that the emotional journey of his [DiCaprio's] character was the driving force of the movie."[5] On February 11, 2009, it was announced that Warner Bros. had purchased Inception, a spec script written by Nolan.[26] Principal photography began in Tokyo on June 19, 2009, with the scene in which Saito first hires Cobb during a helicopter flight over the city.[20][8]: 13 The production moved to the United Kingdom and shot in a converted airship hangar in Cardington, Bedfordshire, north of London.[8]: 14 There, the hotel bar set, which tilted 30 degrees, was built.[27]: 29 A hotel corridor was also constructed by Guy Hendrix Dyas, the production designer, Chris Corbould, the special effects supervisor, and Wally Pfister, the director of photography; it rotated a full 360 degrees to create the effect of alternate directions of gravity for scenes set during the second level of dreaming, where dream-sector physics become chaotic. The idea was inspired by a technique used in Stanley Kubrick's 2001: A Space Odyssey (1968). Nolan said, "I was interested in taking those ideas, techniques, and philosophies and applying them to an action scenario".[27]: 32 The filmmakers originally planned to make the hallway only 40 feet (12 m) long, but as the action sequence became more elaborate, the hallway's length was increased to 100 ft (30 m). The corridor was suspended along eight large concentric rings that were spaced equidistantly outside its walls and powered by two massive electric motors.[8]: 14 Joseph Gordon-Levitt, who plays Arthur, spent several weeks learning to fight in a corridor that spun like "a giant hamster wheel".[21] Nolan said of the device, "It was like some incredible torture device; we thrashed Joseph for weeks, but in the end we looked at the footage, and it looks unlike anything any of us has seen before. The rhythm of it is unique, and when you watch it, even if you know how it was done, it confuses your perceptions.
It's unsettling in a wonderful way".[21] Gordon-Levitt remembered, "it was six-day weeks of just, like, coming home at night battered ... The light fixtures on the ceiling are coming around on the floor, and you have to choose the right time to cross through them, and if you don't, you're going to fall."[28] On July 15, 2009, filming took place at University College London for the sequences occurring inside a Paris college of architecture in the story,[20] including the library, Flaxman Gallery and Gustav Tuck Theatre.[29] Filming moved to France, where they shot Cobb entering the college of architecture (the place used for the entrance was the Musée Galliera) and the pivotal scenes between Ariadne and Cobb, in a bistro (a fictional one set up at the corner of Rue César Franck and Rue Bouchut), and lastly on the Bir-Hakeim bridge.[8]: 17 For the explosion that takes place during the bistro scene, local authorities would not allow the use of real explosives. High-pressure nitrogen was used to create the effect of a series of explosions. Pfister used six high-speed cameras to capture the sequence from different angles and make sure that they got the shot. The visual effects department enhanced the sequence, adding more destruction and flying debris. For the "Paris folding" sequence and when Ariadne "creates" the bridges, green screen and CGI were used on location.[8]: 17 Tangier, Morocco, doubled as Mombasa, where Cobb hires Eames and Yusuf. A foot chase was shot in the streets and alleyways of the historic medina quarter.[8]: 18 To capture this sequence, Pfister employed a mix of hand-held camera and Steadicam work.[8]: 19 Tangier was also used as the setting for filming an important riot scene during the initial foray into Saito's mind. Filming moved to the Los Angeles area, where some sets were built on a Warner Bros. sound stage, including the interior rooms of Saito's Japanese castle (the exterior was done on a small set built in Malibu Beach). The dining room was inspired by the historic Nijō Castle, built around 1603. These sets were inspired by a mix of Japanese architecture and Western influences.[8]: 19 The production staged a multi-vehicle car chase on the streets of downtown Los Angeles, which involved a freight train crashing down the middle of a street.[8]: 20 To do this, the filmmakers configured a train engine on the chassis of a tractor trailer. The replica was made from fiberglass molds taken from authentic train parts and matched in terms of color and design.[8]: 21 Also, the car chase was supposed to be set in the midst of a downpour, but the L.A. weather stayed typically sunny. The filmmakers set up elaborate effects (e.g., rooftop water cannons) to give the audience the impression that the weather was overcast and soggy. L.A. was also the site of the climactic scene where a Ford Econoline van runs off the Schuyler Heim Bridge in slow motion.[30] This sequence was filmed on and off for months, with the van being shot out of a cannon, according to actor Dileep Rao. Capturing the actors suspended within the van in slow motion took a whole day to film. Once the van landed in the water, the challenge for the actors was to avoid panic. "And when they ask you to act, it's a bit of an ask," explained Cillian Murphy.[30] The actors had to be underwater for four to five minutes while drawing air from scuba tanks; underwater buddy breathing is shown in this sequence.[30] Cobb's house was in Pasadena. The hotel lobby was filmed at the CAA building in Century City.
"Limbo" was made on location in Los Angeles andMorocco, with the beach scene filmed atPalos Verdesbeach with CGI buildings. N Hope St. in Los Angeles was the primary filming location for "Limbo", with green screen and CGI being used to create the dream landscape. The final phase of principal photography took place inAlbertain late November 2009. The location manager discovered a temporarily closed ski resort,Fortress Mountain.[8]: 22An elaborate set was assembled near the top station of the Canadianchairlift, taking three months to build.[25]: 93The production had to wait for a huge snowstorm, which eventually arrived.[20]The ski-chase sequence was inspired by Nolan's favoriteJames Bondfilm,On Her Majesty's Secret Service(1969): "What I liked about it that we've tried to emulate in this film is there's a tremendous balance in that movie of action and scale and romanticism and tragedy and emotion."[25]: 91 The film was shot primarily in theanamorphic formaton35 mm film, with key sequences filmed on65 mm, and aerial sequences inVistaVision. Nolan did not shoot any footage withIMAXcameras as he had withThe Dark Knight. "We didn't feel that we were going to be able to shoot in IMAX because of the size of the cameras because this film, given that it deals with a potentially surreal area, the nature of dreams and so forth, I wanted it to be as realistic as possible. Not be bound by the scale of those IMAX cameras, even though I love the format dearly".[5]In addition Nolan and Pfister tested usingShowscanandSuper Dimension 70as potential large-format,high-frame-ratecamera systems to use for the film, but ultimately decided against either format.[27]: 29Sequences in slow motion were filmed on a Photo-Sonics 35 mm camera at speeds of up to 1,000 frames per second. Wally Pfister tested shooting some of these sequences using a high speeddigital camera, but found the format to be too unreliable due to technical glitches. "Out of six times that we shot on the digital format, we only had one usable piece and it didn't end up in the film. Out of the six times we shot with the Photo-Sonics camera and 35 mm running through it, every single shot was in the movie."[31]Nolan also chose not to shoot any of the film in3Das he prefers shooting on film[5]usingprime lenses, which is not possible with 3D cameras.[32]Nolan has also criticized the dim image that 3D projection produces, and disputes that traditional film does not allow realisticdepth perception, saying "I think it's a misnomer to call it 3D versus 2D. The whole point of cinematic imagery is it's three dimensional... You know 95% of ourdepth cuescome fromocclusion, resolution, color and so forth, so the idea of calling a 2D movie a '2D movie' is a little misleading."[33]Nolan did testconvertingInceptioninto 3D in post-production but decided that, while it was possible, he lacked the time to complete the conversion to a standard he was happy with.[20][33]In February 2011Jonathan Liebesmansuggested that Warner Bros. were attempting a 3D conversion forBlu-rayrelease.[34] Wally Pfister gave each location and dream level a distinctive look to aid the audience's recognition of the narrative's location during the heavily crosscut portion of the film: the mountain fortress appears sterile and cool, the hotel hallways have warm hues, and the scenes in the van are more neutral.[27]: 35–36 Nolan has said that the film "deals with levels of reality, and perceptions of reality which is something I'm very interested in. 
It's an action film set in a contemporary world, but with a slight science-fiction bent to it", while also describing it as "very much an ensemble film structured somewhat as a heist movie. It's an action adventure that spans the globe".[35] For dream sequences in Inception, Nolan used little computer-generated imagery, preferring practical effects whenever possible. Nolan said, "It's always very important to me to do as much as possible in-camera, and then, if necessary, computer graphics are very useful to build on or enhance what you have achieved physically."[8]: 12 To this end, visual effects supervisor Paul Franklin built a miniature of the mountain fortress set and then blew it up for the film. For the fight scene that takes place in zero gravity, he used CG-based effects to "subtly bend elements like physics, space and time."[36] The most challenging effect was the "Limbo" city level at the end of the film, because it continually developed during production. Franklin had artists build concepts while Nolan expressed his ideal vision: "Something glacial, with clear modernist architecture, but with chunks of it breaking off into the sea like icebergs".[36] Franklin and his team ended up with "something that looked like an iceberg version of Gotham City with water running through it."[36] They created a basic model of a glacier, and then designers created a program that added elements like roads, intersections and ravines until they had a complex, yet organic-looking, cityscape. For the Paris-folding sequence, Franklin had artists produce concept sketches, and then they created rough computer animations to give them an idea of what the sequence would look like in motion. Later, during principal photography, Nolan was able to direct DiCaprio and Page based on this rough computer animation that Franklin had created. Inception had nearly 500 visual effects shots (in comparison, Batman Begins had approximately 620), which is relatively few in comparison to contemporary effects-heavy films, which can have as many as 2,000 visual effects shots.[36] The score for Inception was written by Hans Zimmer,[16] who described his work as "a very electronic,[37] dense score",[38] filled with "nostalgia and sadness" to match Cobb's feelings throughout the film.[39] The music was written simultaneously with filming,[38] and features a guitar sound reminiscent of Ennio Morricone, played by Johnny Marr, former guitarist of the Smiths. Édith Piaf's "Non, je ne regrette rien" ("No, I Regret Nothing") appears throughout the film, used to accurately time the dreams, and Zimmer reworked pieces of the song into cues of the score.[39] A soundtrack album was released on July 11, 2010, by Reprise Records.[40] The majority of the score was also included in high-resolution 5.1 surround sound on the second disc of the two-disc Blu-ray release.[41] Hans Zimmer's music was nominated for an Academy Award in the Best Original Score category in 2011, losing to Trent Reznor and Atticus Ross's score for The Social Network.[42] In Inception, Nolan wanted to explore "the idea of people sharing a dream space... That gives you the ability to access somebody's unconscious mind. What would that be used and abused for?"[5] The majority of the film's plot takes place in these interconnected dream worlds. This structure creates a framework where actions in the real or dream worlds ripple across the others.
The dream is always in a state of production, and shifts across the levels as the characters navigate it.[43] By contrast, the world of The Matrix (1999) is an authoritarian, computer-controlled one, alluding to theories of social control developed by the thinkers Michel Foucault and Jean Baudrillard. However, according to one interpretation, Nolan's world has more in common with the works of Gilles Deleuze and Félix Guattari.[43] David Denby in The New Yorker compared Nolan's cinematic treatment of dreams to Luis Buñuel's in Belle de Jour (1967) and The Discreet Charm of the Bourgeoisie (1972).[44] He criticized Nolan's "literal-minded" action-level sequencing compared to Buñuel, who "silently pushed us into reveries and left us alone to enjoy our wonderment, but Nolan is working on so many levels of representation at once that he has to lay in pages of dialogue just to explain what's going on." Buñuel's films, in Denby's view, capture "the peculiar malign intensity of actual dreams."[44] Deirdre Barrett, a dream researcher at Harvard University, said that Nolan did not get every detail accurate regarding dreams, but that their illogical, rambling, disjointed plots would not make for a great thriller anyway. However, "he did get many aspects right," she said, citing the scene in which a sleeping Cobb is shoved into a full bath, and in the dream world water gushes into the windows of the building, waking him up. "That's very much how real stimuli get incorporated, and you very often wake up right after that intrusion."[45] Nolan himself said, "I tried to work that idea of manipulation and management of a conscious dream being a skill that these people have. Really the script is based on those common, very basic experiences and concepts, and where can those take you? And the only outlandish idea that the film presents, really, is the existence of a technology that allows you to enter and share the same dream as someone else."[21] Others have argued that the film is itself a metaphor for filmmaking, and that the filmgoing experience itself, images flashing before one's eyes in a darkened room, is akin to a dream. Writing in Wired, Jonah Lehrer supported this interpretation and presented neurological evidence that brain activity is strikingly similar during film-watching and sleeping. In both, the visual cortex is highly active and the prefrontal cortex, which deals with logic, deliberate analysis, and self-awareness, is quiet.[46] Paul argued that the experience of going to a picturehouse is itself an exercise in shared dreaming, particularly when viewing Inception: the film's sharp cutting between scenes forces the viewer to create larger narrative arcs to stitch the pieces together. This demand that the audience produce the images in parallel with consuming them is analogous to dreaming itself. As in the film's story, in a cinema one enters into the space of another's dream, in this case Nolan's; as with any work of art, one's reading of it is ultimately influenced by one's own subjective desires and subconscious.[43] Noting that at the Bir-Hakeim bridge in Paris, Ariadne creates an illusion of infinity by adding facing mirrors underneath its struts, Stephanie Dreyfus in La Croix asked, "Is this not a strong, beautiful metaphor for the cinema and its power of illusion?"[47] Nolan combined elements from several different film genres into the film, notably science fiction, the heist film, and film noir. Marion Cotillard plays "Mal" Cobb, Dom Cobb's projection of his guilt over his deceased wife's suicide.
As the film's main antagonist, she is a frequent, malevolent presence in his dreams. Dom is unable to control these projections of her, challenging his abilities as an extractor.[12] Nolan described Mal as "the essence of the femme fatale",[8]: 9 the key noir reference in the film. As a "classic femme fatale", her relationship with Cobb exists in his mind, a manifestation of Cobb's own neurosis and fear of how little he knows about the woman he loves.[48] DiCaprio praised Cotillard's performance, saying that "she can be strong and vulnerable and hopeful and heartbreaking all in the same moment, which was perfect for all the contradictions of her character".[8]: 10 Nolan began with the structure of a heist movie, since exposition is an essential element of that genre, though he adapted it to have a greater emotional narrative suited to the world of dreams and the subconscious.[48] As Denby described this device: "the outer shell of the story is an elaborate caper".[44] Kristin Thompson argued that exposition was a major formal device in the film. While a traditional heist movie has a heavy dose of exposition at the beginning, as the team assembles and the leader explains the plan, in Inception this becomes nearly continuous as the group progresses through the various levels of dreaming.[49] Three-quarters of the film, until the van begins to fall from the bridge, are devoted to explaining its plot. In this way, exposition takes precedence over characterization. The characters' relationships are created by their respective skills and roles. Ariadne, like her ancient namesake, creates the maze and guides the others through it, but also helps Cobb navigate his own subconscious, and, as the sole student of dream sharing, helps the audience understand the concept of the plot.[50] Nolan drew inspiration from the works of Jorge Luis Borges,[20][51] including "The Secret Miracle" and "The Circular Ruins",[52] and from the films Blade Runner (1982) and The Matrix (1999).[52][53] While Nolan has not confirmed this, many observers have also suggested that the movie draws heavy inspiration from the 2006 animated film Paprika.[54][55][56] The film cuts to the closing credits from a shot of the top apparently beginning to wobble ever so faintly, inviting speculation about whether the final sequence was reality or another dream. Nolan confirmed that the ambiguity was deliberate,[48] saying, "I've been asked the question more times than I've ever been asked any other question about any other film I've made... What's funny to me is that people really do expect me to answer it."[57] The film's script concludes with "Behind him, on the table, the spinning top is STILL SPINNING. And we—FADE OUT".[58] Nolan said, "I put that cut there at the end, imposing an ambiguity from outside the film. That always felt the right ending to me—it always felt like the appropriate 'kick' to me... The real point of the scene—and this is what I tell people—is that Cobb isn't looking at the top. He's looking at his kids. He's left it behind. That's the emotional significance of the thing."[57] Caine interpreted the ending as meaning that Cobb is in the real world, quoting Nolan as telling him "'Well, when you're in the scene, it's reality.' So get that — if I'm in it, it's reality. If I'm not in it, it's a dream". While reiterating that he was uncomfortable with definitively explaining the scene, Nolan in 2023 credited Emma Thomas as providing "the correct answer, which is Leo's character ...
doesn't care at that point".[59] Mark Fisher argued that "a century of cultural theory" cautions against accepting the author's interpretation as anything more than a supplementary text, all the more so given the theme of the instability of any one master position in Nolan's films. Therein the manipulator is often the one who ends up manipulated, and Cobb's "not caring" about whether or not his world is real may be the price of his happiness and release.[60] Warner Bros. spent US$100 million marketing the film. Although Inception was not part of an existing franchise, Sue Kroll, president of Warner's worldwide marketing, said the company believed it could gain awareness due to the strength of "Christopher Nolan as a brand". Kroll declared, "We don't have the brand equity that usually drives a big summer opening, but we have a great cast and a fresh idea from a filmmaker with a track record of making incredible movies. If you can't make those elements work, it's a sad day."[61] The studio also tried to maintain a campaign of secrecy; as reported by the Senior VP of Interactive Marketing, Michael Tritter, "You have this movie which is going to have a pretty big built in fanbase... but you also have a movie that you are trying to keep very secret. Chris [Nolan] really likes people to see his movies in a theater and not see it all beforehand so everything that you do to market that—at least early on—is with an eye to feeding the interest to fans."[62] A viral marketing campaign was employed for the film. After the release of the first teaser trailer in August 2009, the film's official website featured only an animation of Cobb's spinning top. In December, the top toppled over and the website opened the online game Mind Crime, which upon completion revealed Inception's poster.[63] The rest of the campaign unrolled after WonderCon in April 2010, where Warner gave away promotional T-shirts featuring the PASIV briefcase used to create the dream space, with a QR code linking to an online manual of the device.[64] Mind Crime also received a second stage with more resources, including a hidden trailer for the movie.[65] More pieces of viral marketing began to surface before Inception's release, such as a manual filled with bizarre images and text sent to Wired magazine,[66] and the online publication of posters, ads, phone applications, and strange websites all related to the film.[67][68] Warner also released an online prequel comic, Inception: The Cobol Job.[69] The official trailer, released on May 10, 2010, through Mind Crime, was extremely well received.[65] It featured an original piece of music, "Mind Heist", by recording artist Zack Hemsey,[70] rather than music from the score.[71] The trailer quickly went viral, with numerous mashups copying its style, both by amateurs on sites like YouTube[72] and by professionals on sites such as CollegeHumor.[73][74] On June 7, 2010, a behind-the-scenes featurette on the film was released in HD on Yahoo!
Movies.[75]

Inception and its film trailers are widely credited with launching the trend throughout the 2010s in which blockbuster movie trailers repeatedly hit audiences with so-called "braam" sounds: "bassy, brassy, thunderous notes—like a foghorn on steroids—meant to impart a sense of apocalyptic momentousness".[76] However, different composers worked on the teaser trailer, first trailer, second trailer, and film score, meaning that identifying the composer(s) responsible for that trend is a complicated task.[76]

Inception was released on DVD and Blu-ray on December 3, 2010, in France,[77] and the following week in the United Kingdom and United States (December 7, 2010).[78][79] The film was released on VHS in South Korea, making it one of the last major studio films released for the format.[80] Warner Bros. also made available in the United States a limited Blu-ray edition packaged in a metal replica of the PASIV briefcase, which included extras such as a metal replica of the spinning top totem. With a production run of less than 2,000, it sold out in one weekend.[81] Inception was released on 4K Blu-ray and digital copy along with other Christopher Nolan films on December 19, 2017.[82] As of 2018, the home video releases have sold over 9 million units and grossed over $160 million.[83]

In a November 2010 interview, Nolan expressed his intention to develop a video game set in the Inception world, working with a team of collaborators. He described it as "a longer-term proposition", referring to the medium of video games as "something I've wanted to explore".[84]

Inception was re-released in theaters for its tenth anniversary, starting on August 12, 2020, in international markets and on August 21 in the U.S.[85] The re-release was originally announced by Warner Bros. in June 2020 and scheduled for July 17, 2020, taking the original release date of Nolan's upcoming film Tenet after its delay to July 31 due to the impact of the COVID-19 pandemic on movie theaters.[86] After Tenet was delayed again to August 12, the re-release was shifted to July 31,[87] before settling on the August release date following a third delay.[85]

Inception was released in both conventional and IMAX theaters on July 16, 2010.[89][90] The film had its world premiere at Leicester Square in London on July 8, 2010.[91] In the United States and Canada, Inception was released theatrically in 3,792 conventional theaters and 195 IMAX theaters.[89] The film grossed US$21.8 million during its opening day on July 16, 2010, with midnight screenings in 1,500 locations.[92] Overall the film made US$62.7 million and debuted at No. 1 on its opening weekend.[93] Inception's opening weekend gross made it the second-highest-grossing debut for a science fiction film that was not a sequel, remake or adaptation, behind Avatar's US$77 million opening-weekend gross in 2009.[93] The film held the top spot of the box office rankings in its second and third weekends, with drops of just 32% (US$42.7 million) and 36% (US$27.5 million), respectively,[94][95] before dropping to second place in its fourth week, behind The Other Guys.[96]

Inception initially grossed US$292 million in the United States and Canada, US$56 million in the United Kingdom, Ireland and Malta, and US$479 million in other countries, for a total of US$828 million worldwide.[3] Its five highest-grossing markets after the US and Canada (US$292 million) were China (US$68 million), the United Kingdom, Ireland and Malta (US$56 million), France and the Maghreb region (US$43 million), Japan (US$40 million) and South Korea (US$38
million).[97] It was the sixth-highest-grossing film of 2010 in North America,[98] and the fourth-highest-grossing film of 2010 worldwide, behind Toy Story 3, Alice in Wonderland and Harry Potter and the Deathly Hallows – Part 1.[99] Its subsequent re-releases increased its gross to US$839 million.[3] Inception is the fourth most lucrative production in Christopher Nolan's career—behind The Dark Knight, The Dark Knight Rises and Oppenheimer[100]—and the second most for Leonardo DiCaprio, behind Titanic.[101]

On Rotten Tomatoes, Inception holds an approval rating of 87% based on 368 reviews, with an average rating of 8.2/10. The website's critical consensus reads: "Smart, innovative, and thrilling, Inception is that rare summer blockbuster that succeeds viscerally as well as intellectually."[102] Metacritic, another review aggregator, assigned the film a weighted average score of 74 out of 100, based on 42 critics, indicating "generally favorable" reviews.[103] Audiences polled by CinemaScore gave the film an average grade of "B+" on an A+ to F scale.[104]

Peter Travers of Rolling Stone called Inception a "wildly ingenious chess game," and concluded "the result is a knockout."[105] Justin Chang of Variety praised the film as "a conceptual tour de force" and wrote, "applying a vivid sense of procedural detail to a fiendishly intricate yarn set in the labyrinth of the unconscious mind, the writer-director has devised a heist thriller for surrealists, a Jungian's Rififi, that challenges viewers to sift through multiple layers of (un)reality."[106] Jim Vejvoda of IGN rated the film as perfect, deeming it "a singular accomplishment from a filmmaker who has only gotten better with each film."[107] Relevant's David Roark called it Nolan's "greatest accomplishment", saying, "Visually, intellectually and emotionally, Inception is a masterpiece."[108]

In its August 2010 issue, Empire gave the film a full five stars and wrote, "it feels like Stanley Kubrick adapting the work of the great sci-fi author William Gibson [...] Nolan delivers another true original: welcome to an undiscovered country."[109] Entertainment Weekly's Lisa Schwarzbaum gave the film a B+ grade and wrote, "It's a rolling explosion of images as hypnotizing and sharply angled as any in a drawing by M. C. Escher or a state-of-the-biz video game; the backwards splicing of Nolan's own Memento looks rudimentary by comparison."[110] Roger Ebert of the Chicago Sun-Times awarded the film a full four stars and said that Inception "is all about process, about fighting our way through enveloping sheets of reality and dream, reality within dreams, dreams without reality. It's a breathtaking juggling act."[111] Richard Roeper, also of the Sun-Times, gave Inception an "A+" score and called it "one of the best movies of the [21st] century."[112] BBC Radio 5 Live's Mark Kermode named Inception the best film of 2010, stating that "Inception is proof that people are not stupid, that cinema is not trash, and that it is possible for blockbusters and art to be the same thing."[113]

Michael Phillips of the Chicago Tribune gave the film 3 out of 4 stars and wrote, "I found myself wishing Inception were weirder, further out [...] the film is Nolan's labyrinth all the way, and it's gratifying to experience a summer movie with large visual ambitions and with nothing more or less on its mind than (as Shakespeare said) a dream that hath no bottom."[114] Time's Richard Corliss wrote that the film's "noble intent is to implant one man's vision in the mind of a vast audience [...] The idea of moviegoing as communal dreaming is a century old.
With Inception, viewers have a chance to see that notion get a state-of-the-art update."[115] Kenneth Turan of the Los Angeles Times felt that Nolan was able to blend "the best of traditional and modern filmmaking. If you're searching for smart and nervy popular entertainment, this is what it looks like."[116] USA Today's Claudia Puig gave the film three-and-a-half out of four stars and felt that Nolan "regards his viewers as possibly smarter than they are—or at least as capable of rising to his inventive level. That's a tall order. But it's refreshing to find a director who makes us stretch, even occasionally struggle, to keep up."[117]

Not all reviewers gave the film positive reviews. New York magazine's David Edelstein said in his review that he had "no idea what so many people are raving about. It's as if someone went into their heads while they were sleeping and planted the idea that Inception is a visionary masterpiece and—hold on ... Whoa! I think I get it. The movie is a metaphor for the power of delusional hype—a metaphor for itself."[118] The New York Observer's Rex Reed said the film's development was "pretty much what we've come to expect from summer movies in general and Christopher Nolan movies in particular ... [it] doesn't seem like much of an accomplishment to me."[119] A. O. Scott of The New York Times commented "there is a lot to see in Inception, there is nothing that counts as genuine vision. Mr. Nolan's idea of the mind is too literal, too logical, and too rule-bound to allow the full measure of madness."[120] The New Yorker's David Denby considered the film to be "not nearly as much fun as Nolan imagined it to be", concluding that "Inception is a stunning-looking film that gets lost in fabulous intricacies, a movie devoted to its own workings and to little else."[44]

While some critics have tended to view the film as perfectly straightforward, and even criticize its overarching themes as "the stuff of torpid platitudes", online discussion has been much more positive.[121] Heated debate has centered on the ambiguity of the ending, with many critics, like Devin Faraci, making the case that the film is self-referential and tongue-in-cheek, both a film about film-making and a dream about dreams.[122] Other critics read Inception as Christian allegory and focus on the film's use of religious and water symbolism.[123] Yet other critics, such as Kristin Thompson, see less value in the ambiguous ending of the film and more in its structure and novel method of storytelling, highlighting Inception as a new form of narrative that revels in "continuous exposition".[49]

Several critics and scholars have noted that the film has many striking similarities to the 2006 anime film Paprika by Satoshi Kon (and Yasutaka Tsutsui's 1993 novel of the same name), including plot similarities, similar scenes, and similar characters, arguing that Inception was influenced by Paprika.[54][55][56][124][125] Several sources have also noted plot similarities between the film and the 2002 Uncle Scrooge comic The Dream of a Lifetime by Don Rosa.[126][127][128] The influence of Tarkovsky's Solaris on Inception was noted as well.[129][130]

Inception appeared on over 273 critics' lists of the top ten films of 2010, being picked as number one on at least 55 of those lists.[131] It was the second-most-mentioned film in both the top-ten lists and the number-one rankings, behind only The Social Network, standing alongside Toy Story 3, True Grit, The King's Speech, and Black Swan as the most critically acclaimed films of 2010.[131] Author Stephen King placed Inception at No.
3 in his list of the top 10 films of the year.[132] Filmmaker Denis Villeneuve cited it as among his favorite films of all time.[133]

Critics and publications who ranked the film first for that year included Richard Roeper of the Chicago Sun-Times, Kenneth Turan of the Los Angeles Times (tied with The Social Network and Toy Story 3), Tasha Robinson of The A.V. Club, Empire magazine, and Kirk Honeycutt of The Hollywood Reporter.[134] Inception was listed on many critics' top-ten lists.[135]

In March 2011, the film was voted by BBC Radio 1 and BBC Radio 1Xtra listeners as their ninth-favorite film of all time.[136] Producer Roger Corman cited Inception as an example of "great imagination and originality".[137] In 2012, Inception was ranked the 35th-best-edited film of all time by the Motion Picture Editors Guild.[138] In the same year, Total Film named it the most rewatchable movie of all time.[139] In 2014, Empire ranked Inception the tenth-greatest film ever made on its list of "The 301 Greatest Movies Of All Time", as voted by the magazine's readers,[140] while Rolling Stone magazine named it the second-best science fiction film since the turn of the century.[141] Inception was ranked 84th on Hollywood's 100 Favorite Films, a list compiled by The Hollywood Reporter in 2014, surveying "Studio chiefs, Oscar winners and TV royalty".[142] In 2016, Inception was voted the 51st-best film of the 21st century by the BBC, as picked by 177 film critics from around the world.[143] The film was included in the Visual Effects Society's list of "The Most Influential Visual Effects Films of All Time".[144] In 2019, Total Film named Inception the best film of the 2010s.[145] Many critics and media outlets included Inception in their rankings of the best films of the 2010s.[146][147][148][149][150][151] The film was included in Forbes magazine's list of the Top 150 Greatest Films of the 21st Century.[152]

In April 2014, The Daily Telegraph placed the title on its top-ten list of the most overrated films. The Telegraph's Tim Robey stated, "It's a criminal failing of the movie that it purports to be about people's dreams being invaded, but demonstrates no instinct at all for what a dream has ever felt like, and no flair for making us feel like we're in one, at any point."[153] The film won an informal poll by the Los Angeles Times as the most overrated movie of 2010.[154]

The film won many awards in technical categories, such as the Academy Awards for Best Cinematography, Best Sound Editing, Best Sound Mixing, and Best Visual Effects,[42] and the British Academy Film Awards for Best Production Design, Best Special Visual Effects and Best Sound.[155] In most of its artistic nominations, such as Film, Director, and Screenplay at the Oscars, BAFTAs and Golden Globes, the film was defeated by The Social Network or The King's Speech.[42][155][156] However, the film did win the two highest honors for a science fiction or fantasy film: the 2011 Bradbury Award for best dramatic production[157] and the 2011 Hugo Award for Best Dramatic Presentation (Long Form).[158]

Numerous pop and hip hop songs reference the film, including Common's "Blue Sky", N.E.R.D.'s "Hypnotize U", XV's "The Kick", Black Eyed Peas' "Just Can't Get Enough", Lil Wayne's "6 Foot 7 Foot", Jennifer Lopez's "On the Floor", and B.o.B's "Strange Clouds", while T.I. had Inception-based artwork on two of his mixtapes.
An instrumental track by Joe Budden is titled "Inception".[159] The animated series South Park parodies the film in the tenth episode of its fourteenth season, "Insheeption".[160] The film was also an influence for Ariana Grande's video for "No Tears Left to Cry".[161] "Lawnmower Dog", the second episode of the animated comedy show Rick and Morty, parodied the film.[162] The Simpsons episode "How I Wet Your Mother" spoofs Inception, with various scenes parodying moments from the film.[163] The showrunners of the television series The Flash said its season 4 finale was inspired by Inception.[164] In February 2020, American singer-songwriter Taylor Swift released a lyric video for her single "The Man", which featured visuals bearing a resemblance to the film. The song also mentions DiCaprio in its lyrics.[165]

The film's title has been colloquialized as the suffix -ception, which can be jokingly appended to a noun to indicate a layering, nesting, or recursion of the thing in question.[166]
https://en.wikipedia.org/wiki/Inception
The dream argument is the postulation that the act of dreaming provides preliminary evidence that the senses we trust to distinguish reality from illusion should not be fully trusted, and therefore, any state that is dependent on our senses should at the very least be carefully examined and rigorously tested to determine whether it is in fact reality.

While dreaming, one does not normally realize one is dreaming. On rarer occasions, the dream may be contained inside another dream, with the very act of realizing that one is dreaming being itself only a dream that one is not aware of having. This has led philosophers to wonder whether it is possible for one ever to be certain, at any given point in time, that one is not in fact dreaming, or whether indeed it could be possible for one to remain in a perpetual dream state and never experience the reality of wakefulness at all.

In Western philosophy this philosophical puzzle was referred to by Plato (Theaetetus 158b–d), Aristotle (Metaphysics 1011a6), and the Academic Skeptics.[1] It is now best known from René Descartes' Meditations on First Philosophy. The dream argument has become one of the most prominent skeptical hypotheses.

In Eastern philosophy this type of argument is sometimes referred to as the "Zhuangzi paradox":

He who dreams of drinking wine may weep when morning comes; he who dreams of weeping may in the morning go off to hunt. While he is dreaming he does not know it is a dream, and in his dream he may even try to interpret a dream. Only after he wakes does he know it was a dream. And someday there will be a great awakening when we know that this is all a great dream. Yet the stupid believe they are awake, busily and brightly assuming they understand things, calling this man ruler, that one herdsman—how dense! Confucius and you are both dreaming! And when I say you are dreaming, I am dreaming, too. Words like these will be labeled the Supreme Swindle. Yet, after ten thousand generations, a great sage may appear who will know their meaning, and it will still be as though he appeared with astonishing speed.[2]

The Yogachara philosopher Vasubandhu (4th to 5th century C.E.) referenced the argument in his "Twenty Verses on Appearance Only." The dream argument came to feature prominently in Mahayana and Tibetan Buddhist philosophy. Some schools of thought (e.g., Dzogchen) consider perceived reality to be literally unreal. As Chögyal Namkhai Norbu puts it: "In a real sense, all the visions that we see in our lifetime are like a big dream ..."[3] In this context, the term 'visions' denotes not only visual perceptions, but also appearances perceived through all senses, including sounds, smells, tastes, and tactile sensations, and operations on perceived mental objects.

Dreaming provides a springboard for those who question whether our own reality may be an illusion. The ability of the mind to be tricked into believing a mentally generated world is the "real world" means at least one variety of simulated reality is a common, even nightly, event.[4] Those who argue that the world is not simulated must concede that the mind—at least the sleeping mind—is not itself an entirely reliable mechanism for attempting to differentiate reality from illusion.[5]

Whatever I have accepted until now as most true has come to me through my senses. But occasionally I have found that they have deceived me, and it is unwise to trust completely those who have deceived us even once.
In the past, philosophers John Locke and Thomas Hobbes separately attempted to refute Descartes's account of the dream argument. Locke claimed that one cannot experience pain in dreams. Various scientific studies conducted within the last few decades have provided evidence against Locke's claim, concluding that pain in dreams can occur, though only on rare occasions.[7] Philosopher Ben Springett has said that Locke might respond to this by stating that the agonizing pain of stepping into a fire is not comparable to the pain of stepping into a fire in a dream. Hobbes claimed that dreams are susceptible to absurdity while the waking life is not.[8]

Many contemporary philosophers have attempted to refute dream skepticism in detail (see, e.g., Stone (1984)).[9] Ernest Sosa (2007) devoted a chapter of a monograph to the topic, in which he presented a new theory of dreaming and argued that his theory raises a new argument for skepticism, which he attempted to refute. In A Virtue Epistemology: Apt Belief and Reflective Knowledge, he states: "in dreaming we do not really believe; we only make-believe."[10] Jonathan Ichikawa (2008) and Nathan Ballantyne & Ian Evans (2010) have offered critiques of Sosa's proposed solution. Ichikawa argued that, as we cannot tell whether our beliefs in waking life are truly beliefs and not imaginings, as in a dream, we are still not able to tell whether we are awake or dreaming.

The dream hypothesis is also used to develop other philosophical concepts, such as Valberg's personal horizon: what this world would be internal to if this were all a dream.[11]

Norman Malcolm, in his monograph Dreaming (published in 1959), elaborated on Wittgenstein's question as to whether it really mattered if people who tell dreams "really had these images while they slept, or whether it merely seems so to them on waking". He argues that the sentence "I am asleep" is a senseless form of words; that dreams cannot exist independently of the waking impression; and that skepticism based on dreaming "comes from confusing the historical and dream telling senses...[of]...the past tense" (page 120). In the chapter "Do I Know I Am Awake?", he argues that we do not have to say "I know that I am awake" simply because it would be absurd to deny that one is awake.

Philosopher Daniel Dennett expanded on this idea with his cassette tape hypothesis of dreaming.[12] He conjectured that dreams are not real conscious experiences, but instead pseudo-memories that emerge upon awakening from sleep. These pseudo-memories do not correspond to any real dream experiences, and are instead strictly fabrications of experiences that never occurred.

Philosopher Jennifer Windt has counter-argued against dream skepticism, drawing on the psychology of lucid dreaming, and has advanced a conceptual framework of dreaming as real imaginative experience.[13]

Malcolm, N. (1959). Dreaming. London: Routledge & Kegan Paul (2nd impression, 1962).
https://en.wikipedia.org/wiki/Dream_argument
In the psychology subfield of oneirology, a lucid dream is a type of dream wherein the dreamer realizes that they are dreaming during the dream. The capacity to have lucid dreams is a trainable cognitive skill.[1][2] During a lucid dream, the dreamer may gain some amount of volitional control over the dream characters, narrative, or environment, although this control of dream content is not the salient feature of lucid dreaming.[3][4][5][6] An important distinction is that lucid dreaming is a type of dream distinct from other types such as prelucid dreams and vivid dreams, although prelucid dreams are a precursor to lucid dreams, and lucid dreams are often accompanied by enhanced dream vividness. Lucid dreams are also distinct from other lucid boundary sleep states such as lucid hypnagogia or lucid hypnopompia.

In formal psychology, lucid dreaming has been studied and reported for many years. Prominent figures from ancient to modern times have been fascinated by lucid dreams and have sought ways to better understand their causes and purpose. Many different theories have emerged as a result of scientific research on the subject.[7][8] Further developments in psychological research have pointed to ways in which this form of dreaming may be utilized as a therapeutic technique.[9]

The term lucid dream was coined by Dutch author and psychiatrist Frederik van Eeden in his 1913 article A Study of Dreams,[6] though descriptions of dreamers being aware that they are dreaming predate the article.[6] Psychologist Stephen LaBerge is widely considered the progenitor and leading pioneer of modern lucid dreaming research.[10] He is the founder of the Lucidity Institute at Stanford University.

Paul Tholey laid the epistemological basis for the research of lucid dreams, proposing seven conditions of clarity that a dream must fulfill in order to be defined as a lucid dream.[11][12][13] Later, in 1992, a study by Deirdre Barrett examined whether lucid dreams contained four "corollaries" of lucidity; Barrett found that less than a quarter of lucidity accounts exhibited all four.[14]

Subsequently, Stephen LaBerge studied the prevalence among lucid dreams of the ability to control the dream scenario, and found that while dream control and dream awareness are correlated, neither requires the other. LaBerge found dreams that exhibit one clearly without the capacity for the other. He also found dreams where, although the dreamer is lucid and aware they could exercise control, they choose simply to observe.[3]

The practice of lucid dreaming is central to both the ancient Indian Hindu practice of Yoga nidra and the Tibetan Buddhist practice of dream yoga. The cultivation of such awareness was a common practice among early Buddhists.[15]

Early references to the phenomenon are also found in ancient Greek writing.
For example, the philosopher Aristotle wrote: "often when one is asleep, there is something in consciousness which declares that what then presents itself is but a dream."[16] Meanwhile, the physician Galen of Pergamon used lucid dreams as a form of therapy.[17] In addition, a letter written by Saint Augustine of Hippo in AD 415 tells the story of a dreamer, Doctor Gennadius, and refers to lucid dreaming.[18][19]

Philosopher and physician Sir Thomas Browne (1605–1682) was fascinated by dreams and described his own ability to lucid dream in his Religio Medici, stating: "...yet in one dream I can compose a whole Comedy, behold the action, apprehend the jests and laugh my self awake at the conceits thereof."[20]

Samuel Pepys, in his diary entry for 15 August 1665, records a dream, stating: "I had my Lady Castlemayne in my arms and was admitted to use all the dalliance I desired with her, and then dreamt that this could not be awake, but that it was only a dream."[21]

In 1867, the French sinologist Marie-Jean-Léon, Marquis d'Hervey de Saint Denys, anonymously published Les Rêves et les Moyens de les Diriger; Observations Pratiques ("Dreams and the ways to direct them; practical observations"), in which he describes his own experiences of lucid dreaming and proposes that it is possible for anyone to learn to dream consciously.[22][23]

In 1913, Dutch psychiatrist and writer Frederik (Willem) van Eeden (1860–1932) coined the term "lucid dream" in an article entitled "A Study of Dreams".[24][16][23] Some have suggested that the term is a misnomer because Van Eeden was referring to a phenomenon more specific than a lucid dream.[25] Van Eeden intended the term lucid to denote "having insight", as in the phrase a lucid interval applied to someone in temporary remission from a psychosis, rather than as a reference to the perceptual quality of the experience, which may or may not be clear and vivid.[26]

Clinical psychologist Kristen LaMarca outlined four stages toward mastering the skill of lucid dreaming.[27] Progression along the skill levels is akin to a maturing of the practitioner's discipline, methodology, and application.

In 1968, Celia Green analyzed the main characteristics of such dreams, reviewing previously published literature on the subject and incorporating new data from participants of her own. She concluded that lucid dreams were a category of experience quite distinct from ordinary dreams and said they were associated with rapid eye movement sleep (REM sleep). Green was also the first to link lucid dreams to the phenomenon of false awakenings,[28] which has since been corroborated by more recent studies.[29]

In 1973, the National Institute of Mental Health reported that researchers at the University of California, San Francisco, were able to train sleeping subjects to recognize they were in REM dreaming and indicate this by pressing microswitches on their thumbs. Using tones and mild shocks as cues, the experiments showed that the subjects were able to signal knowledge of their various sleep stages, including dreaming.[30]

In 1975, Dr. Keith Hearne had the idea to exploit the nature of rapid eye movements (REM) to allow a dreamer to send a message directly from dreams to the waking world. Working with an experienced lucid dreamer (Alan Worsley), he eventually succeeded in recording (via the use of an electrooculogram, or EOG) a pre-defined set of eye movements signaled from within Worsley's lucid dream. This occurred at around 8 am on the morning of April 12, 1975.
Hearne's EOG experiment was formally recognized through publication in the journal of the Society for Psychical Research. Lucid dreaming was subsequently researched by asking dreamers to perform pre-determined physical responses while experiencing a dream, including eye movement signals.[31][32]

In 1980, Stephen LaBerge at Stanford University developed such techniques as part of his doctoral dissertation.[33] In 1985, LaBerge performed a pilot study showing that time perception while counting during a lucid dream is about the same as during waking life. Lucid dreamers counted out ten seconds while dreaming, signaling the start and the end of the count with a pre-arranged eye signal measured with electrooculogram recording.[34][35][36] LaBerge's results were confirmed by the German researchers D. Erlacher and M. Schredl in 2004.[37] In a further study by Stephen LaBerge, four subjects were compared while either singing or counting during dreaming. LaBerge found that the right hemisphere was more active during singing and the left hemisphere was more active during counting.[38]

Neuroscientist J. Allan Hobson has hypothesized what might be occurring in the brain while lucid. The first step to lucid dreaming is recognizing that one is dreaming. This recognition might occur in the dorsolateral prefrontal cortex, which is one of the few areas deactivated during REM sleep and where working memory occurs. Once this area is activated and the recognition of dreaming occurs, the dreamer must be cautious to let the dream continue while remaining conscious enough to remember that it is a dream. While maintaining this balance, the amygdala and parahippocampal cortex might be less intensely activated.[39] To sustain the intensity of the dream hallucinations, it is expected that the pons and the parieto-occipital junction stay active.[40]

Using electroencephalography (EEG) and other polysomnographical measurements, LaBerge and others have shown that lucid dreams begin in the rapid eye movement (REM) stage of sleep.[41][42][43] LaBerge also proposes that there are higher amounts of beta-1 frequency band (13–19 Hz) brain wave activity experienced by lucid dreamers, indicating an increased amount of activity in the parietal lobes that makes lucid dreaming a conscious process.[44]

Paul Tholey, a German Gestalt psychologist and a professor of psychology and sports science, originally studied dreams in order to resolve the question of whether one dreams in colour or in black and white. In his phenomenological research, he outlined an epistemological frame using critical realism.[45] Tholey instructed his subjects to continuously suspect waking life to be a dream, so that such a habit would manifest itself during dreams. He called this technique for inducing lucid dreams the Reflexionstechnik (reflection technique).[46] Subjects learned to have such lucid dreams; they observed their dream content and reported it soon after awakening. Tholey could thus examine the cognitive abilities of dream figures.[47] Nine trained lucid dreamers were directed to set other dream figures arithmetic and verbal tasks during lucid dreaming. Dream figures who agreed to perform the tasks proved more successful in verbal than in arithmetic tasks. Tholey discussed his scientific results with Stephen LaBerge, who has a similar approach.[48]

A study was conducted by Stephen LaBerge and other scientists to see whether it is possible to attain the ability to lucid dream through a drug. In 2018, galantamine was given to 121 patients in a double-blind, placebo-controlled trial, the only one of its kind.
Some participants found as much as a 42 percent increase in their ability to lucid dream, compared to self-reports from the past six months, and ten people experienced a lucid dream for the first time. It is theorized that galantamine allows acetylcholine to build up, leading to greater recollection and awareness during dreaming.[49]

Teams of cognitive scientists have established real-time two-way communication with people undergoing a lucid dream. During dreaming they were able to consciously communicate with experimenters via eye movements[50] or facial muscle signals, and were able to comprehend complex questions and use working memory. Such interactive lucid dreaming could be a new approach for the scientific exploration of the dream state and could have applications for learning and creativity.[51][52][53][54] Researchers have also demonstrated that individuals in a lucid dream can control and respond to feedback within a virtual environment.[55]

Other researchers suggest that lucid dreaming is not a state of sleep, but one of brief wakefulness, or "micro-awakening".[56][57] Experiments by Stephen LaBerge used "perception of the outside world" as a criterion for wakefulness while studying lucid dreamers, and their sleep state was corroborated with physiological measurements.[32] LaBerge's subjects experienced their lucid dreams while in a state of REM, which critics felt may mean that the subjects were fully awake. J. Allan Hobson responded that lucid dreaming must be a state of both waking and dreaming.[58]

Philosopher Norman Malcolm was a proponent of dream skepticism.[59] He has argued against the possibility of checking the accuracy of dream reports, pointing out that "the only criterion of the truth of a statement that someone has had a certain dream is, essentially, his saying so."[60] Yet dream reports are not the only evidence that some inner drama is being played out during REM sleep. Electromyography on speech and body muscles has demonstrated the sleeping body covertly walking, gesturing and talking while in REM.[61][62]

In 2016, a meta-analytic study by David Saunders and colleagues,[63] covering 34 lucid dreaming studies from a period of 50 years, demonstrated that 55% of a pooled sample of 24,282 people claimed to have experienced lucid dreams at least once in their lifetime. Furthermore, of those who stated they did experience lucid dreams, approximately 23% reported experiencing them on a regular basis, as often as once a month or more.

In a 2004 study on lucid dream frequency and personality, a moderate correlation between nightmare frequency and frequency of lucid dreaming was demonstrated. Some lucid dreamers also reported that nightmares are a trigger for dream lucidity.[64] Previous studies have reported that lucid dreaming is more common among adolescents than adults.[65]

A 2015 study by Julian Mutz and Amir-Homayoun Javadi showed that people who had practiced meditation for a long time tended to have more lucid dreams. The authors claimed that "lucid dreaming is a hybrid state of consciousness with features of both waking and dreaming" in a review they published in Neuroscience of Consciousness in 2017.[7] Mutz and Javadi found that during lucid dreaming there is an increase in activity of the dorsolateral prefrontal cortex, the bilateral frontopolar prefrontal cortex, the precuneus, the inferior parietal lobules, and the supramarginal gyrus. All are brain regions associated with higher cognitive functions, including working memory, planning, and self-consciousness.
The researchers also found that during a lucid dream, "levels of self-determination" were similar to those that people experience during states of wakefulness. They also found that lucid dreamers can only control limited aspects of their dream at once. Mutz and Javadi further stated that, by studying lucid dreaming, scientists could learn more about various types of consciousness, which are otherwise difficult to separate and research.[66]

It has been suggested that those who suffer from nightmares could benefit from the ability to be aware they are indeed dreaming.[2][67] A pilot study performed in 2006 showed that lucid dreaming therapy treatment was successful in reducing nightmare frequency. This treatment consisted of exposure to the idea, mastery of the technique, and lucidity exercises. It was not clear what aspects of the treatment were responsible for the success of overcoming nightmares, though the treatment as a whole was said to be successful.[68]

Australian psychologist Milan Colic has explored the application of principles from narrative therapy to clients' lucid dreams, to reduce the impact not only of nightmares during sleep but also of depression, self-mutilation, and other problems in waking life.[69] Colic found that therapeutic conversations could reduce the distressing content of dreams, while understandings about life—and even characters—from lucid dreams could be applied to clients' lives with marked therapeutic benefits.[70]

Psychotherapists have applied lucid dreaming as a part of therapy. Studies have shown that, by inducing a lucid dream, recurrent nightmares can be alleviated. It is unclear whether this alleviation is due to lucidity or to the ability to alter the dream itself. A 2006 study by Victor Spoormaker and Van den Bout evaluated the validity of lucid dreaming treatment (LDT) in chronic nightmare sufferers.[71] LDT is composed of exposure, mastery and lucidity exercises. The results revealed that the nightmare frequency of the treatment groups had decreased. In another study, Spoormaker, Van den Bout, and Meijer (2003) investigated lucid dreaming treatment for nightmares by testing eight subjects who received a one-hour individual session consisting of lucid dreaming exercises.[72] The results of the study revealed that the nightmare frequency had decreased and sleep quality had slightly increased.

Holzinger, Klösch, and Saletu managed a psychotherapy study under the working name of 'Cognition during dreaming—a therapeutic intervention in nightmares', which included 40 subjects, men and women, 18–50 years old, whose quality of life was significantly impaired by nightmares.[73] The test subjects were administered Gestalt group therapy, and 24 of them were also taught by Holzinger to enter the state of lucid dreaming, purposefully, in order to change the course of their nightmares. The subjects then reported a diminishment of their nightmare prevalence from 2–3 times a week to 2–3 times per month.

In her book The Committee of Sleep, Deirdre Barrett describes how some experienced lucid dreamers have learned to remember specific practical goals, such as artists looking for inspiration who seek out a show of their own work once they become lucid, or computer programmers who look for a screen with their desired code.
However, most of these dreamers had many experiences of failing to recall waking objectives before gaining this level of control.[74]

Exploring the World of Lucid Dreaming by Stephen LaBerge and Howard Rheingold (1990) discusses creativity within dreams and lucid dreams, including testimonials from a number of people who claim they have used the practice of lucid dreaming to help them solve a number of creative issues, from an aspiring parent thinking of potential baby names to a surgeon practicing surgical techniques. The authors discuss how creativity in dreams could stem from "conscious access to the contents of our unconscious minds"; access to "tacit knowledge"—the things we know but can't explain, or things we know but are unaware that we know.[75]

The book The Dreams Behind the Music by Craig Webb (2016) details the lucid dreams of a number of musical artists, including how they are able not just to hear but also to compose, mix, arrange, practice, and perform music while conscious within their dreams.[76]

Though lucid dreaming can be beneficial to a number of aspects of life, some risks have been suggested. Those struggling with certain mental illnesses could find it hard to tell the difference between reality and the lucid dream (psychosis).[77][78]

A very small percentage of people may experience sleep paralysis, which can sometimes be confused with lucid dreaming. Although from the outside the two seem quite similar, a few distinct differences can help differentiate them. A person usually experiences sleep paralysis when they partially wake up in REM atonia, a state in which the person is partially paralyzed and cannot move their limbs. When in sleep paralysis, people may also experience hallucinations. Although such hallucinations cannot cause physical damage, they may still be frightening. There are three common types of hallucinations:[79] an intruder in the same room, a crushing feeling on one's chest or back, and a feeling of flying or levitating. About 7.6% of the general population have experienced sleep paralysis at least once.[80] Exiting sleep paralysis to a waking state can be achieved by intently focusing on a part of the body, such as a finger, and wiggling it, then extending the movement to the hand, the arm, and so on, until the person is fully awake.[81]

Long-term risks of lucid dreaming have not been extensively studied,[82][83][84] although many people have reported lucid dreaming for many years without any adverse effects. In 2018, researchers at the Wisconsin Institute for Sleep and Consciousness conducted a study that concluded individuals who lucid dream more frequently have a more active and well-connected prefrontal cortex.[85]
https://en.wikipedia.org/wiki/Lucid_dream
Sleep paralysis is a state, during waking up or falling asleep, in which a person is conscious but in a state of complete, full-body paralysis.[1][2] During an episode, the person may hallucinate (hear, feel, or see things that are not there), which often results in fear.[1][3] Episodes generally last no more than a few minutes.[2] It can recur multiple times or occur as a single episode.[1][3]

The condition may occur in those who are otherwise healthy or those with narcolepsy, or it may run in families as a result of specific genetic changes. The condition can be triggered by sleep deprivation, psychological stress, or abnormal sleep cycles. The underlying mechanism is believed to involve a dysfunction in REM sleep.[2] Diagnosis is based on a person's description. Other conditions that can present similarly include narcolepsy, atonic seizure, and hypokalemic periodic paralysis.[2] Treatment options for sleep paralysis have been poorly studied. It is recommended that people be reassured that the condition is common and generally not serious. Other efforts that may be tried include sleep hygiene, cognitive behavioral therapy, and antidepressants.[1]

Between 8% and 50% of people experience sleep paralysis at some point during their lifetime.[2][4] About 5% of people have regular episodes. Males and females are affected equally.[2] Sleep paralysis has been described throughout history. It is believed to have played a role in the creation of stories about alien abduction and other paranormal events.[1]

The main symptom of sleep paralysis is being unable to move or speak during awakening.[1] Imagined sounds such as humming, hissing, static, zapping and buzzing noises are reported during sleep paralysis.[5] Other sounds such as voices, whispers and roars are also experienced. One may also feel pressure on the chest and intense pain in the head during an episode.[6] These symptoms are usually accompanied by intense emotions such as fear and panic.[7] People also have sensations of being dragged out of bed or of flying, numbness, and feelings of electric tingles or vibrations running through their body.[8]

Sleep paralysis may include hallucinations, such as an intruding presence or dark figure in the room, commonly known as sleep paralysis demons. It may also include a sensation of suffocation and a sense of terror, accompanied by a feeling of pressure on one's chest and difficulty breathing.[9]

The pathophysiology of sleep paralysis has not been concretely identified, although there are several theories about its cause.[10] The first of these stems from the understanding that sleep paralysis is a parasomnia resulting from dysfunctional overlap of the REM and waking stages of sleep.[11] Polysomnographic studies have found that individuals who experience sleep paralysis have shorter REM sleep latencies than normal, along with shortened NREM and REM sleep cycles and fragmentation of REM sleep. This supports the observation that disturbance of regular sleeping patterns can precipitate an episode of sleep paralysis, because fragmentation of REM sleep commonly occurs when sleep patterns are disrupted and has now been seen in combination with sleep paralysis.[12]

Another major theory is that the neural functions that regulate sleep are out of balance, causing different sleep states to overlap. In this case, cholinergic sleep "on" neural populations are hyperactivated and the serotonergic sleep "off" neural populations are under-activated.
As a result, the serotonergic neural populations, the cells capable of sending the signals that would allow complete arousal from the sleep state, have difficulty overcoming the signals sent by the cells that keep the brain in the sleep state. During normal REM sleep, the threshold for a stimulus to cause arousal is greatly elevated. Under normal conditions, the medial and vestibular nuclei and cortical, thalamic, and cerebellar centers coordinate things such as head and eye movement and orientation in space.[8]

In individuals reporting sleep paralysis, there is almost no blocking of exogenous stimuli, which means it is much easier for a stimulus to arouse the individual. The vestibular nuclei in particular have been identified as being closely related to dreaming during the REM stage of sleep.[8] According to this hypothesis, vestibular-motor disorientation, unlike hallucinations, arises from completely endogenous sources of stimuli.[13]

If the effects of sleep "on" neural populations cannot be counteracted, characteristics of REM sleep are retained upon awakening. Common consequences of sleep paralysis include headaches, muscle pains or weakness, or paranoia. As the correlation with REM sleep suggests, the paralysis is not complete: use of EOG traces shows that eye movement is still possible during such episodes; however, the individual experiencing sleep paralysis is unable to speak.[14]

Research has found a genetic component in sleep paralysis.[15] The characteristic fragmentation of REM sleep and hypnopompic and hypnagogic hallucinations have a heritable component in other parasomnias, which lends credence to the idea that sleep paralysis is also genetic. Twin studies have shown that if one twin of a monozygotic pair (identical twins) experiences sleep paralysis, the other twin is very likely to experience it as well.[16] The identification of a genetic component means that there is some sort of disruption of a function at the physiological level. Further studies must be conducted to determine whether there is a mistake in the signaling pathway for arousal, as suggested by the first theory presented, or whether the regulation of melatonin or the neural populations themselves have been disrupted.

Several types of hallucinations have been linked to sleep paralysis: the belief that there is an intruder in the room, the feeling of a presence, and the sensation of floating. One common hallucination is the presence of an incubus. A neurological hypothesis is that in sleep paralysis the cerebellum, which usually coordinates body movement and provides information on body position, experiences a brief myoclonic spike in brain activity, inducing a floating sensation.[13]

The intruder and incubus hallucinations correlate highly with one another, and correlate moderately with the third hallucination, vestibular-motor disorientation, also known as out-of-body experiences,[13] which differs from the other two in not involving the threat-activated vigilance system.[17]

A hyper-vigilant state created in the midbrain may further contribute to hallucinations.[8] More specifically, the emergency response is activated in the brain when individuals wake up paralyzed and feel vulnerable to attack. This helplessness can intensify the effects of the threat response well above the level typical of normal dreams, which could explain why such visions during sleep paralysis are so vivid.
The threat-activated vigilance system is a protective mechanism that differentiates between dangerous situations and determines whether the fear response is appropriate.[13] The hyper-vigilance response can lead to the creation of endogenous stimuli that contribute to the perceived threat.[8] A similar process may explain hallucinations, with slight variations, in which an evil presence is perceived by the subject to be attempting to suffocate them, either by pressing heavily on the chest or by strangulation. A neurological explanation holds that this results from a combination of the threat-activated vigilance system and the muscle paralysis associated with sleep paralysis that removes voluntary control of breathing. Several features of REM breathing patterns exacerbate the feeling of suffocation.[13] These include shallow rapid breathing, hypercapnia, and slight blockage of the airway, which is a symptom prevalent in sleep apnea patients.[8]

According to this account, the subjects attempt to breathe deeply and find themselves unable to do so, creating a sensation of resistance, which the threat-activated vigilance system interprets as an unearthly being sitting on their chest, threatening suffocation.[8] The sensation of entrapment causes a feedback loop: the fear of suffocation increases as a result of continued helplessness, causing the subjects to struggle to end the sleep paralysis episode.[13]

Sleep paralysis is mainly diagnosed via clinical interview and by ruling out other potential sleep disorders that could account for the feelings of paralysis.[10][11] Several measures are available to reliably diagnose[17][18] or screen (Munich Parasomnia Screening)[19] for recurrent isolated sleep paralysis. Episodes of sleep paralysis can occur in the context of several medical conditions (e.g., narcolepsy, hypokalemia). When episodes occur independently of these conditions or substance use, it is termed "isolated sleep paralysis" (ISP).[18] When ISP episodes are more frequent and cause clinically significant distress or interference, it is classified as "recurrent isolated sleep paralysis" (RISP). Episodes of sleep paralysis, regardless of classification, are generally short (1–6 minutes), but longer episodes have also been documented.[8]

It can be difficult to differentiate between cataplexy brought on by narcolepsy and true sleep paralysis, because the two phenomena are physically indistinguishable. The best way to differentiate between the two is to note when the attacks occur most often: narcolepsy attacks are more common when the individual is falling asleep, while ISP and RISP attacks are more common upon awakening.[17] A number of other conditions can present similarly.[20]

Several circumstances have been identified that are associated with an increased risk of sleep paralysis. These include insomnia, sleep deprivation, an erratic sleep schedule, stress, and physical fatigue. It is also believed that there may be a genetic component in the development of RISP, because there is a high concurrent incidence of sleep paralysis in monozygotic twins.[16] Sleeping in the supine position has been found to be an especially prominent instigator of sleep paralysis.[9][21]

Sleeping in the supine position is believed to make the sleeper more vulnerable to episodes of sleep paralysis because in this sleeping position it is possible for the soft palate to collapse and obstruct the airway. This is a possibility regardless of whether the individual has been diagnosed with sleep apnea or not.
There may also be a greater rate of microarousals while sleeping in the supine position because there is a greater amount of pressure being exerted on the lungs due to gravity.[21] While many factors can increase the risk for ISP or RISP, they can be avoided with minor lifestyle changes.[11]

Medical treatment starts with education about sleep stages and the inability to move muscles during REM sleep. People should be evaluated for narcolepsy if symptoms persist.[22] The safest treatment for sleep paralysis is for people to adopt healthier sleeping habits. However, in more serious cases tricyclic antidepressants or selective serotonin reuptake inhibitors (SSRIs) may be used. Most people tend to overcome sleep paralysis by being woken up through touch or movement.[23] Despite the fact that these treatments are prescribed, no drug has currently been found that completely interrupts episodes of sleep paralysis the majority of the time.[24]

Though no large trials have taken place which focus on the treatment of sleep paralysis, several drugs show promise in case studies. Two trials of GHB for people with narcolepsy demonstrated reductions in sleep paralysis episodes.[25] Pimavanserin has been proposed as a possible candidate for future studies in treating sleep paralysis.[26]

Some of the earliest work in treating sleep paralysis was done using a cognitive-behavior therapy called CA-CBT. The work focuses on psycho-education and modifying catastrophic cognitions about the sleep paralysis attack.[27][28] This approach has previously been used to treat sleep paralysis in Egypt, although clinical trials are lacking.[29]

The first published psychosocial treatment for recurrent isolated sleep paralysis was cognitive-behavior therapy for isolated sleep paralysis (CBT-ISP).[18] It begins with self-monitoring of symptoms, cognitive restructuring of maladaptive thoughts relevant to ISP (e.g., "the paralysis will be permanent"), and psychoeducation about the nature of sleep paralysis. Prevention techniques include ISP-specific sleep hygiene and the preparatory use of various relaxation techniques (e.g., diaphragmatic breathing, mindfulness, progressive muscle relaxation, meditation). Episode disruption techniques[30] are first practiced in session and then applied during actual attacks. No controlled trial of CBT-ISP has yet been conducted to prove its effectiveness.

Sleep paralysis is experienced equally by males and females.[4][31] Lifetime prevalence rates derived from 35 aggregated studies indicate that approximately 8% of the general population, 28% of students, and 32% of psychiatric patients experience at least one episode of sleep paralysis at some point in their lives.[4] Rates of recurrent sleep paralysis are not as well known, but 15–45% of those with a lifetime history of sleep paralysis may meet the diagnostic criteria for recurrent isolated sleep paralysis.[17][10] In surveys from Canada, China, England, Japan and Nigeria, 20% to 60% of individuals reported having experienced sleep paralysis at least once in their lifetime.[7] In general, non-whites appear to experience sleep paralysis at higher rates than whites, but the magnitude of the difference is rather small.[4] Approximately 36% of the general population who experience isolated sleep paralysis develop it between 25 and 44 years of age.[32]

Isolated sleep paralysis is commonly seen in patients who have been diagnosed with narcolepsy.
Approximately 30–50% of people who have been diagnosed with narcolepsy have experienced sleep paralysis as an auxiliary symptom. A majority of the individuals who have experienced sleep paralysis have sporadic episodes that occur once a month to once a year. Only 3% of individuals who experience sleep paralysis that is not associated with a neuromuscular disorder have nightly episodes.[32]

The original definition of sleep paralysis was codified by Samuel Johnson in his A Dictionary of the English Language as nightmare, a term that evolved into the modern definition. The term sleep paralysis was first used by the British neurologist S. A. K. Wilson in his 1928 dissertation, The Narcolepsies.[33] Such sleep paralysis was widely considered the work of demons, and more specifically incubi, which were thought to sit on the chests of sleepers. In Old English, the name for these beings was mare or mære (from a proto-Germanic *marōn, cf. Old Norse mara), whence comes the mare in the word nightmare. The word might be cognate to Greek Marōn (in the Odyssey) and Sanskrit Māra.

Although the core features of sleep paralysis (e.g., atonia, a clear sensorium, and frequent hallucinations) appear to be universal, the ways in which they are experienced vary according to time, place, and culture.[9][34] Over 100 terms have been identified for these experiences.[18] Some scientists have proposed sleep paralysis as an explanation for reports of paranormal and spiritual phenomena such as ghosts,[35][36] alien visits,[37] demons or demonic possession,[9][38] alien abduction experiences,[39][40] the night hag and shadow people haunting.[10][13]

According to some scientists, culture may be a major factor in shaping sleep paralysis.[38] When sleep paralysis is interpreted through a particular cultural filter, it may take on greater salience. For example, if sleep paralysis is feared in a certain culture, this fear could lead to conditioned fear and thus worsen the experience, in turn leading to higher rates.[9][38] Consistent with this idea, high rates and long durations of immobility during sleep paralysis have been found in Egypt, where there are elaborate beliefs about sleep paralysis involving malevolent spirit-like creatures, the jinn.[38]

Research has found that sleep paralysis is associated with great fear and fear of impending death in 50% of sufferers in Egypt. A study comparing rates and characteristics of sleep paralysis in Egypt and Denmark found that the phenomenon is three times more common in Egypt than in Denmark.[38] In Denmark, unlike Egypt, there are no elaborate supernatural beliefs about sleep paralysis, and the experience is often interpreted as an odd physiological event, with overall shorter sleep paralysis episodes and fewer people (17%) fearing that they could die from it.[34]

The night hag is a generic name for a folkloric creature found in cultures around the world, and which is used to explain the phenomenon of sleep paralysis. A common description is that a person feels the presence of a supernatural malevolent being which immobilizes the person as if standing on the chest.[41] This phenomenon goes by many names.

In Albanian folk beliefs, Mokthi is believed to be a male spirit with a golden fez hat who appears to women who are usually tired or suffering and stops them from moving. It is believed that if they can take his golden hat, he will grant them a wish, but then he will visit them frequently, although he is harmless.
There are talismans said to provide protection from Mokthi; one is to put one's husband's hat near the pillow while sleeping. Mokthi or Makthi in Albanian means "nightmare".[42] In Bengali folklore, sleep paralysis is believed to be caused by a supernatural entity called Boba (Bengali: বোবা, lit. 'dumb'). Boba attacks a person by strangling him when the person sleeps in a supine position. In Bengal, the phenomenon is called Bobay Dhora (Bengali: বোবায় ধরা, lit. 'Struck by Boba').[43] Sleep paralysis among Cambodians is known as "the ghost pushes you down," and entails the belief in dangerous visitations from deceased relatives.[36] In Egypt, sleep paralysis is conceptualized as a terrifying jinn attack.[34] In the different regions of Italy, there are many examples of supernatural beings associated with sleep paralysis. In the regions of Marche and Abruzzo, it is referred to as a Pandafeche or pantafica attack;[9] the Pandafeche usually refers to an evil witch, sometimes a ghostlike spirit or a terrifying catlike creature, that mounts on the chest of the victim and tries to harm them. The only way to avoid her is to keep a bag of sand or beans close to the bed, so that the witch will stop to count how many beans or sand-grains are inside it. A similar tradition is present in Sardinian folklore, where the Ammuntadore is known as a creature that mounts on people's chests during their sleep to give them nightmares, and it can change its shape according to the person's fears. In Northern Italy, specifically in the Tyrol area, the Trud is a witch that sits on people's chests at night, making them unable to breathe; to chase her away, people should make the sign of the Cross, no small struggle in a state of paralysis.[44] A similar folklore is present in the Sannio area, around the city of Benevento, where the witch is called Janara.[45] In Southern Italy, sleep paralysis is usually explained by the presence of a sprite standing on the sleeper's chest; if the person manages to catch the sprite (or steal his hat), the sprite, in exchange for his freedom (or to have his hat back), will reveal the hiding place of a rich treasure. This sprite has different names in different regions of Italy: Monaciello in Campania, Monachicchio in Basilicata, Laurieddhu or Scazzamurill in Apulia, and Mazzmuredd in Molise.[45] In Newfoundland, in eastern Canada, sleep paralysis is referred to as the Old Hag,[35][46] and victims of a hagging are said to be hag-ridden upon awakening.[47] Victims report being completely conscious but unable to speak or move, and report a person or an animal which sits upon their chest.[48] Despite the name, the attacker can be either male or female.[49] Some suggested cures or preventions for the Old Hag include sleeping with a Bible under the pillow,[48] calling the sleeper's name backwards,[50] or, in an extreme example, sleeping with a shingle or board embedded with nails strapped to the chest.[51] This object was called a Hag Board.[52] The Old Hag is well enough known in the province to be a pop culture figure, appearing in films and plays[53] as well as in crafted objects.[54] Nigeria[55] has myriad interpretations of the cause of sleep paralysis, owing to the numerous cultures and belief systems that exist there. Sleep paralysis is sometimes interpreted as space alien abduction in the United States.[56] Various forms of magic and spiritual possession were also advanced as causes in literature. In nineteenth-century Europe, the vagaries of diet were thought to be responsible.
For example, in Charles Dickens's A Christmas Carol, Ebenezer Scrooge attributes the ghost he sees to "... an undigested bit of beef, a blot of mustard, a crumb of cheese, a fragment of an underdone potato..." In a similar vein, the Household Cyclopedia (1881) offers the following advice about nightmares: Great attention is to be paid to regularity and choice of diet. Intemperance of every kind is hurtful, but nothing is more productive of this disease than drinking bad wine. Of eatables those which are most prejudicial are all fat and greasy meats and pastry. Moderate exercise contributes in a superior degree to promote the digestion of food and prevent flatulence; those, however, who are necessarily confined to a sedentary occupation, should particularly avoid applying themselves to study or bodily labor immediately after eating. Going to bed before the usual hour is a frequent cause of night-mare, as it either occasions the patient to sleep too long or to lie long awake in the night. Passing a whole night or part of a night without rest likewise gives birth to the disease, as it occasions the patient, on the succeeding night, to sleep too soundly. Indulging in sleep too late in the morning, is an almost certain method to bring on the paroxysm, and the more frequently it returns, the greater strength it acquires; the propensity to sleep at this time is almost irresistible.[57] J. M. Barrie, the author of the Peter Pan stories, may have had sleep paralysis. He said of himself, "In my early boyhood it was a sheet that tried to choke me in the night."[58] He also described several incidents in the Peter Pan stories that indicate that he was familiar with an awareness of a loss of muscle tone whilst in a dream-like state. For example, Maimie is asleep but calls out "What was that....It is coming nearer! It is feeling your bed with its horns-it is boring for [into] you",[59] and when the Darling children were dreaming of flying, Barrie says "Nothing horrid was visible in the air, yet their progress had become slow and laboured, exactly as if they were pushing their way through hostile forces. Sometimes they hung in the air until Peter had beaten on it with his fists."[60] Barrie describes many parasomnias and neurological symptoms in his books and uses them to explore the nature of consciousness from an experiential point of view.[61] The Nightmare is a 2015 documentary that discusses the causes of sleep paralysis as seen through extensive interviews with participants, and the experiences are re-enacted by professional actors. In synopsis, it proposes that such cultural phenomena as alien abduction, the near-death experience and shadow people can, in many cases, be attributed to sleep paralysis. The "real-life" horror film debuted at the Sundance Film Festival on January 26, 2015, and premiered in theatres on June 5, 2015.[62]
https://en.wikipedia.org/wiki/Sleep_paralysis
TheZhuangzi(historically romanizedChuang Tzŭ) is an ancient Chinese text that is one of the two foundational texts ofTaoism, alongside theTao Te Ching. It was written during the lateWarring States period(476–221 BC) and is named for its traditional author,Zhuang Zhou, who is customarily known as "Zhuangzi" ("Master Zhuang"). TheZhuangziconsists of stories and maxims that exemplify the nature of the ideal Taoist sage. It recounts many anecdotes, allegories, parables, and fables, often expressed with irreverence or humor. Recurring themes include embracing spontaneity and achieving freedom from the human world and its conventions. The text aims to illustrate the arbitrariness andultimate falsity of dichotomiesnormally embraced by human societies, such as those between good and bad, large and small, life and death, or human and nature. In contrast with the focus on good morals and personal duty expressed by many Chinese philosophers of the period, Zhuang Zhou promoted carefree wandering and following nature, through which one would ultimately become one with the "Way" (Tao). Though appreciation for the work often focuses on its philosophy, theZhuangziis also regarded as one of the greatest works of literature in theClassical Chinesecanon. It has significantly influenced major Chinese writers and poets across more than two millennia, with the first attested commentary on the work written during theHan dynasty(202 BC – 220 AD). It has been called "the most important pre-Qintext for the study of Chinese literature".[1] TheZhuangziis presented as the collected works of a man namedZhuang Zhou—traditionally referred to as "Zhuangzi" (莊子; "Master Zhuang"), using the traditional Chinesehonorific. Almost nothing is concretely known of Zhuang Zhou's life. Most of what is known comes from theZhuangziitself, which was subject to changes in later centuries. Most historians place his birth around 369 BC in a place called Meng (蒙) in the historicalstate of Song, near present-dayShangqiu, Henan. His death is variously placed at 301, 295, or 286 BC.[2] Zhuang Zhou is thought to have spent time in the southernstate of Chu, as well as in theQicapital ofLinzi.Sima Qianincluded a biography of Zhuang Zhou in the Han-eraShiji(c.91 BC),[3]but it seems to have been sourced mostly from theZhuangziitself.[4]The American sinologistBurton Watsonconcluded: "Whoever Zhuang Zhou was, the writings attributed to him bear the stamp of a brilliant and original mind".[5]University of Sydneylecturer Esther Klein observes: "In the perception of the vast majority of readers, whoever authored the coreZhuangzitextwasMaster Zhuang."[6] The only version of theZhuangziknown to exist in its entirety consists of 33 chapters originally prepared around AD 300 by theJin-erascholarGuo Xiang(252–312), who reduced the text from an earlier form of 52 chapters. The first 7 of these, referred to as the 'inner chapters' (內篇;nèipiān), were considered even before Guo to have been wholly authored by Zhuang Zhou himself. 
This attribution has been traditionally accepted since, and is still assumed by many modern scholars.[7]The original authorship of the remaining 26 chapters has been the subject of perennial debate: they were divided by Guo into 15 'outer chapters' (外篇;wàipiān) and 11 'miscellaneous chapters' (雜篇;zápiān).[8] Today, it is generally accepted that the outer and miscellaneous chapters were the result of a process of "accretion and redaction" in which later authors "[responded] to the scintillating brilliance" of the original inner chapters,[9]although close intertextual analysis does not support the inner chapters comprising the earliest stratum.[10]Multiple authorship over time was a typical feature of Warring States texts of this genre.[11]A limited consensus has been established regarding five distinct "schools" of authorship, each responsible for their own layers of substance within the text.[12]Despite the lack of traceable attribution, modern scholars generally accept that the surviving chapters were originally composed between the 4th and 2nd centuries BC.[13] Excepting textual analysis, details of the text's history prior to theHan dynasty(202 BC – 220 AD) are largely unknown. Traces of its influence on the philosophy of texts written during the lateWarring States period, such as theGuanzi,Han FeiziandHuainanzi, suggest that theZhuangzi'sintellectual lineage had already been fairly influential in the states of Qi and Chu by the 3rd century BC.[14]Sima Qianrefers to theZhuangzias a 100,000-character work in theShiji, and references several chapters present in the received text.[15] Many scholars consider aZhuangzicomposed of 52 chapters, as attested by theBook of Hanin 111 AD, to have been the original form of the text.[16]During the late 1st century BC, the entire Han imperial library—including its edition of theZhuangzi—was subject to considerable redaction and standardization by the polymathLiu Xiang(77–6 BC) and his sonLiu Xin(c.46 BC– AD 23). All extant copies of theZhuangziultimately derive from a version that was further edited and redacted to 33 chapters byGuo Xiangc.300 AD,[16]who worked from the material previously edited by Liu. Guo plainly stated that he had made considerable edits to the outer and miscellaneous chapters in an attempt to preserve Zhuang Zhou's original ideas from later distortions, in a way that "did not hesitate to impose his personal understanding and philosophical preferences on the text".[17]The received text as edited by Guo is approximately 63,000 characters long—around two-thirds the attested length of the Han-era manuscript. While none are known to exist in full, versions of the text unaffected by both the Guo and Liu revisions survived into theTang dynasty(618–907), with the existing fragments hinting at the folkloric nature of the material removed by Guo.[18] Portions of theZhuangzihave been found among thebamboo sliptexts discovered in tombs dating to the earlyHan dynasty, particularly at theShuangguduisite nearFuyanginAnhui, and theMount Zhangjiasite nearJingzhouinHubei. The earlierGuodian Chu Slips—unearthed nearJingmen, Hubei, and dating to the Warring States periodc.300 BC—contain what appears to be a short fragment parallel to the "Ransacking Coffers" chapter (No.10 of 33).[8] TheDunhuang manuscripts—discovered in the early 20th century byWang Yuanlu, then obtained and analysed by the Hungarian-British explorerAurel Steinand the French sinologistPaul Pelliot—contain numerousZhuangzifragments dating to the early Tang dynasty. 
Stein and Pelliot took most of the manuscripts back to Europe; they are presently held at the British Library and the Bibliothèque nationale de France. The Zhuangzi fragments among the manuscripts constitute approximately twelve chapters of Guo Xiang's edition.[19] A Zhuangzi manuscript dating to the Muromachi period (1338–1573) is preserved in the Kōzan-ji temple in Kyoto; it is considered one of Japan's national treasures. The manuscript has seven complete selections from the outer and miscellaneous chapters, and is believed to be a close copy of a 7th-century annotated edition written by the Chinese Taoist master Cheng Xuanying.[20] The Zhuangzi consists of anecdotes, allegories, parables, and fables that are often humorous or irreverent in nature. Most of these are fairly short and simple, such as the humans "Lickety" and "Split" drilling seven holes into the primordial "Wonton" (No. 7), or Zhuang Zhou being discovered sitting and drumming on a basin after his wife dies (No. 18). A few are longer and more complex, like the story of Lie Yukou and the magus, or the account of the Yellow Emperor's music (both No. 14). Most of the stories within the Zhuangzi seem to have been invented by Zhuang Zhou himself. This distinguishes the text from other works of the period, where anecdotes generally only appear as occasional interjections, and were usually drawn from existing proverbs or legends.[21] Some stories are completely whimsical, such as the strange description of evolution from "misty spray" through a series of substances and insects to horses and humans (No. 18), while a few other passages seem to be "sheer playful nonsense" which read like Lewis Carroll's "Jabberwocky". The Zhuangzi is full of quirky and fantastic character archetypes, such as "Mad Stammerer", "Fancypants Scholar", "Sir Plow", and a man who fancies that his left arm will turn into a rooster, his right arm will turn into a crossbow, and his buttocks will become cartwheels.[22] A master of language, Zhuang Zhou sometimes engages in logic and reasoning, but then turns it upside down or carries the arguments to absurdity to demonstrate the limitations of human knowledge and the rational world. Sinologist Victor H. Mair compares Zhuang Zhou's process of reasoning to Socratic dialogue, exemplified by the debate between Zhuang Zhou and fellow philosopher Huizi regarding the "joy of fish" (No. 17). Mair additionally characterizes Huizi's paradoxes near the end of the book as being "strikingly like those of Zeno of Elea".[23] The most famous of all Zhuangzi stories appears at the end of the second chapter, "On the Equality of Things", and consists of a dream being briefly recalled. 昔者莊周夢為胡蝶,栩栩然胡蝶也,自喻適志與。不知周也。 Once, Zhuang Zhou dreamed he was a butterfly, a butterfly flitting and fluttering about, happy with himself and doing as he pleased. He didn't know that he was Zhuang Zhou. 俄然覺,則蘧蘧然周也。不知周之夢為胡蝶與,胡蝶之夢為周與。周與胡蝶,則必有分矣。此之謂物化。 Suddenly he woke up and there he was, solid and unmistakable Zhuang Zhou. But he didn't know if he was Zhuang Zhou who had dreamt he was a butterfly, or a butterfly dreaming that he was Zhuang Zhou. Between Zhuang Zhou and the butterfly there must be some distinction! This is called the Transformation of Things.
The image of Zhuang Zhou wondering if he was a man who dreamed of being a butterfly or a butterfly dreaming of being a man became so well known that whole dramas have been written on its theme.[25]In the passage, Zhuang Zhou "[plays] with the theme of transformation",[25]illustrating that "the distinction between waking and dreaming is anotherfalse dichotomy. If [one] distinguishes them, how can [one] tell if [one] is now dreaming or awake?"[26] Another well-known passage dubbed "The Death of Wonton" illustrates the dangers Zhuang Zhou saw in going against the innate nature of things.[27] 南海之帝為儵,北海之帝為忽,中央之帝為渾沌。儵與忽時相與遇於渾沌之地,渾沌待之甚善。儵與忽謀報渾沌之德,曰:人皆有七竅,以視聽食息,此獨無有,嘗試鑿之。日鑿一竅,七日而渾沌死。The emperor of the Southern Seas was Lickety, the emperor of the Northern Sea was Split, and the emperor of the Centre was Wonton. Lickety and Split often met each other in the land of Wonton, and Wonton treated them very well. Wanting to repay Wonton's kindness, Lickety and Split said, "All people have seven holes for seeing, hearing, eating, and breathing. Wonton alone lacks them. Let's try boring some holes for him." So every day they bored one hole [in him], and on the seventh day Wonton died. Zhuang Zhou believed that the greatest of all human happiness could be achieved through a higher understanding of the nature of things, and that in order to develop oneself fully one needed to express one's innate ability.[25] Chapter 17 contains a well-known exchange between Zhuang Zhou and Huizi, featuring a heavy use of wordplay; it has been compared to aSocratic dialogue.[23] 莊子與惠子遊於濠梁之上。莊子曰:儵魚出遊從容,是魚樂也。Zhuangzi and Huizi were enjoying themselves on the bridge over the Hao River. Zhuangzi said, "Theminnowsare darting about free and easy! This is how fish are happy."惠子曰:子非魚,安知魚之樂。莊子曰:子非我,安知我不知魚之樂。Huizi replied, "You are not a fish. How[a]do you know that the fish are happy?" Zhuangzi said, "You are not I. How do you know that I do not know that the fish are happy?"惠子曰:我非子,固不知子矣;子固非魚也,子之不知魚之樂全矣。Huizi said, "I am not you, to be sure, so of course I don't know about you. But you obviously are not a fish; so the case is complete that you do not know that the fish are happy."莊子曰:請循其本。子曰汝安知魚樂云者,既已知吾知之而問我,我知之濠上也。Zhuangzi said, "Let's go back to the beginning of this. You said, How do you know that the fish are happy; but in asking me this, you already knew that I know it. I know it right here above the Hao." The precise point Zhuang Zhou intends to make in the debate is not entirely clear. The text appears to stress that "knowing" a thing is simply a state of mind: moreover, that it is not possible to determine whether "knowing" has any objective meaning. This sequence has been cited as an example of Zhuang Zhou's mastery of language, with reason subtly employed in order to make an anti-rationalist point.[32] A passage in chapter 18 describes Zhuang Zhou's reaction following the death of his wife, expressing a view of death as something not to be feared. 莊子妻死,惠子弔之,莊子則方箕踞鼓盆而歌。惠子曰:與人居長子,老身死,不哭亦足矣,又鼓盆而歌,不亦甚乎。Zhuangzi's wife died. When Huizi went to convey his condolences, he found Zhuangzi sitting with his legs sprawled out, pounding on a tub and singing. "You lived with her, she brought up your children and grew old," said Huizi. "It should be enough simply not to weep at her death. But pounding on a tub and singing—this is going too far, isn't it?"莊子曰:不然。是其始死也,我獨何能無概然。察其始而本無生,非徒無生也,而本無形,非徒無形也,而本無氣。雜乎芒芴之間,變而有氣,氣變而有形,形變而有生,今又變而之死,是相與為春秋冬夏四時行也。Zhuangzi said, "You're wrong. 
When she first died, do you think I didn't grieve like anyone else? But I looked back to her beginning and the time before she was born. Not only the time before she was born, but the time before she had a body. Not only the time before she had a body, but the time before she had a spirit. In the midst of the jumble of wonder and mystery a change took place and she had a spirit. Another change and she had a body. Another change and she was born. Now there's been another change and she's dead. It's just like the progression of the four seasons, spring, summer, fall, winter."人且偃然寢於巨室,而我噭噭然隨而哭之,自以為不通乎命,故止也。"Now she's going to lie down peacefully in a vast room. If I were to follow after her bawling and sobbing, it would show that I don't understand anything about fate. So I stopped." Zhuang Zhou seems to have viewed death as a natural process of transformation to be wholly accepted, where a person gives up one form of existence and assumes another.[34]In the second chapter, Zhuang Zhou makes the point that, for all humans know, death may in fact be better than life: "How do I know that loving life is not a delusion? How do I know that in hating death I am not like a man who, having left home in his youth, has forgotten the way back?"[35]His writings teach that "the wise man or woman accepts death with equanimity and thereby achieves absolute happiness."[34] Zhuang Zhou's own death is depicted in chapter 32, pointing to the body of lore that grew up around him in the decades following his death.[13]It serves to embody and reaffirm the ideas attributed to Zhuang Zhou throughout the previous chapters. 莊子將死,弟子欲厚葬之。莊子曰:吾以天地為棺槨,以日月為連璧,星辰為珠璣,萬物為齎送。吾葬具豈不備邪。何以加此。When Master Zhuang was about to die, his disciples wanted to give him a lavish funeral. Master Zhuang said: "I take heaven and earth as my inner and outer coffins, the sun and moon as my pair ofjade disks, the stars and constellations as my pearls and beads, the ten thousand things as my funerary gifts. With my burial complete, how is there anything left unprepared? What shall be added to it?"弟子曰:吾恐烏鳶之食夫子也。莊子曰:在上為烏鳶食,在下為螻蟻食,奪彼與此,何其偏也。The disciples said: "We are afraid that the crows andkiteswill eat you, Master!" Master Zhuang said: "Above ground I'd be eaten by crows and kites, below ground I'd be eaten bymole cricketsand ants. You rob the one and give to the other—how skewed would that be?" The principles and attitudes expressed in theZhuangziform the core of philosophicalTaoism. The text recommends embracing a natural spontaneity in order to better align one's inner self with the cosmic "Way". It also encourages keeping a distance from politics and social obligations, accepting death as a natural transformation, and appreciating things otherwise viewed as useless or lacking purpose. The text implores the reader to reject societal norms and conventional reasoning. The other major philosophical schools in ancient China—includingConfucianism,Legalism, andMohism—all proposed concrete social, political, and ethical reforms. By reforming both individuals and society as a whole, thinkers from these schools sought to alleviate human suffering, and ultimately solve the world's problems.[5]Contrarily, Zhuang Zhou believed the key to true happiness was to free oneself from worldly impingements through a principle of 'inaction' (wu wei)—action that is not based in purposeful striving or motivated by potential gain. 
As such, he fundamentally opposed systems that sought to impose order on individuals.[37][38] TheZhuangzidescribes the universe as being in a constant state of spontaneous change, which is not driven by any conscious God or force ofwill. It argues that humans, owing to their exceptional cognitive ability, tend to create artificial distinctions that remove them from the natural spontaneity of the universe. These include those of good versus bad, large versus small, and usefulness versus uselessness. It proposes that humans can achieve ultimate happiness by rejecting these distinctions, and living spontaneously in kind.[39]Zhuang Zhou often uses examples of craftsmen and artisans to illustrate the mindlessness and spontaneity he felt should characterize human action. AsBurton Watsondescribed, "the skilled woodcarver, the skilled butcher, the skilled swimmer does not ponder orratiocinateon the course of action he should take; his skill has become so much a part of him that he merely acts instinctively and spontaneously and, without knowing why, achieves success".[37]The term "wandering" (遊;yóu) is used throughout theZhuangzito describe how an enlightened person "wanders through all of creation, enjoying its delights without ever becoming attached to any one part of it".[37]The nonhuman characters throughout the text are often identified as being useful vehicles for metaphor. However, some recent scholarship has characterized theZhuangzias being "anti-anthropocentric" or even "animalistic" in the significance it ascribes to nonhuman characters. When viewed through this lens, theZhuangziquestions humanity's central place in the world, or even rejects the distinction between the human and natural worlds altogether.[40] Political positions in theZhuangzigenerally pertain to what governments should not do, rather than what they should do or how they may be reformed. The text seems to oppose formal government, viewing it as fundamentally problematic due to "the opposition between man and nature".[41]Zhuang Zhou attempts to illustrate that "as soon as government intervenes in natural affairs, it destroys all possibility of genuine happiness".[42]It is unclear whether Zhuang Zhou's positions amount to a form ofanarchism.[43] Western scholars have noted strong anti-rationalistthemes present throughout theZhuangzi. Whereas reason and logic as understood inAncient Greek philosophyproved foundational to the entire Western tradition, Chinese philosophers often preferred to rely on moral persuasion and intuition. Throughout Chinese history, theZhuangzisignificantly informed skepticism towards rationalism. In the text, Zhuang Zhou frequently turns logical arguments upside-down in order to satirize and discredit them. However, according to Mair he does not abandon language and reason altogether, but "only wishe[s] to point out that over-dependence on them could limit the flexibility of thought".[44]Confuciushimself is a recurring character in the text—sometimes engaging in invented debates withLaozi, where Confucius is consistently portrayed as being the less authoritative, junior figure of the two. In some appearances, Confucius is subjected to mockery and made "the butt of many jokes", while in others he is treated with unambiguous respect, intermittently serving as the "mouthpiece" for Zhuang Zhou's ideas.[45] TheZhuangziandTao Te Chingare considered to be the two fundamental texts in theTaoist tradition. 
It is accepted that some version of theTao Te Chinginfluenced the composition of theZhuangzi; however, the two works are distinct in their perspectives on the Tao itself. TheZhuangziuses the word "Tao" (道) less frequently than theTao Te Ching, with the former often using 'heaven' (天) in places the latter would use "Tao". While Zhuang Zhou discusses the personal process of following the Tao at length, compared to Laozi he articulates little about the nature of the Tao itself. TheZhuangzi's only direct description of the Tao is contained in "The Great Ancestral Teacher" (No. 6), in a passage "demonstrably adapted" from chapter 21 of theTao Te Ching. The inner chapters and theTao Te Chingagree that limitations inherent to human language preclude any sufficient description of the Tao. Meanwhile, imperfect descriptions are ubiquitous throughout both texts.[46] Of the texts written in China prior to its unification under theQin dynastyin 221 BC, theZhuangzimay have been the most influential on later literary works. For the period, it demonstrated an unparalleled creativity in its use of language.[47]Virtually every major Chinese writer or poet in history, fromSima XiangruandSima Qianduring theHan dynasty,Ruan JiandTao Yuanmingduring theSix Dynasties,Li Baiduring theTang dynasty, toSu ShiandLu Youin theSong dynastywere "deeply imbued with the ideas and artistry of theZhuangzi".[48] Traces of theZhuangzi's influence in lateWarring States periodphilosophical texts such as theGuanzi,Han Feizi, andLüshi Chunqiusuggest that Zhuang Zhou's intellectual lineage was already influential by the 3rd century BC. During theQinandHan dynasties, with their respective state-sponsoredLegalistandConfucianideologies, theZhuangzidoes not seem to have been highly regarded. One exception is "Fuon the Owl" (鵩鳥賦;Fúniǎo fù)—the earliest known definitive example offurhapsody, written by the Han-era scholarJia Yiin 170 BC. Jia does not reference theZhuangziby name, but cites it for one-sixth of the poem.[49] TheSix Dynastiesperiod (AD 220–589) that followed the collapse of the Han saw Confucianism temporarily surpassed by a resurgence of interest in Taoism and old divination texts such as theI Ching, with many poets, artists, and calligraphers of this period drawing influence from theZhuangzi.[50]The poetsRuan JiandXi Kang—both members of theSeven Sages of the Bamboo Grove—admired the work; an essay authored by Ruan entitled "Discourse on Summing Up theZhuangzi" (達莊論;Dá Zhuāng lùn) is still extant.[21] TheZhuangzihas been called "the most important of all the Daoist writings",[51]with the inner chapters embodying the core ideas of philosophical Taoism.[13]During the 4th century AD, theZhuangzibecame a major source of imagery and terminology for theShangqing School, a new form of Taoism that had become popular among the aristocracy of theJin dynasty(266–420). Shangqing School Taoism borrowed numerous terms from theZhuangzi, such as "perfected man" (真人;zhēnrén), "Great Clarity" (太清;Tài Qīng), and "fasting the mind" (心齋;xīn zhāi). While their use of these terms was distinct from that found in theZhuangziitself, their incidence still demonstrates the text's influence on Shangqing thought.[52] TheZhuangziwas very influential in the adaptation of Buddhism to Chinese culture after Buddhism was first brought to China from India in the 1st century AD.Zhi Dun, China's first aristocratic Buddhist monk, wrote a prominent commentary to theZhuangziin the mid-4th century. 
TheZhuangzialso played a significant role in the formation ofChan Buddhism—and therefore ofZenin Japan—which grew out of "a fusion of Buddhist ideology and ancient Daoist thought." Traits of Chan practice traceable to theZhuangziinclude a distrust of language and logic, an insistence that the "Way" can be found in everything, even dung and urine, and a fondness for dialogues based onkoans.[52] In 742, an imperial proclamation fromEmperor Xuanzong of Tang(r.712–756) canonized theZhuangzias one of theChinese classics, awarding it the honorific title 'True Scripture of Southern Florescence' (南華真經;Nánhuá zhēnjīng).[53]Nevertheless, most scholars throughout Chinese history did not consider it as being a "classic" per se, due to its non-Confucian nature.[54] Throughout Chinese history, theZhuangziremained the pre-eminent expression of core Taoist ideals. The 17th-century scholarGu Yanwulamented the flippant use of theZhuangzion theimperial examinationessays as representing a decline in traditional morals at the end of theMing dynasty(1368–1644).[55]Jia Baoyu, the main protagonist of the classic 18th-century novelDream of the Red Chamber, often turns to theZhuangzifor comfort amid the strife in his personal and romantic relationships.[56]The story of Zhuang Zhou drumming on a tub and singing after the death of his wife inspired an entire tradition of folk music in the central Chinese provinces ofHubeiandHunancalled "funeral drumming" (喪鼓;sànggǔ) that survived into the 18th and 19th centuries.[57] Outside of East Asia, theZhuangziis not as popular as theTao Te Chingand is rarely known by non-scholars. A number of prominent scholars have attempted to bring theZhuangzito wider attention among Western readers. In 1939, the British sinologistArthur Waleydescribed it as "one of the most entertaining as well as one of the profoundest books in the world".[58]In the introduction to his 1994 translation, Victor H. Mair wrote that he "[felt] a sense of injustice that theDao De Jingis so well known to my fellow citizens while theZhuangziis so thoroughly ignored, because I firmly believe that the latter is in every respect a superior work."[59] Western thinkers who have been influenced by the text includeMartin Heidegger, who became deeply interested in the oeuvres of Laozi and Zhuang Zhou during the 1930s. In particular, Heidegger was drawn to theZhuangzi's treatment of usefulness versus uselessness. He explicitly references one of the debates between Zhuang Zhou and Huizi (No. 24) within the third dialogue ofCountry Path Conversations, written as theSecond World Warwas coming to an end.[60]In the dialogue, Heidegger's characters conclude that "pure waiting" as expressed in theZhuangzi—that is, waiting for nothing—is the only viable mindset for the German people in the wake of the failure ofnational socialismand Germany's comprehensive defeat.[61]
https://en.wikipedia.org/wiki/The_Butterfly_Dream
The Twilight Zone is a science fiction horror anthology television series, the second of three revivals of Rod Serling's original 1959–64 television series. It aired for one season on the UPN network, with actor Forest Whitaker assuming Serling's role as narrator and on-screen host.[4] It was a co-production between Spirit Dance Entertainment, Trilogy Entertainment Group, Joshmax Productions Services,[5] and New Line Television. It premiered on September 18, 2002, and aired its final episode on May 21, 2003. Broadcast in an hour format with two half-hour stories, it was canceled after one season. Reruns continue to air in syndication; they have aired on MyNetworkTV since summer 2008 and have streamed on Tubi since fall 2023. The series tended to address contemporary issues head-on, e.g. terrorism, racism, gender roles, sexuality, and stalking. Noteworthy episodes featured Jason Alexander as Death wanting to retire from harvesting souls, Lou Diamond Phillips as a swimming pool cleaner being shot repeatedly in his dreams, Susanna Thompson as a woman whose stated wish results in an "upgrading" of her family, Usher as a police officer being bothered by telephone calls from beyond the grave, Brian Austin Green as a businessman who encounters items from his past that somehow reappear, Jeffrey Combs as a hypochondriac whose diseases become reality, and Katherine Heigl playing a woman who went back in time on a suicide mission to kill the infant Adolf Hitler. The series also includes remakes and updates of stories presented in the original Twilight Zone television series, including the famous "Eye of the Beholder", starring Molly Sims. One of the updates, "The Monsters Are on Maple Street", is a modernized version of the classic episode "The Monsters Are Due on Maple Street". The original episode was about the paranoia surrounding a neighborhood-wide blackout. In the course of the episode, somebody suggests that an alien invasion is the cause of the blackout, and that one of the neighbors may be an alien. The anti-alien hysteria is an allegory for the anti-communist paranoia of the time; the 2003 remake, starring Andrew McCarthy and Titus Welliver, replaces aliens with terrorists. The show also contains "It's Still a Good Life", a sequel to the events of "It's a Good Life", an episode of the original series produced 41 years earlier. Bill Mumy returned to play the adult version of Anthony, the demonic child he had played in the original story, with Mumy's daughter, Liliana, appearing as Anthony's daughter, an initially more benevolent but even more powerful child. Cloris Leachman also returned as Anthony's mother. Mumy went on to serve as a screenwriter for other episodes in the revival. Other guest stars include: Penn Badgley, Scott Bairstow, Jason Bateman, Gil Bellows, Elizabeth Berkley, Xander Berkeley, Olivia d'Abo, Linda Cardellini, Keith Hamilton Cobb, Rory Culkin, Reed Diamond, Shannon Elizabeth, Ethan Embry, Sean Patrick Flanery, Lukas Haas, Wood Harris, Hill Harper, Jonathan Jackson, Moira Kelly, Erik King, Wayne Knight, Wallace Langham, Method Man, Samantha Mathis, Christopher McDonald, Tangi Miller, Pat O'Brien, Adrian Pasdar, Emily Perkins, Jeremy Piven, Jaime Pressly, James Remar, Portia de Rossi, Eriq La Salle, Michael Shanks, Jeremy Sisto, Jessica Simpson, Ione Skye, Amber Tamblyn, Christopher Titus, Robin Tunney, Vincent Ventresca, Dylan Walsh, Don S. Davis, Frank Whaley, Alicia Witt, and Gordon Michael Woolvett. McDonald, Langham, Xander Berkeley, and Haas had all previously guest-starred in the 1980s revival.
An original opening, which included images of Rod Serling and a creepier musical arrangement, was used for the first half of the season.[6] For unknown reasons, this was changed to the more iconic opening with a rock-themed score provided by Jonathan Davis (singer of the band Korn). This version of the opening has the Serling images removed and would be the main one used for all episodes in future reruns and on the DVD box set release. The series did not enjoy the same level of critical or ratings success as the original series or the 1980s revival, and only lasted one season. The complete series was released on DVD by New Line Home Entertainment in a six-disc box set on September 7, 2004. The episodes are presented in their production order, not their broadcast order.[7]
https://en.wikipedia.org/wiki/The_Pool_Guy_(The_Twilight_Zone)
In computer programming, an anonymous function (function literal, expression or block) is a function definition that is not bound to an identifier. Anonymous functions are often arguments passed to higher-order functions, or are used to construct the result of a higher-order function that needs to return a function.[1] If a function is only used once, or a limited number of times, an anonymous function may be syntactically lighter than a named function. Anonymous functions are ubiquitous in functional programming languages and other languages with first-class functions, where they fulfil the same role for the function type as literals do for other data types. Anonymous functions originate in the work of Alonzo Church, whose 1936 invention of the lambda calculus, in which all functions are anonymous, predates electronic computers.[2] In several programming languages, anonymous functions are introduced using the keyword lambda, and anonymous functions are often referred to as lambdas or lambda abstractions. Anonymous functions have been a feature of programming languages since Lisp in 1958, and a growing number of modern programming languages support them. The names "lambda abstraction", "lambda function", and "lambda expression" refer to the notation of function abstraction in lambda calculus, where the usual function f(x) = M would be written (λx.M), and where M is an expression that uses x. Compare to the Python syntax of lambda x: M. The name "arrow function" refers to the mathematical "maps to" symbol, x ↦ M. Compare to the JavaScript syntax of x => M.[3] Anonymous functions can be used to contain functionality that need not be named, possibly for short-term use. Some notable examples include closures and currying. The use of anonymous functions is a matter of style. Using them is never the only way to solve a problem; each anonymous function could instead be defined as a named function and called by name. Anonymous functions often provide a briefer notation than defining named functions. In languages that do not permit the definition of named functions in local scopes, anonymous functions may provide encapsulation via localized scope; however, the code in the body of such an anonymous function may not be reusable or amenable to separate testing. Short, simple anonymous functions used in expressions may be easier to read and understand than separately defined named functions, though without a descriptive name they may be more difficult to understand. In some programming languages, anonymous functions are commonly implemented for very specific purposes such as binding events to callbacks or instantiating the function for particular values, which may be more efficient in a dynamic programming language, more readable, and less error-prone than calling a named function. The following examples are written in Python 3. When attempting to sort in a non-standard way, it may be easier to contain the sorting logic as an anonymous function instead of creating a named function. Most languages provide a generic sort function that implements a sort algorithm that will sort arbitrary objects. This function usually accepts an arbitrary function that determines how to compare whether two elements are equal or if one is greater or less than the other, as in the sketch below.
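A minimal sketch of this pattern, assuming illustrative sample data (the variable name words is not from the original text):

```python
# Sort a list of strings by their length, using an anonymous function as the key.
words = ["three", "one", "seven", "is"]  # illustrative sample data
words.sort(key=lambda x: len(x))
print(words)  # ['is', 'one', 'three', 'seven']
```

The lambda serves only as the sort key; an equivalent named function would behave identically.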
The sketch above sorts a list of strings by the length of each string. The anonymous function in this example is the lambda expression lambda x: len(x). It accepts one argument, x, and returns the length of its argument, which is then used by the sort() method as the criterion for sorting. The basic syntax of a lambda function in Python is lambda arguments: expression. The lambda function can be assigned to a variable and used in the code in multiple places. Another example would be sorting items in a list by the name of their class (in Python, everything has a class); a sketch of this and of the remaining examples in this section appears at the end of the section. Note that 11.2 has class name "float", 10 has class name "int", and 'number' has class name "str". The sorted order is "float", "int", then "str". Closures are functions evaluated in an environment containing bound variables. The closure example binds the variable "threshold" in an anonymous function that compares the input to the threshold. This can be used as a sort of generator of comparison functions. It would be impractical to create a function for every possible comparison function, and it may be too inconvenient to keep the threshold around for further use. Regardless of the reason why a closure is used, the anonymous function is the entity that contains the functionality that does the comparing. Currying is the process of changing a function so that rather than taking multiple inputs, it takes a single input and returns a function which accepts the second input, and so forth. In the currying example, a function that performs division by any integer is transformed into one that performs division by a set integer. While the use of anonymous functions is perhaps not common with currying, they can still be used. In that example, the function divisor generates functions with a specified divisor. The functions half and third curry the divide function with a fixed divisor. The divisor function also forms a closure by binding the variable d. A higher-order function is a function that takes a function as an argument or returns one as a result. This is commonly used to customize the behavior of a generically defined function, often a looping construct or recursion scheme. Anonymous functions are a convenient way to specify such function arguments. The following examples are in Python 3. The map function performs a function call on each element of a list. The map example squares every element in a list with an anonymous function; the anonymous function accepts an argument and multiplies it by itself (squares it). That form is discouraged by the creators of the language, who maintain that the list-comprehension form has the same meaning and is more aligned with the philosophy of the language. The filter function returns all elements from a list that evaluate True when passed to a certain function; here the anonymous function checks whether the argument passed to it is even. The same as with map, a list-comprehension form is considered more appropriate. A fold function runs over all elements in a structure (for lists usually left-to-right, a "left fold", called reduce in Python), accumulating a value as it goes. This can be used to combine all elements of a structure into one value. In the fold example, the reduction performs a left-to-right multiplication of the elements, and the anonymous function is the multiplication of its two arguments. The result of a fold need not be one value. Instead, both map and filter can be created using fold. In map, the value that is accumulated is a new list, containing the results of applying a function to each element of the original list. In filter, the value that is accumulated is a new list containing only those elements that match the given condition. The following is a list of programming languages that support unnamed anonymous functions fully, or partly as some variant, or not at all. This table shows some general trends. First, the languages that do not support anonymous functions (C, Pascal, Object Pascal) are all statically typed languages. However, statically typed languages can support anonymous functions. For example, the ML languages are statically typed and fundamentally include anonymous functions, and Delphi, a dialect of Object Pascal, has been extended to support anonymous functions, as has C++ (by the C++11 standard). Second, the languages that treat functions as first-class functions (Dylan, Haskell, JavaScript, Lisp, ML, Perl, Python, Ruby, Scheme) generally have anonymous function support so that functions can be defined and passed around as easily as other data types.
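A consolidated sketch of the remaining examples described above, assuming Python 3; the names divisor, half, third, and d follow the prose, while comparator and the sample values are illustrative:

```python
from functools import reduce

# Sorting by the name of each item's class:
# 11.2 is a "float", 10 an "int", and 'number' a "str".
items = sorted([10, 'number', 11.2], key=lambda x: type(x).__name__)
print(items)  # [11.2, 10, 'number'] i.e. "float", "int", then "str"

# Closure: bind the variable "threshold" inside an anonymous comparison function.
def comparator(threshold):
    return lambda value: value < threshold

is_small = comparator(10)
print(is_small(5), is_small(50))  # True False

# Currying: division by any integer becomes division by a set integer.
def divisor(d):
    return lambda x: x / d  # also a closure, binding the variable d

half = divisor(2)
third = divisor(3)
print(half(30), third(30))  # 15.0 10.0

# map: square every element; the list-comprehension form is the preferred style.
print(list(map(lambda x: x * x, [1, 2, 3, 4])))  # [1, 4, 9, 16]
print([x * x for x in [1, 2, 3, 4]])             # same result

# filter: keep only the even elements.
print(list(filter(lambda x: x % 2 == 0, [1, 2, 3, 4])))  # [2, 4]
print([x for x in [1, 2, 3, 4] if x % 2 == 0])           # same result

# fold (reduce in Python): multiply all elements together, left to right.
print(reduce(lambda x, y: x * y, [1, 2, 3, 4]))  # 24
```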
https://en.wikipedia.org/wiki/Anonymous_function
In numerical analysis, fixed-point iteration is a method of computing fixed points of a function. More specifically, given a function $f$ defined on the real numbers with real values and given a point $x_0$ in the domain of $f$, the fixed-point iteration is $x_{n+1}=f(x_n),\ n=0,1,2,\dots$, which gives rise to the sequence $x_0, x_1, x_2, \dots$ of iterated function applications $x_0, f(x_0), f(f(x_0)), \dots$, which is hoped to converge to a point $x_{\text{fix}}$. If $f$ is continuous, then one can prove that the obtained $x_{\text{fix}}$ is a fixed point of $f$, i.e., $f(x_{\text{fix}})=x_{\text{fix}}$. More generally, the function $f$ can be defined on any metric space with values in that same space. An attracting fixed point of a function $f$ is a fixed point $x_{\text{fix}}$ of $f$ with a neighborhood $U$ of "close enough" points around $x_{\text{fix}}$ such that for any value of $x$ in $U$, the fixed-point iteration sequence $x,\ f(x),\ f(f(x)),\ f(f(f(x))),\dots$ is contained in $U$ and converges to $x_{\text{fix}}$. The basin of attraction of $x_{\text{fix}}$ is the largest such neighborhood $U$.[1] The natural cosine function ("natural" means in radians, not degrees or other units) has exactly one fixed point, and that fixed point is attracting. In this case, "close enough" is not a stringent criterion at all: to demonstrate this, start with any real number and repeatedly press the cos key on a calculator (checking first that the calculator is in "radians" mode). It eventually converges to the Dottie number (about 0.739085133), which is a fixed point. That is where the graph of the cosine function intersects the line $y=x$.[2] Not all fixed points are attracting. For example, 0 is a fixed point of the function $f(x)=2x$, but iteration of this function for any value other than zero rapidly diverges. We say that the fixed point of $f(x)=2x$ is repelling. An attracting fixed point is said to be a stable fixed point if it is also Lyapunov stable. A fixed point is said to be a neutrally stable fixed point if it is Lyapunov stable but not attracting. The center of a linear homogeneous differential equation of the second order is an example of a neutrally stable fixed point. Multiple attracting points can be collected in an attracting fixed set. The Banach fixed-point theorem gives a sufficient condition for the existence of attracting fixed points. A contraction mapping function $f$ defined on a complete metric space has precisely one fixed point, and the fixed-point iteration is attracted towards that fixed point for any initial guess $x_0$ in the domain of the function. Common special cases are that (1) $f$ is defined on the real line with real values and is Lipschitz continuous with Lipschitz constant $L<1$, and (2) the function $f$ is continuously differentiable in an open neighbourhood of a fixed point $x_{\text{fix}}$, and $|f'(x_{\text{fix}})|<1$. Although there are other fixed-point theorems, this one in particular is very useful because not all fixed points are attractive. When constructing a fixed-point iteration, it is very important to make sure it converges to the fixed point. We can usually use the Banach fixed-point theorem to show that the fixed point is attractive.
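A minimal sketch of the calculator experiment described above, iterating $x_{n+1}=\cos(x_n)$ from an arbitrary starting point (the starting value and iteration count are illustrative):

```python
import math

# Fixed-point iteration x_{n+1} = cos(x_n) from an arbitrary starting point.
x = 37.0  # any real number works; cos maps every input into [-1, 1]
for _ in range(100):
    x = math.cos(x)

print(x)  # ~0.7390851332, the Dottie number, where cos(x) = x
```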
Attracting fixed points are a special case of a wider mathematical concept of attractors. Fixed-point iterations are a discrete dynamical system on one variable. Bifurcation theory studies dynamical systems and classifies various behaviors such as attracting fixed points, periodic orbits, or strange attractors. An example system is the logistic map. In computational mathematics, an iterative method is a mathematical procedure that uses an initial value to generate a sequence of improving approximate solutions for a class of problems, in which the n-th approximation is derived from the previous ones. Convergent fixed-point iterations are mathematically rigorous formalizations of iterative methods. Newton's method for finding a root of a differentiable function $f$ is the iteration $x_{n+1}=x_n-\frac{f(x_n)}{f'(x_n)}$. If we write $g(x)=x-\frac{f(x)}{f'(x)}$, we may rewrite the Newton iteration as the fixed-point iteration $x_{n+1}=g(x_n)$. If this iteration converges to a fixed point $x_{\text{fix}}$ of $g$, then $x_{\text{fix}}=g(x_{\text{fix}})=x_{\text{fix}}-\frac{f(x_{\text{fix}})}{f'(x_{\text{fix}})}$, so $f(x_{\text{fix}})/f'(x_{\text{fix}})=0$, that is, $f(x_{\text{fix}})=0$: the fixed point of $g$ is a root of $f$. The speed of convergence of the iteration sequence can be increased by using a convergence acceleration method such as Anderson acceleration or Aitken's delta-squared process. The application of Aitken's method to fixed-point iteration is known as Steffensen's method, and it can be shown that Steffensen's method yields a rate of convergence that is at least quadratic. The term chaos game refers to a method of generating the fixed point of any iterated function system (IFS). Starting with any point $x_0$, successive iterations are formed as $x_{k+1}=f_r(x_k)$, where $f_r$ is a member of the given IFS randomly selected for each iteration. Hence the chaos game is a randomized fixed-point iteration. The chaos game allows plotting the general shape of a fractal such as the Sierpinski triangle by repeating the iterative process a large number of times, as sketched below. More mathematically, the iterations converge to the fixed point of the IFS. Whenever $x_0$ belongs to the attractor of the IFS, all iterations $x_k$ stay inside the attractor and, with probability 1, form a dense set in the latter.
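A minimal sketch of the chaos game for the Sierpinski triangle, assuming the standard IFS of three maps that each move the current point halfway toward a triangle vertex (the vertex coordinates, starting point, and point count are illustrative):

```python
import random

# The three IFS maps f_i(p) = (p + v_i) / 2 each contract toward a vertex v_i.
vertices = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.866)]

x, y = 0.3, 0.3  # arbitrary starting point
points = []
for _ in range(10_000):
    vx, vy = random.choice(vertices)   # randomly select one map per iteration
    x, y = (x + vx) / 2, (y + vy) / 2  # apply the selected map
    points.append((x, y))

# 'points' now traces the Sierpinski triangle; plot it with e.g. matplotlib.
```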
https://en.wikipedia.org/wiki/Fixed-point_iteration
In mathematical logic, the lambda calculus (also written as λ-calculus) is a formal system for expressing computation based on function abstraction and application using variable binding and substitution. Untyped lambda calculus, the topic of this article, is a universal machine, a model of computation that can be used to simulate any Turing machine (and vice versa). It was introduced by the mathematician Alonzo Church in the 1930s as part of his research into the foundations of mathematics. In 1936, Church found a formulation which was logically consistent, and documented it in 1940. Lambda calculus consists of constructing lambda terms and performing reduction operations on them. A term is defined as any valid lambda calculus expression. In the simplest form of lambda calculus, terms are built using only the following rules:[a] a variable $x$; an abstraction $(\lambda x.M)$, a function definition taking as input the bound variable $x$ and returning the body $M$; and an application $(M\ N)$, applying a function $M$ to an argument $N$, where $M$ and $N$ are themselves lambda terms. The reduction operations include α-conversion, $(\lambda x.M)\to(\lambda y.M[x:=y])$, renaming the bound variables in an expression to avoid name collisions, and β-reduction, $((\lambda x.M)\ N)\to(M[x:=N])$, replacing the bound variable with the argument expression in the body of the abstraction. If De Bruijn indexing is used, then α-conversion is no longer required as there will be no name collisions. If repeated application of the reduction steps eventually terminates, then by the Church–Rosser theorem it will produce a β-normal form. Variable names are not needed if using a universal lambda function, such as Iota and Jot, which can create any function behavior by calling it on itself in various combinations. Lambda calculus is Turing complete, that is, it is a universal model of computation that can be used to simulate any Turing machine.[3] Its namesake, the Greek letter lambda (λ), is used in lambda expressions and lambda terms to denote binding a variable in a function. Lambda calculus may be untyped or typed. In typed lambda calculus, functions can be applied only if they are capable of accepting the given input's "type" of data. Typed lambda calculi are strictly weaker than the untyped lambda calculus, which is the primary subject of this article, in the sense that typed lambda calculi can express less than the untyped calculus can. On the other hand, typed lambda calculi allow more things to be proven. For example, in simply typed lambda calculus, it is a theorem that every evaluation strategy terminates for every simply typed lambda term, whereas evaluation of untyped lambda terms need not terminate (see below). One reason there are many different typed lambda calculi has been the desire to do more (of what the untyped calculus can do) without giving up on being able to prove strong theorems about the calculus. Lambda calculus has applications in many different areas in mathematics, philosophy,[4] linguistics,[5][6] and computer science.[7][8] Lambda calculus has played an important role in the development of the theory of programming languages. Functional programming languages implement lambda calculus. Lambda calculus is also a current research topic in category theory.[9] Lambda calculus was introduced by mathematician Alonzo Church in the 1930s as part of an investigation into the foundations of mathematics.[10][c] The original system was shown to be logically inconsistent in 1935 when Stephen Kleene and J. B. Rosser developed the Kleene–Rosser paradox.[11][12] Subsequently, in 1936 Church isolated and published just the portion relevant to computation, what is now called the untyped lambda calculus.[13] In 1940, he also introduced a computationally weaker, but logically consistent system, known as the simply typed lambda calculus.[14] Until the 1960s, when its relation to programming languages was clarified, the lambda calculus was only a formalism.
Thanks to Richard Montague and other linguists' applications in the semantics of natural language, the lambda calculus has begun to enjoy a respectable place in both linguistics[15] and computer science.[16] There is some uncertainty over the reason for Church's use of the Greek letter lambda (λ) as the notation for function-abstraction in the lambda calculus, perhaps in part due to conflicting explanations by Church himself. According to Cardone and Hindley (2006): By the way, why did Church choose the notation "λ"? In [an unpublished 1964 letter to Harald Dickson] he stated clearly that it came from the notation "$\hat{x}$" used for class-abstraction by Whitehead and Russell, by first modifying "$\hat{x}$" to "$\land x$" to distinguish function-abstraction from class-abstraction, and then changing "$\land$" to "λ" for ease of printing. This origin was also reported in [Rosser, 1984, p.338]. On the other hand, in his later years Church told two enquirers that the choice was more accidental: a symbol was needed and λ just happened to be chosen. Dana Scott has also addressed this question in various public lectures.[17] Scott recounts that he once posed a question about the origin of the lambda symbol to Church's former student and son-in-law John W. Addison Jr., who then wrote his father-in-law a postcard: Dear Professor Church, Russell had the iota operator, Hilbert had the epsilon operator. Why did you choose lambda for your operator? According to Scott, Church's entire response consisted of returning the postcard with the following annotation: "eeny, meeny, miny, moe". Computable functions are a fundamental concept within computer science and mathematics. The lambda calculus provides simple semantics for computation which are useful for formally studying properties of computation. The lambda calculus incorporates two simplifications that make its semantics simple. The first simplification is that the lambda calculus treats functions "anonymously"; it does not give them explicit names. For example, the function $\operatorname{square\_sum}(x,y)=x^2+y^2$ can be rewritten in anonymous form as $(x,y)\mapsto x^2+y^2$ (which is read as "a tuple of $x$ and $y$ is mapped to $x^2+y^2$").[d] Similarly, the function $\operatorname{id}(x)=x$ can be rewritten in anonymous form as $x\mapsto x$, where the input is simply mapped to itself.[d] The second simplification is that the lambda calculus only uses functions of a single input. An ordinary function that requires two inputs, for instance the $\operatorname{square\_sum}$ function, can be reworked into an equivalent function that accepts a single input, and as output returns another function, that in turn accepts a single input. For example, $(x,y)\mapsto x^2+y^2$ can be reworked into $x\mapsto(y\mapsto x^2+y^2)$. This method, known as currying, transforms a function that takes multiple arguments into a chain of functions each with a single argument. Function application of the $\operatorname{square\_sum}$ function to the arguments (5, 2) yields at once $((x,y)\mapsto x^2+y^2)(5,2)=5^2+2^2=29$, whereas evaluation of the curried version requires one more step, $((x\mapsto(y\mapsto x^2+y^2))(5))(2)=(y\mapsto 5^2+y^2)(2)=5^2+2^2=29$, to arrive at the same result. The lambda calculus consists of a language of lambda terms, that are defined by a certain formal syntax, and a set of transformation rules for manipulating the lambda terms. These transformation rules can be viewed as an equational theory or as an operational definition. As described above, having no names, all functions in the lambda calculus are anonymous functions. They only accept one input variable, so currying is used to implement functions of several variables.
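The one-extra-step evaluation of the curried form can be mirrored directly in Python; a minimal sketch (the name square_sum follows the prose, the rest is illustrative):

```python
# Curried form: a function of x returning a function of y.
square_sum = lambda x: lambda y: x**2 + y**2

step_one = square_sum(5)  # a function still awaiting the second argument
print(step_one(2))        # 29
print(square_sum(5)(2))   # 29, the two steps written as one expression
```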
The syntax of the lambda calculus defines some expressions as valid lambda calculus expressions and some as invalid, just as some strings of characters are valid computer programs and some are not. A valid lambda calculus expression is called a "lambda term". The following three rules give aninductive definitionthat can be applied to build all syntactically valid lambda terms:[e] Nothing else is a lambda term. That is, a lambda term is valid if and only if it can be obtained by repeated application of these three rules. For convenience, some parentheses can be omitted when writing a lambda term. For example, the outermost parentheses are usually not written. See§ Notation, below, for an explicit description of which parentheses are optional. It is also common to extend the syntax presented here with additional operations, which allows making sense of terms such asλx.x2.{\displaystyle \lambda x.x^{2}.}The focus of this article is the pure lambda calculus without extensions, but lambda terms extended with arithmetic operations are used for explanatory purposes. Anabstractionλx.t{\displaystyle \lambda x.t}denotes an§ anonymous function[g]that takes a single inputxand returnst. For example,λx.(x2+2){\displaystyle \lambda x.(x^{2}+2)}is an abstraction representing the functionf{\displaystyle f}defined byf(x)=x2+2,{\displaystyle f(x)=x^{2}+2,}using the termx2+2{\displaystyle x^{2}+2}fort. The namef{\displaystyle f}is superfluous when using abstraction. The syntax(λx.t){\displaystyle (\lambda x.t)}bindsthe variablexin the termt. The definition of a function with an abstraction merely "sets up" the function but does not invoke it. Anapplicationts{\displaystyle ts}represents the application of a functiontto an inputs, that is, it represents the act of calling functionton inputsto producet(s){\displaystyle t(s)}. A lambda term may refer to a variable that has not been bound, such as the termλx.(x+y){\displaystyle \lambda x.(x+y)}(which represents the function definitionf(x)=x+y{\displaystyle f(x)=x+y}). In this term, the variableyhas not been defined and is considered an unknown. The abstractionλx.(x+y){\displaystyle \lambda x.(x+y)}is a syntactically valid term and represents a function that adds its input to the yet-unknowny. Parentheses may be used and might be needed to disambiguate terms. For example, The examples 1 and 2 denote different terms, differing only in where the parentheses are placed. They have different meanings: example 1 is a function definition, while example 2 is a function application. The lambda variablexis a placeholder in both examples. Here,example 1definesa functionλx.B{\displaystyle \lambda x.B}, whereB{\displaystyle B}is(λx.x)x{\displaystyle (\lambda x.x)x}, an anonymous function(λx.x){\displaystyle (\lambda x.x)}, with inputx{\displaystyle x}; while example 2,M{\displaystyle M}N{\displaystyle N}, is M applied to N, whereM{\displaystyle M}is the lambda term(λx.(λx.x)){\displaystyle (\lambda x.(\lambda x.x))}being applied to the inputN{\displaystyle N}which isx{\displaystyle x}. Both examples 1 and 2 would evaluate to theidentity functionλx.x{\displaystyle \lambda x.x}. In lambda calculus, functions are taken to be 'first class values', so functions may be used as the inputs, or be returned as outputs from other functions. For example, the lambda termλx.x{\displaystyle \lambda x.x}represents theidentity function,x↦x{\displaystyle x\mapsto x}. 
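The three formation rules translate directly into a small datatype. The following Python sketch (the class names Var, Abs and App are illustrative, not standard; Python 3.10+ is assumed for the union syntax) builds the two example terms discussed above:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Var:                 # rule 1: a variable x is a term
        name: str

    @dataclass(frozen=True)
    class Abs:                 # rule 2: an abstraction (λx.t) is a term
        param: str
        body: "Term"

    @dataclass(frozen=True)
    class App:                 # rule 3: an application (t s) is a term
        fn: "Term"
        arg: "Term"

    Term = Var | Abs | App

    # Example 1, λx.((λx.x) x): a function definition.
    example1 = Abs("x", App(Abs("x", Var("x")), Var("x")))
    # Example 2, (λx.(λx.x)) x: a function application.
    example2 = App(Abs("x", Abs("x", Var("x"))), Var("x"))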
Further,λx.y{\displaystyle \lambda x.y}represents theconstant functionx↦y{\displaystyle x\mapsto y}, the function that always returnsy{\displaystyle y}, no matter the input. As an example of a function operating on functions, thefunction compositioncan be defined asλf.λg.λx.(f(gx)){\displaystyle \lambda f.\lambda g.\lambda x.(f(gx))}. There are several notions of "equivalence" and "reduction" that allow lambda terms to be "reduced" to "equivalent" lambda terms. A basic form of equivalence, definable on lambda terms, isalpha equivalence. It captures the intuition that the particular choice of a bound variable, in an abstraction, does not (usually) matter. For instance,λx.x{\displaystyle \lambda x.x}andλy.y{\displaystyle \lambda y.y}are alpha-equivalent lambda terms, and they both represent the same function (the identity function). The termsx{\displaystyle x}andy{\displaystyle y}are not alpha-equivalent, because they are not bound in an abstraction. In many presentations, it is usual to identify alpha-equivalent lambda terms. The following definitions are necessary in order to be able to define β-reduction: Thefree variables[h]of a term are those variables not bound by an abstraction. The set of free variables of an expression is defined inductively: For example, the lambda term representing the identityλx.x{\displaystyle \lambda x.x}has no free variables, but the functionλx.yx{\displaystyle \lambda x.yx}has a single free variable,y{\displaystyle y}. Supposet{\displaystyle t},s{\displaystyle s}andr{\displaystyle r}are lambda terms, andx{\displaystyle x}andy{\displaystyle y}are variables. The notationt[x:=r]{\displaystyle t[x:=r]}indicates substitution ofr{\displaystyle r}forx{\displaystyle x}int{\displaystyle t}in acapture-avoidingmanner. This is defined so that: For example,(λx.x)[y:=y]=λx.(x[y:=y])=λx.x{\displaystyle (\lambda x.x)[y:=y]=\lambda x.(x[y:=y])=\lambda x.x}, and((λx.y)x)[x:=y]=((λx.y)[x:=y])(x[x:=y])=(λx.y)y{\displaystyle ((\lambda x.y)x)[x:=y]=((\lambda x.y)[x:=y])(x[x:=y])=(\lambda x.y)y}. The freshness condition (requiring thaty{\displaystyle y}is not in thefree variablesofr{\displaystyle r}) is crucial in order to ensure that substitution does not change the meaning of functions. For example, a substitution that ignores the freshness condition could lead to errors:(λx.y)[y:=x]=λx.(y[y:=x])=λx.x{\displaystyle (\lambda x.y)[y:=x]=\lambda x.(y[y:=x])=\lambda x.x}. This erroneous substitution would turn the constant functionλx.y{\displaystyle \lambda x.y}into the identityλx.x{\displaystyle \lambda x.x}. In general, failure to meet the freshness condition can be remedied by alpha-renaming first, with a suitable fresh variable. For example, switching back to our correct notion of substitution, in(λx.y)[y:=x]{\displaystyle (\lambda x.y)[y:=x]}the abstraction can be renamed with a fresh variablez{\displaystyle z}, to obtain(λz.y)[y:=x]=λz.(y[y:=x])=λz.x{\displaystyle (\lambda z.y)[y:=x]=\lambda z.(y[y:=x])=\lambda z.x}, and the meaning of the function is preserved by substitution. The β-reduction rule[b]states that an application of the form(λx.t)s{\displaystyle (\lambda x.t)s}reduces to the termt[x:=s]{\displaystyle t[x:=s]}. The notation(λx.t)s→t[x:=s]{\displaystyle (\lambda x.t)s\to t[x:=s]}is used to indicate that(λx.t)s{\displaystyle (\lambda x.t)s}β-reduces tot[x:=s]{\displaystyle t[x:=s]}. For example, for everys{\displaystyle s},(λx.x)s→x[x:=s]=s{\displaystyle (\lambda x.x)s\to x[x:=s]=s}. This demonstrates thatλx.x{\displaystyle \lambda x.x}really is the identity. 
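A minimal sketch of free variables and capture-avoiding substitution in Python, restating the Var/Abs/App encoding so the sketch stands alone (the fresh-variable scheme is one simple choice among many):

    from dataclasses import dataclass
    from itertools import count

    @dataclass(frozen=True)
    class Var:
        name: str

    @dataclass(frozen=True)
    class Abs:
        param: str
        body: object

    @dataclass(frozen=True)
    class App:
        fn: object
        arg: object

    def free_vars(t):
        # FV(x) = {x};  FV(λx.t) = FV(t) \ {x};  FV(t s) = FV(t) ∪ FV(s)
        if isinstance(t, Var):
            return {t.name}
        if isinstance(t, Abs):
            return free_vars(t.body) - {t.param}
        return free_vars(t.fn) | free_vars(t.arg)

    _names = count()

    def fresh(avoid):
        # any name not occurring in `avoid` will do
        while True:
            v = f"v{next(_names)}"
            if v not in avoid:
                return v

    def subst(t, x, r):
        # capture-avoiding substitution t[x := r]
        if isinstance(t, Var):
            return r if t.name == x else t
        if isinstance(t, App):
            return App(subst(t.fn, x, r), subst(t.arg, x, r))
        if t.param == x:                  # x is rebound here; stop
            return t
        if t.param in free_vars(r):       # freshness condition violated:
            z = fresh(free_vars(r) | free_vars(t.body))
            t = Abs(z, subst(t.body, t.param, Var(z)))   # α-rename first
        return Abs(t.param, subst(t.body, x, r))

    # (λx.y)[y := x] α-renames the binder instead of capturing x:
    print(subst(Abs("x", Var("y")), "y", Var("x")))      # λv0.x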
Similarly,(λx.y)s→y[x:=s]=y{\displaystyle (\lambda x.y)s\to y[x:=s]=y}, which demonstrates thatλx.y{\displaystyle \lambda x.y}is a constant function. The lambda calculus may be seen as an idealized version of a functional programming language, likeHaskellorStandard ML. Under this view,β-reduction corresponds to a computational step. This step can be repeated by additional β-reductions until there are no more applications left to reduce. In the untyped lambda calculus, as presented here, this reduction process may not terminate. For instance, consider the termΩ=(λx.xx)(λx.xx){\displaystyle \Omega =(\lambda x.xx)(\lambda x.xx)}. Here(λx.xx)(λx.xx)→(xx)[x:=λx.xx]=(x[x:=λx.xx])(x[x:=λx.xx])=(λx.xx)(λx.xx){\displaystyle (\lambda x.xx)(\lambda x.xx)\to (xx)[x:=\lambda x.xx]=(x[x:=\lambda x.xx])(x[x:=\lambda x.xx])=(\lambda x.xx)(\lambda x.xx)}. That is, the term reduces to itself in a single β-reduction, and therefore the reduction process will never terminate. Another aspect of the untyped lambda calculus is that it does not distinguish between different kinds of data. For instance, it may be desirable to write a function that only operates on numbers. However, in the untyped lambda calculus, there is no way to prevent a function from being applied totruth values, strings, or other non-number objects. Lambda expressions are composed of: The set of lambda expressions,Λ, can bedefined inductively: Instances of rule 2 are known asabstractionsand instances of rule 3 are known asapplications.[18]See§ reducible expression This set of rules may be written inBackus–Naur formas: To keep the notation of lambda expressions uncluttered, the following conventions are usually applied: The abstraction operator, λ, is said to bind its variable wherever it occurs in the body of the abstraction. Variables that fall within the scope of an abstraction are said to bebound. In an expression λx.M, the part λxis often calledbinder, as a hint that the variablexis getting bound by prepending λxtoM. All other variables are calledfree. For example, in the expression λy.x x y,yis a bound variable andxis a free variable. Also a variable is bound by its nearest abstraction. In the following example the single occurrence ofxin the expression is bound by the second lambda: λx.y(λx.z x). The set offree variablesof a lambda expression,M, is denoted as FV(M) and is defined by recursion on the structure of the terms, as follows: An expression that contains no free variables is said to beclosed. Closed lambda expressions are also known ascombinatorsand are equivalent to terms incombinatory logic. The meaning of lambda expressions is defined by how expressions can be reduced.[22] There are three kinds of reduction: We also speak of the resulting equivalences: two expressions areα-equivalent, if they can be α-converted into the same expression. β-equivalence and η-equivalence are defined similarly. The termredex, short forreducible expression, refers to subterms that can be reduced by one of the reduction rules. For example, (λx.M)Nis a β-redex in expressing the substitution ofNforxinM. The expression to which a redex reduces is called itsreduct; the reduct of (λx.M)NisM[x:=N].[b] Ifxis not free inM, λx.M xis also an η-redex, with a reduct ofM. α-conversion(alpha-conversion), sometimes known as α-renaming,[23]allows bound variable names to be changed. For example, α-conversion of λx.xmight yield λy.y. Terms that differ only by α-conversion are calledα-equivalent. 
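The non-termination of Ω can be observed concretely: in Python, which evaluates strictly, the analogous self-application exhausts the interpreter stack rather than looping forever (a sketch):

    # Ω = (λx.x x)(λx.x x) reduces to itself, so reduction never ends.
    omega = lambda: (lambda x: x(x))(lambda x: x(x))

    try:
        omega()
    except RecursionError:
        print("no normal form: each β-step recreates the same term")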
Frequently, in uses of lambda calculus, α-equivalent terms are considered to be equivalent. The precise rules for α-conversion are not completely trivial. First, when α-converting an abstraction, the only variable occurrences that are renamed are those that are bound to the same abstraction. For example, an α-conversion of λx.λx.xcould result in λy.λx.x, but it couldnotresult in λy.λx.y. The latter has a different meaning from the original. This is analogous to the programming notion ofvariable shadowing. Second, α-conversion is not possible if it would result in a variable getting captured by a different abstraction. For example, if we replacexwithyin λx.λy.x, we get λy.λy.y, which is not at all the same. In programming languages with static scope, α-conversion can be used to makename resolutionsimpler by ensuring that no variable namemasksa name in a containingscope(seeα-renaming to make name resolution trivial). In theDe Bruijn indexnotation, any two α-equivalent terms are syntactically identical. Substitution, writtenM[x:=N], is the process of replacing allfreeoccurrences of the variablexin the expressionMwith expressionN. Substitution on terms of the lambda calculus is defined by recursion on the structure of terms, as follows (note: x and y are only variables while M and N are any lambda expression): To substitute into an abstraction, it is sometimes necessary to α-convert the expression. For example, it is not correct for (λx.y)[y:=x] to result in λx.x, because the substitutedxwas supposed to be free but ended up being bound. The correct substitution in this case is λz.x,up toα-equivalence. Substitution is defined uniquely up to α-equivalence.See Capture-avoiding substitutionsabove. β-reduction(betareduction) captures the idea of function application. β-reduction is defined in terms of substitution: the β-reduction of (λx.M)NisM[x:=N].[b] For example, assuming some encoding of 2, 7, ×, we have the following β-reduction: (λn.n× 2) 7 → 7 × 2. β-reduction can be seen to be the same as the concept oflocal reducibilityinnatural deduction, via theCurry–Howard isomorphism. η-conversion(etaconversion) expresses the idea ofextensionality,[24]which in this context is that two functions are the sameif and only ifthey give the same result for all arguments. η-conversion converts between λx.fxandfwheneverxdoes not appear free inf. η-reduction changes λx.fxtof, and η-expansion changesfto λx.fx, under the same requirement thatxdoes not appear free inf. η-conversion can be seen to be the same as the concept oflocal completenessinnatural deduction, via theCurry–Howard isomorphism. For the untyped lambda calculus, β-reduction as arewriting ruleis neitherstrongly normalisingnorweakly normalising. However, it can be shown that β-reduction isconfluentwhen working up to α-conversion (i.e. we consider two normal forms to be equal if it is possible to α-convert one into the other). Therefore, both strongly normalising terms and weakly normalising terms have a unique normal form. For strongly normalising terms, any reduction strategy is guaranteed to yield the normal form, whereas for weakly normalising terms, some reduction strategies may fail to find it. The basic lambda calculus may be used to modelarithmetic, Booleans, data structures, and recursion, as illustrated in the following sub-sectionsi,ii,iii, and§ iv. There are several possible ways to define thenatural numbersin lambda calculus, but by far the most common are theChurch numerals, which can be defined as follows: and so on. 
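The definitions referred to above are the standard ones: 0 := λf.λx.x, 1 := λf.λx.f x, 2 := λf.λx.f (f x), 3 := λf.λx.f (f (f x)), and so on. Transcribed into Python (the helper to_int, which applies a numeral to the ordinary successor, is illustrative):

    ZERO  = lambda f: lambda x: x                 # λf.λx.x
    ONE   = lambda f: lambda x: f(x)              # λf.λx.f x
    TWO   = lambda f: lambda x: f(f(x))           # λf.λx.f (f x)
    THREE = lambda f: lambda x: f(f(f(x)))        # λf.λx.f (f (f x))

    # A numeral is the instruction "repeat n times": applied to the
    # ordinary successor on machine integers it yields n itself.
    def to_int(n):
        return n(lambda k: k + 1)(0)

    assert [to_int(m) for m in (ZERO, ONE, TWO, THREE)] == [0, 1, 2, 3]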
Or using the alternative syntax presented above inNotation: A Church numeral is ahigher-order function—it takes a single-argument functionf, and returns another single-argument function. The Church numeralnis a function that takes a functionfas argument and returns then-th composition off, i.e. the functionfcomposed with itselfntimes. This is denotedf(n)and is in fact then-th power off(considered as an operator);f(0)is defined to be the identity function. Such repeated compositions (of a single functionf) obey thelaws of exponents, which is why these numerals can be used for arithmetic. (In Church's original lambda calculus, the formal parameter of a lambda expression was required to occur at least once in the function body, which made the above definition of0impossible.) One way of thinking about the Church numeraln, which is often useful when analysing programs, is as an instruction 'repeatntimes'. For example, using thePAIRandNILfunctions defined below, one can define a function that constructs a (linked) list ofnelements all equal toxby repeating 'prepend anotherxelement'ntimes, starting from an empty list. The lambda term is By varying what is being repeated, and varying what argument that function being repeated is applied to, a great many different effects can be achieved. We can define a successor function, which takes a Church numeralnand returnsn+ 1by adding another application off, where '(mf)x' means the function 'f' is applied 'm' times on 'x': Because them-th composition offcomposed with then-th composition offgives them+n-th composition off, addition can be defined as follows: PLUScan be thought of as a function taking two natural numbers as arguments and returning a natural number; it can be verified that and are β-equivalent lambda expressions. Since addingmto a numberncan be accomplished by adding 1mtimes, an alternative definition is: Similarly, multiplication can be defined as Alternatively since multiplyingmandnis the same as repeating the addnfunctionmtimes and then applying it to zero. Exponentiation has a rather simple rendering in Church numerals, namely The predecessor function defined byPREDn=n− 1for a positive integernandPRED 0 = 0is considerably more difficult. The formula can be validated by showing inductively that ifTdenotes(λg.λh.h(gf)), thenT(n)(λu.x) = (λh.h(f(n−1)(x)))forn> 0. Two other definitions ofPREDare given below, one usingconditionalsand the other usingpairs. With the predecessor function, subtraction is straightforward. Defining SUBmnyieldsm−nwhenm>nand0otherwise. By convention, the following two definitions (known as Church Booleans) are used for the Boolean valuesTRUEandFALSE: Then, with these two lambda terms, we can define some logic operators (these are just possible formulations; other expressions could be equally correct): We are now able to compute some logic functions, for example: and we see thatAND TRUE FALSEis equivalent toFALSE. Apredicateis a function that returns a Boolean value. The most fundamental predicate isISZERO, which returnsTRUEif its argument is the Church numeral0, butFALSEif its argument were any other Church numeral: The following predicate tests whether the first argument is less-than-or-equal-to the second: and sincem=n, ifLEQmnandLEQnm, it is straightforward to build a predicate for numerical equality. The availability of predicates and the above definition ofTRUEandFALSEmake it convenient to write "if-then-else" expressions in lambda calculus. 
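These arithmetic and Boolean definitions can be checked mechanically. A Python transcription follows (to_int and to_bool are illustrative helpers; PRED is the standard pair-free form built around the term T = λg.λh.h (g f) discussed above):

    ZERO = lambda f: lambda x: x
    SUCC = lambda n: lambda f: lambda x: f(n(f)(x))       # one more f
    PLUS = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
    MULT = lambda m: lambda n: lambda f: m(n(f))
    POW  = lambda b: lambda e: e(b)                        # b to the power e

    PRED = lambda n: (lambda f: lambda x:
                      n(lambda g: lambda h: h(g(f)))(lambda u: x)(lambda u: u))
    SUB  = lambda m: lambda n: n(PRED)(m)                  # m - n, floored at 0

    TRUE   = lambda a: lambda b: a
    FALSE  = lambda a: lambda b: b
    AND    = lambda p: lambda q: p(q)(p)
    ISZERO = lambda n: n(lambda _: FALSE)(TRUE)
    LEQ    = lambda m: lambda n: ISZERO(SUB(m)(n))

    to_int  = lambda n: n(lambda k: k + 1)(0)
    to_bool = lambda p: p(True)(False)

    two, three = SUCC(SUCC(ZERO)), SUCC(SUCC(SUCC(ZERO)))
    assert to_int(PLUS(two)(three)) == 5
    assert to_int(MULT(two)(three)) == 6
    assert to_int(POW(two)(three)) == 8           # 2 cubed
    assert to_int(PRED(three)) == 2 and to_int(PRED(ZERO)) == 0
    assert to_bool(AND(TRUE)(FALSE)) is False     # AND TRUE FALSE = FALSE
    assert to_bool(LEQ(two)(three)) is True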
For example, the predecessor function can be defined as: which can be verified by showing inductively thatn(λg.λk.ISZERO (g1)k(PLUS (gk) 1)) (λv.0)is the addn− 1 function forn> 0. A pair (2-tuple) can be defined in terms ofTRUEandFALSE, by using theChurch encoding for pairs. For example,PAIRencapsulates the pair (x,y),FIRSTreturns the first element of the pair, andSECONDreturns the second. A linked list can be defined as either NIL for the empty list, or thePAIRof an element and a smaller list. The predicateNULLtests for the valueNIL. (Alternatively, withNIL := FALSE, the constructl(λh.λt.λz.deal_with_head_h_and_tail_t) (deal_with_nil)obviates the need for an explicit NULL test). As an example of the use of pairs, the shift-and-increment function that maps(m,n)to(n,n+ 1)can be defined as which allows us to give perhaps the most transparent version of the predecessor function: There is a considerable body ofprogramming idiomsfor lambda calculus. Many of these were originally developed in the context of using lambda calculus as a foundation forprogramming language semantics, effectively using lambda calculus as alow-level programming language. Because several programming languages include the lambda calculus (or something very similar) as a fragment, these techniques also see use in practical programming, but may then be perceived as obscure or foreign. In lambda calculus, alibrarywould take the form of a collection of previously defined functions, which as lambda-terms are merely particular constants. The pure lambda calculus does not have a concept of named constants since all atomic lambda-terms are variables, but one can emulate having named constants by setting aside a variable as the name of the constant, using abstraction to bind that variable in the main body, and apply that abstraction to the intended definition. Thus to usefto meanN(some explicit lambda-term) inM(another lambda-term, the "main program"), one can say Authors often introducesyntactic sugar, such aslet,[k]to permit writing the above in the more intuitive order By chaining such definitions, one can write a lambda calculus "program" as zero or more function definitions, followed by one lambda-term using those functions that constitutes the main body of the program. A notable restriction of thisletis that the namefmay not be referenced inN, forNis outside the scope of the abstraction bindingf, which isM; this means a recursive function definition cannot be written withlet. Theletrec[l]construction would allow writing recursive function definitions, where the scope of the abstraction bindingfincludesNas well asM. Or self-application a-la that which leads toYcombinator could be used. Recursionis when a function invokes itself. What would a value be which were to represent such a function? It has to refer to itself somehow inside itself, just as the definition refers to itself inside itself. If this value were to contain itself by value, it would have to be of infinite size, which is impossible. Other notations, which support recursion natively, overcome this by referring to the functionby nameinside its definition. Lambda calculus cannot express this, since in it there simply are no names for terms to begin with, only arguments' names, i.e. parameters in abstractions. Thus, a lambda expression can receive itself as its argument and refer to (a copy of) itself via the corresponding parameter's name. This will work fine in case it was indeed called with itself as an argument. 
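Returning to the pair encoding defined earlier in this section, the "shift and increment" predecessor can be sketched in Python (definitions repeated so the sketch stands alone):

    PAIR   = lambda x: lambda y: lambda f: f(x)(y)   # λx.λy.λf.f x y
    FIRST  = lambda p: p(lambda x: lambda y: x)      # p TRUE
    SECOND = lambda p: p(lambda x: lambda y: y)      # p FALSE

    ZERO   = lambda f: lambda x: x
    SUCC   = lambda n: lambda f: lambda x: f(n(f)(x))
    to_int = lambda n: n(lambda k: k + 1)(0)

    # The shift-and-increment function maps (m, n) to (n, n + 1).
    PHI = lambda p: PAIR(SECOND(p))(SUCC(SECOND(p)))

    # PRED n: start from (0, 0), apply PHI n times, take the first component.
    PRED = lambda n: FIRST(n(PHI)(PAIR(ZERO)(ZERO)))

    three = SUCC(SUCC(SUCC(ZERO)))
    assert to_int(PRED(three)) == 2
    assert to_int(PRED(ZERO)) == 0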
For example,(λx.xx)E= (E E)will express recursion whenEis an abstraction which is applying its parameter to itself inside its body to express a recursive call. Since this parameter receivesEas its value, its self-application will be the same(E E)again. As a concrete example, consider thefactorialfunctionF(n), recursively defined by In the lambda expression which is to represent this function, aparameter(typically the first one) will be assumed to receive the lambda expression itself as its value, so that calling it with itself as its first argument will amount to the recursive call. Thus to achieve recursion, the intended-as-self-referencing argument (calledshere, reminiscent of "self", or "self-applying") must always be passed to itself within the function body at a recursive call point: and we have Heres sbecomesthe same(E E)inside the result of the application(E E), and using the same function for a call is the definition of what recursion is. The self-application achieves replication here, passing the function's lambda expression on to the next invocation as an argument value, making it available to be referenced there by the parameter namesto be called via the self-applicationss, again and again as needed, each timere-creatingthe lambda-termF = E E. The application is an additional step just as the name lookup would be. It has the same delaying effect. Instead of havingFinside itself as a wholeup-front, delaying its re-creation until the next call makes its existence possible by having twofinitelambda-termsEinside it re-create it on the flylateras needed. This self-applicational approach solves it, but requires re-writing each recursive call as a self-application. We would like to have a generic solution, without the need for any re-writes: Given a lambda term with first argument representing recursive call (e.g.Ghere), thefixed-pointcombinatorFIXwill return a self-replicating lambda expression representing the recursive function (here,F). The function does not need to be explicitly passed to itself at any point, for the self-replication is arranged in advance, when it is created, to be done each time it is called. Thus the original lambda expression(FIX G)is re-created inside itself, at call-point, achievingself-reference. In fact, there are many possible definitions for thisFIXoperator, the simplest of them being: In the lambda calculus,Ygis a fixed-point ofg, as it expands to: Now, to perform the recursive call to the factorial function for an argumentn, we would simply call(YG)n. Givenn= 4, for example, this gives: Every recursively defined function can be seen as a fixed point of some suitably defined higher order function (also known as functional) closing over the recursive call with an extra argument. Therefore, usingY, every recursive function can be expressed as a lambda expression. In particular, we can now cleanly define the subtraction, multiplication, and comparison predicates of natural numbers, using recursion. WhenY combinatoris coded directly in astrict programming language, the applicative order of evaluation used in such languages will cause an attempt to fully expand the internal self-application(xx){\displaystyle (xx)}prematurely, causingstack overflowor, in case oftail call optimization, indefinite looping.[27]A delayed variant of Y, theZ combinator, can be used in such languages. 
It has the internal self-application hidden behind an extra abstraction througheta-expansion, as(λv.xxv){\displaystyle (\lambda v.xxv)}, thus preventing its premature expansion:[28] Certain terms have commonly accepted names:[29][30][31] Iis the identity function.SKandBCKWform completecombinator calculussystems that can express any lambda term - seethe next section.ΩisUU, the smallest term that has no normal form.YIis another such term.Yis standard and definedabove, and can also be defined asY=BU(CBU), so thatYg=g(Yg).TRUEandFALSEdefinedaboveare commonly abbreviated asTandF. IfNis a lambda-term without abstraction, but possibly containing named constants (combinators), then there exists a lambda-termT(x,N) which is equivalent toλx.Nbut lacks abstraction (except as part of the named constants, if these are considered non-atomic). This can also be viewed as anonymising variables, asT(x,N) removes all occurrences ofxfromN, while still allowing argument values to be substituted into the positions whereNcontains anx. The conversion functionTcan be defined by: In either case, a term of the formT(x,N)Pcan reduce by having the initial combinatorI,K, orSgrab the argumentP, just like β-reduction of(λx.N)Pwould do.Ireturns that argument.Kthrows the argument away, just like(λx.N)would do ifxhas no free occurrence inN.Spasses the argument on to both subterms of the application, and then applies the result of the first to the result of the second. The combinatorsBandCare similar toS, but pass the argument on to only one subterm of an application (Bto the "argument" subterm andCto the "function" subterm), thus saving a subsequentKif there is no occurrence ofxin one subterm. In comparison toBandC, theScombinator actually conflates two functionalities: rearranging arguments, and duplicating an argument so that it may be used in two places. TheWcombinator does only the latter, yielding theB, C, K, W systemas an alternative toSKI combinator calculus. Atyped lambda calculusis a typedformalismthat uses the lambda-symbol (λ{\displaystyle \lambda }) to denote anonymous function abstraction. In this context, types are usually objects of a syntactic nature that are assigned to lambda terms; the exact nature of a type depends on the calculus considered (seeKinds of typed lambda calculi). From a certain point of view, typed lambda calculi can be seen as refinements of the untyped lambda calculus but from another point of view, they can also be considered the more fundamental theory anduntyped lambda calculusa special case with only one type.[32] Typed lambda calculi are foundational programming languages and are the base of typed functional programming languages such as ML and Haskell and, more indirectly, typedimperative programminglanguages. Typed lambda calculi play an important role in the design oftype systemsfor programming languages; here typability usually captures desirable properties of the program, e.g., the program will not cause a memory access violation. Typed lambda calculi are closely related tomathematical logicandproof theoryvia theCurry–Howard isomorphismand they can be considered as theinternal languageof classes ofcategories, e.g., the simply typed lambda calculus is the language of aCartesian closed category(CCC). Whether a term is normalising or not, and how much work needs to be done in normalising it if it is, depends to a large extent on the reduction strategy used. 
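Both points about the Z combinator, and the behaviour of the named combinators, can be demonstrated in Python, a strict language (a sketch; native integers and conditionals are used for readability instead of Church encodings):

    # Z = λf.(λx.f (λv.x x v)) (λx.f (λv.x x v))
    Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

    # G abstracts the recursive call as an extra first argument.
    G = lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1)
    factorial = Z(G)
    assert factorial(4) == 24     # Y(G) would overflow here; Z delays the self-application

    # The named combinators S, K, I; note that S K K behaves as I.
    S = lambda x: lambda y: lambda z: x(z)(y(z))
    K = lambda x: lambda y: x
    I = S(K)(K)
    assert I(42) == 42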
Common lambda calculus reduction strategies include:[33][34][35] Weak reduction strategies do not reduce under lambda abstractions: Strategies with sharing reduce computations that are "the same" in parallel: There is no algorithm that takes as input any two lambda expressions and outputsTRUEorFALSEdepending on whether one expression reduces to the other.[13]More precisely, nocomputable functioncandecidethe question. This was historically the first problem for which undecidability could be proven. As usual for such a proof,computablemeans computable by anymodel of computationthat isTuring complete. In fact computability can itself be defined via the lambda calculus: a functionF:N→Nof natural numbers is a computable function if and only if there exists a lambda expressionfsuch that for every pair ofx,yinN,F(x)=yif and only iffx=βy, wherexandyare theChurch numeralscorresponding toxandy, respectively and =βmeaning equivalence with β-reduction. See theChurch–Turing thesisfor other approaches to defining computability and their equivalence. Church's proof of uncomputability first reduces the problem to determining whether a given lambda expression has anormal form. Then he assumes that this predicate is computable, and can hence be expressed in lambda calculus. Building on earlier work by Kleene and constructing aGödel numberingfor lambda expressions, he constructs a lambda expressionethat closely follows the proof ofGödel's first incompleteness theorem. Ifeis applied to its own Gödel number, a contradiction results. The notion ofcomputational complexityfor the lambda calculus is a bit tricky, because the cost of a β-reduction may vary depending on how it is implemented.[36]To be precise, one must somehow find the location of all of the occurrences of the bound variableVin the expressionE, implying a time cost, or one must keep track of the locations of free variables in some way, implying a space cost. A naïve search for the locations ofVinEisO(n)in the lengthnofE.Director stringswere an early approach that traded this time cost for a quadratic space usage.[37]More generally this has led to the study of systems that useexplicit substitution. In 2014, it was shown that the number of β-reduction steps taken by normal order reduction to reduce a term is areasonabletime cost model, that is, the reduction can be simulated on a Turing machine in time polynomially proportional to the number of steps.[38]This was a long-standing open problem, due tosize explosion, the existence of lambda terms which grow exponentially in size for each β-reduction. The result gets around this by working with a compact shared representation. The result makes clear that the amount of space needed to evaluate a lambda term is not proportional to the size of the term during reduction. It is not currently known what a good measure of space complexity would be.[39] An unreasonable model does not necessarily mean inefficient.Optimal reductionreduces all computations with the same label in one step, avoiding duplicated work, but the number of parallel β-reduction steps to reduce a given term to normal form is approximately linear in the size of the term. This is far too small to be a reasonable cost measure, as any Turing machine may be encoded in the lambda calculus in size linearly proportional to the size of the Turing machine. 
The true cost of reducing lambda terms is not due to β-reduction per se but rather the handling of the duplication of redexes during β-reduction.[40]It is not known if optimal reduction implementations are reasonable when measured with respect to a reasonable cost model such as the number of leftmost-outermost steps to normal form, but it has been shown for fragments of the lambda calculus that the optimal reduction algorithm is efficient and has at most a quadratic overhead compared to leftmost-outermost.[39]In addition the BOHM prototype implementation of optimal reduction outperformed bothCamlLight and Haskell on pure lambda terms.[40] As pointed out byPeter Landin's 1965 paper "A Correspondence betweenALGOL 60and Church's Lambda-notation",[41]sequentialprocedural programminglanguages can be understood in terms of the lambda calculus, which provides the basic mechanisms for procedural abstraction and procedure (subprogram) application. For example, inPythonthe "square" function can be expressed as a lambda expression as follows: The above example is an expression that evaluates to a first-class function. The symbollambdacreates an anonymous function, given a list of parameter names,x– just a single argument in this case, and an expression that is evaluated as the body of the function,x**2. Anonymous functions are sometimes called lambda expressions. For example,Pascaland many other imperative languages have long supported passingsubprogramsasargumentsto other subprograms through the mechanism offunction pointers. However, function pointers are an insufficient condition for functions to befirst classdatatypes, because a function is a first class datatype if and only if new instances of the function can be created atruntime. Such runtime creation of functions is supported inSmalltalk,JavaScript,Wolfram Language, and more recently inScala,Eiffel(as agents),C#(as delegates) andC++11, among others. TheChurch–Rosserproperty of the lambda calculus means that evaluation (β-reduction) can be carried out inany order, even in parallel. This means that variousnondeterministicevaluation strategiesare relevant. However, the lambda calculus does not offer any explicit constructs forparallelism. One can add constructs such asfuturesto the lambda calculus. Otherprocess calculihave been developed for describing communication and concurrency. The fact that lambda calculus terms act as functions on other lambda calculus terms, and even on themselves, led to questions about the semantics of the lambda calculus. Could a sensible meaning be assigned to lambda calculus terms? The natural semantics was to find a setDisomorphic to the function spaceD→D, of functions on itself. However, no nontrivial suchDcan exist, bycardinalityconstraints because the set of all functions fromDtoDhas greater cardinality thanD, unlessDis asingleton set. In the 1970s,Dana Scottshowed that if onlycontinuous functionswere considered, a set ordomainDwith the required property could be found, thus providing amodelfor the lambda calculus.[42] This work also formed the basis for thedenotational semanticsof programming languages. These extensions are in thelambda cube: Theseformal systemsare extensions of lambda calculus that are not in the lambda cube: These formal systems are variations of lambda calculus: These formal systems are related to lambda calculus: Some parts of this article are based on material fromFOLDOC, used withpermission.
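The Python expression referred to in the discussion of Landin's correspondence above is the anonymous squaring function:

    square = lambda x: x ** 2            # an anonymous function, here bound to a name

    assert square(5) == 25
    assert (lambda x: x ** 2)(5) == 25   # applied directly, without a name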
https://en.wikipedia.org/wiki/Lambda_calculus#Recursion_and_fixed_points
Lambda liftingis ameta-processthat restructures acomputer programso thatfunctionsare defined independently of each other in a globalscope. An individuallifttransforms a local function (subroutine) into a global function. It is a two step process, consisting of: The term "lambda lifting" was first introduced by Thomas Johnsson around 1982 and was historically considered as a mechanism for implementingprogramming languagesbased onfunctional programming. It is used in conjunction with other techniques in some moderncompilers. Lambda lifting is not the same as closure conversion. It requires allcall sitesto be adjusted (adding extra arguments (parameters) to calls) and does not introduce aclosurefor the lifted lambda expression. In contrast, closure conversion does not require call sites to be adjusted but does introduce a closure for the lambda expression mapping free variables to values. The technique may be used on individual functions, incode refactoring, to make a function usable outside the scope in which it was written. Lambda lifts may also be repeated, to transform the program. Repeated lifts may be used to convert a program written inlambda calculusinto a set ofrecursive functions, without lambdas. This demonstrates the equivalence of programs written in lambda calculus and programs written as functions.[1]However it does not demonstrate the soundness of lambda calculus for deduction, as theeta reductionused in lambda lifting is the step that introducescardinality problemsinto the lambda calculus, because it removes the value from the variable, without first checking that there is only one value that satisfies the conditions on the variable (seeCurry's paradox). Lambda lifting is expensive on processing time for the compiler. An efficient implementation of lambda lifting isO(n2){\displaystyle O(n^{2})}on processing time for the compiler.[2] In theuntyped lambda calculus, where the basic types are functions, lifting may change the result ofbeta reductionof a lambda expression. The resulting functions will have the same meaning, in a mathematical sense, but are not regarded as the same function in the untyped lambda calculus. See alsointensional versus extensional equality. The reverse operation to lambda lifting islambda dropping.[3] Lambda dropping may make the compilation of programs quicker for the compiler, and may also increase the efficiency of the resulting program, by reducing the number of parameters, and reducing the size of stack frames. However it makes a function harder to re-use. A dropped function is tied to its context, and can only be used in a different context if it is first lifted. The following algorithm is one way to lambda-lift an arbitrary program in a language which doesn't support closures asfirst-class objects: If the language has closures as first-class objects that can be passed as arguments or returned from other functions, the closure will need to be represented by a data structure that captures the bindings of the free variables. The followingOCamlprogram computes the sum of the integers from 1 to 100: (Thelet recdeclaressumas a function that may call itself.) The function f, which adds sum's argument to the sum of the numbers less than the argument, is a local function. Within the definition of f, n is a free variable. 
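The OCaml program referred to above reads, in its usual presentation (a reconstruction; the exact formatting may differ):

    let rec sum n =
      if n = 1 then
        1
      else
        let f x = n + x in
        f (sum (n - 1))

    let () = print_int (sum 100)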
Start by converting the free variable to a parameter: Next, lift f into a global function: The following is the same example, this time written inJavaScript: Lambda lifting andclosureare both methods for implementingblock structuredprograms. It implements block structure by eliminating it. All functions are lifted to the global level. Closure conversion provides a "closure" which links the current frame to other frames. Closure conversion takes less compile time. Recursive functions, and block structured programs, with or without lifting, may be implemented using astackbased implementation, which is simple and efficient. However a stack frame based implementation must bestrict (eager). The stack frame based implementation requires that the life of functions belast-in, first-out(LIFO). That is, the most recent function to start its calculation must be the first to end. Some functional languages (such asHaskell) are implemented usinglazy evaluation, which delays calculation until the value is needed. The lazy implementation strategy gives flexibility to the programmer. Lazy evaluation requires delaying the call to a function until a request is made to the value calculated by the function. One implementation is to record a reference to a "frame" of data describing the calculation, in place of the value. Later when the value is required, the frame is used to calculate the value, just in time for when it is needed. The calculated value then replaces the reference. The "frame" is similar to astack frame, the difference being that it is not stored on the stack. Lazy evaluation requires that all the data required for the calculation be saved in the frame. If the function is "lifted", then the frame needs only record thefunction pointer, and the parameters to the function. Some modern languages usegarbage collectionin place of stack based allocation to manage the life of variables. In a managed, garbage collected environment, aclosurerecords references to the frames from which values may be obtained. In contrast a lifted function has parameters for each value needed in the calculation. TheLet expressionis useful in describing lifting, dropping, and the relationship between recursive equations and lambda expressions. Most functional languages have let expressions. Also, block structured programming languages likeALGOLandPascalare similar in that they too allow the local definition of a function for use in a restrictedscope. Theletexpression used here is a fully mutually recursive version oflet rec, as implemented in many functional languages. Let expressions are related toLambda calculus. Lambda calculus has a simple syntax and semantics, and is good for describing Lambda lifting. It is convenient to describe lambda lifting as a translations fromlambdato aletexpression, and lambda dropping as the reverse. This is becauseletexpressions allow mutual recursion, which is, in a sense, more lifted than is supported in lambda calculus. Lambda calculus does not support mutual recursion and only one function may be defined at the outermost global scope. Conversion ruleswhich describe translation without lifting are given in theLet expressionarticle. The following rules describe the equivalence of lambda and let expressions, Meta-functions will be given that describe lambda lifting and dropping. A meta-function is a function that takes a program as a parameter. The program is data for the meta-program. The program and the meta program are at different meta-levels. 
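The two OCaml transformation steps referred to above can be written out as follows (a reconstruction; the parameter name w is illustrative):

    (* Step 1: pass the free variable n as an explicit argument w of f. *)
    let rec sum n =
      if n = 1 then
        1
      else
        let f w x = w + x in
        f n (sum (n - 1))

    (* Step 2: lift f to the global scope; every call site now supplies n. *)
    let f w x = w + x

    let rec sum n =
      if n = 1 then
        1
      else
        f n (sum (n - 1))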
The following conventions will be used to distinguish program from the meta program, For simplicity the first rule given that matches will be applied. The rules also assume that the lambda expressions have been pre-processed so that each lambda abstraction has a unique name. The substitution operator is used extensively. The expressionL[G:=S]{\displaystyle L[G:=S]}means substitute every occurrence ofGinLbySand return the expression. The definition used is extended to cover the substitution of expressions, from the definition given on theLambda calculuspage. The matching of expressions should compare expressions for alpha equivalence (renaming of variables). Each lambda lift takes a lambda abstraction which is a sub expression of a lambda expression and replaces it by a function call (application) to a function that it creates. The free variables in the sub expression are the parameters to the function call. Lambda lifts may be used on individual functions, incode refactoring, to make a function usable outside the scope in which it was written. Such lifts may also be repeated, until the expression has no lambda abstractions, to transform the program. A lift is given a sub-expression within an expression to lift to the top of that expression. The expression may be part of a larger program. This allows control of where the sub-expression is lifted to. The lambda lift operation used to perform a lift within a program is, The sub expression may be either a lambda abstraction, or a lambda abstraction applied to a parameter. Two types of lift are possible. An anonymous lift has a lift expression which is a lambda abstraction only. It is regarded as defining an anonymous function. A name must be created for the function. A named lift expression has a lambda abstraction applied to an expression. This lift is regarded as a named definition of a function. An anonymous lift takes a lambda abstraction (calledS). ForS; The lambda lift is the substitution of the lambda abstractionSfor a function application, along with the addition of a definition for the function. The new lambda expression hasSsubstituted for G:L[S:=G] means substitution ofSforGinL. The function definitions has the function definitionG = Sadded. In the above ruleGis the function application that is substituted for the expressionS. It is defined by, whereVis the function name. It must be a new variable, i.e. a name not already used in the lambda expression, wherevars⁡[E]{\displaystyle \operatorname {vars} [E]}is a meta function that returns the set of variables used inE. Seede-lambdainConversion from lambda to let expressions. The result is, The function callGis constructed by adding parameters for each variable in the free variable set (represented byV), to the functionH, The named lift is similar to the anonymous lift except that the function nameVis provided. As for the anonymous lift, the expressionGis constructed fromVby applying the free variables ofS. It is defined by, For example, Seede-lambdainConversion from lambda to let expressions. The result is, gives, A lambda lift transformation takes a lambda expression and lifts all lambda abstractions to the top of the expression. The abstractions are then translated intorecursive functions, which eliminates the lambda abstractions. The result is a functional program in the form, whereMis a series of function definitions, andNis the expression representing the value returned. For example, Thede-letmeta function may then be used to convert the result back into lambda calculus. 
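As a small worked instance of the anonymous lift just described (the name g is illustrative): in λx.(λy.y x) x, the inner abstraction S = λy.y x has free-variable set {x}, so S is replaced by the call g x and a definition of g is added,

    λx.(λy.y x) x   →   let g = λx.λy.y x in λx.(g x) x

In recursive-equation form the added definition reads g x y = y x; the call g x applies the new name to the one free variable of S, exactly as the construction of G above prescribes.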
The processing of transforming the lambda expression is a series of lifts. Each lift has, After the lifts are applied the lets are combined together into a single let. ThenParameter droppingis applied to remove parameters that are not necessary in the "let" expression. The let expression allows the function definitions to refer to each other directly, whereas lambda abstractions are strictly hierarchical, and a function may not directly refer to itself. There are two different ways that an expression may be selected for lifting. The first treats all lambda abstractions as defining anonymous functions. The second, treats lambda abstractions which are applied to a parameter as defining a function. Lambda abstractions applied to a parameter have a dual interpretation as either a let expression defining a function, or as defining an anonymous function. Both interpretations are valid. These two predicates are needed for both definitions. lambda-free- An expression containing no lambda abstractions. lambda-anon- An anonymous function. An expression likeλx1....λxn.X{\displaystyle \lambda x_{1}.\ ...\ \lambda x_{n}.X}where X is lambda free. Search for the deepest anonymous abstraction, so that when the lift is applied the function lifted will become a simple equation. This definition does not recognize a lambda abstractions with a parameter as defining a function. All lambda abstractions are regarded as defining anonymous functions. lift-choice- The first anonymous found in traversing the expression ornoneif there is no function. For example, Search for the deepest named or anonymous function definition, so that when the lift is applied the function lifted will become a simple equation. This definition recognizes a lambda abstraction with an actual parameter as defining a function. Only lambda abstractions without an application are treated as anonymous functions. For example, For example, theY combinator, is lifted as, and afterParameter dropping, As a lambda expression (seeConversion from let to lambda expressions), (λf.(λx.f(xx))(λx.f(xx))){\displaystyle (\lambda f.(\lambda x.f\ (x\ x))(\lambda x.f\ (x\ x)))} If lifting anonymous functions only, the Y combinator is, and afterParameter dropping, As a lambda expression, The first sub expression to be chosen for lifting isλx.f(xx){\displaystyle \lambda x.f\ (x\ x)}. This transforms the lambda expression intoλf.(pf)(pf){\displaystyle \lambda f.(p\ f)\ (p\ f)}and creates the equationpfx=f(xx){\displaystyle p\ f\ x=f(x\ x)}. The second sub expression to be chosen for lifting isλf.(pf)(pf){\displaystyle \lambda f.(p\ f)\ (p\ f)}. This transforms the lambda expression intoqp{\displaystyle q\ p}and creates the equationqpf=(pf)(pf){\displaystyle q\ p\ f=(p\ f)\ (p\ f)}. And the result is, Surprisingly this result is simpler than the one obtained from lifting named functions. Apply function toK, So, or The Y-Combinator calls its parameter (function) repeatedly on itself. The value is defined if the function has afixed point. But the function will never terminate. Lambda dropping[4]is making the scope of functions smaller and using the context from the reduced scope to reduce the number of parameters to functions. Reducing the number of parameters makes functions easier to comprehend. In theLambda liftingsection, a meta function for first lifting and then converting the resulting lambda expression into recursive equation was described. 
The Lambda Drop meta function performs the reverse by first converting recursive equations to lambda abstractions, and then dropping the resulting lambda expression, into the smallest scope which covers all references to the lambda abstraction. Lambda dropping is performed in two steps, A Lambda drop is applied to an expression which is part of a program. Dropping is controlled by a set of expressions from which the drop will be excluded. where, The lambda drop transformation sinks all abstractions in an expression. Sinking is excluded from expressions in a set of expressions, where, sink-transinks each abstraction, starting from the innermost, Sinking is moving a lambda abstraction inwards as far as possible such that it is still outside all references to the variable. Application- 4 cases. Abstraction. Use renaming to ensure that the variable names are all distinct. Variable- 2 cases. Sink test excludes expressions from dropping, For example, Parameter dropping is optimizing a function for its position in the function. Lambda lifting added parameters that were necessary so that a function can be moved out of its context. In dropping, this process is reversed, and extra parameters that contain variables that are free may be removed. Dropping a parameter is removing an unnecessary parameter from a function, where the actual parameter being passed in is always the same expression. The free variables of the expression must also be free where the function is defined. In this case the parameter that is dropped is replaced by the expression in the body of the function definition. This makes the parameter unnecessary. For example, consider, In this example the actual parameter for the formal parameterois alwaysp. Aspis a free variable in the whole expression, the parameter may be dropped. The actual parameter for the formal parameteryis alwaysn. Howevernis bound in a lambda abstraction. So this parameter may not be dropped. The result of dropping the parameter is, For the main example, The definition ofdrop-params-tranis, where, For each abstraction that defines a function, build the information required to make decisions on dropping names. This information describes each parameter; the parameter name, the expression for the actual value, and an indication that all the expressions have the same value. For example, in, the parameters to the functiongare, Each abstraction is renamed with a unique name, and the parameter list is associated with the name of the abstraction. For example,gthere is parameter list. build-param-listsbuilds all the lists for an expression, by traversing the expression. It has four parameters; Abstraction- A lambda expression of the form(λN.S)L{\displaystyle (\lambda N.S)\ L}is analyzed to extract the names of parameters for the function. Locate the name and start building the parameter list for the name, filling in the formal parameter names. Also receive any actual parameter list from the body of the expression, and return it as the actual parameter list from this expression Variable- A call to a function. For a function name or parameter start populating actual parameter list by outputting the parameter list for this name. Application- An application (function call) is processed to extract actual parameter details. Retrieve the parameter lists for the expression, and the parameter. Retrieve a parameter record from the parameter list from the expression, and check that the current parameter value matches this parameter. 
Record the value for the parameter name, for use later in checking. The above logic is quite subtle in the way that it works. The same-value indicator is never set to true; it is only set to false if all the values cannot be matched. The value is retrieved by using S to build a set of the Boolean values allowed for S: if true is a member, then all the values for this parameter are equal, and the parameter may be dropped. Similarly, def uses set theory to query whether a variable has been given a value. Let - Let expression. And - For use in "let".

For example, building the parameter lists for the earlier expression, whose function g is the abstraction λx.λo.λy.o x y, eventually shows that the parameter o may be dropped. Peeling off one abstraction at a time gives,

    build-list[λx.λo.λy.o x y, D, V, D[g]] ∧ D[g] = L1
    build-list[λo.λy.o x y, D, V, L1] ∧ D[g] = [x,_,_] :: L1
    build-list[λy.o x y, D, V, L2] ∧ D[g] = [x,_,_] :: [o,_,_] :: L2

and traversing the applications of the main example gives,

    build-param-lists[n (g m p n), D, V, T1] ∧ build-param-lists[g q p n, D, V, K1]
    build-param-lists[n, D, V, T2] ∧ build-param-lists[g m p n, D, V, K2] ∧ build-param-lists[g q p n, D, V, K1]
    build-param-lists[g m p n, D, V, K2] ∧ build-param-lists[g q p n, D, V, K1]

The first application, build-param-lists[g m p n, D, V, K2], gives

    build-param-lists[g m p, D, V, T3] ∧ build-param-lists[n, D, V, K3]
    build-param-lists[g m, D, V, T4] ∧ build-param-lists[p, D, V, K4]
    build-param-lists[g, D, V, T5] ∧ build-param-lists[m, D, V, K5]
    D[g] = [x, S5, A5] :: [o, S4, A4] :: [y, S3, A3] :: K2

and the second, build-param-lists[g q p n, D, V, K1], gives

    build-param-lists[g q p, D, V, T6] ∧ build-param-lists[n, D, V, K6]
    build-param-lists[g q, D, V, T7] ∧ build-param-lists[p, D, V, K7]
    build-param-lists[g, D, V, T8] ∧ build-param-lists[m, D, V, K8]
    D[g] = [x, S8, A8] :: [o, S6, A7] :: [y, S6, A6] :: K1

As there are no definitions for V[n], V[p], V[q], V[m], equate can be simplified, and removing expressions that are not needed leaves the two expressions,

    D[g] = [x, S5, A5] :: [o, S4, A4] :: [y, S3, A3] :: K2
    D[g] = [x, S8, A8] :: [o, S6, A7] :: [y, S6, A6] :: K1

Comparing the two expressions for D[g]: if S3 is true then the actual parameters for y (here n and n) agree; if S3 is false there is no implication, so S3 = _, which means it may be true or false. Likewise if S4 is true the actual parameters for o (p and p) agree. If S5 were true the actual parameters for x (m and q) would have to be the same expression, which they are not, so S5 is false. The result is,

    D[g] = [x, false, _] :: [o, _, p] :: [y, _, n] :: _

For the body o x y,

    build-param-lists[o x, D, V, T9] ∧ build-param-lists[y, D, V, K9]
    build-param-lists[o, D, V, T10] ∧ build-param-lists[x, D, V, K10] ∧ build-param-lists[y, D, V, K10]

and by similar arguments as used above, together with what was obtained previously, the same parameter list is reached.

Another example is one in which x is equal to f; there the parameter x is dropped. The logic in equate is used in this more difficult example,

    ∧ D[f] = [F2, S2, A2] :: [F1, S1, A1] :: _

After collecting the results together, from the two definitions for D[p], and using D[q] = D[p] and comparing with the above, the parameter list for p is obtained.

Use the information obtained by Build parameter lists to drop actual parameters that are no longer required. drop-params has these cases: Abstraction. Variable - For a function name or parameter, start populating the actual parameter list by outputting the parameter list for this name. Application - An application (function call) is processed to remove the actual parameters marked as droppable. Let - Let expression. And - For use in "let".

From the results of building parameter lists,

    D[g] = [[x, false, _], [o, _, p], [y, _, n]] = [F3, S3, A3] :: [F2, S2, A2] :: [F1, S1, A1] :: _

so F3 = x, S3 = false, A3 = _; F2 = o, S2 = _, A2 = p; F1 = y, S1 = _, A1 = n, and the call g m p n is reduced to g m n. Similarly,

    D[g] = [[x, false, _], [o, _, p], [y, _, n]] = [F6, S6, A6] :: [F5, S5, A5] :: [F4, S4, A4] :: _

so F6 = x, S6 = false, A6 = _; F5 = o, S5 = _, A5 = p; F4 = y, S4 = _, A4 = n, and the call g q p n is reduced to g q n.

drop-formal removes formal parameters, based on the contents of the drop lists. Starting with the lifted function definition of the Y-combinator and applying the transformation gives back the Y combinator.
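For illustration, the parameter-dropping step at the heart of this example can be mirrored in Python (hypothetical functions; the names are illustrative):

    p = 10   # free where g is defined

    # Before: every call site passes the same expression p for parameter o.
    def g_before(x, o, y):
        return o + x + y

    before = g_before(1, p, 2) + g_before(3, p, 4)

    # After dropping o: the body refers to p directly and the call sites
    # shrink, just as g m p n and g q p n above become g m n and g q n.
    def g_after(x, y):
        return p + x + y

    assert before == g_after(1, 2) + g_after(3, 4)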
https://en.wikipedia.org/wiki/Lambda_lifting