== Further reading ==
Carl Hewitt. "Procedural Embedding of Knowledge in Planner". IJCAI 1971.
Carl Hewitt. "The Repeated Demise of Logic Programming and Why It Will Be Reincarnated". AAAI Spring Symposium: What Went Wrong and Why: Lessons from AI Research and Applications 2006: 2–9.
Evgeny Dantsin, Thomas Eiter, Georg Gottlob, Andrei Voronkov: Complexity and expressive power of logic programming. ACM Comput. Surv. 33(3): 374–425 (2001)
Ulf Nilsson and Jan Maluszynski, Logic, Programming and Prolog
== External links ==
Logic Programming Virtual Library entry
Bibliographies on Logic Programming Archived 2008-12-04 at the Wayback Machine
Association for Logic Programming (ALP)
Theory and Practice of Logic Programming (journal)
Logic programming in C++ with Castor
Logic programming Archived 2011-09-03 at the Wayback Machine in Oz
Prolog Development Center
Racklog: Logic Programming in Racket
https://en.wikipedia.org/wiki/Logic_programming
In computer science, a semaphore is a variable or abstract data type used to control access to a common resource by multiple threads and avoid critical section problems in a concurrent system such as a multitasking operating system. Semaphores are a type of synchronization primitive. A trivial semaphore is a plain variable that is changed (for example, incremented or decremented, or toggled) depending on programmer-defined conditions.
A useful way to think of a semaphore as used in a real-world system is as a record of how many units of a particular resource are available, coupled with operations to adjust that record safely (i.e., to avoid race conditions) as units are acquired or become free, and, if necessary, wait until a unit of the resource becomes available.
Though semaphores are useful for preventing race conditions, they do not guarantee their absence. Semaphores that allow an arbitrary resource count are called counting semaphores, while semaphores that are restricted to the values 0 and 1 (or locked/unlocked, unavailable/available) are called binary semaphores and are used to implement locks.
The semaphore concept was invented by Dutch computer scientist Edsger Dijkstra in 1962 or 1963, when Dijkstra and his team were developing an operating system for the Electrologica X8. That system eventually became known as the THE multiprogramming system.
== Library analogy ==
Suppose a physical library has ten identical study rooms, to be used by one student at a time. Students must request a room from the front desk. If no rooms are free, students wait at the desk until someone relinquishes a room. When a student has finished using a room, the student must return to the desk and indicate that the room is free.
In the simplest implementation, the clerk at the front desk knows only the number of free rooms available. This requires that all of the students use their room while they have signed up for it and return it when they are done. When a student requests a room, the clerk decreases this number. When a student releases a room, the clerk increases this number. The room can be used for as long as desired, and so it is not possible to book rooms ahead of time.
In this scenario, the front desk count-holder represents a counting semaphore, the rooms are the resource, and the students represent processes/threads. The value of the semaphore in this scenario is initially 10, with all rooms empty. When a student requests a room, they are granted access, and the value of the semaphore is changed to 9. After the next student comes, it drops to 8, then 7, and so on. If someone requests a room and the current value of the semaphore is 0, they are forced to wait until a room is freed (when the count is increased from 0). If one of the rooms was released, but there are several students waiting, then any method can be used to select the one who will occupy the room (like FIFO or randomly picking one). And of course, a student must inform the clerk about releasing their room only after really leaving it.
=== Important observations ===
When used to control access to a pool of resources, a semaphore tracks only how many resources are free. It does not keep track of which of the resources are free. Some other mechanism (possibly involving more semaphores) may be required to select a particular free resource.
The paradigm is especially powerful because the semaphore count may serve as a useful trigger for a number of different actions. The librarian above may turn the lights off in the study hall when there are no students remaining, or may place a sign that says the rooms are very busy when most of the rooms are occupied.
The success of the protocol requires applications to follow it correctly. Fairness and safety are likely to be compromised (which practically means a program may behave slowly, act erratically, hang, or crash) if even a single process acts incorrectly. This includes:
requesting a resource and forgetting to release it;
releasing a resource that was never requested;
holding a resource for a long time without needing it;
using a resource without requesting it first (or after releasing it).
Even if all processes follow these rules, multi-resource deadlock may still occur when there are different resources managed by different semaphores and when processes need to use more than one resource at a time, as illustrated by the dining philosophers problem.
== Semantics and implementation ==
Counting semaphores are equipped with two operations, historically denoted as P and V (see § Operation names for alternative names). Operation V increments the semaphore S, and operation P decrements it.
The value of the semaphore S is the number of units of the resource that are currently available. The P operation wastes time or sleeps until a resource protected by the semaphore becomes available, at which time the resource is immediately claimed. The V operation is the inverse: it makes a resource available again after the process has finished using it.
One important property of semaphore S is that its value cannot be changed except by using the V and P operations.
A simple way to understand wait (P) and signal (V) operations is:
wait: Decrements the value of the semaphore variable by 1. If the new value of the semaphore variable is negative, the process executing wait is blocked (i.e., added to the semaphore's queue). Otherwise, the process continues execution, having used a unit of the resource.
signal: Increments the value of the semaphore variable by 1. After the increment, if the pre-increment value was negative (meaning there are processes waiting for a resource), it transfers a blocked process from the semaphore's waiting queue to the ready queue.
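These wait and signal semantics can be sketched in Python. This is a simplified illustration only, with threads standing in for processes and an invented class name; real semaphores are provided by the operating system or by threading.Semaphore.

```python
import threading
from collections import deque

class NegativeCountSemaphore:
    """Toy semaphore whose value may go negative: a value of -n means
    n threads are blocked in wait(). Invented for illustration only."""

    def __init__(self, value=0):
        self.value = value
        self._lock = threading.Lock()   # makes each operation atomic
        self._queue = deque()           # FIFO queue of blocked threads

    def wait(self):                     # the P operation
        with self._lock:
            self.value -= 1
            if self.value >= 0:
                return                  # a unit was available; continue
            event = threading.Event()
            self._queue.append(event)   # no unit: join the semaphore's queue
        event.wait()                    # block until transferred to "ready"

    def signal(self):                   # the V operation
        with self._lock:
            self.value += 1
            if self.value <= 0:         # pre-increment value was negative
                self._queue.popleft().set()   # unblock one waiting thread
```

A negative value thus directly records how many processes are waiting, which is exactly the bookkeeping described above.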
Many operating systems provide efficient semaphore primitives that unblock a waiting process when the semaphore is incremented. This means that processes do not waste time checking the semaphore value unnecessarily.
The counting semaphore concept can be extended with the ability to claim or return more than one "unit" from the semaphore, a technique implemented in Unix. The modified V and P operations are as follows, using square brackets to indicate atomic operations, i.e., operations that appear indivisible to other processes:
function V(semaphore S, integer I):
    [S ← S + I]

function P(semaphore S, integer I):
    repeat:
        [if S ≥ I:
            S ← S − I
            break]
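A minimal Python sketch of this multi-unit variant, with a condition variable standing in for the pseudo-code's atomic brackets and busy-wait loop (the class name is invented; Python's standard threading.Semaphore has no multi-unit acquire):

```python
import threading

class MultiUnitSemaphore:
    """Counting semaphore whose P and V claim or return I units at once.
    Invented for illustration of the extended Unix-style operations."""

    def __init__(self, value=0):
        self._value = value
        self._cond = threading.Condition()

    def V(self, I=1):
        with self._cond:
            self._value += I            # [S ← S + I]
            self._cond.notify_all()     # let waiters re-test S ≥ I

    def P(self, I=1):
        with self._cond:
            while self._value < I:      # repeat until [S ≥ I]
                self._cond.wait()
            self._value -= I            # S ← S − I, then break
```

Blocking on the condition variable instead of looping avoids the wasted time of the literal busy-wait in the pseudo-code.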
However, the rest of this section refers to semaphores with unary V and P operations, unless otherwise specified.
To avoid starvation, a semaphore has an associated queue of processes (usually with FIFO semantics). If a process performs a P operation on a semaphore that has the value zero, the process is added to the semaphore's queue and its execution is suspended. When another process increments the semaphore by performing a V operation, and there are processes on the queue, one of them is removed from the queue and resumes execution. When processes have different priorities, the queue may be ordered by priority, so that the highest-priority process is taken from the queue first.
If the implementation does not ensure atomicity of the increment, decrement, and comparison operations, there is a risk of increments or decrements being forgotten, or of the semaphore value becoming negative. Atomicity may be achieved by using a machine instruction that can read, modify, and write the semaphore in a single operation. Without such a hardware instruction, an atomic operation may be synthesized by using a software mutual exclusion algorithm. On uniprocessor systems, atomic operations can be ensured by temporarily suspending preemption or disabling hardware interrupts. This approach does not work on multiprocessor systems, where it is possible for two programs sharing a semaphore to run on different processors at the same time. To solve this problem in a multiprocessor system, a locking variable can be used to control access to the semaphore. The locking variable is manipulated using a test-and-set-lock command.
== Examples ==
=== Trivial example ===
Consider a variable A and a boolean variable S. A is only accessed when S is marked true. Thus, S is a semaphore for A.
One can imagine a stoplight signal (S) just before a train station (A). In this case, if the signal is green, then one can enter the train station. If it is yellow or red (or any other color), the train station cannot be accessed.
=== Login queue ===
Consider a system that can only support ten users (S=10). Whenever a user logs in, P is called, decrementing the semaphore S by 1. Whenever a user logs out, V is called, incrementing S by 1, representing a login slot that has become available. When S is 0, any users wishing to log in must wait until S increases. The login request is enqueued onto a FIFO queue until a slot is freed. Mutual exclusion is used to ensure that requests are enqueued in order. Whenever S increases (login slots become available), a login request is dequeued, and the user owning the request is allowed to log in. If S is already greater than 0, then login requests are immediately dequeued.
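Python's threading.Semaphore implements exactly this pattern; the queueing of blocked acquirers is handled internally. The function and variable names below are invented for the example:

```python
import threading

MAX_USERS = 10                                 # S is initialized to 10
login_slots = threading.Semaphore(MAX_USERS)   # counting semaphore of free slots

def log_in(user):
    login_slots.acquire()        # P: decrements S, blocking while S == 0
    print(f"{user} logged in")

def log_out(user):
    print(f"{user} logged out")
    login_slots.release()        # V: increments S, freeing a slot
```

An eleventh caller of log_in simply blocks inside acquire() until some other user calls log_out.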
=== Producer–consumer problem ===
In the producer–consumer problem, one process (the producer) generates data items and another process (the consumer) receives and uses them. They communicate using a queue of maximum size N and are subject to the following conditions:
the consumer must wait for the producer to produce something if the queue is empty;
the producer must wait for the consumer to consume something if the queue is full.
The semaphore solution to the producer–consumer problem tracks the state of the queue with two semaphores: emptyCount, the number of empty places in the queue, and fullCount, the number of elements in the queue. To maintain integrity, emptyCount may be lower (but never higher) than the actual number of empty places in the queue, and fullCount may be lower (but never higher) than the actual number of items in the queue. Empty places and items represent two kinds of resources, empty boxes and full boxes, and the semaphores emptyCount and fullCount maintain control over these resources.
The binary semaphore useQueue ensures that the integrity of the state of the queue itself is not compromised, for example, by two producers attempting to add items to an empty queue simultaneously, thereby corrupting its internal state. Alternatively a mutex could be used in place of the binary semaphore.
The emptyCount is initially N, fullCount is initially 0, and useQueue is initially 1.
The producer does the following repeatedly:
produce:
    P(emptyCount)
    P(useQueue)
    putItemIntoQueue(item)
    V(useQueue)
    V(fullCount)
The consumer does the following repeatedly:
consume:
    P(fullCount)
    P(useQueue)
    item ← getItemFromQueue()
    V(useQueue)
    V(emptyCount)
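The two pseudo-code loops above translate directly into Python with three threading.Semaphore objects. This is a sketch; the buffer and function names are invented:

```python
import threading
from collections import deque

N = 4                                   # maximum queue size
buffer = deque()
emptyCount = threading.Semaphore(N)     # number of empty places, initially N
fullCount = threading.Semaphore(0)      # number of items, initially 0
useQueue = threading.Semaphore(1)       # binary semaphore guarding the buffer

def produce(item):
    emptyCount.acquire()                # P(emptyCount)
    useQueue.acquire()                  # P(useQueue)
    buffer.append(item)                 # putItemIntoQueue(item)
    useQueue.release()                  # V(useQueue)
    fullCount.release()                 # V(fullCount)

def consume():
    fullCount.acquire()                 # P(fullCount)
    useQueue.acquire()                  # P(useQueue)
    item = buffer.popleft()             # item ← getItemFromQueue()
    useQueue.release()                  # V(useQueue)
    emptyCount.release()                # V(emptyCount)
    return item
```

Note the ordering: each side waits on its counting semaphore before touching useQueue, so the buffer is never accessed while it is full (producer) or empty (consumer).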
Below is a substantive example:
A single consumer enters its critical section. Since fullCount is 0, the consumer blocks.
Several producers enter the producer critical section. No more than N producers may enter their critical section due to emptyCount constraining their entry.
The producers, one at a time, gain access to the queue through useQueue and deposit items in the queue.
Once the first producer exits its critical section, fullCount is incremented, allowing one consumer to enter its critical section.
Note that emptyCount may be much lower than the actual number of empty places in the queue, for example, where many producers have decremented it but are waiting their turn on useQueue before filling empty places. Note that emptyCount + fullCount ≤ N always holds, with equality if and only if no producers or consumers are executing their critical sections.
=== Passing the baton pattern ===
The "Passing the baton" pattern proposed by Gregory R. Andrews is a generic scheme to solve many complex concurrent programming problems in which multiple processes compete for the same resource with complex access conditions (such as satisfying specific priority criteria or avoiding starvation). Given a shared resource, the pattern requires a private "priv" semaphore (initialized to zero) for each process (or class of processes) involved and a single mutual exclusion "mutex" semaphore (initialized to one).
The pattern is expressed as pseudo-code in three parts: the code for each process, the resource acquisition and release primitives, and the "pass_the_baton" method that both primitives use in turn.
Remarks
The pattern is called "passing the baton" because a process that releases the resource as well as a freshly reactivated process will activate at most one suspended process, that is, shall "pass the baton to it". The mutex is released only when a process is going to suspend itself (resource_acquire), or when pass_the_baton is unable to reactivate another suspended process.
== Operation names ==
The canonical names V and P come from the initials of Dutch words. V is generally explained as verhogen ("increase"). Several explanations have been offered for P, including proberen ("to test" or "to try"), passeren ("pass"), and pakken ("grab"). Dijkstra's earliest paper on the subject gives passering ("passing") as the meaning for P, and vrijgave ("release") as the meaning for V. It also mentions that the terminology is taken from that used in railroad signals. Dijkstra subsequently wrote that he intended P to stand for prolaag, short for probeer te verlagen, literally "try to reduce", or to parallel the terms used in the other case, "try to decrease".
In ALGOL 68, the Linux kernel, and in some English textbooks, the V and P operations are called, respectively, up and down. In software engineering practice, they are often called signal and wait, release and acquire (standard Java library), or post and pend. Some texts call them vacate and procure to match the original Dutch initials.
== Semaphores vs. mutexes ==
A mutex is a locking mechanism that sometimes uses the same basic implementation as the binary semaphore. However, they differ in how they are used. While a binary semaphore may be colloquially referred to as a mutex, a true mutex has a more specific use-case and definition, in that only the task that locked the mutex is supposed to unlock it. This constraint aims to handle some potential problems of using semaphores:
Priority inversion: If the mutex knows who locked it and is supposed to unlock it, it is possible to promote the priority of that task whenever a higher-priority task starts waiting on the mutex.
Premature task termination: Mutexes may also provide deletion safety, where the task holding the mutex cannot be accidentally deleted. (This is also a cost; if the mutex can prevent a task from being reclaimed, then a garbage collector has to monitor the mutex.)
Termination deadlock: If a mutex-holding task terminates for any reason, the OS can release the mutex and signal waiting tasks of this condition.
Recursion deadlock: a task is allowed to lock a reentrant mutex multiple times, provided it unlocks it an equal number of times.
Accidental release: An error is raised on the release of the mutex if the releasing task is not its owner.
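The ownership and recursion properties can be seen in Python's threading.RLock, a reentrant mutex that records its owner. The sketch below (with invented names) shows a re-lock by the owner succeeding and an accidental release by another thread being rejected:

```python
import threading

mutex = threading.RLock()   # a reentrant mutex that records its owner
mutex.acquire()
mutex.acquire()             # re-locking by the owner is fine (recursion safety)
mutex.release()             # ...as long as it unlocks an equal number of times

result = {}

def rogue_release():
    try:
        mutex.release()     # accidental release by a task that is not the owner
    except RuntimeError as exc:
        result["error"] = str(exc)

t = threading.Thread(target=rogue_release)
t.start()
t.join()
print("rejected:", "error" in result)   # → rejected: True
mutex.release()             # the owner releases normally
```

A plain threading.Lock, like a binary semaphore, enforces no such ownership: any thread may release it.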
== See also ==
Async/await
Flag (programming)
Synchronization (computer science)
Cigarette smokers problem
Dining philosophers problem
Readers–writers problem
Sleeping barber problem
Monitor
Spurious wakeup
== References ==
== External links ==
=== Introductions ===
Hilsheimer, Volker (2004). "Implementing a Read/Write Mutex" (Web page). Qt Quarterly, Issue 11 - Q3 2004
Zelenski, Julie; Parlante, Nick. "Thread and Semaphore Examples" (PDF). Handout. CS107 Programming Paradigms. Spring 2008 (23). Stanford Engineering Everywhere (SEE).
=== References ===
Dijkstra, Edsger W. Cooperating sequential processes (EWD-123) (PDF). E.W. Dijkstra Archive. Center for American History, University of Texas at Austin. (transcription) (September 1965)
"semaphore.h - semaphores (REALTIME)". The Open Group Base Specifications Issue 6 IEEE Std 1003.1, 2004 Edition. Open Group. 2004.
Downey, Allen B. (2016) [2005]. "The Little Book of Semaphores" (2nd ed.). Green Tea Press.
Leppäjärvi, Jouni (May 11, 2008). "A pragmatic, historically oriented survey on the universality of synchronization primitives" (PDF). University of Oulu, Finland.
https://en.wikipedia.org/wiki/Semaphore_(programming)
Object-oriented programming (OOP) is a programming paradigm based on the concept of objects. Objects can contain data (called fields, attributes or properties) and have actions they can perform (called procedures or methods and implemented in code). In OOP, computer programs are designed by making them out of objects that interact with one another.
Many of the most widely used programming languages (such as C++, Java, and Python) support object-oriented programming to a greater or lesser degree, typically as part of multiple paradigms in combination with others such as imperative programming and declarative programming.
Significant object-oriented languages include Ada, ActionScript, C++, Common Lisp, C#, Dart, Eiffel, Fortran 2003, Haxe, Java, JavaScript, Kotlin, Logo, MATLAB, Objective-C, Object Pascal, Perl, PHP, Python, R, Raku, Ruby, Scala, SIMSCRIPT, Simula, Smalltalk, Swift, Vala and Visual Basic.NET.
== History ==
The idea of "objects" in programming started with the artificial intelligence group at MIT in the late 1950s and early 1960s. Here, "object" referred to LISP atoms with identified properties (attributes).
Another early example was Sketchpad created by Ivan Sutherland at MIT in 1960–1961. In the glossary of his technical report, Sutherland defined terms like "object" and "instance" (with the class concept covered by "master" or "definition"), albeit specialized to graphical interaction. Later, in 1968, AED-0, MIT's version of the ALGOL programming language, connected data structures ("plexes") and procedures, prefiguring what were later termed "messages", "methods", and "member functions".
Topics such as data abstraction and modular programming were common points of discussion at this time.
Meanwhile, in Norway, Simula was developed during the years 1961–1967. Simula introduced essential object-oriented ideas, such as classes, inheritance, and dynamic binding.
https://en.wikipedia.org/wiki/Object-oriented_programming
Simula was used mainly by researchers involved with physical modelling, like the movement of ships and their content through cargo ports. Simula is generally accepted as being the first language with the primary features and framework of an object-oriented language.
Influenced by both MIT and Simula, Alan Kay began developing his own ideas in November 1966. He would go on to create Smalltalk, an influential object-oriented programming language. By 1967, Kay was already using the term "object-oriented programming" in conversation. Although sometimes called the "father" of object-oriented programming, Kay has said his ideas differ from how object-oriented programming is commonly understood, and has implied that the computer science establishment did not adopt his notion.
A 1976 MIT memo co-authored by Barbara Liskov lists Simula 67, CLU, and Alphard as object-oriented languages, but does not mention Smalltalk.
In the 1970s, the first version of the Smalltalk programming language was developed at Xerox PARC by Alan Kay, Dan Ingalls and Adele Goldberg. Smalltalk-72 was notable for use of objects at the language level and its graphical development environment. Smalltalk was a fully dynamic system, allowing users to create and modify classes as they worked. Much of the theory of OOP was developed in the context of Smalltalk, for example multiple inheritance.
In the late 1970s and 1980s, object-oriented programming rose to prominence. The Flavors object-oriented Lisp was developed starting in 1979, introducing multiple inheritance and mixins. In August 1981, Byte Magazine highlighted Smalltalk and OOP, introducing these ideas to a wide audience. LOOPS, the object system for Interlisp-D, was influenced by Smalltalk and Flavors, and a paper about it was published in 1982. In 1986, the first Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA) was attended by 1,000 people. This conference marked the beginning of efforts to consolidate Lisp object systems, eventually resulting in the Common Lisp Object System. In the 1980s, there were a few attempts to design processor architectures that included hardware support for objects in memory, but these were not successful. Examples include the Intel iAPX 432 and the Linn Smart Rekursiv.
In the mid-1980s, new object-oriented languages like Objective-C, C++, and Eiffel emerged. Objective-C was developed by Brad Cox, who had used Smalltalk at ITT Inc. Bjarne Stroustrup created C++ based on his experience using Simula for his PhD thesis. Bertrand Meyer produced the first design of the Eiffel language in 1985, which focused on software quality using a design by contract approach.
In the 1990s, object-oriented programming became the main way of programming, especially as more languages supported it. These included Visual FoxPro 3.0, C++, and Delphi. OOP became even more popular with the rise of graphical user interfaces, which used objects for buttons, menus and other elements. One well-known example is Apple's Cocoa framework, used on Mac OS X and written in Objective-C. OOP toolkits also enhanced the popularity of event-driven programming.
At ETH Zürich, Niklaus Wirth and his colleagues created new approaches to OOP. Modula-2 (1978) and Oberon (1987) included a distinctive approach to object orientation, classes, and type checking across module boundaries. Inheritance is not obvious in Wirth's design, since his nomenclature looks in the opposite direction: it is called type extension, and the viewpoint is from the parent down to the inheritor.
Many programming languages that existed before OOP have added object-oriented features, including Ada, BASIC, Fortran, Pascal, and COBOL. This sometimes caused compatibility and maintainability issues, as these languages were not originally designed with OOP in mind.
In the new millennium, new languages like Python and Ruby have emerged that combine object-oriented and procedural styles. The most commercially important "pure" object-oriented languages continue to be Java, developed by Sun Microsystems, as well as C# and Visual Basic.NET (VB.NET), both designed for Microsoft's .NET platform. These languages show the benefits of OOP by creating abstractions from implementation. The .NET platform supports cross-language inheritance, allowing programs to use objects from multiple languages together.
== Features ==
Object-oriented programming focuses on working with objects, but not all OOP languages have every feature linked to OOP. Below are some common features of languages that are considered strong in OOP or support it along with other programming styles. Important exceptions are also noted. Christopher J. Date pointed out that comparing OOP with other styles, like relational programming, is difficult because there isn't a clear, agreed-upon definition of OOP.
=== Imperative programming ===
Features from imperative and structured programming are present in OOP languages and are also found in non-OOP languages.
Variables hold different data types like integers, strings, lists, and hash tables. Some data types are built-in while others result from combining variables using memory pointers.
Procedures – also known as functions, methods, routines, or subroutines – take input, generate output, and work with data. Modern languages include structured programming constructs like loops and conditionals.
Support for modular programming lets programmers organize related procedures into files and modules. This makes programs easier to manage. Each module has its own namespace, so items in one module will not conflict with items in another.
Object-oriented programming (OOP) was created to make code easier to reuse and maintain. However, it was not designed to clearly show the flow of a program's instructions; that was left to the compiler. As computers began using more parallel processing and multiple threads, it became more important to understand and control how instructions flow. This is difficult to do with OOP.
=== Objects ===
An object is a type of data structure that has two main parts: fields and methods. Fields may also be known as members, attributes, or properties, and hold information in the form of state variables. Methods are actions, subroutines, or procedures, defining the object's behavior in code. Objects are usually stored in memory, and in many programming languages, they work like pointers that link directly to a contiguous block containing the object instance's data.
Objects can contain other objects. This is called object composition. For example, an Employee object might have an Address object inside it, along with other information like "first_name" and "position". This type of structure shows "has-a" relationships, like "an employee has an address".
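A short Python sketch of this "has-a" composition (the field values are invented for the example):

```python
class Address:
    def __init__(self, street, city):
        self.street = street
        self.city = city

class Employee:
    def __init__(self, first_name, position, address):
        self.first_name = first_name
        self.position = position
        self.address = address      # composition: an Employee has an Address

e = Employee("Ada", "Engineer", Address("1 Main St", "Springfield"))
print(e.address.city)   # → Springfield
```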
Some believe that OOP places too much focus on using objects rather than on algorithms and data structures. For example, programmer Rob Pike pointed out that OOP can make programmers think more about type hierarchy than composition. He has called object-oriented programming "the Roman numerals of computing". Rich Hickey, creator of Clojure, described OOP as overly simplistic, especially when it comes to representing real-world things that change over time. Alexander Stepanov said that OOP tries to fit everything into a single type, which can be limiting. He argued that sometimes we need multisorted algebras—families of interfaces that span multiple types, such as in generic programming. Stepanov also said that calling everything an "object" doesn't add much understanding.
==== Real-world modeling and relationships ====
Sometimes, objects represent real-world things and processes in digital form. For example, a graphics program may have objects such as "circle", "square", and "menu". An online shopping system might have objects such as "shopping cart", "customer", and "product". Niklaus Wirth said, "This paradigm [OOP] closely reflects the structure of systems in the real world and is therefore well suited to model complex systems with complex behavior".
However, more often, objects represent abstract entities, like an open file or a unit converter. Not everyone agrees that OOP makes it easy to copy the real world exactly or that doing so is even necessary. Bob Martin suggests that because classes are software, their relationships don't match the real-world relationships they represent. Bertrand Meyer argues in Object-Oriented Software Construction that a program is not a model of the world but a model of some part of the world; "Reality is a cousin twice removed". Steve Yegge noted that natural languages lack the OOP approach of strictly prioritizing things (objects/nouns) before actions (methods/verbs), as opposed to functional programming, which does the reverse. This can sometimes make OOP solutions more complicated than those written in procedural programming.
=== Inheritance ===
Most OOP languages allow reusing and extending code through "inheritance". This inheritance can use either "classes" or "prototypes", which have some differences but use similar terms for ideas like "object" and "instance".
==== Class-based ====
In class-based programming, the most common type of OOP, every object is an instance of a specific class. The class defines the data format, like variables (e.g., name, age) and methods (actions the object can take). Every instance of the class has the same set of variables and methods. Objects are created using a special method in the class known as a constructor.
Here are a few key terms in class-based OOP:
Class variables – belong to the class itself, so all objects in the class share one copy.
Instance variables – belong to individual objects; every object has its own version of these variables.
Member variables – refer to both the class and instance variables defined by a particular class.
Class methods – linked to the class itself and can only use class variables.
Instance methods – belong to individual objects and can use both instance and class variables.
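The variable and method kinds above can be sketched in Python; the Counter class and its names are invented for this illustration:

```python
class Counter:
    created = 0  # class variable: one copy shared by all instances

    def __init__(self, name):
        self.name = name          # instance variable: each object has its own
        Counter.created += 1      # the constructor runs once per new object

    def describe(self):           # instance method: may use instance and class data
        return f"{self.name} is one of {Counter.created} counters"

    @classmethod
    def total(cls):               # class method: tied to the class, uses class data only
        return cls.created

a = Counter("a")
b = Counter("b")
print(b.describe())   # b is one of 2 counters
print(Counter.total())  # 2
```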
Classes may inherit from other classes, creating a hierarchy of "subclasses". For example, an "Employee" class might inherit from a "Person" class. This means the Employee object will have all the variables from Person (like name variables) plus any new variables (like job position and salary). Similarly, the subclass may expand the interface with new methods. Most languages also allow the subclass to override the methods defined by superclasses. Some languages support multiple inheritance, where a class can inherit from more than one class, and other languages similarly support mixins or traits. For example, a mixin called UnicodeConversionMixin might add a method unicode_to_ascii() to both a FileReader and a WebPageScraper class.
Some classes are abstract, meaning they cannot be directly instantiated into objects; they're only meant to be inherited into other classes. Other classes are utility classes which contain only class variables and methods and are not meant to be instantiated or subclassed.
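A minimal Python sketch of the Person/Employee hierarchy and mixin described above (the method bodies and salary figure are invented for this illustration):

```python
class Person:
    def __init__(self, name):
        self.name = name

    def greet(self):
        return f"Hi, I'm {self.name}"

class UnicodeConversionMixin:
    # a mixin adds a method to any class that inherits it
    def unicode_to_ascii(self, text):
        return text.encode("ascii", "ignore").decode()

class Employee(Person, UnicodeConversionMixin):
    def __init__(self, name, position, salary):
        super().__init__(name)        # inherited Person state
        self.position = position      # new variables added by the subclass
        self.salary = salary

    def greet(self):                  # overrides the superclass method
        return f"{super().greet()}, the {self.position}"

e = Employee("Ada", "engineer", 90000)
print(e.greet())                    # Hi, I'm Ada, the engineer
print(e.unicode_to_ascii("naïve"))  # nave
```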
==== Prototype-based ====
In prototype-based programming, there aren't any classes. Instead, each object is linked to another object, called its prototype or parent. In Self, an object may have multiple or no parents, but in the most popular prototype-based language, JavaScript, every object has exactly one prototype link, up to the base Object type whose prototype is null.
The prototype acts as a model for new objects. For example, if you have an object fruit, you can make two objects apple and orange, based on it. There is no fruit class, but they share traits from the fruit prototype. Prototype-based languages also allow objects to have their own unique properties, so the apple object might have an attribute sugar_content, while the orange or fruit objects do not.
==== No inheritance ====
Some languages, like Go, don't use inheritance at all. Instead, they encourage "composition over inheritance", where objects are built using smaller parts instead of parent-child relationships. For example, instead of inheriting from class Person, the Employee class could simply contain a Person object. This lets the Employee class control how much of Person it exposes to other parts of the program. Delegation is another language feature that can be used as an alternative to inheritance.
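The composition approach can be sketched in Python (the field names and the name property are invented for this illustration):

```python
class Person:
    def __init__(self, name, address):
        self.name = name
        self.address = address

class Employee:
    # composition: an Employee *has a* Person rather than *is a* Person
    def __init__(self, person, position):
        self._person = person    # kept internal; only name is exposed below
        self.position = position

    @property
    def name(self):              # Employee chooses how much of Person to expose
        return self._person.name

e = Employee(Person("Ada", "12 Crescent Rd"), "engineer")
print(e.name, e.position)  # Ada engineer
```

Note that the Person's address is not visible on the Employee at all; the wrapping class decides what to forward.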
Programmers have different opinions on inheritance. Bjarne Stroustrup, author of C++, has stated that it is possible to do OOP without inheritance. Rob Pike has criticized inheritance for creating complicated hierarchies instead of simpler solutions.
==== Inheritance and behavioral subtyping ====
People often think that if one class inherits from another, it means the subclass "is a" more specific version of the original class. This presumes the program semantics are that objects from the subclass can always replace objects from the original class without problems. This concept is known as behavioral subtyping, more specifically the Liskov substitution principle.
However, this is often not true, especially in programming languages that allow mutable objects, objects that change after they are created. In fact, subtype polymorphism as enforced by the type checker in OOP languages cannot guarantee behavioral subtyping in most if not all contexts. For example, the circle-ellipse problem is notoriously difficult to handle using OOP's concept of inheritance. Behavioral subtyping is undecidable in general, so it cannot be easily implemented by a compiler. Because of this, programmers must carefully design class hierarchies to avoid mistakes that the programming language itself cannot catch.
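The circle-ellipse problem can be sketched in Python; the stretch mutator is an invented example of an operation that is perfectly valid for ellipses but silently breaks the circle's invariant:

```python
class Ellipse:
    def __init__(self, width, height):
        self.width, self.height = width, height

    def stretch(self, factor):
        # a reasonable Ellipse operation: widen without changing height
        self.width *= factor

class Circle(Ellipse):
    """A circle 'is an' ellipse with width == height."""
    def __init__(self, diameter):
        super().__init__(diameter, diameter)

c = Circle(10)
c.stretch(2)              # type-checks fine: a Circle is an Ellipse
print(c.width, c.height)  # 20 10 -- the 'circle' is no longer circular
```

The type checker accepts the call, yet the object no longer behaves as a circle; nothing in the language caught the broken invariant.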
=== Dynamic dispatch ===
When a method is called on an object, the object itself—not outside code—decides which specific code to run. This process, called dynamic dispatch, usually happens at run time by checking a table linked to the object to find the correct method. In this context, a method call is also known as message passing, meaning the method name and its inputs are like a message sent to the object for it to act on. If the method choice depends on more than one type of object (such as other objects passed as parameters), it's called multiple dispatch.
Dynamic dispatch works together with inheritance: if an object doesn't have the requested method, it looks up to its parent class (delegation), and continues up the chain until it finds the method or reaches the top.
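A Python sketch of dispatch walking up the inheritance chain (the Animal/Dog/Puppy classes are invented for this illustration):

```python
class Animal:
    def speak(self):
        return "..."

    def label(self):
        return type(self).__name__

class Dog(Animal):
    def speak(self):   # Dog provides its own speak
        return "woof"

class Puppy(Dog):
    pass               # no speak here: lookup continues up the chain to Dog

for a in (Animal(), Dog(), Puppy()):
    # the object, not the call site, decides which speak() runs
    print(a.label(), a.speak())
```

Calling speak() on a Puppy finds nothing in Puppy itself, so the lookup proceeds to Dog; an Animal falls all the way through to the base definition.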
=== Data abstraction and encapsulation ===
Data abstraction is a way of organizing code so that only certain parts of the data are visible to related functions (data hiding). This helps prevent mistakes and makes the program easier to manage. Because data abstraction works well, many programming styles, like object-oriented programming and functional programming, use it as a key principle. Encapsulation is another important idea in programming. It means keeping the internal details of an object hidden from the outside code. This makes it easier to change how an object works on the inside without affecting other parts of the program, such as in code refactoring. Encapsulation also helps keep related code together (decoupling), making it easier for programmers to understand.
In object-oriented programming, objects act as a barrier between their internal workings and external code. Outside code can only interact with an object by calling specific public methods or variables. If a class only allows access to its data through methods and not directly, this is called information hiding. When designing a program, it's often recommended to keep data as hidden as possible. This means using local variables inside functions when possible, then private variables (which only the object can use), and finally public variables (which can be accessed by any part of the program) if necessary. Keeping data hidden helps prevent problems when changing the code later. Some programming languages, like Java, control information hiding by marking variables as private (hidden) or public (accessible). Other languages, like Python, rely on naming conventions, such as starting a private method's name with an underscore. Intermediate levels of access also exist, such as Java's protected keyword (which allows access from the same class and its subclasses, but not objects of a different class), and the internal keyword in C#, Swift, and Kotlin, which restricts access to files within the same module.
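A sketch of these conventions in Python (the Account class is invented; the leading underscore marks the variable as private by convention):

```python
class Account:
    def __init__(self, balance):
        self._balance = balance      # leading underscore: private by convention

    def deposit(self, amount):       # public method: the sanctioned way in
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount

    @property
    def balance(self):               # read-only public view of the hidden state
        return self._balance

acct = Account(100)
acct.deposit(50)
print(acct.balance)  # 150
```

Nothing stops external code from touching _balance directly, but the convention tells readers it is an internal detail that may change.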
Abstraction and information hiding are important concepts in programming, especially in object-oriented languages. Programs often create many copies of objects, and each one works independently. Supporters of this approach say it makes code easier to reuse and intuitively represents real-world situations. However, others argue that object-oriented programming does not enhance readability or modularity. Eric S. Raymond has written that object-oriented programming languages tend to encourage thickly layered programs that destroy transparency. Raymond compares this unfavourably to the approach taken with Unix and the C programming language.
One programming principle, called the "open/closed principle", says that classes and functions should be "open for extension, but closed for modification". Luca Cardelli has stated that OOP languages have "extremely poor modularity properties with respect to class extension and modification", and tend to be extremely complex. The latter point is reiterated by Joe Armstrong, the principal inventor of Erlang, who is quoted as saying:
The problem with object-oriented languages is they've got all this implicit environment that they carry around with them. You wanted a banana but what you got was a gorilla holding the banana and the entire jungle.
Leo Brodie says that information hiding can lead to copying the same code in multiple places (duplicating code), which goes against the don't repeat yourself rule of software development.
=== Polymorphism ===
Polymorphism is the use of one symbol to represent multiple different types. In object-oriented programming, polymorphism more specifically refers to subtyping or subtype polymorphism, where a function can work with a specific interface and thus manipulate entities of different classes in a uniform manner.
For example, imagine a program has two shapes: a circle and a square. Both come from a common class called "Shape." Each shape has its own way of drawing itself. With subtype polymorphism, the program doesn't need to know the type of each shape, and can simply call the "Draw" method for each shape. The programming language runtime will ensure the correct version of the "Draw" method runs for each shape. Because the details of each shape are handled inside their own classes, this makes the code simpler and more organized, enabling strong separation of concerns.
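The shape example above can be sketched in Python (using draw rather than Draw, per Python naming convention):

```python
from abc import ABC, abstractmethod

class Shape(ABC):
    @abstractmethod
    def draw(self):
        ...

class Circle(Shape):
    def draw(self):
        return "drawing a circle"

class Square(Shape):
    def draw(self):
        return "drawing a square"

shapes = [Circle(), Square()]
for s in shapes:
    # the caller never inspects the concrete type of s
    print(s.draw())
```

The loop body is identical for every shape; the runtime selects the right draw implementation for each object.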
=== Open recursion ===
In object-oriented programming, objects have methods that can change or use the object's data. Many programming languages use a special word, like this or self, to refer to the current object. In languages that support open recursion, a method in an object can call other methods in the same object, including itself, using this special word. This allows a method in one class to call another method defined later in a subclass, a feature known as late binding.
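A Python sketch of open recursion (the Document/Report classes are invented): render is defined in the base class, yet self.header() resolves to the subclass's method at run time:

```python
class Document:
    def render(self):
        # self.header() is resolved at run time (late binding), so a
        # subclass's header() is used even though render() is defined here
        return self.header() + " | body"

    def header(self):
        return "generic header"

class Report(Document):
    def header(self):
        return "quarterly report"

print(Report().render())  # quarterly report | body
```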
== OOP languages ==
OOP languages can be grouped into different types based on how they support and use objects:
Pure OOP languages: In these languages, everything is treated as an object, even basic things like numbers and characters. They are designed to fully support and enforce OOP. Examples: Ruby, Scala, Smalltalk, Eiffel, Emerald, JADE, Self, Raku.
Mostly OOP languages: These languages focus on OOP but also include some procedural programming features. Examples: Java, Python, C++, C#, Delphi/Object Pascal, VB.NET.
Retrofitted OOP languages: These were originally designed for other types of programming but later added some OOP features. Examples: PHP, JavaScript, Perl, Visual Basic (derived from BASIC), MATLAB, COBOL 2002, Fortran 2003, ABAP, Ada 95, Pascal.
Unique OOP languages: These languages have OOP features like classes and inheritance but use them in their own way. Examples: Oberon, BETA.
Object-based languages: These support some OOP ideas but avoid traditional class-based inheritance in favor of direct manipulation of objects. Examples: JavaScript, Lua, Modula-2, CLU, Go.
Multi-paradigm languages: These support both OOP and other programming styles, but OOP is not the predominant style in the language. Examples include Tcl, where TclOO allows both prototype-based and class-based OOP, and Common Lisp, with its Common Lisp Object System.
=== Popularity and reception ===
Many popular programming languages, like C++, Java, and Python, use object-oriented programming. In the past, OOP was widely accepted, but recently, some programmers have criticized it and prefer functional programming instead. A study by Potok et al. found no major difference in productivity between OOP and other methods.
Paul Graham, a well-known computer scientist, believes big companies like OOP because it helps manage large teams of average programmers. He argues that OOP adds structure, making it harder for one person to make serious mistakes, but at the same time restrains smart programmers. Eric S. Raymond, a Unix programmer and open-source software advocate, argues that OOP is not the best way to write programs.
Richard Feldman says that, while OOP features helped some languages stay organized, their popularity comes from other reasons. Lawrence Krubner argues that OOP doesn't offer special advantages compared to other styles, like functional programming, and can make coding more complicated. Luca Cardelli says that OOP is slower and takes longer to compile than procedural programming.
=== OOP in dynamic languages ===
In recent years, object-oriented programming (OOP) has become very popular in dynamic programming languages. Some languages, like Python, PowerShell, Ruby and Groovy, were designed with OOP in mind. Others, like Perl, PHP, and ColdFusion, started as non-OOP languages but added OOP features later (starting with Perl 5, PHP 4, and ColdFusion version 6).
On the web, HTML, XHTML, and XML documents use the Document Object Model (DOM), which works with the JavaScript language. JavaScript is a well-known example of a prototype-based language. Instead of using classes like other OOP languages, JavaScript creates new objects by copying (or "cloning") existing ones. Another language that uses this method is Lua.
=== OOP in a network protocol ===
When computers communicate in a client-server system, they send messages to request services. For example, a simple message might include a length field (showing how big the message is), a code that identifies the type of message, and a data value. These messages can be designed as structured objects that both the client and server understand, so that each type of message corresponds to a class of objects in the client and server code. More complex messages might include structured objects as additional details. The client and server need to know how to serialize and deserialize these messages so they can be transmitted over the network, and map them to the appropriate object types. Both clients and servers can be thought of as complex object-oriented systems.
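A sketch of such a message class in Python (OrderMessage and its fields are invented; JSON stands in for whatever wire encoding a real protocol would use):

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class OrderMessage:
    code: str    # identifies the type of message
    value: int   # the data value

    def serialize(self) -> bytes:
        body = json.dumps(asdict(self)).encode()
        # a 4-byte length field precedes the payload
        return len(body).to_bytes(4, "big") + body

    @classmethod
    def deserialize(cls, wire: bytes) -> "OrderMessage":
        length = int.from_bytes(wire[:4], "big")
        return cls(**json.loads(wire[4:4 + length]))

msg = OrderMessage(code="ORDER", value=42)
assert OrderMessage.deserialize(msg.serialize()) == msg
```

Both sides of the connection share the class definition, so each wire message maps back to the same object type on client and server.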
The Distributed Data Management Architecture (DDM) uses this idea by organizing objects into four levels:
Basic message details - Information like message length, type, and data.
Objects and collections - Similar to how objects work in Smalltalk, storing messages and their details.
Managers - Like file directories, these organize and store data, as well as provide memory and processing power. They are similar to IBM i Objects.
Clients and servers - These are full systems that include managers and handle security, directory services, and multitasking.
The first version of DDM defined distributed file services. Later, it was expanded to support databases through the Distributed Relational Database Architecture (DRDA).
== Design patterns ==
Design patterns are common solutions to problems in software design. Some design patterns are especially useful for object-oriented programming, and design patterns are typically introduced in an OOP context.
=== Object patterns ===
The following are notable software design patterns for OOP objects.
Function object: Class with one main method that acts like an anonymous function (in C++, the function operator, operator())
Immutable object: does not change state after creation
First-class object: can be used without restriction
Container object: contains other objects
Factory object: creates other objects
Metaobject: Used to create other objects (similar to a class, but an object)
Prototype object: a specialized metaobject that creates new objects by copying itself
Singleton object: only instance of its class for the lifetime of the program
Filter object: receives a stream of data as its input and transforms it into the object's output
A common anti-pattern is the God object, an object that knows or does too much.
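As one sketch of a pattern from the list above, a Singleton in Python (the Config name is invented for this illustration):

```python
class Config:
    _instance = None

    def __new__(cls):
        # Singleton: at most one instance for the lifetime of the program
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

a, b = Config(), Config()
print(a is b)  # True: both names refer to the single instance
```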
=== Gang of Four design patterns ===
Design Patterns: Elements of Reusable Object-Oriented Software is a famous book published in 1994 by four authors: Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides. People often call them the "Gang of Four". The book talks about the strengths and weaknesses of object-oriented programming and explains 23 common ways to solve programming problems.
These solutions, called "design patterns," are grouped into three types:
Creational patterns (5): Factory method pattern, Abstract factory pattern, Singleton pattern, Builder pattern, Prototype pattern
Structural patterns (7): Adapter pattern, Bridge pattern, Composite pattern, Decorator pattern, Facade pattern, Flyweight pattern, Proxy pattern
Behavioral patterns (11): Chain-of-responsibility pattern, Command pattern, Interpreter pattern, Iterator pattern, Mediator pattern, Memento pattern, Observer pattern, State pattern, Strategy pattern, Template method pattern, Visitor pattern
=== Object-orientation and databases ===
Both object-oriented programming and relational database management systems (RDBMSs) are widely used in software today. However, relational databases don't store objects directly, which creates a challenge when using them together. This issue is called object-relational impedance mismatch.
To solve this problem, developers use different methods, but none of them are perfect. One of the most common solutions is object-relational mapping (ORM), which helps connect object-oriented programs to relational databases. Examples of ORM tools include Visual FoxPro, Java Data Objects, and Ruby on Rails ActiveRecord.
Some databases, called object databases, are designed to work with object-oriented programming. However, they have not been as popular or successful as relational databases.
Date and Darwen have proposed a theoretical foundation that uses OOP as a kind of customizable type system to support RDBMSs, but it forbids objects containing pointers to other objects.
=== Responsibility- vs. data-driven design ===
In responsibility-driven design, classes are built around what they need to do and the information they share, in the form of a contract. This is different from data-driven design, where classes are built based on the data they need to store. According to Wirfs-Brock and Wilkerson, the originators of responsibility-driven design, it is the better approach.
=== SOLID and GRASP guidelines ===
SOLID is a set of five rules for designing good software; the acronym was coined by Michael Feathers for design principles promoted by Robert C. Martin:
Single responsibility principle: A class should have only one reason to change.
Open/closed principle: Software entities should be open for extension, but closed for modification.
Liskov substitution principle: Functions that use pointers or references to base classes must be able to use objects of derived classes without knowing it.
Interface segregation principle: Clients should not be forced to depend upon interfaces that they do not use.
Dependency inversion principle: Depend upon abstractions, not concretes.
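As one illustration, the dependency inversion principle can be sketched in Python (the MessageSender/Notifier names are invented for this example):

```python
from abc import ABC, abstractmethod

class MessageSender(ABC):
    # the abstraction that both high- and low-level code depend on
    @abstractmethod
    def send(self, text: str) -> str: ...

class EmailSender(MessageSender):
    def send(self, text: str) -> str:
        return f"email: {text}"

class Notifier:
    # depends on the abstraction, not on a concrete sender
    def __init__(self, sender: MessageSender):
        self.sender = sender

    def notify(self, text: str) -> str:
        return self.sender.send(text)

n = Notifier(EmailSender())
print(n.notify("build finished"))  # email: build finished
```

Swapping in an SMS or logging sender requires no change to Notifier, only a new MessageSender subclass.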
GRASP (General Responsibility Assignment Software Patterns) is another set of software design rules, created by Craig Larman, that helps developers assign responsibilities to different parts of a program:
Creator Principle: lets classes create the objects they closely use.
Information Expert Principle: assigns tasks to the classes that have the needed information.
Low Coupling Principle: reduces dependencies between classes to improve flexibility and maintainability.
High Cohesion Principle: keeps each class focused on a single, well-defined responsibility.
Controller Principle: assigns system operations to dedicated classes that manage flow and interactions.
Polymorphism: allows different classes to be used through a common interface, promoting flexibility and reuse.
Pure Fabrication Principle: introduces helper classes to improve the design, boost cohesion, and reduce coupling.
== Formal semantics ==
In object-oriented programming, objects are things that exist while a program is running. An object can represent anything, like a person, a place, a bank account, or a table of data. Many researchers have tried to formally define how OOP works. Records are the basis for understanding objects. They can represent fields, and also methods, if function literals can be stored. However, inheritance presents difficulties, particularly with the interactions between open recursion and encapsulated state. Researchers have used recursive types and co-algebraic data types to incorporate essential features of OOP. Abadi and Cardelli defined several extensions of System F<: that deal with mutable objects, allowing both subtype polymorphism and parametric polymorphism (generics), and were able to formally model many OOP concepts and constructs. Although far from trivial, static analysis of object-oriented programming languages such as Java is a mature field, with several commercial tools.
== See also ==
Comparison of programming languages (object-oriented programming)
Component-based software engineering
Object association
Object modeling language
Object-oriented analysis and design
Object-oriented ontology
=== Systems ===
CADES
Common Object Request Broker Architecture (CORBA)
Distributed Component Object Model
Jeroo
=== Modeling languages ===
IDEF4
Interface description language
UML
== References ==
== Further reading ==
Abadi, Martin; Luca Cardelli (1998). A Theory of Objects. Springer Verlag. ISBN 978-0-387-94775-4.
Abelson, Harold; Gerald Jay Sussman (1997). Structure and Interpretation of Computer Programs. MIT Press. ISBN 978-0-262-01153-2. Archived from the original on 26 December 2017. Retrieved 22 January 2006.
Armstrong, Deborah J. (February 2006). "The Quarks of Object-Oriented Development". Communications of the ACM. 49 (2): 123–128. doi:10.1145/1113034.1113040. ISSN 0001-0782. S2CID 11485502.
Bloch, Joshua (2018). "Effective Java: Programming Language Guide" (third ed.). Addison-Wesley. ISBN 978-0134685991.
Booch, Grady (1997). Object-Oriented Analysis and Design with Applications. Addison-Wesley. ISBN 978-0-8053-5340-2.
Eeles, Peter; Oliver Sims (1998). Building Business Objects. John Wiley & Sons. ISBN 978-0-471-19176-6.
Gamma, Erich; Richard Helm; Ralph Johnson; John Vlissides (1995). Design Patterns: Elements of Reusable Object Oriented Software. Addison-Wesley. Bibcode:1995dper.book.....G. ISBN 978-0-201-63361-0.
Harmon, Paul; William Morrissey (1996). The Object Technology Casebook – Lessons from Award-Winning Business Applications. John Wiley & Sons. ISBN 978-0-471-14717-6.
Jacobson, Ivar (1992). Object-Oriented Software Engineering: A Use Case-Driven Approach. Addison-Wesley. Bibcode:1992oose.book.....J. ISBN 978-0-201-54435-0.
Kay, Alan. The Early History of Smalltalk. Archived from the original on 4 April 2005. Retrieved 18 April 2005.
Meyer, Bertrand (1997). Object-Oriented Software Construction. Prentice Hall. ISBN 978-0-13-629155-8.
Pecinovsky, Rudolf (2013). OOP – Learn Object Oriented Thinking & Programming. Bruckner Publishing. ISBN 978-80-904661-8-0.
Rumbaugh, James; Michael Blaha; William Premerlani; Frederick Eddy; William Lorensen (1991). Object-Oriented Modeling and Design. Prentice Hall. ISBN 978-0-13-629841-0.
Schach, Stephen (2006). Object-Oriented and Classical Software Engineering, Seventh Edition. McGraw-Hill. ISBN 978-0-07-319126-3.
Schreiner, Axel-Tobias (1993). Object oriented programming with ANSI-C. Hanser. hdl:1850/8544. ISBN 978-3-446-17426-9.
Taylor, David A. (1992). Object-Oriented Information Systems – Planning and Implementation. John Wiley & Sons. ISBN 978-0-471-54364-0.
Weisfeld, Matt (2009). The Object-Oriented Thought Process, Third Edition. Addison-Wesley. ISBN 978-0-672-33016-2.
West, David (2004). Object Thinking (Developer Reference). Microsoft Press. ISBN 978-0-7356-1965-4.
== External links ==
Introduction to Object Oriented Programming Concepts (OOP) and More by L.W.C. Nirosh
Discussion on Cons of OOP
OOP Concepts (Java Tutorials)
In aspect and functional programming, advice describes a class of functions which modify other functions when the latter are run; it is a certain function, method or procedure that is to be applied at a given join point of a program.
== Use ==
The practical use of advice functions is generally to modify or otherwise extend the behavior of functions which cannot or should not be readily modified or extended. For instance, the Emacspeak Emacs-addon makes extensive use of advice: it must modify thousands of existing Emacs modules and functions such that it can produce audio output for the blind corresponding to the visual presentation, but it would be infeasible to copy all of them and redefine them to produce audio output in addition to their normal outputs; so, the Emacspeak programmers define advice functions which run before and after.
For a simple Emacs example: suppose after a user corrected a mis-spelled word using the Emacs ispell module, they wanted to re-spellcheck the entire buffer. ispell-word offers no such functionality, even if the corrected word appears frequently in the buffer. The user could track down the definition of ispell-word, copy it into their personal Emacs files, and add the desired functionality there, but this is tedious and, worse, brittle (the user's version is now out of sync with the core Emacs implementation, if it even works without further refactoring). What the user wants is quite simple — just to run another command any time ispell-word runs. Using advice, it can be done as simply as this:
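The original snippet is not preserved in this copy; a plausible sketch using Emacs Lisp's classic defadvice form (the advice name re-check-buffer is invented here, and flyspell-buffer stands in for whichever buffer-wide spellcheck command the user prefers):

```elisp
;; Run a whole-buffer spellcheck every time ispell-word finishes.
(defadvice ispell-word (after re-check-buffer activate)
  "Re-spellcheck the entire buffer after correcting a word."
  (flyspell-buffer))
```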
While this example is obviously trivial, the strength of advice, especially when compared to similar facilities such as Python decorators and Java annotations, lies in the fact that not only do the advised functions / methods not need to be designed to accept advice, but also the advice themselves need not be designed to be usable as advice - they're just normal functions. The availability of evaluation throughout the lifetime of a piece of code (cf. code staging) in Lisp allows advice to be inlined automatically into any other code in a variety of ways. Any piece of code can be advised to carry out any other computation before, after, around, or instead of its original definition.
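For contrast, the Python decorator facility mentioned above requires explicit wrapping or rebinding at the definition site; a minimal sketch (the with_logging name is invented here):

```python
import functools

def with_logging(fn):
    # decorator: before/after behaviour wrapped around an existing function
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        print(f"before {fn.__name__}")
        result = fn(*args, **kwargs)
        print(f"after {fn.__name__}")
        return result
    return wrapper

def greet():
    return "hi"

# unlike Lisp advice, the rebinding must be stated explicitly in user code
greet = with_logging(greet)
print(greet())  # prints the before/after lines, then hi
```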
=== Readability ===
Advice has the potential to introduce confusion, as a piece of advice applied to a function is not apparent to a user who tracks down the function's source definition to learn about it. In such cases, advice acts almost like a COMEFROM, a joke facility added to INTERCAL to spoof the spaghettification attendant to the extensive use of GOTOs. In practice, however, such issues rarely present themselves. Upstream developers and maintainers of Lisp packages and modules never use advice, since there is no advantage to be gained by advising functions when their original source definitions can be freely rewritten to include the desired features. Advice is only useful in that it enables downstream users to subsequently modify default behaviour in a way that does not require propagation of such modifications into the core implementation's source definition.
== Implementations ==
A form of advice was part of C with Classes in the late 1970s and early 1980s, namely functions called call and return defined in a class, which were called before (respectively, after) member functions of the class. However, these were dropped from C++.
Advice is part of the Common Lisp Object System (CLOS), as :before, :after, and :around methods, which are combined with the primary method under "standard method combination".
Common Lisp implementations provide advice functionality (in addition to the standard method combination for CLOS) as extensions. LispWorks supports advising functions, macros and CLOS methods.
EmacsLisp added advice-related code in version 19.28, 1994.
== History ==
The following is taken from a discussion at the mailing list aosd-discuss. Pascal Costanza contributed the following:
The term "advice" goes back to the term advising as introduced by Warren Teitelman in his PhD thesis in 1966. Here is a quote from Chapter 3 of his thesis:
Advising is the basic innovation in the model, and in the PILOT system. Advising consists of inserting new procedures at any or all of the entry or exit points to a particular procedure (or class of procedures). The procedures inserted are called "advice procedures" or simply "advice".
Since each piece of advice is itself a procedure, it has its own entries and exits. In particular, this means that the execution of advice can cause the procedure that it modifies to be bypassed completely, e.g., by specifying as an exit from the advice one of the exits from the original procedure; or the advice may change essential variables and continue with the computation so that the original procedure is executed, but with modified variables. Finally, the advice may not alter the execution or affect the original procedure at all, e.g., it may merely perform some additional computation such as printing a message or recording history. Since advice can be conditional, the decision as to what is to be done can depend on the results of the computation up to that point.
The principal advantage of advising is that the user need not be concerned about the details of the actual changes in his program, nor the internal representation of advice. He can treat the procedure to be advised as a unit, a single block, and make changes to it without concern for the particulars of this block. This may be contrasted with editing in which the programmer must be cognizant of the internal structure of the procedure.
"Advising" found its way into BBN Lisp and later into Xerox PARC's Interlisp.
It also found its way to Flavors, the first object-oriented extension to Lisp developed at MIT. They were subsumed under the notion of method combination.
Since method combination and macros are closely related, it's also interesting to note that the first macro system was described in 1963, three years before Warren Teitelman's PhD thesis.
== See also ==
Function decorator (w.r.t. Python)
Aspect-oriented software development#Advice bodies
== Notes ==
Gregor Kiczales comments the above as follows:
== References ==
== External links ==
Teitelman's PhD thesis, PILOT: A Step Toward Man-Computer Symbiosis (AITR-221)
Interlisp reference manual from 1974
"Origin of Advice"
In computing, reactive programming is a declarative programming paradigm concerned with data streams and the propagation of change. With this paradigm, it is possible to express static (e.g., arrays) or dynamic (e.g., event emitters) data streams with ease, and also communicate that an inferred dependency within the associated execution model exists, which facilitates the automatic propagation of the changed data flow.
For example, in an imperative programming setting, a := b + c would mean that a is being assigned the result of b + c at the instant the expression is evaluated, and later, the values of b and c can be changed with no effect on the value of a. On the other hand, in reactive programming, the value of a is automatically updated whenever the values of b or c change, without the program having to explicitly re-state the statement a := b + c to re-assign the value of a.
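The a := b + c behaviour can be sketched with a toy dependency-tracking cell. This is an illustrative Python sketch (Cell and formula are invented names, not a real FRP library):

```python
# Illustrative sketch of the reactive a := b + c example: a formula cell
# is re-computed whenever one of its source cells is assigned a new value.
class Cell:
    def __init__(self, value=None):
        self._value = value
        self._observers = []

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, new):
        self._value = new
        for update in self._observers:
            update()                 # propagate the change to dependents

def formula(fn, *sources):
    out = Cell()
    def update():
        out.value = fn(*(s.value for s in sources))
    for s in sources:
        s._observers.append(update)
    update()                         # compute the initial value
    return out

b, c = Cell(1), Cell(2)
a = formula(lambda x, y: x + y, b, c)   # a is kept equal to b + c
b.value = 10                            # a is re-computed automatically
```

After the last line, a.value is 12 without the program re-stating the assignment, mirroring the example above.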
Another example is a hardware description language such as Verilog, where reactive programming enables changes to be modeled as they propagate through circuits.
https://en.wikipedia.org/wiki/Reactive_programming
Reactive programming has been proposed as a way to simplify the creation of interactive user interfaces and near-real-time system animation.
For example, in a model–view–controller (MVC) architecture, reactive programming can facilitate changes in an underlying model being reflected automatically in an associated view.
== Approaches to creating reactive programming languages ==
Several popular approaches are employed in the creation of reactive programming languages. One approach is the specification of dedicated languages that are specific to various domain constraints. Such constraints usually are characterized by real-time, embedded computing or hardware description. Another approach involves the specification of general-purpose languages that include support for reactivity. Other approaches are articulated in the definition, and use of programming libraries, or embedded domain-specific languages, that enable reactivity alongside or on top of the programming language. Specification and use of these different approaches results in language capability trade-offs. In general, the more restricted a language is, the more its associated compilers and analysis tools are able to inform developers (e.g., in performing analysis for whether programs are able to execute in actual real-time). Functional trade-offs in specificity may result in deterioration of the general applicability of a language.
== Programming models and semantics ==
A variety of models and semantics govern reactive programming. We can loosely split them along the following dimensions:
Synchrony: synchronous versus asynchronous model of time
Determinism: deterministic versus non-deterministic evaluation process and results
Update process: callback versus dataflow versus actor
== Implementation techniques and challenges ==
=== Essence of implementations ===
Reactive programming language runtimes maintain a graph that identifies the dependencies among the involved reactive values. In such a graph, nodes represent the act of computing, and edges model dependency relationships. The runtime uses this graph to keep track of the various computations that must be executed anew once an involved input changes value.
==== Change propagation algorithms ====
The most common approaches to data propagation are:
Pull: The value consumer is in fact proactive, in that it regularly queries the observed source for values and reacts whenever a relevant value is available. This practice of regularly checking for events or value changes is commonly referred to as polling.
Push: The value consumer receives a value from the source whenever the value becomes available. These values are self-contained, e.g. they contain all the necessary information, and no further information needs to be queried by the consumer.
Push-pull: The value consumer receives a change notification, which is a short description of the change, e.g. "some value changed" – this is the push part. However, the notification does not contain all the necessary information (which means it does not contain the actual values), so the consumer needs to query the source for more information (the specific value) after it receives the notification – this is the pull part. This method is commonly used when there is a large volume of data that the consumers might be potentially interested in. So in order to reduce throughput and latency, only light-weight notifications are sent; and then those consumers which require more information will request that specific information. This approach also has the drawback that the source might be overwhelmed by many requests for additional information after a notification is sent.
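The push-pull model described above can be sketched in a few lines of Python; all names here are illustrative. The source pushes an empty "changed" notification, and the consumer then pulls the actual value:

```python
# Illustrative sketch of push-pull propagation: the notification carries
# no payload, so the consumer must query the source for the value.
class Source:
    def __init__(self):
        self._value = None
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def set(self, value):
        self._value = value
        for notify in self._subscribers:
            notify()                 # push: no payload, just "something changed"

    def get(self):
        return self._value           # pull: the consumer requests the value

received = []
src = Source()
src.subscribe(lambda: received.append(src.get()))  # pull on each notification
src.set(42)
```

A consumer not interested in a given change could simply ignore the notification and skip the pull, which is the bandwidth saving this model aims for.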
==== What to push? ====
At the implementation level, event reaction consists of propagating the information that characterizes the existence of a change across the graph. Consequently, computations that are affected by such a change become outdated and must be flagged for re-execution. The affected computations are usually characterized by the transitive closure of the change in its associated source (i.e. the full set of transitive dependencies a source affects). Change propagation may then lead to an update in the value of the graph's sinks.
Graph propagated information can consist of a node's complete state, i.e., the computation result of the involved node. In such cases, the node's previous output is then ignored. Another method involves delta propagation i.e. incremental change propagation. In this case, information is proliferated along a graph's edges, which consist only of deltas describing how the previous node was changed. This approach is especially
important when nodes hold large amounts of state data, which would otherwise be expensive to recompute from scratch.
Delta propagation is essentially an optimization that has been extensively studied via the discipline of incremental computing, whose approach requires runtime satisfaction involving the view-update problem. This problem is infamously characterized by the use of database entities, which are responsible for the maintenance of changing data views.
Another common optimization is employment of unary change accumulation and batch propagation. Such a solution can be faster because it reduces communication among involved nodes. Optimization strategies can then be employed that reason about the nature of the changes contained within, and make alterations accordingly (e.g. two changes in the batch can cancel each other, and thus simply be ignored). Yet another available approach is described as invalidity notification propagation. This approach causes nodes with invalid input to pull updates, thus resulting in the update of their own outputs.
There are two principal ways employed in the building of a dependency graph:
The graph of dependencies is maintained implicitly within an event loop. Registration of explicit callbacks then results in the creation of implicit dependencies. Therefore, control inversion, which is induced via callback, is left in place. However, making callbacks functional (i.e. returning state value instead of unit value) necessitates that such callbacks become compositional.
A graph of dependencies is program-specific and generated by a programmer. This facilitates an addressing of the callback's control inversion in two ways: either a graph is specified explicitly (typically using a domain-specific language (DSL), which may be embedded), or a graph is implicitly defined with expression and generation using an effective, archetypal
language.
=== Implementation challenges in reactive programming ===
==== Glitches ====
When propagating changes, it is possible to pick propagation orders such that the value of an expression is not a natural consequence of the source program. We can illustrate this easily with an example. Suppose seconds is a reactive value that changes every second to represent the current time (in seconds). Consider this expression:
t = seconds + 1
g = (t > seconds)
Because t should always be greater than seconds, this expression should always evaluate to a true value. Unfortunately, this can depend on the order of evaluation. When seconds changes, two expressions have to update: seconds + 1 and the conditional. If the first evaluates before the second, then this invariant will hold. If, however, the conditional updates first, using the old value of t and the new value of seconds, then the expression will evaluate to a false value. This is called a glitch.
Some reactive languages are glitch-free and prove this property. This is usually achieved by topologically sorting expressions and updating values in topological order. This can, however, have performance implications, such as delaying the delivery of values (due to the order of propagation). In some cases, therefore, reactive languages permit glitches, and developers must be aware of the possibility that values may temporarily fail to correspond to the program source, and that some expressions may evaluate multiple times (for instance, t > seconds may evaluate twice: once when the new value of seconds arrives, and once more when t updates).
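Glitch freedom via topological ordering can be sketched using Python's standard graphlib module; the seconds/t/g names mirror the example above, and the rest is illustrative:

```python
# Illustrative sketch of glitch-free propagation: nodes are re-evaluated
# in topological order, so t = seconds + 1 is always refreshed before
# the comparison g = (t > seconds) is re-checked.
from graphlib import TopologicalSorter

values = {"seconds": 0}
formulas = {
    "t": lambda v: v["seconds"] + 1,
    "g": lambda v: v["t"] > v["seconds"],
}
# Map each node to its predecessors; static_order() yields sources first.
deps = {"t": {"seconds"}, "g": {"t", "seconds"}}
order = list(TopologicalSorter(deps).static_order())

def set_seconds(s):
    values["seconds"] = s
    for node in order:
        if node in formulas:
            values[node] = formulas[node](values)

set_seconds(5)   # t becomes 6 before g is re-checked, so g stays True
```

Updating in an order that evaluated g before t would exhibit exactly the glitch described above.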
==== Cyclic dependencies ====
Topological sorting of dependencies depends on the dependency graph being a directed acyclic graph (DAG). In practice, a program may define a dependency graph that has cycles. Usually, reactive programming languages expect such cycles to be "broken" by placing some element along a "back edge" to permit reactive updating to terminate. Typically, languages provide an operator like delay that is used by the update mechanism for this purpose, since a delay implies that what follows must be evaluated in the "next time step" (allowing the current evaluation to terminate).
==== Interaction with mutable state ====
Reactive languages typically assume that their expressions are purely functional. This allows an update mechanism to choose different orders in which to perform updates, and leave the specific order unspecified (thereby enabling optimizations). When a reactive language is embedded in a programming language with state, however, it may be possible for programmers to perform mutable operations. How to make this interaction smooth remains an open problem.
In some cases, it is possible to have principled partial solutions. Two such solutions include:
A language might offer a notion of a "mutable cell". A mutable cell is one that the reactive update system is aware of, so that changes made to the cell propagate to the rest of the reactive program. This enables the non-reactive part of the program to perform a traditional mutation while enabling reactive code to be aware of and respond to this update, thus maintaining the consistency of the relationship between values in the program. An example of a reactive language that provides such a cell is FrTime.
Properly encapsulated object-oriented libraries offer an encapsulated notion of state. In principle, it is therefore possible for such a library to interact smoothly with the reactive portion of a language. For instance, callbacks can be installed in the getters of the object-oriented library to notify the reactive update engine about state changes, and changes in the reactive component can be pushed to the object-oriented library through getters. FrTime employs such a strategy.
==== Dynamic updating of the graph of dependencies ====
In some reactive languages, the graph of dependencies is static, i.e., the graph is fixed throughout the program's execution. In other languages, the graph can be dynamic, i.e., it can change as the program executes. For a simple example, consider this illustrative example (where seconds is a reactive value):
t =
if ((seconds mod 2) == 0):
seconds + 1
else:
seconds - 1
end
t + 1
Every second, the value of this expression changes to a different reactive expression, which t + 1 then depends on. Therefore, the graph of dependencies updates every second.
Permitting dynamic updating of dependencies provides significant expressive power (for instance, dynamic dependencies routinely occur in graphical user interface (GUI) programs). However, the reactive update engine must decide whether to reconstruct expressions each time, or to keep an expression's node constructed but inactive; in the latter case, it must ensure that such nodes do not participate in the computation when they are not supposed to be active.
== Concepts ==
=== Degrees of explicitness ===
Reactive programming languages can range from very explicit ones where data flows are set up by using arrows, to implicit where the data flows are derived from language constructs that look similar to those of imperative or functional programming. For example, in an implicitly lifted functional reactive programming (FRP), a function call might implicitly cause a node in a data flow graph to be constructed. Reactive programming libraries for dynamic languages (such as the Lisp "Cells" and Python "Trellis" libraries) can construct a dependency graph from runtime analysis of the values read during a function's execution, allowing data flow specifications to be both implicit and dynamic.
Sometimes the term reactive programming refers to the architectural level of software engineering, where individual nodes in the data flow graph are ordinary programs that communicate with each other.
=== Static or dynamic ===
Reactive programming can be purely static where the data flows are set up statically, or be dynamic where the data flows can change during the execution of a program.
The use of data switches in the data flow graph could to some extent make a static data flow graph appear as dynamic, and blur the distinction slightly. True dynamic reactive programming however could use imperative programming to reconstruct the data flow graph.
=== Higher-order reactive programming ===
Reactive programming could be said to be of higher order if it supports the idea that data flows could be used to construct other data flows. That is, the resulting value out of a data flow is another data flow graph that is executed using the same evaluation model as the first.
=== Data flow differentiation ===
Ideally all data changes are propagated instantly, but this cannot be assured in practice. Instead, it might be necessary to give different parts of the data flow graph different evaluation priorities. This can be called differentiated reactive programming.
For example, in a word processor, the marking of spelling errors need not be totally in sync with the inserting of characters. Here differentiated reactive programming could potentially be used to give the spell checker lower priority, allowing it to be delayed while keeping other data-flows instantaneous.
However, such differentiation introduces additional design complexity. For example, deciding how to define the different data flow areas, and how to handle event passing between different data flow areas.
=== Evaluation models of reactive programming ===
Evaluation of reactive programs is not necessarily based on how stack-based programming languages are evaluated. Instead, when
some data is changed, the change is propagated to all data that is derived partially or completely from the data that was changed. This change propagation could be achieved in a number of ways, where perhaps the most natural way is an invalidate/lazy-revalidate scheme.
It could be problematic simply to naively propagate a change using a stack, because of potential exponential update complexity if the data structure has a certain shape. One such shape can be described as "repeated diamonds shape", and has the following structure:
Aₙ → Bₙ → Aₙ₊₁ and Aₙ → Cₙ → Aₙ₊₁, where n = 1, 2, … This problem could be overcome by propagating invalidation only when some data is not already invalidated, and later re-validating the data when needed using lazy evaluation.
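An invalidate/lazy-revalidate scheme can be sketched as follows (illustrative Python; the class names are invented). Invalidation stops at nodes that are already dirty, which bounds propagation on diamond-shaped graphs, and values are recomputed only when actually read:

```python
# Illustrative invalidate/lazy-revalidate sketch: eager invalidation,
# lazy recomputation on read.
class Input:
    def __init__(self, value):
        self._value, self.dependents = value, []

    def set(self, value):
        self._value = value
        for d in self.dependents:
            d.invalidate()

    def get(self):
        return self._value

class Lazy:
    def __init__(self, fn, *sources):
        self.fn, self.sources = fn, sources
        self.dirty, self._cached = True, None
        self.dependents = []
        for s in sources:
            s.dependents.append(self)

    def invalidate(self):
        if not self.dirty:           # already invalid: stop propagating
            self.dirty = True
            for d in self.dependents:
                d.invalidate()

    def get(self):
        if self.dirty:               # lazy revalidation on read
            self._cached = self.fn(*(s.get() for s in self.sources))
            self.dirty = False
        return self._cached

x = Input(2)
double = Lazy(lambda v: v * 2, x)
quad = Lazy(lambda v: v * 2, double)
x.set(5)       # marks the chain dirty; nothing is recomputed yet
```

Because invalidation never revisits an already-dirty node, each node is touched at most once per update wave, avoiding the exponential blow-up on repeated-diamond graphs.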
One inherent problem for reactive programming is that most computations that would be evaluated and forgotten in a normal programming language need to be represented in memory as data structures.
This could potentially make reactive programming highly memory consuming. However, research on what is called lowering could potentially overcome this problem.
On the other side, reactive programming is a form of what could be described as "explicit parallelism", and could therefore be beneficial for utilizing the power of parallel hardware.
==== Similarities with observer pattern ====
Reactive programming has principal similarities with the observer pattern commonly used in object-oriented programming. However, integrating the data flow concepts into the programming language would make it easier to express them and could therefore increase the granularity of the data flow graph. For example, the observer pattern commonly describes data-flows between whole objects/classes, whereas object-oriented reactive programming could target the members of objects/classes.
== Approaches ==
=== Imperative ===
It is possible to fuse reactive programming with ordinary imperative programming. In such a paradigm, imperative programs operate upon reactive data structures. Such a set-up is analogous to imperative constraint programming; however, while imperative constraint programming manages bidirectional data-flow constraints, imperative reactive programming manages one-way data-flow constraints. One reference implementation is the proposed Quantum runtime extension to JavaScript.
=== Object-oriented ===
Object-oriented reactive programming (OORP) is a combination of object-oriented programming and reactive programming. Perhaps the most natural way to make such a combination is as follows: instead of methods and fields, objects have reactions that automatically re-evaluate when the other reactions they depend on have been modified.
If an OORP language maintains its imperative methods, it would also fall under the category of imperative reactive programming.
=== Functional ===
Functional reactive programming (FRP) is a programming paradigm for reactive programming on functional programming.
=== Actor based ===
Actors have been proposed to design reactive systems, often in combination with Functional reactive programming (FRP) and Reactive Streams to develop distributed reactive systems.
=== Rule based ===
A relatively new category of programming languages uses constraints (rules) as the main programming concept. Such a program consists of reactions to events, which keep all constraints satisfied. Not only does this facilitate event-based reactions, but it makes reactive programs instrumental to the correctness of software. An example of a rule-based reactive programming language is Ampersand, which is based on relation algebra.
== Implementations ==
ReactiveX, an API for implementing reactive programming with streams, observables and operators, with multiple language implementations including RxJS, RxJava, Rx.NET, RxPY and RxSwift.
Elm, a reactive composition of web user interfaces.
Reactive Streams, a JVM standard for asynchronous stream processing with non-blocking backpressure
ObservableComputations, a cross-platform .NET implementation.
Svelte, brings reactivity in the form of a variant JavaScript syntax that looks like JavaScript but is naturally reactive where JavaScript normally isn't.
Solid.js brings reactivity to JavaScript without changing JavaScript syntax semantics, along with reactive JSX templating.
Quantum JS, a runtime extension to JavaScript that brings imperative reactive programming to the language, creating a whole new category in the reactivity spectrum.
Rimmel.js, a modern functional-reactive JavaScript UI library designed with RxJS streams in mind.
== See also ==
Observable (Computing), observable in reactive programming.
== References ==
== External links ==
A survey on reactive programming A 2013 paper by E. Bainomugisha, A. Lombide Carreton, T. Van Cutsem, S. Mostinckx, and W. De Meuter that surveys and provides a taxonomy of existing reactive programming approaches.
MIMOSA Project of INRIA - ENSMP, a general site about reactive programming.
Deprecating the Observer Pattern A 2010 paper by Ingo Maier, Tiark Rompf and Martin Odersky outlining a reactive programming framework for the Scala programming language.
Deprecating the Observer Pattern with Scala.React, a 2012 paper by Ingo Maier.
RxJS, the Reactive Extensions library for "composing asynchronous [...] programs using observable sequences"
Tackling the Awkward Squad for Reactive Programming: The Actor-Reactor Model A 2020 paper that proposes a model of "actors" and "reactors" to avoid the issues that arise when combining imperative code with reactive code.
glob() is a libc function for globbing, which is the archetypal use of pattern matching against the names in a filesystem directory such that a name pattern is expanded into a list of names matching that pattern. Although globbing may now refer to glob()-style pattern matching of any string, not just expansion into a list of filesystem names, the original meaning of the term is still widespread.
The glob() function and the underlying gmatch() function originated at Bell Labs in the early 1970s alongside the original AT&T UNIX itself and had a formative influence on the syntax of UNIX command line utilities and therefore also on the present-day reimplementations thereof.
In their original form, glob() and gmatch() derived from code used in Bell Labs in-house utilities that developed alongside the original Unix in the early 1970s. Among those utilities were also two command line tools called glob and find; each could be used to pass a list of matching filenames to other command line
tools, and they shared the backend code subsequently formalized as glob() and gmatch(). Shell-statement-level globbing by default became commonplace following the integration of globbing functionality as a builtin into the 7th edition of the Unix shell in 1978. The Unix shell's -f option to disable globbing — i.e. revert to literal "file" mode — appeared in the same version.
https://en.wikipedia.org/wiki/Glob_(programming)
The glob pattern quantifiers now standardized by POSIX.2 (IEEE Std 1003.2) fall into two groups, and can be applied to any character sequence ("string"), not just to directory entries.
"Metacharacters" (also called "Wildcards"):
? (not in brackets) matches any character exactly once.
* (not in brackets) matches a string of zero or more characters.
"Ranges/sets":
[...], where the first character within the brackets is not '!', matches any single character among the characters specified in the brackets. If the first character within brackets is '!', then [!...] matches any single character that is not among the characters specified in the brackets.
The characters in the brackets may be a list ([abc]) or a range ([a-c]) or denote a character class (like [[:space:]], where the inner brackets are part of the class name). POSIX does not mandate multi-range ([a-c0-3]) support, which derives originally from regular expressions.
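These quantifiers behave the same way in Python's standard fnmatch module, which implements this POSIX.2-style matching against arbitrary strings rather than only directory entries:

```python
# The POSIX glob quantifiers, exercised via Python's fnmatch module.
from fnmatch import fnmatchcase

assert fnmatchcase("report.txt", "*.txt")    # * : zero or more characters
assert fnmatchcase("file7", "file?")         # ? : exactly one character
assert fnmatchcase("b.c", "[abc].c")         # set: one of a, b, c
assert fnmatchcase("d.c", "[!abc].c")        # [!...] : negated set
assert not fnmatchcase("ab.c", "[abc].c")    # a set matches one character only
```

fnmatchcase is used here rather than fnmatch to avoid the platform-dependent case folding the latter applies.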
As reimplementations of Bell Labs' UNIX proliferated, so did reimplementations of its libc and shell, and with them glob() and globbing. Today, glob() and globbing are standardized by the POSIX.2 specification and are an integral part of every Unix-like libc ecosystem and shell, including the AT&T Bourne shell-compatible Korn shell (ksh), Z shell (zsh), Almquist shell (ash) and its derivatives and reimplementations such as busybox, toybox, GNU bash, and Debian dash.
== Origin ==
The glob command, short for global, originates in the earliest versions of Bell Labs' Unix. The command
interpreters of the early versions of Unix (1st through 6th Editions, 1969–1975) relied on a separate program to expand wildcard characters in unquoted arguments to a command: /etc/glob. That program performed the expansion and supplied the expanded list of file paths to the command for execution.
Glob was originally written in the B programming language. It was the first piece of mainline Unix software to be developed in a high-level programming language. Later, this functionality was provided as a C library function, glob(), used by programs such as the shell. It is usually defined based on a function named fnmatch(), which tests whether a string matches a given pattern; the program using this function can then iterate through a series of strings (usually filenames) to determine which ones match. Both functions are a part of POSIX: the functions defined in POSIX.1 since 2001, and the syntax defined in POSIX.2. The idea of defining a separate match function started with wildmat (wildcard match), a simple library to match strings against Bourne Shell globs.
Traditionally, globs do not match hidden files in the form of Unix dotfiles; to match them the pattern must explicitly start with a dot. For example, * matches all visible files while .* matches all hidden files.
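Python's glob module follows the same dotfile convention. A small sketch using a temporary directory (the file names are illustrative):

```python
# Demonstrates that * skips dotfiles while .* matches them explicitly.
import glob
import os
import tempfile

def match_visible_and_hidden(tmp):
    for name in ("visible.txt", ".hidden"):
        open(os.path.join(tmp, name), "w").close()
    star = [os.path.basename(p) for p in glob.glob(os.path.join(tmp, "*"))]
    dotstar = [os.path.basename(p) for p in glob.glob(os.path.join(tmp, ".*"))]
    return star, dotstar

with tempfile.TemporaryDirectory() as d:
    star, dotstar = match_visible_and_hidden(d)
    # star    -> ["visible.txt"]  (the hidden file is skipped by *)
    # dotstar -> [".hidden"]      (dotfiles must be matched explicitly)
```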
== Syntax ==
The most common wildcards are *, ?, and […].
Normally, the path separator character (/ on Linux/Unix, macOS, etc. or \ on Windows) will never be matched. Some shells, such as the Unix shell, have functionality allowing users to circumvent this.
=== Unix-like ===
On Unix-like systems, * and ? are defined as above, while […] has two additional meanings:
The ranges are also allowed to include pre-defined character classes, equivalence classes for accented characters, and collation symbols for hard-to-type characters. They are defined to match up with the brackets in POSIX regular expressions.
Unix globbing is handled by the shell per POSIX tradition. Globbing is provided on filenames at the command line and in shell scripts. The POSIX-mandated case statement in shells provides pattern-matching using glob patterns.
Some shells (such as the C shell and Bash) support additional syntax known as alternation or brace expansion. Because it is not part of the glob syntax, it is not provided in case. It is only expanded on the command line before globbing.
The Bash shell also supports the following extensions:
Extended globbing (extglob): allows other pattern-matching operators to be used to match multiple occurrences of a pattern enclosed in parentheses, essentially providing the missing Kleene star and alternation for describing regular languages. It can be enabled by setting the extglob shell option. This option came from ksh93. The GNU fnmatch and glob have an identical extension.
globstar: allows ** on its own as a name component to recursively match any number of layers of non-hidden directories. Also supported by JavaScript libraries and Python's glob.
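In Python's glob, globstar-style recursion is available via the ** pattern with recursive=True. An illustrative sketch with a temporary directory tree:

```python
# "**" with recursive=True matches any number of directory levels,
# including zero (directory and file names here are illustrative).
import glob
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    os.makedirs(os.path.join(d, "a", "b"))
    open(os.path.join(d, "a", "b", "x.txt"), "w").close()
    hits = [os.path.basename(p)
            for p in glob.glob(os.path.join(d, "**", "*.txt"), recursive=True)]
```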
=== Windows and DOS ===
The original DOS was a clone of CP/M designed to work on Intel's 8088 and 8086 processors. Windows shells, following DOS, do not traditionally perform any glob expansion in arguments passed to external programs. Shells may use an expansion for their own builtin commands:
Windows PowerShell has all the common syntax defined as stated above without any additions.
COMMAND.COM and cmd.exe have most of the common syntax with some limitations: there is no […], and for COMMAND.COM the * may only appear at the end of the pattern. It cannot appear in the middle of a pattern, except immediately preceding the filename extension separator dot.
Windows and DOS programs receive a long command-line string instead of argv-style parameters, and it is their responsibility to perform any splitting, quoting, or glob expansion. There is technically no fixed way of describing wildcards in programs since they are free to do what they wish. Two common glob expanders include:
The Microsoft C Runtime (msvcrt) command-line expander, which only supports ? and *. Both ReactOS (crt/misc/getargs.c) and Wine (msvcrt/data.c) contain a compatible open-source implementation of __getmainargs, the function operating under the hood, in their core CRT.
The Cygwin and MSYS dcrt0.cc command-line expander, which uses the unix-style glob() routine under-the-hood, after splitting the arguments.
Most other parts of Windows, including the Indexing Service, use the MS-DOS style of wildcards found in CMD. A relic of the 8.3 filename age, this syntax pays special attention to dots in the pattern and the text (filename). Internally this is done using three extra wildcard characters, <>". On the Windows API end, the glob() equivalent is FindFirstFile, and fnmatch() corresponds to its underlying RtlIsNameInExpression. (Another fnmatch analogue is PathMatchSpec.) Both open-source msvcrt expanders use FindFirstFile, so 8.3 filename quirks will also apply in them.
=== SQL ===
The SQL LIKE operator has equivalents of ? and * but not […].
Standard SQL uses a glob-like syntax for simple string matching in its LIKE operator, although the term "glob" is not generally used in the SQL community. The percent sign (%) matches zero or more characters and the underscore (_) matches exactly one.
Many implementations of SQL have extended the LIKE operator to allow a richer pattern-matching language, incorporating character ranges ([…]), their negation, and elements of regular expressions.
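The LIKE semantics above can be demonstrated with SQLite from Python's standard sqlite3 module; a minimal sketch (the table and data are invented for illustration):

```python
import sqlite3

# In-memory database with a hypothetical table of filenames.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE files (name TEXT)")
con.executemany("INSERT INTO files VALUES (?)",
                [("S.DOC",), ("SA.DOC",), ("POST.DOC",)])

# % matches zero or more characters (like *); _ matches exactly one (like ?).
rows = con.execute(
    "SELECT name FROM files WHERE name LIKE 'S%.DOC' ORDER BY name"
).fetchall()
print([r[0] for r in rows])  # S.DOC and SA.DOC match; POST.DOC does not
```

Note that SQLite's LIKE is case-insensitive for ASCII characters by default, which differs from shell glob matching.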
== Compared to regular expressions ==
Globs do not include syntax for the Kleene star which allows multiple repetitions of the preceding part of the expression; thus they are not considered regular expressions, which can describe the full set of regular languages over any given finite alphabet.
Globs attempt to match the entire string (for example, S*.DOC matches S.DOC and SA.DOC, but not POST.DOC or SURREY.DOCKS), whereas, depending on implementation details, regular expressions may match a substring.
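The whole-string versus substring distinction can be shown with Python's fnmatch module against its re module, using the article's example pattern:

```python
import fnmatch
import re

# fnmatch anchors the glob pattern to the entire string.
print(fnmatch.fnmatch("SA.DOC", "S*.DOC"))    # True
print(fnmatch.fnmatch("POST.DOC", "S*.DOC"))  # False

# re.search, by contrast, is satisfied by a matching substring:
# here "ST.DOC" inside "POST.DOC" matches the pattern.
print(bool(re.search(r"S.*\.DOC", "POST.DOC")))  # True
```

To get glob-like anchoring from a regex engine, the pattern must be explicitly anchored (e.g. with re.fullmatch or \A…\Z).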
=== Implementing as regular expressions ===
The original Mozilla proxy auto-config implementation, which provides a glob-matching function on strings, uses a replace-as-RegExp implementation as above. The bracket syntax happens to be covered by regex in such an example.
Python's fnmatch uses a more elaborate procedure to transform the pattern into a regular expression.
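That transformation is exposed directly as fnmatch.translate, which returns the regular expression string the module uses internally (its exact form varies by Python version):

```python
import fnmatch
import re

# Convert a glob pattern into an equivalent, fully anchored regex.
rx = fnmatch.translate("S*.DOC")
print(rx)  # the generated regex, e.g. something like (?s:S.*\.DOC)\Z

print(bool(re.match(rx, "SA.DOC")))    # True
print(bool(re.match(rx, "POST.DOC")))  # False: anchoring prevents a substring match
```

Because the result is anchored at the end, the regex reproduces glob's whole-string matching rather than regex's usual substring matching.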
== Other implementations ==
Beyond their uses in shells, glob patterns also find use in a variety of programming languages, mainly to process human input. A glob-style interface for returning files or an fnmatch-style interface for matching strings are found in the following programming languages:
C and C++ do not have built-in support for glob patterns in the ISO-defined standard libraries; however, on Unix-like systems, C and C++ programs may include <glob.h> from the C POSIX library to use glob().
C++ itself does not have direct support for glob patterns; however, they may be approximated using the <filesystem> and <regex> headers, with std::filesystem::directory_iterator() and std::regex_match().
C# has multiple libraries available through NuGet such as Glob or DotNet.Glob.
D has a globMatch function in the std.path module.
JavaScript has a library called minimatch which is used internally by npm, and micromatch, a purportedly more optimized, accurate and safer globbing implementation used by Babel and yarn.
Go has a Glob function in the filepath package.
Java has a Files class in the package java.nio.file, containing methods that can operate on glob patterns.
Haskell has a Glob package with the main module System.FilePath.Glob. The pattern syntax is based on a subset of Zsh's. It tries to optimize the given pattern and should be noticeably faster than a naïve character-by-character matcher.
Perl has both a glob function (as discussed in Larry Wall's book Programming Perl) and a Glob extension which mimics the BSD glob routine. Perl's angle brackets can be used to glob as well: <*.log>.
PHP has a glob function.
Python has a glob module in the standard library which performs wildcard pattern matching on filenames, and an fnmatch module with functions for matching strings or filtering lists based on these same wildcard patterns. Guido van Rossum, author of the Python programming language, wrote and contributed a glob routine to BSD Unix in 1986. There were previous implementations of glob, e.g., in the ex and ftp programs in previous releases of BSD.
Ruby has a glob method for the Dir class which performs wildcard pattern matching on filenames. Several libraries such as Rant and Rake provide a FileList class which has a glob method or use the method FileList.[] identically.
Rust has multiple libraries that can match glob patterns, the most popular of these being the glob crate.
SQLite has a GLOB function.
Tcl contains a globbing facility.
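The SQLite GLOB function mentioned in the list above is easy to try from Python's sqlite3 module; a minimal sketch:

```python
import sqlite3

# SQLite's GLOB operator uses Unix-style wildcards (* and ?)
# and, unlike LIKE, is case-sensitive.
con = sqlite3.connect(":memory:")
match = con.execute("SELECT 'SA.DOC' GLOB 'S*.DOC'").fetchone()[0]
miss = con.execute("SELECT 'sa.doc' GLOB 'S*.DOC'").fetchone()[0]
print(match, miss)  # 1 0
```

The contrast with LIKE (case-insensitive, using % and _) makes GLOB the closer analogue to shell filename matching.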
== See also ==
Regular expression
Wildcard character
Matching wildcards
== References ==