$$
\begin{aligned}
\underset{x,y\in \mathbb{Z}}{\text{maximize}}\quad & y \\
\text{subject to}\quad & -x+y\leq 1 \\
& 3x+2y\leq 12 \\
& 2x+3y\leq 12 \\
& x,y\geq 0
\end{aligned}
$$
The feasible integer points are shown in red, and the red dashed lines indicate their convex hull, which is the smallest convex polyhedron that contains all of these points. The blue lines together with the coordinate axes define the polyhedron of the LP relaxation, which is given by the inequalities without the integrality constraint. The goal of the optimization is to move the black dashed line as far upward as possible while still touching the polyhedron. The optimal solutions of the integer problem are the points $(1,2)$ and $(2,2)$, which both have an objective value of 2. The unique optimum of the relaxation is $(1.8,2.8)$, with an objective value of 2.8. If the solution of the relaxation is rounded to the nearest integers, it is not feasible for the ILP.
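Because the feasible region here is small, the claim can be verified by brute force. The sketch below (plain Python, written for this example; not part of the original article) enumerates the feasible integer points and recovers the two optimal solutions:

def feasible(x, y):
    # the example's constraints: -x + y <= 1, 3x + 2y <= 12, 2x + 3y <= 12, x, y >= 0
    return -x + y <= 1 and 3*x + 2*y <= 12 and 2*x + 3*y <= 12 and x >= 0 and y >= 0

# The constraints imply 0 <= x <= 4 and 0 <= y <= 4, so a small grid suffices.
points = [(x, y) for x in range(5) for y in range(5) if feasible(x, y)]
best_y = max(y for _, y in points)
print([p for p in points if p[1] == best_y])  # [(1, 2), (2, 2)], both with objective value 2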
== Proof of NP-hardness ==
The following is a reduction from minimum vertex cover to integer programming that will serve as the proof of NP-hardness.
Let $G=(V,E)$ be an undirected graph. Define a linear program as follows:
$$
\begin{aligned}
\min \sum _{v\in V}y_{v} \\
y_{v}+y_{u} &\geq 1 && \forall u,v\in E \\
y_{v} &\in \mathbb{Z}^{+} && \forall v\in V
\end{aligned}
$$

Given that the constraints limit $y_{v}$
to either 0 or 1, any feasible solution to the integer program is a subset of vertices. The first constraint implies that at least one endpoint of every edge is included in this subset. Therefore, the solution describes a vertex cover. Additionally, given some vertex cover C, $y_{v}$ can be set to 1 for any $v\in C$ and to 0 for any $v\notin C$, thus giving a feasible solution to the integer program. Thus we can conclude that if we minimize the sum of $y_{v}$, we have also found the minimum vertex cover.
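As a sanity check on this reduction, the sketch below (plain Python on a hypothetical toy graph; illustrative, not part of the proof) brute-forces the 0/1 assignments and confirms that minimizing the sum of the y variables yields a minimum vertex cover:

from itertools import product

V = [1, 2, 3, 4]                     # a path graph 1-2-3-4
E = [(1, 2), (2, 3), (3, 4)]

def satisfies_constraints(y):        # y_u + y_v >= 1 for every edge (u, v)
    return all(y[u] + y[v] >= 1 for (u, v) in E)

covers = [dict(zip(V, bits)) for bits in product([0, 1], repeat=len(V))
          if satisfies_constraints(dict(zip(V, bits)))]
best = min(covers, key=lambda y: sum(y.values()))
print(sum(best.values()), [v for v in V if best[v]])  # 2 [2, 4] - a minimum vertex cover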
== Variants ==
Mixed-integer linear programming (MILP) involves problems in which only some of the variables, $x_{i}$, are constrained to be integers, while other variables are allowed to be non-integers.
Zero–one linear programming (or binary integer programming) involves problems in which the variables are restricted to be either 0 or 1. Any bounded integer variable can be expressed as a combination of binary variables. For example, given an integer variable $0\leq x\leq U$, the variable can be expressed using $\lfloor \log _{2}U\rfloor +1$ binary variables:

$$x=x_{1}+2x_{2}+4x_{3}+\cdots +2^{\lfloor \log _{2}U\rfloor }x_{\lfloor \log _{2}U\rfloor +1}.$$
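A short sketch of this encoding (plain Python, illustrative only): it splits a bounded integer into the $\lfloor \log _{2}U\rfloor +1$ binary digits used above and reassembles it.

import math

def to_binary_vars(x, U):
    # number of binary variables needed for 0 <= x <= U
    k = math.floor(math.log2(U)) + 1
    return [(x >> i) & 1 for i in range(k)]   # x_1, x_2, ..., least significant first

def from_binary_vars(bits):
    # x = x_1 + 2 x_2 + 4 x_3 + ...
    return sum(b << i for i, b in enumerate(bits))

bits = to_binary_vars(13, 20)                 # U = 20 needs floor(log2 20) + 1 = 5 bits
print(bits, from_binary_vars(bits))           # [1, 0, 1, 1, 0] 13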
== Applications ==
There are two main reasons for using integer variables when modeling problems as a linear program:
The integer variables represent quantities that can only be integer. For example, it is not possible to build 3.7 cars.
The integer variables represent decisions (e.g. whether to include an edge in a graph) and so should only take on the value 0 or 1.
These considerations occur frequently in practice and so integer linear programming can be used in many application areas, some of which are briefly described below.
=== Production planning ===
Mixed-integer programming has many applications in industrial production, including job-shop modelling. One important example arises in agricultural production planning and involves determining the production yield for several crops that can share resources (e.g. land, labor, capital, seeds, and fertilizer). A possible objective is to maximize the total production, without exceeding the available resources. In some cases, this can be expressed in terms of a linear program, but the variables must be constrained to be integer.
=== Scheduling ===
These problems involve service and vehicle scheduling in transportation networks. For example, a problem may involve assigning buses or subways to individual routes so that a timetable can be met, and also to equip them with drivers. Here binary decision variables indicate whether a bus or subway is assigned to a route and whether a driver is assigned to a particular train or subway. The zero–one programming technique has been successfully applied to solve a project selection problem in which projects are mutually exclusive and/or technologically interdependent.
=== Territorial partitioning ===
Territorial partitioning or districting problems consist of partitioning a geographical region into districts in order to plan some operations while considering different criteria or constraints. Some requirements for this problem are: contiguity, compactness, balance or equity, respect of natural boundaries, and socio-economic homogeneity. Some applications for this type of problem include political districting, school districting, health services districting, and waste management districting.
=== Telecommunications networks ===
The goal of these problems is to design a network of lines to install so that a predefined set of communication requirements is met and the total cost of the network is minimal. This requires optimizing both the topology of the network and the capacities of the various lines. In many cases, the capacities are constrained to be integer quantities. Usually there are, depending on the technology used, additional restrictions that can be modeled as linear inequalities with integer or binary variables.
=== Cellular networks ===
The task of frequency planning in GSM mobile networks involves distributing available frequencies across the antennas so that users can be served and interference is minimized between the antennas. This problem can be formulated as an integer linear program in which binary variables indicate whether a frequency is assigned to an antenna.
=== Other applications ===
Cash flow matching
Energy system optimization
UAV guidance
Transit map layout
== Algorithms ==
The naive way to solve an ILP is to simply remove the constraint that x is integer, solve the corresponding LP (called the LP relaxation of the ILP), and then round the entries of the solution to the LP relaxation. But, not only may this solution not be optimal, it may not even be feasible; that is, it may violate some constraint.
=== Using total unimodularity ===
While in general the solution to the LP relaxation is not guaranteed to be integral, if the ILP has the form
$$\max \mathbf{c}^{\mathrm{T}}\mathbf{x}$$

such that

$$A\mathbf{x}=\mathbf{b},$$

where $A$ and $\mathbf{b}$ have all integer entries and $A$ is totally unimodular, then every basic feasible solution is integral. Consequently, the solution returned by the simplex algorithm is guaranteed to be integral. To show that every basic feasible solution is integral, let $\mathbf{x}$ be an arbitrary basic feasible solution. Since $\mathbf{x}$ is feasible, we know that $A\mathbf{x}=\mathbf{b}$. Let $\mathbf{x}_{0}=[x_{n_{1}},x_{n_{2}},\cdots ,x_{n_{j}}]$ be the elements corresponding to the basis columns for the basic solution $\mathbf{x}$. By definition of a basis, there is some square submatrix $B$ of $A$ with linearly independent columns such that $B\mathbf{x}_{0}=\mathbf{b}$.

Since the columns of $B$ are linearly independent and $B$ is square, $B$ is nonsingular, and therefore by assumption, $B$ is unimodular and so $\det(B)=\pm 1$. Also, since $B$ is nonsingular, it is invertible and therefore $\mathbf{x}_{0}=B^{-1}\mathbf{b}$. By definition, $B^{-1}={\frac {B^{\mathrm{adj}}}{\det(B)}}=\pm B^{\mathrm{adj}}$. Here $B^{\mathrm{adj}}$ denotes the adjugate of $B$ and is integral because $B$ is integral. Therefore,

$$
\begin{aligned}
&\Rightarrow B^{-1}=\pm B^{\mathrm{adj}}\text{ is integral} \\
&\Rightarrow \mathbf{x}_{0}=B^{-1}\mathbf{b}\text{ is integral} \\
&\Rightarrow \text{every basic feasible solution is integral.}
\end{aligned}
$$

Thus, if the matrix $A$ of an ILP is totally unimodular, rather than use an ILP algorithm, the simplex method can be used to solve the LP relaxation, and the solution will be integer.
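For small matrices, total unimodularity can be checked directly from the definition. The sketch below (plain Python, exact integer arithmetic; exponential in the matrix size, so for illustration only) tests every square submatrix:

from itertools import combinations

def det(M):
    # exact determinant of a small integer matrix by cofactor expansion
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j+1:] for row in M[1:]])
               for j in range(len(M)))

def is_totally_unimodular(A):
    m, n = len(A), len(A[0])
    for k in range(1, min(m, n) + 1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                if det([[A[r][c] for c in cols] for r in rows]) not in (-1, 0, 1):
                    return False
    return True

# The incidence matrix of a bipartite graph is a classic totally unimodular matrix.
A = [[1, 1, 0, 0],
     [0, 0, 1, 1],
     [1, 0, 1, 0],
     [0, 1, 0, 1]]
print(is_totally_unimodular(A))  # True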
=== Exact algorithms ===
When the matrix $A$ is not totally unimodular, there are a variety of algorithms that can be used to solve integer linear programs exactly. One class of algorithms is cutting plane methods, which work by solving the LP relaxation and then adding linear constraints that drive the solution towards being integer without excluding any integer feasible points.
Another class of algorithms are variants of the branch and bound method. For example, the branch and cut method combines both branch and bound and cutting plane methods. Branch and bound algorithms have a number of advantages over algorithms that only use cutting planes. One advantage is that the algorithms can be terminated early: as long as at least one integral solution has been found, a feasible, although not necessarily optimal, solution can be returned. Further, the solutions of the LP relaxations can be used to provide a worst-case estimate of how far from optimality the returned solution is. Finally, branch and bound methods can be used to return multiple optimal solutions.
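The following sketch illustrates the branch and bound idea (Python, assuming SciPy's scipy.optimize.linprog is available for the LP relaxations; a minimal teaching version with no cutting planes or clever node selection, not production code):

import math
from scipy.optimize import linprog

def branch_and_bound(c, A_ub, b_ub, bounds):
    # Maximize c^T x subject to A_ub x <= b_ub, within bounds, with x integer.
    best_val, best_x = -math.inf, None
    stack = [bounds]
    while stack:
        node = stack.pop()
        res = linprog([-ci for ci in c], A_ub=A_ub, b_ub=b_ub, bounds=node)
        if not res.success or -res.fun <= best_val:
            continue                        # infeasible node, or bound cannot beat incumbent
        frac = [(i, xi) for i, xi in enumerate(res.x) if abs(xi - round(xi)) > 1e-6]
        if not frac:                        # relaxation solution is already integral
            best_val, best_x = -res.fun, [round(xi) for xi in res.x]
            continue
        i, xi = frac[0]                     # branch on the first fractional variable
        lo, hi = node[i]
        left, right = list(node), list(node)
        left[i] = (lo, math.floor(xi))      # child with x_i <= floor(xi)
        right[i] = (math.ceil(xi), hi)      # child with x_i >= ceil(xi)
        stack += [left, right]
    return best_val, best_x

# The example ILP from the beginning of this article:
print(branch_and_bound(c=[0, 1], A_ub=[[-1, 1], [3, 2], [2, 3]],
                       b_ub=[1, 12, 12], bounds=[(0, None), (0, None)]))
# objective 2, with x = [2, 2] or the alternative optimum [1, 2]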
=== Exact algorithms for a small number of variables ===
Suppose $A$ is an m-by-n integer matrix and $\mathbf{b}$ is an m-by-1 integer vector. We focus on the feasibility problem, which is to decide whether there exists an n-by-1 vector $\mathbf{x}$ satisfying $A\mathbf{x}\leq \mathbf{b}$.
Let $V$ be the maximum absolute value of the coefficients in $A$ and $\mathbf{b}$. If $n$ (the number of variables) is a fixed constant, then the feasibility problem can be solved in time polynomial in $m$ and $\log V$. This is trivial for the case $n=1$. The case $n=2$ was solved in 1981 by Herbert Scarf. The general case was solved in 1983 by Hendrik Lenstra, combining ideas by László Lovász and Peter van Emde Boas. Doignon's theorem asserts that an integer program is feasible whenever every subset of $2^{n}$ constraints is feasible; a method combining this result with algorithms for LP-type problems can be used to solve integer programs in time that is linear in $m$ and fixed-parameter tractable (FPT) in $n$, but possibly doubly exponential in $n$, with no dependence on $V$.
In the special case of 0-1 ILP, Lenstra's algorithm is equivalent to complete enumeration: the number of all possible solutions is fixed ($2^{n}$), and checking the feasibility of each solution can be done in time $\mathrm{poly}(m,\log V)$. In the general case, where each variable can be an arbitrary integer, complete enumeration is impossible. Here, Lenstra's algorithm uses ideas from the geometry of numbers. It transforms the original problem into an equivalent one with the following property: either the existence of a solution $\mathbf{x}$ is obvious, or the value of $x_{n}$ (the $n$-th variable) belongs to an interval whose length is bounded by a function of $n$. In the latter case, the problem is reduced to a bounded number of lower-dimensional problems. The run-time complexity of the algorithm has been improved in several steps:
The original algorithm of Lenstra had run-time $2^{O(n^{3})}\cdot (m\cdot \log V)^{O(1)}$.
Kannan presented an improved algorithm with run-time $n^{O(n)}\cdot (m\cdot \log V)^{O(1)}$.
Frank and Tardos presented an improved algorithm with run-time $n^{2.5n}\cdot 2^{O(n)}\cdot (m\cdot \log V)^{O(1)}$.
Dadush presented an improved algorithm with run-time $n^{n}\cdot 2^{O(n)}\cdot (m\cdot \log V)^{O(1)}$.
Reis and Rothvoss presented an improved algorithm with run-time $(\log n)^{O(n)}\cdot (m\cdot \log V)^{O(1)}$.
These algorithms can also be used for mixed-integer linear programs (MILP): programs in which some variables are integer and some variables are real. The original algorithm of Lenstra has run-time $2^{O(n^{3})}\cdot \mathrm{poly}(d,L)$
, where n is the number of integer variables, d is the number of continuous variables, and L is the binary encoding size of the problem. Using techniques from later algorithms, the factor
$2^{O(n^{3})}$ can be improved to $2^{O(n\log n)}$ or to $n^{n}$.
=== Heuristic methods ===
Since integer linear programming is NP-hard, many problem instances are intractable and so heuristic methods must be used instead. For example, tabu search can be used to search for solutions to ILPs. To use tabu search to solve ILPs, moves can be defined as incrementing or decrementing an integer-constrained variable of a feasible solution while keeping all other integer-constrained variables constant. The unrestricted variables are then solved for. Short-term memory can consist of previously tried solutions, while medium-term memory can consist of values for the integer-constrained variables that have resulted in high objective values (assuming the ILP is a maximization problem). Finally, long-term memory can guide the search towards integer values that have not previously been tried.
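A compact sketch of this scheme (plain Python, toy neighbourhood of ±1 moves and a short-term tabu list only; the medium- and long-term memories described above are omitted for brevity):

def tabu_search(c, A, b, x0, iters=200, tabu_size=20):
    # Maximize c^T x subject to A x <= b, x >= 0, x integer, starting from feasible x0.
    def feasible(x):
        return all(sum(ai * xi for ai, xi in zip(row, x)) <= bi
                   for row, bi in zip(A, b)) and all(xi >= 0 for xi in x)
    def value(x):
        return sum(ci * xi for ci, xi in zip(c, x))

    current, best = list(x0), list(x0)
    tabu = [tuple(x0)]                          # short-term memory: recently visited solutions
    for _ in range(iters):
        # moves: increment or decrement one integer variable at a time
        neighbours = [current[:i] + [current[i] + d] + current[i+1:]
                      for i in range(len(current)) for d in (1, -1)]
        neighbours = [x for x in neighbours if feasible(x) and tuple(x) not in tabu]
        if not neighbours:
            break
        current = max(neighbours, key=value)    # best admissible neighbour
        tabu = (tabu + [tuple(current)])[-tabu_size:]
        if value(current) > value(best):
            best = list(current)
    return best, value(best)

# The example ILP from earlier in the article: maximize y.
print(tabu_search(c=[0, 1], A=[[-1, 1], [3, 2], [2, 3]], b=[1, 12, 12], x0=[0, 0]))
# ([1, 2], 2)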
Other heuristic methods that can be applied to ILPs include
Hill climbing
Simulated annealing
Reactive search optimization
Ant colony optimization
Hopfield neural networks
There are also a variety of other problem-specific heuristics, such as the k-opt heuristic for the traveling salesman problem. A disadvantage of heuristic methods is that if they fail to find a solution, it cannot be determined whether it is because there is no feasible solution or whether the algorithm simply was unable to find one. Further, it is usually impossible to quantify how close to optimal a solution returned by these methods is.
== Sparse integer programming ==
It is often the case that the matrix $A$ that defines the integer program is sparse. In particular, this occurs when the matrix has a block structure, which is the case in many applications. The sparsity of the matrix can be measured as follows. The graph of $A$ has vertices corresponding to columns of $A$, and two columns form an edge if $A$ has a row where both columns have nonzero entries. Equivalently, the vertices correspond to variables, and two variables form an edge if they share an inequality.
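For illustration, the graph of $A$ can be built directly from this definition (a plain Python sketch):

from itertools import combinations

def graph_of(A):
    # vertices = column indices; edges join columns that are nonzero in a common row
    n = len(A[0])
    edges = set()
    for row in A:
        support = [j for j in range(n) if row[j] != 0]
        edges.update(combinations(support, 2))
    return set(range(n)), edges

A = [[1, 1, 0, 0],
     [0, 1, 1, 0],
     [0, 0, 1, 1]]
print(graph_of(A))  # ({0, 1, 2, 3}, {(0, 1), (1, 2), (2, 3)}): a path, which has small tree-depth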
The sparsity measure $d$ of $A$ is the minimum of the tree-depth of the graph of $A$ and the tree-depth of the graph of the transpose of $A$. Let $a$ be the numeric measure of $A$, defined as the maximum absolute value of any entry of $A$. Let $n$ be the number of variables of the integer program. Then it was shown in 2018 that integer programming can be solved in strongly polynomial and fixed-parameter tractable time parameterized by $a$ and $d$. That is, for some computable function $f$ and some constant $k$, integer programming can be solved in time $f(a,d)n^{k}$. In particular, the time is independent of the right-hand side $\mathbf{b}$ and the objective function $\mathbf{c}$. Moreover, in contrast to the classical result of Lenstra, where the number $n$ of variables is a parameter, here the number $n$ of variables is a variable part of the input.
== See also ==
Constrained least squares
Diophantine equation – Polynomial equation whose integer solutions are sought
== References ==
== Further reading ==
George L. Nemhauser; Laurence A. Wolsey (1988). Integer and combinatorial optimization. Wiley. ISBN 978-0-471-82819-8.
Alexander Schrijver (1998). Theory of linear and integer programming. John Wiley and Sons. ISBN 978-0-471-98232-6.
Laurence A. Wolsey (1998). Integer programming. Wiley. ISBN 978-0-471-28366-9.
Dimitris Bertsimas; Robert Weismantel (2005). Optimization over integers. Dynamic Ideas. ISBN 978-0-9759146-2-5.
John K. Karlof (2006). Integer programming: theory and practice. CRC Press. ISBN 978-0-8493-1914-3.
H. Paul Williams (2009). Logic and Integer Programming. Springer. ISBN 978-0-387-92279-9.
Michael Jünger; Thomas M. Liebling; Denis Naddef; George Nemhauser; William R. Pulleyblank; Gerhard Reinelt; Giovanni Rinaldi; Laurence A. Wolsey, eds. (2009). 50 Years of Integer Programming 1958-2008: From the Early Years to the State-of-the-Art. Springer. ISBN 978-3-540-68274-5.
Der-San Chen; Robert G. Batson; Yu Dang (2010). Applied Integer Programming: Modeling and Solution. John Wiley and Sons. ISBN 978-0-470-37306-4.
Gerard Sierksma; Yori Zwols (2015). Linear and Integer Optimization: Theory and Practice. CRC Press. ISBN 978-1-498-71016-9.
== External links ==
A Tutorial on Integer Programming
Conference Integer Programming and Combinatorial Optimization, IPCO
The Aussois Combinatorial Optimization Workshop
Programming is a form of music production and performance using electronic devices and computer software, such as sequencers and workstations or hardware synthesizers, samplers, and sequencers, to generate the sounds of musical instruments. These sounds are created through the use of music coding languages, of which there are many, of varying complexity. Music programming is frequently used in modern pop and rock music from various regions of the world, and sometimes in jazz and contemporary classical music. It gained popularity in the 1950s and has been developing ever since.
Music programming is the process in which a musician produces a sound or "patch" (be it from scratch or with the aid of a synthesizer/sampler), or uses a sequencer to arrange a song.
== Coding languages ==
Music coding languages are used to program the electronic devices to produce the instrumental sounds they make. Each coding language has its own level of difficulty and function.
=== Alda ===
The music coding language Alda provides a tutorial on coding music and is "designed for musicians who do not know how to program, as well as programmers who do not know how to music". The website also has links to installation instructions, a tutorial, a cheat sheet, documentation, and a community section.
=== LC ===
The LC computer music programming language is a more complex language meant for more experienced coders. One of the differences between this language and other music coding languages is that, "Unlike existing unit-generator languages, LC provides objects as well as library functions and methods that can directly represent microsounds and related manipulations that are involved in microsound synthesis."
== History and development ==
Music programming has had a vast history of development leading to the creation of different programs and languages. Each development comes with more function and utility, and each decade tends to favor a certain program or piece of equipment.
=== MUSIC-N ===
The first family of digital sound synthesis programs and languages was MUSIC-N, created by Max Mathews. The development of these programs allowed for more flexibility and utility, eventually leading them to become fully developed languages. As programs such as MUSIC I, MUSIC II and MUSIC III were developed, all created by Max Mathews, new technologies were incorporated, such as the table-lookup oscillator in MUSIC II and the unit generator in MUSIC III. Breakthrough technologies such as the unit generator, which acted as a building block for music programming software, and the acoustic compiler, which allowed an "unlimited number of sound synthesis structures to be created in the computer", furthered the complexity and evolution of music programming systems.
=== Drum machines ===
Around the time of the 1950s, electric rhythm machines began to make their way into popular music. These machines gained much traction amongst artists, who saw them as a way to create percussion sounds in an easier and more efficient way. Artists who used this kind of technology include J. J. Cale, Sly Stone, Phil Collins, Marvin Gaye, and Prince. Some of the popular drum machines of the 1950s-1970s were the Side Man, Ace Tone's Rhythm Ace, Korg's Doncamatic, and Maestro's Rhythm King. In 1979, the LM-1 drum machine computer was released by guitarist Roger Linn, its goal being to help artists achieve realistic-sounding drums. This drum machine had twelve different drum sounds: kick drum, snare, hi-hat, cabasa, tambourine, two tom toms, two congas, cowbell, clave, and handclaps. The different sounds could be recorded individually, and they sounded real because of the high sampling frequency (28 kHz). Some notable artists who used the LM-1 were Peter Gabriel, Stevie Wonder, Michael Jackson, and Madonna. These developments continued in the following decades, leading to the creation of new electrical instruments such as the theremin, Hammond organ, electric guitar, synthesizer, and digital sampler. Other technologies such as the phonograph, tape recorder, and compact disc have enabled artists to create and produce sounds without the use of live musicians.
=== Music programming in the 1980s ===
The music programming innovations of the 1980s brought many new unique sounds to this style of music. Popular sounds during this time were gated reverb, synthesizers, drum machines with characteristic 1980s sounds, vocal reverb, delay and harmonization, and master bus mixdowns to tape. Music programming began to emerge around this time, which drew controversy: many artists were adopting the technology, and the traditional way music was made and recorded began to change. For instance, many artists began to record their beats by programming instead of recording a live drummer.
=== Music programming in the early 2000s ===
Today, music programming is very common, with artists using software on a computer to produce music rather than physical instruments. These programs, called digital audio workstations (DAWs), are used for editing, recording, and mixing music files. Most DAW programs incorporate MIDI technology, which allows music production software to communicate with electronic instruments, computers, and other related devices. While most DAWs carry out the same functions, some require less expertise and are easier for beginners to operate. These programs can be run on personal computers. Popular DAWs include FL Studio, Avid Pro Tools, Apple Logic Pro X, Magix Acid Pro, Ableton Live, Presonus Studio One, Magix Samplitude Pro X, Cockos Reaper, Propellerhead Reason, Steinberg Cubase Pro, GarageBand, and Bitwig Studio.
== Equipment ==
Technology: digital audio workstation, drum machine, groovebox, sampler, sequencer, synthesizer and MIDI
== References ==
== External links ==
Dobrian, Chris (1988). "Music Programming: An Introductory Essay". Claire Trevor School of the Arts, University of California, Irvine.
Fetal programming, also known as prenatal programming, is the theory that environmental cues experienced during fetal development play a seminal role in determining health trajectories across the lifespan.
Three main forms of programming that occur due to changes in the maternal environment are:
Changes in development that lead to greater disease risk;
Genetic changes that alter disease risk;
Epigenetic changes, which alter the disease risk not only of the child but also of the next generation; for example, after a famine, the grandchildren of women who were pregnant during the famine are born smaller than normal, despite nutritional deficiencies having since been resolved.
These changes in the maternal environment can be due to nutritional alteration, hormonal fluctuations, or exposure to toxins.
== History ==
=== Dutch famine 1944–45 ===
In 1944–45, the German blockade of the Netherlands led to a lack of food supplies, causing the Dutch famine of 1944–45. The famine caused severe malnutrition among the population, including women in various stages of pregnancy. The Dutch Famine Birth Cohort Study examined the impact of the lack of nutrition on children born during or after this famine. It showed that throughout their lives, these children were at greater risk of diabetes, cardiovascular disease, obesity, and other non-communicable diseases.
=== Barker hypothesis ===
In the 1980s, David Barker began a research study on this topic. The Barker hypothesis, or thrifty phenotype, forms the basis for much of the research conducted on fetal programming. This hypothesis states that if the fetus is exposed to low nutrition, it will adapt to that environment. Nutrients are diverted towards the developing heart, brain, and other essential fetal organs. The body also undergoes metabolic alterations that ensure survival despite low nutrition but may cause problems under normal or high nutrition. This leads to an increased risk of metabolic syndrome.
== Nutritional status ==
The developing fetus forms an impression of the world into which it will be born via its mother's nutritional status. Its development is thus modulated to create the best chance of survival. However, excessive or insufficient nutrition in the mother can provoke maladaptive developmental responses in the fetus, which in turn manifest in the form of post-natal diseases. This may have such a profound effect on the fetus’s adult life that it can even outweigh lifestyle factors.
=== Excessive nutrition ===
Body mass index before pregnancy and weight gain during pregnancy are linked to high blood pressure in the offspring during adulthood. Mouse models suggest that this is due to high levels of the fetal hormone leptin, which is present in the blood of individuals who are overweight or obese. One theory is that this hormone harms the regulatory systems of the fetus and renders it unable to maintain normal blood pressure levels.
=== Insufficient nutrition ===
Pre-eclampsia, involving oxygen deprivation and death of trophoblastic cells that make up most of the placenta, is a disease which is often associated with maladaptive long-term consequences of inappropriate fetal programming. Here, an inadequately developed and poorly functioning placenta fails to meet the fetus’s nutritional needs during gestation, either by altering its selection for nutrients that can cross into fetal blood or restricting total volume thereof. Consequences of this for the fetus in adult life include cardiovascular and metabolic conditions.
== Hormonal influence ==
A delicate balance of hormones during pregnancy is regarded as highly relevant to fetal programming and may significantly influence the outcome for the offspring. Placental endocrine transfer from the mother to the developing fetus could be altered by the mental state of the mother, due to affected glucocorticoid transfer across the placenta.
=== Thyroid ===
Thyroid hormones play an instrumental role during the early development of the fetal brain. Therefore, mothers suffering from thyroid conditions and altered thyroid hormone levels may inadvertently trigger structural and functional changes in the fetal brain. The fetus can produce its own thyroid hormones from the onset of the second trimester; however, maternal thyroid hormones remain important for brain development before and after the fetus can synthesize them in the uterus. Because of this, the baby may experience an increased risk of neurological or psychiatric diseases later in life.
=== Cortisol ===
Cortisol (and glucocorticoids more generally) is the most well-studied hormonal mechanism that may have prenatal programming effects. Although cortisol has normative developmental effects during prenatal development, excess cortisol exposure has deleterious effects on fetal growth, the postnatal function of physiological systems such as the hypothalamic-pituitary-adrenal axis, and brain structure or connectivity (e.g., the amygdala).
During gestation, cortisol concentrations in maternal circulation are up to ten times higher than cortisol concentrations in fetal circulation. The maternal-to-fetal cortisol gradient is maintained by the placenta, which forms a structural and enzymatic barrier to cortisol. During the first two trimesters of gestation intrauterine cortisol is primarily produced by the maternal adrenal glands. However, during the third trimester the fetal adrenal glands begin to endogenously produce cortisol and become responsible for most intrauterine cortisol by the time the fetus reaches term.
== Psychological stress and psychopathology ==
The mental state of the mother during pregnancy affects the fetus in the uterus, predominantly via hormones and genetics. The mother's mood, including maternal prenatal anxiety, depression, and stress during pregnancy, correlates with altered outcomes for the child. That said, not every fetus exposed to these factors is affected in the same way or to the same degree, and genetic and environmental factors are believed to have a significant degree of influence.
=== Depression ===
Maternal depression poses one of the greatest risks for increased vulnerability to adverse outcomes for a baby developing in the uterus, especially in terms of susceptibility to a variety of psychological conditions. The mechanisms that may explain the connection between maternal depression and the offspring's future health are mostly unclear and form an active area of research. Genetic inheritance that may render the child more susceptible may play a role, as may the effect on the intrauterine environment for the baby whilst the mother suffers from depression.
=== Psychological stress ===
Maternally experienced psychological stress that occurs either before or during gestation can have intergenerational effects on offspring. Stress experienced during gestation has been linked with preterm delivery, low birth weight, and increased risk of psychopathology. The new mother may suffer from after-effects too, such as postpartum depression, and subsequently may find parenting more difficult as compared to those who did not experience as much stress during their pregnancies.
== Toxins ==
Toxins such as alcohol, tobacco, and certain drugs to which the baby is exposed during its development are thought to contribute to fetal programming, especially via alterations to the HPA axis. If the exposure occurs during a critical phase of fetal development, it could have drastic and dire consequences for the fetus.
=== Alcohol ===
Prenatal and/or early postnatal exposure to alcohol (ethanol) has been found to harm a child's neuroendocrine and behavioral development. Alcohol ingested by the mother during pregnancy passes through the placenta and makes its way to the baby in utero. Changes caused by ethanol exposure may significantly affect the fetus's growth and development; these are collectively known as fetal alcohol spectrum disorders (FASD). The exact interaction between ethanol and the developing fetus is complex and largely uncertain; however, several direct and indirect effects have been observed as the fetus matures. Predominant among these are irregularities in the fetus's endocrine, metabolic, and physiological functions.
=== Smoking ===
The negative consequences of smoking are well known, and these may be even more apparent during pregnancy. Exposure to tobacco smoke during pregnancy, commonly known as in utero maternal tobacco smoke exposure (MTSE), can contribute to various problems in the babies of smoking mothers. About 20% of mothers smoke whilst pregnant, and this is associated with an increased risk of complications, such as preterm birth, decreased fetal growth leading to lower birth weight, and impaired fetal lung development.
=== Drugs ===
There is evidence pointing towards pharmacological programming of the fetus during the first trimester. One type of drug suspected of influencing the developing baby when used during pregnancy is anti-hypertensive drugs. Pre-eclampsia (a condition of hypertension during pregnancy) is a serious problem for pregnant mothers and can predispose the mother to a variety of complications, including an increased risk of mortality and problems during parturition.
== References ==
== External links ==
MRC Lifecourse Epidemiology Unit page at the University of Southampton
Fetal Programming page on the Centre for Fetal Programming's website.
Logic programming is a programming, database and knowledge representation paradigm based on formal logic. A logic program is a set of sentences in logical form, representing knowledge about some problem domain. Computation is performed by applying logical reasoning to that knowledge, to solve problems in the domain. Major logic programming language families include Prolog, Answer Set Programming (ASP) and Datalog. In all of these languages, rules are written in the form of clauses:
A :- B1, ..., Bn.
and are read as declarative sentences in logical form:
A if B1 and ... and Bn.
A is called the head of the rule, B1, ..., Bn is called the body, and the Bi are called literals or conditions. When n = 0, the rule is called a fact and is written in the simplified form:
A.
Queries (or goals) have the same syntax as the bodies of rules and are commonly written in the form:
?- B1, ..., Bn.
In the simplest case of Horn clauses (or "definite" clauses), all of the A, B1, ..., Bn are atomic formulae of the form p(t1, ..., tm), where p is a predicate symbol naming a relation, like "motherhood", and the ti are terms naming objects (or individuals). Terms include both constant symbols, like "charles", and variables, such as X, which start with an upper case letter.
Consider, for example, the following Horn clause program:
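A representative program of this kind (a reconstruction; the relations and the constants charles and william reappear in the queries below) is:

mother_child(elizabeth, charles).
father_child(charles, william).
father_child(charles, harry).
parent_child(X, Y) :- mother_child(X, Y).
parent_child(X, Y) :- father_child(X, Y).
grandparent_child(X, Y) :- parent_child(X, Z), parent_child(Z, Y).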
Given a query, the program produces answers.
For instance, for the query ?- parent_child(X, william), the single answer is X = charles.
Various queries can be asked. For instance, the program can be queried both to generate grandparents and to generate grandchildren. It can even be used to generate all pairs of grandchildren and grandparents, or simply to check if a given pair is such a pair.
Although Horn clause logic programs are Turing complete, for most practical applications Horn clause programs need to be extended to "normal" logic programs with negative conditions. For example, the definition of sibling uses a negative condition, where the predicate = is defined by the clause X = X:
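A reconstruction of such a definition, in terms of the parent_child relation above:

sibling(X, Y) :- parent_child(Z, X), parent_child(Z, Y), not(X = Y).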
Logic programming languages that include negative conditions have the knowledge representation capabilities of a non-monotonic logic.
In ASP and Datalog, logic programs have only a declarative reading, and their execution is performed by means of a proof procedure or model generator whose behaviour is not meant to be controlled by the programmer. However, in the Prolog family of languages, logic programs also have a procedural interpretation as goal-reduction procedures. From this point of view, clause A :- B1,...,Bn is understood as:
to solve A, solve B1, and ... and solve Bn.
Negative conditions in the bodies of clauses also have a procedural interpretation, known as negation as failure: A negative literal not B is deemed to hold if and only if the positive literal B fails to hold.
Much of the research in the field of logic programming has been concerned with trying to develop a logical semantics for negation as failure and with developing other semantics and other implementations for negation. These developments have been important, in turn, for supporting the development of formal methods for logic-based program verification and program transformation.
== History ==
The use of mathematical logic to represent and execute computer programs is also a feature of the lambda calculus, developed by Alonzo Church in the 1930s. However, the first proposal to use the clausal form of logic for representing computer programs was made by Cordell Green. This used an axiomatization of a subset of LISP, together with a representation of an input-output relation, to compute the relation by simulating the execution of the program in LISP. Foster and Elcock's Absys, on the other hand, employed a combination of equations and lambda calculus in an assertional programming language that places no constraints on the order in which operations are performed.
Logic programming, with its current syntax of facts and rules, can be traced back to debates in the late 1960s and early 1970s about declarative versus procedural representations of knowledge in artificial intelligence. Advocates of declarative representations were notably working at Stanford, associated with John McCarthy, Bertram Raphael and Cordell Green, and in Edinburgh, with John Alan Robinson (an academic visitor from Syracuse University), Pat Hayes, and Robert Kowalski. Advocates of procedural representations were mainly centered at MIT, under the leadership of Marvin Minsky and Seymour Papert.
Although it was based on the proof methods of logic, Planner, developed by Carl Hewitt at MIT, was the first language to emerge within this proceduralist paradigm. Planner featured pattern-directed invocation of procedural plans from goals (i.e. goal-reduction or backward chaining) and from assertions (i.e. forward chaining). The most influential implementation of Planner was the subset of Planner, called Micro-Planner, implemented by Gerry Sussman, Eugene Charniak and Terry Winograd. Winograd used Micro-Planner to implement the landmark natural-language understanding program SHRDLU. For the sake of efficiency, Planner used a backtracking control structure so that only one possible computation path had to be stored at a time. Planner gave rise to the programming languages QA4, Popler, Conniver, QLISP, and the concurrent language Ether.
Hayes and Kowalski in Edinburgh tried to reconcile the logic-based declarative approach to knowledge representation with Planner's procedural approach. Hayes (1973) developed an equational language, Golux, in which different procedures could be obtained by altering the behavior of the theorem prover.
In the meanwhile, Alain Colmerauer in Marseille was working on natural-language understanding, using logic to represent semantics and using resolution for question-answering. During the summer of 1971, Colmerauer invited Kowalski to Marseille, and together they discovered that the clausal form of logic could be used to represent formal grammars and that resolution theorem provers could be used for parsing. They observed that some theorem provers, like hyper-resolution, behave as bottom-up parsers, and others, like SL resolution (1971), behave as top-down parsers.
It was in the following summer of 1972, that Kowalski, again working with Colmerauer, developed the procedural interpretation of implications in clausal form. It also became clear that such clauses could be restricted to definite clauses or Horn clauses, and that SL-resolution could be restricted (and generalised) to SLD resolution. Kowalski's procedural interpretation and SLD were described in a 1973 memo, published in 1974.
Colmerauer, with Philippe Roussel, used the procedural interpretation as the basis of Prolog, which was implemented in the summer and autumn of 1972. The first Prolog program, also written in 1972 and implemented in Marseille, was a French question-answering system. The use of Prolog as a practical programming language was given great momentum by the development of a compiler by David H. D. Warren in Edinburgh in 1977. Experiments demonstrated that Edinburgh Prolog could compete with the processing speed of other symbolic programming languages such as Lisp. Edinburgh Prolog became the de facto standard and strongly influenced the definition of ISO standard Prolog.
Logic programming gained international attention during the 1980s, when it was chosen by the Japanese Ministry of International Trade and Industry to develop the software for the Fifth Generation Computer Systems (FGCS) project. The FGCS project aimed to use logic programming to develop advanced Artificial Intelligence applications on massively parallel computers. Although the project initially explored the use of Prolog, it later adopted the use of concurrent logic programming, because it was closer to the FGCS computer architecture.
However, the committed choice feature of concurrent logic programming interfered with the language's logical semantics and with its suitability for knowledge representation and problem solving applications. Moreover, the parallel computer systems developed in the project failed to compete with advances taking place in the development of more conventional, general-purpose computers. Together these two issues resulted in the FGCS project failing to meet its objectives. Interest in both logic programming and AI fell into world-wide decline.
In the meanwhile, more declarative logic programming approaches, including those based on the use of Prolog, continued to make progress independently of the FGCS project. In particular, although Prolog was developed to combine declarative and procedural representations of knowledge, the purely declarative interpretation of logic programs became the focus for applications in the field of deductive databases. Work in this field became prominent around 1977, when Hervé Gallaire and Jack Minker organized a workshop on logic and databases in Toulouse. The field was eventually renamed Datalog.
This focus on the logical, declarative reading of logic programs was given further impetus by the development of constraint logic programming in the 1980s and Answer Set Programming in the 1990s. It is also receiving renewed emphasis in recent applications of Prolog.
The Association for Logic Programming (ALP) was founded in 1986 to promote Logic Programming. Its official journal until 2000 was The Journal of Logic Programming. Its founding editor-in-chief was J. Alan Robinson. In 2001, the journal was renamed The Journal of Logic and Algebraic Programming, and the official journal of ALP became Theory and Practice of Logic Programming, published by Cambridge University Press.
== Concepts ==
Logic programs enjoy a rich variety of semantics and problem solving methods, as well as a wide range of applications in programming, databases, knowledge representation and problem solving.
=== Algorithm = Logic + Control ===
The procedural interpretation of logic programs, which uses backward reasoning to reduce goals to subgoals, is a special case of the use of a problem-solving strategy to control the use of a declarative, logical representation of knowledge to obtain the behaviour of an algorithm. More generally, different problem-solving strategies can be applied to the same logical representation to obtain different algorithms. Alternatively, different algorithms can be obtained with a given problem-solving strategy by using different logical representations.
The two main problem-solving strategies are backward reasoning (goal reduction) and forward reasoning, also known as top-down and bottom-up reasoning, respectively.
In the simple case of a propositional Horn clause program and a top-level atomic goal, backward reasoning determines an and-or tree, which constitutes the search space for solving the goal. The top-level goal is the root of the tree. Given any node in the tree and any clause whose head matches the node, there exists a set of child nodes corresponding to the sub-goals in the body of the clause. These child nodes are grouped together by an "and". The alternative sets of children corresponding to alternative ways of solving the node are grouped together by an "or".
Any search strategy can be used to search this space. Prolog uses a sequential, last-in-first-out, backtracking strategy, in which only one alternative and one sub-goal are considered at a time. Alternatively, subgoals can be solved in parallel, and clauses can also be tried in parallel. The first strategy is called and-parallel and the second strategy is called or-parallel. Other search strategies, such as intelligent backtracking, or best-first search to find an optimal solution, are also possible.
In the more general, non-propositional case, where sub-goals can share variables, other strategies can be used, such as choosing the subgoal that is most highly instantiated or that is sufficiently instantiated so that only one procedure applies. Such strategies are used, for example, in concurrent logic programming.
In most cases, backward reasoning from a query or goal is more efficient than forward reasoning. But sometimes with Datalog and Answer Set Programming, there may be no query that is separate from the set of clauses as a whole, and then generating all the facts that can be derived from the clauses is a sensible problem-solving strategy. Here is another example, where forward reasoning beats backward reasoning in a more conventional computation task, where the goal ?- fibonacci(n, Result) is to find the nth fibonacci number:
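A reconstruction of such a program, using standard Prolog arithmetic:

fibonacci(0, 0).
fibonacci(1, 1).
fibonacci(N, Result) :-
    N > 1,
    N1 is N - 1, N2 is N - 2,
    fibonacci(N1, F1), fibonacci(N2, F2),
    Result is F1 + F2.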
Here the relation fibonacci(N, M) stands for the function fibonacci(N) = M, and the predicate N is Expression is Prolog notation for the predicate that instantiates the variable N to the value of Expression.
Given the goal of computing the fibonacci number of n, backward reasoning reduces the goal to the two subgoals of computing the fibonacci numbers of n-1 and n-2. It reduces the subgoal of computing the fibonacci number of n-1 to the two subgoals of computing the fibonacci numbers of n-2 and n-3, redundantly computing the fibonacci number of n-2. This process of reducing one fibonacci subgoal to two fibonacci subgoals continues until it reaches the numbers 0 and 1. Its complexity is of the order $2^{n}$. In contrast, forward reasoning generates the sequence of fibonacci numbers, starting from 0 and 1 without any recomputation, and its complexity is linear with respect to n.
Prolog cannot perform forward reasoning directly. But it can achieve the effect of forward reasoning within the context of backward reasoning by means of tabling: Subgoals are maintained in a table, along with their solutions. If a subgoal is re-encountered, it is solved directly by using the solutions already in the table, instead of re-solving the subgoals redundantly.
=== Relationship with functional programming ===
Logic programming can be viewed as a generalisation of functional programming, in which functions are a special case of relations.
For example, the function, mother(X) = Y, (every X has only one mother Y) can be represented by the relation mother(X, Y). In this respect, logic programs are similar to relational databases, which also represent functions as relations.
Compared with relational syntax, functional syntax is more compact for nested functions. For example, in functional syntax the definition of maternal grandmother can be written in the nested form:
maternal_grandmother(X) = mother(mother(X)).
The same definition in relational notation needs to be written in the unnested, flattened form:
maternal_grandmother(X, Y) :- mother(X, Z), mother(Z, Y).
However, nested syntax can be regarded as syntactic sugar for unnested syntax. Ciao Prolog, for example, transforms functional syntax into relational form and executes the resulting logic program using the standard Prolog execution strategy. Moreover, the same transformation can be used to execute nested relations that are not functional.
=== Relationship with relational programming ===
The term relational programming has been used to cover a variety of programming languages that treat functions as a special case of relations. Some of these languages, such as miniKanren and relational linear programming, are logic programming languages in the sense of this article. However, the relational language RML is an imperative programming language whose core construct is a relational expression, which is similar to an expression in first-order predicate logic.
Other relational programming languages are based on the relational calculus or relational algebra.
=== Semantics of Horn clause programs ===
Viewed in purely logical terms, there are two approaches to the declarative semantics of Horn clause logic programs: One approach is the original logical consequence semantics, which understands solving a goal as showing that the goal is a theorem that is true in all models of the program.
In this approach, computation is theorem-proving in first-order logic; and both backward reasoning, as in SLD resolution, and forward reasoning, as in hyper-resolution, are correct and complete theorem-proving methods. Sometimes such theorem-proving methods are also regarded as providing a separate proof-theoretic (or operational) semantics for logic programs. But from a logical point of view, they are proof methods, rather than semantics.
The other approach to the declarative semantics of Horn clause programs is the satisfiability semantics, which understands solving a goal as showing that the goal is true (or satisfied) in some intended (or standard) model of the program. For Horn clause programs, there always exists such a standard model: It is the unique minimal model of the program.
Informally speaking, a minimal model is a model that, when it is viewed as the set of all (variable-free) facts that are true in the model, contains no smaller set of facts that is also a model of the program.
For example, the following facts represent the minimal model of the family relationships example in the introduction of this article. All other variable-free facts are false in the model:
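A sketch, assuming the introductory example is the common family program with mother_child and father_child facts, and with parent_child and grandparent_child defined in terms of them:
mother_child(elizabeth, charles).
father_child(charles, william).
father_child(charles, harry).
parent_child(elizabeth, charles).
parent_child(charles, william).
parent_child(charles, harry).
grandparent_child(elizabeth, william).
grandparent_child(elizabeth, harry).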
The satisfiability semantics also has an alternative, more mathematical characterisation as the least fixed point of the function that uses the rules in the program to derive new facts from existing facts in one step of inference.
Remarkably, the same problem-solving methods of forward and backward reasoning, which were originally developed for the logical consequence semantics, are equally applicable to the satisfiability semantics: Forward reasoning generates the minimal model of a Horn clause program, by deriving new facts from existing facts, until no new additional facts can be generated. Backward reasoning, which succeeds by reducing a goal to subgoals, until all subgoals are solved by facts, ensures that the goal is true in the minimal model, without generating the model explicitly.
The difference between the two declarative semantics can be seen with the definitions of addition and multiplication in successor arithmetic, which represents the natural numbers 0, 1, 2, ... as a sequence of terms of the form 0, s(0), s(s(0)), .... In general, the term s(X) represents the successor of X, namely X + 1. Here are the standard definitions of addition and multiplication in functional notation:
X + 0 = X.
X + s(Y) = s(X + Y).
i.e. X + (Y + 1) = (X + Y) + 1
X × 0 = 0.
X × s(Y) = X + (X × Y).
i.e. X × (Y + 1) = X + (X × Y).
Here are the same definitions as a logic program, using add(X, Y, Z) to represent X + Y = Z, and multiply(X, Y, Z) to represent X × Y = Z:
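These clauses follow directly from the functional equations above:
add(X, 0, X).
add(X, s(Y), s(Z)) :- add(X, Y, Z).
multiply(X, 0, 0).
multiply(X, s(Y), Z) :- multiply(X, Y, W), add(X, W, Z).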
The two declarative semantics both give the same answers for the same existentially quantified conjunctions of addition and multiplication goals. For example, 2 × 2 = X has the solution X = 4; and X × X = X + X has two solutions, X = 0 and X = 2:
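Written as queries against the program above:
?- multiply(s(s(0)), s(s(0)), X).
X = s(s(s(s(0)))).
?- multiply(X, X, Y), add(X, X, Y).
X = 0, Y = 0 ;
X = s(s(0)), Y = s(s(s(s(0)))).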
However, with the logical-consequence semantics, there are non-standard models of the program, in which, for example, add(s(s(0)), s(s(0)), s(s(s(s(s(0)))))), i.e. 2 + 2 = 5, is true. But with the satisfiability semantics, there is only one model, namely the standard model of arithmetic, in which 2 + 2 = 5 is false.
In both semantics, the goal ?- add(s(s(0)), s(s(0)), s(s(s(s(s(0)))))) fails. In the satisfiability semantics, the failure of the goal means that the truth value of the goal is false. But in the logical consequence semantics, the failure means that the truth value of the goal is unknown.
=== Negation as failure ===
Negation as failure (NAF), as a way of concluding that a negative condition not p holds by showing that the positive condition p fails to hold, was already a feature of early Prolog systems. The resulting extension of SLD resolution is called SLDNF. A similar construct, called "thnot", also existed in Micro-Planner.
The logical semantics of NAF was unresolved until Keith Clark showed that, under certain natural conditions, NAF is an efficient, correct (and sometimes complete) way of reasoning with the logical consequence semantics using the completion of a logic program in first-order logic.
Completion amounts roughly to regarding the set of all the program clauses with the same predicate in the head, say:
A :- Body1.
...
A :- Bodyk.
as a definition of the predicate:
A iff (Body1 or ... or Bodyk)
where iff means "if and only if". The completion also includes axioms of equality, which correspond to unification. Clark showed that proofs generated by SLDNF are structurally similar to proofs generated by a natural deduction style of reasoning with the completion of the program.
Consider, for example, the following program:
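One way of writing such a program (the predicate names are illustrative) is:
should_receive_sanction(Person, punishment) :-
    thief(Person),
    not should_receive_sanction(Person, rehabilitation).
should_receive_sanction(Person, rehabilitation) :-
    thief(Person), minor(Person), not violent(Person).
thief(tom).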
Given the goal of determining whether tom should receive a sanction, the first rule succeeds in showing that tom should be punished:
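With the reconstructed program above:
?- should_receive_sanction(tom, Sanction).
Sanction = punishment.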
This is because tom is a thief, and it cannot be shown that tom should be rehabilitated. It cannot be shown that tom should be rehabilitated, because it cannot be shown that tom is a minor.
If, however, we receive new information that tom is indeed a minor, the previous conclusion that tom should be punished is replaced by the new conclusion that tom should be rehabilitated:
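Continuing the sketch, after adding the fact minor(tom):
?- should_receive_sanction(tom, Sanction).
Sanction = rehabilitation.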
This property of withdrawing a conclusion when new information is added is called non-monotonicity, and it makes logic programming a non-monotonic logic.
But, if we are now told that tom is violent, the conclusion that tom should be punished will be reinstated:
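In the sketch, after also adding violent(tom):
?- should_receive_sanction(tom, Sanction).
Sanction = punishment.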
The completion of this program is:
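In the iff notation used above, the completion of the reconstructed program (before the facts minor(tom) and violent(tom) are added) is roughly:
should_receive_sanction(Person, Sanction) iff
    (Sanction = punishment and thief(Person) and not should_receive_sanction(Person, rehabilitation))
    or (Sanction = rehabilitation and thief(Person) and minor(Person) and not violent(Person)).
thief(Person) iff Person = tom.
not minor(Person).
not violent(Person).
The last two lines reflect the fact that minor and violent have no defining clauses, so their completions make them false for every Person.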
The notion of completion is closely related to John McCarthy's circumscription semantics for default reasoning, and to Ray Reiter's closed world assumption.
The completion semantics for negation is a logical consequence semantics, for which SLDNF provides a proof-theoretic implementation. However, in the 1980s, the satisfiability semantics became more popular for logic programs with negation. In the satisfiability semantics, negation is interpreted according to the classical definition of truth in an intended or standard model of the logic program.
In the case of logic programs with negative conditions, there are two main variants of the satisfiability semantics: In the well-founded semantics, the intended model of a logic program is a unique, three-valued, minimal model, which always exists. The well-founded semantics generalises the notion of inductive definition in mathematical logic. XSB Prolog implements the well-founded semantics using SLG resolution.
In the alternative stable model semantics, there may be no intended models or several intended models, all of which are minimal and two-valued. The stable model semantics underpins answer set programming (ASP).
Both the well-founded and stable model semantics apply to arbitrary logic programs with negation. However, both semantics coincide for stratified logic programs. For example, the program for sanctioning thieves is (locally) stratified, and all three semantics for the program determine the same intended model:
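In the reconstructed sanctions program, that intended model consists of the facts:
thief(tom).
should_receive_sanction(tom, punishment).
All other variable-free facts are false in the model.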
Attempts to understand negation in logic programming have also contributed to the development of abstract argumentation frameworks. In an argumentation interpretation of negation, the initial argument that tom should be punished because he is a thief, is attacked by the argument that he should be rehabilitated because he is a minor. But the fact that tom is violent undermines the argument that tom should be rehabilitated and reinstates the argument that tom should be punished.
=== Metalogic programming ===
Metaprogramming, in which programs are treated as data, was already a feature of early Prolog implementations. For example, the Edinburgh DEC10 implementation of Prolog included "an interpreter and a compiler, both written in Prolog itself". The simplest metaprogram is the so-called "vanilla" meta-interpreter:
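In standard Prolog notation, the vanilla meta-interpreter consists of three clauses:
solve(true).
solve((B,C)) :- solve(B), solve(C).
solve(A) :- clause(A, B), solve(B).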
where true represents an empty conjunction, and (B,C) is a composite term representing the conjunction of B and C. The predicate clause(A,B) means that there is a clause of the form A :- B.
Metaprogramming is an application of the more general use of a metalogic or metalanguage to describe and reason about another language, called the object language.
Metalogic programming allows object-level and metalevel representations to be combined, as in natural language. For example, in the following program, the atomic formula attends(Person, Meeting) occurs both as an object-level formula, and as an argument of the metapredicates prohibited and approved.
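A sketch of such a program, with the illustrative predicates invited and approved:
% Object level: attends/2 as an ordinary conclusion.
attends(Person, Meeting) :-
    invited(Person, Meeting),
    not prohibited(attends(Person, Meeting)).
% Metalevel: attends(Person, Meeting) as an argument of metapredicates.
prohibited(attends(Person, Meeting)) :-
    not approved(attends(Person, Meeting)).
invited(mary, board_meeting).
approved(attends(mary, board_meeting)).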
=== Relationship with the Computational-representational understanding of mind ===
In his popular Introduction to Cognitive Science, Paul Thagard includes logic and rules as alternative approaches to modelling human thinking. He argues that rules, which have the form IF condition THEN action, are "very similar" to logical conditionals, but they are simpler and have greater psychological plausibility (page 51). Among other differences between logic and rules, he argues that logic uses deduction, but rules use search (page 45) and can be used to reason either forward or backward (page 47). Sentences in logic "have to be interpreted as universally true", but rules can be defaults, which admit exceptions (page 44).
He states that "unlike logic, rule-based systems can also easily represent strategic information
about what to do" (page 45). For example, "IF you want to go home for the weekend, and you have bus fare, THEN
you can catch a bus". He does not observe that the same strategy of reducing a goal to subgoals can be interpreted, in the manner of logic programming, as applying backward reasoning to a logical conditional:
All of these characteristics of rule-based systems - search, forward and backward reasoning, default reasoning, and goal-reduction - are also defining characteristics of logic programming. This suggests that Thagard's conclusion (page 56) that:
Much of human knowledge is naturally described in terms of rules, and many kinds of thinking such as planning can be modeled by rule-based systems.
also applies to logic programming.
Other arguments showing how logic programming can be used to model aspects of human thinking are presented by Keith Stenning and Michiel van Lambalgen in their book, Human Reasoning and Cognitive Science. They show how the non-monotonic character of logic programs can be used to explain human performance on a variety of psychological tasks. They also show (page 237) that "closed–world reasoning in its guise as logic programming has an appealing neural implementation, unlike classical logic."
In The Proper Treatment of Events, Michiel van Lambalgen and Fritz Hamm investigate the use of constraint logic programming to code "temporal notions in natural language by looking at the way human beings construct time".
=== Knowledge representation ===
The use of logic to represent procedural knowledge and strategic information was one of the main goals contributing to the early development of logic programming. Moreover, it continues to be an important feature of the Prolog family of logic programming languages today. However, many applications of logic programming, including Prolog applications, increasingly focus on the use of logic to represent purely declarative knowledge. These applications include both the representation of general commonsense knowledge and the representation of domain-specific expertise.
Commonsense includes knowledge about cause and effect, as formalised, for example, in the situation calculus, event calculus and action languages. Here is a simplified example, which illustrates the main features of such formalisms. The first clause states that a fact holds immediately after an event initiates (or causes) the fact. The second clause is a frame axiom, which states that a fact that holds at a time continues to hold at the next time unless it is terminated by an event that happens at the time. This formulation allows more than one event to occur at the same time:
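A common formulation of these two clauses, together with an auxiliary predicate terminated, is the following sketch:
holds(Fact, Time2) :-
    happens(Event, Time1),
    Time2 is Time1 + 1,
    initiates(Event, Fact).
holds(Fact, Time2) :-
    happens(Event, Time1),
    Time2 is Time1 + 1,
    holds(Fact, Time1),
    not terminated(Fact, Time1).
terminated(Fact, Time) :-
    happens(Event, Time),
    terminates(Event, Fact).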
Here holds is a meta-predicate, similar to solve above. However, whereas solve has only one argument, which applies to general clauses, the first argument of holds is a fact and the second argument is a time (or state). The atomic formula holds(Fact, Time) expresses that the Fact holds at the Time. Such time-varying facts are also called fluents. The atomic formula happens(Event, Time) expresses that the Event happens at the Time.
The following example illustrates how these clauses can be used to reason about causality in a toy blocks world. Here, in the initial state at time 0, a green block is on a table and a red block is stacked on the green block (like a traffic light). At time 0, the red block is moved to the table. At time 1, the green block is moved onto the red block. Moving an object onto a place terminates the fact that the object is on any place, and initiates the fact that the object is on the place to which it is moved:
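A sketch of the blocks-world program in this style:
% Effects of moving an object.
initiates(move(Object, Place), on(Object, Place)).
terminates(move(Object, _), on(Object, _)).
% Initial state and narrative of events.
holds(on(green_block, table), 0).
holds(on(red_block, green_block), 0).
happens(move(red_block, table), 0).
happens(move(green_block, red_block), 1).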
Forward reasoning and backward reasoning generate the same answers to the goal holds(Fact, Time). But forward reasoning generates fluents progressively in temporal order, and backward reasoning generates fluents regressively, as in the domain-specific use of regression in the situation calculus.
Logic programming has also proved to be useful for representing domain-specific expertise in expert systems. But human expertise, like general-purpose commonsense, is mostly implicit and tacit, and it is often difficult to represent such implicit knowledge in explicit rules. This difficulty does not arise, however, when logic programs are used to represent the existing, explicit rules of a business organisation or legal authority.
For example, here is a representation of a simplified version of the first sentence of the British Nationality Act, which states that a person who is born in the UK becomes a British citizen at the time of birth if a parent of the person is a British citizen at the time of birth:
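A sketch of such a clause, with illustrative predicate names:
becomes_british_citizen(Person, Date) :-
    born_in_uk(Person, Date),       % Date is the person's date of birth
    parent_of(Parent, Person),
    british_citizen(Parent, Date).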
Historically, the representation of a large portion of the British Nationality Act as a logic program in the 1980s was "hugely influential for the development of computational representations of legislation, showing how logic programming enables intuitively appealing representations that can be directly deployed to generate automatic inferences".
More recently, the PROLEG system, initiated in 2009 and consisting of approximately 2500 rules and exceptions of civil code and supreme court case rules in Japan, has become possibly the largest legal rule base in the world.
== Variants and extensions ==
=== Prolog ===
The SLD resolution rule of inference is neutral about the order in which subgoals in the bodies of clauses can be selected for solution. For the sake of efficiency, Prolog restricts this order to the order in which the subgoals are written. SLD is also neutral about the strategy for searching the space of SLD proofs.
Prolog searches this space, top-down, depth-first, trying different clauses for solving the same (sub)goal in the order in which the clauses are written.
This search strategy has the advantage that the current branch of the tree can be represented efficiently by a stack. When a goal clause at the top of the stack is reduced to a new goal clause, the new goal clause is pushed onto the top of the stack. When the selected subgoal in the goal clause at the top of the stack cannot be solved, the search strategy backtracks, removing the goal clause from the top of the stack, and retrying the attempted solution of the selected subgoal in the previous goal clause using the next clause that matches the selected subgoal.
Backtracking can be restricted by using a subgoal, called cut, written as !, which always succeeds but cannot be backtracked. Cut can be used to improve efficiency, but can also interfere with the logical meaning of clauses. In many cases, the use of cut can be replaced by negation as failure. In fact, negation as failure can be defined in Prolog, by using cut, together with any literal, say fail, that unifies with the head of no clause:
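The standard definition is:
not(P) :- P, !, fail.   % if P succeeds, commit and fail
not(P).                 % otherwise, not(P) succeeds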
Prolog provides other features, in addition to cut, that do not have a logical interpretation. These include the built-in predicates assert and retract for destructively updating the state of the program during program execution.
For example, the toy blocks world example above can be implemented without frame axioms using destructive change of state:
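A minimal sketch of such an implementation (modern Prolog systems require the dynamic declaration):
:- dynamic(on/2).
on(green_block, table).
on(red_block, green_block).
move(Object, Place) :-
    retract(on(Object, _)),    % destructively remove the old location
    assert(on(Object, Place)). % record the new location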
The sequence of move events and the resulting locations of the blocks can be computed by executing the query:
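With the sketch above, such a query might be:
?- move(red_block, table), move(green_block, red_block), on(Object, Place).
Object = red_block, Place = table ;
Object = green_block, Place = red_block.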
Various extensions of logic programming have been developed to provide a logical framework for such destructive change of state.
The broad range of Prolog applications, both in isolation and in combination with other languages, is highlighted in the Year of Prolog Book, celebrating the 50th anniversary of Prolog in 2022.
Prolog has also contributed to the development of other programming languages, including ALF, Fril, Gödel, Mercury, Oz, Ciao, Visual Prolog, XSB, and λProlog.
=== Constraint logic programming ===
Constraint logic programming (CLP) combines Horn clause logic programming with constraint solving. It extends Horn clauses by allowing some predicates, declared as constraint predicates, to occur as literals in the body of a clause. Constraint predicates are not defined by the facts and rules in the program, but are predefined by some domain-specific model-theoretic structure or theory.
Procedurally, subgoals whose predicates are defined by the program are solved by goal-reduction, as in ordinary logic programming, but constraints are simplified and checked for satisfiability by a domain-specific constraint-solver, which implements the semantics of the constraint predicates. An initial problem is solved by reducing it to a satisfiable conjunction of constraints.
Interestingly, the first version of Prolog already included a constraint predicate dif(term1, term2), from Philippe Roussel's 1972 PhD thesis, which succeeds if both of its arguments are different terms, but which is delayed if either of the terms contains a variable.
The following constraint logic program represents a toy temporal database of john's history as a teacher:
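One possible such database, using the ≤ and < constraint notation that appears below (the clauses other than those for logic and professor are illustrative):
teaches(john, hardware, T) :- 1990 ≤ T, T < 1999.
teaches(john, software, T) :- 1999 ≤ T, T < 2005.
teaches(john, logic, T) :- 2005 ≤ T, T ≤ 2012.
rank(john, lecturer, T) :- 1990 ≤ T, T < 2010.
rank(john, professor, T) :- 2010 ≤ T, T < 2014.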
Here ≤ and < are constraint predicates, with their usual intended semantics. The following goal clause queries the database to find out when john both t
|
https://en.wikipedia.org/wiki/Logic_programming
|
queries the database to find out when john both taught logic and was a professor:
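With the database sketched above:
?- teaches(john, logic, T), rank(john, professor, T).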
The solution
2010 ≤ T, T ≤ 2012
results from simplifying the constraints
2005 ≤ T, T ≤ 2012, 2010 ≤ T, T < 2014.
Constraint logic programming has been used to solve problems in such fields as civil engineering, mechanical engineering, digital circuit verification, automated timetabling, air traffic control, and finance. It is closely related to abductive logic programming.
=== Datalog ===
Datalog is a database definition language, which combines a relational view of data, as in relational databases, with a logical view, as in logic programming.
Relational databases use a relational calculus or relational algebra, with relational operations, such as union, intersection, set difference and Cartesian product, to specify queries, which access a database. Datalog uses logical connectives, such as or, and, and not, in the bodies of rules to define relations as part of the database itself.
It was recognized early in the development of relational databases that recursive queries cannot be expressed in either relational algebra or relational calculus, and that this deficiency can be remedied by introducing a least-fixed-point operator. In contrast, recursive relations can be defined naturally by rules in logic programs, without the need for any new logical connectives or operators.
Datalog differs from more general logic programming by having only constants and variables as terms. Moreover, all facts are variable-free, and rules are restricted, so that if they are executed bottom-up, then the derived facts are also variable-free.
For example, consider the family database:
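A sketch of such a database, with illustrative facts:
mother_child(elizabeth, charles).
father_child(charles, william).
parent_child(X, Y) :- mother_child(X, Y).
parent_child(X, Y) :- father_child(X, Y).
ancestor_descendant(X, Y) :- parent_child(X, Y).
ancestor_descendant(X, Z) :- ancestor_descendant(X, Y), ancestor_descendant(Y, Z).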
Bottom-up execution derives the following set of additional facts and terminates:
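For the sketch above:
parent_child(elizabeth, charles).
parent_child(charles, william).
ancestor_descendant(elizabeth, charles).
ancestor_descendant(charles, william).
ancestor_descendant(elizabeth, william).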
Top-down execution derives the same answers to the query:
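namely, for the sketch above, the query:
?- ancestor_descendant(X, Y).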
But then it goes into an infinite loop. However, top-down execution with tabling gives the same answers and terminates without looping.
=== Answer set programming ===
Like Datalog, answer set programming (ASP) is not Turing-complete. Moreover, instead of separating goals (or queries) from the program to be used in solving the goals, ASP treats the whole program as a goal, and solves the goal by generating a stable model that makes the goal true. For this purpose, it uses the stable model semantics, according to which a logic program can have zero, one or more intended models. For example, the following program represents a degenerate variant of the map colouring problem of colouring two countries red or green:
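A sketch of such a program, with illustrative country names:
country(oz).
country(iz).
adjacent(oz, iz).
colour(C, red) :- country(C), not colour(C, green).
colour(C, green) :- country(C), not colour(C, red).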
The problem has four solutions represented by four stable models:
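Restricting attention to the colour atoms, the four stable models of the sketch are:
{colour(oz, red), colour(iz, red)}
{colour(oz, red), colour(iz, green)}
{colour(oz, green), colour(iz, red)}
{colour(oz, green), colour(iz, green)}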
To represent the standard version of the map colouring problem, we need to add a constraint that two adjacent countries cannot be coloured the same colour. In ASP, this constraint can be written as a clause of the form:
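For the sketch above:
:- country(C1), country(C2), adjacent(C1, C2), colour(C1, X), colour(C2, X).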
With the addition of this constraint, the problem now has only two solutions:
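Again restricting attention to the colour atoms:
{colour(oz, red), colour(iz, green)}
{colour(oz, green), colour(iz, red)}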
The addition of constraints of the form :- Body. eliminates models in which Body is true.
Confusingly, constraints in ASP are different from constraints in CLP. Constraints in CLP are predicates that qualify answers to queries (and solutions of goals). Constraints in ASP are clauses that eliminate models that would otherwise satisfy goals. Constraints in ASP are like integrity constraints in databases.
This combination of ordinary logic programming clauses and constraint clauses illustrates the generate-and-test methodology of problem solving in ASP: The ordinary clauses define a search space of possible solutions, and the constraints filter out unwanted solutions.
Most implementations of ASP proceed in two steps: First they instantiate the program in all possible ways, reducing it to a propositional logic program (known as grounding). Then they apply a propositional logic problem solver, such as the DPLL algorithm or a Boolean SAT solver. However, some implementations, such as s(CASP), use a goal-directed, top-down, SLD-resolution-like procedure without grounding.
=== Abductive logic programming ===
Abductive logic programming (ALP), like CLP, extends normal logic programming by allowing the bodies of clauses to contain literals whose predicates are not defined by clauses. In ALP, these predicates are declared as abducible (or assumable), and are used as in abductive reasoning to explain observations, or more generally to add new facts to the program (as assumptions) to solve goals.
For example, suppose we are given an initial state in which a red block is on a green block on a table at time 0:
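In the notation of the event-calculus sketch above:
holds(on(red_block, green_block), 0).
holds(on(green_block, table), 0).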
Suppose we are also given the goal:
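For instance, the goal of inverting the tower by some time, say time 3 (the deadline is illustrative):
?- holds(on(green_block, red_block), 3), holds(on(red_block, table), 3).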
The goal can represent an observation, in which case a solution is an explanation of the observation. Or the goal can represent a desired future state of affairs, in which case a solution is a plan for achieving the goal.
We can use the rules for cause and effect presented earlier to solve the goal, by treating the happens predicate as abducible:
ALP solves the goal by reasoning backwards and adding assumptions to the program, to solve abducible subgoals. In this case there are many alternative solutions, including:
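With the sketch above, one such solution is the set of assumptions:
happens(move(red_block, table), 0).
happens(tick, 1).
happens(move(green_block, red_block), 2).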
Here tick is an event that marks the passage of time without initiating or terminating any fluents.
There are also solutions in which the two move events happen at the same time. For example:
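In the sketch:
happens(move(red_block, table), 0).
happens(move(green_block, red_block), 0).
happens(tick, 1).
happens(tick, 2).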
Such solutions, if not desired, can be removed by adding an integrity constraint, which is like a constraint clause in ASP:
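For instance, a constraint forbidding two different objects from moving at the same time might be written:
:- happens(move(Object1, Place1), Time),
   happens(move(Object2, Place2), Time),
   Object1 ≠ Object2.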
Abductive logic programming has been used for fault diagnosis, planning, natural language processing and machine learning. It has also been used to interpret negation as failure as a form of abductive reasoning.
=== Inductive logic programming ===
Inductive logic programming (ILP) is an approach to machine learning that induces logic programs as hypothetical generalisations of positive and negative examples. Given a logic program representing background knowledge and positive examples together with constraints representing negative examples, an ILP system induces a logic program that generalises the positive examples while excluding the negative examples.
ILP is similar to ALP, in that both can be viewed as generating hypotheses to explain observations, and as employing constraints to exclude undesirable hypotheses. But in ALP the hypotheses are variable-free facts, and in ILP the hypotheses are general rules.
For example, given only background knowledge of the mother_child and father_child relations, and suitable examples of the grandparent_child relation, current ILP systems can generate the definition of grandparent_child, inventing an auxiliary predicate, which can be interpreted as the parent_child relation:
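The induced program might look as follows, where invented_1 stands for the automatically invented auxiliary predicate (the name is illustrative):
grandparent_child(X, Y) :- invented_1(X, Z), invented_1(Z, Y).
invented_1(X, Y) :- mother_child(X, Y).
invented_1(X, Y) :- father_child(X, Y).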
Stuart Russell has referred to such invention of new concepts as the most important step needed for reaching human-level AI.
Recent work in ILP, combining logic programming, learning and probability, has given rise to the fields of statistical relational learning and probabilistic inductive logic programming.
=== Concurrent logic programming ===
Concurrent logic programming integrates concepts of logic programming with concurrent programming. Its development was given a big impetus in the 1980s by its choice as the systems programming language of the Japanese Fifth Generation Project (FGCS).
A concurrent logic program is a set of guarded Horn clauses of the form:
H :- G1, ..., Gn | B1, ..., Bn.
The conjunction G1, ... , Gn is called the guard of the clause, and | is the commitment operator. Declaratively, guarded Horn clauses are read as ordinary logical implications:
H if G1 and ... and Gn and B1 and ... and Bn.
However, procedurally, when there are several clauses whose heads H match a given goal, then all of the clauses are executed in parallel, checking whether their guards G1, ..., Gn hold. If the guards of more than one clause hold, then a committed choice is made to one of the clauses, and execution proceeds with the subgoals B1, ..., Bn of the chosen clause. These subgoals can also be executed in parallel. Thus concurrent logic programming implements a form of "don't care nondeterminism", rather than "don't know nondeterminism".
For example, the following concurrent logic program defines a predicate shuffle(Left, Right, Merge), which can be used to shuffle two lists Left and Right, combining them into a single list Merge that preserves the ordering of the two lists Left and Right:
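A sketch of the program in guarded-clause notation, with empty guards written as true:
shuffle([], [], []).
shuffle([Head | Tail], Right, Merge) :-
    true | Merge = [Head | Rest], shuffle(Tail, Right, Rest).
shuffle(Left, [Head | Tail], Merge) :-
    true | Merge = [Head | Rest], shuffle(Left, Tail, Rest).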
Here, [] represents the empty list, and [Head | Tail] represents a list with first element Head followed by list Tail, as in Prolog. (Notice that the first occurrence of | in the second and third clauses is the list constructor, whereas the second occurrence of | is the commitment operator.) The program can be used, for example, to shuffle the lists [ace, queen, king] and [1, 4, 2] by invoking the goal clause:
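In the notation above:
?- shuffle([ace, queen, king], [1, 4, 2], Merge).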
The program will non-deterministically generate a single solution, for example Merge = [ace, queen, 1, king, 4, 2].
Carl Hewitt has argued that, because of the indeterminacy of concurrent computation, concurrent logic programming cannot implement general concurrency. However, according to the logical semantics, any result of a computation of a concurrent logic program is a logical consequence of the program, even though not all logical consequences can be derived.
=== Concurrent constraint logic programming ===
Concurrent constraint logic programming combines concurrent logic programming and constraint logic programming, using constraints to control concurrency. A clause can contain a guard, which is a set of constraints that may block the applicability of the clause. When the guards of several clauses are satisfied, concurrent constraint logic programming makes a committed choice to use only one.
=== Higher-order logic programming ===
Several researchers have extended logic programming with higher-order programming features derived from higher-order logic, such as predicate variables. Such languages include the Prolog extensions HiLog and λProlog.
=== Linear logic programming ===
Basing logic programming within linear logic has resulted in the design of logic programming languages that are considerably more expressive than those based on classical logic. Horn clause programs can only represent state change by the change in arguments to predicates. In linear logic programming, one can use the ambient linear logic to support state change. Some early designs of logic programming languages based on linear logic include LO, Lolli, ACL, and Forum. Forum provides a goal-directed interpretation of all linear logic.
=== Object-oriented logic programming ===
F-logic extends logic programming with objects and the frame syntax.
Logtalk extends the Prolog programming language with support for objects, protocols, and other OOP concepts. It supports most standard-compliant Prolog systems as backend compilers.
=== Transaction logic programming ===
Transaction logic is an extension of logic programming with a logical theory of state-modifying updates. It has both a model-theoretic semantics and a procedural one. An implementation of a subset of Transaction logic is available in the Flora-2 system. Other prototypes are also available.
== See also ==
Automated theorem proving
Boolean satisfiability problem
Constraint logic programming
Control theory
Datalog
Fril
Functional programming
Fuzzy logic
Inductive logic programming
Linear logic
Logic in computer science (includes Formal methods)
Logic programming languages
Programmable logic controller
R++
Reasoning system
Rule-based machine learning
Satisfiability
Syntax and semantics of logic programming
== Citations ==
== Sources ==
=== General introductions ===
Baral, C.; Gelfond, M. (1994). "Logic programming and knowledge representation" (PDF). The Journal of Logic Programming. 19–20: 73–148. doi:10.1016/0743-1066(94)90025-6.
Kowalski, R. A. (1988). "The early years of logic programming" (PDF). Communications of the ACM. 31: 38–43. doi:10.1145/35043.35046. S2CID 12259230. [1]
Lloyd, J. W. (1987). Foundations of Logic Programming (2nd ed.). Springer-Verlag.
=== Other sources ===
John McCarthy. "Programs with common sense". Symposium on Mechanization of Thought Processes. National Physical Laboratory. Teddington, England. 1958.
Miller, Dale; Nadathur, Gopalan; Pfenning, Frank; Scedrov, Andre (1991). "Uniform proofs as a foundation for logic programming". Annals of Pure and Applied Logic.