The first complete treatise on calculus to be written in English and use the Leibniz notation was not published until 1815.
Since the time of Leibniz and Newton, many mathematicians have contributed to the continuing development of calculus. One of the first and most complete works on both infinitesimal and integral calculus was written in 1748 by Maria Gaetana Agnesi.
### Foundations
In calculus, foundations refers to the rigorous development of the subject from axioms and definitions. In early calculus, the use of infinitesimal quantities was thought unrigorous and was fiercely criticized by several authors, most notably Michel Rolle and Bishop Berkeley. Berkeley famously described infinitesimals as the ghosts of departed quantities in his book The Analyst in 1734. Working out a rigorous foundation for calculus occupied mathematicians for much of the century following Newton and Leibniz, and is still to some extent an active area of research today.
Several mathematicians, including Maclaurin, tried to prove the soundness of using infinitesimals, but it would not be until 150 years later when, due to the work of Cauchy and Weierstrass, a way was finally found to avoid mere "notions" of infinitely small quantities. The foundations of differential and integral calculus had been laid.
In Cauchy's Cours d'Analyse, we find a broad range of foundational approaches, including a definition of continuity in terms of infinitesimals, and a (somewhat imprecise) prototype of an (ε, δ)-definition of limit in the definition of differentiation. In his work, Weierstrass formalized the concept of limit and eliminated infinitesimals (although his definition can validate nilsquare infinitesimals). Following the work of Weierstrass, it eventually became common to base calculus on limits instead of infinitesimal quantities, though the subject is still occasionally called "infinitesimal calculus". Bernhard Riemann used these ideas to give a precise definition of the integral. It was also during this period that the ideas of calculus were generalized to the complex plane with the development of complex analysis.
In modern mathematics, the foundations of calculus are included in the field of real analysis, which contains full definitions and proofs of the theorems of calculus. The reach of calculus has also been greatly extended.
Henri Lebesgue invented measure theory, based on earlier developments by Émile Borel, and used it to define integrals of all but the most pathological functions. Laurent Schwartz introduced distributions, which can be used to take the derivative of any function whatsoever.
Limits are not the only rigorous approach to the foundation of calculus. Another way is to use Abraham Robinson's non-standard analysis. Robinson's approach, developed in the 1960s, uses technical machinery from mathematical logic to augment the real number system with infinitesimal and infinite numbers, as in the original Newton-Leibniz conception. The resulting numbers are called hyperreal numbers, and they can be used to give a Leibniz-like development of the usual rules of calculus. There is also smooth infinitesimal analysis, which differs from non-standard analysis in that it mandates neglecting higher-power infinitesimals during derivations. Based on the ideas of F. W. Lawvere and employing the methods of category theory, smooth infinitesimal analysis views all functions as being continuous and incapable of being expressed in terms of discrete entities.
One aspect of this formulation is that the law of excluded middle does not hold. The law of excluded middle is also rejected in constructive mathematics, a branch of mathematics that insists that proofs of the existence of a number, function, or other mathematical object should give a construction of the object. Reformulations of calculus in a constructive framework are generally part of the subject of constructive analysis.
### Significance
While many of the ideas of calculus had been developed earlier in Greece, China, India, Iraq, Persia, and Japan, the use of calculus began in Europe, during the 17th century, when Newton and Leibniz built on the work of earlier mathematicians to introduce its basic principles. The Hungarian polymath John von Neumann wrote of this work,
Applications of differential calculus include computations involving velocity and acceleration, the slope of a curve, and optimization. Applications of integral calculus include computations involving area, volume, arc length, center of mass, work, and pressure.
More advanced applications include power series and Fourier series.
Calculus is also used to gain a more precise understanding of the nature of space, time, and motion. For centuries, mathematicians and philosophers wrestled with paradoxes involving division by zero or sums of infinitely many numbers. These questions arise in the study of motion and area. The ancient Greek philosopher Zeno of Elea gave several famous examples of such paradoxes. Calculus provides tools, especially the limit and the infinite series, that resolve the paradoxes.
## Principles
### Limits and infinitesimals
Calculus is usually developed by working with very small quantities. Historically, the first method of doing so was by infinitesimals. These are objects which can be treated like real numbers but which are, in some sense, "infinitely small". For example, an infinitesimal number could be greater than 0, but less than any number in the sequence 1, 1/2, 1/3, ... and thus less than any positive real number. From this point of view, calculus is a collection of techniques for manipulating infinitesimals.
The symbols $dx$ and $dy$ were taken to be infinitesimal, and the derivative $dy/dx$ was their ratio.
The infinitesimal approach fell out of favor in the 19th century because it was difficult to make the notion of an infinitesimal precise. In the late 19th century, infinitesimals were replaced within academia by the epsilon, delta approach to limits. Limits describe the behavior of a function at a certain input in terms of its values at nearby inputs. They capture small-scale behavior using the intrinsic structure of the real number system (as a metric space with the least-upper-bound property). In this treatment, calculus is a collection of techniques for manipulating certain limits. Infinitesimals get replaced by sequences of smaller and smaller numbers, and the infinitely small behavior of a function is found by taking the limiting behavior for these sequences. Limits were thought to provide a more rigorous foundation for calculus, and for this reason, they became the standard approach during the 20th century.
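In this treatment, the informal idea that $f(x)$ gets arbitrarily close to a limit $L$ as $x$ approaches $a$ is made precise by the standard (ε, δ)-definition:
$$
\lim_{x \to a} f(x) = L \iff \forall \varepsilon > 0\ \exists \delta > 0 : 0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon.
$$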
However, the infinitesimal concept was revived in the 20th century with the introduction of non-standard analysis and smooth infinitesimal analysis, which provided solid foundations for the manipulation of infinitesimals.
### Differential calculus
Differential calculus is the study of the definition, properties, and applications of the derivative of a function. The process of finding the derivative is called differentiation. Given a function and a point in the domain, the derivative at that point is a way of encoding the small-scale behavior of the function near that point. By finding the derivative of a function at every point in its domain, it is possible to produce a new function, called the derivative function or just the derivative of the original function. In formal terms, the derivative is a linear operator which takes a function as its input and produces a second function as its output. This is more abstract than many of the processes studied in elementary algebra, where functions usually input a number and output another number.
For example, if the doubling function is given the input three, then it outputs six, and if the squaring function is given the input three, then it outputs nine. The derivative, however, can take the squaring function as an input. This means that the derivative takes all the information of the squaring function—such as that two is sent to four, three is sent to nine, four is sent to sixteen, and so on—and uses this information to produce another function. The function produced by differentiating the squaring function turns out to be the doubling function.
In more explicit terms the "doubling function" may be denoted by $g(x) = 2x$ and the "squaring function" by $f(x) = x^2$. The "derivative" now takes the function $f(x)$, defined by the expression "$x^2$", as an input, that is all the information—such as that two is sent to four, three is sent to nine, four is sent to sixteen, and so on—and uses this information to output another function, the function $g(x) = 2x$, as will turn out.
In Lagrange's notation, the symbol for a derivative is an apostrophe-like mark called a prime.
Thus, the derivative of a function called $f$ is denoted by $f'$, pronounced "f prime" or "f dash". For instance, if $f(x) = x^2$ is the squaring function, then $f'(x) = 2x$ is its derivative (the doubling function from above).
If the input of the function represents time, then the derivative represents change with respect to time. For example, if $f$ is a function that takes time as input and gives the position of a ball at that time as output, then the derivative of $f$ is how the position is changing in time, that is, it is the velocity of the ball.
If a function is linear (that is if the graph of the function is a straight line), then the function can be written as $y = mx + b$, where $x$ is the independent variable, $y$ is the dependent variable, $b$ is the y-intercept, and:
$$
m= \frac{\text{rise}}{\text{run}}= \frac{\text{change in } y}{\text{change in } x} = \frac{\Delta y}{\Delta x}.
$$
This gives an exact value for the slope of a straight line. If the graph of the function is not a straight line, however, then the change in $y$ divided by the change in $x$ varies. Derivatives give an exact meaning to the notion of change in output with respect to change in input. To be concrete, let $f$ be a function, and fix a point $a$ in the domain of $f$. $(a, f(a))$ is a point on the graph of the function. If $h$ is a number close to zero, then $a + h$ is a number close to $a$. Therefore, $(a + h, f(a + h))$ is close to $(a, f(a))$.
The slope between these two points is
$$
m = \frac{f(a+h) - f(a)}{(a+h) - a} = \frac{f(a+h) - f(a)}{h}.
$$
This expression is called a difference quotient. A line through two points on a curve is called a secant line, so $m$ is the slope of the secant line between $(a, f(a))$ and $(a + h, f(a + h))$. The secant line is only an approximation to the behavior of the function at the point $a$ because it does not account for what happens between $a$ and $a + h$. It is not possible to discover the behavior at $a$ by setting $h$ to zero because this would require dividing by zero, which is undefined. The derivative is defined by taking the limit as $h$ tends to zero, meaning that it considers the behavior of $f$ for all small values of $h$ and extracts a consistent value for the case when $h$ equals zero:
$$
\lim_{h \to 0}{f(a+h) - f(a)\over{h}}.
$$
Geometrically, the derivative is the slope of the tangent line to the graph of $f$ at $a$. The tangent line is a limit of secant lines just as the derivative is a limit of difference quotients. For this reason, the derivative is sometimes called the slope of the function $f$.
Here is a particular example, the derivative of the squaring function at the input 3. Let $f(x) = x^2$ be the squaring function.
$$
\begin{align}f'(3) &=\lim_{h \to 0}{(3+h)^2 - 3^2\over{h}} \\
&=\lim_{h \to 0}{9 + 6h + h^2 - 9\over{h}} \\
&=\lim_{h \to 0}{6h + h^2\over{h}} \\
&=\lim_{h \to 0} (6 + h) \\
&= 6
\end{align}
$$
The slope of the tangent line to the squaring function at the point (3, 9) is 6, that is to say, it is going up six times as fast as it is going to the right. The limit process just described can be performed for any point in the domain of the squaring function. This defines the derivative function of the squaring function or just the derivative of the squaring function for short. A computation similar to the one above shows that the derivative of the squaring function is the doubling function.
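The same limit process can be checked numerically; the short Python sketch below (the step sizes are arbitrary illustrative choices) evaluates the difference quotient of the squaring function at 3 for shrinking values of $h$:

```python
# Approximate f'(3) for f(x) = x**2 by evaluating difference quotients.
def difference_quotient(f, a, h):
    """Slope of the secant line through (a, f(a)) and (a + h, f(a + h))."""
    return (f(a + h) - f(a)) / h

square = lambda x: x ** 2

for h in [1.0, 0.1, 0.01, 0.001]:
    print(h, difference_quotient(square, 3.0, h))
# The printed slopes (7.0, 6.1, 6.01, 6.001, up to rounding) approach the derivative 6.
```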
#### Leibniz notation
A common notation, introduced by Leibniz, for the derivative in the example above is
$$
\begin{align}
y&=x^2 \\
\frac{dy}{dx}&=2x.
\end{align}
$$
In an approach based on limits, the symbol $dy/dx$ is to be interpreted not as the quotient of two numbers but as a shorthand for the limit computed above. Leibniz, however, did intend it to represent the quotient of two infinitesimally small numbers, $dy$ being the infinitesimally small change in $y$ caused by an infinitesimally small change $dx$ applied to $x$. We can also think of $\frac{d}{dx}$ as a differentiation operator, which takes a function as an input and gives another function, the derivative, as the output. For example:
$$
\frac{d}{dx}(x^2)=2x.
$$
In this usage, the $dx$ in the denominator is read as "with respect to $x$".
Another example of correct notation could be:
$$
\begin{align}
g(t) &= t^2 + 2t + 4 \\
{d \over dt}g(t) &= 2t + 2
\end{align}
$$
Even when calculus is developed using limits rather than infinitesimals, it is common to manipulate symbols like $dx$ and $dy$ as if they were real numbers; although it is possible to avoid such manipulations, they are sometimes notationally convenient in expressing operations such as the total derivative.
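As an illustrative check, assuming the SymPy library is available, both derivatives above can be computed symbolically:

```python
import sympy as sp

x, t = sp.symbols('x t')

print(sp.diff(x**2, x))            # d/dx (x^2)          -> 2*x
print(sp.diff(t**2 + 2*t + 4, t))  # d/dt (t^2 + 2t + 4)  -> 2*t + 2
```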
### Integral calculus
Integral calculus is the study of the definitions, properties, and applications of two related concepts, the indefinite integral and the definite integral. The process of finding the value of an integral is called integration. The indefinite integral, also known as the antiderivative, is the inverse operation to the derivative. $F$ is an indefinite integral of $f$ when $f$ is a derivative of $F$. (This use of lower- and upper-case letters for a function and its indefinite integral is common in calculus.)
The definite integral inputs a function and outputs a number, which gives the algebraic sum of areas between the graph of the input and the x-axis. The technical definition of the definite integral involves the limit of a sum of areas of rectangles, called a Riemann sum.
A motivating example is the distance traveled in a given time. If the speed is constant, only multiplication is needed:
$$
\mathrm{Distance} = \mathrm{Speed} \cdot \mathrm{Time}
$$
But if the speed changes, a more powerful method of finding the distance is necessary. One such method is to approximate the distance traveled by breaking up the time into many short intervals of time, then multiplying the time elapsed in each interval by one of the speeds in that interval, and then taking the sum (a Riemann sum) of the approximate distance traveled in each interval. The basic idea is that if only a short time elapses, then the speed will stay more or less the same. However, a Riemann sum only gives an approximation of the distance traveled. We must take the limit of all such Riemann sums to find the exact distance traveled.
When velocity is constant, the total distance traveled over the given time interval can be computed by multiplying velocity and time. For example, traveling a steady 50 mph for 3 hours results in a total distance of 150 miles. Plotting the velocity as a function of time yields a rectangle with a height equal to the velocity and a width equal to the time elapsed. Therefore, the product of velocity and time also calculates the rectangular area under the (constant) velocity curve. This connection between the area under a curve and the distance traveled can be extended to any irregularly shaped region exhibiting a fluctuating velocity over a given period. If $f(x)$ represents speed as it varies over time, the distance traveled between the times represented by $a$ and $b$ is the area of the region between $f(x)$ and the $x$-axis, between $x = a$ and $x = b$.
To approximate that area, an intuitive method would be to divide up the distance between $a$ and $b$ into several equal segments, the length of each segment represented by the symbol $\Delta x$. For each small segment, we can choose one value of the function $f(x)$. Call that value $h$. Then the area of the rectangle with base $\Delta x$ and height $h$ gives the distance (time $\Delta x$ multiplied by speed $h$) traveled in that segment.
Associated with each segment is the average value of the function above it, $f(x) = h$. The sum of all such rectangles gives an approximation of the area between the axis and the curve, which is an approximation of the total distance traveled. A smaller value for $\Delta x$ will give more rectangles and in most cases a better approximation, but for an exact answer, we need to take a limit as $\Delta x$ approaches zero.
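A minimal sketch of this rectangle approximation in code, using an invented speed function $f(t) = 3t^2$ on the interval $[0, 2]$ (whose exact integral is 8):

```python
# Left-endpoint Riemann sum approximating the distance traveled with
# speed f(t) = 3*t**2 over the time interval [0, 2] (exact value: 2**3 = 8).
def riemann_sum(f, a, b, n):
    dx = (b - a) / n                                # width of each segment
    return sum(f(a + i * dx) * dx for i in range(n))

speed = lambda t: 3 * t ** 2

for n in [10, 100, 1000]:
    print(n, riemann_sum(speed, 0.0, 2.0, n))
# The sums approach 8 as the number of rectangles grows (that is, as dx -> 0).
```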
The symbol of integration is $\int$, an elongated S chosen to suggest summation. The definite integral is written as:
$$
\int_a^b f(x)\, dx
$$
and is read "the integral from a to b of f-of-x with respect to x." The Leibniz notation $dx$ is intended to suggest dividing the area under the curve into an infinite number of rectangles so that their width $\Delta x$ becomes the infinitesimally small $dx$.
The indefinite integral, or antiderivative, is written:
$$
\int f(x)\, dx.
$$
Functions differing by only a constant have the same derivative, and it can be shown that the antiderivative of a given function is a family of functions differing only by a constant.
Since the derivative of the function $y = x^2 + C$, where $C$ is any constant, is $y' = 2x$, the antiderivative of the latter is given by:
$$
\int 2x\, dx = x^2 + C.
$$
The unspecified constant present in the indefinite integral or antiderivative is known as the constant of integration.
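A quick symbolic check of this antiderivative, again assuming SymPy (which omits the arbitrary constant in its output):

```python
import sympy as sp

x = sp.symbols('x')
print(sp.integrate(2*x, x))   # -> x**2   (the constant of integration C is omitted)
```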
### Fundamental theorem
The fundamental theorem of calculus states that differentiation and integration are inverse operations. More precisely, it relates the values of antiderivatives to definite integrals. Because it is usually easier to compute an antiderivative than to apply the definition of a definite integral, the fundamental theorem of calculus provides a practical way of computing definite integrals. It can also be interpreted as a precise statement of the fact that differentiation is the inverse of integration.
The fundamental theorem of calculus states: If a function $f$ is continuous on the interval $[a, b]$ and if $F$ is a function whose derivative is $f$ on the interval $(a, b)$, then
$$
\int_{a}^{b} f(x)\,dx = F(b) - F(a).
$$
Furthermore, for every $x$ in the interval $(a, b)$,
$$
\frac{d}{dx}\int_a^x f(t)\, dt = f(x).
$$
This realization, made by both Newton and Leibniz, was key to the proliferation of analytic results after their work became known. (The extent to which Newton and Leibniz were influenced by immediate predecessors, and particularly what Leibniz may have learned from the work of Isaac Barrow, is difficult to determine because of the priority dispute between them.) The fundamental theorem provides an algebraic method of computing many definite integrals—without performing limit processes—by finding formulae for antiderivatives. It is also a prototype solution of a differential equation. Differential equations relate an unknown function to its derivatives and are ubiquitous in the sciences.
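As a brief worked illustration, with $f(x) = 2x$, antiderivative $F(x) = x^2$, and the interval $[0, 3]$ chosen for concreteness,
$$
\int_{0}^{3} 2x \, dx = F(3) - F(0) = 3^2 - 0^2 = 9.
$$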
## Applications
Calculus is used in every branch of the physical sciences, actuarial science, computer science, statistics, engineering, economics, business, medicine, demography, and in other fields wherever a problem can be mathematically modeled and an optimal solution is desired. It allows one to go from (non-constant) rates of change to the total change or vice versa, and many times in studying a problem we know one and are trying to find the other. Calculus can be used in conjunction with other mathematical disciplines. For example, it can be used with linear algebra to find the "best fit" linear approximation for a set of points in a domain. Or, it can be used in probability theory to determine the expectation value of a continuous random variable given a probability density function. In analytic geometry, the study of graphs of functions, calculus is used to find high points and low points (maxima and minima), slope, concavity and inflection points. Calculus is also used to find approximate solutions to equations; in practice, it is the standard way to solve differential equations and do root finding in most applications.
Examples are methods such as Newton's method, fixed point iteration, and linear approximation. For instance, spacecraft use a variation of the Euler method to approximate curved courses within zero-gravity environments.
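A minimal sketch of one such method, Newton's method, which uses the derivative to refine successive approximations to a root (the target function, starting guess, and tolerance below are illustrative choices):

```python
# Newton's method: iterate x_{n+1} = x_n - f(x_n) / f'(x_n) until the step is tiny.
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Approximate sqrt(2) as the positive root of f(x) = x**2 - 2.
print(newton(lambda x: x**2 - 2, lambda x: 2*x, x0=1.0))   # ~1.4142135623730951
```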
Physics makes particular use of calculus; all concepts in classical mechanics and electromagnetism are related through calculus. The mass of an object of known density, the moment of inertia of objects, and the potential energies due to gravitational and electromagnetic forces can all be found by the use of calculus. An example of the use of calculus in mechanics is Newton's second law of motion, which states that the derivative of an object's momentum with respect to time equals the net force upon it. Alternatively, Newton's second law can be expressed by saying that the net force equals the object's mass times its acceleration, which is the time derivative of velocity and thus the second time derivative of spatial position. Starting from knowing how an object is accelerating, we use calculus to derive its path.
Maxwell's theory of electromagnetism and Einstein's theory of general relativity are also expressed in the language of differential calculus. Chemistry also uses calculus in determining reaction rates and in studying radioactive decay. In biology, population dynamics starts with reproduction and death rates to model population changes.
Green's theorem, which gives the relationship between a line integral around a simple closed curve C and a double integral over the plane region D bounded by C, is applied in an instrument known as a planimeter, which is used to calculate the area of a flat surface on a drawing. For example, it can be used to calculate the amount of area taken up by an irregularly shaped flower bed or swimming pool when designing the layout of a piece of property.
In the realm of medicine, calculus can be used to find the optimal branching angle of a blood vessel to maximize flow. Calculus can be applied to understand how quickly a drug is eliminated from a body or how quickly a cancerous tumor grows.
Domain adaptation is a field associated with machine learning and transfer learning. It addresses the challenge of training a model on one data distribution (the source domain) and applying it to a related but different data distribution (the target domain).
A common example is spam filtering, where a model trained on emails from one user (source domain) is adapted to handle emails for another user with significantly different patterns (target domain).
Domain adaptation techniques can also leverage unrelated data sources to improve learning. When multiple source distributions are involved, the problem extends to multi-source domain adaptation.
Domain adaptation is a specialized area within transfer learning. In domain adaptation, the source and target domains share the same feature space but differ in their data distributions. In contrast, transfer learning encompasses broader scenarios, including cases where the target domain’s feature space differs from that of the source domain(s).
## Classification of domain adaptation problems
Domain adaptation setups are classified in two different ways; according to the distribution shift between the domains, and according to the available data from the target domain.
### Distribution shifts
Common distribution shifts are classified as follows:
- Covariate Shift occurs when the input distributions of the source and target change, but the relationship between inputs and labels remains unchanged. The above-mentioned spam filtering example typically falls in this category. Namely, the distributions (patterns) of emails may differ between the domains, but emails labeled as spam in one domain should similarly be labeled in the other.
- Prior Shift (Label Shift) occurs when the label distribution differs between the source and target datasets, while the conditional distribution of features given labels remains the same. An example is a classifier of hair color in images from Italy (source domain) and Norway (target domain). The proportions of hair colors (labels) differ, but images within classes like blond and black-haired populations remain consistent across domains. A classifier for the Norway population can exploit this prior knowledge of class proportions to improve its estimates.
- Concept Shift (Conditional Shift) refers to changes in the relationship between features and labels, even if the input distribution remains the same. For instance, in medical diagnosis, the same symptoms (inputs) may indicate entirely different diseases (labels) in different populations (domains).
### Data available during training
Domain adaptation problems typically assume that some data from the target domain is available during training.
Problems can be classified according to the type of this available data:
- Unsupervised: Unlabeled data from the target domain is available, but no labeled data. In the above-mentioned example of spam filtering, this corresponds to the case where emails from the target domain (user) are available, but they are not labeled as spam. Domain adaptation methods can benefit from such unlabeled data, by comparing its distribution (patterns) with the labeled source domain data.
- Semi-supervised: Most data that is available from the target domain is unlabelled, but some labeled data is also available. In the above-mentioned case of spam filter design, this corresponds to the case that the target user has labeled some emails as being spam or not.
- Supervised: All data that is available from the target domain is labeled. In this case, domain adaptation reduces to refinement of the source domain predictor. In the above-mentioned example classification of hair-color from images, this could correspond to the refinement of a network already trained on a large dataset of labeled images from Italy, using newly available labeled images from Norway.
## Formalization
Let $X$ be the input space (or description space) and let $Y$ be the output space (or label space). The objective of a machine learning algorithm is to learn a mathematical model (a hypothesis) $h: X \to Y$ able to attach a label from $Y$ to an example from $X$. This model is learned from a learning sample $S = \{(x_i, y_i) \in (X \times Y)\}_{i=1}^m$.
Usually in supervised learning (without domain adaptation), we suppose that the examples $(x_i, y_i) \in S$ are drawn i.i.d. from a distribution $D_S$ of support $X \times Y$ (unknown and fixed). The objective is then to learn $h$ (from $S$) such that it commits the least error possible for labelling new examples coming from the distribution $D_S$.
The main difference between supervised learning and domain adaptation is that in the latter situation we study two different (but related) distributions $D_S$ and $D_T$ on $X \times Y$. The domain adaptation task then consists of the transfer of knowledge from the source domain $D_S$ to the target one $D_T$. The goal is then to learn $h$ (from labeled or unlabelled samples coming from the two domains) such that it commits as little error as possible on the target domain $D_T$.
The major issue is the following: if a model is learned from a source domain, what is its capacity to correctly label data coming from the target domain?
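One standard way to make "as little error as possible on the target domain" precise, sketched here in the usual notation, is to ask for a hypothesis $h$ with small target risk,
$$
R_{D_T}(h) = \Pr_{(x, y) \sim D_T}\left[ h(x) \neq y \right],
$$
even though the labeled examples available for learning $h$ are drawn mostly, or entirely, from $D_S$.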
## Four algorithmic principles
### Reweighting algorithms
The objective is to reweight the source labeled sample such that it "looks like" the target sample (in terms of the error measure considered).
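One common instantiation of this idea is importance weighting via a domain classifier (density-ratio estimation). A minimal sketch, assuming scikit-learn and NumPy; the estimator choice and variable names are illustrative:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def importance_weights(X_source, X_target):
    """Estimate w(x) ~ p_target(x) / p_source(x) with a domain classifier.

    A classifier is trained to tell target (label 1) from source (label 0)
    examples; its odds on each source point estimate the density ratio used
    to reweight the source sample.
    """
    X = np.vstack([X_source, X_target])
    domain = np.concatenate([np.zeros(len(X_source)), np.ones(len(X_target))])
    clf = LogisticRegression(max_iter=1000).fit(X, domain)
    p_target = clf.predict_proba(X_source)[:, 1]
    return p_target / np.clip(1.0 - p_target, 1e-6, None)

# A downstream learner can then use the weights, e.g.:
# model.fit(X_source, y_source, sample_weight=importance_weights(X_source, X_target))
```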
### Iterative algorithms
A method for adapting consists in iteratively "auto-labeling" the target examples.
The principle is simple:
1. a model $h$ is learned from the labeled examples;
2. $h$ automatically labels some target examples;
3. a new model is learned from the new labeled examples.
Note that there exist other iterative approaches, but they usually need target labeled examples.
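A compact sketch of this auto-labeling (self-training) loop, assuming scikit-learn and NumPy; the estimator, confidence threshold, and number of rounds are illustrative choices:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_training(X_src, y_src, X_tgt, rounds=3, threshold=0.9):
    """Iteratively pseudo-label confident target examples and retrain."""
    model = LogisticRegression(max_iter=1000).fit(X_src, y_src)      # step 1
    X_train, y_train = X_src, y_src
    for _ in range(rounds):
        proba = model.predict_proba(X_tgt)
        confident = proba.max(axis=1) >= threshold                   # pick confident targets
        if not confident.any():
            break
        pseudo = model.predict(X_tgt[confident])                     # step 2: auto-label
        X_train = np.vstack([X_train, X_tgt[confident]])
        y_train = np.concatenate([y_train, pseudo])
        model = LogisticRegression(max_iter=1000).fit(X_train, y_train)  # step 3: retrain
    return model
```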
### Search of a common representation space
The goal is to find or construct a common representation space for the two domains. The objective is to obtain a space in which the domains are close to each other while keeping good performances on the source labeling task.
This can be achieved through the use of adversarial machine learning techniques, where feature representations from samples in different domains are encouraged to be indistinguishable.
### Hierarchical Bayesian Model
The goal is to construct a Bayesian hierarchical model $p(n)$, which is essentially a factorization model for counts $n$, to derive domain-dependent latent representations allowing both domain-specific and globally shared latent factors.
## Software
Several compilations of domain adaptation and transfer learning algorithms have been implemented over the past decades:
- SKADA (Python)
- ADAPT (Python)
- TLlib (Python)
- Domain-Adaptation-Toolbox (MATLAB)
A descriptive statistic (in the count noun sense) is a summary statistic that quantitatively describes or summarizes features from a collection of information, while descriptive statistics (in the mass noun sense) is the process of using and analysing those statistics. Descriptive statistics is distinguished from inferential statistics (or inductive statistics) by its aim to summarize a sample, rather than use the data to learn about the population that the sample of data is thought to represent. This generally means that descriptive statistics, unlike inferential statistics, is not developed on the basis of probability theory, and are frequently nonparametric statistics. Even when a data analysis draws its main conclusions using inferential statistics, descriptive statistics are generally also presented. For example, in papers reporting on human subjects, typically a table is included giving the overall sample size, sample sizes in important subgroups (e.g., for each treatment or exposure group), and demographic or clinical characteristics such as the average age, the proportion of subjects of each sex, the proportion of subjects with related co-morbidities, etc.
Some measures that are commonly used to describe a data set are measures of central tendency and measures of variability or dispersion.
Measures of central tendency include the mean, median and mode, while measures of variability include the standard deviation (or variance), the minimum and maximum values of the variables, kurtosis and skewness.
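A short illustration with Python's standard library, using made-up data values:

```python
import statistics

data = [2, 3, 3, 5, 7, 8, 8, 8, 12]

print(statistics.mean(data))    # central tendency: arithmetic mean (~6.22)
print(statistics.median(data))  # central tendency: median (7)
print(statistics.mode(data))    # central tendency: mode (8)
print(statistics.stdev(data))   # dispersion: sample standard deviation
print(max(data) - min(data))    # dispersion: range (10)
```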
## Use in statistical analysis
Descriptive statistics provide simple summaries about the sample and about the observations that have been made. Such summaries may be either quantitative, i.e. summary statistics, or visual, i.e. simple-to-understand graphs. These summaries may either form the basis of the initial description of the data as part of a more extensive statistical analysis, or they may be sufficient in and of themselves for a particular investigation.
For example, the shooting percentage in basketball is a descriptive statistic that summarizes the performance of a player or a team.
This number is the number of shots made divided by the number of shots taken. For example, a player who shoots 33% is making approximately one shot in every three. The percentage summarizes or describes multiple discrete events. Consider also the grade point average. This single number describes the general performance of a student across the range of their course experiences.
The use of descriptive and summary statistics has an extensive history and, indeed, the simple tabulation of populations and of economic data was the first way the topic of statistics appeared. More recently, a collection of summarisation techniques has been formulated under the heading of exploratory data analysis: an example of such a technique is the box plot.
In the business world, descriptive statistics provides a useful summary of many types of data. For example, investors and brokers may use a historical account of return behaviour by performing empirical and analytical analyses on their investments in order to make better investing decisions in the future.
### Univariate analysis
Univariate analysis involves describing the distribution of a single variable, including its central tendency (including the mean, median, and mode) and dispersion (including the range and quartiles of the data-set, and measures of spread such as the variance and standard deviation). The shape of the distribution may also be described via indices such as skewness and kurtosis. Characteristics of a variable's distribution may also be depicted in graphical or tabular format, including histograms and stem-and-leaf display.
### Bivariate and multivariate analysis
When a sample consists of more than one variable, descriptive statistics may be used to describe the relationship between pairs of variables. In this case, descriptive statistics include:
- Cross-tabulations and contingency tables
- Graphical representation via scatterplots
- Quantitative measures of dependence
- Descriptions of conditional distributions
The main reason for differentiating univariate and bivariate analysis is that bivariate analysis is not only a simple descriptive analysis, but also it describes the relationship between two different variables.
Quantitative measures of dependence include correlation (such as Pearson's r when both variables are continuous, or Spearman's rho if one or both are not) and covariance (which reflects the scale variables are measured on). The slope, in regression analysis, also reflects the relationship between variables. The unstandardised slope indicates the unit change in the criterion variable for a one unit change in the predictor. The standardised slope indicates this change in standardised (z-score) units. Highly skewed data are often transformed by taking logarithms. The use of logarithms makes graphs more symmetrical and look more similar to the normal distribution, making them easier to interpret intuitively.
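A brief sketch of these bivariate measures, assuming SciPy and NumPy are available; the paired data below are invented for illustration:

```python
import numpy as np
from scipy import stats

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 4.3, 5.9, 8.2, 9.8])

r, _ = stats.pearsonr(x, y)                       # linear association (both continuous)
rho, _ = stats.spearmanr(x, y)                    # rank-based association
slope, intercept, *rest = stats.linregress(x, y)  # unstandardised regression slope

print(r, rho, slope)
```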
Eventual consistency is a consistency model used in distributed computing to achieve high availability. Put simply: if no new updates are made to a given data item, eventually all accesses to that item will return the last updated value. Eventual consistency, also called optimistic replication, is widely deployed in distributed systems and has origins in early mobile computing projects. A system that has achieved eventual consistency is often said to have converged, or achieved replica convergence. Eventual consistency is a weak guarantee – most stronger models, like linearizability, are trivially eventually consistent.
Eventually-consistent services are often classified as providing BASE semantics (basically-available, soft-state, eventual consistency), in contrast to traditional ACID (atomicity, consistency, isolation, durability). In chemistry, a base is the opposite of an acid, which helps in remembering the acronym.
According to the same resource, these are the rough definitions of each term in BASE:
- Basically available: reading and writing operations are available as much as possible (using all nodes of a database cluster), but might not be consistent (the write might not persist after conflicts are reconciled, and the read might not get the latest write)
- Soft-state: without consistency guarantees, after some amount of time, we only have some probability of knowing the state, since it might not yet have converged
- Eventually consistent: If we execute some writes and then the system functions long enough, we can know the state of the data; any further reads of that data item will return the same value
Eventual consistency faces criticism for adding complexity to distributed software applications. This complexity arises because eventual consistency provides only a liveness guarantee (ensuring reads eventually return the same value) without safety guarantees—allowing any intermediate value before convergence. Application developers find this challenging because it differs from single-threaded programming, where variables reliably return their assigned values immediately.
With weak consistency guarantees, developers must carefully consider these limitations, as incorrect assumptions about consistency levels can lead to subtle bugs that only surface during network failures or high concurrency.
## Conflict resolution
In order to ensure replica convergence, a system must reconcile differences between multiple copies of distributed data. This consists of two parts:
- exchanging versions or updates of data between servers (often known as anti-entropy); and
- choosing an appropriate final state when concurrent updates have occurred, called reconciliation.
The most appropriate approach to reconciliation depends on the application. A widespread approach is "last writer wins". Another is to invoke a user-specified conflict handler. Timestamps and vector clocks are often used to detect concurrency between updates. Some people use "first writer wins" in situations where "last writer wins" is unacceptable.
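A minimal sketch of "last writer wins" reconciliation keyed on timestamps; the data layout and tie-breaking rule are illustrative choices that a real system must pin down deterministically:

```python
def last_writer_wins(replica_a, replica_b):
    """Merge two replicas mapping key -> (timestamp, value), keeping the newest write.

    Ties are broken here by preferring replica_a, a detail a real system
    must resolve deterministically (e.g. by node id).
    """
    merged = dict(replica_a)
    for key, (ts, value) in replica_b.items():
        if key not in merged or ts > merged[key][0]:
            merged[key] = (ts, value)
    return merged

a = {"cart": (105, ["book"]), "name": (90, "Ada")}
b = {"cart": (110, ["book", "pen"]), "theme": (70, "dark")}
print(last_writer_wins(a, b))
# {'cart': (110, ['book', 'pen']), 'name': (90, 'Ada'), 'theme': (70, 'dark')}
```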
Reconciliation of concurrent writes must occur sometime before the next read, and can be scheduled at different instants:
- Read repair: The correction is done when a read finds an inconsistency. This slows down the read operation.
- Write repair: The correction takes place during a write operation, slowing down the write operation.
- Asynchronous repair: The correction is not part of a read or write operation.
## Strong eventual consistency
Whereas eventual consistency is only a liveness guarantee (updates will be observed eventually), strong eventual consistency (SEC) adds the safety guarantee that any two nodes that have received the same (unordered) set of updates will be in the same state. If, furthermore, the system is monotonic, the application will never suffer rollbacks. A common approach to ensure SEC is conflict-free replicated data types.
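As an illustrative sketch of such a data type, a grow-only counter (G-Counter) merges by element-wise maximum; the merge is commutative, associative, and idempotent, so replicas that have received the same set of updates converge to the same state:

```python
class GCounter:
    """Grow-only counter CRDT: one slot per node; merge takes element-wise max."""

    def __init__(self, node_id):
        self.node_id = node_id
        self.counts = {}                  # node_id -> that node's local total

    def increment(self, amount=1):
        self.counts[self.node_id] = self.counts.get(self.node_id, 0) + amount

    def value(self):
        return sum(self.counts.values())

    def merge(self, other):
        for node, count in other.counts.items():
            self.counts[node] = max(self.counts.get(node, 0), count)

a, b = GCounter("a"), GCounter("b")
a.increment(3)
b.increment(2)
a.merge(b)
b.merge(a)
print(a.value(), b.value())   # both replicas converge to 5
```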
A penetration test, colloquially known as a pentest, is an authorized simulated cyberattack on a computer system, performed to evaluate the security of the system; this is not to be confused with a vulnerability assessment. The test is performed to identify weaknesses (or vulnerabilities), including the potential for unauthorized parties to gain access to the system's features and data, as well as strengths, enabling a full risk assessment to be completed.
The process typically identifies the target systems and a particular goal, then reviews available information and undertakes various means to attain that goal. A penetration test target may be a white box (about which background and system information are provided in advance to the tester) or a black box (about which only basic information other than the company name is provided). A gray box penetration test is a combination of the two (where limited knowledge of the target is shared with the auditor). A penetration test can help identify a system's vulnerabilities to attack and estimate how vulnerable it is.
Security issues that the penetration test uncovers should be reported to the system owner. Penetration test reports may also assess potential impacts to the organization and suggest countermeasures to reduce the risk.
The UK National Cyber Security Center describes penetration testing as: "A method for gaining assurance in the security of an IT system by attempting to breach some or all of that system's security, using the same tools and techniques as an adversary might."
The goals of a penetration test vary depending on the type of approved activity for any given engagement, with the primary goal focused on finding vulnerabilities that could be exploited by a nefarious actor, and informing the client of those vulnerabilities along with recommended mitigation strategies.
Penetration tests are a component of a full security audit. For example, the Payment Card Industry Data Security Standard requires penetration testing on a regular schedule, and after system changes. Penetration testing also can support risk assessments as outlined in the NIST Risk Management Framework SP 800-53.
Several standard frameworks and methodologies exist for conducting penetration tests. These include the Open Source Security Testing Methodology Manual (OSSTMM), the Penetration Testing Execution Standard (PTES), the NIST Special Publication 800-115, the Information System Security Assessment Framework (ISSAF) and the OWASP Testing Guide. CREST, a not for profit professional body for the technical cyber security industry, provides its CREST Defensible Penetration Test standard that provides the industry with guidance for commercially reasonable assurance activity when carrying out penetration tests.
Flaw hypothesis methodology is a systems analysis and penetration prediction technique where a list of hypothesized flaws in a software system are compiled through analysis of the specifications and documentation for the system. The list of hypothesized flaws is then prioritized on the basis of the estimated probability that a flaw actually exists, and on the ease of exploiting it to the extent of control or compromise. The prioritized list is used to direct the actual testing of the system.
There are different types of penetration testing, depending upon the goal of the organization which include: Network (external and internal), Wireless, Web Application, Social Engineering, and Remediation Verification.
Even more recently, a common pen testing tool called a Flipper was used to hack the MGM casinos in 2023 by a group called Scattered Spider, showing the versatility and power of some of the tools of the trade.
## History
By the mid-1960s, growing popularity of time-sharing computer systems that made resources accessible over communication lines created new security concerns. As the scholars Deborah Russell and G. T. Gangemi Sr. explain, "The 1960s marked the true beginning of the age of computer security."
In June 1965, for example, several of the U.S.'s leading computer security experts held one of the first major conferences on system security—hosted by the government contractor, the System Development Corporation (SDC). During the conference, someone noted that one SDC employee had been able to easily undermine various system safeguards added to SDC's AN/FSQ-32 time-sharing computer system. In hopes that further system security study would be useful, attendees requested "...studies to be conducted in such areas as breaking security protection in the time-shared system." In other words, the conference participants initiated one of the first formal requests to use computer penetration as a tool for studying system security.
At the Spring 1968 Joint Computer Conference, many leading computer specialists again met to discuss system security concerns. During this conference, the computer security experts Willis Ware, Harold Petersen, and Rein Turn, all of the RAND Corporation, and Bernard Peters of the National Security Agency (NSA), all used the phrase "penetration" to describe an attack against a computer system. In a paper, Ware referred to the military's remotely accessible time-sharing systems, warning that "Deliberate attempts to penetrate such computer systems must be anticipated." His colleagues Petersen and Turn shared the same concerns, observing that online communication systems "...are vulnerable to threats to privacy," including "deliberate penetration." Bernard Peters of the NSA made the same point, insisting that computer input and output "...could provide large amounts of information to a penetrating program." During the conference, computer penetration would become formally identified as a major threat to online computer systems.
The threat that computer penetration posed was next outlined in a major report organized by the United States Department of Defense (DoD) in late 1967. Essentially, DoD officials turned to Willis Ware to lead a task force of experts from NSA, CIA, DoD, academia, and industry to formally assess the security of time-sharing computer systems. By relying on many papers presented during the Spring 1967 Joint Computer Conference, the task force largely confirmed the threat to system security that computer penetration posed. Ware's report was initially classified, but many of the country's leading computer experts quickly identified the study as the definitive document on computer security. Jeffrey R. Yost of the Charles Babbage Institute has more recently described the Ware report as "...by far the most important and thorough study on technical and operational issues regarding secure computing systems of its time period." In effect, the Ware report reaffirmed the major threat posed by computer penetration to the new online time-sharing computer systems.
To better understand system weaknesses, the federal government and its contractors soon began organizing teams of penetrators, known as tiger teams, to use computer penetration to test system security. Deborah Russell and G. T. Gangemi Sr. stated that during the 1970s "...'tiger teams' first emerged on the computer scene. Tiger teams were government and industry-sponsored teams of crackers who attempted to break down the defenses of computer systems in an effort to uncover, and eventually patch, security holes."
A leading scholar on the history of computer security, Donald MacKenzie, similarly points out that, "RAND had done some penetration studies (experiments in circumventing computer security controls) of early time-sharing systems on behalf of the government." Jeffrey R. Yost of the Charles Babbage Institute, in his own work on the history of computer security, also acknowledges that both the RAND Corporation and the SDC had "engaged in some of the first so-called 'penetration studies' to try to infiltrate time-sharing systems in order to test their vulnerability." In virtually all these early studies, tiger teams successfully broke into all targeted computer systems, as the country's time-sharing systems had poor defenses.
Of early tiger team actions, efforts at the RAND Corporation demonstrated the usefulness of penetration as a tool for assessing system security. At the time, one RAND analyst noted that the tests had "...demonstrated the practicality of system-penetration as a tool for evaluating the effectiveness and adequacy of implemented data security safeguards." In addition, a number of the RAND analysts insisted that penetration test exercises offered several benefits that justified their continued use. As they noted in one paper, "A penetrator seems to develop a diabolical frame of mind in his search for operating system weaknesses and incompleteness, which is difficult to emulate." For these reasons and others, many analysts at RAND recommended the continued study of penetration techniques for their usefulness in assessing system security.
Perhaps the leading computer penetration expert during these formative years was James P. Anderson, who had worked with the NSA, RAND, and other government agencies to study system security. In early 1971, the U.S. Air Force contracted Anderson's private company to study the security of its time-sharing system at the Pentagon. In his study, Anderson outlined a number of major factors involved in computer penetration. Anderson described a general attack sequence in steps:
1. Find an exploitable vulnerability.
1. Design an attack around it.
1. Test the attack.
1. Seize a line in use.
1. Enter the attack.
1. Exploit the entry for information recovery.
Over time, Anderson's description of general computer penetration steps helped guide many other security experts, who relied on this technique to assess time-sharing computer system security.
In the following years, computer penetration as a tool for security assessment became more refined and sophisticated. In the early 1980s, the journalist William Broad briefly summarized the ongoing efforts of tiger teams to assess system security. As Broad reported, the DoD-sponsored report by Willis Ware "...showed how spies could actively penetrate computers, steal or copy electronic files and subvert the devices that normally guard top-secret information. The study touched off more than a decade of quiet activity by elite groups of computer scientists working for the Government who tried to break into sensitive computers. They succeeded in every attempt."
While these various studies may have suggested that computer security in the U.S. remained a major problem, the scholar Edward Hunt has more recently made a broader point about the extensive study of computer penetration as a security tool. Hunt suggests in a recent paper on the history of penetration testing that the defense establishment ultimately "...created many of the tools used in modern day cyberwarfare," as it carefully defined and researched the many ways that computer penetrators could hack into targeted systems.
## Tools
A wide variety of security assessment tools are available to assist with penetration testing, including free-of-charge tools, free and open-source software, and commercial software.
### Specialized OS distributions
Several operating system distributions are geared towards penetration testing. Such distributions typically contain a pre-packaged and pre-configured set of tools, so the penetration tester does not have to hunt down each individual tool, a process that can introduce complications such as compile errors, dependency issues, and configuration errors. Also, acquiring additional tools may not be practical in the tester's context.
Notable penetration testing OS examples include:
- BlackArch based on Arch Linux
- BackBox based on Ubuntu
- Kali Linux (replaced BackTrack December 2012) based on Debian
- Parrot Security OS based on Debian
- Pentoo based on Gentoo
- WHAX based on Slackware
Many other specialized operating systems facilitate penetration testing—each more or less dedicated to a specific field of penetration testing.
A number of Linux distributions include known OS and application vulnerabilities, and can be deployed as targets to practice against. Such systems help new security professionals try the latest security tools in a lab environment. Examples include Damn Vulnerable Linux (DVL), the OWASP Web Testing Environment (WTE), and Metasploitable.
### Software frameworks
- BackBox
- Hping
- Metasploit Project
- Nessus
- Nmap
- OWASP ZAP
- SAINT
- w3af
- Burp Suite
- Wireshark
- John the Ripper
- Hashcat
### Hardware tools
There are hardware tools specifically designed for penetration testing. However, not all hardware tools used in penetration testing are purpose-built for this task. Some devices, such as measuring and debugging equipment, are repurposed for penetration testing due to their advanced functionality and versatile capabilities.
- Proxmark3 — multi-purpose hardware tool for radio-frequency identification (RFID) security analysis.
- BadUSB — toolset for exploiting vulnerabilities in USB devices to inject malicious keystrokes or payloads.
- Flipper Zero — portable, open-source multi-functional device for pentesting wireless protocols such as Sub-GHz, RFID, NFC, Infrared and Bluetooth.
- Raspberry Pi — a compact, versatile single-board computer commonly used in penetration testing for tasks like network reconnaissance and exploitation.
- SDR (Software-defined Radio) — versatile tool for analyzing and attacking radio communications and protocols, including intercepting, emulating, decoding, and transmitting signals.
- ChipWhisperer — specialized hardware tool for side-channel attacks, allowing analysis of cryptographic implementations and vulnerabilities through power consumption or electromagnetic emissions.
## Penetration testing phases
The process of penetration testing may be simplified into the following five phases:
1. Reconnaissance: The act of gathering important information on a target system. This information can be used to better attack the target. For example, open source search engines can be used to find data that can be used in a social engineering attack.
1. Scanning: Uses technical tools to further the attacker's knowledge of the system. For example, Nmap can be used to scan for open ports; a minimal port-scan sketch follows this list.
1. Gaining access: Using the data gathered in the reconnaissance and scanning phases, the attacker can use a payload to exploit the targeted system. For example, Metasploit can be used to automate attacks on known vulnerabilities.
1. Maintaining access: Taking the steps needed to remain persistently within the target environment in order to gather as much data as possible.
1. Covering tracks: The attacker must clear any trace of the compromise of the victim system, including any data gathered and log events, in order to remain anonymous.
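As a minimal illustration of the scanning phase mentioned above, the following sketch performs a crude TCP connect scan using only Python's standard `socket` module; the target host and port range are placeholders, real engagements use dedicated tools such as Nmap, and any scan requires explicit authorization.
```python
import socket

def tcp_connect_scan(host, ports, timeout=0.5):
    """Return the ports on `host` that accepted a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    # Placeholder target; scan only systems you are explicitly authorized to test.
    print(tcp_connect_scan("127.0.0.1", range(1, 1025)))
```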
Once an attacker has exploited one vulnerability, they may gain access to other machines, so the process repeats: they look for new vulnerabilities and attempt to exploit them. This process is referred to as pivoting.
### Vulnerabilities
Legal operations that let the tester execute an illegal operation include unescaped SQL commands, unchanged hashed passwords in source-visible projects, human relationships, and old hashing or cryptographic functions. A single flaw may not be enough to enable a critically serious exploit; leveraging multiple known flaws and shaping the payload so that it appears to be a valid operation is almost always required. Metasploit provides a Ruby library for common tasks and maintains a database of known exploits.
When working under budget and time constraints, fuzzing is a common technique for discovering vulnerabilities. It aims to trigger an unhandled error through random input. The tester uses random input to reach less often used code paths; well-trodden code paths are usually free of errors. Errors are useful because they either expose more information, such as HTTP server crashes with full error trace-backs, or are directly usable, such as buffer overflows.
Imagine a website has 100 text input boxes.
A few are vulnerable to SQL injections on certain strings. Submitting random strings to those boxes for a while will hopefully hit the bugged code path. The error shows itself as a broken HTML page half-rendered because of an SQL error. In this case, only text boxes are treated as input streams. However, software systems have many possible input streams, such as cookie and session data, the uploaded file stream, RPC channels, or memory. Errors can happen in any of these input streams. The test goal is to first get an unhandled error and then understand the flaw based on the failed test case. Testers write an automated tool to test their understanding of the flaw until it is correct. After that, it may become obvious how to package the payload so that the target system triggers its execution. If this is not viable, one can hope that another error produced by the fuzzer yields more fruit. The use of a fuzzer saves time by not checking adequately handled code paths where exploits are unlikely.
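The following toy fuzzer sketches the idea described above; the target URL, form field names, and error markers are hypothetical, and the third-party `requests` library is assumed to be available.
```python
import random
import string

import requests  # third-party HTTP library, assumed to be installed

# Hypothetical target and form fields; illustrative only, not a real service.
TARGET = "http://testsite.example/search"
FIELDS = ["q", "category", "sort"]
ERROR_MARKERS = ["SQL syntax", "ORA-", "Traceback (most recent call last)"]

def random_input(max_len=64):
    """Build a random printable string of random length."""
    return "".join(random.choice(string.printable)
                   for _ in range(random.randint(1, max_len)))

def fuzz_once():
    """Send one batch of random inputs and flag responses that leak an error."""
    payload = {field: random_input() for field in FIELDS}
    response = requests.post(TARGET, data=payload, timeout=5)
    # An unhandled error often surfaces as a half-rendered page with a trace.
    if any(marker in response.text for marker in ERROR_MARKERS):
        print("Possible unhandled error for input:", payload)

if __name__ == "__main__":
    for _ in range(1000):
        fuzz_once()
```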
### Payload
The illegal operation, or payload in Metasploit terminology, can include functions for logging keystrokes, taking screenshots, installing adware, stealing credentials, creating backdoors using shellcode, or altering data. Some companies maintain large databases of known exploits and provide products that automatically test target systems for vulnerabilities:
- Metasploit
- Nessus
- Nmap
- OpenVAS
- W3af
## Standardized government penetration test services
The General Services Administration (GSA) has standardized the "penetration test" service as a pre-vetted support service, to rapidly address potential vulnerabilities, and stop adversaries before they impact US federal, state and local governments. These services are commonly referred to as Highly Adaptive Cybersecurity Services (HACS) and are listed at the US GSA Advantage website.
This effort has identified key service providers which have been technically reviewed and vetted to provide these advanced penetration services. This GSA service is intended to improve the rapid ordering and deployment of these services, reduce US government contract duplication, and to protect and support the US infrastructure in a more timely and efficient manner.
132-45A Penetration Testing is security testing in which service assessors mimic real-world attacks to identify methods for circumventing the security features of an application, system, or network. HACS Penetration Testing Services typically strategically test the effectiveness of the organization's preventive and detective security measures employed to protect assets and data. As part of this service, certified ethical hackers typically conduct a simulated attack on a system, systems, applications or another target in the environment, searching for security weaknesses. After testing, they will typically document the vulnerabilities and outline which defenses are effective and which can be defeated or exploited.
In the UK, penetration testing services are standardized via professional bodies working in collaboration with the National Cyber Security Centre.
The outcomes of penetration tests vary depending on the standards and methodologies used. There are five penetration testing standards: Open Source Security Testing Methodology Manual (OSSTMM), Open Web Application Security Project (OWASP), National Institute of Standards and Technology (NIST), Information System Security Assessment Framework (ISSAF), and Penetration Testing Methodologies and Standards (PTES).
|
https://en.wikipedia.org/wiki/Penetration_test
|
In a programming language, an evaluation strategy is a set of rules for evaluating expressions. The term is often used to refer to the more specific notion of a parameter-passing strategy that defines the kind of value that is passed to the function for each parameter (the binding strategy) and whether to evaluate the parameters of a function call, and if so in what order (the evaluation order). The notion of reduction strategy is distinct, although some authors conflate the two terms and the definition of each term is not widely agreed upon. A programming language's evaluation strategy is part of its high-level semantics. Some languages, such as PureScript, have variants with different evaluation strategies. Some declarative languages, such as Datalog, support multiple evaluation strategies.
The calling convention consists of the low-level platform-specific details of parameter passing.
## Example
To illustrate, executing a function call `f(a,b)` may first evaluate the arguments `a` and `b`, store the results in references or memory locations `ref_a` and `ref_b`, then evaluate the function's body with those references passed in. This gives the function the ability to look up the original argument values passed in through dereferencing the parameters (some languages use specific operators to perform this), to modify them via assignment as if they were local variables, and to return values via the references. This is the call-by-reference evaluation strategy.
## Table
This is a table of evaluation strategies and representative languages by year introduced. The representative languages are listed in chronological order, starting with the language(s) that introduced the strategy and followed by prominent languages that use the strategy.
| Evaluation strategy | Representative languages | Year first introduced |
|---|---|---|
| Call by reference | Fortran II, PL/I | 1958 |
| Call by value | ALGOL, C, Scheme, MATLAB | 1960 |
| Call by name | ALGOL 60, Simula | 1960 |
| Call by copy-restore | Fortran IV, Ada | 1962 |
| Call by unification | Prolog | 1965 |
| Call by need | SASL, Haskell, R | 1971 |
| Call by sharing | CLU, Java, Python, Ruby, Julia | 1974 |
| Call by reference parameters | C++, PHP, C#, Visual Basic .NET | 1985 |
| Call by reference to const | C++, C | 1985 |
## Evaluation orders
While the order of operations defines the abstract syntax tree of the expression, the evaluation order defines the order in which expressions are evaluated. For example, the Python program
```python
def f(x):
    print(x, end="")
    return x

print(f(1) + f(2), end="")
```
outputs `123` due to Python's left-to-right evaluation order, but a similar program in OCaml:
```ocaml
let f x = print_int x; x ;;
print_int (f 1 + f 2)
```
outputs `213` due to OCaml's right-to-left evaluation order.
The evaluation order is mainly visible in code with side effects, but it also affects the performance of the code because a rigid order inhibits instruction scheduling. For this reason language standards such as C++ traditionally left the order unspecified, although languages such as Java and C# define the evaluation order as left-to-right and the C++17 standard has added constraints on the evaluation order.
### Strict evaluation
Applicative order is a family of evaluation orders in which a function's arguments are evaluated completely before the function is applied.
This has the effect of making the function strict, i.e. the function's result is undefined if any of the arguments are undefined, so applicative order evaluation is more commonly called strict evaluation. Furthermore, a function call is performed as soon as it is encountered in a procedure, so it is also called eager evaluation or greedy evaluation. Some authors refer to strict evaluation as "call by value" due to the call-by-value binding strategy requiring strict evaluation.
Common Lisp, Eiffel and Java evaluate function arguments left-to-right. C leaves the order undefined. Scheme requires the execution order to be the sequential execution of an unspecified permutation of the arguments. OCaml similarly leaves the order unspecified, but in practice evaluates arguments right-to-left due to the design of its abstract machine.
All of these are strict evaluation.
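Python is also a strict language; the following minimal example shows that an argument expression is evaluated before the call even when the function never uses it, so an erroneous argument makes the whole call erroneous.
```python
def ignore(_x):
    # The parameter is never used, but under strict evaluation the
    # argument expression has already been evaluated by the time we get here.
    return 42

print(ignore(2 + 3))   # 42: the argument 5 was computed, then discarded
try:
    ignore(1 / 0)      # the division runs first, so the call never happens
except ZeroDivisionError:
    print("argument evaluated eagerly -> ZeroDivisionError")
```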
### Non-strict evaluation
A non-strict evaluation order is an evaluation order that is not strict, that is, a function may return a result before all of its arguments are fully evaluated. The prototypical example is normal order evaluation, which does not evaluate any of the arguments until they are needed in the body of the function. Normal order evaluation has the property that it terminates without error whenever any other evaluation order would have terminated without error. The name "normal order" comes from the lambda calculus, where normal order reduction will find a normal form if there is one (it is a "normalizing" reduction strategy). Lazy evaluation is classified in this article as a binding technique rather than an evaluation order. But this distinction is not always followed and some authors define lazy evaluation as normal order evaluation or vice-versa, or confuse non-strictness with lazy evaluation.
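Although Python itself is strict, non-strict behavior can be simulated by passing zero-argument functions (thunks) and only calling them when needed; the sketch below is illustrative and the helper names are invented.
```python
def if_then_else(condition, then_thunk, else_thunk):
    # Only the chosen branch's thunk is ever forced, mimicking non-strict
    # evaluation: the unneeded argument is never evaluated at all.
    return then_thunk() if condition else else_thunk()

result = if_then_else(True,
                      lambda: "safe branch",
                      lambda: 1 / 0)   # never evaluated, so no error
print(result)  # safe branch
```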
Boolean expressions in many languages use a form of non-strict evaluation called short-circuit evaluation, where the left expression is evaluated but the right expression may be skipped if the result can already be determined: for example, in a disjunctive expression (OR) where `true` is encountered, or in a conjunctive expression (AND) where `false` is encountered. Conditional expressions similarly use non-strict evaluation: only one of the branches is evaluated.
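A short Python illustration of short-circuit and conditional evaluation; the `explode` helper is invented for the example.
```python
def explode():
    raise RuntimeError("right operand was evaluated")

# 'or' stops at the first truthy operand; 'and' stops at the first falsy one,
# so explode() is never called in either expression below.
print(True or explode())    # True
print(False and explode())  # False

# Conditional expressions are similarly non-strict: only one branch runs.
x = 0
print("zero" if x == 0 else 1 / x)  # zero
```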
### Comparison of applicative order and normal order evaluation
With normal order evaluation, expressions containing an expensive computation, an error, or an infinite loop will be ignored if not needed, allowing the specification of user-defined control flow constructs, a facility not available with applicative order evaluation. Normal order evaluation uses complex structures such as thunks for unevaluated expressions, compared to the call stack used in applicative order evaluation. Normal order evaluation has historically had a lack of usable debugging tools due to its complexity.
## Strict binding strategies
### Call by value
In call by value (or pass by value), the evaluated value of the argument expression is bound to the corresponding variable in the function (frequently by copying the value into a new memory region). If the function or procedure is able to assign values to its parameters, only its local variable is assigned—that is, anything passed into a function call is unchanged in the caller's scope when the function returns. For example, in Pascal, passing an array by value will cause the entire array to be copied, and any mutations to this array will be invisible to the caller:
```pascal
program Main;
uses crt;
procedure PrintArray(a: Array of integer);
var
  i: Integer;
begin
  for i := Low(a) to High(a) do
    Write(a[i]);
  WriteLn();
end;
Procedure Modify(Row : Array of integer);
begin
  PrintArray(Row); // 123
  Row[1] := 4;
  PrintArray(Row); // 143
end;
Var
  A : Array of integer;
begin
  A := [1,2,3];
  PrintArray(A); // 123
  Modify(A);
  PrintArray(A); // 123
end.
```
#### Semantic drift
Strictly speaking, under call by value, no operations performed by the called routine can be visible to the caller, other than as part of the return value. This implies a form of purely functional programming in the implementation semantics. However, the circumlocution "call by value where the value is a reference" has become common in some languages, for example, the Java community.
Compared to traditional pass by value, the value which is passed is not a value as understood by the ordinary meaning of value, such as an integer that can be written as a literal, but an implementation-internal reference handle. Mutations to this reference handle are visible in the caller. Due to the visible mutation, this form of "call by value" is more properly referred to as call by sharing.
In purely functional languages, values and data structures are immutable, so there is no possibility for a function to modify any of its arguments. As such, there is typically no semantic difference between passing by value and passing by reference or a pointer to the data structure, and implementations frequently use call by reference internally for the efficiency benefits. Nonetheless, these languages are typically described as call by value languages.
### Call by reference
Call by reference (or pass by reference) is an evaluation strategy where a parameter is bound to an implicit reference to the variable used as argument, rather than a copy of its value. This typically means that the function can modify (i.e., assign to) the variable used as argument—something that will be seen by its caller. Call by reference can therefore be used to provide an additional channel of communication between the called function and the calling function. Pass by reference can significantly improve performance: calling a function with a many-megabyte structure as an argument does not have to copy the large structure, only the reference to the structure (which is generally a machine word and only a few bytes). However, a call-by-reference language makes it more difficult for a programmer to track the effects of a function call, and may introduce subtle bugs.
Due to variation in syntax, the difference between call by reference (where the reference type is implicit) and call by sharing (where the reference type is explicit) is often unclear at first glance. A simple litmus test is whether it is possible to write a traditional `swap(a, b)` function in the language.
For example, in Fortran:
```fortran
program Main
  implicit none
  integer :: a = 1
  integer :: b = 2
  call Swap(a, b)
  print *, a, b ! 2 1
contains
  subroutine Swap(a, b)
    integer, intent(inout) :: a, b
    integer :: temp
    temp = a
    a = b
    b = temp
  end subroutine Swap
end program Main
```
Therefore, Fortran's `inout` intent implements call-by-reference; any variable can be implicitly converted to a reference handle.
In contrast, the closest one can get in Java is:
```java
class Main {
    static class Box {
        int value;

        public Box(int value) {
            this.value = value;
        }
    }

    static void swap(Box a, Box b) {
        int temp = a.value;
        a.value = b.value;
        b.value = temp;
    }

    public static void main(String[] args) {
        Box a = new Box(1);
        Box b = new Box(2);
        swap(a, b);
        System.out.println(String.format("%d %d", a.value, b.value));
    }
}
// output: 2 1
```
where an explicit `Box` type must be used to introduce a handle.
Java is call-by-sharing but not call-by-reference.
### Call by copy-restore
Call by copy-restore—also known as "copy-in copy-out", "call by value result", "call by value return" (as termed in the Fortran community)—is a variation of call by reference. With call by copy-restore, the contents of the argument are copied to a new variable local to the call invocation. The function may then modify this variable, similarly to call by reference, but as the variable is local, the modifications are not visible outside of the call invocation during the call.
When the function call returns, the updated contents of this variable are copied back to overwrite the original argument ("restored").
The semantics of call by copy-restore is similar in many cases to call by reference, but differs when two or more function arguments alias one another (i.e., point to the same variable in the caller's environment). Under call by reference, writing to one argument will affect the other during the function's execution. Under call by copy-restore, writing to one argument will not affect the other during the function's execution, but at the end of the call, the values of the two arguments may differ, and it is unclear which argument is copied back first and therefore what value the caller's variable receives. For example, Ada specifies that the copy-out assignment for each `out` or `in out` parameter occurs in an arbitrary order.
From the following program (illegal in Ada 2012) it can be seen that the behavior of GNAT is to copy in left-to-right order on return:
```ada
with Ada.Text_IO; use Ada.Text_IO;

procedure Test_Copy_Restore is
   procedure Modify (A, B : in out Integer) is
   begin
      A := A + 1;
      B := B + 2;
   end Modify;

   X : Integer := 0;
begin
   Modify(X, X);
   Put_Line("X = " & Integer'Image(X));
end Test_Copy_Restore;
-- $ gnatmake -gnatd.E test_copy_restore.adb; ./test_copy_restore
-- test_copy_restore.adb:12:10: warning: writable actual for "A" overlaps with actual for "B" [-gnatw.i]
-- X = 2
```
If the program returned 1 it would be copying right-to-left, and under call by reference semantics the program would return 3.
When the reference is passed to the callee uninitialized (for example an `out` parameter in Ada, as opposed to an `in out` parameter), this evaluation strategy may be called "call by result".
This strategy has gained attention in multiprocessing and remote procedure calls, as unlike call-by-reference it does not require frequent communication between threads of execution for variable access.
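Python has no copy-restore parameters, but the aliasing behavior described above can be imitated by copying values in and writing them back on return; in the sketch below a one-element list stands in for a mutable variable cell, and the left-to-right copy-back order is an arbitrary choice mirroring the GNAT example above.
```python
def modify_copy_restore(cell_a, cell_b):
    # Copy in: work on private copies of the argument values.
    a, b = cell_a[0], cell_b[0]
    a += 1
    b += 2
    # Copy out ("restore") in left-to-right order, one possible choice.
    cell_a[0] = a
    cell_b[0] = b

def modify_by_reference(cell_a, cell_b):
    # Work directly through the shared cells, as call by reference would.
    cell_a[0] += 1
    cell_b[0] += 2

x = [0]
modify_copy_restore(x, x)   # both parameters alias the same cell
print(x[0])                 # 2: the right-hand copy-out overwrites the left

y = [0]
modify_by_reference(y, y)
print(y[0])                 # 3: each update sees the other's effect
```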
### Call by sharing
Call by sharing (also known as "pass by sharing", "call by object", or "call by object-sharing") is an evaluation strategy that is intermediate between call by value and call by reference. Rather than every variable being exposed as a reference, only a specific class of values, termed "references", "boxed types", or "objects", have reference semantics, and it is the addresses of these pointers that are passed into the function. Like call by value, the value of the address passed is a copy, and direct assignment to the parameter of the function overwrites the copy and is not visible to the calling function. Like call by reference, mutating the target of the pointer is visible to the calling function. Mutations of a mutable object within the function are visible to the caller because the object is not copied or cloned—it is shared, hence the name "call by sharing".
The technique was first noted by Barbara Liskov in 1974 for the CLU language. It is used by many modern languages such as Python (the shared values being called "objects"), Java (objects), Ruby (objects), JavaScript (objects), Scheme (data structures such as vectors), AppleScript (lists, records, dates, and script objects), OCaml and ML (references, records, arrays, objects, and other compound data types), Maple (rtables and tables), and Tcl (objects). The term "call by sharing" as used in this article is not in common use; the terminology is inconsistent across different sources. For example, in the Java community, they say that Java is call by value.
For immutable objects, there is no real difference between call by sharing and call by value, except if object identity is visible in the language. The use of call by sharing with mutable objects is an alternative to input/output parameters: the parameter is not assigned to (the argument is not overwritten and object identity is not changed), but the object (argument) is mutated.
For example, in Python, lists are mutable and passed with call by sharing, so:
```python
def f(a_list):
    a_list.append(1)

m = []
f(m)
print(m)
```
outputs `[1]` because the `append` method modifies the object on which it is called.
In contrast, assignments within a function are not noticeable to the caller. For example, this code binds the formal argument `a_list` to a new object, but the change is not visible to the caller because it does not mutate `m`:
```python
def f(a_list):
    a_list = a_list + [1]
    print(a_list)  # [1]

m = []
f(m)
print(m)  # []
```
### Call by address
Call by address, pass by address, or call/pass by pointer is a parameter passing method where the address of the argument is passed as the formal parameter. Inside the function, the address (pointer) may be used to access or modify the value of the argument.
For example, the swap operation can be implemented as follows in C:
```c
#include <stdio.h>

void swap(int* a, int* b) {
    int temp = *a;
    *a = *b;
    *b = temp;
}

int main() {
    int a = 1;
    int b = 2;
    swap(&a, &b);
    printf("%d %d", a, b); // 2 1
    return 0;
}
```
Some authors treat `&` as part of the syntax of calling `swap`. Under this view, C supports the call-by-reference parameter passing strategy. Other authors take a differing view that the presented implementation of `swap` in C is only a simulation of call-by-reference using pointers. Under this "simulation" view, mutable variables in C are not first-class (that is, l-values are not expressions), rather pointer types are.
In this view, the presented swap program is syntactic sugar for a program that uses pointers throughout, for example this program (`read` and `assign` have been added to highlight the similarities to the Java call-by-sharing program above):
```c
#include <stdio.h>

int read(int *p) {
    return *p;
}

void assign(int *p, int v) {
    *p = v;
}

void swap(int* a, int* b) {
    int temp_storage; int* temp = &temp_storage;
    assign(temp, read(a));
    assign(a, read(b));
    assign(b, read(temp));
}

int main() {
    int a_storage; int* a = &a_storage;
    int b_storage; int* b = &b_storage;
    assign(a, 1);
    assign(b, 2);
    swap(a, b);
    printf("%d %d", read(a), read(b)); // 2 1
    return 0;
}
```
Because `swap` in this program operates on pointers and cannot change the pointers themselves, but only the values the pointers point to, this view holds that C's main evaluation strategy is more similar to call-by-sharing.
C++ confuses the issue further by allowing `swap` to be declared and used with a very lightweight "reference" syntax:
```cpp
#include <iostream>

void swap(int& a, int& b) {
    int temp = a;
    a = b;
    b = temp;
}

int main() {
    int a = 1;
    int b = 2;
    swap(a, b);
    std::cout << a << b << std::endl; // 2 1
    return 0;
}
```
Semantically, this is equivalent to the C examples.
As such, many authors consider call-by-address to be a unique parameter passing strategy distinct from call-by-value, call-by-reference, and call-by-sharing.
### Call by unification
In logic programming, the evaluation of an expression may simply correspond to the unification of the terms involved combined with the application of some form of resolution. Unification must be classified as a strict binding strategy because it is fully performed. However, unification can also be performed on unbound variables, so calls may not necessarily commit to final values for all of their variables.
## Non-strict binding strategies
### Call by name
Call by name is an evaluation strategy where the arguments to a function are not evaluated before the function is called—rather, they are substituted directly into the function body (using capture-avoiding substitution) and then left to be evaluated whenever they appear in the function. If an argument is not used in the function body, the argument is never evaluated; if it is used several times, it is re-evaluated each time it appears. (See Jensen's device for a programming technique that exploits this.)
Call-by-name evaluation is occasionally preferable to call-by-value evaluation. If a function's argument is not used in the function, call by name will save time by not evaluating the argument, whereas call by value will evaluate it regardless. If the argument is a non-terminating computation, the advantage is enormous. However, when the function argument is used, call by name is often slower, requiring a mechanism such as a thunk.
.NET languages can simulate call by name using delegates or `Expression<T>` parameters. The latter results in an abstract syntax tree being given to the function. Eiffel provides agents, which represent an operation to be evaluated when needed.
Seed7 provides call by name with function parameters. Java programs can accomplish similar lazy evaluation using lambda expressions and the `java.util.function.Supplier<T>` interface.
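In a similar spirit, call by name can be approximated in Python by passing thunks that are re-evaluated on every use; the names below are invented for the example.
```python
def call_by_name_square(get_x):
    # The "argument" is a thunk; it is re-evaluated at each use,
    # so any side effects happen once per use.
    return get_x() * get_x()

counter = 0

def noisy_argument():
    global counter
    counter += 1
    return 3

print(call_by_name_square(noisy_argument))  # 9
print(counter)                              # 2: evaluated once per use
```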
### Call by need
Call by need is a memoized variant of call by name, where, if the function argument is evaluated, that value is stored for subsequent use. If the argument is pure (i.e., free of side effects), this produces the same results as call by name, saving the cost of recomputing the argument.
Haskell is a well-known language that uses call-by-need evaluation. Because evaluation of expressions may happen arbitrarily far into a computation, Haskell supports only side effects (such as mutation) via the use of monads. This eliminates any unexpected behavior from variables whose values change prior to their delayed evaluation.
In R's implementation of call by need, all arguments are passed, meaning that R allows arbitrary side effects.
Lazy evaluation is the most common implementation of call-by-need semantics, but variations like optimistic evaluation exist. .NET languages implement call by need using the type `Lazy<T>`.
Graph reduction is an efficient implementation of lazy evaluation.
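A minimal memoized thunk in Python illustrates the call-by-need idea in a simplified form: the delayed expression is evaluated at most once and its value is cached; the `Thunk` class is invented for the example.
```python
class Thunk:
    """Delay an expression and cache its value after the first force."""
    _UNEVALUATED = object()

    def __init__(self, compute):
        self._compute = compute
        self._value = Thunk._UNEVALUATED

    def force(self):
        if self._value is Thunk._UNEVALUATED:
            self._value = self._compute()   # evaluated at most once
        return self._value

calls = 0

def expensive():
    global calls
    calls += 1
    return 21 * 2

t = Thunk(expensive)
print(t.force(), t.force())  # 42 42
print(calls)                 # 1: memoized, unlike call by name
```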
### Call by macro expansion
Call by macro expansion is similar to call by name, but uses textual substitution rather than capture-avoiding substitution. Macro substitution may therefore result in variable capture, leading to mistakes and undesired behavior. Hygienic macros avoid this problem by checking for and replacing shadowed variables that are not parameters.
### Call by future
"Call by future", also known as "parallel call by name" or "lenient evaluation", is a concurrent evaluation strategy combining non-strict semantics with eager evaluation. The method requires fine-grained dynamic scheduling and synchronization but is suitable for massively parallel machines.
The strategy creates a future (promise) for the function's body and each of its arguments. These futures are computed concurrently with the flow of the rest of the program. When a future A requires the value of another future B that has not yet been computed, future A blocks until future B finishes computing and has a value. If future B has already finished computing the value is returned immediately. Conditionals block until their condition is evaluated, and lambdas do not create futures until they are fully applied.
If implemented with processes or threads, creating a future will spawn one or more new processes or threads (for the promises), accessing the value will synchronize these with the main thread, and terminating the computation of the future corresponds to killing the promises computing its value. If implemented with a coroutine, as in .NET async/await, creating a future calls a coroutine (an async function), which may yield to the caller, and in turn be yielded back to when the value is used, cooperatively multitasking.
The strategy is non-deterministic, as the evaluation can occur at any time between creation of the future (i.e., when the expression is given) and use of the future's value. The strategy is non-strict because the function body may return a value before the arguments are evaluated. However, in most implementations, execution may still get stuck evaluating an unneeded argument. For example, the program
```haskell
f x = 1/x
g y = 1
main = print (g (f 0))
```
may either have `g` finish before `f 0` is evaluated, and output 1, or may result in an error due to evaluating `f 0`.
Call-by-future is similar to call by need in that values are computed only once. With careful handling of errors and nontermination, in particular terminating futures partway through if it is determined they will not be needed, call-by-future also has the same termination properties as call-by-need evaluation. However, call-by-future may perform unnecessary speculative work compared to call-by-need, such as deeply evaluating a lazy data structure. This can be avoided by using lazy futures that do not start computation until it is certain the value is needed.
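The scheduling idea can be sketched in Python with `concurrent.futures`, where argument expressions are submitted eagerly as futures and only awaited if their values are used; this is only a rough analogue and does not reproduce the termination subtleties discussed above.
```python
from concurrent.futures import ThreadPoolExecutor

def g(y_future):
    # The body can return before the argument future finishes;
    # it never calls y_future.result(), so the value is never awaited here.
    return 1

def f(x):
    return 1 / x

with ThreadPoolExecutor() as pool:
    arg = pool.submit(f, 0)     # evaluated eagerly, in parallel, as a future
    print(g(arg))               # 1, printed regardless of arg's fate
    # arg itself fails with ZeroDivisionError, observable via arg.exception()
    print(type(arg.exception()).__name__)  # ZeroDivisionError
```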
### Optimistic evaluation
Optimistic evaluation is a call-by-need variant where the function's argument is partly evaluated in a call-by-value style for some amount of time (which may be adjusted at runtime). After that time has passed, evaluation is aborted and the function is applied using call by need. This approach avoids some of call-by-need's runtime expenses while retaining desired termination characteristics.
|
https://en.wikipedia.org/wiki/Evaluation_strategy
|
The Disc Filing System (DFS) is a computer file system developed by Acorn Computers, initially as an add-on to the Eurocard-based Acorn System 2.
In 1981, the Education Departments of Western Australia and South Australia announced joint tenders calling for the supply of personal computers to their schools. Acorn's Australian computer distributor, Barson Computers, convinced Joint Managing Directors Hermann Hauser and Chris Curry to allow the soon to be released Acorn BBC Microcomputer to be offered with disk storage as part of the bundle. They agreed on condition that Barson adapted the Acorn DFS from the System 2 without assistance from Acorn as they had no resources available. This required some minor hardware and software changes to make the DFS compatible with the BBC Micro.
Barson won the tenders for both states, with the DFS fitted, a year ahead of the UK. It was this early initiative that resulted in the BBC Micro being more heavily focused on the education market in Australia, with very little penetration of the home computer market until the arrival of the Acorn Electron.
The DFS shipped as a ROM and Disk Controller Chip fitted to the BBC Micro's motherboard. The filing system was of extremely limited functionality and storage capability, using a flat directory structure. Each filename can be up to seven letters long, plus one letter for the directory in which the file is stored.
The DFS is remarkable in that, unlike most filing systems, it had no single vendor or implementation. The original DFS was written by Acorn, who continued to maintain their own codebase, but various disc drive vendors wrote their own implementations; these included Cumana, Solidisk, Opus and Watford Electronics. The Watford Electronics implementation is notable for supporting 62 files per disc instead of the usual 31, using a non-standard disc format, while the Solidisk implementation introduced proprietary "chained" catalogues which allowed an unlimited number of files per disc, constrained only by the disc size.
Other features in third-party implementations included being able to review free space, and built-in `FORMAT` and `VERIFY` commands, which were shipped on a utility disc with the original Acorn DFS.
Acorn followed up their original DFS series with the Acorn 1770 DFS, which used the same disc format as the earlier version but added a set of extra commands and supported the improved WD1770 floppy drive controller chip.
## Physical format
DFS conventionally uses one side of a double-density 5¼" floppy disc.
Discs are formatted as either 40 or 80 track, giving a capacity of 100 or 200 KB per side (ten 256-byte sectors per track, with FM encoding).
The capacity is limited by the choice of the Intel 8271 controller in the original BBC Micro, which only supports FM encoding, not the MFM encoding which was already in common use by the time of the BBC Micro's launch. FM encoding gives half the recording capacity of MFM for a given physical disc density.
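The stated capacities follow directly from that geometry; a quick check of the arithmetic (the function name is illustrative only):

```haskell
-- Bytes on one side of a DFS disc:
-- tracks × 10 sectors per track × 256 bytes per sector (FM encoding).
dfsCapacityBytes :: Int -> Int
dfsCapacityBytes tracks = tracks * 10 * 256

main :: IO ()
main = do
  print (dfsCapacityBytes 40 `div` 1024)  -- 100 (KB)
  print (dfsCapacityBytes 80 `div` 1024)  -- 200 (KB)
```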
FM and MFM encoding are commonly referred to as "single density" and "double density" respectively, although the same discs and drives are used for both; "high density", by contrast, requires different drives and discs.
Double-density 3½" discs can be formatted and used with 1770 DFS (the Intel 8271-based DFS has problems with many 3½" drives), giving the same "single-density" capacity with FM encoding, but this was not originally standard practice.
3½" discs were normally formatted as MFM "double density" using the later Advanced Disc Filing System, as this is present in all Acorn machines supplied with 3½" drives. As of 2009, 3½" drives are more commonly used with BBC Micros than in the past, including use with DFS, due to their greater availability and easier data interchange with more recent computers.
High-density 5¼" and 3½" discs are not supported by DFS.
### Single- and double-sided operation
The DFS does not directly support double-sided discs; instead, the two heads of a double-sided drive are treated as two separate logical drives. The DFS can support up to four volumes, numbered from 0 to 3. Drive 0 is the default with drive 1 representing a second drive attached to the cable.
"Drive" 2 refers to the reverse side of drive 0, and "drive" 3 to the reverse side of drive 1. There is no support for more than two physical drives.
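This numbering convention can be summarised as a small mapping; a sketch with illustrative names:

```haskell
-- DFS logical drive numbers 0-3 map to a physical drive and a side:
-- drives 0 and 1 are the first sides, drives 2 and 3 the reverse sides.
data Side = Front | Back deriving Show

logicalDrive :: Int -> Maybe (Int, Side)
logicalDrive 0 = Just (0, Front)
logicalDrive 1 = Just (1, Front)
logicalDrive 2 = Just (0, Back)
logicalDrive 3 = Just (1, Back)
logicalDrive _ = Nothing   -- no support for more than two physical drives
```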
Due to the installed base of single-sided drives, commercial software was normally provided on single-sided discs, or as "flippy discs" that were manually reversed to access the other side.
### 40- and 80-track compatibility
Discs can be formatted using 40 or 80 tracks, using the `*FORM40` or `*FORM80` commands, and drives can be either 40 or 80 track. This is the most common compatibility issue for DFS users: 40-track discs were the norm for commercial software distribution, due to the installed base of 40-track drives, but 80-track drives became more common as prices dropped, allowing users to store more data. An 80-track drive will not automatically read 40-track discs.
The disc capacity is stored as a sector count in the catalogue on track zero. Track zero is located in the same place on both 40- and 80-track discs, allowing a disc file system to set the motor stepping accordingly.
However, the Intel 8271-based Acorn DFS does not do so, and so dual-format capability was addressed in a number of ways:
- by simply attaching both a 40-track drive and an 80-track drive to the BBC Micro, although this was costly for the home user;
- some disc drive resellers, notably UFD (User Friendly Devices) and Akhter Computer Group, offered drive assemblies fitted with switches to select 40- or 80-track operation;
- magazines such as The Micro User offered kits to build circuit boards that could be wired into the disc drive cable, optionally 'double-stepping' the attached drives;
- The Micro User also published an article on creating dual-format discs, with 21 tracks' worth of data stored in both formats so that either type of drive could access the contents; however these had limited capacity and once created were read-only;
- Acorn User magazine distributed 40-track cover discs with a small utility program on track zero, so that owners of 80-track drives could reformat them into 80-track discs with the original contents on the first 40 tracks; or
- the user could upgrade to a WD1770 or similar controller. Acorn 1770 DFS and some third-party controller systems provided dual-format capability in software by reprogramming the controller during track seeks; as a bonus, third-party systems offered proprietary MFM (so-called "double-density") formats for even greater disc capacity.
Failure to use the correct setting would result in errors from the DFS such as `Disk fault 18 at 01/00`, or damage to the disc drive by trying to step the heads beyond the physical end of the disc surface.
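The 'double-stepping' mentioned in the list above amounts to doubling the head step whenever a 40-track disc is used in an 80-track drive. A hedged sketch of the idea, not of any particular DFS implementation:

```haskell
-- Physical track to seek to for a given logical track, depending on
-- whether a 40-track disc is being read in an 80-track drive.
physicalTrack :: Bool  -- double-step (40-track disc in an 80-track drive)?
              -> Int   -- logical track number on the disc
              -> Int
physicalTrack doubleStep track
  | doubleStep = 2 * track   -- the 80-track drive steps twice per logical track
  | otherwise  = track
```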
Switching to 80 tracks did not extend the catalogue in any way, leaving the user prone to running out of filename slots before running out of space on the disc. This situation resulted in a `Cat full` error.
## File storage
### Filenames
DFS is case-preserving but not case-sensitive. The prevalence of all-capitals filenames is most likely due to the BBC Micro defaulting to caps lock being enabled after a hard or soft reset. The character set is quite permissive, and all printable characters of 7-bit ASCII are allowed, including spaces, but excluding:
- The single-character wildcard `#`.
- The multi-character wildcard `*`.
- Control codes generated by the shell escape character `|`, although the sequence `||` can be used to represent a single `|` character in the filename.
- The drive specifier character `:` as the first character of a leaf name (the file's name proper); this causes a `Bad drive` or `Bad name` error. Where the colon is unambiguous, for example in `FOO:BAR`, it is allowed as part of the leaf name.
- The directory specifier character `.` as the first or second character of a leaf name; `.` cannot itself be used as a directory character. Where the dot is unambiguous, such as in `PRG.BAS`, it is allowed as part of the leaf name and is not treated as a directory specifier (whereas `F.MONEY` would be a file `MONEY` in directory `F`).
For the sake of portability to third-party DFS implementations, it is best to avoid `:` and `.` in leaf names.
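The rules above can be collected into a simple validity check for a leaf name. This is a sketch of only the rules stated here (third-party implementations may differ, and the `||` escaping handled at the command line is not modelled):

```haskell
import Data.Char (ord)

-- Rough validity check for a DFS leaf name, following the rules above.
validLeafName :: String -> Bool
validLeafName name =
     not (null name)
  && length name <= 7                  -- up to seven characters
  && all printableAscii name           -- no control codes
  && all (`notElem` "#*") name         -- no wildcard characters
  && head name /= ':'                  -- no drive specifier position
  && '.' `notElem` take 2 name         -- no directory specifier positions
  where
    printableAscii c = ord c >= 0x20 && ord c < 0x7F
```

For example, `validLeafName "FOO:BAR"` and `validLeafName "PRG.BAS"` are both true under these rules, while `validLeafName ":BAR"` is not.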
Quotation marks are allowed, although BBC BASIC requires them to be escaped twice:
- `SAVE """""""A"""` passes the string `"""A"` to the DFS, which then saves a file named `"A`.
- Conversely `SAVE "A"""` saves a file named `A"`.
- The same technique is used to insert spaces: `SAVE """B A R"""` saves a file named `B A R`.
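These examples follow from applying the same quote-doubling rule twice: once for the filing system's own argument quoting and once for the BBC BASIC string literal. A sketch of a single application of that rule (the function name is invented, and the assumption that the filing system uses the same doubling convention is inferred from the examples above):

```haskell
-- Quote a string by doubling every embedded quotation mark and
-- wrapping the result in quotation marks.
quoteOnce :: String -> String
quoteOnce s = "\"" ++ concatMap escape s ++ "\""
  where
    escape '"' = "\"\""
    escape c   = [c]

-- ghci> putStrLn (quoteOnce (quoteOnce "\"A"))
-- """""""A"""
-- ghci> putStrLn (quoteOnce (quoteOnce "B A R"))
-- """B A R"""
```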
A fully qualified filename, or "file specification" ("fsp" for short), consists of a colon, the drive number, a dot, the directory letter, another dot, and then the name. For example, a file in the default directory of "drive" 2 called `BOB` would have a complete specification of `:2.$.BOB`. The drive and directory specifiers are both optional.
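A file specification can therefore be modelled as an optional drive, an optional directory letter and a leaf name; a small sketch of assembling one (the type and function names are illustrative):

```haskell
-- A DFS file specification such as ":2.$.BOB": optional drive number,
-- optional directory letter, and the leaf name.
data FileSpec = FileSpec
  { fsDrive :: Maybe Int
  , fsDir   :: Maybe Char
  , fsLeaf  :: String
  }

renderFsp :: FileSpec -> String
renderFsp (FileSpec drive dir leaf) =
     maybe "" (\d -> ':' : show d ++ ".") drive
  ++ maybe "" (\c -> c : ".") dir
  ++ leaf

-- renderFsp (FileSpec (Just 2) (Just '$') "BOB")  ==  ":2.$.BOB"
```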
### Directories
"Directories" in the DFS are single character prefixes on filenames - such as `F` in `F.BankLtr` - used to group files. The arrangement is flat and a default directory of `$` is used instead of a root directory.
"Directories" in the DFS are single character prefixes on filenames - such as `F` in `F.BankLtr` - used to group files. The arrangement is flat and a default directory of `$` is used instead of a root directory. On requesting a catalogue of the disc (with the `*CAT` or `*.` commands), files in the current directory are shown with no directory prefix in one block, and below that are listed all other files in a second block, with their directory prefixes visible. For example, (from Acorn DFS - third party DFS implementations may vary slightly):
```
PROGRAM (12)
Drive 0 Option 2 (RUN)
Dir. :0.$ Lib. :0.$
!BOOT HELLO
SUMS TABLE
TEST VECTORS
ZOMBIE
A.HELLO L B.SUMS
F.BankLtr
```
The top seven files are all in the current directory which is `$` on drive 0. Below that are all the files in other directories, in this case `A`, `B` and `F`. An `L` after a filename (as with `A.HELLO`, above) shows the file is locked against modification or deletion. The first line contains the disc title and the modification count.
The DFS provides a working space, divided up into the directory and the library.
The "directory" is the working directory on the current volume, much like the working directory on any other command-line system. The "library" is a second, alternative working directory that functions more like PATH and has the benefit of being able to be on any volume. Files opened with unqualified names are first searched for in the working directory; failing this, the library directory is also searched. The directory and library both default to the same directory.
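The lookup order for unqualified names can be sketched as: try the current directory first, then the library. The function and its arguments below are hypothetical, not an actual DFS interface:

```haskell
-- Resolve an unqualified leaf name by searching the current directory
-- and then the library directory.
resolve :: String            -- current directory prefix, e.g. ":0.$"
        -> String            -- library prefix, e.g. ":0.$"
        -> (String -> Bool)  -- does a fully qualified name exist on disc?
        -> String            -- unqualified leaf name
        -> Maybe String
resolve dir lib exists leaf
  | exists (dir ++ "." ++ leaf) = Just (dir ++ "." ++ leaf)
  | exists (lib ++ "." ++ leaf) = Just (lib ++ "." ++ leaf)
  | otherwise                   = Nothing
```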
## Disc structure
The catalogue (file table) occupies the first two disc sectors: one for the names and directories of each file, and a matching sector holding the file locations, sizes and metadata. Eight bytes of each sector are used for each file. With a further eight bytes from each sector reserved for the 12-byte disc title and the volume information, the total number of files on the disc (irrespective of which directory each file is in) is limited to 31. In the interests of saving space, the most significant bit of the directory letter for a file is used as the locked (read-only) flag.
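A name-sector entry can be decoded along the lines described above. This is a sketch: the placement of the directory byte after the seven name bytes is an assumption consistent with the sizes given here, not something this description spells out.

```haskell
import Data.Bits (clearBit, testBit)
import Data.Char (chr)
import Data.Word (Word8)

-- One 8-byte entry from the name sector: a 7-character name, a
-- directory letter, and a locked flag taken from the top bit of the
-- directory byte.
data CatEntry = CatEntry
  { ceName   :: String
  , ceDir    :: Char
  , ceLocked :: Bool
  } deriving Show

decodeNameEntry :: [Word8] -> Maybe CatEntry
decodeNameEntry bytes
  | length bytes /= 8 = Nothing
  | otherwise =
      let (nameBytes, dirByte) = (take 7 bytes, bytes !! 7)
      in  Just CatEntry
            { ceName   = map (chr . fromIntegral) nameBytes
            , ceDir    = chr (fromIntegral (dirByte `clearBit` 7))
            , ceLocked = dirByte `testBit` 7
            }
```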
### Volume size
Although physical discs are usually formatted as either 100 KB or 200 KB, DFS supports volume sizes up to 256 KB.
The largest DFS file size allowed is the volume size minus ½ KB for the catalogue, as file sizes are stored as an 18-bit quantity.
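The numbers line up: an 18-bit length field covers the largest file that can exist on a 256 KB volume once the two catalogue sectors are subtracted. A quick check of the arithmetic:

```haskell
-- 256 KB volume, two 256-byte catalogue sectors, 18-bit length field.
maxVolumeBytes, maxLengthField :: Int
maxVolumeBytes = 256 * 1024           -- 262144
maxLengthField = 2 ^ (18 :: Int) - 1  -- 262143

maxFileBytes :: Int -> Int
maxFileBytes volumeBytes = volumeBytes - 2 * 256

-- maxFileBytes maxVolumeBytes == 261632, comfortably within maxLengthField
```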
### File allocation
The DFS does not support data fragmentation, meaning a file's data must be stored in a single run of consecutive sectors, but free space is prone to becoming fragmented. Random-access file writes fail when the end of the file reaches the beginning of the next, even though there may be free sectors elsewhere on the disc. In such cases the DFS aborts with a `Can't extend` error. `SAVE` is also unable to split a file to fit the available space, but as the failure occurs at the sector allocation stage, the error returned is `Disk full`.
The `*COMPACT` command is provided to relocate all files on disc to a solid block, placing all the free space after it in a second block. This allows the next file created to fill the disc, but only the last existing file can be extended without being moved. `SAVE` deletes any existing file and copies the specified block of memory to wherever there is space on the disc.
In contrast, the `*COMPACT` command uses program memory as a buffer to relocate the files, overwriting any program and data in memory.
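Because each file must occupy one contiguous run of sectors, a save succeeds only if some single gap is large enough, regardless of the total free space. A hedged sketch of that allocation check (illustrative only, not the actual DFS algorithm):

```haskell
import Data.List (sortOn)

-- Files already on disc, as (start sector, length in sectors).
type Extent = (Int, Int)

-- First-fit search for a gap of at least `need` sectors.  Sectors 0 and 1
-- hold the catalogue, so scanning starts at sector 2.  A Nothing result
-- corresponds to a "Disk full" error, even when the gaps added together
-- would have been big enough.
allocate :: Int -> Int -> [Extent] -> Maybe Int
allocate totalSectors need used = go 2 (sortOn fst used)
  where
    go from []
      | totalSectors - from >= need = Just from
      | otherwise                   = Nothing
    go from ((start, len) : rest)
      | start - from >= need = Just from
      | otherwise            = go (max from (start + len)) rest
```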
### Metadata
Like the cassette filing system, the Acorn DFS supports the BBC Micro's standard file metadata: load address and execution address, required because Acorn MOS (the operating system used by the BBC Micro) does not support relocation of binary code. A file should be loaded to the address the programmer intended, as the contents may refer to internal locations by absolute addresses. An execution address is also recorded as the entry point is not necessarily at the beginning, or even within the file.
File attributes are limited to a single bit: Locked. When set, an `L` appears to the right of the file's name in the catalogue and the file may not be altered, overwritten or deleted.
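The per-file metadata can be summarised as a small record; field widths and exact on-disc encodings are not given in this description, so the types below are only indicative:

```haskell
import Data.Word (Word32)

-- Metadata the DFS keeps for each file, as described above.
data FileInfo = FileInfo
  { loadAddress :: Word32   -- address the file should be loaded to
  , execAddress :: Word32   -- entry point, not necessarily the file's start
  , locked      :: Bool     -- shown as 'L' in the catalogue listing
  } deriving Show
```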
### Dates
DFS discs do not track any dates (because Acorn MOS prior to version 3 did not maintain a real-time clock) but instead offer a peculiar feature: a modification count. Every time the catalogue is updated, the count increments.
|
https://en.wikipedia.org/wiki/Disc_Filing_System
|