In the business world, incorrect data can be costly. Many companies use customer information databases that record data like contact information, addresses, and preferences. For instance, if the addresses are inconsistent, the company will suffer the cost of resending mail or even losing customers.
## Data quality
High-quality data needs to pass a set of quality criteria. Those include:
- Validity: The degree to which the measures conform to defined business rules or constraints. (See also Validity (statistics).) When modern database technology is used to design data-capture systems, validity is fairly easy to ensure: invalid data arises mainly in legacy contexts (where constraints were not implemented in software) or where inappropriate data-capture technology was used (e.g., spreadsheets, where it is very hard to limit what a user chooses to enter into a cell, if cell validation is not used). Data constraints fall into the following categories:
- Data-Type Constraints: values in a particular column must be of a particular data type, e.g., Boolean, numeric (integer or real), date.
- Range Constraints: typically, numbers or dates should fall within a certain range. That is, they have minimum and/or maximum permissible values.
- Mandatory Constraints: Certain columns must not be empty.
- Unique Constraints: A field, or a combination of fields, must be unique across a dataset. No two persons may have the same social security number (or other unique identifier).
- Set-Membership constraints: The values for a column come from a set of discrete values or codes. For example, a person's sex may be Female, Male, or Non-Binary.
- Foreign-key constraints: This is the more general case of set membership. The set of values in a column is defined in a column of another table that contains unique values. For example, in a US taxpayer database, the "state" column is required to belong to one of the US's defined states or territories: the set of permissible states and territories is recorded in a separate table. The term foreign key is borrowed from relational database terminology.
- Regular expression patterns: Occasionally, text fields must be validated this way. For example, North American phone numbers may be required to have the pattern 999-999–9999.
- Cross-field validation: Certain conditions that utilize multiple fields must hold. For example, in laboratory medicine, the sum of the components of the differential white blood cell count must be equal to 100 (since they are all percentages). In a hospital database, a patient's date of discharge from the hospital cannot be earlier than the date of admission.
- Accuracy: The degree of conformity of a measure to a standard or a true value. (See also Accuracy and precision.) Accuracy is very hard to achieve through data cleansing in the general case because it requires accessing an external source of data that contains the true value: such "gold standard" data is often unavailable. Accuracy has been achieved in some cleansing contexts, notably customer contact data, by using external databases that match up zip codes to geographical locations (city and state) and also help verify that street addresses within these zip codes actually exist.
- Completeness: The degree to which all required measures are known. Incompleteness is almost impossible to fix with data cleansing methodology: one cannot infer facts that were not captured when the data in question was initially recorded. (In some contexts, e.g., interview data, it may be possible to fix incompleteness by going back to the original source of data, i.e. re-interviewing the subject, but even this does not guarantee success because of problems of recall - e.g., in an interview to gather data on food consumption, no one is likely to remember exactly what one ate six months ago. In the case of systems that insist certain columns should not be empty, one may work around the problem by designating a value that indicates "unknown" or "missing", but the supplying of default values does not imply that the data has been made complete.)
- Consistency: The degree to which a set of measures are equivalent across systems. (See also Consistency.) Inconsistency occurs when two data items in the data set contradict each other: e.g., a customer is recorded in two different systems as having two different current addresses, and only one of them can be correct. Fixing inconsistency is not always possible: it requires a variety of strategies - e.g., deciding which data were recorded more recently, which data source is likely to be most reliable (the latter knowledge may be specific to a given organization), or simply trying to find the truth by testing both data items (e.g., calling up the customer).
- Uniformity: The degree to which a set of data measures are specified using the same units of measure in all systems. (See also Unit of measurement.) In datasets pooled from different locales, weight may be recorded either in pounds or kilos and must be converted to a single measure using an arithmetic transformation.
The term integrity encompasses accuracy, consistency and some aspects of validation (see also Data integrity) but is rarely used by itself in data-cleansing contexts because it is insufficiently specific. (For example, "referential integrity" is a term used to refer to the enforcement of foreign-key constraints above.)
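The constraint categories listed above lend themselves to mechanical checks. The following is a minimal sketch of such checks in Python; the record layout, field names, and the abridged state list are hypothetical, chosen only to mirror the examples in this section.

```python
import re
from datetime import date

# Hypothetical customer/patient records; field names are illustrative only.
records = [
    {"ssn": "123-45-6789", "age": 42, "sex": "Female", "state": "CA",
     "phone": "415-555-0100", "admitted": date(2023, 1, 5), "discharged": date(2023, 1, 9)},
    {"ssn": "123-45-6789", "age": -3, "sex": "X", "state": "ZZ",
     "phone": "5550100", "admitted": date(2023, 2, 1), "discharged": date(2023, 1, 20)},
]

US_STATES = {"CA", "NY", "TX"}                 # set-membership / foreign-key style lookup (abridged)
PHONE_RE = re.compile(r"^\d{3}-\d{3}-\d{4}$")  # regular-expression pattern constraint

def violations(rec):
    errs = []
    if not isinstance(rec["age"], int):                    # data-type constraint
        errs.append("age must be an integer")
    elif not (0 <= rec["age"] <= 130):                     # range constraint
        errs.append("age out of range")
    if not rec["ssn"]:                                     # mandatory constraint
        errs.append("ssn missing")
    if rec["sex"] not in {"Female", "Male", "Non-Binary"}: # set-membership constraint
        errs.append("unknown sex code")
    if rec["state"] not in US_STATES:                      # foreign-key style constraint
        errs.append("unknown state")
    if not PHONE_RE.match(rec["phone"]):                   # regex pattern constraint
        errs.append("malformed phone number")
    if rec["discharged"] < rec["admitted"]:                # cross-field validation
        errs.append("discharge precedes admission")
    return errs

# The uniqueness constraint is checked across the whole data set.
seen = set()
for i, rec in enumerate(records):
    errs = violations(rec)
    if rec["ssn"] in seen:
        errs.append("duplicate ssn")
    seen.add(rec["ssn"])
    if errs:
        print(f"record {i}: {errs}")
```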
## Process
- Data auditing: The data is audited with the use of statistical and database methods to detect anomalies and contradictions: this eventually indicates the characteristics of the anomalies and their locations. Several commercial software packages will let you specify constraints of various kinds (using a grammar that conforms to that of a standard programming language, e.g., JavaScript or Visual Basic) and then generate code that checks the data for violation of these constraints. This process is referred to below in the bullets "workflow specification" and "workflow execution." For users who lack access to high-end cleansing software, microcomputer database packages such as Microsoft Access or FileMaker Pro will also let you perform such checks, on a constraint-by-constraint basis, interactively, with little or no programming required in many cases.
- Workflow specification: The detection and removal of anomalies are performed by a sequence of operations on the data known as the workflow. It is specified after the process of auditing the data and is crucial in achieving the end product of high-quality data. In order to achieve a proper workflow, the causes of the anomalies and errors in the data have to be closely considered.
- Workflow execution: In this stage, the workflow is executed after its specification is complete and its correctness is verified. The implementation of the workflow should be efficient, even on large sets of data, which inevitably poses a trade-off because the execution of a data-cleansing operation can be computationally expensive.
- Post-processing and controlling: After executing the cleansing workflow, the results are inspected to verify correctness. Data that could not be corrected during the execution of the workflow is manually corrected, if possible. The result is a new cycle in the data-cleansing process where the data is audited again to allow the specification of an additional workflow to further cleanse the data by automatic processing.
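As a rough illustration of the audit, specification, execution and post-processing cycle described in the bullets above, the sketch below wires two invented cleansing steps into a workflow and re-audits the result; it is a shape sketch under simple assumptions, not a prescribed implementation.

```python
# A minimal workflow sketch: audit -> specify -> execute -> post-process.
# The individual cleansing steps here are illustrative placeholders.

def audit(rows):
    """Return indices of rows that violate a simple rule (empty name)."""
    return [i for i, r in enumerate(rows) if not r.get("name")]

def trim_whitespace(rows):
    return [{k: v.strip() if isinstance(v, str) else v for k, v in r.items()} for r in rows]

def drop_empty_names(rows):
    return [r for r in rows if r.get("name")]

workflow = [trim_whitespace, drop_empty_names]   # workflow specification

rows = [{"name": "  Ada "}, {"name": ""}, {"name": "Grace"}]
print("anomalies before:", audit(rows))
for step in workflow:                            # workflow execution
    rows = step(rows)
print("anomalies after:", audit(rows))           # post-processing / control: re-audit
```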
Good quality source data has to do with “Data Quality Culture” and must be initiated at the top of the organization. It is not just a matter of implementing strong validation checks on input screens, because no matter how strong these checks are, they can often still be circumvented by the users.
There is a nine-step guide for organizations that wish to improve data quality (Olson, J. E., Data Quality: The Accuracy Dimension, Morgan Kaufmann, 2002):
- Declare a high-level commitment to a data quality culture
- Drive process reengineering at the executive level
- Spend money to improve the data entry environment
- Spend money to improve application integration
- Spend money to change how processes work
- Promote end-to-end team awareness
- Promote interdepartmental cooperation
- Publicly celebrate data quality excellence
- Continuously measure and improve data quality
Other data-cleansing methods include:
- Parsing: for the detection of syntax errors. A parser decides whether a string of data is acceptable within the allowed data specification. This is similar to the way a parser works with grammars and languages.
- Data transformation: Data transformation allows the mapping of the data from its given format into the format expected by the appropriate application. This includes value conversions or translation functions, as well as normalizing numeric values to conform to minimum and maximum values.
- Duplicate elimination: Duplicate detection requires an algorithm for determining whether data contains duplicate representations of the same entity. Usually, data is sorted by a key that would bring duplicate entries closer together for faster identification.
- Statistical methods: By analyzing the data using the values of mean, standard deviation, range, or clustering algorithms, it is possible for an expert to find values that are unexpected and thus erroneous. Although the correction of such data is difficult since the true value is not known, it can be resolved by setting the values to an average or other statistical value. Statistical methods can also be used to handle missing values which can be replaced by one or more plausible values, which are usually obtained by extensive data augmentation algorithms.
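A minimal sketch of the duplicate-elimination and statistical screening ideas just described; the records, sort key and threshold are invented for illustration.

```python
from statistics import mean, stdev

# Duplicate elimination: sort by a key so that likely duplicates end up next to
# each other, then compare adjacent records. The key (lower-cased name) is illustrative.
records = [{"name": "Jane Doe"}, {"name": "John Roe"}, {"name": "jane doe"}]
records.sort(key=lambda r: r["name"].lower())
duplicates = [(a["name"], b["name"])
              for a, b in zip(records, records[1:])
              if a["name"].lower() == b["name"].lower()]
print("possible duplicates:", duplicates)

# Statistical screening: flag values far from the mean (here, more than two
# standard deviations); a flagged value could then be reviewed or replaced.
weights_kg = [61.0, 59.5, 64.2, 63.1, 58.8, 60.4, 62.0, 5000.0]
m, s = mean(weights_kg), stdev(weights_kg)
outliers = [w for w in weights_kg if abs(w - m) > 2 * s]
print("suspicious weights:", outliers)
```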
## System
The essential job of this system is to find a balance between fixing dirty data and maintaining the data as close as possible to the original data from the source production system. This is a challenge for the extract, transform, load architect. The system should offer an architecture that can cleanse data, record quality events and measure/control the quality of data in the data warehouse. A good start is to perform a thorough data profiling analysis that will help define the required complexity of the data cleansing system and also give an idea of the current data quality in the source system(s).
## Quality screens
Part of the data cleansing system is a set of diagnostic filters known as quality screens. They each implement a test in the data flow that, if it fails, records an error in the Error Event Schema. Quality screens are divided into three categories:
- Column screens. Testing the individual column, e.g. for unexpected values like NULL values; non-numeric values that should be numeric; out-of-range values; etc.
- Structure screens. These are used to test for the integrity of different relationships between columns (typically foreign/primary keys) in the same or different tables. They are also used for testing that a group of columns is valid according to some structural definition to which it should adhere.
- Business rule screens. The most complex of the three tests. They test to see whether data, maybe across multiple tables, follow specific business rules. An example could be that if a customer is marked as a certain type of customer, the business rules that define this kind of customer should be adhered to.
When a quality screen records an error, it can either stop the dataflow process, send the faulty data somewhere other than the target system, or tag the data.
The third option is considered the best solution, because the first requires that someone manually deal with the issue each time it occurs, and the second implies that data are missing from the target system (integrity) and it is often unclear what should happen to these data.
## Criticism of existing tools and processes
Most data cleansing tools have limitations in usability:
- Project costs: costs typically in the hundreds of thousands of dollars
- Time: mastering large-scale data-cleansing software is time-consuming
- Security: cross-validation requires sharing information, giving application access across systems, including sensitive legacy systems
## Error event schema
The error event schema holds records of all error events thrown by the quality screens. It consists of an error event fact table with foreign keys to three dimension tables that represent a date (when), batch job (where), and screen (who produced error). It also holds information about exactly when the error occurred and the severity of the error.
Also, there is an error event detail fact table with a foreign key to the main table that contains detailed information about in which table, record and field the error occurred and the error condition.
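As an illustration of this schema, here is a minimal sketch using Python's built-in sqlite3 module; the table and column names are an invented reading of the description above, not a prescribed layout.

```python
import sqlite3

# A minimal sketch of an error event schema, using an in-memory SQLite database.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE dim_date   (date_key INTEGER PRIMARY KEY, day TEXT);
CREATE TABLE dim_batch  (batch_key INTEGER PRIMARY KEY, job_name TEXT);
CREATE TABLE dim_screen (screen_key INTEGER PRIMARY KEY, screen_name TEXT);

CREATE TABLE error_event (
    event_key   INTEGER PRIMARY KEY,
    date_key    INTEGER REFERENCES dim_date(date_key),     -- when
    batch_key   INTEGER REFERENCES dim_batch(batch_key),   -- where
    screen_key  INTEGER REFERENCES dim_screen(screen_key), -- which screen produced the error
    occurred_at TEXT,
    severity    INTEGER
);

CREATE TABLE error_event_detail (
    event_key  INTEGER REFERENCES error_event(event_key),
    table_name TEXT, record_id TEXT, field_name TEXT, condition TEXT
);
""")

# A column screen recording one error event plus its detail row.
db.execute("INSERT INTO dim_date VALUES (1, '2024-01-01')")
db.execute("INSERT INTO dim_batch VALUES (1, 'nightly_load')")
db.execute("INSERT INTO dim_screen VALUES (1, 'customer.age not null')")
db.execute("INSERT INTO error_event VALUES (1, 1, 1, 1, '2024-01-01T02:13:00', 3)")
db.execute("INSERT INTO error_event_detail VALUES (1, 'customer', '42', 'age', 'NULL value')")
print(db.execute("SELECT COUNT(*) FROM error_event").fetchone()[0], "error event recorded")
```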
In geometry, a set of points is convex if it contains every line segment between two points in the set.
For example, a solid cube is a convex set, but anything that is hollow or has an indent, for example, a crescent shape, is not convex.
The boundary of a convex set in the plane is always a convex curve. The intersection of all the convex sets that contain a given subset A of Euclidean space is called the convex hull of A. It is the smallest convex set containing A.
A convex function is a real-valued function defined on an interval with the property that its epigraph (the set of points on or above the graph of the function) is a convex set. Convex minimization is a subfield of optimization that studies the problem of minimizing convex functions over convex sets. The branch of mathematics devoted to the study of properties of convex sets and convex functions is called convex analysis.
Spaces in which convex sets are defined include the Euclidean spaces, the affine spaces over the real numbers, and certain non-Euclidean geometries.
## Definitions
Let X be a vector space or an affine space over the real numbers, or, more generally, over some ordered field (this includes Euclidean spaces, which are affine spaces). A subset C of X is convex if, for all x and y in C, the line segment connecting x and y is included in C.
This means that the affine combination (1 - t)x + ty belongs to C for all x and y in C and t in the interval [0, 1]. This implies that convexity is invariant under affine transformations. Further, it implies that a convex set in a real or complex topological vector space is path-connected (and therefore also connected).
A set C is strictly convex if every point on the line segment connecting x and y other than the endpoints is inside the topological interior of C. A closed convex subset is strictly convex if and only if every one of its boundary points is an extreme point.
A set is absolutely convex if it is convex and balanced.
### Examples
The convex subsets of R (the set of real numbers) are the intervals and the points of R. Some examples of convex subsets of the Euclidean plane are solid regular polygons, solid triangles, and intersections of solid triangles. Some examples of convex subsets of a Euclidean 3-dimensional space are the Archimedean solids and the Platonic solids. The Kepler-Poinsot polyhedra are examples of non-convex sets.
### Non-convex set
A set that is not convex is called a non-convex set. A polygon that is not a convex polygon is sometimes called a concave polygon, and some sources more generally use the term concave set to mean a non-convex set, but most authorities prohibit this usage.
The complement of a convex set, such as the epigraph of a concave function, is sometimes called a reverse convex set, especially in the context of mathematical optimization.
## Properties
Given r points u_1, ..., u_r in a convex set C, and r nonnegative numbers λ_1, ..., λ_r such that λ_1 + ... + λ_r = 1, the affine combination
$$
\sum_{k=1}^r\lambda_k u_k
$$
belongs to C.
As the definition of a convex set is the case r = 2, this property characterizes convex sets.
Such an affine combination is called a convex combination of u_1, ..., u_r. The convex hull of a subset S of a real vector space is defined as the intersection of all convex sets that contain S. More concretely, the convex hull is the set of all convex combinations of points in S. In particular, this is a convex set.
A (bounded) convex polytope is the convex hull of a finite subset of some Euclidean space R^n.
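As a quick numerical illustration of this characterization (not part of the article), the sketch below forms random convex combinations of points sampled from the closed unit disk and checks that each combination stays in the disk.

```python
import random

def in_unit_disk(p, tol=1e-12):
    return p[0] ** 2 + p[1] ** 2 <= 1.0 + tol

random.seed(0)
for _ in range(1000):
    # sample a few points of the closed unit disk by rejection
    pts = []
    while len(pts) < 4:
        p = (random.uniform(-1, 1), random.uniform(-1, 1))
        if in_unit_disk(p):
            pts.append(p)
    # nonnegative weights summing to 1
    w = [random.random() for _ in pts]
    total = sum(w)
    w = [wi / total for wi in w]
    comb = (sum(wi * p[0] for wi, p in zip(w, pts)),
            sum(wi * p[1] for wi, p in zip(w, pts)))
    assert in_unit_disk(comb)   # a convex combination stays in the convex set
print("all convex combinations stayed inside the disk")
```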
### Intersections and unions
The collection of convex subsets of a vector space, an affine space, or a Euclidean space has the following properties:
1. The empty set and the whole space are convex.
1. The intersection of any collection of convex sets is convex.
1. The union of a collection of convex sets is convex if those sets form a chain (a totally ordered set) under inclusion. For this property, the restriction to chains is important, as the union of two convex sets need not be convex.
### Closed convex sets
Closed convex sets are convex sets that contain all their limit points. They can be characterised as the intersections of closed half-spaces (sets of points in space that lie on and to one side of a hyperplane).
From what has just been said, it is clear that such intersections are convex, and they will also be closed sets. To prove the converse, i.e., every closed convex set may be represented as such an intersection, one needs the supporting hyperplane theorem in the form that for a given closed convex set C and point P outside it, there is a closed half-space H that contains C and not P. The supporting hyperplane theorem is a special case of the Hahn–Banach theorem of functional analysis.
### Face of a convex set
A face of a convex set C is a convex subset F of C such that whenever a point p in F lies strictly between two points x and y in C, both x and y must be in F. Equivalently, for any x, y ∈ C and any real number 0 < t < 1 such that (1 - t)x + ty is in F, x and y must be in F. According to this definition, C itself and the empty set are faces of C; these are sometimes called the trivial faces of C.
An extreme point of C is a point that is a face of C.
Let C be a convex set in R^n that is compact (or equivalently, closed and bounded). Then C is the convex hull of its extreme points. More generally, each compact convex set in a locally convex topological vector space is the closed convex hull of its extreme points (the Krein–Milman theorem).
For example:
- A triangle in the plane (including the region inside) is a compact convex set. Its nontrivial faces are the three vertices and the three edges. (So the only extreme points are the three vertices.)
- The only nontrivial faces of the closed unit disk { (x, y) ∈ R^2 : x^2 + y^2 ≤ 1 } are its extreme points, namely the points on the unit circle S^1 = { (x, y) ∈ R^2 : x^2 + y^2 = 1 }.
### Convex sets and rectangles
Let C be a convex body in the plane (a convex set whose interior is non-empty). We can inscribe a rectangle r in C such that a homothetic copy R of r is circumscribed about C. The positive homothety ratio is at most 2 and:
$$
\tfrac{1}{2} \cdot\operatorname{Area}(R) \leq \operatorname{Area}(C) \leq 2\cdot \operatorname{Area}(r)
$$
### Blaschke-Santaló diagrams
The set
$$
\mathcal{K}^2
$$
of all planar convex bodies can be parameterized in terms of the convex body diameter D, its inradius r (the biggest circle contained in the convex body) and its circumradius R (the smallest circle containing the convex body). In fact, this set can be described by the set of inequalities given by
$$
2r \le D \le 2R
$$
$$
R \le \frac{\sqrt{3}}{3} D
$$
$$
r + R \le D
$$
$$
D^2 \sqrt{4R^2-D^2} \le 2R (2R + \sqrt{4R^2 -D^2})
$$
and can be visualized as the image of the function g that maps a convex body to the point given by (r/R, D/2R).
The image of this function is known as an (r, D, R) Blaschke-Santaló diagram.
Alternatively, the set
$$
\mathcal{K}^2
$$
can also be parametrized by its width (the smallest distance between any two different parallel support hyperplanes), perimeter and area.
### Other properties
Let X be a topological vector space and C ⊆ X be convex.
- Cl C and Int C are both convex (i.e. the closure and interior of convex sets are convex).
- If a ∈ Int C and b ∈ Cl C then [a, b[ ⊆ Int C (where [a, b[ := { (1 - r)a + rb : 0 ≤ r < 1 }).
- If Int C ≠ ∅ then:
  - Cl(Int C) = Cl C, and
  - Int C = Int(Cl C) = C^i, where C^i is the algebraic interior of C.
### Convex hulls
Every subset S of the vector space is contained within a smallest convex set (called the convex hull of S), namely the intersection of all convex sets containing S.
The convex-hull operator Conv() has the characteristic properties of a closure operator:
- extensive: S ⊆ Conv(S),
- non-decreasing: S ⊆ T implies that Conv(S) ⊆ Conv(T), and
- idempotent: Conv(Conv(S)) = Conv(S).
The convex-hull operation is needed for the set of convex sets to form a lattice, in which the "join" operation is the convex hull of the union of two convex sets
$$
\operatorname{Conv}(S)\vee\operatorname{Conv}(T) = \operatorname{Conv}(S\cup T) = \operatorname{Conv}\bigl(\operatorname{Conv}(S)\cup\operatorname{Conv}(T)\bigr).
$$
The intersection of any collection of convex sets is itself convex, so the convex subsets of a (real or complex) vector space form a complete lattice.
### Minkowski addition
In a real vector space, the Minkowski sum of two (non-empty) sets, S_1 and S_2, is defined to be the set formed by the addition of vectors element-wise from the summand sets
$$
S_1+S_2=\{x_1+x_2: x_1\in S_1, x_2\in S_2\}.
$$
More generally, the Minkowski sum of a finite family of (non-empty) sets S_n is the set formed by element-wise addition of vectors
$$
\sum_n S_n = \left \{ \sum_n x_n : x_n \in S_n \right \}.
$$
For Minkowski addition, the zero set containing only the zero vector has special importance:
For every non-empty subset S of a vector space,
$$
S+\{0\}=S;
$$
in algebraic terminology, {0} is the identity element of Minkowski addition (on the collection of non-empty sets).
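For finite subsets of the plane, the Minkowski sum can be computed directly by element-wise addition; the following small sketch (an illustration with invented point sets) also checks that adding {0} leaves a set unchanged.

```python
def minkowski_sum(S1, S2):
    """Element-wise vector addition of two finite subsets of the plane."""
    return {(x1 + x2, y1 + y2) for (x1, y1) in S1 for (x2, y2) in S2}

S1 = {(0, 0), (1, 0), (0, 1)}          # vertices of a triangle
S2 = {(0, 0), (2, 0), (2, 2), (0, 2)}  # vertices of a square

print(sorted(minkowski_sum(S1, S2)))
# The singleton {0} is the identity element of Minkowski addition:
assert minkowski_sum(S1, {(0, 0)}) == S1
```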
### Convex hulls of Minkowski sums
Minkowski addition behaves well with respect to the operation of taking convex hulls, as shown by the following proposition:
Let S_1 and S_2 be subsets of a real vector space; the convex hull of their Minkowski sum is the Minkowski sum of their convex hulls:
$$
\operatorname{Conv}(S_1+S_2)=\operatorname{Conv}(S_1)+\operatorname{Conv}(S_2).
$$
This result holds more generally for each finite collection of non-empty sets:
$$
\text{Conv}\left ( \sum_n S_n \right ) = \sum_n \text{Conv} \left (S_n \right).
$$
In mathematical terminology, the operations of Minkowski summation and of forming convex hulls are commuting operations.
For the commutativity of Minkowski addition and convexification, see Theorem 1.1.2 (pages 2–3) in Schneider; this reference discusses much of the literature on the convex hulls of Minkowski sumsets in its "Chapter 3 Minkowski addition" (pages 126–196).
### Minkowski sums of convex sets
The Minkowski sum of two compact convex sets is compact. The sum of a compact convex set and a closed convex set is closed.
The following famous theorem, proved by Dieudonné in 1966, gives a sufficient condition for the difference of two closed convex subsets to be closed. It uses the concept of a recession cone of a non-empty convex subset S, defined as:
$$
\operatorname{rec} S = \left\{ x \in X \, : \, x + S \subseteq S \right\},
$$
where this set is a convex cone containing 0 ∈ X and satisfying
$$
S + \operatorname{rec} S = S.
$$
Note that if S is closed and convex then rec S is closed and, for all s_0 ∈ S,
$$
\operatorname{rec} S = \bigcap_{t > 0} t (S - s_0).
$$
Theorem (Dieudonné). Let A and B be non-empty, closed, and convex subsets of a locally convex topological vector space such that rec A ∩ rec B is a linear subspace.
If A or B is locally compact then A − B is closed.
## Generalizations and extensions for convexity
The notion of convexity in the Euclidean space may be generalized by modifying the definition in some or other aspects. The common name "generalized convexity" is used, because the resulting objects retain certain properties of convex sets.
### Star-convex (star-shaped) sets
Let C be a set in a real or complex vector space. C is star convex (star-shaped) if there exists an x_0 in C such that the line segment from x_0 to any point y in C is contained in C. Hence a non-empty convex set is always star-convex but a star-convex set is not always convex.
### Orthogonal convexity
An example of generalized convexity is orthogonal convexity.
A set S in the Euclidean space is called orthogonally convex or ortho-convex, if any segment parallel to any of the coordinate axes connecting two points of S lies totally within S. It is easy to prove that an intersection of any collection of orthoconvex sets is orthoconvex. Some other properties of convex sets are valid as well.
### Non-Euclidean geometry
The definition of a convex set and a convex hull extends naturally to geometries which are not Euclidean by defining a geodesically convex set to be one that contains the geodesics joining any two points in the set.
### Order topology
Convexity can be extended for a totally ordered set X endowed with the order topology.
Let Y ⊆ X. The subspace Y is a convex set if for each pair of points a, b in Y such that a ≤ b, the interval [a, b] = { x ∈ X : a ≤ x ≤ b } is contained in Y.
That is, Y is convex if and only if for all a, b in Y, a ≤ b implies [a, b] ⊆ Y.
A convex set is not connected in general: a counter-example is given by the subspace {1,2,3} in Z, which is both convex and not connected.
### Convexity spaces
The notion of convexity may be generalised to other objects, if certain properties of convexity are selected as axioms.
Given a set X, a convexity over X is a collection 𝒞 of subsets of X satisfying the following axioms:
1. The empty set and X are in 𝒞.
1. The intersection of any collection from 𝒞 is in 𝒞.
1. The union of a chain (with respect to the inclusion relation) of elements of 𝒞 is in 𝒞.
The elements of 𝒞 are called convex sets and the pair (X, 𝒞) is called a convexity space. For the ordinary convexity, the first two axioms hold, and the third one is trivial.
For an alternative definition of abstract convexity, more suited to discrete geometry, see the convex geometries associated with antimatroids.
### Convex spaces
Convexity can be generalised as an abstract algebraic structure: a space is convex if it is possible to take convex combinations of points.
Discrete calculus or the calculus of discrete functions, is the mathematical study of incremental change, in the same way that geometry is the study of shape and algebra is the study of generalizations of arithmetic operations. The word calculus is a Latin word, meaning originally "small pebble"; as such pebbles were used for calculation, the meaning of the word has evolved and today usually means a method of computation. Meanwhile, calculus, originally called infinitesimal calculus or "the calculus of infinitesimals", is the study of continuous change.
Discrete calculus has two entry points, differential calculus and integral calculus. Differential calculus concerns incremental rates of change and the slopes of piece-wise linear curves. Integral calculus concerns accumulation of quantities and the areas under piece-wise constant curves. These two points of view are related to each other by the fundamental theorem of discrete calculus.
The study of the concepts of change starts with their discrete form. The development is dependent on a parameter, the increment
$$
\Delta x
$$
of the independent variable. If we so choose, we can make the increment smaller and smaller and find the continuous counterparts of these concepts as limits. Informally, the limit of discrete calculus as
$$
\Delta x\to 0
$$
is infinitesimal calculus.
Even though it serves as a discrete underpinning of calculus, the main value of discrete calculus is in applications.
## Two initial constructions
Discrete differential calculus is the study of the definition, properties, and applications of the difference quotient of a function. The process of finding the difference quotient is called differentiation. Given a function defined at several points of the real line, the difference quotient at that point is a way of encoding the small-scale (i.e., from the point to the next) behavior of the function. By finding the difference quotient of a function at every pair of consecutive points in its domain, it is possible to produce a new function, called the difference quotient function or just the difference quotient of the original function. In formal terms, the difference quotient is a linear operator which takes a function as its input and produces a second function as its output. This is more abstract than many of the processes studied in elementary algebra, where functions usually input a number and output another number.
For example, if the doubling function is given the input three, then it outputs six, and if the squaring function is given the input three, then it outputs nine. The derivative, however, can take the squaring function as an input. This means that the derivative takes all the information of the squaring function (such as that two is sent to four, three is sent to nine, four is sent to sixteen, and so on) and uses this information to produce another function. The function produced by differentiating the squaring function turns out to be something close to the doubling function.
Suppose the functions are defined at points separated by an increment h:
$$
a, a+h, a+2h, \ldots, a+nh,\ldots
$$
The "doubling function" may be denoted by g(x) = 2x and the "squaring function" by f(x) = x^2.
The "difference quotient" is the rate of change of the function over one of the intervals [x, x+h], defined by the formula:
$$
\frac{f(x+h)-f(x)}{h}.
$$
It takes the function f as an input, that is, all the information (such as that two is sent to four, three is sent to nine, four is sent to sixteen, and so on), and uses this information to output another function, the function g(x) = 2x + h, as will turn out.
As a matter of convenience, the new function may be defined at the middle points of the above intervals:
$$
a+h/2, a+h+h/2, a+2h+h/2,..., a+nh+h/2,...
$$
As the rate of change is that for the whole interval [x, x+h], any point within it can be used as such a reference or, even better, the whole interval, which makes the difference quotient a 1-cochain.
The most common notation for the difference quotient is:
$$
\frac{\Delta f}{\Delta x}(x+h/2)=\frac{f(x+h)-f(x)}{h}.
$$
If the input of the function represents time, then the difference quotient represents change with respect to time.
For example, if f is a function that takes a time as input and gives the position of a ball at that time as output, then the difference quotient of f is how the position is changing in time, that is, it is the velocity of the ball.
If a function is linear (that is, if the points of the graph of the function lie on a straight line), then the function can be written as y = mx + b, where x is the independent variable, y is the dependent variable, b is the y-intercept, and:
$$
m= \frac{\text{rise}}{\text{run}}= \frac{\text{change in } y}{\text{change in } x} = \frac{\Delta y}{\Delta x}.
$$
This gives an exact value for the slope of a straight line.
If the function is not linear, however, then the change in y divided by the change in x varies.
The difference quotient gives an exact meaning to the notion of change in output with respect to change in input. To be concrete, let f be a function, and fix a point x in the domain of f. (x, f(x)) is a point on the graph of the function. If h is the increment of x, then x + h is the next value of x.
Therefore, (x + h, f(x + h)) is the increment of (x, f(x)). The slope of the line between these two points is
$$
m = \frac{f(x+h) - f(x)}{(x+h) - x} = \frac{f(x+h) - f(x)}{h}.
$$
So m is the slope of the line between (x, f(x)) and (x + h, f(x + h)).
Here is a particular example, the difference quotient of the squaring function. Let f(x) = x^2 be the squaring function.
Then:
$$
\begin{align}\frac{\Delta f}{\Delta x}(x) &={(x+h)^2 - x^2\over{h}} \\
&={x^2 + 2hx + h^2 - x^2\over{h}} \\
&={2hx + h^2\over{h}} \\
&= 2x + h .
\end{align}
$$
The difference quotient of the difference quotient is called the second difference quotient, and it is defined at
$$
a+h, a+2h, a+3h, \ldots, a+nh,\ldots
$$
and so on.
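The computation above can be checked numerically. The sketch below (an illustration, not part of the article) evaluates the difference quotient of the squaring function on a small grid and compares it with 2x + h.

```python
def difference_quotient(f, x, h):
    """Rate of change of f over the interval [x, x + h]."""
    return (f(x + h) - f(x)) / h

f = lambda x: x ** 2      # the squaring function
h = 0.5
a = 0.0
for n in range(5):
    x = a + n * h
    dq = difference_quotient(f, x, h)
    print(f"x = {x:4.1f}  Delta f / Delta x = {dq:5.2f}  2x + h = {2 * x + h:5.2f}")
    assert abs(dq - (2 * x + h)) < 1e-12
```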
Discrete integral calculus is the study of the definitions, properties, and applications of the Riemann sums. The process of finding the value of a sum is called integration. In technical language, integral calculus studies a certain linear operator.
The Riemann sum inputs a function and outputs a function, which gives the algebraic sum of areas between the part of the graph of the input and the x-axis.
A motivating example is the distances traveled in a given time.
$$
\text{distance} = \text{speed} \cdot \text{time}
$$
If the speed is constant, only multiplication is needed, but if the speed changes, we evaluate the distance traveled by breaking up the time into many short intervals of time, then multiplying the time elapsed in each interval by one of the speeds in that interval, and then taking the sum (a Riemann sum) of the distance traveled in each interval.
When velocity is constant, the total distance traveled over the given time interval can be computed by multiplying velocity and time. For example, travelling a steady 50 mph for 3 hours results in a total distance of 150 miles. In the diagram on the left, when constant velocity and time are graphed, these two values form a rectangle with height equal to the velocity and width equal to the time elapsed. Therefore, the product of velocity and time also calculates the rectangular area under the (constant) velocity curve.
This connection between the area under a curve and distance traveled can be extended to any irregularly shaped region exhibiting an incrementally varying velocity over a given time period. If the bars in the diagram on the right represent speed as it varies from one interval to the next, the distance traveled (between the times represented by a and b) is the area of the shaded region s.
So, the interval between a and b is divided into a number of equal segments, the length of each segment represented by the symbol Δx. For each small segment, we have one value of the function f(x). Call that value v. Then the area of the rectangle with base Δx and height v gives the distance (time Δx multiplied by speed v) traveled in that segment.
Associated with each segment is the value of the function above it, f(x) = v. The sum of all such rectangles gives the area between the axis and the piece-wise constant curve, which is the total distance traveled.
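To illustrate the piecewise-constant picture just described, the following sketch (with invented speeds) adds up speed times the length of each time segment to get the total distance.

```python
# Speeds (mph) held constant over successive half-hour segments; values are invented.
dt = 0.5                      # length of each time segment, in hours
speeds = [50, 50, 60, 45, 30, 55]

distance = sum(v * dt for v in speeds)   # area under the piece-wise constant curve
print(f"total distance traveled: {distance} miles")
```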
Suppose a function is defined at the mid-points of the intervals of equal length Δx = h > 0:
$$
a+h/2, a+h+h/2, a+2h+h/2,\ldots, a+nh-h/2,\ldots
$$
Then the Riemann sum from a to b = a + nh in sigma notation is:
$$
\sum_{i=1}^n f(a+ih-h/2)\, \Delta x.
$$
As this computation is carried out for each n, the new function is defined at the points:
$$
a, a+h, a+2h, \ldots, a+nh,\ldots
$$
The fundamental theorem of calculus states that differentiation and integration are inverse operations.
More precisely, it relates the difference quotients to the Riemann sums. It can also be interpreted as a precise statement of the fact that differentiation is the inverse of integration.
The fundamental theorem of calculus: If a function
$$
f
$$
is defined on a partition of the interval
$$
[a, b]
$$
,
$$
b=a+nh
$$
, and if
$$
F
$$
is a function whose difference quotient is
$$
f
$$
, then we have:
$$
\sum_{i=0}^{n-1} f(a+ih+h/2)\, \Delta x = F(b) - F(a).
$$
Furthermore, for every
$$
m=0,1,2,\ldots,n-1
$$
, we have:
$$
\frac{\Delta}{\Delta x}\sum_{i=0}^m f(a+ih+h/2)\, \Delta x = f(a+mh+h/2).
$$
This is also a prototype solution of a difference equation. Difference equations relate an unknown function to its difference or difference quotient, and are ubiquitous in the sciences.
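As a quick numerical illustration of the theorem, the following sketch (with an arbitrarily chosen $F$; all names and values are assumptions for the example) sums the difference quotient of $F$ at the midpoints and recovers $F(b)-F(a)$ up to rounding.

```python
# A minimal numerical check of the discrete fundamental theorem of calculus.
# F is an arbitrary 0-cochain; f is its difference quotient sampled at midpoints.
a, h, n = 0.0, 0.25, 8
F = lambda x: x**3 - 2*x                       # any function will do; chosen for illustration
f = lambda x: (F(x + h/2) - F(x - h/2)) / h    # difference quotient at a midpoint x

b = a + n*h
lhs = sum(f(a + i*h + h/2) * h for i in range(n))
print(lhs, F(b) - F(a))                        # the two numbers agree (up to rounding)
```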
## History
The early history of discrete calculus is the history of calculus. Such basic ideas as the difference quotients and the Riemann sums appear implicitly or explicitly in definitions and proofs.
After the limit is taken, however, they are never to be seen again. Nevertheless, Kirchhoff's voltage law (1847) can be expressed in terms of the one-dimensional discrete exterior derivative.
During the 20th century, discrete calculus remained interlinked with infinitesimal calculus, especially differential forms, but also began to draw from algebraic topology as both fields developed.
The main contributions come from the following individuals:
- Henri Poincaré: triangulations (barycentric subdivision, dual triangulation), Poincaré lemma, the first proof of the general Stokes Theorem, and a lot more
- L. E. J. Brouwer: simplicial approximation theorem
- Élie Cartan, Georges de Rham: the notion of differential form, the exterior derivative as a coordinate-independent linear operator, exactness/closedness of forms
- Emmy Noether, Heinz Hopf, Leopold Vietoris, Walther Mayer: modules of chains, the boundary operator, chain complexes
- J. W. Alexander, Solomon Lefschetz, Lev Pontryagin, Andrey Kolmogorov, Norman Steenrod, Eduard Čech: the early cochain notions
- Hermann Weyl: the Kirchhoff laws stated in terms of the boundary and the coboundary operators
- W. V. D. Hodge: the Hodge star operator, the Hodge decomposition
- Samuel Eilenberg, Saunders Mac Lane, Norman Steenrod, J.H.C. Whitehead: the rigorous development of homology and cohomology theory including chain and cochain complexes, the cup product
- Hassler Whitney: cochains as integrands
The recent development of discrete calculus, starting with Whitney, has been driven by the needs of applied modeling.
## Applications
Discrete calculus is used for modeling either directly or indirectly as a discretization of infinitesimal calculus in every branch of the physical sciences, actuarial science, computer science, statistics, engineering, economics, business, medicine, demography, and in other fields wherever a problem can be mathematically modeled.
It allows one to go from (non-constant) rates of change to the total change or vice versa, and many times in studying a problem we know one and are trying to find the other.
Physics makes particular use of calculus; all discrete concepts in classical mechanics and electromagnetism are related through discrete calculus. The mass of an object of known density that varies incrementally, the moment of inertia of such objects, as well as the total energy of an object within a discrete conservative field can be found by the use of discrete calculus. An example of the use of discrete calculus in mechanics is Newton's second law of motion: historically stated, it expressly uses the term "change of motion", which implies the difference quotient, saying that the change of momentum of a body is equal to the resultant force acting on the body and is in the same direction. Commonly expressed today as Force = Mass × Acceleration, it invokes discrete calculus when the change is incremental because acceleration is the difference quotient of velocity with respect to time, or the second difference quotient of the spatial position.
Starting from knowing how an object is accelerating, we use the Riemann sums to derive its path.
Maxwell's theory of electromagnetism and Einstein's theory of general relativity have been expressed in the language of discrete calculus.
Chemistry uses calculus in determining reaction rates and radioactive decay (exponential decay).
In biology, population dynamics starts with reproduction and death rates to model population changes (population modeling).
In engineering, difference equations are used to plot a course of a spacecraft within zero gravity environments, to model heat transfer, diffusion, and wave propagation.
The discrete analogue of Green's theorem is applied in an instrument known as a planimeter, which is used to calculate the area of a flat surface on a drawing.
For example, it can be used to calculate the amount of area taken up by an irregularly shaped flower bed or swimming pool when designing the layout of a piece of property. It can be used to efficiently calculate sums of rectangular domains in images, to rapidly extract features and detect objects; another algorithm that could be used is the summed area table.
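A minimal sketch of a summed area table in Python (the array values are illustrative) shows how, once the cumulative sums are precomputed, any rectangular sum is obtained from just four table entries.

```python
def summed_area_table(img):
    """Build a summed-area table: S[i][j] holds the sum of img over all
    cells with row index < i and column index < j."""
    rows, cols = len(img), len(img[0])
    S = [[0] * (cols + 1) for _ in range(rows + 1)]
    for i in range(rows):
        for j in range(cols):
            S[i+1][j+1] = img[i][j] + S[i][j+1] + S[i+1][j] - S[i][j]
    return S

def rect_sum(S, r0, c0, r1, c1):
    """Sum of img over rows r0..r1-1 and columns c0..c1-1, in O(1) by inclusion-exclusion."""
    return S[r1][c1] - S[r0][c1] - S[r1][c0] + S[r0][c0]

img = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
S = summed_area_table(img)
print(rect_sum(S, 1, 1, 3, 3))   # 5 + 6 + 8 + 9 = 28
```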
In the realm of medicine, calculus can be used to find the optimal branching angle of a blood vessel so as to maximize flow. From the decay laws for a particular drug's elimination from the body, it is used to derive dosing laws. In nuclear medicine, it is used to build models of radiation transport in targeted tumor therapies.
In economics, calculus allows for the determination of maximal profit by calculating both marginal cost and marginal revenue, as well as modeling of markets.
In signal processing and machine learning, discrete calculus allows for appropriate definitions of operators (e.g., convolution), level set optimization and other key functions for neural network analysis on graph structures.
Discrete calculus can be used in conjunction with other mathematical disciplines. For example, it can be used in probability theory to determine the probability of a discrete random variable from an assumed density function.
## Calculus of differences and sums
Suppose a function (a
$$
0
$$
-cochain)
$$
f
$$
is defined at points separated by an increment
$$
h>0
$$
:
$$
a, a+h, a+2h, \ldots, a+nh,\ldots
$$
The difference (or the exterior derivative, or the coboundary operator) of the function is given by:
$$
\big(\Delta f\big)\big([x,x+h]\big)=f(x+h)-f(x).
$$
It is defined at each of the above intervals; it is a
$$
1
$$
-cochain.
Suppose a
$$
1
$$
-cochain
$$
g
$$
is defined at each of the above intervals.
Then its sum is a function (a
$$
0
$$
-cochain) defined at each of the points by:
$$
\left(\sum g\right)\!(a+nh) = \sum_{i=1}^{n} g\big([a+(i-1)h,a+ih]\big).
$$
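A minimal Python sketch of these two operators, with a 0-cochain stored as a list of point values and a 1-cochain as a list of interval values (the sample values are illustrative assumptions):

```python
# A 0-cochain is a list of values at the points a, a+h, ..., a+nh;
# a 1-cochain is a list of values on the n intervals between them.
def difference(f_values):
    """Delta: 0-cochain -> 1-cochain, (Delta f)([x, x+h]) = f(x+h) - f(x)."""
    return [f_values[i+1] - f_values[i] for i in range(len(f_values) - 1)]

def cumulative_sum(g_values):
    """Sum: 1-cochain -> 0-cochain, starting from 0 at the left endpoint."""
    out = [0.0]
    for g in g_values:
        out.append(out[-1] + g)
    return out

f = [x**2 for x in range(6)]      # sample 0-cochain: 0, 1, 4, 9, 16, 25
g = difference(f)                 # 1-cochain: 1, 3, 5, 7, 9
print(cumulative_sum(g)[-1], f[-1] - f[0])   # both are 25: Sum(Delta f) = f(b) - f(a)
```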
These are their properties:
- Constant rule: If
$$
c
$$
is a constant, then
$$
\Delta c = 0
$$
- Linearity: if
$$
a
$$
and
$$
b
$$
are constants,
$$
\Delta (a f + b g) = a \,\Delta f + b \,\Delta g,\quad \sum (a f + b g) = a \,\sum f + b \,\sum g
$$
- Product rule:
$$
\Delta (f g) = f \,\Delta g + g \,\Delta f + \Delta f \,\Delta g
$$
- Fundamental theorem of calculus I:
$$
\left( \sum \Delta f\right)\!(a+nh) = f(a+nh)-f(a)
$$
- Fundamental theorem of calculus II:
$$
\Delta\!\left(\sum g\right) = g
$$
The definitions are applied to graphs as follows.
If a function (a
$$
0
$$
-cochain)
$$
f
$$
is defined at the nodes of a graph:
$$
a, b, c, \ldots
$$
then its exterior derivative (or the differential) is the difference, i.e., the following function defined on the edges of the graph (
$$
1
$$
-cochain):
$$
\left(df\right)\!\big([a,b]\big) = f(b)-f(a).
$$
If
$$
g
$$
is a
$$
1
$$
-cochain, then its integral over a sequence of edges
$$
\sigma
$$
of the graph is the sum of its values over all edges of
$$
\sigma
$$
("path integral"):
$$
\int_\sigma g = \sum_{\sigma} g\big([a,b]\big).
$$
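A minimal Python sketch of these graph-based definitions, with illustrative node values and edges chosen for the example:

```python
# A 0-cochain assigns a number to each node of the graph;
# its differential df assigns f(b) - f(a) to each oriented edge [a, b];
# the "path integral" of a 1-cochain sums its values along a sequence of edges.
nodes = {"a": 1.0, "b": 4.0, "c": 9.0, "d": 16.0}   # illustrative values
edges = [("a", "b"), ("b", "c"), ("c", "d")]

def d(f, edge_list):
    return {(u, v): f[v] - f[u] for (u, v) in edge_list}

def path_integral(g, path):
    return sum(g[e] for e in path)

df = d(nodes, edges)
print(path_integral(df, edges), nodes["d"] - nodes["a"])   # both 15.0: the sum telescopes
```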
These are the properties:
- Constant rule: If
$$
c
$$
is a constant, then
$$
dc = 0
$$
- Linearity: if
$$
a
$$
and
$$
b
$$
are constants,
$$
d(a f + b g) = a \,df + b \,dg,\quad \int_\sigma (a f + b g) = a \,\int_\sigma f + b \,\int_\sigma g
$$
- Product rule:
$$
d(f g) = f \,dg + g \,df + df \,dg
$$
- Fundamental theorem of calculus I: if a
$$
1
$$
-chain
$$
\sigma
$$
consists of the edges
$$
[a_0,a_1],[a_1,a_2],...,[a_{n-1},a_n]
$$
, then for any
$$
0
$$
-cochain
$$
f
$$
$$
\int_\sigma df = f(a_n)-f(a_0)
$$
- Fundamental theorem of calculus II: if the graph is a tree,
$$
g
$$
is a
$$
1
$$
-cochain, and a function (
$$
0
$$
-cochain) is defined on the nodes of the graph by
$$
f(x) = \int_\sigma g
$$
where a
$$
1
$$
-chain
$$
\sigma
$$
consists of
$$
[a_0,a_1],[a_1,a_2],...,[a_{n-1},x]
$$
for some fixed
$$
a_0
$$
, then
$$
df = g
$$
See references.
## Chains of simplices and cubes
A simplicial complex
$$
S
$$
is a set of simplices that satisfies the following conditions:
1. Every face of a simplex from
$$
S
$$
is also in
$$
S
$$
.
2. The non-empty intersection of any two simplices
$$
\sigma_1, \sigma_2 \in S
$$
is a face of both
$$
\sigma_1
$$
and
$$
\sigma_2
$$
.
By definition, an orientation of a k-simplex is given by an ordering of the vertices, written as
$$
(v_0,...,v_k)
$$
, with the rule that two orderings define the same orientation if and only if they differ by an even permutation. Thus every simplex has exactly two orientations, and switching the order of two vertices changes an orientation to the opposite orientation. For example, choosing an orientation of a 1-simplex amounts to choosing one of the two possible directions, and choosing an orientation of a 2-simplex amounts to choosing what "counterclockwise" should mean.
Let
$$
S
$$
be a simplicial complex. A simplicial k-chain is a finite formal sum
$$
\sum_{i=1}^N c_i \sigma_i, \,
$$
where each ci is an integer and σi is an oriented k-simplex. In this definition, we declare that each oriented simplex is equal to the negative of the simplex with the opposite orientation.
For example,
$$
(v_0,v_1) = -(v_1,v_0).
$$
The vector space of k-chains on
$$
S
$$
is written
$$
C_k
$$
. It has a basis in one-to-one correspondence with the set of k-simplices in
$$
S
$$
. To define a basis explicitly, one has to choose an orientation of each simplex. One standard way to do this is to choose an ordering of all the vertices and give each simplex the orientation corresponding to the induced ordering of its vertices.
Let
$$
\sigma = (v_0,...,v_k)
$$
be an oriented k-simplex, viewed as a basis element of
$$
C_k
$$
.
The boundary operator
$$
\partial_k: C_k \rightarrow C_{k-1}
$$
is the linear operator defined by:
$$
\partial_k(\sigma)=\sum_{i=0}^k (-1)^i (v_0 , \dots , \widehat{v_i} , \dots ,v_k),
$$
where the oriented simplex
$$
(v_0 , \dots , \widehat{v_i} , \dots ,v_k)
$$
is the
$$
i
$$
th face of
$$
\sigma
$$
, obtained by deleting its
$$
i
$$
th vertex.
In
$$
C_k
$$
, elements of the subgroup
$$
Z_k = \ker \partial_k
$$
are referred to as cycles, and the subgroup
$$
B_k = \operatorname{im} \partial_{k+1}
$$
is said to consist of boundaries.
A direct computation shows that
$$
\partial^2= 0
$$
. In geometric terms, this says that the boundary of anything has no boundary. Equivalently, the vector spaces
$$
(C_k, \partial_k)
$$
form a chain complex.
Another equivalent statement is that
$$
B_k
$$
is contained in
$$
Z_k
$$
.
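The following Python sketch (a simplified illustration; the representation of a chain as a dictionary of oriented vertex tuples is an assumption of the example, not a standard library API) computes boundaries of integer chains and checks that the boundary of a boundary vanishes.

```python
def canonical(simplex):
    """Return the sorted vertex tuple and the sign (+1 or -1) of the permutation
    relating the given ordering to the sorted one."""
    inversions = sum(1 for i in range(len(simplex)) for j in range(i + 1, len(simplex))
                     if simplex[i] > simplex[j])
    return tuple(sorted(simplex)), (-1) ** inversions

def add_to_chain(chain, simplex, coeff):
    """Add coeff * simplex to a chain stored as {sorted vertex tuple: integer coefficient}."""
    key, sign = canonical(simplex)
    chain[key] = chain.get(key, 0) + sign * coeff
    if chain[key] == 0:
        del chain[key]

def boundary(chain):
    """Boundary operator: alternating sum of the faces of each simplex in the chain."""
    out = {}
    for simplex, coeff in chain.items():
        for i in range(len(simplex)):
            face = simplex[:i] + simplex[i + 1:]
            add_to_chain(out, face, (-1) ** i * coeff)
    return out

triangle = {(0, 1, 2): 1}            # a single oriented 2-simplex
print(boundary(triangle))            # {(1, 2): 1, (0, 2): -1, (0, 1): 1}
print(boundary(boundary(triangle)))  # {}  -- the boundary of a boundary vanishes
```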
A cubical complex is a set composed of points, line segments, squares, cubes, and their n-dimensional counterparts. They are used analogously to simplices to form complexes. An elementary interval is a subset
$$
I\subset\mathbf{R}
$$
of the form
$$
I = [\ell, \ell+1]\quad\text{or}\quad I=[\ell, \ell]
$$
for some
$$
\ell\in\mathbf{Z}
$$
. An elementary cube
$$
Q
$$
is the finite product of elementary intervals, i.e.
$$
Q=I_1\times I_2\times \cdots\times I_d\subset \mathbf{R}^d
$$
where
$$
I_1,I_2,\ldots,I_d
$$
are elementary intervals.
Equivalently, an elementary cube is any translate of a unit cube
$$
[0,1]^n
$$
embedded in Euclidean space
$$
\mathbf{R}^d
$$
(for some
$$
n,d\in\mathbf{N}\cup\{0\}
$$
with
$$
n\leq d
$$
). A set
$$
X\subseteq\mathbf{R}^d
$$
is a cubical complex if it can be written as a union of elementary cubes (or possibly, is homeomorphic to such a set) and it contains all of the faces of all of its cubes.
The boundary operator and the chain complex are defined similarly to those for simplicial complexes.
More general are cell complexes.
A chain complex
$$
(C_*, \partial_*)
$$
is a sequence of vector spaces
$$
\ldots,C_0, C_1, C_2, C_3, C_4, \ldots
$$
connected by linear operators (called boundary operators)
$$
\partial_n : C_n \to C_{n-1}
$$
, such that the composition of any two consecutive maps is the zero map.
Explicitly, the boundary operators satisfy
$$
\partial_n \circ \partial_{n+1} = 0
$$
, or with indices suppressed,
$$
\partial^2 = 0
$$
. The complex may be written out as follows.
$$
\cdots
\xleftarrow{\partial_0} C_0
\xleftarrow{\partial_1} C_1
\xleftarrow{\partial_2} C_2
\xleftarrow{\partial_3} C_3
\xleftarrow{\partial_4} C_4
\xleftarrow{\partial_5}
\cdots
$$
A simplicial map is a map between simplicial complexes with the property that the images of the vertices of a simplex always span a simplex (therefore, vertices have vertices for images).
A simplicial map
$$
f
$$
from a simplicial complex
$$
S
$$
to another
$$
T
$$
is a function from the vertex set of
$$
S
$$
to the vertex set of
$$
T
$$
such that the image of each simplex in
$$
S
$$
(viewed as a set of vertices) is a simplex in
$$
T
$$
. It generates a linear map, called a chain map, from the chain complex of
$$
S
$$
to the chain complex of
$$
T
$$
.
Explicitly, it is given on
$$
k
$$
-chains by
$$
f((v_0, \ldots, v_k)) = (f(v_0),\ldots,f(v_k))
$$
if
$$
f(v_0), ..., f(v_k)
$$
are all distinct, and otherwise it is set equal to
$$
0
$$
.
A chain map
$$
f
$$
between two chain complexes
$$
(A_*, d_{A,*})
$$
and
$$
(B_*, d_{B,*})
$$
is a sequence
$$
f_*
$$
of homomorphisms
$$
f_n : A_n \rightarrow B_n
$$
for each
$$
n
$$
that commutes with the boundary operators on the two chain complexes, so
$$
d_{B,n} \circ f_n = f_{n-1} \circ d_{A,n}
$$
. This compatibility condition is often displayed as a commutative diagram.
A chain map sends cycles to cycles and boundaries to boundaries.
See references.
## Discrete differential forms: cochains
For each vector space Ci in the chain complex we consider its dual space
$$
C_i^* := \mathrm{Hom}(C_i,{\bf R}),
$$
and
$$
d^{i-1} := \partial_i^*
$$
is its dual linear operator
$$
d^{i-1}: C_{i-1}^* \to C_{i}^*.
$$
This has the effect of "reversing all the arrows" of the original complex, leaving a cochain complex
$$
\cdots \leftarrow C_{i+1}^* \stackrel{\partial^*_i}{\leftarrow}\ C_{i}^* \stackrel{\partial^*_{i-1}}{\leftarrow} C_{i-1}^* \leftarrow \cdots
$$
The cochain complex
$$
(C^*, d^*)
$$
is the dual notion to a chain complex.
It consists of a sequence of vector spaces
$$
\ldots, C^0, C^1, C^2, C^3, C^4, \ldots
$$
connected by linear operators
$$
d^n: C^n\to C^{n+1}
$$
satisfying
$$
d^{n+1}\circ d^n = 0
$$
.
The cochain complex may be written out in a similar fashion to the chain complex.
$$
\cdots
\xrightarrow{d^{-1}}
C^0 \xrightarrow{d^0}
C^1 \xrightarrow{d^1}
C^2 \xrightarrow{d^2}
C^3 \xrightarrow{d^3}
C^4 \xrightarrow{d^4}
\cdots
$$
The index
$$
n
$$
in either
$$
C_n
$$
or
$$
C^n
$$
is referred to as the degree (or dimension).
The difference between chain and cochain complexes is that, in chain complexes, the differentials decrease dimension, whereas in cochain complexes they increase dimension.
The elements of the individual vector spaces of a (co)chain complex are called cochains. The elements in the kernel of
$$
d
$$
are called cocycles (or closed elements), and the elements in the image of
$$
d
$$
are called coboundaries (or exact elements).
Right from the definition of the differential, all coboundaries are cocycles.
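In matrix terms (with respect to the bases given by the cells and their dual bases), the coboundary operators are simply the transposes of the boundary operators. A minimal sketch for a filled triangle (the incidence matrices are written out by hand for this example) checks both identities at once:

```python
import numpy as np

# A filled triangle: vertices 0, 1, 2; oriented edges (0,1), (0,2), (1,2); one 2-cell (0,1,2).
# Column j of partial_1 is the boundary of edge j; partial_2 holds the boundary of the 2-cell.
partial_1 = np.array([[-1, -1,  0],
                      [ 1,  0, -1],
                      [ 0,  1,  1]])
partial_2 = np.array([[ 1],
                      [-1],
                      [ 1]])   # boundary of (0,1,2) = (1,2) - (0,2) + (0,1)

# On the dual side, the coboundary operators are the transposes of the boundary operators.
d0 = partial_1.T   # 0-cochains (values at vertices) -> 1-cochains (values on edges)
d1 = partial_2.T   # 1-cochains -> 2-cochains

print(np.all(partial_1 @ partial_2 == 0))   # True: the boundary of a boundary is zero
print(np.all(d1 @ d0 == 0))                 # True: d composed with d is zero
```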
The Poincaré lemma states that if
$$
B
$$
is an open ball in
$$
{\bf R}^n
$$
, any closed
$$
p
$$
-form
$$
\omega
$$
defined on
$$
B
$$
is exact, for any integer
$$
p
$$
with
$$
1 \le p\le n
$$
.
When we refer to cochains as discrete (differential) forms, we refer to
$$
d
$$
as the exterior derivative.
We also use the calculus notation for the values of the forms:
$$
\omega (s)=\int_s\omega.
$$
Stokes' theorem is a statement about the discrete differential forms on manifolds, which generalizes the fundamental theorem of discrete calculus for a partition of an interval:
$$
\sum_{i=0}^{n-1} \frac{\Delta F}{\Delta x}(a+ih+h/2) \, \Delta x = F(b) - F(a).
$$
Stokes' theorem says that the sum of a form
$$
\omega
$$
over the boundary of some orientable manifold
$$
\Omega
$$
is equal to the sum of its exterior derivative
$$
d\omega
$$
over the whole of
$$
\Omega
$$
, i.e.,
$$
\int_\Omega d\omega=\int_{\partial \Omega}\omega\,.
$$
It is worthwhile to examine the underlying principle by considering an example for
$$
d=2
$$
dimensions.
The essential idea can be understood by the diagram on the left, which shows that, in an oriented tiling of a manifold, the interior paths are traversed in opposite directions; their contributions to the path integral thus cancel each other pairwise.
As a consequence, only the contribution from the boundary remains.
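A minimal numerical sketch of this cancellation for two triangles sharing an edge (the 1-cochain values are arbitrary illustrative numbers):

```python
# Two triangles (0,1,2) and (0,2,3) tile a square; they share the edge (0,2).
# A 1-cochain omega assigns a number to each oriented edge; reversing an edge flips the sign.
omega = {(0, 1): 2.0, (1, 2): -1.0, (0, 2): 5.0, (2, 3): 0.5, (0, 3): 4.0}

def w(a, b):
    return omega[(a, b)] if (a, b) in omega else -omega[(b, a)]

def d_omega(tri):
    a, b, c = tri
    return w(b, c) - w(a, c) + w(a, b)   # d(omega) evaluated on the oriented triangle

interior_sum = d_omega((0, 1, 2)) + d_omega((0, 2, 3))
boundary_sum = w(0, 1) + w(1, 2) + w(2, 3) + w(3, 0)   # outer boundary of the square
print(interior_sum, boundary_sum)   # equal: the contributions of the shared edge (0,2) cancel
```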
See references.
## The wedge product of forms
In discrete calculus, this is a construction that creates higher-order forms from forms: it adjoins two cochains of degree
$$
p
$$
and
$$
q
$$
to form a composite cochain of degree
$$
p + q
$$
.
For cubical complexes, the wedge product is defined on every cube seen as a vector space of the same dimension.
For simplicial complexes, the wedge product is implemented as the cup product: if
$$
f^p
$$
is a
$$
p
$$
-cochain and
$$
g^q
$$
is a
$$
q
$$
-cochain, then
$$
(f^p \smile g^q)(\sigma) = f^p(\sigma_{0,1, ..., p}) \cdot g^q(\sigma_{p, p+1 ,..., p + q})
$$
where
$$
\sigma
$$
is a
$$
(p + q)
$$
-simplex and
$$
\sigma_S,\ S \subset \{0,1,...,p+q \}
$$
,
is the face spanned by the vertices indexed by
$$
S
$$
of the
$$
(p+q)
$$
-simplex whose vertices are indexed by
$$
\{0,...,p+q \}
$$
. So,
$$
\sigma_{0,1, ..., p}
$$
is the
$$
p
$$
-th front face and
$$
\sigma_{p, p+1, ..., p + q}
$$
is the
$$
q
$$
-th back face of
$$
\sigma
$$
, respectively.
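A minimal Python sketch evaluating the cup product formula above on a single 2-simplex; the cochain values are arbitrary illustrative numbers, not from the source.

```python
# Cup product of a p-cochain f and a q-cochain g, evaluated on a (p+q)-simplex sigma:
# f is fed the front face (first p+1 vertices) and g the back face (last q+1 vertices).
def cup(f, p, g, q, sigma):
    front = sigma[:p + 1]
    back = sigma[p:]
    return f[front] * g[back]

# Two 1-cochains on the edges of the triangle (0, 1, 2), with illustrative values.
f1 = {(0, 1): 2.0, (1, 2): -1.0, (0, 2): 3.0}
g1 = {(0, 1): 0.5, (1, 2): 4.0, (0, 2): 1.0}

# (f1 cup g1) is a 2-cochain; on (0,1,2) it multiplies f1 on the front face (0,1)
# by g1 on the back face (1,2).
print(cup(f1, 1, g1, 1, (0, 1, 2)))   # 2.0 * 4.0 = 8.0
```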
The coboundary of the cup product of cochains
$$
f^p
$$
and
$$
g^q
$$
is given by
$$
d(f^p \smile g^q) = d{f^p} \smile g^q + (-1)^p(f^p \smile d{g^q}).
$$
The cup product of two cocycles is again a cocycle, and the product of a coboundary with a cocycle (in either order) is a coboundary.
The cup product operation satisfies the identity
$$
\alpha^p \smile \beta^q = (-1)^{pq}(\beta^q \smile \alpha^p).
$$
In other words, the corresponding multiplication is graded-commutative.
See references.
## Laplace operator
The Laplace operator
$$
\Delta f
$$
of a function
$$
f
$$
at a vertex
$$
p
$$
, is (up to a factor) the rate at which the average value of
$$
f
$$
over a cellular neighborhood of
$$
p
$$
deviates from
$$
f(p)
$$
. The Laplace operator represents the flux density of the gradient flow of a function. For instance, the net rate at which a chemical dissolved in a fluid moves toward or away from some point is proportional to the Laplace operator of the chemical concentration at that point; expressed symbolically, the resulting equation is the diffusion equation. For these reasons, it is extensively used in the sciences for modelling various physical phenomena.
The codifferential
$$
\delta:C^k\to C^{k-1}
$$
is an operator defined on
$$
k
$$
-forms by:
$$
\delta = (-1)^{n(k-1) + 1} {\star} d {\star} = (-1)^{k}\, {\star}^{-1} d {\star} ,
$$
where
$$
d
$$
is the exterior derivative or differential and
$$
\star
$$
is the Hodge star operator.
The codifferential is the adjoint of the exterior derivative according to Stokes' theorem:
$$
(\eta,\delta \zeta) = (d\eta,\zeta).
$$
Since the differential satisfies
$$
d^2=0
$$
, the codifferential has the corresponding property
$$
\delta^2 = {\star} d {\star} {\star} d {\star} = (-1)^{k(n-k)} {\star} d^2 {\star} = 0 .
$$
The Laplace operator is defined by:
$$
\Delta = (\delta + d)^2 = \delta d + d\delta .
$$
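As a small illustration (a sketch, assuming the standard Euclidean inner products on cochains so that the codifferential is simply the transpose of the exterior derivative), on 0-forms the Laplace operator reduces to the familiar graph Laplacian; the path graph below is an illustrative example.

```python
import numpy as np

# Path graph on 4 vertices with oriented edges (0,1), (1,2), (2,3).
# d0 sends a 0-cochain (one value per vertex) to a 1-cochain (one value per edge).
d0 = np.array([[-1,  1,  0,  0],
               [ 0, -1,  1,  0],
               [ 0,  0, -1,  1]])

# With these inner products the codifferential is the transpose of d, so on 0-forms
# the Laplace operator is delta d = d0.T @ d0 (the d delta term vanishes on 0-forms).
L = d0.T @ d0
print(L)          # the graph Laplacian: degree matrix minus adjacency matrix
f = np.array([0.0, 1.0, 4.0, 9.0])
print(L @ f)      # measures how each value deviates from the values at its neighbors
```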
See references.
## Related
- Discrete element method
- Divided differences
- Finite difference coefficient
- Finite difference method
- Finite element method
- Finite volume method
- Numerical differentiation
- Numerical integration
- Numerical methods for ordinary differential equations
|
https://en.wikipedia.org/wiki/Discrete_calculus
|
In mathematics and computer science, an algorithmic technique is a general approach for implementing a process or computation.
## General techniques
There are several broadly recognized algorithmic techniques that offer a proven method or process for designing and constructing algorithms. Different techniques may be used depending on the objective, which may include searching, sorting, mathematical optimization, constraint satisfaction, categorization, analysis, and prediction.
### Brute force
Brute force is a simple, exhaustive technique that evaluates every possible outcome to find a solution.
### Divide and conquer
The divide and conquer technique decomposes complex problems recursively into smaller sub-problems. Each sub-problem is then solved and these partial solutions are recombined to determine the overall solution. This technique is often used for searching and sorting.
### Dynamic
Dynamic programming is a systematic technique in which a complex problem is decomposed recursively into smaller, overlapping subproblems for solution. Dynamic programming stores the results of the overlapping sub-problems locally using an optimization technique called memoization.
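A minimal sketch of memoization in Python (the Fibonacci example is illustrative, not taken from the text):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Naively, fib recomputes the same overlapping subproblems exponentially often;
    memoization stores each result so every subproblem is solved only once."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(50))   # 12586269025, computed in linear time thanks to the cache
```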
### Evolutionary
An evolutionary approach develops candidate solutions and then, in a manner similar to biological evolution, performs a series of random alterations or combinations of these solutions and evaluates the new results against a fitness function. The most fit or promising results are selected for additional iterations, to achieve an overall optimal solution.
### Graph traversal
Graph traversal is a technique for finding solutions to problems that can be represented as graphs. This approach is broad, and includes depth-first search, breadth-first search, tree traversal, and many specific variations that may include local optimizations and excluding search spaces that can be determined to be non-optimum or not possible. These techniques may be used to solve a variety of problems including shortest path and constraint satisfaction problems.
### Greedy
A greedy approach begins by evaluating one possible outcome from the set of possible outcomes, and then searches locally for an improvement on that outcome. When a local improvement is found, it will repeat the process and again search locally for additional improvements near this local optimum. A greedy technique is generally simple to implement, and these series of decisions can be used to find local optima depending on where the search began. However, greedy techniques may not identify the global optimum across the entire set of possible outcomes.
### Heuristic
A heuristic approach employs a practical method to reach an immediate solution not guaranteed to be optimal.
### Learning
Learning techniques employ statistical methods to perform categorization and analysis without explicit programming. Supervised learning, unsupervised learning, reinforcement learning, and deep learning techniques are included in this category.
### Mathematical optimization
Mathematical optimization is a technique that can be used to calculate a mathematical optimum by minimizing or maximizing a function.
### Modeling
Modeling is a general technique for abstracting a real-world problem into a framework or paradigm that assists with solution.
### Recursion
Recursion is a general technique for designing an algorithm that calls itself with a progressively simpler part of the task down to one or more base cases with defined results.
### Window sliding
The sliding window technique is used to replace nested loops with a single loop, thereby reducing the time complexity.
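A minimal sketch of the technique (the task of finding the maximum window sum and the sample values are illustrative assumptions):

```python
def max_window_sum(values, k):
    """Maximum sum over all contiguous windows of length k.
    A nested loop would recompute each window from scratch (O(n*k));
    sliding the window adds the entering element and drops the leaving one (O(n))."""
    window = sum(values[:k])
    best = window
    for i in range(k, len(values)):
        window += values[i] - values[i - k]   # slide the window one step to the right
        best = max(best, window)
    return best

print(max_window_sum([2, 1, 5, 1, 3, 2], 3))   # 9  (window 5 + 1 + 3)
```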
|
https://en.wikipedia.org/wiki/Algorithmic_technique
|
Spectroscopy is the field of study that measures and interprets electromagnetic spectra. In narrower contexts, spectroscopy is the precise study of color as generalized from visible light to all bands of the electromagnetic spectrum.
Spectroscopy, primarily in the electromagnetic spectrum, is a fundamental exploratory tool in the fields of astronomy, chemistry, materials science, and physics, allowing the composition, physical structure and electronic structure of matter to be investigated at the atomic, molecular and macro scale, and over astronomical distances.
Historically, spectroscopy originated as the study of the wavelength dependence of the absorption by gas phase matter of visible light dispersed by a prism. Current applications of spectroscopy include biomedical spectroscopy in the areas of tissue analysis and medical imaging. Matter waves and acoustic waves can also be considered forms of radiative energy, and recently gravitational waves have been associated with a spectral signature in the context of the Laser Interferometer Gravitational-Wave Observatory (LIGO).
## Introduction
Spectroscopy is a branch of science concerned with the spectra of electromagnetic radiation as a function of its wavelength or frequency measured by spectrographic equipment, and other techniques, in order to obtain information concerning the structure and properties of matter.
Spectral measurement devices are referred to as spectrometers, spectrophotometers, spectrographs or spectral analyzers. Most spectroscopic analysis in the laboratory starts with a sample to be analyzed; a light source is chosen from any desired range of the light spectrum, and the light then passes through the sample to a dispersion array (a diffraction grating instrument) and is captured by a photodiode. For astronomical purposes, the telescope must be equipped with the light dispersion device. There are various versions of this basic setup that may be employed.
Spectroscopy began with Isaac Newton splitting light with a prism, a key moment in the development of modern optics. Therefore, it was originally the study of visible light, which we call color, that later, under the studies of James Clerk Maxwell, came to include the entire electromagnetic spectrum.
Although color is involved in spectroscopy, it is not equated with the color of elements or objects, which involves the absorption and reflection of certain electromagnetic waves to give objects a sense of color to our eyes. Rather, spectroscopy involves the splitting of light by a prism, diffraction grating, or similar instrument, to give off a particular discrete line pattern called a "spectrum" unique to each different type of element. Most elements are first put into a gaseous phase to allow the spectra to be examined, although today other methods can be used on different phases. Each element that is diffracted by a prism-like instrument displays either an absorption spectrum or an emission spectrum depending upon whether the element is being cooled or heated.
Until recently all spectroscopy involved the study of line spectra and most spectroscopy still does. Vibrational spectroscopy is the branch of spectroscopy that studies the spectra. However, the latest developments in spectroscopy can sometimes dispense with the dispersion technique.
|
https://en.wikipedia.org/wiki/Spectroscopy
|