This is an example implementation of insert and remove in C. Below are the data structures and the `rotate_subtree` helper function used in the insert and remove examples.
```c
enum Color { BLACK, RED };
enum Dir { LEFT, RIGHT };

// red-black tree node
struct Node {
    struct Node *parent; // NULL for the root node
    union {
        // union so we can use ->left/->right or ->child[0]/->child[1]
        struct {
            struct Node *left;
            struct Node *right;
        };
        struct Node *child[2];
    };
    enum Color color;
    int key;
};

struct Tree {
    struct Node *root;
};

#define DIRECTION(N) ((N) == (N)->parent->right ? RIGHT : LEFT)

struct Node *rotate_subtree(struct Tree *tree, struct Node *sub, enum Dir dir) {
    struct Node *sub_parent = sub->parent;
    struct Node *new_root = sub->child[1 - dir]; // 1 - dir is the opposite direction
    struct Node *new_child = new_root->child[dir];
    sub->child[1 - dir] = new_child;
    if (new_child)
        new_child->parent = sub;
    new_root->child[dir] = sub;
    new_root->parent = sub_parent;
    sub->parent = new_root;
    if (sub_parent)
        sub_parent->child[sub == sub_parent->right] = new_root;
    else
        tree->root = new_root;
    return new_root;
}
```
|
https://en.wikipedia.org/wiki/Red%E2%80%93black_tree
|
### Notes to the sample code and diagrams of insertion and removal
The proposal breaks down both insertion and removal (not mentioning some very simple cases) into six constellations of nodes, edges, and colors, which are called cases. The proposal contains, for both insertion and removal, exactly one case that advances one black level closer to the root and loops; the other five cases rebalance the tree on their own. The more complicated cases are pictured in a diagram.
- A red bullet symbolises a red node and a black bullet a (non-NULL) black node (of black height ≥ 1); a third symbol stands for the color red or black of a non-NULL node, the same color throughout the same diagram. NULL nodes are not represented in the diagrams.
- The variable N denotes the current node, which is labeled N in the diagrams.
- A diagram contains three columns and two to four actions. The left column shows the first iteration, the right column the higher iterations, the middle column shows the segmentation of a case into its different actions.
1. The action "entry" shows the constellation of nodes with their colors which defines a case and mostly violates some of the requirements. A blue border rings the current node N and the other nodes are labeled according to their relation to N.
1. If a rotation is considered useful, this is pictured in the next action, which is labeled "rotation".
1. If some recoloring is considered useful, this is pictured in the next action, which is labeled "color".
1. If there is still some need to repair, the cases make use of the code of other cases, after a reassignment of the current node N, which then again carries a blue ring and relative to which other nodes may have to be reassigned as well. This action is labeled "reassign". For both insert and delete, there is (exactly) one case which iterates one black level closer to the root; then the reassigned constellation satisfies the respective loop invariant.
- A possibly numbered triangle with a black circle atop represents a red–black subtree (connected to its parent according to requirement 3) with a black height equal to the iteration level minus one, i.e. zero in the first iteration. Its root may be red or black.
- A possibly numbered triangle represents a red–black subtree with a black height one less, i.e. its parent has black height zero in the second iteration.
Remark
For simplicity, the sample code uses the disjunction:
`U == NULL || U->color == BLACK // considered black`
and the conjunction:
`U != NULL && U->color == RED // not considered black`
Thereby, it must be kept in mind that neither expression is evaluated in full if `U == NULL`: in both cases `U->color` is not touched (see short-circuit evaluation). (The comment `considered black` is in accordance with requirement 2.)
The related `if`-statements have to occur far less frequently if the proposal is realised.
### Insertion
Insertion begins by placing the new (non-NULL) node, say N, at the position in the binary search tree of a NULL node whose in-order predecessor’s key compares less than the new node’s key, which in turn compares less than the key of its in-order successor.
(Frequently, this positioning is the result of a search within the tree immediately preceding the insert operation and consists of a node `P` together with a direction `dir` with `P->child[dir] == NULL`.)
The newly inserted node is temporarily colored red so that all paths contain the same number of black nodes as before.
But if its parent, say P, is also red then this action introduces a red-violation.
```c
// parent is optional (NULL when inserting the root)
void insert(struct Tree *tree, struct Node *node, struct Node *parent, enum Dir dir) {
    node->color = RED;
    node->left = NULL;
    node->right = NULL;
    node->parent = parent;
    if (!parent) {
        tree->root = node;
        return;
    }
    parent->child[dir] = node;
    // rebalance the tree
    do {
        // Case #1: parent is black, nothing to do
        if (parent->color == BLACK)
            return;
        struct Node *grandparent = parent->parent;
        if (!grandparent) {
            // Case #4: parent is red and the root
            parent->color = BLACK;
            return;
        }
        dir = DIRECTION(parent);
        struct Node *uncle = grandparent->child[1 - dir];
        if (!uncle || uncle->color == BLACK) {
            if (node == parent->child[1 - dir]) {
                // Case #5: node is an inner grandchild
                rotate_subtree(tree, parent, dir);
                node = parent;
                parent = grandparent->child[dir];
            }
            // Case #6: node is an outer grandchild
            rotate_subtree(tree, grandparent, 1 - dir);
            parent->color = BLACK;
            grandparent->color = RED;
            return;
        }
        // Case #2: parent and uncle are both red
        parent->color = BLACK;
        uncle->color = BLACK;
        grandparent->color = RED;
        node = grandparent;
    } while ((parent = node->parent) != NULL);
    // Case #3: node is the (red) root
    return;
}
```
The rebalancing loop of the insert operation has the following invariants:
- N is the current node, initially the insertion node.
- N is red at the beginning of each iteration.
- Requirement 3 is satisfied for all pairs node←parent with the possible exception N←P when P is also red (a red-violation at N).
- All other properties (including requirement 4) are satisfied throughout the tree.
#### Notes to the insert diagrams
| case | before | rotation | assignment | after | next | Δh |
|---|---|---|---|---|---|---|
| I1 | P black | — | — | — | done | — |
| I2 | P red, G black, U red | — | N := G | P, U black, G red | loop | 2 |
| I3 | N red root | — | — | — | done | — |
| I4 | P red root (G = —) | — | — | P black | done | — |
| I5 | P red, U black, x = i | P↶N | N := P | x = o | I6 | 0 |
| I6 | P red, U black, x = o | P↷G | — | P black, G red | done | 0 |
Insertion: This synopsis shows in its before columns that all possible cases of constellations are covered. (The same partitioning is found in Ben Pfaff.)
- In the diagrams, P is used for N’s parent, G for its grandparent, and U for its uncle. In the table, "—" indicates the root.
- The diagrams show the parent node P as the left child of its parent G even though it is possible for P to be on either side. The sample code covers both possibilities by means of the side variable `dir`.
- The diagrams show the cases where P is red also, the red-violation.
- The column x indicates the change in child direction, i.e. o (for "outer") means that P and N are both left or both right children, whereas i (for "inner") means that the child direction changes from P’s to N’s.
- The column group before defines the case, whose name is given in the column case. Thereby possible values in cells left empty are ignored. So in case I2 the sample code covers both possibilities of child directions of N, although the corresponding diagram shows only one.
- The rows in the synopsis are ordered such that the coverage of all possible RB cases is easily comprehensible.
- The column rotation indicates whether a rotation contributes to the rebalancing.
- The column assignment shows an assignment of N before entering a subsequent step. This possibly induces a reassignment of the other nodes P, G, U also.
- If something has been changed by the case, this is shown in the column group after.
- A sign in column next signifies that the rebalancing is complete with this step. If the column after determines exactly one case, this case is given as the subsequent one, otherwise there are question marks.
- In case I2 the problem of rebalancing is escalated $\Delta h = 2$ tree levels or 1 black level higher in the tree, in that the grandfather G becomes the new current node N. So it takes maximally $\tfrac{h}{2}$ steps of iteration to repair the tree (where $h$ is the height of the tree). Because the probability of escalation decreases exponentially with each step the total rebalancing cost is constant on average, indeed amortized constant.
- Rotations occur in cases I6 and I5 + I6 – outside the loop. Therefore, at most two rotations occur in total.
#### Insert case 1
The current node’s parent P is black, so requirement 3 holds. Requirement 4 holds also according to the loop invariant.
#### Insert case 2
If both the parent P and the uncle U are red, then both of them can be repainted black and the grandparent G becomes red for maintaining requirement 4. Since any path through the parent or uncle must pass through the grandparent, the number of black nodes on these paths has not changed.
However, the grandparent G may now violate requirement 3, if it has a red parent. After relabeling G to N the loop invariant is fulfilled so that the rebalancing can be iterated on one black level (= 2 tree levels) higher.
#### Insert case 3
Insert case 2 has been executed $\tfrac{h-1}{2}$ times and the total height of the tree has increased by 1, now being $h$.
The current node N is the (red) root of the tree, and all RB-properties are satisfied.
#### Insert case 4
The parent P is red and the root.
Because N is also red, requirement 3 is violated. But after switching P’s color the tree is in RB-shape.
The black height of the tree increases by 1.
#### Insert case 5
The parent P is red but the uncle U is black.
The ultimate goal is to rotate the parent node P to the grandparent position, but this will not work if N is an "inner" grandchild of G (i.e., if N is the left child of the right child of G or the right child of the left child of G). A rotation at P switches the roles of the current node N and its parent P. The rotation adds paths through N (those in the subtree labeled 2, see diagram) and removes paths through P (those in the subtree labeled 4). But both P and N are red, so requirement 4 is preserved. Requirement 3 is restored in case 6.
#### Insert case 6
The current node N is now certain to be an "outer" grandchild of G (left of left child or right of right child). A rotation at G puts P in place of G, making P the parent of both N and G. G is black and its former child P is red, since requirement 3 was violated. After switching the colors of P and G the resulting tree satisfies requirement 3.
Requirement 4 also remains satisfied, since all paths that went through the black G now go through the black P.
Because the algorithm transforms the input without using an auxiliary data structure and using only a small amount of extra storage space for auxiliary variables it is in-place.
### Removal
#### Simple cases
- When the deleted node has 2 children (non-NULL), then we can swap its value with its in-order successor (the leftmost child of the right subtree), and then delete the successor instead. Since the successor is leftmost, it can only have a right child (non-NULL) or no child at all.
- When the deleted node has only 1 child (non-NULL), just replace the node with its child and color the child black.
- The single child (non-NULL) must be red according to conclusion 5, and the deleted node must be black according to requirement 3.
- When the deleted node has no children (both NULL) and is the root, replace it with NULL. The tree is empty.
- When the deleted node has no children (both NULL), and is red, simply remove the leaf node.
- When the deleted node has no children (both NULL), and is black, deleting it will create an imbalance, and requires a rebalance, as covered in the next section.
#### Removal of a black non-root leaf
The complex case is when N is not the root, colored black and has no proper child (⇔ only NULL children).
In the first iteration, N is replaced by NULL.
```c
void remove(struct Tree *tree, struct Node *node) {
    struct Node *parent = node->parent;
    struct Node *sibling;
    struct Node *close_nephew;
    struct Node *distant_nephew;
    enum Dir dir = DIRECTION(node);
    parent->child[dir] = NULL;
    goto start_balance;
    do {
        dir = DIRECTION(node);
start_balance:
        sibling = parent->child[1 - dir];
        distant_nephew = sibling->child[1 - dir];
        close_nephew = sibling->child[dir];
        if (sibling->color == RED) {
            // Case #3: sibling is red
            rotate_subtree(tree, parent, dir);
            parent->color = RED;
            sibling->color = BLACK;
            sibling = close_nephew;
            distant_nephew = sibling->child[1 - dir];
            if (distant_nephew && distant_nephew->color == RED)
                goto case_6;
            close_nephew = sibling->child[dir];
            if (close_nephew && close_nephew->color == RED)
                goto case_5;
            // Case #4: parent is now red, sibling and nephews black
            sibling->color = RED;
            parent->color = BLACK;
            return;
        }
        if (distant_nephew && distant_nephew->color == RED)
            goto case_6;
        if (close_nephew && close_nephew->color == RED)
            goto case_5;
        if (parent->color == RED) {
            // Case #4
            sibling->color = RED;
            parent->color = BLACK;
            return;
        }
        // Case #2: parent, sibling and nephews are all black
        sibling->color = RED;
        node = parent;
    } while ((parent = node->parent) != NULL);
    // Case #1: node is the root
    return;
case_5:
    rotate_subtree(tree, sibling, 1 - dir);
    sibling->color = RED;
    close_nephew->color = BLACK;
    distant_nephew = sibling;
    sibling = close_nephew;
case_6:
    rotate_subtree(tree, parent, dir);
    sibling->color = parent->color;
    parent->color = BLACK;
    distant_nephew->color = BLACK;
    return;
}
```
The rebalancing loop of the delete operation has the following invariants:
- At the beginning of each iteration the black height of N equals the iteration number minus one, which means that in the first iteration it is zero and that N is a true black node in higher iterations.
- The number of black nodes on the paths through N is one less than before the deletion, whereas it is unchanged on all other paths, so that there is a black-violation at P if other paths exist.
- All other properties (including requirement 3) are satisfied throughout the tree.
#### Notes to the delete diagrams
| case | before | rotation | assignment | after | next | Δh |
|---|---|---|---|---|---|---|
| D1 | P = — (N is root) | — | — | — | done | — |
| D2 | P, S, C, D black | — | N := P | S red | loop | 1 |
| D3 | S red | P↶S | N := N | P red, S black | D6, D5 or D4 | 0 |
| D4 | P red; S, C, D black | — | — | S red, P black | done | 0 |
| D5 | S black, C red, D black | C↷S | N := N | S red, C black | D6 | 0 |
| D6 | S black, D red | P↶S | — | P, D black | done | 0 |
Deletion: This synopsis shows in its before columns that all possible cases of color constellations are covered.
- In the diagrams below, P is used for N’s parent, S for the sibling of N, C (meaning close nephew) for S’s child in the same direction as N, and D (meaning distant nephew) for S’s other child (S cannot be a NULL node in the first iteration, because it must have black height one, which was the black height of N before its deletion, but C and D may be NULL nodes).
- The diagrams show the current node N as the left child of its parent P even though it is possible for N to be on either side. The code samples cover both possibilities by means of the side variable `dir`.
- At the beginning (in the first iteration) of removal, N is the NULL node replacing the node to be deleted. Because its location in parent’s node is the only thing of importance, it is symbolised by (meaning: the current node N is a NULL node and left child) in the left column of the delete diagrams. As the operation proceeds also proper nodes (of black height ≥ 1) may become current (see e.g. case 2).
- By counting the black bullets in a delete diagram it can be observed that the paths through N have one bullet less than the other paths. This means a black-violation at P, if it exists.
- The color constellation in column group before defines the case, whose name is given in the column case. Thereby possible values in cells left empty are ignored.
- The rows in the synopsis are ordered such that the coverage of all possible RB cases is easily comprehensible.
- The column rotation indicates whether a rotation contributes to the rebalancing.
- The column assignment shows an assignment of N before entering a subsequent iteration step. This possibly induces a reassignment of the other nodes P, C, S, D also.
- If something has been changed by the case, this is shown in the column group after.
- A sign in column next signifies that the rebalancing is complete with this step. If the column after determines exactly one case, this case is given as the subsequent one, otherwise there are question marks.
- The loop is where the problem of rebalancing is escalated $\Delta h = 1$ level higher in the tree, in that the parent P becomes the new current node N. So it takes maximally $h$ iterations to repair the tree (where $h$ is the height of the tree). Because the probability of escalation decreases exponentially with each iteration the total rebalancing cost is constant on average, indeed amortized constant. (Just as an aside: Mehlhorn & Sanders point out: "AVL trees do not support constant amortized update costs." This is true for the rebalancing after a deletion, but not AVL insertion.)
- Out of the body of the loop there are exiting branches to the cases 3, 6, 5, 4, and 1; moreover, case 3 has three different exiting branches of its own to the cases 6, 5 and 4.
- Rotations occur in cases 6 and 5 + 6 and 3 + 5 + 6 – all outside the loop. Therefore, at most three rotations occur in total.
#### Delete case 1
The current node N is the new root. One black node has been removed from every path, so the RB-properties are preserved.
The black height of the tree decreases by 1.
#### Delete case 2
P, S, and S’s children are black. After painting S red all paths passing through S, which are precisely those paths not passing through N, have one less black node. Now all paths in the subtree rooted by P have the same number of black nodes, but one fewer than the paths that do not pass through P, so requirement 4 may still be violated. After relabeling P to N the loop invariant is fulfilled so that the rebalancing can be iterated on one black level (= 1 tree level) higher.
#### Delete case 3
The sibling S is red, so P and the nephews C and D have to be black. A rotation at P turns S into N’s grandparent.
Then after reversing the colors of P and S, the path through N is still short one black node. But N now has a red parent P and after the reassignment a black sibling S, so the transformations in cases 4, 5, or 6 are able to restore the RB-shape.
#### Delete case 4
The sibling S and S’s children are black, but P is red.
Exchanging the colors of S and P does not affect the number of black nodes on paths going through S, but it does add one to the number of black nodes on paths going through N, making up for the deleted black node on those paths.
#### Delete case 5
The sibling S is black, S’s close child C is red, and S’s distant child D is black. After a rotation at S the nephew C becomes S’s parent and N’s new sibling. The colors of S and C are exchanged.
All paths still have the same number of black nodes, but now N has a black sibling whose distant child is red, so the constellation is fit for case D6. Neither N nor its parent P are affected by this transformation, and P may be red or black.
#### Delete case 6
The sibling S is black, S’s distant child D is red. After a rotation at P the sibling S becomes the parent of P and of S’s distant child D. The colors of P and S are exchanged, and D is made black.
The whole subtree still has the same color at its root S, namely either red or black, which refers to the same color both before and after the transformation. This way requirement 3 is preserved. The paths in the subtree not passing through N (i.o.w. passing through D and node 3 in the diagram) pass through the same number of black nodes as before, but N now has one additional black ancestor: either P has become black, or it was black and S was added as a black grandparent. Thus, the paths passing through N pass through one additional black node, so that requirement 4 is restored and the total tree is in RB-shape.
Because the algorithm transforms the input without using an auxiliary data structure and using only a small amount of extra storage space for auxiliary variables it is in-place.
## Proof of bounds

For $h\in\N$ there is a red–black tree of height $h$ with

$$
m_h = 2^{\lfloor(h+1)/2\rfloor} + 2^{\lfloor h/2 \rfloor} - 2
= \begin{cases}
2 \cdot 2^{h/2} - 2 = 2^{h/2+1} - 2 & \text{if } h \text{ even} \\
3 \cdot 2^{(h-1)/2} - 2 & \text{if } h \text{ odd}
\end{cases}
$$

nodes ($\lfloor \, \rfloor$ is the floor function) and there is no red–black tree of this tree height with fewer nodes; therefore it is minimal. Its black height is $\lceil h/2\rceil$ (with black root) or, for odd $h$ (then with a red root), also $(h-1)/2$.
Proof
For a red–black tree of a certain height to have minimal number of nodes, it must have exactly one longest path with maximal number of red nodes, to achieve a maximal tree height with a minimal black height.
Besides this path all other nodes have to be black. If a node is taken off this tree it either loses height or some RB property.
The RB tree of height $h=1$ with red root is minimal. This is in agreement with

$$
m_1 = 2^{\lfloor (1+1)/2\rfloor} + 2^{\lfloor 1/2 \rfloor} - 2 = 2^1 + 2^0 - 2 = 1~.
$$
A minimal RB tree (RB_h in figure 2) of height $h>1$ has a root whose two child subtrees are of different height. The higher child subtree is also a minimal RB tree, containing also a longest path that defines its height; it has $m_{h-1}$ nodes and the black height $\lfloor(h-1)/2\rfloor =: s$. The other subtree is a perfect binary tree of (black) height $s$ having $2^s-1 = 2^{\lfloor(h-1)/2\rfloor}-1$ black nodes and no red node.
Then the number of nodes is by induction

$$
m_h \;=\; \underbrace{m_{h-1}}_{\text{(higher subtree)}} + \underbrace{1}_{\text{(root)}} + \underbrace{2^s-1}_{\text{(second subtree)}}
$$

resulting in $m_h = 2^{\lfloor(h+1)/2\rfloor} + 2^{\lfloor h/2 \rfloor} - 2$. ■

The graph of the function $m_h$ is convex and piecewise linear with breakpoints at $(h=2k\;|\;m_{2k}=2 \cdot 2^k-2)$ where $k \in \N$.
The function $m_h$ has been tabulated as A027383($h$–1) for $h\geq 1$.

Solving the function for $h$: The inequality $9>8=2^3$ leads to $3 > 2^{3/2}$, which for odd $h$ leads to

$$
m_h = 3 \cdot 2^{(h-1)/2}-2 = \bigl(3\cdot 2^{-3/2}\bigr) \cdot 2^{(h+2)/2}-2 > 2 \cdot 2^{h/2}-2~.
$$
So in both, the even and the odd case, $h$ is in the interval

$$
\log_2(n+1) - 1 \;\le\; h \;\le\; 2\log_2\!\bigl(\tfrac{n}{2}+1\bigr)
$$

(the left bound attained by the perfect binary tree, the right by the minimal red–black tree) with $n$ being the number of nodes.

Conclusion
A red–black tree with $n$ nodes (keys) has tree height $h \in O(\log n)$.
## Set operations and bulk operations
In addition to the single-element insert, delete and lookup operations, several set operations have been defined on red–black trees: union, intersection and set difference. Then fast bulk operations on insertions or deletions can be implemented based on these set functions. These set operations rely on two helper operations, Split and Join. With the new operations, the implementation of red–black trees can be more efficient and highly parallelizable. In order to achieve its time complexities this implementation requires that the root is allowed to be either red or black, and that every node stores its own black height.
- Join: The function Join operates on two red–black trees TL and TR and a key k, where TL < k < TR, i.e. all keys in TL are less than k, and all keys in TR are greater than k. It returns a tree containing all elements in TL and TR, as well as k.
If the two trees have the same black height, Join simply creates a new node with left subtree TL, root k and right subtree TR. If both TL and TR have black roots, k is set to be red; otherwise k is set black.
If the black heights are unequal, suppose that TL has larger black height than TR (the other case is symmetric). Join follows the right spine of TL until reaching a black node c whose black height equals that of TR. At this point a new node with left child c, root k (set to be red) and right child TR is created to replace c. The new node may invalidate the red–black invariant because at most three red nodes can appear in a row. This can be fixed with a double rotation. If the double-red issue propagates to the root, the root is then set to be black, restoring the properties. The cost of this function is the difference of the black heights between the two input trees.
- Split: To split a red–black tree into two smaller trees, those smaller than key k and those larger than key k, first draw a path from the root by inserting k into the red–black tree. After this insertion, all values less than k will be found on the left of the path, and all values greater than k will be found on the right. By applying Join, all the subtrees on the left side are merged bottom-up using the keys on the path as intermediate nodes from bottom to top to form the left tree; the right part is symmetric.
For some applications, Split also returns a boolean value denoting if k appears in the tree. The cost of Split is
$$
O(\log n) ,
$$
order of the height of the tree. This algorithm actually has nothing to do with any special properties of a red–black tree, and may be used on any tree with a join operation, such as an AVL tree.
The join algorithm is as follows:
function joinRightRB(TL, k, TR):
    if (TL.color=black) and (TL.blackHeight=TR.blackHeight):
        return Node(TL,⟨k,red⟩,TR)
    T'=Node(TL.left,⟨TL.key,TL.color⟩,joinRightRB(TL.right,k,TR))
    if (TL.color=black) and (T'.right.color=T'.right.right.color=red):
        T'.right.right.color=black;
        return rotateLeft(T')
    return T'
function joinLeftRB(TL, k, TR):
    /* symmetric to joinRightRB */
function join(TL, k, TR):
    if TL.blackHeight>TR.blackHeight:
        T'=joinRightRB(TL,k,TR)
        if (T'.color=red) and (T'.right.color=red):
            T'.color=black
        return T'
    if TR.blackHeight>TL.blackHeight:
        /* symmetric */
    if (TL.color=black) and (TR.color=black):
        return Node(TL,⟨k,red⟩,TR)
    return Node(TL,⟨k,black⟩,TR)
The split algorithm is as follows:
function split(T, k):
    if (T = NULL) return (NULL, false, NULL)
    if (k = T.key) return (T.left, true, T.right)
    if (k < T.key):
        (L',b,R') = split(T.left, k)
        return (L',b,join(R',T.key,T.right))
    (L',b,R') = split(T.right, k)
    return (join(T.left,T.key,L'),b,R')
The union of two red–black trees t1 and t2 representing sets A and B is a red–black tree t that represents A ∪ B.
The following recursive function computes this union:
function union(t1, t2):
    if t1 = NULL return t2
    if t2 = NULL return t1
    (L1,b,R1)=split(t1,t2.key)
    proc1=start:
        TL=union(L1,t2.left)
    proc2=start:
        TR=union(R1,t2.right)
    wait all proc1,proc2
    return join(TL, t2.key, TR)
Here, split is presumed to return two trees: one holding the keys less than its input key, one holding the greater keys. (The algorithm is non-destructive, but an in-place destructive version also exists.)
The algorithm for intersection or difference is similar, but requires the Join2 helper routine that is the same as Join but without the middle key. Based on the new functions for union, intersection or difference, either one key or multiple keys can be inserted to or deleted from the red–black tree. Since Split calls Join but does not deal with the balancing criteria of red–black trees directly, such an implementation is usually called the "join-based" implementation.
The complexity of each of union, intersection and difference is
$$
O\left(m \log \left({n\over m}+1\right)\right)
$$
for two red–black trees of sizes
$$
m
$$
and
$$
n(\ge m)
$$
. This complexity is optimal in terms of the number of comparisons. More importantly, since the recursive calls to union, intersection or difference are independent of each other, they can be executed in parallel with a parallel depth
$$
O(\log m \log n)
$$
. When
$$
m=1
$$
, the join-based implementation has the same computational directed acyclic graph (DAG) as single-element insertion and deletion if the root of the larger tree is used to split the smaller tree.
## Parallel algorithms
Parallel algorithms for constructing red–black trees from sorted lists of items can run in constant time or
$$
O(\log \log n)
$$
time, depending on the computer model, if the number of processors available is asymptotically proportional to the number
$$
n
$$
of items where
$$
n\to\infty
$$
. Fast search, insertion, and deletion parallel algorithms are also known.
The join-based algorithms for red–black trees are parallel for bulk operations, including union, intersection, construction, filter, map-reduce, and so on.
### Parallel bulk operations
Basic operations like insertion, removal or update can be parallelised by defining operations that process bulks of multiple elements. It is also possible to process bulks with several basic operations, for example bulks may contain elements to insert and also elements to remove from the tree.
The algorithms for bulk operations aren't just applicable to the red–black tree, but can be adapted to other sorted sequence data structures as well, like the 2–3 tree, 2–3–4 tree and (a,b)-tree. In the following, different algorithms for bulk insert will be explained, but the same algorithms can also be applied to removal and update. Bulk insert is an operation that inserts each element of a sequence
$$
I
$$
into a tree
$$
T
$$
.
#### Join-based
This approach can be applied to every sorted sequence data structure that supports efficient join and split operations.
The general idea is to split I and T into multiple parts and perform the insertions on these parts in parallel.
1. First the bulk of elements to insert must be sorted.
1. After that, the algorithm splits I into
$$
k \in \mathbb{N}^+
$$
parts
$$
\langle I_1, \cdots, I_k \rangle
$$
of about equal sizes.
1. Next the tree T must be split into k parts
$$
\langle T_1, \cdots, T_k \rangle
$$
in a way, so that for every
$$
j \in \mathbb{N}^+ | \, 1 \leq j < k
$$
the following constraints hold:
1.
$$
\text{last}(I_j) < \text{first}(T_{j + 1})
$$
1.
$$
\text{last}(T_j) < \text{first}(I_{j + 1})
$$
1. Now the algorithm inserts each element of
$$
I_j
$$
into
$$
T_j
$$
sequentially. This step must be performed for every j, which can be done by up to k processors in parallel.
1. Finally, the resulting trees will be joined to form the final result of the entire operation.
Note that in Step 3 the constraints for splitting assure that in Step 5 the trees can be joined again and the resulting sequence is sorted.
The pseudo code shows a simple divide-and-conquer implementation of the join-based algorithm for bulk-insert.
Both recursive calls can be executed in parallel.
The join operation used here differs from the version explained in this article; instead join2 is used, which omits the second parameter k.
bulkInsert(T, I, k):
    I.sort()
    bulkInsertRec(T, I, k)

bulkInsertRec(T, I, k):
    if k = 1:
        forall e in I: T.insert(e)
    else:
        m := ⌊size(I) / 2⌋
        (T1, _, T2) := split(T, I[m])
        bulkInsertRec(T1, I[0 .. m], ⌈k / 2⌉)
        || bulkInsertRec(T2, I[m + 1 .. size(I) - 1], ⌊k / 2⌋)
        T ← join2(T1, T2)
##### Execution time
Sorting is not considered in this analysis.
- #recursion levels:
$$
\in O(\log k)
$$
- T(split) + T(join):
$$
\in O(\log |T|)
$$
- insertions per thread:
$$
\in O\left(\frac{|I|}{k}\right)
$$
- T(insert):
$$
\in O(\log |T|)
$$
- T(bulkInsert):
$$
\in O\left(\log k \log |T| + \frac{|I|}{k} \log |T|\right)
$$
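The total in the last row is just the sum of the contributions above (a sketch of the arithmetic):

$$
T(\text{bulkInsert}) \in O\Bigl(\underbrace{\log k \cdot \log |T|}_{\text{splits and joins along the recursion}} \;+\; \underbrace{\frac{|I|}{k} \cdot \log |T|}_{\text{sequential insertions per thread}}\Bigr)
$$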
This can be improved by using parallel algorithms for splitting and joining.
In this case the execution time is
$$
\in O\left(\log |T| + \frac{|I|}{k} \log |T|\right)
$$
.
##### Work
- #splits, #joins:
$$
\in O(k)
$$
- W(split) + W(join):
$$
\in O(\log |T|)
$$
- #insertions:
$$
\in O(|I|)
$$
- W(insert):
$$
\in O(\log |T|)
$$
- W(bulkInsert):
$$
\in O(k \log |T| + |I| \log |T|)
$$
#### Pipelining
Another method of parallelizing bulk operations is to use a pipelining approach.
This can be done by breaking the task of processing a basic operation up into a sequence of subtasks.
For multiple basic operations the subtasks can be processed in parallel by assigning each subtask to a separate processor.
1. First the bulk of elements to insert must be sorted.
1. For each element in I the algorithm locates the according insertion position in T. This can be done in parallel for each element
$$
\in I
$$
since T won't be mutated in this process. Now I must be divided into a set S of subsequences according to the insertion position of each element. For example
$$
s_{n, \mathit{left}}
$$
is the subsequence of I that contains the elements whose insertion position would be to the left of node n.
1. The middle element
$$
m_{n, \mathit{dir}}
$$
of every subsequence
$$
s_{n, \mathit{dir}}
$$
will be inserted into T as a new node
$$
n'
$$
. This can be done in parallel for each
$$
m_{n, \mathit{dir}}
$$
since by definition the insertion position of each
$$
m_{n, \mathit{dir}}
$$
is unique. If
$$
s_{n, \mathit{dir}}
$$
contains elements to the left or to the right of
$$
m_{n, \mathit{dir}}
$$
, those will be contained in a new set of subsequences as
$$
s_{n', \mathit{left}}
$$
or
$$
s_{n', \mathit{right}}
$$
.
1. Now T possibly contains up to two consecutive red nodes at the end of the paths from the root to the leaves, which need to be repaired. Note that, while repairing, the insertion positions of elements
$$
\in S
$$
have to be updated, if the corresponding nodes are affected by rotations.
1. If two nodes have different nearest black ancestors, they can be repaired in parallel. Since at most four nodes can have the same nearest black ancestor, the nodes at the lowest level can be repaired in a constant number of parallel steps.
1. This step will be applied successively to the black levels above until T is fully repaired.
1. The steps 3 to 5 will be repeated on the new subsequences until S is empty. At this point every element
$$
\in I
$$
has been inserted. Each application of these steps is called a stage.
Since the length of the subsequences in S is
$$
\in O(|I|)
$$
and in every stage the subsequences are being cut in half, the number of stages is
$$
\in O(\log |I|)
$$
.
1. Since all stages move up the black levels of the tree, they can be parallelised in a pipeline. Once a stage has finished processing one black level, the next stage is able to move up and continue at that level.
##### Execution time
Sorting is not considered in this analysis.
Also,
$$
|I|
$$
is assumed to be smaller than
$$
|T|
$$
, otherwise it would be more efficient to construct the resulting tree from scratch.
- T(find insert position):
$$
\in O(\log |T|)
$$
- #stages:
$$
\in O(\log |I|)
$$
- T(insert) + T(repair):
$$
\in O(\log |T|)
$$
- T(bulkInsert) with ~|I| processors:
$$
\in O(\log |I| + 2 \cdot \log |T|) = O(\log |T|)
$$
##### Work
- W(find insert positions):
$$
\in O(|I| \log |T|)
$$
- #insertions, #repairs:
$$
\in O(|I|)
$$
- W(insert) + W(repair):
$$
\in O(\log |T|)
$$
- W(bulkInsert):
$$
\in O(2 \cdot |I| \log |T|) = O(|I| \log |T|)
$$
A clustered file system (CFS) is a file system which is shared by being simultaneously mounted on multiple servers. There are several approaches to clustering, most of which do not employ a clustered file system (only direct attached storage for each node). Clustered file systems can provide features like location-independent addressing and redundancy which improve reliability or reduce the complexity of the other parts of the cluster. Parallel file systems are a type of clustered file system that spread data across multiple storage nodes, usually for redundancy or performance.
## Shared-disk file system
A shared-disk file system uses a storage area network (SAN) to allow multiple computers to gain direct disk access at the block level. Access control and translation from file-level operations that applications use to block-level operations used by the SAN must take place on the client node. The most common type of clustered file system, the shared-disk file systemby adding mechanisms for concurrency controlprovides a consistent and serializable view of the file system, avoiding corruption and unintended data loss even when multiple clients try to access the same files at the same time. Shared-disk file-systems commonly employ some sort of fencing mechanism to prevent data corruption in case of node failures, because an unfenced device can cause data corruption if it loses communication with its sister nodes and tries to access the same information other nodes are accessing.
|
https://en.wikipedia.org/wiki/Clustered_file_system
|
The underlying storage area network may use any of a number of block-level protocols, including SCSI, iSCSI, HyperSCSI, ATA over Ethernet (AoE), Fibre Channel, network block device, and InfiniBand.
There are different architectural approaches to a shared-disk filesystem. Some distribute file information across all the servers in a cluster (fully distributed).
### Examples
- Blue Whale Clustered file system (BWFS)
- Silicon Graphics (SGI) clustered file system (CXFS)
- Veritas Cluster File System
- Microsoft Cluster Shared Volumes (CSV)
- DataPlow Nasan File System
- IBM General Parallel File System (GPFS)
- Oracle Cluster File System (OCFS)
- OpenVMS Files-11 File System
- PolyServe storage solutions
- Quantum StorNext File System (SNFS), ex ADIC, ex CentraVision File System (CVFS)
- Red Hat Global File System (GFS2)
- Sun QFS
- TerraScale Technologies TerraFS
- Veritas CFS (Cluster FS: Clustered VxFS)
- Versity VSM (SAM-QFS ported to Linux), ScoutFS
- VMware VMFS
- WekaFS
- Apple Xsan
- DragonFly BSD HAMMER2
## Distributed file systems
Distributed file systems do not share block level access to the same storage but use a network protocol. These are commonly known as network file systems, even though they are not the only file systems that use the network to send data. Distributed file systems can restrict access to the file system depending on access lists or capabilities on both the servers and the clients, depending on how the protocol is designed.
The difference between a distributed file system and a distributed data store is that a distributed file system allows files to be accessed using the same interfaces and semantics as local files, for example mounting/unmounting, listing directories, read/write at byte boundaries, and the system's native permission model. Distributed data stores, by contrast, require using a different API or library and have different semantics (most often those of a database).
### Design goals
Distributed file systems may aim for "transparency" in a number of aspects. That is, they aim to be "invisible" to client programs, which "see" a system which is similar to a local file system. Behind the scenes, the distributed file system handles locating files, transporting data, and potentially providing other features listed below.
- Access transparency: clients are unaware that files are distributed and can access them in the same way as local files are accessed.
- Location transparency: a consistent namespace exists encompassing local as well as remote files. The name of a file does not give its location.
- Concurrency transparency: all clients have the same view of the state of the file system. This means that if one process is modifying a file, any other processes on the same system or remote systems that are accessing the files will see the modifications in a coherent manner.
- Failure transparency: the client and client programs should operate correctly after a server failure.
- Heterogeneity: file service should be provided across different hardware and operating system platforms.
- Scalability: the file system should work well in small environments (1 machine, a dozen machines) and also scale gracefully to bigger ones (hundreds through tens of thousands of systems).
- Replication transparency: Clients should not have to be aware of the file replication performed across multiple servers to support scalability.
- Migration transparency: files should be able to move between different servers without the client's knowledge.
## History
The Incompatible Timesharing System used virtual devices for transparent inter-machine file system access in the 1960s. More file servers were developed in the 1970s. In 1976, Digital Equipment Corporation created the File Access Listener (FAL), an implementation of the Data Access Protocol as part of DECnet Phase II which became the first widely used network file system. In 1984, Sun Microsystems created the file system called "Network File System" (NFS) which became the first widely used Internet Protocol based network file system. Other notable network file systems are Andrew File System (AFS), Apple Filing Protocol (AFP), NetWare Core Protocol (NCP), and Server Message Block (SMB) which is also known as Common Internet File System (CIFS).
In 1986, IBM announced client and server support for Distributed Data Management Architecture (DDM) for the System/36, System/38, and IBM mainframe computers running CICS. This was followed by the support for IBM Personal Computer, AS/400, IBM mainframe computers under the MVS and VSE operating systems, and FlexOS. DDM also became the foundation for Distributed Relational Database Architecture, also known as DRDA.
There are many peer-to-peer network protocols for open-source distributed file systems for cloud or closed-source clustered file systems, e. g.: 9P, AFS, Coda, CIFS/SMB, DCE/DFS, WekaFS, Lustre, PanFS, Google File System, Mnet, Chord Project.
### Examples
- Alluxio
- BeeGFS (Fraunhofer)
- CephFS (Inktank, Red Hat, SUSE)
- Windows Distributed File System (DFS) (Microsoft)
- Infinit (acquired by Docker)
- GfarmFS
- GlusterFS (Red Hat)
- GFS (Google Inc.)
- GPFS (IBM)
- HDFS (Apache Software Foundation)
- IPFS (Inter Planetary File System)
- iRODS
- LizardFS (Skytechnology)
- Lustre
- MapR FS
- MooseFS (Core Technology / Gemius)
- ObjectiveFS
- OneFS (EMC Isilon)
- OrangeFS (Clemson University, Omnibond Systems), formerly Parallel Virtual File System
- PanFS (Panasas)
- Parallel Virtual File System (Clemson University, Argonne National Laboratory, Ohio Supercomputer Center)
- RozoFS (Rozo Systems)
- SMB/CIFS
- Torus (CoreOS)
- WekaFS (WekaIO)
- XtreemFS
## Network-attached storage
Network-attached storage (NAS) provides both storage and a file system, like a shared disk file system on top of a storage area network (SAN). NAS typically uses file-based protocols (as opposed to block-based protocols a SAN would use) such as NFS (popular on UNIX systems), SMB/CIFS (Server Message Block/Common Internet File System) (used with MS Windows systems), AFP (used with Apple Macintosh computers), or NCP (used with OES and Novell NetWare).
## Design considerations
### Avoiding single point of failure
The failure of disk hardware or a given storage node in a cluster can create a single point of failure that can result in data loss or unavailability. Fault tolerance and high availability can be provided through data replication of one sort or another, so that data remains intact and available despite the failure of any single piece of equipment. For examples, see the lists of distributed fault-tolerant file systems and distributed parallel fault-tolerant file systems.
### Performance
A common performance measurement of a clustered file system is the amount of time needed to satisfy service requests. In conventional systems, this time consists of a disk-access time and a small amount of CPU-processing time. But in a clustered file system, a remote access has additional overhead due to the distributed structure.
This includes the time to deliver the request to a server, the time to deliver the response to the client, and, for each direction, a CPU overhead of running the communication protocol software.
### Concurrency
Concurrency control becomes an issue when more than one person or client is accessing the same file or block and want to update it. Hence updates to the file from one client should not interfere with access and updates from other clients. This problem is more complex with file systems due to concurrent overlapping writes, where different writers write to overlapping regions of the file concurrently. This problem is usually handled by concurrency control or locking which may either be built into the file system or provided by an add-on protocol.
### History
IBM mainframes in the 1970s could share physical disks and file systems if each machine had its own channel connection to the drives' control units. In the 1980s, Digital Equipment Corporation's TOPS-20 and OpenVMS clusters (VAX/ALPHA/IA64) included shared disk file systems.
In computer science (specifically computational complexity theory), the worst-case complexity measures the resources (e.g. running time, memory) that an algorithm requires given an input of arbitrary size (commonly denoted as n in asymptotic notation). It gives an upper bound on the resources required by the algorithm.
In the case of running time, the worst-case time complexity indicates the longest running time performed by an algorithm given any input of size n, and thus guarantees that the algorithm will finish in the indicated period of time. The order of growth (e.g. linear, logarithmic) of the worst-case complexity is commonly used to compare the efficiency of two algorithms.
The worst-case complexity of an algorithm should be contrasted with its average-case complexity, which is an average measure of the amount of resources the algorithm uses on a random input.
|
https://en.wikipedia.org/wiki/Worst-case_complexity
|
## Definition
Given a model of computation and an algorithm
$$
\mathsf{A}
$$
that halts on each input
$$
s
$$
, the mapping
$$
t_{\mathsf{A}} \colon \{0, 1\}^\star \to \N
$$
is called the time complexity of
$$
\mathsf{A}
$$
if, for every input string
$$
s
$$
,
$$
\mathsf{A}
$$
halts after exactly
$$
t_{\mathsf{A}}(s)
$$
steps.
|
https://en.wikipedia.org/wiki/Worst-case_complexity
|
Since we are usually interested in the dependence of the time complexity on different input lengths, abusing terminology, the time complexity is sometimes referred to as the mapping
$$
t_{\mathsf{A}} \colon \N \to \N
$$
, defined by the maximal complexity
$$
t_{\mathsf{A}}(n) := \max_{s\in \{0, 1\}^n} t_{\mathsf{A}}(s)
$$
of inputs
$$
s
$$
with length or size
$$
\le n
$$
.
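The maximization above can be made concrete with a brute-force sketch. The toy "algorithm" here (a linear scan for the first 1 bit, counting one step per bit examined) is a hypothetical example chosen for illustration; its worst case over all length-n inputs is the all-zero string, which forces a full scan.

```python
from itertools import product

def steps(s):
    # Toy algorithm: scan left to right until the first 1 bit,
    # counting one "step" per bit examined.
    count = 0
    for bit in s:
        count += 1
        if bit == 1:
            break
    return count

def worst_case(n):
    # t_A(n): maximize the step count over every input of size n.
    return max(steps(s) for s in product([0, 1], repeat=n))

assert worst_case(5) == 5  # the all-zero string forces a full scan
```

Enumerating all inputs is of course feasible only for tiny n; in practice the worst case is established analytically, as in the insertion sort example below.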
|
https://en.wikipedia.org/wiki/Worst-case_complexity
|
Similar definitions can be given for space complexity, randomness complexity, etc.
|
https://en.wikipedia.org/wiki/Worst-case_complexity
|
## Ways of speaking
Very frequently, the complexity
$$
t_{\mathsf{A}}
$$
of an algorithm
$$
\mathsf{A}
$$
is given in asymptotic Big-O notation, which gives its growth rate in the form
$$
t_{\mathsf{A}} = O(g(n))
$$
with a certain real valued comparison function
$$
g(n)
$$
and the meaning:
- There exists a positive real number
$$
M
$$
and a natural number
$$
n_0
$$
such that
$$
|t_{\mathsf{A}}(n)| \le M g(n) \quad \text{ for all } n\ge n_0.
$$
Quite frequently, the wording is:
- "Algorithm
$$
\mathsf{A}
$$
has the worst-case complexity
$$
O(g(n))
$$
."
or even only:
- "Algorithm
$$
\mathsf{A}
$$
has complexity
$$
O(g(n))
$$
."
## Examples
Consider performing insertion sort on
$$
n
$$
numbers on a random-access machine. The best-case for the algorithm is when the numbers are already sorted, which takes
$$
O(n)
$$
steps to perform the task.
|
https://en.wikipedia.org/wiki/Worst-case_complexity
|
However, the input in the worst-case for the algorithm is when the numbers are reverse sorted and it takes
$$
O(n^2)
$$
steps to sort them; therefore the worst-case time complexity of insertion sort is
$$
O(n^2)
$$
.
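The best/worst-case contrast can be measured directly. This sketch counts element comparisons (one common cost measure; the function name is illustrative): a sorted input of size n costs n − 1 comparisons, while a reverse-sorted one costs n(n − 1)/2.

```python
def insertion_sort_comparisons(a):
    # Counts element comparisons made by a plain insertion sort.
    a = list(a)
    comparisons = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0:
            comparisons += 1
            if a[j] > key:
                a[j + 1] = a[j]   # shift the larger element right
                j -= 1
            else:
                break
        a[j + 1] = key
    return comparisons

n = 10
assert insertion_sort_comparisons(range(n)) == n - 1                     # best case
assert insertion_sort_comparisons(range(n, 0, -1)) == n * (n - 1) // 2   # worst case
```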
|
https://en.wikipedia.org/wiki/Worst-case_complexity
|
Self-supervised learning (SSL) is a paradigm in machine learning where a model is trained on a task using the data itself to generate supervisory signals, rather than relying on externally-provided labels. In the context of neural networks, self-supervised learning aims to leverage inherent structures or relationships within the input data to create meaningful training signals. SSL tasks are designed so that solving them requires capturing essential features or relationships in the data. The input data is typically augmented or transformed in a way that creates pairs of related samples, where one sample serves as the input, and the other is used to formulate the supervisory signal. This augmentation can involve introducing noise, cropping, rotation, or other transformations. Self-supervised learning more closely imitates the way humans learn to classify objects.
During SSL, the model learns in two steps. First, the task is solved based on an auxiliary or pretext classification task using pseudo-labels, which help to initialize the model parameters. Next, the actual task is performed with supervised or unsupervised learning.
Self-supervised learning has produced promising results in recent years, and has found practical application in fields such as audio processing, and is being used by Facebook and others for speech recognition.
## Types
|
https://en.wikipedia.org/wiki/Self-supervised_learning
|
## Types
### Autoassociative self-supervised learning
Autoassociative self-supervised learning is a specific category of self-supervised learning where a neural network is trained to reproduce or reconstruct its own input data. In other words, the model is tasked with learning a representation of the data that captures its essential features or structure, allowing it to regenerate the original input.
The term "autoassociative" comes from the fact that the model is essentially associating the input data with itself. This is often achieved using autoencoders, which are a type of neural network architecture used for representation learning. Autoencoders consist of an encoder network that maps the input data to a lower-dimensional representation (latent space), and a decoder network that reconstructs the input from this representation.
The training process involves presenting the model with input data and requiring it to reconstruct the same data as closely as possible. The loss function used during training typically penalizes the difference between the original input and the reconstructed output (e.g. mean squared error). By minimizing this reconstruction error, the autoencoder learns a meaningful representation of the data in its latent space.
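A deliberately tiny sketch of this idea, under simplifying assumptions: a one-dimensional linear autoencoder with a scalar encoder weight and a scalar decoder weight, trained by gradient descent on mean squared reconstruction error. No external labels appear anywhere; the input itself supplies the training signal.

```python
# Minimal 1-D linear autoencoder: encode x -> w_e * x, decode -> w_d * (w_e * x).
# Training minimizes mean squared reconstruction error, so the only
# "supervision" is the input data itself.
data = [0.5, -1.0, 1.5, -2.0]
w_e, w_d = 0.5, 0.5   # encoder and decoder weights
lr = 0.05

for _ in range(500):
    g_e = g_d = 0.0
    for x in data:
        recon = w_d * w_e * x
        err = recon - x                      # reconstruction error
        g_e += 2 * err * w_d * x / len(data)  # d(MSE)/d(w_e)
        g_d += 2 * err * w_e * x / len(data)  # d(MSE)/d(w_d)
    w_e -= lr * g_e
    w_d -= lr * g_d

# Perfect reconstruction requires w_d * w_e == 1.
assert abs(w_d * w_e - 1.0) < 1e-3
```

Real autoencoders use multi-dimensional encoders with a latent space narrower than the input, which forces the learned representation to be a compression rather than an identity map.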
|
https://en.wikipedia.org/wiki/Self-supervised_learning
|
### Contrastive self-supervised learning
For a binary classification task, training data can be divided into positive examples and negative examples. Positive examples are those that match the target. For example, if training a classifier to identify birds, the positive training data would include images that contain birds. Negative examples would be images that do not. Contrastive self-supervised learning uses both positive and negative examples. The loss function in contrastive learning is used to minimize the distance between positive sample pairs, while maximizing the distance between negative sample pairs.
An early example uses a pair of 1-dimensional convolutional neural networks to process a pair of images and maximize their agreement.
Contrastive Language-Image Pre-training (CLIP) allows joint pretraining of a text encoder and an image encoder, such that a matching image-text pair have image encoding vector and text encoding vector that span a small angle (having a large cosine similarity).
InfoNCE (Noise-Contrastive Estimation) is a method to optimize two models jointly, based on Noise Contrastive Estimation (NCE).
|
https://en.wikipedia.org/wiki/Self-supervised_learning
|
Given a set
$$
X=\left\{x_1, \ldots x_N\right\}
$$
of
$$
N
$$
random samples containing one positive sample from
$$
p\left(x_{t+k} \mid c_t\right)
$$
and
$$
N-1
$$
negative samples from the 'proposal' distribution
$$
p\left(x_{t+k}\right)
$$
, it minimizes the following loss function:
$$
\mathcal{L}_{\mathrm{N}}=-\mathbb{E}_{X} \left[\log \frac{f_k\left(x_{t+k}, c_t\right)}{\sum_{x_j \in X} f_k\left(x_j, c_t\right)}\right]
$$
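In code the loss is a cross-entropy of picking the positive sample out of the batch. The sketch below assumes the scores f_k have already been computed (and are positive, e.g. exponentiated similarities); the function name and inputs are illustrative, not from any particular library.

```python
import math

def info_nce_loss(pos_score, neg_scores):
    # L_N = -log( f_k(positive) / sum of f_k over the batch ),
    # where all scores must be positive (e.g. exp of similarities).
    total = pos_score + sum(neg_scores)
    return -math.log(pos_score / total)

# With one positive and one equally scored negative, the loss is log 2:
assert abs(info_nce_loss(1.0, [1.0]) - math.log(2)) < 1e-12
```

Raising the positive score relative to the negatives drives the loss toward zero, which is exactly the contrastive objective described above.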
|
https://en.wikipedia.org/wiki/Self-supervised_learning
|
### Non-contrastive self-supervised learning
Non-contrastive self-supervised learning (NCSSL) uses only positive examples. Counterintuitively, NCSSL converges on a useful local minimum rather than reaching a trivial solution, with zero loss.
|
https://en.wikipedia.org/wiki/Self-supervised_learning
|
For the example of binary classification, it would trivially learn to classify each example as positive. Effective NCSSL requires an extra predictor on the online side that does not back-propagate on the target side.
## Comparison with other forms of machine learning
SSL belongs to supervised learning methods insofar as the goal is to generate a classified output from the input. At the same time, however, it does not require the explicit use of labeled input-output pairs. Instead, correlations, metadata embedded in the data, or domain knowledge present in the input are implicitly and autonomously extracted from the data. These supervisory signals, extracted from the data, can then be used for training.
SSL is similar to unsupervised learning in that it does not require labels in the sample data. Unlike unsupervised learning, however, learning is not done using inherent data structures.
Semi-supervised learning combines supervised and unsupervised learning, requiring only a small portion of the learning data be labeled.
In transfer learning, a model designed for one task is reused on a different task.
|
https://en.wikipedia.org/wiki/Self-supervised_learning
|
Training an autoencoder intrinsically constitutes a self-supervised process, because the output pattern needs to become an optimal reconstruction of the input pattern itself. However, in current jargon, the term 'self-supervised' often refers to tasks based on a pretext-task training setup. This involves the (human) design of such pretext task(s), unlike the case of fully self-contained autoencoder training.
In reinforcement learning, self-supervised learning from a combination of losses can create abstract representations where only the most important information about the state is kept in a compressed way.
## Examples
Self-supervised learning is particularly suitable for speech recognition. For example, Facebook developed wav2vec, a self-supervised algorithm, to perform speech recognition using two deep convolutional neural networks that build on each other.
Google's Bidirectional Encoder Representations from Transformers (BERT) model is used to better understand the context of search queries.
OpenAI's GPT-3 is an autoregressive language model that can be used in language processing. It can be used to translate texts or answer questions, among other things.
|
https://en.wikipedia.org/wiki/Self-supervised_learning
|
Bootstrap Your Own Latent (BYOL) is a NCSSL that produced excellent results on ImageNet and on transfer and semi-supervised benchmarks.
The Yarowsky algorithm is an example of self-supervised learning in natural language processing. From a small number of labeled examples, it learns to predict which word sense of a polysemous word is being used at a given point in text.
DirectPred is a NCSSL that directly sets the predictor weights instead of learning it via typical gradient descent.
Self-GenomeNet is an example of self-supervised learning in genomics.
Self-supervised learning continues to gain prominence as a new approach across diverse fields. Its ability to leverage unlabeled data effectively opens new possibilities for advancement in machine learning, especially in data-driven application domains.
|
https://en.wikipedia.org/wiki/Self-supervised_learning
|
Combinatorics is an area of mathematics primarily concerned with counting, both as a means and as an end to obtaining results, and certain properties of finite structures. It is closely related to many other areas of mathematics and has many applications ranging from logic to statistical physics and from evolutionary biology to computer science.
Combinatorics is well known for the breadth of the problems it tackles. Combinatorial problems arise in many areas of pure mathematics, notably in algebra, probability theory, topology, and geometry, as well as in its many application areas. Many combinatorial questions have historically been considered in isolation, giving an ad hoc solution to a problem arising in some mathematical context. In the later twentieth century, however, powerful and general theoretical methods were developed, making combinatorics into an independent branch of mathematics in its own right. One of the oldest and most accessible parts of combinatorics is graph theory, which by itself has numerous natural connections to other areas. Combinatorics is used frequently in computer science to obtain formulas and estimates in the analysis of algorithms.
## Definition
The full scope of combinatorics is not universally agreed upon. According to H. J. Ryser, a definition of the subject is difficult because it crosses so many mathematical subdivisions.
|
https://en.wikipedia.org/wiki/Combinatorics
|
Insofar as an area can be described by the types of problems it addresses, combinatorics is involved with:
- the enumeration (counting) of specified structures, sometimes referred to as arrangements or configurations in a very general sense, associated with finite systems,
- the existence of such structures that satisfy certain given criteria,
- the construction of these structures, perhaps in many ways, and
- optimization: finding the "best" structure or solution among several possibilities, be it the "largest", "smallest" or satisfying some other optimality criterion.
Leon Mirsky has said: "combinatorics is a range of linked studies which have something in common and yet diverge widely in their objectives, their methods, and the degree of coherence they have attained." One way to define combinatorics is, perhaps, to describe its subdivisions with their problems and techniques. This is the approach that is used below. However, there are also purely historical reasons for including or not including some topics under the combinatorics umbrella. Although primarily concerned with finite systems, some combinatorial questions and techniques can be extended to an infinite (specifically, countable) but discrete setting.
|
https://en.wikipedia.org/wiki/Combinatorics
|
## History
Basic combinatorial concepts and enumerative results appeared throughout the ancient world. The earliest recorded use of combinatorial techniques comes from problem 79 of the Rhind papyrus, which dates to the 16th century BC. The problem concerns a certain geometric series, and has similarities to Fibonacci's problem of counting the number of compositions of 1s and 2s that sum to a given total. Indian physician Sushruta asserts in Sushruta Samhita that 63 combinations can be made out of 6 different tastes, taken one at a time, two at a time, etc., thus computing all 2^6 − 1 possibilities. Greek historian Plutarch discusses an argument between Chrysippus (3rd century BCE) and Hipparchus (2nd century BCE) over a rather delicate enumerative problem, which was later shown to be related to Schröder–Hipparchus numbers. Stanley, Richard P.; "Hipparchus, Plutarch, Schröder, and Hough", American Mathematical Monthly 104 (1997), no. 4, 344–350.
|
https://en.wikipedia.org/wiki/Combinatorics
|
Earlier, in the Ostomachion, Archimedes (3rd century BCE) may have considered the number of configurations of a tiling puzzle, while combinatorial interests possibly were present in lost works by Apollonius.
In the Middle Ages, combinatorics continued to be studied, largely outside of the European civilization. The Indian mathematician Mahāvīra () provided formulae for the number of permutations and combinations, and these formulas may have been familiar to Indian mathematicians as early as the 6th century CE. The philosopher and astronomer Rabbi Abraham ibn Ezra () established the symmetry of binomial coefficients, while a closed formula was obtained later by the talmudist and mathematician Levi ben Gerson (better known as Gersonides), in 1321.
|
https://en.wikipedia.org/wiki/Combinatorics
|
The arithmetical triangle—a graphical diagram showing relationships among the binomial coefficients—was presented by mathematicians in treatises dating as far back as the 10th century, and would eventually become known as Pascal's triangle. Later, in Medieval England, campanology provided examples of what is now known as Hamiltonian cycles in certain Cayley graphs on permutations.
During the Renaissance, together with the rest of mathematics and the sciences, combinatorics enjoyed a rebirth. Works of Pascal, Newton, Jacob Bernoulli and Euler became foundational in the emerging field. In modern times, the works of J.J. Sylvester (late 19th century) and Percy MacMahon (early 20th century) helped lay the foundation for enumerative and algebraic combinatorics.
Graph theory also enjoyed an increase of interest at the same time, especially in connection with the four color problem.
|
https://en.wikipedia.org/wiki/Combinatorics
|
In the second half of the 20th century, combinatorics enjoyed a rapid growth, which led to the establishment of dozens of new journals and conferences in the subject. In part, the growth was spurred by new connections and applications to other fields, ranging from algebra to probability, from functional analysis to number theory, etc. These connections blurred the boundaries between combinatorics and parts of mathematics and theoretical computer science, but at the same time led to a partial fragmentation of the field.
## Approaches and subfields of combinatorics
### Enumerative combinatorics
Enumerative combinatorics is the most classical area of combinatorics and concentrates on counting the number of certain combinatorial objects. Although counting the number of elements in a set is a rather broad mathematical problem, many of the problems that arise in applications have a relatively simple combinatorial description. The Fibonacci numbers are a basic example of a problem in enumerative combinatorics. The twelvefold way provides a unified framework for counting permutations, combinations and partitions.
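A classic instance ties the Fibonacci numbers to a counting problem: the number of ordered ways to write n as a sum of 1s and 2s satisfies the Fibonacci recurrence, since any such composition starts with either a 1 (leaving n − 1) or a 2 (leaving n − 2). A minimal sketch (the function name is illustrative):

```python
def compositions_1_2(n):
    # Number of ordered sums of 1s and 2s totaling n.
    # A composition starts with 1 (leaving n-1) or 2 (leaving n-2),
    # so the counts obey the Fibonacci recurrence.
    a, b = 1, 1   # counts for totals 0 and 1
    for _ in range(n - 1):
        a, b = b, a + b
    return b

assert [compositions_1_2(n) for n in range(1, 7)] == [1, 2, 3, 5, 8, 13]
```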
|
https://en.wikipedia.org/wiki/Combinatorics
|
### Analytic combinatorics
Analytic combinatorics concerns the enumeration of combinatorial structures using tools from complex analysis and probability theory. In contrast with enumerative combinatorics, which uses explicit combinatorial formulae and generating functions to describe the results, analytic combinatorics aims at obtaining asymptotic formulae.
### Partition theory
Partition theory studies various enumeration and asymptotic problems related to integer partitions, and is closely related to q-series, special functions and orthogonal polynomials. Originally a part of number theory and analysis, it is now considered a part of combinatorics or an independent field. It incorporates the bijective approach and various tools in analysis and analytic number theory and has connections with statistical mechanics. Partitions can be graphically visualized with Young diagrams or Ferrers diagrams. They occur in a number of branches of mathematics and physics, including the study of symmetric polynomials and of the symmetric group and in group representation theory in general.
### Graph theory
Graphs are fundamental objects in combinatorics.
|
https://en.wikipedia.org/wiki/Combinatorics
|
Considerations of graph theory range from enumeration (e.g., the number of graphs on n vertices with k edges) to existing structures (e.g., Hamiltonian cycles) to algebraic representations (e.g., given a graph G and two numbers x and y, does the Tutte polynomial T_G(x, y) have a combinatorial interpretation?). Although there are very strong connections between graph theory and combinatorics, they are sometimes thought of as separate subjects. While combinatorial methods apply to many graph theory problems, the two disciplines are generally used to seek solutions to different types of problems.
### Design theory
Design theory is a study of combinatorial designs, which are collections of subsets with certain intersection properties. Block designs are combinatorial designs of a special type. This area is one of the oldest parts of combinatorics, such as in Kirkman's schoolgirl problem proposed in 1850. The solution of the problem is a special case of a Steiner system; such systems play an important role in the classification of finite simple groups. The area has further connections to coding theory and geometric combinatorics.
|
https://en.wikipedia.org/wiki/Combinatorics
|
Combinatorial design theory can be applied to the area of design of experiments. Some of the basic theory of combinatorial designs originated in the statistician Ronald Fisher's work on the design of biological experiments. Modern applications are also found in a wide gamut of areas including finite geometry, tournament scheduling, lotteries, mathematical chemistry, mathematical biology, algorithm design and analysis, networking, group testing and cryptography.
### Finite geometry
Finite geometry is the study of geometric systems having only a finite number of points. Structures analogous to those found in continuous geometries (Euclidean plane, real projective space, etc.) but defined combinatorially are the main items studied. This area provides a rich source of examples for design theory. It should not be confused with discrete geometry (combinatorial geometry).
### Order theory
Order theory is the study of partially ordered sets, both finite and infinite. It provides a formal framework for describing statements such as "this is less than that" or "this precedes that".
|
https://en.wikipedia.org/wiki/Combinatorics
|
Various examples of partial orders appear in algebra, geometry, number theory and throughout combinatorics and graph theory. Notable classes and examples of partial orders include lattices and Boolean algebras.
### Matroid theory
Matroid theory abstracts part of geometry. It studies the properties of sets (usually, finite sets) of vectors in a vector space that do not depend on the particular coefficients in a linear dependence relation. Not only the structure but also enumerative properties belong to matroid theory. Matroid theory was introduced by Hassler Whitney and studied as a part of order theory. It is now an independent field of study with a number of connections with other parts of combinatorics.
### Extremal combinatorics
Extremal combinatorics studies how large or how small a collection of finite objects (numbers, graphs, vectors, sets, etc.) can be, if it has to satisfy certain restrictions. Much of extremal combinatorics concerns classes of set systems; this is called extremal set theory.
|
https://en.wikipedia.org/wiki/Combinatorics
|
For instance, in an n-element set, what is the largest number of k-element subsets that can pairwise intersect one another? What is the largest number of subsets of which none contains any other? The latter question is answered by Sperner's theorem, which gave rise to much of extremal set theory.
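Sperner's bound, C(n, ⌊n/2⌋), can be verified by exhaustive search for very small ground sets. The brute-force sketch below (illustrative only; it enumerates all 2^(2^n) families of subsets, so it is feasible only for tiny n) finds the largest antichain, i.e. the largest family in which no subset contains another:

```python
from itertools import combinations
from math import comb

def largest_antichain(n):
    # All subsets of {0, ..., n-1}.
    subsets = [frozenset(c) for k in range(n + 1)
               for c in combinations(range(n), k)]
    best = 0
    # Brute force over every family of subsets (2^(2^n) of them).
    for mask in range(1 << len(subsets)):
        family = [s for i, s in enumerate(subsets) if mask >> i & 1]
        # An antichain contains no proper-subset pair.
        if all(not (a < b or b < a) for a, b in combinations(family, 2)):
            best = max(best, len(family))
    return best

# Sperner's theorem: the largest antichain has size C(n, floor(n/2)).
assert largest_antichain(3) == comb(3, 1)  # == 3
```

The extremal family achieving the bound is the "middle layer": all subsets of size ⌊n/2⌋.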
The types of questions addressed in this case are about the largest possible graph which satisfies certain properties. For example, the largest triangle-free graph on 2n vertices is a complete bipartite graph Kn, n. Often it is too hard even to find the extremal answer f(n) exactly and one can only give an asymptotic estimate.
Ramsey theory is another part of extremal combinatorics. It states that any sufficiently large configuration will contain some sort of order. It is an advanced generalization of the pigeonhole principle.
|
https://en.wikipedia.org/wiki/Combinatorics
|
### Probabilistic combinatorics
In probabilistic combinatorics, the questions are of the following type: what is the probability of a certain property for a random discrete object, such as a random graph? For instance, what is the average number of triangles in a random graph? Probabilistic methods are also used to determine the existence of combinatorial objects with certain prescribed properties (for which explicit examples might be difficult to find) by observing that the probability of randomly selecting an object with those properties is greater than 0. This approach (often referred to as the probabilistic method) proved highly effective in applications to extremal combinatorics and graph theory. A closely related area is the study of finite Markov chains, especially on combinatorial objects. Here again probabilistic tools are used to estimate the mixing time.
Often associated with Paul Erdős, who did the pioneering work on the subject, probabilistic combinatorics was traditionally viewed as a set of tools to study problems in other parts of combinatorics. The area recently grew to become an independent field of combinatorics.
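The triangle question above has a clean answer by linearity of expectation: in the random graph G(n, p), each of the C(n, 3) vertex triples forms a triangle with probability p^3, so the expected count is C(n, 3)·p^3. A small Monte Carlo sketch (parameter choices are illustrative) checks this numerically:

```python
import random
from itertools import combinations
from math import comb

def triangle_count(n, p, rng):
    # Sample G(n, p): each edge present independently with probability p.
    edges = {(u, v) for u, v in combinations(range(n), 2) if rng.random() < p}
    # Count triangles; (a, b, c) is sorted, so all three pairs are ordered.
    return sum(1 for a, b, c in combinations(range(n), 3)
               if (a, b) in edges and (a, c) in edges and (b, c) in edges)

rng = random.Random(0)
n, p, trials = 6, 0.5, 2000
estimate = sum(triangle_count(n, p, rng) for _ in range(trials)) / trials
expected = comb(n, 3) * p ** 3   # linearity of expectation: 20 * 0.125 = 2.5
assert abs(estimate - expected) < 0.3
```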
|
https://en.wikipedia.org/wiki/Combinatorics
|
### Algebraic combinatorics
Algebraic combinatorics is an area of mathematics that employs methods of abstract algebra, notably group theory and representation theory, in various combinatorial contexts and, conversely, applies combinatorial techniques to problems in algebra. Algebraic combinatorics has come to be seen more expansively as an area of mathematics where the interaction of combinatorial and algebraic methods is particularly strong and significant. Thus the combinatorial topics may be enumerative in nature or involve matroids, polytopes, partially ordered sets, or finite geometries. On the algebraic side, besides group and representation theory, lattice theory and commutative algebra are common.
### Combinatorics on words
Combinatorics on words deals with formal languages. It arose independently within several branches of mathematics, including number theory, group theory and probability. It has applications to enumerative combinatorics, fractal analysis, theoretical computer science, automata theory, and linguistics.
|
https://en.wikipedia.org/wiki/Combinatorics
|
While many applications are new, the classical Chomsky–Schützenberger hierarchy of classes of formal grammars is perhaps the best-known result in the field.
### Geometric combinatorics
Geometric combinatorics is related to convex and discrete geometry. It asks, for example, how many faces of each dimension a convex polytope can have. Metric properties of polytopes play an important role as well, e.g. the Cauchy theorem on the rigidity of convex polytopes. Special polytopes are also considered, such as permutohedra, associahedra and Birkhoff polytopes. Combinatorial geometry is a historical name for discrete geometry.
It includes a number of subareas such as polyhedral combinatorics (the study of faces of convex polyhedra), convex geometry (the study of convex sets, in particular combinatorics of their intersections), and discrete geometry, which in turn has many applications to computational geometry. The study of regular polytopes, Archimedean solids, and kissing numbers is also a part of geometric combinatorics.
### Topological combinatorics
Combinatorial analogs of concepts and methods in topology are used to study graph coloring, fair division, partitions, partially ordered sets, decision trees, necklace problems and discrete Morse theory. It should not be confused with combinatorial topology which is an older name for algebraic topology.
### Arithmetic combinatorics
Arithmetic combinatorics arose out of the interplay between number theory, combinatorics, ergodic theory, and harmonic analysis. It is about combinatorial estimates associated with arithmetic operations (addition, subtraction, multiplication, and division). Additive number theory (sometimes also called additive combinatorics) refers to the special case when only the operations of addition and subtraction are involved.
One important technique in arithmetic combinatorics is the ergodic theory of dynamical systems.
### Infinitary combinatorics
Infinitary combinatorics, or combinatorial set theory, is an extension of ideas in combinatorics to infinite sets. It is a part of set theory, an area of mathematical logic, but uses tools and ideas from both set theory and extremal combinatorics. Some of the things studied include continuous graphs and trees, extensions of Ramsey's theorem, and Martin's axiom. Recent developments concern combinatorics of the continuum and combinatorics on successors of singular cardinals.
Gian-Carlo Rota used the name continuous combinatorics to describe geometric probability, since there are many analogies between counting and measure.
## Related fields
### Combinatorial optimization
Combinatorial optimization is the study of optimization on discrete and combinatorial objects. It started as a part of combinatorics and graph theory, but is now viewed as a branch of applied mathematics and computer science, related to operations research, algorithm theory and computational complexity theory.
### Coding theory
Coding theory started as a part of design theory with early combinatorial constructions of error-correcting codes. The main idea of the subject is to design efficient and reliable methods of data transmission. It is now a large field of study, part of information theory.
### Discrete and computational geometry
Discrete geometry (also called combinatorial geometry) also began as a part of combinatorics, with early results on convex polytopes and kissing numbers. With the emergence of applications of discrete geometry to computational geometry, these two fields partially merged and became a separate field of study. There remain many connections with geometric and topological combinatorics, which themselves can be viewed as outgrowths of the early discrete geometry.
### Combinatorics and dynamical systems
Combinatorial aspects of dynamical systems form another emerging field, in which dynamical systems are defined on combinatorial objects; see, for example, graph dynamical systems.
### Combinatorics and physics
|
https://en.wikipedia.org/wiki/Combinatorics
|
MongoDB is a source-available, cross-platform, document-oriented database program. Classified as a NoSQL database product, MongoDB uses JSON-like documents with optional schemas. Released in February 2009 by 10gen (now MongoDB Inc.), it supports features like sharding, replication, and ACID transactions (from version 4.0).
MongoDB Atlas, its managed cloud service, operates on AWS, Google Cloud Platform, and Microsoft Azure. Current versions are licensed under the Server Side Public License (SSPL). MongoDB is a member of the MACH Alliance.
## History
The American software company 10gen began developing MongoDB in 2007 as a component of a planned platform-as-a-service product. In 2009, the company shifted to an open-source development model and began offering commercial support and other services. In 2013, 10gen changed its name to MongoDB Inc.
On October 20, 2017, MongoDB became a publicly traded company, listed on NASDAQ as MDB with an IPO price of $24 per share.
On November 8, 2018, with the stable release 4.0.4, the software's license changed from AGPL 3.0 to SSPL.
On October 30, 2019, MongoDB teamed with Alibaba Cloud to offer Alibaba Cloud customers a MongoDB-as-a-service solution.
|
https://en.wikipedia.org/wiki/MongoDB
|
Customers can use the managed offering from Alibaba's global data centers.
**MongoDB release history**

| Version | Release date | Feature notes |
|---|---|---|
| 1.0 | August 2009 | |
| 1.2 | December 2009 | more indexes per collection; faster index creation; map/reduce; stored JavaScript functions; configurable fsync time; several small features and fixes |
| 1.4 | March 2010 | |
| 1.6 | August 2010 | production-ready sharding; replica sets; support for IPv6 |
| 1.8 | March 2011 | |
| 2.0 | September 2011 | |
| 2.2 | August 2012 | |
| 2.4 | March 2013 | enhanced geospatial support; switch to V8 JavaScript engine; security enhancements; text search (beta); hashed index |
| 2.6 | April 8, 2014 | aggregation enhancements; text-search integration; query-engine improvements; new write-operation protocol; security enhancements |
| 3.0 | March 3, 2015 | WiredTiger storage engine support; pluggable storage engine API; SCRAM-SHA-1 authentication; improved explain functionality; MongoDB Ops Manager |
| 3.2 | December 8, 2015 | WiredTiger storage engine by default; replication election enhancements; config servers as replica sets; readConcern; document validations; moved from V8 to SpiderMonkey |
| 3.4 | November 29, 2016 | linearizable read concerns; views; collation |
| 3.6 | November 29, 2017 | |
| 4.0 | June 26, 2018 | transactions; license change effective per 4.0.4 |
| 4.2 | August 13, 2019 | |
| 4.4 | July 25, 2020 | |
| 4.4.5 | April 2021 | |
| 4.4.6 | May 2021 | |
| 5.0 | July 13, 2021 | future-proofs versioned API; client-side field level encryption; live resharding; time series support |
| 6.0 | July 19, 2022 | |
| 7.0 | August 15, 2023 | |
| 8.0 | October 2, 2024 | |

## Main features
### Ad-hoc queries
MongoDB supports field, range query and regular-expression searches. Queries can return specific fields of documents and also include user-defined JavaScript functions. Queries can also be configured to return a random sample of results of a given size.
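To make the query shapes concrete, here is a minimal Python sketch (not pymongo) of matching documents against a MongoDB-style filter document; the `matches` helper and its operator subset (`$gt`, `$regex`) are illustrative simplifications.

```python
import re

def matches(doc, query):
    """Return True if doc satisfies every clause of a MongoDB-style filter."""
    for field, cond in query.items():
        value = doc.get(field)
        if isinstance(cond, dict):  # operator clause, e.g. {"$gt": 21}
            for op, arg in cond.items():
                if op == "$gt" and not (value is not None and value > arg):
                    return False
                if op == "$regex" and not (isinstance(value, str) and re.search(arg, value)):
                    return False
        elif value != cond:  # plain field equality
            return False
    return True

docs = [{"name": "Alice", "age": 30}, {"name": "Bob", "age": 17}]
print([d["name"] for d in docs if matches(d, {"age": {"$gt": 21}})])  # ['Alice']
```

In real MongoDB the same range query would be expressed by passing the filter document `{"age": {"$gt": 21}}` to a find operation.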
### Indexing
Fields in a MongoDB document can be indexed with primary and secondary indices.
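A secondary index can be pictured as a map from a field's values to the documents containing them, so equality lookups avoid a full collection scan. The following is a toy sketch; `build_index` is an illustrative name, not a MongoDB API.

```python
from collections import defaultdict

def build_index(docs, field):
    """Build a toy secondary index: field value -> list of matching documents."""
    index = defaultdict(list)
    for doc in docs:
        if field in doc:
            index[doc[field]].append(doc)
    return index

docs = [
    {"_id": 1, "city": "Oslo"},
    {"_id": 2, "city": "Lima"},
    {"_id": 3, "city": "Oslo"},
]
by_city = build_index(docs, "city")
print([d["_id"] for d in by_city["Oslo"]])  # [1, 3]
```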
### Replication
MongoDB provides high availability with replica sets. A replica set consists of two or more copies of the data. Each replica-set member may act in the role of primary or secondary replica at any time. All writes and reads are done on the primary replica by default. Secondary replicas maintain a copy of the data of the primary using built-in replication.
When a primary replica fails, the replica set automatically conducts an election process to determine which secondary should become the primary. Secondaries can optionally serve read operations, but that data is only eventually consistent by default.
If the replicated MongoDB deployment only has a single secondary member, a separate daemon called an arbiter must be added to the set. It has the single responsibility of resolving the election of the new primary. As a consequence, an ideal distributed MongoDB deployment requires at least three separate servers, even in the case of just one primary and one secondary.
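The arbiter requirement follows from majority voting: a new primary needs a strict majority of voting members, so a two-member set that loses its primary cannot elect a replacement. A small sketch of that arithmetic (the function name is illustrative):

```python
def can_elect_primary(total_voters, reachable_voters):
    """A primary election succeeds only with a strict majority of voters."""
    return reachable_voters > total_voters // 2

# primary + secondary: after the primary fails, 1 of 2 voters remains
print(can_elect_primary(2, 1))  # False
# primary + secondary + arbiter: 2 of 3 voters remain, so the election succeeds
print(can_elect_primary(3, 2))  # True
```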
### Load balancing
MongoDB scales horizontally using sharding. The user chooses a shard key, which determines how the data in a collection will be distributed. The data is split into ranges (based on the shard key) and distributed across multiple shards, which are masters with one or more replicas. Alternatively, the shard key can be hashed to map to a shard – enabling an even data distribution.
MongoDB can run over multiple servers, balancing the load or duplicating data to keep the system functional in case of hardware failure.
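Hashed shard-key routing can be sketched as follows: hashing spreads even monotonically increasing keys (such as timestamps) across shards. `NUM_SHARDS` and `shard_for` are assumptions for the demo, not MongoDB internals — MongoDB actually hashes keys into chunk ranges rather than taking a simple modulus.

```python
import hashlib

NUM_SHARDS = 4  # assumed cluster size for the sketch

def shard_for(key):
    """Deterministically map a shard-key value to one of NUM_SHARDS shards."""
    digest = hashlib.md5(str(key).encode()).digest()
    return int.from_bytes(digest[:8], "big") % NUM_SHARDS

# The same key always routes to the same shard.
print(shard_for("user:42") == shard_for("user:42"))  # True
# Sequential keys scatter across shards instead of piling onto one.
print(sorted({shard_for(k) for k in range(1000)}))
```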
### File storage
MongoDB can be used as a file system, called GridFS, with load-balancing and data-replication features over multiple machines for storing files.
This function, called a grid file system, is included with MongoDB drivers. MongoDB exposes functions for file manipulation and content to developers. GridFS can be accessed using the mongofiles utility or plugins for Nginx and lighttpd. GridFS divides a file into parts, or chunks, and stores each of those chunks as a separate document.
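The chunking scheme can be sketched as splitting a byte string into fixed-size chunk documents plus one metadata document; `store_file`, `read_file`, and the 4-byte chunk size are illustrative choices (real GridFS defaults to 255 kB chunks).

```python
CHUNK_SIZE = 4  # demo value; real GridFS defaults to 255 kB

def store_file(name, data, chunk_size=CHUNK_SIZE):
    """Split data into numbered chunk documents plus one metadata document."""
    meta = {"filename": name, "length": len(data), "chunkSize": chunk_size}
    chunks = [
        {"files_id": name, "n": n, "data": data[off:off + chunk_size]}
        for n, off in enumerate(range(0, len(data), chunk_size))
    ]
    return meta, chunks

def read_file(chunks):
    """Reassemble the file by concatenating chunks in order."""
    return b"".join(c["data"] for c in sorted(chunks, key=lambda c: c["n"]))

meta, chunks = store_file("hello.txt", b"hello world")
print(len(chunks), read_file(chunks))  # 3 b'hello world'
```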
### Aggregation
MongoDB provides three ways to perform aggregation: the aggregation pipeline, the map-reduce function and single-purpose aggregation methods.
Map-reduce can be used for batch processing of data and aggregation operations. However, according to MongoDB's documentation, the aggregation pipeline provides better performance for most aggregation operations.
The aggregation framework enables users to obtain results similar to those returned by queries that include the SQL GROUP BY clause. Aggregation operators can be strung together to form a pipeline, analogous to Unix pipes.
The aggregation framework includes the $lookup operator, which can join documents from multiple collections, as well as statistical operators such as standard deviation.
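The pipe-like composition can be sketched with stages as plain functions over a list of documents; `match_stage` and `group_stage` are illustrative stand-ins for the $match and $group operators, with the grouping playing the role of SQL's GROUP BY.

```python
from itertools import groupby

def match_stage(pred):
    """Analog of $match: keep only documents satisfying pred."""
    return lambda docs: [d for d in docs if pred(d)]

def group_stage(key, agg):
    """Analog of $group: one output document per key, carrying an aggregate."""
    def stage(docs):
        ordered = sorted(docs, key=key)
        return [{"_id": k, "total": agg(list(g))} for k, g in groupby(ordered, key=key)]
    return stage

def run_pipeline(docs, stages):
    for stage in stages:  # each stage feeds the next, like a Unix pipe
        docs = stage(docs)
    return docs

orders = [{"item": "tea", "qty": 2}, {"item": "tea", "qty": 3}, {"item": "jam", "qty": 1}]
result = run_pipeline(orders, [
    match_stage(lambda d: d["qty"] > 0),
    group_stage(lambda d: d["item"], lambda g: sum(d["qty"] for d in g)),
])
print(result)  # [{'_id': 'jam', 'total': 1}, {'_id': 'tea', 'total': 5}]
```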
### Server-side JavaScript execution
JavaScript can be used in queries, aggregation functions (such as MapReduce) and sent directly to the database to be executed.
### Capped collections
MongoDB supports fixed-size collections called capped collections. This type of collection maintains insertion order and, once the specified size has been reached, behaves like a circular queue.
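The circular-queue behavior maps directly onto a bounded deque; this sketch uses Python's `collections.deque` as a stand-in for a capped collection.

```python
from collections import deque

capped = deque(maxlen=3)  # cap of 3 documents for the demo
for i in range(5):
    capped.append({"_id": i})  # inserts beyond the cap evict the oldest entry

print([d["_id"] for d in capped])  # [2, 3, 4]
```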
### Transactions
MongoDB supports multi-document ACID transactions since the 4.0 release in June 2018.
## Editions
### MongoDB Community Server
The MongoDB Community Edition is free and available for Windows, Linux and macOS.
### MongoDB Enterprise Server
MongoDB Enterprise Server is the commercial edition of MongoDB and is available as part of the MongoDB Enterprise Advanced subscription.
### MongoDB Atlas
MongoDB is also available as an on-demand, fully managed service. MongoDB Atlas runs on AWS, Microsoft Azure and Google Cloud Platform.
On March 10, 2022, MongoDB warned its users in Russia and Belarus that their data stored on the MongoDB Atlas platform would be destroyed as a result of American sanctions related to the Russo-Ukrainian War.
## Architecture
### Programming language accessibility
MongoDB has official drivers for major programming languages and development environments. There are also a large number of unofficial or community-supported drivers for other programming languages and frameworks.
### Serverless access
### Management and graphical front-ends
The primary interface to the database has been the mongo shell. Since MongoDB 3.2, MongoDB Compass has been offered as the native GUI. There are also products and third-party projects that offer user interfaces for administration and data viewing.
## Licensing
### MongoDB Community Server
As of October 2018, MongoDB is released under the Server Side Public License (SSPL), a non-free license developed by the project. It replaces the GNU Affero General Public License, and is nearly identical to the GNU General Public License version 3, but requires that those making the software publicly available as part of a "service" must make the service's entire source code (insofar that a user would be able to recreate the service themselves) available under this license.
By contrast, the AGPL only requires the source code of the licensed software to be provided to users when the software is conveyed over a network. The SSPL was submitted for certification to the Open Source Initiative but later withdrawn. In January 2021, the Open Source Initiative stated that SSPL is not an open source license. The language drivers are available under an Apache License. In addition, MongoDB Inc. offers proprietary licenses for MongoDB. The last versions licensed as AGPL version 3 are 4.0.3 (stable) and 4.1.4.
MongoDB has been removed from the Debian, Fedora and Red Hat Enterprise Linux distributions because of the licensing change. Fedora determined that the SSPL version 1 is not a free software license because it is "intentionally crafted to be aggressively discriminatory" towards commercial users.
## Bug reports and criticisms
## Bug reports and criticisms
### Security
Because of MongoDB's default security configuration, which allowed any user full access to the database, data from tens of thousands of MongoDB installations has been stolen. Furthermore, many MongoDB servers have been held for ransom. In September 2017, Davi Ottenheimer, head of product security at MongoDB, stated that measures had been taken to defend against these risks.
From the MongoDB 2.6 release onward, the binaries for the official MongoDB RPM and DEB packages bind to localhost by default. From MongoDB 3.6, this default behavior was extended to all MongoDB packages across all platforms. As a result, all networked connections to the database are denied unless explicitly configured by an administrator.
### Technical criticisms
In some failure scenarios in which an application can access two distinct MongoDB processes that cannot access each other, it is possible for MongoDB to return stale reads. It is also possible for MongoDB to roll back writes that have been acknowledged. The issue was addressed in version 3.4.0, released in November 2016, and applied to earlier releases from v3.2.12 onward.
Before version 2.2, locks were implemented on a per-server-process basis. With version 2.2, locks were implemented at the database level. Beginning with version 3.0, pluggable storage engines are available, and each storage engine may implement locks differently. With MongoDB 3.0, locks are implemented at the collection level for the MMAPv1 storage engine, while the WiredTiger storage engine uses an optimistic concurrency protocol that effectively provides document-level locking. Even with versions prior to 3.0, one approach to increase concurrency is to use sharding. In some situations, reads and writes will yield their locks. If MongoDB predicts that a page is unlikely to be in memory, operations will yield their lock while the pages load. The use of lock yielding expanded greatly in version 2.2.
Until version 3.3.11, MongoDB could not perform collation-based sorting and was limited to bytewise comparison via memcmp, which would not provide correct ordering for many non-English languages when used with a Unicode encoding. The issue was fixed on August 23, 2016.
Prior to MongoDB 4.0, queries against an index were not atomic.