The issue was fixed on August 23, 2016. Prior to MongoDB 4.0, queries against an index were not atomic. Documents that were updated while a query was running could be missed. The introduction of the snapshot read concern in MongoDB 4.0 eliminated this risk. MongoDB claimed that version 3.6.4 had passed "the industry's toughest data safety, correctness, and consistency tests" by Jepsen, and that "MongoDB offers among the strongest data consistency, correctness, and safety guarantees of any database available today." Jepsen, which describes itself as a "distributed systems safety research company," disputed both claims on Twitter, saying, "In that report, MongoDB lost data and violated causal by default." In its May 2020 report on MongoDB version 4.2.6, Jepsen wrote that MongoDB had only mentioned tests that version 3.6.4 had passed, and that version 4.2.6 had introduced more problems. Jepsen's test summary reads in part: Jepsen evaluated MongoDB version 4.2.6, and found that even at the strongest levels of read and write concern, it failed to preserve snapshot isolation. Instead, Jepsen observed read skew, cyclic information flow, duplicate writes, and internal consistency violations. Weak defaults meant that transactions could lose writes and allow dirty reads, even downgrading requested safety levels at the database and collection level.
https://en.wikipedia.org/wiki/MongoDB
Moreover, the snapshot read concern did not guarantee snapshot isolation unless paired with write concern majority—even for read-only transactions. These design choices complicate the safe use of MongoDB transactions. On May 26, Jepsen updated the report to say: "MongoDB identified a bug in the transaction retry mechanism which they believe was responsible for the anomalies observed in this report; a patch is scheduled for 4.2.8." The issue has been patched as of that version, and "Jepsen criticisms of the default write concerns have also been addressed, with the default write concern now elevated to the majority concern (w:majority) from MongoDB 5.0."
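On versions that predate the stronger defaults, the safest pattern is to request the desired read and write concerns explicitly rather than relying on database- or collection-level defaults. The sketch below is a minimal illustration using the PyMongo driver; the connection URI, database, and collection names are placeholders, and a replica-set deployment is assumed (transactions require one).

```python
# Minimal sketch: explicitly requesting snapshot reads and majority writes
# in a MongoDB transaction via PyMongo. URI and collection names are
# placeholders; a replica set (here "rs0") is assumed.
from pymongo import MongoClient
from pymongo.read_concern import ReadConcern
from pymongo.write_concern import WriteConcern

client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")
accounts = client.bank.accounts

with client.start_session() as session:
    # Ask for the strongest levels explicitly instead of relying on
    # defaults, which may be weaker on older server versions.
    with session.start_transaction(
        read_concern=ReadConcern("snapshot"),
        write_concern=WriteConcern("majority"),
    ):
        accounts.update_one({"_id": "alice"}, {"$inc": {"balance": -10}}, session=session)
        accounts.update_one({"_id": "bob"}, {"$inc": {"balance": 10}}, session=session)
```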
## MongoDB conference

MongoDB Inc. hosts an annual developer conference that has been called MongoDB World or MongoDB.live.

| Year | Dates | City | Venue | Notes |
|---|---|---|---|---|
| 2014 | June 23–25 | New York City | Sheraton Times Square Hotel | |
| 2015 | June 1–2 | New York City | Sheraton Times Square Hotel | |
| 2016 | June 28–29 | New York City | New York Hilton Midtown | |
| 2017 | June 20–21 | Chicago | Hyatt Regency Chicago | First year not in New York City |
| 2018 | June 26–27 | New York City | New York Hilton Midtown | |
| 2019 | June 17–19 | New York City | New York Hilton Midtown | |
| 2020 | May 4–6 | Online | | In-person event canceled and conference held entirely online because of the COVID-19 pandemic |
| 2021 | July 13–14 | Online | | Conference held online because of the COVID-19 pandemic |
| 2022 | June 7–9 | New York City | Javits Center | |
In classical electromagnetism, Ampère's circuital law (not to be confused with Ampère's force law) relates the circulation of a magnetic field around a closed loop to the electric current passing through the loop. James Clerk Maxwell derived it using hydrodynamics in his 1861 paper "On Physical Lines of Force". In 1865, he generalized the equation to apply to time-varying currents by adding the displacement current term, resulting in the modern form of the law, sometimes called the Ampère–Maxwell law, which is one of Maxwell's equations that form the basis of classical electromagnetism.

## Ampère's original circuital law

In 1820 Danish physicist Hans Christian Ørsted discovered that an electric current creates a magnetic field around it, when he noticed that the needle of a compass next to a current-carrying wire turned so that the needle was perpendicular to the wire (H. A. M. Snelders, "Oersted's discovery of electromagnetism"). He investigated and discovered the rules which govern the field around a straight current-carrying wire:

- The magnetic field lines encircle the current-carrying wire.
- The magnetic field lines lie in a plane perpendicular to the wire.
- If the direction of the current is reversed, the direction of the magnetic field reverses.
https://en.wikipedia.org/wiki/Amp%C3%A8re%27s_circuital_law
- The strength of the field is directly proportional to the magnitude of the current.
- The strength of the field at any point is inversely proportional to the distance of the point from the wire.

This sparked a great deal of research into the relation between electricity and magnetism. André-Marie Ampère investigated the magnetic force between two current-carrying wires, discovering Ampère's force law. In the 1850s Scottish mathematical physicist James Clerk Maxwell generalized these results and others into a single mathematical law. The original form of Maxwell's circuital law, which he derived as early as 1855 in his paper "On Faraday's Lines of Force" based on an analogy to hydrodynamics, relates magnetic fields to the electric currents that produce them. It determines the magnetic field associated with a given current, or the current associated with a given magnetic field. The original circuital law only applies to a magnetostatic situation, to continuous steady currents flowing in a closed circuit. For systems with electric fields that change over time, the original law (as given in this section) must be modified to include a term known as Maxwell's correction (see below).
### Equivalent forms

The original circuital law can be written in several different forms, which are all ultimately equivalent:

- An "integral form" and a "differential form". The forms are exactly equivalent, and related by the Kelvin–Stokes theorem (see the "proof" section below).
- Forms using SI units, and those using cgs units. Other units are possible, but rare. This section will use SI units, with cgs units discussed later.
- Forms using either the B or the H magnetic field. These two forms use the total current density and the free current density, respectively. The B and H fields are related by the constitutive equation $$ \mathbf{B} = \mu_0 \mathbf{H} $$ in non-magnetic materials, where $$ \mu_0 $$ is the magnetic constant.

### Explanation

The integral form of the original circuital law is a line integral of the magnetic field around some closed curve C (arbitrary but must be closed). The curve C in turn bounds both a surface S which the electric current passes through (again arbitrary but not closed—since no three-dimensional volume is enclosed by S), and encloses the current.
The mathematical statement of the law is a relation between the circulation of the magnetic field around some path (line integral) and the current which passes through the enclosed surface (surface integral). In terms of total current (which is the sum of both free current and bound current), the line integral of the magnetic B-field (in teslas, T) around closed curve C is proportional to the total current $$ I_\mathrm{enc} $$ passing through a surface S (enclosed by C). In terms of free current, the line integral of the magnetic H-field (in amperes per metre, A·m⁻¹) around closed curve C equals the free current $$ I_\mathrm{f,enc} $$ through a surface S.
Forms of the original circuital law written in SI units:

| | Integral form | Differential form |
|---|---|---|
| Using B-field and total current | $$ \oint_C \mathbf{B} \cdot \mathrm{d}\boldsymbol{l} = \mu_0 \iint_S \mathbf{J} \cdot \mathrm{d}\mathbf{S} $$ | $$ \nabla \times \mathbf{B} = \mu_0 \mathbf{J} $$ |
| Using H-field and free current | $$ \oint_C \mathbf{H} \cdot \mathrm{d}\boldsymbol{l} = \iint_S \mathbf{J}_\mathrm{f} \cdot \mathrm{d}\mathbf{S} $$ | $$ \nabla \times \mathbf{H} = \mathbf{J}_\mathrm{f} $$ |

- $$ \mathbf{J} $$ is the total current density (in amperes per square metre, A·m⁻²),
- $$ \mathbf{J}_\mathrm{f} $$ is the free current density only,
- $$ \oint_C $$ is the closed line integral around the closed curve C,
- $$ \iint_S $$ denotes a surface integral over the surface S bounded by the curve C,
- $$ \cdot $$ is the vector dot product,
- $$ \mathrm{d}\boldsymbol{l} $$ is an infinitesimal element (a differential) of the curve C (i.e. a vector with magnitude equal to the length of the infinitesimal line element, and direction given by the tangent to the curve C),
- $$ \mathrm{d}\mathbf{S} $$ is the vector area of an infinitesimal element of surface S (that is, a vector with magnitude equal to the area of the infinitesimal surface element, and direction normal to surface S; the direction of the normal must correspond with the orientation of C by the right hand rule), see below for further explanation of the curve C and surface S,
- $$ \nabla \times $$ is the curl operator.
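As a quick numerical illustration (an added sketch; the current and loop radius are arbitrary values), the field of an infinite straight wire, $$ B = \mu_0 I / (2\pi r) $$, can be integrated around a circular loop centred on the wire; the circulation reproduces $$ \mu_0 I_\mathrm{enc} $$ regardless of the loop radius.

```python
# Numerical check of the integral form of Ampère's law for an infinite
# straight wire along the z-axis: the circulation of B around a circle
# of radius r centred on the wire should equal mu_0 * I.
import numpy as np

MU_0 = 4e-7 * np.pi      # magnetic constant, T·m/A
I = 3.0                  # current in the wire (along +z), A
r = 0.05                 # radius of the circular Amperian loop, m

theta = np.linspace(0.0, 2.0 * np.pi, 10_000, endpoint=False)
dtheta = theta[1] - theta[0]

# Points on the loop and the line elements dl (tangential to the loop).
points = np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1)
dl = np.stack([-np.sin(theta), np.cos(theta)], axis=1) * r * dtheta

# Field of an infinite straight wire: magnitude mu_0 I / (2 pi rho), azimuthal direction.
rho = np.linalg.norm(points, axis=1)
b_hat = np.stack([-points[:, 1], points[:, 0]], axis=1) / rho[:, None]
B = (MU_0 * I / (2.0 * np.pi * rho))[:, None] * b_hat

circulation = np.sum(np.einsum("ij,ij->i", B, dl))   # sum of B · dl over segments
print(circulation, MU_0 * I)                          # the two values agree
```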
### Ambiguities and sign conventions

There are a number of ambiguities in the above definitions that require clarification and a choice of convention.

1. First, three of these terms are associated with sign ambiguities: the line integral $$ \oint_C $$ could go around the loop in either direction (clockwise or counterclockwise); the vector area $$ \mathrm{d}\mathbf{S} $$ could point in either of the two directions normal to the surface; and $$ I_\mathrm{enc} $$ is the net current passing through the surface S, meaning the current passing through in one direction, minus the current in the other direction—but either direction could be chosen as positive. These ambiguities are resolved by the right-hand rule: with the palm of the right hand toward the area of integration, and the index finger pointing along the direction of line integration, the outstretched thumb points in the direction that must be chosen for the vector area $$ \mathrm{d}\mathbf{S} $$. Also the current passing in the same direction as $$ \mathrm{d}\mathbf{S} $$ must be counted as positive. The right hand grip rule can also be used to determine the signs.
2. Second, there are infinitely many possible surfaces S that have the curve C as their border. (Imagine a soap film on a wire loop, which can be deformed by blowing on the film.) Which of those surfaces is to be chosen? If the loop does not lie in a single plane, for example, there is no one obvious choice.
The answer is that it does not matter: in the magnetostatic case, the current density is solenoidal (see next section), so the divergence theorem and continuity equation imply that the flux through any surface with boundary C, with the same sign convention, is the same. In practice, one usually chooses the most convenient surface (with the given boundary) to integrate over.

## Free current versus bound current

The electric current that arises in the simplest textbook situations would be classified as "free current"—for example, the current that passes through a wire or battery. In contrast, "bound current" arises in the context of bulk materials that can be magnetized and/or polarized. (All materials can to some extent.) When a material is magnetized (for example, by placing it in an external magnetic field), the electrons remain bound to their respective atoms, but behave as if they were orbiting the nucleus in a particular direction, creating a microscopic current. When the currents from all these atoms are put together, they create the same effect as a macroscopic current, circulating perpetually around the magnetized object. This magnetization current $$ \mathbf{J}_\mathrm{M} $$ is one contribution to "bound current". The other source of bound current is bound charge.
When an electric field is applied, the positive and negative bound charges can separate over atomic distances in polarizable materials, and when the bound charges move, the polarization changes, creating another contribution to the "bound current", the polarization current $$ \mathbf{J}_\mathrm{P} $$. The total current density due to free and bound charges is then:
$$ \mathbf{J} =\mathbf{J}_\mathrm{f} + \mathbf{J}_\mathrm{M} + \mathbf{J}_\mathrm{P} \,, $$
with $$ \mathbf{J}_\mathrm{f} $$ the "free" or "conduction" current density. All current is fundamentally the same, microscopically. Nevertheless, there are often practical reasons for wanting to treat bound current differently from free current. For example, the bound current usually originates over atomic dimensions, and one may wish to take advantage of a simpler theory intended for larger dimensions. The result is that the more microscopic Ampère's circuital law, expressed in terms of B and the microscopic current (which includes free, magnetization and polarization currents), is sometimes put into the equivalent form below in terms of H and the free current only.
For a detailed definition of free current and bound current, and the proof that the two formulations are equivalent, see the "proof" section below.

## Shortcomings of the original formulation of the circuital law

There are two important issues regarding the circuital law that require closer scrutiny. First, there is an issue regarding the continuity equation for electrical charge. In vector calculus, the identity for the divergence of a curl states that the divergence of the curl of a vector field must always be zero. Hence
$$ \nabla\cdot(\nabla\times\mathbf{B}) = 0 \,, $$
and so the original Ampère's circuital law implies that
$$ \nabla\cdot \mathbf{J} = 0\,, $$
i.e. that the current density is solenoidal.
But in general, reality follows the continuity equation for electric charge:
$$ \nabla\cdot \mathbf{J} = -\frac{\partial \rho}{\partial t} \,, $$
which is nonzero for a time-varying charge density. An example occurs in a capacitor circuit where time-varying charge densities exist on the plates. Second, there is an issue regarding the propagation of electromagnetic waves.
For example, in free space, where
$$ \mathbf{J} = \mathbf{0}\,, $$
the circuital law implies that
$$ \nabla\times\mathbf{B} = \mathbf{0}\,, $$
i.e. that the magnetic field is irrotational, but to maintain consistency with the continuity equation for electric charge, we must have
$$ \nabla\times\mathbf{B} = \frac{1}{c^2}\frac{\partial\mathbf{E}}{\partial t}\,. $$
To resolve these situations (with the equation above), the contribution of displacement current must be added to the current term in the circuital law. James Clerk Maxwell conceived of displacement current as a polarization current in the dielectric vortex sea, which he used to model the magnetic field hydrodynamically and mechanically. He added this displacement current to Ampère's circuital law at equation 112 in his 1861 paper "On Physical Lines of Force".

### Displacement current

In free space, the displacement current is related to the time rate of change of electric field. In a dielectric the above contribution to displacement current is present too, but a major contribution to the displacement current is related to the polarization of the individual molecules of the dielectric material.
Even though charges cannot flow freely in a dielectric, the charges in molecules can move a little under the influence of an electric field. The positive and negative charges in molecules separate under the applied field, causing an increase in the state of polarization, expressed as the polarization density $$ \mathbf{P} $$. A changing state of polarization is equivalent to a current. Both contributions to the displacement current are combined by defining the displacement current as:
$$ \mathbf{J}_\mathrm{D} = \frac {\partial}{\partial t} \mathbf{D} (\mathbf{r}, \, t) \, , $$
where the electric displacement field is defined as:
$$ \mathbf{D} = \varepsilon_0 \mathbf{E} + \mathbf{P} = \varepsilon_0 \varepsilon_\mathrm{r} \mathbf{E} \, , $$
where $$ \varepsilon_0 $$ is the electric constant, $$ \varepsilon_\mathrm{r} $$ the relative static permittivity, and $$ \mathbf{P} $$ is the polarization density.
Substituting this form for $$ \mathbf{D} $$ in the expression for displacement current, it has two components:
$$ \mathbf{J}_\mathrm{D} = \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} + \frac{\partial \mathbf{P}}{\partial t}\,. $$
The first term on the right hand side is present everywhere, even in a vacuum. It doesn't involve any actual movement of charge, but it nevertheless has an associated magnetic field, as if it were an actual current.
Some authors apply the name displacement current to only this contribution. The second term on the right hand side is the displacement current as originally conceived by Maxwell, associated with the polarization of the individual molecules of the dielectric material. Maxwell's original explanation for displacement current focused upon the situation that occurs in dielectric media. In the modern post-aether era, the concept has been extended to apply to situations with no material media present, for example, to the vacuum between the plates of a charging vacuum capacitor. The displacement current is justified today because it serves several requirements of an electromagnetic theory: correct prediction of magnetic fields in regions where no free current flows; prediction of wave propagation of electromagnetic fields; and conservation of electric charge in cases where charge density is time-varying.
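As a small worked illustration (an added sketch; the numerical values are arbitrary), the displacement current between the plates of an ideal parallel-plate capacitor charged by a constant conduction current equals that conduction current, which is what lets the Ampère–Maxwell law give the same magnetic field whether the chosen surface is pierced by the feed wire or passes between the plates.

```python
# Displacement current between the plates of an ideal parallel-plate
# capacitor charged at constant current I: I_D = eps_0 * A * dE/dt = I.
EPS_0 = 8.8541878128e-12   # electric constant, F/m
I = 2.0e-3                 # charging (conduction) current, A
A = 1.0e-2                 # plate area, m^2

# For an ideal capacitor, E = Q / (eps_0 * A), so while it charges
# at constant current, dE/dt = I / (eps_0 * A).
dE_dt = I / (EPS_0 * A)

# Displacement current through a surface lying between the plates.
I_displacement = EPS_0 * A * dE_dt

print(I_displacement, I)   # identical: 2.0e-3 A
```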
For greater discussion see Displacement current.

## Extending the original law: the Ampère–Maxwell equation

Next, the circuital equation is extended by including the polarization current, thereby remedying the limited applicability of the original circuital law. Treating free charges separately from bound charges, the equation including Maxwell's correction in terms of the H-field is (the H-field is used because it includes the magnetization currents, so $$ \mathbf{J}_\mathrm{M} $$ does not appear explicitly; see H-field and also Note):
$$ \oint_C \mathbf{H} \cdot \mathrm{d} \boldsymbol{l} = \iint_S \left( \mathbf{J}_\mathrm{f} + \frac{\partial \mathbf{D}}{\partial t} \right) \cdot \mathrm{d} \mathbf{S} $$
(integral form), where $$ \mathbf{H} $$ is the magnetic H-field (also called "auxiliary magnetic field", "magnetic field intensity", or just "magnetic field"), $$ \mathbf{D} $$ is the electric displacement field, and $$ \mathbf{J}_\mathrm{f} $$ is the enclosed conduction current or free current density.
In differential form,
$$ \mathbf{\nabla} \times \mathbf{H} = \mathbf{J}_\mathrm{f}+\frac{\partial \mathbf{D}}{\partial t} \, . $$
On the other hand, treating all charges on the same footing (disregarding whether they are bound or free charges), the generalized Ampère's equation, also called the Maxwell–Ampère equation, is, in integral form (see the "proof" section below),
$$ \oint_C \mathbf{B} \cdot \mathrm{d} \boldsymbol{l} = \iint_S \left( \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} \right) \cdot \mathrm{d} \mathbf{S} \,. $$
In differential form,
$$ \mathbf{\nabla} \times \mathbf{B} = \mu_0 \left( \mathbf{J} + \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} \right) \,. $$
In both forms $$ \mathbf{J} $$ includes magnetization current density as well as conduction and polarization current densities.
That is, the current density on the right side of the Ampère–Maxwell equation is:
$$ \mathbf{J}_\mathrm{f}+\mathbf{J}_\mathrm{D} +\mathbf{J}_\mathrm{M} = \mathbf{J}_\mathrm{f}+\mathbf{J}_\mathrm{P} +\mathbf{J}_\mathrm{M} + \varepsilon_0 \frac {\partial \mathbf{E}}{\partial t} = \mathbf{J}+ \varepsilon_0 \frac {\partial \mathbf{E}}{\partial t} \, , $$
where the current density $$ \mathbf{J}_\mathrm{D} $$ is the displacement current, and $$ \mathbf{J} $$ is the current density contribution actually due to movement of charges, both free and bound.
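To see explicitly why the added displacement-current term restores consistency with charge conservation, take the divergence of the differential form $$ \mathbf{\nabla} \times \mathbf{H} = \mathbf{J}_\mathrm{f}+\frac{\partial \mathbf{D}}{\partial t} $$ and use Gauss's law $$ \nabla\cdot\mathbf{D} = \rho_\mathrm{f} $$:
$$ 0 = \nabla\cdot(\nabla\times\mathbf{H}) = \nabla\cdot\mathbf{J}_\mathrm{f} + \frac{\partial}{\partial t}\left(\nabla\cdot\mathbf{D}\right) = \nabla\cdot\mathbf{J}_\mathrm{f} + \frac{\partial \rho_\mathrm{f}}{\partial t}\,, $$
which is exactly the continuity equation for free charge.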
Because $$ \nabla\cdot \mathbf{D} = \rho $$, the charge continuity issue with Ampère's original formulation is no longer a problem. Because of the term $$ \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} $$, wave propagation in free space now is possible. With the addition of the displacement current, Maxwell was able to hypothesize (correctly) that light was a form of electromagnetic wave.
See electromagnetic wave equation for a discussion of this important discovery.

### Proof of equivalence

Proof that the formulations of the circuital law in terms of free current are equivalent to the formulations involving total current. In this proof, we will show that the equation
$$ \nabla\times \mathbf{H} = \mathbf{J}_\mathrm{f} + \frac{\partial \mathbf{D}}{\partial t} $$
is equivalent to the equation
$$ \frac{1}{\mu_0}(\mathbf{\nabla} \times \mathbf{B}) = \mathbf{J} + \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}\,. $$
Note that we are only dealing with the differential forms, not the integral forms, but that is sufficient since the differential and integral forms are equivalent in each case, by the Kelvin–Stokes theorem.
We introduce the polarization density $$ \mathbf{P} $$, which has the following relation to $$ \mathbf{E} $$ and $$ \mathbf{D} $$:
$$ \mathbf{D} = \varepsilon_0 \mathbf{E} + \mathbf{P} \,. $$
Next, we introduce the magnetization density $$ \mathbf{M} $$, which has the following relation to $$ \mathbf{B} $$ and $$ \mathbf{H} $$:
$$ \mathbf{B} = \mu_0 \left( \mathbf{H} + \mathbf{M} \right) \,, $$
and the following relation to the bound current:
$$ \begin{align} \mathbf{J}_\mathrm{bound} &= \nabla\times\mathbf{M} + \frac{\partial \mathbf{P}}{\partial t} \\ &=\mathbf{J}_\mathrm{M}+\mathbf{J}_\mathrm{P}, \end{align} $$
where
$$ \mathbf{J}_\mathrm{M} = \nabla\times\mathbf{M} $$
is called the magnetization current density, and
$$ \mathbf{J}_\mathrm{P} = \frac{\partial \mathbf{P}}{\partial t} $$
is the polarization current density.
Taking the equation for $$ \mathbf{B} $$, and using $$ \nabla\times \mathbf{H} = \mathbf{J}_\mathrm{f} + \frac{\partial \mathbf{D}}{\partial t} $$ together with $$ \mathbf{D} = \varepsilon_0 \mathbf{E} + \mathbf{P} $$:
$$ \begin{align} \frac{1}{\mu_0}(\mathbf{\nabla} \times \mathbf{B}) &= \mathbf{\nabla} \times \left( \mathbf {H}+\mathbf{M} \right) \\ &=\mathbf{\nabla} \times \mathbf H + \mathbf{J}_{\mathrm{M}} \\ &= \mathbf{J}_\mathrm{f} + \mathbf{J}_\mathrm{P} +\varepsilon_0 \frac{\partial \mathbf E}{\partial t} + \mathbf{J}_\mathrm{M}. \end{align} $$
Consequently, referring to the definition of the bound current:
$$ \begin{align} \frac{1}{\mu_0}(\mathbf{\nabla} \times \mathbf{B}) &=\mathbf{J}_\mathrm{f}+ \mathbf{J}_\mathrm{bound} + \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} \\ &=\mathbf{J} + \varepsilon_0 \frac{\partial \mathbf E}{\partial t} , \end{align} $$
as was to be shown.
## Ampère's circuital law in cgs units

In cgs units, the integral form of the equation, including Maxwell's correction, reads
$$ \oint_C \mathbf{B} \cdot \mathrm{d}\boldsymbol{l} = \frac{1}{c} \iint_S \left(4\pi\mathbf{J}+\frac{\partial \mathbf{E}}{\partial t}\right) \cdot \mathrm{d}\mathbf{S}, $$
where $$ c $$ is the speed of light. The differential form of the equation (again, including Maxwell's correction) is
$$ \mathbf{\nabla} \times \mathbf{B} = \frac{1}{c}\left(4\pi\mathbf{J}+\frac{\partial \mathbf{E}}{\partial t}\right). $$
Association rule learning is a rule-based machine learning method for discovering interesting relations between variables in large databases. It is intended to identify strong rules discovered in databases using some measures of interestingness. In any given transaction with a variety of items, association rules are meant to discover the rules that determine how or why certain items are connected. Based on the concept of strong rules, Rakesh Agrawal, Tomasz Imieliński and Arun Swami introduced association rules for discovering regularities between products in large-scale transaction data recorded by point-of-sale (POS) systems in supermarkets. For example, the rule $$ \{\mathrm{onions, potatoes}\} \Rightarrow \{\mathrm{burger}\} $$ found in the sales data of a supermarket would indicate that if a customer buys onions and potatoes together, they are likely to also buy hamburger meat. Such information can be used as the basis for decisions about marketing activities such as promotional pricing or product placement. In addition to the above example from market basket analysis, association rules are employed today in many application areas including Web usage mining, intrusion detection, continuous production, and bioinformatics. In contrast with sequence mining, association rule learning typically does not consider the order of items either within a transaction or across transactions.
https://en.wikipedia.org/wiki/Association_rule_learning
Association rule algorithms involve various parameters that can make them difficult to use for those without some expertise in data mining, and they can produce many rules that are arduous to understand.

## Definition

Following the original definition by Agrawal, Imieliński, and Swami, the problem of association rule mining is defined as: Let $$ I=\{i_1, i_2,\ldots,i_n\} $$ be a set of binary attributes called items. Let $$ D = \{t_1, t_2, \ldots, t_m\} $$ be a set of transactions called the database. Each transaction in $$ D $$ has a unique transaction ID and contains a subset of the items in $$ I $$. A rule is defined as an implication of the form $$ X \Rightarrow Y $$, where $$ X, Y \subseteq I $$. In Agrawal, Imieliński, Swami a rule is defined only between a set and a single item, $$ X \Rightarrow i_j $$ for $$ i_j \in I $$.
Every rule is composed of two different sets of items, also known as itemsets, $$ X $$ and $$ Y $$, where $$ X $$ is called the antecedent or left-hand side (LHS) and $$ Y $$ the consequent or right-hand side (RHS). The antecedent is the itemset found in the data, while the consequent is the itemset found in combination with the antecedent. The statement $$ X \Rightarrow Y $$ is often read as if $$ X $$ then $$ Y $$, where the antecedent ($$ X $$) is the if and the consequent ($$ Y $$) is the then. This simply implies that, in theory, whenever $$ X $$ occurs in a dataset, then $$ Y $$ will as well.

## Process

Association rules are made by searching data for frequent if-then patterns and by using the criteria of support and confidence (described below) to define what the most important relationships are. Support indicates how frequently an itemset appears in the given data, while confidence is defined by how many times the if-then statements are found true. However, there is a third criterion that can be used; it is called
lift, and it can be used to compare the expected confidence and the actual confidence. Lift indicates how much more often the antecedent and consequent occur together than would be expected if they were statistically independent. Association rules are calculated from itemsets, which are created by two or more items. If rules were built by analyzing all the possible itemsets in the data, there would be so many rules that they wouldn't have any meaning. That is why association rules are typically made from rules that are well represented by the data. There are many different data mining techniques that can be used to find certain analytics and results, for example, classification analysis, clustering analysis, and regression analysis. Which technique should be used depends on what you are looking for with your data. Association rules are primarily used to analyze and predict customer behavior. Classification analysis would most likely be used to question, make decisions, and predict behavior. Clustering analysis is primarily used when there are no assumptions made about the likely relationships within the data. Regression analysis is used when you want to predict the value of a continuous dependent variable from a number of independent variables.
### Benefits

There are many benefits of using association rules, such as finding patterns that help understand the correlations and co-occurrences between data sets. A very good real-world example that uses association rules is medicine. Medicine uses association rules to help diagnose patients. When diagnosing patients there are many variables to consider, as many diseases share similar symptoms. With the use of association rules, doctors can determine the conditional probability of an illness by comparing symptom relationships from past cases.

### Downsides

However, association rules also have downsides, such as finding the appropriate parameter and threshold settings for the mining algorithm. There is also the downside of having a large number of discovered rules: this does not guarantee that the rules will be found relevant, and it can also cause the algorithm to have low performance. Sometimes the implemented algorithms contain too many variables and parameters. For someone that doesn't have a good grasp of data mining, this might make the results difficult to understand.
### Thresholds

When using association rules, you are most likely to use only support and confidence. However, this means you have to satisfy both a user-specified minimum support and a user-specified minimum confidence at the same time. Usually, association rule generation is split into two separate steps:

1. A minimum support threshold is applied to find all the frequent itemsets in the database.
2. A minimum confidence threshold is applied to the frequent itemsets found, in order to create rules.

Table 1. Example of thresholds for support and confidence. The support threshold is 30% and the confidence threshold is 50%. The table on the left is the original unorganized data and the table on the right is organized by the thresholds.

| Items | Support | Confidence |
|---|---|---|
| Item A | 30% | 50% |
| Item B | 15% | 25% |
| Item C | 45% | 55% |
| Item D | 35% | 40% |

| Items | Support | Confidence |
|---|---|---|
| Item C | 45% | 55% |
| Item A | 30% | 50% |
| Item D | 35% | 40% |
| Item B | 15% | 25% |

In this case Item C exceeds the thresholds for both support and confidence, which is why it is first. Item A is second because its values exactly meet the thresholds. Item D has met the threshold for support but not confidence. Item B has not met the threshold for either support or confidence, and that is why it is last.
To find all the frequent itemsets in a database is not an easy task, since it involves going through all the data to find all possible item combinations from all possible itemsets. The set of possible itemsets is the power set over $$ I $$ and has size $$ 2^n-1 $$ (excluding the empty set, which is not considered a valid itemset). The size of the power set thus grows exponentially in the number of items $$ n $$ in $$ I $$. An efficient search is possible by using the downward-closure property of support (also called anti-monotonicity), which guarantees that all subsets of a frequent itemset are also frequent, and thus that a frequent itemset has no infrequent subsets. Exploiting this property, efficient algorithms (e.g., Apriori and Eclat) can find all frequent itemsets, as in the sketch below.
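The following is a simplified, illustrative sketch (not a full Apriori implementation; the helper names and the minimum-support value are chosen for the example, and the toy transactions mirror the small supermarket example introduced in the next section). It shows how downward closure is used in practice: candidate itemsets of size k+1 are generated only from frequent itemsets of size k, and any candidate with an infrequent subset is pruned before its support is ever counted.

```python
# Simplified Apriori-style frequent-itemset search using the
# downward-closure (anti-monotonicity) property of support.
from itertools import combinations

transactions = [                     # toy data for illustration
    {"milk", "bread", "fruit"},
    {"butter", "eggs", "fruit"},
    {"beer", "diapers"},
    {"milk", "bread", "butter", "eggs", "fruit"},
    {"bread"},
]
min_support = 0.4                    # user-specified minimum support

def support(itemset):
    """Fraction of transactions containing every item of `itemset`."""
    return sum(itemset <= t for t in transactions) / len(transactions)

# Level 1: frequent single items.
items = {i for t in transactions for i in t}
frequent = [{frozenset([i]) for i in items if support(frozenset([i])) >= min_support}]

# Level k+1: join frequent k-itemsets, prune by downward closure, then count.
k = 1
while frequent[-1]:
    candidates = {a | b for a in frequent[-1] for b in frequent[-1] if len(a | b) == k + 1}
    candidates = {
        c for c in candidates
        if all(frozenset(s) in frequent[-1] for s in combinations(c, k))  # pruning step
    }
    frequent.append({c for c in candidates if support(c) >= min_support})
    k += 1

for level in frequent:
    for itemset in level:
        print(sorted(itemset), support(itemset))
```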
## Useful Concepts

Table 2. Example database with 5 transactions and 7 items.

| transaction ID | milk | bread | butter | beer | diapers | eggs | fruit |
|---|---|---|---|---|---|---|---|
| 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1 |
| 2 | 0 | 0 | 1 | 0 | 0 | 1 | 1 |
| 3 | 0 | 0 | 0 | 1 | 1 | 0 | 0 |
| 4 | 1 | 1 | 1 | 0 | 0 | 1 | 1 |
| 5 | 0 | 1 | 0 | 0 | 0 | 0 | 0 |

To illustrate the concepts, we use a small example from the supermarket domain. Table 2 shows a small database containing the items where, in each entry, the value 1 means the presence of the item in the corresponding transaction, and the value 0 represents the absence of an item in that transaction. The set of items is $$ I= \{\mathrm{milk, bread, butter, beer, diapers, eggs, fruit}\} $$. An example rule for the supermarket could be $$ \{\mathrm{butter, bread}\} \Rightarrow \{\mathrm{milk}\} $$, meaning that if butter and bread are bought, customers also buy milk. In order to select interesting rules from the set of all possible rules, constraints on various measures of significance and interest are used. The best-known constraints are minimum thresholds on support and confidence. Let $$ X, Y $$ be itemsets, $$ X \Rightarrow Y $$ an association rule and $$ T $$ a set of transactions of a given database. Note: this example is extremely small.
In practical applications, a rule needs a support of several hundred transactions before it can be considered statistically significant, and datasets often contain thousands or millions of transactions.

### Support

Support is an indication of how frequently the itemset appears in the dataset. In our example, it can be easier to explain support by writing
$$ \text{support} = P(A\cap B)= \frac{\text{number of transactions containing }A\text{ and }B}{\text{total number of transactions}} $$
where A and B are separate itemsets that occur at the same time in a transaction. Using Table 2 as an example, the itemset $$ X=\{\mathrm{beer, diapers}\} $$ has a support of $$ 1/5=0.2 $$ since it occurs in 20% of all transactions (1 out of 5 transactions). The argument of support of X is a set of preconditions, and thus becomes more restrictive as it grows (instead of more inclusive). Furthermore, the itemset $$ Y=\{\mathrm{milk, bread, butter}\} $$ has a support of $$ 1/5=0.2 $$ as it appears in 20% of all transactions as well.
When using antecedents and consequents, it allows a data miner to determine the support of multiple items being bought together in comparison to the whole data set. For example, Table 2 shows that the rule if milk is bought, then bread is bought has a support of 0.4 or 40%. This is because in 2 out of 5 of the transactions, milk as well as bread are bought. In smaller data sets like this example, it is harder to see a strong correlation when there are few samples, but when the data set grows larger, support can be used to find correlations between two or more products in the supermarket example. Minimum support thresholds are useful for determining which itemsets are preferred or interesting. If we set the support threshold to ≥0.4 in Table 3, then the rule $$ \{\mathrm{milk}\} \Rightarrow \{\mathrm{eggs}\} $$ would be removed since it did not meet the minimum threshold of 0.4.
Minimum thresholds are used to remove samples where there is not strong enough support or confidence to deem the sample important or interesting in the dataset. Another way of finding interesting samples is to find the value of (support)×(confidence); this allows a data miner to see the samples where support and confidence are high enough to be highlighted in the dataset and prompt a closer look at the sample to find more information on the connection between the items. Support can be beneficial for finding the connection between products in comparison to the whole dataset, whereas confidence looks at the connection between one or more items and another item. Below is a table that shows the comparison and contrast between support and support × confidence, using the information from Table 4 to derive the confidence values.
Table 3. Example of support, and support × confidence.

| If antecedent, then consequent | Support | Support × confidence |
|---|---|---|
| if buy milk, then buy bread | 2/5 = 0.4 | 0.4×1.0 = 0.4 |
| if buy milk, then buy eggs | 1/5 = 0.2 | 0.2×0.5 = 0.1 |
| if buy bread, then buy fruit | 2/5 = 0.4 | 0.4×0.66 = 0.264 |
| if buy fruit, then buy eggs | 2/5 = 0.4 | 0.4×0.66 = 0.264 |
| if buy milk and bread, then buy fruit | 2/5 = 0.4 | 0.4×1.0 = 0.4 |

The support of $$ X $$ with respect to $$ T $$ is defined as the proportion of transactions in the dataset which contain the itemset $$ X $$. Denoting a transaction by $$ (i,t) $$, where $$ i $$ is the unique identifier of the transaction and $$ t $$ is its itemset, the support may be written as:
$$ \mathrm{support\,of\,X} = \frac{|\{(i,t) \in T : X \subseteq t \}|}{|T|} $$
This notation can be used when defining more complicated datasets where the items and itemsets may not be as easy as our supermarket example above.
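A minimal sketch of this definition in code (the helper name and set encoding are chosen for illustration), using the Table 2 transactions:

```python
# Support computed from the formal definition, using the Table 2 data.
transactions = [
    {"milk", "bread", "fruit"},                    # transaction 1
    {"butter", "eggs", "fruit"},                   # transaction 2
    {"beer", "diapers"},                           # transaction 3
    {"milk", "bread", "butter", "eggs", "fruit"},  # transaction 4
    {"bread"},                                     # transaction 5
]

def support(itemset, transactions):
    """Proportion of transactions t with itemset a subset of t."""
    return sum(set(itemset) <= t for t in transactions) / len(transactions)

print(support({"beer", "diapers"}, transactions))          # 0.2
print(support({"milk", "bread", "butter"}, transactions))  # 0.2
print(support({"milk", "bread"}, transactions))            # 0.4
```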
Other examples of where support can be used include finding groups of genetic mutations that work collectively to cause a disease, investigating the number of subscribers that respond to upgrade offers, and discovering which products in a drug store are never bought together.

### Confidence

Confidence is the percentage of all transactions satisfying $$ X $$ that also satisfy $$ Y $$. With respect to $$ T $$, the confidence value of an association rule, often denoted as $$ X \Rightarrow Y $$, is the ratio of transactions containing both $$ X $$ and $$ Y $$ to the number of transactions containing $$ X $$, where $$ X $$ is the antecedent and $$ Y $$ is the consequent. Confidence can also be interpreted as an estimate of the conditional probability $$ P(E_Y | E_X) $$, the probability of finding the RHS of the rule in transactions under the condition that these transactions also contain the LHS.
It is commonly depicted as:
$$ \mathrm{conf}(X \Rightarrow Y) = P(Y | X) = \frac{\mathrm{supp}(X \cap Y)}{ \mathrm{supp}(X) }=\frac{\text{number of transactions containing }X\text{ and }Y}{\text{number of transactions containing }X} $$
The equation illustrates that confidence can be computed by calculating the co-occurrence of $$ X $$ and $$ Y $$ within the dataset in ratio to transactions containing only $$ X $$. This means that the number of transactions containing both $$ X $$ and $$ Y $$ is divided by the number containing just $$ X $$.
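A minimal sketch of this calculation (helper names chosen for illustration), again over the Table 2 transactions:

```python
# Confidence = supp(X and Y together) / supp(X), on the Table 2 data.
transactions = [
    {"milk", "bread", "fruit"},
    {"butter", "eggs", "fruit"},
    {"beer", "diapers"},
    {"milk", "bread", "butter", "eggs", "fruit"},
    {"bread"},
]

def support(itemset):
    return sum(set(itemset) <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    """Estimated P(consequent | antecedent)."""
    return support(set(antecedent) | set(consequent)) / support(antecedent)

print(confidence({"butter", "bread"}, {"milk"}))  # 1.0
print(confidence({"fruit"}, {"eggs"}))            # 0.666...
```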
For example, Table 2 shows the rule $$ \{\mathrm{butter, bread}\} \Rightarrow \{\mathrm{milk}\} $$, which has a confidence of $$ \frac{1/5}{1/5}=\frac{0.2}{0.2}=1.0 $$ in the dataset, which denotes that every time a customer buys butter and bread, they also buy milk. This particular example demonstrates the rule being correct 100% of the time for transactions containing both butter and bread.
The rule $$ \{\mathrm{fruit}\} \Rightarrow \{\mathrm{eggs}\} $$, however, has a confidence of $$ \frac{2/5}{3/5}=\frac{0.4}{0.6}=0.67 $$. This suggests that eggs are bought 67% of the times that fruit is bought. Within this particular dataset, fruit is purchased a total of 3 times, with two of those times consisting of egg purchases. For larger datasets, a minimum threshold, or a percentage cutoff, for the confidence can be useful for determining item relationships. When applying this method to some of the data in Table 2, information that does not meet the requirements is removed. Table 4 shows association rule examples where the minimum threshold for confidence is 0.5 (50%).
Any data that does not have a confidence of at least 0.5 is omitted. Generating thresholds allows the association between items to become stronger as the data is further researched, by emphasizing those items that co-occur the most. The table uses the confidence information from Table 3 to implement the Support × Confidence column, where the relationship between items is highlighted via both their confidence and support, instead of just one concept. Ranking the rules by Support × Confidence multiplies the confidence of a particular rule by its support, and is often implemented for a more in-depth understanding of the relationship between the items.

Table 4. Example of confidence, and support × confidence.

| If antecedent, then consequent | Confidence | Support × confidence |
|---|---|---|
| if buy milk, then buy bread | 2/2 = 1.0 | 0.4×1.0 = 0.4 |
| if buy milk, then buy eggs | 1/2 = 0.5 | 0.2×0.5 = 0.1 |
| if buy bread, then buy fruit | 2/3 ≈ 0.66 | 0.4×0.66 = 0.264 |
| if buy fruit, then buy eggs | 2/3 ≈ 0.66 | 0.4×0.66 = 0.264 |
| if buy milk and bread, then buy fruit | 2/2 = 1.0 | 0.4×1.0 = 0.4 |

Overall, using confidence in association rule mining is a great way to bring awareness to data relations.
Its greatest benefit is highlighting the relationship between particular items within the set, as it compares co-occurrences of items to the total occurrence of the antecedent in the specific rule. However, confidence is not the optimal method for every concept in association rule mining. The disadvantage of using it is that it does not offer multiple different outlooks on the associations. Unlike support, for instance, confidence does not provide the perspective of relationships between certain items in comparison to the entire dataset; so while milk and bread, for example, may occur together 100% of the time for confidence, the rule only has a support of 0.4 (40%). This is why it is important to look at other viewpoints, such as support × confidence, instead of relying solely on one concept to define the relationships.
### Lift

The lift of a rule is defined as:
$$ \mathrm{lift}(X\Rightarrow Y) = \frac{ \mathrm{supp}(X \cup Y)}{ \mathrm{supp}(X) \times \mathrm{supp}(Y) } $$
or the ratio of the observed support to that expected if X and Y were independent. For example, the rule $$ \{\mathrm{milk, bread}\} \Rightarrow \{\mathrm{butter}\} $$ has a lift of $$ \frac{0.2}{0.4 \times 0.4} = 1.25 $$. If the rule had a lift of 1, it would imply that the probability of occurrence of the antecedent and that of the consequent are independent of each other. When two events are independent of each other, no rule can be drawn involving those two events.
https://en.wikipedia.org/wiki/Association_rule_learning
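A lift computation for the figures quoted above can be sketched in a couple of lines; the support values (0.2, 0.4 and 0.4) are the ones from the {milk, bread} ⇒ {butter} example, and the helper function is purely illustrative.

```python
def lift(supp_xy: float, supp_x: float, supp_y: float) -> float:
    """Observed co-occurrence divided by the co-occurrence expected under independence."""
    return supp_xy / (supp_x * supp_y)

# Values from the {milk, bread} => {butter} example above.
print(lift(supp_xy=0.2, supp_x=0.4, supp_y=0.4))   # 1.25: X and Y appear together
                                                   # 25% more often than independence predicts
```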
If the rule had a lift of 1, it would imply that the probability of occurrence of the antecedent and that of the consequent are independent of each other. When two events are independent of each other, no rule can be drawn involving those two events. If the lift is > 1, that lets us know the degree to which those two occurrences are dependent on one another, and makes those rules potentially useful for predicting the consequent in future data sets. If the lift is < 1, that lets us know the items are substitutes for each other: the presence of one item has a negative effect on the presence of the other item, and vice versa. The value of lift is that it considers both the support of the rule and the overall data set. ### Conviction The conviction of a rule is defined as $$ \mathrm{conv}(X\Rightarrow Y) =\frac{ 1 - \mathrm{supp}(Y) }{ 1 - \mathrm{conf}(X\Rightarrow Y)} $$ .
https://en.wikipedia.org/wiki/Association_rule_learning
[rede] ### Conviction The conviction of a rule is defined as $$ \mathrm{conv}(X\Rightarrow Y) =\frac{ 1 - \mathrm{supp}(Y) }{ 1 - \mathrm{conf}(X\Rightarrow Y)} $$ . For example, the rule $$ \{\mathrm{milk, bread}\} \Rightarrow \{\mathrm{butter}\} $$ has a conviction of $$ \frac{1 - 0.4}{1 - 0.5} = 1.2 $$ , and can be interpreted as the ratio of the expected frequency that X occurs without Y (that is to say, the frequency that the rule makes an incorrect prediction) if X and Y were independent divided by the observed frequency of incorrect predictions. In this example, the conviction value of 1.2 shows that the rule $$ \{\mathrm{milk, bread}\} \Rightarrow \{\mathrm{butter}\} $$ would be incorrect 20% more often (1.2 times as often) if the association between X and Y was purely random chance. ### Alternative measures of interestingness In addition to confidence, other measures of interestingness for rules have been proposed.
https://en.wikipedia.org/wiki/Association_rule_learning
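The conviction value of 1.2 can be reproduced the same way; this short sketch simply plugs in the supports already given (supp({butter}) = 0.4 and conf({milk, bread} ⇒ {butter}) = 0.5) and is purely illustrative.

```python
def conviction(supp_y: float, conf_xy: float) -> float:
    """(1 - supp(Y)) / (1 - conf(X => Y)); grows without bound for rules that always hold."""
    return (1 - supp_y) / (1 - conf_xy)

# Values from the {milk, bread} => {butter} example above.
print(conviction(supp_y=0.4, conf_xy=0.5))   # 1.2
```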
In this example, the conviction value of 1.2 shows that the rule $$ \{\mathrm{milk, bread}\} \Rightarrow \{\mathrm{butter}\} $$ would be incorrect 20% more often (1.2 times as often) if the association between X and Y was purely random chance. ### Alternative measures of interestingness In addition to confidence, other measures of interestingness for rules have been proposed. Some popular measures are: - All-confidence - Collective strength - Leverage Several more measures are presented and compared by Tan et al. and by Hahsler. Looking for techniques that can model what the user has known (and using these models as interestingness measures) is currently an active research trend under the name of "Subjective Interestingness." ## History The concept of association rules was popularized particularly due to the 1993 article of Agrawal et al., which has acquired more than 23,790 citations according to Google Scholar, as of April 2021, and is thus one of the most cited papers in the Data Mining field. However, what is now called "association rules" was already introduced in the 1966 paper on GUHA, a general data mining method developed by Petr Hájek et al.
https://en.wikipedia.org/wiki/Association_rule_learning
## History The concept of association rules was popularized particularly due to the 1993 article of Agrawal et al., which has acquired more than 23,790 citations according to Google Scholar, as of April 2021, and is thus one of the most cited papers in the Data Mining field. However, what is now called "association rules" was already introduced in the 1966 paper on GUHA, a general data mining method developed by Petr Hájek et al. An early (circa 1989) use of minimum support and confidence to find all association rules is the Feature Based Modeling framework, which found all rules with $$ \mathrm{supp}(X) $$ and $$ \mathrm{conf}(X \Rightarrow Y) $$ greater than user-defined constraints. ## Statistically sound associations One limitation of the standard approach to discovering associations is that by searching massive numbers of possible associations to look for collections of items that appear to be associated, there is a large risk of finding many spurious associations. These are collections of items that co-occur with unexpected frequency in the data, but only do so by chance. For example, suppose we are considering a collection of 10,000 items and looking for rules containing two items on the left-hand side and one item on the right-hand side. There are approximately 1,000,000,000,000 such rules.
https://en.wikipedia.org/wiki/Association_rule_learning
For example, suppose we are considering a collection of 10,000 items and looking for rules containing two items on the left-hand side and one item on the right-hand side. There are approximately 1,000,000,000,000 such rules. If we apply a statistical test for independence with a significance level of 0.05, it means there is only a 5% chance of accepting a rule if there is no association. If we assume there are no associations, we should nonetheless expect to find 50,000,000,000 rules. Statistically sound association discovery controls this risk, in most cases reducing the risk of finding any spurious associations to a user-specified significance level. ## Algorithms Many algorithms for generating association rules have been proposed. Some well-known algorithms are Apriori, Eclat and FP-Growth, but they only do half the job, since they are algorithms for mining frequent itemsets. A further step is needed afterwards to generate rules from the frequent itemsets found in a database. ### Apriori algorithm Apriori was introduced by R. Agrawal and R. Srikant in 1994 for frequent item set mining and association rule learning. It proceeds by identifying the frequent individual items in the database and extending them to larger and larger item sets as long as those item sets appear sufficiently often. The name of the algorithm is Apriori because it uses prior knowledge of frequent itemset properties.
https://en.wikipedia.org/wiki/Association_rule_learning
It proceeds by identifying the frequent individual items in the database and extending them to larger and larger item sets as long as those item sets appear sufficiently often. The name of the algorithm is Apriori because it uses prior knowledge of frequent itemset properties. Overview: Apriori uses a "bottom up" approach, where frequent subsets are extended one item at a time (a step known as candidate generation), and groups of candidates are tested against the data. The algorithm terminates when no further successful extensions are found. Apriori uses breadth-first search and a Hash tree structure to count candidate item sets efficiently. It generates candidate item sets of length k from item sets of length k − 1. Then it prunes the candidates which have an infrequent sub-pattern. According to the downward closure lemma, the candidate set contains all frequent k-length item sets. After that, it scans the transaction database to determine frequent item sets among the candidates. Example: Assume that each row is a cancer sample with a certain combination of mutations labeled by a character in the alphabet. For example, a row could have {a, c}, which means it is affected by mutation 'a' and mutation 'c'.
https://en.wikipedia.org/wiki/Association_rule_learning
Example: Assume that each row is a cancer sample with a certain combination of mutations labeled by a character in the alphabet. For example, a row could have {a, c}, which means it is affected by mutation 'a' and mutation 'c'.

| Input Set |
|---|
| {a, b} |
| {c, d} |
| {a, d} |
| {a, e} |
| {b, d} |
| {a, b, d} |
| {a, c, d} |
| {a, b, c, d} |

Now we will generate the frequent item set by counting the number of occurrences of each character. This is also known as finding the support values. Then we will prune the item set by picking a minimum support threshold. For this pass of the algorithm we will pick 3.

| Item | a | b | c | d |
|---|---|---|---|---|
| Support | 6 | 4 | 3 | 6 |

Since all support values are three or above there is no pruning. The frequent item set is {a}, {b}, {c}, and {d}. After this we will repeat the process by counting pairs of mutations in the input set.

| Itemset | {a, b} | {a, c} | {a, d} | {b, c} | {b, d} | {c, d} |
|---|---|---|---|---|---|---|
| Support | 3 | 2 | 4 | 1 | 3 | 3 |

Now we will make our minimum support value 4, so only {a, d} will remain after pruning. Now we will use the frequent item set to make combinations of triplets.
https://en.wikipedia.org/wiki/Association_rule_learning
| Itemset | {a, b} | {a, c} | {a, d} | {b, c} | {b, d} | {c, d} |
|---|---|---|---|---|---|---|
| Support | 3 | 2 | 4 | 1 | 3 | 3 |

Now we will make our minimum support value 4, so only {a, d} will remain after pruning. Now we will use the frequent item set to make combinations of triplets. We will then repeat the process by counting occurrences of triplets of mutations in the input set.

| Itemset | {a, c, d} |
|---|---|
| Support | 2 |

Since we only have one item, the next set of combinations of quadruplets is empty, so the algorithm will stop. Advantages and Limitations: Apriori has some limitations. Candidate generation can result in large candidate sets. For example, 10^4 frequent 1-itemsets will generate roughly 10^7 candidate 2-itemsets. The algorithm also needs to scan the database frequently, to be specific n+1 scans, where n is the length of the longest pattern. Apriori is slower than the Eclat algorithm. However, Apriori performs well compared to Eclat when the dataset is large. This is because in the Eclat algorithm, if the dataset is too large, the tid-lists become too large for memory. FP-growth outperforms both Apriori and Eclat. This is due to the
https://en.wikipedia.org/wiki/Association_rule_learning
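The worked example above can be reproduced with a few lines of Python. This is a minimal sketch rather than a full Apriori implementation (it omits the hash tree and the systematic candidate-join step); the transaction list and the thresholds (3 for single items, then 4 for pairs) are taken directly from the example.

```python
from itertools import combinations

# The eight samples from the example, each labelled by its mutations.
transactions = [
    {'a', 'b'}, {'c', 'd'}, {'a', 'd'}, {'a', 'e'},
    {'b', 'd'}, {'a', 'b', 'd'}, {'a', 'c', 'd'}, {'a', 'b', 'c', 'd'},
]

def support_count(itemset):
    """Number of transactions containing every item of `itemset`."""
    return sum(1 for t in transactions if set(itemset) <= t)

# Pass 1: count single items and keep those meeting the threshold of 3.
items = sorted({i for t in transactions for i in t})
counts_1 = {i: support_count({i}) for i in items}
frequent_1 = [i for i, n in counts_1.items() if n >= 3]
print(counts_1)        # a: 6, b: 4, c: 3, d: 6 (e occurs only once and is not kept)

# Pass 2: count candidate pairs of frequent items, then prune with threshold 4.
counts_2 = {pair: support_count(pair) for pair in combinations(frequent_1, 2)}
frequent_2 = [p for p, n in counts_2.items() if n >= 4]
print(counts_2)        # (a,b): 3, (a,c): 2, (a,d): 4, (b,c): 1, (b,d): 3, (c,d): 3
print(frequent_2)      # only ('a', 'd') survives

# Pass 3: of the triplets extending {a, d}, {a, c, d} occurs twice, matching the
# example; no quadruplet candidates remain, so the process stops.
print(support_count({'a', 'c', 'd'}))   # 2
```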
FP-growth outperforms both Apriori and Eclat. This is due to the FP-growth algorithm not having candidate generation or testing, using a compact data structure, and only requiring one database scan. ### Eclat algorithm Eclat (alt. ECLAT, stands for Equivalence Class Transformation) is a backtracking algorithm, which traverses the frequent itemset lattice graph in a depth-first search (DFS) fashion. Whereas the breadth-first search (BFS) traversal used in the Apriori algorithm will end up checking every subset of an itemset before checking it, DFS traversal checks larger itemsets and can save on checking the support of some of its subsets by virtue of the downward-closure property. Furthermore, it will almost certainly use less memory, as DFS has a lower space complexity than BFS. To illustrate this, let there be a frequent itemset {a, b, c}. A DFS may check the nodes in the frequent itemset lattice in the following order: {a} → {a, b} → {a, b, c}, at which point it is known that {b}, {c}, {a, c}, {b, c} all satisfy the support constraint by the downward-closure property. BFS would explore each subset of {a, b, c} before finally checking it.
https://en.wikipedia.org/wiki/Association_rule_learning
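A minimal sketch of the Eclat idea, using the same eight transactions as the Apriori example above: each item is mapped to its tid-list (the set of transaction ids that contain it), and the depth-first search extends an itemset by intersecting tid-lists, pruning as soon as an intersection falls below the minimum support. The function and variable names are illustrative, not taken from any particular library.

```python
transactions = [
    {'a', 'b'}, {'c', 'd'}, {'a', 'd'}, {'a', 'e'},
    {'b', 'd'}, {'a', 'b', 'd'}, {'a', 'c', 'd'}, {'a', 'b', 'c', 'd'},
]
min_support = 3

# Vertical representation: item -> set of transaction ids (tid-list).
tidlists = {}
for tid, t in enumerate(transactions):
    for item in t:
        tidlists.setdefault(item, set()).add(tid)

frequent = {}

def eclat(prefix, candidates):
    """DFS over the itemset lattice; `candidates` maps items to tid-lists
    conditioned on `prefix` already being present."""
    items = sorted(candidates)
    for i, item in enumerate(items):
        tids = candidates[item]
        if len(tids) < min_support:
            continue                                  # prune: no superset can be frequent
        itemset = prefix + (item,)
        frequent[itemset] = len(tids)
        # Extend with later items only, intersecting tid-lists instead of rescanning the database.
        eclat(itemset, {other: tids & candidates[other] for other in items[i + 1:]})

eclat((), tidlists)
print(frequent)   # e.g. ('a',): 6, ('a', 'b'): 3, ('a', 'd'): 4, ('b', 'd'): 3, ('c', 'd'): 3, ...
```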
A DFS may check the nodes in the frequent itemset lattice in the following order: {a} → {a, b} → {a, b, c}, at which point it is known that {b}, {c}, {a, c}, {b, c} all satisfy the support constraint by the downward-closure property. BFS would explore each subset of {a, b, c} before finally checking it. As the size of an itemset increases, the number of its subsets undergoes combinatorial explosion. It is suitable for both sequential and parallel execution, with locality-enhancing properties. ### FP-growth algorithm FP stands for frequent pattern. In the first pass, the algorithm counts the occurrences of items (attribute-value pairs) in the dataset of transactions, and stores these counts in a 'header table'. In the second pass, it builds the FP-tree structure by inserting transactions into a trie. Items in each transaction have to be sorted by descending order of their frequency in the dataset before being inserted so that the tree can be processed quickly. Items in each transaction that do not meet the minimum support requirement are discarded. If many transactions share the most frequent items, the FP-tree provides high compression close to the tree root.
https://en.wikipedia.org/wiki/Association_rule_learning
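The two passes described above can be sketched directly. The code below builds only the header counts and the prefix tree (trie) on the same illustrative transactions as before; it deliberately leaves out the node-links and the recursive mining of conditional trees described next.

```python
from collections import Counter

transactions = [
    ['a', 'b'], ['c', 'd'], ['a', 'd'], ['a', 'e'],
    ['b', 'd'], ['a', 'b', 'd'], ['a', 'c', 'd'], ['a', 'b', 'c', 'd'],
]
min_support = 3

# Pass 1: count item occurrences (the 'header table' counts).
header = Counter(item for t in transactions for item in t)

class Node:
    def __init__(self, item, parent):
        self.item, self.parent, self.count, self.children = item, parent, 0, {}

root = Node(None, None)

# Pass 2: insert each transaction, filtered and sorted by descending frequency,
# into the trie; shared prefixes are compressed into shared paths.
for t in transactions:
    kept = [i for i in t if header[i] >= min_support]
    kept.sort(key=lambda i: (-header[i], i))      # descending frequency, ties broken alphabetically
    node = root
    for item in kept:
        node = node.children.setdefault(item, Node(item, node))
        node.count += 1

def dump(node, depth=0):
    """Print the tree, one node per line, indented by depth."""
    for child in node.children.values():
        print('  ' * depth + f'{child.item}:{child.count}')
        dump(child, depth + 1)

dump(root)   # frequent items such as 'a' sit closest to the root, giving the compression described above
```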
Items in each transaction that do not meet the minimum support requirement are discarded. If many transactions share the most frequent items, the FP-tree provides high compression close to the tree root. Recursive processing of this compressed version of the main dataset grows frequent item sets directly, instead of generating candidate items and testing them against the entire database (as in the Apriori algorithm). Growth begins from the bottom of the header table, i.e. the item with the smallest support, by finding all sorted transactions that end in that item. Call this item $$ I $$ . A new conditional tree is created which is the original FP-tree projected onto $$ I $$ . The supports of all nodes in the projected tree are re-counted, with each node getting the sum of its children's counts. Nodes (and hence subtrees) that do not meet the minimum support are pruned. Recursive growth ends when no individual items conditional on $$ I $$ meet the minimum support threshold. The resulting paths from the root to $$ I $$ will be frequent itemsets. After this step, processing continues with the next least-supported header item of the original FP-tree. Once the recursive process has completed, all frequent item sets will have been found, and association rule creation begins. ### Others
https://en.wikipedia.org/wiki/Association_rule_learning
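In practice these miners are rarely written by hand. As an illustration only, the sketch below uses the third-party mlxtend library together with pandas (treat the exact API as an assumption, since it is not described in this article) to mine frequent itemsets with FP-Growth and then run the separate rule-generation step mentioned earlier.

```python
# pip install mlxtend pandas   (third-party packages; their API is assumed here)
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import fpgrowth, association_rules

transactions = [
    ['milk', 'bread', 'fruit', 'eggs'],
    ['milk', 'bread', 'fruit'],
    ['bread'],
    ['fruit', 'eggs'],
    ['butter'],
]

# One-hot encode the transactions into a boolean DataFrame.
te = TransactionEncoder()
onehot = pd.DataFrame(te.fit(transactions).transform(transactions), columns=te.columns_)

# Mine frequent itemsets first, then generate rules from them in a second step.
itemsets = fpgrowth(onehot, min_support=0.4, use_colnames=True)
rules = association_rules(itemsets, metric="confidence", min_threshold=0.5)
print(rules[["antecedents", "consequents", "support", "confidence", "lift"]])
```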
Once the recursive process has completed, all frequent item sets will have been found, and association rule creation begins. ### Others #### ASSOC The ASSOC procedure is a GUHA method which mines for generalized association rules using fast bitstring operations. The association rules mined by this method are more general than those output by Apriori; for example, "items" can be connected with both conjunctions and disjunctions, and the relation between the antecedent and consequent of the rule is not restricted to setting minimum support and confidence as in Apriori: an arbitrary combination of supported interest measures can be used. #### OPUS search OPUS is an efficient algorithm for rule discovery that, in contrast to most alternatives, does not require either monotone or anti-monotone constraints such as minimum support. Initially used to find rules for a fixed consequent, it has subsequently been extended to find rules with any item as a consequent. OPUS search is the core technology in the popular Magnum Opus association discovery system. ## Lore A famous story about association rule mining is the "beer and diaper" story. A purported survey of the behavior of supermarket shoppers discovered that customers (presumably young men) who buy diapers tend also to buy beer. This anecdote became popular as an example of how unexpected association rules might be found from everyday data. There are varying opinions as to how much of the story is true. Daniel Powers says:
https://en.wikipedia.org/wiki/Association_rule_learning
There are varying opinions as to how much of the story is true. Daniel Powers says: In 1992, Thomas Blischok, manager of a retail consulting group at Teradata, and his staff prepared an analysis of 1.2 million market baskets from about 25 Osco Drug stores. Database queries were developed to identify affinities. The analysis "did discover that between 5:00 and 7:00 p.m. that consumers bought beer and diapers". Osco managers did NOT exploit the beer and diapers relationship by moving the products closer together on the shelves. ## Other types of association rule mining Multi-Relation Association Rules (MRAR): These are association rules where each item may have several relations. These relations indicate indirect relationships between the entities. Consider the following MRAR where the first item consists of three relations live in, nearby and humid: “Those who live in a place which is nearby a city with humid climate type and also are younger than 20 $$ \implies $$ their health condition is good”. Such association rules can be extracted from RDBMS data or semantic web data. Contrast set learning is a form of associative learning. Contrast set learners use rules that differ meaningfully in their distribution across subsets. Weighted class learning is another form of associative learning where weights may be assigned to classes to give focus to a particular issue of concern for the consumer of the data mining results.
https://en.wikipedia.org/wiki/Association_rule_learning
Contrast set learners use rules that differ meaningfully in their distribution across subsets. Weighted class learning is another form of associative learning where weights may be assigned to classes to give focus to a particular issue of concern for the consumer of the data mining results. High-order pattern discovery facilitates the capture of high-order (polythetic) patterns or event associations that are intrinsic to complex real-world data. K-optimal pattern discovery provides an alternative to the standard approach to association rule learning, which requires that each pattern appear frequently in the data. Approximate Frequent Itemset mining is a relaxed version of Frequent Itemset mining that allows some of the items in some of the rows to be 0. - Generalized Association Rules: hierarchical taxonomy (concept hierarchy) - Quantitative Association Rules: categorical and quantitative data - Interval Data Association Rules: e.g. partition the age into 5-year-increment ranges Sequential pattern mining discovers subsequences that are common to more than minsup (minimum support threshold) sequences in a sequence database, where minsup is set by the user. A sequence is an ordered list of transactions. Subspace clustering, a specific type of clustering of high-dimensional data, is in many variants also based on the downward-closure property for specific clustering models.
https://en.wikipedia.org/wiki/Association_rule_learning
Electronics is a scientific and engineering discipline that studies and applies the principles of physics to design, create, and operate devices that manipulate electrons and other electrically charged particles. It is a subfield of physics and electrical engineering which uses active devices such as transistors, diodes, and integrated circuits to control and amplify the flow of electric current and to convert it from one form to another, such as from alternating current (AC) to direct current (DC) or from analog signals to digital signals. Electronic devices have significantly influenced the development of many aspects of modern society, such as telecommunications, entertainment, education, health care, industry, and security. The main driving force behind the advancement of electronics is the semiconductor industry, which continually produces ever-more sophisticated electronic devices and circuits in response to global demand. The semiconductor industry is one of the global economy's largest and most profitable sectors, with annual revenues exceeding $481 billion in 2018. The electronics industry also encompasses other sectors that rely on electronic devices and systems, such as e-commerce, which generated over $29 trillion in online sales in 2017.
https://en.wikipedia.org/wiki/Electronics
The semiconductor industry is one of the global economy's largest and most profitable sectors, with annual revenues exceeding $481 billion in 2018. The electronics industry also encompasses other sectors that rely on electronic devices and systems, such as e-commerce, which generated over $29 trillion in online sales in 2017. ## History and development Karl Ferdinand Braun's development of the crystal detector, the first semiconductor device, in 1874 and the identification of the electron in 1897 by Sir Joseph John Thomson, along with the subsequent invention of the vacuum tube which could amplify and rectify small electrical signals, inaugurated the field of electronics and the electron age. Practical applications started with the invention of the diode by Ambrose Fleming and the triode by Lee De Forest in the early 1900s, which made the detection of small electrical voltages, such as radio signals from a radio antenna, practicable. Vacuum tubes (thermionic valves) were the first active electronic components which controlled current flow by influencing the flow of individual electrons, and enabled the construction of equipment that used current amplification and rectification to give us radio, television, radar, long-distance telephony and much more. The early growth of electronics was rapid, and by the 1920s, commercial radio broadcasting and telecommunications were becoming widespread and electronic amplifiers were being used in such diverse applications as long-distance telephony and the music recording industry.
https://en.wikipedia.org/wiki/Electronics
Vacuum tubes (thermionic valves) were the first active electronic components which controlled current flow by influencing the flow of individual electrons, and enabled the construction of equipment that used current amplification and rectification to give us radio, television, radar, long-distance telephony and much more. The early growth of electronics was rapid, and by the 1920s, commercial radio broadcasting and telecommunications were becoming widespread and electronic amplifiers were being used in such diverse applications as long-distance telephony and the music recording industry. The next big technological step took several decades to appear, when the first working point-contact transistor was invented by John Bardeen and Walter Houser Brattain at Bell Labs in 1947. However, vacuum tubes continued to play a leading role in the field of microwave and high power transmission as well as television receivers until the middle of the 1980s. Since then, solid-state devices have all but completely taken over. Vacuum tubes are still used in some specialist applications such as high power RF amplifiers, cathode-ray tubes, specialist audio equipment, guitar amplifiers and some microwave devices. In April 1955, the IBM 608 was the first IBM product to use transistor circuits without any vacuum tubes and is believed to be the first all-transistorized calculator to be manufactured for the commercial market. The 608 contained more than 3,000 germanium transistors.
https://en.wikipedia.org/wiki/Electronics
In April 1955, the IBM 608 was the first IBM product to use transistor circuits without any vacuum tubes and is believed to be the first all-transistorized calculator to be manufactured for the commercial market. The 608 contained more than 3,000 germanium transistors. Thomas J. Watson Jr. ordered all future IBM products to use transistors in their design. From that time on transistors were almost exclusively used for computer logic circuits and peripheral devices. However, early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, which limited them to a number of specialised applications. The MOSFET was invented at Bell Labs between 1955 and 1960. It was the first truly compact transistor that could be miniaturised and mass-produced for a wide range of uses. Its advantages include high scalability, affordability, low power consumption, and high density. It revolutionized the electronics industry, becoming the most widely used electronic device in the world. The MOSFET is the basic element in most modern electronic equipment. As the complexity of circuits grew, problems arose. One problem was the size of the circuit. A complex circuit like a computer was dependent on speed. If the components were large, the wires interconnecting them must be long. The electric signals took time to go through the circuit, thus slowing the computer.
https://en.wikipedia.org/wiki/Electronics
If the components were large, the wires interconnecting them must be long. The electric signals took time to go through the circuit, thus slowing the computer. The invention of the integrated circuit by Jack Kilby and Robert Noyce solved this problem by making all the components and the chip out of the same block (monolith) of semiconductor material. The circuits could be made smaller, and the manufacturing process could be automated. This led to the idea of integrating all components on a single-crystal silicon wafer, which led to small-scale integration (SSI) in the early 1960s, and then medium-scale integration (MSI) in the late 1960s, followed by VLSI. In 2008, billion-transistor processors became commercially available. ## Subfields - Analog electronics - Audio electronics - Avionics - Bioelectronics - Circuit design - Digital electronics - Electronic components - Embedded systems - Integrated circuits - Microelectronics - Nanoelectronics - Optoelectronics - Power electronics - Printed circuit boards - Semiconductor devices - Sensors - Telecommunications ## Devices and components An electronic component is any component in an electronic system either active or passive. Components are connected together, usually by being soldered to a printed circuit board (PCB), to create an electronic circuit with a particular function.
https://en.wikipedia.org/wiki/Electronics
## Devices and components An electronic component is any component in an electronic system, either active or passive. Components are connected together, usually by being soldered to a printed circuit board (PCB), to create an electronic circuit with a particular function. Components may be packaged singly, or in more complex groups as integrated circuits. Passive electronic components include capacitors, inductors and resistors, while active components are semiconductor devices such as transistors and thyristors, which control current flow at the electron level. ## Types of circuits Electronic circuit functions can be divided into two function groups: analog and digital. A particular device may consist of circuitry that has either or a mix of the two types. Analog circuits are becoming less common, as many of their functions are being digitized. ### Analog circuits Analog circuits use a continuous range of voltage or current for signal processing, as opposed to the discrete levels used in digital circuits. Analog circuits were common throughout an electronic device in the early years, in devices such as radio receivers and transmitters. Analog electronic computers were valuable for solving problems with continuous variables until digital processing advanced.
https://en.wikipedia.org/wiki/Electronics
Analog circuits were common throughout an electronic device in the early years, in devices such as radio receivers and transmitters. Analog electronic computers were valuable for solving problems with continuous variables until digital processing advanced. As semiconductor technology developed, many of the functions of analog circuits were taken over by digital circuits, and modern circuits that are entirely analog are less common; their functions are being replaced by a hybrid approach which, for instance, uses analog circuits at the front end of a device receiving an analog signal and then uses digital processing with microprocessor techniques thereafter. Sometimes it may be difficult to classify some circuits that have elements of both linear and non-linear operation. An example is the voltage comparator which receives a continuous range of voltage but only outputs one of two levels as in a digital circuit. Similarly, an overdriven transistor amplifier can take on the characteristics of a controlled switch, having essentially two levels of output. Analog circuits are still widely used for signal amplification, such as in the entertainment industry, and conditioning signals from analog sensors, such as in industrial measurement and control. ### Digital circuits Digital circuits are electric circuits based on discrete voltage levels. Digital circuits use Boolean algebra and are the basis of all digital computers and microprocessor devices. They range from simple logic gates to large integrated circuits, employing millions of such gates.
https://en.wikipedia.org/wiki/Electronics
Digital circuits use Boolean algebra and are the basis of all digital computers and microprocessor devices. They range from simple logic gates to large integrated circuits, employing millions of such gates. Digital circuits use a binary system with two voltage levels labelled "0" and "1" to indicate logical status. Often logic "0" will be a lower voltage and referred to as "Low" while logic "1" is referred to as "High". However, some systems use the reverse definition ("0" is "High") or are current based. Quite often the logic designer may reverse these definitions from one circuit to the next as they see fit to facilitate their design. The definition of the levels as "0" or "1" is arbitrary. Ternary (three-state) logic has been studied, and some prototype computers have been made, but it has not gained any significant practical acceptance. Universally, computers and digital signal processors are constructed with digital circuits using transistors such as MOSFETs in the electronic logic gates to generate binary states.
https://en.wikipedia.org/wiki/Electronics
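As a small, purely illustrative sketch of the Boolean-algebra view described above (not tied to any particular hardware or logic family), the two levels can be modelled as the truth values 0 and 1 and combined with elementary gate functions:

```python
from itertools import product

# Model logic "Low"/"High" as the binary values 0/1 and a few elementary
# gates as Boolean functions of those values.
def NOT(a):     return 1 - a
def AND(a, b):  return a & b
def OR(a, b):   return a | b
def NAND(a, b): return NOT(AND(a, b))
def XOR(a, b):  return a ^ b

# Combined truth table for the two-input gates.
print("a b | AND OR NAND XOR")
for a, b in product((0, 1), repeat=2):
    print(f"{a} {b} |  {AND(a, b)}   {OR(a, b)}   {NAND(a, b)}    {XOR(a, b)}")
```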
Ternary (three-state) logic has been studied, and some prototype computers have been made, but it has not gained any significant practical acceptance. Universally, computers and digital signal processors are constructed with digital circuits using transistors such as MOSFETs in the electronic logic gates to generate binary states. - Logic gates - Adders - Flip-flops - Counters - Registers - Multiplexers - Schmitt triggers Highly integrated devices: - Memory chip - Microprocessors - Microcontrollers - Application-specific integrated circuit (ASIC) - Digital signal processor (DSP) - Field-programmable gate array (FPGA) - Field-programmable analog array (FPAA) - System on chip (SOC) ## Design Electronic systems design deals with the multi-disciplinary design issues of complex electronic devices and systems, such as mobile phones and computers. The subject covers a broad spectrum, from the design and development of an electronic system (new product development) to assuring its proper function, service life and disposal. Electronic systems design is therefore the process of defining and developing complex electronic devices to satisfy specified requirements of the user. Due to the complex nature of electronics theory, laboratory experimentation is an important part of the development of electronic devices. These experiments are used to test or verify the engineer's design and detect errors.
https://en.wikipedia.org/wiki/Electronics
Due to the complex nature of electronics theory, laboratory experimentation is an important part of the development of electronic devices. These experiments are used to test or verify the engineer's design and detect errors. Historically, electronics labs have consisted of electronics devices and equipment located in a physical space, although in more recent years the trend has been towards electronics lab simulation software, such as CircuitLogix, Multisim, and PSpice. ### Computer-aided design Today's electronics engineers have the ability to design circuits using premanufactured building blocks such as power supplies, semiconductors (i.e. semiconductor devices, such as transistors), and integrated circuits. Electronic design automation software programs include schematic capture programs and printed circuit board design programs. Popular names in the EDA software world are NI Multisim, Cadence (ORCAD), EAGLE PCB and Schematic, Mentor (PADS PCB and LOGIC Schematic), Altium (Protel), LabCentre Electronics (Proteus), gEDA, KiCad and many others. ## Negative qualities ### Thermal management Heat generated by electronic circuitry must be dissipated to prevent immediate failure and improve long term reliability. Heat dissipation is mostly achieved by passive conduction/convection.
https://en.wikipedia.org/wiki/Electronics
### Thermal management Heat generated by electronic circuitry must be dissipated to prevent immediate failure and improve long term reliability. Heat dissipation is mostly achieved by passive conduction/convection. Means to achieve greater dissipation include heat sinks and fans for air cooling, and other forms of computer cooling such as water cooling. These techniques use convection, conduction, and radiation of heat energy. ### Noise Electronic noise is defined as unwanted disturbances superposed on a useful signal that tend to obscure its information content. Noise is not the same as signal distortion caused by a circuit. Noise is associated with all electronic circuits. Noise may be electromagnetically or thermally generated, which can be decreased by lowering the operating temperature of the circuit. Other types of noise, such as shot noise cannot be removed as they are due to limitations in physical properties. ## Packaging methods Many different methods of connecting components have been used over the years. For instance, early electronics often used point to point wiring with components attached to wooden breadboards to construct circuits. Cordwood construction and wire wrap were other methods used.
https://en.wikipedia.org/wiki/Electronics
For instance, early electronics often used point to point wiring with components attached to wooden breadboards to construct circuits. Cordwood construction and wire wrap were other methods used. Most modern-day electronics now use printed circuit boards made of materials such as FR4, or the cheaper (and less hard-wearing) Synthetic Resin Bonded Paper (SRBP, also known as Paxoline/Paxolin (trade marks) and FR2) – characterised by its brown colour. Health and environmental concerns associated with electronics assembly have gained increased attention in recent years, especially for products destined to go to European markets. Electrical components are generally mounted in the following ways: - Through-hole (sometimes referred to as 'Pin-Through-Hole') - Surface mount - Chassis mount - Rack mount - LGA/BGA/PGA socket ## Industry The electronics industry consists of various sectors. The central driving force behind the entire electronics industry is the semiconductor industry sector, which has annual sales of over $481 billion as of 2018. The largest industry sector is e-commerce, which generated over $29 trillion in 2017. The most widely manufactured electronic device is the metal-oxide-semiconductor field-effect transistor (MOSFET), with an estimated 13 sextillion MOSFETs having been manufactured between 1960 and 2018.
https://en.wikipedia.org/wiki/Electronics
The most widely manufactured electronic device is the metal-oxide-semiconductor field-effect transistor (MOSFET), with an estimated 13 sextillion MOSFETs having been manufactured between 1960 and 2018. In the 1960s, U.S. manufacturers were unable to compete with Japanese companies such as Sony and Hitachi who could produce high-quality goods at lower prices. By the 1980s, however, U.S. manufacturers became the world leaders in semiconductor development and assembly. However, during the 1990s and subsequently, the industry shifted overwhelmingly to East Asia (a process begun with the initial movement of microchip mass-production there in the 1970s), as plentiful, cheap labor and increasing technological sophistication became widely available there. Lewis, James Andrew: "Strengthening a Transnational Semiconductor Industry", June 2, 2022, Center for Strategic and International Studies (CSIS), retrieved September 12, 2022 Over three decades, the United States' global share of semiconductor manufacturing capacity fell, from 37% in 1990, to 12% in 2022. America's pre-eminent semiconductor manufacturer, Intel Corporation, fell far behind its subcontractor Taiwan Semiconductor Manufacturing Company (TSMC) in manufacturing technology.
https://en.wikipedia.org/wiki/Electronics
Over three decades, the United States' global share of semiconductor manufacturing capacity fell, from 37% in 1990, to 12% in 2022. America's pre-eminent semiconductor manufacturer, Intel Corporation, fell far behind its subcontractor Taiwan Semiconductor Manufacturing Company (TSMC) in manufacturing technology. By that time, Taiwan had become the world's leading source of advanced semiconductors—followed by South Korea, the United States, Japan, Singapore, and China. Important semiconductor industry facilities (which often are subsidiaries of a leading producer based elsewhere) also exist in Europe (notably the Netherlands), Southeast Asia, South America, and Israel.
https://en.wikipedia.org/wiki/Electronics
In classical mechanics, Euler's laws of motion are equations of motion which extend Newton's laws of motion for a point particle to rigid body motion. They were formulated by Leonhard Euler about 50 years after Isaac Newton formulated his laws. ## Overview ### Euler's first law Euler's first law states that the rate of change of linear momentum of a rigid body is equal to the resultant of all the external forces acting on the body: $$ \mathbf F_\text{ext} = \frac{d\mathbf p}{dt}. $$ Internal forces between the particles that make up a body do not contribute to changing the momentum of the body, as there is an equal and opposite force resulting in no net effect. The linear momentum of a rigid body is the product of the mass of the body and the velocity of its center of mass. ### Euler's second law Euler's second law states that the rate of change of angular momentum $$ \mathbf L $$ about a point that is fixed in an inertial reference frame (often the center of mass of the body), is equal to the sum of the external moments of force (torques) $$ \mathbf M $$ acting on that body about that point: $$ \mathbf M = \frac{d\mathbf L}{dt}. $$ Note that the above formula holds only if both $$ \mathbf M $$ and $$ \mathbf L $$ are computed with respect to a fixed inertial frame or a frame parallel to the inertial frame but fixed on the center of mass.
https://en.wikipedia.org/wiki/Euler%27s_laws_of_motion
### Euler's second law Euler's second law states that the rate of change of angular momentum $$ \mathbf L $$ about a point that is fixed in an inertial reference frame (often the center of mass of the body), is equal to the sum of the external moments of force (torques) $$ \mathbf M $$ acting on that body about that point: $$ \mathbf M = \frac{d\mathbf L}{dt}. $$ Note that the above formula holds only if both $$ \mathbf M $$ and $$ \mathbf L $$ are computed with respect to a fixed inertial frame or a frame parallel to the inertial frame but fixed on the center of mass. For rigid bodies translating and rotating in only two dimensions, this can be expressed as: $$ \mathbf M = \mathbf r_{\rm cm} \times \mathbf a_{\rm cm} m + I \boldsymbol{\alpha}, $$ where: - $$ \mathbf r_{\rm cm} $$ is the position vector of the center of mass of the body with respect to the point about which moments are summed, - $$ \mathbf a_{\rm cm} $$ is the linear acceleration of the center of mass of the body, - $$ m $$ is the mass of the body, - $$ \boldsymbol{\alpha} $$ is the angular acceleration of the body, and - $$ I $$ is the moment of inertia of the body about its center of mass. See also Euler's equations (rigid body dynamics).
https://en.wikipedia.org/wiki/Euler%27s_laws_of_motion
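As a sanity check of the planar form above, the following sketch (a hypothetical textbook scenario, not taken from this article) verifies numerically that a uniform rod pivoted at one end and released from rest in a horizontal position satisfies the moment balance about the pivot when the angular acceleration takes the familiar value α = 3g/(2L).

```python
import numpy as np

# Uniform rod of mass m and length L, pivoted at one end, released from rest
# in a horizontal position; moments are summed about the pivot.
m, L, g = 2.0, 1.5, 9.81
I_cm = m * L**2 / 12                      # moment of inertia about the centre of mass
alpha = 3 * g / (2 * L)                   # closed-form angular acceleration at release

r_cm = np.array([L / 2, 0.0, 0.0])        # centre of mass relative to the pivot
alpha_vec = np.array([0.0, 0.0, -alpha])  # rotation about z: the rod swings downwards
a_cm = np.cross(alpha_vec, r_cm)          # tangential acceleration (angular velocity is zero at release)

# Right-hand side of Euler's second law in planar form: r_cm x (m a_cm) + I_cm * alpha
rhs = np.cross(r_cm, m * a_cm) + I_cm * alpha_vec

# Left-hand side: the moment of gravity about the pivot
weight = np.array([0.0, -m * g, 0.0])
lhs = np.cross(r_cm, weight)

print(lhs, rhs)                           # both equal (0, 0, -m*g*L/2)
```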
See also Euler's equations (rigid body dynamics). ## Explanation and derivation The distribution of internal forces in a deformable body is not necessarily equal throughout, i.e. the stresses vary from one point to the next. This variation of internal forces throughout the body is governed by Newton's second law of motion of conservation of linear momentum and angular momentum, which for their simplest use are applied to a mass particle but are extended in continuum mechanics to a body of continuously distributed mass. For continuous bodies these laws are called Euler's laws of motion. The total body force applied to a continuous body with mass $$ m $$, mass density $$ \rho $$, and volume $$ V $$, is the volume integral integrated over the volume of the body: $$ \mathbf F_B=\int_V\mathbf b\,dm = \int_V\mathbf b\rho\,dV $$ where $$ \mathbf b $$ is the force acting on the body per unit mass (dimensions of acceleration, misleadingly called the "body force"), and $$ dm $$ is an infinitesimal mass element of the body. Body forces and contact forces acting on the body lead to corresponding moments (torques) of those forces relative to a given point.
https://en.wikipedia.org/wiki/Euler%27s_laws_of_motion
The total body force applied to a continuous body with mass $$ m $$, mass density $$ \rho $$, and volume $$ V $$, is the volume integral integrated over the volume of the body: $$ \mathbf F_B=\int_V\mathbf b\,dm = \int_V\mathbf b\rho\,dV $$ where $$ \mathbf b $$ is the force acting on the body per unit mass (dimensions of acceleration, misleadingly called the "body force"), and $$ dm $$ is an infinitesimal mass element of the body. Body forces and contact forces acting on the body lead to corresponding moments (torques) of those forces relative to a given point. Thus, the total applied torque about the origin is given by $$ \mathbf M= \mathbf M_B + \mathbf M_C $$ where $$ \mathbf M_B $$ and $$ \mathbf M_C $$ respectively indicate the moments caused by the body and contact forces.
https://en.wikipedia.org/wiki/Euler%27s_laws_of_motion
Body forces and contact forces acting on the body lead to corresponding moments (torques) of those forces relative to a given point. Thus, the total applied torque about the origin is given by $$ \mathbf M= \mathbf M_B + \mathbf M_C $$ where $$ \mathbf M_B $$ and $$ \mathbf M_C $$ respectively indicate the moments caused by the body and contact forces. Thus, the sum of all applied forces and torques (with respect to the origin of the coordinate system) acting on the body can be given as the sum of a volume and surface integral: $$ \mathbf F = \int_V \mathbf a\,dm = \int_V \mathbf a\rho\,dV = \int_S \mathbf{t} \,dS + \int_V \mathbf b\rho\,dV $$ $$ \mathbf M = \mathbf M_B + \mathbf M_C = \int_S \mathbf r \times \mathbf t \,dS + \int_V \mathbf r \times \mathbf b\rho\,dV. $$ where $$ \mathbf t $$ is called the surface traction, integrated over the surface of the body; in turn $$ \mathbf n $$ denotes a unit vector normal and directed outwards to the surface $$ S $$.
https://en.wikipedia.org/wiki/Euler%27s_laws_of_motion
Thus, the total applied torque about the origin is given by $$ \mathbf M= \mathbf M_B + \mathbf M_C $$ where $$ \mathbf M_B $$ and $$ \mathbf M_C $$ respectively indicate the moments caused by the body and contact forces. Thus, the sum of all applied forces and torques (with respect to the origin of the coordinate system) acting on the body can be given as the sum of a volume and surface integral: $$ \mathbf F = \int_V \mathbf a\,dm = \int_V \mathbf a\rho\,dV = \int_S \mathbf{t} \,dS + \int_V \mathbf b\rho\,dV $$ $$ \mathbf M = \mathbf M_B + \mathbf M_C = \int_S \mathbf r \times \mathbf t \,dS + \int_V \mathbf r \times \mathbf b\rho\,dV. $$ where $$ \mathbf t $$ is called the surface traction, integrated over the surface of the body; in turn $$ \mathbf n $$ denotes a unit vector normal and directed outwards to the surface $$ S $$. Let the coordinate system $$ (x_1, x_2, x_3) $$ be an inertial frame of reference, $$ \mathbf r $$ be the position vector of a point particle in the continuous body with respect to the origin of the coordinate system, and $$ \mathbf v $$ be the velocity vector of that point.
https://en.wikipedia.org/wiki/Euler%27s_laws_of_motion
Euler's first axiom or law (law of balance of linear momentum or balance of forces) states that in an inertial frame the time rate of change of linear momentum of an arbitrary portion of a continuous body is equal to the total applied force acting on that portion, and it is expressed as $$ \begin{align} \frac{d\mathbf p}{dt} &= \mathbf F \\ \frac{d}{dt}\int_V \rho\mathbf v\,dV&=\int_S \mathbf t \, dS + \int_V \mathbf b\rho \,dV. \end{align} $$ Euler's second axiom or law (law of balance of angular momentum or balance of torques) states that in an inertial frame the time rate of change of angular momentum of an arbitrary portion of a continuous body is equal to the total applied torque acting on that portion, and it is expressed as $$ \begin{align} \frac{d\mathbf L}{dt} &= \mathbf M \\ \frac{d}{dt}\int_V \mathbf r\times\rho\mathbf v\,dV&=\int_S \mathbf r \times \mathbf t \,dS + \int_V \mathbf r \times \mathbf b\rho\,dV. \end{align} $$ where _
https://en.wikipedia.org/wiki/Euler%27s_laws_of_motion
Synthetic geometry (sometimes referred to as axiomatic geometry or even pure geometry) is geometry without the use of coordinates. It relies on the axiomatic method for proving all results from a few basic properties initially called postulates, and at present called axioms. After the 17th-century introduction by René Descartes of the coordinate method, which was called analytic geometry, the term "synthetic geometry" was coined to refer to the older methods that were, before Descartes, the only known ones. According to Felix Klein Synthetic geometry is that which studies figures as such, without recourse to formulae, whereas analytic geometry consistently makes use of such formulae as can be written down after the adoption of an appropriate system of coordinates. The first systematic approach for synthetic geometry is Euclid's Elements. However, it appeared at the end of the 19th century that Euclid's postulates were not sufficient for characterizing geometry. The first complete axiom system for geometry was given only at the end of the 19th century by David Hilbert. At the same time, it appeared that both synthetic methods and analytic methods can be used to build geometry. The fact that the two approaches are equivalent has been proved by Emil Artin in his book Geometric Algebra.
https://en.wikipedia.org/wiki/Synthetic_geometry
At the same time, it appeared that both synthetic methods and analytic methods can be used to build geometry. The fact that the two approaches are equivalent has been proved by Emil Artin in his book Geometric Algebra. Because of this equivalence, the distinction between synthetic and analytic geometry is no longer in use, except at an elementary level, or for geometries that are not related to any sort of numbers, such as some finite geometries and non-Desarguesian geometry. ## Logical synthesis The process of logical synthesis begins with some arbitrary but definite starting point. This starting point is the introduction of primitive notions or primitives and axioms about these primitives: - Primitives are the most basic ideas. Typically they include both objects and relationships. In geometry, the objects are things such as points, lines and planes, while a fundamental relationship is that of incidence – of one object meeting or joining with another. The terms themselves are undefined. Hilbert once remarked that instead of points, lines and planes one might just as well talk of tables, chairs and beer mugs, the point being that the primitive terms are just empty placeholders and have no intrinsic properties.
https://en.wikipedia.org/wiki/Synthetic_geometry
The terms themselves are undefined. Hilbert once remarked that instead of points, lines and planes one might just as well talk of tables, chairs and beer mugs, the point being that the primitive terms are just empty placeholders and have no intrinsic properties. - Axioms are statements about these primitives; for example, any two points are together incident with just one line (i.e. that for any two points, there is just one line which passes through both of them). Axioms are assumed true, and not proven. They are the building blocks of geometric concepts, since they specify the properties that the primitives have. From a given set of axioms, synthesis proceeds as a carefully constructed logical argument. When a significant result is proved rigorously, it becomes a theorem. ### Properties of axiom sets There is no fixed axiom set for geometry, as more than one consistent set can be chosen. Each such set may lead to a different geometry, while there are also examples of different sets giving the same geometry. With this plethora of possibilities, it is no longer appropriate to speak of "geometry" in the singular. Historically, Euclid's parallel postulate has turned out to be independent of the other axioms. Simply discarding it gives absolute geometry, while negating it yields hyperbolic geometry.
https://en.wikipedia.org/wiki/Synthetic_geometry
Historically, Euclid's parallel postulate has turned out to be independent of the other axioms. Simply discarding it gives absolute geometry, while negating it yields hyperbolic geometry. Other consistent axiom sets can yield other geometries, such as projective, elliptic, spherical or affine geometry. Axioms of continuity and "betweenness" are also optional, for example, discrete geometries may be created by discarding or modifying them. Following the Erlangen program of Klein, the nature of any given geometry can be seen as the connection between symmetry and the content of the propositions, rather than the style of development. ## History Euclid's original treatment remained unchallenged for over two thousand years, until the simultaneous discoveries of the non-Euclidean geometries by Gauss, Bolyai, Lobachevsky and Riemann in the 19th century led mathematicians to question Euclid's underlying assumptions. One of the early French analysts summarized synthetic geometry this way: The Elements of Euclid are treated by the synthetic method. This author, after having posed the axioms, and formed the requisites, established the propositions which he proves successively being supported by that which preceded, proceeding always from the simple to compound, which is the essential character of synthesis.
https://en.wikipedia.org/wiki/Synthetic_geometry
One of the early French analysts summarized synthetic geometry this way: The Elements of Euclid are treated by the synthetic method. This author, after having posed the axioms, and formed the requisites, established the propositions which he proves successively being supported by that which preceded, proceeding always from the simple to compound, which is the essential character of synthesis. The heyday of synthetic geometry can be considered to have been the 19th century, when analytic methods based on coordinates and calculus were ignored by some geometers such as Jakob Steiner, in favor of a purely synthetic development of projective geometry. For example, the treatment of the projective plane starting from axioms of incidence is actually a broader theory (with more models) than is found by starting with a vector space of dimension three. Projective geometry has in fact the simplest and most elegant synthetic expression of any geometry. In his Erlangen program, Felix Klein played down the tension between synthetic and analytic methods: On the Antithesis between the Synthetic and the Analytic Method in Modern Geometry: The distinction between modern synthesis and modern analytic geometry must no longer be regarded as essential, inasmuch as both subject-matter and methods of reasoning have gradually taken a similar form in both. We choose therefore in the text as common designation of them both the term projective geometry.
https://en.wikipedia.org/wiki/Synthetic_geometry
The distinction between modern synthesis and modern analytic geometry must no longer be regarded as essential, inasmuch as both subject-matter and methods of reasoning have gradually taken a similar form in both. We choose therefore in the text as common designation of them both the term projective geometry. Although the synthetic method has more to do with space-perception and thereby imparts a rare charm to its first simple developments, the realm of space-perception is nevertheless not closed to the analytic method, and the formulae of analytic geometry can be looked upon as a precise and perspicuous statement of geometrical relations. On the other hand, the advantage to original research of a well formulated analysis should not be underestimated, - an advantage due to its moving, so to speak, in advance of the thought. But it should always be insisted that a mathematical subject is not to be considered exhausted until it has become intuitively evident, and the progress made by the aid of analysis is only a first, though a very important, step. The close axiomatic study of Euclidean geometry led to the construction of the Lambert quadrilateral and the Saccheri quadrilateral. These structures introduced the field of non-Euclidean geometry where Euclid's parallel axiom is denied.
https://en.wikipedia.org/wiki/Synthetic_geometry
The close axiomatic study of Euclidean geometry led to the construction of the Lambert quadrilateral and the Saccheri quadrilateral. These structures introduced the field of non-Euclidean geometry where Euclid's parallel axiom is denied. Gauss, Bolyai and Lobachevski independently constructed hyperbolic geometry, where parallel lines have an angle of parallelism that depends on their separation. This study became widely accessible through the Poincaré disc model where motions are given by Möbius transformations. Similarly, Riemann, a student of Gauss's, constructed Riemannian geometry, of which elliptic geometry is a particular case. Another example concerns inversive geometry as advanced by Ludwig Immanuel Magnus, which can be considered synthetic in spirit. The closely related operation of reciprocation expresses analysis of the plane. Karl von Staudt showed that algebraic axioms, such as commutativity and associativity of addition and multiplication, were in fact consequences of incidence of lines in geometric configurations. David Hilbert showed that the Desargues configuration played a special role. Further work was done by Ruth Moufang and her students. The concepts have been one of the motivators of incidence geometry. When parallel lines are taken as primary, synthesis produces affine geometry.
https://en.wikipedia.org/wiki/Synthetic_geometry
The concepts have been one of the motivators of incidence geometry. When parallel lines are taken as primary, synthesis produces affine geometry. Though Euclidean geometry is both an affine and metric geometry, in general affine spaces may be missing a metric. The extra flexibility thus afforded makes affine geometry appropriate for the study of spacetime, as discussed in the history of affine geometry. In 1955 Herbert Busemann and Paul J. Kelley sounded a nostalgic note for synthetic geometry: Although reluctantly, geometers must admit that the beauty of synthetic geometry has lost its appeal for the new generation. The reasons are clear: not so long ago synthetic geometry was the only field in which the reasoning proceeded strictly from axioms, whereas this appeal — so fundamental to many mathematically interested people — is now made by many other fields. For example, college studies now include linear algebra, topology, and graph theory where the subject is developed from first principles, and propositions are deduced by elementary proofs. Expecting to replace synthetic with analytic geometry leads to loss of geometric content. Today's student of geometry has axioms other than Euclid's available: see Hilbert's axioms and Tarski's axioms.
https://en.wikipedia.org/wiki/Synthetic_geometry
Expecting to replace synthetic with analytic geometry leads to loss of geometric content. Today's student of geometry has axioms other than Euclid's available: see Hilbert's axioms and Tarski's axioms. Ernst Kötter published a (German) report in 1901 on "The development of synthetic geometry from Monge to Staudt (1847)"; ## Proofs using synthetic geometry Synthetic proofs of geometric theorems make use of auxiliary constructs (such as helping lines) and concepts such as equality of sides or angles and similarity and congruence of triangles. Examples of such proofs can be found in the articles Butterfly theorem, Angle bisector theorem, Apollonius' theorem, British flag theorem, Ceva's theorem, Equal incircles theorem, Geometric mean theorem, Heron's formula, Isosceles triangle theorem, Law of cosines, and others that are linked to here. ## Computational synthetic geometry In conjunction with computational geometry, a computational synthetic geometry has been founded, having close connection, for example, with matroid theory. Synthetic differential geometry is an application of topos theory to the foundations of differentiable manifold theory.
https://en.wikipedia.org/wiki/Synthetic_geometry
The zeroth law of thermodynamics is one of the four principal laws of thermodynamics. It provides an independent definition of temperature without reference to entropy, which is defined in the second law. The law was established by Ralph H. Fowler in the 1930s, long after the first, second, and third laws had been widely recognized. The zeroth law states that if two thermodynamic systems are both in thermal equilibrium with a third system, then the two systems are in thermal equilibrium with each other. Guggenheim, E.A. (1967). Thermodynamics. An Advanced Treatment for Chemists and Physicists, North-Holland Publishing Company, Amsterdam, (1st edition 1949) fifth edition 1965, p. 8: "If two systems are both in thermal equilibrium with a third system then they are in thermal equilibrium with each other." Two systems are said to be in thermal equilibrium if they are linked by a wall permeable only to heat, and they do not change over time. Another formulation by James Clerk Maxwell is "All heat is of the same kind". Another statement of the law is "All diathermal walls are equivalent". The zeroth law is important for the mathematical formulation of thermodynamics. It makes the relation of thermal equilibrium between systems an equivalence relation, which can represent equality of some quantity associated with each system.
https://en.wikipedia.org/wiki/Zeroth_law_of_thermodynamics
A quantity that is the same for two systems, if they can be placed in thermal equilibrium with each other, is a scale of temperature. The zeroth law is needed for the definition of such scales, and justifies the use of practical thermometers. ## Equivalence relation A thermodynamic system is by definition in its own state of internal thermodynamic equilibrium, that is to say, there is no change in its observable state (i.e. macrostate) over time and no flows occur in it. One precise statement of the zeroth law is that the relation of thermal equilibrium is an equivalence relation on pairs of thermodynamic systems. In other words, the set of all systems each in its own state of internal thermodynamic equilibrium may be divided into subsets in which every system belongs to one and only one subset, and is in thermal equilibrium with every other member of that subset, and is not in thermal equilibrium with a member of any other subset. This means that a unique "tag" can be assigned to every system, and if the "tags" of two systems are the same, they are in thermal equilibrium with each other, and if different, they are not.
https://en.wikipedia.org/wiki/Zeroth_law_of_thermodynamics
This property is used to justify the use of empirical temperature as a tagging system. Empirical temperature provides further relations of thermally equilibrated systems, such as order and continuity with regard to "hotness" or "coldness", but these are not implied by the standard statement of the zeroth law. If it is defined that a thermodynamic system is in thermal equilibrium with itself (i.e., thermal equilibrium is reflexive), then the zeroth law may be stated as follows: if two systems are each in thermal equilibrium with a third system, then they are in thermal equilibrium with each other. This statement asserts that thermal equilibrium is a left-Euclidean relation between thermodynamic systems. If we also define that every thermodynamic system is in thermal equilibrium with itself, then thermal equilibrium is also a reflexive relation. Binary relations that are both reflexive and Euclidean are equivalence relations.
https://en.wikipedia.org/wiki/Zeroth_law_of_thermodynamics
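For readers unfamiliar with the terminology, the Euclidean properties can be written out explicitly (standard relation-theoretic notation, added here for clarity rather than quoted from the article): writing $$ A \sim B $$ for "A is in thermal equilibrium with B", the law in the form just given says the relation is left-Euclidean, $$ (A \sim C) \wedge (B \sim C) \Rightarrow A \sim B $$, while the right-Euclidean form reads $$ (C \sim A) \wedge (C \sim B) \Rightarrow A \sim B $$. Reflexivity is the further assumption that $$ A \sim A $$ for every system.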
Thus, again implicitly assuming reflexivity, the zeroth law is often expressed as a right-Euclidean statement: if a system A is in thermal equilibrium with both a system B and a system C, then B and C are in thermal equilibrium with each other. One consequence of an equivalence relationship is that the equilibrium relationship is symmetric: if A is in thermal equilibrium with B, then B is in thermal equilibrium with A. Thus, the two systems are in thermal equilibrium with each other, or they are in mutual equilibrium. Another consequence of equivalence is that thermal equilibrium is a transitive relation: if A is in thermal equilibrium with B and B is in thermal equilibrium with C, then A is in thermal equilibrium with C. A reflexive, transitive relation alone does not guarantee an equivalence relationship; for the above statement to be true, both reflexivity and symmetry must be implicitly assumed. It is the Euclidean relationships which apply directly to thermometry. An ideal thermometer is a thermometer which does not measurably change the state of the system it is measuring. Assuming that the unchanging reading of an ideal thermometer is a valid tagging system for the equivalence classes of a set of equilibrated thermodynamic systems, the systems are in thermal equilibrium if the thermometer gives the same reading for each of them.
https://en.wikipedia.org/wiki/Zeroth_law_of_thermodynamics
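The claim that a reflexive, Euclidean relation is an equivalence relation admits a two-step derivation, sketched here for completeness in the notation introduced above (not quoted from the article): for symmetry, suppose $$ A \sim B $$; since $$ A \sim A $$ by reflexivity, the Euclidean property applied with A as the common system gives $$ B \sim A $$. For transitivity, suppose $$ A \sim B $$ and $$ B \sim C $$; symmetry gives $$ B \sim A $$, and the Euclidean property applied with B as the common system gives $$ A \sim C $$. Together with reflexivity, these are exactly the defining properties of an equivalence relation.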
If the systems are then thermally connected, no subsequent change in the state of either one can occur. If the readings are different, then thermally connecting the two systems causes a change in the states of both. The zeroth law provides no information regarding this final reading. ## Foundation of temperature Nowadays there are two nearly separate concepts of temperature: the thermodynamic concept and that of the kinetic theory of gases and other materials. The zeroth law belongs to the thermodynamic concept, but this is no longer the primary international definition of temperature. The current primary international definition of temperature is in terms of the kinetic energy of freely moving microscopic particles such as molecules, related to temperature through the Boltzmann constant $$ k_{\mathrm B} $$. The present article is about the thermodynamic concept, not about the kinetic theory concept. The zeroth law establishes thermal equilibrium as an equivalence relationship.
https://en.wikipedia.org/wiki/Zeroth_law_of_thermodynamics
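For comparison, and as an illustrative standard result rather than part of the article's argument, the kinetic-theory route ties temperature to the mean translational kinetic energy of a freely moving particle of an ideal gas, $$ \langle E_{\text{trans}} \rangle = \tfrac{3}{2} k_{\mathrm B} T $$, and the current SI definition fixes the numerical value of $$ k_{\mathrm B} $$ at $$ 1.380649 \times 10^{-23}\ \mathrm{J\,K^{-1}} $$. The zeroth-law (thermodynamic) route discussed in this article instead obtains temperature only from the equivalence classes of thermal equilibrium.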
An equivalence relationship on a set (such as the set of all systems each in its own state of internal thermodynamic equilibrium) divides that set into a collection of distinct subsets ("disjoint subsets") where any member of the set is a member of one and only one such subset. In the case of the zeroth law, these subsets consist of systems which are in mutual equilibrium. This partitioning allows any member of a subset to be uniquely "tagged" with a label identifying the subset to which it belongs. Although the labeling may be quite arbitrary, temperature is just such a labeling process, one which uses the real number system for tagging. The zeroth law justifies the use of suitable thermodynamic systems as thermometers to provide such a labeling, which yields any number of possible empirical temperature scales, and justifies the use of the second law of thermodynamics to provide an absolute, or thermodynamic, temperature scale. Such temperature scales bring additional continuity and ordering (i.e., "hot" and "cold") properties to the concept of temperature. In the space of thermodynamic parameters, zones of constant temperature form a surface that provides a natural order of nearby surfaces.
https://en.wikipedia.org/wiki/Zeroth_law_of_thermodynamics
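The tagging idea can be given a minimal formal statement (a sketch in the notation used earlier, not a quotation from the article): an empirical temperature is any function $$ \theta $$ from equilibrium states to real numbers that is constant on each equivalence class and distinguishes different classes, i.e. $$ \theta(A) = \theta(B) \iff A \sim B $$. Any strictly monotonic rescaling of $$ \theta $$ is an equally valid labeling, which is why the zeroth law by itself yields many possible empirical scales and the second law is needed to single out the thermodynamic scale.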
One may therefore construct a global temperature function that provides a continuous ordering of states. The dimensionality of a surface of constant temperature is one less than the number of thermodynamic parameters; thus, for an ideal gas described by the three thermodynamic parameters P, V and N, it is a two-dimensional surface. For example, if two systems of ideal gases are in joint thermodynamic equilibrium across an immovable diathermal wall, then $$ \frac{P_1 V_1}{N_1} = \frac{P_2 V_2}{N_2} $$, where $$ P_i $$ is the pressure in the ith system, $$ V_i $$ is the volume, and $$ N_i $$ is the amount (in moles, or simply the number of atoms) of gas. The surface $$ PV/N = \text{constant} $$ defines surfaces of equal thermodynamic temperature, and one may label them by defining T so that $$ PV/N = RT $$, where R is some constant. These systems can now be used as thermometers to calibrate other systems. Such systems are known as "ideal gas thermometers". In a sense, focused on the zeroth law, there is only one kind of diathermal wall or one kind of heat, as expressed by Maxwell's dictum that "All heat is of the same kind".
https://en.wikipedia.org/wiki/Zeroth_law_of_thermodynamics
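As a worked numerical illustration of such an ideal gas thermometer (the figures are ordinary textbook values, used here only as an example): taking $$ R \approx 8.314\ \mathrm{J\,mol^{-1}\,K^{-1}} $$, a sample with $$ N = 1 $$ mol, $$ P = 101{,}325\ \mathrm{Pa} $$ and $$ V = 2.24 \times 10^{-2}\ \mathrm{m^{3}} $$ is assigned $$ T = \frac{PV}{NR} \approx \frac{101{,}325 \times 0.0224}{8.314}\ \mathrm{K} \approx 273\ \mathrm{K} $$. Any other system found to be in thermal equilibrium with this sample across a diathermal wall receives the same label, which is all the zeroth law requires of a thermometer.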