One may imagine reversible changes, such that there is at each instant negligible departure from thermodynamic equilibrium within the system and between system and surroundings. Then the mechanical work is given by $\delta W = P\,dV$ and the quantity of heat added can be expressed as $\delta Q = T\,dS$. For these conditions $$ dU = T\,dS - P\,dV \,\,\,\,\,\,\,\,\,\, \text{(closed system, reversible process).} $$ While this has been shown here for reversible changes, it is valid more generally in the absence of chemical reactions or phase transitions, since $U$ can be considered as a thermodynamic state function of the defining state variables $S$ and $V$. This equation is known as the fundamental thermodynamic relation for a closed system in the energy representation, for which the defining state variables are $S$ and $V$, with respect to which $T$ and $-P$ are partial derivatives of $U$ (Adkins, C. J. (1968/1983), p. 75). It is only in the reversible case, or for a quasistatic process without composition change, that the work done and the heat transferred are given by $-P\,dV$ and $T\,dS$.
https://en.wikipedia.org/wiki/First_law_of_thermodynamics
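As a concrete check of the fundamental relation, here is a minimal sympy sketch (an illustration, not part of the source article) for a monatomic ideal gas, for which inverting the entropy gives the standard form $U(S,V) = A\,V^{-2/3}\exp\!\big(2S/(3nR)\big)$ with $A$ fixed by a reference state; differentiating recovers $T = (\partial U/\partial S)_V$ and $P = -(\partial U/\partial V)_S$.

```python
# Verify T = (dU/dS)_V and P = -(dU/dV)_S for a monatomic ideal gas,
# whose internal energy in the entropy representation has the form
# U(S, V) = A * V**(-2/3) * exp(2*S / (3*n*R)).
import sympy as sp

S, V, A, n, R = sp.symbols('S V A n R', positive=True)
U = A * V**sp.Rational(-2, 3) * sp.exp(2*S / (3*n*R))

T = sp.diff(U, S)           # temperature from the fundamental relation
P = -sp.diff(U, V)          # pressure from the fundamental relation

# U = (3/2) n R T for a monatomic ideal gas:
print(sp.simplify(U - sp.Rational(3, 2)*n*R*T))   # -> 0
# P V = n R T (ideal gas law) follows as well:
print(sp.simplify(P*V - n*R*T))                   # -> 0
```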
In the case of a closed system in which the particles of the system are of different types and, because chemical reactions may occur, their respective numbers are not necessarily constant, the fundamental thermodynamic relation for $dU$ becomes: $$ dU = T\,dS - P\,dV + \sum_i \mu_i\,dN_i, $$ where $dN_i$ is the (small) increase in the number of type-$i$ particles in the reaction, and $\mu_i$ is known as the chemical potential of the type-$i$ particles in the system.
If $dN_i$ is expressed in mol, then $\mu_i$ is expressed in J/mol. If the system has more external mechanical variables than just the volume that can change, the fundamental thermodynamic relation further generalizes to: $$ dU = T\,dS - \sum_{i} X_{i}\,dx_{i} + \sum_{j} \mu_{j}\,dN_{j}. $$ Here the $X_i$ are the generalized forces corresponding to the external variables $x_i$. The parameters $X_i$ are independent of the size of the system and are called intensive parameters, and the $x_i$ are proportional to the size and are called extensive parameters.
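To make the bookkeeping in the generalized relation concrete, the following sketch accumulates $dU = T\,dS - \sum_i X_i\,dx_i + \sum_j \mu_j\,dN_j$ for small changes; all numeric values here are invented for illustration, not data from the article.

```python
# Accumulate dU = T*dS - sum_i X_i*dx_i + sum_j mu_j*dN_j for small changes.
# All numbers are arbitrary illustrative values, not data from the article.

T, dS = 300.0, 0.002          # K, J/K -> heat term T*dS = 0.6 J
pairs = [                     # (generalized force X_i, displacement dx_i)
    (1.0e5, 1.0e-6),          # pressure [Pa] with volume change [m^3]
]
mu_dN = [(-237.1e3, 1.0e-5)]  # chemical potential [J/mol] with dN [mol]

dU = T*dS - sum(X*dx for X, dx in pairs) + sum(mu*dN for mu, dN in mu_dN)
print(f"dU = {dU:.4f} J")     # heat in, expansion work out, matter term
```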
For an open system, there can be transfers of particles as well as energy into or out of the system during a process. For this case, the first law of thermodynamics still holds, in the form that the internal energy is a function of state and the change of internal energy in a process is a function only of its initial and final states, as noted in the section below headed First law of thermodynamics for open systems. A useful idea from mechanics is that the energy gained by a particle is equal to the force applied to the particle multiplied by the displacement of the particle while that force is applied. Now consider the first law without the heating term: $dU = -P\,dV$. The pressure $P$ can be viewed as a force (and in fact has units of force per unit area) while $dV$ is the displacement (with units of distance times area). We may say, with respect to this work term, that a pressure difference forces a transfer of volume, and that the product of the two (work) is the amount of energy transferred out of the system as a result of the process. If one were to make this term negative, then this would be the work done on the system.
It is useful to view the $T\,dS$ term in the same light: here the temperature is known as a "generalized" force (rather than an actual mechanical force) and the entropy is a generalized displacement. Similarly, a difference in chemical potential between groups of particles in the system drives a chemical reaction that changes the numbers of particles, and the corresponding product is the amount of chemical potential energy transformed in the process. For example, consider a system consisting of two phases: liquid water and water vapor. There is a generalized "force" of evaporation that drives water molecules out of the liquid. There is a generalized "force" of condensation that drives vapor molecules out of the vapor. Only when these two "forces" (or chemical potentials) are equal is there equilibrium, and the net rate of transfer zero.
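The following toy relaxation sketch shows how transferring particles down a chemical-potential difference drives the two phases toward equal chemical potentials, where net transfer stops. The linear $\mu(N)$ model, rate constant, and particle numbers are invented assumptions for illustration, not a physical water model.

```python
# Toy model: particles flow from the phase with higher chemical potential
# to the lower one at a rate proportional to (mu_liq - mu_vap).
# mu(N) is modeled as a linear function of particle number -- an invented
# illustration, not a real equation of state.

def mu_liq(n): return -10.0 + 0.002 * n    # arbitrary units
def mu_vap(n): return -12.0 + 0.010 * n

n_liq, n_vap, k, dt = 8000.0, 1000.0, 20.0, 0.01
for _ in range(10_000):
    dn = k * (mu_liq(n_liq) - mu_vap(n_vap)) * dt   # net evaporation
    n_liq -= dn
    n_vap += dn

# At equilibrium the two chemical potentials agree and net transfer is zero.
print(round(mu_liq(n_liq), 4), round(mu_vap(n_vap), 4))
```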
The two thermodynamic parameters that form a generalized force-displacement pair are called "conjugate variables". The two most familiar pairs are, of course, pressure-volume and temperature-entropy.
## Fluid dynamics
In fluid dynamics, the first law of thermodynamics reads $$ \frac{D E_t}{D t} = \frac{D W}{D t} + \frac{D Q}{D t} \;\to\; \frac{D E_t}{D t} = \nabla\cdot(\boldsymbol{\sigma}\cdot\mathbf{v}) - \nabla\cdot\mathbf{q}. $$
## Spatially inhomogeneous systems
Classical thermodynamics is initially focused on closed homogeneous systems (e.g. Planck 1897/1903), which might be regarded as 'zero-dimensional' in the sense that they have no spatial variation. But it is desired to study also systems with distinct internal motion and spatial inhomogeneity. For such systems, the principle of conservation of energy is expressed in terms not only of internal energy as defined for homogeneous systems, but also in terms of kinetic energy and potential energies of parts of the inhomogeneous system with respect to each other and with respect to long-range external forces.
How the total energy of a system is allocated between these three more specific kinds of energy varies according to the purposes of different writers; this is because these components of energy are to some extent mathematical artefacts rather than actually measured physical quantities. For any closed homogeneous component of an inhomogeneous closed system, if $E$ denotes the total energy of that component system, one may write $$ E = E^{\mathrm{kin}} + E^{\mathrm{pot}} + U, $$ where $E^{\mathrm{kin}}$ and $E^{\mathrm{pot}}$ denote respectively the total kinetic energy and the total potential energy of the component closed homogeneous system, and $U$ denotes its internal energy (Glansdorff, P., Prigogine, I. (1971), p. 8).
Potential energy can be exchanged with the surroundings of the system when the surroundings impose a force field, such as gravitational or electromagnetic, on the system. A compound system consisting of two interacting closed homogeneous component subsystems has a potential energy of interaction $E^{\mathrm{pot}}_{12}$ between the subsystems. Thus, in an obvious notation, one may write $$ E = E^{\mathrm{kin}}_1 + E^{\mathrm{pot}}_1 + U_1 + E^{\mathrm{kin}}_2 + E^{\mathrm{pot}}_2 + U_2 + E^{\mathrm{pot}}_{12}. $$
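A small numeric sketch of this bookkeeping (all values are arbitrary illustrative numbers in joules) checks that the compound-system total equals the sum of the subsystem terms plus the interaction potential energy $E^{\mathrm{pot}}_{12}$:

```python
# Total energy of a compound system: per-subsystem kinetic, potential and
# internal energies, plus one interaction term that belongs to neither
# subsystem alone. Numbers are arbitrary illustrative values (joules).

subsystems = [
    {"E_kin": 12.0, "E_pot": 5.0, "U": 230.0},   # subsystem 1
    {"E_kin":  3.5, "E_pot": 9.0, "U": 410.0},   # subsystem 2
]
E_pot_12 = -4.2   # interaction potential energy between the subsystems

E_total = sum(s["E_kin"] + s["E_pot"] + s["U"] for s in subsystems) + E_pot_12
print(E_total)    # 665.3
```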
The quantity $E^{\mathrm{pot}}_{12}$ in general cannot be assigned to either subsystem in a non-arbitrary way, and this stands in the way of a general non-arbitrary definition of transfer of energy as work. On occasions, authors make their various respective arbitrary assignments. The distinction between internal and kinetic energy is hard to make in the presence of turbulent motion within the system, as friction gradually dissipates macroscopic kinetic energy of localised bulk flow into random motion of molecules that is classified as internal energy.
The rate of dissipation by friction of kinetic energy of localised bulk flow into internal energy, whether in turbulent or in streamlined flow, is an important quantity in non-equilibrium thermodynamics (Thomson, W. (1852 b). On a universal tendency in nature to the dissipation of mechanical energy, Philosophical Magazine 4: 304–306). This is a serious difficulty for attempts to define entropy for time-varying spatially inhomogeneous systems.
## First law of thermodynamics for open systems
For the first law of thermodynamics, there is no trivial passage of physical conception from the closed system view to an open system view (Landsberg, P. T. (1978), p. 78). For closed systems, the concepts of an adiabatic enclosure and of an adiabatic wall are fundamental. Matter and internal energy cannot permeate or penetrate such a wall. For an open system, there is a wall that allows penetration by matter. In general, matter in diffusive motion carries with it some internal energy, and some microscopic potential energy changes accompany the motion.
An open system is not adiabatically enclosed. There are some cases in which a process for an open system can, for particular purposes, be considered as if it were for a closed system. In an open system, by definition hypothetically or potentially, matter can pass between the system and its surroundings. But when, in a particular case, the process of interest involves only hypothetical or potential but no actual passage of matter, the process can be considered as if it were for a closed system.
### Internal energy for an open system
Since the revised and more rigorous definition of the internal energy of a closed system rests upon the possibility of processes by which adiabatic work takes the system from one state to another, this leaves a problem for the definition of internal energy for an open system, for which adiabatic work is not in general possible. According to Max Born, the transfer of matter and energy across an open connection "cannot be reduced to mechanics".
In contrast to the case of closed systems, for open systems, in the presence of diffusion, there is no unconstrained and unconditional physical distinction between convective transfer of internal energy by bulk flow of matter, the transfer of internal energy without transfer of matter (usually called heat conduction and work transfer), and change of various potential energies (Fitts, D. D. (1962), p. 28). The older traditional way and the conceptually revised (Carathéodory) way agree that there is no physically unique definition of heat and work transfer processes between open systems (Haase, R. (1963/1969), p. 15; Smith, D. A. (1980). Definition of heat in open systems, Aust. J. Phys., 33: 95–105; Balian, R. (1991/2007), p. 217). In particular, between two otherwise isolated open systems an adiabatic wall is by definition impossible. This problem is solved by recourse to the principle of conservation of energy.
This principle allows a composite isolated system to be derived from two other component non-interacting isolated systems, in such a way that the total energy of the composite isolated system is equal to the sum of the total energies of the two component isolated systems. Two previously isolated systems can be subjected to the thermodynamic operation of placement between them of a wall permeable to matter and energy, followed by a time for establishment of a new thermodynamic state of internal equilibrium in the new single unpartitioned system. The internal energies of the initial two systems and of the final new system, considered respectively as closed systems as above, can be measured. Then the law of conservation of energy requires that (Tisza, L. (1966), p. 110) $$ \Delta U_s + \Delta U_o = 0\,, $$ where $\Delta U_s$ and $\Delta U_o$ denote the changes in internal energy of the system and of its surroundings respectively. This is a statement of the first law of thermodynamics for a transfer between two otherwise isolated open systems, that fits well with the conceptually revised and rigorous statement of the law stated above.
For the thermodynamic operation of adding two systems with internal energies $U_1$ and $U_2$, to produce a new system with internal energy $U$, one may write $U = U_1 + U_2$; the reference states for $U$, $U_1$ and $U_2$ should be specified accordingly, maintaining also that the internal energy of a system be proportional to its mass, so that the internal energies are extensive variables (Prigogine, I., (1955/1967), p. 12). There is a sense in which this kind of additivity expresses a fundamental postulate that goes beyond the simplest ideas of classical closed system thermodynamics; the extensivity of some variables is not obvious, and needs explicit expression; indeed one author goes so far as to say that it could be recognized as a fourth law of thermodynamics, though this is not repeated by other authors (Landsberg, P. T. (1978), pp. 79, 102). Also of course $$ \Delta N_s + \Delta N_o = 0\,, $$ where $\Delta N_s$ and $\Delta N_o$ denote the changes in mole number of a component substance of the system and of its surroundings respectively. This is a statement of the law of conservation of mass.
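A minimal numeric sketch of these two balance statements (the transfer amounts are arbitrary illustrative values): whatever internal energy and mole number the system gains, the surroundings lose.

```python
# Balance bookkeeping for a transfer between a system and its surroundings:
# Delta_U_s + Delta_U_o = 0 (energy) and Delta_N_s + Delta_N_o = 0 (moles).
# Arbitrary illustrative transfer: 150 J and 0.02 mol pass into the system.

dU_s, dN_s = +150.0, +0.02       # gained by the system
dU_o, dN_o = -150.0, -0.02       # lost by the surroundings

assert abs(dU_s + dU_o) < 1e-12  # first-law statement for the open transfer
assert abs(dN_s + dN_o) < 1e-12  # conservation of mass
print("energy and mole balances hold")
```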
### Process of transfer of matter between an open system and its surroundings
A system connected to its surroundings only through contact by a single permeable wall, but otherwise isolated, is an open system. If it is initially in a state of contact equilibrium with a surrounding subsystem, a thermodynamic process of transfer of matter can be made to occur between them if the surrounding subsystem is subjected to some thermodynamic operation, for example, removal of a partition between it and some further surrounding subsystem. The removal of the partition in the surroundings initiates a process of exchange between the system and its contiguous surrounding subsystem. An example is evaporation. One may consider an open system consisting of a collection of liquid, enclosed except where it is allowed to evaporate into or to receive condensate from its vapor above it, which may be considered as its contiguous surrounding subsystem, and subject to control of its volume and temperature. A thermodynamic process might be initiated by a thermodynamic operation in the surroundings that mechanically increases the controlled volume of the vapor.
Some mechanical work will be done within the surroundings by the vapor, but also some of the parent liquid will evaporate and enter the vapor collection which is the contiguous surrounding subsystem. Some internal energy will accompany the vapor that leaves the system, but it will not make sense to try to uniquely identify part of that internal energy as heat and part of it as work. Consequently, the energy transfer that accompanies the transfer of matter between the system and its surrounding subsystem cannot be uniquely split into heat and work transfers to or from the open system. The component of total energy transfer that accompanies the transfer of vapor into the surrounding subsystem is customarily called 'latent heat of evaporation', but this use of the word heat is a quirk of customary historical language, not in strict compliance with the thermodynamic definition of transfer of energy as heat. In this example, kinetic energy of bulk flow and potential energy with respect to long-range external forces such as gravity are both considered to be zero.
The first law of thermodynamics refers to the change of internal energy of the open system, between its initial and final states of internal equilibrium.
### Open system with multiple contacts
An open system can be in contact equilibrium with several other systems at once (Prigogine, I. (1947), p. 48; Aston, J. G., Fritz, J. J. (1959), Chapter 9; Landsberg, P. T. (1961), pp. 128–142; Tschoegl, N. W. (2000), p. 201). This includes cases in which there is contact equilibrium between the system and several subsystems in its surroundings: separate connections with some subsystems through walls that are permeable to the transfer of matter and of internal energy as heat, allowing friction of passage of the transferred matter, but immovable; separate connections with others through adiabatic walls; and separate connections with yet others through diathermic walls impermeable to matter.
Because there are physically separate connections that are permeable to energy but impermeable to matter between the system and its surroundings, energy transfers between them can occur with definite heat and work characters. Conceptually essential here is that the internal energy transferred with the transfer of matter is measured by a variable that is mathematically independent of the variables that measure heat and work. With such independence of variables, the total increase of internal energy in the process is then determined as the sum of the internal energy transferred from the surroundings with the transfer of matter through the walls that are permeable to it, of the internal energy transferred to the system as heat through the diathermic walls, and of the energy transferred to the system as work through the adiabatic walls, including the energy transferred to the system by long-range forces. These simultaneously transferred quantities of energy are defined by events in the surroundings of the system.
Because the internal energy transferred with matter is not in general uniquely resolvable into heat and work components, the total energy transfer cannot in general be uniquely resolved into heat and work components. Under these conditions, the following formula can describe the process in terms of externally defined thermodynamic variables, as a statement of the first law of thermodynamics: $$ \Delta U_0 = -\sum_{i=1}^m \Delta U_i + Q - W \,\,\,\,\,\,\,\, \text{(suitably defined surrounding subsystems),} \,\,\,\,\,\,\,\, (3) $$ where $\Delta U_0$ denotes the change of internal energy of the system, $\Delta U_i$ denotes the change of internal energy of the $i$-th of the $m$ surrounding subsystems that are in open contact with the system, due to transfer between the system and that surrounding subsystem, $Q$ denotes the internal energy transferred as heat from the heat reservoir of the surroundings to the system, and $W$ denotes the energy transferred from the system to the surrounding subsystems that are in adiabatic connection with it. The case of a wall that is permeable to matter and can move so as to allow transfer of energy as work is not considered here.
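As a numeric illustration of equation (3) (all quantities are arbitrary illustrative values), suppose two surrounding subsystems exchange matter with the system while a heat reservoir and an adiabatic work connection also act:

```python
# First law for an open system with multiple contacts:
# Delta_U0 = -sum(Delta_U_i) + Q - W, with all transfers defined by
# events in the surroundings. Arbitrary illustrative values in joules.

dU_surr = [-120.0, +30.0]   # internal-energy changes of the two open-contact
                            # surrounding subsystems (they lose 90 J net)
Q = 50.0                    # heat from the reservoir into the system
W = 20.0                    # work done by the system through adiabatic walls

dU0 = -sum(dU_surr) + Q - W
print(f"Delta U0 = {dU0} J")   # 90 + 50 - 20 = 120 J
```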
#### Combination of first and second laws
If the system is described by the energetic fundamental equation, $U_0 = U_0(S, V, N_j)$, and if the process can be described in the quasi-static formalism, in terms of the internal state variables of the system, then the process can also be described by a combination of the first and second laws of thermodynamics, by the formula $$ dU_0 = T\,dS - P\,dV + \sum_{j=1}^n \mu_j\,dN_j, \,\,\,\,\,\,\,\, (4) $$ where there are $n$ chemical constituents of the system and permeably connected surrounding subsystems, and where $T$, $S$, $P$, $V$, $N_j$, and $\mu_j$ are defined as above. For a general natural process, there is no immediate term-wise correspondence between equations (3) and (4), because they describe the process in different conceptual frames.
Nevertheless, a conditional correspondence exists. There are three relevant kinds of wall here: purely diathermal, adiabatic, and permeable to matter. If two of those kinds of wall are sealed off, leaving only one that permits transfers of energy, as work, as heat, or with matter, then the remaining permitted terms correspond precisely. If two of the kinds of wall are left unsealed, then energy transfer can be shared between them, so that the two remaining permitted terms do not correspond precisely. For the special fictive case of quasi-static transfers, there is a simple correspondence. For this, it is supposed that the system has multiple areas of contact with its surroundings.
There are pistons that allow adiabatic work, purely diathermal walls, and open connections with surrounding subsystems of completely controllable chemical potential (or equivalent controls for charged species). Then, for a suitable fictive quasi-static transfer, one can write $$ \delta Q = T\,dS - T\textstyle{\sum_{i}}s_i\,dN_i \,\text{ and }\, \delta W = P\,dV \,\,\,\,\,\,\,\, \text{(suitably defined surrounding subsystems, quasi-static transfers of energy),} \,\,\,\,\,\,\,\, (5) $$ where $dN_i$ is the added amount of species $i$ and $s_i$ is the corresponding molar entropy. For fictive quasi-static transfers for which the chemical potentials in the connected surrounding subsystems are suitably controlled, these can be put into equation (4) to yield $$ dU_0 = \delta Q - \delta W + \textstyle{\sum_i} h_i\,dN_i \,\,\,\,\,\,\,\, \text{(suitably defined surrounding subsystems, quasi-static transfers of energy),} $$ where $h_i$ is the molar enthalpy of species $i$ (Buchdahl, H. A. (1966), Section 66, pp. 121–125).
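The step from equation (4) to the enthalpy form is short; a sketch of the substitution, using the standard identity $\mu_i = h_i - T s_i$ for the controlled surrounding subsystems, runs as follows:

```latex
% Substituting the quasi-static expressions (5) into equation (4),
% with mu_i = h_i - T s_i for each transferred species:
\begin{align*}
  dU_0 &= T\,dS - P\,dV + \sum_i \mu_i\,dN_i \\
       &= \Big(\delta Q + T\sum_i s_i\,dN_i\Big) - \delta W
          + \sum_i (h_i - T s_i)\,dN_i \\
       &= \delta Q - \delta W + \sum_i h_i\,dN_i,
\end{align*}
% where the first step uses T dS = delta Q + T sum_i s_i dN_i and
% P dV = delta W, and the entropy terms cancel in the last step.
```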
### Non-equilibrium transfers
The transfer of energy between an open system and a single contiguous subsystem of its surroundings is considered also in non-equilibrium thermodynamics. The problem of definition arises also in this case. It may be allowed that the wall between the system and the subsystem is not only permeable to matter and to internal energy, but also may be movable so as to allow work to be done when the two systems have different pressures. In this case, the transfer of energy as heat is not defined. The first law of thermodynamics for any process on the specification of equation (3) can then be written as $$ \Delta U = Q - W + \textstyle{\sum_i} h_i\,\Delta N_i, \,\,\,\,\,\,\,\, (6) $$ where $\Delta U$ denotes the change of internal energy of the system, $Q$ denotes the internal energy transferred as heat from the heat reservoir of the surroundings to the system, $W$ denotes the work done by the system, and $h_i$ is the molar enthalpy of species $i$ coming into the system from the surroundings that are in contact with the system. Formula (6) is valid in the general case, for both quasi-static and irreversible processes.
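A numeric sketch of formula (6) (all values invented for illustration): matter carrying molar enthalpy $h$ flows into a vessel that is simultaneously heated and does expansion work.

```python
# Open-system energy balance, Delta_U = Q - W + sum_i h_i * Delta_N_i.
# Arbitrary illustrative values, one incoming species.

Q = 800.0          # J of heat into the system
W = 150.0          # J of work done by the system
h = 40_650.0       # J/mol, molar enthalpy carried by the incoming matter
dN = 0.05          # mol added to the system

dU = Q - W + h * dN
print(f"Delta U = {dU} J")   # 800 - 150 + 2032.5 = 2682.5 J
```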
The situation of the quasi-static process is considered in the previous section, which in our terms defines $Q$ and $W$ through the quasi-static expressions (5). To describe deviation of the thermodynamic system from equilibrium, in addition to the fundamental variables that are used to fix the equilibrium state, as described above, a set of variables $\xi_1, \xi_2, \ldots$ called internal variables has been introduced, which allows the first law to be formulated for the general case. Methods for the study of non-equilibrium processes mostly deal with spatially continuous flow systems. In this case, the open connection between system and surroundings is usually taken to fully surround the system, so that there are no separate connections impermeable to matter but permeable to heat.
Except for the special case mentioned above, when there is no actual transfer of matter and the process can be treated as if it were for a closed system, in strictly defined thermodynamic terms it follows that transfer of energy as heat is not defined. In this sense, there is no such thing as 'heat flow' for a continuous-flow open system. Properly, for closed systems, one speaks of transfer of internal energy as heat, but in general, for open systems, one can speak safely only of transfer of internal energy. A factor here is that there are often cross-effects between distinct transfers; for example, transfer of one substance may cause transfer of another even when the latter has zero chemical potential gradient.
Usually, transfer between a system and its surroundings applies to transfer of a state variable and obeys a balance law: the amount lost by the donor system is equal to the amount gained by the receptor system. Heat is not a state variable. For his 1947 definition of "heat transfer" for discrete open systems, the author Prigogine carefully explains at some length that his definition of it does not obey a balance law. He describes this as paradoxical. The situation is clarified by Gyarmati, who shows that his definition of "heat transfer", for continuous-flow systems, really refers not specifically to heat, but rather to transfer of internal energy, as follows. He considers a conceptual small cell in a situation of continuous flow as a system defined in the so-called Lagrangian way, moving with the local center of mass. The flow of matter across the boundary is zero when considered as a flow of total mass.
Nevertheless, if the material constitution is of several chemically distinct components that can diffuse with respect to one another, the system is considered to be open, the diffusive flows of the components being defined with respect to the center of mass of the system, and balancing one another as to mass transfer. Still there can be a distinction between bulk flow of internal energy and diffusive flow of internal energy in this case, because the internal energy density need not be constant per unit mass of material, and allowing for non-conservation of internal energy because of local conversion of kinetic energy of bulk flow to internal energy by viscosity. Gyarmati shows that his definition of "the heat flow vector" is strictly speaking a definition of flow of internal energy, not specifically of heat, and so it turns out that his use here of the word heat is contrary to the strict thermodynamic definition of heat, though it is more or less compatible with historical custom, which often enough did not clearly distinguish between heat and internal energy; he writes "that this relation must be considered to be the exact definition of the concept of heat flow, fairly loosely used in experimental physics and heat technics".
Apparently in a different frame of thinking from that of the above-mentioned paradoxical usage in the earlier sections of the historic 1947 work by Prigogine, about discrete systems, this usage of Gyarmati is consistent with the later sections of the same 1947 work by Prigogine, about continuous-flow systems, which use the term "heat flux" in just this way. This usage is also followed by Glansdorff and Prigogine in their 1971 text about continuous-flow systems. They write: "Again the flow of internal energy may be split into a convection flow and a conduction flow.
This conduction flow is by definition the heat flow $\mathbf{J}_q$. Therefore: $$ \mathbf{j}_U = \rho u \mathbf{v} + \mathbf{J}_q\,, $$ where $u$ denotes the [internal] energy per unit mass. [These authors actually use their own symbols to denote internal energy, but their notation has been changed here to accord with the notation of the present article; their own symbol refers to total energy, including kinetic energy of bulk flow.]" This usage is followed also by other writers on non-equilibrium thermodynamics such as Lebon, Jou, and Casas-Vásquez, and de Groot and Mazur. This usage is described by Bailyn as stating the non-convective flow of internal energy, and is listed as his definition number 1, according to the first law of thermodynamics. This usage is also followed by workers in the kinetic theory of gases (Truesdell, C., Muncaster, R. G. (1980), p. 3). This is not the ad hoc definition of "reduced heat flux" of Rolf Haase. In the case of a flowing system of only one chemical constituent, in the Lagrangian representation, there is no distinction between bulk flow and diffusion of matter.
Moreover, the flow of matter is zero into or out of the cell that moves with the local center of mass. In effect, in this description, one is dealing with a system effectively closed to the transfer of matter. But still one can validly talk of a distinction between bulk flow and diffusive flow of internal energy, the latter driven by a temperature gradient within the flowing material, and being defined with respect to the local center of mass of the bulk flow. In this case of a virtually closed system, because of the zero matter transfer, as noted above, one can safely distinguish between transfer of energy as work, and transfer of internal energy as heat.
Probability is a branch of mathematics and statistics concerning events and numerical descriptions of how likely they are to occur. The probability of an event is a number between 0 and 1; the larger the probability, the more likely an event is to occur (William Feller, An Introduction to Probability Theory and Its Applications, vol. 1, 3rd ed., (1968), Wiley). This number is often expressed as a percentage (%), ranging from 0% to 100%. A simple example is the tossing of a fair (unbiased) coin. Since the coin is fair, the two outcomes ("heads" and "tails") are both equally probable; the probability of "heads" equals the probability of "tails"; and since no other outcomes are possible, the probability of either "heads" or "tails" is 1/2 (which could also be written as 0.5 or 50%). These concepts have been given an axiomatic mathematical formalization in probability theory, which is used widely in areas of study such as statistics, mathematics, science, finance, gambling, artificial intelligence, machine learning, computer science, game theory, and philosophy to, for example, draw inferences about the expected frequency of events. Probability theory is also used to describe the underlying mechanics and regularities of complex systems.
https://en.wikipedia.org/wiki/Probability
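As a quick empirical illustration of the fair-coin example above, here is a minimal Python sketch (illustrative only) that estimates the probability of heads by simulation; the relative frequency approaches 1/2 as the number of tosses grows.

```python
# Estimate P(heads) for a fair coin by the relative frequency of
# simulated tosses.
import random

random.seed(0)                      # reproducible illustration
n = 100_000
heads = sum(random.random() < 0.5 for _ in range(n))
print(heads / n)                    # close to 0.5 for large n
```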
## Etymology
The word probability derives from the Latin probabilitas, which can also mean "probity", a measure of the authority of a witness in a legal case in Europe, and often correlated with the witness's nobility. In a sense, this differs much from the modern meaning of probability, which in contrast is a measure of the weight of empirical evidence, and is arrived at from inductive reasoning and statistical inference.
## Interpretations
When dealing with random experiments – i.e., experiments that are random and well-defined – in a purely theoretical setting (like tossing a coin), probabilities can be numerically described by the number of desired outcomes, divided by the total number of all outcomes. This is referred to as theoretical probability (in contrast to empirical probability, dealing with probabilities in the context of real experiments). The probability is a number between 0 and 1; the larger the probability, the more likely the desired outcome is to occur.
For example, tossing a coin twice will yield "head-head", "head-tail", "tail-head", and "tail-tail" outcomes. The probability of getting an outcome of "head-head" is 1 out of 4 outcomes, or, in numerical terms, 1/4, 0.25 or 25%. The probability of getting an outcome of at least one head is 3 out of 4, or 0.75, and this event is more likely to occur. However, when it comes to practical application, there are two major competing categories of probability interpretations, whose adherents hold different views about the fundamental nature of probability:
- Objectivists assign numbers to describe some objective or physical state of affairs. The most popular version of objective probability is frequentist probability, which claims that the probability of a random event denotes the relative frequency of occurrence of an experiment's outcome when the experiment is repeated indefinitely. This interpretation considers probability to be the relative frequency "in the long run" of outcomes.
A modification of this is propensity probability, which interprets probability as the tendency of some experiment to yield a certain outcome, even if it is performed only once.
- Subjectivists assign numbers per subjective probability, that is, as a degree of belief. The degree of belief has been interpreted as "the price at which you would buy or sell a bet that pays 1 unit of utility if E, 0 if not E", although that interpretation is not universally agreed upon. The most popular version of subjective probability is Bayesian probability, which includes expert knowledge as well as experimental data to produce probabilities. The expert knowledge is represented by some (subjective) prior probability distribution. These data are incorporated in a likelihood function. The product of the prior and the likelihood, when normalized, results in a posterior probability distribution that incorporates all the information known to date. By Aumann's agreement theorem, Bayesian agents whose prior beliefs are similar will end up with similar posterior beliefs. However, sufficiently different priors can lead to different conclusions, regardless of how much information the agents share.
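A minimal sketch of the prior-times-likelihood mechanism described above (illustrative numbers; a flat beta prior with a binomial likelihood, a standard conjugate pairing, stands in for "expert knowledge plus data"):

```python
# Bayesian updating on a grid: posterior is prior * likelihood, normalized.
# Estimating a coin's heads-probability p after observing 7 heads in 10 tosses.
import math

grid = [i / 200 for i in range(201)]                   # candidate values of p
prior = [1.0 for _ in grid]                            # flat prior, Beta(1,1)
likelihood = [math.comb(10, 7) * p**7 * (1-p)**3 for p in grid]

unnorm = [pr * li for pr, li in zip(prior, likelihood)]
total = sum(unnorm)
posterior = [u / total for u in unnorm]

# Posterior mean approximates (7+1)/(10+2) = 2/3, the exact Beta(8,4) result.
print(sum(p * w for p, w in zip(grid, posterior)))
```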
## History
The scientific study of probability is a modern development of mathematics. Gambling shows that there has been an interest in quantifying the ideas of probability throughout history, but exact mathematical descriptions arose much later. There are reasons for the slow development of the mathematics of probability. Whereas games of chance provided the impetus for the mathematical study of probability, fundamental issues are still obscured by superstitions. According to Richard Jeffrey, "Before the middle of the seventeenth century, the term 'probable' (Latin probabilis) meant approvable, and was applied in that sense, univocally, to opinion and to action. A probable action or opinion was one such as sensible people would undertake or hold, in the circumstances." However, in legal contexts especially, 'probable' could also apply to propositions for which there was good evidence. The sixteenth-century Italian polymath Gerolamo Cardano demonstrated the efficacy of defining odds as the ratio of favourable to unfavourable outcomes (which implies that the probability of an event is given by the ratio of favourable outcomes to the total number of possible outcomes).
Aside from the elementary work by Cardano, the doctrine of probabilities dates to the correspondence of Pierre de Fermat and Blaise Pascal (1654). Christiaan Huygens (1657) gave the earliest known scientific treatment of the subject. Jakob Bernoulli's Ars Conjectandi (posthumous, 1713) and Abraham de Moivre's Doctrine of Chances (1718) treated the subject as a branch of mathematics. See Ian Hacking's The Emergence of Probability and James Franklin's The Science of Conjecture for histories of the early development of the very concept of mathematical probability. The theory of errors may be traced back to Roger Cotes's Opera Miscellanea (posthumous, 1722), but a memoir prepared by Thomas Simpson in 1755 (printed 1756) first applied the theory to the discussion of errors of observation.
The reprint (1757) of this memoir lays down the axioms that positive and negative errors are equally probable, and that certain assignable limits define the range of all errors. Simpson also discusses continuous errors and describes a probability curve. The first two laws of error that were proposed both originated with Pierre-Simon Laplace. The first law was published in 1774, and stated that the frequency of an error could be expressed as an exponential function of the numerical magnitude of the error, disregarding sign. The second law of error was proposed in 1778 by Laplace, and stated that the frequency of the error is an exponential function of the square of the error. The second law of error is called the normal distribution or the Gauss law. "It is difficult historically to attribute that law to Gauss, who in spite of his well-known precocity had probably not made this discovery before he was two years old."
Daniel Bernoulli (1778) introduced the principle of the maximum product of the probabilities of a system of concurrent errors. Adrien-Marie Legendre (1805) developed the method of least squares, and introduced it in his Nouvelles méthodes pour la détermination des orbites des comètes (New Methods for Determining the Orbits of Comets). In ignorance of Legendre's contribution, an Irish-American writer, Robert Adrain, editor of "The Analyst" (1808), first deduced the law of facility of error, $$ \phi(x) = c e^{-h^2 x^2}, $$ where $h$ is a constant depending on precision of observation, and $c$ is a scale factor ensuring that the area under the curve equals 1. He gave two proofs, the second being essentially the same as John Herschel's (1850). Gauss gave the first proof that seems to have been known in Europe (the third after Adrain's) in 1809.
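A quick check of the normalization claim in Adrain's law of facility of error (a sketch, not from the source): integrating $\phi(x) = c\,e^{-h^2x^2}$ over the real line gives $c\sqrt{\pi}/h$, so the scale factor must be $c = h/\sqrt{\pi}$. Numerically, for an assumed $h = 2$:

```python
# Verify that c = h / sqrt(pi) normalizes phi(x) = c * exp(-h^2 x^2).
import math

h = 2.0                      # assumed precision constant for the check
c = h / math.sqrt(math.pi)   # claimed normalizing scale factor

# Simple Riemann sum over a range wide enough to cover the tails.
dx, total, x = 1e-4, 0.0, -10.0
while x < 10.0:
    total += c * math.exp(-h**2 * x**2) * dx
    x += dx
print(total)                 # ~1.0
```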
Further proofs were given by Laplace (1810, 1812), Gauss (1823), James Ivory (1825, 1826), Hagen (1837), Friedrich Bessel (1838), W.F. Donkin (1844, 1856), and Morgan Crofton (1870). Other contributors were Ellis (1844), De Morgan (1864), Glaisher (1872), and Giovanni Schiaparelli (1875). Peters's (1856) formula for r, the probable error of a single observation, is well known. In the nineteenth century, authors on the general theory included Laplace, Sylvestre Lacroix (1816), Littrow (1833), Adolphe Quetelet (1853), Richard Dedekind (1860), Helmert (1872), Hermann Laurent (1873), Liagre, Didion and Karl Pearson. Augustus De Morgan and George Boole improved the exposition of the theory. In 1906, Andrey Markov introduced the notion of Markov chains, which played an important role in stochastic processes theory and its applications. The modern theory of probability based on measure theory was developed by Andrey Kolmogorov in 1931.
On the geometric side, contributors to The Educational Times included Miller, Crofton, McColl, Wolstenholme, Watson, and Artemas Martin. See integral geometry for more information.
## Theory
Like other theories, the theory of probability is a representation of its concepts in formal terms, that is, in terms that can be considered separately from their meaning. These formal terms are manipulated by the rules of mathematics and logic, and any results are interpreted or translated back into the problem domain. There have been at least two successful attempts to formalize probability, namely the Kolmogorov formulation and the Cox formulation. In Kolmogorov's formulation (see also probability space), sets are interpreted as events and probability as a measure on a class of sets. In Cox's theorem, probability is taken as a primitive (i.e., not further analyzed), and the emphasis is on constructing a consistent assignment of probability values to propositions. In both cases, the laws of probability are the same, except for technical details.
There are other methods for quantifying uncertainty, such as the Dempster–Shafer theory or possibility theory, but those are essentially different and not compatible with the usually-understood laws of probability.
## Applications
Probability theory is applied in everyday life in risk assessment and modeling. The insurance industry and markets use actuarial science to determine pricing and make trading decisions. Governments apply probabilistic methods in environmental regulation, entitlement analysis, and financial regulation. An example of the use of probability theory in equity trading is the effect of the perceived probability of any widespread Middle East conflict on oil prices, which have ripple effects in the economy as a whole. An assessment by a commodity trader that a war is more likely can send that commodity's prices up or down, and signals other traders of that opinion. Accordingly, the probabilities are neither assessed independently nor necessarily rationally. The theory of behavioral finance emerged to describe the effect of such groupthink on pricing, on policy, and on peace and conflict.
In addition to financial assessment, probability can be used to analyze trends in biology (e.g., disease spread) as well as ecology (e.g., biological Punnett squares). As with finance, risk assessment can be used as a statistical tool to calculate the likelihood of undesirable events occurring, and can assist with implementing protocols to avoid encountering such circumstances. Probability is used to design games of chance so that casinos can make a guaranteed profit, yet provide payouts to players that are frequent enough to encourage continued play. Another significant application of probability theory in everyday life is reliability. Many consumer products, such as automobiles and consumer electronics, use reliability theory in product design to reduce the probability of failure. Failure probability may influence a manufacturer's decisions on a product's warranty. The cache language model and other statistical language models that are used in natural language processing are also examples of applications of probability theory.
## Mathematical treatment
Consider an experiment that can produce a number of results. The collection of all possible results is called the sample space of the experiment, sometimes denoted as $\Omega$.
The power set of the sample space is formed by considering all different collections of possible results. For example, rolling a die can produce six possible results. One collection of possible results corresponds to getting an odd number on the die. Thus, the subset {1,3,5} is an element of the power set of the sample space of dice rolls. These collections are called "events". In this case, {1,3,5} is the event that the die falls on some odd number. If the results that actually occur fall in a given event, the event is said to have occurred. A probability is a way of assigning every event a value between zero and one, with the requirement that the event made up of all possible results (in our example, the event {1,2,3,4,5,6}) is assigned a value of one. To qualify as a probability, the assignment of values must satisfy the requirement that for any collection of mutually exclusive events (events with no common results, such as the events {1,6}, {3}, and {2,4}), the probability that at least one of the events will occur is given by the sum of the probabilities of all the individual events.
https://en.wikipedia.org/wiki/Probability
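To make the axioms concrete, here is a minimal Python sketch (the names `omega`, `p_outcome`, and `P` are illustrative, not from the source) that models the fair die and checks additivity over the mutually exclusive events mentioned above:

```python
from fractions import Fraction

# Fair die: sample space and a probability for each outcome.
omega = {1, 2, 3, 4, 5, 6}
p_outcome = {w: Fraction(1, 6) for w in omega}

def P(event):
    """Probability of an event, i.e. a subset of the sample space."""
    return sum(p_outcome[w] for w in event)

print(P({1, 3, 5}))           # 1/2: the event "odd number"
print(P(omega))               # 1: the sure event gets probability one
# Additivity over the mutually exclusive events {1,6}, {3}, {2,4}:
print(P({1, 6} | {3} | {2, 4}) == P({1, 6}) + P({3}) + P({2, 4}))  # True
```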
The probability of an event A is written as $$ P(A) $$ , $$ p(A) $$ , or $$ \text{Pr}(A) $$ . This mathematical definition of probability can extend to infinite sample spaces, and even uncountable sample spaces, using the concept of a measure. The opposite or complement of an event A is the event [not A] (that is, the event of A not occurring), often denoted as $$ A', A^c $$ , $$ \overline{A}, A^\complement, \neg A $$ , or $$ {\sim}A $$ ; its probability is given by $$ P(\text{not }A) = 1 - P(A) $$ .
https://en.wikipedia.org/wiki/Probability
As an example, the chance of not rolling a six on a six-sided die is $$ 1 - \tfrac{1}{6} = \tfrac{5}{6} $$ . For a more comprehensive treatment, see Complementary event. If two events A and B occur on a single performance of an experiment, this is called the intersection or joint probability of A and B, denoted as $$ P(A \cap B). $$

### Independent events

If two events, A and B, are independent then the joint probability is $$ P(A \mbox{ and }B) = P(A \cap B) = P(A) P(B). $$ For example, if two coins are flipped, then the chance of both being heads is $$ \tfrac{1}{2}\times\tfrac{1}{2} = \tfrac{1}{4}. $$
https://en.wikipedia.org/wiki/Probability
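The product rule for independent events can be checked by enumerating the four equally likely outcomes of two coin flips; a small sketch with illustrative names:

```python
from fractions import Fraction
from itertools import product

# Two fair coin flips: four equally likely outcomes HH, HT, TH, TT.
outcomes = list(product("HT", repeat=2))
p_both_heads = Fraction(sum(1 for o in outcomes if o == ("H", "H")),
                        len(outcomes))
print(p_both_heads)                       # 1/4
print(Fraction(1, 2) * Fraction(1, 2))    # 1/4, matching P(A)P(B)
```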
### Mutually exclusive events

If either event A or event B can occur but never both simultaneously, then they are called mutually exclusive events. If two events are mutually exclusive, then the probability of both occurring is denoted as $$ P(A \cap B) $$ and $$ P(A \mbox{ and }B) = P(A \cap B) = 0. $$
https://en.wikipedia.org/wiki/Probability
If two events are mutually exclusive, then the probability of either occurring is denoted as $$ P(A \cup B) $$ and $$ P(A\mbox{ or }B) = P(A \cup B)= P(A) + P(B) - P(A \cap B) = P(A) + P(B) - 0 = P(A) + P(B) $$ For example, the chance of rolling a 1 or 2 on a six-sided die is $$ P(1\mbox{ or }2) = P(1) + P(2) = \tfrac{1}{6} + \tfrac{1}{6} = \tfrac{1}{3}. $$
https://en.wikipedia.org/wiki/Probability
### Not (necessarily) mutually exclusive events

If the events are not (necessarily) mutually exclusive then $$ P\left(A \hbox{ or } B\right) = P(A \cup B) = P\left(A\right)+P\left(B\right)-P\left(A \mbox{ and } B\right). $$ Rewritten, $$ P\left( A\cup B\right) =P\left( A\right) +P\left( B\right) -P\left( A\cap B\right) $$ For example, when drawing a card from a deck of cards, the chance of getting a heart or a face card (J, Q, K) (or both) is $$ \tfrac{13}{52} + \tfrac{12}{52} - \tfrac{3}{52} = \tfrac{11}{26}, $$ since among the 52 cards of a deck, 13 are hearts, 12 are face cards, and 3 are both: here the possibilities included in the "3 that are both" are included in each of the "13 hearts" and the "12 face cards", but should only be counted once.
https://en.wikipedia.org/wiki/Probability
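The card computation can be verified by direct enumeration; in this sketch the `deck` representation is an illustrative choice, not something fixed by the text:

```python
from fractions import Fraction

ranks = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
suits = ["hearts", "diamonds", "clubs", "spades"]
deck = [(r, s) for r in ranks for s in suits]           # 52 cards

hearts = {c for c in deck if c[1] == "hearts"}          # 13 cards
faces = {c for c in deck if c[0] in ("J", "Q", "K")}    # 12 cards

# Inclusion-exclusion: P(A or B) = P(A) + P(B) - P(A and B).
by_formula = (Fraction(len(hearts), 52) + Fraction(len(faces), 52)
              - Fraction(len(hearts & faces), 52))
by_counting = Fraction(len(hearts | faces), 52)
print(by_formula, by_counting)                          # 11/26 both ways
```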
This can be expanded further for multiple not (necessarily) mutually exclusive events.
https://en.wikipedia.org/wiki/Probability
For three events, this proceeds as follows: $$ \begin{aligned}P\left( A\cup B\cup C\right) =&P\left( \left( A\cup B\right) \cup C\right) \\ =&P\left( A\cup B\right) +P\left( C\right) -P\left( \left( A\cup B\right) \cap C\right) \\ =&P\left( A\right) +P\left( B\right) -P\left( A\cap B\right) +P\left( C\right) -P\left( \left( A\cap C\right) \cup \left( B\cap C\right) \right) \\ =&P\left( A\right) +P\left( B\right) +P\left( C\right) -P\left( A\cap B\right) -\left( P\left( A\cap C\right) +P\left( B\cap C\right) -P\left( \left( A\cap C\right) \cap \left( B\cap C\right) \right) \right) \\ P\left( A\cup B\cup C\right) =&P\left( A\right) +P\left( B\right) +P\left( C\right) -P\left( A\cap B\right) -P\left( A\cap C\right) -P\left( B\cap C\right) +P\left( A\cap B\cap C\right) \end{aligned} $$ It can be seen, then, that this pattern can be repeated for any number of events.
https://en.wikipedia.org/wiki/Probability
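The three-event identity can be sanity-checked on randomly chosen events in a finite uniform sample space; a sketch with illustrative names:

```python
import random
from fractions import Fraction

# Uniform probability on a 20-point sample space.
omega = set(range(20))
P = lambda E: Fraction(len(E), len(omega))

# Three random events of 8 outcomes each.
A, B, C = (set(random.sample(sorted(omega), 8)) for _ in range(3))

lhs = P(A | B | C)
rhs = (P(A) + P(B) + P(C)
       - P(A & B) - P(A & C) - P(B & C)
       + P(A & B & C))
print(lhs == rhs)   # True for every choice of A, B, C
```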
### Conditional probability

Conditional probability is the probability of some event A, given the occurrence of some other event B. Conditional probability is written $$ P(A \mid B) $$ , and is read "the probability of A, given B". It is defined by $$ P(A \mid B) = \frac{P(A \cap B)}{P(B)}. $$ If $$ P(B)=0 $$ then $$ P(A \mid B) $$ is formally undefined by this expression. In this case $$ A $$ and $$ B $$ are independent, since $$ P(A \cap B) = P(A)P(B) = 0. $$ However, it is possible to define a conditional probability for some zero-probability events, for example by using a σ-algebra of such events (such as those arising from a continuous random variable). For example, in a bag of 2 red balls and 2 blue balls (4 balls in total), the probability of taking a red ball is $$ 1/2; $$ however, when taking a second ball, the probability of it being either a red ball or a blue ball depends on the ball previously taken. For example, if a red ball was taken, then the probability of picking a red ball again would be $$ 1/3, $$ since only 1 red and 2 blue balls would have been remaining. And if a blue ball was taken previously, the probability of taking a red ball will be $$ 2/3. $$
https://en.wikipedia.org/wiki/Probability
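The urn numbers follow from the definition $$ P(A \mid B) = P(A \cap B)/P(B) $$ by enumerating ordered draws; a sketch (the ball labels are illustrative):

```python
from fractions import Fraction
from itertools import permutations

# Ordered draws of two balls, without replacement, from {R, R, B, B}.
draws = list(permutations(["R1", "R2", "B1", "B2"], 2))   # 12 ordered pairs
first_red = [d for d in draws if d[0].startswith("R")]
both_red = [d for d in draws if d[0].startswith("R") and d[1].startswith("R")]

# P(second red | first red) = P(both red) / P(first red) = 1/3.
print(Fraction(len(both_red), len(draws)) /
      Fraction(len(first_red), len(draws)))
```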
### Inverse probability

In probability theory and applications, Bayes' rule relates the odds of event $$ A_1 $$ to event $$ A_2, $$ before (prior to) and after (posterior to) conditioning on another event $$ B. $$ The odds on $$ A_1 $$ to event $$ A_2 $$ is simply the ratio of the probabilities of the two events.
https://en.wikipedia.org/wiki/Probability
When arbitrarily many events $$ A $$ are of interest, not just two, the rule can be rephrased as posterior is proportional to prior times likelihood, $$ P(A|B)\propto P(A) P(B|A) $$ where the proportionality symbol means that the left hand side is proportional to (i.e., equals a constant times) the right hand side as $$ A $$ varies, for fixed or given $$ B $$ (Lee, 2012; Bertsch McGrayne, 2012). In this form it goes back to Laplace (1774) and to Cournot (1843); see Fienberg (2005).
https://en.wikipedia.org/wiki/Probability
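The rephrasing "posterior is proportional to prior times likelihood" is a normalize-at-the-end computation; in this sketch the priors and likelihoods are made-up illustrative numbers:

```python
from fractions import Fraction

# Two hypotheses with priors P(A_i) and likelihoods P(B | A_i).
prior = {"A1": Fraction(1, 2), "A2": Fraction(1, 2)}
likelihood = {"A1": Fraction(3, 4), "A2": Fraction(1, 4)}

# Posterior is proportional to prior times likelihood...
unnormalized = {h: prior[h] * likelihood[h] for h in prior}
# ...and the proportionality constant is fixed by normalization.
total = sum(unnormalized.values())
posterior = {h: p / total for h, p in unnormalized.items()}
print(posterior)   # {'A1': Fraction(3, 4), 'A2': Fraction(1, 4)}
```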
### Summary of probabilities

| Event | Probability |
|---|---|
| A | $$ P(A) \in [0,1] $$ |
| not A | $$ P(A^\complement) = 1 - P(A) $$ |
| A or B | $$ P(A \cup B) = P(A) + P(B) - P(A \cap B) $$ (equal to $$ P(A) + P(B) $$ if A and B are mutually exclusive) |
| A and B | $$ P(A \cap B) = P(A \mid B)\,P(B) $$ (equal to $$ P(A)\,P(B) $$ if A and B are independent) |
| A given B | $$ P(A \mid B) = \frac{P(A \cap B)}{P(B)} $$ |

## Relation to randomness and probability in quantum mechanics

In a deterministic universe, based on Newtonian concepts, there would be no probability if all conditions were known (Laplace's demon), though there are situations in which sensitivity to initial conditions exceeds our ability to measure them, that is, to know them. In the case of a roulette wheel, if the force of the hand and the period of that force are known, the number on which the ball will stop would be a certainty (though as a practical matter, this would likely be true only of a roulette wheel that had not been exactly levelled – as Thomas A. Bass' Newtonian Casino revealed). This also assumes knowledge of inertia and friction of the wheel, weight, smoothness, and roundness of the ball, variations in hand speed during the turning, and so forth. A probabilistic description can thus be more useful than Newtonian mechanics for analyzing the pattern of outcomes of repeated rolls of a roulette wheel.
https://en.wikipedia.org/wiki/Probability
Physicists face the same situation in the kinetic theory of gases, where the system, while deterministic in principle, is so complex (with the number of molecules typically of the order of magnitude of the Avogadro constant, about $$ 6\times 10^{23} $$ ) that only a statistical description of its properties is feasible. Probability theory is required to describe quantum phenomena. A revolutionary discovery of early 20th century physics was the random character of all physical processes that occur at sub-atomic scales and are governed by the laws of quantum mechanics. The objective wave function evolves deterministically but, according to the Copenhagen interpretation, it deals with the probabilities of observations, the outcome being explained by a wave function collapse when an observation is made. However, the loss of determinism for the sake of instrumentalism did not meet with universal approval. Albert Einstein famously remarked in a letter to Max Born: "I am convinced that God does not play dice". Like Einstein, Erwin Schrödinger, who discovered the wave function, believed quantum mechanics is a statistical approximation of an underlying deterministic reality.
https://en.wikipedia.org/wiki/Probability
In some modern interpretations of the statistical mechanics of measurement, quantum decoherence is invoked to account for the appearance of subjectively probabilistic experimental outcomes.
https://en.wikipedia.org/wiki/Probability
Commutative algebra, first known as ideal theory, is the branch of algebra that studies commutative rings, their ideals, and modules over such rings. Both algebraic geometry and algebraic number theory build on commutative algebra. Prominent examples of commutative rings include polynomial rings; rings of algebraic integers, including the ordinary integers $$ \mathbb{Z} $$ ; and p-adic integers. Commutative algebra is the main technical tool of algebraic geometry, and many results and concepts of commutative algebra are strongly related with geometrical concepts. The study of rings that are not necessarily commutative is known as noncommutative algebra; it includes ring theory, representation theory, and the theory of Banach algebras.

## Overview

Commutative algebra is essentially the study of the rings occurring in algebraic number theory and algebraic geometry. Several concepts of commutative algebra have been developed in relation with algebraic number theory, such as Dedekind rings (the main class of commutative rings occurring in algebraic number theory), integral extensions, and valuation rings. Polynomial rings in several indeterminates over a field are examples of commutative rings.
https://en.wikipedia.org/wiki/Commutative_algebra
Since algebraic geometry is fundamentally the study of the common zeros of sets of polynomials in such rings, many results and concepts of algebraic geometry have counterparts in commutative algebra, and their names often recall their geometric origin; for example "Krull dimension", "localization of a ring", "local ring", "regular ring". An affine algebraic variety corresponds to a prime ideal in a polynomial ring, and the points of such an affine variety correspond to the maximal ideals that contain this prime ideal. The Zariski topology, originally defined on an algebraic variety, has been extended to the sets of the prime ideals of any commutative ring; for this topology, the closed sets are the sets of prime ideals that contain a given ideal. The spectrum of a ring is a ringed space formed by the prime ideals equipped with the Zariski topology, and the localizations of the ring at the open sets of a basis of this topology.
https://en.wikipedia.org/wiki/Commutative_algebra
This is the starting point of scheme theory, a generalization of algebraic geometry introduced by Grothendieck, which is strongly based on commutative algebra and has in turn induced many developments of commutative algebra.

## History

The subject, first known as ideal theory, began with Richard Dedekind's work on ideals, itself based on the earlier work of Ernst Kummer and Leopold Kronecker. Later, David Hilbert introduced the term ring to generalize the earlier term number ring. Hilbert introduced a more abstract approach to replace the more concrete and computationally oriented methods grounded in such things as complex analysis and classical invariant theory. In turn, Hilbert strongly influenced Emmy Noether, who recast many earlier results in terms of an ascending chain condition, now known as the Noetherian condition.
https://en.wikipedia.org/wiki/Commutative_algebra
Another important milestone was the work of Hilbert's student Emanuel Lasker, who introduced primary ideals and proved the first version of the Lasker–Noether theorem. The main figure responsible for the birth of commutative algebra as a mature subject was Wolfgang Krull, who introduced the fundamental notions of localization and completion of a ring, as well as that of regular local rings. He established the concept of the Krull dimension of a ring, first for Noetherian rings before moving on to expand his theory to cover general valuation rings and Krull rings. To this day, Krull's principal ideal theorem is widely considered the single most important foundational theorem in commutative algebra. These results paved the way for the introduction of commutative algebra into algebraic geometry, an idea which would revolutionize the latter subject. Much of the modern development of commutative algebra emphasizes modules.
https://en.wikipedia.org/wiki/Commutative_algebra
Both ideals of a ring R and R-algebras are special cases of R-modules, so module theory encompasses both ideal theory and the theory of ring extensions. Though it was already incipient in Kronecker's work, the modern approach to commutative algebra using module theory is usually credited to Krull and Noether.

## Main tools and results

### Noetherian rings

A Noetherian ring, named after Emmy Noether, is a ring in which every ideal is finitely generated; that is, all elements of any ideal can be written as linear combinations of a finite set of elements, with coefficients in the ring. Many commonly considered commutative rings are Noetherian, in particular every field, the ring of integers, and every polynomial ring in one or several indeterminates over them. The fact that polynomial rings over a field are Noetherian is called Hilbert's basis theorem. Moreover, many ring constructions preserve the Noetherian property.
https://en.wikipedia.org/wiki/Commutative_algebra
In particular, if a commutative ring is Noetherian, the same is true for every polynomial ring over it, and for every quotient ring, localization, or completion of the ring. The importance of the Noetherian property lies in its ubiquity and also in the fact that many important theorems of commutative algebra require that the involved rings are Noetherian. This is the case, in particular, of the Lasker–Noether theorem, the Krull intersection theorem, and Nakayama's lemma. Furthermore, if a ring is Noetherian, then it satisfies the descending chain condition on prime ideals, which implies that every Noetherian local ring has a finite Krull dimension.

### Primary decomposition

An ideal Q of a ring is said to be primary if Q is proper and whenever $$ xy \in Q $$ , either $$ x \in Q $$ or $$ y^n \in Q $$ for some positive integer n. In Z, the primary ideals are precisely the ideals of the form $$ (p^e) $$ where p is prime and e is a positive integer. Thus, a primary decomposition of (n) corresponds to representing (n) as the intersection of finitely many primary ideals.
https://en.wikipedia.org/wiki/Commutative_algebra
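In Z this decomposition can be read off from the prime factorization; a minimal sketch assuming sympy is available (`primary_decomposition` is an illustrative helper, not a library call):

```python
from sympy import factorint

def primary_decomposition(n):
    """Primary decomposition of the ideal (n) in Z.

    Writing n = prod p^e gives (n) as the intersection of the
    primary ideals (p^e), mirroring the Lasker-Noether theorem.
    """
    return [f"({p}^{e})" for p, e in factorint(n).items()]

print(primary_decomposition(360))   # ['(2^3)', '(3^2)', '(5^1)']
```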
The Lasker–Noether theorem may be seen as a certain generalization of the fundamental theorem of arithmetic: it states that for any primary decomposition of I, the set of all radicals, that is, the set $$ \{\operatorname{Rad}(Q_1), \ldots, \operatorname{Rad}(Q_t)\} $$ , remains the same. In fact, it turns out that (for a Noetherian ring) the set is precisely the assassinator of the module R/I; that is, the set of all annihilators of R/I (viewed as a module over R) that are prime.

### Localization

The localization is a formal way to introduce the "denominators" to a given ring or a module. That is, it introduces a new ring/module out of an existing one so that it consists of fractions $$ \frac{m}{s}, $$ where the denominators s range in a given subset S of R.
https://en.wikipedia.org/wiki/Commutative_algebra
The archetypal example is the construction of the ring Q of rational numbers from the ring Z of integers.

### Completion

A completion is any of several related functors on rings and modules that result in complete topological rings and modules. Completion is similar to localization, and together they are among the most basic tools in analysing commutative rings. Complete commutative rings have simpler structure than the general ones and Hensel's lemma applies to them.

### Zariski topology on prime ideals

The Zariski topology defines a topology on the spectrum of a ring (the set of prime ideals). In this formulation, the Zariski-closed sets are taken to be the sets $$ V(I) = \{P \in \operatorname{Spec}\,(A) \mid I \subseteq P\} $$ where A is a fixed commutative ring and I is an ideal.
https://en.wikipedia.org/wiki/Commutative_algebra
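For A = Z, whose prime ideals are (0) and (p) for p prime, the closed set V((n)) consists of the primes dividing n; a minimal sketch assuming sympy (`V` is an illustrative helper):

```python
from sympy import primefactors

def V(n):
    """Zariski-closed set V((n)) in Spec Z.

    A prime ideal (p) contains (n) exactly when p divides n;
    the zero ideal (0) contains (n) only for n = 0.
    """
    if n == 0:
        return "all of Spec Z"   # every prime contains (0), so V((0)) = Spec Z
    return [f"({p})" for p in primefactors(n)]

print(V(12))   # ['(2)', '(3)']
```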
This is defined in analogy with the classical Zariski topology, where closed sets in affine space are those defined by polynomial equations. To see the connection with the classical picture, note that for any set S of polynomials (over an algebraically closed field), it follows from Hilbert's Nullstellensatz that the points of V(S) (in the old sense) are exactly the tuples $$ (a_1, \ldots, a_n) $$ such that the ideal $$ (x_1 - a_1, \ldots, x_n - a_n) $$ contains S; moreover, these are maximal ideals and by the "weak" Nullstellensatz, an ideal of any affine coordinate ring is maximal if and only if it is of this form.
https://en.wikipedia.org/wiki/Commutative_algebra
Thus, V(S) is "the same as" the maximal ideals containing S. Grothendieck's innovation in defining Spec was to replace maximal ideals with all prime ideals; in this formulation it is natural to simply generalize this observation to the definition of a closed set in the spectrum of a ring.

## Connections with algebraic geometry

Commutative algebra (in the form of polynomial rings and their quotients, used in the definition of algebraic varieties) has always been a part of algebraic geometry. However, in the late 1950s, algebraic varieties were subsumed into Alexander Grothendieck's concept of a scheme.
https://en.wikipedia.org/wiki/Commutative_algebra
Their local objects are affine schemes or prime spectra, which are locally ringed spaces, which form a category that is antiequivalent (dual) to the category of commutative unital rings, extending the duality between the category of affine algebraic varieties over a field k, and the category of finitely generated reduced k-algebras. The gluing is along the Zariski topology; one can glue within the category of locally ringed spaces, but also, using the Yoneda embedding, within the more abstract category of presheaves of sets over the category of affine schemes. The Zariski topology in the set-theoretic sense is then replaced by a Zariski topology in the sense of Grothendieck topology.
https://en.wikipedia.org/wiki/Commutative_algebra
Grothendieck introduced Grothendieck topologies having in mind more exotic but geometrically finer and more sensitive examples than the crude Zariski topology, namely the étale topology, and the two flat Grothendieck topologies: fppf and fpqc. Nowadays some other examples have become prominent, including the Nisnevich topology. Sheaves can be furthermore generalized to stacks in the sense of Grothendieck, usually with some additional representability conditions, leading to Artin stacks and, even finer, Deligne–Mumford stacks, both often called algebraic stacks.
https://en.wikipedia.org/wiki/Commutative_algebra
Breach and attack simulation (BAS) refers to technologies that allow organizations to test their security defenses against simulated cyberattacks. BAS solutions provide automated assessments that help identify weaknesses or gaps in an organization's security posture.

## Description

BAS tools work by executing simulated attacks against an organization's IT infrastructure and assets. These simulated attacks are designed to mimic real-world threats and techniques used by cybercriminals. The simulations test the organization's ability to detect, analyze, and respond to attacks. After running the simulations, BAS platforms generate reports that highlight areas where security controls failed to stop the simulated attacks. Organizations use BAS to validate whether security controls are working as intended. Frequent BAS testing helps benchmark security posture over time and ensure proper incident response processes are in place. BAS testing complements other security assessments like penetration testing and vulnerability scanning. It focuses more on validating security controls versus just finding flaws. The automated nature of BAS allows wider and more regular testing than manual red team exercises. BAS is often part of a continuous threat exposure management (CTEM) program.

## Features

Key features of BAS technologies include:
- Automated testing: simulations can be scheduled to run repeatedly without manual oversight.
https://en.wikipedia.org/wiki/Breach_and_attack_simulation
- Threat modeling: simulations are designed based on real adversarial tactics, techniques and procedures.
- Attack surface coverage: can test internal and external-facing assets.
- Security control validation: integrates with other security tools to test efficacy.
- Reporting: identifies vulnerabilities and prioritizes remediation efforts.

## Use cases

Major breach and attack simulation use cases include:

### Validating security controls

Frequent BAS testing helps ensure security controls like firewalls and endpoint detection stay properly configured to detect real threats. Continuous changes to networks and systems can introduce misconfigurations or gaps that BAS exercises uncover. Many solutions provide the ability to compare different software tools adopted or purchased and assess which is more effective. Regular simulations also improve incident response by training security personnel.

### Efficiency improvements

Iterative BAS helps optimize detection and response times. It assists teams in tuning monitoring tools and refining processes. Vulnerability patching can also be better prioritized based on observed exploitability versus just CVSS severity.

### Assessing resilience

BAS emulates full attack techniques to prep defenses against real threats.
https://en.wikipedia.org/wiki/Breach_and_attack_simulation
Mapping simulations to frameworks like MITRE ATT&CK validates readiness against known adversary behavior. While not as in-depth as red teaming, BAS quickly benchmarks resilience.

## See also

- Red team
- Penetration test
https://en.wikipedia.org/wiki/Breach_and_attack_simulation
In mathematics, an integral transform is a type of transform that maps a function from its original function space into another function space via integration, where some of the properties of the original function might be more easily characterized and manipulated than in the original function space. The transformed function can generally be mapped back to the original function space using the inverse transform.

## General form

An integral transform is any transform of the following form: $$ (Tf)(u) = \int_{t_1}^{t_2} f(t)\, K(t, u)\, dt $$ The input of this transform is a function $$ f $$ , and the output is another function $$ Tf $$ . An integral transform is a particular kind of mathematical operator. There are numerous useful integral transforms. Each is specified by a choice of the function $$ K $$ of two variables, that is called the kernel or nucleus of the transform.
https://en.wikipedia.org/wiki/Integral_transform
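For a concrete kernel, the general form can be evaluated by numerical quadrature; a sketch assuming scipy, with the Laplace kernel $$ K(t,u) = e^{-ut} $$ chosen purely for illustration:

```python
import numpy as np
from scipy.integrate import quad

def integral_transform(f, kernel, t1, t2, u):
    """Numerically evaluate (Tf)(u) = integral of f(t) K(t, u) over [t1, t2]."""
    value, _err = quad(lambda t: f(t) * kernel(t, u), t1, t2)
    return value

laplace_kernel = lambda t, u: np.exp(-u * t)

# L{exp(-2t)}(u) should equal 1/(u + 2); check at u = 3:
print(integral_transform(lambda t: np.exp(-2 * t), laplace_kernel, 0, np.inf, u=3.0))
print(1 / (3.0 + 2.0))   # 0.2
```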
Some kernels have an associated inverse kernel $$ K^{-1}( u,t ) $$ which (roughly speaking) yields an inverse transform: $$ f(t) = \int_{u_1}^{u_2} (Tf)(u)\, K^{-1}( u,t )\, du $$ A symmetric kernel is one that is unchanged when the two variables are permuted; it is a kernel function such that $$ K(t, u) = K(u, t) $$ . In the theory of integral equations, symmetric kernels correspond to self-adjoint operators.

## Motivation

There are many classes of problems that are difficult to solve—or at least quite unwieldy algebraically—in their original representations. An integral transform "maps" an equation from its original "domain" into another domain, in which manipulating and solving the equation may be much easier than in the original domain. The solution can then be mapped back to the original domain with the inverse of the integral transform.
https://en.wikipedia.org/wiki/Integral_transform
An integral transform "maps" an equation from its original "domain" into another domain, in which manipulating and solving the equation may be much easier than in the original domain. The solution can then be mapped back to the original domain with the inverse of the integral transform. There are many applications of probability that rely on integral transforms, such as "pricing kernel" or stochastic discount factor, or the smoothing of data recovered from robust statistics; see kernel (statistics). ## History The precursor of the transforms were the Fourier series to express functions in finite intervals. Later the Fourier transform was developed to remove the requirement of finite intervals. Using the Fourier series, just about any practical function of time (the voltage across the terminals of an electronic device for example) can be represented as a sum of sines and cosines, each suitably scaled (multiplied by a constant factor), shifted (advanced or retarded in time) and "squeezed" or "stretched" (increasing or decreasing the frequency). The sines and cosines in the Fourier series are an example of an orthonormal basis. ## Usage example As an example of an application of integral transforms, consider the Laplace transform.
https://en.wikipedia.org/wiki/Integral_transform
This is a technique that maps differential or integro-differential equations in the "time" domain into polynomial equations in what is termed the "complex frequency" domain. (Complex frequency is similar to actual, physical frequency but rather more general. Specifically, the imaginary component ω of the complex frequency s = −σ + iω corresponds to the usual concept of frequency, viz., the rate at which a sinusoid cycles, whereas the real component σ of the complex frequency corresponds to the degree of "damping", i.e. an exponential decrease of the amplitude.) The equation cast in terms of complex frequency is readily solved in the complex frequency domain (roots of the polynomial equations in the complex frequency domain correspond to eigenvalues in the time domain), leading to a "solution" formulated in the frequency domain. Employing the inverse transform, i.e., the inverse procedure of the original Laplace transform, one obtains a time-domain solution.
https://en.wikipedia.org/wiki/Integral_transform
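The forward and inverse steps of this procedure can be reproduced symbolically; a sketch assuming sympy, whose `laplace_transform` and `inverse_laplace_transform` are used here:

```python
import sympy as sp

t, s, a = sp.symbols('t s a', positive=True)

# Time domain -> complex frequency domain.
F = sp.laplace_transform(sp.exp(-a * t), t, s, noconds=True)
print(F)   # 1/(a + s)

# Frequency-domain "solution" mapped back to the time domain.
f = sp.inverse_laplace_transform(1 / (s + a), s, t)
print(f)   # exp(-a*t)*Heaviside(t)
```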
In this example, polynomials in the complex frequency domain (typically occurring in the denominator) correspond to power series in the time domain, while axial shifts in the complex frequency domain correspond to damping by decaying exponentials in the time domain. The Laplace transform finds wide application in physics and particularly in electrical engineering, where the characteristic equations that describe the behavior of an electric circuit in the complex frequency domain correspond to linear combinations of exponentially scaled and time-shifted damped sinusoids in the time domain. Other integral transforms find special applicability within other scientific and mathematical disciplines.
https://en.wikipedia.org/wiki/Integral_transform
Another usage example is the kernel in the path integral: $$ \psi(x,t) = \int_{-\infty}^\infty \psi(x',t') K(x,t; x', t') dx'. $$ This states that the total amplitude $$ \psi(x,t) $$ to arrive at $$ (x,t) $$ is the sum (the integral) over all possible values $$ x' $$ of the total amplitude $$ \psi(x',t') $$ to arrive at the point $$ (x',t') $$ multiplied by the amplitude to go from $$ x' $$ to $$ x $$ , i.e. $$ K(x,t;x',t') $$ . It is often referred to as the propagator for a given system. This (physics) kernel is the kernel of the integral transform. However, for each quantum system, there is a different kernel.
https://en.wikipedia.org/wiki/Integral_transform
## Table of transforms

| Transform | Symbol | $$ K(t,u) $$ | $$ t_1 $$ | $$ t_2 $$ | $$ K^{-1}(u,t) $$ | $$ u_1 $$ | $$ u_2 $$ |
|---|---|---|---|---|---|---|---|
| Abel transform | $$ F, f $$ | $$ \frac{2t}{\sqrt{t^2-u^2}} $$ | $$ u $$ | $$ \infty $$ | | $$ t $$ | $$ \infty $$ |
| Associated Legendre transform | | | | | | | |
| Fourier transform | $$ \mathcal{F} $$ | $$ \frac{e^{-iut}}{\sqrt{2\pi}} $$ | $$ -\infty $$ | $$ \infty $$ | $$ \frac{e^{iut}}{\sqrt{2\pi}} $$ | $$ -\infty $$ | $$ \infty $$ |
| Fourier sine transform (on $$ [0,\infty) $$ , real-valued) | | $$ \sqrt{\tfrac{2}{\pi}}\sin(ut) $$ | $$ 0 $$ | $$ \infty $$ | $$ \sqrt{\tfrac{2}{\pi}}\sin(ut) $$ | $$ 0 $$ | $$ \infty $$ |
| Fourier cosine transform (on $$ [0,\infty) $$ , real-valued) | | $$ \sqrt{\tfrac{2}{\pi}}\cos(ut) $$ | $$ 0 $$ | $$ \infty $$ | $$ \sqrt{\tfrac{2}{\pi}}\cos(ut) $$ | $$ 0 $$ | $$ \infty $$ |
| Hankel transform | | $$ t\,J_\nu(ut) $$ | $$ 0 $$ | $$ \infty $$ | $$ u\,J_\nu(ut) $$ | $$ 0 $$ | $$ \infty $$ |
| Hartley transform | | $$ \frac{\cos(ut)+\sin(ut)}{\sqrt{2\pi}} $$ | $$ -\infty $$ | $$ \infty $$ | $$ \frac{\cos(ut)+\sin(ut)}{\sqrt{2\pi}} $$ | $$ -\infty $$ | $$ \infty $$ |
| Hermite transform | | | | | | | |
| Hilbert transform | | $$ \frac{1}{\pi}\frac{1}{u-t} $$ | $$ -\infty $$ | $$ \infty $$ | | | |
| Jacobi transform | | | | | | | |
| Laguerre transform | | | | | | | |
| Laplace transform | $$ \mathcal{L} $$ | $$ e^{-ut} $$ | $$ 0 $$ | $$ \infty $$ | $$ \frac{e^{ut}}{2\pi i} $$ | $$ c-i\infty $$ | $$ c+i\infty $$ |
| Legendre transform | | $$ P_n(t) $$ | $$ -1 $$ | $$ 1 $$ | | | |
| Mellin transform | $$ \mathcal{M} $$ | $$ t^{u-1} $$ | $$ 0 $$ | $$ \infty $$ | $$ \frac{t^{-u}}{2\pi i} $$ | $$ c-i\infty $$ | $$ c+i\infty $$ |
| Two-sided Laplace transform | | $$ e^{-ut} $$ | $$ -\infty $$ | $$ \infty $$ | $$ \frac{e^{ut}}{2\pi i} $$ | $$ c-i\infty $$ | $$ c+i\infty $$ |
| Poisson kernel | | | | | | | |
| Radon transform | $$ Rf $$ | | | | | | |
| Weierstrass transform | $$ \mathcal{W} $$ | $$ \frac{e^{-(u-t)^2/4}}{\sqrt{4\pi}} $$ | $$ -\infty $$ | $$ \infty $$ | | | |
| X-ray transform | $$ Xf $$ | | | | | | |

For the Abel transform it is assumed that the transform is not discontinuous at $$ u $$ . Some conditions apply to the Mellin transform; see the Mellin inversion theorem for details. In the limits of integration for the inverse transform, c is a constant which depends on the nature of the transform function. For example, for the one and two-sided Laplace transform, c must be greater than the largest real part of the zeroes of the transform function. Note that there are alternative notations and conventions for the Fourier transform.

## Different domains

Here integral transforms are defined for functions on the real numbers, but they can be defined more generally for functions on a group.
https://en.wikipedia.org/wiki/Integral_transform
- If instead one uses functions on the circle (periodic functions), integration kernels are then biperiodic functions; convolution by functions on the circle yields circular convolution.
- If one uses functions on the cyclic group of order n ( $$ \mathbb{Z}/n\mathbb{Z} $$ or $$ \mathbb{Z}_n $$ ), one obtains n × n matrices as integration kernels; convolution corresponds to circulant matrices (a sketch follows at the end of this section).

## General theory

Although the properties of integral transforms vary widely, they have some properties in common. For example, every integral transform is a linear operator, since the integral is a linear operator, and in fact if the kernel is allowed to be a generalized function then all linear operators are integral transforms (a properly formulated version of this statement is the Schwartz kernel theorem). The general theory of such integral equations is known as Fredholm theory. In this theory, the kernel is understood to be a compact operator acting on a Banach space of functions. Depending on the situation, the kernel is then variously referred to as the Fredholm operator, the nuclear operator or the Fredholm kernel.
https://en.wikipedia.org/wiki/Integral_transform
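As promised in the list above, here is a sketch of the cyclic-group case (assuming numpy and scipy): the kernel on $$ \mathbb{Z}/n\mathbb{Z} $$ is an n × n circulant matrix, and applying it agrees with circular convolution:

```python
import numpy as np
from scipy.linalg import circulant

# Kernel on Z/4Z: convolution by k is multiplication by circulant(k).
k = np.array([1.0, 2.0, 0.0, 0.0])
f = np.array([4.0, 3.0, 2.0, 1.0])

matrix_result = circulant(k) @ f
# Convolution theorem: circular convolution is a pointwise product of DFTs.
fft_result = np.real(np.fft.ifft(np.fft.fft(k) * np.fft.fft(f)))
print(np.allclose(matrix_result, fft_result))   # True
```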
In computer science, a search algorithm is an algorithm designed to solve a search problem. Search algorithms work to retrieve information stored within a particular data structure, or calculated in the search space of a problem domain, with either discrete or continuous values. Although search engines use search algorithms, they belong to the study of information retrieval, not algorithmics. The appropriate search algorithm to use often depends on the data structure being searched, and may also include prior knowledge about the data. Search algorithms can be made faster or more efficient by specially constructed database structures, such as search trees, hash maps, and database indexes. Search algorithms can be classified based on their mechanism of searching into three types of algorithms: linear, binary, and hashing. Linear search algorithms check every record for the one associated with a target key in a linear fashion. Binary, or half-interval, searches repeatedly target the center of the search structure and divide the search space in half. Comparison search algorithms improve on linear searching by successively eliminating records based on comparisons of the keys until the target record is found, and can be applied on data structures with a defined order. Digital search algorithms work based on the properties of digits in data structures by using numerical keys. Finally, hashing directly maps keys to records based on a hash function.
https://en.wikipedia.org/wiki/Search_algorithm
Algorithms are often evaluated by their computational complexity, or maximum theoretical run time. Binary search functions, for example, have a maximum complexity of $$ O(\log n) $$ , or logarithmic time. In simple terms, the maximum number of operations needed to find the search target is a logarithmic function of the size of the search space.
https://en.wikipedia.org/wiki/Search_algorithm
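A minimal half-interval search, illustrating the logarithmic behavior described above (names are illustrative):

```python
def binary_search(sorted_list, target):
    """Return the index of target in sorted_list, or -1 if absent.

    Each comparison halves the remaining search space, so at most
    O(log n) comparisons are needed.
    """
    lo, hi = 0, len(sorted_list) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_list[mid] == target:
            return mid
        elif sorted_list[mid] < target:
            lo = mid + 1          # discard the lower half
        else:
            hi = mid - 1          # discard the upper half
    return -1

print(binary_search([2, 3, 5, 7, 11, 13], 11))   # 4
```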
## Applications of search algorithms

Specific applications of search algorithms include:

- Problems in combinatorial optimization, such as:
  - The vehicle routing problem, a form of shortest path problem
  - The knapsack problem: Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible.
  - The nurse scheduling problem
- Problems in constraint satisfaction, such as:
  - The map coloring problem
  - Filling in a sudoku or crossword puzzle
- In game theory and especially combinatorial game theory, choosing the best move to make next (such as with the minmax algorithm)
- Finding a combination or password from the whole set of possibilities
- Factoring an integer (an important problem in cryptography)
- Search engine optimization (SEO) and content optimization for web crawlers
- Optimizing an industrial process, such as a chemical reaction, by changing the parameters of the process (like temperature, pressure, and pH)
- Retrieving a record from a database
- Finding the maximum or minimum value in a list or array
- Checking to see if a given value is present in a set of values

## Classes
https://en.wikipedia.org/wiki/Search_algorithm
### For virtual search spaces

Algorithms for searching virtual spaces are used in the constraint satisfaction problem, where the goal is to find a set of value assignments to certain variables that will satisfy specific mathematical equations and inequalities. They are also used when the goal is to find a variable assignment that will maximize or minimize a certain function of those variables. Algorithms for these problems include the basic brute-force search (also called "naïve" or "uninformed" search), and a variety of heuristics that try to exploit partial knowledge about the structure of this space, such as linear relaxation, constraint generation, and constraint propagation. An important subclass are the local search methods, which view the elements of the search space as the vertices of a graph, with edges defined by a set of heuristics applicable to the case, and scan the space by moving from item to item along the edges, for example according to the steepest descent or best-first criterion, or in a stochastic search. This category includes a great variety of general metaheuristic methods, such as simulated annealing, tabu search, A-teams, and genetic programming, that combine arbitrary heuristics in specific ways. The opposite of local search would be global search methods.
https://en.wikipedia.org/wiki/Search_algorithm
Global search is applicable when the search space is not limited and all aspects of the given network are available to the entity running the search algorithm. This class also includes various tree search algorithms, that view the elements as vertices of a tree, and traverse that tree in some special order. Examples of the latter include the exhaustive methods such as depth-first search and breadth-first search, as well as various heuristic-based search tree pruning methods such as backtracking and branch and bound. Unlike general metaheuristics, which at best work only in a probabilistic sense, many of these tree-search methods are guaranteed to find the exact or optimal solution, if given enough time. This is called "completeness". Another important sub-class consists of algorithms for exploring the game tree of multiple-player games, such as chess or backgammon, whose nodes consist of all possible game situations that could result from the current situation. The goal in these problems is to find the move that provides the best chance of a win, taking into account all possible moves of the opponent(s).
https://en.wikipedia.org/wiki/Search_algorithm
Similar problems occur when humans or machines have to make successive decisions whose outcomes are not entirely under one's control, such as in robot guidance or in marketing, financial, or military strategy planning. This kind of problem — combinatorial search — has been extensively studied in the context of artificial intelligence. Examples of algorithms for this class are the minimax algorithm, alpha–beta pruning, and the A* algorithm and its variants.
https://en.wikipedia.org/wiki/Search_algorithm
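A toy sketch of the minimax idea on an explicit two-ply game tree; the nested-list representation is an illustrative assumption, and practical engines add refinements such as alpha–beta pruning:

```python
def minimax(node, maximizing):
    """Exhaustive minimax over a game tree given as nested lists.

    Leaves are numeric payoffs for the maximizing player; internal
    nodes are lists of child subtrees.
    """
    if not isinstance(node, list):       # leaf: its payoff
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Maximizer picks a subtree, then the minimizer picks a leaf.
tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(tree, True))   # 3: the best payoff the maximizer can guarantee
```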
### For sub-structures of a given structure

An important and extensively studied subclass are the graph algorithms, in particular graph traversal algorithms, for finding specific sub-structures in a given graph — such as subgraphs, paths, circuits, and so on. Examples include Dijkstra's algorithm, Kruskal's algorithm, the nearest neighbour algorithm, and Prim's algorithm. Another important subclass of this category are the string searching algorithms, that search for patterns within strings. Two famous examples are the Boyer–Moore and Knuth–Morris–Pratt algorithms, and several algorithms based on the suffix tree data structure.

### Search for the maximum of a function

In 1953, American statistician Jack Kiefer devised Fibonacci search, which can be used to find the maximum of a unimodal function and has many other applications in computer science (a sketch of a closely related method follows at the end of this section).

### For quantum computers

There are also search methods designed for quantum computers, like Grover's algorithm, that are theoretically faster than linear or brute-force search even without the help of data structures or heuristics. While the ideas and applications behind quantum computers are still entirely theoretical, studies have been conducted with algorithms like Grover's that accurately replicate the hypothetical physical versions of quantum computing systems.
https://en.wikipedia.org/wiki/Search_algorithm
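To illustrate the interval-shrinking idea behind Kiefer's method, here is the closely related golden-section search, the limiting case of Fibonacci search in which the ratios of consecutive Fibonacci numbers approach $$ 1/\varphi $$ ; this is an illustrative stand-in, not Kiefer's exact procedure:

```python
import math

def golden_section_max(f, a, b, tol=1e-8):
    """Locate the maximum of a unimodal function f on [a, b]."""
    invphi = (math.sqrt(5) - 1) / 2              # 1/phi, about 0.618
    x1 = b - invphi * (b - a)
    x2 = a + invphi * (b - a)
    f1, f2 = f(x1), f(x2)
    while b - a > tol:
        if f1 < f2:                              # maximum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + invphi * (b - a)
            f2 = f(x2)
        else:                                    # maximum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = b - invphi * (b - a)
            f1 = f(x1)
    return (a + b) / 2

# The maximum of -(x - 2)^2 on [0, 5] is at x = 2.
print(golden_section_max(lambda x: -(x - 2) ** 2, 0, 5))   # ~2.0
```

Each iteration reuses one of the two previous function evaluations, so only one new evaluation is needed per step, which is the efficiency that motivates both Fibonacci and golden-section search.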
In mathematics, an embedding (or imbedding) is one instance of some mathematical structure contained within another instance, such as a group that is a subgroup of another group. When some object $$ X $$ is said to be embedded in another object $$ Y $$ , the embedding is given by some injective and structure-preserving map $$ f:X\rightarrow Y $$ . The precise meaning of "structure-preserving" depends on the kind of mathematical structure of which $$ X $$ and $$ Y $$ are instances. In the terminology of category theory, a structure-preserving map is called a morphism. The fact that a map $$ f:X\rightarrow Y $$ is an embedding is often indicated by the use of a "hooked arrow" (↪); thus: $$ f : X \hookrightarrow Y. $$ (On the other hand, this notation is sometimes reserved for inclusion maps.) Given $$ X $$ and $$ Y $$ , several different embeddings of $$ X $$ in $$ Y $$ may be possible. In many cases of interest there is a standard (or "canonical") embedding, like those of the natural numbers in the integers, the integers in the rational numbers, the rational numbers in the real numbers, and the real numbers in the complex numbers.
https://en.wikipedia.org/wiki/Embedding
In such cases it is common to identify the domain $$ X $$ with its image $$ f(X) $$ contained in $$ Y $$ , so that $$ X\subseteq Y $$ .

## Topology and geometry

### General topology

In general topology, an embedding is a homeomorphism onto its image. More explicitly, an injective continuous map $$ f : X \to Y $$ between topological spaces $$ X $$ and $$ Y $$ is a topological embedding if $$ f $$ yields a homeomorphism between $$ X $$ and $$ f(X) $$ (where $$ f(X) $$ carries the subspace topology inherited from $$ Y $$ ). Intuitively then, the embedding $$ f : X \to Y $$ lets us treat $$ X $$ as a subspace of $$ Y $$ . Every embedding is injective and continuous.
https://en.wikipedia.org/wiki/Embedding
Every map that is injective, continuous and either open or closed is an embedding; however there are also embeddings that are neither open nor closed. The latter happens if the image $$ f(X) $$ is neither an open set nor a closed set in $$ Y $$ , as for the inclusion of the half-open interval $$ [0,1) $$ into $$ \mathbb{R} $$ . For a given space $$ Y $$ , the existence of an embedding $$ X \to Y $$ is a topological invariant of $$ X $$ . This allows two spaces to be distinguished if one is able to be embedded in a space while the other is not.

#### Related definitions

If the domain of a function $$ f : X \to Y $$ is a topological space then the function is said to be locally injective at a point if there exists some neighborhood $$ U $$ of this point such that the restriction $$ f\big\vert_U : U \to Y $$ is injective. It is called locally injective if it is locally injective around every point of its domain. Similarly, a local (topological, resp. smooth) embedding is a function for which every point in its domain has some neighborhood to which its restriction is a (topological, resp. smooth) embedding.
https://en.wikipedia.org/wiki/Embedding
Every injective function is locally injective but not conversely. Local diffeomorphisms, local homeomorphisms, and smooth immersions are all locally injective functions that are not necessarily injective. The inverse function theorem gives a sufficient condition for a continuously differentiable function to be (among other things) locally injective. Every fiber of a locally injective function $$ f : X \to Y $$ is necessarily a discrete subspace of its domain $$ X. $$

### Differential topology

In differential topology: Let $$ M $$ and $$ N $$ be smooth manifolds and $$ f:M\to N $$ be a smooth map. Then $$ f $$ is called an immersion if its derivative is everywhere injective. An embedding, or a smooth embedding, is defined to be an immersion that is an embedding in the topological sense mentioned above (i.e. homeomorphism onto its image). In other words, the domain of an embedding is diffeomorphic to its image, and in particular the image of an embedding must be a submanifold. An immersion is precisely a local embedding, i.e. for any point $$ x\in M $$ there is a neighborhood $$ x\in U\subset M $$ such that $$ f\big\vert_U : U \to N $$ is an embedding.
https://en.wikipedia.org/wiki/Embedding
When the domain manifold is compact, the notion of a smooth embedding is equivalent to that of an injective immersion. An important case is $$ N = \mathbb{R}^n $$ . The interest here is in how large $$ n $$ must be for an embedding, in terms of the dimension $$ m $$ of $$ M $$ . The Whitney embedding theorem states that $$ n = 2m $$ is enough, and is the best possible linear bound. For example, the real projective space $$ \mathbb{R}\mathrm{P}^m $$ of dimension $$ m $$ , where $$ m $$ is a power of two, requires $$ n = 2m $$ for an embedding.
An important case is $$ N = \mathbb{R}^n $$. The interest here is in how large $$ n $$ must be for an embedding, in terms of the dimension $$ m $$ of $$ M $$. The Whitney embedding theorem states that $$ n = 2m $$ is enough, and is the best possible linear bound. For example, the real projective space $$ \mathbb{R}\mathrm{P}^m $$ of dimension $$ m $$, where $$ m $$ is a power of two, requires $$ n = 2m $$ for an embedding.
However, this does not apply to immersions; for instance, $$ \mathbb{R}\mathrm{P}^2 $$ can be immersed in $$ \mathbb{R}^3 $$, as is explicitly shown by Boy's surface, which has self-intersections. The Roman surface fails to be an immersion as it contains cross-caps.

An embedding is proper if it behaves well with respect to boundaries: one requires the map $$ f: X \rightarrow Y $$ to be such that
- $$ f(\partial X) = f(X) \cap \partial Y $$, and
- $$ f(X) $$ is transverse to $$ \partial Y $$ in any point of $$ f(\partial X) $$.
The first condition is equivalent to having $$ f(\partial X) \subseteq \partial Y $$ and $$ f(X \setminus \partial X) \subseteq Y \setminus \partial Y $$. The second condition, roughly speaking, says that $$ f(X) $$ is not tangent to the boundary of $$ Y $$.
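For instance (a worked example of ours), the arc $$ f(t) = (\cos \pi t, \sin \pi t) $$ for $$ t \in [0,1] $$ is a proper embedding of the interval into the closed upper half-plane: the endpoints map to $$ (\pm 1, 0) $$ on the boundary, the interior maps into the open half-plane, and the velocities $$ f'(0) = (0, \pi) $$ and $$ f'(1) = (0, -\pi) $$ are transverse (indeed perpendicular) to the boundary at both boundary points.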
### Riemannian and pseudo-Riemannian geometry

In Riemannian geometry and pseudo-Riemannian geometry: Let $$ (M,g) $$ and $$ (N,h) $$ be Riemannian manifolds or more generally pseudo-Riemannian manifolds. An isometric embedding is a smooth embedding $$ f:M\rightarrow N $$ that preserves the (pseudo-)metric in the sense that $$ g $$ is equal to the pullback of $$ h $$ by $$ f $$, i.e. $$ g=f^{*}h $$.
Explicitly, for any two tangent vectors $$ v,w\in T_x(M) $$ we have $$ g(v,w)=h(df(v),df(w)). $$ Analogously, an isometric immersion is an immersion between (pseudo-)Riemannian manifolds that preserves the (pseudo-)Riemannian metrics. Equivalently, in Riemannian geometry, an isometric embedding (immersion) is a smooth embedding (immersion) that preserves the length of curves (cf. Nash embedding theorem).
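For a curve $$ c $$ into the Euclidean plane, the pullback metric is $$ |c'(t)|^2 \, dt^2 $$, so a unit-speed curve is an isometric immersion of (an interval of) the real line. A minimal numeric sketch (the curve and sample points are ours):

```python
import math

# The unit-speed circle c(t) = (cos t, sin t) pulls the Euclidean metric h
# back to the standard metric dt^2 on the line: the pullback coefficient
# is |c'(t)|^2, which equals 1 for every t.
def dc(t):                                # velocity c'(t)
    return (-math.sin(t), math.cos(t))

for t in (0.0, 0.7, 2.0):
    v = dc(t)
    print(v[0]**2 + v[1]**2)              # 1.0 each time: g = f*h holds
```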
## Algebra

In general, for an algebraic category $$ C $$, an embedding between two $$ C $$ -algebraic structures $$ X $$ and $$ Y $$ is a $$ C $$ -morphism that is injective.

### Field theory
In field theory, an embedding of a field $$ E $$ in a field $$ F $$ is a ring homomorphism $$ \sigma : E \to F $$. The kernel of $$ \sigma $$ is an ideal of $$ E $$, which cannot be the whole field $$ E $$ because of the condition $$ \sigma(1) = 1 $$. Furthermore, any field has as ideals only the zero ideal and the whole field itself (because if an ideal contains any non-zero field element, that element is invertible, so the ideal is the whole field). Therefore, the kernel is $$ 0 $$, so any embedding of fields is a monomorphism. Hence, $$ E $$ is isomorphic to the subfield $$ \sigma(E) $$ of $$ F $$. This justifies the name embedding for an arbitrary homomorphism of fields.
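As an illustration that several different embeddings can exist, the field $$ \mathbb{Q}(\sqrt{2}) $$ admits two embeddings into the real numbers, one fixing $$ \sqrt{2} $$ and one sending it to $$ -\sqrt{2} $$. A minimal sketch (representing $$ a + b\sqrt{2} $$ as an exact pair of rationals; the encoding is ours):

```python
from fractions import Fraction

# Elements of Q(sqrt(2)) as pairs (a, b) standing for a + b*sqrt(2).
def mul(x, y):
    (a, b), (c, d) = x, y
    # (a + b*r)(c + d*r) = (ac + 2bd) + (ad + bc)*r, since r**2 = 2
    return (a * c + 2 * b * d, a * d + b * c)

def sigma(x):                    # the embedding sending sqrt(2) to -sqrt(2)
    a, b = x
    return (a, -b)

# Check the ring homomorphism law sigma(x*y) = sigma(x)*sigma(y) on samples,
# and the condition sigma(1) = 1 that forces the kernel to be 0.
xs = [(Fraction(1), Fraction(2)), (Fraction(3, 2), Fraction(-1, 3))]
for x in xs:
    for y in xs:
        assert sigma(mul(x, y)) == mul(sigma(x), sigma(y))
assert sigma((Fraction(1), Fraction(0))) == (Fraction(1), Fraction(0))
```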
### Universal algebra and model theory
If $$ \sigma $$ is a signature and $$ A,B $$ are $$ \sigma $$ -structures (also called $$ \sigma $$ -algebras in universal algebra or models in model theory), then a map $$ h:A \to B $$ is a $$ \sigma $$ -embedding exactly if all of the following hold:
- $$ h $$ is injective,
- for every $$ n $$ -ary function symbol $$ f \in\sigma $$ and $$ a_1,\ldots,a_n \in A^n, $$ we have $$ h(f^A(a_1,\ldots,a_n))=f^B(h(a_1),\ldots,h(a_n)) $$,
- for every $$ n $$ -ary relation symbol $$ R \in\sigma $$ and $$ a_1,\ldots,a_n \in A^n, $$ we have $$ A \models R(a_1,\ldots,a_n) $$ iff $$ B \models R(h(a_1),\ldots,h(a_n)). $$

Here $$ A\models R (a_1,\ldots,a_n) $$ is a model theoretical notation equivalent to $$ (a_1,\ldots,a_n)\in R^A $$.
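For a small concrete check (a sketch of ours: a signature with one binary function symbol, interpreted as max, and one binary relation symbol, interpreted as the usual order):

```python
from itertools import product

# A = ({0, 1}, max, <=) and B = ({0, 1, 2}, max, <=) as structures for a
# signature with one binary function symbol (max) and one binary relation
# symbol (<=). The injective map h(x) = 2*x is a sigma-embedding of A into B.
A = [0, 1]

def h(x):
    return 2 * x

for a1, a2 in product(A, A):
    # function symbol: h(max^A(a1, a2)) == max^B(h(a1), h(a2))
    assert h(max(a1, a2)) == max(h(a1), h(a2))
    # relation symbol: A |= a1 <= a2  iff  B |= h(a1) <= h(a2)
    assert (a1 <= a2) == (h(a1) <= h(a2))
```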
In model theory there is also a stronger notion of elementary embedding.

## Order theory and domain theory

In order theory, an embedding of partially ordered sets is a function $$ F $$ between partially ordered sets $$ X $$ and $$ Y $$ such that $$ \forall x_1,x_2\in X: x_1\leq x_2 \iff F(x_1)\leq F(x_2). $$ Injectivity of $$ F $$ follows quickly from this definition.
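For example (a finite sketch of ours), mapping each squarefree divisor of 6 to its set of prime factors embeds the divisibility order into the subset order, and the "if and only if" in the definition forces injectivity:

```python
from itertools import product

# Order embedding: x1 <= x2 iff F(x1) <= F(x2), with X the divisors of 6
# under divisibility and Y the subsets of {2, 3} under inclusion.
X = [1, 2, 3, 6]

def leq_X(a, b):                 # a <= b in X means a divides b
    return b % a == 0

def F(n):                        # the set of primes dividing n
    return frozenset(p for p in (2, 3) if n % p == 0)

for a, b in product(X, X):
    assert leq_X(a, b) == (F(a) <= F(b))   # <= on frozensets is inclusion
# Injectivity follows: F(a) == F(b) gives a <= b and b <= a, hence a == b.
```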
In domain theory, an additional requirement is that $$ \forall y\in Y:\{x \mid F(x) \leq y\} $$ is directed.

## Metric spaces

A mapping $$ \phi: X \to Y $$ of metric spaces is called an embedding (with distortion $$ C>0 $$) if $$ L d_X(x, y) \leq d_Y(\phi(x), \phi(y)) \leq CLd_X(x,y) $$ for every $$ x,y\in X $$ and some constant $$ L>0 $$.
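On a finite metric space, the optimal constants for a given map are easy to compute: take $$ L $$ to be the smallest stretch ratio $$ d_Y/d_X $$ over all pairs, and $$ CL $$ the largest. A minimal sketch (the spaces and the map are ours): sending the four-cycle graph metric to the corners of a unit square gives distortion $$ \sqrt{2} $$.

```python
import math
from itertools import combinations

# Distortion of a concrete map: the 4-cycle with hop-count metric, sent to
# the corners of the unit square with the Euclidean metric.
X = [0, 1, 2, 3]

def d_X(a, b):                       # shortest-path distance on the cycle
    k = abs(a - b)
    return min(k, 4 - k)

corner = {0: (0, 0), 1: (1, 0), 2: (1, 1), 3: (0, 1)}

ratios = [math.dist(corner[a], corner[b]) / d_X(a, b)
          for a, b in combinations(X, 2)]
L = min(ratios)                      # guaranteed lower stretch
C = max(ratios) / L                  # the distortion
print(C)                             # 1.4142...: distortion sqrt(2)
```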
### Normed spaces

An important special case is that of normed spaces; in this case it is natural to consider linear embeddings.