passage: ### Charge exchange ionization
Charge-exchange ionization (also known as charge-transfer ionization) is a gas phase reaction between an ion and an atom or molecule in which the charge of the ion is transferred to the neutral species.
$$
\mathrm{A}^+ + \mathrm{B} \rightarrow \mathrm{A} + \mathrm{B}^+
$$
### Chemi-ionization
Chemi-ionization is the formation of an ion through the reaction of a gas phase atom or molecule with an atom or molecule in an excited state. Chemi-ionization can be represented by
$$
\mathrm{G}^* + \mathrm{M} \rightarrow \mathrm{G} + \mathrm{M}^{+\bullet} + \mathrm{e}^-
$$
where G is the excited state species (indicated by the superscripted asterisk), and M is the species that is ionized by the loss of an electron to form the radical cation (indicated by the superscripted "plus-dot").
### Associative ionization
Associative ionization is a gas phase reaction in which two atoms or molecules interact to form a single product ion. One or both of the interacting species may have excess internal energy.
For example,
$$
\mathrm{A}^* + \mathrm{B} \rightarrow \mathrm{AB}^{+\bullet} + \mathrm{e}^-
$$
where species A with excess internal energy (indicated by the asterisk) interacts with B to form the ion AB+.
### Penning ionization
Penning ionization is a form of chemi-ionization involving reactions between neutral atoms or molecules. The process is named after the Dutch physicist Frans Michel Penning who first reported it in 1927.
|
https://en.wikipedia.org/wiki/Ion_source
|
passage: This only occurs when the material is sufficiently dense and compact, indicating that it has been produced by the progenitor star itself only shortly before the supernova occurs.
Large numbers of supernovae have been catalogued and classified to provide distance candles and test models. Average characteristics vary somewhat with distance and type of host galaxy, but can broadly be specified for each supernova type.
Physical properties of supernovae by type:

| Type | Average peak absolute magnitude | Approximate energy (foe) | Days to peak luminosity | Days from peak to 10% luminosity |
|---|---|---|---|---|
| Ia | −19 | 1 | approx. 19 | around 60 |
| Ib/c (faint) | around −15 | 0.1 | 15–25 | unknown |
| Ib | around −17 | 1 | 15–25 | 40–100 |
| Ic | around −16 | 1 | 15–25 | 40–100 |
| Ic (bright) | to −22 | above 5 | roughly 25 | roughly 100 |
| II-b | around −17 | 1 | around 20 | around 100 |
| II-L | around −17 | 1 | around 13 | around 150 |
| II-P (faint) | around −14 | 0.1 | roughly 15 | unknown |
| II-P | around −16 | 1 | around 15 | plateau, then around 50 |
| IIn | around −17 | 1 | 12–30 or more | 50–150 |
| IIn (bright) | to −22 | above 5 | above 50 | above 100 |
### Asymmetry
A long-standing puzzle surrounding type II supernovae is why the remaining compact object receives a large velocity away from the epicentre; pulsars, and thus neutron stars, are observed to have high peculiar velocities, and black holes presumably do as well, although they are far harder to observe in isolation.
|
https://en.wikipedia.org/wiki/Supernova
|
passage: From a technical point of view, information (data)-centric security relies on the implementation of the following:
- Information (data) that is self-describing and defending.
- Policies and controls that account for business context.
- Information that remains protected as it moves in and out of applications and storage systems and across changing business contexts.
- Policies that work consistently through the different data management technologies and defensive layers implemented.
## Technology
### Data access controls and policies
Data access control is the selective restriction of access to data. Accessing may mean viewing, editing, or using. Defining proper access controls requires mapping out the information: where it resides, how important it is, to whom it is important, and how sensitive it is, and then designing appropriate controls.
### Encryption
Encryption is a proven data-centric technique to address the risk of data theft in smartphones, laptops, desktops and even servers, including the cloud. One limitation is that encryption is not always effective once a network intrusion has occurred and cybercriminals operate with stolen valid user credentials.
### Data masking
Data masking is the process of hiding specific data within a database table or cell to ensure that data security is maintained and that sensitive information is not exposed to unauthorized personnel. This may include masking the data from users, developers, third-party and outsourcing vendors, etc.
Data masking can be achieved multiple ways: by duplicating data to eliminate the subset of the data that needs to be hidden, or by obscuring the data dynamically as users perform requests.
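A minimal sketch of the second approach, dynamic masking applied as results are returned, in Python; the field names and masking policy are illustrative assumptions, not part of any particular product:

```python
def mask_value(value: str, keep_last: int = 4, mask_char: str = "*") -> str:
    """Obscure all but the last `keep_last` characters of a sensitive value."""
    if len(value) <= keep_last:
        return mask_char * len(value)
    return mask_char * (len(value) - keep_last) + value[-keep_last:]

def mask_row(row: dict, sensitive_fields: set) -> dict:
    """Dynamically mask sensitive fields in a query-result row."""
    return {k: mask_value(str(v)) if k in sensitive_fields else v
            for k, v in row.items()}

row = {"name": "A. Smith", "card_number": "4111111111111111", "amount": 12.5}
print(mask_row(row, {"card_number"}))
# {'name': 'A. Smith', 'card_number': '************1111', 'amount': 12.5}
```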
|
https://en.wikipedia.org/wiki/Data-centric_security
|
passage: - Cache
- Level 0 (L0), micro-operations cache: 6,144 bytes (6 KiB) in size
- Level 1 (L1) instruction cache: 128 KiB in size
- Level 1 (L1) data cache: 128 KiB in size. Best access speed is around 700 GB/s.
- Level 2 (L2) instruction and data (shared): 1 MiB in size. Best access speed is around 200 GB/s.
- Level 3 (L3) shared cache: 6 MiB in size. Best access speed is around 100 GB/s.
- Level 4 (L4) shared cache: 128 MiB in size. Best access speed is around 40 GB/s.
- Main memory (primary storage): gigabytes in size. Best access speed is around 10 GB/s. In the case of a NUMA machine, access times may not be uniform.
- Mass storage (secondary storage): terabytes in size. As of 2017, best access speed from a consumer solid-state drive is about 2000 MB/s.
- Nearline storage (tertiary storage): up to exabytes in size. As of 2013, best access speed is about 160 MB/s.
- Offline storage
The lower levels of the hierarchy, from mass storage downwards, are also known as tiered storage. The formal distinction between online, nearline, and offline storage is:
- Online storage is immediately available for I/O.
- Nearline storage is not immediately available, but can be made online quickly without human intervention.
- Offline storage is not immediately available, and requires some human intervention to bring online.
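The practical effect of such a hierarchy is often summarized by the average access time seen by a request. A small Python sketch under assumed, purely illustrative hit rates and latencies (the numbers are not measurements of any real machine):

```python
def amat(levels):
    """Average access time for a multi-level hierarchy.
    levels: list of (access_time_ns, hit_rate), fastest level first;
    the last level is assumed to always hit (hit_rate 1.0).
    Each access pays the access time of every level it reaches."""
    time = 0.0
    reach = 1.0  # probability an access gets this far down the hierarchy
    for access_time, hit_rate in levels:
        time += reach * access_time
        reach *= (1.0 - hit_rate)
    return time

# Illustrative numbers only (ns): L1, L2, L3, main memory.
print(amat([(1, 0.95), (5, 0.90), (30, 0.99), (100, 1.0)]))  # ~1.4 ns
```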
|
https://en.wikipedia.org/wiki/Memory_hierarchy
|
passage: In mathematics, a sober space is a topological space X such that every (nonempty) irreducible closed subset of X is the closure of exactly one point of X: that is, every nonempty irreducible closed subset has a unique generic point.
## Definitions
Sober spaces have a variety of cryptomorphic definitions, which are documented in this section. In each case below, replacing "unique" with "at most one" gives an equivalent formulation of the T0 axiom. Replacing it with "at least one" is equivalent to the property that the T0 quotient of the space is sober, which is sometimes referred to as having "enough points" in the literature.
### With irreducible closed sets
A closed set is irreducible if it cannot be written as the union of two proper closed subsets. A space is sober if every nonempty irreducible closed subset is the closure of a unique point.
### In terms of morphisms of frames and locales
A topological space X is sober if every map from its partially ordered set of open subsets to
$$
\{0,1\}
$$
that preserves all joins and all finite meets is the inverse image of a unique continuous function from the one-point space to X.
This may be viewed as a correspondence between the notion of a point in a locale and a point in a topological space, which is the motivating definition.
|
https://en.wikipedia.org/wiki/Sober_space
|
passage: They are related to, but not the same as the seven crystal systems.
Overview of common lattice systems and the 14 Bravais lattices:

| Crystal family | Lattice system | Point group (Schönflies notation) | Primitive (P) | Base-centered (S) | Body-centered (I) | Face-centered (F) |
|---|---|---|---|---|---|---|
| Triclinic (a) | Triclinic | C_i | aP | | | |
| Monoclinic (m) | Monoclinic | C_2h | mP | mS | | |
| Orthorhombic (o) | Orthorhombic | D_2h | oP | oS | oI | oF |
| Tetragonal (t) | Tetragonal | D_4h | tP | | tI | |
| Hexagonal (h) | Rhombohedral | D_3d | hR | | | |
| Hexagonal (h) | Hexagonal | D_6h | hP | | | |
| Cubic (c) | Cubic | O_h | cP | | cI | cF |
The most symmetric, the cubic or isometric system, has the symmetry of a cube, that is, it exhibits four threefold rotational axes oriented at 109.5° (the tetrahedral angle) with respect to each other. These threefold axes lie along the body diagonals of the cube. The other six lattice systems are hexagonal, tetragonal, rhombohedral (often confused with the trigonal crystal system), orthorhombic, monoclinic and triclinic.
### Bravais lattices
Bravais lattices, also referred to as space lattices, describe the geometric arrangement of the lattice points, and therefore the translational symmetry of the crystal. The three dimensions of space afford 14 distinct Bravais lattices describing the translational symmetry. All crystalline materials recognized today, not including quasicrystals, fit in one of these arrangements.
|
https://en.wikipedia.org/wiki/Crystal_structure
|
passage: These hierarchical relations participate in the promotion of stereotypes about people and groups, sometimes based on subjective criteria. Social categories can encourage people to associate stereotypes to groups of people. Associating stereotypes to a group, and to people who belong to this group, can lead to forms of discrimination towards people of this group. The perception of a group and the stereotypes associated with it have an impact on social relations and activities.
Some social categories have more weight than others in society. For instance, in history and still today, the category of "race" is one of the first categories used to sort people. However, only a few categories of race are commonly used, such as "Black", "White", and "Asian", which reduces the multitude of ethnicities to a few categories based mostly on people's skin color.
The process of sorting people creates a vision of the other as 'different', leading to the dehumanization of people. Scholars discuss intergroup relations using the concept of social identity theory, developed by H. Tajfel. Indeed, in history, many examples of social categorization have led to forms of domination or violence from a dominant group toward a dominated group. Periods of colonisation are examples of times when people from one group chose to dominate and control people belonging to other groups because they considered them inferior. Racism, discrimination and violence are consequences of social categorization and can occur because of it. When people see others as different, they tend to develop hierarchical relations with other groups.
## Miscategorization
There cannot be categorization without the possibility of miscategorization.
|
https://en.wikipedia.org/wiki/Cognitive_categorization
|
passage: This also holds for complex s (in this case the integral is to be understood as a contour integral, for example along the straight segment from 0 to s) because
$$
-e^{s-x} \sum_{j=0}^{np-1} f_i^{(j)}(x)
$$
is a primitive of
$$
e^{s-x} f_i(x)
$$
.
|
https://en.wikipedia.org/wiki/Lindemann%E2%80%93Weierstrass_theorem
|
passage: Let
$$
Y
$$
be the set of all these vectors
$$
\alpha_r, \beta_r
$$
.
For each
$$
\nu \in Y
$$
, let us define two sets of numbers:
$$
R_{\nu}^+=\{r|\alpha_r=\nu \}; \;\;\; R_{\nu}^-=\{r|\beta_r=\nu \}
$$
$$
r \in R_{\nu}^+
$$
if and only if
$$
\nu
$$
is the vector of the input stoichiometric coefficients
$$
\alpha_r
$$
for the rth elementary reaction;
$$
r \in R_{\nu}^-
$$
if and only if
$$
\nu
$$
is the vector of the output stoichiometric coefficients
$$
\beta_r
$$
for the rth elementary reaction.
The principle of semi-detailed balance means that in equilibrium the semi-detailed balance condition holds: for every
$$
\nu \in Y
$$
$$
\sum_{r\in R_{\nu}^-}w_r=\sum_{r\in R_{\nu}^+}w_r
$$
The semi-detailed balance condition is sufficient for the stationarity: it implies that
$$
\frac{d N}{dt}=V \sum_r \gamma_r w_r=0.
$$
For the Markov kinetics the semi-detailed balance condition is just the elementary balance equation and holds for any steady state.
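As an illustration, a small Python check of the semi-detailed balance condition and the resulting stationarity for a toy two-species network; the stoichiometry and rates are invented for the example:

```python
import numpy as np

# Toy network: A <-> B via two elementary reactions
# r1: A -> B (alpha_1 = [1,0], beta_1 = [0,1])
# r2: B -> A (alpha_2 = [0,1], beta_2 = [1,0])
alpha = np.array([[1, 0], [0, 1]])  # input stoichiometric vectors, one row per reaction
beta = np.array([[0, 1], [1, 0]])   # output stoichiometric vectors
gamma = beta - alpha                # net stoichiometric vectors
w = np.array([2.0, 2.0])            # equilibrium reaction rates (invented)

# Semi-detailed balance: for every vector nu among the alphas and betas,
# the total rate of reactions producing nu equals that of reactions consuming nu.
Y = {tuple(v) for v in np.vstack([alpha, beta])}
sdb = all(
    np.isclose(sum(w[r] for r in range(len(w)) if tuple(beta[r]) == nu),
               sum(w[r] for r in range(len(w)) if tuple(alpha[r]) == nu))
    for nu in Y)

# Stationarity follows: dN/dt is proportional to sum_r gamma_r * w_r = 0.
print(sdb, gamma.T @ w)  # True [0. 0.]
```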
|
https://en.wikipedia.org/wiki/Detailed_balance
|
passage: (See Appendix A.1 for the basic definitions for the watershed-based grey-level blob detection algorithm.) T. Lindeberg and J.-O. Eklundh, "On the computation of a scale-space primal sketch", Journal of Visual Communication and Image Representation, vol. 2, pp. 55–78, Mar. 1991. More detailed treatments of applications of grey-level blob detection and the scale-space primal sketch to computer vision and medical image analysis are given in: Lindeberg, T.: "Detecting salient blob-like image structures and their scales with a scale-space primal sketch: a method for focus-of-attention", International Journal of Computer Vision, 11(3), 283–318, 1993; Lindeberg, T., Lidberg, P. and Roland, P. E.: "Analysis of Brain Activation Patterns Using a 3-D Scale-Space Primal Sketch", Human Brain Mapping, vol. 7, no. 3, pp. 166–194, 1999; Mangin, J.-F., Rivière, D., Coulon, O., Poupon, C., Cachia, A., Cointepas, Y., Poline, J.-B., Le Bihan, D., Régis, J., Papadopoulos-Orfanos, D.: "Coordinate-based versus structural approaches to brain image analysis", Artificial Intelligence in Medicine 30(2): 177–197 (2004).
## Maximally stable extremal regions (MSER)
Matas et al. (2002) were interested in defining image descriptors that are robust under perspective transformations.
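For a concrete starting point, OpenCV ships an MSER implementation; a minimal sketch follows. The parameter values are arbitrary, and the exact Python binding signature should be verified against the installed OpenCV version:

```python
import cv2

# Load a grayscale image and detect maximally stable extremal regions.
img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
assert img is not None, "scene.png not found"

# Positional arguments: delta, min_area, max_area (illustrative values).
mser = cv2.MSER_create(5, 60, 14400)
regions, bboxes = mser.detectRegions(img)
print(f"{len(regions)} maximally stable extremal regions")
```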
|
https://en.wikipedia.org/wiki/Blob_detection
|
passage: In mathematical logic, specifically in the discipline of model theory, the Fraïssé limit (also called the Fraïssé construction or Fraïssé amalgamation) is a method used to construct (infinite) mathematical structures from their (finite) substructures. It is a special example of the more general concept of a direct limit in a category. The technique was developed in the 1950s by its namesake, French logician Roland Fraïssé.
The main point of Fraïssé's construction is to show how one can approximate a (countable) structure by its finitely generated substructures. Given a class
$$
\mathbf{K}
$$
of finite relational structures, if
$$
\mathbf{K}
$$
satisfies certain properties (described below), then there exists a unique countable structure
$$
\operatorname{Flim}(\mathbf{K})
$$
, called the Fraïssé limit of
$$
\mathbf{K}
$$
, which contains all the elements of
$$
\mathbf{K}
$$
as substructures.
The general study of Fraïssé limits and related notions is sometimes called Fraïssé theory. This field has seen wide applications to other parts of mathematics, including topological dynamics, functional analysis, and Ramsey theory.
## Finitely generated substructures and age
Fix a language
$$
\mathcal{L}
$$
. By an
$$
\mathcal{L}
$$
-structure, we mean a logical structure having signature
$$
\mathcal{L}
$$
.
|
https://en.wikipedia.org/wiki/Fra%C3%AFss%C3%A9_limit
|
passage: Pressurized water is used in water blasting and water jet cutters. High pressure water guns are used for precise cutting. It works very well, is relatively safe, and is not harmful to the environment. It is also used in the cooling of machinery to prevent overheating, or prevent saw blades from overheating.
Water is also used in many industrial processes and machines, such as the steam turbine and heat exchanger, in addition to its use as a chemical solvent. Discharge of untreated water from industrial uses is pollution. Pollution includes discharged solutes (chemical pollution) and discharged coolant water (thermal pollution). Industry requires pure water for many applications and uses a variety of purification techniques both in water supply and discharge.
#### Food processing
Boiling, steaming, and simmering are popular cooking methods that often require immersing food in water or its gaseous state, steam. Water is also used for dishwashing. Water also plays many critical roles within the field of food science.
Solutes such as salts and sugars found in water affect the physical properties of water. The boiling and freezing points of water are affected by solutes, as well as air pressure, which is in turn affected by altitude. Water boils at lower temperatures with the lower air pressure that occurs at higher elevations. One mole of sucrose (sugar) per kilogram of water raises the boiling point of water by about 0.51 °C, and one mole of salt per kg raises the boiling point by about 1.02 °C; similarly, increasing the number of dissolved particles lowers water's freezing point.
|
https://en.wikipedia.org/wiki/Water
|
passage: ## Important results
Important results include the Bolzano–Weierstrass and Heine–Borel theorems, the intermediate value theorem and mean value theorem, Taylor's theorem, the fundamental theorem of calculus, the Arzelà-Ascoli theorem, the Stone-Weierstrass theorem, Fatou's lemma, and the monotone convergence and dominated convergence theorems.
## Generalizations and related areas of mathematics
Various ideas from real analysis can be generalized from the real line to broader or more abstract contexts. These generalizations link real analysis to other disciplines and subdisciplines. For instance, generalization of ideas like continuous functions and compactness from real analysis to metric spaces and topological spaces connects real analysis to the field of general topology, while generalization of finite-dimensional Euclidean spaces to infinite-dimensional analogs led to the concepts of Banach spaces and Hilbert spaces and, more generally to functional analysis. Georg Cantor's investigation of sets and sequence of real numbers, mappings between them, and the foundational issues of real analysis gave birth to naive set theory. The study of issues of convergence for sequences of functions eventually gave rise to Fourier analysis as a subdiscipline of mathematical analysis. Investigation of the consequences of generalizing differentiability from functions of a real variable to ones of a complex variable gave rise to the concept of holomorphic functions and the inception of complex analysis as another distinct subdiscipline of analysis.
|
https://en.wikipedia.org/wiki/Real_analysis
|
passage: When is infinite-dimensional the topology on is strictly coarser than the strong dual topology
Suppose that is a locally convex Hausdorff space and that is its completion. If then is strictly finer than
Any equicontinuous subset in the dual of a separable Hausdorff locally convex vector space is metrizable in the topology.
If is locally convex then a subset is -bounded if and only if there exists a barrel in such that
### Compact-convex convergence
If
$$
X
$$
is a Fréchet space, then the topology of compact-convex convergence coincides with the topology of compact convergence:
$$
\gamma\left(X', X\right) = c\left(X', X\right).
$$
### Compact convergence
If
$$
X
$$
is a Fréchet space or a LF-space then
$$
c(X',X)
$$
is complete.
Suppose that
$$
X
$$
is a metrizable topological vector space and that
$$
W' \subseteq X'.
$$
If the intersection of
$$
W'
$$
with every equicontinuous subset of
$$
X'
$$
is weakly-open, then
$$
W'
$$
is open in
$$
c(X',X).
$$
### Precompact convergence
Banach–Alaoglu theorem: An equicontinuous subset
$$
K \subseteq X'
$$
has compact closure in the topology of uniform convergence on precompact sets.
|
https://en.wikipedia.org/wiki/Polar_topology
|
passage: Some other projects that are not "pure" database file systems but that use some aspects of a database file system:
- Many Web content management systems use a relational DBMS to store and retrieve files. For example, XHTML files are stored as XML or text fields, while image files are stored as blob fields; SQL SELECT (with optional XPath) statements retrieve the files and allow the use of more sophisticated logic and richer information associations than "usual file systems." Many CMSs also have the option of storing only metadata within the database, with the standard filesystem used to store the content of files.
- Very large file systems, embodied by applications like Apache Hadoop and Google File System, use some database file system concepts.
### Transactional file systems
Some programs need to either make multiple file system changes, or, if one or more of the changes fail for any reason, make none of the changes. For example, a program which is installing or updating software may write executables, libraries, and/or configuration files. If some of the writing fails and the software is left partially installed or updated, the software may be broken or unusable. An incomplete update of a key system utility, such as the command shell, may leave the entire system in an unusable state.
Transaction processing introduces the atomicity guarantee, ensuring that operations inside of a transaction are either all committed or the transaction can be aborted and the system discards all of its partial results. This means that if there is a crash or power failure, after recovery, the stored state will be consistent.
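A common way to get this all-or-nothing behavior for a single file, even without a transactional file system, is the write-to-temporary-then-rename pattern; a Python sketch (the target file name is illustrative):

```python
import os
import tempfile

def atomic_write(path: str, data: bytes) -> None:
    """Write a file all-or-nothing: stage into a temp file in the same
    directory, fsync it, then atomically replace the target. os.replace
    is atomic within a single filesystem, so a crash leaves either the
    old file or the new one, never a partial write."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())
        os.replace(tmp, path)  # a crash before this line leaves the old file intact
    except BaseException:
        os.unlink(tmp)
        raise

atomic_write("config.json", b'{"version": 2}')
```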
|
https://en.wikipedia.org/wiki/File_system
|
passage: A student's rank in his graduation class involves the use of an ordinal scale. One has to be very careful in making a statement about scores based on ordinal scales. For instance, if Devi's position in his class is 10th and Ganga's position is 40th, it cannot be said that Devi's position is four times as good as that of Ganga.
Ordinal scales only permit the ranking of items from highest to lowest. Ordinal measures have no absolute values, and the real differences between adjacent ranks may not be equal. All that can be said is that one person is higher or lower on the scale than another, but more precise comparisons cannot be made. Thus, the use of an ordinal scale implies a statement of "greater than" or "less than" (an equality statement is also acceptable) without our being able to state how much greater or less. The real difference between ranks 1 and 2, for instance, may be more or less than the difference between ranks 5 and 6. Since the numbers of this scale have only a rank meaning, the appropriate measure of central tendency is the median. A percentile or quartile measure is used for measuring dispersion. Correlations are restricted to various rank order methods. Measures of statistical significance are restricted to the non-parametric methods (R. M. Kothari, 2004).
### Central tendency
The median, i.e. middle-ranked, item is allowed as the measure of central tendency; however, the mean (or average) as the measure of central tendency is not allowed. The mode is allowed.
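A quick Python illustration of which summary statistics are admissible for ordinal data (the ranks are invented):

```python
from statistics import median, multimode

# Ranks are ordinal: median and mode are meaningful, the mean is not.
ranks = [1, 2, 2, 3, 5, 8, 10, 40]
print(median(ranks))     # 4.0 -> the middle-ranked position
print(multimode(ranks))  # [2] -> the most frequent rank
# mean(ranks) would implicitly assume equal spacing between adjacent
# ranks, which ordinal data does not guarantee.
```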
|
https://en.wikipedia.org/wiki/Level_of_measurement
|
passage: A web shell is a shell-like interface that enables a web server to be remotely accessed, often for the purposes of cyberattacks. A web shell is unique in that a web browser is used to interact with it.
A web shell can be written in any programming language that the server supports. Web shells are most commonly written in PHP due to the widespread usage of PHP for web applications, though Active Server Pages, ASP.NET, Python, Perl, Ruby, and Unix shell scripts are also used.
Using network monitoring tools, an attacker can find vulnerabilities that can potentially allow delivery of a web shell. These vulnerabilities are often present in applications that are run on a web server.
An attacker can use a web shell to issue shell commands, perform privilege escalation on the web server, and upload, delete, download, and execute files to and from the web server.
## General usage
Web shells are used in attacks mostly because they are multi-purpose and difficult to detect. They are commonly used for:
- Data theft
- Infecting website visitors (watering hole attacks)
- Website defacement by modifying files with malicious intent
- Launching distributed denial-of-service (DDoS) attacks
- Relaying commands into parts of the network that are inaccessible over the Internet
- Serving as a command-and-control base, for example as a bot in a botnet system, or as a way to compromise the security of additional external networks
Web shells give hackers the ability to steal information, corrupt data, and upload malware that is more damaging to a system.
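On the defensive side, a naive indicator scan illustrates why detection is hard: simple shells often combine an eval-style sink with request input, but this heuristic misses anything obfuscated. A Python sketch (the web root path and patterns are illustrative; real detection needs file-integrity monitoring and behavioural analysis):

```python
import os
import re

# Flag PHP files that pass request parameters into code-execution functions.
SUSPICIOUS = re.compile(
    rb"(eval|assert|system|passthru|shell_exec)\s*\(.*\$_(GET|POST|REQUEST)",
    re.IGNORECASE | re.DOTALL)

def scan_webroot(root: str):
    for dirpath, _, files in os.walk(root):
        for name in files:
            if name.endswith((".php", ".phtml")):
                path = os.path.join(dirpath, name)
                with open(path, "rb") as f:
                    if SUSPICIOUS.search(f.read()):
                        yield path

for hit in scan_webroot("/var/www/html"):
    print("possible web shell:", hit)
```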
|
https://en.wikipedia.org/wiki/Web_shell
|
passage: ### Vector form
Given a normalized light vector
$$
\vec{l}
$$
(pointing from the light source toward the surface) and a normalized plane normal vector
$$
\vec{n}
$$
, one can work out the normalized reflected and refracted rays, via the cosines of the angle of incidence
$$
\theta_1
$$
and angle of refraction
$$
\theta_2
$$
, without explicitly using the sine values or any trigonometric functions or angles:
$$
\cos\theta_1 = -\vec{n}\cdot \vec{l}
$$
Note:
$$
\cos\theta_1
$$
must be positive, which it will be if
$$
\vec{n}
$$
is the normal vector that points from the surface toward the side where the light is coming from, the region with index
$$
n_1
$$
. If
$$
\cos\theta_1
$$
is negative, then
$$
\vec{n}
$$
points to the side without the light, so start over with
$$
\vec{n}
$$
replaced by its negative.
$$
\vec{v}_{\mathrm{reflect}}=\vec{l} + 2\cos\theta_1 \vec{n}
$$
This reflected direction vector points back toward the side of the surface where the light came from.
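A short Python sketch of the reflection computation just described, including the normal-flipping check (the example vectors are illustrative):

```python
import numpy as np

def reflect(l, n):
    """Reflected direction for a unit incident vector l (pointing toward the
    surface) and unit normal n (pointing toward the incoming side), using
    v = l + 2*cos(theta1)*n with cos(theta1) = -n.l, as above."""
    l, n = np.asarray(l, float), np.asarray(n, float)
    cos1 = -np.dot(n, l)
    if cos1 < 0:               # normal points away from the light: flip it
        n, cos1 = -n, -cos1
    return l + 2.0 * cos1 * n

# 45-degree incidence in the x-z plane onto the z = 0 plane:
v = reflect([np.sqrt(0.5), 0.0, -np.sqrt(0.5)], [0.0, 0.0, 1.0])
print(v)  # [0.7071, 0.0, 0.7071] -- mirrored about the normal
```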
|
https://en.wikipedia.org/wiki/Snell%27s_law
|
passage: The objective is to calculate numerical approximations
$$
U_j
$$
to the exact solution
$$
u(t_j)
$$
using a serial time-stepping method (e.g. Runge-Kutta) that has high numerical accuracy (and therefore high computational cost). We refer to this method as the fine solver
$$
\mathcal{F}
$$
, which propagates an initial value
$$
U_j
$$
at time
$$
t_j
$$
to a terminal value
$$
U_{j+1}
$$
at time
$$
t_{j+1}
$$
. The goal is to calculate the solution (with high numerical accuracy) using
$$
\mathcal{F}
$$
such that we obtain
$$
U_{j+1} = \mathcal{F}(t_j,t_{j+1},U_j), \quad \text{where} \quad U_0 = u^0.
$$
The problem with this approach (and the reason for attempting to solve the problem in parallel in the first place) is that computing the solution this way is computationally infeasible in real time.
### How it works
Instead of using a single processor to solve the initial value problem (as is done with classical time-stepping methods), Parareal makes use of
$$
N
$$
processors. The aim is to use
$$
N
$$
processors to solve
$$
N
$$
smaller initial value problems (one on each time slice) in parallel.
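A minimal serial emulation of the parareal iteration for the scalar test problem y' = λy, using the standard predictor-corrector update U_{j+1}^{k+1} = G(U_j^{k+1}) + F(U_j^k) − G(U_j^k), where G is a cheap coarse solver. That update rule is the usual formulation of the method, though it is not spelled out in the excerpt above; the solvers and parameters here are illustrative:

```python
import numpy as np

lam, T, N, K = -1.0, 2.0, 10, 5        # ODE y' = lam*y, N time slices, K iterations
t = np.linspace(0.0, T, N + 1)
dT = t[1] - t[0]

def G(u, dt):                          # coarse solver: one explicit Euler step
    return u + dt * lam * u

def F(u, dt, substeps=100):            # fine solver: many small Euler steps
    h = dt / substeps
    for _ in range(substeps):
        u = u + h * lam * u
    return u

U = np.empty(N + 1)
U[0] = 1.0
for j in range(N):                     # iteration 0: cheap coarse prediction
    U[j + 1] = G(U[j], dT)

for k in range(K):                     # parareal corrections
    Fk = np.array([F(U[j], dT) for j in range(N)])  # independent -> parallelizable
    Gk = np.array([G(U[j], dT) for j in range(N)])
    Unew = np.empty_like(U)
    Unew[0] = U[0]
    for j in range(N):                 # cheap serial coarse sweep
        Unew[j + 1] = G(Unew[j], dT) + Fk[j] - Gk[j]
    U = Unew

print(U[-1], np.exp(lam * T))          # converges toward the fine serial solution
```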
|
https://en.wikipedia.org/wiki/Parareal
|
passage: Tools that accept descriptions of optimizations are called program transformation systems and are beginning to be applied to real software systems such as C++.
Some high-level languages (Eiffel, Esterel) optimize their programs by using an intermediate language.
Grid computing or distributed computing aims to optimize the whole system, by moving tasks from computers with high usage to computers with idle time.
## Time taken for optimization
Sometimes, the time taken to undertake optimization may itself be an issue.
Optimizing existing code usually does not add new features, and worse, it might add new bugs in previously working code (as any change might). Because manually optimized code might sometimes have less "readability" than unoptimized code, optimization might impact maintainability of it as well. Optimization comes at a price and it is important to be sure that the investment is worthwhile.
An automatic optimizer (or optimizing compiler, a program that performs code optimization) may itself have to be optimized, either to further improve the efficiency of its target programs or else speed up its own operation. A compilation performed with optimization "turned on" usually takes longer, although this is usually only a problem when programs are quite large.
In particular, for just-in-time compilers the performance of the run time compile component, executing together with its target code, is the key to improving overall execution speed.
|
https://en.wikipedia.org/wiki/Program_optimization
|
passage: ## History
### The classical era
The study of human physiology as a medical field originates in classical Greece, at the time of Hippocrates (late 5th century BC). Outside of Western tradition, early forms of physiology or anatomy can be reconstructed as having been present at around the same time in China, India and elsewhere. Hippocrates incorporated the theory of humorism, which consisted of four basic substances: earth, water, air and fire. Each substance is known for having a corresponding humor: black bile, phlegm, blood, and yellow bile, respectively. Hippocrates also noted some emotional connections to the four humors, on which Galen would later expand. The critical thinking of Aristotle and his emphasis on the relationship between structure and function marked the beginning of physiology in Ancient Greece. Like Hippocrates, Aristotle took to the humoral theory of disease, which also consisted of four primary qualities in life: hot, cold, wet and dry. Galen (–200 AD) was the first to use experiments to probe the functions of the body. Unlike Hippocrates, Galen argued that humoral imbalances could be located in specific organs, or in the body as a whole. His modification of this theory better equipped doctors to make more precise diagnoses. Galen also built on Hippocrates' idea that emotions were tied to the humors, and added the notion of temperaments: sanguine corresponds with blood; phlegmatic with phlegm; choleric with yellow bile; and melancholic with black bile.
|
https://en.wikipedia.org/wiki/Physiology
|
passage: In addition, biochemical reactions are catalyzed by enzymes which sometimes prefer one isotope over others. For example, oxygenic photosynthesis is catalyzed by RuBisCO, which prefers carbon-12 over carbon-13, resulting in carbon isotope fractionation in the rock record.
### Sedimentary rocks tell a story
Sedimentary rocks preserve remnants of the history of life on Earth in the form of fossils, biomarkers, isotopes, and other traces. The rock record is far from perfect, and the preservation of biosignatures is a rare occurrence. Understanding what factors determine the extent of preservation and the meaning behind what is preserved are important components to detangling the ancient history of the co-evolution of life and Earth. The sedimentary record allows scientists to observe changes in life and Earth in composition over time and sometimes even date major transitions, like extinction events.
Some classic examples of geobiology in the sedimentary record include stromatolites and banded-iron formations. The role of life in the origin of both of these is a heavily debated topic.
### Life is fundamentally chemistry
The first life arose from abiotic chemical reactions. When this happened, how it happened, and even what planet it happened on are uncertain. However, life follows the rules of and arose from lifeless chemistry and physics. It is constrained by principles such as thermodynamics. This is an important concept in the field because it represents the epitome of the interconnectedness, if not sameness, of life and Earth.
|
https://en.wikipedia.org/wiki/Geobiology
|
passage: The square of the zeta function gives the number of elements in an interval:
$$
\zeta^2(x,y) = \sum_{z\in [x,y]} \zeta(x,z)\,\zeta(z,y) = \sum_{z\in [x,y]} 1 = \#[x,y].
$$
## Examples
Positive integers ordered by divisibility
The convolution associated to the incidence algebra for intervals [1, n] becomes the Dirichlet convolution, hence the Möbius function is μ(a, b) = μ(b/a), where the second "μ" is the classical Möbius function introduced into number theory in the 19th century.
Finite subsets of some set E, ordered by inclusion
The Möbius function is
$$
\mu(S,T)=(-1)^{\left|T\smallsetminus S\right|}
$$
whenever S and T are finite subsets of E with S ⊆ T, and Möbius inversion is called the principle of inclusion-exclusion.
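A small Python sketch computing the Möbius function of a finite poset directly from the defining recursion μ(x, x) = 1, μ(x, y) = −Σ_{x ≤ z < y} μ(x, z), checked on the subsets-by-inclusion example. The implementation assumes the elements are listed in an order compatible with the partial order (smaller sets first):

```python
from itertools import combinations

def moebius(leq, elems):
    """Möbius function of a finite poset; `elems` must be a linear
    extension of the order `leq` so sub-results exist when needed."""
    mu = {}
    for x in elems:
        for y in elems:
            if not leq(x, y):
                continue
            if x == y:
                mu[x, y] = 1
            else:
                mu[x, y] = -sum(mu[x, z] for z in elems
                                if leq(x, z) and leq(z, y) and z != y)
    return mu

# Subsets of {0, 1} ordered by inclusion: mu(S, T) = (-1)^{|T \ S|}
E = [frozenset(c) for r in range(3) for c in combinations(range(2), r)]
mu = moebius(lambda a, b: a <= b, E)
print(mu[frozenset(), frozenset({0, 1})])  # 1 = (-1)^2
```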
|
https://en.wikipedia.org/wiki/Incidence_algebra
|
passage: Finally, war may result from issue indivisibilities.
Game theory could also help predict a nation's responses when there is a new rule or law to be applied to that nation. One example is Peter John Wood's (2013) research looking into what nations could do to help reduce climate change. Wood thought this could be accomplished by making treaties with other nations to reduce greenhouse gas emissions. However, he concluded that this idea could not work because it would create a prisoner's dilemma for the nations.
### Defence science and technology
Game theory has been used extensively to model decision-making scenarios relevant to defence applications. Most studies that have applied game theory in defence settings are concerned with Command and Control Warfare, and can be further classified into studies dealing with (i) Resource Allocation Warfare, (ii) Information Warfare, (iii) Weapons Control Warfare, and (iv) Adversary Monitoring Warfare. Many of the problems studied are concerned with sensing and tracking, for example a surface ship trying to track a hostile submarine and the submarine trying to evade being tracked, and the interdependent decision making that takes place with regards to bearing, speed, and the sensor technology activated by both vessels.
Such tools, for example, automate the transformation of public vulnerability data into models, allowing defenders to synthesize optimal defence strategies through Stackelberg equilibrium analysis. This approach enhances cyber resilience by enabling defenders to anticipate and counteract attackers' best responses, making game theory increasingly relevant in adversarial cybersecurity environments.
|
https://en.wikipedia.org/wiki/Game_theory
|
passage: ## Difference from centrifugal pseudoforce
The "reactive centrifugal force" discussed in this article is not the same thing as the centrifugal pseudoforce, which is usually what is meant by the term "centrifugal force".
Reactive centrifugal force, being one-half of the reaction pair together with centripetal force, is a concept which applies in any reference frame. This distinguishes it from the inertial or fictitious centrifugal force, which appears only in rotating frames.
| | Reactive centrifugal force | Inertial centrifugal force |
|---|---|---|
| Reference frame | Any | Only rotating frames |
| Exerted by | Bodies undergoing rotation | Acts as if emanating from the rotation axis; it is a so-called fictitious force |
| Exerted upon | The constraint that causes the inward centripetal force | All bodies, moving or not; if moving, Coriolis force is present as well |
| Direction | Opposite to the centripetal force | Away from rotation axis, regardless of path of body |
| Kinetic analysis | Part of an action-reaction pair with a centripetal force as per Newton's third law | Included as a fictitious force in Newton's second law and is never part of an action-reaction pair with a centripetal force |
## Gravitational two-body case
In a two-body rotation, such as a planet and moon rotating about their common center of mass or barycentre, the forces on both bodies are centripetal. In that case, the reaction to the centripetal force of the planet on the moon is the centripetal force of the moon on the planet.
|
https://en.wikipedia.org/wiki/Reactive_centrifugal_force
|
passage: Gauss illustrates the Chinese remainder theorem on a problem involving calendars, namely, "to find the years that have a certain period number with respect to the solar and lunar cycle and the Roman indiction." Gauss introduces a procedure for solving the problem that had already been used by Leonhard Euler but was in fact an ancient method that had appeared several times.
## Statement
Let n1, ..., nk be integers greater than 1, which are often called moduli or divisors. Let us denote by N the product of the ni.
The Chinese remainder theorem asserts that if the ni are pairwise coprime, and if a1, ..., ak are integers such that 0 ≤ ai < ni for every i, then there is one and only one integer x, such that 0 ≤ x < N and the remainder of the Euclidean division of x by ni is ai for every i.
This may be restated as follows in terms of congruences:
If the
$$
n_i
$$
are pairwise coprime, and if a1, ..., ak are any integers, then the system
$$
\begin{align}
x &\equiv a_1 \pmod{n_1} \\
&\,\,\,\vdots \\
x &\equiv a_k \pmod{n_k},
\end{align}
$$
has a solution, and any two solutions, say x1 and x2, are congruent modulo N, that is, x1 ≡ x2 (mod N).
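A compact Python implementation of the constructive statement above, building x by successive substitution; `pow(N, -1, n)` (Python 3.8+) supplies the modular inverse that pairwise coprimality guarantees to exist:

```python
def crt(remainders, moduli):
    """Solve x ≡ a_i (mod n_i) for pairwise-coprime moduli; returns the
    unique x with 0 <= x < N = prod(n_i)."""
    x, N = 0, 1
    for a, n in zip(remainders, moduli):
        # Solve x + N*t ≡ a (mod n) for t via the modular inverse of N mod n.
        t = ((a - x) * pow(N, -1, n)) % n
        x += N * t
        N *= n
    return x

print(crt([2, 3, 2], [3, 5, 7]))  # 23: the classic example, unique mod 105
```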
|
https://en.wikipedia.org/wiki/Chinese_remainder_theorem
|
passage: This amounts to the further constraint that the convolution of
$$
\eta_\varepsilon
$$
with
$$
\eta_\delta
$$
must satisfy
$$
\eta_\varepsilon * \eta_\delta = \eta_{\varepsilon+\delta}
$$
for all
$$
\varepsilon, \delta > 0
$$
. Convolution semigroups in L1 that form a nascent delta function are always an approximation to the identity in the above sense; however, the semigroup condition is quite a strong restriction.
In practice, semigroups approximating the delta function arise as fundamental solutions or Green's functions to physically motivated elliptic or parabolic partial differential equations. In the context of applied mathematics, semigroups arise as the output of a linear time-invariant system. Abstractly, if A is a linear operator acting on functions of x, then a convolution semigroup arises by solving the initial value problem
$$
\begin{cases}
\dfrac{\partial}{\partial t}\eta(t,x) = A\eta(t,x), \quad t>0 \\[5pt]
\displaystyle\lim_{t\to 0^+} \eta(t,x) = \delta(x)
\end{cases}
$$
in which the limit is as usual understood in the weak sense. Setting
$$
\eta_\varepsilon(x) = \eta(\varepsilon, x)
$$
gives the associated nascent delta function.
Some examples of physically important convolution semigroups arising from such a fundamental solution include the following.
|
https://en.wikipedia.org/wiki/Dirac_delta_function
|
passage: This version is often useful in discussions of semi-continuity, which crop up in analysis quite often. An interesting note is that this version subsumes the sequential version by considering sequences as functions from the natural numbers, viewed as a topological subspace of the extended real line, into the space (the closure of N in [−∞, ∞], the extended real number line, is N ∪ {∞}).
## Sequences of sets
The power set ℘(X) of a set X is a complete lattice that is ordered by set inclusion, and so the supremum and infimum of any set of subsets (in terms of set inclusion) always exist. In particular, every subset Y of X is bounded above by X and below by the empty set ∅ because ∅ ⊆ Y ⊆ X. Hence, it is possible (and sometimes useful) to consider superior and inferior limits of sequences in ℘(X) (i.e., sequences of subsets of X).
There are two common ways to define the limit of sequences of sets. In both cases:
- The sequence accumulates around sets of points rather than single points themselves. That is, because each element of the sequence is itself a set, there exist accumulation sets that are somehow nearby to infinitely many elements of the sequence.
- The supremum/superior/outer limit is a set that joins these accumulation sets together. That is, it is the union of all of the accumulation sets.
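For finite prefixes of a sequence of sets, both limits can be approximated directly from the standard formulas limsup A_n = ∩_n ∪_{k≥n} A_k and liminf A_n = ∪_n ∩_{k≥n} A_k; a Python sketch (the example sequence is invented, and only the first few tails are used, as an approximation of the infinite intersection and union):

```python
def limsup(sets, tail_from):
    """Points in infinitely many A_n: intersect the tail unions
    A_n ∪ A_{n+1} ∪ ... for n up to tail_from (< len(sets))."""
    return set.intersection(*[set.union(*sets[n:]) for n in range(tail_from)])

def liminf(sets, tail_from):
    """Points in all but finitely many A_n: union of tail intersections."""
    return set.union(*[set.intersection(*sets[n:]) for n in range(tail_from)])

A = [{1, 2}, {1, 3}] * 4   # 1 is in every set; 2 and 3 appear infinitely often
print(limsup(A, 4))        # {1, 2, 3}
print(liminf(A, 4))        # {1}
```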
|
https://en.wikipedia.org/wiki/Limit_inferior_and_limit_superior
|
passage: A checkweigher can send a signal to the machine to increase or decrease the amount put into a package. This can result in a payback associated with the checkweigher since producers will be better able to control the amount of give-away. See checkweigher case study outlining ground beef and packaging savings.
## Application considerations
Speed and accuracy that can be achieved by a checkweigher is influenced by the following:
- Pack length or diameter
- Pack weight
- Line speed required
- Pack content (solid or liquid)
- Motor technology
- Stabilization time of the weight transducer
- Airflow causing erroneous readings
- Vibrations from machinery causing unnecessary rejects
- Sensitivity to temperature, as the load cells can be temperature sensitive
## Applications
In-motion scales are dynamic machines that can be designed to perform thousands of tasks. Some are used as simple case weighers at the end of the conveyor line to ensure the overall finished package product is within its target weight.
An in-motion conveyor checkweigher can be used to detect missing pieces of a kit, such as a cell phone package that is missing the manual or other collateral. Checkweighers are typically used on the incoming conveyor chain and the output pre-packaging conveyor chain in a poultry processing plant. The bird is weighed when it comes onto the conveyor; then, after processing and washing, the network computer can determine whether or not the bird absorbed too much water, which, as the bird is further processed, will be drained, leaving the bird under its target weight.
|
https://en.wikipedia.org/wiki/Check_weigher
|
passage: An acute triangle has three inscribed squares, each with one side coinciding with part of a side of the triangle and with the square's other two vertices on the remaining two sides of the triangle. (In a right triangle two of these are merged into the same square, so there are only two distinct inscribed squares.) However, an obtuse triangle has only one inscribed square, one of whose sides coincides with part of the longest side of the triangle.
All triangles in which the Euler line is parallel to one side are acute. This property holds for side BC if and only if
$$
(\tan B)(\tan C)=3.
$$
## Inequalities
### Sides
If angle C is obtuse then for sides a, b, and c we have
$$
\frac{c^2}{2} < a^2+b^2 < c^2,
$$
with the left inequality approaching equality in the limit only as the apex angle of an isosceles triangle approaches 180°, and with the right inequality approaching equality only as the obtuse angle approaches 90°.
If the triangle is acute then
$$
a^2+b^2 > c^2, \quad b^2+c^2 > a^2, \quad c^2+a^2 > b^2.
$$
|
https://en.wikipedia.org/wiki/Acute_and_obtuse_triangles
|
passage: The (dynamic and environment responsive) pattern of auxin distribution within the plant is a key factor for plant growth, its reaction to its environment, and specifically for development of plant organs (such as leaves or flowers). It is achieved through very complex and well-coordinated active transport of auxin molecules from cell to cell throughout the plant body—by the so-called polar auxin transport. Thus, a plant can (as a whole) react to external conditions and adjust to them, without requiring a nervous system. Auxins typically act in concert with, or in opposition to, other plant hormones. For example, the ratio of auxin to cytokinin in certain plant tissues determines initiation of root versus shoot buds.
On the molecular level, all auxins are compounds with an aromatic ring and a carboxylic acid group. The most important member of the auxin family is indole-3-acetic acid (IAA), which generates the majority of auxin effects in intact plants and is the most potent native auxin. As the native auxin, its equilibrium is controlled in many ways in plants, from synthesis, through possible conjugation, to degradation of its molecules, always according to the requirements of the situation. Auxin can act in a heat-sensitive manner in many situations, which will in turn affect a plant's phenotypic development.
- Five naturally occurring (endogenous) auxins in plants include indole-3-acetic acid, 4-chloroindole-3-acetic acid, phenylacetic acid, indole-3-butyric acid, and indole-3-propionic acid.
|
https://en.wikipedia.org/wiki/Auxin
|
passage: Obviously, the singleton set
$$
\{x\}
$$
is transcendental over
$$
\Q
$$
and the extension
$$
\Q(x, y)/\Q(x)
$$
is algebraic; hence
$$
\{x\}
$$
is a transcendence basis that does not generate the extension
$$
\Q(x, y)/\Q(x)
$$
. Similarly,
$$
\{y\}
$$
is a transcendence basis that does not generate the whole extension. However, the extension is purely transcendental since, if one sets
$$
t=y/x,
$$
one has
$$
x=t^2
$$
and
$$
y=t^3,
$$
and thus
$$
t
$$
generates the whole extension.
Purely transcendental extensions of an algebraically closed field occur as function fields of rational varieties. The problem of finding a rational parametrization of a rational variety is equivalent to the problem of finding a transcendence basis that generates the whole extension.
## Normal, separable and Galois extensions
An algebraic extension
$$
L/K
$$
is called normal if every irreducible polynomial in K[X] that has a root in L completely factors into linear factors over L. Every algebraic extension F/K admits a normal closure L, which is an extension field of F such that
$$
L/K
$$
is normal and which is minimal with this property.
|
https://en.wikipedia.org/wiki/Field_extension
|
passage: These are orthogonal polynomials with respect to a Sobolev inner product, i.e. an inner product involving derivatives. Including derivatives has major consequences for the polynomials; in general, they no longer share some of the nice features of the classical orthogonal polynomials.
### Orthogonal polynomials with matrices
Orthogonal polynomials with matrices have either coefficients that are matrices or the indeterminate is a matrix.
There are two popular examples: either the coefficients
$$
\{a_i\}
$$
are matrices or
$$
x
$$
is a matrix:
- Variant 1:
$$
P(x)=A_nx^n+A_{n-1}x^{n-1}+\cdots + A_1x +A_0
$$
, where
$$
\{A_{i}\}
$$
are
$$
p\times p
$$
matrices.
- Variant 2:
$$
P(X)=a_nX^n+a_{n-1}X^{n-1}+\cdots + a_1X +a_0I_p
$$
where
$$
X
$$
is a
$$
p\times p
$$
-matrix and
$$
I_p
$$
is the identity matrix.
### Quantum polynomials
Quantum polynomials or q-polynomials are the q-analogs of orthogonal polynomials.
|
https://en.wikipedia.org/wiki/Orthogonal_polynomials
|
passage: Therefore, the geometric mean of a beta distribution with shape parameters α and β is the exponential of the digamma functions of α and β as follows:
$$
G_X =e^{\operatorname{E}[\ln X]}= e^{\psi(\alpha) - \psi(\alpha + \beta)}
$$
While for a beta distribution with equal shape parameters α = β it follows that skewness = 0 and mode = mean = median = 1/2, the geometric mean is less than 1/2. The reason for this is that the logarithmic transformation strongly weights the values of X close to zero: ln(X) tends strongly towards negative infinity as X approaches zero, while ln(X) flattens towards zero as X approaches 1.
Along the line
$$
\alpha = \beta
$$
, the following limits apply:
$$
\begin{align}
&\lim_{\alpha = \beta \to 0} G_X = 0 \\
&\lim_{\alpha = \beta \to \infty} G_X =\tfrac{1}{2}
\end{align}
$$
Following are the limits with one parameter finite (non-zero) and the other approaching these limits:
$$
\begin{align}
\lim_{\beta \to 0} G_X = \lim_{\alpha \to \infty} G_X = 1\\
\lim_{\alpha\to 0} G_X = \lim_{\beta \to \infty} G_X = 0
\end{align}
$$
The accompanying plot shows the difference between the mean and the geometric mean for shape parameters α and β from zero to 2.
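A quick numerical check of the geometric-mean formula, assuming SciPy's `digamma` for ψ (the parameter values and sample size are illustrative):

```python
import numpy as np
from scipy.special import digamma

# Geometric mean of Beta(a, b) via G_X = exp(psi(a) - psi(a + b)),
# compared with a Monte Carlo estimate of exp(E[ln X]).
a, b = 2.0, 2.0
G = np.exp(digamma(a) - digamma(a + b))
samples = np.random.default_rng(0).beta(a, b, 200_000)
print(G, np.exp(np.log(samples).mean()))  # both ~0.4346, below 1/2 for a = b = 2
```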
|
https://en.wikipedia.org/wiki/Beta_distribution
|
passage: An alternative normalization sets
$$
C_n^{(\alpha)}(1)=1
$$
. Assuming this alternative normalization, the derivatives of the Gegenbauer polynomials can be expressed in terms of Gegenbauer polynomials:
$$
\begin{aligned}
\frac{d^q}{dx^q}C_{q+2 j+1}^{(\alpha)}(x)=\frac{2^q(q+2 j+1)!}{(q-1)!\Gamma(q+2 j+2 \alpha+1)} & \sum_{i=0}^j \frac{(2 i+\alpha+1) \Gamma(2 i+2 \alpha+1)}{(2 i+1)!(j-i)!} \\
& \times \frac{\Gamma(q+j+i+\alpha+1)}{\Gamma(j+i+\alpha+2)}(q+j-i-1)!C_{2 i+1}^{(\alpha)}(x)
\end{aligned}
$$
|
https://en.wikipedia.org/wiki/Gegenbauer_polynomials
|
passage: The corresponding continuous transformations of the celestial sphere (excepting the identity) all share the same two fixed points (the North and South poles). They move all other points away from the South pole and toward the North pole (or vice versa), along a family of curves called loxodromes. Each loxodrome spirals infinitely often around each pole.
Parabolic
A parabolic element of is
$$
P_4 = \begin{bmatrix} 1 & \alpha \\ 0 & 1 \end{bmatrix}
$$
and has a single fixed point, ∞, on the Riemann sphere. Under stereographic projection, it appears as an ordinary translation along the real axis.
The spinor map converts this to the matrix (representing a Lorentz transformation)
$$
Q_4 = \begin{bmatrix}
1+\frac{|\alpha|^2}{2} & \operatorname{Re}\alpha & -\operatorname{Im}\alpha & -\frac{|\alpha|^2}{2} \\
\operatorname{Re}\alpha & 1 & 0 & -\operatorname{Re}\alpha \\
-\operatorname{Im}\alpha & 0 & 1 & \operatorname{Im}\alpha \\
\frac{|\alpha|^2}{2} & \operatorname{Re}\alpha & -\operatorname{Im}\alpha & 1-\frac{|\alpha|^2}{2}
\end{bmatrix}~.
$$
This generates a two-parameter abelian subgroup, which is obtained by considering α a complex variable rather than a constant. The corresponding continuous transformations of the celestial sphere (except for the identity transformation) move points along a family of circles that are all tangent at the North pole to a certain great circle. All points other than the North pole itself move along these circles.
Parabolic Lorentz transformations are often called null rotations.
|
https://en.wikipedia.org/wiki/Lorentz_group
|
passage: Additional information can be found by searching their databases (for an example, the GLUT4 transporter, see citation). These profiles indicate the level of DNA expression (and hence RNA produced) of a certain protein in a certain tissue, and are color-coded accordingly in the images located in the Protein Box on the right side of each Wikipedia page.
### Protein quantification
For genes encoding proteins, the expression level can be directly assessed by a number of methods with some clear analogies to the techniques for mRNA quantification.
One of the most commonly used methods is to perform a Western blot against the protein of interest. This gives information on the size of the protein in addition to its identity. A sample (often cellular lysate) is separated on a polyacrylamide gel, transferred to a membrane and then probed with an antibody to the protein of interest. The antibody can either be conjugated to a fluorophore or to horseradish peroxidase for imaging and/or quantification. The gel-based nature of this assay makes quantification less accurate, but it has the advantage of being able to identify later modifications to the protein, for example proteolysis or ubiquitination, from changes in size.
### mRNA-protein correlation
While transcription directly reflects gene expression, the copy number of mRNA molecules does not directly correlate with the number of protein molecules translated from mRNA. Quantification of both protein and mRNA permits a correlation of the two levels. Regulation on each step of gene expression can impact the correlation, as shown for regulation of translation or protein stability.
|
https://en.wikipedia.org/wiki/Gene_expression
|
passage: ## In database query languages
Since the 1980s Oracle Database has implemented a proprietary SQL extension `CONNECT BY... START WITH` that allows the computation of a transitive closure as part of a declarative query. The SQL 3 (1999) standard added a more general `WITH RECURSIVE` construct also allowing transitive closures to be computed inside the query processor; as of 2011 the latter is implemented in IBM Db2, Microsoft SQL Server, Oracle, PostgreSQL, and MySQL (v8.0+). SQLite released support for this in 2014.
Datalog also implements transitive closure computations.
MariaDB implements Recursive Common Table Expressions, which can be used to compute transitive closures. This feature was introduced in release 10.2.2 of April 2016.
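A runnable illustration of the `WITH RECURSIVE` construct via Python's built-in SQLite driver (the table and data are invented for the example):

```python
import sqlite3

# Transitive closure of an edge relation via a recursive common table
# expression; UNION deduplicates rows, so the recursion terminates.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE edge(src TEXT, dst TEXT)")
con.executemany("INSERT INTO edge VALUES (?, ?)",
                [("a", "b"), ("b", "c"), ("c", "d")])
closure = con.execute("""
    WITH RECURSIVE tc(src, dst) AS (
        SELECT src, dst FROM edge
        UNION
        SELECT tc.src, edge.dst FROM tc JOIN edge ON tc.dst = edge.src
    )
    SELECT src, dst FROM tc ORDER BY src, dst
""").fetchall()
print(closure)
# [('a','b'), ('a','c'), ('a','d'), ('b','c'), ('b','d'), ('c','d')]
```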
## Algorithms
Efficient algorithms for computing the transitive closure of the adjacency relation of a graph can be found in . Reducing the problem to multiplications of adjacency matrices achieves the time complexity of matrix multiplication,
$$
O(n^{2.3728596})
$$
. However, this approach is not practical since both the constant factors and the memory consumption for sparse graphs are high. The problem can also be solved by the Floyd–Warshall algorithm in
$$
O(n^3)
$$
, or by repeated breadth-first search or depth-first search starting from each node of the graph.
|
https://en.wikipedia.org/wiki/Transitive_closure
|
passage: The reversal potential for Cl- in many neurons is quite negative, nearly equal to the resting potential. Opening Cl- channels tends to buffer the membrane potential, but this effect is countered when the membrane starts to depolarize, allowing more negatively charged Cl- ions to enter the cell. Consequently, it becomes more difficult to depolarize the membrane and excite the cell when Cl- channels are open. Similar effects result from the opening of K+ channels. The significance of inhibitory neurotransmitters is evident from the effects of toxins that impede their activity. For instance, strychnine binds to glycine receptors, blocking the action of glycine and leading to muscle spasms, convulsions, and death.
### Interfaces
Synapses can be classified by the type of cellular structures serving as the pre- and post-synaptic components. The vast majority of synapses in the mammalian nervous system are classical axo-dendritic synapses (axon synapsing upon a dendrite), however, a variety of other arrangements exist. These include but are not limited to axo-axonic, dendro-dendritic, axo-secretory, axo-ciliary, somato-dendritic, dendro-somatic, and somato-somatic synapses.
In fact, the axon can synapse onto a dendrite, onto a cell body, or onto another axon or axon terminal, as well as into the bloodstream or diffusely into the adjacent nervous tissue.
|
https://en.wikipedia.org/wiki/Synapse
|
passage: We can find similar relations between the other Poisson ratios.
### Transversely isotropic
Transversely isotropic materials have a plane of isotropy in which the elastic properties are isotropic. If we assume that this plane of isotropy is the yz-plane, then Hooke's law takes the form
$$
\begin{bmatrix}
\epsilon_{xx} \\ \epsilon_{yy} \\ \epsilon_{zz} \\ 2\epsilon_{yz} \\ 2\epsilon_{zx} \\ 2\epsilon_{xy}
\end{bmatrix}
= \begin{bmatrix}
\tfrac{1}{E_x} & -\tfrac{\nu_{yx}}{E_y} & -\tfrac{\nu_{zx}}{E_z} & 0 & 0 & 0 \\
-\tfrac{\nu_{xy}}{E_x} & \tfrac{1}{E_y} & -\tfrac{\nu_{zy}}{E_z} & 0 & 0 & 0 \\
-\tfrac{\nu_{xz}}{E_x} & -\tfrac{\nu_{yz}}{E_y} & \tfrac{1}{E_z} & 0 & 0 & 0 \\
0 & 0 & 0 & \tfrac{1}{G_{yz}} & 0 & 0 \\
0 & 0 & 0 & 0 & \tfrac{1}{G_{zx}} & 0 \\
0 & 0 & 0 & 0 & 0 & \tfrac{1}{G_{xy}}
\end{bmatrix}
\begin{bmatrix}
\sigma_{xx} \\ \sigma_{yy} \\ \sigma_{zz} \\ \sigma_{yz} \\ \sigma_{zx} \\ \sigma_{xy}
\end{bmatrix}
$$
where we have used the yz-plane of isotropy to reduce the number of constants, that is,
$$
E_y = E_z,\qquad \nu_{xy} = \nu_{xz},\qquad \nu_{yx} = \nu_{zx} .
$$
The symmetry of the stress and strain tensors implies that
$$
\frac{\nu_{xy}}{E_x} = \frac{\nu_{yx}}{E_y} ,\qquad \nu_{yz} = \nu_{zy} .
$$
This leaves us with six independent constants E_x, E_y, G_{xy}, G_{yz}, ν_{xy}, ν_{yz}. However, transverse isotropy gives rise to a further constraint between G_{yz}, E_y, and ν_{yz}, which is
$$
G_{yz} = \frac{E_y}{2\left(1+\nu_{yz}\right)} .
$$
Therefore, there are five independent elastic material properties, two of which are Poisson's ratios.
|
https://en.wikipedia.org/wiki/Poisson%27s_ratio
|
passage: - Tcl 8.5: added new datatypes, a new extension repository, bignums, and lambdas.
- Tcl 8.6 (December 2012): added a built-in dynamic object system, TclOO, and stackless evaluation.
- Tcl 9.0 (September 2024): added 64-bit capabilities and support for the full Unicode code point range; uses epoll and kqueue.
Tcl conferences and workshops are held in both the United States and Europe. Several corporations, including FlightAware, use Tcl as part of their products.
## Features
Tcl's features include
- All operations are commands, including language structures. They are written in prefix notation.
- Commands commonly accept a variable number of arguments (are variadic).
- Everything can be dynamically redefined and overridden. Actually, there are no keywords, so even control structures can be added or changed, although this is not advisable.
- All data types can be manipulated as strings, including source code. Internally, variables have types like integer and double, but conversion is purely automatic.
- Variables are not declared, but assigned to. Use of a non-defined variable results in an error.
- Fully dynamic, class-based object system, TclOO, including advanced features such as meta-classes, filters, and mixins.
- Event-driven interface to sockets and files. Time-based and user-defined events are also possible.
- Variable visibility restricted to lexical (static) scope by default, but `uplevel` and `upvar` allow procs to interact with the enclosing functions' scopes.
|
https://en.wikipedia.org/wiki/Tcl
|
passage: and that
$$
\mathbb{P}
$$
-almost everywhere
$$
Y(\omega)\neq 0
$$
.
With the variable
$$
Y
$$
we define a probability
$$
\mathbb{Q}
$$
that satisfies
$$
\mathbb{E}_{\mathbb{P}}[X] = \mathbb{E}_{\mathbb{Q}}\left[\frac{X}{Y}\right].
$$
The variable
$$
X/Y
$$
will thus be sampled under
$$
\mathbb{Q}
$$
to estimate
$$
\mathbb{E}_{\mathbb{P}}[X]
$$
as above and this estimation is improved when
$$
\operatorname{var}_{\mathbb{Q}}\left[\frac{X}{Y}\right] < \operatorname{var}_{\mathbb{P}}[X].
$$
When
$$
X
$$
is of constant sign over
$$
\Omega
$$
, the best variable
$$
Y
$$
would clearly be
$$
Y^*=\frac{X}{\mathbb{E}_{\mathbb{P}}[X]}\geq 0
$$
, so that
$$
X/Y^*
$$
is the searched constant
$$
\mathbb{E}_{\mathbb{P}}[X]
$$
and a single sample under
$$
\mathbb{Q}^*
$$
suffices to give its value.
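A concrete numerical sketch in Python: estimating a rare-event probability under a standard normal P by sampling from a shifted proposal Q and reweighting by the density ratio dP/dQ, which plays the role of 1/Y above (the shift and sample size are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Estimate E_P[X] with X = 1{Z > 4} under P = N(0, 1): a rare event, so the
# naive estimator has huge relative variance. Sample from Q = N(4, 1) instead.
def p(z):
    return np.exp(-z**2 / 2) / np.sqrt(2 * np.pi)

def q(z):
    return np.exp(-(z - 4)**2 / 2) / np.sqrt(2 * np.pi)

n = 100_000
z = rng.normal(4.0, 1.0, n)            # draws from Q
est = np.mean((z > 4) * p(z) / q(z))   # E_Q[X * dP/dQ] = E_P[X]
print(est)                             # ~3.17e-5, vs the exact P(Z > 4) = 3.167e-5
```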
|
https://en.wikipedia.org/wiki/Importance_sampling
|
passage: For the case of N bosons in a one-dimensional harmonic trap, the degeneracy scales as the number of ways to partition an integer n using integers less than or equal to N.
$$
g_n = p(N_{-},n)
$$
This arises due to the constraint of putting n quanta into a state ket where
$$
\sum_{k=0}^\infty k n_k = n
$$
and
$$
\sum_{k=0}^\infty n_k = N
$$
, which are the same constraints as in integer partition.
### Example: 3D isotropic harmonic oscillator
The Schrödinger equation for a particle in a spherically-symmetric three-dimensional harmonic oscillator can be solved explicitly by separation of variables. This procedure is analogous to the separation performed in the hydrogen-like atom problem, but with a different spherically symmetric potential
$$
V(r) = {1\over 2} \mu \omega^2 r^2,
$$
where μ is the mass of the particle. Because m will be used below for the magnetic quantum number, mass is indicated by μ instead of m, as earlier in this article.
|
https://en.wikipedia.org/wiki/Quantum_harmonic_oscillator
|
passage: Such a Green's function is usually a sum of the free-field Green's function and a harmonic solution to the differential equation.
### Existence
The Dirichlet problem for harmonic functions always has a solution, and that solution is unique, when the boundary is sufficiently smooth and
$$
f(s)
$$
is continuous. More precisely, it has a solution when
$$
\partial D \in C^{1,\alpha}
$$
for some
$$
\alpha \in (0, 1)
$$
, where
$$
C^{1,\alpha}
$$
denotes the Hölder condition.
## Example: the unit disk in two dimensions
In some simple cases the Dirichlet problem can be solved explicitly. For example, the solution to the Dirichlet problem for the unit disk in R2 is given by the Poisson integral formula.
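For reference, a standard statement of that formula: for continuous boundary data f on the unit circle, the harmonic extension at the interior point re^{iθ} is

$$
u(re^{i\theta}) = \frac{1}{2\pi}\int_0^{2\pi} \frac{1-r^2}{1 - 2r\cos(\theta-\varphi) + r^2}\, f(e^{i\varphi})\, d\varphi, \qquad 0 \le r < 1.
$$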
|
https://en.wikipedia.org/wiki/Dirichlet_problem
|
passage: Others have argued in favor of "good value" or conferring significant health benefits even if the measures do not save money. Furthermore, preventive health services are often described as one entity though they comprise a myriad of different services, each of which can individually lead to net costs, savings, or neither. Greater differentiation of these services is necessary to fully understand both the financial and health effects.
A 2010 study reported that in the United States, vaccinating children, cessation of smoking, daily prophylactic use of aspirin, and screening of breast and colorectal cancers had the most potential to prevent premature death. Preventive health measures that resulted in savings included vaccinating children and adults, smoking cessation, daily use of aspirin, and screening for issues with alcoholism, obesity, and vision failure. These authors estimated that if usage of these services in the United States increased to 90% of the population, there would be net savings of $3.7 billion, which comprised only about -0.2% of the total 2006 United States healthcare expenditure. Despite the potential for decreasing healthcare spending, utilization of healthcare resources in the United States still remains low, especially among Latinos and African-Americans. Overall, preventive services are difficult to implement because healthcare providers have limited time with patients and must integrate a variety of preventive health measures from different sources.
While these specific services bring about small net savings, not every preventive health measure saves more than it costs. A 1970s study showed that preventing heart attacks by treating hypertension early on with drugs actually did not save money in the long run.
|
https://en.wikipedia.org/wiki/Preventive_healthcare
|
passage: ## Motivation
A general paradigm in group theory is that a group G should be studied via its group representations. A slight generalization of those representations are the G-modules: a G-module is an abelian group M together with a group action of G on M, with every element of G acting as an automorphism of M. We will write G multiplicatively and M additively.
Given such a G-module M, it is natural to consider the submodule of G-invariant elements:
$$
M^{G} = \lbrace x \in M \ | \ \forall g \in G : \ gx=x \rbrace.
$$
Now, if N is a G-submodule of M (i.e., a subgroup of M mapped to itself by the action of G), it isn't in general true that the invariants in
$$
M/N
$$
are found as the quotient of the invariants in M by those in N: being invariant 'modulo N ' is broader. The purpose of the first group cohomology
$$
H^1(G,N)
$$
is to precisely measure this difference.
The group cohomology functors
$$
H^*
$$
in general measure the extent to which taking invariants doesn't respect exact sequences. This is expressed by a long exact sequence.
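Concretely, for a short exact sequence of G-modules 0 → N → M → M/N → 0, the long exact sequence in question begins (a standard statement):

$$
0 \to N^G \to M^G \to (M/N)^G \to H^1(G,N) \to H^1(G,M) \to H^1(G,M/N) \to H^2(G,N) \to \cdots
$$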
|
https://en.wikipedia.org/wiki/Group_cohomology
|
passage: ### Mass of a simple pendulum
In the small-angle approximation, the motion of a simple pendulum is approximated by simple harmonic motion. The period of a mass attached to a pendulum of length l with gravitational acceleration
$$
g
$$
is given by
$$
T = 2 \pi \sqrt\frac{l}{g}
$$
This shows that the period of oscillation is independent of the amplitude and mass of the pendulum but not of the acceleration due to gravity,
$$
g
$$
, therefore a pendulum of the same length on the Moon would swing more slowly due to the Moon's lower gravitational field strength. Because the value of
$$
g
$$
varies slightly over the surface of the earth, the time period will vary slightly from place to place and will also vary with height above sea level.
This approximation is accurate only for small angles because the expression for angular acceleration is proportional to the sine of the displacement angle:
$$
-mgl \sin\theta =I\alpha,
$$
where I is the moment of inertia. When θ is small, sin θ ≈ θ, and therefore the expression becomes
$$
-mgl \theta =I\alpha
$$
which makes angular acceleration directly proportional and opposite to , satisfying the definition of simple harmonic motion (that net force is directly proportional to the displacement from the mean position and is directed towards the mean position).
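A short numerical illustration of the period formula; the Moon's surface gravity of about 1.62 m/s² is used for the comparison made above:

```python
import math

def pendulum_period(length_m, g):
    """Small-angle period T = 2*pi*sqrt(l/g) of a simple pendulum."""
    return 2.0 * math.pi * math.sqrt(length_m / g)

L = 1.0                        # pendulum length in metres
g_earth, g_moon = 9.81, 1.62   # surface gravity in m/s^2

print(f"Earth: T = {pendulum_period(L, g_earth):.3f} s")   # ~2.006 s
print(f"Moon : T = {pendulum_period(L, g_moon):.3f} s")    # ~4.937 s
```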
### Scotch yoke
A Scotch yoke mechanism can be used to convert between rotational motion and linear reciprocating motion.
|
https://en.wikipedia.org/wiki/Simple_harmonic_motion
|
passage: -λ and α = 1.
### Hyperexponential or mixture of exponential distribution
The mixture of exponential or hyperexponential distribution with λ1,λ2,...,λn>0 can be represented as a phase type distribution with
$$
\boldsymbol{\alpha}=(\alpha_1,\alpha_2,\alpha_3,\alpha_4,...,\alpha_n)
$$
with
$$
\sum_{i=1}^n \alpha_i =1
$$
and
$$
{S}=\left[\begin{matrix}-\lambda_1&0&0&0&0\\0&-\lambda_2&0&0&0\\0&0&-\lambda_3&0&0\\0&0&0&-\lambda_4&0\\0&0&0&0&-\lambda_5\\\end{matrix}\right].
$$
This mixture of densities of exponential distributed random variables can be characterized through
$$
f(x)=\sum_{i=1}^n \alpha_i \lambda_i e^{-\lambda_i x} =\sum_{i=1}^n\alpha_i f_{X_i}(x),
$$
or its cumulative distribution function
$$
F(x)=1-\sum_{i=1}^n \alpha_i e^{-\lambda_i x}=\sum_{i=1}^n\alpha_iF_{X_i}(x).
$$
with
$$
X_i \sim Exp( \lambda_i )
$$
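A minimal sketch of sampling from such a mixture: pick phase i with probability α_i, then draw an Exp(λ_i) variate. The particular α and λ values below are arbitrary illustrations:

```python
import numpy as np

rng = np.random.default_rng(1)

alpha = np.array([0.5, 0.3, 0.2])   # mixing probabilities, sum to 1
lam = np.array([1.0, 2.0, 10.0])    # rates of the exponential phases

def sample_hyperexponential(size):
    phase = rng.choice(len(alpha), size=size, p=alpha)
    return rng.exponential(scale=1.0 / lam[phase])

x = sample_hyperexponential(200_000)
mean_exact = np.sum(alpha / lam)    # E[X] = sum_i alpha_i / lambda_i
print("sample mean:", x.mean(), " exact mean:", mean_exact)   # both ~0.67
```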
### Erlang distribution
The Erlang distribution has two parameters: the shape, an integer k > 0, and the rate λ > 0.
|
https://en.wikipedia.org/wiki/Phase-type_distribution
|
passage: $$
are partial derivatives of f with respect to partial derivatives of ρ,
$$
\left [ \frac {\partial f} {\partial \left (\nabla^{(i)}\rho \right ) } \right ]_{\alpha_1 \alpha_2 \cdots \alpha_i} = \frac {\partial f} {\partial \rho_{\alpha_1 \alpha_2 \cdots \alpha_i} }
$$
where
$$
\rho_{\alpha_1 \alpha_2 \cdots \alpha_i} \equiv \frac {\partial^{\,i}\rho} {\partial r_{\alpha_1} \, \partial r_{\alpha_2} \cdots \partial r_{\alpha_i} }
$$
, and the tensor scalar product is,
$$
|
https://en.wikipedia.org/wiki/Functional_derivative
|
passage: ## Notational uses
Del is used as a shorthand form to simplify many long mathematical expressions. It is most commonly used to simplify expressions for the gradient, divergence, curl, directional derivative, and Laplacian.
### Gradient
The vector derivative of a scalar field
$$
f
$$
is called the gradient, and it can be represented as:
$$
\operatorname{grad}f = {\partial f \over \partial x} \hat\mathbf x + {\partial f \over \partial y} \hat\mathbf y + {\partial f \over \partial z} \hat\mathbf z=\nabla f
$$
It always points in the direction of greatest increase of
$$
f
$$
, and it has a magnitude equal to the maximum rate of increase at the point—just like a standard derivative. In particular, if a hill is defined as a height function over a plane
$$
h(x,y)
$$
, the gradient at a given location will be a vector in the xy-plane (visualizable as an arrow on a map) pointing along the steepest direction. The magnitude of the gradient is the value of this steepest slope.
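A small numerical sketch of the hill picture: approximate the gradient of an illustrative height function h(x, y) with central differences and confirm it points uphill. The function and evaluation point are assumptions for demonstration:

```python
import numpy as np

def h(x, y):
    # An illustrative "hill": peak of height 1 at the origin.
    return np.exp(-(x**2 + y**2))

def grad_h(x, y, eps=1e-6):
    """Central-difference approximation to (dh/dx, dh/dy)."""
    gx = (h(x + eps, y) - h(x - eps, y)) / (2 * eps)
    gy = (h(x, y + eps) - h(x, y - eps)) / (2 * eps)
    return np.array([gx, gy])

g = grad_h(0.5, 0.25)
print("grad:", g)                       # points towards the origin (uphill)
print("magnitude:", np.linalg.norm(g))  # steepest slope at (0.5, 0.25)
```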
|
https://en.wikipedia.org/wiki/Del
|
passage: ### Information measure for stereoscopic images
The least squares measure may be used to measure the information content of the stereoscopic images, given depths at each point
$$
z(x, y)
$$
. Firstly the information needed to express one image in terms of the other is derived. This is called
$$
I_m
$$
.
A color difference function should be used to fairly measure the difference between colors. The color difference function is written cd in the following. The measure of the information needed to record the color matching between the two images is,
$$
I_m(z_1, z_2) = \frac{1}{\sigma_m^2} \sum_{x, y}\operatorname{cd}(\operatorname{color}_1(x, y + \frac{k}{z_1(x, y)}), \operatorname{color}_2(x, y))^2
$$
An assumption is made about the smoothness of the image. Assume that two pixels are more likely to be the same color, the closer the voxels they represent are. This measure is intended to favor colors that are similar being grouped at the same depth. For example, if an object in front occludes an area of sky behind, the measure of smoothness favors the blue pixels all being grouped together at the same depth.
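A toy grayscale sketch of the matching term I_m along one scanline, with a squared intensity difference standing in for the color difference cd; the constant k, the noise scale σ_m, the nearest-pixel sampling, and the test images are all illustrative assumptions:

```python
import numpy as np

def matching_cost(img1_row, img2_row, depth_row, k=16.0, sigma_m=1.0):
    """Sum of squared differences between img2 and img1 shifted by the
    depth-dependent disparity k / z, in the spirit of I_m above
    (grayscale, one scanline, nearest-pixel sampling)."""
    n = len(img2_row)
    cost = 0.0
    for x in range(n):
        d = int(round(k / depth_row[x]))      # disparity from depth
        if 0 <= x + d < n:
            diff = img1_row[x + d] - img2_row[x]
            cost += diff * diff
    return cost / sigma_m**2

# Illustrative data: img1 is img2 shifted right by 4 pixels, i.e. k/z = 4.
rng = np.random.default_rng(2)
img2 = rng.random(64)
img1 = np.roll(img2, 4)
depth_true = np.full(64, 4.0)    # k/z = 16/4 = 4  -> zero cost
depth_bad = np.full(64, 8.0)     # wrong depth     -> larger cost
print(matching_cost(img1, img2, depth_true), matching_cost(img1, img2, depth_bad))
```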
|
https://en.wikipedia.org/wiki/Computer_stereo_vision
|
passage: Sedgewick showed that the insert operation can be implemented in just 46 lines of Java.
In 2008, Sedgewick proposed the left-leaning red–black tree, leveraging Andersson’s idea that simplified the insert and delete operations. Sedgewick originally allowed nodes whose two children are red, making his trees more like 2–3–4 trees, but later this restriction was added, making new trees more like 2–3 trees. Sedgewick implemented the insert algorithm in just 33 lines, significantly shortening his original 46 lines of code.
## Terminology
The black depth of a node is defined as the number of black nodes from the root to that node (i.e. the number of black ancestors). The black height of a red–black tree is the number of black nodes in any path from the root to the leaves, which, by requirement 4, is constant (alternatively, it could be defined as the black depth of any leaf node).
The black height of a node is the black height of the subtree rooted by it. In this article, the black height of a null node shall be set to 0, because its subtree is empty as suggested by the example figure, and its tree height is also 0.
## Properties
In addition to the requirements imposed on a binary search tree the following must be satisfied by a red–black tree:
1. Every node is either red or black.
1. All null nodes are considered black.
1. A red node does not have a red child.
1. Every path from a given node to any of its leaf nodes goes through the same number of black nodes.
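A small sketch that checks requirements 3 and 4 and computes the black height, with null children counted as black height 0 as above; the node layout is an assumption for illustration:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    color: str                       # "red" or "black"
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def black_height(node: Optional[Node]) -> int:
    """Black height of the subtree rooted at node; None counts as 0.
    Raises ValueError if requirement 3 or 4 is violated below node."""
    if node is None:
        return 0
    for child in (node.left, node.right):
        if node.color == "red" and child is not None and child.color == "red":
            raise ValueError("red node with a red child (requirement 3)")
    lh, rh = black_height(node.left), black_height(node.right)
    if lh != rh:
        raise ValueError("unequal black counts on some path (requirement 4)")
    return lh + (1 if node.color == "black" else 0)

# A tiny valid tree: black root with two red children; black height is 1.
root = Node("black", Node("red"), Node("red"))
print(black_height(root))   # 1
```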
|
https://en.wikipedia.org/wiki/Red%E2%80%93black_tree
|
passage: The zebrafish can reach up to in length, although they typically are in the wild with some variations depending on location. Its lifespan in captivity is around two to three years, although in ideal conditions, this may be extended to over five years. In the wild it is typically an annual species.
## Psychology
In 2015, a study was published about zebrafishes' capacity for episodic memory. The individuals showed a capacity to remember context with respect to objects, locations and occasions (what, when, where). Episodic memory is a capacity of explicit memory systems, typically associated with conscious experience.
The Mauthner cells integrate a wide array of sensory stimuli to produce the escape reflex. Those stimuli are found to include the lateral line signals by McHenry et al. 2009 and visual signals consistent with looming objects by Temizer et al. 2015, Dunn et al. 2016, and Yao et al. 2016.
## Reproduction
The approximate generation time for Danio rerio is three months. A male must be present for ovulation and spawning to occur. Zebrafish are asynchronous spawners and under optimal conditions (such as food availability and favorable water parameters) can spawn successfully frequently, even on a daily basis. Females are able to spawn at intervals of two to three days, laying hundreds of eggs in each clutch. Upon release, embryonic development begins; in absence of sperm, growth stops after the first few cell divisions. Fertilized eggs almost immediately become transparent, a characteristic that makes D. rerio a convenient research model species.
|
https://en.wikipedia.org/wiki/Zebrafish
|
passage: Three main hypotheses address the origin of the genetic code. Many models belong to one of them or to a hybrid:
- Random freeze: the genetic code was randomly created. For example, early tRNA-like ribozymes may have had different affinities for amino acids, with codons emerging from another part of the ribozyme that exhibited random variability. Once enough peptides were coded for, any major random change in the genetic code would have been lethal; hence it became "frozen".
- Stereochemical affinity: the genetic code is a result of a high affinity between each amino acid and its codon or anti-codon; the latter option implies that pre-tRNA molecules matched their corresponding amino acids by this affinity. Later during evolution, this matching was gradually replaced with matching by aminoacyl-tRNA synthetases.
- Optimality: the genetic code continued to evolve after its initial creation, so that the current code maximizes some fitness function, usually some kind of error minimization.
Hypotheses have addressed a variety of scenarios:
- Chemical principles govern specific RNA interaction with amino acids. Experiments with aptamers showed that some amino acids have a selective chemical affinity for their codons. Experiments showed that of 8 amino acids tested, 6 show some RNA triplet-amino acid association.
- Biosynthetic expansion. The genetic code grew from a simpler earlier code through a process of "biosynthetic expansion".
|
https://en.wikipedia.org/wiki/Genetic_code
|
passage: The nonlinearity of the ODE effectively becomes a nonlinearity of F, and requires a root-finding technique capable of solving nonlinear systems. Such methods typically converge slower as nonlinearities become more severe. The boundary value problem solver's performance suffers from this.
- Even stable and well-conditioned ODEs may make for unstable and ill-conditioned BVPs. A slight alteration of the initial value guess y0 may generate an extremely large step in the ODE's solution y(tb; ta, y0) and thus in the values of the function F whose root is sought. Non-analytic root-finding methods can seldom cope with this behaviour.
## Multiple shooting
A direct multiple shooting method partitions the interval [ta, tb] by introducing additional grid points
$$
t_a = t_0 < t_1 < \cdots < t_N = t_b .
$$
The method starts by guessing somehow the values of y at all grid points tk with . Denote these guesses by yk. Let y(t; tk, yk) denote the solution emanating from the kth grid point, that is, the solution of the initial value problem
$$
y'(t) = f(t, y(t)), \quad y(t_k) = y_k.
$$
All these solutions can be pieced together to form a continuous trajectory if the values y match at the grid points.
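A compact sketch using SciPy, for the illustrative boundary value problem y'' = −y with y(0) = 0, y(π/2) = 1 (exact solution sin t). The unknowns are the state guesses y_k at the grid points; the residuals enforce the matching conditions plus the two boundary conditions:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

def f(t, y):                        # y'' = -y written as a first-order system
    return [y[1], -y[0]]

ta, tb, N = 0.0, np.pi / 2, 4       # interval and number of segments
t = np.linspace(ta, tb, N + 1)      # grid t_0 < t_1 < ... < t_N

def shoot(yk, k):
    """Solve the IVP y' = f(t, y), y(t_k) = yk up to t_{k+1}."""
    sol = solve_ivp(f, (t[k], t[k + 1]), yk, rtol=1e-10, atol=1e-10)
    return sol.y[:, -1]

def residuals(z):
    Y = z.reshape(N + 1, 2)         # guesses y_0, ..., y_N (two states each)
    res = [Y[0, 0]]                 # boundary condition y(t_a) = 0
    for k in range(N):              # matching at the interior grid points
        res.extend(shoot(Y[k], k) - Y[k + 1])
    res.append(Y[N, 0] - 1.0)       # boundary condition y(t_b) = 1
    return res

z = fsolve(residuals, np.full(2 * (N + 1), 0.5))   # crude initial guesses
Y = z.reshape(N + 1, 2)
print(np.abs(Y[:, 0] - np.sin(t)).max())           # tiny: recovers y = sin t
```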
|
https://en.wikipedia.org/wiki/Direct_multiple_shooting_method
|
passage: The inclusion of a clock signal is not necessary, as the leading edge of the data signal can be used as the clock if a small offset is added to each data value in order to avoid a data value with a zero length pulse.
_ __ ___ _____ _ _____ __ _
| | | | | | | | | | | | | | | |
PWM signal | | | | | | | | | | | | | | | |
__| |____| |___| |__| |_| |____| |_| |___| |_____
Data 0 1 2 4 0 4 1 0
### Power delivery
PWM can be used to control the amount of power delivered to a load without incurring the losses that would result from linear power delivery by resistive means. Drawbacks to this technique are that the power drawn by the load is not constant but rather discontinuous (see Buck converter), and energy delivered to the load is not continuous either. However, the load may be inductive, and with a sufficiently high frequency and when necessary using additional passive electronic filters, the pulse train can be smoothed and average analog waveform recovered. Power flow into the load can be continuous. Power flow from the supply is not constant and will require energy storage on the supply side in most cases.
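A tiny numerical illustration of the averaging this paragraph describes: the mean of a PWM waveform equals duty cycle times supply voltage, which is what low-pass filtering recovers. The voltage and duty-cycle values are illustrative:

```python
import numpy as np

V_supply = 12.0     # volts
duty = 0.25         # 25% duty cycle
samples = 1000

# One PWM period sampled uniformly: high for the first quarter, low after.
tt = np.linspace(0.0, 1.0, samples, endpoint=False)
waveform = np.where(tt < duty, V_supply, 0.0)

print("average voltage:", waveform.mean())    # 3.0 V
print("duty * V_supply:", duty * V_supply)    # 3.0 V
```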
|
https://en.wikipedia.org/wiki/Pulse-width_modulation
|
passage: If a calculation based on latitude/longitude should be valid for all Earth positions, it should be verified that the discontinuity and the Poles are handled correctly. Another solution is to use n-vector instead of latitude/longitude, since this representation does not have discontinuities or singularities.
## Flat-surface approximation formulae for very short distance
A planar approximation for the surface of the Earth may be useful over very small distances. It approximates the arc length,
$$
D
$$
, to the tunnel distance,
$$
D_\textrm{t}
$$
, or omits the conversion between arc and chord lengths shown below.
The shortest distance between two points in plane is a Cartesian straight line. The Pythagorean theorem is used to calculate the distance between points in a plane.
Even over short distances, the accuracy of geographic distance calculations which assume a flat Earth depends on the method by which the latitude and longitude coordinates have been projected onto the plane. The projection of latitude and longitude coordinates onto a plane is the realm of cartography.
The formulae presented in this section provide varying degrees of accuracy.
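One common flat-surface sketch is the equirectangular projection, which scales longitude differences by the cosine of a mean latitude before applying the Pythagorean theorem. The radius value below follows the usual mean-Earth-radius convention; the coordinates are illustrative:

```python
import math

R_EARTH = 6371.0088   # mean Earth radius, km

def flat_distance_km(lat1, lon1, lat2, lon2):
    """Equirectangular flat-surface approximation, valid for short distances.
    Latitudes/longitudes in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = phi2 - phi1
    dlmb = math.radians(lon2 - lon1)
    x = dlmb * math.cos((phi1 + phi2) / 2.0)   # shrink east-west spans
    y = dphi
    return R_EARTH * math.hypot(x, y)

# Two points roughly 1.2 km apart in central London (illustrative coordinates).
print(flat_distance_km(51.5007, -0.1246, 51.5014, -0.1419))
```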
### Spherical Earth approximation formulae
The tunnel distance,
$$
D_\textrm{t}
$$
, is calculated on Spherical Earth.
|
https://en.wikipedia.org/wiki/Geographical_distance
|
passage: When supersymmetry is imposed as a local symmetry, one automatically obtains a quantum mechanical theory that includes gravity. Such a theory is called a supergravity theory.
A theory of strings that incorporates the idea of supersymmetry is called a superstring theory. There are several different versions of superstring theory which are all subsumed within the M-theory framework. At low energies, superstring theories are approximated by one of the three supergravities in ten dimensions, known as type I, type IIA, and type IIB supergravity. Similarly, M-theory is approximated at low energies by supergravity in eleven dimensions.
### Branes
In string theory and related theories such as supergravity theories, a brane is a physical object that generalizes the notion of a point particle to higher dimensions. For example, a point particle can be viewed as a brane of dimension zero, while a string can be viewed as a brane of dimension one. It is also possible to consider higher-dimensional branes. In dimension , these are called -branes. Branes are dynamical objects which can propagate through spacetime according to the rules of quantum mechanics. They can have mass and other attributes such as charge. A -brane sweeps out a -dimensional volume in spacetime called its worldvolume. Physicists often study fields analogous to the electromagnetic field which live on the worldvolume of a brane. The word brane comes from the word "membrane" which refers to a two-dimensional brane.
|
https://en.wikipedia.org/wiki/M-theory
|
passage: Other global problems are loss of ecosystem services, land degradation, environmental impacts of animal agriculture and air and water pollution, including marine plastic pollution and ocean acidification. Many people worry about human impacts on the environment. These include impacts on the atmosphere, land, and water resources.
Human activities now have an impact on Earth's geology and ecosystems. This led Paul Crutzen to call the current geological epoch the Anthropocene.
The importance of citizens in accomplishing climate change adaptation, mitigation, and more general sustainable development objectives is being emphasized more and more by urban climate change governance (Hegger, Mees, & Wamsler, 2022). The Sustainable Development Goals and the Glasgow Climate Pact are two recent international agreements that acknowledge that sustainability transformations depend on both individual and social attitudes, values, and behaviors in addition to technical solutions (IPCC, 2022; Wamsler et al., 2021). Through their roles as voters, activists, consumers, and community members—particularly in decision-making, information co-production, and localized self-governance initiatives—citizens are seen as crucial change agents (Mees et al., 2016; Wamsler, 2017).
### Economic sustainability
The economic dimension of sustainability is controversial. This is because the term development within sustainable development can be interpreted in different ways. Some may take it to mean only economic development and growth. This can promote an economic system that is bad for the environment. Others focus more on the trade-offs between environmental conservation and achieving welfare goals for basic needs (food, water, health, and shelter).
|
https://en.wikipedia.org/wiki/Sustainability
|
passage: Gluons themselves carry the color charge of the strong interaction. This is unlike the photon, which mediates the electromagnetic interaction but lacks an electric charge. Gluons therefore participate in the strong interaction in addition to mediating it, making QCD significantly harder to analyze than QED (quantum electrodynamics) as it deals with nonlinear equations to characterize such interactions.
Higgs field
The Standard Model hypothesises a field called the Higgs field (symbol: φ), which has the unusual property of a non-zero amplitude in its ground state (zero-point) energy after renormalization; i.e., a non-zero vacuum expectation value. It can have this effect because of its unusual "Mexican hat" shaped potential whose lowest "point" is not at its "centre". Below a certain extremely high energy level the existence of this non-zero vacuum expectation spontaneously breaks electroweak gauge symmetry which in turn gives rise to the Higgs mechanism and triggers the acquisition of mass by those particles interacting with the field. The Higgs mechanism occurs whenever a charged field has a vacuum expectation value. This effect occurs because scalar field components of the Higgs field are "absorbed" by the massive bosons as degrees of freedom, and couple to the fermions via Yukawa coupling, thereby producing the expected mass terms. The expectation value of φ in the ground state (the vacuum expectation value or VEV) is then ⟨φ⟩ = v/√2, where v = |μ|/√λ. The measured value of this parameter is approximately 246 GeV.
|
https://en.wikipedia.org/wiki/Zero-point_energy
|
passage: In mathematics, the inverse hyperbolic functions are inverses of the hyperbolic functions, analogous to the inverse circular functions. There are six in common use: inverse hyperbolic sine, inverse hyperbolic cosine, inverse hyperbolic tangent, inverse hyperbolic cosecant, inverse hyperbolic secant, and inverse hyperbolic cotangent. They are commonly denoted by the symbols for the hyperbolic functions, prefixed with arc- or ar- or with a superscript
$$
{-1}
$$
(for example arcsinh, arsinh, or
$$
\sinh^{-1}
$$
).
For a given value of a hyperbolic function, the inverse hyperbolic function provides the corresponding hyperbolic angle measure, for example
$$
\operatorname{arsinh}(\sinh a) = a
$$
and
$$
\sinh(\operatorname{arsinh} x) = x.
$$
Hyperbolic angle measure is the length of an arc of a unit hyperbola
$$
x^2 - y^2 = 1
$$
as measured in the Lorentzian plane (not the length of a hyperbolic arc in the Euclidean plane), and twice the area of the corresponding hyperbolic sector. This is analogous to the way circular angle measure is the arc length of an arc of the unit circle in the Euclidean plane or twice the area of the corresponding circular sector. Alternately hyperbolic angle is the area of a sector of the hyperbola
$$
xy = 1.
$$
Some authors call the inverse hyperbolic functions hyperbolic area functions.
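For instance, the inverse hyperbolic sine has the closed form arsinh x = ln(x + √(x² + 1)), which the sketch below checks against the standard library implementation:

```python
import math

def arsinh(x):
    """Inverse hyperbolic sine via its logarithmic closed form."""
    return math.log(x + math.sqrt(x * x + 1.0))

for x in (0.0, 0.5, 2.0, 10.0):
    assert math.isclose(arsinh(x), math.asinh(x), rel_tol=1e-12)
    assert math.isclose(math.sinh(arsinh(x)), x, rel_tol=1e-12)
print("arsinh(2.0) =", arsinh(2.0))   # ~1.4436
```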
|
https://en.wikipedia.org/wiki/Inverse_hyperbolic_functions
|
passage: ### Criticism of the simple solutions
As already remarked, most sources in the topic of probability, including many introductory probability textbooks, solve the problem by showing the conditional probabilities that the car is behind door 1 and door 2 are 1/3 and 2/3 (not 1/2 and 1/2) given that the contestant initially picks door 1 and the host opens door 3; various ways to derive and understand this result were given in the previous subsections.
Among these sources are several that explicitly criticize the popularly presented "simple" solutions, saying these solutions are "correct but ... shaky", or do not "address the problem posed", or are "incomplete", or are "unconvincing and misleading", or are (most bluntly) "false".
Sasha Volokh wrote that "any explanation that says something like 'the probability of door 1 was , and nothing can change that...' is automatically fishy: probabilities are expressions of our ignorance about the world, and new information can change the extent of our ignorance. "
Some say that these solutions answer a slightly different question; one phrasing is "you have to announce whether you plan to switch".
The simple solutions show in various ways that a contestant who is determined to switch will win the car with probability 2/3, and hence that switching is the winning strategy, if the player has to choose in advance between "always switching", and "always staying". However, the probability of winning by always switching is a logically distinct concept from the probability of winning by switching given that the player has picked door 1 and the host has opened door 3. As one source says, "the distinction between [these questions] seems to confound many".
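A quick simulation of the unconditional "always switch" success rate under the standard assumptions (car placed uniformly at random, host always opens an unchosen goat door):

```python
import random

def play(switch, trials=100_000, rng=random.Random(0)):
    wins = 0
    for _ in range(trials):
        car = rng.randrange(3)
        pick = rng.randrange(3)
        # Host opens a door that is neither the pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print("stay  :", play(switch=False))   # ~1/3
print("switch:", play(switch=True))    # ~2/3
```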
|
https://en.wikipedia.org/wiki/Monty_Hall_problem
|
passage: The '$' sign is used to denote 'end of input' is expected, as is the case for the starting rule.
This is not the complete item set 0, though. Each item set must be 'closed', which means all production rules for each nonterminal following a '•' have to be recursively included into the item set until all of those nonterminals are dealt with. The resulting item set is called the closure of the item set we began with.
For LR(1) for each production rule an item has to be included for each possible lookahead terminal following the rule. For more complex languages this usually results in very large item sets, which is the reason for the large memory requirements of LR(1) parsers.
In our example, the starting symbol requires the nonterminal 'E' which in turn requires 'T', thus all production rules will appear in item set 0. At first, we ignore the problem of finding the lookaheads and just look at the case of an LR(0), whose items do not contain lookahead terminals. So the item set 0 (without lookaheads) will look like this:
[S → • E]
[E → • T]
[E → • ( E )]
[T → • n]
[T → • + T]
[T → • T + n]
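A small sketch of the closure computation for LR(0) items on this example grammar; the tuple encoding (lhs, rhs, dot position) and the dictionary layout are illustrative assumptions:

```python
GRAMMAR = {             # the example grammar from the text
    "S": [("E",)],
    "E": [("T",), ("(", "E", ")")],
    "T": [("n",), ("+", "T"), ("T", "+", "n")],
}

def closure(items):
    """LR(0) closure: while an item has the dot before a nonterminal,
    add that nonterminal's productions with the dot at position 0."""
    items = set(items)
    changed = True
    while changed:
        changed = False
        for lhs, rhs, dot in list(items):
            if dot < len(rhs) and rhs[dot] in GRAMMAR:   # dot before a nonterminal
                for prod in GRAMMAR[rhs[dot]]:
                    item = (rhs[dot], prod, 0)
                    if item not in items:
                        items.add(item)
                        changed = True
    return items

# Item set 0: close the start item [S -> . E]; prints the six items above.
for lhs, rhs, dot in sorted(closure({("S", ("E",), 0)})):
    print(f"[{lhs} -> {' '.join(rhs[:dot])} . {' '.join(rhs[dot:])}]")
```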
### FIRST and FOLLOW sets
To determine lookahead terminals, so-called FIRST and FOLLOW sets are used.
FIRST(A) is the set of terminals which can appear as the first element of any chain of rules matching nonterminal A. FOLLOW(I) of an Item
|
https://en.wikipedia.org/wiki/Canonical_LR_parser
|
passage: So he looked for another equation that can be modified in order to describe the action of electromagnetic forces. In addition, this equation, as it stands, is nonlocal (see also Introduction to nonlocal equations).
Klein and Gordon instead began with the square of the above identity, i.e.
$$
\mathbf{p}^2 c^2 + m^2 c^4 = E^2,
$$
which, when quantized, gives
$$
\left( (-i\hbar\mathbf{\nabla})^2 c^2 + m^2 c^4 \right) \psi = \left( i \hbar \frac{\partial}{\partial t} \right)^2 \psi,
$$
which simplifies to
$$
-\hbar^2 c^2 \mathbf{\nabla}^2 \psi + m^2 c^4 \psi = -\hbar^2 \frac{\partial^2}{\partial t^2} \psi.
$$
Rearranging terms yields
$$
\frac{1}{c^2} \frac{\partial^2}{\partial t^2} \psi - \mathbf{\nabla}^2 \psi + \frac{m^2 c^2}{\hbar^2} \psi = 0.
$$
Since all reference to imaginary numbers has been eliminated from this equation, it can be applied to fields that are real-valued, as well as those that have complex values.
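Equivalently, writing

$$
\Box = \frac{1}{c^2} \frac{\partial^2}{\partial t^2} - \mathbf{\nabla}^2
$$

for the d'Alembert operator, the equation takes the standard compact form

$$
\left(\Box + \frac{m^2 c^2}{\hbar^2}\right)\psi = 0.
$$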
|
https://en.wikipedia.org/wiki/Klein%E2%80%93Gordon_equation
|
passage: ## Examples
Note that many fields of inquiry do not have specific named theories, e.g. developmental biology. Scientific knowledge outside a named theory can still have a high level of certainty, depending on the amount of evidence supporting it. Also note that since theories draw evidence from many fields, the categorization is not absolute.
- Biology: cell theory, theory of evolution (modern evolutionary synthesis), abiogenesis, germ theory, particulate inheritance theory, dual inheritance theory, Young–Helmholtz theory, opponent process, cohesion-tension theory
- Chemistry: collision theory, kinetic theory of gases, Lewis theory, molecular theory, molecular orbital theory, transition state theory, valence bond theory
- Physics: atomic theory, Big Bang theory, Dynamo theory, perturbation theory, theory of relativity (successor to classical mechanics), quantum field theory
- Earth science: Climate change theory (from climatology), plate tectonics theory (from geology), theories of the origin of the Moon, theories for the Moon illusion
- Astronomy: Self-gravitating system, Stellar evolution, solar nebular model, stellar nucleosynthesis
|
https://en.wikipedia.org/wiki/Scientific_theory
|
passage: Hence, for the finite-dimensional distributions to be consistent, it must hold that
$$
\nu_{1,2}( \mathbb{R}_+ \times \mathbb{R}_-) = \nu_{2,1}( \mathbb{R}_- \times \mathbb{R}_+)
$$
.
The first condition generalizes this statement to hold for any number of time points
$$
t_i
$$
, and any control sets
$$
F_i
$$
.
Continuing the example, the second condition implies that
$$
\mathbb{P}(X_1>0) = \mathbb{P}(X_1>0, X_2 \in \mathbb{R})
$$
. Also this is a trivial condition that will be satisfied by any consistent family of finite-dimensional distributions.
## Implications of the theorem
Since the two conditions are trivially satisfied for any stochastic process, the power of the theorem is that no other conditions are required: For any reasonable (i.e., consistent) family of finite-dimensional distributions, there exists a stochastic process with these distributions.
The measure-theoretic approach to stochastic processes starts with a probability space and defines a stochastic process as a family of functions on this probability space. However, in many applications the starting point is really the finite-dimensional distributions of the stochastic process. The theorem says that provided the finite-dimensional distributions satisfy the obvious consistency requirements, one can always identify a probability space to match the purpose.
|
https://en.wikipedia.org/wiki/Kolmogorov_extension_theorem
|
passage: Most computational neuroscientists collaborate closely with experimentalists in analyzing novel data and synthesizing new models of biological phenomena.
### Single-neuron modeling
Even a single neuron has complex biophysical characteristics and can perform computations. Hodgkin and Huxley's original model only employed two voltage-sensitive currents (Voltage sensitive ion channels are glycoprotein molecules which extend through the lipid bilayer, allowing ions to traverse under certain conditions through the axolemma), the fast-acting sodium and the inward-rectifying potassium. Though successful in predicting the timing and qualitative features of the action potential, it nevertheless failed to predict a number of important features such as adaptation and shunting. Scientists now believe that there are a wide variety of voltage-sensitive currents, and the implications of the differing dynamics, modulations, and sensitivity of these currents is an important topic of computational neuroscience.
The computational functions of complex dendrites are also under intense investigation. There is a large body of literature regarding how different currents interact with geometric properties of neurons.
There are many software packages, such as GENESIS and NEURON, that allow rapid and systematic in silico modeling of realistic neurons. Blue Brain, a project founded by Henry Markram from the École Polytechnique Fédérale de Lausanne, aims to construct a biophysically detailed simulation of a cortical column on the Blue Gene supercomputer.
|
https://en.wikipedia.org/wiki/Computational_neuroscience
|
passage: \chi_{40,7} & 1 & i & -i & -1 & -1 & -i & i & 1 & 1 & i & -i & -1 & -1 & -i & i & 1 \\
\chi_{40,9} & 1 & -1 & -1 & 1 & 1 & -1 & -1 & 1 & 1 & -1 & -1 & 1 & 1 & -1 & -1 & 1 \\
\chi_{40,11} & 1 & 1 & -1 & 1 & 1 & -1 & 1 & 1 & -1 & -1 & 1 & -1 & -1 & 1 & -1 & -1 \\
\chi_{40,13} & 1 & -i & -i & -1 & -1 & -i & -i & 1 & -1 & i & i & 1 & 1 & i & i & -1 \\
\chi_{40,17} & 1 & -i & i & -1 & 1 & -i & i & -1 & 1 & -i & i & -1 & 1 & -i & i & -1 \\
\chi_{40,19} & 1 & -1 & 1 & 1 & 1 & 1 & -1 & 1 & -1 & 1 & -1 & -1 & -1 & -1 & 1 & -1 \\
\chi_{40,21} & 1 & -1 & 1 & 1 & -1 & -1 & 1 & -1 & -1 & 1 & -1 & -1 & 1 & 1 & -1 & 1 \\
\chi_{40,23} & 1 & -i & i & -1 & -1 & i & -i & 1 & 1 & -i & i & -1 & -1 & i & -i & 1 \\
\chi_{40,27} & 1 & -i & -i & -1 & 1 & i & i & -1 & -1 & i & i & 1 & -1 & -i & -i & 1 \\
|
https://en.wikipedia.org/wiki/Dirichlet_character
|
passage: Using the Markov inequality to bound the desired probability:
$$
\operatorname{P}(T \geq cn H_n) \le \frac{1}{c}.
$$
The above can be modified slightly to handle the case when we've already collected some of the coupons. Let k be the number of coupons already collected, then:
$$
\begin{align}
\operatorname{E}(T_k) & {}= \operatorname{E}(t_{k+1} + t_{k+2} + \cdots + t_n) \\
& {}= n \cdot \left(\frac{1}{1} + \frac{1}{2} + \cdots + \frac{1}{n-k}\right) \\
& {}= n \cdot H_{n-k}
\end{align}
$$
And when
$$
k=0
$$
then we get the original result.
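A short sketch comparing the exact expectation n·H_n with simulation, plus the Markov bound above; the coupon count, trial count, and c are illustrative parameters:

```python
import random

def harmonic(n):
    return sum(1.0 / k for k in range(1, n + 1))

def collect(n, rng):
    """Draw coupons until all n types are seen; return the number of draws."""
    seen, draws = set(), 0
    while len(seen) < n:
        seen.add(rng.randrange(n))
        draws += 1
    return draws

n, trials, rng = 50, 20_000, random.Random(0)
exact = n * harmonic(n)
avg = sum(collect(n, rng) for _ in range(trials)) / trials
print(f"E[T] = n*H_n = {exact:.1f}, simulated mean = {avg:.1f}")

c = 2.0   # Markov: P(T >= c*n*H_n) <= 1/c
tail = sum(collect(n, rng) >= c * exact for _ in range(trials)) / trials
print(f"P(T >= {c}*n*H_n) = {tail:.4f}  (bound: {1/c})")
```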
|
https://en.wikipedia.org/wiki/Coupon_collector%27s_problem
|
passage: The first step in composing choral music with overtone singing is to discover what the singers can be expected to do successfully without extensive practice. The second step is to find a musical context in which those techniques could be effective, not mere special effects. It was initially hypothesized that beginners would be able to:
- glissando through the partials of a given fundamental, ascending or descending, fast, or slow
- use vowels/text for relative pitch gestures on indeterminate partials specifying the given shape without specifying particular partials
- improvise on partials of the given fundamental, ad lib., freely, or in a given style or manner
- find and sustain a particular partial (requires interval recognition)
- by extension, move to an adjacent partial, above or below, and alternate between the two
Singers should not be asked to change the fundamental pitch while overtone singing, and changing partials should always be to an adjacent partial. When a particular partial is to be specified, time should be allowed (a beat or so) for the singers to get the harmonics to "speak" and find the correct one.
### String instruments
String instruments can also produce multiphonic tones when strings are divided in two pieces or the sound is somehow distorted. The sitar has sympathetic strings which help to bring out the overtones while one is playing. The overtones are also highly important in the tanpura, the drone instrument in traditional North and South Indian music, in which loose strings tuned at octaves and fifths are plucked and designed to buzz to create sympathetic resonance and highlight the cascading sound of the overtones.
|
https://en.wikipedia.org/wiki/Overtone
|
passage: For every set there is exactly one function from the empty set to (there are no values of this function to specify), which is always injective, but never surjective unless is (also) empty.
- For every non-empty set there are no functions from to the empty set (there is at least one value of the function that must be specified, but it cannot).
- When n > x there are no injective functions, and if n < x there are no surjective functions.
- The expressions used in the formulas have as particular values
$$
0^0=0^{\underline 0}=0!=\binom00=\binom{-1}0=\left\{{0\atop0}\right\}=p_0(0)=1
$$
(the first three are instances of an empty product, and the value
$$
\tbinom{-1}{0} = \tfrac{(-1)^{\underline{0}}}{0!} = 1
$$
is given by the conventional extension of binomial coefficients to arbitrary values of the upper index), while
$$
\left\{{n\atop x}\right\}=p_x(n)=0 \quad\hbox{whenever either } n>0=x \hbox{ or }0\leq n<x.
$$
In particular in the case of counting multisets with elements taken from , the given expression
$$
\tbinom{n+x-1}n
$$
is equivalent in most cases to
$$
\tbinom{n+x-1}{x-1}
$$
, but the latter expression would give 0 for the case n = x = 0 (by the usual convention that binomial coefficients with a negative lower index are always 0).
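These boundary conventions can be made concrete with a generalized binomial coefficient defined via the falling factorial, which is what makes C(−1, 0) = 1 while a negative lower index gives 0; the helper below is an illustrative sketch:

```python
from math import factorial

def binom(a, k):
    """Generalized binomial coefficient 'a over k' via the falling factorial;
    defined for any integer a, with the convention that k < 0 gives 0."""
    if k < 0:
        return 0
    num = 1
    for i in range(k):
        num *= a - i
    return num // factorial(k)

# Multisets of size n from an x-set: C(n + x - 1, n) vs C(n + x - 1, x - 1).
for n, x in [(3, 2), (0, 0)]:
    print(n, x, binom(n + x - 1, n), binom(n + x - 1, x - 1))
# (3, 2): both give 4;  (0, 0): C(-1, 0) = 1 but C(-1, -1) = 0.
```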
|
https://en.wikipedia.org/wiki/Twelvefold_way
|
passage: For each polyphase iteration, the total number of runs follows a pattern similar to a reversed Fibonacci numbers of higher order sequence. With 4 files, and a dataset consisting of 57 runs, the total run count on each iteration would be 57, 31, 17, 9, 5, 3, 1. Note that except for the last iteration, the run count reduction factor is a bit less than 2, 57/31, 31/17, 17/9, 9/5, 5/3, 3/1, about 1.84 for a 4 file case, but each iteration except the last reduced the run count while processing about 65% of the dataset, so the run count reduction factor per dataset processed during the intermediate iterations is about 1.84 / 0.65 = 2.83. For a dataset consisting of 57 runs of 1 record each, then after the initial distribution, polyphase merge sort moves 232 records during the 6 iterations it takes to sort the dataset, for an overall reduction factor of 2.70 (this is explained in more detail later).
After the first polyphase iteration, what was the output file now contains the results of merging N−1 original runs, but the remaining N−2 input working files still contain the remaining original runs, so the second merge iteration produces runs of size (N−1) + (N−2) = (2N − 3) original runs. The third iteration produces runs of size (4N − 7) original runs.
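The run-count pattern for 4 files follows a third-order Fibonacci rule, which a few lines reproduce; reading the generated totals backwards gives the per-iteration counts quoted above (the trailing 1s are the artificial seed):

```python
def run_counts(n_files, total_runs):
    """Ideal per-iteration total run counts for an n_files polyphase merge
    sort, built backwards from the end of the sort: with k = n_files - 1
    input files, each total is the sum of the previous k totals
    (a k-th order Fibonacci rule)."""
    k = n_files - 1
    seq = [1] * k                     # seed; only the final single run is real
    while seq[-1] < total_runs:
        seq.append(sum(seq[-k:]))
    return seq[::-1]

# 4 files, 57 initial runs -> the schedule quoted above.
print(run_counts(4, 57))              # [57, 31, 17, 9, 5, 3, 1, 1, 1]
```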
|
https://en.wikipedia.org/wiki/Polyphase_merge_sort
|
passage: The canonical bundle formula
Let
$$
X
$$
be a normal surface. A genus fibration
$$
f:X\to B
$$
of
$$
X
$$
is a proper flat morphism
$$
f
$$
to a smooth curve such that
$$
f_*\mathcal{O}_X\cong \mathcal{O}_B
$$
and all fibers of
$$
f
$$
have arithmetic genus
$$
g
$$
. If
$$
X
$$
is a smooth projective surface and the fibers of
$$
f
$$
do not contain rational curves of self-intersection
$$
-1
$$
, then the fibration is called minimal. For example, if
$$
X
$$
admits a (minimal) genus 0 fibration, then
$$
X
$$
is birationally ruled, that is, birational to
$$
\mathbb{P}^1\times B
$$
.
For a minimal genus 1 fibration (also called elliptic fibrations)
$$
f:X\to B
$$
all but finitely many fibers of
$$
f
$$
are geometrically integral and all fibers are geometrically connected (by Zariski's connectedness theorem).
|
https://en.wikipedia.org/wiki/Canonical_bundle
|
passage: Diabetic peripheral neuropathy (DPN) affects 30% of all diabetes patients. When DPN is superimposed with nerve compression, DPN may be treatable with multiple nerve decompressions. The theory is that DPN predisposes peripheral nerves to compression at anatomical sites of narrowing, and that the majority of DPN symptoms are actually attributable to nerve compression, a treatable condition, rather than DPN itself. The surgery is associated with lower pain scores, higher two-point discrimination (a measure of sensory improvement), lower rate of ulcerations, fewer falls (in the case of lower extremity decompression), and fewer amputations.
### Self-management and support
In countries using a general practitioner system, such as the United Kingdom, care may take place mainly outside hospitals, with hospital-based specialist care used only in case of complications, difficult blood sugar control, or research projects. In other circumstances, general practitioners and specialists share care in a team approach. Evidence has shown that social prescribing led to slight improvements in blood sugar control for people with type 2 diabetes. Home telehealth support can be an effective management technique.
The use of technology to deliver educational programs for adults with type 2 diabetes includes computer-based self-management interventions to collect for tailored responses to facilitate self-management. There is no adequate evidence to support effects on cholesterol, blood pressure, behavioral change (such as physical activity levels and dietary), depression, weight and health-related quality of life, nor in other biological, cognitive or emotional outcomes.
|
https://en.wikipedia.org/wiki/Diabetes
|
passage: - Fixed interval: responding increases towards the end of the interval; poor resistance to extinction.
- Variable interval: steady activity results, good resistance to extinction.
- Ratio schedules produce higher rates of responding than interval schedules, when the rates of reinforcement are otherwise similar.
- Variable schedules produce higher rates and greater resistance to extinction than most fixed schedules. This is also known as the Partial Reinforcement Extinction Effect (PREE).
- The variable ratio schedule produces both the highest rate of responding and the greatest resistance to extinction (for example, the behavior of gamblers at slot machines).
- Fixed schedules produce "post-reinforcement pauses" (PRP), where responses will briefly cease immediately following reinforcement, though the pause is a function of the upcoming response requirement rather than the prior reinforcement.
- The PRP of a fixed interval schedule is frequently followed by a "scallop-shaped" accelerating rate of response, while fixed ratio schedules produce a more "angular" response.
- Fixed interval scallop: the pattern of responding that develops with a fixed interval reinforcement schedule; performance on a fixed interval reflects the subject's accuracy in telling time.
- Organisms whose schedules of reinforcement are "thinned" (that is, requiring more responses or a greater wait before reinforcement) may experience "ratio strain" if thinned too quickly.
|
https://en.wikipedia.org/wiki/Reinforcement
|
passage: On the other hand, there is always an elementary extension in which any set of types over a fixed parameter set is realised:
Let
$$
\mathcal{M}
$$
be a structure and let
$$
\Phi
$$
be a set of complete types over a given parameter set
$$
A \subset \mathcal{M}.
$$
Then there is an elementary extension
$$
\mathcal{N}
$$
of
$$
\mathcal{M}
$$
which realises every type in
$$
\Phi
$$
.
However, since the parameter set is fixed and there is no mention here of the cardinality of
$$
\mathcal{N}
$$
, this does not imply that every theory has a saturated model.
In fact, whether every theory has a saturated model is independent of the axioms of Zermelo–Fraenkel set theory, and is true if the generalised continuum hypothesis holds.
### Ultraproducts
Ultraproducts are used as a general technique for constructing models that realise certain types.
An ultraproduct is obtained from the direct product of a set of structures over an index set I by identifying those tuples that agree on almost all entries, where almost all is made precise by an ultrafilter U on I. An ultraproduct of copies of the same structure is known as an ultrapower.
The key to using ultraproducts in model theory is Łoś's theorem:
Let
$$
\mathcal{M}_i
$$
be a set of σ-structures indexed by an index set I, and let U be an ultrafilter on I.
|
https://en.wikipedia.org/wiki/Model_theory
|
passage: Fejér's theorem states that the arithmetic means of the partial sums of the Fourier series of converge uniformly to provided is continuous on the circle; these partial sums can be used to approximate .
A trigonometric polynomial of degree N has a maximum of 2N roots in a real interval [a, a + 2π) unless it is the zero function.
## Fejér-Riesz theorem
The Fejér-Riesz theorem states that every positive real trigonometric polynomial
$$
t(x) = \sum_{n=-N}^{N} c_n e^{i n x},
$$
satisfying
$$
t(x)>0
$$
for all
$$
x\in\mathbb{R}
$$
,
can be represented as the square of the modulus of another (usually complex) trigonometric polynomial
$$
q(x)
$$
such that:
$$
t(x) = |q(x)|^2 = q(x)\bar{q}(x).
$$
Or, equivalently, every Laurent polynomial
$$
w(z)=\sum_{n=-N}^{N} w_{n}z^{n},
$$
with
$$
w_n \in\mathbb{C}
$$
that satisfies
$$
w(\zeta)\geq 0
$$
for all
$$
\zeta \in \mathbb{T}
$$
can be written as:
$$
w(\zeta)=|p(\zeta)|^2=p(\zeta)\bar{p}(\bar{\zeta}),
$$
for some polynomial
$$
p(z)
$$
.
|
https://en.wikipedia.org/wiki/Trigonometric_polynomial
|
passage: For example, 3 is the only prime with period 1, 11 is the only prime with period 2, 37 is the only prime with period 3, 101 is the only prime with period 4, so they are unique primes. The next larger unique prime is 9091 with period 10, though the next larger period is 9 (its prime being 333667). Unique primes were described by Samuel Yates in 1980. A prime number p is unique if and only if there exists an n such that
$$
\frac{\Phi_n(10)}{\gcd(\Phi_n(10), n)}
$$
is a power of p, where
$$
\Phi_n(b)
$$
denotes the
$$
n
$$
th cyclotomic polynomial evaluated at
$$
b
$$
. The value of n is then the period of the decimal expansion of 1/p.
At present, more than fifty decimal unique primes or probable primes are known. However, there are only twenty-three unique primes below 10^100.
The decimal unique primes are
3, 11, 37, 101, 9091, 9901, 333667, 909091, ... .
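The period of 1/p is the multiplicative order of 10 modulo p, which a few lines compute directly, checking the first few unique primes listed above:

```python
def period(p):
    """Length of the repeating decimal of 1/p for p coprime to 10,
    i.e. the multiplicative order of 10 modulo p."""
    k, r = 1, 10 % p
    while r != 1:
        r = (r * 10) % p
        k += 1
    return k

for p in (3, 11, 37, 101, 9091, 9901, 333667):
    print(p, period(p))
# periods: 1, 2, 3, 4, 10, 12, 9
```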
|
https://en.wikipedia.org/wiki/Reciprocals_of_primes
|
passage: Natural science can be broken into two main branches: life science (for example biology) and physical science. Each of these branches, and all of their sub-branches, are referred to as natural sciences.
- A branch of applied science - the application of the scientific method and scientific knowledge to attain practical goals. It includes a broad range of disciplines, such as engineering and medicine, although medicine would not normally be considered a physical science. Applied science is often contrasted with basic science, which is focused on advancing scientific theories and laws that explain and predict natural or other phenomena.
## Branches
- Physics – natural and physical science could involve the study of matter and its motion through space and time, along with related concepts such as energy and force. More broadly, it is the general analysis of nature, conducted in order to understand how the universe behaves. "Physics is the study of your world and the world and universe around you."
- Branches of physics
- Astronomy – study of celestial objects (such as stars, galaxies, planets, moons, asteroids, comets and nebulae), the physics, chemistry, and evolution of such objects, and phenomena that originate outside the atmosphere of Earth, including supernovae explosions, gamma-ray bursts, and cosmic microwave background radiation.
- Branches of astronomy
- Chemistry – studies the composition, structure, properties and change of matter. Chemistry. (n.d.). Merriam-Webster's Medical Dictionary. Retrieved August 19, 2007.
|
https://en.wikipedia.org/wiki/Outline_of_physical_science
|
passage: ## Well-distributed sequence
A sequence (s1, s2, s3, ...) of real numbers is said to be well-distributed on [a, b] if for any subinterval [c, d] of [a, b] we have
$$
\lim_{n\to\infty}{ \left|\{\,s_{k+1},\dots,s_{k+n} \,\} \cap [c,d] \right| \over n}={d-c \over b-a}
$$
uniformly in k. Clearly every well-distributed sequence is uniformly distributed, but the converse does not hold. The definition of well-distributed modulo 1 is analogous.
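A quick empirical check for the classic example s_n = nα mod 1 with α an irrational rotation: the fraction of points landing in a subinterval approaches its length. The choices of α (golden-ratio conjugate) and [c, d] are illustrative:

```python
import math

alpha = (math.sqrt(5.0) - 1.0) / 2.0    # irrational rotation number
c, d = 0.2, 0.5                         # subinterval of [0, 1]

for n in (100, 10_000, 1_000_000):
    hits = sum(1 for k in range(1, n + 1) if c <= (k * alpha) % 1.0 <= d)
    print(n, hits / n)                  # tends to d - c = 0.3
```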
## Sequences equidistributed with respect to an arbitrary measure
For an arbitrary probability measure space
$$
(X,\mu)
$$
, a sequence of points
$$
(x_n)
$$
is said to be equidistributed with respect to
$$
\mu
$$
if the mean of point measures converges weakly to
$$
\mu
$$
:
$$
\frac{\sum_{k=1}^n \delta_{x_k}}{n}\Rightarrow \mu \ .
$$
In any Borel probability measure on a separable, metrizable space, there exists an equidistributed sequence with respect to the measure; indeed, this follows immediately from the fact that such a space is standard.
|
https://en.wikipedia.org/wiki/Equidistributed_sequence
|
passage: Hence, so long as
$$
m_2(\hat{x}) \geq |h_3(x(t))|
$$
, the second row of the error dynamics,
$$
\dot{e}_2 = h_3(\hat{x}) - m_2(\hat{x}) \sgn( e_2 )
$$
, will enter the
$$
e_2 = 0
$$
sliding mode in finite time.
1. Along the
$$
e_i = 0
$$
surface, the corresponding
$$
v_{i+1}(t) = \{\ldots\}_{\text{eq}}
$$
equivalent control will be equal to
$$
h_{i+1}(x)
$$
. Hence, so long as
$$
m_{i+1}(\hat{x}) \geq |h_{i+2}(x(t))|
$$
, the
$$
(i+1)
$$
th row of the error dynamics,
$$
\dot{e}_{i+1} = h_{i+2}(\hat{x}) - m_{i+1}(\hat{x}) \sgn( e_{i+1} )
$$
, will enter the
$$
e_{i+1} = 0
$$
sliding mode in finite time.
So, for sufficiently large
$$
m_i
$$
gains, all observer estimated states reach the actual states in finite time.
|
https://en.wikipedia.org/wiki/State_observer
|
passage: ## Applications
The inverted index data structure is a central component of a typical search engine indexing algorithm. A goal of a search engine implementation is to optimize the speed of the query: find the documents where word X occurs. Once a forward index is developed, which stores lists of words per document, it is next inverted to develop an inverted index. Querying the forward index would require sequential iteration through each document and to each word to verify a matching document. The time, memory, and processing resources to perform such a query are not always technically realistic. Instead of listing the words per document in the forward index, the inverted index data structure is developed which lists the documents per word.
With the inverted index created, the query can be resolved by jumping to the word ID (via random access) in the inverted index.
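A minimal sketch of both structures and the query path just described; the tokenization and integer document IDs are simplified for illustration:

```python
from collections import defaultdict

docs = {                                     # document ID -> text
    0: "the quick brown fox",
    1: "the lazy dog",
    2: "the quick dog",
}

# Forward index: lists of words per document.
forward = {doc_id: text.split() for doc_id, text in docs.items()}

# Invert it: sets of documents per word.
inverted = defaultdict(set)
for doc_id, words in forward.items():
    for word in words:
        inverted[word].add(doc_id)

# "Find the documents where word X occurs" is now a single lookup,
# and multi-word queries intersect posting lists.
print(sorted(inverted["quick"]))                     # [0, 2]
print(sorted(inverted["quick"] & inverted["dog"]))   # [2]
```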
In pre-computer times, concordances to important books were manually assembled. These were effectively inverted indexes with a small amount of accompanying commentary that required a tremendous amount of effort to produce.
In bioinformatics, inverted indexes are very important in the sequence assembly of short fragments of sequenced DNA. One way to find the source of a fragment is to search for it against a reference DNA sequence. A small number of mismatches (due to differences between the sequenced DNA and reference DNA, or errors) can be accounted for by dividing the fragment into smaller fragments—at least one subfragment is likely to match the reference DNA sequence.
|
https://en.wikipedia.org/wiki/Inverted_index
|
passage: Mathematically derived formalisms such as the Hammond Postulate, the Curtin-Hammett principle, and the theory of microscopic reversibility are often applied to organic chemistry. Chemists have also used the principle of thermodynamic versus kinetic control to influence reaction products.
### Rate laws
The study of chemical kinetics is used to determine the rate law for a reaction. The rate law provides a quantitative relationship between the rate of a chemical reaction and the concentrations or pressures of the chemical species present. Rate laws must be determined by experimental measurement and generally cannot be elucidated from the chemical equation. The experimentally determined rate law refers to the stoichiometry of the transition state structure relative to the ground state structure. Determination of the rate law was historically accomplished by monitoring the concentration of a reactant during a reaction through gravimetric analysis, but today it is almost exclusively done through fast and unambiguous spectroscopic techniques. In most cases, the determination of rate equations is simplified by adding a large excess ("flooding") of all but one of the reactants.
### Catalysis
The study of catalysis and catalytic reactions is very important to the field of physical organic chemistry. A catalyst participates in the chemical reaction but is not consumed in the process. A catalyst lowers the activation energy barrier (ΔG‡), increasing the rate of a reaction by either stabilizing the transition state structure or destabilizing a key reaction intermediate, and as only a small amount of catalyst is required it can provide economic access to otherwise expensive or difficult to synthesize organic molecules.
|
https://en.wikipedia.org/wiki/Physical_organic_chemistry
|
passage: The Cartesian coordinate expansion of the outer product with respect to the standard ordered orthonormal plane basis
$$
(\mathbf{x}, \mathbf{y})
$$
gives
$$
\mathbf{v}_i \wedge \mathbf{v}_{i+1} = (x_i y_{i+1} - x_{i+1} y_i) \; \mathbf{x} \wedge \mathbf{y}
$$
and the oriented area is given as follows.
$$
A = \frac{1}{2} \sum_{i=1}^{n} \mathbf{v}_i \wedge \mathbf{v}_{i+1} = \frac{1}{2} \sum_{i=1}^{n} (x_i y_{i+1} - x_{i+1} y_i) \; \mathbf{x} \wedge \mathbf{y}
$$
Note that the area is given as a multiple of the unit area
$$
\mathbf{x} \wedge \mathbf{y}
$$
.
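The coefficient of x∧y is the usual shoelace sum, computed directly below; vertices in counterclockwise order give positive oriented area:

```python
def shoelace_area(vertices):
    """Oriented polygon area: (1/2) * sum of (x_i*y_{i+1} - x_{i+1}*y_i),
    with indices taken cyclically."""
    n = len(vertices)
    s = 0.0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return 0.5 * s

square = [(0, 0), (2, 0), (2, 2), (0, 2)]   # counterclockwise 2x2 square
print(shoelace_area(square))                 # 4.0
print(shoelace_area(square[::-1]))           # -4.0: clockwise flips the sign
```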
|
https://en.wikipedia.org/wiki/Shoelace_formula
|
passage: For instance, homotypic cell adhesion can maintain boundaries between groups of cells that have different adhesion molecules. Furthermore, cells can sort based upon differences in adhesion between the cells, so even two populations of cells with different levels of the same adhesion molecule can sort out. In cell culture cells that have the strongest adhesion move to the center of a mixed aggregates of cells. Moreover, cell-cell adhesion is often modulated by cell contractility, which can exert forces on the cell-cell contacts so that two cell populations with equal levels of the same adhesion molecule can sort out. The molecules responsible for adhesion are called cell adhesion molecules (CAMs). Several types of cell adhesion molecules are known and one major class of these molecules are cadherins. There are dozens of different cadherins that are expressed on different cell types. Cadherins bind to other cadherins in a like-to-like manner: E-cadherin (found on many epithelial cells) binds preferentially to other E-cadherin molecules. Mesenchymal cells usually express other cadherin types such as N-cadherin.
### Extracellular matrix
The extracellular matrix (ECM) is involved in keeping tissues separated, providing structural support or providing a structure for cells to migrate on. Collagen, laminin, and fibronectin are major ECM molecules that are secreted and assembled into sheets, fibers, and gels.
|
https://en.wikipedia.org/wiki/Morphogenesis
|
passage: ### Small-scale integration CPUs
During this period, a method of manufacturing many interconnected transistors in a compact space was developed. The integrated circuit (IC) allowed a large number of transistors to be manufactured on a single semiconductor-based die, or "chip". At first, only very basic non-specialized digital circuits such as NOR gates were miniaturized into ICs. CPUs based on these "building block" ICs are generally referred to as "small-scale integration" (SSI) devices. SSI ICs, such as the ones used in the Apollo Guidance Computer, usually contained up to a few dozen transistors. To build an entire CPU out of SSI ICs required thousands of individual chips, but still consumed much less space and power than earlier discrete transistor designs.
IBM's System/370, follow-on to the System/360, used SSI ICs rather than Solid Logic Technology discrete-transistor modules. DEC's PDP-8/I and KI10 PDP-10 also switched from the individual transistors used by the PDP-8 and KA PDP-10 to SSI ICs, and their extremely popular PDP-11 line was originally built with SSI ICs, but was eventually implemented with LSI components once these became practical.
### Large-scale integration CPUs
Lee Boysel published influential articles, including a 1967 "manifesto", which described how to build the equivalent of a 32-bit mainframe computer from a relatively small number of large-scale integration circuits (LSI).
|
https://en.wikipedia.org/wiki/Central_processing_unit
|
passage: It is an alternative to methods from the Bayesian literature such as bridge sampling and defensive importance sampling.
Here is a simple version of the nested sampling algorithm, followed by a description of how it computes the marginal probability density
$$
Z=P(D\mid M)
$$
where
$$
M
$$
is
$$
M_1
$$
or
$$
M_2
$$
:
Start with $N$ points $\theta_1, \ldots, \theta_N$ sampled from the prior.
for $i = 1$ to $j$ do    % The number of iterations j is chosen by guesswork.
  $L_i := \min($ current likelihood values of the points $)$;
  $X_i := \exp(-i/N)$;
  $w_i := X_{i-1} - X_i$;
  $Z := Z + L_i \cdot w_i$;
  Save the point with least likelihood as a sample point with weight $w_i$.
  Update the point with least likelihood with some Markov chain Monte Carlo steps according to the prior, accepting only steps that keep the likelihood above $L_i$.
end
return $Z$;
At each iteration, $X_i$ is an estimate of the amount of prior mass covered by the hypervolume in parameter space of all points with likelihood greater than $L_i$.
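The pseudocode translates nearly line-for-line into Python. In the sketch below, `log_likelihood` and `sample_prior` are user-supplied stand-ins (assumptions, not part of the original pseudocode), and the constrained Markov chain Monte Carlo update is replaced with naive rejection sampling from the prior, which is statistically equivalent but only practical for toy problems:

```python
import numpy as np

def nested_sampling(log_likelihood, sample_prior, n_points=100, n_iter=1000):
    """Minimal sketch of nested sampling: returns the evidence estimate Z
    and the weighted samples accumulated along the way."""
    points = [sample_prior() for _ in range(n_points)]
    logls = np.array([log_likelihood(p) for p in points], dtype=float)
    z, x_prev, samples = 0.0, 1.0, []
    for i in range(1, n_iter + 1):
        worst = int(np.argmin(logls))         # index of the least-likely point
        x_i = np.exp(-i / n_points)           # estimated remaining prior mass
        w_i = x_prev - x_i                    # width of the discarded shell
        z += np.exp(logls[worst]) * w_i       # Z := Z + L_i * w_i
        samples.append((points[worst], w_i))  # save the point with weight w_i
        x_prev = x_i
        threshold = logls[worst]
        # Replace the worst point with a fresh prior draw constrained to
        # likelihood above L_i (a crude stand-in for the MCMC update).
        while True:
            candidate = sample_prior()
            if log_likelihood(candidate) > threshold:
                break
        points[worst] = candidate
        logls[worst] = log_likelihood(candidate)
    return z, samples
```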
|
https://en.wikipedia.org/wiki/Nested_sampling_algorithm
|
passage: ## Solutions
The effect of jaggies can be reduced by a graphics technique known as spatial anti-aliasing. Anti-aliasing smooths out jagged lines by surrounding them with transparent pixels to simulate the appearance of fractionally-filled pixels when viewed at a distance. The downside of anti-aliasing is that it reduces contrast – rather than sharp black/white transitions, there are shades of gray – and the resulting image can appear fuzzy. This is an inescapable trade-off: if the resolution is insufficient to display the desired detail, the output will either be jagged, fuzzy, or some combination thereof. While machine learning-based upscaling techniques such as DLSS can be used to infer this missing information, other types of artifacts may be introduced in the process.
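The simplest way to see the technique is supersampling (SSAA): render at a multiple of the target resolution with hard edges, then average each block of subpixels so that partially covered pixels come out as intermediate shades. A minimal NumPy sketch, with a hypothetical diagonal-line scene (the function and its parameters are illustrative, not from any particular graphics API):

```python
import numpy as np

def supersampled_line(width, height, factor=4):
    """Render a hard-edged diagonal line at factor x the target resolution,
    then box-filter down: fractional pixel coverage becomes a gray level,
    which is the essence of spatial anti-aliasing."""
    hw, hh = width * factor, height * factor
    hi_res = np.zeros((hh, hw))
    for sx in range(hw):                       # aliased line on the fine grid
        sy = int(sx * hh / hw)
        hi_res[max(0, sy - factor):sy + factor, sx] = 1.0
    # Average each factor x factor block down to one output pixel.
    return hi_res.reshape(height, factor, width, factor).mean(axis=(1, 3))

img = supersampled_line(64, 64)
print(img.shape, img.min(), img.max())         # (64, 64), values in [0, 1]
```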
In real-time 3D rendering such as in video games, various anti-aliasing techniques are used to remove jaggies created by the edges of polygons and other contrasting lines. Since anti-aliasing can impose a significant performance overhead, games for home computers often allow users to choose the level and type of anti-aliasing in use in order to optimize their experience, whereas on consoles this setting is typically fixed for each title to ensure a consistent experience. While anti-aliasing is generally implemented through graphics APIs like DirectX and Vulkan, some consoles such as the Xbox 360 and PlayStation 3 are also capable of anti-aliasing to little direct performance cost by way of dedicated hardware which performs anti-aliasing on the contents of the framebuffer once it has been rendered by the GPU.
|
https://en.wikipedia.org/wiki/Jaggies
|
passage: If π is a meandric permutation, then π² consists of two cycles, one containing all the even symbols and the other all the odd symbols. Permutations with this property are called alternate permutations, since the symbols in the original permutation alternate between odd and even integers. However, not all alternate permutations are meandric because it may not be possible to draw them without introducing a self-intersection in the curve. For example, the order 3 alternate permutation, (1 4 3 6 5 2), is not meandric.
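Since squaring the permutation is a purely mechanical test, the alternation condition is easy to check in code. Below is a small Python sketch (the dict-based 1-indexed representation is a convenience of this sketch, not a convention from the source); note that it tests only the necessary alternation property, not meandricity itself:

```python
def cycles(perm):
    """Decompose a permutation, given as a dict {symbol: image}, into cycles."""
    seen, out = set(), []
    for start in perm:
        if start in seen:
            continue
        cyc, x = [], start
        while x not in seen:
            seen.add(x)
            cyc.append(x)
            x = perm[x]
        out.append(cyc)
    return out

def is_alternate(perm):
    """True if perm squared has exactly two cycles, one all-odd, one all-even."""
    square = {x: perm[perm[x]] for x in perm}
    cs = cycles(square)
    if len(cs) != 2:
        return False
    parities = {frozenset(x % 2 for x in c) for c in cs}
    return parities == {frozenset({0}), frozenset({1})}

# The order-3 example from the text: cycle (1 4 3 6 5 2), i.e. 1->4->3->6->5->2->1.
perm = {1: 4, 4: 3, 3: 6, 6: 5, 5: 2, 2: 1}
print(is_alternate(perm))  # True: alternate, yet known not to be meandric
```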
## Open meander
Given a fixed line L in the Euclidean plane, an open meander of order n is a non-self-intersecting curve in the plane that crosses the line at n points. Two open meanders are equivalent if one can be continuously deformed into the other while maintaining its property of being an open meander and leaving invariant the order in which the bridges on the road are crossed.
Examples
The open meander of order 1 intersects the line once. The open meander of order 2 intersects the line twice.
### Open meandric numbers
The number of distinct open meanders of order n is the open meandric number mn. The first fifteen open meandric numbers are given below.
m1 = 1
m2 = 1
m3 = 2
m4 = 3
m5 = 8
m6 = 14
m7 = 42
m8 = 81
m9 = 262
m10 = 538
m11 = 1828
m12 = 3926
m13 = 13820
m14 = 30694
m15 = 110954
|
https://en.wikipedia.org/wiki/Meander_%28mathematics%29
|
passage: The notation $\log^2 n$ means $(\log n)^2$.
### Comparison sorts
Below is a table of comparison sorts. Mathematical analysis demonstrates a comparison sort cannot perform better than $O(n \log n)$ on average.
Comparison sorts:

| Name | Best | Average | Worst | Memory | Stable | In-place | Method | Other notes |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| In-place merge sort | — | — | $n \log^2 n$ | 1 | Yes | Yes | Merging | Can be implemented as a stable sort based on stable in-place merging. |
| Heapsort | $n \log n$ | $n \log n$ | $n \log n$ | 1 | No | Yes | Selection | |
| Introsort | $n \log n$ | $n \log n$ | $n \log n$ | $\log n$ | No | No | Partitioning & Selection | Used in several STL implementations. |
| Merge sort | $n \log n$ | $n \log n$ | $n \log n$ | $n$ | Yes | No | Merging | Highly parallelizable (up to $O(\log n)$ using the Three Hungarians' Algorithm). |
| Tournament sort | $n \log n$ | $n \log n$ | $n \log n$ | $n$ | No | No | Selection | Variation of Heapsort. |
| Tree sort | $n \log n$ | $n \log n$ | $n \log n$ | $n$ | Yes | No | Insertion | When using a self-balancing binary search tree. |
| Block sort | $n$ | $n \log n$ | $n \log n$ | 1 | Yes | Yes | Insertion & Merging | Combine a block-based in-place merge algorithm with a bottom-up merge sort. |
| Smoothsort | $n$ | $n \log n$ | $n \log n$ | 1 | No | Yes | Selection | An adaptive variant of heapsort based upon the Leonardo sequence rather than a traditional binary heap. |
| Timsort | $n$ | $n \log n$ | $n \log n$ | $n$ | Yes | No | Insertion & Merging | Makes n-1 comparisons when the data is already sorted or reverse sorted. |
| Patience sorting | $n$ | $n \log n$ | $n \log n$ | $n$ | No | No | Insertion & Selection | Finds all the longest increasing subsequences in $O(n \log n)$. |
| Cubesort | $n$ | $n \log n$ | $n \log n$ | $n$ | Yes | No | Insertion | Makes n-1 comparisons when the data is already sorted or reverse sorted. |
| Quicksort | $n \log n$ | $n \log n$ | $n^2$ | $\log n$ | No | No | Partitioning | Quicksort can be done in-place with $O(\log n)$ stack space. |
| Library sort | $n$ | $n \log n$ | $n^2$ | $n$ | No | No | Insertion | Similar to a gapped insertion sort. It requires randomly permuting the input to warrant with-high-probability time bounds, which makes it not stable. |
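As a concrete companion to the table, here is a minimal Python merge sort. It illustrates why the algorithm is stable (the `<=` in the merge keeps equal keys in input order) and uses $O(n)$ auxiliary memory, matching the table's Merge sort row; it is an illustrative sketch, not the optimized variants the notes describe:

```python
def merge_sort(items):
    """Stable O(n log n) comparison sort: split, recursively sort, merge."""
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    left, right = merge_sort(items[:mid]), merge_sort(items[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        # '<=' keeps equal elements in their original order (stability).
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]
```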
|
https://en.wikipedia.org/wiki/Sorting_algorithm
|
passage: Each of the outer 9 doublet microtubules extends a pair of dynein arms (an "inner" and an "outer" arm) to the adjacent microtubule; these produce force through ATP hydrolysis. The flagellar axoneme also contains radial spokes, polypeptide complexes extending from each of the outer nine microtubule doublets towards the central pair, with the "head" of the spoke facing inwards. The radial spoke is thought to be involved in the regulation of flagellar motion, although its exact function and method of action are not yet understood.
#### Flagella versus cilia
The regular beat patterns of eukaryotic cilia and flagella generate motion on a cellular level. Examples range from the propulsion of single cells such as the swimming of spermatozoa to the transport of fluid along a stationary layer of cells such as in the respiratory tract.
Although eukaryotic cilia and flagella are ultimately the same, they are sometimes classed by their pattern of movement, a tradition from before their structures were known. In the case of flagella, the motion is often planar and wave-like, whereas the motile cilia often perform a more complicated three-dimensional motion with a power and recovery stroke. Yet another traditional form of distinction is by the number of 9+2 organelles on the cell.
#### Intraflagellar transport
Intraflagellar transport, the process by which axonemal subunits, transmembrane receptors, and other proteins are moved up and down the length of the flagellum, is essential for proper functioning of the flagellum, in both motility and signal transduction.
|
https://en.wikipedia.org/wiki/Flagellum
|
passage: For constant-volume calorimetry:
$$
\delta Q = C_V\, \delta T
$$
where $\delta T$ denotes the increment in temperature and $C_V$ denotes the heat capacity at constant volume.
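As a worked numeric example (the monatomic ideal-gas value $C_V = \tfrac{3}{2} n R$ is imported from standard thermodynamics, not from this passage):

```python
R = 8.314            # J/(mol K), molar gas constant
n = 1.0              # mol of a monatomic ideal gas
C_V = 1.5 * n * R    # heat capacity at constant volume, (3/2) n R
delta_T = 2.0        # K, increment in temperature
delta_Q = C_V * delta_T
print(delta_Q)       # about 24.9 J of heat gained
```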
#### Classical heat calculation with respect to pressure
From the above rule of calculation of heat with respect to volume, there follows one with respect to pressure.
In a process of small increments $\delta p$ of its pressure and $\delta T$ of its temperature, the increment of heat $\delta Q$ gained by the body of calorimetric material is given by
$$
\delta Q = C^{(p)}_T(p,T)\, \delta p + C^{(T)}_p(p,T)\, \delta T
$$
where $C^{(p)}_T(p,T)$ denotes the latent heat with respect to pressure of the calorimetric material at constant temperature, while the volume and pressure of the body are allowed to vary freely, at pressure $p$ and temperature $T$; and $C^{(T)}_p(p,T)$ denotes the heat capacity of the calorimetric material at constant pressure, while the temperature and volume of the body are allowed to vary freely, at pressure $p$ and temperature $T$.
|
https://en.wikipedia.org/wiki/Calorimetry
|
passage: Later work complemented this work by quantum-entangling two mechanical oscillators.
### Entanglement of elements of living systems
In October 2018, physicists reported producing quantum entanglement using living organisms, particularly between photosynthetic molecules within living bacteria and quantized light.
Living organisms (green sulphur bacteria) have been studied as mediators to create quantum entanglement between otherwise non-interacting light modes, showing high entanglement between light and bacterial modes, and to some extent, even entanglement within the bacteria.
### Entanglement of quarks and gluons in protons
Physicists at Brookhaven National Laboratory demonstrated quantum entanglement within protons, showing quarks and gluons are interdependent rather than isolated particles. Using high-energy electron-proton collisions, they revealed maximal entanglement, reshaping our understanding of proton structure.
|
https://en.wikipedia.org/wiki/Quantum_entanglement
|
passage: Quantum tunnelling has several important consequences, enabling radioactive decay, nuclear fusion in stars, and applications such as scanning tunnelling microscopy, tunnel diode and tunnel field-effect transistor.
When quantum systems interact, the result can be the creation of quantum entanglement: their properties become so intertwined that a description of the whole solely in terms of the individual parts is no longer possible. Erwin Schrödinger called entanglement "...the characteristic trait of quantum mechanics, the one that enforces its entire departure from classical lines of thought". Quantum entanglement enables quantum computing and is part of quantum communication protocols, such as quantum key distribution and superdense coding. Contrary to popular misconception, entanglement does not allow sending signals faster than light, as demonstrated by the no-communication theorem.
Another possibility opened by entanglement is testing for "hidden variables", hypothetical properties more fundamental than the quantities addressed in quantum theory itself, knowledge of which would allow more exact predictions than quantum theory provides. A collection of results, most significantly Bell's theorem, have demonstrated that broad classes of such hidden-variable theories are in fact incompatible with quantum physics. According to Bell's theorem, if nature actually operates in accord with any theory of local hidden variables, then the results of a Bell test will be constrained in a particular, quantifiable way. Many Bell tests have been performed and they have shown results incompatible with the constraints imposed by local hidden variables.
|
https://en.wikipedia.org/wiki/Quantum_mechanics
|
passage: Then
$$
\hat{x}_i = \left(\frac{1/\sigma_x^2}{1/\sigma_x^2 + 1/\sigma_d^2} \mu + \frac{1/\sigma_d^2}{1/\sigma_x^2 + 1/\sigma_d^2} d \right) + \left(\frac{1/\sigma_x^2}{1/\sigma_x^2 + 1/\sigma_d^2} \xi_i + \frac{1/\sigma_d^2}{1/\sigma_x^2 + 1/\sigma_d^2} \epsilon_i \right).
$$
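Reading $\mu, \sigma_x$ as the prior mean and spread, $d, \sigma_d$ as the datum and its noise level, and $\xi_i, \epsilon_i$ as zero-mean perturbations (the usual reading in ensemble Kalman filter derivations; an assumption here, since the passage is excerpted), the formula says each updated ensemble member is a precision-weighted combination of prior and data. A quick Monte Carlo sanity check in Python:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, d = 0.0, 2.0
sx, sd = 1.0, 0.5                         # prior and data standard deviations
wp = (1/sx**2) / (1/sx**2 + 1/sd**2)      # weight on the prior mean
wd = (1/sd**2) / (1/sx**2 + 1/sd**2)      # weight on the data
xi = rng.normal(0, sx, 100_000)           # prior perturbations
eps = rng.normal(0, sd, 100_000)          # data perturbations
x_hat = (wp*mu + wd*d) + (wp*xi + wd*eps)
print(x_hat.mean())  # ~1.6, the posterior mean wp*mu + wd*d
print(x_hat.var())   # ~0.2, the posterior variance 1/(1/sx^2 + 1/sd^2)
```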
|
https://en.wikipedia.org/wiki/Ensemble_Kalman_filter
|
passage: In contrast, each egg cell or ovum is relatively large and non-motile.
Oogenesis, the process of female gamete formation in animals, involves meiosis (including meiotic recombination) of a diploid primary oocyte to produce a haploid ovum. Spermatogenesis, the process of male gamete formation in animals, involves meiosis in a diploid primary spermatocyte to produce haploid spermatozoa. In animals, ova are produced in the ovaries of females and sperm develop in the testes of males. During fertilization, a spermatozoon and an ovum, each carrying half of the genetic information of an individual, unite to form a zygote that develops into a new diploid organism.
## Evolution
It is generally accepted that isogamy is the ancestral state from which anisogamy and oogamy evolved, although its evolution has left no fossil records. There are almost invariably only two gamete types, all analyses showing that intermediate gamete sizes are eliminated due to selection. Since intermediate sized gametes do not have the same advantages as small or large ones, they do worse than small ones in mobility and numbers, and worse than large ones in supply.
## Differences between gametes and somatic cells
In contrast to a gamete, which has only one set of chromosomes, a diploid somatic cell has two sets of homologous chromosomes, one of which is a copy of the chromosome set from the sperm and one a copy of the chromosome set from the egg cell.
|
https://en.wikipedia.org/wiki/Gamete
|
passage: See also Yi (2004). Define
$$
\varphi(q) = \vartheta_{00}(0;\tau) = \theta_3(0;q) = \sum_{n=-\infty}^\infty q^{n^2}
$$
with the nome $q = e^{\pi i \tau}$, $\tau = n\sqrt{-1}$, and Dedekind eta function $\eta(\tau)$. Then for $n = 1, 2, 3, \dots$
$$
\begin{align}
\varphi\left(e^{-\pi} \right) &= \frac{\sqrt[4]{\pi}}{\Gamma\left(\frac34\right)} = \sqrt2\,\eta\left(\sqrt{-1}\right)\\
\varphi\left(e^{-2\pi}\right) &= \frac{\sqrt[4]{\pi}}{\Gamma\left(\frac34\right)} \frac{\sqrt{2+\sqrt2}}{2}\\
\varphi\left(e^{-3\pi}\right) &= \frac{\sqrt[4]{\pi}}{\Gamma\left(\frac34\right)} \frac{\sqrt{1+\sqrt3}}{\sqrt[8]{108}}
\end{align}
$$
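These closed forms can be checked numerically. The sketch below uses Python's mpmath, whose `jtheta(3, 0, q)` is $\theta_3(0;q)$, to verify the first identity; treat it as a sanity check rather than a proof:

```python
from mpmath import mp, jtheta, exp, pi, gamma

mp.dps = 30                        # work with 30 significant digits
q = exp(-pi)                       # nome q = e^{-pi}, i.e. tau = sqrt(-1)
lhs = jtheta(3, 0, q)              # theta_3(0; q) = sum over n of q^(n^2)
rhs = pi**(mp.mpf(1)/4) / gamma(mp.mpf(3)/4)
print(lhs)                         # 1.08643481...
print(rhs)                         # agrees with lhs to working precision
```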
|
https://en.wikipedia.org/wiki/Theta_function
|
passage: The basis of the method is the recognition that the structure of any system, the many circular, interlocking, sometimes time-delayed relationships among its components, is often just as important in determining its behavior as the individual components themselves. Examples are chaos theory and social dynamics. It is also claimed that because there are often properties-of-the-whole which cannot be found among the properties-of-the-elements, in some cases the behavior of the whole cannot be explained in terms of the behavior of the parts.
## History
System dynamics was created during the mid-1950s by Professor Jay Forrester of the Massachusetts Institute of Technology. In 1956, Forrester accepted a professorship in the newly formed MIT Sloan School of Management. His initial goal was to determine how his background in science and engineering could be brought to bear, in some useful way, on the core issues that determine the success or failure of corporations. Forrester's insights into the common foundations that underlie engineering, which led to the creation of system dynamics, were triggered, to a large degree, by his involvement with managers at General Electric (GE) during the mid-1950s. At that time, the managers at GE were perplexed because employment at their appliance plants in Kentucky exhibited a significant three-year cycle. The business cycle was judged to be an insufficient explanation for the employment instability.
|
https://en.wikipedia.org/wiki/System_dynamics
|
passage: $$
\frac{\partial H(\mathbf{x}(t),\mathbf{u}(t),\mathbf{\lambda}(t),t)}{\partial \mathbf{u}} = 0
$$
which is the maximum principle,
$$
\frac{\partial H(\mathbf{x}(t),\mathbf{u}(t),\mathbf{\lambda}(t),t)}{\partial \mathbf{\lambda}} = \dot{\mathbf{x}}(t)
$$
which generates the state transition function $\mathbf{f}(\mathbf{x}(t),\mathbf{u}(t),t) = \dot{\mathbf{x}}(t)$, and
$$
\frac{\partial H(\mathbf{x}(t),\mathbf{u}(t),\mathbf{\lambda}(t),t)}{\partial \mathbf{x}} = - \dot{\mathbf{\lambda}}(t)
$$
which generates the costate equations
$$
\dot{\mathbf{\lambda}}(t) = - \left[ I_{\mathbf{x}}(\mathbf{x}(t),\mathbf{u}(t),t) + \mathbf{\lambda}^{\mathsf{T}}(t) \mathbf{f}_{\mathbf{x}}(\mathbf{x}(t),\mathbf{u}(t),t) \right]
$$
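To see the three conditions work together, consider a hypothetical scalar problem with running cost $I = (x^2+u^2)/2$ and dynamics $\dot{x} = u$ (the problem choice is illustrative, not from the source); the SymPy sketch below derives each condition symbolically:

```python
import sympy as sp

t = sp.symbols('t')
x = sp.Function('x')(t)
u = sp.Function('u')(t)
lam = sp.Function('lambda')(t)

I = (x**2 + u**2) / 2               # running cost
f = u                               # dynamics: xdot = u
H = I + lam * f                     # Hamiltonian

print(sp.Eq(sp.diff(H, u), 0))      # maximum principle: u + lambda = 0
print(sp.Eq(sp.diff(H, lam), f))    # dH/dlambda recovers the state equation
print(sp.Eq(sp.diff(lam, t), -sp.diff(H, x)))  # costate: lambdadot = -x
```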
|
https://en.wikipedia.org/wiki/Hamiltonian_%28control_theory%29
|
passage: #### k = −4

When $k=-4$, then $\left(\frac{a}{2}\right)^2 - N\left(\frac{b}{2}\right)^2 = -1$. Composing with itself yields
$$
\left(\frac{a^2+Nb^2}{4}\right)^2 - N\left(\frac{ab}{2}\right)^2 = 1
\quad\Rightarrow\quad
\left(\frac{a^2+2}{2}\right)^2 - N\left(\frac{ab}{2}\right)^2 = 1.
$$
Again composing with itself yields
$$
\left(\frac{(a^2+2)^2 + Na^2b^2}{4}\right)^2 - N\left(\frac{ab(a^2+2)}{2}\right)^2 = 1
\quad\Rightarrow\quad
\left(\frac{a^4+4a^2+2}{2}\right)^2 - N\left(\frac{ab(a^2+2)}{2}\right)^2 = 1.
$$
Finally, from the earlier equations, compose the triples $\left(\frac{a^2+2}{2}, \frac{ab}{2}, 1\right)$ and $\left(\frac{a^4+4a^2+2}{2}, \frac{ab(a^2+2)}{2}, 1\right)$ to get
$$
\left(\frac{(a^2+2)(a^4+4a^2+2) + Na^2b^2(a^2+2)}{4}\right)^2 - N\left(\frac{ab(a^4+4a^2+3)}{2}\right)^2 = 1.
$$
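These identities are purely algebraic, so they can be verified with exact rational arithmetic. A quick Python check using the hypothetical instance $N = 5$, $a = 4$, $b = 2$, which satisfies $(a/2)^2 - N(b/2)^2 = -1$:

```python
from fractions import Fraction as F

N = 5
a, b = 4, 2
# Starting relation: (a/2)^2 - N(b/2)^2 = -1
assert F(a, 2)**2 - N * F(b, 2)**2 == -1

# First composition: ((a^2+2)/2)^2 - N(ab/2)^2 = 1
assert F(a*a + 2, 2)**2 - N * F(a*b, 2)**2 == 1

# Second composition: ((a^4+4a^2+2)/2)^2 - N(ab(a^2+2)/2)^2 = 1
assert F(a**4 + 4*a*a + 2, 2)**2 - N * F(a*b*(a*a + 2), 2)**2 == 1

# Final composed triple
lhs = F((a*a + 2)*(a**4 + 4*a*a + 2) + N*a*a*b*b*(a*a + 2), 4)**2 \
      - N * F(a*b*(a**4 + 4*a*a + 3), 2)**2
assert lhs == 1
print("all identities hold for N=5, a=4, b=2")
```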
|
https://en.wikipedia.org/wiki/Chakravala_method
|