source | text
---|---
https://en.wikipedia.org/wiki/Computational%20science
|
Computational science, also known as scientific computing, technical computing or scientific computation (SC), is a division of science that uses advanced computing capabilities to understand and solve complex physical problems. This includes
Algorithms (numerical and non-numerical): mathematical models, computational models, and computer simulations developed to solve science (e.g., physical, biological, and social), engineering, and humanities problems
Computer hardware that develops and optimizes the advanced system hardware, firmware, networking, and data management components needed to solve computationally demanding problems
The computing infrastructure that supports both the science and engineering problem solving and the developmental computer and information science
In practical use, it is typically the application of computer simulation and other forms of computation from numerical analysis and theoretical computer science to solve problems in various scientific disciplines. The field is different from theory and laboratory experiments, which are the traditional forms of science and engineering. The scientific computing approach is to gain understanding through the analysis of mathematical models implemented on computers. Scientists and engineers develop computer programs and application software that model systems being studied and run these programs with various sets of input parameters. The essence of computational science is the application of numerical alg
|
https://en.wikipedia.org/wiki/Small%20set
|
In mathematics, the term small set may refer to:
Small set (category theory)
Small set (combinatorics), a set of positive integers whose sum of reciprocals converges
Small set, an element of a Grothendieck universe
See also
Ideal (set theory)
Natural density
Large set (disambiguation)
|
https://en.wikipedia.org/wiki/Large%20set
|
In mathematics, the term large set is sometimes used to refer to any set that is "large" in some sense. It has specialized meanings in three branches of mathematics:
Large set (category theory), a set that does not belong to a fixed universe of sets
Large set (combinatorics), a set of integers whose sum of reciprocals diverges
Large set (Ramsey theory), a set of integers with the property that, if all the integers are colored, one of the color classes has long arithmetic progressions whose differences are in the set
See also
Natural density
Small set (disambiguation)
|
https://en.wikipedia.org/wiki/Carmichael%20function
|
In number theory, a branch of mathematics, the Carmichael function $\lambda(n)$ of a positive integer $n$ is the smallest positive integer $m$ such that
$$a^m \equiv 1 \pmod{n}$$
holds for every integer $a$ coprime to $n$. In algebraic terms, $\lambda(n)$ is the exponent of the multiplicative group of integers modulo $n$.
The Carmichael function is named after the American mathematician Robert Carmichael who defined it in 1910. It is also known as Carmichael's λ function, the reduced totient function, and the least universal exponent function.
The following table compares the first 36 values of $\lambda(n)$ with Euler's totient function $\varphi(n)$ (in bold if they are different; the $n$ such that they are different are listed in ).
Numerical examples
Carmichael's function at 5 is 4, $\lambda(5) = 4$, because for any number $a$ coprime to 5, i.e. $a \in \{1, 2, 3, 4\}$, we have $a^4 \equiv 1 \pmod{5}$: namely, $1^4 = 1$, $2^4 = 16 \equiv 1$, $3^4 = 81 \equiv 1$, and $4^4 = 256 \equiv 1 \pmod{5}$. And 4 is the smallest exponent with this property, because $2^1 = 2 \not\equiv 1$, $2^2 = 4 \not\equiv 1$, and $2^3 = 8 \equiv 3 \not\equiv 1 \pmod{5}$. Moreover, Euler's totient function at 5 is 4, $\varphi(5) = 4$, because there are exactly 4 numbers less than and coprime to 5 (1, 2, 3, and 4). Euler's theorem assures that $a^4 \equiv 1 \pmod{5}$ for all $a$ coprime to 5, and 4 is the smallest such exponent.
Carmichael's function at 8 is 2, $\lambda(8) = 2$, because for any number $a$ coprime to 8, i.e. $a \in \{1, 3, 5, 7\}$, it holds that $a^2 \equiv 1 \pmod{8}$. Namely, $1^2 = 1$, $3^2 = 9 \equiv 1$, $5^2 = 25 \equiv 1$, and $7^2 = 49 \equiv 1 \pmod{8}$, and 2 is the smallest such exponent because $3^1 = 3 \not\equiv 1 \pmod{8}$. Euler's totient function at 8 is 4, $\varphi(8) = 4$, because there are exactly 4 numbers less than and coprime to 8 (1, 3, 5, and 7). Moreover, Euler's theorem assures that $a^4 \equiv 1 \pmod{8}$ for all $a$ coprime to 8, but 4 is not the smallest such exponent.
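These two checks can be reproduced mechanically. The following is a minimal Python sketch (the brute-force search and the helper names carmichael_lambda and euler_phi are my own illustration, not from the article) that finds the smallest exponent m with a^m ≡ 1 (mod n) for all a coprime to n:

from math import gcd

def carmichael_lambda(n: int) -> int:
    # Smallest m >= 1 with pow(a, m, n) == 1 for every a coprime to n (brute force).
    coprimes = [a for a in range(1, n) if gcd(a, n) == 1]
    m = 1
    while not all(pow(a, m, n) == 1 for a in coprimes):
        m += 1
    return m

def euler_phi(n: int) -> int:
    # Number of integers in 1..n-1 that are coprime to n.
    return sum(1 for a in range(1, n) if gcd(a, n) == 1)

assert carmichael_lambda(5) == 4 and euler_phi(5) == 4   # lambda(5) = phi(5) = 4
assert carmichael_lambda(8) == 2 and euler_phi(8) == 4   # lambda(8) = 2 < phi(8) = 4

The brute force is far slower than the factorization-based computation described next, but it follows the definition directly.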
Computing with Carmichael's theorem
By the unique factorization th
|
https://en.wikipedia.org/wiki/Provable%20security
|
Provable security refers to any type or level of computer security that can be proved. It is used in different ways by different fields.
Usually, this refers to mathematical proofs, which are common in cryptography. In such a proof, the capabilities of the attacker are defined by an adversarial model (also referred to as attacker model): the aim of the proof is to show that the attacker must solve the underlying hard problem in order to break the security of the modelled system. Such a proof generally does not consider side-channel attacks or other implementation-specific attacks, because they are usually impossible to model without implementing the system (and thus, the proof only applies to this implementation).
Outside of cryptography, the term is often used in conjunction with secure coding and security by design, both of which can rely on proofs to show the security of a particular approach. As with the cryptographic setting, this involves an attacker model and a model of the system. For example, code can be verified to match the intended functionality, described by a model: this can be done through static checking. These techniques are sometimes used for evaluating products (see Common Criteria): the security here depends not only on the correctness of the attacker model, but also on the model of the code.
Finally, the term provable security is sometimes used by sellers of security software that are attempting to sell security products like firewalls, antivirus softw
|
https://en.wikipedia.org/wiki/Clock%20synchronization
|
Clock synchronization is a topic in computer science and engineering that aims to coordinate otherwise independent clocks. Even when initially set accurately, real clocks will differ after some amount of time due to clock drift, caused by clocks counting time at slightly different rates. There are several problems that occur as a result of clock rate differences and several solutions, some being more acceptable than others in certain contexts.
Terminology
In serial communication, clock synchronization can refer to clock recovery which achieves frequency synchronization, as opposed to full phase synchronization. Such clock synchronization is used in synchronization in telecommunications and automatic baud rate detection.
Plesiochronous or isochronous operation refers to a system with frequency synchronization and loose constraints on phase synchronization. Synchronous operation implies a tighter synchronization based on time perhaps in addition to frequency.
Problems
As a result of the difficulties managing time at smaller scales, there are problems associated with clock skew that take on more complexity in distributed computing in which several computers will need to realize the same global time. For instance, in Unix systems the make command is used to compile new or modified code and seeks to avoid recompiling unchanged code. The make command uses the clock of the machine it runs on to determine which source files need to be recompiled. If the sources reside on a separat
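The staleness check at the heart of this example is a one-line timestamp comparison; the sketch below (illustrative Python, not make's actual implementation) shows why a skewed clock on a remote file server can silently defeat it:

import os

def needs_rebuild(source: str, target: str) -> bool:
    # make-style check: rebuild when the source is newer than the built target.
    # If the source's mtime comes from a machine whose clock runs behind ours,
    # a fresh edit can appear older than the target and be skipped incorrectly.
    if not os.path.exists(target):
        return True
    return os.path.getmtime(source) > os.path.getmtime(target)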
|
https://en.wikipedia.org/wiki/Andrea%20diSessa
|
Andrea A. diSessa (born June 3, 1947) is an education researcher and author of the book Turtle Geometry about Logo. He has also written highly cited research papers on the epistemology of physics, educational experimentation, and constructivist analysis of knowledge. He also created, with Hal Abelson, the Boxer Programming Environment at the Massachusetts Institute of Technology.
Personal history
DiSessa received an A.B. in physics from Princeton University in 1969 and a Ph.D. in physics from the Massachusetts Institute of Technology in 1975. He is currently Evelyn Lois Corey Professor of Education at the University of California, Berkeley and has been a member of the National Academy for Education since 1995.
Some of his notable work in education research focuses on the concepts of material intelligence and computational literacy, and on ontological innovations and the role of theory in design-based research.
Material intelligence
Material intelligence can be thought of as a subset of distributed cognition, where it refers to the new knowledge that furthers human intelligence and skills by interaction with the computer, and existing computer literacy, in a social environment. It can also be the ability of tools in general, and computers in particular, to increase the intelligence and skills of the human mind. It was coined by Andrea diSessa in his book Changing Minds: Computers, Learning and Literacy. He uses the terms computational literacy, material literacy, and material intelli
|
https://en.wikipedia.org/wiki/Michael%20J.%20L.%20Kirby
|
Michael J. L. Kirby (born August 5, 1941) is a Canadian politician. He sat in the Senate of Canada as a Liberal representing Nova Scotia. He is the former chair of the Mental Health Commission of Canada.
Born in Montreal, Kirby earned a Bachelor of Science and a Master of Arts in mathematics from Dalhousie University, where he was a member of Phi Delta Theta fraternity, as well as a Doctor of Philosophy in applied mathematics from Northwestern University.
In the 1960s Kirby was a professor of business administration and public administration at Dalhousie and also taught at the University of Chicago and the University of Kent.
Kirby worked as principal assistant to the Premier of Nova Scotia Gerald Regan from 1970 to 1973 and Assistant Principal Secretary to Prime Minister Pierre Trudeau from 1974 to 1976. He served as President of the Institute for Research on Public Policy from 1977 to 1980.
Kirby chaired the federal Task Force on Atlantic Fisheries which was established to recommend how to achieve and maintain a viable Atlantic fishing industry. It issued its report in 1982.
Kirby returned to public service in the 1980s as Secretary to the Canadian Cabinet for Federal-Provincial Relations and Deputy Clerk of the Queen's Privy Council for Canada. As such he participated in the federal-provincial negotiations that led to the patriation of the Canadian Constitution in 1982. He was elevated to the Canadian Senate by Pierre Trudeau in January 1984 weeks before the prime mini
|
https://en.wikipedia.org/wiki/Hard-core%20predicate
|
In cryptography, a hard-core predicate of a one-way function f is a predicate b (i.e., a function whose output is a single bit) which is easy to compute (as a function of x) but is hard to compute given f(x). In formal terms, there is no probabilistic polynomial-time (PPT) algorithm that computes b(x) from f(x) with probability significantly greater than one half over random choice of x. In other words, if x is drawn uniformly at random, then given f(x), any PPT adversary can only distinguish the hard-core bit b(x) and a uniformly random bit with negligible advantage over the length of x.
A hard-core function can be defined similarly. That is, if x is chosen uniformly at random, then given f(x), any PPT algorithm can only distinguish the hard-core function value h(x) and uniformly random bits of length |h(x)| with negligible advantage over the length of x.
A hard-core predicate captures "in a concentrated sense" the hardness of inverting f.
While a one-way function is hard to invert, there are no guarantees about the feasibility of computing partial information about the preimage x from the image f(x). For instance, while RSA is conjectured to be a one-way function, the Jacobi symbol of the preimage can be easily computed from that of the image.
It is clear that if a one-to-one function has a hard-core predicate, then it must be one way. Oded Goldreich and Leonid Levin (1989) showed how every one-way function can be trivially modified to obtain a one-way function that h
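The truncated sentence above refers to the Goldreich–Levin construction: from any one-way function f, form g(x, r) = (f(x), r) and take the inner product of x and r modulo 2 as the hard-core bit. A minimal Python sketch (the placeholder f and all helper names are illustrative; SHA-256 merely stands in for a conjectured one-way function):

import hashlib
import secrets

def inner_product_bit(x: bytes, r: bytes) -> int:
    # Goldreich-Levin predicate: <x, r> mod 2 over the bit strings x and r.
    acc = 0
    for xb, rb in zip(x, r):
        acc ^= xb & rb               # XOR of byte-wise ANDs
    return bin(acc).count("1") % 2   # parity of the surviving bits

def g(f, x: bytes, r: bytes):
    # g(x, r) = (f(x), r); its hard-core bit is inner_product_bit(x, r).
    return f(x), r

f = lambda x: hashlib.sha256(x).digest()   # stand-in "one-way" function
x, r = secrets.token_bytes(16), secrets.token_bytes(16)
y, r_public = g(f, x, r)
b = inner_product_bit(x, r)   # hard to predict from (y, r_public) if f is one-way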
|
https://en.wikipedia.org/wiki/Dual%20basis%20in%20a%20field%20extension
|
In mathematics, the linear algebra concept of dual basis can be applied in the context of a finite extension $L/K$, by using the field trace. This requires the property that the field trace $\operatorname{Tr}_{L/K}$ provides a non-degenerate quadratic form over $K$. This can be guaranteed if the extension is separable; it is automatically true if $K$ is a perfect field, and hence in the cases where $K$ is finite, or of characteristic zero.
A dual basis is not a concrete basis like the polynomial basis or the normal basis; rather it provides a way of using a second basis for computations.
Consider two bases $B_1 = \{x_0, x_1, \dots, x_{m-1}\}$ and $B_2 = \{y_0, y_1, \dots, y_{m-1}\}$ for elements in a finite field $\mathrm{GF}(p^m)$; then $B_2$ can be considered a dual basis of $B_1$ provided
$$\operatorname{Tr}(x_i \cdot y_j) = \begin{cases} 0, & i \neq j \\ 1, & i = j. \end{cases}$$
Here the trace of a value $\beta$ in $\mathrm{GF}(p^m)$ can be calculated as follows:
$$\operatorname{Tr}(\beta) = \sum_{i=0}^{m-1} \beta^{p^i}.$$
Using a dual basis can provide a way to easily communicate between devices that use different bases, rather than having to explicitly convert between bases using the change of bases formula. Furthermore, if a dual basis is implemented then conversion from an element in the original basis to the dual basis can be accomplished with multiplication by the multiplicative identity (usually 1).
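To make the trace and the duality condition concrete, here is a small Python sketch for GF(2³) with the irreducible polynomial x³ + x + 1 (the modulus choice and all helper names are my own, not from the article):

from itertools import permutations

MOD = 0b1011   # x^3 + x + 1, irreducible over GF(2); elements are 3-bit integers
M = 3          # extension degree m, so the field is GF(2^3)

def gf_mul(a: int, b: int) -> int:
    # Polynomial multiplication over GF(2), reduced modulo MOD.
    p = 0
    while b:
        if b & 1:
            p ^= a
        b >>= 1
        a <<= 1
        if a & (1 << M):   # degree reached m: XOR away the modulus
            a ^= MOD
    return p

def gf_trace(beta: int) -> int:
    # Tr(beta) = beta + beta^2 + beta^4, the sum of the Frobenius conjugates.
    t, power = 0, beta
    for _ in range(M):
        t ^= power
        power = gf_mul(power, power)   # squaring is the Frobenius map here
    return t   # always 0 or 1: the trace lands in GF(2)

def is_dual(B1, B2) -> bool:
    # B2 is dual to B1 iff Tr(x_i * y_j) is 1 when i == j and 0 otherwise.
    return all(gf_trace(gf_mul(x, y)) == (1 if i == j else 0)
               for i, x in enumerate(B1) for j, y in enumerate(B2))

# Brute-force the (unique) dual of the polynomial basis {1, x, x^2}.
B1 = [0b001, 0b010, 0b100]
print([B2 for B2 in permutations(range(1, 8), 3) if is_dual(B1, B2)])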
|
https://en.wikipedia.org/wiki/Pomeron
|
In physics, the pomeron is a Regge trajectory — a family of particles with increasing spin — postulated in 1961 to explain the slowly rising cross section of hadronic collisions at high energies. It is named after Isaak Pomeranchuk.
Overview
While other trajectories lead to falling cross sections, the pomeron can lead to logarithmically rising cross sections — which, experimentally, are approximately constant ones. The identification of the pomeron and the prediction of its properties was a major success of the Regge theory of strong interaction phenomenology. In later years, a BFKL pomeron was derived in further kinematic regimes from perturbative calculations in QCD, but its relationship to the pomeron seen in soft high energy scattering is still not fully understood.
One consequence of the pomeron hypothesis is that the cross sections of proton–proton and proton–antiproton scattering should be equal at high enough energies. This was demonstrated by the Soviet physicist Isaak Pomeranchuk by analytic continuation assuming only that the cross sections do not fall. The pomeron itself was introduced by Vladimir Gribov, who incorporated this theorem into Regge theory. Geoffrey Chew and Steven Frautschi introduced the pomeron in the West. The modern interpretation of Pomeranchuk's theorem is that the pomeron has no conserved charges—the particles on this trajectory have the quantum numbers of the vacuum.
The pomeron was well accepted in the 1960s despite the fact that the m
|
https://en.wikipedia.org/wiki/Zeta%20potential
|
Zeta potential is the electrical potential at the slipping plane. This plane is the interface which separates mobile fluid from fluid that remains attached to the surface.
Zeta potential is a scientific term for electrokinetic potential in colloidal dispersions. In the colloidal chemistry literature, it is usually denoted using the Greek letter zeta (ζ), hence ζ-potential. The usual units are volts (V) or, more commonly, millivolts (mV). From a theoretical viewpoint, the zeta potential is the electric potential in the interfacial double layer (DL) at the location of the slipping plane relative to a point in the bulk fluid away from the interface. In other words, zeta potential is the potential difference between the dispersion medium and the stationary layer of fluid attached to the dispersed particle.
The zeta potential is caused by the net electrical charge contained within the region bounded by the slipping plane, and also depends on the location of that plane. Thus, it is widely used for quantification of the magnitude of the charge. However, zeta potential is not equal to the Stern potential or electric surface potential in the double layer, because these are defined at different locations. Such assumptions of equality should be applied with caution. Nevertheless, zeta potential is often the only available path for characterization of double-layer properties.
The zeta potential is an important and readily measurable indicator of the stability of colloidal dispersions
|
https://en.wikipedia.org/wiki/Ji%C5%99%C3%AD%20%C4%8Ce%C5%99ovsk%C3%BD
|
Jiří Čeřovský (born 23 May 1955 in Semily) is a Czech regional politician and former athlete. He is a member of Civic Democratic Party.
Life and career
Čeřovský graduated from Charles University with a PhD in biology and chemistry. In the 1970s, he achieved excellent results in athletics (he won all the Czechoslovak championships from 1975 to 1983 and represented Czechoslovakia at world championships). Later Čeřovský taught biology at the Sport School in Jablonec nad Nisou.
After the fall of communism in Czechoslovakia (1989), he became a member of the city council of Jablonec nad Nisou. From 1994 to 2006 he was mayor of the city, and since 2004 he has been a member of the Regional Council of the Liberec Region. In September 2019, he was re-elected mayor of the city.
External links
Election material containing biography of Cerovsky (in Czech, PDF)
|
https://en.wikipedia.org/wiki/Alan%20J.%20Heeger
|
Alan Jay Heeger (born January 22, 1936) is an American physicist, academic and Nobel Prize laureate in chemistry.
Heeger was elected a member of the National Academy of Engineering in 2002 for co-founding the field of conducting polymers and for pioneering work in making these novel materials available for technological applications.
Life and career
Heeger was born in Sioux City, Iowa, into a Jewish family. He grew up in Akron, Iowa, where his father owned a general store. At age nine, following his father's death, the family moved to Sioux City.
Heeger earned a B.S. in physics and mathematics from the University of Nebraska-Lincoln in 1957, and a Ph.D. in physics from the University of California, Berkeley in 1961. From 1962 to 1982 he was on the faculty of the University of Pennsylvania. In 1982 he commenced his present appointment as a professor in the Physics Department and the Materials Department at the University of California, Santa Barbara. His research has led to the formation of numerous start-up companies, including Uniax, Konarka, and Sirigen, the last founded in 2003 by Guillermo C. Bazan, Patrick J. Dietzen, and Brent S. Gaylord. Alan Heeger was a founder of Uniax, which was acquired by DuPont.
He won the Nobel Prize for Chemistry in 2000 along with Alan G. MacDiarmid and Hideki Shirakawa "for their discovery and development of conductive polymers"; they published their results on polyacetylene, a conductive polymer, in 1977. This led to the construction of the Su–S
|
https://en.wikipedia.org/wiki/Ostwald
|
Ostwald may refer to:
Friedrich Wilhelm Ostwald, the physical chemist (awarded the Nobel Prize in Chemistry, 1909)
Ostwald's rule of polymorphism: in general, the least stable polymorph crystallizes first
The Ostwald Process, a synthesis method for making nitric acid from ammonia
Ostwald ripening, a crystallization effect
Ostwald color system
Wolfgang Ostwald, chemist and biologist, son of Friedrich Wilhelm Ostwald. He studied colloids
Martin Ostwald, a German-American classical scholar
Ostwald (crater), a crater on the far side of the Moon
Ostwald, Bas-Rhin, a commune in the Bas-Rhin département in France
See also
Oswald (disambiguation)
Ozwald Boateng
|
https://en.wikipedia.org/wiki/F1%20hybrid
|
An F1 hybrid (also known as filial 1 hybrid) is the first filial generation of offspring of distinctly different parental types. F1 hybrids are used in genetics, and in selective breeding, where the term F1 crossbreed may be used. The term is sometimes written with a subscript, as F₁ hybrid. Subsequent generations are called F₂, F₃, etc.
The offspring of distinctly different parental types produce a new, uniform phenotype with a combination of characteristics from the parents. In fish breeding, those parents frequently are two closely related fish species, while in plant and animal breeding, the parents often are two inbred lines.
Gregor Mendel focused on patterns of inheritance and the genetic basis for variation. In his cross-pollination experiments involving two true-breeding, or homozygous, parents, Mendel found that the resulting F1 generation was heterozygous and consistent. The offspring showed a combination of the phenotypes from each parent that were genetically dominant. Mendel's discoveries involving the F1 and F2 generations laid the foundation for modern genetics.
Production of F1 hybrids
In plants
Crossing two genetically different plants produces a hybrid seed. This can happen naturally, and includes hybrids between species (for example, peppermint is a sterile F1 hybrid of watermint and spearmint). In agronomy, the term F1 hybrid is usually reserved for agricultural cultivars derived from two-parent cultivars. These F1 hybrids are usually created by means o
|
https://en.wikipedia.org/wiki/CSTA
|
CSTA may refer to:
California Science Teachers Association
Canadian Seed Trade Association
Center for SCREEN-TIME Awareness
Combat Systems Test Activity, Aberdeen Proving Ground, Maryland
Community of Scientific and Technological Activities, a student activity in the Faculty of Pharmacy, Helwan University
Computer Science Teachers Association
Computer-supported telecommunications applications
Constant Slope Timed Automata
Council of Science and Technology Advisors, a Canadian S&T committee
Cystatin A, a protein
Czech and Slovak Transatlantic Award
Peptide Transporter Carbon Starvation Family, a family of transporter proteins
|
https://en.wikipedia.org/wiki/Patience%20sorting
|
In computer science, patience sorting is a sorting algorithm inspired by, and named after, the card game patience. A variant of the algorithm efficiently computes the length of a longest increasing subsequence in a given array.
Overview
The algorithm's name derives from a simplified variant of the patience card game. The game begins with a shuffled deck of cards. The cards are dealt one by one into a sequence of piles on the table, according to the following rules.
Initially, there are no piles. The first card dealt forms a new pile consisting of the single card.
Each subsequent card is placed on the leftmost existing pile whose top card has a value greater than or equal to the new card's value, or to the right of all of the existing piles, thus forming a new pile.
When there are no more cards remaining to deal, the game ends.
This card game is turned into a two-phase sorting algorithm, as follows. Given an array of $n$ elements from some totally ordered domain, consider this array as a collection of cards and simulate the patience sorting game. When the game is over, recover the sorted sequence by repeatedly picking off the minimum visible card; in other words, perform a $p$-way merge of the $p$ piles, each of which is internally sorted.
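A compact Python rendering of the two phases, using binary search over the pile tops for the dealing phase and a heap for the final merge (the helper names are illustrative):

import bisect
import heapq

def patience_sort(seq):
    piles = []   # piles[i] is a stack of cards; tops increase from left to right
    tops = []    # tops[i] mirrors piles[i][-1] so we can binary-search it
    for card in seq:
        i = bisect.bisect_left(tops, card)   # leftmost pile with top >= card
        if i == len(piles):
            piles.append([card])             # no such pile: start a new one
            tops.append(card)
        else:
            piles[i].append(card)
            tops[i] = card
    # Phase two: p-way merge; each pile is sorted with its smallest card on top.
    heap = [(pile[-1], idx) for idx, pile in enumerate(piles)]
    heapq.heapify(heap)
    out = []
    while heap:
        _, idx = heapq.heappop(heap)
        out.append(piles[idx].pop())         # remove the minimum visible card
        if piles[idx]:
            heapq.heappush(heap, (piles[idx][-1], idx))
    return out

assert patience_sort([5, 1, 4, 1, 5, 9, 2, 6]) == [1, 1, 2, 4, 5, 5, 6, 9]

As a side effect, the number of piles equals the length of a longest strictly increasing subsequence of the input, which is what the variant mentioned above exploits.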
Analysis
The first phase of patience sort, the card game simulation, can be implemented to take $O(n \log n)$ comparisons in the worst case for an $n$-element input array: there will be at most $n$ piles, and by construction, the top cards of the piles form an i
|
https://en.wikipedia.org/wiki/Bargmann%27s%20limit
|
In quantum mechanics, Bargmann's limit, named for Valentine Bargmann, provides an upper bound on the number $N_\ell$ of bound states with azimuthal quantum number $\ell$ in a system with central potential $V$. It takes the form
$$N_\ell < \frac{1}{2\ell+1} \frac{2m}{\hbar^2} \int_0^\infty r\,|V(r)|\,dr.$$
This limit is the best possible upper bound in the sense that for a given $\ell$, one can always construct a potential $V$ for which $N_\ell$ is arbitrarily close to this upper bound. Note that the Dirac delta function potential attains this limit. After the first proof of this inequality by Valentine Bargmann in 1953, Julian Schwinger presented an alternative way of deriving it in 1961.
Rigorous formulation and proof
Stated in a formal mathematical way, Bargmann's limit goes as follows. Let be a spherically symmetric potential, such that it is piecewise continuous in , for and for , where and . If
then the number of bound states with azimuthal quantum number $\ell$ for a particle of mass $m$ obeying the corresponding Schrödinger equation is bounded from above by
Although the original proof by Valentine Bargmann is quite technical, the main idea follows from two general theorems on ordinary differential equations, the Sturm Oscillation Theorem and the Sturm-Picone Comparison Theorem. If we denote by the wave function subject to the given potential with total energy and azimuthal quantum number , the Sturm Oscillation Theorem implies that equals the number of nodes of . From the Sturm-Picone Comparison Theorem, it follows that when subject to a stronger potential (i.e. for all
|
https://en.wikipedia.org/wiki/Yokogawa%20Electric
|
Yokogawa Electric Corporation is a Japanese multinational electrical engineering and software company, with businesses based on its measurement, control, and information technologies.
It has a global workforce of over 19,000 employees, 84 subsidiary and 3 affiliated companies operating in 55 countries. The company is listed on the Tokyo Stock Exchange and is a constituent of the Nikkei 225 stock index.
Yokogawa pioneered the development of distributed control systems and introduced its Centum series DCS in 1975.
Some of Yokogawa's most recognizable products are production control systems, test and measurement instruments, pressure transmitters, flow meters, oxygen analyzers, fieldbus instruments, manufacturing execution systems and advanced process control.
History
Yokogawa traces its roots back to 1915, when Dr. Tamisuke Yokogawa, a renowned architect, established an electric meter research institute in Shibuya, Tokyo. After pioneering the development and production of electric meters in Japan, this enterprise was incorporated in 1920 as Yokogawa Electric Works Ltd.
In 1933 Yokogawa began the research and manufacture of aircraft instruments and flow, temperature, and pressure controllers. In the years following the war, Yokogawa went public, developed its first electronic recorders, signed a technical assistance agreement for industrial instruments with the U.S. firm Foxboro, and opened its first overseas sales office (New York).
In the 1960s the company made a full-scale entry into the industrial
|
https://en.wikipedia.org/wiki/Shotgun%20%28disambiguation%29
|
A shotgun is a type of firearm.
Sawed-off shotgun
Shotgun may also refer to:
Science and technology
Shotgun hill climbing, a type of mathematical optimization algorithm in computer science
Shotgun house, a type of narrow, rectangular house
Shotgun sequencing, a method of sequencing DNA
Shotgunning (cold reading), a "mind-reading" technique
Shotgun mic, a type of microphone with a long barrel
Shotgun debugging or shotgunning, a technique in system troubleshooting, debugging, or repair
Shotgun Software, a project management software for creative studios owned by Autodesk
Slang
Riding shotgun, a passenger sitting beside the driver in a car or other vehicle
Shotgun wedding, a hasty wedding due to unplanned pregnancy
Shotgunning, a method for rapidly drinking beer out of a can by punching a hole in it
To shotgun weed or a joint, when one person forces marijuana smoke into the mouth of another person
Sport
Shotgun (shooting sports), a shooting sports discipline
Shotgun, nickname ("Escopeta" in Spanish) of Sergio Roitman (born 1979), professional tennis player from Argentina
Shotgun formation, an offensive formation in American football
"The Shotgun", a nickname for snooker player Jamie Cope
WWF Shotgun Saturday Night, a television series
Film and television
Shotgun (1955 film), an American Western film
Shotgun, 1989 film with Rif Hutton
Shotgun, retitled After Everything, a 2018 American comedy-drama film
"Shotgun" (Breaking Bad), a season four episode of Breaking Bad
Liter
|
https://en.wikipedia.org/wiki/Eightfold%20way%20%28physics%29
|
In physics, the eightfold way is an organizational scheme for a class of subatomic particles known as hadrons that led to the development of the quark model. The American physicist Murray Gell-Mann and the Israeli physicist Yuval Ne'eman independently proposed the idea in 1961.
The name comes from Gell-Mann's (1961) paper and is an allusion to the Noble Eightfold Path of Buddhism.
Background
By 1947, physicists believed that they had a good understanding of what the smallest bits of matter were. There were electrons, protons, neutrons, and photons (the components that make up the vast part of everyday experience such as atoms and light) along with a handful of unstable (i.e., they undergo radioactive decay) exotic particles needed to explain cosmic-ray observations, such as pions, muons, and the hypothesized neutrino. In addition, the discovery of the positron suggested there could be anti-particles for each of them. It was known a "strong interaction" must exist to overcome electrostatic repulsion in atomic nuclei. Not all particles are influenced by this strong force, but those that are are dubbed "hadrons", which are now further classified as mesons (middle mass) and baryons (heavy weight).
But the discovery of the (neutral) kaon in late 1947 and the subsequent discovery of a positively charged kaon in 1949 extended the meson family in an unexpected way and in 1950 the lambda particle did the same thing for the baryon family. These particles decay much more slowly than
|
https://en.wikipedia.org/wiki/Strong%20RSA%20assumption
|
In cryptography, the strong RSA assumption states that the RSA problem is intractable even when the solver is allowed to choose the public exponent e (for e ≥ 3). More specifically, given a modulus N of unknown factorization, and a ciphertext C, it is infeasible to find any pair (M, e) such that C ≡ M^e (mod N).
The strong RSA assumption was first used for constructing signature schemes provably secure against existential forgery without resorting to the random oracle model.
See also
Quadratic residuosity problem
Decisional composite residuosity assumption
References
Barić N., Pfitzmann B. (1997) Collision-Free Accumulators and Fail-Stop Signature Schemes Without Trees. In: Fumy W. (eds) Advances in Cryptology – EUROCRYPT ’97. EUROCRYPT 1997. Lecture Notes in Computer Science, vol 1233. Springer, Berlin, Heidelberg.
Fujisaki E., Okamoto T. (1997) Statistical zero knowledge protocols to prove modular polynomial relations. In: Kaliski B.S. (eds) Advances in Cryptology – CRYPTO '97. CRYPTO 1997. Lecture Notes in Computer Science, vol 1294. Springer, Berlin, Heidelberg.
Ronald Cramer and Victor Shoup. 1999. Signature schemes based on the strong RSA assumption. In Proceedings of the 6th ACM conference on Computer and communications security (CCS ’99). Association for Computing Machinery, New York, NY, USA, 46–51.
Ronald L. Rivest and Burt Kaliski. 2003. RSA Problem.
|
https://en.wikipedia.org/wiki/Private%20information%20retrieval
|
In cryptography, a private information retrieval (PIR) protocol is a protocol that allows a user to retrieve an item from a server in possession of a database without revealing which item is retrieved. PIR is a weaker version of 1-out-of-n oblivious transfer, where it is also required that the user should not get information about other database items.
One trivial, but very inefficient way to achieve PIR is for the server to send an entire copy of the database to the user. In fact, this is the only possible protocol (in the classical or the quantum setting) that gives the user information theoretic privacy for their query in a single-server setting. There are two ways to address this problem: make the server computationally bounded or assume that there are multiple non-cooperating servers, each having a copy of the database.
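To make the multi-server idea concrete, here is a toy two-server sketch of the classic XOR trick for a database of one-bit records (my own illustration, not a specific scheme from the literature excerpted here): each server sees only a uniformly random index set, yet the XOR of the two one-bit answers is the requested record.

import secrets

def server_answer(db: list[int], query: set[int]) -> int:
    # Each server XORs together the bits at the queried indices.
    ans = 0
    for idx in query:
        ans ^= db[idx]
    return ans

def retrieve(db: list[int], i: int) -> int:
    # Client: two queries whose symmetric difference is exactly {i}.
    n = len(db)
    s = {j for j in range(n) if secrets.randbelow(2)}   # uniform random subset
    s_prime = s ^ {i}                                   # toggle membership of i
    a1 = server_answer(db, s)        # asked of server 1
    a2 = server_answer(db, s_prime)  # asked of server 2
    return a1 ^ a2                   # all terms except db[i] cancel

db = [1, 0, 0, 1, 1, 0, 1, 0]
assert all(retrieve(db, i) == db[i] for i in range(len(db)))

Each query in isolation is a uniformly random subset, so a single non-colluding server learns nothing about i, at the cost of O(n) bits of communication.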
The problem was introduced in 1995 by Chor, Goldreich, Kushilevitz and Sudan in the information-theoretic setting and in 1997 by Kushilevitz and Ostrovsky in the computational setting. Since then, very efficient solutions have been discovered. Single-database (computationally private) PIR can be achieved with constant (amortized) communication, and k-database (information-theoretic) PIR can be done with sublinear communication.
Advances in computational PIR
The first single-database computational PIR scheme to achieve communication complexity less than $n$ was created in 1997 by Kushilevitz and Ostrovsky and achieved communication complexity of $n^\epsilon$ for any $\epsilon$, where i
|
https://en.wikipedia.org/wiki/PWB/UNIX
|
The Programmer's Workbench (PWB/UNIX) is an early, now discontinued, version of the Unix operating system that had been created in the Bell Labs Computer Science Research Group of AT&T. Its stated goal was to provide a time-sharing working environment for large groups of programmers, writing software for larger batch processing computers.
Prior to 1973 Unix development at AT&T was a project of a small group of researchers in Department 1127 of Bell Labs. As the usefulness of Unix in other departments of Bell Labs was evident, the company decided to develop a version of Unix tailored to support programmers in production work, not just research. The Programmer's Workbench was started in 1973, by Evan Ivie and Rudd Canaday to support a computer center for a 1000-employee Bell Labs division, which would be the largest Unix site for several years. PWB/UNIX was to provide tools for teams of programmers to manage their source code and collaborate on projects with other team members. It also introduced several stability improvements beyond Research Unix, and broadened usage of the Research nroff and troff text formatters, via efforts with Bell Labs typing pools that led to the -mm macros.
While PWB users managed their source code on PDP-11 Unix systems, programs were often written to run on other operating systems. For this reason, PWB included software for submitting jobs to IBM System/370, UNIVAC 1100 series, and SDS Sigma 5 computers. In 1977 PWB supported a user community of ab
|
https://en.wikipedia.org/wiki/Guard%20%28computer%20science%29
|
In computer programming, a guard is a boolean expression that must evaluate to true if the program execution is to continue in the branch in question. Regardless of which programming language is used, a guard clause, guard code, or guard statement, is a check of integrity preconditions used to avoid errors during execution.
Uses
A typical example is checking that a reference about to be processed is not null, which avoids null-pointer failures.
Other uses include using a boolean field for idempotence (so subsequent calls are nops), as in the dispose pattern.

public string Foo(string username)
{
    if (username == null) {
        throw new ArgumentNullException(nameof(username), "Username is null.");
    }
    // Rest of the method code follows here...
}
Flatter code with less nesting
The guard provides an early exit from a subroutine, and is a commonly used deviation from structured programming, removing one level of nesting and resulting in flatter code: replacing if guard { ... } with if not guard: return; ....
Using guard clauses can be a refactoring technique to improve code. In general, less nesting is good, as it simplifies the code and reduces cognitive burden.
For example, in Python:
# This function has no guard clause
def f_noguard(x):
    if isinstance(x, int):
        #code
        #code
        #code
        return x + 1
    else:
        return None

# Equivalent function with a guard clause. Note that most of the code is less indented, which is good
def f_guard(x):
    if not isinstance(x, int):
        return None
    #code
    #code
    #code
    return x + 1
|
https://en.wikipedia.org/wiki/Mathematische%20Arbeitstagung
|
The Mathematische Arbeitstagung taking place annually in Bonn since 1957, and founded by Friedrich Hirzebruch, was an international meeting of mathematicians intended to act in clearing-house fashion, by disseminating current research ideas; and, at the same time, to bring mathematics in West Germany back into its place in European trends. It proved highly successful in attracting the cream of younger mathematicians, partly because its structure was not that of the conventional international conference. The programme of talks was decided 'in real time' only, rather than in advance.
For example, in 1962 the meeting was dominated by talks on K-theory, at that time the breaking news. The early participants included Jean-Pierre Serre, Jacques Tits, Alexander Grothendieck, Hans Grauert, Nicolaas Kuiper, Raoul Bott, John Milnor, Stephen Smale, Armand Borel, Shiing-Shen Chern, Kunihiko Kodaira, Donald Spencer, Michael Atiyah, Isadore Singer, Shreeram Shankar Abhyankar, Michel Kervaire, Marcel Berger, Karl Stein, Reinhold Remmert, René Thom, Serge Lang and Frank Adams.
The institutional structure was reinforced from 1969 by the Sonderforschungsbereich Theoretische Mathematik programme, and from 1980 by the founding of the Max Planck Institute for Mathematics in Bonn.
|
https://en.wikipedia.org/wiki/Initial%20condition
|
In mathematics and particularly in dynamic systems, an initial condition, in some contexts called a seed value, is a value of an evolving variable at some point in time designated as the initial time (typically denoted t = 0). For a system of order k (the number of time lags in discrete time, or the order of the largest derivative in continuous time) and dimension n (that is, with n different evolving variables, which together can be denoted by an n-dimensional coordinate vector), generally nk initial conditions are needed in order to trace the system's variables forward through time.
In both differential equations in continuous time and difference equations in discrete time, initial conditions affect the value of the dynamic variables (state variables) at any future time. In continuous time, the problem of finding a closed form solution for the state variables as a function of time and of the initial conditions is called the initial value problem. A corresponding problem exists for discrete time situations. While a closed form solution is not always possible to obtain, future values of a discrete time system can be found by iterating forward one time period per iteration, though rounding error may make this impractical over long horizons.
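For discrete time, tracing forward is a one-line loop; the sketch below (an arbitrary 2-dimensional, order-1 linear system, so n·k = 2 initial conditions) iterates one period at a time and checks the result against the closed form x_t = A^t x_0 discussed in the next section:

import numpy as np

A = np.array([[0.9, 0.2],
              [-0.1, 0.8]])    # illustrative dynamics for x_{t+1} = A @ x_t
x0 = np.array([1.0, 0.0])      # the initial condition at t = 0

def iterate(A, x0, steps):
    # Trace the state forward one time period per iteration.
    x = x0
    for _ in range(steps):
        x = A @ x
    return x

t = 25
assert np.allclose(iterate(A, x0, t), np.linalg.matrix_power(A, t) @ x0)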
Linear system
Discrete time
A linear matrix difference equation of the homogeneous (having no constant term) form $x_{t+1} = A x_t$ has closed form solution $x_t = A^t x_0$ predicated on the vector $x_0$ of initial conditions on the individual variables that are stacked i
|
https://en.wikipedia.org/wiki/Ample%20line%20bundle
|
In mathematics, a distinctive feature of algebraic geometry is that some line bundles on a projective variety can be considered "positive", while others are "negative" (or a mixture of the two). The most important notion of positivity is that of an ample line bundle, although there are several related classes of line bundles. Roughly speaking, positivity properties of a line bundle are related to having many global sections. Understanding the ample line bundles on a given variety X amounts to understanding the different ways of mapping X into projective space. In view of the correspondence between line bundles and divisors (built from codimension-1 subvarieties), there is an equivalent notion of an ample divisor.
In more detail, a line bundle is called basepoint-free if it has enough sections to give a morphism to projective space. A line bundle is semi-ample if some positive power of it is basepoint-free; semi-ampleness is a kind of "nonnegativity". More strongly, a line bundle on a complete variety X is very ample if it has enough sections to give a closed immersion (or "embedding") of X into projective space. A line bundle is ample if some positive power is very ample.
An ample line bundle on a projective variety X has positive degree on every curve in X. The converse is not quite true, but there are corrected versions of the converse, the Nakai–Moishezon and Kleiman criteria for ampleness.
Introduction
Pullback of a line bundle and hyperplane divisors
Given a morphism
|
https://en.wikipedia.org/wiki/Social%20software%20%28research%20field%29
|
In philosophy and the social sciences, social software is an interdisciplinary research program that borrows
mathematical tools and techniques from game theory and computer science in order to analyze and design social procedures. The goals of research in this field are modeling social situations, developing theories of correctness, and designing social procedures.
Work under the term social software has been going on since about 1996, and conferences in Copenhagen, London, Utrecht and New York, have been partly or wholly devoted to it. Much of the work is carried out at the City University of New York under the leadership of Rohit Jivanlal Parikh, who was influential in the development of the field.
Goals and tools
Current research in the area of social software includes the analysis of social procedures and examination of them for fairness, appropriateness, correctness and efficiency. For example, an election procedure could be a simple majority vote, Borda count, a single transferable vote (STV), or approval voting. All of these procedures can be examined for various properties like monotonicity. Monotonicity requires that voting for a candidate should not harm that candidate. This may seem obviously true under any system, but failures of monotonicity can happen in STV. Another question would be the ability to elect a Condorcet winner in case there is one.
Other principles which are considered by researchers in social software include the concept that a procedur
|
https://en.wikipedia.org/wiki/Algebroid
|
In mathematics, algebroid may refer to several distinct notions, which nevertheless all arise from generalising certain aspects of the theory of algebras or Lie algebras.
Algebroid branch, a formal power series branch of an algebraic curve
Algebroid cohomology
Algebroid multifunction
Courant algebroid, an object generalising Lie bialgebroids
Lie algebroid, the infinitesimal counterpart of Lie groupoids
Atiyah algebroid, a fundamental example of a Lie algebroid associated to a principal bundle
R-algebroid, a categorical construction associated to groupoids
|
https://en.wikipedia.org/wiki/Semidefinite%20embedding
|
Maximum Variance Unfolding (MVU), also known as Semidefinite Embedding (SDE), is an algorithm in computer science that uses semidefinite programming to perform non-linear dimensionality reduction of high-dimensional vectorial input data.
It is motivated by the observation that kernel Principal Component Analysis (kPCA) does not reduce the data dimensionality, as it leverages the Kernel trick to non-linearly map the original data into an inner-product space.
Algorithm
MVU creates a mapping from the high dimensional input vectors to some low dimensional Euclidean vector space in the following steps:
A neighbourhood graph is created. Each input is connected with its k-nearest input vectors (according to Euclidean distance metric) and all k-nearest neighbors are connected with each other. If the data is sampled well enough, the resulting graph is a discrete approximation of the underlying manifold.
The neighbourhood graph is "unfolded" with the help of semidefinite programming. Instead of learning the output vectors directly, the semidefinite programming aims to find an inner product matrix that maximizes the pairwise distances between any two inputs that are not connected in the neighbourhood graph while preserving the nearest neighbors distances.
The low-dimensional embedding is finally obtained by application of multidimensional scaling on the learned inner product matrix.
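A compact sketch of the three steps (simplified: plain k-nearest-neighbour edges without the clique expansion described above, exact distance constraints without slack, and illustrative parameter choices; assumes numpy, cvxpy and scikit-learn are available):

import numpy as np
import cvxpy as cp
from sklearn.neighbors import kneighbors_graph

def mvu(X: np.ndarray, k: int = 4, out_dim: int = 2) -> np.ndarray:
    n = X.shape[0]
    # Step 1: symmetrized k-nearest-neighbour graph.
    W = kneighbors_graph(X, k, mode="connectivity").toarray()
    W = np.maximum(W, W.T)
    # Step 2: SDP over the Gram matrix G = Y Y^T -- maximize total variance
    # while pinning neighbour distances and centering the embedding.
    G = cp.Variable((n, n), PSD=True)
    cons = [cp.sum(G) == 0]
    for i, j in zip(*np.nonzero(W)):
        if i < j:
            d2 = float(np.sum((X[i] - X[j]) ** 2))
            cons.append(G[i, i] - 2 * G[i, j] + G[j, j] == d2)
    cp.Problem(cp.Maximize(cp.trace(G)), cons).solve()
    # Step 3: multidimensional scaling = top eigenpairs of the learned Gram matrix.
    vals, vecs = np.linalg.eigh(G.value)
    order = np.argsort(vals)[::-1][:out_dim]
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))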
The steps of applying semidefinite programming followed by a linear dimensionality reduction st
|
https://en.wikipedia.org/wiki/2C-T-4
|
2C-T-4 (2,5-dimethoxy-4-isopropylthiophenethylamine) is a psychedelic phenethylamine of the 2C family. It was first synthesized by Alexander Shulgin and is used as an entheogenic recreational drug.
Chemistry
2C-T-4 is the 2-carbon homolog of aleph-4. The full chemical name is 2-[4-(isopropylthio)-2,5-dimethoxyphenyl]-ethanamine. The drug has structural and pharmacodynamic properties similar to 2C-T-7 and 2C-T-19.
Effects
2C-T-4 produces psychedelic and entheogenic effects that develop slowly and can last 8–16 hours. While users may experience virtually no effects for the first hour after ingestion, results vary drastically between individuals and range from hallucination and euphoria to intense sickness and anxiety. Shulgin devoted a chapter in the first part of his book PiHKAL to this compound, describing an intense "plus-four" psychedelic experience mediated by a twelve milligram dose.
Pharmacology
The mechanism that produces 2C-T-4's hallucinogenic and entheogenic effects has not been specifically established; however, it is most likely to result from action as a 5-HT2A serotonin receptor agonist in the brain, a mechanism of action shared by all of the hallucinogenic tryptamines and phenethylamines for which the mechanism of action is known.
Popularity
2C-T-4 is relatively unknown on the black market, but has been sold to a limited extent on the research chemical market.
Drug prohibition laws
Canada
As of October 31, 2016, 2C-T-4 is a controlled substance (Schedul
|
https://en.wikipedia.org/wiki/Old%20quantum%20theory
|
The old quantum theory is a collection of results from the years 1900–1925 which predate modern quantum mechanics. The theory was never complete or self-consistent, but was rather a set of heuristic corrections to classical mechanics. The theory is now understood as the semi-classical approximation to modern quantum mechanics. The main and final accomplishments of the old quantum theory were the determination of the modern form of the periodic table by Edmund Stoner and the Pauli exclusion principle which were both premised on the Arnold Sommerfeld enhancements to the Bohr model of the atom.
The main tool of the old quantum theory was the Bohr–Sommerfeld quantization condition, a procedure for selecting out certain states of a classical system as allowed states: the system can then only exist in one of the allowed states and not in any other state.
History
The old quantum theory was instigated by the 1900 work of Max Planck on the emission and absorption of light in a black body with his discovery of Planck’s law introducing his quantum of action, and began in earnest after the work of Albert Einstein on the specific heats of solids in 1907 brought him to the attention of Walther Nernst. Einstein, followed by Debye, applied quantum principles to the motion of atoms, explaining the specific heat anomaly.
In 1910, Arthur Erich Haas developed J. J. Thomson's atomic model in a paper that outlined a treatment of the hydrogen atom involving quantization of electronic orb
|
https://en.wikipedia.org/wiki/Block%20design
|
In combinatorial mathematics, a block design is an incidence structure consisting of a set together with a family of subsets known as blocks, chosen such that frequency of the elements satisfies certain conditions making the collection of blocks exhibit symmetry (balance). Block designs have applications in many areas, including experimental design, finite geometry, physical chemistry, software testing, cryptography, and algebraic geometry.
Without further specifications the term block design usually refers to a balanced incomplete block design (BIBD), specifically (and also synonymously) a 2-design, which has been the most intensely studied type historically due to its application in the design of experiments. Its generalization is known as a t-design.
Overview
A design is said to be balanced (up to t) if all t-subsets of the original set occur in equally many (i.e., λ) blocks. When t is unspecified, it can usually be assumed to be 2, which means that each pair of elements is found in the same number of blocks and the design is pairwise balanced. For t=1, each element occurs in the same number of blocks (the replication number, denoted r) and the design is said to be regular. Any design balanced up to t is also balanced in all lower values of t (though with different λ-values), so for example a pairwise balanced (t=2) design is also regular (t=1). When the balancing requirement fails, a design may still be partially balanced if the t-subsets can be divided into n classes,
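As a concrete check of these definitions, the Fano plane is the classic 2-(7,3,1) design: 7 points, blocks of size 3, every pair in exactly λ = 1 block, and replication number r = 3. A short Python verification (the block list is one standard labelling; the helper names are mine):

from itertools import combinations

blocks = [{0,1,2}, {0,3,4}, {0,5,6}, {1,3,5}, {1,4,6}, {2,3,6}, {2,4,5}]

def balance_counts(blocks, points, t):
    # The set of counts lambda_T over all t-subsets T; balance means one value.
    return {sum(1 for b in blocks if T <= b)
            for T in map(set, combinations(points, t))}

points = range(7)
assert balance_counts(blocks, points, 2) == {1}   # every pair in exactly 1 block
assert balance_counts(blocks, points, 1) == {3}   # regular: r = 3 for every point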
|
https://en.wikipedia.org/wiki/Hewitt%20Bostock
|
Hewitt Bostock, (May 31, 1864 – April 28, 1930) was a Canadian publisher, businessman and politician.
He was born in Walton Heath, Epsom, England and studied at Trinity College, Cambridge, graduating with honours in mathematics. Bostock then studied law and was called to the bar in 1888. Rather than begin a legal practice he toured North America, Australia, New Zealand, China and Japan before settling in British Columbia in 1893 at the Monte Creek Ranch (also known as the Ducks Ranch), which he had purchased in 1888, taking up residence there in 1894. In addition to the ranch, he also operated a lumber company.
He founded the Province newspaper and then entered politics, winning election to the House of Commons of Canada as a Liberal in the 1896 election, representing the riding of Yale—Cariboo for one term (until the 1900 election).
In 1904, he was appointed to the Senate of Canada by the prime minister, Wilfrid Laurier. A decade later he became Leader of the Opposition in the Canadian Senate. Bostock broke with the Laurier Liberals over the Conscription Crisis of 1917, and became a Liberal-Unionist, campaigning in favour of the Union government of Sir Robert Borden during the 1917 election.
Following World War I, Bostock reconciled with the Liberals and, in 1921, became Minister of Public Works in the Liberal government of William Lyon Mackenzie King. Several months later, in 1922, he became Speaker of the Senate of Canada and held the position until his death in 1930. In
|
https://en.wikipedia.org/wiki/ICE%20%28cipher%29
|
In cryptography, ICE (Information Concealment Engine) is a symmetric-key block cipher published by Kwan in 1997. The algorithm is similar in structure to DES, but with the addition of a key-dependent bit permutation in the round function. The key-dependent bit permutation is implemented efficiently in software. The ICE algorithm is not subject to patents, and the source code has been placed into the public domain.
ICE is a Feistel network with a block size of 64 bits. The standard ICE algorithm takes a 64-bit key and has 16 rounds. A fast variant, Thin-ICE, uses only 8 rounds. An open-ended variant, ICE-n, uses 16n rounds with a 64n-bit key.
Van Rompay et al. (1998) attempted to apply differential cryptanalysis to ICE. They described an attack on Thin-ICE which recovers the secret key using 2²³ chosen plaintexts with a 25% success probability. If 2²⁷ chosen plaintexts are used, the probability can be improved to 95%. For the standard version of ICE, an attack on 15 out of 16 rounds was found, requiring 2⁵⁶ work and at most 2⁵⁶ chosen plaintexts.
Structure
ICE is a 16-round Feistel network. Each round uses a 32→32 bit F function, which uses 60 bits of key material.
The structure of the F function is somewhat similar to DES: The input is expanded by taking overlapping fields, the expanded input is XORed with a key, and the result is fed to a number of reducing S-boxes which undo the expansion.
First, ICE divides the input into 4 overlapping 10-bit values. They are bits 3
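The excerpt stops before ICE's actual tables, but the Feistel skeleton it shares with DES is easy to exhibit. Below is a generic Feistel sketch in Python with a deliberately toy round function (not ICE's real F, key schedule, or S-boxes); it shows why decryption is just encryption with the round keys reversed:

def feistel_encrypt(block64: int, round_keys, f):
    # Split the 64-bit block into two 32-bit halves and run the rounds.
    left = (block64 >> 32) & 0xFFFFFFFF
    right = block64 & 0xFFFFFFFF
    for k in round_keys:
        left, right = right, left ^ f(right, k)
    return (right << 32) | left   # swap halves back at the end

def feistel_decrypt(block64: int, round_keys, f):
    # The inverse network is the same network with reversed round keys.
    return feistel_encrypt(block64, list(reversed(round_keys)), f)

def toy_f(r32: int, k: int) -> int:
    # Placeholder 32->32 round function; any keyed map works for the skeleton.
    return ((r32 * 0x9E3779B9) ^ k) & 0xFFFFFFFF

keys = [(0x9E3779B9 * i) & 0xFFFFFFFF for i in range(1, 17)]   # 16 toy round keys
pt = 0x0123456789ABCDEF
assert feistel_decrypt(feistel_encrypt(pt, keys, toy_f), keys, toy_f) == pt

Note that the round function never needs to be invertible, which is what allows ICE to use reducing S-boxes inside F.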
|
https://en.wikipedia.org/wiki/Waves%20Audio
|
Waves Audio Ltd. is a developer and supplier of professional digital audio signal processing technologies and audio effects, used in recording, mixing, mastering, post production, broadcast, and live sound. The company's corporate headquarters and main development facilities are located in Tel Aviv, with additional offices in the United States, China, and Taiwan, and development centers in India and Ukraine.
In 2011, Waves won a Technical Grammy Award.
History
Waves Audio was founded in 1992 by Gilad Keren and Meir Sha'ashua in Tel Aviv, Israel.
Later that year, Waves released its first product, the Q10 Paragraphic Equalizer. The Q10 was the audio industry's first commercially available audio plugin.
Waves' L1 Ultramaximizer, released in 1994, became a prominent plugin, with some publications pointing to it as contributing to the "loudness war" behind modern music mastering. Record producer Tony Maserati said of early Waves software, "[they] were the only plugins [that were] quality and they were creative." Waves later launched a signature line of Maserati inspired plugins.
Waves launched the Waves Signature Series working with music producers and engineers to explore their unique sounds.
In 2009, as part of the Signature Series, Waves released the Eddie Kramer Signature Series of five plug-ins focusing on classic rock.
The Chris Lord-Alge Signature Series followed in 2010.
In 2011, the company was honored with a Technical Grammy Award for "contributions of outstandin
|
https://en.wikipedia.org/wiki/Peter%20Waage
|
Peter Waage (29 June 1833 – 13 January 1900) was a Norwegian chemist and professor of chemistry at the University of Kristiania. Along with his brother-in-law Cato Maximilian Guldberg, he co-discovered and developed the law of mass action between 1864 and 1879.
Biography
He grew up on the island of Hidra in Vest-Agder, Norway. He was the son of Peder Pedersen Waage (1796–1872) and Regine Lovise Wathne (1802–72). He attended the Bergen Cathedral School and studied chemistry and mineralogy at the University of Kristiania (now University of Oslo) under Adolph Strecker. In 1858, he received the Crown Prince's gold medal (Kronprinsens gullmedalje) for work on the development of a theory of oxygen-containing acid radicals. He became a cand.real. in 1859. He subsequently traveled to France and Germany, where he studied for two years including time spent with Robert Bunsen in Heidelberg.
In 1861, Waage was made an associate professor and in 1866 he was appointed professor of chemistry at the University of Kristiania. He remained a professor at the University for over 30 years. He was also chairman of the Norwegian Polytechnic Society from 1868 to 1869, and the first chairman of the Norwegian branch of the YMCA when it was established in 1880.
Personal life
He was married twice. In 1862, he married Johanne Christiane Tandberg Riddervold (1838–1869), daughter of Hans Riddervold (1795–1876) and Anne Marie Bull (1804–1870). Following the death of his first wife, he was married i
|
https://en.wikipedia.org/wiki/Path%20%28topology%29
|
In mathematics, a path in a topological space $X$ is a continuous function from the closed unit interval $[0, 1]$ into $X$.
Paths play an important role in the fields of topology and mathematical analysis.
For example, a topological space for which there exists a path connecting any two points is said to be path-connected. Any space may be broken up into path-connected components. The set of path-connected components of a space $X$ is often denoted $\pi_0(X)$.
One can also define paths and loops in pointed spaces, which are important in homotopy theory. If $X$ is a topological space with basepoint $x_0$, then a path in $X$ is one whose initial point is $x_0$. Likewise, a loop in $X$ is one that is based at $x_0$.
Definition
A curve in a topological space $X$ is a continuous function $f : J \to X$ from a non-empty and non-degenerate interval $J \subseteq \mathbb{R}$.
A path in $X$ is a curve $f : [a, b] \to X$ whose domain $[a, b]$ is a compact non-degenerate interval (meaning $a < b$ are real numbers), where $f(a)$ is called the initial point of the path and $f(b)$ is called its terminal point.
A path from $x$ to $y$ is a path whose initial point is $x$ and whose terminal point is $y$.
Every non-degenerate compact interval $[a, b]$ is homeomorphic to $[0, 1]$, which is why a path is sometimes, especially in homotopy theory, defined to be a continuous function $f : [0, 1] \to X$ from the closed unit interval $[0, 1]$ into $X$.
An arc or $C^0$-arc in $X$ is a path in $X$ that is also a topological embedding.
Importantly, a path is not just a subset of $X$ that "looks like" a curve, it also includes a parameterization. For example, the maps $f(x) = x$ and $g(x) = x^2$ represent two different paths from 0 to 1 on the real line.
A loop in a space base
|
https://en.wikipedia.org/wiki/Loop%20%28topology%29
|
In mathematics, a loop in a topological space $X$ is a continuous function $f$ from the unit interval $I = [0, 1]$ to $X$ such that $f(0) = f(1)$. In other words, it is a path whose initial point is equal to its terminal point.
A loop may also be seen as a continuous map $f$ from the pointed unit circle $S^1$ into $X$, because $S^1$ may be regarded as a quotient of $I$ under the identification of 0 with 1.
The set of all loops in $X$ forms a space called the loop space of $X$.
See also
Free loop
Loop group
Loop space
Loop algebra
Fundamental group
Quasigroup
References
Topology
|
https://en.wikipedia.org/wiki/Proper%20map
|
In mathematics, a function between topological spaces is called proper if inverse images of compact subsets are compact. In algebraic geometry, the analogous concept is called a proper morphism.
Definition
There are several competing definitions of a "proper function".
Some authors call a function $f : X \to Y$ between two topological spaces proper if the preimage of every compact set in $Y$ is compact in $X.$
Other authors call a map $f$ proper if it is continuous and closed with compact fibers; that is, if it is a continuous closed map and the preimage of every point in $Y$ is compact. The two definitions are equivalent if $Y$ is locally compact and Hausdorff.
Let $f : X \to Y$ be a closed map, such that $f^{-1}(y)$ is compact (in $X$) for all $y \in Y.$ Let $K$ be a compact subset of $Y.$ It remains to show that $f^{-1}(K)$ is compact.
Let $\{U_a : a \in A\}$ be an open cover of $f^{-1}(K).$ Then for all $k \in K$ this is also an open cover of $f^{-1}(k).$ Since the latter is assumed to be compact, it has a finite subcover. In other words, for every $k \in K$ there exists a finite subset $\gamma_k \subseteq A$ such that $f^{-1}(k) \subseteq \bigcup_{a \in \gamma_k} U_a.$
The set $X \setminus \bigcup_{a \in \gamma_k} U_a$ is closed in $X$ and its image under $f$ is closed in $Y$ because $f$ is a closed map. Hence the set
$V_k = Y \setminus f\big(X \setminus \bigcup_{a \in \gamma_k} U_a\big)$
is open in $Y.$ It follows that $V_k$ contains the point $k.$
Now $K \subseteq \bigcup_{k \in K} V_k$ and because $K$ is assumed to be compact, there are finitely many points $k_1, \dots, k_s$ such that $K \subseteq \bigcup_{i=1}^{s} V_{k_i}.$ Furthermore, the set $\Gamma = \gamma_{k_1} \cup \dots \cup \gamma_{k_s}$ is a finite union of finite sets, which makes $\Gamma$ a finite set.
Now it follows that $f^{-1}(K) \subseteq f^{-1}\big(\bigcup_{i=1}^{s} V_{k_i}\big) \subseteq \bigcup_{a \in \Gamma} U_a$ and we have found a finite subcover of $f^{-1}(K),$ which completes the proof.
If $X$ is Hausdorff and $Y$ is locally compact Hausdorff then proper is equivalent to universally closed. A map is universally closed if for any topological space $Z$ the map i
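A standard pair of examples, added here for illustration (not from the excerpt), shows how the definition discriminates: the map $f : \mathbb{R} \to \mathbb{R},$ $f(x) = x^2,$ is proper, since for instance $f^{-1}([0, c]) = [-\sqrt{c}, \sqrt{c}]$ is compact for every $c \ge 0$ (and, in general, the preimage of any compact set is closed and bounded). By contrast, $g(x) = \arctan x$ is continuous but not proper: $g^{-1}\big([0, \tfrac{\pi}{2}]\big) = [0, \infty)$ is closed but not compact.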
|
https://en.wikipedia.org/wiki/Pl%C3%BCcker%20embedding
|
In mathematics, the Plücker map embeds the Grassmannian $\mathbf{Gr}(k, V)$, whose elements are k-dimensional subspaces of an n-dimensional vector space V, either real or complex, in a projective space, thereby realizing it as a projective algebraic variety. More precisely, the Plücker map embeds $\mathbf{Gr}(k, V)$ into the projectivization $\mathbf{P}(\Lambda^k V)$ of the $k$-th exterior power of $V$. The image is algebraic, consisting of the intersection of a number of quadrics defined by the Plücker relations (see below).
The Plücker embedding was first defined by Julius Plücker in the case $k = 2,\ n = 4$ as a way of describing the lines in three-dimensional space (which, as projective lines in real projective space, correspond to two-dimensional subspaces of a four-dimensional vector space). The image of that embedding is the Klein quadric in RP5.
Hermann Grassmann generalized Plücker's embedding to arbitrary k and n. The homogeneous coordinates of the image of the Grassmannian under the Plücker embedding, relative to the basis in the exterior space $\Lambda^k V$ corresponding to the natural basis in $V = K^n$ (where $K$ is the base field) are called Plücker coordinates.
Definition
Denoting by $V$ the $n$-dimensional vector space over the field $K$, and by
$\mathbf{Gr}(k, V)$ the Grassmannian of $k$-dimensional subspaces of $V$, the Plücker embedding is the map ι defined by
$\iota : \mathbf{Gr}(k, V) \to \mathbf{P}(\Lambda^k V), \qquad \iota(W) := [w_1 \wedge \dots \wedge w_k],$
where $(w_1, \dots, w_k)$ is a basis for the element $W \in \mathbf{Gr}(k, V)$ and $[w_1 \wedge \dots \wedge w_k]$ is the projective equivalence class of the element $w_1 \wedge \dots \wedge w_k$ of the $k$th exterior power of $V$.
This is an embedding of the Grassmannian into the projectivization $\mathbf{P}(\Lambda^k V)$. The image can be completely characterized as the inter
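As a concrete illustration (a sketch added here, not from the article): for a 2-plane in $\mathbb{R}^4$ spanned by the rows of a $2 \times 4$ matrix, the Plücker coordinates are the six $2 \times 2$ minors, and they satisfy the single quadric relation $p_{01}p_{23} - p_{02}p_{13} + p_{03}p_{12} = 0$ cutting out the Klein quadric in $\mathbb{P}^5$.
import itertools
import numpy as np

def pluecker_coordinates(rows):
    """Homogeneous Plücker coordinates of the row span of a k x n matrix:
    all k x k minors, indexed by increasing column tuples."""
    M = np.asarray(rows, dtype=float)
    k, n = M.shape
    return {cols: np.linalg.det(M[:, cols])
            for cols in itertools.combinations(range(n), k)}

# A 2-dimensional subspace of R^4, given by two spanning rows.
p = pluecker_coordinates([[1, 2, 3, 4],
                          [0, 1, 1, 2]])

# The Klein quadric relation: p01*p23 - p02*p13 + p03*p12 = 0.
rel = (p[(0, 1)] * p[(2, 3)] - p[(0, 2)] * p[(1, 3)]
       + p[(0, 3)] * p[(1, 2)])
print(p)
assert abs(rel) < 1e-9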
|
https://en.wikipedia.org/wiki/Mark%20Krein
|
Mark Grigorievich Krein (, ; 3 April 1907 – 17 October 1989) was a Soviet mathematician, one of the major figures of the Soviet school of functional analysis. He is known for works in operator theory (in close connection with concrete problems coming from mathematical physics), the problem of moments, classical analysis and representation theory.
He was born in Kyiv, leaving home at age 17 to go to Odesa. He had a difficult academic career, not completing his first degree and constantly being troubled by anti-Semitic discrimination. His supervisor was Nikolai Chebotaryov.
He was awarded the Wolf Prize in Mathematics in 1982 (jointly with Hassler Whitney), but was not allowed to attend the ceremony.
David Milman, Mark Naimark, Israel Gohberg, Vadym Adamyan, Mikhail Livsic and other known mathematicians were his students.
He died in Odesa.
On 14 January 2008, the memorial plaque of Mark Krein was unveiled on the main administration building of I.I. Mechnikov Odesa National University.
See also
Tannaka–Krein duality
Krein–Milman theorem and Krein–Rutman theorem in functional analysis
Krein space
Krein's condition for the indeterminacy of the problem of moments
External links
INTERNATIONAL CONFERENCE Modern Analysis and Applications (MAA 2007). Dedicated to the centenary of Mark Krein
1907 births
1989 deaths
Soviet mathematicians
Ukrainian Jews
Scientists from Kyiv
Wolf Prize in Mathematics laureates
Operator theorists
Functional analysts
Foreign associates of the N
|
https://en.wikipedia.org/wiki/George%20Gollin
|
George D. Gollin (born May 6, 1953) is an American physics professor at the University of Illinois at Urbana-Champaign. Besides his work on particle physics and the International Linear Collider, he has since 2003 made numerous efforts to fight institutions considered to be diploma mills, which has brought him significant public attention. Gollin placed second in the 2014 Democratic primary for Illinois's 13th congressional district.
Areas of research and teaching
Gollin has worked on particle physics experiments studying muon scattering (1975–1981, intended to test the ideas of "Quantum Chromodynamics"), neutral K meson decay parameters (1980–1993, measuring things relating to "CP violation"), and electron-positron annihilation (1993–2005, measuring production and decay properties of heavy quarks). His current research focuses on technical issues associated with the design and construction of the "International Linear Collider", a very large electron-positron colliding beams facility intended to shed light on the origins of the masses of the fundamental particles.
His teaching has spanned the entire physics curriculum, from elementary algebra-based courses for students in the life sciences to classical electrodynamics at the graduate level.
Other public service
The University of Illinois is a public university and "land grant school"; as such, Gollin's responsibilities include faculty public service in addition to his teaching and research obligatio
|
https://en.wikipedia.org/wiki/Abraham%E2%80%93Minkowski%20controversy
|
The Abraham–Minkowski controversy is a physics debate concerning electromagnetic momentum within dielectric media. Two equations were first suggested by Hermann Minkowski (1908) and Max Abraham (1909) for this momentum. They predict different values, from which the name of the controversy derives. Experimental support has been claimed for both.
The two points of view have different physical interpretations and thus neither need be more correct than the other.
David J. Griffiths argues that, in the presence of matter, only the total stress–energy tensor carries unambiguous physical significance; how one apportions it between an "electromagnetic" part and a "matter" part depends on context and convenience.
Several papers have claimed to have resolved this controversy.
The controversy is still of importance in physics beyond the Standard Model where electrodynamics gets modifications, like in the presence of axions.
References
External links
Physical Review Focus: Momentum From Nothing
Physical Review Focus: Light Bends Glass
Electric and magnetic fields in matter
Hermann Minkowski
1908 introductions
|
https://en.wikipedia.org/wiki/Reuven%20Ramaty
|
Dr. Reuven Ramaty (1937–2001) was a Hungarian astrophysicist who worked for 30 years at NASA's Goddard Space Flight Center. He was a leader in the fields of solar physics, gamma-ray line spectrometry, nuclear astrophysics, and low-energy cosmic rays. Ramaty was a founding member of NASA's High Energy Solar Spectroscopic Imager mission, which has since been renamed the Reuven Ramaty High Energy Solar Spectroscopic Imager in his honour. This was the first space mission to be named after a NASA scientist and was operational from 2002 until 2018. The Online Archive of California holds over 400 entries for documents, papers and photographs published by and of Ramaty and his work. Ramaty made many contributions in the fields of astrophysics and solar physics. He was given the Goddard Lindsay Award in 1980 and had a tribute dedicated to his work at the University of Maryland in 2000.
Early life
Ramaty was born on February 25, 1937, to Hungarian parents, Michael "Miki" Reiter and Eliz Ramaty, living in Timișoara, Romania. In 1948, when he was 11 years old, his family moved to Israel to escape the growing cultural tensions and economic difficulties that followed the Second World War. He became the stepson of Gizi Reiter after her marriage to his father. Ramaty remained in Israel for 16 years, where he finished his secondary education and graduated from Tel Aviv University in 1961 with a Bachelor of Science in physics. Ramaty taught physics at a secondary level in Israel before his move to Los Angeles. During hi
|
https://en.wikipedia.org/wiki/Splitting%20of%20prime%20ideals%20in%20Galois%20extensions
|
In mathematics, the interplay between the Galois group G of a Galois extension L of a number field K, and the way the prime ideals P of the ring of integers OK factorise as products of prime ideals of OL, provides one of the richest parts of algebraic number theory. The splitting of prime ideals in Galois extensions is sometimes attributed to David Hilbert by calling it Hilbert theory. There is a geometric analogue, for ramified coverings of Riemann surfaces, which is simpler in that only one kind of subgroup of G need be considered, rather than two. This was certainly familiar before Hilbert.
Definitions
Let L/K be a finite extension of number fields, and let OK and OL be the corresponding ring of integers of K and L, respectively, which are defined to be the integral closure of the integers Z in the field in question.
Finally, let p be a non-zero prime ideal in OK, or equivalently, a maximal ideal, so that the residue OK/p is a field.
From the basic theory of one-dimensional rings follows the existence of a unique decomposition
of the ideal pOL generated in OL by p into a product of distinct maximal ideals Pj, with multiplicities ej.
The field F = OK/p naturally embeds into Fj = OL/Pj for every j; the degree fj = [OL/Pj : OK/p] of this residue field extension is called the inertia degree of Pj over p.
The multiplicity ej is called the ramification index of Pj over p. If it is bigger than 1 for some j, the field extension L/K is called ramified at p (or we say that p ramif
|
https://en.wikipedia.org/wiki/Jeffrey%20A.%20Harvey
|
Jeffrey A. Harvey (born February 15, 1955 in San Antonio, Texas) is an American string theorist at the University of Chicago.
Scientific contributions
Among Harvey's many contributions to the field of theoretical physics, he is one of the co-discoverers of the heterotic string, along with David Gross, Emil Martinec, and Ryan Rohm. The four physicists were colloquially known as the "Princeton string quartet".
Harvey is a fellow of the American Physical Society and of the American Academy of Arts and Sciences. He is a trustee at the Institute for Advanced Study in Princeton, New Jersey.
He received the Dirac Medal in 2023.
Selected publications
See also
CGHS model
References
External links
Dr. Harvey's homepage
American string theorists
Living people
1955 births
University of Chicago faculty
Fellows of the American Physical Society
Trustees of the Institute for Advanced Study
Members of the United States National Academy of Sciences
|
https://en.wikipedia.org/wiki/Pilot%20wave%20theory
|
In theoretical physics, the pilot wave theory, also known as Bohmian mechanics, was the first known example of a hidden-variable theory, presented by Louis de Broglie in 1927. Its more modern version, the de Broglie–Bohm theory, interprets quantum mechanics as a deterministic theory, avoiding troublesome notions such as wave–particle duality, instantaneous wave function collapse, and the paradox of Schrödinger's cat. To solve these problems, the theory is inherently nonlocal.
The de Broglie–Bohm pilot wave theory is one of several interpretations of (non-relativistic) quantum mechanics.
An extension to the relativistic case with spin has been developed since the 1990s.
History
Louis de Broglie's early results on the pilot wave theory were presented in his thesis (1924) in the context of atomic orbitals where the waves are stationary. Early attempts to develop a general formulation for the dynamics of these guiding waves in terms of a relativistic wave equation were unsuccessful until in 1926 Schrödinger developed his non-relativistic wave equation. He further suggested that since the equation described waves in configuration space, the particle model should be abandoned. Shortly thereafter, Max Born suggested that the wave function of Schrödinger's wave equation represents the probability density of finding a particle. Following these results, de Broglie developed the dynamical equations for his pilot wave theory. Initially, de Broglie proposed a double solution approach,
|
https://en.wikipedia.org/wiki/Jeffrey%20Harvey
|
Jeffrey Harvey may refer to:
Jeffrey A. Harvey (born 1955), professor of physics and string theorist at University of Chicago
Jeffrey Harvey (biologist) (born 1957), biologist at the Netherlands Institute of Ecology
See also
Geoff Harvey (1935–2019), English-Australian musician
|
https://en.wikipedia.org/wiki/Tom%20Banks%20%28physicist%29
|
Thomas Israel Banks (born April 19, 1949 in New York City) is a theoretical physicist and professor at Rutgers University and University of California, Santa Cruz.
Work
Banks' work centers around string theory and its applications to high energy particle physics and cosmology. He received his Ph.D. in physics from the Massachusetts Institute of Technology in 1973. From 1973 to 1986 he held an appointment at Tel Aviv University, and he was several times a visiting scholar at the Institute for Advanced Study in Princeton (1976–78, 1983–84, and in the fall of 2010).
Along with Willy Fischler, Stephen Shenker, and Leonard Susskind, he is one of the four originators of M(atrix) theory, or BFSS Matrix Theory, an attempt to formulate M theory in a nonperturbative manner. Banks proposed a conjecture known as Asymptotic Darkness - it posits that the physics above the Planck scale is dominated by black hole production. He has often criticized the widely held assumption in the string theory community that background spacetimes with different asymptotics can represent different vacua states of the same theory of quantum gravity. Rather, Banks argues that different asymptotics correspond to different models of quantum gravity. Many of his arguments for this and other ideas are contained in his paper "A Critique of Pure String Theory: Heterodox Opinions of Diverse Dimensions." published in 2003.
Notes
External links
Thomas Banks' profile at Scipp with list of publications
M Theory as a Matrix Mode
|
https://en.wikipedia.org/wiki/Ho%C5%99ava%E2%80%93Witten%20theory
|
In theoretical physics, the Hořava–Witten theory argues that the cancellation of anomalies guarantees that a supersymmetric gauge theory with the E8 gauge group propagates on a type of domain wall. This domain wall, a Hořava–Witten domain wall, behaves as a boundary of the eleven-dimensional spacetime in M-theory. Proposed by Petr Hořava and Edward Witten, the theory is important for various relations between M-theory and superstring theory.
References
String theory
|
https://en.wikipedia.org/wiki/Petr%20Ho%C5%99ava%20%28physicist%29
|
Petr Hořava (born 1963 in Prostějov) is a Czech string theorist. He is a professor of physics in the Berkeley Center for Theoretical Physics at the University of California, Berkeley, where he teaches courses on quantum field theory and string theory. Hořava is a member of the theory group at Lawrence Berkeley National Laboratory.
Work
Hořava is known for his articles written with Edward Witten about the Hořava-Witten domain walls in M-theory. These articles demonstrated that the ten-dimensional heterotic string theory could be produced from 11-dimensional M-theory by making one of the dimensions have edges (the domain walls). This discovery provided crucial support for the conjecture that all string theories could arise as limits of a single higher-dimensional theory.
Hořava is less well known for his independent discovery of D-branes in 1989; the discovery is usually attributed to Dai, Leigh and Polchinski, who found them independently in the same year.
In 2009, Hořava proposed a theory of gravity that separates space from time at high energy while matching some predictions of general relativity at lower energies.
See also
Hořava–Lifshitz gravity
Hořava–Witten domain wall
K-theory (physics)
References
External links
Hořava's webpage at LBNL
1963 births
String theorists
Czech physicists
Living people
Theoretical physicists
People from Prostějov
|
https://en.wikipedia.org/wiki/LHC%40home
|
LHC@home is a volunteer computing project researching particle physics that uses the Berkeley Open Infrastructure for Network Computing (BOINC) platform. The project's computing power is utilized by physicists at CERN in support of the Large Hadron Collider and other experimental particle accelerators.
The project is run with the help of over 5,400 active volunteer users contributing more than 10,000 computers processing at a combined 61 teraFLOPS. The project is cross-platform, and runs on a variety of computer hardware configurations.
Applications
The LHC@home project currently runs four applications—Atlas, CMS, SixTrack, and Test4Theory—which deal with different aspects of research conducted at the LHC, such as calculating particle beam stability and simulating proton collisions. Atlas, CMS, and Test4Theory use VirtualBox, an x86 virtualization software package.
Atlas
Atlas uses volunteer computing power to run simulations of the ATLAS experiment. It can be run in VirtualBox or natively on Linux.
Beauty
Beauty (LHCb) compared the decay of bottom quarks (b) and bottom antiquarks (b̄), which are also known as beauty quarks. The participation of volunteers in the application was suspended indefinitely on 19 November 2018.
CMS
The CMS application (formerly a standalone project called CMS@Home) allows users to run simulations for the Compact Muon Solenoid experiment on their computers.
SixTrack
SixTrack was first introduced as a beta on 1 September 2004 and a record 1000 users signed u
|
https://en.wikipedia.org/wiki/Graviphoton
|
In theoretical physics and quantum physics, a graviphoton or gravivector is a hypothetical particle which emerges as an excitation of the metric tensor (i.e. gravitational field) in spacetime dimensions higher than four, as described in Kaluza–Klein theory.
However, its crucial physical properties are analogous to a (massive) photon: it induces a "vector force", sometimes dubbed a "fifth force". The electromagnetic potential emerges from an extra component of the metric tensor $g_{\mu 5}$, where the figure 5 labels an additional, fifth dimension.
In gravity theories with extended supersymmetry (extended supergravities), a graviphoton is normally a superpartner of the graviton that behaves like a photon, and is prone to couple with gravitational strength, as was appreciated in the late 1970s. Unlike the graviton, it may provide a repulsive (as well as an attractive) force, and thus, in some technical sense, a type of anti-gravity. Under special circumstances, in several natural models, often descending from five-dimensional theories mentioned, it may actually cancel the gravitational attraction in the static limit. Joël Scherk investigated semirealistic aspects of this phenomenon, stimulating searches for physical manifestations of this mechanism.
See also
Graviscalar (a.k.a. radion)
Supergravity
References
Supersymmetry
Bosons
Photons
Hypothetical elementary particles
Force carriers
Subatomic particles with spin 1
|
https://en.wikipedia.org/wiki/Extended%20supersymmetry
|
In theoretical physics, extended supersymmetry is supersymmetry whose infinitesimal generators carry not only a spinor index $\alpha$, but also an additional index $i = 1, 2, \dots, \mathcal{N}$ where $\mathcal{N}$ is an integer (such as 2 or 4).
Extended supersymmetry is also called $\mathcal{N} = 2$, $\mathcal{N} = 4$ supersymmetry, for example. Extended supersymmetry is very important for the analysis of mathematical properties of quantum field theory and superstring theory. The more extended the supersymmetry is, the more it constrains physical observables and parameters.
See also
Supersymmetry algebra
Harmonic superspace
Projective superspace
Supersymmetry
|
https://en.wikipedia.org/wiki/Graviscalar
|
In theoretical physics, the hypothetical particle called the graviscalar or radion emerges as an excitation of general relativity's metric tensor, i.e. gravitational field, but is indistinguishable from a scalar in four dimensions, as shown in Kaluza–Klein theory. The scalar field comes from a component of the metric tensor $g_{55}$, where the figure 5 labels an additional fifth dimension. Variations in the scalar field represent variations in the size of the extra dimension. Also, in models with multiple extra dimensions, there exist several such particles. Moreover, in theories with extended supersymmetry, a graviscalar is usually a superpartner of the graviton that behaves as a particle with spin 0. This concept is closely related to the gauged Higgs models.
See also
Graviphoton (aka gravivector)
Dilaton
Kaluza–Klein theory
Randall–Sundrum models
Goldberger–Wise mechanism
References
Roy Maartens, "Brane-World Gravity", Living Rev. Relativ. 7 (2004), 7.
Supersymmetry
Theories of gravity
Hypothetical elementary particles
Force carriers
|
https://en.wikipedia.org/wiki/Electric%20displacement%20field
|
In physics, the electric displacement field (denoted by D) or electric induction is a vector field that appears in Maxwell's equations. It accounts for the electromagnetic effects of polarization and that of an electric field, combining the two in an auxiliary field. It plays a major role in topics such as the capacitance of a material, as well the response of dielectrics to electric field, and how shapes can change due to electric fields in piezoelectricity or flexoelectricity as well as the creation of voltages and charge transfer due to elastic strains.
In any material, if there is an inversion center then the charges at, for instance, $+x$ and $-x$ are the same. This means that there is no dipole. If an electric field is applied to an insulator, then (for instance) the negative charges can move slightly towards the positive side of the field, and the positive charges in the other direction. This leads to an induced dipole which is described as a polarization. There can be slightly different movements of the negative electrons and positive nuclei in molecules, or different displacements of the atoms in an ionic compound. Materials which do not have an inversion center display piezoelectricity and always have a polarization; in others, spatially varying strains can break the inversion symmetry and lead to polarization, the flexoelectric effect. Other stimuli such as magnetic fields can lead to polarization in some materials, this being called the magnetoelectric effect.
Definition
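The excerpt breaks off before stating the formula; for completeness, the standard definition in SI units is
$\mathbf{D} = \varepsilon_0 \mathbf{E} + \mathbf{P},$
where $\varepsilon_0$ is the vacuum permittivity, $\mathbf{E}$ the electric field, and $\mathbf{P}$ the polarization density of the material.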
|
https://en.wikipedia.org/wiki/Peierls%20bracket
|
In theoretical physics, the Peierls bracket is an equivalent description of the Poisson bracket. It can be defined directly from the action and does not require the canonical coordinates and their canonical momenta to be defined in advance.
The bracket $[A, B]$ is defined as
$[A, B] = D_A(B) - D_B(A),$
the difference between some kind of action of one quantity on the other, minus the flipped term.
In quantum mechanics, the Peierls bracket becomes a commutator i.e. a Lie bracket.
References
Peierls, R., "The Commutation Laws of Relativistic Field Theory", Proc. R. Soc. Lond. A 214, no. 1117 (21 August 1952), 143–157.
Theoretical physics
|
https://en.wikipedia.org/wiki/Almarian%20Decker
|
Almarian William Decker (born 1852, Ohio; died August 1893, Sierra Madre, California; interred Sierra Madre Pioneer Cemetery) was an American pioneer of electrical engineering involved in the early development of three-phase electrical power. In 1892 he was hired by H. H. Sinclair and Henry Fisher of the Redlands Electric Light and Power Company, a Californian generating company, to design a new three-phase generator for the Mill Creek No. 1 hydroelectric plant. The plant opened in 1893 and was still in operation as of 2004. This was the first commercial application of three-phase electrical power in the United States and probably the world. Its success led to the widespread adoption of three-phase power, in preference to single-phase and direct current.
Decker was also hired by Prof. Thaddeus Lowe of the Mount Lowe Railway which opened in Altadena, California in 1893. Decker was responsible for the daily supervision of the electrical installations on the railway, and had also computed the electrical requirements for the Great Incline operating system, even considering a series of rechargeable batteries since there was such a lack of resources for hydroelectric generation. Decker suffered from tuberculosis which left him so weak that he had to be ferried out to the work sites in a wheelbarrow daily to oversee the electrical installations. He died a little more than a month after the railroad opened.
Many of Decker's theories of electrical methodology were underestimated dur
|
https://en.wikipedia.org/wiki/Dangling%20bond
|
In chemistry, a dangling bond is an unsatisfied valence on an immobilized atom. An atom with a dangling bond is also referred to as an immobilized free radical or an immobilized radical, a reference to its structural and chemical similarity to a free radical.
When speaking of a dangling bond, one is generally referring to the state described above, containing one electron and thus leading to a neutrally charged atom. There are also dangling bond defects containing two or no electrons. These are negatively and positively charged respectively. Dangling bonds with two electrons have an energy close to the valence band of the material and those with none have an energy that is closer to the conduction band.
Properties
In order to gain enough electrons to fill their valence shells (see also octet rule), many atoms will form covalent bonds with other atoms. In the simplest case, that of a single bond, two atoms each contribute one unpaired electron, and the resulting pair of electrons is shared between them. Atoms that possess too few bonding partners to satisfy their valences and that possess unpaired electrons are termed "free radicals"; so, often, are molecules containing such atoms. When a free radical exists in an immobilized environment (for example, a solid), it is referred to as an "immobilized free radical" or a "dangling bond".
A dangling bond in (bulk) crystalline silicon is often pictured as a single unbound hybrid sp3 orbital on the silicon atom, with the other thre
|
https://en.wikipedia.org/wiki/Pairing
|
In mathematics, a pairing is an R-bilinear map from the Cartesian product of two R-modules, where the underlying ring R is commutative.
Definition
Let R be a commutative ring with unit, and let M, N and L be R-modules.
A pairing is any R-bilinear map $e : M \times N \to L$. That is, it satisfies
$e(r \cdot m, n) = e(m, r \cdot n) = r \cdot e(m, n),$
$e(m_1 + m_2, n) = e(m_1, n) + e(m_2, n)$ and $e(m, n_1 + n_2) = e(m, n_1) + e(m, n_2)$
for any $r \in R$ and any $m, m_1, m_2 \in M$ and any $n, n_1, n_2 \in N$. Equivalently, a pairing is an R-linear map
$M \otimes_R N \to L,$
where $M \otimes_R N$ denotes the tensor product of M and N.
A pairing can also be considered as an R-linear map
$\Phi : M \to \operatorname{Hom}_R(N, L)$, which matches the first definition by setting
$\Phi(m)(n) := e(m, n)$.
A pairing is called perfect if the above map $\Phi$ is an isomorphism of R-modules.
A pairing is called non-degenerate on the right if for the above map we have that $e(m, n) = 0$ for all $m$ implies $n = 0$; similarly, $e$ is called non-degenerate on the left if $e(m, n) = 0$ for all $n$ implies $m = 0$.
A pairing is called alternating if $N = M$ and $e(m, m) = 0$ for all $m$. In particular, this implies $e(m + n, m + n) = 0$, while bilinearity shows $e(m + n, m + n) = e(m, m) + e(m, n) + e(n, m) + e(n, n) = e(m, n) + e(n, m)$. Thus, for an alternating pairing, $e(m, n) = -e(n, m)$.
Examples
Any scalar product on a real vector space V is a pairing (set $M = N = V$ and $R = L = \mathbb{R}$ in the above definitions).
The determinant map (2 × 2 matrices over k) → k can be seen as a pairing $k^2 \times k^2 \to k$.
The Hopf map $S^3 \to S^2$ written as $h : S^2 \times S^2 \to S^2$ is an example of a pairing. For instance, Hardie et al. present an explicit construction of the map using poset models.
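A quick numerical check of the determinant example above (an illustrative sketch added here, not part of the article), verifying bilinearity and the alternating property:
import numpy as np

rng = np.random.default_rng(0)

def e(m, n):
    """The determinant pairing k^2 x k^2 -> k: e(m, n) = det([m | n])."""
    return m[0] * n[1] - m[1] * n[0]

m, m2, n = rng.standard_normal((3, 2))
r = 1.7

# R-bilinearity: linear in each slot separately.
assert np.isclose(e(r * m, n), r * e(m, n))
assert np.isclose(e(m + m2, n), e(m, n) + e(m2, n))
# Alternating: e(m, m) = 0, hence antisymmetric.
assert np.isclose(e(m, m), 0.0) and np.isclose(e(m, n), -e(n, m))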
Pairings in cryptography
In cryptography, often the following specialized definition is used:
Let $G_1, G_2$ be additive groups and $G_T$ a multiplicative group, all of prime order $p$. Let $P \in G_1$, $Q \in G_2$ be generators of $G_1$ and $G_2$ respectively.
A pairing is a map
$e : G_1 \times G_2 \to G_T$
for which the following holds:
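The excerpt breaks off before stating the conditions; the standard requirements are bilinearity, $e(aP, bQ) = e(P, Q)^{ab}$ for all integers $a, b$, together with non-degeneracy and efficient computability. The following toy Python construction (my illustration only; it is not a cryptographic pairing, since it leaks discrete logarithms) shows the bilinearity law: take $G_1 = G_2 = (\mathbb{Z}_q, +)$ with generator 1, $G_T$ the order-$q$ subgroup of $\mathbb{Z}_{23}^*$, and $e(a, b) = g^{ab}$.
# Toy pairing, illustration only: q = 11 divides 23 - 1, and 2 has order 11 mod 23.
p_mod, q, g = 23, 11, 2

def e(a, b):
    return pow(g, (a * b) % q, p_mod)

a, b = 7, 9
P, Q = 1, 1                                       # generators of the additive groups
assert e(a * P, b * Q) == pow(e(P, Q), a * b, p_mod)   # bilinearity
assert e(P, Q) != 1                               # non-degeneracy
print(e(a, b))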
|
https://en.wikipedia.org/wiki/Green%27s%20relations
|
In mathematics, Green's relations are five equivalence relations that characterise the elements of a semigroup in terms of the principal ideals they generate. The relations are named for James Alexander Green, who introduced them in a paper of 1951. John Mackintosh Howie, a prominent semigroup theorist, described this work as "so all-pervading that, on encountering a new semigroup, almost the first question one asks is 'What are the Green relations like?'" (Howie 2002). The relations are useful for understanding the nature of divisibility in a semigroup; they are also valid for groups, but in this case tell us nothing useful, because groups always have divisibility.
Instead of working directly with a semigroup S, it is convenient to define Green's relations over the monoid S1. (S1 is "S with an identity adjoined if necessary"; if S is not already a monoid, a new element is adjoined and defined to be an identity.) This ensures that principal ideals generated by some semigroup element do indeed contain that element. For an element a of S, the relevant ideals are:
The principal left ideal generated by a: $S^1 a = Sa \cup \{a\}$. This is the same as $\{sa : s \in S^1\}$, which is $Sa \cup \{a\}$.
The principal right ideal generated by a: $a S^1 = aS \cup \{a\}$, or equivalently $\{as : s \in S^1\}$.
The principal two-sided ideal generated by a: $S^1 a S^1$, or $SaS \cup Sa \cup aS \cup \{a\}$.
The L, R, and J relations
For elements a and b of S, Green's relations L, R and J are defined by
a L b if and only if S1 a = S1 b.
a R b if and only if a S1 = b S1.
a J b if and only if S1 a S1 = S1 b S1.
That is, a and b
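Because these principal ideals are finite sets for a finite semigroup, Green's relations can be computed directly. The following Python sketch (an illustration added here, using the full transformation monoid on three points and two arbitrarily chosen elements) compares principal ideals:
from itertools import product

# Full transformation monoid T_3: all maps {0,1,2} -> {0,1,2} as tuples,
# with multiplication by composition.  The identity (0,1,2) is in S, so S^1 = S.
n = 3
S = list(product(range(n), repeat=n))

def mul(f, g):                                # (f * g)(i) = f(g(i))
    return tuple(f[g[i]] for i in range(n))

def left_ideal(a):                            # S^1 a
    return frozenset([a] + [mul(s, a) for s in S])

def right_ideal(a):                           # a S^1
    return frozenset([a] + [mul(a, s) for s in S])

def two_sided_ideal(a):                       # S^1 a S^1
    return frozenset([a] + [mul(s, a) for s in S] + [mul(a, s) for s in S]
                     + [mul(mul(s, a), t) for s in S for t in S])

a, b = (0, 0, 1), (1, 1, 0)                   # same image and same kernel partition
print(left_ideal(a) == left_ideal(b))         # a L b ?  -> True
print(right_ideal(a) == right_ideal(b))       # a R b ?  -> True
print(two_sided_ideal(a) == two_sided_ideal(b))  # a J b ?  -> True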
|
https://en.wikipedia.org/wiki/Whitehead%20manifold
|
In mathematics, the Whitehead manifold is an open 3-manifold that is contractible, but not homeomorphic to $\mathbb{R}^3$. J. H. C. Whitehead discovered this puzzling object while he was trying to prove the Poincaré conjecture, correcting an error in an earlier paper where he incorrectly claimed that no such manifold exists.
A contractible manifold is one that can continuously be shrunk to a point inside the manifold itself. For example, an open ball is a contractible manifold. All manifolds homeomorphic to the ball are contractible, too. One can ask whether all contractible manifolds are homeomorphic to a ball. For dimensions 1 and 2, the answer is classical and it is "yes". In dimension 2, it follows, for example, from the Riemann mapping theorem. Dimension 3 presents the first counterexample: the Whitehead manifold.
Construction
Take a copy of $S^3$, the three-dimensional sphere. Now find a compact unknotted solid torus $T_1$ inside the sphere. (A solid torus is an ordinary three-dimensional doughnut, that is, a filled-in torus, which is topologically a circle times a disk.) The closed complement of the solid torus inside $S^3$ is another solid torus.
Now take a second solid torus $T_2$ inside $T_1$ so that $T_2$ and a tubular neighborhood of the meridian curve of $T_1$ is a thickened Whitehead link.
Note that $T_2$ is null-homotopic in the complement of the meridian of $T_1.$ This can be seen by considering $S^3$ as $\mathbb{R}^3 \cup \{\infty\}$ and the meridian curve as the $z$-axis together with $\infty.$ The torus $T_2$ has zero winding number around the $z$-axis. Thus the necessary null-
|
https://en.wikipedia.org/wiki/JH
|
JH may refer to:
Jh (digraph), in written language
JH (hash function), in cryptography
Japan Highway Public Corporation
Jharkhand, India (ISO 3166: JH)
Juvenile hormone
Fuji Dream Airlines (since 2008, IATA code JH), a Japanese airline
Harlequin Air (1997-2005, IATA code JH), a former Japanese airline
Nordeste Linhas Aéreas Regionais (1976-1995, IATA code JH), a former Brazilian airline
JH, the symbol for Yokohama Line railway service
|
https://en.wikipedia.org/wiki/Horace%20Lamb
|
Sir Horace Lamb (27 November 1849 – 4 December 1934) was a British applied mathematician and author of several influential texts on classical physics, among them Hydrodynamics (1895) and Dynamical Theory of Sound (1910). Both of these books remain in print. The word vorticity was invented by Lamb in 1916.
Biography
Early life and education
Lamb was born in Stockport, Cheshire, the son of John Lamb and his wife Elizabeth, née Rangeley. John Lamb was a foreman in a cotton mill who had gained some distinction by the invention of an improvement to spinning machines; he died when his son was a child. Lamb's mother married again, and soon afterwards Horace went to live with his strict maternal aunt, Mrs. Holland. He studied at Stockport Grammar School, where he made the acquaintance of a wise and kindly headmaster in the Rev. Charles Hamilton, and a graduate of classics, Frederic Slaney Poole, who in his final year became a good friend. It was from these two tutors that Lamb acquired his interest in mathematics and, to a somewhat lesser extent, classical literature.
In 1867, he gained a classical scholarship at Queens' College, Cambridge. Since Lamb's inclination, however, was to pursue a career in engineering, he chose to decline the offer, and instead worked for a year at the Owens College in nearby Manchester, as a means of developing his mathematical proficiency further.
At that time, the Chair of Pure Mathematics at Owens College was held by Thomas Barker, an eminent Scot
|
https://en.wikipedia.org/wiki/Manindra%20Agrawal
|
Manindra Agrawal (born 20 May 1966) is an Indian computer scientist and professor at the Department of Computer Science and Engineering at the Indian Institute of Technology, Kanpur. He was the recipient of the first Infosys Prize for Mathematics, the Gödel Prize in 2006, and the Shanti Swarup Bhatnagar Award in Mathematical Sciences in 2003. He was honoured with the Padma Shri, India's fourth-highest civilian award, in 2013.
Career
He created the AKS primality test with Neeraj Kayal and Nitin Saxena, for which he and his co-authors won the 2006 Fulkerson Prize, and the 2006 Gödel Prize. He was also awarded 2002 Clay Research Award for this work. The test is the first unconditional deterministic algorithm to test an n-digit number for primality in a time that has been proven to be polynomial in n.
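The test rests on the characterization that for $n \ge 2$, $n$ is prime if and only if $(x + 1)^n \equiv x^n + 1 \pmod{n}$ as polynomials. The Python sketch below (my illustration of this underlying identity, not the AKS algorithm itself, which achieves polynomial time by checking the congruence only modulo $x^r - 1$ for a suitably small $r$) verifies the identity by reducing the middle binomial coefficients mod n; checked directly like this, it takes exponential time.
from math import comb

def is_prime_poly_identity(n: int) -> bool:
    """n >= 2 is prime  iff  (x + 1)^n == x^n + 1 (mod n) as polynomials,
    i.e. every middle binomial coefficient C(n, k) vanishes mod n."""
    if n < 2:
        return False
    return all(comb(n, k) % n == 0 for k in range(1, n))

print([m for m in range(2, 40) if is_prime_poly_identity(m)])
# [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]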
In September 2008, Agrawal was chosen for the first Infosys Mathematics Prize for outstanding contributions in the broad field of mathematics. He also served on the Mathematical Sciences jury for the Infosys Prize in 2014 and 2015. He was a visiting scholar at the Institute for Advanced Study in 2003-04.
Agrawal served as the Deputy Director of IIT Kanpur from 2017 to 2021.
Awards and honors
Clay Research Award (2002)
S S Bhatnagar Prize in Mathematical Sciences (2003)
ICTP Prize (2003)
Fulkerson Prize (2006)
Gödel Prize (2006)
Infosys Prize (2008)
G.D. Birla Award for Scientific Research (2009)
TWAS Prize in Mathematics (2010)
Goyal Prize (2017)
References
External l
|
https://en.wikipedia.org/wiki/Pierre%20Varignon
|
Pierre Varignon (1654 – 23 December 1722) was a French mathematician. He was educated at the Jesuit College and the University of Caen, where he received his M.A. in 1682. He took Holy Orders the following year.
Varignon gained his first exposure to mathematics by reading Euclid and then Descartes' La Géométrie. He became professor of mathematics at the Collège Mazarin in Paris in 1688 and was elected to the Académie Royale des Sciences in the same year. In 1704 he held the departmental chair at Collège Mazarin and also became professor of mathematics at the Collège Royal. He was elected to the Berlin Academy in 1713 and to the Royal Society in 1718. Many of his works were published in Paris in 1725, three years after his death. His lectures at Mazarin were published in Elements de mathematique in 1731.
Varignon was a friend of Newton, Leibniz, and the Bernoulli family. Varignon's principal contributions were to graphic statics and mechanics. Except for l'Hôpital, Varignon was the earliest and strongest French advocate of infinitesimal calculus, and exposed the errors in Michel Rolle's critique thereof. He recognized the importance of a test for the convergence of series, but analytical difficulties prevented his success. Nevertheless, he simplified the proofs of many propositions in mechanics, adapted Leibniz's calculus to the inertial mechanics of Newton's Principia, and treated mechanics in terms of the composition of forces in Projet d'une nouvelle mécanique in 1687. A
|
https://en.wikipedia.org/wiki/ExoMars
|
ExoMars (Exobiology on Mars) is an astrobiology programme of the European Space Agency (ESA) and the Russian space agency (Roscosmos).
The goals of ExoMars are to search for signs of past life on Mars, investigate how the Martian water and geochemical environment varies, investigate atmospheric trace gases and their sources and, by doing so, demonstrate the technologies for a future Mars sample-return mission.
The first part of the programme is a mission launched in 2016 that placed the Trace Gas Orbiter into Mars orbit and released the Schiaparelli EDM lander. The orbiter is operational but the lander crashed on the planet's surface. The second part of the programme was planned to launch in July 2020, when the Kazachok lander would have delivered the Rosalind Franklin rover to the surface, supporting a science mission that was expected to last into 2022 or beyond. On 12 March 2020, it was announced that the second mission was being delayed to 2022 as a result of problems with the parachutes, which could not be resolved in time for the launch window.
The Trace Gas Orbiter (TGO) and a test stationary lander called Schiaparelli were launched on 14 March 2016. TGO entered Mars orbit on 19 October 2016 and proceeded to map the sources of methane () and other trace gases present in the Martian atmosphere that could be evidence for possible biological or geological activity. The TGO features four instruments and will also act as a communications relay satellite. The Schiaparelli
|
https://en.wikipedia.org/wiki/Jacobian%20conjecture
|
In mathematics, the Jacobian conjecture is a famous unsolved problem concerning polynomials in several variables. It states that if a polynomial function from an n-dimensional space to itself has Jacobian determinant which is a non-zero constant, then the function has a polynomial inverse. It was first conjectured in 1939 by Ott-Heinrich Keller, and widely publicized by Shreeram Abhyankar, as an example of a difficult question in algebraic geometry that can be understood using little beyond a knowledge of calculus.
The Jacobian conjecture is notorious for the large number of attempted proofs that turned out to contain subtle errors. As of 2018, there are no plausible claims to have proved it. Even the two-variable case has resisted all efforts. There are currently no known compelling reasons for believing the conjecture to be true, and according to van den Essen there are some suspicions that the conjecture is in fact false for large numbers of variables (indeed, there is equally also no compelling evidence to support these suspicions). The Jacobian conjecture is number 16 in Stephen Smale's 1998 list of Mathematical Problems for the Next Century.
The Jacobian determinant
Let N > 1 be a fixed integer and consider polynomials f1, ..., fN in variables X1, ..., XN with coefficients in a field k. Then we define a vector-valued function F: kN → kN by setting:
F(X1, ..., XN) = (f1(X1, ...,XN),..., fN(X1,...,XN)).
Any map F: kN → kN arising in this way is called a polynomial
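As a small worked illustration (added here; the triangular map below is a standard easy case, not representative of the difficulty of the conjecture), the following sympy sketch exhibits a polynomial map with constant Jacobian determinant 1 and verifies its polynomial inverse:
import sympy as sp

x, y = sp.symbols("x y")

# A polynomial map F: k^2 -> k^2 with constant nonzero Jacobian determinant...
F = sp.Matrix([x, y + x**2])
J = F.jacobian([x, y])
assert sp.simplify(J.det()) == 1

# ...and, as the conjecture predicts (trivially in this triangular case),
# a polynomial inverse G with G(F(x, y)) = (x, y).
G = sp.Matrix([x, y - x**2])
composed = G.subs({x: F[0], y: F[1]}, simultaneous=True)
assert sp.simplify(composed - sp.Matrix([x, y])) == sp.zeros(2, 1)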
|
https://en.wikipedia.org/wiki/Shanti%20Swarup%20Bhatnagar%20Prize%20for%20Science%20and%20Technology
|
The Shanti Swarup Bhatnagar Prize for Science and Technology (SSB) is a science award in India given annually by the Council of Scientific and Industrial Research (CSIR) for notable and outstanding research, applied or fundamental, in biology, chemistry, environmental science, engineering, mathematics, medicine, and physics. The prize recognizes outstanding Indian work (according to the view of CSIR awarding committee) in science and technology. It is the most coveted award in multidisciplinary science in India. The award is named after the founder Director of the Council of Scientific & Industrial Research, Shanti Swarup Bhatnagar. It was first awarded in 1958.
Any citizen of India engaged in research in any field of science and technology up to the age of 45 years is eligible for the prize. The prize is awarded on the basis of contributions made through work done in India only during the five years preceding the year of the prize. The prize comprises a citation, a plaque, and a cash award. In addition, recipients also receive Rs. 15,000 per month up to the age of 65 years.
Nomination and selection
Names of candidates are proposed by a member of the governing body of CSIR, Vice-Chancellors of universities or institutes of national importance, and deans of different faculties of science and former awardees. Selection is made by the Advisory Committee constituted each year and necessarily consists of at least six experts including at least one former Bhatnagar Awardee
|
https://en.wikipedia.org/wiki/Poisson%20formula
|
In mathematics, the Poisson formula, named after Siméon Denis Poisson, may refer to:
Poisson distribution in probability
Poisson summation formula in Fourier analysis
Poisson kernel in complex or harmonic analysis
Poisson–Jensen formula in complex analysis
|
https://en.wikipedia.org/wiki/Alan%20Burns%20%28professor%29
|
Professor Alan Burns FREng FIET FBCS FIEEE CEng is a professor in the Computer Science Department at the University of York, England. He has been at the University of York since 1990, and held the post of Head of Department from 1999 until 30 June 2006, when he was succeeded by John McDermid.
He is a member of the department's Real-Time Systems Research Group, and has authored or co-authored over 300 publications, with a large proportion of them concentrating on real-time systems and the Ada programming language. Burns has been actively involved in the creation of the Ravenscar profile, a subset of Ada's tasking model, designed to enable the analysis of real-time programs for their timing properties.
In 2006, Alan Burns was awarded the Annual Technical Achievement Award for technical achievement and leadership by the IEEE Technical Committee on Real-time Systems. In 2009, he was elected Fellow of the Royal Academy of Engineering.
He is also a Fellow of the British Computer Society (BCS) and the Institution of Engineering and Technology (IET), and a Fellow of the Institute of Electrical and Electronics Engineers (IEEE).
Books
Alan Burns has written a number of computer science books.
References
External links
Alan Burns departmental home page
Personal home page
Year of birth missing (living people)
Living people
Academics of the University of York
English computer scientists
Computer science writers
Fellows of the British Computer Society
Fellows of the Institution
|
https://en.wikipedia.org/wiki/Heine%E2%80%93Cantor%20theorem
|
In mathematics, the Heine–Cantor theorem, named after Eduard Heine and Georg Cantor, states that if is a continuous function between two metric spaces and , and is compact, then is uniformly continuous. An important special case is that every continuous function from a closed bounded interval to the real numbers is uniformly continuous.
Proof
Suppose that $M$ and $N$ are two metric spaces with metrics $d_M$ and $d_N$, respectively. Suppose further that a function $f : M \to N$ is continuous and $M$ is compact. We want to show that $f$ is uniformly continuous, that is, for every positive real number $\varepsilon$ there exists a positive real number $\delta$ such that for all points $x, y$ in the function domain $M$, $d_M(x, y) < \delta$ implies that $d_N(f(x), f(y)) < \varepsilon$.
Consider some positive real number $\varepsilon$. By continuity, for any point $x$ in the domain $M$, there exists some positive real number $\delta_x$ such that $d_N(f(x), f(y)) < \varepsilon/2$ when $d_M(x, y) < 2\delta_x$, i.e., the fact that $y$ is within $2\delta_x$ of $x$ implies that $f(y)$ is within $\varepsilon/2$ of $f(x)$.
Let $U_x$ be the open $\delta_x$-neighborhood of $x$, i.e. the set
$U_x = \{ y : d_M(x, y) < \delta_x \}.$
Since each point $x$ is contained in its own $U_x$, we find that the collection $\{U_x : x \in M\}$ is an open cover of $M$. Since $M$ is compact, this cover has a finite subcover $\{U_{x_1}, U_{x_2}, \ldots, U_{x_n}\}$ where $x_1, \ldots, x_n \in M$. Each of these open sets has an associated radius $\delta_{x_i}$. Let us now define $\delta = \min_{1 \le i \le n} \delta_{x_i}$, i.e. the minimum radius of these open sets. Since we have a finite number of positive radii, this minimum $\delta$ is well-defined and positive. We now show that this $\delta$ works for the definition of uniform continuity.
Suppose that $d_M(x, y) < \delta$ for any two $x, y$ in $M$. Since the sets $U_{x_i}$ form an open (sub)cover of our space $M$, we know that $x$ must lie within one of
|
https://en.wikipedia.org/wiki/P.%20K.%20Srinivasan
|
P.K. Srinivasan (PKS) (4 November 1924 – 20 June 2005) was a well-known mathematics teacher in India. He taught mathematics at the Muthialpet High School in Chennai, India, until his retirement. His singular dedication to mathematics education took him to the United States, where he worked for a year, and then to Nigeria, where he worked for six years. He is known in India for his dedication to teaching mathematics and for creating pioneering awareness of the Indian mathematician Ramanujan. He authored several books in English, Telugu and Tamil that introduce mathematics to children in novel and interesting ways. He was also a prominent reviewer of math books in the weekly Book Review column of the Indian newspaper The Hindu in Chennai.
Experience
PKS, as he was known to the world-at-large, among his colleagues, students and friends, travelled to the United States as a Fulbright exchange teacher and worked in Liverpool Central School, New York, in 1965-'66. Later he served as a Senior education officer and a Senior lecturer in Mathematics in Nigeria for seven years. He served as a lecturer in the National Council of Educational Research and Training (NCERT) in India. He organized over sixty math expositions and fairs in India, Nigeria and the United States, and participated actively in four International Congress on Mathematical Education (ICME) conferences.
He inspired many a creative idea and gave them shape through demonstrative displays by his students
|
https://en.wikipedia.org/wiki/Fialka
|
In cryptography, Fialka (M-125) is the name of a Cold War-era Soviet cipher machine. A rotor machine, the device uses 10 rotors, each with 30 contacts along with mechanical pins to control stepping. It also makes use of a punched card mechanism. Fialka means "violet" in Russian. Information regarding the machine was quite scarce until c. 2005 because the device had been kept secret.
Fialka contains a five-level paper tape reader on the right hand side at the front of the machine, and a paper tape punch and tape printing mechanism on top. The punched-card input for keying the machine is located on the left hand side. The Fialka requires 24 volt DC power and comes with a separate power supply that accepts power at 100 to 250 VAC, 50–400 Hz by means of an external selector switch.
The machine's rotors are labelled with Cyrillic, requiring 30 points on the rotors; this is in contrast to many comparable Western machines with 26-contact rotors, corresponding to the Latin alphabet. The keyboard, at least in the examples of East German origin, had both Cyrillic and Latin markings. There are at least two versions known to exist, the M-125-MN and the M-125-3MN. The M-125-MN had a typewheel that could handle Latin and Cyrillic letters. The M-125-3MN had separate typewheels for Latin and Cyrillic. The M-125-3MN had three modes, single shift letters, double shift with letters and symbols, and digits only, for use with code books and to superencrypt numeric ciphers.
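As a schematic illustration of the rotor principle described above, here is a toy Python model (my sketch only, NOT the real Fialka: the historical wirings, the reflector, the punched-card keying and the pin-controlled stepping are not modeled) with ten 30-contact rotors and simplified odometer-style stepping:
import random

N = 30                                        # 30 contacts, as on Fialka rotors
rng = random.Random(1905)                     # arbitrary seed for repeatability

def make_rotor():
    wiring = list(range(N))
    rng.shuffle(wiring)
    return wiring

ROTORS = [make_rotor() for _ in range(10)]    # Fialka used 10 rotors

def encipher(msg, positions):
    """msg is a list of letter indices 0..29; positions is mutated in place."""
    out = []
    for c in msg:
        # Advance rotors like an odometer (a simplification of the pin stepping).
        for i in range(len(positions)):
            positions[i] = (positions[i] + 1) % N
            if positions[i] != 0:
                break
        x = c
        for rotor, p in zip(ROTORS, positions):
            x = (rotor[(x + p) % N] - p) % N  # substitution through an offset rotor
        out.append(x)
    return out

print(encipher([0, 1, 2, 3, 4], [0] * 10))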
Encryption mechanism
|
https://en.wikipedia.org/wiki/Hilbert%E2%80%93Schmidt
|
In mathematics, Hilbert–Schmidt may refer to
a Hilbert–Schmidt operator;
a Hilbert–Schmidt integral operator;
the Hilbert–Schmidt theorem.
|
https://en.wikipedia.org/wiki/American%20Regions%20Mathematics%20League
|
The American Regions Mathematics League (ARML) is an annual, national high school mathematics team competition held simultaneously at four locations in the United States: the University of Iowa, Penn State, University of Nevada, Reno, and the University of Alabama in Huntsville. Past sites have included San Jose State University, Rutgers University, UNLV, Duke University, and University of Georgia.
Teams consist of 15 members, which usually represent a large geographic region (such as a state) or a large population center (such as a major city). Some schools also field teams. The competition is held in June, on the first Saturday after Memorial Day.
In 2022, 120 teams competed with about 1800 students.
ARML problems cover a wide variety of mathematical topics including algebra, geometry, number theory, combinatorics, probability, and inequalities. Calculus is not required to successfully complete any problem, but it may facilitate solving the problem more quickly or efficiently. While part of the competition is short-answer based, there is a cooperative team round, and a proof-based power question (also completed as a team). ARML problems are harder than most high school mathematics competitions.
The contest is sponsored by D. E. Shaw & Co. Contest supporters are the American Mathematical Society, Mu Alpha Theta (the National Mathematics Honor Society for High School and Two-Year College students), Star League, Penguin Books, and Princeton University Press.
Competi
|
https://en.wikipedia.org/wiki/Pre-intuitionism
|
In the philosophy of mathematics, the pre-intuitionists were a small but influential group who informally shared similar philosophies on the nature of mathematics. The term itself was used by L. E. J. Brouwer, who in his 1951 lectures at Cambridge described the differences between intuitionism and its predecessors:
Of a totally different orientation [from the "Old Formalist School" of Dedekind, Cantor, Peano, Zermelo, and Couturat, etc.] was the Pre-Intuitionist School, mainly led by Poincaré, Borel and Lebesgue. These thinkers seem to have maintained a modified observational standpoint for the introduction of natural numbers, for the principle of complete induction [...] For these, even for such theorems as were deduced by means of classical logic, they postulated an existence and exactness independent of language and logic and regarded its non-contradictority as certain, even without logical proof. For the continuum, however, they seem not to have sought an origin strictly extraneous to language and logic.
The introduction of natural numbers
The pre-intuitionists, as defined by L. E. J. Brouwer, differed from the formalist standpoint in several ways, particularly in regard to the introduction of natural numbers, or how the natural numbers are defined/denoted. For Poincaré, the definition of a mathematical entity is the construction of the entity itself and not an expression of an underlying essence or existence.
This is to say that no mathematical object exists without
|
https://en.wikipedia.org/wiki/Parcel
|
Parcel may refer to:
Parcels (band), an Australian modern soul band
Parcel (consignment), an individual consignment of cargo for shipment
Parcel (film), 2019 Bengali film
Parcel (package), sent through the mail or package delivery
Fluid parcel, a concept in fluid dynamics
Land lot, a piece of land
Placer (geography), parcel in Portuguese, a type of submerged bank or reef
an object used in the game Pass the parcel
|
https://en.wikipedia.org/wiki/Mohr%E2%80%93Mascheroni%20theorem
|
In mathematics, the Mohr–Mascheroni theorem states that any geometric construction that can be performed by a compass and straightedge can be performed by a compass alone.
It must be understood that "any geometric construction" refers to figures that contain no straight lines, as it is clearly impossible to draw a straight line without a straightedge. It is understood that a line is determined provided that two distinct points on that line are given or constructed, even though no visual representation of the line will be present. The theorem can be stated more precisely as:
Any Euclidean construction, insofar as the given and required elements are points (or circles), may be completed with the compass alone if it can be completed with both the compass and the straightedge together.
Though the use of a straightedge can make a construction significantly easier, the theorem shows that any set of points that fully defines a constructed figure can be determined with compass alone, and the only reason to use a straightedge is for the aesthetics of seeing straight lines, which for the purposes of construction is functionally unnecessary.
History
The result was originally published by Georg Mohr in 1672, but his proof languished in obscurity until 1928. The theorem was independently discovered by Lorenzo Mascheroni in 1797 and it was known as Mascheroni's Theorem until Mohr's work was rediscovered.
Several proofs of the result are known. Mascheroni's proof of 1797 was generall
|
https://en.wikipedia.org/wiki/Laws%20of%20robotics
|
Laws of robotics are any set of laws, rules, or principles, which are intended as a fundamental framework to underpin the behavior of robots designed to have a degree of autonomy. Robots of this degree of complexity do not yet exist, but they have been widely anticipated in science fiction, films and are a topic of active research and development in the fields of robotics and artificial intelligence.
The best known set of laws are those written by Isaac Asimov in the 1940s, or based upon them, but other sets of laws have been proposed by researchers in the decades since then.
Isaac Asimov's "Three Laws of Robotics"
The best known set of laws are Isaac Asimov's "Three Laws of Robotics". These were introduced in his 1942 short story "Runaround", although they were foreshadowed in a few earlier stories. The Three Laws are:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
In The Evitable Conflict the machines generalize the First Law to mean:
No machine may harm humanity; or, through inaction, allow humanity to come to harm.
This was refined at the end of Foundation and Earth, where a zeroth law was introduced, with the original three suitably rewritten as subordinate to it: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
Adaptations and extensions exist ba
|
https://en.wikipedia.org/wiki/CellML
|
CellML is an XML-based markup language for describing mathematical models. Although it could theoretically describe any mathematical model, it was originally created with the Physiome Project in mind, and is hence used primarily to describe models relevant to the field of biology. This is reflected in its name CellML, although this is simply a name, not an abbreviation. CellML is growing in popularity as a portable description format for computational models, and groups throughout the world are using CellML for modelling or developing software tools based on CellML. CellML is similar to Systems Biology Markup Language SBML but provides greater scope for model modularity and reuse, and is not specific to descriptions of biochemistry.
History
The CellML language grew from a need to share models of cardiac cell dynamics among researchers at a number of sites across the world. The original working group formed in 1998 consisted of David Bullivant, Warren Hedley, and Poul Nielsen; all three were at that time members of the Department of Engineering Science at the University of Auckland. The language was an application of the XML specification developed by the World Wide Web Consortium – the decision to use XML was based on late 1998 recommendations from Warren Hedley and André (David) Nickerson. Existing XML-based languages were leveraged to describe the mathematics (content MathML), metadata (RDF), and links between resources (XLink). The CellML working group first became aware of
|
https://en.wikipedia.org/wiki/Initial%20topology
|
In general topology and related areas of mathematics, the initial topology (or induced topology or weak topology or limit topology or projective topology) on a set with respect to a family of functions on is the coarsest topology on that makes those functions continuous.
The subspace topology and product topology constructions are both special cases of initial topologies. Indeed, the initial topology construction can be viewed as a generalization of these.
The dual notion is the final topology, which for a given family of functions mapping to a set is the finest topology on that makes those functions continuous.
Definition
Given a set $X$ and an indexed family $(Y_i)_{i \in I}$ of topological spaces with functions
$f_i : X \to Y_i,$
the initial topology $\tau$ on $X$ is the coarsest topology on $X$ such that each
$f_i : (X, \tau) \to Y_i$
is continuous.
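Stated in symbols (a LaTeX restatement of the definition above, in standard notation):

```latex
\[
  \tau_{\mathrm{init}}
  \;=\;
  \text{the topology generated by the subbase }
  \bigl\{\, f_i^{-1}(U) \;:\; i \in I,\ U \text{ open in } Y_i \,\bigr\}.
\]
```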
Definition in terms of open sets
If $(\tau_i)_{i \in I}$ is a family of topologies on $X$ indexed by $I$, then the least upper bound of these topologies is the coarsest topology on $X$ that is finer than each $\tau_i$. This topology always exists and it is equal to the topology generated by $\bigcup_{i \in I} \tau_i$.
If for every $i \in I$, $\tau_{Y_i}$ denotes the topology on $Y_i$, then $f_i^{-1}(\tau_{Y_i}) = \{ f_i^{-1}(V) : V \in \tau_{Y_i} \}$ is a topology on $X$, and the initial topology is the least upper bound topology of the $I$-indexed family of topologies $f_i^{-1}(\tau_{Y_i})$ (for $i \in I$).
Explicitly, the initial topology is the collection of open sets generated by all sets of the form $f_i^{-1}(U)$, where $U$ is an open set in $Y_i$ for some $i \in I$, under finite intersections and arbitrary unions.
Sets of the form $f_i^{-1}(U)$ are often called cylinder sets. If $I$ contains exactly one element, then all the open sets of the initial topology are
|
https://en.wikipedia.org/wiki/Formal%20proof
|
In logic and mathematics, a formal proof or derivation is a finite sequence of sentences (called well-formed formulas in the case of a formal language), each of which is an axiom, an assumption, or follows from the preceding sentences in the sequence by a rule of inference. It differs from a natural language argument in that it is rigorous, unambiguous and mechanically verifiable. If the set of assumptions is empty, then the last sentence in a formal proof is called a theorem of the formal system. The notion of theorem is not in general effective; therefore, there may be no method by which we can always find a proof of a given sentence or determine that none exists. The concepts of Fitch-style proof, sequent calculus and natural deduction are generalizations of the concept of proof.
The theorem is a syntactic consequence of all the well-formed formulas preceding it in the proof. For a well-formed formula to qualify as part of a proof, it must be the result of applying a rule of the deductive apparatus (of some formal system) to the previous well-formed formulas in the proof sequence.
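To make the "mechanically verifiable" point concrete, here is a hedged toy sketch of a proof checker for a Hilbert-style system with modus ponens as its only rule; the encoding of formulas as strings and tuples is an arbitrary illustrative choice, not a standard format:

```python
# A toy checker for Hilbert-style proofs: each line is an axiom, an
# assumption, or follows by modus ponens from two earlier lines.
# Formulas are strings; implications are encoded as ("->", p, q) tuples.
# This is an illustrative sketch, not a real proof assistant.

def check_proof(lines, axioms, assumptions):
    """lines: list of (formula, justification) pairs, where justification
    is "axiom", "assumption", or ("mp", i, j) citing earlier line indices."""
    proved = []
    for formula, just in lines:
        if just == "axiom":
            ok = formula in axioms
        elif just == "assumption":
            ok = formula in assumptions
        elif isinstance(just, tuple) and just[0] == "mp":
            _, i, j = just
            # Modus ponens: from p and ("->", p, q), conclude q.
            p, imp = proved[i], proved[j]
            ok = imp == ("->", p, formula)
        else:
            ok = False
        if not ok:
            return False
        proved.append(formula)
    return True

# Example: from assumption p and axiom p -> q, derive q.
p, q = "p", "q"
proof = [(p, "assumption"), (("->", p, q), "axiom"), (q, ("mp", 0, 1))]
print(check_proof(proof, axioms={("->", p, q)}, assumptions={p}))  # True
```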
Formal proofs often are constructed with the help of computers in interactive theorem proving (e.g., through the use of proof checkers and automated theorem provers). Significantly, these proofs can be checked automatically, including by computer. Checking formal proofs is usually simple, while the problem of finding proofs (automated theorem proving) is usually computationally intractable and/or only
|
https://en.wikipedia.org/wiki/List%20of%20loop%20quantum%20gravity%20researchers
|
This is a list of researchers in the physics field of loop quantum gravity who have Wikipedia articles.
Abhay Ashtekar, Pennsylvania State University, United States
John Baez, University of California, Riverside, United States
Aurélien Barrau, Université Grenoble Alpes, Grenoble, France
John W. Barrett, University of Nottingham, United Kingdom
Eugenio Bianchi, Pennsylvania State University, United States
Martin Bojowald, Pennsylvania State University, United States
Alejandro Corichi, National Autonomous University of Mexico, Mexico
Bianca Dittrich, Perimeter Institute for Theoretical Physics, Canada
Laurent Freidel, Perimeter Institute for Theoretical Physics, Canada
Rodolfo Gambini, University of the Republic, Uruguay
Jerzy Lewandowski, University of Warsaw, Poland
Jorge Pullin, Louisiana State University, United States
Carlo Rovelli, Centre de Physique Théorique, Centre National de la Recherche Scientifique (CNRS), Aix-Marseille University and University of Toulon, Marseille, France
Lee Smolin, Perimeter Institute for Theoretical Physics, Canada
Thomas Thiemann, Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany
See also
List of quantum gravity researchers
List of contributors to general relativity
External links
Map of Loop Quantum Gravity people and research groups.
Centre de Physique Théorique, Marseille, France
Institute for Gravitation and the Cosmos, Penn State, United States
Perimeter Institute for Theoretical Physics, Waterloo, Canada
Q
|
https://en.wikipedia.org/wiki/Hamiltonian%20system
|
A Hamiltonian system is a dynamical system governed by Hamilton's equations. In physics, this dynamical system describes the evolution of a physical system such as a planetary system or an electron in an electromagnetic field. These systems can be studied in both Hamiltonian mechanics and dynamical systems theory.
Overview
Informally, a Hamiltonian system is a mathematical formalism developed by Hamilton to describe the evolution equations of a physical system. The advantage of this description is that it gives important insights into the dynamics, even if the initial value problem cannot be solved analytically. One example is the planetary movement of three bodies: while there is no closed-form solution to the general problem, Poincaré showed for the first time that it exhibits deterministic chaos.
Formally, a Hamiltonian system is a dynamical system characterised by the scalar function $H(\boldsymbol{q}, \boldsymbol{p}, t)$, also known as the Hamiltonian. The state of the system, $\boldsymbol{r}$, is described by the generalized coordinates $\boldsymbol{p}$ and $\boldsymbol{q}$, corresponding to generalized momentum and position respectively. Both $\boldsymbol{p}$ and $\boldsymbol{q}$ are real-valued vectors with the same dimension N. Thus, the state is completely described by the 2N-dimensional vector
$\boldsymbol{r} = (\boldsymbol{q}, \boldsymbol{p}),$
and the evolution equations are given by Hamilton's equations:
$\frac{d\boldsymbol{p}}{dt} = -\frac{\partial H}{\partial \boldsymbol{q}}, \qquad \frac{d\boldsymbol{q}}{dt} = +\frac{\partial H}{\partial \boldsymbol{p}}.$
The trajectory $\boldsymbol{r}(t)$ is the solution of the initial value problem defined by Hamilton's equations and the initial condition $\boldsymbol{r}(0) = \boldsymbol{r}_0$.
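As a hedged numerical sketch, Hamilton's equations for the one-dimensional harmonic oscillator $H(q, p) = \tfrac{1}{2}p^2 + \tfrac{1}{2}q^2$ can be integrated with the symplectic Euler method; the Hamiltonian, step size, and function name are illustrative choices, and symplectic Euler is used here because it approximately conserves energy over long runs:

```python
import numpy as np

# Integrate Hamilton's equations for H(q, p) = p**2/2 + q**2/2 with
# symplectic Euler: update p first, then q using the updated p.
def symplectic_euler(q, p, h, steps):
    trajectory = [(q, p)]
    for _ in range(steps):
        p = p - h * q      # dp/dt = -dH/dq = -q
        q = q + h * p      # dq/dt = +dH/dp = p (using the updated p)
        trajectory.append((q, p))
    return np.array(trajectory)

traj = symplectic_euler(q=1.0, p=0.0, h=0.01, steps=1000)
energy = 0.5 * traj[:, 1] ** 2 + 0.5 * traj[:, 0] ** 2
print(f"energy drift: {energy.max() - energy.min():.2e}")  # stays small
```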
Time-independent Hamiltonian systems
If the Hamiltonian is not explicitly time-dependent, i.e. if $H(\boldsymbol{q}, \boldsymbol{p}, t) = H(\boldsymbol{q}, \boldsymbol{p})$,
|
https://en.wikipedia.org/wiki/Downregulation%20and%20upregulation
|
In biochemistry, in the biological context of organisms' regulation of gene expression and production of gene products, downregulation is the process by which a cell decreases the production and quantities of its cellular components, such as RNA and proteins, in response to an external stimulus. The complementary process that involves increase in quantities of cellular components is called upregulation.
An example of downregulation is the cellular decrease in the expression of a specific receptor in response to its increased activation by a molecule, such as a hormone or neurotransmitter, which reduces the cell's sensitivity to the molecule. This is an example of a locally acting (negative feedback) mechanism.
An example of upregulation is the response of liver cells exposed to such xenobiotic molecules as dioxin. In this situation, the cells increase their production of cytochrome P450 enzymes, which in turn increases degradation of these dioxin molecules.
Downregulation or upregulation of an RNA or protein may also arise by an epigenetic alteration. Such an epigenetic alteration can cause expression of the RNA or protein to no longer respond to an external stimulus. This occurs, for instance, during drug addiction or progression to cancer.
Downregulation and upregulation of receptors
All living cells have the ability to receive and process signals that originate outside their membranes, which they do by means of proteins called receptors, often located at the cell's s
|
https://en.wikipedia.org/wiki/Piermaria%20Oddone
|
Piermaria Jorge Oddone (born March 26, 1944 in Arequipa, Peru) is a Peruvian-American particle physicist.
Oddone earned his bachelor's degree in physics at the Massachusetts Institute of Technology in 1965 and a PhD in physics from Princeton University in 1970.
From 1972, Oddone worked at the US Department of Energy’s Lawrence Berkeley National Laboratory. In 1987 he was appointed director of the Physics Division at Berkeley Lab, and later became the laboratory deputy director for scientific programs. In 1987, he proposed the idea of using an Asymmetric B-factory to study the violation of CP symmetry in the decay of B-mesons.
In the late seventies and early eighties, Oddone was a member of the team that developed the first Time Projection Chamber (TPC). This technology was subsequently used for many particle and nuclear physics experiments. He led the TPC collaboration from 1984 to 1987.
He was appointed director of Fermi National Accelerator Laboratory (Fermilab) and took up office on 1 July 2005.
Oddone received the 2005 Panofsky Prize in Experimental Particle Physics for the invention of the Asymmetric B-Factory to carry out precision measurements of CP violation in B-meson decays.
He was elected Fellow of the American Physical Society in 1990 "for significant research in elementary-particle physics and contributions to the development of apparatus as well as of the infrastructure required for future advances of the field".
In 2011 at an international symposium he
|
https://en.wikipedia.org/wiki/Function%20generator
|
In electrical engineering, a function generator is usually a piece of electronic test equipment or software used to generate different types of electrical waveforms over a wide range of frequencies. Some of the most common waveforms produced by the function generator are the sine wave, square wave, triangular wave and sawtooth shapes. These waveforms can be either repetitive or single-shot (which requires an internal or external trigger source). Another feature included on many function generators is the ability to add a DC offset. Integrated circuits used to generate waveforms may also be described as function generator ICs.
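A software function generator can be sketched in a few lines: the following synthesizes the four classic waveforms from a shared normalized phase ramp. The function name and the sawtooth/triangle conventions are illustrative choices, not a standard API:

```python
import numpy as np

# Synthesize one second of the classic function-generator waveforms at
# frequency f (Hz), sampled at rate fs (Hz). Conventions are illustrative.
def waveforms(f=1.0, fs=1000.0):
    t = np.arange(0, 1.0, 1.0 / fs)
    phase = (f * t) % 1.0                      # normalized phase in [0, 1)
    sine = np.sin(2 * np.pi * f * t)
    square = np.where(phase < 0.5, 1.0, -1.0)  # +1 / -1 square wave
    sawtooth = 2.0 * phase - 1.0               # ramp from -1 to +1
    triangle = 2.0 * np.abs(sawtooth) - 1.0    # fold the ramp into a triangle
    return t, sine, square, sawtooth, triangle

t, sine, square, saw, tri = waveforms(f=2.0, fs=8000.0)
print(len(t), sine[:3])
```

A DC offset, another standard function-generator feature mentioned above, would simply be a constant added to any of these arrays.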
Although function generators cover both audio and radio frequencies, they are usually not suitable for applications that need low distortion or stable frequency signals. When those traits are required, other signal generators would be more appropriate.
Some function generators can be phase-locked to an external signal source (which may be a frequency reference) or another function generator.
Function generators are used in the development, test and repair of electronic equipment. For example, they may be used as a signal source to test amplifiers or to introduce an error signal into a control loop. Function generators are primarily used for working with analog circuits; related pulse generators are primarily used for working with digital circuits.
Principles of Operation
Simple function generators usually generate triangular
|
https://en.wikipedia.org/wiki/Subgroup%20growth
|
In mathematics, subgroup growth is a branch of group theory, dealing with quantitative questions about subgroups of a given group.
Let $G$ be a finitely generated group. Then, for each integer $n$, define $a_n(G)$ to be the number of subgroups of index $n$ in $G$. Similarly, if $G$ is a topological group, $s_n(G)$ denotes the number of open subgroups of index $n$ in $G$. One similarly defines $m_n(G)$ and $s_n^{\triangleleft}(G)$ to denote the number of maximal and normal subgroups of index $n$, respectively.
Subgroup growth studies these functions, their interplay, and the characterization of group theoretical properties in terms of these functions.
The theory was motivated by the desire to enumerate finite groups of given order, and the analogy with Mikhail Gromov's notion of word growth.
Nilpotent groups
Let $G$ be a finitely generated torsionfree nilpotent group. Then there exists a composition series with infinite cyclic factors, which induces a bijection (though not necessarily a homomorphism)
$\mathbb{Z}^n \to G$
such that group multiplication can be expressed by polynomial functions in these coordinates; in particular, the multiplication is definable. Using methods from the model theory of p-adic integers, F. Grunewald, D. Segal and G. Smith showed that the local zeta function
$\zeta_{G,p}(s) = \sum_{k=0}^{\infty} a_{p^k}(G)\, p^{-ks}$
is a rational function in $p^{-s}$.
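For such groups, the global zeta function is the Dirichlet series of the subgroup counts, and (also by Grunewald, Segal and Smith) it factorizes as an Euler product of the local factors; in symbols:

```latex
\[
  \zeta_G(s) \;=\; \sum_{n=1}^{\infty} a_n(G)\, n^{-s}
             \;=\; \prod_{p \ \mathrm{prime}} \zeta_{G,p}(s),
  \qquad
  \zeta_{G,p}(s) \;=\; \sum_{k=0}^{\infty} a_{p^k}(G)\, p^{-ks}.
\]
```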
As an example, let $G$ be the discrete Heisenberg group. This group has a "presentation" with generators $x, y, z$ and relations
$[x, y] = z, \quad [x, z] = [y, z] = 1.$
Hence, elements of $G$ can be represented as triples $(a, b, c)$ of integers with group operation given by
$(a, b, c) \cdot (a', b', c') = (a + a',\, b + b',\, c + c' + a b').$
To each finite index subgroup of , asso
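For orientation, the closed form that this computation yields for the discrete Heisenberg group is well known; the following is quoted from memory and is worth verifying against Grunewald, Segal and Smith's original paper:

```latex
\[
  \zeta_{G,p}(s)
  \;=\;
  \frac{\zeta_p(s)\,\zeta_p(s-1)\,\zeta_p(2s-2)\,\zeta_p(2s-3)}
       {\zeta_p(3s-1)},
  \qquad
  \zeta_p(s) \;=\; \frac{1}{1 - p^{-s}}.
\]
```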
|
https://en.wikipedia.org/wiki/Conditional%20convergence
|
In mathematics, a series or integral is said to be conditionally convergent if it converges, but it does not converge absolutely.
Definition
More precisely, a series of real numbers $\sum_{n=1}^{\infty} a_n$ is said to converge conditionally if
$\lim_{m \to \infty} \sum_{n=1}^{m} a_n$
exists (as a finite real number, i.e. not $\infty$ or $-\infty$), but
$\sum_{n=1}^{\infty} |a_n| = \infty.$
A classic example is the alternating harmonic series given by
$1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots = \sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n},$
which converges to $\ln 2$, but is not absolutely convergent (see Harmonic series).
Bernhard Riemann proved that a conditionally convergent series may be rearranged to converge to any value at all, including ∞ or −∞; see Riemann series theorem. The Lévy–Steinitz theorem identifies the set of values to which a series of terms in Rn can converge.
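A hedged computational sketch of Riemann's rearrangement idea: greedily interleave the positive and negative terms of the alternating harmonic series so the partial sums chase an arbitrary target (0.5 here, instead of the natural sum ln 2 ≈ 0.693; the term cutoffs are illustrative):

```python
# Greedy rearrangement: take positive terms while the running total is
# below the target, negative terms while it is above. This is the
# standard construction behind the Riemann series theorem.
def rearranged_partial_sum(target, num_terms):
    pos = iter(1.0 / n for n in range(1, 10**7, 2))    # 1, 1/3, 1/5, ...
    neg = iter(-1.0 / n for n in range(2, 10**7, 2))   # -1/2, -1/4, ...
    total, taken = 0.0, 0
    while taken < num_terms:
        total += next(pos) if total <= target else next(neg)
        taken += 1
    return total

print(rearranged_partial_sum(0.5, 100000))  # ~ 0.5, not ln 2
```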
A typical conditionally convergent integral is that of $\sin(x^2)$ on the non-negative real axis, $\int_0^{\infty} \sin(x^2)\, dx$ (see Fresnel integral).
See also
Absolute convergence
Unconditional convergence
References
Walter Rudin, Principles of Mathematical Analysis (McGraw-Hill: New York, 1964).
Mathematical series
Integral calculus
Convergence (mathematics)
Summability theory
|
https://en.wikipedia.org/wiki/Music%20from%20The%20Body
|
Music from The Body is the soundtrack album to Roy Battersby's 1970 documentary film The Body, about human biology, narrated by Vanessa Redgrave and Frank Finlay.
History
The music was composed in collaboration between Pink Floyd member Roger Waters and Ron Geesin, in the same year they worked together on Atom Heart Mother. It employs biomusic, including, on the first track, sounds made by the human body (slaps, breathing, laughing, whispering, flatulence, etc.), in addition to more traditional guitar, piano and stringed instruments. The album's final track, "Give Birth to a Smile", features all four members of Pink Floyd, plus Geesin on piano, although David Gilmour, Nick Mason and Richard Wright are uncredited.
The child heard on the opening track is Ron's son Joe Geesin.
The LP, being a complete re-recording of the score, features a different track listing from the original film soundtrack; a three-sided acetate of the full version does exist. The cover of the album features a Transparent Anatomical Manikin (TAM).
Waters did not release another album outside of Pink Floyd until 1984's The Pros and Cons of Hitch Hiking.
Track listing
All songs written by Ron Geesin, except where noted:
Side One
"Our Song" (Geesin/Waters) – 1:24
"Sea Shell and Stone" (Waters) – 2:17
"Red Stuff Writhe" – 1:11
"A Gentle Breeze Blew Through Life" – 1:19
"Lick Your Partners" – 0:35
"Bridge Passage for Three Plastic Teeth" – 0:35
"Chain of Life" (Waters) – 3:59
"The Womb Bit" (Geesin/Waters) – 2:0
|
https://en.wikipedia.org/wiki/Harold%20W.%20Kuhn
|
Harold William Kuhn (July 29, 1925 – July 2, 2014) was an American mathematician who studied game theory. He won the 1980 John von Neumann Theory Prize along with David Gale and Albert W. Tucker. He was Professor Emeritus of Mathematics at Princeton University, and is known for the Karush–Kuhn–Tucker conditions, for Kuhn's theorem, for developing Kuhn poker, and for his description of the Hungarian method for the assignment problem. More recently, however, a paper by Carl Gustav Jacobi, published posthumously in 1890 in Latin, has been discovered that anticipates the Hungarian algorithm by many decades.
Life
Kuhn was born in Santa Monica in 1925. He is known for his association with John Forbes Nash, as a fellow graduate student, a lifelong friend and colleague, and a key figure in getting Nash the attention of the Nobel Prize committee that led to Nash's 1994 Nobel Prize in Economics. Kuhn and Nash both had long associations and collaborations with Albert W. Tucker, who was Nash's dissertation advisor. Kuhn co-edited The Essential John Nash, and is credited as the mathematics consultant in the 2001 movie adaptation of Nash's life, A Beautiful Mind.
Harold Kuhn served as the third president of the Society for Industrial and Applied Mathematics (SIAM). He was elected to the 2002 class of Fellows of the Institute for Operations Research and the Management Sciences.
In 1949, he married Estelle Henkin, sister of logician Leon Henkin. His oldest son was oral historian Clifford
|
https://en.wikipedia.org/wiki/California%20Academy%20of%20Mathematics%20and%20Science
|
The California Academy of Mathematics and Science (CAMS) is a public magnet high school in Carson, California, United States focusing on science and mathematics. Its California API scores are fourth-highest in the state.
Located on the campus of California State University, Dominguez Hills, CAMS shares many facilities with the university, including the gymnasium, the student union, the tennis courts, the pool, the library and a few of the parking lots. It is a National "No Child Left Behind" Blue Ribbon (2011) and California Distinguished school. The No Child Left Behind blue ribbon was presented to only 32 public schools nationwide. In its list of the top 1,200 high schools in the USA, Newsweek placed CAMS in the top 4%, at number 281, in 2006.
In December 2007, Newsweek released the results of a two-year study to determine the 100 best high schools in the United States of America. Out of the 18,000+ schools reviewed, CAMS made it into the top 100 as number 21. As of August 24, 2016, CAMS moved in ranking to the 100th best high school in the nation. In California, CAMS is ranked 10th in the state.
According to U.S. News & World Report, as of November 2019 CAMS is rated as the 46th best high school in the nation, and the 5th best in California. It also ranks as the best magnet high school in California.
Unlike similar schools such as the Illinois Mathematics and Science Academy and the North Carolina School of Science and Mathematics, CAMS is non-residential, drawing its st
|
https://en.wikipedia.org/wiki/Comodule
|
In mathematics, a comodule or corepresentation is a concept dual to a module. The definition of a comodule over a coalgebra is formed by dualizing the definition of a module over an associative algebra.
Formal definition
Let K be a field, and C be a coalgebra over K. A (right) comodule over C is a K-vector space M together with a linear map
$\rho : M \to M \otimes C$
such that
$(\rho \otimes \mathrm{id}_C) \circ \rho = (\mathrm{id}_M \otimes \Delta) \circ \rho$,
$(\mathrm{id}_M \otimes \varepsilon) \circ \rho = \mathrm{id}_M$,
where Δ is the comultiplication for C, and ε is the counit.
Note that in the second rule we have identified $M \otimes K$ with $M$.
Examples
A coalgebra is a comodule over itself.
If M is a finite-dimensional module over a finite-dimensional K-algebra A, then the set of linear functions from A to K forms a coalgebra, and the set of linear functions from M to K forms a comodule over that coalgebra.
A graded vector space V can be made into a comodule. Let I be the index set for the graded vector space, and let $C_I$ be the vector space with basis $e_i$ for $i \in I$. We turn $C_I$ into a coalgebra and V into a $C_I$-comodule, as follows:
Let the comultiplication on $C_I$ be given by $\Delta(e_i) = e_i \otimes e_i$.
Let the counit on $C_I$ be given by $\varepsilon(e_i) = 1$.
Let the map $\rho$ on V be given by $\rho(v) = \sum_i v_i \otimes e_i$, where $v_i$ is the i-th homogeneous piece of $v$.
In algebraic topology
One important result in algebraic topology is the fact that homology over the dual Steenrod algebra forms a comodule. This comes from the fact that the Steenrod algebra $\mathcal{A}$ has a canonical action on the cohomology $H^*(X; \mathbb{F}_p)$ of a space $X$. When we dualize, passing to the dual Steenrod algebra $\mathcal{A}_*$, this gives the homology $H_*(X; \mathbb{F}_p)$ the structure of an $\mathcal{A}_*$-comodule. This result extends to other cohomology theories as well, such as com
|
https://en.wikipedia.org/wiki/Homotopy%20sphere
|
In algebraic topology, a branch of mathematics, a homotopy sphere is an n-manifold that is homotopy equivalent to the n-sphere. It thus has the same homotopy groups and the same homology groups as the n-sphere, and so every homotopy sphere is necessarily a homology sphere.
The topological generalized Poincaré conjecture is that any n-dimensional homotopy sphere is homeomorphic to the n-sphere; it was solved by Stephen Smale in dimensions five and higher, by Michael Freedman in dimension 4, and for dimension 3 (the original Poincaré conjecture) by Grigori Perelman in 2005.
The resolution of the smooth Poincaré conjecture in dimensions 5 and larger implies that homotopy spheres in those dimensions are precisely exotic spheres. It is still an open question () whether or not there are non-trivial smooth homotopy spheres in dimension 4.
See also
Homology sphere
Homotopy groups of spheres
Poincaré conjecture
References
External links
Homotopy theory
Topological spaces
|
https://en.wikipedia.org/wiki/Relational%20operator
|
In computer science, a relational operator is a programming language construct or operator that tests or defines some kind of relation between two entities. These include numerical equality (e.g., 5 = 5) and inequalities (e.g., 4 ≥ 3).
In programming languages that include a distinct boolean data type in their type system, like Pascal, Ada, or Java, these operators usually evaluate to true or false, depending on whether the conditional relationship between the two operands holds or not. In languages such as C, relational operators return the integers 0 or 1, where 0 stands for false and any non-zero value stands for true.
An expression created using a relational operator forms what is termed a relational expression or a condition. Relational operators can be seen as special cases of logical predicates.
Equality
Usage
Equality is used in many programming language constructs and data types. It is used to test if an element already exists in a set, or to access a value through a key. It is used in switch statements to dispatch the control flow to the correct branch, and during the unification process in logic programming.
One possible meaning of equality is that "if a equals b, then either a or b can be used interchangeably in any context without noticing any difference." But this statement does not necessarily hold, particularly when taking into account mutability together with content equality.
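A short Python illustration of this caveat, contrasting content equality with identity (location equality) and showing how mutation breaks interchangeability; the variable names are arbitrary:

```python
# Content equality (==) versus location/identity (is) in Python,
# and how mutability breaks interchangeability of "equal" values.
a = [1, 2, 3]
b = [1, 2, 3]   # a distinct object with equal contents
c = a           # another name for the *same* object

print(a == b)   # True  - same contents (content equality)
print(a is b)   # False - different objects (location equality)
print(a is c)   # True  - same object

b.append(4)     # mutating b does not affect a, despite a == b having held
print(a == b)   # False - the "equal" values were not interchangeable
```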
Location equality vs. content equality
Sometimes, particularly in object-oriented progra
|
https://en.wikipedia.org/wiki/Pr%C3%BCfer%20rank
|
In mathematics, especially in the area of algebra known as group theory, the Prüfer rank of a pro-p group measures the size of a group in terms of the ranks of its elementary abelian sections. The rank is well behaved and helps to define analytic pro-p-groups. The term is named after Heinz Prüfer.
Definition
The Prüfer rank of a pro-p-group $G$ is
$\sup_{H \leq G} d(H),$
where the supremum is taken over the closed subgroups $H$ of $G$, and where $d(H)$ is the rank of the abelian group
$H / \Phi(H)$,
where $\Phi(H)$ is the Frattini subgroup of $H$.
As the Frattini subgroup of $H$ can be thought of as the group of non-generating elements of $H$, it can be seen that $d(H)$ will be equal to the size of any minimal generating set of $H$.
Properties
Those profinite groups with finite Prüfer rank are more amenable to analysis.
Specifically in the case of finitely generated pro-p groups, having finite Prüfer rank is equivalent to having an open normal subgroup that is powerful. In turn these are precisely the class of pro-p groups that are p-adic analytic – that is, groups that can be imbued with a p-adic manifold structure.
References
Infinite group theory
|