https://en.wikipedia.org/wiki/Potez%20630
|
The Potez 630 and its derivatives were a family of twin-engined, multirole aircraft developed for the French Air Force in the late 1930s. The design was a contemporary of the British Bristol Blenheim (which was larger and designed purely as a bomber) and the German Messerschmitt Bf 110 (which was designed purely as a fighter).
The Potez 630 was in use by several operators during the Second World War. Following the Battle of France, both the Vichy French Air Force and Free French Air Forces used the type; a number of captured aircraft were operated by several air wings of the Axis powers. After the end of the conflict in 1945, a handful of aircraft were used for training purposes for some time.
Development
Origins
On 31 October 1934, the French Ministry of Air issued a specification for a heavy fighter. The specification demanded the aircraft be capable of performing three principal roles: fighter direction, in which it was required to lead formations of single-engine fighters with sufficient maneuverability; day attack, in which the type was also to escort friendly close air support and bomber aircraft; and nightfighter operations. Specified performance details included a maximum speed of 450 km/h at 4,000 meters, a 300 km/h cruising speed, and an endurance of at least four hours. Armament requirements included two fixed forward-firing 20 mm cannons and a single machine gun to the rear for self-defence. The sought aircraft was also required to accommodate two/three seats a
|
https://en.wikipedia.org/wiki/Vector%20projection
|
The vector projection (also known as the vector component or vector resolution) of a vector a on (or onto) a nonzero vector b is the orthogonal projection of a onto a straight line parallel to b.
The projection of a onto b is often written as proj_b a or a∥b.
The vector component or vector resolute of a perpendicular to b, sometimes also called the vector rejection of a from b (denoted oproj_b a or a⊥b), is the orthogonal projection of a onto the plane (or, in general, hyperplane) that is orthogonal to b. Since both the projection and the rejection are vectors, and their sum is equal to a, the rejection of a from b is given by:
oproj_b a = a − proj_b a
To simplify notation, this article defines a₁ := proj_b a and a₂ := oproj_b a.
Thus, the vector a₁ is parallel to b, the vector a₂ is orthogonal to b, and a = a₁ + a₂.
The projection of a onto b can be decomposed into a direction and a scalar magnitude by writing it as
a₁ = s b̂
where s is a scalar, called the scalar projection of a onto b, and b̂ is the unit vector in the direction of b. The scalar projection is defined as
s = ‖a‖ cos θ = a ⋅ b̂
where the operator ⋅ denotes a dot product, ‖a‖ is the length of a, and θ is the angle between a and b.
The scalar projection is equal in absolute value to the length of the vector projection, with a minus sign if the direction of the projection is opposite to the direction of b, that is, if the angle between the vectors is more than 90 degrees.
The vector projection can be calculated using the dot product of a and b as:
proj_b a = (a ⋅ b / ‖b‖²) b
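As a concrete check of these formulas, here is a short NumPy sketch (the function names `project` and `reject` are my own, not a library API):

```python
import numpy as np

def project(a, b):
    """Vector projection of a onto b: (a . b / ||b||^2) b."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return (np.dot(a, b) / np.dot(b, b)) * b

def reject(a, b):
    """Vector rejection of a from b: the component of a orthogonal to b."""
    a = np.asarray(a, float)
    return a - project(a, b)

a, b = np.array([3.0, 4.0]), np.array([2.0, 0.0])
p, r = project(a, b), reject(a, b)
print(p)             # [3. 0.]
print(r)             # [0. 4.]
print(np.dot(r, b))  # 0.0 -- the rejection is orthogonal to b
```

Note that projection and rejection always sum back to the original vector a, as the decomposition above requires.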
Notation
This article uses the convention that vectors are denoted in a bold font (e.g. ), and scalars are written in normal font (e.g. a1
|
https://en.wikipedia.org/wiki/Scalar%20projection
|
In mathematics, the scalar projection of a vector a on (or onto) a vector b, also known as the scalar resolute of a in the direction of b, is given by:
s = ‖a‖ cos θ = a ⋅ b̂,
where the operator ⋅ denotes a dot product, b̂ is the unit vector in the direction of b, ‖a‖ is the length of a, and θ is the angle between a and b.
The term scalar component sometimes refers to the scalar projection, since, in Cartesian coordinates, the components of a vector are the scalar projections in the directions of the coordinate axes.
The scalar projection is a scalar, equal to the length of the orthogonal projection of a on b, with a negative sign if the projection has an opposite direction with respect to b.
Multiplying the scalar projection of a on b by b̂ converts it into the above-mentioned orthogonal projection, also called the vector projection of a on b.
Definition based on angle θ
If the angle θ between a and b is known, the scalar projection of a on b can be computed using
s = ‖a‖ cos θ
(equal to ‖a₁‖ in the figure, where a₁ is the vector projection of a on b)
The formula above can be inverted to obtain the angle, θ.
Definition in terms of a and b
When θ is not known, the cosine of θ can be computed in terms of a and b, by the following property of the dot product:
a ⋅ b = ‖a‖ ‖b‖ cos θ
By this property, the definition of the scalar projection becomes:
s = a ⋅ b / ‖b‖
Properties
The scalar projection has a negative sign if 90° < θ ≤ 180°. It coincides with the length of the corresponding vector projection if the angle is smaller than 90°. More exactly, if the vector projection is denoted a₁ and its length ‖a₁‖:
s = ‖a₁‖ if 0° ≤ θ ≤ 90°,
s = −‖a₁‖ if 90° < θ ≤ 180°.
See also
Scalar product
Cross product
V
|
https://en.wikipedia.org/wiki/Classifier
|
Classifier may refer to:
Classifier (linguistics), or measure word, especially in East Asian languages
Classifier handshape, in sign languages
Classifier (UML), in software engineering
Classification rule, in statistical classification, e.g.:
Hierarchical classifier
Linear classifier
Deductive classifier
Subobject classifier, in category theory
An air classifier or similar machine for sorting materials
Classifier (machine learning)
See also
Finite-state machine#Classifiers
Classification (disambiguation)
Classified (disambiguation)
|
https://en.wikipedia.org/wiki/Mixed-data%20sampling
|
Econometric models involving data sampled at different frequencies are of general interest. Mixed-data sampling (MIDAS) is an econometric regression developed by Eric Ghysels with several co-authors. There is now a substantial literature on MIDAS regressions and their applications, including Ghysels, Santa-Clara and Valkanov (2006), Ghysels, Sinko and Valkanov, Andreou, Ghysels and Kourtellos (2010) and Andreou, Ghysels and Kourtellos (2013).
MIDAS Regressions
A MIDAS regression is a direct forecasting tool which can relate future low-frequency data with current and lagged high-frequency indicators, and yield different forecasting models for each forecast horizon. It can flexibly deal with data sampled at different frequencies and provide a direct forecast of the low-frequency variable. It incorporates each individual high-frequency observation in the regression, which avoids both the loss of potentially useful information and mis-specification.
A simple regression example has the independent variable appearing at a higher frequency than the dependent variable:
where y is the dependent variable, x is the regressor, m denotes the frequency – for instance if y is yearly, x is quarterly – ε is the disturbance and B(·; θ) is a lag distribution, for instance the Beta function or the Almon lag.
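A minimal sketch of the weighting idea in Python, using a Beta-density lag polynomial to collapse the high-frequency observations into a single regressor (the parameterization and the names `beta_weights` and `midas_term` are illustrative, not a fixed convention from the literature):

```python
import numpy as np

def beta_weights(K, theta1=1.0, theta2=5.0):
    """Normalized Beta(theta1, theta2) lag weights over K high-frequency lags."""
    k = (np.arange(1, K + 1) - 0.5) / K           # lag positions mapped into (0, 1)
    w = k ** (theta1 - 1) * (1 - k) ** (theta2 - 1)
    return w / w.sum()                            # weights sum to one

def midas_term(x_high, K, theta1=1.0, theta2=5.0):
    """Collapse the last K high-frequency observations into one regressor;
    the most recent observation receives weight w[0]."""
    w = beta_weights(K, theta1, theta2)
    return float(np.dot(w, np.asarray(x_high, float)[::-1][:K]))

w = beta_weights(4)
print(w.sum())  # 1.0
```

With θ1 = 1 and θ2 > 1 the weights decline with the lag, so recent high-frequency data dominate — one common shape for such lag polynomials; the weighted term would then enter an ordinary regression for the low-frequency variable.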
The regression models can be viewed in some cases as substitutes for the Kalman filter when applied in the context of mixed frequency data. Bai, Ghysels and Wright (2013)
|
https://en.wikipedia.org/wiki/Demographics%20of%20Serbia
|
Demographic features of the population of Serbia include vital statistics, ethnicity, religious affiliations, education level, health of the populace, and other aspects of the population.
History
Censuses in Serbia ordinarily take place every 10 years, organized by the Statistical Office of the Republic of Serbia. The Principality of Serbia conducted the first population census in 1834; subsequent censuses were conducted in 1841, 1843, 1846, 1850, 1854, 1859, 1863, 1866 and 1874. During the era of the Kingdom of Serbia, six censuses were conducted, in 1884, 1890, 1895, 1900, 1905 and 1910. During the Kingdom of Yugoslavia, censuses were conducted in 1921 and 1931; the census planned for 1941 was never conducted due to the outbreak of World War II. Socialist Yugoslavia conducted censuses in 1948, 1953, 1961, 1971, 1981, and 1991. The two most recent censuses were held in 2011 and 2022.
The years since the first census in 1834 saw frequent changes to Serbia's borders, first amidst the disintegration of the Ottoman Empire and Austria-Hungary, then the formation and later disintegration of Yugoslavia and, finally, the partially recognized 2008 independence of Kosovo, all of which affected the territorial scope in which these censuses have been conducted.
Total fertility rate 1860–1949
The total fertility rate is the number of children born per woman. It is based on fairly good data for the entire period. Sources: Our World In Data and Gapminder Foundation.
Vital statist
|
https://en.wikipedia.org/wiki/Cohort%20%28statistics%29
|
In statistics, epidemiology, marketing and demography, a cohort is a group of subjects who share a defining characteristic (typically subjects who experienced a common event in a selected time period, such as birth or graduation).
Cohort data can often be more advantageous to demographers than period data. Because cohort data is honed to a specific time period, it is usually more accurate; it can be tuned to retrieve custom data for a specific study.
In addition, cohort data is not affected by tempo effects, unlike period data. However, cohort data can be disadvantageous in that it can take a long time to collect the data necessary for a cohort study. Another disadvantage is that cohort studies can be extremely costly to carry out: since the study goes on for a long period of time, demographers often require substantial funds to sustain it.
Demography often contrasts cohort perspectives and period perspectives. For instance, the total cohort fertility rate is an index of the average completed family size for cohorts of women, but since it can only be known for women who have finished child-bearing, it cannot be measured for currently fertile women. It can be calculated as the sum of the cohort's age-specific fertility rates that obtain as it ages through time. In contrast, the total period fertility rate uses current age-specific fertility rates to calculate the completed family size for a notional woman,
|
https://en.wikipedia.org/wiki/Anthropoid
|
Anthropoid means 'ape/human feature' and may refer to:
Simian, monkeys and apes (anthropoids, or suborder Anthropoidea, in earlier classifications)
Anthropoid apes, apes that are closely related to humans (e.g., former family Pongidae and sometimes also Hylobatidae and their extinct relatives)
Anthropoides, a genus of cranes
Operation Anthropoid, the codename for the assassination of Reinhard Heydrich, SS-Obergruppenführer and Reichsprotektor of Bohemia and Moravia
Operation Anthropoid Memorial, Libeň, Prague, Czech Republic
Anthropoid (film), a 2016 film based on Operation Anthropoid
In pelvimetry, one of four types of human female pelvis
Anthropoid animals, fictional creatures in the Japanese visual novel Wanko to Kurasō
Anthropoid robots, more commonly referred to as androids, i.e. human-like robots
See also
Anthropology (disambiguation)
Anthropod (disambiguation)
Humanoid (disambiguation)
|
https://en.wikipedia.org/wiki/Conexant
|
Conexant Systems, Inc. was an American-based software developer and fabless semiconductor company that developed technology for voice and audio processing, imaging and modems. The company began as a division of Rockwell International, before being spun off as a public company. Conexant itself then spun off several business units, creating independent public companies which included Skyworks Solutions and Mindspeed Technologies.
The company was acquired by computing interface technology company Synaptics, Inc. in July 2017.
History
In 1996, Rockwell International Corporation incorporated its semiconductor division as Rockwell Semiconductor Systems, Inc. On January 4, 1999, Rockwell spun off Conexant Systems, Inc. as a public company, listed on the NASDAQ under the symbol CNXT. At that time, Conexant became the world's largest standalone communications-IC company. Dwight W. Decker was its first chief executive officer and chairman of its board of directors. The company was based in Newport Beach, California.
In the early 2000s, Conexant spun off several standalone technology businesses to create public companies.
In March 2002, Conexant entered into a joint venture agreement with The Carlyle Group to share ownership of its wafer fabrication plant, called Jazz Semiconductor.
In June 2002, Conexant spun off its wireless communications division, which merged immediately following the spinoff with Massachusetts-based chip manufacturer Alpha Indus
|
https://en.wikipedia.org/wiki/Order%20of%20a%20kernel
|
In statistics, the order of a kernel is the degree of the first non-zero moment of a kernel.
Definitions
Two major definitions of the order of a kernel appear in the literature:
Definition 1
Let ℓ ≥ 1 be an integer. Then, K : ℝ → ℝ is a kernel of order ℓ if the functions u ↦ u^j K(u), j = 0, 1, …, ℓ, are integrable and satisfy
∫ K(u) du = 1,   ∫ u^j K(u) du = 0 for j = 1, …, ℓ − 1,   and   ∫ u^ℓ K(u) du ≠ 0.
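Under this definition, a kernel's order can be checked numerically. For instance, the Epanechnikov kernel K(u) = ¾(1 − u²) on [−1, 1] has order 2: its zeroth moment is 1, its first moment vanishes by symmetry, and its second moment is 1/5 ≠ 0. A sketch using a midpoint-rule quadrature:

```python
import numpy as np

# Midpoint-rule moments of the Epanechnikov kernel on [-1, 1].
N = 200_000
dx = 2.0 / N
u = -1.0 + dx * (np.arange(N) + 0.5)   # midpoints of the N subintervals
K = 0.75 * (1.0 - u ** 2)

# j-th moment: integral of u^j K(u) du, for j = 0, 1, 2.
moments = [float(np.sum(u ** j * K) * dx) for j in range(3)]
print(moments)  # approximately [1.0, 0.0, 0.2]
```

Since the first non-vanishing moment beyond the zeroth is the second (equal to 1/5), the kernel has order 2.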
Definition 2
References
|
https://en.wikipedia.org/wiki/Reduced%20form
|
In statistics, and particularly in econometrics, the reduced form of a system of equations is the result of solving the system for the endogenous variables. This gives the latter as functions of the exogenous variables, if any. In econometrics, the equations of a structural form model are estimated in their theoretically given form, while an alternative approach to estimation is to first solve the theoretical equations for the endogenous variables to obtain reduced form equations, and then to estimate the reduced form equations.
Let Y be the vector of the variables to be explained (endogenous variables) by a statistical model and X be the vector of explanatory (exogenous) variables. In addition let ε be a vector of error terms. Then the general expression of a structural form is f(Y, X, ε) = 0, where f is a function, possibly from vectors to vectors in the case of a multiple-equation model. The reduced form of this model is given by Y = g(X, ε), with g a function.
Structural and reduced forms
Exogenous variables are variables which are not determined by the system. If we assume that demand is influenced not only by price, but also by an exogenous variable, Z, we can consider the structural supply and demand model
supply: Q = a_S + b_S P + u_S
demand: Q = a_D + b_D P + c Z + u_D
where the u terms are random errors (deviations of the quantities supplied and demanded from those implied by the rest of each equation). By solving for the unknowns (endogenous variables) P and Q, this structural model can be rewritten in the reduced form:
P = (a_D − a_S + c Z + u_D − u_S) / (b_S − b_D),
with Q obtained by substituting this expression for P back into either structural equation, so that both endogenous variables are functions of Z and the errors only.
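The algebra can be sketched in a few lines of Python (the coefficient names a_S, b_S, a_D, b_D, c are illustrative, as above):

```python
def reduced_form(a_S, b_S, a_D, b_D, c, Z, u_S=0.0, u_D=0.0):
    """Solve the structural system
         supply: Q = a_S + b_S * P + u_S
         demand: Q = a_D + b_D * P + c * Z + u_D
       for the endogenous variables P and Q."""
    # Equate the two expressions for Q and solve for P:
    P = (a_D - a_S + c * Z + u_D - u_S) / (b_S - b_D)
    # Back-substitute into the supply equation to get Q:
    Q = a_S + b_S * P + u_S
    return P, Q

P, Q = reduced_form(a_S=1.0, b_S=2.0, a_D=10.0, b_D=-1.0, c=0.5, Z=4.0)
print(P, Q)
```

Both returned values depend only on the exogenous Z, the parameters, and the errors, which is exactly what "reduced form" means; the demand equation holds at the solution as well.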
|
https://en.wikipedia.org/wiki/Tipp-Ex
|
Tipp-Ex is a brand of correction fluid and other related products that is popular throughout Europe. It was also the name of the German company (Tipp-Ex GmbH & Co. KG) that produced the products in the Tipp-Ex line. While Tipp-Ex is a trademark name for correction products, in some countries it has become a genericised trademark: to tippex or to tippex out means to erase, either generally or with correction fluid.
History
Tipp-Ex correction paper was invented by Wolfgang Dabisch from Eltville, West Germany, who filed a patent in 1958 on Colored film for the correction of typing errors (German: Tippfehler). He subsequently founded a company of the same name. Shortly after that a Tipp-Ex Sales & Distribution company (Tipp-Ex Vertrieb GmbH & Co. KG) was founded in Frankfurt by Otto Carls. This company still exists under the name of Tipp-Ex GmbH & Co. KG close to Frankfurt.
Tipp-Ex became a registered trademark with the German patent office in 1987.
Earlier, in 1951, Bette Nesmith Graham invented the first correction fluid in her kitchen and began marketing the product in 1956 as Mistake Out. Tipp-Ex GmbH only started to produce white correction fluid in 1965 under the brand Tipp-Ex, but also as C-fluid.
As a result of the invention of Tipp-Ex, it became possible to erase a typographical error made on a typewriter. The typewriter would be backspaced to the letter that was to be changed, the correction paper would be placed behind the ribbon, and the mistyped letter would be r
|
https://en.wikipedia.org/wiki/Autoregressive%20integrated%20moving%20average
|
In statistics and econometrics, and in particular in time series analysis, an autoregressive integrated moving average (ARIMA) model is a generalization of an autoregressive moving average (ARMA) model. Both of these models are fitted to time series data either to better understand the data or to forecast future points in the series. ARIMA models are applied in cases where the data show evidence of non-stationarity in the mean (but not in the variance/autocovariance); an initial differencing step (the "integrated" part of the model) can be applied one or more times to eliminate the non-stationarity of the mean function (i.e., the trend). When a time series exhibits seasonality, seasonal differencing can likewise be applied to eliminate the seasonal component. Since the ARMA model, according to Wold's decomposition theorem, is theoretically sufficient to describe a regular (a.k.a. purely nondeterministic) wide-sense stationary time series, we are motivated to make a non-stationary time series stationary, e.g., by differencing, before using the ARMA model. Note that if the time series contains a predictable sub-process (a.k.a. a pure sine or complex-valued exponential process), the predictable component is treated in the ARIMA framework as a non-zero-mean but periodic (i.e., seasonal) component, so that it is eliminated by seasonal differencing.
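The effect of the differencing step on a trending series can be illustrated with a toy example (a minimal sketch using the standard library, not statsmodels' ARIMA API):

```python
import random

random.seed(0)
# A non-stationary series: deterministic trend 0.5 * t plus white noise,
# so the mean grows over time.
y = [0.5 * t + random.gauss(0.0, 1.0) for t in range(500)]

# One round of differencing, the "I" step: dy_t = y_t - y_{t-1}.
# The differenced series has a roughly constant mean of about 0.5.
dy = [y[t] - y[t - 1] for t in range(1, len(y))]

half = len(dy) // 2
mean_first = sum(dy[:half]) / half
mean_second = sum(dy[half:]) / (len(dy) - half)
print(round(mean_first, 2), round(mean_second, 2))
```

Both halves of the differenced series have nearly the same mean, unlike the original series, whose mean keeps drifting upward; the differenced series could then be modeled with an ARMA model.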
The autoregressive (AR) part of ARIMA indicates that the evolving variable of interest is regressed on
|
https://en.wikipedia.org/wiki/Bayesian%20search%20theory
|
Bayesian search theory is the application of Bayesian statistics to the search for lost objects. It has been used several times to find lost sea vessels, for example USS Scorpion, and has played a key role in the recovery of the flight recorders in the Air France Flight 447 disaster of 2009. It has also been used in the attempts to locate the remains of Malaysia Airlines Flight 370.
Procedure
The usual procedure is as follows:
Formulate as many reasonable hypotheses as possible about what may have happened to the object.
For each hypothesis, construct a probability density function for the location of the object.
Construct a function giving the probability of actually finding an object in location X when searching there if it really is in location X. In an ocean search, this is usually a function of water depth — in shallow water chances of finding an object are good if the search is in the right place. In deep water chances are reduced.
Combine the above information coherently to produce an overall probability density map. (Usually this simply means multiplying the two functions together.) This gives the probability of finding the object by looking in location X, for all possible locations X. (This can be visualized as a contour map of probability.)
Construct a search path which starts at the point of highest probability and 'scans' over high probability areas, then intermediate probabilities, and finally low probability areas.
Revise all the probabilities continuousl
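The core Bayesian update in the procedure above can be sketched in a few lines (the grid and the detection probabilities are toy numbers, not from any real search):

```python
import numpy as np

# A grid of cells with a prior from the hypotheses, and a per-cell detection
# probability q (high in shallow water, low in deep water).
prior = np.array([0.1, 0.4, 0.3, 0.2])
q     = np.array([0.9, 0.5, 0.8, 0.3])

# Search the cell with the highest probability of actually *finding* the object,
# i.e. the product prior * q:
best = int(np.argmax(prior * q))

# If the search there fails, update by Bayes' rule: the searched cell keeps
# prior * (1 - q), the other cells keep their prior, then renormalize.
posterior = prior.copy()
posterior[best] *= 1.0 - q[best]
posterior /= posterior.sum()
print(best, posterior)
```

After the unsuccessful search, the probability mass in the searched cell drops while every other cell's probability rises slightly; iterating this update over successive searches is exactly the "revise continuously" step.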
|
https://en.wikipedia.org/wiki/Desorption
|
Desorption is the physical process where adsorbed atoms or molecules are released from a surface into the surrounding vacuum or fluid. This occurs when a molecule gains enough energy to overcome the activation barrier and the binding energy that keep it attached to the surface.
Desorption is the reverse of the process of adsorption, which differs from absorption in that adsorption refers to substances bound to a surface, rather than absorbed into the bulk.
Desorption can occur through any of several processes, or a combination of them: it may result from heat (thermal energy); from incident light such as infrared, visible, or ultraviolet photons; or from an incident beam of energetic particles such as electrons. It may also occur following chemical reactions such as oxidation or reduction in an electrochemical cell, or after a chemical reaction of adsorbed compounds in which the surface may act as a catalyst.
Desorption mechanisms
Depending on the nature of the adsorbate-to-surface bond, there are a multitude of mechanisms for desorption. The surface bond of an adsorbate can be cleaved thermally, through chemical reactions, or by radiation, all of which may result in desorption of the species.
Thermal desorption
Thermal desorption is the process by which an adsorbate is heated and this induces desorption of atoms or molecules from the surface. The first use of thermal desorption was by LeRoy Apker in 1948. It is one of the most frequently used modes of desorption, and can be
|
https://en.wikipedia.org/wiki/Witt%20algebra
|
In mathematics, the complex Witt algebra, named after Ernst Witt, is the Lie algebra of meromorphic vector fields defined on the Riemann sphere that are holomorphic except at two fixed points. It is also the complexification of the Lie algebra of polynomial vector fields on a circle, and the Lie algebra of derivations of the ring C[z,z−1].
There are some related Lie algebras defined over finite fields, that are also called Witt algebras.
The complex Witt algebra was first defined by Élie Cartan (1909), and its analogues over finite fields were studied by Witt in the 1930s.
Basis
A basis for the Witt algebra is given by the vector fields L_n = −z^{n+1} ∂/∂z, for n in ℤ.
The Lie bracket of two basis vector fields is given by
[L_m, L_n] = (m − n) L_{m+n}.
This algebra has a central extension called the Virasoro algebra that is important in two-dimensional conformal field theory and string theory.
Note that by restricting n to 1,0,-1, one gets a subalgebra. Taken over the field of complex numbers, this is just the Lie algebra of the Lorentz group . Over the reals, it is the algebra sl(2,R) = su(1,1).
Conversely, su(1,1) suffices to reconstruct the original algebra in a presentation.
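The bracket relation [L_m, L_n] = (m − n) L_{m+n} can be checked directly from the vector-field realization L_n = −z^{n+1} d/dz, since for vector fields f d/dz and g d/dz the bracket is (f g′ − g f′) d/dz. A small numerical sketch (sampling a few values of z rather than doing symbolic algebra):

```python
def f(n, z):
    """Coefficient function of L_n = -z**(n+1) * d/dz."""
    return -z ** (n + 1)

def fprime(n, z):
    """Its z-derivative: -(n+1) * z**n."""
    return -(n + 1) * z ** n

def bracket_coeff(m, n, z):
    """Coefficient of [L_m, L_n] = (f_m f_n' - f_n f_m') d/dz."""
    return f(m, z) * fprime(n, z) - f(n, z) * fprime(m, z)

# Compare against (m - n) * f(m + n, z) at a few nonzero sample points.
err = max(
    abs(bracket_coeff(m, n, z) - (m - n) * f(m + n, z))
    for m in range(-1, 3) for n in range(-1, 3) for z in (0.5, 1.3, -2.0)
)
print(err < 1e-9)  # True
```

Restricting m and n to {−1, 0, 1} in this check shows the closure of the sl(2) subalgebra mentioned above: the bracket of any two of L_{−1}, L_0, L_1 stays in their span.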
Over finite fields
Over a field k of characteristic p>0, the Witt algebra is defined to be the Lie algebra of derivations of the ring
k[z]/(z^p)
The Witt algebra is spanned by Lm for −1≤ m ≤ p−2.
See also
Virasoro algebra
Heisenberg algebra
References
Élie Cartan, Les groupes de transformations continus, infinis, simples. Ann. Sc
|
https://en.wikipedia.org/wiki/Hankinson
|
Hankinson may refer to:
Hankinson (surname)
Hankinson, North Dakota, a city in Richland County, North Dakota, United States
Hankinson's equation, an equation for predicting the strength of wood
Hankinson-Moreau-Covenhoven House, a house located in Freehold, New Jersey, United States
Lake Hankinson, a lake within the catchment of the Waiau River, New Zealand
|
https://en.wikipedia.org/wiki/Wald%27s%20equation
|
In probability theory, Wald's equation, Wald's identity or Wald's lemma is an important identity that simplifies the calculation of the expected value of the sum of a random number of random quantities. In its simplest form, it relates the expectation of a sum of randomly many finite-mean, independent and identically distributed random variables to the expected number of terms in the sum and the random variables' common expectation under the condition that the number of terms in the sum is independent of the summands.
The equation is named after the mathematician Abraham Wald. An identity for the second moment is given by the Blackwell–Girshick equation.
Basic version
Let (X_n)_{n∈ℕ} be a sequence of real-valued, independent and identically distributed random variables and let N be a nonnegative integer-valued random variable that is independent of the sequence (X_n)_{n∈ℕ}. Suppose that N and the X_n have finite expectations. Then
E[X_1 + ⋯ + X_N] = E[N] E[X_1].
Example
Roll a six-sided die. Take the number shown on the die (call it N) and roll that number of six-sided dice to get the numbers X_1, …, X_N, and add up their values. By Wald's equation, the resulting value on average is
E[N] E[X_1] = 3.5 × 3.5 = 12.25.
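A quick Monte Carlo check of this dice example (an illustrative sketch using only the standard library):

```python
import random

random.seed(42)

def wald_trial():
    """One trial: roll a die to get N, then sum N further die rolls."""
    n = random.randint(1, 6)                            # N: the first die
    return sum(random.randint(1, 6) for _ in range(n))  # X_1 + ... + X_N

trials = 200_000
mean = sum(wald_trial() for _ in range(trials)) / trials
print(mean)  # close to E[N] * E[X_1] = 3.5 * 3.5 = 12.25
```

The empirical mean converges to 12.25, as Wald's equation predicts, even though the number of summands varies from trial to trial.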
General version
Let (X_n)_{n∈ℕ} be an infinite sequence of real-valued random variables and let N be a nonnegative integer-valued random variable.
Assume that:
1. X_1, X_2, … are all integrable (finite-mean) random variables,
2. E[X_n 1_{N ≥ n}] = E[X_n] P(N ≥ n) for every natural number n, and
3. the infinite series satisfies Σ_{n=1}^∞ E[|X_n| 1_{N ≥ n}] < ∞.
Then the random sums
S_N := Σ_{n=1}^N X_n  and  T_N := Σ_{n=1}^N E[X_n]
are integrable and E[S_N] = E[T_N].
If, in addition,
4. X_1, X_2, … all have the same expectation, and
5. N has
|
https://en.wikipedia.org/wiki/Germplasm
|
Germplasm are genetic resources such as seeds, tissues, and DNA sequences that are maintained for the purpose of animal and plant breeding, conservation efforts, agriculture, and other research uses. These resources may take the form of seed collections stored in seed banks, trees growing in nurseries, animal breeding lines maintained in animal breeding programs or gene banks. Germplasm collections can range from collections of wild species to elite, domesticated breeding lines that have undergone extensive human selection. Germplasm collection is important for the maintenance of biological diversity, food security, and conservation efforts.
In the United States, germplasm resources are regulated by the National Genetic Resources Program (NGRP), created by the U.S. Congress in 1990. In addition, the web server of the Germplasm Resources Information Network (GRIN) provides information about germplasm as it pertains to agricultural production.
Germplasm Regulation
Specifically for plants, there is the U.S. National Plant Germplasm System (NPGS) which holds > 450,000 accessions with 10,000 species of the 85 most commonly grown crops. Many accessions held are international species,
|
https://en.wikipedia.org/wiki/Fin%20field-effect%20transistor
|
A fin field-effect transistor (FinFET) is a multigate device, a MOSFET (metal–oxide–semiconductor field-effect transistor) built on a substrate where the gate is placed on two, three, or four sides of the channel or wrapped around the channel, forming a double- or even multi-gate structure. These devices have been given the generic name "FinFETs" because the source/drain region forms fins on the silicon surface. FinFET devices have significantly faster switching times and higher current density than planar CMOS (complementary metal–oxide–semiconductor) technology.
FinFET is a type of non-planar transistor, or "3D" transistor. It is the basis for modern nanoelectronic semiconductor device fabrication. Microchips utilizing FinFET gates first became commercialized in the first half of the 2010s, and became the dominant gate design at 14 nm, 10 nm and 7 nm process nodes.
It is common for a single FinFET transistor to contain several fins, arranged side by side and all covered by the same gate, that act electrically as one, to increase drive strength and performance.
History
After the MOSFET was first demonstrated by Mohamed Atalla and Dawon Kahng of Bell Labs in 1960, the concept of a double-gate thin-film transistor (TFT) was proposed by H. R. Farrah (Bendix Corporation) and R. F. Steinberg in 1967. A double-gate MOSFET was later proposed by Toshihiro Sekigawa of the Electrotechnical Laboratory (ETL) in a 1980 patent describing the planar XMOS transistor. Sekigawa fabricat
|
https://en.wikipedia.org/wiki/The%20First%20Generation%20of%20Postwar%20Writers
|
The First Generation of Postwar Writers is a classification in Modern Japanese literature used to group writers who appeared on the postwar literary scene between 1946 and 1947.
List of First Generation writers
Haniya Yutaka (埴谷雄高)
Nakamura Shin'ichirō (中村真一郎)
Noma Hiroshi (野間宏)
Shiina Rinzō (椎名麟三)
Takeda Taijun (武田泰淳)
Umezaki Haruo (梅崎春生)
Background of the Post-War Literature in Japan
At the beginning of the post-war period in Japan, post-war literature turned toward modern democratic ideas such as "democracy", "freedom", "class", and "the individual". However, the influence of the emperor system pushed post-war Japanese literature in a contra-democratic direction; it thus came under the management of the imperial institution of Japan.
Characteristics and Significance of the Post-War Literature
In post-war Japan, trauma was one of the characteristic representations in both literature and film. In post-war Japanese literature, the narration was usually presented from the viewpoint of the "victim" of the war between Japan and other countries. The reason for these characteristics of trauma and victimhood was the desire to separate the past from the present in Japanese military history.
Besides the characteristics of trauma and victimhood, the influence of post-war literature in Japan can be seen in the themes of "body", "individual", and "national identity".
The body
|
https://en.wikipedia.org/wiki/Second%20Generation%20of%20Postwar%20Writers
|
The Second Generation of Postwar Writers is a classification in modern Japanese literature used for writers who appeared on the postwar literary scene between 1948 and 1949.
Exceptional in this generation of postwar writers are Mishima Yukio and Abe Kōbō, both of whom have received acclaim in Japan and abroad. At times, their reputation abroad has surpassed that of their reputation in Japan.
List of Second Generation writers
Mishima Yukio (三島由紀夫)
Abe Kōbō (安部公房)
Ōoka Shōhei (大岡昇平)
Shimao Toshio (島尾敏雄)
Hotta Yoshie (堀田善衛)
Inoue Mitsuharu (井上光晴)
See also
Japanese literature
The First Generation of Postwar Writers
The Third Generation of Postwar Writers
|
https://en.wikipedia.org/wiki/Third%20Generation%20of%20Postwar%20Writers
|
The Third Generation of Postwar Writers (第三の新人, daisan no shinjin) is a classification in Modern Japanese literature used to group writers who appeared on the postwar literary scene between 1953 and 1955.
Shūsaku Endō, a member of the Third Generation, once said, "In those days, although we had received the Akutagawa Prize one after another, hardly anyone expected that we would become great writers. We were regarded as if we would soon be forgotten by the literary world. Indeed, most people did not begin to take notice of the Akutagawa Prize until Ishihara Shintaro won it, swept through the mass media, and split public opinion, as the first manifesto from one of the Postwar Generation."
However, despite this, this generation has made a major mark on Japanese literature. The works of Endō in particular have been translated into many languages and are widely read in the United States, France, and Germany.
At that same time, women writers such as Aya Kōda (幸田文), Minako Oba, and Sawako Ariyoshi also made their debuts.
After this generation, prominent and diverse writers such as Shintaro Ishihara, Morio Kita, and Kenzaburō Ōe appeared.
List
Shūsaku Endō (遠藤周作)
Shōtarō Yasuoka (安岡章太郎)
Junnosuke Yoshiyuki (吉行淳之介)
Junzo Shono (庄野潤三)
Shumon Miura (三浦朱門)
Ayako Sono (曽野綾子)
Hiroyuki Agawa (阿川弘之)
Kojima Nobuo (小島信夫)
See also
Japanese literature
The First Generation of Postwar Writers
The Second Generation of Postwar Writers
|
https://en.wikipedia.org/wiki/Volvo%20B18%20engine
|
The B18 is a 1.8 L inline four cylinder automobile engine produced by Volvo from 1961 through 1968. A larger 2.0 L derivative called the B20 debuted in 1969.
Despite being a pushrod design, the engines can rev to 6,500 rpm. They are also reputed to be very durable. The world's highest mileage car, a 1966 Volvo P1800S, traveled more than on its original B18 engine.
B18
The B18 has a single cam-in-block, operating two overhead valves (OHV) per cylinder by pushrods and rocker arms. The crankshaft rides in five main bearings, making the B18 quite different in design from its predecessor, the three-bearing B16.
With a bore of and stroke of , the B18 displaces . The engine was used in Volvo's PV544, P210 Duett, 120 (Amazon), P1800 and 140 series. It could also be found in the L3314 and the Bandvagn 202 military vehicles. The B18 was fitted to many Volvo Penta sterndrive marine propulsion systems. It was also used in the Facel Vega Facel III and the Marcos 1800 GT.
There are four variations of this engine:
B18A: Single carburettor version.
B18B: Dual carburettor version with a higher compression ratio, fitted variously with dual sidedraft SU or Zenith/Stromberg carburettors.
B18C: Single carburettor version with a lower compression ratio and mechanical RPM regulator, fitted in the gasoline powered versions of the Volvo BM 320 tractor. This version was also used for the elevator in the PS-15 radar system.
B18D: Dual carburettor version with a lower compression ratio.
DOH
|
https://en.wikipedia.org/wiki/Helmert%E2%80%93Wolf%20blocking
|
The Helmert–Wolf blocking (HWB) is a least squares solution method for the solution of a sparse block system of linear equations. It was first reported by F. R. Helmert for use in geodesy problems in 1880; Helmut Wolf (1910–1994) published his direct semianalytic solution in 1978.
It is based on ordinary Gaussian elimination in matrix form or partial minimization form.
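The elimination idea can be illustrated with a Schur-complement step on the blocked normal equations (a dense sketch; the function name and partitioning are ours, not the article's). The block-local parameters are eliminated first, leaving a smaller reduced system in the shared parameters:

```python
import numpy as np

def eliminate_block(N11, N12, N22, b1, b2):
    """Solve the blocked normal equations
        [[N11, N12], [N12.T, N22]] @ [x1, x2] = [b1, b2]
    by eliminating x1 first (a Schur-complement step).  Helmert-Wolf
    blocking applies this reduction block by block, exploiting sparsity;
    this dense version only illustrates the algebra."""
    N11_inv = np.linalg.inv(N11)
    S = N22 - N12.T @ N11_inv @ N12      # reduced normal matrix
    r = b2 - N12.T @ N11_inv @ b1        # reduced right-hand side
    x2 = np.linalg.solve(S, r)           # shared parameters
    x1 = N11_inv @ (b1 - N12 @ x2)       # back-substitute block parameters
    return x1, x2
```

Because each data block contributes its own small reduction independently, only the shared parameters need to be solved jointly, which is what makes the method fast on sparse problems.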
Description
Limitations
The HWB solution is very fast to compute but it is optimal only if observational errors do not correlate between the data blocks. The generalized canonical correlation analysis (gCCA) is the statistical method of choice for making those harmful cross-covariances vanish. This may, however, become quite tedious depending on the nature of the problem.
Applications
The HWB method is critical to satellite geodesy and similar large problems. The HWB method can be extended to fast Kalman filtering (FKF) by augmenting its linear regression equation system to take into account information from numerical forecasts, physical constraints and other ancillary data sources that are available in real time. Operational accuracies can then be computed reliably from the theory of minimum-norm quadratic unbiased estimation (MINQUE) of C. R. Rao.
See also
Block matrix
Notes
Statistical algorithms
Least squares
Geodesy
|
https://en.wikipedia.org/wiki/Michael%20I.%20Jordan
|
Michael Irwin Jordan (born February 25, 1956) is an American scientist, professor at the University of California, Berkeley and researcher in machine learning, statistics, and artificial intelligence.
Jordan was elected a member of the National Academy of Engineering in 2010 for contributions to the foundations and applications of machine learning.
He is one of the leading figures in machine learning, and in 2016 Science reported him as the world's most influential computer scientist.
In 2022, Jordan won the inaugural World Laureates Association Prize in Computer Science or Mathematics, "for fundamental contributions to the foundations of machine learning and its application."
Education
Jordan received his BS magna cum laude in Psychology in 1978 from the Louisiana State University, his MS in Mathematics in 1980 from Arizona State University and his PhD in Cognitive Science in 1985 from the University of California, San Diego. At the University of California, San Diego, Jordan was a student of David Rumelhart and a member of the Parallel Distributed Processing (PDP) Group in the 1980s.
Career and research
Jordan is the Pehong Chen Distinguished Professor at the University of California, Berkeley, where his appointment is split across EECS and Statistics. He was a professor at the Department of Brain and Cognitive Sciences at MIT from 1988 to 1998.
In the 1980s Jordan started developing recurrent neural networks as a cognitive model. In recent years, his work is less
|
https://en.wikipedia.org/wiki/Aldolase%20A
|
Aldolase A (ALDOA, or ALDA), also known as fructose-bisphosphate aldolase, is an enzyme that in humans is encoded by the ALDOA gene on chromosome 16.
The protein encoded by this gene is a glycolytic enzyme that catalyzes the reversible conversion of fructose-1,6-bisphosphate to glyceraldehyde 3-phosphate (G3P) and dihydroxyacetone phosphate (DHAP). Three aldolase isozymes (A, B, and C), encoded by three different genes, are differentially expressed during development. Aldolase A is found in the developing embryo and is produced in even greater amounts in adult muscle. Aldolase A expression is repressed in adult liver, kidney and intestine, and is similar to aldolase C levels in brain and other nervous tissue. Aldolase A deficiency has been associated with myopathy and hemolytic anemia. Alternative splicing and alternative promoter usage results in multiple transcript variants. Related pseudogenes have been identified on chromosomes 3 and 10.
Structure
ALDOA is a homotetramer and one of the three aldolase isozymes (A, B, and C), encoded by three different genes. The ALDOA gene contains 8 exons and the 5' UTR IB. Key amino acids responsible for its catalytic function have been identified. The residue Tyr363 functions as the acid–base catalyst for protonating C3 of the substrate, while Lys146 is proposed to stabilize the negative charge of the resulting conjugate base of Tyr363 and the strained configuration of the C-terminal. Residue Glu187 particip
|
https://en.wikipedia.org/wiki/Defining%20length
|
In genetic algorithms and genetic programming, the defining length L(H) is the maximum distance between two defining symbols (that is, symbols that have a fixed value, as opposed to wildcard symbols that can take any value, commonly denoted as # or *) in a schema H. In tree GP schemata, L(H) is the number of links in the minimum tree fragment including all the non-= symbols within a schema H.
Example
Schemata "00##0", "1###1", "01###", and "##0##" have defining lengths of 4, 4, 1, and 0, respectively. Lengths are computed by determining the last fixed position and subtracting from it the first fixed position.
In genetic algorithms, as the defining length of a solution increases, so does the susceptibility of the solution to disruption due to mutation or cross-over.
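The computation described above (last fixed position minus first fixed position) can be sketched as:

```python
def defining_length(schema, wildcards="#*"):
    """Defining length L(H): distance between the first and last
    fixed (non-wildcard) symbols of a string schema."""
    fixed = [i for i, s in enumerate(schema) if s not in wildcards]
    if len(fixed) < 2:          # zero or one fixed symbol: no distance
        return 0
    return fixed[-1] - fixed[0]
```

Applied to the schemata in the example, this returns 4, 4, 1, and 0 respectively.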
References
Genetic algorithms
|
https://en.wikipedia.org/wiki/Stress-induced%20leakage%20current
|
Stress-induced leakage current (SILC) is an increase in the gate leakage current of a MOSFET; the term is used in semiconductor physics. It occurs due to defects created in the gate oxide during electrical stressing. SILC is perhaps the largest factor inhibiting device miniaturization. Increased leakage is a common failure mode of electronic devices.
Oxide defects
The most well-studied defects assisting in the leakage current are those produced by charge trapping in the oxide. This model provides a point of attack and has stimulated researchers to develop methods to decrease the rate of charge trapping by mechanisms such as nitrous oxide (N2O) nitridation of the oxide.
Semiconductor device defects
|
https://en.wikipedia.org/wiki/Training%2C%20validation%2C%20and%20test%20data%20sets
|
In machine learning, a common task is the study and construction of algorithms that can learn from and make predictions on data. Such algorithms function by making data-driven predictions or decisions, through building a mathematical model from input data. These input data used to build the model are usually divided into multiple data sets. In particular, three data sets are commonly used in different stages of the creation of the model: training, validation, and test sets.
The model is initially fit on a training data set, which is a set of examples used to fit the parameters (e.g. weights of connections between neurons in artificial neural networks) of the model. The model (e.g. a naive Bayes classifier) is trained on the training data set using a supervised learning method, for example using optimization methods such as gradient descent or stochastic gradient descent. In practice, the training data set often consists of pairs of an input vector (or scalar) and the corresponding output vector (or scalar), where the answer key is commonly denoted as the target (or label). The current model is run with the training data set and produces a result, which is then compared with the target, for each input vector in the training data set. Based on the result of the comparison and the specific learning algorithm being used, the parameters of the model are adjusted. The model fitting can include both variable selection and parameter estimation.
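The initial partition into the three sets can be sketched as follows (the 60/20/20 fractions and the function name are illustrative, not prescribed by the article):

```python
import random

def three_way_split(data, train_frac=0.6, val_frac=0.2, seed=0):
    """Shuffle the examples and partition them into training,
    validation, and test sets."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    return (shuffled[:n_train],                    # used to fit parameters
            shuffled[n_train:n_train + n_val],     # used to tune/compare models
            shuffled[n_train + n_val:])            # held out for final evaluation

train, val, test = three_way_split(list(range(100)))
```

The key property is that the three sets are disjoint, so the test set gives an unbiased estimate of performance on unseen data.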
Successively, the fitted model is use
|
https://en.wikipedia.org/wiki/Instrumental%20variables%20estimation
|
In statistics, econometrics, epidemiology and related disciplines, the method of instrumental variables (IV) is used to estimate causal relationships when controlled experiments are not feasible or when a treatment is not successfully delivered to every unit in a randomized experiment. Intuitively, IVs are used when an explanatory variable of interest is correlated with the error term, in which case ordinary least squares and ANOVA give biased results. A valid instrument induces changes in the explanatory variable but has no independent effect on the dependent variable, allowing a researcher to uncover the causal effect of the explanatory variable on the dependent variable.
Instrumental variable methods allow for consistent estimation when the explanatory variables (covariates) are correlated with the error terms in a regression model. Such correlation may occur when:
changes in the dependent variable change the value of at least one of the covariates ("reverse" causation),
there are omitted variables that affect both the dependent and explanatory variables, or
the covariates are subject to non-random measurement error.
Explanatory variables that suffer from one or more of these issues in the context of a regression are sometimes referred to as endogenous. In this situation, ordinary least squares produces biased and inconsistent estimates. However, if an instrument is available, consistent estimates may still be obtained. An instrument is a variable that does not itself
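A standard estimator in this setting is two-stage least squares (2SLS); the following numpy sketch (variable names are ours, not from the article) shows the idea for a single endogenous regressor and a single instrument:

```python
import numpy as np

def two_stage_least_squares(y, x, z):
    """Illustrative 2SLS: regress the endogenous x on the instrument z,
    then regress y on the fitted values of x (intercepts included)."""
    Z = np.column_stack([np.ones_like(z), z])
    # Stage 1: project the endogenous regressor onto the instrument.
    gamma, *_ = np.linalg.lstsq(Z, x, rcond=None)
    x_hat = Z @ gamma
    # Stage 2: OLS of y on the instrumented regressor.
    X_hat = np.column_stack([np.ones_like(x_hat), x_hat])
    beta, *_ = np.linalg.lstsq(X_hat, y, rcond=None)
    return beta  # [intercept, slope estimate]
```

On simulated data with an omitted confounder, the 2SLS slope recovers the causal coefficient where plain OLS would be biased.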
|
https://en.wikipedia.org/wiki/Mating%20pool
|
A mating pool is a concept used in evolutionary computation, which refers to a family of algorithms used to solve optimization and search problems.
The mating pool is formed by candidate solutions that the selection operators deem to have the highest fitness in the current population. Solutions that are included in the mating pool are referred to as parents. Individual solutions can be repeatedly included in the mating pool, with individuals of higher fitness values having a higher chance of being included multiple times. Crossover operators are then applied to the parents, resulting in recombination of genes recognized as superior. Lastly, random changes in the genes are introduced through mutation operators, increasing the genetic variation in the gene pool. Those two operators improve the chance of creating new, superior solutions. A new generation of solutions is thereby created, the children, who will constitute the next population. Depending on the selection method, the total number of parents in the mating pool can differ from the size of the initial population, resulting in a new population that is smaller. To continue the algorithm with an equally sized population, random individuals from the old population can be chosen and added to the new population.
At this point, the fitness value of the new solutions is evaluated. If the termination conditions are fulfilled, processes come to an end. Otherwise, they are repeated.
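Fitness-proportionate (roulette-wheel) selection is one common way to form such a pool; a minimal sketch (function name and parameters are ours):

```python
import random

def mating_pool(population, fitness, size, seed=0):
    """Form a mating pool by fitness-proportionate selection: fitter
    individuals may be drawn several times, less fit ones possibly
    not at all."""
    rng = random.Random(seed)
    # random.choices samples with replacement, weighted by fitness.
    return rng.choices(population, weights=fitness, k=size)
```

Because sampling is with replacement, the same parent can appear repeatedly in the pool, exactly as described above.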
The repetition of the steps result in can
|
https://en.wikipedia.org/wiki/Homothetic
|
Homothetic may refer to:
Geometry
Homothetic transformation, also known as homothety, homothecy, or homogeneous dilation
Homothetic center
Homothetic vector field
Economics
Homothetic preferences
|
https://en.wikipedia.org/wiki/Premature%20convergence
|
In evolutionary algorithms (EA), premature convergence means that a population for an optimization problem has converged too early, so that the result is suboptimal. In this context, the parental solutions, through the aid of genetic operators, are not able to generate offspring that are superior to, or outperform, their parents. Premature convergence is a common problem in evolutionary algorithms in general and genetic algorithms in particular, as it leads to a loss, or convergence, of a large number of alleles, subsequently making it very difficult to search for a specific gene in which the alleles were present. An allele is considered lost if all individuals in a population share the same value for that particular gene. An allele is, as defined by De Jong, considered to be a converged allele when 95% of a population share the same value for a certain gene (see also convergence).
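De Jong's 95% criterion described above can be sketched as follows (a toy illustration on bit-string populations; names are ours):

```python
from collections import Counter

def converged_alleles(population, threshold=0.95):
    """Return the gene positions where at least `threshold` of the
    population shares one value (De Jong's 95% criterion).  At
    threshold 1.0 this detects lost alleles."""
    n = len(population)
    converged = []
    for i in range(len(population[0])):
        # Count how often the most common value occurs at position i.
        (_, count), = Counter(ind[i] for ind in population).most_common(1)
        if count / n >= threshold:
            converged.append(i)
    return converged
```

Monitoring the fraction of converged genes over generations is one simple diagnostic for the onset of premature convergence.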
Strategies for preventing premature convergence
Strategies to regain genetic variation can be:
a mating strategy called incest prevention,
uniform crossover,
favored replacement of similar individuals (preselection or crowding),
segmentation of individuals of similar fitness (fitness sharing),
increasing population size.
The genetic variation can also be regained by mutation, though this process is highly random.
One way to reduce the risk of premature convergence is to use structured populations instead of the commonly used panmictic ones, see b
|
https://en.wikipedia.org/wiki/Brown%20note
|
The brown note, also sometimes called the brown frequency or brown noise, is a hypothetical infrasonic frequency capable of causing fecal incontinence by creating acoustic resonance in the human bowel. Considered an urban myth, the name is a metonym for the common color of human faeces. Attempts to demonstrate the existence of a "brown note" using sound waves transmitted through the air have failed.
Frequencies supposedly involved are between 5 and 9 Hz, which are below the lower frequency limit of human hearing. High-power sound waves below 20 Hz are felt in the body, not heard by the ear as sound.
Physiological effects of low frequency vibration
Air is a very inefficient medium for transferring low frequency vibration from a transducer to the human body. Mechanical connection of the vibration source to the human body, however, provides a potentially dangerous combination. The U.S. space program, worried about the harmful effects of rocket flight on astronauts, ordered vibration tests that used cockpit seats mounted on vibration tables to transfer "brown note" and other frequencies directly to the human subjects. Very high power levels of 160 dB were achieved at frequencies of 2–3 Hz. Test frequencies ranged from 0.5 Hz to 40 Hz. Test subjects suffered motor ataxia, nausea, visual disturbance, degraded task performance and difficulties in communication. These tests are assumed by researchers to be the nucleus of the current urban myth.
MythBusters testing
In February 20
|
https://en.wikipedia.org/wiki/Manaaki%20Whenua%20%E2%80%93%20Landcare%20Research
|
Manaaki Whenua – Landcare Research is a New Zealand Crown Research Institute whose focus of research is the environment, biodiversity, and sustainability.
History
Manaaki Whenua was originally part of the Department of Scientific and Industrial Research (DSIR), but was established as an independent organisation when the Crown Research Institutes were created in 1992. As part of that process, it was semi-commercialised, and now operates as a government-owned company rather than as a government department. The commercialisation has led to greater emphasis on financial viability, and Manaaki Whenua is employed by various private groups to provide advice and information. It is currently led by acting chair Dr Paul Reynolds QSO.
Locations
The main site is in Lincoln, near Christchurch. There are also other sites at Auckland on the Tamaki campus of Auckland University, Hamilton, Palmerston North, Wellington, Alexandra, and Dunedin.
Collections
Manaaki Whenua – Landcare Research holds several collections of organisms that are of significant national importance to New Zealand. Detailed information on all the specimens can be found through the Systematics Collections Data (SCD) website.
International collection of microorganisms from plants
The International Collection of Microorganisms from Plants in Auckland holds live bacterial and fungal specimens that are preserved under liquid nitrogen or in freeze-dried ampoules. Currently there are over 20,000 specimens in the collecti
|
https://en.wikipedia.org/wiki/Thermodynamic%20equations
|
Thermodynamics is expressed by a mathematical framework of thermodynamic equations which relate various thermodynamic quantities and physical properties measured in a laboratory or production process. Thermodynamics is based on a fundamental set of postulates that became the laws of thermodynamics.
Introduction
One of the fundamental thermodynamic equations is the description of thermodynamic work in analogy to mechanical work, or weight lifted through an elevation against gravity, as defined in 1824 by French physicist Sadi Carnot. Carnot used the phrase motive power for work. In the footnotes to his famous On the Motive Power of Fire, he states: “We use here the expression motive power to express the useful effect that a motor is capable of producing. This effect can always be likened to the elevation of a weight to a certain height. It has, as we know, as a measure, the product of the weight multiplied by the height to which it is raised.” With the inclusion of a unit of time in Carnot's definition, one arrives at the modern definition for power: P = W/t, the work performed per unit time.
During the latter half of the 19th century, physicists such as Rudolf Clausius, Peter Guthrie Tait, and Willard Gibbs worked to develop the concept of a thermodynamic system and the correlative energetic laws which govern its associated processes. The equilibrium state of a thermodynamic system is described by specifying its "state". The state of a thermodynamic system is specified by a number of extensive quantities, the
|
https://en.wikipedia.org/wiki/Confluent%20hypergeometric%20function
|
In mathematics, a confluent hypergeometric function is a solution of a confluent hypergeometric equation, which is a degenerate form of a hypergeometric differential equation where two of the three regular singularities merge into an irregular singularity. The term confluent refers to the merging of singular points of families of differential equations; confluere is Latin for "to flow together". There are several common standard forms of confluent hypergeometric functions:
Kummer's (confluent hypergeometric) function M(a, b, z), introduced by Kummer (1837), is a solution to Kummer's differential equation. This is also known as the confluent hypergeometric function of the first kind. There is a different and unrelated Kummer's function bearing the same name.
Tricomi's (confluent hypergeometric) function U(a, b, z), introduced by Tricomi (1947), sometimes denoted by Ψ(a; b; z), is another solution to Kummer's equation. This is also known as the confluent hypergeometric function of the second kind.
Whittaker functions (for Edmund Taylor Whittaker) are solutions to Whittaker's equation.
Coulomb wave functions are solutions to the Coulomb wave equation.
The Kummer functions, Whittaker functions, and Coulomb wave functions are essentially the same, and differ from each other only by elementary functions and change of variables.
Kummer's equation
Kummer's equation may be written as:
z (d²w/dz²) + (b − z)(dw/dz) − aw = 0,
with a regular singular point at z = 0 and an irregular singular point at z = ∞. It has two (usually) linearly independent solutions M(a, b, z) and U(a, b, z).
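Kummer's function of the first kind has the defining series M(a, b, z) = sum_n (a)_n / (b)_n * z^n / n!, where (x)_n is the rising factorial. For small |z| it can be evaluated by truncating this series; a sketch, not a robust implementation:

```python
def kummer_m(a, b, z, terms=60):
    """Truncated series for Kummer's function M(a, b, z).
    Each term satisfies t_{n+1} = t_n * (a + n) / (b + n) * z / (n + 1),
    starting from t_0 = 1.  Adequate only for modest |z|."""
    total, term = 1.0, 1.0
    for n in range(terms):
        term *= (a + n) / (b + n) * z / (n + 1)
        total += term
    return total
```

As a sanity check, M(a, b, 0) = 1 for all a, b, and M(1, 1, z) reduces to the exponential series.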
Kummer's function
|
https://en.wikipedia.org/wiki/Texture%20%28chemistry%29
|
In physical chemistry and materials science, texture is the distribution of crystallographic orientations of a polycrystalline sample (it is also part of the geological fabric). A sample in which these orientations are fully random is said to have no distinct texture. If the crystallographic orientations are not random, but have some preferred orientation, then the sample has a weak, moderate or strong texture. The degree is dependent on the percentage of crystals having the preferred orientation.
Texture is seen in almost all engineered materials, and can have a great influence on materials properties. The texture forms in materials during thermo-mechanical processes, for example during production processes such as rolling. Consequently, the rolling process is often followed by a heat treatment to reduce the amount of unwanted texture. Controlling the production process in combination with the characterization of texture and the material's microstructure help to determine the materials properties, i.e. the processing-microstructure-texture-property relationship. Also, geologic rocks show texture due to their thermo-mechanic history of formation processes.
One extreme case is a complete lack of texture: a solid with perfectly random crystallite orientation will have isotropic properties at length scales sufficiently larger than the size of the crystallites. The opposite extreme is a perfect single crystal, which likely has anisotropic properties by geometric necessity.
Chara
|
https://en.wikipedia.org/wiki/V%C3%ADctor%20A.%20Carre%C3%B1o
|
Víctor A. Carreño (born 1956) is a NASA aerospace engineer and aerospace technologist. He holds the patent for the Single Frequency Multitransmitter Telemetry System.
Early years
Carreño was born in Santo Domingo, Dominican Republic. His family moved to Puerto Rico when he was a child, and he was raised in the city of Guaynabo. Carreño became interested in electronics and the solution of mathematical problems as a child. He attended the Margarita Janer Palacios High School, where he was a top mathematics and science student, and graduated with honors in 1974.
In 1974, Carreño enrolled at the University of Puerto Rico and earned his Bachelor of Science degree in electrical engineering in 1979. Upon graduation, he applied to and was hired by the NASA Langley Research Center and assigned to the Aircraft Electronics System Branch, Flight Electronics Division. In 1983, he was reassigned to the digital system upset assessment team, in the Fault Tolerant Systems Branch. His work involved the development of techniques for analytically assessing digital system upset due to lightning-induced transients.
Career in NASA
Carreño is credited with inventing and developing the Single Frequency Multitransmitter Telemetry System in 1983. He also designed and conducted experiments for the real-time evaluation of the Viper single board computer while working in the instrumentation of the F-106 lightning research aircraft. Carreño co
|
https://en.wikipedia.org/wiki/Jotun%20Hein
|
Jotun John Piet Hein (born 19 July 1956) is Professor of Bioinformatics at the Department of Statistics of the University of Oxford and a professorial fellow of University College, Oxford. Hein was previously Director of the Bioinformatics Research Centre at Aarhus University, Denmark.
Hein is the fourth son of Piet Hein, the Danish scientist, mathematician, inventor, designer, author, and poet who wrote the famed Grooks poetry collections and invented the Superegg and the Soma cube. When he was 12 years old, Jotun proved the Soma cube's "Basalt Rock" construction impossible; the proof was published in the puzzle's instruction manual as "Jotun's Proof."
Hein's research interests are in molecular evolution, molecular population genetics and bioinformatics.
Selected books
Hein, J; Schierup, M. H., and Wiuf, C. Gene Genealogies, Variation and Evolution – A Primer in Coalescent Theory. Oxford University Press, 2005. .
References
External links
Personal home page
1956 births
Living people
British statisticians
Fellows of University College, Oxford
20th-century British mathematicians
21st-century British mathematicians
|
https://en.wikipedia.org/wiki/Br%C3%A9guet%20693
|
The Bréguet 690 and its derivatives were a series of light twin-engine ground-attack aircraft that were used by the French Air Force in World War II. The aircraft was intended to be easy to maintain, forgiving to fly, and capable of at . The type's sturdy construction was frequently demonstrated and the armament was effective. French rearmament began two years later than that in Britain and none of these aircraft were available in sufficient numbers to make a difference in 1940.
Design and development
Bréguet 690
The Bréguet 690 had begun life in 1934 as the Bre 630, the Bréguet Aviation entry for the (STAé, Aeronautical Technical Service) specification of October 1934 along with the Hanriot H 220, Loire-Nieuport LN-20, Romano R.110 and the Potez 630. The Bréguet 630 was a twin-engined monoplane with twin tailplanes and Hispano-Suiza 14AB 02/03 (port and starboard) 14-cylinder air-cooled radial engines, both rotating inwards to limit torque problems if one engine failed. The aircraft was armed with two forward-firing Hispano-Suiza HS.404 cannon and a 7.5 mm MAC 1934 machine-gun firing rearwards for aft defence. The Potez 630 won the C3 competition but Bréguet began construction of the prototype Bréguet 690 in 1935, without an order from the , which was not placed until 26 March 1937. Completion of the Bréguet 690-01 was slowed by a ten-month delay in the delivery of its engines from Hispano-Suiza. The Bre 690-01 was finished in early 1938 and flew on 23 March, revealing
|
https://en.wikipedia.org/wiki/%C3%93%20Flaithbheartaigh
|
O'Flaherty ( , ; ; ) is an Irish Gaelic clan based most prominently in what is today County Galway. The clan name originated in the 10th century as a derivative of its founder Flaithbheartach mac Eimhin. They descend in the paternal line from the Connachta's Uí Briúin Seóla. They were originally kings of Maigh Seóla and Muintir Murchada and as members of the Uí Briúin were kinsmen of the Ó Conchubhair and Mac Diarmada amongst others. After their king Cathal mac Tigernán lost out to Áed in Gai Bernaig in the 11th century, the family were pushed further west to Iar Connacht, a territory associated with Connemara today. They continued to rule this land until the 16th century. The name has been alternatively rendered into English in various forms, such as Flaherty, Fluharty, Faherty, Laverty, Flaverty, Lahiff, and Flahive.
Naming conventions
Overview
This Gaelic-Irish surname is written as "Ua Flaithbertach" (nominative) or "Ua Flaithbertaig" (genitive) in Old Irish and Middle Irish texts. In Modern Irish the surname is now generally spelt as Ó Flatharta.
The surname is commonly translated as "bright ruler" or more correctly "bright prince", flaith originally meaning prince in Irish. "O" or Ó comes from Ua, designating "grandson" or "descendant" of a (major) clan member. The prefix is often anglicised to O', using an apostrophe instead of the Irish síneadh fada: "'".
Maigh Seóla was the earliest O'Flaherty domain, to the east of Lough Corrib in the kingdom of Connacht, th
|
https://en.wikipedia.org/wiki/Military%20brat%20%28U.S.%20subculture%29
|
In the United States, a military brat (also known by various "brat" derivatives) is the child of a parent(s), adopted parent(s) or legal guardian(s) serving full-time in the United States Armed Forces, whether current or former. The term military brat can also refer to the subculture and lifestyle of such families.
The military brat lifestyle typically involves moving to new states or countries many times while growing up, as the child's military family is customarily transferred to new non-combat assignments; consequently, many military brats never have a home town. War-related family stresses are also a commonly occurring part of military brat life. There are also other aspects of military brat life that are significantly different in comparison to the civilian American population, often including living in foreign countries and/or diverse regions within the U.S., exposure to foreign languages and cultures, and immersion in military culture.
The military brat subculture has emerged over the last 200 years. The age of the phenomenon has meant military brats have also been described by a number of researchers as one of America's oldest and yet least well-known and largely invisible subcultures. They have also been described as a "modern nomadic subculture".
Military brat is known in U.S. military culture as a term of endearment and respect. The term may also connote a military brat's experience of mobile upbringing, and may refer to a sense of worldliness. Research has sh
|
https://en.wikipedia.org/wiki/Monocrystalline%20whisker
|
A monocrystalline whisker is a filament of material that is structured as a single, defect-free crystal. Some typical whisker materials are graphite, alumina, iron, silicon carbide and silicon. Single-crystal whiskers of these (and some other) materials are known for having very high tensile strength (on the order of 10–20 GPa). Whiskers are used in some composites, but large-scale fabrication of defect-free whiskers is very difficult.
Prior to the discovery of carbon nanotubes, single-crystal whiskers had the highest tensile strength of any materials known, and were featured regularly in science fiction as materials for fabrication of space elevators, arcologies, and other large structures. Despite showing great promise for a range of applications, their usage has been hindered by concerns over their effects on health when inhaled.
See also
Whisker (metallurgy) – Self-organizing metallic whisker-shaped structures that cause problems with electronics.
Laser-heated pedestal growth
References
"Mechanical and Physical Properties of Whiskers", CRC Handbook of Chemistry and Physics, 55th edition.
Materials
|
https://en.wikipedia.org/wiki/Visvedevas
|
The visvedevas () is the designation used to address the entirety of the various deities featured in the Vedas. It also refers to a specific classification of deities in the Puranas. The visvedevas are sometimes regarded as the most comprehensive gathering of the gods, a classification in which no deity is stated to be omitted.
Literature
Rigveda
In the Rigveda a number of hymns are addressed to these deities, including (according to Griffith): 1.3,1.89,3.54-56, 4.55, 5.41-51, 6.49-52, 7.34-37, 39, 40, 42, 43, 8.27-30, 58, 83 10.31, 35, 36, 56, 57, 61-66, 92, 93, 100, 101, 109, 114, 126, 128, 137, 141, 157, 165, 181.
RV 3.54.17 addresses them as headed by Indra:
This is, ye Wise, your great and glorious title, that all ye Deities abide in Indra. (trans. Griffith)
The dichotomy between devas and asuras is not evident in these hymns, and the devas are invoked together, such as Mitra and Varuna. Though many devas are named in the Rigveda, only 33 devas are counted, eleven of them present each in earth, space, and heaven.
Manusmriti
According to Manu (iii, 90, 121), offerings should be made daily to the visvedevas. These privileges were bestowed on them by Brahma and the Pitri as a reward for severe austerities they had performed on the Himalayas.
Puranas
In later Hinduism, the visvedevas form one of the nine ganadevatas (along with the adityas, vasus, tushitas, abhasvaras, anilas, maharajikas, sadhyas, and rudras). According to the Vishnu Purana and Padma Purana, they were
|
https://en.wikipedia.org/wiki/Circular%20distribution
|
In probability and statistics, a circular distribution or polar distribution is a probability distribution of a random variable whose values are angles, usually taken to be in the range [0, 2π). A circular distribution is often a continuous probability distribution, and hence has a probability density, but such distributions can also be discrete, in which case they are called circular lattice distributions. Circular distributions can be used even when the variables concerned are not explicitly angles: the main consideration is that there is not usually any real distinction between events occurring at the lower or upper end of the range, and the division of the range could notionally be made at any point.
Graphical representation
If a circular distribution has a density f(θ) (0 ≤ θ < 2π),
it can be graphically represented as the closed curve
(x(θ), y(θ)) = (r(θ) cos θ, r(θ) sin θ),
where the radius r(θ) is set equal to
r(θ) = a + b f(θ),
and where a and b are chosen on the basis of appearance.
Examples
By computing the probability distribution of angles along a handwritten ink trace,
a lobe-shaped polar distribution emerges. The main direction of the lobe in the
first quadrant corresponds to the slant of handwriting (see: graphonomics).
An example of a circular lattice distribution would be the probability of being born in a given month of the year, with each calendar month being thought of as arranged round a circle, so that "January" is next to "December".
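The circular nature of such data also changes how summary statistics must be computed; a mean direction, for instance, comes from the resultant of unit vectors rather than from averaging raw angles. A minimal sketch in Python (the function name and the sample values are illustrative choices, not from the article):

```python
import cmath
import math

def circular_mean(angles):
    """Mean direction of angles in radians, from the resultant of unit
    vectors; a plain arithmetic mean would be wrong on a circle."""
    resultant = sum(cmath.exp(1j * a) for a in angles)
    return cmath.phase(resultant) % (2 * math.pi)

# Angles clustered near 0 but straddling the 0 / 2*pi division point:
angles = [0.10, 6.20, 0.05, 6.25]      # 6.2 rad is just below 2*pi
print(circular_mean(angles))           # close to 0 (or to 2*pi), as expected
print(sum(angles) / len(angles))       # naive mean, misleadingly near pi
```

The naive mean lands near π because it ignores that the range could be divided at any point.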
See also
Circular mean
Circular uniform distribution
von Mises distribution
References
External link
|
https://en.wikipedia.org/wiki/Running%20angle
|
In mathematics, the running angle is the angle of consecutive vectors with respect to the base line, i.e. θ_i = arctan(Δy_i / Δx_i), where (Δx_i, Δy_i) is the i-th difference vector along the trace.
Usually, it is more informative to compute it using a four-quadrant version of the arctan function in a mathematical software library.
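As a sketch of the computation just described (the function name and sample data are illustrative assumptions), the running angles of a polyline can be computed with the four-quadrant atan2:

```python
import math

def running_angles(points):
    """Angle of each consecutive difference vector with respect to the
    horizontal base line, using the four-quadrant atan2."""
    return [math.atan2(y2 - y1, x2 - x1)
            for (x1, y1), (x2, y2) in zip(points, points[1:])]

# A polyline that goes right, then up, then left:
pts = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(running_angles(pts))  # [0.0, pi/2, pi]
```

Unlike a plain arctan of the slope, atan2 distinguishes, say, a leftward from a rightward segment.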
See also
Differential geometry
Polar distribution
Penmanship
|
https://en.wikipedia.org/wiki/Hariri
|
Hariri (in Arabic حريري) is a surname derived from harir (in Arabic حرير, meaning silk), indicating a mercantile background in the silk trade at some point.
People
Historic
Ali Hariri (1009–1079), Kurdish poet
Al-Hariri of Basra (1054–1122), Arab poet, scholar of the Arabic language and a high government official of the Seljuk Empire
Surname
Family of Rafic Hariri
Ayman Hariri (born 1978), Lebanese businessman, son of Rafic Hariri
Bahia Hariri (born 1952), Lebanese politician, sister of Rafic Hariri
Bahaa Hariri (born 1966), Lebanese business tycoon, son of Rafic Hariri
Fahd Hariri (born 1980/1981), Lebanese businessman and property developer, the son of Rafic Hariri
Hind Hariri (born 1984), daughter and youngest child of Rafic Hariri
Nazik Hariri, widow of Rafic Hariri
Rafic Hariri (1944–2005), business tycoon and Lebanese Prime Minister; assassinated
Saad Hariri (born 1970), politician, business tycoon, Lebanese Prime Minister, and son of Rafic Hariri
Other people
Abbas Hariri, Iranian wrestler
Abu Al-Izz Al-Hariri (1946–2014), Egyptian politician and member of parliament
Abdulhadi Al Hariri (born 1992), Syrian footballer
Fawzi Hariri (born 1958), Iraqi Minister of Industry and Minerals (since 2006)
Franso Hariri (1937–2001), Kurdish Iraqi politician
Lamia Al Hariri, Syrian diplomat
May Hariri (born 1972), Lebanese pop artist and actress
Naser al-Hariri, Syrian politician
Omar El-Hariri (c. 1944–2015), Libyan politician, minister, leading figur
|
https://en.wikipedia.org/wiki/Suhas%20Patankar
|
Suhas V. Patankar (born 22 February 1941) is an Indian mechanical engineer. He is a pioneer in the field of computational fluid dynamics (CFD) and the finite volume method. He is currently a Professor Emeritus at the University of Minnesota. He is also president of Innovative Research, Inc. Patankar was born in Pune, Maharashtra, India.
Early life and education
Patankar received his bachelor's degree in mechanical engineering in 1962 from the College of Engineering, Pune, which is affiliated to the University of Pune and his Master of Technology degree in mechanical engineering from the Indian Institute of Technology Bombay in 1964. In 1967 he received his Ph.D. in mechanical engineering from the Imperial College, University of London.
Career
Patankar's most important contribution to the field of CFD is the SIMPLE algorithm that he developed along with his colleagues at Imperial College. Patankar is the author of a book in computational fluid dynamics titled Numerical Heat Transfer and Fluid Flow which was first published in 1980. This book has since been considered one of the groundbreaking contributions to computational fluid dynamics due to its emphasis on physical understanding and insight into the fluid flow and heat transfer phenomena. He is also one of the most cited authors in science and engineering.
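The flavour of the finite volume method that the book popularizes can be illustrated with a toy problem. The sketch below is not Patankar's SIMPLE algorithm itself, and the names and grid choices are this example's assumptions: it discretizes steady one-dimensional heat conduction on a uniform cell-centred grid and solves the resulting system with the tridiagonal matrix algorithm (TDMA) that the book also describes.

```python
def tdma(lower, diag, upper, rhs):
    """Thomas algorithm (TDMA) for a tridiagonal linear system."""
    n = len(diag)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = upper[0] / diag[0], rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - lower[i] * cp[i - 1]
        cp[i] = upper[i] / m
        dp[i] = (rhs[i] - lower[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Steady 1-D conduction, unit conductivity, T(0) = 0 and T(1) = 1.
# Cell-centred finite volumes: interior cells exchange fluxes with both
# neighbours; boundary cells see the wall across half a cell width,
# which doubles the boundary coefficient.
n = 8
lower = [0.0] + [-1.0] * (n - 1)
upper = [-1.0] * (n - 1) + [0.0]
diag = [3.0] + [2.0] * (n - 2) + [3.0]
rhs = [2.0 * 0.0] + [0.0] * (n - 2) + [2.0 * 1.0]
T = tdma(lower, diag, upper, rhs)
print(T)  # linear profile through the cell centres: (i + 0.5) / n
```

The exact solution is linear, so the computed cell values should match (i + 0.5)/n to machine precision.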
References
1941 births
Living people
American mechanical engineers
Indian mechanical engineers
Computational fluid dynamicists
Savitribai Phule Pune University alumn
|
https://en.wikipedia.org/wiki/Helly%E2%80%93Bray%20theorem
|
In probability theory, the Helly–Bray theorem relates the weak convergence of cumulative distribution functions to the convergence of expectations of certain measurable functions. It is named after Eduard Helly and Hubert Evelyn Bray.
Let F and F1, F2, ... be cumulative distribution functions on the real line. The Helly–Bray theorem states that if Fn converges weakly to F, then

∫ g(x) dFn(x) → ∫ g(x) dF(x) as n → ∞

for each bounded, continuous function g: R → R, where the integrals involved are Riemann–Stieltjes integrals.
Note that if X and X1, X2, ... are random variables corresponding to these distribution functions, then the Helly–Bray theorem does not imply that E(Xn) → E(X), since g(x) = x is not a bounded function.
In fact, a stronger and more general theorem holds. Let P and P1, P2, ... be probability measures on some set S. Then Pn converges weakly to P if and only if

∫ f dPn → ∫ f dP as n → ∞

for all bounded, continuous and real-valued functions f on S. (The integrals in this version of the theorem are Lebesgue–Stieltjes integrals.)
The more general theorem above is sometimes taken as defining weak convergence of measures (see Billingsley, 1999, p. 3).
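The convergence can be observed numerically in a simple case. In the sketch below, the discrete distributions and the test function are this example's choices, not from the theorem's sources: Xn is uniform on {1/n, ..., n/n}, which converges weakly to the uniform distribution on [0, 1], so E[g(Xn)] approaches the integral of g for any bounded continuous g.

```python
import math

def expect_discrete(g, n):
    """E[g(X_n)] for X_n uniform on {1/n, 2/n, ..., n/n}."""
    return sum(g(k / n) for k in range(1, n + 1)) / n

g = math.cos                 # bounded and continuous on [0, 1]
limit = math.sin(1.0)        # integral of cos over [0, 1]
for n in (10, 100, 1000):
    print(n, expect_discrete(g, n), "->", limit)
```

With the unbounded choice g(x) = x the theorem gives no such guarantee, which is exactly the caveat noted above.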
References
Probability theorems
|
https://en.wikipedia.org/wiki/Nonmetricity%20tensor
|
In mathematics, the nonmetricity tensor in differential geometry is the covariant derivative of the metric tensor. It is therefore a tensor field of order three. It vanishes for the case of Riemannian geometry and can be
used to study non-Riemannian spacetimes.
Definition
In components, it is defined as

Q_{μαβ} = ∇_μ g_{αβ}.

It measures the rate of change of the components of the metric tensor along the flow of a given vector field u, since

(∇_u g)_{αβ} = u^μ Q_{μαβ},

where {∂/∂x^μ}, μ = 0, 1, 2, 3, is the coordinate basis of vector fields of the tangent bundle, in the case of a 4-dimensional manifold.
Relation to connection
We say that a connection ∇ is compatible with the metric when its associated covariant derivative of the metric tensor is zero, i.e.

∇_μ g_{αβ} = 0.
If the connection is also torsion-free (i.e. totally symmetric) then it is known as the Levi-Civita connection, which is the only one without torsion and compatible with the metric tensor. If we see it from a geometrical point of view, a non-vanishing nonmetricity tensor for a metric tensor implies that the modulus of a vector defined on the tangent bundle to a certain point of the manifold, changes when it is evaluated along the direction (flow) of another arbitrary vector.
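The geometric remark above can be made precise with a short standard computation, written here for illustration. Writing Q_{μαβ} = ∇_μ g_{αβ} for the nonmetricity tensor in components, take a curve with tangent u and a vector v parallel-transported along it (so ∇_u v = 0); then

```latex
\frac{d}{d\lambda}\, g(v,v)
  = \nabla_u \left( g_{\alpha\beta}\, v^{\alpha} v^{\beta} \right)
  = (\nabla_u g)_{\alpha\beta}\, v^{\alpha} v^{\beta}
  = u^{\mu} Q_{\mu\alpha\beta}\, v^{\alpha} v^{\beta},
```

so the squared modulus of v is constant along every such curve exactly when the nonmetricity tensor vanishes, as it does for the Levi-Civita connection.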
References
External links
Differential geometry
|
https://en.wikipedia.org/wiki/Zeroth%20law
|
Zeroth law may refer to:
Zeroth law of black hole thermodynamics, about event horizons of black holes
Zeroth law of robotics, an addition to Isaac Asimov's Three Laws of Robotics
Zeroth law of thermodynamics, in relation to thermal equilibrium
See also
Zeroth (disambiguation)
|
https://en.wikipedia.org/wiki/Collectively%20exhaustive%20events
|
In probability theory and logic, a set of events is jointly or collectively exhaustive if at least one of the events must occur. For example, when rolling a six-sided die, the outcomes 1, 2, 3, 4, 5, and 6 are collectively exhaustive, because they encompass the entire range of possible outcomes.
Another way to describe collectively exhaustive events is that their union must cover all the events within the entire sample space. For example, events A and B are said to be collectively exhaustive if

A ∪ B = S,

where S is the sample space.
Compare this to the concept of a set of mutually exclusive events. In such a set no more than one event can occur at a given time. (In some forms of mutual exclusion only one event can ever occur.) The set of all possible die rolls is both mutually exclusive and collectively exhaustive (i.e., "MECE"). The events 1 and 6 are mutually exclusive but not collectively exhaustive. The events "even" (2, 4 or 6) and "not-6" (1, 2, 3, 4 or 5) are collectively exhaustive but not mutually exclusive. In some forms of mutual exclusion only one event can ever occur, whether collectively exhaustive or not. For example, tossing a particular biscuit for a group of several dogs cannot be repeated, no matter which dog snaps it up.
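These set-theoretic definitions translate directly into code. A minimal sketch (the function names are this example's, not standard terminology in any library):

```python
def collectively_exhaustive(events, sample_space):
    """True if the union of the events covers the whole sample space."""
    return set().union(*events) == set(sample_space)

def mutually_exclusive(events):
    """True if no outcome appears in more than one event."""
    seen = set()
    for event in events:
        if seen & set(event):
            return False
        seen |= set(event)
    return True

die = range(1, 7)
print(collectively_exhaustive([{1, 2, 3}, {4, 5, 6}], die))        # True
print(collectively_exhaustive([{2, 4, 6}, {1, 2, 3, 4, 5}], die))  # True
print(mutually_exclusive([{2, 4, 6}, {1, 2, 3, 4, 5}]))            # False: 2 and 4 repeat
```

The die examples above behave as described: "even" and "not-6" cover the sample space but overlap, while {1} and {6} are exclusive but not exhaustive.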
One example of an event that is both collectively exhaustive and mutually exclusive is tossing a coin. The outcome must be either heads or tails, i.e. P(heads or tails) = 1, so the outcomes are collectively exhaustiv
|
https://en.wikipedia.org/wiki/Vertisol
|
A vertisol is a Soil Order in the USDA soil taxonomy and a Reference Soil Group in the World Reference Base for Soil Resources (WRB). It is also defined in many other soil classification systems. In the Australian Soil Classification it is called vertosol. Vertisols have a high content of expansive clay minerals, many of them belonging to the montmorillonites that form deep cracks in drier seasons or years. In a phenomenon known as argillipedoturbation, alternate shrinking and swelling causes self-ploughing, where the soil material consistently mixes itself, causing some vertisols to have an extremely deep A horizon and no B horizon. (A soil with no B horizon is called an A/C soil). This heaving of the underlying material to the surface often creates a microrelief known as gilgai.
Vertisols typically form from highly basic rocks, such as basalt, in climates that are seasonally humid or subject to erratic droughts and floods, or in areas of impeded drainage. Depending on the parent material and the climate, they can range from grey or red to the more familiar deep black (known as "black earths" in Australia, "black gumbo" in East Texas, "black cotton" soils in East Africa, and "vlei soils" in South Africa).
Vertisols are found between 50°N and 45°S of the equator. Major areas where vertisols are dominant are eastern Australia (especially inland Queensland and New South Wales), the Deccan Plateau of India, and parts of southern Sudan, Ethiopia, Kenya, Chad (the Gezira), South Afric
|
https://en.wikipedia.org/wiki/Nonelementary%20integral
|
In mathematics, a nonelementary antiderivative of a given elementary function is an antiderivative (or indefinite integral) that is, itself, not an elementary function (i.e. a function constructed from a finite number of quotients of constant, algebraic, exponential, trigonometric, and logarithmic functions using field operations). A theorem by Liouville in 1835 provided the first proof that nonelementary antiderivatives exist. This theorem also provides a basis for the Risch algorithm for determining (with difficulty) which elementary functions have elementary antiderivatives.
Examples
Examples of functions with nonelementary antiderivatives include:
√(1 − x⁴) (elliptic integral)
1/ln x (logarithmic integral)
e^(−x²) (error function, Gaussian integral)
sin(x²) and cos(x²) (Fresnel integral)
sin(x)/x (sine integral, Dirichlet integral)
e^x/x (exponential integral)
e^(e^x) (in terms of the exponential integral)
ln(ln x) (in terms of the logarithmic integral)
x^(c−1) e^(−x) (incomplete gamma function); for c = 0 the antiderivative can be written in terms of the exponential integral; for c = 1/2 in terms of the error function; for c any positive integer, the antiderivative is elementary.
Some common non-elementary antiderivative functions are given names, defining so-called special functions, and formulas involving these new functions can express a larger class of non-elementary antiderivatives. The examples above name the corresponding special functions in parentheses.
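Several of these special functions can be evaluated to any accuracy from power series. As an illustrative sketch (the function name is ad hoc), integrating the Taylor series of e^(−t²) term by term yields a rapidly converging series for the error function:

```python
import math

def erf_series(x, terms=30):
    """erf(x) = (2/sqrt(pi)) * sum_{n>=0} (-1)^n x^(2n+1) / (n! (2n+1)),
    obtained by integrating the Taylor series of exp(-t^2) term by term."""
    s = sum((-1) ** n * x ** (2 * n + 1) / (math.factorial(n) * (2 * n + 1))
            for n in range(terms))
    return 2.0 / math.sqrt(math.pi) * s

print(erf_series(1.0))   # agrees with the library value below
print(math.erf(1.0))
```

For moderate x the alternating series converges quickly; for large x other expansions are preferred in practice.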
Properties
Nonelementary antiderivatives can often be evaluated using Taylor series. Even if a
|
https://en.wikipedia.org/wiki/Proper%20orthogonal%20decomposition
|
The proper orthogonal decomposition is a numerical method that enables a reduction in the complexity of computer intensive simulations such as computational fluid dynamics and structural analysis (like crash simulations). Typically in fluid dynamics and turbulence analysis, it is used to replace the Navier–Stokes equations with simpler models to solve.
It belongs to a class of algorithms called model order reduction (or in short model reduction). What it essentially does is to train a model based on simulation data. To this extent, it can be associated with the field of machine learning.
POD and PCA
The main use of POD is to decompose a physical field (like pressure or temperature in fluid dynamics, or stress and deformation in structural analysis) according to the different variables that influence its physical behavior. As its name hints, it performs an orthogonal decomposition along the principal components of the field. As such it is closely related to the principal component analysis of Pearson in the field of statistics, and to the singular value decomposition in linear algebra, because it refers to eigenvalues and eigenvectors of a physical field. In those domains, it is associated with the research of Karhunen and Loève, and their Karhunen–Loève theorem.
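A minimal numerical sketch of the idea (pure Python, with synthetic snapshot data invented for this example; practical POD implementations use an SVD of the snapshot matrix): the leading POD mode is the dominant eigenvector of the snapshot covariance, which power iteration can find.

```python
import math

def dominant_pod_mode(snapshots, iters=200):
    """Leading POD mode: dominant eigenvector of C = X X^T, where the
    columns of X are the snapshots; found by power iteration on C,
    applied as X (X^T v) so C is never formed explicitly."""
    n = len(snapshots[0])
    v = [1.0] * n
    for _ in range(iters):
        coeffs = [sum(s[i] * v[i] for i in range(n)) for s in snapshots]  # X^T v
        w = [sum(c * s[i] for c, s in zip(coeffs, snapshots)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

# Synthetic snapshots: every field is a multiple of one spatial shape phi,
# so the leading mode must align with phi (up to sign).
phi = [math.sin(math.pi * i / 9) for i in range(10)]
snapshots = [[a * p for p in phi] for a in (1.0, -0.5, 2.0, 0.3)]
mode = dominant_pod_mode(snapshots)
```

With real simulation data the snapshot matrix has many modes, and the first few typically capture most of the energy, which is what makes model reduction possible.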
Mathematical expression
The first idea behind the Proper Orthogonal Decomposition (POD), as it was originally formulated in the domain of fluid dynamics to analyze turbulences, is to decompose a random vector f
|
https://en.wikipedia.org/wiki/HomeRF
|
HomeRF was a wireless networking specification for home devices. It was developed in 1998 by the Home Radio Frequency Working Group, a consortium of mobile wireless companies that included Proxim Wireless, Intel, Siemens AG, Motorola, Philips and more than 100 other companies.
The group was disbanded in January 2003, after other wireless networks became accessible to home users and Microsoft began including support for them in its Windows operating systems. As a result, HomeRF fell into obsolescence.
Description
Initially called Shared Wireless Access Protocol (SWAP) and later just HomeRF, this open specification allowed PCs, peripherals, cordless phones and other consumer devices to share and communicate voice and data in and around the home without the complication and expense of running new wires. HomeRF combined several wireless technologies in the 2.4 GHz ISM band, including IEEE 802.11 FH (the frequency-hopping version of wireless data networking) and DECT (the most prevalent digital cordless telephony standard in the world) to meet the unique home networking requirements for security, quality of service (QoS) and interference immunity—issues that still plagued Wi-Fi (802.11b and g).
HomeRF used frequency hopping spread spectrum (FHSS) in the 2.4 GHz frequency band and in theory could achieve a maximum of 10 Mbit/s throughput; its nodes could travel within a 50-meter range of a wireless access point while remaining connected to the personal area network (PAN). Seve
|
https://en.wikipedia.org/wiki/Ptolemy%27s%20theorem
|
In Euclidean geometry, Ptolemy's theorem is a relation between the four sides and two diagonals of a cyclic quadrilateral (a quadrilateral whose vertices lie on a common circle). The theorem is named after the Greek astronomer and mathematician Ptolemy (Claudius Ptolemaeus). Ptolemy used the theorem as an aid to creating his table of chords, a trigonometric table that he applied to astronomy.
If the vertices of the cyclic quadrilateral are A, B, C, and D in order, then the theorem states that:

AC · BD = AB · CD + BC · AD
This relation may be verbally expressed as follows:
If a quadrilateral is cyclic then the product of the lengths of its diagonals is equal to the sum of the products of the lengths of the pairs of opposite sides.
Moreover, the converse of Ptolemy's theorem is also true:
In a quadrilateral, if the sum of the products of the lengths of its two pairs of opposite sides is equal to the product of the lengths of its diagonals, then the quadrilateral can be inscribed in a circle i.e. it is a cyclic quadrilateral.
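The relation can be sanity-checked numerically; in the sketch below (the specific points are arbitrary choices for illustration), four points placed in cyclic order on the unit circle satisfy it to floating-point accuracy:

```python
import math

def dist(p, q):
    """Euclidean distance between two points in the plane."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Four points on the unit circle, in cyclic order: a cyclic quadrilateral.
A, B, C, D = [(math.cos(t), math.sin(t)) for t in (0.3, 1.1, 2.8, 4.9)]

diagonals = dist(A, C) * dist(B, D)
sides = dist(A, B) * dist(C, D) + dist(B, C) * dist(A, D)
print(abs(diagonals - sides))  # ~0: Ptolemy's relation holds
```

Perturbing any vertex off the circle makes the difference strictly positive, consistent with Ptolemy's inequality for non-cyclic quadrilaterals.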
Corollaries on Inscribed Polygons
Equilateral triangle
Ptolemy's Theorem yields as a corollary a pretty theorem regarding an equilateral triangle inscribed in a circle.
Given an equilateral triangle inscribed in a circle and a point on the circle:
The distance from the point to the most distant vertex of the triangle is the sum of the distances from the point to the two nearer vertices.
Proof: Follows immediately from Ptolemy's theorem. With s the side of the equilateral triangle, q the distance from the point to the far vertex, and p, r the distances to the two nearer vertices, the theorem gives q·s = p·s + r·s, hence q = p + r.
Square
Any square can be inscrib
|
https://en.wikipedia.org/wiki/Formate
|
Formate (IUPAC name: methanoate) is the conjugate base of formic acid. Formate is an anion (HCO2−) or its derivatives such as esters of formic acid. The salts and esters are generally colorless.
Fundamentals
When dissolved in water, formic acid converts to formate:

HCO2H ⇌ HCO2− + H+
Formate is a planar anion. The two oxygen atoms are equivalent and bear a partial negative charge. The remaining C-H bond is not acidic.
Biochemistry
Formate is a common C-1 source in living systems. It is formed from many precursors including choline, serine, and sarcosine. It provides a C-1 source in the biosynthesis of some nucleic acids. Formate (or formic acid) is invoked as a leaving group in the demethylation of some sterols.
These conversions are catalyzed by aromatase enzymes using O2 as the oxidant. Specific conversions include testosterone to estradiol and androstenedione to estrone.
Formate is reversibly oxidized by the enzyme formate dehydrogenase from Desulfovibrio gigas:

HCO2− ⇌ CO2 + H+ + 2 e−
Formate esters
Formate esters have the formula HCOOR (alternative way of writing formula ROC(O)H or RO2CH). Many form spontaneously when alcohols dissolve in formic acid.
The most important formate ester is methyl formate, which is produced as an intermediate en route to formic acid. Methanol and carbon monoxide react in the presence of a strong base, such as sodium methoxide:

CH3OH + CO → HCO2CH3
Hydrolysis of methyl formate gives formic acid and regenerates methanol:

HCO2CH3 + H2O → HCO2H + CH3OH
Formic acid is used for many applications in industry.
Formate ester
|
https://en.wikipedia.org/wiki/Schur%27s%20theorem
|
In discrete mathematics, Schur's theorem is any of several theorems of the mathematician Issai Schur. In differential geometry, Schur's theorem is a theorem of Axel Schur. In functional analysis, Schur's theorem is often called Schur's property, also due to Issai Schur.
Ramsey theory
In Ramsey theory, Schur's theorem states that for any partition of the positive integers into a finite number of parts, one of the parts contains three integers x, y, z with x + y = z.
For every positive integer c, S(c) denotes the smallest number S such that for every partition of the integers {1, ..., S} into c parts, one of the parts contains integers x, y, and z with x + y = z. Schur's theorem ensures that S(c) is well-defined for every positive integer c. The numbers of the form S(c) are called Schur numbers.
Folkman's theorem generalizes Schur's theorem by stating that there exist arbitrarily large sets of integers, all of whose nonempty sums belong to the same part.
Using this definition, the only known Schur numbers are S(n) = 2, 5, 14, 45, and 161 for n = 1, ..., 5. The proof that S(5) = 161 was announced in 2017 and took up 2 petabytes of space.
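The small cases can be verified by brute force. In the sketch below (function names are ad hoc; x = y is allowed, as in the standard statement), every 2-colouring of {1, ..., 5} contains a monochromatic solution of x + y = z, while {1, ..., 4} admits a colouring that avoids one, confirming S(2) = 5:

```python
from itertools import product

def has_mono_triple(coloring):
    """True if some colour class contains x, y and x + y (x = y allowed)."""
    nums = sorted(coloring)
    return any(coloring[x] == coloring[y] == coloring[x + y]
               for x in nums for y in nums if x + y in coloring)

def schur_forced(n, colours):
    """True if every colouring of {1..n} with the given number of colours
    contains a monochromatic solution of x + y = z."""
    return all(has_mono_triple(dict(zip(range(1, n + 1), c)))
               for c in product(range(colours), repeat=n))

print(schur_forced(4, 2))  # False: e.g. {1, 4} vs {2, 3} avoids a triple
print(schur_forced(5, 2))  # True, so S(2) = 5
```

The exhaustive search over colourings grows exponentially, which is why the S(5) computation required such enormous resources.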
Combinatorics
In combinatorics, Schur's theorem tells the number of ways for expressing a given number as a (non-negative, integer) linear combination of a fixed set of relatively prime numbers. In particular, if {a1, ..., an} is a set of integers such that gcd(a1, ..., an) = 1, the number of different tuples (c1, ..., cn) of non-negative integers such that c1a1 + ... + cnan = x, when x goes to infinity, is asymptotic to:

x^(n−1) / ((n − 1)! a1 a2 ... an)
As a result, for every set of relatively prim
|
https://en.wikipedia.org/wiki/Education%20in%20North%20Korea
|
Education in North Korea is universal, state-funded schooling provided by the government. As of 2021, the UNESCO Institute for Statistics does not report any data for North Korea's literacy rates. Some children go through one year of kindergarten, four years of primary education, six years of secondary education, and then on to university. The North Korean state claims its national literacy rate for citizens aged 15 and older is 100 percent.
In 1988, the United Nations Educational, Scientific, and Cultural Organization (UNESCO) reported that North Korea had 35,000 preprimary, 60,000 primary, 111,000 secondary, 23,000 college and university, and 4,000 other postsecondary teachers.
History
Formal education has played a central role in the social and cultural development of both traditional Korea and contemporary North Korea. During the Joseon Dynasty, the royal court established a system of schools that taught Confucian subjects in the provinces as well as in four central secondary schools in the capital. There was no state-supported system of primary education.
During the 15th century, state-supported schools declined in quality and were supplanted in importance by private academies, the seowon, centers of a Neo-Confucian revival in the 16th century. Higher education was provided by the Seonggyungwan, the Confucian national university, in Seoul. Its enrollment was limited to 200 students who had passed the lower civil-service examinations and were preparing for the highest examinat
|
https://en.wikipedia.org/wiki/Wigner%E2%80%93Seitz%20cell
|
The Wigner–Seitz cell, named after Eugene Wigner and Frederick Seitz, is a primitive cell which has been constructed by applying Voronoi decomposition to a crystal lattice. It is used in the study of crystalline materials in crystallography.
The unique property of a crystal is that its atoms are arranged in a regular three-dimensional array called a lattice. All the properties attributed to crystalline materials stem from this highly ordered structure. Such a structure exhibits discrete translational symmetry. In order to model and study such a periodic system, one needs a mathematical "handle" to describe the symmetry and hence draw conclusions about the material properties consequent to this symmetry. The Wigner–Seitz cell is a means to achieve this.
A Wigner–Seitz cell is an example of a primitive cell, which is a unit cell containing exactly one lattice point. For any given lattice, there are an infinite number of possible primitive cells. However there is only one Wigner–Seitz cell for any given lattice. It is the locus of points in space that are closer to that lattice point than to any of the other lattice points.
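The locus definition lends itself to a direct membership test. A minimal sketch for a 2-D square lattice (the helper name and the neighbour set are this example's assumptions; in practice only nearby lattice points can bound the cell, so checking a small neighbourhood suffices):

```python
def in_wigner_seitz(point, lattice_points, origin=(0.0, 0.0)):
    """True if `point` lies in the Wigner-Seitz cell of `origin`, i.e. it is
    at least as close to `origin` as to every other lattice point given."""
    def d2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    return all(d2(point, origin) <= d2(point, lp) for lp in lattice_points)

# Square lattice neighbours of the origin: the Wigner-Seitz cell is the
# unit square centred at the origin (|x| <= 1/2, |y| <= 1/2).
neighbours = [(i, j) for i in (-1, 0, 1) for j in (-1, 0, 1) if (i, j) != (0, 0)]
print(in_wigner_seitz((0.3, 0.2), neighbours))   # True
print(in_wigner_seitz((0.7, 0.0), neighbours))   # False: closer to (1, 0)
```

For less symmetric lattices the same test carves out hexagons or more complicated polyhedra, which is exactly the Voronoi decomposition described above.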
A Wigner–Seitz cell, like any primitive cell, is a fundamental domain for the discrete translation symmetry of the lattice. The primitive cell of the reciprocal lattice in momentum space is called the Brillouin zone.
Overview
Background
The concept of Voronoi decomposition was investigated by Peter Gustav Lejeune Dirichlet, leading to the name Dirichlet
|
https://en.wikipedia.org/wiki/Breakpoint%20%28disambiguation%29
|
A breakpoint is an execution stop point in the code of a computer program.
Breakpoint or break point may also refer to:
BCR (gene), the gene that encodes the breakpoint cluster region protein
Break point, in tennis
Break Point, a 2002 novel by Rosie Rushton
Break Point (film), a 2015 U.S. comedy film
Breakpoint (demoparty), a German demoscene party
Breakpoint (meteorology), a location referred to by meteorologists when issuing watches, warnings, or advisories for specific areas
Breakpoint (novel), a 2007 novel by Richard A. Clarke
Breakpoint ("The Shield"), a 2003 episode of the television show The Shield
Breakpoint, an indicator of a microbial organism's susceptibility or resistance to a particular antimicrobial; see Minimum inhibitory concentration
"Breakpoint", a song by Megadeth on the 1995 album Hidden Treasures
See also
Break Point (disambiguation)
Breaking point (disambiguation)
Point Break, a 1991 action film
Tom Clancy's Ghost Recon Breakpoint, a 2019 online video game by Ubisoft
|
https://en.wikipedia.org/wiki/Duophonic%20Records
|
Duophonic Ultra High Frequency Disks Limited (also known as Duophonic Records or Duophonic Super 45s) is a British independent record label formed by English-French rock band Stereolab in 1991. The label has two imprints: Duophonic Ultra High Frequency Disks for UK Stereolab releases licensed to various labels worldwide, and Duophonic Super 45s for releases of other artists and certain Stereolab UK-only releases. Duophonic's first release was Stereolab's debut EP Super 45 (1991), limited to 880 copies; of these, forty copies had handmade covers that were produced by Martin Pike in his father's garage.
Bands that have released records on Duophonic include Broadcast, the High Llamas, Labradford, Tortoise, Pram, Yo La Tengo, the Notwist, and Apparat Organ Quartet. Daft Punk, one of the most successful electronic bands of the 1990s, released their first songs under the name Darlin' on the 1993 Duophonic compilation Shimmies in Super 8. Duophonic's most successful release is Stereolab's Emperor Tomato Ketchup (1996), which was licensed to Elektra Records outside the UK and has sold over 60,000 copies worldwide.
Duophonic is managed by Martin Pike, and is owned by Tim Gane (34%), Laetitia Sadier (34%), and Pike (32%). Although founded in 1991, the label did not become a limited company until 25 August 1993, when Pike relocated from Horsham, West Sussex, to East Dulwich in the London Borough of Southwark. From there, Pike also runs Associated London Management [2008] Ltd, a compan
|
https://en.wikipedia.org/wiki/Crystal%20%28song%29
|
"Crystal" is a song by English rock band New Order. The song was released on 11 July 2001 as the first single from their seventh studio album, Get Ready (2001). "Crystal" entered the UK Singles Chart at number eight, attracting considerable attention and critical praise as the band's comeback single, their first original material since 1993. The song also found success internationally, peaking at number three in Canada, number seven in Finland, and reaching the top 50 in Germany, Ireland, Italy, and Sweden. "Crystal" appears as the first track on the album in a different version than the single release, with an extended intro and coda.
Release
Singer-guitarist Bernard Sumner originally gave the song to German record label Mastermind for Success, and it was recorded by label artist Corvin Dalek. However, DJ Pete Tong heard the song and declared it to be the best New Order single since "Blue Monday", leading Sumner to reconsider the gift and have New Order record and release it.
A version of the single was also released in Japan to promote the release of the New Order DVD 316, and has a different cover that resembles the 316 cover. B-sides for the single were 4 live audio tracks taken from the DVD. The single was B-sided by a variety of remixes, and an original song titled "Behind Closed Doors". All versions feature extensive backing vocals from Dawn Zee, mostly wordless. Zee has continued to perform with New Order on all their successive studio albums.
After the song was released, a
|
https://en.wikipedia.org/wiki/Generalized%20canonical%20correlation
|
In statistics, the generalized canonical correlation analysis (gCCA) is a way of making sense of cross-correlation matrices between the sets of random variables when there are more than two sets. While a conventional CCA generalizes principal component analysis (PCA) to two sets of random variables, a gCCA generalizes PCA to more than two sets of random variables. The canonical variables represent those common factors that can be found by a large PCA of all of the transformed random variables after each set has undergone its own PCA.
Applications
The Helmert-Wolf blocking (HWB) method of estimating linear regression parameters can find an optimal solution only if all cross-correlations between the data blocks are zero. They can always be made to vanish by introducing a new regression parameter for each common factor. The gCCA method can be used for finding those harmful common factors that create cross-correlation between the blocks. However, no optimal HWB solution exists if the random variables do not contain enough information on all of the new regression parameters.
References
|
https://en.wikipedia.org/wiki/Red%20Hat%20Enterprise%20Linux%20derivatives
|
Red Hat Enterprise Linux derivatives are Linux distributions that are based on the source code of Red Hat Enterprise Linux (RHEL).
History
Red Hat Linux was one of the first and most popular Linux distributions. This was largely because, while a paid-for supported version was available, a freely downloadable version was also available. Since the only difference between the paid-for option and the free option was support, a great number of people chose to use the free version.
In 2003, Red Hat made the decision to split its Red Hat Linux product into two: Red Hat Enterprise Linux for customers willing to pay for it, and Fedora, made available free of charge but with updates for each release for only approximately 13 months.
Fedora has its own beta cycle and has some issues fixed by contributors, who include Red Hat staff. However, its quick and non-conservative release cycle means it might not be suitable for some users. Fedora serves to some extent as a test-bed for Red Hat, allowing them to beta-test new features before they are included in Red Hat Enterprise Linux. Since the release of Fedora, Red Hat has no longer made binary versions of its commercial product available free of charge.
Motivations
Red Hat does not make a compiled version of its Enterprise Linux product available for free download. However, as the license terms on which it is mostly based explicitly stipulate, Red Hat has made the entire source code available in RPM format via their network of ser
|
https://en.wikipedia.org/wiki/Low-Frequency%20Array%20%28LOFAR%29
|
The Low-Frequency Array, or LOFAR, is a large radio telescope, with an antenna network located mainly in the Netherlands, and spreading across 7 other European countries as of 2019. Originally designed and built by ASTRON, the Netherlands Institute for Radio Astronomy, it was first opened by Queen Beatrix of The Netherlands in 2010, and has since been operated on behalf of the International LOFAR Telescope (ILT) partnership by ASTRON.
LOFAR consists of a vast array of omnidirectional radio antennas using a modern concept, in which the signals from the separate antennas are not connected directly electrically to act as a single large antenna, as they are in most array antennas. Instead, the LOFAR dipole antennas (of two types) are distributed in stations, within which the antenna signals can be partly combined in analogue electronics, then digitised, then combined again across the full station. This step-wise approach provides great flexibility in setting and rapidly changing the directional sensitivity on the sky of an antenna station. The data from all stations are then transported over fiber to a central digital processor, and combined in software to emulate a conventional radio telescope dish with a resolving power corresponding to the greatest distance between the antenna stations across Europe. LOFAR is thus an interferometric array, using about 20,000 small antennas concentrated in 52 stations since 2019. 38 of these stations are distributed across the Netherlands, bui
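The resolving power mentioned above follows the usual diffraction scaling, with angular resolution of order λ/B for the longest baseline B. A rough sketch with illustrative numbers (not official LOFAR specifications):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def angular_resolution_arcsec(freq_hz, baseline_m):
    """Diffraction-limited resolution ~ lambda / B for an interferometer,
    converted from radians to arcseconds."""
    wavelength = C / freq_hz
    return math.degrees(wavelength / baseline_m) * 3600.0

# Illustrative: a 150 MHz observation with a 1000 km longest baseline
# gives sub-arcsecond resolution despite the very long wavelength.
print(angular_resolution_arcsec(150e6, 1.0e6))
```

This is why combining stations across Europe matters: a single station, tens of metres across, would resolve only scales thousands of times coarser at the same frequency.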
|
https://en.wikipedia.org/wiki/Heinrich%20Gustav%20Hotho
|
Heinrich Gustav Hotho (Berlin, May 22, 1802 – Berlin, December 25, 1873) was a German historian of art and Right Hegelian. He is famous for being the compiler and editor of Hegel's posthumous work Vorlesungen über die Ästhetik ("Lectures on Aesthetics").
Biography
During boyhood he was affected for two years with blindness consequent on an attack of measles. But recovering his sight he studied so hard as to take his degree at Berlin in 1826. A year of travel spent in visiting Paris, London and the Low Countries determined his vocation.
He came home delighted with the treasures which he had seen, worked laboriously for a higher examination and passed as "docent" in aesthetics and art history. In 1829 he was made professor at the University of Berlin. In 1833 G. F. Waagen accepted him as assistant in the museum of the Prussian capital; and in 1858 he was promoted to the directorship of the Berlin print-room.
During a long and busy life, in which his time was divided between literature and official duties, Hotho's ambition had always been to master the history of the schools of Germany and the Netherlands. Accordingly what he published was generally confined to those countries. In 1842-1843 he gave to the world his account of German and Flemish painting. From 1853 to 1858 he revised and published anew a part of this work, which he called "The school of Hubert van Eyck, with his German precursors and contemporaries."
His attempt later on to write a history of Christian painting
|
https://en.wikipedia.org/wiki/Whey%20protein
|
Whey protein is a mixture of proteins isolated from whey, the liquid material created as a by-product of cheese production. The proteins consist of α-lactalbumin, β-lactoglobulin, serum albumin and immunoglobulins. Glycomacropeptide, the third largest component, is also present but is not a protein. Whey protein is commonly marketed as a protein supplement, and various health claims have been attributed to it. A review published in 2010 in the European Food Safety Authority Journal concluded that the provided literature did not adequately support the proposed claims. For muscle growth, whey protein has been shown to be slightly better compared to other types of protein, such as casein or soy.
Production of whey
Whey is left over when milk is coagulated during the process of cheese production, and contains everything that is soluble from milk after the pH is dropped to 4.6 during the coagulation process. It is a 5% solution of lactose in water and contains the water-soluble proteins of milk as well as some lipid content. Processing can be done by simple drying, or the relative protein content can be increased by removing the lactose, lipids and other non-protein materials. For example, membrane filtration separates the proteins from lipids, lactose and minerals in whey, which is followed by spray drying.
Whey can be denatured by heat. High heat (such as the sustained high temperatures above 72 °C associated with the pasteurization process) denatures whey proteins. While native
|
https://en.wikipedia.org/wiki/Clapp%20oscillator
|
The Clapp oscillator or Gouriet oscillator is an LC electronic oscillator that uses a particular combination of an inductor and three capacitors to set the oscillator's frequency. LC oscillators use a transistor (or vacuum tube or other gain element) and a positive feedback network. The oscillator has good frequency stability.
History
The Clapp oscillator design was published by James Kilton Clapp in 1948 while he worked at General Radio. According to Czech engineer Jiří Vackář, oscillators of this kind were independently developed by several inventors, and one developed by Gouriet had been in operation at the BBC since 1938.
Circuit
The Clapp oscillator uses a single inductor and three capacitors to set its frequency. The Clapp oscillator is often drawn as a Colpitts oscillator that has an additional capacitor () placed in series with the inductor.
The oscillation frequency in hertz (cycles per second) for the circuit in the figure, which uses a field-effect transistor (FET), is

  f0 = (1/(2π)) √((1/L)(1/C0 + 1/C1 + 1/C2))

The capacitors C1 and C2 are usually much larger than C0, so the 1/C0 term dominates the other capacitances, and the frequency is near the series resonance of L and C0. Clapp's paper gives an example where C1 and C2 are 40 times larger than C0; the change makes the Clapp circuit about 400 times more stable than the Colpitts oscillator for capacitance changes of C0.

Capacitors C1 and C2 form a voltage divider that determines the amount of feedback voltage applied to the transistor input.
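The frequency relation above can be sketched numerically; the component values below are invented for illustration (10 µH, 50 pF, and feedback capacitors 40 times larger, as in Clapp's example), and the helper name is ours.

```python
import math

def clapp_frequency(L, C0, C1, C2):
    """Clapp oscillator frequency: the inductor sees C0, C1, C2 in series."""
    return math.sqrt((1.0 / L) * (1.0 / C0 + 1.0 / C1 + 1.0 / C2)) / (2.0 * math.pi)

# Illustrative values: L = 10 uH, C0 = 50 pF, C1 = C2 = 40 * C0.
L, C0 = 10e-6, 50e-12
C1 = C2 = 40 * C0
f = clapp_frequency(L, C0, C1, C2)

# Because C1, C2 >> C0, f lies close to the series resonance of L and C0 alone:
f_series = 1.0 / (2.0 * math.pi * math.sqrt(L * C0))
print(f, f_series)  # the two frequencies agree to within a few percent
```

This makes the stability argument concrete: changes in C1 and C2 (e.g. from transistor capacitances) barely move the dominant 1/C0 term.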
Although the Clapp circuit
|
https://en.wikipedia.org/wiki/Assimilation%20%28biology%29
|
Assimilation is the process of absorption of vitamins, minerals, and other chemicals from food as part of the nutrition of an organism. In humans, this is always done with a chemical breakdown (enzymes and acids) and a physical breakdown (oral mastication and stomach churning), followed by the chemical alteration of substances in the bloodstream by the liver or cellular secretions. Although a few similar compounds can be absorbed in digestion, the bioavailability of many compounds is dictated by this second process, since both the liver and cellular secretions can be very specific in their metabolic action (see chirality). This second process is where the absorbed food reaches the cells via the liver.
Most foods are composed of largely indigestible components, depending on the enzymes and effectiveness of an animal's digestive tract. The most well-known of these indigestible compounds is cellulose, the basic chemical polymer in the makeup of plant cell walls. Most animals, however, do not produce cellulase, the enzyme needed to digest cellulose. However, some animal species have developed symbiotic relationships with cellulase-producing bacteria (see termites and metamonads). This allows termites to use the energy-dense cellulose carbohydrate. Other such enzymes are known to significantly improve bio-assimilation of nutrients. Because of the use of bacterial derivatives, enzymatic dietary supplements now contain such enzymes as amylase, glucoamylase, protease, invertase, peptidase
|
https://en.wikipedia.org/wiki/Rodrigues%27%20rotation%20formula
|
In the theory of three-dimensional rotation, Rodrigues' rotation formula, named after Olinde Rodrigues, is an efficient algorithm for rotating a vector in space, given an axis and angle of rotation. By extension, this can be used to transform all three basis vectors to compute a rotation matrix in , the group of all rotation matrices, from an axis–angle representation. In terms of Lie theory, the Rodrigues' formula provides an algorithm to compute the exponential map from the Lie algebra to its Lie group .
This formula is variously credited to Leonhard Euler, Olinde Rodrigues, or a combination of the two. A detailed historical analysis in 1989 concluded that the formula should be attributed to Euler, and recommended calling it "Euler's finite rotation formula." This proposal has received notable support, but some others have viewed the formula as just one of many variations of the Euler–Rodrigues formula, thereby crediting both.
Statement
If v is a vector in ℝ³ and k is a unit vector describing an axis of rotation about which v rotates by an angle θ according to the right-hand rule, the Rodrigues formula for the rotated vector v_rot is

  v_rot = v cos θ + (k × v) sin θ + k (k · v)(1 − cos θ)

The intuition of the above formula is that the first term scales the vector down, while the second skews it (via vector addition) toward the new rotational position. The third term re-adds the height (relative to k) that was lost by the first term.
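The formula transcribes directly into code. A minimal sketch (the function name is ours; k is assumed to be a unit vector):

```python
import numpy as np

def rodrigues_rotate(v, k, theta):
    """Rotate vector v about the unit axis k by angle theta (right-hand rule)."""
    v = np.asarray(v, dtype=float)
    k = np.asarray(k, dtype=float)
    return (v * np.cos(theta)                        # scaled-down original
            + np.cross(k, v) * np.sin(theta)         # skew toward new position
            + k * np.dot(k, v) * (1.0 - np.cos(theta)))  # restore height along k

# Rotating the x-axis by 90 degrees about the z-axis yields the y-axis:
print(rodrigues_rotate([1, 0, 0], [0, 0, 1], np.pi / 2))  # ~[0, 1, 0]
```

A quick sanity check: any vector parallel to the axis is left unchanged, since the cross product vanishes and the first and third terms sum back to v.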
An alternative statement is to write the axis vector as a cross product of any two nonzero vectors and
|
https://en.wikipedia.org/wiki/Ariel%20%28detergent%29
|
Ariel is a brand of laundry detergent developed by P&G European Technology Centre in Belgium. The enzymes for the detergent are provided by Novozymes.
History
It was launched in multiple markets between 1967 and 1969. The brand is owned by US multinational Procter & Gamble and is popular in Mexico and India.
References
External links
Official UK Website
Official German Website
Products introduced in 1967
Cleaning product brands
Laundry detergents
Procter & Gamble brands
Consumer goods
British brands
|
https://en.wikipedia.org/wiki/Heterokaryon
|
A heterokaryon is a multinucleate cell that contains genetically different nuclei. Heterokaryotic and heterokaryosis are derived terms. This is a special type of syncytium. This can occur naturally, such as in the mycelium of fungi during sexual reproduction, or artificially as formed by the experimental fusion of two genetically different cells, as e.g., in hybridoma technology.
Etymology
Heterokaryon is from neo-classical Greek hetero, meaning different, and karyon, meaning kernel or, in this case, nucleus.
The term was coined in 1965, independently by B. Ephrussi and M. Weiss, by H. Harris and J. F. Watkins, and by Y. Okada and F. Murayama.
Occurrence
Heterokaryons are found in the life cycle of yeasts, for example Saccharomyces cerevisiae, a genetic model organism. The heterokaryon stage is produced from the fusion of two haploid cells. This transient heterokaryon can produce further haploid buds, or cell nuclei can fuse and produce a diploid cell, which can then undergo mitosis.
Ciliate protozoans
The term was first used for ciliate protozoans such as Tetrahymena. This has two types of cell nuclei, a large, somatic macronucleus and a small, germline micronucleus. Both exist in a single cell at the same time and carry out different functions with distinct cytological and biochemical properties.
True fungi
Many fungi (notably the arbuscular mycorrhizal fungi) exhibit heterokaryosis. The haploid nuclei within a mycelium may differ from one another not merely by accumulating
|
https://en.wikipedia.org/wiki/SHC
|
SHC may refer to:
Science
Src homology 2 domain-containing, in structural biology, a structural domain in signal transduction proteins
SHC1, a human gene
Sirohydrochlorin, a chemical precursor to various enzymes.
Specific heat capacity, in physics, a substance's heat capacity per unit mass, usually denoted by the symbol c or s
Spontaneous human combustion, a theory that, under certain conditions, a human being may burn without any apparent external source of ignition
Schools
Spring Hill College, a predominantly undergraduate Jesuit university in Mobile, Alabama
Schreyer Honors College, an honors program at the Pennsylvania State University
Stanford Humanities Center, a humanities organization located at Stanford University
Sacred Heart Cathedral Preparatory, a co-ed Catholic school in San Francisco, California, United States
Sacred Heart College, Auckland, a Catholic, Marist secondary school in Auckland, New Zealand
Religion
Sacred Heart Cathedral (disambiguation), a name for multiple Catholic cathedrals
Society of the Holy Cross (Korea), an order of nuns in the Anglican Church of Korea
Other uses
Canadian Historical Association (Société historique du Canada)
Serving His Children, a Christian nonprofit organization based in Uganda
Shc (shell script compiler) for Unix-like operating systems
South Health Campus, in Calgary, Alberta, Canada
A song on the Sacred Hearts Club album by Foster the People
|
https://en.wikipedia.org/wiki/Clairaut%27s%20theorem%20%28gravity%29
|
Clairaut's theorem characterizes the surface gravity on a viscous rotating ellipsoid in hydrostatic equilibrium under the action of its gravitational field and centrifugal force. It was published in 1743 by Alexis Claude Clairaut in a treatise which synthesized physical and geodetic evidence that the Earth is an oblate rotational ellipsoid. It was initially used to relate the gravity at any point on the Earth's surface to the position of that point, allowing the ellipticity of the Earth to be calculated from measurements of gravity at different latitudes. Today it has been largely supplanted by the Somigliana equation.
History
Although it had been known since antiquity that the Earth was spherical, by the 17th century evidence was accumulating that it was not a perfect sphere. In 1672 Jean Richer found the first evidence that gravity was not constant over the Earth (as it would be if the Earth were a sphere); he took a pendulum clock to Cayenne, French Guiana and found that it lost about two and a half minutes per day compared to its rate at Paris. This indicated the acceleration of gravity was less at Cayenne than at Paris. Pendulum gravimeters began to be taken on voyages to remote parts of the world, and it was slowly discovered that gravity increases smoothly with increasing latitude, gravitational acceleration being about 0.5% greater at the poles than at the equator.
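The roughly 0.5% pole-to-equator difference can be reproduced with the Somigliana equation mentioned above. The sketch below uses the commonly quoted WGS84 reference-ellipsoid constants; verify them against the standard before serious use.

```python
import math

# WGS84 constants for Somigliana's normal-gravity formula (quoted values).
G_EQUATOR = 9.7803253359      # m/s^2, normal gravity at the equator
K = 1.93185265241e-3          # Somigliana gravity-formula constant
E2 = 6.69437999014e-3         # squared first eccentricity of the ellipsoid

def normal_gravity(lat_deg):
    """Theoretical gravity on the ellipsoid surface at a geodetic latitude."""
    s2 = math.sin(math.radians(lat_deg)) ** 2
    return G_EQUATOR * (1.0 + K * s2) / math.sqrt(1.0 - E2 * s2)

g_eq, g_pole = normal_gravity(0.0), normal_gravity(90.0)
print((g_pole - g_eq) / g_eq)  # ~0.0053, i.e. about 0.5% greater at the poles
```

The computed fractional increase matches the figure quoted in the text.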
British physicist Isaac Newton explained this in his Principia Mathematica (1687) in which he outlined his theory a
|
https://en.wikipedia.org/wiki/Biointensive%20agriculture
|
Biointensive agriculture is an organic agricultural system that focuses on achieving maximum yields from a minimum area of land, while simultaneously increasing biodiversity and sustaining the soil fertility. The goal of the method is long term sustainability on a closed system basis. It is particularly effective for backyard gardeners and smallholder farmers in developing countries, and also has been used successfully on small-scale commercial farms.
History
Many of the techniques that contribute to the biointensive method were present in the agriculture of the ancient Chinese, Greeks, Mayans, and of the Early Modern period in Europe, as well as in West Africa (Tapades of Fouta Djallon) from at least the late 18th century.
The sustainable bio-intensive farming (BIF) system, which emphasizes biodiversity conservation; recycling of nutrients; synergy among crops, animals, soils, and other biological components; and regeneration and conservation of resources, is a type of agro-ecological approach. This alternative approach can appropriately address the central issues of hunger, poverty, food/nutrition insecurity and livelihoods (Rajbhandari, 1999).
System
The biointensive method provides many benefits as compared with conventional farming and gardening methods, and is an inexpensive, easily implemented sustainable production method that can be used by people who lack the resources (or desire) to implement commercial chemical and fossil-fuel-based forms of agriculture.
|
https://en.wikipedia.org/wiki/Critical%20variable
|
Critical variables are defined, for example in thermodynamics, in terms of the values of variables at the critical point.
On a PV diagram, the critical point is an inflection point. Thus, along the critical isotherm:

  (∂p/∂V)T = 0  and  (∂²p/∂V²)T = 0

For the van der Waals equation, p = RT/(V − b) − a/V² (per mole), the above yields:

  Vc = 3b,  pc = a/(27b²),  Tc = 8a/(27Rb)
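The critical values for the van der Waals equation can be checked numerically by confirming that the first and second volume derivatives of pressure vanish at the critical point. The van der Waals constants below are illustrative values for water.

```python
# Numerical check of the van der Waals critical values (per mole):
#   Vc = 3b,  pc = a / (27 b^2),  Tc = 8a / (27 R b)
R = 8.314462618          # J/(mol K)
a, b = 0.5536, 3.049e-5  # van der Waals constants for water (illustrative)

def p(V, T):
    return R * T / (V - b) - a / V**2

Vc, Tc = 3 * b, 8 * a / (27 * R * b)
pc = a / (27 * b**2)

# Central finite differences around the critical point on the critical isotherm:
h = Vc * 1e-4
dp = (p(Vc + h, Tc) - p(Vc - h, Tc)) / (2 * h)                  # ~0
d2p = (p(Vc + h, Tc) - 2 * p(Vc, Tc) + p(Vc - h, Tc)) / h**2    # ~0

print(pc, dp, d2p)  # dp and d2p are negligible on the pc/Vc scales
```

The computed pc of roughly 2.2e7 Pa (about 220 bar) is close to the measured critical pressure of water, and both derivatives vanish to within numerical error, confirming the inflection point.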
Thermodynamic properties
Conformal field theory
|
https://en.wikipedia.org/wiki/Implicit%20surface
|
In mathematics, an implicit surface is a surface in Euclidean space defined by an equation

  F(x, y, z) = 0.

An implicit surface is the set of zeros of a function of three variables. Implicit means that the equation is not solved for x or y or z.

The graph of a function is usually described by an equation z = f(x, y) and is called an explicit representation. The third essential description of a surface is the parametric one:

(x(s, t), y(s, t), z(s, t)), where the x-, y- and z-coordinates of surface points are represented by three functions x(s, t), y(s, t), z(s, t) depending on common parameters s, t. Generally the change of representations is simple only when the explicit representation z = f(x, y) is given: F(x, y, z) = z − f(x, y) (implicit) and (s, t, f(s, t)) (parametric).
Examples:
The plane: ax + by + cz + d = 0 (with (a, b, c) ≠ (0, 0, 0)).
The sphere: x² + y² + z² − r² = 0.
The torus: (x² + y² + z² + R² − a²)² − 4R²(x² + y²) = 0.
A surface of genus 2: (see diagram).
The surface of revolution (see diagram wineglass).
For a plane, a sphere, and a torus there exist simple parametric representations. This is not true for the fourth example.
The implicit function theorem describes conditions under which an equation can be solved (at least locally) for x, y or z. But in general the solution may not be made explicit. This theorem is the key to the computation of essential geometric features of a surface: tangent planes, surface normals, curvatures (see below). But implicit representations have an essential drawback: their visualization is difficult.
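One of those geometric features is easy to sketch: the gradient of F is normal to the surface at each point of the zero set. The function below is a hypothetical example for the unit sphere, where the gradient is known in closed form.

```python
import math

# Implicit sphere F(x, y, z) = x^2 + y^2 + z^2 - r^2 = 0 (illustrative choice).
def F(x, y, z, r=1.0):
    return x * x + y * y + z * z - r * r

def grad_F(x, y, z):
    # Gradient of F, which points along the surface normal: (2x, 2y, 2z).
    return (2 * x, 2 * y, 2 * z)

# At the "north pole" (0, 0, 1) of the unit sphere the unit normal is (0, 0, 1):
gx, gy, gz = grad_F(0.0, 0.0, 1.0)
n = math.sqrt(gx * gx + gy * gy + gz * gz)
print((gx / n, gy / n, gz / n))  # (0.0, 0.0, 1.0)
```

The same recipe (normalize ∇F at a zero of F) works for any smooth implicit surface where the gradient does not vanish.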
If F is polynomial in x, y and z, the surface is called algebraic. Example 5 is non-algebraic.
Despite difficulty of visualization, implicit surfaces provide relatively simple techniques to generat
|
https://en.wikipedia.org/wiki/Friction%20factor
|
Friction factor may refer to:
Atkinson friction factor, a measure of the resistance to airflow of a duct
Darcy friction factor, in fluid dynamics
Fanning friction factor, a dimensionless number used as a local parameter in continuum mechanics
See also
coefficient of friction
Dimensionless numbers
|
https://en.wikipedia.org/wiki/F.%20Thomson%20Leighton
|
Frank Thomson "Tom" Leighton (born 1956) is the CEO of Akamai Technologies, the company he co-founded with the late Daniel Lewin in 1998. As one of the world's preeminent authorities on algorithms for network applications and cybersecurity, Dr. Leighton discovered a solution to free up web congestion using applied mathematics and distributed computing.
He is on leave as a professor of applied mathematics and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL) at the Massachusetts Institute of Technology (MIT). He received his B.S.E. in Electrical Engineering from Princeton University in 1978, and his Ph.D. in Mathematics from MIT in 1981. His brother David T. Leighton is a full professor at the University of Notre Dame, specializing in transport phenomena. Their father was a U.S. Navy colleague and friend of Admiral Hyman G. Rickover, the father of naval nuclear propulsion and a founder of the Research Science Institute (RSI).
Dr. Leighton has served on numerous government, industry, and academic advisory panels, including the Presidential Informational Technology Advisory Committee (PITAC) and chaired its subcommittee on cybersecurity. He serves on the board of trustees of the Society for Science & the Public (SSP) and of the Center for Excellence in Education (CEE), and he has participated in the Distinguished Lecture Series at CEE's flagship program for high school students, the Research Science Institute (RSI).
Awards and honors
The Instit
|
https://en.wikipedia.org/wiki/Tausonite
|
Tausonite is the rare naturally occurring mineral form of strontium titanate: chemical formula: SrTiO3. It occurs as red to orange brown cubic crystals and crystal masses.
It is a member of the perovskite group.
It was first described in 1982 for an occurrence in a syenite intrusive in Tausonite Hill, Murun Massif, Olyokma-Chara Plateau, Sakha Republic, Yakutia, geologically part of the Aldan Shield, Eastern-Siberian Region, Russia. It was named for Russian geochemist Lev Vladimirovich Tauson (1917–1989). It has also been reported from a fenite dike associated with a carbonatite complex in Sarambi, Concepción Department, Paraguay, and in high pressure metamorphic rocks along the Kotaki River area of Honshu Island, Japan.
References
Oxide minerals
Titanium minerals
Strontium minerals
Cubic minerals
Minerals in space group 221
|
https://en.wikipedia.org/wiki/Omitted-variable%20bias
|
In statistics, omitted-variable bias (OVB) occurs when a statistical model leaves out one or more relevant variables. The bias results in the model attributing the effect of the missing variables to those that were included.
More specifically, OVB is the bias that appears in the estimates of parameters in a regression analysis, when the assumed specification is incorrect in that it omits an independent variable that is a determinant of the dependent variable and correlated with one or more of the included independent variables.
In linear regression
Intuition
Suppose the true cause-and-effect relationship is given by:

  y = a + bx + cz + u
with parameters a, b, c, dependent variable y, independent variables x and z, and error term u. We wish to know the effect of x itself upon y (that is, we wish to obtain an estimate of b).
Two conditions must hold true for omitted-variable bias to exist in linear regression:
the omitted variable must be a determinant of the dependent variable (i.e., its true regression coefficient must not be zero); and
the omitted variable must be correlated with an independent variable specified in the regression (i.e., cov(z,x) must not equal zero).
Suppose we omit z from the regression, and suppose the relation between x and z is given by

  z = d + fx + e

with parameters d, f and error term e. Substituting the second equation into the first gives

  y = (a + cd) + (b + cf)x + (u + ce)
If a regression of y is conducted upon x only, this last equation is what is estimated, and the regression coefficient on x is actually a
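The bias can be illustrated with a quick Monte-Carlo sketch. All numbers below are invented for illustration; the parameter names follow the text, and the slope is computed with the usual covariance-over-variance formula for OLS with an intercept.

```python
import random

# True model: y = a + b*x + c*z + u, with z = d + f*x + e.
# Regressing y on x alone estimates b + c*f, not b.
random.seed(0)
a, b, c = 1.0, 2.0, 3.0
d, f = 0.5, 0.8
n = 200_000

xs, ys = [], []
for _ in range(n):
    x = random.gauss(0, 1)
    z = d + f * x + random.gauss(0, 1)   # z is correlated with x through f
    y = a + b * x + c * z + random.gauss(0, 1)
    xs.append(x)
    ys.append(y)

# OLS slope of y on x: cov(x, y) / var(x).
mx, my = sum(xs) / n, sum(ys) / n
beta = (sum((xi - mx) * (yi - my) for xi, yi in zip(xs, ys))
        / sum((xi - mx) ** 2 for xi in xs))
print(beta)  # close to b + c*f = 4.4, far from the true b = 2.0
```

Both conditions for the bias hold here (c ≠ 0 and cov(z, x) ≠ 0), so the estimate converges to b + cf rather than b.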
|
https://en.wikipedia.org/wiki/Emiliania%20huxleyi
|
Emiliania huxleyi is a species of coccolithophore found in almost all ocean ecosystems from the equator to sub-polar regions, and from nutrient rich upwelling zones to nutrient poor oligotrophic waters. It is one of thousands of different photosynthetic plankton that freely drift in the photic zone of the ocean, forming the basis of virtually all marine food webs. It is studied for the extensive blooms it forms in nutrient-depleted waters after the reformation of the summer thermocline. Like other coccolithophores, E. huxleyi is a single-celled phytoplankton covered with uniquely ornamented calcite disks called coccoliths. Individual coccoliths are abundant in marine sediments although complete coccospheres are more unusual. In the case of E. huxleyi, not only the shell, but also the soft part of the organism may be recorded in sediments. It produces a group of chemical compounds that are very resistant to decomposition. These chemical compounds, known as alkenones, can be found in marine sediments long after other soft parts of the organisms have decomposed. Alkenones are most commonly used by earth scientists as a means to estimate past sea surface temperatures.
Basic facts
Emiliania huxleyi was named after Thomas Huxley and Cesare Emiliani, who were the first to examine sea-bottom sediment and discover the coccoliths within it. It is believed to have evolved approximately 270,000 years ago from the older genus Gephyrocapsa Kamptner and became dominant in planktonic assembl
|
https://en.wikipedia.org/wiki/Cambridge%20Low%20Frequency%20Synthesis%20Telescope
|
The Cambridge Low-Frequency Synthesis Telescope (CLFST) is an east-west aperture synthesis radio telescope currently operating at 151 MHz. It consists of 60 tracking yagis on a 4.6 km baseline, giving 776 simultaneous baselines. These provide a resolution of 70×70 cosec(declination) arcsec², with a sensitivity of about 30 to 50 mJy/beam, and a field of view of about 9°×9°. The telescope is situated at the Mullard Radio Astronomy Observatory.
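As a rough sanity check on the quoted beam size, an interferometer's diffraction-limited resolution is of order λ/B for the longest baseline B; this ignores array weighting and tapering, so it is an order-of-magnitude estimate only.

```python
import math

# Order-of-magnitude resolution estimate for the CLFST: 151 MHz over 4.6 km.
c = 2.998e8                     # speed of light, m/s
f = 151e6                       # observing frequency, Hz
B = 4.6e3                       # longest baseline, m

wavelength = c / f              # ~2.0 m
theta_rad = wavelength / B      # diffraction-limited beam, radians
theta_arcsec = math.degrees(theta_rad) * 3600
print(theta_arcsec)  # ~90 arcsec, the same order as the quoted 70 arcsec beam
```

The estimate lands within a factor of order unity of the quoted 70 arcsec, as expected for this kind of back-of-the-envelope calculation.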
The CLFST has made three astronomical catalogues of the Northern Hemisphere:
6C survey at 151 MHz
7C survey at 151 MHz
8C survey at 38 MHz
Radio telescopes
Interferometric telescopes
Cavendish Laboratory
|
https://en.wikipedia.org/wiki/Viron
|
Viron may refer to:
Viron P. Vaky (1925–2012), American diplomat
an alternative form of Vyronas, a suburb of Athens, Greece
Viron, a fictional city in The Book of the Long Sun by Gene Wolfe
Viron, a playable race (or their language) in the computer RTS game Ground Control II
Viron Transit, a bus company servicing the Ilocos Region, Philippines
Viron, the Latin name of virus, meaning poison
See also
Virion, a virus particle
|
https://en.wikipedia.org/wiki/Mauritius%20Radio%20Telescope
|
The Mauritius Radio Telescope (MRT) is a synthesis radio telescope in Mauritius that is used to make images of the sky at a frequency of 151.5 MHz. The MRT was primarily designed to make a survey with a point source sensitivity of 150 mJy. Its resolution is about 4 arc min.
Structure
The MRT is a T-shaped array consisting of a 2048m-long East-West arm with 1024 fixed helical antennas arranged in 32 groups and an 880m-long North-South arm with 15 movable trolleys, each containing four antennas. There is a single trolley in the North arm. The North-South arm is built along the old Port Louis to Flacq railway line which closed in 1964.
Function
The antennas collect radio waves and transform them into electric signals. The signal from each group is filtered, amplified and sent to the telescope building where it is digitized. The digitized signals are processed in a correlator. Linux systems using custom software transform these correlated signals into raw images called "dirty maps".
The MRT uses aperture synthesis to simulate a 1 km by 1 km filled array. Data is collected as the trolleys in the North-South arm move southward from the array centre. Observations are repeated 62 times until the trolleys reach the southern end of the arm. The 1-D data for each day is added so as to make a 2-D map of the sky. Unlike most radio telescopes, the MRT can 'see' very extended sources. Also, the non-co-planarity of the East-West arm has led to new imaging techniques used in cleaning th
|
https://en.wikipedia.org/wiki/Nanocrystal
|
A nanocrystal is a material particle (a nanoparticle) having at least one dimension smaller than 100 nanometres and composed of atoms in either a single- or poly-crystalline arrangement.
The size of nanocrystals distinguishes them from larger crystals. For example, silicon nanocrystals can provide efficient light emission while bulk silicon does not, and may be used for memory components.
When embedded in solids, nanocrystals may exhibit much more complex melting behaviour than conventional solids and may form the basis of a special class of solids. They can behave as single-domain systems (a volume within the system having the same atomic or molecular arrangement throughout) that can help explain the behaviour of macroscopic samples of a similar material without the complicating presence of grain boundaries and other defects.
Semiconductor nanocrystals having dimensions smaller than 10 nm are also described as quantum dots.
Synthesis
The traditional method involves molecular precursors, which can include typical metal salts and a source of the anion. Most semiconducting nanomaterials feature chalcogenides (S²⁻, Se²⁻, Te²⁻) and pnictides (P³⁻, As³⁻, Sb³⁻). Sources of these elements are the silylated derivatives such as bis(trimethylsilyl)sulfide (S(SiMe3)2) and tris(trimethylsilyl)phosphine (P(SiMe3)3).
Some procedures use surfactants to solubilize the growing nanocrystals. In some cases, nanocrystals can exchange their elements with reagents thro
|
https://en.wikipedia.org/wiki/Mutation%20rate
|
In genetics, the mutation rate is the frequency of new mutations in a single gene or organism over time. Mutation rates are not constant and are not limited to a single type of mutation; there are many different types of mutations. Mutation rates are given for specific classes of mutations. Point mutations are a class of mutations which are changes to a single base. Missense and nonsense mutations are two subtypes of point mutations. The rate of these types of substitutions can be further subdivided into a mutation spectrum, which describes the influence of the genetic context on the mutation rate.
There are several natural units of time for each of these rates, with rates being characterized either as mutations per base pair per cell division, per gene per generation, or per genome per generation. The mutation rate of an organism is an evolved characteristic and is strongly influenced by the genetics of each organism, in addition to strong influence from the environment. The upper and lower limits to which mutation rates can evolve are the subject of ongoing investigation. However, the mutation rate varies across the genome, whether measured over DNA, RNA, or a single gene.
When the mutation rate in humans increases, certain health risks can occur, for example, cancer and other hereditary diseases. Knowledge of mutation rates is vital to understanding the future of cancers and many hereditary diseases.
Background
Different genetic variants within a specie
|
https://en.wikipedia.org/wiki/Phase%20problem
|
In physics, the phase problem is the problem of loss of information concerning the phase that can occur when making a physical measurement. The name comes from the field of X-ray crystallography, where the phase problem has to be solved for the determination of a structure from diffraction data. The phase problem is also met in the fields of imaging and signal processing. Various approaches of phase retrieval have been developed over the years.
Overview
Light detectors, such as photographic plates or CCDs, measure only the intensity of the light that hits them. This measurement is incomplete (even when neglecting other degrees of freedom such as polarization and angle of incidence) because a light wave has not only an amplitude (related to the intensity), but also a phase (related to the direction) and a polarization, which are systematically lost in a measurement. In diffraction or microscopy experiments, the phase part of the wave often contains valuable information on the studied specimen. The phase problem constitutes a fundamental limitation ultimately related to the nature of measurement in quantum mechanics.
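The information loss can be demonstrated numerically: a real signal and its time-reversed copy have identical Fourier magnitudes (reversal conjugates the phases), so a magnitude-only measurement cannot tell them apart.

```python
import numpy as np

# Two different signals with the same Fourier magnitude spectrum:
# a real signal and its time reversal.
rng = np.random.default_rng(0)
x = rng.standard_normal(64)
y = x[::-1]                    # reversal flips the phase, not the magnitude

X, Y = np.fft.fft(x), np.fft.fft(y)
print(np.allclose(np.abs(X), np.abs(Y)))   # True: magnitudes agree
print(np.allclose(x, y))                   # False: the signals differ
```

Recovering x from |X| alone is therefore ill-posed without extra constraints, which is exactly what phase-retrieval algorithms supply.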
In X-ray crystallography, the diffraction data when properly assembled gives the amplitude of the 3D Fourier transform of the molecule's electron density in the unit cell. If the phases are known, the electron density can be simply obtained by Fourier synthesis. This Fourier transform relation also holds for two-dimensional far-field diffraction patterns (also cal
|
https://en.wikipedia.org/wiki/Rarita%E2%80%93Schwinger%20equation
|
In theoretical physics, the Rarita–Schwinger equation is the relativistic field equation of spin-3/2 fermions in a four-dimensional flat spacetime. It is similar to the Dirac equation for spin-1/2 fermions. This equation was first introduced by William Rarita and Julian Schwinger in 1941.
In modern notation it can be written as:

  (ε^{μκρσ} γ₅ γ_κ ∂_ρ + m γ^{μσ}) ψ_σ = 0

where ε^{μκρσ} is the Levi-Civita symbol, γ₅ and γ_κ are Dirac matrices, m is the mass, γ^{μσ} ≡ (i/2)[γ^μ, γ^σ], and ψ_σ is a vector-valued spinor with additional components compared to the four-component spinor in the Dirac equation. It corresponds to the (1/2, 1/2) ⊗ ((1/2, 0) ⊕ (0, 1/2)) representation of the Lorentz group, or rather, its (1, 1/2) ⊕ (1/2, 1) part.
This field equation can be derived as the Euler–Lagrange equation corresponding to the Rarita–Schwinger Lagrangian:

  L = −(1/2) ψ̄_μ (ε^{μκρσ} γ₅ γ_κ ∂_ρ + m γ^{μσ}) ψ_σ

where the bar above ψ_μ denotes the Dirac adjoint.
This equation controls the propagation of the wave function of composite objects such as the delta baryons () or for the conjectural gravitino. So far, no elementary particle with spin 3/2 has been found experimentally.
The massless Rarita–Schwinger equation has a fermionic gauge symmetry: it is invariant under the gauge transformation ψ_μ → ψ_μ + ∂_μ ε, where ε is an arbitrary spinor field. This is simply the local supersymmetry of supergravity, and the field must be a gravitino.
"Weyl" and "Majorana" versions of the Rarita–Schwinger equation also exist.
Equations of motion in the massless case
Consider a massless Rarita–Schwinger field described by the Lagrangian density
where the sum over spin indices is implicit, are Majora
|
https://en.wikipedia.org/wiki/Pugs%20%28compiler%29
|
Pugs is a compiler and interpreter for the Raku programming language, started on February 1, 2005, by Audrey Tang. (At the time, Raku was known as Perl 6.)
Pugs development is now placed on hiatus, with most Raku implementation efforts now taking place on Rakudo.
Overview
The Pugs project aimed to bootstrap Perl 6 by implementing the full Perl 6 specification, as detailed in the Synopses. It is written in Haskell, specifically targeting the Glasgow Haskell Compiler.
Pugs includes two main executables:
Pugs is the interpreter with an interactive shell.
Pugscc can compile Perl 6 programs into Haskell code, Perl 5, JavaScript, or Parrot virtual machine's PIR assembly.
Pugs is free software, distributable under the terms of either the GNU General Public License or the Artistic License. These are the same terms as Perl.
Version numbering
The major/minor version numbers of Pugs converge to 2π (being reminiscent of TeX and METAFONT, which use a similar scheme); each significant digit in the minor version represents a successfully completed milestone. The third digit is incremented for each release. The current milestones are:
6.0: Initial release.
6.2: Basic IO and control flow elements; mutable variables; assignment.
6.28: Classes and traits.
6.283: Rules and Grammars.
6.2831: Type system and linking.
6.28318: Macros.
6.283185: Port Pugs to Perl 6, if needed.
Perl 5 compatibility
As of version 6.2.6, Pugs also has the ability to embed Perl 5 and use CPAN modules in
|
https://en.wikipedia.org/wiki/Grothendieck%E2%80%93Riemann%E2%80%93Roch%20theorem
|
In mathematics, specifically in algebraic geometry, the Grothendieck–Riemann–Roch theorem is a far-reaching result on coherent cohomology. It is a generalisation of the Hirzebruch–Riemann–Roch theorem, about complex manifolds, which is itself a generalisation of the classical Riemann–Roch theorem for line bundles on compact Riemann surfaces.
Riemann–Roch type theorems relate Euler characteristics of the cohomology of a vector bundle with their topological degrees, or more generally their characteristic classes in (co)homology or algebraic analogues thereof. The classical Riemann–Roch theorem does this for curves and line bundles, whereas the Hirzebruch–Riemann–Roch theorem generalises this to vector bundles over manifolds. The Grothendieck–Riemann–Roch theorem sets both theorems in a relative situation of a morphism between two manifolds (or more general schemes) and changes the theorem from a statement about a single bundle, to one applying to chain complexes of sheaves.
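In symbols, the two statements compared above take the following standard forms (stated here for orientation, with smoothness and properness hypotheses left implicit, as they are not spelled out in this excerpt):

```latex
% Hirzebruch–Riemann–Roch: the Euler characteristic of a vector bundle E
% on a compact complex manifold X, via the Chern character and Todd class:
\chi(X, E) = \int_X \operatorname{ch}(E)\,\operatorname{td}(T_X)

% Grothendieck–Riemann–Roch: the relative version for a proper morphism
% f : X \to Y, with T_f the relative tangent sheaf and f_! the derived
% pushforward on K-theory:
\operatorname{ch}\bigl(f_{!}\mathcal{F}^{\bullet}\bigr)
  = f_{*}\bigl(\operatorname{ch}(\mathcal{F}^{\bullet})\,\operatorname{td}(T_f)\bigr)
```

Taking Y to be a point recovers the Hirzebruch–Riemann–Roch formula, since pushforward to a point is integration over X.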
The theorem has been very influential, not least for the development of the Atiyah–Singer index theorem. Conversely, complex analytic analogues of the Grothendieck–Riemann–Roch theorem can be proved using the index theorem for families. Alexander Grothendieck gave a first proof in a 1957 manuscript, later published. Armand Borel and Jean-Pierre Serre wrote up and published Grothendieck's proof in 1958. Later, Grothendieck and his collaborators simplified and generalized the proof.
Formulation
Let X be a s
|
https://en.wikipedia.org/wiki/The%20Sign
|
The Sign can refer to:
The Sign (Ace of Base album), an alternate name for the Ace of Base album Happy Nation
"The Sign" (song), a 1993 hit from this album
The Sign (Crystal Lake album)
"The Sign" (Agents of S.H.I.E.L.D.), an episode of Agents of S.H.I.E.L.D.
"The Sign", a song by Lizzo from the album Special, 2022
|
https://en.wikipedia.org/wiki/Ursa
|
Ursa is a Latin word meaning bear. Derivatives of this word are ursine or Ursini.
Ursa may also refer to:
General
URSA Extracts (United States of America), a California cannabis concentrate company
Ursa (Finland), a Finnish astronomical association
Ursa (spider), a spider genus in the family Araneidae
Ursa Major, the Great Bear constellation
Ursa Minor, the Small Bear constellation
HMS Ursa, the name of two ships of the Royal Navy
Places
Ursa, Illinois, a village in the United States
Ursa, a village in Gârcov Commune, Olt County, Romania
Ursa Motoșeni, the former name of Motoșeni Commune, Bacău County, Romania
People
Urša, feminine given name
Fiction
Ursa (comics), a fictional villain in Superman media
Ursa, a fictional monster in the M. Night Shyamalan film After Earth
Ursa, a fictional character in Avatar: The Last Airbender
Ursa, Bear's girlfriend in the TV series Bear in the Big Blue House
Ursa, a fictional character in Disney's Adventures of the Gummi Bears
Ursa, a fictional character in the Open Season franchise
Ursa Corregidora, protagonist of Corregidora by Gayl Jones
Ursa Wren, a fictional character in the Disney show Star Wars Rebels, voiced by Sharmila Devar.
Ursa, a hero in the video games Defense of the Ancients and Dota 2
Ursas, a type of antagonistic monster featured in the animated web series RWBY
See also
Ersa (disambiguation)
Erza
Ursus (disambiguation)
|
https://en.wikipedia.org/wiki/Spin%20transistor
|
The magnetically sensitive transistor, also known as the spin transistor, spin field-effect transistor (spinFET), Datta–Das spin transistor or spintronic transistor (named for spintronics, the technology which this development spawned), originally proposed in 1990 by Supriyo Datta and Biswajit Das, is an alternative design on the common transistor invented in the 1940s. This device was considered one of the Nature Milestones in Spin in 2008.
Description
The spin transistor comes about as a result of research on the ability of electrons (and other fermions) to naturally exhibit one of two (and only two) states of spin: known as "spin up" and "spin down". Thus, spin transistors operate on electron spin as embodying a two-state quantum system. Unlike its namesake predecessor, which operates on an electric current, spin transistors operate on electrons on a more fundamental level; it is essentially the application of electrons set in particular states of spin to store information.
One advantage over regular transistors is that these spin states can be detected and altered without necessarily requiring the application of an electric current. This allows for detection hardware (such as hard drive heads) that is much smaller yet even more sensitive than today's devices, which rely on noisy amplifiers to detect the minute charges used on today's data storage devices. The potential result is devices that can store more data in less space and consume less power, using less costly
|
https://en.wikipedia.org/wiki/Motion%20estimation
|
In computer vision and image processing, motion estimation is the process of determining motion vectors that describe the transformation from one 2D image to another; usually from adjacent frames in a video sequence. It is an ill-posed problem as the motion happens in three dimensions (3D) but the images are a projection of the 3D scene onto a 2D plane. The motion vectors may relate to the whole image (global motion estimation) or specific parts, such as rectangular blocks, arbitrary shaped patches or even per pixel. The motion vectors may be represented by a translational model or many other models that can approximate the motion of a real video camera, such as rotation and translation in all three dimensions and zoom.
Related terms
More often than not, the term motion estimation and the term optical flow are used interchangeably. It is also related in concept to image registration and stereo correspondence. In fact all of these terms refer to the process of finding corresponding points between two images or video frames. The points that correspond to each other in two views (images or frames) of a real scene or object are "usually" the same point in that scene or on that object. Before we do motion estimation, we must define our measurement of correspondence, i.e., the matching metric, which is a measurement of how similar two image points are. There is no right or wrong here; the choice of matching metric is usually related to what the final estimated motion is used for a
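The matching-metric idea above can be made concrete with block matching: for a block in the current frame, exhaustively search a window in the reference frame and keep the displacement minimising the metric. A minimal sketch using the sum of absolute differences (SAD) as the metric; the frames are plain nested lists, and the function names and search range are illustrative assumptions, not a standard API:

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equal-sized 2D blocks."""
    return sum(abs(a - b)
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def best_motion_vector(ref, cur, bx, by, bsize, search):
    """Find (dx, dy) in [-search, search]^2 minimising the SAD between the
    bsize x bsize block at (bx, by) in `cur` and the shifted block in `ref`."""
    cur_block = [row[bx:bx + bsize] for row in cur[by:by + bsize]]
    best, best_cost = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            x, y = bx + dx, by + dy
            # Skip candidates that fall outside the reference frame.
            if x < 0 or y < 0 or y + bsize > len(ref) or x + bsize > len(ref[0]):
                continue
            cand = [row[x:x + bsize] for row in ref[y:y + bsize]]
            cost = sad(cur_block, cand)
            if cost < best_cost:
                best_cost, best = cost, (dx, dy)
    return best
```

Real encoders replace the exhaustive search with faster strategies (three-step search, diamond search) and may use other metrics such as SSD or normalised cross-correlation, but the structure is the same.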
|
https://en.wikipedia.org/wiki/Julia%20M.%20Riley
|
Julia M. Riley (née Hill, born 1947) is a British astrophysicist who developed the Fanaroff–Riley classification.
Personal and professional background
She is the daughter of Philippa (born Pass) and British marine geophysicist Maurice Hill and granddaughter of Nobel Prize–winning physiologist Archibald Vivian Hill.
Riley is a Fellow of Girton College associated with the Cavendish Astrophysics Group at University of Cambridge. Her primary field of research is in the area of radio astronomy. Riley lectures and supervises physics within the Natural Sciences Tripos at the University of Cambridge.
Fanaroff–Riley type I and II
In 1974, along with Bernard Fanaroff, she wrote a paper classifying radio galaxies into two types based on their morphology (shape). Fanaroff and Riley's classification became known as Fanaroff–Riley type I and II of radio galaxies (FRI and FRII). In FRI sources the major part of the radio emission comes from closer to the centre of the source, whereas in FRII sources the major part of the emission comes from hotspots set away from the centre (see active galaxies).
References
External links
Webpage at Girton College
Webpage at Cavendish Astrophysics Group
21st-century British astronomers
Fellows of Girton College, Cambridge
1947 births
Living people
Keynes family
Women astronomers
British women scientists
Academics of the University of Cambridge
20th-century British astronomers
|