source | text
---|---
https://en.wikipedia.org/wiki/Geographic%20tongue
|
Geographic tongue, also known by several other terms, is a condition of the mucous membrane of the tongue, usually on the dorsal surface. It is a common condition, affecting approximately 2–3% of the general population. It is characterized by areas of smooth, red depapillation (loss of lingual papillae) which migrate over time. The name comes from the map-like appearance of the tongue, with the patches resembling the islands of an archipelago. The cause is unknown, but the condition is entirely benign (importantly, it does not represent oral cancer), and there is no curative treatment. Uncommonly, geographic tongue may cause a burning sensation on the tongue, for which various treatments have been described with little formal evidence of efficacy.
Signs and symptoms
In health, the dorsal surface of the tongue is covered in tuft-like projections called lingual papillae (some of which are associated with taste buds), which give the tongue an irregular surface texture and a white-pink color. Geographic tongue is characterized by areas of atrophy and depapillation (loss of papillae), leaving an erythematous (darker red) and smoother surface than the unaffected areas. The depapillated areas are usually well-demarcated, and bordered by a slightly raised, white, yellow or grey, serpiginous (snaking) peripheral zone. A lesion of geographic tongue may start as a white patch before the depapillation occurs. In certain cases there may be only one lesion, but this is uncommon; the lesions will typically occur in multiple locations on the tongue and coalesce over time to form the typical map-like appearance. The lesions usually change in shape and size, and migrate to other areas, sometimes within hours. The condition may affect only part of the tongue, with a predilection for the tip and the sides of the tongue, or the entire dorsal surface at any one time. The condition goes through periods of remission and relapse. Loss of the white peripheral zone is thought to signify per
|
https://en.wikipedia.org/wiki/Lola%20%28computing%29
|
Lola is designed to be a simple hardware description language for describing synchronous, digital circuits. Niklaus Wirth developed the language to teach digital design on field-programmable gate arrays (FPGAs) to computer science students while a professor at ETH Zurich.
The purpose of Lola is to statically describe the structure and function of hardware components and of the connections between them. A Lola text is composed of declarations and statements. It describes digital electronics hardware on the logic gate level in the form of signal assignments. Signals are combined using operators and assigned to other signals. Signals and the respective assignments can be grouped together into data types. An instance of a type is a hardware component. Types can be composed of instances of other types, thereby supporting a hierarchical design style and they can be generic, e.g., parametrizable with the word width of a circuit.
All of the concepts mentioned above are demonstrated in the following example of a circuit for adding binary data. First, a fundamental building block (Cell) is defined, then this is used to declare a cascade of word-width 8, and finally the Cells are connected to each other. The Adder defined in this example can serve as a building block on a higher level of the design hierarchy.
MODULE Adder;
  TYPE Cell;                     (* composite type *)
    IN x, y, ci: BIT;            (* input signals *)
    OUT z, co: BIT;              (* output signals *)
  BEGIN
    z := x - y - ci;             (* sum: "-" is exclusive OR in Lola *)
    co := x*y + x*ci + y*ci;     (* carry out: majority of the three inputs *)
  END Cell;

  CONST N := 8;
  IN X, Y: [N] BIT; ci: BIT;     (* input signals *)
  OUT Z: [N] BIT; co: BIT;       (* output signals *)
  VAR S: [N] Cell;               (* composite type instances *)
BEGIN
  S.0(X.0, Y.0, ci);             (* inputs in cell 0 *)
  FOR i := 1 .. N-1 DO
    S.i(X.i, Y.i, S[i-1].co);    (* carry ripples from cell i-1 into cell i *)
  END;
  FOR i := 0 .. N-1 DO
    Z.i := S.i.z;
  END;
  co := S.7.co;
END Adder.
Wirth describes Lola from a user's perspective in his book Digital Circuit Design. A complementary view on the details of the Lola compiler's implementation can be found in Wirth'
|
https://en.wikipedia.org/wiki/Outline%20of%20actuarial%20science
|
The following outline is provided as an overview of and topical guide to actuarial science:
Actuarial science – discipline that applies mathematical and statistical methods to assess risk in the insurance and finance industries.
What type of thing is actuarial science?
Actuarial science can be described as all of the following:
An academic discipline –
A branch of science –
An applied science –
A subdiscipline of statistics –
Essence of actuarial science
Actuarial science
Actuary
Actuarial notation
Fields in which actuarial science is applied
Mathematical finance
Insurance, especially:
Life insurance
Health insurance
Human resource consulting
History of actuarial science
History of actuarial science
General actuarial science concepts
Insurance
Health insurance
Life Insurance
Life insurance
Life insurer
Insurable interest
Insurable risk
Annuity
Life annuity
Perpetuity
New Business Strain
Zillmerisation
Financial reinsurance
Net premium valuation
Gross premium valuation
Embedded value
European Embedded Value
Stochastic modelling
Asset liability modelling
Non-life Insurance
Property insurance
Casualty insurance
Vehicle insurance
Ruin theory
Stochastic modelling
Risk and capital management in non-life insurance
Reinsurance
Reinsurance
Financial reinsurance
Reinsurance Actuarial Premium
Reinsurer
Investments & Asset Management
Dividend yield
PE ratio
Bond valuation
Yield to maturity
Cost of capital
Net asset value
Derivatives
Mathematics of Finance
Financial mathematics
Interest
Time value of money
Discounting
Present value
Future value
Net present value
Internal rate of return
Yield curve
Yield to maturity
Effective annual rate (EAR)
Annual percentage rate (APR)
Mortality
Force of mortality
Life table
Pensions
Pensions
Stochastic modelling
Other
Enterprise risk management
Fictional actuaries
Persons influential in the field of actuarial science
List of actuaries
See also
In
|
https://en.wikipedia.org/wiki/Big%20design%20up%20front
|
Big design up front (BDUF) is a software development approach in which the program's design is to be completed and perfected before that program's implementation is started. It is often associated with the waterfall model of software development.
Synonyms for big design up front (BDUF) are big modeling up front (BMUF) and big requirements up front (BRUF). These are viewed as anti-patterns within agile software development.
Arguments for
Proponents of the waterfall model argue that time spent designing is a worthwhile investment, with the hope that less time and effort will be spent fixing a bug in the early stages of a software product's lifecycle than when that same bug is found and must be fixed later. That is, it is much easier to fix a requirements bug in the requirements phase than to fix that same bug in the implementation phase, since fixing a requirements bug in the implementation phase requires scrapping at least some of the implementation and design work which has already been completed.
Joel Spolsky, a popular online commentator on software development, has argued strongly in favor of big design up front:
"Many times, thinking things out in advance saved us serious development headaches later on. ... [on making a particular specification change] ... Making this change in the spec took an hour or two. If we had made this change in code, it would have added weeks to the schedule. I can’t tell you how strongly I believe in Big Design Up Front, which the proponents of Extreme Programming consider anathema. I have consistently saved time and made better products by using BDUF and I’m proud to use it, no matter what the XP fanatics claim. They’re just wrong on this point and I can’t be any clearer than that."
However, several commentators have argued that what Joel has called big design up front doesn't resemble the BDUF criticized by advocates of XP and other agile software development methodologies because he himself says his example was neither recognizabl
|
https://en.wikipedia.org/wiki/Soft-bodied%20organism
|
Soft-bodied organisms are animals that lack skeletons. The group roughly corresponds to the group Vermes as proposed by Carl von Linné. All animals have muscles but, since muscles can only pull, never push, a number of animals have developed hard parts that the muscles can pull on, commonly called skeletons. Such skeletons may be internal, as in vertebrates, or external, as in arthropods. However, many animal groups do very well without hard parts. These include animals such as earthworms, jellyfish, tapeworms, squids and an enormous variety of animals from almost every part of the kingdom Animalia.
Commonality
Most soft-bodied animals are small, but they make up the majority of the animal biomass. Weighing all animals on Earth with hard parts against soft-bodied ones, estimates indicate that the biomass of soft-bodied animals would be at least twice that of animals with hard parts, quite possibly much larger. The roundworms in particular are extremely numerous. The nematologist Nathan Cobb described the ubiquitous presence of nematodes on Earth as follows:
"In short, if all the matter in the universe except the nematodes were swept away, our world would still be dimly recognizable, and if, as disembodied spirits, we could then investigate it, we should find its mountains, hills, vales, rivers, lakes, and oceans represented by a film of nematodes. The location of towns would be decipherable, since for every massing of human beings there would be a corresponding massing of certain nematodes. Trees would still stand in ghostly rows representing our streets and highways. The location of the various plants and animals would still be decipherable, and, had we sufficient knowledge, in many cases even their species could be determined by an examination of their erstwhile nematode parasites."
Anatomy
Not being a true phylogenetic group, soft-bodied organisms vary enormously in anatomy. Cnidarians and flatworms have a single opening to the gut and a d
|
https://en.wikipedia.org/wiki/Ore%20extension
|
In mathematics, especially in the area of algebra known as ring theory, an Ore extension, named after Øystein Ore, is a special type of a ring extension whose properties are relatively well understood. Elements of an Ore extension are called Ore polynomials.
Ore extensions appear in several natural contexts, including skew and differential polynomial rings, group algebras of polycyclic groups, universal enveloping algebras of solvable Lie algebras, and coordinate rings of quantum groups.
Definition
Suppose that R is a (not necessarily commutative) ring, $\sigma\colon R \to R$ is a ring homomorphism, and $\delta$ is a σ-derivation of R, which means that $\delta$ is a homomorphism of abelian groups satisfying

$\delta(r_1 r_2) = \sigma(r_1)\,\delta(r_2) + \delta(r_1)\,r_2.$

Then the Ore extension $R[x; \sigma, \delta]$, also called a skew polynomial ring, is the noncommutative ring obtained by giving the ring of polynomials $R[x]$ a new multiplication, subject to the identity

$x\,r = \sigma(r)\,x + \delta(r).$

If δ = 0 (i.e., δ is the zero map) then the Ore extension is denoted R[x; σ]. If σ = 1 (i.e., the identity map) then the Ore extension is denoted R[x; δ] and is called a differential polynomial ring.
Examples
The Weyl algebras are Ore extensions, with R any commutative polynomial ring, σ the identity ring endomorphism, and δ the polynomial derivative. Ore algebras are a class of iterated Ore extensions under suitable constraints that permit the development of a noncommutative extension of the theory of Gröbner bases.
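As a concrete check of the defining identity in the Weyl algebra case (a sketch in the notation above, taking R = k[t], σ = id, and δ = d/dt):

% First Weyl algebra A_1 = k[t][x; \mathrm{id}, d/dt]:
% applying the identity x r = \sigma(r) x + \delta(r) to r = t gives
x \cdot t = t\,x + \tfrac{dt}{dt} = t\,x + 1,
% i.e. the canonical relation x t - t x = 1, so x acts on k[t] as d/dt.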
Properties
An Ore extension of a domain is a domain.
An Ore extension of a skew field is a non-commutative principal ideal domain.
If σ is an automorphism and R is a left Noetherian ring then the Ore extension R[x; σ, δ] is also left Noetherian.
Elements
An element f of an Ore ring R is called
two-sided (or invariant), if R·f = f·R, and
central, if g·f = f·g for all g in R.
|
https://en.wikipedia.org/wiki/CJ%20Affiliate
|
CJ Affiliate (formerly Commission Junction) is an online advertising company owned by Publicis Groupe, operating worldwide in the affiliate marketing industry. The corporate headquarters is in Santa Barbara, California, and there are offices in Atlanta, GA; Chicago, IL; New York, NY; San Francisco, CA; Westlake Village, CA; and Westborough, MA in the US, and in the UK, Germany, France, Spain, Sweden, India, and South Africa.
beFree, Inc. / ValueClick, Inc.
|
Former Commission Junction competitor beFree, Inc. was acquired by ValueClick, Inc. in 2002, before ValueClick acquired Commission Junction. beFree was gradually phased out in favor of Commission Junction. On February 3, 2014, ValueClick, Inc. announced it had changed its name to Conversant, Inc., bringing the former ValueClick companies Commission Junction, Dotomi, Greystripe, Mediaplex, and ValueClick Media under one name. Conversant was bought by Alliance Data in 2014. Commission Junction continues to be known as CJ Affiliate.
See also
Affiliate marketing
Affiliate programs directories
Affiliate networks
External links
CJ Affiliate (formerly Commission Junction)
Commission Junction UK
|
https://en.wikipedia.org/wiki/Image%20persistence
|
Image persistence, or image retention, is the LCD and plasma display equivalent of screen burn-in. Unlike screen burn, the effects are usually temporary and often not visible without close inspection. In plasma displays, however, severe image persistence can develop into permanent screen burn-in.
Image persistence can occur when something such as a web page or document remains unchanged in the same location on the screen for as little as 10 minutes. Minor cases of image persistence are generally only visible when looking at darker areas of the screen, and are usually invisible to the eye during ordinary computer use.
Cause
Liquid crystals have a natural relaxed state. When a voltage is applied they rearrange themselves to block certain light waves. If left with the same voltage for an extended period of time (e.g. displaying a pointer or the Taskbar in one place, or showing a static picture for extended periods of time), the liquid crystals can develop a tendency to stay in one position. This ever-so-slight tendency to stay arranged in one position can throw the requested color off by a slight degree, which causes the image to look like the traditional "burn-in" on phosphor based displays. In fact, the root cause of LCD image retention is different from phosphor aging, but the phenomenon is the same, namely uneven use of display pixels. Slight LCD image retention can be recovered. When severe image retention occurs, the liquid crystal molecules have been polarized and cannot rotate in the electric field, so they cannot be recovered.
The cause of this tendency is unclear. It might be due to various factors, including accumulation of ionic impurities inside the LCD, impurities introduced during the fabrication of the LCD, imperfect driver settings, electric charge building up near the electrodes, parasitic capacitance, or a DC voltage component that occurs unavoidably in some display pixels owing to anisotropy in the dielectric constant of the liquid c
|
https://en.wikipedia.org/wiki/Electronic%20Communications%20Act%202000
|
The Electronic Communications Act 2000 (c.7) is an Act of the Parliament of the United Kingdom that:
Had provisions to regulate the provision of cryptographic services in the UK (ss.1-6); and
Confirms the legal status of electronic signatures (ss.7-10).
The United Kingdom government had come to the conclusion that encryption, encryption services and electronic signatures would be important to e-commerce in the UK.
By 1999, however, only the security services still hankered after key escrow. So a "sunset clause" was put in the bill. The Electronic Communications Act 2000 gave the Home Office the power to create a registration regime for encryption services. This was given a five-year period before it would automatically lapse, which eventually happened in May 2006.
External links
An account from the Foundation For Information Policy Research
|
https://en.wikipedia.org/wiki/Ion%20plating
|
Ion plating (IP) is a physical vapor deposition (PVD) process that is sometimes called ion assisted deposition (IAD) or ion vapor deposition (IVD) and is a modified version of vacuum deposition. Ion plating uses concurrent or periodic bombardment of the substrate, and deposits film by atomic-sized energetic particles called ions. Bombardment prior to deposition is used to sputter clean the substrate surface. During deposition the bombardment is used to modify and control the properties of the depositing film. It is important that the bombardment be continuous between the cleaning and the deposition portions of the process to maintain an atomically clean interface; if this interface is not properly cleaned, the result can be a weaker coating or poor adhesion.
There are many different vacuum deposition coating processes, which are used for various applications such as corrosion and wear resistance.
Process
In ion plating, the energy, flux and mass of the bombarding species along with the ratio of bombarding particles to depositing particles are important processing variables. The depositing material may be vaporized either by evaporation, sputtering (bias sputtering), arc vaporization or by decomposition of a chemical vapor precursor chemical vapor deposition (CVD). The energetic particles used for bombardment are usually ions of an inert or reactive gas, or, in some cases, ions of the condensing film material ("film ions"). Ion plating can be done in a plasma environment where ions for bombardment are extracted from the plasma or it may be done in a vacuum environment where ions for bombardment are formed in a separate ion gun. The latter ion plating configuration is often called Ion Beam Assisted Deposition (IBAD). By using a reactive gas or vapor in the plasma, films of compound materials can be deposited.
Ion plating is used to deposit hard coatings of compound materials on tools, adherent metal coatings, optical coatings with hig
|
https://en.wikipedia.org/wiki/Ecological%20sanitation
|
Ecological sanitation, commonly abbreviated as ecosan (also spelled eco-san or EcoSan), is an approach to sanitation provision which aims to safely reuse excreta in agriculture. It is an approach, rather than a technology or a device, characterized by a desire to "close the loop", mainly for the nutrients and organic matter, between sanitation and agriculture in a safe manner. One of the aims is to minimise the use of non-renewable resources. When properly designed and operated, ecosan systems provide a hygienically safe system to convert human excreta into nutrients to be returned to the soil, and water to be returned to the land. Ecosan is also called resource-oriented sanitation.
Definition
The definition of ecosan has varied in the past. In 2012, a widely accepted definition of ecosan was formulated by Swedish experts: "Ecological sanitation systems are systems which allow for the safe recycling of nutrients to crop production in such a way that the use of non-renewable resources is minimized. These systems have a strong potential to be sustainable sanitation systems if technical, institutional, social and economic aspects are managed appropriately."
Prior to 2012, ecosan has often been associated with urine diversion and in particular with urine-diverting dry toilets (UDDTs), a type of dry toilet. For this reason, the term "ecosan toilet" is widely used when people mean a UDDT. However, the ecosan concept should not be limited to one particular type of toilet. Also, UDDTs can be used without having any reuse activities in which case they are not in line with the ecosan concept (an example being the 80,000 UDDTs implemented by eThekwini Municipality near Durban, South Africa).
Use of the term "ecosan"
The term "ecosan" was first used in 1995 and the first project started in 1996 in Ethiopia, by an NGO called Sudea. A trio, Dr Torsten Modig, Umeå University, Almaz Terrefe, teamleader, and Gunder Edström, hygiene expert, chose an area in a dense urban
|
https://en.wikipedia.org/wiki/Super%20PI
|
Super PI is a computer program that calculates pi to a specified number of digits after the decimal point—up to a maximum of 32 million. It uses the Gauss–Legendre algorithm and is a Windows port of the program used by Yasumasa Kanada in 1995 to compute pi to 2^32 digits.
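As a rough illustration of the Gauss–Legendre iteration that Super PI is based on, here is a minimal Python sketch (floating point only, so it converges to machine precision rather than millions of digits; the function name is ours):

from math import sqrt

def gauss_legendre_pi(iterations=4):
    # a, b, t, p are the classic Gauss-Legendre iteration variables
    a, b, t, p = 1.0, 1.0 / sqrt(2.0), 0.25, 1.0
    for _ in range(iterations):
        # tuple assignment evaluates the right-hand side with the old a and b
        a, b, t, p = (a + b) / 2, sqrt(a * b), t - p * ((a - b) / 2) ** 2, 2 * p
    return (a + b) ** 2 / (4 * t)

print(gauss_legendre_pi())  # ~3.14159265358979 at double precision

The iteration converges quadratically, roughly doubling the number of correct digits each pass; arbitrary-precision arithmetic is what actual pi programs add on top of this scheme.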
Significance
Super PI is popular in the overclocking community, both as a benchmark to test the performance of these systems and as a stress test to check that they are still functioning correctly.
Credibility concerns
The competitive nature of achieving the best Super PI calculation times led to fraudulent Super PI results, reporting calculation times faster than normal. Attempts to counter the fraudulent results resulted in a modified version of Super PI, with a checksum to validate the results. However, other methods exist of producing inaccurate or fake time results, raising questions about the program's future as an overclocking benchmark.
Super PI utilizes x87 floating-point instructions, which are supported on all x86 and x86-64 processors; current versions of these processors also support the lower-precision Streaming SIMD Extensions (SSE) vector instructions.
The future
Super PI is single-threaded, so its relevance as a measure of performance in the current era of multi-core processors is diminishing quickly. Therefore, wPrime has been developed to support multiple threaded calculations run at the same time so one can test stability on multi-core machines. Other multithreaded programs include: Hyper PI, IntelBurnTest, Prime95, Montecarlo superPI, OCCT and y-cruncher. Last but not least, Super PI is unable to calculate more than 32 million digits, whereas Alexander J. Yee and Shigeru Kondo set a record of 10 trillion and 50 digits of pi using y-cruncher on October 16, 2011, on a machine with two Intel Xeon X5680 processors at 3.33 GHz (12 physical cores, 24 with hyper-threading); Super PI is also much slower than these other programs, and utilizes inferior algorithms.
External links
Benchmarks (co
|
https://en.wikipedia.org/wiki/Solid-state%20laser
|
A solid-state laser is a laser that uses a gain medium that is a solid, rather than a liquid as in dye lasers or a gas as in gas lasers. Semiconductor-based lasers are also in the solid state, but are generally considered as a separate class from solid-state lasers, called laser diodes.
Solid-state media
Generally, the active medium of a solid-state laser consists of a glass or crystalline "host" material, to which is added a "dopant" such as neodymium, chromium, erbium, thulium or ytterbium. Many of the common dopants are rare-earth elements, because the excited states of such ions are not strongly coupled with the thermal vibrations of their crystal lattices (phonons), and their operational thresholds can be reached at relatively low intensities of laser pumping.
There are many hundreds of solid-state media in which laser action has been achieved, but relatively few types are in widespread use. Of these, probably the most common is neodymium-doped yttrium aluminum garnet (Nd:YAG). Neodymium-doped glass (Nd:glass) and ytterbium-doped glasses or ceramics are used at very high power levels (terawatts) and high energies (megajoules), for multiple-beam inertial confinement fusion.
The first material used for lasers was synthetic ruby crystals. Ruby lasers are still used for a few applications, but they are not common because of their low power efficiencies. At room temperature, ruby lasers emit only short pulses of light, but at cryogenic temperatures they can be made to emit a continuous train of pulses.
Some solid-state lasers can also be tunable using several intracavity techniques, which employ etalons, prisms, and gratings, or a combination of these. Titanium-doped sapphire is widely used for its broad tuning range, 660 to 1080 nanometers. Alexandrite lasers are tunable from 700 to 820 nm and yield higher-energy pulses than titanium-sapphire lasers because of the gain medium's longer energy storage time and higher damage threshold.
Pumping
Solid state lasin
|
https://en.wikipedia.org/wiki/PointCast
|
PointCast was a dot-com company founded in 1992 by Christopher R. Hassett in Sunnyvale, California.
PointCast Network
The company's initial product amounted to a screensaver that displayed news and other information, delivered live over the Internet. The PointCast Network used push technology, which was a hot concept at the time, and received enormous press coverage when it launched in beta form on February 13, 1996.
The product did not perform as well as expected, often believed to be because its traffic burdened corporate networks with excessive bandwidth use, and it was banned in many places. It demanded more bandwidth than the home dial-up Internet connections of the day could provide, and people objected to the large number of advertisements that were pushed over the service as well. PointCast offered corporations a proxy server that would dramatically reduce the bandwidth used, but even this did not save PointCast. A more likely reason than bandwidth was the increasing popularity of "portal" websites. When PointCast first started, Yahoo offered little more than a hierarchical directory of the internet (broken down by subject, much like DMOZ), but it soon introduced its portal, which was customizable and offered a much more convenient way to read the news.
News Corporation purchase offer and change of CEO
At the company's height in January 1997, News Corporation made an offer of $450 million to purchase it. However, the offer was withdrawn in March. While there were rumors that it was withdrawn due to issues with the price and revenue projections, James Murdoch said it was due to PointCast's inaction.
Shortly after the purchase offer fell through, the board of directors decided to replace Christopher Hassett as CEO. Reasons included turning down the recent purchase offer, software performance problems (using too much corporate bandwidth) and declining market share (lost to the then-emerging Web portal sites). After five months, David Dorman was
|
https://en.wikipedia.org/wiki/Ikeda%20map
|
In physics and mathematics, the Ikeda map is a discrete-time dynamical system given by the complex map

$z_{n+1} = A + B z_n e^{i(|z_n|^2 + C)}.$

The original map was proposed first by Kensuke Ikeda as a model of light going around across a nonlinear optical resonator (ring cavity containing a nonlinear dielectric medium) in a more general form. It was reduced to the above simplified "normal" form by Ikeda, Daido and Akimoto. Here $z_n$ stands for the electric field inside the resonator at the n-th step of rotation in the resonator, and $A$ and $C$ are parameters which indicate laser light applied from the outside, and linear phase across the resonator, respectively. In particular the parameter $B \leq 1$ is called the dissipation parameter characterizing the loss of the resonator, and in the limit $B = 1$ the Ikeda map becomes a conservative map.
The original Ikeda map is often used in another modified form in order to take the saturation effect of the nonlinear dielectric medium into account:

$z_{n+1} = A + B z_n e^{i K/(|z_n|^2 + 1) + i C}.$
A 2D real example of the above form is:

$x_{n+1} = 1 + u\,(x_n \cos t_n - y_n \sin t_n),$
$y_{n+1} = u\,(x_n \sin t_n + y_n \cos t_n),$

where $u$ is a parameter and

$t_n = 0.4 - \frac{6}{1 + x_n^2 + y_n^2}.$

For $u \geq 0.6$, this system has a chaotic attractor.
Attractor
This shows how the attractor of the system changes as the parameter $u$ is varied from 0.0 to 1.0 in steps of 0.01. The Ikeda dynamical system is simulated for 500 steps, starting from 20,000 randomly placed starting points. The last 20 points of each trajectory are plotted to depict the attractor. Note the bifurcation of attractor points as $u$ is increased.
Point trajectories
The plots below show trajectories of 200 random points for various values of $u$. The inset plot on the left shows an estimate of the attractor while the inset on the right shows a zoomed-in view of the main trajectory plot.
Octave/MATLAB code for point trajectories
The Octave/MATLAB code to generate these plots is given below:
% u = ikeda parameter
% option = 'trajectory' (plot whole trajectories) or 'limit' (last 20 iterates)
function ikeda(u, option)
  P = 200; N = 1000;                 % number of random starting points, iterations
  x = randn(1, P); y = randn(1, P);  % random initial conditions
  hold on;
  for n = 1:N
    t = 0.4 - 6 ./ (1 + x.^2 + y.^2);                    % phase term t_n
    [x, y] = deal(1 + u*(x.*cos(t) - y.*sin(t)), u*(x.*sin(t) + y.*cos(t)));
    if strcmp(option, 'trajectory') || n > N - 20        % 'limit' keeps the tail
      plot(x, y, 'k.', 'markersize', 1);
    end
  end
end
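A hypothetical invocation of the sketch above: ikeda(0.9, 'trajectory') plots full trajectories, while ikeda(0.9, 'limit') keeps only the last 20 iterates of each point to approximate the attractor; the iteration count and tail length are illustrative choices rather than values fixed by the text.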
|
https://en.wikipedia.org/wiki/Human%E2%80%93robot%20interaction
|
Human–robot interaction (HRI) is the study of interactions between humans and robots. Human–robot interaction is a multidisciplinary field with contributions from human–computer interaction, artificial intelligence, robotics, natural language processing, design, and psychology. A subfield known as physical human–robot interaction (pHRI) has tended to focus on device design to enable people to safely interact with robotic systems.
Origins
Human–robot interaction has been a topic of both science fiction and academic speculation even before any robots existed. Because much of active HRI development depends on natural language processing, many aspects of HRI are continuations of human communications, a field of research which is much older than robotics.
The origin of HRI as a discrete problem was stated by 20th-century author Isaac Asimov in 1941, in the stories later collected in I, Robot. Asimov coined the Three Laws of Robotics, namely:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
These three laws provide an overview of the goals engineers and researchers hold for safety in the HRI field, although the fields of robot ethics and machine ethics are more complex than these three principles. However, generally human–robot interaction prioritizes the safety of humans that interact with potentially dangerous robotics equipment. Solutions to this problem range from the philosophical approach of treating robots as ethical agents (individuals with moral agency), to the practical approach of creating safety zones. These safety zones use technologies such as lidar to detect human presence or physical barriers to protect humans by preventing any contact between machine and operator.
Although initially robots in the human–robot
|
https://en.wikipedia.org/wiki/List%20of%20podcast%20clients
|
A podcast client, or podcatcher, is a computer program used to stream or download podcasts, usually via an RSS or XML feed.
While podcast clients are best known for streaming and downloading podcasts, many are also capable of downloading video, newsfeeds, text, and pictures. Some of these podcast clients can also automate the transfer of received audio files to a portable media player. Although many include a directory of high-profile podcasts, they generally allow users to manually subscribe directly to a feed by providing the URL.
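As an illustration of the core mechanism, here is a minimal sketch in Python of what a podcatcher does (standard library only; the feed URL is a placeholder): fetch the RSS feed, then read each episode's enclosure URL, which is what the client would download.

import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://example.com/podcast.rss"  # placeholder feed URL

with urllib.request.urlopen(FEED_URL) as resp:
    root = ET.fromstring(resp.read())

# In RSS 2.0, each episode is an <item>; the audio file is in <enclosure url="...">.
for item in root.iter("item"):
    title = item.findtext("title", default="(untitled)")
    enclosure = item.find("enclosure")
    if enclosure is not None:
        print(title, "->", enclosure.get("url"))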
The core concepts had been developing since 2000; the first commercial podcast client software was developed in 2001.
Podcasts were made popular when Apple added podcatching to its iTunes software and iPod portable media player in June 2005. Apple Podcasts is currently included in all Apple devices, such as iPhone, iPad and Mac computers.
Podcast clients
See also
Comparison of audio player software
External links
"Podcast Software (Clients)" at podcastingnews.com – Archived page
podcatchermatrix.org compares the features of a number of podcast clients.
"Podcast Client Feature Comparison Matrix" as a Google Sheets spreadsheet.
|
https://en.wikipedia.org/wiki/Detritus
|
In biology, detritus () is dead particulate organic material, as distinguished from dissolved organic material. Detritus typically includes the bodies or fragments of bodies of dead organisms, and fecal material. Detritus typically hosts communities of microorganisms that colonize and decompose (i.e. remineralize) it. In terrestrial ecosystems it is present as leaf litter and other organic matter that is intermixed with soil, which is denominated "soil organic matter". The detritus of aquatic ecosystems is organic material that is suspended in the water and accumulates in depositions on the floor of the body of water; when this floor is a seabed, such a deposition is denominated "marine snow".
Theory
The corpses of dead plants or animals, material derived from animal tissues (e.g. molted skin), and fecal matter gradually lose their form due to physical processes and the action of decomposers, including grazers, bacteria, and fungi. Decomposition, the process by which organic matter is decomposed, occurs in several phases. Micro- and macro-organisms that feed on it rapidly consume and absorb materials such as proteins, lipids, and sugars that are low in molecular weight, while other compounds such as complex carbohydrates are decomposed more slowly. The decomposing microorganisms degrade the organic materials so as to gain the resources they require for their survival and reproduction. Accordingly, simultaneous with microorganisms' decomposition of the materials of dead plants and animals is their assimilation of decomposed compounds to construct more of their biomass (i.e. to grow their own bodies). When microorganisms die, fine organic particles are produced, and if small animals that feed on microorganisms eat these particles, they collect inside the intestines of the consumers and are shaped into large pellets of dung. As a result of this process, most of the materials of dead organisms disappear and are not visible and recognizable in any form, but are pres
|
https://en.wikipedia.org/wiki/Radioisotope%20piezoelectric%20generator
|
A radioisotope piezoelectric generator (RPG) is a type of radioisotope generator that converts energy stored in radioactive materials into motion, which is used to generate electricity using the repeated deformation of a piezoelectric material. This approach creates a high-impedance source and, unlike chemical batteries, the devices will work at a very wide range of temperatures.
Description
A piezoelectric cantilever is mounted directly above a base of the radioactive isotope nickel-63. All of the radiation emitted as the millicurie-level nickel-63 thin film decays is in the form of beta radiation, which consists of electrons. As the cantilever accumulates the emitted electrons, it builds up a negative charge at the same time that the isotope film becomes positively charged. The beta particles essentially transfer electronic charge from the thin film to the cantilever. The opposite charges cause the cantilever to bend toward the isotope film. Just as the cantilever touches the thin-film isotope, the charge jumps the gap. That permits current to flow back onto the isotope, equalizing the charge and resetting the cantilever. As long as the isotope is decaying - a process that can last for decades - the tiny cantilever will continue its up-and-down motion. As the cantilever directly generates electricity when deformed, a charge pulse is released each time the cantilever cycles.
Radioactive isotopes can continue to release energy over periods ranging from weeks to decades. The half-life of nickel-63, for example, is over 100 years, so a battery using this isotope might continue to supply useful energy for at least half that time. Researchers have demonstrated devices with about 7% efficiency, with self-reciprocating actuators ranging from high frequencies (120 Hz) down to very low frequencies (one cycle every three hours).
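To make the scale concrete, here is a rough back-of-the-envelope sketch in Python (the film activity, gap capacitance, and gap voltage are illustrative assumptions, not values from the article):

ACTIVITY_CI = 1e-3                    # assumed 1 millicurie Ni-63 film
DECAYS_PER_S = ACTIVITY_CI * 3.7e10   # 1 curie = 3.7e10 decays per second
E_CHARGE = 1.602e-19                  # charge per beta electron, in coulombs

current = DECAYS_PER_S * E_CHARGE     # ~6 pA if every electron is collected
charge_needed = 10e-12 * 100.0        # Q = C*V for an assumed 10 pF gap at 100 V
print(f"charging current ~ {current:.1e} A")
print(f"cycle period ~ {charge_needed / current:.0f} s")  # a few minutes per cycle

Under these assumptions the cantilever would cycle every few minutes; higher activity or a smaller gap shortens the period, which is consistent with the wide frequency range quoted above.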
History
In 2002 researchers at Cornell University published and patented the first design.
See also
Atomic battery
Thermionic converter
Betavoltaics
Optoelectric nuclear bat
|
https://en.wikipedia.org/wiki/Radiant%20exitance
|
In radiometry, radiant exitance or radiant emittance is the radiant flux emitted by a surface per unit area, whereas spectral exitance or spectral emittance is the radiant exitance of a surface per unit frequency or wavelength, depending on whether the spectrum is taken as a function of frequency or of wavelength. This is the emitted component of radiosity. The SI unit of radiant exitance is the watt per square metre (W·m−2), while that of spectral exitance in frequency is the watt per square metre per hertz (W·m−2·Hz−1) and that of spectral exitance in wavelength is the watt per square metre per metre (W·m−3)—commonly the watt per square metre per nanometre (W·m−2·nm−1). The CGS unit erg per square centimetre per second (erg·cm−2·s−1) is often used in astronomy. Radiant exitance is often called "intensity" in branches of physics other than radiometry, but in radiometry this usage leads to confusion with radiant intensity.
Mathematical definitions
Radiant exitance
Radiant exitance of a surface, denoted $M_\mathrm{e}$ ("e" for "energetic", to avoid confusion with photometric quantities), is defined as

$M_\mathrm{e} = \frac{\partial \Phi_\mathrm{e}}{\partial A},$

where
$\partial$ is the partial derivative symbol,
$\Phi_\mathrm{e}$ is the radiant flux emitted, and
$A$ is the surface area.
If we want to talk about the radiant flux received by a surface, we speak of irradiance.
The radiant exitance of a black surface, according to the Stefan–Boltzmann law, is equal to:

$M_\mathrm{e}^\circ = \sigma T^4,$

where $\sigma$ is the Stefan–Boltzmann constant, and $T$ is the temperature of that surface.
For a real surface, the radiant exitance is equal to:

$M_\mathrm{e} = \varepsilon M_\mathrm{e}^\circ = \varepsilon \sigma T^4,$

where $\varepsilon$ is the emissivity of that surface.
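As a quick numerical illustration (the emissivity and temperature are example values, not from the text): a surface with $\varepsilon = 0.9$ at $T = 300\ \mathrm{K}$ emits

$M_\mathrm{e} = \varepsilon \sigma T^4 = 0.9 \times 5.670 \times 10^{-8}\ \mathrm{W\,m^{-2}\,K^{-4}} \times (300\ \mathrm{K})^4 \approx 413\ \mathrm{W/m^2}.$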
Spectral exitance
Spectral exitance in frequency of a surface, denoted $M_{\mathrm{e},\nu}$, is defined as

$M_{\mathrm{e},\nu} = \frac{\partial M_\mathrm{e}}{\partial \nu},$

where $\nu$ is the frequency.
Spectral exitance in wavelength of a surface, denoted $M_{\mathrm{e},\lambda}$, is defined as

$M_{\mathrm{e},\lambda} = \frac{\partial M_\mathrm{e}}{\partial \lambda},$

where $\lambda$ is the wavelength.
The spectral exitance of a black surface around a given frequency or wavelength, according to Lambert's cosine law and Planck's law, is equal to:

$M_{\mathrm{e},\nu}^\circ = \frac{2\pi h \nu^3}{c^2}\,\frac{1}{e^{h\nu/(kT)} - 1},$

$M_{\mathrm{e},\lambda}^\circ = \frac{2\pi h c^2}{\lambda^5}\,\frac{1}{e^{hc/(\lambda k T)} - 1},$

where
$h$ is the Planck constant,
$\nu$ is the frequency,
$\lambda$ is the wavelength,
$k$ is the Boltzmann constant,
$c$ is the speed of light in the medium, and
$T$ is the temperature of that surface.
|
https://en.wikipedia.org/wiki/Gobe%20Software
|
Gobe Software, Inc was a software company founded in 1997 by members of the ClarisWorks development team that developed and published an integrated desktop software suite for BeOS. In later years, it was the distributor of BeOS itself.
History
Gobe was founded in 1997 by members of the ClarisWorks development team and some of the authors of the original StyleWare application for the Apple II. After leaving StyleWare and creating the product later known as ClarisWorks and AppleWorks, Bob Hearn and Scott Holdaway joined Tom Hoke, Scott Lindsey, Bruce Q. Hammond, and Carl Grice, who had also worked at Apple Computer's Claris subsidiary, and formed Gobe Software, Inc. with the notion of creating a next-generation integrated office suite similar to ClarisWorks, but for the BeOS platform. It released Gobe Productive in 1998.
When Be Inc. outsourced publication of BeOS in 2000, Gobe became the publisher of BeOS in North America, Australia, and sections of Asia. Only weeks after signing up other publishers around the globe, Be, Inc. halted development for the BeOS platform and publicly announced that all of its corporate focus would be on "Internet Appliances" and made public announcements that hampered forward momentum of the BeOS platform. In addition, the publishers in general and Gobe in particular did not have source code access to the BeOS and were not able to continue its development or add drivers that the platform needed to be a viable alternative to Windows or Linux. Gobe also published Hicom Entertainment/Next Generation Entertainments "Corum III" role-playing game for BeOS during this period.
The failure of Be, Inc. and BeOS meant ports had to be undertaken, and Windows and Linux variants were developed. Although the company shipped a Windows version of its software in December 2001, it was unable to obtain sufficient operating capital after the 2000 stock market crash and suspended operations in 2002. In 2008 Gobe management began to work with distribution and developm
|
https://en.wikipedia.org/wiki/Collinearity
|
In geometry, collinearity of a set of points is the property of their lying on a single line. A set of points with this property is said to be collinear (sometimes spelled as colinear). In greater generality, the term has been used for aligned objects, that is, things being "in a line" or "in a row".
Points on a line
In any geometry, the set of points on a line are said to be collinear. In Euclidean geometry this relation is intuitively visualized by points lying in a row on a "straight line". However, in most geometries (including Euclidean) a line is typically a primitive (undefined) object type, so such visualizations will not necessarily be appropriate. A model for the geometry offers an interpretation of how the points, lines and other object types relate to one another and a notion such as collinearity must be interpreted within the context of that model. For instance, in spherical geometry, where lines are represented in the standard model by great circles of a sphere, sets of collinear points lie on the same great circle. Such points do not lie on a "straight line" in the Euclidean sense, and are not thought of as being in a row.
A mapping of a geometry to itself which sends lines to lines is called a collineation; it preserves the collinearity property.
The linear maps (or linear functions) of vector spaces, viewed as geometric maps, map lines to lines; that is, they map collinear point sets to collinear point sets and so, are collineations. In projective geometry these linear mappings are called homographies and are just one type of collineation.
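As a small illustration of this preservation property, here is a sketch in Python (NumPy assumed; the points, the map, and the collinear helper are ours): three planar points are collinear exactly when their difference vectors span zero area, and a linear map scales that determinant by its own determinant, so zero stays zero.

import numpy as np

def collinear(p1, p2, p3, tol=1e-12):
    # zero area of the triangle spanned by the points <=> collinear
    m = np.array([np.subtract(p2, p1), np.subtract(p3, p1)], dtype=float)
    return abs(np.linalg.det(m)) < tol

pts = [(0, 0), (1, 2), (2, 4)]              # collinear: they lie on y = 2x
A = np.array([[3.0, 1.0], [0.0, 2.0]])      # an invertible linear map
images = [A @ np.array(p, dtype=float) for p in pts]
print(collinear(*pts), collinear(*images))  # True True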
Examples in Euclidean geometry
Triangles
In any triangle the following sets of points are collinear:
The orthocenter, the circumcenter, the centroid, the Exeter point, the de Longchamps point, and the center of the nine-point circle are collinear, all falling on a line called the Euler line.
The de Longchamps point also has other collinearities.
Any vertex, the tangency of the opposite side with an excircl
|
https://en.wikipedia.org/wiki/Multiple%20Registration%20Protocol
|
Multiple Registration Protocol (MRP), which replaced Generic Attribute Registration Protocol (GARP), is a generic registration framework defined by the IEEE 802.1ak amendment to the IEEE 802.1Q standard. MRP allows bridges, switches or other similar devices to register and de-register attribute values, such as VLAN identifiers and multicast group membership across a large local area network. MRP operates at the data link layer.
History
GARP was defined by the IEEE 802.1 working group to provide a generic framework allowing bridges (or other devices like switches) to register and de-register attribute values such as VLAN identifiers and multicast group membership. GARP defines the architecture, rules of operation, state machines and variables for the registration and de-registration of attribute values. GARP was used by two applications: the GARP VLAN Registration Protocol (GVRP), for registering VLAN trunking between multilayer switches, and the GARP Multicast Registration Protocol (GMRP). Both were mostly enhancements for VLAN-aware switches as defined in IEEE 802.1Q.
Multiple Registration Protocol (MRP) was introduced in order to replace GARP, with the IEEE 802.1ak amendment in 2007. The two GARP applications were also modified in order to use MRP. GMRP was replaced by Multiple MAC Registration Protocol (MMRP) and GVRP was replaced by Multiple VLAN Registration Protocol (MVRP). This change essentially moved the definitions of GARP, GVRP, and GMRP into an 802.1Q based environment, implying they were already VLAN aware. This also allowed for significant streamlining of the underlying protocol without much change to the interface of the applications themselves.
The new protocol and applications fixed a problem with the old GARP-based system, where a simple registration or a failover could take an extremely long time to converge on a large network, incurring significant bandwidth degradation.
It is expected GARP will be removed from
|
https://en.wikipedia.org/wiki/EXpressDSP
|
eXpressDSP is a software package produced by Texas Instruments (TI). This software package is a suite of tools used to develop applications on Texas Instruments digital signal processor line of chips.
It consists of:
An integrated development environment called Code Composer Studio IDE.
DSP/BIOS Real-Time OS kernel
Standards for application interoperability and reuse
Code examples for common applications, called the eXpressDSP Reference Frameworks
A number of third-party products from TI's DSP Third Party Program
eXpressDSP Algorithm Interface Standard
TI publishes an eXpressDSP Algorithm Interface Standard (XDAIS), an Application Programming Interface (API) designed to enable interoperability of real-time DSP algorithms.
|
https://en.wikipedia.org/wiki/Tip%20growth
|
Tip growth is an extreme form of polarised growth of living cells that results in an elongated cylindrical cell morphology with a rounded tip at which the growth activity takes place. Tip growth occurs in algae (e.g., Acetabularia acetabulum), fungi (hyphae) and plants (e.g. root hairs and pollen tubes).
Tip growth is a process that has many similarities in diverse walled cells such as pollen tubes, root hairs, and hyphae.
Fungal tip growth and hyphal tropisms
Fungal hyphae extend continuously at their extreme tips, where enzymes are released into the environment and where new wall materials are synthesised. The rate of tip extension can be extremely rapid - up to 40 micrometres per minute. It is supported by the continuous movement of materials into the tip from older regions of the hyphae. So, in effect, a fungal hypha is a continuously moving mass of protoplasm in a continuously extending tube. This unique mode of growth - apical growth - is the hallmark of fungi, and it accounts for much of their environmental and economic significance.
|
https://en.wikipedia.org/wiki/Intraflagellar%20transport
|
Intraflagellar transport (IFT) is a bidirectional motility along axoneme microtubules that is essential for the formation (ciliogenesis) and maintenance of most eukaryotic cilia and flagella. It is thought to be required to build all cilia that assemble within a membrane projection from the cell surface. Plasmodium falciparum cilia and the sperm flagella of Drosophila are examples of cilia that assemble in the cytoplasm and do not require IFT. The process of IFT involves movement of large protein complexes called IFT particles or trains from the cell body to the ciliary tip and followed by their return to the cell body. The outward or anterograde movement is powered by kinesin-2 while the inward or retrograde movement is powered by cytoplasmic dynein 2/1b. The IFT particles are composed of about 20 proteins organized in two subcomplexes called complex A and B.
IFT was first reported in 1993 by graduate student Keith Kozminski while working in the lab of Dr. Joel Rosenbaum at Yale University. The process of IFT has been best characterized in the biflagellate alga Chlamydomonas reinhardtii as well as the sensory cilia of the nematode Caenorhabditis elegans.
It has been suggested based on localization studies that IFT proteins also function outside of cilia.
Biochemistry
Intraflagellar transport (IFT) describes the bi-directional movement of non-membrane-bound particles along the doublet microtubules of the axonemes of flagella and motile cilia, between the axoneme and the plasma membrane. Studies have shown that the movement of IFT particles along the microtubule is carried out by two different microtubule motors; the anterograde (towards the flagellar tip) motor is heterotrimeric kinesin-2, and the retrograde (towards the cell body) motor is cytoplasmic dynein 1b. IFT particles carry axonemal subunits to the site of assembly at the tip of the axoneme; thus, IFT is necessary for axonemal growth. Therefore, since the axoneme needs a continually fresh supply of prote
|
https://en.wikipedia.org/wiki/RDM%20%28lighting%29
|
Remote Device Management (RDM) is a protocol enhancement to USITT DMX512 that allows bi-directional communication between a lighting or system controller and attached RDM compliant devices over a standard DMX line. This protocol will allow configuration, status monitoring, and management of these devices in such a way that does not disturb the normal operation of standard DMX512 devices that do not recognize the RDM protocol.
The standard was originally developed by the Entertainment Services and Technology Association - Technical Standards (ESTA) and is officially known as "ANSI E1.20, Remote Device Management Over DMX512 Networks".
Technical Details
RDM Physical layer
The RDM protocol and the RDM physical layer were designed to be compatible with legacy equipment. All compliant legacy DMX512 receivers should be usable in mixed systems with an RDM controller (console) and RDM responders (receivers). DMX receivers and RDM responders can be used with a legacy DMX console to form a DMX512 only system. From a user’s point of view the system layout is very similar to a DMX system. The controller is placed at one end of the main cable segment. The cable is run receiver to receiver in a daisy-chain fashion. RDM enabled splitters are used the same way DMX splitters would be. The far end (the non console or splitter end) of a cable segment should be terminated.
RDM requires two significant topology changes compared to DMX. However, these changes are generally internal to equipment and therefore not seen by the user.
First, a controller’s (console’s) output is terminated. Second, this termination must provide a bias to keep the line in the ‘marking state’ when no driver is enabled.
The reason for the additional termination is that a network segment will be driven at many points along its length. Hence, either end of the segment, if unterminated, will cause reflections.
A DMX console’s output drivers are always enabled. The RDM protocol is designed so that except
|
https://en.wikipedia.org/wiki/Bring%20radical
|
In algebra, the Bring radical or ultraradical of a real number a is the unique real root of the polynomial

$x^5 + x + a.$
The Bring radical of a complex number a is either any of the five roots of the above polynomial (it is thus multi-valued), or a specific root, which is usually chosen such that the Bring radical is real-valued for real a and is an analytic function in a neighborhood of the real line. Because of the existence of four branch points, the Bring radical cannot be defined as a function that is continuous over the whole complex plane, and its domain of continuity must exclude four branch cuts.
George Jerrard showed that some quintic equations can be solved in closed form using radicals and Bring radicals, which had been introduced by Erland Bring.
In this article, the Bring radical of a is denoted $\operatorname{BR}(a)$. For real argument, it is odd, monotonically decreasing, and unbounded, with asymptotic behavior $\operatorname{BR}(a) \sim -a^{1/5}$ for large $a$.
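A quick numerical sketch in Python (NumPy assumed; the function name is ours) evaluates BR(a) as the real root of x^5 + x + a and checks the odd symmetry and the large-a asymptote:

import numpy as np

def bring_radical(a):
    roots = np.roots([1, 0, 0, 0, 1, a])          # coefficients of x^5 + x + a
    real = roots[np.abs(roots.imag) < 1e-9].real  # the unique real root
    return float(real[0])

print(bring_radical(1.0), bring_radical(-1.0))  # ~ -0.754878 and +0.754878 (odd)
print(bring_radical(1e6), -(1e6) ** 0.2)        # both ~ -15.85: BR(a) ~ -a**(1/5)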
Normal forms
The quintic equation is rather difficult to obtain solutions for directly, with five independent coefficients in its most general form:

$x^5 + a_4 x^4 + a_3 x^3 + a_2 x^2 + a_1 x + a_0 = 0.$
The various methods for solving the quintic that have been developed generally attempt to simplify the quintic using Tschirnhaus transformations to reduce the number of independent coefficients.
Principal quintic form
The general quintic may be reduced into what is known as the principal quintic form, with the quartic and cubic terms removed:

$y^5 + c_2 y^2 + c_1 y + c_0 = 0.$
If the roots of a general quintic and a principal quintic are related by a quadratic Tschirnhaus transformation

$y_k = x_k^2 + \alpha x_k + \beta,$
the coefficients α and β may be determined by using the resultant, or by means of the power sums of the roots and Newton's identities. This leads to a system of equations in α and β consisting of a quadratic and a linear equation, and either of the two sets of solutions may be used to obtain the corresponding three coefficients of the principal quintic form.
This form is used by Felix Klein's solution to the quintic.
Bring–Jerrard normal form
It is possible to
|
https://en.wikipedia.org/wiki/Z-factor
|
The Z-factor is a measure of statistical effect size. It has been proposed for use in high-throughput screening (where it is also known as Z-prime), and commonly written as Z' to judge whether the response in a particular assay is large enough to warrant further attention.
Background
In high-throughput screens, experimenters often compare a large number (hundreds of thousands to tens of millions) of single measurements of unknown samples to positive and negative control samples. The particular choice of experimental conditions and measurements is called an assay. Large screens are expensive in time and resources. Therefore, prior to starting a large screen, smaller test (or pilot) screens are used to assess the quality of an assay, in an attempt to predict if it would be useful in a high-throughput setting. The Z-factor is an attempt to quantify the suitability of a particular assay for use in a full-scale, high-throughput screen.
Definition
The Z-factor is defined in terms of four parameters: the means ($\mu$) and standard deviations ($\sigma$) of both the positive (p) and negative (n) controls ($\mu_p$, $\sigma_p$, and $\mu_n$, $\sigma_n$). Given these values, the Z-factor is defined as:

$\text{Z-factor} = 1 - \frac{3(\sigma_p + \sigma_n)}{|\mu_p - \mu_n|}.$
In practice, the Z-factor is estimated from the sample means and sample standard deviations:

$\hat{Z} = 1 - \frac{3(\hat{\sigma}_p + \hat{\sigma}_n)}{|\hat{\mu}_p - \hat{\mu}_n|}.$
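A minimal Python sketch of this estimate (NumPy assumed; the function name and the illustrative control values are ours, not from the article):

import numpy as np

def z_factor(positive, negative):
    # plug-in estimate: sample means and sample standard deviations
    mu_p, sigma_p = np.mean(positive), np.std(positive, ddof=1)
    mu_n, sigma_n = np.mean(negative), np.std(negative, ddof=1)
    return 1 - 3 * (sigma_p + sigma_n) / abs(mu_p - mu_n)

rng = np.random.default_rng(0)
pos = rng.normal(100, 5, size=96)  # hypothetical positive-control wells
neg = rng.normal(10, 5, size=96)   # hypothetical negative-control wells
print(z_factor(pos, neg))          # ~0.67: well-separated controls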
Interpretation
The following interpretations for the Z-factor are taken from:
Note that by the standards of many types of experiments, a zero Z-factor would suggest a large effect size, rather than a borderline useless result as suggested above. For example, if $\sigma_p = \sigma_n = 1$, then $\mu_p = 6$ and $\mu_n = 0$ gives a zero Z-factor. But for normally-distributed data with these parameters, the probability that the positive control value would be less than the negative control value is less than 1 in $10^5$. Extreme conservatism is used in high-throughput screening due to the large number of tests performed.
Limitations
The constant factor 3 in the definition of the Z-factor is motivated by the normal distribution, for which more than 99% of values
|
https://en.wikipedia.org/wiki/Dialogue%20system
|
A dialogue system, or conversational agent (CA), is a computer system intended to converse with a human. Dialogue systems employ one or more of text, speech, graphics, haptics, gestures, and other modes for communication on both the input and output channels.
The elements of a dialogue system are not fully settled, as the idea is still under research; however, dialogue systems are distinct from chatbots. The typical GUI wizard engages in a sort of dialogue, but it includes very few of the common dialogue system components, and its dialogue state is trivial.
Background
After dialogue systems based only on written text processing, starting from the early 1960s, the first speaking dialogue system was issued by the DARPA Project in the USA in 1977. After the end of this 5-year project, some European projects issued the first dialogue systems able to speak many languages (including French, German and Italian). Those first systems were used in the telecom industry to provide various phone services in specific domains, e.g. automated agenda and train timetable services.
Components
What sets of components are included in a dialogue system, and how those components divide up responsibilities differs from system to system. Principal to any dialogue system is the dialogue manager, which is a component that manages the state of the dialogue, and dialogue strategy. A typical activity cycle in a dialogue system contains the following phases:
The user speaks, and the input is converted to plain text by the system's input recogniser/decoder, which may include:
automatic speech recogniser (ASR)
gesture recogniser
handwriting recogniser
The text is analysed by a natural language understanding (NLU) unit, which may include:
Proper Name identification
part-of-speech tagging
Syntactic/semantic parser
The semantic information is analysed by the dialogue manager, which keeps the history and state of the dialogue and manages the general flow of the conversation.
Usually, the dialogue manager contacts on
|
https://en.wikipedia.org/wiki/Pearson%E2%80%93Anson%20effect
|
The Pearson–Anson effect, discovered in 1922 by Stephen Oswald Pearson and Horatio Saint George Anson, is the phenomenon of an oscillating electric voltage produced by a neon bulb connected across a capacitor, when a direct current is applied through a resistor. This circuit, now called the Pearson-Anson oscillator, neon lamp oscillator, or sawtooth oscillator, is one of the simplest types of relaxation oscillator. It generates a sawtooth output waveform. It has been used in low frequency applications such as blinking warning lights, stroboscopes, tone generators in electronic organs and other electronic music circuits, and in time bases and deflection circuits of early cathode-ray tube oscilloscopes. Since the development of microelectronics, these simple negative resistance oscillators have been superseded in many applications by more flexible semiconductor relaxation oscillators such as the 555 timer IC.
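A back-of-the-envelope sketch of the sawtooth period in Python, assuming the capacitor charges through a resistor R toward a supply voltage Vs, the lamp fires at a breakdown voltage Vb and extinguishes at Ve (all component values are illustrative, and the brief discharge time is neglected):

import math

Vs, Vb, Ve = 120.0, 90.0, 60.0   # supply, firing and extinction voltages (V)
R, C = 1e6, 0.1e-6               # series resistor (ohms) and capacitor (farads)

# Time for the capacitor to recharge from Ve up to Vb through R:
period = R * C * math.log((Vs - Ve) / (Vs - Vb))
print(f"period ~ {period * 1e3:.0f} ms, frequency ~ {1 / period:.1f} Hz")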
Neon bulb as a switching device
A neon bulb, often used as an indicator lamp in appliances, consists of a glass bulb containing two electrodes, separated by an inert gas such as neon at low pressure. Its nonlinear current-voltage characteristics (diagram below) allow it to function as a switching device.
When a voltage is applied across the electrodes, the gas conducts almost no electric current until a threshold voltage is reached (point b), called the firing or breakdown voltage, Vb. At this voltage electrons in the gas are accelerated to a high enough speed to knock other electrons off gas atoms, which go on to knock off more electrons in a chain reaction. The gas in the bulb ionizes, starting a glow discharge, and its resistance drops to a low value. In its conducting state the current through the bulb is limited only by the external circuit. The voltage across the bulb drops to a lower voltage called the maintaining voltage Vm. The bulb will continue to conduct current until the applied voltage drops below the extinction voltage Ve (point d), w
|
https://en.wikipedia.org/wiki/Darwin%27s%20Radio
|
Darwin's Radio is a 1999 science fiction novel by Greg Bear. It won the Nebula Award for Best Novel in 2000 and the 2000 Endeavour Award. It was also nominated for the Hugo, Locus, and Campbell Awards the same year.
The novel's original tagline was "The next great war will be inside us." It was followed by a sequel, Darwin's Children, in 2003.
Plot summary
In the novel, a new form of endogenous retrovirus has emerged, SHEVA. It controls human evolution by rapidly evolving the next generation while it is in the womb, leading to speciation.
The novel follows several characters as the "plague" is discovered as well as the panicked reaction of the public and the US government to the disease.
Built into the human genome are non-coding sequences of DNA called introns. Certain portions of those "nonsense" sequences, remnants of prehistoric retroviruses, have been activated and are translating numerous LPCs (large protein complexes). The activation of SHEVA and its consequential sudden speciation was postulated to be controlled by a complex genetic network that perceives a need for modification, or to be a human adaptive response to overcrowding. The disease, or rather, gene activation, is passed on laterally from male to female like a sexually transmitted disease. If impregnated, a woman in her first trimester who has contracted SHEVA will miscarry a deformed female fetus made of little more than two ovaries. This "first stage fetus" leaves behind a fertilized egg with 52 chromosomes, rather than the typical 46 characteristic of Homo sapiens sapiens.
During the third trimester of the second stage pregnancy, both parents go into a pre-speciation puberty to prepare them for the needs of their novel child. Facial pigmentation changes underneath the old skin which begins sloughing off like a mask. Vocal organs and olfactory glands alter and sensitize respectively, to adapt for a new form of communication. For over a year after the first SHEVA outbreak in the United States, no
|
https://en.wikipedia.org/wiki/Sense%20Plan%20Act
|
Sense-Plan-Act was the predominant robot control methodology through 1985.
Sense - gather information using the sensors
Plan - create a world model using all the information, and plan the next move
Act - execute the planned move
SPA is used in iterations: after the acting phase, the cycle repeats, beginning again with the sensing phase.
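A minimal Python sketch of the cycle; the sensor and actuator functions are placeholders for real robot interfaces:

def sense():
    return {"obstacle_ahead": False}            # gather information using the sensors

def plan(world_model):
    # use the world model to decide the next move
    return "turn" if world_model["obstacle_ahead"] else "forward"

def act(action):
    print(f"executing: {action}")               # drive the actuators

world_model = {}
for _ in range(3):                              # SPA runs in iterations
    world_model.update(sense())
    act(plan(world_model))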
see also: OODA loop, PDCA, Continual improvement process
Robot architectures
|
https://en.wikipedia.org/wiki/Pumping%20lemma%20for%20context-free%20languages
|
In computer science, in particular in formal language theory, the pumping lemma for context-free languages, also known as the Bar-Hillel lemma, is a lemma that gives a property shared by all context-free languages and generalizes the pumping lemma for regular languages.
The pumping lemma can be used to construct a proof by contradiction that a specific language is not context-free. Conversely, the pumping lemma does not suffice to guarantee that a language is context-free; there are other necessary conditions, such as Ogden's lemma, or the Interchange lemma.
Formal statement
If a language L is context-free, then there exists some integer p ≥ 1 (called a "pumping length") such that every string s in L that has a length of p or more symbols (i.e. with |s| ≥ p) can be written as

s = uvwxy

with substrings u, v, w, x and y, such that

1. |vwx| ≤ p,
2. |vx| ≥ 1, and
3. uvⁿwxⁿy ∈ L for all n ≥ 0.

Below is a formal expression of the pumping lemma:

∀L. (L context-free ⇒ ∃p ≥ 1. ∀s ∈ L. (|s| ≥ p ⇒ ∃u, v, w, x, y. (s = uvwxy ∧ |vwx| ≤ p ∧ |vx| ≥ 1 ∧ ∀n ≥ 0. uvⁿwxⁿy ∈ L)))
Informal statement and explanation
The pumping lemma for context-free languages (called just "the pumping lemma" for the rest of this article) describes a property that all context-free languages are guaranteed to have.
The property is a property of all strings in the language that are of length at least p, where p is a constant—called the pumping length—that varies between context-free languages.

Say s is a string of length at least p that is in the language.

The pumping lemma states that s can be split into five substrings, s = uvwxy, where vx is non-empty and the length of vwx is at most p, such that repeating v and x the same number of times (n) in s produces a string that is still in the language. It is often useful to repeat zero times, which removes v and x from the string. This process of "pumping up" s with additional copies of v and x is what gives the pumping lemma its name.

Finite languages (which are regular and hence context-free) obey the pumping lemma trivially by having p equal to the maximum string length in L plus one. As there are no strings of this length, the pumping lemma is not violated.
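An illustrative check (not a proof) in Python: for the context-free language L = { aⁿbⁿ }, the decomposition below satisfies all three conditions, so the pumped strings stay in L:

def in_language(s: str) -> bool:
    n = len(s) // 2
    return s == "a" * n + "b" * n

p = 4                        # a pumping length that works for this language
s = "a" * p + "b" * p        # a string of length >= p
u, v, w, x, y = "a" * (p - 1), "a", "", "b", "b" * (p - 1)
assert len(v + w + x) <= p and len(v + x) >= 1

for n in range(5):           # pump v and x the same number of times
    assert in_language(u + v * n + w + x * n + y)
print("all pumped strings remain in the language")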
|
https://en.wikipedia.org/wiki/Charcot%E2%80%93Leyden%20crystals
|
Charcot–Leyden crystals are microscopic crystals composed of eosinophil protein galectin-10 found in people who have allergic diseases such as asthma or parasitic infections such as parasitic pneumonia or ascariasis.
Appearance
Charcot–Leyden crystals are composed of an eosinophilic lysophospholipase binding protein called galectin-10. They vary in size and may be as large as 50 µm in length. Charcot–Leyden crystals are slender and pointed at both ends, consisting of a pair of hexagonal pyramids joined at their bases. Normally colorless, they are stained purplish-red by trichrome.
Clinical significance
They are indicative of a disease involving eosinophilic inflammation or proliferation, such as is found in allergic reactions (asthma, bronchitis, allergic rhinitis and rhinosinusitis) and parasitic infections such as those caused by Entamoeba histolytica, Necator americanus, and Ancylostoma duodenale.
Charcot–Leyden crystals are often seen pathologically in patients with bronchial asthma.
History
Friedrich Albert von Zenker was the first to notice these crystals, doing so in 1851, after which they were described jointly by Jean-Martin Charcot and Charles-Philippe Robin in 1853, then in 1872 by Ernst Viktor von Leyden.
See also
Curschmann's Spirals
References
External links
Tulane Lung pathology
Charcot Leyden crystals at UDEL
Scientists solve a century-old mystery to treat asthma and airway inflammation
Pathology
|
https://en.wikipedia.org/wiki/Callendar%E2%80%93Van%20Dusen%20equation
|
The Callendar–Van Dusen equation describes the relationship between resistance (R) and temperature (T) of platinum resistance thermometers (RTDs).
As commonly used for commercial applications of RTD thermometers, the relationship between resistance and temperature is given by the following equations. The relationship above 0 °C (up to the melting point of aluminum, ~660 °C) is a simplification of the equation that holds over a broader range down to −200 °C. The longer form was published in 1925 (see below) by M.S. Van Dusen and is given as:

R(T) = R0(1 + AT + BT² + C(T − 100)T³)   (−200 °C < T < 0 °C)

While the simpler form was published earlier by Callendar, it is generally valid only over the range between 0 °C and 661 °C and is given as:

R(T) = R0(1 + AT + BT²)   (0 °C ≤ T < 661 °C)

where constants A, B, and C are derived from experimentally determined parameters α, β, and δ using resistance measurements made at 0 °C, 100 °C and 260 °C.

Together, these give the constants as A = α + αδ/100, B = −αδ/10⁴, and C = −αβ/10⁸.
It is important to note that these equations are listed as the basis for the temperature/resistance tables for idealized platinum resistance thermometers and are not intended to be used for the calibration of an individual thermometer, which would require the experimentally determined parameters to be found.
These equations are cited in international standards for platinum RTDs' resistance-versus-temperature functions, DIN/IEC 60751 (also called IEC 751), also adopted as BS-1904, and with some modification, JIS C1604.
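A Python sketch of the idealized resistance-temperature tables, using the commonly tabulated IEC 60751 coefficients for a Pt100 element (the coefficient values are assumed here, not taken from the text above):

R0 = 100.0          # resistance at 0 degC, ohms (Pt100)
A = 3.9083e-3
B = -5.775e-7
C = -4.183e-12      # used only below 0 degC

def cvd_resistance(t):
    if t >= 0:      # Callendar form, roughly 0 degC to 661 degC
        return R0 * (1 + A * t + B * t**2)
    # Van Dusen extension, roughly -200 degC to 0 degC
    return R0 * (1 + A * t + B * t**2 + C * (t - 100) * t**3)

print(cvd_resistance(100.0))   # ~138.5 ohms for a Pt100 at 100 degC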
The equation was found by British physicist Hugh Longbourne Callendar, and refined for measurements at lower temperatures by M. S. Van Dusen, a chemist at the U.S. National Bureau of Standards (now known as the National Institute of Standards and Technology ) in work published in 1925 in the Journal of the American Chemical Society.
Starting in 1968, the Callendar-Van Dusen Equation was replaced by an interpolating formula given by a 20th order polynomial first published in The International Practical Temperature Scale of 1968 by the Comité International des Poids et Mesures.
Starting
|
https://en.wikipedia.org/wiki/Fatou%E2%80%93Lebesgue%20theorem
|
In mathematics, the Fatou–Lebesgue theorem establishes a chain of inequalities relating the integrals (in the sense of Lebesgue) of the limit inferior and the limit superior of a sequence of functions to the limit inferior and the limit superior of integrals of these functions. The theorem is named after Pierre Fatou and Henri Léon Lebesgue.
If the sequence of functions converges pointwise, the inequalities turn into equalities and the theorem reduces to Lebesgue's dominated convergence theorem.
Statement of the theorem
Let f1, f2, ... denote a sequence of real-valued measurable functions defined on a measure space (S,Σ,μ). If there exists a Lebesgue-integrable function g on S which dominates the sequence in absolute value, meaning that |fn| ≤ g for all natural numbers n, then all fn as well as the limit inferior and the limit superior of the fn are integrable and

∫ liminf fn dμ ≤ liminf ∫ fn dμ ≤ limsup ∫ fn dμ ≤ ∫ limsup fn dμ   (limits as n → ∞, integrals over S).
Here the limit inferior and the limit superior of the fn are taken pointwise. The integral of the absolute value of these limiting functions is bounded above by the integral of g.
Since the middle inequality (for sequences of real numbers) is always true, the directions of the other inequalities are easy to remember.
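A standard example (not from this article) showing that the outer inequalities can be strict: on [0, 1] with Lebesgue measure and dominating function g ≡ 1, let fn alternate between the indicator functions of the two halves of the interval. In LaTeX notation:

f_n = \begin{cases} \mathbf{1}_{[0,1/2]} & n \text{ even} \\ \mathbf{1}_{(1/2,1]} & n \text{ odd} \end{cases}
\qquad
\int_S \liminf_{n\to\infty} f_n\,d\mu = 0 < \tfrac{1}{2} = \liminf_{n\to\infty} \int_S f_n\,d\mu = \limsup_{n\to\infty} \int_S f_n\,d\mu < 1 = \int_S \limsup_{n\to\infty} f_n\,d\mu .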
Proof
All fn as well as the limit inferior and the limit superior of the fn are measurable and dominated in absolute value by g, hence integrable.
The first inequality follows by applying Fatou's lemma to the non-negative functions fn + g and using the linearity of the Lebesgue integral. The last inequality is the reverse Fatou lemma.
Since g also dominates the limit superior of the |fn|,

0 ≤ |∫ liminf fn dμ| ≤ ∫ |liminf fn| dμ ≤ ∫ limsup |fn| dμ ≤ ∫ g dμ < ∞

by the monotonicity of the Lebesgue integral. The same estimates hold for the limit superior of the fn.
References
Topics in Real and Functional Analysis by Gerald Teschl, University of Vienna.
External links
Theorems in real analysis
Theorems in measure theory
Articles containing proofs
|
https://en.wikipedia.org/wiki/Internet%20Mail%20Consortium
|
The Internet Mail Consortium (IMC) was an organization between 1996 and 2002 that claimed to be the only international organization focused on cooperatively managing and promoting the rapidly expanding world of electronic mail on the Internet.
Purpose
The goals of the IMC included greatly expanding the role of mail on the Internet into areas such as commerce and entertainment, advancing new Internet mail technologies, and making it easier for all Internet users, particularly novices, to get the most out of the growing communications medium. It did this by providing information about all the Internet mail standards and technologies. It also prepared reports that supplemented the Internet Engineering Task Force's RFCs.
Headquartered in Santa Cruz, California, the IMC was founded by Paul E. Hoffman about 1996 and ceased activity in 2002.
See also
Versit Consortium
References
External links
Official Website
Internet governance organizations
Internet Standard organizations
Internet-related organizations
Task forces
Email
|
https://en.wikipedia.org/wiki/Overlapping%20interval%20topology
|
In mathematics, the overlapping interval topology is a topology which is used to illustrate various topological principles.
Definition
Given the closed interval [−1, 1] of the real number line, the open sets of the topology are generated from the half-open intervals [−1, b) with b > 0 and (a, 1] with a < 0. The topology therefore consists of intervals of the form [−1, b), (a, b), and (a, 1] with a < 0 < b, together with [−1, 1] itself and the empty set.
Properties
Any two distinct points in [−1, 1] are topologically distinguishable under the overlapping interval topology as one can always find an open set containing one but not the other point. However, every non-empty open set contains the point 0, which can therefore not be separated from any other point in [−1, 1], making [−1, 1] with the overlapping interval topology an example of a T0 space that is not a T1 space.
The overlapping interval topology is second countable, with a countable basis being given by the intervals [−1, s), (r, 1] and (r, s) with r < 0 < s and r and s rational.
See also
List of topologies
Particular point topology, a topology where sets are considered open if they are empty or contain a particular, arbitrarily chosen, point of the topological space
References
(See example 53)
Topological spaces
|
https://en.wikipedia.org/wiki/Stolp%20radio%20transmitter
|
Stolp radio transmitter was a broadcasting station close to Rathsdamnitz, Germany (since 1945: Dębnica Kaszubska, Poland), southeast of Stolp, Germany (since 1945: Słupsk, Poland). The facility, which went into service on December 1, 1938, was designed to explore whether a reduction of fading effects could be achieved via an extended surface antenna, as well as an increase in directionality by changing the phase position of individual radiators. A group of 10 antennas arranged on a circle of 1 km diameter around a central antenna was planned. A model test consisting of a 50 m high, free-standing wooden tower supporting a vertical wire which worked as the antenna was completed.
Until July 1939 six further towers of the same type were built on a circle with 150 m diameter around the central antenna tower. All these towers were the tallest wooden lattice towers with a triangular cross-section ever built in Germany.
The antennas were fed through an underground cable, which ran from the transmitter building, 180 m away, to the central tower, where a distributor for the transmission power was installed. From this distributor, overhead single-wire lines mounted on 4 m high wooden poles ran to the antenna towers on the circle, feeding their antennas with the transmission power.
In 1940 south of the transmission building a 50 m tall guyed mast radiator, which was manufactured by Jucho, was erected.
The facility survived World War II and was used shortly afterwards to broadcast the programme of the Russian military broadcaster "Radio Volga". However, in 1955 the facility was completely demolished after the removal of all technical equipment.
References
External links
Information on WWII German radio stations (in German)
Former radio masts and towers
Broadcast transmitters
Communications in Poland
1938 establishments in Germany
1955 disestablishments in Poland
|
https://en.wikipedia.org/wiki/Eagle%20Computer
|
Eagle Computer, Inc., was an early American computer company based in Los Gatos, California. Spun off from Audio-Visual Laboratories (AVL), it first sold a line of popular CP/M computers which were highly praised in the computer magazines of the day. After the IBM PC was launched, Eagle produced the Eagle 1600 series, which ran MS-DOS but were not true clones. When it became evident that the buying public wanted actual clones of the IBM PC, even if a non-clone had better features, Eagle responded with a line of clones, including a portable. The Eagle PCs were always rated highly in computer magazines.
CP/M models
Multi-image models
The AVL Eagle I and II had audio-visual connectors on the back. As a separate company, Eagle sold the Eagle I, II, III, IV, and V computer models and external SCSI/SASI hard-disk boxes called the File 10 and the File 40.
The first Eagle computers were produced by Audio Visual Labs (AVL), a company founded by Gary Kappenman in New Jersey in the early 1970s to produce proprietary large-format multi-image equipment. Kappenman introduced the world's first microprocessor-controlled multi-image programming computers, the ShowPro III and V, which were dedicated controllers. In 1980, AVL introduced the first non-dedicated controller, the Eagle. This first Eagle computer used a 16 kHz processor and had a 5-inch disk drive for online storage.
The Eagle ran PROCALL (PROgrammable Computer Audio-visual Language Library) software for writing cues to control up to 30 Ektagraphic projectors, five 16 mm film projectors and 20 auxiliary control points. Digital control data was sourced via an RCA or XLR-type audio connector at the rear of the unit. AVL's proprietary "ClockTrak" (a biphase digital timecode similar to, but incompatible with SMPTE timecode) was sourced from the control channel of a multitrack analog audio tape deck. The timed list of events in the Eagle was synchronized to the ClockTrak. Later versions of PROCALL included the option of
|
https://en.wikipedia.org/wiki/Blind%20deconvolution
|
In electrical engineering and applied mathematics, blind deconvolution is deconvolution without explicit knowledge of the impulse response function used in the convolution. This is usually achieved by making appropriate assumptions about the input to estimate the impulse response by analyzing the output. Blind deconvolution is not solvable without making assumptions on the input and impulse response. Most of the algorithms to solve this problem are based on the assumption that both the input and impulse response live in respective known subspaces. However, blind deconvolution remains a very challenging non-convex optimization problem even with this assumption.
In image processing
In image processing, blind deconvolution is a deconvolution technique that permits recovery of the target scene from a single or set of "blurred" images in the presence of a poorly determined or unknown point spread function (PSF). Regular linear and non-linear deconvolution techniques utilize a known PSF. For blind deconvolution, the PSF is estimated from the image or image set, allowing the deconvolution to be performed. Researchers have been studying blind deconvolution methods for several decades, and have approached the problem from different directions.
Most of the work on blind deconvolution started in the early 1970s. Blind deconvolution is used in astronomical imaging and medical imaging.
Blind deconvolution can be performed iteratively, whereby each iteration improves the estimation of the PSF and the scene, or non-iteratively, where one application of the algorithm, based on exterior information, extracts the PSF. Iterative methods include maximum a posteriori estimation and expectation-maximization algorithms. A good estimate of the PSF is helpful for quicker convergence but not necessary.
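A minimal sketch of such an iterative scheme, alternating Richardson–Lucy-style multiplicative updates between the scene and the PSF (an Ayers–Dainty-flavoured toy, not the specific algorithms named above; all names here are illustrative):

import numpy as np
from scipy.signal import fftconvolve

def rl_step(estimate, kernel, observed, eps=1e-12):
    # One multiplicative Richardson-Lucy update of `estimate`, with `kernel` held fixed.
    conv = fftconvolve(estimate, kernel, mode="same")
    ratio = observed / (conv + eps)
    return estimate * fftconvolve(ratio, kernel[::-1, ::-1], mode="same")

def blind_rl(observed, n_outer=10, n_inner=5):
    # observed: 2-D float array (e.g. a blurred image). Scene and PSF are kept
    # the same shape as the image so the convolutions stay aligned.
    scene = np.full_like(observed, observed.mean())
    psf = np.zeros_like(observed)
    cy, cx = psf.shape[0] // 2, psf.shape[1] // 2
    psf[cy - 2:cy + 3, cx - 2:cx + 3] = 1.0     # broad initial PSF guess
    psf /= psf.sum()
    for _ in range(n_outer):
        for _ in range(n_inner):                # refine the PSF estimate
            psf = rl_step(psf, scene, observed)
        psf /= psf.sum()
        for _ in range(n_inner):                # refine the scene estimate
            scene = rl_step(scene, psf, observed)
    return scene, psf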
Examples of non-iterative techniques include SeDDaRA, the cepstrum transform and APEX. The cepstrum transform and APEX methods assume that the PSF has a specific shape, and one must estimate the width of t
|
https://en.wikipedia.org/wiki/Rpath
|
In computing, rpath designates the run-time search path hard-coded in an executable file or library. Dynamic linking loaders use the rpath to find required libraries.
Specifically, it encodes a path to shared libraries into the header of an executable (or another shared library). This RPATH header value (so named in the Executable and Linkable Format header standards) may either override or supplement the system default dynamic linking search paths.
The rpath of an executable or shared library is an optional entry in the .dynamic section of the ELF executable or shared libraries, with the type DT_RPATH, called the DT_RPATH attribute. It can be stored there at link time by the linker. Tools such as chrpath and patchelf can create or modify the entry later.
Use of the DT_RPATH entry by the dynamic linker
The different dynamic linkers for ELF implement the use of the DT_RPATH attribute in different ways.
GNU ld.so
The dynamic linker of the GNU C Library searches for shared libraries in the following locations in order:
The (colon-separated) paths in the DT_RPATH dynamic section attribute of the binary if present and the DT_RUNPATH attribute does not exist.
The (colon-separated) paths in the environment variable LD_LIBRARY_PATH, unless the executable is a setuid/setgid binary, in which case it is ignored. LD_LIBRARY_PATH can be overridden by calling the dynamic linker with the option --library-path (e.g. /lib/ld-linux.so.2 --library-path $HOME/mylibs myprogram).
The (colon-separated) paths in the DT_RUNPATH dynamic section attribute of the binary if present.
Lookup based on the ldconfig cache file (often located at /etc/ld.so.cache) which contains a compiled list of candidate libraries previously found in the augmented library path (set by /etc/ld.so.conf). If, however, the binary was linked with the -z nodefaultlib linker option, libraries in the default library paths are skipped.
In the trusted default path /lib, and then /usr/lib. If the binary was linked
|
https://en.wikipedia.org/wiki/Experts%20Exchange
|
Experts Exchange (EE) is a website for people in information technology (IT) related jobs to ask each other for tech help, receive instant help via chat, hire freelancers, and browse tech jobs. Controversy has surrounded their policy of providing answers only via paid subscription.
History
Experts Exchange went live in October 1996. The first question asked was for a "Case sensitive Win31 HTML Editor".
Experts Exchange went bankrupt in 2001 after venture capitalists moved the company to San Mateo, CA, and was brought back largely through the efforts of unpaid volunteers.
Later, Austin Miller and Randy Redberg took ownership of Experts Exchange, and the company was made profitable again. Experts Exchange claims to have more than 3 million solutions. Its users are mainly young to middle-aged males in the IT field.
Paywall
In the past, the site employed HTTP cookie and HTTP referer inspection to display content selectively. The page shown employed JavaScript to display answers to humans after some content showing how to become a member. Subsequently, when an internal link was clicked by the user, they were blocked from viewing the answer information until either becoming a paid member or spoofing their browser's User Agent string to that of a search engine crawler such as GoogleBot.
In response to these obfuscation techniques, which prevented anonymous users from seeing answer content, a few members of the community wrote articles about how to bypass the obfuscation by spoofing one's web browser referrer using an add-on like Smart Referrer and setting the referer to appear to come from Google.
Stack Overflow founder Jeff Atwood cited Experts-Exchange's poor reputation and paywall as a motivation for creating Stack Overflow.
See also
Bulletin board system
Chat room
Internet forum
Virtual community
References
External links
Experts Exchange
American social networking websites
Software developer communities
Internet properties established in 1996
Question-and-ans
|
https://en.wikipedia.org/wiki/Attack%20model
|
In cryptanalysis, attack models or attack types are a classification of cryptographic attacks specifying the kind of access a cryptanalyst has to a system under attack when attempting to "break" an encrypted message (also known as ciphertext) generated by the system. The greater the access the cryptanalyst has to the system, the more useful information they can get to utilize for breaking the cipher.
In cryptography, a sending party uses a cipher to encrypt (transform) a secret plaintext into a ciphertext, which is sent over an insecure communication channel to the receiving party. The receiving party uses an inverse cipher to decrypt the ciphertext to obtain the plaintext. A secret knowledge is required to apply the inverse cipher to the ciphertext. This secret knowledge is usually a short number or string called a key. In a cryptographic attack a third party cryptanalyst analyzes the ciphertext to try to "break" the cipher, to read the plaintext and obtain the key so that future enciphered messages can be read. It is usually assumed that the encryption and decryption algorithms themselves are public knowledge and available to the cryptanalyst, as this is the case for modern ciphers which are published openly. This assumption is called Kerckhoffs's principle.
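A toy round trip in Python (illustrative only, and not a secure cipher): the sender encrypts the plaintext with a secret key, and the receiver applies the inverse cipher with the same key:

from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR with a repeating key is its own inverse, so one function both
    # encrypts and decrypts.
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

key = b"secret"
ciphertext = xor_cipher(b"attack at dawn", key)
assert xor_cipher(ciphertext, key) == b"attack at dawn"

Consistent with Kerckhoffs's principle, the function above can be public; only the key needs to remain secret.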
Models
Some common attack models are:
Ciphertext-only attack (COA) - in this type of attack it is assumed that the cryptanalyst has access only to the ciphertext, and has no access to the plaintext. This type of attack is the most likely case encountered in real life cryptanalysis, but is the weakest attack because of the cryptanalyst's lack of information. Modern ciphers are required to be very resistant to this type of attack. In fact, a successful cryptanalysis in the COA model usually requires that the cryptanalyst must have some information on the plaintext, such as its distribution, the language in which the plaintexts are written, standard protocol data or framing which is part of the pla
|
https://en.wikipedia.org/wiki/Transcritical%20bifurcation
|
In bifurcation theory, a field within mathematics, a transcritical bifurcation is a particular kind of local bifurcation, meaning that it is characterized by an equilibrium having an eigenvalue whose real part passes through zero.
A transcritical bifurcation is one in which a fixed point exists for all values of a parameter and is never destroyed. However, such a fixed point interchanges its stability with another fixed point as the parameter is varied. In other words, both before and after the bifurcation, there is one unstable and one stable fixed point. However, their stability is exchanged when they collide. So the unstable fixed point becomes stable and vice versa.
The normal form of a transcritical bifurcation is

dx/dt = rx − x².

This equation is similar to the logistic equation, but in this case we allow r and x to be positive or negative (while in the logistic equation x and r must be non-negative).

The two fixed points are at x = 0 and x = r. When the parameter r is negative, the fixed point at x = 0 is stable and the fixed point x = r is unstable. But for r > 0, the point at x = 0 is unstable and the point at x = r is stable. So the bifurcation occurs at r = 0.
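A numerical sketch of the stability exchange in Python (forward-Euler integration with illustrative step sizes):

def f(x, r):
    return r * x - x * x          # the normal form dx/dt = r*x - x^2

def settle(x0, r, dt=0.01, steps=2000):
    x = x0
    for _ in range(steps):
        x += dt * f(x, r)
    return x

for r in (-1.0, 1.0):
    print(r, round(settle(0.1, r), 3))   # settles near 0 for r < 0, near r for r > 0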
A typical example (in real life) could be the consumer-producer problem where the consumption is proportional to the (quantity of) resource.
For example (with p denoting a consumption rate):

dx/dt = rx(1 − x/K) − px

where

rx(1 − x/K) is the logistic equation of resource growth; and
px is the consumption, proportional to the resource x.
References
Bifurcation theory
|
https://en.wikipedia.org/wiki/Saddle-node%20bifurcation
|
In the mathematical area of bifurcation theory a saddle-node bifurcation, tangential bifurcation or fold bifurcation is a local bifurcation in which two fixed points (or equilibria) of a dynamical system collide and annihilate each other. The term 'saddle-node bifurcation' is most often used in reference to continuous dynamical systems. In discrete dynamical systems, the same bifurcation is often instead called a fold bifurcation. Another name is blue sky bifurcation in reference to the sudden creation of two fixed points.
If the phase space is one-dimensional, one of the equilibrium points is unstable (the saddle), while the other is stable (the node).
Saddle-node bifurcations may be associated with hysteresis loops and catastrophes.
Normal form
A typical example of a differential equation with a saddle-node bifurcation is:

dx/dt = r + x²

Here x is the state variable and r is the bifurcation parameter.

If r < 0 there are two equilibrium points, a stable equilibrium point at x = −√(−r) and an unstable one at x = +√(−r).

At r = 0 (the bifurcation point) there is exactly one equilibrium point. At this point the fixed point is no longer hyperbolic. In this case the fixed point is called a saddle-node fixed point.

If r > 0 there are no equilibrium points.
In fact, this is a normal form of a saddle-node bifurcation. A scalar differential equation dx/dt = f(r, x) which has a fixed point at x = 0 for r = 0 with ∂f/∂x(0, 0) = 0 is locally topologically equivalent to dx/dt = r ± x², provided it satisfies ∂²f/∂x²(0, 0) ≠ 0 and ∂f/∂r(0, 0) ≠ 0. The first condition is the nondegeneracy condition and the second condition is the transversality condition.
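A small Python sketch enumerating the equilibria of the normal form dx/dt = r + x² as r is varied:

import math

for r in (-1.0, 0.0, 1.0):
    if r < 0:
        roots = (-math.sqrt(-r), math.sqrt(-r))   # stable/unstable pair
    elif r == 0:
        roots = (0.0,)                            # the saddle-node point
    else:
        roots = ()                                # the equilibria have annihilated
    print(r, roots)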
Example in two dimensions
An example of a saddle-node bifurcation in two dimensions occurs in the two-dimensional dynamical system:

dx/dt = r − x²
dy/dt = −y

As can be seen from the animation obtained by plotting phase portraits by varying the parameter r:

When r is negative, there are no equilibrium points.
When r = 0, there is a saddle-node point.
When r is positive, there are two equilibrium points: that is, one saddle point and one node (either an attractor or a repellor).
Other exa
|
https://en.wikipedia.org/wiki/Man%20After%20Man
|
Man After Man: An Anthropology of the Future is a 1990 speculative evolution and science fiction book written by Scottish geologist and palaeontologist Dougal Dixon and illustrated by Philip Hood. The book also features a foreword by Brian Aldiss. Man After Man explores a hypothetical future path of human evolution set from 200 years in the future to 5 million years in the future, with several future human species evolving through genetic engineering and natural means through the course of the book.
Man After Man is Dixon's third work on speculative evolution, following After Man (1981) and The New Dinosaurs (1988). Unlike the previous two books, which were written much like field guides, the focus of Man After Man lies much on the individual perspectives of future human individuals of various species. Man After Man, like its predecessors, uses its fictional setting to explore and explain real natural processes, in this case climate change through the eyes of the various human descendants in the book, who have been engineered specifically to adapt to it.
Reviews of Man After Man were generally positive, but more mixed than those of the previous books, and criticised its scientific basis to a greater extent than that of its predecessors. Dixon himself is not fond of the book, having referred to it as a "disaster of a project". During writing, the book changed considerably from its initial concept, which Dixon instead repurposed for his later book Greenworld (2010).
Summary
Man After Man explores an imaginary future evolutionary path of humanity, from 200 years in the future to five million years in the future. It contains several technological, social and biological concepts, most prominently genetic engineering but also parasitism, slavery, and elective surgery. As a result of mankind's technological prowess, evolution is accelerated, producing several species with varying intraspecific relations, many of them unrecognizable as humans.
Instead of the field guide-like
|
https://en.wikipedia.org/wiki/Orion%20Electric
|
Orion Electric was a Japanese consumer electronics company that was established in 1958 in Osaka, Japan. Its devices were branded as "Orion". The company was called Orion Electric until Brain and Capital Holdings, Inc. (a Japanese company) acquired it in 2019. From 1984 until the acquisition, its headquarters were based in Echizen, Fukui, Japan. Products manufactured and sold under the Orion brand included transistor radios, radio/cassette recorders, car stereos, and home stereo systems. Before the acquisition, Orion was one of the world's largest OEM television and video equipment manufacturers, primarily supplying major-brand OEM customers, with Toshiba being its major customer in the 2000s. Orion produced around six million televisions and twelve million DVD player and TV combo units each year until 2019. Most of its products were manufactured in Thailand.
The Orion Group employed in excess of 9,000 workers. They had factories and offices in Japan, Thailand, Poland, the United Kingdom, and the United States. Orion's flagship factories in Thailand were among Thailand's top exporters, and they were recognized with an award from the Thai Government for their contribution.
Orion manufactured products primarily for Memorex, Otake, Hitachi, JVC, Emerson, and Sansui. In the North American market, Orion manufactured many televisions and VCRs for Emerson Radio during the 1980s and 1990s, but when Emerson Radio went bankrupt in 2000, the Emerson brand and their assets were bought by Orion's primary competitor, Funai. During the 1990s, Orion and another of their brand names, World, were exclusively sold by Wal-Mart. The products sold consisted of discounted televisions, TV/VCR combos, and VHS players. In 2001, at its peak, Orion partnered with Toshiba to manufacture smaller CRT and LCD televisions, combo televisions, and DVD/VCR combos for the North American market, until 2009. After Toshiba exited, Orion's production numbers dropped by more than 90% and ran in
|
https://en.wikipedia.org/wiki/MIPS-X
|
MIPS-X is a reduced instruction set computer (RISC) microprocessor and instruction set architecture (ISA) developed as a follow-on project to the MIPS project at Stanford University by the same team that developed MIPS. The project, supported by the Defense Advanced Research Projects Agency (DARPA), began in 1984, and its final form was described in a set of papers released in 1986–87. Unlike its older cousin, MIPS-X was never commercialized as a workstation central processing unit (CPU), and has mainly been seen in embedded system designs based on chips designed by Integrated Information Technology (IIT) for use in digital video applications.
MIPS-X, while designed by the same team and architecturally very similar, is instruction-set incompatible with the mainline MIPS architecture R-series processors. The MIPS-X processor is obscure enough that, as of November 20, 2005, support for it is provided only by specialist developers (such as Green Hills Software), and is notably missing from the GNU Compiler Collection (GCC).
MIPS-X has become important among DVD player firmware hackers, since many DVD players (especially low-end devices) use chips based on the IIT design (and produced by ESS Technology), as their central processor. Devices such as the ESS VideoDrive system on a chip (SoC) also include a digital signal processor (DSP) (coprocessor) for decoding MPEG audio and video streams.
External links
The original MIPS-X paper from Stanford
Instruction set architectures
MIPS architecture
Stanford University
32-bit microprocessors
|
https://en.wikipedia.org/wiki/Bragg%20plane
|
In physics, a Bragg plane is a plane in reciprocal space which bisects a reciprocal lattice vector, K, at right angles. The Bragg plane is defined as part of the Von Laue condition for diffraction peaks in x-ray diffraction crystallography.
Considering the adjacent diagram, the arriving x-ray plane wave is defined by:

e^(ik·r)

where k is the incident wave vector given by:

k = (2π/λ) n̂

where λ is the wavelength of the incident photon. While the Bragg formulation assumes a unique choice of direct lattice planes and specular reflection of the incident X-rays, the Von Laue formula only assumes monochromatic light and that each scattering center acts as a source of secondary wavelets as described by the Huygens principle. Each scattered wave contributes to a new plane wave with wave vector:

k′ = (2π/λ) n̂′

The condition for constructive interference in the n̂′ direction is that the path difference between the photons is an integer multiple (m) of their wavelength. For constructive interference we therefore have:

d · (n̂ − n̂′) = mλ

where d is the displacement between the two scattering centres. Multiplying the above by 2π/λ we formulate the condition in terms of the wave vectors, k and k′:

d · (k − k′) = 2πm

Now consider that a crystal is an array of scattering centres, each at a point in the Bravais lattice. We can set one of the scattering centres as the origin of an array. Since the lattice points are displaced by the Bravais lattice vectors, R, scattered waves interfere constructively when the above condition holds simultaneously for all values of R which are Bravais lattice vectors, the condition then becomes:

R · (k − k′) = 2πm

An equivalent statement (see mathematical description of the reciprocal lattice) is to say that:

e^(i(k − k′)·R) = 1

By comparing this equation with the definition of a reciprocal lattice vector, we see that constructive interference occurs if K = k − k′ is a vector of the reciprocal lattice. We notice that k and k′ have the same magnitude, so we can restate the Von Laue formulation as requiring that the tip of the incident wave vector, k, must lie in the plane that is a perpendicular bisector of the reciprocal lattice vector, K. This rec
|
https://en.wikipedia.org/wiki/Methoden%20der%20mathematischen%20Physik
|
Methoden der mathematischen Physik (Methods of Mathematical Physics) is a 1924 book, in two volumes totalling around 1000 pages, published under the names of Richard Courant and David Hilbert. It was a comprehensive treatment of the "methods of mathematical physics" of the time. The second volume is devoted to the theory of partial differential equations. It contains presages of the finite element method, on which Courant would work subsequently, and which would eventually become basic to numerical analysis.
The material of the book was worked up from the content of Hilbert's lectures. While Courant played the major editorial role, many at the University of Göttingen were involved in the writing-up, and in that sense it was a collective production.
On its appearance in 1924 it apparently had little direct connection to the quantum theory questions at the centre of the theoretical physics of the time. That changed within two years, since the formulation of Schrödinger's equation made the Hilbert-Courant techniques of immediate relevance to the new wave mechanics.
There was a second edition (1931/37), a wartime edition in the USA (1943), and a third German edition (1968). The English version Methods of Mathematical Physics (1953) was revised by Courant, and the second volume had extensive work done on it by the faculty of the Courant Institute. The books quickly gained a reputation as classics and are among the most highly referenced books in advanced mathematical physics courses.
References
Constance Reid (1986) Hilbert-Courant (separate biographies bound as one volume)
Methoden der mathematischen Physik online reproduction of 1924 German edition.
1924 non-fiction books
Mathematics books
|
https://en.wikipedia.org/wiki/Syntax%20%28programming%20languages%29
|
In computer science, the syntax of a computer language is the rules that define the combinations of symbols that are considered to be correctly structured statements or expressions in that language. This applies both to programming languages, where the document represents source code, and to markup languages, where the document represents data.
The syntax of a language defines its surface form. Text-based computer languages are based on sequences of characters, while visual programming languages are based on the spatial layout and connections between symbols (which may be textual or graphical). Documents that are syntactically invalid are said to have a syntax error. When designing the syntax of a language, a designer might start by writing down examples of both legal and illegal strings, before trying to figure out the general rules from these examples.
Syntax therefore refers to the form of the code, and is contrasted with semantics – the meaning. In processing computer languages, semantic processing generally comes after syntactic processing; however, in some cases, semantic processing is necessary for complete syntactic analysis, and these are done together or concurrently. In a compiler, the syntactic analysis comprises the frontend, while the semantic analysis comprises the backend (and middle end, if this phase is distinguished).
Levels of syntax
Computer language syntax is generally distinguished into three levels:
Words – the lexical level, determining how characters form tokens;
Phrases – the grammar level, narrowly speaking, determining how tokens form phrases;
Context – determining what objects or variables the names refer to, whether types are valid, etc. (a short illustration of the first two levels follows)
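A sketch of the first two levels using Python's standard tokenize and ast modules; the third level (name binding, type checks) happens in later compilation stages:

import ast, io, tokenize

src = "total = price * 2"
# Lexical level: characters -> tokens
tokens = [tok.string for tok in tokenize.generate_tokens(io.StringIO(src).readline)]
print(tokens)
# Phrase level: tokens -> phrases, here shown as an abstract syntax tree
print(ast.dump(ast.parse(src)))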
Distinguishing in this way yields modularity, allowing each level to be described and processed separately and often independently. First, a lexer turns the linear sequence of characters into a linear sequence of tokens; this is known as "lexical analysis" or "lexing". Second, the parser turns the linea
|
https://en.wikipedia.org/wiki/Biological%20half-life
|
Biological half-life (elimination half-life, pharmacological half-life) is the time taken for concentration of a biological substance (such as a medication) to decrease from its maximum concentration (Cmax) to half of Cmax in the blood plasma. It is denoted by the abbreviation .
This is used to measure the removal of things such as metabolites, drugs, and signalling molecules from the body. Typically, the biological half-life refers to the body's natural detoxification (cleansing) through liver metabolism and through the excretion of the measured substance through the kidneys and intestines. This concept is used when the rate of removal is roughly exponential.
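A one-line consequence of roughly exponential removal, sketched in Python with illustrative numbers:

def remaining(c0, t, t_half):
    # concentration after time t, given initial concentration c0 and half-life t_half
    return c0 * 0.5 ** (t / t_half)

print(remaining(100.0, 24.0, 6.0))   # 100 units with a 6 h half-life: 6.25 left after 24 h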
In a medical context, half-life explicitly describes the time it takes for the blood plasma concentration of a substance to halve from its steady-state level ("plasma half-life") when circulating in the full blood of an organism. This measurement is useful in medicine, pharmacology and pharmacokinetics because it helps determine how much of a drug needs to be taken and how frequently it needs to be taken if a certain average amount is needed constantly. By contrast, the stability of a substance in plasma is described as plasma stability. This is essential to ensure accurate analysis of drugs in plasma and for drug discovery.
The relationship between the biological and plasma half-lives of a substance can be complex depending on the substance in question, due to factors including accumulation in tissues, protein binding, active metabolites, and receptor interactions.
Examples
Water
The biological half-life of water in a human is about 7 to 14 days. It can be altered by behavior. Drinking large amounts of alcohol will reduce the biological half-life of water in the body. This has been used to decontaminate patients who are internally contaminated with tritiated water. The basis of this decontamination method is to increase the rate at which the water in the body is replaced with new water.
Alcohol
The removal of ethan
|
https://en.wikipedia.org/wiki/Electronic%20Frontiers%20Georgia
|
Electronic Frontiers Georgia (EFGA) is a non-profit organization in the US state of Georgia focusing on issues related to cyber law and free speech. It was founded in 1995 by Tom Cross, Robert Costner, Chris Farris, and Robbie Honerkamp, primarily in response to the Communications Decency Act.
One of the organization's early causes was to oppose Georgia House Bill 1630 (HB1630), an attempt to ban anonymous speech on the internet in Georgia. The bill was passed into law, but after being challenged in court by the EFGA, the ACLU, and the national Electronic Frontier Foundation (EFF), the law was deemed unconstitutional.
Origins
Electronic Frontiers Georgia began after a suggestion by Stanton McCandlish of the EFF in conversations with Atlanta businessman and computer store owner Robert Costner. Costner expressed concern after Philip Elmer-DeWitt's Time magazine article claimed that pornography was pervasive on the internet. Costner was angered because he thought the article was bogus. While DeWitt later apologized for the article, the "correction by Time sought to downplay, rather than apologize for, misleading their readers". It was a precursor to the Communications Decency Act.
Seeking partners to provide in-kind donations, Costner approached the Georgia ACLU for meeting space and Comstar, an internet hosting company, for rackspace for an internet server from Costner's store.
On local newsgroups Costner announced a public meeting to be held at the ACLU's downtown offices. From this, and similar meetings, Georgia residents joined in and became part of Electronic Frontiers Georgia. Most notable were Tom Cross, Chris Farris, and Robbie Honerkamp. At a later point Andy Dustman and Scott M. Jones joined the organization in significant capacities.
The EFGA's mission is to explore the intersection of public policy and technology.
Distinction from the EFF
Though often confused with the Electronic Frontier Foundation, the EFGA is a separate organization. The EFF i
|
https://en.wikipedia.org/wiki/Linear%20alternator
|
A linear alternator is essentially a linear motor used as an electrical generator.
An alternator is a type of alternating current (AC) electrical generator. The devices are often physically equivalent. The principal difference is in how they are used and which direction the energy flows. An alternator converts mechanical energy to electrical energy, whereas a motor converts electrical energy to mechanical energy. Like many electric motors and electric generators, the linear alternator works by the principle of electromagnetic induction. However, most alternators work with rotary motion, whereas linear alternators work with linear motion (i.e. motion in a straight line).
Theory
A linear alternator is most commonly used to convert back-and-forth motion directly into electrical energy. This eliminates the need for a crank or linkage to convert a reciprocating motion to a rotary motion in order to drive a rotary generator.
Air compression generator
Mainspring Energy's linear generator uses a flameless reaction to efficiently convert chemical-bond energy into electricity. It is compatible with various fuels, including ammonia, biogas, and hydrogen. The commercial device is double-ended. A translator on each end equipped with permanent magnets moves back and forth between the reaction chamber and a fixed-dimension box that functions as an air spring. Stationary copper coils surround each translator, forming a linear electromagnetic machine (LEM). Air and fuel are introduced into the center reaction chamber. Energy stored in the air springs from a previous cycle compresses the mixture until a flameless, exothermic reaction occurs. The reaction pushes the translators back through the copper coils, producing electricity. This motion recompresses the air springs, readying the system for the next cycle. Byproducts are water, nitrogen gas, and other substances. The reaction requires no spark/ignition source. A 115 kW machine extends 5.5 meters and is about 1 meter in di
|
https://en.wikipedia.org/wiki/Viridiplantae
|
Viridiplantae (literally "green plants") constitute a clade of eukaryotic organisms that comprises approximately 450,000–500,000 species that play important roles in both terrestrial and aquatic ecosystems. They include the green algae, which are primarily aquatic, and the land plants (embryophytes), which emerged from within them. "Green algae" has traditionally excluded the land plants, rendering the green algae a paraphyletic group; however, it is accurate to think of land plants as a kind of alga. Since the realization that the embryophytes emerged from within the green algae, some authors have started to include them. They have cells with cellulose in their cell walls, and primary chloroplasts derived from endosymbiosis with cyanobacteria that contain chlorophylls a and b and lack phycobilins. Corroborating this, a basal phagotroph archaeplastida group has been found in the Rhodelphydia.
In some classification systems, the group has been treated as a kingdom, under various names, e.g. Viridiplantae, Chlorobionta, or simply Plantae, the latter expanding the traditional plant kingdom to include the green algae. Adl et al., who produced a classification for all eukaryotes in 2005, introduced the name Chloroplastida for this group, reflecting the group having primary chloroplasts with green chlorophyll. They rejected the name Viridiplantae on the grounds that some of the species are not plants, as understood traditionally. The Viridiplantae are made up of two clades: Chlorophyta and Streptophyta as well as the basal Mesostigmatophyceae and Chlorokybophyceae. Together with Rhodophyta and glaucophytes, Viridiplantae are thought to belong to a larger clade called Archaeplastida or Primoplantae.
Phylogeny and classification
Simplified phylogeny of the Viridiplantae, according to Leliaert et al. 2012.
Viridiplantae
 Chlorophyta
  core chlorophytes
   Ulvophyceae
    Cladophorales
    Dasycladales
    Bryopsidales
    Trentepohliales
    Ulvales-Ulotrichales
    Oltmannsiellopsidales
   Chlorophyceae
    Oedogoniales
    Chae
|
https://en.wikipedia.org/wiki/Robotic%20paradigm
|
In robotics, a robotic paradigm is a mental model of how a robot operates. A robotic paradigm can be described by the relationship between the three basic elements of robotics: Sensing, Planning, and Acting. It can also be described by how sensory data is processed and distributed through the system, and where decisions are made.
Hierarchical/deliberative paradigm
The robot operates in a top-down fashion, heavy on planning.
The robot senses the world, plans the next action, acts; at each step the robot explicitly plans the next move.
All the sensing data tends to be gathered into one global world model.
The reactive paradigm
Sense-act type of organization.
The robot has multiple instances of Sense-Act couplings.
These couplings are concurrent processes, called behaviours, which take the local sensing data and compute the best action to take independently of what the other processes are doing.
The robot will do a combination of behaviours.
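A toy sketch of behaviour combination in Python; the sensor fields, behaviours, and weights are all illustrative:

def avoid_obstacle(sensors):
    return -1.0 if sensors["obstacle_left"] else 1.0   # steer away from obstacles

def seek_goal(sensors):
    return sensors["goal_bearing"]                     # steer toward the goal

def combine(sensors, behaviours, weights):
    # each behaviour maps local sensing data to an action independently;
    # the robot executes a weighted combination of the results
    return sum(w * b(sensors) for b, w in zip(behaviours, weights))

sensors = {"obstacle_left": True, "goal_bearing": 0.4}
print(combine(sensors, [avoid_obstacle, seek_goal], [0.7, 0.3]))   # -0.58: avoidance dominates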
Hybrid deliberate/reactive paradigm
The robot first plans (deliberates) how best to decompose a task into subtasks (also called "mission planning") and then which behaviours are suitable to accomplish each subtask.
Then the behaviours start executing as per the Reactive Paradigm.
Sensing organization is also a mixture of Hierarchical and Reactive styles; sensor data gets routed to each behaviour that needs that sensor, but is also available to the planner for construction of a task-oriented global world model.
See also
Behavior-based robotics
Hierarchical control system
Subsumption architecture
References
Asada, H. & Slotine, J.-J. E. (1986). Robot Analysis and Control. Wiley. .
Arkin, Ronald C. (1998). Behavior-Based Robotics. MIT Press. .
|
https://en.wikipedia.org/wiki/Versit%20Consortium
|
The versit Consortium was a multivendor initiative founded by Apple Computer, AT&T, IBM and Siemens in the early 1990s in order to create Personal Data Interchange (PDI) technology, open specifications for exchanging personal data over the Internet, wired and wireless connectivity and Computer Telephony Integration (CTI). The Consortium started a number of projects to deliver open specifications aimed at creating industry standards.
Computer Telephony Integration
One of the most ambitious projects of the Consortium was the Versit CTI Encyclopedia (VCTIE), a 3,000-page, six-volume set of specifications defining how computer and telephony systems are to interact and become interoperable. The Encyclopedia was built on existing technologies and specifications such as ECMA's call control specifications, TSAPI, and the industry expertise of the core technical team. The volumes are:
Volume 1, Concepts & Terminology
Volume 2, Configurations & Landscape
Volume 3, Telephony Feature Set
Volume 4, Call Flow Scenarios
Volume 5, CTI Protocols
Volume 6, Versit TSAPI
Appendices include:
Versit TSAPI header file
Protocol 1 ASN.1 description
Protocol 2 ASN.1 description
Versit Server Mapper Interface header file
Versit TSDI header file
The core Versit CTI Encyclopedia technical team was composed of David H. Anderson and Marcus W. Fath from IBM, Frédéric Artru and Michael Bayer from Apple Computer, James L. Knight and Steven Rummel from AT&T (then Lucent Technologies), Tom Miller from Siemens, and consultants Ellen Feaheny and Charles Hudson. Upon completion, the Versit CTI Encyclopedia was transferred to the ECTF and has been adopted in the form of ECTF C.001. This model represents the basis for the ECTF's call control efforts.
Though the Versit CTI Encyclopedia ended up influencing many products, there was one full compliant implementation of the specifications that was brought to market: Odisei, a French company founded by team member Frédéric Artru developed the IntraSw
|
https://en.wikipedia.org/wiki/Sign%20painting
|
Sign painting is the craft of painting lettered signs on buildings, billboards or signboards, for promoting, announcing, or identifying products, services and events. Sign painting artisans are signwriters.
History
Signwriters often learned the craft through apprenticeship or trade school, although many early sign painters were self-taught. The Sign Graphics program at the Los Angeles Trade Technical College is the last remaining sign painting program in the United States.
Skillful manipulation of a lettering brush can take years to develop.
In the 1980s, with the advent of computer printing on vinyl, traditional hand-lettering faced stiff competition. Interest in the craft waned during the 1980s and 90s, but hand-lettering and traditional sign painting have experienced a resurgence in popularity in recent years.
The 2012 book and documentary, Sign Painters by Faythe Levine and Sam Macon, chronicle the historical changes and current state of the sign painting industry through personal interviews with contemporary sign painters.
Old painted signs which fade but remain visible are known as ghost signs.
Techniques
There are a number of other associated skills and techniques as well, including gold leafing (surface and glass), carving (in various mediums), glue-glass chipping, stencilling, and silk-screening.
Bibliography
Turvey, Lisa (April 2012). "An American Language". Artforum International. 50: 218–9.
Swezy, Tim (February 25, 2014). "One Shot Seen 'Round the World: A Survey of Sign Painting on the Internet (Part of AIGA Raleigh - the oldest and largest professional organization for Design)". AIGA Raleigh. Retrieved April 21, 2020.
Childs, Mark C. (2016). The Zeon files : art and design of historic Route 66 signs. Babcock, Ellen D., 1957-. Albuquerque: University of New Mexico Press. . OCLC 944156236.
Auer, Michael (1991). The Preservation of Historic Signs. Washington, D.C: U.S. Department of the Interior, National Park Service, Cultural Reso
|
https://en.wikipedia.org/wiki/Predicate%20abstraction
|
In logic, predicate abstraction is the result of creating a predicate from a formula. If Q is any formula then the predicate abstract formed from that sentence is (λx.Q), where λ is an abstraction operator and in which every occurrence of x that is free in Q is bound by λ in (λx.Q). The resultant predicate (λx.Q(x)) is a monadic predicate capable of taking a term t as argument as in (λx.Q(x))(t), which says that the object denoted by 't' has the property of being such that Q.
The law of abstraction states (λx.Q(x))(t) ≡ Q(t/x), where Q(t/x) is the result of replacing all free occurrences of x in Q by t. This law is shown to fail in general in at least two cases: (i) when t is irreferential and (ii) when Q contains modal operators.
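In the simple extensional case (no modal operators, referential terms), Python's lambda mirrors the law:

Q = lambda x: x > 3        # the predicate abstract (lambda x . Q(x))
t = 5
assert Q(t) == (5 > 3)     # (lambda x.Q(x))(t) == Q(t/x): apply, then substitute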
In modal logic the "de re / de dicto distinction" is stated as
1. (DE DICTO): □A(t)
2. (DE RE): (λx.□A(x))(t).
In (1) the modal operator applies to the formula A(t) and the term t is within the scope of the modal operator. In (2) t is not within the scope of the modal operator.
References
For the semantics and further philosophical developments of predicate abstraction see Fitting and Mendelsohn, First-order Modal Logic, Springer, 1999.
Modal logic
Philosophical logic
|
https://en.wikipedia.org/wiki/DSL%20filter
|
A DSL filter (also DSL splitter or microfilter) is an analog low-pass filter installed between analog devices (such as telephones or analog modems) and a plain old telephone service (POTS) line. The DSL filter prevents interference between such devices and a digital subscriber line (DSL) service connected to the same line. Without DSL filters, signals or echoes from analog devices at the top of their frequency range can reduce performance and create connection problems with DSL service, while those from the DSL service at the bottom of its range can cause line noise and other problems for analog devices.
The concept of a low-pass filter for ADSL was first described in 1996 by Vic Charlton while working for the Canadian Operations Development Consortium: Low-Pass Filter On All Phones.
DSL filters are typically passive devices, requiring no power source to operate; some higher-quality filters, however, contain active transistor circuitry to refine the signal.
Components
The primary distinguishing factor between high-quality and low-quality filters is the use of transistors: high-quality (and more expensive) active filters contain them in addition to the usual components such as capacitors, resistors, and ferrite cores, while low-quality passive filters do not.
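The low-pass behavior itself can be sketched with a single-pole RC model. The component values below are assumptions chosen only to place the cutoff near the top of the voiceband, not a real microfilter design (commercial filters are typically higher-order):

import math

R = 1.8e3    # ohms (assumed value)
C = 22e-9    # farads (assumed value)

f_c = 1 / (2 * math.pi * R * C)          # -3 dB cutoff frequency
print(f"cutoff ~ {f_c / 1e3:.1f} kHz")   # ~4.0 kHz: passes the voiceband

def attenuation_db(f):
    """Gain of a single-pole RC low-pass at frequency f, in dB."""
    return -10 * math.log10(1 + (f / f_c) ** 2)

print(f"at 3.4 kHz (voice): {attenuation_db(3.4e3):.1f} dB")
print(f"at 138 kHz (ADSL upstream edge): {attenuation_db(138e3):.1f} dB")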
Installation
Typical installation for an existing home involves installing DSL filters on every telephone, fax machine, voice band modem, and other voiceband device in the home, leaving the DSL modem as the only unfiltered device. For wall mounted phones, the filter is in the form of a plate hung on the standard wall mount, on which the phone hangs in turn.
In cases where it is possible to run new cables, it can be advantageous to split the telephone line after it enters the home, installing a single DSL filter on one leg and running it to every jack in the home where an analog device will be in use, and dedicating the other (unfiltered) leg to the DSL modem. Some devices such as monitored alarms and Telephone Devices for the Deaf
|
https://en.wikipedia.org/wiki/Fekete%20polynomial
|
In mathematics, a Fekete polynomial is a polynomial

$f_p(t) = \sum_{k=0}^{p-1} \left(\frac{k}{p}\right) t^k,$

where $\left(\frac{k}{p}\right)$ is the Legendre symbol modulo some integer p > 1.
These polynomials were known in nineteenth-century studies of Dirichlet L-functions, and indeed to Dirichlet himself. They have acquired the name of Michael Fekete, who observed that the absence of real zeroes t of the Fekete polynomial with 0 < t < 1 implies an absence of the same kind for the L-function

$L(s) = \sum_{n \ge 1} \left(\frac{n}{p}\right) n^{-s}.$
This is of considerable potential interest in number theory, in connection with the hypothetical Siegel zero near s = 1. While numerical results for small cases had indicated that there were few such real zeroes, further analysis reveals that this may indeed be a 'small number' effect.
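The coefficients are easy to generate; the following is a minimal sketch for an odd prime p (the function names are ad hoc), computing the Legendre symbol via Euler's criterion:

def legendre_symbol(k, p):
    """Legendre symbol (k/p) for an odd prime p, via Euler's criterion."""
    k %= p
    if k == 0:
        return 0
    s = pow(k, (p - 1) // 2, p)   # k^((p-1)/2) mod p is 1 or p-1
    return 1 if s == 1 else -1

def fekete_coefficients(p):
    """Coefficients [a_0, ..., a_{p-1}] of f_p(t) = sum_k (k/p) t^k."""
    return [legendre_symbol(k, p) for k in range(p)]

print(fekete_coefficients(7))   # [0, 1, 1, -1, 1, -1, -1]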
References
Peter Borwein: Computational excursions in analysis and number theory. Springer, 2002, , Chap.5.
External links
Brian Conrey, Andrew Granville, Bjorn Poonen and Kannan Soundararajan, Zeros of Fekete polynomials, arXiv e-print math.NT/9906214, June 16, 1999.
Polynomials
Zeta and L-functions
|
https://en.wikipedia.org/wiki/Blasteroids
|
Blasteroids is the third official sequel to the 1979 multidirectional shooter video game, Asteroids. It was developed by Atari Games and released in arcades in 1987. Unlike the previous games, Blasteroids uses raster graphics instead of vector graphics, and has power-ups and a boss.
Home computer ports of Blasteroids were released by Image Works for the Amiga, Amstrad CPC, Atari ST, Commodore 64, MSX, MS-DOS, and ZX Spectrum. An emulated version of Blasteroids is an unlockable mini-game in Lego Dimensions.
Gameplay
The gameplay is essentially the same as in the original. The player controls a spaceship viewed from "above" in a 2D representation of space, rotating the ship and using thrust to give it momentum. To slow down or stop completely, the player has to rotate the ship to face the direction it came from and apply just enough thrust to cancel its momentum. The ship has a limited amount of fuel with which to generate thrust. This fuel comes in the form of "Energy", which is also used for the ship's Shields, which protect it against collisions and enemy fire. Once all Energy is gone, the player's ship is destroyed. The ship can shoot to destroy asteroids and enemy ships. The ship can also be transformed at will into three different versions: the Speeder, with the greatest speed; the Fighter, with the most firepower; and the Warrior, with extra armor.
Levels
At the start of the game, the player is in a screen with four warps indicating the game's difficulty: Easy, Medium, Hard, and Expert. Flying through any of the warps starts the game with that difficulty. Each has several galaxies, each with 9 or 16 sectors depending on difficulty. Once a sector is completed by destroying all the asteroids, an exit portal appears to lead the player to the galactic map screen. Similarly to the difficulty screen, the player can here choose which Sector to visit next. Completed and empty sectors can be revisited, but this costs energy. Sectors that are currently out
|
https://en.wikipedia.org/wiki/List%20of%20coordinate%20charts
|
This article contains a non-exhaustive list of coordinate charts for Riemannian manifolds and pseudo-Riemannian manifolds. Coordinate charts are mathematical objects of topological manifolds, and they have multiple applications in theoretical and applied mathematics. When a differentiable structure and a metric are defined, greater structure exists, and this allows the definition of constructs such as integration and geodesics.
Charts for Riemannian and pseudo-Riemannian surfaces
The following charts (with appropriate metric tensors) can be used in the stated classes of Riemannian and pseudo-Riemannian surfaces:
Radially symmetric surfaces:
Hyperspherical coordinates
Surfaces embedded in E3:
Monge chart
Certain minimal surfaces:
Asymptotic chart (see also asymptotic line)
Euclidean plane E2:
Cartesian chart
Sphere S2:
Spherical coordinates
Stereographic chart
Central projection chart
Axial projection chart
Mercator chart
Hyperbolic plane H2:
Polar chart
Stereographic chart (Poincaré model)
Upper half-space chart (Poincaré model)
Central projection chart (Klein model)
Mercator chart
AdS2 (or S1,1) and dS2 (or H1,1):
Central projection
Sn
Hopf chart
Hn
Upper half-space chart (Poincaré model)
Hopf chart
The following charts apply specifically to three-dimensional manifolds:
Axially symmetric manifolds:
Cylindrical chart
Parabolic chart
Hyperbolic chart
Toroidal chart
Three-dimensional Euclidean space E3:
Cartesian
Polar spherical chart
Cylindrical chart
Elliptical cylindrical, hyperbolic cylindrical, parabolic cylindrical charts
Parabolic chart
Hyperbolic chart
Prolate spheroidal chart (rational and trigonometric forms)
Oblate spheroidal chart (rational and trigonometric forms)
Toroidal chart
Cassini toroidal chart and Cassini bipolar chart
Three-sphere S3
Polar chart
Stereographic chart
Hopf chart
Hyperbolic three-space H3
Polar chart
Upper half space chart (Poincaré model)
Hopf chart
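As a concrete illustration of one of the charts listed above, here is a minimal numerical sketch of the stereographic chart on the unit sphere S2, projecting from the north pole (0, 0, 1) onto the equatorial plane; the chart covers all of S2 except the pole itself:

def stereographic(x, y, z, eps=1e-12):
    """Chart map: a point of S^2 (excluding the north pole) to the plane."""
    if abs(1.0 - z) < eps:
        raise ValueError("the north pole is not covered by this chart")
    return x / (1 - z), y / (1 - z)

def inverse_stereographic(u, v):
    """Inverse chart: plane coordinates back to a point of S^2."""
    d = 1 + u * u + v * v
    return 2 * u / d, 2 * v / d, (u * u + v * v - 1) / d

# Round trip through the chart; the south pole maps to the origin:
u, v = stereographic(0.0, 0.0, -1.0)
print((u, v))                       # (0.0, 0.0)
print(inverse_stereographic(u, v))  # (0.0, 0.0, -1.0)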
See also
Coordinate chart
Coordinate system
Metric tensor
List of mathemat
|
https://en.wikipedia.org/wiki/Darwinian%20puzzle
|
A Darwinian puzzle is a trait that appears to reduce the fitness of individuals that possess it. Such traits attract the attention of evolutionary biologists. Several human traits pose challenges to evolutionary thinking, as they are relatively prevalent but are associated with lower reproductive success through reduced fertility and/or longevity. Some of the classic examples include left-handedness, menopause, and mental disorders. These traits are also found in animals; the peacock's tail is an example of a trait that may reduce fitness. The bigger the tail, the more easily the bird is seen by predators, and the tail may also hinder its movement. Darwin, in fact, solved this "puzzle" by explaining the peacock's tail as evidence of sexual selection: a bigger tail confers evolutionary fitness on the male by allowing it to attract more females than males with shorter tails. The phrase "Darwinian puzzle" itself is rare and of unclear origin; it is typically discussed in the context of animal behavior.
Applications in nature
Darwinian puzzles are evident throughout nature, even though such traits appear to reduce the fitness of the individuals that possess them. Different species put these seemingly disadvantageous traits to particular uses, such as toxins, fitness demonstration, and mimicry.
Factors that affect Darwinian puzzles
There are a few contributing factors in biology which may affect Darwinian puzzles.
Coefficient of relatedness (r): the percentage of genes shared by two animals.
This may be based on common descent. Animals may share 1/4, 1/2, or even in some cases all of their genes with others. Identical twins have a coefficient of relatedness of r=1. Full siblings have a coefficient of relatedness of r=.50, and half siblings and first cousins have a coefficient of relatedness of r=.25. Depending on how related two animals are, they may be more likely to act altruistically to one another. Even if it is of no benefit to themselves, it helps to promote survival of at least some of the
|
https://en.wikipedia.org/wiki/ACS%20Combinatorial%20Science
|
ACS Combinatorial Science (usually abbreviated as ACS Comb. Sci.), formerly Journal of Combinatorial Chemistry (1999-2010), was a peer-reviewed scientific journal published from 1999 to 2020 by the American Chemical Society. ACS Combinatorial Science published articles, reviews, perspectives, accounts and reports in the field of combinatorial chemistry.
Anthony Czarnik served as the founding editor from 1999 to 2010, and M.G. Finn served as editor from 2010 to 2020. In 2010, the ACS changed the name of the journal to ACS Combinatorial Science, making it the first and only ACS journal devoted to a way of doing science rather than to a specific field of knowledge or application.
The journal stopped accepting new submissions in August 2020, and the last issue was published in December 2020.
Abstracting and indexing
The journal is indexed in:
Chemical Abstracts Service (CAS)
SCOPUS
EBSCOhost
PubMed
Web of Science
References
Combinatorial Science
Academic journals established in 1999
Monthly journals
English-language journals
Combinatorial chemistry
1999 establishments in the United States
|
https://en.wikipedia.org/wiki/MAX-3SAT
|
MAX-3SAT is a problem in the computational complexity subfield of computer science. It generalises the Boolean satisfiability problem (SAT) which is a decision problem considered in complexity theory. It is defined as:
Given a 3-CNF formula Φ (i.e. with at most 3 variables per clause), find an assignment that satisfies the largest number of clauses.
MAX-3SAT is a canonical complete problem for the complexity class MAXSNP (shown complete in Papadimitriou pg. 314).
Approximability
The decision version of MAX-3SAT is NP-complete. Therefore, an exact polynomial-time algorithm exists only if P = NP. An approximation within a factor of 2 can be achieved with this simple algorithm, however:
Output the solution in which most clauses are satisfied, when either all variables = TRUE or all variables = FALSE.
Every clause is satisfied by at least one of the two candidate assignments; therefore, one of them satisfies at least half of the clauses.
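A minimal sketch of this 2-approximation follows; the clause encoding is an assumption made for illustration (a clause is a tuple of nonzero integers, where k stands for variable k and -k for its negation):

def two_approx_max3sat(clauses, n_vars):
    """Return the better of the all-TRUE and all-FALSE assignments."""
    def satisfied(assignment):
        # Count clauses with at least one literal made true by the assignment.
        return sum(
            any((lit > 0) == assignment[abs(lit)] for lit in clause)
            for clause in clauses
        )

    all_true = {v: True for v in range(1, n_vars + 1)}
    all_false = {v: False for v in range(1, n_vars + 1)}
    return max((all_true, all_false), key=satisfied)

# Example: (x1 v x2 v ~x3) & (~x1 v x2) & (x3)
clauses = [(1, 2, -3), (-1, 2), (3,)]
print(two_approx_max3sat(clauses, 3))   # all-TRUE here satisfies all 3 clauses

Any positive literal is satisfied by the all-TRUE assignment and any negative literal by the all-FALSE one, so together the two assignments satisfy every clause at least once.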
The Karloff-Zwick algorithm runs in polynomial time and satisfies ≥ 7/8 of the clauses. While this algorithm is randomized, it can be derandomized using known techniques to yield a deterministic (polynomial-time) algorithm with the same approximation guarantees.
Theorem 1 (inapproximability)
The PCP theorem implies that there exists an ε > 0 such that (1-ε)-approximation of MAX-3SAT is NP-hard.
Proof:
Any NP-complete problem L ∈ PCP(O(log n), O(1)) by the PCP theorem. For x ∈ L, a 3-CNF formula Ψx is constructed so that
x ∈ L ⇒ Ψx is satisfiable
x ∉ L ⇒ no more than (1-ε)m clauses of Ψx are satisfiable.
The verifier V reads all required bits at once, i.e., makes non-adaptive queries. This is valid because the number of queries remains constant.
Let q be the number of queries.
Enumerating all random strings Ri used by V, we obtain poly(|x|) strings, since the length of each string is O(log |x|).
For each Ri
V chooses q positions i1,...,iq and a Boolean function fR: {0,1}^q → {0,1}, and accepts if and only if fR(π(i1),...,π(iq)) = 1. Here π refers to the proof obtained from
|
https://en.wikipedia.org/wiki/Abraham%E2%80%93Lorentz%20force
|
In the physics of electromagnetism, the Abraham–Lorentz force (also known as the Lorentz–Abraham force) is the recoil force (a force of equal magnitude and opposite direction) on an accelerating charged particle caused by the particle emitting electromagnetic radiation by self-interaction. It is also called the radiation reaction force, the radiation damping force, or the self-force. It is named after the physicists Max Abraham and Hendrik Lorentz.
The formula, although predating the theory of special relativity, was initially calculated for non-relativistic velocity approximations; it was extended to arbitrary velocities by Max Abraham and shown to be physically consistent by George Adolphus Schott. The non-relativistic form is called the Lorentz self-force, while the relativistic version is called the Lorentz–Dirac force; collectively they are known as the Abraham–Lorentz–Dirac force. The equations are in the domain of classical physics, not quantum physics, and therefore may not be valid at distances of roughly the Compton wavelength or below. There are, however, two analogs of the formula that are both fully quantum and relativistic: one is called the "Abraham–Lorentz–Dirac–Langevin equation", the other is the self-force on a moving mirror.
The force is proportional to the square of the object's charge, multiplied by the jerk that it is experiencing. (Jerk is the rate of change of acceleration.) The force points in the direction of the jerk. For example, in a cyclotron, where the jerk points opposite to the velocity, the radiation reaction is directed opposite to the velocity of the particle, providing a braking action. The Abraham–Lorentz force is the source of the radiation resistance of a radio antenna radiating radio waves.
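In SI units the non-relativistic force is usually written as F = q² ȧ / (6π ε₀ c³), with ȧ the jerk. A rough numerical sketch for an electron follows; the jerk value is an arbitrary illustrative number:

import math

eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
c = 2.99792458e8          # speed of light, m/s
q = 1.602176634e-19       # elementary charge, C

# Force per unit jerk for a single elementary charge:
coeff = q**2 / (6 * math.pi * eps0 * c**3)
print(f"coefficient = {coeff:.3e} N/(m/s^3)")

jerk = 1.0e20             # m/s^3, assumed for illustration
print(f"F_rad = {coeff * jerk:.3e} N")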
There are pathological solutions of the Abraham–Lorentz–Dirac equation in which a particle accelerates in advance of the application of a force, so-called pre-acceleration solutions. Since this would represent an effect occurring before its cause
|
https://en.wikipedia.org/wiki/List%20of%20social%20software
|
This is a list of notable social software: selected examples of social software products and services that facilitate a variety of forms of social human contact.
Blogs
Apache Roller
Blogger
IBM Lotus Connections
Posterous
Telligent Community
Tumblr
Typepad
WordPress
Xanga
Clipping
Diigo
Evernote
Instant messaging
Comparison of instant messaging clients
IBM Lotus Sametime
Live Communications Server 2003
Live Communications Server 2005
Microsoft Lync Server
Internet forums
Comparison of Internet forum software
Internet Relay Chat (IRC)
Internet Relay Chat
eLearning
Massively multiplayer online games
Media sharing
blip.tv
Dailymotion
Flickr
Ipernity
Metacafe
Putfile
SmugMug
Tangle
Vimeo
YouTube
Zooomr
IBM Lotus Connections
Media cataloging
Online dating
Web directories
Social bookmarking
Web widgets
AddThis
AddToAny
ShareThis
Social bookmark link generator
Websites
Enterprise software
Altova MetaTeam
IBM Lotus Connections
Jumper 2.0 Enterprise
Social cataloging
aNobii
Goodreads
Knowledge Plaza
Librarything
Readgeek
Shelfari
KartMe
Social citations
BibSonomy
CiteULike
Connotea
Jumper 2.0 Enterprise
Knowledge Plaza
Mendeley
refbase
Zotero
Social evolutionary computation
Knowledge iN
Quora
Yahoo! Answers
Social login
Social networks
Social search
Jumper 2.0
Knowledge Plaza
Social customer support software
Virtual worlds
Active Worlds
Google Lively (now defunct)
Kaneva
Second Life
There
Meez
Wikis
References
Social software
|
https://en.wikipedia.org/wiki/Juice%20%28aggregator%29
|
Juice is a podcast aggregator for Windows and OS X used for downloading media files such as ogg and mp3 for playback on the computer or for copying to a digital audio player. Juice lets a user schedule downloading of specific podcasts, and will notify the user when a new show is available. It is free software available under the GNU General Public License. The project is hosted at SourceForge. The software, formerly known as iPodder and later as iPodder Lemon, was renamed Juice in November 2005 in the face of legal pressure from Apple, Inc.
Development
The original development team was formed by Erik de Jonge, Robin Jans, Martijn Venrooy and Perica Zivkovic from the company Active8, based in the Netherlands; Andrew Grumet, Garth Kidd and Mark Posth joined the team soon after the first release. The development team credited the program concept to Adam Curry, who wrote a small AppleScript as a proof of concept and provided the first podcast shows (then referred to as 'audio enclosures'), but primarily to Dave Winer, who was the inspiration for Adam Curry. The first version also included a screenscraper for normal HTML files. Initially it was not clear that podcasting would be completely tied to RSS. Although that was eventually the method chosen, during the early development phase a diverse range of people were working on alternatives, including a version based on Freenet.
The program is written in Python and, through use of a cross-platform UI library, runs on Mac OS X and Microsoft Windows 2000 or Windows XP. A Linux variant has not been developed.
The 2004 growth of podcasting inspired other podcatching programs, such as jPodder, as well as the June 2005 addition of a podcast subscription feature in Apple's iTunes music player. This development quickly put an end to the popularity of the Juice application.
In 2006 the team effectively stopped further development of the program, the developers started working in other fields, some Podcasting related. The tea
|
https://en.wikipedia.org/wiki/Outline%20of%20biochemistry
|
The following outline is provided as an overview of and topical guide to biochemistry:
Biochemistry – study of chemical processes in living organisms, including living matter. Biochemistry governs all living organisms and living processes.
Applications of biochemistry
Testing
Ames test – Salmonella bacteria are exposed to a chemical under question (a food additive, for example), and changes in the way the bacteria grow are measured. This test is useful for screening chemicals to see if they mutate the structure of DNA and, by extension, for identifying their potential to cause cancer in humans.
Pregnancy test – two common types exist: one uses a urine sample and the other a blood sample. Both detect the presence of the hormone human chorionic gonadotropin (hCG). This hormone is produced by the placenta shortly after implantation of the embryo into the uterine wall and accumulates.
Breast cancer screening – identification of risk by testing for mutations in two genes—Breast Cancer-1 gene (BRCA1) and the Breast Cancer-2 gene (BRCA2)—allow a woman to schedule increased screening tests at a more frequent rate than the general population.
Prenatal genetic testing – testing the fetus for potential genetic defects, to detect chromosomal abnormalities such as Down syndrome or birth defects such as spina bifida.
PKU test – Phenylketonuria (PKU) is a metabolic disorder in which the individual is missing an enzyme called phenylalanine hydroxylase. Absence of this enzyme allows the buildup of phenylalanine, which can lead to mental retardation.
Genetic engineering – taking a gene from one organism and placing it into another. Biochemists inserted the gene for human insulin into bacteria. The bacteria, through the process of translation, create human insulin.
Cloning – Dolly the sheep was the first mammal ever cloned from adult animal cells. The cloned sheep was, of course, genetically identical to the original adult sheep. This clone was created by taking cells from the udder of a six-year-old
|
https://en.wikipedia.org/wiki/Anthrozoology
|
Anthrozoology, also known as human–nonhuman-animal studies (HAS), is the subset of ethnobiology that deals with interactions between humans and other animals. It is an interdisciplinary field that overlaps with other disciplines including anthropology, ethnology, medicine, psychology, social work, veterinary medicine, and zoology. A major focus of anthrozoologic research is the quantifying of the positive effects of human–animal relationships on either party and the study of their interactions. It includes scholars from fields such as anthropology, sociology, biology, history and philosophy.
Anthrozoology scholars, such as Pauleen Bennett, recognize the lack of scholarly attention given to non-human animals in the past, and to the relationships between human and non-human animals, especially in light of the magnitude of animal representations, symbols, stories and their actual physical presence in human societies. Rather than a unified approach, the field currently consists of several methods adapted from the several participating disciplines to encompass human–nonhuman animal relationships and occasional efforts to develop sui generis methods.
Areas of study
Interaction with, and enrichment of, animals in captive settings
Affective (emotional) or relational bonds between humans and animals
Human perceptions and beliefs in respect of other animals
How some animals fit into human societies
How these vary between cultures, and change over times
The study of animal domestication: how and why domestic animals evolved from wild species (paleoanthrozoology)
Captive zoo animal bonds with keepers
The social construction of animals and what it means to be animal
The human–animal bond
Parallels between human–animal interactions and human–technology interactions
The symbolism of animals in literature and art
The history of animal domestication
The intersections of speciesism, racism, and sexism
The place of animals in human-occupied spaces
The religious
|
https://en.wikipedia.org/wiki/LwIP
|
lwIP (lightweight IP) is a widely used open-source TCP/IP stack designed for embedded systems. lwIP was originally developed by Adam Dunkels at the Swedish Institute of Computer Science and is now developed and maintained by a worldwide network of developers.
lwIP is used by many manufacturers of embedded systems, including Intel/Altera, Analog Devices, Xilinx, TI, ST and Freescale.
lwIP network stack
The focus of the lwIP network stack implementation is to reduce resource usage while still having a full-scale TCP stack. This makes lwIP suitable for use in embedded systems with tens of kilobytes of free RAM and room for around 40 kilobytes of code ROM.
lwIP protocol implementations
Aside from the TCP/IP stack, lwIP has several other important parts, such as a network interface, an operating system emulation layer, buffers and a memory management section. The operating system emulation layer and the network interface allow the network stack to be transplanted into an operating system, as it provides a common interface between lwIP code and the operating system kernel.
The network stack of lwIP includes an IP (Internet Protocol) implementation at the Internet layer that can handle packet forwarding over multiple network interfaces. Both IPv4 and IPv6 have been supported in dual-stack operation since lwIP v2.0.0. For network maintenance and debugging, lwIP implements ICMP (Internet Control Message Protocol). IGMP (Internet Group Management Protocol) is supported for multicast traffic management, while ICMPv6 (including MLD) is implemented to support the use of IPv6.
lwIP includes an implementation of IPv4 ARP (Address Resolution Protocol) and IPv6 Neighbor Discovery Protocol to support Ethernet at the data link layer. lwIP may also be operated on top of a PPP (Point-to-Point Protocol) implementation at the data link layer.
At the transport layer lwIP implements TCP (Transmission Control Protocol) with congestion control, RTT estimation and fast recovery/fast retransmit. UDP (U
|
https://en.wikipedia.org/wiki/Local%20cohomology
|
In algebraic geometry, local cohomology is an algebraic analogue of relative cohomology. Alexander Grothendieck introduced it in seminars at Harvard in 1961, written up by Hartshorne, and in 1961-2 at IHES, written up as SGA2, which was later republished. Given a function (more generally, a section of a quasicoherent sheaf) defined on an open subset of an algebraic variety (or scheme), local cohomology measures the obstruction to extending that function to a larger domain. The rational function $1/x$, for example, is defined only on the complement of the origin on the affine line $\mathbb{A}^1_k$ over a field $k$, and cannot be extended to a function on the entire space. The local cohomology module $H^1_{(x)}(k[x])$ (where $k[x]$ is the coordinate ring of $\mathbb{A}^1_k$) detects this in the nonvanishing of a cohomology class $[1/x]$. In a similar manner, $1/xy$ is defined away from the $x$ and $y$ axes in the affine plane, but cannot be extended to either the complement of the $x$-axis or the complement of the $y$-axis alone (nor can it be expressed as a sum of such functions); this obstruction corresponds precisely to a nonzero class in the local cohomology module $H^2_{(x,y)}(k[x,y])$.
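A standard Čech computation (a sketch; it uses only the cover of the punctured line by the locus where $x$ is invertible) makes the first example concrete. For $R = k[x]$ and $I = (x)$ there is an exact sequence

$0 \to H^0_I(R) \to R \to R_x \to H^1_I(R) \to 0,$

which gives

$H^0_{(x)}(k[x]) = 0, \qquad H^1_{(x)}(k[x]) \cong k[x, x^{-1}]/k[x],$

a module spanned by the classes of $x^{-1}, x^{-2}, \dots$; the class $[1/x]$ is exactly the obstruction to extending $1/x$ across the origin.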
Outside of algebraic geometry, local cohomology has found applications in commutative algebra, combinatorics, and certain kinds of partial differential equations.
Definition
In the most general geometric form of the theory, sections are considered of a sheaf $\mathcal{F}$ of abelian groups, on a topological space $X$, with support in a closed subset $Y$. The derived functors of the functor $\Gamma_Y(\mathcal{F})$ of sections supported in $Y$ form the local cohomology groups $H^i_Y(X, \mathcal{F})$.
In the theory's algebraic form, the space X is the spectrum Spec(R) of a commutative ring R (assumed to be Noetherian throughout this article) and the sheaf F is the quasicoherent sheaf associated to an R-module M, denoted by $\tilde{M}$. The closed subscheme Y is defined by an ideal I. In this situation, the functor ΓY(F) corresponds to the I-torsion functor, a union of annihilators

$\Gamma_I(M) = \bigcup_{n \ge 0} \operatorname{Ann}_M(I^n),$
i.e., the elements of M which are annihilated by some power of I. As a right derived functor, the ith local cohomology module with respect to I is the ith c
|
https://en.wikipedia.org/wiki/Cuminaldehyde
|
Cuminaldehyde (4-isopropylbenzaldehyde) is a natural organic compound with the molecular formula C10H12O. It is a benzaldehyde with an isopropyl group substituted in the 4-position.
Cuminaldehyde is a constituent of the essential oils of eucalyptus, myrrh, cassia, cumin, and others. It has a pleasant smell and contributes to the aroma of these oils. It is used commercially in perfumes and other cosmetics.
It has been shown that cuminaldehyde, as a small molecule, inhibits the fibrillation of alpha-synuclein, which, if aggregated, forms insoluble fibrils in pathological conditions characterized by Lewy bodies, such as Parkinson's disease, dementia with Lewy bodies and multiple system atrophy.
Cuminaldehyde can be prepared synthetically by the reduction of 4-isopropylbenzoyl chloride or by the formylation of cumene.
The thiosemicarbazone of cuminaldehyde has antiviral properties.
References
Flavors
Monoterpenes
Benzaldehydes
Alkyl-substituted benzenes
Isopropyl compounds
Perfume ingredients
|
https://en.wikipedia.org/wiki/Solaris%20Multiplexed%20I/O
|
Solaris Multiplexed I/O (MPxIO), known also as Sun StorageTek Traffic Manager (SSTM, earlier Sun StorEdge Traffic Manager), is multipath I/O software for Solaris/illumos. It enables a storage device to be accessed through multiple host controller interfaces from a single operating system instance. The MPxIO architecture helps protect against I/O outages due to I/O controller failures. Should one I/O controller fail, MPxIO automatically switches to an alternate controller.
This architecture also increases I/O performance by load balancing across multiple I/O channels.
It was integrated within the Solaris operating system beginning in February 2000 with Solaris 8 release.
In Solaris 10, the setting to enable or disable MPxIO moved from /kernel/drv/scsi_vhci.conf to the bottom of the files /kernel/drv/fp.conf and /kernel/drv/mpt.conf.
See also
Multipath I/O
References
External links
Oracle Solaris SAN Configuration and Multipathing Guide (September 2010)
Sun StorageTek Traffic Manager Software
Sun Microsystems software
Computer storage technologies
Fault-tolerant computer systems
|
https://en.wikipedia.org/wiki/Guaiacol
|
Guaiacol () is an organic compound with the formula C6H4(OH)(OCH3). It is a phenolic compound containing a methoxy functional group. Guaiacol appears as a viscous colorless oil, although aged or impure samples are often yellowish. It occurs widely in nature and is a common product of the pyrolysis of wood.
Occurrence
Guaiacol is usually derived from guaiacum or wood creosote.
It is produced by a variety of plants. It is also found in essential oils from celery seeds, tobacco leaves, orange leaves, and lemon peels. The pure substance is colorless, but samples become yellow upon exposure to air and light. The compound is present in wood smoke, resulting from the pyrolysis of lignin. The compound contributes to the flavor of many substances such as whiskey and roasted coffee.
Preparation
The compound was first isolated by Otto Unverdorben in 1826. Guaiacol is produced by methylation of o-catechol, for example using potash and dimethyl sulfate:
C6H4(OH)2 + (CH3O)2SO2 → C6H4(OH)(OCH3) + HO(CH3O)SO2
Laboratory methods
Guaiacol can be prepared by diverse routes in the laboratory. o-Anisidine, derived in two steps from anisole, can be hydrolyzed via its diazonium derivative. Guaiacol can be synthesized by the dimethylation of catechol followed by selective mono-demethylation.
C6H4(OCH3)2 + C2H5SNa → C6H4(OCH3)(ONa) + C2H5SCH3
Uses and chemical reactions
Syringyl/guaiacyl ratio
Lignin, comprising a major fraction of biomass, is sometimes classified according to the guaiacyl component. Pyrolysis of lignin from gymnosperms gives more guaiacol, resulting from removal of the propenyl group of coniferyl alcohol. These lignins are said to have a high guaiacyl (or G) content. In contrast, lignins derived from sinapyl alcohol afford syringol. A high syringyl (or S) content is indicative of lignin from angiosperms. Sugarcane bagasse is one useful source of guaiacol; pyrolysis of the bagasse lignins yields compounds including guaiacol, 4-methylguaiacol and
|
https://en.wikipedia.org/wiki/Internet%20Listing%20Display
|
Internet Listing Display (ILD) was a set of rules put forth by the National Association of Realtors in 2005 to regulate how homes and properties can be displayed on internet sites. The ILD policy was intended to consolidate and replace both the Virtual Office Website (VOW) and Internet Data Exchange (IDX) policies to create one set of rules.
The ILD policy is a work in progress created as a result of investigation from the U.S. Department of Justice into anti-competitive practices by traditional real estate brokers. The ILD policy is intended to prevent traditional brokers from solely excluding their property listings from selected discount broker web sites, since they must "opt out" from display on all other brokers' sites.
In late 2005, the NAR recommended to avoid using ILD, and the policy has since largely been abandoned.
See also
Internet Data Exchange (IDX)
Real estate trends
Virtual Office Website (VOW)
References
ILD Policy ILD Internet Listing Display policy Retrieved November 7, 2005
American real estate websites
Network protocols
|
https://en.wikipedia.org/wiki/Schwarz%20reflection%20principle
|
In mathematics, the Schwarz reflection principle is a way to extend the domain of definition of a complex analytic function, i.e., it is a form of analytic continuation. It states that if an analytic function is defined on the upper half-plane, and has well-defined (non-singular) real values on the real axis, then it can be extended to the conjugate function on the lower half-plane. In notation, if $F(z)$ is a function that satisfies the above requirements, then its extension to the rest of the complex plane is given by the formula

$F(\bar{z}) = \overline{F(z)}.$

That is, we make the definition $F(z) := \overline{F(\bar{z})}$ for $z$ in the lower half-plane, which agrees with $F$ along the real axis.
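The extension formula is easy to check numerically for a function that is already entire and real on the real axis, such as exp; a minimal sketch:

import cmath

def F_upper(z):
    """Values assumed known on the closed upper half-plane."""
    return cmath.exp(z)

def F_extended(z):
    """Extend F to the lower half-plane by Schwarz reflection."""
    if z.imag >= 0:
        return F_upper(z)
    return F_upper(z.conjugate()).conjugate()

z = 1.0 - 2.0j                 # a point in the lower half-plane
print(F_extended(z))           # agrees with the value below, as reflection predicts
print(cmath.exp(z))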
The result proved by Hermann Schwarz is as follows. Suppose that F is a continuous function on the closed upper half plane , holomorphic on the upper half plane , which takes real values on the real axis. Then the extension formula given above is an analytic continuation to the whole complex plane.
In practice it would be better to have a theorem that allows F certain singularities, for example F a meromorphic function. To understand such extensions, one needs a proof method that can be weakened. In fact Morera's theorem is well adapted to proving such statements. Contour integrals involving the extension of F clearly split into two, using part of the real axis. So, given that the principle is rather easy to prove in the special case from Morera's theorem, understanding the proof is enough to generate other results.
The principle also adapts to apply to harmonic functions.
See also
Kelvin transform
Method of image charges
Schwarz function
References
External links
Harmonic functions
Theorems in complex analysis
Mathematical principles
|
https://en.wikipedia.org/wiki/Electronic%20hardware
|
Electronic hardware consists of interconnected electronic components which perform analog or logic operations on received and locally stored information to produce new information as output, to store it, or to provide control for output actuator mechanisms.
Electronic hardware can range from individual chips/circuits to distributed information processing systems. Well designed electronic hardware is composed of hierarchies of functional modules which inter-communicate via precisely defined interfaces.
Hardware logic is primarily a differentiation of the data processing circuitry from other more generalized circuitry. For example, nearly all computers include a power supply which consists of circuitry not involved in data processing but rather in powering the data processing circuits. Similarly, a computer may output information to a computer monitor or audio amplifier which is also not involved in the computational processes.
See also
Digital electronics
References
Electronic engineering
|
https://en.wikipedia.org/wiki/Mean%20value%20theorem%20%28divided%20differences%29
|
In mathematical analysis, the mean value theorem for divided differences generalizes the mean value theorem to higher derivatives.
Statement of the theorem
For any n + 1 pairwise distinct points x0, ..., xn in the domain of an n-times differentiable function f there exists an interior point

$\xi \in (\min\{x_0, \dots, x_n\},\ \max\{x_0, \dots, x_n\})$

where the nth derivative of f equals n! times the nth divided difference at these points:

$f[x_0, \dots, x_n] = \frac{f^{(n)}(\xi)}{n!}.$

For n = 1, that is, two function points, one obtains the simple mean value theorem.
Proof
Let $P$ be the Lagrange interpolation polynomial for f at x0, ..., xn.
Then it follows from the Newton form of $P$ that the highest term of $P$ is $f[x_0, \dots, x_n]\, x^n$.
Let $g = f - P$ be the remainder of the interpolation. Then $g$ has $n + 1$ zeros: x0, ..., xn.
By applying Rolle's theorem first to $g$, then to $g'$, and so on until $g^{(n-1)}$, we find that $g^{(n)}$ has a zero $\xi$ in the interior. This means that

$0 = g^{(n)}(\xi) = f^{(n)}(\xi) - f[x_0, \dots, x_n]\, n!,$

so $f^{(n)}(\xi) = n!\, f[x_0, \dots, x_n]$.
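A numerical sketch of the statement for f = exp, whose nth derivative is again exp, so the interior point ξ can be recovered from the divided difference; the sample points are arbitrary choices:

import math

def divided_difference(f, xs):
    """Recursive Newton divided difference f[x0, ..., xn]."""
    if len(xs) == 1:
        return f(xs[0])
    return (divided_difference(f, xs[1:]) - divided_difference(f, xs[:-1])) / (xs[-1] - xs[0])

xs = [0.0, 0.3, 0.7, 1.0]                # n = 3, pairwise distinct
dd = divided_difference(math.exp, xs)
n = len(xs) - 1
xi = math.log(math.factorial(n) * dd)    # solve exp(xi)/n! = dd for xi
print(f"f[x0..xn] = {dd:.6f}, xi = {xi:.6f}")   # xi lies in (0, 1)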
Applications
The theorem can be used to generalise the Stolarsky mean to more than two variables.
References
Finite differences
|
https://en.wikipedia.org/wiki/Menthone
|
Menthone is a monoterpene with a minty flavor that occurs naturally in a number of essential oils. l-Menthone (or (2S,5R)-trans-2-isopropyl-5-methylcyclohexanone) is the most abundant in nature of the four possible stereoisomers. It is structurally related to menthol, which has a secondary alcohol in place of the carbonyl. Menthone is used in flavoring, perfume and cosmetics for its characteristic aromatic and minty odor.
Occurrence
Menthone is a constituent of the essential oils of pennyroyal, peppermint, Mentha arvensis, Pelargonium geraniums, and others. In most essential oils, it is a minor compound; it was first synthesized by oxidation of menthol in 1881 before it was found in essential oils in 1891.
Structure and preparation
2-Isopropyl-5-methylcyclohexanone has two asymmetric carbon centers, meaning that it can have four different stereoisomers: (2S,5S), (2R,5S), (2S,5R) and (2R,5R). The S,S and R,R stereoisomers have the methyl and isopropyl groups on the same side of the cyclohexane ring: the so-called cis conformation. These stereoisomers are called isomenthone. The trans-isomers are called menthone. Because the (2S,5R) isomer has negative optical rotation, it is called l-menthone or (−)-menthone. It is the enantiomeric partner of the (2R,5S) isomer: (+)- or d-menthone. Menthone can easily be converted to isomenthone and vice versa via a reversible epimerization reaction via an enol intermediate, which changes the direction of optical rotation, so that l-menthone becomes d-isomenthone, and d-menthone becomes l-isomenthone.
In the laboratory, l-menthone may be prepared by oxidation of menthol with acidified dichromate. If the chromic acid oxidation is performed with stoichiometric oxidant in the presence of diethyl ether as co-solvent, a method introduced by H.C. Brown, the epimerization of l-menthone to d-isomenthone is largely avoided. If menthone and isomenthone are equilibrated at room temperature, the isomenthone content will re
|
https://en.wikipedia.org/wiki/Glass%20Packaging%20Institute
|
The Glass Packaging Institute (GPI) is the North American trade association for the glass container industry, headquartered in Arlington, Virginia. Through GPI, glass container manufacturers advocate job preservation and industry standards, and promote sound energy, environmental, and recycling policies.
Organization
The GPI membership consists of 5 glass container manufacturing member companies, and 27 supplier member companies, who provide raw materials, recycled glass, equipment, decorating, and other services to the glass companies. The country's 41 glass container plants in 20 states comprise a $5.5 billion industry. U.S. glass container manufacturers operate 102 glass furnaces, collectively producing 30 billion glass food, beverage, cosmetic, spirits, wine, and beer containers annually. The U.S. glass container industry directly employs approximately 16,500 nationwide, and its supplier and customer companies support hundreds of thousands of additional jobs.
GPI's board of trustees is the core decision-making body in the organization. It is made up of representatives from each of the glass container manufacturing member companies, as well as two representatives from the associate member companies (supplier member companies). The Trustees meet quarterly for budget, agenda and future planning purposes.
GPI hosts two meetings each year: a spring membership meeting in Washington, D.C., and an annual meeting in the fall.
Scott DeFife serves as the trade association's president.
The board is supported with a series of committees, including Marketing and Communications, Government Affairs & Regulatory Affairs, Environment, Labor & HR, Design and Specifications Committee and Management Committee.
Container finish standards
GPI publishes a voluntary set of standards for glass container finishes and their closures to improve compatibility and interchangeability between manufacturers. This includes vials, wine bottles, canning jars, beer bottles, and jugs. They are
|
https://en.wikipedia.org/wiki/Moisture%20vapor%20transmission%20rate
|
Moisture vapor transmission rate (MVTR), also water vapor transmission rate (WVTR), is a measure of the passage of water vapor through a substance. It is a measure of the permeability for vapor barriers.
There are many industries where moisture control is critical. Moisture-sensitive foods and pharmaceuticals are put in packaging with controlled MVTR to achieve the required quality, safety, and shelf life. In clothing, MVTR as a measure of breathability has contributed to greater comfort for wearers of clothing for outdoor activity. The building materials industry also manages the moisture barrier properties in architectural components to ensure the correct moisture levels in the internal spaces of buildings. Optoelectronic devices based on organic material, generally known as OLEDs, require encapsulation with low WVTR values to maintain their performance over the lifetime of the device.
MVTR generally decreases with increasing thickness of the film/barrier, and increases with increasing temperature.
Measurement
There are various techniques to measure MVTR, ranging from gravimetric techniques that measure the gain or loss of moisture by mass, to highly sophisticated instrumental techniques that in some designs can measure extremely low transmission rates. Special care has to be taken in measuring porous substances such as fabrics, as some techniques are not appropriate. For very low levels, many techniques do not have adequate resolution. Numerous standard methods are described in ISO, ASTM, BS, DIN etc.; these are quite often industry-specific. Instrument manufacturers are often able to provide test methods developed to fully exploit the specific design which they are selling. Selecting the most appropriate instrument is a demanding task, and is itself part of the measurement problem.
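The basic gravimetric calculation divides the transmitted mass of water vapor by the exposed area and the test duration. A minimal sketch, in which the sample numbers are assumptions for illustration only:

mass_gain_g = 0.042        # g of moisture gained by a desiccant cup (assumed)
area_m2 = 0.005            # exposed film area in m^2 (assumed, 50 cm^2)
duration_days = 1.0        # test duration (assumed)

mvtr = mass_gain_g / (area_m2 * duration_days)
print(f"MVTR = {mvtr:.1f} g/(m^2*day)")   # 8.4 g/(m^2*day)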
The conditions under which the measurement is made has a considerable influence on the result. Both the temperature and humidity gradients across the sample need to be meas
|
https://en.wikipedia.org/wiki/Glass%20code
|
A glass code is a method of classifying glasses for optical use, such as the manufacture of lenses and prisms. There are many different types of glass with different compositions and optical properties, and a glass code is used to distinguish between them.
There are several different glass classification schemes in use, most based on the catalogue systems used by glass manufacturers such as Pilkington and Schott Glass. These tend to be based on the material composition, for example BK7 is the Schott Glass classification of a common borosilicate crown glass.
Technical definition
The international glass code is based on U.S. military standard MIL-G-174, and is a six-digit number specifying the glass according to its refractive index $n_d$ at the Fraunhofer d- (or D3-) line, 589.3 nm, and its Abbe number $V_d$ also taken at that line. The resulting glass code is the value of $n_d - 1$ rounded to three digits, followed by $V_d$ rounded to three digits, with all decimal points ignored. For example, BK7 has $n_d = 1.5168$ and $V_d = 64.17$, giving a six-digit glass code of 517642.
Consequently, a linear approximation for the refractive index dispersion close to that wavelength is given by

$n(\lambda) \approx n_d - \frac{n_d - 1}{V_d} \cdot \frac{\lambda - \lambda_d}{\lambda_C - \lambda_F},$

where $\lambda$ is the wavelength in nanometers, $\lambda_d$ = 589.3 nm, and $\lambda_C$ = 656.3 nm and $\lambda_F$ = 486.1 nm are the wavelengths used in the definition of the Abbe number.
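A minimal sketch of the code construction just described; the rounding convention is the natural reading of "three digits with decimal points ignored", and works for typical glasses with $n_d$ < 2 and $V_d$ < 100:

def glass_code(nd, vd):
    """Six-digit glass code: three digits from (nd - 1), three from vd."""
    part1 = round((nd - 1) * 1000)       # e.g. 0.5168 -> 517
    part2 = round(vd * 10)               # e.g. 64.17  -> 642
    return f"{part1:03d}{part2:03d}"

print(glass_code(1.5168, 64.17))   # 517642 (BK7)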
The following table shows some example glasses and their glass code. Note that the glass properties can vary slightly between different manufacturer types.
References
Optical materials
Glass engineering and science
|
https://en.wikipedia.org/wiki/Almond%20meal
|
Almond meal, almond flour or ground almond is made from ground sweet almonds. Almond flour is usually made with blanched almonds (no skin), whereas almond meal can be made with whole or blanched almonds. The consistency is more like corn meal than wheat flour.
It is used in pastry and confectionery – in the manufacture of almond macarons and macaroons and other sweet pastries, in cake and pie filling, such as Austrian Sachertorte – and is one of the two main ingredients of marzipan and almond paste. In France, almond meal is an important ingredient in frangipane, the filling of traditional galette des Rois cake.
Almond meal has recently become important in baking items for those on low-carbohydrate diets. It adds moistness and a rich nutty taste to baked goods. Items baked with almond meal tend to be calorie-dense.
Almonds have high levels of polyunsaturated fats. Typically, the omega 6 fatty acids in almonds are protected from oxidation by the surface skin and vitamin E. When almonds are ground, this protective skin is broken and exposed surface area increases dramatically, greatly enhancing the nut's tendency to oxidize.
See also
Almond butter
List of almond dishes
References
Almonds
Food ingredients
|
https://en.wikipedia.org/wiki/Almond%20paste
|
Almond paste is made from ground almonds or almond meal and sugar in equal quantities, with small amounts of cooking oil, beaten eggs, heavy cream or corn syrup added as a binder. It is similar to marzipan, but has a coarser texture. Almond paste is used as a filling in pastries, but it can also be found in chocolates. In commercially manufactured almond paste, ground apricot or peach kernels are sometimes added to keep the cost down (also known as persipan).
Uses
Almond paste is used as a filling in pastries of many different cultures. It is a chief ingredient of the American bear claw pastry. In the Nordic countries almond paste is used extensively, in various pastries and cookies. In Sweden (where it is known as mandelmassa) it is used in biscuits, muffins and buns and as a filling in the traditional Shrove Tuesday pastry semla and is used in Easter and Christmas sweets. In Denmark (where it is known as marcipan or mandelmasse), almond paste is used in several pastries, for example as a filling in the Danish traditional pastry kringle. In Finland almond paste (called mantelimassa) is used in chocolate pralines and in the Finnish version of the Shrove Tuesday pastry laskiaispulla.
In the Netherlands, almond paste (called amandelspijs) is used in gevulde speculaas (stuffed brown-spiced biscuit) and banket. It is used as filling in the fruited Christmas bread Kerststol, traditionally eaten at Christmas breakfast. In Germany, almond paste is also used in pastries and sweets. In the German language, almond paste is known as Marzipanrohmasse and sold for example as Lübecker Edelmarzipan, i.e. "high quality marzipan from Lübeck".
In Italy it is known as "pasta di mandorle". The soft paste is molded into creative shapes by pastry chefs, which can be used as cake decorations or to make frutta martorana.
Almond paste is the main ingredient of French traditional calisson candy in Aix-en-Provence. It is used as a filling in almond croissants.
In Turkey, almond paste
|
https://en.wikipedia.org/wiki/Surface%20states
|
Surface states are electronic states found at the surface of materials. They are formed due to the sharp transition from solid material that ends with a surface and are found only at the atom layers closest to the surface. The termination of a material with a surface leads to a change of the electronic band structure from the bulk material to the vacuum. In the weakened potential at the surface, new electronic states can be formed, so called surface states.
Origin at condensed matter interfaces
As stated by Bloch's theorem, eigenstates of the single-electron Schrödinger equation with a perfectly periodic potential, a crystal, are Bloch waves

$\Psi_{n\mathbf{k}}(\mathbf{r}) = e^{i\mathbf{k} \cdot \mathbf{r}}\, u_{n\mathbf{k}}(\mathbf{r}).$

Here $u_{n\mathbf{k}}(\mathbf{r})$ is a function with the same periodicity as the crystal, n is the band index and k is the wave number. The allowed wave numbers for a given potential are found by applying the usual Born–von Karman cyclic boundary conditions. The termination of a crystal, i.e. the formation of a surface, obviously causes deviation from perfect periodicity. Consequently, if the cyclic boundary conditions are abandoned in the direction normal to the surface, the behavior of electrons will deviate from the behavior in the bulk, and some modifications of the electronic structure have to be expected.
A simplified model of the crystal potential in one dimension can be sketched as shown in Figure 1. In the crystal, the potential has the periodicity, a, of the lattice while close to the surface it has to somehow attain the value of the vacuum level. The step potential (solid line) shown in Figure 1 is an oversimplification which is mostly convenient for simple model calculations. At a real surface the potential is influenced by image charges and the formation of surface dipoles and it rather looks as indicated by the dashed line.
Given the potential in Figure 1, it can be shown that the one-dimensional single-electron Schrödinger equation gives two qualitatively different types of solutions.
The first type of states (see figure 2) extends into
|
https://en.wikipedia.org/wiki/Hausdorff%20moment%20problem
|
In mathematics, the Hausdorff moment problem, named after Felix Hausdorff, asks for necessary and sufficient conditions that a given sequence (m0, m1, m2, ...) be the sequence of moments

$m_n = \int_0^1 x^n \, d\mu(x)$

of some Borel measure $\mu$ supported on the closed unit interval [0, 1]. In the case m0 = 1, this is equivalent to the existence of a random variable X supported on [0, 1], such that $\mathbb{E}[X^n] = m_n$.
The essential difference between this and other well-known moment problems is that this is on a bounded interval, whereas in the Stieltjes moment problem one considers a half-line $[0, \infty)$, and in the Hamburger moment problem one considers the whole line $(-\infty, \infty)$. The Stieltjes moment problems and the Hamburger moment problems, if they are solvable, may have infinitely many solutions (indeterminate moment problem) whereas a Hausdorff moment problem always has a unique solution if it is solvable (determinate moment problem). In the indeterminate moment problem case, there are infinitely many measures corresponding to the same prescribed moments, and they form a convex set. The set of polynomials may or may not be dense in the associated Hilbert spaces if the moment problem is indeterminate, and it depends on whether the measure is extremal or not. But in the determinate moment problem case, the set of polynomials is dense in the associated Hilbert space.
Completely monotonic sequences
In 1921, Hausdorff showed that (m0, m1, m2, ...) is such a moment sequence if and only if the sequence is completely monotonic, that is, its difference sequences satisfy the equation

$(-1)^k (\Delta^k m)_n \ge 0$

for all $n, k \ge 0$. Here, $\Delta$ is the difference operator given by

$(\Delta m)_n = m_{n+1} - m_n.$
The necessity of this condition is easily seen by the identity

$(-1)^k (\Delta^k m)_n = \int_0^1 x^n (1 - x)^k \, d\mu(x),$

which is non-negative since it is the integral of a non-negative function. For example, it is necessary to have

$(\Delta^2 m)_0 = m_0 - 2m_1 + m_2 \ge 0.$
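A small numerical sketch of the criterion, checking finitely many of the inequalities for the moments m_n = 1/(n+1) of Lebesgue measure on [0, 1]; the truncation orders are arbitrary choices:

def diff(seq):
    """Forward difference operator: (delta m)_n = m_{n+1} - m_n."""
    return [b - a for a, b in zip(seq, seq[1:])]

m = [1.0 / (n + 1) for n in range(12)]   # m_n = integral of x^n over [0, 1]
seq, ok = m[:], True
for k in range(8):
    ok &= all((-1) ** k * v >= 0 for v in seq)   # (-1)^k (delta^k m)_n >= 0
    seq = diff(seq)
print(ok)   # True: the sequence passes every checked inequality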
See also
Total monotonicity
References
Hausdorff, F. "Summationsmethoden und Momentfolgen. I." Mathematische Zeitschrift 9, 74–109, 1921.
Hausdorff, F. "Summationsmethoden und Momentfolgen. II." Mathematische Zeitschrift 9, 280–299, 1921.
Feller, W. "An Introduction to Probability
|
https://en.wikipedia.org/wiki/Hamburger%20moment%20problem
|
In mathematics, the Hamburger moment problem, named after Hans Ludwig Hamburger, is formulated as follows: given a sequence (m0, m1, m2, ...), does there exist a positive Borel measure μ (for instance, the measure determined by the cumulative distribution function of a random variable) on the real line such that

$m_n = \int_{-\infty}^{\infty} x^n \, d\mu(x) \quad \text{for all } n \ge 0?$
In other words, an affirmative answer to the problem means that (m0, m1, m2, ...) is the sequence of moments of some positive Borel measure μ.
The Stieltjes moment problem, Vorobyev moment problem, and the Hausdorff moment problem are similar but replace the real line by the half-line $[0, \infty)$ (Stieltjes and Vorobyev; but Vorobyev formulates the problem in the terms of matrix theory), or by a bounded interval (Hausdorff).
Characterization
The Hamburger moment problem is solvable (that is, (mn) is a sequence of moments) if and only if the corresponding Hankel kernel on the nonnegative integers

$A = (m_{j+k})_{j,k \ge 0}$

is positive definite, i.e.,

$\sum_{j,k \ge 0} m_{j+k}\, c_j \bar{c}_k \ge 0$
for every arbitrary sequence (cj)j ≥ 0 of complex numbers that are finitary (i.e. cj = 0 except for finitely many values of j).
For the "only if" part of the claims simply note that
which is non-negative if is non-negative.
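Before sketching the converse, note that the criterion is easy to test numerically on a truncated moment sequence. A small sketch using the moments of the standard Gaussian, m_n = (n-1)!! for even n and 0 for odd n, and a finite Hankel matrix:

import numpy as np

def gaussian_moment(n):
    """nth moment of the standard Gaussian: (n-1)!! for even n, else 0."""
    if n % 2 == 1:
        return 0.0
    r, k = 1.0, n - 1
    while k > 0:
        r *= k
        k -= 2
    return r

N = 6
H = np.array([[gaussian_moment(j + k) for k in range(N)] for j in range(N)])
print(np.linalg.eigvalsh(H).min() >= -1e-9)   # True: the Hankel matrix is PSD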
We sketch an argument for the converse. Let Z+ be the nonnegative integers and F0(Z+) denote the family of complex valued sequences with finitary support. The positive Hankel kernel A induces a (possibly degenerate) sesquilinear product on the family of complex-valued sequences with finite support. This in turn gives a Hilbert space
whose typical element is an equivalence class denoted by [f].
Let en be the element in F0(Z+) defined by en(m) = δnm. One notices that

$\langle [e_n], [e_m] \rangle = m_{n+m}.$
Therefore, the "shift" operator T on , with T[en] = [en + 1], is symmetric.
On the other hand, the desired expression

$m_n = \int_{-\infty}^{\infty} x^n \, d\mu(x)$
suggests that μ is the spectral measure of a self-adjoint operator. (More precisely stated, μ is the spectral measure for an operator defined below and the vector [1], ). If we can find a "function model" such that the symmetric operator T is multip
|
https://en.wikipedia.org/wiki/Capacitively%20coupled%20plasma
|
A capacitively coupled plasma (CCP) is one of the most common types of industrial plasma sources. It essentially consists of two metal electrodes separated by a small distance, placed in a reactor. The gas pressure in the reactor can be lower than atmospheric pressure, or it can be atmospheric.
Description
A typical CCP system is driven by a single radio-frequency (RF) power supply, typically at 13.56 MHz. One of two electrodes is connected to the power supply, and the other one is grounded. As this configuration is similar in principle to a capacitor in an electric circuit, the plasma formed in this configuration is called a capacitively coupled plasma.
When an electric field is generated between electrodes, atoms are ionized and release electrons. The electrons in the gas are accelerated by the RF field and can ionize the gas directly or indirectly by collisions, producing secondary electrons. When the electric field is strong enough, it can lead to what is known as electron avalanche. After avalanche breakdown, the gas becomes electrically conductive due to abundant free electrons. Often it accompanies light emission from excited atoms or molecules in the gas. When visible light is produced, plasma generation can be indirectly observed even with bare eyes.
A variation on capacitively coupled plasma involves isolating one of the electrodes, usually with a capacitor. The capacitor appears like a short circuit to the high frequency RF field, but like an open circuit to direct current (DC) field. Electrons impinge on the electrode in the sheath, and the electrode quickly acquires a negative charge (or self-bias) because the capacitor does not allow it to discharge to ground. This sets up a secondary, DC field across the plasma in addition to the alternating current (AC) field. Massive ions are unable to react to the quickly changing AC field, but the strong, persistent DC field accelerates them toward the self-biased electrode. These energetic ions are exploited in m
|
https://en.wikipedia.org/wiki/Statistics%20Online%20Computational%20Resource
|
The Statistics Online Computational Resource (SOCR) is an online multi-institutional research and education organization. SOCR designs, validates and broadly shares a suite of online tools for statistical computing, and interactive materials for hands-on learning and teaching concepts in data science, statistical analysis and probability theory. The SOCR resources are platform-agnostic, based on HTML, XML and Java, and all materials, tools and services are freely available over the Internet.
The core SOCR components include interactive distribution calculators, statistical analysis modules, tools for data modeling, graphics visualization, instructional resources, learning activities and other resources.
All SOCR resources are licensed under either the GNU Lesser General Public License or CC BY; they are peer-reviewed, integrated internally, and interoperate with independent digital libraries developed by other professional societies and scientific organizations such as NSDL, Open Educational Resources, the Mathematical Association of America, the California Digital Library, the LONI Pipeline, etc.
See also
List of statistical packages
Comparison of statistical packages
External links
SOCR University of Michigan site
SOCR UCLA site
References
Educational math software
Research institutes in the United States
Statistical software
University of Michigan
|
https://en.wikipedia.org/wiki/CRAM-MD5
|
In cryptography, CRAM-MD5 is a challenge–response authentication mechanism (CRAM) based on the HMAC-MD5 algorithm. As one of the mechanisms supported by the Simple Authentication and Security Layer (SASL), it is often used in email software as part of SMTP Authentication and for the authentication of POP and IMAP users, as well as in applications implementing LDAP, XMPP, BEEP, and other protocols.
When such software requires authentication over unencrypted connections, CRAM-MD5 is preferred over mechanisms that transmit passwords "in the clear," such as LOGIN and PLAIN. However, it cannot prevent derivation of a password through a brute-force attack, so it is less effective than alternative mechanisms that avoid passwords or that use connections encrypted with Transport Layer Security (TLS).
Protocol
The CRAM-MD5 protocol involves a single challenge and response cycle, and is initiated by the server:
Challenge: The server sends a base64-encoded string to the client. Before encoding, it could be any random string, but the standard that currently defines CRAM-MD5 says that it is in the format of a Message-ID email header value (including angle brackets) and includes an arbitrary string of random digits, a timestamp, and the server's fully qualified domain name.
Response: The client responds with a string created as follows.
The challenge is base64-decoded.
The decoded challenge is hashed using HMAC-MD5, with a shared secret (typically, the user's password, or a hash thereof) as the secret key.
The hashed challenge is converted to a string of lowercase hex digits.
The username and a space character are prepended to the hex digits.
The concatenation is then base64-encoded and sent to the server.
Comparison: The server uses the same method to compute the expected response. If the given response and the expected response match, then authentication was successful.
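As an illustration (not part of the specification text above), the client-side computation maps directly onto a few Python standard-library calls. The sketch below reproduces the worked example from RFC 2195, where the username is "tim" and the shared secret is "tanstaaftanstaaf":

import base64
import hashlib
import hmac

def cram_md5_response(username: str, password: str, challenge_b64: str) -> str:
    # 1. base64-decode the challenge
    challenge = base64.b64decode(challenge_b64)
    # 2-3. hash with HMAC-MD5 keyed by the shared secret; hexdigest() yields lowercase hex
    digest = hmac.new(password.encode(), challenge, hashlib.md5).hexdigest()
    # 4. prepend the username and a space character
    reply = f"{username} {digest}"
    # 5. base64-encode the concatenation for transmission
    return base64.b64encode(reply.encode()).decode()

challenge = "PDE4OTYuNjk3MTcwOTUyQHBvc3RvZmZpY2UucmVzdG9uLm1jaS5uZXQ+"
print(cram_md5_response("tim", "tanstaaftanstaaf", challenge))
# prints: dGltIGI5MTNhNjAyYzdlZGE3YTQ5NWI0ZTZlNzMzNGQzODkw

The server performs the same computation on its copy of the secret and compares the two strings, as described under Comparison above.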
Strengths
The one-way hash and the fresh random challenge provide three types of security:
Others
|
https://en.wikipedia.org/wiki/Benzyl%20acetate
|
Benzyl acetate is an organic ester with the molecular formula C9H10O2. It is formed by the condensation of benzyl alcohol and acetic acid.
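Written out (standard esterification chemistry, not stated explicitly in the text above), the condensation is:

C6H5CH2OH + CH3COOH ⇌ CH3COOCH2C6H5 + H2O (benzyl alcohol + acetic acid ⇌ benzyl acetate + water)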
Like most other esters, it possesses a sweet, pleasant aroma reminiscent of jasmine, owing to which it finds applications in personal hygiene and health care products. It is a constituent of jasmine and of the essential oils of ylang-ylang and neroli. As a flavoring agent, it is also used to impart jasmine or apple notes to cosmetics and personal care products such as lotions and hair creams.
It is one of many compounds that are attractive to males of various species of orchid bees, which collect and use it as an intraspecific pheromone. In apiculture, benzyl acetate is used as a bait to collect bees. Natural sources of benzyl acetate include flowers such as jasmine (Jasminum) and fruits such as pear and apple.
References
External links
Perfume ingredients
Flavors
Insect pheromones
IARC Group 3 carcinogens
Acetate esters
Benzyl esters
Sweet-smelling chemicals
|
https://en.wikipedia.org/wiki/Mees%27%20lines
|
Mees' lines or Aldrich–Mees lines, also called leukonychia striata, are white lines of discoloration across the nails of the fingers and toes (leukonychia).
Presentation
They are typically white bands traversing the width of the nail. As the nail grows, they move towards the end and finally disappear when the affected portion is trimmed.
Causes
Mees' lines appear after an episode of poisoning with arsenic, thallium, or other heavy metals, with selenium, or with the opioid MT-45, and can also appear if the subject is suffering from kidney failure. They have been observed in chemotherapy patients.
Eponym and history
Although the phenomenon is named after Dutch physician R. A. Mees, who described the abnormality in 1919, earlier descriptions of the same abnormality were made by Englishman E. S. Reynolds in 1901 and by American C. J. Aldrich in 1904.
See also
Leukonychia
List of cutaneous conditions
Muehrcke's nails – a similar condition, except the lines are underneath the nails and so do not move as the nail grows
References
External links
Conditions of the skin appendages
|
https://en.wikipedia.org/wiki/Hot%20spare
|
A hot spare, warm spare, or hot standby is used as a failover mechanism to provide reliability in system configurations. The hot spare is active and connected as part of a working system. When a key component fails, the hot spare is switched into operation. More generally, a hot standby can refer to any device or system that is held in readiness to overcome an otherwise significant start-up delay.
Examples
Examples of hot spares include components such as A/V switches, computers, network printers, and hard disks. The equipment is powered on, or considered "hot", but not actively functioning in (i.e., used by) the system.
Electrical generators may be held on hot standby, and a steam locomotive may be kept at the shed, fired up (literally hot), ready to replace an engine that fails in service.
Explanation
In designing a reliable system, it is recognized that there will be failures. At the extreme, a complete system can be duplicated and kept up to date—so in the event of the primary system failing, the secondary system can be switched in with little or no interruption. More often, a hot spare is a single vital component without which the entire system would fail. The spare component is integrated into the system in such a way that in the event of a problem, the system can be altered to use the spare component. This may be done automatically or manually, but in either case it is normal to have some means of error detection. A hot spare does not necessarily give 100% availability or protect against temporary loss of the system during the switching process; it is designed to significantly reduce the time that the system is unavailable.
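As a rough sketch of that switching logic (the is_healthy and switch_over callables are hypothetical stand-ins for a real system's error-detection and switching mechanisms, not any particular API):

import time

def run_with_hot_spare(primary, spare, is_healthy, switch_over, poll_seconds=1.0):
    # The spare is powered and connected but idle until needed.
    active, standby = primary, spare
    while True:  # runs for the lifetime of the system in this sketch
        if not is_healthy(active):       # error detection, automatic here
            switch_over(standby)         # a brief outage is still possible during the switch
            active, standby = standby, active
        time.sleep(poll_seconds)         # the polling interval bounds detection delay

The polling interval plus the duration of the switch-over bounds how long the system is unavailable, which is precisely the quantity a hot spare is meant to minimize.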
Hot standby may have a slightly different connotation from hot spare, namely being active but not productive; that is, it describes a state rather than an object. For example, in a national power grid, the supply of power needs to be balanced against demand over the short term. It can take many hours to bring a coal-fired power station up to produ
|
https://en.wikipedia.org/wiki/Natural%20heritage
|
Natural heritage refers to the sum total of the elements of biodiversity, including flora and fauna, ecosystems and geological structures. It forms part of our natural resources.
Definition
Heritage is that which is inherited from past generations, maintained in the present, and bestowed to future generations. The term "natural heritage", derived from "natural inheritance", pre-dates the term "biodiversity". It is a less scientific term and more easily comprehended in some ways by the wider audience interested in conservation.
The term was used in this context in the United States when Jimmy Carter set up the Georgia Heritage Trust while he was governor of Georgia; Carter's trust dealt with both natural and cultural heritage. It would appear that Carter picked the term up from Lyndon Johnson, who used it in a 1966 Message to Congress. (Johnson may have gotten the term from his wife, Lady Bird Johnson, who was personally interested in conservation.) President Johnson signed the Wilderness Act of 1964.
The term "Natural Heritage" was picked up by the Science Division of The Nature Conservancy (TNC) when, under Robert E. Jenkins, Jr., it launched in 1974 what ultimately became the network of state natural heritage programs—one in each state, all using the same methodology and all supported permanently by state governments because they scientifically document conservation priorities and facilitate science-based environmental reviews. When this network was extended outside the United States, the term "Conservation Data Center (or Centre)" was suggested by Guillermo Mann and came to be preferred for programs outside the US. Despite the name difference, these programs, too, use the same core methodology as the 50 state natural heritage programs. In 1994 The network of natural heritage programs formed a membership association to work together on projects of common interest: the Association for Biodiversity Information (ABI). In 1999, Through an agreement with The Nature
|
https://en.wikipedia.org/wiki/Design%20structure%20matrix
|
The design structure matrix (DSM; also referred to as dependency structure matrix, dependency structure method, dependency source matrix, problem solving matrix (PSM), incidence matrix, N2 matrix, interaction matrix, dependency map or design precedence matrix) is a simple, compact and visual representation of a system or project in the form of a square matrix.
It is the equivalent of an adjacency matrix in graph theory, and is used in systems engineering and project management to model the structure of complex systems or processes, in order to perform system analysis, project planning and organization design. Don Steward coined the term "design structure matrix" in the 1960s, using the matrices to solve mathematical systems of equations.
Overview
A design structure matrix lists all constituent subsystems/activities and the corresponding information exchange, interactions, and dependency patterns. For example, where the matrix elements represent activities, the matrix details what pieces of information are needed to start a particular activity, and shows where the information generated by that activity leads. In this way, one can quickly recognize which other activities are reliant upon information outputs generated by each activity.
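As a minimal sketch (the activity names and dependency pattern are invented for illustration, not taken from the text), a binary DSM can be held as a square array in which entry (i, j) records that activity i needs information produced by activity j; with the rows in execution order, any mark above the diagonal is feedback from a later activity:

activities = ["Spec", "Design", "Build", "Test"]
# dsm[i][j] == 1 means activity i needs information produced by activity j
dsm = [
    [0, 0, 0, 1],  # Spec is revised using feedback from Test
    [1, 0, 0, 0],  # Design depends on Spec
    [0, 1, 0, 0],  # Build depends on Design
    [0, 0, 1, 0],  # Test depends on Build
]

for i, row in enumerate(dsm):
    for j, needed in enumerate(row):
        if needed and j > i:  # a mark above the diagonal indicates a feedback loop
            print(f"Feedback: {activities[i]} waits on later activity {activities[j]}")

(Conventions differ between authors on whether rows or columns hold an activity's inputs; the row-as-consumer convention is assumed here.)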
The use of DSMs in both research and industrial practice increased greatly in the 1990s. DSMs have been applied in the building construction, real estate development, semiconductor, automotive, photographic, aerospace, telecom, small-scale manufacturing, factory equipment, and electronics industries, to name a few, as well as in many government agencies.
The matrix representation has several strengths.
The matrix can represent a large number of system elements and their relationships in a compact way that highlights important patterns in the data (such as feedback loops and modules).
The presentation is amenable to matrix-based analysis techniques, which can be used to improve the structure of the system.
In modeling activit
|
https://en.wikipedia.org/wiki/Myrcene
|
Myrcene, or β-myrcene, is a monoterpene. A colorless oil, it occurs widely in essential oils. It is produced mainly semi-synthetically from Myrcia, from which it gets its name. It is an intermediate in the production of several fragrances. α-Myrcene is the name for the isomer 2-methyl-6-methylene-1,7-octadiene, which has not been found in nature.
Production
Myrcene is often produced commercially by the pyrolysis (400 °C) of β-pinene, which is obtained from turpentine. It is rarely obtained directly from plants.
Plants biosynthesize myrcene via geranyl pyrophosphate (GPP), which isomerizes into linalyl pyrophosphate. The release of the pyrophosphate (OPP) and a proton completes the conversion.
Occurrence
It could in principle be extracted from any number of plants, such as verbena or wild thyme, the leaves of which contain up to 40% by weight of myrcene. Many other plants contain myrcene, sometimes in substantial amounts. Some of these include cannabis, hops, Houttuynia, lemon grass, mango, Myrcia, West Indian bay tree, and cardamom.
Of the several terpenes extracted from Humulus lupulus (hops), the largest monoterpene fraction is β-myrcene. One study of the chemical composition of the fragrance of Cannabis sativa found β-myrcene to constitute between 29.4% and 65.8% of the steam-distilled essential oil for the set of fiber and drug strains tested. Additionally, myrcene is thought to be the predominant terpene found in modern cannabis cultivars in North America. Photo-oxidation of myrcene has been shown to rearrange the molecule into a novel terpene known as "hashishene", named for its abundance in hashish.
It is found in the South African Adenandra villosa (50%) and the Brazilian Schinus molle (40%). Myrcene is also found in Myrcia cuprea petitgrain (up to 48%), bay leaf, juniper berry, cannabis, and hops.
Use in fragrance and flavor industries
Myrcene is an intermediate used in the perfumery industry. It has a pleasant odor but is rarely
|