https://en.wikipedia.org/wiki/Repeating%20waveforms
|
Repeating waveforms is a technique for digital synthesis common in PC sound cards.
The waveform amplitude values are stored in a buffer memory, which is addressed by a phase generator. When addressed, the retrieved value is used as the basis of the synthesized sound.
In the phase generator, a value proportional to the desired signal frequency is periodically added to an accumulator. The high order bits of the accumulator form the output address, while the typically larger number of bits in the accumulator and addition value results in an arbitrarily high frequency resolution.
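As a rough sketch of the phase-accumulator scheme described above, the following Python fragment stores one cycle of a waveform in a buffer and generates output by repeatedly adding a frequency-proportional increment to an accumulator whose high-order bits address the table. All concrete numbers (sample rate, 32-bit accumulator, 1024-entry sine table) are illustrative assumptions, not values from the article.

```python
import math

SAMPLE_RATE = 48_000       # output rate in Hz (assumed)
ACC_BITS = 32              # width of the phase accumulator (assumed)
TABLE_BITS = 10            # high-order bits used to address the wavetable
TABLE_SIZE = 1 << TABLE_BITS

# Buffer memory holding one cycle of the waveform (here a sine wave)
wavetable = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def synthesize(frequency_hz, num_samples):
    """Phase-accumulator synthesis: add a frequency-proportional value to an
    accumulator every sample and use its high-order bits as the table address."""
    increment = int(frequency_hz * (1 << ACC_BITS) / SAMPLE_RATE)
    accumulator = 0
    out = []
    for _ in range(num_samples):
        address = accumulator >> (ACC_BITS - TABLE_BITS)   # high-order bits
        out.append(wavetable[address])
        accumulator = (accumulator + increment) & ((1 << ACC_BITS) - 1)
    return out

samples = synthesize(440.0, 100)   # 100 samples of an (approximately) 440 Hz tone
```

With a 32-bit accumulator the smallest frequency step is SAMPLE_RATE / 2³² (well under a millihertz at 48 kHz), which is the arbitrarily high frequency resolution mentioned above.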
|
https://en.wikipedia.org/wiki/Equating%20coefficients
|
In mathematics, the method of equating the coefficients is a way of solving a functional equation of two expressions such as polynomials for a number of unknown parameters. It relies on the fact that two expressions are identical precisely when corresponding coefficients are equal for each different type of term. The method is used to bring formulas into a desired form.
Example in real fractions
Suppose we want to apply partial fraction decomposition to the expression:
that is, we want to bring it into the form:
in which the unknown parameters are A, B and C.
Multiplying these formulas by x(x − 1)(x − 2) turns both into polynomials, which we equate:
or, after expansion and collecting terms with equal powers of x:
At this point it is essential to realize that the polynomial 1 is in fact equal to the polynomial 0x² + 0x + 1, having zero coefficients for the positive powers of x. Equating the corresponding coefficients now results in this system of linear equations:
Solving it results in:
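The displayed formulas in this example did not survive extraction. The standard worked computation consistent with the surrounding text (multiplying by x(x − 1)(x − 2) and equating with the polynomial 1) is reconstructed below.

```latex
% Reconstruction of the omitted displayed formulas (standard worked example)
\[ \frac{1}{x(x-1)(x-2)} = \frac{A}{x} + \frac{B}{x-1} + \frac{C}{x-2} \]
% Multiplying both sides by x(x-1)(x-2):
\[ 1 = A(x-1)(x-2) + Bx(x-2) + Cx(x-1) \]
% Expanding and collecting terms with equal powers of x:
\[ 1 = (A+B+C)x^2 + (-3A-2B-C)x + 2A \]
% Equating corresponding coefficients:
\[ A+B+C = 0, \qquad -3A-2B-C = 0, \qquad 2A = 1 \]
% Solving the linear system:
\[ A = \tfrac{1}{2}, \qquad B = -1, \qquad C = \tfrac{1}{2} \]
```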
Example in nested radicals
A similar problem, involving equating like terms rather than coefficients of like terms, arises if we wish to de-nest a nested radical, that is, to obtain an equivalent expression that does not involve a square root of an expression itself involving a square root. To do so, we can postulate the existence of rational parameters d, e such that
Squaring both sides of this equation yields:
To find d and e, we equate the terms not involving square roots and, separately, equate the parts involving radicals; squaring the latter gives a second relation. This gives us two equations, one quadratic and one linear, in the desired parameters d and e, and these can be solved to obtain the solution pair, which is valid if and only if the square root appearing in it is a rational number.
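The radical expressions in this example were likewise lost in extraction. A reconstruction of the standard de-nesting argument matching the prose above, with a + b√c used as an assumed label for the nested radicand, is:

```latex
% Postulate rational d, e such that the nested radical de-nests:
\[ \sqrt{a + b\sqrt{c}} = \sqrt{d} + \sqrt{e} \]
% Squaring both sides:
\[ a + b\sqrt{c} = d + e + 2\sqrt{de} \]
% Equate the terms not involving square roots, and the parts involving radicals:
\[ d + e = a, \qquad 2\sqrt{de} = b\sqrt{c} \;\Longrightarrow\; 4de = b^2 c \]
% One linear and one quadratic equation; solving for d and e:
\[ d, e = \frac{a \pm \sqrt{a^2 - b^2 c}}{2} \]
% A valid rational solution pair exists iff sqrt(a^2 - b^2 c) is rational.
```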
Example of testing for linear dependence of equations
Consider this overdetermined system of equations (with 3 equations in just 2 unknowns):
To test whether the third equation is linearly dependent on the first two, postulate two parameters a and b suc
|
https://en.wikipedia.org/wiki/Helaman%20Ferguson
|
Helaman Rolfe Pratt Ferguson (born 1940 in Salt Lake City, Utah) is an American sculptor and a digital artist, specifically an algorist. He is also well known for his development of the PSLQ algorithm, an integer relation detection algorithm.
Early life and education
Ferguson's mother died when he was about three and his father went off to serve in the Second World War. He was adopted by an Irish immigrant and raised in New York. He learned to work with his hands in an old-world style with earthen materials from his adoptive father who was a carpenter and stonemason by trade. An art-inclined math teacher in high school helped him develop his dual interests in math and art.
Ferguson is a graduate of Hamilton College, a liberal arts school in New York. In 1971, he received a Ph.D. in mathematics from the University of Washington.
Work
In 1977, Ferguson and another mathematician, Rodney Forcade, developed an algorithm for integer relation detection. It was the first viable generalization of the Euclidean algorithm for three or more variables. He later developed a more notable integer relation detection algorithm, the PSLQ algorithm, which was selected as one of the "Top Ten Algorithms of the Century" by Jack Dongarra and Francis Sullivan.
In January 2014, Ferguson and his wife Claire Ferguson delivered an MAA Invited Address, titled "Mathematics in Stone and Bronze," at the Joint Math Meetings in Baltimore, Maryland. He is an active artist, often representing mathematical shapes in his works. One of the first bronze tori sculpted by Ferguson was exhibited at a computer art exhibition in 1989 at the Computer Museum in Boston. His most widely known piece of art is a 69 cm (27") bronze sculpture, Umbilic Torus. In 2010, the Simons Foundation, a private institution committed to the advancement of science and mathematics, commissioned him to create the Umbilic Torus SC, a massive 8.5 m (28½') high sculpture in cast bronze and granite weighing more than nine tons. Wi
|
https://en.wikipedia.org/wiki/ESPN%20Megacast
|
ESPN Megacast, formerly known as ESPN Full Circle, is a multi-network simulcast of a single sporting event across multiple ESPN networks and services, with each feed providing a different version of the telecast that makes use of different features, functions or perspectives. These simulcasts typically involve ESPN's linear television channels and internet streaming platforms, and may occasionally incorporate other Walt Disney Television networks as well.
ESPN Full Circle debuted with ESPN Full Circle: North Carolina at Duke on March 4, 2006, on the one-year anniversary of ESPNU. The game was the North Carolina Tar Heels at the Duke Blue Devils in college basketball. Five further Full Circle broadcasts were produced (one NBA playoff game, one NASCAR race and three more college basketball games) before the format was discontinued in 2007.
After a seven-year hiatus, full-circle broadcasts resumed under the Megacast branding in 2014. To date, the feature has primarily been used for the College Football Playoff and National Championship. ESPN has occasionally provided smaller-scale slates of alternate feeds during other broadcasts, although these have not always used the "Megacast" branding.
College Basketball Megacasts
North Carolina at Duke
The first Full Circle telecast covered the college basketball game between the North Carolina Tar Heels and the Duke Blue Devils, to honor the one-year anniversary of the launch of ESPN's college sports network ESPNU.
ESPN aired the game's traditional coverage (along with live "look-ins" to the other views, simulcast in 120 countries through ESPN International), ESPN2 featured an "Above the Rim" camera, and ESPNU featured a split-screen with the "Cameron Crazy Cam". ESPN360 offered additional stats, hosted by ESPN Radio's Jeff Rickard, Mobile ESPN featured game alerts, live updates and in-game polling for a replay of a classic Duke-North Carolina game, and ESPN.com featured live chats, in-game polling and highlights. The ESPN and E
|
https://en.wikipedia.org/wiki/Spherical%20cow
|
The spherical cow is a humorous metaphor for highly simplified scientific models of complex phenomena. Originating in theoretical physics, the metaphor refers to physicists' tendency to reduce a problem to the simplest form imaginable in order to make calculations more feasible, even if the simplification hinders the model's application to reality.
The metaphor and variants have subsequently been used in other disciplines.
History
The phrase comes from a joke that spoofs the simplifying assumptions sometimes used in theoretical physics.
It is told in many variants, including a joke about a physicist who said he could predict the winner of any race provided it involved spherical horses moving through a vacuum. A 1973 letter to the editor in the journal Science describes the "famous story" about a physicist whose solution to a poultry farm's egg-production problems began with "Postulate a spherical chicken".
Cultural references
The concept is familiar enough that the phrase is sometimes used as shorthand for the entire issue of proper modeling. For example, Consider a Spherical Cow is a 1988 book about problem solving using simplified models. A 2015 paper on the systemic errors introduced by simplifying assumptions about spherical symmetries in galactic dark-matter haloes was titled "Milking the spherical cow – on aspherical dynamics in spherical coordinates".
|
https://en.wikipedia.org/wiki/Core-Plus%20Mathematics%20Project
|
Core-Plus Mathematics is a high school mathematics program consisting of a four-year series of print and digital student textbooks and supporting materials for teachers, developed by the Core-Plus Mathematics Project (CPMP) at Western Michigan University, with funding from the National Science Foundation. Development of the program started in 1992. The first edition, entitled Contemporary Mathematics in Context: A Unified Approach, was completed in 1995. The third edition, entitled Core-Plus Mathematics: Contemporary Mathematics in Context, was published by McGraw-Hill Education in 2015.
Key Features
The first edition of Core-Plus Mathematics was designed to meet the curriculum, teaching, and assessment standards from the National Council of Teachers of Mathematics and the broad goals outlined in the National Research Council report, Everybody Counts: A Report to the Nation on the Future of Mathematics Education. Later editions were designed to also meet the American Statistical Association Guidelines for Assessment and Instruction in Statistics Education (GAISE) and most recently the standards for mathematical content and practice in the Common Core State Standards for Mathematics (CCSSM).
The program puts an emphasis on teaching and learning mathematics through mathematical modeling and mathematical inquiry. Each year, students learn mathematics in four interconnected strands: algebra and functions, geometry and trigonometry, statistics and probability, and discrete mathematical modeling.
First Edition (1994-2003)
The program originally comprised three courses, intended to be taught in grades 9 through 11. Later, authors added a fourth course intended for college-bound students.
Second Edition (2008-2011)
The course was re-organized around interwoven strands of algebra and functions, geometry and trigonometry, statistics and probability, and discrete mathematics. Lesson structure was updated, and technology tools, including the CPMP-Tools software, were introduced.
|
https://en.wikipedia.org/wiki/Mathematically%20Correct
|
Mathematically Correct was a U.S.-based website created by educators, parents, mathematicians, and scientists who were concerned about the direction of reform mathematics curricula based on NCTM standards. Created in 1997, it was a frequently cited website in the so-called Math wars, and was actively updated until 2003.
History
Although Mathematically Correct had a national scope, much of its focus was on advocating against mathematics curricula prevalent in California in the mid-1990s. When California reversed course and adopted more traditional mathematics texts (2001 - 2002), Mathematically Correct changed its focus to reviewing the new text books. Convinced that the choices were adequate, the website went largely dormant.
Mathematically Correct maintained a large section of critical articles and reviews for a number of math programs. Most of the programs opposed by Mathematically Correct had been developed from research projects funded by the National Science Foundation. Most of these programs also claimed to have been based on the 1989 Curriculum and Evaluation Standards for School Mathematics published by the National Council of Teachers of Mathematics.
Mathematically Correct's main point of contention was that, in reform textbooks, traditional methods and concepts have been omitted or replaced by new terminology and procedures. As a result, in the case of the high-school program Core-Plus Mathematics Project, for example, some reports suggest that students may be unprepared for college level courses upon completion of the program. Other programs given poor ratings include programs aimed at elementary school students, such as Dale Seymour Publications (TERC) Investigations in Numbers, Data, and Space and Everyday Learning Everyday Mathematics.
After Mathematically Correct's review of the programs, many have undergone revisions and are now with different publishers. Other programs, such as Mathland have been terminated.
Reviews by the site
Publications w
|
https://en.wikipedia.org/wiki/P.%20A.%20P.%20Moran
|
Patrick Alfred Pierce Moran FRS (14 July 1917 – 19 September 1988) was an Australian statistician who made significant contributions to probability theory and its application to population and evolutionary genetics.
Early years
Patrick Moran was born in Sydney and was the only child of Herbert Michael Moran (b. 1885 in Sydney, d. 1945 in Cambridge UK), a prominent surgeon and captain of the first Wallabies, and Eva Mann (b. 1887 in Sydney, d. 1977 in Sydney). Patrick had five other siblings, but they all died at or shortly after birth. He completed his high school studies in Bathurst, in three and a half years instead of the normal five-year course. At age 16, in 1934, he commenced study at the University of Sydney, where he studied chemistry, math and physics, graduating with first class honours in mathematics in 1937. Following graduation he went to study at Cambridge University from 1937 to 1939; his supervisors noted that he was not a good mathematician, and the outbreak of World War II interrupted his studies. He graduated with an MA (by proxy) from St John's College, Cambridge, on 22 January 1943 and continued his studies there from 1945 to 1946. He was admitted to Balliol College, Oxford University, on 3 December 1946. He was awarded an MA from Oxford University by incorporation in 1947.
Career
During the war Moran worked in rocket development in the Ministry of Supply and later at the External Ballistics Laboratory in Cambridge. In late 1943 he joined the Australian Scientific Liaison Office (ASLO), run by the CSIRO. He worked on applied physics including vision, camouflage, army signals, quality control, road research, infra-red detection, metrology, UHF radio propagation, general radar, bomb-fragmentation, rockets, ASDICs and on operational research. He also wrote some papers on the Hausdorff measure during the War.
After the war, Moran returned to Cambridge where he was supervised by Frank Smithies and worked unsuccessfully on determining the na
|
https://en.wikipedia.org/wiki/San%20Francisco%20City%20Clinic
|
San Francisco City Clinic also known as SF City Clinic or usually as City Clinic is a municipal public sexual health clinic specializing in sexually transmitted infections testing and treatment, in addition to advocacy work and medical research. The center is located in the South of Market or "SoMa" district on the north-east side of San Francisco, California, along San Francisco Bay.
Overview
History
San Francisco City Clinic is run by the City and County of San Francisco's Department of Public Health. The health center opened and began serving the sexually active members of San Francisco's communities in 1933. Its precursor was the Municipal Clinic of San Francisco, opened in 1911 to treat prostitutes suffering from the "Red Plague". The clinic is located at 356 7th Street, where it has operated continuously since its inception. The center has been researching the human immunodeficiency virus since the 1970s. City Clinic has done extensive research on HIV and how it affects the gay community in San Francisco, including tracking the epidemic by zip code and neighborhood and mapping the severity of infections. In the 2000s, Craigslist added a link to the clinic's website as a disclaimer on the page preceding the men-seeking-men section of the site. Since its opening the most common diseases treated have been gonorrhea, syphilis, and chlamydia, and they continue to be so. Most of the clientele are teenagers, people in their 20s, and gays and lesbians.
Operations
The clinic offers low-cost sexually transmitted disease testing and treatment, in addition to birth control, to anyone over the age of 12. It runs out of an old firehouse on 7th Street. The services are provided in English, Spanish, Mandarin and Cantonese Chinese, Tagalog, and Russian. Although there is a nominal US$25 flat-rate charge, no one is refused for lack of funds, nor is anyone invoiced later. The health center also offers free counseling on genital, reproductive, ST
|
https://en.wikipedia.org/wiki/Paratransgenesis
|
Paratransgenesis is a technique that attempts to eliminate a pathogen from vector populations through transgenesis of a symbiont of the vector. The goal of this technique is to control vector-borne diseases. The first step is to identify proteins that prevent the vector species from transmitting the pathogen. The genes coding for these proteins are then introduced into the symbiont, so that they can be expressed in the vector. The final step in the strategy is to introduce these transgenic symbionts into vector populations in the wild. One use of this technique is to prevent human mortality from insect-borne diseases. Preventive methods and current controls against vector-borne diseases depend on insecticides, even though some mosquito breeds may be resistant to them. There are other ways to eliminate them entirely. “Paratransgenesis focuses on utilizing genetically modified insect symbionts to express molecules within the vector that are deleterious to pathogens they transmit.” The acetic acid bacterium Asaia is a beneficial symbiont in the normal development of mosquito larvae; however, it is unknown what Asaia symbionts do in adult mosquitoes.
The first example of this technique used Rhodnius prolixus which is associated with the symbiont Rhodococcus rhodnii. R. prolixus is an important insect vector of Chagas disease that is caused by Trypanosoma cruzi. The strategy was to engineer R. rhodnii to express proteins such as Cecropin A that are toxic to T. cruzi or that block the transmission of T. cruzi.
Attempts have also been made in tsetse flies using bacteria and in malaria mosquitoes using fungi, viruses, or bacteria.
Uses
Although the use of paratransgenesis can serve many different purposes, one of the main purposes is “breaking the disease cycle”. One line of work focuses on experiments with tsetse flies and trypanosomes, which cause sleeping sickness in sub-Saharan Africa. The tsetse fly’s transmission biology was studied to learn how it transmits the disease. This
|
https://en.wikipedia.org/wiki/Traditional%20mathematics
|
Traditional mathematics (sometimes classical math education) was the predominant method of mathematics education in the United States in the early-to-mid 20th century. This contrasts with non-traditional approaches to math education. Traditional mathematics education has been challenged by several reform movements over the last several decades, notably new math, a now largely abandoned and discredited set of alternative methods, and most recently reform or standards-based mathematics based on NCTM standards, which is federally supported and has been widely adopted, but subject to ongoing criticism.
Traditional methods
The topics and methods of traditional mathematics are well documented in books and open source articles of many nations and languages. Major topics covered include:
Elementary arithmetic
Addition
Carry
Subtraction
Multiplication
Multiplication table
Division
Long division
Arithmetic with fractions
Lowest common denominator
Arithmetic mean
Volume
In general, traditional methods are based on direct instruction where students are shown one standard method of performing a task such as decimal addition, in a standard sequence. A task is taught in isolation rather than as only a part of a more complex project. By contrast, reform books often postpone standard methods until students have the necessary background to understand the procedures. Students in modern curricula often explore their own methods for multiplying multi-digit numbers, deepening their understanding of multiplication principles before being guided to the standard algorithm. Parents sometimes misunderstand this approach to mean that the children will not be taught formulas and standard algorithms and therefore there are occasional calls for a return to traditional methods. Such calls became especially intense during the 1990s. (See Math wars.)
A traditional sequence early in the 20th century would leave topics such as algebra or geometry entirely for high school, and statistic
|
https://en.wikipedia.org/wiki/Investigations%20in%20Numbers%2C%20Data%2C%20and%20Space
|
Investigations in Numbers, Data, and Space is a K–5 mathematics curriculum, developed at TERC in Cambridge, Massachusetts, United States. The curriculum is often referred to as Investigations or simply TERC. Patterned after the NCTM standards for mathematics, it is among the most widely used of the new reform mathematics curricula. As opposed to referring to textbooks and having teachers impose methods for solving arithmetic problems, the TERC program uses a constructivist approach that encourages students to develop their own understanding of mathematics. The curriculum underwent a major revision in 2005–2007.
History
Investigations was developed between 1990 and 1998. It was just one of a number of reform mathematics curricula initially funded by a National Science Foundation grant. The goals of the project raised opposition to the curriculum from critics (both parents and mathematics teachers) who objected to the emphasis on conceptual learning instead of instruction in more recognized specific methods for basic arithmetic.
The goal of the Investigations curriculum is to help all children understand the fundamental ideas of number and arithmetic, geometry, data, measurement and early algebra. Unlike traditional methods, the original edition did not provide student textbooks to describe standard methods or provide solved examples. Instead, students were guided to develop their own invented algorithms through working with concrete representations of number such as manipulatives and drawings as well as more traditional number sentences. Additional activities include journaling, cutting and pasting, interviewing (for data collection) and playing conceptual games.
Investigations released its second edition for 2006 that continues its focus on the core value of teaching for understanding. The revised version has further emphasis on basic skills and computation to complement the development of place value concepts and number sense. It is also easier for teachers t
|
https://en.wikipedia.org/wiki/Information%20technology%20security%20assessment
|
Information Technology Security Assessment (IT Security Assessment) is an explicit study to locate IT security vulnerabilities and risks.
Background
In an assessment, the assessor should have the full cooperation of the organization being assessed. The organization grants access to its facilities, provides network access, outlines detailed information about the network, etc. All parties understand that the goal is to study security and identify improvements to secure the systems. An assessment for security is potentially the most useful of all security tests.
Purpose of security assessment
The goal of a security assessment (also known as a security audit, security review, or network assessment) is to ensure that necessary security controls are integrated into the design and implementation of a project. A properly completed security assessment should provide documentation outlining any security gaps between a project design and approved corporate security policies. Management can address security gaps in three ways: it can decide to cancel the project, allocate the necessary resources to correct the security gaps, or accept the risk based on an informed risk/reward analysis.
Methodology
The following methodology outline is put forward as an effective means of conducting a security assessment.
Requirement Study and Situation Analysis
Security policy creation and update
Document Review
Risk Analysis
Vulnerability Scan
Data Analysis
Report & Briefing
Sample report
A security assessment report should include the following information:
Introduction/background information
Executive and Management summary
Assessment scope and objectives
Assumptions and limitations
Methods and assessment tools used
Current environment or system description with network diagrams, if any
Security requirements
Summary of findings and recommendations
The general control review result
The vulnerability test results
Risk assessment results including identified assets,
|
https://en.wikipedia.org/wiki/TwinBee
|
TwinBee is a vertically scrolling shooter released by Konami as an arcade video game in 1985 in Japan. Along with Sega's Fantasy Zone, released a year later, TwinBee is credited as an early archetype of the "cute 'em up" type in its genre. It was the first game to run on Konami's Bubble System hardware. TwinBee was ported to the Family Computer and MSX in 1986 and has been included in numerous compilations released in later years. The original arcade game was released outside Japan for the first time in the Nintendo DS compilation Konami Classics Series: Arcade Hits. A mobile phone version was released for i-mode Japan phones in 2003 with edited graphics.
Various TwinBee sequels were released for the arcade and home console markets following the original game, some of which spawned audio drama and anime adaptations in Japan.
Gameplay
TwinBee can be played by up to two players simultaneously. The player takes control of a cartoon-like anthropomorphic spacecraft, with Player 1 taking control of TwinBee, the titular ship, while Player 2 controls WinBee. The game control consists of an eight-way joystick and two buttons: one for shooting enemies in the air and the other for dropping bombs on ground enemies (similarly to Xevious).
The player's primary power-ups are bells that can be uncovered by shooting at the floating clouds where they're hidden. If the player continues shooting the bell after it appears, it will change into one of four other colors: the regular yellow bells only grant bonus points, the white bell will upgrade the player's gun into a twin cannon, the blue bell increases the player's speed (for up to five speed levels), the green bell will allow the player to create image copies of its ship for additional firepower, and the red bell will provide the player's ship a barrier that allows it to sustain more damage. The green and red bells cannot be combined. Other power-ups can also be retrieved from ground enemies such as an alternate bell that gives the player's sh
|
https://en.wikipedia.org/wiki/PCMark
|
PCMark is a computer benchmark tool developed by UL (formerly Futuremark) to test the performance of a PC at the system and component level. In most cases, the tests in PCMark are designed to represent typical home user workloads. Running PCMark produces a score with higher numbers indicating better performance. Several versions of PCMark have been released. Scores cannot be compared across versions since each includes different tests.
Versions
Controversy
In a 2008 Ars Technica article, a VIA Nano gained significant performance in the benchmark after its CPUID was changed to identify it as an Intel processor. This was because Intel's compilers generate conditional code that uses more advanced instructions only for CPUs that identify themselves as Intel.
See also
Benchmark (computing)
3DMark
Futuremark
|
https://en.wikipedia.org/wiki/Bullwheel
|
A bullwheel or bull wheel is a large wheel on which a rope turns, such as in a chairlift or other ropeway. In this application, the bullwheel that is attached to the prime mover is called the drive bullwheel, and the other is the return bullwheel. One of the bullwheels is usually attached to a cable tensioning system, which is usually either hydraulic or fixed counterweights.
A double-grooved bullwheel may be used by some ropeways, whereby two cables travelling at the same speed, or the same cable twice, loop around the bullwheel.
The bullwheel began use in farm implements with the reaper. The term described the traveling wheel, traction wheel, drive wheel, or harvester wheel. The bullwheel powered all the moving parts of these farm machines, including the reciprocating knives, reel, rake, and self-binder. The bullwheel's outer surface provided traction against the ground and turned when the draft animals or tractor pulled the implement forward. Cyrus McCormick used the bullwheel to power his 1834 reaper, and bullwheels remained in use until the early 1920s, when small gasoline internal combustion engines such as the Cushman motor began to be favored.
|
https://en.wikipedia.org/wiki/Prior%20knowledge%20for%20pattern%20recognition
|
Pattern recognition is a very active field of research intimately bound to machine learning. Also known as classification or statistical classification, pattern recognition aims at building a classifier that can determine the class of an input pattern. This procedure, known as training, corresponds to learning an unknown decision function based only on a set of input-output pairs that form the training data (or training set). Nonetheless, in real world applications such as character recognition, a certain amount of information on the problem is usually known beforehand. The incorporation of this prior knowledge into the training is the key element that will allow an increase of performance in many applications.
Prior Knowledge
Prior knowledge refers to all information about the problem available in addition to the training data. However, in this most general form, determining a model from a finite set of samples without prior knowledge is an ill-posed problem, in the sense that a unique model may not exist. Many classifiers incorporate the general smoothness assumption that a test pattern similar to one of the training samples tends to be assigned to the same class.
The importance of prior knowledge in machine learning is suggested by its role in search and optimization. Loosely, the no free lunch theorem states that all search algorithms have the same average performance over all problems, and thus implies that to gain in performance on a certain application one must use a specialized algorithm that includes some prior knowledge about the problem.
The different types of prior knowledge encountered in pattern recognition can be grouped under two main categories: class-invariance and knowledge on the data.
Class-invariance
A very common type of prior knowledge in pattern recognition is the invariance of the class (or the output of the classifier) to a transformation of the input pattern. This type of knowledge is referred to as transformation-invariance.
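One common, classifier-agnostic way to inject such transformation-invariance into training is to augment the training set with transformed copies of each sample ("virtual examples"), keeping the class label fixed. A minimal sketch follows; the choice of small rotations as the invariant transform, and the use of scipy's rotation routine, are illustrative assumptions rather than anything prescribed by the article.

```python
import numpy as np
from scipy.ndimage import rotate  # illustrative choice of image-rotation routine

def augment_with_rotations(images, labels, angles=(-10, 10)):
    """Expand a training set with rotated copies of each image.

    Every transformed copy keeps the original class label, encoding the
    prior knowledge that class membership is invariant under small rotations."""
    aug_x, aug_y = [], []
    for img, lab in zip(images, labels):
        aug_x.append(img)
        aug_y.append(lab)
        for angle in angles:
            aug_x.append(rotate(img, angle, reshape=False, mode="nearest"))
            aug_y.append(lab)  # same class: the invariance prior
    return np.stack(aug_x), np.array(aug_y)

# Toy usage: 20 random 8x8 "images" with binary labels
x = np.random.rand(20, 8, 8)
y = np.random.randint(0, 2, size=20)
x_aug, y_aug = augment_with_rotations(x, y)
print(x_aug.shape, y_aug.shape)   # three times as many samples, labels unchanged
```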
|
https://en.wikipedia.org/wiki/Submarine%20groundwater%20discharge
|
Submarine groundwater discharge (SGD) is a hydrological process which commonly occurs in coastal areas. It is described as the submarine inflow of fresh and brackish groundwater from land into the sea. Submarine groundwater discharge is controlled by several forcing mechanisms, which create a hydraulic gradient between land and sea. Depending on the regional setting, the discharge occurs either as (1) a focused flow along fractures in karst and rocky areas, (2) a dispersed flow in soft sediments, or (3) a recirculation of seawater within marine sediments. Submarine groundwater discharge plays an important role in coastal biogeochemical and hydrological cycles, such as the formation of offshore plankton blooms and the release of nutrients, trace elements and gases. It affects coastal ecosystems and has been used as a freshwater resource by some local communities for millennia.
Forcing mechanisms
In coastal areas the groundwater and seawater flows are driven by a variety of factors. Both types of water can circulate in marine sediments due to tidal pumping, waves, bottom currents or density-driven transport processes. Meteoric freshwaters can discharge along confined and unconfined aquifers into the sea, or the opposite process, seawater intruding into groundwater-charged aquifers, can take place. The flow of both fresh and sea water is primarily controlled by the hydraulic gradients between land and sea, the differences in density between the two waters, and the permeabilities of the sediments.
According to Drabbe and Badon-Ghijben (1888) and Herzberg (1901), the thickness of a freshwater lens below sea level (z) corresponds with the thickness of the freshwater level above sea level (h) as:
$$z = \frac{\rho_f}{\rho_s - \rho_f}\, h$$
With z being the thickness between the saltwater-freshwater interface and the sea level, h being the thickness between the top of the freshwater lens and the sea level, ρf being the density of freshwater and ρs bein
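As a quick numeric illustration with typical densities (the values below are assumed textbook figures, not given in the excerpt), fresh water at about 1000 kg/m³ and seawater at about 1025 kg/m³ give

```latex
\[ z = \frac{\rho_f}{\rho_s - \rho_f}\, h \approx \frac{1000}{1025 - 1000}\, h = 40\, h \]
```

so each metre of freshwater head above sea level corresponds to roughly forty metres of fresh water below it.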
|
https://en.wikipedia.org/wiki/International%20Association%20of%20Homes%20and%20Services%20for%20the%20Aging
|
The Global Ageing Network (formerly the International Association for Homes and Services for the Aging (IAHSA)) is an international, not-for-profit educational and charitable organization founded in 1994.
Affiliations
The Global Ageing Network is a Non-Governmental Organization (NGO) in Special Consultative Status with the United Nations. It works with other organizations, including:
AARP International
Alzheimer's Disease International
Helpage International
International Coalition of Intergenerational Programs
International Federation on Ageing
International Longevity Center
United Nations Department of Economic and Social Affairs
Education
The Global Ageing Network hosts a biennial international conference.
Past conference sites have included:
Amsterdam, The Netherlands
Barcelona, Spain
Honolulu, Hawaii, United States of America
Vancouver, BC, Canada
Sydney, Australia
Trondheim, Norway
St. Julian's, Malta
London, England
Washington, D.C.
Shanghai, China
Perth, Australia
Membership
The Global Ageing Network membership is open to providers of aging services, governments, universities, individuals and corporations.
Global Ageing Network
Global Ageing Network Members are from the following countries:
Australia, Austria, Canada, People's Republic of China, Czech Republic, Croatia, Denmark, Ecuador, Finland, France, Germany, India, Italy, Japan, South Korea, Hong Kong, Malta, The Netherlands, New Zealand, Norway, Portugal, Russia, Singapore, South Africa, Spain, Sweden, Switzerland, Taiwan, Thailand, United Kingdom, Ukraine and the United States of America.
|
https://en.wikipedia.org/wiki/Insect%20migration
|
Insect migration is the seasonal movement of insects, particularly by species of dragonflies, beetles, butterflies and moths. The distance can vary with species, and in most cases these movements involve large numbers of individuals. In some cases, the individuals that migrate in one direction may not return and the next generation may instead migrate in the opposite direction. This is a significant difference from bird migration.
Definition
All insects move to some extent. The range of movement can vary from within a few centimeters for some sucking insects and wingless aphids to thousands of kilometers in the case of other insects such as locusts, butterflies and dragonflies. The definition of migration is therefore particularly difficult in the context of insects. A behavior-oriented definition proposed is
This definition disqualifies movements made in the search of resources and which are terminated upon finding the resource. Migration involves longer distance movement and these movements are not affected by the availability of the resource items. All cases of long-distance insect migration concern winged insects.
General patterns
Migrating butterflies fly within a boundary layer, with a specific upper limit above the ground. The airspeeds in this region are typically lower than the flight speed of the insect. These 'boundary-layer' migrants include the larger day-flying insects, and their low-altitude flight is obviously easier to observe than that of most high-altitude windborne migrants.
Many migratory species tend to have polymorphic forms: a migratory phase and a resident phase. The migratory phases are marked by their well-developed and long wings. Such polymorphism is well known in aphids and grasshoppers. In the migratory locusts, there are distinct long- and short-winged forms.
The energetic cost of migration has been studied in the context of life-history strategies. It has been suggested that adaptations for migration would be more valuable
|
https://en.wikipedia.org/wiki/Goldman%E2%80%93Hodgkin%E2%80%93Katz%20flux%20equation
|
The Goldman–Hodgkin–Katz flux equation (or GHK flux equation or GHK current density equation) describes the ionic flux across a cell membrane as a function of the transmembrane potential and the concentrations of the ion inside and outside of the cell. Since both the voltage and the concentration gradients influence the movement of ions, this process is a simplified version of electrodiffusion. Electrodiffusion is most accurately defined by the Nernst–Planck equation and the GHK flux equation is a solution to the Nernst–Planck equation with the assumptions listed below.
Origin
The American David E. Goldman of Columbia University, and the English Nobel laureates Alan Lloyd Hodgkin and Bernard Katz derived this equation.
Assumptions
Several assumptions are made in deriving the GHK flux equation (Hille 2001, p. 445) :
The membrane is a homogeneous substance
The electrical field is constant so that the transmembrane potential varies linearly across the membrane
The ions access the membrane instantaneously from the intra- and extracellular solutions
The permeant ions do not interact
The movement of ions is affected by both concentration and voltage differences
Equation
The GHK flux equation for an ion S (Hille 2001, p. 445):

$$\Phi_S = P_S z_S^2 \frac{V_m F^2}{RT}\,\frac{[S]_i - [S]_o \exp\!\left(-z_S V_m F/RT\right)}{1 - \exp\!\left(-z_S V_m F/RT\right)}$$
where
Φ_S is the current density (flux) outward through the membrane carried by ion S, measured in amperes per square meter (A·m⁻²)
P_S is the permeability of the membrane for ion S, measured in m·s⁻¹
z_S is the valence of ion S
V_m is the transmembrane potential in volts
F is the Faraday constant, equal to 96,485 C·mol⁻¹ (equivalently J·V⁻¹·mol⁻¹)
R is the gas constant, equal to 8.314 J·K⁻¹·mol⁻¹
T is the absolute temperature, measured in kelvins (= degrees Celsius + 273.15)
[S]_i is the intracellular concentration of ion S, measured in mol·m⁻³ or mmol·l⁻¹
[S]_o is the extracellular concentration of ion S, measured in mol·m⁻³
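A minimal numeric sketch of the flux equation as reconstructed above; the permeability, the potassium-like concentrations and the temperature are assumed illustrative values, not taken from the article.

```python
import math

F = 96485.0   # Faraday constant, C/mol
R = 8.314     # gas constant, J/(K·mol)

def ghk_flux(P, z, Vm, conc_in, conc_out, T=310.0):
    """GHK current density (A/m^2) carried outward by one ion species.

    P: permeability (m/s), z: valence, Vm: membrane potential (V),
    conc_in/conc_out: concentrations (mol/m^3), T: temperature (K)."""
    u = z * Vm * F / (R * T)                 # dimensionless voltage term
    if abs(u) < 1e-9:                        # Vm -> 0 limit avoids 0/0
        return P * z * F * (conc_in - conc_out)
    return P * z * F * u * (conc_in - conc_out * math.exp(-u)) / (1.0 - math.exp(-u))

# Example: a monovalent cation at -70 mV with assumed concentrations (mol/m^3 = mM)
print(ghk_flux(P=1e-8, z=1, Vm=-0.070, conc_in=140.0, conc_out=5.0))
```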
Implicit definition of reversal potential
The reversal potential is shown to be contained in the GHK flux equation (Flax
|
https://en.wikipedia.org/wiki/Vertical%20line%20test
|
In mathematics, the vertical line test is a visual way to determine if a curve is a graph of a function or not. A function can only have one output, y, for each unique input, x. If a vertical line intersects a curve on an xy-plane more than once then for one value of x the curve has more than one value of y, and so, the curve does not represent a function. If all vertical lines intersect a curve at most once then the curve represents a function.
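For a curve given only as a finite set of sampled points, the test reduces to checking that no x-value is paired with two distinct y-values. A small illustrative check (the sample point sets are made up):

```python
from collections import defaultdict

def passes_vertical_line_test(points):
    """Return True if no x-value is paired with more than one distinct y-value."""
    ys_by_x = defaultdict(set)
    for x, y in points:
        ys_by_x[x].add(y)
    return all(len(ys) == 1 for ys in ys_by_x.values())

parabola = [(x / 10, (x / 10) ** 2) for x in range(-20, 21)]    # y = x^2: a function
circle = [(0.0, 1.0), (0.0, -1.0), (1.0, 0.0), (-1.0, 0.0)]     # x^2 + y^2 = 1: not a function

print(passes_vertical_line_test(parabola))  # True
print(passes_vertical_line_test(circle))    # False: the line x = 0 meets it twice
```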
See also
Horizontal line test
|
https://en.wikipedia.org/wiki/Framework%20for%20integrated%20test
|
Framework for Integrated Test, or "Fit", is an open-source (GNU GPL v2) tool for automated customer tests. It integrates the work of customers, analysts, testers, and developers.
Customers provide examples of how their software should work. Those examples are then connected to the software with programmer-written test fixtures and automatically checked for correctness. The customers' examples are formatted in tables and saved as HTML using ordinary business tools such as Microsoft Excel. When Fit checks the document, it creates a copy and colors the tables green, red, and yellow according to whether the software behaved as expected.
Fit was invented by Ward Cunningham in 2002. He created the initial Java version of Fit. As of June 2005, it has up-to-date versions for Java, C#, Python, Perl, PHP and Smalltalk.
Although Fit is an acronym, the word "Fit" came first, making it a backronym. The name is sometimes italicized but should not be written in all capitals; in other words, "Fit" is appropriate usage, but "FIT" is not.
Fit includes a simple command-line tool for checking Fit documents. There are third-party front-ends available. Of these, FitNesse is the most popular. FitNesse is a complete IDE for Fit that uses a Wiki for its front end. As of June 2005, FitNesse had forked Fit, making it incompatible with newer versions of Fit, but plans were underway to re-merge with Fit.
See also
YatSpec - a Java testing framework that supersedes Fit
Concordion - a Java testing framework similar to Fit
Endly - a language agnostic and declarative end to end testing framework
|
https://en.wikipedia.org/wiki/Random%20chimeragenesis%20on%20transient%20templates
|
Random chimeragenesis on transient templates (RACHITT) is a method to perform molecular mutagenesis at a high recombination rate. For example, RACHITT can be used to generate increased rate and extent of biodesulfurization of diesel by modification of dibenzothiophene mono-oxygenase. DNA shuffling is a similar but less powerful method used in directed evolution experiments.
|
https://en.wikipedia.org/wiki/Estray
|
Estray, in common law, is any domestic animal found wandering at large or lost, particularly if the owner is unknown. In most cases, this implies domesticated animals rather than pets.
Under early English common law, estrays were forfeited to the king or lord of the manor; under modern statutes, provision is made for taking up stray animals and acquiring either title to them or a lien for the expenses incurred in keeping them. A person taking up an estray has a qualified ownership in it, which becomes absolute if the owner fails to claim the animal within the statutory time limit. Whether the animal escaped through the owner's negligence or through the wrongful act of a third person is immaterial. If the owner reclaims the estray, he is liable for reasonable costs of its upkeep. The use of an estray during the period of qualified ownership, other than for its own preservation or for the benefit of the owner, is not authorized. Some statutes limit the right to take up estrays to certain classes of persons, to certain seasons or places, or to animals requiring care.
When public officials, such as a county sheriff, impound stray animals, they may sell them at auction to recover the costs of upkeep, with proceeds, if any, going into the public treasury. In some places, an uncastrated male livestock animal running at large may be neutered at the owner's expense.
In the United States, it is common for there to be a required "Notice of Estray" sworn and filed in a local office. The process usually takes a prescribed time to permit the property owner to collect his property. Otherwise, the finder obtains title to the property.
See also
Abandoned pets
Feral
Maverick (animal)
Waif and stray
Rabies
|
https://en.wikipedia.org/wiki/Cyclol
|
The cyclol hypothesis is the now discredited first structural model of a folded, globular protein, formulated in the 1930s. It was based on the cyclol reaction of peptide bonds proposed by physicist Frederick Frank in 1936, in which two peptide groups are chemically crosslinked. These crosslinks are covalent analogs of the non-covalent hydrogen bonds between peptide groups and have been observed in rare cases, such as the ergopeptides.
Based on this reaction, mathematician Dorothy Wrinch hypothesized in a series of five papers in the late 1930s a structural model of globular proteins. She postulated that, under some conditions, amino acids will spontaneously make the maximum possible number of cyclol crosslinks, resulting in cyclol molecules and cyclol fabrics. She further proposed that globular proteins have a tertiary structure corresponding to Platonic solids and semiregular polyhedra formed of cyclol fabrics with no free edges. In contrast to the cyclol reaction itself, these hypothetical molecules, fabrics and polyhedra have not been observed experimentally. The model has several consequences that render it energetically implausible, such as steric clashes between the protein sidechains. In response to such criticisms J. D. Bernal proposed that hydrophobic interactions are chiefly responsible for protein folding, which was indeed borne out.
Historical context
By the mid-1930s, analytical ultracentrifugation studies by Theodor Svedberg had shown that proteins had a well-defined chemical structure, and were not aggregations of small molecules. The same studies appeared to show that the molecular weight of proteins fell into a few well-defined classes related by integers, such as $M_w = 2^p 3^q$ Da, where p and q are nonnegative integers. However, it was difficult to determine the exact molecular weight and number of amino acids in a protein. Svedberg had also shown that a change in solution conditions could cause a protein to disassemble into small subunits
|
https://en.wikipedia.org/wiki/GQM
|
GQM, the initialism for goal, question, metric, is an established goal-oriented approach to software metrics to improve and measure software quality.
History
GQM has been promoted by Victor Basili of the University of Maryland, College Park and the Software Engineering Laboratory at the NASA Goddard Space Flight Center after supervising a Ph.D. thesis by Dr. David M. Weiss. Dr. Weiss' work was inspired by the work of Albert Endres at IBM Germany.
Method
GQM defines a measurement model on three levels:
1. Conceptual level (Goal) A goal is defined for an object, for a variety of reasons, with respect to various models of quality, from various points of view and relative to a particular environment.
2. Operational level (Question) A set of questions is used to define models of the object of study and then focuses on that object to characterize the assessment or achievement of a specific goal.
3. Quantitative level (Metric) A set of metrics, based on the models, is associated with every question in order to answer it in a measurable way.
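A small, entirely hypothetical example of the three levels, with the goal, questions and metrics invented purely for illustration:

```python
# Hypothetical GQM model (illustrative names, not from the article)
gqm_model = {
    "goal": "Improve the reliability of the payment module, from the developer's viewpoint",
    "questions": {
        "How many defects are found after release?": [
            "post-release defects per KLOC",
            "defect reports per month",
        ],
        "How effective is pre-release testing?": [
            "statement coverage of the test suite (%)",
            "fraction of all defects caught before release",
        ],
    },
}

for question, metrics in gqm_model["questions"].items():
    print(question)
    for metric in metrics:
        print("  metric:", metric)
```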
GQM stepwise
Another interpretation of the procedure is:
Planning
Definition
Data collection
Interpretation
Sub-steps
Sub-steps are needed for each phase. To complete the definition phase, an eleven-step procedure is proposed:
Define measurement goals
Review or produce software process models
Conduct GQM interviews
Define questions and hypotheses
Review questions and hypotheses
Define metrics
Check metrics on consistency and completeness
Produce GQM plan
Produce measurement plan
Produce analysis plan
Review plans
Recent developments
The GQM+Strategies approach was developed by Victor Basili and a group of researchers from the Fraunhofer Society. It is based on the Goal Question Metric paradigm and adds the capability to create measurement programs that ensure alignment between business goals and strategies, software-specific goals, and measurement goals.
Novel applications of GQM towards business data are
|
https://en.wikipedia.org/wiki/Tera-10
|
TERA-10 is a supercomputer built by Bull SA for the French Commissariat à l'Énergie Atomique (Atomic Energy Commission).
TERA-10 was ranked 142nd on the TOP500 list in 2010. By 2015 it had dropped off the bottom of the list. It runs at 52.84 teraFLOPS (52.84 trillion floating-point calculations per second) using nearly 10,000 processor cores (in about 4800 dual-core processors). It runs the Linux operating system, with an SMP kernel specially modified to handle very large symmetric clusters, coordinated by an 8-processor central system.
Its main application is the simulation of nuclear experiments and the maintenance of the French nuclear defence force, using the results of actual nuclear tests (conducted between 1956 and 1995, most of them at the French nuclear test site in the Pacific) and the new results obtained from the LMJ (Laser Mégajoule) built in continental France.
Evolution
Tera-10 marked the end of a second generation of this line. The next generation, Tera-100, reached about 1 petaFLOPS using processors with more internal cores and a new central scheduling system allowing asymmetric operation on a variable number of processors, for easier upgrades, lower maintenance cost, and experiments requiring different computing scales, without having to rebuild the whole cluster.
|
https://en.wikipedia.org/wiki/EF-Tu
|
EF-Tu (elongation factor thermo unstable) is a prokaryotic elongation factor responsible for catalyzing the binding of an aminoacyl-tRNA (aa-tRNA) to the ribosome. It is a G-protein, and facilitates the selection and binding of an aa-tRNA to the A-site of the ribosome. As a reflection of its crucial role in translation, EF-Tu is one of the most abundant and highly conserved proteins in prokaryotes. It is found in eukaryotic mitochondria as TUFM.
As a family of elongation factors, EF-Tu also includes its eukaryotic and archaeal homolog, the alpha subunit of eEF-1 (EF-1A).
Background
Elongation factors are part of the mechanism that synthesizes new proteins through translation in the ribosome. Transfer RNAs (tRNAs) carry the individual amino acids that become integrated into a protein sequence, and have an anticodon for the specific amino acid that they are charged with. Messenger RNA (mRNA) carries the genetic information that encodes the primary structure of a protein, and contains codons that code for each amino acid. The ribosome creates the protein chain by following the mRNA code and integrating the amino acid of an aminoacyl-tRNA (also known as a charged tRNA) to the growing polypeptide chain.
There are three sites on the ribosome for tRNA binding. These are the aminoacyl/acceptor site (abbreviated A), the peptidyl site (abbreviated P), and the exit site (abbreviated E). The P-site holds the tRNA connected to the polypeptide chain being synthesized, and the A-site is the binding site for a charged tRNA with an anticodon complementary to the mRNA codon associated with the site. After binding of a charged tRNA to the A-site, a peptide bond is formed between the growing polypeptide chain on the P-site tRNA and the amino acid of the A-site tRNA, and the entire polypeptide is transferred from the P-site tRNA to the A-site tRNA. Then, in a process catalyzed by the prokaryotic elongation factor EF-G (historically known as translocase), the coordinated tr
|
https://en.wikipedia.org/wiki/Bootstrapping%20%28electronics%29
|
In the field of electronics, a technique where part of the output of a system is used at startup can be described as bootstrapping.
A bootstrap circuit is one where part of the output of an amplifier stage is applied to the input, so as to alter the input impedance of the amplifier. When applied deliberately, the intention is usually to increase rather than decrease the impedance.
In the domain of MOSFET circuits, bootstrapping is commonly used to mean pulling up the operating point of a transistor above the power supply rail. The same term has been used somewhat more generally for dynamically altering the operating point of an operational amplifier (by shifting both its positive and negative supply rail) in order to increase its output voltage swing (relative to the ground). In the sense used in this paragraph, bootstrapping an operational amplifier means "using a signal to drive the reference point of the op-amp's power supplies". A more sophisticated use of this rail bootstrapping technique is to alter the non-linear C/V characteristic of the inputs of a JFET op-amp in order to decrease its distortion.
Input impedance
In analog circuit designs, a bootstrap circuit is an arrangement of components deliberately intended to alter the input impedance of a circuit. Usually it is intended to increase the impedance, by using a small amount of positive feedback, usually over two stages. This was often necessary in the early days of bipolar transistors, which inherently have quite a low input impedance. Because the feedback is positive, such circuits can suffer from poor stability and noise performance compared to ones that don't bootstrap.
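The impedance-multiplying effect can be stated compactly. If a bias resistor R has its lower end driven ("bootstrapped") by a buffered copy of the input signal with gain A slightly below unity, the signal voltage across R is only (1 − A)·v_in, so the source sees

```latex
\[ R_\mathrm{eff} \;=\; \frac{v_\mathrm{in}}{i_R} \;=\; \frac{v_\mathrm{in}}{(1-A)\,v_\mathrm{in}/R} \;=\; \frac{R}{1-A} \]
```

which grows very large as A approaches 1. This is a standard textbook relation offered here as a sketch; the article itself does not give the formula.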
Negative feedback may alternatively be used to bootstrap an input impedance, causing the apparent impedance to be reduced. This is seldom done deliberately, however, and is normally an unwanted result of a particular circuit design. A well-known example of this is the Miller effect, in which an unavoidable feedback capacitance
|
https://en.wikipedia.org/wiki/Bootstrapping%20%28statistics%29
|
Bootstrapping is any test or metric that uses random sampling with replacement (e.g. mimicking the sampling process), and falls under the broader class of resampling methods. Bootstrapping assigns measures of accuracy (bias, variance, confidence intervals, prediction error, etc.) to sample estimates. This technique allows estimation of the sampling distribution of almost any statistic using random sampling methods.
Bootstrapping estimates the properties of an estimand (such as its variance) by measuring those properties when sampling from an approximating distribution. One standard choice for an approximating distribution is the empirical distribution function of the observed data. In the case where a set of observations can be assumed to be from an independent and identically distributed population, this can be implemented by constructing a number of resamples with replacement, of the observed data set (and of equal size to the observed data set).
It may also be used for constructing hypothesis tests. It is often used as an alternative to statistical inference based on the assumption of a parametric model when that assumption is in doubt, or where parametric inference is impossible or requires complicated formulas for the calculation of standard errors.
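A minimal sketch of the resampling-with-replacement idea, here computing a simple percentile bootstrap confidence interval for the mean; the sample data and the 95% level are chosen arbitrarily.

```python
import random
import statistics

def bootstrap_ci(data, stat=statistics.mean, n_resamples=10_000, alpha=0.05):
    """Percentile bootstrap confidence interval for a statistic of `data`."""
    estimates = []
    for _ in range(n_resamples):
        resample = random.choices(data, k=len(data))   # sampling WITH replacement
        estimates.append(stat(resample))
    estimates.sort()
    lo = estimates[int((alpha / 2) * n_resamples)]
    hi = estimates[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

sample = [2.1, 3.4, 2.9, 5.6, 4.2, 3.8, 2.7, 4.9, 3.1, 4.4]
print(bootstrap_ci(sample))   # approximate 95% interval for the population mean
```

The percentile interval shown is the simplest variant; the BCa and ABC procedures mentioned above refine it.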
History
The bootstrap was published by Bradley Efron in "Bootstrap methods: another look at the jackknife" (1979), inspired by earlier work on the jackknife. Improved estimates of the variance were developed later. A Bayesian extension was developed in 1981. The bias-corrected and accelerated (BCa) bootstrap was developed by Efron in 1987, and the ABC procedure in 1992.
Approach
The basic idea of bootstrapping is that inference about a population from sample data (sample → population) can be modeled by resampling the sample data and performing inference about a sample from resampled data (resampled → sample). As the population is unknown, the true error in a sample statistic against its population value is
|
https://en.wikipedia.org/wiki/Separable%20partial%20differential%20equation
|
A separable partial differential equation is one that can be broken into a set of separate equations of lower dimensionality (fewer independent variables) by a method of separation of variables. This generally relies upon the problem having some special form or symmetry. In this way, the partial differential equation (PDE) can be solved by solving a set of simpler PDEs, or even ordinary differential equations (ODEs) if the problem can be broken down into one-dimensional equations.
The most common form of separation of variables is simple separation of variables, in which a solution is obtained by assuming a solution of the form given by a product of functions of each individual coordinate. There is a special form of separation of variables called R-separation of variables, which is accomplished by writing the solution as a particular fixed function of the coordinates multiplied by a product of functions of each individual coordinate. Laplace's equation on $\mathbb{R}^n$ is an example of a partial differential equation which admits solutions through R-separation of variables; in the three-dimensional case this uses 6-sphere coordinates.
(This should not be confused with the case of a separable ODE, which refers to a somewhat different class of problems that can be broken into a pair of integrals; see separation of variables.)
Example
For example, consider the time-independent Schrödinger equation
$$\left[-\nabla^2 + V(\mathbf{x})\right]\psi(\mathbf{x}) = \lambda\,\psi(\mathbf{x})$$
for the function $\psi(\mathbf{x})$ (in dimensionless units, for simplicity). (Equivalently, consider the inhomogeneous Helmholtz equation.) If the function $V(\mathbf{x})$ in three dimensions is of the form
$$V(x_1,x_2,x_3) = V_1(x_1) + V_2(x_2) + V_3(x_3),$$
then it turns out that the problem can be separated into three one-dimensional ODEs for functions $\psi_1(x_1)$, $\psi_2(x_2)$, and $\psi_3(x_3)$, and the final solution can be written as $\psi(\mathbf{x}) = \psi_1(x_1)\,\psi_2(x_2)\,\psi_3(x_3)$. (More generally, the separable cases of the Schrödinger equation were enumerated by Eisenhart in 1948.)
|
https://en.wikipedia.org/wiki/Dilution%20cloning
|
Dilution cloning or cloning by limiting dilution describes a procedure to obtain a monoclonal cell population starting from a polyclonal mass of cells.
This is achieved by setting up a series of increasing dilutions of the parent (polyclonal) cell culture. A suspension of the parent cells is made. Appropriate dilutions are then made, depending on cell number in the starting population, as well as the viability and characteristics of the cells being cloned.
After the final dilutions are produced, aliquots of the suspension are plated or placed in wells and incubated. If all works correctly, a monoclonal cell colony will be produced. Applications for the procedure include cloning of parasites, T cells, transgenic cells, and macrophages.
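In practice, limiting dilution is often planned around Poisson statistics: if the final suspension is dispensed at an average of λ cells per well, the proportion of wells expected to be monoclonal can be estimated as in the brief Python sketch below (an idealised calculation that ignores plating efficiency and viability; the example values of λ are arbitrary).

import math

def poisson(k, lam):
    # Probability that a well receives exactly k cells when the mean is lam cells/well.
    return math.exp(-lam) * lam ** k / math.factorial(k)

for lam in (1.0, 0.5, 0.3):
    p_empty = poisson(0, lam)
    p_single = poisson(1, lam)
    monoclonal_given_growth = p_single / (1 - p_empty)   # among occupied wells
    print(f"{lam:.1f} cells/well: {p_single:.2f} of wells get exactly one cell; "
          f"{monoclonal_given_growth:.2f} of occupied wells are monoclonal")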
|
https://en.wikipedia.org/wiki/Health%20Products%20and%20Food%20Branch
|
The Health Products and Food Branch (HPFB) of Health Canada manages the health-related risks and benefits of health products and food by minimizing risk factors while maximizing the safety provided by the regulatory system and providing information to Canadians so they can make healthy, informed decisions about their health.
HPFB has ten operational Directorates with direct regulatory responsibilities:
Biologics and Genetic Therapies Directorate
Food Directorate
Marketed Health Products Directorate (with responsibility for post-market surveillance)
Medical Devices Directorate
Natural Health Products Directorate
Office of Nutrition Policy and Promotion
Pharmaceutical Drugs Directorate
Policy, Planning and International Affairs Directorate
Resource Management and Operations Directorate
Veterinary Drugs Directorate
Extraordinary Use New Drugs
Extraordinary Use New Drugs (EUNDs) is a regulatory programme under which, in times of emergency, drugs can be granted regulatory approval under the Food and Drugs Act and its regulations. An EUND approved through this pathway can only be sold to federal, provincial, territorial and municipal governments. The text of the EUNDs regulations is available.
On 25 March 2011, following the pH1N1 pandemic, amendments were made to the Food and Drug Regulations (FDR) to include a specific regulatory pathway for EUNDs. Typically, clinical trials in human subjects are conducted and the results are provided as part of the clinical information package of a New Drug Submission (NDS) to Health Canada, the federal authority that reviews the safety and efficacy of human drugs.
Health Canada recognizes that there are circumstances in which sponsors cannot reasonably provide substantial evidence demonstrating the safety and efficacy of a therapeutic product for NDS as there are logistical or ethical challenges in conducting the appropriate human clinical trials. The EUND pathway was developed to allow a mechanism for authorization of th
|
https://en.wikipedia.org/wiki/Gaelic%20football%2C%20hurling%20and%20camogie%20positions
|
The following are the positions in the Gaelic sports of Gaelic football, hurling and camogie.
Each team consists of one goalkeeper (who wears a different colour jersey), six backs, two midfielders, and six forwards: 15 players in all. Some under-age games are played 13-a-side (in which case the full-back and full-forward positions are removed) or 11-a-side (in which case the full-back, centre back, centre forward and full-forward positions are removed).
The positions are listed below, with the jersey number usually worn by players in that position given.
Summary table
Goalkeeper
The role of a goalkeeper, who wears the number 1 jersey in Gaelic games, is similar to other codes: to prevent the ball from entering the goal. The goalkeeper in Gaelic football and hurling also usually has the role of kicking or pucking the ball out to the outfield players. A good goalkeeper most often has great agility and bravery as well as strength and height. In Gaelic football a keeper's shot stopping ability is of great importance alongside blocking. There is no limit to where on the field the goalkeeper can travel, although once they are outside the penalty area, they are subject to the same rules as all other players. A goalkeeper in men's football may touch the ball on the ground within his own small parallelogram, and is the only player permitted to do so. It is not permitted to physically challenge a goalkeeper while inside his own small parallelogram, but players may harass him into playing a bad pass, or block an attempted pass. The substitute goalkeeper usually wears the number 16 jersey and the third choice goalkeeper usually wears the number 31.
Full-backs
Right and left corner-back
The role of the right and left corner-back who, respectively, wear the number 2 and number 4 jerseys, is to defend against opposing attackers – in particular the left and right corner-forwards. They will play most around the 20-metre line. The positions require the players having decent spee
|
https://en.wikipedia.org/wiki/Love%20lock
|
A love lock or love padlock is a padlock that couples lock to a bridge, fence, gate, monument, or similar public fixture to symbolize their love. Typically the sweethearts' names or initials, and perhaps the date, are inscribed on the padlock, and its key is thrown away (often into a nearby river) to symbolize unbreakable love.
Since the 2000s, love locks have proliferated at an increasing number of locations worldwide. They are treated by some municipal authorities as litter or vandalism, and there is some cost to their removal. However, there are other authorities who embrace them, and who use them as fundraising projects or tourist attractions.
History
In 2014, the New York Times reported that the history of love padlocks dates back at least 100 years to a melancholic Serbian tale of World War I, with an attribution for the bridge Most Ljubavi (lit. the Bridge of Love) in the spa town of Vrnjačka Banja. A local schoolmistress named Nada fell in love with a Serbian officer named Relja. After they committed to each other, Relja went to war in Greece, where he fell in love with a local woman from Corfu. As a consequence, Relja and Nada broke off their engagement. Nada never recovered from that devastating blow, and after some time she died due to heartbreak from her unfortunate love.
As young women from Vrnjačka Banja wanted to protect their own loves, they started writing down their names, with the names of their loved ones, on padlocks and affixing them to the railings of the bridge where Nada and Relja used to meet.
In the rest of Europe, love padlocks started appearing in the early 2000s as a ritual. The reasons love padlocks started to appear vary between locations and in many instances are unclear. However, in Rome, the ritual of affixing love padlocks to the bridge Ponte Milvio can be attributed to the 2006 book I Want You by Italian author Federico Moccia, who made a film adaptation in 2007.
Manufacturers
The German lock company ABUS manufactures a
|
https://en.wikipedia.org/wiki/Hyper-IgM%20syndrome%20type%201
|
Hyper IgM Syndrome Type 1 (HIGM-1) is the X-linked variant of the hyper IgM syndrome.
The affected individuals are virtually always male, because males only have one X chromosome, received from their mothers. Their mothers are not symptomatic, even though they are carriers of the allele, because the trait is recessive. Male offspring of these women have a 50% chance of inheriting their mother's mutant allele.
Signs and symptoms
A patient presenting with hyper IgM syndrome may be affected by simple infectious organisms in exposed regions like the respiratory system. Vaccination against pathogenic organisms may not help these individuals, because vaccinating them does not properly stimulate production of antibodies. Symptoms can include:
Fever (recurrent infections)
Low counts of IgA, IgG and IgE antibodies
CD40L not reactive in T cells
Recurrent sinopulmonary and GI infections with pyogenic bacteria and opportunistic organisms, and cutaneous manifestations including pyodermas and extensive warts.
Pathogenesis
This variant of the hyper IgM syndrome is caused by mutation of the CD40LG gene. The genetic locus for this gene is Xq26. This gene codes for the CD40 ligand, which is expressed on T cells. When the CD40 ligand binds CD40 on B cells, then the B cell switches from producing IgM to producing IgA or IgG.
In these patients a biopsy of a lymph node may show poor development of structural and germinal centers because of the lack of activation of B cells by the T cells in them.
Diagnosis
Treatment
Patients presenting with this disease undergo antibiotic treatment and gammaglobulin transfusions. Antibiotics are used to fight off the pathogenic organisms and the gammaglobulin helps provide a normal balance of antibodies to fight the infection. Bone marrow transplantation may be an option in some cases.
OMIM: 308230
|
https://en.wikipedia.org/wiki/Hyper-IgM%20syndrome%20type%202
|
Hyper IgM Syndrome Type 2 is a rare disease. Unlike patients with other hyper-IgM syndromes, Type 2 patients identified thus far have not presented with a history of opportunistic infections, which would otherwise be expected in an immunodeficiency syndrome. The responsible genetic lesion is in the AICDA gene found at 12p13.
Hyper IgM syndromes
Hyper IgM syndromes are a group of primary immune deficiency disorders characterized by defective CD40 signaling via B cells, affecting class switch recombination (CSR) and somatic hypermutation. Immunoglobulin (Ig) class switch recombination deficiencies are characterized by elevated serum IgM levels and a considerable deficiency in Immunoglobulins G (IgG), A (IgA) and E (IgE). As a consequence, people with HIGM have an increased susceptibility to infections.
Signs and symptoms
Hyper IgM syndrome can be associated with the following conditions:
Infection/Pneumocystis pneumonia (PCP), which is common in infants with hyper IgM syndrome, is a serious illness. PCP is one of the most frequent and severe opportunistic infections in people with weakened immune systems.
Hepatitis (Hepatitis C)
Chronic diarrhea
Hypothyroidism
Neutropenia
Arthritis
Encephalopathy (degenerative)
Cause
Different genetic defects cause HIgM syndrome; the vast majority are inherited as an X-linked recessive genetic trait, and most people with the condition are male.
IgM is the form of antibody that all B cells produce initially before they undergo class switching. Healthy B cells efficiently switch to other types of antibodies as needed to attack invading bacteria, viruses, and other pathogens. In people with hyper IgM syndromes, the B cells keep making IgM antibodies because they cannot switch to a different antibody. This results in an overproduction of IgM antibodies and an underproduction of IgA, IgG, and IgE.
Pathophysiology
CD40 is a costimulatory receptor on B cells that, when bound to CD40 ligand (CD40L), sends a signal to the B-cell receptor. When there is a defect in
|
https://en.wikipedia.org/wiki/Hyper-IgM%20syndrome%20type%205
|
The fifth type of hyper-IgM syndrome has been characterized in three patients from France and Japan. The symptoms are similar to hyper IgM syndrome type 2, but the AICDA gene is intact.
These three patients instead had mutations in the catalytic domain of uracil-DNA glycosylase, an enzyme that removes uracil from DNA. In hyper-IgM syndromes, patients are deficient in immunoglobulins of the IgG, IgE and IgA types, since the antibody-producing B cells cannot carry out the gene recombination steps necessary to class switch from immunoglobulin M (IgM) to the other three immunoglobulin types.
Hyper IgM syndromes
Hyper IgM syndromes are a group of primary immune deficiency disorders characterized by defective CD40 signaling via B cells, affecting class switch recombination (CSR) and somatic hypermutation. Immunoglobulin (Ig) class switch recombination deficiencies are characterized by elevated serum IgM levels and a considerable deficiency in Immunoglobulins G (IgG), A (IgA) and E (IgE). As a consequence, people with HIGM have an increased susceptibility to infections.
Signs and symptoms
Hyper IgM syndrome can be associated with the following conditions:
Infection/Pneumocystis pneumonia (PCP), which is common in infants with hyper IgM syndrome, is a serious illness. PCP is one of the most frequent and severe opportunistic infections in people with weakened immune systems.
Hepatitis (Hepatitis C)
Chronic diarrhea
Hypothyroidism
Neutropenia
Arthritis
Encephalopathy (degenerative)
Cause
Different genetic defects cause HIgM syndrome; the vast majority are inherited as an X-linked recessive genetic trait, and most people with the condition are male.
IgM is the form of antibody that all B cells produce initially before they undergo class switching. Healthy B cells efficiently switch to other types of antibodies as needed to attack invading bacteria, viruses, and other pathogens. In people with hyper IgM syndromes, the B cells keep making IgM antibodies because they cannot switch to a differe
|
https://en.wikipedia.org/wiki/Zoomracks
|
Zoomracks was a shareware database management system for the Atari ST and IBM PC that used a card-file metaphor for displaying and manipulating data. Its main claim to fame was an early and somewhat contentious software patent lawsuit filed against Apple Computer's HyperCard and similar products.
Zoomracks, introduced in 1985, represented data in a form that was visually represented by filing cards, known as "QUICKCARD"s. Cards could be designed within the program as "templates", using general-purpose data fields known as "FIELDSCROLL"s, which could hold up to 250 lines of 80 characters. Cards were collected into a "RACK", which was essentially a single database file. The display was character-based and did not make use of the Atari's GEM interface even though this was the primary platform for the product. Unlike similar database programs of the era, Zoomracks did not support different types of data internally; everything was represented as text.
When a rack was opened the cards were displayed as if they were in a sort of linear rolodex, and the user could "zoom in", non-graphically, on any particular area to see more details of the cards in that area, and then zoom in again to see all of the fields on a particular card. The racks could display their cards sorted in a variety of ways, making navigation much easier than with a real-world rolodex, which is sorted only by a single pre-defined index (normally last name). Data could be moved from database to database simply by cutting a card out of one stack and pasting it into another. Up to nine racks could be opened at one time.
Zoomracks II, introduced in 1987, added support for report generation and some basic mathematics. In order to extract a certain subset of the information in a rack and lay it out for printing, the original Zoomracks required the user to cut and paste the desired cards into a new rack. In Zoomracks II a report (possibly only one per rack?) could be defined, laid out as needed complete with h
|
https://en.wikipedia.org/wiki/Altair%20Engineering
|
Altair Engineering Inc. is an American multinational information technology company headquartered in Troy, Michigan. It provides software and cloud solutions for simulation, IoT, high performance computing (HPC), data analytics, and artificial intelligence (AI). Altair Engineering is the creator of the HyperWorks CAE software product, among numerous other software packages and suites. The company was founded in 1985 and went public in 2017. It is traded on the Nasdaq stock exchange under the stock ticker symbol ALTR.
History
Founding
Altair Engineering was founded in 1985 by James R. Scapa, George Christ, and Mark Kistner in Troy, Michigan. Since the company's outset, Scapa has served as its CEO (and now chairman). Initially, Altair started as an engineering consulting firm, but branched out into product development and computer-aided engineering (CAE) software. In the 1990s, it became known for its software products like HyperWorks, OptiStruct, and HyperMesh, which were often used for product development by the automotive industry. Some of Altair's early clients included the Ford Motor Company, General Motors, and Chrysler. Its software also aided in the development of the Young America and AmericaOne racing yachts, the former of which was used to compete in the 1995 America's Cup.
Its software also found uses in other sectors, including aerospace (NASA), aviation (Airbus), consumer electronics (Nokia), and toy manufacturing (Mattel), among others. In 2002, Altair software aided in the design of the Airbus A380 by weight optimizing the aircraft wing ribs. Also in 2002, Altair opened offices in Seongnam, South Korea and Shanghai, China, adding those locales to its international footprint alongside India where it had begun investment in 1992.
Early 2000s and 2017 IPO
In addition to its software production, Altair hires out engineering consultants to its corporate clientele. Its consultancy services accounted for the majority of the company's revenue until 2004
|
https://en.wikipedia.org/wiki/Wheat%20and%20chessboard%20problem
|
The wheat and chessboard problem (sometimes expressed in terms of rice grains) is a mathematical problem expressed in textual form as: if a chessboard were to have wheat placed upon each square such that one grain were placed on the first square, two on the second, four on the third, and so on (doubling the number of grains on each subsequent square), how many grains of wheat would be on the chessboard at the finish?
The problem may be solved using simple addition. With 64 squares on a chessboard, if the number of grains doubles on successive squares, then the sum of grains on all 64 squares is 1 + 2 + 4 + 8 + ... + 2⁶³. The total number of grains can be shown to be 2⁶⁴ − 1, or 18,446,744,073,709,551,615 (eighteen quintillion, four hundred forty-six quadrillion, seven hundred forty-four trillion, seventy-three billion, seven hundred nine million, five hundred fifty-one thousand, six hundred and fifteen). This corresponds to over 1.4 trillion metric tons of wheat, which is over 2,000 times the annual world production of wheat.
This exercise can be used to demonstrate how quickly exponential sequences grow, as well as to introduce exponents, zero power, capital-sigma notation, and geometric series. Updated for modern times using pennies and a hypothetical question such as "Would you rather have a million dollars or a penny on day one, doubled every day until day 30?", the formula has been used to explain compound interest. (Doubling a penny every day for 30 days yields 2³⁰ − 1 = 1,073,741,823 pennies, over one billion pennies, or more than 10 million dollars.)
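The totals above are easy to verify by summing the geometric series directly; the following short Python check is an illustrative sketch.

grains = sum(2**k for k in range(64))    # 1 + 2 + 4 + ... + 2**63
assert grains == 2**64 - 1
print(grains)                            # 18446744073709551615

pennies = sum(2**k for k in range(30))   # the 30-day penny-doubling variant
print(pennies, pennies / 100)            # 1073741823 pennies, roughly 10.7 million dollars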
Origins
The problem appears in different stories about the invention of chess. One of them includes the geometric progression problem. The story is first known to have been recorded in 1256 by Ibn Khallikan. Another version has the inventor of chess (in some tellings Sessa, an ancient Indian Minister) request his ruler give him wheat according to the wheat and chessboard problem. The ruler laughs it off as a meager prize for a brilliant invention, only to have court treasurers report that the unexpectedly huge number of wheat grains would outstrip the ruler's resources. Versions differ as to whether the inventor becomes a high-ranking advisor or is executed.
Macdonnell also investigates the earlier development of the theme.
Solutions
|
https://en.wikipedia.org/wiki/Chronic%20atrophic%20rhinitis
|
Chronic atrophic rhinitis, or simply atrophic rhinitis, is a chronic inflammation of the nose characterised by atrophy of the nasal mucosa, including the glands, turbinate bones and the nerve elements supplying the nose. Chronic atrophic rhinitis may be primary or secondary. Special forms of chronic atrophic rhinitis are rhinitis sicca anterior and ozaena. It can also be described as the empty nose syndrome.
Signs and symptoms
It is most commonly seen in females.
It is reported among patients from lower socioeconomic groups.
The nasal cavities become roomy and are filled with foul smelling crusts which are black or dark green and dry, making expiration painful and difficult.
Microorganisms are known to multiply and produce a foul smell from the nose, though the patients may not be aware of this, because the nerve endings (responsible for the perception of smell) have become atrophied. This is called merciful anosmia.
Patients usually complain of nasal obstruction despite the roomy nasal cavity, which can be caused either by the obstruction produced by the discharge in the nose, or as a result of sensory loss due to atrophy of nerves in the nose, so the patient is unaware of the air flow. In the case of the second cause, the sensation of obstruction is subjective.
Bleeding from the nose, also called epistaxis, may occur when the dried discharge (crusts) are removed.
Septal perforation and dermatitis of nasal vestibule can occur. The nose may show a saddle-nose deformity.
Atrophic rhinitis is also associated with similar atrophic changes in the pharynx or larynx, producing symptoms pertaining to these structures. Hearing impairment can occur due to Eustachian tube blockage causing middle ear effusion.
Etiology
Causes can be remembered by the mnemonic HERNIA:
Hereditary factors: the disease runs in families
Endocrine imbalance: the disease tends to start at puberty and mostly involves females
Racial factors: white people are more susceptible than natives of
|
https://en.wikipedia.org/wiki/Roadrunner%20%28supercomputer%29
|
Roadrunner was a supercomputer built by IBM for the Los Alamos National Laboratory in New Mexico, USA. The US$100-million Roadrunner was designed for a peak performance of 1.7 petaflops. It achieved 1.026 petaflops on May 25, 2008, to become the world's first TOP500 LINPACK sustained 1.0 petaflops system.
In November 2008, it reached a top performance of 1.456 petaFLOPS, retaining its top spot in the TOP500 list. It was also the fourth-most energy-efficient supercomputer in the world on the Supermicro Green500 list, with an operational rate of 444.94 megaflops per watt of power used. The hybrid Roadrunner design was then reused for several other energy efficient supercomputers. Roadrunner was decommissioned by Los Alamos on March 31, 2013. In its place, Los Alamos commissioned a supercomputer called Cielo, which was installed in 2010.
Overview
IBM built the computer for the U.S. Department of Energy's (DOE) National Nuclear Security Administration (NNSA). It was a hybrid design with 12,960 IBM PowerXCell 8i and 6,480 AMD Opteron dual-core processors in specially designed blade servers connected by InfiniBand. The Roadrunner used Red Hat Enterprise Linux along with Fedora as its operating systems, and was managed with xCAT distributed computing software. It also used the Open MPI Message Passing Interface implementation.
Roadrunner occupied approximately 296 server racks and became operational in 2008. It was decommissioned March 31, 2013. The DOE used the computer for simulating how nuclear materials age in order to predict whether the USA's aging arsenal of nuclear weapons is both safe and reliable. Other uses for the Roadrunner included the science, financial, automotive, and aerospace industries.
Hybrid design
Roadrunner differed from other contemporary supercomputers because it continued the hybrid approach to supercomputer design introduced by Seymour Cray in 1964 with the Control Data Corporation CDC 6600 and continued with the order of
|
https://en.wikipedia.org/wiki/Deep%20hypothermic%20circulatory%20arrest
|
Deep hypothermic circulatory arrest (DHCA) is a surgical technique in which the temperature of the body falls significantly (between 20 °C (68 °F) and 25 °C (77 °F)) and blood circulation is stopped for up to one hour. It is used when blood circulation to the brain must be stopped because of delicate surgery within the brain, or because of surgery on large blood vessels that lead to or from the brain. DHCA is used to provide a better visual field during surgery due to the cessation of blood flow. DHCA is a form of carefully managed clinical death in which heartbeat and all brain activity cease.
When blood circulation stops at normal body temperature (37 °C), permanent damage occurs in only a few minutes. More damage occurs after circulation is restored. Reducing body temperature extends the time interval that such stoppage can be survived. At a brain temperature of 14 °C, blood circulation can be safely stopped for 30 to 40 minutes. There is an increased incidence of brain injury at times longer than 40 minutes, but sometimes circulatory arrest for up to 60 minutes is used if life-saving surgery requires it. Infants tolerate longer periods of DHCA than adults.
Applications of DHCA include repairs of the aortic arch, repairs to head and neck great vessels, repair of large cerebral aneurysms, repair of cerebral arteriovenous malformations, pulmonary thromboendarterectomy, and resection of tumors that have invaded the vena cava.
History
The use of hypothermia for medical purposes dates back to Hippocrates, who advocated packing snow and ice into wounds to reduce hemorrhage. The neuroprotective effect of hypothermia was also observed historically in abandoned infants who, despite prolonged exposure to cold, remained viable.
In the 1940s and 1950s, Canadian surgeon Wilfred Bigelow demonstrated in animal models that the length of time the brain could survive stopped blood circulation could be extended from 3 minutes to 10 minutes by cooling to 30 °C before
|
https://en.wikipedia.org/wiki/Acoustic%20tag
|
Acoustic tags are small sound-emitting devices that allow the detection and/or remote tracking of organisms in aquatic ecosystems. Acoustic tags are commonly used to monitor the behavior of fish. Studies can be conducted in lakes, rivers, tributaries, estuaries or at sea. Acoustic tag technology allows researchers to obtain locational data of tagged fish: depending on tag and receiver array configurations, researchers can receive simple presence/absence data, 2D positional data, or even 3D fish tracks in real-time with sub-meter resolutions.
Acoustic tags allow researchers to:
Conduct Survival Studies
Monitor Migration/Passage/Trajectory
Track Behavior in Two or Three Dimensions (2D or 3D)
Measure Bypass Effectiveness at Dams and other Passages
Observe Predator/Prey Dynamics
Surveil movement and activity
Sampling
Acoustic Tags transmit a signal made up of acoustic pulses or "pings" that sends location information about the tagged organism to the hydrophone receiver. By tying the received acoustic signature to the known type of programmed signal code, the specific tagged individual is identified. The transmitted signal can propagate up to 1 km (in freshwater). Receivers can be actively held by a researcher ("Active Tracking") or affixed to specific locations ("Passive Tracking"). Arrays of receivers can allow the triangulation of tagged individuals over many kilometers. Acoustic tags can have very long battery life - some tags last up to four years.
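As a rough illustration of how an array of fixed receivers can position a tag from signal arrival times, the sketch below fits a 2D tag position and an unknown emission time by nonlinear least squares; the receiver layout, sound speed, and noise level are invented for the example, and operational systems use considerably more sophisticated processing.

import numpy as np
from scipy.optimize import least_squares

SOUND_SPEED = 1480.0                                   # approximate speed of sound in fresh water, m/s
receivers = np.array([[0.0, 0.0], [500.0, 0.0],
                      [0.0, 500.0], [500.0, 500.0]])   # hydrophone positions (m)

# Synthetic "measured" arrival times for a tag at (180, 320) m emitting at t0 = 2.0 s.
true_position, true_t0 = np.array([180.0, 320.0]), 2.0
arrivals = true_t0 + np.linalg.norm(receivers - true_position, axis=1) / SOUND_SPEED
arrivals += np.random.default_rng(1).normal(0.0, 1e-4, size=len(receivers))   # timing noise

def residuals(params):
    x, y, t0 = params
    predicted = t0 + np.linalg.norm(receivers - np.array([x, y]), axis=1) / SOUND_SPEED
    return predicted - arrivals

fit = least_squares(residuals, x0=[250.0, 250.0, 0.0])
print("estimated tag position (m):", fit.x[:2])        # close to (180, 320)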
Tags
Acoustic Tags are produced in many different shapes and sizes depending on the type of species being studied, or the type of environment in which the study is conducted. Sound parameters such as frequency and modulation method are chosen for optimal detectability and signal level. For oceanic environments, frequencies less than 100 kHz are often used, while frequencies of several hundred kilohertz are more common for studies in rivers and lakes.
A typical Acoustic Tag consists of a
|
https://en.wikipedia.org/wiki/Induced%20subgraph%20isomorphism%20problem
|
In complexity theory and graph theory, induced subgraph isomorphism is an NP-complete decision problem that involves finding a given graph as an induced subgraph of a larger graph.
Problem statement
Formally, the problem takes as input two graphs G1=(V1, E1) and G2=(V2, E2), where the number of vertices in V1 can be assumed to be less than or equal to the number of vertices in V2. G1 is isomorphic to an induced subgraph of G2 if there is an injective function f which maps the vertices of G1 to vertices of G2 such that for all pairs of vertices x, y in V1, edge (x, y) is in E1 if and only if the edge (f(x), f(y)) is in E2. The answer to the decision problem is yes if this function f exists, and no otherwise.
This is different from the subgraph isomorphism problem in that the absence of an edge in G1 implies that the corresponding edge in G2 must also be absent. In subgraph isomorphism, these "extra" edges in G2 may be present.
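The definition translates directly into a brute-force search over injective vertex mappings; the following Python sketch (exponential time, as expected for an NP-complete problem, with an illustrative graph encoding) checks whether G1 is isomorphic to an induced subgraph of G2.

from itertools import permutations

def induced_subgraph_isomorphic(v1, e1, v2, e2):
    # G1 = (v1, e1) and G2 = (v2, e2), given as vertex lists and collections of undirected edges.
    e1 = {frozenset(e) for e in e1}
    e2 = {frozenset(e) for e in e2}
    for image in permutations(v2, len(v1)):            # every injective map f: V1 -> V2
        f = dict(zip(v1, image))
        # Induced condition: (x, y) is an edge of G1 if and only if (f(x), f(y)) is an edge of G2.
        if all((frozenset((x, y)) in e1) == (frozenset((f[x], f[y])) in e2)
               for i, x in enumerate(v1) for y in v1[i + 1:]):
            return True
    return False

k4 = (list("abcd"), [("a", "b"), ("a", "c"), ("a", "d"), ("b", "c"), ("b", "d"), ("c", "d")])
triangle = ([0, 1, 2], [(0, 1), (1, 2), (0, 2)])
path3 = ([0, 1, 2], [(0, 1), (1, 2)])
print(induced_subgraph_isomorphic(*triangle, *k4))   # True: K4 contains an induced triangle
print(induced_subgraph_isomorphic(*path3, *k4))      # False: every 3 vertices of K4 induce a triangle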
Computational complexity
The complexity of induced subgraph isomorphism separates outerplanar graphs from their generalization series–parallel graphs: it may be solved in polynomial time for 2-connected outerplanar graphs, but is NP-complete for 2-connected series–parallel graphs.
Special cases
The special case of finding a long path as an induced subgraph of a hypercube has been particularly well-studied, and is called the snake-in-the-box problem. The maximum independent set problem is also an induced subgraph isomorphism problem in which one seeks to find a large independent set as an induced subgraph of a larger graph, and the maximum clique problem is an induced subgraph isomorphism problem in which one seeks to find a large clique graph as an induced subgraph of a larger graph.
Differences with the subgraph isomorphism problem
Although the induced subgraph isomorphism problem seems only slightly different from the subgraph isomorphism problem, the "induced" restriction introduces changes large enough that we can witness differences
|
https://en.wikipedia.org/wiki/Folding%20funnel
|
The folding funnel hypothesis is a specific version of the energy landscape theory of protein folding, which assumes that a protein's native state corresponds to its free energy minimum under the solution conditions usually encountered in cells. Although energy landscapes may be "rough", with many non-native local minima in which partially folded proteins can become trapped, the folding funnel hypothesis assumes that the native state is a deep free energy minimum with steep walls, corresponding to a single well-defined tertiary structure. The term was introduced by Ken A. Dill in a 1987 article discussing the stabilities of globular proteins.
The folding funnel hypothesis is closely related to the hydrophobic collapse hypothesis, under which the driving force for protein folding is the stabilization associated with the sequestration of hydrophobic amino acid side chains in the interior of the folded protein. This allows the water solvent to maximize its entropy, lowering the total free energy. On the side of the protein, free energy is further lowered by favorable energetic contacts: isolation of electrostatically charged side chains on the solvent-accessible protein surface and neutralization of salt bridges within the protein's core. The molten globule state predicted by the folding funnel theory as an ensemble of folding intermediates thus corresponds to a protein in which hydrophobic collapse has occurred but many native contacts, or close residue-residue interactions represented in the native state, have yet to form.
In the canonical depiction of the folding funnel, the depth of the well represents the energetic stabilization of the native state versus the denatured state, and the width of the well represents the conformational entropy of the system. The surface outside the well is shown as relatively flat to represent the heterogeneity of the random coil state. The theory's name derives from an analogy between the shape of the well and a physical funnel, in
|
https://en.wikipedia.org/wiki/Native%20contact
|
In protein folding, a native contact is a contact between the side chains of two amino acids that are not neighboring in the amino acid sequence (i.e., they are more than four residues apart in the primary sequence in order to remove trivial i to i+4 contacts along alpha helices) but are spatially close in the protein's native state tertiary structure. The fraction of native contacts reproduced in a particular structure is often used as a reaction coordinate for measuring the deviation from the native state of structures produced during molecular dynamics simulations or in benchmarks of protein structure prediction methods.
The contact order is a measure of the locality of a protein's native contacts; that is, the sequence distance between amino acids that form contacts. Proteins with low contact order are thought to fold faster and some may be candidates for downhill folding.
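To illustrate how the fraction of native contacts is typically computed in practice, here is a minimal numpy sketch; the 8 Å cutoff, the more-than-four-residue separation filter, and the use of a single representative atom per residue are illustrative assumptions rather than a fixed convention.

import numpy as np

def pairwise_distances(coords):
    # coords: (N, 3) array with one representative atom position per residue.
    return np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)

def native_contacts(native_coords, cutoff=8.0, min_separation=4):
    # Residue pairs (i, j), more than min_separation apart in sequence, in contact in the native structure.
    d = pairwise_distances(native_coords)
    n = len(native_coords)
    return {(i, j) for i in range(n) for j in range(i + min_separation + 1, n) if d[i, j] < cutoff}

def fraction_native(coords, native_set, cutoff=8.0):
    # Fraction of native contacts that are also formed in a trial conformation.
    if not native_set:
        return 0.0
    d = pairwise_distances(coords)
    return sum(d[i, j] < cutoff for (i, j) in native_set) / len(native_set)

# Usage: q = fraction_native(frame_coords, native_contacts(native_coords))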
Protein structure
|
https://en.wikipedia.org/wiki/Akodon%20azarae
|
Akodon azarae, also known as Azara's akodont or Azara's grass mouse, is a rodent species from South America. It is found from southernmost Brazil through Paraguay and Uruguay into eastern Argentina. It is named after Spanish naturalist Félix de Azara.
|
https://en.wikipedia.org/wiki/Passenger%20leukocyte
|
In tissue and organ transplantation, the passenger leukocyte theory is the proposition that leucocytes within a transplanted allograft sensitize the recipient's alloreactive T-lymphocytes, causing transplant rejection.
The concept was first proposed by George Davis Snell and the term coined in 1968 when Elkins and Guttmann showed that leukocytes present in a donor graft initiate an immune response in the recipient of a transplant.
See also
History of immunology
|
https://en.wikipedia.org/wiki/Math%20Patrol
|
Math Patrol was a children's educational television show produced by TVOntario from 1976 to 1978 and aired by the public broadcaster in the late 1970s and the early 1980s.
The series starred John Kozak as "Sydney" – a "math detective" who repeatedly went undercover as a kangaroo ("the only disguise Math Patrol had that would fit him"). Other cast members included Carl Banas, Jessica Booker, Luba Goy and Nikki Tilroe.
The series was produced and directed by Clive Vanderburgh, with production assistant Jane Downey and editor Brian Elston.
The program was designed to teach basic math skills and terminology in an entertaining fashion to children between approximately 8 and 10 years of age. In each 15-minute episode, Math Patrol's unseen (silhouetted) boss "Mr. Big" would send the detective on a case or charge him with a task which could only be solved through mathematic deduction.
Over the course of 20 episodes, Math Patrol provided introductory math lessons on topics including addition, subtraction, multiplication, division, area, fractions, length, shapes, geometry and symmetry.
Because of its highly educational nature, Math Patrol was often shown to groups of primary school students during class time.
External links
A fan site dedicated to classic TVO children's shows of the 1970s
1970s Canadian children's television series
Canadian children's education television series
TVO original programming
Television shows filmed in Toronto
Mathematics education television series
|
https://en.wikipedia.org/wiki/Baltic%20Way%20%28mathematical%20contest%29
|
The Baltic Way mathematical contest has been organized annually since 1990, usually in early November, to commemorate the Baltic Way demonstration of 1989. Unlike most international mathematical competitions, Baltic Way is a true team contest. Each team consists of five secondary-school students, who are allowed and expected to collaborate on the twenty problems during the four and a half hours of the contest.
Originally, the three Baltic states participated, but the list of invitees has since grown to include all countries around the Baltic Sea; Germany sends a team representing only its northernmost parts, and Russia a team from St. Petersburg. Iceland is invited on grounds of being the first state to recognize the newfound independence of the Baltic states. Extra "guest" teams are occasionally invited at the discretion of the organizers: Israel was invited in 2001, Belarus in 2004 and 2014, Belgium in 2005, South Africa in 2011, the Netherlands in 2015 and Ireland in 2021. Responsibility for organizing the contest circulates among the regular participants.
History
Notes
External links and references
Problems, solutions, results and links (some of them broken) to web sites 1990-2010
Baltic Way contest web sites
Problems
European student competitions
Mathematics competitions
Recurring events established in 1990
|
https://en.wikipedia.org/wiki/Broadcom
|
Broadcom Inc. is an American multinational designer, developer, manufacturer, and global supplier of a wide range of semiconductor and infrastructure software products. Broadcom's product offerings serve the data center, networking, software, broadband, wireless, storage, and industrial markets. As of 2022, some 78 percent of Broadcom's revenue was coming from its semiconductor-based products and 22 percent from its infrastructure software products and services.
Tan Hock Eng is the company's president and CEO. The company is headquartered in San Jose, California. Avago Technologies Limited took the Broadcom part of the Broadcom Corporation name after acquiring it in January 2016. The ticker symbol AVGO which represented old Avago now represents the newly merged entity. The Broadcom Corporation ticker symbol BRCM was retired. At first the merged entity was known as Broadcom Ltd., before assuming the present name.
Broadcom has a long history of corporate transactions (or attempted transactions) with other prominent corporations mainly in the high-technology space.
In October 2019, the European Union issued an interim antitrust order against Broadcom concerning anticompetitive business practices which allegedly violate European Union competition law.
History
Origin in Hewlett-Packard
The company that would later become Broadcom Inc. was established in 1961 as HP Associates, a semiconductor products division of Hewlett-Packard.
The division separated from Hewlett-Packard as part of the Agilent Technologies spinoff in 1999.
Formation of Avago Technologies
KKR and Silver Lake Partners acquired the Semiconductor Products Group of Agilent Technologies in 2005 for $2.6 billion and formed Avago Technologies. Avago Technologies agreed to sell its I/O solutions unit to PMC-Sierra for $42.5 million in October 2005.
In August 2008, the company filed an initial public offering of $400 million. In October 2008, Avago Technologies acquired Infineon Technologies' Munich-ba
|
https://en.wikipedia.org/wiki/List%20of%20CRT%20video%20projectors
|
This is an incomplete list of front-projection CRT video projectors.
List of CRT projectors
Re-badged projectors
A number of projector manufacturers produced projectors that were sold under the brand of different makers, sometimes with minor electrical or cosmetic modification. The following list reflects these re-badged projectors.
External links
Curt Palme CRT Projector FAQs, Tips, Manuals
|
https://en.wikipedia.org/wiki/Deoxyadenosine%20triphosphate
|
Deoxyadenosine triphosphate (dATP) is a nucleotide used in cells for DNA synthesis (or replication), as a substrate of DNA polymerase.
Deoxyadenosine triphosphate is produced from DNA by the action of nuclease P1, adenylate kinase, and pyruvate kinase.
Health effects
High levels of dATP can be toxic and result in impaired immune function, since dATP acts as a noncompetitive inhibitor for the DNA synthesis enzyme ribonucleotide reductase. Patients with adenosine deaminase deficiency (ADA) tend to have elevated intracellular dATP concentrations because adenosine deaminase normally curbs adenosine levels by converting it into inosine. Deficiency of this deaminase also causes immunodeficiency.
In cardiac myosin, dATP is an alternative to ATP as an energy substrate for facilitating cross-bridge formation.
See also
Adenosine triphosphate (ATP)
Adenosine deaminase deficiency (ADA)
Dilated cardiomyopathy (DCM)
|
https://en.wikipedia.org/wiki/GSP%20algorithm
|
GSP algorithm (Generalized Sequential Pattern algorithm) is an algorithm used for sequence mining. The algorithms for solving sequence mining problems are mostly based on the apriori (level-wise) algorithm. One way to use the level-wise paradigm is to first discover all the frequent items in a level-wise fashion. It simply means counting the occurrences of all singleton elements in the database. Then, the transactions are filtered by removing the non-frequent items. At the end of this step, each transaction consists of only the frequent elements it originally contained. This modified database becomes an input to the GSP algorithm. This process requires one pass over the whole database.
GSP algorithm makes multiple database passes. In the first pass, all single items (1-sequences) are counted. From the frequent items, a set of candidate 2-sequences are formed, and another pass is made to identify their frequency. The frequent 2-sequences are used to generate the candidate 3-sequences, and this process is repeated until no more frequent sequences are found. There are two main steps in the algorithm.
Candidate Generation. Given the set of frequent (k−1)-sequences F(k−1), the candidates for the next pass are generated by joining F(k−1) with itself. A pruning phase eliminates any sequence at least one of whose subsequences is not frequent.
Support Counting. Normally, a hash tree–based search is employed for efficient support counting. Finally non-maximal frequent sequences are removed.
Algorithm
F1 = the set of frequent 1-sequences
k = 2
do while F(k-1) is not empty
    Generate candidate sets Ck (the set of candidate k-sequences)
    For all input sequences s in the database D
    do
        Increment the count of every a in Ck such that s supports a
    End do
    Fk = {a ∈ Ck such that the frequency of a exceeds the threshold}
    k = k + 1
End do
Result = the set of all frequent sequences, i.e. the union of all Fk
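A runnable Python sketch of the same loop is given below; it assumes each data sequence is a flat sequence of single items (no itemset elements or time constraints), so the F(k−1) self-join reduces to matching a (k−2)-length overlap. Names and the example database are illustrative only.

def is_subsequence(candidate, sequence):
    # True if candidate occurs as a (not necessarily contiguous) subsequence of sequence.
    it = iter(sequence)
    return all(item in it for item in candidate)

def gsp(database, min_support):
    items = sorted({x for seq in database for x in seq})
    candidates = [(i,) for i in items]          # candidate 1-sequences
    frequent_all = []
    while candidates:
        counts = {c: sum(is_subsequence(c, s) for s in database) for c in candidates}
        frequent = [c for c, n in counts.items() if n >= min_support]
        frequent_all.extend(frequent)
        frequent_set = set(frequent)
        # Join step: extend a by the last item of b when a's suffix matches b's prefix.
        joined = [a + (b[-1],) for a in frequent for b in frequent if a[1:] == b[:-1]]
        # Prune step: drop candidates that have an infrequent subsequence obtained by deleting one item.
        candidates = [c for c in joined
                      if all(c[:i] + c[i + 1:] in frequent_set for i in range(len(c)))]
    return frequent_all

db = [("a", "b", "c", "b"), ("a", "b", "b", "c"), ("a", "c"), ("b", "c", "a")]
print(gsp(db, min_support=3))   # [('a',), ('b',), ('c',), ('a', 'c'), ('b', 'c')]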
The above algorithm looks
|
https://en.wikipedia.org/wiki/Secret%20Files%3A%20Tunguska
|
Secret Files: Tunguska (German: Geheimakte Tunguska) is a 2006 graphic adventure video game developed by German studios Fusionsphere Systems and Animation Arts and published by Deep Silver for Microsoft Windows, Nintendo DS, Wii, iOS, Android, Wii U and Nintendo Switch. The game is the start of the Secret Files trilogy, with a sequel, Secret Files 2: Puritas Cordis, being released in 2008.
Gameplay
The game is viewed from a third person perspective and uses a classic point and click interface. The game features a 'snoop key' tool, which highlights all interactive objects on screen and assists in finding small, easily overlooked objects. Unlike games in the Broken Sword series, however, it is not possible for the player to lose or get into an unwinnable situation. There are also a few parts of the game where players must switch between the main character, Nina Kalenkov, and her boyfriend Max Gruber, to progress.
The Wii version of the game exclusively allows the player to connect a Nunchuk to a Wii Remote and use its analog stick to directly control character movement and also supports cooperative multiplayer in which a second player uses another Wii Remote to easily point out anything that could be crucial to progress.
Plot
One night, Nina's father, Vladimir Kalenkov, in his office in a museum in Berlin, is suddenly attacked by a figure in black robes who seemingly possesses psychic powers. A while later Nina enters the office but finds the place ransacked and her father missing. She calls the police, who refuse to help for bureaucratic reasons, but afterwards detective Kanski arrives at the scene. Nina is finally offered help by Vladimir's assistant, Max Gruber. She returns to her and her father's apartment to find it ransacked too, and she is knocked unconscious after being attacked from behind. In the apartment she finds clues about someone named Oleg Kambursky, who is living nearby and had been contacted by her father and the latter wanted to see him agai
|
https://en.wikipedia.org/wiki/Ittiam%20Systems
|
Ittiam Systems is a venture capital funded technology company founded by ex-Managing Director of Texas Instruments' India Srini Rajam in 2001. It is headquartered in Bangalore, India and has marketing offices in the United States, UK, France, Japan, Mainland China, Singapore and Taiwan.
Ittiam Systems is India's first technology firm to be based on licensing of intellectual property (IP). Revenue is mainly generated through licensing of its DSP intellectual property and reference designs.
One of its early United States customers was e.Digital Corporation, a San Diego-based company that developed the digEplayer portable audio/video in-flight entertainment device under contract by Tacoma, Washington-based APS, now named digEcor.
Ittiam Systems demonstrated its HEVC and VP9 implementations accelerated using ARM Mali-T600 GPU Compute technology at CES 2014 and MWC 2014.
|
https://en.wikipedia.org/wiki/Spatiotemporal%20database
|
A spatiotemporal database is a database that manages both space and time information. Common examples include:
Tracking of moving objects, which typically can occupy only a single position at a given time.
A database of wireless communication networks, which may exist only for a short timespan within a geographic region.
An index of species in a given geographic region, where over time additional species may be introduced or existing species migrate or die out.
Historical tracking of plate tectonic activity.
Spatiotemporal databases are an extension of spatial databases and temporal databases. A spatiotemporal database embodies spatial, temporal, and spatiotemporal database concepts, and captures spatial and temporal aspects of data and deals with:
geometry changing over time and/or
location of objects moving over invariant geometry (known variously as moving objects databases or real-time locating systems).
Implementations
Although there exist numerous relational databases with spatial extensions, spatiotemporal databases are not based on the relational model for practical reasons, chief among them that the data is multi-dimensional, capturing complex structures and behaviours.
As of 2008, there are no RDBMS products with spatiotemporal extensions. There are some products such as the open-source TerraLib which use a middleware approach storing their data in a relational database. Unlike in the pure spatial domain, there are however no official or de facto standards for spatio-temporal data models and their querying. In general, the theory of this area is also less well-developed. Another approach is the constraint database system such as MLPQ (Management of Linear Programming Queries).
GeoMesa is an open-source distributed spatiotemporal index built on top of Bigtable-style databases using an implementation of the Z-order curve to create a multi-dimensional index combining space and time.
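As a toy illustration of that indexing principle, the sketch below interleaves quantized longitude, latitude, and time into a single Morton (Z-order) key; it shows the idea only and is not GeoMesa's actual key layout.

def interleave3(x, y, t, bits=21):
    # Interleave the low `bits` bits of three non-negative integers into one Morton key.
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (3 * i)
        key |= ((y >> i) & 1) << (3 * i + 1)
        key |= ((t >> i) & 1) << (3 * i + 2)
    return key

def spacetime_key(lon, lat, timestamp, t_min, t_max, bits=21):
    # Quantize (lon, lat, time) onto a 2**bits grid and build a combined space-time key.
    scale = (1 << bits) - 1
    qx = int((lon + 180.0) / 360.0 * scale)
    qy = int((lat + 90.0) / 180.0 * scale)
    qt = int((timestamp - t_min) / (t_max - t_min) * scale)
    return interleave3(qx, qy, qt, bits)

# Nearby positions at nearby times map to numerically close keys, so a range scan
# over keys in a Bigtable-style store approximates a spatiotemporal box query.
print(hex(spacetime_key(-122.4, 37.8, 1_700_000_000, 1_690_000_000, 1_710_000_000)))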
SpaceTime is a commercial spatiotemporal database built on top o
|
https://en.wikipedia.org/wiki/X-ray%20standing%20waves
|
The X-ray standing wave (XSW) technique can be used to study the structure of surfaces and interfaces with high spatial resolution and chemical selectivity. Pioneered by B.W. Batterman in the 1960s, the availability of synchrotron light has stimulated the application of this interferometric technique to a wide range of problems in surface science.
Basic principles
An X-ray standing wave (XSW) field is created by interference between an X-ray beam impinging on a sample and a reflected beam. The reflection may be generated at the Bragg condition for a crystal lattice or an engineered multilayer superlattice; in these cases, the period of the XSW equals the periodicity of the reflecting planes. X-ray reflectivity from a mirror surface at small incidence angles may also be used to generate long-period XSWs.
The spatial modulation of the XSW field, described by the dynamical theory of X-ray diffraction, undergoes a pronounced change when the sample is scanned through the Bragg condition. Due to a relative phase variation between the incoming and reflected beams, the nodal planes of the XSW field shift by half the XSW period. Depending on the position of the atoms within this wave field, the measured element-specific absorption of X-rays varies in a characteristic way. Therefore, measurement of the absorption (via X-ray fluorescence or photoelectron yield) can reveal the position of the atoms relative to the reflecting planes. The absorbing atoms can be thought of as "detecting" the phase of the XSW; thus, this method overcomes the phase problem of X-ray crystallography.
For quantitative analysis, the normalized fluorescence or photoelectron yield is described by
Y_p(Ω) = 1 + R + 2√R f_H cos(ν − 2π P_H),
where R is the reflectivity and ν is the relative phase of the interfering beams. The characteristic shape of Y_p can be used to derive precise structural information about the surface atoms because the two parameters f_H (coherent fraction) and P_H (coherent position) are directly related to the Fourier represen
|
https://en.wikipedia.org/wiki/ANNNI%20model
|
In statistical physics, the axial (or anisotropic) next-nearest neighbor Ising model, usually known as the ANNNI model, is a variant of the Ising model. In the ANNNI model, competing ferromagnetic and antiferromagnetic exchange interactions couple spins at nearest and next-nearest neighbor sites along one of the crystallographic axes of the lattice.
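In a standard formulation (stated here for orientation; sign and coupling conventions vary between authors), the ANNNI Hamiltonian for Ising spins \(s_i = \pm 1\) reads

\[
H = -J_1 \sum_{\langle i,j\rangle} s_i s_j \;-\; J_2 \sum_{[i,k]_{\text{axial}}} s_i s_k ,
\]

where the first sum runs over nearest-neighbour pairs, the second over next-nearest-neighbour pairs along the chosen axis, and the competition arises when \(J_1\) and \(J_2\) favour different spin arrangements (for example \(J_1 > 0\) ferromagnetic and \(J_2 < 0\) antiferromagnetic along the axial direction).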
The model is a prototype for complicated spatially modulated magnetic superstructures in crystals.
To describe experimental results on magnetic orderings in erbium, the model was introduced in 1961 by Roger Elliott from the University of Oxford. The model was given its name in 1980 by Michael E. Fisher and Walter Selke, who analysed it first by Monte Carlo methods, and then by low-temperature series expansions, showing the fascinating complexity of its phase diagram, including devil's staircases and a Lifshitz point. Indeed, it provides, for two- and three-dimensional systems, a theoretical basis for understanding numerous experimental observations on commensurate and incommensurate structures, as well as accompanying phase transitions, in various magnets, alloys, adsorbates, polytypes, multiferroics, and other solids.
Further possible applications range from modeling of cerebral cortex to quantum information.
|
https://en.wikipedia.org/wiki/Terpineol
|
Terpineol is any of four isomeric monoterpenoids. Terpenoids are terpenes that have been modified by the addition of a functional group, in this case, an alcohol. Terpineols have been isolated from a variety of sources such as cardamom, cajuput oil, pine oil, and petitgrain oil. Four isomers exist: α-, β-, γ-terpineol, and terpinen-4-ol. β- and γ-terpineol differ only by the location of the double bond. Terpineol is usually a mixture of these isomers with α-terpineol as the major constituent.
Terpineol has a pleasant odor similar to lilac and is a common ingredient in perfumes, cosmetics, and flavors. α-Terpineol is one of the two most abundant aroma constituents of lapsang souchong tea; the α-terpineol originates in the pine smoke used to dry the tea. (+)-α-terpineol is a chemical constituent of skullcap.
Synthesis and biosynthesis
Although it is naturally occurring, terpineol is commonly manufactured from alpha-pinene, which is hydrated in the presence of sulfuric acid.
An alternative route starts from limonene:
Limonene reacts with trifluoroacetic acid in a Markovnikov addition to a trifluoroacetate intermediate, which is easily hydrolyzed with sodium hydroxide to α-terpineol with 7% selectivity. Side-products are β-terpineol in a mixture of the cis isomer, the trans isomer, and 4-terpineol.
The biosynthesis of α-terpineol proceeds from geranyl pyrophosphate, which releases pyrophosphate to give the terpinyl cation. This carbocation is the precursor to many terpenes and terpenoids. Its hydrolysis gives terpineol.
|
https://en.wikipedia.org/wiki/Hispid%20cotton%20rat
|
The hispid cotton rat (Sigmodon hispidus) is a rodent species long thought to occur in parts of South America, Central America, and southern North America. However, recent taxonomic revisions, based on mitochondrial DNA sequence data, have split this widely distributed species into three separate species (S. hispidus, S. toltecus, and S. hirsutus). The distribution of S. hispidus ranges from Arizona in the west to Virginia to the east and from the Platte River in Nebraska in the north to, likely, the Rio Grande in the south, where it meets the northern edge of the distribution of S. toltecus (formerly S. h. toltecus). Adult size is total length ; tail , frequently broken or stubbed; hind foot ; ear ; mass . They have been used as laboratory animals.
Taxonomy
The currently accepted scientific name for the hispid cotton rat is Sigmodon hispidus. It is a member of the family Cricetidae. Although 25 subspecies are accepted, including the type subspecies, the most distinct genetic subdivision within S. hispidus separates the species into two genetic lineages, an eastern one and a western one, which hybridize along a contact zone.
Distribution
In the United States, the hispid cotton rat ranges from southern Virginia and North Carolina (especially the coastal plain) west through Tennessee, northern Missouri, Kansas, and extreme southern Nebraska to southeastern Colorado, New Mexico, and southeastern Arizona; south to the Gulf Coast; and south to northern Mexico. It does not occur on the coastal plain of North Carolina nor in the mountains of Virginia. Disjunct populations occur in southeastern Arizona and extreme southeastern California into Baja California Norte. In Kansas, it appeared within the last 50 years.
Habitat
Hispid cotton rats occupy a wide variety of habitats within their range, but are not randomly distributed among microhabitats. They are strongly associated with grassy patches with some shrub overstory and they have little or no affinity for dicot-domi
|
https://en.wikipedia.org/wiki/Trevor%20Philips
|
Trevor Philips is a character and one of the three playable protagonists, alongside Michael De Santa and Franklin Clinton, of Grand Theft Auto V, the seventh main title in the Grand Theft Auto series developed by Rockstar North and published by Rockstar Games. He also appears in the game's multiplayer component, Grand Theft Auto Online. A career criminal and former bank robber, Trevor leads his own organisation, Trevor Philips Enterprises, and comes into conflict with various rival gangs and criminal syndicates as he attempts to secure control of the drugs and weapons trade in the fictional Blaine County, San Andreas. He is portrayed by Canadian actor Steven Ogg, who provided the voice and motion capture for the character.
Rockstar based Trevor's appearance on Ogg's physical appearance, while his personality was inspired by Charles Bronson. Grand Theft Auto V co-writer Dan Houser described Trevor as purely driven by desire and resentment. To make players care for the character, the designers gave the character more emotions. Trevor is shown to care about people very close to him, despite his antisocial behavior and psychotic derangement.
Trevor is considered one of the most controversial characters in video game history. The general attention given to Trevor by critics was mostly very positive, although some reviewers felt that his violent personality and actions negatively affected the game's narrative. His design and personality have drawn comparisons to other influential video game and film characters. Many reviewers have called Trevor a likeable and believable character, and felt that he is one of the few protagonists in the Grand Theft Auto series that would willingly execute popular player actions, such as murder and violence.
Character design
Grand Theft Auto V co-writer Dan Houser explained that Trevor "appeared to pretty much out of nowhere as the embodiment of another side of criminality [...] If Michael was meant to be the idea of some version of cr
|
https://en.wikipedia.org/wiki/Paley%20construction
|
In mathematics, the Paley construction is a method for constructing Hadamard matrices using finite fields. The construction was described in 1933 by the English mathematician Raymond Paley.
The Paley construction uses quadratic residues in a finite field GF(q) where q is a power of an odd prime number. There are two versions of the construction depending on whether q is congruent to 1 or 3 (mod 4).
Quadratic character and Jacobsthal matrix
Let q be a power of an odd prime. In the finite field GF(q) the quadratic character χ(a) indicates whether the element a is zero, a non-zero perfect square, or a non-square:
For example, in GF(7) the non-zero squares are 1 = 1² = 6², 4 = 2² = 5², and 2 = 3² = 4². Hence χ(0) = 0, χ(1) = χ(2) = χ(4) = 1, and χ(3) = χ(5) = χ(6) = −1.
The Jacobsthal matrix Q for GF(q) is the q×q matrix with rows and columns indexed by finite field elements such that the entry in row a and column b is χ(a − b). For example, in GF(7), if the rows and columns of the Jacobsthal matrix are indexed by the field elements 0, 1, 2, 3, 4, 5, 6, then
The Jacobsthal matrix has the properties QQ^T = qI − J and QJ = JQ = 0, where I is the q×q identity matrix and J is the q×q all-1 matrix. If q is congruent to 1 (mod 4) then −1 is a square in GF(q), which implies that Q is a symmetric matrix. If q is congruent to 3 (mod 4) then −1 is not a square, and Q is a skew-symmetric matrix. When q is a prime number and rows and columns are indexed by field elements in the usual 0, 1, 2, … order, Q is a circulant matrix. That is, each row is obtained from the row above by cyclic permutation.
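As an illustration, the following minimal sketch (assuming q is an odd prime rather than a general prime power, and using NumPy) builds the quadratic character and the Jacobsthal matrix and checks the properties above; the function names are hypothetical:

```python
import numpy as np

def quadratic_character(a, q):
    """Quadratic character chi(a) in GF(q) for an odd prime q:
    0 if a == 0, +1 if a is a non-zero square, -1 otherwise."""
    a %= q
    if a == 0:
        return 0
    squares = {(x * x) % q for x in range(1, q)}
    return 1 if a in squares else -1

def jacobsthal_matrix(q):
    """q x q Jacobsthal matrix whose entry in row a, column b is chi(a - b)."""
    return np.array([[quadratic_character(a - b, q) for b in range(q)]
                     for a in range(q)])

q = 7
Q = jacobsthal_matrix(q)
I = np.eye(q, dtype=int)
J = np.ones((q, q), dtype=int)

# Check the defining properties Q Q^T = qI - J and QJ = JQ = 0.
assert (Q @ Q.T == q * I - J).all()
assert (Q @ J == 0).all() and (J @ Q == 0).all()
print(Q)   # for q = 7 this is a circulant matrix with 0 and +/-1 entries
```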
Paley construction I
If q is congruent to 3 (mod 4) then the matrix
H = I + S, where S = [[0, jᵀ], [−j, Q]] written in block form,
is a Hadamard matrix of size q + 1. Here j is the all-1 column vector of length q and I is the (q+1)×(q+1) identity matrix. The matrix H is a skew Hadamard matrix, which means it satisfies H + H^T = 2I.
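Continuing the sketch above (reusing the hypothetical jacobsthal_matrix helper, again only for odd primes q with q ≡ 3 (mod 4)), the block matrix can be assembled and both the Hadamard and skew properties verified:

```python
import numpy as np

def paley_construction_I(q):
    """Skew Hadamard matrix of order q + 1 for a prime q with q % 4 == 3,
    built as H = I + S with S = [[0, j^T], [-j, Q]] in block form."""
    Q = jacobsthal_matrix(q)                      # helper from the sketch above
    j = np.ones((q, 1), dtype=int)
    S = np.block([[np.zeros((1, 1), dtype=int), j.T],
                  [-j, Q]])
    return np.eye(q + 1, dtype=int) + S

H = paley_construction_I(7)                       # an 8 x 8 Hadamard matrix
n = H.shape[0]
assert (H @ H.T == n * np.eye(n, dtype=int)).all()   # Hadamard: H H^T = nI
assert (H + H.T == 2 * np.eye(n, dtype=int)).all()   # skew: H + H^T = 2I
```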
Paley construction II
If q is congruent to 1 (mod 4) then the matrix obtained by replacing all 0 entries in
with
|
https://en.wikipedia.org/wiki/Klocwork
|
Klocwork is a static code analysis tool owned by Minneapolis, Minnesota-based software developer Perforce. Klocwork software analyzes source code in real time, simplifies peer code reviews, and extends the life of complex software.
Overview
Klocwork is used to identify security, safety and reliability issues in C, C++, C#, Java, JavaScript and Python code. The product includes numerous desktop plug-ins for developers, metrics and reporting.
History
Klocwork’s technology was originally developed inside Nortel Networks to address requirements for large-scale source code analysis and software architecture optimization of C code; the company was spun out in 2001.
In January 2012, Klocwork Insight 9.5 was released. It provided on-the-fly static analysis in Visual Studio, like a word processor does with spelling mistakes.
In May 2013, Klocwork Cahoots peer code review tool was launched.
Awards and recognition
In 2007, Klocwork was awarded the 2007 InfoWorld Technology of Year award for best source code analyzer.
In May 2014, Klocwork won the Red Herring Top 100 North America Award, in the software sector.
Original developer
Klocwork was an Ottawa, Canada-based software company that developed the Klocwork brand of programming tools for software developers. The company was acquired by Minneapolis-based application software developer Perforce in 2019, as part of their acquisition of Klocwork's parent software company Rogue Wave. Klocwork no longer exists as a standalone company, but Perforce continues to develop Klocwork branded static code analysis software.
Company history
The company was founded in 2001 as a spin-out of Nortel Networks. Its initial investors were Firstmark Capital, USVP, and Mobius Ventures.
In January 2014, the company was acquired by Rogue Wave Software.
In January 2019, Rogue Wave was acquired by Minneapolis-based application software developer Perforce.
|
https://en.wikipedia.org/wiki/Building%20insulation
|
Building insulation is material used in a building (specifically the building envelope) to reduce the flow of thermal energy. While the majority of insulation in buildings is for thermal purposes, the term also applies to acoustic insulation, fire insulation, and impact insulation (e.g. for vibrations caused by industrial applications). Often an insulation material will be chosen for its ability to perform several of these functions at once.
Insulation is an important economic and environmental investment for buildings. By installing insulation, buildings use less energy for heating and cooling and occupants experience less thermal variability. Retrofitting buildings with further insulation is an important climate change mitigation tactic, especially when buildings are heated by oil, natural gas, or coal-based electricity. Local and national governments and utilities often have a mix of incentives and regulations to encourage insulation efforts on new and renovated buildings as part of efficiency programs in order to reduce grid energy use and its related environmental impacts and infrastructure costs.
Insulation
The definition of thermal insulation
Thermal insulation usually refers to the use of appropriate insulation materials and design adaptations for buildings to slow the transfer of heat through the enclosure to reduce heat loss and gain. The transfer of heat is caused by the temperature difference between indoors and outdoors. Heat may be transferred by conduction, convection, or radiation. The rate of transmission is closely related to the propagating medium. Heat is lost or gained by transmission through the ceilings, walls, floors, windows, and doors. This heat loss and gain is usually unwelcome: it not only increases the load on the HVAC system, resulting in greater energy waste, but also reduces the thermal comfort of people in the building. Thermal insulation in buildings is an important factor in achieving thermal comfort for its o
|
https://en.wikipedia.org/wiki/Trochlea
|
Trochlea (Latin for pulley) is a term in anatomy. It refers to a grooved structure reminiscent of a pulley's wheel.
Related to joints
Most commonly, trochleae bear the articular surface of saddle and other joints:
Trochlea of humerus (part of the elbow hinge joint with the ulna)
Trochlea of femur (forming the knee hinge joint with the patella)
The trochlea tali in the superior surface of the body of talus (part of the ankle hinge joint with the tibia)
Trochlear process of the calcaneus
In quadrupeds, the trochlea of the radius
The "knuckles" of the tarsometatarsus which articulate with the proximal phalanges in a bird's foot
Related to muscles
It also can refer to structures which serve as a guide for muscles:
Trochlea of superior oblique (see also superior oblique muscle), a mover of the eye which is supplied by the trochlear nerve, or fourth cranial nerve
|
https://en.wikipedia.org/wiki/Biodiversity%20informatics
|
Biodiversity informatics is the application of informatics techniques to biodiversity information, such as taxonomy, biogeography or ecology. It is defined as the application of information technologies to the management, algorithmic exploration, analysis and interpretation of primary data regarding life, particularly at the species level of organization. Modern computer techniques can yield new ways to view and analyze existing information, as well as predict future situations (see niche modelling). Biodiversity informatics is a term that was only coined around 1992 but with rapidly increasing data sets has become useful in numerous studies and applications, such as the construction of taxonomic databases or geographic information systems. Biodiversity informatics contrasts with "bioinformatics", which is often used synonymously with the computerized handling of data in the specialized area of molecular biology.
Overview
Biodiversity informatics (different but linked to bioinformatics) is the application of information technology methods to the problems of organizing, accessing, visualizing and analyzing primary biodiversity data. Primary biodiversity data is composed of names, observations and records of specimens, and genetic and morphological data associated to a specimen. Biodiversity informatics may also have to cope with managing information from unnamed taxa such as that produced by environmental sampling and sequencing of mixed-field samples. The term biodiversity informatics is also used to cover the computational problems specific to the names of biological entities, such as the development of algorithms to cope with variant representations of identifiers such as species names and authorities, and the multiple classification schemes within which these entities may reside according to the preferences of different workers in the field, as well as the syntax and semantics by which the content in taxonomic databases can be made machine queryable and in
|
https://en.wikipedia.org/wiki/Gpsd
|
gpsd is a computer software program that collects data from a Global Positioning System (GPS) receiver and provides the data via an Internet Protocol (IP) network to potentially multiple client applications in a server-client application architecture. Gpsd may be run as a daemon to operate transparently as a background task of the server. The network interface provides a standardized data format for multiple concurrent client applications, such as Kismet or GPS navigation software.
Gpsd is commonly used on Unix-like operating systems. It is distributed as free software under the 3-clause BSD license.
Design
gpsd provides a TCP/IP service by binding to port 2947 by default. It communicates over that socket by accepting commands and returning results. These commands use a JSON-based syntax and produce JSON responses. Multiple clients can access the service concurrently.
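As a rough illustration of this interface, the sketch below (assuming a gpsd daemon is already running locally on the default port) opens the socket, enables JSON watch mode, and prints the first position report it receives:

```python
import json
import socket

# Connect to a gpsd daemon assumed to be listening on localhost:2947.
with socket.create_connection(("localhost", 2947)) as sock:
    stream = sock.makefile("rw")
    # Ask gpsd to stream JSON reports for all attached receivers.
    stream.write('?WATCH={"enable":true,"json":true}\n')
    stream.flush()
    for line in stream:
        report = json.loads(line)
        # TPV (time-position-velocity) reports carry the actual fix data.
        if report.get("class") == "TPV" and "lat" in report:
            print(report.get("time"), report["lat"], report["lon"])
            break
```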
The application supports many types of GPS receivers with connections via serial ports, USB, and Bluetooth. Starting in 2009, gpsd also supports AIS receivers.
gpsd supports interfacing with the Network Time Protocol (NTP) server ntpd via shared memory to enable setting the host platform's time via the GPS clock.
Authors
gpsd was originally written by Remco Treffkorn with Derrick Brashear, then maintained by Russell Nelson. It is now maintained by Eric S. Raymond.
|
https://en.wikipedia.org/wiki/Global%20distribution%20system
|
A global distribution system (GDS) is a computerised network system owned or operated by a company that enables transactions between travel industry service providers, mainly airlines, hotels, car rental companies, and travel agencies. The GDS mainly uses real-time inventory (e.g. number of hotel rooms available, number of flight seats available, or number of cars available) from the service providers. Travel agencies traditionally relied on GDS for services, products and rates in order to provide travel-related services to the end consumers. Thus, a GDS can link services, rates and bookings consolidating products and services across all three travel sectors: i.e., airline reservations, hotel reservations, car rentals.
GDS is different from a computer reservation system, which is a reservation system used by the service providers (also known as vendors). Primary customers of GDS are travel agents (both online and office-based) who make reservations on various reservation systems run by the vendors. GDS holds no inventory; the inventory is held on the vendor's reservation system itself. A GDS system will have a real-time link to the vendor's database. For example, when a travel agency requests a reservation on the service of a particular airline company, the GDS system routes the request to the appropriate airline's computer reservations system.
Example of a booking facilitation done by an airline GDS
A mirror image of the passenger name record (PNR) in the airline reservations system is maintained in the GDS system. If a passenger books an itinerary containing air segments of multiple airlines through a travel agency, the passenger name record in the GDS system would hold information on their entire itinerary, while each airline they fly on would only have a portion of the itinerary that is relevant to them. This would contain flight segments on their own services and inbound and onward connecting flights (known as info segments) of other airlines in the itinerary
|
https://en.wikipedia.org/wiki/Algorithm%20characterizations
|
Algorithm characterizations are attempts to formalize the word algorithm. Algorithm does not have a generally accepted formal definition. Researchers are actively working on this problem. This article will present some of the "characterizations" of the notion of "algorithm" in more detail.
The problem of definition
Over the last 200 years, the definition of the algorithm has become more complicated and detailed as researchers have tried to pin down the term. Indeed, there may be more than one type of "algorithm". But most agree that algorithm has something to do with defining generalized processes for the creation of "output" integers from other "input" integers – "input parameters" arbitrary and infinite in extent, or limited in extent but still variable—by the manipulation of distinguishable symbols (counting numbers) with finite collections of rules that a person can perform with paper and pencil.
The most common number-manipulation schemes—both in formal mathematics and in routine life—are: (1) the recursive functions calculated by a person with paper and pencil, and (2) the Turing machine or its Turing equivalents—the primitive register-machine or "counter-machine" model, the random-access machine model (RAM), the random-access stored-program machine model (RASP) and its functional equivalent "the computer".
When we are doing "arithmetic" we are really calculating by the use of "recursive functions" in the shorthand algorithms we learned in grade school, for example, adding and subtracting.
The proofs that every "recursive function" we can calculate by hand we can compute by machine and vice versa—note the usage of the words calculate versus compute—are remarkable. But this equivalence together with the thesis (unproven assertion) that this includes every calculation/computation indicates why so much emphasis has been placed upon the use of Turing-equivalent machines in the definition of specific algorithms, and why the definition of "algorithm" itself ofte
|
https://en.wikipedia.org/wiki/David%20Klein%20%28mathematician%29
|
David Klein is a professor of Mathematics at California State University in Northridge. He is an advocate of increasingly rigorous treatment of mathematics in school curricula and a frequently cited opponent of reforms based on the NCTM standards. One of the participants in the founding of Mathematically Correct, Klein appears regularly in the Math Wars.
Klein, who is a member of the U.S. Campaign for the Academic and Cultural Boycott of Israel, supports the BDS movement which seeks to impose comprehensive boycotts against Israel until it meets its obligations under international law. Klein hosts a webpage supportive of the BDS movement on his university website and, starting in 2011, it became the target of numerous complaints from the pro-Israel groups AMCHA Initiative, Shurat HaDin, and the Global Frontier Justice Center who claimed that it constituted a misuse of state resources. The complaints were dismissed both by the university's staff and by legal authorities as baseless.
Concordant with his support for the BDS movement, Klein defended University of Michigan associate professor John Cheney-Lippold's decision to decline to write a letter of recommendation to a student who planned to study in Israel.
Klein is the director of CSUN's Climate Science Program.
|
https://en.wikipedia.org/wiki/Matrox%20Mystique
|
The Mystique and Mystique 220 were 2D, 3D, and video accelerator cards for personal computers designed by Matrox, using the VGA connector. The original Mystique was introduced in 1996, with the slightly upgraded Mystique 220 having been released in 1997.
History
Matrox had been known for years as a significant player in the high-end 2D graphics accelerator market. Cards they produced were Windows accelerators, and the company's Millennium card, released in 1995, supported MS-DOS as well. In 1996 Next Generation called the Millennium "the definitive 2D accelerator." With regard to 3D acceleration, Matrox stepped forward in 1994 with their Impression Plus. However, that card could only accelerate a very limited feature set, and was primarily targeted at CAD applications. The Impression could not perform hardware texture mapping, for example, requiring Gouraud shading or lower-quality techniques. Very few games took advantage of the 3D capabilities of the Impression Plus, the only known ones being the three titles bundled with the card in its '3D Superpack' CD bundle: the 3D fighting game Sento by 47 Tek; the 3D space combat game IceHawk by Amorphous Designs; and Specter MGA (aka Specter VR) by Velocity.
The newer Millennium card also contained 3D capabilities similar to the Impression Plus, and was nearly as limited. Without support for texturing, the cards were very limited in visual enhancement capability. The only game to be accelerated by the Millennium was the CD-ROM version of NASCAR Racing, which received a considerable increase in speed over software rendering but no difference in image quality. The answer to these limitations, and Matrox's first attempt at targeting the consumer gaming PC market, would be the Matrox Mystique. It was based heavily on the Millennium but with various additions and some cost-cutting measures.
Overview
The Mystique was a 64-bit 2D GUI and video accelerator (MGA1064SG) with 3D acceleration support. Mystique has "Matrox Simple
|
https://en.wikipedia.org/wiki/Adempiere
|
ADempiere is an Enterprise Resource Planning or ERP software package released under a free software license. The verb adempiere in Italian means "to fulfill a duty" or "to accomplish".
The software is licensed under the GNU General Public License.
History
The ADempiere project was created in September 2006. Disagreement between the open-source developer community that formed around the Compiere open-source ERP software and the project's corporate sponsor ultimately led to the creation of Adempiere as a fork of Compiere.
Within weeks of the fork, ADempiere reached the top five of the SourceForge.net rankings. This ranking provides a measure of both the size of its developer community and also its impact on the open-source ERP software market.
The project name comes from the Italian verb "adempiere", which means "fulfillment of a duty" but with the additional senses of "complete, reach, practice, perform tasks, or release; also, give honor, respect", which were considered appropriate to what the project aimed to achieve.
Goals of this project
The goal of the Adempiere project is the creation of a community-developed and supported open source business solution. The Adempiere community follows the open-source model of the Bazaar described in Eric Raymond's article The Cathedral and the Bazaar.
Business functionality
The following business areas are addressed by the Adempiere application:
Enterprise Resource Planning (ERP)
Supply Chain Management (SCM)
Customer Relationship Management (CRM)
Financial Performance Analysis
Integrated Point of sale (POS) solution
Cost Engine for different Cost types
Two different production modes (light and complex), which include order batching and Material Requirements Planning (or Manufacturing Resource Planning).
Project structure
All community members are entitled to their say in the project discussion forums. For practical purposes, the project is governed by a council of contributors. A leader is nominated from this cou
|
https://en.wikipedia.org/wiki/Rational%20animal
|
The term rational animal (Latin: animal rationale or animal rationabile) refers to a classical definition of humanity or human nature, associated with Aristotelianism.
History
While the Latin term itself originates in scholasticism, it reflects the Aristotelian view of man as a creature distinguished by a rational principle. In the Nicomachean Ethics I.13, Aristotle states that the human being has a rational principle (Greek: λόγον ἔχον), on top of the nutritive life shared with plants, and the instinctual life shared with other animals, i. e., the ability to carry out rationally formulated projects. That capacity for deliberative imagination was equally singled out as man's defining feature in De anima III.11. While seen by Aristotle as a universal human feature, the definition applied to wise and foolish alike, and did not in any way imply necessarily the making of rational choices, as opposed to the ability to make them.
The Neoplatonic philosopher Porphyry defined man as a "mortal rational animal", and also considered animals to have a (lesser) rationality of their own.
The definition of man as a rational animal was common in scholastic philosophy. The Catholic Encyclopedia states that this definition means that "in the system of classification and definition shown in the Arbor Porphyriana, man is a substance, corporeal, living, sentient, and rational".
In Meditation II of Meditations on First Philosophy, Descartes considers and rejects the scholastic concept of the "rational animal":
Shall I say 'a rational animal'? No; for then I should have to inquire what an animal is, what rationality is, and in this one question would lead me down the slope to other harder ones.
Modern use
Freud was as aware as any of the irrational forces at work in humankind, but he nevertheless resisted what he called too much "stress on the weakness of the ego in relation to the id and of our rational elements in the face of the daemonic forces within us".
Neo-Kantian philosop
|
https://en.wikipedia.org/wiki/Oyster%20bar
|
An oyster bar, also known as an oyster saloon, oyster house or a raw bar service, is a restaurant specializing in serving oysters, or a section of a restaurant which serves oysters buffet-style. Oysters have been consumed since ancient times and were common tavern food in Europe, but the oyster bar as a distinct restaurant began making an appearance in the 18th century.
History
Oyster consumption in Europe was confined to the wealthy until the mid-17th century but, by the 18th century, the poor were also consuming them. Sources vary as to when the first oyster bar was created. One source claims that Sinclair's, a pub in Manchester, England, is the United Kingdom's oldest oyster bar. It opened in 1738. London's oldest restaurant, Rules, also began business as an oyster bar. It opened in 1798.
In North America, Native Americans on both coasts ate oysters in large quantities, as did colonists from Europe. Unlike in Europe, oyster consumption in North America after colonization by Europeans was never confined to class, and oysters were commonly served in taverns. During the early 19th century, express wagons filled with oysters crossed the Allegheny Mountains to reach the American Midwest. The oldest oyster bar in the United States is Union Oyster House in Boston, which opened in 1826. It features oyster shucking in front of the customer, and patrons may make their own oyster sauces from condiments on the tables. It has served as a model for many oyster bars in the United States.
During the same period, oysters were an integral part of some African-American communities. One example is Sandy Ground, which was located in modern-day Rossville, Staten Island. African-Americans were drawn to the oyster industry because it promised autonomy, as they were involved throughout the process of harvesting and selling. In addition, oyster farmers were relatively less impoverished than slaves and did not work under white owners. A recipe for an oyster pie in Abby Fisher's 1881 co
|
https://en.wikipedia.org/wiki/Deoxyguanosine%20triphosphate
|
Deoxyguanosine triphosphate (dGTP) is a nucleoside triphosphate, and a nucleotide precursor used in cells for DNA synthesis. The substance is used in the polymerase chain reaction technique, in sequencing, and in cloning. It is also the natural competitor of the antiviral drug acyclovir, a guanosine analogue used in the treatment of herpes simplex virus (HSV) infections.
|
https://en.wikipedia.org/wiki/MHC%20restriction
|
MHC-restricted antigen recognition, or MHC restriction, refers to the fact that a T cell can interact with a self-major histocompatibility complex molecule and a foreign peptide bound to it, but will only respond to the antigen when it is bound to a particular MHC molecule.
When foreign proteins enter a cell, they are broken into smaller pieces called peptides. These peptides, also known as antigens, can derive from pathogens such as viruses or intracellular bacteria. Foreign peptides are brought to the surface of the cell and presented to T cells by proteins called the major histocompatibility complex (MHC). During T cell development, T cells go through a selection process in the thymus to ensure that the T cell receptor (TCR) will not recognize MHC molecules presenting self-antigens, i.e. that its affinity is not too high. High affinity means it will be autoreactive, while no affinity means it will not bind strongly enough to the MHC. The selection process results in developed T cells with specific TCRs that might only respond to certain MHC molecules but not others. The fact that the TCR will recognize only some MHC molecules but not others contributes to "MHC restriction". The biological purpose of MHC restriction is to prevent the generation of supernumerary wandering lymphocytes, thereby saving energy and economizing on cell-building materials.
T-cells are a type of lymphocyte that is significant in the immune system to activate other immune cells. T-cells will recognize foreign peptides through T-cell receptors (TCRs) on the surface of the T cells, and then perform different roles depending on the type of T cell they are in order to defend the host from the foreign peptide, which may have come from pathogens like bacteria, viruses or parasites. Enforcing the restriction that T cells are activated by peptide antigens only when the antigens are bound to self-MHC molecules, MHC restriction adds another dimension to the specificity of T cell receptors so that an antigen is rec
|
https://en.wikipedia.org/wiki/Demidov%20Prize
|
The Demidov Prize () is a national scientific prize in Russia awarded annually to the members of the Russian Academy of Sciences. Originally awarded from 1832 to 1866 in the Russian Empire, it was revived by the government of Russia's Sverdlovsk Oblast in 1993. In its original incarnation it was one of the first annual scientific awards, and its traditions influenced other awards of this kind including the Nobel Prize.
History
In 1831 Count Pavel Nikolaievich Demidov, representative of the famous Demidov family, established a scientific prize in his name. The Saint Petersburg Academy of Sciences (now the Russian Academy of Sciences) was chosen as the awarding institution. In 1832 the president of the Petersburg Academy of Sciences, Sergei Uvarov, awarded the first prizes.
From 1832 to 1866 the Academy awarded 55 full prizes (5,000 rubles) and 220 part prizes. Among the winners were many prominent Russian scientists: the founder of field surgery and inventor of the plaster immobilisation method in the treatment of fractures, Nikolai Pirogov; the seafarer and geographer Adam Johann von Krusenstern, who led the first Russian circumnavigation of the globe; Dmitri Mendeleev, the creator of the periodic table of elements; Boris Jacobi, pioneer of the first usable electric motors; and many others. One of the recipients was the founder's younger brother, Count Anatoly Nikolaievich Demidov, 1st Prince of San Donato, in 1847; Pavel had died in 1840, making Anatoly the Count Demidov (note that Russia did not recognize Anatoly's Italian title of prince).
From 1866, 25 years after Count Demidov's death, in accordance with the terms of his bequest, there were no more awards.
In 1993, on the initiative of the vice-president of the Russian Academy of Sciences Gennady Mesyats and the governor of the Sverdlovsk Oblast Eduard Rossel, the Demidov Prize traditions were restored. The prize is awarded for outstanding achievements in natural sciences and humanities. The winners are elect
|
https://en.wikipedia.org/wiki/Spur%20%28botany%29
|
The botanical term “spur” is given to outgrowths of tissue on different plant organs. The most common usage of the term in botany refers to nectar spurs in flowers.
nectar spur
spur (stem)
spur (leaf)
See also
Fascicle
Sepal
Petal
Tepal
Calyx
Corolla
Plant anatomy
Plant morphology
|
https://en.wikipedia.org/wiki/Shimeji
|
Shimeji (Japanese: , or ) is a group of edible mushrooms native to East Asia, but also found in northern Europe. Hon-shimeji (Lyophyllum shimeji) is a mycorrhizal fungus and difficult to cultivate. Other species are saprotrophs, and buna-shimeji (Hypsizygus tessulatus) is now widely cultivated. Shimeji is rich in umami-tasting compounds such as guanylic acid, glutamic acid, and aspartic acid.
Species
Several species are sold as shimeji mushrooms. All are saprotrophic except Lyophyllum shimeji.
Mycorrhizal
Hon-shimeji (), Lyophyllum shimeji
The cultivation methods have been patented by several groups, such as Takara Bio and Yamasa, and the cultivated hon-shimeji is available from several manufacturers in Japan.
Saprotrophic
Buna-shimeji (, lit. beech shimeji), Hypsizygus tessulatus, also known in English as the brown beech or brown clamshell mushroom.
Hypsizygus marmoreus is a synonym of Hypsizygus tessulatus. Cultivation of Buna-shimeji was first patented by Takara Shuzo Co., Ltd. in 1972 as hon-shimeji and the production started in 1973 in Japan. Now, several breeds are widely cultivated and sold fresh in markets.
Bunapi-shimeji (), known in English as the white beech or white clamshell mushroom.
Bunapi was selected from UV-irradiated buna-shimeji ('hokuto #8' x 'hokuto #12') and the breed was registered as 'hokuto shiro #1' by Hokuto Corporation.
Hatake-shimeji (), Lyophyllum decastes.
Shirotamogidake (), Hypsizygus ulmarius.
These two species had been also sold as hon-shimeji.
Velvet pioppino (alias velvet pioppini, black poplar mushroom, Chinese: /), Agrocybe aegerita.
Shimeji health benefits
Shimeji mushrooms contain minerals such as potassium, phosphorus, magnesium, zinc, and copper. Shimeji mushrooms lower the cholesterol level of the body. This mushroom is rich in glycoprotein (HM-3A), marmorin, beta-(1-3)-glucan, hypsiziprenol, and hypsin, and is therefore a potential natural anticancer agent. Shimeji mushrooms contain angiotensin I-converting enzym
|
https://en.wikipedia.org/wiki/Timelike%20homotopy
|
On a Lorentzian manifold, certain curves are distinguished as timelike. A timelike homotopy between two timelike curves is a homotopy such that each intermediate curve is timelike. No closed timelike curve (CTC) on a Lorentzian manifold is timelike homotopic to a point (that is, null timelike homotopic); such a manifold is therefore said to be multiply connected by timelike curves (or timelike multiply connected). A manifold such as the 3-sphere can be simply connected (by any type of curve), and at the same time be timelike multiply connected. Equivalence classes of timelike homotopic curves define their own fundamental group, as noted by Smith (1967). A smooth topological feature which prevents a CTC from being deformed to a point may be called a timelike topological feature.
|
https://en.wikipedia.org/wiki/Jakarta%20Mail
|
Jakarta Mail (formerly JavaMail) is a Jakarta EE API used to send and receive email via SMTP, POP3 and IMAP. Jakarta Mail is built into the Jakarta EE platform, but also provides an optional package for use in Java SE.
The current version is 2.1.2, released on May 5, 2023. Another open source Jakarta Mail implementation exists (GNU JavaMail), which, while supporting only the obsolete JavaMail 1.3 specification, provides the only free NNTP backend, making it possible to use this technology to read and send newsgroup articles.
As of 2019, the software is known as Jakarta Mail, and is part of the Jakarta EE brand (formerly known as Java EE). The reference implementation is part of the Eclipse Angus project.
Maven coordinates of the relevant projects required for operation are:
mail API: jakarta.mail:jakarta.mail-api:2.1.2
mail implementation: org.eclipse.angus:angus-mail:2.0.2
multimedia extensions: jakarta.activation:jakarta.activation-api:2.1.2
Licensing
Jakarta Mail is hosted as an open source project on Eclipse.org under its new name Jakarta Mail.
Most of the Jakarta Mail source code is licensed under the following licences:
EPL-2.0
GPL-2.0 with Classpath Exception license
The source code for the demo programs is licensed under the BSD license
Examples
import jakarta.mail.*;
import jakarta.mail.internet.*;
import java.time.*;
import java.util.*;
// Send a simple, single part, text/plain e-mail
public class TestEmail {
    static Clock clock = Clock.systemUTC();

    public static void main(String[] args) {
        // SUBSTITUTE YOUR EMAIL ADDRESSES HERE!
        String to = "sendToMailAddress";
        String from = "sendFromMailAddress";
        // SUBSTITUTE YOUR ISP'S MAIL SERVER HERE!
        String host = "smtp.yourisp.invalid";
        // Create properties, get Session
        Properties props = new Properties();
        // If using static Transport.send(),
        // need to specify which host to send it to
        props.put("ma
|
https://en.wikipedia.org/wiki/Suprarenal%20plexus
|
The suprarenal plexus is formed by branches from the celiac plexus, from the celiac ganglion, and from the phrenic and greater splanchnic nerves, a ganglion being formed at the point of junction with the latter nerve.
The plexus supplies the suprarenal gland, being distributed chiefly to its medullary portion; its branches are remarkable for their large size in comparison with that of the organ they supply.
|
https://en.wikipedia.org/wiki/Splenic%20plexus
|
The splenic plexus (lienal plexus in older texts) is formed by branches from the celiac plexus, the left celiac ganglion, and from the right vagus nerve.
It accompanies the lienal artery to the spleen, giving off, in its course, subsidiary plexuses along the various branches of the artery.
|
https://en.wikipedia.org/wiki/Electron%20beam-induced%20current
|
Electron-beam-induced current (EBIC) is a semiconductor analysis technique performed in a scanning electron microscope (SEM) or scanning transmission electron microscope (STEM). It is most commonly used to identify buried junctions or defects in semiconductors, or to examine minority carrier properties. EBIC is similar to cathodoluminescence in that it depends on the creation of electron–hole pairs in the semiconductor sample by the microscope's electron beam. This technique is used in semiconductor failure analysis and solid-state physics.
Physics of the technique
If the semiconductor sample contains an internal electric field, as will be present in the depletion region at a p-n junction or Schottky junction, the electron–hole pairs will be separated by drift due to the electric field. If the p- and n-sides (or semiconductor and Schottky contact, in the case of a Schottky device) are connected through a picoammeter, a current will flow.
EBIC is best understood by analogy: in a solar cell, photons fall on the entire cell, delivering energy, creating electron–hole pairs, and causing a current to flow. In EBIC, energetic electrons take the role of the photons, causing the EBIC current to flow. However, because the electron beam of an SEM or STEM is very small, it is scanned across the sample, and variations in the induced current are used to map the electronic activity of the sample.
By using the signal from the picoammeter as the imaging signal, an EBIC image is formed on the screen of the SEM or STEM. When a semiconductor device is imaged in cross-section, the depletion region will show bright EBIC contrast. The shape of the contrast can be treated mathematically to determine the minority carrier properties of the semiconductor, such as diffusion length and surface recombination velocity. In plain-view, areas with good crystal quality will show bright contrast, and areas containing defects will show dark EBIC contrast.
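For example, in a simple planar-collector geometry the EBIC signal measured as the beam is moved away from the junction falls off roughly exponentially, I(x) ≈ I0·exp(−x/L), so the minority carrier diffusion length L can be estimated from a straight-line fit to ln I versus x. Below is a minimal sketch of such a fit on synthetic, idealized data (all numbers hypothetical):

```python
import numpy as np

# Synthetic EBIC line scan: beam distance from the junction (um) and current (nA).
x = np.linspace(1.0, 10.0, 10)            # hypothetical beam positions
L_true = 2.5                               # hypothetical diffusion length (um)
current = 12.0 * np.exp(-x / L_true)       # idealized exponential decay, no noise

# Fit ln(I) = ln(I0) - x / L with a straight line; the slope gives -1/L.
slope, intercept = np.polyfit(x, np.log(current), 1)
print("estimated diffusion length:", -1.0 / slope, "um")
```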
As such, EBIC is a semico
|
https://en.wikipedia.org/wiki/Infrared%20cirrus
|
Infrared cirrus or galactic cirrus are galactic filamentary structures seen in space over most of the sky that emit far-infrared light. The name is given because the structures are cloud-like in appearance. These structures were first detected by the Infrared Astronomy Satellite at wavelengths of 60 and 100 micrometres.
See also
Cosmic infrared background
|
https://en.wikipedia.org/wiki/Gastric%20plexuses
|
The superior gastric plexus (gastric or coronary plexus) accompanies the left gastric artery along the lesser curvature of the stomach, and joins with branches from the left vagus nerve.
The term "inferior gastric plexus" is sometimes used to describe a continuation of the hepatic plexus.
Additional images
|
https://en.wikipedia.org/wiki/Ovarian%20plexus
|
The ovarian plexus arises from the renal plexus and is distributed to the ovary and the fundus of the uterus.
It is carried in the suspensory ligament of the ovary.
|
https://en.wikipedia.org/wiki/Generative%20systems
|
Generative systems are technologies with the overall capacity to produce unprompted change driven by large, varied, and uncoordinated audiences. When generative systems provide a common platform, changes may occur at varying layers (physical, network, application, content) and provide a means through which different firms and individuals may cooperate indirectly and contribute to innovation.
Depending on the rules, the patterns can be extremely varied and unpredictable. One of the better-known examples is Conway's Game of Life, a cellular automaton. Other examples include Boids and Wikipedia. More examples can be found in generative music, generative art, and, more recently, in video games such as Spore.
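As a concrete illustration of how a trivial local rule can generate ongoing, structured change, the minimal sketch below implements one update step of Conway's Game of Life and advances a small "glider" pattern a few steps:

```python
import numpy as np

def life_step(grid):
    """One Game of Life update on a 2-D array of 0s and 1s, with wrap-around edges."""
    # Count the eight neighbours of every cell by summing shifted copies of the grid.
    neighbours = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                     if (dy, dx) != (0, 0))
    # A cell lives next step if it has 3 neighbours, or 2 neighbours and is alive now.
    return ((neighbours == 3) | ((neighbours == 2) & (grid == 1))).astype(int)

# A glider: a 5-cell pattern that travels diagonally across the grid, step after step.
grid = np.zeros((8, 8), dtype=int)
grid[1, 2] = grid[2, 3] = grid[3, 1] = grid[3, 2] = grid[3, 3] = 1
for _ in range(4):
    grid = life_step(grid)
print(grid)
```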
Theory
Jonathan Zittrain
In 2006, Jonathan Zittrain published The Generative Internet in Volume 119 of the Harvard Law Review. In this paper, Zittrain describes a technology's degree of generativity as a function of four characteristics:
Capacity for leverage – the extent to which an object enables something to be accomplished that would not otherwise have been possible or worthwhile.
Adaptability – how widely a technology can be used without it needing to be modified.
Ease of mastery – how much effort and skill is required for people to take advantage of the technology's leverage.
Accessibility – how easily people are able to start using a technology.
See also
|
https://en.wikipedia.org/wiki/Phrenic%20plexus
|
The phrenic plexus accompanies the inferior phrenic artery to the diaphragm, some filaments passing to the suprarenal gland.
It arises from the upper part of the celiac ganglion, and is larger on the right than on the left side.
It receives one or two branches from the phrenic nerve.
At the point of junction of the right phrenic plexus with the phrenic nerve is a small ganglion (ganglion phrenicum).
This plexus distributes branches to the inferior vena cava, and to the suprarenal and hepatic plexuses.
|
https://en.wikipedia.org/wiki/Uterovaginal%20plexus%20%28nerves%29
|
The Uterovaginal plexus is a division of the inferior hypogastric plexus. In older texts, it is referred to as two structures, the "vaginal plexus" and "uterine plexus".
The Vaginal Plexus arises from the lower part of the pelvic plexus. It is distributed to the walls of the vagina, to the erectile tissue of the vestibule, and to the cavernous nerves of the clitoris. The nerves composing this plexus contain, like the vesical, a large proportion of spinal nerve fibers.
The Uterine Plexus accompanies the uterine artery to the side of the uterus, between the layers of the broad ligament; it communicates with the ovarian plexus.
|
https://en.wikipedia.org/wiki/Prostatic%20plexus%20%28nervous%29
|
The Prostatic Plexus is continued from the lower part of the pelvic plexus. It lies within the fascial shell of the prostate.
The nerves composing it are of large size.
They are distributed to the prostate, the seminal vesicles, and the corpora cavernosa of the penis and urethra.
The nerves supplying the corpora cavernosa consist of two sets, the lesser and greater cavernous nerves, which arise from the forepart of the prostatic plexus, and, after joining with branches from the pudendal nerve, pass forward beneath the pubic arch. Injury to the prostatic plexus (during prostatic resection for example) is highly likely to cause erectile dysfunction. It is because of this relationship that surgeons are careful to maintain the integrity of the prostatic fascial shell so as to not interrupt the post-ganglionic parasympathetic fibers that produce penile erection.
|
https://en.wikipedia.org/wiki/Vesical%20nervous%20plexus
|
The vesical nervous plexus arises from the forepart of the pelvic plexus. The nerves composing it are numerous, and contain a large proportion of spinal nerve fibers. They accompany the vesical arteries, and are distributed to the sides and fundus of the bladder. Numerous filaments also pass to the seminal vesicles and vas deferens; those accompanying the vas deferens join, on the spermatic cord, with branches from the spermatic plexus.
Additional images
|
https://en.wikipedia.org/wiki/Townsend%20discharge
|
In electromagnetism, the Townsend discharge or Townsend avalanche is an ionisation process for gases in which free electrons are accelerated by an electric field, collide with gas molecules, and consequently free additional electrons. Those electrons are in turn accelerated and free additional electrons. The result is an avalanche multiplication that permits electrical conduction through the gas. The discharge requires a source of free electrons and a significant electric field; without both, the phenomenon does not occur.
The Townsend discharge is named after John Sealy Townsend, who discovered the fundamental ionisation mechanism by his work circa 1897 at the Cavendish Laboratory, Cambridge.
General description
The avalanche occurs in a gaseous medium that can be ionised (such as air). The electric field and the mean free path of the electron must allow free electrons to acquire an energy level (velocity) that can cause impact ionisation. If the electric field is too small, then the electrons do not acquire enough energy. If the mean free path is too short, the electron gives up its acquired energy in a series of non-ionising collisions. If the mean free path is too long, then the electron reaches the anode before colliding with another molecule.
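In the simplest quantitative picture, each drifting electron produces on average α ionising collisions per unit length (Townsend's first ionisation coefficient), so n0 seed electrons grow to roughly n0·exp(α·d) after crossing a gap of width d. Here is a toy calculation with purely hypothetical values:

```python
import math

# Hypothetical values: first Townsend coefficient alpha (ionisations per mm of drift)
# and gap width d (mm); both depend strongly on the gas, pressure and electric field.
alpha = 1.2      # 1/mm
d = 5.0          # mm
n0 = 10          # seed electrons created by, e.g., a cosmic ray

# Exponential electron multiplication of the avalanche across the gap.
n = n0 * math.exp(alpha * d)
print(f"{n0} seed electrons multiply into roughly {n:.0f} electrons")
```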
The avalanche mechanism is shown in the accompanying diagram. The electric field is applied across a gaseous medium; initial ions are created with ionising radiation (for example, cosmic rays). An original ionisation event produces an ion pair; the positive ion accelerates towards the cathode while the free electron accelerates towards the anode. If the electric field is strong enough, the free electron can gain sufficient velocity (energy) to liberate another electron when it next collides with a molecule. The two free electrons then travel towards the anode and gain sufficient energy from the electric field to cause further impact ionisations, and so on. This process is effectively a chain reaction that generates
|
https://en.wikipedia.org/wiki/Conformational%20entropy
|
In chemical thermodynamics, conformational entropy is the entropy associated with the number of conformations of a molecule. The concept is most commonly applied to biological macromolecules such as proteins and RNA, but can also be used for polysaccharides and other molecules. To calculate the conformational entropy, the possible conformations of the molecule may first be discretized into a finite number of states, usually characterized by unique combinations of certain structural parameters, each of which has been assigned an energy. In proteins, backbone dihedral angles and side chain rotamers are commonly used as parameters, and in RNA the base pairing pattern may be used. These characteristics are used to define the degrees of freedom (in the statistical mechanics sense of a possible "microstate"). The conformational entropy associated with a particular structure or state, such as an alpha-helix, a folded or an unfolded protein structure, is then dependent on the probability of the occupancy of that structure.
The entropy of heterogeneous random coil or denatured proteins is significantly higher than that of the tertiary structure of the folded native state. In particular, the conformational entropy of the amino acid side chains in a protein is thought to be a major contributor to the energetic stabilization of the denatured state and thus a barrier to protein folding. However, a recent study has shown that side-chain conformational entropy can stabilize native structures among alternative compact structures. The conformational entropy of RNA and proteins can be estimated; for example, empirical methods to estimate the loss of conformational entropy in a particular side chain on incorporation into a folded protein can roughly predict the effects of particular point mutations in a protein. Side-chain conformational entropies can be defined as Boltzmann sampling over all possible rotameric states:
S = −R Σ_i p_i ln(p_i)
where R is the gas constant and p_i is the probability of a residue being in rotameric state i.
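A minimal sketch of this formula, applied to hypothetical occupancy probabilities for the rotameric states of a single side chain:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def conformational_entropy(probabilities):
    """S = -R * sum(p_i * ln(p_i)) over the occupied rotameric states."""
    return -R * sum(p * math.log(p) for p in probabilities if p > 0.0)

# Hypothetical rotamer occupancies for one side chain (must sum to 1).
p = [0.6, 0.3, 0.1]
print(conformational_entropy(p), "J/(mol*K)")
```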
|
https://en.wikipedia.org/wiki/Ultraviolet%20germicidal%20irradiation
|
Ultraviolet germicidal irradiation (UVGI) is a disinfection technique employing ultraviolet (UV) light, particularly UV-C (180-280 nm), to kill or inactivate microorganisms. UVGI primarily inactivates microbes by damaging their genetic material, thereby inhibiting their capacity to carry out vital functions.
The use of UVGI extends to an array of applications, encompassing food, surface, air, and water disinfection. UVGI devices can inactivate microorganisms including bacteria, viruses, fungi, molds, and other pathogens. Recent studies have substantiated the ability of UV-C light to inactivate SARS-CoV-2, the strain of coronavirus that causes COVID-19.
UV-C wavelengths demonstrate varied germicidal efficacy and effects on biological tissue. Many germicidal lamps like low-pressure mercury (LP-Hg) lamps, with peak emissions around 254 nm, contain UV wavelengths that can be hazardous to humans. As a result, UVGI systems have been primarily limited to applications where people are not directly exposed, including hospital surface disinfection, upper-room UVGI, and water treatment. More recently, the application of wavelengths between 200-235 nm, often referred to as far-UVC, has gained traction for surface and air disinfection. These wavelengths are regarded as much safer due to their significantly reduced penetration into human tissue.
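A common simplified model treats inactivation as roughly exponential in the delivered UV dose (fluence = irradiance × exposure time), with an organism- and wavelength-dependent susceptibility constant. The sketch below is purely illustrative; all numbers are hypothetical:

```python
import math

# Hypothetical inputs: UV-C irradiance at the target (mW/cm^2), exposure time (s),
# and a susceptibility constant k (cm^2/mJ) for some organism near 254 nm.
irradiance = 0.2      # mW/cm^2
exposure = 30.0       # s
k = 0.5               # cm^2/mJ (illustrative only)

dose = irradiance * exposure                 # mJ/cm^2, since 1 mW*s = 1 mJ
surviving_fraction = math.exp(-k * dose)
log_reduction = -math.log10(surviving_fraction)
print(f"dose = {dose:.1f} mJ/cm^2, about {log_reduction:.1f} log10 reduction")
```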
Notably, UV-C light is virtually absent in sunlight reaching the Earth's surface due to the absorptive properties of the ozone layer within the atmosphere.
History
Origins of UV germicidal action
The development of UVGI traces back to 1878 when Arthur Downes and Thomas Blunt found that sunlight, particularly its shorter wavelengths, hindered microbial growth. Expanding upon this work, Émile Duclaux, in 1885, identified variations in sunlight sensitivity among different bacterial species. A few years later, in 1890, Robert Koch demonstrated the lethal effect of sunlight on Mycobacterium tuberculosis, hinting at UVGI's potential for c
|
https://en.wikipedia.org/wiki/Layer%20Jump%20Recording
|
Layer Jump Recording (LJR) is a writing method used for DVD-R DL (Dual Layer).
Overview
LJR permits recording the disc in increments called sessions (see Optical disc authoring), i.e. multi-session recording. It also permits faster closing of the disc by avoiding extraneous padding when the recorded data does not fill up the disc. It thereby overcomes limitations of Sequential Recording (SR), the writing method usually applied to write-once optical media.
The layer jump is a switch (jump) from the layer closer to the laser head (referred to as L0) to the farther layer (referred to as L1), or vice versa. Jumping layers is already necessary for reading multiple-layer optical media (market-released products so far are limited to two layers, although some research prototypes have up to eight layers), as well as for recording them with Sequential Recording. However, during Sequential Recording the layer jump occurs only once, at the position called the Middle Area, while it may occur multiple times with Layer Jump Recording.
Two different Layer Jump methods are defined: Manual Layer Jump and Regular Layer Jump. The former requires the software to specify to the hardware each jump point from layer zero to layer one (the jump from layer one back to layer zero always occurring at the symmetric jump point). The latter requires the software to specify the jumping interval size only once.
This technology was championed by Pioneer Corporation, optical device manufacturer among other things, and introduced to the market in 2005. The physical part of the technology was first specified within DVD Forum, and then a matching device command set was introduced to the Mt Fuji specification (which eventually was replicated within the MMC specification). Later the Layer Jump Recording impacted the UDF file system specification.
Unlike most recording methods, Layer Jump Recording was not unanimously adopted by optical drive manufacturers. The limited backwar
|