https://en.wikipedia.org/wiki/Leucine-rich%20repeat
A leucine-rich repeat (LRR) is a protein structural motif that forms an α/β horseshoe fold. It is composed of repeating 20–30 amino acid stretches that are unusually rich in the hydrophobic amino acid leucine. These tandem repeats commonly fold together to form a solenoid protein domain, termed the leucine-rich repeat domain. Typically, each repeat unit has a beta strand-turn-alpha helix structure, and the assembled domain, composed of many such repeats, has a horseshoe shape with an interior parallel beta sheet and an exterior array of helices. One face of the beta sheet and one side of the helix array are exposed to solvent and are therefore dominated by hydrophilic residues. The region between the helices and sheets is the protein's hydrophobic core, which is tightly sterically packed with leucine residues. Leucine-rich repeats are frequently involved in the formation of protein–protein interactions. Examples Leucine-rich repeat motifs have been identified in a large number of functionally unrelated proteins. The best-known example is the ribonuclease inhibitor, but other proteins such as the tropomyosin regulator tropomodulin and the toll-like receptor also share the motif. In fact, the toll-like receptor possesses 10 successive LRR motifs which serve to bind pathogen- and danger-associated molecular patterns. Although the canonical LRR protein contains approximately one helix for every beta strand, variants that form beta-alpha superhelix folds sometimes have long loops rather than helices linking successive beta strands. One leucine-rich repeat variant domain (LRV) has a novel repetitive structural motif consisting of alternating alpha- and 3₁₀-helices arranged in a right-handed superhelix, lacking the beta sheets present in other leucine-rich repeats. Associated domains Leucine-rich repeats are often flanked by N-terminal and C-terminal cysteine-rich domains, though not always, as is the case with C5orf36. They also co-occur with LRR adjacent domains.
https://en.wikipedia.org/wiki/Alice%20Burks
Alice Burks (née Rowe, August 20, 1920 – November 21, 2017) was an American author of children's books and books about the history of electronic computers. Early life and education Burks was born Alice Rowe in East Cleveland, Ohio, in 1920. She began her undergraduate degree at Oberlin College on a competitive mathematics scholarship and transferred to the University of Pennsylvania in Philadelphia, where she completed her B.A. in mathematics in 1944. During this period, she was employed as a human computer at the University of Pennsylvania's Moore School of Electrical Engineering. Career Burks retired from full-time employment after marrying Moore School lecturer Dr. Arthur Burks, a mathematician who served as one of the principal engineers in the construction of the ENIAC, the world's first general-purpose electronic digital computer, built at the Moore School between 1943 and 1946. Unlike some of the Moore School women computers, she never worked directly with the ENIAC. At the conclusion of Arthur's work with the Moore School and at the Institute for Advanced Study in 1946, Burks moved with her husband to Ann Arbor, Michigan, where he joined the faculty of the University of Michigan and helped to found the computer science department. She returned to school, earning an M.S. in educational psychology in 1957 from Michigan. Starting in the 1970s, following the decision in Honeywell v. Sperry Rand, the federal court case that invalidated the ENIAC patent, she and her husband Arthur championed the work of John Vincent Atanasoff, the Iowa State College physics professor who, the court ruled, had invented the first electronic digital computer (a machine that came to be called the Atanasoff–Berry Computer) and from whom the subject matter of the ENIAC was ruled to be derived. In articles and two books, the first co-authored with Arthur, Mrs. Burks sought to bolster the judge's decision and highlight testimony and evidence from the case.
https://en.wikipedia.org/wiki/Failing%20badly
Failing badly and failing well are concepts in systems security and network security (and engineering in general) describing how a system reacts to failure. The terms have been popularized by Bruce Schneier, a cryptographer and security consultant. Failing badly A system that fails badly is one that has a catastrophic result when failure occurs. A single point of failure can thus bring down the whole system. Examples include: Databases (such as credit card databases) protected only by a password. Once this security is breached, all data can be accessed. Fracture critical structures, such as buildings or bridges, that depend on a single column or truss, whose removal would cause a chain reaction collapse under normal loads. Security checks which concentrate on establishing identity, not intent (thus allowing, for example, suicide attackers to pass). Internet access provided by a single service provider. If the provider's network fails, all Internet connectivity is lost. Systems, including social ones, that rely on a single person, who, if absent or permanently unavailable, halts the entire system. Brittle materials, such as "over-reinforced concrete", which, when overloaded, fail suddenly and catastrophically with no warning. Keeping the only copy of data in one central place. That data is lost forever when that place is damaged, as in the 1836 U.S. Patent Office fire, the 1973 U.S. National Personnel Records Center fire, and the destruction of the Library of Alexandria. Failing well A system that fails well is one that compartmentalizes or contains its failure. Examples include: Compartmentalized hulls in watercraft, ensuring that a hull breach in one compartment will not flood the entire vessel. Databases that do not allow downloads of all data in one attempt, limiting the amount of compromised data. Structurally redundant buildings conceived to resist loads beyond those expected under normal circumstances, or to resist loads when the structure is damaged.
https://en.wikipedia.org/wiki/CTL%2A
CTL* is a superset of computational tree logic (CTL) and linear temporal logic (LTL). It freely combines path quantifiers and temporal operators. Like CTL, CTL* is a branching-time logic. The formal semantics of CTL* formulae are defined with respect to a given Kripke structure. History LTL had been proposed for the verification of computer programs, first by Amir Pnueli in 1977. Four years later, in 1981, E. M. Clarke and E. A. Emerson invented CTL and CTL model checking. CTL* was defined by E. A. Emerson and Joseph Y. Halpern in 1983. CTL and LTL were developed independently before CTL*. Both sublogics have become standards in the model checking community, while CTL* is of practical importance because it provides an expressive testbed for representing and comparing these and other logics. This is surprising because the computational complexity of model checking in CTL* is not worse than that of LTL: they both lie in PSPACE. Syntax The language of well-formed CTL* formulae is generated by the following unambiguous (with respect to bracketing) context-free grammar: Φ ::= p | ¬Φ | (Φ ∧ Φ) | (Φ ∨ Φ) | (Φ ⇒ Φ) | (Φ ⇔ Φ) | A Ψ | E Ψ and Ψ ::= Φ | ¬Ψ | (Ψ ∧ Ψ) | (Ψ ∨ Ψ) | X Ψ | F Ψ | G Ψ | (Ψ U Ψ), where p ranges over a set of atomic formulas. Valid CTL*-formulae are built using the nonterminal Φ. These formulae are called state formulae, while those created by the symbol Ψ are called path formulae. (The above grammar contains some redundancies; for example, disjunction, as well as implication and equivalence, can be defined, just as for Boolean algebras (or propositional logic), from negation and conjunction, and the temporal operators X and U are sufficient to define the other two.) The operators are basically the same as in CTL. However, in CTL, every temporal operator (X, F, G, U) has to be directly preceded by a path quantifier, while in CTL* this is not required. The universal path quantifier may be defined in CTL* in the same way as for classical predicate calculus, as A Ψ ≡ ¬E ¬Ψ, although this is not possible in the CTL fragment. Examples of formulae include a CTL* formula that is neither in LTL nor in CTL, and an LTL formula that is not in CTL.
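For reference, the redundancies noted above can be made explicit with the standard definitions (textbook identities, stated here in LaTeX rather than quoted from the article):

```latex
% Universal path quantifier from E by duality:
A\,\psi \;\equiv\; \neg E\,\neg \psi
% F ("eventually") and G ("always") from U and negation:
F\,\psi \;\equiv\; \top \,U\, \psi, \qquad G\,\psi \;\equiv\; \neg F\,\neg \psi
% Disjunction (and similarly implication, equivalence) from negation and conjunction:
\varphi \lor \psi \;\equiv\; \neg(\neg\varphi \land \neg\psi)
```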
https://en.wikipedia.org/wiki/Curing%20%28food%20preservation%29
Curing is any of various food preservation and flavoring processes of foods such as meat, fish and vegetables, by the addition of salt, with the aim of drawing moisture out of the food by the process of osmosis. Because curing increases the solute concentration in the food and hence decreases its water potential, the food becomes inhospitable to the microbial growth that causes food spoilage. Curing can be traced back to antiquity, and was the primary method of preserving meat and fish until the late 19th century. Dehydration was the earliest form of food curing. Many curing processes also involve smoking, spicing, cooking, or the addition of combinations of sugar, nitrate, and nitrite. Meat preservation in general (of meat from livestock, game, and poultry) comprises the set of all treatment processes for preserving the properties, taste, texture, and color of raw, partially cooked, or cooked meats while keeping them edible and safe to consume. Curing has been the dominant method of meat preservation for thousands of years, although modern developments like refrigeration and synthetic preservatives have begun to complement and supplant it. While meat-preservation processes like curing were mainly developed in order to prevent disease and to increase food security, the advent of modern preservation methods means that in most developed countries curing is instead mainly practised for its cultural value and desirable impact on the texture and taste of food. For less-developed countries, curing remains a key process in the production, transport and availability of meat. Some traditional cured meat (such as authentic Parma ham and some authentic Spanish chorizo and Italian salami) is cured with salt alone. Today, potassium nitrate (KNO3) and sodium nitrite (NaNO2) (in conjunction with salt) are the most common agents in curing meat, because they bond to the myoglobin and act as a substitute for oxygen, thus turning myoglobin red. More recent evidence shows that these agents also inhibit the growth of the bacterium Clostridium botulinum, the cause of botulism.
https://en.wikipedia.org/wiki/Incompressible%20string
An incompressible string is a string with Kolmogorov complexity equal to its length, so that it has no shorter encodings. Example Suppose we have the string 12349999123499991234, and we are using a compression method that works by putting a special character into the string (say @) followed by a value that points to an entry in a lookup table (or dictionary) of repeating values. Let us imagine we have an algorithm that examines the string in 4 character chunks. Looking at our string, our algorithm might pick out the values 1234 and 9999 to place into its dictionary. Let us say that 1234 is entry 0 and 9999 is entry 1. Now the string can become: @0@1@0@1@0 This string is much shorter, although storing the dictionary itself will cost some space. However, the more repeats there are in the string, the better the compression will be. Our algorithm can do better though, if it can view the string in chunks larger than 4 characters. Then it can put 12349999 and 1234 into the dictionary, giving us: @0@0@1 This string is even shorter. Now consider another string: 1234999988884321 This string is incompressible by our algorithm. The only repeats that occur are 88 and 99. If we were to store 88 and 99 in our dictionary, we would produce: 1234@1@1@0@04321 This is just as long as the original string, because our placeholders for items in the dictionary are 2 characters long, and the items they replace are the same length. Hence, this string is incompressible by our algorithm.
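The toy scheme just described is easy to make concrete in code. The sketch below is illustrative only (fixed non-overlapping chunks, no accounting for the cost of storing the dictionary itself, and indices assigned by order of first appearance, so the labels may differ from the prose above):

```python
def compress(s: str, size: int = 4) -> tuple[str, list[str]]:
    """Toy dictionary compressor: split s into fixed, non-overlapping
    chunks and replace each chunk that occurs more than once with an
    '@<index>' reference into a lookup table."""
    chunks = [s[i:i + size] for i in range(0, len(s), size)]
    # Table = repeated chunks, indexed in order of first appearance.
    table = [c for c in dict.fromkeys(chunks) if chunks.count(c) > 1]
    out = "".join(f"@{table.index(c)}" if c in table else c for c in chunks)
    return out, table

print(compress("12349999123499991234"))      # ('@0@1@0@1@0', ['1234', '9999'])
print(compress("1234999988884321", size=2))  # ('1234@0@0@1@14321', ['99', '88'])
```

The second call reproduces the incompressibility example: every two-character placeholder is exactly as long as the two characters it replaces, so the 16-character output is no shorter than the 16-character input.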
https://en.wikipedia.org/wiki/Jerusalem%20Biblical%20Zoo
The Tisch Family Biblical Zoo in Jerusalem, popularly known as the Jerusalem Biblical Zoo, is a zoo located in the Malha neighborhood of Jerusalem. It is famous for its Afro-Asiatic collection of wildlife, many of which are described in the Hebrew Bible, as well as for its success in breeding endangered species. According to Dun & Bradstreet, the Biblical Zoo was the most popular tourist attraction in Israel from 2005 to 2007, and logged a record 738,000 visitors in 2009. The zoo had about 55,000 members in 2009. History Downtown Jerusalem (1940–1947) The Jerusalem Biblical Zoo opened in September 1940 as a small "animal corner" on Rabbi Kook Street in central Jerusalem. The zoo was founded by Aharon Shulov, a professor of zoology at the Hebrew University of Jerusalem, Mount Scopus. Among Shulov's goals were to provide a research facility for his students; to gather animals, reptiles and birds mentioned in the Bible; and, as he wrote in 1951, to break down the "invisible wall" between the intellectuals on Mount Scopus and the general public. Early on, the zoo ran into several difficulties in its decision to focus on animals mentioned in the Bible. For one, the meaning of many names of animals, reptiles and birds in the Scriptures is often uncertain; for example, nesher, commonly translated as "eagle", could also mean "vulture". More significantly, many of the animals mentioned in the Bible are now extinct in Israel due to over-hunting, destruction of natural habitats by rapid construction and development, illegal poisoning by farmers, and low birth rates. Zoo planners decided to branch beyond strictly biblical animals and include worldwide endangered species as well. The presence of the animal corner generated many complaints from residents in adjoining buildings due to the smell and noise, as well as the perceived danger of animal escapes. Due to the complaints, the zoo relocated in 1941 to a lot on Shmuel HaNavi Street. Here, too, complaints were heard.
https://en.wikipedia.org/wiki/Computational%20complexity%20of%20mathematical%20operations
The following tables list the computational complexity of various algorithms for common mathematical operations. Here, complexity refers to the time complexity of performing computations on a multitape Turing machine. See big O notation for an explanation of the notation used. Note: Due to the variety of multiplication algorithms, M(n) below stands in for the complexity of the chosen multiplication algorithm. Arithmetic functions This table lists the complexity of mathematical operations on integers. On stronger computational models, specifically a pointer machine and consequently also a unit-cost random-access machine, it is possible to multiply two n-bit numbers in time O(n). Algebraic functions Here we consider operations over polynomials, and n denotes their degree; for the coefficients we use a unit-cost model, ignoring the number of bits in a number. In practice this means that we assume them to be machine integers. Special functions Many of the methods in this section are given in Borwein & Borwein. Elementary functions The elementary functions are constructed by composing arithmetic operations, the exponential function (exp), the natural logarithm (log), trigonometric functions (sin, cos), and their inverses. The complexity of an elementary function is equivalent to that of its inverse, since all elementary functions are analytic and hence invertible by means of Newton's method. In particular, if either exp or log in the complex domain can be computed with some complexity, then that complexity is attainable for all other elementary functions. Below, the size n refers to the number of digits of precision at which the function is to be evaluated. It is not known whether O(M(n) log n) is the optimal complexity for elementary functions. The best known lower bound is the trivial bound Ω(M(n)). Non-elementary functions Mathematical constants This table gives the complexity of computing approximations to the given constants to n correct digits. Number theory Algorithms for number theoretical calculations are studied in computational number theory.
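The claim that an elementary function and its inverse have equivalent complexity can be illustrated with Newton's method. The sketch below (Python, ordinary floating point rather than n-digit arithmetic, with an initial guess built from the floating-point exponent) inverts exp to obtain log; since convergence is quadratic, only O(1) full-precision evaluations of exp are needed:

```python
import math

def log_newton(y: float, steps: int = 6) -> float:
    """Compute log(y), y > 0, by inverting exp with Newton's method:
    solve f(x) = exp(x) - y = 0, i.e. x_{k+1} = x_k - 1 + y * exp(-x_k).
    Each step roughly doubles the number of correct digits."""
    m, e = math.frexp(y)                    # y = m * 2**e with 0.5 <= m < 1
    x = (m - 1.0) + e * 0.6931471805599453  # seed: log(m) ~ m - 1, plus e*log(2)
    for _ in range(steps):
        x += y * math.exp(-x) - 1.0         # Newton update
    return x

print(log_newton(10.0), math.log(10.0))     # both ~2.302585092994046
```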
https://en.wikipedia.org/wiki/Internal%20sort
An internal sort is any data sorting process that takes place entirely within the main memory of a computer. This is possible whenever the data to be sorted is small enough to all be held in the main memory. When the data does not fit, it must instead be processed in chunks, with the remainder held on some larger but slower external medium, like a hard-disk. Any reading or writing of data to and from this slower media can slow the sorting process considerably. This issue has implications for different sort algorithms. Some common internal sorting algorithms include: bubble sort, insertion sort, quicksort, heapsort, radix sort, and selection sort. Consider a bubble sort, where adjacent records are swapped in order to get them into the right order, so that records appear to “bubble” up and down through the dataspace. If this has to be done in chunks, then when we have sorted all the records in chunk 1, we move on to chunk 2, but we find that some of the records in chunk 1 need to “bubble through” chunk 2, and vice versa (i.e., there are records in chunk 2 that belong in chunk 1, and records in chunk 1 that belong in chunk 2 or later chunks). This will cause the chunks to be read and written back to disk many times as records cross over the boundaries between them, resulting in a considerable degradation of performance. If the data can all be held in memory as one large chunk, then this performance hit is avoided. On the other hand, some algorithms handle external sorting rather better. A merge sort breaks the data up into chunks, sorts the chunks by some other algorithm (maybe bubble sort or quicksort) and then recombines the chunks two by two so that each recombined chunk is in order. This approach minimises the number of reads and writes of data-chunks from disk, and is a popular external sort method.
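The merge-based external sort described above can be sketched in a few lines. This is an illustrative Python sketch, not a tuned implementation: it assumes line-oriented, newline-terminated input, a chunk size small enough to fit in memory, and it uses a single k-way merge (heapq.merge) in place of the pairwise recombination described in the text; the names are mine:

```python
import heapq
import os
import tempfile

def _write_run(sorted_lines):
    """Write one sorted chunk ("run") to a temporary file on disk."""
    f = tempfile.NamedTemporaryFile("w", delete=False, suffix=".run")
    f.writelines(sorted_lines)
    f.close()
    return f.name

def external_sort(lines_in, out_path, chunk_size=100_000):
    """Sort chunks in memory, then k-way merge the sorted runs."""
    runs, chunk = [], []
    for line in lines_in:                      # Phase 1: sorted runs
        chunk.append(line)
        if len(chunk) >= chunk_size:
            runs.append(_write_run(sorted(chunk)))
            chunk = []
    if chunk:
        runs.append(_write_run(sorted(chunk)))
    files = [open(p) for p in runs]            # Phase 2: k-way merge
    with open(out_path, "w") as out:
        out.writelines(heapq.merge(*files))
    for f in files:
        f.close()
        os.remove(f.name)
```

Each run is read sequentially exactly once during the merge, which is the property that makes the approach disk-friendly.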
https://en.wikipedia.org/wiki/Correspondence%20problem
The correspondence problem refers to the problem of ascertaining which parts of one image correspond to which parts of another image, where differences are due to movement of the camera, the passage of time, and/or movement of objects in the photos. Correspondence is a fundamental problem in computer vision — influential computer vision researcher Takeo Kanade famously once said that the three fundamental problems of computer vision are: “Correspondence, correspondence, and correspondence!” Indeed, correspondence is arguably the key building block in many related applications: optical flow (in which the two images are successive in time), dense stereo vision (in which two images are from a stereo camera pair), structure from motion (SfM) and visual SLAM (in which images are from different but partially overlapping views of a scene), and cross-scene correspondence (in which images are from different scenes entirely). Overview Given two or more images of the same 3D scene, taken from different points of view, the correspondence problem refers to the task of finding a set of points in one image which can be identified as the same points in another image. To do this, points or features in one image are matched with the corresponding points or features in another image, thus establishing corresponding points or corresponding features, also known as homologous points or homologous features. The images can be taken from a different point of view, at different times, or with objects in the scene in general motion relative to the camera(s). The correspondence problem can occur in a stereo situation when two images of the same scene are used, or can be generalised to the N-view correspondence problem. In the latter case, the images may come from either N different cameras photographing at the same time or from one camera which is moving relative to the scene. The problem is made more difficult when the objects in the scene are in motion relative to the camera(s).
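In practice, sparse correspondences are often established by detecting and matching local features. A minimal sketch with OpenCV (assuming two overlapping photographs left.png and right.png, which are hypothetical file names; ORB is one of several possible detectors):

```python
import cv2

img1 = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)          # detector + binary descriptor
kp1, des1 = orb.detectAndCompute(img1, None)  # keypoints and descriptors
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force matching on Hamming distance; cross-checking keeps only
# mutual best matches, a cheap way to suppress false correspondences.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

for m in matches[:10]:                        # best putative correspondences
    print(kp1[m.queryIdx].pt, "->", kp2[m.trainIdx].pt)
```

Real pipelines typically add a geometric verification step on top of this, such as RANSAC against the epipolar constraint.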
https://en.wikipedia.org/wiki/Multifractal%20system
A multifractal system is a generalization of a fractal system in which a single exponent (the fractal dimension) is not enough to describe its dynamics; instead, a continuous spectrum of exponents (the so-called singularity spectrum) is needed. Multifractal systems are common in nature. They include the length of coastlines, mountain topography, fully developed turbulence, real-world scenes, heartbeat dynamics, human gait and activity, human brain activity, and natural luminosity time series. Models have been proposed in various contexts ranging from turbulence in fluid dynamics to internet traffic, finance, image modeling, texture synthesis, meteorology, geophysics and more. The origin of multifractality in sequential (time series) data has been attributed to mathematical convergence effects related to the central limit theorem that have as foci of convergence the family of statistical distributions known as the Tweedie exponential dispersion models, as well as the geometric Tweedie models. The first convergence effect yields monofractal sequences, and the second convergence effect is responsible for variation in the fractal dimension of the monofractal sequences. Multifractal analysis is used to investigate datasets, often in conjunction with other methods of fractal and lacunarity analysis. The technique entails distorting datasets extracted from patterns to generate multifractal spectra that illustrate how scaling varies over the dataset. Multifractal analysis has been used to decipher the generating rules and functionalities of complex networks. Multifractal analysis techniques have been applied in a variety of practical situations, such as predicting earthquakes and interpreting medical images. Definition In a multifractal system s, the behavior around any point x is described by a local power law: s(x + a) − s(x) ~ a^h(x). The exponent h(x) is called the singularity exponent, as it describes the local degree of singularity or regularity around the point x. The ensemble formed by all the points that share the same singularity exponent is called the singularity manifold of exponent h; it is a fractal set whose fractal dimension D(h) defines the singularity spectrum.
https://en.wikipedia.org/wiki/Murnong
The murnong or yam daisy is any of the plants Microseris walteri, Microseris lanceolata and Microseris scapigera, which are an important food source for many Aboriginal peoples in southern parts of Australia. Murnong is a Woiwurrung word for the plant, used by the Wurundjeri people and possibly other clans of the Kulin nation. They are called by a variety of names in the many different Aboriginal Australian languages, and occur in many oral traditions as part of Dreamtime stories. The tubers were often dug out with digging sticks and cooked before eating. They were widespread and deliberately cultivated by Aboriginal peoples in some areas, but the hoofed animals introduced by early settlers to Australia destroyed vast areas of habitat, leading to calamitous results for the Indigenous people. The shortage of food led to Aboriginal people stealing food from settlers, in a cycle of violence known as the Australian frontier wars. The township of Myrniong in Victoria was named after the murnong. In the 21st century there has been a revival and recultivation of the plant, and an honouring of it in art. History The roots of the murnong plants were consumed in large quantities by Aboriginal people in the colony of Victoria until the 1840s, when European colonists began using the murnong crop lands for sheep farming. Botanical naming The binomial names of the three species Microseris walteri, Microseris lanceolata and Microseris scapigera are often misidentified, because they were classified under different names until they were clarified in 2016. Murnong is often described as growing a sweet tuber, but this describes Microseris walteri rather than the other two plants, which have bitter roots. For more than 30 years, murnong was named as Microseris sp., Microseris lanceolata or Microseris scapigera. Royal Botanic Gardens Victoria botanist Neville Walsh clarified the botanical name of Microseris walteri in 2016 and defined the differences between the three species.
https://en.wikipedia.org/wiki/Structured%20light
Structured light is the process of projecting a known pattern (often grids or horizontal bars) onto a scene. The way that these deform when striking surfaces allows vision systems to calculate the depth and surface information of the objects in the scene, as used in structured-light 3D scanners. Invisible (or imperceptible) structured light uses structured light without interfering with other computer vision tasks for which the projected pattern will be confusing. Example methods include the use of infrared light or of extremely high frame rates alternating between two exact opposite patterns. Structured light is used by a number of police forces for the purpose of photographing fingerprints in a 3D scene. Where previously they would use tape to extract the fingerprint and flatten it out, they can now use cameras and flatten the fingerprint digitally, which allows the process of identification to begin before the officer has even left the scene. See also Structured illumination microscopy (SIM) Stereoscopy Structured-light 3D scanners, often employed in a multiple-camera setup in conjunction with structured light to capture the geometry of the target Dual photography Light stages, the more advanced of which make use of structured light to capture the geometry of the target; the primary use of a light stage is as an instrumentation setup for reflectance capture Depth map Laser Dynamic Range Imager Lidar Range imaging Kinect Time-of-flight camera External links Projector-Camera Calibration Toolbox Tutorial on Coded Light Projection Techniques Structured light using pseudorandom codes High-accuracy stereo depth maps using structured light A comparative survey on invisible structured light A Real-Time Laser Range Finding Vision System Dual-frequency Pattern Scheme for High-speed 3-D Shape Measurement High-Contrast Color-Stripe Pattern for Rapid Structured-Light Range Imaging
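Why a deformed pattern encodes depth comes down to triangulation. As a rough reference (the standard rectified stereo/structured-light relation, not a formula quoted from this article):

```latex
% Depth from the observed shift (disparity) d of a projected stripe,
% for a projector and camera separated by baseline b with focal length f
% (rectified geometry assumed):
z = \frac{b\,f}{d}
% Nearer surfaces shift the pattern more (larger d), so measuring the
% stripe displacement per pixel recovers a depth map of the scene.
```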
https://en.wikipedia.org/wiki/Surrogate%20model
A surrogate model is an engineering method used when an outcome of interest cannot be easily measured or computed, so an approximate mathematical model of the outcome is used instead. Most engineering design problems require experiments and/or simulations to evaluate design objective and constraint functions as functions of design variables. For example, in order to find the optimal airfoil shape for an aircraft wing, an engineer simulates the airflow around the wing for different shape variables (e.g., length, curvature, material, etc.). For many real-world problems, however, a single simulation can take many minutes, hours, or even days to complete. As a result, routine tasks such as design optimization, design space exploration, sensitivity analysis and "what-if" analysis become impossible since they require thousands or even millions of simulation evaluations. One way of alleviating this burden is by constructing approximation models, known as surrogate models, metamodels or emulators, that mimic the behavior of the simulation model as closely as possible while being computationally cheaper to evaluate. Surrogate models are constructed using a data-driven, bottom-up approach. The exact inner workings of the simulation code are not assumed to be known (or even understood); the surrogate relies solely on the input-output behavior. A model is constructed based on modeling the response of the simulator to a limited number of intelligently chosen data points. This approach is also known as behavioral modeling or black-box modeling, though the terminology is not always consistent. When only a single design variable is involved, the process is known as curve fitting. Though using surrogate models in lieu of experiments and simulations in engineering design is more common, surrogate modeling may be used in many other areas of science where there are expensive experiments and/or function evaluations. Goals The scientific challenge of surrogate modeling is the generation of a surrogate that is as accurate as possible, using as few simulation evaluations as possible.
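A minimal end-to-end sketch of the idea, in Python with scikit-learn (the expensive_simulation function is a stand-in for a solver run that might take hours; the sampling plan and kernel are arbitrary choices for illustration):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_simulation(x):
    # Placeholder for a simulation that is costly to evaluate.
    return np.sin(3 * x) + 0.5 * x

# 1. Sample the design space at a few chosen points.
X_train = np.linspace(0.0, 2.0, 8).reshape(-1, 1)
y_train = expensive_simulation(X_train).ravel()

# 2. Fit the surrogate to the input-output data (black-box view).
surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=0.5))
surrogate.fit(X_train, y_train)

# 3. Query the cheap surrogate instead of the simulator; the predictive
#    uncertainty can guide where to run the real simulation next.
mean, std = surrogate.predict(np.array([[1.23]]), return_std=True)
print(f"surrogate prediction: {mean[0]:.3f} +/- {std[0]:.3f}")
```

A Gaussian process is a common choice here precisely because its predictive standard deviation flags regions where the surrogate is unreliable.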
https://en.wikipedia.org/wiki/Miniemulsion
A miniemulsion (also known as a nanoemulsion) is a particular type of emulsion. A miniemulsion is obtained by shearing a mixture comprising two immiscible liquid phases (for example, oil and water), one or more surfactants and, possibly, one or more co-surfactants (typical examples are hexadecane or cetyl alcohol). They usually have nanodroplets with uniform size distribution (20–500 nm) and are also known as sub-micron, mini-, and ultra-fine grain emulsions. How to prepare a miniemulsion Selection of ingredients: The first step in creating a nanoemulsion is to select the ingredients, which include the oil, water, and emulsifying agent. The type and proportions of these ingredients will affect the stability and properties of the final emulsion. Preparation of oil and aqueous phases: The oil and water phases are separately prepared, with any desired ingredients, such as surfactants or flavoring agents, added at this step. Mixing oil and emulsifier with a stirrer: Next, the oil and water phases are mixed in the presence of an emulsifying agent, typically using a high-shear mixing device such as a homogenizer or a high-pressure homogenizer. Aging and stabilization: The emulsion is typically aged at room temperature to allow the droplets to stabilize, after which it can be cooled or heated as required. Optimizing and characterization: The droplet size and stability are then optimized by adjusting the ingredients and process parameters, such as temperature, pH, and mixing conditions. The nanoemulsion can also be sterilized by filtration through a 0.22 μm filter. Several methods, such as DLS, TEM, and SEM, can characterize the final nanoemulsion's properties. Methods of preparing nanoemulsions/miniemulsions There are two general types of methods for preparing miniemulsions: High-energy methods - For the high-energy methods, the shearing usually proceeds via exposure of the mixture to high-power ultrasound or via a high-pressure homogenizer.
https://en.wikipedia.org/wiki/Lxrun
In Unix computing, lxrun is a compatibility layer to allow Linux binaries to run on UnixWare, SCO OpenServer and Solaris without recompilation. It was created by Mike Davidson. It has been an open source software project since 1997, and is available under the Mozilla Public License. Both SCO and Sun Microsystems began officially supporting lxrun in 1999. Timeline August 22, 1997: lxrun is cited as a proof of concept of cross-platform binary compatibility at the 86open conference hosted by SCO in Santa Cruz, CA. August 29, 1997: lxrun's first mention on Usenet, in comp.unix.sco.misc. Most notably, the post mentions lxrun's availability in source and binary form from the SCO Skunkware FTP site. A later post in the thread mentions contributions by various authors, both inside and outside of SCO. October 1, 1997: The official lxrun website is established. June 19, 1998: Ronald Joe Record, Michael Hopkirk, and Steven Ginzburg present a paper on lxrun at the USENIX 1998 Technical Conference in New Orleans, LA. March 1, 1999: SCO announces Linux compatibility in UnixWare 7 and demonstrates lxrun at LinuxWorld Expo and Conference in San Jose, CA. May 12, 1999: Sun Microsystems announces support for Linux binaries on Solaris using lxrun. Status According to the official lxrun website, as of 2003 lxrun is in "maintenance" mode, meaning that it is no longer being actively developed. Reasons cited for the declining interest in lxrun include the wide availability of real Linux machines, and the availability of more capable emulation systems, such as SCO's Linux Kernel Personality (LKP), OpenSolaris BrandZ, and various virtual machine solutions. Newer Linux applications and host operating systems are not officially supported by lxrun.
https://en.wikipedia.org/wiki/Przybylski%27s%20Star
Przybylski's Star, or HD 101065, is a rapidly oscillating Ap star located in the southern constellation of Centaurus. It has a unique spectrum showing over-abundances of most rare-earth elements, including some short-lived radioactive isotopes, but under-abundances of more common elements such as iron. History In 1961, the Polish-Australian astronomer Antoni Przybylski discovered that this star had a peculiar spectrum that would not fit into the standard framework for stellar classification. Przybylski's observations indicated unusually low amounts of iron and nickel in the star's spectrum, but higher amounts of unusual elements like strontium, holmium, niobium, scandium, yttrium, caesium, neodymium, praseodymium, thorium, ytterbium and uranium. In fact, at first Przybylski doubted that iron was present in the spectrum at all. Modern work shows that the iron-group elements are somewhat below normal in abundance, but it is clear that the lanthanides and other exotic elements are highly over-abundant. Przybylski's Star possibly also contains many different short-lived actinide elements, with actinium, protactinium, neptunium, plutonium, americium, curium, berkelium, californium, and einsteinium having been theoretically detected. The longest-lived known isotope of einsteinium has a half-life of only 472 days, and astrophysicist Stephane Goriely at the Free University of Brussels (ULB) stated in 2017 that the evidence for such actinides is not strong, as “Przybylski’s stellar atmosphere is highly magnetic, stratified and chemically peculiar, so that the interpretation of its spectrum remains extremely complex [and] the presence of such nuclei remains to be confirmed.” Moreover, the lead author of the actinide studies, Vera F. Gopka, directly admits that "the position of lines of the radioactive elements under search were simply visualized in synthetic spectrum as vertical markers because there are no atomic data for these lines except their wavelengths."
https://en.wikipedia.org/wiki/Druid%20%28video%20game%29
Druid is a hack and slash dungeon crawl developed by Electralyte Software and published by Firebird in 1986 for the Atari 8-bit family and Commodore 64. It was also ported to the Amstrad CPC and ZX Spectrum, and by Nippon Dexter in 1988 to the MSX, although the MSX port was released in Japan only. Another Japanese port of Druid was made for the Famicom Disk System by Jaleco in 1988. The game was followed by Druid II: Enlightenment and Warlock: The Avenger. Gameplay Inspired by the arcade game Gauntlet, Druid is a fantasy-themed dungeon crawl where the player plays the part of Hasrinaxx, a druid who is trying to rid the world of the dark mage Acamantor and his army of demons. To do this, Hasrinaxx must travel through several levels. The first level is a normal landscape, and the ones after that are underground, each one deeper than the previous. Each level is infested with various enemies such as ghosts, giant insects, witches, and the four demon princes. Hasrinaxx can shoot these enemies with three different weapons: water, fire, and electricity, but they all come in a finite supply and are not equally effective on all enemies. Using a Chaos spell will destroy all enemies in the player's vicinity as well as replenish energy. The four demon princes can only be defeated with a Chaos spell; common weapons are ineffective against them. Hasrinaxx can also summon a golem to help him or turn invisible for a brief period. In both the Atari 8-bit and Commodore 64 versions, a second player can take control of the golem using joystick port 2.
https://en.wikipedia.org/wiki/Nv%20network
In BEAM robotics, an Nv network is one of the small electrical neural networks that make up the bulk of BEAM-based robot control mechanisms. Building blocks The most basic component included in Nv networks is the Nv neuron. The purpose of an Nv neuron is simply to take an input, do something with it, and give an output. The most common action of Nv neurons is to introduce a delay. BEAM Nv neurons The standard for BEAM-based neurons is a capacitor that has one lead as an input, and the other going into the input line of an inverter. That inverter's output is the output of the neuron. The capacitor lead that feeds the inverter is pulled to ground with a resistor. The neuron functions because when an input is received (positive power on the input line), it charges the capacitor. Once the input is lost (negative power on the input line), the capacitor discharges into the inverter, causing the inverter to produce an output that is passed to the next neuron. The rate at which the capacitor discharges is set by the resistor that pulls the inverter's input to the negative rail. The larger the resistor, the longer it will take for the capacitor to fully discharge, and the longer it will take for that neuron to completely fire. Types There are many common network topologies used in BEAM robots, the most common of which are listed here. Bicore Probably the most utilized Nv net topology in BEAM, the bicore consists of two neurons placed in a loop that alternates current to the output. Input into the loop is given in the form of changing the resistance in each separate neuron, which changes the rate at which the neuron discharges, affecting the pace at which the loop oscillates. Master/slave bicores Another common topology uses two bicores in a master/slave layout, where the master bicore leads the slave and sets the pace, while the slave bicore follows at an offset pace. This layout is most commonly used for dual-motor walkers.
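As a back-of-the-envelope model (my approximation, not a BEAM reference design), the neuron's firing time can be treated as a first-order RC discharge down to the inverter's switching threshold:

```python
import math

def nv_neuron_delay(r_ohms: float, c_farads: float,
                    v_supply: float = 5.0, v_threshold: float = 2.5) -> float:
    """Approximate firing duration of the RC 'neuron' described above:
    the time for the capacitor voltage to decay from v_supply down to
    the inverter's switching threshold, V(t) = V0 * exp(-t / RC)."""
    return r_ohms * c_farads * math.log(v_supply / v_threshold)

# E.g. a 1 MOhm resistor with a 0.22 uF capacitor and a CMOS inverter
# switching near Vdd/2 gives a pulse of roughly 0.15 s:
print(nv_neuron_delay(1e6, 0.22e-6))
```

This matches the qualitative rule in the text: a larger resistor (or capacitor) lengthens the discharge and therefore the neuron's firing time.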
https://en.wikipedia.org/wiki/Caprenin
Caprenin is a fat substitute designed for lowering the caloric content of food. Structurally, it resembles normal food fat, being made up of glycerol and fatty acids (behenic, capric, and caprylic acids). Caprenin contains about 4 kcal per gram, or about half the energy of traditional fats and oils. Caloric reduction results, in part, from incomplete absorption of the unusual fatty acids. Caprenin was launched by Procter & Gamble as a cocoa butter replacement, but it proved difficult to use and appeared to increase serum cholesterol slightly, resulting in its withdrawal from the market in the mid-1990s. It was used as a reduced-calorie substitute in soft candies and confectionery coatings. External links Usable Energy Value of a Synthetic Fat (Caprenin) in Muffins Fed to Rats. New Age Fats & Oils
https://en.wikipedia.org/wiki/Ground%20granulated%20blast-furnace%20slag
Ground granulated blast-furnace slag (GGBS or GGBFS) is obtained by quenching molten iron slag (a by-product of iron and steel-making) from a blast furnace in water or steam, to produce a glassy, granular product that is then dried and ground into a fine powder. Ground granulated blast furnace slag is a latent hydraulic binder forming calcium silicate hydrates (C-S-H) after contact with water. It is a strength-enhancing compound improving the durability of concrete. It is a component of metallurgic cement as standardized in the European norm. Its main advantage is its slow release of hydration heat, allowing limitation of the temperature increase in massive concrete components and structures during cement setting and concrete curing, or allowing concrete to be cast during hot summer weather. Production and composition The chemical composition of a slag varies considerably depending on the composition of the raw materials in the iron production process. Silicate and aluminate impurities from the ore and coke are combined in the blast furnace with a flux which lowers the viscosity of the slag. In the case of pig iron production, the flux consists mostly of a mixture of limestone and forsterite or, in some cases, dolomite. In the blast furnace the slag floats on top of the iron and is decanted for separation. Slow cooling of slag melts results in an unreactive crystalline material consisting of an assemblage of Ca-Al-Mg silicates. To obtain a good slag reactivity or hydraulicity, the slag melt needs to be rapidly cooled or quenched below 800 °C in order to prevent the crystallization of merwinite and melilite. In order to cool and fragment the slag, a granulation process can be applied, in which molten slag is subjected to jet streams of water or air under pressure. Alternatively, in the pelletization process, the liquid slag is partially cooled with water and subsequently projected into the air by a rotating drum. In order to obtain a suitable reactivity, the obtained fragments are ground to reach the same fineness as Portland cement.
https://en.wikipedia.org/wiki/Sugar%20signal%20transduction
Sugar signal transduction is an evolutionarily conserved mechanism used by organisms to survive. Sugars have a profound effect on gene expression. In yeast, glucose levels are managed by controlling the mRNA levels of hexose transporters, while in mammals, the response to glucose is more tightly coupled to glucose metabolism and is therefore much more complex. Several glucose-responsive DNA motifs and DNA-binding protein complexes have been identified in liver and β-cells. Although not proven, glucose repression appears to be conserved in plants because, in many cases, both sugar induction and sugar repression are initiated by turning off transcription factors. See also Glycobiology
https://en.wikipedia.org/wiki/The%20Character%20of%20Physical%20Law
The Character of Physical Law is a series of seven lectures by physicist Richard Feynman concerning the nature of the laws of physics. Feynman delivered the lectures in 1964 at Cornell University, as part of the Messenger Lectures series. The BBC recorded the lectures, and published a book under the same title the following year; Cornell published the BBC's recordings online in September 2015. In 2017 MIT Press published, with a new foreword by Frank Wilczek, a paperback reprint of the 1965 book. Topics The lectures covered the following topics: The law of gravitation, an example of physical law The relation of mathematics and physics The great conservation principles Symmetry in physical law The distinction of past and future Probability and uncertainty - the quantum mechanical view of nature Seeking new laws Reception Critical reception has been positive. The journal The Physics Teacher, in recommending it to both scientists and non-scientists alike, gave The Character of Physical Law a favorable review, writing that although the book was initially intended to supplement the recordings, it was "complete in itself and will appeal to a far wider audience". Selections "In general we look for a new law by the following process. First we guess it. ..." See also QED: The Strange Theory of Light and Matter The Feynman Lectures on Physics
https://en.wikipedia.org/wiki/Alteon%20WebSystems
Alteon WebSystems, originally known as Alteon Networks, is a division of Radware that produces application delivery controllers. Alteon was acquired by Nortel Networks on October 4, 2000. On February 22, 2009 Nortel Networks sold the Alteon application switching line to Radware. History Alteon Networks was founded in 1996 by Mark Bryers, John Hayes, Ted Schroeder and Wayne Hathaway. Initial venture capital investors were Matrix Partners and Sutter Hill Ventures. Dominic Orr became chief executive in October 1996. Alteon introduced innovative products such as the ACEswitch 180, which was the first network switch to deliver Ethernet with selectable speed, 10/100 or 1000 Mbit/s, on every port via autonegotiation. Their ACEdirector Layer 4-7 switch was designed as an integrated services front-end and server load balancer. They also introduced Jumbo Frames (up to 9,000 bytes) with their ACEnic adapters, and supported by their switches. In addition to server switches, Alteon produced the first network interface controller (NIC) in 1997 that used Gigabit Ethernet (demonstrated at the Networld + Interop trade show in September 1996). Alteon's third generation Gigabit Ethernet NIC (code named "Tigon") became the basis for Broadcom's family of Ethernet controllers (series BCM570x) and has shipped over 60 million copies. It was used in low-cost adapters from vendors such as 3Com. In July 2000, Nortel Networks announced it was buying Alteon for US$6 billion in stock. The deal had originally been announced with a value of $7.8 billion, but the stock market plummeted before the deal closed in October. Nortel rolled the ACEDirector and ACESwitch products into its Personal Internet product line, but one year later sales had slowed down. On February 22, 2009 Nortel Networks announced they would sell the Alteon application switching line to Radware, for $17.65 million. In November 2013, Radware announced the Alteon NG, marketed as an application delivery controller.
https://en.wikipedia.org/wiki/Milne%20model
The Milne model was a special-relativistic cosmological model proposed by Edward Arthur Milne in 1935. It is mathematically equivalent to a special case of the FLRW model in the limit of zero energy density, and it obeys the cosmological principle. The Milne model is also similar to Rindler space in that both are simple re-parameterizations of flat Minkowski space. Since it features both zero energy density and maximally negative spatial curvature, the Milne model is inconsistent with cosmological observations. Cosmologists actually observe the universe's density parameter to be consistent with unity and its curvature to be consistent with flatness. Milne metric The Milne universe is a special case of a more general Friedmann–Lemaître–Robertson–Walker model (FLRW). The Milne solution can be obtained from the more generic FLRW model by demanding that the energy density, pressure and cosmological constant all equal zero and that the spatial curvature is negative. From these assumptions and the Friedmann equations it follows that the scale factor must depend linearly on the time coordinate, a(t) ∝ t. Setting the spatial curvature and speed of light to unity, the metric for a Milne universe can be expressed with hyperspherical coordinates as ds² = dt² − t²(dχ² + sinh²χ dΩ²), where dΩ² is the metric for a two-sphere and sinh χ is the curvature-corrected radial component for negatively curved space, varying between 0 and ∞. The empty space that the Milne model describes can be identified with the inside of a light cone of an event in Minkowski space by a change of coordinates. Milne developed this model independently of general relativity but with awareness of special relativity. As he initially described it, the model has no expansion of space, so all of the redshift (except that caused by peculiar velocities) is explained by a recessional velocity associated with the hypothetical "explosion". However, the mathematical equivalence of the zero energy density (ρ = 0) version of the FLRW metric to Milne's model implies that a full general relativistic treatment using Milne's assumptions would result in a linearly increasing scale factor for all time, since the deceleration parameter is uniquely zero for such a model.
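The identification with the interior of a light cone mentioned above is the standard change of coordinates; sketching the usual calculation (c = 1):

```latex
% Milne metric (k = -1, \rho = 0):
ds^2 = dt^2 - t^2\left(d\chi^2 + \sinh^2\!\chi \, d\Omega^2\right)

% Change to Minkowski time T and radius R:
T = t\cosh\chi, \qquad R = t\sinh\chi
\;\Rightarrow\; ds^2 = dT^2 - dR^2 - R^2\,d\Omega^2

% i.e. flat Minkowski space; the Milne coordinates cover the region
% T > |R|, the interior of the future light cone of the origin.
```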
https://en.wikipedia.org/wiki/Bloodstopping
Bloodstopping refers to an American folk practice once common in the Ozarks and the Appalachians, Canadian lumber camps and the northern woods of the United States. It was believed (and still is) that certain persons, known as bloodstoppers, could halt bleeding in humans and animals by supernatural means. The most common method was to walk east and recite Ezekiel 16:6, referred to as the blood verse: "And when I passed by thee, and saw thee polluted in thine own blood, I said unto thee when thou wast in thy blood, Live; yea, I said unto thee when thou wast in thy blood, Live." History Bloodstopping was used in areas of North America where modern medicine was not readily available. Many of these communities had one or two bloodstoppers, and since bloodstoppers were able to help when doctors were unavailable, they became very popular and well respected in their communities. Bloodstopping was practised mostly in the Ozarks, in the states of Illinois, Missouri, and Arkansas. Each bloodstopper used their own technique to treat wounds. The person performing the bloodstopping must have been given the power to do so. The gift was mostly passed down through family (older to younger), and only to a member of the opposite sex. It could be told to only three people, with the third person gaining the power. The person performing it did not need to believe in it fully or be sinless, since the blood verse was held to be so powerful. Throughout Europe, Christianity was becoming the main religion; however, those who lived in rural areas were not as quick to convert, remaining attached to the polytheistic traditions they had followed for many years. German settlers who ended up in the Appalachians had many folk beliefs about magic. At first they used the stars to determine planting cycles and to predict the weather. When medical treatment became scarce, they turned to other forms of medicine. This is when bloodstopping became a practice.
https://en.wikipedia.org/wiki/MaxDiff
MaxDiff is a long-established theory in mathematical psychology with very specific assumptions about how people make choices: it assumes that respondents evaluate all possible pairs of items within the displayed set and choose the pair that reflects the maximum difference in preference or importance. It may be thought of as a variation of the method of paired comparisons. Consider a set in which a respondent evaluates four items: A, B, C and D. If the respondent says that A is best and D is worst, these two responses inform us on five of six possible implied paired comparisons: A > B, A > C, A > D, B > D, and C > D. The only paired comparison that cannot be inferred is B vs. C. In a choice like the one above, with four items, MaxDiff questioning informs on five of six implied paired comparisons. In a choice among five items, MaxDiff questioning informs on seven of ten implied paired comparisons. The share of implied paired comparisons revealed by a single best-worst response can be expressed as (2n − 3) / (n(n − 1)/2), where n represents the total number of items displayed. The formula makes it clear that the effectiveness of this method of inferring relations drastically decreases as n grows bigger. Overview In 1938 Richardson introduced a choice method in which subjects reported the most alike pair of a triad and the most different pair. The component of this method involving the most different pair may properly be called "MaxDiff", in contrast to a "most-least" or "best-worst" method where both the most different pair and the direction of difference are obtained. Ennis, Mullen and Frijters (1988) derived a unidimensional Thurstonian scaling model for Richardson's method of triads so that the results could be scaled under normality assumptions about the item percepts. MaxDiff may involve multidimensional percepts, unlike most-least models that assume a unidimensional representation. MaxDiff and most-least methods belong to a class of methods that do not require the estimation of a cognitive parameter as occurs in the analysis of ratings data.
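The counting argument is easy to sanity-check programmatically. A toy Python sketch (the names are mine, not part of any MaxDiff tooling):

```python
from itertools import combinations

def inferred_pairs(items, best, worst):
    """Paired comparisons implied by one best/worst response: 'best'
    beats everything shown, and everything shown beats 'worst'."""
    known = {(best, x) for x in items if x != best}
    known |= {(x, worst) for x in items if x not in (best, worst)}
    return known

items = ["A", "B", "C", "D"]
known = inferred_pairs(items, best="A", worst="D")
total = len(list(combinations(items, 2)))
print(len(known), "of", total)   # 5 of 6; with five items it is 7 of 10
```

With five items the same function reports 7 of 10, matching the text, and the ratio (2n − 3)/(n(n − 1)/2) shrinks steadily as n grows.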
https://en.wikipedia.org/wiki/Pake%20doublet
A Pake doublet (or "Pake pattern") is a characteristic line shape seen in solid-state nuclear magnetic resonance and electron paramagnetic resonance spectroscopy. It was first described by George Pake. It arises from dipolar coupling between two isolated spin-1/2 nuclei, or from transitions in quadrupolar nuclei such as deuterium. It is the general shape obtained from an orientationally dependent doublet. The "horns" of the Pake doublet correspond to the situation when the principal axis of the coupling interaction (the internuclear vector in the case of dipolar coupling, and the principal component of the electric field gradient tensor for quadrupolar nuclei) is perpendicular to the magnetic field. This situation is the most probable, so the intensity there is much higher. The "feet" of the lineshape correspond to the situation when the principal axis of the coupling interaction is parallel to the magnetic field, which is statistically much less likely. Pake was the first to describe this lineshape and used it to extract the proton-proton distance from his experiments on single-crystal and powdered hydrates of gypsum (CaSO4·2H2O). This made it possible to experimentally determine the internuclear distance between the hydrogen atoms in water. In solids with vacant positions, dipole coupling is averaged partially due to water diffusion, which proceeds according to the symmetry of the solid and the probability distribution of molecules between the vacancies. In such cases the averaged lineshape is used to analyze crystal symmetry, phase transitions, and the degree of molecular disorder in crystalline hydrates, zeolites, clays and biological tissues.
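For reference, the orientation dependence behind the "horns" and "feet" takes the standard dipolar form (a textbook proportionality, not quoted from this article):

```latex
% Frequency offset of the two branches of the doublet for a
% dipolar-coupled spin pair (\theta is the angle between the principal
% axis of the interaction and the magnetic field):
\nu_{\pm}(\theta) - \nu_0 \;\propto\; \pm\,(3\cos^2\theta - 1)

% Powder weighting of orientations:
P(\theta)\,d\theta \;\propto\; \sin\theta\,d\theta
% maximal at \theta = 90^{\circ} (the "horns"), vanishing at \theta = 0
% (the "feet").
```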
https://en.wikipedia.org/wiki/SYN%20cookies
SYN cookie is a technique used to resist SYN flood attacks. The technique's primary inventor Daniel J. Bernstein defines SYN cookies as "particular choices of initial TCP sequence numbers by TCP servers." In particular, the use of SYN cookies allows a server to avoid dropping connections when the SYN queue fills up. Instead of storing additional connections, a SYN queue entry is encoded into the sequence number sent in the SYN+ACK response. If the server then receives a subsequent ACK response from the client with the incremented sequence number, the server is able to reconstruct the SYN queue entry using information encoded in the TCP sequence number and proceed as usual with the connection. Implementation In order to initiate a TCP connection, the client sends a TCP SYN packet to the server. In response, the server sends a TCP SYN+ACK packet back to the client. One of the values in this packet is a sequence number, which is used by TCP to reassemble the data stream. According to the TCP specification, that first sequence number sent by an endpoint can be any value as decided by that endpoint. As the sequence number is chosen by the sender, returned by the recipient, and has no otherwise-defined internal structure, it can be overloaded to carry additional data. The following describes one possible implementation; however, as there is no public standard to follow, the order, length, and semantics of the fields may differ between SYN cookie implementations. SYN cookies are initial sequence numbers that are carefully constructed according to the following rules: let t be a slowly incrementing timestamp (typically the time value logically right-shifted 6 positions, which gives a resolution of 64 seconds); let m be the maximum segment size (MSS) value that the server would have stored in the SYN queue entry; let s be the result of a cryptographic hash function computed over the server IP address and port number, the client IP address and port number, and the value t. The returned sequence number is then assembled as: t mod 32 in the top 5 bits, an encoding of m in the next 3 bits, and the lower 24 bits of s in the bottom 24 bits.
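The field layout just described is straightforward to prototype. The following Python sketch follows that layout but is illustrative only: the key handling, HMAC choice and MSS table are my own placeholders, and production implementations additionally bind the MSS bits into the validated data:

```python
import hashlib
import hmac
import time

SECRET = b"server-local secret key"        # placeholder key material
MSS_TABLE = [536, 1300, 1440, 1460]        # 3 bits allow up to 8 entries

def _hash24(saddr, sport, daddr, dport, t) -> int:
    """24-bit keyed hash over the connection 4-tuple and timestamp t."""
    msg = f"{saddr}:{sport}-{daddr}:{dport}-{t}".encode()
    return int.from_bytes(hmac.new(SECRET, msg, hashlib.sha256).digest()[:3], "big")

def make_cookie(saddr, sport, daddr, dport, mss_index: int) -> int:
    t = int(time.time()) >> 6              # 64-second resolution
    return ((t % 32) << 27                 # top 5 bits: t mod 32
            | (mss_index & 0x7) << 24      # next 3 bits: encoding of m
            | _hash24(saddr, sport, daddr, dport, t))  # bottom 24 bits: s

def check_cookie(cookie: int, saddr, sport, daddr, dport):
    """Return the encoded MSS if the cookie verifies against the current
    or previous time slot, else None."""
    now = int(time.time()) >> 6
    for t in (now, now - 1):
        if ((cookie >> 27) == t % 32
                and (cookie & 0xFFFFFF) == _hash24(saddr, sport, daddr, dport, t)):
            return MSS_TABLE[(cookie >> 24) & 0x7]
    return None
```

The server keeps no per-connection state between SYN and ACK: everything needed to rebuild the SYN queue entry is recomputed from the returned sequence number, which is exactly the property the text describes.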
https://en.wikipedia.org/wiki/Self-extracting%20archive
A self-extracting archive (SFX or SEA) is a computer executable program which contains compressed data in an archive file combined with machine-executable program instructions to extract this information on a compatible operating system, without the necessity for a suitable extractor to be already installed on the target computer. The executable part of the file is known as a decompressor stub. Self-extracting files are used to share compressed files with a party that may not necessarily have the software to decompress a regular archive. Users can also use self-extracting archives to distribute their own software. For example, the WinRAR installation program is made using the graphical RAR self-extracting module Default.sfx. Overview A self-extracting archive contains an executable module, which is used to extract and run the compressed files it carries. Such an archive does not require an external program to decompress its contents; it can run the operation itself. However, file archivers like WinRAR can still treat a self-extracting file as though it were any other type of compressed file. By using a file archiver, users can view or decompress self-extracting files they received without running executable code (for example, if they are concerned about viruses). A self-extracting archive is extracted and stored on a disk when executed under an operating system that supports it. Many embedded self-extractors support a number of command line arguments, such as specifying the target location or selecting only specific files. Unlike self-extracting archives, non-self-extracting archives only contain archived files and must be extracted with a program that is compatible with them. While a self-extracting archive cannot be run under a different operating system, it can usually still be opened using a suitable extractor, as this tool will disregard the executable part of the file and extract only the archive resource.
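The "readable both ways" behaviour is easy to demonstrate with the zip format, whose index lives at the end of the file, so archive readers skip any prepended stub. A toy Python sketch following the zipapp convention (file names are illustrative; it relies on zipfile's documented behaviour of appending an archive to a non-zip file):

```python
# make_sfx.py -- builds a tiny self-extracting archive: an interpreter
# line prepended to a zip whose __main__.py extracts the payload.
import zipfile

STUB_MAIN = """\
import sys, zipfile
# Decompressor stub: zip readers locate the archive from the END of the
# file, so the prepended interpreter line is simply ignored.
with zipfile.ZipFile(sys.argv[0]) as z:
    z.extractall("extracted")
print("payload extracted to ./extracted")
"""

with open("sfx.pyz", "wb") as out:
    out.write(b"#!/usr/bin/env python3\n")    # executable "stub" header
with zipfile.ZipFile("sfx.pyz", "a") as z:    # append the archive part
    z.writestr("__main__.py", STUB_MAIN)      # runs when executed
    z.writestr("payload/hello.txt", "hello from the archive\n")
```

Running python sfx.pyz executes __main__.py from the embedded archive, which extracts the payload; pointing an ordinary unzip tool at the same file also works, mirroring how WinRAR treats an SFX as a normal archive.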
https://en.wikipedia.org/wiki/Tree%20sort
A tree sort is a sort algorithm that builds a binary search tree from the elements to be sorted, and then traverses the tree (in-order) so that the elements come out in sorted order. Its typical use is sorting elements online: after each insertion, the set of elements seen so far is available in sorted order. Tree sort can be used as a one-time sort, but it is equivalent to quicksort as both recursively partition the elements based on a pivot, and since quicksort is in-place and has lower overhead, tree sort has few advantages over quicksort. It has better worst-case complexity when a self-balancing tree is used, but even more overhead. Efficiency Adding one item to a binary search tree is on average an O(log n) process (in big O notation). Adding n items is an O(n log n) process, making tree sorting a 'fast sort' process. Adding an item to an unbalanced binary tree requires O(n) time in the worst case: when the tree resembles a linked list (degenerate tree). This results in a worst case of O(n²) time for this sorting algorithm. This worst case occurs when the algorithm operates on an already sorted set, or one that is nearly sorted, reversed or nearly reversed. Expected O(n log n) time can however be achieved by shuffling the array, but this does not help for equal items. The worst-case behaviour can be improved by using a self-balancing binary search tree. Using such a tree, the algorithm has an O(n log n) worst-case performance, thus being degree-optimal for a comparison sort. However, tree sort algorithms require separate memory to be allocated for the tree, as opposed to in-place algorithms such as quicksort or heapsort. On most common platforms, this means that heap memory has to be used, which is a significant performance hit when compared to quicksort and heapsort. When using a splay tree as the binary search tree, the resulting algorithm (called splaysort) has the additional property that it is an adaptive sort, meaning that its running time is faster than O(n log n) for inputs that are nearly sorted.
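For concreteness, a minimal tree sort over a plain (unbalanced) binary search tree, which therefore exhibits the O(n²) worst case on already-sorted input discussed above:

```python
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    """Standard BST insertion; O(log n) on average, O(n) worst case."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:                           # duplicates go to the right subtree
        root.right = insert(root.right, key)
    return root

def in_order(root):
    """In-order traversal yields the keys in sorted order."""
    if root is not None:
        yield from in_order(root.left)
        yield root.key
        yield from in_order(root.right)

def tree_sort(items):
    root = None
    for x in items:
        root = insert(root, x)
    return list(in_order(root))

print(tree_sort([5, 3, 8, 3, 1]))   # [1, 3, 3, 5, 8]
```

Swapping the plain BST for a self-balancing one (e.g. a red-black tree) turns the insert into guaranteed O(log n) and the whole sort into worst-case O(n log n), at the cost of extra bookkeeping.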
https://en.wikipedia.org/wiki/Cryptography%20law
Cryptography is the practice and study of encrypting information, or in other words, securing information from unauthorized access. There are many different cryptography laws in different nations. Some countries prohibit the export of cryptography software and/or encryption algorithms or cryptanalysis methods. Some countries require decryption keys to be recoverable in case of a police investigation. Overview Issues regarding cryptography law fall into four categories: Export control, which is the restriction on export of cryptography methods within a country to other countries or commercial entities. There are international export control agreements, the main one being the Wassenaar Arrangement. The Wassenaar Arrangement was created after the dissolution of COCOM (Coordinating Committee for Multilateral Export Controls), which in 1989 "decontrolled password and authentication-only cryptography." Import controls, which are restrictions on using certain types of cryptography within a country. Patent issues, which deal with the use of cryptography tools that are patented. Search and seizure issues, on whether and under what circumstances a person can be compelled to decrypt data files or reveal an encryption key. Legal issues Prohibitions Cryptography has long been of interest to intelligence gathering and law enforcement agencies. Secret communications may be criminal or even treasonous. Because of its facilitation of privacy, and the diminution of privacy attendant on its prohibition, cryptography is also of considerable interest to civil rights supporters. Accordingly, there has been a history of controversial legal issues surrounding cryptography, especially since the advent of inexpensive computers has made widespread access to high-quality cryptography possible. In some countries, even the domestic use of cryptography is, or has been, restricted. Until 1999, France significantly restricted the use of cryptography domestically, though it has since rel
https://en.wikipedia.org/wiki/Tachyons%20in%20fiction
The hypothetical particles known as tachyons have inspired many appearances in fiction. The use of the word in science fiction dates back at least to 1970, when James Blish's Star Trek novel Spock Must Die! incorporated tachyons into an ill-fated transporter experiment. In general, tachyons are a standby mechanism upon which many science fiction authors rely to establish faster-than-light communication, with or without reference to causality issues. For example, in the Babylon 5 television series, tachyons are used for real-time communication over long distances. Another instance is Gregory Benford's novel Timescape, winner of the Nebula Award, which involves the use of tachyons to transmit a message of salvation back in time. Likewise, John Carpenter's horror film Prince of Darkness uses tachyons to explain how future humans send messages backward through time to warn the characters of their impending doom. By contrast, Alan Moore's classic comic book limited series Watchmen features a character who uses "a squall of tachyons" broadcasting from space to muddle the mind of the only person on Earth capable of seeing the future. The word "tachyon" has become widely recognized to such an extent that it can impart a science-fictional "sound" even if the subject in question has no particular relation to superluminal travel (compare positronic brain). Classic anime fans may associate tachyons with the energy source for the wave-motion gun and wave-motion engine in Space Battleship Yamato (Star Blazers in the United States). Further examples include the "Tachion Tanks" of the PC game Dark Reign and the "tachyon beam" of the game Master of Orion. The space-combat sim Tachyon: The Fringe utilizes "tachyon gates" for superluminal travel but gives no exact explanation for the technology, and the MMORPG Eve Online features six types of "Large Tachyon Lasers", technically a contradiction since, by definition, lasers emit light (photons), not any kind of hypothetical tachyon. In p
https://en.wikipedia.org/wiki/Beat%20slicing
Beat slicing is the process of using computer programs to slice an audio file of a drum loop into smaller sections, separating the individual drum hits. The slices can then be rearranged with a sequencer or played with a sampler, with results ranging from changing particular hits to completely rearranging the flow of the beat. Slicing a beat also allows its tempo to be altered heavily in music sequencers, without side effects such as the pitch being raised or lowered. The technique is most prominent in drum and bass, hip-hop, glitch and IDM, the latter two being known for prominent artists who rearrange and alter beats in extreme ways. Common programs used for beat slicing Ableton Live BeatCleaver ReCycle Renoise Reaktor Max/MSP FL Studio
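As a rough illustration of what slicing software does internally, the following hypothetical Python sketch uses the librosa and soundfile libraries to detect drum-hit onsets and cut a loop at those points; the input file name and all parameters are assumptions, not a reference to any particular slicer's implementation.

```python
# Hedged sketch of automatic beat slicing via onset detection.
import librosa
import soundfile as sf

# "drumloop.wav" is a hypothetical input file.
y, sr = librosa.load("drumloop.wav", sr=None, mono=True)

# Sample positions of detected transients (drum hits).
onsets = librosa.onset.onset_detect(y=y, sr=sr, units="samples")

# Cut the loop into slices between consecutive onsets.
bounds = list(onsets) + [len(y)]
for i, (start, end) in enumerate(zip(bounds, bounds[1:])):
    sf.write(f"slice_{i:02d}.wav", y[start:end], sr)
# The slice files can now be rearranged in a sequencer or mapped
# across a sampler's keys.
```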
https://en.wikipedia.org/wiki/Revegetation
Revegetation is the process of replanting and rebuilding the soil of disturbed land. This may be a natural process produced by plant colonization and succession, a man-made rewilding project, or an accelerated process designed to repair damage to a landscape due to wildfire, mining, flood, or other cause. Originally the process was simply one of applying seed, usually of grasses or clover, and fertilizer to disturbed lands. The fibrous root network of grasses is useful for short-term erosion control, particularly on sloping ground. Establishing long-term plant communities requires forethought as to appropriate species for the climate, size of stock required, and impact of replanted vegetation on local fauna. The motivations behind revegetation are diverse, answering needs that are both technical and aesthetic, but it is usually erosion prevention that is the primary reason. Revegetation helps prevent soil erosion, enhances the ability of the soil to absorb more water in significant rain events, and in conjunction reduces turbidity dramatically in adjoining bodies of water. Revegetation also aids protection of engineered grades and other earthworks. Organisations like Trees For Life (Brooklyn Park) provide good examples. For conservation Revegetation is often used to join up patches of natural habitat that have been lost and can be a very important tool in places where much of the natural vegetation has been cleared. It is therefore particularly important in urban environments, and research in Brisbane has shown that revegetation projects can significantly improve urban bird populations. The Brisbane study showed that connecting a revegetation patch with existing habitat improved bird species richness, while simply concentrating on making large patches of habitat was the best way to increase bird abundance. Revegetation plans, therefore, need to consider how the revegetated sites are connected with existing habitat patches. Revegetation in agricultural areas can support breedi
https://en.wikipedia.org/wiki/META%20II
META II is a domain-specific programming language for writing compilers. It was created in 1963–1964 by Dewey Val Schorre at UCLA. META II uses what Schorre called syntax equations. Its operation is simply explained as: Each syntax equation is translated into a recursive subroutine which tests the input string for a particular phrase structure, and deletes it if found. META II programs are compiled into an interpreted byte code language. VALGOL and SMALGOL compilers illustrating its capabilities were written in the META II language. VALGOL is a simple algebraic language designed for the purpose of illustrating META II. SMALGOL was a fairly large subset of ALGOL 60. Notation META II was first written in META I, a hand-compiled version of META II. The history is unclear as to whether META I was a full implementation of META II or the minimal subset of the META II language required to compile the full META II compiler. In its documentation, META II is described as resembling BNF, which today is explained as a production grammar. META II is an analytical grammar. In the TREE-META document these languages were described as reductive grammars. For example, in BNF, an arithmetic expression may be defined as: <expr> := <term> | <expr> <addop> <term> BNF rules are today production rules describing how constituent parts may be assembled to form only valid language constructs. A parser does the opposite, taking language constructs apart. META II is a stack-based functional parser programming language that includes output directives. In META II, the order of testing is specified by the equation. META II, like other recursive-descent parsers, would overflow its stack when attempting left recursion. META II uses a $ (zero or more) sequence operator. The expr parsing equation written in META II is a conditional expression evaluated left to right: expr = term $( '+' term .OUT('ADD') / '-' term .OUT('SUB')); Above the expr equation is defined by the expression to the right o
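To make the quoted description concrete, the following Python sketch (not Schorre's implementation, and with term simplified to single digits so the example stays self-contained) shows the kind of recursive subroutine the expr equation above compiles to, emitting ADD and SUB in the manner of the .OUT directives.

```python
# A recursive subroutine in the style of a compiled META II equation:
# it tests the input for the phrase structure, consumes it, and emits code.
class Parser:
    def __init__(self, text):
        self.text = text
        self.pos = 0

    def term(self):
        # Simplification for this sketch: a term is a single digit.
        if self.pos < len(self.text) and self.text[self.pos].isdigit():
            print(f"PUSH {self.text[self.pos]}")   # hypothetical output op
            self.pos += 1
            return True
        return False

    def expr(self):
        # expr = term $( '+' term .OUT('ADD') / '-' term .OUT('SUB'));
        if not self.term():
            return False
        # The $ operator: zero or more repetitions of the alternation.
        while self.pos < len(self.text) and self.text[self.pos] in "+-":
            op = self.text[self.pos]
            self.pos += 1
            if not self.term():
                return False                       # structure not matched
            print("ADD" if op == "+" else "SUB")   # the .OUT directives
        return True

Parser("1+2-3").expr()   # emits PUSH 1, PUSH 2, ADD, PUSH 3, SUB
```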
https://en.wikipedia.org/wiki/Reel%20Top%2040%20Radio%20Repository
The Reel Top 40 Radio Repository, sometimes called REELRADIO, is a virtual museum of radio broadcasts, primarily airchecks from the "Top 40" era of radio in North America. The archives are available by streaming. Established in 1996 as the first online airchecks archive, it was transferred to a dedicated not-for-profit organization, REELRADIO, Inc., in 2000. The site was organized as a series of "collections"; most collections represent the archives of a single contributor. As of April 2018, the repository featured more than 3,568 exhibits. Two collections are tribute sites to famed Los Angeles disc jockeys Robert W. Morgan and The Real Don Steele, and the organization also hosts other sites about the Top 40 eras at WPGC in Washington and WIXY 1260 in Cleveland. The board of directors includes founder Richard "Uncle Ricky" Irwin, news reporter Michael Burgess (known as Mike Scott) of KGPE (TV), and Bob Shannon (former vice president of TM Century). History The repository site was started as Uncle Ricky's Reel Top 40 Radio Repository by Richard "Uncle Ricky" Irwin, who had been in the radio business for 30 years before becoming a webmaster for Sacramento Network Access. The repository was started using SNA's servers, including a RealAudio streaming media server. Articles about the site were published in Radio World magazine on March 20, 1996 and Radio & Records on September 13, 1996. Uncle Ricky's Reel Top 40 Radio Repository was one of five Radio category nominees for the 1998 Webby Awards, and an article titled "Radio Patter From The Past: Vintage D.J's Rock On" was published in The New York Times on May 9, 2002. After SNA was sold to PSINet, the not-for-profit corporation REELRADIO, Inc. was formed on March 23, 2000, with assistance from the Media Preservation Foundation, to collect donations for funding the site; once under the new organization, the site was moved to new hosting facilities in July. For the first 10 years, the site was supported
https://en.wikipedia.org/wiki/Happy%20mapping
In genetics, HAPPY mapping, first proposed by Paul H. Dear and Peter R. Cook in 1989, is a method used to study the linkage between two or more DNA sequences. According to the Single Molecule Genomics Group, it is "Mapping based on the analysis of approximately HAPloid DNA samples using the PolYmerase chain reaction". In genomics, HAPPY mapping can be applied to assess the synteny and orientation of various DNA sequences across a particular genome - the generation of a "genomic" map. As with linkage mapping, HAPPY mapping relies on the differential probability of two or more DNA sequences being separated. In genetic mapping, the probability of a recombination event between two genetic loci on the same chromosome is directly proportional to the distance between them. HAPPY mapping replaces recombination with fragmentation - instead of relying on recombination to separate genetic loci, the entire genome is fragmented, for example, by radiation or mechanical shearing. If the DNA is broken on a random basis, the longer the distance between two DNA sequences, the higher the chance of a break falling between the two, and vice versa. HAPPY mapping retains the benefits of genetic mapping while removing some of the problems associated with recombination, i.e., the need for polymorphism and for breeding. Also, recombination rates can vary with genomic locale, whereas breakage of genomic DNA by radiation or mechanical shearing seems to be more random. It has been used to genetically map several organisms. HAPPY mapping has also been adapted to allow the precise analysis of copy-number variation, and in particular the analysis of copy-number changes in cancer.
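The fragmentation principle can be illustrated with a toy Monte Carlo simulation; the genome size, break count, and marker distances below are invented for illustration and do not correspond to any real HAPPY mapping experiment.

```python
# Toy simulation of the HAPPY mapping principle: random breakage separates
# two markers with probability proportional to their distance, so nearby
# markers are co-retained on the same fragment more often.
import random

GENOME = 1_000_000      # bp, illustrative only
BREAKS = 50             # random breaks per sample, illustrative only
TRIALS = 10_000

def same_fragment(a, b):
    """True if no random break falls between marker positions a and b."""
    lo, hi = min(a, b), max(a, b)
    return all(not (lo < random.uniform(0, GENOME) < hi)
               for _ in range(BREAKS))

for dist in (1_000, 10_000, 100_000):
    together = sum(same_fragment(0, dist) for _ in range(TRIALS)) / TRIALS
    print(f"{dist:>7} bp apart: co-retained in {together:.1%} of samples")
# Closer markers stay together more often -- the fragmentation analogue
# of low recombination frequency between tightly linked loci.
```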
https://en.wikipedia.org/wiki/Human%20accelerated%20regions
Human accelerated regions (HARs), first described in August 2006, are a set of 49 segments of the human genome that are conserved throughout vertebrate evolution but are strikingly different in humans. They are named according to their degree of difference between humans and chimpanzees (HAR1 showing the largest degree of human-chimpanzee differences). Found by scanning through genomic databases of multiple species, some of these highly mutated areas may contribute to human-specific traits. Others may represent loss-of-function mutations, possibly due to the action of biased gene conversion rather than adaptive evolution. Several of the HARs encompass genes known to produce proteins important in neurodevelopment. HAR1 is a 106-base pair stretch found on the long arm of chromosome 20 overlapping with part of the RNA genes HAR1F and HAR1R. HAR1F is active in the developing human brain. The HAR1 sequence is found (and conserved) in chickens and chimpanzees but is not present in the fish or frogs that have been studied. There are 18 base-pair differences between humans and chimpanzees, far more than expected given its history of conservation. HAR2 includes HACNS1, a gene enhancer "that may have contributed to the evolution of the uniquely opposable human thumb, and possibly also modifications in the ankle or foot that allow humans to walk on two legs". Evidence to date shows that of the 110,000 gene enhancer sequences identified in the human genome, HACNS1 has undergone the most change during the evolution of humans following the split with the ancestors of chimpanzees. The substitutions in HAR2 may have resulted in loss of binding sites for a repressor, possibly due to biased gene conversion. HAR genes HAR01: HAR1F & HAR1R HAR02: CENTG2 including the HACNS1 module HAR03: MAD1L1 HAR04: ? HAR05: WNK1 HAR06: WWOX HAR07: ? HAR08: POU6F2 HAR09: PTPRT HAR10: FHIT HAR11: DMD HAR12: ? HAR20: PPARGC1A HAR21: NPAS3 - association with psychiatric disorders HAR23: MGC27016
https://en.wikipedia.org/wiki/Parsonage%E2%80%93Turner%20syndrome
Parsonage–Turner syndrome, also known as acute brachial neuropathy and neuralgic amyotrophy, and abbreviated PTS, is a syndrome of unknown cause, although many specific risk factors have been identified (such as post-operative, post-infectious, post-traumatic or post-vaccination states). The condition manifests as a set of symptoms most likely resulting from autoimmune inflammation of the brachial plexus. Parsonage–Turner syndrome occurs in about 1.6 out of 100,000 people every year. Signs and symptoms This syndrome can begin with severe shoulder or arm pain followed by weakness and numbness. Those with Parsonage–Turner experience acute, sudden-onset pain radiating from the shoulder to the upper arm. Affected muscles become weak and atrophied, and in advanced cases, paralyzed. Occasionally there will be no pain and just paralysis, and sometimes just pain, not ending in paralysis. MRI may assist in diagnosis. Scapular winging is commonly seen. Mechanism Parsonage–Turner syndrome involves neuropathy of the suprascapular nerve in 97% of cases, and variably involves the axillary and subscapular nerves. As such, the muscles usually involved are the supraspinatus and infraspinatus, which are both innervated by the suprascapular nerve. Involvement of the deltoid is more variable, as it is innervated by the axillary nerve. Diagnosis Diagnosis often takes three to nine months, as the condition is often unrecognised by physicians. Differential diagnosis The differential focuses on distinguishing it from similar entities such as quadrilateral space syndrome, which involves the teres minor and variably the deltoid, and suprascapular nerve impingement at the spinoglenoid notch, which predominantly involves the infraspinatus. Prognosis Despite its wasting and at times long-lasting effects, most cases are resolved by the body's healing system, and recovery is usually good in 18–24 months, depending on the age of the person in question.
https://en.wikipedia.org/wiki/Komar%20mass
The Komar mass (named after Arthur Komar) of a system is one of several formal concepts of mass that are used in general relativity. The Komar mass can be defined in any stationary spacetime, which is a spacetime in which all the metric components can be written so that they are independent of time. Alternatively, a stationary spacetime can be defined as a spacetime which possesses a timelike Killing vector field. The following discussion is an expanded and simplified version of the motivational treatment in (Wald, 1984, p. 288). Motivation Consider the Schwarzschild metric. Using the Schwarzschild basis, a frame field for the Schwarzschild metric, one can find that the radial acceleration required to hold a test mass stationary at a Schwarzschild coordinate of r is (in geometrized units, with m the mass parameter): $a = \frac{m}{r^2\sqrt{1-2m/r}}$. Because the metric is static, there is a well-defined meaning to "holding a particle stationary". Interpreting this acceleration as being due to a "gravitational force", we can then compute the integral of normal acceleration multiplied by area to get a "Gauss law" integral of: $4\pi r^2 a = \frac{4\pi m}{\sqrt{1-2m/r}}$. While this approaches a constant as r approaches infinity, it is not a constant independent of r. We are therefore motivated to introduce a correction factor to make the above integral independent of the radius r of the enclosing shell. For the Schwarzschild metric, this correction factor is just $\sqrt{1-2m/r}$, the "red-shift" or "time dilation" factor at distance r. One may also view this factor as "correcting" the local force to the "force at infinity", the force that an observer at infinity would need to apply through a string to hold the particle stationary. (Wald, 1984). To proceed further, we will write down a line element for a static metric: $ds^2 = g_{tt}\,dt^2 + Q(dx,dy,dz)$, where $g_{tt}$ and the quadratic form $Q$ are functions only of the spatial coordinates x, y, z and are not functions of time. In spite of our choices of variable names, it should not be assumed that our coordinate system is Cartesian. The fact that none of the metric coefficients are functi
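The motivation can be checked numerically. The following sketch (geometrized units G = c = 1 and mass parameter m = 1, chosen purely for illustration) shows that the naive Gauss-law integral varies with r, while multiplying by the red-shift factor yields the constant 4πm at every radius.

```python
# Numeric check of the Komar-mass motivation: the naive integral
# 4*pi*r^2*a depends on r, but the red-shift-corrected one does not.
import math

m = 1.0
for r in (3.0, 10.0, 100.0, 1e6):
    a = m / (r**2 * math.sqrt(1 - 2*m/r))        # proper acceleration
    naive = 4 * math.pi * r**2 * a               # "Gauss law" integral
    corrected = naive * math.sqrt(1 - 2*m/r)     # apply red-shift factor
    print(f"r={r:>9}: naive={naive:.6f}  corrected={corrected:.6f}")
# corrected is 4*pi*m = 12.566371 at every radius, as required.
```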
https://en.wikipedia.org/wiki/Mass%20in%20general%20relativity
The concept of mass in general relativity (GR) is more subtle to define than the concept of mass in special relativity. In fact, general relativity does not offer a single definition of the term mass, but offers several different definitions that are applicable under different circumstances. Under some circumstances, the mass of a system in general relativity may not even be defined. The reason for this subtlety is that the energy and momentum in the gravitational field cannot be unambiguously localized. (See Chapter 20 of .) So, rigorous definitions of mass in general relativity are not local, as in classical mechanics or special relativity, but make reference to the asymptotic nature of the spacetime. A well-defined notion of mass exists for asymptotically flat spacetimes and for asymptotically Anti-de Sitter space. However, these definitions must be used with care in other settings. Defining mass in general relativity: concepts and obstacles In special relativity, the rest mass of a particle can be defined unambiguously in terms of its energy and momentum as described in the article on mass in special relativity. Generalizing the notion of energy and momentum to general relativity, however, is subtle. The main reason for this is that the gravitational field itself contributes to the energy and momentum. However, the "gravitational field energy" is not a part of the energy–momentum tensor; instead, what might be identified as the contribution of the gravitational field to a total energy is part of the Einstein tensor on the other side of Einstein's equation (and, as such, a consequence of these equations' non-linearity). While in certain situations it is possible to rewrite the equations so that part of the "gravitational energy" now stands alongside the other source terms in the form of the stress–energy–momentum pseudotensor, this separation is not true for all observers, and there is no general definition for obtaining it. How, then, does one
https://en.wikipedia.org/wiki/Eifion%20Jones
William Eifion Jones (1925 – March 2004) was a Welsh marine botanist, noted for his study of marine algae. He was born and brought up in Aberystwyth and studied botany at the University of Wales under Professor Lilly Newton. He moved to Bangor in 1953 to join the newly founded Marine Biology Station as a lecturer with Denis Crisp, and completed his PhD in 1957. He had a wife, Marian, and two children, Rhiannon and Aled. He retired early in 1986, but went on to lecture in Kuwait, returning to do part-time lecturing at the University of Wales, Bangor. He died in a car accident at Gaerwen, aged 79. He wrote A Key to the Genera of the British Seaweeds (1962). It was most valuable as an update to Newton's Handbook of 1931, which had become out-of-date, and was required to identify the genera of algae to be found on the shores of the British Isles. He joined the British Phycological Society in 1955 and served as a Member of Council (1959 and 1974–1977), as Assistant Secretary (1959) and Hon. Treasurer (1964–1968). At the Eighth International Seaweed Symposium, held in Bangor in 1974, he was a member of the organizing committee and secretary. He was President of the North Wales Wildlife Trust and Bardsey Bird and Field Observatory. Publications Jones, W.E. 1956. Effect of spore coalescence on the early development of Gracilaria verrucosa (Hudson) Papenfuss. Nature, Lond. 178: 426 - 427. Jones, W.E. 1958. Experiments on some effects of certain environmental factors on Gracilaria verrucosa (Huds.) Papenf. Journal of the Marine Biological Association of the United Kingdom, 38: 153 - 167. Jones, W.E. 1959. The growth and fruiting of Gracilaria verrucosa (Hudson) Papenfuss. Journal of the Marine Biological Association of the United Kingdom, 38: 47 - 56. Jones, W.E. 1962a. "A key to the genera of the British Seaweeds." Field Studies. 1: No.4. pp. 1 – 32. Jones, W.E. 1962b. The identity of Gracilaria erecta (Grev.) Grev. British Phycological Bulletin 2: 140 - 144. Jone
https://en.wikipedia.org/wiki/List%20of%20Australian%20herbs%20and%20spices
Australian herbs and spices were used by Aboriginal peoples to flavour food in ground ovens. The term "spice" is applied generally to the non-leafy range of strongly flavoured dried Australian bushfoods. They mainly consist of aromatic fruits and seed products, although Australian wild peppers also have spicy leaves. There are also a few aromatic leaves but unlike culinary herbs from other cultures, which often come from small soft-stemmed forbs, the Australian herb species are generally trees from rainforests, open forests and woodlands. Australian herbs and spices are generally dried and ground to produce a powdered or flaked spice, either used as a single ingredient or in blends. They were used to a limited extent by colonists in the 18th and 19th centuries. Some extracts were used as flavouring during the 20th century. Australian native spices have become more widely recognized and used by non-indigenous people since the early 1980s as part of the bushfood industry, with increasing gourmet use and export. They can also be used as a fresh product. Leaves can be used whole, like a bay leaf in cooking, or spicy fruits are added to various dishes for flavour. The distilled essential oils from leaves and twigs are also used as flavouring products. Fruit Acronychia acidula, Lemon Aspen Acronychia oblongifolia, White Aspen, Yellow Wood Austromyrtus dulcis, Midgen Berry, Silky Myrtle Citrus australasica, Finger Lime, Caviar Lime Citrus australis, Round Lime, Australian Lime Citrus glauca, Desert Lime Eupomatia laurina, Bolwarra, Native Guava, Copper Laurel Kunzea pomifera, Muntries, Emu Apples, Native Cranberries Solanum centrale, Akudjura, Australian Desert Raisin Solanum chippendalei, Chippendale's Tomato, Bush Tomato Solanum cleistogamum, Potato Bush, Bush Tomato Syzygium luehmannii, Riberry, Cherry Alder, Small Leaf Lilly Pilly Herbs Apium insulare, Flinders Island Celery Apium prostratum, Sea Celery Atherosperma moschatum, safrole, Southern S
https://en.wikipedia.org/wiki/Fiducial%20marker
A fiducial marker or fiducial is an object placed in the field of view of an imaging system that appears in the image produced, for use as a point of reference or a measure. It may be either something placed into or on the imaging subject, or a mark or set of marks in the reticle of an optical instrument. Applications Microscopy In high-resolution optical microscopy, fiducials can be used to actively stabilize the field of view. Stabilization to better than 0.1 nm is achievable. Physics In physics, 3D computer graphics, and photography, fiducials are reference points: fixed points or lines within a scene to which other objects can be related or against which objects can be measured. Cameras outfitted with Réseau plates produce these reference marks (also called Réseau crosses) and are commonly used by NASA. Such marks are closely related to the timing marks used in optical mark recognition. Geographical survey Airborne geophysical surveys also use the term "fiducial" as a sequential reference number in the measurement of various geophysical instruments during a survey flight. This application of the term evolved from air photo frame numbers that were originally used to locate geophysical survey lines in the early days of airborne geophysical surveying. This method of positioning has since been replaced by GPS, but the term "fiducial" continues to be used as the time reference for data measured during flights. Augmented reality In applications of augmented reality, fiducials help resolve several problems of integration between the real world view and the synthetic images that augment it. Fiducials of known pattern and size can serve as real world anchors of location, orientation and scale. They can establish the identity of the scene or objects within the scene. For example, a fiducial printed on one page of an augmented reality popup book would identify the page to allow the system to select the augmentation content. It would also serve to moor the coordinate
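As a concrete illustration of fiducials in augmented reality, here is a hedged sketch using OpenCV's ArUco module; the API shown is the object-oriented one introduced in OpenCV 4.7 (older releases expose equivalent free functions), and the input image name is hypothetical.

```python
# Sketch: detect ArUco fiducial markers in an image. Each detected id
# establishes identity (e.g. which page of the popup book), and the four
# corner points anchor position, orientation and scale for the overlay.
import cv2

img = cv2.imread("scene.png")   # hypothetical input image
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary,
                                   cv2.aruco.DetectorParameters())

corners, ids, rejected = detector.detectMarkers(img)
if ids is not None:
    for marker_id, quad in zip(ids.flatten(), corners):
        print(f"marker {marker_id} at corners {quad.reshape(4, 2)}")
```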
https://en.wikipedia.org/wiki/Plane%20stress
In continuum mechanics, a material is said to be under plane stress if the stress vector is zero across a particular plane. When that situation occurs over an entire element of a structure, as is often the case for thin plates, the stress analysis is considerably simplified, as the stress state can be represented by a tensor of dimension 2 (representable as a 2×2 matrix rather than 3×3). A related notion, plane strain, is often applicable to very thick members. Plane stress typically occurs in thin flat plates that are acted upon only by load forces that are parallel to them. In certain situations, a gently curved thin plate may also be assumed to have plane stress for the purpose of stress analysis. This is the case, for example, of a thin-walled cylinder filled with a fluid under pressure. In such cases, stress components perpendicular to the plate are negligible compared to those parallel to it. In other situations, however, the bending stress of a thin plate cannot be neglected. One can still simplify the analysis by using a two-dimensional domain, but the plane stress tensor at each point must be complemented with bending terms. Mathematical definition Mathematically, the stress at some point in the material is a plane stress if one of the three principal stresses (the eigenvalues of the Cauchy stress tensor) is zero. That is, there is a Cartesian coordinate system in which the stress tensor has the form $\sigma = \begin{pmatrix} \sigma_{11} & \sigma_{12} & 0 \\ \sigma_{21} & \sigma_{22} & 0 \\ 0 & 0 & 0 \end{pmatrix}$. For example, consider a rectangular block of material measuring 10, 40 and 5 cm along the $x$, $y$, and $z$ axes, that is being stretched in the $x$ direction and compressed in the $y$ direction, by pairs of opposite forces with magnitudes 10 N and 20 N, respectively, uniformly distributed over the corresponding faces. The stress tensor inside the block will be $\sigma = \begin{pmatrix} 500 & 0 & 0 \\ 0 & -4000 & 0 \\ 0 & 0 & 0 \end{pmatrix}\ \mathrm{N/m^2}$. More generally, if one chooses the first two coordinate axes arbitrarily but perpendicular to the direction of zero stress, the stress tensor will have the form $\begin{pmatrix} \sigma_{11} & \sigma_{12} & 0 \\ \sigma_{21} & \sigma_{22} & 0 \\ 0 & 0 & 0 \end{pmatrix}$ and can therefore be represented by a 2 × 2
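The in-plane principal stresses are simply the eigenvalues of the 2 × 2 representation. A short numerical sketch, with a hypothetical shear component added to the block example's normal stresses:

```python
# Principal stresses of a plane stress state = eigenvalues of the 2x2
# in-plane tensor. The shear value is invented for illustration.
import numpy as np

sxx, syy, sxy = 500.0, -4000.0, 300.0   # N/m^2 (hypothetical shear added)
sigma = np.array([[sxx, sxy],
                  [sxy, syy]])

principal = np.linalg.eigvalsh(sigma)   # in-plane principal stresses
print(principal)
# The full 3x3 tensor would just carry an extra zero row and column,
# whose eigenvalue is the zero principal stress normal to the plate.
```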
https://en.wikipedia.org/wiki/Hyperchromicity
Hyperchromicity is the increase of absorbance (optical density) of a material. The most famous example is the hyperchromicity of DNA that occurs when the DNA duplex is denatured. The UV absorption is increased when the two single DNA strands are being separated, either by heat, by addition of denaturant or by increasing the pH level. The opposite, a decrease of absorbance, is called hypochromicity. Hyperchromicity in DNA denaturation Heat denaturation of DNA, also called melting, causes the double helix structure to unwind to form single-stranded DNA. When DNA in solution is heated above its melting temperature (usually more than 80 °C), the double-stranded DNA unwinds to form single-stranded DNA. The bases become unstacked and can thus absorb more light. In their native state, the bases of DNA absorb light in the 260-nm wavelength region. When the bases become unstacked, the wavelength of maximum absorbance does not change, but the amount absorbed increases by 37%. A double-stranded DNA strand dissociating to two single strands produces a sharp cooperative transition. Hyperchromicity can be used to track the condition of DNA as temperature changes. The transition/melting temperature (Tm) is the temperature where the absorbance of UV light is 50% between the maximum and minimum, i.e. where 50% of the DNA is denatured. A tenfold increase of monovalent cation concentration increases the melting temperature by 16.6 °C. The hyperchromic effect is the striking increase in absorbance of DNA upon denaturation. The two strands of DNA are bound together mainly by stacking interactions, hydrogen bonds and the hydrophobic effect between the complementary bases. The hydrogen bonds limit the resonance of the aromatic rings, so the absorbance of the sample is limited as well. When the DNA double helix is treated with denaturing agents, the interaction force holding the double helical structure is disrupted. The double helix then separates into two single strands which are in the rand
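The operational definition of Tm above is easy to apply numerically: normalize a melting curve between its low- and high-temperature plateaus and interpolate the 50% crossing. The curve below is synthetic illustration data, not a measurement.

```python
# Sketch: extract Tm from a UV (260 nm) melting curve by finding where
# the normalized absorbance crosses 0.5. Data values are invented.
import numpy as np

temps = np.linspace(50, 95, 10)                     # deg C
a260 = np.array([1.00, 1.01, 1.02, 1.05, 1.15,
                 1.25, 1.33, 1.36, 1.37, 1.37])     # absorbance at 260 nm

# Fraction denatured, normalized between the two plateaus.
frac = (a260 - a260.min()) / (a260.max() - a260.min())

tm = np.interp(0.5, frac, temps)   # temperature at 50% denaturation
print(f"Tm ~ {tm:.1f} C")
```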
https://en.wikipedia.org/wiki/Impalefection
Impalefection is a method of gene delivery using nanomaterials such as carbon nanofibers, carbon nanotubes, and nanowires. Needle-like nanostructures are synthesized perpendicular to the surface of a substrate. Plasmid DNA containing the gene, and intended for intracellular delivery, is attached to the nanostructure surface. A chip with arrays of these needles is then pressed against cells or tissue. Cells that are impaled by nanostructures can express the delivered gene(s). As a type of transfection, the term is derived from two words: impalement and infection. Applications One of the features of impalefection is spatially resolved gene delivery, which holds potential for tissue-engineering approaches in wound healing such as gene-activated matrix technology. Though impalefection is an efficient approach in vitro, it has not yet been effectively used in vivo on live organisms and tissues. Carrier materials Vertically aligned carbon nanofiber arrays prepared by photolithography and plasma-enhanced chemical vapor deposition are one suitable type of material. Silicon nanowires are another choice of nanoneedles that have been utilized for impalefection. See also Nanomedicine Tim McKnight
https://en.wikipedia.org/wiki/PBA%20on%20NBN/IBC
The PBA on NBN and The PBA on IBC were brandings used for presentations of Philippine Basketball Association games produced by Summit Sports and aired on the Philippine television networks National Broadcasting Network and Intercontinental Broadcasting Corporation, respectively, in 2003. The PBA on NBN and The PBA on IBC succeeded the PBA's longtime TV partner Vintage Television. Overview The consortium of NBN and IBC took over the league's TV coverage after winning the TV rights over the league's longtime TV partner Vintage Television on November 8, 2002. The league considered the NBN-IBC bid because they could provide a wider coverage of the games, not only in Metro Manila but also throughout the provinces. The consortium signed an agreement with the PBA to cover the games for three years, paying the league almost P670 million. NBN began airing the PBA games with the opening of the 2003 PBA season on February 23, 2003, while IBC first aired the PBA games on March 16, 2003. The first week's ratings of the games over NBN were negligible when compared to those of IBC, which had telecast the games for several years, while even the two networks' combined ratings were well below those of Viva TV in 2002. The use of two networks to broadcast the PBA also led to an experiment during the first few months of the season where NBN and IBC would air separate telecasts of a game aimed at different audiences: NBN's broadcasts were more traditionally styled, while IBC's broadcasts were aimed at a younger audience, utilizing a separate pool of younger personalities. The format was eventually dropped and replaced with straight simulcasts later on in the season. Due to allegations by IBC that NBN had not paid close to 30 million pesos in rights fees, IBC stopped broadcasting PBA games at the end of October 2003. NBN would continue on until the finals of the 2003 Reinforced Conference. The PBA gave the consortium a formal notice on December 1, 2003, to settle their debts of unpaid rights fe
https://en.wikipedia.org/wiki/Legacy%20of%20the%20Wizard
Legacy of the Wizard, originally released in Japan as , is a fantasy-themed action role-playing platform game released for the MSX, MSX2 and Famicom in Japan and for the Nintendo Entertainment System in the United States. Legacy of the Wizard is an installment in Falcom's Dragon Slayer series, and one of only five Dragon Slayer games that were localized outside Japan. The game was an early example of an open-world, non-linear action RPG, combining action-RPG gameplay with what would later be called "Metroidvania"-style action-adventure elements. Plot The game chronicles the story of the Drasle family (an abbreviation for "Dragon Slayer"; though the characters are given the last name "Worzen" in the credits) and their attempt to destroy an ancient dragon named Keela that is magically entrapped in a painting within an underground labyrinth. To accomplish this goal, they must find the "Dragon Slayer", a magical sword that is protected by four hidden crowns. The player must use the unique abilities of each member of the family to regain possession of the crowns and destroy the evil Keela. Like many games of its era, the story of Legacy of the Wizard is explained almost entirely in the game's instruction manual. Gameplay The Drasle family consists of six members of three generations, plus the family pet, which resembles a small dinosaur. The player takes control of the members of the Drasles and their pet, sending them one at a time into the vast cavern filled with traps, puzzles and monsters, in search of the four crowns, while periodically returning to the family household on the surface to change characters and to obtain a password. Each member of the family, which consists of the father, mother, son, daughter, and the pet, has different strengths and weaknesses to contribute to this goal. Some characters have seemingly powerful strengths, but each is offset by proportionate limitations. For example, the father has the strongest attack power, but cannot jump as hig
https://en.wikipedia.org/wiki/Bessel%20polynomials
In mathematics, the Bessel polynomials are an orthogonal sequence of polynomials. There are a number of different but closely related definitions. The definition favored by mathematicians is given by the series $y_n(x) = \sum_{k=0}^{n} \frac{(n+k)!}{(n-k)!\,k!} \left(\frac{x}{2}\right)^k$. Another definition, favored by electrical engineers, is sometimes known as the reverse Bessel polynomials: $\theta_n(x) = x^n\, y_n(1/x) = \sum_{k=0}^{n} \frac{(n+k)!}{(n-k)!\,k!} \frac{x^{n-k}}{2^k}$. The coefficients of the second definition are the same as the first but in reverse order. For example, the third-degree Bessel polynomial is $y_3(x) = 15x^3 + 15x^2 + 6x + 1$, while the third-degree reverse Bessel polynomial is $\theta_3(x) = x^3 + 6x^2 + 15x + 15$. The reverse Bessel polynomial is used in the design of Bessel electronic filters. Properties Definition in terms of Bessel functions The Bessel polynomial may also be defined using Bessel functions, from which the polynomial draws its name: $y_n(x) = \sqrt{\tfrac{2}{\pi x}}\, e^{1/x} K_{n+\frac{1}{2}}(1/x)$ and $\theta_n(x) = \sqrt{\tfrac{2}{\pi}}\, x^{n+\frac{1}{2}} e^{x} K_{n+\frac{1}{2}}(x)$, where Kn(x) is a modified Bessel function of the second kind, yn(x) is the ordinary polynomial, and θn(x) is the reverse polynomial. For example: $y_3(x) = \sqrt{\tfrac{2}{\pi x}}\, e^{1/x} K_{7/2}(1/x)$. Definition as a hypergeometric function The Bessel polynomial may also be defined as a confluent hypergeometric function: $y_n(x) = \,_2F_0(-n, n+1; ; -x/2)$. A similar expression holds true for the generalized Bessel polynomials (see below): The reverse Bessel polynomial may be defined as a generalized Laguerre polynomial: $\theta_n(x) = \frac{n!}{(-2)^n}\, L_n^{-2n-1}(2x)$, from which it follows that it may also be defined as a hypergeometric function: $\theta_n(x) = \frac{(-2n)_n}{(-2)^n}\, {}_1F_1(-n; -2n; 2x)$, where (−2n)n is the Pochhammer symbol (rising factorial). Generating function The Bessel polynomials, with index shifted, have the generating function Differentiating with respect to , cancelling , yields the generating function for the polynomials Similar generating function exists for the polynomials as well: Upon setting , one has the following representation for the exponential function: Recursion The Bessel polynomial may also be defined by a recursion formula: $y_n(x) = (2n-1)\,x\,y_{n-1}(x) + y_{n-2}(x)$ and $\theta_n(x) = (2n-1)\,\theta_{n-1}(x) + x^2\,\theta_{n-2}(x)$. Differential equation The Bessel polynomial obeys the following differential equation: $x^2 \frac{d^2 y_n}{dx^2} + 2(x+1)\,\frac{d y_n}{dx} - n(n+1)\,y_n = 0$ and $x \frac{d^2 \theta_n}{dx^2} - 2(x+n)\,\frac{d \theta_n}{dx} + 2n\,\theta_n = 0$. Orthogonality The Bessel polynomials are orthogonal with respect to the weight $e^{-2/x}$ integrated over the unit circle of the complex plane. In
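The series definition, the coefficient-reversal relationship, and the recursion can be cross-checked with a short script; the function names are ad hoc.

```python
# Cross-check: build y_n from the series, theta_n by coefficient
# reversal, and confirm the recursion y_n = (2n-1) x y_{n-1} + y_{n-2}.
from math import factorial

def y_coeffs(n):
    """Coefficients of y_n(x), lowest degree first."""
    return [factorial(n + k) // (factorial(n - k) * factorial(k) * 2**k)
            for k in range(n + 1)]

def theta_coeffs(n):
    """Reverse Bessel polynomial: same coefficients in reverse order."""
    return list(reversed(y_coeffs(n)))

def y_via_recursion(n):
    if n == 0:
        return [1]
    prev2, prev1 = [1], [1, 1]          # y_0 and y_1
    for m in range(2, n + 1):
        cur = [0] * (m + 1)
        for i, c in enumerate(prev1):   # (2m-1) * x * y_{m-1}
            cur[i + 1] += (2 * m - 1) * c
        for i, c in enumerate(prev2):   # + y_{m-2}
            cur[i] += c
        prev2, prev1 = prev1, cur
    return prev1 if n > 1 else [1, 1]

print(y_coeffs(3))      # [1, 6, 15, 15]  -> 15x^3 + 15x^2 + 6x + 1
print(theta_coeffs(3))  # [15, 15, 6, 1]  -> x^3 + 6x^2 + 15x + 15
assert y_via_recursion(3) == y_coeffs(3)
```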
https://en.wikipedia.org/wiki/Cisco%20Security%20Agent
Cisco Security Agent (CSA) was an endpoint intrusion prevention software system made originally by Okena (formerly named StormWatch Agent), which was bought by Cisco Systems in 2003. The software is rule-based, and it examines system activities and network traffic, determining which behaviours are normal and which may indicate an attack. CSA was offered as a replacement for Cisco IDS Host Sensor, which was announced end-of-life on 21 February 2003. This end-of-life action resulted from Cisco's acquisition of Okena, Inc., and the Cisco Security Agent product line based on the Okena technology would replace the Cisco IDS Host Sensor product line from Entercept. As a result of this end-of-life action, Cisco offered a no-cost, one-for-one product replacement/migration program for all Cisco IDS Host Sensor customers to the new Cisco Security Agent product line. The intent of this program was to support existing IDS Host Sensor customers who chose to migrate to the new Cisco Security Agent product line. All Cisco IDS Host Sensor customers were eligible for this migration program, whether or not the customer had purchased a Cisco Software Application Support (SAS) service contract for their Cisco IDS Host Sensor products. CSA uses a two- or three-tier client-server architecture. The Management Center 'MC' (or Management Console) contains the program logic. An MS SQL database backend is used to store alerts and configuration information; the MC and SQL database may be co-resident on the same system. The agent is installed on the desktops and/or servers to be protected and communicates with the Management Center, sending logged events to the Management Center and receiving updates on rules when they occur. A Network World article dated 17 December 2009 stated that Cisco hinted it would end-of-life both CSA and MARS. On 11 June 2010, Cisco announced the end-of-life and end-of-sale of CSA. Cisco did not offer any replacement products.
https://en.wikipedia.org/wiki/Atypia
Atypia (from Greek, a + typos, without type; a condition of being irregular or nonstandard) is a histopathologic term for a structural abnormality in a cell, i.e. it is used to describe atypical cells. Atypia can be caused by an infection or irritation if diagnosed in a Pap smear, for example. In the uterus it is more likely to be precancerous. The related concept of dysplasia refers to an abnormality of development, and includes abnormalities on larger, histopathologic scales. Example features Features that constitute atypia have different definitions for different diseases, but often include the following nuclear abnormalities: Enlargement Pleomorphism Nuclear polychromasia, which means variability in nuclear chromatin content. Polychromasia otherwise refers to a disease of immature red blood cells. Numerous mitotic figures Examples for Barrett's esophagus In Barrett's esophagus, features that are classified as atypia but not as dysplasia are mainly: Nuclear stratification, wherein cell nuclei, which are normally located nearly at the same level between adjacent cells, are instead located at different levels. Crowding Hyperchromatism Prominent nucleoli Prognosis Atypia may or may not be a precancerous indication associated with later malignancy, but the level of appropriate concern is highly dependent on the context with which it is diagnosed. For example, already differentiated, specialised cells such as epithelia displaying "cellular atypia" are far less likely to become problematic (cancerous/malignant) than are myeloid progenitor cells of the immune system. The further back atypia occurs in an already specialised, differentiated cell's lineage, the more problematic it is likely to be, because such atypia is conferred on progeny cells further down the lineage of that cell type. See also Irregularity List of biological development disorders
https://en.wikipedia.org/wiki/List%20of%20presidents%20of%20the%20Royal%20Statistical%20Society
The president of the Royal Statistical Society is the head of the Royal Statistical Society (RSS), elected biennially by the Fellows of the Society. (The time period between elections has varied in the past, and in fact contested elections only rarely occur.) The president oversees the running of the Society and chairs its council meetings. In recent years, almost all presidents have been nominated following many years' service to the Society, although some have been nominated to mark their eminence in society generally, such as Harold Wilson. There has only been one contested election in the Society's history; in 1977, many fellows objected to the nomination by the Council of Campbell Adamson because he was not a statistician, was said to have made derogatory comments about statisticians, and principally because in the previous year he had been defeated in an election to the Council of the Society, and fellows felt that he was being foisted upon the Society by the current 'establishment' in an essentially undemocratic fashion. Henry Wynn was nominated by several fellows (including Adrian Smith, himself later president, and Philip Dawid) and won the election. Despite women being elected fellows from 1858, only five have been president of the society. In 2010, Bernard Silverman stepped down very early in his presidential term. This was due to his being appointed as chief scientific advisor to the Home Office, which presented a conflict of interest as the society sometimes issues expert statements on statistical matters in public life. In 2022 then president-elect David Firth withdrew on health grounds, prompting a further election in which Andrew Garrett, the current president, was elected. Honorary presidents Three Princes of Wales have been honorary presidents: 1872 - 1901 Albert Edward, Prince of Wales (later Edward VII) 1902 - 1910 George, Prince of Wales (later George V) 1921 - 1936 Edward, Prince of Wales (later Edward VIII) List of presidents 19th century 20
https://en.wikipedia.org/wiki/Edible%20ink%20printing
Edible ink printing is the process of creating preprinted images with edible food colors onto various confectionery products such as cookies, cakes and pastries. Designs made with edible ink can be either preprinted or created with an edible ink printer, a specialty device which transfers an image onto a thin, edible paper. Edible paper is made of starches and sugars and printed with edible food colors. Some edible inks and paper materials have been approved by the Food and Drug Administration and carry its generally recognized as safe certification. Paper The first papers of this process used rice paper, while modern versions use frosting sheets. The first U.S. patent for food printing, as it applied to edible ink printing, was filed by George J. Krubert of the Keebler Company and granted in 1981. Such paper is eaten without harmful effects. Most edible paper has no significant flavor and limited texture. Edible paper may be printed on by a standard printer and, upon application to a moist surface, dissolves while maintaining a high resolution. The end effect is that the image (usually a photograph) on the paper appears to be printed on the icing. Edible inks Edible printer inks have become prevalent and are used in conjunction with special ink printers. Ink that is not specifically marketed as being edible may be harmful or fatal if swallowed. Edible toner for laser printers is not currently available. Any inkjet or bubblejet printer can be used to print, although resolution may be poor, and care should be taken to avoid contaminating the edible inks with previously used inks. Inkjet or bubblejet printers can be converted to print using edible ink, and cartridges of edible ink are commercially available. It is always much safer to use a dedicated inkjet printer for edible ink printing. Some edible inks are powdered, but if they are easily soluble in water they can also be used as any other edible ink without reducing quality. Edible paper is used on cakes, co
https://en.wikipedia.org/wiki/Vinyl%20banner
Vinyl banners are a form of banner made of vinyl. The most commonly used material is PVC (polyvinyl chloride). Most banners are now digitally printed on large format inkjet printers which are capable of printing a full color outdoor billboard on a single piece of material. They are used for outdoor advertising. Description A vinyl banner is a banner that is made of vinyl. Materials The most commonly used material is a heavyweight vinyl known as PVC (polyvinyl chloride). The weights of the different banner substrates range from as light as to as heavy as , and may be double- or single-sided. Large banners (which can be so large that they cover the side of a building) are usually printed on a special mesh PVC material so that some wind can pass through them. Grommets (eyelets) can also be added in order to facilitate hanging of the banner. A high-frequency weld, stitching, or banner hem tape are also used to fasten the hems neatly, and provide for the insertion of grommets/eyelets. Printing Most banners are now digitally printed on large format inkjet printers which are capable of printing a full color outdoor billboard on a single piece of material. There are various types of vinyl banner; their useful life typically ranges from 3 to 5 years. Digitally printed banners: printed with aqueous (water-based), eco-solvent, solvent-based inks or UV-curable inkjet inks. The latter three types tend to contain durable pigments, which provide superior weather and UV-fading resistance. Large format inkjet printers are usually used, usually manufactured by companies such as HP, EFi Vutek, Mimaki, Roland, Mutoh, or one of many Chinese or Korean manufacturers. Very large banners may be produced using "grand format inkjet printers" of > width, or computer-controlled airbrush devices which print the ink directly onto the banner material. Some of the fastest wide and grand format inkjet printers are capable of printing up to per hour. Vinyl lettered banners: produced by applying
https://en.wikipedia.org/wiki/Mesic%20habitat
In ecology, a mesic habitat is a type of habitat with a well-balanced supply of moisture throughout the growing season, e.g., a mesic forest, a temperate hardwood forest, or dry-mesic prairie. Mesic is one of a triad of terms used to label a habitat or area that has a moderate or well-balanced supply of water within. The presence of water that allows a habitat to be labeled as mesic can arise from a number of outside factors. A handful of the processes that allow for this moderate moisture content to remain constant during crucial growing periods include streams and their offshoots, wet meadows, springs, seeps, irrigated fields, and high-elevation habitats. These features effectively provide drought insurance during the growing season against pressures such as increasing temperatures, lack of rain and, increasingly, urbanization. Other habitat types, such as mesic hammocks, occupy the middle ground between bottomlands and sandhills or clay hills. These habitats are often dominated by oaks, hickories, and magnolias. However, some habitats exhibit adaptations to fire: natural pinelands can persist on mesic (moderately drained) or hydric soils, which can also include mesic clay. Healthy mesic habitats can store large amounts of water given the typical rich loamy soil composition and streams, springs, etc. This allows the entire habitat to essentially function like a sponge, storing water in such a way that it can be deposited to neighboring habitats as needed. This supply helps to capture, store, and slowly release water, aiding nutrient facilitation and bolstering community interactions. Mesic habitats are common in drier regions of the western United States, such as the Great Basin, Great Plains, and the Rocky Mountains. This allows these habitats to become good water sources for neighboring dry climates and desert habitats. Healthy mesic habitats can provide extensive benefits to surrounding communi
https://en.wikipedia.org/wiki/Rostrum%20%28anatomy%29
Rostrum (from Latin , meaning beak) is a term used in anatomy for a number of phylogenetically unrelated structures in different groups of animals. Invertebrates In crustaceans, the rostrum is the forward extension of the carapace in front of the eyes. It is generally a rigid structure, but can be connected by a hinged joint, as seen in Leptostraca. Among insects, the rostrum is the name for the piercing mouthparts of the order Hemiptera as well as those of the snow scorpionflies, among many others. The long snout of weevils is also called a rostrum. Gastropod molluscs have a rostrum or proboscis. Cephalopod molluscs have hard beak-like mouthparts referred to as the rostrum. Vertebrates In mammals, the rostrum is that part of the cranium located in front of the zygomatic arches, where it holds the teeth, palate, and nasal cavity. Additionally, the corpus callosum of the human brain has a nerve tract known as the rostrum. The beak or snout of a vertebrate may also be referred to as the rostrum. Some cetaceans, including toothed whales such as dolphins and beaked whales, have rostrums (beaks) which evolved from their jawbones. The narwhal possesses a large rostrum (tusk) which evolved from a protruding canine tooth. Some fish have permanently protruding rostrums which evolved from their upper jawbones. Billfish (marlin, swordfish and sailfish) use rostrums (bills) to slash and stun prey. Paddlefish, goblin sharks and hammerhead sharks have rostrums packed with electroreceptors which signal the presence of prey by detecting weak electrical fields. Sawsharks and the critically endangered sawfish have rostrums (saws) which are both electro-sensitive and used for slashing. The rostrums extend ventrally in front of the fish. In the case of hammerheads the rostrum (hammer) extends both ventrally and laterally (sideways). See also
https://en.wikipedia.org/wiki/Microsoft%20Point-to-Point%20Compression
Microsoft Point-to-Point Compression (MPPC; described in RFC 2118) is a streaming data compression algorithm based on an implementation of Lempel–Ziv using a sliding window buffer. According to Hifn's IP statement, MPPC was patent-encumbered (last US patent granted on 1996-07-02). Whereas V.44 or V.42bis operate at layer 1 of the OSI model, MPPC operates at layer 2, giving it a significant advantage in terms of the computing resources available to it. The dialup modem's in-built compression (V.44 or V.42bis) can only occur after the data has been serially transmitted to the modem, typically at a maximum rate of 115,200 bit/s. MPPC, as it is controlled by the operating system, can receive as much data as it wishes to compress before forwarding it on to the modem. The modem's hardware must not delay data too much while waiting for more to compress in one packet, otherwise an unacceptable latency level will result. Nor can it afford to: doing so would require both sizable computing resources (on the scale of a modem) and significant buffer RAM. Software compression such as MPPC is free to use the host computer's resources, which exceed the modem's by several orders of magnitude. This allows it to keep a much larger buffer to work on at any one time, and it processes a given amount of data much faster. The end result is that where V.44 may achieve a maximum of 4:1 compression (230 kbit/s) but is usually limited to 115.2 kbit/s, MPPC is capable of a maximum of 8:1 compression (460 kbit/s). MPPC also, given the far greater computing power at its disposal, is more effective on data than V.44 and achieves higher compression ratios when 8:1 is not achievable. See also Microsoft Point-to-Point Encryption (MPPE) LZ77 LZS Stac Electronics
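MPPC's underlying idea is the classic LZ77 sliding-window scheme: replace repeated data with back-references into a history window. The toy compressor below illustrates only that idea; real MPPC uses a specific history size and a packed bit-level encoding defined in RFC 2118, neither of which is reproduced here.

```python
# Toy LZ77-style sliding-window compressor: emit literals or
# (offset, length) back-references into the history window.
def lz77_compress(data: bytes, window: int = 4096, min_match: int = 3):
    out, i = [], 0
    while i < len(data):
        best_len, best_off = 0, 0
        for j in range(max(0, i - window), i):   # scan the history window
            k = 0
            while i + k < len(data) and data[j + k] == data[i + k]:
                k += 1
            if k > best_len:
                best_len, best_off = k, i - j
        if best_len >= min_match:
            out.append(("match", best_off, best_len))   # back-reference
            i += best_len
        else:
            out.append(("lit", data[i]))                # literal byte
            i += 1
    return out

def lz77_decompress(tokens):
    buf = bytearray()
    for t in tokens:
        if t[0] == "lit":
            buf.append(t[1])
        else:
            _, off, length = t
            for _ in range(length):      # byte-wise copy handles overlap
                buf.append(buf[-off])
    return bytes(buf)

tokens = lz77_compress(b"abcabcabcd")
print(tokens)                            # literals a,b,c then a match
assert lz77_decompress(tokens) == b"abcabcabcd"
```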
https://en.wikipedia.org/wiki/Self-interacting%20dark%20matter
In astrophysics and particle physics, self-interacting dark matter (SIDM) is an alternative class of dark matter particles which have strong self-interactions, in contrast to the standard cold dark matter model (CDM). SIDM was postulated in 2000 as a solution to the core-cusp problem. In the simplest models of DM self-interactions, a Yukawa-type potential with a force carrier φ mediates the interaction between two dark matter particles. On galactic scales, DM self-interaction leads to energy and momentum exchange between DM particles. Over cosmological time scales this results in isothermal cores in the central region of dark matter haloes. If the self-interacting dark matter is in hydrostatic equilibrium, its pressure $p$ and density $\rho$ follow $\nabla p = -\rho\,\nabla(\Phi_{\rm DM} + \Phi_B)$, where $\Phi_{\rm DM}$ and $\Phi_B$ are the gravitational potentials of the dark matter and the baryons respectively. The equation naturally correlates the dark matter distribution to that of the baryonic matter distribution. With this correlation, the self-interacting dark matter can explain phenomena such as the Tully–Fisher relation. Self-interacting dark matter has also been postulated as an explanation for the DAMA annual modulation signal. Moreover, it has been shown that it can serve as the seed of supermassive black holes at high redshift. See also MACS J0025.4-1222, astronomical observations that constrain DM self-interaction ESO 146-5, the core of Abell 3827 that was claimed as the first evidence of SIDM Strongly interacting massive particle (SIMP), proposed to explain cosmic ray data Lambda-CDM model
https://en.wikipedia.org/wiki/Proof%20of%20knowledge
In cryptography, a proof of knowledge is an interactive proof in which the prover succeeds in 'convincing' a verifier that the prover knows something. What it means for a machine to 'know something' is defined in terms of computation. A machine 'knows something' if this something can be computed, given the machine as an input. As the program of the prover does not necessarily spit out the knowledge itself (as is the case for zero-knowledge proofs), a machine with a different program, called the knowledge extractor, is introduced to capture this idea. We are mostly interested in what can be proven by polynomial-time bounded machines. In this case the set of knowledge elements is limited to a set of witnesses of some language in NP. Let $x$ be a statement of language $L$ in NP, and $W(x)$ the set of witnesses for $x$ that should be accepted in the proof. This allows us to define the following relation: $R = \{(x, w) : x \in L,\ w \in W(x)\}$. A proof of knowledge for relation $R$ with knowledge error $\kappa$ is a two-party protocol with a prover $P$ and a verifier $V$ with the following two properties: Completeness: If $(x, w) \in R$, then the prover who knows witness $w$ for $x$ succeeds in convincing the verifier of his knowledge. More formally: $\Pr(\langle P(w), V \rangle(x) = 1) = 1$, i.e. given the interaction between the prover P and the verifier V, the probability that the verifier is convinced is 1. Validity: Validity requires that the success probability of a knowledge extractor $E$ in extracting the witness, given oracle access to a possibly malicious prover $\tilde{P}$, must be at least as high as the success probability of the prover $\tilde{P}$ in convincing the verifier. This property guarantees that no prover that doesn't know the witness can succeed in convincing the verifier. Details on the definition This is a more rigorous definition of Validity: Let $R$ be a witness relation, $W(x)$ the set of all witnesses for public value $x$, and $\kappa$ the knowledge error. A proof of knowledge is $\kappa$-valid if there exists a polynomial-time machine $E$, given oracle access to $\tilde{P}$, such that for every $\tilde{P}$ and every $x$, it is the case that $E^{\tilde{P}}(x) \in W(x) \cup \{\bot\}$ and
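A canonical concrete instance is the Schnorr protocol, which proves knowledge of a discrete logarithm. The sketch below (tiny, insecure parameters chosen purely for illustration) also shows the knowledge extractor at work, rewinding the prover to answer two different challenges from the same commitment.

```python
# Schnorr proof of knowledge of w with y = g^w (mod p), plus the
# extractor that recovers w from two accepting transcripts sharing a
# commitment. Parameters are toy-sized for illustration only.
import random

p, q, g = 23, 11, 2              # q divides p-1; g has order q mod p
w = 7                            # the secret witness
y = pow(g, w, p)                 # public value

def prover_commit():
    r = random.randrange(q)
    return r, pow(g, r, p)       # commitment t = g^r

def prover_respond(r, c):
    return (r + c * w) % q       # response s = r + c*w mod q

def verify(t, c, s):
    return pow(g, s, p) == (t * pow(y, c, p)) % p

# One honest run (completeness): the verifier accepts with probability 1.
r, t = prover_commit()
c = random.randrange(q)          # verifier's challenge
assert verify(t, c, prover_respond(r, c))

# Extractor (validity): rewind, reuse the same commitment with two
# different challenges, and solve for the witness.
c1, c2 = 3, 5
s1, s2 = prover_respond(r, c1), prover_respond(r, c2)
w_extracted = ((s1 - s2) * pow(c1 - c2, -1, q)) % q
assert w_extracted == w
```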
https://en.wikipedia.org/wiki/Journal%20of%20Symbolic%20Computation
The Journal of Symbolic Computation is a peer-reviewed monthly scientific journal covering all aspects of symbolic computation, published first by Academic Press and then by Elsevier. It is targeted at both mathematicians and computer scientists. It was established in 1985 by Bruno Buchberger, who served as its editor until 1994. The journal covers a wide variety of topics, including: Computer algebra, for which it is considered the top journal Computational geometry Automated theorem proving Applications of symbolic computation in education, science, and industry According to the Journal Citation Reports, its 2020 impact factor is 0.847. The journal is abstracted and indexed by Scopus and the Science Citation Index. See also Higher-Order and Symbolic Computation International Symposium on Symbolic and Algebraic Computation
https://en.wikipedia.org/wiki/Bovine%20papillomavirus
Bovine papillomaviruses (BPV) are a paraphyletic group of DNA viruses of the subfamily Firstpapillomavirinae of Papillomaviridae that are common in cattle. All BPVs have a circular double-stranded DNA genome. Infection causes warts (papillomas and fibropapillomas) of the skin and alimentary tract, and more rarely cancers of the alimentary tract and urinary bladder. They are also thought to cause the skin tumour equine sarcoid in horses and donkeys. BPVs have been used as a model for studying papillomavirus molecular biology and for dissecting the mechanisms by which this group of viruses causes cancer. Structure and genetic organisation Like other papillomaviruses, BPVs are small non-enveloped viruses with an icosahedral capsid around 50–60 nm in diameter. The capsid is formed of the L1 and L2 structural proteins, with the L1 C-terminus exposed. All BPVs have a circular double-stranded DNA genome of 7.3–8.0 kb. The genetic organisation of those BPVs which have been sequenced is broadly similar to that of other papillomaviruses. The open reading frames (ORFs) are all located on one strand, and are divided into early and late regions. The early region encodes the nonstructural proteins E1 to E7. There are three viral oncoproteins, E5, E6 and E7; BPVs of the Xipapillomavirus group lack E6. The late region encodes the structural proteins L1 and L2. There is also a non-coding long control region (LCR). Types Six types of BPV have been characterised, BPV-1 to BPV-6, which are divided into three broad subgroups. Deltapapillomavirus or fibropapillomaviruses (formerly known as subgroup A), including types 1 and 2, have a genome of around 7.9 kb. Similar papillomaviruses of ungulates (e.g. deer papillomavirus, European elk papillomavirus, ovine papillomaviruses 1 and 2) are also found in this group. Like all members of the papillomavirus family, these viruses infect only keratinocytes (epithelial cells); however, unlike other papillomaviruses, they cause proliferation of both keratinocytes and the underlying dermal fibroblasts.
https://en.wikipedia.org/wiki/Mother%20Armenia
Mother Armenia (Armenian: Մայր Հայաստան, Mayr Hayastan) is a female personification of Armenia. Her most public visual rendering is a monumental statue in Victory Park overlooking the capital city of Yerevan, Armenia. Mother Armenia statue in Yerevan The current statue replaces a monumental statue of General Secretary Joseph Stalin that was created as a victory memorial for World War II. During Stalin's rule over the Soviet Union, Grigor Harutyunyan, the first secretary of the Armenian Communist Party's Central Committee, and members of the government oversaw the construction of the monument, which was completed and unveiled to the people on November 29, 1950. The statue was considered a masterpiece of the sculptor Sergey Merkurov. The pedestal was designed by architect Rafayel Israyelian. Realizing that occupying a pedestal can be a short-term honour, Israyelian designed the pedestal to resemble a three-nave basilica Armenian church, as he confessed many years later: "Knowing that the glory of dictators is temporary, I have built a simple three-nave Armenian basilica". In contrast to the right-angled shapes of the exterior, the interior is light and pleasing to the eye and resembles Echmiadzin's seventh-century St. Hripsime Church. In spring 1962, the statue of Stalin was removed, with one soldier being killed and many injured during the process, and in 1967, the statue of Mother Armenia, designed by Ara Harutyunyan, was installed in its place. The model for "Mother Armenia" was a 17-year-old girl, Genya Muradian. Ara Harutyunyan met her at a store and persuaded her to pose for the sculpture. "Mother Armenia" has a height of 22 metres; with the pedestal included, the overall height of the monument is 51 metres. The statue is built of hammered copper while the pedestal-museum is of basalt. Symbolism The Mother Armenia statue symbolises peace through strength. It can remind viewers of some of the prominent female figures in Armenian history, such as Sose Mayrig and others, who took up arms to help the
https://en.wikipedia.org/wiki/Croatian%20interlace
The Croatian interlace or Croatian wattle, known as the pleter or troplet in Croatian, is a type of interlace, most characteristic for its three-ribbon pattern. It is one of the most frequently used patterns of pre-Romanesque Croatian art. It is found on and within churches as well as monasteries built in the early medieval Kingdom of Croatia between the 9th and the beginning of the 12th century. The ornamental strings were sometimes grouped together with animal and herbal figures. The most representative examples of inscriptions embellished with the interlace include the Baška tablet, the baptismal font of Duke Višeslav of Croatia and the Branimir Inscription. Other notable examples are located near Knin, in Ždrapanj and Žavić by the Bribir settlement, in Rižinice near Solin, and in Split and Zadar. Croatia has a civil and military decoration called the Order of the Croatian Interlace. Pleter Cross During the 8th century, the Pope granted Croatia its own idiosyncratic cross based on the Croatian interlace, commonly known as the Pleter Cross. The symbol is still widely used and cherished in Croatia today, as it represents both Christianity and Croatian culture. See also Interlace (art) Croatian pre-Romanesque art and architecture
https://en.wikipedia.org/wiki/Gn15
Gn15 is a rail modelling scale, using G scale (1:22.5) trains running on H0/00 gauge (16.5 mm) track, representing minimum gauge and miniature railways. Typical models are built between 1:20.3 and 1:24, or up to 1:29. The NEM 010 specification defines the designation IIp for this modelling gauge. History Gn15 modelling is a relatively new phenomenon in the model railroading world. While the idea of this scale has existed for some time, as evidenced by the early efforts of Marc Horovitz, editor of Garden Railways magazine, Gn15 did not gain any measure of popularity until the Sidelines range of models. Following the advent of these kits, a few other lines of kits became available. Initial community development took place in Yahoo email groups, but these have been superseded by the forums at Gn15.info as the primary form of communication between the far-flung practitioners of this scale. O9, GNine and related scales Alongside Gn15, other modelling scales have developed to cover both the modelling of minimum gauge lines in scales smaller than G, and 'miniature' lines in G scale. O9 or On15 is the use of N gauge track in 7 mm scale to represent a 'minimum gauge' line. In comparison, GNine is the use of 9 mm track to represent 'miniature' lines. GNine is a 'flexible' term for scale, referring to modelling using garden railway scales and N gauge track. GNine models can be built to scales between 7/8" and 1:35, representing prototypes ranging from minimum gauge lines to miniature railways.
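As a quick arithmetic check of the scale/gauge pairing just described (my own illustration, assuming the 15 in = 381 mm prototype gauge implied by the name Gn15):

```python
# Reducing a 15 in (381 mm) prototype gauge at various modelling ratios shows
# why G-scale stock on 16.5 mm H0/00 track plausibly represents such lines.
PROTOTYPE_GAUGE_MM = 381.0        # 15 inches
for ratio in (20.3, 22.5, 24.0, 29.0):
    model_gauge = PROTOTYPE_GAUGE_MM / ratio
    print(f"1:{ratio:>4}  ->  {model_gauge:5.2f} mm model gauge")
# 1:22.5 gives 16.93 mm, close enough to 16.5 mm track; the common build
# scales of 1:20.3 (18.8 mm) and 1:24 (15.9 mm) bracket H0/00 track.
```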
https://en.wikipedia.org/wiki/Tinker%20%28software%29
Tinker, previously stylized as TINKER, is a suite of computer software applications for molecular dynamics simulation. The codes provide a complete and general set of tools for molecular mechanics and molecular dynamics, with some special features for biomolecules. The core of the software is a modular set of callable routines which allow manipulating coordinates and evaluating potential energy and derivatives in a straightforward way. Tinker works on Windows, macOS, Linux and Unix. The source code is available free of charge to non-commercial users under a proprietary license. The code is written in portable FORTRAN 77, Fortran 95 or CUDA with common extensions, and some C. Core developers are: (a) the Jay Ponder lab, at the Department of Chemistry, Washington University in St. Louis, St. Louis, Missouri. Laboratory head Ponder is Full Professor of Chemistry, and of Biochemistry & Molecular Biophysics; (b) the Pengyu Ren lab, at the Department of Biomedical Engineering, University of Texas at Austin, Austin, Texas. Laboratory head Ren is Full Professor of Biomedical Engineering; (c) Jean-Philip Piquemal's research team at the Laboratoire de Chimie Théorique, Department of Chemistry, Sorbonne University, Paris, France. Research team head Piquemal is Full Professor of Theoretical Chemistry. Features The Tinker package is based on several related codes: (a) the canonical Tinker, version 8, (b) the Tinker9 package as a direct extension of canonical Tinker to GPU systems, (c) the Tinker-HP package for massively parallel MPI applications on hybrid CPU- and GPU-based systems, (d) Tinker-FFE for visualization of Tinker calculations via a Java-based graphical interface, and (e) the Tinker-OpenMM package for Tinker's use with GPUs via an interface to the OpenMM software. All of the Tinker codes are available from the TinkerTools organization site on GitHub. Additional information is available from the TinkerTools community web site. Programs are provided to perform many functions.
https://en.wikipedia.org/wiki/Chemokine%20receptor
Chemokine receptors are cytokine receptors found on the surface of certain cells that interact with a type of cytokine called a chemokine. There have been 20 distinct chemokine receptors discovered in humans. Each has a rhodopsin-like 7-transmembrane (7TM) structure and couples to G proteins for signal transduction within a cell, making them members of the large protein family of G protein-coupled receptors. Following interaction with their specific chemokine ligands, chemokine receptors trigger a flux in intracellular calcium (Ca2+) ions (calcium signaling). This causes cell responses, including the onset of a process known as chemotaxis that traffics the cell to a desired location within the organism. Chemokine receptors are divided into different families, CXC chemokine receptors, CC chemokine receptors, CX3C chemokine receptors and XC chemokine receptors, corresponding to the 4 distinct subfamilies of chemokines they bind. These four chemokine subfamilies differ in the spacing of the cysteine residues near the N-terminus of the chemokine. Structural characteristics Chemokine receptors are G protein-coupled receptors containing 7 transmembrane domains that are found predominantly on the surface of leukocytes, placing them among the rhodopsin-like receptors. Approximately 19 different chemokine receptors have been characterized to date, which share many common structural features; they are composed of about 350 amino acids that are divided into a short and acidic N-terminal end, seven helical transmembrane domains with three intracellular and three extracellular hydrophilic loops, and an intracellular C-terminus containing serine and threonine residues that act as phosphorylation sites during receptor regulation. The first two extracellular loops of chemokine receptors are linked together by disulfide bonding between two conserved cysteine residues. The N-terminal end of a chemokine receptor binds to chemokines and is important for ligand specificity. G-proteins couple
https://en.wikipedia.org/wiki/Speedwriting
Speedwriting is the trademark under which three versions of a shorthand system were marketed during the 20th century. The original version was designed so that it could be written with a pen or typed on a typewriter. At the peak of its popularity, Speedwriting was taught in more than 400 vocational schools and its advertisements were ubiquitous in popular American magazines. Description of the original version The original version of Speedwriting uses letters of the alphabet and a few punctuation marks to represent the sounds of English. There are abbreviations for common prefixes and suffixes; for example, uppercase N represents enter- or inter-, so "entertainment" is written as Ntn- and "interrogation" is reduced to Ngj. Vowels are omitted from many words and arbitrary abbreviations are provided for the most common words. A specimen sentence: "Let us have a quiet little party and surprise our neighbor on the farm." By reducing the use of spaces between words a high level of brevity can be achieved: "laugh and the world laughs with you" can be written as "lfatwolfs wu". Original Speedwriting can be typed on a typewriter or computer keyboard. When writing with a pen, one uses regular cursive handwriting with a few small modifications. Lowercase 't' is written as a simple vertical line and 'l' must be written with a distinctive loop; specific shapes for various letters are prescribed in the textbook. With twelve weeks of training, students could achieve speeds of 80 to 100 words per minute writing with a pen. The inventor of the system was able to type notes on a typewriter as fast as anyone could speak, and she therefore believed Speedwriting could eliminate the need for stenotype machines in most applications. History of the original version Emma B. Dearborn (February 1, 1875 – July 28, 1937) worked as a shorthand instructor and trainer of shorthand teachers at Simmons College, Columbia University, and several other institutions. She was an expert in several pen stenography systems.
https://en.wikipedia.org/wiki/Dump%20analyzer
A dump analyzer is a programming tool which is used for understanding a machine-readable core dump. GNU utilities such as objdump, readelf, nm, and the powerful gdb can all be used to look inside core files. Introspector is a core dump analyzer for a compiler.
https://en.wikipedia.org/wiki/Frattini%27s%20argument
In group theory, a branch of mathematics, Frattini's argument is an important lemma in the structure theory of finite groups. It is named after Giovanni Frattini, who used it in a paper from 1885 when defining the Frattini subgroup of a group. The argument was taken by Frattini, as he himself admits, from a paper of Alfredo Capelli dated 1884. Frattini's Argument Statement If $G$ is a finite group with normal subgroup $H$, and if $P$ is a Sylow $p$-subgroup of $H$, then $G = N_G(P)H$, where $N_G(P)$ denotes the normalizer of $P$ in $G$ and $N_G(P)H$ means the product of group subsets. Proof The group $P$ is a Sylow $p$-subgroup of $H$, so every Sylow $p$-subgroup of $H$ is an $H$-conjugate of $P$, that is, it is of the form $h^{-1}Ph$ for some $h \in H$ (see Sylow theorems). Let $g$ be any element of $G$. Since $H$ is normal in $G$, the subgroup $g^{-1}Pg$ is contained in $H$. This means that $g^{-1}Pg$ is a Sylow $p$-subgroup of $H$. Then by the above, it must be $H$-conjugate to $P$: that is, $g^{-1}Pg = h^{-1}Ph$ for some $h \in H$, and so $hg^{-1}Pgh^{-1} = P$. Thus, $gh^{-1} \in N_G(P)$, and therefore $g \in N_G(P)H$. But $g$ was arbitrary, and so $G = N_G(P)H$. Applications Frattini's argument can be used as part of a proof that any finite nilpotent group is a direct product of its Sylow subgroups. By applying Frattini's argument to $N_G(N_G(P))$, it can be shown that $N_G(N_G(P)) = N_G(P)$ whenever $G$ is a finite group and $P$ is a Sylow $p$-subgroup of $G$. More generally, if a subgroup $M \leq G$ contains $N_G(P)$ for some Sylow $p$-subgroup $P$ of $G$, then $M$ is self-normalizing, i.e. $M = N_G(M)$. External links Frattini's Argument on ProofWiki
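The statement can be checked by brute force on a small concrete case. The sketch below (my own choice of example) takes G = S4, the normal subgroup H = A4, and P the Sylow 3-subgroup of A4 generated by the 3-cycle (0 1 2), and verifies G = H·N_G(P) by direct enumeration:

```python
from itertools import permutations

G = list(permutations(range(4)))                      # S4 as tuples

def compose(a, b):                                    # (a*b)(i) = a(b(i))
    return tuple(a[b[i]] for i in range(4))

def inverse(a):
    inv = [0] * 4
    for i, ai in enumerate(a):
        inv[ai] = i
    return tuple(inv)

def is_even(a):                                       # parity via inversion count
    return sum(a[i] > a[j] for i in range(4) for j in range(i + 1, 4)) % 2 == 0

H = [g for g in G if is_even(g)]                      # A4, normal in S4

c = (1, 2, 0, 3)                                      # the 3-cycle (0 1 2)
P = {(0, 1, 2, 3), c, compose(c, c)}                  # Sylow 3-subgroup of A4

# N_G(P) = { g in G : g P g^-1 = P }
N = [g for g in G if {compose(compose(g, x), inverse(g)) for x in P} == P]

# Frattini: every element of G is a product h*n with h in H, n in N_G(P).
products = {compose(h, n) for h in H for n in N}
assert products == set(G)
print(f"|G|={len(G)}, |H|={len(H)}, |N_G(P)|={len(N)}: G = H N_G(P) verified")
```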
https://en.wikipedia.org/wiki/Microecology
Microecology means microbial ecology or the ecology of a microhabitat. In humans, gut microecology is the study of the microbial ecology of the human gut, which includes gut microbiota composition, its metabolic activity, and the interactions between the microbiota, the host, and the environment. Research in human gut microecology is important because the microbiome can have profound effects on human health. The microbiome is known to influence the immune system, digestion, and metabolism, and is thought to play a role in a variety of diseases, including diabetes, obesity, inflammatory bowel disease, and cancer. Studying the microbiome can help us better understand these diseases and develop treatments. Microecology is a large field that includes many topics such as evolution, biodiversity, exobiology, ecology, bioremediation, recycling, and food microbiology. Broadly, it is the study of the interactions between living organisms and their environment, and of how these interactions affect both. Additionally, it is a multidisciplinary area of study, combining elements of biology, chemistry, physics, and mathematics. It focuses on the interactions between microorganisms and the environment they inhabit, their effects on the environment, and their effects on other organisms. Microecology also studies the effects of human activity on the environment and how this affects the growth and development of microorganisms. Microecology has many applications in the fields of medicine, agriculture, and biotechnology. It is also important for understanding the cycling of nutrients in the environment and the behavior of microorganisms in various environments. Moving onwards Intestinal microecology is a newer area of microecology study. The gut hosts a complex microflora that is directly related to human health. Therefore, regulation of intestinal microecology may help in the treatment of many diseases. It has been reported that intestinal flora is involved in anti-tumour immunity
https://en.wikipedia.org/wiki/Bog-wood
Bog-wood (also spelled bogwood or bog wood), also known as abonos and, especially amongst pipe smokers, as morta, is a material from trees that have been buried in peat bogs and preserved from decay by the acidic and anaerobic bog conditions, sometimes for hundreds or even thousands of years. The wood is usually stained brown by tannins dissolved in the acidic water. Bog-wood represents the early stages in the fossilisation of wood, with further stages ultimately forming jet, lignite and coal over a period of many millions of years. Bog-wood may come from any tree species naturally growing near or in bogs, including oak (Quercus – "bog oak"), pine (Pinus), yew (Taxus), swamp cypress (Taxodium) and kauri (Agathis). Bog-wood is often removed from fields and placed in clearance cairns. It is a rare form of timber that is claimed to be "comparable to some of the world's most expensive tropical hardwoods". Formation process Bog-wood is created from the trunks of trees that have lain in bogs, and bog-like conditions such as lakes, river bottoms and swamps, for centuries and even millennia. Deprived of oxygen, the wood undergoes the process of fossilization. Water flow and depth play a special role in the creation of bog-wood. Currents bind the minerals and iron in the water with tannins in the wood, naturally staining the wood in the process. This centuries-long process, often termed "maturation," turns the wood from golden-brown to completely black, while increasing its hardness to such a level that it can only be carved with the use of specialty cutting tools. While the time necessary for the oak to transform into bog-wood varies, the "maturation" commonly lasts thousands of years. Because of these conditions, no two trunks of the same color can be found. Excavation sites Sites of high-quality bog-wood in the world are very rare. Even at sites expected to yield it, bog-wood is hard to find, and access to the river bank and its bed is often difficult.
https://en.wikipedia.org/wiki/Juno%20First
Juno First is an arcade video game developed by Konami and released in 1983. It was licensed to Gottlieb in the United States. Juno First is a fixed shooter with a slightly tilted perspective, similar to Nintendo's Radar Scope from 1980. The game was ported to the Commodore 64, Atari 8-bit family, MSX, IBM PC, and IBM PCjr. Gameplay Juno First presents a set number of enemies per level, but they do not form a gallery formation like those in Galaga or Space Invaders. Instead, the player's ship can move forward and backward (in addition to left and right) to hunt enemies on a playfield that is oriented vertically but tilted toward the horizon. This style of gameplay would be re-used in a later Konami shooter, Axelay. The player destroys waves of enemies to finish levels. Starting formations vary from stage to stage. In addition, the player can pick up a humanoid, upon which the screen takes on a red tint. While this lasts, every enemy the player shoots earns the player 200 more points than the previous enemy destroyed. The base score for shooting an enemy while in humanoid mode depends on the stage. Ports Juno First was first ported in the western market to the Commodore 64 in 1983. In 1984, Atari 8-bit family and IBM PC/IBM PCjr conversions were also released. All of these ports were handled by Datasoft. The Commodore and Atari ports were programmed by Greg Hiscott, whereas the IBM version was programmed by Scott Titus. In Japan, Sony released a conversion of Juno First in 1983 for MSX computers. This version soon made its way to other MSX markets as well. Legacy An unofficial hobbyist port, with the same name as the original, was made available for the Atari 2600. See also Beamrider (1983)
https://en.wikipedia.org/wiki/Net-SNMP
Net-SNMP is a suite of software for using and deploying the SNMP protocol (v1, v2c and v3, plus the AgentX subagent protocol). It supports IPv4, IPv6, IPX, AAL5, Unix domain sockets and other transports. It contains a generic client library, a suite of command-line applications, a highly extensible SNMP agent, Perl modules and Python modules. Distribution Net-SNMP is housed on SourceForge and is usually in the top 100 projects in the SourceForge ranking system. It was the March 2005 SourceForge Project of the Month. It is very widely distributed and comes included with many operating systems, including most distributions of Linux, FreeBSD, OpenBSD, Solaris, and OS X. It is also available from the Net-SNMP web site. History Steve Waldbusser of CMU started a freely available SNMP tool kit in 1992. The package was later abandoned by CMU, and Wes Hardaker at UC Davis renamed it to UCD-SNMP and extended it to meet the network management needs of the Electrical Engineering department there. Eventually Mr. Hardaker left the university and, recognizing that development was now spread across the network, renamed the project to Net-SNMP to reflect its distributed development. The roots of the Net-SNMP project are long and a full description can be found on the Net-SNMP history page. SNMP Applications Included With Net-SNMP Snmpget The command snmpget uses the snmpget application to retrieve information associated with a specific object identifier (OID) from a target device. An example of snmpget usage, retrieving the OID 'sysUpTime' under the community string 'demopublic' from 'test.net-snmp.org' as the host name of the agent to query, is sketched after this section. Snmpwalk The command snmpwalk uses the SNMP GETNEXT request to query a network for a tree of information. An object identifier (OID) may be given on the command line. This OID specifies which portion of the object identifier space will be searched using GETNEXT requests. All variables in the subtree below the given OID are queried.
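Here is a minimal sketch of the snmpget example referred to above, driving the Net-SNMP command-line tool from Python; it assumes the snmpget binary is installed and the public test agent is reachable, and the hostname, community string and OID are the ones named in the text:

```python
import subprocess

# Query sysUpTime.0 from the public Net-SNMP test agent over SNMP v1.
result = subprocess.run(
    ["snmpget", "-v", "1", "-c", "demopublic",   # protocol version, community
     "test.net-snmp.org", "sysUpTime.0"],        # agent host and OID to fetch
    capture_output=True, text=True, check=True,  # raises if the agent is unreachable
)
print(result.stdout)   # one line of the form "...sysUpTimeInstance = Timeticks: ..."
```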
https://en.wikipedia.org/wiki/Micro%20perforated%20plate
A micro perforated plate (MPP) is a device used to absorb sound, reducing its intensity. It consists of a thin flat plate, made from one of several different materials, with small holes punched in it. An MPP offers an alternative to traditional sound absorbers made from porous materials. Structure An MPP is normally 0.5–2 mm thick. The holes typically cover 0.5 to 2% of the plate, depending on the application and the environment in which the MPP is to be mounted. Hole diameter is usually less than 1 millimeter, typically 0.05 to 0.5 mm. The holes are usually made using the microperforation process. Operating principle The goal of a sound absorber is to convert acoustical energy into heat. In a traditional absorber, the sound wave propagates into the absorber. Because of the proximity of the porous material, the oscillating air molecules inside the absorber lose their acoustical energy due to friction. An MPP works in almost the same way. When the oscillating air molecules penetrate the MPP, the friction between the air in motion and the surface of the MPP dissipates the acoustical energy. Comparison with other materials Traditional sound absorbers are porous materials such as mineral wool, glass or polyester fibres. It is not possible to use these materials in harsh environments such as engine compartments. Traditional absorbers have many drawbacks, including pollution, the risk of fire, and problems with the useful lifetime of the absorbing material. The main reason why micro perforates have become so popular among acousticians is that they offer good absorption performance without the disadvantages of a porous material. Furthermore, an MPP is often also preferable from an aesthetic point of view. History For some time, perforated metal panels with holes in the 1–10 mm range have been used as cages for sound-absorbing glass-fiber batts, where the large holes let the sound waves reach into the absorbent fiber. Another use has been the creation of narrowband Helmholtz resonators.
https://en.wikipedia.org/wiki/Ljung%E2%80%93Box%20test
The Ljung–Box test (named for Greta M. Ljung and George E. P. Box) is a type of statistical test of whether any of a group of autocorrelations of a time series are different from zero. Instead of testing randomness at each distinct lag, it tests the "overall" randomness based on a number of lags, and is therefore a portmanteau test. This test is sometimes known as the Ljung–Box Q test, and it is closely connected to the Box–Pierce test (which is named after George E. P. Box and David A. Pierce). In fact, the Ljung–Box test statistic was described explicitly in the paper that led to the use of the Box–Pierce statistic, and from which that statistic takes its name. The Box–Pierce test statistic is a simplified version of the Ljung–Box statistic for which subsequent simulation studies have shown poor performance. The Ljung–Box test is widely applied in econometrics and other applications of time series analysis. A similar assessment can also be carried out with the Breusch–Godfrey test and the Durbin–Watson test. Formal definition The Ljung–Box test may be defined as: $H_0$: The data are independently distributed (i.e. the correlations in the population from which the sample is taken are 0, so that any observed correlations in the data result from randomness of the sampling process). $H_a$: The data are not independently distributed; they exhibit serial correlation. The test statistic is $Q = n(n+2)\sum_{k=1}^{h} \hat{\rho}_k^2/(n-k)$, where $n$ is the sample size, $\hat{\rho}_k$ is the sample autocorrelation at lag $k$, and $h$ is the number of lags being tested. Under $H_0$ the statistic $Q$ asymptotically follows a $\chi^2_{(h)}$ distribution. For significance level $\alpha$, the critical region for rejection of the hypothesis of randomness is $Q > \chi^2_{1-\alpha,h}$, where $\chi^2_{1-\alpha,h}$ is the $(1-\alpha)$-quantile of the chi-squared distribution with $h$ degrees of freedom. The Ljung–Box test is commonly used in autoregressive integrated moving average (ARIMA) modeling. Note that it is applied to the residuals of a fitted ARIMA model, not the original series, and in such applications the hypothesis actually being tested is that the residuals from the ARIMA model have no autocorrelation.
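The statistic is easy to compute directly. The sketch below (function name and test data mine) transcribes the formula above with NumPy and obtains the p-value from the chi-squared survival function; for residuals of a fitted ARIMA model the degrees of freedom would be reduced by the number of estimated parameters:

```python
import numpy as np
from scipy.stats import chi2

def ljung_box(x, h):
    """Ljung-Box Q statistic and asymptotic p-value for lags 1..h."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    denom = np.sum(xc ** 2)
    # sample autocorrelations rho_hat_k for k = 1..h
    rho = np.array([np.sum(xc[k:] * xc[:-k]) / denom for k in range(1, h + 1)])
    q = n * (n + 2) * np.sum(rho ** 2 / (n - np.arange(1, h + 1)))
    return q, chi2.sf(q, df=h)       # survival function gives the p-value

rng = np.random.default_rng(0)
q, p = ljung_box(rng.standard_normal(500), h=10)
print(f"Q = {q:.2f}, p = {p:.3f}")   # white noise: large p, no serial correlation
```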
https://en.wikipedia.org/wiki/Pentraxins
Pentraxins (PTX), also known as pentaxins, are an evolutionarily conserved family of proteins characterised by containing a pentraxin protein domain. Proteins of the pentraxin family are involved in acute immunological responses. They are a class of pattern recognition receptors (PRRs). They are a superfamily of multifunctional conserved proteins, some of which are components of the humoral arm of innate immunity and behave as functional ancestors of antibodies (Abs). They are known as classical acute phase proteins (APP), known for over a century. Structure Pentraxins are characterised by calcium-dependent ligand binding and a distinctive flattened β-jellyroll structure similar to that of the legume lectins. The name "pentraxin" is derived from the Greek words for five (πέντε, pente) and axle (axis), relating to the radial symmetry of five monomers forming a ring approximately 95 Å across and 35 Å deep observed in the first members of this family to be identified. The "short" pentraxins include serum amyloid P component (SAP) and C-reactive protein (CRP). The "long" pentraxins include PTX3 (a cytokine-modulated molecule) and several neuronal pentraxins. Family members Three of the principal members of the pentraxin family are serum proteins: namely, CRP, SAP, and hamster female protein (FP). PTX3 (or TSG-14) protein is a cytokine-induced protein that is homologous to CRPs and SAPs. C-reactive protein C-reactive protein is expressed during the acute phase response to tissue injury or inflammation in mammals. The protein resembles antibody and performs several functions associated with host defence: it promotes agglutination, bacterial capsular swelling and phagocytosis, and activates the classical complement pathway through its calcium-dependent binding to phosphocholine. CRPs have also been sequenced in an invertebrate, Limulus polyphemus (Atlantic horseshoe crab), where they are a normal constituent of the hemolymph. Pentraxin 3 Pentraxin 3 (PTX3) is an acute phase protein.
https://en.wikipedia.org/wiki/I.%20M.%20Dharmadasa
I.M. Dharmadasa is Professor of Applied Physics and leads the Electronic Materials and Solar Energy (solar cells and other semiconductor devices) Group at Sheffield Hallam University, UK. Dharmadasa has worked in semiconductor research since becoming a PhD student at Durham University as a Commonwealth Scholar in 1977, under the supervision of the late Sir Gareth Roberts. His interest in the electrodeposition of thin film solar cells grew when he joined the Apollo Project at BP Solar in 1988. He continued this area of research on joining Sheffield Hallam University in 1990. Career and research He has published over 200 refereed and conference papers, has six British patents on thin film solar cells and has made over 175 conference presentations. He has made five book contributions and is the author of the book Advances in Thin Film Solar Cells, which was published in 2012. Dharmadasa has also successfully supervised 20 Ph.D. and M.Phil. candidates and has had 14 years of PDRA support. He has gained research council and international government funding, and was included in the 2001 Research Assessment Exercise for Metallurgy and Materials, which gained the top rating of five. His recent scientific breakthroughs [1-2], which are fundamental to describing the photovoltaic activity of cadmium telluride/cadmium sulfide solar cells, were summarised in a "new theoretical model for CdTe". Based on these novel ideas he has reported a higher efficiency of 18% for the cadmium telluride/cadmium sulfide cell [3], compared with the 16.5% reported by NREL in the United States in 2002. He currently focuses on low-cost methods to develop thin film solar cells based on electrodeposited copper indium gallium selenide materials, where he has reported efficiencies of 15.9% to date, compared with the highest value of 19.5% reported by NREL [4] using more expensive techniques. His article 'Fermi level pinning and effects on CuInGaSe2-based thin-film solar cells' was selected to be part of the Semiconductor
https://en.wikipedia.org/wiki/Satoyoshi%20syndrome
Satoyoshi syndrome, also known as Komuragaeri disease, is a rare progressive disorder of presumed autoimmune cause, characterized by painful muscle spasms, alopecia, diarrhea, endocrinopathy with amenorrhoea, and secondary skeletal abnormalities. The syndrome was first reported in 1967 by Eijiro Satoyoshi and Kaneo Yamada in Tokyo, Japan. To this date, fewer than 50 cases worldwide have been reported for the syndrome. People with the syndrome typically develop symptoms of the illness at a young age, usually between 6 and 15 years old. The initial symptoms are muscle spasms in the legs and alopecia (hair loss). The spasms are painful and progressive, and their frequency varies from a few to 100 per day, each lasting a few minutes. They can be sufficiently severe to produce abnormal posturing of the affected limbs, particularly the thumbs. With progression, the illness involves the pectoral girdle and trunk muscles and finally the masseters and temporal muscles. The spasms usually spare the facial muscles. Severe spasms can interfere with respiration and speech. During an attack-free period, stimulus-insensitive myoclonus can occur in the arms, legs, and neck. Diarrhea occurs in the first 2–3 years, with intolerance to carbohydrate and high-glucose diets. Endocrinopathy manifests as amenorrhea and hypoplasia of the uterus. Affected children fail to gain height after 10–12 years of age. The syndrome is not known to be a primary cause of mortality, but some patients have died as a result of secondary complications, such as respiratory failure and malnourishment. In one 6-year-old patient, antibodies to the GABA-producing enzyme glutamate decarboxylase were detected. See also Stiff-person syndrome
https://en.wikipedia.org/wiki/Restriction%20map
A restriction map is a map of known restriction sites within a sequence of DNA. Restriction mapping requires the use of restriction enzymes. In molecular biology, restriction maps are used as a reference to engineer plasmids or other relatively short pieces of DNA, and sometimes for longer genomic DNA. There are other ways of mapping features on longer DNA molecules, such as mapping by transduction. One approach to constructing a restriction map of a DNA molecule is to sequence the whole molecule and to run the sequence through a computer program that will find the recognition sites that are present for every known restriction enzyme. Before sequencing was automated, it would have been prohibitively expensive to sequence an entire DNA strand. To find the relative positions of restriction sites on a plasmid, a technique involving single and double restriction digests is used. Based on the sizes of the resultant DNA fragments, the positions of the sites can be inferred. Restriction mapping is a very useful technique for determining the orientation of an insert in a cloning vector, by mapping the position of an off-center restriction site in the insert. Method The experimental procedure first requires an aliquot of purified plasmid DNA for each digest to be run. Digestion is then performed with each enzyme or combination of enzymes chosen. The resulting samples are subsequently run on an electrophoresis gel, typically an agarose gel. The first step following the completion of electrophoresis is to add up the sizes of the fragments in each lane. The sum of the individual fragments should equal the size of the original fragment, and each digest's fragments should also sum to the same size as each other. If fragment sizes do not properly add up, there are two likely problems. In one case, some of the smaller fragments may have run off the end of the gel. This frequently occurs if the gel is run too long. A second possible source of error is th
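The digest bookkeeping described above is simple to mimic in code. The sketch below (plasmid size and cut positions invented for illustration) predicts the fragment sizes each digest of a circular plasmid should produce, and shows that every lane's fragments sum to the plasmid size, which is the consistency check described in the Method section:

```python
def fragment_sizes(plasmid_size, site_positions):
    """Fragment sizes (bp) for a circular plasmid cut at the given positions."""
    sites = sorted(site_positions)
    return sorted(
        # distance to the next site around the circle; a single cut
        # linearizes the plasmid into one full-length fragment
        ((sites[(i + 1) % len(sites)] - sites[i]) % plasmid_size) or plasmid_size
        for i in range(len(sites))
    )

PLASMID = 4000                       # hypothetical 4 kb plasmid
ECORI, BAMHI = [400], [1200, 2600]   # hypothetical cut positions

print(fragment_sizes(PLASMID, ECORI))            # [4000]
print(fragment_sizes(PLASMID, BAMHI))            # [1400, 2600]
print(fragment_sizes(PLASMID, ECORI + BAMHI))    # [800, 1400, 1800]
# Each lane sums to 4000 bp; comparing single and double digests lets the
# relative site positions be inferred, as described above.
```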
https://en.wikipedia.org/wiki/Stowers%20Institute%20for%20Medical%20Research
The Stowers Institute for Medical Research is a biomedical research organization that conducts basic research on genes and proteins that control fundamental processes in living cells, to analyze diseases and find keys to their causes, treatment, and prevention. It is located in Kansas City, Missouri, adjacent to the University of Missouri–Kansas City main campus. The Institute has spent over US$1 billion on research. Structure The Institute was incorporated with an initial donation of $500 million in 1994 by James E. Stowers, founder of American Century Investments, and his wife Virginia Stowers, both cancer survivors. Over the next decade, the couple endowed the institute with gifts totaling almost $2 billion. The Institute opened its doors in November 2000 on the former site of Menorah Hospital. In 2008, there were 25 independent research programs plus core facilities in bioinformatics, proteomics, microarray, molecular biology, flow cytometry, and microscopy. In total, the organization employs more than 550 scientists, research associates, technicians and support staff, including more than 140 postdoctoral research associates and graduate students. The Institute is recognized by the IRS as a medical research organization. It is a Missouri not-for-profit corporation, and is a 501(c)(3) charitable organization.
https://en.wikipedia.org/wiki/Paloozaville
Paloozaville is an animated/live-action series for children and their parents on the Video On Demand network, Mag Rack. The Mag Rack original series was created exclusively for On Demand and stars John Lithgow as Paloozaville's absent-minded mayor. The show is based on Lithgow's best-selling children's books. Every episode begins with a boredom crisis that is subsequently solved by co-host Suza Palooza (Carmen De La Paz) and her team of kids. Each episode has a different theme centering on arts and crafts, music, history, dance, literature, and drama. The series strives to create educational children's entertainment that allows parents to spend time with their children and learn at the same time. External links https://web.archive.org/web/20060822014311/http://www.magrack.com/paloozaville/
https://en.wikipedia.org/wiki/Herkogamy
Herkogamy (or hercogamy) is the spatial separation of the anthers and stigma in hermaphroditic angiosperms. It is a common strategy for reducing self-fertilization. Common forms Approach herkogamy (seen in "pin flowers") occurs when the stigma is presented above the level of the anthers. This arrangement of sex organs causes floral visitors to first contact the stigma before removing pollen from the anthers. This form of herkogamy is considered to be common, and is associated with a large, diverse fauna of floral visitors/pollinators. Reverse herkogamy (seen in "thrum flowers") occurs when the stigma is recessed below the level of the anthers. This arrangement causes floral visitors to first contact the anthers before the stigma. For this reason, reverse herkogamy is believed to facilitate greater pollen export than approach herkogamy. This type of sex-organ arrangement is typically associated with lepidopteran (moth or butterfly) pollination. See also Heterostyly
https://en.wikipedia.org/wiki/Nobody%20%28username%29
In many Unix variants, "nobody" is the conventional name of a user identifier which owns no files, is in no privileged groups, and has no abilities except those which every other user has. It is normally not enabled as a user account, i.e. has no home directory or login credentials assigned. Some systems also define an equivalent group "nogroup". Uses The pseudo-user "nobody" and group "nogroup" are used, for example, in the NFSv4 implementation of Linux by idmapd, if a user or group name in an incoming packet does not match any known username on the system. It was once common to run daemons as nobody, especially on servers, in order to limit the damage that could be done by a malicious user who gained control of them. However, the usefulness of this technique is reduced if more than one daemon is run like this, because then gaining control of one daemon would provide control of them all. The reason is that processes owned by the same user have the ability to send signals to each other and use debugging facilities to read or even modify each other's memory. Modern practice, as recommended by the Linux Standard Base, is to create a separate user account for each daemon. See also User identifier Principle of least privilege Privilege revocation (computing)
https://en.wikipedia.org/wiki/Conditioned%20disjunction
In logic, conditioned disjunction (sometimes called conditional disjunction) is a ternary logical connective introduced by Church. Given operands p, q, and r, which represent truth-valued propositions, the meaning of the conditioned disjunction $[p, q, r]$ is given by $[p, q, r] \Leftrightarrow (q \to p) \land (\lnot q \to r)$. In words, $[p, q, r]$ is equivalent to: "if q then p, else r", or "p or r, according as q or not q". This may also be stated as "q implies p, and not q implies r". So, for any values of p, q, and r, the value of $[p, q, r]$ is the value of p when q is true, and is the value of r otherwise. The conditioned disjunction is also equivalent to $(p \land q) \lor (\lnot q \land r)$, and has the same truth table as the ternary conditional operator ?: in many programming languages (with $[b, a, c]$ being equivalent to a ? b : c). In electronic logic terms, it may also be viewed as a single-bit multiplexer. In conjunction with truth constants denoting each truth-value, conditioned disjunction is truth-functionally complete for classical logic. There are other truth-functionally complete ternary connectives. Truth table The truth table for $[p, q, r]$ has the value of p in the four rows where q is true and the value of r in the four rows where q is false; it is enumerated in the sketch below.
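Since the connective is truth-functional, the truth table and the stated equivalences can be verified mechanically; the short enumeration below (my own) does exactly that, including the correspondence with Python's own conditional expression:

```python
from itertools import product

def cd(p, q, r):
    """Conditioned disjunction [p, q, r] = (q -> p) and (not q -> r)."""
    return (not q or p) and (q or r)

print(" p q r | [p,q,r]")
for p, q, r in product([True, False], repeat=3):
    v = cd(p, q, r)
    assert v == ((p and q) or (not q and r))    # equivalent disjunctive form
    assert v == (p if q else r)                 # the ternary operator q ? p : r
    print(f" {p:d} {q:d} {r:d} |   {v:d}")
```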
https://en.wikipedia.org/wiki/Symposium%20on%20Logic%20in%20Computer%20Science
The ACM–IEEE Symposium on Logic in Computer Science (LICS) is an annual academic conference on the theory and practice of computer science in relation to mathematical logic. Extended versions of selected papers of each year's conference appear in renowned international journals such as Logical Methods in Computer Science and ACM Transactions on Computational Logic. History LICS was originally sponsored solely by the IEEE, but as of the 2014 founding of the ACM Special Interest Group on Logic and Computation, LICS has become the flagship conference of SIGLOG, under the joint sponsorship of ACM and IEEE. From the first installment in 1986 until 2013, the cover page of the conference proceedings featured an artwork entitled Irrational Tiling by Logical Quantifiers, by Alvy Ray Smith. Since 1995, the Kleene award has been given each year to the best student paper. In addition, since 2006, the LICS Test-of-Time Award is given annually to one among the twenty-year-old LICS papers that have best met the test of time. LICS Awards Test-of-Time Award Each year, since 2006, the LICS Test-of-Time Award recognizes those articles from the LICS proceedings 20 years earlier which have become influential. 2006 Leo Bachmair, Nachum Dershowitz, Jieh Hsiang, "Orderings for Equational Proofs" E. Allen Emerson, Chin-Laung Lei, "Efficient Model Checking in Fragments of the Propositional Mu-Calculus (Extended Abstract)" Moshe Y. Vardi, Pierre Wolper, "An Automata-Theoretic Approach to Automatic Program Verification (Preliminary Report)" 2007 Samson Abramsky, "Domain theory in Logical Form" Robert Harper, Furio Honsell, Gordon D. Plotkin, "A Framework for Defining Logics" 2008 Martin Abadi, Leslie Lamport, "The existence of refinement mappings" 2009 Eugenio Moggi, "Computational lambda-calculus and monads" 2010 Rajeev Alur, Costas Courcoubetis, David L. Dill, "Model-checking for real-time systems" Jerry R. Burch, Edmund Clarke, Kenneth L. McMillan, David L. Dill, James Hwang, "Symbolic Model Checking: 10^20 States and Beyond"
https://en.wikipedia.org/wiki/International%20Conference%20on%20Automated%20Reasoning%20with%20Analytic%20Tableaux%20and%20Related%20Methods
The International Conference on Automated Reasoning with Analytic Tableaux and Related Methods (TABLEAUX) is an annual international academic conference that deals with all aspects of automated reasoning with analytic tableaux. Periodically, it joins with CADE and TPHOLs into the International Joint Conference on Automated Reasoning (IJCAR). The first TABLEAUX workshop convened in 1992. Since 1995, the proceedings of this conference have been published in Springer's LNAI series. In August 2006 TABLEAUX was part of the Federated Logic Conference in Seattle, USA. The following TABLEAUX meetings were held in 2007 in Aix-en-Provence, France; as part of IJCAR 2008 in Sydney, Australia; as TABLEAUX 2009 in Oslo, Norway; as part of IJCAR 2010 in Edinburgh, UK; as TABLEAUX 2011 in Bern, Switzerland, 4–8 July 2011; as part of IJCAR 2012 in Manchester, United Kingdom; as TABLEAUX 2013 in Nancy, France, 16–19 September 2013; and as part of IJCAR 2014 in Vienna, Austria, 19–22 July 2014. External links TABLEAUX home page
https://en.wikipedia.org/wiki/Stratum%20lucidum%20of%20hippocampus
The stratum lucidum of the hippocampus is a layer of the hippocampus between the stratum pyramidale and the stratum radiatum. It is the tract of the mossy fiber projections, both inhibitory and excitatory, from the granule cells of the dentate gyrus. One mossy fiber may make up to 37 connections to a single pyramidal cell, and innervate around 12 pyramidal cells in addition. Any given pyramidal cell in the stratum lucidum may receive input from as many as 50 granule cells. Location The stratum lucidum is located within the CA3 region of the hippocampus, distal to the dentate gyrus and proximal to the CA2 region. It is composed of a densely packed bundle of mossy fibers (unmyelinated) and spiny and aspiny interneurons that lie immediately above the CA3 pyramidal cell layer in the hippocampus, and immediately below the stratum radiatum. Most mossy fiber axons run perpendicular to the CA3 pyramidal region, where they project and synapse onto either the CA3 pyramidal cells or the stratum oriens below the pyramidal region. The interneurons of the stratum lucidum are generally found to be local circuit neurons remaining within the CA3 region. A majority of the interneuron axons remain within the stratum lucidum, but some also extend to the stratum radiatum and the stratum lacunosum-moleculare above the radiatum, as well as to the CA1 and hilar regions. Composition Stratum pyramidale In hippocampus anatomy, the stratum pyramidale is one of seven layers, or strata, that make up the entire neural structure. The stratum pyramidale is the third deepest hippocampal layer and, in relation to the stratum lucidum, is located underneath it. The stratum pyramidale houses the cell bodies of the pyramidal neurons, which are the foundational excitatory neurons of the hippocampus. In the CA3 region of the hippocampus, the stratum pyramidale connects with the stratum lucidum by mossy fibers that run through both subfields. Stratum radiatum The layer above the stratum lucidum. Mossy fibers I
https://en.wikipedia.org/wiki/Dual%20control%20theory
Dual control theory is a branch of control theory that deals with the control of systems whose characteristics are initially unknown. It is called dual because in controlling such a system the controller's objectives are twofold: (1) Action: to control the system as well as possible based on current system knowledge. (2) Investigation: to experiment with the system so as to learn about its behavior and control it better in the future. These two objectives may be partly in conflict. In the context of reinforcement learning, this is known as the exploration-exploitation trade-off (e.g. the multi-armed bandit problem). Dual control theory was developed by Alexander Aronovich Fel'dbaum in 1960. He showed that in principle the optimal solution can be found by dynamic programming, but this is often impractical; as a result, a number of methods for designing sub-optimal dual controllers have been devised. Example To use an analogy: if you are driving a new car, you want to get to your destination cheaply and smoothly, but you also want to see how well the car accelerates, brakes and steers so as to get a better feel for how to drive it, so you will do some test maneuvers for this purpose. Similarly, a dual controller will inject a so-called probing (or exploration) signal into the system that may detract from short-term performance but will improve control in the future.
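As a minimal illustration of the action/investigation trade-off (a stand-in, not Fel'dbaum's dynamic-programming solution), here is an epsilon-greedy controller for a two-armed bandit with invented payout probabilities; the epsilon fraction of probing steps plays the role of the exploration signal:

```python
import random

random.seed(1)
true_payout = [0.4, 0.6]          # unknown to the controller
counts, values = [0, 0], [0.0, 0.0]
epsilon, total = 0.1, 0.0

for step in range(10_000):
    if random.random() < epsilon:            # investigate: probe a random arm
        arm = random.randrange(2)
    else:                                    # act: exploit current best estimate
        arm = 0 if values[0] >= values[1] else 1
    reward = 1.0 if random.random() < true_payout[arm] else 0.0
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]   # running mean update
    total += reward

print(f"estimates: {values[0]:.2f}, {values[1]:.2f}; avg reward {total/10_000:.3f}")
```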
https://en.wikipedia.org/wiki/Sabin%20%28unit%29
In acoustics, the sabin (or more precisely the square foot sabin) is a unit of sound absorption, used for expressing the total effective absorption for the interior of a room. Sound absorption can be expressed in terms of the percentage of energy absorbed compared with the percentage reflected. It can also be expressed as a coefficient, with a value of 1.00 representing a material which absorbs 100% of the energy, and a value of 0.00 meaning all the sound is reflected. The concept of a unit for absorption was first suggested by the American physicist Wallace Clement Sabine, the founder of the field of architectural acoustics. He defined the "open-window unit" as the absorption of a unit area of open window. The unit was renamed the sabin after Sabine, and it is now defined as "the absorption due to unit area of a totally absorbent surface". Sabins may be calculated with either imperial or metric units. One square foot of 100% absorbing material has a value of one imperial sabin, and 1 square metre of 100% absorbing material has a value of one metric sabin. The total absorption $A$ in metric sabins for a room containing many types of surface is given by $A = S_1\alpha_1 + S_2\alpha_2 + \cdots + S_n\alpha_n$, where $S_i$ are the areas of the surfaces in the room (in m2), and $\alpha_i$ are the absorption coefficients of the surfaces. Sabins are used in calculating the reverberation time of concert halls, lecture theatres, and recording studios.
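The computation is a simple weighted sum. The sketch below (room dimensions and coefficients invented) totals the metric sabins for a room and then applies Sabine's classic reverberation estimate RT60 ≈ 0.161·V/A, the kind of calculation mentioned in the closing sentence:

```python
surfaces = [                     # (area in m^2, absorption coefficient)
    (100.0, 0.02),               # concrete floor
    (100.0, 0.80),               # acoustic ceiling tiles
    (120.0, 0.05),               # painted walls
]
A = sum(S * alpha for S, alpha in surfaces)   # total absorption, metric sabins
V = 100.0 * 3.0                               # room volume in m^3
rt60 = 0.161 * V / A                          # Sabine's reverberation formula
print(f"A = {A:.1f} metric sabins, RT60 = {rt60:.2f} s")   # A = 88.0, ~0.55 s
```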
https://en.wikipedia.org/wiki/Antigen%20presentation
Antigen presentation is a vital immune process that is essential for triggering T cell immune responses. Because T cells recognize only fragmented antigens displayed on cell surfaces, antigen processing must occur before the antigen fragment can be recognized by a T-cell receptor. Specifically, the fragment, bound to the major histocompatibility complex (MHC), is transported to the surface of the cell, a process known as presentation. If there has been an infection with viruses or bacteria, the cell will present an endogenous or exogenous peptide fragment derived from the antigen on MHC molecules. There are two types of MHC molecules, which differ in the origin of the peptides they present: MHC class I molecules (MHC-I) bind peptides from the cell cytosol, while peptides generated in endocytic vesicles after internalisation are bound to MHC class II (MHC-II). Cellular membranes separate these two cellular environments, intracellular and extracellular. Each T cell can only recognize tens to hundreds of copies of a unique sequence of a single peptide among thousands of other peptides presented on the same cell, because an MHC molecule in one cell can bind quite a large range of peptides. Predicting which (fragments of) antigens will be presented to the immune system by a certain MHC/HLA type is difficult, but the technology involved is improving. Presentation of intracellular antigens: Class I Cytotoxic T cells (also known as Tc, killer T cells, or cytotoxic T-lymphocytes (CTL)) express CD8 co-receptors and are a population of T cells that are specialized for inducing programmed cell death of other cells. Cytotoxic T cells regularly patrol all body cells to maintain organismal homeostasis. Whenever they encounter signs of disease, caused for example by the presence of viruses, intracellular bacteria or a transformed tumor cell, they initiate processes to destroy the potentially harmful cell. All nucleated cells in the body (along with platelets) display class I major histocompatibility complex (MHC-I) molecules.
https://en.wikipedia.org/wiki/Learnable%20evolution%20model
The learnable evolution model (LEM) is a non-Darwinian methodology for evolutionary computation that employs machine learning to guide the generation of new individuals (candidate problem solutions). Unlike standard, Darwinian-type evolutionary computation methods that use random or semi-random operators for generating new individuals (such as mutations and/or recombinations), LEM employs hypothesis generation and instantiation operators. The hypothesis generation operator applies a machine learning program to induce descriptions that distinguish between high-fitness and low-fitness individuals in each consecutive population. Such descriptions delineate areas in the search space that most likely contain the desirable solutions. Subsequently the instantiation operator samples these areas to create new individuals; a sketch of this loop is given below. LEM has also been extended from the optimization domain to the classification domain by augmenting LEM with ID3 (February 2013, by M. Elemam Shehab, K. Badran, M. Zaki and Gouda I. Salama).
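Here is a minimal sketch of the LEM loop (my own toy construction): a crude per-dimension bounding-box learner stands in for the symbolic rule learner (e.g. AQ-type induction) that the actual methodology uses, but the hypothesis-generation and instantiation steps are structured as described above:

```python
import random

random.seed(0)
fitness = lambda x: -(x[0] - 0.7) ** 2 - (x[1] - 0.3) ** 2   # invented objective

pop = [(random.random(), random.random()) for _ in range(40)]
for generation in range(15):
    ranked = sorted(pop, key=fitness, reverse=True)
    high, low = ranked[:10], ranked[-10:]
    # Hypothesis generation: describe where high-fitness individuals live
    # (here just a bounding box per dimension; a real rule learner would
    # also use the low-fitness group to sharpen the description).
    bounds = [(min(p[d] for p in high), max(p[d] for p in high)) for d in (0, 1)]
    # Instantiation: sample new individuals from the learned description.
    pop = high + [
        tuple(random.uniform(lo, hi) for lo, hi in bounds) for _ in range(30)
    ]

best = max(pop, key=fitness)
print(f"best individual: ({best[0]:.3f}, {best[1]:.3f})")   # near (0.7, 0.3)
```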
https://en.wikipedia.org/wiki/Specol
Specol (trade name Stimune) is a water-in-oil adjuvant composed of defined and purified light mineral oil (Stills 2005). It has been suggested as an alternative to Freund's adjuvant for hyperactivation of the immune response in rabbits (Leenaars et al. 1994). Specol can be used for antigens of low immunogenicity and can be administered equally effectively by the subcutaneous or intraperitoneal routes. The histological lesions produced are fewer than those produced by complete Freund's adjuvant. However, pain and distress following administration of these two adjuvants appear to be similar (based on limited data).
https://en.wikipedia.org/wiki/Titermax
Titermax is a mixture of compounds used in antibody generation and vaccination to stimulate the immune system to recognise an antigen given together with the mixture. Titermax is a commercially developed immunologic adjuvant. It is a water-in-oil emulsion and consists of squalene, an emulsifier (sorbitan monooleate 80), a patented block copolymer and microparticulate silica (Stills 2005). Its toxicity appears to be lower than that of other water-in-oil adjuvants such as Freund's adjuvant. Its efficacy in eliciting an immune response against antigens of low immunogenicity is, however, subject to debate. See also Polyclonal antibodies, esp. section on Titermax Immunologic adjuvant Further reading Stills H.F. (2005) "Adjuvants and antibody production: dispelling the myths associated with Freund's complete and other adjuvants". ILAR Journal 46(3), 280–293. External links TiterMax on WikiDoc Official web site
https://en.wikipedia.org/wiki/Duodenal%20cytochrome%20B
Duodenal cytochrome B (Dcytb), also known as cytochrome b reductase 1, is an enzyme that in humans is encoded by the CYBRD1 gene. Dcytb was first identified as a ferric reductase enzyme which catalyzes the reduction of Fe3+ to Fe2+, required for dietary iron absorption in the duodenum of mammals. Dcytb mRNA and protein levels in the gut are increased by iron deficiency and hypoxia, which acts to promote dietary iron absorption. The effects of iron deficiency and hypoxia on Dcytb levels are mediated via the HIF2 (hypoxia-inducible factor 2) transcription factor, which binds to hypoxia response elements within the Dcytb promoter and increases transcription of the gene. DCYTB protein has also been found in other tissues, such as lung epithelial cells and the plasma membrane of mature red blood cells of scorbutic species (unable to make ascorbate) such as human and guinea pig, but not in other species which have retained the ability to synthesise ascorbate, like mouse and rat. This has led to the notion that Dcytb may have an additional role in ascorbate metabolism in scorbutic species. DCYTB protein has also been found in breast tissue (epithelial and myoepithelial cells), and high DCYTB levels are associated with a favourable prognosis in patients with breast cancer. A single nucleotide polymorphism (SNP) within the DCYTB promoter (SNP rs884409) which reduced functional DCYTB promoter activity was also associated with reduced serum ferritin levels in a patient cohort with C282Y haemochromatosis.
https://en.wikipedia.org/wiki/Stress%20space
In continuum mechanics, Haigh–Westergaard stress space, or simply stress space, is a 3-dimensional space in which the three spatial axes represent the three principal stresses of a body subject to stress. This space is named after Bernard Haigh and Harold M. Westergaard. In mathematical terms, H-W space can also be interpreted (understood) as a set of numerical markers of stress tensor orbits (with respect to the proper rotations group, the special orthogonal group SO3); every point of H-W space represents one orbit. Functions of the principal stresses, such as the yield function, can be represented by surfaces in stress space. In particular, the surface represented by the von Mises yield function is a right circular cylinder, equiaxial to each of the three stress axes. In 2-dimensional models, stress space reduces to a plane and the von Mises yield surface reduces to an ellipse.
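The geometric claim about the von Mises cylinder can be checked numerically: the yield function depends only on differences of principal stresses, so adding any hydrostatic component, i.e. moving along the axis σ1 = σ2 = σ3 of Haigh–Westergaard space, leaves it unchanged. The stress values below are invented:

```python
import math

def von_mises(s1, s2, s3):
    """Equivalent von Mises stress from the three principal stresses."""
    return math.sqrt(((s1 - s2) ** 2 + (s2 - s3) ** 2 + (s3 - s1) ** 2) / 2.0)

base = (200.0, 50.0, -30.0)                      # principal stresses, MPa
for p in (0.0, 100.0, -500.0):                   # hydrostatic shifts along the axis
    shifted = tuple(s + p for s in base)
    print(p, round(von_mises(*shifted), 6))      # identical for every shift
```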
https://en.wikipedia.org/wiki/Parker%E2%80%93Sochacki%20method
In mathematics, the Parker–Sochacki method is an algorithm for solving systems of ordinary differential equations (ODEs), developed by G. Edgar Parker and James Sochacki, of the James Madison University Mathematics Department. The method produces Maclaurin series solutions to systems of differential equations, with the coefficients in either algebraic or numerical form. Summary The Parker–Sochacki method rests on two simple observations: If a set of ODEs has a particular form, then the Picard method can be used to find their solution in the form of a power series. If the ODEs do not have the required form, it is nearly always possible to find an expanded set of equations that do have the required form, such that a subset of the solution is a solution of the original ODEs. Several coefficients of the power series are calculated in turn, a time step is chosen, the series is evaluated at that time, and the process repeats. The end result is a high-order piecewise solution to the original ODE problem. The order of the solution desired is an adjustable variable in the program that can change between steps. The order of the solution is only limited by the floating point representation on the machine running the program, and in some cases can be extended by using arbitrary precision floating point numbers, or, for special cases, by finding solutions with only integer or rational coefficients. Advantages The method requires only addition, subtraction, and multiplication, making it very convenient for high-speed computation. (The only divisions are inverses of small integers, which can be precomputed.) Use of a high order (calculating many coefficients of the power series) is convenient. (Typically a higher order permits a longer time step without loss of accuracy, which improves efficiency.) The order and step size can be easily changed from one step to the next. It is possible to calculate a guaranteed error bound on the solution. Arbitrary precision floating point arithmetic can be used where still higher accuracy is required.
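A minimal sketch of the method for the scalar problem y′ = y², y(0) = 1 (my own example; exact solution 1/(1 − t)). The right-hand side is already polynomial, so the Picard/Maclaurin coefficients follow from a Cauchy-product recurrence using only additions and multiplications, and each step re-expands the series about the new point:

```python
def psm_step(y0, order, h):
    """One Parker-Sochacki step for y' = y^2: build Maclaurin coefficients
    about the current point, then evaluate the series at t0 + h."""
    a = [y0]                                     # a[0] = y(t0)
    for n in range(order):                       # (n+1) a_{n+1} = (a*a)_n
        cauchy = sum(a[i] * a[n - i] for i in range(n + 1))
        a.append(cauchy / (n + 1))
    return sum(c * h ** k for k, c in enumerate(a))

t, y, h = 0.0, 1.0, 0.05
while t < 0.5 - 1e-12:                           # march toward t = 0.5
    y = psm_step(y, order=20, h=h)               # re-expand about each new point
    t += h
print(y, 1.0 / (1.0 - t))                        # ~2.0 vs the exact value 2.0
```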
https://en.wikipedia.org/wiki/Tightness%20of%20measures
In mathematics, tightness is a concept in measure theory. The intuitive idea is that a given collection of measures does not "escape to infinity".

Definitions
Let (X, T) be a Hausdorff space, and let Σ be a σ-algebra on X that contains the topology T. (Thus, every open subset of X is a measurable set and Σ is at least as fine as the Borel σ-algebra on X.) Let M be a collection of (possibly signed or complex) measures defined on Σ. The collection M is called tight (or sometimes uniformly tight) if, for any ε > 0, there is a compact subset K_ε of X such that, for all measures μ in M, |μ|(X \ K_ε) < ε, where |μ| is the total variation measure of μ. Very often, the measures in question are probability measures, so the last part can be written as μ(X \ K_ε) < ε.

If a tight collection M consists of a single measure μ, then (depending upon the author) μ may either be said to be a tight measure or to be an inner regular measure.

If Y is an X-valued random variable whose probability distribution on X is a tight measure, then Y is said to be a separable random variable or a Radon random variable.

Examples

Compact spaces
If X is a metrisable compact space, then every collection of (possibly complex) measures on X is tight. This is not necessarily so for non-metrisable compact spaces. If we take the ordinal space [0, ω₁] with its order topology, where ω₁ is the first uncountable ordinal, then there exists a measure μ on it that is not inner regular. Therefore, the singleton {μ} is not tight.

Polish spaces
If X is a Polish space, then every probability measure on X is tight. Furthermore, by Prokhorov's theorem, a collection of probability measures on X is tight if and only if it is precompact in the topology of weak convergence.

A collection of point masses
Consider the real line ℝ with its usual Borel topology. Let δ_x denote the Dirac measure, a unit mass at the point x in ℝ. The collection M₁ = {δ_n : n = 1, 2, …} is not tight, since the compact subsets of ℝ are precisely the closed and bounded subsets, and any such set, since it is bounded, has δ_n-measure zero for large enough n. On the other hand, the collection M₂ = {δ_{1/n} : n = 1, 2, …} is tight: the compact interval [0, 1] has δ_{1/n}-measure 1 for every n.
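A quick numerical illustration of the two point-mass families (a sketch of ours; the helper name mass_outside is made up for this example):

```python
# Mass of each Dirac measure delta_x lying outside the compact set
# [-K, K]; tightness asks for one K making all of these < epsilon.

def mass_outside(points, K):
    """delta_x(R \\ [-K, K]) for each x in `points` (0 or 1)."""
    return [0.0 if -K <= x <= K else 1.0 for x in points]

escaping = [float(n) for n in range(1, 11)]      # delta_n
clustering = [1.0 / n for n in range(1, 11)]     # delta_{1/n}

# No fixed K works for delta_n: once n > K the whole unit mass escapes.
print(mass_outside(escaping, K=5.0))   # [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
# K = 1 already traps every delta_{1/n}: the family is tight.
print(mass_outside(clustering, K=1.0)) # all zeros
```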
https://en.wikipedia.org/wiki/Prokhorov%27s%20theorem
In measure theory, Prokhorov's theorem relates tightness of measures to relative compactness (and hence weak convergence) in the space of probability measures. It is credited to the Soviet mathematician Yuri Vasilyevich Prokhorov, who considered probability measures on complete separable metric spaces. The term "Prokhorov's theorem" is also applied to later generalizations to either the direct or the inverse statements.

Statement
Let (S, ρ) be a separable metric space. Let P(S) denote the collection of all probability measures defined on S (with its Borel σ-algebra).

Theorem. A collection K ⊆ P(S) of probability measures is tight if and only if the closure of K is sequentially compact in the space P(S) equipped with the topology of weak convergence.

The space P(S) with the topology of weak convergence is metrizable. Suppose that, in addition, (S, ρ) is a complete metric space (so that S is a Polish space). Then there is a complete metric d₀ on P(S) equivalent to the topology of weak convergence; moreover, K is tight if and only if the closure of K in (P(S), d₀) is compact.

Corollaries
For Euclidean spaces we have that: If (μ_n) is a tight sequence in P(ℝ^m) (the collection of probability measures on m-dimensional Euclidean space), then there exist a subsequence (μ_{n_k}) and a probability measure μ in P(ℝ^m) such that μ_{n_k} converges weakly to μ. If (μ_n) is a tight sequence in P(ℝ^m) such that every weakly convergent subsequence has the same limit μ, then the sequence (μ_n) converges weakly to μ.

Extension
Prokhorov's theorem can be extended to consider complex measures or finite signed measures.

Theorem: Suppose that (S, ρ) is a complete separable metric space and Π is a family of Borel complex measures on S. The following statements are equivalent: Π is sequentially precompact, that is, every sequence (μ_n) in Π has a weakly convergent subsequence; Π is tight and uniformly bounded in total variation norm.

Comments
Since Prokhorov's theorem expresses tightness in terms of compactness, the Arzelà–Ascoli theorem is often used to substitute for compactness: in function spaces, this leads to a characterization of tightness in terms of the modulus of continuity or an appropriate analogue.
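As a concrete instance of the first corollary (our illustration, not part of the statement above), take the Gaussian family μ_n = N(0, 1 + 1/n) on ℝ. Chebyshev's inequality gives a uniform tail bound:

\[
\mu_n\bigl(\mathbb{R} \setminus [-K, K]\bigr) \;\le\; \frac{1 + 1/n}{K^2} \;\le\; \frac{2}{K^2},
\]

so taking K_ε = √(2/ε) shows the family is tight; Prokhorov's theorem then guarantees a weakly convergent subsequence, and in this case the full sequence already converges weakly to N(0, 1), since the variances converge to 1.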