https://en.wikipedia.org/wiki/Telecentric%20lens
|
A telecentric lens is a special optical lens (often an objective lens or a camera lens) that has its entrance or exit pupil, or both, at infinity. The size of images produced by a telecentric lens is insensitive to either the distance between an object being imaged and the lens, or the distance between the image plane and the lens, or both, and such an optical property is called telecentricity. Telecentric lenses are used for precision optical two-dimensional measurements, reproduction (e.g., photolithography), and other applications that are sensitive to the image magnification or the angle of incidence of light.
The simplest way to make a lens telecentric is to put the aperture stop at one of the lens's focal points. This makes the chief rays (light rays that pass through the center of the aperture stop) on the other side of the lens parallel to the optical axis for any point in the field of view. Commercially available telecentric lenses are often compound lenses that include multiple lens elements, for improved optical performance. Telecentricity is not a property of the lenses inside the compound lens but is established by the location of the aperture stop in the lens. The aperture stop selects the rays that are passed through the lens and this specific selection is what makes a lens telecentric.
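To see numerically why placing the stop at a focal point makes the chief rays parallel, the paraxial (thin-lens) sketch below traces, for several object distances, the one ray from an off-axis object point that passes through the centre of a stop placed at the back focal point. The thin-lens model, the numbers, and the function names are assumptions made for this illustration, not a description of any particular lens.

def trace(h, d, theta, f, z_stop, z_image):
    # Propagate a paraxial ray (height h, launch angle theta) from an object
    # point at distance d through a thin lens of focal length f, and report
    # its height at the stop plane and at a fixed image plane (both measured
    # from the lens).
    y_lens = h + theta * d               # free propagation to the lens
    theta_after = theta - y_lens / f     # thin-lens refraction
    return y_lens + theta_after * z_stop, y_lens + theta_after * z_image

f, h = 50.0, 5.0                         # focal length and object height (mm)
z_stop, z_image = f, 150.0               # stop at the back focal point; fixed sensor plane

for d in (100.0, 150.0, 250.0):
    # The stop-plane height is linear in the launch angle, so two traces
    # suffice to solve for the chief ray (the ray hitting the stop centre).
    y0, _ = trace(h, d, 0.0, f, z_stop, z_image)
    y1, _ = trace(h, d, 1.0, f, z_stop, z_image)
    theta_chief = (0.0 - y0) / (y1 - y0)
    _, y_img = trace(h, d, theta_chief, f, z_stop, z_image)
    print(f"d={d:6.1f}  chief-ray angle={theta_chief:.6f} rad  height on sensor={y_img:.3f} mm")

For every object distance the chief ray leaves the object parallel to the axis (angle 0) and crosses the fixed sensor plane at the same height, which is the insensitivity of image size to object distance described above.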
If a lens is not telecentric, it is either entocentric or hypercentric. Common lenses are usually entocentric. In particular, a single lens without a separate aperture stop is entocentric. For such a lens the chief ray originating at any point off of the optical axis is never parallel to the optical axis, neither in front of nor behind the lens. A non-telecentric lens exhibits varying magnification for objects at different distances from the lens. An entocentric lens has a smaller magnification for objects farther away; objects of the same size appear smaller the farther they are away. A hypercentric lens produces larger images the farther the object is away.
A tele
|
https://en.wikipedia.org/wiki/Chondroblast
|
Chondroblasts, or perichondrial cells, are mesenchymal progenitor cells in situ which, through endochondral ossification, will form chondrocytes in the growing cartilage matrix. Another name for them is subchondral cortico-spongious progenitors. They have euchromatic nuclei and stain with basic dyes.
These cells are extremely important in chondrogenesis because they form both the chondrocytes and the cartilage matrix that will eventually become cartilage. Use of the term is technically inaccurate, since mesenchymal progenitors can also differentiate into osteoblasts or fat. Chondroblasts are called chondrocytes once they embed themselves in the cartilage matrix, consisting of proteoglycan and collagen fibers, and come to lie in the matrix lacunae. Once embedded in the cartilage matrix, they grow it by producing more cartilage extracellular matrix rather than by dividing further.
Structure
Within adults and developing adults, most chondroblasts are located in the perichondrium, a thin layer of connective tissue that protects cartilage; this is where chondroblasts help to expand cartilage size whenever prompted to do so by hormones such as GH and TH, and by glycosaminoglycans. They are located in the perichondrium because the perichondrium, on the outside of developing bone, is not as heavily ensheathed in cartilage extracellular matrix as the interior, and because this is where capillaries are located. The type of growth maintained by chondroblasts is called appositional growth and increases the girth of the affected tissue. It is important to note that perichondrium, and thus chondroblasts, are not found on the articular cartilage surfaces of joints.
Matrix formation and composition
The extracellular matrix secreted by chondroblasts is composed of fibers, collagen, hyaluronic acid, proteoglycans, glycoproteins, water, and a host of macromolecules. Within finished cartilage, collagen fibers compose 10
|
https://en.wikipedia.org/wiki/Z-value%20%28temperature%29
|
"F0" is defined as the number of equivalent minutes of steam sterilization at temperature 121.1 °C (250 °F) delivered to a container or unit of product calculated using a z-value of 10 °C. The term F-value or "FTref/z" is defined as the equivalent number of minutes to a certain reference temperature (Tref) for a certain control microorganism with an established Z-value.
Z-value is a term used in microbial thermal death time calculations. It is the number of degrees the temperature has to be increased to achieve a tenfold (i.e. 1 log10) reduction in the D-value. The D-value of an organism is the time required in a given medium, at a given temperature, for a ten-fold reduction in the number of organisms. It is useful when examining the effectiveness of thermal inactivations under different conditions, for example in food cooking and preservation. The z-value is a measure of the change of the D-value with varying temperature; it is a simplified version of the Arrhenius equation and is equivalent to $z = 2.303\,R\,T\,T_{\mathrm{ref}} / E$, where $E$ is the activation energy.
The z-value of an organism in a particular medium is the temperature change required for the D-value to change by a factor of ten, or put another way, the temperature required for the thermal destruction curve to move one log cycle. It is the reciprocal of the slope resulting from the plot of the logarithm of the D-value versus the temperature at which the D-value was obtained. While the D-value gives the time needed at a certain temperature to kill 90% of the organisms, the z-value relates the resistance of an organism to differing temperatures. The z-value allows calculation of the equivalency of two thermal processes, if the D-value and the z-value are known.
Example: if it takes an increase of 10 °C (18 °F) to move the curve one log, then our z-value is 10. Given a D-value of 4.5 minutes at 150 °C, the D-value can be calculated for 160 °C by reducing the time by 1 log. The new D-value for 160 °C given the z-value is 0.45 minutes. This means
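A minimal sketch of this rescaling (the function name is illustrative; it simply applies the relationship D_new = D_ref · 10^((T_ref − T_new)/z)):

def d_value_at(d_ref, t_ref, t_new, z):
    # Rescale a D-value from a reference temperature to a new temperature.
    return d_ref * 10 ** ((t_ref - t_new) / z)

# The example above: D = 4.5 minutes at 150 degC with z = 10 degC.
print(d_value_at(4.5, 150.0, 160.0, 10.0))   # ~0.45 min (one log less time)
print(d_value_at(4.5, 150.0, 140.0, 10.0))   # ~45 min   (one log more time)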
|
https://en.wikipedia.org/wiki/Parametric%20surface
|
A parametric surface is a surface in Euclidean space which is defined by a parametric equation with two parameters. Parametric representation is a very general way to specify a surface, as is implicit representation. Surfaces that occur in two of the main theorems of vector calculus, Stokes' theorem and the divergence theorem, are frequently given in a parametric form. The curvature and arc length of curves on the surface, surface area, differential geometric invariants such as the first and second fundamental forms, and the Gaussian, mean, and principal curvatures can all be computed from a given parametrization.
Examples
The simplest type of parametric surface is given by the graph of a function of two variables: $z = f(x, y)$, $\vec r(x, y) = (x,\, y,\, f(x, y))$.
A rational surface is a surface that admits parameterizations by a rational function. A rational surface is an algebraic surface. Given an algebraic surface, it is commonly easier to decide if it is rational than to compute its rational parameterization, if it exists.
Surfaces of revolution give another important class of surfaces that can be easily parametrized. If the graph $z = f(x)$, $a \le x \le b$, is rotated about the z-axis, then the resulting surface has the parametrization $\vec r(u, \varphi) = (u\cos\varphi,\, u\sin\varphi,\, f(u))$ with $a \le u \le b$ and $0 \le \varphi < 2\pi$. It may also be parameterized using the rational parametrization of the circle, $\vec r(u, t) = \left(u\,\tfrac{1 - t^2}{1 + t^2},\, u\,\tfrac{2t}{1 + t^2},\, f(u)\right)$, showing that, if the function $f$ is rational, then the surface is rational.
The straight circular cylinder of radius R about the x-axis has the following parametric representation: $\vec r(x, \varphi) = (x,\, R\cos\varphi,\, R\sin\varphi)$.
Using spherical coordinates, the unit sphere can be parameterized by $\vec r(\theta, \varphi) = (\cos\theta \sin\varphi,\, \sin\theta \sin\varphi,\, \cos\varphi)$. This parametrization breaks down at the north and south poles, where the azimuth angle θ is not determined uniquely. The sphere is a rational surface.
The same surface admits many different parametrizations. For example, the coordinate plane $z = 0$ can be parametrized as $\vec r(u, v) = (au + bv,\, cu + dv,\, 0)$
for any constants a, b, c, d such that $ad - bc \neq 0$, i.e. the matrix $\begin{pmatrix} a & b \\ c & d \end{pmatrix}$ is invertible.
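A small numerical check of the spherical-coordinate parametrization given above (a sketch; the sample parameter values are arbitrary):

import math

def sphere(theta, phi):
    # theta = azimuth, phi = polar angle, as in the parametrization above.
    return (math.cos(theta) * math.sin(phi),
            math.sin(theta) * math.sin(phi),
            math.cos(phi))

# Every parameter pair maps to a point with x^2 + y^2 + z^2 = 1.
for theta, phi in [(0.3, 1.1), (2.0, 0.4), (5.5, 2.9)]:
    x, y, z = sphere(theta, phi)
    print(f"theta={theta}, phi={phi}: x^2+y^2+z^2 = {x*x + y*y + z*z:.12f}")

# The breakdown at the poles: at phi = 0 every theta gives the same point (0, 0, 1).
print(sphere(0.0, 0.0), sphere(1.0, 0.0))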
Local differential geometry
The local shape of a parametric surface can be analyzed by considering the Taylor expansion of the function that parametrizes it. The arc length of a cu
|
https://en.wikipedia.org/wiki/Radware
|
Radware Inc. is an American provider of cybersecurity and application delivery products for physical, cloud and software-defined data centers. Radware's corporate headquarters are located in Mahwah, New Jersey, and the company's global headquarters is in Israel. The company also has offices in the Europe, Africa and Asia Pacific regions. Radware is a member of the Rad Group of companies and its shares are traded on NASDAQ.
History
Radware co-founder Roy Zisapel has served as President, Chief Executive Officer and Director since the company's inception in April 1997. In 1999, the company had an initial public offering and was listed on the NASDAQ stock exchange. Zisapel holds a 3.4 percent stake in the company. His father, Yehuda Zisapel, is the largest shareholder, with a 15 percent stake.
Acquisitions
In January 2019, Radware expanded its cloud security portfolio with the acquisition of ShieldSquare, a market-leading bot management solutions provider. In January 2017, Radware acquired Seculert, a SaaS cloud-based provider of protection against enterprise network breach and data exfiltration. In February 2013, Radware acquired Strangeloop Networks, a leader in web performance optimization (WPO) solutions for e-commerce and enterprise applications. In February 2009, Radware acquired Nortel's application delivery business. In April 2007, Radware acquired Covelight Systems, a provider of web application auditing and monitoring tools. In November 2005, Radware acquired V-Secure Technologies, a leading provider of behavioral-based network intrusion prevention products.
Products
Radware's products and services include cloud services (Cloud WAF, Cloud DDoS Protection, Cloud Workload Protection, Cloud Web Acceleration, Cloud Malware Protection, and Bot Manager), application and network security (DefensePro, AppWall, DefenseFlow), application delivery and load balancing (Alteon, AppWall, FastView, AppXML, LinkProof NG), and management and monitoring (APSolute Vision, MSSP
|
https://en.wikipedia.org/wiki/MOS%20Technology%20SPI
|
The 6529 Single Port Interface (SPI aka PIO) was an integrated circuit made by MOS Technology. It served as an I/O controller for the 6502 family of microprocessors, providing a single 8-bit digital bidirectional parallel I/O port. Unlike the more sophisticated 6522 VIA and 6526 CIA, it did not allow the data direction for each I/O line to be separately specified, nor did it support serial I/O or contain any timer capabilities. Because of this, it did not achieve widespread use.
6529 ICs were available in 1 MHz, 2 MHz, and 3 MHz versions. The form factor was a JEDEC-standard 20-pin ceramic or plastic DIP.
The 6529 differs from a 74(LS)639 bidirectional three-state/open-collector bus driver in that the 6529 has passive output pull-ups and power-on reset circuitry.
External links
MOS 6529 datasheet
MOS 6529 datasheet (GIF format, zipped)
|
https://en.wikipedia.org/wiki/List%20of%20tree%20species%20by%20shade%20tolerance
|
This is a list of tree species, grouped generally by biogeographic realm and specifically by bioregion, and classified by shade tolerance. Shade-tolerant species are able to thrive in the shade and in the presence of natural competition from other plants. Shade-intolerant species require full sunlight and little or no competition. Intermediate shade-tolerant trees fall somewhere in between the two.
Americas
Nearctic realm
Eastern North America
Shade tolerant
Abies balsamea, Balsam Fir
Acer negundo, Boxelder
Acer saccharum, Sugar Maple
Aesculus spp., Buckeyes
Carpinus caroliniana, American Hornbeam
Carya laciniosa, Shellbark Hickory
Chamaecyparis thyoides, Atlantic White Cypress or Atlantic White Cedar
Cornus florida, Flowering Dogwood
Diospyros spp., Persimmon
Fagus grandifolia, American Beech
Ilex opaca, American Holly
Magnolia grandiflora, Southern Magnolia
Morus rubra, Red Mulberry
Nyssa spp., Tupelos
Ostrya virginiana, Eastern Hophornbeam
Picea glauca, White Spruce
Picea mariana, Black Spruce
Picea rubens, Red Spruce
Tilia americana, Basswood
Thuja occidentalis, Northern White Cedar
Tsuga canadensis, Eastern Hemlock
Intermediate shade tolerant
Acer rubrum, Red Maple
Acer saccharinum, Silver Maple
Betula alleghaniensis, Yellow Birch
Betula lenta, Sweet Birch
Carya spp., Hickories (except for Shellbark)
Castanea dentata, American Chestnut
Celtis occidentalis, Hackberry
Fraxinus americana, White Ash
Fraxinus pennsylvanica, Green Ash
Fraxinus nigra, Black Ash
Magnolia spp., Magnolias
Quercus alba, White Oak
Quercus macrocarpa, Bur Oak
Quercus nigra, Water Oak
Quercus rubra, Northern Red Oak
Pinus elliottii, Slash Pine
Pinus strobus, Eastern White Pine
Taxodium distichum, Bald Cypress
Ulmus americana, American Elm
Ulmus thomasii, Rock Elm
Shade intolerant
Betula papyrifera, Paper Birch
Betula populifolia, Gray Birch
Catalpa spp., Catalpas
Carya illinoinensis, Pecan
Gymnocladus dioicus, Kentucky Coffee Tree
Juglans cinerea, Butternut
|
https://en.wikipedia.org/wiki/Biomedical%20scientist
|
A biomedical scientist is a scientist trained in biology, particularly in the context of medical laboratory sciences or laboratory medicine. These scientists work to gain knowledge on the main principles of how the human body works and to find new ways to cure or treat disease by developing advanced diagnostic tools or new therapeutic strategies. The research of biomedical scientists is referred to as biomedical research.
Description
The specific activities of the biomedical scientist can differ in various parts of the world and vary with the level of education. Generally speaking, biomedical scientists conduct research in a laboratory setting, using living organisms as models to conduct experiments. These can include cultured human or animal cells grown outside of the whole organism, small animals such as flies, worms, fish, mice, and rats, or, rarely, larger animals and primates. Biomedical scientists may also work directly with human tissue specimens to perform experiments as well as participate in clinical research.
Biomedical scientists employ a variety of techniques in order to carry out laboratory experiments. These include:
Molecular and biochemical techniques
Electrophoresis and blotting
Immunostaining
Chromatography
Mass spectrometry
PCR and sequencing
Microarrays
Imaging technologies
Light, fluorescence, and electron microscopy
MRI
PET
X-ray
Genetic engineering/modification
Transfection
Viral transduction
Transgenic model organisms
Electrophysiology techniques
Patch clamp
EEG, EKG, ERG
In silico techniques
Bioinformatics
Computational biology
Level of education
Biomedical scientists typically obtain a bachelor of science degree, and usually take postgraduate studies leading to a diploma, master's degree, or doctorate. This degree is necessary for faculty positions at academic institutions, as well as for senior scientist positions at most companies. Some biomedical scientists also possess a medical degree (MD, DO, PharmD, Doctor of Medical Laboratory Sciences [MLSD]
|
https://en.wikipedia.org/wiki/Software%20appliance
|
A software appliance is a software application combined with just enough operating system (JeOS) to run optimally on industry-standard hardware (typically a server) or in a virtual machine. It is a software distribution or firmware that implements a computer appliance.
Virtual appliances are a subset of software appliances. The main distinction is the packaging format and the specificity of the target platform. A virtual appliance is a virtual machine image designed to run on a specific virtualization platform, while a software appliance is often packaged in more generally applicable image format (e.g., Live CD) that supports installations to physical machines and multiple types of virtual machines.
Installing a software appliance on a virtual machine and packaging that into an image creates a virtual appliance.
Benefits
Software appliances have several benefits over traditional software applications that are installed on top of an operating system:
Simplified deployment: A software appliance encapsulates an application's dependencies in a pre-integrated, self-contained unit. This can dramatically simplify software deployment by freeing users from having to worry about resolving potentially complex OS compatibility issues, library dependencies or undesirable interactions with other applications. This is known as a "toaster."
Improved isolation: Software appliances are typically used to run applications in isolation from one another. If the security of an appliance is compromised, or if the appliance crashes, other isolated appliances will not be affected.
Improved performance: A software appliance does not embed any unused operating system services, applications or any form of bloatware hence it does not have to share the hardware resources (CPU, memory, storage space, ...) usually consumed by these on a generic OS setup. This naturally leads to faster boot time and application execution speed. In the case where multiple software applianc
|
https://en.wikipedia.org/wiki/Systems%20architecture
|
A system architecture is the conceptual model that defines the structure, behavior, and other views of a system. An architecture description is a formal description and representation of a system, organized in a way that supports reasoning about the structures and behaviors of the system.
A system architecture can consist of system components and the sub-systems developed that will work together to implement the overall system. There have been efforts to formalize languages to describe system architecture; collectively these are called architecture description languages (ADLs).
Overview
Various organizations can define systems architecture in different ways, including:
The fundamental organization of a system, embodied in its components, their relationships to each other and to the environment, and the principles governing its design and evolution.
A representation of a system, including a mapping of functionality onto hardware and software components, a mapping of the software architecture onto the hardware architecture, and human interaction with these components.
An allocated arrangement of physical elements which provides the design solution for a consumer product or life-cycle process intended to satisfy the requirements of the functional architecture and the requirements baseline.
An architecture consists of the most important, pervasive, top-level, strategic inventions, decisions, and their associated rationales about the overall structure (i.e., essential elements and their relationships) and associated characteristics and behavior.
A description of the design and contents of a computer system. If documented, it may include information such as a detailed inventory of current hardware, software and networking capabilities; a description of long-range plans and priorities for future purchases, and a plan for upgrading and/or replacing dated equipment and software.
A formal description of a system, or a detailed plan of the system at component level t
|
https://en.wikipedia.org/wiki/MOS%20Technology%20TED
|
The 7360/8360 TExt Display (TED) was an integrated circuit made by MOS Technology, Inc. It was a video chip that also contained sound generation hardware, DRAM refresh circuitry, interval timers, and keyboard input handling. It was designed for the Commodore Plus/4 and 16. Packaging consisted of a JEDEC-standard 48-pin DIP.
The only difference between models 7360 and 8360 is the manufacturing technology used; model 8360 is more common.
Video capabilities
The video capabilities provided by the TED were largely a subset of those in the VIC-II. The TED supported five video modes:
Text mode of 40 × 25 characters with 8 × 8 pixels
Multicolor text (4 × 8 pixels per character, double pixel width in the x-direction)
Extended background color mode (8 × 8 pixels per character)
Multicolor Graphics 160 × 200 pixels
Hi-Res Graphics 320 × 200 pixels
of the long visible part of the scan lines is filled with pixels
These modes were largely unchanged from the corresponding VIC-II modes aside from different register and memory mappings (see the article on the VIC-II for information on graphics modes). However, the TED lacked the sprite capabilities of the VIC-II, and so game animation had to be done exclusively with custom character sets like on the VIC-20. This restricted the graphics of C16/Plus 4 games versus the C64. On the VIC-II, sprites used of the die area pushing the transistor count over that of the CPU. In contrast, the TED caches the color attributes on-chip, increasing the SRAM from and does away with the external color RAM.
The TED did include two features that the VIC-II lacked: luminance control and blinking text.
It generated 16 base colors by variations of the Pb and Pr chroma signals (with 8 possible steps, ranging from 0 through ±0.3826834 and ±0.7071068 to ±1.0). Fifteen of these 16 colors (black being the exception) could be assigned one of 8 Y luma values (0.125, 0.25, 0.375, 0.5, 0.625, 0.75, 0.875, 1.0), thus making the TED capable of displaying a far
|
https://en.wikipedia.org/wiki/Phenol%20extraction
|
Phenol extraction is a laboratory technique that purifies nucleic acid samples using a phenol solution. Phenol is a common reagent in extraction because its properties allow for effective nucleic acid extraction: it strongly denatures proteins, it preserves nucleic acids, and it is immiscible with water.
It may also refer to the process of extracting and isolating phenols from raw materials such as coal tar. These purified phenols are used in many industrial and medical compounds and are used as precursors in some synthesis reactions.
Phenol extraction of nucleic acids
Phenol extraction is a widely used technique for purifying nucleic acid samples from cell lysates. To obtain nucleic acids, the cell must be lysed, and the nucleic acids separated from other cell components.
Phenol is a polar substance with a higher density than water (1.07 g/cm3 compared to water's 1.00 g/cm3). When suspended in a water-phenol solution, denatured proteins and unwanted cell components dissolve in the phenol, while polar nucleic acids dissolve in the water phase. The solution may then be centrifuged to separate the phenol and water into distinct organic and aqueous phases. Purified nucleic acids can be precipitated from the aqueous phase of the solution.
Phenol is often used in combination with chloroform. Adding an equal volume of chloroform and phenol ensures a distinct separation between the aqueous and organic phases. Chloroform and phenol are miscible and create a denser solution than phenol alone, aiding the separation of the organic and aqueous layers. This addition of chloroform is useful when removing the aqueous phase to obtain a purified nucleic acid sample.
The pH of the solution must be adjusted specifically for each type of extraction. For DNA extraction, the pH is adjusted to 7.0–8.0. For RNA-specific extraction, the pH is adjusted to 4.5. At pH 4.5, hydrogen ions neutralize the negative charges on the phosphate groups, causing DNA to dissolve
|
https://en.wikipedia.org/wiki/Pitch%20shifting
|
Pitch shifting is a sound recording technique in which the original pitch of a sound is raised or lowered. Effects units that raise or lower pitch by a pre-designated musical interval (transposition) are called pitch shifters.
Pitch and time shifting
The simplest methods are used to increase pitch and reduce durations or, conversely, reduce pitch and increase duration. This can be done by replaying a sound waveform at a different speed than it was recorded. It could be accomplished on an early reel-to-reel tape recorder by changing the diameter of the capstan or using a different motor. As for vinyl records, placing a finger on the turntable to give friction will retard it, while giving it a "spin" can advance it. As technologies improved, motor speed and pitch control could be achieved electronically by servo drive system circuits.
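A minimal sketch of this speed-change approach (the sample rate, test tone, and use of np.interp as a crude resampler are assumptions for the illustration). Note that this simplest method changes pitch and duration together; a pitch shifter that preserves duration requires more elaborate signal processing.

import numpy as np

def vari_speed(signal, rate_factor):
    # Read the waveform at rate_factor times the original speed using linear
    # interpolation: rate_factor > 1 raises pitch and shortens duration,
    # rate_factor < 1 lowers pitch and lengthens duration.
    n_out = int(len(signal) / rate_factor)
    positions = np.arange(n_out) * rate_factor
    return np.interp(positions, np.arange(len(signal)), signal)

sr = 44100
t = np.arange(sr) / sr                          # one second of audio
tone = np.sin(2 * np.pi * 440.0 * t)            # A4 at 440 Hz

octave_up = vari_speed(tone, 2.0)               # ~880 Hz, 0.5 s
semitone_up = vari_speed(tone, 2 ** (1 / 12))   # ~466 Hz, slightly shorter

print(len(tone) / sr, len(octave_up) / sr, len(semitone_up) / sr)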
Pitch shifter and harmonizer
A pitch shifter is a sound effects unit that raises or lowers the pitch of an audio signal by a preset interval. For example, a pitch shifter set to increase the pitch by a fourth will raise each note three diatonic intervals above the notes actually played. Simple pitch shifters raise or lower the pitch by one or two octaves, while more sophisticated devices offer a range of interval alterations. Pitch shifters are included in most audio processors today.
A harmonizer is a type of pitch shifter that combines the pitch-shifted signal with the original to create a two or more note harmony. The Eventide H910 Harmonizer, released in 1975, was one of the first commercially available pitch-shifters and digital multi-effects units. On November 10, 1976, Eventide filed a trademark registration for "Harmonizer" and continues to maintain its rights to the Harmonizer trademark today.
In digital recording, pitch shifting is accomplished through digital signal processing. Older digital processors could often shift pitch only in post-production, whereas many modern devices using computer processing technology can cha
|
https://en.wikipedia.org/wiki/NYSERNet
|
NYSERNet (New York State Education and Research Network) is a non-profit Internet Service Provider in New York State. It mainly provides Internet access to universities, colleges, museums, health care facilities, primary and secondary schools, and research institutions.
History
NYSERNet was founded in 1986 in Troy, New York. Its founders compared NYSERNet's network to the Erie Canal, considering it the next step in two centuries of efforts to draw the country together. NYSERNet's network reaches from Buffalo to New York City. Completed in 1987, it was the first statewide regional IP network in the United States. The initial speed of 56 kbit/s was upgraded to T1 in 1989 and T3 in 1994.
It was the original assignee of AS174 according to RFC 1117. This ASN is used today by Cogent Communications for their global network.
External links
NYSERNet homepage
|
https://en.wikipedia.org/wiki/Difference%20in%20differences
|
Difference in differences (DID or DD) is a statistical technique used in econometrics and quantitative research in the social sciences that attempts to mimic an experimental research design using observational study data, by studying the differential effect of a treatment on a 'treatment group' versus a 'control group' in a natural experiment. It calculates the effect of a treatment (i.e., an explanatory variable or an independent variable) on an outcome (i.e., a response variable or dependent variable) by comparing the average change over time in the outcome variable for the treatment group to the average change over time for the control group. Although it is intended to mitigate the effects of extraneous factors and selection bias, depending on how the treatment group is chosen, this method may still be subject to certain biases (e.g., mean regression, reverse causality and omitted variable bias).
In contrast to a time-series estimate of the treatment effect on subjects (which analyzes differences over time) or a cross-section estimate of the treatment effect (which measures the difference between treatment and control groups), difference in differences uses panel data to measure the differences, between the treatment and control group, of the changes in the outcome variable that occur over time.
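As a toy numerical sketch of that comparison (the group means are invented for illustration; the regression with an interaction term is one standard way of computing the same quantity):

import numpy as np

# Two groups observed before and after the treatment.
treat_before, treat_after = 10.0, 15.0
ctrl_before, ctrl_after = 9.0, 11.0

# DiD estimate: change in the treatment group minus change in the control group.
did = (treat_after - treat_before) - (ctrl_after - ctrl_before)
print("difference-in-differences estimate:", did)           # 5 - 2 = 3

# Equivalent regression: y = b0 + b1*treated + b2*post + b3*(treated*post),
# where the interaction coefficient b3 is the DiD estimate.
y = np.array([treat_before, treat_after, ctrl_before, ctrl_after])
treated = np.array([1, 1, 0, 0])
post = np.array([0, 1, 0, 1])
X = np.column_stack([np.ones(4), treated, post, treated * post])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("interaction coefficient:", coef[3])                   # ~3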
General definition
Difference in differences requires data measured from a treatment group and a control group at two or more different time periods, specifically at least one time period before "treatment" and at least one time period after "treatment." In the example pictured, the outcome in the treatment group is represented by the line P and the outcome in the control group is represented by the line S. The outcome (dependent) variable in both groups is measured at time 1, before either group has received the treatment (i.e., the independent or explanatory variable), represented by the points P1 and S1. The treatment group then receives or experiences the treatmen
|
https://en.wikipedia.org/wiki/Rng%20%28algebra%29
|
In mathematics, and more specifically in abstract algebra, a rng (or non-unital ring or pseudo-ring) is an algebraic structure satisfying the same properties as a ring, but without assuming the existence of a multiplicative identity. The term rng is meant to suggest that it is a ring without i, that is, without the requirement for an identity element.
There is no consensus in the community as to whether the existence of a multiplicative identity must be one of the ring axioms. The term rng was coined to alleviate this ambiguity when people want to refer explicitly to a ring without the axiom of multiplicative identity.
A number of algebras of functions considered in analysis are not unital, for instance the algebra of functions decreasing to zero at infinity, especially those with compact support on some (non-compact) space.
Definition
Formally, a rng is a set R with two binary operations called addition and multiplication such that
(R, +) is an abelian group,
(R, ·) is a semigroup,
Multiplication distributes over addition.
A rng homomorphism is a function f : R → S from one rng to another such that
f(x + y) = f(x) + f(y)
f(x · y) = f(x) · f(y)
for all x and y in R.
If R and S are rings, then a ring homomorphism is the same as a rng homomorphism that maps 1 to 1.
Examples
All rings are rngs. A simple example of a rng that is not a ring is given by the even integers with the ordinary addition and multiplication of integers. Another example is given by the set of all 3-by-3 real matrices whose bottom row is zero. Both of these examples are instances of the general fact that every (one- or two-sided) ideal is a rng.
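A quick check of the first example (a sketch of the standard argument): sums and products of even integers are again even, so $2\mathbb{Z}$ satisfies all of the rng axioms, but it cannot contain a multiplicative identity, since

\[
  e \cdot 2 = 2 \quad \text{for some } e \in 2\mathbb{Z} \;\Longrightarrow\; e = 1 \notin 2\mathbb{Z}.
\]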
Rngs often appear naturally in functional analysis when linear operators on infinite-dimensional vector spaces are considered. Take for instance any infinite-dimensional vector space V and consider the set of all linear operators f : V → V with finite rank (i.e., dim f(V) < ∞). Together with addition and composition of operators, this is a rng, but not
|
https://en.wikipedia.org/wiki/WWHB-CD
|
WWHB-CD (channel 48) is a low-power, Class A television station licensed to Stuart, Florida, United States, serving the West Palm Beach area with programming from the digital multicast network TBD. It is owned and operated by Sinclair Broadcast Group alongside CBS affiliate WPEC (channel 12), CW affiliate WTVX (channel 34), and Class A MyNetworkTV affiliate WTCN-CD (channel 43). The stations share studios on Fairfield Drive in Mangonia Park, Florida (with a West Palm Beach postal address), while WWHB-CD's transmitter is located southwest of Hobe Sound, Florida.
WWHB-CD is the ATSC 3.0 (Next Gen TV) transmitter for West Palm Beach, hosting its main subchannel and the four major network stations. In exchange, its subchannels are broadcast on four full-power stations in the market.
History
WWHB began broadcasting on January 11, 1991, as an independent with the call sign W16AR. It was located on UHF channel 16 and was licensed to Stuart. Retired businessman August Gabriel began the station with $200,000 and three employees. It changed its call sign to WTCN-LP in 1995. From October 1996 until February 1997, it briefly produced a local morning show known as Good Morning Treasure Coast that was hosted by Tom Teter. Ed Birchfield also briefly hosted a 7 p.m. Treasure Coast News program from February to July 1997.
The station moved to UHF channel 15 in 2001 (when it converted to Class A and changed its calls to WTCN-CA in February of that year) and then to UHF channel 14 in 2002. It added a translator on UHF channel 53 in order to reach West Palm Beach. On January 15, 2003, the station changed its calls to the current WWHB-CA and switched to UHF channel 48. This aired from a transmitter at the western boundary of Jonathan Dickinson State Park in Martin County southwest of Jupiter Island.
Martin County businessman Bill Brothers purchased the station in 2001. It was Brothers who revitalized the station creating the first Hispanic language local television service for the
|
https://en.wikipedia.org/wiki/Routhian%20mechanics
|
In classical mechanics, Routh's procedure or Routhian mechanics is a hybrid formulation of Lagrangian mechanics and Hamiltonian mechanics developed by Edward John Routh. Correspondingly, the Routhian is the function which replaces both the Lagrangian and Hamiltonian functions. Routhian mechanics is equivalent to Lagrangian mechanics and Hamiltonian mechanics, and introduces no new physics. It offers an alternative way to solve mechanical problems.
Definitions
The Routhian, like the Hamiltonian, can be obtained from a Legendre transform of the Lagrangian, and has a similar mathematical form to the Hamiltonian, but is not exactly the same. The difference between the Lagrangian, Hamiltonian, and Routhian functions are their variables. For a given set of generalized coordinates representing the degrees of freedom in the system, the Lagrangian is a function of the coordinates and velocities, while the Hamiltonian is a function of the coordinates and momenta.
The Routhian differs from these functions in that some coordinates are chosen to have corresponding generalized velocities, the rest to have corresponding generalized momenta. This choice is arbitrary, and can be done to simplify the problem. It also has the consequence that the Routhian equations are exactly the Hamiltonian equations for some coordinates and corresponding momenta, and the Lagrangian equations for the rest of the coordinates and their velocities. In each case the Lagrangian and Hamiltonian functions are replaced by a single function, the Routhian. The full set thus has the advantages of both sets of equations, with the convenience of splitting one set of coordinates to the Hamilton equations, and the rest to the Lagrangian equations.
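Concretely (a sketch of one common convention; signs and the choice of which coordinates are transformed vary between authors), if the generalized coordinates $q_1, \dots, q_n$ are split so that $q_1, \dots, q_s$ keep their velocities while $q_{s+1}, \dots, q_n$ are traded for their conjugate momenta $p_i = \partial L / \partial \dot q_i$, the Routhian can be written as the partial Legendre transform

\[
  R(q_1, \dots, q_n,\ \dot q_1, \dots, \dot q_s,\ p_{s+1}, \dots, p_n,\ t)
  \;=\; \sum_{i=s+1}^{n} p_i \dot q_i \;-\; L(q, \dot q, t),
\]

with the velocities $\dot q_{s+1}, \dots, \dot q_n$ eliminated in favour of the momenta. Hamilton's equations then govern the transformed coordinates and Lagrange's equations the remaining ones.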
In the case of Lagrangian mechanics, the generalized coordinates $q_1, \dots, q_n$, the corresponding velocities $\dot q_1, \dots, \dot q_n$, and possibly time $t$, enter the Lagrangian $L(q, \dot q, t)$,
where the overdots denote time derivatives.
In Hamiltonian mechanics, the generalized coordinates and the correspondi
|
https://en.wikipedia.org/wiki/APL%20syntax%20and%20symbols
|
The programming language APL is distinctive in being symbolic rather than lexical: its primitives are denoted by symbols, not words. These symbols were originally devised as a mathematical notation to describe algorithms. APL programmers often assign informal names when discussing functions and operators (for example, "product" for ×/) but the core functions and operators provided by the language are denoted by non-textual symbols.
Monadic and dyadic functions
Most symbols denote functions or operators. A monadic function takes as its argument the result of evaluating everything to its right. (Moderated in the usual way by parentheses.) A dyadic function has another argument, the first item of data on its left. Many symbols denote both monadic and dyadic functions, interpreted according to use. For example, ⌊3.2 gives 3, the largest integer not above the argument, and 3⌊2 gives 2, the lower of the two arguments.
Functions and operators
APL uses the term operator in Heaviside's sense, as a moderator of a function, as opposed to some other programming languages' use of the same term for something that operates on data (compare relational operators and operators generally). Other programming languages also sometimes use the term interchangeably with function; however, both terms are used more precisely in APL. Early definitions of APL symbols were very specific about how symbols were categorized. For example, the operator reduce is denoted by a forward slash and reduces an array along one axis by interposing its function operand. An example of reduce:
In the above case, the reduce or slash operator moderates the multiply function. The expression ×/2 3 4 evaluates to a scalar (1-element) result by reducing an array through multiplication. The above case is simplified; imagine multiplying (adding, subtracting or dividing) more than just a few numbers together. (From a vector, ×/ returns the product of all its elements.)
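For comparison with conventional languages (an analogy only; APL's operator model is richer than this), the same reduction can be sketched with Python's functools.reduce, which likewise folds a function across a sequence:

from functools import reduce
from operator import add, mul

data = [2, 3, 4]
print(reduce(mul, data))   # 24, analogous to APL  ×/2 3 4
print(reduce(add, data))   # 9,  analogous to APL  +/2 3 4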
The above dyadic functions examples [left and ri
|
https://en.wikipedia.org/wiki/Social%20shaping%20of%20technology
|
According to Robin A. Williams and David Edge (1996), "Central to social shaping of technology (SST) is the concept that there are choices (though not necessarily conscious choices) inherent in both the design of individual artifacts and systems, and in the direction or trajectory of innovation programs."
If technology does not emerge from the unfolding of a predetermined logic or a single determinant, then innovation is a 'garden of forking paths'. Different routes are available, potentially leading to different technological outcomes. Significantly, these choices could have differing implications for society and for particular social groups.
SST is one of the models of the technology–society relationship which emerged in the 1980s with MacKenzie and Wajcman's influential 1985 collection, alongside Pinch and Bijker's social construction of technology framework and Callon and Latour's actor-network theory. These share a criticism of the linear model of innovation and of technological determinism. SST differs from the others notably in the attention it pays to the influence of the social and technological context of development, which shapes innovation choices. SST is concerned with exploring the material consequences of different technical choices, but it criticizes technological determinism, which argues that technology follows its own developmental path, outside of human influence, and in turn influences society. In this way, social shaping theorists conceive the relationship between technology and society as one of 'mutual shaping'.
Some versions of this theory state that technology affects society by affordances, constraints, preconditions, and unintended consequences (Baym, 2015). Affordance is the idea that technology makes specific tasks easier in our lives, while constraints make tasks harder to complete. The preconditions of technology are the skills and resources that are vital to using technology to its fullest potential. Finally, the unintended
|
https://en.wikipedia.org/wiki/Hut%206
|
Hut 6 was a wartime section of the Government Code and Cypher School (GC&CS) at Bletchley Park, Buckinghamshire, Britain, tasked with the solution of German Army and Air Force Enigma machine cyphers. Hut 8, by contrast, attacked Naval Enigma. Hut 6 was established at the initiative of Gordon Welchman, and was run initially by Welchman and fellow Cambridge mathematician John Jeffreys.
Welchman's deputy, Stuart Milner-Barry, succeeded Welchman as head of Hut 6 in September 1943, at which point over 450 people were working in the section.
Hut 6 was partnered with Hut 3, which handled the translation and intelligence analysis of the raw decrypts provided by Hut 6.
Location
Hut 6 was originally named after the building in which the section was located. Welchman says the hut was 20 yards (18m) long by 10 yards (9m) wide, with two large rooms at the far end – and no toilets. Staff had to go to another building. Irene Young recalled that she "worked in Room 82, though in typical Bletchley fashion there were not eighty-one rooms preceding it". She was glad to move from the Decoding Room "where all the operators were constantly having nervous breakdowns on account of the pace of work and the appalling noise" to the Registration Room which arranged intercepts according to callsign and frequency.
As the number of personnel increased, the section moved to additional buildings around Bletchley Park, but its name was retained, with each new location also being known as 'Hut 6'. The original building was then renamed 'Hut 16'.
Personnel
John Jeffreys was initially in charge of the hut with Gordon Welchman until May 1940; Jeffreys was diagnosed as ill in 1940 and died in 1944. Welchman then became the official head of the section until autumn 1943, subsequently rising to Assistant Director of Mechanisation at Bletchley Park. Hugh Alexander was a member from February 1940 to March 1941 before moving to become head of Hut 8. Stuart Milner-Barry joined in early 1940 and was in charge from autumn 1943
|
https://en.wikipedia.org/wiki/Bristol%20stool%20scale
|
The Bristol stool scale is a diagnostic medical tool designed to classify the form of human faeces into seven categories. It is used in both clinical and experimental fields.
It was developed at the Bristol Royal Infirmary as a clinical assessment tool in 1997, and is widely used as a research tool to evaluate the effectiveness of treatments for various diseases of the bowel, as well as a clinical communication aid; including being part of the diagnostic triad for irritable bowel syndrome.
Interpretation
The seven types of stool are:
Type 1: Separate hard lumps, like nuts (difficult to pass)
Type 2: Sausage-shaped, but lumpy
Type 3: Like a sausage but with cracks on its surface
Type 4: Like a sausage or snake, smooth and soft (average stool)
Type 5: Soft blobs with clear cut edges
Type 6: Fluffy pieces with ragged edges, a mushy stool (diarrhoea)
Type 7: Watery, no solid pieces, entirely liquid (diarrhoea)
Types 1 and 2 indicate constipation, types 3 and 4 are the ideal stools as they are easy to defecate while not containing excess liquid, type 5 indicates a lack of dietary fiber, and types 6 and 7 indicate diarrhoea.
In the initial study, in the population examined, type 1 and 2 stools were more prevalent in females, while type 5 and 6 stools were more prevalent in males; furthermore, 80% of subjects who reported rectal tenesmus (a sensation of incomplete defecation) had type 7. These and other data have allowed the scale to be validated. The initial research did not include a pictorial chart; this was developed at a later point.
The Bristol stool scale is also very sensitive to changes in intestinal transit time caused by medications, such as the antidiarrhoeal loperamide, or senna and anthraquinone, which have a laxative effect.
Uses
Diagnosis of irritable bowel syndrome
People with irritable bowel syndrome (IBS) typically report that they suffer with abdominal cramps and constipation.
In some patients, chronic constipation is interspersed with br
|
https://en.wikipedia.org/wiki/Corresponding%20sides%20and%20corresponding%20angles
|
In geometry, the tests for congruence and similarity involve comparing corresponding sides and corresponding angles of polygons. In these tests, each side and each angle in one polygon is paired with a side or angle in the second polygon, taking care to preserve the order of adjacency.
For example, if one polygon has sequential sides a, b, c, d, and e and the other has sequential sides v, w, x, y, and z, and if b and w are corresponding sides, then side a (adjacent to b) must correspond to either v or x (both adjacent to w). If a corresponds to v, then c corresponds to x, d corresponds to y, and e corresponds to z; hence the i-th element of the sequence a, b, c, d, e corresponds to the i-th element of the sequence v, w, x, y, z for i = 1, ..., 5. On the other hand, if in addition to b corresponding to w we have a corresponding to x, then c corresponds to v, d corresponds to z, and e corresponds to y; hence the i-th element of a, b, c, d, e corresponds to the i-th element of the reversed (and cyclically shifted) sequence x, w, v, z, y.
Congruence tests look for all pairs of corresponding sides to be equal in length, though except in the case of the triangle this is not sufficient to establish congruence (as exemplified by a square and a rhombus that have the same side length). Similarity tests look at whether the ratios of the lengths of each pair of corresponding sides are equal, though again this is not sufficient. In either case equality of corresponding angles is also necessary; equality (or proportionality) of corresponding sides combined with equality of corresponding angles is necessary and sufficient for congruence (or similarity). The corresponding angles as well as the corresponding sides are defined as appearing in the same sequence, so, for example, if in a polygon with the side sequence a, b, c, d, e and another with the corresponding side sequence v, w, x, y, z we have a vertex angle appearing between sides a and b, then its corresponding vertex angle must appear between sides v and w.
|
https://en.wikipedia.org/wiki/4-Anisaldehyde
|
4-Anisaldehyde, or p-anisaldehyde, is an organic compound with the formula CH3OC6H4CHO. The molecule consists of a benzene ring with a formyl group and a methoxy group. It is a colorless liquid with a strong aroma, providing a sweet, floral and strongly aniseed-like odor. Two other isomers of anisaldehyde, ortho-anisaldehyde and meta-anisaldehyde, are known but are less commonly encountered.
Production
Anisaldehyde is prepared commercially by oxidation of 4-methoxytoluene (p-cresyl methyl ether) using manganese dioxide, which converts a methyl group to the aldehyde group. It can also be produced from anethole, a related fragrance found in some alcoholic beverages, by oxidative cleavage of its alkene.
Uses
Being structurally related to vanillin, 4-anisaldehyde is widely used in the fragrance and flavor industry. It is used as an intermediate in the synthesis of other compounds important in pharmaceuticals and perfumery. The related ortho isomer has a scent of licorice.
A solution of para-anisaldehyde in acid and ethanol is a useful stain in thin layer chromatography. Different chemical compounds on the plate can give different colors, allowing easy distinction.
DNA breakage
Anisaldehyde in combination with copper (II) can induce single- and double-strand breaks in double stranded DNA.
|
https://en.wikipedia.org/wiki/Memory-bound%20function
|
Memory bound refers to a situation in which the time to complete a given computational problem is decided primarily by the amount of free memory required to hold the working data. This is in contrast to algorithms that are compute-bound, where the number of elementary computation steps is the deciding factor.
Memory and computation boundaries can sometimes be traded against each other, e.g. by saving and reusing preliminary results or using lookup tables.
Memory-bound functions and memory functions
Memory-bound functions and memory functions are related in that both involve extensive memory access, but a distinction exists between the two.
Memory functions use a dynamic programming technique called memoization to relieve the inefficiency that naive recursion can cause. It is based on the simple idea of calculating and storing solutions to subproblems so that the solutions can be reused later without recalculating the subproblems again. The best known example that takes advantage of memoization is an algorithm that computes the Fibonacci numbers. The following code (shown here in Python) uses recursion and memoization, and runs in linear CPU time:
def fibonacci(n):
    results = [-1] * (n + 1)        # -1 means undefined (not yet computed)
    return fibonacci_results(results, n)

def fibonacci_results(results, n):
    if results[n] != -1:            # If it has been solved before,
        return results[n]           # look it up.
    if n == 0:
        val = 0
    elif n == 1:
        val = 1
    else:
        val = fibonacci_results(results, n - 2) + fibonacci_results(results, n - 1)
    results[n] = val                # Save this result for re-use.
    return val
Compare the above to an algorithm that uses only recursion, and runs in exponential CPU time:
def recursive_fibonacci(n):
    if n == 0:
        return 0
    if n == 1:
        return 1
    return recursive_fibonacci(n - 1) + recursive_fibonacci(n - 2)
While the recursive-only algorithm is simpler and more elegant
|
https://en.wikipedia.org/wiki/Metabolic%20network%20modelling
|
Metabolic network modelling, also known as metabolic network reconstruction or metabolic pathway analysis, allows for an in-depth insight into the molecular mechanisms of a particular organism. In particular, these models correlate the genome with molecular physiology. A reconstruction breaks down metabolic pathways (such as glycolysis and the citric acid cycle) into their respective reactions and enzymes, and analyzes them within the perspective of the entire network. In simplified terms, a reconstruction collects all of the relevant metabolic information of an organism and compiles it in a mathematical model. Validation and analysis of reconstructions can allow identification of key features of metabolism such as growth yield, resource distribution, network robustness, and gene essentiality. This knowledge can then be applied to create novel biotechnology.
In general, the process to build a reconstruction is as follows:
Draft a reconstruction
Refine the model
Convert model into a mathematical/computational representation
Evaluate and debug model through experimentation
The related method of flux balance analysis seeks to mathematically simulate metabolism in genome-scale reconstructions of metabolic networks.
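As a toy sketch of the flux-balance idea (the three-reaction network, bounds, and objective below are invented for illustration and are nowhere near genome scale), a linear program maximizes a chosen objective flux subject to the steady-state constraint S·v = 0:

import numpy as np
from scipy.optimize import linprog

# Toy network: uptake -> A -> B -> biomass, with internal metabolites A and B
# held at steady state. Columns are reactions, rows are metabolites.
S = np.array([
    [1, -1,  0],   # A: produced by uptake, consumed by conversion
    [0,  1, -1],   # B: produced by conversion, consumed by the biomass reaction
])
bounds = [(0, 10), (0, None), (0, None)]   # uptake limited to 10 flux units

# Maximize the biomass flux v[2]; linprog minimizes, so negate the objective.
c = np.array([0, 0, -1])
res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print("optimal fluxes:", res.x)            # expected: [10, 10, 10]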
Genome-scale metabolic reconstruction
A metabolic reconstruction provides a highly mathematical, structured platform on which to understand the systems biology of metabolic pathways within an organism. The integration of biochemical metabolic pathways with rapidly available, annotated genome sequences has developed what are called genome-scale metabolic models. Simply put, these models correlate metabolic genes with metabolic pathways. In general, the more information about physiology, biochemistry and genetics is available for the target organism, the better the predictive capacity of the reconstructed models. Mechanically speaking, the process of reconstructing prokaryotic and eukaryotic metabolic networks is essentially the same. Having said this,
|
https://en.wikipedia.org/wiki/YIG%20sphere
|
Yttrium iron garnet spheres (YIG spheres) serve as magnetically tunable filters and resonators for microwave frequencies. YIG filters are used for their high Q factors, typically between 100 and 200. A sphere made from a single crystal of synthetic yttrium iron garnet acts as a resonator.
The field from an electromagnet changes the resonance frequency of the sphere and hence the frequency it will allow to pass. The advantage of this type of filter is that the garnet can be tuned over a very wide frequency range by varying the strength of the magnetic field. Some filters can be tuned from 3 GHz up to 50 GHz.
Construction
The YIG spheres themselves are on the order of 0.5 mm in diameter and are manufactured from slightly larger cubes of diced material by tumbling, as is done in the manufacture of jewelry.
The garnet is mounted on a ceramic rod, and a pair of small loops around the sphere couple fields into and out of it; the loops are half-turns, positioned at right angles to each other to prevent direct electromagnetic coupling between them, and each is grounded at one end.
The input and output coils are oriented at right angles to one another around the YIG crystal. They are cross-coupled when energized by the ferrimagnetic resonance frequency, which depends on the external magnetic field supplied by an electromagnet.
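A back-of-the-envelope sketch of this tuning relationship (assuming the common rule of thumb of roughly 28 GHz per tesla, i.e. about 2.8 MHz per oersted, for an isotropic sphere; real filters also involve anisotropy fields and calibration):

GAMMA_GHZ_PER_TESLA = 28.0   # approximate electron gyromagnetic ratio

def yig_resonance_ghz(b_tesla):
    # Approximate resonance frequency of a YIG sphere in an applied field.
    return GAMMA_GHZ_PER_TESLA * b_tesla

def field_for_frequency(f_ghz):
    # Field (in tesla) needed to tune the sphere to a given frequency.
    return f_ghz / GAMMA_GHZ_PER_TESLA

for f in (3.0, 10.0, 50.0):
    print(f"{f:5.1f} GHz  ->  ~{field_for_frequency(f):.2f} T")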
YIG filters usually consist of several coupled stages, each stage consisting of a sphere and a pair of loops.
Applications
YIG filters are often used as preselectors. YIG filters tuned by a sweep current are used in spectrum analyzers. Another YIG application is YIG oscillators, where the sphere acts as a tunable frequency-determining element. It is coupled to an amplifier which provides the required feedback for oscillation.
External links
Experimenting with a Stellex YIG Oscillator
A simple approach to YIG oscillators
|
https://en.wikipedia.org/wiki/Fountain%20code
|
In coding theory, fountain codes (also known as rateless erasure codes) are a class of erasure codes with the property that a potentially limitless sequence of encoding symbols can be generated from a given set of source symbols such that the original source symbols can ideally be recovered from any subset of the encoding symbols of size equal to or only slightly larger than the number of source symbols. The term fountain or rateless refers to the fact that these codes do not exhibit a fixed code rate.
A fountain code is optimal if the original k source symbols can be recovered from any k successfully received encoding symbols (i.e., excluding those that were erased). Fountain codes are known that have efficient encoding and decoding algorithms and that allow the recovery of the original k source symbols from any k’ of the encoding symbols with high probability, where k’ is just slightly larger than k.
LT codes were the first practical realization of fountain codes. Raptor codes and online codes were subsequently introduced, and achieve linear time encoding and decoding complexity through a pre-coding stage of the input symbols.
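A minimal sketch of the fountain idea (a naive random-combination code over GF(2), not the LT or Raptor construction; the payloads are small integers and the degree distribution is uniform purely for illustration): each encoding symbol is the XOR of a random subset of the k source symbols, and the receiver decodes by Gaussian elimination once it has collected k linearly independent combinations.

import random

def encode_symbol(source):
    # One fountain-coded symbol: the XOR of a random non-empty subset of the
    # source symbols, plus the bitmask recording which symbols were combined.
    k = len(source)
    mask = 0
    while mask == 0:
        mask = random.getrandbits(k)
    value = 0
    for i in range(k):
        if (mask >> i) & 1:
            value ^= source[i]
    return mask, value

def decode(symbols, k):
    # Gaussian elimination over GF(2); returns the sources once k linearly
    # independent combinations have been received, otherwise None.
    pivot = [None] * k
    for mask, value in symbols:
        for i in range(k):                       # reduce against existing pivots
            if (mask >> i) & 1 and pivot[i] is not None:
                mask ^= pivot[i][0]
                value ^= pivot[i][1]
        if mask:
            lead = (mask & -mask).bit_length() - 1
            pivot[lead] = (mask, value)
    if any(p is None for p in pivot):
        return None
    for i in range(k):                           # back-substitute to single bits
        m, v = pivot[i]
        for j in range(i + 1, k):
            if (m >> j) & 1:
                m ^= pivot[j][0]
                v ^= pivot[j][1]
        pivot[i] = (m, v)
    return [v for _, v in pivot]

random.seed(1)
source = [17, 42, 99, 7]                         # k = 4 source symbols
received, decoded = [], None
while decoded is None:                           # keep collecting from the "fountain"
    received.append(encode_symbol(source))
    decoded = decode(received, len(source))
print(len(received), "symbols collected ->", decoded)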
Applications
Fountain codes are flexibly applicable at a fixed code rate, or where a fixed code rate cannot be determined a priori, and where efficient encoding and decoding of large amounts of data is required.
One example is that of a data carousel, where some large file is continuously broadcast to a set of receivers. Using a fixed-rate erasure code, a receiver missing a source symbol (due to a transmission error) faces the coupon collector's problem: it must successfully receive an encoding symbol which it does not already have. This problem becomes much more apparent when using a traditional short-length erasure code, as the file must be split into several blocks, each being separately encoded: the receiver must now collect the required number of missing encoding symbols for each block. Using a fountain code, it suffices for a rece
|
https://en.wikipedia.org/wiki/Sharps%20waste
|
Sharps waste is a form of biomedical waste composed of used "sharps", which includes any device or object used to puncture or lacerate the skin. Sharps waste is classified as biohazardous waste and must be carefully handled. Common medical materials treated as sharps waste are
hypodermic needles,
disposable scalpels and blades,
contaminated glass and certain plastics, and
guidewires used in surgery.
Qualifying materials
In addition to needles and blades, anything attached to them, such as syringes and injection devices, is also considered sharps waste.
Blades can include razors, scalpels, X-Acto knives, scissors, or any other items used for cutting in a medical or biological research setting, regardless of whether they have been contaminated with biohazardous material. While glass and sharp plastic are considered sharps waste, their handling methods can vary.
Glass items which have been contaminated with a biohazardous material are treated with the same concern as needles and blades, even if unbroken. If glass is contaminated, it is still often treated as a sharp, because it can break during the disposal process. Contaminated plastic items which are not sharp can be disposed of in a biohazardous waste receptacle instead of a sharps container.
Dangers involved
Injuries from sharps waste can pose a large public health concern, as used sharps may contain biohazardous material. It is possible for this waste to spread blood-borne pathogens if contaminated sharps penetrate the skin. The spread of these pathogens is directly responsible for the transmission of blood-borne diseases, such as hepatitis B (HBV), hepatitis C (HCV), and HIV. Health care professionals expose themselves to the risk of transmission of these diseases when handling sharps waste. The large volume handled by health care professionals on a daily basis increases the chance that an injury may occur.
The general public can occasionally be at risk of sustaining injuries from sharps waste as we
|
https://en.wikipedia.org/wiki/International%20Conference%20on%20Dependable%20Systems%20and%20Networks
|
The International Conference on Dependable Systems and Networks (or DSN) is an annual conference on topics related to dependable computer systems and reliable networks. It typically features a number of coordinated tracks, including the main paper track, several workshops, tutorials, an industry session, a student forum, and fast abstracts. It is sponsored by the IEEE and the IFIP WG 10.4 on Dependable Computing and Fault Tolerance. DSN was formed in 2000 by merging the IEEE International Symposium on Fault-Tolerant Computing (FTCS) and the IFIP International Working Conference on Dependable Computing for Critical Applications (DCCA). The instance number for DSN is taken from FTCS, which was first held in 1971 and annually thereafter.
A DSN Hall of Fame ranks the researchers by the number of papers that they have published in DSN.
In 2020, the 50th DSN was to be held in Valencia, Spain, but due to the COVID-19 pandemic it was held virtually.
In 2021, the 51st DSN was to be held in Taipei, Taiwan, but due to the COVID-19 pandemic it was held virtually.
In 2022, the 52nd DSN was held in person in Baltimore, Maryland, United States.
In 2023, the 53rd DSN is scheduled to be held in Porto, Portugal.
Awards
The following awards are given at DSN.
Best paper award: The winner from among three nominees is selected through audience voting
William C. Carter PhD Dissertation Award in Dependability
Rising Star in Dependability Award
Test-of-Time Award: This recognizes two papers published at DSN 10 years ago
Jean-Claude Laprie Award: This recognizes outstanding papers that have significantly influenced the theory and/or practice of dependable computing
References
External links
Computer networking conferences
|
https://en.wikipedia.org/wiki/Fader%20creep
|
Fader creep is a colloquial term used in audio recording to describe a tendency for sound engineers to raise the gain of individual channels on a mixing console, rather than lowering others to achieve the desired change in the mix.
Results of creeping
As a result, the faders (potentiometers that operate by sliding up or down) or volume controls (rotary potentiometers) on the mixing board or audio processor gradually "creep" toward the maximum volume setting, which reduces the ability to manipulate the relative volumes between channels. It can also result in clipping or distortion of the master mix, which is when the overall volume of sound is too great for the equipment or recording medium intended to hold it.
Multi track problems with creep
Fader creep can be a particular problem in audio mixing sessions for multi-track recordings, where individual sounds, held on separate audio tracks or delivered by outboard MIDI or computer audio equipment, are combined into the final stereo presentation of the recording. For example, an engineer might compensate for a particularly loud drum track by raising the volumes of the voice, the guitar, and the piano to the point where all of the individual signals are competing for headroom. A better solution is to lower the volume of the drums and adjust the other channels accordingly.
Live concert creep
In audio mixing for live concerts, fader creep can result when ear fatigue (the diminishing of the ability for the human ear to hear clearly after prolonged exposure to loud sounds) reduces the ability of the sound engineer to hear the individual components of the mix (i.e. separate instruments and voices on the stage) accurately.
Audio mixing
|
https://en.wikipedia.org/wiki/Hydroperoxide
|
Hydroperoxides or peroxols are compounds of the form ROOH, which contain the hydroperoxy functional group (–OOH). The hydroperoxide anion (HOO−) and the neutral hydroperoxyl radical (HOO·) consist of an unbonded hydroperoxy group. When R is organic, the compounds are called organic hydroperoxides. Such compounds are a subset of organic peroxides, which have the formula ROOR. Organic hydroperoxides can either intentionally or unintentionally initiate explosive polymerisation in materials with unsaturated chemical bonds.
Properties
The O−O bond length in peroxides is about 1.45 Å, and the R−O−O angles (R = H, C) are about 110° (water-like). Characteristically, the C−O−O−H dihedral angles are about 120°. The O−O bond is relatively weak, with a bond dissociation energy of , less than half the strengths of C−C, C−H, and C−O bonds.
Hydroperoxides are typically more volatile than the corresponding alcohols:
tert-BuOOH (b.p. 36°C) vs tert-BuOH (b.p. 82-83°C)
CH3OOH (b.p. 46°C) vs CH3OH (b.p. 65°C)
cumene hydroperoxide (b.p. 153°C) vs cumyl alcohol (b.p. 202°C)
Miscellaneous reactions
Hydroperoxides are mildly acidic. The range is indicated by the pKa values of 11.5 for CH3OOH to 13.1 for Ph3COOH.
Hydroperoxides can be reduced to alcohols with lithium aluminium hydride, as described in this idealized equation:
4 ROOH + LiAlH4 → LiAlO2 + 2 H2O + 4 ROH
This reaction is the basis of methods for analysis of organic peroxides. Another way to evaluate the content of peracids and peroxides is the volumetric titration with alkoxides such as sodium ethoxide.
The phosphite esters and tertiary phosphines also effect reduction:
ROOH + PR3 → OPR3 + ROH
Uses
Precursors to epoxides
"The single most important synthetic application of alkyl hydroperoxides is without doubt the metal-catalysed epoxidation of alkenes." In the Halcon process tert-butyl hydroperoxide (TBHP) is employed for the production of propylene oxide.
Of specialized interest, chiral epoxides are prepared using hydropero
|
https://en.wikipedia.org/wiki/Pneumatic%20gripper
|
A pneumatic gripper is a specific type of pneumatic actuator that typically involves either parallel or angular motion of surfaces, known as "tooling jaws" or "fingers", that grip an object. The gripper makes use of compressed air which powers a piston rod inside the tool. Both internal (bore) and external gripping are possible with the same equipment because of an increased quantity of cross rollers in the parallel slide part.
References
Actuators
Fluid dynamics
|
https://en.wikipedia.org/wiki/Paint%20by%20number
|
Paint by number or painting by numbers kits are self-contained painting sets, designed to facilitate painting a pre-designed image. They generally include brushes, tubs of paint with numbered labels, and a canvas printed with borders and numbers. The user selects the color corresponding to one of the numbers then uses it to fill in a delineated section of the canvas, in a manner similar to a coloring book.
The kits were invented, developed and marketed in 1950 by Max S. Klein, an engineer and owner of the Palmer Paint Company in Detroit, Michigan, and Dan Robbins, a commercial artist. When Palmer Paint introduced crayons to consumers, they also posted images online for a "Crayon by Number" version.
History
The first patent for the paint by number technique was filed in 1923.
Paint by Number in its popular form was created by the Palmer Show Card Paint Company. The owner of the company approached employee Dan Robbins with the idea for the project. After several iterations of the product, the company in 1951 introduced the Craft Master brand, which went on to sell over 12 million kits. This public response induced other companies to produce their own versions of paint by number. The Craft Master paint kit box tops proclaimed, "A BEAUTIFUL OIL PAINTING THE FIRST TIME YOU TRY."
Following the death of Max Klein in 1993, his daughter, Jacquelyn Schiffman, donated the Palmer Paint Co. archives to the Smithsonian Museum of American History. The archival materials have been placed in the museum's Archives Center where they have been designated collection #544, the "Paint by Number Collection".
In 1992, Michael O'Donoghue and Trey Speegle organized and mounted a show of O'Donoghue's paint by number collection in New York City at the Bridgewater/Lustberg Gallery. After O'Donoghue's death in 1994, the Smithsonian Institution's National Museum of American History exhibited many key pieces from O'Donoghue's collection, now owned by Speegle, along with works from other col
|
https://en.wikipedia.org/wiki/Dreamwork
|
Dreamwork differs from classical dream interpretation in that the aim is to explore the various images and emotions that a dream presents and evokes, while not attempting to come up with a unique dream meaning. In this way the dream remains "alive" whereas if it has been assigned a specific meaning, it is "finished" (i.e., over and done with). Dreamworkers take the position that a dream may have a variety of meanings depending on the levels (e.g. subjective, objective) that are being explored.
A belief of dreamwork is that each person has their own dream "language". Any given place, person, object, or symbol can differ in its meaning from dreamer to dreamer and also from time to time in the dreamer's ongoing life situation. Thus someone helping a dreamer get closer to their dream through dreamwork adopts an attitude of "not knowing" as far as possible.
In dreamwork it is usual to wait until all the questions have been asked—and the answers carefully listened to—before the dreamworker (or dreamworkers if it is done in a group setting) offers any suggestions about what the dream might mean. In fact, a dreamworker often prefaces any interpretation by saying, "if this were my dream, it might mean..." (a technique first developed by Montague Ullman, Stanley Krippner, and Jeremy Taylor and now widely practiced). In this way, dreamers are not obliged to agree with what is said and may use their own judgment in deciding which comments appear valid or provide insight. If the dreamwork is done in a group, there may well be several things that are said by participants that seem valid to the dreamer but it can also happen that nothing does. Appreciation of the validity or insightfulness of a comment from a dreamwork session can come later, sometimes days after the end of the session.
Dreamwork or dream-work can also refer to Sigmund Freud's idea that a person's forbidden and repressed desires are distorted in dreams, so they appear in disguised forms. Freud used the term 'dr
|
https://en.wikipedia.org/wiki/Unix%20architecture
|
A Unix architecture is a computer operating-system architecture that embodies the Unix philosophy. It may adhere to standards such as the Single UNIX Specification (SUS) or similar POSIX IEEE standard. No single published standard describes all Unix architecture computer operating systems — this is in part a legacy of the Unix wars.
Description
There are many systems which are Unix-like in their architecture. Notable among these are the Linux distributions. The distinctions between Unix and Unix-like systems have been the subject of heated legal battles, and the holders of the UNIX brand, The Open Group, object to "Unix-like" and similar terms.
For distinctions between SUS branded UNIX architectures and other similar architectures, see Unix-like.
Kernel
A Unix kernel — the core or key components of the operating system — consists of many kernel subsystems like process management, scheduling, file management, device management, network management, memory management, and dealing with interrupts from hardware devices.
Each of the subsystems has some features:
Concurrency: As Unix is a multiprocessing OS, many processes run concurrently to improve the performance of the system.
Virtual memory (VM): the memory management subsystem implements virtual memory, so users need not worry about the relationship between the size of an executable program and the amount of physical RAM.
Paging: a technique to minimize internal as well as external fragmentation in physical memory.
Virtual file system (VFS): an abstraction layer that hides the complexities of the different underlying file systems; a user can use the same standard file-system-related calls to access different file systems.
The kernel provides these and other basic services: interrupt and trap handling, separation between user and system space, system calls, scheduling, timer and clock handling, file descriptor management.
Features
Some key features of the Unix architecture concept are:
Unix systems use a central
|
https://en.wikipedia.org/wiki/Aluminized%20screen
|
Aluminized screen may refer to a type of cathode ray tube (CRT) for video display or to a type of projection screen for showing motion pictures or slides, especially in polarized 3D.
Some cathode ray tubes, e.g., television picture tubes, include a thin layer of aluminium deposited on the back surface of their internal phosphor screen coating. Light from an excited area of the phosphor which would otherwise wastefully shine back into the tube is instead reflected forward through the phosphor coating, increasing the total visible light output by around a factor of two. It also prevents physical degradation of the phosphor ("phosphor poisoning"), increasing the longevity of the device, and it may also act as a heat sink. The aluminium layer must be thick enough to reflect light efficiently, yet not so thick as to absorb too great a proportion of the electron beam that excites the phosphor.
Some projection screens have an aluminized surface, usually an aluminium paint rather than a metal sheet. They reflect polarized light without altering its polarization. This is necessary when showing 3D films as left-eye and right-eye views are superimposed but oppositely polarized (typically at opposite 45 degree angles to the vertical if linearly polarized, right-handed and left-handed if circularly polarized). Audience members wear polarized glasses that allow only the correct image to be seen by each eye.
References
Cathode ray tube
|
https://en.wikipedia.org/wiki/Sample-rate%20conversion
|
Sample-rate conversion, sampling-frequency conversion or resampling is the process of changing the sampling rate or sampling frequency of a discrete signal to obtain a new discrete representation of the underlying continuous signal. Application areas include image scaling and audio/visual systems, where different sampling rates may be used for engineering, economic, or historical reasons.
For example, Compact Disc Digital Audio and Digital Audio Tape systems use different sampling rates, and American television, European television, and movies all use different frame rates. Sample-rate conversion prevents changes in speed and pitch that would otherwise occur when transferring recorded material between such systems.
More specific types of resampling include: upsampling or upscaling; downsampling, downscaling, or decimation; and interpolation.
The term multi-rate digital signal processing is sometimes used to refer to systems that incorporate sample-rate conversion.
Techniques
Conceptual approaches to sample-rate conversion include: converting to an analog continuous signal, then re-sampling at the new rate, or calculating the values of the new samples directly from the old samples. The latter approach is more satisfactory since it introduces less noise and distortion. Two possible implementation methods are as follows:
If the ratio of the two sample rates is (or can be approximated by) a fixed rational number L/M: generate an intermediate signal by inserting L − 1 zeros between each of the original samples. Low-pass filter this signal at half of the lower of the two rates. Select every M-th sample from the filtered output, to obtain the result.
Treat the samples as geometric points and create any needed new points by interpolation. Choosing an interpolation method is a trade-off between implementation complexity and conversion quality (according to application requirements). Commonly used are: ZOH (for film/video frames), cubic (for image processing) and w
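A minimal sketch of the first (rational-ratio) method above, written with plain NumPy; the filter length, the Hamming window, and the example rates are arbitrary illustrative choices rather than recommendations:

import numpy as np

def resample_rational(x, L, M, taps=101):
    """Resample x by the rational factor L/M (upsample by L, then downsample by M)."""
    # 1. Insert L-1 zeros between samples (zero-stuffing).
    up = np.zeros(len(x) * L)
    up[::L] = x
    # 2. Low-pass filter at half the lower of the two rates, i.e. a cutoff of
    #    min(1/L, 1/M) as a fraction of the upsampled signal's Nyquist frequency.
    cutoff = min(1.0 / L, 1.0 / M)
    n = np.arange(taps) - (taps - 1) / 2
    h = cutoff * np.sinc(cutoff * n) * np.hamming(taps)  # windowed-sinc FIR
    h *= L                                               # restore amplitude lost by zero-stuffing
    filtered = np.convolve(up, h, mode="same")
    # 3. Keep every M-th sample.
    return filtered[::M]

# Example: convert a 1 kHz tone from 48 kHz to 32 kHz (L/M = 2/3).
fs_in = 48000
t = np.arange(0, 0.01, 1 / fs_in)
x = np.sin(2 * np.pi * 1000 * t)
y = resample_rational(x, L=2, M=3)
print(len(x), "->", len(y))   # roughly 2/3 as many samples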
|
https://en.wikipedia.org/wiki/Conformal%20gravity
|
Conformal gravity refers to gravity theories that are invariant under conformal transformations in the Riemannian geometry sense; more accurately, they are invariant under Weyl transformations $g_{\mu\nu} \rightarrow \Omega^2(x)\, g_{\mu\nu}$, where $g_{\mu\nu}$ is the metric tensor and $\Omega(x)$ is a function on spacetime.
Weyl-squared theories
The simplest theory in this category has the square of the Weyl tensor as the Lagrangian,
$$S = \int \mathrm{d}^4x \, \sqrt{-g}\; C_{\mu\nu\rho\sigma} C^{\mu\nu\rho\sigma},$$
where $C_{\mu\nu\rho\sigma}$ is the Weyl tensor. This is to be contrasted with the usual Einstein–Hilbert action, where the Lagrangian is just the Ricci scalar. The equation of motion obtained upon varying the metric sets the Bach tensor to zero,
$$B_{\mu\nu} = \nabla^\rho \nabla^\sigma C_{\mu\rho\nu\sigma} + \tfrac{1}{2} R^{\rho\sigma} C_{\mu\rho\nu\sigma} = 0,$$
where $R^{\rho\sigma}$ is the Ricci tensor. Conformally flat metrics are solutions of this equation.
Since these theories lead to fourth-order equations for the fluctuations around a fixed background, they are not manifestly unitary. It has therefore been generally believed that they could not be consistently quantized. This is now disputed.
Four-derivative theories
Conformal gravity is an example of a 4-derivative theory. This means that each term in the wave equation can contain up to four derivatives. There are pros and cons of 4-derivative theories. The pros are that the quantized version of the theory is more convergent and renormalisable. The cons are that there may be issues with causality. A simpler example of a 4-derivative wave equation is the scalar 4-derivative wave equation:
$$\nabla^4 \phi = 0.$$
The solution for this in a central field of force is:
$$\phi(r) = c_0 + \frac{m}{r} + a r + b r^2.$$
The first two terms are the same as for a normal wave equation. Because this equation is a simpler approximation to conformal gravity, m corresponds to the mass of the central source. The last two terms are unique to 4-derivative wave equations. It has been suggested that small values be assigned to them to account for the galactic acceleration constant (also known as dark matter) and the dark energy constant. The solution equivalent to the Schwarzschild solution in general relativity for a spherical source for conformal gravity has a metric with:
to show the difference betw
|
https://en.wikipedia.org/wiki/Cortical%20reaction
|
The cortical reaction is a process initiated during fertilization that prevents polyspermy, the fusion of multiple sperm with one egg. In contrast to the fast block of polyspermy which immediately but temporarily blocks additional sperm from fertilizing the egg, the cortical reaction gradually establishes a permanent barrier to sperm entry and functions as the main part of the slow block of polyspermy in many animals.
To create this barrier, cortical granules, specialized secretory vesicles located within the egg's cortex (the region directly below the plasma membrane), are fused with the egg's plasma membrane. This releases the contents of the cortical granules outside the cell, where they modify an existing extracellular matrix to make it impenetrable to sperm entry. The cortical granules contain proteases that clip perivitelline tether proteins, peroxidases that harden the vitelline envelope, and glycosaminoglycans that attract water into the perivitelline space, causing it to expand and form the hyaline layer. The trigger for the cortical granules to exocytose is the release of calcium ions from cortical smooth endoplasmic reticulum in response to sperm binding to the egg.
In most animals, the extracellular matrix present around the egg is the vitelline envelope which becomes the fertilization membrane following the cortical reaction. In mammals, however, the extracellular matrix modified by the cortical reaction is the zona pellucida. This modification of the zona pellucida is known as the zona reaction. Although highly conserved across the animal kingdom, the cortical reaction shows great diversity between species. While much has been learned about the identity and function of the contents of the cortical granules in the highly accessible sea urchin, little is known about the contents of cortical granules in mammals.
The cortical reaction within the egg is analogous to the acrosomal reaction within the sperm, where the acrosome, a specialized secretory ves
|
https://en.wikipedia.org/wiki/Doctest
|
doctest is a module included in the Python programming language's standard library that allows the easy generation of tests based on output from the standard Python interpreter shell, cut and pasted into docstrings.
Implementation specifics
Doctest makes innovative use of the following Python capabilities:
docstrings
The Python interactive shell (both command line and the included idle application)
Python introspection
When using the Python shell, the primary prompt: >>> , is followed by new commands. The secondary prompt: ... , is used when continuing commands on multiple lines; and the result of executing the command is expected on following lines.
A blank line, or another line starting with the primary prompt is seen as the end of the output from the command.
The doctest module looks for such sequences of prompts in a docstring, re-executes the extracted command and checks the output against the output of the command given in the docstrings test example.
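A minimal, hypothetical module illustrates the idea; the docstring below contains a pasted interactive session that doctest re-runs and compares against the recorded output:

def add(a, b):
    """Return the sum of a and b.

    >>> add(2, 3)
    5
    >>> add('x', 'y')
    'xy'
    """
    return a + b

if __name__ == "__main__":
    import doctest
    doctest.testmod()   # silent if every example passes; pass verbose=True to list each test

Running the file directly, or invoking python -m doctest on it, executes the examples embedded in the docstring and reports any mismatches.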
The default action when running doctests is for no output to be shown when tests pass. This can be modified by options to the doctest runner. In addition, doctest has been integrated with the Python unit test module, allowing doctests to be run as standard unittest test cases. Unittest test case runners allow more options when running tests, such as reporting test statistics (e.g., the number of tests passed and failed).
Literate programming and doctests
Although doctest does not allow a Python program to be embedded in narrative text, it does allow for verifiable examples to be embedded in docstrings, where the docstrings can contain other text. Docstrings can in turn be extracted from program files to generate documentation in other formats such as HTML or PDF.
A program file can be made to contain the documentation, tests, as well as the code and the tests easily verified against the code. This allows code, tests, and documentation to evolve together.
Documenting libraries by example
Doctests are well suite
|
https://en.wikipedia.org/wiki/Affine%20hull
|
In mathematics, the affine hull or affine span of a set S in Euclidean space Rn is the smallest affine set containing S, or equivalently, the intersection of all affine sets containing S. Here, an affine set may be defined as the translation of a vector subspace.
The affine hull aff(S) of S is the set of all affine combinations of elements of S, that is,
$$\operatorname{aff}(S) = \left\{ \sum_{i=1}^{k} \alpha_i x_i \;\middle|\; k > 0,\ x_i \in S,\ \alpha_i \in \mathbb{R},\ \sum_{i=1}^{k} \alpha_i = 1 \right\}.$$
Examples
The affine hull of the empty set is the empty set.
The affine hull of a singleton (a set made of one single element) is the singleton itself.
The affine hull of a set of two different points is the line through them.
The affine hull of a set of three points not on one line is the plane going through them.
The affine hull of a set of four points not in a plane in R3 is the entire space R3.
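These examples can be checked numerically: the dimension of the affine hull of a finite point set equals the rank of the matrix of differences from one chosen base point. A small NumPy sketch (illustrative only; treating the empty set as having dimension −1 is a convention chosen here):

import numpy as np

def affine_hull_dim(points):
    """Dimension of the affine hull of a finite set of points in R^n."""
    pts = np.asarray(points, dtype=float)
    if pts.size == 0:
        return -1                      # convention used here for the empty set
    diffs = pts[1:] - pts[0]           # direction vectors from a chosen base point
    if diffs.size == 0:
        return 0                       # a single point has a 0-dimensional hull
    return int(np.linalg.matrix_rank(diffs))

print(affine_hull_dim([[1, 2, 3]]))                                   # 0: a single point
print(affine_hull_dim([[0, 0, 0], [1, 1, 1]]))                        # 1: a line
print(affine_hull_dim([[0, 0, 0], [1, 0, 0], [0, 1, 0]]))             # 2: a plane
print(affine_hull_dim([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]]))  # 3: all of R^3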
Properties
For any subsets
is a closed set if is finite dimensional.
If then .
If then is a linear subspace of .
.
So in particular, is always a vector subspace of .
If is convex then
For every , where is the smallest cone containing (here, a set is a cone if for all and all non-negative ).
Hence is always a linear subspace of parallel to .
Related sets
If instead of an affine combination one uses a convex combination, that is, one requires in the formula above that all $\alpha_i$ be non-negative, one obtains the convex hull of S, which cannot be larger than the affine hull of S, as more restrictions are involved.
The notion of conical combination gives rise to the notion of the conical hull
If, however, one puts no restrictions at all on the numbers $\alpha_i$, instead of an affine combination one has a linear combination, and the resulting set is the linear span of S, which contains the affine hull of S.
References
Sources
R.J. Webster, Convexity, Oxford University Press, 1994. .
Affine geometry
Closure operators
|
https://en.wikipedia.org/wiki/Core-based%20trees
|
Core-based trees (CBT) is a proposal for making IP Multicast scalable by constructing a tree of routers. It was first proposed in a paper by Ballardie, Francis, and Crowcroft. What differentiates it from other schemes for multicasting is that the routing tree comprises multiple "cores" (also known as "centres"). The locations of the core routers are statically configured. Other routers are added by growing "branches" of a tree, comprising a chain of routers, from the core routers out towards the routers directly adjacent to the multicast group members.
References
RFC 2189
Network architecture
|
https://en.wikipedia.org/wiki/RelayNet
|
RelayNet was an e-mail exchange network used by PCBoard bulletin board systems (BBS's). By 1990, RelayNet comprised more than 200 bulletin board systems. BBS's on RelayNet communicated via a communications protocol called RIME (RelayNet International Mail Exchange).
RelayNet was similar to FidoNet in purpose and technology, although it used names for its nodes instead of Fido's numeric address pairs. Because it was limited to PCBoard, it carried a much smaller amount of traffic than Fido. RIME was built up, starting in 1988, from a master hub owned by Bonnie Anthony, a local psychiatrist, in Bethesda, Maryland, and a subordinate hub owned by her brother, Howard Belasco, in The Bronx, New York. Kip Compton, in high school at the time, played an important role in the software's development and evolution. Dr. Anthony died in 2015.
PCBoard, created by Clark Development Corporation (CDC) in Salt Lake City, Utah, was always a "premium" BBS system and fairly expensive. For this reason it was limited mostly to larger multi-line BBS systems, where it was particularly well liked due to its "nice" behaviour on the network when running off a common file server. However this also meant that the PCBoard market generally consisted of a small number of large systems, as opposed to a large number of small ones, hence RIME had usually only a few hundred member boards.
Thus RelayNet, which originally ran only on PCBoard, did not have the same level of infrastructure as FidoNet, and didn't build the sort of global organizational structure that FidoNet needed. Instead, RelayNet evolved as a series of smaller regional networks, including the NANET hosted by Canada Remote Systems, RoseNet hosted by their competitors Rose Media, QuebecNet, FINET, Smartnet, Intelec, ILink, U'NI-net, Friendsnet and others.
RelayNet software later appeared for a variety of other BBS systems, including RBBS, GAP, EIS, QBBS and Wildcat! BBS, but these systems also provided excellent FidoNet support and Rel
|
https://en.wikipedia.org/wiki/Damerau%E2%80%93Levenshtein%20distance
|
In information theory and computer science, the Damerau–Levenshtein distance (named after Frederick J. Damerau and Vladimir I. Levenshtein) is a string metric for measuring the edit distance between two sequences. Informally, the Damerau–Levenshtein distance between two words is the minimum number of operations (consisting of insertions, deletions or substitutions of a single character, or transposition of two adjacent characters) required to change one word into the other.
The Damerau–Levenshtein distance differs from the classical Levenshtein distance by including transpositions among its allowable operations in addition to the three classical single-character edit operations (insertions, deletions and substitutions).
In his seminal paper, Damerau stated that in an investigation of spelling errors for an information-retrieval system, more than 80% were a result of a single error of one of the four types. Damerau's paper considered only misspellings that could be corrected with at most one edit operation. While the original motivation was to measure distance between human misspellings to improve applications such as spell checkers, Damerau–Levenshtein distance has also seen uses in biology to measure the variation between protein sequences.
Definition
To express the Damerau–Levenshtein distance between two strings $a$ and $b$, a function $d_{a,b}(i,j)$ is defined, whose value is a distance between an $i$-symbol prefix (initial substring) of string $a$ and a $j$-symbol prefix of string $b$.
The restricted distance function is defined recursively as:
$$d_{a,b}(i,j) = \min \begin{cases} 0 & \text{if } i = j = 0, \\ d_{a,b}(i-1,j) + 1 & \text{if } i > 0, \\ d_{a,b}(i,j-1) + 1 & \text{if } j > 0, \\ d_{a,b}(i-1,j-1) + 1_{(a_i \neq b_j)} & \text{if } i, j > 0, \\ d_{a,b}(i-2,j-2) + 1 & \text{if } i, j > 1,\ a_i = b_{j-1},\ a_{i-1} = b_j, \end{cases}$$
where $1_{(a_i \neq b_j)}$ is the indicator function equal to 0 when $a_i = b_j$ and equal to 1 otherwise.
Each recursive call matches one of the cases covered by the Damerau–Levenshtein distance:
$d_{a,b}(i-1,j) + 1$ corresponds to a deletion (from a to b),
$d_{a,b}(i,j-1) + 1$ corresponds to an insertion (from a to b),
$d_{a,b}(i-1,j-1) + 1_{(a_i \neq b_j)}$ corresponds to a match or mismatch, depending on whether the respective symbols are the same,
$d_{a,b}(i-2,j-2) + 1$ corresponds to a transposition between two successive symbols.
The Damerau–Levenshtein distance between $a$ and $b$ is then $d_{a,b}(|a|, |b|)$.
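A direct Python transcription of the restricted recurrence above (the "optimal string alignment" variant), offered as a sketch rather than a reference implementation:

def osa_distance(a, b):
    """Restricted Damerau-Levenshtein (optimal string alignment) distance."""
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i                      # i deletions
    for j in range(len(b) + 1):
        d[0][j] = j                      # j insertions
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # match or substitution
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[len(a)][len(b)]

print(osa_distance("CA", "AC"))           # 1: a single transposition
print(osa_distance("kitten", "sitting"))  # 3

The unrestricted Damerau–Levenshtein distance, which also allows edits between the two transposed characters, requires a somewhat more elaborate algorithm.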
|
https://en.wikipedia.org/wiki/Geography%20%28Ptolemy%29
|
The Geography (Geōgraphikḕ Hyphḗgēsis, "Geographical Guidance"), also known by its Latin names as the Geographia and the Cosmographia, is a gazetteer, an atlas, and a treatise on cartography, compiling the geographical knowledge of the 2nd-century Roman Empire. Originally written by Claudius Ptolemy in Greek at Alexandria around AD 150, the work was a revision of a now-lost atlas by Marinus of Tyre using additional Roman and Persian gazetteers and new principles. Its translation into Arabic in the 9th century was highly influential on the geographical knowledge and cartographic traditions of the Islamic world. Alongside the works of Islamic scholars, and the commentary containing revised and more accurate data by Alfraganus, Ptolemy's work was subsequently highly influential on Medieval and Renaissance Europe.
Manuscripts
Versions of Ptolemy's work in antiquity were probably proper atlases with attached maps, although some scholars believe that the references to maps in the text were later additions.
No Greek manuscript of the Geography survives from earlier than the 13th century. A letter written by the Byzantine monk Maximus Planudes records that he searched for one for Chora Monastery in the summer of 1295; one of the earliest surviving texts may have been one of those he then assembled. In Europe, maps were sometimes redrawn using the coordinates provided by the text, as Planudes was forced to do. Later scribes and publishers could then copy these new maps, as Athanasius did for the emperor Andronicus II Palaeologus. The three earliest surviving texts with maps are those from Constantinople (Istanbul) based on Planudes's work.
The first Latin translation of these texts was made in 1406 or 1407 by Jacobus Angelus in Florence, Italy, under the name . It is not thought that his edition had maps, although Manuel Chrysoloras had given Palla Strozzi a Greek copy of Planudes's maps in Florence in 1397.
Contents
The Geography consists of three sections, divided among 8 books. Book
|
https://en.wikipedia.org/wiki/Secure%20Hypertext%20Transfer%20Protocol
|
Secure Hypertext Transfer Protocol (S-HTTP) is an obsolete alternative to the HTTPS protocol for encrypting web communications carried over the Internet. It was developed by Eric Rescorla and Allan M. Schiffman at EIT in 1994 and published in 1999 as RFC 2660.
Even though S-HTTP was first to market, Netscape's dominance of the browser market led to HTTPS becoming the de facto method for securing web communications.
Comparison to HTTP over TLS (HTTPS)
S-HTTP encrypts only the served page data and submitted data like POST fields, leaving the initiation of the protocol unchanged. Because of this, S-HTTP could be used concurrently with HTTP (unsecured) on the same port, as the unencrypted header would determine whether the rest of the transmission is encrypted.
In contrast, HTTP over TLS wraps the entire communication within Transport Layer Security (TLS; formerly SSL), so the encryption starts before any protocol data is sent. This creates a name-based virtual hosting "chicken and egg" issue with determining which DNS name was intended for the request.
This means that HTTPS implementations without Server Name Indication (SNI) support require a separate IP address per DNS name, and all HTTPS implementations require a separate port (usually 443 vs. HTTP's standard 80) for unambiguous use of encryption (treated in most browsers as a separate URI scheme, https://).
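As a brief illustration of SNI, the following Python sketch (standard-library ssl module; the host name is an arbitrary example and the request is deliberately minimal) sends the intended DNS name in the TLS ClientHello, before any HTTP data is exchanged:

import socket
import ssl

host = "example.org"                       # arbitrary example host
context = ssl.create_default_context()

with socket.create_connection((host, 443)) as raw_sock:
    # server_hostname is carried in the ClientHello as the SNI extension,
    # so the server can pick the right certificate before HTTP starts.
    with context.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
        print("negotiated:", tls_sock.version())
        request = f"GET / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
        tls_sock.sendall(request.encode())
        print(tls_sock.recv(200).decode(errors="replace"))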
As documented in RFC 2817, HTTP can also be secured by implementing HTTP/1.1 Upgrade headers and upgrading to TLS. Running HTTP over TLS negotiated in this way does not have the implications of HTTPS with regards to name-based virtual hosting (no extra IP addresses, ports, or URI space). However, few implementations support this method.
In S-HTTP, the desired URL is not transmitted in the cleartext headers, but left blank; another set of headers is present inside the encrypted payload. In HTTP over TLS, all headers are inside the encrypted payload and the server application does not generally have the opportunity
|
https://en.wikipedia.org/wiki/PC%C2%B2
|
PC² is the Programming Contest Control System developed at California State University, Sacramento in support of Computer Programming Contest activities of the ACM, and in particular the ACM International Collegiate Programming Contest. It was used to conduct the ACM ICPC World Finals in 1990 and from 1994 through 2009. In 2010, the ACM ICPC World Finals switched to using Kattis, the KTH automated teaching tool; however, PC2 continues to be used for a large number of ICPC Regional Contests around the world.
Computer programming contests and PC²
Computer programming contests have rules and methods for judging submissions. The following describes in a general way a contest where PC2 is used.
A computer programming contest is a competition where teams submit (computer program) solutions to judges. The teams
are given a set of problems to solve in a limited amount of time (for example 8-13 problems in 5 hours).
The judges then give pass/fail judgements to the submitted solutions. Team rankings are computed based on the solutions, when the solutions were submitted, and how many attempts were made to solve the problem. The judges test submissions in a black-box fashion, where the teams do not have access to the judges' test data.
PC2 manages single or multi-site programming contests. It provides a team a way to log in, test solutions, submit solutions and view judgements from judges. PC2 provides judges a way to request team solutions (from a PC2 server) run/execute the solution and enter a judgment. The PC2 scoreboard module computes and creates standings and statistics web pages (HTML/XML). PC2 is easy to install on Linux/Linux-like systems and MS Windows and does not require super-user (root) access to install it or use it: this makes it an attractive choice for users who may not have super-user access.
Usage and User Experiences
PC2 was used for the ACM International Collegiate Programming Contest World Finals from 1994 to 2009. It has also been used in hundreds of ICPC Regio
|
https://en.wikipedia.org/wiki/George%20Laurer
|
George Joseph Laurer III (September 23, 1925 – December 5, 2019) was an American engineer for IBM at Research Triangle Park in North Carolina. He published 20 bulletins, held 28 patents and developed the Universal Product Code (UPC) in the early 1970s. He devised the coding and pattern used for the UPC, based on Joe Woodland's more general idea for barcodes.
Early life
George Laurer was born on September 23, 1925, in New York City. His family moved to Baltimore, Maryland, so his father, an electrical engineer, could work for the United States Navy. Laurer recovered from polio, which he contracted as a teenager; nonetheless, while in 11th grade, he was drafted into the U.S. Army during World War II. After being discharged from the military, he attended technical school where he studied radio and television repair. Upon completion of his first year at the technical school, his instructor convinced him that he should not continue that course of study, but that he should go to college. Laurer graduated from the A. James Clark School of Engineering at the University of Maryland in 1951. He was still interested in radio and kept up his amateur radio licence.
Career
Laurer was a 36-year employee of IBM until his retirement in June 1987. He joined IBM in 1951 as a junior engineer. By 1969, he had been promoted to senior engineer / scientist and moved to the company's offices in Research Triangle Park in North Carolina.
At IBM, Laurer was assigned the task of developing barcodes for use in grocery stores. Initially, IBM envisioned a circular bullseye pattern as proposed by Joe Woodland in the 1940s. Laurer realized that the pattern was ineffective because of smearing during printing. Instead, he designed a vertical pattern of stripes, which he proposed to his superior in 1971 or 1972. This change was accepted by IBM management and Laurer then worked with Woodland and mathematician David Savir to develop and refine the details. These included the addition of a check digit to p
|
https://en.wikipedia.org/wiki/Air%20track
|
An air track is a scientific device used to study motion in a low friction environment. Its name comes from its structure: air is pumped through a hollow track with fine holes all along the track that allows specially fitted air track cars to glide relatively friction-free. Air tracks are usually triangular in cross-section. Carts which have a triangular base and fit neatly on to the top of the track are used to study motion in low friction environments.
The air track is also used to study collisions, both elastic and inelastic. Since there is very little energy lost through friction it is easy to demonstrate how momentum is conserved before and after a collision. The track can be used to calculate the force of gravity when placed at an angle.
It was invented in the mid-1960s at the California Institute of Technology by Prof Nehr and Leighton. It was first presented by them at a meeting of the American Physical Society in NYC in 1965(?) where it was viewed by Prof John Stull, Alfred University, and Frank Ferguson, the Ealing Corporation. The original track was about 1 meter long with tiny air orifices and highly compressed air. Stull returned to Alfred University, where he developed a simple version using standard square aluminum tubing with large air orifices and air from the vent of a shop vacuum cleaner. With Ferguson's help at Ealing, Stull designed a custom aluminum track that Ealing offered commercially in various lengths up to 10 meters. T. Walley Williams III at Ealing extended the concept to the 2-dimensional air table in 1969.
References
Measuring instruments
|
https://en.wikipedia.org/wiki/Pneumatic%20actuator
|
A pneumatic control valve actuator converts energy (typically in the form of compressed air) into mechanical motion. The motion can be rotary or linear, depending on the type of actuator.
Principle of operation
A pneumatic actuator mainly consists of a piston or a diaphragm which develops the motive power. It keeps the air in the upper portion of the cylinder, allowing air pressure to force the diaphragm or piston to move the valve stem or rotate the valve control element.
Valves require little pressure to operate and usually double or triple the input force. The larger the piston, the larger the output force can be. A larger piston can also help if the air supply is low, allowing the same forces with less input. These pressures are large enough to crush objects in the pipe: with a 100 kPa input, even a basic, small pneumatic valve could develop enough force to lift a small car (upwards of 1,000 lb). However, the resulting forces required of the stem would be too great and cause the valve stem to fail.
This pressure is transferred to the valve stem, which is connected to either the valve plug (see plug valve), butterfly valve etc. Larger forces are required in high pressure or high flow pipelines to allow the valve to overcome these forces, and allow it to move the valves moving parts to control the material flowing inside.
The valve's input is the "control signal." This can come from a variety of measuring devices, and each different pressure is a different set point for a valve. A typical standard signal is 20–100 kPa. For example, a valve could be controlling the pressure in a vessel that has a constant out-flow, and a varied in-flow (varied by the actuator and valve). A pressure transmitter will monitor the pressure in the vessel and transmit a signal from 20–100 kPa. 20 kPa means there is no pressure, 100 kPa means there is full range pressure (can be varied by the transmitter's calibration points). As the pressure rises in
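As a small illustration of this signal scaling, a hypothetical helper (the function name and the clamping behaviour are illustrative choices, not part of any standard) converts a 20–100 kPa control signal into a percentage of full range:

def signal_to_percent(kpa, low=20.0, high=100.0):
    """Convert a 20-100 kPa control signal to a 0-100 % value (clamped)."""
    pct = (kpa - low) / (high - low) * 100.0
    return max(0.0, min(100.0, pct))

for p in (20, 60, 100):
    print(p, "kPa ->", signal_to_percent(p), "%")   # 0 %, 50 %, 100 %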
|
https://en.wikipedia.org/wiki/Hand%20coding
|
In computing, hand coding means editing the underlying representation of a document or a computer program, when tools that allow working on a higher level representation also exist. Typically this means editing the source code, or the textual representation of a document or program, instead of using a WYSIWYG editor that always displays an approximation of the final product. It may mean translating the whole or parts of the source code into machine language manually instead of using a compiler or an automatic translator.
Most commonly, it refers to directly writing HTML documents for the web (rather than in a specialized editor), or to writing a program or portion of a program in assembly language (more rarely raw machine code) rather than in a higher level language. It can also include other markup languages, such as wikitext.
Purpose
The reasons to use hand coding include the ability to:
Use features or refinements not supported by the graphical editor or compiler
Control the semantics of a document beyond that allowed by the graphical editor
Produce more elegant source code to help maintenance and integration
Produce better performing machine code than that produced by the compiler (see optimization)
Avoid having to pay for expensive WYSIWYG editors; note, however, that there are some open-source editors available on the web.
Develop an understanding of the methods underlying a common level of abstraction. For example, although it has become rare in real-life scenarios, computer science students may be required to write a program in an assembly language to get a notion of processor registers and other basal elements of computer architecture.
Escape abstractions and templated code. Hand coding allows more refined control of code, which may improve efficiency, or add functionality that is otherwise unavailable.
Hand coding may require more expertise and time than using automatic tools.
Hand code
Hand code is source code which does not have tools that c
|
https://en.wikipedia.org/wiki/Ion%20Barbu
|
Ion Barbu (pen name of Dan Barbilian; 18 March 1895 – 11 August 1961) was a Romanian mathematician and poet. His name is associated with the Mathematics Subject Classification number 51C05, which is a major posthumous recognition reserved only for pioneers of investigations in an area of mathematical inquiry.
Early life
Born in Câmpulung-Muscel, Argeș County, he was the son of Constantin Barbilian and Smaranda, born Șoiculescu. He attended elementary school in Câmpulung, Dămienești, and Stâlpeni, and for secondary studies he went to the Ion Brătianu High School in Pitești, the Dinicu Golescu High School in Câmpulung, and finally the Gheorghe Lazăr High School and the Mihai Viteazul High School in Bucharest. During that time, he discovered that he had a talent for mathematics, and started publishing in Gazeta Matematică; it was also then that he discovered his passion for poetry. Barbu was known as "one of the greatest Romanian poets of the twentieth century and perhaps the greatest of all" according to Romanian literary critic Alexandru Ciorănescu. As a poet, he is known for his volume Joc secund ("Mirrored Play").
He was a student at the University of Bucharest when World War I caused his studies to be interrupted by military service. He completed his degree in 1921. He then went to the University of Göttingen to study number theory with Edmund Landau for two years. Returning to Bucharest, he studied with Gheorghe Țițeica, completing in 1929 his thesis, Canonical representation of the addition of hyperelliptic functions.
Achievements in mathematics
Apollonian metric
In 1934, Barbilian published his article describing metrization of a region K, the interior of a simple closed curve J. Let xy denote the Euclidean distance from x to y. Barbilian's function for the distance from a to b in K is
At the University of Missouri in 1938 Leonard Blumenthal wrote Distance Geometry. A Study of the Development of Abstract Metrics, where he used the term "Barbilian spaces" f
|
https://en.wikipedia.org/wiki/Anton%20Davidoglu
|
Anton Davidoglu (June 30, 1876–May 27, 1958) was a Romanian mathematician who specialized in differential equations.
He was born in 1876 in Bârlad, Vaslui County, the son of Profira Moțoc and Doctor Cleante Davidoglu. His older brother was General Cleante Davidoglu.
He studied under Jacques Hadamard at the École Normale Supérieure in Paris, defending his Ph.D. dissertation in 1900. His thesis — the first mathematical investigation of deformable solids — applied Émile Picard's method of successive approximations to the study of fourth-order differential equations that model transverse vibrations of non-homogeneous elastic bars.
After returning to Romania, Davidoglu became a professor at the University of Bucharest. In 1913, he was founding rector of the
Academy of High Commercial and Industrial Studies in Bucharest. He also continued to teach at the University of Bucharest, until his retirement in 1941.
Davidoglu was a founding member of the Romanian Academy of Sciences, and was featured on a 1976 Romanian postage stamp. He died in 1958 in Bucharest.
Publications
References
1876 births
1958 deaths
People from Bârlad
Romanian expatriates in France
École Normale Supérieure alumni
Academic staff of the University of Bucharest
20th-century Romanian mathematicians
Mathematical analysts
Academic staff of the Bucharest Academy of Economic Studies
Members of the Romanian Academy of Sciences
|
https://en.wikipedia.org/wiki/Dimitrie%20Pompeiu
|
Dimitrie D. Pompeiu (1873 – 8 October 1954) was a Romanian mathematician, professor at the University of Bucharest, titular member of the Romanian Academy, and President of the Chamber of Deputies.
Biography
He was born in 1873 in Broscăuți, Botoșani County, in a family of well-to-do peasants. After completing high school in nearby Dorohoi, he went to study at the Normal Teachers School in Bucharest, where he had Alexandru Odobescu as a teacher. After obtaining his diploma in 1893, he taught for five years at schools in Galați and Ploiești. In 1898 he went to France, where he studied mathematics at the University of Paris (the Sorbonne). He obtained his Ph.D. degree in mathematics in 1905, with thesis On the continuity of complex variable functions written under the direction of Henri Poincaré.
After returning to Romania, Pompeiu was named Professor of Mechanics at the University of Iași. In 1912, he assumed a chair at the University of Bucharest. In the early 1930s he was elected to the Chamber of Deputies as a member of Nicolae Iorga's Democratic Nationalist Party, and served as President of the Chamber of Deputies for a year. In 1934, Pompeiu was elected titular member of the Romanian Academy, while in 1943 he was elected to the Romanian Academy of Sciences. In 1945, he became the founding director of the Institute of Mathematics of the Romanian Academy.
He died in Bucharest in 1954. A boulevard in the Pipera neighborhood of the city is named after him, and so is a school in his hometown of Broscăuți.
Research
Pompeiu's contributions were mainly in the field of mathematical analysis, complex functions theory, and rational mechanics. In an article published in 1929, he posed a challenging conjecture in integral geometry, now widely known as the Pompeiu problem. Among his contributions to real analysis there is the construction, dated 1906, of non-constant, everywhere differentiable functions, with derivative vanishing on a dense set. Such derivatives are now ca
|
https://en.wikipedia.org/wiki/Fairness%20measure
|
Fairness measures or metrics are used in network engineering to determine whether users or applications are receiving a fair share of system resources. There are several mathematical and conceptual definitions of fairness.
Transmission Control Protocol fairness
Congestion control mechanisms for new network transmission protocols or peer-to-peer applications must interact well with Transmission Control Protocol (TCP). TCP fairness requires that a new protocol receive no larger a share of the network than a comparable TCP flow. This is important as TCP is the dominant transport protocol on the Internet, and if new protocols acquire unfair capacity they tend to cause problems such as congestion collapse. This was the case with the first versions of RealMedia's streaming protocol: it was based on UDP and was widely blocked at organizational firewalls until a TCP-based version was developed. TCP throughput unfairness over WiFi is a critical problem and needs further investigation.
Jain's fairness index
Raj Jain's equation,
$$\mathcal{J}(x_1, x_2, \ldots, x_n) = \frac{\left( \sum_{i=1}^{n} x_i \right)^2}{n \cdot \sum_{i=1}^{n} x_i^2} = \frac{1}{1 + \hat{c}_v^2},$$
rates the fairness of a set of values where there are $n$ users, $x_i$ is the throughput for the $i$th connection, and $\hat{c}_v$ is the sample coefficient of variation. The result ranges from $\tfrac{1}{n}$ (worst case) to 1 (best case), and it is maximum when all users receive the same allocation. This index is $\tfrac{k}{n}$ when $k$ users equally share the resource, and the other $n - k$ users receive zero allocation.
This metric identifies underutilized channels and is not unduly sensitive to atypical network flow patterns.
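A minimal sketch of the index as defined above (the function name is chosen for illustration):

def jain_index(throughputs):
    """Jain's fairness index: (sum x)^2 / (n * sum x^2); 1.0 means perfectly fair."""
    n = len(throughputs)
    total = sum(throughputs)
    return total * total / (n * sum(x * x for x in throughputs))

print(jain_index([10, 10, 10, 10]))   # 1.0  - every user gets the same share
print(jain_index([10, 10, 10, 0]))    # 0.75 - k/n with k = 3 of n = 4 users served
print(jain_index([40, 0, 0, 0]))      # 0.25 - one user takes everything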
To achieve a given fairness level , one approximate method is to let , where
and A is an arbitrary factor, typically used for normalization. This gives an allocation with a fairness close to F, and the allocation can then be refined to get even closer. Note this also allows for a prioritization of allocation, as the s will be sorted.
An exact method is to let , where solves
.
A simple way to calculate is to use Newton's Method on , which converges consistently and fairly q
|
https://en.wikipedia.org/wiki/Ring%20counter
|
A ring counter is a type of counter composed of flip-flops connected into a shift register, with the output of the last flip-flop fed to the input of the first, making a "circular" or "ring" structure.
There are two types of ring counters:
A straight ring counter, also known as a one-hot counter, connects the output of the last shift register to the first shift register input and circulates a single one (or zero) bit around the ring.
A twisted ring counter, also called switch-tail ring counter, walking ring counter, Johnson counter, or Möbius counter, connects the complement of the output of the last shift register to the input of the first register and circulates a stream of ones followed by zeros around the ring.
Four-bit ring-counter sequences
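A short Python sketch (illustrative; it assumes a single circulating 1 for the straight counter and an all-zeros start for the Johnson counter) generates both four-bit sequences:

def straight_ring(bits=4, steps=None):
    """One-hot ring counter: shift, feeding the last bit back to the first."""
    state = [1] + [0] * (bits - 1)
    for _ in range(steps or bits):
        yield "".join(map(str, state))
        state = [state[-1]] + state[:-1]

def johnson(bits=4, steps=None):
    """Twisted-ring (Johnson) counter: feed back the complement of the last bit."""
    state = [0] * bits
    for _ in range(steps or 2 * bits):
        yield "".join(map(str, state))
        state = [1 - state[-1]] + state[:-1]

print("straight:", list(straight_ring()))  # 4 states: 1000 0100 0010 0001
print("johnson: ", list(johnson()))        # 8 states: 0000 1000 1100 1110 1111 0111 0011 0001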
Properties
Ring counters are often used in hardware design (e.g. ASIC and FPGA design) to create finite-state machines. A binary counter would require an adder circuit which is substantially more complex than a ring counter and has higher propagation delay as the number of bits increases, whereas the propagation delay of a ring counter will be nearly constant regardless of the number of bits in the code.
The straight and twisted forms have different properties, and relative advantages and disadvantages.
A general disadvantage of ring counters is that they are lower density codes than normal binary encodings of state numbers. A binary counter can represent 2N states, where N is the number of bits in the code, whereas a straight ring counter can represent only N states and a Johnson counter can represent only 2N states. This may be an important consideration in hardware implementations where registers are more expensive than combinational logic.
Johnson counters are sometimes favored, because they offer twice as many count states from the same number of shift registers, and because they are able to self-initialize from the all-zeros state, without requiring the first count bit to be injected externally at start
|
https://en.wikipedia.org/wiki/BrookGPU
|
The Brook programming language and its implementation BrookGPU were early and influential attempts to enable general-purpose computing on graphics processing units.
Brook, developed by Stanford University's graphics group, was a compiler and runtime implementation of a stream programming language targeting modern, highly parallel GPUs such as those found on ATI or Nvidia graphics cards.
BrookGPU compiled programs written using the Brook stream programming language, which is a variant of ANSI C. It could target OpenGL v1.3+, DirectX v9+ or AMD's Close to Metal for the computational backend and ran on both Microsoft Windows and Linux. For debugging, BrookGPU could also simulate a virtual graphics card on the CPU.
Status
The last major beta release (v0.4) was in October 2004 but renewed development began and stopped again in November 2007 with a v0.5 beta 1 release.
The new features of v0.5 include a much upgraded and faster OpenGL backend which uses framebuffer objects instead of PBuffers and harmonised the code around standard OpenGL interfaces instead of using proprietary vendor extensions. GLSL support was added which brings all the functionality (complex branching and loops) previously only supported by DirectX 9 to OpenGL. In particular, this means that Brook is now just as capable on Linux as Windows.
Other improvements in the v0.5 series include multi-backend usage whereby different threads can run different Brook programs concurrently (thus maximising use of a multi-GPU setup) and SSE and OpenMP support for the CPU backend (this allows near maximal usage of modern CPUs).
Performance comparison
A like-for-like comparison between desktop CPUs and GPGPUs is problematic because of algorithmic and structural differences.
For example, a 2.66 GHz Intel Core 2 Duo can perform a maximum of 25 GFLOPs (25 billion single-precision floating-point operations per second) if optimally using SSE and streaming memory access so the prefetcher works perfectly. However, traditi
|
https://en.wikipedia.org/wiki/Cyber-security%20regulation
|
A cybersecurity regulation comprises directives that safeguard information technology and computer systems with the purpose of forcing companies and organizations to protect their systems and information from cyberattacks like viruses, worms, Trojan horses, phishing, denial of service (DoS) attacks, unauthorized access (stealing intellectual property or confidential information) and control system attacks. There are numerous measures available to prevent cyberattacks.
Cybersecurity measures include firewalls, anti-virus software, intrusion detection and prevention systems, encryption, and login passwords. There have been attempts to improve cybersecurity through regulation and collaborative efforts between the government and the private sector to encourage voluntary improvements to cybersecurity. Industry regulators, including banking regulators, have taken notice of the risk from cybersecurity and have either begun or planned to begin to include cybersecurity as an aspect of regulatory examinations.
Recent research suggests there is also a lack of cyber-security regulation and enforcement in maritime businesses, including the digital connectivity between ships and ports.
Background
In 2011 the DoD released a guidance called the Department of Defense Strategy for Operating in Cyberspace which articulated five goals: to treat cyberspace as an operational domain, to employ new defensive concepts to protect DoD networks and systems, to partner with other agencies and the private sector in pursuit of a "whole-of-government cybersecurity Strategy", to work with international allies in support of collective cybersecurity and to support the development of a cyber workforce capable of rapid technological innovation. A March 2011 GAO report "identified protecting the federal government's information systems and the nation's cyber critical infrastructure as a governmentwide high-risk area" noting that federal information security had been designated a high-risk area since
|
https://en.wikipedia.org/wiki/Traian%20Lalescu
|
Traian Lalescu (12 July 1882 – 15 June 1929) was a Romanian mathematician. His main focus was on integral equations and he contributed to work in the areas of functional equations, trigonometric series, mathematical physics, geometry, mechanics, algebra, and the history of mathematics.
Life
He went to the Carol I High School in Craiova, continuing high school in Roman, and graduating from the Boarding High School in Iași. After entering the University of Iași, he completed his undergraduate studies in 1903 at the University of Bucharest.
He earned his Ph.D. in Mathematics from the University of Paris in 1908. His dissertation, Sur les équations de Volterra, was written under the direction of Émile Picard. In 1911, he published Introduction to the Theory of Integral Equations, the first book ever on the subject of integral equations.
After returning to Romania in 1909, he first taught Mathematics at the Ion Maiorescu Gymnasium in Giurgiu. From 1909 to 1910, he was a teaching assistant at the School of Bridges and Highways, in the department of graphic statics.
He was a professor at the University of Bucharest, the Polytechnic University of Timișoara (where he was the first rector, in 1920), and the Polytechnic University of Bucharest.
The Lalescu sequence
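The sequence usually meant by this name, stated here from standard mathematical references rather than from the article text, is
$$L_n = \sqrt[n+1]{(n+1)!} - \sqrt[n]{n!}, \qquad \lim_{n \to \infty} L_n = \frac{1}{e}.$$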
Legacy
There are several institutions bearing his name, including Colegiul Naţional de Informatică Traian Lalescu in Hunedoara and Liceul Teoretic Traian Lalescu in Reşiţa. There is also a Traian Lalescu Street in Timişoara.
The National Mathematics Contest Traian Lalescu for undergraduate students is also named after him.
A statue of Lalescu, carved in 1930 by Cornel Medrea, is situated in front of the Faculty of Mechanical Engineering, in Timişoara and another statue of Lalescu is situated inside the University of Bucharest.
Work
T. Lalesco, Introduction à la théorie des équations intégrales. Avec une préface de É. Picard, Paris: A. Hermann et Fils, 1912. VII + 152 pp. JFM entry
Traian Lalescu,
|
https://en.wikipedia.org/wiki/Branching%20%28version%20control%29
|
Branching, in version control and software configuration management, is the duplication of an object under version control (such as a source code file or a directory tree). Each object can thereafter be modified separately and in parallel so that the objects become different. In this context the objects are called branches. The users of the version control system can branch any branch.
Branches are also known as trees, streams or codelines. The originating branch is sometimes called the parent branch, the upstream branch (or simply upstream, especially if the branches are maintained by different organizations or individuals), or the backing stream.
Child branches are branches that have a parent; a branch without a parent is referred to as the trunk or the mainline. The trunk is also sometimes loosely referred to as HEAD, but properly head refers not to a branch, but to the most recent commit on a given branch, and both the trunk and each named branch have their own heads. The trunk is usually meant to be the base of a project on which development progresses. If developers are working exclusively on the trunk, it always contains the latest cutting-edge version of the project, but therefore may also be the most unstable version. Another approach is to split a branch off the trunk, implement changes in that branch and merge the changes back into the trunk when the branch has proven to be stable and working. Depending on development mode and commit policy the trunk may contain the most stable or the least stable or something-in-between version. Other terms for trunk include baseline, mainline, and master, though in some cases these are used with similar but distinct senses. Often main developer work takes place in the trunk and stable versions are branched, and occasional bug-fixes are merged from branches to the trunk. When development of future versions is done in non-trunk branches, it is usually done for projects that do not change often, or where a change is
|
https://en.wikipedia.org/wiki/Topkis%27s%20theorem
|
In mathematical economics, Topkis's theorem is a result that is useful for establishing comparative statics. The theorem allows researchers to understand how the optimal value for a choice variable changes when a feature of the environment changes. The result states that if f is supermodular in (x,θ), and D is a lattice, then x*(θ) = arg max_{x ∈ D} f(x,θ) is nondecreasing in θ. The result is especially helpful for establishing comparative static results when the objective function is not differentiable. The result is named after Donald M. Topkis.
An example
This example will show how using Topkis's theorem gives the same result as using more standard tools. The advantage of using Topkis's theorem is that it can be applied to a wider class of problems than can be studied with standard economics tools.
A driver is driving down a highway and must choose a speed, s. Going faster is desirable, but is more likely to result in a crash. There is some prevalence of potholes, p. The presence of potholes increases the probability of crashing. Note that s is a choice variable and p is a parameter of the environment that is fixed from the perspective of the driver. The driver seeks to choose s to maximize U(s, p).
We would like to understand how the driver's speed (a choice variable) changes with the amount of potholes, that is, the sign of ∂s*(p)/∂p.
If one wanted to solve the problem with standard tools such as the implicit function theorem, one would have to assume that the problem is well behaved: U(.) is twice continuously differentiable, concave in s, that the domain over which s is defined is convex, that there is a unique maximizer s*(p) for every value of p, and that s*(p) is in the interior of the set over which s is defined. Note that the optimal speed is a function of the amount of potholes. Taking the first order condition, we know that at the optimum, U_s(s*(p), p) = 0. Differentiating the first order condition with respect to p and using the implicit function theorem, we find that
U_ss(s*(p), p) ∂s*(p)/∂p + U_sp(s*(p), p) = 0,
or that
∂s*(p)/∂p = −U_sp(s*(p), p) / U_ss(s*(p), p).
So, since concavity in s gives U_ss(s*(p), p) < 0, the sign of ∂s*(p)/∂p matches the sign of U_sp(s*(p), p).
If s and p are substitutes, U_sp(s*(p), p) < 0,
and hence ∂s*(p)/∂p < 0,
and more
|
https://en.wikipedia.org/wiki/IEEE%201355
|
IEEE Standard 1355-1995, IEC 14575, or ISO 14575 is a data communications standard for Heterogeneous Interconnect (HIC).
IEC 14575 is a low-cost, low latency, scalable serial interconnection system, originally intended for communication between large numbers of inexpensive computers.
IEC 14575 lacks many of the complexities of other data networks. The standard defined several different types of transmission media (including wires and optic fiber), to address different applications.
Since the high-level network logic is compatible across these media, inexpensive electronic adapters are possible. IEEE 1355 is often used in scientific laboratories. Promoters include large laboratories, such as CERN, and scientific agencies.
For example, the ESA advocates a derivative standard called SpaceWire.
Goals
The protocol was designed for a simple, low cost switched network made of point-to-point links. This network sends variable length data packets reliably at high speed. It routes the packets using wormhole routing. Unlike Token Ring or other types of local area networks (LANs) with comparable specifications, IEEE 1355 scales beyond a thousand nodes without requiring higher transmission speeds. The network is designed to carry traffic from other types of networks, notably Internet Protocol and Asynchronous Transfer Mode (ATM), but does not depend on other protocols for data transfers or switching. In this, it resembles Multiprotocol Label Switching (MPLS).
IEEE 1355 had goals like Futurebus and its derivatives Scalable Coherent Interface (SCI), and InfiniBand. The packet routing system of IEEE 1355 is also similar to VPLS, and uses a packet labeling scheme similar to MPLS.
IEEE 1355 achieves its design goals with relatively simple digital electronics and very little software. This simplicity is valued by many engineers and scientists.
Paul Walker (see links ) said that when implemented in an FPGA, the standard takes about a third the hardware resources of a UART (a standard seria
|
https://en.wikipedia.org/wiki/Annealing%20%28glass%29
|
Annealing is a process of slowly cooling hot glass objects after they have been formed, to relieve residual internal stresses introduced during manufacture. Especially for smaller, simpler objects, annealing may be incidental to the process of manufacture, but in larger or more complex products it commonly demands a special process of annealing in a temperature-controlled kiln known as a lehr. Annealing of glass is critical to its durability. Glass that has not been properly annealed retains thermal stresses caused by quenching, which will indefinitely decrease the strength and reliability of the product. Inadequately annealed glass is likely to crack or shatter when subjected to relatively small temperature changes or to mechanical shock or stress. It even may fail spontaneously.
To anneal glass, it is necessary to heat it to its annealing temperature, at which its viscosity, η, drops to 10^13 Poise (10^13 dyne·second/cm²). For most kinds of glass, this annealing temperature is in the range of 454–482 °C (850–900 °F), and is the so-called stress-relief point or annealing point of the glass. At such a viscosity, the glass is still too hard for significant external deformation without breaking, but it is soft enough to relax internal strains by microscopic flow in response to the intense stresses they introduce internally. The piece then heat-soaks until its temperature is even throughout and the stress relaxation is adequate. The time necessary for this step varies depending on the type of glass and its maximum thickness. The glass then is permitted to cool at a predetermined rate until its temperature passes the strain point (η = 10^14.5 Poise), below which even microscopic internal flow effectively stops and annealing stops with it. It then is safe to cool the product to room temperature at a rate limited by the heat capacity, thickness, thermal conductivity, and thermal expansion coefficient of the glass. After annealing is complete the material can be cut to size
|
https://en.wikipedia.org/wiki/Annealing%20%28materials%20science%29
|
In metallurgy and materials science, annealing is a heat treatment that alters the physical and sometimes chemical properties of a material to increase its ductility and reduce its hardness, making it more workable. It involves heating a material above its recrystallization temperature, maintaining a suitable temperature for an appropriate amount of time and then cooling.
In annealing, atoms migrate in the crystal lattice and the number of dislocations decreases, leading to a change in ductility and hardness. As the material cools it recrystallizes. For many alloys, including carbon steel, the crystal grain size and phase composition, which ultimately determine the material properties, are dependent on the heating rate and cooling rate. Hot working or cold working after the annealing process alters the metal structure, so further heat treatments may be used to achieve the properties required. With knowledge of the composition and phase diagram, heat treatment can be used to adjust from harder and more brittle to softer and more ductile.
In the case of ferrous metals, such as steel, annealing is performed by heating the material (generally until glowing) for a while and then slowly letting it cool to room temperature in still air. Copper, silver and brass can be either cooled slowly in air, or quickly by quenching in water. In this fashion, the metal is softened and prepared for further work such as shaping, stamping, or forming.
Many other materials, including glass and plastic films, use annealing to improve the finished properties.
Thermodynamics
Annealing occurs by the diffusion of atoms within a solid material, so that the material progresses towards its equilibrium state. Heat increases the rate of diffusion by providing the energy needed to break bonds. The movement of atoms has the effect of redistributing and eradicating the dislocations in metals and (to a lesser extent) in ceramics. This alteration to existing dislocations allows a metal object to def
|
https://en.wikipedia.org/wiki/Input%20shaping
|
In control theory, input shaping is an open-loop control technique for reducing vibrations in computer-controlled machines. The method works by creating a command signal that cancels its own vibration. That is, a vibration excited by previous parts of the command signal is cancelled by vibration excited by latter parts of the command. Input shaping is implemented by convolving a sequence of impulses, known as an input shaper, with any arbitrary command. The shaped command that results from the convolution is then used to drive the system. If the impulses in the shaper are chosen correctly, then the shaped command will excite less residual vibration than the unshaped command. The amplitudes and time locations of the impulses are obtained from the system's natural frequencies and damping ratios. Shaping can be made very robust to errors in the system parameters.
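A minimal numerical sketch of the idea, using the classical two-impulse zero-vibration (ZV) shaper; the plant values omega_n and zeta, the sampling step, and the command length below are assumptions made for the example, not values taken from the text.

import numpy as np

# Assumed plant parameters for the example: natural frequency and damping ratio.
omega_n = 2.0 * np.pi * 1.0   # natural frequency, rad/s (1 Hz)
zeta = 0.05                   # damping ratio
dt = 0.001                    # sampling period, s

# Two-impulse zero-vibration (ZV) shaper: amplitudes and times follow from omega_n and zeta.
K = np.exp(-zeta * np.pi / np.sqrt(1.0 - zeta**2))
A1, A2 = 1.0 / (1.0 + K), K / (1.0 + K)           # impulse amplitudes (they sum to 1)
t2 = np.pi / (omega_n * np.sqrt(1.0 - zeta**2))   # second impulse at half the damped period

# Discretize the shaper as an impulse sequence and convolve it with an arbitrary command.
shaper = np.zeros(int(round(t2 / dt)) + 1)
shaper[0], shaper[-1] = A1, A2
command = np.ones(int(2.0 / dt))                  # a unit step command, 2 s long
shaped_command = np.convolve(command, shaper)[:len(command)]

Because the impulse amplitudes sum to one, the shaped command reaches the same final value as the original step; it simply arrives spread over the duration of the shaper.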
References
External links
Input shaping simulator demonstrates the filter principle on a gantry crane control problem.
Control theory
Cybernetics
Dynamics (mechanics)
Mechanical vibrations
|
https://en.wikipedia.org/wiki/Aerospace%20architecture
|
Aerospace architecture is broadly defined to encompass architectural design of non-habitable and habitable structures and living and working environments in aerospace-related facilities, habitats, and vehicles. These environments include, but are not limited to: science platform aircraft and aircraft-deployable systems; space vehicles, space stations, habitats and lunar and planetary surface construction bases; and Earth-based control, experiment, launch, logistics, payload, simulation and test facilities. Earth analogs to space applications may include Antarctic, desert, high altitude, underground, undersea environments and closed ecological systems.
The American Institute of Aeronautics and Astronautics (AIAA) Design Engineering Technical Committee (DETC) meets several times a year to discuss policy, education, standards, and practice issues pertaining to aerospace architecture.
The role of Appearance in Aerospace architecture
"The role of design creates and develops concepts and specifications that seek to simultaneously and synergistically optimize function, production, value and appearance." In connection with, and with respect to, human presence and interactions, appearance is a component of human factors and includes considerations of human characteristics, needs and interests.
Appearance in this context refers to all visual aspects – the statics and dynamics of form(s), color(s), patterns, and textures in respect to all products, systems, services, and experiences. Appearance/esthetics affects humans both psychologically and physiologically and can affect and improve human efficiency, attitude, and well-being.
In reference to non-habitable design the influence of appearance is minimal, if not non-existent. However, as the aerospace industry continues to grow rapidly, and missions to put humans on Mars and back to the Moon are being announced, the role that appearance/esthetics play in maintaining crew well-being and health on multi-month or multi-year missions b
|
https://en.wikipedia.org/wiki/Armstrong%27s%20axioms
|
Armstrong's axioms are a set of axioms (or, more precisely, inference rules) used to infer all the functional dependencies on a relational database. They were developed by William W. Armstrong in his 1974 paper. The axioms are sound in generating only functional dependencies in the closure of a set of functional dependencies (denoted as F⁺) when applied to that set (denoted as F). They are also complete in that repeated application of these rules will generate all functional dependencies in the closure F⁺.
More formally, let R(U) denote a relational scheme over the set of attributes U with a set of functional dependencies F. We say that a functional dependency f is logically implied by F, and denote it with F ⊨ f, if and only if for every instance r of R that satisfies the functional dependencies in F, r also satisfies f. We denote by F⁺ the set of all functional dependencies that are logically implied by F.
Furthermore, with respect to a set of inference rules A, we say that a functional dependency f is derivable from the functional dependencies in F by the set of inference rules A, and we denote it by F ⊢_A f, if and only if f is obtainable by means of repeatedly applying the inference rules in A to functional dependencies in F. We denote by F*_A the set of all functional dependencies that are derivable from F by inference rules in A.
Then, a set of inference rules A is sound if and only if the following holds: F*_A ⊆ F⁺;
that is to say, we cannot derive by means of A functional dependencies that are not logically implied by F.
The set of inference rules A is said to be complete if the following holds: F⁺ ⊆ F*_A;
more simply put, we are able to derive by A all the functional dependencies that are logically implied by F.
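A common way to see the derivability relation in practice is the textbook attribute-closure computation, which repeatedly applies functional dependencies until no new attributes can be added; its correctness rests on the soundness and completeness of the inference rules. The following is an illustrative Python sketch; the function name and the example dependencies are assumptions made for the example, not part of the article.

def attribute_closure(attrs, fds):
    # Closure of a set of attributes under functional dependencies,
    # each dependency given as a (lhs, rhs) pair of frozensets.
    closure = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            # If the left-hand side is already covered, the right-hand side is derivable.
            if lhs <= closure and not rhs <= closure:
                closure |= rhs
                changed = True
    return closure

# Hypothetical example: attributes A, B, C, D with F = {A -> B, B -> C, CD -> A}.
F = [(frozenset("A"), frozenset("B")),
     (frozenset("B"), frozenset("C")),
     (frozenset({"C", "D"}), frozenset("A"))]
print(attribute_closure({"A"}, F))       # {'A', 'B', 'C'}
print(attribute_closure({"C", "D"}, F))  # {'A', 'B', 'C', 'D'}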
Axioms (primary rules)
Let R(U) be a relation scheme over the set of attributes U. Henceforth we will denote by letters X, Y, Z any subset of U and, for short, the union of two sets of attributes X and Y by XY instead of the usual X ∪ Y; this notation is rather standard in database theory when dealing with sets of attributes.
A
|
https://en.wikipedia.org/wiki/Caribbean%20Knowledge%20and%20Learning%20Network
|
The Caribbean Knowledge and Learning Network (CKLN) is an inter-governmental agency of the Caribbean Community, CARICOM, responsible for developing and managing a high capacity, broadband fiber optic network called C@ribNET, connecting all CARICOM member states.
The Caribbean Knowledge and Learning Network Agency was first proposed in 2002 at a meeting where the seven Prime Ministers of Eastern Caribbean States and Barbados met with the president of the World Bank. It was established in 2004 as an institution of CARICOM, under the authority of Article 21 of the Revised Treaty of Chaguaramas.
Academic computer network organizations
|
https://en.wikipedia.org/wiki/Process%20function
|
In thermodynamics, a quantity that is well defined so as to describe the path of a process through the equilibrium state space of a thermodynamic system is termed a process function, or, alternatively, a process quantity, or a path function. As an example, mechanical work and heat are process functions because they describe quantitatively the transition between equilibrium states of a thermodynamic system.
Path functions depend on the path taken to reach one state from another. Different routes give different quantities. Examples of path functions include work, heat and arc length. In contrast to path functions, state functions are independent of the path taken. Thermodynamic state variables are point functions, differing from path functions. For a given state, considered as a point, there is a definite value for each state variable and state function.
Infinitesimal changes in a process function X are often indicated by δX to distinguish them from infinitesimal changes in a state function Y, which is written dY. The quantity dY is an exact differential, while δX is not: it is an inexact differential. Infinitesimal changes in a process function may be integrated, but the integral between two states depends on the particular path taken between the two states, whereas the integral of a state function is simply the difference of the state functions at the two points, independent of the path taken.
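A concrete numerical illustration of this path dependence (the amounts, temperature and volumes below are invented for the example): take one mole of an ideal gas between the same two equilibrium states along two different paths and compare the work done.

import math

R = 8.314               # gas constant, J/(mol*K)
n, T = 1.0, 300.0       # assumed amount of gas and common temperature of both end states
V1, V2 = 0.010, 0.020   # assumed initial and final volumes, m^3

# Path A: reversible isothermal expansion, W = n R T ln(V2/V1).
W_isothermal = n * R * T * math.log(V2 / V1)

# Path B: expand at constant pressure p1 = nRT/V1 to V2, then cool at constant volume
# back to temperature T; work is done only during the first step, W = p1 (V2 - V1).
p1 = n * R * T / V1
W_two_step = p1 * (V2 - V1)

print(round(W_isothermal), round(W_two_step))  # about 1729 J versus about 2494 J

The two paths join the same pair of states, yet the work differs, which is exactly what makes work a path function; a state function such as the internal energy of the ideal gas changes by the same amount (here zero) on both paths.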
In general, a process function X may be either holonomic or non-holonomic. For a holonomic process function, an auxiliary state function (or integrating factor) λ may be defined such that Y, with dY = λ δX, is a state function. For a non-holonomic process function, no such function may be defined. In other words, for a holonomic process function, λ may be defined such that dY = λ δX is an exact differential, while for a non-holonomic process function it cannot. For example, thermodynamic work is a holonomic process function, since the integrating factor λ = 1/p (where p is pressure) yields the exact differential of the volume state function, dV = δW/p. The second law of thermodynamics
|
https://en.wikipedia.org/wiki/Miller%20effect
|
In electronics, the Miller effect accounts for the increase in the equivalent input capacitance of an inverting voltage amplifier due to amplification of the effect of capacitance between the input and output terminals. The virtually increased input capacitance due to the Miller effect is given by
C_M = C (1 + A_v),
where A_v is the voltage gain of the inverting amplifier (A_v positive) and C is the feedback capacitance.
Although the term Miller effect normally refers to capacitance, any impedance connected between the input and another node exhibiting gain can modify the amplifier input impedance via this effect. These properties of the Miller effect are generalized in the Miller theorem. The Miller capacitance due to parasitic capacitance between the output and input of active devices like transistors and vacuum tubes is a major factor limiting their gain at high frequencies. Miller capacitance was identified in 1920 in triode vacuum tubes by John Milton Miller.
History
The Miller effect was named after John Milton Miller. When Miller published his work in 1920, he was working on vacuum tube triodes. The same analysis applies to modern devices such as bipolar junction and field-effect transistors.
Derivation
Consider an ideal inverting voltage amplifier of gain −A_v with an impedance Z connected between its input and output nodes. The output voltage is therefore V_o = −A_v V_i. Assuming that the amplifier input draws no current, all of the input current flows through Z, and is therefore given by
I_i = (V_i − V_o) / Z = V_i (1 + A_v) / Z.
The input impedance of the circuit is
Z_in = V_i / I_i = Z / (1 + A_v).
If Z represents a capacitor with impedance Z = 1/(sC), the resulting input impedance is
Z_in = 1 / (s C_M), where C_M = C (1 + A_v).
Thus the effective or Miller capacitance C_M is the physical C multiplied by the factor (1 + A_v).
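A short numerical illustration (the component values are assumptions for the example, not values from the text): a 2 pF feedback capacitance around a gain of 100 appears at the input as roughly 200 pF, which together with a 10 kΩ source resistance moves the input pole from the megahertz range down to tens of kilohertz.

import math

A_v = 100.0   # assumed magnitude of the inverting voltage gain
C = 2e-12     # assumed feedback capacitance, 2 pF
R_s = 10e3    # assumed source resistance driving the input, 10 kOhm

C_miller = C * (1.0 + A_v)                  # effective input capacitance
f_with = 1.0 / (2.0 * math.pi * R_s * C_miller)
f_without = 1.0 / (2.0 * math.pi * R_s * C)

print(C_miller)             # about 2.02e-10 F (202 pF)
print(f_with, f_without)    # about 7.9e4 Hz versus about 8.0e6 Hz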
Effects
As most amplifiers are inverting (A_v as defined above is positive), the effective capacitance at their inputs is increased due to the Miller effect. This can reduce the bandwidth of the amplifier, restricting its range of operation to lower frequencies. The tiny junction and stray capaci
|
https://en.wikipedia.org/wiki/International%20Prize%20for%20Biology
|
The International Prize for Biology is an annual award for "outstanding contribution to the advancement of research in fundamental biology." The Prize, although it is not always awarded to a biologist, is one of the most prestigious honours a natural scientist can receive. There are no restrictions on the nationality of the recipient.
Past laureates include John B. Gurdon, Motoo Kimura, Edward O. Wilson, Ernst Mayr, Thomas Cavalier-Smith, Yoshinori Ohsumi and many other great biologists in the world.
Information
The International Prize for Biology was created in 1985 to commemorate the 60-year reign of Emperor Shōwa of Japan and his longtime interest in and support of biology. The selection and award of the prize is managed by the Japan Society for the Promotion of Science. The laureate receives a medal and 10 million yen, and an international symposium on the laureate's area of research is held in Tokyo. The prize ceremony is held in the presence of the Emperor of Japan.
The first International Prize for Biology was awarded to E. J. H. Corner, a prominent scientist in the field of systematic biology; the field was chosen because Emperor Shōwa had long been interested in it and had worked in it himself.
Criteria
The Prize is awarded in accordance with the following criteria:
The Prize shall be made by the Committee every year, commencing in 1985.
The Prize shall consist of a medal and a prize of ten million (10,000,000) yen.
There shall be no restrictions on the nationality of the recipient.
The Prize shall be awarded to an individual who, in the judgment of the members of the Committee, has made an outstanding contribution to the advancement of research in fundamental biology.
The specialty within the field of biology for which the Prize will be awarded shall be decided upon annually by the Committee.
The Committee shall be advised on suitable candidates for the Prize by a selection committee, which will consist of Japanese and overseas members.
The selection committee shall invite nominations of can
|
https://en.wikipedia.org/wiki/Codec%20listening%20test
|
A codec listening test is a scientific study designed to compare two or more lossy audio codecs, usually with respect to perceived fidelity or compression efficiency.
Most tests take the form of a double-blind comparison. Commonly used methods are known as "ABX" or "ABC/HR" or "MUSHRA". There are various software packages available for individuals to perform this type of testing themselves with minimal assistance.
Testing methods
ABX test
In an ABX test, the listener has to identify an unknown sample X as being A or B, with A (usually the original) and B (usually the encoded version) available for reference. The outcome of a test must be statistically significant. This setup ensures that the listener is not biased by their expectations, and that the outcome is not likely to be the result of chance. If sample X cannot be determined reliably with a low p-value in a predetermined number of trials, then the null hypothesis cannot be rejected and it cannot be proved that there is a perceptible difference between samples A and B. This usually indicates that the encoded version will actually be transparent to the listener.
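The significance requirement can be made concrete with a simple binomial calculation: under the null hypothesis the listener is guessing, so each trial succeeds with probability 1/2, and the p-value is the chance of doing at least as well by guessing. The sketch below is illustrative; the trial counts are invented for the example.

from math import comb

def abx_p_value(correct, trials):
    # Probability of at least `correct` right answers in `trials` ABX trials by pure guessing.
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2**trials

print(abx_p_value(12, 16))  # about 0.038: below a conventional 0.05 threshold
print(abx_p_value(10, 16))  # about 0.227: guessing cannot be ruled out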
ABC/HR test
In an ABC/HR test, C is the original which is always available for reference. A and B are the original and the encoded version in randomized order. The listener must first distinguish the encoded version from the original (which is the Hidden Reference that the "HR" in ABC/HR stands for), prior to assigning a score as a subjective judgment of the quality. Different encoded versions can be compared against each other using these scores.
MUSHRA
In MUSHRA (MUltiple Stimuli with Hidden Reference and Anchor), the listener is presented with the reference (labeled as such), a certain number of test samples, a hidden version of the reference and one or more anchors. The purpose of the anchor(s) is to make the scale be closer to an "absolute scale", making sure that minor artifacts are not rated as having very bad quality.
Results
M
|
https://en.wikipedia.org/wiki/Switch56
|
Nortel's Switch56 was a networking protocol built on top of the telephone cabling hardware of their Digital Multiplex System and other telephone switches.
The name comes from the fact that Switch56 carried 56 kbit/s of data on its 64 kbit/s lines, as opposed to most systems, including ISDN, where the entire 64 kbit/s bandwidth was available for data. The speed was a side effect of Nortel using a 2-wire cable to carry both voice and switching commands, as opposed to other systems where the command data was carried on a separate set of low-speed lines. Switch56 "folded" the two sources of data into one, placing a single bit from the command channel onto the end of every 7 bits of data, similar to the original T-carrier supervision scheme. This data was split out at the "far end" as 56 kbit/s and 8 kbit/s subchannels.
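The folding can be illustrated with a short sketch: every transmitted octet carries seven payload bits plus one command bit, so a 64 kbit/s line splits into a 56 kbit/s data subchannel and an 8 kbit/s command subchannel. The function below is purely illustrative; its name and framing details are assumptions, not part of the Switch56 specification.

def fold_octets(data_bits, command_bits):
    # Pack 7 data bits followed by 1 command bit into each 8-bit octet.
    octets = []
    for i, cmd in enumerate(command_bits):
        chunk = data_bits[7 * i: 7 * i + 7]
        octets.append(chunk + [cmd])
    return octets

# One second of traffic at an 8000 octet/s line rate: 64,000 bits total,
# of which 56,000 are payload and 8,000 belong to the command channel.
data = [1] * (7 * 8000)
commands = [0] * 8000
frames = fold_octets(data, commands)
print(len(frames) * 8, len(frames) * 7, len(frames))  # 64000 56000 8000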
Switch56 was built on top of the basic Nortel hardware to allow computers to put data into the existing telephony network. Although slow compared to even contemporary systems, Switch56 allowed network traffic to flow not only within an office, like other LAN systems, but between any branch offices that were connected using a Nortel PBX like the Meridian Norstar. This was a much easier option to install than ISDN for most offices, requiring nothing more than a Switch56 bridge to their existing network. For the LAN role, new telephone terminals were produced with an RS-232C port on the back, which were then plugged into the user's computer and used with custom software. Although interesting in theory, it appears Switch56 saw little use in this role.
Network protocols
Telephone exchange equipment
|
https://en.wikipedia.org/wiki/Eicon
|
Eicon Networks Corporation, formerly Eicon Technology Corporation, is a privately owned designer, developer and manufacturer of communication products founded on October 12, 1984 with headquarters in Montreal, Quebec, Canada. Eicon products are sold worldwide through a large network of distributors and resellers, and supplied to OEMs.
In October 2006, Eicon purchased the Media & Signalling Division of Intel, known as Dialogic before its purchase by Intel in 1999, which produces telephony boards for PC servers. The combined Eicon/Dialogic company changed its name to Dialogic Corporation at the time of the purchase. It is now known as Dialogic Inc.
Products
Eicon's products include the Diva Family (Diva Server and Diva Client) and Eiconcard product lines.
Diva Server
Diva Server is a range of telecoms products for voice, speech, conferencing and fax. It supports T1/E1; SS7; ISDN and conventional phone line (PSTN). As of 2008 Eicon Host Media Processing products, "software adapters" that provide VoIP capability for applications, are available.
Diva Server is used in VoiceXML speech servers; SMS gateways; fax and unified messaging and call recording and monitoring.
Diva Client
Diva products are connectivity products for remote access for the home and for remote and mobile workers. They are mostly ISDN or combined ISDN and dialup modems. In the past Eicon produced ADSL and Wi-Fi equipment, but these areas have become dominated by far-eastern manufacturers.
Eiconcard
The Eiconcard connects legacy X.25 systems for tasks such as credit card authorization, SMS, and satellite communications. The Eiconcard has been produced since the company was founded in 1984, and continues to be available.
Eicon cards with their flexible protocol stacks were also used as a flexible communications gateway to IBM's midrange and mainframe computers and for a time occupied a niche market allowing Ethernet based PC networks to utilise IBM's LU6.2 (intelligent) communications rou
|
https://en.wikipedia.org/wiki/Noreen
|
Noreen, or BID 590, was an off-line one-time tape cipher machine of British origin.
Usage
As well as being used by the United Kingdom, Noreen was used by Canada. It was widely used in diplomatic stations. According to the display note on a surviving unit publicly displayed at Bletchley Park in the United Kingdom, the system was predominantly used "by the foreign office in British embassies overseas where the electricity supply was unreliable."
Usage lasted from the mid-1960s through 1990.
Compatibility
It was completely compatible with Rockex.
Power Supply
The units were powered by two batteries of six and twelve volts respectively, though some were known to have been powered by mains.
Other uses of the name "Noreen"
Noreen was the name of a wooden dragger that was acquired by the U.S. Navy during World War II and converted into the minesweeper USS Heath Hen (AMc-6).
Noreen is a common name in the Americas, Ireland, Scotland, and the Middle East. It is also spelt Naureen, Noirin and Nowrin (نورين). In Arabic, the word means "luminous". In Ireland and Scotland, 'Noreen' is the anglicized version of 'Nóirín', which is the diminutive of 'Nora'.
External links
Jerry Proc's page on Noreen
Noreen on Crypto Museum website
Noreen
|
https://en.wikipedia.org/wiki/NetBoot
|
NetBoot was a technology from Apple which enabled Macs with capable firmware (i.e. New World ROM) to boot from a network, rather than a local hard disk or optical disc drive. NetBoot is a derived work from the Bootstrap Protocol (BOOTP), and is similar in concept to the Preboot Execution Environment. The technology was announced as a part of the original version of Mac OS X Server at Macworld Expo on 5 January 1999. NetBoot has continued to be a core systems management technology for Apple, and has been adapted to support modern Mac Intel machines. NetBoot, USB, and FireWire are some of the external volume options for operating system re-install. NetBoot is not supported on newer Macs with T2 security chip or Apple silicon.
Process
A disk image with a copy of macOS, macOS Server, Mac OS 9, or Mac OS 8 is created using System Image Utility and is stored on a server, typically macOS Server. Clients receive this image across a network using many popular protocols, including HTTPS, AFP, TFTP, NFS, and multicast Apple Software Restore (ASR). A server-side NetBoot image can boot entire machines, although NetBoot is more commonly used for operating system and software deployment, somewhat similar to Norton Ghost.
Client machines first request network configuration information through DHCP, then a list of boot images and servers with BSDP and then proceed to download images with protocols mentioned above.
Both Intel and PowerPC-based servers can serve images for Intel and PowerPC-based clients.
NetInstall
NetInstall is a similar feature of macOS Server which utilizes NetBoot and ASR to deliver installation images to network clients (typically on first boot). Like NetBoot, NetInstall images can be created using the System Image Utility. NetInstall performs a function for macOS similar to Windows Deployment Services for Microsoft clients, which depend on the Preboot Execution Environment.
Legacy
Mac OS 8.5 and Mac OS 9 use only BOOTP/DHCP to get IP information, followed
|
https://en.wikipedia.org/wiki/Calculus%20of%20voting
|
Calculus of voting refers to any mathematical model which predicts voting behaviour by an electorate, including such features as participation rate. A calculus of voting represents a hypothesized decision-making process.
These models are used in political science in an attempt to capture the relative importance of various factors influencing an elector to vote (or not vote) in a particular way.
Example
One such model was proposed by Anthony Downs (1957) and is adapted by William H. Riker and Peter Ordeshook, in “A Theory of the Calculus of Voting” (Riker and Ordeshook 1968)
V = pB − C + D
where
V = the proxy for the probability that the voter will turn out
p = probability of vote “mattering”
B = “utility” benefit of voting--differential benefit of one candidate winning over the other
C = costs of voting (time/effort spent)
D = citizen duty, goodwill feeling, psychological and civic benefit of voting (this term is not included in Downs's original model)
It is a political science model based on rational choice, used to explain why citizens do or do not vote.
The alternative equation is
V = pB + D > C
where, for voting to occur, the (P)robability that the vote will matter, multiplied by the (B)enefit of one candidate winning over another, plus the feeling of civic (D)uty, must be greater than the (C)ost of voting.
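A minimal numerical sketch of the turnout condition (all values are invented for illustration): because the probability of casting a pivotal vote is tiny in a large electorate, the pB term is negligible and the decision effectively turns on whether D exceeds C.

def will_vote(p, B, C, D):
    # Riker-Ordeshook turnout condition: vote if pB + D > C.
    return p * B + D > C

print(will_vote(p=1e-7, B=1000, C=5, D=10))  # True: the duty term alone outweighs the cost
print(will_vote(p=1e-7, B=1000, C=5, D=0))   # False: without D, pB is far too small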
References
Downs, Anthony. 1957. An Economic Theory of Democracy. New York: Harper & Row.
Riker, William and Peter Ordeshook. 1968. “A Theory of the Calculus of Voting.” American Political Science Review 62(1): 25–42.
Voting theory
Mathematical modeling
|
https://en.wikipedia.org/wiki/Circuit%20extraction
|
The electric circuit extraction or simply circuit extraction, also netlist extraction, is the translation of an integrated circuit layout back into the electrical circuit (netlist) it is intended to represent. This extracted circuit is needed for various purposes including circuit simulation, static timing analysis, signal integrity, power analysis and optimization, and logic to layout comparison. Each of these functions require a slightly different representation of the circuit, resulting in the need for multiple layout extractions. In addition, there may be a postprocessing step of converting the device-level circuit into a purely digital circuit, but this is not considered part of the extraction process.
The detailed functionality of an extraction process will depend on its system environment. The simplest form of extracted circuit may be in the form of a netlist, which is formatted for a particular simulator or analysis program. A more complex extraction may involve writing the extracted circuit back into the original database containing the physical layout and the logic diagram. In this case, by associating the extracted circuit with the layout and the logic network, the user can cross-reference any point in the circuit to its equivalent points in the logic and layout (cross-probing). For simulation or analysis, various formats of netlist can then be generated using programs that read the database and generate the appropriate text information.
In extraction, it is often helpful to make an (informal) distinction between designed devices, which are devices that are deliberately created by the designer, and parasitic devices, which were not explicitly intended by the designer but are inherent in the layout of the circuit.
Primarily there are three different parts to the extraction process. These are designed device extraction, interconnect extraction, and parasitic device extraction. These parts are inter-related since various device extractions can change th
|
https://en.wikipedia.org/wiki/Multiton%20pattern
|
In software engineering, the multiton pattern is a design pattern which generalizes the singleton pattern. Whereas the singleton allows only one instance of a class to be created, the multiton pattern allows for the controlled creation of multiple instances, which it manages through the use of a map.
Rather than having a single instance per application (e.g. the java.lang.Runtime object in the Java programming language) the multiton pattern instead ensures a single instance per key.
The multiton pattern does not explicitly appear as a pattern in the highly regarded object-oriented programming textbook Design Patterns. However, the book describes using a registry of singletons to allow subclassing of singletons, which is essentially the multiton pattern.
Description
While it may appear that the multiton is simply a hash table with synchronized access, there are two important distinctions. First, the multiton does not allow clients to add mappings. Second, the multiton never returns a null or empty reference; instead, it creates and stores a multiton instance on the first request with the associated key. Subsequent requests with the same key return the original instance. A hash table is merely an implementation detail and not the only possible approach. The pattern simplifies retrieval of shared objects in an application.
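A minimal Python sketch of this behaviour (the class and key names are illustrative, not taken from the article): the registry is kept on the class, a missing key triggers creation and storage of an instance, and repeated requests with the same key return the original object.

class Multiton:
    _instances = {}  # class-level registry: one instance per key

    def __new__(cls, key):
        # Create and store the instance on the first request for a key;
        # later requests with the same key return the stored instance.
        if key not in cls._instances:
            instance = super().__new__(cls)
            instance.key = key
            cls._instances[key] = instance
        return cls._instances[key]

a = Multiton("config")
b = Multiton("config")
c = Multiton("logging")
print(a is b, a is c)  # True False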
Since the object pool is created only once, being a member associated with the class (instead of the instance), the multiton retains its flat behavior rather than evolving into a tree structure.
The multiton is unique in that it provides centralized access to a single directory (i.e. all keys are in the same namespace, per se) of multitons, where each multiton instance in the pool may exist having its own state. In this manner, the pattern advocates indexed storage of essential objects for the system (such as would be provided by an LDAP system, for example). However, a multiton is limited to wide use by a single system rather than a myriad of distributed system
|
https://en.wikipedia.org/wiki/Academy%20ratio
|
The Academy ratio of 1.375:1 (abbreviated as 1.37:1) is an aspect ratio of a frame of 35 mm film when used with 4-perf pulldown. It was standardized by the Academy of Motion Picture Arts and Sciences as the standard film aspect ratio in 1932, although similar-sized ratios were used as early as 1928.
History
Silent films were shot at a 1.33:1 aspect ratio (also known as a 4:3 aspect ratio), with each frame using all of the negative space between the two rows of film perforations for a length of 4 perforations. The frame line between the silent film frames was very thin. When sound-on-film was introduced in the late 1920s, the soundtrack was recorded in a stripe running just inside one set of the perforations and cut into the 1.33:1 image. This made the image area "taller", usually around 1.19:1, which was slightly disorienting to audiences used to the 1.33:1 frame and also presented problems for exhibitors with fixed-size screens and stationary projectors.
From studio to studio, the common attempt to reduce the image back to a 1.33:1 ratio by decreasing the projector aperture in-house met with conflicting results. Each movie theater chain, furthermore, had its own designated house ratio. The first standards set for the new sound-on-film motion pictures were accepted in November 1929, when all major US studios agreed to compose for the Society of Motion Picture Engineers (SMPE) designated size, returning to the aspect ratio of 1.33:1.
Following this, Academy of Motion Picture Arts and Sciences (AMPAS) considered further alterations to this 1930 standard. Various dimensions were submitted, and the projector aperture plate opening size of 0.825 in × 0.600 in was agreed upon. The resulting 1.375:1 aspect ratio was then dubbed the "Academy Ratio". On May 9, 1932, the SMPE adopted the same projector aperture standard.
All studio films shot in 35 mm from 1932 to 1952 were shot in the Academy ratio. However, following the widescreen "revolution" of 1953, it q
|
https://en.wikipedia.org/wiki/WTVJ
|
WTVJ (channel 6) is a television station in Miami, Florida, United States, serving as the market's NBC outlet. It is owned and operated by the network's NBC Owned Television Stations division alongside Fort Lauderdale–licensed WSCV (channel 51), a flagship station of Telemundo. Both stations share studios on Southwest 27th Street in Miramar, while WTVJ's transmitter is located in Andover, Florida.
History
Florida's first television station
The station first signed on the air on March 21, 1949, at 12:00 p.m. WTVJ was the first television station to sign on in the state of Florida, and the 16th station in the United States. Originally broadcasting on VHF channel 4, the station was founded by Wometco Enterprises (founded by Mitchell Wolfson and Sidney Meyer), a national movie theater chain that was headquartered in Miami. The station's original studio facilities were located in the former Capitol Theater on North Miami Avenue in Downtown Miami, which was the first theater operated by Wometco when the company was founded in 1926. The station was a primary CBS affiliate, but also carried programming from the other three major broadcast networks of that era (ABC, NBC and DuMont). During the late 1950s, the station was also briefly affiliated with the NTA Film Network.
WTVJ was the only commercial television station in the Miami market until Fort Lauderdale-based WFTL-TV (channel 23) signed on the air on December 24, 1954, operating as an NBC affiliate. However, WFTL had no success whatsoever in competing against WTVJ, in part because television sets were not required to have UHF tuning capability until the All-Channel Receiver Act went into effect in 1964. NBC continued to allow WTVJ to cherry-pick programs broadcast by the network until WCKT (channel 7, now Fox affiliate WSVN) signed on in July 1956 and WFTL went dark (that station's former channel 23 allocation is now occupied by Univision owned-and-operated station WLTV-DT). Channel 4 shared ABC programming with WC
|
https://en.wikipedia.org/wiki/WPCH-TV
|
WPCH-TV (channel 17), branded on-air as Peachtree TV, is a television station in Atlanta, Georgia, United States, affiliated with The CW. It is owned by locally based Gray Television alongside CBS affiliate and company flagship WANF (channel 46), and low-power, Class A Telemundo affiliate WKTB-CD (channel 47). WPCH-TV and WANF share studios on 14th Street Northwest in Atlanta's Home Park neighborhood, while WPCH-TV's transmitter is located in the Woodland Hills section of northeastern Atlanta.
During its ownership under the Turner Broadcasting System (which owned the station from April 1970 until February 2017), WPCH-TV—then using the WTCG call letters—pioneered the distribution of broadcast television stations retransmitted by communications satellite to cable and satellite subscribers throughout the United States, expanding the small independent station into the first national "superstation" on December 17, 1976. (The station eventually became among the first four American superstations to begin being distributed to television providers in Canada in 1985.)
The former superstation feed—which eventually became known as simply TBS, and had maintained a nearly identical program schedule as the local Atlanta feed—was converted by Turner into a conventional basic cable network on October 1, 2007, at which time it was concurrently added to cable providers within the Atlanta market (including Comcast and Charter) alongside its existing local carriage on satellite providers DirecTV and Dish Network. Channel 17—which had used the WTBS callsign since 1979—was concurrently relaunched as WPCH and reformatted as a traditional independent station with a separate schedule exclusively catering to the Atlanta market. Although the Atlanta station is no longer carried on American multichannel television providers outside of its home market, WPCH-TV continues to be available as a de facto superstation on most Canadian cable and satellite providers.
History
As WJRJ-TV
On October 20
|
https://en.wikipedia.org/wiki/Tea%20%28programming%20language%29
|
Tea is a high-level scripting language for the Java environment. It combines features of Scheme, Tcl, and Java.
Features
Integrated support for all major programming paradigms.
Functional programming language.
Functions are first-class objects.
Scheme-like closures are intrinsic to the language.
Support for object-oriented programming.
Modular libraries with autoloading on-demand facilities.
Large base of core functions and classes.
String and list processing.
Regular expressions.
File and network I/O.
Database access.
XML processing.
100% pure Java.
The Tea interpreter is implemented in Java.
Tea runs anywhere with a Java 1.6 JVM or higher.
Java reflection features allow the use of Java libraries directly from Tea code.
Intended to be easily extended in Java. For example, Tea supports relational database access through JDBC, regular expressions through GNU Regexp, and an XML parser through a SAX parser (XML4J for example).
Interpreter alternatives
Tea is a proprietary language. Its interpreter is subject to a non-free license. A project called "destea", which was released as Language::Tea on CPAN, provides an alternative by generating Java code based on the Tea code.
TeaClipse is an open-source compiler that uses a JavaCC-generated parser to parse and then compile Tea source to the proprietary Tea bytecode.
References
External links
Tea Home Page
"destea" code converter
Scripting languages
Programming languages
High-level programming languages
Programming languages created in 1997
|
https://en.wikipedia.org/wiki/Lumen%20%28anatomy%29
|
In biology, a lumen (plural: lumina) is the inside space of a tubular structure, such as an artery or intestine. It comes from the Latin lumen, meaning 'an opening'.
It can refer to:
the interior of a vessel, such as the central space in an artery, vein or capillary through which blood flows
the interior of the gastrointestinal tract
the pathways of the bronchi in the lungs
the interior of renal tubules and urinary collecting ducts
the pathways of the female genital tract, starting with a single pathway of the vagina, splitting up in two lumina in the uterus, both of which continue through the fallopian tubes
In cell biology, a lumen is a membrane-defined space that is found inside several organelles, cellular components, or structures, including thylakoid, endoplasmic reticulum, Golgi apparatus, lysosome, mitochondrion, and microtubule.
Transluminal procedures
Transluminal procedures are procedures occurring through lumina, including:
natural orifice transluminal endoscopic surgery in the lumina of, for example, the stomach, vagina, bladder, or colon
procedures through the lumina of blood vessels, such as various interventional radiology procedures:
percutaneous transluminal angioplasty
percutaneous transluminal commissurotomy
See also
Foramen, any anatomical opening
References
Anatomy
Blood
|
https://en.wikipedia.org/wiki/Structured%20text
|
Structured text, abbreviated as ST or STX, is one of the five languages supported by the IEC 61131-3 standard, designed for programmable logic controllers (PLCs). It is a high level language that is block structured and syntactically resembles Pascal, on which it is based. All of the languages share IEC61131 Common Elements. The variables and function calls are defined by the common elements so different languages within the IEC 61131-3 standard can be used in the same program.
Complex statements and nested instructions are supported:
Iteration loops (REPEAT-UNTIL; WHILE-DO)
Conditional execution (IF-THEN-ELSE; CASE)
Functions (SQRT(), SIN())
Sample program
(* simple state machine *)
TxtState := STATES[StateMachine];

CASE StateMachine OF
    1: ClosingValve();
       StateMachine := 2;

    2: OpeningValve();

ELSE
    BadCase();
END_CASE;
Unlike in some other programming languages, there is no fallthrough for the CASE statement: the first matching condition is entered, and after running its statements, the CASE block is left without checking other conditions.
Additional ST programming examples
// PLC configuration
CONFIGURATION DefaultCfg
    VAR_GLOBAL
        b_Start_Stop : BOOL;          // Global variable to represent a boolean.
        b_ON_OFF     : BOOL;          // Global variable to represent a boolean.
        Start_Stop AT %IX0.0 : BOOL;  // Digital input of the PLC (Address 0.0)
        ON_OFF     AT %QX0.0 : BOOL;  // Digital output of the PLC (Address 0.0). (Coil)
    END_VAR

    // Schedule the main program to be executed every 20 ms
    TASK Tick(INTERVAL := t#20ms);
    PROGRAM Main WITH Tick : Monitor_Start_Stop;
END_CONFIGURATION

PROGRAM Monitor_Start_Stop    // Actual Program
    VAR_EXTERNAL
        Start_Stop : BOOL;
        ON_OFF     : BOOL;
    END_VAR
    VAR                       // Temporary variables for logic handling
        ONS_Trig   : BOOL;
        Rising_ONS : BOOL;
    END_VAR

    // Start of Logic
|
https://en.wikipedia.org/wiki/IEC%2061131
|
IEC 61131 is an IEC standard for programmable controllers. It was first published in 1993; the current (third) edition dates from 2013. It was known as IEC 1131 before the change in numbering system by IEC. The parts of the IEC 61131 standard are prepared and maintained by working group 7, programmable control systems, of subcommittee SC 65B of Technical Committee TC65 of the IEC.
Sections of IEC 61131
Standard IEC 61131 is divided into several parts:
Part 1: General information. It is the introductory chapter; it contains definitions of terms that are used in the subsequent parts of the standard and outlines the main functional properties and characteristics of PLCs.
Part 2: Equipment requirements and tests - establishes the requirements and associated tests for programmable controllers and their peripherals. This standard prescribes: the normal service conditions and requirements (for example, requirements related with climatic conditions, transport and storage, electrical service, etc.); functional requirements (power supply & memory, digital and analog I/Os); functional type tests and verification (requirements and tests on environmental, vibration, drop, free fall, I/O, power ports, etc.) and electromagnetic compatibility (EMC) requirements and tests that programmable controllers must implement. This standard can serve as a basis in the evaluation of safety programmable controllers to IEC 61508.
Part 3: Programming languages
Part 4: User guidelines
Part 5: Communications
Part 6: Functional safety
Part 7: Fuzzy control programming
Part 8: Guidelines for the application and implementation of programming languages
Part 9: Single-drop digital communication interface for small sensors and actuators (SDCI, marketed as IO-Link)
Part 10: PLC open XML exchange format for the export and import of IEC 61131-3 projects
Related standards
IEC 61499 Function Block
PLCopen has developed several standards and working groups.
TC1 - Standards
TC2 - Functions
TC3
|
https://en.wikipedia.org/wiki/Dynamic%20Data%20Driven%20Applications%20Systems
|
Dynamic Data Driven Applications Systems (DDDAS) is a paradigm whereby the computation and instrumentation aspects of an application system are dynamically integrated in a feedback control loop, in the sense that instrumentation data can be dynamically incorporated into the executing model of the application, and in reverse the executing model can control the instrumentation. Such approaches have been shown to enable more accurate and faster modeling and analysis of the characteristics and behaviors of a system, and to exploit data in intelligent ways to convert them into new capabilities, including decision support systems with the accuracy of full-scale modeling, efficient data collection, management, and data mining. The DDDAS concept - and the term - was proposed by Frederica Darema for the National Science Foundation (NSF) workshop in March 2000.
There are several affiliated annual meetings and conferences, including:
DDDAS workshop at ICCS (since 2003)
DyDESS conference and workshop at MIT organized by Sai Ravela and Adrian Sandu
DDDAS special session at the ACC organized by Puneet Singla and Dennis Bernstein and Sai Ravela
DDDAS Special Session Information Fusion
DDDAS 2016 at Hartford, the first full-fledged conference, hosted and sponsored by MIT with some support from UTRC.
DDDAS 2017 at MIT, the second conference hosted and managed by MIT.
DDDAS 2020 Online, the third conference hosted by MIT.
DDDAS 2022 at MIT, the fourth conference hosted by MIT together with CLEPS22.
As time progressed, Dr. Ravela suggested that DDDAS grow into its own conference, adding workshops on special subjects. The first full-fledged, though environmentally focused, DDDAS conference was DyDESS, held at MIT, and the community has not looked back since. MIT sponsored and set up the DyDESS conference, and continues to be the host and event organizer through its Earth, Atmospheric and Planetary Sciences department.
References
F. Darema, “Dynamic Data Driven
|
https://en.wikipedia.org/wiki/Surface-enhanced%20Raman%20spectroscopy
|
Surface-enhanced Raman spectroscopy or surface-enhanced Raman scattering (SERS) is a surface-sensitive technique that enhances Raman scattering by molecules adsorbed on rough metal surfaces or by nanostructures such as plasmonic-magnetic silica nanotubes. The enhancement factor can be as much as 10^10 to 10^11, which means the technique may detect single molecules.
History
SERS from pyridine adsorbed on electrochemically roughened silver was first observed by Martin Fleischmann, Patrick J. Hendra and A. James McQuillan at the Department of Chemistry at the University of Southampton, UK in 1973. This initial publication has been cited over 6000 times. The 40th Anniversary of the first observation of the SERS effect has been marked by the Royal Society of Chemistry by the award of a National Chemical Landmark plaque to the University of Southampton. In 1977, two groups independently noted that the concentration of scattering species could not account for the enhanced signal and each proposed a mechanism for the observed enhancement. Their theories are still accepted as explaining the SERS effect. Jeanmaire and Richard Van Duyne proposed an electromagnetic effect, while Albrecht and Creighton proposed a charge-transfer effect. Rufus Ritchie, of Oak Ridge National Laboratory's Health Sciences Research Division, predicted the existence of the surface plasmon.
Mechanisms
The exact mechanism of the enhancement effect of SERS is still a matter of debate in the literature. There are two primary theories and while their mechanisms differ substantially, distinguishing them experimentally has not been straightforward. The electromagnetic theory proposes the excitation of localized surface plasmons, while the chemical theory proposes the formation of charge-transfer complexes. The chemical theory is based on resonance Raman spectroscopy, in which the frequency coincidence (or resonance) of the incident photon energy and electron transition greatly enhances Raman scattering inten
|
https://en.wikipedia.org/wiki/Microbial%20corrosion
|
Microbial corrosion, also called microbiologically influenced corrosion (MIC), microbially induced corrosion (MIC) or biocorrosion, is "corrosion affected by the presence or activity (or both) of microorganisms in biofilms on the surface of the corroding material." This corroding material can be either a metal (such as steel or aluminum alloys) or a nonmetal (such as concrete or glass).
Bacteria
Some sulfate-reducing bacteria produce hydrogen sulfide, which can cause sulfide stress cracking. Acidithiobacillus bacteria produce sulfuric acid; Acidithiobacillus thiooxidans frequently damages sewer pipes. Ferrobacillus ferrooxidans directly oxidizes iron to iron oxides and iron hydroxides; the rusticles forming on the RMS Titanic wreck are caused by bacterial activity. Other bacteria produce various acids, both organic and mineral, or ammonia.
In presence of oxygen, aerobic bacteria like Acidithiobacillus thiooxidans, Thiobacillus thioparus, and Thiobacillus concretivorus, all three widely present in the environment, are the common corrosion-causing factors resulting in biogenic sulfide corrosion.
Without presence of oxygen, anaerobic bacteria, especially Desulfovibrio and Desulfotomaculum, are common. Desulfovibrio salixigens requires at least 2.5% concentration of sodium chloride, but D. vulgaris and D. desulfuricans can grow in both fresh and salt water. D. africanus is another common corrosion-causing microorganism. The genus Desulfotomaculum comprises sulfate-reducing spore-forming bacteria; Dtm. orientis and Dtm. nigrificans are involved in corrosion processes. Sulfate-reducers require a reducing environment; an electrode potential lower than -100 mV is required for them to thrive. However, even a small amount of produced hydrogen sulfide can achieve this shift, so the growth, once started, tends to accelerate.
Layers of anaerobic bacteria can exist in the inner parts of the corrosion deposits, while the outer parts are inhabited by aerobic bacteria.
Some ba
|
https://en.wikipedia.org/wiki/Equiconsistency
|
In mathematical logic, two theories are equiconsistent if the consistency of one theory implies the consistency of the other theory, and vice versa. In this case, they are, roughly speaking, "as consistent as each other".
In general, it is not possible to prove the absolute consistency of a theory T. Instead we usually take a theory S, believed to be consistent, and try to prove the weaker statement that if S is consistent then T must also be consistent—if we can do this we say that T is consistent relative to S. If S is also consistent relative to T then we say that S and T are equiconsistent.
Consistency
In mathematical logic, formal theories are studied as mathematical objects. Since some theories are powerful enough to model different mathematical objects, it is natural to wonder about their own consistency.
Hilbert proposed a program at the beginning of the 20th century whose ultimate goal was to show, using mathematical methods, the consistency of mathematics. Since most mathematical disciplines can be reduced to arithmetic, the program soon became focused on establishing the consistency of arithmetic by methods formalizable within arithmetic itself.
Gödel's incompleteness theorems show that Hilbert's program cannot be realized: if a consistent recursively enumerable theory is strong enough to formalize its own metamathematics (whether something is a proof or not), i.e. strong enough to model a weak fragment of arithmetic (Robinson arithmetic suffices), then the theory cannot prove its own consistency. There are some technical caveats as to what requirements the formal statement representing the metamathematical statement "The theory is consistent" needs to satisfy, but the outcome is that if a (sufficiently strong) theory can prove its own consistency then either there is no computable way of identifying whether a statement is even an axiom of the theory or not, or else the theory itself is inconsistent (in which case it can prove anything, includin
|
https://en.wikipedia.org/wiki/Aerotolerant%20anaerobe
|
Aerotolerant anaerobes use fermentation to produce ATP. They do not use oxygen, but they can protect themselves from reactive oxygen molecules. In contrast, obligate anaerobes can be harmed by reactive oxygen molecules.
There are three categories of anaerobes: in contrast to obligate aerobes, which require oxygen to grow, obligate anaerobes are damaged by oxygen, aerotolerant organisms cannot use oxygen but tolerate its presence, and facultative anaerobes use oxygen if it is present but can also grow without it.
Most aerotolerant anaerobes have superoxide dismutase and (non-catalase) peroxidase but lack catalase. More specifically, they may use a NADH oxidase/NADH peroxidase (NOX/NPR) system or a glutathione peroxidase system. An example of an aerotolerant anaerobe is Cutibacterium acnes.
References
Microbiology
|
https://en.wikipedia.org/wiki/Malthusian%20growth%20model
|
A Malthusian growth model, sometimes called a simple exponential growth model, is essentially exponential growth based on the idea that the rate at which a population grows is proportional to its current size. The model is named after Thomas Robert Malthus, who wrote An Essay on the Principle of Population (1798), one of the earliest and most influential books on population.
Malthusian models have the following form:
P(t) = P0 e^(rt)
where
P0 = P(0) is the initial population size,
r = the population growth rate, which Ronald Fisher called the Malthusian parameter of population growth in The Genetical Theory of Natural Selection, and Alfred J. Lotka called the intrinsic rate of increase,
t = time.
The model can also be written in the form of a differential equation:
dP/dt = rP
with initial condition:
P(0) = P0
This model is often referred to as the exponential law. It is widely regarded in the field of population ecology as the first principle of population dynamics, with Malthus as the founder. The exponential law is therefore also sometimes referred to as the Malthusian Law. Malthusian growth in ecology is now widely viewed as analogous to Newton's first law of uniform motion in physics.
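As an illustration not taken from the article, the minimal Python sketch below evaluates the closed-form model P(t) = P0 e^(rt) and compares it with a crude numerical integration of dP/dt = rP; the initial population and growth rate are arbitrary example values.

```python
# Minimal sketch of the Malthusian (exponential) growth model:
#   dP/dt = r * P, with solution P(t) = P0 * exp(r * t).
# The initial population and growth rate below are arbitrary example values.
import math

def malthusian(p0: float, r: float, t: float) -> float:
    """Closed-form population size P(t) = P0 * e^(r*t)."""
    return p0 * math.exp(r * t)

def malthusian_euler(p0: float, r: float, t: float, steps: int = 100_000) -> float:
    """Forward-Euler integration of dP/dt = r*P, for comparison with the closed form."""
    dt = t / steps
    p = p0
    for _ in range(steps):
        p += r * p * dt
    return p

if __name__ == "__main__":
    p0, r = 1000.0, 0.03  # initial population and per-year growth rate (illustrative)
    for t in (0.0, 10.0, 25.0, 50.0):
        exact = malthusian(p0, r, t)
        approx = malthusian_euler(p0, r, t)
        print(f"t={t:4.0f}  P(t)={exact:10.1f}  Euler~{approx:10.1f}")
```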
Malthus wrote that all life forms, including humans, have a propensity to exponential population growth when resources are abundant but that actual growth is limited by available resources:
A model of population growth bounded by resource limitations was developed by Pierre François Verhulst in 1838, after he had read Malthus' essay. Verhulst named the model a logistic function.
See also
Albert Allen Bartlett – a leading proponent of the Malthusian Growth Model
Exogenous growth model – related growth model from economics
Growth theory – related ideas from economics
Human overpopulation
Irruptive growth – an extension of the Malthusian model accounting for population explosions and crashes
Malthusian catastrophe
Neo-Malthusianism
The Genetical Theory of Natural Selection
References
|
https://en.wikipedia.org/wiki/Science%2C%20technology%2C%20engineering%2C%20and%20mathematics
|
Science, technology, engineering, and mathematics (STEM) is an umbrella term used to group together the distinct but related technical disciplines of science, technology, engineering, and mathematics. The term is typically used in the context of education policy or curriculum choices in schools. It has implications for workforce development, national security concerns (as a shortage of STEM-educated citizens can reduce effectiveness in this area), and immigration policy, with regard to admitting foreign students and tech workers.
There is no universal agreement on which disciplines are included in STEM; in particular, whether or not the science in STEM includes social sciences, such as psychology, sociology, economics, and political science. In the United States, these are typically included by organizations such as the National Science Foundation (NSF), the Department of Labor's O*Net online database for job seekers, and the Department of Homeland Security. In the United Kingdom, the social sciences are categorized separately and are instead grouped with humanities and arts to form another counterpart acronym HASS (Humanities, Arts, and Social Sciences), rebranded in 2020 as SHAPE (Social Sciences, Humanities and the Arts for People and the Economy). Some sources also use HEAL (health, education, administration, and literacy) as the counterpart of STEM.
Terminology
History
The NSF previously referred to the grouping as SMET. In the early 1990s, the acronym STEM was used by a variety of educators, including Charles E. Vela, the founder and director of the Center for the Advancement of Hispanics in Science and Engineering Education (CAHSEE). Moreover, the CAHSEE started a summer program for talented under-represented students in the Washington, D.C., area called the STEM Institute. Based on the program's recognized success and his expertise in STEM education, Charles Vela was asked to serve on numerous NSF and Congressional panels in science, mathematics, and engineering edu
|
https://en.wikipedia.org/wiki/In-phase%20and%20quadrature%20components
|
A sinusoid with modulation can be decomposed into, or synthesized from, two amplitude-modulated sinusoids that are in quadrature phase, i.e., with a phase offset of one-quarter cycle (90 degrees or π/2 radians). All three sinusoids have the same center frequency. The two amplitude-modulated sinusoids are known as the in-phase (I) and quadrature (Q) components, a terminology that describes their relationship to the amplitude- and phase-modulated carrier.
In other words, it is possible to create an arbitrarily phase-shifted sine wave by mixing together two sine waves that are 90° out of phase in different proportions.
The implication is that the modulations in some signal can be treated separately from the carrier wave of the signal. This has extensive use in many radio and signal processing applications. I/Q data is used to represent the modulations of some carrier, independent of that carrier's frequency.
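The following short Python sketch, not part of the article, illustrates the point numerically: an amplitude A and phase φ are converted to in-phase and quadrature amplitudes, and recombining them with cosine and sine carriers reproduces the phase-shifted sinusoid. The amplitude, phase, and carrier frequency are arbitrary example values.

```python
# Sketch: building an arbitrarily phase-shifted sinusoid from I and Q components.
#   A*cos(2*pi*f*t + phi) = I*cos(2*pi*f*t) - Q*sin(2*pi*f*t),
# with I = A*cos(phi) and Q = A*sin(phi). All values below are arbitrary examples.
import math

A, phi, f = 2.0, math.radians(30), 5.0   # amplitude, phase offset, carrier frequency (Hz)
I = A * math.cos(phi)                    # in-phase amplitude
Q = A * math.sin(phi)                    # quadrature amplitude

for n in range(8):                       # compare the two forms at a few sample times
    t = n / 100.0
    direct = A * math.cos(2 * math.pi * f * t + phi)
    from_iq = I * math.cos(2 * math.pi * f * t) - Q * math.sin(2 * math.pi * f * t)
    assert abs(direct - from_iq) < 1e-12
    print(f"t={t:.2f}  direct={direct:+.6f}  from I/Q={from_iq:+.6f}")
```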
Orthogonality
In vector analysis, a vector with polar coordinates (A, φ) and Cartesian coordinates x = A cos(φ), y = A sin(φ) can be represented as the sum of the orthogonal components [x, 0] + [0, y]. Similarly, in trigonometry the angle sum identity expresses:
A sin(x + φ) = A cos(φ) sin(x) + A sin(φ) cos(x).
And in functional analysis, when x is a linear function of some variable, such as time, these components are sinusoids, and they are orthogonal functions. A phase shift of x → x + π/2 changes the identity to:
A cos(x + φ) = A cos(φ) cos(x) − A sin(φ) sin(x),
in which case A cos(φ) cos(x) is the in-phase component. In both conventions A cos(φ) is the in-phase amplitude modulation, which explains why some authors refer to it as the actual in-phase component.
Narrowband signal model
In an angle modulation application, with carrier frequency f, φ is also a time-variant function φ(t), giving:
cos[2πft + φ(t)] = cos(2πft) cos[φ(t)] − sin(2πft) sin[φ(t)].
When all three terms above are multiplied by an optional amplitude function A(t) > 0, the left-hand side of the equality is known as the amplitude/phase form, and the right-hand side is the quadrature-carrier or IQ form.
Because of the modulation, the components are no longer completely orthogonal functions. But when A(t) and φ(t) are slowly varying functions compared
|
https://en.wikipedia.org/wiki/Common%20Base%20Event
|
Common Base Event (CBE) is an IBM implementation of the Web Services Distributed Management (WSDM) Event Format standard. IBM also implemented the Common Event Infrastructure, a unified set of APIs and infrastructure for the creation, transmission, persistence and distribution of a wide range of business, system and network Common Base Event formatted events.
External links
Understanding Common Base Events Specification V1.0.1, IBM.
Common Base Events Best Practices, IBM.
Web services
IBM software
|
https://en.wikipedia.org/wiki/Residue-class-wise%20affine%20group
|
In mathematics, specifically in group theory, residue-class-wise affine groups are certain permutation groups acting on ℤ (the integers), whose elements are bijective residue-class-wise affine mappings.
A mapping f: ℤ → ℤ is called residue-class-wise affine if there is a nonzero integer m such that the restrictions of f to the residue classes (mod m) are all affine. This means that for any residue class r(m) there are coefficients a, b and c, depending on r(m), such that the restriction of the mapping f to the set r(m) = {r + km | k ∈ ℤ} is given by n ↦ (a·n + b)/c.
Residue-class-wise affine groups are countable, and they are accessible to computational investigations.
Many of them act multiply transitively on ℤ or on subsets thereof.
A particularly basic type of residue-class-wise affine permutations are the class transpositions: given disjoint residue classes r1(m1) and r2(m2), the corresponding class transposition is the permutation of ℤ which interchanges r1 + k·m1 and r2 + k·m2 for every integer k and which fixes everything else. Here it is assumed that 0 ≤ r1 < m1 and that 0 ≤ r2 < m2.
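As an illustrative sketch not taken from the article, the Python function below builds a class transposition under the reading that it swaps r1 + k·m1 with r2 + k·m2 for every integer k; the residue classes 0(2) and 1(4) used in the demo are an arbitrary choice.

```python
# Sketch of a class transposition on the integers, read as swapping
# r1 + k*m1 with r2 + k*m2 for every integer k and fixing everything else.
# The residue classes 0(2) and 1(4) used in the demo are an arbitrary choice.
import math

def class_transposition(r1: int, m1: int, r2: int, m2: int):
    assert 0 <= r1 < m1 and 0 <= r2 < m2
    # The two residue classes are disjoint iff r1 and r2 differ mod gcd(m1, m2).
    assert (r1 - r2) % math.gcd(m1, m2) != 0, "residue classes must be disjoint"

    def tau(n: int) -> int:
        if n % m1 == r1:                  # n = r1 + k*m1  ->  r2 + k*m2
            return r2 + ((n - r1) // m1) * m2
        if n % m2 == r2:                  # n = r2 + k*m2  ->  r1 + k*m1
            return r1 + ((n - r2) // m2) * m1
        return n                          # all other integers are fixed
    return tau

tau = class_transposition(0, 2, 1, 4)     # interchanges the classes 0(2) and 1(4)
print([tau(n) for n in range(-4, 8)])
# A class transposition is an involution: applying it twice is the identity.
assert all(tau(tau(n)) == n for n in range(-100, 100))
```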
The set of all class transpositions of ℤ generates a countable simple group which has the following properties:
It is not finitely generated.
Every finite group, every free product of finite groups and every free group of finite rank embeds into it.
The class of its subgroups is closed under taking direct products, under taking wreath products with finite groups, and under taking restricted wreath products with the infinite cyclic group.
It has finitely generated subgroups which do not have finite presentations.
It has finitely generated subgroups with algorithmically unsolvable membership problem.
It has an uncountable series of simple subgroups which is parametrized by the sets of odd primes.
It is straightforward to generalize the notion of a residue-class-wise affine group to groups acting on suitable rings other than ℤ, though only little work in this direction has been done so far.
See also the Collatz conjecture, which is an assertion about a surjective, but not injective, residue-class-wise affine mapping.
|
https://en.wikipedia.org/wiki/Rate%20of%20return
|
In finance, return is a profit on an investment. It comprises any change in value of the investment, and/or cash flows (or securities, or other investments) which the investor receives from that investment over a specified time period, such as interest payments, coupons, cash dividends and stock dividends. It may be measured either in absolute terms (e.g., dollars) or as a percentage of the amount invested. The latter is also called the holding period return.
A loss instead of a profit is described as a negative return, assuming the amount invested is greater than zero.
To compare returns over time periods of different lengths on an equal basis, it is useful to convert each return into a return over a period of time of a standard length. The result of the conversion is called the rate of return.
Typically, the period of time is a year, in which case the rate of return is also called the annualized return, and the conversion process, described below, is called annualization.
The return on investment (ROI) is return per dollar invested. It is a measure of investment performance, as opposed to size (cf. return on equity, return on assets, return on capital employed).
Calculation
The return, or the holding period return, can be calculated over a single period. The single period may last any length of time.
The overall period may, however, instead be divided into contiguous subperiods. This means that there is more than one time period, each sub-period beginning at the point in time where the previous one ended. In such a case, where there are multiple contiguous subperiods, the return or the holding period return over the overall period can be calculated by combining the returns within each of the subperiods.
Single-period
Return
The direct method to calculate the return, or the holding period return, R over a single period of any length of time is:
R = (Vf − Vi) / Vi
where:
Vf = final value, including dividends and interest
Vi = initial value
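A minimal Python sketch, not taken from the article, of the single-period calculation; the annualization helper uses the common geometric convention (1 + R)^(1/years) − 1, and all figures are invented for illustration.

```python
# Sketch: holding period return R = (V_final - V_initial) / V_initial,
# plus a common annualization convention, (1 + R)**(1/years) - 1.
# The figures below are invented for illustration only.
def holding_period_return(v_initial: float, v_final: float) -> float:
    """Return over a single period, as a fraction of the amount invested."""
    return (v_final - v_initial) / v_initial

def annualized_return(r: float, years: float) -> float:
    """Convert a return earned over `years` into an equivalent yearly rate."""
    return (1 + r) ** (1 / years) - 1

r = holding_period_return(10_000.0, 11_030.0)   # value grew from 10,000 to 11,030
print(f"holding period return: {r:.2%}")        # 10.30%
print(f"annualized over 2 years: {annualized_return(r, 2):.2%}")  # about 5.02%
```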
For example, if someone purchases 100 s
|
https://en.wikipedia.org/wiki/Tolman%20length
|
The Tolman length (also known as Tolman's delta) measures the extent by which the surface tension of a small liquid drop deviates from its planar value. It is conveniently defined in terms of an expansion in 1/Re, with Re the equimolar radius (defined below) of the liquid drop, of the pressure difference across the droplet's surface:
Δp = (2σ0/Re) (1 − δ/Re + …)   (1)
In this expression, Δp is the pressure difference between the (bulk) pressure of the liquid inside and the pressure of the vapour outside, and σ0 is the surface tension of the planar interface, i.e. the interface with zero curvature. The Tolman length δ is thus defined as the leading-order correction in an expansion in 1/Re.
The equimolar radius is defined so that the superficial density is zero, i.e., it is defined by imagining a sharp mathematical dividing surface with a uniform internal and external density, but where the total mass of the pure fluid is exactly equal to the real situation. At the atomic scale in a real drop, the surface is not sharp; rather, the density gradually drops to zero, and the Tolman length captures the fact that the idealized equimolar surface does not necessarily coincide with the idealized tension surface.
Another way to define the Tolman length is to consider the radius dependence of the surface tension, σ(R). To leading order in 1/R one has:
σ(R) = σ0 (1 − 2δ/R + …)   (2)
Here σ(R) denotes the surface tension (or (excess) surface free energy) of a liquid drop with radius R, whereas σ0 denotes its value in the planar limit.
In both definitions (1) and (2) the Tolman length is defined as a coefficient in an expansion in 1/R and therefore does not depend on R.
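As a rough numeric illustration not found in the article, the sketch below evaluates the first-order correction of definition (2) for a few droplet radii; the planar surface tension and the Tolman length used here are assumed placeholder values, not measured data.

```python
# Sketch of the leading-order Tolman correction from definition (2):
#   sigma(R) ~ sigma0 * (1 - 2*delta/R).
# sigma0 and delta below are assumed placeholder values, not measured data.
def tolman_surface_tension(sigma0: float, delta: float, radius: float) -> float:
    """First-order (in 1/R) estimate of the surface tension of a drop of radius R."""
    return sigma0 * (1.0 - 2.0 * delta / radius)

sigma0 = 0.072   # N/m, a planar surface tension of the order of that of water
delta = 0.1e-9   # m, an assumed Tolman length of about 0.1 nm
for radius in (1e-9, 5e-9, 50e-9):
    sigma_r = tolman_surface_tension(sigma0, delta, radius)
    print(f"R = {radius:.0e} m  ->  sigma(R) ~ {sigma_r:.4f} N/m")
```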
Furthermore, the Tolman length can be related to the radius of spontaneous curvature when one compares the free energy method of Helfrich with the method of Tolman:
Any result for the Tolman length therefore gives information about the radius of spontaneous curvature, . If the Tolman length is known to be positive (with ) the interface tends to curve towards the liquid ph
|
https://en.wikipedia.org/wiki/Pseudocircle
|
The pseudocircle is the finite topological space X consisting of four distinct points {a, b, c, d} with the following non-Hausdorff topology: the open sets are the empty set, {a}, {b}, {a,b}, {a,b,c}, {a,b,d} and X itself.
This topology corresponds to the partial order in which a and b lie below both c and d, with the open sets being the downward-closed sets. X is highly pathological from the usual viewpoint of general topology, as it fails to satisfy any separation axiom besides T0. However, from the viewpoint of algebraic topology X has the remarkable property that it is indistinguishable from the circle S1.
More precisely, the continuous map f from S1 to X (where we think of S1 as the unit circle in the plane), given by
f(x, y) = a if x < 0, b if x > 0, c if (x, y) = (0, 1), and d if (x, y) = (0, −1),
is a weak homotopy equivalence; that is, f induces an isomorphism on all homotopy groups. It follows (Allen Hatcher (2002), Algebraic Topology, Proposition 4.21, Cambridge University Press) that f also induces an isomorphism on singular homology and cohomology and more generally an isomorphism on all ordinary or extraordinary homology and cohomology theories (e.g., K-theory).
This can be proved using the following observation. Like S1, X is the union of two contractible open sets, {a,b,c} and {a,b,d}, whose intersection {a,b} is itself the union of two disjoint contractible open sets {a} and {b}. So, just as for S1, the result follows from the groupoid Seifert–van Kampen theorem, as in the book Topology and Groupoids.
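The following small Python sketch, not part of the article, lists the seven open sets used above and mechanically checks that they satisfy the axioms of a topology on {a, b, c, d} (for a finite space it suffices to check pairwise unions and intersections).

```python
# Sketch: the pseudocircle's open sets, checked against the finite-topology axioms
# (contains the empty set and X, closed under pairwise unions and intersections).
from itertools import combinations

X = frozenset("abcd")
opens = {frozenset(s) for s in ["", "a", "b", "ab", "abc", "abd", "abcd"]}

assert frozenset() in opens and X in opens
for U, V in combinations(opens, 2):
    assert U | V in opens, f"union {set(U)} | {set(V)} not open"
    assert U & V in opens, f"intersection {set(U)} & {set(V)} not open"
print("valid topology with", len(opens), "open sets")   # 7 open sets
```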
More generally, McCord has shown that for any finite simplicial complex K, there is a finite topological space XK which has the same weak homotopy type as the geometric realization |K| of K. More precisely, there is a functor, taking K to XK, from the category of finite simplicial complexes and simplicial maps, and a natural weak homotopy equivalence from |K| to XK.
See also
References
Algebraic topology
Topological spaces
|