https://en.wikipedia.org/wiki/Biomarker
|
In biomedical contexts, a biomarker, or biological marker, is a measurable indicator of some biological state or condition. Biomarkers are often measured and evaluated using blood, urine, or soft tissues to examine normal biological processes, pathogenic processes, or pharmacologic responses to a therapeutic intervention. Biomarkers are used in many scientific fields.
Medicine
Biomarkers used in the medical field are a part of a relatively new clinical toolset categorized by their clinical applications. The four main classes are molecular, physiologic, histologic and radiographic biomarkers. All four types of biomarkers have a clinical role in narrowing or guiding treatment decisions and follow a sub-categorization of being either predictive, prognostic, or diagnostic.
Predictive
Predictive molecular, cellular, or imaging biomarkers that pass validation can serve as a method of predicting clinical outcomes. Predictive biomarkers are used to help optimize treatment choices, and often indicate the likelihood of a patient benefiting from a specific therapy. For example, molecular biomarkers situated at the interface between a pathology-specific molecular process and a drug's mechanism of action hold promise for capturing aspects of an individual's treatment response. This offers a dual approach: identifying trends in retrospective studies and using biomarkers to predict outcomes. In metastatic colorectal cancer, for instance, predictive biomarkers can serve as a way of evaluating and improving patient survival rates, and on a case-by-case basis they can spare patients needless toxicity arising from cancer treatment plans.
Common examples of predictive biomarkers are genes such as ER, PR and HER2/neu in breast cancer, BCR-ABL fusion protein in chronic myeloid leukaemia, c-KIT mutations in GIST tumours and EGFR1 mutations in NSCLC.
Diagnostic
Diagnostic biomarkers that meet a burden of proof can serve a role in narrowi
|
https://en.wikipedia.org/wiki/Glaisher%E2%80%93Kinkelin%20constant
|
In mathematics, the Glaisher–Kinkelin constant or Glaisher's constant, typically denoted A, is a mathematical constant, related to the K-function and the Barnes G-function. The constant appears in a number of sums and integrals, especially those involving gamma functions and zeta functions. It is named after mathematicians James Whitbread Lee Glaisher and Hermann Kinkelin.
Its approximate value is:
A = 1.2824271291... .
The Glaisher–Kinkelin constant can be given by the limit:
A = \lim_{n\to\infty} \frac{H(n)}{n^{n^2/2 + n/2 + 1/12}\, e^{-n^2/4}}
where H(n) = \prod_{k=1}^{n} k^k is the hyperfactorial. This formula displays a similarity between A and \sqrt{2\pi}, which is perhaps best illustrated by noting Stirling's formula:
\sqrt{2\pi} = \lim_{n\to\infty} \frac{n!}{n^{n+1/2}\, e^{-n}}
which shows that just as \sqrt{2\pi} is obtained from approximation of the factorials, A can also be obtained from a similar approximation to the hyperfactorials.
An equivalent definition for A involving the Barnes G-function, given by G(n+1) = \frac{[\Gamma(n+1)]^n}{H(n)} = \prod_{k=1}^{n-1} k!, where \Gamma is the gamma function, is:
A = \lim_{n\to\infty} \frac{(2\pi)^{n/2}\, n^{n^2/2 - 1/12}\, e^{-3n^2/4 + 1/12}}{G(n+1)}.
The Glaisher–Kinkelin constant also appears in evaluations of the derivatives of the Riemann zeta function, such as:
\zeta'(-1) = \tfrac{1}{12} - \ln A
\sum_{k=2}^{\infty} \frac{\ln k}{k^2} = -\zeta'(2) = \frac{\pi^2}{6}\left(12\ln A - \gamma - \ln(2\pi)\right)
where \gamma is the Euler–Mascheroni constant. The latter formula leads directly to the following product found by Glaisher:
\prod_{k=1}^{\infty} k^{1/k^2} = \left(\frac{A^{12}}{2\pi e^{\gamma}}\right)^{\pi^2/6}
An alternative product formula, defined over the prime numbers, reads
\prod_{k=1}^{\infty} p_k^{1/(p_k^2 - 1)} = \frac{A^{12}}{2\pi e^{\gamma}}
where p_k denotes the kth prime number.
The following are some integrals that involve this constant:
\int_0^{1/2} \ln\Gamma(x)\, dx = \tfrac{3}{2}\ln A + \tfrac{5}{24}\ln 2 + \tfrac{1}{4}\ln\pi
\int_0^{\infty} \frac{x \ln x}{e^{2\pi x} - 1}\, dx = \tfrac{1}{24} - \tfrac{1}{2}\ln A
A series representation for this constant follows from a series for the Riemann zeta function given by Helmut Hasse.
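The limiting formula above can also be checked numerically. The short sketch below is my own illustration, not from the article; it evaluates the hyperfactorial limit in log space so the enormous intermediate quantities never overflow a floating-point number.

```python
# Numerical check of the limit A = lim H(n) / (n^(n^2/2 + n/2 + 1/12) * e^(-n^2/4)).
import math

def log_hyperfactorial(n):
    """log H(n) with H(n) = 1^1 * 2^2 * ... * n^n."""
    return sum(k * math.log(k) for k in range(1, n + 1))

def glaisher_estimate(n):
    log_a = log_hyperfactorial(n) - (n * n / 2 + n / 2 + 1 / 12) * math.log(n) + n * n / 4
    return math.exp(log_a)

print(glaisher_estimate(10))     # about 1.2824 (already close)
print(glaisher_estimate(1000))   # about 1.2824271... (converging toward A = 1.2824271291...)
```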
|
https://en.wikipedia.org/wiki/Ringing%20artifacts
|
In signal processing, particularly digital image processing, ringing artifacts are artifacts that appear as spurious signals near sharp transitions in a signal. Visually, they appear as bands or "ghosts" near edges; audibly, they appear as "echoes" near transients, particularly sounds from percussion instruments; most noticeable are the pre-echoes. The term "ringing" is used because the output signal oscillates at a fading rate around a sharp transition in the input, similar to a bell after being struck. As with other artifacts, their minimization is a criterion in filter design.
Introduction
The main cause of ringing artifacts is a signal being bandlimited (specifically, not having high frequencies) or passed through a low-pass filter; this is the frequency domain description.
In terms of the time domain, the cause of this type of ringing is the ripples in the sinc function, which is the impulse response (time domain representation) of a perfect low-pass filter. Mathematically, this is called the Gibbs phenomenon.
One may distinguish overshoot (and undershoot), which occurs when transitions are accentuated – the output is higher than the input – from ringing, where after an overshoot, the signal overcorrects and is now below the target value; these phenomena often occur together, and are thus often conflated and jointly referred to as "ringing".
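As a concrete illustration of the time-domain description above, the following sketch (not from the article; the cutoff, kernel length, and signal length are arbitrary illustrative choices) passes a step through a truncated-sinc low-pass filter and reports the overshoot and undershoot around the edge.

```python
# Ringing from an (approximately) brick-wall low-pass filter: the step response of a
# truncated-sinc kernel overshoots the step and then oscillates around it (Gibbs phenomenon).
import numpy as np

fc = 0.1                                   # cutoff as a fraction of the sample rate
n = np.arange(-100, 101)                   # truncated (hence imperfect) sinc kernel
kernel = 2 * fc * np.sinc(2 * fc * n)      # impulse response of the ideal low-pass filter
kernel /= kernel.sum()                     # unity DC gain

step = np.r_[np.zeros(200), np.ones(200)]  # a sharp transition in the input
out = np.convolve(step, kernel, mode="same")

print(f"max output: {out.max():.3f}")      # > 1: overshoot just after the edge
print(f"min output: {out.min():.3f}")      # < 0: the complementary undershoot before it
# The decaying oscillation on either side of the edge is the "ringing" described above.
```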
The term "ringing" is most often used for ripples in the time domain, though it is also sometimes used for frequency domain effects:
windowing a filter in the time domain by a rectangular function causes ripples in the frequency domain for the same reason that a brick-wall low-pass filter (a rectangular function in the frequency domain) causes ripples in the time domain; in each case the Fourier transform of the rectangular function is the sinc function.
There are related artifacts caused by other frequency domain effects,
and similar artifacts due to unrelated causes.
Causes
Description
By definition, ringing occu
|
https://en.wikipedia.org/wiki/Lego%20Mindstorms%20EV3
|
LEGO Mindstorms EV3 (stylized: LEGO MINDSTORMS EV3) is the third-generation robotics kit in LEGO's Mindstorms line. It is the successor to the second-generation LEGO Mindstorms NXT kit. The "EV" designation refers to the "evolution" of the Mindstorms product line. "3" refers to the fact that it is the third generation of computer modules: the first was the RCX and the second was the NXT. It was officially announced on January 4, 2013, and was released in stores on September 1, 2013. The education edition was released on August 1, 2013. There are many competitions using this set, including the FIRST LEGO League Challenge and the World Robot Olympiad, sponsored by LEGO.
After an announcement in October 2022, The Lego Group officially discontinued Lego Mindstorms at the end of 2022.
Overview
The biggest change from the LEGO Mindstorms NXT and NXT 2.0 to the EV3 is the technological advances in the programmable brick. The main processor of the NXT was an ARM7 microcontroller, whereas the EV3 has a more powerful ARM9 CPU running Linux. A USB connector and a Micro SD slot (up to 32 GB) are new to the EV3. It comes with plans to build 5 different robots: EV3RSTORM, GRIPP3R, R3PTAR, SPIK3R, and TRACK3R. LEGO has also released instructions online to build 12 additional projects: ROBODOZ3R, BANNER PRINT3R, EV3MEG, BOBB3E, MR-B3AM, RAC3 TRUCK, KRAZ3, EV3D4, EL3CTRIC GUITAR, DINOR3X, WACK3M, and EV3GAME. It uses a program called LEGO Mindstorms EV3 Home Edition, which is based on LabVIEW, to write code using blocks instead of lines. However, it can also be programmed directly on the robot and the programs saved there. MicroPython support has recently been added.
The EV3 Home (31313) set consists of: 1 EV3 programmable brick, 2 Large Motors, 1 Medium Motor, 1 Touch Sensor, 1 Color Sensor, 1 Infrared Sensor, 1 Remote Control, cables, USB cable, and 585 TECHNIC elements.
The Education EV3 Core Set (45544) set consists of: 1 EV3 programmable brick, 2 Large Motors, 1 Medium Motor, 2 Touch Sensors,
|
https://en.wikipedia.org/wiki/Mathematical%20fiction
|
Mathematical fiction is a genre of creative fictional work in which mathematics and mathematicians play important roles. The form and the medium of the works are not important. The genre may include poems, short stories, novels or plays; comic books; films, videos, or audio works. One of the earliest, and most studied, works of this genre is Flatland: A Romance of Many Dimensions, an 1884 satirical novella by the English schoolmaster Edwin Abbott Abbott. Mathematical fiction may have existed since ancient times, but it was only recently rediscovered as a genre of literature; since then there has been a growing body of literature in this genre, and the genre has attracted a growing body of readers. For example, Abbott's Flatland spawned a sequel in the 21st century: a novel titled Flatterland, authored by Ian Stewart and published in 2001.
A database of mathematical fiction
Alex Kasman, a professor of mathematics at the College of Charleston who maintains a database of works that could possibly be included in this genre, uses a broader definition: any work "containing mathematics or mathematicians" is treated as mathematical fiction. Accordingly, Gulliver's Travels by Jonathan Swift, War and Peace by Lev Tolstoy, Mrs. Warren's Profession by George Bernard Shaw, and several similar literary works appear in Kasman's database because these works contain references to mathematics or mathematicians, even though mathematics and mathematicians are not important to their plots. According to this broader approach, the oldest extant work of mathematical fiction is The Birds, a comedy by the Ancient Greek playwright Aristophanes performed in 414 BCE. Kasman's database lists more than one thousand items in diverse categories such as literature, comic books and films.
Some works of mathematical fiction
The top ten results turned up by a search of the website of Mathematical Association of America using the keywords "mathematical fiction" contained references to the fol
|
https://en.wikipedia.org/wiki/Potentiostat
|
A potentiostat is the electronic hardware required to control a three-electrode cell and run most electroanalytical experiments. A bipotentiostat or a polypotentiostat is a potentiostat capable of controlling two working electrodes or more than two working electrodes, respectively.
The system functions by maintaining the potential of the working electrode at a constant level with respect to the reference electrode by adjusting the current at an auxiliary electrode. The heart of the different potentiostatic electronic circuits is an operational amplifier (op amp). It consists of an electric circuit which is usually described in terms of simple op amps.
Primary use
This equipment is fundamental to modern electrochemical studies using three-electrode systems for investigations of reaction mechanisms related to redox chemistry and other chemical phenomena. The dimensions of the resulting data depend on the experiment. In voltammetry, electric current in amperes is plotted against electric potential in volts. In bulk electrolysis, the total coulombs passed (total electric charge) are plotted against time in seconds, even though the experiment measures electric current (amperes) over time. This is done to show that the experiment is approaching an expected number of coulombs.
Most early potentiostats could function independently, providing data output through a physical data trace. Modern potentiostats are designed to interface with a personal computer and operate through a dedicated software package. The automated software allows the user rapidly to shift between experiments and experimental conditions. The computer allows data to be stored and analyzed more effectively, rapidly, and accurately than the earlier standalone devices.
Basic relationships
A potentiostat is a control and measuring device. It comprises an electric circuit which controls the potential across the cell by sensing changes in its resistance and varying the supplied current accordingly.
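Stated compactly with Ohm's law (the symbols below are generic choices of mine, not notation from the article): because the cell potential is held constant, the instrument must vary the output current inversely with the cell resistance.

```latex
% E_c: controlled (constant) cell potential, R_cell: cell resistance,
% I_o: current supplied through the auxiliary (counter) electrode.
I_o \;=\; \frac{E_c}{R_{\mathrm{cell}}},
\qquad
E_c = \text{constant} \;\;\Rightarrow\;\; I_o \propto \frac{1}{R_{\mathrm{cell}}}
```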
|
https://en.wikipedia.org/wiki/List%20of%20mathematical%20physics%20journals
|
This is a list of peer-reviewed scientific journals published in the field of Mathematical Physics.
Advances in Theoretical and Mathematical Physics
Annales Henri Poincaré
Communications in Mathematical Physics
International Journal of Geometric Methods in Modern Physics
Journal of Geometry and Physics
Journal of Mathematical Physics
Journal of Nonlinear Mathematical Physics
Journal of Physics A: Mathematical and Theoretical
Journal of Statistical Physics
Letters in Mathematical Physics
Reports on Mathematical Physics
Reviews in Mathematical Physics
International Journal of Physics and Mathematics
SIGMA (Symmetry, Integrability and Geometry: Methods and Applications)
Teoreticheskaya i Matematicheskaya Fizika (Theoretical and Mathematical Physics), Steklov Mathematical Institute
|
https://en.wikipedia.org/wiki/Random%20pulse-width%20modulation
|
Random pulse-width modulation (RPWM) is a modulation technique introduced for mitigating electromagnetic interference (EMI) of power converters by spreading the energy of the noise signal over a wider bandwidth, so that there are no significant peaks of the noise. This is achieved by randomly varying the main parameters of the pulse-width modulation signal.
Description
Electromagnetic interference (EMI) filters have been widely used for filtering out the conducted emissions generated by power converters since their advent. However, when size is of great concern, as in aircraft and automotive applications, one practical solution for suppressing conducted emissions is random pulse-width modulation (RPWM). In conventional pulse-width modulation (PWM) schemes, the harmonic power is concentrated at deterministic, known frequencies with significant magnitude, which leads to mechanical vibration, noise, and EMI. By applying randomness to the conventional PWM scheme, however, the harmonic power is spread out so that no harmonic of significant magnitude exists, and peak harmonics at discrete frequencies are significantly reduced.
In RPWM, one of the switching parameters of the PWM signal, such as the switching frequency, pulse position, or duty cycle, is varied randomly in order to spread the energy of the PWM signal. Hence, depending on which parameter is made random, RPWM can be classified as random frequency modulation (RFM), random pulse-position modulation (RPPM), or random duty-cycle modulation (RDCM).
The properties of RPWM can be investigated further by looking at the power spectral density (PSD). For conventional PWM, the PSD can be directly determined from the Fourier Series expansion of the PWM signal. However, the PSD of the RPWM signals can be described only by a probabilistic level using the theory of stochastic processes such as wide-sense stationary (WSS) random processes.
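To make the spectral claim concrete, here is a rough numerical sketch (my own illustration; the sample rate, the nominal 20 kHz switching frequency, the duty cycle, and the randomization band are all arbitrary choices) comparing a fixed-frequency PWM signal with a random-frequency (RFM) variant via a crude periodogram.

```python
# Compare the spectrum of conventional PWM with random-frequency PWM (RFM).
import numpy as np

fs = 1_000_000          # sample rate, Hz
duty = 0.4              # constant duty cycle
t_total = 0.05          # seconds of signal

def pwm(freqs):
    """Concatenate one PWM period per entry in `freqs` (Hz) at the given duty cycle."""
    chunks = []
    for f in freqs:
        n = int(fs / f)                     # samples in this switching period
        n_on = int(duty * n)
        chunks.append(np.r_[np.ones(n_on), np.zeros(n - n_on)])
    return np.concatenate(chunks)

rng = np.random.default_rng(0)
n_periods = int(t_total * 20_000)                          # roughly 20 kHz nominal switching
fixed = pwm(np.full(n_periods, 20_000.0))                  # conventional PWM: fixed frequency
random_f = pwm(rng.uniform(15_000, 25_000, n_periods))     # RFM: frequency drawn per period

for name, x in [("fixed", fixed), ("RFM", random_f)]:
    spectrum = np.abs(np.fft.rfft(x - x.mean())) ** 2 / len(x)   # crude periodogram
    print(f"{name:6s} peak/mean spectral ratio: {spectrum.max() / spectrum.mean():.1f}")
# The RFM ratio comes out much smaller: the harmonic energy is spread over a band
# instead of being concentrated at multiples of the switching frequency.
```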
RFM
Among the different RPWM techniques, RFM (random frequen
|
https://en.wikipedia.org/wiki/Consolidated%20Tape%20Association
|
The Consolidated Tape Association (CTA) oversees the Securities Information Processor that disseminates real-time trade and quote information (market data) in New York Stock Exchange (NYSE) and American Stock Exchange (AMEX) listed securities (stocks and bonds). It is currently chaired by Emily Kasparov of the Chicago Stock Exchange, the first woman and the youngest chair elected to the position.
CTA manages two Plans to govern the collection, processing and dissemination of trade and quote data: the Consolidated Tape Plan, which governs trades, and the Consolidated Quotation Plan, which governs quotes. The Plans were filed with and approved by the Securities and Exchange Commission (SEC) in accordance with Section 11A of the Securities Exchange Act of 1934.
Since the late 1970s, all SEC-registered exchanges and market centers that trade NYSE or AMEX-listed securities send their trades and quotes to a central consolidator where the Consolidated Tape System (CTS) and Consolidated Quotation System (CQS) data streams are produced and distributed worldwide. The CTA is the operating authority for CQS and CTS.
Participant exchanges
The current Participants include:
Cboe BZX Exchange (BZX)
Cboe BYX Exchange (BYX)
Cboe EDGX Exchange (EDGX)
Cboe EDGA Exchange (EDGA)
Financial Industry Regulatory Authority (FINRA)
Nasdaq ISE (ISE)
Nasdaq OMX BX (BSE)
Nasdaq OMX PHLX (PHLX)
Nasdaq Stock Market (NASDAQ)
New York Stock Exchange (NYSE)
NYSE Arca (ARCA)
NYSE American (AMEX)
NYSE Chicago (CHX)
NYSE National (NSX)
Acquisition and distribution of market data
The New York Stock Exchange is the Administrator of Network A, which includes NYSE-listed securities, and the American Stock Exchange is the Administrator of Network B, which includes AMEX-listed securities.
CTS and CQS receive trade and quote information, respectively from NYSE, AMEX, and the other regional market centers using a standard message format. Each system validates its respective message format, ve
|
https://en.wikipedia.org/wiki/Cyclomorphosis
|
Cyclomorphosis (also known as seasonal polyphenism) is the name given to the occurrence of cyclic or seasonal changes in the phenotype of an organism through successive generations.
In species undergoing cyclomorphosis, physiological characteristics and development cycles of individuals being born depend on the time of the year at which they are conceived.
It occurs in small aquatic invertebrates that reproduce by parthenogenesis and give rise to several generations annually. It occurs especially in marine planktonic animals, and is thought to be caused by the epigenetic effect of environmental cues on the organism, thereby altering the course of their development.
|
https://en.wikipedia.org/wiki/Corelis
|
Corelis, Inc, a subsidiary of Electronic Warfare Associates, is a private American company categorized under Electronic Equipment & Supplies and based in Cerritos, California.
History
Corelis was incorporated in 1991 and initially provided engineering services primarily to the aerospace and defense industries. Corelis introduced its first JTAG boundary-scan products in 1998. In 2006, Electronic Warfare Associates, Inc. (EWA), a global provider of technology and engineering services to the aerospace, defense and commercial industries, announced its acquisition of Corelis, Inc. In 2008, the appointment of George B. La Fever as Corelis President and CEO finalized the transition of Corelis, Inc. into EWA Technologies, Inc., a wholly owned subsidiary of the EWA corporate family of high-technology companies. In May 2018, David Mason was appointed as Corelis President and CEO.
Products
Corelis offers two distinct types of products and services: Standard Products (Boundary Scan Test Systems and Development Tools); and Custom Test Systems and System Integration.
Boundary Scan
Corelis introduced its first JTAG boundary-scan products in 1998. Corelis offers boundary-scan/JTAG software and hardware products. Its ScanExpress boundary-scan systems are used for structural testing as well as JTAG functional emulation test and in-system programming of Flash memory, CPLDs, and FPGAs.
In 2007, Corelis released ScanExpress JET, a test tool that combines boundary scan and functional test (FCT) technologies for test coverage.
Test Systems and Integration
Systems are available for design and debugging, manufacturing test, and field service and support. A variety of system options are available including desktop solutions as well as portable solutions for use in the field with laptops. Corelis also provides engineering services, training, and customer support.
Projects
Between 1991 and 1998, Corelis offered engineering services and licensed HP technologies. Co
|
https://en.wikipedia.org/wiki/Cyber%20manufacturing
|
Cyber manufacturing is a concept derived from cyber-physical systems (CPS) that refers to a modern manufacturing system offering an information-transparent environment to facilitate asset management, provide reconfigurability, and maintain productivity. Compared with conventional experience-based management systems, cyber manufacturing provides an evidence-based environment to keep equipment users aware of networked asset status and to translate raw data into assessments of possible risks and actionable information. Driving technologies include the design of cyber-physical systems, the combination of engineering domain knowledge and computer science, and information technologies. Among them, mobile applications for manufacturing are an area of specific interest to industry and academia.
Motivation
The idea of cyber manufacturing originates from the fact that Internet-enabled services have added business value in economic sectors such as retail, music, consumer products, transportation, and healthcare; however, compared to existing Internet-enabled sectors, manufacturing assets are less connected and less accessible in real time. In addition, current manufacturing enterprises make decisions following a top-down approach, from overall equipment effectiveness to the assignment of production requirements, without considering the condition of machines. This usually leads to inconsistency in operations management due to a lack of linkage between factories, possible overstock in spare-part inventory, as well as unexpected machine downtime. Such a situation calls for connectivity between machines as a foundation, and analytics on top of that as a necessity to translate raw data into information that actually facilitates user decision making. Expected functionalities of cyber manufacturing systems include machine connectivity and data acquisition, machine health prognostics, fleet-based asset management, and manufacturing reconfigurability.
Technology
Several technologies are involved in developing
|
https://en.wikipedia.org/wiki/Woeseian%20revolution
|
The Woeseian revolution was the progression of the phylogenetic tree of life concept from two main divisions, known as the Prokarya and Eukarya, into three domains now classified as Bacteria, Archaea, and Eukaryotes. The discovery of the new domain stemmed from the work of biophysicist Carl Woese in 1977 from a principle of evolutionary biology designated as Woese's dogma. It states that the evolution of ribosomal RNA (rRNA) was a necessary precursor to the evolution of modern life forms. Although the three-domain system has been widely accepted, the initial introduction of Woese’s discovery received criticism from the scientific community.
Phylogenetic implications
The basis of phylogenetics was limited by the technology of the time, which led to a greater dependence on phenotypic classification before advances that would allow for molecular methods of organization. This was a major reason why the dichotomy of all living things, being either animal or plant in nature, was deemed an acceptable theory. Without the genetic insight provided by nucleic acid sequencing of shared molecular material, the phylogenetic tree of life and other such phylogenies were bound to be inaccurate. Woese's advances in molecular sequencing and phylogenetic organization allowed for a better understanding of the three domains of life: the Bacteria, Archaea, and Eukaryotes. Among the varying types of shared rRNA, the small subunit rRNA was deemed the best molecule to sequence for distinguishing phylogenetic relationships because of its relatively small size, ease of isolation, and universal distribution.
Controversy
This reorganization caused an initial pushback: it wasn't accepted until nearly a decade after its publication. Possible factors that led to initial criticisms of his discovery included Woese's oligonucleotide cataloging, of which he was one of "only two or three people in the world" to be able to execute th
|
https://en.wikipedia.org/wiki/List%20of%20second%20moments%20of%20area
|
The following is a list of second moments of area of some shapes. The second moment of area, also known as the area moment of inertia, is a geometrical property of an area which reflects how its points are distributed with respect to an arbitrary axis. The unit of dimension of the second moment of area is length to the fourth power, L^4, and it should not be confused with the mass moment of inertia. If the piece is thin, however, the mass moment of inertia equals the area density times the area moment of inertia.
Second moments of area
Please note that for the second moment of area equations in the table below: I_x = \iint y^2 \, dx \, dy and I_y = \iint x^2 \, dx \, dy.
Parallel axis theorem
The parallel axis theorem can be used to determine the second moment of area of a rigid body about any axis, given the body's second moment of area about a parallel axis through the body's centroid, the area of the cross section, and the perpendicular distance (d) between the axes.
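Written out (the standard form of the theorem; I_cm denotes the centroidal second moment of area, A the cross-sectional area, and d the perpendicular distance named above):

```latex
% Second moment of area about an axis parallel to, and a distance d from, the centroidal axis:
I \;=\; I_{\mathrm{cm}} + A\,d^{2}
```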
See also
List of moments of inertia
List of centroids
Second polar moment of area
|
https://en.wikipedia.org/wiki/Rapid%20prototyping
|
Rapid prototyping is a group of techniques used to quickly fabricate a scale model of a physical part or assembly using three-dimensional computer aided design (CAD) data.
Construction of the part or assembly is usually done using 3D printing or "additive layer manufacturing" technology.
The first methods for rapid prototyping became available in mid 1987 and were used to produce models and prototype parts. Today, they are used for a wide range of applications and are used to manufacture production-quality parts in relatively small numbers if desired without the typical unfavorable short-run economics. This economy has encouraged online service bureaus. Historical surveys of RP technology start with discussions of simulacra production techniques used by 19th-century sculptors. Some modern sculptors use the progeny technology to produce exhibitions and various objects. The ability to reproduce designs from a dataset has given rise to issues of rights, as it is now possible to interpolate volumetric data from 2D images.
As with CNC subtractive methods, the computer-aided design–computer-aided manufacturing (CAD–CAM) workflow in the traditional rapid prototyping process starts with the creation of geometric data, either as a 3D solid using a CAD workstation, or as 2D slices using a scanning device. For rapid prototyping this data must represent a valid geometric model; namely, one whose boundary surfaces enclose a finite volume, contain no holes exposing the interior, and do not fold back on themselves. In other words, the object must have an "inside". The model is valid if for each point in 3D space the computer can determine uniquely whether that point lies inside, on, or outside the boundary surface of the model. CAD post-processors approximate the application vendors' internal CAD geometric forms (e.g., B-splines) with a simplified mathematical form, which in turn is expressed in a specified data format, a common feature in additive manufacturing: STL
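As a small illustration of the kind of facet-based format the paragraph ends on, the sketch below (not from the article; the file name and coordinates are arbitrary) writes a single triangular facet in ASCII STL. A real model would need enough facets to enclose a volume with no holes, matching the validity requirement described above.

```python
# Minimal ASCII STL writer: one facet of a hypothetical model.
triangle = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
with open("example.stl", "w") as f:
    f.write("solid example\n")
    f.write("  facet normal 0 0 1\n")        # unit normal of this facet
    f.write("    outer loop\n")
    for x, y, z in triangle:
        f.write(f"      vertex {x} {y} {z}\n")
    f.write("    endloop\n")
    f.write("  endfacet\n")
    f.write("endsolid example\n")
```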
|
https://en.wikipedia.org/wiki/Home%20network
|
A home network or home area network (HAN) is a type of computer network that facilitates communication among devices within the close vicinity of a home. Devices capable of participating in this network, for example, smart devices such as network printers and handheld mobile computers, often gain enhanced emergent capabilities through their ability to interact. These additional capabilities can be used to increase the quality of life inside the home in a variety of ways, such as automation of repetitive tasks, increased personal productivity, enhanced home security, and easier access to entertainment.
Origin
IPv4 address exhaustion has forced most Internet service providers to grant only a single WAN-facing IP address for each residential account. Multiple devices within a residence or small office are provisioned with Internet access by establishing a local area network (LAN) for the local devices, with IP addresses reserved for private networks. A network router is configured with the provider's IP address on the WAN interface, which is shared among all devices in the LAN by network address translation.
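The private-address arrangement described above can be illustrated with Python's standard ipaddress module; the specific subnet and addresses below are just common examples, not taken from the article.

```python
import ipaddress

# Typical home LAN: hosts draw from a private (RFC 1918) range and share one
# provider-assigned public address on the WAN side via network address translation.
lan = ipaddress.ip_network("192.168.1.0/24")
host = ipaddress.ip_address("192.168.1.42")   # a device on the LAN

print(host in lan)        # True  - the host belongs to the home subnet
print(host.is_private)    # True  - RFC 1918 space, not routable on the public Internet
print(ipaddress.ip_address("8.8.8.8").is_private)  # False - an ordinary public address, for contrast
```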
Infrastructure devices
Certain devices on a home network are primarily concerned with enabling or supporting the communications of the kinds of end devices home-dwellers more directly interact with. Unlike their data center counterparts, these "networking" devices are compact and passively cooled, aiming to be as hands-off and non-obtrusive as possible:
A gateway establishes physical and data link layer connectivity to a WAN over a service provider's native telecommunications infrastructure. Such devices typically contain a cable, DSL, or optical modem bound to a network interface controller for Ethernet. Routers are often incorporated into these devices for additional convenience.
A router establishes network layer connectivity between a WAN and the home network. It also performs the key function of network address translation that allows independently add
|
https://en.wikipedia.org/wiki/Mead%E2%80%93Conway%20VLSI%20chip%20design%20revolution
|
The Mead–Conway VLSI chip design revolution, or Mead and Conway revolution, was a very-large-scale integration (VLSI) design revolution starting in 1978 which resulted in a worldwide restructuring of academic materials in computer science and electrical engineering education, and was paramount for the development of industries based on the application of microelectronics.
A prominent factor in promoting this design revolution throughout industry was the DARPA-funded VLSI Project instigated by Mead and Conway which spurred development of electronic design automation.
Details
When the integrated circuit was originally invented and commercialized, the initial chip designers were co-located with the physicists, engineers and factories that understood integrated circuit technology. At that time, fewer than 100 transistors would fit in an integrated circuit "chip". The design capability for such circuits was centered in industry, with universities struggling to catch up. Soon, the number of transistors which fit in a chip started doubling every year. (The doubling period later grew to two years.) Much more complex circuits could then fit on a single chip, but the device physicists who fabricated the chips were not experts in electronic circuit design, so their designs were limited more by their expertise and imaginations than by limitations in the technology.
In 1978–79, when approximately 20,000 transistors could be fabricated in a single chip, Carver Mead and Lynn Conway wrote the textbook Introduction to VLSI Systems. It was published in 1979 and became a bestseller, since it was the first VLSI (Very Large Scale Integration) design textbook usable by non-physicists. ("In a self-aligned CMOS process, a transistor is formed wherever the gate layer ... crosses a diffusion layer." from: Integrated circuit § Manufacturing) The authors intended the book to fill a gap in the literature and introduce electrical engineering and computer science students to integrated s
|
https://en.wikipedia.org/wiki/Prosection
|
A prosection is the dissection of a cadaver (human or animal) or part of a cadaver by an experienced anatomist in order to demonstrate for students anatomic structure. In a dissection, students learn by doing; in a prosection, students learn by either observing a dissection being performed by an experienced anatomist or examining a specimen that has already been dissected by an experienced anatomist (etymology: Latin pro- "before" + sectio "a cutting")
A prosection may also refer to the dissected cadaver or cadaver part which is then reassembled and provided to students for review.
Use of prosections in medicine
Prosections are used primarily in the teaching of anatomy in disciplines as varied as human medicine, chiropractic, veterinary medicine, and physical therapy. Prosections may also be used to teach surgical techniques (such as the suturing of skin), pathology, physiology, reproduction medicine and theriogenology, and other topics.
The use of the prosection teaching technique is somewhat controversial in medicine. In the teaching of veterinary medicine, the goal is to "create the best quality education ... while ensuring that animals are not used harmfully and that respect for animal life is engendered within the student." Others have concluded that dissections and prosections have a negative impact on students' respect for patients and human life. Some scholars argue that while actual hands-on experience is essential, alternatives such as plastinated or freeze-dried cadavers are just as effective in the teaching of anatomy while dramatically reducing the number of cadavers or cadaver parts needed. Other alternatives such as instructional videos, plastic models, and printed materials also exist. Some studies find them equally effective as dissection or prosections, and some schools of human medicine in the UK have abandoned the use of cadavers entirely. But others question the usefulness of these alternatives, arguing dissection or prosection of cadavers ar
|
https://en.wikipedia.org/wiki/Norator
|
In electronics, a norator is a theoretical linear, time-invariant one-port which can have an arbitrary current and voltage between its terminals. A norator represents a controlled voltage or current source with infinite gain.
Inserting a norator in a circuit schematic provides whatever current and voltage the outside circuit demands, in particular, the demands of Kirchhoff's circuit laws. For example, the output of an ideal opamp behaves as a norator, producing nonzero output voltage and current that meet circuit requirements despite a zero input.
A norator is often paired with a nullator to form a nullor.
Two trivial cases are worth noting: a nullator in parallel with a norator is equivalent to a short circuit (zero voltage, any current), and a nullator in series with a norator is an open circuit (zero current, any voltage).
|
https://en.wikipedia.org/wiki/Access%20level
|
In computer science and computer programming, access level denotes the set of permissions or restrictions provided to a data type. Reducing access level is an effective method for limiting failure modes, reducing debugging time, and simplifying overall system complexity. It restricts variable modification to only the methods defined within the interface to the class. Thus, it is incorporated into many fundamental software design patterns. In general, a given object cannot be created, read, updated or deleted by any function without having a sufficient access level.
The two most common access levels are public and private, which denote, respectively, permission across the entire program scope and permission only within the corresponding class. A third, protected, extends permissions to all subclasses of the corresponding class. Access level modifiers are commonly used in Java as well as in C#, which further provides the internal level. In C++, the only difference between a struct and a class is the default access level, which is private for classes and public for structs.
To illustrate the benefit: consider a public variable which can be accessed from any part of a program. If an error occurs, the culprit could be within any portion of the program, including various sub-dependencies. In a large code base, this leads to thousands of potential sources. Alternatively, consider a private variable. Due to access restrictions, all modifications to its value must occur via functions defined within the class. Therefore, the error is structurally contained within the class. There is often only a single source file for each class, which means debugging only requires evaluation of a single file. With sufficient modularity and minimal access level, large code bases can avoid many challenges associated with complexity.
Example: Bank Balance Class
Retrieved from Java Coffee Break Q&A
public class bank_balance
{
    public String owner;
    private int balance;
    public bank_balance(String owner, int balance) // constructor completed from the truncated "public bank_b..."
    {
        this.owner = owner;
        this.balance = balance;
    }
}
|
https://en.wikipedia.org/wiki/Asymptotic%20safety%20in%20quantum%20gravity
|
Asymptotic safety (sometimes also referred to as nonperturbative renormalizability) is a concept in quantum field theory which aims at finding a consistent and predictive quantum theory of the gravitational field. Its key ingredient is a nontrivial fixed point of the theory's renormalization group flow which controls the behavior of the coupling constants in the ultraviolet (UV) regime and renders physical quantities safe from divergences. Although originally proposed by Steven Weinberg to find a theory of quantum gravity, the idea of a nontrivial fixed point providing a possible UV completion can be applied also to other field theories, in particular to perturbatively nonrenormalizable ones. In this respect, it is similar to quantum triviality.
The essence of asymptotic safety is the observation that nontrivial renormalization group fixed points can be used to generalize the procedure of perturbative renormalization. In an asymptotically safe theory the couplings do not need to be small or tend to zero in the high energy limit but rather tend to finite values: they approach a nontrivial UV fixed point. The running of the coupling constants, i.e. their scale dependence described by the renormalization group (RG), is thus special in its UV limit in the sense that all their dimensionless combinations remain finite. This suffices to avoid unphysical divergences, e.g. in scattering amplitudes. The requirement of a UV fixed point restricts the form of the bare action and the values of the bare coupling constants, which become predictions of the asymptotic safety program rather than inputs.
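In renormalization-group terms this can be summarized as follows (a standard formulation with notation of my choosing, not taken from this article): the dimensionless couplings g_i(k) at RG scale k flow to a zero of their beta functions at finite, generally nonzero values.

```latex
% RG flow of the dimensionless couplings g_i with scale k, and the fixed-point condition:
k\,\frac{\partial g_i(k)}{\partial k} \;=\; \beta_i\bigl(g_1(k),\, g_2(k),\, \ldots\bigr),
\qquad
\beta_i\bigl(g^{*}\bigr) \;=\; 0,
% with at least some g_i^* nonzero (a nontrivial fixed point) and g_i(k) -> g_i^* as
% k -> infinity, so every dimensionless combination stays finite in the UV limit.
```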
As for gravity, the standard procedure of perturbative renormalization fails since Newton's constant, the relevant expansion parameter, has negative mass dimension rendering general relativity perturbatively nonrenormalizable. This has driven the search for nonperturbative frameworks describing quantum gravity, including asymptotic safety which in contrast to other approaches is cha
|
https://en.wikipedia.org/wiki/6174
|
The number 6174 is known as Kaprekar's constant after the Indian mathematician D. R. Kaprekar. This number is renowned for the following rule:
Take any four-digit number, using at least two different digits (leading zeros are allowed).
Arrange the digits in descending and then in ascending order to get two four-digit numbers, adding leading zeros if necessary.
Subtract the smaller number from the bigger number.
Go back to step 2 and repeat.
The above process, known as Kaprekar's routine, will always reach its fixed point, 6174, in at most 7 iterations. Once 6174 is reached, the process will continue yielding 7641 – 1467 = 6174. For example, choose 1459:
9541 – 1459 = 8082
8820 – 0288 = 8532
8532 – 2358 = 6174
7641 – 1467 = 6174
The only four-digit numbers for which Kaprekar's routine does not reach 6174 are repdigits such as 1111, which give the result 0000 after a single iteration. All other four-digit numbers eventually reach 6174 if leading zeros are used to keep the number of digits at 4. For numbers with three identical digits and a fourth digit that is one higher or lower (such as 2111), it is essential to keep the leading zero on three-digit intermediate results; for example: 2111 – 1112 = 0999; 9990 – 0999 = 8991; 9981 – 1899 = 8082; 8820 – 0288 = 8532; 8532 – 2358 = 6174.
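The routine described above is easy to transcribe directly. The sketch below is an illustrative implementation (function names are mine); it pads intermediate results to four digits so leading zeros are preserved, and then confirms the seven-iteration bound by brute force over every valid starting value.

```python
# Kaprekar's routine for four-digit numbers.
def kaprekar_step(n: int) -> int:
    digits = f"{n:04d}"                      # pad with leading zeros
    high = int("".join(sorted(digits, reverse=True)))
    low = int("".join(sorted(digits)))
    return high - low

def iterations_to_6174(n: int) -> int:
    """Number of steps Kaprekar's routine takes to reach 6174 from n."""
    steps = 0
    while n != 6174:
        n = kaprekar_step(n)
        steps += 1
    return steps

print(iterations_to_6174(1459))   # 3; the worked example above adds a fourth line only to show the fixed point
valid = [n for n in range(1, 10000) if len(set(f"{n:04d}")) > 1]   # exclude repdigits
print(max(iterations_to_6174(n) for n in valid))                   # 7, the stated bound
```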
Other "Kaprekar's constants"
There can be analogous fixed points for digit lengths other than four; for instance, if we use 3-digit numbers, then most sequences (i.e., other than repdigits such as 111) will terminate in the value 495 in at most 6 iterations. Sometimes these numbers (495, 6174, and their counterparts in other digit lengths or in bases other than 10) are called "Kaprekar constants".
Other properties
6174 is a 7-smooth number, i.e. none of its prime factors are greater than 7.
6174 can be written as the sum of the first three powers of 18:
18^3 + 18^2 + 18^1 = 5832 + 324 + 18 = 6174, and coincidentally, 6 + 1 + 7 + 4 = 18.
The sum of squares of the prime factors of 6174 is a
|
https://en.wikipedia.org/wiki/Steam%20infusion
|
Steam Infusion is a direct-contact heating process in which steam condenses on the surface of a pumpable food product. Its primary use is for the gentle and rapid heating of a variety of food ingredients and products including milk, cream, soymilk, ketchup, soups and sauces.
Unlike steam injection and traditional vessel-based steam heating, the steam infusion process surrounds the liquid food product with steam as opposed to passing steam through the liquid.
Steam Infusion allows food product to be cooked, mixed and pumped within a single unit, often removing the need for multiple stages of processing.
History
Steam infusion was first used in pasteurization and has since been developed for further liquid heating applications.
First generation
In the 1960s, APV PLC launched the first steam infusion system under the Palarisator brand name. This involves a two-stage process whereby the liquid is cascaded into a large pressurized steam chamber and is sterilized while falling as a film or droplets through the chamber. The liquid is then condensed at the chilled bottom of the chamber.
Second generation
The Steam Infusion process was first developed in 2000 by Pursuit Dynamics PLC as a method for marine propulsion. The process has since been developed for applications in brewing, food and beverages, public health and safety, bioenergy, industrial licensing, and waste treatment worldwide. The process creates an environment of vaporised product surrounded by high-energy steam. The supersonic steam flow entrains and vaporises the process flow to form a multiphase flow, which heats the suspended particles by surface conduction and condensation. The condensation of the steam causes the process flow to return to a liquid state. This causes rapid and uniform heating across the unit, making it applicable to industrial cooking processes. This process has been use
|
https://en.wikipedia.org/wiki/Out-of-band%20agreement
|
In the exchange of information over a communication channel, an out-of-band agreement is an agreement or understanding between the communicating parties that is not included in any message sent over the channel but which is relevant for the interpretation of such messages.
By extension, in a client–server or provider-requester setting, an out-of-band agreement is an agreement or understanding that governs the semantics of the request/response interface but which is not part of the formal or contractual description of the interface specification itself.
See also
API
Contract
Out-of-band
Off-balance-sheet
External links
SakaiProject definition
|
https://en.wikipedia.org/wiki/Knowledge-based%20processor
|
Knowledge-based processors (KBPs) are used for processing packets in computer networks. Knowledge-based processors are designed with the goal of increasing the performance of IPv6 networks. By contributing to the buildout of the IPv6 network, KBPs provide the means for an improved and more secure networking system.
Standards
All networks are required to perform the following functions:
IPv4/IPv6 multilayer packet/flow classification
Policy-based routing and Policy enforcement (QoS)
Longest Prefix Match (CIDR)
Differentiated Services (DiffServ)
IP Security (IPSec)
Server Load Balancing
Transaction verification
All of the above functions must occur at high speeds in advanced networks. Knowledge-based processors contain embedded databases that store the information required to process packets that travel through a network at wire speed. Knowledge-based processors are a new addition to intelligent networking that allows these functions to occur at high speed while also providing for lower power consumption.
Knowledge-based processors currently target the 3rd layer of the 7 layer OSI model which is devoted to packet processing.
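As one example of the table lookups listed above, the sketch below implements longest prefix match in plain Python (an illustrative software version with made-up table entries; a knowledge-based processor performs the equivalent lookup in dedicated embedded hardware at line rate rather than by iterating in software).

```python
# Illustrative longest-prefix-match (LPM) lookup over a tiny forwarding table.
import ipaddress

forwarding_table = {                       # prefix -> next hop (hypothetical entries)
    "10.0.0.0/8": "core-router",
    "10.1.0.0/16": "edge-router",
    "10.1.2.0/24": "access-switch",
}

def longest_prefix_match(dst):
    addr = ipaddress.ip_address(dst)
    best, best_len = None, -1
    for prefix, next_hop in forwarding_table.items():
        net = ipaddress.ip_network(prefix)
        if addr in net and net.prefixlen > best_len:
            best, best_len = next_hop, net.prefixlen
    return best

print(longest_prefix_match("10.1.2.7"))    # access-switch (the most specific /24 wins)
print(longest_prefix_match("10.9.9.9"))    # core-router   (only the /8 matches)
```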
Advantages
The advantages that knowledge based processors offer are the ability to execute multiple simultaneous decision making processes for a range of network-aware processing functions. These include routing, Quality of Service (QOS), access control for both security and billing, as well as the forwarding of voice/video packets. These functions improve the performance of advanced Internet applications in IPv6 networks such as VOD (Video on demand), VoIP (voice over Internet protocol), and streaming of video and audio.
Knowledge-based processors use a variety of techniques to improve network functioning such as parallel processing, deep pipelining and advanced power management techniques. Improvements in each of these areas allows for existing components to carry on their functions at wired speeds more efficiently thus improving
|
https://en.wikipedia.org/wiki/Fully%20differential%20amplifier
|
A fully differential amplifier (FDA) is a DC-coupled high-gain electronic voltage amplifier with differential inputs and differential outputs. In its ordinary usage, the output of the FDA is controlled by two feedback paths which, because of the amplifier's high gain, almost completely determine the output voltage for any given input.
In a fully differential amplifier, common-mode noise such as power supply disturbances is rejected; this makes FDAs especially useful as part of a mixed-signal integrated circuit.
An FDA is often used to convert an analog signal into a form more suitable for driving into an analog-to-digital converter; many modern high-precision ADCs have differential inputs.
The ideal FDA
For any input voltages, the ideal FDA has infinite open-loop gain, infinite bandwidth, infinite input impedances resulting in zero input currents, infinite slew rate, zero output impedance and zero noise.
In the ideal FDA, the difference in the output voltages is equal to the difference between the input voltages multiplied by the gain. The common mode voltage of the output voltages is not dependent on the input voltage. In many cases, the common mode voltage can be directly set by a third voltage input.
Input voltage: V_{\text{in}} = V_{\text{in}+} - V_{\text{in}-}
Output voltage: V_{\text{out}} = V_{\text{out}+} - V_{\text{out}-}
Output common-mode voltage: V_{\text{ocm}} = \tfrac{1}{2}\left(V_{\text{out}+} + V_{\text{out}-}\right)
A real FDA can only approximate this ideal, and the actual parameters are subject to drift over time and with changes in temperature, input conditions, etc. Modern integrated FET or MOSFET FDAs approximate these ideals more closely than bipolar ICs where large signals must be handled at room temperature over a limited bandwidth; input impedance, in particular, is much higher, although bipolar FDAs usually exhibit superior (i.e., lower) input offset drift and noise characteristics.
Where the limitations of real devices can be ignored, an FDA can be viewed as a Black Box with gain; circuit function and parameters are determined by feedback, usually negative. An FDA, as implemented in practic
|
https://en.wikipedia.org/wiki/Maze%20runner
|
In electronic design automation, maze runner is a connection routing method that represents the entire routing space as a grid. Parts of this grid are blocked by components, specialised areas, or already present wiring. The grid size corresponds to the wiring pitch of the area. The goal is to find a chain of grid cells that go from point A to point B.
A maze runner may use the Lee algorithm. It uses a wave-propagation style (a wave is the set of all cells that can be reached in n steps) throughout the routing space. The wave stops when the target is reached, and the path is determined by backtracking through the cells.
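A minimal grid router in this style might look like the following sketch (an illustrative implementation, not production EDA code): breadth-first wave expansion from the source cell, then backtracking from the target along strictly decreasing distances.

```python
# Minimal Lee-style maze router on a grid. 0 = free cell, 1 = blocked by components or wiring.
from collections import deque

def lee_route(grid, start, target):
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    frontier = deque([start])
    while frontier:                                    # wave expansion
        cell = frontier.popleft()
        if cell == target:
            break
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and (nr, nc) not in dist:
                dist[(nr, nc)] = dist[cell] + 1
                frontier.append((nr, nc))
    if target not in dist:
        return None                                    # no route exists
    path = [target]                                    # backtrack: step to any neighbour with dist - 1
    while path[-1] != start:
        r, c = path[-1]
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if dist.get((nr, nc)) == dist[path[-1]] - 1:
                path.append((nr, nc))
                break
    return list(reversed(path))

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(lee_route(grid, (0, 0), (2, 0)))   # a shortest chain of grid cells from point A to point B
```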
See also
Autorouter
|
https://en.wikipedia.org/wiki/MCU%208051%20IDE
|
MCU 8051 IDE is a free software integrated development environment for microcontrollers based on the 8051. MCU 8051 IDE has a built-in simulator not only for the MCU itself, but also LCD displays and simple LED outputs as well as button inputs. It supports two programming languages: C (using SDCC) and assembly and runs on both Windows and Unix-based operating systems, such as FreeBSD and Linux.
Features
MCU simulator with many debugging features: register status, step by step, interrupt viewer, external memory viewer, code memory viewer, etc.
Simulator for certain electronic peripherals like LEDs, LED displays, LED matrices, LCD displays, etc.
Support for C language
Native macro-assembler
Support for ASEM-51 and other assemblers
Advanced text editor with syntax highlighting and validation
Support for vim and nano embedded in the IDE
Simple hardware programmer for certain AT89Sxx MCUs
Scientific calculator: time delay calculation and code generation, base converter, etc.
Hexadecimal editor
Supported MCUs
The current version 1.4 supports many microcontrollers including:
* 8051
* 80C51
* 8052
* AT89C2051
* AT89C4051
* AT89C51
* AT89C51RC
* AT89C52
* AT89C55WD
* AT89LV51
* AT89LV52
* AT89LV55
* AT89S52
* AT89LS51
* AT89LS52
* AT89S8253
* AT89S2051
* AT89S4051
* T87C5101
* T83C5101
* T83C5102
* TS80C32X2
* TS80C52X2
* TS87C52X2
* AT80C32X2
* AT80C52X2
* AT87C52X2
* AT80C54X2
* AT80C58X2
* AT87C54X2
* AT87C58X2
* TS80C54X2
* TS80C58X2
* TS87C54X2
* TS87C58X2
* TS80C31X2
* AT80C31X2
* 8031
* 8751
* 8032
* 8752
* 80C31
* 87C51
* 80C52
* 87C52
* 80C32
* 80C54
* 87C54
* 80C58
* 87C58
See also
8051 information
Assembly language
C language
External links
Paul's 8051 Tools, Projects and Free Code
ASEM-51
SDCC
|
https://en.wikipedia.org/wiki/Ambiguity
|
Ambiguity is the type of meaning in which a phrase, statement, or resolution is not explicitly defined, making several interpretations plausible. A common aspect of ambiguity is uncertainty. It is thus an attribute of any idea or statement whose intended meaning cannot be definitively resolved, according to a rule or process with a finite number of steps. (The prefix ambi- reflects the idea of "two," as in "two meanings.")
The concept of ambiguity is generally contrasted with vagueness. In ambiguity, specific and distinct interpretations are permitted (although some may not be immediately obvious), whereas with vague information it is difficult to form any interpretation at the desired level of specificity.
Linguistic forms
Lexical ambiguity is contrasted with semantic ambiguity. The former represents a choice between a finite number of known and meaningful context-dependent interpretations. The latter represents a choice between any number of possible interpretations, none of which may have a standard agreed-upon meaning. This form of ambiguity is closely related to vagueness.
Ambiguity in human language is argued to reflect principles of efficient communication. Languages that communicate efficiently will avoid sending information that is redundant with information provided in the context. This can be shown mathematically to result in a system which is ambiguous when context is neglected. In this way, ambiguity is viewed as a generally useful feature of a linguistic system.
Linguistic ambiguity can be a problem in law, because the interpretation of written documents and oral agreements is often of paramount importance.
Lexical ambiguity
The lexical ambiguity of a word or phrase applies to it having more than one meaning in the language to which the word belongs. "Meaning" here refers to whatever should be represented by a good dictionary. For instance, the word "bank" has several distinct lexical definitions, including "financial institution" and "edge of
|
https://en.wikipedia.org/wiki/List%20of%20centroids
|
The following is a list of centroids of various two-dimensional and three-dimensional objects. The centroid of an object X in n-dimensional space is the intersection of all hyperplanes that divide X into two parts of equal moment about the hyperplane. Informally, it is the "average" of all points of X. For an object of uniform composition, the centroid of a body is also its center of mass. In the case of the two-dimensional objects shown below, the hyperplanes are simply lines.
2-D Centroids
For each two-dimensional shape below, the area and the centroid coordinates are given:
Where the centroid coordinates are marked as zero, the centroid lies at the origin; those coordinates follow from halving the length of the shape along the corresponding axis, which in these cases lands on the origin and is therefore zero.
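For reference, the tabulated coordinates are instances of the general definition for a planar region X of area A (standard formulas, stated here for convenience):

```latex
% Centroid of a planar region X with area A (for uniform density this is also the center of mass):
\bar{x} \;=\; \frac{1}{A}\iint_{X} x \, dA,
\qquad
\bar{y} \;=\; \frac{1}{A}\iint_{X} y \, dA
```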
3-D Centroids
For each three-dimensional body below, the volume and the centroid coordinates are given:
See also
List of moments of inertia
List of second moments of area
|
https://en.wikipedia.org/wiki/Runt
|
In a group of animals (usually a litter of animals born in multiple births), a runt is a member which is significantly smaller or weaker than the others. Owing to its small size, a runt in a litter faces obvious disadvantage, including difficulties in competing with its siblings for survival and possible rejection by its mother. Therefore, in the wild, a runt is less likely to survive infancy.
Even among domestic animals, runts often face rejection. They may be placed under the direct care of an experienced animal breeder, although the animal's size and weakness coupled with the lack of natural parental care make this difficult. Some tamed animals are the result of reared runts.
Not all litters have runts. All animals in a litter will naturally vary slightly in size and weight, but the smallest is not considered a "runt" if it is healthy and close in weight to its littermates. It may be perfectly capable of competing with its siblings for nutrition and other resources. A runt is specifically an animal that suffered in utero from deprivation of nutrients by comparison to its siblings, or from a genetic defect, and thus is born underdeveloped or less fit than expected.
In popular culture
Literature
Wilbur, the pig from Charlotte's Web, is the runt of his litter.
Orson, the pig in Jim Davis' U.S. Acres, is a runt who was bullied by his normal siblings. The strip changed direction when he was moved to a different farm and settled in with a supporting cast of oddball animals.
Shade the bat from Silverwing is a runt.
Fiver and Pipkin from Watership Down are runts, and their names in the Lapine language, Hrairoo and Hlao-roo, reflect this fact (the suffix -roo means "Small" or "undersized").
Clifford the Big Red Dog was born a runt, but inexplicably began to grow explosively until he became 25 feet tall.
Cadpig, a female Dalmatian puppy in Dodie Smith's children's novels The Hundred and One Dalmatians and The Starlight Barking, is the runt of her litter and is t
|
https://en.wikipedia.org/wiki/Configurable%20mixed-signal%20IC
|
Configurable Mixed-signal IC (abbreviated as CMIC) is a category of ICs comprising a matrix of analog and digital blocks which are configurable through programmable (OTP) non-volatile memory. The technology, in combination with its design software and development kits, allows immediate prototyping of custom mixed-signal circuits, as well as the integration of multiple discrete components into a single IC to reduce PCB cost, size and assembly issues.
See also
Field-programmable analog array
Programmable system-on-chip
|
https://en.wikipedia.org/wiki/Die%20%28integrated%20circuit%29
|
A die, in the context of integrated circuits, is a small block of semiconducting material on which a given functional circuit is fabricated. Typically, integrated circuits are produced in large batches on a single wafer of electronic-grade silicon (EGS) or other semiconductor (such as GaAs) through processes such as photolithography. The wafer is cut (diced) into many pieces, each containing one copy of the circuit. Each of these pieces is called a die.
There are three commonly used plural forms: dice, dies, and die. To simplify handling and integration onto a printed circuit board, most dies are packaged in various forms.
Manufacturing process
Most dies are composed of silicon and used for integrated circuits. The process begins with the production of monocrystalline silicon ingots. These ingots are then sliced into disks with a diameter of up to 300 mm.
These wafers are then polished to a mirror finish before going through photolithography. In many steps the transistors are manufactured and connected with metal interconnect layers. These prepared wafers then go through wafer testing to test their functionality. The wafers are then sliced and sorted to filter out the faulty dies. Functional dies are then packaged and the completed integrated circuit is ready to be shipped.
Uses
A die can host many types of circuits. One common use case of an integrated circuit die is in the form of a Central Processing Unit (CPU). Through advances in modern technology, the size of the transistor within the die has shrunk exponentially, following Moore's Law. Other uses for dies can range from LED lighting to power semiconductor devices.
Images
See also
Die preparation
Integrated circuit design
Wire bonding and ball bonding
|
https://en.wikipedia.org/wiki/Flat-panel%20display
|
A flat-panel display (FPD) is an electronic display used to display visual content such as text or images. It is present in consumer, medical, transportation, and industrial equipment.
Flat-panel displays are thin and lightweight, provide better linearity, and are capable of higher resolution than typical consumer-grade TVs from earlier eras. They are usually no more than a few centimetres thick. While the highest resolution for consumer-grade CRT televisions was 1080i, many flat-panel displays in the 2020s are capable of 1080p and 4K resolution.
In the 2010s, portable consumer electronics such as laptops, mobile phones, and portable cameras have used flat-panel displays since they consume less power and are lightweight. As of 2016, flat-panel displays have almost completely replaced CRT displays.
Most 2010s-era flat-panel displays use LCD or light-emitting diode (LED) technologies, sometimes combined. Most LCD screens are back-lit with color filters used to display colors. In many cases, flat-panel displays are combined with touch screen technology, which allows the user to interact with the display in a natural manner. For example, modern smartphone displays often use OLED panels, with capacitive touch screens.
Flat-panel displays can be divided into two display device categories: volatile and static. The former requires that pixels be periodically electronically refreshed to retain their state (e.g. liquid-crystal displays (LCD)), and can only show an image when it has power. On the other hand, static flat-panel displays rely on materials whose color states are bistable, such as displays that make use of e-ink technology, and as such retain content even when power is removed.
History
The first engineering proposal for a flat-panel TV was by General Electric in 1954 as a result of its work on radar monitors. The publication of their findings gave all the basics of future flat-panel TVs and monitors. But GE did not continue with the R&D required and never built a working flat panel a
|
https://en.wikipedia.org/wiki/Leibniz%27s%20notation
|
In calculus, Leibniz's notation, named in honor of the 17th-century German philosopher and mathematician Gottfried Wilhelm Leibniz, uses the symbols dx and dy to represent infinitely small (or infinitesimal) increments of x and y, respectively, just as Δx and Δy represent finite increments of x and y, respectively.
Consider y as a function of a variable x, or y = f(x). If this is the case, then the derivative of y with respect to x, which later came to be viewed as the limit of Δy/Δx as Δx approaches zero,
was, according to Leibniz, the quotient of an infinitesimal increment of y by an infinitesimal increment of x, or dy/dx = f′(x),
where the right-hand side is Joseph-Louis Lagrange's notation for the derivative of f at x. The infinitesimal increments dx and dy are called differentials. Related to this is the integral ∫ f(x) dx, in which the infinitesimal increments are summed (e.g. to compute lengths, areas and volumes as sums of tiny pieces), for which Leibniz also supplied a closely related notation involving the same differentials, a notation whose efficiency proved decisive in the development of continental European mathematics.
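In LaTeX, the relationship just described can be written as follows (a restatement for reference, assuming y = f(x) is differentiable at x):
% Leibniz's quotient notation alongside the modern limit definition,
% together with his integral notation built from the same differentials.
\[
  \frac{dy}{dx}
  = \lim_{\Delta x \to 0} \frac{\Delta y}{\Delta x}
  = \lim_{\Delta x \to 0} \frac{f(x + \Delta x) - f(x)}{\Delta x}
  = f'(x),
  \qquad
  \int f(x)\, dx .
\]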
Leibniz's concept of infinitesimals, long considered to be too imprecise to be used as a foundation of calculus, was eventually replaced by rigorous concepts developed by Weierstrass and others in the 19th century. Consequently, Leibniz's quotient notation was re-interpreted to stand for the limit of the modern definition. However, in many instances, the symbol dy/dx did seem to act as an actual quotient would, and its usefulness kept it popular even in the face of several competing notations. Several different formalisms were developed in the 20th century that can give rigorous meaning to notions of infinitesimals and infinitesimal displacements, including nonstandard analysis, tangent space, O notation and others.
The derivatives and integrals of calculus can be packaged into the modern theory of differential forms, in which the derivative is genuinely a ratio of two differentials, and the integral likewise behaves in exact accordance w
|
https://en.wikipedia.org/wiki/Flexible-fuel%20vehicle
|
A flexible-fuel vehicle (FFV) or dual-fuel vehicle (colloquially called a flex-fuel vehicle) is an alternative fuel vehicle with an internal combustion engine designed to run on more than one fuel, usually gasoline blended with either ethanol or methanol fuel, and both fuels are stored in the same common tank. Modern flex-fuel engines are capable of burning any proportion of the resulting blend in the combustion chamber, as fuel injection and spark timing are adjusted automatically according to the actual blend detected by a fuel composition sensor. This device is known as an oxygen sensor; it reads the oxygen level in the stream of exhaust gases, and its signal is used to enrich or lean the fuel mixture going into the engine. Flex-fuel vehicles are distinguished from bi-fuel vehicles, where two fuels are stored in separate tanks and the engine runs on one fuel at a time, for example, compressed natural gas (CNG), liquefied petroleum gas (LPG), or hydrogen.
The most common commercially available FFV in the world market is the ethanol flexible-fuel vehicle, with about 60 million automobiles, motorcycles and light-duty trucks manufactured and sold worldwide by March 2018, concentrated in four markets: Brazil (30.5 million light-duty vehicles and over 6 million motorcycles), the United States (21 million by the end of 2017), Canada (1.6 million by 2014), and Europe, led by Sweden (243,100). In addition to flex-fuel vehicles running with ethanol, in Europe and the US, mainly in California, there have been successful test programs with methanol flex-fuel vehicles, known as M85 flex-fuel vehicles. There have also been successful tests using P-series fuels with E85 flex-fuel vehicles, but as of June 2008, this fuel is not yet available to the general public. These successful tests with P-series fuels were conducted on Ford Taurus and Dodge Caravan flexible-fuel vehicles.
Though technology exists to allow ethanol FFVs to run on any mixture of gasoline and ethanol, from pu
|
https://en.wikipedia.org/wiki/Bridging%20model
|
In computer science, a bridging model is an abstract model of a computer which provides a conceptual bridge between the physical implementation of the machine and the abstraction available to a programmer of that machine; in other words, it is intended to provide a common level of understanding between hardware and software engineers.
A successful bridging model is one which can be efficiently implemented in reality and efficiently targeted by programmers; in particular, it should be possible for a compiler to produce good code from a typical high-level language. The term was introduced by Leslie Valiant's 1990 paper A Bridging Model for Parallel Computation, which argued that the strength of the von Neumann model was largely responsible for the success of computing as a whole. The paper goes on to develop the bulk synchronous parallel model as an analogous model for parallel computing.
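As an illustration only (not taken from Valiant's paper), the short Python sketch below mimics the structure of one bulk synchronous parallel superstep: local computation, a communication phase, and an implicit barrier. The names Processor, superstep, compute and route are hypothetical.
# Minimal sketch of the bulk synchronous parallel (BSP) structure:
# a superstep of local computation, message exchange, then a barrier.
from dataclasses import dataclass, field

@dataclass
class Processor:
    pid: int
    value: int
    inbox: list = field(default_factory=list)

def superstep(procs, compute, route):
    outboxes = {p.pid: compute(p) for p in procs}      # local computation
    for pid, msgs in outboxes.items():                 # communication phase
        for dest, payload in route(pid, msgs):
            procs[dest].inbox.append(payload)
    # Implicit barrier: no processor begins the next superstep until
    # every message sent above has been delivered.

# Example: one superstep of a parallel sum over 4 processors.
procs = [Processor(i, i + 1) for i in range(4)]
superstep(procs,
          compute=lambda p: [p.value],
          route=lambda pid, msgs: [(0, m) for m in msgs])  # all send to 0
print(sum(procs[0].inbox))  # 10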
|
https://en.wikipedia.org/wiki/Atari%20AMY
|
The Atari AMY (or Amy) was a 64-oscillator additive synthesizer implemented as a single-IC sound chip. It was initially developed as part of a new advanced chipset, codenamed "Rainbow", that included a graphics processor and sprite generator. Rainbow was considered for use in the 16/32-bit workstation known as Sierra, but the Sierra project was bogged down in internal committee meetings. However, the Rainbow chipset development continued up until Atari's CED and HCD divisions were sold to Tramel Technologies, Ltd. For a time, AMY was slated to be included in the Atari 520ST, then an updated version of the Atari 8-bit family, the 65XEM, but development was discontinued. The technology was later sold, but when the new owners started to introduce it as a professional synthesizer, Atari sued, and work on the project ended.
Description
The AMY was based around a bank of 64 oscillators, which emit sine waves of a specified frequency. The sine waves were created by looking up the amplitude at a given time from a 16-bit table stored in ROM, rather than calculating the amplitude using math hardware. The signals could then be mixed together to perform additive synthesis. The AMY also included a number of ramp generators that could be used to smoothly modify the amplitude or frequency of a given oscillator over a given time. During the design phase, it was believed these would be difficult to implement in hardware, so only eight frequency ramps are included.
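The following Python sketch (a modern software approximation, not Atari code) illustrates the general technique described above: sine partials read from a lookup table, shaped by linear amplitude ramps, and summed for additive synthesis. All constants and names are illustrative.
# Software approximation of table-lookup additive synthesis with
# linear amplitude ramps, in the spirit of the description above.
import math

SAMPLE_RATE = 48_000
TABLE_SIZE = 1024
SINE_TABLE = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def oscillator(freq, n_samples, amp_start, amp_end):
    """One partial: table-lookup sine shaped by a linear amplitude ramp."""
    out, phase = [], 0.0
    for n in range(n_samples):
        amp = amp_start + (amp_end - amp_start) * n / n_samples  # ramp
        out.append(amp * SINE_TABLE[int(phase) % TABLE_SIZE])
        phase += freq * TABLE_SIZE / SAMPLE_RATE
    return out

def additive(partials, n_samples):
    """Sum several oscillators slaved to multiples of a fundamental."""
    voices = [oscillator(f, n_samples, a0, a1) for f, a0, a1 in partials]
    return [sum(v[n] for v in voices) for n in range(n_samples)]

# Fundamental at 220 Hz plus two harmonics, each with its own ramp.
samples = additive([(220.0, 0.0, 1.0), (440.0, 0.5, 0.2), (660.0, 0.3, 0.0)],
                   n_samples=SAMPLE_RATE // 10)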
Sounds were created by selecting one of the oscillators to be the master channel, and then attaching other oscillators and ramps to it, slaved to some multiple of the fundamental frequency. Sound programs then sent the AMY a series of instructions setting the master frequency, and instructions on how quickly to ramp to new values. The output of the multiple oscillators was then summed and sent to the output. The AMY allowed the oscillators to be combined in any fashion, two at a time, to produce up to eight output channe
|
https://en.wikipedia.org/wiki/List%20of%20mathematical%20proofs
|
A list of articles with mathematical proofs:
Theorems of which articles are primarily devoted to proving them
Bertrand's postulate and a proof
Estimation of covariance matrices
Fermat's little theorem and some proofs
Gödel's completeness theorem and its original proof
Mathematical induction and a proof
Proof that 0.999... equals 1
Proof that 22/7 exceeds π
Proof that e is irrational
Proof that π is irrational
Proof that the sum of the reciprocals of the primes diverges
Articles devoted to theorems of which a (sketch of a) proof is given
Banach fixed-point theorem
Banach–Tarski paradox
Basel problem
Bolzano–Weierstrass theorem
Brouwer fixed-point theorem
Buckingham π theorem (proof in progress)
Burnside's lemma
Cantor's theorem
Cantor–Bernstein–Schroeder theorem
Cayley's formula
Cayley's theorem
Clique problem (to do)
Compactness theorem (very compact proof)
Erdős–Ko–Rado theorem
Euler's formula
Euler's four-square identity
Euler's theorem
Five color theorem
Five lemma
Fundamental theorem of arithmetic
Gauss–Markov theorem (brief pointer to proof)
Gödel's incompleteness theorem
Gödel's first incompleteness theorem
Gödel's second incompleteness theorem
Goodstein's theorem
Green's theorem (to do)
Green's theorem when D is a simple region
Heine–Borel theorem
Intermediate value theorem
Itô's lemma
Kőnig's lemma
Kőnig's theorem (set theory)
Kőnig's theorem (graph theory)
Lagrange's theorem (group theory)
Lagrange's theorem (number theory)
Liouville's theorem (complex analysis)
Markov's inequality (proof of a generalization)
Mean value theorem
Multivariate normal distribution (to do)
Holomorphic functions are analytic
Pythagorean theorem
Quadratic equation
Quotient rule
Ramsey's theorem
Rao–Blackwell theorem
Rice's theorem
Rolle's theorem
Splitting lemma
squeeze theorem
Sum rule in differentiation
Sum rule in integration
Sylow theorems
Transcendence of e and π (as corollaries of Lindemann–Weierstrass)
Tychonoff's theorem (to do)
Ultrafilter lemma
Ultraparallel theorem
|
https://en.wikipedia.org/wiki/List%20of%20BioBlitzes%20in%20New%20Zealand
|
This is a list of BioBlitzes that have been held in New Zealand. The date is the first day of the BioBlitz if held over several days. This list only includes those that were major public events. BioBlitz was established in New Zealand by Manaaki Whenua - Landcare Research, initially based on seed funding from The Royal Society of NZ's "Science & Technology Promotion Fund 2003/2004". BioBlitz events have always been a collaborative activity of professional and amateur taxonomic experts from multiple organisations and the public. Auckland BioBlitz events were coordinated by Manaaki Whenua, with coordination moving to Auckland Museum from 2015. The first events ran for 24 hours continuously, e.g. from 3 pm Friday overnight to 3 pm Saturday. Subsequently, this changed to 24 hours spread across mostly daylight hours over 2 consecutive days.
|
https://en.wikipedia.org/wiki/Form%20classification
|
Form classification is the classification of organisms based on their morphology, which does not necessarily reflect their biological relationships. Form classification, generally restricted to palaeontology, reflects uncertainty; the goal of science is to move "form taxa" to biological taxa whose affinity is known.
Form taxonomy is restricted to fossils that preserve too few characters for a conclusive taxonomic definition or assessment of their biological affinity, but whose study is made easier if a binomial name is available by which to identify them. The term "form classification" is preferred to "form taxonomy"; taxonomy suggests that the classification implies a biological affinity, whereas form classification is about giving a name to a group of morphologically-similar organisms that may not be related.
A "parataxon" (not to be confused with parataxonomy), or "sciotaxon" (Gr. "shadow taxon"), is a classification based on incomplete data: for instance, the larval stage of an organism that cannot be matched up with an adult. It reflects a paucity of data that makes biological classification impossible. A sciotaxon is defined as a taxon thought to be equivalent to a true taxon (orthotaxon), but whose identity cannot be established because the two candidate taxa are preserved in different ways and thus cannot be compared directly.
Examples
In zoology
Form taxa are groupings that are based on common overall forms. Early attempts at classification of labyrinthodonts were based on skull shape (the heavily armoured skulls often being the only preserved part). The amount of convergent evolution in the many groups led to a number of polyphyletic taxa. Such groups are united by a common mode of life, often one that is generalist, in consequence acquiring generally similar body shapes by convergent evolution. Ediacaran biota — whether they are the precursors of the Cambrian explosion of the fossil record, or are unrelated to any modern phylum — can currently on
|
https://en.wikipedia.org/wiki/Radar%20ornithology
|
Radar ornithology is the use of radar technology in studies of bird migration and in approaches to prevent bird strikes, particularly to aircraft. The technique was developed from observations of pale wisps seen moving on radar during the Second World War. These were termed "angels", "ghosts", or "phantoms" in Britain and were later identified as being caused by migrating birds. Over time, the technology has been vastly improved with Doppler weather radars that allow the detection of birds, bats, and insects with resolution and sensitivity sufficient to quantify wing-beat rates, which can sometimes aid in the identification of species.
History
According to David Lack, the earliest recorded use of radar in detecting birds came in 1940. The movements of gulls, herons and lapwings that caused some of the detections were visually confirmed. It was, however, only in the 1950s, through the work of Ernst Sutter at Zurich airport, that more elusive "angels" were confirmed to be caused by small passerines. David Lack was one of the pioneers of radar ornithology in England.
Applications
Early radar ornithology mainly focused, due to limitations of the equipment, on the seasonality, timing, intensity, and direction of flocks of birds in migration. Modern weather radars can detect the wing area of flying birds, the speed of flight, the frequency of wing beats, and the direction, distance and altitude of flight. The sensitivity and modern analytical techniques now allow the detection of flying insects as well.
Radar has been used to study seasonal variations in starling roosting behaviour. It has also been used to identify risks to aircraft operations at airports. The technique has also found conservation applications, such as assessing the risk to birds from proposed wind-energy installations and quantifying the number of birds at roost or nesting sites.
|
https://en.wikipedia.org/wiki/Randomized%20benchmarking
|
Randomized benchmarking is an experimental method for measuring the average error rates of quantum computing hardware platforms. The protocol estimates the average error rates by implementing long sequences of randomly sampled quantum gate operations.
Randomized benchmarking is the industry-standard protocol used by quantum hardware developers such as IBM and Google to test the performance of the quantum operations.
The original theory of randomized benchmarking, proposed by Joseph Emerson and collaborators, considered the implementation of sequences of Haar-random operations, but this had several practical limitations. The now-standard protocol for randomized benchmarking (RB) relies on uniformly random Clifford operations, as proposed in 2006 by Dankert et al. as an application of the theory of unitary t-designs. In current usage randomized benchmarking sometimes refers to the broader family of generalizations of the 2005 protocol involving different random gate sets that can identify various features of the strength and type of errors affecting the elementary quantum gate operations. Randomized benchmarking protocols are an important means of verifying and validating quantum operations and are also routinely used for the optimization of quantum control procedures.
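As a hedged illustration of the data analysis involved (not a full Clifford-sampling implementation), the Python sketch below fits the exponential decay model F(m) = A·f^m + B that RB experiments use and converts the fitted decay parameter into an average error rate per gate for a single qubit (d = 2). The data here are synthetic.
# Fit the randomized-benchmarking decay model F(m) = A * f**m + B to
# (synthetic) survival probabilities, then convert f to an error rate.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

def decay(m, A, f, B):
    return A * f**m + B

m = np.arange(1, 201, 10)                    # sequence lengths
true_A, true_f, true_B = 0.5, 0.995, 0.5
F = decay(m, true_A, true_f, true_B) + rng.normal(0, 0.005, m.size)

(A, f, B), _ = curve_fit(decay, m, F, p0=[0.5, 0.99, 0.5])

d = 2                               # dimension of a single qubit
r = (1 - f) * (d - 1) / d           # average error rate per gate
print(f"fitted f = {f:.4f}, average error rate r = {r:.2e}")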
Overview
Randomized benchmarking offers several key advantages over alternative approaches to error characterization. For example, the number of experimental procedures required for full characterization of errors (called tomography) grows exponentially with the number of quantum bits (called qubits). This makes tomographic methods impractical for even small systems of just 3 or 4 qubits. In contrast, randomized benchmarking protocols are the only known approaches to error characterization that scale efficiently as the number of qubits in the system increases. Thus RB can be applied in practice to characterize errors in arbitrarily large quantum processors. Additionally, in experimental quantum comp
|
https://en.wikipedia.org/wiki/Sand%20table
|
A sand table uses constrained sand for modelling or educational purposes. The original version of a sand table may be the abax used by early Greek students. In the modern era, one common use for a sand table is to make terrain models for military planning and wargaming.
Abax
An abax was a table covered with sand commonly used by students, particularly in Greece, to perform studies such as writing, geometry, and calculations.
An abax was the predecessor to the abacus. Objects, such as stones, were added for counting and then columns for place-valued arithmetic. The demarcation between an abax and an abacus seems to be poorly defined in history; moreover, modern definitions of the word abacus universally describe it as a frame with rods and beads and, in general, do not include the definition of "sand table".
The sand table may well have been the predecessor to some board games. ("The word abax, or abacus, is used both for the reckoning-board with its counters and the play-board with its pieces, ..."). Abax is from the old Greek for "sand table".
Ghubar
An Arabic word for sand (or dust) is ghubar (or gubar), and Western numerals (the decimal digits 0–9) are derived from the style of digits written on ghubar tables in North-West Africa and Iberia, also described as the 'West Arabic' or 'gubar' style.
Military use
Sand tables have been used for military planning and wargaming for many years as a field expedient, small-scale map, and in training for military actions. In 1890 a sand table room was built at the Royal Military College of Canada for use in teaching cadets military tactics; this replaced the old sand table room in a pre-college building, in which the weight of the sand had damaged the floor. The use of sand tables increasingly fell out of favour with improved maps, aerial and satellite photography, and later, with digital terrain simulations. More modern sand tables have incorporated augmented reality, such as the Augmented Reality Sandtable (ARES) deve
|
https://en.wikipedia.org/wiki/Big%20O%20in%20probability%20notation
|
The order in probability notation is used in probability theory and statistical theory in direct parallel to the big-O notation that is standard in mathematics. Where the big-O notation deals with the convergence of sequences or sets of ordinary numbers, the order in probability notation deals with convergence of sets of random variables, where convergence is in the sense of convergence in probability.
Definitions
Small o: convergence in probability
For a set of random variables Xn and a corresponding set of constants an (both indexed by n, which need not be discrete), the notation Xn = op(an)
means that the set of values Xn/an converges to zero in probability as n approaches an appropriate limit.
Equivalently, Xn = op(an) can be written as Xn/an = op(1), i.e. P(|Xn/an| ≥ ε) → 0 for every positive ε.
Big O: stochastic boundedness
The notation Xn = Op(an)
means that the set of values Xn/an is stochastically bounded. That is, for any ε > 0, there exists a finite M > 0 and a finite N > 0 such that P(|Xn/an| > M) < ε for all n > N.
Comparison of the two definitions
The difference between the definitions is subtle. If one uses the definition of the limit, one gets:
Big Op: for every ε > 0 there exist a finite M > 0 and a finite N > 0 such that P(|Xn/an| > M) < ε for all n > N.
Small op: for every ε > 0 and every M > 0 there exists a finite N > 0 such that P(|Xn/an| > M) < ε for all n > N.
The difference lies in the M: for stochastic boundedness, it suffices that there exists one (arbitrarily large) M to satisfy the inequality, and M is allowed to depend on ε (hence Mε). On the other hand, for convergence, the statement has to hold not only for one, but for any (arbitrarily small) M. In a sense, this means that the sequence must be bounded, with a bound that gets smaller as the sample size increases.
This suggests that if a sequence is op(1), then it is Op(1), i.e. convergence in probability implies stochastic boundedness. But the reverse does not hold.
Example
If Xn is a stochastic sequence such that each element has finite variance, then Xn = Op(√E[Xn²])
(see Theorem 14.4-1 in Bishop et al.)
If, moreover, E[Xn²]/an² is a null sequence for a sequence (an) of real numbers, then Xn/an converges to zero in probability by Chebyshev's inequality, so Xn = op(an).
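A small numerical illustration (not from the article): for i.i.d. samples with finite variance, the sample-mean deviation X̄n − μ is op(1) while √n·(X̄n − μ) is Op(1). The simulation below estimates the relevant probabilities; the thresholds are arbitrary.
# Empirical illustration: (sample mean - mu) shrinks to zero in probability,
# while sqrt(n)*(sample mean - mu) stays within a fixed bound w.h.p.
import numpy as np

rng = np.random.default_rng(1)
mu, M, reps = 0.0, 3.0, 1_000

for n in (100, 1_000, 10_000):
    means = rng.normal(mu, 1.0, size=(reps, n)).mean(axis=1)
    dev = np.abs(means - mu)
    p_small_o = np.mean(dev > 0.05)            # should shrink toward 0
    p_big_O = np.mean(np.sqrt(n) * dev > M)    # should stay small and stable
    print(f"n={n:>6}: P(|Xbar-mu|>0.05)~{p_small_o:.3f}, "
          f"P(sqrt(n)|Xbar-mu|>{M})~{p_big_O:.3f}")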
|
https://en.wikipedia.org/wiki/List%20of%20theoretical%20physicists
|
The following is a partial list of notable theoretical physicists. Arranged by century of birth, then century of death, then year of birth, then year of death, then alphabetically by surname. For explanation of symbols, see Notes at end of this article.
Ancient times
Kaṇāda (6th century BCE or 2nd century BCE)
Thales (c. 624 – c. 546 BCE)
Pythagoras^* (c. 570 – c. 495 BCE)
Democritus° (c. 460 – c. 370 BCE)
Aristotle‡ (384–322 BCE)
Archimedesº* (c. 287 – c. 212 BCE)
Hypatia^ªº (c. 350–370; died 415 AD)
Middle Ages
Al Farabi (c. 872 – c. 950)
Ibn al-Haytham (c. 965 – c. 1040)
Al Beruni (c. 973 – c. 1048)
Omar Khayyám (c. 1048 – c. 1131)
Bhaskara II (c. 1114 – c. 1185)
Nasir al-Din Tusi (1201–1274)
Jean Buridan (1301 – c. 1359/62)
Nicole Oresme (c. 1320–1325 – 1382)
Sigismondo Polcastro (1384–1473)
15th–16th century
Nicolaus Copernicusº (1473–1543)
16th century and 16th–17th centuries
Gerolamo Cardano (1501–1576)
Tycho Brahe (1546–1601)
Giordano Bruno (1548–1600)
Galileo Galileiº* (1564–1642)
Johannes Keplerº (1571–1630)
Benedetto Castelli (1578–1643)
René Descartes‡^ (1596–1650)
Bonaventura Cavalieri (1598–1647)
17th century
Pierre de Fermat (1607–1665)
Evangelista Torricelli (1608–1647)
Giovanni Alfonso Borelli (1608–1679)
Francesco Maria Grimaldi (1618–1663)
Jacques Rohault (1618–1672)
Blaise Pascal^ (1623–1662)
Erhard Weigel (1625–1699)
Christiaan Huygens^ (1629–1695)
Ignace-Gaston Pardies (1636–1673)
17th–18th centuries
Vincenzo Viviani (1622–1703)
Isaac Newton^*º (1642–1727)
Gottfried Leibniz^ (1646–1716)
Edmond Pourchot (1651–1734)
Jacob Bernoulli (1655–1705)
Edmond Halley (1656–1742)
Luigi Guido Grandi (1671–1742)
Jakob Hermann (1678–1733)
Jean-Jacques d'Ortous de Mairan (1678–1771)
Nicolaus II Bernoulli (1695–1726)
Pierre Louis Maupertuis (1698–1759)
Daniel Bernoulli (1700–1782)
18th century
Leonhard Euler^ (1707–1783)
Vincenzo Riccati (1707–1785)
Mikhail Lomonosov (1711–1765)
Laura Bassiª* (1711–17
|
https://en.wikipedia.org/wiki/Acclimatization
|
Acclimatization or acclimatisation (also called acclimation or acclimatation) is the process in which an individual organism adjusts to a change in its environment (such as a change in altitude, temperature, humidity, photoperiod, or pH), allowing it to maintain fitness across a range of environmental conditions. Acclimatization occurs in a short period of time (hours to weeks), and within the organism's lifetime (compared to adaptation, which is evolution, taking place over many generations). This may be a discrete occurrence (for example, when mountaineers acclimate to high altitude over hours or days) or may instead represent part of a periodic cycle, such as a mammal shedding heavy winter fur in favor of a lighter summer coat. Organisms can adjust their morphological, behavioral, physical, and/or biochemical traits in response to changes in their environment. While the capacity to acclimate to novel environments has been well documented in thousands of species, researchers still know very little about how and why organisms acclimate the way that they do.
Names
The nouns acclimatization and acclimation (and the corresponding verbs acclimatize and acclimate) are widely regarded as synonymous, both in general vocabulary and in medical vocabulary. The synonym acclimatation is less commonly encountered, and fewer dictionaries enter it.
Methods
Biochemical
In order to maintain performance across a range of environmental conditions, organisms use several strategies to acclimate. In response to changes in temperature, organisms can change the biochemistry of cell membranes, making them more fluid in cold temperatures and less fluid in warm temperatures by increasing the number of membrane proteins. In response to certain stressors, some organisms express so-called heat shock proteins that act as molecular chaperones and reduce denaturation by guiding the folding and refolding of proteins. It has been shown that organisms which are acclimated to high or low t
|
https://en.wikipedia.org/wiki/Proof-carrying%20code
|
Proof-carrying code (PCC) is a software mechanism that allows a host system to verify properties about an application via a formal proof that accompanies the application's executable code. The host system can quickly verify the validity of the proof, and it can compare the conclusions of the proof to its own security policy to determine whether the application is safe to execute. This can be particularly useful in ensuring memory safety (i.e. preventing issues like buffer overflows).
Proof-carrying code was originally described in 1996 by George Necula and Peter Lee.
Packet filter example
The original publication on proof-carrying code in 1996 used packet filters as an example: a user-mode application hands a function written in machine code to the kernel that determines whether or not an application is interested in processing a particular network packet. Because the packet filter runs in kernel mode, it could compromise the integrity of the system if it contains malicious code that writes to kernel data structures. Traditional approaches to this problem include interpreting a domain-specific language for packet filtering, inserting checks on each memory access (software fault isolation), and writing the filter in a high-level language which is compiled by the kernel before it is run. These approaches have performance disadvantages for code as frequently run as a packet filter, except for the in-kernel compilation approach, which only compiles the code when it is loaded, not every time it is executed.
With proof-carrying code, the kernel publishes a security policy specifying properties that any packet filter must obey: for example, will not access memory outside of the packet and its scratch memory area. A theorem prover is used to show that the machine code satisfies this policy. The steps of this proof are recorded and attached to the machine code which is given to the kernel program loader. The program loader can then rapidly validate the proof, allowing i
|
https://en.wikipedia.org/wiki/Microvia
|
Microvias are used as the interconnects between layers in high density interconnect (HDI) substrates and printed circuit boards (PCBs) to accommodate the high input/output (I/O) density of advanced packages. Driven by portability and wireless communications, the electronics industry strives to produce affordable, light, and reliable products with increased functionality. At the electronic component level, this translates to components with increased I/Os with smaller footprint areas (e.g. flip-chip packages, chip-scale packages, and direct chip attachments), and on the printed circuit board and package substrate level, to the use of high density interconnects (HDIs) (e.g. finer lines and spaces, and smaller vias).
Overview
IPC standards revised the definition of a microvia in 2013 to a hole with a depth-to-diameter aspect ratio of 1:1 or less, with the hole depth not to exceed 0.25 mm. Previously, a microvia was any hole less than or equal to 0.15 mm in diameter.
With the advent of smartphones and hand-held electronic devices, microvias have evolved from single-level to stacked microvias that cross over multiple HDI layers. Sequential build-up (SBU) technology is used to fabricate HDI boards. The HDI layers are usually built up from a traditionally manufactured double-sided core board or multilayer PCB. The HDI layers are built on both sides of the traditional PCB one by one with microvias. The SBU process consists of several steps: layer lamination, via formation, via metallization, and via filling. There are multiple choices of materials and/or technologies for each step.
Microvias can be filled with different materials and processes:
Filled with epoxy resin (b-stage) during a sequential lamination process step
Filled with non-conductive or conductive material other than copper as a separate processing step
Plated closed with electroplated copper
Screen printed closed with a copper paste
Buried microvias are required to be filled, while blind microvias on the
|
https://en.wikipedia.org/wiki/Utility%20computing
|
Utility computing, or computer utility, is a service provisioning model in which a service provider makes computing resources and infrastructure management available to the customer as needed, and charges them for specific usage rather than a flat rate. Like other types of on-demand computing (such as grid computing), the utility model seeks to maximize the efficient use of resources and/or minimize associated costs. Utility is the packaging of system resources, such as computation, storage and services, as a metered service. This model has the advantage of a low or no initial cost to acquire computer resources; instead, resources are essentially rented.
This repackaging of computing services became the foundation of the shift to "on demand" computing, software as a service and cloud computing models that further propagated the idea of computing, application and network as a service.
There was some initial skepticism about such a significant shift. However, the new model of computing caught on and eventually became mainstream.
IBM, HP and Microsoft were early leaders in the new field of utility computing, with their business units and researchers working on the architecture, payment and development challenges of the new computing model. Google, Amazon and others started to take the lead in 2008, as they established their own utility services for computing, storage and applications.
Utility computing can support grid computing which has the characteristic of very large computations or sudden peaks in demand which are supported via a large number of computers.
"Utility computing" has usually envisioned some form of virtualization so that the amount of storage or computing power available is considerably larger than that of a single time-sharing computer. Multiple servers are used on the "back end" to make this possible. These might be a dedicated computer cluster specifically built for the purpose of being rented out, or even an under-utilized supercomputer.
|
https://en.wikipedia.org/wiki/Navier%E2%80%93Stokes%20equations
|
The Navier–Stokes equations are partial differential equations which describe the motion of viscous fluid substances, named after French engineer and physicist Claude-Louis Navier and Irish physicist and mathematician George Gabriel Stokes. They were developed over several decades of progressively building the theories, from 1822 (Navier) to 1842–1850 (Stokes).
The Navier–Stokes equations mathematically express momentum balance and conservation of mass for Newtonian fluids. They are sometimes accompanied by an equation of state relating pressure, temperature and density. They arise from applying Isaac Newton's second law to fluid motion, together with the assumption that the stress in the fluid is the sum of a diffusing viscous term (proportional to the gradient of velocity) and a pressure term—hence describing viscous flow. The difference between them and the closely related Euler equations is that Navier–Stokes equations take viscosity into account while the Euler equations model only inviscid flow. As a result, the Navier–Stokes equations are parabolic and therefore have better analytic properties, at the expense of having less mathematical structure (e.g. they are never completely integrable).
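For reference, one commonly quoted form of the equations, for an incompressible Newtonian fluid with constant density ρ, dynamic viscosity μ, velocity field u, pressure p and body force f (a standard presentation, not reproduced from this article's own displays):
% Momentum balance and conservation of mass (continuity) for an
% incompressible Newtonian fluid.
\[
  \rho\!\left(\frac{\partial \mathbf{u}}{\partial t}
      + (\mathbf{u}\cdot\nabla)\mathbf{u}\right)
  = -\nabla p + \mu\,\nabla^{2}\mathbf{u} + \mathbf{f},
  \qquad
  \nabla\cdot\mathbf{u} = 0 .
\]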
The Navier–Stokes equations are useful because they describe the physics of many phenomena of scientific and engineering interest. They may be used to model the weather, ocean currents, water flow in a pipe and air flow around a wing. The Navier–Stokes equations, in their full and simplified forms, help with the design of aircraft and cars, the study of blood flow, the design of power stations, the analysis of pollution, and many other problems. Coupled with Maxwell's equations, they can be used to model and study magnetohydrodynamics.
The Navier–Stokes equations are also of great interest in a purely mathematical sense. Despite their wide range of practical uses, it has not yet been proven whether smooth solutions always exist in three dimensions—i.e., whether t
|
https://en.wikipedia.org/wiki/ITU-T%20Study%20Group%2015
|
The ITU-T Study Group 15 (SG15) 'Transport' is a standardization committee of ITU-T concerned with networks, technologies and infrastructures for transport, access and home. It is responsible for standards such as GPON and G.fast.
Administratively, SG15 is a statutory meeting of the World Telecommunication Standardization Assembly (WTSA), which creates the ITU-T Study Groups and appoints their management teams. The secretariat is provided by the Telecommunication Standardization Bureau (under Director Chaesub Lee).
The goal of SG15 is to produce recommendations (international standards) for networks.
Area of work
SG15 focuses on developing standards and recommendations related to optical transport networks, access network transport, and associated technologies.
Some of the key responsibilities of SG15 include:
Developing international standards for optical and transport networks, which covers fiber-optic communication systems, dense wavelength division multiplexing (DWDM), and synchronization aspects.
Addressing issues related to access network transport, such as digital subscriber lines (DSL), gigabit-capable passive optical networks (GPON), and Ethernet passive optical networks (EPON).
Developing recommendations for network management, control, and performance monitoring, as well as resilience, protection, and restoration mechanisms.
SG15 collaborates with other ITU-T study groups, regional standardization bodies, and industry stakeholders to ensure a comprehensive and coordinated approach to global telecommunication standardization.
See also
ITU-T
|
https://en.wikipedia.org/wiki/Ternary%20fission
|
Ternary fission is a comparatively rare (0.2 to 0.4% of events) type of nuclear fission in which three charged products are produced rather than two. As in other nuclear fission processes, other uncharged particles such as multiple neutrons and gamma rays are produced in ternary fission.
Ternary fission may happen during neutron-induced fission or in spontaneous fission (the type of radioactive decay). About 25% more ternary fission happens in spontaneous fission compared to the same fission system formed after thermal neutron capture, illustrating that these processes remain physically slightly different, even after the absorption of the neutron, possibly because of the extra energy present in the nuclear reaction system of thermal neutron-induced fission.
Quaternary fission, at 1 per 10 million fissions, is also known (see below).
Products
The most common nuclear fission process is "binary fission". It produces two charged, asymmetrical fission products, with the most probable charged products at atomic masses of about 95±15 u and 135±15 u. However, in this conventional fission of large nuclei, the binary process happens merely because it is the most energetically probable.
In anywhere from 2 to 4 fissions per 1000 in a nuclear reactor, the alternative ternary fission process produces three positively charged fragments (plus neutrons, which are not charged and not counted in this reckoning). The smallest of the charged products may range from so small a charge and mass as a single proton (Z=1), up to as large a fragment as the nucleus of argon (Z=18).
Although particles as large as argon nuclei may be produced as the smaller (third) charged product in the usual ternary fission, the most common small fragments from ternary fission are helium-4 nuclei, which make up about 90% of the small fragment products. This high incidence is related to the stability (high binding energy) of the alpha particle, which makes more energy available to the reaction. The second-most common
|
https://en.wikipedia.org/wiki/Transistor%20count
|
The transistor count is the number of transistors in an electronic device (typically on a single substrate or "chip"). It is the most common measure of integrated circuit complexity (although the majority of transistors in modern microprocessors are contained in the cache memories, which consist mostly of the same memory cell circuits replicated many times). The rate at which MOS transistor counts have increased generally follows Moore's law, which observed that the transistor count doubles approximately every two years. However, being directly proportional to the area of a chip, transistor count does not represent how advanced the corresponding manufacturing technology is: a better indication of this is the transistor density (the ratio of a chip's transistor count to its area).
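As a back-of-the-envelope illustration of the doubling rule mentioned above (a rough model with made-up starting numbers, not a prediction):
# Rough Moore's-law extrapolation: transistor count doubling every two years.
def projected_count(base_count, base_year, target_year, doubling_years=2.0):
    """Extrapolate a transistor count assuming a fixed doubling period."""
    return base_count * 2 ** ((target_year - base_year) / doubling_years)

# e.g. starting from a hypothetical 1 billion transistors in 2010:
print(f"{projected_count(1e9, 2010, 2020):.2e}")  # ~3.2e10 after ten years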
The highest transistor count in flash memory is Micron's 2-terabyte (3D-stacked) 16-die, 232-layer V-NAND flash memory chip, with 5.3 trillion floating-gate MOSFETs (3 bits per transistor).
The highest transistor count in a single chip processor is that of the deep learning processor Wafer Scale Engine 2 by Cerebras. It has 2.6 trillion MOSFETs in 84 exposed fields (dies) on a wafer, manufactured using TSMC's 7 nm FinFET process.
As of 2023, the GPU with the highest transistor count is AMD's MI300X, built on TSMC's N5 process and totalling 153 billion MOSFETs.
The highest transistor count in a consumer microprocessor is 134 billion transistors, in Apple's ARM-based dual-die M2 Ultra system on a chip, which is fabricated using TSMC's 5 nm semiconductor manufacturing process.
In terms of computer systems that consist of numerous integrated circuits, the supercomputer with the highest transistor count was the Chinese-designed Sunway TaihuLight, which has for all CPUs/nodes combined "about 400 trillion transistors in the processing part of the hardware" and "the DRAM includes about 12 quadrillion transistors, and that's about 97 percent of all the transistors." To compare, the smallest comp
|
https://en.wikipedia.org/wiki/Radial%20immunodiffusion
|
Radial immunodiffusion (RID), Mancini immunodiffusion or single radial immunodiffusion assay, is an immunodiffusion technique used in immunology to determine the quantity or concentration of an antigen in a sample.
Description
Preparation
A solution containing antibody is added to a heated medium such as agar or agarose dissolved in buffered normal saline. The molten medium is then poured onto a microscope slide or into an open container, such as a Petri dish, and allowed to cool and form a gel. A solution containing the antigen is then placed in a well that is punched into the gel. The slide or container is then covered, closed or placed in a humidity box to prevent evaporation.
The antigen diffuses radially into the medium, forming a circle of precipitin that marks the boundary between the antibody and the antigen. The diameter of the circle increases with time as the antigen diffuses into the medium, reacts with the antibody, and forms insoluble precipitin complexes. The antigen is quantitated by measuring the diameter of the precipitin circle and comparing it with the diameters of precipitin circles formed by known quantities or concentrations of the antigen.
Antigen-antibody complexes are small and soluble when in antigen excess. Therefore, precipitation near the center of the circle is usually less dense than it is near the circle's outer edge, where antigen is less concentrated.
Expansion of the circle reaches an endpoint and stops when free antigen is depleted and when antigen and antibody reach equivalence. However, the clarity and density of the circle's outer edge may continue to increase after the circle stops expanding.
Interpretation
For most antigens, the area and the square of the diameter of the circle at the circle's endpoint are directly proportional to the initial quantity of antigen and are inversely proportional to the concentration of antibody. Therefore, a graph that compares the quantities or concentrations of antigen in the origin
|
https://en.wikipedia.org/wiki/Intrusion%20tolerance
|
Intrusion tolerance is a fault-tolerant design approach to defending information systems against malicious attacks. In that sense, it is also a computer security approach. Abandoning the conventional aim of preventing all intrusions, intrusion tolerance instead calls for triggering mechanisms that prevent intrusions from leading to a system security failure.
Distributed computing
In distributed computing there are two major variants of intrusion tolerance mechanisms: mechanisms based on redundancy, such as Byzantine fault tolerance, and mechanisms based on intrusion detection (as implemented in intrusion detection systems) and intrusion reaction.
Intrusion-tolerant server architectures
Intrusion tolerance has started to influence the design of server architectures in academic institutions and industry. Examples of such server architectures include KARMA, Splunk IT Service Intelligence (ITSI), project ITUA, and the practical Byzantine Fault Tolerance (pBFT) model.
See also
Intrusion detection system evasion techniques
|
https://en.wikipedia.org/wiki/Necessity%20and%20sufficiency
|
In logic and mathematics, necessity and sufficiency are terms used to describe a conditional or implicational relationship between two statements. For example, in the conditional statement "If P then Q", Q is necessary for P, because the truth of Q is guaranteed by the truth of P. (Equivalently, it is impossible to have P without Q, or the falsity of Q ensures the falsity of P.) Similarly, P is sufficient for Q, because P being true always implies that Q is true, but P not being true does not always imply that Q is not true.
In general, a necessary condition is one (possibly one of multiple conditions) that must be present in order for another condition to occur, while a sufficient condition is one that produces the said condition. The assertion that a statement is a "necessary and sufficient" condition of another means that the former statement is true if and only if the latter is true. That is, the two statements must be either simultaneously true, or simultaneously false.
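In symbols, using the same letters as above (a restatement of the definitions, not additional material):
% "P is sufficient for Q" and "Q is necessary for P" both name the same
% conditional (equivalent to its contrapositive); "necessary and
% sufficient" is the biconditional.
\[
  (P \Rightarrow Q)
  \;\equiv\;
  (\neg Q \Rightarrow \neg P),
  \qquad
  (P \text{ necessary and sufficient for } Q)
  \;\equiv\;
  (P \Leftrightarrow Q).
\]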
In ordinary English (also natural language) "necessary" and "sufficient" indicate relations between conditions or states of affairs, not statements. For example, being a male is a necessary condition for being a brother, but it is not sufficient—while being a male sibling is a necessary and sufficient condition for being a brother.
Any conditional statement consists of at least one sufficient condition and at least one necessary condition.
In data analytics, necessity and sufficiency can refer to different causal logics, where Necessary Condition Analysis and Qualitative Comparative Analysis can be used as analytical techniques for examining necessity and sufficiency of conditions for a particular outcome of interest.
Definitions
In the conditional statement, "if S, then N", the expression represented by S is called the antecedent, and the expression represented by N is called the consequent. This conditional statement may be written in several equivalent ways, such as "N if S", "S only if N", "S implies
|
https://en.wikipedia.org/wiki/Random%20flip-flop
|
Random flip-flop (RFF) is a theoretical concept of a non-sequential logic circuit capable of generating true randomness. By definition, it operates as an "ordinary" edge-triggered clocked flip-flop, except that each clock edge takes effect randomly, with probability p = 1/2. Unlike Boolean circuits, which behave deterministically, a random flip-flop behaves non-deterministically. By definition, a random flip-flop is electrically compatible with Boolean logic circuits. Together with them, RFFs make up a full set of logic circuits capable of performing arbitrary algorithms, namely realizing a probabilistic Turing machine.
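A minimal software model of this behaviour (illustrative only; a pseudo-random number generator stands in for the true physical randomness a hardware RFF requires, and the class name is hypothetical):
# Toy D-type random flip-flop: on each rising clock edge the stored bit is
# updated from D only with probability 1/2, otherwise it holds its state.
# A software PRNG is used purely for illustration; a real RFF needs a
# physical (e.g. quantum) source of randomness.
import random

class DTypeRandomFlipFlop:
    def __init__(self):
        self.q = 0

    def clock_edge(self, d: int) -> int:
        if random.random() < 0.5:      # the clock edge "takes effect"
            self.q = d
        return self.q

rff = DTypeRandomFlipFlop()
bits = [rff.clock_edge(1 - rff.q) for _ in range(16)]  # wired to toggle
print(bits)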
Symbol
Random flip-flops come in all the varieties in which ordinary, edge-triggered clocked flip-flops do, for example: D-type random flip-flop (DRFF), T-type random flip-flop (TRFF), JK-type random flip-flop (JKRFF), etc. Symbols for the DRFF, TRFF and JKRFF are shown in Fig. 1.
While varieties are possible, not all of them are needed: a single RFF type can be used to emulate all other types. Emulation of one type of RFF by another type can be done using the same additional gate circuitry as for ordinary flip-flops. Examples are shown in Fig. 2.
Practical realization of random flip-flops
By definition, action of a theoretical RFF is truly random. This is difficult to achieve in practice and is probably best realized through use of physical randomness. A RFF, based on quantum-random effect of photon emission in semiconductor and subsequent detection, has been demonstrated to work well up to a clock frequency of 25 MHz. At a higher clock frequency, subsequent actions of the RFF become correlated. This RFF has been built using bulk components and the effort resulted only in a handful of units. Recently, a monolithic chip containing 2800 integrated RFFs based on quantum randomness has been demonstrated in Bipolar-CMOS-DMOS (BCD) process.
Applications and prospects
One straightforward application of a RFF is generation of random bits, as shown
|
https://en.wikipedia.org/wiki/EEMBC
|
EEMBC, the Embedded Microprocessor Benchmark Consortium, is a non-profit, member-funded organization formed in 1997, focused on the creation of standard benchmarks for the hardware and software used in embedded systems. The goal of its members is to make EEMBC benchmarks an industry standard for evaluating the capabilities of embedded processors, compilers, and the associated embedded system implementations, according to objective, clearly defined, application-based criteria. EEMBC members may contribute to the development of benchmarks, vote at various stages before public distribution, and accelerate testing of their platforms through early access to benchmarks and associated specifications.
Most Popular Benchmark Working Groups
In chronological order of development:
AutoBench 1.1 - single-threaded code for automotive, industrial, and general-purpose applications
Networking - single-threaded code associated with moving packets in networking applications.
MultiBench - multi-threaded code for testing scalability of multicore processors.
CoreMark - measures the performance of central processing units (CPU) used in embedded systems
BXBench - system benchmark measuring the web browsing user-experience, from the click/touch on a URL to final page rendered on the screen, and is not limited to measuring only JavaScript execution.
AndEBench-Pro - system benchmark providing a standardized, industry-accepted method of evaluating Android platform performance. It's available for free download in Google Play.
FPMark - multi-threaded code for both single- and double-precision floating-point workloads, as well as small, medium, and large data sets.
ULPMark - energy-measuring benchmark for ultra-low power microcontrollers; benchmarks include ULPMark-Core (with a focus on microcontroller core activity and sleep modes) and ULPMark-Peripheral (with a focus on microcontroller peripheral activity such as Analog-to-digital converter, Serial Peripheral Interface Bus, Real-time
|
https://en.wikipedia.org/wiki/Outline%20of%20regression%20analysis
|
The following outline is provided as an overview of and topical guide to regression analysis:
Regression analysis – use of statistical techniques for learning about the relationship between one or more dependent variables (Y) and one or more independent variables (X).
Overview articles
Regression analysis
Linear regression
Non-statistical articles related to regression
Least squares
Linear least squares (mathematics)
Non-linear least squares
Least absolute deviations
Curve fitting
Smoothing
Cross-sectional study
Basic statistical ideas related to regression
Conditional expectation
Correlation
Correlation coefficient
Mean square error
Residual sum of squares
Explained sum of squares
Total sum of squares
Visualization
Scatterplot
Linear regression based on least squares
General linear model
Ordinary least squares
Generalized least squares
Simple linear regression
Trend estimation
Ridge regression
Polynomial regression
Segmented regression
Nonlinear regression
Generalized linear models
Generalized linear models
Logistic regression
Multinomial logit
Ordered logit
Probit model
Multinomial probit
Ordered probit
Poisson regression
Maximum likelihood
Cochrane–Orcutt estimation
Computation
Numerical methods for linear least squares
Inference for regression models
F-test
t-test
Lack-of-fit sum of squares
Confidence band
Coefficient of determination
Multiple correlation
Scheffé's method
Challenges to regression modeling
Autocorrelation
Cointegration
Multicollinearity
Homoscedasticity and heteroscedasticity
Lack of fit
Non-normality of errors
Outliers
Diagnostics for regression models
Regression model validation
Studentized residual
Cook's distance
Variance inflation factor
DFFITS
Partial residual plot
Partial regression plot
Leverage
Durbin–Watson statistic
Condition number
Formal aids to model selection
Model selection
Mallows's Cp
Akaike information criterion
Bayesian information criterion
Hannan–Q
|
https://en.wikipedia.org/wiki/Computer%20bureau
|
A computer bureau is a service bureau providing computer services.
Computer bureaus developed during the early 1960s, following the development of time-sharing operating systems. These allowed the services of a single large and expensive mainframe computer to be divided up and sold as a fungible commodity. The development of telecommunications and the first modems encouraged the growth of computer bureaus, as they allowed immediate access to the computer facilities from a customer's own premises.
The computer bureau model shrank during the 1980s, as cheap commodity computers, particularly the PC clone but also the minicomputer, allowed services to be hosted on-premises.
See also
Batch processing
Cloud computing
Grid computing
Service Bureau Corporation
Utility computing
|
https://en.wikipedia.org/wiki/Network%20forensics
|
Network forensics is a sub-branch of digital forensics relating to the monitoring and analysis of computer network traffic for the purposes of information gathering, legal evidence, or intrusion detection. Unlike other areas of digital forensics, network investigations deal with volatile and dynamic information. Network traffic is transmitted and then lost, so network forensics is often a pro-active investigation.
Network forensics generally has two uses. The first, relating to security, involves monitoring a network for anomalous traffic and identifying intrusions. An attacker might be able to erase all log files on a compromised host; network-based evidence might therefore be the only evidence available for forensic analysis. The second form relates to law enforcement. In this case analysis of captured network traffic can include tasks such as reassembling transferred files, searching for keywords and parsing human communication such as emails or chat sessions.
Two systems are commonly used to collect network data: a brute-force "catch it as you can" approach and a more intelligent "stop, look, listen" method.
Overview
Network forensics is a comparatively new field of forensic science. The growing popularity of the Internet in homes means that computing has become network-centric and data is now available outside of disk-based digital evidence. Network forensics can be performed as a standalone investigation or alongside a computer forensics analysis (where it is often used to reveal links between digital devices or reconstruct how a crime was committed).
Marcus Ranum is credited with defining Network forensics as "the capture, recording, and analysis of network events in order to discover the source of security attacks or other problem incidents".
Compared to computer forensics, where evidence is usually preserved on disk, network data is more volatile and unpredictable. Investigators often only have material to examine if packet filters, firewalls, and intrusion dete
|
https://en.wikipedia.org/wiki/Biological%20target
|
A biological target is anything within a living organism to which some other entity (like an endogenous ligand or a drug) is directed and/or binds, resulting in a change in its behavior or function. Examples of common classes of biological targets are proteins and nucleic acids. The definition is context-dependent, and can refer to the biological target of a pharmacologically active drug compound, the receptor target of a hormone (like insulin), or some other target of an external stimulus. Biological targets are most commonly proteins such as enzymes, ion channels, and receptors.
Mechanism
The external stimulus (i.e., the drug or ligand) physically binds to ("hits") the biological target. The interaction between the substance and the target may be:
noncovalent – A relatively weak interaction between the stimulus and the target where no chemical bond is formed between the two interacting partners and hence the interaction is completely reversible.
reversible covalent – A chemical reaction occurs between the stimulus and target in which the stimulus becomes chemically bonded to the target, but the reverse reaction also readily occurs in which the bond can be broken.
irreversible covalent – The stimulus is permanently bound to the target through irreversible chemical bond formation.
Depending on the nature of the stimulus, the following can occur:
There is no direct change in the biological target, but the binding of the substance prevents other endogenous substances (such as activating hormones) from binding to the target. Depending on the nature of the target, this effect is referred to as receptor antagonism, enzyme inhibition, or ion channel blockade.
A conformational change in the target is induced by the stimulus which results in a change in target function. This change in function can mimic the effect of the endogenous substance in which case the effect is referred to as receptor agonism (or channel or enzyme activation) or be the opposite of the endog
|
https://en.wikipedia.org/wiki/List%20of%20geometric%20topology%20topics
|
This is a list of geometric topology topics.
Low-dimensional topology
Knot theory
Knot (mathematics)
Link (knot theory)
Wild knots
Examples of knots
Unknot
Trefoil knot
Figure-eight knot (mathematics)
Borromean rings
Types of knots
Torus knot
Prime knot
Alternating knot
Hyperbolic link
Knot invariants
Crossing number
Linking number
Skein relation
Knot polynomials
Alexander polynomial
Jones polynomial
Knot group
Writhe
Quandle
Seifert surface
Braids
Braid theory
Braid group
Kirby calculus
Surfaces
Genus (mathematics)
Examples
Positive Euler characteristic
2-disk
Sphere
Real projective plane
Zero Euler characteristic
Annulus
Möbius strip
Torus
Klein bottle
Negative Euler characteristic
The boundary of the pretzel is a genus three surface
Embedded/Immersed in Euclidean space
Cross-cap
Boy's surface
Roman surface
Steiner surface
Alexander horned sphere
Klein bottle
Mapping class group
Dehn twist
Nielsen–Thurston classification
Three-manifolds
Moise's Theorem (see also Hauptvermutung)
Poincaré conjecture
Thurston elliptization conjecture
Thurston's geometrization conjecture
Hyperbolic 3-manifolds
Spherical 3-manifolds
Euclidean 3-manifolds, Bieberbach Theorem, Flat manifolds, Crystallographic groups
Seifert fiber space
Heegaard splitting
Waldhausen conjecture
Compression body
Handlebody
Incompressible surface
Dehn's lemma
Loop theorem (aka the Disk theorem)
Sphere theorem
Haken manifold
JSJ decomposition
Branched surface
Lamination
Examples
3-sphere
Torus bundles
Surface bundles over the circle
Graph manifolds
Knot complements
Whitehead manifold
Invariants
Fundamental group
Heegaard genus
tri-genus
Analytic torsion
Manifolds in general
Orientable manifold
Connected sum
Jordan-Schönflies theorem
Signature (topology)
Handle decomposition
Handlebody
h-cobordism theorem
s-cobordism theorem
Manifold decomposition
Hilbert-Smith conjecture
Mapping class group
Orbifolds
Examples
Exotic sphere
Homology sphere
Lens space
I-bundle
See also
topology glossary
List of topo
|
https://en.wikipedia.org/wiki/Ramanujan%E2%80%93Soldner%20constant
|
In mathematics, the Ramanujan–Soldner constant (also called the Soldner constant) is a mathematical constant defined as the unique positive zero of the logarithmic integral function. It is named after Srinivasa Ramanujan and Johann Georg von Soldner.
Its value is approximately μ ≈ 1.45136923488338105028396848589202744949303228…
Since the logarithmic integral is defined by
$$\mathrm{li}(x) = \int_0^x \frac{dt}{\ln t},$$
then using $\mathrm{li}(\mu) = 0$ we have
$$\mathrm{li}(x) = \mathrm{li}(x) - \mathrm{li}(\mu) = \int_\mu^x \frac{dt}{\ln t},$$
thus easing calculation for numbers greater than μ. Also, since the exponential integral function satisfies the equation
$$\mathrm{li}(x) = \mathrm{Ei}(\ln x),$$
the only positive zero of the exponential integral occurs at the natural logarithm of the Ramanujan–Soldner constant, whose value is approximately ln(μ) ≈ 0.372507410781366634461991866…
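The constant is easily reproduced numerically as the positive root of the logarithmic integral. Below is a minimal sketch assuming the mpmath library is available; `mpmath.li` evaluates the logarithmic integral and `findroot` locates its zero.

```python
# Locate the Ramanujan-Soldner constant as the positive zero of li(x).
# Sketch assuming the mpmath arbitrary-precision library is installed.
from mpmath import mp, li, findroot, log

mp.dps = 40                 # work with 40 decimal digits of precision
mu = findroot(li, 1.45)     # li(x) changes sign near 1.45, so start the search there

print(mu)        # 1.4513692348833810502839684858920274494...
print(log(mu))   # 0.372507410781366634461991866... (the zero of Ei)
```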
External links
Mathematical constants
Srinivasa Ramanujan
|
https://en.wikipedia.org/wiki/Scotobiology
|
Scotobiology is the study of biology as directly and specifically affected by darkness, as opposed to photobiology, which describes the biological effects of light.
Overview
The science of scotobiology gathers together under a single descriptive heading a wide range of approaches to the study of the biology of darkness. This includes work on the effects of darkness on the behavior and metabolism of animals, plants, and microbes. Some of this work has been going on for over a century, and lays the foundation for understanding the importance of dark night skies, not only for humans but for all biological species.
The great majority of biological systems have evolved in a world of alternating day and night and have become irrevocably adapted to and dependent on the daily and seasonally changing patterns of light and darkness. Light is essential for many biological activities such as sight and photosynthesis. These are the focus of the science of photobiology. But the presence of uninterrupted periods of darkness, as well as their alternation with light, is just as important to biological behaviour. Scotobiology studies the positive responses of biological systems to the presence of darkness, and not merely the negative effects caused by the absence of light.
Effects of darkness
Many of the biological and behavioural activities of plants, animals (including birds and amphibians), insects, and microorganisms are either adversely affected by light pollution at night or can only function effectively either during or as the consequence of nightly darkness. Such activities include foraging, breeding and social behavior in higher animals, amphibians, and insects, which are all affected in various ways if light pollution occurs in their environment. These are not merely photobiological phenomena; light pollution acts by interrupting critical dark-requiring processes.
But perhaps the most important scotobiological phenomena relate to the regular periodic alternation of
|
https://en.wikipedia.org/wiki/ARKive
|
ARKive was a global initiative with the mission of "promoting the conservation of the world's threatened species, through the power of wildlife imagery", which it did by locating and gathering films, photographs and audio recordings of the world's species into a centralised digital archive. Its priority was the completion of audio-visual profiles for the c. 17,000 species on the IUCN Red List of Threatened Species.
The project was an initiative of Wildscreen, a UK-registered educational charity, based in Bristol. The technical platform was created by Hewlett-Packard, as part of the HP Labs' Digital Media Systems research programme.
ARKive had the backing of leading conservation organisations, including BirdLife International, Conservation International, International Union for Conservation of Nature (IUCN), the United Nations' World Conservation Monitoring Centre (UNEP-WCMC), and the World Wide Fund for Nature (WWF), as well as leading academic and research institutions, such as the Natural History Museum; Royal Botanic Gardens, Kew; and the Smithsonian Institution. It was a member of the Institutional Council of the Encyclopedia of Life.
Two ARKive layers for Google Earth, featuring endangered species and species in the Gulf of Mexico were produced by Google Earth Outreach. The first of these was launched in April 2008 by Wildscreen's Patron, Sir David Attenborough.
The website closed on 15 February 2019; its collection of images and videos remains securely stored for future generations.
History
The project was formally launched on 20 May 2003 by its patron, the UK-based natural history presenter Sir David Attenborough, a long-standing colleague and friend of its chief instigator, the late Christopher Parsons, a former Head of the BBC Natural History Unit. Parsons did not live to see the project come to fruition, having succumbed to cancer in November 2002 at the age of 70.
Parsons identified a need to provide a centralised safe haven for wildlife films and photogr
|
https://en.wikipedia.org/wiki/Biosignature
|
A biosignature (sometimes called chemical fossil or molecular fossil) is any substance – such as an element, isotope, molecule, or phenomenon – that provides scientific evidence of past or present life. Measurable attributes of life include its complex physical or chemical structures, its use of free energy, and the production of biomass and wastes. A biosignature can provide evidence for living organisms outside the Earth and can be directly or indirectly detected by searching for their unique byproducts.
Types
In general, biosignatures can be grouped into ten broad categories:
Isotope patterns: Isotopic evidence or patterns that require biological processes.
Chemistry: Chemical features that require biological activity.
Organic matter: Organics formed by biological processes.
Minerals: Minerals or biomineral-phases whose composition and/or morphology indicate biological activity (e.g., biomagnetite).
Microscopic structures and textures: Biologically formed cements, microtextures, microfossils, and films.
Macroscopic physical structures and textures: Structures that indicate microbial ecosystems, biofilms (e.g., stromatolites), or fossils of larger organisms.
Temporal variability: Variations in time of atmospheric gases, reflectivity, or macroscopic appearance that indicate life's presence.
Surface reflectance features: Large-scale reflectance features due to biological pigments could be detected remotely.
Atmospheric gases: Gases formed by metabolic and/or aqueous processes, which may be present on a planet-wide scale.
Technosignatures: Signatures that indicate a technologically advanced civilization.
Viability
Determining whether a potential biosignature is worth investigating is a fundamentally complicated process. Scientists must consider every possible alternative explanation before concluding that something is a true biosignature. This includes investigating the minute details that make other planets unique and understanding when there is a deviat
|
https://en.wikipedia.org/wiki/Permanent%20vegetative%20cover
|
Permanent vegetative cover refers to trees, perennial bunchgrasses and grasslands, legumes, and shrubs with an expected life span of at least 5 years.
In the United States, permanent cover is required on cropland entered into the Conservation Reserve Program.
|
https://en.wikipedia.org/wiki/Causality%20%28physics%29
|
Physical causality is a physical relationship between causes and effects. It is considered to be fundamental to all natural sciences and behavioural sciences, especially physics. Causality is also a topic studied from the perspectives of philosophy, statistics and logic. Causality means that an effect cannot occur from a cause that is not in the back (past) light cone of that event. Similarly, a cause cannot have an effect outside its front (future) light cone.
Macroscopic vs microscopic causality
Causality can be defined macroscopically, at the level of human observers, or microscopically, for fundamental events at the atomic level. The strong causality principle forbids information transfer faster than the speed of light; the weak causality principle operates at the microscopic level and need not lead to information transfer. Physical models can obey the weak principle without obeying the strong version.
Macroscopic causality
In classical physics, an effect cannot occur before its cause, which is why solutions such as the advanced time solutions of the Liénard–Wiechert potential are discarded as physically meaningless. In Einstein's theories of special and general relativity, causality means that an effect cannot occur from a cause that is not in the back (past) light cone of that event. Similarly, a cause cannot have an effect outside its front (future) light cone. These restrictions are consistent with the constraint that mass and energy that act as causal influences cannot travel faster than the speed of light and/or backwards in time. In quantum field theory, observables of events with a spacelike relationship, "elsewhere", have to commute, so the order of observations or measurements of such observables does not affect the outcome.
Another requirement of causality is that cause and effect be mediated across space and time (requirement of contiguity). This requirement has been very influential in the past, in the first place as a result of direct observat
|
https://en.wikipedia.org/wiki/IP-XACT
|
IP-XACT, also known as IEEE 1685, is an XML format that defines and describes individual, re-usable electronic circuit designs (individual pieces of intellectual property, or IPs) to facilitate their use in creating integrated circuits (i.e. microchips). IP-XACT was created by the SPIRIT Consortium as a standard to enable automated configuration and integration through tools, and it later evolved into an IEEE standard.
The goals of the standard are
to ensure delivery of compatible component descriptions, such as IPs, from multiple component vendors,
to enable exchanging complex component libraries between electronic design automation (EDA) tools for SoC design (design environments),
to describe configurable components using metadata, and
to enable the provision of EDA vendor-neutral scripts for component creation and configuration (generators, configurators).
The standard was approved as IEEE 1685-2009 on December 9, 2009, and published on February 18, 2010. It was superseded by IEEE 1685-2014, and IEEE 1685-2009 was adopted as IEC 62014-4:2015. In June 2023, the supplemental material for the IEEE 1685-2022 IP-XACT standard was approved by Accellera.
Overview
Conformance checks for eXtensible Markup Language (XML) data designed to describe electronic systems are formulated by this standard. The meta-data forms that are standardized include components, systems, bus interfaces and connections, abstractions of those buses, and details of the components including address maps, register and field descriptions, and file set descriptions for use in automating design, verification, documentation, and use flows for electronic systems. A set of XML schemas of the form described by the World Wide Web Consortium (W3C(R)) and a set of semantic consistency rules (SCRs) are included. A generator interface that is portable across tool environments is provided. The specified combination of methodology-independent meta-data and the tool-independent mechanism for accessing that data provides for portability of design d
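As an illustration only (not drawn from the standard's text), the sketch below shows how a tool might pull basic component metadata out of an IP-XACT description using Python's standard XML library; the file name is hypothetical, and the namespace URI and element names are assumptions patterned on the 2009-era `spirit:` schema, which later revisions replace with an `ipxact:` namespace.

```python
# Sketch: read basic metadata from a hypothetical IP-XACT component file.
# The namespace URI and element names are assumptions (IEEE 1685-2009 era).
import xml.etree.ElementTree as ET

NS = {"spirit": "http://www.spiritconsortium.org/XMLSchema/SPIRIT/1685-2009"}

def describe_component(path):
    root = ET.parse(path).getroot()
    # The VLNV (vendor, library, name, version) identifies the IP block.
    vlnv = [root.findtext(f"spirit:{tag}", namespaces=NS)
            for tag in ("vendor", "library", "name", "version")]
    print("Component VLNV:", ":".join(v or "?" for v in vlnv))
    # Bus interfaces and register address blocks, if present.
    for bus in root.findall(".//spirit:busInterface/spirit:name", NS):
        print("  bus interface:", bus.text)
    for block in root.findall(".//spirit:addressBlock/spirit:name", NS):
        print("  address block:", block.text)

# describe_component("my_component.xml")   # hypothetical input file
```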
|
https://en.wikipedia.org/wiki/Glossary%20of%20invasion%20biology%20terms
|
The need for a clearly defined and consistent invasion biology terminology has been acknowledged by many sources. Invasive species, or invasive exotics, is a nomenclature term and categorization phrase used for flora and fauna, and for specific restoration-preservation processes in native habitats. Invasion biology is the study of these organisms and the processes of species invasion.
The terminology in this article contains definitions for invasion biology terms in common usage today, taken from accessible publications. References for each definition are included. Terminology relates primarily to invasion biology terms with some ecology terms included to clarify language and phrases on linked articles.
Introduction
Definitions of "invasive non-indigenous species have been inconsistent", which has led to confusion both in literature and in popular publications (Williams and Meffe 2005). Also, many scientists and managers feel that there is no firm definition of non-indigenous species, native species, exotic species, "and so on, and ecologists do not use the terms consistently." (Shrader-Frechette 2001) Another question asked is whether current language is likely to promote "effective and appropriate action" towards invasive species through cohesive language (Larson 2005). Biologists today spend more time and effort on invasive species work because of the rapid spread, economic cost, and effects on ecological systems, so the importance of effective communication about invasive species is clear. (Larson 2005)
Controversy in invasion biology terms exists because of past usage and because of preferences for certain terms. Even for biologists, defining a species as native may be far from being a straightforward matter of biological classification based on the location or the discipline a biologist is working in (Helmreich 2005). Questions often arise as to what exactly makes a species native as opposed to non-native, because some non-native species have no kno
|
https://en.wikipedia.org/wiki/Glycobiology
|
Defined in the narrowest sense, glycobiology is the study of the structure, biosynthesis, and biology of saccharides (sugar chains or glycans) that are widely distributed in nature. Sugars or saccharides are essential components of all living things and aspects of the various roles they play in biology are researched in various medical, biochemical and biotechnological fields.
History
According to the Oxford English Dictionary, the specific term glycobiology was coined in 1988 by Prof. Raymond Dwek to recognize the coming together of the traditional disciplines of carbohydrate chemistry and biochemistry. This coming together was a result of a much greater understanding of the cellular and molecular biology of glycans. However, as early as the late nineteenth century pioneering efforts were being made by Emil Fischer to establish the structure of some basic sugar molecules. Each year the Society for Glycobiology awards the Rosalind Kornfeld award for lifetime achievement in the field of glycobiology.
Glycoconjugates
Sugars may be linked to other types of biological molecule to form glycoconjugates. The enzymatic process of glycosylation creates sugars/saccharides linked to themselves and to other molecules by the glycosidic bond, thereby producing glycans. Glycoproteins, proteoglycans and glycolipids are the most abundant glycoconjugates found in mammalian cells. They are found predominantly on the outer cell membrane and in secreted fluids. Glycoconjugates have been shown to be important in cell-cell interactions due to the presence on the cell surface of various glycan binding receptors in addition to the glycoconjugates themselves. In addition to their function in protein folding and cellular attachment, the N-linked glycans of a protein can modulate the protein's function, in some cases acting as an on-off switch.
Glycomics
"Glycomics, analogous to genomics and proteomics, is the systematic study of all glycan structures of a given cell type or organism" and is
|
https://en.wikipedia.org/wiki/Embedded%20C%2B%2B
|
Embedded C++ (EC++) is a dialect of the C++ programming language for embedded systems. It was defined by an industry group led by major Japanese central processing unit (CPU) manufacturers, including NEC, Hitachi, Fujitsu, and Toshiba, to address the shortcomings of C++ for embedded applications. The goal of the effort is to preserve the most useful object-oriented features of the C++ language yet minimize code size while maximizing execution efficiency and making compiler construction simpler. The official website states the goal as "to provide embedded systems programmers with a subset of C++ that is easy for the average C programmer to understand and use".
Differences from C++
Embedded C++ excludes some features of C++.
Some compilers, such as those from Green Hills and IAR Systems, allow certain features of ISO/ANSI C++ to be enabled in Embedded C++. IAR Systems calls this "Extended Embedded C++".
Compilation
An EC++ program can be compiled with any C++ compiler, but a compiler that specifically targets EC++ may have an easier time performing optimizations.
Compilers specific to EC++ are provided by companies such as:
IAR Systems
Freescale Semiconductor (spun off from Motorola in 2004; Motorola had acquired Metrowerks in 1999)
Tasking Software, part of Altium Limited
Green Hills Software
Criticism
The language has had a poor reception with many expert C++ programmers. In particular, Bjarne Stroustrup says, "To the best of my knowledge EC++ is dead (2004), and if it isn't it ought to be." In fact, the official English EC++ website has not been updated since 2002. Nevertheless, a restricted subset of C++ (based on Embedded C++) has been adopted by Apple Inc. as the exclusive programming language to create all I/O Kit device drivers for Apple's macOS, iPadOS and iOS operating systems of the popular Macintosh, iPhone, and iPad products. Apple engineers felt the exceptions, multiple inheritance, templates, and runtime type information features of standard C++ were either insuffici
|
https://en.wikipedia.org/wiki/Thanatocoenosis
|
A thanatocoenosis (from the Greek thanatos, "death", and koinos, "common") is the assemblage of all fossils embedded at a single discovery site. Such a site may be referred to as a "death assemblage". These groupings are composed of fossils of organisms which may not have been associated during life, often originating from different habitats. Examples include marine fossils brought together by a water current or animal bones deposited by a predator. The faunal history of a site containing thanatocoenosis elements can also be obscured by more recent intrusions, such as burrowing microfauna or stratigraphic disturbances of anthropogenic origin.
This term differs from a related term, biocoenosis, which refers to an assemblage in which all organisms within the community interacted and lived together in the same habitat while alive. A biocoenosis can lead to a thanatocoenosis if disrupted significantly enough to have its dead/fossilized matter scattered. A death community/thanatocoenosis is developed by multiple taphonomic processes (those being ones relating to the different ways in which organismal remains pass through strata and are decomposed and preserved) that are generally categorized into two groups: biostratinomy and diagenesis. As a whole, thanatocoenoses are divided into two categories as well: autochthonous and allochthonous.
Death assemblages and thanatocoenoses can provide insight into the process of early-stage fossilization, as well as information about the species within a given ecosystem. The study of taphonomy can aid in furthering the understanding of the ecological past of species and their fossil records if used in conjunction with research on death assemblages from modern ecosystems.
History
The term "thanatocoenosis" was originally created by Erich Wasmund in 1926, and he was the first to define both the similarities and contrasts between these death communities and biocoenoses. Due to confusion between some distinctions
|
https://en.wikipedia.org/wiki/Data%20communication
|
Data communication or digital communications, including data transmission and data reception, is the transfer and reception of data in the form of a digital bitstream or a digitized analog signal transmitted over a point-to-point or point-to-multipoint communication channel. Examples of such channels are copper wires, optical fibers, wireless communication using radio spectrum, storage media and computer buses. The data are represented as an electromagnetic signal, such as an electrical voltage, radiowave, microwave, or infrared signal.
Analog transmission is a method of conveying voice, data, image, signal or video information using a continuous signal which varies in amplitude, phase, or some other property in proportion to that of a variable. The messages are either represented by a sequence of pulses by means of a line code (baseband transmission), or by a limited set of continuously varying waveforms (passband transmission), using a digital modulation method. The passband modulation and corresponding demodulation is carried out by modem equipment. According to the most common definition of digital signal, both baseband and passband signals representing bit-streams are considered as digital transmission, while an alternative definition only considers the baseband signal as digital, and passband transmission of digital data as a form of digital-to-analog conversion.
Data transmitted may be digital messages originating from a data source, for example, a computer or a keyboard. It may also be an analog signal such as a phone call or a video signal, digitized into a bit-stream, for example, using pulse-code modulation or more advanced source coding schemes. This source coding and decoding is carried out by codec equipment.
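For example, the digitization step mentioned above can be sketched as plain pulse-code modulation: sample the analog waveform at a fixed rate and quantize each sample to a fixed number of bits. The signal, sample rate, and bit depth below are illustrative assumptions, not values taken from the text.

```python
# Minimal pulse-code modulation (PCM) sketch: sample an analog waveform
# at a fixed rate and quantize each sample into a fixed number of bits.
import math

def pcm_encode(signal, fs, duration, bits=8):
    """Sample signal(t) at fs Hz for `duration` seconds, quantized to `bits` bits."""
    levels = 2 ** bits
    codes = []
    for n in range(int(fs * duration)):
        x = signal(n / fs)                        # sampling instant t = n / fs
        x = max(-1.0, min(1.0, x))                # clip to the [-1, 1] input range
        codes.append(int((x + 1.0) / 2.0 * (levels - 1)))  # uniform quantization
    return codes

# Example: a 1 kHz tone sampled at 8 kHz with 8-bit resolution (1 ms of signal).
tone = lambda t: math.sin(2 * math.pi * 1000 * t)
print(pcm_encode(tone, fs=8000, duration=0.001))   # eight code words, one per 125 µs
```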
Distinction between related subjects
Courses and textbooks in the field of data transmission as well as digital transmission and digital communications have similar content.
Digital transmission or data transmission traditionally belongs to t
|
https://en.wikipedia.org/wiki/ISCSI%20Extensions%20for%20RDMA
|
The iSCSI Extensions for RDMA (iSER) is a computer network protocol that extends the Internet Small Computer System Interface (iSCSI) protocol to use Remote Direct Memory Access (RDMA). RDMA is provided by iWARP (TCP with RDMA services), which runs over an existing Ethernet setup and therefore requires no large hardware investment; by RoCE (RDMA over Converged Ethernet), which does not need the TCP layer and therefore provides lower latency; or by InfiniBand.
It permits data to be transferred directly into and out of SCSI computer memory buffers (SCSI being the interface that connects computers to storage devices) without intermediate data copies and with little CPU intervention.
History
An RDMA consortium was announced on May 31, 2002, with a goal of product implementations by 2003.
The consortium released their proposal in July, 2003.
The protocol specifications were published as drafts in September 2004 in the Internet Engineering Task Force and issued as RFCs in October 2007.
The OpenIB Alliance was renamed in 2007 to be the OpenFabrics Alliance, and then released an open source software package.
Description
The motivation for iSER is to use RDMA to avoid unnecessary data copying on the target and initiator.
The Datamover Architecture (DA) defines an abstract model in which the movement of data between iSCSI end nodes is logically separated from the rest of the iSCSI protocol; iSER is one Datamover protocol. The interface between iSCSI and a Datamover protocol, iSER in this case, is called the Datamover Interface (DI).
The main difference between standard iSCSI and iSCSI over iSER is the execution of SCSI read/write commands. With iSER the target drives all data transfer (with the exception of iSCSI unsolicited data) by issuing RDMA write/read operations, respectively. When the iSCSI layer issues an iSCSI command PDU, it calls the Send_Control primitive, which is part of the DI. The Send_Control primitive sends the STag with the PDU. The iSER layer in
|
https://en.wikipedia.org/wiki/Etherloop
|
Etherloop is a kind of DSL technology that combines the features of Ethernet and DSL. It allows the combination of voice and data transmission on standard phone lines. Under the right conditions it will allow speeds of up to 6 megabits per second over a distance of up to 6.4 km (21,000 feet).
Etherloop uses half-duplex transmission, and as such, is less susceptible to interference caused by poor line quality, bridge taps, etc. Also, Etherloop modems can train up through line filters (although this is not recommended).
Etherloop has been deployed by various internet service providers in areas where the loop length is very long or line quality is poor. Some Etherloop modems (those made by Elastic Networks) offer a "Central Office mode", in which two modems are connected back to back over a phone line and used as a LAN extension. An example of a situation where this would be done is to extend Ethernet to a building that is too far to reach with straight Ethernet.
See also
Ethernet in the first mile (especially 2BASE-TL)
|
https://en.wikipedia.org/wiki/Kawasaki%27s%20theorem
|
Kawasaki's theorem or Kawasaki–Justin theorem is a theorem in the mathematics of paper folding that describes the crease patterns with a single vertex that may be folded to form a flat figure. It states that the pattern is flat-foldable if and only if alternately adding and subtracting the consecutive angles around the vertex yields a sum of zero.
Crease patterns with more than one vertex do not obey such a simple criterion, and are NP-hard to fold.
The theorem is named after one of its discoverers, Toshikazu Kawasaki. However, several others also contributed to its discovery, and it is sometimes called the Kawasaki–Justin theorem or Husimi's theorem after other contributors, Jacques Justin and Kôdi Husimi.
Statement
A one-vertex crease pattern consists of a set of rays or creases drawn on a flat sheet of paper, all emanating from the same point interior to the sheet. (This point is called the vertex of the pattern.) Each crease must be folded, but the pattern does not specify whether the folds should be mountain folds or valley folds. The goal is to determine whether it is possible to fold the paper so that every crease is folded, no folds occur elsewhere, and the whole folded sheet of paper lies flat.
To fold flat, the number of creases must be even. This follows, for instance, from Maekawa's theorem, which states that the number of mountain folds at a flat-folded vertex differs from the number of valley folds by exactly two folds. Therefore, suppose that a crease pattern consists of an even number $2n$ of creases, and let $\alpha_1, \alpha_2, \ldots, \alpha_{2n}$ be the consecutive angles between the creases around the vertex, in clockwise order, starting at any one of the angles. Then Kawasaki's theorem states that the crease pattern may be folded flat if and only if the alternating sum and difference of the angles adds to zero:
$$\alpha_1 - \alpha_2 + \alpha_3 - \cdots + \alpha_{2n-1} - \alpha_{2n} = 0.$$
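This condition is easy to test directly. The following is a minimal sketch (an illustration, not taken from the article) that checks a list of consecutive angles around a single vertex:

```python
# Kawasaki's theorem: a single-vertex crease pattern folds flat
# iff the alternating sum of its consecutive angles is zero.
def is_flat_foldable(angles, tol=1e-9):
    """angles: consecutive angles around the vertex, in degrees, in order."""
    if len(angles) % 2 != 0:             # an odd number of creases never folds flat
        return False
    if abs(sum(angles) - 360.0) > tol:   # the angles must surround the vertex exactly once
        return False
    alternating = sum(a if i % 2 == 0 else -a for i, a in enumerate(angles))
    return abs(alternating) < tol

print(is_flat_foldable([90, 90, 90, 90]))     # True: alternating sum is 0
print(is_flat_foldable([100, 80, 80, 100]))   # True: 100 - 80 + 80 - 100 = 0
print(is_flat_foldable([100, 80, 100, 80]))   # False: 100 - 80 + 100 - 80 = 40
```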
An equivalent way of stating the same condition is that, if the angles are partitioned into two alternating subsets, then the sum of the angles in eith
|
https://en.wikipedia.org/wiki/Poisson%20bracket
|
In mathematics and classical mechanics, the Poisson bracket is an important binary operation in Hamiltonian mechanics, playing a central role in Hamilton's equations of motion, which govern the time evolution of a Hamiltonian dynamical system. The Poisson bracket also distinguishes a certain class of coordinate transformations, called canonical transformations, which map canonical coordinate systems into canonical coordinate systems. A "canonical coordinate system" consists of canonical position and momentum variables (below symbolized by $q_i$ and $p_i$, respectively) that satisfy canonical Poisson bracket relations. The set of possible canonical transformations is always very rich. For instance, it is often possible to choose the Hamiltonian itself as one of the new canonical momentum coordinates.
In a more general sense, the Poisson bracket is used to define a Poisson algebra, of which the algebra of functions on a Poisson manifold is a special case. There are other general examples, as well: it occurs in the theory of Lie algebras, where the tensor algebra of a Lie algebra forms a Poisson algebra; a detailed construction of how this comes about is given in the universal enveloping algebra article. Quantum deformations of the universal enveloping algebra lead to the notion of quantum groups.
All of these objects are named in honor of Siméon Denis Poisson.
Properties
Given two functions $f$ and $g$ that depend on phase space and time, their Poisson bracket $\{f, g\}$ is another function that depends on phase space and time. The following rules hold for any three functions $f$, $g$, $h$ of phase space and time:
Anticommutativity: $\{f, g\} = -\{g, f\}$
Bilinearity: $\{af + bg, h\} = a\{f, h\} + b\{g, h\}$ for constants $a$ and $b$
Leibniz's rule: $\{fg, h\} = f\{g, h\} + \{f, h\}g$
Jacobi identity: $\{f, \{g, h\}\} + \{g, \{h, f\}\} + \{h, \{f, g\}\} = 0$
Also, if a function $h$ is constant over phase space (but may depend on time), then $\{f, h\} = 0$ for any $f$.
Definition in canonical coordinates
In canonical coordinates (also known as Darboux coordinates) $(q_i, p_i)$ on the phase space, given two functions $f(q_i, p_i, t)$ and $g(q_i, p_i, t)$, the Poisson bracket takes the form
$$\{f, g\} = \sum_{i=1}^{N} \left( \frac{\partial f}{\partial q_i} \frac{\partial g}{\partial p_i} - \frac{\partial f}{\partial p_i} \frac{\partial g}{\partial q_i} \right).$$
The Poisson brackets of the canonical coordinates are
$$\{q_i, q_j\} = 0, \qquad \{p_i, p_j\} = 0, \qquad \{q_i, p_j\} = \delta_{ij},$$
where $\delta_{ij}$ is the Kronecker delta.
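As a small illustration (not part of the article), the canonical-coordinate formula can be evaluated symbolically. The sketch below assumes the SymPy library and shows one degree of freedom, with the harmonic oscillator as a worked example:

```python
# Symbolic Poisson bracket in canonical coordinates, using SymPy (assumed installed).
import sympy as sp

def poisson_bracket(f, g, qs, ps):
    """{f, g} = sum_i (df/dq_i * dg/dp_i - df/dp_i * dg/dq_i)."""
    return sp.simplify(sum(sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)
                           for q, p in zip(qs, ps)))

q, p, m, w = sp.symbols("q p m omega", positive=True)

print(poisson_bracket(q, p, [q], [p]))    # 1, the canonical relation {q, p} = 1

# Harmonic oscillator: Hamilton's equations follow as dq/dt = {q, H}, dp/dt = {p, H}
H = p**2 / (2 * m) + m * w**2 * q**2 / 2
print(poisson_bracket(q, H, [q], [p]))    # p/m
print(poisson_bracket(p, H, [q], [p]))    # -m*omega**2*q
```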
|
https://en.wikipedia.org/wiki/Electronic%20symbol
|
An electronic symbol is a pictogram used to represent various electrical and electronic devices or functions, such as wires, batteries, resistors, and transistors, in a schematic diagram of an electrical or electronic circuit. These symbols are largely standardized internationally today, but may vary from country to country, or engineering discipline, based on traditional conventions.
Standards for symbols
The graphic symbols used for electrical components in circuit diagrams are covered by national and international standards, in particular:
IEC 60617 (also known as BS 3939).
There is also IEC 61131-3 – for ladder-logic symbols.
JIC (Joint Industrial Council) symbols as approved and adopted by the NMTBA (National Machine Tool Builders Association). They have been extracted from the Appendix of the NMTBA Specification EGPl-1967.
ANSI Y32.2-1975 (also known as IEEE Std 315-1975 or CSA Z99-1975).
IEEE Std 91/91a: graphic symbols for logic functions (used in digital electronics). It is referenced in ANSI Y32.2/IEEE Std 315.
Australian Standard AS 1102 (based on a slightly modified version of IEC 60617; withdrawn without replacement with a recommendation to use IEC 60617).
The number of standards leads to confusion and errors.
Symbols usage is sometimes unique to engineering disciplines, and national or local variations to international standards exist. For example, lighting and power symbols used as part of architectural drawings may be different from symbols for devices used in electronics.
Common electronic symbols
Symbols shown are typical examples, not a complete list.
Traces
Grounds
The shorthand for ground is GND. Optionally, the triangle in the middle symbol may be filled in.
Sources
Resistors
It is very common for potentiometer and rheostat symbols to be used for many types of variable resistors, including trimmers.
Capacitors
Diodes
Optionally, the triangle in these symbols may be filled in. Note: The words anode and cathode typically
|
https://en.wikipedia.org/wiki/Inverted%20sugar%20syrup
|
Inverted sugar syrup, also called invert syrup, invert sugar, simple syrup, sugar syrup, sugar water, bar syrup, syrup USP, or sucrose inversion, is a syrup mixture of the monosaccharides glucose and fructose, that is made by hydrolytic saccharification of the disaccharide sucrose. This mixture's optical rotation is opposite to that of the original sugar, which is why it is called an invert sugar.
It is 1.3x sweeter than table sugar, and foods that contain invert sugar retain moisture better and crystallize less easily than do those that use table sugar instead. Bakers, who call it invert syrup, may use it more than other sweeteners.
Production
Plain water
Inverted sugar syrup can be made without acids or enzymes simply by heating a mixture of sugar and water: two parts granulated sugar and one part water, simmered for five to seven minutes, will be partly inverted.
The amount of water can be increased to increase the time it takes to reach the desired final temperature, and increasing the time increases the amount of inversion that occurs. In general, higher final temperatures result in thicker syrups, and lower final temperatures, in thinner ones.
Additives
Commercially prepared enzyme-catalyzed solutions are inverted at . The optimum pH for inversion is 5.0. Invertase is added at a rate of about 0.15% of the syrup's weight, and inversion time will be about 8 hours. When completed the syrup temperature is raised to inactivate the invertase, but the syrup is concentrated in a vacuum evaporator to preserve color.
Though inverted sugar syrup can be made by heating table sugar in water alone, the reaction can be sped up by adding lemon juice, cream of tartar, or other catalysts, often without changing the flavor noticeably. Common sugar can be inverted quickly by mixing sugar and citric acid or cream of tartar at a ratio of about 1000:1 by weight and adding water. If lemon juice, which is about five percent citric acid by weight, is used instead, then the ratio becomes 50:1. Such a mixtu
|
https://en.wikipedia.org/wiki/Point%20particle
|
A point particle, ideal particle or point-like particle (often spelled pointlike particle) is an idealization of particles heavily used in physics. Its defining feature is that it lacks spatial extension; being dimensionless, it does not take up space. A point particle is an appropriate representation of any object whenever its size, shape, and structure are irrelevant in a given context. For example, from far enough away, any finite-size object will look and behave as a point-like object. Point masses and point charges, discussed below, are two common cases. When a point particle has an additive property, such as mass or charge, it is often represented mathematically by a Dirac delta function.
In quantum mechanics, the concept of a point particle is complicated by the Heisenberg uncertainty principle, because even an elementary particle, with no internal structure, occupies a nonzero volume. For example, the atomic orbit of an electron in the hydrogen atom occupies a volume of ~. There is nevertheless a distinction between elementary particles such as electrons or quarks, which have no known internal structure, versus composite particles such as protons, which do have internal structure: A proton is made of three quarks.
Elementary particles are sometimes called "point particles" in reference to their lack of internal structure, but this is in a different sense than discussed above.
Point mass
Point mass (pointlike mass) is the concept, for example in classical physics, of a physical object (typically matter) that has nonzero mass, and yet explicitly and specifically is (or is being thought of or modeled as) infinitesimal (infinitely small) in its volume or linear dimensions.
In the theory of gravity, extended objects can behave as point-like even in their immediate vicinity. For example, spherical objects interacting in 3-dimensional space whose interactions are described by the Newtonian gravitation behave in such a way as if all their matter were concentrate
|
https://en.wikipedia.org/wiki/Session%20multiplexing
|
Session multiplexing in a computer network is a service provided by the transport layer (see OSI Layered Model). It multiplexes several message streams, or sessions onto one logical link and keeps track of which messages belong to which sessions (see session layer).
An example of session multiplexing is a single computer with one IP address having several websites open at once.
|
https://en.wikipedia.org/wiki/Connection-oriented%20communication
|
In telecommunications and computer networking, connection-oriented communication is a communication protocol where a communication session or a semi-permanent connection is established before any useful data can be transferred. The established connection ensures that data is delivered in the correct order to the upper communication layer. The alternative is called connectionless communication, such as the datagram mode communication used by Internet Protocol (IP) and User Datagram Protocol, where data may be delivered out of order, since different network packets are routed independently and may be delivered over different paths.
Connection-oriented communication may be implemented with a circuit switched connection, or a packet-mode virtual circuit connection. In the latter case, it may use either a transport layer virtual circuit protocol such as TCP, which allows data to be delivered in order even though the lower-layer switching is connectionless, or it may be a data link layer or network layer switching mode, where all data packets belonging to the same traffic stream are delivered over the same path and traffic flows are identified by some connection identifier, reducing the overhead of routing decisions made on a packet-by-packet basis in the network.
Connection-oriented protocol services are often, but not always, reliable network services that provide acknowledgment after successful delivery and automatic repeat request functions in case of missing or corrupted data. Asynchronous Transfer Mode, Frame Relay and MPLS are examples of connection-oriented, unreliable protocols. SMTP is an example of a connection-oriented, reliable protocol: if a message is not delivered, an error report is sent to the sender. Because they can keep track of a conversation, connection-oriented protocols are sometimes described as stateful.
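As a small illustration (not from the article), the contrast is visible directly in the Berkeley sockets API: a TCP sender must establish a session before any data moves, while a UDP sender simply addresses each datagram. The host and port below are placeholders, and a listening server is assumed for the TCP case.

```python
# Connection-oriented (TCP) vs connectionless (UDP) transfer with Berkeley sockets.
# HOST and PORT are placeholders; a listening server is assumed for the TCP case.
import socket

HOST, PORT = "127.0.0.1", 9000   # hypothetical echo server

def send_tcp(data: bytes) -> bytes:
    # A session is established (TCP three-way handshake) before any data moves,
    # and the byte stream is delivered to the peer in order.
    with socket.create_connection((HOST, PORT)) as s:
        s.sendall(data)
        return s.recv(1024)

def send_udp(data: bytes) -> bytes:
    # No session is established; each datagram is routed independently and may
    # arrive out of order, duplicated, or not at all.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(data, (HOST, PORT))
        reply, _addr = s.recvfrom(1024)
        return reply
```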
Circuit switching
Circuit switched communication, for example the public switched telephone network,
|
https://en.wikipedia.org/wiki/List%20of%20moments%20of%20inertia
|
Moment of inertia, denoted by $I$, measures the extent to which an object resists rotational acceleration about a particular axis; it is the rotational analogue to mass (which determines an object's resistance to linear acceleration). The moments of inertia of a mass have units of dimension ML² ([mass] × [length]²). It should not be confused with the second moment of area, which has units of dimension L⁴ ([length]⁴) and is used in beam calculations. The mass moment of inertia is often also known as the rotational inertia, and sometimes as the angular mass.
For simple objects with geometric symmetry, one can often determine the moment of inertia in an exact closed-form expression. Typically this occurs when the mass density is constant, but in some cases the density can vary throughout the object as well. In general, it may not be straightforward to symbolically express the moment of inertia of shapes with more complicated mass distributions and lacking symmetry. When calculating moments of inertia, it is useful to remember that it is an additive function and exploit the parallel axis and perpendicular axis theorems.
This article mainly considers symmetric mass distributions, with constant density throughout the object, and the axis of rotation is taken to be through the center of mass unless otherwise specified.
Moments of inertia
Following are scalar moments of inertia. In general, the moment of inertia is a tensor, see below.
List of 3D inertia tensors
This list of moment of inertia tensors is given for principal axes of each object.
To obtain the scalar moments of inertia I above, the tensor moment of inertia I is projected along some axis defined by a unit vector n according to the formula
$$I = \mathbf{n} \cdot \mathbf{I} \cdot \mathbf{n} = n_i I_{ij} n_j,$$
where the dots indicate tensor contraction and the Einstein summation convention is used. In the above table, n would be the unit Cartesian basis $\mathbf{e}_x$, $\mathbf{e}_y$, $\mathbf{e}_z$ to obtain $I_x$, $I_y$, $I_z$ respectively.
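A quick numerical check of this projection is sketched below (an illustration, not from the article); the example tensor is the standard diagonal inertia tensor of a solid box of mass m and side lengths w, h, d along its principal axes.

```python
# Project an inertia tensor onto an axis n: I_axis = n . I . n
import numpy as np

def axis_moment(inertia_tensor, axis):
    n = np.asarray(axis, dtype=float)
    n /= np.linalg.norm(n)                 # make sure the axis is a unit vector
    return n @ inertia_tensor @ n          # tensor contraction n_i I_ij n_j

# Example: solid box of mass m with side lengths w, h, d along x, y, z
# (principal axes, so the off-diagonal terms vanish).
m, w, h, d = 2.0, 0.3, 0.2, 0.1
I = (m / 12.0) * np.diag([h**2 + d**2, w**2 + d**2, w**2 + h**2])

print(axis_moment(I, [1, 0, 0]))   # recovers I_x
print(axis_moment(I, [1, 1, 0]))   # moment about a diagonal axis in the x-y plane
```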
See also
List of second moments of area
Parallel axis theorem
Perpendicula
|
https://en.wikipedia.org/wiki/Array%20factor
|
An array is simply a group of objects, and the array factor is a measure of how much a specific characteristic changes because of the grouping. This phenomenon is observed when antennas are grouped together. The radiation (or reception) pattern of the antenna group is considerably different from that of a single antenna. This is due to the constructive and destructive interference properties of radio waves. A well-designed antenna array allows the broadcast power to be directed to where it is needed most.
These antenna arrays are typically one dimensional, as seen on collinear dipole arrays, or two dimensional as on military phased arrays.
In order to simplify the mathematics, a number of assumptions are typically made:
1. all radiators are equal in every respect
2. all radiators are uniformly spaced
3. the signal phase shift between radiators is constant.
The array factor is the complex-valued far-field radiation pattern obtained for an array of $N$ isotropic radiators located at coordinates $\mathbf{r}_n$, as determined by
$$AF(\hat{\mathbf{r}}) = \sum_{n=1}^{N} a_n \, e^{\,j k \hat{\mathbf{r}} \cdot \mathbf{r}_n},$$
where $a_n$ are the complex-valued excitation coefficients, $k$ is the wavenumber, and $\hat{\mathbf{r}}$ is the direction unit vector. The array factor is defined in the transmitting mode, with the time convention $e^{\,j\omega t}$. A corresponding expression can be derived for the receiving mode, where a negative sign appears in the exponential factors, as derived in the reference.
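As a concrete example (not from the article), the sketch below evaluates the array factor of a uniform linear array of isotropic elements spaced along the z-axis with a constant progressive phase shift; the element count, spacing, and phase values are arbitrary illustrations.

```python
# Array factor of a uniform linear array of isotropic elements along the z-axis:
#   AF(theta) = sum_n a_n * exp(j * n * (k*d*cos(theta) + beta)),  with a_n = 1
import numpy as np

def array_factor(theta, n_elements=8, spacing_wl=0.5, beta=0.0):
    """theta: polar angle(s) in radians; spacing_wl: element spacing in wavelengths."""
    psi = 2 * np.pi * spacing_wl * np.cos(theta) + beta   # per-element phase k*d*cos(theta) + beta
    n = np.arange(n_elements)[:, None]                    # element indices 0..N-1
    return np.sum(np.exp(1j * n * psi), axis=0)

theta = np.linspace(0.0, np.pi, 181)
af = np.abs(array_factor(theta))
print("peak |AF|:", af.max())                                    # equals the element count (8)
print("peak direction (deg):", np.degrees(theta[af.argmax()]))   # 90 degrees (broadside) for beta = 0
```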
|
https://en.wikipedia.org/wiki/Ecosystem%20model
|
An ecosystem model is an abstract, usually mathematical, representation of an ecological system (ranging in scale from an individual population, to an ecological community, or even an entire biome), which is studied to better understand the real system.
Using data gathered from the field, ecological relationships—such as the relation of sunlight and water availability to photosynthetic rate, or that between predator and prey populations—are derived, and these are combined to form ecosystem models. These model systems are then studied in order to make predictions about the dynamics of the real system. Often, the study of inaccuracies in the model (when compared to empirical observations) will lead to the generation of hypotheses about possible ecological relations that are not yet known or well understood. Models enable researchers to simulate large-scale experiments that would be too costly or unethical to perform on a real ecosystem. They also enable the simulation of ecological processes over very long periods of time (i.e. simulating a process that takes centuries in reality, can be done in a matter of minutes in a computer model).
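For instance, the predator–prey relationship mentioned above is classically captured by the Lotka–Volterra equations. The sketch below (an illustration with arbitrary parameter values, not a model taken from the article) integrates them with a simple forward-Euler step:

```python
# Minimal predator-prey ecosystem model (Lotka-Volterra equations),
# integrated with a forward-Euler step. Parameter values are arbitrary.
def simulate(prey=10.0, predators=5.0, a=1.1, b=0.4, c=0.4, d=0.1,
             dt=0.001, steps=20000):
    history = [(prey, predators)]
    for _ in range(steps):
        dprey = a * prey - b * prey * predators          # prey reproduce and are eaten
        dpred = d * prey * predators - c * predators     # predators grow by eating, die off
        prey += dprey * dt
        predators += dpred * dt
        history.append((prey, predators))
    return history

trajectory = simulate()
print(trajectory[-1])   # populations after 20 time units; the two populations cycle
```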
Ecosystem models have applications in a wide variety of disciplines, such as natural resource management, ecotoxicology and environmental health, agriculture, and wildlife conservation. Ecological modelling has even been applied to archaeology with varying degrees of success, for example, combining with archaeological models to explain the diversity and mobility of stone tools.
Types of models
There are two major types of ecological models, which are generally applied to different types of problems: (1) analytic models and (2) simulation/computational models. Analytic models are typically relatively simple (often linear) systems that can be accurately described by a set of mathematical equations whose behavior is well-known. Simulation models, on the other hand, use numerical techniques to solve problems for which analytic solutio
|
https://en.wikipedia.org/wiki/Cantor%20cube
|
In mathematics, a Cantor cube is a topological group of the form {0, 1}A for some index set A. Its algebraic and topological structures are the group direct product and product topology over the cyclic group of order 2 (which is itself given the discrete topology).
If A is a countably infinite set, the corresponding Cantor cube is a Cantor space. Cantor cubes are special among compact groups because every compact group is a continuous image of one, although usually not a homomorphic image. (The literature can be unclear, so for safety, assume all spaces are Hausdorff.)
Topologically, any Cantor cube is:
homogeneous;
compact;
zero-dimensional;
AE(0), an absolute extensor for compact zero-dimensional spaces. (Every map from a closed subset of such a space into a Cantor cube extends to the whole space.)
By a theorem of Schepin, these four properties characterize Cantor cubes; any space satisfying the properties is homeomorphic to a Cantor cube.
In fact, every AE(0) space is the continuous image of a Cantor cube, and with some effort one can prove that every compact group is AE(0). It follows that every zero-dimensional compact group is homeomorphic to a Cantor cube, and every compact group is a continuous image of a Cantor cube.
|
https://en.wikipedia.org/wiki/Traceability
|
Traceability is the capability to trace something. In some cases, it is interpreted as the ability to verify the history, location, or application of an item by means of documented recorded identification.
Other common definitions include the capability (and implementation) of keeping track of a given set or type of information to a given degree, or the ability to chronologically interrelate uniquely identifiable entities in a way that is verifiable.
Traceability is applicable to measurement, supply chain, software development, healthcare and security.
Measurement
The term measurement traceability or metrological traceability is used to refer to an unbroken chain of comparisons relating an instrument's measurements to a known standard. Calibration to a traceable standard can be used to determine an instrument's bias, precision, and accuracy. It may also be used to show a chain of custody - from current interpretation of evidence to the actual evidence in a legal context, or history of handling of any information.
In many countries, national standards for weights and measures are maintained by a National Metrological Institute (NMI) which provides the highest level of standards for the calibration / measurement traceability infrastructure in that country. Examples of government agencies include the National Physical Laboratory, UK (NPL), the National Institute of Standards and Technology (NIST) in the USA, the Physikalisch-Technische Bundesanstalt (PTB) in Germany, and the Istituto Nazionale di Ricerca Metrologica (INRiM) in Italy. As defined by NIST, "Traceability of measurement requires the establishment of an unbroken chain of comparisons to stated references each with a stated uncertainty."
A clock providing time is traceable to a time standard such as Coordinated Universal Time or International Atomic Time. The Global Positioning System is a source of traceable time.
Supply chain
Within a product's supply chain, traceability may be both a regulatory and an eth
|
https://en.wikipedia.org/wiki/Edinburgh%20BioQuarter
|
Edinburgh BioQuarter is a key initiative in the development of Scotland's life sciences industry, which employs more than 39,000 people in over 750 organisations.
A community of 8,000 people currently work and study within the boundary of BioQuarter, located on the south side of Edinburgh, Scotland’s capital city, approximately three miles from the city centre.
This 160-acre site includes health innovation businesses, the University of Edinburgh Medical School, 900-bed Royal Infirmary of Edinburgh, and new Royal Hospital for Children and Young People and Department of Clinical Neurosciences. The site is also home to many of the University of Edinburgh’s medical research institutes.
Partnership and Economic Impact
BioQuarter is a partnership with four of Scotland’s most prominent organisations - the City of Edinburgh Council, NHS Lothian, Scottish Enterprise and the University of Edinburgh.
Over the past three decades there has been over £600m investment in capital developments. BioQuarter has generated an estimated £2.72 billion in gross value added from its research, clinical and commercial activities, and a further £320 million from its development.
History
In 1997, the Scottish Government obtained planning permission for land in the Little France area of Edinburgh for a new Royal Infirmary of Edinburgh and it was procured under a Private Finance Initiative contract in 1998. This allowed the Royal Infirmary of Edinburgh and the University of Edinburgh’s Medical School to relocate from their historic sites in Edinburgh city centre.
Development commenced immediately and in 2002 NHS Lothian opened the new Royal Infirmary of Edinburgh, a major acute teaching hospital. At the same time the University of Edinburgh completed its first phase of relocation of the College of Medicine and Veterinary Medicine with the move of medical teaching and research to the adjacent Chancellor’s Building.
In 2004 Scottish Enterprise, Scotland’s economic development agency, had a
|
https://en.wikipedia.org/wiki/List%20of%20types%20of%20sets
|
Sets can be classified according to the properties they have.
Relative to set theory
Empty set
Finite set, Infinite set
Countable set, Uncountable set
Power set
Relative to a topology
Closed set
Open set
Clopen set
Fσ set
Gδ set
Compact set
Relatively compact set
Regular open set, regular closed set
Connected set
Perfect set
Meagre set
Nowhere dense set
Relative to a metric
Bounded set
Totally bounded set
Relative to measurability
Borel set
Baire set
Measurable set, Non-measurable set
Universally measurable set
Relative to a measure
Negligible set
Null set
Haar null set
In a linear space
Convex set
Balanced set, Absolutely convex set
Relative to the real/complex numbers
Fractal set
Ways of defining sets/Relation to descriptive set theory
Recursive set
Recursively enumerable set
Arithmetical set
Diophantine set
Hyperarithmetical set
Analytical set
Analytic set, Coanalytic set
Suslin set
Projective set
Inhabited set
More general objects still called sets
Multiset
icarus set
See also
Basic concepts in set theory
Sets
Set theory
|
https://en.wikipedia.org/wiki/Zinc%20in%20biology
|
Zinc is an essential trace element for humans and other animals, for plants and for microorganisms. Zinc is required for the function of over 300 enzymes and 1000 transcription factors, and is stored and transferred in metallothioneins. It is the second most abundant trace metal in humans after iron and it is the only metal which appears in all enzyme classes.
In proteins, zinc ions are often coordinated to the amino acid side chains of aspartic acid, glutamic acid, cysteine and histidine. The theoretical and computational description of this zinc binding in proteins (as well as that of other transition metals) is difficult.
Roughly grams of zinc are distributed throughout the human body. Most zinc is in the brain, muscle, bones, kidney, and liver, with the highest concentrations in the prostate and parts of the eye. Semen is particularly rich in zinc, a key factor in prostate gland function and reproductive organ growth.
Zinc homeostasis of the body is mainly controlled by the intestine. Here, ZIP4 and especially TRPM7 were linked to intestinal zinc uptake essential for postnatal survival.
In humans, the biological roles of zinc are ubiquitous. It interacts with "a wide range of organic ligands", and has roles in the metabolism of RNA and DNA, signal transduction, and gene expression. It also regulates apoptosis. A review from 2015 indicated that about 10% of human proteins (~3000) bind zinc, in addition to hundreds more that transport and traffic zinc; a similar in silico study in the plant Arabidopsis thaliana found 2367 zinc-related proteins.
In the brain, zinc is stored in specific synaptic vesicles by glutamatergic neurons and can modulate neuronal excitability. It plays a key role in synaptic plasticity and so in learning. Zinc homeostasis also plays a critical role in the functional regulation of the central nervous system. Dysregulation of zinc homeostasis in the central nervous system that results in excessive synaptic zinc concentrations is believed
|
https://en.wikipedia.org/wiki/Disk%20array%20controller
|
A disk array controller is a device that manages the physical disk drives and presents them to the computer as logical units. It almost always implements hardware RAID and is thus sometimes referred to as a RAID controller. It also often provides additional disk cache.
The term disk array controller is often improperly shortened to disk controller. The two should not be confused, as they provide very different functionality.
Front-end and back-end side
A disk array controller provides front-end interfaces and back-end interfaces.
The back-end interface communicates with the controlled disks. Hence, its protocol is usually ATA (a.k.a. PATA), SATA, SCSI, FC or SAS.
The front-end interface communicates with a computer's host adapter (HBA, Host Bus Adapter) and uses:
one of ATA, SATA, SCSI, FC; these are popular protocols used by disks, so by using one of them a controller may transparently emulate a disk for a computer.
somewhat less popular dedicated protocols for specific solutions: FICON/ESCON, iSCSI, HyperSCSI, ATA over Ethernet or InfiniBand.
A single controller may use different protocols for back-end and for front-end communication. Many enterprise controllers use FC on front-end and SATA on back-end.
Enterprise controllers
In a modern enterprise architecture disk array controllers (sometimes also called storage processors, or SPs) are parts of physically independent enclosures, such as disk arrays placed in a storage area network (SAN) or network-attached storage (NAS) servers.
Those external disk arrays are usually purchased as an integrated subsystem of RAID controllers, disk drives, power supplies, and management software. It is up to controllers to provide advanced functionality (various vendors name these differently):
Automatic failover to another controller (transparent to computers transmitting data)
Long-running operations performed without downtime
Forming a new RAID set
Reconstructing degraded RAID set (after a disk failure)
Adding a disk to onl
|
https://en.wikipedia.org/wiki/Phototropism
|
In biology, phototropism is the growth of an organism in response to a light stimulus. Phototropism is most often observed in plants, but can also occur in other organisms such as fungi. The cells on the plant that are farthest from the light contain a hormone called auxin that reacts when phototropism occurs. This causes the plant to have elongated cells on the furthest side from the light. Phototropism is one of the many plant tropisms, or movements, which respond to external stimuli. Growth towards a light source is called positive phototropism, while growth away from light is called negative phototropism. Negative phototropism is not to be confused with skototropism, which is defined as the growth towards darkness, whereas negative phototropism can refer to either the growth away from a light source or towards the darkness. Most plant shoots exhibit positive phototropism, and rearrange their chloroplasts in the leaves to maximize photosynthetic energy and promote growth. Some vine shoot tips exhibit negative phototropism, which allows them to grow towards dark, solid objects and climb them. The combination of phototropism and gravitropism allow plants to grow in the correct direction.
Mechanism
There are several signaling molecules that help the plant determine where the light source is coming from, and these activate several genes, which change the hormone gradients allowing the plant to grow towards the light. The very tip of the plant is known as the coleoptile, which is necessary in light sensing. The middle portion of the coleoptile is the area where the shoot curvature occurs. The Cholodny–Went hypothesis, developed in the early 20th century, predicts that in the presence of asymmetric light, auxin will move towards the shaded side and promote elongation of the cells on that side to cause the plant to curve towards the light source. Auxins activate proton pumps, decreasing the pH in the cells on the dark side of the plant. This acidification of the cell
|
https://en.wikipedia.org/wiki/Packet%20concatenation
|
Packet concatenation is a computer networking optimization that coalesces multiple packets under a single header. The use of packet concatenation reduces per-packet overhead at the physical and link layers.
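A minimal Python sketch of the idea, assuming a hypothetical wire format (a 2-byte packet count followed by length-prefixed payloads); real schemes such as 802.11 frame aggregation define their own formats and limits.

import struct

def concatenate(packets):
    """Coalesce several packets into one frame: a 2-byte packet count,
    then a 2-byte length prefix and payload for each packet."""
    frame = struct.pack("!H", len(packets))
    for payload in packets:
        frame += struct.pack("!H", len(payload)) + payload
    return frame

def split(frame):
    """Recover the original packets from a concatenated frame."""
    (count,) = struct.unpack_from("!H", frame, 0)
    packets, offset = [], 2
    for _ in range(count):
        (length,) = struct.unpack_from("!H", frame, offset)
        offset += 2
        packets.append(frame[offset:offset + length])
        offset += length
    return packets

# Three small packets share one physical/link-layer header instead of three.
frame = concatenate([b"ping", b"pong", b"ack"])
assert split(frame) == [b"ping", b"pong", b"ack"]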
See also
Frame aggregation
Packet aggregation
|
https://en.wikipedia.org/wiki/3Com%203c509
|
3Com 3c509 is a line of Ethernet IEEE 802.3 network cards for the ISA, EISA, MCA and PCMCIA computer buses. It was designed by 3Com and put on the market in 1994.
Features
The 3Com 3c5x9 family of network controllers comes in various computer bus interfaces, including ISA, EISA, MCA and PCMCIA. For the network connection, 10BASE-2, AUI and 10BASE-T ports are used.
B = a "B" suffix on ISA and PCMCIA adapter model numbers indicates that the adapter belongs to the second generation of the Parallel Tasking EtherLink III technology.
The DIP-28 (U1) EPROM used for network booting may be 8, 16 or 32 KB in size, so EPROMs of the 64, 128 and 256 kbit types (1 kbit = 2^10 bits), such as the 27C256, are compatible.
The boot ROM address is located in the range 0xC0000–0xDE000.
Teardown example, the 3c509B-Combo
The EtherLink III 3C509B-Combo is registered with the FCC under ID DF63C509B. The main components on the card are: Y1, a 20 MHz crystal oscillator; U50, a DP8392 coaxial transceiver interface; U4, the main controller (3Com 9513S, 9545S or similar); U6, a 70 ns CMOS static RAM; U1, a DIP-28 27C256-style EPROM holding the boot code; and U3, a 1024-bit 5 V CMOS serial EEPROM holding the configuration.
Label:
Etherlink III
(C) 1994 3C509B-C
ALL RIGHTS RESERVED
ASSY 03-0021-001 REV-A
FCC ID: DF63C509B
Barcode:
EA=0020AFDCC34C
SN=6AHDCC34C
MADE IN U.S.A.
R = Resistor
C = Capacitor
L = Inductor
Q = Transistor
CR = Diode
FL = Filter (pulse transformer module)
T = Transformer
U = Integrated circuit
J = Jumper or connector
VR = Voltage regulator
F = Fuse
FL70: Pulse transformer
bel9509 A
0556-3873-03
* HIPOTTED
Y1: 20 MHz crystal
20.000M
652DA
U50:
P9512BR
DP8392CN
Coaxial Transceiver Interface
T50: Pulse transformer, pinout: 2x8
VALOR
ST7033
x00: Pulse transformer
VALOR
PT0018
CHINA M
9449 C
U4: Plastic package 33x33 pins
Parallel Tasking TM
3Com
40-0130-002
9513S 22050553
AT&T 40-01302
Another chip with the same function:
40-0130-003
9545S 48324401
AT&T 40-01303
U6: 8192 x 8-bit 70 ns CMOS static RAM
HY 6264A
LJ-70
9509B KOR
|
https://en.wikipedia.org/wiki/3D%20Content%20Retrieval
|
A 3D Content Retrieval system is a computer system for browsing, searching and retrieving three-dimensional digital content (e.g., computer-aided design models, molecular biology models, and cultural heritage 3D scenes) from a large database of 3D models. The earliest approach to 3D content retrieval attaches descriptive text to 3D content files, such as the content file name, link text, and web page title, so that related 3D content can be found through text retrieval. Because manually annotating 3D files is inefficient, researchers have investigated ways to automate the annotation process and to provide a unified standard for creating text descriptions of 3D content. Moreover, the growth in 3D content has demanded and inspired more advanced ways to retrieve 3D information. Thus, shape matching methods for 3D content retrieval have become popular. Shape matching retrieval is based on techniques that compare and contrast similarities between 3D models.
3D retrieval methods
Derive a high-level description (e.g., a skeleton) and then find matching results
This method describes 3D models by using a skeleton. The skeleton encodes the geometric and topological information in the form of a skeletal graph and uses graph matching techniques to match and compare skeletons. However, this method requires a 2-manifold input model, and it is very sensitive to noise and fine detail. Many existing 3D models are created for visualization purposes and do not meet the input quality requirements of the skeleton method. The skeleton-based 3D retrieval method therefore needs further development before it can be used widely.
Compute a feature vector based on statistics
Unlike skeleton modeling, which requires a high-quality input source, statistical methods place no restrictions on the validity of the input. Shape histograms, feature vectors composed of global geometric properties such as circularity and eccentricity, and feature vectors built from other shape statistics are typical examples.
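As a sketch of one such statistical descriptor, the following Python code (assumed here to operate on point clouds sampled from the models) computes a shape-distribution style histogram of distances between randomly chosen point pairs and compares two models with an L1 distance. The function names, bin counts, and random point clouds are illustrative assumptions, not a reference implementation of any particular published method.

import numpy as np

def d2_descriptor(points, num_pairs=10000, bins=64, seed=0):
    """Shape-distribution style descriptor: a histogram of distances between
    randomly chosen pairs of points, normalized by the mean distance for
    rough scale invariance."""
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(points), num_pairs)
    j = rng.integers(0, len(points), num_pairs)
    d = np.linalg.norm(points[i] - points[j], axis=1)
    d /= d.mean()
    hist, _ = np.histogram(d, bins=bins, range=(0.0, 3.0), density=True)
    return hist

def descriptor_distance(h1, h2):
    """L1 distance between two descriptors; smaller means more similar."""
    return float(np.abs(h1 - h2).sum())

# Two stand-in point clouds; the second is a scaled copy of the first, so the
# normalized descriptors should be nearly identical.
a = np.random.default_rng(1).normal(size=(2000, 3))
b = 2.0 * a
print(descriptor_distance(d2_descriptor(a), d2_descriptor(b)))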
|
https://en.wikipedia.org/wiki/Synchronous%20Data%20Flow
|
Synchronous Data Flow (SDF) is a restriction on Kahn process networks in which the number of tokens read and written by each process per firing is known ahead of time. Because these rates are fixed, a static schedule can often be computed at compile time such that every channel needs only a bounded FIFO; graphs whose rates are inconsistent admit no such schedule.
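A small Python sketch of the balance equations behind that claim: for each channel carrying p tokens per source firing and c tokens per destination firing, a valid periodic schedule must satisfy p * r[src] = c * r[dst], and the smallest positive integer solution r is the repetition vector. The actor names, edge tuples, and the connected-graph assumption below are illustrative.

from fractions import Fraction
from math import lcm

def repetition_vector(actors, edges):
    """Solve the SDF balance equations: for every edge (src, dst, produced,
    consumed), produced * r[src] == consumed * r[dst].  Returns the smallest
    positive integer repetition vector, assuming a connected graph."""
    rates = {actors[0]: Fraction(1)}
    pending = list(edges)
    while pending:
        progress = False
        for edge in list(pending):
            src, dst, produced, consumed = edge
            if src in rates and dst not in rates:
                rates[dst] = rates[src] * produced / consumed
            elif dst in rates and src not in rates:
                rates[src] = rates[dst] * consumed / produced
            elif src in rates and dst in rates:
                if rates[src] * produced != rates[dst] * consumed:
                    raise ValueError("inconsistent SDF graph: unbounded FIFOs")
            else:
                continue
            pending.remove(edge)
            progress = True
        if not progress:
            raise ValueError("graph is not connected")
    scale = lcm(*(r.denominator for r in rates.values()))
    return {actor: int(r * scale) for actor, r in rates.items()}

# A produces 2 tokens per firing, B consumes 3; B produces 1, C consumes 2.
# Per schedule iteration A fires 3 times, B twice and C once.
print(repetition_vector(["A", "B", "C"], [("A", "B", 2, 3), ("B", "C", 1, 2)]))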
Limitations
SDF does not account for asynchronous processes, because their token read/write rates vary. In practice, one can divide the network into synchronous sub-networks connected by asynchronous links. Alternatively, a runtime supervisor can enforce fairness and other desired properties.
Applications
SDF is useful for modeling digital signal processing (DSP) routines. Models can be compiled to target parallel hardware like FPGAs, processors with DSP instruction sets like Qualcomm's Hexagon, and other systems.
See also
Kahn process networks
Petri net
Dataflow architecture
|
https://en.wikipedia.org/wiki/Digital%20down%20converter
|
In digital signal processing, a digital down-converter (DDC) converts a digitized, band-limited signal to a lower frequency signal at a lower sampling rate in order to simplify the subsequent radio stages. The process can preserve all the information in the frequency band of interest of the original signal. The input and output signals can be real or complex samples. Often the DDC converts from the raw radio frequency or intermediate frequency down to a complex baseband signal.
Architecture
A DDC consists of three subcomponents: a direct digital synthesizer (DDS), a low-pass filter (LPF), and a downsampler (which may be integrated into the low-pass filter).
The DDS generates a complex sinusoid at the intermediate frequency (IF). Multiplying the input signal by this sinusoid creates images centered at the sum and difference frequencies (which follows from the frequency-shifting property of the Fourier transform). The low-pass filter passes the difference (i.e. baseband) component while rejecting the sum-frequency image, resulting in a complex baseband representation of the original signal. Assuming a judicious choice of IF and LPF bandwidth, the complex baseband signal is mathematically equivalent to the original signal. In its new form, it can readily be downsampled and is more convenient for many DSP algorithms.
Any suitable low-pass filter can be used including FIR, IIR and CIC filters. The most common choice is a FIR filter for low amounts of decimation (less than ten) or a CIC filter followed by a FIR filter for larger downsampling ratios.
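The following NumPy sketch wires the three stages together for a real-valued input, using a windowed-sinc FIR low-pass filter for simplicity rather than a CIC stage; the function name, filter length, and test frequencies are arbitrary illustrative choices.

import numpy as np

def ddc(x, fs, f_if, decim, num_taps=129):
    """Digital down-converter sketch: mix a real IF signal to complex
    baseband with a DDS, low-pass filter, then downsample."""
    n = np.arange(len(x))
    # DDS: complex sinusoid at the intermediate frequency.
    lo = np.exp(-2j * np.pi * f_if * n / fs)
    mixed = x * lo  # spectrum images at (f - f_if) and (f + f_if)
    # Windowed-sinc low-pass FIR with cutoff at half the output sample rate.
    fc = 0.5 / decim  # normalized cutoff in cycles per input sample
    t = np.arange(num_taps) - (num_taps - 1) / 2
    h = 2 * fc * np.sinc(2 * fc * t) * np.hamming(num_taps)
    baseband = np.convolve(mixed, h, mode="same")
    return baseband[::decim]  # downsample to fs / decim

# Example: a tone 1 kHz above a 20 kHz IF, sampled at 100 kHz, comes out as a
# 1 kHz complex exponential at a 10 kHz output rate.
fs, f_if = 100e3, 20e3
n = np.arange(10000)
x = np.cos(2 * np.pi * (f_if + 1e3) * n / fs)
y = ddc(x, fs, f_if, decim=10)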
Variations on the DDC
Several variations on the DDC are useful, including many that input a feedback signal into the DDS. These include:
Decision-directed carrier-recovery phase-locked loops, in which the I and Q outputs are compared to the nearest ideal constellation point of a PSK signal and the resulting error signal is filtered and fed back into the DDS
A Costas loop, in which the I and Q outputs are multiplied together and the product is filtered and fed back into the DDS
|