https://en.wikipedia.org/wiki/List%20of%20algorithm%20general%20topics
|
This is a list of algorithm general topics.
Analysis of algorithms
Ant colony algorithm
Approximation algorithm
Best and worst cases
Big O notation
Combinatorial search
Competitive analysis
Computability theory
Computational complexity theory
Embarrassingly parallel problem
Emergent algorithm
Evolutionary algorithm
Fast Fourier transform
Genetic algorithm
Graph exploration algorithm
Heuristic
Hill climbing
Implementation
Las Vegas algorithm
Lock-free and wait-free algorithms
Monte Carlo algorithm
Numerical analysis
Online algorithm
Polynomial time approximation scheme
Problem size
Pseudorandom number generator
Quantum algorithm
Random-restart hill climbing
Randomized algorithm
Running time
Sorting algorithm
Search algorithm
Stable algorithm (disambiguation)
Super-recursive algorithm
Tree search algorithm
See also
List of algorithms for specific algorithms
List of computability and complexity topics for more abstract theory
List of complexity classes, complexity class
List of data structures.
Mathematics-related lists
|
https://en.wikipedia.org/wiki/Component-based%20software%20engineering
|
Component-based software engineering (CBSE), also called component-based development (CBD), is a style of software engineering that aims to build software out of loosely-coupled, modular components. It emphasizes the separation of concerns among different parts of a software system.
Definition and characteristics of components
An individual software component is a software package, a web service, a web resource, or a module that encapsulates a set of related functions or data.
Components communicate with each other via interfaces. Each component provides an interface (called a provided interface) through which other components can use it. When a component uses another component's interface, that interface is called a used interface.
In the UML illustrations in this article, provided interfaces are represented by lollipop-symbols, while used interfaces are represented by open socket symbols.
Components must be substitutable, meaning that a component must be replaceable by another one having the same interfaces without breaking the rest of the system.
Components should be reusable.
Component-based usability testing should be considered when software components directly interact with users.
Components should be:
fully documented
thoroughly tested
robust - with comprehensive input-validity checking
able to pass back appropriate error messages or return codes
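In code, the substitutability and interface separation described above are commonly expressed by programming against an abstract interface. The sketch below is only an illustration (Python, with hypothetical component names that do not come from this article): a provided interface, one implementation, and a client whose used interface is the abstraction, so any conforming implementation can be swapped in.

```python
from abc import ABC, abstractmethod

# Hypothetical provided interface: any component implementing it can be
# substituted for any other without changing client code.
class SpellChecker(ABC):
    @abstractmethod
    def check(self, text: str) -> list[str]:
        """Return the list of misspelled words in `text`."""

class SimpleSpellChecker(SpellChecker):
    def __init__(self, dictionary: set[str]):
        self._dictionary = dictionary

    def check(self, text: str) -> list[str]:
        return [w for w in text.split() if w.lower() not in self._dictionary]

# Client component: it depends only on the interface (its "used interface"),
# so every SpellChecker implementation is substitutable.
class Editor:
    def __init__(self, checker: SpellChecker):
        self._checker = checker

    def report(self, text: str) -> str:
        errors = self._checker.check(text)
        return "ok" if not errors else "misspelled: " + ", ".join(errors)

if __name__ == "__main__":
    editor = Editor(SimpleSpellChecker({"hello", "world"}))
    print(editor.report("hello wrld"))   # misspelled: wrld
```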
History
The idea that software should be componentized - built from prefabricated components - first became prominent with Douglas McIlroy's address at the NATO conference on software engineering in Garmisch, Germany, 1968, titled Mass Produced Software Components. The conference set out to counter the so-called software crisis. McIlroy's subsequent inclusion of pipes and filters into the Unix operating system was the first implementation of an infrastructure for this idea.
Brad Cox of Stepstone largely defined the modern concept of a software component. He called them Software ICs and set out to crea
|
https://en.wikipedia.org/wiki/Mathematical%20folklore
|
In common mathematical parlance, a mathematical result is called folklore if it is an unpublished result with no clear originator, but which is well-circulated and believed to be true among the specialists. More specifically, folk mathematics, or mathematical folklore, is the body of theorems, definitions, proofs, facts or techniques that circulate among mathematicians by word of mouth, but have not yet appeared in print, either in books or in scholarly journals.
Quite important at times for researchers are folk theorems, which are results known, at least to experts in a field, and are considered to have established status, though not published in complete form. Sometimes, these are only alluded to in the public literature.
An example is a book of exercises, described on the back cover:
Another distinct category is well-knowable mathematics, a term introduced by John Conway. These mathematical matters are known and factual, but not in active circulation in relation with current research (i.e., untrendy). Both of these concepts are attempts to describe the actual context in which research work is done.
Some people, in particular non-mathematicians, use the term folk mathematics to refer to the informal mathematics studied in many ethno-cultural studies of mathematics. The term "mathematical folklore" can also be used within the mathematical community to describe various aspects of its esoteric culture and practices (e.g., slang, proverbs, limericks, jokes).
Stories, sayings and jokes
Mathematical folklore can also refer to the unusual (and possibly apocryphal) stories or jokes involving mathematicians or mathematics that are told verbally in mathematics departments. Compilations include tales collected in G. H. Hardy's A Mathematician's Apology, among others; examples include:
Srinivasa Ramanujan's taxicab numbers
Galileo dropping weights from the Leaning Tower of Pisa.
An apple falling on Isaac Newton's head to inspire his theory of gravitation.
The drinking,
|
https://en.wikipedia.org/wiki/Degree%20%28angle%29
|
A degree (in full, a degree of arc, arc degree, or arcdegree), usually denoted by ° (the degree symbol), is a measurement of a plane angle in which one full rotation is 360 degrees.
It is not an SI unit—the SI unit of angular measure is the radian—but it is mentioned in the SI brochure as an accepted unit. Because a full rotation equals 2π radians, one degree is equivalent to π/180 radians.
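As a quick numerical illustration of this relationship (standard library only; not part of the original article):

```python
import math

one_degree_in_radians = math.pi / 180               # ≈ 0.017453292519943295
print(one_degree_in_radians == math.radians(1))     # True: same conversion factor as the stdlib
print(math.isclose(360 * one_degree_in_radians,
                   2 * math.pi))                    # True: 360 degrees = one full rotation
```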
History
The original motivation for choosing the degree as a unit of rotations and angles is unknown. One theory states that it is related to the fact that 360 is approximately the number of days in a year. Ancient astronomers noticed that the sun, which follows the ecliptic path over the course of the year, seems to advance along that path by approximately one degree each day. Some ancient calendars, such as the Persian calendar and the Babylonian calendar, used 360 days for a year. The use of a calendar with 360 days may be related to the use of sexagesimal numbers.
Another theory is that the Babylonians subdivided the circle using the angle of an equilateral triangle as the basic unit, and further subdivided the latter into 60 parts following their sexagesimal numeric system. The earliest trigonometry, used by the Babylonian astronomers and their Greek successors, was based on chords of a circle. A chord of length equal to the radius made a natural base quantity. One sixtieth of this, using their standard sexagesimal divisions, was a degree.
Aristarchus of Samos and Hipparchus seem to have been among the first Greek scientists to exploit Babylonian astronomical knowledge and techniques systematically. Timocharis, Aristarchus, Aristillus, Archimedes, and Hipparchus were the first Greeks known to divide the circle in 360 degrees of 60 arc minutes. Eratosthenes used a simpler sexagesimal system dividing a circle into 60 parts.
Another motivation for choosing the number 360 may have been that it is readily divisible: 360 has 24 divisors, making it one of only 7 numbers such th
|
https://en.wikipedia.org/wiki/Conic%20constant
|
In geometry, the conic constant (or Schwarzschild constant, after Karl Schwarzschild) is a quantity describing conic sections, and is represented by the letter K. The constant is given by K = −e², where e is the eccentricity of the conic section.
The equation for a conic section with apex at the origin and tangent to the y axis is
y² − 2Rx + (K + 1)x² = 0,
alternately
x = y² / (R + √(R² − (K + 1)y²)),
where R is the radius of curvature at x = 0.
This formulation is used in geometric optics to specify oblate elliptical (K > 0), spherical (K = 0), prolate elliptical (−1 < K < 0), parabolic (K = −1), and hyperbolic (K < −1) lens and mirror surfaces. When the paraxial approximation is valid, the optical surface can be treated as a spherical surface with the same radius.
Some non-optical design references use the letter p as the conic constant. In these cases, p = K + 1.
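As an illustration (not part of the original article), the second form of the equation above can be evaluated directly, treating x as the sag of an optical surface of radius of curvature R and conic constant K; the example values below are made up.

```python
import math

def conic_sag(y: float, R: float, K: float) -> float:
    """Sag x of a conic surface at height y: x = y^2 / (R + sqrt(R^2 - (K + 1) * y^2))."""
    return y**2 / (R + math.sqrt(R**2 - (K + 1) * y**2))

# Example: a parabolic mirror (K = -1) with a 200 mm radius of curvature.
print(conic_sag(25.0, 200.0, -1.0))   # 1.5625, i.e. y^2 / (2R), as expected for a parabola
```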
|
https://en.wikipedia.org/wiki/Outline%20of%20calculus
|
Calculus is a branch of mathematics focused on limits, functions, derivatives, integrals, and infinite series. This subject constitutes a major part of contemporary mathematics education. Calculus has widespread applications in science, economics, and engineering and can solve many problems for which algebra alone is insufficient.
Branches of calculus
Differential calculus
Integral calculus
Multivariable calculus
Fractional calculus
Differential Geometry
History of calculus
History of calculus
Important publications in calculus
General calculus concepts
Continuous function
Derivative
Fundamental theorem of calculus
Integral
Limit
Non-standard analysis
Partial derivative
Infinite Series
Calculus scholars
Sir Isaac Newton
Gottfried Leibniz
Calculus lists
List of calculus topics
See also
Glossary of calculus
Table of mathematical symbols
|
https://en.wikipedia.org/wiki/Zero-order%20hold
|
The zero-order hold (ZOH) is a mathematical model of the practical signal reconstruction done by a conventional digital-to-analog converter (DAC). That is, it describes the effect of converting a discrete-time signal to a continuous-time signal by holding each sample value for one sample interval. It has several applications in electrical communication.
Time-domain model
A zero-order hold reconstructs the following continuous-time waveform from a sample sequence x[n], assuming one sample per time interval T:
x_ZOH(t) = Σ_n x[n] · rect((t − T/2 − nT) / T),
where rect(·) is the rectangular function.
The function rect((t − T/2) / T) is depicted in Figure 1, and x_ZOH(t) is the piecewise-constant signal depicted in Figure 2.
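A small numerical sketch of this time-domain model (the sample values and interval below are chosen only for illustration): each sample x[n] is simply held constant for one interval of length T.

```python
import numpy as np

def zero_order_hold(samples, T: float, points_per_interval: int = 100):
    """Return (t, x) for the ZOH output: x(t) = x[n] for nT <= t < (n+1)T."""
    t = np.linspace(0.0, len(samples) * T,
                    len(samples) * points_per_interval, endpoint=False)
    held = np.floor(t / T).astype(int)          # index of the sample held at time t
    return t, np.asarray(samples, dtype=float)[held]

x = [0.0, 1.0, 0.5, -0.25]                      # example sample sequence x[n]
t, x_zoh = zero_order_hold(x, T=1.0)
print(x_zoh[:3], x_zoh[100:103])                # first interval holds 0.0, second holds 1.0
```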
Frequency-domain model
The equation above for the output of the ZOH can also be modeled as the output of a linear time-invariant filter with impulse response equal to a rect function, and with input being a sequence of dirac impulses scaled to the sample values. The filter can then be analyzed in the frequency domain, for comparison with other reconstruction methods such as the Whittaker–Shannon interpolation formula suggested by the Nyquist–Shannon sampling theorem, or such as the first-order hold or linear interpolation between sample values.
In this method, a sequence of Dirac impulses, xs(t), representing the discrete samples, x[n], is low-pass filtered to recover a continuous-time signal, x(t).
Even though this is not what a DAC does in reality, the DAC output can be modeled by applying the hypothetical sequence of dirac impulses, xs(t), to a linear, time-invariant filter with such characteristics (which, for an LTI system, are fully described by the impulse response) so that each input impulse results in the correct constant pulse in the output.
Begin by defining a continuous-time signal from the sample values, as above but using delta functions instead of rect functions:
x_s(t) = (1/T) Σ_n x[n] · δ(t − nT).
The scaling by 1/T, which arises naturally by time-scaling the delta function, has the result that the mean value of xs(t) is equal to the mean v
|
https://en.wikipedia.org/wiki/Metric%20tensor
|
In the mathematical field of differential geometry, a metric tensor (or simply metric) is an additional structure on a manifold M (such as a surface) that allows defining distances and angles, just as the inner product on a Euclidean space allows defining distances and angles there. More precisely, a metric tensor at a point p of M is a bilinear form defined on the tangent space at p (that is, a bilinear function that maps pairs of tangent vectors to real numbers), and a metric tensor on M consists of a metric tensor at each point p of M that varies smoothly with p.
A metric tensor g is positive-definite if g(v, v) > 0 for every nonzero vector v. A manifold equipped with a positive-definite metric tensor is known as a Riemannian manifold. Such a metric tensor can be thought of as specifying infinitesimal distance on the manifold. On a Riemannian manifold M, the length of a smooth curve between two points p and q can be defined by integration, and the distance between p and q can be defined as the infimum of the lengths of all such curves; this makes M a metric space. Conversely, the metric tensor itself is the derivative of the distance function (taken in a suitable manner).
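As a numerical illustration of the length definition above (the helper names, metric, and curve are assumptions made here, not taken from the article), the length of a curve γ can be approximated by integrating sqrt(g(γ′(s), γ′(s))) along the curve:

```python
import numpy as np

def curve_length(gamma, metric, t0=0.0, t1=1.0, steps=20_000):
    """Approximate length of gamma on [t0, t1] under the metric tensor field `metric`.

    gamma(s)  -> coordinates of a point (NumPy array)
    metric(p) -> symmetric positive-definite matrix g at the point p
    """
    s = np.linspace(t0, t1, steps)
    ds = s[1] - s[0]
    pts = np.array([gamma(v) for v in s])
    vel = np.gradient(pts, ds, axis=0)                              # numerical gamma'(s)
    speeds = np.array([np.sqrt(v @ metric(p) @ v) for p, v in zip(pts, vel)])
    return float(np.sum((speeds[:-1] + speeds[1:]) / 2) * ds)       # trapezoidal rule

# Sanity check with the Euclidean metric: half of the unit circle has length pi.
euclidean = lambda p: np.eye(2)
half_circle = lambda s: np.array([np.cos(np.pi * s), np.sin(np.pi * s)])
print(curve_length(half_circle, euclidean))                         # ≈ 3.14159
```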
While the notion of a metric tensor was known in some sense to mathematicians such as Gauss from the early 19th century, it was not until the early 20th century that its properties as a tensor were understood by, in particular, Gregorio Ricci-Curbastro and Tullio Levi-Civita, who first codified the notion of a tensor. The metric tensor is an example of a tensor field.
The components of a metric tensor in a coordinate basis take on the form of a symmetric matrix whose entries transform covariantly under changes to the coordinate system. Thus a metric tensor is a covariant symmetric tensor. From the coordinate-independent point of view, a metric tensor field is defined to be a nondegenerate symmetric bilinear form on each tangent space that varies smoothly from point to point.
Introduction
Carl Friedrich Gauss i
|
https://en.wikipedia.org/wiki/Comstock%E2%80%93Needham%20system
|
The Comstock–Needham system is a naming system for insect wing veins, devised by John Comstock and George Needham in 1898. It was an important step in showing the homology of all insect wings. This system was based on Needham's pretracheation theory that was later discredited by Frederic Charles Fraser in 1938.
Vein terminology
Longitudinal veins
The Comstock and Needham system attributes different names to the veins on an insect's wing. From the anterior (leading) edge of the wing towards the posterior (rear), the major longitudinal veins are named:
costa C, meaning rib
subcosta Sc, meaning below the rib
radius R, in analogy with a bone in the forearm, the radius
media M, meaning middle
cubitus Cu, meaning elbow
anal veins A, in reference to its posterior location
Apart from the costal and the anal veins, each vein can be branched, in which case the branches are numbered from anterior to posterior. For example, the two branches of the subcostal vein will be called Sc1 and Sc2.
The radius typically branches once near the base, producing anteriorly the R1 and posteriorly the radial sector Rs. The radial sector may fork twice.
The media may also fork twice, therefore having four branches reaching the wing margin.
According to the Comstock–Needham system, the cubitus forks once, producing the cubital veins Cu1 and Cu2.
According to some other authorities, Cu1 may fork again, producing the Cu1a and Cu1b.
As there are several anal veins, they are called A1, A2, and so on. They are usually unforked.
Crossveins
Crossveins link the longitudinal veins, and are named accordingly (for example, the medio-cubital crossvein is termed m-cu). Some crossveins have their own name, like the humeral crossvein h and the sectoral crossvein s.
Cell terminology
The cells are named after the vein on the anterior side; for instance, the cell between Sc2 and R1 is called Sc2.
In the case where two cells are separated by a crossvein but have the same anterior longitudinal vein, they
|
https://en.wikipedia.org/wiki/Location%20transparency
|
In computer networks, location transparency is the use of names to identify network resources, rather than their actual location. For example, files are accessed by a unique file name, but the actual data is stored in physical sectors scattered around a disk in either the local computer or in a network. In a location transparency system, the actual location where the file is stored doesn't matter to the user. A distributed system will need to employ a networked scheme for naming resources.
The main benefit of location transparency is that it no longer matters where the resource is located. Depending on how the network is set up, the user may be able to obtain files that reside on another computer connected to that network. This means that the location of a resource doesn't matter to either the software developers or the end-users. This creates the illusion that the entire system is located in a single computer, which greatly simplifies software development.
An additional benefit is the flexibility it provides. System resources can be moved to a different computer at any time without disrupting any software systems running on them. By simply updating the location associated with the named resource, every program using that resource will be able to find it. Location transparency also makes resources easier to use, since the data can be accessed by almost anyone who can connect to the Internet, knows the right file names, and has the proper security credentials.
See also
Transparency (computing)
|
https://en.wikipedia.org/wiki/Digital%20clock%20manager
|
A digital clock manager (DCM) is an electronic component available on some field-programmable gate arrays (FPGAs) (notably ones produced by Xilinx). A digital clock manager is useful for manipulating clock signals inside the FPGA, and to avoid clock skew which would introduce errors in the circuit.
Uses
Digital clock managers have the following applications:
Multiplying or dividing an incoming clock (which can come from outside the FPGA or from a Digital Frequency Synthesizer [DFS]).
Making sure the clock has a steady duty cycle.
Adding a phase shift with the additional use of a delay-locked loop.
Eliminating clock skew within an FPGA design.
See also
Phase-locked loop
|
https://en.wikipedia.org/wiki/Mechatronics
|
Mechatronics engineering, also called mechatronics, is an interdisciplinary branch of engineering that focuses on the integration of mechanical engineering, electrical engineering, electronic engineering and software engineering, and also includes a combination of robotics, computer science, telecommunications, systems, control, and product engineering.
As technology advances over time, various subfields of engineering have succeeded in both adapting and multiplying. The intention of mechatronics is to produce a design solution that unifies each of these various subfields. Originally, the field of mechatronics was intended to be nothing more than a combination of mechanics, electrical and electronics, hence the name being a portmanteau of the words "mechanics" and "electronics"; however, as the complexity of technical systems continued to evolve, the definition had been broadened to include more technical areas.
The word mechatronics originated in Japanese-English and was created by Tetsuro Mori, an engineer of Yaskawa Electric Corporation. The word mechatronics was registered as a trademark by the company in Japan with the registration number "46-32714" in 1971. The company later released the right to use the word to the public, and the word began being used globally. Currently the word is translated into many languages and is considered an essential term for advanced automated industry.
Many people treat mechatronics as a modern buzzword synonymous with automation, robotics and electromechanical engineering.
French standard NF E 01-010 gives the following definition: "approach aiming at the synergistic integration of mechanics, electronics, control theory, and computer science within product design and manufacturing, in order to improve and/or optimize its functionality".
History
The word mechatronics was registered as trademark by the company in Japan with the registration number of "46-32714" in 1971. The company later released the right to use the word t
|
https://en.wikipedia.org/wiki/MsQuic
|
MsQuic is a free and open source implementation of the IETF QUIC protocol written in C that is officially supported on the Microsoft Windows (including Server), Linux, and Xbox platforms. The project also provides libraries for macOS and Android, which are unsupported. It is designed to be a cross-platform general purpose QUIC library optimized for client and server applications benefitting from maximal throughput and minimal latency. By the end of 2021 the codebase had over 200,000 lines of production code, with 50,000 lines of "core" code, sharable across platforms. The source code is licensed under MIT License and available on GitHub.
Among its features are, in part, support for asynchronous IO, receive-side scaling (RSS), UDP send and receive coalescing, and connection migrations that persist connections between client and server to overcome client IP or port changes, such as when moving throughout mobile networks.
Both the HTTP/3 and SMB stacks of Microsoft Windows leverage MsQuic, with msquic.sys providing kernel-mode functionality. Being dependent upon Schannel for TLS 1.3, kernel mode therefore does not support 0-RTT.
User-mode programs can implement MsQuic, with support for 0-RTT, through msquic.dll, which can be built from source code or downloaded as a shared library through binary releases on the repository.
Its support for the Microsoft Game Development Kit makes MsQuic possible on both Xbox and Windows.
See also
Transmission Control Protocol
User Datagram Protocol
HTTP/2
XDP for Windows
|
https://en.wikipedia.org/wiki/Chiplet
|
A chiplet is a tiny integrated circuit (IC) that contains a well-defined subset of functionality. It is designed to be combined with other chiplets on an interposer in a single package. A set of chiplets can be implemented in a mix-and-match "Lego-like" assembly. This provides several advantages over a traditional system on chip:
Reusable IP (intellectual property): the same chiplet can be used in many different devices
Heterogeneous integration: chiplets can be fabricated with different processes, materials, and nodes, each optimized for its particular function
Known good die: chiplets can be tested before assembly, improving the yield of the final device
Multiple chiplets working together in a single integrated circuit may be called a multi-chip module, hybrid IC, 2.5D IC, or an advanced package.
Chiplets may be connected with standards such as UCIe, bunch of wires (BoW), OpenHBI, and OIF XSR.
The term was coined by University of California, Berkeley professor John Wawrzynek as a component of the RAMP Project (research accelerator for multiple processors) in a 2006 extension for the Department of Energy, as was the RISC-V architecture.
|
https://en.wikipedia.org/wiki/Time-driven%20priority
|
Time-driven priority (TDP) is a synchronous packet scheduling technique that implements UTC-based pipeline forwarding and can be combined with conventional IP routing to achieve higher flexibility than another pipeline forwarding implementation known as time-driven switching (TDS) or fractional lambda switching (FλS). Packets entering a switch from the same input port during the same time frame (TF) can be sent out from different output ports, according to the rules that drive IP packet routing. Operation in accordance with pipeline forwarding principles ensures deterministic quality of service and low-complexity packet scheduling. Specifically, packets scheduled for transmission during a TF are given maximum priority; if resources have been properly reserved, all scheduled packets will be at the output port and transmitted before their TF ends.
Various aspects of the technology are covered by several patents issued by both the United States Patent and Trademark Office and the European Patent Office.
|
https://en.wikipedia.org/wiki/Hyperpalatable%20food
|
Hyperpalatable food (HPF) combines high levels of fat, sugar, sodium, or carbohydrates to trigger the brain's reward system, encouraging excessive eating. The concept of hyperpalatability is foundational to ultra-processed foods, which are usually engineered to have enjoyable qualities of sweetness, saltiness, or richness. Hyperpalatable foods can stimulate the release of metabolic, stress, and appetite hormones that play a role in cravings and may interfere with the body's ability to regulate appetite and satiety.
Definition
Researchers have proposed specific criteria for hyperpalatability based on the percentage of calories from fat, sugar, and salt in a food item. A team at the University of Kansas analysed databases from the United States Department of Agriculture to identify the most common descriptive definitions for hyperpalatable foods. They found three combinations that most frequently defined hyperpalatable foods:
Foods with more than 25 per cent of calories from fat plus more than 0.30 per cent sodium by weight (often including bacon, cheese, and salami).
Foods with more than 20 per cent of calories from fat and more than 20 per cent of calories from simple sugars (typically cake, ice cream, chocolate).
Foods with more than 40 per cent of calories from carbohydrates and more than 0.20 per cent sodium by weight (many brands of pretzels, popcorn, and crackers).
The proportion of foods sold in the United States fitting this definition of hyperpalatable increased by twenty per cent between 1988 and 2018.
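A minimal sketch of the three threshold combinations listed above (the field names, units, and example values are assumptions made here for illustration, not part of the study):

```python
def is_hyperpalatable(kcal_total: float, kcal_fat: float, kcal_sugar: float,
                      kcal_carb: float, sodium_g: float, weight_g: float) -> bool:
    """Check a food item against the three fat/sugar/sodium combinations described above."""
    pct_fat = kcal_fat / kcal_total * 100
    pct_sugar = kcal_sugar / kcal_total * 100
    pct_carb = kcal_carb / kcal_total * 100
    pct_sodium_by_weight = sodium_g / weight_g * 100

    fat_and_sodium = pct_fat > 25 and pct_sodium_by_weight > 0.30
    fat_and_sugar = pct_fat > 20 and pct_sugar > 20
    carb_and_sodium = pct_carb > 40 and pct_sodium_by_weight > 0.20
    return fat_and_sodium or fat_and_sugar or carb_and_sodium

# Made-up values loosely resembling salted crackers (per 100 g):
print(is_hyperpalatable(kcal_total=430, kcal_fat=90, kcal_sugar=20,
                        kcal_carb=290, sodium_g=0.6, weight_g=100))   # True (carbs + sodium)
```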
Neurobiology
Hyperpalatable foods have been shown to activate the reward regions of the brain, such as the hypothalamus, that influence food choices and eating behaviours. When these foods are consumed, the neurons in the reward region become very active, creating highly positive feelings of pleasure so that people want to keep seeking these foods regularly. Hyperpalatable foods can also modify the release of hormones that regulate appetite, stress,
|
https://en.wikipedia.org/wiki/Network%20eavesdropping
|
Network eavesdropping, also known as an eavesdropping attack, sniffing attack, or snooping attack, is a method that retrieves user information through the internet. The attack happens on electronic devices like computers and smartphones, and typically occurs over unsecured networks, such as public wifi connections or shared electronic devices. Eavesdropping attacks through the network are considered one of the most urgent threats in industries that rely on collecting and storing data. Network eavesdropping techniques can also be used legitimately to improve information security.
A typical network eavesdropper may be called a black-hat hacker and is considered a low-level attacker, since it is relatively simple to eavesdrop on a network successfully. The threat of network eavesdroppers is a growing concern. Public research and discussion cover, for instance, types of eavesdropping, and open-source and commercial tools to prevent it. Models against network eavesdropping attempts are built and developed as privacy is increasingly valued. Cases of successful network eavesdropping attempts, along with related laws and policies involving the National Security Agency, are discussed below. Relevant laws include the Electronic Communications Privacy Act and the Foreign Intelligence Surveillance Act.
Types of attacks
Types of network eavesdropping include intervening in the process of decryption of messages on communication systems, attempting to access documents stored in a network system, and listening on electronic devices. Types include electronic performance monitoring and control systems, keystroke logging, man-in-the-middle attacks, observing exit nodes on a network, and Skype & Type.
Electronic performance monitoring and control systems (EPMCSs)
Electronic performance monitoring and control systems are used by employees or companies and organizations to collect, store, analyze, and report actions or performances of employers when they are working. Th
|
https://en.wikipedia.org/wiki/Global%20network
|
A global network is any communication network which spans the entire Earth. The term, as used in this article refers in a more restricted way to bidirectional communication networks, and to technology-based networks. Early networks such as international mail and unidirectional communication networks, such as radio and television, are described elsewhere.
The first global network was established using electrical telegraphy and global span was achieved in 1899. The telephony network was the second to achieve global status, in the 1950s. More recently, interconnected IP networks (principally the Internet, with an estimated 2.5 billion users worldwide in 2014), and the GSM mobile communication network (with over 6 billion users worldwide in 2014) form the largest global networks of all.
Setting up global networks requires immensely costly and lengthy efforts lasting for decades. Elaborate interconnections, switching and routing devices, laying out physical carriers of information, such as land and submarine cables and earth stations must be set in operation. In addition, international communication protocols, legislation and agreements are involved.
Global networks might also refer to networks of individuals (such as scientists), communities (such as cities) and organizations (such as civil organizations) worldwide which, for instance, might have formed for the management, mitigation and resolution of global issues.
Satellite global networks
Communication satellites are an important part of global networks. However, there are specific low Earth orbit (LEO) global satellite constellations, such as Iridium, Globalstar and Orbcomm, which consist of dozens of similar satellites placed in orbit at regularly spaced positions, forming a mesh network and sometimes sending and receiving information directly among themselves. Using VSAT technology, satellite internet access has become possible.
Mobile wireless networks
It is estimated that 80% of the global mobile ma
|
https://en.wikipedia.org/wiki/Idiobiology
|
Idiobiology is a branch of biology which studies individual organisms, or the study of organisms as individuals.
|
https://en.wikipedia.org/wiki/Integrated%20circuit%20design
|
Integrated circuit design, or IC design, is a sub-field of electronics engineering, encompassing the particular logic and circuit design techniques required to design integrated circuits, or ICs. ICs consist of miniaturized electronic components built into an electrical network on a monolithic semiconductor substrate by photolithography.
IC design can be divided into the broad categories of digital and analog IC design. Digital IC design produces components such as microprocessors, FPGAs, memories (RAM, ROM, and flash) and digital ASICs. Digital design focuses on logical correctness, maximizing circuit density, and placing circuits so that clock and timing signals are routed efficiently. Analog IC design also has specializations in power IC design and RF IC design. Analog IC design is used in the design of op-amps, linear regulators, phase locked loops, oscillators and active filters. Analog design is more concerned with the physics of the semiconductor devices such as gain, matching, power dissipation, and resistance. Fidelity of analog signal amplification and filtering is usually critical, and as a result analog ICs use larger area active devices than digital designs and are usually less dense in circuitry.
Modern ICs are enormously complicated. An average desktop computer chip, as of 2015, has over 1 billion transistors. The rules for what can and cannot be manufactured are also extremely complex. Common IC processes of 2015 have more than 500 rules. Furthermore, since the manufacturing process itself is not completely predictable, designers must account for its statistical nature. The complexity of modern IC design, as well as market pressure to produce designs rapidly, has led to the extensive use of automated design tools in the IC design process. In short, the design of an IC using EDA software is the design, test, and verification of the instructions that the IC is to carry out.
Fundamentals
Integrated circuit design involves the creation of ele
|
https://en.wikipedia.org/wiki/Campenot%20chamber
|
A Campenot chamber is a three-chamber petri dish culture system devised by Robert Campenot to study neurons. Commonly used in neurobiology, the neuron soma or cell body is physically compartmentalized from its axons allowing for spatial segregation during investigation. This separation, typically done with a fluid impermeable barrier, can be used to study nerve growth factors (NGF). Neurons are particularly sensitive to environmental cues such as temperature, pH, and oxygen concentration which can affect their behavior.
The Campenot chamber can be used to study spatial and temporal axon guidance in both healthy controls and in cases of neuronal injury or neurodegeneration. Campenot concluded that neuron survival and growth depend on local nerve growth factors.
Structure
The Campenot chamber is made up of three chambers divided by Teflon fibers. These fibers are added to a petri dish coated in collagen with 20 scratches, spaced 200 μm apart, that become the parallel tracks for axons to grow. There is also a layer of grease that works to seal the Teflon to the neuron and separates the axon processes from the cell body. Refer to Side View of Campenot Chamber figure.
History of use
The uniqueness of the design allows for biochemical analysis and application of a stimulus at either distal or proximal ends. Campenot chambers have been used for a variety of studies, including culturing of iPSC-derived motor neurons to isolate axonal RNA, which can then be used for molecular analysis. The chamber has also been modified to study degeneration and apoptosis of cultured hippocampal neurons induced by amyloid beta. A modified 2-chamber system was used to examine the axonal transport of herpes simplex virus by examining the transmission of the virus from axon to epidermal cells. Through this study, the virus was found to undergo a specialized mode of viral transport, assembly and sensory neuron egress.
Recent techniques in lithography have made these chambers a more appea
|
https://en.wikipedia.org/wiki/Newton%27s%20theorem%20of%20revolving%20orbits
|
In classical mechanics, Newton's theorem of revolving orbits identifies the type of central force needed to multiply the angular speed of a particle by a factor k without affecting its radial motion (Figures 1 and 2). Newton applied his theorem to understanding the overall rotation of orbits (apsidal precession, Figure 3) that is observed for the Moon and planets. The term "radial motion" signifies the motion towards or away from the center of force, whereas the angular motion is perpendicular to the radial motion.
Isaac Newton derived this theorem in Propositions 43–45 of Book I of his Philosophiæ Naturalis Principia Mathematica, first published in 1687. In Proposition 43, he showed that the added force must be a central force, one whose magnitude depends only upon the distance r between the particle and a point fixed in space (the center). In Proposition 44, he derived a formula for the force, showing that it was an inverse-cube force, one that varies as the inverse cube of r. In Proposition 45 Newton extended his theorem to arbitrary central forces by assuming that the particle moved in nearly circular orbit.
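The inverse-cube character of the added force can be made explicit from the radial equation of motion; the short derivation below is a sketch in modern notation (m is the particle's mass, L its angular momentum; these symbols are chosen here, not taken from the Principia), comparing two particles with identical radial motion whose angular speeds differ by the factor k:

```latex
\begin{align*}
  m\ddot r &= F(r) + \frac{L^2}{m r^3}
    && \text{radial equation under a central force } F(r) \\
  F_2(r) - F_1(r) &= \frac{L_1^2 - L_2^2}{m r^3}
                   = \frac{(1 - k^2)\,L_1^2}{m r^3}
    && \text{same radial motion, } L_2 = k L_1 .
\end{align*}
```

The difference depends only on 1/r³, attractive for k > 1 and repulsive for k < 1, which is the inverse-cube force of Proposition 44.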
As noted by astrophysicist Subrahmanyan Chandrasekhar in his 1995 commentary on Newton's Principia, this theorem remained largely unknown and undeveloped for over three centuries. Since 1997, the theorem has been studied by Donald Lynden-Bell and collaborators. Its first exact extension came in 2000 with the work of Mahomed and Vawda.
Historical context
The motion of astronomical bodies has been studied systematically for thousands of years. The stars were observed to rotate uniformly, always maintaining the same relative positions to one another. However, other bodies were observed to wander against the background of the fixed stars; most such bodies were called planets after the Greek word "πλανήτοι" (planētoi) for "wanderers". Although they generally move in the same direction along a path across the sky (the ecliptic), individual planets sometimes re
|
https://en.wikipedia.org/wiki/Network%20configuration%20and%20change%20management
|
Network configuration and change management (NCCM) is a discipline in information technology. Organizations are using NCCM as a way to:
automate changes;
reduce network downtime;
back up and restore network device configurations;
meet compliance.
See also
Change Management (ITSM)
Computer networking
Information technology management
|
https://en.wikipedia.org/wiki/Multipacket%20reception
|
In networking, multipacket reception refers to the capability of networking nodes to decode/demodulate signals from a number of source nodes concurrently. In wireless communications, multipacket reception is achieved using physical layer technologies like orthogonal CDMA, MIMO and space–time codes.
See also
MIMO – Wireless communication systems having multiple antennas at both transmitter and receiver.
CDMA – Code division multiple access
External links
http://acronyms.thefreedictionary.com/MPR
Computer networking
|
https://en.wikipedia.org/wiki/Interspecific%20competition
|
Interspecific competition, in ecology, is a form of competition in which individuals of different species compete for the same resources in an ecosystem (e.g. food or living space). This can be contrasted with mutualism, a type of symbiosis. Competition between members of the same species is called intraspecific competition.
If a tree species in a dense forest grows taller than surrounding tree species, it is able to absorb more of the incoming sunlight. However, less sunlight is then available for the trees that are shaded by the taller tree, resulting in interspecific competition. Leopards and lions can also be in interspecific competition, since both species feed on the same prey and can be negatively impacted by the presence of the other because they will have less food.
Competition is only one of many interacting biotic and abiotic factors that affect community structure. Moreover, competition is not always a straightforward, direct, interaction. Interspecific competition may occur when individuals of two separate species share a limiting resource in the same area. If the resource cannot support both populations, then lowered fecundity, growth, or survival may result in at least one species. Interspecific competition has the potential to alter populations, communities and the evolution of interacting species. On an individual organism level, competition can occur as interference or exploitative competition.
Types
All of the types described here can also apply to intraspecific competition, that is, competition among individuals within a species. Also, any specific example of interspecific competition can be described in terms of both a mechanism (e.g., resource or interference) and an outcome (symmetric or asymmetric).
Based on mechanism
Exploitative competition, also referred to as resource competition, is a form of competition in which one species consumes and either reduces or more efficiently uses a shared limiting resource and therefore depletes the availab
|
https://en.wikipedia.org/wiki/Constant-resistance%20network
|
A constant-resistance network in electrical engineering is a network whose input resistance does not change with frequency when correctly terminated. Examples of constant resistance networks include:
Zobel network
Lattice phase equaliser
Boucherot cell
Bridged T delay equaliser
Electrical engineering
Physics-related lists
|
https://en.wikipedia.org/wiki/Hilbert%20transform
|
In mathematics and signal processing, the Hilbert transform is a specific singular integral that takes a function u(t) of a real variable and produces another function of a real variable, H(u)(t). The Hilbert transform is given by the Cauchy principal value of the convolution with the function 1/(πt) (see Definition below). The Hilbert transform has a particularly simple representation in the frequency domain: It imparts a phase shift of ±90° (π/2 radians) to every frequency component of a function, the sign of the shift depending on the sign of the frequency (see below). The Hilbert transform is important in signal processing, where it is a component of the analytic representation of a real-valued signal u(t). The Hilbert transform was first introduced by David Hilbert in this setting, to solve a special case of the Riemann–Hilbert problem for analytic functions.
Definition
The Hilbert transform of u can be thought of as the convolution of u(t) with the function h(t) = 1/(πt), known as the Cauchy kernel. Because 1/t is not integrable across t = 0, the integral defining the convolution does not always converge. Instead, the Hilbert transform is defined using the Cauchy principal value (denoted here by p.v.). Explicitly, the Hilbert transform of a function (or signal) u(t) is given by
H(u)(t) = (1/π) p.v. ∫ u(τ) / (t − τ) dτ (integrated over the whole real line),
provided this integral exists as a principal value. This is precisely the convolution of u with the tempered distribution p.v. 1/(πt). Alternatively, by changing variables, the principal-value integral can be written explicitly as
H(u)(t) = (1/π) lim_{ε→0} ∫_ε^∞ [u(t − τ) − u(t + τ)] / τ dτ.
When the Hilbert transform is applied twice in succession to a function u, the result is
H(H(u)) = −u,
provided the integrals defining both iterations converge in a suitable sense. In particular, the inverse transform is −H. This fact can most easily be seen by considering the effect of the Hilbert transform on the Fourier transform of u(t) (see below).
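As a numerical illustration (example signal chosen here; scipy.signal.hilbert returns the analytic signal u + i·H(u)), the ±90° phase shift can be checked on a cosine, whose Hilbert transform is the corresponding sine:

```python
import numpy as np
from scipy.signal import hilbert

# One exact period of cos(t); its Hilbert transform should be sin(t),
# i.e. every positive-frequency component shifted by -90 degrees.
t = np.linspace(0, 2 * np.pi, 4096, endpoint=False)
u = np.cos(t)

analytic = hilbert(u)            # u + j*H(u), the analytic representation of u
Hu = np.imag(analytic)           # numerical Hilbert transform of u

print(np.max(np.abs(Hu - np.sin(t))))         # ~1e-13: H[cos] = sin
print(np.max(np.abs(np.abs(analytic) - 1)))   # ~1e-15: the envelope of cos(t) is 1
```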
For an analytic function in the upper half-plane, the Hilbert transform describes the relationship between the real part and the imaginary part of the boundary values. That is, if is analytic in the upp
|
https://en.wikipedia.org/wiki/Bernstein%27s%20constant
|
Bernstein's constant, usually denoted by the Greek letter β (beta), is a mathematical constant named after Sergei Natanovich Bernstein and is equal to 0.2801694990... .
Definition
Let En(ƒ) be the error of the best uniform approximation to a real function ƒ(x) on the interval [−1, 1] by real polynomials of no more than degree n. In the case of ƒ(x) = |x|, Bernstein showed that the limit
β = lim_{n→∞} 2n E_{2n}(ƒ),
called Bernstein's constant, exists and is between 0.278 and 0.286. His conjecture that the limit is
1/(2√π) = 0.2820947917...
was disproven by Varga and Carpenter, who calculated
β = 0.2801694990...
|
https://en.wikipedia.org/wiki/Proxy%20list
|
A proxy list is a list of open HTTP/HTTPS/SOCKS proxy servers all on one website. Proxies allow users to make indirect network connections to other computer network services. Proxy lists include the IP addresses of computers hosting open proxy servers, meaning that these proxy servers are available to anyone on the internet. Proxy lists are often organized by the various proxy protocols the servers use. Many proxy lists index web proxies, which can be used without changing browser settings.
Proxy Anonymity Levels
Elite proxies - Such proxies do not change request fields and look like a real browser, and your real IP address is hidden. Server administrators will commonly be fooled into believing that you are not using a proxy.
Anonymous proxies - These proxies do not show a real IP address, however, they do change the request fields, therefore it is very easy to detect that a proxy is being used by log analysis. You are still anonymous, but some server administrators may restrict proxy requests.
Transparent proxies - (not anonymous, simply HTTP) - These change the request fields and they transfer the real IP. Such proxies are not applicable for security or privacy uses while surfing the web, and should only be used for network speed improvement.
SOCKS is a protocol that relays TCP sessions through a firewall host to allow application users transparent access across the firewall. Because the protocol is independent of application protocols, it can be (and has been) used for many different services, such as telnet, FTP, finger, whois, gopher, WWW, etc. Access control can be applied at the beginning of each TCP session; thereafter the server simply relays the data between the client and the application server, incurring minimum processing overhead. Since SOCKS never has to know anything about the application protocol, it should also be easy for it to accommodate applications that use encryption to protect their traffic from nosy snoopers. No information about the client is se
|
https://en.wikipedia.org/wiki/Biorisk
|
Biorisk generally refers to the risk associated with biological materials and/or infectious agents, also known as pathogens. The term has been used frequently for various purposes since the early 1990s. The term is used by regulators, security experts, laboratory personnel and industry alike, and is used by the World Health Organization (WHO). WHO/Europe also provides tools and training courses in biosafety and biosecurity.
An international Laboratory Biorisk Management Standard developed under the auspices of the European Committee for Standardization, defines biorisk as the combination of the probability of occurrence of harm and the severity of that harm where the source of harm is a biological agent or toxin. The source of harm may be an unintentional exposure, accidental release or loss, theft, misuse, diversion, unauthorized access or intentional unauthorized release.
Biorisk reduction
Biorisk reduction involves creating expertise in managing high-consequence pathogens, by providing training on safe handling and control of pathogens that pose significant health risks.
See also
Biocontainment, related to laboratory biosafety levels
Biodefense
Biodiversity
Biohazard
Biological warfare
Biological Weapons Convention
Biosecurity
Bioterrorism
Cyberbiosecurity
Endangered species
|
https://en.wikipedia.org/wiki/Earliest%20known%20life%20forms
|
The earliest known life forms on Earth are believed to be fossilized microorganisms found in hydrothermal vent precipitates, considered to be about 3.42 billion years old. The earliest time for the origin of life on Earth is at least 3.77 billion years ago, possibly as early as 4.28 billion years ago — not long after the oceans formed 4.5 billion years ago, and after the formation of the Earth 4.54 billion years ago. The earliest direct evidence of life on Earth is from microfossils of microorganisms permineralized in 3.465-billion-year-old Australian Apex chert rocks, although the validity of these microfossils is debated.
Biospheres
Earth remains the only place in the universe known to harbor life. The origin of life on Earth was at least 3.77 billion years ago, possibly as early as 4.28 billion years ago. The Earth's biosphere extends down to at least below the surface, and up to at least into the atmosphere, and includes soil, hydrothermal vents, and rock. Further, the biosphere has been found to extend at least below the ice of Antarctica, and includes the deepest parts of the ocean, down to rocks kilometers below the sea floor. In July 2020, marine biologists reported that aerobic microorganisms (mainly), in "quasi-suspended animation", were found in organically-poor sediments, up to 101.5 million years old, below the seafloor in the South Pacific Gyre (SPG) ("the deadest spot in the ocean"), and could be the longest-living life forms ever found. Under certain test conditions, life forms have been observed to survive in the vacuum of outer space. More recently, in August 2020, bacteria were found to survive for three years in outer space, according to studies conducted on the International Space Station. In February 2023, findings of a "dark microbiome" of unfamiliar microorganisms in the Atacama Desert in Chile, a Mars-like region of planet Earth, were reported. The total mass of the biosphere has been estimated to be as much as 4 trillion tons of carb
|
https://en.wikipedia.org/wiki/Competition%20%28biology%29
|
Competition is an interaction between organisms or species in which both require a resource that is in limited supply (such as food, water, or territory). Competition lowers the fitness of both organisms involved since the presence of one of the organisms always reduces the amount of the resource available to the other.
In the study of community ecology, competition within and between members of a species is an important biological interaction. Competition is one of many interacting biotic and abiotic factors that affect community structure, species diversity, and population dynamics (shifts in a population over time).
There are three major mechanisms of competition: interference, exploitation, and apparent competition (in order from most direct to least direct). Interference and exploitation competition can be classed as "real" forms of competition, while apparent competition is not, as organisms do not share a resource, but instead share a predator. Competition among members of the same species is known as intraspecific competition, while competition between individuals of different species is known as interspecific competition.
According to the competitive exclusion principle, species less suited to compete for resources must either adapt or die out, although competitive exclusion is rarely found in natural ecosystems. According to evolutionary theory, competition within and between species for resources is important in natural selection. More recently, however, researchers have suggested that evolutionary biodiversity for vertebrates has been driven not by competition between organisms, but by these animals adapting to colonize empty livable space; this is termed the 'Room to Roam' hypothesis.
Interference competition
During interference competition, also called contest competition, organisms interact directly by fighting for scarce resources. For example, large aphids defend feeding sites on cottonwood leaves by ejecting smaller aphids from better sites.
|
https://en.wikipedia.org/wiki/Bin%20picking
|
Bin picking (also referred to as random bin picking) is a core problem in computer vision and robotics. The goal is to have a robot with sensors and cameras attached to it pick up known objects with random poses out of a bin using a suction gripper, parallel gripper, or other kind of robot end effector.
Early work on bin picking made use of photometric stereo to recover the shapes of objects and to determine their orientation in space.
Amazon previously held a competition focused on bin picking referred to as the "Amazon Picking Challenge", which was held from 2015 to 2017. The challenge tasked entrants with building their own robot hardware and software that could attempt simplified versions of the general task of picking and stowing items on shelves. The robots were scored by how many items were picked and stowed in a fixed amount of time. The first Amazon Robotics challenge was won by a team from TU Berlin in 2015, followed by a team from TU Delft and the Dutch company "Fizyr" in 2016. The last Amazon Robotics Challenge was won by the Australian Centre for Robotic Vision at Queensland University of Technology with their robot named Cartman. The Amazon Robotics/Picking Challenge was discontinued following the 2017 competition.
Although there can be some overlap, bin picking is distinct from "each picking" and the bin packing problem.
See also
3D pose estimation
Bowl feeder
|
https://en.wikipedia.org/wiki/Dynamic%20circuit%20network
|
A dynamic circuit network (DCN) is an advanced computer networking technology that combines traditional packet-switched communication based on the Internet Protocol, as used in the Internet, with circuit-switched technologies that are characteristic of traditional telephone network systems. This combination allows user-initiated ad hoc dedicated allocation of network bandwidth for high-demand, real-time applications and network services, delivered over an optical fiber infrastructure.
Implementation
Dynamic circuit networks were pioneered by the Internet2 advanced networking consortium. The experimental Internet2 HOPI infrastructure, decommissioned in 2007, was a forerunner to the current SONET-based Ciena Network underlying the Internet2 DCN. The Internet2 DCN began operation in late 2007 as part of the larger Internet2 network. It provides advanced networking capabilities and resources to the scientific and research communities, such as the Large Hadron Collider (LHC) project.
The Internet2 DCN is based on open-source, standards-based software, the Inter-domain Controller (IDC) protocol, developed in cooperation with ESnet and GÉANT2. The entire software set is known as the Dynamic Circuit Network Software Suite (DCN SS).
Inter-domain Controller protocol
The Inter-domain Controller protocol manages the dynamic provisioning of network resources participating in a dynamic circuit network across multiple administrative domain boundaries. It is a SOAP-based XML messaging protocol, secured by Web Services Security (v1.1) using the XML Digital Signature standard. It is transported over HTTP Secure (HTTPS) connections.
See also
Internet Protocol Suite
IPv6
Fiber-optic communication
|
https://en.wikipedia.org/wiki/Biocontainment%20of%20genetically%20modified%20organisms
|
Since the advent of genetic engineering in the 1970s, concerns have been raised about the dangers of the technology. Laws, regulations, and treaties were created in the years following to contain genetically modified organisms and prevent their escape. Nevertheless, there are several examples of failure to keep GM crops separate from conventional ones.
Overview
In the context of agriculture and food and feed production, co-existence means using cropping systems with and without genetically modified crops in parallel. In some countries, such as the United States, co-existence is not governed by any single law but instead is managed by regulatory agencies and tort law. In other regions, such as Europe, regulations require that the separation and the identity of the respective food and feed products must be maintained at all stages of the production process.
Many consumers are critical of genetically modified plants and their products, while, conversely, most experts in charge of GMO approvals do not perceive concrete threats to health or the environment. The compromise chosen by some countries - notably the European Union - has been to implement regulations specifically governing co-existence and traceability. Traceability has become commonplace in the food and feed supply chains of most countries in the world, but the traceability of GMOs is made more challenging by the addition of very strict legal thresholds for unwanted mixing. Within the European Union, since 2001, conventional and organic food and feedstuffs can contain up to 0.9% of authorised GM material without being labelled GM (any trace of non-authorised GM products would cause shipments to be rejected).
In the United States there is no legislation governing the co-existence of neighboring farms growing organic and GM crops; instead the US relies on a "complex but relaxed" combination of three federal agencies (FDA, EPA, and USDA/APHIS) and the common law tort system, governed by state law, to ma
|
https://en.wikipedia.org/wiki/Pulse%20shaping
|
In electronics and telecommunications, pulse shaping is the process of changing the waveform of transmitted pulses to optimize the signal for its intended purpose or the communication channel. This is often done by limiting the bandwidth of the transmission and filtering the pulses to control intersymbol interference. Pulse shaping is particularly important in RF communication for fitting the signal within a certain frequency band and is typically applied after line coding and modulation.
Need for pulse shaping
Transmitting a signal at a high modulation rate through a band-limited channel can create intersymbol interference. The reason lies in Fourier correspondences (see Fourier transform): a band-limited signal corresponds to a signal of infinite duration in time, which causes neighbouring pulses to overlap. As the modulation rate increases, the signal's bandwidth increases. If the bandwidth of the signal becomes larger than the channel bandwidth, the channel clips the spectrum to a sharp rectangle, which corresponds to a sinc shape in the time domain; the resulting distortion usually manifests itself as intersymbol interference (ISI). Theoretically, sinc-shaped pulses cause no ISI if neighbouring pulses are perfectly aligned, i.e. each is sampled at the zero crossings of the others, but this requires very good synchronization and precise, jitter-free sampling. As a practical tool to assess ISI, one uses the eye pattern, which visualizes typical effects of the channel and of synchronization/frequency instability.
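A brief numerical sketch of the zero-ISI property described above (the symbol values, oversampling factor, and pulse count are assumptions chosen for illustration): a sum of sinc pulses, one per symbol, passes through zero at every other symbol instant, so sampling exactly at t = nT recovers the symbols.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1.0                                         # symbol interval
symbols = rng.choice([-1.0, 1.0], size=32)      # example bipolar symbols a[n]
t = np.arange(0, len(symbols) * T, T / 64)      # fine time grid, 64 points per symbol

# s(t) = sum_n a[n] * sinc((t - nT) / T); each sinc is zero at every other
# symbol instant, so there is no ISI at the ideal sampling times t = nT.
s = sum(a * np.sinc((t - n * T) / T) for n, a in enumerate(symbols))

recovered = s[::64]                             # samples taken exactly at t = nT
print(np.max(np.abs(recovered - symbols)))      # ~1e-16: symbols recovered exactly
```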
The signal's spectrum is determined by the modulation scheme and data rate used by the transmitter, but can be modified with a pulse shaping filter. This pulse shaping will make the spectrum smooth, leading to a time limited signal again. Usually the transmitted symbols are represented as a time sequence of dirac delta pulses multiplied with the symbol. This is the formal transition from the digital to the analog domain. At this point, th
|
https://en.wikipedia.org/wiki/Lieb%27s%20square%20ice%20constant
|
Lieb's square ice constant is a mathematical constant used in the field of combinatorics to quantify the number of Eulerian orientations of grid graphs. It was introduced by Elliott H. Lieb in 1967.
Definition
An n × n grid graph (with periodic boundary conditions and n ≥ 2) has n² vertices and 2n² edges; it is 4-regular, meaning that each vertex has exactly four neighbors. An orientation of this graph is an assignment of a direction to each edge; it is an Eulerian orientation if it gives each vertex exactly two incoming edges and exactly two outgoing edges.
Denote the number of Eulerian orientations of this graph by f(n). Then
$$\lim_{n \to \infty} f(n)^{1/n^2} = \left(\frac{4}{3}\right)^{3/2} = \frac{8\sqrt{3}}{9} = 1.5396007\ldots$$
is Lieb's square ice constant. Lieb used a transfer-matrix method to compute this exactly.
The function f(n) also counts the number of 3-colorings of grid graphs, the number of nowhere-zero 3-flows in 4-regular graphs, and the number of local flat foldings of the Miura fold. Some historical and physical background can be found in the article Ice-type model.
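The definition can be checked directly for the smallest grids. The following brute-force Python sketch (an illustration of the definition only, not Lieb's transfer-matrix method) enumerates every orientation of the 2n² edges of the torus grid and counts those giving each vertex in-degree exactly 2; it is feasible only for very small n.

from itertools import product

def eulerian_orientations(n):
    # Vertices of the n-by-n grid with periodic (torus) boundary conditions.
    vertices = [(i, j) for i in range(n) for j in range(n)]
    # From each vertex: one edge to the right and one edge downward (wrapping around),
    # giving the 2*n*n edges of the 4-regular grid graph.
    edges = []
    for i, j in vertices:
        edges.append(((i, j), (i, (j + 1) % n)))
        edges.append(((i, j), ((i + 1) % n, j)))
    count = 0
    # Each bit picks a direction for one edge; 2**(2*n*n) cases in total.
    for orientation in product((0, 1), repeat=len(edges)):
        indegree = {v: 0 for v in vertices}
        for (u, v), flipped in zip(edges, orientation):
            indegree[u if flipped else v] += 1
        if all(d == 2 for d in indegree.values()):
            count += 1
    return count

print(eulerian_orientations(2))  # smallest case; n = 3 already requires 2**18 checks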
See also
Spin ice
Ice-type model
|
https://en.wikipedia.org/wiki/Reciprocity%20%28network%20science%29
|
In network science, reciprocity is a measure of the likelihood of vertices in a directed network to be mutually linked. Like the clustering coefficient, scale-free degree distribution, or community structure, reciprocity is a quantitative measure used to study complex networks.
Motivation
In real network problems, people are interested in determining the likelihood of double links (with opposite directions) occurring between vertex pairs. This problem is fundamental for several reasons. First, in networks that transport information or material (such as email networks, the World Wide Web (WWW), the World Trade Web, or Wikipedia), mutual links facilitate the transportation process. Second, when analyzing directed networks, people often treat them as undirected ones for simplicity; therefore, the information obtained from reciprocity studies helps to estimate the error introduced when a directed network is treated as undirected (for example, when measuring the clustering coefficient). Finally, detecting nontrivial patterns of reciprocity can reveal possible mechanisms and organizing principles that shape the observed network's topology.
Definitions
Traditional definition
A traditional way to define the reciprocity r is to use the ratio of the number of links pointing in both directions, L↔, to the total number of links L:
$$r = \frac{L^{\leftrightarrow}}{L}.$$
With this definition, r = 1 for a purely bidirectional network, while r = 0 for a purely unidirectional one. Real networks have an intermediate value between 0 and 1.
However, this definition of reciprocity has some defects. It cannot tell the relative difference of reciprocity compared with a purely random network with the same number of vertices and edges. The useful information from reciprocity is not the value itself, but whether mutual links occur more or less often than expected by chance. Besides, in networks containing self-linking loops (links starting and ending at the same vertex), the self-linking loops should be excluded when calculating L.
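A minimal Python sketch of the traditional definition, assuming the directed network is given simply as a collection of (source, target) pairs and that self-linking loops are excluded as described above:

def reciprocity(edge_list):
    # Distinct directed links, with self-loops removed.
    links = {(u, v) for u, v in edge_list if u != v}
    if not links:
        return float("nan")
    # A link counts as reciprocated if the opposite direction is also present.
    mutual = sum(1 for (u, v) in links if (v, u) in links)
    return mutual / len(links)

# One reciprocated pair out of three links: r = 2/3.
print(reciprocity([(1, 2), (2, 1), (2, 3)]))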
Ga
|
https://en.wikipedia.org/wiki/QUIC
|
QUIC (pronounced "quick") is a general-purpose transport layer network protocol initially designed by Jim Roskind at Google, implemented, and deployed in 2012, announced publicly in 2013 as experimentation broadened, and described at an IETF meeting. QUIC is used by more than half of all connections from the Chrome web browser to Google's servers. Microsoft Edge (a derivative of the open-source Chromium browser) and Firefox support it. Safari implements the protocol, however it is not enabled by default.
Although its name was initially proposed as the acronym for "Quick UDP Internet Connections", IETF's use of the word QUIC is not an acronym; it is simply the name of the protocol. QUIC improves performance of connection-oriented web applications that are currently using TCP. It does this by establishing a number of multiplexed connections between two endpoints using User Datagram Protocol (UDP), and is designed to obsolete TCP at the transport layer for many applications, thus earning the protocol the occasional nickname "TCP/2".
QUIC works hand-in-hand with HTTP/2's multiplexed connections, allowing multiple streams of data to reach all the endpoints independently, and hence independent of packet losses involving other streams. In contrast, HTTP/2 hosted on Transmission Control Protocol (TCP) can suffer head-of-line-blocking delays of all multiplexed streams if any of the TCP packets is delayed or lost.
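The contrast can be sketched with a deliberately simplified toy model in Python (it reflects neither the real packet formats nor the loss-recovery mechanisms of TCP or QUIC): packets belonging to two streams arrive in order, one packet is lost, and we compare how much data each delivery discipline can hand to the application before a retransmission arrives.

def deliver_single_ordered_stream(packets, lost_seq):
    # TCP-like: one global ordering, so every stream stalls at the first gap.
    delivered = []
    for seq, (stream, data) in enumerate(packets):
        if seq == lost_seq:
            break
        delivered.append((stream, data))
    return delivered

def deliver_independent_streams(packets, lost_seq):
    # QUIC-like: only the stream that actually lost a packet stalls.
    delivered, blocked_stream = [], None
    for seq, (stream, data) in enumerate(packets):
        if seq == lost_seq:
            blocked_stream = stream
            continue
        if stream != blocked_stream:
            delivered.append((stream, data))
    return delivered

packets = [(1, "a1"), (2, "b1"), (1, "a2"), (2, "b2")]
print(deliver_single_ordered_stream(packets, lost_seq=1))  # [(1, 'a1')]
print(deliver_independent_streams(packets, lost_seq=1))    # [(1, 'a1'), (1, 'a2')]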
QUIC's secondary goals include reduced connection and transport latency, and bandwidth estimation in each direction to avoid congestion. It also moves congestion control algorithms into the user space at both endpoints, rather than the kernel space, which it is claimed will allow these algorithms to improve more rapidly. Additionally, the protocol can be extended with forward error correction (FEC) to further improve performance when errors are expected, and this is seen as the next step in the protocol's evolution. It has been designed to avoid protocol ossifica
|
https://en.wikipedia.org/wiki/Autocorrelation
|
Autocorrelation, sometimes known as serial correlation in the discrete time case, is the correlation of a signal with a delayed copy of itself as a function of delay. Informally, it is the similarity between observations of a random variable as a function of the time lag between them. The analysis of autocorrelation is a mathematical tool for finding repeating patterns, such as the presence of a periodic signal obscured by noise, or identifying the missing fundamental frequency in a signal implied by its harmonic frequencies. It is often used in signal processing for analyzing functions or series of values, such as time domain signals.
Different fields of study define autocorrelation differently, and not all of these definitions are equivalent. In some fields, the term is used interchangeably with autocovariance.
Unit root processes, trend-stationary processes, autoregressive processes, and moving average processes are specific forms of processes with autocorrelation.
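As an informal illustration of using autocorrelation to reveal a periodic signal obscured by noise, the following NumPy sketch computes a normalized sample autocorrelation (mean removed and scaled to 1 at lag zero; as noted above, conventions differ between fields) for a noisy sine wave with a period of 20 samples.

import numpy as np

def sample_autocorrelation(x, max_lag):
    # Biased sample autocorrelation at lags 0..max_lag, normalized to 1 at lag 0.
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    acf = np.array([np.dot(x[: n - k], x[k:]) for k in range(max_lag + 1)])
    return acf / acf[0]

rng = np.random.default_rng(0)
t = np.arange(200)
signal = np.sin(2 * np.pi * t / 20) + rng.normal(scale=1.0, size=t.size)
acf = sample_autocorrelation(signal, 40)
print(np.round(acf[[0, 10, 20, 40]], 2))  # peaks near lags 0, 20, 40; a dip near lag 10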
Auto-correlation of stochastic processes
In statistics, the autocorrelation of a real or complex random process is the Pearson correlation between values of the process at different times, as a function of the two times or of the time lag. Let $\{X_t\}$ be a random process, and $t$ be any point in time ($t$ may be an integer for a discrete-time process or a real number for a continuous-time process). Then $X_t$ is the value (or realization) produced by a given run of the process at time $t$. Suppose that the process has mean $\mu_t$ and variance $\sigma_t^2$ at time $t$, for each $t$. Then the definition of the auto-correlation function between times $t_1$ and $t_2$ is
$$\operatorname{R}_{XX}(t_1, t_2) = \operatorname{E}\left[X_{t_1} \overline{X_{t_2}}\right],$$
where $\operatorname{E}$ is the expected value operator and the bar represents complex conjugation. Note that the expectation may not be well defined.
Subtracting the mean before multiplication yields the auto-covariance function between times $t_1$ and $t_2$:
$$\operatorname{K}_{XX}(t_1, t_2) = \operatorname{E}\left[\left(X_{t_1} - \mu_{t_1}\right)\overline{\left(X_{t_2} - \mu_{t_2}\right)}\right].$$
Note that this expression is not well defined for all time series or processes, because the mean may not exist, or the variance may be zero (for a constant
|
https://en.wikipedia.org/wiki/Uniqueness%20quantification
|
In mathematics and logic, the term "uniqueness" refers to the property of being the one and only object satisfying a certain condition. This sort of quantification is known as uniqueness quantification or unique existential quantification, and is often denoted with the symbols "∃!" or "∃=1". For example, the formal statement
$$\exists!\, n \in \mathbb{N}\, (n - 2 = 4)$$
may be read as "there is exactly one natural number $n$ such that $n - 2 = 4$".
Proving uniqueness
The most common technique to prove the unique existence of a certain object is to first prove the existence of the entity with the desired condition, and then to prove that any two such entities (say, $a$ and $b$) must be equal to each other (i.e. $a = b$).
For example, to show that the equation $x + 2 = 5$ has exactly one solution, one would first start by establishing that at least one solution exists, namely 3; the proof of this part is simply the verification that the equation below holds:
$$3 + 2 = 5.$$
To establish the uniqueness of the solution, one would then proceed by assuming that there are two solutions, namely $a$ and $b$, satisfying $x + 2 = 5$. That is,
$$a + 2 = 5 \quad\text{and}\quad b + 2 = 5.$$
Then since equality is a transitive relation,
$$a + 2 = b + 2.$$
Subtracting 2 from both sides then yields
$$a = b,$$
which completes the proof that 3 is the unique solution of $x + 2 = 5$.
In general, both existence (there exists at least one object) and uniqueness (there exists at most one object) must be proven, in order to conclude that there exists exactly one object satisfying a said condition.
An alternative way to prove uniqueness is to prove that there exists an object $a$ satisfying the condition, and then to prove that every object satisfying the condition must be equal to $a$.
Reduction to ordinary existential and universal quantification
Uniqueness quantification can be expressed in terms of the existential and universal quantifiers of predicate logic, by defining the formula $\exists! x\, P(x)$ to mean
$$\exists x\, \bigl(P(x) \wedge \neg \exists y\, (P(y) \wedge y \neq x)\bigr),$$
which is logically equivalent to
$$\exists x\, \bigl(P(x) \wedge \forall y\, (P(y) \to y = x)\bigr).$$
An equivalent definition that separates the notions of existence and uniqueness into two clauses, at the expense of brevity, is
$$\exists x\, P(x) \wedge \forall y\, \forall z\, \bigl((P(y) \wedge P(z)) \to y = z\bigr).$$
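Over a finite domain, this reduction can be mirrored directly with ordinary "exists" and "for all" checks. The following Python sketch is only an illustration of the logical structure (the function name and the choice of domain are arbitrary):

def exists_unique(pred, domain):
    # Exists x ( P(x) and for all y ( P(y) implies y = x ) ), written with any() and all().
    return any(pred(x) and all((not pred(y)) or y == x for y in domain)
               for x in domain)

naturals = range(100)
print(exists_unique(lambda n: n + 2 == 5, naturals))   # True: only n = 3 satisfies it
print(exists_unique(lambda n: n % 2 == 0, naturals))   # False: many even numbers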
Another equivalent defin
|
https://en.wikipedia.org/wiki/UCI%20School%20of%20Biological%20Sciences
|
The School of Biological Sciences is one of the academic units of the University of California, Irvine (UCI). The school is divided into four departments: developmental and cell biology, ecology and evolutionary biology, molecular biology and biochemistry, and neurobiology and behavior. With over 3,700 students it is among the four largest schools in the university; in 2013 it contained 19.4 percent of the student population.
It is consistently ranked in the top one hundred in U.S. News & World Report's yearly list of best graduate schools.
History
The School of Biological Sciences first opened in 1965 at the University of California, Irvine and was one of the first schools founded when the university campus opened. The school's founding Dean, Edward A. Steinhaus, had four founding department chairs and started out with 17 professors.
On March 12, 2014, the School was officially renamed after UCI professor and donor Francisco J. Ayala by then-Chancellor Michael V. Drake. Ayala had previously pledged to donate $10 million to the School of Biological Sciences in 2011. The school reverted to its previous name in June 2018, after a university investigation confirmed that Ayala had sexually harassed at least four women colleagues and graduate students.
Notes
External links
University of California, Irvine
Biology education
Science education in the United States
Science and technology in Greater Los Angeles
University subdivisions in California
Educational institutions established in 1965
1965 establishments in California
|
https://en.wikipedia.org/wiki/Defense%20Information%20System%20Network
|
The Defense Information System Network (DISN) has been the United States Department of Defense's enterprise telecommunications network for providing data, video, and voice services for 40 years.
The DISN end-to-end infrastructure is composed of three major segments:
The sustaining base (i.e., base, post, camp, or station, and Service enterprise networks). The Command, Control, Communications, Computers and Intelligence (C4I) infrastructure will interface with the long-haul network to support the deployed warfighter. The sustaining base segment is primarily the responsibility of the individual Services.
The long-haul transport infrastructure, which includes the communication systems and services between the fixed environments and the deployed Joint Task Force (JTF) and/or Coalition Task Force (CTF) warfighter. The long-haul telecommunications infrastructure segment is primarily the responsibility of DISA.
The deployed warfighter, mobile users, and associated Combatant Commander telecommunications infrastructures, which support the Joint Task Force (JTF) and/or Coalition Task Force (CTF). The deployed warfighter and associated Combatant Commander telecommunications infrastructure is primarily the responsibility of the individual Services.
The DISN provides the following multiple networking services:
Global Content Delivery System (GCDS)
Data Services
Sensitive but Unclassified (NIPRNet)
Secret Data Services (SIPRNet)
Multicast
Organizational Messaging
The Organizational Messaging Service provides a range of assured services to the customer community that includes the military services, DoD agencies, combatant commands (CCMDs), non-DoD U.S. government activities, and the Intelligence Community (IC). These services include the ability to exchange official information between military organizations and to support interoperability with allied nations, non-DoD activities, and the IC operating in both the strategic/fixed-base and the tactical/deployed enviro
|
https://en.wikipedia.org/wiki/Peptide%20microarray
|
A peptide microarray (also commonly known as peptide chip or peptide epitope microarray) is a collection of peptides displayed on a solid surface, usually a glass or plastic chip. Peptide chips are used by scientists in biology, medicine and pharmacology to study binding properties and functionality and kinetics of protein-protein interactions in general. In basic research, peptide microarrays are often used to profile an enzyme (like kinase, phosphatase, protease, acetyltransferase, histone deacetylase etc.), to map an antibody epitope or to find key residues for protein binding. Practical applications are seromarker discovery, profiling of changing humoral immune responses of individual patients during disease progression, monitoring of therapeutic interventions, patient stratification and development of diagnostic tools and vaccines.
Principle
The assay principle of peptide microarrays is similar to an ELISA protocol.
The peptides (up to tens of thousands in several copies) are linked to the surface of a glass chip typically the size and shape of a microscope slide. This peptide chip can be incubated directly with a variety of biological samples, such as purified enzymes or antibodies, patient or animal sera, or cell lysates, and binding can then be detected in a label-dependent fashion, for example by a primary antibody that targets the bound protein or modified substrates. After several washing steps a secondary antibody with the needed specificity (e.g. anti-IgG human/mouse, anti-phosphotyrosine, or anti-myc) is applied. Usually, the secondary antibody is tagged with a fluorescence label that can be detected by a fluorescence scanner. Other label-dependent detection methods include chemiluminescence, colorimetry, and autoradiography.
Label-dependent assays are rapid and convenient to perform, but risk giving rise to false positive and negative results. More recently, label-free detection including surface plasmon resonance (SPR) spectroscopy, mass spectrometry (
|
https://en.wikipedia.org/wiki/Pacemaker%20crosstalk
|
Pacemaker crosstalk results when the pacemaker-generated electrical event in one chamber is sensed by the lead in another chamber, resulting in inappropriate inhibition of the pacing artifact in the second chamber.
Cause
Crosstalk can only occur in dual-chamber or biventricular pacemakers. It happens less often in more recent models of dual-chamber pacemakers due to the addition of a ventricular blanking period, which coincides with the atrial stimulus. This helps to prevent ventricular channel oversensing of atrial output. Newer dual-chamber pacemakers also use bipolar leads with a smaller pacing spike, and steroid-eluting leads with lower pacing thresholds. Crosstalk is more common in unipolar systems since they require a larger pacing spike. Crosstalk is sometimes referred to as crosstalk inhibition, far-field sensing, or self-inhibition. In some cases, crosstalk can occur in the pulse generator circuit itself, though more common causes include atrial lead dislodgement into the ventricle, ventricular lead dislodgement into the atrium, high atrial output current, high ventricular sensitivity, and a short ventricular blanking period.
Treatment
In general, the treatment of crosstalk includes decreasing atrial pacing output, decreasing atrial pulse width, decreasing ventricular sensitivity, increasing the ventricular blanking period, activating ventricular safety pacing, and new atrial lead implant if insulation failure mandates unipolar programming.
See also
Pacemaker failure
Electrical conduction system of the heart
|
https://en.wikipedia.org/wiki/Quadrature%20filter
|
In signal processing, a quadrature filter $q(t)$ is the analytic representation of the impulse response $f(t)$ of a real-valued filter:
$$q(t) = f(t) + i\, f_{\mathrm{Hi}}(t),$$
where $f_{\mathrm{Hi}}(t)$ is the Hilbert transform of $f(t)$.
If the quadrature filter $q(t)$ is applied to a signal $s(t)$, the result is
$$h(t) = (q * s)(t) = (f * s)(t) + i\, (f_{\mathrm{Hi}} * s)(t),$$
which implies that $h(t)$ is the analytic representation of $(f * s)(t)$.
Since $q(t)$ is an analytic signal, it is either zero or complex-valued. In practice, therefore, $q(t)$ is often implemented as two real-valued filters, which correspond to the real and imaginary parts of the filter, respectively.
An ideal quadrature filter cannot have a finite support. It has single sided support, but by choosing the (analog) function carefully, it is possible to design quadrature filters which are localized such that they can be approximated by means of functions of finite support. A digital realization without feedback (FIR) has finite support.
Applications
This construction will simply assemble an analytic signal with a starting point to finally create a causal signal with finite energy. The two Delta Distributions will perform this operation. This will impose an additional constraint on the filter.
Single frequency signals
For single frequency signals (in practice narrow-bandwidth signals) with frequency $\omega$, the magnitude of the response of a quadrature filter equals the signal's amplitude $A$ times the magnitude of the filter's frequency function $Q$ at frequency $\omega$.
This property can be useful when the signal s is a narrow-bandwidth signal of unknown frequency. By choosing a suitable frequency function Q of the filter, we may generate known functions of the unknown frequency which then can be estimated.
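In discrete time, the analytic signal that a quadrature filter produces is commonly obtained with a Hilbert-transform routine. The following SciPy sketch (an illustration of the idea rather than a particular filter design; the test-signal parameters are arbitrary) recovers the envelope and instantaneous frequency of a narrow-bandwidth signal:

import numpy as np
from scipy.signal import hilbert

fs = 1000.0                                      # sampling rate in Hz
t = np.arange(0, 1, 1 / fs)
amplitude = 1 + 0.5 * np.cos(2 * np.pi * 2 * t)  # slow amplitude modulation
x = amplitude * np.cos(2 * np.pi * 60 * t)       # 60 Hz carrier

analytic = hilbert(x)                            # x + i * (Hilbert transform of x)
envelope = np.abs(analytic)                      # approximates the modulating amplitude
inst_phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(inst_phase) * fs / (2 * np.pi)   # approximately 60 Hz away from the edges

print(np.round(envelope[400:405], 3), np.round(inst_freq[400:405], 1))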
See also
Analytic signal
Hilbert transform
Signal processing
|
https://en.wikipedia.org/wiki/ISO%2031-11
|
ISO 31-11:1992 was the part of international standard ISO 31 that defines mathematical signs and symbols for use in physical sciences and technology. It was superseded in 2009 by ISO 80000-2:2009 and subsequently revised in 2019 as ISO 80000-2:2019.
Its definitions include the following:
Mathematical logic
Sets
Miscellaneous signs and symbols
Operations
Functions
Exponential and logarithmic functions
Circular and hyperbolic functions
Complex numbers
Matrices
Coordinate systems
Vectors and tensors
Special functions
See also
Mathematical symbols
Mathematical notation
|
https://en.wikipedia.org/wiki/Comparison%20theorem
|
In mathematics, comparison theorems are theorems whose statement involves comparisons between various mathematical objects of the same type, and often occur in fields such as calculus, differential equations and Riemannian geometry.
Differential equations
In the theory of differential equations, comparison theorems assert particular properties of solutions of a differential equation (or of a system thereof), provided that an auxiliary equation/inequality (or a system thereof) possesses a certain property.
Chaplygin inequality
Grönwall's inequality, and its various generalizations, provides a comparison principle for the solutions of first-order ordinary differential equations.
Sturm comparison theorem
Aronson and Weinberger used a comparison theorem to characterize solutions to Fisher's equation, a reaction–diffusion equation.
Hille-Wintner comparison theorem
Riemannian geometry
In Riemannian geometry, it is a traditional name for a number of theorems that compare various metrics and provide various estimates in Riemannian geometry.
Rauch comparison theorem relates the sectional curvature of a Riemannian manifold to the rate at which its geodesics spread apart.
Toponogov's theorem
Myers's theorem
Hessian comparison theorem
Laplacian comparison theorem
Morse–Schoenberg comparison theorem
Berger comparison theorem, Rauch–Berger comparison theorem
Berger–Kazdan comparison theorem
Warner comparison theorem for lengths of N-Jacobi fields (N being a submanifold of a complete Riemannian manifold)
Bishop–Gromov inequality, conditional on a lower bound for the Ricci curvatures
Lichnerowicz comparison theorem
Eigenvalue comparison theorem
Cheng's eigenvalue comparison theorem
See also: Comparison triangle
Other
Limit comparison theorem, about convergence of series
Comparison theorem for integrals, about convergence of integrals
Zeeman's comparison theorem, a technical tool from the theory of spectral sequences
|
https://en.wikipedia.org/wiki/Potassium%20in%20biology
|
Potassium is the main intracellular ion for all types of cells, while having a major role in maintenance of fluid and electrolyte balance. Potassium is necessary for the function of all living cells, and is thus present in all plant and animal tissues. It is found in especially high concentrations within plant cells, and in a mixed diet, it is most highly concentrated in fruits. The high concentration of potassium in plants, associated with comparatively very low amounts of sodium there, historically resulted in potassium first being isolated from the ashes of plants (potash), which in turn gave the element its modern name. The high concentration of potassium in plants means that heavy crop production rapidly depletes soils of potassium, and agricultural fertilizers consume 93% of the potassium chemical production of the modern world economy.
The functions of potassium and sodium in living organisms are quite different. Animals, in particular, employ sodium and potassium differentially to generate electrical potentials in animal cells, especially in nervous tissue. Potassium depletion in animals, including humans, results in various neurological dysfunctions. Characteristic concentrations of potassium in model organisms are: 30–300 mM in E. coli, 300 mM in budding yeast, 100 mM in mammalian cells, and 4 mM in blood plasma.
Function in plants
The main role of potassium in plants is to provide the ionic environment for metabolic processes in the cytosol, and as such functions as a regulator of various processes including growth regulation. Plants require potassium ions (K+) for protein synthesis and for the opening and closing of stomata, which is regulated by proton pumps to make surrounding guard cells either turgid or flaccid. A deficiency of potassium ions can impair a plant's ability to maintain these processes. Potassium also functions in other physiological processes such as photosynthesis, protein synthesis, activation of some enzymes, phloem solute transport of
|
https://en.wikipedia.org/wiki/Multi-index%20notation
|
Multi-index notation is a mathematical notation that simplifies formulas used in multivariable calculus, partial differential equations and the theory of distributions, by generalising the concept of an integer index to an ordered tuple of indices.
Definition and basic properties
An n-dimensional multi-index is an $n$-tuple
$$\alpha = (\alpha_1, \alpha_2, \ldots, \alpha_n)$$
of non-negative integers (i.e. an element of the $n$-dimensional set of natural numbers, denoted $\mathbb{N}_0^n$).
For multi-indices $\alpha, \beta \in \mathbb{N}_0^n$ and $x = (x_1, x_2, \ldots, x_n) \in \mathbb{R}^n$, one defines:
Componentwise sum and difference
$$\alpha \pm \beta = (\alpha_1 \pm \beta_1,\ \alpha_2 \pm \beta_2, \ldots, \alpha_n \pm \beta_n)$$
Partial order
$$\alpha \le \beta \quad \Longleftrightarrow \quad \alpha_i \le \beta_i \quad \text{for all } i \in \{1, \ldots, n\}$$
Sum of components (absolute value)
$$|\alpha| = \alpha_1 + \alpha_2 + \cdots + \alpha_n$$
Factorial
$$\alpha! = \alpha_1! \cdot \alpha_2! \cdots \alpha_n!$$
Binomial coefficient
$$\binom{\alpha}{\beta} = \binom{\alpha_1}{\beta_1}\binom{\alpha_2}{\beta_2} \cdots \binom{\alpha_n}{\beta_n}$$
Multinomial coefficient
$$\binom{k}{\alpha} = \frac{k!}{\alpha_1!\, \alpha_2! \cdots \alpha_n!} = \frac{k!}{\alpha!},$$
where $k := |\alpha|$.
Power
$$x^\alpha = x_1^{\alpha_1} x_2^{\alpha_2} \cdots x_n^{\alpha_n}.$$
Higher-order partial derivative
$$\partial^\alpha = \partial_1^{\alpha_1} \partial_2^{\alpha_2} \cdots \partial_n^{\alpha_n},$$
where $\partial_i^{\alpha_i} := \partial^{\alpha_i} / \partial x_i^{\alpha_i}$ (see also 4-gradient). Sometimes the notation $D^\alpha$ is also used.
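A minimal Python sketch of the componentwise definitions above, representing a multi-index simply as a tuple of non-negative integers (the helper names are illustrative, not standard):

from math import comb, factorial, prod

def mi_abs(alpha):                  # |alpha| = sum of the components
    return sum(alpha)

def mi_factorial(alpha):            # alpha! = product of componentwise factorials
    return prod(factorial(a) for a in alpha)

def mi_binom(alpha, beta):          # product of componentwise binomial coefficients
    return prod(comb(a, b) for a, b in zip(alpha, beta))

def mi_multinomial(alpha):          # |alpha|! / alpha!
    return factorial(mi_abs(alpha)) // mi_factorial(alpha)

def mi_power(x, alpha):             # x^alpha = x1^a1 * x2^a2 * ... * xn^an
    return prod(xi ** ai for xi, ai in zip(x, alpha))

alpha = (2, 0, 3)
print(mi_abs(alpha), mi_factorial(alpha), mi_multinomial(alpha), mi_power((2.0, 5.0, 1.0), alpha))
# 5 12 10 4.0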
Some applications
The multi-index notation allows the extension of many formulae from elementary calculus to the corresponding multi-variable case. Below are some examples. In all the following, $x, y, h \in \mathbb{C}^n$ (or $\mathbb{R}^n$), $\alpha, \beta \in \mathbb{N}_0^n$, and $f, g, a_\alpha \colon \mathbb{C}^n \to \mathbb{C}$ (or $\mathbb{R}^n \to \mathbb{R}$).
Multinomial theorem
$$\left(\sum_{i=1}^n x_i\right)^k = \sum_{|\alpha| = k} \binom{k}{\alpha}\, x^\alpha$$
Multi-binomial theorem
$$(x + y)^\alpha = \sum_{\beta \le \alpha} \binom{\alpha}{\beta}\, x^\beta y^{\alpha - \beta}.$$
Note that, since $x + y$ is a vector and $\alpha$ is a multi-index, the expression on the left is short for $(x_1 + y_1)^{\alpha_1} \cdots (x_n + y_n)^{\alpha_n}$.
Leibniz formula
For smooth functions $f$ and $g$,
$$\partial^\alpha (fg) = \sum_{\beta \le \alpha} \binom{\alpha}{\beta}\, (\partial^\beta f)(\partial^{\alpha - \beta} g).$$
Taylor series
For an analytic function $f$ in $n$ variables one has
$$f(x + h) = \sum_{\alpha \in \mathbb{N}_0^n} \frac{\partial^\alpha f(x)}{\alpha!}\, h^\alpha.$$
In fact, for a smooth enough function, we have the similar Taylor expansion
$$f(x + h) = \sum_{|\alpha| \le n} \frac{\partial^\alpha f(x)}{\alpha!}\, h^\alpha + R_n(x, h),$$
where the last term (the remainder) depends on the exact version of Taylor's formula. For instance, for the Cauchy formula (with integral remainder), one gets
$$R_n(x, h) = (n + 1) \sum_{|\alpha| = n + 1} \frac{h^\alpha}{\alpha!} \int_0^1 (1 - t)^n\, \partial^\alpha f(x + t h)\, dt.$$
General linear partial differential operator
A formal linear $N$-th order partial differential operator in $n$ variables is written as
$$P(\partial) = \sum_{|\alpha| \le N} a_\alpha(x)\, \partial^\alpha.$$
Integration by parts
For smooth functions with compact support in a bounded domain $\Omega \subset \mathbb{R}^n$ one has
$$\int_\Omega u\, (\partial^\alpha v)\, dx = (-1)^{|\alpha|} \int_\Omega (\partial^\alpha u)\, v\, dx.$$
This formula is used for the definition of distributions and weak derivatives.
An example theorem
If $\alpha, \beta \in \mathbb{N}_0^n$ are multi-indices and $x = (x_1, \ldots, x_n)$, then
$$\partial^\alpha x^\beta = \begin{cases} \dfrac{\beta!}{(\beta - \alpha)!}\, x^{\beta - \alpha} & \text{if } \alpha \le \beta, \\ 0 & \text{otherwise.} \end{cases}$$
Proof
The proof follows from the power rule for the ordinary derivative; if $\alpha$ and $\beta$ are in $\{0, 1, 2, \ldots\}$, then
$$\frac{d^\alpha}{dx^\alpha} x^\beta = \begin{cases} \dfrac{\beta!}{(\beta - \alpha)!}\, x^{\beta - \alpha} & \text{if } \alpha \le \beta, \\ 0 & \text{otherwise.} \end{cases}$$
Suppose $\alpha = (\alpha_1, \ldots, \alpha_n)$, $\beta = (\beta_1, \ldots, \beta_n)$, and $x = (x_1, \ldots, x_n)$. Then we have that
$$\partial^\alpha x^\beta = \frac{\partial^{|\alpha|}}{\partial x_1^{\alpha_1} \cdots \partial x_n^{\alpha_n}}\, x_1^{\beta_1} \cdots x_n^{\beta_n}.$$
For each $i$ in $\{1, \ldots, n\}$, the function $x_i^{\beta_i}$ only depends on $x_i$. In the above, each partial differe
|
https://en.wikipedia.org/wiki/Graph%20paper
|
Graph paper, coordinate paper, grid paper, or squared paper is writing paper that is printed with fine lines making up a regular grid. The lines are often used as guides for plotting graphs of functions or experimental data and drawing curves. It is commonly found in mathematics and engineering education settings and in laboratory notebooks. Graph paper is available either as loose leaf paper or bound in notebooks.
History
The Metropolitan Museum of Art owns a pattern book dated to around 1596 in which each page bears a grid printed with a woodblock. The owner has used these grids to create block pictures in black and white and in colour.
The first commercially published "coordinate paper" is usually attributed to a Dr. Buxton of England, who patented paper, printed with a rectangular coordinate grid, in 1794. A century later, E. H. Moore, a distinguished mathematician at the University of Chicago, advocated usage of paper with "squared lines" by students of high schools and universities. The 1906 edition of Algebra for Beginners by H. S. Hall and S. R. Knight included a strong statement that "the squared paper should be of good quality and accurately ruled to inches and tenths of an inch. Experience shows that anything on a smaller scale (such as 'millimeter' paper) is practically worthless in the hands of beginners."
The term "graph paper" did not catch on quickly in American usage. A School Arithmetic (1919) by H. S. Hall and F. H. Stevens had a chapter on graphing with "squared paper". Analytic Geometry (1937) by W. A. Wilson and J. A. Tracey used the phrase "coordinate paper". The term "squared paper" remained in British usage for longer; for example it was used in Public School Arithmetic (2023) by W. M. Baker and A. A. Bourne published in London.
Formats
Quad paper, sometimes referred to as quadrille paper from French quadrillé, 'large square', is a common form of graph paper with a sparse grid printed in light blue or gray and right to the edge of the
|
https://en.wikipedia.org/wiki/Food%20history
|
Food history is an interdisciplinary field that examines the history and the cultural, economic, environmental, and sociological impacts of food and human nutrition. It is considered distinct from the more traditional field of culinary history, which focuses on the origin and recreation of specific recipes.
The first journal in the field, Petits Propos Culinaires, was launched in 1979 and the first conference on the subject was the 1981 Oxford Food Symposium.
Food and diets in history
Early human nutrition was largely determined by the availability and palatability (tastiness) of foods. Humans evolved as omnivorous hunter-gatherers, though our diet has varied significantly depending on location and climate. The diet in the tropics tended to depend more heavily on plant foods, while the diet at higher latitudes tended more towards animal products. Analyses of postcranial and cranial remains of humans and animals from the Neolithic, along with detailed bone-modification studies, have shown that cannibalism also occurred among prehistoric humans.
Agriculture developed at different times in different places, starting about 11,500 years ago, providing some cultures with a more abundant supply of grains (such as wheat, rice and maize) and potatoes; this made possible dough for staples such as bread, pasta, and tortillas. The domestication of animals provided some cultures with milk and dairy products.
In 2020, archeological research discovered a frescoed thermopolium (a fast-food counter) in an exceptional state of preservation from 79 CE/AD in Pompeii, including 2,000-year-old foods available in some of the deep terra cotta jars.
Classical antiquity
During classical antiquity, diets consisted of simple fresh or preserved whole foods that were either locally grown or transported from neighboring areas during times of crisis.
5th to 15th century: Middle Ages in Western Europe
In western Europe, medieval cuisine (5th–15th century) did not change rapidly.
Cereal
|
https://en.wikipedia.org/wiki/List%20of%20mathematical%20examples
|
This page will attempt to list examples in mathematics. To qualify for inclusion, an article should be about a mathematical object with a fair amount of concreteness. Usually a definition of an abstract concept, a theorem, or a proof would not be an "example" as the term should be understood here (an elegant proof of an isolated but particularly striking fact, as opposed to a proof of a general theorem, could perhaps be considered an "example"). The discussion page for list of mathematical topics has some comments on this. Eventually this page may have its own discussion page. This page links to itself in order that edits to this page will be included among related changes when the user clicks on that button.
The concrete example within the article titled Rao-Blackwell theorem is perhaps one of the best ways for a probabilist ignorant of statistical inference to get a quick impression of the flavor of that subject.
Uncategorized examples, alphabetized
Alexander horned sphere
All horses are the same color
Cantor function
Cantor set
Checking if a coin is biased
Concrete illustration of the central limit theorem
Differential equations of mathematical physics
Dirichlet function
Discontinuous linear map
Efron's non-transitive dice
Example of a game without a value
Examples of contour integration
Examples of differential equations
Examples of generating functions
Examples of groups
List of the 230 crystallographic 3D space groups
Examples of Markov chains
Examples of vector spaces
Fano plane
Frieze group
Gray graph
Hall–Janko graph
Higman–Sims graph
Hilbert matrix
Illustration of a low-discrepancy sequence
Illustration of the central limit theorem
An infinitely differentiable function that is not analytic
Leech lattice
Lewy's example on PDEs
List of finite simple groups
Long line
Normally distributed and uncorrelated does not imply independent
Pairwise independence of random variables need not imply mutual independence.
Petersen graph
Sierpinski space
Simple examp
|
https://en.wikipedia.org/wiki/Electrophoretic%20mobility%20shift%20assay
|
An electrophoretic mobility shift assay (EMSA) or mobility shift electrophoresis, also referred to as a gel shift assay, gel mobility shift assay, band shift assay, or gel retardation assay, is a common affinity electrophoresis technique used to study protein–DNA or protein–RNA interactions. This procedure can determine if a protein or mixture of proteins is capable of binding to a given DNA or RNA sequence, and can sometimes indicate if more than one protein molecule is involved in the binding complex. Gel shift assays are often performed in vitro concurrently with DNase footprinting, primer extension, and promoter-probe experiments when studying transcription initiation, DNA replication, DNA repair or RNA processing and maturation, as well as pre-mRNA splicing. Although precursors can be found in earlier literature, most current assays are based on methods described by Garner and Revzin and Fried and Crothers.
Principle
A mobility shift assay is electrophoretic separation of a protein–DNA or protein–RNA mixture on a polyacrylamide or agarose gel for a short period (about 1.5-2 hr for a 15- to 20-cm gel). The speed at which different molecules (and combinations thereof) move through the gel is determined by their size and charge, and to a lesser extent, their shape (see gel electrophoresis). The control lane (DNA probe without protein present) will contain a single band corresponding to the unbound DNA or RNA fragment. However, assuming that the protein is capable of binding to the fragment, the lane with a protein that binds present will contain another band that represents the larger, less mobile complex of nucleic acid probe bound to protein which is 'shifted' up on the gel (since it has moved more slowly).
Under the correct experimental conditions, the interaction between the DNA (or RNA) and protein is stabilized and the ratio of bound to unbound nucleic acid on the gel reflects the fraction of free and bound probe molecules as the binding reaction ent
|
https://en.wikipedia.org/wiki/Overlay%20network
|
An overlay network is a computer network that is layered on top of another network.
Structure
Nodes in the overlay network can be thought of as being connected by virtual or logical links, each of which corresponds to a path, perhaps through many physical links, in the underlying network. For example, distributed systems such as peer-to-peer networks and client–server applications are overlay networks because their nodes run on top of the Internet.
The Internet was originally built as an overlay upon the telephone network, while today (through the advent of VoIP), the telephone network is increasingly turning into an overlay network built on top of the Internet.
Uses
Enterprise networks
Enterprise private networks were first overlaid on telecommunication networks such as Frame Relay and Asynchronous Transfer Mode packet-switching infrastructures, but migration from these (now legacy) infrastructures to IP-based MPLS networks and virtual private networks started around 2001–2002.
From a physical standpoint, overlay networks are quite complex (see Figure 1) as they combine various logical layers that are operated and built by various entities (businesses, universities, government etc.) but they allow separation of concerns that over time permitted the buildup of a broad set of services that could not have been proposed by a single telecommunication operator (ranging from broadband Internet access, voice over IP or IPTV, competitive telecom operators etc.).
Internet
Telecommunication transport networks and IP networks (which combined make up the broader Internet) are all overlaid with at least an optical fiber layer, a transport layer, and an IP or circuit-switching layer (in the case of the PSTN).
Over the Internet
Nowadays the Internet is the basis for more overlaid networks that can be constructed in order to permit routing of messages to destinations not specified by an IP address. For example, distributed hash tables can be used to route messages to a node
|
https://en.wikipedia.org/wiki/Load%E2%80%93store%20architecture
|
In computer engineering, a load–store architecture (or a register–register architecture) is an instruction set architecture that divides instructions into two categories: memory access (load and store between memory and registers) and ALU operations (which only occur between registers).
Some RISC architectures such as PowerPC, SPARC, RISC-V, ARM, and MIPS are load–store architectures.
For instance, in a load–store approach both operands and destination for an ADD operation must be in registers. This differs from a register–memory architecture (for example, a CISC instruction set architecture such as x86) in which one of the operands for the ADD operation may be in memory, while the other is in a register.
The earliest example of a load–store architecture was the CDC 6600. Almost all vector processors (including many GPUs) use the load–store approach.
See also
Load–store unit
Register–memory architecture
|
https://en.wikipedia.org/wiki/Perceptual%20trap
|
A perceptual trap is an ecological scenario in which environmental change, typically anthropogenic, leads an organism to avoid an otherwise high-quality habitat. The concept is related to that of an ecological trap, in which environmental change causes preference towards a low-quality habitat.
History
In a 2004 article discussing source–sink dynamics, James Battin did not distinguish between high-quality habitats that are preferred or avoided, labelling both "sources." The latter scenario, in which a high-quality habitat is avoided, was first recognised as an important phenomenon in 2007 by Gilroy and Sutherland, who described them as "undervalued resources." The term "perceptual trap" was first proposed by Michael Patten and Jeffrey Kelly in a 2010 article. Hans Van Dyck argues that the term is misleading because perception is also a major component in other cases of trapping.
Description
Animals use discrete environmental cues to select habitat. A perceptual trap occurs if change in an environmental cue leads an organism to avoid a high-quality habitat. It differs, therefore, from simple habitat avoidance, which may be a correct decision given the habitat's quality. The concept of a perceptual trap is related to that of an ecological trap, in which environmental change causes preference towards a low-quality habitat. There is expected to be strong natural selection against ecological traps, but not necessarily against perceptual traps, as Allee effects may restrict a population’s ability to establish itself.
Examples
To support the concept of a perceptual trap, Patten and Kelly cited a study of the lesser prairie chicken (Tympanuchus pallidicinctus). The species' natural environment, shinnery oak grassland, is often treated with the herbicide tebuthiuron to increase grass cover for cattle grazing. Herbicide treatment resulted in less shrub cover, a habitat cue that caused female lesser prairie-chickens to avoid the habitat in favour of untreated areas. However
|
https://en.wikipedia.org/wiki/List%20of%20quantum-mechanical%20systems%20with%20analytical%20solutions
|
Much insight in quantum mechanics can be gained from understanding the closed-form solutions to the time-dependent non-relativistic Schrödinger equation. It takes the form
$$i\hbar \frac{\partial \Psi(\mathbf{r}, t)}{\partial t} = \hat{H}\, \Psi(\mathbf{r}, t),$$
where $\Psi$ is the wave function of the system, $\hat{H}$ is the Hamiltonian operator, and $t$ is time. Stationary states of this equation are found by solving the time-independent Schrödinger equation,
$$\hat{H}\, \psi(\mathbf{r}) = E\, \psi(\mathbf{r}),$$
which is an eigenvalue equation. Very often, only numerical solutions to the Schrödinger equation can be found for a given physical system and its associated potential energy. However, there exists a subset of physical systems for which the form of the eigenfunctions and their associated energies, or eigenvalues, can be found. These quantum-mechanical systems with analytical solutions are listed below.
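As a small numerical illustration (an informal check, with arbitrary units ħ = m = L = 1 and an arbitrary grid size), the following NumPy sketch compares the analytic particle-in-a-box energies E_n = n²π²ħ²/(2mL²) with the eigenvalues of a simple finite-difference discretization of the time-independent equation:

import numpy as np

# Infinite square well on (0, L) with hbar = m = L = 1: E_n = (n * pi)**2 / 2.
n_grid = 1000
dx = 1.0 / (n_grid + 1)

# Discretized kinetic energy -1/2 d^2/dx^2 with psi = 0 at both walls (Dirichlet).
main_diag = np.full(n_grid, 1.0 / dx**2)
off_diag = np.full(n_grid - 1, -0.5 / dx**2)
H = np.diag(main_diag) + np.diag(off_diag, 1) + np.diag(off_diag, -1)

numeric = np.linalg.eigvalsh(H)[:4]
analytic = np.array([(n * np.pi) ** 2 / 2 for n in range(1, 5)])
print(np.round(numeric, 4))    # lowest four finite-difference eigenvalues
print(np.round(analytic, 4))   # 4.9348, 19.7392, 44.4132, 78.9568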
Solvable systems
The two-state quantum system (the simplest possible quantum system)
The free particle
The delta potential
The double-well Dirac delta potential
The particle in a box / infinite potential well
The finite potential well
The one-dimensional triangular potential
The particle in a ring or ring wave guide
The particle in a spherically symmetric potential
The quantum harmonic oscillator
The quantum harmonic oscillator with an applied uniform field
The hydrogen atom or hydrogen-like atom e.g. positronium
The hydrogen atom in a spherical cavity with Dirichlet boundary conditions
The particle in a one-dimensional lattice (periodic potential)
The particle in a one-dimensional lattice of finite length
The Morse potential
The Mie potential
The step potential
The linear rigid rotor
The symmetric top
The Hooke's atom
The Spherium atom
Zero range interaction in a harmonic trap
The quantum pendulum
The rectangular potential barrier
The Pöschl–Teller potential
The Inverse square root potential
Multistate Landau–Zener models
The Luttinger liquid (the only exact quantum mechanical solution to a model including interparticle interactions)
See also
List of quantum-mechanical potentials – a list of physically
|
https://en.wikipedia.org/wiki/Like%20terms
|
In mathematics, like terms are summands in a sum that differ only by a numerical factor. Like terms can be regrouped by adding their coefficients.
Typically, in a polynomial expression, like terms are those that contain the same variables to the same powers, possibly with different coefficients.
More generally, when some variables are considered as parameters, like terms are defined similarly, but "numerical factors" must be replaced by "factors depending only on the parameters".
For example, when considering a quadratic equation, one often considers the expression
$$(x - a)(x - b),$$
where $a$ and $b$ are the roots of the equation and may be considered as parameters. Then, expanding the above product and regrouping the like terms gives
$$x^2 - (a + b)\, x + ab.$$
Generalization
In this discussion, a "term" will refer to a string of numbers being multiplied or divided (that division is simply multiplication by a reciprocal) together. Terms are within the same expression and are combined by either addition or subtraction. For example, take the expression:
There are two terms in this expression. Notice that the two terms have a common factor, that is, both terms have an . This means that the common factor variable can be factored out, resulting in
If the expression in parentheses may be calculated, that is, if the variables in the expression in the parentheses are known numbers, then it is simpler to write the calculation . and juxtapose that new number with the remaining unknown number. Terms combined in an expression with a common, unknown factor (or multiple unknown factors) are called like terms.
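For readers who want to experiment, a computer algebra system can regroup like terms automatically. The following SymPy sketch (the symbol names simply mirror the discussion above) collects the two terms that share the factor x:

import sympy as sp

a, b, x = sp.symbols("a b x")
expr = a * x + b * x               # two like terms: both contain the factor x
print(sp.collect(expr, x))         # x*(a + b)
print(expr.subs({a: 1, b: 2}))     # 3*x, once a and b are known numbers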
Examples
Example
To provide an example for the above, let $a$ and $b$ have numerical values, so that their sum may be calculated. For ease of calculation, let $a = 1$ and $b = 2$. The original expression becomes
$$x + 2x,$$
which may be factored into
$$x(1 + 2)$$
or, equally,
$$3x.$$
This demonstrates that $x + 2x = 3x$.
The known values assigned to the unlike part of two or more terms are called coefficients. As this example shows, when like terms exist in an expression, they m
|
https://en.wikipedia.org/wiki/Cache%20pollution
|
Cache pollution describes situations where an executing computer program loads data into CPU cache unnecessarily, thus causing other useful data to be evicted from the cache into lower levels of the memory hierarchy, degrading performance. For example, in a multi-core processor, one core may replace the blocks fetched by other cores into shared cache, or prefetched blocks may replace demand-fetched blocks from the cache.
Example
Consider the following illustration:
T[0] = T[0] + 1;
for i in 0..sizeof(CACHE)
C[i] = C[i] + 1;
T[0] = T[0] + C[sizeof(CACHE)-1];
(The assumptions here are that the cache is composed of only one level, it is unlocked, the replacement policy is pseudo-LRU, all data is cacheable, the set associativity of the cache is N (where N > 1), and at most one processor register is available to contain program values).
Right before the loop starts, T[0] will be fetched from memory into cache, its value updated. However, as the loop executes, because the number of data elements the loop references requires the whole cache to be filled to its capacity, the cache block containing T[0] has to be evicted. Thus, the next time the program requests T[0] to be updated, the cache misses, and the cache controller has to request the data bus to bring the corresponding cache block from main memory again.
In this case the cache is said to be "polluted". Changing the pattern of data accesses by positioning the first update of T[0] between the loop and the second update can eliminate the inefficiency:
for i in 0..sizeof(CACHE)
C[i] = C[i] + 1;
T[0] = T[0] + 1;
T[0] = T[0] + C[sizeof(CACHE)-1];
Solutions
Other than the code restructuring mentioned above, the solution to cache pollution is to ensure that only high-reuse data are stored in the cache. This can be achieved by using special cache control instructions, operating system support or hardware support.
Examples of specialized hardware instructions include "lvxl" provided by PowerPC AltiVec. T
|
https://en.wikipedia.org/wiki/Mathematical%20beauty
|
Mathematical beauty is the aesthetic pleasure derived from the abstractness, purity, simplicity, depth or orderliness of mathematics. Mathematicians may express this pleasure by describing mathematics (or, at least, some aspect of mathematics) as beautiful or describe mathematics as an art form, (a position taken by G. H. Hardy) or, at a minimum, as a creative activity.
Comparisons are made with music and poetry.
In method
Mathematicians describe an especially pleasing method of proof as elegant. Depending on context, this may mean:
A proof that uses a minimum of additional assumptions or previous results.
A proof that is unusually succinct.
A proof that derives a result in a surprising way (e.g., from an apparently unrelated theorem or a collection of theorems).
A proof that is based on new and original insights.
A method of proof that can be easily generalized to solve a family of similar problems.
In the search for an elegant proof, mathematicians often look for different independent ways to prove a result—as the first proof that is found can often be improved. The theorem for which the greatest number of different proofs have been discovered is possibly the Pythagorean theorem, with hundreds of proofs having been published to date. Another theorem that has been proved in many different ways is the theorem of quadratic reciprocity. In fact, Carl Friedrich Gauss alone had eight different proofs of this theorem, six of which he published.
Conversely, results that are logically correct but involve laborious calculations, over-elaborate methods, highly conventional approaches or a large number of powerful axioms or previous results are usually not considered to be elegant, and may be even referred to as ugly or clumsy.
In results
Some mathematicians see beauty in mathematical results that establish connections between two areas of mathematics that at first sight appear to be unrelated. These results are often described as deep. While it is difficult to f
|
https://en.wikipedia.org/wiki/Biocontainment
|
One use of the concept of biocontainment is related to laboratory biosafety and pertains to microbiology laboratories in which the physical containment of pathogenic organisms or agents (bacteria, viruses, and toxins) is required, usually by isolation in environmentally and biologically secure cabinets or rooms, to prevent accidental infection of workers or release into the surrounding community during scientific research.
Another use of the term relates to facilities for the study of agricultural pathogens, where it is used similarly to the term "biosafety", relating to safety practices and procedures used to prevent unintended infection of plants or animals or the release of high-consequence pathogenic agents into the environment (air, soil, or water).
Terminology
The World Health Organization's 2006 publication, Biorisk management: Laboratory biosecurity guidance, defines laboratory biosafety as "the containment principles, technologies and practices that are implemented to prevent the unintentional exposure to pathogens and toxins, or their accidental release". It defines biorisk management as "the analysis of ways and development of strategies to minimize the likelihood of the occurrence of biorisks".
The term "biocontainment" is related to laboratory biosafety. Merriam-Webster's online dictionary reports the first use of the term in 1966, defined as "the containment of extremely pathogenic organisms (such as viruses) usually by isolation in secure facilities to prevent their accidental release especially during research".
The term laboratory biosafety refers to the measures taken "to reduce the risk of accidental release of or exposure to infectious disease agents", whereas laboratory biosecurity is usually taken to mean "a set of systems and practices employed in legitimate bioscience facilities to reduce the risk that dangerous biological agents will be stolen and used maliciously".
Containment types
Laboratory context
Primary containment is the first
|
https://en.wikipedia.org/wiki/I/O%20virtualization
|
In virtualization, input/output virtualization (I/O virtualization) is a methodology to simplify management, lower costs and improve performance of servers in enterprise environments. I/O virtualization environments are created by abstracting the upper layer protocols from the physical connections.
The technology enables one physical adapter card to appear as multiple virtual network interface cards (vNICs) and virtual host bus adapters (vHBAs). Virtual NICs and HBAs function as conventional NICs and HBAs, and are designed to be compatible with existing operating systems, hypervisors, and applications. To networking resources (LANs and SANs), they appear as normal cards.
In the physical view, virtual I/O replaces a server’s multiple I/O cables with a single cable that provides a shared transport for all network and storage connections. That cable (or commonly two cables for redundancy) connects to an external device, which then provides connections to the data center networks.
Background
Server I/O is a critical component to successful and effective server deployments, particularly with virtualized servers. To accommodate multiple applications, virtualized servers demand more network bandwidth and connections to more networks and storage. According to a survey, 75% of virtualized servers require 7 or more I/O connections per device, and are likely to require more frequent I/O reconfigurations.
In virtualized data centers, I/O performance problems are caused by running numerous virtual machines (VMs) on one server. In early server virtualization implementations, the number of virtual machines per server was typically limited to six or less. But it was found that servers could safely run seven or more applications per server, often using 80 percent of total server capacity, an improvement over the average 5 to 15 percent utilized with non-virtualized servers.
However, increased utilization created by virtualization placed a significant strain on the server’s I/O cap
|
https://en.wikipedia.org/wiki/Xenohormesis
|
Xenohormesis is a hypothesis that posits that certain molecules, such as plant polyphenols, which indicate stress in the plants, can benefit another organism (a heterotroph) that consumes them. In simpler terms, xenohormesis is interspecies hormesis. The expected benefits include improved lifespan and fitness, achieved by activating the animal's cellular stress response.
Evolving such a response may be useful because these molecules give possible cues about the state of the environment. If the plants an animal is eating have increased polyphenol content, the plants are under stress, which may signal coming famine. Using these chemical cues, heterotrophs could preemptively prepare and defend themselves before conditions worsen. A possible example is resveratrol, famously found in red wine, which modulates over two dozen receptors and enzymes in mammals.
Xenohormesis could also explain several phenomena seen in ethnopharmacology (traditional medicine). One example is cinnamon, which several studies have found to help treat type 2 diabetes, although this has not been confirmed in meta-analyses. Such discrepancies could arise because the cinnamon used in one study differed from that used in another in its xenohormetic properties.
Several explanations have been proposed for why this works. First and foremost, it could be a coincidence, especially in cases where partially venomous products cause a positive stress in the organism. The second is that it is a shared evolutionary attribute, as animals and plants share a huge amount of homology between their pathways. The third is that there is evolutionary pressure on animals to evolve better responses to these molecules. The latter is proposed mainly by Howitz and his team.
There may also be a problem that the focus on maximizing crop output is losing many of the xenohormetic advantages. Although ideal growing conditions increase a plant's crop output, it can also be argued that the plant loses stress and therefore the hormesis. The honeybee colony colla
|
https://en.wikipedia.org/wiki/View%20model
|
A view model or viewpoints framework in systems engineering, software engineering, and enterprise engineering is a framework which defines a coherent set of views to be used in the construction of a system architecture, software architecture, or enterprise architecture. A view is a representation of the whole system from the perspective of a related set of concerns.
Since the early 1990s there have been a number of efforts to prescribe approaches for describing and analyzing system architectures. These recent efforts define a set of views (or viewpoints). They are sometimes referred to as architecture frameworks or enterprise architecture frameworks, but are usually called "view models".
Usually a view is a work product that presents specific architecture data for a given system. However, the same term is sometimes used to refer to a view definition, including the particular viewpoint and the corresponding guidance that defines each concrete view. The term view model is related to view definitions.
Overview
The purpose of views and viewpoints is to enable humans to comprehend very complex systems, to organize the elements of the problem and the solution around domains of expertise and to separate concerns. In the engineering of physically intensive systems, viewpoints often correspond to capabilities and responsibilities within the engineering organization.
Most complex system specifications are so extensive that no single individual can fully comprehend all aspects of the specifications. Furthermore, we all have different interests in a given system and different reasons for examining the system's specifications. A business executive will ask different questions of a system make-up than would a system implementer. The concept of viewpoints framework, therefore, is to provide separate viewpoints into the specification of a given complex system in order to facilitate communication with the stakeholders. Each viewpoint satisfies an audience with interest in a pa
|
https://en.wikipedia.org/wiki/Virtual%20firewall
|
A virtual firewall (VF) is a network firewall service or appliance running entirely within a virtualized environment and which provides the usual packet filtering and monitoring provided via a physical network firewall. The VF can be realized as a traditional software firewall on a guest virtual machine already running, a purpose-built virtual security appliance designed with virtual network security in mind, a virtual switch with additional security capabilities, or a managed kernel process running within the host hypervisor.
Background
So long as a computer network runs entirely over physical hardware and cabling, it is a physical network. As such it can be protected by physical firewalls and fire walls alike; the first and most important protection for a physical computer network always was and remains a physical, locked, flame-resistant door. Since the inception of the Internet this was the case, and structural fire walls and network firewalls were for a long time both necessary and sufficient.
Since about 1998 there has been an explosive increase in the use of virtual machines (VM) in addition to — sometimes instead of — physical machines to offer many kinds of computer and communications services on local area networks and over the broader Internet. The advantages of virtual machines are well explored elsewhere.
Virtual machines can operate in isolation (for example as a guest operating system on a personal computer) or under a unified virtualized environment overseen by a supervisory virtual machine monitor or "hypervisor" process. In the case where many virtual machines operate under the same virtualized environment they might be connected together via a virtual network consisting of virtualized network switches between machines and virtualized network interfaces within machines. The resulting virtual network could then implement traditional network protocols (for example TCP) or virtual network provisioning such as VLAN or VPN, though the latter while u
|
https://en.wikipedia.org/wiki/Tokogeny
|
Tokogeny or tocogeny is the biological relationship between parent and offspring, or more generally between ancestors and descendants. In contradistinction to phylogeny it applies to individual organisms as opposed to species.
In the tokogenetic system, shared characteristics are called traits.
|
https://en.wikipedia.org/wiki/Koch%27s%20postulates
|
Koch's postulates ( ) are four criteria designed to establish a causal relationship between a microbe and a disease. The postulates were formulated by Robert Koch and Friedrich Loeffler in 1884, based on earlier concepts described by Jakob Henle, and the statements were refined and published by Koch in 1890. Koch applied the postulates to describe the etiology of cholera and tuberculosis, both of which are now ascribed to bacteria. The postulates have been controversially generalized to other diseases. More modern concepts in microbial pathogenesis cannot be examined using Koch's postulates, including viruses (which are obligate intracellular parasites) and asymptomatic carriers. They have largely been supplanted by other criteria such as the Bradford Hill criteria for infectious disease causality in modern public health and the Molecular Koch's postulates for microbial pathogenesis.
Postulates
Koch's four postulates are:
The microorganism must be found in abundance in all organisms suffering from the disease but should not be found in healthy organisms.
The microorganism must be isolated from a diseased organism and grown in pure culture.
The cultured microorganism should cause disease when introduced into a healthy organism.
The microorganism must be re-isolated from the inoculated, diseased experimental host and identified as being identical to the original specific causative agent.
However, Koch later abandoned the universalist requirement of the first postulate when he discovered asymptomatic carriers of cholera and, later, of typhoid fever. Subclinical infections and asymptomatic carriers are now known to be a common feature of many infectious diseases, especially viral diseases such as polio, herpes simplex, HIV/AIDS, hepatitis C, and COVID-19. For example, poliovirus only causes paralysis in a small percentage of those infected.
The second postulate does not apply to pathogens incapable of growing in pure culture. For example, viruses are dependent
|
https://en.wikipedia.org/wiki/Field-programmable%20object%20array
|
A field-programmable object array (FPOA) is a class of programmable logic devices designed to be modified or programmed after manufacturing. They are designed to bridge the gap between ASIC and FPGA. They contain a grid of programmable silicon objects. The Arrix range of FPOAs contained three types of silicon objects: arithmetic logic units (ALUs), register files (RFs) and multiply-and-accumulate units (MACs). Both the objects and interconnects are programmable.
Motivation and history
The device was intended to bridge the gap between field-programmable gate arrays (FPGAs) and application-specific integrated circuits (ASICs). The design goal was to combine the programmability of FPGAs and the performance of ASICs. FPGAs, although programmable, lacked performance; they could typically be clocked only to a few hundred megahertz, and most FPGAs operated below 100 MHz. FPGAs did not offer deterministic timing, and the maximum operating frequency depended on the design. ASICs offered good performance, but they could not be modified and they were very costly. The FPOA had a programmable architecture, deterministic timing, and gigahertz performance. The FPOA was designed by Douglas Pihl, who had the idea when working on a DARPA-funded project. He founded MathStar in 1997 to manufacture FPOAs, and the idea was patented in 2004. The first FPOA prototypes were made in 2005, and the first batch of FPOA chips was fabricated in 2006.
Architecture
FPOAs have a core grid of silicon objects or core objects. These objects are connected through a synchronous interconnect. Each core object also has supporting structures for clock synchronization, BIST and the like. The core is surrounded by peripheral circuitry that contains memory and I/O. Interface circuitry connects the objects to the rest of the FPOA. The exact number of each type of object and their arrangement are specific to a given family. There are two types of communication: nearest member and "party-line". Nearest member is used to connect a core to nea
|
https://en.wikipedia.org/wiki/RCA%20CDP1861
|
The RCA CDP1861 was an integrated circuit Video Display Controller, released by the Radio Corporation of America (RCA) in the mid-1970s as a support chip for the RCA 1802 microprocessor. In 1977, the chip cost less than US$20.
History
The CDP1861 was manufactured in a low-power CMOS technology, came in a 24-pin DIP (Dual in-line package), and required a minimum of external components to work. In 1802-based microcomputers, the CDP1861 (for the NTSC video format, CDP1864 variant for PAL), used the 1802's built-in DMA controller to display black and white (monochrome) bitmapped graphics on standard TV screens. The CDP1861 was also known as the Pixie graphics system, display, chip, and video generator, especially when used with the COSMAC ELF microcomputer. Other known chip markings for the 1861 are TA10171, TA10171V1 and a TA10171X, which were early designations for "pre-qualification engineering samples" and "preliminary part numbers", although they have been found in production RCA Studio II game consoles and Netronics Elf microcomputers. The CDP1861 was also used in the Telmac 1800 and Oscom Nano microcomputers.
Specifications
The 1861 chip could display 64 pixels horizontally and 128 pixels vertically, though by reloading the 1802's R0 DMA (direct memory access) register via the required 1802 software controller program and interrupt service routine, the resolution could be reduced to 64×64 or 64×32 to use less memory than the 1024 bytes needed for the highest resolution (with each monochrome pixel occupying one bit) or to display square pixels. A resolution of 64×32 created square pixels and used 256 bytes of memory (2K bits). This was the usual resolution for the Chip-8 game programming system. Since the video graphics frame buffer was often similar or equal in size to the memory size, it was not unusual to display your program/data on the screen allowing you to watch the computer "think" (i.e. process its data). Programs which ran amok and accidenta
|
https://en.wikipedia.org/wiki/Laws%20of%20robotics
|
Laws of robotics are any set of laws, rules, or principles, which are intended as a fundamental framework to underpin the behavior of robots designed to have a degree of autonomy. Robots of this degree of complexity do not yet exist, but they have been widely anticipated in science fiction, films and are a topic of active research and development in the fields of robotics and artificial intelligence.
The best known set of laws are those written by Isaac Asimov in the 1940s, or based upon them, but other sets of laws have been proposed by researchers in the decades since then.
Isaac Asimov's "Three Laws of Robotics"
The best known set of laws are Isaac Asimov's "Three Laws of Robotics". These were introduced in his 1942 short story "Runaround", although they were foreshadowed in a few earlier stories. The Three Laws are:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
In The Evitable Conflict the machines generalize the First Law to mean:
No machine may harm humanity; or, through inaction, allow humanity to come to harm.
This was refined at the end of Foundation and Earth, where a zeroth law was introduced, with the original three suitably rewritten as subordinate to it: A robot may not harm humanity, or, through inaction, allow humanity to come to harm.
Adaptations and extensions exist based upon this framework. As of 2021 they remain a "fictional device".
EPSRC / AHRC principles of robotics
In 2011, the Engineering and Physical Sciences Research Council (EPSRC) and the Arts and Humanities Research Council (AHRC) of United Kingdom jointly published a set of five ethical "principles for designers, builders and users of robots" in the real world, along with seven "high-level messages" intended to be conveyed, based on a September 2010 research workshop:
Robots should not be de
|
https://en.wikipedia.org/wiki/Index%20of%20wave%20articles
|
This is a list of wave topics.
0–9
21 cm line
A
Abbe prism
Absorption spectroscopy
Absorption spectrum
Absorption wavemeter
Acoustic wave
Acoustic wave equation
Acoustics
Acousto-optic effect
Acousto-optic modulator
Acousto-optics
Airy disc
Airy wave theory
Alfvén wave
Alpha waves
Amphidromic point
Amplitude
Amplitude modulation
Animal echolocation
Antarctic Circumpolar Wave
Antiphase
Aquamarine Power
Arrayed waveguide grating
Artificial wave
Atmospheric diffraction
Atmospheric wave
Atmospheric waveguide
Atom laser
Atomic clock
Atomic mirror
Audience wave
Autowave
Averaged Lagrangian
B
Babinet's principle
Backward wave oscillator
Bandwidth-limited pulse
beat
Berry phase
Bessel beam
Beta wave
Black hole
Blazar
Bloch's theorem
Blueshift
Boussinesq approximation (water waves)
Bow wave
Bragg diffraction
Bragg's law
Breaking wave
Bremsstrahlung, Electromagnetic radiation
Brillouin scattering
Bullet bow shockwave
Burgers' equation
Business cycle
C
Capillary wave
Carrier wave
Cherenkov radiation
Chirp
Ernst Chladni
Circular polarization
Clapotis
Closed waveguide
Cnoidal wave
Coherence (physics)
Coherence length
Coherence time
Cold wave
Collimated light
Collimator
Compton effect
Comparison of analog and digital recording
Computation of radiowave attenuation in the atmosphere
Continuous phase modulation
Continuous wave
Convective heat transfer
Coriolis frequency
Coronal mass ejection
Cosmic microwave background radiation
Coulomb wave function
Cutoff frequency
Cutoff wavelength
Cymatics
D
Damped wave
Decollimation
Delta wave
Dielectric waveguide
Diffraction
Direction finding
Dispersion (optics)
Dispersion (water waves)
Dispersion relation
Dominant wavelength
Doppler effect
Doppler radar
Douglas Sea Scale
Draupner wave
Droplet-shaped wave
Duhamel's principle
E
E-skip
Earthquake
Echo (phenomenon)
Echo sounding
Echolocation (animal)
Echolocation (human)
Eddy (fluid dynamics)
Edge wave
Eikonal equation
Ekman layer
Ekman spiral
Ekman transport
El Niño–Southern Oscillation
El
|
https://en.wikipedia.org/wiki/Ns%20%28simulator%29
|
ns (from network simulator) is a name for a series of discrete event network simulators, specifically ns-1, ns-2, and ns-3. All are discrete-event computer network simulators, primarily used in research and teaching.
History
ns-1
The first version of ns, known as ns-1, was developed at Lawrence Berkeley National Laboratory (LBNL) in the 1995-97 timeframe by Steve McCanne, Sally Floyd, Kevin Fall, and other contributors. This was known as the LBNL Network Simulator, and derived in 1989 from an earlier simulator known as REAL by S. Keshav.
ns-2
Ns-2 began as a revision of ns-1. From 1997 to 2000, ns development was supported by DARPA through the VINT project at LBL, Xerox PARC, UC Berkeley, and USC/ISI. In 2000, ns-2 development was supported through DARPA with SAMAN and through NSF with CONSER, both at USC/ISI, in collaboration with other researchers including ACIRI.
Features of NS2
1. It is a discrete event simulator for networking research.
2. It provides substantial support for simulating protocols such as TCP, FTP, UDP, HTTP and DSR.
3. It simulates wired and wireless networks.
4. It is primarily Unix-based.
5. It uses Tcl as its scripting language.
6. OTcl: object-oriented support
7. TclCL: C++ and OTcl linkage
8. Discrete event scheduler (illustrated in the sketch below)
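The features above are specific to ns-2, which is implemented in C++ and OTcl. Purely as an illustration of what the discrete event scheduler in feature 8 does, and not as ns-2 code, here is a minimal Python sketch: events are kept in a priority queue ordered by timestamp and executed in time order.

```python
import heapq

class DiscreteEventScheduler:
    """Toy discrete-event scheduler: runs callbacks in timestamp order."""

    def __init__(self):
        self._queue = []      # heap of (time, sequence, callback)
        self._seq = 0         # tie-breaker for events scheduled at the same time
        self.now = 0.0

    def schedule(self, delay, callback):
        heapq.heappush(self._queue, (self.now + delay, self._seq, callback))
        self._seq += 1

    def run(self):
        while self._queue:
            self.now, _, callback = heapq.heappop(self._queue)
            callback(self)

# Hypothetical example: a "packet" sent at t = 0 s and acknowledged 0.5 s later.
def send_packet(sim):
    print(f"t={sim.now:.1f}s: packet sent")
    sim.schedule(0.5, lambda s: print(f"t={s.now:.1f}s: ACK received"))

sim = DiscreteEventScheduler()
sim.schedule(0.0, send_packet)
sim.run()
```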
Ns-2 incorporates substantial contributions from third parties, including wireless code from the UCB Daedelus and CMU Monarch projects and Sun Microsystems.
ns-3
In 2003, a team led by Tom Henderson, George Riley, Sally Floyd, and Sumit Roy, applied for and received funding from the U.S. National Science Foundation (NSF) to build a replacement for ns-2, called ns-3. This team collaborated with the Planete project of INRIA at Sophia Antipolis, with Mathieu Lacage as the software lead, and formed a new open source project.
In the process of developing ns-3, it was decided to completely abandon backward-compatibility with ns-2. The new simulator would be written from scratch, using the C++ programming language
|
https://en.wikipedia.org/wiki/Convergence%20%28routing%29
|
Convergence is the state of a set of routers that have the same topological information about the internetwork in which they operate. For a set of routers to have converged, they must have collected all available topology information from each other via the implemented routing protocol, the information they gathered must not contradict any other router's topology information in the set, and it must reflect the real state of the network. In other words: in a converged network all routers "agree" on what the network topology looks like.
Convergence is an important notion for a set of routers that engage in dynamic routing. All Interior Gateway Protocols rely on convergence to function properly. "To have, or be, converged" is the normal state of an operational autonomous system. The Exterior Gateway Routing Protocol BGP typically never converges because the Internet is too big for changes to be communicated fast enough.
Convergence process
When a routing protocol process is enabled, every participating router will attempt to exchange information about the topology of the network. The extent of this information exchange, the way it is sent and received, and the type of information required vary widely depending on the routing protocol in use, see e.g. RIP, OSPF, BGP4.
A state of convergence is achieved once all routing protocol-specific information has been distributed to all routers participating in the routing protocol process. Any change in the network that affects routing tables will break the convergence temporarily until this change has been successfully communicated to all other routers.
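As a rough illustration of the convergence process, and not of any specific protocol, the Python sketch below runs a simplified distance-vector exchange (hop-count metric, Bellman-Ford relaxation) over a hypothetical five-router topology and counts how many rounds of updates occur before no routing table changes, i.e. before the set of routers has converged.

```python
# Toy distance-vector convergence: routers repeatedly exchange distance
# tables with their neighbours until no table changes (convergence).
INF = float("inf")

# Hypothetical topology: adjacency list with link costs (hop count = 1).
links = {
    "A": {"B": 1, "C": 1},
    "B": {"A": 1, "D": 1},
    "C": {"A": 1, "D": 1},
    "D": {"B": 1, "C": 1, "E": 1},
    "E": {"D": 1},
}

nodes = list(links)
# Each router initially knows only itself and its directly connected neighbours.
tables = {n: {m: (0 if m == n else links[n].get(m, INF)) for m in nodes} for n in nodes}

rounds = 0
while True:
    changed = False
    for n in nodes:
        for neigh, cost in links[n].items():
            for dest in nodes:
                candidate = cost + tables[neigh][dest]   # Bellman-Ford relaxation
                if candidate < tables[n][dest]:
                    tables[n][dest] = candidate
                    changed = True
    rounds += 1
    if not changed:   # no router updated its table: the network has converged
        break

print(f"converged after {rounds} rounds of updates")
print("A's routing costs:", tables["A"])
```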
Convergence time
Convergence time is a measure of how fast a group of routers reach the state of convergence. It is one of the main design goals and an important performance indicator for routing protocols, which should implement a mechanism that allows all routers running the protocol to quickly and reliably converge. Of course, the size of the network also plays an imp
|
https://en.wikipedia.org/wiki/Memory%20hierarchy
|
In computer organisation, the memory hierarchy separates computer storage into a hierarchy based on response time. Since response time, complexity, and capacity are related, the levels may also be distinguished by their performance and controlling technologies. Memory hierarchy affects performance in computer architectural design, algorithm predictions, and lower level programming constructs involving locality of reference.
Designing for high performance requires considering the restrictions of the memory hierarchy, i.e. the size and capabilities of each component. Each of the various components can be viewed as part of a hierarchy of memories in which each member is typically smaller and faster than the next highest member of the hierarchy. To limit waiting by higher levels, a lower level will respond by filling a buffer and then signaling for activating the transfer.
There are four major storage levels.
Internal – Processor registers and cache.
Main – the system RAM and controller cards.
On-line mass storage – Secondary storage.
Off-line bulk storage – Tertiary and Off-line storage.
This is a general memory hierarchy structuring. Many other structures are useful. For example, a paging algorithm may be considered as a level for virtual memory when designing a computer architecture, and one can include a level of nearline storage between online and offline storage.
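To see why position in the hierarchy matters so much for performance, consider an effective (average) access time estimate. The Python sketch below is purely illustrative: the hit rates and latencies are hypothetical, not measurements of any particular machine.

```python
# Effective access time for a hypothetical three-level hierarchy.
# Each level is (name, hit_rate_at_this_level, access_latency_ns).
levels = [
    ("L1 cache",    0.90,   1),
    ("L2 cache",    0.95,  10),
    ("Main memory", 1.00, 100),   # assume main memory always hits
]

effective = 0.0
reach_probability = 1.0           # probability that an access gets this far down
for name, hit_rate, latency in levels:
    effective += reach_probability * latency   # every access reaching this level pays its latency
    reach_probability *= (1.0 - hit_rate)      # only misses continue downward

print(f"effective access time = {effective:.2f} ns")
# 1 + 0.1*10 + 0.1*0.05*100 = 2.5 ns, versus 100 ns if every access went to main memory
```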
Properties of the technologies in the memory hierarchy
Adding complexity slows down the memory hierarchy.
CMOx memory technology stretches the Flash space in the memory hierarchy
One of the main ways to increase system performance is minimising how far down the memory hierarchy one has to go to manipulate data.
Latency and bandwidth are two metrics associated with caches. Neither of them is uniform, but is specific to a particular component of the memory hierarchy.
Predicting where in the memory hierarchy the data resides is difficult.
...the location in the memory hierarchy dictates t
|
https://en.wikipedia.org/wiki/Reproductive%20interference
|
Reproductive interference is the interaction between individuals of different species during mate acquisition that leads to a reduction of fitness in one or more of the individuals involved. The interactions occur when individuals make mistakes or are unable to recognise their own species, labelled as 'incomplete species recognition'. Reproductive interference has been found within a variety of taxa, including insects, mammals, birds, amphibians, marine organisms, and plants.
There are seven causes of reproductive interference, namely signal jamming, heterospecific rivalry, misdirected courtship, heterospecific mating attempts, erroneous female choice, heterospecific mating, and hybridisation. All types have fitness costs on the participating individuals, generally from a reduction in reproductive success, a waste of gametes, and the expenditure of energy and nutrients. These costs are variable and dependent on numerous factors, such as the cause of reproductive interference, the sex of the parent, and the species involved.
Reproductive interference occurs between species that occupy the same habitat and can play a role in influencing the coexistence of these species. It differs from competition as reproductive interference does not occur due to a shared resource. Reproductive interference can have ecological consequences, such as through the segregation of species both spatially and temporally. It can also have evolutionary consequences, for example; it can impose a selective pressure on the affected species to evolve traits that better distinguish themselves from other species.
Causes of reproductive interference
Reproductive interference can occur at different stages of mating, from locating a potential mate, to the fertilisation of an individual of a different species. There are seven causes of reproductive interference that each have their own consequences on the fitness of one or both of the involved individuals.
Signal jamming
Signal jamming refers to t
|
https://en.wikipedia.org/wiki/Adverse%20food%20reaction
|
An adverse food reaction is an adverse response by the body to food or a specific type of food.
The most common adverse reaction is a food allergy, which is an adverse immune response to either a specific type or a range of food proteins.
However, other adverse responses to food are not allergies. These reactions include responses to food such as food intolerance, pharmacological reactions, and toxin-mediated reactions, as well as physical responses, such as choking.
|
https://en.wikipedia.org/wiki/A%20History%20of%20Mathematical%20Notations
|
A History of Mathematical Notations is a book on the history of mathematics and of mathematical notation. It was written by Swiss-American historian of mathematics Florian Cajori (1859–1930), and originally published as a two-volume set by the Open Court Publishing Company in 1928 and 1929, with the subtitles Volume I: Notations in Elementary Mathematics (1928) and Volume II: Notations Mainly in Higher Mathematics (1929). Although Open Court republished it in a second edition in 1974, it was unchanged from the first edition. In 1993, it was published as an 820-page single volume edition by Dover Publications, with its original pagination unchanged.
The Basic Library List Committee of the Mathematical Association of America has listed this book as essential for inclusion in undergraduate mathematics libraries. It was already described as long-awaited at the time of its publication, and by 2013, when the Dover edition was reviewed by Fernando Q. Gouvêa, he wrote that it was "one of those books so well known that it doesn’t need a review". However, some of its claims on the history of the notations it describes have been subsumed by more recent research, and its coverage of modern mathematics is limited, so it should be used with care as a reference.
Topics
The first volume of the book concerns elementary mathematics. It has 400 pages of material on arithmetic. This includes the history of notation for numbers from many ancient cultures, arranged by culture, with the Hindu–Arabic numeral system treated separately. Following this, it covers notation for arithmetic operations, arranged separately by operation and by the mathematicians who used those notations (although not in a strict chronological ordering). The first volume concludes with 30 pages on elementary geometry, including also the struggle between symbolists and rhetoricians in the 18th and 19th centuries on whether to express mathematics in notation or words, respectively.
The second volume is divided more
|
https://en.wikipedia.org/wiki/Outline%20of%20algebraic%20structures
|
In mathematics, there are many types of algebraic structures which are studied. Abstract algebra is primarily the study of specific algebraic structures and their properties. Algebraic structures may be viewed in different ways, however the common starting point of algebra texts is that an algebraic object incorporates one or more sets with one or more binary operations or unary operations satisfying a collection of axioms.
Another branch of mathematics known as universal algebra studies algebraic structures in general. From the universal algebra viewpoint, most structures can be divided into varieties and quasivarieties depending on the axioms used. Some axiomatic formal systems that are neither varieties nor quasivarieties, called nonvarieties, are sometimes included among the algebraic structures by tradition.
Concrete examples of each structure will be found in the articles listed.
Algebraic structures are so numerous today that this article will inevitably be incomplete. In addition to this, there are sometimes multiple names for the same structure, and sometimes one name will be defined by disagreeing axioms by different authors. Most structures appearing on this page will be common ones which most authors agree on. Other web lists of algebraic structures, organized more or less alphabetically, include Jipsen and PlanetMath. These lists mention many structures not included below, and may present more information about some structures than is presented here.
Study of algebraic structures
Algebraic structures appear in most branches of mathematics, and one can encounter them in many different ways.
Beginning study: In American universities, groups, vector spaces and fields are generally the first structures encountered in subjects such as linear algebra. They are usually introduced as sets with certain axioms.
Advanced study:
Abstract algebra studies properties of specific algebraic structures.
Universal algebra studies algebraic structures abstractly, r
|
https://en.wikipedia.org/wiki/Systems%20design
|
Systems design is the process of defining the architecture, modules, interfaces, and data for an electronic control system to satisfy specified requirements. System design could be seen as the application of system theory to product development. There is some overlap with the disciplines of system analysis, system architecture and system engineering.
Overview
If the broader topic of product development "blends the perspective of marketing, design, and manufacturing into a single approach to product development," then design is the act of taking the marketing information and creating the design of the product to be manufactured. Systems design is therefore the process of defining and developing systems to satisfy specified requirements of the user.
The basic study of system design is the understanding of component parts and their subsequent interaction with one another.
Physical design
The physical design relates to the actual input and output processes of the system. This is explained in terms of how data is input into a system, how it is verified/authenticated, how it is processed, and how it is displayed.
In physical design, the following requirements about the system are decided.
Input requirement,
Output requirements,
Storage requirements,
Processing requirements,
System control and backup or recovery.
Put another way, the physical portion of system design can generally be broken down into three sub-tasks:
User Interface Design
Data Design
Process Design
Web System design
Online websites, such as Google, Twitter, Facebook, Amazon and Netflix are used by millions of users worldwide. A scalable, highly available system must be designed to accommodate an increasing number of users. Here are the things to consider in designing the system:
Functional and non functional requirements
Capacity estimation
Database to use, Relational or NoSQL
Vertical scaling, Horizontal scaling, Sharding
Load Balancing
Primary-secondary Replication
Cache and CDN
Stateless and Stateful servers
Data center georouting
|
https://en.wikipedia.org/wiki/Time-stretch%20analog-to-digital%20converter
|
The time-stretch analog-to-digital converter (TS-ADC), also known as the time-stretch enhanced recorder (TiSER), is an analog-to-digital converter (ADC) system that has the capability of digitizing very high bandwidth signals that cannot be captured by conventional electronic ADCs. Alternatively, it is also known as the photonic time-stretch (PTS) digitizer, since it uses an optical frontend. It relies on the process of time-stretch, which effectively slows down the analog signal in time (or compresses its bandwidth) before it can be digitized by a standard electronic ADC.
Background
There is a huge demand for very high-speed analog-to-digital converters (ADCs), as they are needed for test and measurement equipment in laboratories and in high speed data communications systems. Most of the ADCs are based purely on electronic circuits, which have limited speeds and add a lot of impairments, limiting the bandwidth of the signals that can be digitized and the achievable signal-to-noise ratio. In the TS-ADC, this limitation is overcome by time-stretching the analog signal, which effectively slows down the signal in time prior to digitization. By doing so, the bandwidth (and carrier frequency) of the signal is compressed. Electronic ADCs that would have been too slow to digitize the original signal can now be used to capture and process this slowed down signal.
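As a hedged back-of-the-envelope illustration (the stretch factor, signal bandwidth and ADC rate below are hypothetical, not figures from the text): if the optical frontend stretches the signal in time by a factor M, the bandwidth presented to the electronic ADC is divided by M and the effective capture rate is M times the ADC's native rate.

```python
# Illustrative time-stretch arithmetic (hypothetical values).
stretch_factor = 50          # M: how much the optical frontend slows the signal
signal_bw_ghz = 100.0        # analog bandwidth of the original signal
adc_rate_gsps = 4.0          # sample rate of the backend electronic ADC

bw_after_stretch = signal_bw_ghz / stretch_factor   # bandwidth seen by the ADC
effective_rate = adc_rate_gsps * stretch_factor     # equivalent real-time sample rate

print(f"bandwidth after stretch: {bw_after_stretch:.1f} GHz")
print(f"effective sample rate:   {effective_rate:.0f} GS/s")
# The 2 GHz stretched signal is within reach of the 4 GS/s ADC (Nyquist),
# whereas the original 100 GHz signal was not.
```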
Operation principle
The time-stretch processor, which is generally an optical frontend, stretches the signal in time. It also divides the signal into multiple segments using a filter, for example, a wavelength-division multiplexing (WDM) filter, to ensure that the stretched replica of the original analog signal segments do not overlap each other in time after stretching. The time-stretched and slowed down signal segments are then converted into digital samples by slow electronic ADCs. Finally, these samples are collected by a digital signal processor (DSP) and rearranged in a manner such that output data is the
|
https://en.wikipedia.org/wiki/Moisture%20sensitivity%20level
|
Moisture sensitivity level (MSL) is a rating that shows a device's susceptibility to damage due to absorbed moisture when subjected to reflow soldering as defined in J-STD-020.
It relates to the packaging and handling precautions for some semiconductors. The MSL is an electronic standard for the time period in which a moisture sensitive device can be exposed to ambient room conditions (30 °C/85%RH at Level 1; 30 °C/60%RH at all other levels).
Increasingly, semiconductors have been manufactured in smaller sizes. Components such as thin fine-pitch devices and ball grid arrays could be damaged during SMT reflow when moisture trapped inside the component expands.
The expansion of trapped moisture can result in internal separation (delamination) of the plastic from the die or lead-frame, wire bond damage, die damage, and internal cracks. Most of this damage is not visible on the component surface. In extreme cases, cracks will extend to the component surface. In the most severe cases, the component will bulge and pop. This is known as the "popcorn" effect. This occurs when part temperature rises rapidly to a high maximum during the soldering (assembly) process. This does not occur when part temperature rises slowly and to a low maximum during a baking (preheating) process.
Moisture sensitive devices are packaged in a sealed moisture-barrier antistatic bag together with a desiccant and a moisture indicator card.
Moisture sensitivity levels are specified in technical standard IPC/JEDEC Moisture/reflow Sensitivity Classification for Nonhermetic Surface-Mount Devices. The times indicate how long components can be outside of dry storage before they have to be baked to remove any absorbed moisture.
MSL 6 – Mandatory bake before use
MSL 5A – 24 hours
MSL 5 – 48 hours
MSL 4 – 72 hours
MSL 3 – 168 hours
MSL 2A – 4 weeks
MSL 2 – 1 year
MSL 1 – Unlimited floor life
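For use in, for example, an inventory or kitting script, the floor-life table above can be encoded as a simple lookup. This is only a sketch of the levels as listed here; the authoritative limits are those in the standards cited above.

```python
# Floor life (time out of dry storage before baking is required), per the
# MSL classification listed above. None means unlimited floor life.
from datetime import timedelta

MSL_FLOOR_LIFE = {
    "1":  None,                  # unlimited floor life
    "2":  timedelta(weeks=52),   # roughly 1 year
    "2A": timedelta(weeks=4),
    "3":  timedelta(hours=168),
    "4":  timedelta(hours=72),
    "5":  timedelta(hours=48),
    "5A": timedelta(hours=24),
    "6":  timedelta(0),          # mandatory bake before use
}

def needs_bake(msl_level, hours_exposed):
    """Return True if a part at the given MSL has exceeded its floor life."""
    limit = MSL_FLOOR_LIFE[msl_level]
    if limit is None:            # MSL 1: never exceeds floor life
        return False
    return timedelta(hours=hours_exposed) > limit

print(needs_bake("3", 200))   # True: 200 h exceeds the 168 h floor life of MSL 3
```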
Practical
MSL-specified parts must be baked before assembly if their exposure has exceeded the r
|
https://en.wikipedia.org/wiki/Location%20arithmetic
|
Location arithmetic (Latin arithmeticae localis) is an additive (non-positional) binary numeral system, which John Napier explored as a computation technique in his treatise Rabdology (1617), both symbolically and on a chessboard-like grid.
Napier's terminology, derived from using the positions of counters on the board to represent numbers, is potentially misleading because the numbering system is, in fact, non-positional in current vocabulary.
During Napier's time, most of the computations were made on boards with tally-marks or jetons. So, unlike how it may be seen by the modern reader, his goal was not to use moves of counters on a board to multiply, divide and find square roots, but rather to find a way to compute symbolically with pen and paper.
However, when reproduced on the board, this new technique did not require mental trial-and-error computations nor complex carry memorization (unlike base 10 computations). He was so pleased by his discovery that he said in his preface:
Location numerals
Binary notation had not yet been standardized, so Napier used what he called location numerals to represent binary numbers. Napier's system uses sign-value notation to represent numbers; it uses successive letters from the Latin alphabet to represent successive powers of two: a = 2^0 = 1, b = 2^1 = 2, c = 2^2 = 4, d = 2^3 = 8, e = 2^4 = 16 and so on.
To represent a given number as a location numeral, that number is expressed as a sum of powers of two and then each power of two is replaced by its corresponding digit (letter). For example, when converting from a decimal numeral:
87 = 1 + 2 + 4 + 16 + 64 = 2^0 + 2^1 + 2^2 + 2^4 + 2^6 = abceg
Using the reverse process, a location numeral can be converted to another numeral system. For example, when converting to a decimal numeral:
abdgkl = 2^0 + 2^1 + 2^3 + 2^6 + 2^10 + 2^11 = 1 + 2 + 8 + 64 + 1024 + 2048 = 3147
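These conversions are easy to mechanize. The Python sketch below encodes a positive integer as a location numeral and decodes one back, reproducing the two worked examples above.

```python
import string

LETTERS = string.ascii_lowercase   # a = 2^0, b = 2^1, c = 2^2, ... (26 letters, so n < 2^26)

def to_location(n):
    """Encode a positive integer as a Napier location numeral."""
    digits = []
    for i, letter in enumerate(LETTERS):
        if n >> i & 1:             # is the 2^i bit set?
            digits.append(letter)
    return "".join(digits)

def from_location(numeral):
    """Decode a location numeral back to an integer."""
    return sum(2 ** (ord(ch) - ord("a")) for ch in numeral)

print(to_location(87))            # 'abceg'  (1 + 2 + 4 + 16 + 64)
print(from_location("abdgkl"))    # 3147    (1 + 2 + 8 + 64 + 1024 + 2048)
```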
Napier showed multiple methods of converting numbers in and out of his numeral system. These methods are similar t
|
https://en.wikipedia.org/wiki/Network%20Coordinate%20System
|
A Network Coordinate System (NC system) is a system for predicting characteristics such as the latency or bandwidth of connections between nodes in a network by assigning coordinates to nodes. More formally, it assigns a coordinate embedding to each node in a network using an optimization algorithm, such that a predefined operation on the coordinates of two nodes i and j estimates some directional characteristic of the connection between them.
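As a minimal sketch of the idea, in the spirit of Euclidean-embedding NC systems (such as Vivaldi) but not an implementation of any particular one, the Python code below predicts the latency between two nodes as the distance between their assigned coordinates; the node names and coordinate values are hypothetical.

```python
import math

# Hypothetical 2-D coordinates in "latency space" (units of milliseconds),
# assumed to have been produced by some optimization over measured round-trip times.
coords = {
    "tokyo":     (0.0, 0.0),
    "frankfurt": (210.0, 30.0),
    "new_york":  (150.0, -60.0),
}

def predicted_latency_ms(a, b):
    """Predict latency as the Euclidean distance between node coordinates."""
    return math.dist(coords[a], coords[b])

# Pick the "closest" node to tokyo without measuring every pair directly.
others = [n for n in coords if n != "tokyo"]
print(min(others, key=lambda n: predicted_latency_ms("tokyo", n)))  # new_york
```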
Uses
In general, Network Coordinate Systems can be used for peer discovery, optimal-server selection, and characteristic-aware routing.
Latency Optimization
When optimizing for latency as a connection characteristic, i.e. for low-latency connections, NC systems can potentially help improve the quality of experience for many different applications such as:
Online Games
Forming game groups such that all the players are close to each other and thus have a smoother overall experience.
Choosing servers as close to as many players in a given multiplayer game as possible.
Automatically routing game packets through different servers so as to minimize the total latency between players who are actively interacting with each other in the game map.
Content delivery networks
Directing a user to the closest server that can handle a request to minimize latency.
Voice over IP
Automatically switching relay servers based on who is talking in a few-to-many or many-to-many voice chat to minimize latency between active participants.
Peer-to-peer networks
Using the latency-predicting properties of NC systems to perform a wide variety of routing optimizations in peer-to-peer networks.
Onion routing networks
Choosing relays so as to minimize the total round-trip delay, allowing for a more flexible tradeoff between performance and anonymity.
Physical positioning
Latency correlates with the physical distances between computers in the real world. Thus, NC systems that model latency may be able to aid in locating the approximate physical area a computer resides in.
Bandwid
|
https://en.wikipedia.org/wiki/Radio%20access%20technology
|
A radio access technology (RAT) is the underlying physical connection method for a radio communication network. Many modern mobile phones support several RATs in one device such as Bluetooth, Wi-Fi, and GSM, UMTS, LTE or 5G NR.
The term RAT was traditionally used in mobile communication network interoperability.
More recently, the term RAT is used in discussions of heterogeneous wireless networks. The term is used when a user device selects between the type of RAT being used to connect to the Internet. This is often performed similar to access point selection in IEEE 802.11 (Wi-Fi) based networks.
Inter-RAT (IRAT) handover
A mobile terminal, while connected using a RAT, performs neighbour cell measurements and sends a measurement report to the network. Based on this measurement report provided by the mobile terminal, the network can initiate handover from one RAT to another, e.g. from WCDMA to GSM or vice versa. Once the handover to the new RAT is completed, the channels used by the previous RAT are released.
See also
Radio access network (RAN)
|
https://en.wikipedia.org/wiki/P4%20%28programming%20language%29
|
P4 is a programming language for controlling packet forwarding planes in networking devices, such as routers and switches. In contrast to a general purpose language such as C or Python, P4 is a domain-specific language with a number of constructs optimized for network data forwarding. P4 is distributed as open-source, permissively licensed code, and is maintained by the P4 Project (formerly the P4 Language Consortium), a not-for-profit organization hosted by the Open Networking Foundation.
History
P4 was originally described in a 2014 SIGCOMM CCR paper titled “Programming Protocol-Independent Packet Processors”—the alliterative name shortens to "P4". The first P4 workshop took place in June 2015 at Stanford University. An updated specification of P4, called P4-16, was released between 2016 and 2017, replacing P4-14, the original specification of P4.
Design
As the language is specifically targeted at packet forwarding applications, the list of requirements or design choices is somewhat specific to those use cases. The language is designed to meet several goals:
Target independence
P4 programs are designed to be implementation-independent: they can be compiled against many different types of execution machines such as general-purpose CPUs, FPGAs, system(s)-on-chip, network processors, and ASICs. These different types of machines are known as P4 targets, and each target must be provided along with a compiler that maps the P4 source code into a target switch model. The compiler may be embedded in the target device, an externally running software, or even a cloud service. As many of the initial targets for P4 programs were used for simple packet switching it is very common to hear the term "P4 switch" used, even though "P4 target" is more formally correct.
Protocol independence
P4 is designed to be protocol-independent: the language has no native support for even common protocols such as IP, Ethernet, TCP, VxLAN, or MPLS. Instead, the P4 programmer describes the
|
https://en.wikipedia.org/wiki/Standard%20Commands%20for%20Programmable%20Instruments
|
The Standard Commands for Programmable Instruments (SCPI; often pronounced "skippy") defines a standard for syntax and commands to use in controlling programmable test and measurement devices, such as automatic test equipment and electronic test equipment.
Overview
SCPI was defined as an additional layer on top of the specification "Standard Codes, Formats, Protocols, and Common Commands". The standard specifies a common syntax, command structure, and data formats, to be used with all instruments. It introduced generic commands (such as CONFigure and MEASure) that could be used with any instrument. These commands are grouped into subsystems. SCPI also defines several classes of instruments. For example, any controllable power supply would implement the same DCPSUPPLY base functionality class. Instrument classes specify which subsystems they implement, as well as any instrument-specific features.
The physical hardware communications link is not defined by SCPI. While it was originally created for the IEEE-488.1 (GPIB) bus, SCPI can also be used with RS-232, RS-422, Ethernet, USB, VXIbus, HiSLIP, etc.
SCPI commands are ASCII textual strings, which are sent to the instrument over the physical layer (e.g., IEEE-488.1). Commands are a series of one or more keywords, many of which take parameters. In the specification, keywords are written with a mixture of upper and lower case, e.g. CONFigure: the entire keyword can be used, or it can be abbreviated to just the uppercase portion. Responses to query commands are typically ASCII strings. However, for bulk data, binary formats can be used.
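As a hedged example of what these textual commands look like from a controller's point of view, the Python snippet below uses the PyVISA library; the resource address and the assumption that the instrument accepts these particular commands (typical of a digital multimeter) are illustrative only.

```python
import pyvisa

rm = pyvisa.ResourceManager()
# Hypothetical GPIB address; a real instrument's address will differ.
inst = rm.open_resource("GPIB0::22::INSTR")

print(inst.query("*IDN?"))            # IEEE-488.2 common command: identify the instrument
inst.write("CONF:VOLT:DC 10, 0.001")  # abbreviated form of CONFigure:VOLTage:DC
print(inst.query("READ?"))            # trigger a measurement and read the result
```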
The SCPI specification consists of four volumes: Volume 1: "Syntax and Style", Volume 2: "Command Reference", Volume 3: "Data Interchange Format", Volume 4: "Instrument Classes". The specification was originally released as non-free printed manuals, then later as a free PDF file.
SCPI history
First released in 1990, SCPI originated as an additional layer for IEEE-488. IEEE-488.1 specified the physical and electrical bus, and
|
https://en.wikipedia.org/wiki/Log%20management
|
Log management (LM) comprises an approach to dealing with large volumes of computer-generated log messages (also known as audit records, audit trails, event-logs, etc.).
Log management generally covers:
Log collection
Centralized log aggregation
Long-term log storage and retention
Log rotation
Log analysis (in real-time and in bulk after storage)
Log search and reporting.
Overview
The primary drivers for log management implementations are concerns about security, system and network operations (such as system or network administration) and regulatory compliance. Logs are generated by nearly every computing device, and can often be directed to different locations both on a local file system or remote system.
Effectively analyzing large volumes of diverse logs can pose many challenges, such as:
Volume: log data can reach hundreds of gigabytes of data per day for a large organization. Simply collecting, centralizing and storing data at this volume can be challenging.
Normalization: logs are produced in multiple formats. The process of normalization is designed to provide a common output for analysis from diverse sources (illustrated in the sketch after this list).
Velocity: The speed at which logs are produced from devices can make collection and aggregation difficult.
Veracity: Log events may not be accurate. This is especially problematic for systems that perform detection, such as intrusion detection systems.
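A minimal sketch of the normalization challenge mentioned above, assuming two hypothetical input formats (an Apache-style access line and a key=value line) that are mapped onto one common record shape:

```python
import re

# Two hypothetical raw log lines in different formats.
apache_line = '192.0.2.10 - - [21/May/2024:13:55:36 +0000] "GET /index.html HTTP/1.1" 200 2326'
kv_line = 'ts=2024-05-21T13:55:40Z src=192.0.2.11 action=login result=failure'

def normalize(line):
    """Map differently formatted log lines onto one common record shape."""
    m = re.match(r'(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+)', line)
    if m:
        return {"source_ip": m.group("ip"), "timestamp": m.group("ts"),
                "event": m.group("req"), "outcome": m.group("status")}
    fields = dict(part.split("=", 1) for part in line.split())
    return {"source_ip": fields.get("src"), "timestamp": fields.get("ts"),
            "event": fields.get("action"), "outcome": fields.get("result")}

for raw in (apache_line, kv_line):
    print(normalize(raw))
```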
Users and potential users of log management may purchase complete commercial tools or build their own log-management and intelligence tools, assembling the functionality from various open-source components, or acquire (sub-)systems from commercial vendors. Log management is a complicated process and organizations often make mistakes while approaching it.
Logging can produce technical information usable for the maintenance of applications or websites. It can serve:
to define whether a reported bug is actually a bug
to help analyze, reproduce and solve bugs
to help test new features i
|
https://en.wikipedia.org/wiki/Symbols%20of%20grouping
|
In mathematics and related subjects, understanding a mathematical expression depends on an understanding of symbols of grouping, such as parentheses (), brackets [], and braces {}. These same symbols are also used in ways where they are not symbols of grouping. For example, in the expression 3(x+y) the parentheses are symbols of grouping, but in the expression (3, 5) the parentheses may indicate an open interval.
The most common symbols of grouping are the parentheses and the brackets, and the brackets are usually used to avoid too many repeated parentheses. For example, to indicate the product of binomials, parentheses are usually used, thus: (x + 2)(x + 3). But if one of the binomials itself contains parentheses, as in (2(x + y) + 3), one or more pairs of parentheses may be replaced by brackets, thus: [2(x + y) + 3](x + 1). Beyond elementary mathematics, brackets are mostly used for other purposes, e.g. to denote a closed interval, or an equivalence class, so they appear rarely for grouping.
The usage of the word "parentheses" varies from country to country. In the United States, the word parentheses (singular "parenthesis") is used for the curved symbol of grouping, but in many other countries the curved symbol of grouping is called a "bracket" and the symbol of grouping with two right angles joined is called a "square bracket".
The symbol of grouping known as "braces" has two major uses. If two of these symbols are used, one on the left and the mirror image of it on the right, it almost always indicates a set, as in {a, b, c}, the set containing the three members a, b, and c. But if it is used only on the left, it groups two or more simultaneous equations.
There are other symbols of grouping. One is the bar above an expression, as in the square root sign, in which the bar is a symbol of grouping: for example, √(x + y) is the square root of the sum. The bar is also a symbol of grouping in repeated decimal digits. A decimal point followed by one or more digits with a bar over them, for example 0.1 with a bar over the 1, represents the repeating decimal 0.1
|
https://en.wikipedia.org/wiki/Gauss%27s%20Pythagorean%20right%20triangle%20proposal
|
Gauss's Pythagorean right triangle proposal is an idea attributed to Carl Friedrich Gauss for a method to signal extraterrestrial beings by constructing an immense right triangle and three squares on the surface of the Earth. The shapes would be a symbolic representation of the Pythagorean theorem, large enough to be seen from the Moon or Mars.
Although credited in numerous sources as originating with Gauss, with exact details of the proposal set out, the specificity of detail, and even whether Gauss made the proposal, have been called into question. Many of the earliest sources do not actually name Gauss as the originator, instead crediting a "German astronomer" or using other nonspecific descriptors, and in some cases naming a different author entirely. The details of the proposal also change significantly upon different retellings. Nevertheless, Gauss's writings reveal a belief and interest in finding a method to contact extraterrestrial life, and that he did, at the least, propose using amplified light using a heliotrope, his own 1818 invention, to signal supposed inhabitants of the Moon.
Proposal
Carl Friedrich Gauss is credited with an 1820 proposal for a method to signal extraterrestrial beings in the form of drawing an immense right triangle and three squares on the surface of the Earth, intended as a symbolical representation of the Pythagorean theorem, large enough to be seen from the Moon or Mars. Details vary between sources, but typically the "drawing" was to be constructed on the Siberian tundra, and made up of vast strips of pine forest forming the right triangle's borders, with the interior of the drawing and exterior squares composed of fields of wheat. Gauss is said to have been convinced that Mars harbored intelligent life and that this geometric figure, invoking the Pythagorean theorem through the squares on the outside borders (sometimes called a "windmill diagram", as originated by Euclid), would demonstrate to such alien observers the recipr
|
https://en.wikipedia.org/wiki/Navigation%20mesh
|
A navigation mesh, or navmesh, is an abstract data structure used in artificial intelligence applications to aid agents in pathfinding through complicated spaces. This approach has been known since at least the mid-1980s in robotics, where it has been called a meadow map, and was popularized in video game AI in 2000.
Description
A navigation mesh is a collection of two-dimensional convex polygons (a polygon mesh) that define which areas of an environment are traversable by agents. In other words, a character in a game could freely walk around within these areas unobstructed by trees, lava, or other barriers that are part of the environment. Adjacent polygons are connected to each other in a graph.
Pathfinding within one of these polygons can be done trivially in a straight line because the polygon is convex and traversable. Pathfinding between polygons in the mesh can be done with one of the large number of graph search algorithms, such as A*. Agents on a navmesh can thus avoid computationally expensive collision detection checks with obstacles that are part of the environment.
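A hedged sketch of this second step, assuming the navmesh has already been reduced to a graph whose nodes are polygon centers and whose edges connect adjacent polygons; the A* implementation below is generic (not tied to any engine) and the example mesh is hypothetical.

```python
import heapq, math

# Hypothetical navmesh abstraction: polygon centers and adjacency between polygons.
centers = {0: (0, 0), 1: (4, 0), 2: (4, 3), 3: (8, 3)}
adjacent = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}

def dist(a, b):
    return math.dist(centers[a], centers[b])

def astar(start, goal):
    """A* over the polygon adjacency graph, using straight-line distance as heuristic."""
    open_set = [(dist(start, goal), 0.0, start, [start])]   # (f, g, node, path)
    best_cost = {start: 0.0}
    while open_set:
        _, cost, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        for nxt in adjacent[node]:
            new_cost = cost + dist(node, nxt)
            if new_cost < best_cost.get(nxt, math.inf):
                best_cost[nxt] = new_cost
                heapq.heappush(open_set, (new_cost + dist(nxt, goal), new_cost, nxt, path + [nxt]))
    return None

print(astar(0, 3))   # [0, 1, 2, 3]: the sequence of polygons to traverse
```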
Representing traversable areas in a 2D-like form simplifies calculations that would otherwise need to be done in the "true" 3D environment, yet unlike a 2D grid it allows traversable areas that overlap above and below at different heights. The polygons of various sizes and shapes in navigation meshes can represent arbitrary environments with greater accuracy than regular grids can.
Creation
Navigation meshes can be created manually, automatically, or by some combination of the two. In video games, a level designer might manually define the polygons of the navmesh in a level editor. This approach can be quite labor intensive. Alternatively, an application could be created that takes the level geometry as input and automatically outputs a navmesh.
It is commonly assumed that the environment represented by a navmesh is static – it does not change over time – and thus the navmesh can be crea
|
https://en.wikipedia.org/wiki/Photoperiodism
|
Photoperiodism is the physiological reaction of organisms to the length of night or a dark period. It occurs in plants and animals. Plant photoperiodism can also be defined as the developmental responses of plants to the relative lengths of light and dark periods. They are classified under three groups according to the photoperiods: short-day plants, long-day plants, and day-neutral plants.
In animals photoperiodism (sometimes called seasonality) is the suite of physiological changes that occur in response to changes in day length. This allows animals to respond to a temporally changing environment associated with changing seasons as the earth orbits the sun.
Plants
Many flowering plants (angiosperms) use a circadian rhythm together with photoreceptor protein, such as phytochrome or cryptochrome, to sense seasonal changes in night length, or photoperiod, which they take as signals to flower. In a further subdivision, obligate photoperiodic plants absolutely require a long or short enough night before flowering, whereas facultative photoperiodic plants are more likely to flower under one condition.
Phytochrome comes in two forms: Pr and Pfr. Red light (which is present during the day) converts phytochrome to its active form (Pfr) which then stimulates various processes such as germination, flowering or branching. In comparison, plants receive more far-red in the shade, and this converts phytochrome from Pfr to its inactive form, Pr, inhibiting germination. This system of Pfr to Pr conversion allows the plant to sense when it is night and when it is day. Pfr can also be converted back to Pr by a process known as dark reversion, where long periods of darkness trigger the conversion of Pfr. This is important in regards to plant flowering. Experiments by Halliday et al. showed that manipulations of the red-to far-red ratio in Arabidopsis can alter flowering. They discovered that plants tend to flower later when exposed to more red light, proving that red light i
|
https://en.wikipedia.org/wiki/Bipolar%20transistor%20biasing
|
Bipolar transistors must be properly biased to operate correctly. In circuits made with individual devices (discrete circuits), biasing networks consisting of resistors are commonly employed. Much more elaborate biasing arrangements are used in integrated circuits, for example, bandgap voltage references and current mirrors. The voltage divider configuration achieves the correct voltages by the use of resistors in certain patterns. By selecting the proper resistor values, stable current levels can be achieved that vary only slightly with temperature and with transistor properties such as β.
The operating point of a device, also known as bias point, quiescent point, or Q-point, is the point on the output characteristics that shows the DC collector–emitter voltage (Vce) and the collector current (Ic) with no input signal applied.
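As a hedged numerical illustration of how a voltage-divider network sets the Q-point, the Python sketch below computes the quiescent collector current and collector-emitter voltage for an assumed set of component values; the values and the Vbe ≈ 0.7 V approximation are assumptions, not taken from the text.

```python
# Q-point of a voltage-divider biased NPN stage (all values hypothetical).
VCC  = 12.0        # supply voltage, V
R1   = 47e3        # upper divider resistor, ohms
R2   = 10e3        # lower divider resistor, ohms
RC   = 2.2e3       # collector resistor, ohms
RE   = 560.0       # emitter resistor, ohms
BETA = 150         # transistor current gain (varies widely between parts)
VBE  = 0.7         # base-emitter drop, V (approximation)

# Thevenin equivalent of the divider as seen from the base.
v_th = VCC * R2 / (R1 + R2)
r_th = R1 * R2 / (R1 + R2)

i_b = (v_th - VBE) / (r_th + (BETA + 1) * RE)   # base current
i_c = BETA * i_b                                 # collector current
v_ce = VCC - i_c * RC - (i_b + i_c) * RE         # collector-emitter voltage

print(f"Ic  = {i_c * 1e3:.2f} mA")   # roughly 2.3 mA with these values
print(f"Vce = {v_ce:.2f} V")         # roughly 5.7 V, near the middle of the load line
```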
Bias circuit requirements
A bias network is selected to stabilize the operating point of the transistor, by reducing the following effects of device variability, temperature, and voltage changes:
The gain of a transistor can vary significantly between different batches, which results in widely different operating points for sequential units in serial production or after replacement of a transistor.
Due to the Early effect, the current gain is affected by the collector–emitter voltage.
Both gain and base–emitter voltage depend on the temperature.
The leakage current also increases with temperature.
A bias circuit may be composed of only resistors, or may include elements such as temperature-dependent resistors, diodes, or additional voltage sources, depending on the range of operating conditions expected.
Signal requirements
For analog operation of a class-A amplifier, the Q-point is placed so the transistor stays in active mode (does not shift to operation in the saturation region or cut-off region) across the input signal's range. Often, the Q-point is established near the center of the active region of a transistor characteristic t
|
https://en.wikipedia.org/wiki/Viridiplantae
|
Viridiplantae (literally "green plants") constitute a clade of eukaryotic organisms that comprises approximately 450,000–500,000 species that play important roles in both terrestrial and aquatic ecosystems. They include the green algae, which are primarily aquatic, and the land plants (embryophytes), which emerged from within them. Green algae traditionally excludes the land plants, rendering them a paraphyletic group. However it is accurate to think of land plants as a kind of alga. Since the realization that the embryophytes emerged from within the green algae, some authors are starting to include them. They have cells with cellulose in their cell walls, and primary chloroplasts derived from endosymbiosis with cyanobacteria that contain chlorophylls a and b and lack phycobilins. Corroborating this, a basal phagotroph archaeplastida group has been found in the Rhodelphydia.
In some classification systems, the group has been treated as a kingdom, under various names, e.g. Viridiplantae, Chlorobionta, or simply Plantae, the latter expanding the traditional plant kingdom to include the green algae. Adl et al., who produced a classification for all eukaryotes in 2005, introduced the name Chloroplastida for this group, reflecting the group having primary chloroplasts with green chlorophyll. They rejected the name Viridiplantae on the grounds that some of the species are not plants, as understood traditionally. The Viridiplantae are made up of two clades: Chlorophyta and Streptophyta as well as the basal Mesostigmatophyceae and Chlorokybophyceae. Together with Rhodophyta and glaucophytes, Viridiplantae are thought to belong to a larger clade called Archaeplastida or Primoplantae.
Phylogeny and classification
Simplified phylogeny of the Viridiplantae, according to Leliaert et al. 2012.
Viridiplantae
Chlorophyta
core chlorophytes
Ulvophyceae
Cladophorales
Dasycladales
Bryopsidales
Trentepohliales
Ulvales-Ulotrichales
Oltmannsiellopsidales
Chlorophyceae
Oedogoniales
Chae
|
https://en.wikipedia.org/wiki/Roshd%20Biological%20Education
|
Roshd Biological Education is a quarterly science educational magazine covering recent developments in biology and biology education for a Persian-speaking audience of biology teachers. Founded in 1985, it is published by The Teaching Aids Publication Bureau, Organization for Educational Planning and Research, Ministry of Education, Iran. Roshd Biological Education has an editorial board composed of Iranian biologists, experts in biology education, science journalists and biology teachers.
It is read by both biology teachers and students, as a way of launching innovations and new trends in biology education, and helping biology teachers to teach biology in better and more effective ways.
Magazine layout
As of Autumn 2012, the magazine is laid out as follows:
Editorial—often offering a view of point from editor in chief on an educational and/or biological topics.
Explore— New research methods and results on biology and/or education.
World— Reports and explorations of biology education worldwide.
In Brief—Summaries of research news and discoveries.
Trends—showing how new technology is altering the way we live our lives.
Point of View—Offering personal commentaries on contemporary topics.
Essay or Interview—often with a pioneering biological and/or educational researcher or an influential science education leader.
Muslim Biologists—Short histories of Muslim Biologists.
Environment—An article on Iranian environment and its problems.
News and Reports—Offering short news and reports events on biology education.
In Brief—Short articles explaining interesting facts.
Questions and Answers—Questions about biology concepts and their answers.
Book and periodical Reviews—About new publication on biology and/or education.
Reactions—Letter to the editors.
Editorial staff
Mohammad Karamudini, editor in chief
History
Roshd Biological Education started in 1985 together with many other magazines in other sciences and arts. The first editor was Dr. Nouri-Dalooi, th
|
https://en.wikipedia.org/wiki/Bletting
|
Bletting is a process of softening that certain fleshy fruits undergo, beyond ripening. There are some fruits that are either sweeter after some bletting, such as sea buckthorn, or for which most varieties can be eaten raw only after bletting, such as medlars, persimmons, quince, service tree fruit, and wild service tree fruit (popularly known as chequers). The rowan or mountain ash fruit must be bletted and cooked to be edible, to break down the toxic parasorbic acid (hexenollactone) into sorbic acid.
History
The English verb to blet was coined by John Lindley, in his Introduction to Botany (1835). He derived it from the French poire blette meaning 'overripe pear'. "After the period of ripeness", he wrote, "most fleshy fruits undergo a new kind of alteration; their flesh either rots or blets."
In Shakespeare's Measure for Measure, he alluded to bletting when he wrote (IV. iii. 167) "They would have married me to the rotten Medler." Thomas Dekker also draws a similar comparison in his play The Honest Whore: "I scarce know her, for the beauty of her cheek hath, like the moon, suffered strange eclipses since I beheld it: women are like medlars – no sooner ripe but rotten." Elsewhere in literature, D. H. Lawrence dubbed medlars "wineskins of brown morbidity."
There is also an old saying, used in Don Quixote, that "time and straw make medlars ripe", referring to the bletting process.
Process
Chemically speaking, bletting brings about an increase in sugars and a decrease in the acids and tannins that make the unripe fruit astringent.
Ripe medlars, for example, are taken from the tree, placed somewhere cool, and allowed to further ripen for several weeks. In Trees and Shrubs, horticulturist F. A. Bush wrote about medlars that "if the fruit is wanted it should be left on the tree until late October and stored until it appears in the first stages of decay; then it is ready for eating. More often the fruit is used for making jelly." Ideally, the fruit should be harve
|
https://en.wikipedia.org/wiki/Gurzadyan%20theorem
|
In cosmology, the Gurzadyan theorem, proved by Vahe Gurzadyan, states the most general functional form of the force that satisfies the condition of identity between the gravity of a sphere and that of a point mass located at the sphere's center. The theorem thus refers to the first statement of Isaac Newton's shell theorem (the identity mentioned above) but not the second one, namely, the absence of gravitational force inside a shell.
The theorem has been included, for example, in a physics manual website, and its importance for cosmology has been outlined in several papers as well as in the article on the shell theorem.
The formula and the cosmological constant
The formula for the force derived there has the form

$F(r) = -\dfrac{A}{r^{2}} + \Lambda r$,

where $A$ and $\Lambda$ are constants. The first term is the familiar law of universal gravitation; the second one corresponds to the cosmological constant term in general relativity and McCrea–Milne cosmology.
The field is then force-free only at the center of a shell, but the confinement (oscillator) term does not change the initial symmetry of the Newtonian field. This field is also the only one that shares the characteristic property of the Newtonian field: the closing of orbits at any negative value of the energy, i.e. the coincidence of the period of variation of the magnitude of the radius vector with that of its revolution by 2π (the resonance principle).
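As a short illustrative check — not part of the article text, and assuming the linear term acts pairwise in proportion to the source mass element and the separation vector — the following LaTeX sketch verifies that the oscillator term, like the Newtonian term, satisfies the sphere–point identity:

% Illustrative sketch (assumption stated above): force of a uniform ball of mass M,
% centred at the origin, on a unit test mass at position \mathbf{r}, from the linear term alone.
\begin{align}
  \mathbf{F}_{\mathrm{lin}}(\mathbf{r})
    = \Lambda \int_{\mathrm{ball}} (\mathbf{r} - \mathbf{s})\, dm(\mathbf{s})
    = \Lambda \left( M\mathbf{r} - M\,\mathbf{r}_{\mathrm{cm}} \right)
    = \Lambda M \mathbf{r}, \qquad \mathbf{r}_{\mathrm{cm}} = \mathbf{0},
\end{align}
% which is exactly the linear-term force of a point mass M placed at the centre.
% Together with the classical shell theorem for the inverse-square part, the combined
% force $-AM/r^{2}\,\hat{\mathbf{r}} + \Lambda M\,\mathbf{r}$ has the sphere--point property.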
Consequences: cosmological constant as a physical constant
Einstein called the cosmological constant a universal constant, introducing it to define a static cosmological model.
From this theorem the cosmological constant emerges as an additional constant of gravity along with Newton's gravitational constant G. The cosmological constant is then dimension-independent and matter-uncoupled, and hence can be considered even more universal than Newton's gravitational constant.
For Λ joining the set of fundamental constants — Newton's gravitational constant G, the speed of light c and the Planck constant ħ — a dimensionless quantity emerges for the 4-consta
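The excerpt is truncated before the quantity is given, so the following dimensional check is offered only as an illustration (the combination shown is an assumption, not a claim about the source): with Λ carrying its general-relativistic dimension of inverse length squared, one dimensionless combination of the four constants is

% Illustrative dimensional analysis (not from the source text)
\begin{align}
  \left[\frac{\hbar G}{c^{3}}\right] = \mathrm{length}^{2},
  \qquad
  [\Lambda] = \mathrm{length}^{-2}
  \quad\Longrightarrow\quad
  \left[\frac{\hbar G \Lambda}{c^{3}}\right] = 1 ,
\end{align}
% i.e. \hbar G \Lambda / c^{3} — the cosmological constant measured in units of the
% squared Planck length — is dimensionless.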
|
https://en.wikipedia.org/wiki/Bracket%20%28mathematics%29
|
In mathematics, brackets of various typographical forms, such as parentheses ( ), square brackets [ ], braces { } and angle brackets ⟨ ⟩, are frequently used in mathematical notation. Generally, such bracketing denotes some form of grouping: in evaluating an expression containing a bracketed sub-expression, the operators in the sub-expression take precedence over those surrounding it. Sometimes, for the clarity of reading, different kinds of brackets are used to express the same meaning of precedence in a single expression with deep nesting of sub-expressions.
Historically, other notations, such as the vinculum, were similarly used for grouping. In present-day use, these notations all have specific meanings. The earliest use of brackets to indicate aggregation (i.e. grouping) was suggested in 1608 by Christopher Clavius, and in 1629 by Albert Girard.
Symbols for representing angle brackets
A variety of different symbols are used to represent angle brackets. In e-mail and other ASCII text, it is common to use the less-than (<) and greater-than (>) signs to represent angle brackets, because ASCII does not include angle brackets.
Unicode has pairs of dedicated characters; other than less-than and greater-than symbols, these include:
and
and
and
and
U+2329 and U+232A, which are deprecated
In LaTeX the markup is \langle and \rangle: ⟨ and ⟩.
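For example, a minimal snippet (illustrative only) showing this markup in math mode:

% Angle brackets with \langle / \rangle in LaTeX math mode
$\langle x, y \rangle$                    % fixed-size angle brackets
$\left\langle \frac{a}{b} \right\rangle$  % automatically sized with \left ... \right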
Non-mathematical angled brackets include:
and , used in East-Asian text quotation
and , which are dingbats
There are additional dingbats with increased line thickness, and some angle quotation marks and deprecated characters.
Algebra
In elementary algebra, parentheses ( ) are used to specify the order of operations. Terms inside the bracket are evaluated first; hence 2×(3 + 4) is 14, 20 ÷ (5(1 + 1)) is 2 and (2×3) + 4 is 10. This notation is extended to cover more general algebra involving variables: for example, (x + y) × (x − y). Square brackets are also often used in place of a second set of parentheses when they are nested—so as to provide a v
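As a quick illustration (not part of the article, and noting that the middle expression above, 20 ÷ (5(1 + 1)), is reconstructed from context), a short Python check of these evaluations:

# Parentheses change the order of evaluation.
print(2 * (3 + 4))         # 14: the bracketed sum is evaluated first
print(2 * 3 + 4)           # 10: without brackets, multiplication binds more tightly
print(20 / (5 * (1 + 1)))  # 2.0: nested brackets are evaluated innermost-first
print((2 * 3) + 4)         # 10: these brackets are redundant but harmless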
|
https://en.wikipedia.org/wiki/Frenetic%20%28programming%20language%29
|
Frenetic is a domain-specific language for programming software-defined networking (SDN). This domain-specific programming language allows network operators to program the network as a whole, rather than manually configuring each connected network device. Frenetic is designed to solve major OpenFlow/NOX programming problems. In particular, Frenetic introduces a set of purely functional abstractions that enable modular program development, defines high-level, programmer-centric packet-processing operators, and eliminates many of the difficulties of the two-tier programming model by introducing a see-every-packet programming paradigm. Hence Frenetic is a functional reactive programming language operating at a packet level of abstraction.
|
https://en.wikipedia.org/wiki/Unit%20of%20work
|
A unit of work is a behavioral pattern in software development. Martin Fowler has defined it as everything one does during a business transaction which can affect the database. When the unit of work is finished it will provide everything that needs to be done to change the database as a result of the work.
A unit of work encapsulates one or more repositories and a list of actions to be performed which are necessary for the successful implementation of a self-contained and consistent data change. A unit of work is also responsible for handling concurrency issues, and can be used for transactions and stability patterns.
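A minimal sketch of the pattern in Python is shown below; the InMemoryDatabase class and the register_* method names are hypothetical stand-ins chosen for illustration, not the API of any particular framework.

# Minimal sketch of the Unit of Work pattern (illustrative, framework-agnostic).
class InMemoryDatabase:
    """Hypothetical stand-in for a real database connection."""
    def __init__(self):
        self.rows = {}

    def insert(self, obj):
        self.rows[id(obj)] = obj

    def update(self, obj):
        self.rows[id(obj)] = obj

    def delete(self, obj):
        self.rows.pop(id(obj), None)


class UnitOfWork:
    """Tracks objects touched during a business transaction and flushes
    all pending changes to the database in a single commit."""
    def __init__(self, db):
        self.db = db
        self.new, self.dirty, self.removed = [], [], []

    def register_new(self, obj):
        self.new.append(obj)

    def register_dirty(self, obj):
        if obj not in self.new and obj not in self.dirty:
            self.dirty.append(obj)

    def register_removed(self, obj):
        self.removed.append(obj)

    def commit(self):
        # In a real implementation this block would run inside one
        # database transaction and handle concurrency conflicts.
        for obj in self.new:
            self.db.insert(obj)
        for obj in self.dirty:
            self.db.update(obj)
        for obj in self.removed:
            self.db.delete(obj)
        self.new, self.dirty, self.removed = [], [], []


# Usage example
db = InMemoryDatabase()
uow = UnitOfWork(db)
order = {"id": 1, "status": "new"}
uow.register_new(order)
order["status"] = "paid"
uow.register_dirty(order)   # no effect on the database until commit()
uow.commit()                # all pending changes are applied together

The key design point is that registering an object has no immediate effect; all inserts, updates and deletes are deferred and applied together when commit() is called, which is what lets the unit of work coordinate a single database transaction.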
See also
ACID (atomicity, consistency, isolation, durability), a set of properties of database transactions
Database transaction, a unit of work within a database management system
Equi-join, a type of join where only equal signs are used in the join predicate
Lossless join decomposition, decomposition of a relation such that a natural join of the resulting relations yields back the original relation
|