source | text |
---|---|
https://en.wikipedia.org/wiki/Perpetual%20beta
|
Perpetual beta is the keeping of software or a system at the beta development stage for an extended or indefinite period of time. It is often used by developers when they continue to release new features that might not be fully tested. Perpetual beta software is not recommended for mission-critical machines. However, many teams running operational systems find it to be a much more rapid and agile approach to development, staging, and deployment.
Definition
Perpetual beta has come to be associated with the development and release of a service in which constant updates are the foundation for the habitability or usability of a service. According to publisher and open source advocate Tim O'Reilly:
Users must be treated as co-developers, in a reflection of open source development practices (even if the software in question is unlikely to be released under an open source license.) The open source dictum, "release early and release often", in fact has morphed into an even more radical position, "the perpetual beta", in which the product is developed in the open, with new features slipstreamed in on a monthly, weekly, or even daily basis. It's no accident that services such as Gmail, Google Maps, Flickr, del.icio.us, and the like may be expected to bear a "Beta" logo for years at a time.
Used in the larger conversation of what defines Web 2.0, O'Reilly described the concept of perpetual beta as part of a customized Internet environment with these applications as distinguishing characteristics:
Services, not packaged software, with cost-effective scalability
Control over unique, hard-to-recreate data sources that get richer as more people use them
Trusting users as co-developers
Harnessing collective intelligence
Leveraging the long tail through customer self-service
Software above the level of a single device
Lightweight user interfaces, development models, and business models. However, the Internet and the development of open source programs have changed the role of the (e
|
https://en.wikipedia.org/wiki/FD%20Trinitron/WEGA
|
FD Trinitron/WEGA is Sony's flat version of the Trinitron picture tube. This technology was also used in computer monitors bearing the Trinitron mark. The FD Trinitron used computer-controlled feedback systems to ensure sharp focus across a flat screen, and it reduced glare by reflecting much less ambient light than spherical or vertically flat CRTs. Flat screens also increase the total image viewing angle and have less geometric distortion in comparison to curved screens. The FD Trinitron line featured key improvements over prior Trinitron designs, including a finer-pitch aperture grille, an electron gun with a greater focal length for corner focus, and an improved deflection yoke for color convergence. Sony went on to receive an Emmy Award from the National Academy of Television Arts and Sciences for its development of flat-screen CRT technology.
Initially introduced on Sony's 32- and 36-inch models in 1998, the new tubes were offered in a variety of resolutions for different uses. The basic WEGA models supported normal 480i signals, but a larger version offered 16:9 aspect ratios. The technology was quickly applied to the entire Trinitron range, from 13 to 40 inches, along with high-resolution versions: Hi-Scan and Super Fine Pitch. With the introduction of the FD Trinitron, Sony also introduced a new industrial style, leaving behind the charcoal-colored sets introduced in the 1980s in favor of a new silver styling.
By 2001, the FD Trinitron WEGA series had become the top-selling television model in the United States. By 2003, over 40 million sets had been sold worldwide. As the television market shifted towards LCD technology, Sony eventually ended production of the Trinitron in Japan in 2004, and in the US in 2006. Sony continued to sell the Trinitron in China, India, and regions of South America using tubes delivered from its Singapore plant. Worldwide production ended when the Singapore and Malaysia plants ceased production at the end of
|
https://en.wikipedia.org/wiki/Rotary%20wheel%20blow%20molding%20systems
|
Rotary wheel blow molding systems are used for the high-output production of a wide variety of plastic extrusion blow molded articles. Containers may be produced ranging from small, single-serve bottles to large containers of up to 20-30 liters in volume, but wheel machines are often sized for the volume and dimensional demands of a specific container and are typically dedicated to a narrow range of bottle sizes once built. Multiple-parison machines with high numbers of molds are capable of producing over one million bottles per day in some configurations.
Description
Rotary blow molding "wheels" are targeted to the high output production of containers. They are used to produce containers from one to seven layers. View stripe and In Mold Labeling (IML) options are available in some configurations. Rotary wheels, which may contain from six to thirty molds, feature continuously extruded parisons. Revolving sets of blow molds capture the parison or parisons as they pass over the extrusion head. The revolving sets of molds are located on clamp "stations".
Rotary wheels come in different variations, including both continuous-motion and indexing wheels, and vertical or horizontal configurations. Wheel machines are favored for their processing ease, owing to their use of only a single parison (or, in some cases, two), and for their mechanical repeatability.
In some machinery configurations, the molds take on the shape of a "pie" sector. Thus, if two or more parisons are used, each blow molded "log" has a unique length, requiring special downstream handling and trimming requirements. In other machine configurations, the molds utilize "book style" opening mechanisms, allowing multiple parisons of equal length. However, machines of this style typically have lower clamp force, limiting the available applications.
The mold close and open actuation is typically carried out through a toggle mechanism linkage that is activated during the rotational process by stationary cams. This mechanical rep
|
https://en.wikipedia.org/wiki/IBM%20SAN%20Volume%20Controller
|
The IBM SAN Volume Controller (SVC) is a block storage virtualization appliance that belongs to the IBM System Storage product family. SVC implements an indirection, or "virtualization", layer in a Fibre Channel storage area network (SAN).
Architecture
The IBM 2145 SAN Volume Controller (SVC) is an inline virtualization or "gateway" device. It logically sits between hosts and storage arrays, presenting itself to hosts as the storage provider (target) and presenting itself to storage arrays as one big host. SVC is physically attached to one or several SAN fabrics.
The virtualization approach allows for non-disruptive replacements of any part in the storage infrastructure, including the SVC devices themselves. It also aims at simplifying compatibility requirements in strongly heterogeneous server and storage landscapes. All advanced functions are therefore implemented in the virtualization layer, which allows switching storage array vendors without impact. Finally, spreading an SVC installation across two or more sites (stretched clustering) enables basic disaster protection paired with continuous availability.
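As a rough illustration of this indirection (a toy sketch only, not IBM's implementation, terminology, or API), the fragment below models a virtual volume as a host-visible object whose extents map onto backend arrays; replacing a backend only changes the mapping, never the volume the host addresses.

```python
# Toy model of a block-virtualization mapping layer; names are illustrative.
from dataclasses import dataclass, field

@dataclass
class VirtualVolume:
    name: str
    extent_map: list = field(default_factory=list)   # index -> (backend array, backend extent)

    def read(self, virtual_extent: int):
        # the layer forwards the I/O; the host never sees the backend identity
        backend, extent = self.extent_map[virtual_extent]
        return backend, extent

    def migrate(self, virtual_extent: int, new_backend: str, new_extent: int) -> None:
        # non-disruptive replacement: only the mapping changes, not the host-visible volume
        self.extent_map[virtual_extent] = (new_backend, new_extent)

vol = VirtualVolume("vol0", [("array_A", 17), ("array_A", 18)])
print(vol.read(0))              # ('array_A', 17)
vol.migrate(0, "array_B", 5)    # data moved behind the scenes
print(vol.read(0))              # ('array_B', 5) -- host still addresses extent 0 of vol0
```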
SVC nodes are always clustered, with a minimum of 2 and a maximum of 8 nodes, and linear scalability. Nodes are rack-mounted appliances derived from IBM System x servers, protected by redundant power supplies and integrated batteries. Earlier models featured external battery-backed power supplies. Each node has Fibre Channel ports simultaneously used for incoming, outgoing, and intracluster data traffic. Hosts may also be attached via FCoE and iSCSI Gbit Ethernet ports. Intracluster communication includes maintaining read/write cache integrity, sharing status information, and forwarding reads and writes to any port. These ports must be zoned together.
Write cache is protected by mirroring within a pair of SVC nodes, called an I/O group. Virtualized resources (i.e., storage volumes presented to hosts) are distributed across I/O groups to improve performance. Volum
|
https://en.wikipedia.org/wiki/Mating%20of%20yeast
|
The yeast Saccharomyces cerevisiae is a simple single-celled eukaryote with both a diploid and haploid mode of existence. The mating of yeast only occurs between haploids, which can be either the a or α (alpha) mating type and thus display simple sexual differentiation. Mating type is determined by a single locus, MAT, which in turn governs the sexual behaviour of both haploid and diploid cells. Through a form of genetic recombination, haploid yeast can switch mating type as often as every cell cycle.
Mating type and the life cycle of Saccharomyces cerevisiae
S. cerevisiae (yeast) can stably exist as either a diploid or a haploid. Both haploid and diploid yeast cells reproduce by mitosis, with daughter cells budding off of mother cells. Haploid cells are capable of mating with other haploid cells of the opposite mating type (an a cell can only mate with an α cell, and vice versa) to produce a stable diploid cell. Diploid cells, usually upon facing stressful conditions such as nutrient depletion, can undergo meiosis to produce four haploid spores: two a spores and two α spores.
Differences between a and α cells
a cells produce 'a-factor', a mating pheromone which signals the presence of an a cell to neighbouring α cells. a cells respond to α-factor, the α cell mating pheromone, by growing a projection (known as a shmoo, due to its distinctive shape resembling the Al Capp cartoon character Shmoo) towards the source of α-factor. Similarly, α cells produce α-factor, and respond to a-factor by growing a projection towards the source of the pheromone. The response of haploid cells only to the mating pheromones of the opposite mating type allows mating between a and α cells, but not between cells of the same mating type.
These phenotypic differences between a and α cells are due to a different set of genes being actively transcribed and repressed in cells of the two mating types. a cells activate genes which produce a-factor and produce a cell surface receptor (Ste2) w
|
https://en.wikipedia.org/wiki/IBM%207070
|
IBM 7070 is a decimal-architecture intermediate data-processing system that was introduced by IBM in 1958. It was part of the IBM 700/7000 series, and was based on discrete transistors rather than the vacuum tubes of the 1950s. It was the company's first transistorized stored-program computer.
The 7070 was expected to be a "common successor to at least the 650 and the 705". The 7070 was not designed to be instruction set compatible with the 650, as the latter had a second jump address in every instruction to allow optimal use of the drum, something unnecessary and wasteful in a computer with random-access core memory. As a result, a simulator was needed to run old programs. The 7070 was also marketed as an IBM 705 upgrade, but failed miserably due to its incompatibilities, including an inability to fully represent the 705 character set, forcing IBM to quickly introduce the IBM 7080 as a "transistorized IBM 705" that was fully compatible.
The 7070 series stored data in words containing 10 decimal digits plus a sign. Digits were encoded using a two-out-of-five code. Characters were represented by a two-digit code. The machine shipped with 5,000 or 9,990 words of core memory, and the CPU speed was about 27 KIPS. A typical system was leased for $17,400 per month or could be purchased for $813,000.
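As a brief aside on why such a code is self-checking (the helper below is illustrative only and does not reproduce the 7070's actual digit-to-pattern table): every valid digit pattern has exactly two of five bits set, so any single-bit error changes the bit count and is detectable.

```python
# Illustrative check of the two-out-of-five property; not IBM documentation.
def is_valid_2_of_5(pattern: int) -> bool:
    """True if exactly two of the five low-order bits are set."""
    return pattern < 32 and bin(pattern).count("1") == 2

codeword = 0b00011                      # one legal two-out-of-five pattern
print(is_valid_2_of_5(codeword))        # True
for bit in range(5):                    # flipping any single bit is detected
    print(is_valid_2_of_5(codeword ^ (1 << bit)))   # False in every case
```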
The 7070 weighed .
Later systems in this series were the faster IBM 7074, introduced in July 1960, and the IBM 7072 (1961), a less expensive system using the slower 7330 instead of 729 tape drives. The 7074 could be expanded to 30K words. They were eventually replaced by the System/360, announced in 1964.
Hardware implementation
The 7070 was implemented using both CTDL (in the logic and control sections) and current-mode logic (in the timing storage and core storage sections) on Standard Modular System (SMS) cards. A total of about 30,000 alloy-junction germanium transistors and 22,000 germanium diodes are used, on approximately 14,000 SMS cards.
Input/Output in original an
|
https://en.wikipedia.org/wiki/Corpora%20amylacea
|
Corpora amylacea (CA) (from the Latin meaning "starch-like bodies"; also known as wasteosomes) is a general term for small hyaline masses found in the prostate gland, nervous system, lung, and sometimes in other organs of the body. Corpora amylacea increase in number and size with advancing age, although this increase varies from person to person. In the nervous system, they are particularly abundant in certain neurodegenerative diseases. While their significance is largely unknown, some researchers have suggested that corpora amylacea play a role in the clearance of debris.
The composition and appearance of corpora amylacea can differ in different organs. In the prostate gland, where they are also known as prostatic concretions, corpora amylacea are rich in aggregated protein that has many of the features of amyloid, whereas those in the central nervous system are generally smaller and do not contain amyloid. Corpora amylacea in the central nervous system occur in the foot processes of astrocytes, and they are usually present beneath the pia mater, in the tissues surrounding the ventricles, and around blood vessels. They have been proposed to be part of a family of polyglucosan diseases, in which polymers of glucose collect to form abnormal structures known as polyglucosan bodies. Polyglucosan bodies bearing at least partial resemblance to human corpora amylacea have been observed in various nonhuman species.
References
Anatomy
Prostate
Brain
|
https://en.wikipedia.org/wiki/DREAM%20%28software%29
|
The Distributed Real-time Embedded Analysis Method (DREAM) is a platform-independent open-source tool for the verification and analysis of distributed real-time and embedded (DRE) systems which focuses on the practical application of formal verification and timing analysis to real-time middleware. DREAM supports formal verification of scheduling based on task timed automata using the Uppaal model checker and the Verimag IF toolset as well as the random testing of real-time components using a discrete event simulator. DREAM is developed at the Center for Embedded Computer Systems at the University of California, Irvine, in cooperation with researchers from Vanderbilt University.
External links
DREAM website
Center for Embedded Computer Systems
Uppaal website
IF toolset website
Formal methods
|
https://en.wikipedia.org/wiki/Positive%20and%20negative%20parts
|
In mathematics, the positive part of a real or extended real-valued function f is defined by the formula
f+(x) = max(f(x), 0).
Intuitively, the graph of f+ is obtained by taking the graph of f, chopping off the part under the x-axis, and letting f+ take the value zero there.
Similarly, the negative part of f is defined as
f−(x) = max(−f(x), 0) = −min(f(x), 0).
Note that both f+ and f− are non-negative functions. A peculiarity of terminology is that the 'negative part' is neither negative nor a part (just as the imaginary part of a complex number is neither imaginary nor a part).
The function f can be expressed in terms of f+ and f− as
f = f+ − f−.
Also note that
|f| = f+ + f−.
Using these two equations one may express the positive and negative parts as
f+ = (|f| + f)/2 and f− = (|f| − f)/2.
Another representation, using the Iverson bracket, is
f+ = [f > 0]·f and f− = −[f < 0]·f.
One may define the positive and negative part of any function with values in a linearly ordered group.
The unit ramp function is the positive part of the identity function.
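A quick numerical sketch (using NumPy arrays to stand in for sampled values of a function) checks the identities above:

```python
# Compute f+ and f- on sample values and verify f = f+ - f- and |f| = f+ + f-.
import numpy as np

f = np.array([3.0, -1.5, 0.0, 2.0, -4.0])        # sample values of a function f
f_pos = np.maximum(f, 0)                         # positive part: max(f, 0)
f_neg = np.maximum(-f, 0)                        # negative part: max(-f, 0)

print(f_pos)                                     # [3.  0.  0.  2.  0. ]
print(f_neg)                                     # [0.  1.5 0.  0.  4. ]
print(np.array_equal(f, f_pos - f_neg))          # True
print(np.array_equal(np.abs(f), f_pos + f_neg))  # True
```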
Measure-theoretic properties
Given a measurable space (X,Σ), an extended real-valued function f is measurable if and only if its positive and negative parts are. Therefore, if such a function f is measurable, so is its absolute value |f|, being the sum of two measurable functions. The converse, though, does not necessarily hold: for example, taking f as
f(x) = 1 if x ∈ V and f(x) = −1 otherwise,
where V is a Vitali set, it is clear that f is not measurable, but its absolute value is, being a constant function.
The positive part and negative part of a function are used to define the Lebesgue integral for a real-valued function. Analogously to this decomposition of a function, one may decompose a signed measure into positive and negative parts — see the Hahn decomposition theorem.
See also
Rectifier (neural networks)
Even and odd functions
Real and imaginary parts
References
External links
Positive part on MathWorld
Elementary mathematics
|
https://en.wikipedia.org/wiki/Levenshtein%20automaton
|
In computer science, a Levenshtein automaton for a string w and a number n is a finite-state automaton that can recognize the set of all strings whose Levenshtein distance from w is at most n. That is, a string x is in the formal language recognized by the Levenshtein automaton if and only if x can be transformed into w by at most n single-character insertions, deletions, and substitutions.
Applications
Levenshtein automata may be used for spelling correction, by finding words in a given dictionary that are close to a misspelled word. In this application, once a word is identified as being misspelled, its Levenshtein automaton may be constructed, and then applied to all of the words in the dictionary to determine which ones are close to the misspelled word. If the dictionary is stored in compressed form as a trie, the time for this algorithm (after the automaton has been constructed) is proportional to the number of nodes in the trie, significantly faster than using dynamic programming to compute the Levenshtein distance separately for each dictionary word.
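A minimal sketch of this idea in Python (a direct simulation of the nondeterministic automaton, not the optimized tabulated construction from the literature; the function names are illustrative) decides whether an input string lies within distance n of w, and could be applied to each dictionary word in turn:

```python
# States are pairs (i, e): i characters of w matched so far, e edits spent.
def _close(states, w, n):
    """Epsilon closure: skipping (deleting) a character of w costs one edit."""
    stack, closed = list(states), set(states)
    while stack:
        i, e = stack.pop()
        if i < len(w) and e < n and (i + 1, e + 1) not in closed:
            closed.add((i + 1, e + 1))
            stack.append((i + 1, e + 1))
    return closed

def within_distance(w: str, n: int, x: str) -> bool:
    """True iff the Levenshtein distance between x and w is at most n."""
    states = _close({(0, 0)}, w, n)
    for c in x:
        nxt = set()
        for i, e in states:
            if i < len(w) and w[i] == c:
                nxt.add((i + 1, e))          # match
            if e < n:
                nxt.add((i, e + 1))          # insertion (extra character in x)
                if i < len(w):
                    nxt.add((i + 1, e + 1))  # substitution
        states = _close(nxt, w, n)
        if not states:
            return False                     # no live state: distance already exceeds n
    return any(i == len(w) for i, e in states)

print(within_distance("banana", 1, "bananas"))   # True  (one insertion)
print(within_distance("banana", 1, "bnanas"))    # False (distance 2)
```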
It is also possible to find words in a regular language, rather than a finite dictionary, that are close to a given target word, by computing the Levenshtein automaton for the word, and then using a Cartesian product construction to combine it with an automaton for the regular language, giving an automaton for the intersection language. Alternatively, rather than using the product construction, both the Levenshtein automaton and the automaton for the given regular language may be traversed simultaneously using a backtracking algorithm.
Levenshtein automata are used in Lucene for full-text searches that can return relevant documents even if the query is misspelled.
Construction
For any fixed constant n, the Levenshtein automaton for w and n may be constructed in time O(|w|).
Mitankin studies a variant of this construction called the universal Levenshtein automaton, determined only by a numeric parameter n, th
|
https://en.wikipedia.org/wiki/Iodine-125
|
Iodine-125 (125I) is a radioisotope of iodine which has uses in biological assays, nuclear medicine imaging and in radiation therapy as brachytherapy to treat a number of conditions, including prostate cancer, uveal melanomas, and brain tumors. It is the second longest-lived radioisotope of iodine, after iodine-129.
Its half-life is 59.49 days and it decays by electron capture to an excited state of tellurium-125. This state is not the metastable 125mTe, but rather a lower energy state that decays immediately by gamma decay with a maximum energy of 35 keV. Some of the excess energy of the excited 125Te may be internally converted to ejected electrons (also at 35 keV), or emitted as x-rays (from electron bremsstrahlung), along with a total of 21 Auger electrons, which are produced at the low energies of 50 to 500 electron volts. Eventually, stable ground-state 125Te is produced as the final decay product.
In medical applications, the internal conversion and Auger electrons cause little damage outside the cell which contains the isotope atom. The X-rays and gamma rays are of low enough energy to deliver a higher radiation dose selectively to nearby tissues, in "permanent" brachytherapy where the isotope capsules are left in place (125I competes with palladium-103 in such uses).
Because of its relatively long half-life and emission of low-energy photons which can be detected by gamma-counter crystal detectors, 125I is a preferred isotope for tagging antibodies in radioimmunoassay and other gamma-counting procedures involving proteins outside the body. The same properties of the isotope make it useful for brachytherapy, and for certain nuclear medicine scanning procedures, in which it is attached to proteins (albumin or fibrinogen), and where a half-life longer than that provided by 123I is required for diagnostic or lab tests lasting several days.
Iodine-125 can be used in scanning/imaging the thyroid, but iodine-123 is preferred for this purpose, due to better radiation penetr
|
https://en.wikipedia.org/wiki/List%20of%20flora%20on%20stamps%20of%20Australia
|
Australia's diverse and often attractive flora has been depicted on numerous Australian stamp issues:
Acacia baileyana – 1978
Acacia coriacea – 2002
Acacia dealbata (?) – 1982
Acacia melanoxylon – 1996
Acacia pycnantha – 1959, 1979, 1990
Acmena smithii – 2002
Actinodium cunninghamii – 2005
Actinotus helianthi – 1959
Adansonia gregorii – 2005
Anigozanthos 'Bush Tango' – 2003
Anigozanthos manglesii – 1962, 1968, 2006
Armillaria luteobubalina fungus – 1981
Banksia integrifolia – 2000
Banksia prionotes (?) – 1996
Banksia serrata – 1960, 1986
Barringtonia calyptrata – 2001
Blandfordia grandiflora – 1960, 1967
Blandfordia punicea – 2007
Brachychiton acerifolius – 1978
Callistemon glaucus – 2000
Callistemon teretifolius – 1975
Caleana major – 1986
Caltha introloba – 1986
Calytrix carinata – 2002
Celmisia asteliifolia – 1986
Cochlospermum gillivraei – 2001
Coprinus comatus fungus – 1981
Correa reflexa – 1986, 1999
Cortinarius austrovenetus fungus – 1981
Cortinarius cinnabarinus fungus – 1981
Dendrobium nindii – 1986, 2003
Dendrobium phalaenopsis – 1968, 1998
Dicksonia antarctica – 1996
Dillenia alata – 1986
Diuris magnifica – 2006
Elythranthera emarginata – 1986
Epacris impressa – 1968
Eucalyptus caesia – 1982
Eucalyptus calophylla 'Rosea' – 1982
Eucalyptus camaldulensis – 1974
Eucalyptus diversicolor – 2005
Eucalyptus ficifolia – 1982
Eucalyptus forrestiana – 1982
Eucalyptus globulus – 1968, 1982
Eucalyptus grossa – 2005
Eucalyptus pauciflora – 2005
Euschemon rafflesia – 1983
Eucalyptus papuana – 1978, 1993, 2002
Eucalyptus regnans – 1996
Eucalyptus sp. – 1985
Ficus macrophylla – 2005
Gossypium sturtianum – 1971, 1978, 2007
Grevillea juncifolia – 2002
Grevillea mucronulata – 2007
Grevillea 'Superb' – 2003
Hakea laurina – 2006
Hardenbergia violacea – 2000
Helichrysum thomsonii – 1975
Helipterum albicans – 1986
Hibbertia scandens – 1999
Hibiscus meraukensis – 1986
Ipomoea pes-caprae ssp. brasiliensis – 1999
Leucochrysum albicans – 1986
Microseris lanceolata – 2002
Nelumb
|
https://en.wikipedia.org/wiki/Network%20analyzer%20%28electrical%29
|
A network analyzer is an instrument that measures the network parameters of electrical networks. Today, network analyzers commonly measure s–parameters because reflection and transmission of electrical networks are easy to measure at high frequencies, but there are other network parameter sets such as y-parameters, z-parameters, and h-parameters. Network analyzers are often used to characterize two-port networks such as amplifiers and filters, but they can be used on networks with an arbitrary number of ports.
Overview
Network analyzers are used mostly at high frequencies, with operating frequencies reaching up to 1.5 THz. Special types of network analyzers also cover lower frequency ranges, down to 1 Hz; these can be used, for example, for the stability analysis of open loops or for the measurement of audio and ultrasonic components.
The two basic types of network analyzers are
scalar network analyzer (SNA)—measures amplitude properties only
vector network analyzer (VNA)—measures both amplitude and phase properties
A VNA is a form of RF network analyzer widely used for RF design applications. A VNA may also be called a gain–phase meter or an automatic network analyzer. An SNA is functionally identical to a spectrum analyzer in combination with a tracking generator. VNAs are the most common type of network analyzer, and so references to an unqualified "network analyzer" most often mean a VNA. Prominent VNA manufacturers include Keysight, Anritsu, Advantest, Rohde & Schwarz, Siglent, Copper Mountain Technologies, and OMICRON Lab.
For some years now, entry-level devices and do-it-yourself projects have also been available, some for less than $100, mainly from the amateur radio sector. Although these have significantly reduced features compared to professional devices and offer only a limited range of functions, they are often sufficient for private users, especially for study and hobby applications up to the single-digit GHz
|
https://en.wikipedia.org/wiki/Simple%20Grid%20Protocol
|
Simple Grid Protocol is a free open source grid computing package. Developed & maintained by Brendan Kosowski, the package includes the protocol & software tools needed to get a computational grid up and running on Linux & BSD.
Coded in SBCL (Steel Bank Common Lisp), Simple Grid Protocol allows computer programs to utilize the unused CPU resources of other computers on a network or the Internet.
As of version 1.2, Simple Grid Protocol can execute multiple programming threads on multiple computers concurrently. Custom multi-threading functions (utilizing operating system threads) for Linux & BSD allow multi-threading on single-thread SBCL implementations. Originally coded in CLISP, version 1.2 included the change to SBCL coding.
BSD Operating Systems supported include FreeBSD, NetBSD, OpenBSD & DragonFly BSD.
An optional XML interface allows any XML capable programming language to send Lisp programs to the grid for execution.
External links
Simple Grid Protocol home page
Grid computing
Common Lisp (programming language) software
|
https://en.wikipedia.org/wiki/Group%20signature
|
A group signature scheme is a method for allowing a member of a group to anonymously sign a message on behalf of the group. The concept was first introduced by David Chaum and Eugene van Heyst in 1991. For example, a group signature scheme could be used by an employee of a large company where it is sufficient for a verifier to know a message was signed by an employee, but not which particular employee signed it. Another application is keycard access to restricted areas where it is inappropriate to track individual employees' movements, but necessary to restrict access to employees in the group.
Essential to a group signature scheme is a group manager, who is in charge of adding group members and has the ability to reveal the original signer in the event of disputes. In some systems the responsibilities of adding members and revoking signature anonymity are separated and given to a membership manager and revocation manager respectively. Many schemes have been proposed; however, all should satisfy the basic requirements below (a schematic interface sketch follows the list):
Soundness and completeness: Valid signatures by group members always verify correctly, and invalid signatures always fail verification.
Unforgeable: Only members of the group can create valid group signatures.
Anonymity: Given a message and its signature, the identity of the individual signer cannot be determined without the group manager's secret key.
Traceability: Given any valid signature, the group manager should be able to trace which user issued the signature. (This and the previous requirement imply that only the group manager can break users' anonymity.)
Unlinkability: Given two messages and their signatures, we cannot tell if the signatures were from the same signer or not.
No framing: Even if all other group members (and the managers) collude, they cannot forge a signature for a non-participating group member.
Unforgeable tracing verification: The revocation manager cannot falsely accuse a signer of creating a signature he did not create.
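To make the roles concrete, here is a minimal, hypothetical interface sketch in Python (no real cryptography, and not any particular published scheme); it only names the four operations (key generation, signing, verification, and opening) that the requirements above constrain.

```python
# Abstract interface only: class and method names are illustrative, and the
# docstrings map each operation to the requirements listed above.
from typing import Any, List, Protocol, Tuple

class GroupSignatureScheme(Protocol):
    def key_gen(self, n_members: int) -> Tuple[Any, Any, List[Any]]:
        """Return (group public key, group manager's secret key, per-member signing keys)."""

    def sign(self, gpk: Any, member_key: Any, message: bytes) -> bytes:
        """A member signs on behalf of the group; the signature should reveal membership only (anonymity, unlinkability)."""

    def verify(self, gpk: Any, message: bytes, signature: bytes) -> bool:
        """Anyone holding the group public key can check validity (soundness/completeness, unforgeability)."""

    def open(self, gm_secret: Any, message: bytes, signature: bytes) -> int:
        """Only the group manager can trace a valid signature to the signer's index (traceability)."""
```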
Co
|
https://en.wikipedia.org/wiki/Cleanroom%20software%20engineering
|
The cleanroom software engineering process is a software development process intended to produce software with a certifiable level of reliability. The central principles are software development based on formal methods, incremental implementation under statistical quality control, and statistically sound testing.
History
The cleanroom process was originally developed by Harlan Mills and several of his colleagues including Alan Hevner at IBM.
The cleanroom process first saw use in the mid to late 1980s. Demonstration projects within the military began in the early 1990s. Recent work on the cleanroom process has examined fusing cleanroom with the automated verification capabilities provided by specifications expressed in CSP.
Philosophy
The focus of the cleanroom process is on defect prevention, rather than defect removal. The name "cleanroom" was chosen to evoke the cleanrooms used in the electronics industry to prevent the introduction of defects during the fabrication of semiconductors.
Central principles
The basic principles of the cleanroom process are
Software development based on formal methods: Software tool support based on some mathematical formalism includes model checking, process algebras, and Petri nets. The Box Structure Method might be one such means of specifying and designing a software product. Verification that the design correctly implements the specification is performed through team review, often with software tool support.
Incremental implementation under statistical quality control: Cleanroom development uses an iterative approach, in which the product is developed in increments that gradually increase the implemented functionality. The quality of each increment is measured against pre-established standards to verify that the development process is proceeding acceptably. A failure to meet quality standards results in the cessation of testing for the current increment, and a return to the design phase.
Statistically sound testing: Softw
|
https://en.wikipedia.org/wiki/Agent%20Communications%20Language
|
Agent Communication Language (ACL), developed by the Foundation for Intelligent Physical Agents (FIPA), is a proposed standard language for agent communications. Knowledge Query and Manipulation Language (KQML) is another proposed standard.
The most popular ACLs are:
FIPA-ACL (by the Foundation for Intelligent Physical Agents, a standardization consortium)
KQML (Knowledge Query and Manipulation Language)
Both rely on speech act theory developed by Searle in the 1960s and enhanced by Winograd and Flores in the 1970s. They define a set of performatives, also called Communicative Acts, and their meaning (e.g. ask-one). The content of the performative is not standardized, but varies from system to system.
To make agents understand each other they have to not only speak the same language, but also have a common ontology. An ontology is a part of the agent's knowledge base that describes what kind of things an agent can deal with and how they are related to each other.
Examples of frameworks that implement a standard agent communication language (FIPA-ACL) include FIPA-OS
and Jade.
References
Formal languages
Knowledge representation
Agent communications languages
|
https://en.wikipedia.org/wiki/Charles%20Darwin%20Research%20Station
|
Charles Darwin Research Station (CDRS) (Spanish: Estación Científica Charles Darwin, ECCD) is a biological research station in Puerto Ayora, Santa Cruz Island, Galápagos, Ecuador. The station is operated by the Charles Darwin Foundation, which was founded in 1959 under the auspices of UNESCO and the World Conservation Union. The research station serves as the headquarters for the Foundation and is used to conduct scientific research and promote environmental education. It is located on the shore of Academy Bay in the village of Puerto Ayora on Santa Cruz Island in the Galapagos Islands, with satellite offices on Isabela and San Cristóbal islands.
Field station
In Puerto Ayora, Ecuadorian and foreign scientists work on research and projects for conservation of the Galápagos terrestrial and marine ecosystems. The Research Station, established in 1959 and dedicated in 1964, has a natural history interpretation center and also carries out educational projects in support of conservation of the Galápagos Islands, and in support of external researchers visiting the islands to conduct field work.
Objectives and work
The objective of the CDRS is to conduct scientific research and environmental education for conservation. The Station has a team of over a hundred scientists, educators, volunteers, research students, and support staff from all over the world.
Scientific research and monitoring projects are conducted at the CDRS in conjunction and cooperation with its chief partner, the Galápagos National Park Directorate (GNPD), which functions as the principal government authority in charge of conservation and natural resource issues in the Galapagos.
The work of the CDRS has as its main
|
https://en.wikipedia.org/wiki/Sign%20extension
|
Sign extension (sometimes abbreviated as sext, particularly in mnemonics) is the operation, in computer arithmetic, of increasing the number of bits of a binary number while preserving the number's sign (positive/negative) and value. This is done by appending digits to the most significant side of the number, following a procedure dependent on the particular signed number representation used.
For example, if six bits are used to represent the number "00 1010" (decimal positive 10) and the sign extend operation increases the word length to 16 bits, then the new representation is simply "0000 0000 0000 1010". Thus, both the value and the fact that the value was positive are maintained.
If ten bits are used to represent the value "11 1111 0001" (decimal negative 15) using two's complement, and this is sign extended to 16 bits, the new representation is "1111 1111 1111 0001". Thus, by padding the left side with ones, the negative sign and the value of the original number are maintained.
In the Intel x86 instruction set, for example, there are two ways of doing sign extension:
using the instructions cbw, cwd, cwde, and cdq: convert byte to word, word to doubleword, word to extended doubleword, and doubleword to quadword, respectively (in the x86 context a byte has 8 bits, a word 16 bits, a doubleword and extended doubleword 32 bits, and a quadword 64 bits);
using one of the sign extended moves, accomplished by the movsx ("move with sign extension") family of instructions.
Zero extension
A similar concept is zero extension (sometimes abbreviated as zext). In a move or convert operation, zero extension refers to setting the high bits of the destination to zero, rather than setting them to a copy of the most significant bit of the source. If the source of the operation is an unsigned number, then zero extension is usually the correct way to move it to a larger field while preserving its numeric value, while sign extension is correct for signed numbers.
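A small sketch in Python (the helper names are illustrative, not x86 code) reproduces the two worked examples above and contrasts sign extension with zero extension:

```python
def sign_extend(value: int, bits: int) -> int:
    """Interpret `value` as a `bits`-wide two's-complement pattern and return its signed value."""
    sign_bit = 1 << (bits - 1)
    return (value & (sign_bit - 1)) - (value & sign_bit)

def zero_extend(value: int, bits: int) -> int:
    """Keep only the low `bits` bits; the higher bits of a wider destination stay zero."""
    return value & ((1 << bits) - 1)

# 6-bit +10 widened to 16 bits, and 10-bit -15 widened to 16 bits:
print(format(sign_extend(0b001010, 6) & 0xFFFF, "016b"))       # 0000000000001010
print(format(sign_extend(0b1111110001, 10) & 0xFFFF, "016b"))  # 1111111111110001
print(format(zero_extend(0b1111110001, 10), "016b"))           # 0000001111110001
```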
In the x86
|
https://en.wikipedia.org/wiki/Steel%20Bank%20Common%20Lisp
|
Steel Bank Common Lisp (SBCL) is a free Common Lisp implementation that features a high-performance native compiler, Unicode support and threading.
The name "Steel Bank Common Lisp" is a reference to Carnegie Mellon University Common Lisp from which SBCL forked: Andrew Carnegie made his fortune in the steel industry and Andrew Mellon was a successful banker.
History
SBCL descends from CMUCL (created at Carnegie Mellon University), which is itself descended from Spice Lisp, including early implementations for the Mach operating system on the IBM RT PC, and the Three Rivers Computing Corporation PERQ computer, in the 1980s.
William Newman originally announced SBCL as a variant of CMUCL in December 1999. The main point of divergence at the time was a clean bootstrapping procedure: CMUCL requires an already compiled executable binary of itself to compile the CMUCL source code, whereas SBCL supported bootstrapping from theoretically any ANSI-compliant Common Lisp implementation.
SBCL became a SourceForge project in September 2000. The original rationale for the fork was to continue the initial work done by Newman without destabilizing CMUCL which was at the time already a mature and much-used implementation. The forking was amicable, and there have since then been significant flows of code and other cross-pollination between the two projects.
Since then SBCL has attracted several developers, been ported to multiple hardware architectures and operating systems, and undergone many changes and enhancements: while it has dropped support for several CMUCL extensions that it considers beyond the scope of the project (such as the Motif interface) it has also developed many new ones, including native threading and Unicode support.
Version 1.0 was released in November 2006, and active development continues.
William Newman stepped down as project administrator for SBCL in April 2008. Several other developers have taken over interim management of releases for the time being
|
https://en.wikipedia.org/wiki/Racket%20%28programming%20language%29
|
Racket is a general-purpose, multi-paradigm programming language and a multi-platform distribution that includes the Racket language, compiler, large standard library, IDE, development tools, and a set of additional languages including Typed Racket (a sister language of Racket with a static type-checker), Swindle, FrTime, Lazy Racket, R5RS & R6RS Scheme, Scribble, Datalog, Racklog, Algol 60 and several teaching languages.
The Racket language is a modern dialect of Lisp and a descendant of Scheme. It is designed as a platform for programming language design and implementation. In addition to the core Racket language, Racket is also used to refer to the family of programming languages and set of tools supporting development on and with Racket. Racket is also used for scripting, computer science education, and research.
The Racket platform provides an implementation of the Racket language (including a runtime system, libraries, and compiler supporting several compilation modes: machine code, machine-independent, interpreted, and JIT) along with the DrRacket integrated development environment (IDE) written in Racket. Racket is used by the ProgramByDesign outreach program, which aims to turn computer science into "an indispensable part of the liberal arts curriculum".
The core Racket language is known for its extensive macro system which enables creating embedded and domain-specific languages, language constructs such as classes or modules, and separate dialects of Racket with different semantics.
The platform distribution is free and open-source software distributed under the Apache 2.0 and MIT licenses. Extensions and packages written by the community may be uploaded to Racket's package catalog.
History
Development
Matthias Felleisen founded PLT Inc. in the mid 1990s, first as a research group, soon after as a project dedicated to producing pedagogic materials for novice programmers (lectures, exercises/projects, software). In January 1995, the group decided to
|
https://en.wikipedia.org/wiki/Wiener%E2%80%93Khinchin%20theorem
|
In applied mathematics, the Wiener–Khinchin theorem or Wiener–Khintchine theorem, also known as the Wiener–Khinchin–Einstein theorem or the Khinchin–Kolmogorov theorem, states that the autocorrelation function of a wide-sense-stationary random process has a spectral decomposition given by the power spectral density of that process.
History
Norbert Wiener proved this theorem for the case of a deterministic function in 1930; Aleksandr Khinchin later formulated an analogous result for stationary stochastic processes and published that probabilistic analogue in 1934. Albert Einstein explained, without proofs, the idea in a brief two-page memo in 1914.
The case of a continuous-time process
For continuous time, the Wiener–Khinchin theorem says that if x(t) is a wide-sense-stationary random process whose autocorrelation function (sometimes called autocovariance), defined in terms of statistical expected value as r_xx(τ) = E[x(t) x*(t − τ)], exists and is finite at every lag τ, then there exists a monotone function F(f) in the frequency domain −∞ < f < ∞, or equivalently a non-negative Radon measure μ on the frequency domain, such that
r_xx(τ) = ∫ e^(2πiτf) dF(f),
where the integral is a Riemann–Stieltjes integral. The asterisk denotes complex conjugate, and it can be omitted if the random process is real-valued. This is a kind of spectral decomposition of the autocorrelation function. F is called the power spectral distribution function and is a statistical distribution function. It is sometimes called the integrated spectrum.
The Fourier transform of x(t) does not exist in general, because stochastic random functions are not generally either square-integrable or absolutely integrable. Nor is r_xx assumed to be absolutely integrable, so it need not have a Fourier transform either.
However, if the measure μ is absolutely continuous, for example, if the process is purely indeterministic, then F is differentiable almost everywhere and we can write dF(f) = S(f) df. In this case, one can determine S(f), the power spectral density of x(t), by taking the averaged derivative of F.
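A discrete, finite-length analogue of the theorem is easy to check numerically (a sketch only; it uses the circular autocorrelation of a sampled sequence rather than the continuous-time ensemble autocorrelation):

```python
# For a finite sequence, the DFT of the circular autocorrelation equals the
# periodogram |X[k]|^2 / N, mirroring the Wiener-Khinchin relationship.
import numpy as np

rng = np.random.default_rng(0)
N = 1024
x = rng.standard_normal(N)                      # one sample path of a random signal

X = np.fft.fft(x)
periodogram = np.abs(X) ** 2 / N                # power spectral density estimate

# circular autocorrelation r[m] = (1/N) * sum_n x[n] * x[(n - m) mod N]
r = np.array([np.mean(x * np.roll(x, m)) for m in range(N)])
print(np.allclose(np.fft.fft(r).real, periodogram))   # True
```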
|
https://en.wikipedia.org/wiki/American%20Bureau%20of%20Shipping
|
The American Bureau of Shipping (ABS) is an American maritime classification society established in 1862. Its stated mission is to promote the security of life, property, and the natural environment, primarily through the development and verification of standards for the design, construction, and operational maintenance of marine and offshore assets.
ABS's core business is providing global classification services to the marine, offshore, and gas industries. As of 2020, ABS was the second largest class society with a classed fleet of over 12,000 commercial vessels and offshore facilities. ABS develops its standards and technical specifications, known collectively as the ABS Rules & Guides. These Rules form the basis for assessing the design and construction of new vessels and the integrity of existing vessels and marine structures.
History
ABS was first chartered in the state of New York in 1862 as the American Shipmasters’ Association (ASA) to certify qualified ship captains, or shipmasters, for safe ship operations during the Civil War. While ASA's certificates were not an official requirement for shipmasters, the certificate served as a recommendation for shipowners. Vessels that sailed with a certified ASA shipmaster were more likely to find favorable insurance coverage.
The ASA published its first technical standards, Rules for Survey and Classing Wooden Vessels, in 1870. In the late 19th century, wooden ships became obsolete and gave way to iron as a shipbuilding material. In response, ASA published its first Rules for Survey and Classing of Iron Vessels in 1877. Similarly, when iron gave way to steel, ABS Rules for Building and Classing Steel Vessels were established and published in 1890. These Steel Vessel Rules continue to be revised and published annually.
The ASA continued its program of certifying shipmasters until May 1900. By this time, federal law required that the United States government license most sea officers. As its business shifted from the
|
https://en.wikipedia.org/wiki/Navajo%20Indian%20Irrigation%20Project
|
The Navajo Indian Irrigation Project (NIIP) is a large agricultural development located in the northwest corner of New Mexico. The NIIP is one of the largest Native American owned and operated agricultural businesses in the United States. The venture finds its origins in the 1930s, when the federal government was seeking economic development for the Navajo Nation. The NIIP was approved by Congress in 1962, and the Bureau of Reclamation received the task of constructing the project.
The water supply is provided by Navajo Lake, the reservoir formed behind Navajo Dam on the San Juan River. Water is transported southwest and distributed via a network of main canals and laterals. The project service area is composed of the high benchlands south of Farmington, which experience an arid climate.
Originally designed to provide jobs for Native American family farms, the project has transformed into a large corporate entity. The project was authorized on June 13, 1962, and construction began in 1964. The canal systems and most of the drainage systems were completed by the end of 1977, and farmland was gradually brought into production in "blocks". As of 2011, seven blocks of farmland were irrigated, with an eighth block under development.
The project is entitled to an allocation of San Juan River water each year.
References
Irrigation projects
Irrigation in the United States
Geography of the Navajo Nation
United States Bureau of Reclamation
Colorado River Storage Project
|
https://en.wikipedia.org/wiki/Supersingular%20variety
|
In mathematics, a supersingular variety is (usually) a smooth projective variety in nonzero characteristic such that for all n the slopes of the Newton polygon of the nth crystalline cohomology are all equal to n/2. For special classes of varieties such as elliptic curves it is common to use various ad hoc definitions of "supersingular", which are (usually) equivalent to the one given above.
The term "singular elliptic curve" (or "singular j-invariant") was at one time used to refer to complex elliptic curves whose ring of endomorphisms has rank 2, the maximum possible. Helmut Hasse discovered that, in finite characteristic, elliptic curves can have larger rings of endomorphisms, of rank 4, and these were called "supersingular elliptic curves". Supersingular elliptic curves can also be characterized by the slopes of their crystalline cohomology, and the term "supersingular" was later extended to other varieties whose cohomology has similar properties. The terms "supersingular" and "singular" do not mean that the variety has singularities.
Examples include:
Supersingular elliptic curve. Elliptic curves in non-zero characteristic with an unusually large ring of endomorphisms of rank 4.
Supersingular abelian variety. Sometimes defined to be an abelian variety isogenous to a product of supersingular elliptic curves, and sometimes defined to be an abelian variety of dimension g whose endomorphism ring has rank (2g)².
Supersingular K3 surface. Certain K3 surfaces in non-zero characteristic.
Supersingular Enriques surface. Certain Enriques surfaces in characteristic 2.
A surface is called Shioda supersingular if the rank of its Néron–Severi group is equal to its second Betti number.
A surface is called Artin supersingular if its formal Brauer group has infinite height.
References
Algebraic geometry
|
https://en.wikipedia.org/wiki/Maximum%20satisfiability%20problem
|
In computational complexity theory, the maximum satisfiability problem (MAX-SAT) is the problem of determining the maximum number of clauses, of a given Boolean formula in conjunctive normal form, that can be made true by an assignment of truth values to the variables of the formula. It is a generalization of the Boolean satisfiability problem, which asks whether there exists a truth assignment that makes all clauses true.
Example
The conjunctive normal form formula
(x₁ ∨ x₂) ∧ (x₁ ∨ ¬x₂) ∧ (¬x₁ ∨ x₂) ∧ (¬x₁ ∨ ¬x₂)
is not satisfiable: no matter which truth values are assigned to its two variables, at least one of its four clauses will be false.
However, it is possible to assign truth values in such a way as to make three out of four clauses true; indeed, every truth assignment will do this.
Therefore, if this formula is given as an instance of the MAX-SAT problem, the solution to the problem is the number three.
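A brute-force check of this example (a sketch, not a practical MAX-SAT solver; the clause encoding with signed integers is an illustrative convention) confirms that every assignment satisfies exactly three of the four clauses:

```python
# Clauses are tuples of literals, where +i / -i stand for x_i / NOT x_i.
from itertools import product

clauses = [(1, 2), (1, -2), (-1, 2), (-1, -2)]

def satisfied(clause, assignment):
    # assignment maps variable index -> bool
    return any(assignment[abs(lit)] == (lit > 0) for lit in clause)

best = 0
for values in product([False, True], repeat=2):
    assignment = {1: values[0], 2: values[1]}
    count = sum(satisfied(c, assignment) for c in clauses)
    best = max(best, count)
    print(values, count)        # every assignment satisfies exactly 3 clauses

print("MAX-SAT value:", best)   # 3
```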
Hardness
The MAX-SAT problem is OptP-complete, and thus NP-hard, since its solution easily leads to the solution of the boolean satisfiability problem, which is NP-complete.
It is also difficult to find an approximate solution of the problem, that satisfies a number of clauses within a guaranteed approximation ratio of the optimal solution. More precisely, the problem is APX-complete, and thus does not admit a polynomial-time approximation scheme unless P = NP.
Weighted MAX-SAT
More generally, one can define a weighted version of MAX-SAT as follows: given a conjunctive normal form formula with non-negative weights assigned to each clause, find truth values for its variables that maximize the combined weight of the satisfied clauses. The MAX-SAT problem is an instance of weighted MAX-SAT where all weights are 1.
Approximation algorithms
1/2-approximation
Randomly assigning each variable to be true with probability 1/2 gives an expected 2-approximation. More precisely, if each clause has at least k variables, then this yields a (1 − 2^(−k))-approximation. This algorithm can be derandomized using the meth
|
https://en.wikipedia.org/wiki/Map%20communication%20model
|
The Map Communication Model is a theory in cartography that characterizes mapping as a process of transmitting geographic information via the map from the cartographer to the end-user. It was perhaps the first paradigm to gain widespread acceptance in cartography, both in the international cartographic community and between academic and practising cartographers.
Overview
By the mid-20th century, according to Crampton (2001), "cartographers such as Arthur H. Robinson and others had begun to see the map as primarily a communication tool, and so developed a specific model for map communication, the map communication model (MCM)". This model, according to Andrews (1988), "can be grouped with the other major communication models of the time, such as the Shannon-Weaver and Lasswell models of communication. The map communication model led to a whole new body of research, methodologies and map design paradigms".
One implication of this communication model, according to Crampton (2001), is that it "endorsed an “epistemic break” that shifted our understandings of maps as communication systems to investigating them in terms of fields of power relations and exploring the “mapping environments in which knowledge is constructed”... This involved examining the social contexts in which maps were both produced and used, a departure from simply seeing maps as artifacts to be understood apart from this context".
A second implication of this model is the presumption inherited from positivism that it is possible to separate facts from values. As Harley stated: Maps are never value-free images; except in the narrowest Euclidean sense they are not in themselves either true or false. Both in the selectivity of their content and in their signs and styles of representation maps are a way of conceiving, articulating, and structuring the human world which is biased towards, promoted by, and exerts influence upon particular sets of social relations. By accepting such premises it becomes easier to see how app
|
https://en.wikipedia.org/wiki/Wavelet%20transform
|
In mathematics, a wavelet series is a representation of a square-integrable (real- or complex-valued) function by a certain orthonormal series generated by a wavelet. This article provides a formal, mathematical definition of an orthonormal wavelet and of the integral wavelet transform.
Definition
A function ψ ∈ L²(ℝ) is called an orthonormal wavelet if it can be used to define a Hilbert basis, that is a complete orthonormal system, for the Hilbert space L²(ℝ) of square integrable functions.
The Hilbert basis is constructed as the family of functions {ψ_{j,k} : j, k ∈ ℤ} by means of dyadic translations and dilations of ψ,
ψ_{j,k}(x) = 2^(j/2) ψ(2^j x − k)
for integers j, k ∈ ℤ.
If, under the standard inner product on L²(ℝ),
⟨f, g⟩ = ∫ f(x) g(x)* dx,
this family is orthonormal, it is an orthonormal system:
⟨ψ_{j,k}, ψ_{l,m}⟩ = δ_{j,l} δ_{k,m},
where δ is the Kronecker delta.
Completeness is satisfied if every function f ∈ L²(ℝ) may be expanded in the basis as
f(x) = Σ_{j,k} c_{j,k} ψ_{j,k}(x)
with convergence of the series understood to be convergence in norm. Such a representation of f is known as a wavelet series. This implies that an orthonormal wavelet is self-dual.
The integral wavelet transform is the integral transform defined as
[W_ψ f](a, b) = |a|^(−1/2) ∫ ψ((x − b)/a)* f(x) dx.
The wavelet coefficients c_{j,k} are then given by
c_{j,k} = [W_ψ f](2^(−j), k 2^(−j)).
Here, a = 2^(−j) is called the binary dilation or dyadic dilation, and b = k 2^(−j) is the binary or dyadic position.
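As a concrete sketch (assuming the Haar function as the mother wavelet ψ; function names are illustrative), the following Python fragment checks the orthonormality relation for a few members of the dyadic family by numerical integration:

```python
import numpy as np

def haar(t):
    """Haar mother wavelet: +1 on [0, 0.5), -1 on [0.5, 1), 0 elsewhere."""
    return np.where((t >= 0) & (t < 0.5), 1.0,
                    np.where((t >= 0.5) & (t < 1.0), -1.0, 0.0))

def psi_jk(t, j, k):
    """Dyadic translation/dilation: 2^(j/2) * psi(2^j t - k)."""
    return 2.0 ** (j / 2) * haar(2.0 ** j * t - k)

t = np.linspace(-4, 4, 800_001)          # fine grid for numerical integration
dt = t[1] - t[0]

pairs = [((0, 0), (0, 0)), ((0, 0), (0, 1)), ((0, 0), (1, 0)), ((1, 2), (1, 2))]
for (j1, k1), (j2, k2) in pairs:
    ip = np.sum(psi_jk(t, j1, k1) * psi_jk(t, j2, k2)) * dt
    print((j1, k1), (j2, k2), round(ip, 3))   # ~1.0 when indices match, ~0.0 otherwise
```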
Principle
The fundamental idea of wavelet transforms is that the transformation should allow only changes in time extension, but not shape. This is achieved by choosing suitable basis functions that allow for this. Changes in the time extension are expected to conform to the corresponding analysis frequency of the basis function. Based on the uncertainty principle of signal processing,
Δt Δω ≥ 1/2,
where t represents time and ω angular frequency (ω = 2πf, where f is ordinary frequency).
The higher the required resolution in time, the lower the resolution in frequency has to be. The larger the extension of the analysis window is chosen, the larger is the value of Δt.
When Δt is large:
Bad time resolution
Good frequency resolution
Low frequency, large scaling factor
When Δt is small:
Good time
|
https://en.wikipedia.org/wiki/Ghost-canceling%20reference
|
Ghost-canceling reference (GCR) is a special sub-signal on a television channel that receivers can use to compensate for the ghosting effect of a television signal distorted by multipath propagation between transmitter and receiver.
In the United States, the GCR signal is a chirp in frequency of the modulating signal from 0 Hz to 4.2 MHz, transmitted during the vertical blanking interval over one video line (line 19 in the U.S.), shifted in phase by 180° once per frame, with this pattern inverted every four lines. Television receivers generate their own local versions of this signal and use the comparison between the local and remote signals to tune an adaptive equalizer that removes ghost images on the screen.
GCR was introduced after its recommendation in 1993 by the Advanced Television Systems Committee.
References
External links
Official GCR specification
Television technology
|
https://en.wikipedia.org/wiki/Glossary%20of%20invasion%20biology%20terms
|
The need for a clearly defined and consistent invasion biology terminology has been acknowledged by many sources. Invasive species, or invasive exotics, is a nomenclature term and categorization phrase used for flora and fauna, and for specific restoration-preservation processes in native habitats. Invasion biology is the study of these organisms and the processes of species invasion.
The terminology in this article contains definitions for invasion biology terms in common usage today, taken from accessible publications. References for each definition are included. Terminology relates primarily to invasion biology terms with some ecology terms included to clarify language and phrases on linked articles.
Introduction
Definitions of "invasive non-indigenous species have been inconsistent", which has led to confusion both in literature and in popular publications (Williams and Meffe 2005). Also, many scientists and managers feel that there is no firm definition of non-indigenous species, native species, exotic species, "and so on, and ecologists do not use the terms consistently." (Shrader-Frechette 2001) Another question asked is whether current language is likely to promote "effective and appropriate action" towards invasive species through cohesive language (Larson 2005). Biologists today spend more time and effort on invasive species work because of the rapid spread, economic cost, and effects on ecological systems, so the importance of effective communication about invasive species is clear. (Larson 2005)
Controversy in invasion biology terms exists because of past usage and because of preferences for certain terms. Even for biologists, defining a species as native may be far from being a straightforward matter of biological classification based on the location or the discipline a biologist is working in (Helmreich 2005). Questions often arise as to what exactly makes a species native as opposed to non-native, because some non-native species have no kno
|
https://en.wikipedia.org/wiki/Dual%20wavelet
|
In mathematics, a dual wavelet is the dual to a wavelet. In general, the wavelet series generated by a square-integrable function will have a dual series, in the sense of the Riesz representation theorem. However, the dual series is not itself in general representable by a square-integrable function.
Definition
Given a square-integrable function ψ ∈ L²(ℝ), define the series {ψ_{j,k}} by
ψ_{j,k}(x) = 2^(j/2) ψ(2^j x − k)
for integers j, k ∈ ℤ.
Such a function is called an R-function if the linear span of {ψ_{j,k}} is dense in L²(ℝ), and if there exist positive constants A, B with 0 < A ≤ B < ∞ such that
A ‖c‖²_(ℓ²) ≤ ‖Σ_{j,k} c_{j,k} ψ_{j,k}‖²_(L²) ≤ B ‖c‖²_(ℓ²)
for all bi-infinite square summable series {c_{j,k}}. Here, ‖·‖_(ℓ²) denotes the square-sum norm:
‖c‖²_(ℓ²) = Σ_{j,k} |c_{j,k}|²,
and ‖·‖_(L²) denotes the usual norm on L²(ℝ):
‖f‖²_(L²) = ∫ |f(x)|² dx.
By the Riesz representation theorem, there exists a unique dual basis {ψ^{j,k}} such that
⟨ψ^{j,k}, ψ_{l,m}⟩ = δ_{j,l} δ_{k,m},
where δ is the Kronecker delta and ⟨·,·⟩ is the usual inner product on L²(ℝ). Indeed, there exists a unique series representation for a square-integrable function f expressed in this basis:
f(x) = Σ_{j,k} ⟨ψ^{j,k}, f⟩ ψ_{j,k}(x).
If there exists a function ψ̃ ∈ L²(ℝ) such that
ψ̃_{j,k} = ψ^{j,k},
then ψ̃ is called the dual wavelet or the wavelet dual to ψ. In general, for some given R-function ψ, the dual will not exist. In the special case of ψ = ψ̃, the wavelet is said to be an orthogonal wavelet.
An example of an R-function without a dual is easy to construct. Let be an orthogonal wavelet. Then define for some complex number z. It is straightforward to show that this ψ does not have a wavelet dual.
See also
Multiresolution analysis
References
Wavelets
Wavelet
|
https://en.wikipedia.org/wiki/Norton%20SystemWorks
|
Norton SystemWorks is a discontinued utility software suite by Symantec Corp. It integrates three of Symantec's most popular products – Norton Utilities, Norton CrashGuard and Norton AntiVirus – into one program designed to simplify solving common PC issues. Backup software was added later to high-end editions. SystemWorks was innovative in that it combined several applications into all-in-one software for managing computer health, thus saving significant costs and time often spent on using different unrelated programs. SystemWorks, which was introduced in 1998, has since inspired a host of competitors such as iolo System Mechanic, McAfee Nuts And Bolts, Badosoft First Aid and many others.
Norton SystemWorks for Windows was initially offered alongside Norton Utilities until it replaced it as Symantec's flagship (and only) utility software in 2003. SystemWorks was discontinued in 2009, allowing Norton Utilities to return as Symantec's main utility suite. The Mac edition, lasting only three versions, was discontinued in 2004 to allow Symantec to concentrate its efforts solely on Internet security products for the Mac.
Norton NT Tools
The precursor of Norton SystemWorks was released in March 1996 for PCs running Windows NT 3.51 or later.
It includes Norton AntiVirus Scanner, Norton File Manager (based on Norton Navigator), UNC browser, Norton Fast Find, Norton Zip/Unzip, Norton Folder Synchronization, Folder Compare, Norton System Doctor, System Information, Norton Control Center.
Norton Protected Desktop Solution
An application suite similar to Norton SystemWorks, but with a different set of tools to support DOS, Windows 3.1, Windows 95, and Windows NT. Released in July 1998,
it includes Norton Software Distribution Utility 2.0, Norton CrashGuard 2.0 for Windows NT, Norton CrashGuard 3.0 for Windows 95, Norton Speed Disk for Windows 95/NT, Norton Disk Doctor for Windows 95/NT, Norton AntiVirus 4.0 for DOS/Windows 3.1, and Norton AntiVirus 4.0 fo
|
https://en.wikipedia.org/wiki/Incubator%20%28culture%29
|
An incubator is a device used to grow and maintain microbiological cultures or cell cultures. The incubator maintains optimal temperature, humidity and other conditions such as the CO2 and oxygen content of the atmosphere inside. Incubators are essential for much experimental work in cell biology, microbiology and molecular biology and are used to culture both bacterial and eukaryotic cells.
An incubator is made up of a chamber with a regulated temperature. Some incubators also regulate humidity, gas composition, or ventilation within that chamber.
The simplest incubators are insulated boxes with an adjustable heater, typically going up to 60 to 65 °C (140 to 150 °F), though some can go slightly higher (generally to no more than 100 °C). The most commonly used temperature both for bacteria such as the frequently used E. coli as well as for mammalian cells is approximately 37 °C (99 °F), as these organisms grow well under such conditions. For other organisms used in biological experiments, such as the budding yeast Saccharomyces cerevisiae, a growth temperature of 30 °C (86 °F) is optimal.
More elaborate incubators can also include the ability to lower the temperature (via refrigeration), or the ability to control humidity or CO2 levels. This is important in the cultivation of mammalian cells, where the relative humidity is typically >80% to prevent evaporation and a slightly acidic pH is achieved by maintaining a CO2 level of 5%.
History of the laboratory incubator
From aiding in hatching chicken eggs to enabling scientists to understand and develop vaccines for deadly viruses, the laboratory incubator has seen numerous applications over the years it has been in use. The incubator has also provided a foundation for medical advances and experimental work in cellular and molecular biology.
While many technological advances have occurred since the primitive incubators first used in ancient Egypt and China, the main purpose of the incubator has remained unchanged
|
https://en.wikipedia.org/wiki/Virtual%20instrumentation
|
Virtual instrumentation is the use of customizable software and modular measurement hardware to create user-defined measurement systems, called virtual instruments.
Traditional hardware instrumentation systems are made up of fixed hardware components, such as digital multimeters and oscilloscopes that are completely specific to their stimulus, analysis, or measurement function. Because of their hard-coded function, these systems are more limited in their versatility than virtual instrumentation systems. The primary difference between hardware instrumentation and virtual instrumentation is that software is used to replace a large amount of hardware. The software enables complex and expensive hardware to be replaced by already purchased computer hardware; e.g. an analog-to-digital converter can act as the hardware complement of a virtual oscilloscope, and a potentiostat enables frequency-response acquisition and analysis in electrochemical impedance spectroscopy with virtual instrumentation.
The concept of a synthetic instrument is a subset of the virtual instrument concept. A synthetic instrument is a kind of virtual instrument that is purely software defined. A synthetic instrument performs a specific synthesis, analysis, or measurement function on completely generic, measurement agnostic hardware. Virtual instruments can still have measurement specific hardware, and tend to emphasize modular hardware approaches that facilitate this specificity. Hardware supporting synthetic instruments is by definition not specific to the measurement, nor is it necessarily (or usually) modular.
Leveraging commercially available technologies, such as the PC and the analog-to-digital converter, virtual instrumentation has grown significantly since its inception in the late 1970s. Additionally, software packages like National Instruments' LabVIEW and other graphical programming languages helped grow adoption by making it easier for non-programmers to develop systems.
The newly updated
|
https://en.wikipedia.org/wiki/Countercontrol
|
Countercontrol is a term used by Dr. B.F. Skinner in 1953 as a functional class in the analysis of social behavior.
Opposition or resistance to intervention defines countercontrol; however, little systematic research has been conducted to document its occurrence. Skinner also distinguished it from the literature of freedom, which he said did not provide effective countercontrol strategies. The concept was identified as a mechanism to oppose control, such as escape from the controller or waging an attack in order to weaken or destroy the controlling power. For this purpose, Skinner stressed the role of the individual as an instrument of countercontrol, emphasizing the notion of vigilance along with the concepts of freedom and dignity.
Behavior
Countercontrol can embed itself in both passive and active behavior. An individual may not respond to the demanding interventionist or may completely withdraw from the situation passively. The foundation for countercontrol is that human behavior is both a function of the environment and a source of control over it. Countercontrol originates from the essential behavior-analytic position, which states that behavior is always caused or controlled. For Skinner, countercontrol is constituted by the behaviors that determine the behavior of the controller or those who hold authority.
Fundamental
Control is fundamental in conceptual, experimental and applied behavior analysis, as it is fundamental in all experimental science. To study functional relations in behavior and environment, one must manipulate (control) environmental variables to study their effect in behavior. Countercontrol can be defined as human operant behavior as a response to social aversive control. The individual that is exposed to aversive control may try to oppose controlling attempts through the process of negative reinforcement, such as by escaping, attacking, or passively resisting.
Countercontrol is a way in which individuals regain behavioral freedom when f
|
https://en.wikipedia.org/wiki/Pulsed%20columns
|
Pulsed columns are a type of liquid-liquid extraction equipment; an example of this class of extraction equipment is used at the BNFL plant THORP.
They find special use in the nuclear industry for fuel reprocessing, where spent fuel from reactors is subjected to solvent extraction. A pulsation is created using air supplied through a pulse leg. The feed is an aqueous solution containing radioactive solutes, and the solvent used is TBP (tributyl phosphate) in a suitable hydrocarbon diluent. In conventional equipment, a mechanical agitator is used to create the turbulence needed to disperse one phase in the other. Because of the radioactivity, and the frequent maintenance that mechanical agitators require, pulsing is used instead in these extraction columns.
References
Chemical equipment
|
https://en.wikipedia.org/wiki/System%20Management%20Mode
|
System Management Mode (SMM, sometimes called ring −2 in reference to protection rings) is an operating mode of x86 central processor units (CPUs) in which all normal execution, including the operating system, is suspended. An alternate software system which usually resides in the computer's firmware, or a hardware-assisted debugger, is then executed with high privileges.
It was first released with the Intel 386SL. While initially special SL versions were required for SMM, Intel incorporated SMM in its mainline 486 and Pentium processors in 1993. AMD implemented Intel's SMM with the Am386 processors in 1991. It is available in all later microprocessors in the x86 architecture.
In the ARM architecture, the Exception Level 3 (EL3) mode is also referred to as Secure Monitor Mode or System Management Mode.
Operation
SMM is a special-purpose operating mode provided for handling system-wide functions like power management, system hardware control, or proprietary OEM designed code. It is intended for use only by system firmware (BIOS or UEFI), not by applications software or general-purpose systems software. The main benefit of SMM is that it offers a distinct and easily isolated processor environment that operates transparently to the operating system or executive and software applications.
In order to achieve transparency, SMM imposes certain rules. The SMM can only be entered through SMI (System Management Interrupt). The processor executes the SMM code in a separate address space (SMRAM) that has to be made inaccessible to other operating modes of the CPU by the firmware.
System Management Mode can address up to 4 GB of memory, operating like huge real mode. In x86-64 processors, SMM can address more than 4 GB of memory in real address mode.
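Although SMM itself is invisible to the operating system, many Intel processors expose a running count of handled SMIs through a model-specific register (MSR_SMI_COUNT, index 0x34 in Intel's documentation). A minimal sketch of reading it on Linux, assuming the msr kernel module is loaded, root privileges are available, and the CPU actually implements this MSR:

import os
import struct

MSR_SMI_COUNT = 0x34   # assumed per Intel's documentation; not present on all x86 CPUs

fd = os.open("/dev/cpu/0/msr", os.O_RDONLY)   # requires the 'msr' kernel module and root
try:
    raw = os.pread(fd, 8, MSR_SMI_COUNT)      # every MSR is read as a 64-bit value
    smi_count, = struct.unpack("<Q", raw)
    print(f"SMIs handled since reset on CPU 0: {smi_count}")
finally:
    os.close(fd)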
Usage
Initially, System Management Mode was used for implementing power management and hardware control features like Advanced Power Management (APM). However, BIOS manufacturers and OEMs have relied on SMM for newer functionality like Advanced Configuration a
|
https://en.wikipedia.org/wiki/Green-beard%20effect
|
The green-beard effect is a thought experiment used in evolutionary biology to explain selective altruism among individuals of a species.
The idea of a green-beard gene was proposed by William D. Hamilton in his articles of 1964, and got the name from the example used by Richard Dawkins ("I have a green beard and I will be altruistic to anyone else with green beard") in The Selfish Gene (1976).
A green-beard effect occurs when an allele, or a set of linked alleles, produce three expressed (or phenotypic) effects:
a perceptible trait—the hypothetical "green beard"
recognition of this trait by others; and
preferential treatment of individuals with the trait by others with the trait
The carrier of the gene (or a specific allele) is essentially recognizing copies of the same gene (or a specific allele) in other individuals. Whereas kin selection involves altruism to related individuals who share genes in a non-specific way, green-beard alleles promote altruism toward individuals who share a gene that is expressed by a specific phenotypic trait. Some authors also note that the green-beard effect can include "spite" for individuals lacking the "green-beard" gene. This can have the effect of delineating a subset of organisms within a population that is characterized by members who show greater cooperation toward each other, thus forming a "clique" that can be advantageous to its members, who are not necessarily kin.
The green-beard effect can increase altruism toward green-beard phenotypes, and therefore the allele's presence in a population, even if the genes assisted are not exact copies; all that is required is that they express the three required characteristics. Green-beard alleles are vulnerable to mutations that produce the perceptible trait without the helping behaviour.
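The logic can be illustrated with a deliberately simplified haploid model (a sketch, not a model taken from the literature cited here): carriers pay a cost to help partners who also display the beard, and the allele spreads whenever the benefit received exceeds the cost paid. All parameter values are illustrative assumptions:

def greenbeard_trajectory(p0=0.05, b=0.3, c=0.1, baseline=1.0, generations=200):
    """Toy model: random pairwise interactions; carriers donate b at cost c,
    but only to partners who also carry the green-beard allele."""
    p = p0
    history = [p]
    for _ in range(generations):
        w_carrier = baseline + p * (b - c)   # meets another carrier with probability p
        w_noncarrier = baseline              # never gives or receives help
        mean_fitness = p * w_carrier + (1 - p) * w_noncarrier
        p = p * w_carrier / mean_fitness     # fitness-proportional update
        history.append(p)
    return history

frequencies = greenbeard_trajectory()
print(f"carrier frequency after 200 generations: {frequencies[-1]:.3f}")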
Altruistic behaviour is paradoxical when viewed in the light of old ideas of evolutionary theory that emphasised the role of competition. The evolution of altruism is better expl
|
https://en.wikipedia.org/wiki/Carnegie%20Mellon%20University%20Usable%20Privacy%20and%20Security%20Laboratory
|
The Carnegie Mellon University Usable Privacy and Security Laboratory (CUPS) was established in the Spring of 2004 to bring together Carnegie Mellon University researchers working on a diverse set of projects related to understanding and improving the usability of privacy and security software and systems. The privacy and security research community has become increasingly aware that usability problems severely impact the effectiveness of mechanisms designed to provide security and privacy in software systems. Indeed, one of the four grand research challenges in information security and assurance identified by the Computing Research Association in 2003 is: "Give end-users security controls they can understand and privacy they can control for the dynamic, pervasive computing environments of the future." This is the challenge that CUPS strives to address. CUPS is affiliated with Carnegie Mellon CyLab and has members from the Engineering and Public Policy Department, the School of Computer Science, the Electrical and Computer Engineering Department, the Heinz College, and the Department of Social and Decision Sciences. It is directed by Lorrie Cranor.
Projects
P3P and computer-readable privacy policies
Two members of the CUPS Lab are members of the W3C P3P Working Group, working on developing the P3P 1.1 specification.
In the fall of 2005, AT&T gave the lab the rights to the source code and trademarks surrounding Privacy Bird, their P3P user agent. Privacy Bird is currently maintained and distributed by the lab.
In the summer of 2005, the lab made available to the public a "P3P-enabled search engine", known as Privacy Finder. It allowed a user to reorder search results based on whether each site complied with his or her privacy preferences. This information was gleaned from P3P policies found on the web sites. Since 2012, Privacy Finder has been "temporarily out of service", with no indication of when service would be restored.
Additionally, the lab archives web sites
|
https://en.wikipedia.org/wiki/Patrick%27s%20test
|
Patrick's test or FABER test is performed to evaluate pathology of the hip joint or the sacroiliac joint.
The test is performed by having the tested leg flexed and the thigh abducted and externally rotated. If pain is elicited on the ipsilateral side anteriorly, it is suggestive of a hip joint disorder on the same side. If pain is elicited on the contralateral side posteriorly around the sacroiliac joint, it is suggestive of pain mediated by dysfunction in that joint.
History
Patrick's test is named after the American neurologist Hugh Talbot Patrick.
See also
Gaenslen's test
Physical medicine and rehabilitation
References
Orthopedic surgical procedures
|
https://en.wikipedia.org/wiki/Rho%28D%29%20immune%20globulin
|
Rho(D) immune globulin (RhIG) is a medication used to prevent RhD isoimmunization in mothers who are RhD negative and to treat idiopathic thrombocytopenic purpura (ITP) in people who are Rh positive. It is often given both during and following pregnancy. It may also be used when RhD-negative people are given RhD-positive blood. It is given by injection into muscle or a vein. A single dose lasts 12 weeks. It is made from human blood plasma.
Common side effects include fever, headache, pain at the site of injection, and red blood cell breakdown. Other side effects include allergic reactions, kidney problems, and a very small risk of viral infections. In those with ITP, the amount of red blood cell breakdown may be significant. Use is safe with breastfeeding. Rho(D) immune globulin is made up of antibodies to the antigen Rho(D) present on some red blood cells. It is believed to work by blocking a person's immune system from recognizing this antigen.
Rho(D) immune globulin came into medical use in the 1960s, following the pioneering work of John G. Gorman. In 1980, Gorman shared the Lasker-DeBakey Clinical Medical Research Award for pioneering work on the rhesus blood group system.
RhIG is on the World Health Organization's List of Essential Medicines.
Medical uses
In a pregnancy where the mother is RhD negative and the father is RhD positive, the probability of the fetus having RhD positive blood is dependent on whether the father is homozygous for RhD (i.e., both RhD alleles are present) or heterozygous (i.e., only one RhD allele is present). If the father is homozygous, the fetus will necessarily be RhD positive, as the father will necessarily pass on an RhD positive allele. If the father is heterozygous, there is a 50% chance that the fetus will be RhD positive, as he will randomly either pass on the RhD positive allele or not.
If a fetus is RhD positive and the mother is RhD negative, the mother is at risk of RhD alloimmunization, where the mother mounts an i
|
https://en.wikipedia.org/wiki/Correlated%20equilibrium
|
In game theory, a correlated equilibrium is a solution concept that is more general than the well known Nash equilibrium. It was first discussed by mathematician Robert Aumann in 1974. The idea is that each player chooses their action according to their private observation of the value of the same public signal. A strategy assigns an action to every possible observation a player can make. If no player would want to deviate from their strategy (assuming the others also don't deviate), the distribution from which the signals are drawn is called a correlated equilibrium.
Formal definition
An $N$-player strategic game is characterized by an action set $A_i$ and utility function $u_i$ for each player $i$. When player $i$ chooses strategy $a_i \in A_i$ and the remaining players choose a strategy profile described by the $(N-1)$-tuple $a_{-i}$, then player $i$'s utility is $u_i(a_i, a_{-i})$.
A strategy modification for player $i$ is a function $\phi_i \colon A_i \to A_i$. That is, $\phi_i$ tells player $i$ to modify his behavior by playing action $\phi_i(a_i)$ when instructed to play $a_i$.
Let $(\Omega, \pi)$ be a countable probability space. For each player $i$, let $P_i$ be his information partition, $q_i$ be $i$'s posterior and let $s_i \colon \Omega \to A_i$, assigning the same value to states in the same cell of $i$'s information partition. Then $((\Omega, \pi), P_i, s_i)$ is a correlated equilibrium of the strategic game if for every player $i$ and for every strategy modification $\phi_i$:
$\mathbb{E}_\pi\!\left[u_i\big(s_i(\omega), s_{-i}(\omega)\big)\right] \;\ge\; \mathbb{E}_\pi\!\left[u_i\big(\phi_i(s_i(\omega)), s_{-i}(\omega)\big)\right]$
In other words, $((\Omega, \pi), P_i, s_i)$ is a correlated equilibrium if no player can improve his or her expected utility via a strategy modification.
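For a finite game, this condition can be checked directly: for every recommended action and every possible deviation, the expected gain from deviating (weighted by the probability of each recommendation profile) must not be positive. A sketch for a two-player game follows; the chicken-style payoff values and the uniform three-point signal distribution are illustrative assumptions, not figures taken from this article:

def is_correlated_equilibrium(actions, utils, pi, tol=1e-9):
    """actions: {player: [actions]}, utils: {player: {(a0, a1): payoff}},
    pi: {(a0, a1): probability} -- the mediator's distribution over recommendations."""
    for player in (0, 1):
        other = 1 - player
        for recommended in actions[player]:
            for deviation in actions[player]:
                gain = 0.0
                for a_other in actions[other]:
                    profile = (recommended, a_other) if player == 0 else (a_other, recommended)
                    deviated = (deviation, a_other) if player == 0 else (a_other, deviation)
                    gain += pi.get(profile, 0.0) * (utils[player][deviated] - utils[player][profile])
                if gain > tol:                      # a profitable strategy modification exists
                    return False
    return True

# Illustrative chicken-style payoffs and the classic uniform signal over three profiles.
acts = {0: ["C", "D"], 1: ["C", "D"]}
u0 = {("C", "C"): 6, ("C", "D"): 2, ("D", "C"): 7, ("D", "D"): 0}
u1 = {(a0, a1): u0[(a1, a0)] for (a0, a1) in u0}    # symmetric game
signal = {("C", "C"): 1 / 3, ("C", "D"): 1 / 3, ("D", "C"): 1 / 3}
print(is_correlated_equilibrium(acts, {0: u0, 1: u1}, signal))   # expected: True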
An example
Consider the game of chicken pictured. In this game two individuals are challenging each other to a contest where each can either dare or chicken out. If one is going to dare, it is better for the other to chicken out. But if one is going to chicken out, it is better for the other to dare. This leads to an interesting situation where each wants to dare, but only if the other might chicken out.
In this game, there are three Nash equilibria. The two pure strategy Nash equilibria are (D, C) and (C, D). There is also a mixed strategy equilibrium where b
|
https://en.wikipedia.org/wiki/Irregular%20matrix
|
An irregular matrix, or ragged matrix, is a matrix that has a different number of elements in each row. Ragged matrices are not used in linear algebra, since standard matrix transformations cannot be performed on them, but they are useful in computing, where such arrays are called jagged arrays. Irregular matrices are typically stored using Iliffe vectors.
For example, the following is an irregular matrix:
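A minimal sketch of such a matrix stored as a jagged array (in Python, simply a list of rows of unequal length, analogous to an Iliffe vector of row references); the element values are purely illustrative:

ragged = [
    [1, 3, 1],
    [2, 4],
    [5, 2, 8, 6],
]

for i, row in enumerate(ragged):
    # Each row carries its own length, so it must be queried per row.
    print(f"row {i} has {len(row)} elements: {row}")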
See also
Regular matrix (disambiguation)
Empty matrix
Sparse matrix
References
Arrays
Matrices
|
https://en.wikipedia.org/wiki/Moore%20method
|
The Moore method is a deductive manner of instruction used in advanced mathematics courses. It is named after Robert Lee Moore, a famous topologist who first used a stronger version of the method at the University of Pennsylvania when he began teaching there in 1911. (Zitarelli, 2004)
The way the course is conducted varies from instructor to instructor, but the content of the course is usually presented in whole or in part by the students themselves. Instead of using a textbook, the students are given a list of definitions and, based on these, theorems which they are to prove and present in class, leading them through the subject material. The Moore method typically limits the amount of material that a class is able to cover, but its advocates claim that it induces a depth of understanding that listening to lectures cannot give.
The original method
F. Burton Jones, a student of Moore and a practitioner of his method, described it as follows:
The students were forbidden to read any book or article about the subject. They were even forbidden to talk about it outside of class. Hersh and John-Steiner (1977) claim that, "this method is reminiscent of a well-known, old method of teaching swimming called 'sink or swim' ".
Quotations
"That student is taught the best who is told the least." Moore, quote in Parker (2005: vii).
"I hear, I forget. I see, I remember. I do, I understand." (Chinese proverb that was a favorite of Moore's. Quoted in Halmos, P.R. (1985) I want to be a mathematician: an automathography. Springer-Verlag: 258)
References
Chalice, Donald R., 1995, "How to teach a class by the Modified Moore Method." American Mathematical Monthly 102: 317–321.
Cohen, David W., 1982, "A modified Moore method for teaching undergraduate mathematics", American Mathematical Monthly 89(7): 473–474, 487–490.
Hersh, Reuben and John-Steiner, Vera, 1977, "Loving + Hating Mathematics".
Jones, F. Burton, 1977, "The Moore method," American Mathematical Monthly 84: 273–77.
P
|
https://en.wikipedia.org/wiki/Artificial%20reproduction
|
Artificial reproduction is the re-creation of life by means other than natural means and natural causes. It involves the building of new life following human plans and projects. Examples include artificial selection, artificial insemination, in vitro fertilization, the artificial womb, artificial cloning, and kinematic replication.
Artificial reproduction is one aspect of artificial life. Artificial reproduction falls into two classes according to its capacity to be self-sufficient: non-assisted reproductive technology and assisted reproductive technology.
Cutting plants' stems and placing them in compost is a form of assisted artificial reproduction, xenobots are an example of a more autonomous type of reproduction, while the artificial womb presented in the film The Matrix illustrates a hypothetical non-assisted technology. The idea of artificial reproduction has led to various technologies.
Theology
Humans have aspired to create life since time immemorial. Most theologies and religions have conceived of this possibility as exclusive to deities. Christian religions consider the possibility of artificial reproduction, in most cases, as heretical and sinful.
Philosophy
Although ancient Greek philosophy raised the possibility that man could imitate the creative capacity of nature, it was thought that, if this were possible, human beings would reproduce things just as nature does, and vice versa, that nature would make the things man makes in the same way man does. Aristotle, for example, wrote that if nature made tables, it would make them just as men do. In other words, Aristotle said that if nature were to create a table, such a table would look like a human-made table. Similarly, Descartes envisioned the human body, and nature, as a machine. Cartesian philosophy thus continues to see a perfect mirror between nature and the artificial.
However, Kant revolutionized this old idea by criticizing such naturalism. Kant pedagogically wrote:
Humans are not instructed by nature but rather use nature as raw
|
https://en.wikipedia.org/wiki/Circular%20convolution
|
Circular convolution, also known as cyclic convolution, is a special case of periodic convolution, which is the convolution of two periodic functions that have the same period. Periodic convolution arises, for example, in the context of the discrete-time Fourier transform (DTFT). In particular, the DTFT of the product of two discrete sequences is the periodic convolution of the DTFTs of the individual sequences. And each DTFT is a periodic summation of a continuous Fourier transform function (see ). Although DTFTs are usually continuous functions of frequency, the concepts of periodic and circular convolution are also directly applicable to discrete sequences of data. In that context, circular convolution plays an important role in maximizing the efficiency of a certain kind of common filtering operation.
Definitions
The periodic convolution of two T-periodic functions, $h_T(t)$ and $x_T(t)$, can be defined as:
$\int_{t_o}^{t_o+T} h_T(\tau)\, x_T(t - \tau)\, d\tau$
where $t_o$ is an arbitrary parameter. An alternative definition, in terms of the notation of normal linear or aperiodic convolution, follows from expressing $h_T(t)$ and $x_T(t)$ as periodic summations of aperiodic components $h$ and $x$, i.e.:
$h_T(t) \triangleq \sum_{k=-\infty}^{\infty} h(t - kT), \qquad x_T(t) \triangleq \sum_{k=-\infty}^{\infty} x(t - kT)$
Then:
$\int_{t_o}^{t_o+T} h_T(\tau)\, x_T(t - \tau)\, d\tau \;=\; \int_{-\infty}^{\infty} h(\tau)\, x_T(t - \tau)\, d\tau \;\triangleq\; (h * x_T)(t)$
Both forms can be called periodic convolution. The term circular convolution arises from the important special case of constraining the non-zero portions of both $h$ and $x$ to the interval $[0, T]$. Then the periodic summation becomes a periodic extension, which can also be expressed as a circular function:
$x_T(t) = x(t \bmod T), \quad t \in \mathbb{R}$ (any real number)
And the limits of integration reduce to the length of function $h$:
$(h * x_T)(t) = \int_{0}^{T} h(\tau)\, x\big((t - \tau) \bmod T\big)\, d\tau$
Discrete sequences
Similarly, for discrete sequences, and a parameter N, we can write a circular convolution of aperiodic functions $h$ and $x$ as:
$(h * x_N)[n] \;\triangleq\; \sum_{m=-\infty}^{\infty} h[m]\, x_N[n - m] \;=\; \sum_{m=-\infty}^{\infty} h[m]\, x\big[(n - m) \bmod N\big]$
This function is N-periodic. It has at most N unique values. For the special case that the non-zero extent of both x and h are ≤ N, it is reducible to matrix multiplication where the kernel of the integral transform is a circulant matrix.
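A short sketch of the discrete case, computing the N-point circular convolution directly from the modulo-N definition above and checking it against the equivalent DFT-domain product; the sequences are illustrative:

import numpy as np

def circular_convolve(x, h):
    """Direct N-point circular convolution; assumes len(x) == len(h) == N."""
    N = len(x)
    y = np.zeros(N)
    for n in range(N):
        for m in range(N):
            y[n] += h[m] * x[(n - m) % N]   # periodic (modulo-N) indexing of x
    return y

x = np.array([1.0, 2.0, 3.0, 4.0])
h = np.array([1.0, 0.0, 0.5, 0.0])
direct = circular_convolve(x, h)
via_dft = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))
print(np.allclose(direct, via_dft))   # True: circular convolution <-> pointwise DFT product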
Example
A case of great practical interest is illustrated in the figure. The
|
https://en.wikipedia.org/wiki/Dynamic%20energy%20budget%20theory
|
The dynamic energy budget (DEB) theory is a formal metabolic theory which provides a single quantitative framework to dynamically describe the aspects of metabolism (energy and mass budgets) of all living organisms at the individual level, based on assumptions about energy uptake, storage, and utilization of various substances. The DEB theory adheres to stringent thermodynamic principles, is motivated by universally observed patterns, is non-species specific, and links different levels of biological organization (cells, organisms, and populations) as prescribed by the implications of energetics. Models based on the DEB theory have been successfully applied to over 1000 species, with real-life applications ranging from conservation and aquaculture to general ecology and ecotoxicology (see also the Add-my-pet collection). The theory is contributing to the theoretical underpinning of the emerging field of metabolic ecology.
The explicitness of the assumptions and the resulting predictions enable testing against a wide variety of experimental results at the various levels of biological organization. The theory explains many general observations, such as the body size scaling relationships of certain physiological traits, and provides a theoretical underpinning to the widely used method of indirect calorimetry. Several popular empirical models are special cases of the DEB model, or very close numerical approximations.
Theoretical background
The theory presents simple mechanistic rules that describe the uptake and allocation of energy (and nutrients) and the consequences for physiological organization throughout an organism's life cycle, including the relationships of energetics with aging and effects of toxicants. Assumptions of the DEB theory are delineated in an explicit way, the approach clearly distinguishes mechanisms associated with intra‐ and interspecific variation in metabolic rates, and equations for energy flows are mathematically derived following the princ
|
https://en.wikipedia.org/wiki/Cyberwarfare
|
Cyberwarfare is the use of cyber attacks against an enemy state, causing comparable harm to actual warfare and/or disrupting vital computer systems. Some intended outcomes could be espionage, sabotage, propaganda, manipulation or economic warfare.
There is significant debate among experts regarding the definition of cyberwarfare, and even if such a thing exists. One view is that the term is a misnomer since no cyber attacks to date could be described as a war. An alternative view is that it is a suitable label for cyber attacks which cause physical damage to people and objects in the real world.
Many countries including the United States, United Kingdom, Russia, China, Israel, Iran, and North Korea have active cyber capabilities for offensive and defensive operations. As states explore the use of cyber operations and combine capabilities, the likelihood of physical confrontation and violence playing out as a result of, or part of, a cyber operation is increased. However, meeting the scale and protracted nature of war is unlikely, thus ambiguity remains.
The first instance of kinetic military action used in response to a cyber-attack resulting in the loss of human life was observed on 5 May 2019, when the Israel Defense Forces targeted and destroyed a building associated with an ongoing cyber-attack.
Definition
There is ongoing debate over how cyberwarfare should be defined and no absolute definition is widely agreed upon. While the majority of scholars, militaries, and governments use definitions that refer to state and state-sponsored actors, other definitions may include non-state actors, such as terrorist groups, companies, political or ideological extremist groups, hacktivists, and transnational criminal organizations depending on the context of the work.
Examples of definitions proposed by experts in the field are as follows.
Raymond Charles Parks and David P. Duggan focused on analyzing cyberwarfare in terms of computer networks and pointed out that "Cy
|
https://en.wikipedia.org/wiki/Materials%20informatics
|
Materials informatics is a field of study that applies the principles of informatics and data science to materials science and engineering to improve the understanding, use, selection, development, and discovery of materials. The term "materials informatics" is frequently used interchangeably with "data science", "machine learning", and "artificial intelligence" by the community. This is an emerging field, with a goal to achieve high-speed and robust acquisition, management, analysis, and dissemination of diverse materials data with the goal of greatly reducing the time and risk required to develop, produce, and deploy new materials, which generally takes longer than 20 years.
This field of endeavor is not limited to some traditional understandings of the relationship between materials and information. Some more narrow interpretations include combinatorial chemistry, process modeling, materials databases, materials data management, and product life cycle management. Materials informatics is at the convergence of these concepts, but also transcends them and has the potential to achieve greater insights and deeper understanding by applying lessons learned from data gathered on one type of material to others. By gathering appropriate meta data, the value of each individual data point can be greatly expanded.
Databases
Databases are essential for any informatics research and applications. In materials informatics, many databases exist containing both empirical data obtained experimentally and theoretical data obtained computationally. Big data that can be used for machine learning is particularly difficult to obtain for experimental data due to the lack of a standard for reporting data and the variability in the experimental environment. This lack of big data has led to growing effort in developing machine learning techniques that utilize extremely small data sets. On the other hand, large uniform databases of theoretical density functional theory (DFT) calculations exis
|
https://en.wikipedia.org/wiki/De%20novo%20mutation
|
A de novo mutation (DNM) is any mutation or alteration in the genome of an individual organism (human, animal, plant, microbe, etc.) that was not inherited from its parents. This type of mutation spontaneously occurs during the process of DNA replication during cell division. De novo mutations, by definition, are present in the affected individual but absent from both biological parents' genomes. These mutations can occur in any cell of the offspring, but those in the germ line (eggs or sperm) can be passed on to the next generation.
In most cases, such a mutation has little or no effect on the affected organism due to the redundancy and robustness of the genetic code. However, in rare cases, it can have notable and serious effects on overall health, physical appearance, and other traits. Disorders that most commonly involve de novo mutations include cri-du-chat syndrome, 1p36 deletion syndrome, genetic cancer syndromes, and certain forms of autism, among others.
Rate
The rate at which de novo mutations occur is not static and can vary among different organisms and even among individuals. In humans, the average number of spontaneous mutations (not present in the parents) an infant has in its genome is approximately 43.86 DNMs.
Various factors can influence this rate. For instance, a study in September 2019 by the University of Utah Health revealed that certain families have a higher spontaneous mutation rate than average. This finding indicates that the rate of de novo mutation can have a hereditary component, suggesting that it may "run in the family".
Additionally, the age of parents, particularly the paternal age, can significantly impact the rate of de novo mutations. Older parents, especially fathers, tend to have a higher risk of having children with de novo mutations due to the higher number of cell divisions in the male germ line as men age.
In genetic counselling, parents are often told that after having a first child with a condition caused by a de
|
https://en.wikipedia.org/wiki/List%20of%20input%20methods%20for%20Unix%20platforms
|
This is intended as a non-exhaustive list of input methods for Unix platforms. An input method is a means of entering characters and glyphs that have a corresponding encoding in a character set. See the input method page for more information.
Input methods
Unix
Computing-related lists
|
https://en.wikipedia.org/wiki/Tricotyledon%20theory%20of%20system%20design
|
In systems engineering, the tricotyledon theory of system design (T3SD) is a mathematical theory of system design developed by A. Wayne Wymore. T3SD consists of a language for describing systems and requirements, which is based on set theory, a mathematical systems model based on port automata, and a precise definition of the different types of system requirements and relationships between requirements.
System requirements model
I/O requirement
Performance requirement
System test requirement
Cost requirement
Tradeoff requirement
System design
Based on set theory
Transition systems
Functional system design
Buildable system design
References
Systems engineering
|
https://en.wikipedia.org/wiki/String%20art
|
String art, or pin and thread art, is characterized by an arrangement of colored thread strung between points to form geometric patterns or representational designs such as a ship's sails, sometimes with other artistic material comprising the remainder of the work. Thread, wire, or string is wound around a grid of nails hammered into a velvet-covered wooden board. Though straight lines are formed by the string, the slightly different angles and metric positions at which strings intersect give the appearance of Bézier curves (as in the mathematical concept of the envelope of a family of straight lines). A quadratic Bézier curve is obtained from strings based on two intersecting segments. Other forms of string art include Spirelli, which is used for cardmaking and scrapbooking, and curve stitching, in which string is stitched through holes.
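The two-segment construction can be sketched as follows: pins are spaced evenly along segments P0-P1 and P1-P2, and string k joins the k-th pin of the first segment to the k-th pin of the second, so the chords envelop the quadratic Bézier curve with control points P0, P1, P2. The pin count and coordinates below are illustrative:

import numpy as np

def string_art_chords(p0, p1, p2, pins=10):
    """Return the string segments whose envelope approximates the quadratic
    Bezier curve with control points p0, p1, p2."""
    p0, p1, p2 = (np.asarray(p, dtype=float) for p in (p0, p1, p2))
    chords = []
    for k in range(pins + 1):
        t = k / pins
        start = (1 - t) * p0 + t * p1   # k-th pin along the first segment
        end = (1 - t) * p1 + t * p2     # k-th pin along the second segment
        chords.append((start, end))
    return chords

for start, end in string_art_chords((0, 0), (0, 10), (10, 10)):
    print(start, "->", end)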
String art has its origins in the 'curve stitch' activities invented by Mary Everest Boole at the end of the 19th century to make mathematical ideas more accessible to children. It was popularised as a decorative craft in the late 1960s through kits and books.
A computational form of string art that can produce photo-realistic artwork was introduced by Petros Vrellis, in 2016.
Gallery
See also
Bézier curve
Envelope (mathematics)
N-connectedness
References
|
https://en.wikipedia.org/wiki/Infill
|
In urban planning, infill, or in-fill, is the rededication of land in an urban environment, usually open space, to new construction. Infill also applies, within an urban polity, to construction on any undeveloped land that is not on the urban margin. The slightly broader term "land recycling" is sometimes used instead. Infill has been promoted as an economical use of existing infrastructure and a remedy for urban sprawl. Its detractors view it as overloading urban services, including increased traffic congestion and pollution, and decreasing urban green-space. Many also criticize it for social and historical reasons, partly due to its unproven effects and its similarity to gentrification.
In the urban planning and development industries, infill has been defined as the use of land within a built-up area for further construction, especially as part of a community redevelopment or growth management program or as part of smart growth.
It focuses on the reuse and repositioning of obsolete or underutilized buildings and sites.
Urban infill projects can also be considered a means of sustainable land development close to a city's urban core.
Redevelopment or land recycling are broad terms which describe development that occurs on previously developed land. Infill development differs in its specificity because it describes buildings that are constructed on vacant or underused property or between existing buildings. Terms describing types of redevelopment that do not involve using vacant land should not be confused with infill development. Infill development is commonly misunderstood to be gentrification, which is a different form of redevelopment.
Urban infill development vs. gentrification
The similarity between the concepts of gentrification and infill development are a source of confusion which may explain social opposition to infill development.
Gentrification is a term that is challenging to define because it manifests differently by location, and describes a
|
https://en.wikipedia.org/wiki/Diamond-square%20algorithm
|
The diamond-square algorithm is a method for generating heightmaps for computer graphics. It is a slightly better algorithm than the three-dimensional implementation of the midpoint displacement algorithm, which produces two-dimensional landscapes. It is also known as the random midpoint displacement fractal, the cloud fractal or the plasma fractal, because of the plasma effect produced when applied.
The idea was first introduced by Fournier, Fussell and Carpenter at SIGGRAPH in 1982.
The diamond-square algorithm starts with a two-dimensional grid, then randomly generates terrain height from four seed values arranged in a grid of points so that the entire plane is covered in squares.
Description
The diamond-square algorithm begins with a two-dimensional square array of width and height 2^n + 1. The four corner points of the array must first be set to initial values.
The diamond and square steps are then performed alternately until all array values have been set.
The diamond step: For each square in the array, set the midpoint of that square to be the average of the four corner points plus a random value.
The square step: For each diamond in the array, set the midpoint of that diamond to be the average of the four corner points plus a random value.
Each random value is multiplied by a scale constant, which decreases with each iteration by a factor of 2^(−h), where h is a value between 0.0 and 1.0 (lower values produce rougher terrain).
During the square steps, points located on the edges of the array will have only three adjacent values set, rather than four. There are a number of ways to handle this complication - the simplest being to take the average of just the three adjacent values. Another option is to 'wrap around', taking the fourth value from the other side of the array. When used with consistent initial corner values, this method also allows generated fractals to be stitched together without discontinuities.
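A compact sketch of the algorithm as described above, using the simple three-neighbour average at the edges; the corner seeding, roughness value and random amplitude are illustrative choices:

import numpy as np

def diamond_square(n, roughness=0.8, seed=0):
    rng = np.random.default_rng(seed)
    size = 2**n + 1
    grid = np.zeros((size, size))
    for r, c in [(0, 0), (0, size - 1), (size - 1, 0), (size - 1, size - 1)]:
        grid[r, c] = rng.uniform(-1.0, 1.0)        # initial corner values

    step, scale = size - 1, 1.0
    while step > 1:
        half = step // 2
        # Diamond step: centre of each square = mean of its four corners + noise.
        for r in range(half, size, step):
            for c in range(half, size, step):
                corners = (grid[r - half, c - half] + grid[r - half, c + half] +
                           grid[r + half, c - half] + grid[r + half, c + half])
                grid[r, c] = corners / 4.0 + rng.uniform(-scale, scale)
        # Square step: centre of each diamond = mean of its 3 or 4 set neighbours + noise.
        for r in range(0, size, half):
            for c in range((r + half) % step, size, step):
                neighbours = [grid[r + dr, c + dc]
                              for dr, dc in ((-half, 0), (half, 0), (0, -half), (0, half))
                              if 0 <= r + dr < size and 0 <= c + dc < size]
                grid[r, c] = sum(neighbours) / len(neighbours) + rng.uniform(-scale, scale)
        step = half
        scale *= 2**(-roughness)     # shrink the random range each iteration
    return grid

heightmap = diamond_square(5)        # a 33 x 33 terrain patch
print(heightmap.shape)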
Visualization
The image below shows the
|
https://en.wikipedia.org/wiki/ISO%2031-11
|
ISO 31-11:1992 was the part of international standard ISO 31 that defines mathematical signs and symbols for use in physical sciences and technology. It was superseded in 2009 by ISO 80000-2:2009 and subsequently revised in 2019 as ISO 80000-2:2019.
Its definitions include the following:
Mathematical logic
Sets
Miscellaneous signs and symbols
Operations
Functions
Exponential and logarithmic functions
Circular and hyperbolic functions
Complex numbers
Matrices
Coordinate systems
Vectors and tensors
Special functions
See also
Mathematical symbols
Mathematical notation
References and notes
Mathematical symbols
Mathematical notation
|
https://en.wikipedia.org/wiki/Optical%20fiber
|
An optical fiber, or optical fibre in Commonwealth English, is a flexible glass or plastic fiber that can transmit light from one end to the other. Such fibers find wide usage in fiber-optic communications, where they permit transmission over longer distances and at higher bandwidths (data transfer rates) than electrical cables. Fibers are used instead of metal wires because signals travel along them with less loss; in addition, fibers are immune to electromagnetic interference, a problem from which metal wires suffer. Fibers are also used for illumination and imaging, and are often wrapped in bundles so they may be used to carry light into, or images out of confined spaces, as in the case of a fiberscope. Specially designed fibers are also used for a variety of other applications, such as fiber optic sensors and fiber lasers.
Glass optical fibers are typically made by drawing, while plastic fibers can be made either by drawing or by extrusion. Optical fibers typically include a core surrounded by a transparent cladding material with a lower index of refraction. Light is kept in the core by the phenomenon of total internal reflection which causes the fiber to act as a waveguide. Fibers that support many propagation paths or transverse modes are called multi-mode fibers, while those that support a single mode are called single-mode fibers (SMF). Multi-mode fibers generally have a wider core diameter and are used for short-distance communication links and for applications where high power must be transmitted. Single-mode fibers are used for most communication links longer than .
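The guiding condition can be illustrated numerically: light stays in the core when it strikes the core-cladding boundary at more than the critical angle set by the two refractive indices. The index values below are illustrative assumptions, not properties of any particular fiber:

import math

n_core = 1.48    # assumed core refractive index
n_clad = 1.46    # assumed cladding refractive index (lower than the core)

critical_angle = math.degrees(math.asin(n_clad / n_core))   # measured from the surface normal
numerical_aperture = math.sqrt(n_core**2 - n_clad**2)       # sine of the max acceptance half-angle

print(f"total internal reflection beyond {critical_angle:.1f} degrees from the normal")
print(f"numerical aperture of roughly {numerical_aperture:.2f}")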
Being able to join optical fibers with low loss is important in fiber optic communication. This is more complex than joining electrical wire or cable and involves careful cleaving of the fibers, precise alignment of the fiber cores, and the coupling of these aligned cores. For applications that demand a permanent connection a fusion splice is common. In this technique, an electric arc is use
|
https://en.wikipedia.org/wiki/Accelerated%20aging
|
Accelerated aging is testing that uses aggravated conditions of heat, humidity, oxygen, sunlight, vibration, etc. to speed up the normal aging processes of items. It is used to help determine the long-term effects of expected levels of stress within a shorter time, usually in a laboratory by controlled standard test methods. It is used to estimate the useful lifespan of a product or its shelf life when actual lifespan data is unavailable. This occurs with products that have not existed long enough to have gone through their useful lifespan: for example, a new type of car engine or a new polymer for replacement joints.
Physical testing or chemical testing is carried out by subjecting the product to
representative levels of stress for long time periods,
unusually high levels of stress used to accelerate the effects of natural aging, or
levels of stress that intentionally force failures (for further analysis).
Mechanical parts are run at very high speed, far in excess of what they would receive in normal usage. Polymers are often kept at elevated temperatures, in order to accelerate chemical breakdown. Environmental chambers are often used.
Also, the device or material under test can be exposed to rapid (but controlled) changes in temperature, humidity, pressure, strain, etc. For example, cycles of heat and cold can simulate the effect of day and night for a few hours or minutes.
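One widely used rule of thumb for temperature-driven tests (a common industry model, not taken from this article) is the Q10 approach, in which the aging rate is assumed to roughly double for every 10 °C rise when Q10 = 2. The temperatures and Q10 value in this sketch are illustrative assumptions:

def acceleration_factor(t_chamber_c, t_ambient_c, q10=2.0):
    """Assumed Q10 model: rate multiplier for aging at an elevated temperature."""
    return q10 ** ((t_chamber_c - t_ambient_c) / 10.0)

aaf = acceleration_factor(t_chamber_c=55.0, t_ambient_c=23.0)
shelf_life_days = 365
chamber_days = shelf_life_days / aaf
print(f"~{aaf:.1f}x acceleration: {shelf_life_days} days of ambient aging "
      f"in about {chamber_days:.0f} days at 55 degrees C")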
Library and archival preservation science
Accelerated aging is also used in library and archival preservation science. In this context, a material, usually paper, is subjected to extreme conditions in an effort to speed up the natural aging process. Usually, the extreme conditions consist of elevated temperature, but tests making use of concentrated pollutants or intense light also exist. These tests may be used for several purposes.
To predict the long-term effects of particular conservation treatments. In such a test, treated and untreated papers are both subjected to a sin
|
https://en.wikipedia.org/wiki/Three-point%20cross
|
In genetics, a three-point cross is used to determine the loci of three genes in an organism's genome.
An individual heterozygous for three mutations is crossed with a homozygous recessive individual, and the phenotypes of the progeny are scored. The two most common phenotypes that result are the parental gametes; the two least common phenotypes that result come from a double crossover in gamete formation. By comparing the parental and double-crossover phenotypes, the geneticist can determine which gene is located between the others on the chromosome.
The recombinant frequency is the ratio of non-parental phenotypes to total individuals. It is expressed as a percentage, which is equivalent to the number of map units (or centiMorgans) between two genes. For example, if 100 out of 1000 individuals display the phenotype resulting from a crossover between genes a and b, then the recombination frequency is 10 percent and genes a and b are 10 map-units apart on the chromosome.
If the recombination frequency is greater than 50 percent, it means that the genes are unlinked: they are either located on different chromosomes or are sufficiently distant from each other on the same chromosome. Any recombination frequency greater than 50 percent is expressed as exactly 50 percent because, being unlinked, the genes are as likely as not to be separated during gamete formation.
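The arithmetic can be sketched with hypothetical progeny counts (the genotype classes and numbers below are invented for illustration; they are not data from any real cross). Recombinants for each interval include the double crossovers, which is also what identifies the middle gene:

# Hypothetical three-point testcross counts; gene order a-b-c is implied by the
# double-crossover classes differing from the parentals only at b.
progeny = {
    "abc": 370, "+++": 366,   # parental classes (most frequent)
    "ab+": 60,  "++c": 64,    # single crossovers between b and c
    "a++": 48,  "+bc": 52,    # single crossovers between a and b
    "a+c": 5,   "+b+": 7,     # double crossovers (least frequent)
}
total = sum(progeny.values())

rf_ab = (48 + 52 + 5 + 7) / total * 100   # a-b recombinants, doubles included
rf_bc = (60 + 64 + 5 + 7) / total * 100   # b-c recombinants, doubles included

print(f"a-b distance: {rf_ab:.1f} map units")
print(f"b-c distance: {rf_bc:.1f} map units")
print(f"a-c distance (additive): {rf_ab + rf_bc:.1f} map units")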
References
Genetics
|
https://en.wikipedia.org/wiki/Earth%20systems%20engineering%20and%20management
|
Earth systems engineering and management (ESEM) is a discipline used to analyze, design, engineer and manage complex environmental systems. It entails a wide range of subject areas including anthropology, engineering, environmental science, ethics and philosophy. At its core, ESEM looks to "rationally design and manage coupled human–natural systems in a highly integrated and ethical fashion". ESEM is a newly emerging area of study that has taken root at the University of Virginia, Cornell and other universities throughout the United States, and at the Centre for Earth Systems Engineering Research (CESER) at Newcastle University in the United Kingdom. Founders of the discipline are Braden Allenby and Michael Gorman.
Introduction to ESEM
For centuries, humans have utilized the earth and its natural resources to advance civilization and develop technology. "As a principle result of Industrial Revolutions and associated changes in human demographics, technology systems, cultures, and economic systems have been the evolution of an Earth in which the dynamics of major natural systems are increasingly dominated by human activity".
In many ways, ESEM views the earth as a human artifact. "In order to maintain continued stability of both natural and human systems, we need to develop the ability to rationally design and manage coupled human-natural systems in a highly integrated and ethical fashion- an Earth Systems Engineering and Management (ESEM) capability".
ESEM has been developed by a few individuals. One of particular note is Braden Allenby. Allenby holds that the foundation upon which ESEM is built is the notion that "the Earth, as it now exists, is a product of human design". In fact there are no longer any natural systems left in the world, "there are no places left on Earth that don't fall under humanity's shadow". "So the question is not, as some might wish, whether we should begin ESEM, because we have been doing it for a long time, albeit unintentionally.
|
https://en.wikipedia.org/wiki/Corporation%20for%20Education%20Network%20Initiatives%20in%20California
|
The Corporation for Education Network Initiatives in California (CENIC) is a nonprofit corporation formed in 1997 to provide high-performance, high-bandwidth networking services to California universities and research institutions. Through this corporation, representatives from all of California's K-20 public education combine their networking resources toward the operation, deployment, and maintenance of the California Research and Education Network, or CalREN. Today, CalREN operates over 8,000 miles of fiber optic cable and serves more than 20 million users.
History
Beginning in the mid 1980s, research universities were served by a National Science Foundation (NSF) funded network, NSFNet. This funding ended, however, in 1995, as the NSF believed that the newly established commercial Internet could meet the needs of these institutions.
A model for wide-area networking began to emerge in the early 1990s, separating regional network infrastructure from national or international “backbone” infrastructure. Regional networks would connect to one or more “Internet exchange points” where traffic would be sent to or received from one or more backbone networks. When NSFNet ceased operation, this new network structure carried both research and commercial traffic.
Researchers at major universities soon began to complain that service from the commercial Internet was inadequate. This led to discussion of a separate network, funded by and for research universities, and the ultimate establishment of Internet2. The Internet2 backbone would have only two connection points in California.
At the same time, officials at the University of California, USC, Caltech, Stanford, and the California State University system (CSU) began discussing how to connect their institutions to the proposed new Internet2 network. They recognized that the key to a comprehensive information technology strategy was the development of a cohesive and seamless statewide, high-speed, advanced service
|
https://en.wikipedia.org/wiki/Ant%E2%80%93fungus%20mutualism
|
The ant–fungus mutualism is a symbiosis seen between certain ant and fungal species, in which ants actively cultivate fungus much like humans farm crops as a food source. There is evidence of only two instances in which this form of agriculture evolved in ants, resulting in a dependence on fungi for food. These instances were the attine ants and some ants that are part of the Megalomyrmex genus. In some species, the ants and fungi are dependent on each other for survival. This type of codependency is prevalent among herbivores who rely on plant material for nutrition. The fungus' ability to convert the plant material into a food source accessible to its host makes it the ideal partner. The leafcutter ant is a well-known example of this symbiosis. Leafcutter ant species can be found from southern South America up to the United States. However, ants are not the only ground-dwelling arthropods which have developed symbioses with fungi. A similar mutualism with fungi is also noted in termites within the subfamily Macrotermitinae, which are widely distributed throughout the Old World tropics with the highest diversity in Africa.
Overview
Fungus-growing ants actively propagate, nurture, and defend Lepiotaceae and other lineages of basidiomycete fungus. In return, the fungus provides nutrients for the ants, which may accumulate in specialized hyphal-tips known as "gongylidia". These growths are synthesized from plant substrates and are rich in lipids and carbohydrates. In some advanced genera the queen ant may take a pellet of the fungus with her when she leaves to start a new colony. There are three castes of female worker ants in Attini colonies which all participate in foraging plant matter to feed the fungal cultivar. The lowest caste, minor, is smallest in size but largest in number and is primarily responsible for maintaining the fungal cultivar for the rest of the colony. The symbiosis between basidiomycete fungi and attine ants involves the fungal pathogen, Esco
|
https://en.wikipedia.org/wiki/Clipping%20%28audio%29
|
Clipping is a form of waveform distortion that occurs when an amplifier is overdriven and attempts to deliver an output voltage or current beyond its maximum capability. Driving an amplifier into clipping may cause it to output power in excess of its power rating.
In the frequency domain, clipping produces strong harmonics in the high-frequency range (as the clipped waveform comes closer to a square wave). The extra high-frequency weighting of the signal could make tweeter damage more likely than if the signal were not clipped.
In most cases, the distortion associated with clipping is unwanted, and is visible on an oscilloscope even if it is inaudible. However, clipping is often used in music for artistic effect, especially in heavier genres.
Overview
When an amplifier is pushed to create a signal with more power than its power supply can produce, it will amplify the signal only up to its maximum capacity, at which point the signal can be amplified no further. As the signal simply "cuts" or "clips" at the maximum capacity of the amplifier, the signal is said to be "clipping". The extra signal which is beyond the capability of the amplifier is simply cut off, resulting in a sine wave becoming a distorted square-wave-type waveform.
Amplifiers have voltage, current and thermal limits. Clipping may occur due to limitations in the power supply or the output stage. Some amplifiers are able to deliver peak power without clipping for short durations before energy stored in the power supply is depleted or the amplifier begins to overheat.
Sound
Many electric guitar players intentionally overdrive their amplifiers (or insert a "fuzz box") to cause clipping in order to get a desired sound (see guitar distortion).
Some audiophiles believe that the clipping behavior of vacuum tubes with little or no negative feedback is superior to that of transistors, in that vacuum tubes clip more gradually than transistors (i.e. soft clipping, and mostly even harmonics), resulting in
|
https://en.wikipedia.org/wiki/Engineers%20for%20a%20Sustainable%20World
|
Engineers for a Sustainable World (ESW) is a not-for-profit network headquartered in Pittsburgh, PA, USA. ESW is an umbrella organization with chapters established at over 50 colleges and universities, as well as city chapters, located primarily in the United States and Canada. ESW members work on technical design projects that have a focus on sustainability and environmental issues. Projects can be located on-campus, in the local community, or internationally. Chapters are made up of students or professionals and are semi-autonomous.
ESW was known as Engineers Without Frontiers USA (EWF-USA) through 2004. ESW was established in 2001 in Ithaca, New York at Cornell University. ESW was based at Cornell from 2001 through August 30, 2007, when it moved its headquarters to the San Francisco Bay Area. In July 2011, ESW moved its headquarters to Merced, California at the University of California, Merced. In July 2013, the organization became an independent legal entity with its headquarters currently in Pittsburgh, Pennsylvania.
Overview
ESW is managed by a leadership team that consists entirely of volunteers. They include the executive director, development director, chief operating officer, program directors, chapter relations director, and professional relations director, along with affiliated departments. Volunteers include current chapter members as well as graduated professionals. Since incorporation, the national leadership team has been overseen by a board of directors. It also has an advisory board of professionals.
ESW also has a board of directors with additional members from academia and corporations.
On its official website, ESW defines its vision as the following:
ESW defines its mission as:
ESW defines its goals as follows:
In support of the mission, ESW's primary goals are to:
Stimulate and foster an increased, and more diverse community of engineers;
Bring together students and professionals of various disciplines to create lasting solutions with immediate impacts
|
https://en.wikipedia.org/wiki/Washington%20Accord%20%28credentials%29
|
The Washington Accord is an international accreditation agreement for undergraduate professional engineering academic degrees and postgraduate professional
engineering academic degrees between the bodies responsible for accreditation in its signatory countries. The full signatories as of 2023 are Australia, Canada, China, Costa Rica, Hong Kong, India, Indonesia, Ireland, Japan, Korea, Malaysia, Mexico, New Zealand, Pakistan, Peru, Russia, Singapore, South Africa, Sri Lanka, Taiwan, Turkey, the United Kingdom and the United States.
Overview
The Washington Accord recognizes that there is substantial equivalence of programs accredited by those signatories. Graduates of accredited programs in any of the signatory countries are recognized by the other signatory countries as having met the academic requirements for entry to the practice of engineering. Recognition of accredited programs is not retroactive but takes effect only from the date of admission of the country to signatory status.
Scope
The Washington Accord covers both undergraduate and postgraduate engineering degrees. Engineering technology programs are not covered by the accord. Engineering technology programs are covered under the Sydney Accord and Dublin Accord. Only qualifications awarded after the signatory country or region became part of the Washington Accord are recognized. The accord is not directly responsible for the licensing of professional engineers and the registration of chartered engineers but it does cover the academic requirements that are part of the licensing processes in signatory countries.
Signatories
The following are the signatory countries and territories of the Washington Accord, their respective accreditation bodies and years of admission:
The following countries have provisional signatory status and may become full signatory members in the future:
See also
Regulation and licensure in engineering
Seoul Accord
European Engineer
References
External links
International Eng
|
https://en.wikipedia.org/wiki/Automata-based%20programming
|
Automata-based programming is a programming paradigm in which the program or part of it is thought of as a model of a finite-state machine (FSM) or any other (often more complicated) formal automaton (see automata theory). Sometimes a potentially infinite set of possible states is introduced, and such a set can have a complicated structure, not just an enumeration.
Finite-state machine-based programming is generally the same, but, formally speaking, does not cover all possible variants, as FSM stands for finite-state machine, and automata-based programming does not necessarily employ FSMs in the strict sense.
The following properties are key indicators for automata-based programming:
The time period of the program's execution is clearly separated down to the automaton steps. Each step is effectively an execution of a code section (the same for all the steps) which has a single entry point. That section may be divided into subsections to be executed depending on different states, although this is not necessary.
Any communication between the automaton steps is only possible via the explicitly noted set of variables named the automaton state. Between any two steps, the program cannot have implicit components of its state, such as local variables' values, return addresses, the current instruction pointer, etc. That is, the state of the whole program, taken at any two moments of entering an automaton step, can only differ in the values of the variables being considered as the automaton state.
The whole execution of the automata-based code is a cycle of the automaton steps.
Another reason for using the notion of automata-based programming is that the programmer's style of thinking about the program in this technique is very similar to the style of thinking used to solve mathematical tasks using Turing machines, Markov algorithms, etc.
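A minimal sketch of this style, applied to the task described in the Example section below; the state names and the per-character step function are illustrative choices rather than a canonical implementation:

import sys

# Explicit automaton state: nothing else carries information between steps.
BEFORE, INSIDE, AFTER = range(3)

def step(state, ch, out):
    """One automaton step: consume a single character, return the new state."""
    if state == BEFORE:
        if ch.isspace():
            return BEFORE
        out.append(ch)
        return INSIDE
    if state == INSIDE:
        if ch.isspace():
            return AFTER
        out.append(ch)
        return INSIDE
    return AFTER          # AFTER: ignore the rest of the line

for line in sys.stdin:
    state, out = BEFORE, []
    for ch in line.rstrip("\n"):
        state = step(state, ch, out)   # the whole run is a cycle of automaton steps
    print("".join(out))

Between any two calls to step, the only information carried forward is the value of state (and the accumulated output), matching the properties listed above.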
Example
Task
Consider the task of reading a text from standard input line-by-line and writing the first word of each line to stan
|
https://en.wikipedia.org/wiki/Resolvent%20formalism
|
In mathematics, the resolvent formalism is a technique for applying concepts from complex analysis to the study of the spectrum of operators on Banach spaces and more general spaces. Formal justification for the manipulations can be found in the framework of holomorphic functional calculus.
The resolvent captures the spectral properties of an operator in the analytic structure of the functional. Given an operator A, the resolvent may be defined as R(z; A) = (A − zI)^−1.
Among other uses, the resolvent may be used to solve the inhomogeneous Fredholm integral equations; a commonly used approach is a series solution, the Liouville–Neumann series.
The resolvent of A can be used to directly obtain information about the spectral decomposition of A. For example, suppose λ is an isolated eigenvalue in the spectrum of A. That is, suppose there exists a simple closed curve C_λ in the complex plane that separates λ from the rest of the spectrum of A. Then the residue
P_λ = −(1/(2πi)) ∮_{C_λ} R(z; A) dz
defines a projection operator onto the λ eigenspace of A.
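As an illustrative special case (a sanity check, not part of the general theory above), consider a diagonal operator on a two-dimensional space:

A = \begin{pmatrix} a & 0 \\ 0 & b \end{pmatrix}, \quad a \neq b,
\qquad
R(z; A) = (A - zI)^{-1} = \begin{pmatrix} \dfrac{1}{a - z} & 0 \\ 0 & \dfrac{1}{b - z} \end{pmatrix},

and for a small circle C_a enclosing a but not b,

-\frac{1}{2\pi i} \oint_{C_a} R(z; A)\, dz = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix},

which is exactly the projection onto the eigenspace of a.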
The Hille–Yosida theorem relates the resolvent through a Laplace transform to an integral over the one-parameter group of transformations generated by A. Thus, for example, if A is Hermitian, then U(t) = exp(itA) is a one-parameter group of unitary operators. For suitable z, the resolvent of A at z can be expressed as a Laplace transform of this group, where the integral is taken along an appropriate ray in the complex plane.
History
The first major use of the resolvent operator as a series in (cf. Liouville–Neumann series) was by Ivar Fredholm, in a landmark 1903 paper in Acta Mathematica that helped establish modern operator theory.
The name resolvent was given by David Hilbert.
Resolvent identity
For all z, w in ρ(A), the resolvent set of an operator A, the first resolvent identity (also called Hilbert's identity) holds:
R(z; A) − R(w; A) = (z − w) R(z; A) R(w; A).
(Note that Dunford and Schwartz, cited, define the resolvent as (zI − A)^−1 instead, so that the formula above differs in sign from theirs.)
The second resolvent identity is a generalization of the first reso
|
https://en.wikipedia.org/wiki/Kontorovich%E2%80%93Lebedev%20transform
|
In mathematics, the Kontorovich–Lebedev transform is an integral transform which uses a Macdonald function (modified Bessel function of the second kind) with imaginary index as its kernel. Unlike other Bessel function transforms, such as the Hankel transform, this transform involves integrating over the index of the function rather than its argument.
The transform of a function ƒ(x) and its inverse (provided they exist) are given below:
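One common normalization of the transform pair (conventions vary slightly between references, so this should be read as an illustrative form rather than the only one) is

g(y) = \int_0^{\infty} f(x)\, K_{iy}(x)\, dx,
\qquad
f(x) = \frac{2}{\pi^2 x} \int_0^{\infty} g(y)\, K_{iy}(x)\, y \sinh(\pi y)\, dy.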
Laguerre had previously studied a similar transform based on the Laguerre functions.
Erdélyi et al., for instance, contains a short list of Kontorovich–Lebedev transforms as well as references to the original work of Kontorovich and Lebedev in the late 1930s. This transform is mostly used in solving the Laplace equation in cylindrical coordinates for wedge-shaped domains by the method of separation of variables.
References
Erdélyi et al. Table of Integral Transforms Vol. 2 (McGraw Hill 1954)
I.N. Sneddon, The use of integral Transforms, (McGraw Hill, New York 1972)
Integral transforms
Special functions
|
https://en.wikipedia.org/wiki/Mizuna
|
Mizuna (水菜), kyouna (京菜), Japanese mustard greens, or spider mustard, is a cultivar of Brassica rapa var. niposinica.
Description and use
Possessing dark green, serrated leaves, mizuna is described as having, when raw, a "piquant, mild peppery flavor...slightly spicy, but less so than arugula." It is also used in stir-fries, soups, and nabemono (Japanese hot pots).
Varieties
In addition to the term mizuna (and its alternates) being applied to at least two different species of Brassica, horticulturalists have defined and named a number of varieties. For example, a resource provided by Cornell University and the United States Department of Agriculture lists sixteen varieties including "Early Mizuna", "Kyona Mizuna", "Komatsuna Mizuna", "Vitamin Green Mizuna", "Kyoto Mizuna", "Happy Rich Mizuna", "Summer Fest Mizuna", "Tokyo Early Mizuna", "Mibuna Mizuna", "Red Komatsuna Mizuna", "Waido Mizuna" and "Purple Mizuna". There is also a variety known as pink mizuna.
Cultivation
Mizuna has been cultivated in Japan since ancient times. Mizuna was successfully grown on the International Space Station in 2019. It grows in hardiness zones 4 to 9 and prefers full sun or partial shade, well-drained soil, and a pH of 6.5–7.0. It can be grown as a microgreen, sowing every 3 cm, or for its leaves with a 20 cm spacing. It is produced by more than 30 countries around the world, but China, Japan, South Korea, India and the United States account for 70% of global production.
References
External links
PROTAbase on Brassica rapa
Brassica
Leaf vegetables
Japanese vegetables
Space-flown life
|
https://en.wikipedia.org/wiki/Smart%20environment
|
Smart environments link computers and other smart devices to everyday settings and tasks. Smart environments include smart homes, smart cities and smart manufacturing.
Introduction
Smart environments are an extension of pervasive computing. According to Mark Weiser, pervasive computing promotes the idea of a world that is connected to sensors and computers. These sensors and computers are integrated with everyday objects in peoples' lives and are connected through networks.
Definition
Cook and Das define a smart environment as "a small world where different kinds of smart device are continuously working to make inhabitants' lives more comfortable." Smart environments aim to improve the experience of individuals in any environment by replacing hazardous work, physical labor, and repetitive tasks with automated agents.
Poslad
differentiates three different kinds of smart environments for systems, services and devices: virtual (or distributed) computing environments, physical environments and human environments, or a hybrid combination of these:
Virtual computing environments enable smart devices to access pertinent services anywhere and anytime.
Physical environments may be embedded with a variety of smart devices of different types including tags, sensors and controllers and have different form factors ranging from nano- to micro- to macro-sized.
Human environments: humans, either individually or collectively, inherently form a smart environment for devices. However, humans may themselves be accompanied by smart devices such as mobile phones, use surface-mounted devices (wearable computing) and contain embedded devices (e.g., pacemakers to maintain a healthy heart operation or AR contact lenses).
Features
Smart environments are broadly characterized by the following features:
Remote control of devices, for example via power line communication systems.
Device Communication, using middleware, and Wireless communication to form a picture of con
|
https://en.wikipedia.org/wiki/Magnetotactic%20bacteria
|
Magnetotactic bacteria (or MTB) are a polyphyletic group of bacteria that orient themselves along the magnetic field lines of Earth's magnetic field. Discovered in 1963 by Salvatore Bellini and rediscovered in 1975 by Richard Blakemore, this alignment is believed to aid these organisms in reaching regions of optimal oxygen concentration. To perform this task, these bacteria have organelles called magnetosomes that contain magnetic crystals. The biological phenomenon of microorganisms tending to move in response to the environment's magnetic characteristics is known as magnetotaxis. However, this term is misleading in that every other application of the term taxis involves a stimulus-response mechanism. In contrast to the magnetoreception of animals, the bacteria contain fixed magnets that force the bacteria into alignment—even dead cells are dragged into alignment, just like a compass needle.
Introduction
The first description of magnetotactic bacteria was in 1963 by Salvatore Bellini of the University of Pavia. While observing bog sediments under his microscope, Bellini noticed a group of bacteria that evidently oriented themselves in a unique direction. He realized these microorganisms moved according to the direction of the North Pole, and hence called them "magnetosensitive bacteria". The publications were academic (peer-reviewed by the Istituto di Microbiologia's editorial committee under the responsibility of the Institute's Director, Prof. L. Bianchi, as was usual in European universities at the time) and communicated in Italian, with short English, French and German summaries, in the official journal of a well-known institution, yet inexplicably seem to have attracted little attention until they were brought to the attention of Richard Frankel in 2007. Frankel translated them into English and the translations were published in the Chinese Journal of Oceanography and Limnology.
Richard Blakemore, then a microbiology graduate student at the University of Massachus
|
https://en.wikipedia.org/wiki/Choke%20%28electronics%29
|
In electronics, a choke is an inductor used to block higher-frequency alternating currents (AC) while passing direct current (DC) and lower-frequency ACs in a circuit. A choke usually consists of a coil of insulated wire often wound on a magnetic core, although some consist of a doughnut-shaped ferrite bead strung on a wire. The choke's impedance increases with frequency. Its low electrical resistance passes both AC and DC with little power loss, but its reactance limits the amount of AC passed.
The name comes from blocking—"choking"—high frequencies while passing low frequencies. It is a functional name: the component is called a "choke" if an inductor is used for blocking or decoupling higher frequencies, but is simply called an "inductor" if used in electronic filters or tuned circuits. Inductors designed for use as chokes are usually distinguished by not having the low-loss construction (high Q factor) required of inductors used in tuned circuits and filtering applications.
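As a numerical illustration of the frequency dependence (the 10 mH value below is an arbitrary example, not a figure from the article), the inductive reactance X_L = 2πfL of a choke can be compared across frequencies:

import math

L = 10e-3  # 10 mH choke (illustrative value)

def reactance(f_hz: float) -> float:
    """Inductive reactance in ohms at frequency f_hz."""
    return 2 * math.pi * f_hz * L

for f in (0, 50, 1_000, 100_000, 10_000_000):
    print(f"{f:>10} Hz -> {reactance(f):>12.1f} ohms")
# At DC (0 Hz) only the small winding resistance remains, while radio
# frequencies see an impedance of many kilo-ohms, which is the "choking" effect.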
Types and construction
Chokes are divided into two broad classes:
Audio frequency chokes—designed to block audio and power line frequencies while allowing DC to pass
Radio frequency chokes—designed to block radio frequencies while allowing audio and DC to pass.
Audio frequency choke
Audio frequency chokes usually have ferromagnetic cores to increase their inductance. They are often constructed similarly to transformers, with laminated iron cores and an air gap. The iron core increases the inductance for a given volume of the core. Chokes were frequently used in the design of rectifier power supplies for vacuum tube equipment such as radio receivers or amplifiers. They are commonly found in direct-current motor controllers to produce direct current (DC), where they are used in conjunction with large electrolytic capacitors to remove the voltage ripple (AC) from the output DC. A rectifier circuit designed for a choke-output filter may produce too much DC output voltage and subject the
|
https://en.wikipedia.org/wiki/History%20of%20the%20Hindu%E2%80%93Arabic%20numeral%20system
|
The Hindu–Arabic numeral system is a decimal place-value numeral system that uses a zero glyph as in "205".
Its glyphs are descended from the Indian Brahmi numerals. The full system emerged by the 8th to 9th centuries, and is first described outside India in Al-Khwarizmi's On the Calculation with Hindu Numerals (ca. 825), and then in Al-Kindi's four-volume work On the Use of the Indian Numerals (ca. 830). Today the name Hindu–Arabic numerals is usually used.
Decimal system
Historians trace modern numerals in most languages to the Brahmi numerals, which were in use around the middle of the 3rd century BC. The place value system, however, developed later. The Brahmi numerals have been found in inscriptions in caves and on coins in regions near Pune, Maharashtra and Uttar Pradesh in India. These numerals (with slight variations) were in use up to the 4th century.
During the Gupta period (early 4th century to the late 6th century), the Gupta numerals developed from the Brahmi numerals and were spread over large areas by the Gupta empire as it conquered territory. Beginning around the 7th century, the Gupta numerals developed into the Nagari numerals.
Development in India
During the Vedic period (1500–500 BCE), motivated by geometric construction of the fire altars and astronomy, the use of a numerical system and of basic mathematical operations developed in northern India. Hindu cosmology required the mastery of very large numbers such as the kalpa (the lifetime of the universe) said to be 4,320,000,000 years and the "orbit of the heaven" said to be 18,712,069,200,000,000 yojanas. Numbers were expressed using a "named place-value notation", using names for the powers of 10, like dasa, shatha, sahasra, ayuta, niyuta, prayuta, arbuda, nyarbuda, samudra, madhya, anta, parardha etc., the last of these being the name for a trillion (1012). For example, the number 26,432 was expressed as "2 ayuta, 6 sahasra, 4 shatha, 3 dasa, 2." In the Buddhist text Lalitavistara, the
|
https://en.wikipedia.org/wiki/Video%20display%20controller
|
A video display controller or VDC (also called a display engine or display interface) is an integrated circuit which is the main component in a video-signal generator, a device responsible for the production of a TV video signal in a computing or game system. Some VDCs also generate an audio signal, but that is not their main function.
VDCs were used in the home computers of the 1980s and also in some early video picture systems.
The VDC is the main component of the video signal generator logic, responsible for generating the timing of video signals such as the horizontal and vertical synchronization signals and the blanking interval signal. Sometimes other supporting chips were necessary to build a complete system, such as RAM to hold pixel data, ROM to hold character fonts, or some discrete logic such as shift registers.
Most often the VDC chip is completely integrated in the logic of the main computer system (its video RAM appears in the memory map of the main CPU), but sometimes it functions as a coprocessor that can manipulate the video RAM contents independently.
Video display controller vs. graphics processing unit
The difference between a display controller, a graphics accelerator, and a video compression/decompression IC is huge, but, since all of this logic is usually found on the chip of a graphics processing unit and is usually not available separately to the end-customer, there is often much confusion about these very different functional blocks.
GPUs with hardware acceleration started appearing during the 1990s. VDCs often had special hardware for the creation of "sprites", a function that in more modern VDP chips is done with the "Bit Blitter" using the "Bit blit" function.
One example of a typical video display processor is the "VDP2 32-bit background and scroll plane video display processor" of the Sega Saturn.
Another example is the Lisa (AGA) chip that was used for the improved graphics of the later generation Amiga computers.
That said, i
|
https://en.wikipedia.org/wiki/Mocha%20%28decompiler%29
|
Mocha is a Java decompiler, which allows programmers to translate a program's bytecode into source code.
A beta version of Mocha was released in 1996 by Dutch developer Hanpeter van Vliet, alongside an obfuscator named Crema. A controversy erupted and he temporarily withdrew Mocha from public distribution. As of 2009 the program is still available for distribution, and may be used freely as long as it is not modified. Borland's JBuilder includes a decompiler based on Mocha. Van Vliet's websites went offline after he died of cancer on December 31, 1996, at the age of 34.
See also
JAD (JAva Decompiler)
JD
References
External links
Java decompilers
Software obfuscation
|
https://en.wikipedia.org/wiki/Braid-breaker
|
A braid-breaker is a filter that prevents television interference (TVI). In many cases, TVI is caused by the high field strength of a nearby high frequency (HF) transmitter; the aerial down lead plugged into the back of the TV acts as a longwire antenna or as a simple vertical element. The radio frequency (RF) current flowing through the tuner of the TV tends to generate harmonics which then spoil the viewing.
The braid-breaker works by preventing RF signals picked up on the outside of the cable from flowing into the TV set, while passing the RF travelling inside the coax from the antenna.
Designs
Designs for diminishing unwanted signals are based on two types of filters: a “choke” filter which blocks signals in the electrical mode most interference uses, and filters that selectively admit or impede signals depending on the signal frequency.
Further, carefully chosen combinations of filters of either one type or both types multiply each other's effects, so that two filters, even if only slightly different, are more effective than either filter alone.
Ferrite choke
Ferrite ring chokes work by presenting a high impedance to signals traveling along the braid only, but pass differential-mode ("balanced") currents unchanged. The wanted signal is in differential mode, with an equal and opposite current flowing in the braid to that in the cable core. The alternating current in the braid is impeded by the magnetic fields created in the ferrite, effectively placing a large inductance in series with the braid. The currents from the wanted signal, however, produce equal and opposite magnetic fluxes in the ferrite which cancel out.
The device is called a "choke" because the ferrite in effect "chokes off" the signal path for interference.
High-pass filter
The other type of filter used is based on frequency: below their operating frequency limit, inductors (coils) impede signals at higher frequencies more and admit low frequencies, whereas capacitors do the opposite: capaci
|
https://en.wikipedia.org/wiki/Television%20interference
|
Television interference (TVI) is a particular case of electromagnetic interference which affects television reception. Many natural and man-made phenomena can disrupt the reception of television signals. These include naturally occurring and artificial spark discharges, and effects due to the operation of radio transmitters.
Analog television broadcasts display different effects due to different kinds of interference. Digital television reception generally gives a good quality picture until the interference is so large that it can no longer be eliminated by the error checking systems in the receiver, at which point the video display becomes pixelated, distorts, or goes blank.
Co-channel and multipath (ghost)
During unusual atmospheric conditions, a distant station normally undetectable at a particular location may provide a much stronger signal than usual. The analog television picture may display the sum of the two signals, producing an image from the strong local signal with traces or "ghosts" from the distant, weaker signal. Television broadcast stations are located and assigned to channels so that such events are rare. Readjustment of the receiving antenna may allow more of the distant signal to be rejected, improving image quality.
A local signal may travel by more than one path from the transmitter to receiving antenna. "Multipath" reception is visible as multiple impressions of the same image, slightly shifted along the width of the screen due to the varying transmission path. Some multipath reception is momentary due to road vehicles or aircraft passing; other multipath problems may persist due to reflection off tall buildings or other landscape features. Strong multipath can cause the analog picture to "tear" or momentarily lose synchronization, causing it to roll or flip.
Static electricity and sparks
The sparks generated by static electricity can generate interference.
Many systems where radio frequency interference is caused by sparking can be modele
|
https://en.wikipedia.org/wiki/Pierre%20Gagnaire
|
Pierre Gagnaire (born 9 April 1950 in Apinac, Loire) is a French chef, and the head chef and owner of the eponymous Pierre Gagnaire restaurant at 6 rue Balzac in Paris (in the 8th arrondissement). Gagnaire is an iconoclastic chef at the forefront of the fusion cuisine movement. Beginning his career in St. Etienne where he won three Michelin Stars, Gagnaire tore at the conventions of classic French cooking by introducing jarring juxtapositions of flavours, tastes, textures, and ingredients.
On his website, Gagnaire gives his mission statement as the wish to run a restaurant which is 'facing tomorrow but respectful of yesterday' ("tourné vers demain mais soucieux d'hier").
In Europe
The restaurant, Pierre Gagnaire, specializes in modern French cuisine and has garnered three Michelin stars. Gagnaire is also head chef of Sketch in London. In 2005, both restaurants were ranked in the S.Pellegrino World's 50 Best Restaurants by industry magazine Restaurant, with Pierre Gagnaire ranking third for three consecutive years (2006, 2007, and 2008).
In the United States
In December 2009, Gagnaire made his United States debut with Twist, a new restaurant at the Mandarin Oriental in Las Vegas, which received a Forbes Five-Star Award but has since closed.
Media appearances
Pierre Gagnaire has made appearances on Fuji TV's Iron Chef. He represented France in the 1995 Iron Chef World Cup in Tokyo, with the other chefs chosen being Italy's Gianfranco Vissani and Hong Kong's Xu Cheng as well as Iron Chef Japanese Rokusaburo Michiba representing Japan. He also appeared in the "France Battle Special" at Château de Brissac, where he battled Iron Chef French Hiroyuki Sakai.
Awards
In 2015, Gagnaire won a Best Chef in the World award.
Restaurants
Paris, Pierre Gagnaire, 1996–
Paris, Gaya rive gauche par Pierre Gagnaire, 2005–
Berlin, Les Solistes by Pierre Gagnaire, 2013–2016 (closed)
Bordeaux, La Grande Maison
Châtelaillon, Gaya Cuisine De Bords de Mer
Courchevel, Pie
|
https://en.wikipedia.org/wiki/Estragole
|
Estragole (p-allylanisole, methyl chavicol) is a phenylpropene, a natural organic compound. Its chemical structure consists of a benzene ring substituted with a methoxy group and an allyl group. It is an isomer of anethole, differing with respect to the location of the double bond. It is a colorless liquid, although impure samples can appear yellow. It is a component of various trees and plants, including turpentine (pine oil), anise, fennel, bay, tarragon, and basil. It is used in the preparation of fragrances.
The compound is named for estragon, the French name of tarragon.
Production
Hundreds of tonnes of basil oil are produced annually by steam distillation of Ocimum basilicum (common basil). This oil is mainly estragole but also contains substantial amounts of linalool.
Estragole is the primary constituent of essential oil of tarragon (comprising 60–75%). It is also present in pine oil, turpentine, fennel, anise (2%), Clausena anisata and Syzygium anisatum.
Estragole is used in perfumes and is restricted in flavours as a biologically active principle: it can only be present in a flavour by using an essential oil. Upon treatment with potassium hydroxide, estragole converts to anethole. A known use of estragole is in the synthesis of magnolol.
Safety
Estragole is suspected to be carcinogenic and genotoxic, as is indicated by a report of the European Union Committee on Herbal Medicinal Products. Several studies have clearly established that the profiles of metabolism, metabolic activation, and covalent binding are dose dependent and that the relative importance diminishes markedly at low levels of exposure (that is, these events are not linear with respect to dose). In particular, rodent studies show that these events are minimal probably in the dose range of 1–10 mg/kg body weight, which is approximately 100 to 1,000 times the anticipated human exposure to this substance. For these reasons it is concluded that the present exposure to estragole resulting f
|
https://en.wikipedia.org/wiki/Neyer%20d-optimal%20test
|
The Neyer d-optimal test is a sensitivity test. It can be used to answer questions such as "How far can a carton of eggs fall, on average, before one breaks?" If these egg cartons are very expensive, the person running the test would like to minimize the number of cartons dropped, to keep the experiment cheaper and to perform it faster. The Neyer test allows the experimenter to choose the experiment that gives the most information. In this case, given the history of egg cartons which have already been dropped, and whether those cartons broke or not, the Neyer test says "you will learn the most if you drop the next egg carton from a height of 32.123 meters."
Applications
The Neyer test is useful in any situation when you wish to determine the average amount of a given stimulus needed in order to trigger a response. Examples:
Material Toughness - how far does this type of bottle filled with detergent need to fall before it breaks?
Drug Efficacy - how much of this drug is enough to cure this disease?
Toxicology - what percentage of contaminated seed is enough to cause a bird of this species to die?
Sensory Threshold - how strong does the light have to be for this photodetector to sense it?
Damage Threshold - how loud does the sound have to be in order to damage this microphone?
History
The Neyer d-optimal test was described by Barry T. Neyer in 1994. This method has replaced the earlier Bruceton analysis or "Up and Down Test" that was devised by Dixon and Mood in 1948 to allow computation with pencil and paper. Samples are tested at various stimulus levels, and the results (response or no response) noted. The Neyer test guides the experimenter to pick test levels that provide the maximum amount of information. Unlike previously developed methods, this method requires the use of a computer program to calculate the test levels.
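The published procedure is more involved, but its flavor can be sketched as follows: fit a simple response model to the tests run so far, then pick the next stimulus level that maximizes the determinant of the Fisher information (the D-optimality criterion). The logistic response model, candidate grid, variable names, and example data below are illustrative assumptions, not Neyer's exact algorithm.

import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, x, y):
    """Negative log-likelihood of a logistic response model."""
    mu, log_s = params
    s = np.exp(log_s)                              # keep the scale positive
    z = np.clip((np.asarray(x, float) - mu) / s, -35.0, 35.0)
    p = np.clip(1.0 / (1.0 + np.exp(-z)), 1e-9, 1 - 1e-9)
    y = np.asarray(y, float)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

def fisher_info(x, mu, s):
    """Fisher information of Bernoulli trials at levels x for parameters (mu, s)."""
    z = (np.asarray(x, float) - mu) / s
    p = 1.0 / (1.0 + np.exp(-z))
    w = p * (1 - p) / s**2
    return np.array([[np.sum(w), np.sum(w * z)],
                     [np.sum(w * z), np.sum(w * z**2)]])

def next_level(x, y, candidates):
    """Pick the candidate level that maximizes det(total Fisher information)."""
    fit = minimize(neg_log_likelihood, x0=[float(np.mean(x)), 0.0], args=(x, y))
    mu, s = fit.x[0], float(np.exp(fit.x[1]))
    base = fisher_info(x, mu, s)
    dets = [np.linalg.det(base + fisher_info([c], mu, s)) for c in candidates]
    return candidates[int(np.argmax(dets))]

# Drop heights tried so far (metres) and whether the carton broke (1) or not (0).
x = [10.0, 20.0, 25.0, 30.0, 40.0]
y = [0, 0, 1, 0, 1]
print(next_level(x, y, candidates=np.linspace(5, 50, 91)))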
Although not directly related to the test method, the likelihood ratio analysis method is often used to analyze th
|
https://en.wikipedia.org/wiki/Absorption%20wavemeter
|
An absorption wavemeter is a simple electronic instrument used to measure the frequency of radio waves. It is an older method of measuring frequency, widely used from the birth of radio in the early 20th century until the 1970s, when the development of inexpensive frequency counters, which have far greater accuracy, made it largely obsolete. A wavemeter consists of an adjustable resonant circuit calibrated in frequency, with a meter or other means to measure the voltage or current in the circuit. When adjusted to resonance with the unknown frequency, the resonant circuit absorbs energy, which is indicated by a dip on the meter. Then the frequency can be read from the dial.
Wavemeters are used for frequency measurements that do not require high accuracy, such as checking that a radio transmitter is operating within its correct frequency band, or checking for harmonics in the output. Many radio amateurs keep them as a simple way to check their output frequency. Similar devices can be made for detection of mobile phones. As an alternative, a dip meter can be used.
There are two categories of wavemeters: transmission wavemeters, which have an input and an output port and are inserted into the signal path, and absorption wavemeters, which are loosely coupled to the radio frequency source and absorb energy from it.
HF and VHF
The simplest form of the device is a variable capacitor with a coil wired across its terminals. A diode is attached to one of the terminals of the LC circuit, and a ceramic decoupling capacitor is wired between the free end of the diode and the other terminal of the LC circuit. Finally, a galvanometer is wired across the decoupling capacitor. The device will be sensitive to strong sources of radio waves at the frequency at which the LC circuit is resonant.
This is given by f = 1/(2π√(LC)), where L is the inductance of the coil and C is the capacitance of the variable capacitor.
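For example, with illustrative component values (not taken from any particular instrument), the tuning range of such a circuit follows directly from this formula:

import math

def resonant_frequency(L_henry: float, C_farad: float) -> float:
    """Resonant frequency in Hz of an LC circuit: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2 * math.pi * math.sqrt(L_henry * C_farad))

L = 1e-6  # 1 uH coil (illustrative value)
for C in (10e-12, 50e-12, 365e-12):   # a typical variable-capacitor sweep
    print(f"C = {C * 1e12:5.0f} pF -> f = {resonant_frequency(L, C) / 1e6:6.2f} MHz")
# Sweeping the capacitor from 10 pF to 365 pF tunes this example coil
# from roughly 50 MHz down to about 8 MHz.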
When the device is exposed to an RF field which is at the resonant frequency a DC voltage will appear on the terminals
|
https://en.wikipedia.org/wiki/Validator
|
A validator is a computer program used to check the validity or syntactical correctness of a fragment of code or document. The term is commonly used in the context of validating HTML, CSS, and XML documents like RSS feeds, though it can be used for any defined format or language.
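As a minimal illustration of the idea, a well-formedness check for XML (a weaker notion than validation against a schema or DTD) can be written with Python's standard library; the function name and sample fragments are illustrative:

import xml.etree.ElementTree as ET

def is_well_formed(xml_text: str) -> bool:
    """Return True if the fragment parses as well-formed XML."""
    try:
        ET.fromstring(xml_text)
        return True
    except ET.ParseError as err:
        print(f"not well-formed: {err}")
        return False

print(is_well_formed("<feed><title>ok</title></feed>"))   # True
print(is_well_formed("<feed><title>broken</feed>"))       # False: mismatched tag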
Accessibility validators are automated tools that are designed to verify compliance of a web page or a web site with respect to one or more accessibility guidelines (such as WCAG, Section 508 or those associated with national laws such as the Stanca Act).
See also
CSS HTML Validator for Windows
HTML Tidy
W3C Markup Validation Service
Well-formed element
XML validation
References
External links
W3C's HTML Validator
W3C's CSS Validator
Mauve, an accessibility validator developed by HIIS Lab – ISTI of CNR of Pisa (Italy).
WAVE – Online accessibility validator
Debugging
HTML
XML software
|
https://en.wikipedia.org/wiki/%C3%89cole%20Nationale%20Sup%C3%A9rieure%20d%27%C3%89lectrochimie%20et%20d%27%C3%89lectrom%C3%A9tallurgie%20de%20Grenoble
|
The École Nationale Supérieure d'Électrochimie et d'Électrométallurgie de Grenoble, or ENSEEG, was one of the French Grandes écoles of engineering (engineering schools). It was created in 1921 under the name Institut d’électrochimie et d’électrométallurgie (IEE) (Institute of Electrochemistry and Electrometallurgy). The name ENSEEG was chosen in 1948, and ENSEEG was part of the Grenoble Institute of Technology (INPG or GIT) from the latter's creation in 1971. Therefore, the name INPG-ENSEEG was also commonly used.
ENSEEG delivered a multidisciplinary education in physical chemistry; its engineers were especially competent in materials science, process engineering and electrochemistry. In September 2008, ENSEEG merged with two other Grandes écoles to create Phelma.
External links
ENSEEG Website
ENSEEG Student Website
ENSEEG Student Firm
Electrochimie et d'Électrométallurgie de Grenoble
Electrochemical engineering
Metallurgical organizations
Universities and colleges established in 1921
Educational institutions disestablished in 2008
1921 establishments in France
2008 disestablishments in France
|
https://en.wikipedia.org/wiki/Current%20divider
|
In electronics, a current divider is a simple linear circuit that produces an output current (IX) that is a fraction of its input current (IT). Current division refers to the splitting of current between the branches of the divider. The currents in the various branches of such a circuit will always divide in such a way as to minimize the total energy expended.
The formula describing a current divider is similar in form to that for the voltage divider. However, the ratio describing current division places the impedance of the considered branches in the denominator, unlike voltage division, where the considered impedance is in the numerator. This is because in current dividers, total energy expended is minimized, resulting in currents that favour paths of least impedance, hence the inverse relationship with impedance. Comparatively, a voltage divider is used to satisfy Kirchhoff's voltage law (KVL): the voltages around a loop must sum to zero, so the voltage drops divide among series impedances in direct proportion to each impedance.
To be specific, if two or more impedances are in parallel, the current that enters the combination will be split between them in inverse proportion to their impedances (according to Ohm's law). It also follows that if the impedances have the same value, the current is split equally.
Current divider
A general formula for the current IX in a resistor RX that is in parallel with a combination of other resistors of total resistance RT (see Figure 1) is
IX = IT · RT / (RX + RT),
where IT is the total current entering the combined network of RX in parallel with RT. Notice that when RT is composed of a parallel combination of resistors, say R1, R2, ... etc., then the reciprocal of each resistor must be added to find the reciprocal of the total resistance RT:
1/RT = 1/R1 + 1/R2 + ...
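A small numerical sketch of these two formulas, with arbitrarily chosen component values:

def parallel(*resistances: float) -> float:
    """Total resistance of resistors in parallel: 1/R_T = sum(1/R_i)."""
    return 1.0 / sum(1.0 / r for r in resistances)

def branch_current(i_total: float, r_x: float, *others: float) -> float:
    """Current through R_X given the total current and the other parallel branches."""
    r_t = parallel(*others)            # R_T of everything except R_X
    return i_total * r_t / (r_x + r_t)

# 3 A entering a 10-ohm branch in parallel with 20-ohm and 30-ohm branches.
i_x = branch_current(3.0, 10.0, 20.0, 30.0)
print(f"I_X = {i_x:.3f} A")            # the lowest-impedance branch takes the largest share
# Sanity check: the three branch currents sum back to the 3 A total.
print(sum(branch_current(3.0, r, *[s for s in (10.0, 20.0, 30.0) if s != r])
          for r in (10.0, 20.0, 30.0)))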
General case
Although the resistive divider is most common, the current divider may be made of frequency-dependent impedances. In the general case:
and the current IX is given by
where ZT refers to the
|
https://en.wikipedia.org/wiki/Parametric%20polymorphism
|
In programming languages and type theory, parametric polymorphism allows a single piece of code to be given a "generic" type, using variables in place of actual types, and then instantiated with particular types as needed. Parametrically polymorphic functions and data types are sometimes called generic functions and generic datatypes, respectively, and they form the basis of generic programming.
Parametric polymorphism may be contrasted with ad hoc polymorphism. Parametrically polymorphic definitions are uniform: they behave identically regardless of the type they are instantiated at. In contrast, ad hoc polymorphic definitions are given a distinct definition for each type. Thus, ad hoc polymorphism can generally only support a limited number of such distinct types, since a separate implementation has to be provided for each type.
Basic definition
It is possible to write functions that do not depend on the types of their arguments. For example, the identity function id simply returns its argument unmodified. This naturally gives rise to a family of potential types, such as int → int, bool → bool, string → string, and so on. Parametric polymorphism allows id to be given a single, most general type by introducing a universally quantified type variable:
id : ∀α. α → α
The polymorphic definition can then be instantiated by substituting any concrete type for α, yielding the full family of potential types.
The identity function is a particularly extreme example, but many other functions also benefit from parametric polymorphism. For example, an append function that appends two lists does not inspect the elements of the list, only the list structure itself. Therefore, append can be given a similar family of types, such as [int] × [int] → [int], [bool] × [bool] → [bool], and so on, where [T] denotes a list of elements of type T. The most general type is therefore
append : ∀α. [α] × [α] → [α]
which can be instantiated to any type in the family.
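As an illustrative sketch (any language with generics would do; Python's typing module is used here only as an example), the universally quantified type variable can be written down explicitly:

from typing import TypeVar

T = TypeVar("T")   # the universally quantified type variable (alpha above)

def identity(x: T) -> T:
    """One definition, usable at int, str, list[float], ... without change."""
    return x

def append(xs: list[T], ys: list[T]) -> list[T]:
    """Never inspects the elements, only the list structure."""
    return xs + ys

print(identity(42))            # instantiated at int
print(identity("hello"))       # instantiated at str
print(append([1, 2], [3]))     # instantiated at list[int]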
Parametrically polymorphic functions like id and append are said to be parameterized over an arbitrary type α. Both id and append are parameterized over a single type, but functions ma
|
https://en.wikipedia.org/wiki/List%20of%20portable%20software
|
For the purposes of this list, a portable application is software that can be used from portable storage devices such as USB flash drives, digital audio players, PDAs or external hard drives. To be considered for inclusion, an application must be executable on multiple computers from removable storage without installation, and without writing settings or data onto a computer's non-removable storage. This includes modified portable versions of non-portable applications.
Bundles
Ceedo
MojoPac
LiberKey
PortableApps.com
U3
WebLaminarTools
WinPenPack
Launchers
Appetizer (Dock application)
ASuite
Launchy
OpenDisc
RocketDock
Development
Scripting languages
Portable Python
Portable NSIS Version
Portable AutoIt
Portable AutoHotkey (zip file)
Portable Perl (Strawberry Perl Portable Version)
Compilers
MinGW
Tiny C Compiler
IDEs
Alice IDE
Portable Eclipse
Portable Code::Blocks (needs MinGW installed, which is portable too)
Portable Dev-C++
Hackety Hack, which is an educational version of Ruby
SharpDevelop Portable
Setup creators
Nullsoft Scriptable Install System Portable (PortableApps.com format)
Visual mapping/productivity tools
XMIND
Graphics
3D modeling and rendering
Anim8or – Free 3D modeling and animating software.
Blender:
BlenderPortable
Blender Pocket
XBlender
Animation
Anim8or
Blender
Pivot Stickfigure Animator
Graphic editors
ArtRage
Artweaver
Dia
EVE
Fotografix
GIMP:
GIMP Portable VS 2008 is the portable version of GIMP on Windows platforms (Windows XP, Vista, NT Server 2003, NT Server 2008)
Portable Gimp – for Mac OS X
X-Gimp
X-GimpShop
Inkscape:
X-Inkscape
Portable Inkscape – for Mac OS X
IrfanView
Pixia
Tux Paint
Icon editors
@icon sushi
GIMP – Supports reading and writing Windows ICO files.
IcoFX
IrfanView – Supports converting graphic file formats into Windows ICO files.
Viewers
FastStone Image Viewer: supports screen capture and combining multiple pictures into a single PDF
Irfanview
XnView
Documen
|
https://en.wikipedia.org/wiki/Lighting%20control%20system
|
A lighting control system incorporates communication between various system inputs and outputs related to lighting control with the use of one or more central computing devices. Lighting control systems are widely used on both indoor and outdoor lighting of commercial, industrial, and residential spaces. Lighting control systems are sometimes referred to under the term smart lighting. Lighting control systems serve to provide the right amount of light where and when it is needed.
Lighting control systems are employed to maximize the energy savings from the lighting system, satisfy building codes, or comply with green building and energy conservation programs. Lighting control systems may include a lighting technology designed for energy efficiency, convenience and security. This may include high efficiency fixtures and automated controls that make adjustments based on conditions such as occupancy or daylight availability. Lighting is the deliberate application of light to achieve some aesthetic or practical effect (e.g. illumination of a security breach). It includes task lighting, accent lighting, and general lighting.
Lighting controls
The term lighting controls is typically used to indicate stand-alone control of the lighting within a space. This may include occupancy sensors, timeclocks, and photocells that are hard-wired to control fixed groups of lights independently. Adjustment occurs manually at each device's location. The efficiency of and market for residential lighting controls has been characterized by the Consortium for Energy Efficiency.
The term lighting control system refers to an intelligent networked system of devices related to lighting control. These devices may include relays, occupancy sensors, photocells, light control switches or touchscreens, and signals from other building systems (such as fire alarm or HVAC). Adjustment of the system occurs both at device locations and at central computer locations via software programs or other interf
|
https://en.wikipedia.org/wiki/MOS%20Technology%20CIA
|
The 6526/8520 Complex Interface Adapter (CIA) was an integrated circuit made by MOS Technology. It served as an I/O port controller for the 6502 family of microprocessors, providing parallel and serial I/O capabilities as well as timers and a Time-of-Day (TOD) clock. The device's most prominent use was in the Commodore 64 and Commodore 128(D), each of which included two CIA chips. The Commodore 1570 and Commodore 1571 floppy disk drives contained one CIA each. Furthermore, the Amiga home computers and the Commodore 1581 floppy disk drive employed a modified variant of the CIA circuit called the 8520. The 8520 is functionally equivalent to the 6526 except for its simplified TOD circuitry. The CIA's predecessor was the PIA.
Parallel I/O
The CIA had two 8-bit bidirectional parallel I/O ports. Each port had a corresponding Data Direction Register, which allowed each data line to be individually set to input or output mode. A read of these ports always returned the status of the individual lines, regardless of the data direction that had been set.
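A toy model of that behaviour (purely conceptual: real register addresses, timing, and electrical characteristics are not modelled, and the class and field names are illustrative):

class ParallelPort:
    """Toy model of one CIA port with its Data Direction Register (DDR)."""

    def __init__(self) -> None:
        self.ddr = 0x00          # 1 bits = output, 0 bits = input
        self.output_latch = 0x00
        self.external = 0xFF     # what the outside world drives on the input lines

    def read(self) -> int:
        # Reading returns the actual line states regardless of direction:
        # output bits reflect the latch, input bits reflect external levels.
        return (self.output_latch & self.ddr) | (self.external & ~self.ddr & 0xFF)

    def write(self, value: int) -> None:
        self.output_latch = value & 0xFF

port = ParallelPort()
port.ddr = 0x0F              # low nibble as outputs, high nibble as inputs
port.write(0xAA)
port.external = 0x50
print(hex(port.read()))      # 0x5a: high nibble from outside, low nibble from the latch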
Serial I/O
An internal bidirectional 8-bit shift register enabled the CIA to handle serial I/O. The chip could accept serial input clocked from an external source, and could send serial output clocked with one of the built-in programmable timers. An interrupt was generated whenever an 8-bit serial transfer had completed. It was possible to implement a simple "network" by connecting the shift register and clock outputs of several computers together.
The maximum bitrate is 500 kbit/s for the 2 MHz version.
The CIA incorporates a fix to a bug in the serial shift register of the earlier 6522 VIA. The CIA was originally intended to allow fast communication with a disk drive, but in the end could not be used for this because of a desire to keep disk drive compatibility with the VIC-20; in practice the firmware of the 1541 drive had to be made even slower than its VIC-20 predecessor to work around a behaviour of the C64's video processor that, when dra
|
https://en.wikipedia.org/wiki/1seg
|
is a mobile terrestrial digital audio/video and data broadcasting service in Japan, Argentina, Brazil, Chile, Uruguay, Paraguay, Peru and the Philippines. Service began experimentally during 2005 and commercially on April 1, 2006. It is designed as a component of ISDB-T, the terrestrial digital broadcast system used in those countries, as each channel is divided into 13 segments, with a further segment separating it from the next channel; an HDTV broadcast signal occupies 12 segments, leaving the remaining (13th) segment for mobile receivers, hence the name, "1seg" or "One Seg".
Its use in Brazil was established in late 2007 (starting in just a few cities), with a slight difference from the Japanese counterpart: it is broadcast under a 30 frame/s transmission setting (Japanese broadcasts are under the 15 frame/s transmission setting).
Technical information
The ISDB-T system uses the UHF band at frequencies between 470 and 770 MHz (806 MHz in Brazil), giving a total bandwidth of 300 MHz. The bandwidth is divided into fifty channels, numbered 13 through 62. Each channel is 6 MHz wide, consisting of a 5.57 MHz wide signalling band and a 430 kHz guard band to limit cross-channel interference. Each of these channels is further divided into 13 segments, each with 428 kHz of bandwidth. 1seg uses a single one of these segments to carry the 1seg transport stream.
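The arithmetic in this paragraph can be checked directly:

channel_bandwidth_hz = 6_000_000
guard_band_hz = 430_000
segments = 13
segment_bandwidth_hz = 428_000

print(segments * segment_bandwidth_hz)             # 5,564,000 Hz of signalling band
print(channel_bandwidth_hz - guard_band_hz)        # 5,570,000 Hz quoted in the text
print((770_000_000 - 470_000_000) // channel_bandwidth_hz)  # 50 channels in 300 MHz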
1seg, like ISDB-T, uses QPSK for modulation, with 2/3 forward error correction and a 1/4 guard ratio. The total datarate is 416 kbit/s.
The television system uses an H.264/MPEG-4 AVC video stream and an HE-AAC audio stream multiplexed into an MPEG transport stream. The maximum video resolution is 320x240 pixels, with a video bitrate of between 220 and 320 kbit/s. Audio conforms to HE-AAC profile, with a bitrate of 48 to 64 kbit/s. Additional data (EPG, interactive services, etc.) is transmitted using BML and occupies the remaining 10 to 100 kbit/s of bandwidth.
Conditional access and copy control are implemented in 1seg broa
|
https://en.wikipedia.org/wiki/Information%20security%20standards
|
Information security standards or cyber security standards are techniques generally outlined in published materials that attempt to protect the cyber environment of a user or organization. This environment includes users themselves, networks, devices, all software, processes, information in storage or transit, applications, services, and systems that can be connected directly or indirectly to networks.
The principal objective is to reduce the risks, including preventing or mitigating cyber-attacks. These published materials consist of tools, policies, security concepts, security safeguards, guidelines, risk management approaches, actions, training, best practices, assurance and technologies.
History
Cybersecurity standards have existed over several decades as users and providers have collaborated in many domestic and international forums to effect the necessary capabilities, policies, and practices – generally emerging from work at the Stanford Consortium for Research on Information Security and Policy in the 1990s.
A 2016 US security framework adoption study reported that 70% of the surveyed organizations had adopted the NIST Cybersecurity Framework as the most popular best practice for Information Technology (IT) computer security, but many note that it requires significant investment. Cross-border cyber-exfiltration operations by law enforcement agencies to counter international criminal activities on the dark web raise complex jurisdictional questions that remain, to some extent, unanswered. Tensions between domestic law enforcement efforts to conduct cross-border cyber-exfiltration operations and international jurisdiction are likely to continue to drive the development of improved cybersecurity norms.
International Standards
The subsections below detail international standards related to cybersecurity.
ISO/IEC 27001 and 27002
ISO/IEC 27001, part of the growing ISO/IEC 27000 family of standards, is an information security management system (ISMS) standard, of which the last revis
|
https://en.wikipedia.org/wiki/William%20Hubert%20Burr
|
William Hubert Burr C.E. (1851–1934) was an American civil engineer, born at Watertown, Connecticut. He received his education at the Rensselaer Polytechnic Institute. Over several decades, he worked at various places. In 1884 he became assistant engineer to the Phoenix Bridge Company. After 1893 he was consulting engineer to New York departments, especially in connection with the Catskill Aqueduct work. In 1892–1893 he was a professor at Harvard University, and from 1893 to 1916 Professor of Civil Engineering at Columbia University. In 1904 he was appointed a member of the Isthmian Canal Commission.
As a consulting engineer, Burr was also involved with the design of several bridges, tunnels, and infrastructure projects. In the New York metropolitan area, these included the University Heights (former Harlem Ship Canal) Bridge, Harlem River Speedway, the original City Island Bridge, the original 145th Street Bridge, the Holland Tunnel, the Lincoln Tunnel, and the George Washington Bridge. Burr was also involved with projects such as the Panama Canal; a design for the Arlington Memorial Bridge; and the New York State Barge Canal.
His published works are:
Stresses in Bridge and Roof Trusses (1879)
Ancient and Modern Engineering and the Isthmian Canal (1902)
The Elasticity and Resistance of the Materials of Engineering (1883, third edition, 1912)
The Graphic Method in Influence Lines for Bridge and Roof Computation (1905, with M. S. Falk)
References
External links
1851 births
1934 deaths
American civil engineers
20th-century American engineers
American engineering writers
Rensselaer Polytechnic Institute alumni
People from Watertown, Connecticut
|
https://en.wikipedia.org/wiki/KQCA
|
KQCA (channel 58) is a television station licensed to Stockton, California, United States, serving the Sacramento area as a dual affiliate of The CW and MyNetworkTV. It is owned by Hearst Television alongside NBC affiliate KCRA-TV (channel 3). Both stations share studios on Television Circle off D Street in downtown Sacramento, while KQCA's transmitter is located in Walnut Grove, California.
History
The station first signed on the air on April 13, 1986, as KSCH. The first program to air on the station was a "preview" show hosted by Jim Finnerty and Lori Sequest. It was 51 percent owned by Schuyler Communications, Inc., and 49 percent by the SFN Companies. It originally operated as an independent station and aired classic television series from the 1950s, 1960s, and 1970s, as well as some daytime programs that were preempted by KCRA-TV and KXTV (channel 10). The station originally operated from studios located on West Weber Avenue in Stockton. KSCH was also the first station in the Sacramento market to provide stereo sound from its sign-on.
On August 9 of that year, SFN sold the station to Pegasus Broadcasting, which consisted of SFN management and outside investors; channel 58 along with three other television stations and three radio stations sold for $154 million. In 1988, the station moved its studios to a new building located on Gold Canal Drive in Rancho Cordova. In 1990, GE Capital, which had been one of the investors that formed Pegasus, purchased the company outright.
In 1993, GE Capital began shopping KSCH-TV for sale; in one potential proposal, both KSCH and Koplar Communications-owned KRBK (channel 31, now KMAX-TV) would have been sold to one buyer, who would have been able to sell off one of the stations to a noncompetitive entity. In 1994, Sacramento restaurant owner Wing Fat and Barbara Scurfield purchased KSCH-TV from GE Capital for $8 million. The new owners entered into a local marketing agreement with Kelly Broadcasting, then-owner of KCRA. KCRA
|
https://en.wikipedia.org/wiki/Hindu%E2%80%93Arabic%20numeral%20system
|
The Hindu–Arabic numeral system or Indo-Arabic numeral system (also called the Hindu numeral system or Arabic numeral system) is a positional base-ten numeral system for representing integers; extended to include non-integers, it becomes the decimal numeral system, which is the most common system for the symbolic representation of numbers in the world.
It was invented between the 1st and 4th centuries by Indian mathematicians. The system was adopted in Arabic mathematics by the 9th century. It became more widely known through the writings of the Persian mathematician Al-Khwārizmī (On the Calculation with Hindu Numerals, ) and Arab mathematician Al-Kindi (On the Use of the Hindu Numerals, ). The system had spread to medieval Europe by the High Middle Ages.
The system is based upon ten (originally nine) glyphs. The symbols (glyphs) used to represent the system are in principle independent of the system itself. The glyphs in actual use are descended from Brahmi numerals and have split into various typographical variants since the Middle Ages.
These symbol sets can be divided into three main families: Western Arabic numerals used in the Greater Maghreb and in Europe; Eastern Arabic numerals used in the Middle East; and the Indian numerals in various scripts used in the Indian subcontinent.
Origins
The Hindu–Arabic or Indo-Arabic numerals were invented by mathematicians in India. Persian and Arabic mathematicians called them "Hindu numerals". Later they came to be called "Arabic numerals" in Europe because they were introduced to the West by Arab merchants. According to some sources, this number system may have originated in the Chinese Shang numerals (1200 BC), which were also a decimal positional numeral system.
Positional notation
The Hindu–Arabic system is designed for positional notation in a decimal system. In a more developed form, positional notation also uses a decimal marker (at first a mark over the ones digit but now more commonly a decimal point or a d
|
https://en.wikipedia.org/wiki/Activation
|
Activation, in chemistry and biology, is the process whereby something is prepared or excited for a subsequent reaction.
Chemistry
In chemistry, "activation" refers to the reversible transition of a molecule into a nearly identical chemical or physical state, with the defining characteristic being that this resultant state exhibits an increased propensity to undergo a specified chemical reaction. Thus, activation is conceptually the opposite of protection, in which the resulting state exhibits a decreased propensity to undergo a certain reaction.
The energy of activation specifies the amount of free energy the reactants must possess (in addition to their rest energy) in order to initiate their conversion into corresponding products—that is, in order to reach the transition state for the reaction. The energy needed for activation can be quite small, and often it is provided by the natural random thermal fluctuations of the molecules themselves (i.e. without any external sources of energy).
The branch of chemistry that deals with this topic is called chemical kinetics.
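The activation energy appears in rate calculations through the Arrhenius equation, k = A·exp(−Ea/(RT)); the article does not state the formula explicitly, so the following Python sketch with hypothetical parameter values is only an illustration of how the barrier height and temperature control the rate constant:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def arrhenius_rate(A: float, Ea: float, T: float) -> float:
    """Arrhenius rate constant k = A * exp(-Ea / (R * T))."""
    return A * math.exp(-Ea / (R * T))

# Hypothetical pre-exponential factor and activation energy.
A = 1.0e13      # 1/s
Ea = 75_000.0   # J/mol
for T in (298.0, 350.0, 400.0):
    print(T, arrhenius_rate(A, Ea, T))   # rate grows rapidly with temperature
```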
Biology
Biochemistry
In biochemistry, activation, specifically called bioactivation, is the process by which enzymes or other biologically active molecules acquire the ability to perform their biological function, such as inactive proenzymes being converted into active enzymes that are able to catalyze their substrates' reactions into products. Bioactivation may also refer to the process where inactive prodrugs are converted into their active metabolites, or to the toxication of protoxins into actual toxins.
An enzyme may be reversibly or irreversibly bioactivated. A major mechanism of irreversible bioactivation is proteolytic cleavage, in which a piece of the protein is cut off, producing an enzyme that then stays active. A major mechanism of reversible bioactivation is substrate presentation, where an enzyme translocates near its substrate. Another reversible mechanism is cofactor binding, in which a cofactor binds to an enzyme, which then rem
|
https://en.wikipedia.org/wiki/Envelope%20theorem
|
In mathematics and economics, the envelope theorem is a major result about the differentiability properties of the value function of a parameterized optimization problem. As we change parameters of the objective, the envelope theorem shows that, in a certain sense, changes in the optimizer of the objective do not contribute to the change in the objective function. The envelope theorem is an important tool for comparative statics of optimization models.
The term envelope derives from describing the graph of the value function as the "upper envelope" of the graphs of the parameterized family of functions that are optimized.
Statement
Let $f(x,\alpha)$ and $g_j(x,\alpha)$, $j = 1, 2, \ldots, m$, be real-valued continuously differentiable functions on $\mathbb{R}^n \times \mathbb{R}^l$, where $x \in \mathbb{R}^n$ are choice variables and $\alpha \in \mathbb{R}^l$ are parameters, and consider the problem of choosing $x$, for a given $\alpha$, so as to:

$$\max_x f(x,\alpha) \quad \text{subject to} \quad g_j(x,\alpha) \geq 0, \; j = 1, 2, \ldots, m, \quad \text{and} \quad x \geq 0.$$

The Lagrangian expression of this problem is given by

$$\mathcal{L}(x,\lambda,\alpha) = f(x,\alpha) + \lambda \cdot g(x,\alpha),$$

where $\lambda = (\lambda_1, \ldots, \lambda_m)$ are the Lagrange multipliers. Now let $x^*(\alpha)$ and $\lambda^*(\alpha)$ together be the solution that maximizes the objective function f subject to the constraints (and hence are saddle points of the Lagrangian),

$$\mathcal{L}^*(\alpha) \equiv f(x^*(\alpha), \alpha) + \lambda^*(\alpha) \cdot g(x^*(\alpha), \alpha),$$

and define the value function

$$V(\alpha) \equiv f(x^*(\alpha), \alpha).$$

Then we have the following theorem.

Theorem: Assume that $V$ and $\mathcal{L}$ are continuously differentiable. Then

$$\frac{\partial V(\alpha)}{\partial \alpha_k} = \frac{\partial \mathcal{L}^*(\alpha)}{\partial \alpha_k} = \frac{\partial \mathcal{L}}{\partial \alpha_k}\bigl(x^*(\alpha), \lambda^*(\alpha), \alpha\bigr), \qquad k = 1, \ldots, l,$$

where $\frac{\partial \mathcal{L}}{\partial \alpha_k} = \frac{\partial f}{\partial \alpha_k} + \lambda \cdot \frac{\partial g}{\partial \alpha_k}$.
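As a quick unconstrained illustration of the theorem (a hypothetical example, not part of the original statement), take a single choice variable and a single parameter with objective $f(x,\alpha) = -x^2 + \alpha x$ and no binding constraints:

```latex
% Hypothetical worked example: f(x, \alpha) = -x^2 + \alpha x, no constraints.
% First-order condition:  \partial f / \partial x = -2x + \alpha = 0
%                         \implies x^*(\alpha) = \alpha / 2.
% Value function:         V(\alpha) = f(x^*(\alpha), \alpha) = \alpha^2 / 4,
%                         so direct differentiation gives V'(\alpha) = \alpha / 2.
% Envelope theorem:       differentiate f with respect to \alpha only,
%                         holding x fixed at x^*(\alpha):
\[
  V'(\alpha)
  = \left. \frac{\partial f(x,\alpha)}{\partial \alpha} \right|_{x = x^*(\alpha)}
  = x^*(\alpha)
  = \frac{\alpha}{2},
\]
% which agrees with the direct computation.
```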
For arbitrary choice sets
Let $X$ denote the choice set and let the relevant parameter be $t \in [0,1]$. Letting $f : X \times [0,1] \to \mathbb{R}$ denote the parameterized objective function, the value function $V$ and the optimal choice correspondence (set-valued function) $X^*$ are given by:

$$V(t) = \sup_{x \in X} f(x,t), \qquad X^*(t) = \{ x \in X : f(x,t) = V(t) \}.$$

"Envelope theorems" describe sufficient conditions for the value function $V$ to be differentiable in the parameter $t$ and describe its derivative as

$$V'(t) = f_t\bigl(x(t), t\bigr) \quad \text{for each } x(t) \in X^*(t),$$

where $f_t$ denotes the partial derivative of $f$ with respect to $t$. Namely, the derivative of the value function with respect to the parameter equals the partial derivative of the objective function with respect to the parameter, holding the maximizer fixed at its optimal level.
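A brute-force numerical check of this formula can be sketched in Python on a hypothetical toy objective (not from the article); the grid is a crude stand-in for the choice set $X$:

```python
import numpy as np

# Toy check of V'(t) = f_t(x*(t), t) on a hypothetical objective
# f(x, t) = -(x - t)**2 + t**2, chosen so that x*(t) = t and V(t) = t**2
# are known in closed form.
def f(x, t):
    return -(x - t) ** 2 + t ** 2

grid = np.linspace(-10.0, 10.0, 200_001)  # crude stand-in for the choice set X

def V(t):
    return f(grid, t).max()               # value function by brute-force maximization

t0, h = 1.3, 1e-4
dV_numeric = (V(t0 + h) - V(t0 - h)) / (2 * h)   # central-difference estimate of V'(t0)
x_star = grid[np.argmax(f(grid, t0))]            # approximate maximizer, about t0
f_t_at_optimum = 2 * (x_star - t0) + 2 * t0      # partial derivative of f w.r.t. t at x*

print(dV_numeric, f_t_at_optimum)                # both close to 2 * t0 = 2.6
```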
Traditional envelope theorem derivations use the first-order condition for (), which requires that the choice set have t
|
https://en.wikipedia.org/wiki/B-Dienst
|
The B-Dienst (Beobachtungsdienst, "observation service"), also called xB-Dienst, X-B-Dienst and χB-Dienst, was a department of the German Naval Intelligence Service (Marinenachrichtendienst, MND III) of the OKM that dealt with the interception, recording, decoding and analysis of enemy, in particular British, radio communications before and during World War II. B-Dienst worked on cryptanalysis and deciphering (decrypting) of enemy and neutral states' message traffic and on security control of Kriegsmarine key processes and machinery.
"The ultimate goal of all evaluation was recognizing the opponent's goal by pro-active identification of data."
B-Dienst was instrumental in moulding Wehrmacht operations during the Battles of Norway and France in spring 1940, primarily due to the cryptanalysis successes it had achieved against early and less secure British Naval ciphers.
B-Dienst broke British Naval Combined Cypher No. 3 in October 1941, which was used to encrypt all communications between naval personnel, for Allied North Atlantic convoys. This enabled B-Dienst to provide valuable signals intelligence for the German Navy in the Battle of the Atlantic. The intelligence flow largely ended when the Admiralty introduced Naval Cipher No. 5 on 10 June 1943. The new cipher became secure in January 1944 with the introduction of the Stencil Subtractor system which was used to recipher it.
Background
The B-Dienst unit began as the German radio monitoring, education and news analysis service by the end of World War I, in 1918, as part of the navy of the German Empire.
A counterpart to the B service on the British side was the Y-service or Y Service. The Y was onomatopoeic for the initial syllable of the word wireless, similar to the B initial for the German service.
Little was known outside the service about the internal organization and workings of the B-Dienst section. After the armistice of Italy (Armistice of Cassibile), officers of the Italian naval communications intelligence (SIM) in conversation
|
https://en.wikipedia.org/wiki/Disk%20encryption%20theory
|
Disk encryption is a special case of data at rest protection when the storage medium is a sector-addressable device (e.g., a hard disk). This article presents cryptographic aspects of the problem. For an overview, see disk encryption. For discussion of different software packages and hardware devices devoted to this problem, see disk encryption software and disk encryption hardware.
Problem definition
Disk encryption methods aim to provide three distinct properties:
The data on the disk should remain confidential.
Data retrieval and storage should both be fast operations, no matter where on the disk the data is stored.
The encryption method should not waste disk space (i.e., the amount of storage used for encrypted data should not be significantly larger than the size of plaintext).
The first property requires defining an adversary from whom the data is being kept confidential. The strongest adversaries studied in the field of disk encryption have these abilities:
they can read the raw contents of the disk at any time;
they can request the disk to encrypt and store arbitrary files of their choosing;
and they can modify unused sectors on the disk and then request their decryption.
A method provides good confidentiality if the only information such an adversary can determine over time is whether the data in a sector has or has not changed since the last time they looked.
The second property requires dividing the disk into several sectors, usually 512 bytes (4,096 bits) long, which are encrypted and decrypted independently of each other. In turn, if the data is to stay confidential, the encryption method must be tweakable; no two sectors should be processed in exactly the same way. Otherwise, the adversary could decrypt any sector of the disk by copying it to an unused sector of the disk and requesting its decryption.
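One common way to meet the tweakability requirement is a tweakable mode such as AES-XTS with the sector number as the tweak. The following is a minimal sketch using the pyca/cryptography package; the sector-number-as-tweak layout and the helper name are illustrative conventions, not something specified by this article:

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

SECTOR_SIZE = 512        # bytes, as in the text above
key = os.urandom(64)     # AES-256-XTS uses a 512-bit double-length key

def crypt_sector(key: bytes, sector_number: int, data: bytes, encrypt: bool) -> bytes:
    """Encrypt or decrypt one sector, tweaked by its sector number."""
    tweak = sector_number.to_bytes(16, "little")   # common convention, assumed here
    cipher = Cipher(algorithms.AES(key), modes.XTS(tweak))
    ctx = cipher.encryptor() if encrypt else cipher.decryptor()
    return ctx.update(data) + ctx.finalize()

plaintext = os.urandom(SECTOR_SIZE)
ct0 = crypt_sector(key, 0, plaintext, encrypt=True)
ct1 = crypt_sector(key, 1, plaintext, encrypt=True)
assert ct0 != ct1                                  # same data, different sectors
assert crypt_sector(key, 0, ct0, encrypt=False) == plaintext
```

Because the tweak differs per sector, copying ciphertext from one sector to another and asking for decryption yields garbage rather than the original plaintext, which is exactly the attack the text describes.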
The third property is generally non-controversial. However, it indirectly prohibits the use of stream ciphers, since stream ciphers require, for the
|
https://en.wikipedia.org/wiki/Scale%20analysis%20%28mathematics%29
|
Scale analysis (or order-of-magnitude analysis) is a powerful tool used in the mathematical sciences for the simplification of equations with many terms. First the approximate magnitude of individual terms in the equations is determined. Then some negligibly small terms may be ignored.
Example: vertical momentum in synoptic-scale meteorology
Consider for example the momentum equation of the Navier–Stokes equations in the vertical coordinate direction of the atmosphere:

$$\frac{\partial w}{\partial t} + u\frac{\partial w}{\partial x} + v\frac{\partial w}{\partial y} + w\frac{\partial w}{\partial z} - \frac{u^2 + v^2}{R} = -\frac{1}{\rho}\frac{\partial p}{\partial z} - g + 2\Omega u \cos\varphi + \nu\left(\frac{\partial^2 w}{\partial x^2} + \frac{\partial^2 w}{\partial y^2} + \frac{\partial^2 w}{\partial z^2}\right) \tag{1}$$
where R is Earth radius, Ω is frequency of rotation of the Earth, g is gravitational acceleration, φ is latitude, ρ is density of air and ν is kinematic viscosity of air (we can neglect turbulence in free atmosphere).
In synoptic scale we can expect horizontal velocities about U = 10¹ m⋅s⁻¹ and vertical about W = 10⁻² m⋅s⁻¹. Horizontal scale is L = 10⁶ m and vertical scale is H = 10⁴ m. Typical time scale is T = L/U = 10⁵ s. Pressure differences in the troposphere are ΔP = 10⁴ Pa and density of air ρ = 10⁰ kg⋅m⁻³. Other physical properties are approximately:
R = 6.378 × 106 m;
Ω = 7.292 × 10−5 rad⋅s−1;
ν = 1.46 × 10−5 m2⋅s−1;
g = 9.81 m⋅s−2.
Estimates of the different terms in equation (1) can be made using their scales:
Now we can introduce these scales and their values into equation (1):
We can see that all terms — except the first and second on the right-hand side — are negligibly small. Thus we can simplify the vertical momentum equation to the hydrostatic equilibrium equation:

$$\frac{\partial p}{\partial z} = -\rho g.$$
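The order-of-magnitude comparison of the terms can be carried out with a short Python sketch; the grouping of terms follows the reconstructed equation (1) above and is offered as an illustration rather than a definitive derivation:

```python
import math

# Characteristic scales for synoptic-scale flow (values from the text above).
U, W = 1e1, 1e-2        # horizontal / vertical velocity scales, m/s
L, H = 1e6, 1e4         # horizontal / vertical length scales, m
dP, rho = 1e4, 1e0      # pressure difference (Pa) and air density (kg/m^3)
R = 6.378e6             # Earth radius, m
Omega = 7.292e-5        # Earth rotation rate, rad/s
nu = 1.46e-5            # kinematic viscosity, m^2/s
g = 9.81                # gravitational acceleration, m/s^2

# Order-of-magnitude estimate of each term in the vertical momentum equation.
terms = {
    "advection   U*W/L":      U * W / L,
    "curvature   U^2/R":      U ** 2 / R,
    "Coriolis    2*Omega*U":  2 * Omega * U,
    "pressure    dP/(rho*H)": dP / (rho * H),
    "gravity     g":          g,
    "viscous     nu*W/H^2":   nu * W / H ** 2,
}
for name, value in terms.items():
    print(f"{name:26s} ~ 10^{math.floor(math.log10(value)):d} m/s^2")
```

Running this shows the pressure-gradient and gravity terms dominating all others by several orders of magnitude, which is the basis for keeping only the hydrostatic balance.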
Rules of scale analysis
Scale analysis is a very useful and widely used tool for solving problems in the areas of heat transfer and fluid mechanics, including pressure-driven wall jets, separating flows behind backward-facing steps, jet diffusion flames, and the study of linear and non-linear dynamics. Scale analysis is an effective shortcut for obtaining approximate solutions to equations that are often too complicated to solve exactly. The object of scale analysis is to use the basic principles of convective heat transfer to produce order-of-mag
|
https://en.wikipedia.org/wiki/Algebraic%20connectivity
|
The algebraic connectivity (also known as Fiedler value or Fiedler eigenvalue after Miroslav Fiedler) of a graph G is the second-smallest eigenvalue (counting multiple eigenvalues separately) of the Laplacian matrix of G. This eigenvalue is greater than 0 if and only if G is a connected graph. This is a corollary to the fact that the number of times 0 appears as an eigenvalue in the Laplacian is the number of connected components in the graph. The magnitude of this value reflects how well connected the overall graph is. It has been used in analyzing the robustness and synchronizability of networks.
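As a minimal numerical sketch of this definition, the Fiedler value can be computed as the second-smallest eigenvalue of the Laplacian with NumPy; the path-graph example is illustrative and not taken from the article:

```python
import numpy as np

def algebraic_connectivity(adjacency: np.ndarray) -> float:
    """Second-smallest eigenvalue of the graph Laplacian L = D - A."""
    degree = np.diag(adjacency.sum(axis=1))
    laplacian = degree - adjacency
    eigenvalues = np.sort(np.linalg.eigvalsh(laplacian))
    return float(eigenvalues[1])

# Path graph on 4 vertices (0-1-2-3): connected, so the value is positive.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(algebraic_connectivity(A))   # 2 - sqrt(2), approximately 0.586
```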
Properties
The algebraic connectivity of undirected graphs with nonnegative weights satisfies $a(G) \geq 0$, with the inequality being strict if and only if G is connected. However, the algebraic connectivity can be negative for general directed graphs, even if G is a connected graph. Furthermore, the value of the algebraic connectivity is bounded above by the traditional (vertex) connectivity of the graph, $a(G) \leq \kappa(G)$. If the number of vertices of an undirected connected graph with nonnegative edge weights is n and the diameter is D, the algebraic connectivity is also known to be bounded below by $1/(nD)$, and in fact (in a result due to Brendan McKay) by $4/(nD)$. For a 6-node example graph with algebraic connectivity 0.722 and vertex connectivity 1 (n = 6, D = 3), these bounds give 4/18 ≈ 0.222 ≤ 0.722 ≤ 1.
Unlike the traditional connectivity, the algebraic connectivity is dependent on the number of vertices, as well as the way in which vertices are connected. In random graphs, the algebraic connectivity decreases with the number of vertices, and increases with the average degree.
The exact definition of the algebraic connectivity depends on the type of Laplacian used. Fan Chung has developed an extensive theory using a rescaled version of the Laplacian, eliminating the dependence on the number of vertices, so that the bounds are somewhat different.
In models of synchronization on networks, such as the Kuramoto model, the Lap
|
https://en.wikipedia.org/wiki/Thomae%27s%20function
|
Thomae's function is a real-valued function of a real variable that can be defined as:

$$f(x) = \begin{cases} \dfrac{1}{q} & \text{if } x = \dfrac{p}{q} \text{ is rational, with } p \in \mathbb{Z} \text{ and } q \in \mathbb{N} \text{ coprime,} \\ 0 & \text{if } x \text{ is irrational.} \end{cases}$$
It is named after Carl Johannes Thomae, but has many other names: the popcorn function, the raindrop function, the countable cloud function, the modified Dirichlet function, the ruler function, the Riemann function, or the Stars over Babylon (John Horton Conway's name). Thomae mentioned it as an example for an integrable function with infinitely many discontinuities in an early textbook on Riemann's notion of integration.
Since every rational number $x = \frac{p}{q}$ has a unique representation with coprime (also termed relatively prime) $p \in \mathbb{Z}$ and $q \in \mathbb{N}$, the function is well-defined. Note that $q = 1$ is the only number in $\mathbb{N}$ that is coprime to $p = 0$.
It is a modification of the Dirichlet function, which is 1 at rational numbers and 0 elsewhere.
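A minimal Python sketch of the definition at rational points follows; it assumes the argument is supplied as an exact Fraction (irrational inputs, where the function is 0, cannot be represented exactly this way), and the helper name is illustrative:

```python
from fractions import Fraction

def thomae(x: Fraction) -> Fraction:
    """Thomae's function at a rational x given as a Fraction.

    Fraction always stores p/q in lowest terms with q > 0,
    so the value is simply 1/q.
    """
    return Fraction(1, x.denominator)

print(thomae(Fraction(3, 4)))   # 1/4
print(thomae(Fraction(0)))      # 1, since 0 = 0/1
```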
Properties
Related probability distributions
Empirical probability distributions related to Thomae's function appear in DNA sequencing. The human genome is diploid, having two strands per chromosome. When sequenced, small pieces ("reads") are generated: for each spot on the genome, an integer number of reads overlap with it. Their ratio is a rational number, and typically distributed similarly to Thomae's function.
If pairs of positive integers are sampled from a distribution and used to generate ratios , this gives rise to a distribution on the rational numbers. If the integers are independent the distribution can be viewed as a convolution over the rational numbers, . Closed form solutions exist for power-law distributions with a cut-off. If (where is the polylogarithm function) then . In the case of uniform distributions on the set , which is very similar to Thomae's function.
The ruler function
For a positive integer n, the exponent of the highest power of 2 dividing n gives the sequence 0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, ... . If 1 is added, or if the 0s are removed, the result is 1, 2, 1, 3, 1, 2, 1, 4, 1, 2, 1, 3, 1, 2, 1, ... . The values resemble tick-marks on a 1/16th graduated ruler, he
|