https://en.wikipedia.org/wiki/KPackage
KPackage was KDE's package manager frontend. It supported BSD, Debian, Gentoo, RPM and Slackware packages. It provided a GUI for the management and upgrade of existing packages and the installation and acquisition of new packages. Additionally, it provided functionality to help manage package caches. KPackage was part of kdeadmin and was developed at KDE.org. See also: PackageKit, Synaptic (software), Ubuntu Software Center.
https://en.wikipedia.org/wiki/High%20Precision%20Event%20Timer
The High Precision Event Timer (HPET) is a hardware timer available in modern x86-compatible personal computers. Compared to older types of timers available in the x86 architecture, HPET allows more efficient processing of highly timing-sensitive applications, such as multimedia playback and OS task switching. It was developed jointly by Intel and Microsoft and has been incorporated in PC chipsets since 2005. Formerly referred to by Intel as a Multimedia Timer, the term HPET was selected to avoid confusion with the software multimedia timers introduced in the MultiMedia Extensions to Windows 3.0. Older operating systems that do not support a hardware HPET device can only use older timing facilities, such as the programmable interval timer (PIT) or the real-time clock (RTC). Windows XP, when fitted with the latest hardware abstraction layer (HAL), can also use the processor's Time Stamp Counter (TSC) or ACPI Power Management Timer (ACPI PMTIMER), together with the RTC, to provide operating system features that would, in later Windows versions, be provided by the HPET hardware. Confusingly, such Windows XP systems quote "HPET" connectivity in the device driver manager even though the Intel HPET device is not being used. Features An HPET chip consists of a 64-bit up-counter (main counter) counting at a frequency of at least 10 MHz, and a set of (at least three, up to 256) comparators. These comparators are 32 or 64 bits wide. The HPET is programmed via a memory-mapped I/O window that is discoverable via ACPI. The HPET circuit in modern PCs is integrated into the southbridge chip. Each comparator can generate an interrupt when the least significant bits are equal to the corresponding bits of the 64-bit main counter value. The comparators can be put into one-shot mode or periodic mode, with at least one comparator supporting periodic mode and all of them supporting one-shot mode. In one-shot mode the comparator fires an interrupt once when the main counter reaches the value programmed into its comparator register.
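To make the HPET arithmetic above concrete, here is a minimal Python sketch of the described bookkeeping: a 64-bit main counter at the minimum 10 MHz rate and a comparator armed in one-shot mode. The constants and the comparison model are illustrative assumptions, not a register-accurate driver.

COUNTER_HZ = 10_000_000   # minimum main-counter frequency required by the spec
COUNTER_BITS = 64

# Time until the 64-bit main counter wraps: about 58,000 years at 10 MHz.
wrap_seconds = 2**COUNTER_BITS / COUNTER_HZ
print(f"counter wraps after ~{wrap_seconds / (3600 * 24 * 365.25):,.0f} years")

# One-shot mode: arm a comparator to fire one millisecond from "now".
main_counter = 123_456_789                       # pretend current counter value
comparator = main_counter + COUNTER_HZ // 1000   # 1 ms worth of ticks

def comparator_fires(counter, comparator):
    # The interrupt is raised when the counter reaches the comparator value.
    return counter >= comparator

print(comparator_fires(main_counter, comparator))           # False: not yet
print(comparator_fires(main_counter + 10_000, comparator))  # True: 1 ms later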
https://en.wikipedia.org/wiki/Pythagoras%20tree%20%28fractal%29
The Pythagoras tree is a plane fractal constructed from squares. Invented by the Dutch mathematics teacher Albert E. Bosman in 1942, it is named after the ancient Greek mathematician Pythagoras because each triple of touching squares encloses a right triangle, in a configuration traditionally used to depict the Pythagorean theorem. If the largest square has a size of L × L, the entire Pythagoras tree fits snugly inside a box of size 6L × 4L. The finer details of the tree resemble the Lévy C curve. Construction The construction of the Pythagoras tree begins with a square. Upon this square are constructed two squares, each scaled down by a linear factor of √2/2, such that the corners of the squares coincide pairwise. The same procedure is then applied recursively to the two smaller squares, ad infinitum. The first few iterations of the construction use the simplest symmetric triangle, the isosceles right triangle. Alternatively, the sides of the triangle can be kept in the same proportions at every level of the recursion, leading to sides proportional to the square root of the inverse golden ratio and to the areas of the squares being in golden ratio proportion. Area Iteration n in the construction adds 2^n squares of area (1/2)^n, for a total added area of 1. Thus the area of the tree might seem to grow without bound in the limit as n → ∞. However, some of the squares overlap starting at the order 5 iteration, and the tree actually has a finite area because it fits inside a 6×4 box. It can be shown easily that the area A of the Pythagoras tree must be in the range 5 < A < 18, which can be narrowed down further with extra effort. Little seems to be known about the actual value of A. Varying the angle An interesting set of variations can be constructed by maintaining an isosceles triangle but changing the base angle (90 degrees for the standard Pythagoras tree). In particular, when the base half-angle is set to arcsin(0.5) = 30°, it is easily seen that the size of the squares remains constant.
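As a sketch of the recursive construction just described (assuming the standard symmetric version), the following Python generates every square as an origin corner plus a complex side vector; each square spawns two children scaled by √2/2 and rotated ±45°.

import cmath, math

def pythagoras_tree(origin, side, depth):
    # Yield (origin, side) for this square, then recurse on the two children.
    yield origin, side
    if depth == 0:
        return
    top_left = origin + side * 1j            # walk up the left edge
    scale = math.sqrt(2) / 2                 # linear scale factor from the text
    left = side * scale * cmath.exp(1j * math.pi / 4)    # child tilted +45°
    right = side * scale * cmath.exp(-1j * math.pi / 4)  # child tilted -45°
    yield from pythagoras_tree(top_left, left, depth - 1)
    yield from pythagoras_tree(top_left + left, right, depth - 1)

squares = list(pythagoras_tree(0 + 0j, 1 + 0j, 8))
print(len(squares))  # 2^9 - 1 = 511 squares through iteration 8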
https://en.wikipedia.org/wiki/Serial%20presence%20detect
In computing, serial presence detect (SPD) is a standardized way to automatically access information about a memory module. Earlier 72-pin SIMMs included five pins that provided five bits of parallel presence detect (PPD) data, but the 168-pin DIMM standard changed to a serial presence detect to encode more information. When an ordinary modern computer is turned on, it starts by doing a power-on self-test (POST). Since about the mid-1990s, this process has included automatically configuring the hardware currently present. SPD is a memory hardware feature that makes it possible for the computer to know what memory is present, and what memory timings to use to access the memory. Some computers adapt to hardware changes completely automatically. In most cases, there is a special optional procedure for accessing BIOS parameters, to view and potentially make changes in settings. It may be possible to control how the computer uses the memory SPD data—to choose settings, selectively modify memory timings, or possibly to completely override the SPD data (see overclocking). Stored information For a memory module to support SPD, the JEDEC standards require that certain parameters be in the lower 128 bytes of an EEPROM located on the memory module. These bytes contain timing parameters, manufacturer, serial number and other useful information about the module. Devices utilizing the memory automatically determine key parameters of the module by reading this information. For example, the SPD data on an SDRAM module might provide information about the CAS latency so the system can set this correctly without user intervention. The SPD EEPROM is accessed using SMBus, a variant of the I²C protocol. This reduces the number of communication pins on the module to just two: a clock signal and a data signal. The EEPROM shares ground pins with the RAM, has its own power pin, and has three additional pins (SA0–2) to identify the slot, which are used to assign the EEPROM a unique address on the bus.
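As a hedged sketch of reading raw SPD bytes from user space, the following uses the third-party smbus2 package over Linux's i2c-dev interface; the bus number is a system-specific assumption, and 0x50 is the customary base address selected by the SA0–2 pins (0x50–0x57 across slots).

from smbus2 import SMBus

I2C_BUS = 0        # hypothetical bus number; check /dev/i2c-* on your system
SPD_ADDR = 0x50    # first DIMM slot; SA0-2 strapping selects 0x50-0x57

with SMBus(I2C_BUS) as bus:
    # Read the JEDEC-defined lower 128 bytes one register at a time.
    spd = bytes(bus.read_byte_data(SPD_ADDR, reg) for reg in range(128))

# Byte 2 encodes the DRAM device type on DDR3/DDR4-era modules.
print(f"first bytes: {spd[:4].hex()}, DRAM type byte: {spd[2]:#04x}")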
https://en.wikipedia.org/wiki/Joconde
Joconde is the central database created in 1975 and now available online, maintained by the French Ministry of Culture, for objects in the collections of the main French public and private museums listed as Musées de France, according to article L. 441-1 of the Code du patrimoine, amounting to more than 1,200 institutions. "La Joconde" is the French name of the Mona Lisa, which, like about half of the collections of the Louvre, is included in the database, as one of 295 items by, after, or connected with Leonardo da Vinci; of these, only 42 works are by Leonardo da Vinci himself, including 6 paintings. By November 2012, Joconde contained over 475,000 objects online, over 290,000 of them with images, from 366 collections in France, including 209,350 drawings, 63,547 paintings, 34,561 prints, 34,102 sculptures and 16,631 costumes and their accessories, and it is still expanding. By June 2022 it counted 636,405 objects. The database is dedicated not only to informing the public but also to serving the needs of museum administrators and curators, through the online presentation of professional tools that notably facilitate the cataloguing and state inventory (récolement) of museum collections. This explains the great precision of the listings. Since the museums participate on a voluntary basis in the regular enrichment of the database, some can present a large part of their collection, while others appear only through the permanent deposits made by the former. Live on the French Minitel system from 1992, the database went online in 1995. Originally just for objects from the fine arts and decorative arts, in 2004 Joconde was united with what had been separate databases for objects from archeology and ethnology. It comes under the "Direction des Musées de France" (DMF) section of the Ministry. A small number of the best known objects have a prose commentary. Not all images are in colour, especially for the archaeological collections. When an object created a
https://en.wikipedia.org/wiki/Game%20physics
Computer animation physics or game physics are laws of physics as they are defined within a simulation or video game, and the programming logic used to implement these laws. Game physics vary greatly in their degree of similarity to real-world physics. Sometimes, the physics of a game may be designed to mimic the physics of the real world as accurately as is feasible, in order to appear realistic to the player or observer. In other cases, games may intentionally deviate from actual physics for gameplay purposes. Common examples in platform games include the ability to start moving horizontally or change direction in mid-air and the double jump ability found in some games. Setting the values of physical parameters, such as the amount of gravity present, is also a part of defining the game physics of a particular game. There are several elements that form components of simulation physics, including the physics engine, program code that is used to simulate Newtonian physics within the environment, and collision detection, used to solve the problem of determining when any two or more physical objects in the environment cross each other's path. Physics simulations There are two central types of physics simulations: rigid body and soft-body simulators. In a rigid body simulation, objects are grouped into categories based on how they should interact; such simulations are less performance-intensive. Soft-body physics involves simulating individual sections of each object such that it behaves in a more realistic way. Particle systems A common aspect of computer games that model some type of conflict is the explosion. Early computer games used the simple expedient of repeating the same explosion in each circumstance. However, in the real world an explosion can vary depending on the terrain, altitude of the explosion, and the type of solid bodies being impacted. Depending on the processing power available, the effects of the explosion can be modeled as the split and shattered components.
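As an illustration of the particle-based explosion modelling mentioned above, here is a minimal Python particle system: fragments receive random initial velocities and are then integrated under gravity each frame. All constants are illustrative assumptions rather than values from any particular engine.

import math, random

GRAVITY = -9.81  # m/s^2 along the y axis
DT = 1 / 60      # one frame at 60 Hz

def spawn_explosion(x, y, n=100, speed=25.0):
    # Fragments fly outward in random directions at random fractions of `speed`.
    particles = []
    for _ in range(n):
        angle = random.uniform(0, 2 * math.pi)
        v = random.uniform(0.2, 1.0) * speed
        particles.append({"pos": [x, y],
                          "vel": [v * math.cos(angle), v * math.sin(angle)]})
    return particles

def step(particles):
    for p in particles:
        p["vel"][1] += GRAVITY * DT       # apply gravity to the velocity
        p["pos"][0] += p["vel"][0] * DT   # semi-implicit Euler integration
        p["pos"][1] += p["vel"][1] * DT

particles = spawn_explosion(0.0, 0.0)
for _ in range(120):                      # simulate two seconds
    step(particles)
print(particles[0]["pos"])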
https://en.wikipedia.org/wiki/Dikaryon
The dikaryon is a nuclear feature that is unique to certain fungi. (The green alga Derbesia had long been considered an exception, until the heterokaryotic hypothesis was challenged by later studies.) Compatible cell types can fuse their cytoplasms (plasmogamy). When this occurs, the nuclei of the two cells pair off and cohabit without fusing (without karyogamy). This state can be maintained for all the cells of the hyphae by the paired nuclei dividing synchronously, so that pairs are passed on to newer cells. In the Ascomycota this attribute is most often found in the ascogenous hyphae and ascocarp, while the bulk of the mycelium remains monokaryotic. In the Basidiomycota this is the dominant phase, with most Basidiomycota monokaryons growing weakly and being short-lived. The formation of a dikaryon is a plesiomorphic character for the subkingdom Dikarya, which consists of the Basidiomycota and the Ascomycota. The formation of croziers in the Ascomycota and of clamp connections in the Basidiomycota facilitates maintenance of the dikaryons. However, some fungi in each of these phyla have evolved other methods for maintaining the dikaryons, and therefore neither croziers nor clamp connections are ubiquitous in either phylum. Etymology The name dikaryon comes from the Greek δι- (di-) meaning "two" and κάρυον (karyon) meaning "nut", referring to the cell nucleus. See also: Binucleated cells (as a pathological state), Heterokaryon, Multinucleated cells, Syncytium.
https://en.wikipedia.org/wiki/Sealed%20server
A sealed server is a type of server which is designed to run without users logging in. This setup has several potential benefits over a traditional server: Stronger security. Since users do not log in, it is possible for a sealed server to use stronger authentication than a password mechanism. Transparency. Since files are not accessed directly, a sealed server can store its payload in any format, without the clients needing any information about this. Less opportunity for user error. Since a user does not have full control over the files on the server, there is less opportunity for them to, for example, change the mode of a private file to be world-readable. A sealed server is primarily useful for data-centric mechanisms such as email, and is unsuited to file-centric protocols such as FTP.
https://en.wikipedia.org/wiki/Complex%20wavelet%20transform
The complex wavelet transform (CWT) is a complex-valued extension to the standard discrete wavelet transform (DWT). It is a two-dimensional wavelet transform which provides multiresolution, sparse representation, and useful characterization of the structure of an image. Further, it purveys a high degree of shift-invariance in its magnitude, a property that has been investigated in the literature. However, a drawback to this transform is that it exhibits 2^d redundancy compared to a separable DWT (where d is the dimension of the signal being transformed). The use of complex wavelets in image processing was originally set up in 1995 by J.M. Lina and L. Gagnon in the framework of the Daubechies orthogonal filter banks. It was then generalized in 1997 by Prof. Nick Kingsbury of Cambridge University. In the area of computer vision, by exploiting the concept of visual contexts, one can quickly focus on candidate regions, where objects of interest may be found, and then compute additional features through the CWT for those regions only. These additional features, while not necessary for global regions, are useful in accurate detection and recognition of smaller objects. Similarly, the CWT may be applied to detect the activated voxels of cortex, and additionally the temporal independent component analysis (tICA) may be utilized to extract the underlying independent sources whose number is determined by the Bayesian information criterion. Dual-tree complex wavelet transform The dual-tree complex wavelet transform (DTCWT) calculates the complex transform of a signal using two separate DWT decompositions (tree a and tree b). If the filters used in one are specifically designed differently from those in the other, it is possible for one DWT to produce the real coefficients and the other the imaginary. This redundancy of two provides extra information for analysis but at the expense of extra computational power. It also provides approximate shift-invariance (unlike the DWT) yet still allows perfect reconstruction.
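As a hedged sketch of the dual-tree transform in practice, the following uses the third-party dtcwt Python package (an implementation of Kingsbury's transform); the package and its API are assumptions of this example, not part of the article.

import numpy as np
import dtcwt

signal = np.random.randn(256)

transform = dtcwt.Transform1d()
pyramid = transform.forward(signal, nlevels=4)

# Each level holds complex coefficients; their magnitude is the
# approximately shift-invariant quantity discussed above.
for level, coeffs in enumerate(pyramid.highpasses, start=1):
    print(f"level {level}: {coeffs.shape[0]} complex coefficients")

reconstructed = np.ravel(transform.inverse(pyramid))  # near-perfect reconstruction
print("max reconstruction error:", float(np.max(np.abs(reconstructed - signal))))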
https://en.wikipedia.org/wiki/Subculture%20%28biology%29
In biology, a subculture is either a new cell culture or a microbiological culture made by transferring some or all cells from a previous culture to fresh growth medium. This action is called subculturing or passaging the cells. Subculturing is used to prolong the lifespan and/or increase the number of cells or microorganisms in the culture. Role Cell lines and microorganisms cannot be held in culture indefinitely due to the gradual rise in metabolites which may be toxic, the depletion of nutrients present in the culture medium, and an increase in cell count or population size due to growth. Once nutrients are depleted and levels of toxic byproducts increase, microorganisms in culture will enter the stationary phase, where proliferation is greatly reduced or ceased (the cell density value plateaus). When microorganisms from this culture are transferred into fresh media, nutrients trigger the growth of the microorganisms, which will go through lag phase, a period of slow growth and adaptation to the new environment, followed by log phase, a period where the cells grow exponentially. Subculture is therefore used to produce a new culture with a lower density of cells than the originating culture, fresh nutrients and no toxic metabolites, allowing continued growth of the cells without risk of cell death. Subculture is important for both proliferating (e.g. a microorganism like E. coli) and non-proliferating (e.g. terminally differentiated white blood cells) cells. Subculturing can also be used for growth curve calculations (e.g. generation time, as in the example below) and obtaining log-phase microorganisms for experiments (e.g. bacterial transformation). Typically, subculture is from a culture of a certain volume into fresh growth medium of equal volume; this allows long-term maintenance of the cell line. Subculture into a larger volume of growth medium is used when wanting to increase the number of cells for, for example, use in an industrial process or scientific experiment. Passage number
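As a small worked version of the growth-curve use just mentioned, the following computes a generation (doubling) time from two log-phase counts; the numbers are illustrative assumptions.

import math

n0 = 5.0e5        # cells/mL at the first measurement
n1 = 4.0e7        # cells/mL three hours later
elapsed_min = 180

generations = math.log2(n1 / n0)      # number of doublings in the interval
gen_time = elapsed_min / generations
print(f"{generations:.2f} generations, doubling time ~{gen_time:.1f} min")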
https://en.wikipedia.org/wiki/Kell%20factor
The Kell factor, named after RCA engineer Raymond D. Kell, is a parameter used to limit the bandwidth of a sampled image signal to avoid the appearance of beat frequency patterns when displaying the image on a discrete display device; it is usually taken to be 0.7. The number was first measured in 1934 by Raymond D. Kell and his associates as 0.64 but has undergone several revisions, given that it is based on image perception, hence subjective, and is not independent of the type of display. It was later revised to 0.85 but can go higher than 0.9 when fixed pixel scanning (e.g., CCD or CMOS) and fixed pixel displays (e.g., LCD or plasma) are used, or as low as 0.7 for electron gun scanning. From a different perspective, the Kell factor defines the effective resolution of a discrete display device, since the full resolution cannot be used without degrading the viewing experience. The actual sampled resolution will depend on the spot size and intensity distribution. For electron gun scanning systems, the spot usually has a Gaussian intensity distribution. For CCDs, the distribution is somewhat rectangular, and is also affected by the sampling grid and inter-pixel spacing. The Kell factor is sometimes incorrectly stated to exist to account for the effects of interlacing. Interlacing itself does not affect the Kell factor, but because interlaced video must be low-pass filtered (i.e., blurred) in the vertical dimension to avoid spatio-temporal aliasing (i.e., flickering effects), the Kell factor of interlaced video is said to be about 70% that of progressive video with the same scan line resolution. The beat frequency problem To understand how the distortion comes about, consider an ideal linear process from sampling to display. When a signal is sampled at a rate at least double its highest frequency, it can be fully reconstructed by low-pass filtering, since the first repeat spectrum does not overlap the original baseband spectrum. In discrete displays the image signal is n
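A quick worked example of the Kell factor as effective resolution, using the typical 0.7 figure quoted above and the roughly 70% interlace penalty:

KELL = 0.7

for name, lines in [("480-line progressive", 480), ("1080-line progressive", 1080)]:
    print(f"{name}: ~{lines * KELL:.0f} effective lines")

# Interlaced video is further reduced to about 70% of the progressive figure.
print(f"1080-line interlaced: ~{1080 * KELL * 0.7:.0f} effective lines")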
https://en.wikipedia.org/wiki/List%20of%20storage%20area%20network%20management%20systems
This is a list of storage area network (SAN) management systems. A storage area network is a dedicated network that provides access to consolidated, block-level data storage. Systems: Brocade Network Advisor, Cisco Fabric Manager, Enterprise Fabric Connectivity (EFC) Manager, EMC ControlCenter, EMC VisualSRM, EMC Invista, Hitachi Data Systems HiCommand, HP OpenView Storage Area Manager, IBM SAN Volume Controller, Symantec Veritas Command Central Storage, KernSafe Cross-Platform iSCSI SAN.
https://en.wikipedia.org/wiki/Interruptible%20operating%20system
An interruptible operating system is an operating system with the ability to handle multiple interrupts concurrently, or in other words, one that allows interrupts themselves to be interrupted. Concurrent interrupt handling essentially means concurrent execution of kernel code, and hence induces the additional complexity of concurrency control in accessing kernel data structures. It also means that the system can stop any program that is already running, a feature of nearly all modern operating systems. See also: Operating system.
https://en.wikipedia.org/wiki/Coherence%20%28statistics%29
In probability theory and statistics, coherence can have several different meanings. Coherence in statistics is an indication of the quality of the information, either within a single data set, or between similar but not identical data sets. Fully coherent data are logically consistent and can be reliably combined for analysis. In probability When dealing with personal probability assessments, or supposed probabilities derived in nonstandard ways, coherence is a property of self-consistency across a whole set of such assessments. In gambling strategy One way of expressing such self-consistency is in terms of responses to various betting propositions, as described in relation to coherence (philosophical gambling strategy). In Bayesian decision theory The coherency principle in Bayesian decision theory is the assumption that subjective probabilities follow the ordinary rules/axioms of probability calculations (where the validity of these rules corresponds to the self-consistency just referred to), and thus that consistent decisions can be obtained from these probabilities. In time series analysis In time series analysis, and particularly in spectral analysis, coherence is used to describe the strength of association between two series where the possible dependence between the two series is not limited to simultaneous values but may include leading, lagged and smoothed relationships. The concepts here are sometimes known as coherency and are essentially those set out for coherence in signal processing. However, note that the quantity called the coefficient of coherence may sometimes be called the squared coherence.
https://en.wikipedia.org/wiki/Concurrency%20semantics
In computer science, concurrency semantics is a way to give mathematically rigorous meaning to concurrent systems. Concurrency semantics is often based on mathematical theories of concurrency such as various process calculi, the actor model, or Petri nets. A more detailed account of concurrency semantics is given at Concurrency (computer science).
https://en.wikipedia.org/wiki/Mo-Sai
Mo-Sai is a method of producing precast concrete cladding panels. It was patented by John Joseph Earley in 1940. The Mo-Sai Institute later refined Earley's method and became the leader in exposed aggregate concrete. The Mo-Sai Institute, an organization of precast concrete manufacturers, adhered to the Mo-Sai method of producing the exposed aggregate precast concrete panels. A pivotal development in this technique occurred in 1938, when the administration buildings at the David Taylor Model Basin were built with panels used as permanent forms for cast-in-place walls. This was the first use of the Mo-Sai manufacturing technique, produced in collaboration with the Dextone Company of New Haven, Connecticut. Working from this background, the Dextone Company refined the method and in 1940 obtained patents and copyrights for the methods under which Mo-Sai Associates, later known as Mo-Sai Institute Inc., operated. The Mo-Sai Institute grew to include a number of licensed manufacturing firms throughout the United States. Buildings featuring Mo-Sai panels include the Columbine Building in Colorado Springs (1960), the Prudential Building in Toronto, Ontario, Canada (1960), the Denver Hilton Hotel (now the Sheraton Denver) in Denver, Colorado (1960), the Los Angeles Temple (1956), the Equitable Center in Portland, Oregon (1964), the Hartford National Bank and Trust in Hartford, CT (1967) and the PanAm Building in New York City (1962).
https://en.wikipedia.org/wiki/Universal%20Transverse%20Mercator%20coordinate%20system
The Universal Transverse Mercator (UTM) is a map projection system for assigning coordinates to locations on the surface of the Earth. Like the traditional method of latitude and longitude, it is a horizontal position representation, which means it ignores altitude and treats the Earth's surface as a perfect ellipsoid. However, it differs from global latitude/longitude in that it divides the Earth into 60 zones and projects each to the plane as a basis for its coordinates. Specifying a location means specifying the zone and the x, y coordinates in that plane. The projection from spheroid to a UTM zone is some parameterization of the transverse Mercator projection. The parameters vary by nation or region or mapping system. Most zones in UTM span 6 degrees of longitude, and each has a designated central meridian. The scale factor at the central meridian is specified to be 0.9996 of true scale for most UTM systems in use. History The National Oceanic and Atmospheric Administration (NOAA) website states that the system was developed by the United States Army Corps of Engineers, starting in the early 1940s. However, a series of aerial photos found in the Bundesarchiv-Militärarchiv (the military section of the German Federal Archives), apparently dating from 1943–1944, bear the inscription UTMREF followed by grid letters and digits, and are projected according to the transverse Mercator, a finding that would indicate that something called the UTM Reference system was developed in the 1942–43 time frame by the Wehrmacht. It was probably carried out by the Abteilung für Luftbildwesen (Department for Aerial Photography). From 1947 onward the US Army employed a very similar system, but with the now-standard 0.9996 scale factor at the central meridian as opposed to the German 1.0. For areas within the contiguous United States, the Clarke Ellipsoid of 1866 was used. For the remaining areas of Earth, including Hawaii, the International Ellipsoid was used. The World Geodetic System WGS84 ellipsoid is now generally used to model the Earth in the UTM coordinate system.
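As a hedged sketch of UTM in practice, the following uses the third-party pyproj package (an assumption of this example) to compute the 6-degree longitude zone and project a WGS84 latitude/longitude into that zone's easting and northing; EPSG codes 326xx denote WGS84 / UTM northern-hemisphere zones.

from pyproj import Transformer

lat, lon = 48.8584, 2.2945                # an example point near Paris

zone = int((lon + 180) // 6) + 1          # most zones span 6 degrees of longitude
epsg = 32600 + zone                       # northern-hemisphere WGS84/UTM zone code
to_utm = Transformer.from_crs("EPSG:4326", f"EPSG:{epsg}", always_xy=True)

easting, northing = to_utm.transform(lon, lat)
print(f"zone {zone}: E = {easting:.1f} m, N = {northing:.1f} m")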
https://en.wikipedia.org/wiki/Information%20technology%20consulting
In management, information technology consulting (also called IT consulting, computer consultancy, business and technology services, computing consultancy, technology consulting, and IT advisory) is a field of activity which focuses on advising organizations on how best to use information technology (IT) in achieving their business objectives, but it can also refer more generally to IT outsourcing. Once a business owner has defined the needs to take the business to the next level, a decision maker will define the scope, cost and time frame of the project. The role of the IT consultancy company is to support and nurture the company from the very beginning of the project until the end, and to deliver the project not only within the agreed scope, time and cost but also to the customer's complete satisfaction. See also: List of major IT consulting firms, Consultant, Outsourcing.
https://en.wikipedia.org/wiki/Graded%20poset
In mathematics, in the branch of combinatorics, a graded poset is a partially ordered set (poset) P equipped with a rank function ρ from P to the set N of all natural numbers. ρ must satisfy the following two properties: The rank function is compatible with the ordering, meaning that for all x and y in the order, if x < y then ρ(x) < ρ(y), and The rank is consistent with the covering relation of the ordering, meaning that for all x and y, if y covers x then ρ(y) = ρ(x) + 1. The value of the rank function for an element of the poset is called its rank. Sometimes a graded poset is called a ranked poset, but that phrase has other meanings; see Ranked poset. A rank or rank level of a graded poset is the subset of all the elements of the poset that have a given rank value. Graded posets play an important role in combinatorics and can be visualized by means of a Hasse diagram. Examples Some examples of graded posets (with the rank function in parentheses) are: the natural numbers N with their usual order (rank: the number itself), or some interval [0, N] of this poset, N^n, with the product order (sum of the components), or a subposet of it that is a product of intervals, the positive integers, ordered by divisibility (number of prime factors, counted with multiplicity), or a subposet of it formed by the divisors of a fixed N, the Boolean lattice of finite subsets of a set (number of elements of the subset), the lattice of partitions of a set into finitely many parts, ordered by reverse refinement (number of parts), the lattice of partitions of a finite set X, ordered by refinement (number of elements of X minus number of parts), a group and a generating set, or equivalently its Cayley graph, ordered by the weak or strong Bruhat order, and ranked by word length (length of shortest reduced word). In particular for Coxeter groups, for example permutations of a totally ordered n-element set, with either the weak or strong Bruhat order (number of adjacent inversions).
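As a small sketch of the definition, the following Python verifies both axioms for one of the examples above: the Boolean lattice of subsets of a 3-element set, ranked by number of elements.

from itertools import chain, combinations

universe = (0, 1, 2)
elements = [frozenset(c) for c in chain.from_iterable(
    combinations(universe, r) for r in range(len(universe) + 1))]

def rank(x):
    return len(x)                       # rank = number of elements of the subset

def less(x, y):
    return x < y                        # strict containment orders the lattice

def covers(x, y):
    return x < y and len(y) == len(x) + 1

# Axiom 1: x < y implies rank(x) < rank(y).
assert all(rank(x) < rank(y) for x in elements for y in elements if less(x, y))
# Axiom 2: if y covers x, then rank(y) = rank(x) + 1.
assert all(rank(y) == rank(x) + 1 for x in elements for y in elements if covers(x, y))
print("both rank axioms hold")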
https://en.wikipedia.org/wiki/DisplayPort
DisplayPort (DP) is a digital display interface developed by a consortium of PC and chip manufacturers and standardized by the Video Electronics Standards Association (VESA). It is primarily used to connect a video source to a display device such as a computer monitor. It can also carry audio, USB, and other forms of data. DisplayPort was designed to replace VGA, FPD-Link, and Digital Visual Interface (DVI). It is backward compatible with other interfaces, such as HDMI and DVI, through the use of either active or passive adapters. It is the first display interface to rely on packetized data transmission, a form of digital communication found in technologies such as Ethernet, USB, and PCI Express. It permits the use of internal and external display connections. Unlike legacy standards that transmit a clock signal with each output, its protocol is based on small data packets known as micro packets, which can embed the clock signal in the data stream, allowing higher resolution using fewer pins. The use of data packets also makes it extensible, meaning more features can be added over time without significant changes to the physical interface. DisplayPort can be used to transmit audio and video simultaneously, although each can be transmitted without the other. The video signal path can range from six to sixteen bits per color channel, and the audio path can have up to eight channels of 24-bit, 192 kHz uncompressed PCM audio. A bidirectional, half-duplex auxiliary channel carries device management and device control data for the Main Link, such as the VESA EDID, MCCS, and DPMS standards. The interface is also capable of carrying bidirectional USB signals. The interface uses a differential signal that is not compatible with DVI or HDMI. However, dual-mode DisplayPort ports are designed to transmit a single-link DVI or HDMI protocol (TMDS) across the interface through the use of an external passive adapter, enabling compatibility mode and converting the signal from 3.3 to 5 volts.
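A quick worked example from the audio figures above: the maximum uncompressed PCM payload is 8 channels of 24-bit samples at 192 kHz.

channels, bit_depth, sample_rate = 8, 24, 192_000
bits_per_second = channels * bit_depth * sample_rate
print(f"{bits_per_second / 1e6:.2f} Mbit/s of raw PCM audio")  # ~36.86 Mbit/s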
https://en.wikipedia.org/wiki/Universal%20polar%20stereographic%20coordinate%20system
The universal polar stereographic (UPS) coordinate system is used in conjunction with the universal transverse Mercator (UTM) coordinate system to locate positions on the surface of the Earth. Like the UTM coordinate system, the UPS coordinate system uses a metric-based Cartesian grid laid out on a conformally projected surface. UPS covers the Earth's polar regions, specifically the areas north of 84°N and south of 80°S, which are not covered by the UTM grids, plus an additional 30 minutes of latitude extending into the UTM grid to provide some overlap between the two systems. In the polar regions, directions can become complicated, with all geographic north–south lines converging at the poles. The difference between UPS grid north and true north can therefore be anything up to 180°—in some places, grid north is true south, and vice versa. UPS grid north is arbitrarily defined as being along the prime meridian in the Antarctic and the 180th meridian in the Arctic; thus, east and west on the grids when moving directly away from the pole are along the 90°E and 90°W meridians respectively. Projection system As the name indicates, the UPS system uses a stereographic projection. Specifically, the projection used in the system is a secant version based on an elliptical model of the earth. The scale factor at each pole is adjusted to 0.994 so that the latitude of true scale is 81.11451786859362545° (about 81° 06' 52.3") North and South. The scale factor inside the regions at latitudes higher than this parallel is too small, whereas the regions at latitudes below this line have scale factors that are too large, reaching 1.0016 at 80° latitude. The scale factor at the origin (the poles) is adjusted to minimize the overall distortion of scale within the mapped region. As with the Mercator projection, the region near the tangent (or secant) point on a stereographic map remains very close to true scale for an angular distance of a few degrees. In the ellipsoidal model, a stereog
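The scale-factor numbers above can be sanity-checked with the spherical polar stereographic formula k(lat) = 2·k0 / (1 + sin(lat)), a simplifying assumption used here, since the quoted 81.114...° figure comes from the ellipsoidal model.

import math

k0 = 0.994                                     # scale factor at the pole

# Latitude of true scale: solve 2*k0 / (1 + sin(lat)) = 1 for lat.
lat_true = math.degrees(math.asin(2 * k0 - 1))
print(f"true scale near {lat_true:.2f} deg (ellipsoidal value ~81.11 deg)")

# Scale factor at 80 deg latitude, the edge of the UPS region.
k80 = 2 * k0 / (1 + math.sin(math.radians(80)))
print(f"k at 80 deg: {k80:.4f}")               # close to the quoted 1.0016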
https://en.wikipedia.org/wiki/Perfect%20ruler
A perfect ruler of length n is a ruler with integer markings a_1 = 0 < a_2 < … < a_k = n, for which there exists an integer m such that any positive integer d ≤ m is uniquely expressed as the difference d = a_i − a_j for some i, j. This is referred to as an m-perfect ruler. An optimal perfect ruler is one of the smallest length for fixed values of m and k. Example A 4-perfect ruler of length 7 is given by (0, 1, 3, 7). To verify this, we need to show that every positive integer from 1 to 4 is uniquely expressed as the difference of two markings: 1 = 1 − 0, 2 = 3 − 1, 3 = 3 − 0, 4 = 7 − 3. See also: Golomb ruler, Sparse ruler, All-interval tetrachord.
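The verification above is mechanical, so here is a short Python checker for the m-perfect property, run against the (0, 1, 3, 7) example:

from itertools import combinations
from collections import Counter

def is_m_perfect(marks, m):
    # Every d in 1..m must arise as a difference of two marks in exactly one way.
    diffs = Counter(b - a for a, b in combinations(sorted(marks), 2))
    return all(diffs[d] == 1 for d in range(1, m + 1))

print(is_m_perfect((0, 1, 3, 7), 4))  # True: 1=1-0, 2=3-1, 3=3-0, 4=7-3
print(is_m_perfect((0, 1, 3, 7), 5))  # False: 5 is not a difference at all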
https://en.wikipedia.org/wiki/Regime%20shift
Regime shifts are large, abrupt, persistent changes in the structure and function of ecosystems, the climate, financial systems or other complex systems. A regime is a characteristic behaviour of a system which is maintained by mutually reinforcing processes or feedbacks. Regimes are considered persistent relative to the time period over which the shift occurs. The change of regimes, or the shift, usually occurs when a smooth change in an internal process (feedback) or a single disturbance (external shock) triggers a completely different system behavior. Although such non-linear changes have been widely studied in different disciplines ranging from atoms to climate dynamics, regime shifts have gained importance in ecology because they can substantially affect the flow of ecosystem services that societies rely upon, such as provision of food, clean water or climate regulation. Moreover, regime shift occurrence is expected to increase as human influence on the planet increases – the Anthropocene – including current trends in human-induced climate change and biodiversity loss. When regime shifts are associated with a critical or bifurcation point, they may also be referred to as critical transitions. History of the concept Scholars have been interested in systems exhibiting non-linear change for a long time. Since the early twentieth century, mathematicians have developed a body of concepts and theory for the study of such phenomena based on the study of non-linear system dynamics. This research led to the development of concepts such as catastrophe theory, a branch of bifurcation theory in dynamical systems. In ecology, the idea of systems with multiple regimes, or domains of attraction called alternative stable states, arose only in the late '60s, based upon the first reflections on the meaning of stability in ecosystems by Richard Lewontin and Crawford "Buzz" Holling. The first work on regime shifts in ecosystems was done in a diversity of ecosystems and included impor
https://en.wikipedia.org/wiki/Structural%20stability
In mathematics, structural stability is a fundamental property of a dynamical system which means that the qualitative behavior of the trajectories is unaffected by small perturbations (to be exact, C1-small perturbations). Examples of such qualitative properties are the numbers of fixed points and periodic orbits (but not their periods). Unlike Lyapunov stability, which considers perturbations of initial conditions for a fixed system, structural stability deals with perturbations of the system itself. Variants of this notion apply to systems of ordinary differential equations, vector fields on smooth manifolds and flows generated by them, and diffeomorphisms. Structurally stable systems were introduced by Aleksandr Andronov and Lev Pontryagin in 1937 under the name "systèmes grossiers", or rough systems. They announced a characterization of rough systems in the plane, the Andronov–Pontryagin criterion. In this case, structurally stable systems are typical: they form an open dense set in the space of all systems endowed with an appropriate topology. In higher dimensions this is no longer true, indicating that typical dynamics can be very complex (cf. strange attractor). An important class of structurally stable systems in arbitrary dimensions is given by Anosov diffeomorphisms and flows. During the late 1950s and the early 1960s, Maurício Peixoto and Marília Chaves Peixoto, motivated by the work of Andronov and Pontryagin, developed and proved Peixoto's theorem, the first global characterization of structural stability. Definition Let G be an open domain in Rn with compact closure and smooth (n−1)-dimensional boundary. Consider the space X1(G) consisting of restrictions to G of C1 vector fields on Rn that are transversal to the boundary of G and are inward oriented. This space is endowed with the C1 metric in the usual fashion. A vector field F ∈ X1(G) is weakly structurally stable if for any sufficiently small perturbation F1, the corresponding flows are topologically equivalent on G.
https://en.wikipedia.org/wiki/Generation%20of%20primes
In computational number theory, a variety of algorithms make it possible to generate prime numbers efficiently. These are used in various applications, for example hashing, public-key cryptography, and the search for prime factors of large numbers. For relatively small numbers, it is possible to just apply trial division to each successive odd number, but prime sieves are almost always faster. Prime sieving is the fastest known way to deterministically enumerate the primes. There are some known formulas that can calculate the next prime, but there is no known way to express the next prime in terms of the previous primes. Also, there is no effective known general manipulation and/or extension of some mathematical expression (even one including later primes) that deterministically calculates the next prime. Prime sieves A prime sieve or prime number sieve is a fast type of algorithm for finding primes. There are many prime sieves. The simple sieve of Eratosthenes (250s BCE), the sieve of Sundaram (1934), the still faster but more complicated sieve of Atkin (2003), and various wheel sieves are most common. A prime sieve works by creating a list of all integers up to a desired limit and progressively removing composite numbers (which it directly generates) until only primes are left. This is the most efficient way to obtain a large range of primes; however, to find individual primes, direct primality tests are more efficient. Furthermore, based on the sieve formalisms, some integer sequences are constructed which also could be used for generating primes in certain intervals. Large primes For the large primes used in cryptography, provable primes can be generated based on variants of the Pocklington primality test, while probable primes can be generated with probabilistic primality tests such as the Baillie–PSW primality test or the Miller–Rabin primality test. Both the provable and probable primality tests rely on modular exponentiation. To further reduce the computational c
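As an illustration, here is the simple sieve of Eratosthenes named above in Python: list the integers up to a limit and progressively strike out composites, leaving only primes.

def primes_up_to(limit):
    is_prime = [True] * (limit + 1)
    is_prime[0:2] = [False, False]
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            # Strike out multiples of p; smaller multiples were already struck.
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False
    return [n for n, prime in enumerate(is_prime) if prime]

print(primes_up_to(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]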
https://en.wikipedia.org/wiki/Herbrand%27s%20theorem
Herbrand's theorem is a fundamental result of mathematical logic obtained by Jacques Herbrand (1930). It essentially allows a certain kind of reduction of first-order logic to propositional logic. Herbrand's theorem is the logical foundation for most automatic theorem provers. Although Herbrand originally proved his theorem for arbitrary formulas of first-order logic, the simpler version shown here, restricted to formulas in prenex form containing only existential quantifiers, became more popular. Statement Let ∃y_1 … ∃y_n F(y_1, …, y_n) be a formula of first-order logic with F quantifier-free, though it may contain additional free variables. This version of Herbrand's theorem states that the above formula is valid if and only if there exists a finite sequence of terms t_ij, possibly in an expansion of the language, with 1 ≤ i ≤ k and 1 ≤ j ≤ n, such that F(t_11, …, t_1n) ∨ … ∨ F(t_k1, …, t_kn) is valid. If it is valid, it is called a Herbrand disjunction for ∃y_1 … ∃y_n F(y_1, …, y_n). Informally: a formula in prenex form containing only existential quantifiers is provable (valid) in first-order logic if and only if a disjunction composed of substitution instances of its quantifier-free subformula is a tautology (propositionally derivable). The restriction to formulas in prenex form containing only existential quantifiers does not limit the generality of the theorem, because formulas can be converted to prenex form and their universal quantifiers can be removed by Herbrandization. Conversion to prenex form can be avoided if structural Herbrandization is performed. Herbrandization can be avoided by imposing additional restrictions on the variable dependencies allowed in the Herbrand disjunction. Proof sketch A proof of the non-trivial direction of the theorem can be constructed according to the following steps: If the formula is valid, then by completeness of cut-free sequent calculus, which follows from Gentzen's cut-elimination theorem, there is a cut-free proof of it. Starting from the leaves and working downwards, remove the inferences that introduce existential quantifiers.
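As a toy illustration of the theorem (my own example, not from the article): the valid formula ∃y (P(y) → P(f(y))) has the Herbrand disjunction (P(c) → P(f(c))) ∨ (P(f(c)) → P(f(f(c)))) over the terms c and f(c), and a brute-force check over truth assignments confirms that this disjunction is a propositional tautology.

from itertools import product

atoms = ("P(c)", "P(f(c))", "P(f(f(c)))")

def herbrand_disjunction(v):
    def implies(a, b):
        return (not a) or b
    return implies(v["P(c)"], v["P(f(c))"]) or implies(v["P(f(c))"], v["P(f(f(c)))"])

is_tautology = all(herbrand_disjunction(dict(zip(atoms, values)))
                   for values in product((False, True), repeat=len(atoms)))
print(is_tautology)  # True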
https://en.wikipedia.org/wiki/Thickening%20agent
A thickening agent or thickener is a substance which can increase the viscosity of a liquid without substantially changing its other properties. Edible thickeners are commonly used to thicken sauces, soups, and puddings without altering their taste; thickeners are also used in paints, inks, explosives, and cosmetics. Thickeners may also improve the suspension of other ingredients or emulsions, which increases the stability of the product. Thickening agents are often regulated as food additives and as ingredients of cosmetics and personal hygiene products. Some thickening agents are gelling agents (gellants), forming a gel, dissolving in the liquid phase as a colloid mixture that forms a weakly cohesive internal structure. Others act as mechanical thixotropic additives with discrete particles adhering or interlocking to resist strain. Thickening agents can also be used when a medical condition such as dysphagia causes difficulty in swallowing. Thickened liquids play a vital role in reducing the risk of aspiration for dysphagia patients. Many other food ingredients are used as thickeners, usually in the final stages of preparation of specific foods. These thickeners have a flavor and are not markedly stable, and thus are not suitable for general use. However, they are very convenient and effective, and hence are widely used. Different thickeners may be more or less suitable in a given application, due to differences in taste, clarity, and their responses to chemical and physical conditions. For example, for acidic foods, arrowroot is a better choice than cornstarch, which loses thickening potency in acidic mixtures. At (acidic) pH levels below 4.5, guar gum has sharply reduced aqueous solubility, thus also reducing its thickening capability. If the food is to be frozen, tapioca or arrowroot are preferable over cornstarch, which becomes spongy when frozen. Types Food thickeners frequently are based on either polysaccharides (starches, vegetable gums, and pectin) or proteins.
https://en.wikipedia.org/wiki/Prescaler
A prescaler is an electronic counting circuit used to reduce a high-frequency electrical signal to a lower frequency by integer division. The prescaler takes the basic timer clock frequency (which may be the CPU clock frequency or may be some higher or lower frequency) and divides it by some value before feeding it to the timer, according to how the prescaler register(s) are configured. The prescaler values, referred to as prescales, that may be configured might be limited to a few fixed values (powers of 2), or they may be any integer value from 1 to 2^P, where P is the number of prescaler bits. The purpose of the prescaler is to allow the timer to be clocked at the rate a user desires. For shorter (8- and 16-bit) timers, there will often be a tradeoff between resolution (high resolution requires a high clock rate) and range (high clock rates cause the timer to overflow more quickly). For example, one cannot (without some tricks) achieve 1 µs resolution and a 1 s maximum period using a 16-bit timer. In this example, using 1 µs resolution would limit the period to about 65 ms maximum. However, the prescaler allows tweaking the ratio between resolution and maximum period to achieve a desired effect; a worked example follows below. Example of use Prescalers are typically used at very high frequency to extend the upper frequency range of frequency counters, phase-locked loop (PLL) synthesizers, and other counting circuits. When used in conjunction with a PLL, a prescaler introduces a normally undesired change in the relationship between the frequency step size and phase detector comparison frequency. For this reason, it is common to either restrict the integer to a low value, or use a dual-modulus prescaler in this application. A dual-modulus prescaler is one that has the ability to selectively divide the input frequency by one of two (normally consecutive) integers, such as 32 and 33. Common fixed-integer microwave prescalers are available in modulus 2, 4, 8, 5 and 10, and can operate at frequencie
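Here is the worked tradeoff promised above for a 16-bit timer; the 8 MHz input clock and the prescale set are illustrative assumptions.

CLOCK_HZ = 8_000_000   # timer input clock
TIMER_BITS = 16

for prescale in (1, 8, 64, 256):
    tick = prescale / CLOCK_HZ               # resolution of one timer tick
    max_period = tick * (2 ** TIMER_BITS)    # time until the counter overflows
    print(f"prescale {prescale:>3}: tick {tick * 1e6:8.3f} us, "
          f"max period {max_period * 1e3:9.3f} ms")

# Larger prescales coarsen the resolution but extend the maximum period.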
https://en.wikipedia.org/wiki/Star%20product
In mathematics, the star product is a method of combining graded posets with unique minimal and maximal elements, preserving the property that the posets are Eulerian. Definition The star product of two graded posets P and Q, where P has a unique maximal element and Q has a unique minimal element, is a poset P * Q on the set formed by removing the maximal element from P and the minimal element from Q and taking the union of what remains. The partial order is defined by x ≤ y if and only if: 1. x and y are in P, and x ≤ y in P; 2. x and y are in Q, and x ≤ y in Q; or 3. x is in P and y is in Q. In other words, we pluck out the top of P and the bottom of Q, and require that everything in P be smaller than everything in Q. Example For example, suppose P and Q are both the Boolean algebra on two elements. Then P * Q is the six-element poset in which the minimum element lies below two incomparable elements, each of those lies below each of two further incomparable elements, and both of the latter lie below the maximum element. Properties The star product of Eulerian posets is Eulerian. See also: Product order, a different way of combining posets.
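As a small sketch of the definition, the following Python builds the star product of two posets given as (elements, strict-order) pairs, and reproduces the six-element example above with two copies of the Boolean algebra on two elements (elements are tagged "P" or "Q" to keep the union disjoint).

from itertools import chain, combinations

def boolean_algebra(universe):
    els = [frozenset(c) for c in chain.from_iterable(
        combinations(universe, r) for r in range(len(universe) + 1))]
    order = {(x, y) for x in els for y in els if x < y}   # strict containment
    return els, order

def star(p_els, p_ord, q_els, q_ord):
    p_top = max(p_els, key=len)                  # unique maximal element of P
    q_bot = min(q_els, key=len)                  # unique minimal element of Q
    p_part = [("P", x) for x in p_els if x != p_top]
    q_part = [("Q", x) for x in q_els if x != q_bot]
    order = ({(("P", a), ("P", b)) for a, b in p_ord if p_top not in (a, b)}
             | {(("Q", a), ("Q", b)) for a, b in q_ord if q_bot not in (a, b)}
             | {(x, y) for x in p_part for y in q_part})  # all of P below all of Q
    return p_part + q_part, order

b2 = boolean_algebra("ab")
elements, order = star(*b2, *b2)
print(len(elements), "elements,", len(order), "strict relations")  # 6 and 13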
https://en.wikipedia.org/wiki/Stream%20capture
Stream capture, river capture, river piracy or stream piracy is a geomorphological phenomenon occurring when a stream or river drainage system or watershed is diverted from its own bed, and flows instead down the bed of a neighbouring stream. This can happen for several reasons, including: Tectonic earth movements, where the slope of the land changes, and the stream is tipped out of its former course Natural damming, such as by a landslide or ice sheet Erosion, either Headward erosion of one stream valley upwards into another, or Lateral erosion of a meander through the higher ground dividing the adjacent streams Within an area of karst topography, where streams may sink, or flow underground (a sinking or losing stream) and then reappear in a nearby stream valley Glacier retreat The additional water flowing down the capturing stream may accelerate erosion and encourage the development of a canyon (gorge). The now-dry valley of the original stream is known as a wind gap. Capture mechanisms Sea level rise The Kaituna and Pelorus rivers, New Zealand: About 8,000 years ago, a single river was divided by sea water to form two rivers. Tectonic uplift Barmah Choke: About 25,000 years ago, an uplift of the plains near Moama on the Cadell Fault first dammed the Murray River and then forced it to take a new course. The new course dug its way through the so-called Barmah Choke and captured the lower course of the Goulburn River. Indus-Sutlej-Sarasvati-Yamuna: The Yamuna earlier flowed into the Ghaggar-Hakra River (identified with the Sarasvati River) and later changed its course due to plate tectonics. The Sutlej River flowed into the current channel of the Ghaggar-Hakra River until the 13th century, after which it was captured by the Indus River due to plate tectonics. Barrier Range: It was theorised that the original course of the Murray River was to a mouth near Port Pirie, where a large delta is still visible protruding into the calm waters of Spencer Gulf.
https://en.wikipedia.org/wiki/384%20%28number%29
384 (three hundred [and] eighty-four) is the natural number following 383 and preceding 385. It is an even composite positive integer. In mathematics 384 is: the sum of a twin prime pair (191 + 193). the sum of six consecutive primes (53 + 59 + 61 + 67 + 71 + 73). the order of the hyperoctahedral group for n = 4. the double factorial of 8. an abundant number. the third 129-gonal number, after 1 and 129 and before 766 and 1275. a Harshad number in bases 2, 3, 4, 5, 7, 8, 9, 13, 17, and 62 other bases. a refactorable number. Computing Being a low multiple of a power of two, 384 occurs often in the field of computing. For example, the digest length of the secure hash function SHA-384 is 384 bits, the screen resolution of the Virtual Boy is 384×224, the maximum bit rate of MPEG-1 Audio Layer II is 384 kbit/s, and in 3G phones the WAN implementation of CDMA runs at up to 384 kbit/s.
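Several of the claims above are easy to double-check mechanically:

def proper_divisors(n):
    return [d for d in range(1, n) if n % d == 0]

assert 191 + 193 == 384                     # sum of a twin prime pair
assert 53 + 59 + 61 + 67 + 71 + 73 == 384   # sum of six consecutive primes
assert 8 * 6 * 4 * 2 == 384                 # double factorial 8!!
assert 2**4 * 24 == 384                     # hyperoctahedral group order, 2^n * n! with n = 4
assert sum(proper_divisors(384)) > 384      # abundant: proper divisors sum to 636
print("all checks pass")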
https://en.wikipedia.org/wiki/Plant%20milk
Plant milk is a plant beverage with a color resembling that of milk. Plant milks are non-dairy beverages made from a water-based plant extract for flavoring and aroma. Plant milks are consumed as alternatives to dairy milk, and may provide a creamy mouthfeel. As of 2021, there were about 17 different types of plant milk; almond, oat, soy, coconut, and pea are the highest-selling worldwide. Production of plant-based milks, particularly soy, oat, and pea milks, can offer environmental advantages over animal milks in terms of greenhouse gas emissions, and land and water use. Plant-based beverages have been consumed for centuries, with the term "milk-like plant juices" in use since the 13th century. In the 21st century, they are commonly referred to as plant-based milk, alternative milk, non-dairy milk or vegan milk. For commerce, plant-based beverages are typically packaged in containers similar and competitive to those used for dairy milk, but cannot be labeled as "milk" within the European Union. Across various cultures, plant milk has been both a beverage and a flavor ingredient in sweet and savory dishes, such as the use of coconut milk in curries. It is compatible with vegetarian and vegan lifestyles. Plant milks are also used to make ice cream alternatives, plant cream, vegan cheese, and yogurt analogues, such as soy yogurt. The global plant milk market was estimated to reach 62 billion by 2030. History Before the commercial production of 'milks' from legumes, beans and nuts, plant-based mixtures resembling milk had existed for centuries. The Wabanaki and other Native American tribal nations in the northeastern United States made milk and infant formula from nuts. Horchata, a beverage originally made in North Africa from soaked, ground, and sweetened tiger nuts, spread to Iberia (now Spain) before the year 1000. In English, the word "milk" has been used to refer to "milk-like plant juices" since 1200 CE. Recipes from the 13th-century Levant exist describing almond milk.
https://en.wikipedia.org/wiki/Ethernet%20flow%20control
Ethernet flow control is a mechanism for temporarily stopping the transmission of data on Ethernet family computer networks. The goal of this mechanism is to avoid packet loss in the presence of network congestion. The first flow control mechanism, the pause frame, was defined by the IEEE 802.3x standard. The follow-on priority-based flow control, as defined in the IEEE 802.1Qbb standard, provides a link-level flow control mechanism that can be controlled independently for each class of service (CoS), as defined by IEEE P802.1p, and is applicable to data center bridging (DCB) networks; it allows for prioritization of voice over IP (VoIP), video over IP, and database synchronization traffic over default data traffic and bulk file transfers. Description A sending station (computer or network switch) may be transmitting data faster than the other end of the link can accept it. Using flow control, the receiving station can signal the sender, requesting suspension of transmissions until the receiver catches up. Flow control on Ethernet can be implemented at the data link layer. The first flow control mechanism, the pause frame, was defined by the Institute of Electrical and Electronics Engineers (IEEE) task force that defined full duplex Ethernet link segments. The IEEE standard 802.3x was issued in 1997. Pause frame An overwhelmed network node can send a pause frame, which halts the transmission of the sender for a specified period of time. A media access control (MAC) frame (EtherType 0x8808) is used to carry the pause command, with the Control opcode set to 0x0001 (hexadecimal). Only stations configured for full-duplex operation may send pause frames. When a station wishes to pause the other end of a link, it sends a pause frame to either the unique 48-bit destination address of this link or to the 48-bit reserved multicast address of 01-80-C2-00-00-01. The use of a well-known address makes it unnecessary for a station to discover and store the address of the station at the other end of the link.
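The pause-frame layout described above is simple enough to build by hand; here is a sketch using Python's struct module (the source MAC is a made-up example address).

import struct

PAUSE_MCAST = bytes.fromhex("0180C2000001")   # reserved multicast destination
SRC_MAC = bytes.fromhex("020000000001")       # hypothetical sender address

def pause_frame(pause_quanta):
    # MAC control frame: EtherType 0x8808, opcode 0x0001, 2-byte pause time.
    header = PAUSE_MCAST + SRC_MAC + struct.pack("!H", 0x8808)
    payload = struct.pack("!HH", 0x0001, pause_quanta)
    return header + payload + bytes(42)       # pad to the 60-byte minimum

frame = pause_frame(0xFFFF)                   # request the maximum pause
print(len(frame), "bytes:", frame[:18].hex())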
https://en.wikipedia.org/wiki/Protein%20tag
Protein tags are peptide sequences genetically grafted onto a recombinant protein. Tags are attached to proteins for various purposes. They can be added to either end of the target protein, making them C-terminus or N-terminus specific, or to both ends at once. Some tags are also inserted at sites within the protein of interest; these are known as internal tags. Affinity tags are appended to proteins so that they can be purified from their crude biological source using an affinity technique. Affinity tags include chitin binding protein (CBP), maltose binding protein (MBP), Strep-tag and glutathione-S-transferase (GST). The poly(His) tag is a widely used protein tag, which binds to matrices bearing immobilized metal ions. Solubilization tags are used, especially for recombinant proteins expressed in species such as E. coli, to assist in the proper folding of proteins and keep them from aggregating in inclusion bodies. These tags include thioredoxin (TRX) and poly(NANP). Some affinity tags have a dual role as a solubilization agent, such as MBP and GST. Chromatography tags are used to alter the chromatographic properties of the protein to afford different resolution across a particular separation technique. Often, these consist of polyanionic amino acids, such as the FLAG-tag or polyglutamate tag. Epitope tags are short peptide sequences which are chosen because high-affinity antibodies against them can be reliably produced in many different species. These are usually derived from viral genes, which explains their high immunoreactivity. Epitope tags include ALFA-tag, V5-tag, Myc-tag, HA-tag, Spot-tag, T7-tag and NE-tag. These tags are particularly useful for western blotting, immunofluorescence and immunoprecipitation experiments, although they also find use in antibody purification. Fluorescence tags are used to give a visual readout on a protein. Green fluorescent protein (GFP) and its variants are the most commonly used fluorescence tags. More ad
https://en.wikipedia.org/wiki/Quarantine%20%28antivirus%20program%29
Quarantine was an antivirus software product from the early 1990s that automatically isolated infected files on a computer's hard disk. Files put in quarantine were then no longer capable of infecting their hosting system. Development and release In December 1988, shortly after the Morris worm, work started on Quarantine, an anti-malware and file reliability product. Released in April 1989, Quarantine was the first such product to use file signature methods instead of viral signature methods. The original Quarantine used Hunt's B-tree database of files with both their CRC16 and CRC-CCITT signatures. Doubling the signatures rendered attacks based on CRC-invariant modifications useless, or at least immoderately difficult. Release 2, April 1990, used a CRC-32 signature and one based on CRC-32 but with a few bits in each word shuffled. The subsequent MS-AV from Microsoft, designed by Check Point, apparently relied on only an eight-bit checksum; at least, out of a few thousand files there were hundreds with identical signatures. Functionality Quarantine allowed suspect files to be deleted, moved to a quarantine area, or flagged in a report. Standard executables were scanned, or one could use up to twenty file-matching patterns; twenty exclusion patterns were available, and twenty directory paths could be included or twenty excluded. The 1990 version also allowed background processing, checking of executables and libraries as a file is opened, and timing of checks: e.g., if one opened a Word file, WORD and all its libraries could be checked immediately, every half an hour, once a day, or every ten days, etc. Quarantine allowed system managers to track all modifications of selected files or file structures; hence Quarantine users also got early warnings of failing disks or disk interface cards. Achievements In 1990 Quarantine received the LAN Magazine Best of Year Security award. In that year Quarantine was reportedly responsible for finding the first stealth virus at the University o
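The dual-signature idea is easy to sketch: compute CRC-32 over the data and a second CRC-32 over a bit-permuted copy, so a modification preserving one signature is unlikely to preserve the other. The permutation below is an illustrative stand-in, not Quarantine's actual scheme, and the target path is hypothetical.

import zlib

def shuffled(data):
    # Rotate the bits of each byte; any fixed permutation that decouples the
    # two signatures would do.
    return bytes(((b << 3) | (b >> 5)) & 0xFF for b in data)

def signatures(data):
    return zlib.crc32(data), zlib.crc32(shuffled(data))

with open("/bin/ls", "rb") as f:              # hypothetical file to fingerprint
    sig_pair = signatures(f.read())
print([f"{s:08x}" for s in sig_pair])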
https://en.wikipedia.org/wiki/Multiferroics
Multiferroics are defined as materials that exhibit more than one of the primary ferroic properties in the same phase: ferromagnetism – a magnetisation that is switchable by an applied magnetic field ferroelectricity – an electric polarisation that is switchable by an applied electric field ferroelasticity – a deformation that is switchable by an applied stress While ferroelectric ferroelastics and ferromagnetic ferroelastics are formally multiferroics, these days the term is usually used to describe the magnetoelectric multiferroics that are simultaneously ferromagnetic and ferroelectric. Sometimes the definition is expanded to include nonprimary order parameters, such as antiferromagnetism or ferrimagnetism. In addition, other types of primary order, such as ferroic arrangements of magnetoelectric multipoles of which ferrotoroidicity is an example, were proposed. Besides scientific interest in their physical properties, multiferroics have potential for applications as actuators, switches, magnetic field sensors and new types of electronic memory devices. History A Web of Science search for the term multiferroic yields the year 2000 paper "Why are there so few magnetic ferroelectrics?" from N. A. Spaldin (then Hill) as the earliest result. This work explained the origin of the contraindication between magnetism and ferroelectricity and proposed practical routes to circumvent it, and is widely credited with starting the modern explosion of interest in multiferroic materials. The availability of practical routes to creating multiferroic materials from 2000 stimulated intense activity. Particularly key early works were the discovery of large ferroelectric polarization in epitaxially grown thin films of magnetic BiFeO3, the observation that the non-collinear magnetic ordering in orthorhombic TbMnO3 and TbMn2O5 causes ferroelectricity, and the identification of unusual improper ferroelectricity that is compatible with the coexistence of magnetism in hexagonal man
https://en.wikipedia.org/wiki/Chow%20variety
In mathematics, particularly in the field of algebraic geometry, a Chow variety is an algebraic variety whose points correspond to effective algebraic cycles of fixed dimension and degree on a given projective space. More precisely, the Chow variety is the fine moduli variety parametrizing all effective algebraic cycles of dimension k and degree d in projective space P^n. The Chow variety may be constructed via a Chow embedding into a sufficiently large projective space. This is a direct generalization of the construction of a Grassmannian variety via the Plücker embedding, as Grassmannians are the degree d = 1 case of Chow varieties. Chow varieties are distinct from Chow groups, which are the abelian groups of all algebraic cycles on a variety (not necessarily projective space) up to rational equivalence. Both are named for Wei-Liang Chow (周煒良), a pioneer in the study of algebraic cycles. Background on algebraic cycles If X is a closed subvariety of P^n of dimension k, the degree of X is the number of intersection points between X and a generic (n − k)-dimensional projective subspace of P^n. Degree is constant in families of subvarieties, except in certain degenerate limits. To see this, consider a family of plane conics parametrized by t, for instance C_t = V(x² − t·yz). Whenever t ≠ 0, C_t is a conic (an irreducible subvariety of degree 2), but C_0 degenerates to the line x = 0 (which has degree 1). There are several approaches to reconciling this issue, but the simplest is to declare C_0 to be a line of multiplicity 2 (and more generally to attach multiplicities to subvarieties) using the language of algebraic cycles. A k-dimensional algebraic cycle is a finite formal linear combination Σ_i n_i·V_i, in which the V_i are k-dimensional irreducible closed subvarieties in P^n, and the n_i are integers. An algebraic cycle is effective if each n_i ≥ 0. The degree of an algebraic cycle is defined to be Σ_i n_i·deg(V_i). A homogeneous polynomial or homogeneous ideal in n-many variables defines an effective algebraic cycle in P^(n−1), in which the multiplicity of each irreducible component is the order of vanishing at
https://en.wikipedia.org/wiki/Unusual%20number
In number theory, an unusual number is a natural number n whose largest prime factor is strictly greater than √n. A k-smooth number has all its prime factors less than or equal to k; therefore, an unusual number is non-√n-smooth. Relation to prime numbers All prime numbers are unusual. For any prime p, its multiples less than p² are unusual, that is p, 2p, ..., (p − 1)p, which have a density 1/p in the interval (p, p²). Examples The first few unusual numbers are 2, 3, 5, 6, 7, 10, 11, 13, 14, 15, 17, 19, 20, 21, 22, 23, 26, 28, 29, 31, 33, 34, 35, 37, 38, 39, 41, 42, 43, 44, 46, 47, 51, 52, 53, 55, 57, 58, 59, 61, 62, 65, 66, 67, ... The first few non-prime (composite) unusual numbers are 6, 10, 14, 15, 20, 21, 22, 26, 28, 33, 34, 35, 38, 39, 42, 44, 46, 51, 52, 55, 57, 58, 62, 65, 66, 68, 69, 74, 76, 77, 78, 82, 85, 86, 87, 88, 91, 92, 93, 94, 95, 99, 102, ... Distribution If we denote the number of unusual numbers less than or equal to n by u(n), then u(n) grows asymptotically like n·ln(2). Richard Schroeppel stated in 1972 that the asymptotic probability that a randomly chosen number is unusual is ln(2) ≈ 0.693. In other words, u(n)/n → ln(2) as n → ∞. External links Integer sequences
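The definition translates directly into a test. A minimal Python sketch (trial division, fine for small n): a number is unusual exactly when the square of its largest prime factor exceeds the number itself, which avoids floating-point square roots.

```python
def largest_prime_factor(n):
    """Return the largest prime factor of n (n >= 2) by trial division."""
    factor, d = 1, 2
    while d * d <= n:
        while n % d == 0:
            factor, n = d, n // d
        d += 1
    return max(factor, n) if n > 1 else factor

def is_unusual(n):
    """A number is unusual iff its largest prime factor exceeds sqrt(n)."""
    return n >= 2 and largest_prime_factor(n) ** 2 > n

print([n for n in range(2, 68) if is_unusual(n)])
# [2, 3, 5, 6, 7, 10, 11, 13, 14, 15, 17, 19, 20, 21, 22, 23, 26, 28, ...]

# Empirical density approaches ln(2) ~ 0.693:
N = 100_000
print(sum(is_unusual(n) for n in range(2, N)) / N)
```

Note the strict inequality: perfect squares of primes such as 4, 9 and 25 are not unusual, since their largest prime factor equals √n exactly.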
https://en.wikipedia.org/wiki/Transaction%20authentication%20number
A transaction authentication number (TAN) is used by some online banking services as a form of single use one-time passwords (OTPs) to authorize financial transactions. TANs are a second layer of security above and beyond the traditional single-password authentication. TANs provide additional security because they act as a form of two-factor authentication (2FA). If the physical document or token containing the TANs is stolen, it will be useless without the password. Conversely, if the login data are obtained, no transactions can be performed without a valid TAN. Classic TAN TANs often function as follows: The bank creates a set of unique TANs for the user. Typically, there are 50 TANs printed on a list, enough to last half a year for a normal user; each TAN being six or eight characters long. The user picks up the list from the nearest bank branch (presenting a passport, an ID card or similar document) or is sent the TAN list through mail. The password (PIN) is mailed separately. To log on to their account, the user must enter user name (often the account number) and password (PIN). This may give access to account information but the ability to process transactions is disabled. To perform a transaction, the user enters the request and authorizes the transaction by entering an unused TAN. The bank verifies the TAN submitted against the list of TANs they issued to the user. If it is a match, the transaction is processed. If it is not a match, the transaction is rejected. The TAN has now been used and will not be recognized for any further transactions. If the TAN list is compromised, the user may cancel it by notifying the bank. However, as any TAN can be used for any transaction, TANs are still prone to phishing attacks where the victim is tricked into providing both password/PIN and one or several TANs. Further, they provide no protection against man-in-the-middle attacks (where an attacker intercepts the transmission of the TAN, and uses it for a
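The classic scheme above amounts to a consume-once password list, which is simple to sketch. The following Python sketch is illustrative only (the class and method names are hypothetical; a real banking system would add persistent storage, rate limiting, and secure printing/delivery): it issues a list of 50 six-digit TANs and invalidates each one on first use.

```python
import secrets

class TanList:
    """Minimal consume-once TAN list, as in the classic paper-list scheme."""

    def __init__(self, count=50, digits=6):
        # Generate `count` distinct numeric TANs of the given length.
        self.unused = set()
        while len(self.unused) < count:
            self.unused.add("".join(secrets.choice("0123456789")
                                    for _ in range(digits)))

    def authorize(self, tan):
        """Accept a transaction iff the TAN is on the list and unused."""
        if tan in self.unused:
            self.unused.remove(tan)   # a TAN is valid exactly once
            return True
        return False

tans = TanList()
printed_list = sorted(tans.unused)      # what the bank would mail to the user
print(tans.authorize(printed_list[0]))  # True: first use accepted
print(tans.authorize(printed_list[0]))  # False: replay rejected
```

The sketch also makes the weakness discussed above visible: any unused TAN authorizes any transaction, so a phished TAN is as good as the real user's next one.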
https://en.wikipedia.org/wiki/Bulletproof%20hosting
Bulletproof hosting (BPH) is technical infrastructure service provided by an Internet hosting service that is resilient to complaints of illicit activities, which serves criminal actors as a basic building block for streamlining various cyberattacks. BPH providers allow online gambling, illegal pornography, botnet command and control servers, spam, copyrighted materials, hate speech and misinformation, despite takedown court orders and law enforcement subpoenas, allowing such material in their acceptable use policies. BPH providers usually operate in jurisdictions which have lenient laws against such conduct. Most non-BPH service providers prohibit transferring materials over their network that would be in violation of their terms of service and the local laws of the incorporated jurisdiction, and oftentimes any abuse reports would result in takedowns to avoid their autonomous system's IP address block being blacklisted by other providers and by Spamhaus. History BPH first became the subject of research in 2006 when security researchers from VeriSign revealed the Russian Business Network, an internet service provider that hosted a phishing group, was responsible for about $150 million in phishing-related scams. RBN also become known for identity thefts, child pornography, and botnets. The following year, McColo, the web hosting provider responsible for more than 75% of global spam was shut down and de-peered by Global Crossing and Hurricane Electric after the public disclosure by then-Washington Post reporter Brian Krebs on his Security Fix blog on that newspaper. Difficulties Since any abuse reports to the BPH will be disregarded, in most cases, the whole IP block ("netblock") assigned to the BPH's autonomous system will be blacklisted by other providers and third party spam filters. Additionally, BPH also have difficulty in finding network peering points for establishing Border Gateway Protocol sessions, since routing a BPH provider's network can affect the
https://en.wikipedia.org/wiki/Zein
Zein is a class of prolamine protein found in corn (maize). It is usually manufactured as a powder from corn gluten meal. Zein is one of the best understood plant proteins. Pure zein is clear, odorless, tasteless, hard, water-insoluble, and edible, and it has a variety of industrial and food uses. Commercial uses Historically, zein has been used in the manufacture of a wide variety of commercial products, including coatings for paper cups, soda bottle cap linings, clothing fabric, buttons, adhesives, coatings and binders. The dominant historical use of zein was in the textile fibers market where it was produced under the name "Vicara". With the development of synthetic alternatives, the use of zein in this market eventually disappeared. By using electrospinning, zein fibers have again been produced in the lab, where additional research will be performed to re-enter the fiber market. It can be used as a water and grease coating for paperboards and allows recyclability. Zein's properties make it valuable in processed foods and pharmaceuticals, in competition with insect shellac. It is now used as a coating for candy, nuts, fruit, pills, and other encapsulated foods and drugs. In the United States, it may be labeled as "confectioner's glaze" (which may also refer to shellac-based glazes) and used as a coating on bakery products or as "vegetable protein." It is classified as Generally Recognized as Safe (GRAS) by the U.S. Food and Drug Administration. For pharmaceutical coating, zein is preferred over food shellac, since it is all natural and requires less testing per the USP monographs. Zein can be further processed into resins and other bioplastic polymers, which can be extruded or rolled into a variety of plastic products. With increasing environmental concerns about synthetic coatings (such as PFOA) and the current higher prices of hydrocarbon-based petrochemicals, there is increased focus on zein as a raw material for a variety of nontoxic and renewable polym
https://en.wikipedia.org/wiki/Utility%20vault
A utility vault is an underground room providing access to subterranean public utility equipment, such as valves for water or natural gas pipes, or switchgear for electrical or telecommunications equipment. A vault is often accessible directly from a street, sidewalk or other outdoor space, thereby distinct from a basement of a building. Utility vaults are commonly constructed out of reinforced concrete boxes, poured concrete or brick. Small ones are usually entered through a manhole or grate on the topside and closed up by a manhole cover. Such vaults are considered confined spaces and can be hazardous to enter. Large utility vaults are similar to mechanical or electrical rooms in design and content. See also Dartford Cable Tunnel Telecommunications pedestal Utility cut Utility tunnel References External links Underground Utility Vaults - National Precast Concrete Association Building engineering Rooms Subterranea (geography)
https://en.wikipedia.org/wiki/Domain%20model
In software engineering, a domain model is a conceptual model of the domain that incorporates both behavior and data. In ontology engineering, a domain model is a formal representation of a knowledge domain with concepts, roles, datatypes, individuals, and rules, typically grounded in a description logic. Overview A domain model is a system of abstractions that describes selected aspects of a sphere of knowledge, influence or activity (a domain). The model can then be used to solve problems related to that domain. The domain model is a representation of meaningful real-world concepts pertinent to the domain that need to be modeled in software. The concepts include the data involved in the business and rules the business uses in relation to that data. A domain model leverages natural language of the domain. A domain model generally uses the vocabulary of the domain, thus allowing a representation of the model to be communicated to non-technical stakeholders. It should not refer to any technical implementations such as databases or software components that are being designed. Usage A domain model is generally implemented as an object model within a layer that uses a lower-level layer for persistence and "publishes" an API to a higher-level layer to gain access to the data and behavior of the model. In the Unified Modeling Language (UML), a class diagram is used to represent the domain model. See also Domain-driven design (DDD) Domain layer Feature-driven development Logical data model OntoUML References Software requirements Data modeling
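To make the idea concrete, here is a minimal Python sketch of a domain model class. The names Order and OrderLine are illustrative, not from any particular framework: the point is that the class captures domain data and a business rule in domain vocabulary, with no reference to databases or other technical layers.

```python
from dataclasses import dataclass, field

@dataclass
class OrderLine:
    product: str
    quantity: int
    unit_price: float  # in the shop's currency

@dataclass
class Order:
    """Domain object: business data plus the rules that govern it."""
    customer: str
    lines: list = field(default_factory=list)

    def total(self) -> float:
        return sum(l.quantity * l.unit_price for l in self.lines)

    def add_line(self, line: OrderLine) -> None:
        # Business rule stated in domain terms, not technical ones:
        # an order may not exceed 100 items in total (an assumed example rule).
        if sum(l.quantity for l in self.lines) + line.quantity > 100:
            raise ValueError("order exceeds the 100-item limit")
        self.lines.append(line)
```

A lower-level persistence layer would load and save Order objects; the model itself stays free of such concerns, as described above.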
https://en.wikipedia.org/wiki/Hiroshi%20Okamura
Hiroshi Okamura (1905–1948) was a Japanese mathematician who made contributions to analysis and the theory of differential equations. He was a professor at Kyoto University. He discovered the necessary and sufficient conditions on initial value problems of ordinary differential equations for the solution to be unique. He also refined the second mean value theorem of integration. Works (posthumous) References 1905 births 1948 deaths 20th-century Japanese mathematicians Mathematical analysts Academic staff of Kyoto University Kyoto University alumni
https://en.wikipedia.org/wiki/XML%20Encryption
XML Encryption, also known as XML-Enc, is a specification, governed by a W3C recommendation, that defines how to encrypt the contents of an XML element. Although XML Encryption can be used to encrypt any kind of data, it is nonetheless known as "XML Encryption" because an XML element (either an EncryptedData or EncryptedKey element) contains or refers to the ciphertext, keying information, and algorithms. Both XML Signature and XML Encryption use the KeyInfo element, which appears as the child of a SignedInfo, EncryptedData, or EncryptedKey element and provides information to a recipient about what keying material to use in validating a signature or decrypting encrypted data. The KeyInfo element is optional: it can be attached in the message, or be delivered through a secure channel. XML Encryption is different from and unrelated to Transport Layer Security, which is used to send encrypted messages (including XML content, both encrypted and otherwise) over the internet. The specification has been shown to have severe security weaknesses: in 2011, researchers demonstrated a practical attack that recovers plaintext from XML Encryption ciphertexts produced with the CBC block-cipher mode. References External links W3C info Apache Santuario - Apache XML Security Implementation for Java and C++ XMLSec - XML Security Library for C An Introduction to XML Signature and XML Encryption with XMLSec XML Cryptography standards XML-based standards
https://en.wikipedia.org/wiki/Phenylacetic%20acid
Phenylacetic acid (conjugate base phenylacetate), also known by various synonyms, is an organic compound containing a phenyl functional group and a carboxylic acid functional group. It is a white solid with a strong honey-like odor. Endogenously, it is a catabolite of phenylalanine. As a commercial chemical, because it can be used in the illicit production of phenylacetone (used in the manufacture of substituted amphetamines), it is subject to controls in countries including the United States and China. Occurrence Phenylacetic acid has been found to be an active auxin (a type of plant hormone), found predominantly in fruits. However, its effect is much weaker than the effect of the basic auxin molecule indole-3-acetic acid. In addition, the molecule is naturally produced by the metapleural gland of most ant species and used as an antimicrobial. It is also the oxidation product of phenethylamine in humans following metabolism by monoamine oxidase and subsequent metabolism of the intermediate product, phenylacetaldehyde, by the aldehyde dehydrogenase enzyme; these enzymes are also found in many other organisms. Preparation This compound may be prepared by the hydrolysis of benzyl cyanide: C6H5CH2CN + 2 H2O → C6H5CH2COOH + NH3 Applications Phenylacetic acid is used in some perfumes, as it possesses a honey-like odor even in low concentrations. It is also used in penicillin G production and diclofenac production. It is also employed to treat type II hyperammonemia to help reduce the amounts of ammonia in a patient's bloodstream by forming phenylacetyl-CoA, which then reacts with nitrogen-rich glutamine to form phenylacetylglutamine. This compound is then excreted from the patient's body. It is also used in the illicit production of phenylacetone, which is used in the manufacture of methamphetamine. The sodium salt of phenylacetic acid, sodium phenylacetate, is used as a pharmaceutical drug for the treatment of urea cycle disorders, including as the combination drug sodium phenylacetate/sodium benzoate (Am
https://en.wikipedia.org/wiki/Biomedical%20technology
Biomedical technology is the application of engineering and technology principles to the domain of living or biological systems, with an emphasis on human health and diseases. Biomedical engineering and biotechnology alike are often loosely called biomedical technology or bioengineering. The biomedical technology field is currently growing at a rapid pace: biomedical news is reported on various platforms, including the MediUnite Journal, and employment in the industry is expected to grow 23% by 2024, with pay averaging over $86,000. Biomedical technology involves: Biomedical science Biomedical informatics Biomedical research Biomedical engineering Bioengineering Biotechnology Biomedical technologies: Cloning Therapeutic cloning References Biological engineering
https://en.wikipedia.org/wiki/Foundation%20Fieldbus
Foundation Fieldbus (styled Fieldbus) is an all-digital, serial, two-way communications system that serves as the base-level network in a plant or factory automation environment. It is an open architecture, developed and administered by FieldComm Group. It is targeted for applications using basic and advanced regulatory control, and for much of the discrete control associated with those functions. Foundation Fieldbus technology is mostly used in process industries, but has recently been implemented in powerplants. Two related implementations of Foundation Fieldbus have been introduced to meet different needs within the process automation environment. These two implementations use different physical media and communication speeds. Foundation Fieldbus H1 - Operates at 31.25 kbit/s and is generally used to connect to field devices and host systems. It provides communication and power over standard stranded twisted-pair wiring in both conventional and intrinsic safety applications. H1 is currently the most common implementation. HSE (High-speed Ethernet) - Operates at 100/1000 Mbit/s and generally connects input/output subsystems, host systems, linking devices and gateways. It doesn't currently provide power over the cable, although work is under way to address this using the IEEE802.3af Power over Ethernet (PoE) standard. Foundation Fieldbus was originally intended as a replacement for the 4-20 mA standard, and today it coexists alongside other technologies such as Modbus, Profibus, and Industrial Ethernet. Foundation Fieldbus today enjoys a growing installed base in many heavy process applications such as refining, petrochemicals, power generation, and even food and beverage, pharmaceuticals, and nuclear applications. Foundation Fieldbus was developed over a period of many years by the International Society of Automation, or ISA, as SP50. In 1996 the first H1 (31.25 kbit/s) specifications were released. In 1999 the first HSE (High Speed Ethernet) specifications
https://en.wikipedia.org/wiki/Fixed-pattern%20noise
Fixed-pattern noise (FPN) is the term given to a particular noise pattern on digital imaging sensors, often noticeable during longer-exposure shots, where particular pixels are susceptible to giving brighter intensities above the average intensity. Overview FPN is a general term that identifies a temporally constant lateral non-uniformity (forming a constant pattern) in an imaging system with multiple detector or picture elements (pixels). It is characterised by the same pattern of variation in pixel brightness occurring in images taken under the same illumination conditions in an imaging array. This problem arises from small differences in the individual responsivity of the sensor array (including any local post-amplification stages) that might be caused by variations in the pixel size, material or interference with the local circuitry. It might be affected by changes in the environment, such as different temperatures, exposure times, etc. The term "fixed-pattern noise" usually refers to two parameters: the dark signal non-uniformity (DSNU), which is the offset from the average across the imaging array at a particular setting (temperature, integration time) with no external illumination; and the photo response non-uniformity (PRNU), which describes the gain or ratio between optical power on a pixel and the electrical signal output. The latter is often simplified to a single value measured at e.g. a 50% saturation level, implying a linear approximation of the not perfectly linear photo response (the deviation being the photo response non-linearity, PRNL). Often FPN as defined above is subdivided into a pure "(offset) FPN", which is the part not dependent on temperature and integration time, and the integration-time- and temperature-dependent "DSNU". Sometimes pixel noise, the average deviation from the array average under different illumination and temperature conditions, is specified. Pixel noise therefore gives a number (commonly expressed in rms) that identifies FPN in all permitted imaging conditions
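FPN is commonly removed by a two-point correction using calibration frames. Below is a minimal numpy sketch under stated assumptions (illustrative variable names; real pipelines average many calibration frames and handle defective pixels separately): subtracting a dark frame removes the DSNU offset, and dividing by a dark-corrected flat field removes the PRNU gain variation.

```python
import numpy as np

def two_point_fpn_correction(raw, dark, flat):
    """Remove fixed-pattern noise from a raw frame.

    raw  : frame of the scene
    dark : frame at the same settings with no illumination (carries DSNU)
    flat : frame of a uniformly lit target (carries PRNU)
    """
    offset_free = raw.astype(float) - dark        # subtract the DSNU offset
    gain = flat.astype(float) - dark              # per-pixel responsivity
    return offset_free / gain * gain.mean()       # divide out the PRNU gain

# Synthetic check: a uniform scene seen through simulated DSNU and PRNU.
rng = np.random.default_rng(0)
dsnu = rng.normal(10.0, 2.0, (4, 4))              # per-pixel offsets
prnu = rng.normal(1.0, 0.05, (4, 4))              # per-pixel gains
scene = np.full((4, 4), 100.0)
raw = scene * prnu + dsnu
flat = 80.0 * prnu + dsnu
out = two_point_fpn_correction(raw, dsnu, flat)
print(round(float(out.std()), 6))                 # 0.0 -- the pattern is gone
```

The corrected frame is uniform up to a global scale factor, which is exactly what removing a temporally constant pattern means.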
https://en.wikipedia.org/wiki/Species%20complex
In biology, a species complex is a group of closely related organisms that are so similar in appearance and other features that the boundaries between them are often unclear. The taxa in the complex may be able to hybridize readily with each other, further blurring any distinctions. Terms that are sometimes used synonymously but have more precise meanings are cryptic species for two or more species hidden under one species name, sibling species for two (or more) species that are each other's closest relative, and species flock for a group of closely related species that live in the same habitat. As informal taxonomic ranks, species group, species aggregate, macrospecies, and superspecies are also in use. Two or more taxa that were once considered conspecific (of the same species) may later be subdivided into infraspecific taxa (taxa within a species, such as bacterial strains or plant varieties), which may be a complex ranking but it is not a species complex. In most cases, a species complex is a monophyletic group of species with a common ancestor, but there are exceptions. It may represent an early stage after speciation in which the species were separated for a long time period without evolving morphological differences. Hybrid speciation can be a component in the evolution of a species complex. Species complexes exist in all groups of organisms and are identified by the rigorous study of differences between individual species that uses minute morphological details, tests of reproductive isolation, or DNA-based methods, such as molecular phylogenetics and DNA barcoding. The existence of extremely similar species may cause local and global species diversity to be underestimated. The recognition of similar-but-distinct species is important for disease and pest control and in conservation biology although the drawing of dividing lines between species can be inherently difficult. Definition A species complex is typically considered as a group of close, but distin
https://en.wikipedia.org/wiki/Banzhaf%20power%20index
The Banzhaf power index, named after John Banzhaf (originally invented by Lionel Penrose in 1946 and sometimes called the Penrose–Banzhaf index; also known as the Banzhaf–Coleman index after James Samuel Coleman), is a power index defined by the probability of changing an outcome of a vote where voting rights are not necessarily equally divided among the voters or shareholders. To calculate the power of a voter using the Banzhaf index, list all the winning coalitions, then count the critical voters. A critical voter is a voter who, if he changed his vote from yes to no, would cause the measure to fail. A voter's power is measured as the fraction of all swing votes that he could cast. There are some algorithms for calculating the power index, e.g., dynamic programming techniques, enumeration methods and Monte Carlo methods. Examples Voting game Simple voting game A simple voting game, taken from Game Theory and Strategy by Philip D. Straffin: [6; 4, 3, 2, 1] The numbers in the brackets mean a measure requires 6 votes to pass, and voter A can cast four votes, B three votes, C two, and D one. The winning coalitions are as follows, with the swing voters of each listed in parentheses: AB (A, B), AC (A, C), ABC (A), ABD (A, B), ACD (A, C), BCD (B, C, D), ABCD (none). There are 12 total swing votes, so by the Banzhaf index, power is divided thus: A = 5/12, B = 3/12, C = 3/12, D = 1/12 U.S. Electoral College Consider the United States Electoral College. Each state has different levels of voting power. There are a total of 538 electoral votes. A majority vote is 270 votes. The Banzhaf power index would be a mathematical representation of how likely a single state would be able to swing the vote. A state such as California, which is allocated 55 electoral votes, would be more likely to swing the vote than a state such as Montana, which has 3 electoral votes. Assume the United States is having a presidential election between a Republican (R) and a Democrat (D). For simplicity, suppose that only three states are participating: Californ
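The enumeration method described above fits in a few lines of Python. This sketch reproduces Straffin's example; it scales exponentially in the number of voters, which is why dynamic programming and Monte Carlo methods are used for large games.

```python
from fractions import Fraction
from itertools import combinations

def banzhaf(quota, weights):
    """Banzhaf power index by enumerating all winning coalitions."""
    voters = list(weights)
    swings = {v: 0 for v in voters}
    for r in range(1, len(voters) + 1):
        for coalition in combinations(voters, r):
            total = sum(weights[v] for v in coalition)
            if total >= quota:                      # winning coalition
                for v in coalition:
                    if total - weights[v] < quota:  # v is critical (a swing)
                        swings[v] += 1
    all_swings = sum(swings.values())
    return {v: Fraction(s, all_swings) for v, s in swings.items()}

# Straffin's example [6; 4, 3, 2, 1]:
print(banzhaf(6, {"A": 4, "B": 3, "C": 2, "D": 1}))
# {'A': 5/12, 'B': 1/4, 'C': 1/4, 'D': 1/12}  (3/12 reduces to 1/4)
```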
https://en.wikipedia.org/wiki/Serial%20over%20LAN
Serial over LAN (SOL) is a mechanism that enables the input and output of the serial port of a managed system to be redirected over IP. Details On some managed systems, notably blade server systems, the serial ports on the managed computers are not normally connected to a traditional serial port socket. To allow users to access applications on these computers via the serial port, the input/output of the serial port is redirected to the network. For example, a user wishing to access a blade server via the serial port can telnet to a network address and log in. On the blade server the login will be seen as coming through the serial port. SOL is implemented as a payload type under the RMCP+ protocol in IPMI. See also Console redirection Emergency Management Services (EMS) IPMI LAN Shell shoveling References External links IPMI Technical Resources Local area networks Out-of-band management Serial buses
https://en.wikipedia.org/wiki/Wetware%20computer
A wetware computer is an organic computer (also known as an artificial organic brain or a neurocomputer) composed of organic material, "wetware", such as "living" neurons. Wetware computers composed of neurons are different from conventional computers because they use biological materials, and offer the possibility of substantially more energy-efficient computing. While a wetware computer is still largely conceptual, there has been limited success with construction and prototyping, which has acted as a proof of the concept's realistic application to computing in the future. The most notable prototypes have stemmed from the research completed by biological engineer William Ditto during his time at the Georgia Institute of Technology. His work constructing a simple neurocomputer capable of basic addition from leech neurons in 1999 was a significant discovery for the concept. This research acted as a primary example driving interest in the creation of these artificially constructed, but still organic, brains. Overview The concept of wetware is an application of specific interest to the field of computer manufacturing. Moore's law, which states that the number of transistors that can be placed on a silicon chip doubles roughly every two years, has acted as a goal for the industry for decades, but as the size of computers continues to decrease, the ability to meet this goal has become more difficult, threatening to reach a plateau. Because of the difficulty in reducing the size of computers, owing to the size limitations of transistors and integrated circuits, wetware provides an unconventional alternative. A wetware computer composed of neurons is an ideal concept because, unlike conventional materials which operate in binary (on/off), a neuron can shift between thousands of states, constantly altering its chemical conformation, and redirecting electrical pulses through over 200,000 channels in any of its many synaptic connections. Because of this large differe
https://en.wikipedia.org/wiki/Patch%20test
A patch test is a diagnostic method used to determine which specific substances cause allergic inflammation of a patient's skin. Patch testing helps identify which substances may be causing a delayed-type allergic reaction in a patient and may identify allergens not identified by blood testing or skin prick testing. It is intended to produce a local allergic reaction on a small area of the patient's back, where the diluted chemicals were planted. The chemicals included in the patch test kit are the offenders in approximately 85–90 percent of contact allergic eczema and include chemicals present in metals (e.g., nickel), rubber, leather, formaldehyde, lanolin, fragrance, toiletries, hair dyes, medicine, pharmaceutical items, food, drink, preservative, and other additives. Mechanism A patch test relies on the principle of a type IV hypersensitivity reaction. The first step in becoming allergic is sensitization. When skin is exposed to an allergen, the antigen-presenting cells (APCs) – also known as Langerhans cell or Dermal Dendritic Cell – phagocytize the substance, break it down to smaller components and present them on their surface bound major histocompatibility complex type two (MHC-II) molecules. The APC then travels to a lymph node, where it presents the displayed allergen to a CD4+ T-cell, or T-helper cell. The T-cell undergoes clonal expansion and some clones of the newly formed antigen specific sensitized T-cells travel back to the site of antigen exposure. When the skin is again exposed to the antigen, the memory t-cells in the skin recognize the antigen and produce cytokines (chemical signals), which cause more T-cells to migrate from blood vessels. This starts a complex immune cascade leading to skin inflammation, itching, and the typical rash of contact dermatitis. In general, it takes 2–4 days for a response in patch testing to develop. The patch test is just induction of contact dermatitis in a small area. Process Application of the patch t
https://en.wikipedia.org/wiki/Phoenix-RTOS
Phoenix-RTOS is a real-time operating system designed for Internet of Things appliances. The main goal of the system is to facilitate the creation of "Software Defined Solutions". History Phoenix-RTOS is the successor to the Phoenix operating system, developed from 1999 to 2001 by Pawel Pisarczyk at the Department of Electronics and Information Technology at Warsaw University of Technology. Phoenix was originally implemented for IA-32 microprocessors and was adapted to the ARM7TDMI processor in 2003, and the PowerPC in 2004. The system is available under the GPL license. Phoenix-RTOS 2.0 The decision to abandon the development of Phoenix and write the Phoenix-RTOS from scratch was taken by its creator in 2004. In 2010, the Phoenix Systems company was established, aiming to commercialize the system. Phoenix-RTOS 2.0 is based on a monolithic kernel. Initially versions for the IA-32 processor and configurable eSi-RISC were developed. In cooperation with NXP Semiconductors, Phoenix-RTOS 2.0 was also adapted to the Vybrid (ARM Cortex-A5) platform. This version is equipped with PRIME (Phoenix-PRIME) and the G3-PLC (Phoenix-G3) protocol support, used in Smart Grid networks. Phoenix-RTOS runs applications designed and written for the Unix operating system. Phoenix-RTOS 3.0 Phoenix-RTOS version 3.0 is based on a microkernel. It is geared towards measuring devices with low power consumption. The main problem with the first implementation was low kernel modularity and difficulties with the management process of software development (device drivers, file system drivers). It is an open source operating system (on BSD license), available on GitHub. HaaS modules The Phoenix-RTOS can be equipped with HaaS (Hardware as a Software) modules that allow the implementation of rich devices functionality, e.g. modems. Existing HaaS modules include: Phoenix-PRIME - software implementation of PRIME PLC standard certified in 2014. Phoenix-G3 - a software implementation of the G3
https://en.wikipedia.org/wiki/Gibbons%E2%80%93Hawking%E2%80%93York%20boundary%20term
In general relativity, the Gibbons–Hawking–York boundary term is a term that needs to be added to the Einstein–Hilbert action when the underlying spacetime manifold has a boundary. The Einstein–Hilbert action is the basis for the most elementary variational principle from which the field equations of general relativity can be defined. However, the use of the Einstein–Hilbert action is appropriate only when the underlying spacetime manifold M is closed, i.e., a manifold which is both compact and without boundary. In the event that the manifold has a boundary ∂M, the action should be supplemented by a boundary term so that the variational principle is well-defined. The necessity of such a boundary term was first realised by York and later refined in a minor way by Gibbons and Hawking. For a manifold that is not closed, the appropriate action is S = S_EH + S_GHY, where S_EH is the Einstein–Hilbert action, S_GHY is the Gibbons–Hawking–York boundary term, h_ab is the induced metric (see section below on definitions) on the boundary, h its determinant, K is the trace of the second fundamental form, ε is equal to +1 where the normal to ∂M is spacelike and −1 where the normal to ∂M is timelike, and y^a are the coordinates on the boundary. Varying the action with respect to the metric, subject to the condition that the variation of the induced metric vanish on the boundary, δh_ab = 0, gives the Einstein equations; the addition of the boundary term means that in performing the variation, the geometry of the boundary encoded in the transverse metric is fixed (see section below). There remains ambiguity in the action up to an arbitrary functional of the induced metric h_ab. That a boundary term is needed in the gravitational case is because R, the gravitational Lagrangian density, contains second derivatives of the metric tensor. This is a non-typical feature of field theories, which are usually formulated in terms of Lagrangians that involve first derivatives of fields to be varied over only. The GHY term is desirable, as it possesses a number of other key features. When passing to the Hamilton
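The display formulas did not survive extraction; for reference, the action takes the following standard form in units with c = 1 (a common convention; signs and prefactors vary across the literature):

```latex
S \;=\; \underbrace{\frac{1}{16\pi G}\int_{\mathcal{M}} \mathrm{d}^4x\,\sqrt{-g}\,R}_{S_{\mathrm{EH}}}
\;+\; \underbrace{\frac{1}{8\pi G}\oint_{\partial\mathcal{M}} \mathrm{d}^3y\,\epsilon\,\sqrt{|h|}\,K}_{S_{\mathrm{GHY}}}
```

Here g is the determinant of the spacetime metric, R the Ricci scalar, and h, K, ε and y^a are as defined in the text above.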
https://en.wikipedia.org/wiki/Interrupt%20request
In a computer, an interrupt request (or IRQ) is a hardware signal sent to the processor that temporarily stops a running program and allows a special program, an interrupt handler, to run instead. Hardware interrupts are used to handle events such as receiving data from a modem or network card, key presses, or mouse movements. Interrupt lines are often identified by an index with the format of IRQ followed by a number. For example, on the Intel 8259 family of programmable interrupt controllers (PICs) there are eight interrupt inputs commonly referred to as IRQ0 through IRQ7. In x86 based computer systems that use two of these PICs, the combined set of lines are referred to as IRQ0 through IRQ15. Technically these lines are named IR0 through IR7, and the lines on the ISA bus to which they were historically attached are named IRQ0 through IRQ15 (although historically as the number of hardware devices increased, the total possible number of interrupts was increased by means of cascading requests, by making one of the IRQ numbers cascade to another set or sets of numbered IRQs, handled by one or more subsequent controllers). Newer x86 systems integrate an Advanced Programmable Interrupt Controller (APIC) that conforms to the Intel APIC Architecture. These APICs support a programming interface for up to 255 physical hardware IRQ lines per APIC, with a typical system implementing support for only around 24 total hardware lines. During the early years of personal computing, IRQ management was often of user concern. With the introduction of plug and play devices this has been alleviated through automatic configuration. Overview When working with personal computer hardware, installing and removing devices, the system relies on interrupt requests. There are default settings that are configured in the system BIOS and recognized by the operating system. These default settings can be altered by advanced users. Modern plug and play technology has not only reduced the need f
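On Linux, the kernel exposes per-line interrupt counts in /proc/interrupts, which makes the IRQ numbering described above easy to inspect. The Python sketch below is hedged: the file's exact columns vary by architecture and kernel, so the parser only splits off the leading "IRQ:" label and sums whatever per-CPU count columns follow.

```python
def read_interrupt_counts(path="/proc/interrupts"):
    """Map IRQ label -> total count across CPUs (Linux only)."""
    counts = {}
    with open(path) as f:
        cpus = len(f.readline().split())       # header row: CPU0 CPU1 ...
        for line in f:
            label, _, rest = line.partition(":")
            fields = rest.split()
            # Numeric per-CPU columns come first; device names follow.
            total = sum(int(x) for x in fields[:cpus] if x.isdigit())
            counts[label.strip()] = total
    return counts

# Print a few lines, e.g. the timer or keyboard IRQ counters:
for irq, total in sorted(read_interrupt_counts().items())[:5]:
    print(irq, total)
```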
https://en.wikipedia.org/wiki/Standard%20gravity
The standard acceleration of gravity or standard acceleration of free fall, often called simply standard gravity and denoted by g0 or gn, is the nominal gravitational acceleration of an object in a vacuum near the surface of the Earth. It is a constant defined by standard as 9.80665 m/s² (about 32.174 ft/s²). This value was established by the 3rd General Conference on Weights and Measures (1901, CR 70) and used to define the standard weight of an object as the product of its mass and this nominal acceleration. The acceleration of a body near the surface of the Earth is due to the combined effects of gravity and centrifugal acceleration from the rotation of the Earth (but the latter is small enough to be negligible for most purposes); the total (the apparent gravity) is about 0.5% greater at the poles than at the Equator. Although the symbol g is sometimes used for standard gravity, g (without a suffix) can also mean the local acceleration due to local gravity and centrifugal acceleration, which varies depending on one's position on Earth (see Earth's gravity). The symbol g should not be confused with G, the gravitational constant, or g, the symbol for gram. The g is also used as a unit for any form of acceleration, with the value defined as above; see g-force. The value of g defined above is a nominal midrange value on Earth, originally based on the acceleration of a body in free fall at sea level at a geodetic latitude of 45°. Although the actual acceleration of free fall on Earth varies according to location, the above standard figure is always used for metrological purposes. In particular, since it is the ratio of the kilogram-force and the kilogram, its numeric value when expressed in coherent SI units is the ratio of the kilogram-force and the newton, two units of force. History Already in the early days of its existence, the International Committee for Weights and Measures (CIPM) proceeded to define a standard thermometric scale, using the boiling point of water. Since the boiling point varies with
https://en.wikipedia.org/wiki/X87
x87 is a floating-point-related subset of the x86 architecture instruction set. It originated as an extension of the 8086 instruction set in the form of optional floating-point coprocessors that work in tandem with corresponding x86 CPUs. These microchips have names ending in "87". This is also known as the NPX (Numeric Processor eXtension). Like other extensions to the basic instruction set, x87 instructions are not strictly needed to construct working programs, but provide hardware and microcode implementations of common numerical tasks, allowing these tasks to be performed much faster than corresponding machine code routines can. The x87 instruction set includes instructions for basic floating-point operations such as addition, subtraction and comparison, but also for more complex numerical operations, such as the computation of the tangent function and its inverse, for example. Most x86 processors since the Intel 80486 have had these x87 instructions implemented in the main CPU, but the term is sometimes still used to refer to that part of the instruction set. Before x87 instructions were standard in PCs, compilers or programmers had to use rather slow library calls to perform floating-point operations, a method that is still common in (low-cost) embedded systems. Description The x87 registers form an eight-level deep non-strict stack structure ranging from ST(0) to ST(7) with registers that can be directly accessed by either operand, using an offset relative to the top, as well as pushed and popped. (This scheme may be compared to how a stack frame may be both pushed/popped and indexed.) There are instructions to push, calculate, and pop values on top of this stack; unary operations (FSQRT, FPTAN etc.) then implicitly address the topmost ST(0), while binary operations (FADD, FMUL, FCOM, etc.) implicitly address ST(0) and ST(1). The non-strict stack model also allows binary operations to use ST(0) together with a direct memory operand or with an explicitly sp
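The stack discipline described above is easy to model. The following Python sketch is a toy model, not an emulator: it mimics how FLD pushes onto the stack, how a unary operation acts on ST(0) in place, and how a pop-variant binary operation (FADDP-style) combines ST(0) and ST(1).

```python
class X87Stack:
    """Toy model of the x87 register stack: ST(0) is the top."""

    def __init__(self):
        self.regs = []                      # bottom ... top, at most 8 deep

    def fld(self, value):                   # FLD: push a value onto the stack
        assert len(self.regs) < 8, "stack overflow"
        self.regs.append(float(value))

    def st(self, i=0):                      # ST(i): i-th register from the top
        return self.regs[-1 - i]

    def fadd(self):                         # FADDP-style: ST(1) += ST(0), pop
        top = self.regs.pop()
        self.regs[-1] += top

    def fsqrt(self):                        # unary op acts on ST(0) in place
        self.regs[-1] **= 0.5

s = X87Stack()
s.fld(9.0)
s.fld(16.0)
s.fadd()                                    # 9 + 16 now on top of the stack
s.fsqrt()
print(s.st(0))                              # 5.0
```

The real hardware additionally tracks tag bits, exception flags and an 80-bit extended-precision format, none of which this sketch attempts to capture.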
https://en.wikipedia.org/wiki/Catalytic%20triad
A catalytic triad is a set of three coordinated amino acids that can be found in the active site of some enzymes. Catalytic triads are most commonly found in hydrolase and transferase enzymes (e.g. proteases, amidases, esterases, acylases, lipases and β-lactamases). An acid-base-nucleophile triad is a common motif for generating a nucleophilic residue for covalent catalysis. The residues form a charge-relay network to polarise and activate the nucleophile, which attacks the substrate, forming a covalent intermediate which is then hydrolysed to release the product and regenerate free enzyme. The nucleophile is most commonly a serine or cysteine amino acid, but occasionally threonine or even selenocysteine. The 3D structure of the enzyme brings together the triad residues in a precise orientation, even though they may be far apart in the sequence (primary structure). As well as divergent evolution of function (and even the triad's nucleophile), catalytic triads show some of the best examples of convergent evolution. Chemical constraints on catalysis have led to the same catalytic solution independently evolving in at least 23 separate superfamilies. Their mechanism of action is consequently one of the best studied in biochemistry. History The enzymes trypsin and chymotrypsin were first purified in the 1930s. A serine in each of trypsin and chymotrypsin was identified as the catalytic nucleophile (by diisopropyl fluorophosphate modification) in the 1950s. The structure of chymotrypsin was solved by X-ray crystallography in the 1960s, showing the orientation of the catalytic triad in the active site. Other proteases were sequenced and aligned to reveal a family of related proteases, now called the S1 family. Simultaneously, the structures of the evolutionarily unrelated papain and subtilisin proteases were found to contain analogous triads. The 'charge-relay' mechanism for the activation of the nucleophile by the other triad members was proposed in the late 1960s. As
https://en.wikipedia.org/wiki/Padovan%20polynomials
In mathematics, Padovan polynomials are a generalization of Padovan sequence numbers. In a common convention, these polynomials are defined by the recurrence P1(x) = 1, P2(x) = 0, P3(x) = x, and Pn(x) = x·Pn−2(x) + Pn−3(x) for n > 3. The first few Padovan polynomials are: 1, 0, x, 1, x², 2x, x³ + 1, 3x², x⁴ + 3x, ... The Padovan numbers are recovered by evaluating the polynomials Pn−3(x) at x = 1. Evaluating Pn−3(x) at x = 2 gives the nth Fibonacci number plus (−1)^n. The ordinary generating function for the sequence is Σn≥1 Pn(x)·t^n = t / (1 − x·t² − t³). See also Polynomial sequences Polynomials
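The recurrence stated above (reconstructed under the stated convention, since the original formulas did not survive extraction) can be checked numerically: evaluating at x = 1 must reproduce the Padovan sequence. A short Python sketch using coefficient lists:

```python
def padovan_polynomials(count):
    """Coefficient lists (lowest degree first) for P_1(x) ... P_count(x),
    using the recurrence P_n = x * P_{n-2} + P_{n-3}."""
    polys = [[1], [0], [0, 1]]                 # P1 = 1, P2 = 0, P3 = x
    while len(polys) < count:
        a, b = polys[-2], polys[-3]            # P_{n-2}, P_{n-3}
        shifted = [0] + a                      # multiply P_{n-2} by x
        size = max(len(shifted), len(b))
        polys.append([(shifted[i] if i < len(shifted) else 0) +
                      (b[i] if i < len(b) else 0) for i in range(size)])
    return polys

polys = padovan_polynomials(12)
# Evaluating at x = 1 recovers the Padovan sequence 1, 1, 1, 2, 2, 3, 4, 5, 7, 9:
print([sum(p) for p in polys][2:])   # [1, 1, 1, 2, 2, 3, 4, 5, 7, 9]
```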
https://en.wikipedia.org/wiki/Neumann%20series
A Neumann series is a mathematical series of the form Σk≥0 T^k, where T is an operator and T^k its k-times repeated application. This generalizes the geometric series. The series is named after the mathematician Carl Neumann, who used it in 1877 in the context of potential theory. The Neumann series is used in functional analysis. It forms the basis of the Liouville–Neumann series, which is used to solve Fredholm integral equations. It is also important when studying the spectrum of bounded operators. Properties Suppose that T is a bounded linear operator on the normed vector space X. If the Neumann series converges in the operator norm, then Id − T is invertible and its inverse is the series: (Id − T)^−1 = Σk≥0 T^k, where Id is the identity operator in X. To see why, consider the partial sums Sn = Σk=0..n T^k. Then we have (Id − T)Sn = Id − T^(n+1) → Id as n → ∞. This result on operators is analogous to the geometric series in ℝ, in which we find that: (1 − x)^−1 = 1 + x + x² + x³ + ... for |x| < 1. One case in which convergence is guaranteed is when X is a Banach space and ||T|| < 1 in the operator norm or Σ||T^k|| is convergent. However, there are also results which give weaker conditions under which the series converges. Example Let C be given by: We need to show that C is smaller than unity in some norm. Therefore, we calculate: Thus, we know from the statement above that (Id − C)^−1 exists. Approximate matrix inversion A truncated Neumann series can be used for approximate matrix inversion. To approximate the inverse of an invertible matrix A, we can consider the linear operator T = Id − A, where Id is the identity matrix. If the norm condition ||Id − A|| < 1 is satisfied, then, truncating the series at n, we get: A^−1 ≈ Σk=0..n (Id − A)^k. The set of invertible operators is open A corollary is that the set of invertible operators between two Banach spaces B and B′ is open in the topology induced by the operator norm. Indeed, let S : B → B′ be an invertible operator and let T : B → B′ be another operator. If ||S − T|| < ||S^−1||^−1, then T is also invertible. Since ||Id − S^−1T|| ≤ ||S^−1||·||S − T|| < 1, the Neumann series Σk≥0 (Id − S^−1T)^k is convergent. Therefore, we have T^−1 = (S^−1T)^−1 S^−1 = Σk≥0 (Id − S^−1T)^k S^−1. Taking the norms, we get ||T^−1|| ≤ ||S^−1|| / (1 − ||Id − S^−1T||). The norm of T^−1 can be bounded by ||T^−1|| ≤ ||S^−1|| / (1 − ||S^−1||·||S − T||). Applications The Neumann series has
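The truncated-series matrix inversion is easy to try numerically. A minimal numpy sketch (illustrative; in practice this pays off only when A is close to the identity or cheap to multiply by):

```python
import numpy as np

def neumann_inverse(A, terms=50):
    """Approximate inv(A) by sum_{k=0}^{terms-1} (I - A)^k.
    Converges when the spectral radius of (I - A) is below 1."""
    n = A.shape[0]
    T = np.eye(n) - A
    approx = np.eye(n)
    power = np.eye(n)
    for _ in range(1, terms):
        power = power @ T          # accumulate (I - A)^k
        approx += power
    return approx

A = np.array([[1.0, 0.2],
              [0.1, 0.9]])         # close to the identity, so ||I - A|| < 1
print(np.allclose(neumann_inverse(A), np.linalg.inv(A)))  # True
```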
https://en.wikipedia.org/wiki/List%20of%20Microsoft%20Windows%20versions
Microsoft Windows is a computer operating system developed by Microsoft. It was first launched in 1985 as a graphical operating system built on MS-DOS. The initial version was followed by several subsequent releases, and by the early 1990s, the Windows line had split into two separate lines of releases: Windows 9x for consumers and Windows NT for businesses and enterprises. In the following years, several further variants of Windows would be released: Windows CE in 1996 for embedded systems; Pocket PC in 2000 (renamed to Windows Mobile in 2003 and Windows Phone in 2010) for personal digital assistants and, later, smartphones; Windows Holographic in 2016 for AR/VR headsets; and several other editions. Personal computer versions A "personal computer" version of Windows is considered to be a version that end-users or OEMs can install on personal computers, including desktop computers, laptops, and workstations. The first five versions of Windows–Windows 1.0, Windows 2.0, Windows 2.1, Windows 3.0, and Windows 3.1–were all based on MS-DOS, and were aimed at both consumers and businesses. However, Windows 3.1 had two separate successors, splitting the Windows line in two: the consumer-focused "Windows 9x" line, consisting of Windows 95, Windows 98, and Windows Me; and the professional Windows NT line, comprising Windows NT 3.1, Windows NT 3.5, Windows NT 3.51, Windows NT 4.0, and Windows 2000. These two lines were reunited into a single line with the NT-based Windows XP; this Windows release succeeded both Windows Me and Windows 2000 and had separate editions for consumer and professional use. Since Windows XP, multiple further versions of Windows have been released, the most recent of which is Windows 11. Mobile versions Mobile versions refer to versions of Windows that can run on smartphones or personal digital assistants. Server versions High-performance computing (HPC) servers Windows Essential Business Server Windows Home Server Windows MultiPoint Server Wi
https://en.wikipedia.org/wiki/Algebra%20Project
The Algebra Project is a national U.S. mathematics literacy program aimed at helping low-income students and students of color achieve the mathematical skills in high school that are a prerequisite for a college preparatory mathematics sequence. Founded by Civil Rights activist and Math educator Bob Moses in the 1980s, the Algebra Project provides curricular materials, teacher training, and professional development support and community involvement activities for schools to improve mathematics education. By 2001, the Algebra Project had trained approximately 300 teachers and was reaching 10,000 students in 28 locations in 10 states. History The Algebra Project was founded in 1982 by Bob Moses in Cambridge, Massachusetts. Moses worked with his daughter's eighth-grade teacher, Mary Lou Mehrling, to provide extra tutoring for several students in her class in algebra. Moses, who had taught secondary school mathematics in New York City and Tanzania, wanted to ensure that those students had sufficient algebra skills to qualify for honors math and science courses in high school. Through his tutorage, students from the Open Program of the Martin Luther King School passed the citywide algebra examination and qualified for ninth grade honors geometry, the first students from the program to do so. The Algebra Project grew out of attempts to recreate this on a wider community level, to provide similar students with a higher level of mathematical literacy. The Algebra Project now focuses on the southern states of the United States, where the Southern Initiative of the Algebra Project is directed by Dave Dennis. Young People's Project Founded in 1996, the Young People's Project (YPP) is a spin-off of the Algebra Project, which recruits and trains high school and college age "Math Literacy Workers" to tutor younger students in mathematics, and is directed by Omowale Moses. YPP has established sites in Jackson, Mississippi, Chicago, and the Greater Boston area of Massachusetts
https://en.wikipedia.org/wiki/Mail-11
Mail-11 was the native email transport protocol used by Digital Equipment Corporation's VMS operating system, and supported by several other DEC operating systems such as Ultrix. It normally used the DECnet networking system as opposed to TCP/IP. Similar to Internet SMTP based mail, Mail-11 mail had To: Cc: and Subj: headers and date-stamped each message. Mail-11 was one of the most widely used email systems of the 1980s, and was still in fairly wide use until as late as the mid-1990s. Messages from Mail-11 systems were frequently gatewayed out to SMTP, Usenet, and Bitnet systems, and thus are sometimes encountered browsing archives of those systems dating from when Mail-11 was in common use. Several very large DECnet networks with Mail-11 service existed, most notably ENET, which was DEC's worldwide internal network. Another big user was HEPnet, a network for the high-energy physics research community that linked many universities and research labs. Mail-11 used two colons (::) rather than an at sign (@) to separate user and hostname, and hostname came first. Some example headers To: THEWAL::HARKAWIK A message to user HARKAWIK on a machine or cluster of machines called THEWAL. Note that under VMS, usernames were not case-sensitive and were usually shown in uppercase, but under Ultrix, usernames were case-sensitive, and most sites followed the unix convention of using lower case usernames. Names of machines on a DECnet network were not case-sensitive. Thus, the header above implies that the mail is going to a VMS system, but the one following implies the user is on a Unix system. To: DS5353::tabak A message to user tabak on node DS5353. Probably an Ultrix system. From: GUESS::YERAZUNIS "it's.. it's DIP !" 21-SEP-1989 10:28:38.87 To: DECWRL::"decvax!peregrine!dmi" CC: YERAZUNIS This message was sent to the gateway at DEC's Western Research Labs, one of DEC's main Internet gateways. From there, it was expected to travel via uucp, from host
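Gatewaying between the two address styles is, at its core, a simple string transformation. The Python sketch below is illustrative only: the domain suffix is a hypothetical placeholder, and real gateways such as the one at DECWRL also handled quoting, routing tables and uucp bang paths.

```python
def mail11_to_smtp(addr, domain_suffix="example.com"):
    """NODE::USER -> user@node.domain (placeholder domain for illustration)."""
    node, _, user = addr.partition("::")
    return f"{user.lower()}@{node.lower()}.{domain_suffix}"

def smtp_to_mail11(addr):
    """user@node.domain -> NODE::user (preserves user case, as Ultrix
    usernames are case-sensitive while DECnet node names are not)."""
    user, _, host = addr.partition("@")
    node = host.split(".")[0].upper()
    return f"{node}::{user}"

print(mail11_to_smtp("THEWAL::HARKAWIK"))          # harkawik@thewal.example.com
print(smtp_to_mail11("tabak@ds5353.example.com"))  # DS5353::tabak
```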
https://en.wikipedia.org/wiki/Hydrodynamic%20focusing
In microbiology, hydrodynamic focusing is a technique used to provide more accurate results when using flow cytometers or Coulter counters for determining the size of bacteria or cells. Technique Measuring particles Cells are counted as they are forced to pass through a small channel (often referred to as a flow cell), causing disruptions in a laser light beam or electricity flow. These disruptions are analyzed by the instruments. It is difficult to create tunnels narrow enough for this purpose using ordinary manufacturing processes, as the diameter must be in the magnitude of micrometers, and the length of the tunnel should exceed several millimeters. The standard channel size used in most production flow cytometers is 250 by 250 micrometers. Focusing with a fluid Hydrodynamic focusing solves this problem by building up the walls of the tunnel from fluid, using the effects of fluid dynamics. A wide (hundreds of micrometers in diameter) tube made of glass or plastic is used, through which a "wall" of fluid called the sheath flow is pumped. The sample is injected into the middle of the sheath flow. If the two fluids differ enough in their velocity or density, they do not mix: they form a two-layer stable flow. Sources References Microbiology techniques
https://en.wikipedia.org/wiki/Regelation
Regelation is the phenomenon of ice melting under pressure and refreezing when the pressure is reduced. This can be demonstrated by looping a fine wire around a block of ice, with a heavy weight attached to it. The pressure exerted on the ice slowly melts it locally, permitting the wire to pass through the entire block. The wire's track will refill as soon as pressure is relieved, so the ice block will remain intact even after wire passes completely through. This experiment is possible for ice at −10 °C or cooler, and while essentially valid, the details of the process by which the wire passes through the ice are complex. The phenomenon works best with high thermal conductivity materials such as copper, since latent heat of fusion from the top side needs to be transferred to the lower side to supply latent heat of melting. In short, the phenomenon in which ice converts to liquid due to applied pressure and then re-converts to ice once the pressure is removed is called regelation. Regelation was discovered by Michael Faraday. It occurs only for substances such as ice, that have the property of expanding upon freezing, for the melting points of those substances decrease with the increasing external pressure. The melting point of ice falls by 0.0072 °C for each additional atm of pressure applied. For example, a pressure of 500 atmospheres is needed for ice to melt at −4 °C. Surface melting For a normal crystalline ice far below its melting point, there will be some relaxation of the atoms near the surface. Simulations of ice near to its melting point show that there is significant melting of the surface layers rather than a symmetric relaxation of atom positions. Nuclear magnetic resonance provided evidence for a liquid layer on the surface of ice. In 1998, using atomic force microscopy, Astrid Döppenschmidt and Hans-Jürgen Butt measured the thickness of the liquid-like layer on ice to be roughly 32 nm at −1 °C, and 11 nm at −10 °C. The surface melting can account
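The quoted figure of roughly 0.0072 °C per atmosphere follows from the Clausius–Clapeyron relation. A back-of-the-envelope check, using rounded handbook values for the specific volumes of water and ice at 0 °C and the latent heat of fusion:

```latex
\frac{dT}{dp} \;=\; \frac{T\,\Delta v}{L_f}
\;\approx\; \frac{273\ \mathrm{K}\times(1.000-1.091)\times 10^{-3}\ \mathrm{m^3/kg}}{3.34\times 10^{5}\ \mathrm{J/kg}}
\;\approx\; -7.4\times 10^{-8}\ \mathrm{K/Pa} \;\approx\; -0.0075\ \mathrm{K/atm}
```

This is in reasonable agreement with the 0.0072 °C/atm quoted above; the small discrepancy reflects the rounded input values. The negative sign encodes the key fact that ice expands on freezing (Δv < 0), so pressure lowers the melting point.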
https://en.wikipedia.org/wiki/Label%20printer
A label printer is a computer printer that prints on self-adhesive label material and/or card-stock (tags). A label printer with built-in keyboard and display for stand-alone use (not connected to a separate computer) is often called a label maker. Label printers are different from ordinary printers because they need to have special feed mechanisms to handle rolled stock, or tear sheet (fanfold) stock. Common connectivity for label printers include RS-232 serial, Universal Serial Bus (USB), parallel, Ethernet and various kinds of wireless. Label printers have a wide variety of applications, including supply chain management, retail price marking, packaging labels, blood and laboratory specimen marking, and fixed assets management. Mechanisms Label printers use a wide range of label materials, including paper and synthetic polymer ("plastic") materials. Several types of print mechanisms are also used, including laser and impact, but thermal printer mechanisms are perhaps the most common. There are two common types of thermal printer. Direct thermal printers use heat sensitive paper (similar to thermal fax paper). Direct thermal labels tend to fade over time (typically 6 to 12 months); if exposed to heat, direct sunlight or chemical vapors, the life is shortened. Therefore, direct thermal labels are primarily used for short duration applications, such as shipping labels. thermal transfer printers use heat to transfer ink from ribbon onto the label for a permanent print. Some thermal transfer printers are also capable of direct thermal printing. Using a PVC vinyl can increase the longevity of the label life as seen in pipe markers and industrial safety labels found in much of the market place today. There are three grades of ribbon for use with thermal transfer printers. Wax is the most popular with some smudge resistance, and is suitable for matte and semi-gloss paper labels. Wax/resin is smudge resistant, suitable for semi-gloss paper and some syntheti
https://en.wikipedia.org/wiki/Video%20quality
Video quality is a characteristic of a video passed through a video transmission or processing system that describes perceived video degradation (typically, compared to the original video). Video processing systems may introduce some amount of distortion or artifacts in the video signal that negatively impacts the user's perception of a system. For many stakeholders in video production and distribution, assurance of video quality is an important task. Video quality evaluation is performed to describe the quality of a set of video sequences under study. Video quality can be evaluated objectively (by mathematical models) or subjectively (by asking users for their rating). Also, the quality of a system can be determined offline (i.e., in a laboratory setting for developing new codecs or services), or in-service (to monitor and ensure a certain level of quality). From analog to digital video Since the world's first video sequence was recorded and transmitted, many video processing systems have been designed. Such systems encode video streams and transmit them over various kinds of networks or channels. In the age of analog video systems, it was possible to evaluate the quality aspects of a video processing system by calculating the system's frequency response using test signals (for example, a collection of color bars and circles). Digital video systems have almost fully replaced analog ones, and quality evaluation methods have changed. The performance of a digital video processing and transmission system can vary significantly and depends on many factors including the characteristics of the input video signal (e.g. amount of motion or spatial details), the settings used for encoding and transmission, and the channel fidelity or network performance. Objective video quality Objective video quality models are mathematical models that approximate results from subjective quality assessment, in which human observers are asked to rate the quality of a video. In this cont
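As a concrete illustration of one simple full-reference objective metric, here is a minimal sketch (not from the article) computing the peak signal-to-noise ratio (PSNR) between a reference frame and a degraded one; note that PSNR is easy to compute but correlates only loosely with subjective ratings:

```python
import numpy as np

def psnr(reference, degraded, peak=255.0):
    """Peak signal-to-noise ratio in dB, a basic full-reference objective metric."""
    mse = np.mean((reference.astype(np.float64) - degraded.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy example: an 8-bit frame versus a noisy copy of itself.
rng = np.random.default_rng(1)
frame = rng.integers(0, 256, size=(480, 640), dtype=np.uint8)
noisy = np.clip(frame + rng.normal(0, 5, frame.shape), 0, 255).astype(np.uint8)
print(f"PSNR: {psnr(frame, noisy):.1f} dB")
```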
https://en.wikipedia.org/wiki/Continuous%20design
Evolutionary design, continuous design, evolutive design, or incremental design is directly related to any modular design application, in which components can be freely substituted to improve the design, modify performance, or change another feature at a later time. Informatics In particular, it applies (with the name continuous design) to software development. In this field it is the practice of creating and modifying the design of a system as it is developed, rather than purporting to specify the system completely before development starts (as in the waterfall model). Continuous design was popularized by extreme programming. Continuous design also uses test driven development and refactoring. Martin Fowler wrote a popular book called Refactoring, as well as a popular article entitled "Is Design Dead?", which discusses continuous/evolutionary design. James Shore wrote an article in IEEE Software titled "Continuous Design". Industrial design Modular design states that a product is made of subsystems that are joined together to create a full product. This design model, first defined in electronics, evolved in industrial design into well-consolidated industrial standards related to the platform concept and its evolution. See also Rapid application development Continuous integration Evolutionary database design References External links Is Design Dead? Software design
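A minimal sketch of the test-first rhythm mentioned above (the module and function names are hypothetical): the test is written before the code, fails at first, and the design then evolves through refactoring with the test guarding behavior:

```python
# slug.py -- production code (module and function names are hypothetical)
def slugify(title: str) -> str:
    """The simplest implementation that passes the test below."""
    return "-".join(title.lower().split())

# test_slug.py -- in test-driven development this test is written first and
# fails ("red"); the code above is then written to make it pass ("green");
# refactoring follows, with the test ensuring behavior is preserved.
def test_slugify_lowercases_and_hyphenates():
    assert slugify("Continuous Design") == "continuous-design"

test_slugify_lowercases_and_hyphenates()  # runs silently when it passes
```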
https://en.wikipedia.org/wiki/Object%20Modeling%20in%20Color
UML color standards are a set of four colors associated with Unified Modeling Language (UML) diagrams. The coloring system indicates which of several archetypes apply to the UML object. UML typically identifies a stereotype with a bracketed comment for each object identifying whether it is a class, interface, etc. These colors were first suggested by Peter Coad, Eric Lefebvre, and Jeff De Luca in a series of articles in The Coad Letter, and later published in their book Java Modeling In Color With UML. Over hundreds of domain models, it became clear that four major "types" of classes appeared again and again, though they had different names in different domains. After much discussion, these were termed archetypes, which is meant to convey that the classes of a given archetype follow more or less the same form. That is, attributes, methods, associations, and interfaces are fairly similar among classes of a given archetype. When attempting to classify a given domain class, one typically asks about the color standards in this order: pink (moment-interval) — Does it represent a moment or interval of time that we need to remember and work with for legal or business reasons? Examples in business systems generally model activities involving people, places and things such as a sale, an order, a rental, an employment, making a journey, etc. yellow (role) — Is it a way of participating in an activity (by either a person, place, or thing)? A person playing the role of an employee in an employment, a thing playing the role of a product in a sale, a location playing the role of a classroom for a training course, are examples of roles. blue (description) — Is it simply a catalog-entry-like description which classifies or 'labels' an object? For example, the make and model of a car categorises or describes a number of physical vehicles. The relationship between the blue description and green party, place or thing is a type-instance relationship based on differences in the values of da
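A hypothetical domain sketch of the four archetypes as plain Python classes (all names are illustrative, not from the book); the comments mark the color each class would carry on a diagram:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Product:            # blue: description -- a catalog-entry-like classifier
    make: str
    model: str

@dataclass
class Vehicle:            # green: party, place, or thing -- a concrete item
    vin: str
    description: Product  # type-instance link to its blue description

@dataclass
class SalesClerk:         # yellow: role -- a way a person participates in the activity
    person_name: str

@dataclass
class Sale:               # pink: moment-interval -- an event we must remember
    when: datetime
    item: Vehicle
    clerk: SalesClerk
    price: float
```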
https://en.wikipedia.org/wiki/Actin-binding%20protein
Actin-binding proteins (also known as ABPs) are proteins that bind to actin. This may mean ability to bind actin monomers, or polymers, or both. Many actin-binding proteins, including α-actinin, β-spectrin, dystrophin, utrophin and fimbrin, do this through the actin-binding calponin homology domain. This is a list of actin-binding proteins in alphabetical order. 0–9 25kDa 25kDa ABP from aorta p185neu 30akDA 110 kD dimer ABP 30bkDa 110 kD (Drebrin) 34kDA 45kDa p53 p58gag p116rip A a-actinin Abl ABLIM Actin-Interacting MAPKKK ABP120 ABP140 Abp1p ABP280 (Filamin) ABP50 (EF-1a) Acan 125 (Carmil) ActA Actibind Actin Actinfilin Actinogelin Actin-regulating kinases Actin-Related Proteins Actobindin Actolinkin Actopaxin Actophorin Acumentin (= L-plastin) Adducin ADF/Cofilin Adseverin (scinderin) Afadin AFAP-110 Affixin Aginactin AIP1 Aldolase Angiogenin Anillin Annexins Aplyronine Archvillin Arginine kinase Arp2/3 complex B Band 4.1 Band 4.9(Dematin) b-actinin b-Cap73 Bifocal Bistramide A BPAG1 Brevin (Gelsolin) C c-Abl Calpactin (Annexin) CHO1 Cortactin CamKinase II Calponin Chondramide Cortexillin CAP Caltropin CH-ILKBP CPb3 Cap100 Calvasculin Ciboulot Coactosin CAP23 CARMIL Acan125 Cingulin Cytovillin (Ezrin) CapZ/Capping Protein a-Catenin Cofilin CR16 Caldesmon CCT Comitin Calicin Centuarin Coronin D DBP40 Drebrin Dematin (Band 4.9) Dynacortin Destrin (ADF/cofilin) Dystonins Diaphanous Dystroglycan DNase I Dystrophin Doliculide Dolastatins E EAST Endossin EF-1a (ABP50) Eps15 EF-1b EPLIN EF-2 Epsin EGF receptor ERK ENC-1 ERM proteins (ezrin, radixin, moesin, plus merlin) END3p Ezrin (the E of ERM protein family) F F17R Fodrin (spectrin) Fascin Formins Fessilin Frabin FHL3 Fragmin Fhos FLNA (filamin A) Fimbrin (plastin) G GAP43 Glycogenins Gas2 G-proteins Gastrin-Binding Protein Gelactins I-IV Gelsolins Girdin Glucokinase H Harmonin b Hrp36 Hexokinase Hrp65-2 Hectochlorin HS1 (actin binding protein) Helicase II Hsp27 HIP1 (Huntingtin Interacting protein 1) H
https://en.wikipedia.org/wiki/IPFC
IPFC stands for Internet Protocol over Fibre Channel. It governs a set of standards created in January 2006 for address resolution (ARP) and for transmitting IPv4 and IPv6 network packets over a Fibre Channel (FC) network. IPFC makes up part of the FC-4 protocol-mapping layer of a Fibre Channel system. In IPFC, each IP datagram packet is wrapped into an FC frame, with its own header, and transmitted as a sequence of one or more frames. The receiver at the other end receives the frames, strips the FC headers and reassembles the IP packet. IP datagrams of up to 65,280 bytes in size may be accommodated. ARP packet transmission works in the same fashion. Each IP datagram exchange is unidirectional, although IP and TCP allow for bidirectional communication within their protocols. IPFC is an application protocol that is typically implemented as a device driver in an operating system. IP over FC plays a less important role in storage area networking than SCSI over Fibre Channel or IP over Ethernet. IPFC has been used, for example, to provide clock synchronization via the Network Time Protocol (NTP). See also iFCP - Internet Fibre Channel Protocol Fibre Channel over IP References External links RFC 4338 - Transmission of IPv6, IPv4, and Address Resolution Protocol (ARP) Packets over Fibre Channel RFC 5494 - An update of RFC 4338 specifying IANA guidelines for ARP RFC 2625, RFC 3831 were older versions of IPFC obsoleted by RFC 4338 Internet protocols Fibre Channel
https://en.wikipedia.org/wiki/Nanotribology
Nanotribology is the branch of tribology that studies friction, wear, adhesion and lubrication phenomena at the nanoscale, where atomic interactions and quantum effects are not negligible. The aim of this discipline is characterizing and modifying surfaces for both scientific and technological purposes. Nanotribological research has historically involved both direct and indirect methodologies. Microscopy techniques, including the Scanning Tunneling Microscope (STM), Atomic-Force Microscope (AFM) and Surface Forces Apparatus (SFA), have been used to analyze surfaces with extremely high resolution, while indirect methods such as computational methods and the Quartz crystal microbalance (QCM) have also been extensively employed. By changing the topology of surfaces at the nanoscale, friction can be reduced or enhanced more strongly than is possible through macroscopic lubrication and adhesion; in this way, superlubricity and superadhesion can be achieved. In micro- and nano-mechanical devices, problems of friction and wear, which are critical due to the extremely high surface-to-volume ratio, can be solved by covering moving parts with superlubricant coatings. On the other hand, where adhesion is an issue, nanotribological techniques offer a possibility to overcome such difficulties. History Friction and wear have been technological issues since ancient periods. On the one hand, the scientific approach of the last centuries towards the comprehension of the underlying mechanisms was focused on macroscopic aspects of tribology. On the other hand, in nanotribology, the systems studied are composed of nanometric structures, where volume forces (such as those related to mass and gravity) can often be considered negligible compared to surface forces. Scientific equipment to study such systems has been developed only in the second half of the 20th century. In 1969 the very first method to study the behavior of a molecularly thin liquid film sandwiched between two smooth surfaces through the SFA
https://en.wikipedia.org/wiki/Dual%20graph
In the mathematical discipline of graph theory, the dual graph of a planar graph G is a graph that has a vertex for each face of G. The dual graph has an edge for each pair of faces in G that are separated from each other by an edge, and a self-loop when the same face appears on both sides of an edge. Thus, each edge e of G has a corresponding dual edge, whose endpoints are the dual vertices corresponding to the faces on either side of e. The definition of the dual depends on the choice of embedding of the graph G, so it is a property of plane graphs (graphs that are already embedded in the plane) rather than planar graphs (graphs that may be embedded but for which the embedding is not yet known). For planar graphs generally, there may be multiple dual graphs, depending on the choice of planar embedding of the graph. Historically, the first form of graph duality to be recognized was the association of the Platonic solids into pairs of dual polyhedra. Graph duality is a topological generalization of the geometric concepts of dual polyhedra and dual tessellations, and is in turn generalized combinatorially by the concept of a dual matroid. Variations of planar graph duality include a version of duality for directed graphs, and duality for graphs embedded onto non-planar two-dimensional surfaces. These notions of dual graphs should not be confused with a different notion, the edge-to-vertex dual or line graph of a graph. The term dual is used because the property of being a dual graph is symmetric, meaning that if H is a dual of a connected graph G, then G is a dual of H. When discussing the dual of a graph G, the graph G itself may be referred to as the "primal graph". Many other graph properties and structures may be translated into other natural properties and structures of the dual. For instance, cycles are dual to cuts, spanning trees are dual to the complements of spanning trees, and simple graphs (without parallel edges or self-loops) are dual to 3-edge-connected graphs.
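A minimal sketch of constructing a dual from a planar embedding using networkx (the helper name planar_dual is ours; the sketch assumes a connected planar graph so that the faces of the embedding match the faces in the plane):

```python
import networkx as nx

def planar_dual(G):
    """Build the dual of a planar embedding of connected graph G.

    Each face becomes a dual vertex; each primal edge yields a dual edge
    between the faces on its two sides (a self-loop if both sides coincide).
    """
    is_planar, emb = nx.check_planarity(G)
    if not is_planar:
        raise ValueError("G is not planar")

    face_of = {}  # directed half-edge -> face id
    faces = 0
    for u, v in emb.edges():  # emb is a DiGraph: every edge appears as 2 half-edges
        if (u, v) not in face_of:
            walk = emb.traverse_face(u, v)  # nodes of the face, in order
            # consecutive pairs (with wrap-around) are the face's half-edges
            for he in zip(walk, walk[1:] + walk[:1]):
                face_of[he] = faces
            faces += 1

    D = nx.MultiGraph()
    D.add_nodes_from(range(faces))
    for u, v in G.edges():  # the two half-edges border the faces on either side
        D.add_edge(face_of[(u, v)], face_of[(v, u)])
    return D

# A 4-cycle has two faces (inside and outside); its dual has 2 vertices
# joined by 4 parallel edges, consistent with Euler's formula V - E + F = 2.
D = planar_dual(nx.cycle_graph(4))
print(D.number_of_nodes(), D.number_of_edges())  # 2 4
```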
https://en.wikipedia.org/wiki/Categories%20for%20the%20Working%20Mathematician
Categories for the Working Mathematician (CWM) is a textbook in category theory written by American mathematician Saunders Mac Lane, who cofounded the subject together with Samuel Eilenberg. It was first published in 1971, and is based on his lectures on the subject given at the University of Chicago, the Australian National University, Bowdoin College, and Tulane University. It is widely regarded as the premier introduction to the subject. Contents The book has twelve chapters, which are: Chapter I. Categories, Functors, and Natural Transformations. Chapter II. Constructions on Categories. Chapter III. Universals and Limits. Chapter IV. Adjoints. Chapter V. Limits. Chapter VI. Monads and Algebras. Chapter VII. Monoids. Chapter VIII. Abelian Categories. Chapter IX. Special Limits. Chapter X. Kan Extensions. Chapter XI. Symmetry and Braiding in Monoidal Categories Chapter XII. Structures in Categories. Chapters XI and XII were added in the 1998 second edition, the first in view of its importance in string theory and quantum field theory, and the second to address higher-dimensional categories that have come into prominence. Although it is the classic reference for category theory, some of the terminology is not standard. In particular, Mac Lane attempted to settle an ambiguity in usage for the terms epimorphism and monomorphism by introducing the terms epic and monic, but the distinction is not in common use. References Notes 1971 non-fiction books Graduate Texts in Mathematics Monographs Category theory
https://en.wikipedia.org/wiki/Carrier%20grade%20open%20framework
Carrier grade open framework (CGOF) is a hardware-independent architecture for the telecommunications industry. CGOF is based on a collection of open standards and is offered as a basis for new solution development. CGOF specifies the functional components needed to create next generation network (NGN) solutions, the relationship of those components to each other, and the interfaces among the components. External links IBM white paper on CGOF Oracle CGOF Home Page Network architecture
https://en.wikipedia.org/wiki/Biologist
A biologist is a scientist who conducts research in biology. Biologists are interested in studying life on Earth, whether it is an individual cell, a multicellular organism, or a community of interacting populations. They usually specialize in a particular branch (e.g., molecular biology, zoology, and evolutionary biology) of biology and have a specific research focus (e.g., studying malaria or cancer). Biologists who are involved in basic research have the aim of advancing knowledge about the natural world. They conduct their research using the scientific method, which is an empirical method for testing hypotheses. Their discoveries may have applications for some specific purpose such as in biotechnology, which has the goal of developing medically useful products for humans. In modern times, most biologists have one or more academic degrees such as a bachelor's degree plus an advanced degree like a master's degree or a doctorate. Like other scientists, biologists can be found working in different sectors of the economy such as in academia, nonprofits, private industry, or government. History Francesco Redi, the founder of biology, is recognized to be one of the greatest biologists of all time. Robert Hooke, an English natural philosopher, coined the term cell, noting plant structure's resemblance to honeycomb cells. Charles Darwin and Alfred Wallace independently formulated the theory of evolution by natural selection, described in detail in Darwin's book On the Origin of Species, published in 1859. In it, Darwin proposed that the features of all living things, including humans, were shaped by natural processes of descent with accumulated modification leading to divergence over long periods of time. The theory of evolution in its current form affects almost all areas of biology. Separately, Gregor Mendel formulated the principles of inheritance in 1866, which became the basis of modern genetics. In 1953, James D. Watson and Francis
https://en.wikipedia.org/wiki/Oseledets%20theorem
In mathematics, the multiplicative ergodic theorem, or Oseledets theorem, provides the theoretical background for computation of Lyapunov exponents of a nonlinear dynamical system. It was proved by Valery Oseledets (also spelled "Oseledec") in 1965 and reported at the International Mathematical Congress in Moscow in 1966. A conceptually different proof of the multiplicative ergodic theorem was found by M. S. Raghunathan. The theorem has been extended to semisimple Lie groups by V. A. Kaimanovich and further generalized in the works of David Ruelle, Grigory Margulis, Anders Karlsson, and François Ledrappier. Cocycles The multiplicative ergodic theorem is stated in terms of matrix cocycles of a dynamical system. The theorem states conditions for the existence of the defining limits and describes the Lyapunov exponents. It does not address the rate of convergence. A cocycle of an autonomous dynamical system X is a map C : X×T → Rn×n satisfying C(x, 0) = In and C(x, t + s) = C(x(t), s) C(x, t), where x(t) denotes the state reached from x after time t, X and T (with T = Z⁺ or T = R⁺) are the phase space and the time range, respectively, of the dynamical system, and In is the n-dimensional unit matrix. The dimension n of the matrices C is not related to the phase space X. Examples A prominent example of a cocycle is given by the matrix Jt in the theory of Lyapunov exponents. In this special case, the dimension n of the matrices is the same as the dimension of the manifold X. For any cocycle C, the determinant det C(x, t) is a one-dimensional cocycle. Statement of the theorem Let μ be an ergodic invariant measure on X and C a cocycle of the dynamical system such that for each t ∈ T, the maps x ↦ log⁺ ‖C(x, t)‖ and x ↦ log⁺ ‖C(x, t)⁻¹‖ are L1-integrable with respect to μ (here log⁺ a = max(0, log a)). Then for μ-almost all x and each non-zero vector u ∈ Rn the limit λ(u) = lim t→∞ (1/t) log ‖C(x, t) u‖ exists and assumes, depending on u but not on x, up to n different values. These are the Lyapunov exponents. Further, if λ1 > ... > λm are the different limits then there are subspaces Rn = R1 ⊃ ... ⊃ Rm ⊃ Rm+1 = {0}, depending on x, such that the limit is λi for u ∈ Ri \ Ri+1.
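In practice, the exponents whose existence the theorem guarantees are estimated numerically. A common approach (a sketch, not from the article) re-orthonormalizes the evolving frame with QR decompositions so the running matrix products never overflow; here the cocycle is a product of i.i.d. random matrices:

```python
import numpy as np

def lyapunov_exponents(matrices):
    """Estimate the Lyapunov exponents of a matrix cocycle given as a list of
    one-step matrices, using repeated QR re-orthonormalization."""
    n = matrices[0].shape[0]
    Q = np.eye(n)
    sums = np.zeros(n)
    for A in matrices:
        Q, R = np.linalg.qr(A @ Q)
        # Flip signs so the diagonal of R is positive and its log is defined.
        s = np.where(np.diag(R) >= 0, 1.0, -1.0)
        Q, R = Q * s, R * s[:, None]
        sums += np.log(np.diag(R))
    return sums / len(matrices)  # per-step exponents, sorted by construction

# Example: a random cocycle of 3x3 Gaussian matrices.
rng = np.random.default_rng(0)
mats = [rng.normal(size=(3, 3)) for _ in range(10_000)]
print(np.sort(lyapunov_exponents(mats))[::-1])
```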
https://en.wikipedia.org/wiki/Filter%20%28software%29
A filter is a computer program or subroutine to process a stream, producing another stream. While a single filter can be used individually, they are frequently strung together to form a pipeline. Some operating systems such as Unix are rich with filter programs. Windows 7 and later are also rich with filters, as they include Windows PowerShell. In comparison, however, few filters are built into cmd.exe (the original command-line interface of Windows), most of which have significant enhancements relative to the similar filter commands that were available in MS-DOS. OS X includes filters from its underlying Unix base but also has Automator, which allows filters (known as "Actions") to be strung together to form a pipeline. Unix In Unix and Unix-like operating systems, a filter is a program that gets most of its data from its standard input (the main input stream) and writes its main results to its standard output (the main output stream). Auxiliary input may come from command line flags or configuration files, while auxiliary output may go to standard error. The command syntax for getting data from a device or file other than standard input is the input operator (<). Similarly, to send data to a device or file other than standard output is the output operator (>). To append data lines to an existing output file, one can use the append operator (>>). Filters may be strung together into a pipeline with the pipe operator ("|"). This operator signifies that the main output of the command to the left is passed as main input to the command on the right. The Unix philosophy encourages combining small, discrete tools to accomplish larger tasks. The classic filter in Unix is Ken Thompson's grep, which Doug McIlroy cites as what "ingrained the tools outlook irrevocably" in the operating system, with later tools imitating it. grep at its simplest prints any lines containing a character string to its output. The following is an example: cut -d : -f 1 /etc/passwd | grep foo This finds
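A filter in this sense is easy to write in any language that can read standard input and write standard output. Here is a minimal Python sketch (the file name upper.py is hypothetical) that can be dropped straight into a pipeline:

```python
#!/usr/bin/env python3
# upper.py -- a minimal Unix-style filter: read stdin, write stdout.
# Example pipeline:  cut -d : -f 1 /etc/passwd | python3 upper.py
import sys

for line in sys.stdin:               # main input stream
    sys.stdout.write(line.upper())   # main output stream
```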
https://en.wikipedia.org/wiki/225%20%28number%29
225 (two hundred [and] twenty-five) is the natural number following 224 and preceding 226. In mathematics 225 is the smallest number that is a polygonal number in five different ways. It is a square number (225 = 15²), an octagonal number, and a squared triangular number (225 = (1 + 2 + 3 + 4 + 5)² = 1³ + 2³ + 3³ + 4³ + 5³). As the square of a double factorial, 225 = (5!!)² counts the number of permutations of six items in which all cycles have even length, or the number of permutations in which all cycles have odd length. And as one of the Stirling numbers of the first kind, it counts the number of permutations of six items with exactly three cycles. 225 is a highly composite odd number, meaning that it has more divisors than any smaller odd number. After 1 and 9, 225 is the third smallest number n for which σ(φ(n)) = φ(σ(n)), where σ is the sum of divisors function and φ is Euler's totient function. 225 is a refactorable number. 225 is the smallest square number to have one of every digit in some number base (225 is 3201 in base 4). 225 is the first odd number with exactly 9 divisors. References Integers
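Several of these properties are easy to verify directly; a small self-contained Python check (the helper functions are ours, written naively for clarity rather than speed):

```python
from math import gcd

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def sigma(n):  # sum-of-divisors function
    return sum(divisors(n))

def phi(n):    # Euler's totient function
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

n = 225
print(len(divisors(n)))                # 9 -- the first odd number with exactly 9 divisors
print(n % len(divisors(n)) == 0)       # True -- refactorable: divisible by its divisor count
print(sigma(phi(n)) == phi(sigma(n)))  # True -- the property shared with n = 1 and n = 9
print(int("3201", 4))                  # 225 -- every base-4 digit appears exactly once
```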
https://en.wikipedia.org/wiki/Autolysis%20%28biology%29
In biology, autolysis, more commonly known as self-digestion, refers to the destruction of a cell through the action of its own enzymes. It may also refer to the digestion of an enzyme by another molecule of the same enzyme. The term derives from the Greek αὐτο- 'self' and λύσις 'splitting'. Biochemical mechanisms of cell destruction Autolysis is uncommon in living adult organisms and usually occurs in necrotic tissue as enzymes act on components of the cell that would not normally serve as substrates. These enzymes are released due to the cessation of active processes in the cell that provide substrates in healthy, living tissue; autolysis in itself is not an active process. In other words, though autolysis resembles the active process of digestion of nutrients by live cells, the dead cells are not actively digesting themselves, as is often claimed and as the synonym self-digestion suggests. Failure of respiration and the subsequent failure of oxidative phosphorylation trigger the autolytic process. The reduced availability and subsequent absence of high-energy molecules that are required to maintain the integrity of the cell and maintain homeostasis causes significant changes in the biochemical operation of the cell. Molecular oxygen serves as the terminal electron acceptor in the series of biochemical reactions known as oxidative phosphorylation that are ultimately responsible for the synthesis of adenosine triphosphate, the main source of energy for otherwise thermodynamically unfavorable cellular processes. Failure of delivery of molecular oxygen to cells results in a metabolic shift to anaerobic glycolysis, in which glucose is converted to pyruvate as an inefficient means of generating adenosine triphosphate. Glycolysis has a lower ATP yield than oxidative phosphorylation and generates acidic byproducts that decrease the pH of the cell, which enables many of the enzymatic processes involved in autolysis. Limited synthesis of adenosine triphosphate
https://en.wikipedia.org/wiki/Origamic%20architecture
Origamic architecture is a form of kirigami that involves the three-dimensional reproduction of architecture and monuments, on various scales, using cut-out and folded paper, usually thin paperboard. Visually, these creations are comparable to intricate 'pop-ups'; indeed, some works are deliberately engineered to possess 'pop-up'-like properties. However, origamic architecture tends to be cut out of a single sheet of paper, whereas most pop-ups involve two or more. To create the three-dimensional image out of the two-dimensional surface requires skill akin to that of an architect. Origin The development of origamic architecture began with the experiments of Professor Masahiro Chatani (then a newly appointed professor at the Tokyo Institute of Technology) in designing original and unique greeting cards. Japanese culture encourages the giving and receiving of cards for various special occasions and holidays, particularly the Japanese New Year, and according to his own account, Professor Chatani personally felt that greeting cards were a significant form of connection and communication between people. He worried that in today's fast-paced modern world, the emotional connections called up and created by the exchange of greeting cards would become scarce. In the early 1980s, Professor Chatani began to experiment with cutting and folding paper to make unique and interesting pop-up cards. He used the techniques of origami (Japanese paper folding) and kirigami (Japanese papercutting), as well as his experience in architectural design, to create intricate patterns that played with light and shadow. Many of his creations are made of stark white paper which emphasizes the shadowing effects of the cuts and folds. In the preface to one of his books, he called the shadows of the three-dimensional cutouts a "dreamy scene" that invited the viewer into a "fantasy world". At first, Professor Chatani simply gave the cards to his friends and family. Over the next nearly thirty
https://en.wikipedia.org/wiki/Two%20envelopes%20problem
The two envelopes problem, also known as the exchange paradox, is a paradox in probability theory. It is of special interest in decision theory and for the Bayesian interpretation of probability theory. It is a variant of an older problem known as the necktie paradox. The problem is typically introduced by formulating a hypothetical challenge of the following sort: you are given two indistinguishable envelopes, one containing twice as much money as the other; you may keep the envelope you pick, but before opening it you are offered the chance to switch. Since the situation is symmetric, it seems obvious that there is no point in switching envelopes. On the other hand, a simple calculation using expected values suggests the opposite conclusion, that it is always beneficial to swap envelopes, since the person stands to gain twice as much money if they switch, while the only risk is halving what they currently have. Introduction Problem A person is given two indistinguishable envelopes, each of which contains a sum of money. One envelope contains twice as much as the other. The person may pick one envelope and keep whatever amount it contains. They pick one envelope at random but before they open it they are given the chance to take the other envelope instead. The switching argument Now suppose the person reasons as follows: denote by A the amount in the selected envelope; the other envelope contains 2A with probability 1/2 and A/2 with probability 1/2, so its expected value is (1/2)(2A) + (1/2)(A/2) = 5A/4, which is greater than A; it therefore seems beneficial to switch, and after switching the same reasoning applies again, suggesting endless swapping. The puzzle The puzzle is to find the flaw in the line of reasoning in the switching argument. This includes determining exactly why and under what conditions that step is not correct, to be sure not to make this mistake in a situation where the misstep may not be so obvious. In short, the problem is to solve the paradox. The puzzle is not solved by finding another way to calculate the probabilities that does not lead to a contradiction. Multiplicity of proposed solutions There have been many solutions proposed, and commonly one writer proposes a solution to the problem as stated, after which another writer shows that altering the problem slightly revives the paradox. Such sequences of discussions have produced a family of closely related formulations of the problem, resulting in voluminous literature on the subject. No p
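A quick Monte Carlo sketch (ours, not from the literature) makes the symmetry concrete: when the pair of amounts is fixed first and the envelope is chosen at random, always switching earns exactly as much on average as always keeping:

```python
import random

def trial(swap):
    """One game: draw the smaller amount, pick an envelope, optionally switch."""
    x = random.uniform(1, 100)   # smaller amount; the other envelope holds 2x
    envelopes = [x, 2 * x]
    pick = random.randrange(2)
    if swap:
        pick = 1 - pick
    return envelopes[pick]

n = 1_000_000
keep = sum(trial(False) for _ in range(n)) / n
swap = sum(trial(True) for _ in range(n)) / n
print(keep, swap)  # nearly identical: switching confers no advantage
```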
https://en.wikipedia.org/wiki/Route%20poisoning
Route poisoning is a method to prevent a router from sending packets through a route that has become invalid within computer networks. Distance-vector routing protocols in computer networks use route poisoning to indicate to other routers that a route is no longer reachable and should be removed from their routing tables. Unlike the split horizon with poison reverse, route poisoning provides for sending updates with unreachable hop counts immediately to all the nodes in the network. When the protocol detects an invalid route, all of the routers in the network are informed that the bad route has an infinite (∞) route metric. This makes all nodes on the invalid route seem infinitely distant, preventing any of the routers from sending packets over the invalid route. Some distance-vector routing protocols, such as RIP, use a maximum hop count to determine how many routers the traffic must go through to reach the destination. Each route has a hop count number assigned to it which is incremented as the routing information is passed from router to router. A route is considered unreachable if the hop count exceeds the maximum allowed. Route poisoning is a method of quickly forgetting outdated routing information from other routers' routing tables by changing its hop count to be unreachable (higher than the maximum number of hops allowed) and sending a routing update. In the case of RIP, the maximum hop count is 15, so to perform route poisoning on a route its hop count is changed to 16, deeming it unreachable, and a routing update is sent. If these updates are lost, some nodes in the network would not be informed that a route is invalid, so they could attempt to send packets over the bad route and cause a problem known as a routing loop. Therefore, route poisoning is used in conjunction with holddowns to keep update messages from falsely reinstating the validity of a bad route. This prevents routing loops, improving the overall efficiency of the network. Referenc
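A minimal sketch of the mechanism in a RIP-style distance-vector table (all data structures and names are ours; real implementations also run timers, split horizon, and hold-downs):

```python
INFINITY = 16  # RIP treats a metric of 16 hops as unreachable

routing_table = {
    "10.0.0.0/8":    {"next_hop": "192.168.1.2", "metric": 3},
    "172.16.0.0/12": {"next_hop": "192.168.1.3", "metric": 5},
}

def poison_route(table, prefix):
    """Mark a failed route unreachable and build the triggered update."""
    table[prefix]["metric"] = INFINITY
    return {prefix: INFINITY}  # advertised immediately to all neighbors

def receive_update(table, update, sender):
    """A neighbor marks routes learned via the sender as unreachable too."""
    for prefix, metric in update.items():
        entry = table.get(prefix)
        if entry and entry["next_hop"] == sender and metric >= INFINITY:
            entry["metric"] = INFINITY  # route unusable; hold-down would start here

update = poison_route(routing_table, "10.0.0.0/8")
print(update)  # {'10.0.0.0/8': 16}
```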
https://en.wikipedia.org/wiki/Clustered%20web%20hosting
Clustered hosting is a type of web hosting that spreads the load of hosting across multiple physical machines, or nodes, increasing availability and decreasing the chances of one service (e.g., FTP or email) affecting another (e.g., MySQL). Many large websites run on clustered hosting solutions; for example, large discussion forums will tend to run using multiple front-end webservers with multiple back-end database servers. Typically, most hosting infrastructures are based on the paradigm of using a single physical machine to host multiple hosted services, including web, database, email, FTP and others. A single physical machine is not only a single point of failure, but also has finite capacity for traffic, which in practice can be troublesome for a busy website or for a website that is experiencing transient bursts in traffic. By clustering services across multiple hardware machines and using load balancing, single points of failure can be eliminated, increasing availability of a website and other web services beyond that of ordinary single-server hosting. A single server can require periodic reboots for software upgrades and the like, whereas in a clustered platform you can stagger the restarts such that the service is still available while all necessary machines in the cluster are upgraded. Clustered hosting is similar to cloud hosting, in that the resources of many machines are available for a website to utilize on demand, making scalability a large advantage to a clustered hosting solution. See also High-availability cluster References Web hosting
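A toy round-robin balancer (hostnames hypothetical) illustrating the two points above: requests are spread across nodes, and one node can be drained for a staggered upgrade while the service stays available:

```python
from itertools import cycle

nodes = ["web1.example", "web2.example", "web3.example"]

def make_balancer(active):
    """Return a callable that hands out active nodes in round-robin order."""
    pool = cycle(active)
    return lambda: next(pool)

route = make_balancer(nodes)
print([route() for _ in range(6)])  # web1, web2, web3, web1, web2, web3

# Stagger maintenance: drain one node; requests keep flowing to the rest.
route = make_balancer([n for n in nodes if n != "web2.example"])
print([route() for _ in range(4)])  # web1, web3, web1, web3
```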
https://en.wikipedia.org/wiki/SystemVerilog
SystemVerilog, standardized as IEEE 1800, is a hardware description and hardware verification language used to model, design, simulate, test and implement electronic systems. SystemVerilog is based on Verilog and some extensions, and since 2009 Verilog has been part of the same IEEE standard. It is commonly used in the semiconductor and electronic design industry as an evolution of Verilog. History SystemVerilog started with the donation of the Superlog language to Accellera in 2002 by the startup company Co-Design Automation. The bulk of the verification functionality is based on the OpenVera language donated by Synopsys. In 2005, SystemVerilog was adopted as IEEE Standard 1800-2005. In 2009, the standard was merged with the base Verilog (IEEE 1364-2005) standard, creating IEEE Standard 1800-2009. The current version is IEEE standard 1800-2017. The feature-set of SystemVerilog can be divided into two distinct roles: SystemVerilog for register-transfer level (RTL) design is an extension of Verilog-2005; all features of that language are available in SystemVerilog. Therefore, Verilog is a subset of SystemVerilog. SystemVerilog for verification uses extensive object-oriented programming techniques and is more closely related to Java than Verilog. These constructs are generally not synthesizable. The remainder of this article discusses the features of SystemVerilog not present in Verilog-2005. Design features Data lifetime There are two types of data lifetime specified in SystemVerilog: static and automatic. Automatic variables are created the moment program execution comes to the scope of the variable. Static variables are created at the start of the program's execution and keep the same value during the entire program's lifespan, unless assigned a new value during execution. Any variable that is declared inside a task or function without specifying type will be considered automatic. To specify that a variable is static, place the "static" keyword in the decla
https://en.wikipedia.org/wiki/Darwin%20%28programming%20game%29
Darwin was a programming game invented in August 1961 by Victor A. Vyssotsky, Robert Morris Sr., and M. Douglas McIlroy. (Dennis Ritchie is sometimes incorrectly cited as a co-author, but was not involved.) The game was developed at Bell Labs, and played on an IBM 7090 mainframe there. The game was only played for a few weeks before Morris developed an "ultimate" program that eventually brought the game to an end, as no one managed to produce anything that could defeat it. Description The game consisted of a program called the umpire and a designated section of the computer's memory known as the arena, into which two or more small programs, written by the players, were loaded. The programs were written in 7090 machine code, and could call a number of functions provided by the umpire in order to probe other locations within the arena, kill opposing programs, and claim vacant memory for copies of themselves. The game ended after a set amount of time, or when copies of only one program remained alive. The player who wrote the last surviving program was declared winner. Up to 20 memory locations within each program (fewer in later versions of the game) could be designated as protected. If one of these protected locations was probed by another program, the umpire would immediately transfer control to the program that was probed. This program would then continue to execute until it, in turn, probed a protected location of some other program, and so forth. While the programs were responsible for copying and relocating themselves, they were forbidden from altering memory locations outside themselves without permission from the umpire. As the programs were executed directly by the computer, there was no physical mechanism in place to prevent cheating. Instead, the source code for the programs was made available for study after each game, allowing players to learn from each other and to verify that their opponents hadn't cheated. The smallest program that could
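A heavily simplified toy model of the umpire loop described above (the real game ran in IBM 7090 machine code with separate probe/kill/claim calls; everything here, including the random "strategy", is our own stand-in):

```python
import random

ARENA_SIZE = 64

class Program:
    """Toy stand-in for a player program occupying some arena cells."""
    def __init__(self, name, cells, protected):
        self.name = name
        self.cells = set(cells)
        self.protected = set(protected)  # probing these transfers control

def umpire(programs, steps=10_000):
    """The running program probes random locations; hitting an opponent's
    unprotected cell kills it, hitting a protected cell transfers control."""
    running = programs[0]
    alive = list(programs)
    for _ in range(steps):
        target = random.randrange(ARENA_SIZE)  # stand-in for a real strategy
        for other in alive:
            if other is not running and target in other.cells:
                if target in other.protected:
                    running = other        # control passes to the probed program
                else:
                    alive.remove(other)    # unprotected cell: opponent is killed
                break
        if len(alive) == 1:
            return alive[0].name
    return None  # time limit reached with several survivors

print(umpire([Program("A", range(0, 20), {3, 7}),
              Program("B", range(30, 50), {31})]))
```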
https://en.wikipedia.org/wiki/Multiplication%20%28music%29
The mathematical operations of multiplication have several applications to music. Other than its application to the frequency ratios of intervals (for example, Just intonation, and the twelfth root of two in equal temperament), it has been used in other ways for twelve-tone technique, and musical set theory. Additionally ring modulation is an electrical audio process involving multiplication that has been used for musical effect. A multiplicative operation is a mapping in which the argument is multiplied. Multiplication originated intuitively in interval expansion, including tone row order number rotation, for example in the music of Béla Bartók and Alban Berg. Pitch number rotation, Fünferreihe or "five-series" and Siebenerreihe or "seven-series", was first described by Ernst Krenek in Über neue Musik. Princeton-based theorists, including James K. Randall, Godfrey Winham, and Hubert S. Howe "were the first to discuss and adopt them, not only with regards to twelve-tone series". Pitch-class multiplication modulo 12 When dealing with pitch-class sets, multiplication modulo 12 is a common operation. Dealing with all twelve tones, or a tone row, there are only a few numbers by which one may multiply a row and still end up with a set of twelve distinct tones. Taking the prime or unaltered form as P0, multiplication is indicated by Mx, x being the multiplicator: Mx(y) ≡ xy mod 12 Listing all possible multiplications of a chromatic twelve-tone row (the short sketch below generates the table) shows that only M1, M5, M7, and M11 give a one-to-one mapping (a complete set of 12 unique tones). This is because each of these numbers is relatively prime to 12. Also interesting is that the chromatic scale is mapped to the circle of fourths with M5, or fifths with M7, and more generally under M7 all even numbers stay the same while odd numbers are transposed by a tritone. This kind of multiplication is frequently combined with a transposition operation. It was first described in print by Herbert Eime
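A short sketch generating the multiplication table referenced above and confirming which multipliers are one-to-one (those coprime to 12):

```python
# Pitch-class multiplication Mx(y) = x*y mod 12 applied to the chromatic row.
from math import gcd

row = list(range(12))  # the chromatic scale as pitch classes 0..11
for x in range(12):
    mapped = [(x * y) % 12 for y in row]
    bijective = gcd(x, 12) == 1  # one-to-one iff x is relatively prime to 12
    print(f"M{x:<2} {mapped}{' (one-to-one)' if bijective else ''}")

# M5 sends the chromatic scale to the circle of fourths, M7 to the circle of fifths.
print([(5 * y) % 12 for y in row])  # [0, 5, 10, 3, 8, 1, 6, 11, 4, 9, 2, 7]
print([(7 * y) % 12 for y in row])  # [0, 7, 2, 9, 4, 11, 6, 1, 8, 3, 10, 5]
```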
https://en.wikipedia.org/wiki/Equidistribution%20theorem
In mathematics, the equidistribution theorem is the statement that the sequence a, 2a, 3a, ... mod 1 is uniformly distributed on the circle R/Z, when a is an irrational number. It is a special case of the ergodic theorem where one takes the normalized angle measure μ = dθ/2π. History While this theorem was proved in 1909 and 1910 separately by Hermann Weyl, Wacław Sierpiński and Piers Bohl, variants of this theorem continue to be studied to this day. In 1916, Weyl proved that the sequence a, 2²a, 3²a, ... mod 1 is uniformly distributed on the unit interval. In 1937, Ivan Vinogradov proved that the sequence pn a mod 1 is uniformly distributed, where pn is the nth prime. Vinogradov's proof was a byproduct of his work on the odd Goldbach conjecture, that every sufficiently large odd number is the sum of three primes. George Birkhoff, in 1931, and Aleksandr Khinchin, in 1933, proved that the generalization x + na, for almost all x, is equidistributed on any Lebesgue measurable subset of the unit interval. The corresponding generalizations for the Weyl and Vinogradov results were proven by Jean Bourgain in 1988. Specifically, Khinchin showed that the identity (1/n) Σ ƒ((x + ka) mod 1) → ∫₀¹ ƒ(y) dy as n → ∞, with the sum over k = 1, ..., n, holds for almost all x and any Lebesgue integrable function ƒ. In modern formulations, it is asked under what conditions the analogous identity with a general sequence bk in place of ka might hold. One noteworthy result is that the sequence 2ᵏa mod 1 is uniformly distributed for almost all, but not all, irrational a. Similarly, for the sequence bk = 2ᵏa, for every irrational a, and almost all x, there exists a function ƒ for which the sum diverges. In this sense, this sequence is considered to be a universally bad averaging sequence, as opposed to bk = k, which is termed a universally good averaging sequence, because it does not have the latter shortcoming. A powerful general result is Weyl's criterion, which shows that equidistribution is equivalent to having a non-trivial estimate for the exponential sums formed with the sequence as exponen
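The theorem is easy to probe numerically; a small sketch (ours) checks both the empirical distribution of na mod 1 and the decay of the exponential sums in Weyl's criterion:

```python
import numpy as np

a = np.sqrt(2)                 # an irrational rotation number
n = np.arange(1, 200_001)
x = (n * a) % 1.0              # the sequence na mod 1

# Empirical distribution: counts in ten equal bins approach 1/10 each.
counts, _ = np.histogram(x, bins=10, range=(0.0, 1.0))
print(counts / len(x))         # all close to 0.1

# Weyl's criterion: (1/N) * sum of exp(2*pi*i*k*n*a) must tend to 0
# for every non-zero integer k iff the sequence is equidistributed.
for k in (1, 2, 3):
    s = np.exp(2j * np.pi * k * x).mean()
    print(k, abs(s))           # small for irrational a
```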
https://en.wikipedia.org/wiki/Host%20signal%20processing
Host signal processing (HSP) is a term used in computing to describe hardware such as a modem or printer which is emulated (to various degrees) in software. Intel refers to the technology as native signal processing (NSP). HSP replaces dedicated DSP or ASIC hardware by using the general-purpose CPU of the host computer. Modems using HSP are known as winmodems (a term trademarked by 3Com/USRobotics, but genericized) or softmodems. Printers using HSP are known as GDI printers (after the MS Windows GDI software interface), winprinters (named after winmodems) or softprinters. The Apple II Disk II floppy drive used the host CPU to process drive control signals, instead of a microcontroller. This instance of HSP predates the usage of the terms HSP and NSP. In the mid- to late-1990s, Intel pursued native signal processing technology to improve multimedia handling. According to testimony by Intel, Microsoft opposed development of NSP because the technology could reduce the necessity of the Microsoft Windows operating system. Intel claims to have terminated development of NSP because of threats from Microsoft. References Computing terminology Digital signal processing
https://en.wikipedia.org/wiki/Extracellular%20digestion
Extracellular digestion is a process in which saprobionts feed by secreting enzymes through the cell membrane onto the food. The enzymes catalyze the digestion of the food into molecules small enough to be taken up by diffusion, transport, osmotrophy or phagocytosis. Since digestion occurs outside the cell, it is said to be extracellular. It takes place either in the lumen of the digestive system, in a gastric cavity or other digestive organ, or completely outside the body. During extracellular digestion, food is broken down outside the cell either mechanically, with acid, or by special molecules called enzymes. Then the newly broken down nutrients can be absorbed by the cells nearby. Humans use extracellular digestion when they eat. Their teeth grind the food up, enzymes and acid in the stomach liquefy it, and additional enzymes in the small intestine break the food down into parts their cells can use. Extracellular digestion is a form of digestion found in all saprobiontic annelids, crustaceans, arthropods, lichens and chordates, including vertebrates. In fungi Fungi are heterotrophic organisms. Heterotrophic nutrition means that fungi utilize extracellular sources of organic energy, organic material or organic matter, for their maintenance, growth and reproduction. Energy is derived from the breakdown of the chemical bond between carbon and either carbon or other components of compounds such as a phosphate ion. The extracellular sources of energy may be simple sugars, polypeptides or more complex carbohydrate. Fungi can only absorb small molecules through their walls. For fungi to gain their energy needs, they find and absorb organic molecules appropriate to their needs, either immediately or following some form of enzymatic breakdown outside the thallus. The small molecules are then absorbed, used directly or reconstituted (transformed) into organic molecules within the cell. When a skeletonized leaf is seen in the litter, it is because recalcitrant materials remain and digestion is continui
https://en.wikipedia.org/wiki/American%20Morse%20code
American Morse Code, also known as Railroad Morse, is the latter-day name for the original version of the Morse Code developed in the mid-1840s by Samuel Morse and Alfred Vail for their electric telegraph. The "American" qualifier was added because, after most of the rest of the world adopted "International Morse Code," the companies that continued to use the original Morse Code were mainly located in the United States. American Morse is now nearly extinct (it is most frequently seen in American railroad museums and American Civil War reenactments), and "Morse Code" today virtually always means the International Morse which supplanted American Morse. History American Morse Code was first used on the Baltimore-Washington telegraph line, a telegraph line constructed between Baltimore, Maryland, and the old Supreme Court chamber in the Capitol building in Washington, D.C. The first public message "What hath God wrought" was sent on May 24, 1844, by Morse in Washington to Alfred Vail at the Baltimore and Ohio Railroad (B&O) "outer depot" (now the B&O Railroad Museum) in Baltimore. The message is a Bible verse from Numbers 23:23, chosen for Morse by Annie Ellsworth, daughter of the Governor of Connecticut. The original paper tape received by Vail in Baltimore is on display in the Library of Congress in Washington, D.C. In its original implementation, the Morse Code specification included the following: short mark or dot () longer mark or dash () intra-character gap (standard gap between the dots and dashes in a character) short gap (between letters) medium gap (between words) long gap (between sentences) long intra-character gap (longer internal gap used in C, O, R, Y, Z and &) "long dash" (, the letter L) even longer dash (, the numeral 0) Various other companies and countries soon developed their own variations of the original Morse Code. Of special importance was one standard, originally created in Germany by Friedrich Clemens Gerke in 1848, which was sim
https://en.wikipedia.org/wiki/Yearbook%20of%20International%20Organizations
The Yearbook of International Organizations is a reference work on non-profit international organizations, published by the Union of International Associations. It was first published in 1908 under the title Annuaire de la vie internationale, and has been known under its current title since 1950. It is seen as a quasi-official source associated with the United Nations. The Yearbook contains profiles of over 67,000 organizations active in about 300 countries and territories in every field of human endeavor. It profiles both international intergovernmental organizations and non-governmental organizations, from formal structures to informal networks, from professional bodies to recreational clubs. The Yearbook does not, however, include for-profit enterprises. Profiles include names and addresses, historical and structural information, aims, links with other organizations, as well as specifics on activities, events, publications and membership. In addition to organization profiles, the Yearbook also provides biographies of important members, a bibliography of the important publications of international organizations, and statistics. The Yearbook is published in six book volumes and online. See also International Congress Calendar Encyclopedia of World Problems and Human Potential References William M. Modrow, (2004) "Yearbook of International Organizations", Reference Reviews, Vol. 18 Iss: 2, pp. 13 – 14 Walter W. Powell, Richard Steinberg, The nonprofit sector: a research handbook, Yale University Press, 2006, External links Yearbooks
https://en.wikipedia.org/wiki/Content%20reference%20identifier
Overview A content reference identifier or CRID is a concept from the standardization work done by the TV-Anytime forum. It closely matches the concept of the Uniform Resource Locator (URL) used on the World Wide Web: the CRID concept permits referencing content unambiguously, regardless of its location, i.e., without knowing specific broadcast information (time, date and channel) or how to obtain it through a network, for instance, by means of a streaming service or by downloading a file from an Internet server. The receiver must be capable of resolving these unambiguous references, i.e. of translating them into specific data that will allow it to obtain the location of that content in order to acquire it. This makes it possible for recording processes to take place without knowing that information, and even without knowing beforehand the duration of the content to be recorded: a complete series by a simple click, a program that has not been scheduled yet, a set of programs grouped by a specific criterion… This framework allows for the separation between the reference to a given content (the CRID) and the necessary information to acquire it, which is called a “locator”. Each CRID may lead to one or more locators which will represent different copies of the same content. They may be identical copies broadcast in different channels or dates, or cost different prices. They may also be distinct copies with different technical parameters such as format or quality. It may also be the case that the resolution process of a CRID provides another CRID as a result (for example, its reference in a different network, where it has an alternative identifier assigned by a different operator) or a set of CRIDs (for instance, if the original CRID represents a TV series, in which case the resolution process would result in the list of CRIDs representing each episode). From the above it can be concluded that provided that a given content can belong to many
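A hypothetical sketch of the recursive resolution described above (all identifier strings and locator formats are invented for illustration; real resolution involves network-specific resolving authorities):

```python
# A CRID may resolve to locators (concrete copies) or to further CRIDs
# (e.g., a series CRID resolving to the CRIDs of its episodes).
resolution_table = {
    "crid://broadcaster.example/series/42": [
        "crid://broadcaster.example/ep/42-1",
        "crid://broadcaster.example/ep/42-2",
    ],
    "crid://broadcaster.example/ep/42-1": ["broadcast://ch7/2026-01-07T20:00/60min"],
    "crid://broadcaster.example/ep/42-2": ["http://cdn.example/ep42-2.ts"],
}

def resolve(crid):
    """Recursively resolve a CRID until only locators remain."""
    results = []
    for ref in resolution_table.get(crid, []):
        if ref.startswith("crid://"):
            results.extend(resolve(ref))  # a CRID may resolve to more CRIDs
        else:
            results.append(ref)           # a locator: where/when to acquire a copy
    return results

print(resolve("crid://broadcaster.example/series/42"))
```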
https://en.wikipedia.org/wiki/TV-Anytime
TV-Anytime is a set of specifications for the controlled delivery of multimedia content to a user's local storage. It seeks to exploit the evolution in convenient, high capacity storage of digital information to provide consumers with a highly personalized TV experience. Users will have access to content from a wide variety of sources, tailored to their needs and personal preferences. TV-Anytime specifications are specified by the TV-Anytime Forum. The TV-Anytime Forum The global TV-Anytime Forum is an association of organizations which seeks to develop specifications to enable audio-visual and other services based on mass-market high volume digital storage in consumer platforms. It was formed in Newport Beach, California, United States, on 27–29 September 1999 after DAVIC was closed down. It was wound up on 27 July 2005 following the publication of RFC 4078 (reference: http://www.tv-anytime.org/). Its first specifications were published by the European Telecommunications Standards Institute (ETSI) on August 1, 2003 as TS 102 822-1 'Broadcast and On-line Services: Search, select, and rightful use of content on personal storage systems ("TV-Anytime")'. RFC 4078 (The TV-Anytime Content Reference Identifier (CRID)) was published in May 2005. TV-Anytime has more than 60 member companies from Europe (including BBC, BSkyB, Canal+ Technologies, Disney, EBU, Nederlands Omroep Productie Bedrijf (NOB), France Telecom, Nokia, Philips, PTT Research, Thomson), Asia (including ETRI, KETI, NHK, NTT, Dentsu, Hakuhodo, Nippon TV, Sony, Panasonic, LG, Samsung, Sharp, Toshiba) and the USA (including Motorola, Microsoft, and Nielsen). The objectives The TV-Anytime Forum has set up the following four objectives for their standardization work: Develop specifications that will enable applications to exploit local persistent storage in consumer electronics platforms. Be network independent with regard to the means for content delivery to consumer electronics equipment, including
https://en.wikipedia.org/wiki/Chorus%20%28audio%20effect%29
Chorus (or chorusing, choruser or chorused effect) is an audio effect that occurs when individual sounds with approximately the same timing, and very similar pitches, converge. While similar sounds coming from multiple sources can occur naturally, as in the case of a choir or string orchestra, it can also be simulated using an electronic effects unit or signal processing device. When the effect is produced successfully, none of the constituent sounds are perceived as being out of tune. It is characteristic of sounds with a rich, shimmering quality that would be absent if the sound came from a single source. The shimmer occurs because of beating. The effect is more apparent when listening to sounds that sustain for longer periods of time. The chorus effect is especially easy to hear when listening to a choir or string ensemble. A choir has multiple people singing each part (alto, tenor, etc.). A string ensemble has multiple violinists and possibly multiples of other stringed instruments. Acoustically created Although most acoustic instruments cannot produce a chorus effect by themselves, some instruments (particularly, chordophones with multiple courses of strings) can produce it as part of their own design. The effect can make these acoustic instruments sound fuller and louder than by using a single tone generator (e.g., a single vibrating string or a reed). Some examples: Piano – Each of the hammers strikes a course of multiple strings tuned to nearly the same pitch (for all notes except the bass notes). Professional piano tuners carefully control the mistuning of each string to add movement without losing clarity. However, in some poorly maintained instruments (like honky-tonk pianos), the effect is more prominent. Santur (and similar coursed-hammered dulcimers) – As on the piano, the player can strike (using a pair of hand-held hammers) a course of multiple strings tuned to nearly the same pitch. As the instrument is frequently tuned by the mus
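The electronic version of the effect is usually built by mixing the signal with a copy whose delay is slowly modulated by a low-frequency oscillator, detuning the copy slightly so the two beat against each other; a minimal numpy sketch (parameter values are typical but arbitrary):

```python
import numpy as np

def chorus(x, sr, base_ms=20.0, depth_ms=3.0, rate_hz=0.8, mix=0.5):
    """Mix x with a copy whose delay wobbles sinusoidally (an LFO)."""
    n = np.arange(len(x))
    delay = (base_ms + depth_ms * np.sin(2 * np.pi * rate_hz * n / sr)) * sr / 1000.0
    idx = n - delay                       # fractional read position
    lo = np.clip(np.floor(idx).astype(int), 0, len(x) - 1)
    hi = np.clip(lo + 1, 0, len(x) - 1)
    frac = idx - np.floor(idx)
    delayed = (1 - frac) * x[lo] + frac * x[hi]   # linear interpolation
    return (1 - mix) * x + mix * delayed

sr = 44100
t = np.arange(sr * 2) / sr
tone = np.sin(2 * np.pi * 220 * t)   # a plain sustained tone
wet = chorus(tone, sr)               # now shimmers from the beating copies
```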
https://en.wikipedia.org/wiki/GUID%20Partition%20Table
The GUID Partition Table (GPT) is a standard for the layout of partition tables of a physical computer storage device, such as a hard disk drive or solid-state drive, using universally unique identifiers, which are also known as globally unique identifiers (GUIDs). Forming a part of the Unified Extensible Firmware Interface (UEFI) standard (Unified EFI Forum-proposed replacement for the PC BIOS), it is nevertheless also used for some BIOSs, because of the limitations of master boot record (MBR) partition tables, which use 32 bits for logical block addressing (LBA) of traditional 512-byte disk sectors. All modern personal computer operating systems support GPT. Some, including macOS and Microsoft Windows on the x86 architecture, support booting from GPT partitions only on systems with EFI firmware, but FreeBSD and most Linux distributions can boot from GPT partitions on systems with either the BIOS or the EFI firmware interface. History The Master Boot Record (MBR) partitioning scheme, widely used since the early 1980s, imposed limitations for use of modern hardware. The available size for block addresses and related information is limited to 32 bits. For hard disks with 512-byte sectors, the MBR partition table entries allow a maximum size of 2 TiB (2³² × 512 bytes) or 2.20 TB (2.20 × 10¹² bytes). In the late 1990s, Intel developed a new partition table format as part of what eventually became the Unified Extensible Firmware Interface (UEFI). The GUID Partition Table is specified in chapter 5 of the UEFI 2.8 specification. GPT uses 64 bits for logical block addresses, allowing a maximum disk size of 2⁶⁴ sectors. For disks with 512-byte sectors, the maximum size is 8 ZiB (2⁶⁴ × 512 bytes) or 9.44 ZB (9.44 × 10²¹ bytes). For disks with 4,096-byte sectors the maximum size is 64 ZiB (2⁶⁴ × 4,096 bytes) or 75.6 ZB (75.6 × 10²¹ bytes). In 2010, hard-disk manufacturers introduced drives with 4,096-byte sectors (Advanced Format). For compatibility with legacy hardware and so
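The capacity limits quoted above follow directly from the address width and the sector size; a quick check:

```python
# Addressable capacity = 2**address_bits * sector_size.
def max_capacity(address_bits, sector_bytes):
    return (2 ** address_bits) * sector_bytes

print(max_capacity(32, 512) / 2**40)    # 2.0 TiB  -- MBR, 512-byte sectors
print(max_capacity(64, 512) / 2**70)    # 8.0 ZiB  -- GPT, 512-byte sectors
print(max_capacity(64, 4096) / 2**70)   # 64.0 ZiB -- GPT, 4,096-byte sectors
print(max_capacity(32, 512) / 1e12)     # ~2.20 TB in decimal units
```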
https://en.wikipedia.org/wiki/Orientation%20%28geometry%29
In geometry, the orientation, attitude, bearing, direction, or angular position of an object – such as a line, plane or rigid body – is part of the description of how it is placed in the space it occupies. More specifically, it refers to the imaginary rotation that is needed to move the object from a reference placement to its current placement. A rotation may not be enough to reach the current placement, in which case it may be necessary to add an imaginary translation to change the object's position (or linear position). The position and orientation together fully describe how the object is placed in space. The above-mentioned imaginary rotation and translation may be thought to occur in any order, as the orientation of an object does not change when it translates, and its position does not change when it rotates. Euler's rotation theorem shows that in three dimensions any orientation can be reached with a single rotation around a fixed axis. This gives one common way of representing the orientation using an axis–angle representation. Other widely used methods include rotation quaternions, rotors, Euler angles, or rotation matrices. More specialist uses include Miller indices in crystallography, strike and dip in geology and grade on maps and signs. A unit vector may also be used to represent an object's normal vector orientation or the relative direction between two points. Typically, the orientation is given relative to a frame of reference, usually specified by a Cartesian coordinate system. Two objects sharing the same direction are said to be codirectional (as in parallel lines). Two directions are said to be opposite if they are the additive inverse of one another, as in an arbitrary unit vector and its multiplication by −1. Two directions are obtuse if they form an obtuse angle (greater than a right angle) or, equivalently, if their scalar product or scalar projection is negative. Mathematical representations Three dimensions In general the position and
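The axis–angle representation mentioned above converts to a rotation matrix via Rodrigues' formula, R = I + sin(θ)K + (1 − cos(θ))K², where K is the cross-product matrix of the unit axis; a minimal numpy sketch (the function name is ours):

```python
import numpy as np

def axis_angle_to_matrix(axis, theta):
    """Rotation matrix for a rotation by theta about the given axis."""
    k = np.asarray(axis, dtype=float)
    k /= np.linalg.norm(k)                     # ensure a unit axis
    K = np.array([[0.0, -k[2], k[1]],          # cross-product (skew) matrix
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

R = axis_angle_to_matrix([0, 0, 1], np.pi / 2)  # quarter turn about z
print(np.round(R @ np.array([1, 0, 0]), 6))     # x-axis maps to y-axis: [0, 1, 0]
```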
https://en.wikipedia.org/wiki/National%20Aerospace%20Laboratory%20of%20Japan
The National Aerospace Laboratory of Japan (NAL) was established in July 1955. Originally known as the National Aeronautical Laboratory, it assumed its present name with the addition of the Aerospace Division in 1963. Since its establishment, it has pursued research on aircraft, rockets, and other aeronautical transportation systems, as well as peripheral technology. NAL was involved in the development of the autonomous ALFLEX aircraft and the cancelled HOPE-X spaceplane. NAL has also endeavored to develop and enhance large-scale test facilities and make them available for use by related organizations, with the aim of improving test technology in these facilities. The NAL has used computers to process data since the 1960s. It began working to develop supercomputer and numerical simulation technologies in order to execute full-scale numeric simulations. The NAL, in collaboration with Fujitsu, developed the Numerical Wind Tunnel parallel supercomputer system, which went into operation in 1993. From 1993 to 1995, it was the most powerful supercomputer in the world, and it remained among the top three in the world until 1997. It remained in use for 9 years after it began operations. On October 1, 2003, NAL, which had focused on research and development of next-generation aviation, merged with the Institute of Space and Astronautical Science (ISAS), and the National Space Development Agency (NASDA) of Japan into one Independent Administrative Institution: the Japan Aerospace Exploration Agency (JAXA). References 1955 establishments in Japan JAXA Aeronautics organizations Aviation research institutes Aerospace research institutes