https://en.wikipedia.org/wiki/Error%20correction%20code
In computing, telecommunication, information theory, and coding theory, forward error correction (FEC) or channel coding is a technique used for controlling errors in data transmission over unreliable or noisy communication channels. The central idea is that the sender encodes the message in a redundant way, most often by using an error correction code or error correcting code (ECC). The redundancy allows the receiver not only to detect errors that may occur anywhere in the message, but often to correct a limited number of errors. Therefore a reverse channel to request re-transmission may not be needed. The cost is a fixed, higher forward channel bandwidth. The American mathematician Richard Hamming pioneered this field in the 1940s and invented the first error-correcting code in 1950: the Hamming (7,4) code. FEC can be applied in situations where re-transmissions are costly or impossible, such as one-way communication links or when transmitting to multiple receivers in multicast. Long-latency connections also benefit; in the case of a satellite orbiting Uranus, retransmission due to errors can create a delay of five hours. FEC is widely used in modems and in cellular networks, as well. FEC processing in a receiver may be applied to a digital bit stream or in the demodulation of a digitally modulated carrier. For the latter, FEC is an integral part of the initial analog-to-digital conversion in the receiver. The Viterbi decoder implements a soft-decision algorithm to demodulate digital data from an analog signal corrupted by noise. Many FEC decoders can also generate a bit-error rate (BER) signal which can be used as feedback to fine-tune the analog receiving electronics. FEC information is added to mass storage (magnetic, optical and solid state/flash based) devices to enable recovery of corrupted data, and is used as ECC computer memory on systems that require special provisions for reliability. The maximum proportion of errors or missing bits that can be c
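As an illustration of the Hamming (7,4) code named above, the following minimal Python sketch (not from the article; bit-ordering conventions vary between presentations) encodes four data bits with three parity bits and lets the receiver correct a single flipped bit without any reverse channel.

```python
# Minimal Hamming (7,4) sketch: 4 data bits + 3 parity bits, corrects one bit error.

def encode(d):                        # d = [d1, d2, d3, d4]
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4                 # parity over positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4                 # parity over positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4                 # parity over positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]   # codeword positions 1..7

def decode(c):                        # c = received 7-bit codeword (list of 0/1)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    err = s1 + 2 * s2 + 4 * s3        # syndrome = 1-based position of a single error
    if err:
        c[err - 1] ^= 1               # flip the erroneous bit back
    return [c[2], c[4], c[5], c[6]]   # recover d1..d4

cw = encode([1, 0, 1, 1])
cw[5] ^= 1                            # inject a single-bit channel error
assert decode(cw) == [1, 0, 1, 1]     # receiver corrects it, no retransmission needed
```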
https://en.wikipedia.org/wiki/Dry%20loop
A dry loop is an unconditioned leased telephone line pair from a telephone company. The pair does not provide dial tone or battery (continuous electric potential), as opposed to a wet pair, a line usually without dial tone but with battery. A dry pair was originally used with a security system but more recently may also be used with digital subscriber line (DSL) service or an Ethernet extender to connect two locations, as opposed to a costlier means such as Frame Relay. The pair in many cases goes through the local telephone exchange. Wet pair naming comes from the battery used to sustain the loop, which was made from wet cells. Many carriers market dry loops to independent DSL providers as a BANA for basic analog loop or in some locales PANA for plain analog loop, OPX (off-premises extension) line, paging circuit, or finally LADS (local area data service). Local availability In the United States, these circuits typically incur a monthly recurring charge of approximately $3.00 per ¼ mile, plus an additional handling fee of around $5–10. In Canada, a CRTC ruling of 21 July 2003 requires that telcos (such as Bell Canada) permit dry loops, and some companies do provide this service. Naked DSL is currently provided by third-party DSL (digital subscriber line) vendors in the provinces of Ontario and Quebec, but incurs an additional dry loop fee (often $5 or more monthly, depending on the distance from the exchange). There is not yet widespread adoption, as this extra fee often renders dry-loop DSL more costly than comparable cable modem service in most locations. A Bell Canada "dry loop" DSL connection does supply battery, but the underlying phone line is non-functional except to call 958-ANAC, 9-1-1 or the 310-BELL telco business office. See also Current loop Local-loop unbundling Naked DSL Permitted attached private lines
https://en.wikipedia.org/wiki/Choro-Q
is a line of Japanese 3–4 cm pullback car toys produced by Takara. Known in North America as Penny Racers, they were introduced in late 1978 and have seen multiple revisions and successors since. Choro-Qs are stylized after real-world automobiles, with real rubber wheels and a pullback motor that makes them move. Each car has a coin slot at the back, where inserting a penny will make it perform a wheelie when the car is released. Takara created the Choro-Q line after noticing the popularity of miniature car toys in Japan. They are based on an earlier Takara toy line called "Mame Dash", which only lasted a few years before being discontinued in 1980. A wide variety of car models was chosen to make the Choro-Q series appeal to everybody, ranging from sports cars to formula racers. The name comes from the Japanese term choro-choro, meaning "dash around", as well as an abbreviation of the Japanese borrowing from "cute" (kyūto) to connote their petite size. Features Most Choro-Q feature real rubber tires (usually with larger ones on the rear) and the characteristic coil-spring pullback motor. Also, each Choro-Q is a "cute" squeezed design caricature of the actual vehicle it represents. This type of caricature is also known as "deformed scale" as it gives the car a foreshortened or deformed appearance. What is also distinct about the cars is the slot at the rear, where a small coin can be inserted for the wheelie effect. The toy line is highly popular and has become collectible, even outside Japan, due to its low price and its merchandising line which includes JGTC and various licensed car editions and has also spawned a series of videogames bearing the same name. The toy line has also lent its moulding to the Micro-Change and Transformers line of toys. In addition to "Penny Racers", Choro-Q pullback cars were also marketed under the Tonka branding in the late 1980s as "Tonka Turbo Tricksters". "Penny Racers" in the US are still marketed by Funrise, but are less pop
https://en.wikipedia.org/wiki/Spurline
The spurline is a type of radio-frequency and microwave distributed element filter with band-stop (notch) characteristics, most commonly used with microstrip transmission lines. Spurlines usually exhibit moderate to narrow-band rejection, at about 10% around the central frequency. Spurline filters are very convenient for dense integrated circuits because of their inherently compact design and ease of integration: they occupy only the surface area of a quarter-wavelength transmission line. Structure description It consists of a normal microstrip line breaking into a pair of smaller coupled lines that rejoin after a quarter-wavelength distance. Only one of the input ports of the coupled lines is connected to the feed microstrip, as shown in the figure below. The orange area of the illustration is the microstrip transmission line conductor and the gray color the exposed dielectric. The length of the coupled section is L = λ/4, where λ is the wavelength corresponding to the central rejection frequency of the bandstop filter, measured, of course, in the microstrip line material. This is the most important parameter of the filter, as it sets the rejection band. The distance between the two coupled lines can be selected appropriately to fine-tune the filter. The smaller the distance, the narrower the stop-band in terms of rejection. Of course that is limited by the circuit-board printing resolution, and it is usually considered at about 10% of the input microstrip width. The gap between the input microstrip line and the one open-circuited line of the coupler has a negligible effect on the frequency response of the filter. Therefore, it is considered approximately equal to the distance between the two coupled lines. Printed antennae Spurlines can also be used in printed antennae such as the planar inverted-F antenna. The additional resonances can be designed to widen the antenna bandwidth or to create multiple bands, for instance, for a tri-band mobile phone. History A spurline filter was first proposed by S
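For a rough sense of the dimensions involved, the sketch below computes the quarter-wavelength spur length from the central rejection frequency. The 2.4 GHz design frequency and effective permittivity of 2.2 are assumed illustrative values, not figures from the article.

```python
import math

# Quarter-wave spur length: L = lambda_g / 4 = c / (4 * f0 * sqrt(eps_eff))
c = 3.0e8            # speed of light in vacuum, m/s
f0 = 2.4e9           # assumed central rejection frequency, Hz
eps_eff = 2.2        # assumed effective permittivity of the microstrip

guided_wavelength = c / (f0 * math.sqrt(eps_eff))
spur_length = guided_wavelength / 4
print(round(spur_length * 1000, 1), "mm")   # about 21.1 mm for these assumed values
```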
https://en.wikipedia.org/wiki/Simulated%20fluorescence%20process%20algorithm
The Simulated Fluorescence Process (SFP) is a computing algorithm used for scientific visualization of 3D data from, for example, fluorescence microscopes. By modeling a physical light/matter interaction process, an image can be computed which shows the data as it would have appeared in reality when viewed under these conditions. Principle The algorithm considers a virtual light source producing excitation light that illuminates the object. This casts shadows either on parts of the object itself or on other objects below it. The interaction between the excitation light and the object provokes the emission light, which also interacts with the object before it finally reaches the eye of the viewer. See also Computer graphics lighting Rendering (computer graphics)
https://en.wikipedia.org/wiki/Winged%20sun
The winged sun is a solar symbol associated with divinity, royalty, and power in the Ancient Near East (Egypt, Mesopotamia, Anatolia, and Persia). Ancient Egypt In ancient Egypt, the symbol is attested from the Old Kingdom (Sneferu, 26th century BC), often flanked on either side with a uraeus. Behdety In early Egyptian religion, the symbol Behdety represented Horus of Edfu, later identified with Ra-Horakhty. It is sometimes depicted on the neck of Apis, the bull of Ptah. As time passed (according to interpretation) all of the subordinated gods of Egypt were considered to be aspects of the sun god, including Khepri. The name "Behdety" means the inhabitant of Behdet. He was the sky god of the region called Behdet in the Nile basin. His image was first found in the inscription on a comb's body, as a winged solar panel. The period of the comb is about 3000 BC. Such winged solar panels were later found in the funeral picture of Pharaoh Sahure of the fifth dynasty. Behdety is seen as the protector of Pharaoh. On both sides of his picture are seen the Uraeus, which is a symbol for the cobra-headed goddess Wadjet. He resisted the intense heat of the Egyptian sun with his two wings. Mesopotamia From roughly 2000 BCE, the symbol also appears in Mesopotamia. It appears in reliefs with Assyrian rulers as a symbol for royalty, transcribed into Latin as (literally, "his own self, the Sun", i.e. "His Majesty"). Iran In Zoroastrian Persia, the symbol of the winged sun became part of the iconography of the Faravahar, the symbol of the divine power and royal glory in Persian culture. Israel and Judah From around the 8th century BC, the winged solar disk appears on Hebrew seals connected to the royal house of the Kingdom of Judah. Many of these are seals and jar handles from Hezekiah's reign, together with the inscription l'melekh ("belonging to the king"). Typically, Hezekiah's royal seals feature two downward-pointing wings and six rays emanating from the central sun disk
https://en.wikipedia.org/wiki/Sphenoid%20sinus
The sphenoid sinus is a paired paranasal sinus occurring within the body of the sphenoid bone. It represents one pair of the four paired paranasal sinuses. The pair of sphenoid sinuses are separated in the middle by a septum of sphenoid sinuses. Each sphenoid sinus communicates with the nasal cavity via the opening of sphenoidal sinus. The two sphenoid sinuses vary in size and shape, and are usually asymmetrical. Anatomy On average, a sphenoid sinus measures 2.2 cm in vertical height, 2 cm in transverse breadth, and 2.2 cm in antero-posterior depth. Each sphenoid sinus is contained within the body of the sphenoid bone, being situated just inferior to the sella turcica. The two sphenoid sinuses are separated medially by the septum of sphenoidal sinuses (which is usually asymmetrical). An opening of sphenoidal sinus forms a passage between each sphenoidal sinus and the nasal cavity. Posteriorly, the opening of sphenoidal sinus opens into the sphenoidal sinus by an aperture high on the anterior wall of the sinus; anteriorly, the opening of sphenoidal sinus opens into the roof of the nasal cavity via an aperture on the posterior wall of the sphenoethmoidal recess (occurring just superior to the choana). Innervation The mucous membrane receives sensory innervation from the posterior ethmoidal nerve (a branch of the ophthalmic nerve (CN V1)) and branches of the maxillary nerve (CN V2). Postganglionic parasympathetic fibers of the facial nerve that synapsed at the pterygopalatine ganglion control mucus secretion. Anatomical relations Proximal structures include: the optic canal and optic nerve, internal carotid artery, cavernous sinus, trigeminal nerve, pituitary gland, and the anterior ethmoidal cells. Anatomical variation The sphenoid sinuses vary in size and shape, and, owing to the lateral displacement of the intervening septum of sphenoid sinuses, are rarely symmetrical. When exceptionally large, the sphenoid sinuses may extend into the roots of the pterygoid processes or gr
https://en.wikipedia.org/wiki/Glow%20plate
Glow plates are sheets of glass or plastic that "glow" when light is supplied to one of their edges. The light source for a glow plate can be artificial, such as fluorescent light, or natural, with sunlight being directly exposed to the plate or fed through a fiber-optic system. A joint effort between Florida State University and Oak Ridge National Laboratory is focused on the design of a "spiral bio-reactor light sheet", which consists of a plexiglas sheet that has been micro-etched on one side and rolled into a spiral shape. Aside from aesthetic or utilitarian lighting purposes, much interest in using glow plates as a source of light comes from recent developments in algal cultivation. External links Algae used to mitigate carbon dioxide emissions The energy blog Fabrication of spiral bio-reactor light sheets Student abstracts: engineering at ORNL Lighting Fiber optics Algaculture
https://en.wikipedia.org/wiki/Particular%20values%20of%20the%20gamma%20function
The gamma function is an important special function in mathematics. Its particular values can be expressed in closed form for integer and half-integer arguments, but no simple expressions are known for the values at rational points in general. Other fractional arguments can be approximated through efficient infinite products, infinite series, and recurrence relations. Integers and half-integers For positive integer arguments, the gamma function coincides with the factorial. That is, Γ(n) = (n − 1)!, and hence Γ(1) = 1, Γ(2) = 1, Γ(3) = 2, Γ(4) = 6, and so on. For non-positive integers, the gamma function is not defined. For positive half-integers, the function values are given exactly by Γ(1/2 + n) = ((2n − 1)!!/2^n)·√π for non-negative integer values of n, where !! denotes the double factorial. In particular, Γ(1/2) = √π ≈ 1.7725, Γ(3/2) = √π/2 ≈ 0.8862, Γ(5/2) = 3√π/4 ≈ 1.3293, Γ(7/2) = 15√π/8 ≈ 3.3234, and by means of the reflection formula, Γ(−1/2) = −2√π ≈ −3.5449, Γ(−3/2) = 4√π/3 ≈ 2.3633, Γ(−5/2) = −8√π/15 ≈ −0.9453. General rational argument In analogy with the half-integer formula, Γ(n + 1/q) = ((qn − (q − 1))!^(q)/q^n)·Γ(1/q), where !^(q) denotes the qth multifactorial. Numerically, Γ(1/3) ≈ 2.6789 and Γ(1/4) ≈ 3.6256. As n tends to infinity, Γ(1/n) ∼ n − γ, where γ is the Euler–Mascheroni constant and ∼ denotes asymptotic equivalence. It is unknown whether these constants are transcendental in general, but Γ(1/3) and Γ(1/4) were shown to be transcendental by G. V. Chudnovsky. Γ(1/4)·π^(−1/4) has also long been known to be transcendental, and Yuri Nesterenko proved in 1996 that Γ(1/4), π, and e^π are algebraically independent. The number Γ(1/4) is related to the lemniscate constant ϖ by ϖ = Γ(1/4)²/(2√(2π)), and it has been conjectured by Gramain that Γ(1/4) enters a closed-form expression for the Masser–Gramain constant, although numerical work by Melquiond et al. indicates that this conjecture is false. Borwein and Zucker have found that Γ(n/24) can be expressed algebraically in terms of π and the values K(k₁), K(k₂), K(k₃) and K(k₆), where K is a complete elliptic integral of the first kind. This permits efficiently approximating the gamma function of rational arguments to high precision using quadratically convergent arithmetic–geometric mean iterations. For example, Γ(1/4) = √((2π)^(3/2)/AGM(√2, 1)). No similar relations are known for Γ(1/5) or other denominator
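A quick numerical check of the half-integer formula above, as a sketch using only the Python standard library:

```python
import math

# Verify Gamma(n + 1/2) = (2n - 1)!! / 2**n * sqrt(pi) for small n.
def double_factorial(k):
    return math.prod(range(k, 0, -2)) if k > 0 else 1   # (-1)!! and 0!! are 1

for n in range(5):
    closed_form = double_factorial(2 * n - 1) / 2**n * math.sqrt(math.pi)
    assert math.isclose(math.gamma(n + 0.5), closed_form)

print(math.gamma(0.5), math.sqrt(math.pi))   # both ≈ 1.7724538509055159
```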
https://en.wikipedia.org/wiki/Ethernet%20extender
An Ethernet extender (also network extender or LAN extender) is any device used to extend an Ethernet or network segment beyond its inherent distance limitation, which is approximately 100 metres (330 ft) for most common forms of twisted pair Ethernet. These devices employ a variety of transmission technologies and physical media (wireless, copper wire, fiber-optic cable, coaxial cable). The extender forwards traffic between LANs, transparently to higher network-layer protocols, over distances that far exceed the limitations of standard Ethernet. Options Extenders that use copper wire include 2- and 4-wire variants using unconditioned copper wiring to extend a LAN. Network extenders use various methods (line encodings), such as TC-PAM, 2B1Q or DMT, to transmit information. While transmitting over copper wire does not allow for the speeds that fiber-optic transmission does, it allows the use of existing voice-grade copper or CCTV coaxial cable wiring. Copper-based Ethernet extenders must be used on unconditioned wire (without load coils), such as unused twisted pairs and alarm circuits. Connecting a private LAN between buildings or more distant locations is a challenge. Wi-Fi requires a clear line-of-sight, special antennas, and is subject to weather. If the buildings are within 100 m, a normal Ethernet cable segment can be used, with due consideration of potential grounding problems between the locations. Up to 200 m, it may be possible to set up an ordinary Ethernet bridge or router in the middle, if power and weather protection can be arranged. A fiber-optic connection is ideal, allowing connections of over a kilometre and high speeds with no electrical shock or surge issues, but it is technically specialized and expensive for both the end equipment interfaces and the cable. Damage to the cable requires special skills to repair, or total replacement. Specialized equipment can inter-connect two LANs over a single twisted pair of wires, such as the Moxa IEX Series, Cisco LRE (Long Reach Etherne
https://en.wikipedia.org/wiki/Interval%20vector
In musical set theory, an interval vector is an array of natural numbers which summarize the intervals present in a set of pitch classes. (That is, a set of pitches where octaves are disregarded.) Other names include: ic vector (or interval-class vector), PIC vector (or pitch-class interval vector) and APIC vector (or absolute pitch-class interval vector, which Michiel Schuijer states is more proper.) While primarily an analytic tool, interval vectors can also be useful for composers, as they quickly show the sound qualities that are created by different collections of pitch class. That is, sets with high concentrations of conventionally dissonant intervals (i.e., seconds and sevenths) sound more dissonant, while sets with higher numbers of conventionally consonant intervals (i.e., thirds and sixths) sound more consonant. While the actual perception of consonance and dissonance involves many contextual factors, such as register, an interval vector can nevertheless be a helpful tool. Definition In twelve-tone equal temperament, an interval vector has six digits, with each digit representing the number of times an interval class appears in the set. Because interval classes are used, the interval vector for a given set remains the same, regardless of the set's permutation or vertical arrangement. The interval classes designated by each digit ascend from left to right. That is: minor seconds/major sevenths (1 or 11 semitones) major seconds/minor sevenths (2 or 10 semitones) minor thirds/major sixths (3 or 9 semitones) major thirds/minor sixths (4 or 8 semitones) perfect fourths/perfect fifths (5 or 7 semitones) tritones (6 semitones) (The tritone is inversionally equivalent to itself.) Interval class 0, representing unisons and octaves, is omitted. In his 1960 book, The Harmonic Materials of Modern Music, Howard Hanson introduced a monomial method of notation for this concept, which he termed intervallic content: pemdnc.sbdatf for what would now be written
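The definition above translates directly into a short computation. The following sketch (illustrative, not from the article) tallies interval classes over all pairs of pitch classes in twelve-tone equal temperament:

```python
from itertools import combinations

# Compute the six-digit interval vector of a pitch-class set (octaves disregarded).
def interval_vector(pcs):
    vec = [0] * 6
    for a, b in combinations(sorted(pcs), 2):
        ic = abs(a - b) % 12
        ic = min(ic, 12 - ic)          # reduce to interval class 1..6
        vec[ic - 1] += 1
    return vec

print(interval_vector({0, 4, 7}))               # major triad -> [0, 0, 1, 1, 1, 0]
print(interval_vector({0, 2, 4, 5, 7, 9, 11}))  # diatonic set -> [2, 5, 4, 3, 6, 1]
```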
https://en.wikipedia.org/wiki/Epoophoron
The epoophoron or epoöphoron (also called organ of Rosenmüller or the parovarium) is a remnant of the mesonephric tubules that can be found next to the ovary and fallopian tube. Anatomy It may contain 10–15 transverse small ducts or tubules that lead to the Gartner's duct (also longitudinal duct of epoophoron) that represents the caudal remnant of the mesonephric duct and passes through the broad ligament and the lateral wall of the cervix and vagina. The epoophoron is a homologue to the epididymis in the male. While the epoophoron is located in the lateral portion of the mesosalpinx and mesovarium, the paroophoron (residual remnant of that part of the mesonephric duct that forms the paradidymis in the male) lies more medially in the mesosalpinx. Histology It has a unique histological profile. Clinical significance Clinically the organ may give rise to a local paraovarian cyst or adenoma. See also List of homologues of the human reproductive system Vesicular appendages of epoophoron
https://en.wikipedia.org/wiki/Porphyra
Porphyra is a genus of coldwater seaweeds that grow in cold, shallow seawater. More specifically, it is a genus of red algae comprising approximately 70 species of laver (from which comes laverbread). It grows in the intertidal zone, typically between the upper intertidal zone and the splash zone in cold waters of temperate oceans. In East Asia, it is used to produce the sea vegetable products nori (in Japan) and gim (in Korea). There are considered to be 60 to 70 species of Porphyra worldwide and seven around Britain and Ireland, where it has been traditionally used to produce edible sea vegetables on the Irish Sea coast. The species Porphyra purpurea has one of the largest plastid genomes known, with 251 genes. Life cycle Porphyra displays a heteromorphic alternation of generations. The thallus we see is the haploid generation; it can reproduce asexually by forming spores which grow to replicate the original thallus. It can also reproduce sexually. Both male and female gametes are formed on the one thallus. The female gametes, while still on the thallus, are fertilized by the released male gametes, which are non-motile. The fertilized, now diploid, carposporangia after mitosis produce spores (carpospores) which settle, then bore into shells, germinate and form a filamentous stage. This stage was originally thought to be a different species of alga, and was referred to as Conchocelis rosea. That Conchocelis was the diploid stage of Porphyra was discovered in 1949 by the British phycologist Kathleen Mary Drew-Baker for the European species Porphyra umbilicalis. It was later shown for species from other regions as well. Food Most human cultures with access to it use it as a food or somehow in the diet, making it perhaps the most domesticated of the marine algae, known as laver, nori (Japanese), amanori (Japanese), zakai, gim (Korean), zǐcài (Chinese), karengo, sloke or slukos. The marine red alga Porphyra has been cultivated extensively
https://en.wikipedia.org/wiki/Current%20ratio
The current ratio is a liquidity ratio that measures whether a firm has enough resources to meet its short-term obligations. It compares a firm's current assets to its current liabilities, and is expressed as follows: Current ratio = Current assets / Current liabilities. The current ratio is an indication of a firm's liquidity. Acceptable current ratios vary from industry to industry. In many cases, a creditor would consider a high current ratio to be better than a low current ratio, because a high current ratio indicates that the company is more likely to pay the creditor back. Large current ratios are not always a good sign for investors. If the company's current ratio is too high it may indicate that the company is not efficiently using its current assets or its short-term financing facilities. If current liabilities exceed current assets the current ratio will be less than 1. A current ratio of less than 1 indicates that the company may have problems meeting its short-term obligations. Some types of businesses can operate with a current ratio of less than one, however. If inventory turns into cash much more rapidly than the accounts payable become due, then the firm's current ratio can comfortably remain less than one. Inventory is valued at the cost of acquiring it, and the firm intends to sell the inventory for more than this cost. The sale will therefore generate substantially more cash than the value of inventory on the balance sheet. Low current ratios can also be justified for businesses that can collect cash from customers long before they need to pay their suppliers. Limitations The ratio is only useful when two companies are compared within the same industry, because business operations differ substantially between industries. To determine liquidity, the current ratio is not as helpful as the quick ratio, because it includes assets that may not be easily liquidated, like prepaid expenses and inventory. See also Debt ratio Quick ratio Ratio
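A worked example with made-up balance-sheet figures (illustrative only, not from the article):

```python
# Current ratio = current assets / current liabilities (figures are invented).
current_assets = 150_000       # cash + receivables + inventory + prepaid expenses
current_liabilities = 100_000  # payables + short-term debt + accrued expenses

current_ratio = current_assets / current_liabilities
print(current_ratio)           # 1.5 -> $1.50 of current assets per $1 of current liabilities
```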
https://en.wikipedia.org/wiki/National%20Information%20Assurance%20Certification%20and%20Accreditation%20Process
The National Information Assurance Certification and Accreditation Process (NIACAP) formerly was the minimum-standard process for the certification and accreditation of computer and telecommunications systems that handle U.S. national-security information. NIACAP was derived from the Department of Defense Certification and Accreditation Process (DITSCAP), and it played a key role in the National Information Assurance Partnership. The Committee on National Security Systems (CNSS) Policy (CNSSP) No. 22 dated January 2012 cancelled CNSS Policy No. 6, “National Policy on Certification and Accreditation of National Security Systems,” dated October 2005, and National Security Telecommunications and Information Systems Security Instruction (NSTISSI) 1000, “National Information Assurance Certification and Accreditation Process (NIACAP),” dated April 2000. CNSSP No. 22 also states that "The CNSS intends to adopt National Institute of Standards and Technology (NIST) issuances where applicable. Additional CNSS issuances will occur only when the needs of NSS are not sufficiently addressed in a NIST document. Annex B identifies the guidance documents, which includes NIST Special Publications (SP), for establishing an organization-wide risk management program." It directs the organization to make use of NIST Special Publication 800-37, which implies that the Risk management framework (RMF) STEP 6 – AUTHORIZE INFORMATION SYSTEM replaces the Certification and Accreditation process for National Security Systems, just as it did for all other areas of the Federal government who fall under SP 800-37 Rev. 1.
https://en.wikipedia.org/wiki/Octet%20%28computing%29
The octet is a unit of digital information in computing and telecommunications that consists of eight bits. The term is often used when the term byte might be ambiguous, as the byte has historically been used for storage units of a variety of sizes. The term octad(e) for eight bits is no longer common. Definition The international standard IEC 60027-2, chapter 3.8.2, states that a byte is an octet of bits. However, the unit byte has historically been platform-dependent and has represented various storage sizes in the history of computing. Due to the influence of several major computer architectures and product lines, the byte became overwhelmingly associated with eight bits. This meaning of byte is codified in such standards as ISO/IEC 80000-13. While byte and octet are often used synonymously, those working with certain legacy systems are careful to avoid ambiguity. Octets can be represented using number systems of varying bases such as the hexadecimal, decimal, or octal number systems. The binary value with all eight bits set (or activated) is 1111 1111, equal to the hexadecimal value FF, the decimal value 255, and the octal value 377. One octet can be used to represent decimal values ranging from 0 to 255. The term octet (symbol: o) is often used when the use of byte might be ambiguous. It is frequently used in the Request for Comments (RFC) publications of the Internet Engineering Task Force to describe storage sizes of network protocol parameters. The earliest example is from 1974. In 2000, Bob Bemer claimed to have earlier proposed the usage of the term octet for "8-bit bytes" when he headed software operations for Cie. Bull in France from 1965 to 1966. In France, French Canada and Romania, octet is used in common language instead of byte when the eight-bit sense is required; for example, a megabyte (MB) is termed a megaoctet (Mo). A variable-length sequence of octets, as in Abstract Syntax Notation One (ASN.1), is referred to as an octet string. Octad Historically, in We
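A small sketch showing the value range of one octet and the equivalent representations mentioned above:

```python
# One octet holds values 0..255; the all-ones octet in several bases.
octet = 0b1111_1111
print(octet, hex(octet), oct(octet))   # 255 0xff 0o377
assert octet == 0xFF == 0o377 == 255
```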
https://en.wikipedia.org/wiki/Cytochalasin
Cytochalasins are fungal metabolites that have the ability to bind to actin filaments and block polymerization and the elongation of actin. As a result of the inhibition of actin polymerization, cytochalasins can change cellular morphology, inhibit cellular processes such as cell division, and even cause cells to undergo apoptosis. Cytochalasins have the ability to permeate cell membranes, prevent cellular translocation and cause cells to enucleate. Cytochalasins can also have an effect on other aspects of biological processes unrelated to actin polymerization. For example, cytochalasin A and cytochalasin B can also inhibit the transport of monosaccharides across the cell membrane, cytochalasin H has been found to regulate plant growth, cytochalasin D inhibits protein synthesis and cytochalasin E prevents angiogenesis. Binding to actin filaments Cytochalasins are known to bind to the barbed, fast growing plus ends of microfilaments, which then blocks both the assembly and disassembly of individual actin monomers from the bound end. Once bound, cytochalasins essentially cap the end of the new actin filament. One cytochalasin will bind to one actin filament. Studies done with cytochalasin D (CD) have found that CD-actin dimers contain ATP-bound actin upon formation. These CD-actin dimers are reduced to CD-actin monomers as a result of ATP hydrolysis. The resulting CD-actin monomer can bind ATP-actin monomer to reform the CD-actin dimer. CD is very effective; only low concentrations (0.2 μM) are needed to prevent membrane ruffling and disrupt treadmilling. The effects of many different cytochalasins on actin filaments were analyzed and higher concentrations (2-20 μM) of CD were found to be needed to remove stress fibers. In contrast, latrunculin inhibits actin filament polymerization by binding to actin monomers. Uses and applications of cytochalasins Actin microfilaments have been widely studied using cytochalasins. Due to their chemical nature, cytochalasins
https://en.wikipedia.org/wiki/Wanted%20sa%20Radyo
Wanted sa Radyo is a public affairs show that airs on Monday to Friday from 2:00 to 4:00 pm (PST) on 92.3 Radyo5 True FM (DWFM) with simulcast on television via One PH and online via livestreaming on the "Raffy Tulfo in Action" YouTube channel and Facebook page. It also replays on Tuesday to Friday from 2:30 to 4:30 am, Saturdays from 2:00 to 4:00 pm, and Sundays from 2:00 to 4:00 am and 9:00 pm to 11:00 pm on One PH. It is hosted by Senator Raffy Tulfo, alongside Sharee Roman. Whenever Tulfo is absent from the show, mostly due to his duties as a senator, his place is filled by his daughter Maricel Tulfo-Tungol, his son-in-law Atty. Gareth Tungol, his co-parent-in-law Atty. Danilo Tungol, Atty. Blessie Abad, Atty. Gabriel Ilaya, Atty. Ina Magpale, Atty. Gail Dela Cruz, Atty. Joren Tan, Atty. Freddie Villamor, Atty. Pau Cruz, Atty. Paul Castillo, Aanaan Singh, or Marsh Salcedo. History Wanted sa Radyo first aired on DZXL-AM from 1994 to 2010. It transferred to the newly launched Radyo5 92.3 News FM (now 92.3 Radyo5 True FM) in 2010, while retaining Raffy Tulfo and Niña Taduran as its hosts with the same airtime from 2:00 pm to 4:00 pm. Its television simulcast began on February 21, 2011, with the launch of AksyonTV. From January to May 2012, the show temporarily aired for one hour from 12:30 pm to 1:30 pm whenever Radyo5 and AksyonTV aired the News5 coverage of the impeachment trial of then-Chief Justice Renato Corona at 1:30 pm onwards. On December 23, 2013, the Radyo5 studios used by the show moved from the TV5 Studio Complex in Novaliches, Quezon City to the TV5 Media Center in Mandaluyong. On October 15, 2018, Niña Taduran left the show to run for party-list representative for ACT-CIS in 2019. She was replaced by Sharee Roman. The TV simulcast was carried over to One PH in January 2019, when AksyonTV was rebranded as 5 Plus. From October 18, 2021, the show moved from the TV5 Media Center to its new studio at the Raffy Tulfo Action Center in Quezon City and it wen
https://en.wikipedia.org/wiki/Author%20Domain%20Signing%20Practices
In computing, Author Domain Signing Practices (ADSP) is an optional extension to the DKIM E-mail authentication scheme, whereby a domain can publish the signing practices it adopts when relaying mail on behalf of associated authors. ADSP was adopted as a standards track RFC 5617 in August 2009, but declared "Historic" in November 2013 after "...almost no deployment and use in the 4 years since..." Concepts Author address The author address is the one specified in the From: header field defined in RFC 5322. In the unusual cases where more than one address is defined in that field, RFC 5322 provides for a Sender: field to be used instead. The domains in 5322-From addresses are not necessarily the same as in the more elaborated Purported Responsible Address covered by Sender ID specified in RFC 4407. The domain in a 5322-From address is also not necessarily the same as in the envelope sender address defined in RFC 5321, also known as SMTP MAIL FROM, envelope-From, 5321-From, or Return-Path, optionally protected by SPF specified in RFC 7208. Author Domain Signature An Author Domain Signature is a valid DKIM signature in which the domain name of the DKIM signing entity, i.e., the d tag in the DKIM-Signature header field, is the same as the domain name in the author address. This binding recognizes a higher value for author domain signatures than for other valid signatures that may happen to be found in a message. In fact, it proves that the entity that controls the DNS zone for the author (and hence also the destination of replies to the message's author) has relayed the author's message. Most likely, the author has submitted the message through the proper message submission agent. Such message qualification can be verified independently of any published domain signing practice. Author Domain Signing Practices The practices are published in a DNS record by the author domain. For an author address at example.com, it may be set as a TXT record at _adsp._domainkey.example.com. with the content "dkim=unknown". Three possible s
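As a sketch of how a verifier might retrieve a published practice, the following assumes the third-party dnspython package; the record location follows the _adsp._domainkey convention described above, and RFC 5617 defines the practice values unknown, all, and discardable.

```python
# Hedged sketch: look up a domain's ADSP record (assumes dnspython >= 2.x is installed).
import dns.resolver   # pip install dnspython

def fetch_adsp(author_domain):
    name = f"_adsp._domainkey.{author_domain}"
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None                      # no published practice
    for rdata in answers:
        txt = b"".join(rdata.strings).decode()
        if txt.startswith("dkim="):
            return txt                   # e.g. "dkim=unknown", "dkim=all", "dkim=discardable"
    return None

# print(fetch_adsp("example.com"))       # requires network access to a DNS resolver
```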
https://en.wikipedia.org/wiki/Triangles%20of%20the%20neck
Anatomists use the term triangles of the neck to describe the divisions created by the major muscles in the region. The side of the neck presents a somewhat quadrilateral outline, limited, above, by the lower border of the body of the mandible, and an imaginary line extending from the angle of the mandible to the mastoid process; below, by the upper border of the clavicle; in front, by the middle line of the neck; behind, by the anterior margin of the trapezius. This space is subdivided into two large triangles by sternocleidomastoid, which passes obliquely across the neck, from the sternum and clavicle below, to the mastoid process and occipital bone above. The triangular space in front of this muscle is called the anterior triangle of the neck; and that behind it, the posterior triangle of the neck. The anterior triangle is further divided into muscular, carotid, submandibular and submental and the posterior into occipital and subclavian triangles. Clinical relevance The use of the divisions described as the triangles of the neck permit the effective communication of the location of palpable masses located in the neck between healthcare professionals. The common swellings anterior of the midline are: Enlarged submental lymph nodes and sublingual dermoid in the submental region. Thyroglossal cyst and inflamed subhyoid bursa just below the hyoid bone. Goitre, carcinoma of larynx and enlarged lymph nodes in the suprasternal region. Additional images
https://en.wikipedia.org/wiki/Melzer%27s%20reagent
Melzer's reagent (also known as Melzer's iodine reagent, Melzer's solution or informally as Melzer's) is a chemical reagent used by mycologists to assist with the identification of fungi, and by phytopathologists for fungi that are plant pathogens. Composition Melzer's reagent is an aqueous solution of chloral hydrate, potassium iodide, and iodine. Depending on the formulation, it consists of approximately 2.50-3.75% potassium iodide and 0.75–1.25% iodine, with the remainder of the solution being 50% water and 50% chloral hydrate. Melzer's is toxic to humans if ingested due to the presence of iodine and chloral hydrate. Due to the legal status of chloral hydrate, Melzer's reagent is difficult to obtain in the United States. In response to difficulties obtaining chloral hydrate, scientists at Rutgers formulated Visikol (compatible with Lugol's iodine) as a replacement. In 2019, research showed that Visikol behaves differently to Melzer’s reagent in several key situations, noting it should not be recommended as a viable substitute. Melzer's reagent is part of a class of iodine/potassium iodide (IKI)-containing reagents used in biology; Lugol's iodine is another such formula. Reactions Melzer's is used by exposing fungal tissue or cells to the reagent, typically in a microscope slide preparation, and looking for any of three color reactions: Amyloid or Melzer's-positive reaction, in which the material reacts blue to black. Pseudoamyloid or dextrinoid reaction, in which the material reacts brown to reddish brown. Inamyloid or Melzer's-negative, in which the tissues do not change color, or react faintly yellow-brown. Among the amyloid reaction, two types can be distinguished: Euamyloid reaction, in which the material turns blue without potassium hydroxide (KOH)-pretreatment. Hemiamyloid reaction, in which the material turns red in Lugol's solution, but shows no reaction in Melzer's reagent; when KOH-pretreated it turns blue in both reagents (hemiamyloidity). M
https://en.wikipedia.org/wiki/Silver%20telluride
Silver telluride (Ag2Te) is a chemical compound, a telluride of silver, also known as disilver telluride or silver(I) telluride. It forms a monoclinic crystal. In a wider sense, silver telluride can be used to denote AgTe (silver(II) telluride, a metastable compound) or Ag5Te3. Silver(I) telluride occurs naturally as the mineral hessite, whereas silver(II) telluride is known as empressite. Silver telluride is a semiconductor which can be doped both n-type and p-type. Stoichiometric Ag2Te has n-type conductivity. On heating, silver is lost from the material. Non-stoichiometric silver telluride has shown extraordinary magnetoresistance. Synthesis Porous silver telluride (AgTe) is synthesized by an electrochemical deposition method. The experiment can be performed using a potentiostat and a three-electrode cell with 200 mL of 0.5 M sulfuric acid electrolyte solution containing Ag nanoparticles at room temperature. Silver paste used in the tungsten ditelluride (WTe2) attachment leaches into the electrolyte, which causes small amounts of Ag to dissolve in the electrolyte. The electrolyte is stirred by a magnetic bar to remove hydrogen bubbles. A silver-silver chloride electrode and a platinum wire are used as the reference and counter electrodes. All potentials are measured against the reference electrode, which is calibrated using the equation E(RHE) = E(Ag/AgCl) + 0.059 × pH + 0.197 V. In order to grow the porous AgTe, the WTe2 is treated using multiple cyclic voltammetry cycles between −1.2 and 0 V at a scan rate of 100 mV/s. Glutathione-coated Ag2Te nanoparticles can be synthesized by preparing a 9 mL solution containing 10 mM AgNO3, 5 mM Na2TeO3, and 30 mM glutathione, and placing that solution in an ice bath. N2H4 is added to the solution and the reaction is allowed to proceed for 5 min under constant stirring. The nanoparticles are then washed three times by centrifugation; after the three washes, the nanoparticles are suspended in PBS and washed again
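The reference-electrode calibration quoted above is a one-line conversion; here is a sketch of it (the example pH of about 0.3 is an assumed value for roughly 0.5 M sulfuric acid, not a figure from the article):

```python
# Convert a potential measured vs. Ag/AgCl to the RHE scale using the stated calibration:
# E_RHE = E_Ag/AgCl + 0.059 * pH + 0.197  (volts)
def to_rhe(e_ag_agcl_volts, ph):
    return e_ag_agcl_volts + 0.059 * ph + 0.197

print(round(to_rhe(-1.2, 0.3), 3))   # lower CV limit at an assumed pH of 0.3 -> about -0.985 V vs. RHE
```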
https://en.wikipedia.org/wiki/Russula%20emetica
Russula emetica, commonly known as the sickener, emetic russula, or vomiting russula, is a basidiomycete mushroom, and the type species of the genus Russula. It has a red, convex to flat cap up to in diameter, with a cuticle that can be peeled off almost to the centre. The gills are white to pale cream, and closely spaced. A smooth white stem measures up to long and thick. First described in 1774, the mushroom has a wide distribution in the Northern Hemisphere, where it grows on the ground in damp woodlands in a mycorrhizal association with conifers, especially pine. The mushroom's common names refer to the gastrointestinal distress they cause when consumed raw. The flesh is extremely peppery, but this offensive taste, along with its toxicity, can be removed by parboiling or pickling. Although it used to be widely eaten in Russia and eastern European countries, it is generally not recommended for consumption. There are many similar Russula species that have a red cap with white stem and gills, some of which can be reliably distinguished from R. emetica only by microscopic characteristics. Taxonomy Russula emetica was first officially described as Agaricus emeticus by Jacob Christian Schaeffer in 1774, in his series on fungi of Bavaria and the Palatinate, Fungorum qui in Bavaria et Palatinatu circa Ratisbonam nascuntur icones. Christian Hendrik Persoon placed it in its current genus Russula in 1796, where it remains. According to the nomenclatural database MycoBank, Agaricus russula is a synonym of R. emetica that was published by Giovanni Antonio Scopoli in 1772, two years earlier than Schaeffer's description. However, this name is unavailable as Persoon's name is sanctioned. Additional synonyms include Jean-Baptiste Lamarck's Amanita rubra (1783), and Augustin Pyramus de Candolle's subsequent new combination Agaricus ruber (1805). The specific epithet is derived from the Ancient Greek emetikos/εμετικος 'emetic' or 'vomit-inducing'. Similarly, its common name
https://en.wikipedia.org/wiki/Russula%20xerampelina
Russula xerampelina, also commonly known as the shrimp russula, crab brittlegill, or shrimp mushroom, is a basidiomycete mushroom of the brittlegill genus Russula. Two subspecies are recognised. The fruiting bodies appear in coniferous woodlands in autumn in northern Europe and North America. Their caps are coloured various shades of wine-red, purple to green. Mild tasting and edible, it is one of the most highly regarded brittlegills for the table. It is also notable for smelling of shellfish or crab when fresh. Taxonomy Russula xerampelina was originally described in 1770 as Agaricus xerampelina from a collection in Bavaria by the German mycologist Jacob Christian Schaeffer, who noted the colour as fusco-purpureus or "purple-brown". It was later given its present binomial name by Swedish mycologist Elias Magnus Fries. Its specific epithet is taken from the Ancient Greek meaning "colour of dried vine leaves", xeros meaning "dry", and ampělinos or "of the vine". Two subspecies have been recognised, var. xerampelina and var. tenuicarnosa, with thinner flesh in the cap and the stipe. The name R. erythropoda is now considered a synonym, and former subspecies R. (xerampelina subsp.) amoenipes (originally named by Henri Romagnesi) now a separate species. A former variety with a greenish cap, R. xerampelina var. elaeodes, is now classified as R. clavipes. As the first defined species, it gives its name to the section Xerampelinae, a group of related species within the genus Russula, occasionally all termed R. xerampelina in the past. Common names include shrimp mushroom, shrimp Russula, crab brittlegill, and shellfish-scented Russula. Description Russula xerampelina has a characteristic odour of boiled crustacean. The cap is wide, domed, flat, or with a slightly depressed centre, and sticky. The colour is variable, most commonly purple to wine-red, or greenish, and darker towards the centre of the cap. There are fine grooves up to a centimetre long running perpe
https://en.wikipedia.org/wiki/Cellular%20architecture
Cellular architecture is a type of computer architecture prominent in parallel computing. Cellular architectures are relatively new, with IBM's Cell microprocessor being the first one to reach the market. Cellular architecture takes multi-core architecture design to its logical conclusion, by giving the programmer the ability to run large numbers of concurrent threads within a single processor. Each 'cell' is a compute node containing thread units, memory, and communication. Speed-up is achieved by exploiting thread-level parallelism inherent in many applications. Cell, a cellular architecture containing 9 cores, is the processor used in the PlayStation 3. Another prominent cellular architecture is Cyclops64, a massively parallel architecture currently under development by IBM. Cellular architectures follow the low-level programming paradigm, which exposes the programmer to much of the underlying hardware. This allows the programmer to greatly optimize their code for the platform, but at the same time makes it more difficult to develop software. See also Cellular automaton External links Cellular architecture builds next generation supercomputers ORNL, IBM, and the Blue Gene Project Energy, IBM are partners in biological supercomputing project Cell-based Architecture Parallel computing Computer architecture Classes of computers
https://en.wikipedia.org/wiki/Gustafson%27s%20law
In computer architecture, Gustafson's law (or Gustafson–Barsis's law) gives the speedup in the execution time of a task that theoretically gains from parallel computing, using a hypothetical run of the task on a single-core machine as the baseline. To put it another way, it is the theoretical "slowdown" of an already parallelized task if running on a serial machine. It is named after computer scientist John L. Gustafson and his colleague Edwin H. Barsis, and was presented in the article Reevaluating Amdahl's Law in 1988. Definition Gustafson estimated the speedup of a program gained by using parallel computing as follows: S = s + p × N, where S is the theoretical speedup of the program with parallelism (scaled speedup), N is the number of processors, and s and p are the fractions of time spent executing the serial parts and the parallel parts of the program on the parallel system, where s + p = 1. Alternatively, S can be expressed using s alone: S = N + (1 − N) × s. Gustafson's law addresses the shortcomings of Amdahl's law, which is based on the assumption of a fixed problem size, that is, of an execution workload that does not change with respect to the improvement of the resources. Gustafson's law instead proposes that programmers tend to increase the size of problems to fully exploit the computing power that becomes available as the resources improve. Gustafson and his colleagues further observed from their workloads that the time for the serial part typically does not grow as the problem and the system scale; that is, the serial time is effectively fixed. This gives a linear model between the processor count N and the speedup S, with slope 1 − s, as shown in the figure above. Also, the speedup S scales linearly with N rather than leveling off as it does under Amdahl's law. With these observations, Gustafson "expect[ed] to extend [their] success [on parallel computing] to a broader range of applications and even larger values for N". The impact of Gustafson's law was to shift research goals to select or reformulate problems
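A small sketch contrasting the scaled speedup of Gustafson's law with Amdahl's fixed-size speedup, for an assumed 5% serial fraction (the numbers are illustrative only):

```python
# Gustafson's scaled speedup: S(N) = s + p*N with s + p = 1.
# Amdahl's fixed-size speedup:  S(N) = 1 / (s + p/N).
def gustafson(n, serial_fraction):
    return serial_fraction + (1 - serial_fraction) * n

def amdahl(n, serial_fraction):
    return 1 / (serial_fraction + (1 - serial_fraction) / n)

for n in (8, 64, 1024):
    print(n, gustafson(n, 0.05), round(amdahl(n, 0.05), 2))
# Gustafson grows almost linearly (7.65, 60.85, 972.85),
# while Amdahl saturates near 1/s = 20 (about 5.9, 15.4, 19.6).
```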
https://en.wikipedia.org/wiki/Russula%20vesca
Russula vesca, known by the common names of bare-toothed Russula or the flirt, is a basidiomycete mushroom of the genus Russula. Taxonomy Russula vesca was described and named by the eminent Swedish mycologist Elias Magnus Fries (1794–1878). The specific epithet is the feminine of the Latin adjective vescus, meaning "edible". Description The skin of the cap typically does not reach the margins (resulting in the common names). The cap is 5–10 cm wide, flat, convex, or with a slightly depressed centre, weakly sticky, and coloured brownish to dark brick-red. The taste is mild. The gills are closely spaced and white. The stipe narrows toward the base, 2–7 cm long, 1.5–2.5 cm wide, white. It turns deep salmon when rubbed with iron salts (ferrous sulfate). The spore print is white. Distribution and habitat Russula vesca appears in summer or autumn, and grows primarily in deciduous forests in Europe and North America. Edibility Russula vesca is considered edible and good, with a mild nutty flavour. In some countries, including Russia, Ukraine and Finland, it is considered entirely edible even in the raw state. See also List of Russula species
https://en.wikipedia.org/wiki/Suillus%20luteus
Suillus luteus is a bolete fungus, and the type species of the genus Suillus. A common fungus native all across Eurasia from Ireland to Korea, it has been introduced widely elsewhere, including North and South America, southern Africa, Australia and New Zealand. Commonly referred to as slippery jack or sticky bun in English-speaking countries, its names refer to the brown cap, which is characteristically slimy in wet conditions. The fungus, initially described as Boletus luteus ("yellow mushroom") by Carl Linnaeus in 1753, is now classified in a different fungus family as well as genus. Suillus luteus (literally "yellow pig", from its greasy look in rain) is edible, though not as highly regarded as other bolete mushrooms. It is commonly prepared and eaten in soups, stews or fried dishes. The slime coating, however, may cause indigestion if not removed before eating. It is often sold as a dried mushroom. The fungus grows in coniferous forests in its native range, and pine plantations in countries where it has become naturalized. It forms symbiotic ectomycorrhizal associations with living trees by enveloping the tree's underground roots with sheaths of fungal tissue. The fungus produces spore-bearing fruit bodies, often in large numbers, above ground in summer and autumn. The fruit body cap often has a distinctive conical shape before flattening with age, reaching up to in diameter. Like other boletes, it has tubes extending downward from the underside of the cap, rather than gills; spores escape at maturity through the tube openings, or pores. The pore surface is yellow, and covered by a membranous partial veil when young. The pale stipe, or stem, measures up to 10 cm (4 in) tall and thick and bears small dots near the top. Unlike most other boletes, it bears a distinctive membranous ring that is tinged brown to violet on the underside. Taxonomy and naming The slippery jack was one of the many species first described in 1753 by the "father of taxonomy" Carl Linn
https://en.wikipedia.org/wiki/Russula%20virescens
Russula virescens is a basidiomycete mushroom of the genus Russula, and is commonly known as the green-cracking russula, the quilted green russula, or the green brittlegill. It can be recognized by its distinctive pale green cap that measures up to in diameter, the surface of which is covered with darker green angular patches. It has crowded white gills, and a firm, white stipe that is up to tall and thick. Considered to be one of the best edible mushrooms of the genus Russula, it is especially popular in Spain and China. With a taste that is described variously as mild, nutty, fruity, or sweet, it is cooked by grilling, frying, sautéeing, or eaten raw. Mushrooms are rich in carbohydrates and proteins, with a low fat content. The species was described as new to science in 1774 by Jacob Christian Schaeffer. Its distribution encompasses Asia, North Africa, Europe, and Central America. Its presence in North America has not been clarified, due to confusion with the similar species Russula parvovirescens and R. crustosa. R. virescens fruits singly or scattered on the ground in both deciduous and mixed forests, forming mycorrhizal associations with broadleaf trees such as oak, European beech, and aspen. In Asia, it associates with several species of tropical lowland rainforest trees of the family Dipterocarpaceae. R. virescens has a ribonuclease enzyme with a biochemistry unique among edible mushrooms. It also has biologically active polysaccharides, and a laccase enzyme that can break down several dyes used in the laboratory and in the textile industry. Taxonomy Russula virescens was first described by German polymath Jacob Christian Schaeffer in 1774 as Agaricus virescens. The species was subsequently transferred to the genus Russula by Elias Fries in 1836. According to the nomenclatural authority MycoBank, Russula furcata var. aeruginosa (published by Christian Hendrik Persoon in 1796) and Agaricus caseosus (published by Karl Friedrich Wilhelm Wallroth in 1883)
https://en.wikipedia.org/wiki/Talaromycosis
Talaromycosis is a fungal infection that presents with painless skin lesions of the face and neck, as well as an associated fever, anaemia, and enlargement of the lymph glands and liver. It is caused by the fungus Talaromyces marneffei, which is found in soil and decomposing organic matter. The infection is thought to be contracted by inhaling the fungus from the environment, though the environmental source of the organism is not known. People already suffering from a weakened immune system due to conditions such as HIV/AIDS, cancer, organ transplant, long-term steroid use, old age, malnutrition or autoimmune disease are typically the ones to contract this infection. It generally does not affect healthy people and does not spread from person to person. Diagnosis is usually made by identification of the fungus from clinical specimens, either by microscopy or culture. Biopsies of skin lesions, lymph nodes, and bone marrow demonstrate the presence of organisms on histopathology. Medical imaging may reveal shadows in the lungs. The disease can look similar to tuberculosis and histoplasmosis. Talaromycosis may be prevented in people at high risk, using the antifungal medication itraconazole, and is treatable with amphotericin B followed by itraconazole or voriconazole. The disease is fatal in 75% of those not given treatment. Talaromycosis is endemic exclusively to southeast Asia (including southern China and eastern India), and particularly in young farmers. The exact number of people in the world affected is not known. Men are affected more than women. The first natural human case of talaromycosis was reported in 1973 in an American minister with Hodgkin's disease who lived in Southeast Asia. Signs and symptoms There may be no symptoms, or talaromycosis may present with small painless skin lesions. The head and neck are most often affected. Other features include: fever, general discomfort, weight loss, cough, difficulty breathing, diarrhoea, abdominal pain, swell
https://en.wikipedia.org/wiki/Potsdam%20Denkschrift
The Potsdam Denkschrift is a declaration by Hans-Peter Dürr, J. Daniel Dahm and Rudolf zur Lippe under the patronage of the Federation of German Scientists (VDW). It is the basis, the "mother", of the condensed abstract version, the Potsdam Manifesto 'We have to learn to think in a new way', which has so far been signed by more than 130 scientists and public figures from all over the world. Both were presented to the public in Berlin in autumn 2005. The collapse of a course of action legitimised by the materialistic-deterministic world view of classical physics is elucidated in several chapters. The "progressing uniformity of all ideas of value and affluence, habits of consumption and economic strategies on the pattern of a Western/American/European knowledge society" (Potsdam Manifesto & Potsdam Denkschrift) and its hazards are emphasised, and a rethinking towards a more holistic view and behaviour is called for. Creativity, differentiation and connectedness are presented as basic characteristics of life, and the future as essentially open. Process To initiate the Potsdam Denkschrift and Potsdam Manifesto, to gather scientific feedback on their content and composition, and to disseminate them, an intensive international scientific discourse was carried out from the beginning of the Einstein Year 2005. An interdisciplinary symposium was held in Potsdam in June 2005 to discuss a first draft of the "Denkschrift" (hence Potsdam Denkschrift & Potsdam Manifesto). The outcome of this critical and invigorating symposium was the full backing of all participants, as well as the decision to publish a short Manifesto in addition to the "Denkschrift". On October 14, 2005, both the short Potsdam Manifesto 2005 "We have to learn to think in a new way" and the more detailed Potsdam Denkschrift 2005 were presented to the public in Berlin. In the following months, various international signatories of the Potsdam Manifesto were contacted and successfully mobilised as supporters. Background
https://en.wikipedia.org/wiki/Open%20Graphics%20Project
The Open Graphics Project (OGP) was founded with the goal of designing open-source hardware – an open architecture and standard for graphics cards – primarily targeting free software / open-source operating systems. The project created a reprogrammable development and prototyping board and aimed to eventually produce a full-featured and competitive end-user graphics card. OGD1 The project's first product was a PCI graphics card dubbed OGD1, which used a field-programmable gate array (FPGA) chip. Although the card could not compete with graphics cards on the market at the time in terms of performance or functionality, it was intended to be useful as a tool for prototyping the project's first application-specific integrated circuit (ASIC) board, as well as for other professionals needing programmable graphics cards or FPGA-based prototyping boards. It was also hoped that this prototype would attract enough interest to generate some profit and attract investors for the next card, since it was expected to cost around US$2,000,000 to start production of a specialized ASIC design. PCI Express and/or Mini-PCI variations were planned to follow. The OGD1 began shipping in September 2010, some six years after the project began and three years after the appearance of the first prototypes. The project planned to publish full specifications and to release open-source device drivers and all RTL. Source code for the device drivers and BIOS was to be released under the MIT and BSD licenses, while the RTL (in Verilog) used for the FPGA and the RTL used for the ASIC were planned to be released under the GNU General Public License (GPL). The card has 256 MiB of DDR RAM, is passively cooled, and follows the DDC, EDID, DPMS and VBE VESA standards. TV-out was also planned. Versioning schema The versioning schema for OGD1 is as follows: {Root Number} – {Video Memory}{Video Output Interfaces}{Special Options e.g.: A1 OGA firmware installed} OGD1 components Main components of OGD1 graphics car
https://en.wikipedia.org/wiki/Handel-C
Handel-C is a high-level programming language which targets low-level hardware, most commonly used in the programming of FPGAs. It is a rich subset of C, with non-standard extensions to control hardware instantiation with an emphasis on parallelism. Handel-C is to hardware design what the first high-level programming languages were to programming CPUs. Unlike many other design languages that target a specific architecture, Handel-C can be compiled to a number of design languages and then synthesised to the corresponding hardware. This frees developers to concentrate on the programming task at hand rather than the idiosyncrasies of a specific design language and architecture. Additional features The subset of C includes all common C language features necessary to describe complex algorithms. As in many embedded C compilers, floating-point data types were omitted; floating-point arithmetic is instead supported through external libraries that are very efficient. Parallel programs In order to facilitate a way to describe parallel behavior, some of the CSP keywords are used, along with the general file structure of Occam. For example:

par {
   ++c;
   a = d + e;
   b = d + e;
}

Channels Channels provide a mechanism for message passing between parallel threads. Channels can be defined as asynchronous or synchronous (with or without an inferred storage element respectively). A thread writing to a synchronous channel will be immediately blocked until the corresponding listening thread is ready to receive the message. Likewise the receiving thread will block on a read statement until the sending thread executes the next send. Thus they may be used as a means of synchronizing threads.

par {
   chan int a;   // declare a synchronous channel
   int x;

   // begin sending thread
   seq (i = 0; i < 10; i++) {
      a ! i;   // send the values 0 to 9 sequentially into the channel
   }

   // begin receiving thread
   seq (j = 0; j < 10; j++) {
      a ? x;   // pe
https://en.wikipedia.org/wiki/ODMRP
In wireless networking, On-Demand Multicast Routing Protocol is a protocol for routing multicast and unicast traffic throughout Ad hoc wireless mesh networks. ODMRP creates routes on demand, rather than proactively creating routes as OLSR does. This suffers from a route acquisition delay, although it helps reduce network traffic in general. To help reduce the problem of this delay, some implementations send the first data packet along with the route discovery packet. Because some links may be asymmetric, the path from one node to another is not necessarily the same as the reverse path of these nodes. See also AODV List of ad hoc routing protocols Mesh Networks External links IETF Draft The latest draft specification published by the IETF Original publication The first paper presenting ODMRP. Wireless networking Routing Routing algorithms Ad hoc routing protocols
https://en.wikipedia.org/wiki/Balto%20II%3A%20Wolf%20Quest
Balto II: Wolf Quest is a 2002 American animated adventure film produced and directed by Phil Weinstein. It is the sequel to Universal Pictures/Amblin Entertainment's 1995 Northern animated film Balto. Plot One year after his heroic journey, Balto has mated with Jenna, and they now have a new family of six puppies in Alaska. Five of their puppies resemble their husky mother, while one pup named Aleu takes her looks from her wolfdog father. When they all reach eight weeks old, all of the other pups are adopted to new homes, but no one wants Aleu due to her wild animal looks, forcing her to live with her father. A year later when she is grown, Aleu is almost killed by a hunter who mistakes her for a wild wolf. Balto tells Aleu the truth about her wolf heritage, causing her to run away, hoping to find her place in the world. Balto then goes out into the Alaskan wilderness to find her. At the same time, Balto has been struggling with strange dreams of a raven and a pack of wolves, and he cannot understand their meaning. Balto resolves to find the meaning of these dreams as he searches for Aleu. His friends Boris, Muk, and Luk attempt to join him, but after they are halted by some unknown force, they realize that this journey is meant only for the father and daughter themselves. Taking refuge in a cave, Aleu meets the field mouse Muru, who explains that Aleu should not be ashamed of her lineage, which tells her what she is but not who she is. Muru reveals himself to be Aleu's spirit guide and tells her to go on a journey of self-discovery. Balto and Aleu reunite when he saves her from the grizzly bear and reconcile, and find their way to the ocean, where they are attacked by a group of starving Northwestern wolves led by Niju, an arrogant and vicious wolf. The confrontation is defused by the elderly Nava, the true leader of the pack, who welcomes Balto and Aleu. Nava announces to his pack that the wolf spirit Aniu has contacted him in "dream visions". Aniu has told hi
https://en.wikipedia.org/wiki/Joel%20Spencer
Joel Spencer (born April 20, 1946) is an American mathematician. He is a combinatorialist who has worked on probabilistic methods in combinatorics and on Ramsey theory. He received his doctorate from Harvard University in 1970, under the supervision of Andrew Gleason. He is currently () a professor at the Courant Institute of Mathematical Sciences of New York University. Spencer's work was heavily influenced by Paul Erdős, with whom he coauthored many papers (giving him an Erdős number of 1). In 1963, while studying at the Massachusetts Institute of Technology, Spencer became a Putnam Fellow. In 1984 Spencer received a Lester R. Ford Award. He was an Erdős Lecturer at Hebrew University of Jerusalem in 2001. In 2012 he became a fellow of the American Mathematical Society. He was elected as a fellow of the Society for Industrial and Applied Mathematics in 2017, "for contributions to discrete mathematics and theory of computing, particularly random graphs and networks, Ramsey theory, logic, and randomized algorithms". In 2021 he received the Leroy P. Steele Prize for Mathematical Exposition with his coauthor Noga Alon for their book The Probabilistic Method. Selected publications Probabilistic methods in combinatorics, with Paul Erdős, New York: Academic Press, 1974. Ramsey theory, with Bruce L. Rothschild and Ronald L. Graham, New York: Wiley, 1980; 2nd ed., 1990. Ten lectures on the probabilistic method, Philadelphia: Society for Industrial and Applied Mathematics, 1987; 2nd ed., 1994. The strange logic of random graphs, Berlin: Springer-Verlag, 2001. The probabilistic method, with Noga Alon, New York: Wiley, 1992; 2nd ed., 2000; 3rd ed., 2008. Deterministic random walks on regular trees, American Mathematical Society, New York, 2008. Asymptopia, with Laura Florescu, American Mathematical Society, 2014. See also Packing in a hypergraph
https://en.wikipedia.org/wiki/Playout
In broadcasting, channel playout is the generation of the source signal of a radio or television channel produced by a broadcaster, coupled with the transmission of this signal for primary distribution or direct-to-audience distribution via any network. Such radio or television distribution networks include terrestrial broadcasting (analogue or digital radio), cable networks, satellites (either for primary distribution intended for cable television headends or for direct reception, DTH / DBS), IPTV, OTT Video, point-to-point transport over managed networks or the public Internet, etc. The television channel playout happens in master control room (MCR) in a playout area, which can be either situated in the central apparatus room or in purposely built playout centres, which can be owned by a broadcaster or run by an independent specialist company that has been contracted to handle the playout for a number of channels from different broadcasters. Some of the larger playout centres in Europe, Southeast Asia and the United States handle well in excess of 50 radio and television "feeds". Feeds will often consist of several different versions of a core service, often different language versions or with separately scheduled content, such as local opt outs for news or promotions. Playout systems Centralcasting is multi-channel playout that generally uses broadcast automation systems with broadcast programming applications. These systems generally work in a similar way, controlling video servers, video tape recorder (VTR) devices, Flexicarts, audio mixing consoles, vision mixers and video routers, and other devices using a serial communications 9-Pin Protocol (RS-232 or RS-422). This provides deterministic control, enabling frame accurate playback, Instant replay or video switching. Many systems consist of a front end operator interface on a separate platform to the controllers – e.g. a Windows GUI will present a friendly easy to use method of editing a playlist, but actua
https://en.wikipedia.org/wiki/Summer%20in%20the%20City%20%28song%29
"Summer in the City" is a song by the American folk rock band the Lovin' Spoonful. Written by John Sebastian, Mark Sebastian and Steve Boone, the song was released as a non-album single in July 1966 and was included on the album Hums of the Lovin' Spoonful later that year. The single was the Lovin' Spoonful's fifth to break the top ten in the United States and their only to reach . A departure from the band's lighter sound, the recording features a harder rock style. The lyrics differ from most songs about the summer by lamenting the heat, contrasting the unpleasant warmth and noise of the daytime with the relief offered by the cool night, which allows for the nightlife to begin. John Sebastian reworked the lyrics and melody of "Summer in the City" from a song written by his teenage brother Mark. Boone contributed the song's bridge while in the studio. The Lovin' Spoonful recorded "Summer in the City" in two sessions at Columbia Records' 7th Avenue Studio in New York in March 1966. Erik Jacobsen produced the sessions with assistance from engineer Roy Halee, while Artie Schroeck performed as a session musician on a Wurlitzer electric piano. The recording is an early instance in pop music of added sound effects, made up of car horns and a pneumatic drill to mimic city noises. "Summer in the City" has since received praise from several music critics and musicologists for its changing major-minor keys and its inventive use of sound effects. The song has been covered by several artists, including Quincy Jones, whose 1973 version won the Grammy Award for Best Instrumental Arrangement and has since been sampled by numerous hip hop artists. Background and composition The Lovin' Spoonful completed their first two albums, Do You Believe in Magic and Daydream, over a period of six months. In need of new material for their next LP, John Sebastian, the band's principal songwriter, recalled a song composed and informally taped by his teenage brother, Mark, titled "It's a Dif
https://en.wikipedia.org/wiki/Andrew%20M.%20Gleason
Andrew Mattei Gleason (November 4, 1921 – October 17, 2008) was an American mathematician who made fundamental contributions to widely varied areas of mathematics, including the solution of Hilbert's fifth problem, and was a leader in reform and innovation in teaching at all levels. Gleason's theorem in quantum logic and the Greenwood–Gleason graph, an important example in Ramsey theory, are named for him. As a young World War II naval officer, Gleason broke German and Japanese military codes. After the war he spent his entire academic career at Harvard University, from which he retired in 1992. His numerous academic and scholarly leadership posts included chairmanship of the Harvard Mathematics Department and the Harvard Society of Fellows, and presidency of the American Mathematical Society. He continued to advise the United States government on cryptographic security, and the Commonwealth of Massachusetts on education for children, almost until the end of his life. Gleason won the Newcomb Cleveland Prize in 1952 and the Gung–Hu Distinguished Service Award of the American Mathematical Society in 1996. He was a member of the National Academy of Sciences and of the American Philosophical Society, and held the Hollis Chair of Mathematics and Natural Philosophy at Harvard. He was fond of saying that proofs "really aren't there to convince you that something is true – they're there to show you why it is true." The Notices of the American Mathematical Society called him "one of the quiet giants of twentieth-century mathematics, the consummate professor dedicated to scholarship, teaching, and service in equal measure." Biography Gleason was born in Fresno, California, the youngest of three children; his father Henry Gleason was a botanist and a member of the Mayflower Society, and his mother was the daughter of Swiss-American winemaker Andrew Mattei. His older brother Henry Jr. became a linguist. He grew up in Bronxville, New York, where his
https://en.wikipedia.org/wiki/List%20of%20spreadsheet%20software
The following is a list of spreadsheets. Free and open-source software Cloud and on-line spreadsheets Collabora Online Calc — Enterprise-ready LibreOffice. EtherCalc (successor to SocialCalc, which is based on wikiCalc) LibreOffice Online Calc ONLYOFFICE - Community Server Edition Sheetster – "Community Edition" is available under the Affero GPL Simple Spreadsheet Tiki Wiki CMS Groupware includes a spreadsheet since 2004 and migrated to jQuery.sheet in 2010. Spreadsheets that are parts of suites Apache OpenOffice Calc — for MS Windows, Linux and the Apple Macintosh. Started as StarOffice, later as OpenOffice.org. It has not received a major update since 2014 and security fixes have not been prompt. Collabora Online Calc — Enterprise-ready LibreOffice, included with Online, Mobile and Desktop apps Gnumeric — for Linux. Started as the GNOME desktop spreadsheet. Reasonably lightweight but has very advanced features. KSpread — following the fork of the Calligra Suite from KOffice in mid-2010, superseded by KCells in KOffice and Sheets in the Calligra Suite. LibreOffice Calc — developed for MS Windows, Linux, BSD and Apple Macintosh (Mac) operating systems by The Document Foundation. The Document Foundation was formed in mid-2010 by several large organisations such as Google, Red Hat, Canonical (Ubuntu) and Novell along with the OpenOffice.org community (developed by Sun) and various OpenOffice.org forks, notably Go-oo. Go-oo had been the "OpenOffice" used in Ubuntu and elsewhere. Started as StarOffice in the late 1990s, it became OpenOffice under Sun and then LibreOffice in mid-2010. The Document Foundation works with external organisations such as NeoOffice and Apache Foundation to help drive all three products forward. NeoOffice Calc — for Mac. Started as an OpenOffice.org port to Mac, but by using the Mac-specific Aqua user interface, instead of the more widely used X11 windowing server, it aimed to be far more stable than the normal ports of
https://en.wikipedia.org/wiki/West%20Virginia%20Broadband
West Virginia Broadband is a wireless community network located in Braxton County, West Virginia, operated by local volunteers and coordinated by the Gilmer-Braxton Research Zone. The effort gained attention through a National Public Radio story and through MuniWireless and SmartMobs bloggers, who detailed how modified off-the-shelf Wi-Fi adapters were used to connect seven communities with wireless internet for a total cost of little more than 4,000 US dollars. The research group now coordinates wireless technology training throughout the United States.
https://en.wikipedia.org/wiki/Electronic%20system-level%20design%20and%20verification
Electronic system level (ESL) design and verification is an electronic design methodology, focused on higher abstraction level concerns. The term Electronic System Level or ESL Design was first defined by Gartner Dataquest, an EDA-industry-analysis firm, on February 1, 2001. It is defined in ESL Design and Verification as: "the utilization of appropriate abstractions in order to increase comprehension about a system, and to enhance the probability of a successful implementation of functionality in a cost-effective manner." The basic premise is to model the behavior of the entire system using a high-level language such as C, C++, or using graphical "model-based" design tools. Newer languages are emerging that enable the creation of a model at a higher level of abstraction including general purpose system design languages like SysML as well as those that are specific to embedded system design like SMDL and SSDL. Rapid and correct-by-construction implementation of the system can be automated using EDA tools such as high-level synthesis and embedded software tools, although much of it is performed manually today. ESL can also be accomplished through the use of SystemC as an abstract modeling language. ESL is an established approach at many of the world’s leading System-on-a-chip (SoC) design companies, and is being used increasingly in system design. From its genesis as an algorithm modeling methodology with 'no links to implementation', ESL is evolving into a set of complementary methodologies that enable embedded system design, verification, and debugging through to the hardware and software implementation of custom SoC, system-on-FPGA, system-on board, and entire multi-board systems. Design and verification are two distinct disciplines within this methodology. Some practices are to keep the two elements separate, while others advocate for closer integration between design and verification. Design Whether ESL or other systems, design refers to "the concurrent d
https://en.wikipedia.org/wiki/Antenna%20rotator
An antenna rotator (or antenna rotor) is a device used to change the orientation, within the horizontal plane, of a directional antenna. Most antenna rotators have two parts, the rotator unit and the controller. The controller is normally placed near the equipment to which the antenna is connected, while the rotator is mounted on the antenna mast directly below the antenna. Rotators are commonly used in amateur radio and military communications installations. They are also used with TV and FM antennas, where stations are available from multiple directions, as the cost of a rotator is often significantly less than that of installing a second antenna to receive stations from multiple directions. Rotators are manufactured for different sizes of antennas and installations. For example, a consumer TV antenna rotator has enough torque to turn a TV/FM or small ham antenna. These units typically cost around US$70. Heavy-duty ham rotators are designed to turn extremely large, heavy, high frequency (shortwave) beam antennas, and cost hundreds or possibly thousands of dollars. The accompanying image shows, at its center, an AzEl rotator installation, so named because it controls both the azimuth and the elevation components of the direction of an antenna system or array. Such antenna configurations are used in, for example, amateur radio satellite or moon-bounce communications. An open hardware AzEl rotator system is provided by the SatNOGS Groundstation project. The Alliance Manufacturing Co. of Alliance, Ohio, and the Astatic Corporation of Conneaut, Ohio, manufactured popular radio and TV booster and rotary antenna systems. These products were heavily advertised for radio use in newspapers starting in the early 1940s, and for use with commercial television sets from 1949 into the 1960s. Cinécraft Productions, a pioneer in early TV advertising, produced six commercials for the Astatic Booster TV in 1949 and 112 for the Alliance Tenna-Roto
https://en.wikipedia.org/wiki/CD%20publishing
CD publishing is the use of CD duplication systems to create a large number of unique discs. For instance, storing a unique serial number on each copy of a software application disc would be considered CD publishing. The term CD publishing is believed to have been coined by the Rimage Corporation as part of a marketing program which referred to CD-R discs as "digital paper." Automated disc production and printing systems, such as those made by Rimage, can be shared on a computer network much like an office printer to facilitate the creation of unique discs. This is the root of both the digital paper and CD publishing terms. The extension into CD publishing is a distinct advantage of CD duplication systems over traditional CD replication - where large quantities of identical discs must be made. External links Understanding CD-R & CD-RW: Duplication, Replication, and Publishing @ the Optical Storage Technology Association Computer storage media Optical disc authoring
https://en.wikipedia.org/wiki/Zimm%E2%80%93Bragg%20model
In statistical mechanics, the Zimm–Bragg model is a helix-coil transition model that describes helix-coil transitions of macromolecules, usually polymer chains. Most models provide a reasonable approximation of the fractional helicity of a given polypeptide; the Zimm–Bragg model differs by incorporating the ease of propagation (self-replication) with respect to nucleation. It is named for co-discoverers Bruno H. Zimm and J. K. Bragg. Helix-coil transition models Helix-coil transition models assume that polypeptides are linear chains composed of interconnected segments. Further, models group these sections into two broad categories: coils, random conglomerations of disparate unbound pieces, are represented by the letter 'C', and helices, ordered states where the chain has assumed a structure stabilized by hydrogen bonding, are represented by the letter 'H'. Thus, it is possible to loosely represent a macromolecule as a string such as CCCCHCCHCHHHHHCHCCC and so forth. The number of coils and helices factors into the calculation of fractional helicity, θ, defined as θ = ⟨i⟩/N, where ⟨i⟩ is the average helicity (the average number of helical units) and N is the number of helix or coil units. Zimm–Bragg The Zimm–Bragg model takes the cooperativity of each segment into consideration when calculating fractional helicity. The probability of any given monomer being a helix or coil is affected by what the previous monomer is; that is, by whether the new site is a nucleation or a propagation site. By convention, a coil unit ('C') is always of statistical weight 1. Addition of a helix state ('H') to a previously coiled state (nucleation) is assigned a statistical weight σs, where σ is the nucleation parameter and s is the equilibrium constant for helix propagation. Adding a helix state to a site that is already a helix (propagation) has a statistical weight of s. For most proteins σ ≪ 1, which makes the propagation of a helix more favorable than nucleation of a helix from the coil state. From these parameters, it is possible to compute the fractional helicity θ. The av
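The fractional helicity can be computed directly from these statistical weights by summing over all H/C configurations of a short chain. The Python sketch below (not from the source) does this by brute force, treating the start of the chain as if it were preceded by a coil unit so that the first helical unit always carries the nucleation weight σs; the parameter values are purely illustrative.

```python
from itertools import product

def zimm_bragg_theta(N, sigma, s):
    """Brute-force fractional helicity for an N-unit chain (small N only: 2**N states)."""
    Z = 0.0            # partition function
    nH_weighted = 0.0  # sum over states of (number of H units) * statistical weight
    for config in product("CH", repeat=N):
        w = 1.0
        prev = "C"     # treat the chain start as coil, so the first H nucleates
        for unit in config:
            if unit == "H":
                w *= sigma * s if prev == "C" else s   # nucleation vs propagation
            prev = unit
        Z += w
        nH_weighted += w * config.count("H")
    return nH_weighted / (N * Z)

# Illustrative values: small nucleation parameter, propagation slightly favorable
print(zimm_bragg_theta(N=12, sigma=1e-3, s=1.5))
```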
https://en.wikipedia.org/wiki/Gauche%20effect
In the study of conformational isomerism, the Gauche effect is an atypical situation where a gauche conformation (groups separated by a torsion angle of approximately 60°) is more stable than the anti conformation (180°). There are both steric and electronic effects that affect the relative stability of conformers. Ordinarily, steric effects predominate to place large substituents far from each other. However, this is not the case for certain substituents, typically those that are highly electronegative. Instead, there is an electronic preference for these groups to be gauche. Typically studied examples include 1,2-difluoroethane (H2FCCFH2), ethylene glycol, and vicinal-difluoroalkyl structures. There are two main explanations for the gauche effect: hyperconjugation and bent bonds. In the hyperconjugation model, the donation of electron density from the C–H σ bonding orbital to the C–F σ* antibonding orbital is considered the source of stabilization in the gauche isomer. Due to the greater electronegativity of fluorine, the C–H σ orbital is a better electron donor than the C–F σ orbital, while the C–F σ* orbital is a better electron acceptor than the C–H σ* orbital. Only the gauche conformation allows good overlap between the better donor and the better acceptor. Key in the bent bond explanation of the gauche effect in difluoroethane is the increased p orbital character of both C–F bonds due to the large electronegativity of fluorine. As a result, electron density builds up above and below to the left and right of the central C–C bond. The resulting reduced orbital overlap can be partially compensated when a gauche conformation is assumed, forming a bent bond. Of these two models, hyperconjugation is generally considered the principal cause behind the gauche effect in difluoroethane. The molecular geometry of both rotamers can be obtained experimentally by high resolution infrared spectroscopy augmented with in silico work. In accordance with the model describ
https://en.wikipedia.org/wiki/Computational%20aeroacoustics
Computational aeroacoustics is a branch of aeroacoustics that aims to analyze the generation of noise by turbulent flows through numerical methods. History The origin of computational aeroacoustics can most likely be dated back to the middle of the 1980s, with a publication by Hardin and Lamkin, who claimed that "[...] the field of computational fluid mechanics has been advancing rapidly in the past few years and now offers the hope that "computational aeroacoustics," where noise is computed directly from a first principles determination of continuous velocity and vorticity fields, might be possible, [...]" Later, in a 1986 publication, the same authors introduced the abbreviation CAA. The term was initially used for a low Mach number approach (expansion of the acoustic perturbation field about an incompressible flow) as it is described under EIF. In the early 1990s the growing CAA community picked up the term and used it extensively for any kind of numerical method describing the noise radiation from an aeroacoustic source or the propagation of sound waves in an inhomogeneous flow field. Such numerical methods can be far-field integration methods (e.g. FW-H) as well as direct numerical methods optimized for the solution of a mathematical model describing the aerodynamic noise generation and/or propagation. With the rapid development of computational resources this field has undergone spectacular progress during the last three decades. Methods Direct numerical simulation (DNS) approach to CAA The compressible Navier-Stokes equations describe both the flow field and the aerodynamically generated acoustic field, so both may be solved for directly. This requires very high numerical resolution due to the large differences in the length scales present between the acoustic variables and the flow variables. It is computationally very demanding and unsuitable for any commercial use. Hybrid approach In this approach the computational domain is
https://en.wikipedia.org/wiki/Insyde%20Software
Insyde Software is a company that specializes in UEFI system firmware and engineering support services, primarily for OEM and ODM computer and component device manufacturers. They are listed on the GreTai Securities Market of Taiwan and headquartered in Taipei, with offices in Westborough, Massachusetts, and Portland, Oregon. The market capitalization of the company's common shares is currently around $115M. Overview The company's product portfolio includes InsydeH2O BIOS (Insyde Software's implementation of the Intel Platform Innovation Framework for UEFI/EFI), BlinkBoot, a UEFI-based boot loader for enabling Internet of Things devices, and Supervyse, a full-featured systems management/BMC firmware providing out-of-band remote management for server computers. Insyde Software was formed when it purchased the BIOS assets of SystemSoft Corporation (NASDAQ:SYSF) in October 1998. Initially Insyde Software was a privately held company that included investments from Intel Pacific Inc., China Development Industrial Bank, Professional Computer Technology Limited (PCT), company management and selected employees. At that time, Insyde Software's management team consisted of Jeremy Wang, Chairman (also the Chairman of PCT); Jonathan Joseph, President (a former founder of SystemSoft); Hansen Liou, the General Manager of Taiwan Operations and Asia-Pacific Sales, and Stephen Gentile, the Vice President of Marketing. Shortly after the initial investment, the company was introduced by Intel to a new BIOS coding architecture called EFI (now UEFI), and the two companies began working together on it. In 2001, the two companies entered into a joint development agreement, and Insyde's first shipment of the technology occurred in October 2003 as InsydeH2O UEFI BIOS. Since that time, UEFI has become the mainstay of Insyde's business. On 23 January 2003, Insyde Software announced its initial public offering on the GreTai Securities Market (GTSM) based in Taipei, Taiwan
https://en.wikipedia.org/wiki/Cypress%20PSoC
PSoC (programmable system on a chip) is a family of microcontroller integrated circuits by Cypress Semiconductor. These chips include a CPU core and mixed-signal arrays of configurable integrated analog and digital peripherals. History In 2002, Cypress began shipping commercial quantities of the PSoC 1. To promote the PSoC, Cypress sponsored a "PSoC Design Challenge" in Circuit Cellar magazine in 2002 and 2004. In April 2013, Cypress released the fourth generation, PSoC 4. The PSoC 4 features a 32-bit ARM Cortex-M0 CPU, with programmable analog blocks (operational amplifiers and comparators), programmable digital blocks (PLD-based UDBs), programmable routing and flexible GPIO (route any function to any pin), a serial communication block (for SPI, UART, I²C), a timer/counter/PWM block and more. PSoC is used in devices as simple as Sonicare toothbrushes and Adidas sneakers, and as complex as the TiVo set-top box. One PSoC implements capacitive sensing for the touch-sensitive scroll wheel on the Apple iPod click wheel. In 2014, Cypress extended the PSoC 4 family by integrating a Bluetooth Low Energy radio along with a PSoC 4 Cortex-M0-based SoC in a single, monolithic die. In 2016, Cypress released PSoC 4 S-Series, featuring ARM Cortex-M0+ CPU. Overview A PSoC integrated circuit is composed of a core, configurable analog and digital blocks, and programmable routing and interconnect. The configurable blocks in a PSoC are the biggest difference from other microcontrollers. PSoC has three separate memory spaces: paged SRAM for data, Flash memory for instructions and fixed data, and I/O registers for controlling and accessing the configurable logic blocks and functions. The device is created using SONOS technology. PSoC resembles an ASIC: blocks can be assigned a wide range of functions and interconnected on-chip. Unlike an ASIC, there is no special manufacturing process required to create the custom configuration — only startup code that is created by Cypress'
https://en.wikipedia.org/wiki/Concurrent%20Haskell
Concurrent Haskell extends Haskell 98 with explicit concurrency. Its two main underlying concepts are: A primitive type MVar α implementing a bounded/single-place asynchronous channel, which is either empty or holds a value of type α. The ability to spawn a concurrent thread via the forkIO primitive. Built atop this is a collection of useful concurrency and synchronisation abstractions such as unbounded channels, semaphores and sample variables. Haskell threads have very low overhead: creation, context-switching and scheduling are all internal to the Haskell runtime. These Haskell-level threads are mapped onto a configurable number of OS-level threads, usually one per processor core. Software Transactional Memory The software transactional memory (STM) extension to GHC reuses the process forking primitives of Concurrent Haskell. STM however: avoids MVars in favour of TVars. introduces the retry and orElse primitives, allowing alternative atomic actions to be composed together. STM monad The STM monad is an implementation of software transactional memory in Haskell. It is implemented in GHC, and allows for mutable variables to be modified in transactions. Traditional approach Consider a banking application as an example, and a transaction in it – the transfer function, which takes money from one account, and puts it into another account. In the IO monad, this might look like:

type Account = IORef Integer

transfer :: Integer -> Account -> Account -> IO ()
transfer amount from to = do
    fromVal <- readIORef from  -- (A)
    toVal   <- readIORef to
    writeIORef from (fromVal - amount)
    writeIORef to   (toVal + amount)

This causes problems in concurrent situations where multiple transfers might be taking place on the same account at the same time. If there were two transfers transferring money from account from, and both calls to transfer ran line (A) before either of them had written their new values, it is possible that money would be put into the o
https://en.wikipedia.org/wiki/Corona%20radiata%20%28embryology%29
The corona radiata is the innermost layer of the cells of the cumulus oophorus and is directly adjacent to the zona pellucida, the inner protective glycoprotein layer of the ovum. The cumulus oophorus consists of the cells that surround the corona radiata and lie between it and the follicular antrum. The main purpose of the corona radiata in many animals is to supply vital proteins to the cell. It is formed by follicle cells adhering to the oocyte before it leaves the ovarian follicle, and originates from the squamous granulosa cells present at the primordial stage of follicular development. The corona radiata is formed when the granulosa cells enlarge and become cuboidal, which occurs during the transition from the primordial to the primary stage. These cuboidal granulosa cells, also known as the granulosa radiata, form more layers throughout the maturation process, and remain attached to the zona pellucida after the ovulation of the Graafian follicle. For fertilization to occur, sperm cells rely on hyaluronidase (an enzyme found in the acrosome of spermatozoa) to disperse the corona radiata from the zona pellucida of the secondary (ovulated) oocyte, thus permitting entry into the perivitelline space and allowing contact between the sperm cell and the nucleus of the oocyte.
https://en.wikipedia.org/wiki/Parieto-occipital%20sulcus
In neuroanatomy, the parieto-occipital sulcus (also called the parieto-occipital fissure) is a deep sulcus in the cerebral cortex that marks the boundary between the cuneus and precuneus, and also between the parietal and occipital lobes. Only a small part can be seen on the lateral surface of the hemisphere, its chief part being on the medial surface. The lateral part of the parieto-occipital sulcus (Fig. 726) is situated about 5 cm in front of the occipital pole of the hemisphere, and measures about 1.25 cm. in length. The medial part of the parieto-occipital sulcus (Fig. 727) runs downward and forward as a deep cleft on the medial surface of the hemisphere, and joins the calcarine fissure below and behind the posterior end of the corpus callosum. In most cases, it contains a submerged gyrus. Function The parieto-occipital lobe has been found in various neuroimaging studies, including PET (positron-emission-tomography) studies, and SPECT (single-photon emission computed tomography) studies, to be involved along with the dorsolateral prefrontal cortex during planning. Gallery
https://en.wikipedia.org/wiki/Ridged%20mirror
In atomic physics, a ridged mirror (or ridged atomic mirror, or Fresnel diffraction mirror) is a kind of atomic mirror, designed for the specular reflection of neutral particles (atoms) coming at a grazing incidence angle. In order to reduce the mean attraction of particles to the surface and increase the reflectivity, this surface has narrow ridges. Reflectivity of ridged atomic mirrors Various estimates for the efficiency of quantum reflection of waves from ridged mirror were discussed in the literature. All the estimates explicitly use the de Broglie theory about wave properties of reflected atoms. Scaling of the van der Waals force The ridges enhance the quantum reflection from the surface, reducing the effective constant of the van der Waals attraction of atoms to the surface. Such interpretation leads to the estimate of the reflectivity , where is width of the ridges, is distance between ridges, is grazing angle, and is wavenumber and is coefficient of reflection of atoms with wavenumber from a flat surface at the normal incidence. Such estimate predicts the enhancement of the reflectivity at the increase of period ; this estimate is valid at . See quantum reflection for the approximation (fit) of the function . Interpretation as Zeno effect For narrow ridges with large period , the ridges just blocks the part of the wavefront. Then, it can be interpreted in terms of the Fresnel diffraction of the de Broglie wave, or the Zeno effect; such interpretation leads to the estimate the reflectivity , where the grazing angle is supposed to be small. This estimate predicts enhancement of the reflectivity at the reduction of period . This estimate requires that . Fundamental limit For efficient ridged mirrors, both estimates above should predict high reflectivity. This implies reduction of both, width, of the ridges and the period, . The width of the ridges cannot be smaller than the size of an atom; this sets the limit of performance of the ridged mirro
https://en.wikipedia.org/wiki/String%20galvanometer
A string galvanometer is a sensitive, fast-responding measuring instrument that uses a single fine filament of wire suspended in a strong magnetic field to measure small currents. In use, a strong light source illuminates the fine filament, and an optical system magnifies the movement of the filament, allowing it to be observed or recorded photographically. The principle of the string galvanometer remained in use for electrocardiograms until the advent of electronic vacuum-tube amplifiers in the 1920s. History Submarine cable telegraph systems of the late 19th century used a galvanometer to detect pulses of electric current, which could be observed and transcribed into a message. The speed at which pulses could be detected by the galvanometer was limited by its mechanical inertia, and by the inductance of the multi-turn coil used in the instrument. Clément Ader, a French engineer, replaced the coil with a much faster wire or "string", producing the first string galvanometer. For most telegraphic purposes it was sufficient to detect the existence of a pulse. In 1892 André Blondel described the dynamic properties of an instrument that could measure the wave shape of an electrical impulse, an oscillograph. Augustus Waller had discovered electrical activity from the heart and produced the first electrocardiogram in 1887, but his equipment was slow, and physiologists worked to find a better instrument. In 1901, Willem Einthoven described the scientific background and potential utility of a string galvanometer, stating "Mr. Ader has already built an instrument with a wire stretched between the poles of a magnet. It was a telegraph receiver." Einthoven developed a sensitive form of string galvanometer that allowed photographic recording of the impulses associated with the heart beat. He was a leader in applying the string galvanometer to physiology and medicine, leading to today's electrocardiography. Einthoven was awarded the 1924 Nobel Prize in Physiology or Medicine f
https://en.wikipedia.org/wiki/Posterior%20triangle%20of%20the%20neck
The posterior triangle (or lateral cervical region) is a region of the neck. Boundaries The posterior triangle has the following boundaries: Apex: Union of the sternocleidomastoid and the trapezius muscles at the superior nuchal line of the occipital bone Anteriorly: Posterior border of the sternocleidomastoideus Posteriorly: Anterior border of the trapezius Inferiorly: Middle one third of the clavicle Roof: Investing layer of the deep cervical fascia Floor: (From superior to inferior) 1) M. semispinalis capitis 2) M. splenius capitis 3) M. levator scapulae 4) M. scalenus posterior 5) M. scalenus medius Divisions The posterior triangle is crossed, about 2.5 cm above the clavicle, by the inferior belly of the omohyoid muscle, which divides the space into two triangles: an upper or occipital triangle a lower or subclavian triangle (or supraclavicular triangle) Contents A) Nerves and plexuses: Spinal accessory nerve (Cranial Nerve XI) Branches of cervical plexus Roots and trunks of brachial plexus Phrenic nerve (C3,4,5) B) Vessels: Subclavian artery (Third part) Transverse cervical artery Suprascapular artery Terminal part of external jugular vein C) Lymph nodes: Occipital Supraclavicular D) Muscles: Inferior belly of omohyoid muscle Anterior Scalene Middle Scalene Posterior Scalene Levator Scapulae Muscle Splenius Clinical significance The accessory nerve (CN XI) is particularly vulnerable to damage during lymph node biopsy. Damage results in an inability to shrug the shoulders or raise the arm above the head, particularly due to compromised trapezius muscle innervation. The external jugular vein's superficial location within the posterior triangle also makes it vulnerable to injury. See also Anterior triangle of the neck
https://en.wikipedia.org/wiki/Coronis%20%28textual%20symbol%29
A coronis (, korōnís,  , korōnídes) is a textual symbol found in ancient Greek papyri that was used to mark the end of an entire work or of a major section in poetic and prose texts. The coronis was generally placed in the left-hand margin of the text and was often accompanied by a paragraphos or a forked paragraphos (diple obelismene). The coronis is encoded by Unicode as part of the Supplemental Punctuation block, at . Etymology Liddell and Scott's Greek–English Lexicon gives the basic meaning of as "crook-beaked" from which a general meaning of "curved" is supposed to have derived. concurs and derives the word from (), "crow", assigning the meaning of the epithet's use in reference to the textual symbol to the same semantic range of "curve". But, given the fact that the earliest coronides actually take the form of birds, there has been debate about whether the name of the textual symbol initially referred to use of a decorative bird to mark a major division in a text or if these pictures were a secondary development that played upon the etymological relation between , "crow", and , as in "curved". Examples See also Obelism Notes Sources Chantraine, P., Dictionnaire étymologique de la langue grecque (Paris: Éditions Klincksieck, 1968). Liddell, H. G.; Scott, R., A Greek–English Lexicon, 9th ed. (Oxford: OUP, 1996). Schironi, F., Τὸ Μέγα Βιβλίον: Book-Ends, End-Titles, and Coronides in Papyri with Hexametric Poetry (Durham, NC: The American Society of Papyrologists, 2010). Turner, E. G., Greek Manuscripts of the Ancient World, 2nd rev. ed. by P.J. Parsons (London: Institute of Classical Studies, 1987). Palaeography Punctuation Ancient Greek punctuation
https://en.wikipedia.org/wiki/Auxiliary%20field
In physics, and especially quantum field theory, an auxiliary field is one whose equations of motion admit a single solution. Therefore, the Lagrangian describing such a field A contains an algebraic quadratic term and an arbitrary linear term, while it contains no kinetic terms (derivatives of the field): L_aux = (1/2)(A, A) + (f(φ), A). The equation of motion for A is A(φ) = −f(φ), and the Lagrangian becomes L_aux = −(1/2)(f(φ), f(φ)). Auxiliary fields generally do not propagate, and hence the content of any theory can remain unchanged in many circumstances by adding such fields by hand. If we have an initial Lagrangian L_0 describing a field φ, then the Lagrangian describing both fields is L = L_0(φ) + L_aux = L_0(φ) − (1/2)(f(φ), f(φ)) + (1/2)(A + f(φ), A + f(φ)). Therefore, auxiliary fields can be employed to cancel terms quadratic in f(φ) in L_0 and linearize the action S. Examples of auxiliary fields are the complex scalar field F in a chiral superfield, the real scalar field D in a vector superfield, the scalar field B in BRST and the auxiliary field of the Hubbard–Stratonovich transformation. The quantum mechanical effect of adding an auxiliary field is the same as the classical one, since the path integral over such a field is Gaussian. To wit: ∫ dA exp[−(1/2)(A, A) − (f(φ), A)] ∝ exp[(1/2)(f(φ), f(φ))]. See also Bosonic field Fermionic field Composite Field
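As a sanity check on the statement that integrating out the auxiliary field is equivalent to substituting its equation of motion, one can complete the square explicitly. The LaTeX fragment below is an illustrative sketch, not taken from the source, using a single real component A and writing f for f(φ).

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
Completing the square in the auxiliary-field Lagrangian,
\[
  \mathcal{L}_{\text{aux}} = \tfrac{1}{2}A^{2} + f A
                           = \tfrac{1}{2}\,(A + f)^{2} - \tfrac{1}{2}f^{2},
\]
so the equation of motion $A = -f$ gives
$\mathcal{L}_{\text{aux}}\big|_{A=-f} = -\tfrac{1}{2}f^{2}$,
while the Gaussian integral over $A$ contributes only an $f$-independent constant:
\[
  \int_{-\infty}^{\infty} \mathrm{d}A\, e^{-\frac{1}{2}A^{2} - fA}
  = e^{\frac{1}{2}f^{2}}
    \int_{-\infty}^{\infty} \mathrm{d}A\, e^{-\frac{1}{2}(A+f)^{2}}
  = \sqrt{2\pi}\, e^{\frac{1}{2}f^{2}} .
\]
\end{document}
```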
https://en.wikipedia.org/wiki/PocketMail
PocketMail was a very small and inexpensive mobile computer, with a built-in acoustic coupler, developed by PocketScience. History PocketMail was developed by the company PocketScience and used technology developed by NASA. This was the first ever mass-market mobile email. The hardware cost around US$100 and the service was initially US$9.95 per month for unlimited use. Later the monthly fee increased. After the company made a reference hardware design, leading consumer electronics manufacturers Audiovox, Sharp, JVC, and others made their own PocketMail devices. Later a PocketMail dongle was created for the PalmPilot. PocketMail users were given a custom email address or were able to synch up PocketMail with their existing email account (including AOL accounts). Although actually a computer, its main function was email. Its main advantages were that it was simple, and that it worked with any phone, even outside the United States. It was a low-cost personal digital assistant (PDA) with an inbuilt acoustic coupler which allowed users to send and receive email while connected to a normal telephone, thus allowing use outside of mobile phone range, or without the need to be signed up with a mobile telephone provider. Popularity of the PocketMail peaked around 2000, when the company stopped investing in new technology development. In Australia, the company known as PocketMail stopped marketing the PocketMail service in 2007, changed its name to Adavale Resources Limited and now owns uranium mining prospects in Queensland and South Australia.
https://en.wikipedia.org/wiki/Gene
In biology, the word gene (from , ; meaning generation or birth or gender) can have several different meanings. The Mendelian gene is a basic unit of heredity and the molecular gene is a sequence of nucleotides in DNA that is transcribed to produce a functional RNA. There are two types of molecular genes: protein-coding genes and non-coding genes. During gene expression, the DNA is first copied into RNA. The RNA can be directly functional or be the intermediate template for a protein that performs a function. (Some viruses have an RNA genome so the genes are made of RNA that may function directly without being copied into RNA. This is an exception to the strict definition of a gene described above.) The transmission of genes to an organism's offspring is the basis of the inheritance of phenotypic traits. These genes make up different DNA sequences called genotypes. Genotypes along with environmental and developmental factors determine what the phenotypes will be. Most biological traits are under the influence of polygenes (many different genes) as well as gene–environment interactions. Some genetic traits are instantly visible, such as eye color or the number of limbs, and some are not, such as blood type, the risk for specific diseases, or the thousands of basic biochemical processes that constitute life. A gene can acquire mutations in their sequence, leading to different variants, known as alleles, in the population. These alleles encode slightly different versions of a gene, which may cause different phenotypical traits. Usage of the term "having a gene" (e.g., "good genes," "hair color gene") typically refers to containing a different allele of the same, shared gene. Genes evolve due to natural selection / survival of the fittest and genetic drift of the alleles. The term gene was introduced by Danish botanist, plant physiologist and geneticist Wilhelm Johannsen in 1909. It is inspired by the Ancient Greek: γόνος, gonos, that means offspring and procreation
https://en.wikipedia.org/wiki/Persona%20%28user%20experience%29
A persona (also user persona, customer persona, buyer persona) in user-centered design and marketing is a personalized fictional character created to represent a user type that might use a site, brand, or product in a similar way. Personas represent the similarities of consumer groups or segments. They are based on demographic and behavioural personal information collected from users, qualitative interviews, and participant observation. Personas are one of the outcomes of market segmentation, where marketers use the results of statistical analysis and qualitative observations to draw profiles, giving them names and personalities to paint a picture of a person that could exist in real life. The term persona is used widely in online and technology applications as well as in advertising, where other terms such as pen portraits may also be used. Personas are useful in considering the goals, desires, and limitations of brand buyers and users in order to help to guide decisions about a service, product or interaction space such as features, interactions, and visual design of a website. Personas may be used as a tool during the user-centered design process for designing software. They can introduce interaction design principles to things like industrial design and online marketing. A user persona is a representation of the goals and behavior of a hypothesized group of users. In most cases, personas are synthesized from data collected from interviews or surveys with users. They are captured in short page descriptions that include behavioral patterns, goals, skills, attitudes, with a few fictional personal details to make the persona a realistic character. In addition to Human-Computer Interaction (HCI), personas are also widely used in sales, advertising, marketing and system design. Personas provide common behaviors, outlooks, and potential objections of people matching a given persona. History Within software design, Alan Cooper, a noted pioneer software developer, p
https://en.wikipedia.org/wiki/Three-point%20flexural%20test
The three-point bending flexural test provides values for the modulus of elasticity in bending E_f, flexural stress σ_f, flexural strain ε_f and the flexural stress–strain response of the material. This test is performed on a universal testing machine (tensile testing machine or tensile tester) with a three-point or four-point bend fixture. The main advantage of a three-point flexural test is the ease of the specimen preparation and testing. However, this method also has some disadvantages: the results of the testing method are sensitive to specimen and loading geometry and strain rate. Testing method The test method for conducting the test usually involves a specified test fixture on a universal testing machine. Details of the test preparation, conditioning, and conduct affect the test results. The sample is placed on two supporting pins a set distance apart. Calculation of the flexural stress: σ_f = 3FL/(2bd²) for a rectangular cross section, and σ_f = FL/(πR³) for a circular cross section. Calculation of the flexural strain: ε_f = 6Dd/L². Calculation of the flexural modulus: E_f = L³m/(4bd³). In these formulas the following parameters are used: σ_f = stress in outer fibers at midpoint (MPa); ε_f = strain in the outer surface (mm/mm); E_f = flexural modulus of elasticity (MPa); F = load at a given point on the load deflection curve (N); L = support span (mm); b = width of test beam (mm); d = depth or thickness of tested beam (mm); D = maximum deflection of the center of the beam (mm); m = the gradient (i.e., slope) of the initial straight-line portion of the load deflection curve (N/mm); R = the radius of the beam (mm). Fracture toughness testing The fracture toughness of a specimen can also be determined using a three-point flexural test. The stress intensity factor K_I at the crack tip of a single edge notch bending specimen can be expressed in terms of the applied load P, the thickness B of the specimen, the crack length a, and the width W of the specimen. In a three-point bend test, a fatigue crack is created at the tip of the no
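For a quick numerical check of the rectangular cross-section formulas above, a short Python sketch is given below; the specimen dimensions, load and slope values are made-up examples, not data from the source.

```python
# Helpers for the three-point bend formulas quoted above (rectangular cross section).
# Units: loads in N, lengths in mm, slope m in N/mm -> stresses and moduli in MPa.

def flexural_stress(F, L, b, d):
    """sigma_f = 3FL / (2 b d^2)"""
    return 3.0 * F * L / (2.0 * b * d ** 2)

def flexural_strain(D, d, L):
    """epsilon_f = 6Dd / L^2 (dimensionless, mm/mm)"""
    return 6.0 * D * d / L ** 2

def flexural_modulus(L, m, b, d):
    """E_f = L^3 m / (4 b d^3)"""
    return L ** 3 * m / (4.0 * b * d ** 3)

# Example: 64 mm support span, 12.7 mm wide, 3.2 mm thick specimen
print(flexural_stress(F=100.0, L=64.0, b=12.7, d=3.2))   # MPa
print(flexural_strain(D=5.0, d=3.2, L=64.0))             # mm/mm
print(flexural_modulus(L=64.0, m=50.0, b=12.7, d=3.2))   # MPa
```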
https://en.wikipedia.org/wiki/Calcium%20ascorbate
Calcium ascorbate is a compound with the molecular formula CaC12H14O12. It is the calcium salt of ascorbic acid, one of the mineral ascorbates. It is approximately 10% calcium by mass. As a food additive, it has the E number E302. It is approved for use as a food additive in the EU, the USA, and Australia and New Zealand.
https://en.wikipedia.org/wiki/Sodium%20ascorbate
Sodium ascorbate is one of a number of mineral salts of ascorbic acid (vitamin C). The molecular formula of this chemical compound is C6H7NaO6. As the sodium salt of ascorbic acid, it is known as a mineral ascorbate. It has not been demonstrated to be more bioavailable than any other form of vitamin C supplement. Sodium ascorbate normally provides 131 mg of sodium per 1,000 mg of ascorbic acid (1,000 mg of sodium ascorbate contains 889 mg of ascorbic acid and 111 mg of sodium). As a food additive, it has the E number E301 and is used as an antioxidant and an acidity regulator. It is approved for use as a food additive in the EU, USA, Australia, and New Zealand. In in vitro studies, sodium ascorbate has been found to produce cytotoxic effects in various malignant cell lines, which include melanoma cells that are particularly susceptible. Production Sodium ascorbate is produced by dissolving ascorbic acid in water and adding an equivalent amount of sodium bicarbonate in water. After cessation of effervescence, the sodium ascorbate is precipitated by the addition of isopropanol.
https://en.wikipedia.org/wiki/Potassium%20ascorbate
Potassium ascorbate is a compound with formula KC6H7O6. It is the potassium salt of ascorbic acid (vitamin C) and a mineral ascorbate. As a food additive, it has E number E303, INS number 303. Although it is not a permitted food additive in the UK, USA and the EU, it is approved for use in Australia and New Zealand. According to some studies, it has shown a strong antioxidant activity and antitumoral properties.
https://en.wikipedia.org/wiki/Disjunctive%20sequence
A disjunctive sequence is an infinite sequence (over a finite alphabet of characters) in which every finite string appears as a substring. For instance, the binary Champernowne sequence 0 1 00 01 10 11 000 001 010 011 100 101 110 111 ..., formed by concatenating all binary strings in shortlex order, clearly contains all the binary strings and so is disjunctive. (The spaces above are not significant and are present solely to make clear the boundaries between strings.) The complexity function of a disjunctive sequence S over an alphabet of size k is p_S(n) = k^n. Any normal sequence (a sequence in which each string of equal length appears with equal frequency) is disjunctive, but the converse is not true. For example, letting 0^n denote the string of length n consisting of all 0s, consider the sequence obtained by splicing exponentially long strings of 0s into the shortlex ordering of all binary strings. Most of this sequence consists of long runs of 0s, and so it is not normal, but it is still disjunctive. A disjunctive sequence is recurrent but never uniformly recurrent/almost periodic. Examples The following result can be used to generate a variety of disjunctive sequences: If a_1, a_2, a_3, ..., is a strictly increasing infinite sequence of positive integers such that lim_{n → ∞} (a_{n+1} / a_n) = 1, then for any positive integer m and any integer base b ≥ 2, there is an a_n whose expression in base b starts with the expression of m in base b. (Consequently, the infinite sequence obtained by concatenating the base-b expressions for a_1, a_2, a_3, ..., is disjunctive over the alphabet {0, 1, ..., b-1}.) Two simple cases illustrate this result: a_n = n^k, where k is a fixed positive integer. (In this case, lim_{n → ∞} (a_{n+1} / a_n) = lim_{n → ∞} ((n+1)^k / n^k) = lim_{n → ∞} (1 + 1/n)^k = 1.) E.g., using base-ten expressions, the sequences 123456789101112... (k = 1, positive natural numbers), 1491625364964... (k = 2, squares), 182764125216343... (k = 3, cubes), etc., are disjunctive on {0,1,2,3,4,5,6,7,8,9}. a_n = p_n, where p_n is the nt
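A finite prefix of the binary Champernowne sequence can be generated and spot-checked in a few lines of Python; this is an illustrative sketch (the function name and the cut-off lengths are arbitrary choices, not from the source).

```python
from itertools import product

def binary_champernowne(max_len):
    """Concatenate all binary strings of length 1..max_len in shortlex order."""
    return "".join("".join(bits)
                   for n in range(1, max_len + 1)
                   for bits in product("01", repeat=n))

seq = binary_champernowne(12)          # a finite prefix of the infinite sequence

# every binary string of length <= 6 appears as a substring of this prefix
assert all("".join(bits) in seq
           for n in range(1, 7)
           for bits in product("01", repeat=n))

print(seq[:40])  # the blocks 0, 1, 00, 01, 10, 11, 000, ... run together
```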
https://en.wikipedia.org/wiki/Scale%20factor%20%28computer%20science%29
In computer science, a scale factor is a number used as a multiplier to represent a number on a different scale, functioning similarly to an exponent in mathematics. A scale factor is used when a real-world set of numbers needs to be represented on a different scale in order to fit a specific number format. Although using a scale factor extends the range of representable values, it also decreases the precision, resulting in rounding error for certain calculations. Uses Certain number formats may be chosen for an application for convenience in programming, or because of certain advantages offered by the hardware for that number format. For instance, early processors did not natively support the IEEE floating-point standard for representing fractional values, so integers were used to store representations of the real world values by applying a scale factor to the real value. Similarly, because hardware arithmetic has a fixed width (commonly 16, 32, or 64 bits, depending on the data type), scale factors allow representation of larger numbers (by manually multiplying or dividing by the specified scale factor), though at the expense of precision. By necessity, this was done in software, since the hardware did not support fractional values. Scale factors are also used in floating-point numbers, and most commonly are powers of two. For example, the double-precision format sets aside 11 bits for the scaling factor (a binary exponent) and 53 bits for the significand, allowing various degrees of precision for representing different ranges of numbers, and expanding the range of representable numbers beyond what could be represented using 64 explicit bits (though at the cost of precision). As an example of where precision is lost, a 16-bit unsigned integer (uint16) can only hold a value as large as 65,535. If unsigned 16-bit integers are used to represent values from 0 to 131,070, then a scale factor of 1/2 would be introduced, such that the scaled values correspond exactly t
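The truncated example above can be made concrete with a small C sketch (the numbers are illustrative, not taken from the article): real-world values in the range 0 to 131,070 are stored in a 16-bit unsigned integer by applying a scale factor of 1/2, which extends the range at the cost of losing the odd values to rounding.

    #include <stdio.h>
    #include <stdint.h>

    #define SCALE 2u   /* real value = stored value * SCALE, i.e. values are scaled by 1/2 on storage */

    static uint16_t encode(uint32_t real_value) { return (uint16_t)(real_value / SCALE); }
    static uint32_t decode(uint16_t stored)     { return (uint32_t)stored * SCALE; }

    int main(void)
    {
        uint32_t samples[] = { 0, 7, 65534, 65535, 131070 };
        for (size_t i = 0; i < sizeof samples / sizeof samples[0]; i++) {
            uint16_t s = encode(samples[i]);
            printf("real %6u -> stored %5u -> decoded %6u%s\n",
                   samples[i], s, decode(s),
                   decode(s) != samples[i] ? "   (precision lost)" : "");
        }
        return 0;
    }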
https://en.wikipedia.org/wiki/Overflow%20flag
In computer processors, the overflow flag (sometimes called the V flag) is usually a single bit in a system status register used to indicate when an arithmetic overflow has occurred in an operation, indicating that the signed two's-complement result would not fit in the number of bits used for the result. Some architectures may be configured to automatically generate an exception on an operation resulting in overflow. As an example, suppose we add 127 and 127 using 8-bit registers. 127+127 is 254, but using 8-bit arithmetic the result would be 1111 1110 binary, which is the two's complement encoding of −2, a negative number. A negative sum of positive operands (or vice versa) is an overflow. The overflow flag would then be set so the program can be aware of the problem and mitigate this or signal an error. The overflow flag is thus set when the most significant bit (here considered the sign bit) is changed by adding two numbers with the same sign (or subtracting two numbers with opposite signs). Overflow cannot occur when the signs of two addition operands are different (or the signs of two subtraction operands are the same). When binary values are interpreted as unsigned numbers, the overflow flag is meaningless and normally ignored. One of the advantages of two's complement arithmetic is that the addition and subtraction operations do not need to distinguish between signed and unsigned operands. For this reason, most computer instruction sets do not distinguish between signed and unsigned operands, generating both (signed) overflow and (unsigned) carry flags on every operation, and leaving it to following instructions to pay attention to whichever one is of interest. Internally, the overflow flag is usually generated by an exclusive or of the internal carry into and out of the sign bit. Bitwise operations (and, or, xor, not, rotate) do not have a notion of signed overflow, so the defined value varies on different processor architectures. Some processors clear the
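The 127 + 127 example can be reproduced directly; the C sketch below (an illustration, assuming the usual two's-complement int8_t) derives the overflow flag exactly as described above, as the exclusive-or of the carry into and the carry out of the sign bit.

    #include <stdio.h>
    #include <stdint.h>

    /* Add two 8-bit values and compute the overflow flag as the XOR of the
     * carry into and the carry out of the sign bit (bit 7). */
    static uint8_t add8(uint8_t a, uint8_t b, int *overflow)
    {
        unsigned carry_into_sign   = (((a & 0x7Fu) + (b & 0x7Fu)) >> 7) & 1u;  /* carry into bit 7 */
        unsigned full              = (unsigned)a + (unsigned)b;
        unsigned carry_out_of_sign = (full >> 8) & 1u;                         /* carry out of bit 7 */
        *overflow = (int)(carry_into_sign ^ carry_out_of_sign);
        return (uint8_t)full;
    }

    int main(void)
    {
        int v;
        uint8_t sum = add8(127u, 127u, &v);
        /* 127 + 127 = 254 = 0xFE, which reads as -2 in two's complement */
        printf("sum = 0x%02X (as signed: %d), overflow flag = %d\n",
               sum, (int)(int8_t)sum, v);
        return 0;
    }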
https://en.wikipedia.org/wiki/Rotational%20temperature
The characteristic rotational temperature (θ_rot or θ_R) is commonly used in statistical thermodynamics to simplify the expression of the rotational partition function and the rotational contribution to molecular thermodynamic properties. It has units of temperature and is defined as θ_rot = hcB/k_B = ħ^2/(2Ik_B), where B is the rotational constant expressed in wavenumbers, I is a molecular moment of inertia, h is the Planck constant, c is the speed of light, ħ = h/2π is the reduced Planck constant and k_B is the Boltzmann constant. The physical meaning of θ_rot is as an estimate of the temperature at which thermal energy (of the order of k_B T) is comparable to the spacing between rotational energy levels (of the order of hcB). At about this temperature the population of excited rotational levels becomes important. Some typical values are given in the table. In each case the value refers to the most common isotopic species.
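As a worked illustration of the definition above (the numbers below are round literature values for carbon monoxide, quoted here only as an example and not taken from the article's missing table):

    \theta_\mathrm{rot} = \frac{h c B}{k_\mathrm{B}} = \frac{\hbar^{2}}{2 I k_\mathrm{B}},
    \qquad \text{e.g. for CO: } B \approx 1.93\ \mathrm{cm^{-1}}
    \;\Rightarrow\; \theta_\mathrm{rot} \approx (1.44\ \mathrm{cm\,K}) \times (1.93\ \mathrm{cm^{-1}}) \approx 2.8\ \mathrm{K},

where 1.44 cm K is the value of hc/k_B (the second radiation constant).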
https://en.wikipedia.org/wiki/Ruslan%20Stratonovich
Ruslan Leont'evich Stratonovich was a Russian physicist, engineer, and probabilist and one of the founders of the theory of stochastic differential equations. Biography Ruslan Stratonovich was born on 31 May 1930 in Moscow. From 1947 he studied at Moscow State University, specializing there under P. I. Kuznetsov in radio physics (a Soviet term for oscillation physics – including noise – in the broadest sense, but especially in the electromagnetic spectrum). In 1953 he graduated and came into contact with the mathematician Andrey Kolmogorov. In 1956 he received his doctorate on the application of the theory of correlated random points to the calculation of electronic noise. In 1969 he became professor of physics at the Moscow State University. Research Stratonovich invented a stochastic calculus which serves as an alternative to the Itō calculus; the Stratonovich calculus is most natural when physical laws are being considered. The Stratonovich integral, which appears in this calculus, is named after him (it was developed independently at about the same time by Donald Fisk). He also solved the problem of optimal non-linear filtering based on his theory of conditional Markov processes, which was published in his papers in 1959 and 1960. The Kalman-Bucy (linear) filter (1961) is a special case of Stratonovich's filter. The Hubbard-Stratonovich transformation in the theory of path integrals (or distribution functions of statistical mechanics) was introduced by him (and used by John Hubbard in solid state physics). In 1965, he developed the theory of pricing information (value of information), which describes decision-making situations in which the question is how much someone would pay for information. Awards Lomonosov Prize of the Moscow University, 1984 USSR State Prize, 1988 State Prize of the Russian Federation, 1996 See also Filtering problem (stochastic processes) Works with P. I. Kuznetsov: The propagation of electrom
https://en.wikipedia.org/wiki/Q%20%28number%20format%29
The Q notation is a way to specify the parameters of a binary fixed point number format. For example, in Q notation, the number format denoted by Q8.8 means that the fixed point numbers in this format have 8 bits for the integer part and 8 bits for the fraction part. A number of other notations have been used for the same purpose. Definition Texas Instruments version The Q notation, as defined by Texas Instruments, consists of the letter Q followed by a pair of numbers m.n, where m is the number of bits used for the integer part of the value, and n is the number of fraction bits. By default, the notation describes signed binary fixed point format, with the unscaled integer being stored in two's complement format, used in most binary processors. The first bit always gives the sign of the value (1 = negative, 0 = non-negative), and it is not counted in the m parameter. Thus the total number w of bits used is 1 + m + n. For example, the specification Q3.12 describes a signed binary fixed-point number with w = 16 bits in total, comprising the sign bit, three bits for the integer part, and 12 bits that are the fraction. That is, a 16-bit signed (two's complement) integer, that is implicitly multiplied by the scaling factor 2^−12. In particular, when n is zero, the numbers are just integers. If m is zero, all bits except the sign bit are fraction bits; then the range of the stored number is from −1.0 (inclusive) to +1.0 (exclusive). The m and the dot may be omitted, in which case they are inferred from the size of the variable or register where the value is stored. Thus Q12 means a signed integer with any number of bits, that is implicitly multiplied by 2^−12. The letter U can be prefixed to the Q to denote an unsigned binary fixed-point format. For example, UQ1.15 describes values represented as unsigned 16-bit integers with an implicit scaling factor of 2^−15, which range from 0.0 to (2^16−1)/2^15 = +1.999969482421875. ARM version A variant of the Q notation has been in use by ARM.
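A short C sketch (not from the article) of arithmetic in the signed Q15 format, i.e. m = 0 and n = 15 on a 16-bit word: multiplying two Q15 values produces a Q30 result in a 32-bit intermediate, which is shifted right by 15 to return to Q15.

    #include <stdio.h>
    #include <stdint.h>

    typedef int16_t q15_t;   /* signed Q15: value = stored / 2^15, range [-1, +1) */

    static q15_t  q15_from_double(double x) { return (q15_t)(x * 32768.0); }
    static double q15_to_double(q15_t q)    { return (double)q / 32768.0; }

    /* Q15 * Q15 -> Q30 in a 32-bit intermediate, then truncate back to Q15. */
    static q15_t q15_mul(q15_t a, q15_t b)
    {
        int32_t product = (int32_t)a * (int32_t)b;   /* Q30 */
        return (q15_t)(product >> 15);               /* back to Q15 */
    }

    int main(void)
    {
        q15_t a = q15_from_double(0.75);
        q15_t b = q15_from_double(-0.5);
        printf("0.75 * -0.5 ~= %f\n", q15_to_double(q15_mul(a, b)));   /* about -0.375 */
        return 0;
    }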
https://en.wikipedia.org/wiki/Mobile%20web
The mobile web comprises mobile browser-based World Wide Web services accessed from handheld mobile devices, such as smartphones or feature phones, through a mobile or other wireless network. History and development Traditionally, the World Wide Web has been accessed via fixed-line services on laptops and desktop computers. However, the web is now more accessible by portable and wireless devices. Early 2010 ITU (International Telecommunication Union) report said that with current growth rates, web access by people on the go via laptops and smart mobile devices was likely to exceed web access from desktop computers within the following five years. In January 2014, mobile internet use exceeded desktop use in the United States. The shift to mobile Web access has accelerated since 2007 with the rise of larger multitouch smartphones, and since 2010 with the rise of multitouch tablet computers. Both platforms provide better Internet access, screens, and mobile browsers, or application-based user Web experiences than previous generations of mobile devices. Web designers may work separately on such pages, or pages may be automatically converted, as in Mobile Wikipedia. Faster speeds, smaller, feature-rich devices, and a multitude of applications continue to drive explosive growth for mobile internet traffic. The 2017 Virtual Network Index (VNI) report produced by Cisco Systems forecasts that by 2021, there will be 5.5 billion global mobile users (up from 4.9 billion in 2016). Additionally, the same 2017 VNI report forecasts that average access speeds will increase by roughly three times from 6.8 Mbit/s to 20 Mbit/s in that same period with video comprising the bulk of the traffic (78%). The distinction between mobile web applications and native applications is anticipated to become increasingly blurred, as mobile browsers gain direct access to the hardware of mobile devices (including accelerometers and GPS chips), and the speed and abilities of browser-based applicatio
https://en.wikipedia.org/wiki/The%20Secret%20%282006%20film%29
The Secret is a 2006 Australian-American spirituality documentary consisting of a series of interviews designed to demonstrate the New Thought "law of attraction", the belief that everything one wants or needs can be satisfied by believing in an outcome, repeatedly thinking about it, and maintaining positive emotional states to "attract" the desired outcome. The film and the subsequent publication of the book of the same name attracted interest from media figures such as Oprah Winfrey, Ellen DeGeneres and Larry King. Synopsis The Secret, described as a self-help film, uses a documentary format to present a concept titled "law of attraction". As described in the film, the "Law of Attraction" hypothesis posits that feelings and thoughts can attract events, feelings, and experiences, from the workings of the cosmos to interactions among individuals in their physical, emotional, and professional affairs. The film also suggests that there has been a strong tendency by those in positions of power to keep this central principle hidden from the public. Foundations in New Thought ideas The authors of The Secret cite the New Thought movement, which began in the 19th century, as the historical basis for their ideas. The New Thought book The Science of Getting Rich by Wallace Wattles, the source Rhonda Byrne cites as inspiration for the film, was preceded by numerous other New Thought books, including the 1906 book Thought Vibration or the Law of Attraction in the Thought World by William Walker Atkinson, editor of New Thought magazine. Other New Thought books Byrne is purported to have read include Prentice Mulford's 19th-century Thoughts Are Things and Robert Collier's Secret of the Ages from 1926. Carolyn Sackariason of the Aspen Times, when commenting about Byrne's intention to share The Secret with the world, identifies the Rosicrucians as keepers of The Secret. Production The film was created by Prime Time Productions of Melbourne, Austr
https://en.wikipedia.org/wiki/Double%20subscript%20notation
In engineering, double-subscript notation is notation used to indicate some variable between two points (each point being represented by one of the subscripts). In electronics, the notation is usually used to indicate the direction of current or voltage, while in mechanical engineering it is sometimes used to describe the force or stress between two points, and sometimes even a component that spans between two points (like a beam on a bridge or truss). Although there are many cases where multiple subscripts are used, they are not necessarily called double subscript notation specifically. Electronic usage IEEE standard 255-1963, "Letter Symbols for Semiconductor Devices", defined eleven original quantity symbols expressed as abbreviations. This is the basis for a convention to standardize the directions of double-subscript labels. The following uses transistors as an example, but shows how the direction is read generally. The convention works like this: V_CB represents the voltage from C to B. In this case, C would denote the collector end of a transistor, and B would denote the base end of the same transistor. This is the same as saying "the voltage drop from C to B", though this applies the standard definitions of the letters C and B. This convention is consistent with IEC 60050-121. I_CE would in turn represent the current from C to E. In this case, C would again denote the collector end of a transistor, and E would denote the emitter end of the transistor. This is the same as saying "the current in the direction going from C to E". Power supply pins on integrated circuits utilize the same letters for denoting what kind of voltage the pin would receive. For example, a power input labeled VCC would be a positive input that would presumably connect to the collector pin of a BJT transistor in the circuit, and likewise respectively with other subscripted letters. The format used is the same as for notations described above, though without the connotation of VCC meaning
https://en.wikipedia.org/wiki/Size%20consistency%20and%20size%20extensivity
In quantum chemistry, size consistency and size extensivity are concepts relating to how the behaviour of quantum chemistry calculations changes with the size of the system. Size consistency (or strict separability) is a property that guarantees the consistency of the energy behaviour when the interaction between the involved molecular systems is nullified (for example, by distance). Size-extensivity, introduced by Bartlett, is a more mathematically formal characteristic which refers to the correct (linear) scaling of a method with the number of electrons. Let A and B be two non-interacting systems. If a given theory for the evaluation of the energy is size consistent, then the energy of the supersystem A+B, separated by a sufficiently large distance so there is essentially no shared electron density, is equal to the sum of the energy of A plus the energy of B taken by themselves: E(A+B) = E(A) + E(B). This property of size consistency is of particular importance to obtain correctly behaving dissociation curves. Others have more recently argued that the entire potential energy surface should be well-defined. Size consistency and size extensivity are sometimes used interchangeably in the literature; however, there are very important distinctions to be made between them. Hartree–Fock, coupled cluster, many-body perturbation theory (to any order), and full configuration interaction (CI) are size extensive but not always size consistent. For example, the Restricted Hartree–Fock model is not able to correctly describe the dissociation curves of H2 and therefore all post-HF methods that employ HF as a starting point will fail in that matter (so-called single-reference methods). Sometimes numerical errors can cause a method that is formally size-consistent to behave in a non-size-consistent manner. Core-extensivity is yet another related property, which extends the requirement to the proper treatment of excited states.
https://en.wikipedia.org/wiki/FO4
In digital electronics, fan-out of 4 (FO4) is a measure of time used in digital CMOS technologies: the gate delay of a component with a fan-out of 4. Fan-out = Cload / Cin, where Cload = total MOS gate capacitance driven by the logic gate under consideration and Cin = the MOS gate capacitance of the logic gate under consideration. As a delay metric, one FO4 is the delay of an inverter, driven by an inverter 4x smaller than itself, and driving an inverter 4x larger than itself. Both conditions are necessary since input signal rise/fall time affects the delay as well as output loading. FO4 is generally used as a delay metric because such a load is generally seen in case of tapered buffers driving large loads, and approximately in any logic gate of a logic path sized for minimum delay. Also, for most technologies the optimum fanout for such buffers generally varies from 2.7 to 5.3. A fan-out of 4 is the answer to the canonical problem stated as follows: Given a fixed size inverter, small in comparison to a fixed large load, minimize the delay in driving the large load. After some math, it can be shown that the minimum delay is achieved when the load is driven by a chain of N inverters, each successive inverter ~4x larger than the previous, with N ≈ log_4(Cload/Cin). In the absence of parasitic capacitances (drain diffusion capacitance and wire capacitance), the result is "a fan-out of e" (now N ≈ ln(Cload/Cin)). If the load itself is not large, then using a fan-out of 4 scaling in successive logic stages does not make sense. In these cases, minimum sized transistors may be faster. Because scaled technologies are inherently faster (in absolute terms), circuit performance can be more fairly compared using the fan-out of 4 as a metric. For example, given two 64-bit adders, one implemented in a 0.5 µm technology and the other in 90 nm technology, it would be unfair to say the 90 nm adder is better from a circuits and architecture standpoint just because it has less latency. Th
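The sizing rule stated above can be sketched in a few lines of C (the capacitance numbers are purely illustrative): pick N ≈ log_4(Cload/Cin) stages, each roughly 4x larger than the previous.

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double c_in = 1.0, c_load = 1024.0;            /* capacitances in arbitrary units */
        double ratio = c_load / c_in;

        int n = (int)lround(log(ratio) / log(4.0));    /* N ~ log4(Cload/Cin) */
        if (n < 1) n = 1;

        double per_stage = pow(ratio, 1.0 / n);        /* actual fanout per stage after rounding N */
        printf("Cload/Cin = %.0f -> %d stages, fanout %.2f per stage\n", ratio, n, per_stage);
        return 0;
    }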
https://en.wikipedia.org/wiki/PfSense
pfSense is a firewall/router computer software distribution based on FreeBSD. The open source pfSense Community Edition (CE) and pfSense Plus are installed on a physical computer or a virtual machine to make a dedicated firewall/router for a network. It can be configured and upgraded through a web-based interface, and requires no knowledge of the underlying FreeBSD system to manage. Overview The pfSense project began in 2004 as a fork of the m0n0wall project by Chris Buechler and Scott Ullrich. Its first release was in October 2006. The name derives from the fact that the software uses the packet-filtering tool, PF. Notable functions of pfSense include traffic shaping, VPNs using IPsec or PPTP, captive portal, stateful firewall, network address translation, 802.1q support for VLANs, and dynamic DNS. pfSense can be installed on hardware with an x86-64 processor architecture. It can also be installed on embedded hardware using Compact Flash or SD cards, or as a virtual machine. WireGuard protocol support In February 2021, pfSense CE 2.5.0 and pfSense Plus 21.02 added support for a kernel WireGuard implementation. Support for WireGuard was temporarily removed in March 2021 after implementation issues were discovered by WireGuard founder Jason Donenfeld. The July 2021 release of pfSense CE 2.5.2 re-included WireGuard. See also Comparison of firewalls List of router and firewall distributions
https://en.wikipedia.org/wiki/Oval%20%28projective%20plane%29
In projective geometry an oval is a point set in a plane that is defined by incidence properties. The standard examples are the nondegenerate conics. However, a conic is only defined in a pappian plane, whereas an oval may exist in any type of projective plane. In the literature, there are many criteria which imply that an oval is a conic, but there are many examples, both infinite and finite, of ovals in pappian planes which are not conics. As mentioned, in projective geometry an oval is defined by incidence properties, but in other areas, ovals may be defined to satisfy other criteria, for instance, in differential geometry by differentiability conditions in the real plane. The higher dimensional analog of an oval is an ovoid in a projective space. A generalization of the oval concept is an abstract oval, which is a structure that is not necessarily embedded in a projective plane. Indeed, there exist abstract ovals which can not lie in any projective plane. Definition of an oval In a projective plane a set Ω of points is called an oval, if: Any line l meets Ω in at most two points, and For any point P ∈ Ω there exists exactly one tangent line t through P, i.e., t ∩ Ω = {P}. When |l ∩ Ω| = 0 the line l is an exterior line (or passant), if |l ∩ Ω| = 1 a tangent line and if |l ∩ Ω| = 2 the line is a secant line. For finite planes (i.e. the set of points is finite) we have a more convenient characterization: For a finite projective plane of order n (i.e. any line contains n + 1 points) a set Ω of points is an oval if and only if |Ω| = n + 1 and no three points are collinear (on a common line). A set of points in an affine plane satisfying the above definition is called an affine oval. An affine oval is always a projective oval in the projective closure (adding a line at infinity) of the underlying affine plane. An oval can also be considered as a special quadratic set. Examples Conic sections In any pappian projective plane there exist nondegenerate projective conic sections and any nondegenerate projective conic sectio
https://en.wikipedia.org/wiki/Distributed%20constraint%20optimization
Distributed constraint optimization (DCOP or DisCOP) is the distributed analogue to constraint optimization. A DCOP is a problem in which a group of agents must distributedly choose values for a set of variables such that the cost of a set of constraints over the variables is minimized. Distributed Constraint Satisfaction is a framework for describing a problem in terms of constraints that are known and enforced by distinct participants (agents). The constraints are described on some variables with predefined domains, and have to be assigned to the same values by the different agents. Problems defined with this framework can be solved by any of the algorithms that are designed for it. The framework was used under different names in the 1980s. The first known usage with the current name is in 1990. Definitions DCOP The main ingredients of a DCOP problem are agents and variables. Importantly, each variable is owned by an agent; this is what makes the problem distributed. Formally, a DCOP can be given as a tuple ⟨A, V, D, f, α⟩, where: A is the set of agents. V is the set of variables. D is the set of variable-domains, where each D_j ∈ D is a finite set containing the possible values of variable v_j ∈ V. If D_j contains only two values (e.g. 0 or 1), then v_j is called a binary variable. f is the cost function. It is a function that maps every possible partial assignment to a cost. Usually, only a few values of f are non-zero, and it is represented as a list of the tuples that are assigned a non-zero value. Each such tuple is called a constraint. Each constraint in this set is a function assigning a real value to each possible assignment of the variables. Some special kinds of constraints are: Unary constraints - constraints on a single variable v_j, for some v_j ∈ V. Binary constraints - constraints on two variables v_j, v_k, for some v_j, v_k ∈ V. α is the ownership function. It is a function mapping each variable to its associated agent. α(v_j) = a_i means that variable v_j "belongs" to agent a_i. This implies that it is agent a_i's responsibility
https://en.wikipedia.org/wiki/Games%20People%20Play%20%28book%29
Games People Play: The Psychology of Human Relationships is a 1964 book by psychiatrist Eric Berne. The book was a bestseller at the time of its publication, despite drawing academic criticism for some of the psychoanalytic theories it presented. It popularized Berne's model of transactional analysis among a wide audience, and has been considered one of the first pop psychology books. Background The author Eric Berne was a psychiatrist specializing in psychotherapy who began developing alternate theories of interpersonal relationship dynamics in the 1950s. He sought to explain recurring patterns of interpersonal conflicts that he observed, which eventually became the basis of transactional analysis. After being rejected by a local psychoanalytic institute, he focused on writing about his own theories. In 1961, he published Transactional Analysis in Psychotherapy. That book was followed by Games People Play, in 1964. Berne did not intend for Games People Play to explore all aspects of transactional analysis, viewing it instead as an introduction to some of the concepts and patterns he identified. He borrowed money from friends and used his own savings to publish the book. Summary In the first half of the book, Berne introduces his theory of transactional analysis as a way of interpreting social interactions. He proposes that individuals encompass three roles or ego states, known as the Parent, the Adult, and the Child, which they switch between. He postulates that while Adult to Adult interactions are largely healthy, dysfunctional interactions can arise when people take on mismatched roles such as Parent and Child or Child and Adult. The second half of the book catalogues a series of "mind games" identified by Berne, in which people interact through a patterned and predictable series of "transactions" based on these mismatched roles. He states that although these interactions may seem plausible, they are actually a way to conceal hidden motivations under scripte
https://en.wikipedia.org/wiki/Iberian%20ribbed%20newt
The Iberian ribbed newt, gallipato or Spanish ribbed newt (Pleurodeles waltl) is a newt endemic to the central and southern Iberian Peninsula and Morocco. It is the largest European newt species and it is also known for its sharp ribs which can puncture through its sides, and as such is also called the sharp-ribbed newt. This species should not be confused with the different species with similar common name, the Iberian newt (Lissotriton boscai). Description The Iberian ribbed newt has tubercles running down each side. Through these, its sharp ribs can puncture. The ribs act as a defense mechanism, causing little harm to the newt. This mechanism could be considered as a primitive and rudimentary system of envenomation, but is completely harmless to humans. At the same time as pushing its ribs out the newt begins to secrete poison from special glands on its body. The poison coated ribs create a highly effective stinging mechanism, injecting toxins through the thin skin in predator's mouths. The newt's effective immune system and collagen coated ribs mean the pierced skin quickly regrows without infection. In the wild, this amphibian grows up to , but rarely more than in captivity. Its color is dark gray dorsally, and lighter gray on its ventral side, with rust-colored small spots where its ribs can protrude. This newt has a flat, spade-shaped head and a long tail, which is about half its body length. Males are more slender and usually smaller than females. The larvae have bushy external gills and usually paler color patterns than the adults. Pleurodeles waltl is more aquatic-dwelling than many other European tailed amphibians. Though they are quite able to walk on land, most rarely leave the water, living usually in ponds, cisterns, and ancient village wells that were common in Portugal and Spain in the past. They prefer cool, quiet, and deep waters, where they feed on insects, aquatic molluscs, worms, and tadpoles. Sex determination Sex determination is reg
https://en.wikipedia.org/wiki/Comparison%20of%20application%20virtualization%20software
Application virtualization software refers to both application virtual machines and software responsible for implementing them. Application virtual machines are typically used to allow application bytecode to run portably on many different computer architectures and operating systems. The application is usually run on the computer using an interpreter or just-in-time compilation (JIT). There are often several implementations of a given virtual machine, each covering a different set of functions. Comparison of virtual machines JavaScript machines not included. See List of ECMAScript engines to find them. The table here summarizes elements for which the virtual machine designs are intended to be efficient, not the list of abilities present in any implementation. Virtual machine instructions process data in local variables using a main model of computation, typically that of a stack machine, register machine, or random access machine often called the memory machine. Use of these three methods is motivated by different tradeoffs in virtual machines vs physical machines, such as ease of interpreting, compiling, and verifying for security. Memory management in these portable virtual machines is addressed at a higher level of abstraction than in physical machines. Some virtual machines, such as the popular Java virtual machines (JVM), are involved with addresses in such a way as to require safe automatic memory management by allowing the virtual machine to trace pointer references, and disallow machine instructions from manually constructing pointers to memory. Other virtual machines, such as LLVM, are more like traditional physical machines, allowing direct use and manipulation of pointers. Common Intermediate Language (CIL) offers a hybrid in between, allowing both controlled use of memory (like the JVM, which allows safe automatic memory management), while also allowing an 'unsafe' mode that allows direct pointer manipulation in ways that can violate type boundaries
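To make the stack-machine model of computation mentioned above concrete, here is a deliberately tiny interpreter sketch in C; the opcode set is invented for illustration and does not correspond to any real virtual machine.

    #include <stdio.h>

    /* Invented opcodes for a toy stack machine: push a constant, add, multiply, halt. */
    enum { OP_PUSH, OP_ADD, OP_MUL, OP_HALT };

    static int run(const int *code)
    {
        int stack[64];
        int sp = 0;                        /* number of values currently on the operand stack */

        for (int pc = 0; ; ) {
            switch (code[pc++]) {
            case OP_PUSH: stack[sp++] = code[pc++];                        break;
            case OP_ADD:  sp--; stack[sp - 1] = stack[sp - 1] + stack[sp]; break;
            case OP_MUL:  sp--; stack[sp - 1] = stack[sp - 1] * stack[sp]; break;
            case OP_HALT: return stack[sp - 1];
            }
        }
    }

    int main(void)
    {
        /* Computes (2 + 3) * 4 entirely through the operand stack; prints 20. */
        const int program[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PUSH, 4, OP_MUL, OP_HALT };
        printf("%d\n", run(program));
        return 0;
    }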
https://en.wikipedia.org/wiki/Denjoy%27s%20theorem%20on%20rotation%20number
In mathematics, the Denjoy theorem gives a sufficient condition for a diffeomorphism of the circle to be topologically conjugate to a diffeomorphism of a special kind, namely an irrational rotation. Arnaud Denjoy proved the theorem in 1932 in the course of his topological classification of homeomorphisms of the circle. He also gave an example of a C1 diffeomorphism with an irrational rotation number that is not conjugate to a rotation. Statement of the theorem Let ƒ: S1 → S1 be an orientation-preserving diffeomorphism of the circle whose rotation number θ = ρ(ƒ) is irrational. Assume that its derivative is positive, ƒ′(x) > 0, and is a continuous function with bounded variation on the interval [0,1). Then ƒ is topologically conjugate to the irrational rotation by θ. Moreover, every orbit is dense and every nontrivial interval I of the circle intersects its forward image ƒ^q(I), for some q > 0 (this means that the non-wandering set of ƒ is the whole circle). Complements If ƒ is a C2 map, then the hypothesis on the derivative holds; however, for any irrational rotation number Denjoy constructed an example showing that this condition cannot be relaxed to C1, continuous differentiability of ƒ. Vladimir Arnold showed that the conjugating map need not be smooth, even for an analytic diffeomorphism of the circle. Later Michel Herman proved that nonetheless, the conjugating map of an analytic diffeomorphism is itself analytic for "most" rotation numbers, forming a set of full Lebesgue measure, namely, for those that are badly approximable by rational numbers. His results are even more general and specify the differentiability class of the conjugating map for Cr diffeomorphisms with any r ≥ 3. See also Circle map
https://en.wikipedia.org/wiki/Algorithmically%20random%20sequence
Intuitively, an algorithmically random sequence (or random sequence) is a sequence of binary digits that appears random to any algorithm running on a (prefix-free or not) universal Turing machine. The notion can be applied analogously to sequences on any finite alphabet (e.g. decimal digits). Random sequences are key objects of study in algorithmic information theory. As different types of algorithms are sometimes considered, ranging from algorithms with specific bounds on their running time to algorithms which may ask questions of an oracle machine, there are different notions of randomness. The most common of these is known as Martin-Löf randomness (K-randomness or 1-randomness), but stronger and weaker forms of randomness also exist. When the term "algorithmically random" is used to refer to a particular single (finite or infinite) sequence without clarification, it is usually taken to mean "incompressible" or, in the case the sequence is infinite and prefix algorithmically random (i.e., K-incompressible), "Martin-Löf–Chaitin random". It is important to disambiguate between algorithmic randomness and stochastic randomness. Unlike algorithmic randomness, which is defined for computable (and thus deterministic) processes, stochastic randomness is usually said to be a property of a sequence that is a priori known to be generated by (or is the outcome of) an independent identically distributed equiprobable stochastic process. Because infinite sequences of binary digits can be identified with real numbers in the unit interval, random binary sequences are often called (algorithmically) random real numbers. Additionally, infinite binary sequences correspond to characteristic functions of sets of natural numbers; therefore those sequences might be seen as sets of natural numbers. The class of all Martin-Löf random (binary) sequences is denoted by RAND or MLR. History The first suitable definition of a random sequence was given by Per Martin-Löf in 1966. Earlier
https://en.wikipedia.org/wiki/Lebesgue%27s%20density%20theorem
In mathematics, Lebesgue's density theorem states that for any Lebesgue measurable set A ⊆ Rn, the "density" of A is 0 or 1 at almost every point in Rn. Additionally, the "density" of A is 1 at almost every point in A. Intuitively, this means that the "edge" of A, the set of points in A whose "neighborhood" is partially in A and partially outside of A, is negligible. Let μ be the Lebesgue measure on the Euclidean space Rn and A be a Lebesgue measurable subset of Rn. Define the approximate density of A in an ε-neighborhood of a point x in Rn as d_ε(x) = μ(A ∩ B_ε(x)) / μ(B_ε(x)), where B_ε(x) denotes the closed ball of radius ε centered at x. Lebesgue's density theorem asserts that for almost every point x of A the density exists and is equal to 0 or 1. In other words, for every measurable set A, the density of A is 0 or 1 almost everywhere in Rn. However, if μ(A) > 0 and μ(Rn ∖ A) > 0, then there are always points of Rn where the density is neither 0 nor 1. For example, given a square in the plane, the density at every point inside the square is 1, on the edges is 1/2, and at the corners is 1/4. The set of points in the plane at which the density is neither 0 nor 1 is non-empty (the square boundary), but it is negligible. The Lebesgue density theorem is a particular case of the Lebesgue differentiation theorem. Thus, this theorem is also true for every finite Borel measure on Rn instead of Lebesgue measure, see Discussion. See also
https://en.wikipedia.org/wiki/Jade%20Ribbon%20Campaign
The Jade Ribbon Campaign (JRC) also known as JoinJade, was launched by the Asian Liver Center (ALC) at Stanford University in May 2001 during Asian Pacific American Heritage Month to help spread awareness internationally about hepatitis B (HBV) and liver cancer in Asian and Pacific Islander (API) communities. The objective of the Jade Ribbon Campaign is twofold: (1) to eradicate HBV worldwide; and (2) to reduce the incidence and mortality associated with liver cancer. Considered to be the essence of heaven and earth, Jade is believed in many Asian cultures to bring good luck and longevity while deflecting negativity. Folded like the Chinese character "人" (ren) meaning "person" or "people", the Jade Ribbon symbolizes the spirit of the campaign in bringing the Asian and global community together to combat this silent epidemic. Outreach efforts Since the campaign's founding, the Asian Liver Center (ALC) has been spearheading the Jade Ribbon Campaign through public service announcements in various media such as newspapers, magazines, TV, radio, billboard, and buses targeting communities with large API populations. The ALC also holds numerous seminars for health professionals and the public, cultural fairs, conferences, and HBV screening/vaccination events. 3 For Life One of the ALC's largest achievements was the founding of 3 for Life in September 2004, a pilot program in collaboration with the San Francisco Department of Public Health that provided low-cost hepatitis A and B vaccinations and free hepatitis B testing to the San Francisco community every first and third Saturday of the month for a year. The program tested and vaccinated over 1,200 people—50% of which were found to be unprotected against HBV and 10% to be positive for HBV. Upon the completion of 3 for Life in September 2005, the ALC currently is working on plans to launch a similar screening/vaccination program to service the large API population in Los Angeles. LIVERight The Answer to Cancer (A2C)
https://en.wikipedia.org/wiki/Locally%20integrable%20function
In mathematics, a locally integrable function (sometimes also called locally summable function) is a function which is integrable (so its integral is finite) on every compact subset of its domain of definition. The importance of such functions lies in the fact that their function space is similar to Lp spaces, but its members are not required to satisfy any growth restriction on their behavior at the boundary of their domain (at infinity if the domain is unbounded): in other words, locally integrable functions can grow arbitrarily fast at the domain boundary, but are still manageable in a way similar to ordinary integrable functions. Definition Standard definition. Let Ω be an open set in the Euclidean space Rn and f : Ω → C be a Lebesgue measurable function. If f is such that ∫_K |f| dx < +∞ for every compact subset K of Ω, i.e. its Lebesgue integral is finite on all compact subsets of Ω, then f is called locally integrable. The set of all such functions is denoted by L1_loc(Ω) = {f : Ω → C measurable : f|_K ∈ L1(K) for all compact K ⊂ Ω}, where f|_K denotes the restriction of f to the set K. The classical definition of a locally integrable function involves only measure theoretic and topological concepts and can be carried over abstractly to complex-valued functions on a topological measure space: however, since the most common application of such functions is to distribution theory on Euclidean spaces, all the definitions in this and the following sections deal explicitly only with this important case. An alternative definition. Let Ω be an open set in the Euclidean space Rn. Then a function f : Ω → C such that ∫_Ω |f φ| dx < +∞ for each test function φ ∈ C_c^∞(Ω) is called locally integrable, and the set of such functions is denoted by L1_loc(Ω). Here C_c^∞(Ω) denotes the set of all infinitely differentiable functions with compact support contained in Ω. This definition has its roots in the approach to measure and integration theory based on the concept of continuous linear functional on a topological vector space, developed by the Nicolas Bourbaki school: it is also the one adopted by several other authors. This "distribution theoretic" definition is eq
https://en.wikipedia.org/wiki/X-ray%20optics
X-ray optics is the branch of optics that manipulates X-rays instead of visible light. It deals with focusing and other ways of manipulating the X-ray beams for research techniques such as X-ray crystallography, X-ray fluorescence, small-angle X-ray scattering, X-ray microscopy, X-ray phase-contrast imaging, and X-ray astronomy. Since X-rays and visible light are both electromagnetic waves they propagate in space in the same way, but because of the much higher frequency and photon energy of X-rays they interact with matter very differently. Visible light is easily redirected using lenses and mirrors, but because the real part of the complex refractive index of all materials is very close to 1 for X-rays, they instead tend to initially penetrate and eventually get absorbed in most materials without changing direction much. X-ray techniques There are many different techniques used to redirect X-rays, most of them changing the directions by only minute angles. The most common principle used is reflection at grazing incidence angles, either using total external reflection at very small angles or multilayer coatings. Other principles used include diffraction and interference in the form of zone plates, refraction in compound refractive lenses that use many small X-ray lenses in series to compensate by their number for the minute index of refraction, Bragg reflection from a crystal plane in flat or bent crystals. X-ray beams are often collimated or reduced in size using pinholes or movable slits typically made of tungsten or some other high-Z material. Narrow parts of an X-ray spectrum can be selected with monochromators based on one or multiple Bragg reflections by crystals. X-ray spectra can also be manipulated by having the X-rays pass through a filter (optics). This will typically reduce the low-energy part of the spectrum, and possibly parts above absorption edges of the elements used for the filter. Focusing optics Analytical X-ray techniques such as X-ray cry
https://en.wikipedia.org/wiki/Hazard%20%28logic%29
In digital logic, a hazard is an undesirable effect caused by either a deficiency in the system or external influences in both synchronous and asynchronous circuits. Logic hazards are manifestations of a problem in which changes in the input variables do not change the output correctly due to some form of delay caused by logic elements (NOT, AND, OR gates, etc.) This results in the logic not performing its function properly. The three different most common kinds of hazards are usually referred to as static, dynamic and function hazards. Hazards are a temporary problem, as the logic circuit will eventually settle to the desired function. Therefore, in synchronous designs, it is standard practice to register the output of a circuit before it is being used in a different clock domain or routed out of the system, so that hazards do not cause any problems. If that is not the case, however, it is imperative that hazards be eliminated as they can have an effect on other connected systems. Static hazards A static hazard is a change of a signal state twice in a row when the signal is expected to stay constant. When one input signal changes, the output changes momentarily before stabilizing to the correct value. There are two types of static hazards: Static-1 Hazard: the output is currently 1 and after the inputs change, the output momentarily changes to 0,1 before settling on 1 Static-0 Hazard: the output is currently 0 and after the inputs change, the output momentarily changes to 1,0 before settling on 0 In properly formed two-level AND-OR logic based on a Sum Of Products expression, there will be no static-0 hazards (but may still have static-1 hazards). Conversely, there will be no static-1 hazards in an OR-AND implementation of a Product Of Sums expression (but may still have static-0 hazards). The most commonly used method to eliminate static hazards is to add redundant logic (consensus terms in the logic expression). Example of a static hazard Consider an i
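The article's worked example is cut off above; a standard textbook illustration of the consensus-term fix (supplied here, not taken from the source) is:

    F = AB + \bar{A}C \quad\text{has a static-1 hazard when } B = C = 1 \text{ and } A \text{ toggles,}

because during the transition of A both product terms can momentarily be 0; adding the consensus term BC,

    F = AB + \bar{A}C + BC,

holds the output at 1 throughout the transition and removes the hazard.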
https://en.wikipedia.org/wiki/X.PC
X.PC is a deprecated communications protocol developed by McDonnell-Douglas for connecting a personal computer to its Tymnet packet-switched public data telecommunications network. It is a subset of X.25, a CCITT standard for packet-switched networks. It is a full-duplex, asynchronous and error-correcting network protocol that supports up to 15 simultaneous channels. It maintains automatic error correction during any communications session between two or more computers. X.PC was originally developed to enable connections up to 9600 baud. Unlike MNP, a competing standard proposed by Microcom, X.PC was placed in the public domain for royalty-free usage. MNP, on the other hand, initially required a $2,500 licensing fee. Microcom battled X.PC for acceptance in the marketplace, and eventually released MNP 1 through 3 into the public domain to compete. At the time, several modem manufacturers supported MNP in their products, Microcom and Racal-Vadic being major examples. Several companies announced support for X.PC, including Hayes, but none of the companies announcing support offered it in their modems. X.PC quickly disappeared, and Microcom would go on to release MNP 4 and 5 into the public domain as well.
https://en.wikipedia.org/wiki/Paragraphos
A paragraphos (, , from , 'beside', and , 'to write') was a mark in ancient Greek punctuation, marking a division in a text (as between speakers in a dialogue or drama) or drawing the reader's attention to another division mark, such as the two dot punctuation mark . There are many variants of this symbol, sometimes supposed to have developed from Greek gamma (), the first letter of the word . It was usually placed at the beginning of a line and trailing a little way under or over the text. It was referenced by Aristotle, who was dismissive of its use. Unicode encodes multiple versions: See also Obelus and Obelism, Greek marginal notes Coronis, the Greek paragraph mark Pilcrow (¶), the English paragraph mark Section sign (§), the English section mark
https://en.wikipedia.org/wiki/XPDL
The XML Process Definition Language (XPDL) is a format standardized by the Workflow Management Coalition (WfMC) to interchange business process definitions between different workflow products, i.e. between different modeling tools and management suites. XPDL defines an XML schema for specifying the declarative part of workflow / business process. XPDL is designed to exchange the process definition, both the graphics and the semantics of a workflow business process. XPDL is currently the best file format for exchange of BPMN diagrams; it has been designed specifically to store all aspects of a BPMN diagram. XPDL contains elements to hold graphical information, such as the X and Y position of the nodes, as well as executable aspects which would be used to run a process. This distinguishes XPDL from BPEL which focuses exclusively on the executable aspects of the process. BPEL does not contain elements to represent the graphical aspects of a process diagram. It is possible to say that XPDL is the XML Serialization of BPMN. History The Workflow Management Coalition, founded in August 1993, began by defining the Workflow Reference Model (ultimately published in 1995) that outlined the five key interfaces that a workflow management system must have. Interface 1 was for defining the business process, which includes two aspects: a process definition expression language and a programmatic interface to transfer the process definition to/from the workflow management system. The first revision of a process definition expression language was called Workflow Process Definition Language (WPDL) which was published in 1998. This process meta-model contained all the key concepts required to support workflow automation expressed using URL Encoding. Interoperability demonstrations were held to confirm the usefulness of this language as a way to communicate process models. By 1998, the first standards based on XML began to appear. The Workflow Management Coalition Working Gr
https://en.wikipedia.org/wiki/Adaptive%20heap%20sort
In computer science, adaptive heap sort is a comparison-based sorting algorithm of the adaptive sort family. It is a variant of heap sort that performs better when the data contains existing order. Published by Christos Levcopoulos and Ola Petersson in 1992, the algorithm utilizes a new measure of presortedness, Osc, the number of oscillations. Instead of putting all the data into the heap as the traditional heap sort does, adaptive heap sort only takes part of the data into the heap, so that the run time is reduced significantly when the presortedness of the data is high. Heapsort Heap sort is a sorting algorithm that utilizes a binary heap data structure. The method treats an array as a complete binary tree and builds up a Max-Heap/Min-Heap to achieve sorting. It usually involves the following four steps.
1. Build a Max-Heap (Min-Heap): put all the data into the heap so that all nodes are either greater than or equal to (less than or equal to for a Min-Heap) each of their child nodes.
2. Swap the first element of the heap with the last element of the heap, then remove the last element from the heap and put it at the end of the list.
3. Adjust the heap so that the first element ends up at the right place in the heap.
4. Repeat Steps 2 and 3 until the heap has only one element. Put this last element at the end of the list and output the list. The data in the list will be sorted.
Below is a C/C++ implementation that builds up a Max-Heap and sorts the array after the heap is built.

    /* A C/C++ sample heap sort code that sorts an array into increasing order */

    // A function that builds up a max-heap binary tree
    void heapify(int array[], int start, int end)
    {
        int parent = start;
        int child = parent * 2 + 1;
        while (child <= end)
        {
            if (child + 1 <= end)   // when there are two child nodes
            {
                if (array[child + 1] > array[child])
                {
                    child++;        // take the bigger child node
                }
            }
            if (array[parent] > array[child])
            {
                return;             // if the parent node is greater, the subtree is already a heap
            }
            else
            {
                int temp = array[parent];       // otherwise swap parent and child,
                array[parent] = array[child];
                array[child] = temp;
                parent = child;                 // and continue sifting down
                child = parent * 2 + 1;
            }
        }
    }
https://en.wikipedia.org/wiki/Adaptive%20sort
A sorting algorithm falls into the adaptive sort family if it takes advantage of existing order in its input. It benefits from the presortedness in the input sequence – or a limited amount of disorder for various definitions of measures of disorder – and sorts faster. Adaptive sorting is usually performed by modifying existing sorting algorithms. Motivation Comparison-based sorting algorithms have traditionally dealt with achieving an optimal bound of O(n log n) when dealing with time complexity. Adaptive sort takes advantage of the existing order of the input to try to achieve better times, so that the time taken by the algorithm to sort is a smoothly growing function of the size of the sequence and the disorder in the sequence. In other words, the more presorted the input is, the faster it should be sorted. This is an attractive feature for a sorting algorithm because nearly sorted sequences are common in practice. Thus, the performance of existing sort algorithms can be improved by taking into account the existing order in the input. Note that most sorting algorithms that do optimally well in the worst case, notably heap sort and merge sort, do not take existing order within their input into account, although this deficiency is easily rectified in the case of merge sort by checking if the last element of the lefthand group is less than (or equal to) the first element of the righthand group, in which case a merge operation may be replaced by simple concatenation – a modification that is well within the scope of making an algorithm adaptive. Examples A classic example of an adaptive sorting algorithm is Straight Insertion Sort. In this sorting algorithm, we scan the input from left to right, repeatedly finding the position of the current item and inserting it into an array of previously sorted items. In pseudo-code form, the Straight Insertion Sort algorithm could look something like this (array X is zero-based): procedure Straight Insertion
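The pseudocode listing is cut off above; as a stand-in, here is a plain C version of straight insertion sort over a zero-based array (an illustrative sketch, not the article's original listing), which does little work when the input is already nearly sorted:

    #include <stdio.h>

    /* Straight insertion sort: scan left to right and insert each element into the
     * already-sorted prefix to its left.  Nearly sorted inputs need few moves,
     * which is what makes the algorithm adaptive. */
    static void straight_insertion_sort(int x[], int n)
    {
        for (int i = 1; i < n; i++) {
            int key = x[i];
            int j = i - 1;
            while (j >= 0 && x[j] > key) {   /* shift larger elements one slot to the right */
                x[j + 1] = x[j];
                j--;
            }
            x[j + 1] = key;
        }
    }

    int main(void)
    {
        int x[] = { 1, 2, 4, 3, 5, 6, 8, 7 };   /* a nearly sorted input */
        int n = (int)(sizeof x / sizeof x[0]);
        straight_insertion_sort(x, n);
        for (int i = 0; i < n; i++) printf("%d ", x[i]);
        printf("\n");
        return 0;
    }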
https://en.wikipedia.org/wiki/Basketball%20statistics
Statistics in basketball are kept to evaluate a player's or a team's performance. Examples Examples of basketball statistics include: GM, GP: games played; GS: games started PTS: points FGM, FGA, FG%: field goals made, attempted and percentage FTM, FTA, FT%: free throws made, attempted and percentage 3FGM, 3FGA, 3FG%: three-point field goals made, attempted and percentage REB, OREB, DREB: rebounds, offensive rebounds, defensive rebounds AST: assists STL: steals BLK: blocks TO: turnovers TD: triple double EFF: efficiency: NBA's efficiency rating: (PTS + REB + AST + STL + BLK − ((FGA − FGM) + (FTA − FTM) + TO)) PF: personal fouls MIN: minutes AST/TO: assist to turnover ratio PER: Player Efficiency Rating: John Hollinger's Player Efficiency Rating PIR: Performance Index Rating: Euroleague's and Eurocup's Performance Index Rating: (Points + Rebounds + Assists + Steals + Blocks + Fouls Drawn) − (Missed Field Goals + Missed Free Throws + Turnovers + Shots Rejected + Fouls Committed) Averages per game are denoted by *PG (e.g. BLKPG or BPG, STPG or SPG, APG, RPG and MPG). Sometimes a player's statistics are divided by minutes played and multiplied by 48 minutes (as if he had played the entire game), denoted by * per 48 min. or *48M. A player who reaches double digits in a game in any two of the PTS, REB, AST, STL, and BLK statistics is said to make a double double; in three statistics, a triple double; in four statistics, a quadruple double. A quadruple double is extremely rare (and has only occurred four times in the NBA). There is also a 5x5, when a player records at least 5 in each of the five statistics. The NBA also posts to the statistics section of its Web site a simple composite efficiency statistic, denoted EFF and derived by the formula, ((Points + Rebounds + Assists + Steals + Blocks) − ((Field Goals Attempted − Field Goals Made) + (Free Throws Attempted − Free Throws Made) + Turnovers)). While conveniently distilling most of a player's key statist
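The EFF formula quoted above is simple enough to compute directly; the following C sketch uses a made-up stat line purely as an illustration.

    #include <stdio.h>

    /* NBA composite efficiency:
     * EFF = (PTS + REB + AST + STL + BLK) - ((FGA - FGM) + (FTA - FTM) + TO) */
    static int eff(int pts, int reb, int ast, int stl, int blk,
                   int fga, int fgm, int fta, int ftm, int to)
    {
        return (pts + reb + ast + stl + blk) - ((fga - fgm) + (fta - ftm) + to);
    }

    int main(void)
    {
        /* Hypothetical stat line: 27 points on 10/18 shooting, 5/6 free throws,
         * 8 rebounds, 6 assists, 2 steals, 1 block, 3 turnovers. */
        printf("EFF = %d\n", eff(27, 8, 6, 2, 1, 18, 10, 6, 5, 3));   /* prints EFF = 32 */
        return 0;
    }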
https://en.wikipedia.org/wiki/Renato%20Caccioppoli
Renato Caccioppoli (20 January 1904 – 8 May 1959) was an Italian mathematician, known for his contributions to mathematical analysis, including the theory of functions of several complex variables, functional analysis, and measure theory. Life and career Born in Naples, he was the son of Giuseppe Caccioppoli (1852–1947), a surgeon, and his second wife Sofia Bakunin (1870–1956), daughter of the Russian revolutionary Mikhail Bakunin. After earning his high-school diploma in 1921, he enrolled in the department of engineering, switching to mathematics in November 1923. Immediately after earning his laurea in 1925, he became the assistant of Mauro Picone, who in that year was called to the University of Naples, where he remained until 1932. Picone immediately discovered Caccioppoli's brilliance and pointed him towards research in mathematical analysis. During the following five years, Caccioppoli published about 30 works on topics developed in the complete autonomy provided by a ministerial award for mathematics in 1931 (a competition he won at the age of 27) and by the chair of algebraic analysis at the University of Padova. In 1934 he returned to Naples to accept the chair in group theory; later he took the chair of superior analysis, and from 1943 onwards, the chair in mathematical analysis. In 1931 he became a correspondent member of the Academy of Physical and Mathematical Sciences of Naples, becoming an ordinary member in 1938. In 1944 he became an ordinary member of the Accademia Pontaniana, and in 1947 a correspondent member of the Accademia Nazionale dei Lincei, and a national member in 1958. He was also a correspondent member of the Paduan Academy of Sciences, Letters, and Arts. In the years from 1947 to 1957, he directed, together with Carlo Miranda, the journal Giornale di Matematiche, founded by Giuseppe Battaglini. In 1948 he became a member of the editing committee of Annali di Matematica, and starting in 1952 he was also a member of the editing committee of Rice
https://en.wikipedia.org/wiki/Thermodynamic%20diagrams
Thermodynamic diagrams are diagrams used to represent the thermodynamic states of a material (typically fluid) and the consequences of manipulating this material. For instance, a temperature–entropy diagram (T–s diagram) may be used to demonstrate the behavior of a fluid as it is changed by a compressor. Overview Especially in meteorology they are used to analyze the actual state of the atmosphere derived from the measurements of radiosondes, usually obtained with weather balloons. In such diagrams, temperature and humidity values (represented by the dew point) are displayed with respect to pressure. Thus the diagram gives at a first glance the actual atmospheric stratification and vertical water vapor distribution. Further analysis gives the actual base and top height of convective clouds or possible instabilities in the stratification. By assuming the energy amount due to solar radiation it is possible to predict the 2 m (6.6 ft) temperature, humidity, and wind during the day, the development of the boundary layer of the atmosphere, the occurrence and development of clouds and the conditions for soaring flight during the day. The main feature of thermodynamic diagrams is the equivalence between the area in the diagram and energy. When air changes pressure and temperature during a process and prescribes a closed curve within the diagram the area enclosed by this curve is proportional to the energy which has been gained or released by the air. Types of thermodynamic diagrams General purpose diagrams include: PV diagram T–s diagram h–s (Mollier) diagram Psychrometric chart Cooling curve Indicator diagram Saturation vapor curve Thermodynamic surface Specific to weather services, there are mainly three different types of thermodynamic diagrams used: Skew-T log-P diagram Tephigram Emagram All three diagrams are derived from the physical P–alpha diagram which combines pressure (P) and specific volume (alpha) as its basic coordinates. The P–alpha diagram s
https://en.wikipedia.org/wiki/Animal%20model%20of%20ischemic%20stroke
Animal models of ischemic stroke are procedures that induce cerebral ischemia. Their aim is the study of basic disease processes or potential therapeutic interventions, and the extension of pathophysiological knowledge of, and/or the improvement of medical treatment for, human ischemic stroke. Ischemic stroke has a complex pathophysiology involving the interplay of many different cells and tissues, such as neurons, glia, endothelium, and the immune system. These events cannot yet be mimicked satisfactorily in vitro, so a large portion of stroke research is conducted on animals. Overview Several models in different species are currently known to produce cerebral ischemia. Global ischemia models, both complete and incomplete, tend to be easier to perform. However, they are less immediately relevant to human stroke than focal stroke models, because global ischemia is not a common feature of human stroke; it is nevertheless relevant in some settings, e.g. global anoxic brain damage due to cardiac arrest. Species also vary in their susceptibility to the various types of ischemic insult: gerbils, for example, lack a complete circle of Willis, so stroke can be induced by common carotid artery occlusion alone. Mechanisms of inducing ischemic stroke Some of the mechanisms which have been used are: Complete global ischemia Decapitation Aorta/vena cava occlusion External neck tourniquet or cuff Cardiac arrest Incomplete global ischemia Hemorrhage or hypotension Hypoxic ischemia Intracranial hypertension and common carotid artery occlusion Two-vessel occlusion and hypotension Four-vessel occlusion Unilateral common carotid artery occlusion (in some species only) Focal cerebral ischemia Endothelin-1-induced constriction of arteries and veins Middle cerebral artery occlusion Spontaneous brain infarction (in spontaneously hypertensive rats) Macrosphere embolization Multifocal cerebral ischemia Blood clot embolizatio
https://en.wikipedia.org/wiki/Perfectly%20matched%20layer
A perfectly matched layer (PML) is an artificial absorbing layer for wave equations, commonly used to truncate computational regions in numerical methods to simulate problems with open boundaries, especially in the finite-difference time-domain (FDTD) and finite element (FE) methods. The key property of a PML that distinguishes it from an ordinary absorbing material is that it is designed so that waves incident upon the PML from a non-PML medium do not reflect at the interface; this property allows the PML to strongly absorb outgoing waves from the interior of a computational region without reflecting them back into the interior. PML was originally formulated by Berenger in 1994 for use with Maxwell's equations, and since that time there have been several related reformulations of PML, both for Maxwell's equations and for other wave-type equations, such as elastodynamics, the linearized Euler equations, Helmholtz equations, and poroelasticity. Berenger's original formulation is called a split-field PML, because it splits the electromagnetic fields into two unphysical fields in the PML region. A later formulation that has become more popular because of its simplicity and efficiency is called uniaxial PML or UPML, in which the PML is described as an artificial anisotropic absorbing material. Although both Berenger's formulation and UPML were initially derived by manually constructing the conditions under which plane waves incident from a homogeneous medium do not reflect at the PML interface, both formulations were later shown to be equivalent to a much more elegant and general approach: stretched-coordinate PML. In particular, PMLs were shown to correspond to a coordinate transformation in which one (or more) coordinates are mapped to complex numbers; more technically, this is an analytic continuation of the wave equation into complex coordinates, replacing propagating (oscillating) waves by exponentially decaying waves. This viewpoint allows PMLs to be derived for inhomogeneous media such as wavegui
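As a rough illustration of the stretched-coordinate viewpoint described above, the sketch below continues the coordinate of a one-dimensional outgoing wave into the complex plane, x → x + (i/ω)∫σ(x′)dx′. It is an assumption-laden toy rather than the article's formulation: unit wave speed, the frequency, the layer position, and the σ profile are all made-up parameters. It simply shows that the wave keeps its phase but decays exponentially once the absorption profile turns on.

```python
# Minimal sketch of the stretched-coordinate picture of a PML in 1D (toy
# parameters, unit wave speed so k = omega).  Inside the layer the coordinate
# is analytically continued to x_tilde(x) = x + (i/omega) * ∫_0^x sigma dx',
# so the outgoing wave exp(i*k*x) becomes exp(i*k*x) * exp(-(k/omega)*S(x)):
# same phase, exponential decay, and no reflection because sigma ramps up
# smoothly from zero at the interface.

import numpy as np

omega = 2 * np.pi              # angular frequency (assumed)
k = omega                      # wavenumber for unit wave speed
x = np.linspace(0.0, 2.0, 2001)
x_pml = 1.5                    # PML starts here (assumed)
sigma_max = 40.0               # absorption strength (assumed)

# Quadratic conductivity profile, zero outside the layer.
sigma = np.where(x > x_pml,
                 sigma_max * ((x - x_pml) / (x.max() - x_pml)) ** 2,
                 0.0)

# Cumulative integral S(x) = ∫_0^x sigma dx' (trapezoid rule), then stretch.
S = np.concatenate(([0.0],
                    np.cumsum(0.5 * (sigma[1:] + sigma[:-1]) * np.diff(x))))
x_tilde = x + 1j * S / omega

u = np.exp(1j * k * x_tilde)   # outgoing wave in the stretched coordinate

print("amplitude at PML entrance:", abs(u[np.searchsorted(x, x_pml)]))  # ~1.0
print("amplitude at outer edge:  ", abs(u[-1]))                         # << 1
```

A discretized solver would apply the same 1/s(x) = 1/(1 + iσ(x)/ω) stretching factor to the spatial derivatives; the snippet only demonstrates why the continued coordinate absorbs without reflecting.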
https://en.wikipedia.org/wiki/Replisome
The replisome is a complex molecular machine that carries out replication of DNA. The replisome first unwinds double-stranded DNA into two single strands. For each of the resulting single strands, a new complementary sequence of DNA is synthesized. The net result is the formation of two new double-stranded DNA sequences that are exact copies of the original double-stranded DNA sequence. In terms of structure, the replisome is composed of two replicative polymerase complexes, one of which synthesizes the leading strand, while the other synthesizes the lagging strand. The replisome comprises a number of proteins, including helicase, RFC, PCNA, gyrase/topoisomerase, SSB/RPA, primase, DNA polymerase III, RNase H, and ligase. Overview of prokaryotic DNA replication process For prokaryotes, each dividing nucleoid (the region containing genetic material, which is not enclosed in a nucleus) requires two replisomes for bidirectional replication. The two replisomes continue replication at both forks in the middle of the cell, and finally, as the termination site replicates, the two replisomes separate from the DNA. The replisomes remain at a fixed midcell location, attached to the membrane, and the template DNA is threaded through this stationary pair. Overview of eukaryotic DNA replication process For eukaryotes, numerous replication bubbles form at origins of replication throughout the chromosome. As with prokaryotes, two replisomes are required, one at each replication fork located at the terminus of the replication bubble. Because of significant differences in chromosome size, and the associated complexities of highly condensed chromosomes, various aspects of the DNA replication process in eukaryotes, including the terminal phases, are less well characterised than for prokaryotes. Challenges of DNA replication The replisome is a system in which various factors work together to solve the structural and che
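The net outcome described above, two daughter duplexes that are each an exact copy of the parent, can be sketched in a few lines of code. This is a hypothetical toy, not a model of the replisome's machinery: it ignores strand polarity, primers, and leading/lagging-strand synthesis, and simply pairs each template strand with a newly "synthesized" complement.

```python
# Toy sketch (not from the article) of semiconservative replication's net
# result: the parental duplex is "unwound" into two strands, a complementary
# strand is synthesized against each template, and each daughter duplex is
# identical to the parent (one old strand + one new strand).  The sequence is
# arbitrary; real synthesis proceeds 5'->3' with leading and lagging strands,
# which this sketch does not model.

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def complement(strand: str) -> str:
    """Base-paired partner of a strand (same orientation, for simplicity)."""
    return strand.translate(COMPLEMENT)

parent_top = "ATGCGTACCTTAGC"           # arbitrary example sequence
parent_bottom = complement(parent_top)  # the paired parental strand

# "Unwind" and copy: each parental strand templates a new complementary strand.
daughter1 = (parent_top, complement(parent_top))        # old top + new bottom
daughter2 = (complement(parent_bottom), parent_bottom)  # new top + old bottom

assert daughter1 == (parent_top, parent_bottom)
assert daughter2 == (parent_top, parent_bottom)
print("both daughter duplexes are exact copies of the parental duplex")
```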
https://en.wikipedia.org/wiki/Cavity%20method
The cavity method is a mathematical method presented by Marc Mézard, Giorgio Parisi and Miguel Angel Virasoro in 1987 to solve certain mean-field models in statistical physics, specially adapted to disordered systems. The method has been used to compute properties of ground states in many condensed matter and optimization problems. Initially invented to deal with the Sherrington–Kirkpatrick model of spin glasses, the cavity method has since shown wider applicability. It can be regarded as a generalization of the Bethe–Peierls iterative method on tree-like graphs to the case of graphs with loops that are not too short. The different approximations that can be made with the cavity method are usually named after the corresponding steps of the replica method, which is mathematically more subtle and less intuitive than the cavity approach. The cavity method has proved useful in the solution of optimization problems such as k-satisfiability and graph coloring. It has not only yielded predictions of ground-state energies in the average case, but has also inspired algorithmic methods. See also The cavity method originated in the context of statistical physics, but is also closely related to methods from other areas such as belief propagation.
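As a concrete illustration of the connection to belief propagation noted above, the sketch below iterates the replica-symmetric cavity equations for an Ising model on a small sparse graph and reads off local magnetizations from the converged cavity fields. The graph, couplings, field, and temperature are arbitrary assumptions, not values from the article.

```python
# Minimal sketch of the replica-symmetric cavity (belief-propagation) equations
# for an Ising model on a sparse graph:
#     h_{i->j} = H + sum_{k in N(i)\{j}} u(J_{ki}, h_{k->i}),
#     u(J, h)  = (1/beta) * atanh( tanh(beta*J) * tanh(beta*h) ),
# iterated to a fixed point; local magnetizations are m_i = tanh(beta * H_i),
# where H_i sums all incoming cavity biases plus the external field.

import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]   # small loopy graph (assumed)
J = {e: 1.0 for e in edges}                        # ferromagnetic couplings
beta, H = 0.6, 0.05                                # inverse temperature, field

neighbors = {i: set() for i in range(4)}
for a, b in edges:
    neighbors[a].add(b)
    neighbors[b].add(a)

def coupling(a, b):
    return J[(a, b)] if (a, b) in J else J[(b, a)]

def u(Jij, h):
    return np.arctanh(np.tanh(beta * Jij) * np.tanh(beta * h)) / beta

# Cavity fields h[(i, j)]: field acting on spin i when neighbor j is removed.
h = {(i, j): 0.0 for i in neighbors for j in neighbors[i]}
for _ in range(200):                               # plain fixed-point iteration
    h = {(i, j): H + sum(u(coupling(k, i), h[(k, i)])
                         for k in neighbors[i] - {j})
         for (i, j) in h}

m = {i: np.tanh(beta * (H + sum(u(coupling(k, i), h[(k, i)])
                                for k in neighbors[i])))
     for i in neighbors}
print({i: round(mi, 3) for i, mi in m.items()})
```

On a tree these equations are exact (the Bethe–Peierls case); on a sparse graph with long loops they are the usual cavity approximation, and the same update rule is what belief propagation computes.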