https://en.wikipedia.org/wiki/TUGboat
|
TUGboat (DOI prefix 10.47397) is a journal published three times per year by the TeX Users Group. It covers a wide range of topics in digital typography relevant to the TeX typesetting system. The editor is Barbara Beeton.
See also
The PracTeX Journal
External links
TUGboat home page
List of TeX-related publications and journals
TeX
Typesetting
Academic journals established in 1980
Computer science journals
|
https://en.wikipedia.org/wiki/Harvard%20Mark%20II
|
The Harvard Mark II, also known as the Aiken Relay Calculator, was an electromechanical computer built under the direction of Howard Aiken at Harvard University, completed in 1947. It was financed by the United States Navy and used for ballistic calculations at Naval Proving Ground Dahlgren. Howard Aiken and Grace Hopper worked together to build and program the Mark II.
Overview
The contract to build the Mark II was signed with Harvard in February 1945, after the successful demonstration of the Mark I in 1944. It was completed and debugged in 1947, and delivered to the US Navy Proving Ground at Dahlgren, Virginia in March 1948, becoming fully operational by the end of that year.
The Mark II was constructed with high-speed electromagnetic relays instead of the electro-mechanical counters used in the Mark I, making it much faster than its predecessor. It was a physically massive machine, occupying a large floor area. Its addition time was 0.125 seconds (8 Hz) and its multiplication time was 0.750 seconds, a factor of 2.6 faster for addition and a factor of 8 faster for multiplication compared to the Mark I. It was the second machine (after the Bell Labs Relay Calculator) to have floating-point hardware. A distinctive feature of the Mark II was its built-in hardware for several functions such as the reciprocal, square root, logarithm, exponential, and some trigonometric functions; these took between five and twelve seconds to execute. Additionally, the Mark II was composed of two sub-computers that could either work in tandem or operate on separate functions, allowing results to be cross-checked and malfunctions debugged.
The Mark I and Mark II were not stored-program computers – they read instructions of the program one at a time from a tape and executed them. The Mark II had a peculiar programming method that was devised to ensure that the contents of a register were available when needed. The tape containing the program could encode only eight instructions, so what a particular
|
https://en.wikipedia.org/wiki/Harvard%20Mark%20IV
|
The Harvard Mark IV was an electronic stored-program computer built by Harvard University under the supervision of Howard Aiken for the United States Air Force. Construction was completed in 1952. The machine stayed at Harvard, where the Air Force used it extensively.
The Mark IV was all-electronic. It used magnetic drum memory and had 200 registers of ferrite magnetic-core memory, making it one of the first computers to use core memory. It separated the storage of data and instructions, an arrangement now sometimes referred to as the Harvard architecture, although that term was not coined until the 1970s (in the context of microcontrollers).
See also
Harvard Mark I
Harvard Mark II
Harvard Mark III
List of vacuum-tube computers
Howard Aiken
Harvard (World War II advanced trainer aircraft)
References
Further reading
A History of Computing Technology, Michael R. Williams, 1997, IEEE Computer Society Press
External links
Harvard Mark IV 64-bit Magnetic Shift Register at ComputerHistory.org
1950s computers
Computer-related introductions in 1952
Vacuum tube computers
One-of-a-kind computers
Harvard University
|
https://en.wikipedia.org/wiki/Futures%20and%20promises
|
In computer science, future, promise, delay, and deferred refer to constructs used for synchronizing program execution in some concurrent programming languages. They describe an object that acts as a proxy for a result that is initially unknown, usually because the computation of its value is not yet complete.
The term promise was proposed in 1976 by Daniel P. Friedman and David Wise, and Peter Hibbard called the same concept eventual. A somewhat similar concept, the future, was introduced in 1977 in a paper by Henry Baker and Carl Hewitt.
The terms future, promise, delay, and deferred are often used interchangeably, although some differences in usage between future and promise are treated below. Specifically, when usage is distinguished, a future is a read-only placeholder view of a variable, while a promise is a writable, single assignment container which sets the value of the future. Notably, a future may be defined without specifying which specific promise will set its value, and different possible promises may set the value of a given future, though this can be done only once for a given future. In other cases a future and a promise are created together and associated with each other: the future is the value, the promise is the function that sets the value – essentially the return value (future) of an asynchronous function (promise). Setting the value of a future is also called resolving, fulfilling, or binding it.
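The promise/future split can be sketched with Python's concurrent.futures module, whose Future object exposes both roles: set_result plays the promise side (writable, single assignment) and result() plays the future side (read-only view). This is an illustrative sketch only, not the mechanism of any particular language discussed here.

import concurrent.futures
import threading

# The single-assignment container; it starts out unresolved.
fut = concurrent.futures.Future()

def producer():
    # The "promise" side: resolve the future exactly once.
    fut.set_result(6 * 7)

threading.Thread(target=producer).start()

# The "future" side: result() blocks until the value is set.
print(fut.result())  # 42

# Setting a value a second time fails: single assignment only.
try:
    fut.set_result(0)
except concurrent.futures.InvalidStateError:
    print("already resolved")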
Applications
Futures and promises originated in functional programming and related paradigms (such as logic programming) to decouple a value (a future) from how it was computed (a promise), allowing the computation to be done more flexibly, notably by parallelizing it. Later, it found use in distributed computing, in reducing the latency from communication round trips. Later still, it gained more use by allowing writing asynchronous programs in direct style, rather than in continuation-passing style.
Implicit vs. explicit
Use of futures may be implicit (any use of t
|
https://en.wikipedia.org/wiki/Helminthology
|
Helminthology is the study of parasitic worms (helminths). The field studies the taxonomy of helminths and their effects on their hosts.
The first element of the word derives from the Greek ἕλμινς (helmins), meaning "worm".
In the 18th and early 19th century there was a wave of publications on helminthology; this period has been described as the science's "Golden Era". During that period the authors Félix Dujardin, William Blaxland Benham, Peter Simon Pallas, Marcus Elieser Bloch, Otto Friedrich Müller, Johann Goeze, Friedrich Zenker, Charles Wardell Stiles, Carl Asmund Rudolphi, Otto Friedrich Bernhard von Linstow
and Johann Gottfried Bremser started systematic scientific studies of the subject.
The Japanese parasitologist Satyu Yamaguti was one of the most active helminthologists of the 20th century; he wrote the six-volume Systema Helminthum.
See also
Nematology
References
Subfields of zoology
|
https://en.wikipedia.org/wiki/How%20to%20Design%20Programs
|
How to Design Programs (HtDP) is a textbook by Matthias Felleisen, Robert Bruce Findler, Matthew Flatt, and Shriram Krishnamurthi on the systematic design of computer programs. MIT Press published the first edition in 2001, and the second edition in 2018, which is freely available online and in print. The book introduces the concept of a design recipe, a six-step process for creating programs from a problem statement. While the book was originally used along with the education project TeachScheme! (renamed ProgramByDesign), it has been adopted at many colleges and universities for teaching program design principles.
According to HtDP, the design process starts with a careful analysis of a problem statement with the goal of extracting a rigorous description of the kinds of data that the desired program consumes and produces. The structure of these data descriptions determines the organization of the program.
Then, the book carefully introduces data forms of progressively growing complexity. It starts with data of atomic forms and then progresses to compound forms, including data that can be arbitrarily large. For each kind of data definition, the book explains how to organize the program in principle, thus enabling a programmer who encounters a new form of data to still construct a program systematically.
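As a sketch of the design recipe's shape (HtDP's own teaching languages are Scheme-based; the steps are transposed into Python here purely for illustration, and the step names paraphrase the book's):

# 1. Data definition: a Temperature is a float in degrees Celsius.
# 2. Signature and purpose: celsius_to_fahrenheit : float -> float
#    Convert a Celsius temperature to Fahrenheit.
# 3. Examples (which double as tests): 0.0 -> 32.0, 100.0 -> 212.0
# 4. Template: the input is atomic data, so the body is a single
#    expression over the parameter.
# 5. Definition:
def celsius_to_fahrenheit(c: float) -> float:
    return c * 9 / 5 + 32

# 6. Testing: run the examples.
assert celsius_to_fahrenheit(0.0) == 32.0
assert celsius_to_fahrenheit(100.0) == 212.0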
Like Structure and Interpretation of Computer Programs (SICP), HtDP relies on a variant of the programming language Scheme. It comes with its own integrated development environment (IDE), DrRacket, which provides a series of teaching languages. The first language supports only functions, atomic data, and simple structures; each language adds expressive power to the prior one. Except for the largest teaching language, all languages for HtDP are functional programming languages.
Pedagogical basis
In the 2004 paper, The Structure and Interpretation of the Computer Science Curriculum, the same authors compared and contrasted the pedagogical focus
|
https://en.wikipedia.org/wiki/Amplified%20fragment%20length%20polymorphism
|
AFLP-PCR, or just AFLP, is a PCR-based tool used in genetics research, DNA fingerprinting, and the practice of genetic engineering. Developed in the early 1990s by KeyGene, AFLP uses restriction enzymes to digest genomic DNA, followed by ligation of adaptors to the sticky ends of the restriction fragments. A subset of the restriction fragments is then selected to be amplified. This selection is achieved by using primers complementary to the adaptor sequence, the restriction site sequence, and a few nucleotides inside the restriction fragments (as described in detail below). The amplified fragments are separated and visualized by denaturing gel electrophoresis, either through autoradiography or fluorescence methodologies, or via automated capillary sequencing instruments.
Although AFLP should not be used as an acronym, it is commonly referred to as "Amplified fragment length polymorphism". However, the resulting data are not scored as length polymorphisms, but instead as presence-absence polymorphisms.
AFLP-PCR is a highly sensitive method for detecting polymorphisms in DNA. The technique was originally described by Vos and Zabeau in 1993. In detail, the procedure of this technique is divided into three steps:
Digestion of total cellular DNA with one or more restriction enzymes and ligation of restriction half-site specific adaptors to all restriction fragments.
Selective amplification of some of these fragments with two PCR primers that have corresponding adaptor and restriction site specific sequences.
Electrophoretic separation of amplicons on a gel matrix, followed by visualisation of the band pattern.
Applications
The AFLP technology has the capability to detect various polymorphisms in different genomic regions simultaneously. It is also highly sensitive and reproducible. As a result, AFLP has become widely used for the identification of genetic variation in strains or closely related species of plants, fungi, animals, and bacteria. The AFL
|
https://en.wikipedia.org/wiki/GeneXus
|
GeneXus is a low-code, cross-platform, knowledge-representation-based development tool, mainly oriented towards enterprise-class applications for the web, smart devices, and the Microsoft Windows platform.
GeneXus uses mostly declarative language to generate native code for multiple environments. It includes a normalization module, which creates and maintains an optimal database structure based on user views. The languages for which code can be generated include COBOL, Java, Objective-C, RPG, Ruby, Visual Basic, and Visual FoxPro. Some of the DBMSs supported are Microsoft SQL Server, Oracle, IBM Db2, Informix, PostgreSQL, and MySQL.
GeneXus was developed by the Uruguayan company ARTech Consultores SRL, which was later renamed Genexus SA. The latest version is GeneXus 17, released on October 20, 2020.
See also
Comparison of code generation tools
List of low-code development platforms
References
External links
1988 software
Declarative programming languages
Integrated development environments
Programming tools
|
https://en.wikipedia.org/wiki/Spaghetti%20sort
|
Spaghetti sort is a linear-time, analog algorithm for sorting a sequence of items, introduced by A. K. Dewdney in his Scientific American column. This algorithm sorts a sequence of items requiring O(n) stack space in a stable manner. It requires a parallel processor.
Algorithm
For simplicity, assume we are sorting a list of natural numbers. The sorting method is illustrated using uncooked rods of spaghetti:
For each number x in the list, obtain a rod of length x. (One practical way of choosing the unit is to let the largest number m in the list correspond to one full rod of spaghetti. In this case, the full rod equals m spaghetti units. To get a rod of length x, break a rod in two so that one piece is of length x units; discard the other piece.)
Once you have all your spaghetti rods, take them loosely in your fist and lower them to the table, so that they all stand upright, resting on the table surface. Now, for each rod, lower your other hand from above until it meets with a rod—this one is clearly the longest. Remove this rod and insert it into the front of the (initially empty) output list (or equivalently, place it in the last unused slot of the output array). Repeat until all rods have been removed.
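The procedure can be simulated in software (a sketch only: a sequential loop cannot reproduce the analog parallelism that makes the physical version linear-time, so this naive simulation is quadratic):

def spaghetti_sort(numbers):
    rods = list(numbers)   # each number becomes a rod of that length
    output = []
    while rods:
        tallest = max(rods)        # the lowering hand meets the longest rod first
        rods.remove(tallest)       # remove that rod from the table
        output.insert(0, tallest)  # insert it at the front of the output
    return output

print(spaghetti_sort([3, 1, 4, 1, 5]))  # [1, 1, 3, 4, 5]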
Analysis
Preparing the n rods of spaghetti takes linear time. Lowering the rods on the table takes constant time, O(1). This is possible because the hand, the spaghetti rods and the table work as a fully parallel computing device. There are then n rods to remove so, assuming each contact-and-removal operation takes constant time, the worst-case time complexity of the algorithm is O(n).
References
External links
A. K. Dewdney's homepage
Implementations of a model of physical sorting, Boole Centre for Research in Informatics
Classical/Quantum Computing, IFF-Institute
Sorting algorithms
Metaphors referring to spaghetti
|
https://en.wikipedia.org/wiki/Xbox%20Linux
|
Xbox Linux was a project that ported the Linux operating system to the Xbox video game console. Because the Xbox uses a digital signature system to prevent the public from running unsigned code, one must either use a modchip, or a softmod. Originally, modchips were the only option; however, it was later demonstrated that the TSOP chip on which the Xbox's BIOS is held may be reflashed. This way, one may flash on the "Cromwell" BIOS, which was developed legally by the Xbox Linux project. Catalyzed by a large cash prize for the first team to provide the possibility of booting Linux on an Xbox without the need of a hardware hack, numerous software-only hacks were also found. For example, a buffer overflow was found in the game 007: Agent Under Fire that allowed the booting of a Linux loader ("xbeboot") straight from a save game.
The Xbox is essentially a PC with a custom 733 MHz Intel Pentium III processor, a 10 GB hard drive (8 GB of which is accessible to the user), 64 MB of RAM (although on all earlier boxes this is upgradable to 128 MB), and 4 USB ports. (The controller ports are actually USB 1.1 ports with a modified connector.) These specifications are enough to run several readily available Linux distributions.
From the Xbox-Linux home page:
The Xbox is a legacy-free PC by Microsoft that consists of an Intel Celeron 733 MHz CPU, an nVidia GeForce 3MX, 64 MB of RAM, a 8/10 GB hard disk, a DVD drive and 10/100 Ethernet. As on every PC, you can run Linux on it.
An Xbox with Linux can be a full desktop computer with mouse and keyboard, a web/email box connected to TV, a server or router or a node in a cluster. You can either dual-boot or use Linux only; in the latter case, you can replace both IDE devices. And yes, you can connect the Xbox to a VGA monitor.
Uses
An Xbox with Linux installed can act as a full desktop computer with mouse and keyboard, a web/email box connected to a television, a server, router or a node in a cluster. One can either dual-boot or
|
https://en.wikipedia.org/wiki/The%20Right%20to%20Read
|
The Right to Read is a short story by Richard Stallman, the founder of the Free Software Foundation, which was first published in 1997 in Communications of the ACM. It is a cautionary tale set in the year 2047, when DRM-like technologies are employed to restrict the readership of books and the sharing of books and written material is a crime punishable by imprisonment.
In particular, the story touches on the impact of such a system on university students, due to their need for materials, one (Dan Halbert) of whom is forced into a dilemma in which he must decide whether to loan his computer to a fellow student (Lissa Lenz), who would then have the ability to illegally access his purchased documents.
It is notable for being written before the use of Digital Rights Management (DRM) technology was widespread (although DVD video discs which used DRM had appeared the year before, and various proprietary software since the 1970s had made use of some form of copy protection), and for predicting later hardware-based attempts to restrict how users could use content, such as Trusted Computing.
See also
The Digital Imprimatur
External links
The Right to Read
Free content
Digital rights management
Free software
1997 short stories
|
https://en.wikipedia.org/wiki/Preemption%20%28computing%29
|
In computing, preemption is the act of temporarily interrupting an executing task, with the intention of resuming it at a later time. The interruption is performed by an external scheduler with no assistance or cooperation from the task. This preemptive scheduler usually runs in the most privileged protection ring, meaning that interruption and resumption are considered highly secure actions. Such changes to the currently executing task of a processor are known as context switching.
User mode and kernel mode
In any given system design, some operations performed by the system may not be preemptable. This usually applies to kernel functions and service interrupts which, if not permitted to run to completion, would tend to produce race conditions resulting in deadlock. Barring the scheduler from preempting tasks while they are processing kernel functions simplifies the kernel design at the expense of system responsiveness. The distinction between user mode and kernel mode, which determines privilege level within the system, may also be used to distinguish whether a task is currently preemptable.
Most modern operating systems have preemptive kernels, which are designed to permit tasks to be preempted even when in kernel mode. Examples of such operating systems are Solaris 2.0/SunOS 5.0, Windows NT, Linux kernel (2.5.4 and newer), AIX and some BSD systems (NetBSD, since version 5).
Preemptive multitasking
The term preemptive multitasking is used to distinguish a multitasking operating system, which permits preemption of tasks, from a cooperative multitasking system wherein processes or tasks must be explicitly programmed to yield when they do not need system resources.
In simple terms: preemptive multitasking involves the use of an interrupt mechanism which suspends the currently executing process and invokes a scheduler to determine which process should execute next. Therefore, all processes will get some amount of CPU time over any given span of time.
In preemptive multit
|
https://en.wikipedia.org/wiki/Honda%20E%20series
|
The E series was a collection of successive humanoid robots created by the Honda Motor Company between the years of 1986 and 1993. These robots were only experimental, but later evolved into the Honda P series, with Honda eventually amassing the knowledge and experience necessary to create Honda's advanced humanoid robot: ASIMO. The fact that Honda had been developing the robots was kept secret from the public until the announcement of the Honda P2 in 1996.
E0, developed in 1986, was the first robot in the series. It walked in a straight line on two feet, in a manner resembling human locomotion, taking around 5 seconds to complete a single step. Engineers quickly realised that in order to walk up slopes, the robot would need to travel faster. The model has 6 degrees of freedom: one in each groin, one in each knee and one in each ankle.
Models
E0, developed in 1986.
E1, developed in 1987, was larger than the first and walked at 0.25 km/h. This model and subsequent E-series robots have 12 degrees of freedom: 3 in each groin, 1 in each knee and 2 in each ankle.
E2, developed in 1989, could travel at 1.2 km/h, through the development of "dynamic movement".
E3, developed in 1991, travelled at 3 km/h, the average speed of a walking human.
E4, developed in 1991, lengthened the knee to achieve speeds of up to 4.7 km/h.
E5, developed in 1992, was able to walk autonomously, albeit with a very large head.
E6, developed in 1993, was able to autonomously balance, walk over obstacles, and even climb stairs.
See also
Honda P series
References
External links
History of humanoid robots - Honda official website
Robotics at Honda
Bipedal humanoid robots
1986 robots
1987 robots
1989 robots
1991 robots
1992 robots
1993 robots
Products and services discontinued in 1993
Japanese inventions
|
https://en.wikipedia.org/wiki/Raunki%C3%A6r%20plant%20life-form
|
The Raunkiær system is a system for categorizing plants using life-form categories, devised by Danish botanist Christen C. Raunkiær and later extended by various authors.
History
It was first proposed in a talk to the Danish Botanical Society in 1904, as can be inferred from the printed discussion of that talk, though not from the talk itself or its title. The journal Botanisk Tidsskrift published brief comments on the talk by M.P. Porsild, with replies by Raunkiær. A fuller account appeared in French the following year. Raunkiær elaborated further on the system and published this in Danish in 1907.
The original note and the 1907 paper were much later translated to English and published with Raunkiær's collected works.
Modernization
Raunkiær's life-form scheme has subsequently been revised and modified by various authors, but the main structure has survived. Raunkiær's life-form system may be useful in researching the transformations of biotas and the genesis of some groups of phytophagous animals.
Subdivisions
The subdivisions of the Raunkiær system are premised on the location of the bud of a plant during seasons with adverse conditions, i.e. cold seasons and dry seasons:
Phanerophytes
These plants, normally woody perennials, grow stems into the air, with their resting buds being more than 50 cm above the soil surface, e.g. trees and shrubs, and also epiphytes, which Raunkiær later separated as a distinct class (see below).
Raunkiær further divided the phanerophytes according to height as
Megaphanerophytes,
Mesophanerophytes,
Microphanerophytes, and
Nanophanerophytes.
Further division was premised on the duration of foliage, i.e. evergreen or deciduous, and the presence of covering bracts on buds, giving eight classes. Three further divisions were made to increase the total to twelve:
Phanerophytic stem succulents,
Phanerophytic epiphytes, and
Phanerophytic herbs.
Epiphytes
Epiphytes were originally included in the phanerophytes (see above) but then sepa
|
https://en.wikipedia.org/wiki/AUO%20Corporation
|
AUO Corporation (AUO) is a Taiwanese company that specialises in optoelectronic solutions. It was formed in September 2001 by the merger of Acer Display Technology, Inc. (the predecessor of AUO, established in 1996) and Unipac Optoelectronics Corporation. AUO offers display panel products and solutions, and in recent years expanded its business to smart retail, smart transportation, general health, solar energy, circular economy and smart manufacturing service.
AUO employs 38,000 people.
History
August 1996: Acer Display Technology, Inc. (the predecessor of AUO) was founded
September 2000: Listed on the Taiwan Stock Exchange
September 2001: Merged with Unipac Optoelectronics Corporation to form AUO
October 2006: Merged with Quanta Display Inc.
December 2008: Entered the solar business
March 2009: G8.5 fab in Taichung recognized as the world's first LEED Gold certified TFT-LCD facility
June 2009: Joint venture with Changhong in Sichuan, China to set up a module plant
April 2010: Joint venture with TCL in China to set up a module plant
July 2010: Acquired AFPD Pte., Ltd. ("AFPD"), a subsidiary of Toshiba Mobile Display Co., Ltd. in Singapore
May 2011: G8.5 fab in Houli recognized as the world's first LEED Platinum certified TFT-LCD facility
June 2011: Acquired the world's first ISO 50001 certification for manufacturing facilities
February 2012: OLED strategic alliance formed with Idemitsu in Japan
July 2012: Acquired the world's first ISO 14045 eco-efficiency assessment of product systems verification
June 2013: G8.5 fab in Houli received Taiwan's first Diamond Certification for Green Factory Building
November 2013: Named to the 2013/2014 Ocean Tomo 300® Patent Index
April 2014: Initiated a new model of solar power plant operation by founding Star River Energy Corporation
May 2014: CSR Report acquired Taiwan's first GRI G4 certificate in the manufacturing industry
May 2015: Ranked among the top 5% of companies in the first Corporate Governance Evaluation released by Taiwan
|
https://en.wikipedia.org/wiki/Concurrent%20lines
|
In geometry, lines in a plane or higher-dimensional space are concurrent if they intersect at a single point. They are in contrast to parallel lines.
Examples
Triangles
In a triangle, four basic types of sets of concurrent lines are altitudes, angle bisectors, medians, and perpendicular bisectors:
A triangle's altitudes run from each vertex and meet the opposite side at a right angle. The point where the three altitudes meet is the orthocenter.
Angle bisectors are rays running from each vertex of the triangle and bisecting the associated angle. They all meet at the incenter.
Medians connect each vertex of a triangle to the midpoint of the opposite side. The three medians meet at the centroid (a vector verification is sketched after this list).
Perpendicular bisectors are lines running out of the midpoints of each side of a triangle at 90 degree angles. The three perpendicular bisectors meet at the circumcenter.
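For example, the concurrency of the medians can be checked directly with the position vectors A, B, C of the vertices. The point two-thirds of the way along the median from A to the midpoint of BC is

\[
A + \tfrac{2}{3}\left(\tfrac{B+C}{2} - A\right) = \frac{A+B+C}{3},
\]

and by symmetry the medians from B and C pass through the same point, so all three concur at the centroid G = (A + B + C)/3.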
Other sets of lines associated with a triangle are concurrent as well. For example:
Any median (which is necessarily a bisector of the triangle's area) is concurrent with two other area bisectors each of which is parallel to a side.
A cleaver of a triangle is a line segment that bisects the perimeter of the triangle and has one endpoint at the midpoint of one of the three sides. The three cleavers concur at the center of the Spieker circle, which is the incircle of the medial triangle.
A splitter of a triangle is a line segment having one endpoint at one of the three vertices of the triangle and bisecting the perimeter. The three splitters concur at the Nagel point of the triangle.
Any line through a triangle that splits both the triangle's area and its perimeter in half goes through the triangle's incenter, and each triangle has one, two, or three of these lines. Thus if there are three of them, they concur at the incenter.
The Tarry point of a triangle is the point of concurrency of the lines through the vertices of the triangle perpendicular to the corresponding sides of the triangle's first
|
https://en.wikipedia.org/wiki/Disk%20Utility
|
Disk Utility is a system utility for performing disk and disk volume-related tasks on the macOS operating system by Apple Inc.
Functions
The functions currently supported by Disk Utility include:
Creation, conversion, backup, compression, and encryption of logical volume images from a wide range of formats read by Disk Utility to .dmg or, for CD/DVD images, .cdr
Mounting, unmounting and ejecting disk volumes (including both hard disks, removable media, and disk volume images)
Enabling or disabling journaling
Verifying a disk's integrity, and repairing it if the disk is damaged (this will work for both Mac compatible format partitions and for FAT32 partitions with Microsoft Windows installed)
Erasing, formatting, partitioning, and cloning disks
Secure deletion of free space or an entire disk using a single-pass "zero out", the 7-pass DoD 5220.22-M standard, or the 35-pass Gutmann algorithm
Adding or changing partition table between Apple Partition Map, GUID Partition Table, and master boot record (MBR)
Restoring volumes from Apple Software Restore (ASR) images
Checking the S.M.A.R.T. status of a hard disk
Disk Utility functions may also be accessed from the macOS command line with the diskutil and hdiutil commands. It is also possible to create and manage RAM disk images by using hdiutil and diskutil in terminal.
History
In the classic Mac OS, similar functionality to the verification features of Disk Utility could be found in the Disk First Aid application. Another application called Drive Setup was used for drive formatting and partitioning and the application Disk Copy was used for working with disk images.
Before Mac OS X Panther, the functionality of Disk Utility was spread across two applications: Disk Copy and Disk Utility. Disk Copy was used for creating and mounting disk image files whereas Disk Utility was used for formatting, partitioning, verifying, and repairing file structures. The ability to "zero" all data (multi-pass formatting) on a disk was not added until
|
https://en.wikipedia.org/wiki/DEC%20Multia
|
The Multia, later re-branded the Universal Desktop Box, was a line of desktop computers introduced by Digital Equipment Corporation on 7 November 1994. The line is notable in that units were offered with either an Alpha AXP or Intel Pentium processor as the CPU, and most hardware other than the backplane and CPU were interchangeable. Both the Alpha and Intel versions were intended to run Windows NT.
The Multia had a compact case that left little room for expansion cards and restricted air flow, which can cause premature hardware failure from overheating if the machine is not properly cared for. Enthusiasts remedy this by placing the Multia vertically instead of horizontally, allowing the heated air to escape via vents at the top, although the Multia must still be protected from overheating due to other, e.g. environmental, factors.
Hardware specifications
The Alpha Multias included either an Alpha 21066 or Alpha 21066A microprocessor running at 166 MHz or 233 MHz respectively, and came with 16 or 24 MB of RAM as standard (expandable to 128 MB officially, but in practice 256 MB). Because the 21066 was a budget version of the Alpha 21064 processor, it had a narrower (64-bit versus 128-bit) and slower bus and thus performance was roughly equivalent to a Pentium running at 100 MHz for integer operations, but superior in floating-point; furthermore, the standard RAM capacity was a severe restriction on the performance of these workstations. The Alpha-based Multias came with the TGA (DEC 21030) graphics adapter.
Standard peripherals on both Alpha and Intel models included a SCSI host adapter, DEC 21040 Ethernet controller, two PCMCIA slots, two RS-232 ports, a bi-directional parallel port, a 2.5 in or 3.5 in SCSI or ATA hard disk (340 MB to 1.6 GB), PS/2 keyboard and mouse ports, and a PCI slot (on models with 2.5-inch hard disks).
Models
Multia models comprised:
Alpha Multia (codenamed QuickSilver):
VX40: 166 MHz 21066, optional floppy disk drive and external SCS
|
https://en.wikipedia.org/wiki/Direct%20fluorescent%20antibody
|
A direct fluorescent antibody (DFA or dFA), also known as "direct immunofluorescence", is an antibody that has been tagged in a direct fluorescent antibody test. Its name derives from the fact that it directly tests the presence of an antigen with the tagged antibody, unlike western blotting, which uses an indirect method of detection, where the primary antibody binds the target antigen, with a secondary antibody directed against the primary, and a tag attached to the secondary antibody.
Commercial DFA testing kits are available, which contain fluorescently labelled antibodies, designed to specifically target unique antigens present in the bacteria or virus, but not present in mammals (Eukaryotes). This technique can be used to quickly determine if a subject has a specific viral or bacterial infection.
In the case of respiratory viruses, many of which have similar broad symptoms, detection can be carried out using nasal wash samples from the subject with the suspected infection. Although shed cells from the respiratory tract can be obtained, they are often present only in low numbers, so an alternative method can be adopted in which a compatible cell culture is exposed to the infected nasal wash sample; if the virus is present it can be grown to a larger quantity, which can then give a clearer positive or negative reading.
As with all types of fluorescence microscopy, the correct absorption wavelength needs to be determined in order to excite the fluorophore tag attached to the antibody, and detect the fluorescence given off, which indicates which cells are positive for the presence of the virus or bacteria being detected.
Direct immunofluorescence can be used to detect deposits of immunoglobulins and complement proteins in biopsies of skin, kidney and other organs. Their presence is indicative of an autoimmune disease. When skin not exposed to the sun is tested, a positive direct IF (the so-called Lupus band test) is an evidence of systemic lupus erythematosus. Direct
|
https://en.wikipedia.org/wiki/Immunomagnetic%20separation
|
Immunomagnetic separation (IMS) is a laboratory tool that can efficiently isolate cells out of body fluid or cultured cells. It can also be used as a method of quantifying the pathogenicity of food, blood or feces. DNA analyses have supported the combined use of this technique and the polymerase chain reaction (PCR). Another laboratory separation tool is affinity magnetic separation (AMS), which is more suitable for the isolation of prokaryotic cells.
IMS isolates cells, proteins, and nucleic acids through the specific capture of biomolecules by small magnetized particles (beads) carrying antibodies or lectins. The beads are coated so that they bind the targeted biomolecules, are gently separated, and go through multiple cycles of washing; the targeted molecules, bound to the superparamagnetic beads, are then eluted, and the concentration of the specifically targeted biomolecules can be determined. IMS thus recovers defined concentrations of specific molecules from targeted bacteria.
A mixture of cell populations is placed in a magnetic field; cells attached to superparamagnetic beads (a specific example is the 4.5-μm Dynabead) remain, bound to the targeted antigen, once the excess substrate is removed. Dynabeads consist of iron-containing cores covered by a thin polymer shell that allows the adsorption of biomolecules. The beads are coated with primary antibodies, species-specific antibodies, lectins, enzymes, or streptavidin; the linkage between the coated magnetized beads and the captured cells can be a cleavable DNA linker, allowing separation of the cells from the beads when further culturing of the cells is desirable.
Many of these beads share the same principles of separation; however, the presence and differing strengths of magnetic fields require certain sizes of beads, based on the ramifications of the separation
|
https://en.wikipedia.org/wiki/Rainbow%20Series
|
The Rainbow Series (sometimes known as the Rainbow Books) is a series of computer security standards and guidelines published by the United States government in the 1980s and 1990s. They were originally published by the U.S. Department of Defense Computer Security Center, and then by the National Computer Security Center.
Objective
These standards describe a process of evaluation for trusted systems. In some cases, U.S. government entities (as well as private firms) would require formal validation of computer technology using this process as part of their procurement criteria. Many of these standards have influenced, and have been superseded by, the Common Criteria.
The books have nicknames based on the colors of their covers. For example, the Trusted Computer System Evaluation Criteria was referred to as "The Orange Book." In the book entitled Applied Cryptography, security expert Bruce Schneier states of NCSC-TG-021 that he "can't even begin to describe the color of [the] cover" and that some of the books in this series have "hideously colored covers." He then goes on to describe how to receive a copy of them, saying "Don't tell them I sent you."
Most significant Rainbow Series books
References
External links
Rainbow Series from Federation of American Scientists, with more explanation
Rainbow Series from Archive of Information Assurance
Computer security standards
|
https://en.wikipedia.org/wiki/Surface%20power%20density
|
In physics and engineering, surface power density is power per unit area.
Applications
The intensity of electromagnetic radiation can be expressed in W/m2. An example of such a quantity is the solar constant.
Wind turbines are often compared using specific power, measured in watts per square meter of turbine disk area, that is, P/(πr²), where P is the rated power and r is the length of a blade. This measure is also commonly used for solar panels, at least for typical applications.
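A quick sketch of the computation (the turbine figures are made-up illustrative values):

import math

def specific_power(rated_watts: float, blade_length_m: float) -> float:
    """Rated power divided by the swept rotor disk area, in W/m^2."""
    return rated_watts / (math.pi * blade_length_m ** 2)

# Hypothetical turbine: 2 MW rating, 40 m blades.
print(round(specific_power(2e6, 40.0)))  # ~398 W/m^2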
Radiance is surface power density per unit of solid angle (steradians) in a specific direction. Spectral radiance is radiance per unit of frequency (hertz) at a specific frequency.
Surface power densities of energy sources
Surface power density is an important factor in comparison of industrial energy sources. The concept was popularised by geographer Vaclav Smil. The term is usually shortened to "power density" in the relevant literature, which can lead to confusion with homonymous or related terms.
Measured in W/m2, it describes the amount of power obtained per unit of Earth surface area used by a specific energy system, including all supporting infrastructure, manufacturing, mining of fuel (if applicable) and decommissioning. Fossil fuels and nuclear power are characterized by high power density, which means large power can be drawn from power plants occupying a relatively small area. Renewable energy sources have power density at least three orders of magnitude smaller, and for the same energy output they need to occupy a correspondingly larger area, which has already been highlighted as a limiting factor of renewable energy in the German Energiewende.
The following table shows median surface power density of renewable and non-renewable energy sources.
Background
As an electromagnetic wave travels through space, energy is transferred from the source to other objects (receivers). The rate of this energy transfer depends on the strength of the EM field components. Simply put, the rate of energy transfer per unit area (power
|
https://en.wikipedia.org/wiki/Power%20density
|
Power density is the amount of power (time rate of energy transfer) per unit volume.
In energy transformers, including batteries, fuel cells, motors and power supply units, power density refers to a volume, where it is often called volume power density, expressed as W/m3.
In reciprocating internal combustion engines, power density (power per swept volume or brake horsepower per cubic centimeter) is an important metric, based on the internal capacity of the engine, not its external size.
Examples
See also
Surface power density, energy per unit of area
Energy density, energy per unit volume
Specific energy, energy per unit mass
Power-to-weight ratio/specific power, power per unit mass
Specific absorption rate (SAR)
References
Power (physics)
|
https://en.wikipedia.org/wiki/Macro-engineering
|
In engineering, macro-engineering (alternatively known as mega engineering) is the implementation of large-scale design projects. It can be seen as a branch of civil engineering or structural engineering applied on a large landmass. In particular, macro-engineering is the process of marshaling and managing of resources and technology on a large scale to carry out complex tasks that last over a long period. In contrast to conventional engineering projects, macro-engineering projects (called macro-projects or mega-projects) are multidisciplinary, involving collaboration from all fields of study. Because of the size of macro-projects they are usually international.
Macro-engineering is an evolving field that has only recently started to receive attention. Because modern societies routinely face challenges that are multinational in scope, such as global warming and pollution, macro-engineering is emerging as a framework for addressing worldwide problems.
Macro-engineering is distinct from megascale engineering in the scale at which it is applied. Whereas macro-engineering is currently practical, megascale engineering is still within the domain of speculative fiction, since it deals with projects on a planetary or stellar scale.
Projects
Macro-engineering examples include the construction of the Panama Canal and the Suez Canal.
Planned projects
Examples of projects include the Channel Tunnel and the planned Gibraltar Tunnel.
Two intellectual centers focused on macro-engineering theory and practice are the Candida Oancea Institute in Bucharest, and The Center for Macro Projects and Diplomacy at Roger Williams University in Bristol, Rhode Island.
See also
Afforestation
Agroforestry
Atlantropa (Gibraltar Dam)
Analog forestry
Bering Strait bridge
Buffer strip
Biomass
Biomass (ecology)
Climate engineering (Geoengineering)
Collaborative innovation network
Deforestation
Deforestation during the Roman period
Ecological engineering
Ecological engineering method
|
https://en.wikipedia.org/wiki/Super%20black
|
Super black is a surface treatment developed at the National Physical Laboratory (NPL) in the United Kingdom. It absorbs approximately 99.6% of visible light at normal incidence, while conventional black paint absorbs about 97.5%. At other angles of incidence, super black is even more effective: at an angle of 45°, it absorbs 99.9% of light.
Technology
The technology to create super black involves chemically etching a nickel-phosphorus alloy.
Super black is used in specialist optical instruments to reduce unwanted reflections. The disadvantage of the material is its low optical thickness, as it is a surface treatment: infrared light with a wavelength longer than a few micrometres penetrates the dark layer and is reflected much more strongly. The reported reflectance increases from about 1% at 3 µm to 50% at 20 µm.
In 2009, a competitor to the super black material, Vantablack, was developed based on carbon nanotubes. It has a relatively flat reflectance in a wide spectral range.
In 2011, NASA and the US Army began funding research in the use of nanotube-based super black coatings in sensitive optics.
Nanotube-based superblack arrays and coatings have recently become commercially available.
See also
Vantablack
Emissivity
Black hole
Black body
References
External links
Materials science
Optical materials
Shades of black
|
https://en.wikipedia.org/wiki/Net2Phone
|
net2phone is a cloud communications provider offering cloud-based telephony services to businesses worldwide. The company is a subsidiary of IDT Corporation.
History
net2phone was founded in 1990 by telecom entrepreneur Howard Jonas, the chairman and chief executive officer of net2phone's parent company, IDT Corporation. The company was an early pioneer in the commercialization of voice-over-Internet-protocol (VoIP) technologies, leveraging the global carrier business and infrastructure of IDT and focusing on transitioning businesses and consumers from the PSTN, with its traditional telecom interconnects, to voice over IP.
On July 30, 1999, during the dot-com bubble, the company became a public company via an initial public offering, raising $81 million. Shares rose 77% on the first day of trading to $26 per share. After completion of the IPO, IDT owned 57% of the company. Within a few weeks, the shares increased another 100% in value, to $53 per share.
In March 2000, in a transaction facilitated by IDT CEO Howard Jonas, a consortium of telecommunications companies led by AT&T announced a $1.4 billion investment for a 32% stake in the company, buying shares for $75 each. The transaction was completed in August 2000. AOL had expressed an interest in buying all or part of the company but was not agreeable to the price.
In August 2000, Jonathan Fram, president of the company, left the company to join eVoice.
In September 2000, the company formed Adir Technologies, a joint venture with Cisco Systems. In March 2002, the company sued Cisco for breach of contract.
In February 2002, the company announced 110 layoffs, or 28% of its workforce.
In October 2004, Liore Alroy became chief executive officer of the company.
On March 13, 2006, IDT Corporation acquired the shares of the company that it did not already own for $2.05 per share.
In 2015, net2phone began providing Unified Communications as a Service (UCaaS) targeted to the SMB market. net2phone’s UCaaS initiative was develope
|
https://en.wikipedia.org/wiki/TRADIC
|
The TRADIC (for TRAnsistor DIgital Computer or TRansistorized Airborne DIgital Computer) was the first transistorized computer in the USA, completed in 1954.
The computer was built by Jean Howard Felker of Bell Labs for the United States Air Force while L.C. Brown ("Charlie Brown") was a lead engineer on the project, which started in 1951. The project initially examined the feasibility of constructing a transistorized airborne digital computer. A second application was a transistorized digital computer to be used in a Navy track-while-scan shipboard radar system. Several models were completed: TRADIC Phase One computer, Flyable TRADIC, Leprechaun (using germanium alloy junction transistors in 1956) and XMH-3 TRADIC. TRADIC Phase One was developed to explore the feasibility, in the laboratory, of using transistors in a digital computer that could be used to solve aircraft bombing and navigation problems. Flyable TRADIC was used to establish the feasibility of using an airborne solid-state computer as the control element of a bombing and navigation system. Leprechaun was a second-generation laboratory research transistor digital computer designed to explore direct-coupled transistor logic (DCTL). The TRADIC Phase One computer was completed in January 1954.
The TRADIC Phase One computer has been claimed to be the world's first fully transistorized computer, ahead of the Mailüfterl in Austria or the Harwell CADET in the UK, which were each completed in 1955. In the UK, the Manchester University Transistor Computer demonstrated a working prototype in 1953 which incorporated transistors before TRADIC was operational, although that was not a fully transistorized computer because it used vacuum tubes to generate the clock signal. The 30 watts of power for the 1 MHz clock in the TRADIC was also supplied by a vacuum tube supply because no transistors were available that could supply that much power at that frequency. If the TRADIC can be called fully transistorized while in
|
https://en.wikipedia.org/wiki/Finite%20strain%20theory
|
In continuum mechanics, the finite strain theory—also called large strain theory, or large deformation theory—deals with deformations in which strains and/or rotations are large enough to invalidate assumptions inherent in infinitesimal strain theory. In this case, the undeformed and deformed configurations of the continuum are significantly different, requiring a clear distinction between them. This is commonly the case with elastomers, plastically-deforming materials and other fluids and biological soft tissue.
Displacement field
Deformation gradient tensor
The deformation gradient tensor F is related to both the reference and current configurations, as seen by the unit vectors e_j and I_K, and is therefore a two-point tensor.
Two types of deformation gradient tensor may be defined.
Due to the assumption of continuity of χ(X, t), F has the inverse H = F⁻¹, where H is the spatial deformation gradient tensor. Then, by the implicit function theorem, the Jacobian determinant J(X, t) = det F(X, t) must be nonsingular, i.e. J(X, t) ≠ 0.
The material deformation gradient tensor F is a second-order tensor that represents the gradient of the mapping function or functional relation x = χ(X, t), which describes the motion of a continuum. It characterizes the local deformation at a material point with position vector X, i.e., deformation at neighbouring points, by transforming (as a linear transformation) a material line element emanating from that point from the reference configuration to the current or deformed configuration, assuming continuity in the mapping function χ(X, t), i.e. a differentiable function of X and time t, which implies that cracks and voids do not open or close during the deformation. Thus we have dx = F dX.
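A minimal numerical sketch (illustrative values only), using the simple-shear motion x₁ = X₁ + γX₂, x₂ = X₂, for which F is constant and J = det F = 1, so the motion preserves volume:

import numpy as np

gamma = 0.5  # shear parameter (illustrative value)

# Deformation gradient F for simple shear: x1 = X1 + gamma*X2, x2 = X2.
F = np.array([[1.0, gamma],
              [0.0, 1.0]])

# Jacobian determinant: must be nonzero for an invertible motion.
J = np.linalg.det(F)
print(J)  # 1.0 -> volume-preserving

# Transform a material line element dX from the reference to the
# current configuration: dx = F dX.
dX = np.array([0.0, 1.0])
dx = F @ dX
print(dx)  # [0.5, 1.0]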
Relative displacement vector
Consider a particle or material point with position vector X in the undeformed configuration (Figure 2). After a displacement of the body, the new position of the particle in the new configuration is given by the vector position x. The coordinate systems for t
|
https://en.wikipedia.org/wiki/Closed-loop%20authentication
|
Closed-loop authentication, as applied to computer network communication, refers to a mechanism whereby one party verifies the purported identity of another party by requiring them to supply a copy of a token transmitted to the canonical or trusted point of contact for that identity. It is also sometimes used to refer to a system of mutual authentication whereby two parties authenticate one another by signing and passing back and forth a cryptographically signed nonce, each party demonstrating to the other that they control the secret key used to certify their identity.
E-mail Authentication
Closed-loop email authentication is useful as a simple way for one party to demonstrate to another that it controls a given email address, as a weak form of identity verification. It is not a strong form of authentication in the face of host- or network-based attacks (where an imposter, Chuck, is able to intercept Bob's email, intercepting the nonce and thus masquerading as Bob).
A use of closed-loop email authentication is used by parties with a shared secret relationship (for example, a website and someone with a password to an account on that website), where one party has lost or forgotten the secret and needs to be reminded. The party still holding the secret sends it to the other party at a trusted point of contact. The most common instance of this usage is the "lost password" feature of many websites, where an untrusted party may request that a copy of an account's password be sent by email, but only to the email address already associated with that account. A problem associated with this variation is the tendency of a naïve or inexperienced user to click on a URL if an email encourages them to do so. Most website authentication systems mitigate this by permitting unauthenticated password reminders or resets only by email to the account holder, but never allowing a user who does not possess a password to log in or specify a new one.
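A minimal sketch of the token side of such a flow (the helper names are hypothetical; a real system would persist tokens with an expiry and actually deliver them by email to the trusted point of contact):

import secrets
import hmac

pending = {}  # address -> outstanding token (a real system adds expiry)

def issue_token(address: str) -> str:
    # Generate an unguessable token and record it for the address.
    token = secrets.token_urlsafe(32)
    pending[address] = token
    # A real implementation would email the token to `address` here.
    return token

def verify(address: str, presented: str) -> bool:
    expected = pending.get(address, "")
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(expected, presented)

t = issue_token("bob@example.com")
print(verify("bob@example.com", t))        # True
print(verify("bob@example.com", "guess"))  # False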
In some instances in web authentication, closed-loop authentication is employed before any access is gra
|
https://en.wikipedia.org/wiki/Sampled%20data%20system
|
In systems science, a sampled-data system is a control system in which a continuous-time plant is controlled with a digital device. Under periodic sampling, the sampled-data system is time-varying but also periodic; thus, it may be modeled by a simplified discrete-time system obtained by discretizing the plant. However, this discrete model does not capture the inter-sample behavior of the real system, which may be critical in a number of applications.
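A common way to obtain the simplified discrete-time model is zero-order-hold (ZOH) discretization of the plant. A scalar sketch (illustrative values) for the plant x' = a·x + b·u sampled with period T:

import math

# Continuous-time scalar plant: x' = a*x + b*u (illustrative values).
a, b, T = -1.0, 2.0, 0.1

# Zero-order hold: u is held constant between samples, giving
#   x[k+1] = Ad*x[k] + Bd*u[k],  Ad = e^(aT),  Bd = b*(e^(aT)-1)/a.
Ad = math.exp(a * T)
Bd = (math.exp(a * T) - 1.0) / a * b

x, u = 1.0, 0.5
for _ in range(3):
    x = Ad * x + Bd * u
print(x)

The discrete model predicts the state only at the sampling instants; the inter-sample behavior mentioned above is exactly what it does not capture.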
The analysis of sampled-data systems incorporating full-time information leads to challenging control problems with a rich mathematical structure. Many of these problems have only been solved recently.
References
External links
Digital control
Sampling (signal processing)
Discretization
Control theory
Control engineering
Systems engineering
Systems theory
|
https://en.wikipedia.org/wiki/Lov%C3%A1sz%20local%20lemma
|
In probability theory, if a large number of events are all independent of one another and each has probability less than 1, then there is a positive (possibly small) probability that none of the events will occur. The Lovász local lemma allows one to relax the independence condition slightly: As long as the events are "mostly" independent from one another and aren't individually too likely, then there will still be a positive probability that none of them occurs. It is most commonly used in the probabilistic method, in particular to give existence proofs.
There are several different versions of the lemma. The simplest and most frequently used is the symmetric version given below. A weaker version was proved in 1975 by László Lovász and Paul Erdős in the article Problems and results on 3-chromatic hypergraphs and some related questions. In 2020, Robin Moser and Gábor Tardos received the Gödel Prize for their algorithmic version of the Lovász local lemma, which uses entropy compression to provide an efficient randomized algorithm for finding an outcome in which none of the events occurs.
Statements of the lemma (symmetric version)
Let A1, A2,..., Ak be a sequence of events such that each event occurs with probability at most p and such that each event is independent of all the other events except for at most d of them.
Lemma I (Lovász and Erdős 1973; published 1975) If 4pd ≤ 1,
then there is a nonzero probability that none of the events occurs.
Lemma II (Lovász 1977; published by Joel Spencer) If ep(d + 1) ≤ 1,
where e = 2.718... is the base of natural logarithms, then there is a nonzero probability that none of the events occurs.
Lemma II today is usually referred to as "Lovász local lemma".
Lemma III (Shearer 1985) If p < (d − 1)^(d−1) / d^d,
then there is a nonzero probability that none of the events occurs.
The threshold in Lemma III is optimal, and it implies that the bound epd ≤ 1 is also sufficient.
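As an illustration (a standard application, sketched here with Lemma II), consider a k-SAT formula in which every clause contains k distinct variables and shares a variable with at most d other clauses. The event "clause i is violated by a uniformly random assignment" has probability p = 2^-k, so the formula is guaranteed satisfiable whenever e·p·(d + 1) ≤ 1:

import math

def lll_guarantees_satisfiable(k: int, d: int) -> bool:
    """Check the symmetric LLL condition e*p*(d+1) <= 1 for k-SAT,
    where p = 2**-k is the probability that a fixed clause is
    violated by a uniformly random assignment."""
    p = 2.0 ** -k
    return math.e * p * (d + 1) <= 1.0

# Each clause of a 3-SAT formula may overlap at most d others:
print(lll_guarantees_satisfiable(3, 1))  # True  (e * 1/8 * 2 <= 1)
print(lll_guarantees_satisfiable(3, 3))  # False (e * 1/8 * 4 > 1)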
Asymmetric Lovász local lemma
A statement of the asymmetric version (which a
|
https://en.wikipedia.org/wiki/Lustre%20%28programming%20language%29
|
Lustre is a formally defined, declarative, and synchronous dataflow programming language for programming reactive systems. It began as a research project in the early 1980s, and a formal presentation of the language can be found in the 1991 Proceedings of the IEEE. In 1993 it progressed to practical, industrial use as the core language of the commercial development environment SCADE, developed by Esterel Technologies. It is now used for critical control software in aircraft, helicopters, and nuclear power plants.
Structure of Lustre programs
A Lustre program is a series of node definitions, written as:
node foo(a : bool) returns (b : bool);
let
b = not a;
tel
where foo is the name of the node, a is the name of its single input, and b is the name of its single output.
In this example the node foo returns the negation of its input a.
Inner variables
Additional internal variables can be declared as follows:
node Nand(X,Y: bool) returns (Z: bool);
var U: bool;
let
U = X and Y;
Z = not U;
tel
Note: the order of the equations does not matter; swapping the lines U = X and Y; and Z = not U; does not change the result.
Special operators
Lustre's basic temporal operators are pre, which delays a stream by one cycle (its value at the first cycle is undefined), and -> ("followed by"), which initializes a stream: a -> b takes the value of a at the first cycle and of b at every cycle thereafter.
Examples
Edge detection
node Edge (X : bool) returns (E : bool);
let
-- E is true exactly when X rises from false to true:
-- false at the first cycle, then X and not (the previous X).
E = false -> X and not pre X;
tel
See also
Esterel
SIGNAL (another dataflow-oriented synchronous language)
Synchronous programming language
Dataflow programming
References
External links
Synchrone Lab Official website
SCADE product page
Declarative programming languages
Synchronous programming languages
Hardware description languages
Formal methods
Software modeling language
|
https://en.wikipedia.org/wiki/Averest
|
Averest is a synchronous programming language and set of tools to specify, verify, and implement reactive systems. It includes a compiler for synchronous programs, a symbolic model checker, and a tool for hardware/software synthesis.
It can be used to model and verify finite and infinite state systems, at varied abstraction levels. It is useful for hardware design, modeling communication protocols, concurrent programs, software in embedded systems, and more.
Its components (a compiler that translates synchronous programs to transition systems, a symbolic model checker, and a tool for hardware/software synthesis) cover large parts of the design flow of reactive systems, from specification to implementation. Though the tools are part of a common framework, they are mostly independent of each other and can be used with third-party tools.
See also
Synchronous programming language
Esterel
External links
Averest Toolbox Official home site
Embedded Systems Group Research group that develops the Averest Toolbox
Synchronous programming languages
Hardware description languages
|
https://en.wikipedia.org/wiki/Inspissation
|
Inspissation is the process of increasing the viscosity of a fluid, or even causing it to solidify, typically by dehydration or otherwise reducing its content of solvents. The term has also been applied to coagulation by heating of some substances, such as albumens, or to the cooling of others, such as solutions of gelatin or agar. Some forms of inspissation may be reversed by re-introducing solvent, for example by adding water to molasses or gum arabic. In other forms, the resistance to flow may involve cross-linking or mutual adhesion of the component particles or molecules in ways that prevent them from dissolving again, as in the irreversible setting or gelling of some kinds of rubber latex, egg white, and adhesives, or the coagulation of blood.
Intentional use
Inspissation is the process used when heating high-protein-containing media, for example to enable recovery of bacteria for testing. Once inspissation has occurred, any stained bacteria, such as Mycobacteria, can then be isolated.
Serum inspissation, or fractional sterilization, is a process of heating an article on three successive days.
Pathologic inspissation
In cystic fibrosis, inspissation of secretions in the respiratory and gastrointestinal tracts is a major mechanism of the disease.
References
Further reading
Textbook of Microbiology by Prof. C P Baveja
Textbook of Microbiology by Ananthanarayan and Panikar
Microbiology
Zoology
|
https://en.wikipedia.org/wiki/Test%20harness
|
In software testing, a test harness is a collection of stubs and drivers configured to assist with the testing of an application or component. It acts as imitation infrastructure for test environments or containers where the full infrastructure is either not available or not desired.
Test harnesses allow for the automation of tests. They can call functions with supplied parameters and print out and compare the results to the desired value. The test harness provides a hook for the developed code, which can be tested using an automation framework.
A test harness is used to facilitate testing where all or some of an application's production infrastructure is unavailable. This may be due to licensing costs, security concerns (such as air-gapped test environments), or resource limitations, or simply to increase the execution speed of tests by providing pre-defined test data and smaller software components instead of computed data from full applications.
These individual objectives may be fulfilled by unit test framework tools, stubs or drivers.
Example
When attempting to build an application that needs to interface with an application on a mainframe computer, but no mainframe is available during development, a test harness may be built to use as a substitute. This can mean that normally complex operations can be handled with a small amount of resources, because pre-defined data and responses stand in for the calculations the mainframe would perform.
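A minimal Python sketch of this idea follows; the class and function names are hypothetical. The stub substitutes for the mainframe interface, serves pre-defined data, and records calls so a test can assert on them.

class MainframeStub:
    """Stands in for the real mainframe interface during testing."""

    def __init__(self, canned_balances):
        self.canned = canned_balances   # pre-defined data, no mainframe needed
        self.calls = []                 # recorded so tests can inspect usage

    def query_balance(self, account_id):
        self.calls.append(account_id)
        return self.canned[account_id]

def needs_overdraft_warning(backend, account_id):
    # Code under test: in production, `backend` would talk to the mainframe.
    return backend.query_balance(account_id) < 0

def test_overdraft_warning():
    stub = MainframeStub({"ACC-1": -50})
    assert needs_overdraft_warning(stub, "ACC-1")
    assert stub.calls == ["ACC-1"]      # the harness verifies the interaction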
A test harness may be part of a project deliverable. It may be kept separate from the application source code and may be reused on multiple projects. A test harness simulates application functionality; it has no knowledge of test suites, test cases or test reports. Those things are provided by a testing framework and associated automated testing tools.
A part of its job is to set up suitable test fixtures.
The test harness will generally be specific to a development environment such as Java. However, interoper
|
https://en.wikipedia.org/wiki/JSON-RPC
|
JSON-RPC is a remote procedure call protocol encoded in JSON. It is similar to the XML-RPC protocol, defining only a few data types and commands. JSON-RPC allows for notifications (data sent to the server that does not require a response) and for multiple calls to be sent to the server which may be answered asynchronously.
History
Usage
JSON-RPC works by sending a request to a server implementing this protocol. The client in that case is typically software intending to call a single method of a remote system. Multiple input parameters can be passed to the remote method as an array or object, whereas the method itself can return multiple output data as well. (This depends on the implemented version.)
All transfer types are single objects, serialized using JSON. A request is a call to a specific method provided by a remote system. It can contain three members:
method - A String with the name of the method to be invoked. Method names that begin with "rpc." are reserved for rpc-internal methods.
params - An Object or Array of values to be passed as parameters to the defined method. This member may be omitted.
id - A string or non-fractional number used to match the response with the request that it is replying to. This member may be omitted if no response should be returned.
The receiver of the request must reply with a valid response to all received requests. A response can contain the members mentioned below.
result - The data returned by the invoked method. If an error occurred while invoking the method, this member must not exist.
error - An error object if there was an error invoking the method, otherwise this member must not exist. The object must contain members code (integer) and message (string). An optional data member can contain further server-specific data. There are pre-defined error codes which follow those defined for XML-RPC.
id - The id of the request it is responding to.
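As a concrete illustration, here is a request/response pair built in Python following the JSON-RPC 2.0 wire format (the method name and values are made up; version 2.0 adds the explicit "jsonrpc" member to the fields described above).

import json

# A request invoking a hypothetical remote method "subtract".
request = json.dumps({
    "jsonrpc": "2.0",
    "method": "subtract",   # name of the method to invoke
    "params": [42, 23],     # an Array here; an Object also works
    "id": 1,                # lets the client match the response to the request
})

# A successful response: "result" is present and "error" must not be.
response = json.dumps({"jsonrpc": "2.0", "result": 19, "id": 1})

# A notification: "id" is omitted, so the server sends no response.
notification = json.dumps({"jsonrpc": "2.0", "method": "log", "params": ["hi"]})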
Since there are situations where no response is needed or even desi
|
https://en.wikipedia.org/wiki/Detailed%20balance
|
The principle of detailed balance can be used in kinetic systems which are decomposed into elementary processes (collisions, or steps, or elementary reactions). It states that at equilibrium, each elementary process is in equilibrium with its reverse process.
History
The principle of detailed balance was explicitly introduced for collisions by Ludwig Boltzmann. In 1872, he proved his H-theorem using this principle. The arguments in favor of this property are founded upon microscopic reversibility.
Five years before Boltzmann, James Clerk Maxwell used the principle of detailed balance for gas kinetics with the reference to the principle of sufficient reason. He compared the idea of detailed balance with other types of balancing (like cyclic balance) and found that "Now it is impossible to assign a reason" why detailed balance should be rejected (pg. 64).
Albert Einstein in 1916 used the principle of detailed balance in a background for his quantum theory of emission and absorption of radiation.
In 1901, Rudolf Wegscheider introduced the principle of detailed balance for chemical kinetics. In particular, he demonstrated that the irreversible cycles A1 → A2 → ⋯ → An → A1 are impossible and found explicitly the relations between kinetic constants that follow from the principle of detailed balance. In 1931, Lars Onsager used these relations in his works, for which he was awarded the 1968 Nobel Prize in Chemistry.
The principle of detailed balance has been used in Markov chain Monte Carlo methods since their invention in 1953. In particular, in the Metropolis–Hastings algorithm and in its important particular case, Gibbs sampling, it is used as a simple and reliable condition to provide the desirable equilibrium state.
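The role detailed balance plays there is easy to see in code. The sketch below is a generic random-walk Metropolis sampler in Python: the acceptance probability min(1, p(x′)/p(x)) is chosen precisely so that p(x)T(x → x′) = p(x′)T(x′ → x), which makes p the equilibrium distribution. The example target is illustrative.

import math, random

def metropolis(log_p, x0, steps, step=1.0, rng=random.Random(0)):
    """Random-walk Metropolis; the acceptance rule enforces detailed balance."""
    x, samples = x0, []
    for _ in range(steps):
        proposal = x + rng.uniform(-step, step)        # symmetric proposal
        delta = log_p(proposal) - log_p(x)
        if rng.random() < math.exp(min(0.0, delta)):   # accept w.p. min(1, p'/p)
            x = proposal
        samples.append(x)                              # a rejection repeats x
    return samples

# Illustrative target: standard normal, log p(x) = -x**2/2 up to a constant.
draws = metropolis(lambda x: -x * x / 2, x0=0.0, steps=10_000)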
The principle of detailed balance is now a standard part of university courses in statistical mechanics, physical chemistry, and chemical and physical kinetics.
Microscopic background
The microscopic "reversing of time" turns at
|
https://en.wikipedia.org/wiki/Generic%20filter
|
In the mathematical field of set theory, a generic filter is a kind of object used in the theory of forcing, a technique used for many purposes, but especially to establish the independence of certain propositions from certain formal theories, such as ZFC. For example, Paul Cohen used forcing to establish that ZFC, if consistent, cannot prove the continuum hypothesis, which states that there are exactly aleph-one real numbers. In the contemporary re-interpretation of Cohen's proof, it proceeds by constructing a generic filter that codes more than ℵ1 reals, without changing the value of ℵ1.
Formally, let P be a partially ordered set, and let F be a filter on P; that is, F is a subset of P such that:
F is nonempty
If p, q ∈ P and p ≤ q and p is an element of F, then q is an element of F (F is closed upward)
If p and q are elements of F, then there is an element r of F such that r ≤ p and r ≤ q (F is downward directed)
Now if D is a collection of dense open subsets of P, in the topology whose basic open sets are all sets of the form {q | q ≤ p} for particular p in P, then F is said to be D-generic if F meets all sets in D; that is,
F ∩ E ≠ ∅ for all E ∈ D.
Similarly, if M is a transitive model of ZFC (or some sufficient fragment thereof), with P an element of M, then F is said to be M-generic, or sometimes generic over M, if F meets all dense open subsets of P that are elements of M.
See also
in computability
References
Forcing (mathematics)
|
https://en.wikipedia.org/wiki/Physical%20plant
|
Physical plant, mechanical plant or industrial plant (and where context is given, often just plant) refers to the necessary infrastructure used in operation and maintenance of a given facility. The operation of these facilities, or the department of an organization which does so, is called "plant operations" or facility management. Industrial plant should not be confused with "manufacturing plant" in the sense of "a factory". This is a holistic look at the architecture, design, equipment, and other peripheral systems linked with a plant required to operate or maintain it.
Power plants
Nuclear power
The design and equipment in a nuclear power plant have, for the most part, remained stagnant over the last 30 years. There are three types of reactor cooling mechanisms: “Light water reactors, Liquid Metal Reactors and High Temperature Gas-Cooled Reactors”. While for the most part equipment remains the same, there have been some minimal modifications to existing reactors improving safety and efficiency. There have also been significant design changes for all these reactors; however, they remain theoretical and unimplemented. Nuclear power plant equipment can be separated into two categories: primary systems and balance-of-plant systems. Primary systems are equipment involved in the production and safety of nuclear power. The reactor specifically has equipment such as reactor vessels, usually surrounding the core for protection, and the reactor core, which holds fuel rods. It also includes reactor cooling equipment consisting of liquid cooling loops circulating coolant. These loops are usually separate systems, each having at least one pump. Other equipment includes steam generators and pressurizers that ensure pressure in the plant is adjusted as needed. Containment equipment is the physical structure built around the reactor to protect the surroundings from reactor failure. Lastly, primary systems also include emergency core cooling equipment and reactor protection equipmen
|
https://en.wikipedia.org/wiki/Two-vector
|
A two-vector or bivector is a tensor of type (2,0), and it is the dual of a two-form, meaning that it is a linear functional which maps two-forms to the real numbers (or more generally, to scalars).
The tensor product of a pair of vectors is a two-vector. Then, any two-vector can be expressed as a linear combination of tensor products of pairs of vectors, especially a linear combination of tensor products of pairs of basis vectors. If f is a two-vector, then
f = f^{αβ} e_α ⊗ e_β,
where the f^{αβ} are the components of the two-vector. Notice that both indices of the components are contravariant. This is always the case for two-vectors, by definition. A bivector may operate on a one-form, yielding a vector, although a problem might be which of the upper indices of the bivector to contract with. (This problem does not arise with mixed tensors, because only one of such a tensor's indices is upper.) However, if the bivector is symmetric, then the choice of index to contract with is indifferent.
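A small NumPy illustration of this point (assuming NumPy; the component values are arbitrary): contracting a one-form with either upper index of a symmetric bivector gives the same vector.

import numpy as np

f = np.array([[0.0, 2.0],
              [2.0, 1.0]])          # components f^{ab} of a symmetric bivector
sigma = np.array([3.0, -1.0])       # components sigma_b of a one-form

v1 = np.einsum("ab,a->b", f, sigma)  # contract sigma with the first index
v2 = np.einsum("ab,b->a", f, sigma)  # contract sigma with the second index

assert np.allclose(v1, v2)           # equal because f is symmetric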
An example of a bivector is the stress–energy tensor. Another is the inverse of the metric tensor.
Matrix notation
If one assumes that vectors may only be represented as column matrices and covectors as row matrices, then, since a square matrix operating on a column vector must yield a column vector, it follows that square matrices can only represent mixed tensors. However, there is nothing in the abstract algebraic definition of a matrix that says such assumptions must be made. Dropping that assumption, matrices can be used to represent bivectors as well as two-forms.
If f is symmetric, i.e. f^{αβ} = f^{βα}, then its representing matrix is equal to its own transpose.
See also
Two-point tensor
Bivector § Tensors and matrices (but note that the stress–energy tensor is symmetric, not skew-symmetric)
Dyadics
References
Tensors
|
https://en.wikipedia.org/wiki/Numerical%20taxonomy
|
Numerical taxonomy is a classification system in biological systematics which deals with the grouping by numerical methods of taxonomic units based on their character states. It aims to create a taxonomy using numeric algorithms like cluster analysis rather than using subjective evaluation of their properties. The concept was first developed by Robert R. Sokal and Peter H. A. Sneath in 1963 and later elaborated by the same authors. They divided the field into phenetics in which classifications are formed based on the patterns of overall similarities and cladistics in which classifications are based on the branching patterns of the estimated evolutionary history of the taxa.
Although intended as an objective method, in practice the choice and the implicit or explicit weighting of characteristics are influenced by the available data and the research interests of the investigator. What the approach made objective was the introduction of explicit, repeatable steps for creating dendrograms and cladograms by numerical methods, rather than by subjective synthesis of data.
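The explicit numerical steps can be illustrated in a few lines of Python, assuming SciPy; the character matrix here is made up. Character states are coded numerically, pairwise distances are computed, and a dendrogram is built by average-linkage cluster analysis (UPGMA).

import numpy as np
from scipy.cluster.hierarchy import dendrogram, linkage
from scipy.spatial.distance import pdist

# Rows: operational taxonomic units; columns: coded character states.
characters = np.array([
    [1, 0, 1, 1, 0],   # OTU A
    [1, 0, 1, 0, 0],   # OTU B
    [0, 1, 0, 1, 1],   # OTU C
    [0, 1, 0, 1, 0],   # OTU D
])

distances = pdist(characters, metric="hamming")  # overall dissimilarity
tree = linkage(distances, method="average")      # UPGMA clustering
dendrogram(tree, labels=["A", "B", "C", "D"])    # explicit, repeatable steps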
See also
Computational phylogenetics
Taxonomy (biology)
References
Taxonomy (biology)
|
https://en.wikipedia.org/wiki/Property%20of%20Baire
|
A subset A of a topological space X has the property of Baire (Baire property, named after René-Louis Baire), or is called an almost open set, if it differs from an open set by a meager set; that is, if there is an open set U such that A △ U is meager (where △ denotes the symmetric difference).
Definitions
A subset A of a topological space X is called almost open and is said to have the property of Baire or the Baire property if there is an open set U ⊆ X such that A △ U is a meager subset, where △ denotes the symmetric difference. Further, A has the Baire property in the restricted sense if for every subset E of X the intersection A ∩ E has the Baire property relative to E.
Properties
The family of sets with the property of Baire forms a σ-algebra. That is, the complement of an almost open set is almost open, and any countable union or intersection of almost open sets is again almost open. Since every open set is almost open (the empty set is meager), it follows that every Borel set is almost open.
If a subset of a Polish space has the property of Baire, then its corresponding Banach–Mazur game is determined. The converse does not hold; however, if every game in a given adequate pointclass Γ is determined, then every set in Γ has the property of Baire. Therefore, it follows from projective determinacy, which in turn follows from sufficient large cardinals, that every projective set (in a Polish space) has the property of Baire.
It follows from the axiom of choice that there are sets of reals without the property of Baire. In particular, the Vitali set does not have the property of Baire. Already weaker versions of choice are sufficient: the Boolean prime ideal theorem implies that there is a nonprincipal ultrafilter on the set of natural numbers; each such ultrafilter induces, via binary representations of reals, a set of reals without the Baire property.
See also
References
External links
Springer Encyclopaedia of Mathematics article on Baire property
Descriptive set theory
De
|
https://en.wikipedia.org/wiki/Memorex
|
Memorex Corp. began as a computer tape producer and expanded to become both a consumer media supplier and a major IBM plug compatible peripheral supplier. It was broken up and ceased to exist after 1996 other than as a consumer electronics brand specializing in disk recordable media for CD and DVD drives, flash memory, computer accessories and other electronics.
History and evolution
Established in 1961 in Silicon Valley, Memorex started by selling computer tapes, then added other media such as disk packs. The company then expanded into disk drives and other peripheral equipment for IBM mainframes. During the 1970s and into the early 1980s, Memorex was worldwide one of the largest independent suppliers of disk drives and communications controllers to users of IBM-compatible mainframes, as well as media for computer uses and consumers. The company's name is a portmanteau of "memory excellence".
Memorex entered the consumer media business in 1971 and began its advertising campaign, first with its "shattering glass" advertisements and then with a series of famous television commercials featuring Ella Fitzgerald. In the commercials, she would sing a note that shattered a glass while being recorded to a Memorex audio cassette. The tape was played back and the recording also broke the glass, prompting the question "Is it live, or is it Memorex?" This became the company slogan, used in a series of advertisements released through the 1970s and 1980s.
In 1982, Memorex was bought by Burroughs for its enterprise businesses; the company's consumer business, a small segment of its revenue at that time, was sold to Tandy. Over the next six years, Burroughs and its successor Unisys shut down, sold off or spun out the various remaining parts of Memorex.
The computer media, communications and IBM end user sales and service organization were spun out as Memorex International. In 1988, Memorex International acquired the Telex Corporation becoming Memorex Telex NV, a corporatio
|
https://en.wikipedia.org/wiki/Sector%20mass%20spectrometer
|
A sector instrument is a general term for a class of mass spectrometer that uses a static electric (E) or magnetic (B) sector or some combination of the two (separately in space) as a mass analyzer. Popular combinations of these sectors have been the EB, BE (of so-called reverse geometry), three-sector BEB and four-sector EBEB (electric-magnetic-electric-magnetic) instruments. Most modern sector instruments are double-focusing instruments (first developed by Francis William Aston, Arthur Jeffrey Dempster, Kenneth Bainbridge and Josef Mattauch in 1936) in that they focus the ion beams both in direction and velocity.
Theory
The behavior of ions in a homogeneous, linear, static electric or magnetic field (separately) as found in a sector instrument is simple. The physics is described by a single equation, the Lorentz force law:
F = q(E + v × B),
where E is the electric field strength, B is the magnetic field induction, q is the charge of the particle, v is its current velocity (expressed as a vector), and × is the cross product. This equation is the fundamental equation of all mass spectrometric techniques; it applies in non-linear, non-homogeneous cases too, and is an important equation in the field of electrodynamics in general.
So the force on an ion in a linear homogeneous electric field (an electric sector) is:
F = qE,
in the direction of the electric field for positive ions and opposite to it for negative ions.
The force depends only on the charge of the ion and the electric field strength. The lighter ions will be deflected more and heavier ions less, due to the difference in inertia, and the ions will physically separate from each other in space into distinct beams of ions as they exit the electric sector.
And the force on an ion in a linear homogeneous magnetic field (a magnetic sector) is:
F = qv × B,
perpendicular to both the magnetic field and the velocity vector of the ion itself, in the direction determined by the right-hand rule of cross products and the sign of the charge.
Th
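Setting the magnetic force qvB equal to the centripetal force mv²/r gives the radius of the ion's circular path, r = mv/(qB); this is what lets a magnetic sector separate masses. A quick Python check with illustrative numbers:

# r = m*v / (q*B): heavier ions (same speed and charge) follow wider arcs.
def sector_radius(mass_kg, speed_m_s, charge_c, b_tesla):
    return mass_kg * speed_m_s / (charge_c * b_tesla)

u = 1.660539e-27    # atomic mass unit in kg
e = 1.602177e-19    # elementary charge in C

r100 = sector_radius(100 * u, 1.0e5, e, 0.5)   # 100 u ion
r101 = sector_radius(101 * u, 1.0e5, e, 0.5)   # 101 u ion
print(r100, r101)   # about 0.207 m vs 0.209 m: a measurable separation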
|
https://en.wikipedia.org/wiki/Skype%20Technologies
|
Skype Technologies S.A.R.L (also known as Skype Software S.A.R.L, Skype Communications S.A.R.L, Skype Inc., and Skype Limited) is a telecommunications company headquartered in Luxembourg City, Luxembourg, whose chief business is the manufacturing and marketing of the video chat and instant messaging computer software program Skype, and various Internet telephony services associated with it. Microsoft purchased the company in 2011, and it has since then operated as their wholly owned subsidiary; as of 2016, it is operating as part of Microsoft's Office Product Group. The company is a Société à responsabilité limitée, or SARL, equivalent to an American limited liability company.
Skype, a voice over IP (VoIP) service, was first released in 2003 as a way to make free computer-to-computer calls, or reduced-rate calls from a computer to telephones. Support for paid services such as calling landline/mobile phones from Skype (formerly called SkypeOut), allowing landline/mobile phones to call Skype (formerly called SkypeIn and now Skype Number), and voice messaging generates the majority of Skype's revenue.
eBay acquired Skype Technologies S.A. in September 2005 and in April 2009 announced plans to spin it off in a 2010 initial public offering (IPO). In September 2009, Silver Lake, Andreessen Horowitz and the Canada Pension Plan Investment Board announced the acquisition of 65% of Skype for $1.9 billion from eBay, valuing the business at $2.75 billion. Skype was acquired by Microsoft in May 2011 for $8.5 billion.
As of 2010, Skype was available in 27 languages and had 660 million worldwide users, an average of over 100 million active each month. It has faced challenges to its intellectual property amid political concerns by governments wishing to control telecommunications systems within their borders.
History
Skype was founded in 2003 by Janus Friis from Denmark and Niklas Zennström from Sweden, having its headquarters in Luxembourg with offices now in Berli
|
https://en.wikipedia.org/wiki/Koha%20%28software%29
|
Koha is an open-source integrated library system (ILS), used world-wide by public, school and special libraries. The name comes from a Māori term for a gift or donation.
Features
Koha is a web-based ILS, with a SQL database (MariaDB or MySQL preferred) back end and cataloguing data stored in MARC and accessible via Z39.50 or SRU. The user interface is highly configurable and adaptable and has been translated into many languages. Koha has most of the features expected in an ILS, including:
Various Web 2.0 facilities like tagging, commenting, social sharing and RSS feeds
Union catalog facility
Customizable search
Online circulation
Bar code printing
Patron card creation
Report generation
Patron self registration form through OPAC
History
Koha was created in 1999 by Katipo Communications for the Horowhenua Library Trust in New Zealand, and the first installation went live in January 2000.
From 2000, companies started providing commercial support for Koha, building to more than 50 today.
In 2001, Paul Poulain (of Marseille, France) began adding many new features to Koha, most significantly support for multiple languages. By 2010, Koha had been translated from its original English into French, Chinese, Arabic and several other languages. Support for the cataloguing and search standards MARC and Z39.50 was added in 2002 and later sponsored by the Athens County Public Libraries. Poulain co-founded BibLibre in 2007.
In 2005, an Ohio-based company, Metavore, Inc., trading as LibLime, was established to support Koha and added many new features, including support for Zebra sponsored by the Crawford County Federated Library System. Zebra support increased the speed of searches as well as improving scalability to support tens of millions of bibliographic records.
In 2007 a group of libraries in Vermont began testing the use of Koha for Vermont libraries. At first a separate implementation was created for each library. Then the Vermont Organization of K
|
https://en.wikipedia.org/wiki/Geometry%20processing
|
Geometry processing, or mesh processing, is an area of research that uses concepts from applied mathematics, computer science and engineering to design efficient algorithms for the acquisition, reconstruction, analysis, manipulation, simulation and transmission of complex 3D models. As the name implies, many of the concepts, data structures, and algorithms are directly analogous to signal processing and image processing. For example, where image smoothing might convolve an intensity signal with a blur kernel formed using the Laplace operator, geometric smoothing might be achieved by convolving a surface geometry with a blur kernel formed using the Laplace-Beltrami operator.
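The parallel is direct enough to show in a few lines of Python (assuming NumPy). Below is umbrella-operator Laplacian smoothing of a noisy closed curve, the curve analogue of blurring an image: each vertex moves toward the average of its neighbors.

import numpy as np

def laplacian_smooth(vertices, neighbors, lam=0.5, iterations=10):
    """Move each vertex toward its neighborhood average (discrete diffusion)."""
    v = vertices.astype(float).copy()
    for _ in range(iterations):
        avg = np.array([v[list(nbrs)].mean(axis=0) for nbrs in neighbors])
        v += lam * (avg - v)          # step toward the neighborhood average
    return v

# A noisy circle; each vertex's neighbors are its two ring neighbors.
t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
noisy = np.c_[np.cos(t), np.sin(t)]
noisy += np.random.default_rng(0).normal(0, 0.05, noisy.shape)
ring = [((i - 1) % 100, (i + 1) % 100) for i in range(100)]
smooth = laplacian_smooth(noisy, ring)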
Applications of geometry processing algorithms already cover a wide range of areas from multimedia, entertainment and classical computer-aided design, to biomedical computing, reverse engineering, and scientific computing.
Geometry processing is a common research topic at SIGGRAPH, the premier computer graphics academic conference, and the main topic of the annual Symposium on Geometry Processing.
Geometry processing as a life cycle
Geometry processing involves working with a shape, usually in 2D or 3D, although the shape can live in a space of arbitrary dimensions. The processing of a shape involves three stages, known as its life cycle. At its "birth," a shape can be instantiated through one of three methods: a model, a mathematical representation, or a scan. After a shape is born, it can be analyzed and edited repeatedly in a cycle. This usually involves acquiring different measurements, such as the distances between the points of the shape, the smoothness of the shape, or its Euler characteristic. Editing may involve denoising, deforming, or performing rigid transformations. At the final stage of the shape's "life," it is consumed. This can mean it is consumed by a viewer as a rendered asset in a game or movie, for instance. The end of a shape's life can also be defined by a de
|
https://en.wikipedia.org/wiki/Key%20Tronic
|
Key Tronic Corporation (branded Keytronic) is a technology company founded in 1969. Its core products initially included keyboards, mice and other input devices. Key Tronic currently specializes in PCBA and full product assembly, and is among the ten largest contract manufacturers providing electronic manufacturing services in the US. The company offers full product design and assembly for a wide variety of household goods and electronic products, such as keyboards, thermometers, toilet-bowl cleaners and satellite tracking systems, along with services such as printed circuit board assembly and plastic molding.
Keyboards
After the introduction of the IBM PC, Keytronic began manufacturing keyboards compatible with those computer system units.
Most of their keyboards use an 8048-family microcontroller to communicate with the computer. Early keyboards used an Intel 8048 MCU; as the company evolved, it began to use its own 8048-based and 83C51KB-based MCUs.
In 1978, Key Tronic introduced keyboards with capacitive-based switches, one of the first keyboard technologies not to use self-contained switches. There was simply a sponge pad with a conductive-coated Mylar plastic sheet on the switch plunger, and two half-moon trace patterns on the printed circuit board below. As the key was depressed, the capacitance between the plunger pad and the patterns on the PCB below changed, which was detected by integrated circuits (ICs). These keyboards were claimed to have the same reliability as other "solid-state switch" keyboards, such as inductive and Hall-effect designs, while remaining cost-competitive with direct-contact keyboards.
ErgoForce
Among modern keyboard enthusiasts, Keytronic is known mostly for its "ErgoForce" technology, where different keys have rubber domes with different stiffness. The alphabetic keys intended to be struck with the little finger need only 35 grams of force to actuate, while other alphabetic keys need 45 grams. Other keys can be as stiff as 80 grams.
Corpora
|
https://en.wikipedia.org/wiki/Z2%20%28computer%29
|
The Z2 was an electromechanical (mechanical and relay-based) digital computer that was completed by Konrad Zuse in 1940. It was an improvement on the Z1 Zuse built in his parents' home, which used the same mechanical memory. In the Z2, he replaced the arithmetic and control logic with 600 electrical relay circuits, weighing over 600 pounds.
The Z2 could read 64 words from punch cards. Photographs and plans for the Z2 were destroyed by the Allied bombing during World War II. In contrast to the Z1, the Z2 used 16-bit fixed-point arithmetic instead of 22-bit floating point.
Zuse presented the Z2 in 1940 to members of the DVL (today DLR); the support of one member helped fund the successor model Z3.
Specifications
See also
Z1
Z3
Z4
References
Further reading
External links
Z2 via Horst Zuse (son) web page
Electro-mechanical computers
Z02
Mechanical computers
Computer-related introductions in 1940
Konrad Zuse
German inventions of the Nazi period
1940s computers
Computers designed in Germany
|
https://en.wikipedia.org/wiki/Nuclear%20reaction%20analysis
|
Nuclear reaction analysis (NRA) is a method of nuclear spectroscopy used in materials science to obtain concentration-versus-depth distributions for certain target chemical elements in a solid thin film.
Mechanism of NRA
If irradiated with select projectile nuclei at kinetic energies Ekin, target solid thin-film chemical elements can undergo a nuclear reaction under resonance conditions for a sharply defined resonance energy. The reaction product is usually a nucleus in an excited state which immediately decays, emitting ionizing radiation.
To obtain depth information the initial kinetic energy of the projectile nucleus (which has to exceed the resonance energy) and its stopping power (energy loss per distance traveled) in the sample has to be known. To contribute to the nuclear reaction the projectile nuclei have to slow down in the sample to reach the resonance energy. Thus each initial kinetic energy corresponds to a depth in the sample where the reaction occurs (the higher the energy, the deeper the reaction).
NRA profiling of hydrogen
For example, a commonly used reaction to profile hydrogen with an energetic 15N ion beam is
15N + 1H → 12C + α + γ (4.43 MeV)
with a sharp resonance in the reaction cross section at 6.385 MeV, with a width of only 1.8 keV. Since the incident 15N ion loses energy along its trajectory in the material, it must have an energy higher than the resonance energy to induce the nuclear reaction with hydrogen nuclei deeper in the target.
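The depth scale follows directly from that energy-loss argument: the projectile must slow from its initial energy E0 down to the resonance energy, so with an (assumed constant) stopping power S the reaction occurs at depth (E0 − E_res)/S. A Python sketch with a hypothetical stopping-power value:

# Depth probed for a given beam energy, assuming a constant stopping power S.
E_RES_KEV = 6385.0          # 1H(15N,ag)12C resonance energy in keV
S_KEV_PER_NM = 1.5          # hypothetical stopping power of 15N in the sample

def probe_depth_nm(e0_kev):
    return max(0.0, (e0_kev - E_RES_KEV) / S_KEV_PER_NM)

# Scanning the beam energy upward probes hydrogen at increasing depths.
for e0 in (6400, 6700, 7000):
    print(e0, "keV ->", round(probe_depth_nm(e0)), "nm")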
This reaction is usually written 1H(15N,αγ)12C. It is inelastic because the Q-value is not zero (in this case it is 4.965 MeV). Rutherford backscattering (RBS) reactions are elastic (Q = 0), and the interaction (scattering) cross-section σ given by the famous formula derived by Lord Rutherford in 1911. But non-Rutherford cross-sections (so-called EBS, elastic backscattering spectrometry) can also be resonant: for example, the 16O(α,α)16O reaction has a strong and very useful resonance at 3038.1 ± 1.3 keV.
|
https://en.wikipedia.org/wiki/Labyrinth%3A%20The%20Computer%20Game
|
Labyrinth: The Computer Game is a 1986 graphic adventure game developed by Lucasfilm Games and published by Activision. Based on the fantasy film Labyrinth, it tasks the player with navigating a maze while solving puzzles and evading dangers. The player's goal is to find and defeat the main antagonist, Jareth, within 13 real-time hours. Unlike other adventure games of the period, Labyrinth does not feature a command-line interface. Instead, the player uses two scrolling "word wheel" menus on the screen to construct basic sentences.
Labyrinth was the first adventure game created by Lucasfilm. The project was led by designer David Fox, who invented its word wheels to avoid the text parsers and syntax guessing typical of text-based adventure games. Early in development, the team collaborated with author Douglas Adams in a week-long series of brainstorming sessions, which inspired much of the final product. Labyrinth received positive reviews and, in the United States, was a bigger commercial success than the film upon which it was based. Its design influenced Lucasfilm's subsequent adventure title, the critically acclaimed Maniac Mansion.
This game is entirely different from another game based on the same movie entitled "Labyrinth: Maō no Meikyū" ("Maze of the Goblin King"), released exclusively in Japan for the Famicom and MSX in 1987, developed by Atlus and published by Tokuma Shoten.
Overview
Labyrinth: The Computer Game is a graphic adventure game in which the player maneuvers a character through a maze while solving puzzles and evading dangers. It is an adaptation of the 1986 film Labyrinth, many of whose events and characters are reproduced in the game. However, it does not follow the plot of the film. At the beginning, the player enters their name, sex and favorite color: the last two fields determine the appearance of the player character. Afterward, a short text-based adventure sequence unfolds, wherein the player enters a movie theater to watch the film L
|
https://en.wikipedia.org/wiki/Term%20algebra
|
In universal algebra and mathematical logic, a term algebra is a freely generated algebraic structure over a given signature. For example, in a signature consisting of a single binary operation, the term algebra over a set X of variables is exactly the free magma generated by X. Other synonyms for the notion include absolutely free algebra and anarchic algebra.
From a category theory perspective, a term algebra is the initial object for the category of all X-generated algebras of the same signature, and this object, unique up to isomorphism, is called an initial algebra; it generates by homomorphic projection all algebras in the category.
A similar notion is that of a Herbrand universe in logic, usually used under this name in logic programming, which is (absolutely freely) defined starting from the set of constants and function symbols in a set of clauses. That is, the Herbrand universe consists of all ground terms: terms that have no variables in them.
An atomic formula or atom is commonly defined as a predicate applied to a tuple of terms; a ground atom is then a predicate in which only ground terms appear. The Herbrand base is the set of all ground atoms that can be formed from predicate symbols in the original set of clauses and terms in its Herbrand universe. These two concepts are named after Jacques Herbrand.
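The distinction between terms and ground terms is easy to make concrete. In the Python sketch below (the representation is illustrative, not standard), a term is a tree whose leaves are variables or constants; the ground terms are exactly the elements of the Herbrand universe.

from dataclasses import dataclass

@dataclass(frozen=True)
class Term:
    head: str            # a function symbol, constant, or variable name
    args: tuple = ()     # subterms; empty for variables and constants

def is_ground(t, variables):
    """True when the term contains no variables (a Herbrand-universe term)."""
    if t.head in variables and not t.args:
        return False
    return all(is_ground(s, variables) for s in t.args)

x, zero = Term("x"), Term("0")
t = Term("f", (zero, Term("f", (x, zero))))       # the term f(0, f(x, 0))
print(is_ground(t, {"x"}))                        # False: the variable x occurs
print(is_ground(Term("f", (zero, zero)), {"x"}))  # True: a ground term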
Term algebras also play a role in the semantics of abstract data types, where an abstract data type declaration provides the signature of a multi-sorted algebraic structure and the term algebra is a concrete model of the abstract declaration.
Universal algebra
A type τ is a set of function symbols, each having an associated arity (i.e. number of inputs). For any non-negative integer n, let τn denote the function symbols in τ of arity n. A constant is a function symbol of arity 0.
Let τ be a type, and let X be a non-empty set of symbols, representing the variable symbols. (For simplicity, assume τ and X are disjoint.) Then the set of terms of type ove
|
https://en.wikipedia.org/wiki/Ap%C3%A9ry%27s%20constant
|
In mathematics, Apéry's constant is the sum of the reciprocals of the cubes of the positive integers. That is, it is defined as the number
ζ(3) = 1 + 1/2³ + 1/3³ + 1/4³ + ⋯,
where ζ is the Riemann zeta function. It has an approximate value of
ζ(3) ≈ 1.202056903159594.
The constant is named after Roger Apéry. It arises naturally in a number of physical problems, including in the second- and third-order terms of the electron's gyromagnetic ratio using quantum electrodynamics. It also arises in the analysis of random minimum spanning trees and in conjunction with the gamma function when solving certain integrals involving exponential functions in a quotient, which appear occasionally in physics, for instance, when evaluating the two-dimensional case of the Debye model and the Stefan–Boltzmann law.
Irrational number
ζ(3) was named Apéry's constant after the French mathematician Roger Apéry, who proved in 1978 that it is an irrational number. This result is known as Apéry's theorem. The original proof is complex and hard to grasp, and simpler proofs were found later.
Beukers's simplified irrationality proof involves approximating the integrand of the known triple integral for ζ(3),
∫₀¹∫₀¹∫₀¹ 1/(1 − xyz) dx dy dz,
by the Legendre polynomials. In particular, van der Poorten's article chronicles this approach, noting that the resulting approximations involve the Legendre polynomials Pn(z) and subsequences that are integers or almost integers.
It is still not known whether Apéry's constant is transcendental.
Series representations
Classical
In addition to the fundamental series given in the definition above, Leonhard Euler gave a further series representation in 1772, which was subsequently rediscovered several times.
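Directly summing the defining series illustrates both the constant and why such accelerated series matter: the tail beyond N terms contributes roughly 1/(2N²), so even millions of terms yield only about a dozen digits. A quick Python check:

from math import fsum

N = 2_000_000
zeta3 = fsum(1 / n**3 for n in range(1, N + 1))   # partial sum of 1/n^3
print(f"{zeta3:.12f}")   # 1.202056903159..., short of the true value by ~1/(2*N**2)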
Fast convergence
Since the 19th century, a number of mathematicians have found convergence acceleration series for calculating decimal places of . Since the 1990s, this search has focused on computationally efficient series with fast convergence rates (see section "Known digits").
The following series representation was found by A. A. Markov in 1890, rediscovered by Hjortnaes in 1953, and rediscovered once more
|
https://en.wikipedia.org/wiki/Native%20capacity
|
In computing, native capacity refers to the uncompressed storage capacity of a medium that is usually marketed in compressed sizes. For example, tape cartridges are rated in compressed capacity, which usually assumes a 2:1 compression ratio over the native capacity.
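The arithmetic is simple division, sketched here in Python with illustrative figures:

def native_capacity_gb(rated_compressed_gb, assumed_ratio=2.0):
    """Uncompressed capacity implied by a compressed marketing figure."""
    return rated_compressed_gb / assumed_ratio

print(native_capacity_gb(1600))   # a "1.6 TB (2:1 compressed)" tape holds 800 GB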
References
Computer storage media
|
https://en.wikipedia.org/wiki/Linear%20amplifier
|
A linear amplifier is an electronic circuit whose output is proportional to its input, but capable of delivering more power into a load. The term usually refers to a type of radio-frequency (RF) power amplifier, some of which have output power measured in kilowatts and are used in amateur radio. Other types of linear amplifier are used in audio and laboratory equipment. Linearity refers to the ability of the amplifier to produce signals that are accurate copies of the input. A linear amplifier responds to different frequency components independently and tends not to generate harmonic distortion or intermodulation distortion. No amplifier can provide perfect linearity, however, because the amplifying devices (transistors or vacuum tubes) have nonlinear transfer functions and rely on circuit techniques to reduce those effects. There are a number of amplifier classes providing various trade-offs between implementation cost, efficiency, and signal accuracy.
Explanation
Linearity refers to the ability of the amplifier to produce signals that are accurate copies of the input, generally at increased power levels. Load impedance, supply voltage, input base current, and power output capabilities can affect the efficiency of the amplifier.
Class-A amplifiers can be designed to have good linearity in both single ended and push-pull topologies. Amplifiers of classes AB1, AB2 and B can be linear only when a tuned tank circuit is employed, or in the push-pull topology, in which two active elements (tubes, transistors) are used to amplify positive and negative parts of the RF cycle respectively. Class-C amplifiers are not linear in any topology.
Amplifier classes
There are a number of amplifier classes providing various trade-offs between implementation cost, efficiency, and signal accuracy. Their use in RF applications is listed briefly below:
Class-A amplifiers are very inefficient; they can never have an efficiency better than 50%. The semiconductor or vacuum tube cond
|
https://en.wikipedia.org/wiki/Circular%20symmetry
|
In geometry, circular symmetry is a type of continuous symmetry for a planar object that can be rotated by any arbitrary angle and map onto itself.
Rotational circular symmetry is isomorphic with the circle group in the complex plane, or the special orthogonal group SO(2), and unitary group U(1). Reflective circular symmetry is isomorphic with the orthogonal group O(2).
Two dimensions
A 2-dimensional object with circular symmetry would consist of concentric circles and annular domains.
Rotational circular symmetry has all cyclic symmetry, Zn as subgroup symmetries. Reflective circular symmetry has all dihedral symmetry, Dihn as subgroup symmetries.
Three dimensions
In 3-dimensions, a surface or solid of revolution has circular symmetry around an axis, also called cylindrical symmetry or axial symmetry. An example is a right circular cone. Circular symmetry in 3 dimensions has all pyramidal symmetry, Cnv as subgroups.
A double-cone, bicone, cylinder, toroid and spheroid have circular symmetry, and in addition have a bilateral symmetry perpendicular to the axis of the system (or half-cylindrical symmetry). These reflective circular symmetries have all discrete prismatic symmetries, Dnh, as subgroups.
Four dimensions
In four dimensions, an object can have circular symmetry, on two orthogonal axis planes, or duocylindrical symmetry. For example, the duocylinder and Clifford torus have circular symmetry in two orthogonal axes. A spherinder has spherical symmetry in one 3-space, and circular symmetry in the orthogonal direction.
Spherical symmetry
An analogous 3-dimensional equivalent term is spherical symmetry.
Rotational spherical symmetry is isomorphic with the rotation group SO(3), and can be parametrized by the Davenport chained rotations pitch, yaw, and roll. Rotational spherical symmetry has all the discrete chiral 3D point groups as subgroups. Reflectional spherical symmetry is isomorphic with the orthogonal group O(3) and has the 3-dimensional discrete po
|
https://en.wikipedia.org/wiki/Solenoid%20%28mathematics%29
|
This page discusses a class of topological groups. For the wrapped loop of wire, see Solenoid.
In mathematics, a solenoid is a compact connected topological space (i.e. a continuum) that may be obtained as the inverse limit of an inverse system of topological groups and continuous homomorphisms
where each Ci is a circle and fi is the map that uniformly wraps the circle Ci+1 ni+1 times (ni+1 ≥ 2) around the circle Ci. This construction can be carried out geometrically in the three-dimensional Euclidean space R3. A solenoid is a one-dimensional homogeneous indecomposable continuum that has the structure of a compact topological group.
Solenoids were first introduced by Vietoris for the case ni = 2, and by van Dantzig for the case ni = p, where p ≥ 2 is fixed. Such a solenoid arises as a one-dimensional expanding attractor, or Smale–Williams attractor, and forms an important example in the theory of hyperbolic dynamical systems.
Construction
Geometric construction and the Smale–Williams attractor
Each solenoid may be constructed as the intersection of a nested system of embedded solid tori in R3.
Fix a sequence of natural numbers {ni}, ni ≥ 2. Let T0 = S1 × D be a solid torus. For each i ≥ 0, choose a solid torus Ti+1 that is wrapped longitudinally ni times inside the solid torus Ti. Then their intersection
is homeomorphic to the solenoid constructed as the inverse limit of the system of circles with the maps determined by the sequence {ni}.
Here is a variant of this construction isolated by Stephen Smale as an example of an expanding attractor in the theory of smooth dynamical systems. Denote the angular coordinate on the circle S1 by t (it is defined mod 2π) and consider the complex coordinate z on the two-dimensional unit disk D. Let f be the map of the solid torus T = S1 × D into itself given by the explicit formula
f(t, z) = (2t, z/4 + e^{it}/2).
This map is a smooth embedding of T into itself that preserves the foliation by meridional disks (the constants 1/2 and 1/4 are somewhat arbitrary, but it is essential tha
|
https://en.wikipedia.org/wiki/STN%20display
|
A super-twisted nematic (STN) display is a type of monochrome passive-matrix liquid crystal display (LCD).
History
This type of LCD was first patented by C. M. Waters and E. P. Raynes in 1982, while related work was also conducted at the Brown Boveri Research Center in Baden, Switzerland, in 1983. For years a better scheme for multiplexing had been sought: standard twisted nematic (TN) LCDs, with a 90-degree twisted structure of the molecules, have a contrast-versus-voltage characteristic unfavorable for passive-matrix addressing, as there is no distinct threshold voltage. STN displays, with the molecules twisted from 180 to 270 degrees, have superior characteristics.
Features
The main advantage of STN LCDs is their more pronounced electro-optical threshold, which allows passive-matrix addressing with many more lines and columns. The first prototype STN matrix display, with 540×270 pixels, was made by Brown Boveri (today ABB) in 1984 and was considered a breakthrough for the industry.
STN LCDs require less power and are less expensive to manufacture than TFT LCDs, another popular type of LCD that has largely superseded STN for mainstream laptops. STN displays typically suffer from lower image quality and slower response time than TFT displays. However, STN LCDs can be made purely reflective for viewing under direct sunlight. STN displays are used in some inexpensive mobile phones and on the informational screens of some digital products. In the early 1990s, they were used in some portable computers such as Amstrad's PPC512 and PPC640, and in Nintendo's Game Boy.
Variants
CSTN (color super-twist nematic) is a color form for electronic display screens originally developed by Sharp Electronics. The CSTN uses red, green and blue filters to display color. The original CSTN displays developed in the early 1990s suffered from slow response times and ghosting (where text or graphic changes are blurred because the pixels cannot turn off and on fast enough). Recent advances in
|
https://en.wikipedia.org/wiki/Reverse%20video
|
Reverse video (or invert video, inverse video, or reverse screen) is a computer display technique whereby the background and text color values are inverted. On older computers, displays were usually designed to display text on a black background by default. For emphasis, the color scheme was swapped to a bright background with dark text. Nowadays the two tend to be switched, since most computers default to white as a background color. The opposite of reverse video is known as true video.
Video is usually reversed by inverting the brightness values of the pixels in the involved region of the display. If there are 256 levels of brightness, encoded as 0 to 255, the value 255 becomes 0 and vice versa; a value of 1 becomes 254, 2 becomes 253, and so on: n is swapped for r − n, where r is the maximum brightness value. This is occasionally called a ones' complement. If the source image is of middle brightness, reverse video can be difficult to see: 127 becomes 128, for example, which is only one level of brightness different. The computer displays where the technique was most commonly used were monochrome and displayed only two values, so this issue seldom arose.
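A minimal Python rendering of that rule, with the middle-brightness caveat visible in the example values:

def reverse_video(pixels, levels=256):
    """Swap each brightness n for r - n, where r is the maximum value."""
    r = levels - 1
    return [r - n for n in pixels]

# 255 <-> 0 and 1 <-> 254, but mid-grey 127 maps to 128: nearly invisible,
# which is why the effect works best on two-level (monochrome) displays.
assert reverse_video([0, 1, 127, 255]) == [255, 254, 128, 0]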
Reverse video is commonly used in software programs as a visual aid to highlight a selection that has been made as an aid in preventing description errors, where an intended action is performed on an object that is not the one intended. It is more common in modern desktop environments to change the background to other colors such as blue, or to use a semi-transparent background to "highlight" the selected text.
On a terminal understanding ANSI escape sequences, the reverse video function is activated using the escape sequence CSI 7 m (which equals SGR 7).
Accessibility
Reverse video is also sometimes used for accessibility reasons. When most computer displays were light-on-dark, it was found that users looking back and forth between a white paper and dark screen would experience eyestrain due to their pupils constantly dilating and c
|
https://en.wikipedia.org/wiki/Crystallographic%20restriction%20theorem
|
The crystallographic restriction theorem in its basic form was based on the observation that the rotational symmetries of a crystal are usually limited to 2-fold, 3-fold, 4-fold, and 6-fold. However, quasicrystals can occur with other diffraction-pattern symmetries, such as 5-fold; these were not discovered until 1982, by Dan Shechtman.
Crystals are modeled as discrete lattices, generated by a list of independent finite translations . Because discreteness requires that the spacings between lattice points have a lower bound, the group of rotational symmetries of the lattice at any point must be a finite group (alternatively, the point is the only system allowing for infinite rotational symmetry). The strength of the theorem is that not all finite groups are compatible with a discrete lattice; in any dimension, we will have only a finite number of compatible groups.
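Before the geometric argument below, the restriction can also be checked numerically via the standard trace argument (not the lattice construction that follows): in a lattice basis a rotation is represented by an integer matrix, so its trace 2·cos(2π/n) must be an integer, which singles out the familiar orders.

import math

def lattice_compatible(n):
    """A 2D rotation of order n fits a lattice only if 2*cos(2*pi/n) is an integer."""
    trace = 2 * math.cos(2 * math.pi / n)
    return abs(trace - round(trace)) < 1e-9

print([n for n in range(1, 50) if lattice_compatible(n)])   # [1, 2, 3, 4, 6]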
Dimensions 2 and 3
The special cases of 2D (wallpaper groups) and 3D (space groups) are most heavily used in applications, and they can be treated together.
Lattice proof
A rotation symmetry in dimension 2 or 3 must move a lattice point to a succession of other lattice points in the same plane, generating a regular polygon of coplanar lattice points. We now confine our attention to the plane in which the symmetry acts.
Now consider an 8-fold rotation, and the displacement vectors between adjacent points of the polygon. If a displacement exists between any two lattice points, then that same displacement is repeated everywhere in the lattice. So collect all the edge displacements to begin at a single lattice point. The edge vectors become radial vectors, and their 8-fold symmetry implies a regular octagon of lattice points around the collection point. But this is impossible, because the new octagon is about 80% as large as the original. The significance of the shrinking is that it is unlimited. The same construction can be repeated with the new octagon,
|
https://en.wikipedia.org/wiki/Gecos%20field
|
The gecos field, or GECOS field, is a field in each record in the /etc/passwd file on Unix and similar operating systems. On UNIX, it is the 5th of 7 fields in a record.
It is typically used to record general information about the account or its user(s) such as their real name and phone number.
Format
The typical format for the GECOS field is a comma-delimited list with this order:
User's full name (or application name, if the account is for a program)
Building and room number or contact person
Office telephone number
Home telephone number
Any other contact information (pager number, fax, external e-mail address, etc.)
In most UNIX systems non-root users can change their own information using the chfn or chsh command.
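A short Python sketch of reading and splitting the field via the standard pwd module (Unix-only; the sub-field names simply follow the convention listed above):

import pwd

FIELDS = ("full_name", "room", "office_phone", "home_phone", "other")

def parse_gecos(gecos):
    """Split a comma-delimited GECOS string into the conventional sub-fields."""
    parts = gecos.split(",")
    parts += [""] * (len(FIELDS) - len(parts))   # pad missing trailing fields
    return dict(zip(FIELDS, parts))

for entry in pwd.getpwall():                 # one entry per /etc/passwd record
    info = parse_gecos(entry.pw_gecos)       # pw_gecos is the 5th field
    print(entry.pw_name, "->", info["full_name"])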
History
Some early Unix systems at Bell Labs used GECOS machines for print spooling and various other services, so this field was added to carry information on a user's GECOS identity.
Other uses
On Internet Relay Chat (IRC), the real name field is sometimes referred to as the gecos field. IRC clients are able to supply this field when connecting. Hexchat, an X-Chat fork, defaults to 'realname', TalkSoup.app on GNUstep defaults to 'John Doe', and irssi reads the operating system user's full name, replacing it with 'unknown' if not defined. Some IRC clients use this field for advertising; for example, ZNC defaulted to "Got ZNC?", but changed it to "RealName = " to match its configuration syntax in 2015.
See also
General Comprehensive Operating System
References
Unix
|
https://en.wikipedia.org/wiki/Polymersome
|
In biotechnology, polymersomes are a class of artificial vesicles, tiny hollow spheres that enclose a solution. Polymersomes are made using amphiphilic synthetic block copolymers to form the vesicle membrane, and have radii ranging from 50 nm to 5 µm or more. Most reported polymersomes contain an aqueous solution in their core and are useful for encapsulating and protecting sensitive molecules, such as drugs, enzymes, other proteins and peptides, and DNA and RNA fragments. The polymersome membrane provides a physical barrier that isolates the encapsulated material from external materials, such as those found in biological systems.
Synthosomes are polymersomes engineered to contain channels (transmembrane proteins) that allow certain chemicals to pass through the membrane, into or out of the vesicle. This allows for the collection or enzymatic modification of these substances.
The term "polymersome" for vesicles made from block copolymers was coined in 1999. Polymersomes are similar to liposomes, which are vesicles formed from naturally occurring lipids. While having many of the properties of natural liposomes, polymersomes exhibit increased stability and reduced permeability. Furthermore, the use of synthetic polymers enables designers to manipulate the characteristics of the membrane and thus control permeability, release rates, stability and other properties of the polymersome.
Preparation
Block copolymers of several different morphologies have been used to create polymersomes. The most frequently used are linear diblock or triblock copolymers. In these cases, the block copolymer has one block that is hydrophobic; the other block or blocks are hydrophilic. Other morphologies used include comb copolymers, where the backbone block is hydrophilic and the comb branches are hydrophobic, and dendronized block copolymers, where the dendrimer portion is hydrophilic.
In the case of diblock, comb and dendronized copolymers the polymersome membrane has the
|
https://en.wikipedia.org/wiki/Blum%20integer
|
In mathematics, a natural number n is a Blum integer if n = p × q is a semiprime for which p and q are distinct prime numbers congruent to 3 mod 4. That is, p and q must be of the form 4t + 3, for some integer t. Integers of this form are referred to as Blum primes. This means that the factors of a Blum integer are Gaussian primes with no imaginary part. The first few Blum integers are
21, 33, 57, 69, 77, 93, 129, 133, 141, 161, 177, 201, 209, 213, 217, 237, 249, 253, 301, 309, 321, 329, 341, 381, 393, 413, 417, 437, 453, 469, 473, 489, 497, ...
The integers were named for computer scientist Manuel Blum.
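A quick way to reproduce the list above, assuming SymPy for factoring:

from sympy import factorint

def is_blum(n):
    """True when n = p*q for distinct primes p, q with p % 4 == q % 4 == 3."""
    factors = factorint(n)                  # maps each prime to its exponent
    if sorted(factors.values()) != [1, 1]:
        return False                        # not a product of two distinct primes
    return all(p % 4 == 3 for p in factors)

print([n for n in range(2, 140) if is_blum(n)])
# [21, 33, 57, 69, 77, 93, 129, 133]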
Properties
Given a Blum integer n, let Qn be the set of all quadratic residues modulo n that are coprime to n, and let a ∈ Qn. Then:
a has four square roots modulo n, exactly one of which is also in Qn
The unique square root of a in Qn is called the principal square root of a modulo n
The function f : Qn → Qn defined by f(x) = x² mod n is a permutation. The inverse function of f is f⁻¹(x) = x^(((p−1)(q−1)+4)/8) mod n.
For every Blum integer n, −1 has a Jacobi symbol mod n of +1, although −1 is not a quadratic residue of n.
History
Before modern factoring algorithms, such as MPQS and NFS, were developed, it was thought to be useful to select Blum integers as RSA moduli. This is no longer regarded as a useful precaution, since MPQS and NFS are able to factor Blum integers with the same ease as RSA moduli constructed from randomly selected primes.
References
Integer sequences
|
https://en.wikipedia.org/wiki/Mental%20poker
|
Mental poker is the common name for a set of cryptographic problems that concerns playing a fair game over distance without the need for a trusted third party. The term is also applied to the theories surrounding these problems and their possible solutions. The name comes from the card game poker which is one of the games to which this kind of problem applies. Similar problems described as two party games are Blum's flipping a coin over a distance, Yao's Millionaires' Problem, and Rabin's oblivious transfer.
The problem can be described thus: "How can one allow only authorized actors to have access to certain information while not using a trusted arbiter?" (Eliminating the trusted third-party avoids the problem of trying to determine whether the third party can be trusted or not, and may also reduce the resources required.)
In poker, this could translate to: "How can we make sure no player is stacking the deck or peeking at other players' cards when we are shuffling the deck ourselves?". In a physical card game, this would be relatively simple if the players were sitting face to face and observing each other, at least if the possibility of conventional cheating can be ruled out. However, if the players are not sitting at the same location but instead are at widely separate locations and pass the entire deck between them (using the postal mail, for instance), this suddenly becomes very difficult. And for electronic card games, such as online poker, where the mechanics of the game are hidden from the user, this is impossible unless the method used is such that it cannot allow any party to cheat by manipulating or inappropriately observing the electronic "deck".
Several protocols for doing this have been suggested, the first by Adi Shamir, Ron Rivest and Len Adleman (the creators of the RSA-encryption protocol). This protocol was the first example of two parties conducting secure computation rather than secure message transmission, employing cryptography; later on
|
https://en.wikipedia.org/wiki/Veritas%20Volume%20Manager
|
The Veritas Volume Manager (VVM or VxVM) is a proprietary logical volume manager from Veritas (which was part of Symantec until January 2016).
Details
It is available for Windows, AIX, Solaris, Linux, and HP-UX. A modified version is bundled with HP-UX as its built-in volume manager. It offers volume management and multipath I/O functionality (when used with the Veritas Dynamic Multi-Pathing feature). The Veritas Volume Manager Storage Administrator (VMSA) is a GUI manager.
Versions
Veritas Volume Manager 7.4.1
Release date (Windows): February 2019
Veritas Volume Manager 6.0
Release date (Windows): December 2011
Release date (UNIX): December 2011
Veritas Volume Manager 5.1
Release date (Windows): August 2008
Release date (UNIX): December 2009
Veritas Volume Manager 5.0
Release date (UNIX): August 2006
Release date (Windows): January 2007
Veritas Volume Manager 4.1
Release date (UNIX): April 2005
Release date (Windows): June 2004
Veritas Volume Manager 4.0
Release date: February 2004
Veritas Volume Manager 3.5
Release date: September 2002
Veritas Volume Manager 3.2
Veritas Volume Manager 3.1
Release date: August 2000
Veritas Volume Manager 3.0
Microsoft once licensed a version of Veritas Volume Manager for Windows 2000, allowing the operating system to store and modify large amounts of data. Symantec acquired Veritas on July 2, 2005, and claimed that Microsoft had misused Veritas' intellectual property to develop functionality in Windows Server 2003, and later Windows Vista and Windows Server 2008, that competed with Veritas' Storage Foundation, according to Michael Schallop, the director of legal affairs at Symantec. A Microsoft representative claimed the company had bought all "intellectual property rights for all relevant technologies from Veritas in 2004". The lawsuit was dropped in 2008; terms were not disclosed.
See also
Veritas Storage Foundation
Veritas Volume Replicator
Symantec Operations Readiness Tools (SORT)
References
Storage software
|
https://en.wikipedia.org/wiki/Proofs%20involving%20the%20addition%20of%20natural%20numbers
|
This article contains mathematical proofs for some properties of addition of the natural numbers: the additive identity, commutativity, and associativity. These proofs are used in the article Addition of natural numbers.
Definitions
This article will use the Peano axioms for the definition of natural numbers. With these axioms, addition is defined from the constant 0 and the successor function S(a) by the two rules
a + 0 = a [A1]
a + S(b) = S(a + b) [A2]
For the proof of commutativity, it is useful to give the name "1" to the successor of 0; that is,
1 = S(0).
For every natural number a, one has
a + 1 = a + S(0) = S(a + 0) = S(a).
Proof of associativity
We prove associativity by first fixing natural numbers a and b and applying induction on the natural number c.
For the base case c = 0,
(a+b)+0 = a+b = a+(b+0)
Each equation follows by definition [A1]; the first with a + b, the second with b.
Now, for the induction. We assume the induction hypothesis, namely we assume that for some natural number c,
(a+b)+c = a+(b+c)
Then it follows,
(a + b) + S(c) = S((a + b) + c) [by A2]
= S(a + (b + c)) [by the induction hypothesis]
= a + S(b + c) [by A2]
= a + (b + S(c)) [by A2]
In other words, the induction hypothesis holds for S(c). Therefore, the induction on c is complete.
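For readers who like to machine-check such arguments, the same induction goes through in a proof assistant. Below is a minimal Lean 4 sketch of our own (an illustrative formalization, using a stand-in type N rather than Lean's built-in naturals) encoding [A1], [A2] and the associativity proof above:

-- N mirrors the Peano construction: zero and successor.
inductive N where
  | zero : N
  | succ : N → N

-- [A1] and [A2] as the defining equations of addition.
def add : N → N → N
  | a, N.zero   => a
  | a, N.succ b => N.succ (add a b)

-- Associativity by induction on c, mirroring the proof in the text.
theorem add_assoc (a b c : N) : add (add a b) c = add a (add b c) := by
  induction c with
  | zero => rfl
  | succ c ih => simp [add, ih]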
Proof of identity element
Definition [A1] states directly that 0 is a right identity.
We prove that 0 is a left identity by induction on the natural number a.
For the base case a = 0, 0 + 0 = 0 by definition [A1].
Now we assume the induction hypothesis, that 0 + a = a.
Then
0 + S(a) = S(0 + a) [by A2]
= S(a) [by the induction hypothesis].
This completes the induction on a.
Proof of commutativity
We prove commutativity (a + b = b + a) by applying induction on the natural number b. First we prove the base cases b = 0 and b = S(0) = 1 (i.e. we prove that 0 and 1 commute with everything).
The base case b = 0 follows immediately from the identity element property (0 is an additive identity), which has been proved above:
a + 0 = a = 0 + a.
Next we will prove the base case b = 1, that 1 commutes with everything, i.e. for all natural numbers a, we have a + 1 = 1 + a. We will prove this by induction on a (an induction proof within an induction proof). We have pro
|
https://en.wikipedia.org/wiki/Check%20Point
|
Check Point is an American-Israeli multinational provider of software and combined hardware and software products for IT security, including network security, endpoint security, cloud security, mobile security, data security and security management.
The company has approximately 6,000 employees worldwide. Headquartered in Tel Aviv, Israel and San Carlos, California, the company has development centers in Israel and Belarus, and previously had development centers in the United States (ZoneAlarm) and Sweden (the former Protect Data development centre), acquired along with the companies that owned them. The company has offices in over 70 locations worldwide, including main offices in North America, 10 in the United States (including San Carlos, California and Dallas, Texas), 4 in Canada (including Ottawa, Ontario), as well as in Europe (London, Paris, Munich, Madrid) and in Asia Pacific (Singapore, Japan, Bengaluru, Sydney).
History
Check Point was established in Ramat Gan, Israel in 1993 by Gil Shwed (CEO), Marius Nacht (Chairman) and Shlomo Kramer (who left Check Point in 2003). Shwed had the initial idea for the company's core technology, known as stateful inspection, which became the foundation for the company's first product, FireWall-1; soon afterwards they also developed one of the world's first VPN products, VPN-1. Shwed developed the idea while serving in Unit 8200 of the Israel Defense Forces, where he worked on securing classified networks.
Initial funding of US$250,000 was provided by venture capital fund BRM Group.
In 1994 Check Point signed an OEM agreement with Sun Microsystems, followed by a distribution agreement with HP in 1995. The same year, the U.S. head office was established in Redwood City, California.
By February 1996, the company was named worldwide firewall market leader by IDC, with a market share of 40 percent.
In June 1996 Check Point raised $67 million from its initial public offering on NASDAQ.
In 1998, Check Point established a partnership
|
https://en.wikipedia.org/wiki/Calspan
|
Calspan Corporation is a science and technology company founded in 1943 as part of the Research Laboratory of the Curtiss-Wright Airplane Division at Buffalo, New York. Calspan consists of four primary operating units: Flight Research, Transportation Research, Aerospace Sciences Transonic Wind Tunnel, and Crash Investigations. The company's main facility is in Cheektowaga, New York, while it has other facilities such as the Flight Research Center in Niagara Falls, New York, and remote flight test operations at Edwards Air Force Base, California, and Patuxent River, Maryland. Calspan also has thirteen field offices throughout the Eastern United States which perform accident investigations on behalf of the United States Department of Transportation. Calspan was acquired by TransDigm Group in 2023.
History
The facility was started as a private defense contractor on the home front of World War II. As a part of its tax planning in the wake of the war effort, Curtiss-Wright donated the facility to Cornell University to operate "as a public trust." Seven other east coast aircraft companies also donated $675,000 to provide working capital for the lab.
The lab operated under the name Cornell Aeronautical Laboratory from 1946 until 1972. During this same time, Cornell formed a new Graduate School of Aerospace Engineering on its Ithaca, New York campus. During the late 1960s and early 1970s, universities came under criticism for conducting war-related research, particularly as the Vietnam War became unpopular, and Cornell University tried to sever its ties. Similar laboratories at other colleges, such as the Lincoln Laboratory and Draper Laboratory at MIT, came under similar criticism, but some labs, such as Lincoln, retained their collegiate ties. Cornell accepted a $25 million offer from EDP Technology, Inc. to purchase the lab in 1968. However, a group of lab employees who had made a competing $15 million offer organized a lawsuit to block the sale. In May 1971, New York's
|
https://en.wikipedia.org/wiki/Rudder%20ratio
|
Rudder ratio refers to a value that is monitored by the computerized flight control systems in modern aircraft. The ratio relates the aircraft's airspeed to the rudder deflection setting in effect at the time. As an aircraft accelerates, the rudder deflection available within the range of rudder pedal travel must be reduced proportionately. This automatic reduction is needed because a fully deflected rudder at high airspeed would cause the plane to yaw sharply and violently, swinging from side to side, leading to loss of control and damage to the rudder, tail and other structures, and potentially causing the aircraft to crash.
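A hypothetical Python sketch of such a limiter is shown below. All names and numbers are our own, purely to illustrate the idea of scaling rudder authority down with airspeed; real flight control systems use certified, aircraft-specific gain schedules.

def max_rudder_deflection(airspeed_kts: float) -> float:
    # Full authority at low speed; the limit shrinks as airspeed grows.
    FULL_DEFLECTION_DEG = 30.0   # assumed low-speed rudder travel
    LIMIT_ONSET_KTS = 150.0      # assumed speed where limiting begins
    if airspeed_kts <= LIMIT_ONSET_KTS:
        return FULL_DEFLECTION_DEG
    return FULL_DEFLECTION_DEG * LIMIT_ONSET_KTS / airspeed_kts

def commanded_deflection(pedal_fraction: float, airspeed_kts: float) -> float:
    # Map full pedal travel (-1..1) onto the speed-dependent limit.
    limit = max_rudder_deflection(airspeed_kts)
    return max(-limit, min(limit, pedal_fraction * limit))

print(commanded_deflection(1.0, 100.0))  # 30.0 degrees available at low speed
print(commanded_deflection(1.0, 300.0))  # 15.0 degrees at high speed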
See also
American Airlines Flight 587
Aerospace engineering
Engineering ratios
|
https://en.wikipedia.org/wiki/Cooperative%20board%20game
|
Cooperative board games are board games in which players work together to achieve a common goal rather than competing against each other. Either the players win the game by reaching a pre-determined objective, or all players lose the game, often by not reaching the objective before a certain event ends the game.
Definition
In cooperative board games, all players win or lose the game together. These games should not be confused with noncompetitive games, such as The Ungame, which simply do not have victory conditions or any set objective to complete. While adventure board games with role playing and dungeon crawl elements like Gloomhaven may be included, pure tabletop role-playing games like Descent: Journeys in the Dark are excluded as they have potentially infinite victory conditions with persistent player characters. Furthermore, games in which players compete together in two or more groups, teams or partnerships (such as Axis & Allies, and card games like Bridge and Spades) fall outside of this definition, even though there is temporary cooperation between some of the players. Multiplayer conflict games like Diplomacy may also feature temporary cooperation during the course of the game. These games are not considered cooperative though, because players are eliminated and ultimately only one individual can win.
History and development
20th century
Early cooperative games were used by parents and teachers in educational settings. In 1903 Elizabeth Magie patented "The Landlord's Game", inspired by the principles and philosophy of Henry George. Designed as a protest against the monopolists of the time, the game is considered to be the one from which Monopoly was largely derived. In it, Magie had two rule-sets - the Monopoly rules, in which players all vied to accrue the largest revenue and crush their opponents, and a co-operative set. Her dualistic approach was a teaching tool meant to demonstrate that the co-operative rules were morally su
|
https://en.wikipedia.org/wiki/Cycles%20and%20fixed%20points
|
In mathematics, the cycles of a permutation π of a finite set S correspond bijectively to the orbits of the subgroup generated by π acting on S. These orbits are subsets of S that can be written as { c1, ..., cn }, such that
π(ci) = ci+1 for i = 1, ..., n − 1, and π(cn) = c1.
The corresponding cycle of π is written as ( c1 c2 ... cn ); this expression is not unique since c1 can be chosen to be any element of the orbit.
The size n of the orbit is called the length of the corresponding cycle; when n = 1, the single element in the orbit is called a fixed point of the permutation.
A permutation is determined by giving an expression for each of its cycles, and one notation for permutations consists of writing such expressions one after another in some order. For example, let
π = ( 1 2 3 4 5 6 7 8
      2 4 1 3 5 8 7 6 )
be the permutation, written here in two-line notation, that maps 1 to 2, 6 to 8, etc. Then one may write
π = ( 1 2 4 3 ) ( 5 ) ( 6 8 ) ( 7 ) = ( 7 ) ( 1 2 4 3 ) ( 6 8 ) ( 5 ) = ( 4 3 1 2 ) ( 8 6 ) ( 5 ) ( 7 ) = ...
Here 5 and 7 are fixed points of π, since π(5) = 5 and π(7) = 7. It is typical, but not necessary, to omit the cycles of length one in such an expression. Thus, π = (1 2 4 3)(6 8) would be an appropriate way to express this permutation.
There are different ways to write a permutation as a list of its cycles, but the number of cycles and their contents are given by the partition of S into orbits, and these are therefore the same for all such expressions.
Counting permutations by number of cycles
The unsigned Stirling number of the first kind, s(k, j), counts the number of permutations of k elements with exactly j disjoint cycles.
Properties
(1) For every k > 0 : s(k, k) = 1.
(2) For every k > 0 : s(k, 1) = (k − 1)!
(3) For every k > j > 1, s(k, j) = s(k − 1, j − 1) + s(k − 1, j)·(k − 1)
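The recurrence in property (3), seeded by property (1), is enough to compute these numbers. A short Python sketch (names our own):

from functools import lru_cache

@lru_cache(maxsize=None)
def stirling1_unsigned(k: int, j: int) -> int:
    # Number of permutations of k elements with exactly j disjoint cycles.
    if k == j:
        return 1                      # property (1); also covers k = j = 0
    if j == 0 or j > k:
        return 0                      # impossible cycle counts
    # Property (3): element k either forms a new cycle on its own (j-1
    # cycles among k-1 elements) or is inserted after any of the k-1
    # existing elements inside a cycle.
    return (stirling1_unsigned(k - 1, j - 1)
            + (k - 1) * stirling1_unsigned(k - 1, j))

print(stirling1_unsigned(4, 1))  # 6 = (4 - 1)!, matching property (2)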
Reasons for properties
(1) There is only one way to construct a permutation of k elements with k cycles: Every cycle must have length 1 so every element must be a fixed point.
(2.a) Every cycle of length k may be written as a permutation of the numbers 1 to k; there are k! of these permutations.
(2.b) There are k different ways to write a given cycle of length k, e.g. ( 1 2
|
https://en.wikipedia.org/wiki/Electroceramics
|
Electroceramics are a class of ceramic materials used primarily for their electrical properties.
While ceramics have traditionally been admired and used for their mechanical, thermal and chemical stability, their unique electrical, optical and magnetic properties have become of increasing importance in many key technologies including communications, energy conversion and storage, electronics and automation. Such materials are now classified under electroceramics, as distinguished from other functional ceramics such as advanced structural ceramics.
Historically, developments in the various subclasses of electroceramics have paralleled the growth of new technologies. Examples include: ferroelectrics - high dielectric capacitors, non-volatile memories; ferrites - data and information storage; solid electrolytes - energy storage and conversion; piezoelectrics - sonar; semiconducting oxides - environmental monitoring. Recent advances in these areas are described in the Journal of Electroceramics.
Dielectric ceramics
Dielectric materials used for construction of ceramic capacitors include: lead zirconate titanate (PZT), barium titanate (BT), strontium titanate (ST), calcium titanate (CT), magnesium titanate (MT), calcium magnesium titanate (CMT), zinc titanate (ZT), lanthanum titanate (LT), neodymium titanate (NT), barium zirconate (BZ), calcium zirconate (CZ), lead magnesium niobate (PMN), lead zinc niobate (PZN), lithium niobate (LN), barium stannate (BS), calcium stannate (CS), magnesium aluminium silicate, magnesium silicate, barium tantalate, titanium dioxide, niobium oxide, zirconia, silica, sapphire, beryllium oxide, and zirconium tin titanate.
Some piezoelectric materials can be used as well; the EIA Class 2 dielectrics are based on mixtures rich on barium titanate. In turn, EIA Class 1 dielectrics contain little or no barium titanate.
Electronically conductive ceramics
Indium tin oxide (ITO), lanthanum-doped strontium titanate (SLT), yttrium-doped strontiu
|
https://en.wikipedia.org/wiki/Solid%20solution
|
A solid solution, a term popularly used for metals, is a homogeneous mixture of two different kinds of atoms in the solid state sharing a single crystal structure. Many examples can be found in metallurgy, geology, and solid-state chemistry. The word "solution" is used to describe the intimate mixing of components at the atomic level and distinguishes these homogeneous materials from physical mixtures of components. Two terms are mainly associated with solid solutions – solvents and solutes, depending on the relative abundance of the atomic species.
In general, if two compounds are isostructural, then a solid solution will exist between the end members (also known as parents). For example, sodium chloride and potassium chloride have the same cubic crystal structure, so it is possible to make a pure compound with any ratio of sodium to potassium (Na1-xKx)Cl by dissolving that ratio of NaCl and KCl in water and then evaporating the solution. A member of this family is sold under the brand name Lo Salt, which is (Na0.33K0.66)Cl; hence it contains 66% less sodium than normal table salt (NaCl). The pure minerals are called halite and sylvite; a physical mixture of the two is referred to as sylvinite.
Because minerals are natural materials, they are prone to large variations in composition. In many cases specimens are members of a solid solution family, and geologists find it more helpful to discuss the composition of the family than that of an individual specimen. Olivine is described by the formula (Mg, Fe)2SiO4, which is equivalent to (Mg1−xFex)2SiO4. The ratio of magnesium to iron varies between the two endmembers of the solid solution series: forsterite (Mg-endmember: Mg2SiO4) and fayalite (Fe-endmember: Fe2SiO4), but the ratio in olivine is not normally defined. With increasingly complex compositions, the geological notation becomes significantly easier to manage than the chemical notation.
Nomenclature
The IUPAC definition of a solid solution is a "solid in which components ar
|
https://en.wikipedia.org/wiki/Stanis%C5%82aw%20Radziszowski
|
Stanisław P. Radziszowski (born June 7, 1953) is a Polish-American mathematician and computer scientist, best known for his work in Ramsey theory.
Radziszowski was born in Gdańsk, Poland, and received his PhD from the Institute of Informatics of the University of Warsaw in 1980. His thesis topic was "Logic and Complexity of Synchronous Parallel Computations". From 1976 to 1980 he worked as a visiting professor at various universities in Mexico City. In 1984, he moved to the United States, where he took up a position in the Department of Computer Science at the Rochester Institute of Technology.
Radziszowski has published many papers in graph theory, Ramsey theory, block designs, number theory and computational complexity.
In a 1995 paper with Brendan McKay, he determined the Ramsey number R(4,5) = 25. His survey of Ramsey numbers, last updated in March 2017, is a standard reference on the subject and is published in the Electronic Journal of Combinatorics.
References
External links
Radziszowski's survey of small Ramsey numbers
Home Page
Sound file of Radziszowski speaking his own name (au format)
1953 births
Living people
Polish academics
Polish mathematicians
Polish computer scientists
Rochester Institute of Technology faculty
Combinatorialists
University of Warsaw alumni
|
https://en.wikipedia.org/wiki/Plantronics%20Colorplus
|
The Plantronics Colorplus is a graphics card for IBM PC computers, first sold in 1982. It is a superset of the then-current CGA standard, using the same monitor standard (4-bit digital TTL RGBI monitor) and providing the same pixel resolutions. It was produced by Frederick Electronics (of Frederick, Maryland), a subsidiary of Plantronics since 1968, and sold by Plantronics' Enhanced Graphics Products division.
The Colorplus has twice the memory of a standard CGA board (32k, compared to 16k). The additional memory can be used in graphics modes to double the color depth, giving two additional graphics modes: 16 colors at 320×200 resolution, or 4 colors at 640×200 resolution.
It uses the same Motorola MC6845 display controller as the previous MDA and CGA adapters.
The original card also includes a parallel printer port.
Output capabilities
CGA compatible modes:
160×100 16 color mode (actually a text mode using half-block characters such as ▌, ▐ and █)
320×200 in 4 colors from a 16 color hardware palette. Pixel aspect ratio of 1:1.2.
640×200 in 2 colors. Pixel aspect ratio of 1:2.4
40×25 with 8×8 pixel font text mode (effective resolution of 320×200)
80×25 with 8×8 pixel font text mode (effective resolution of 640×200)
In addition to the CGA modes, it offers:
320×200 with 16 colors
640×200 with 4 colors
"New high-resolution" text font, selectable by hardware jumper
The "new" font was actually the unused "thin" font already present in the IBM CGA ROMs, with 1-pixel wide vertical strokes. This offered greater clarity on RGB monitors, versus the default "thick" / 2-pixel font more suitable for output to composite monitors and over RF to televisions but, contrary to Plantronics' advertising claims, was drawn at the same pixel resolution.
Software support
Little software made use of the enhanced Plantronics modes, for which there was no BIOS support.
A 1984 advertisement listed the following software as compatible:
Color-It
UCSD P-system
Peachtree Graphics Language
Business Graphics System
Graph Power
The Draftsman
Videogram
Stock View
GSX
CompuShow ( mode)
Some contempora
|
https://en.wikipedia.org/wiki/American%20Coalition%20for%20Clean%20Coal%20Electricity
|
The American Coalition for Clean Coal Electricity (ACCCE, formerly ABEC or Americans for Balanced Energy Choices) is a U.S. non-profit advocacy group representing major American coal producers, utility companies and railroads. The organization seeks to influence public opinion and legislation in favor of coal-generated electricity in the United States, placing emphasis on the development and deployment of clean coal technologies.
Since carbon capture and sequestration—which ACCCE and its member companies advocate to reduce greenhouse gas emissions from coal burning—has yet to be tested on a large scale, some have questioned whether this approach is feasible or realistic.
In 2009, ACCCE faced a Congressional investigation when it was discovered that a lobbying firm hired by ACCCE had sent forged letters to lawmakers, purporting to come from a variety of minority-focused non-profit groups.
History
The ACCCE began operations in 2008, the result of a combination of two organizations: the Center for Energy and Economic Development (CEED) and Americans for Balanced Energy Choices (ABEC). CEED had been founded in 1992 and since then had been involved in a wide range of climate and energy policies related to coal-based electricity. ABEC, formed in 2000, had focused on consumer based advocacy programs concerning the use of coal-based electricity. In 2008 these two groups were combined to form ACCCE, with the goal of focusing on both legislative and public advocacy efforts. The main programs include the America's Power campaign, launched in 2007 by ABEC, which had a significant presence during the 2008 and 2012 elections, as well as legislative efforts during the United States House of Representatives debate over the Waxman-Markey cap and trade legislation.
Mike Duncan became President and CEO of ACCCE in 2012. By 2017, Duncan had been succeeded in that position by Paul Bailey, who had previously bee
|
https://en.wikipedia.org/wiki/SPARCstation
|
The SPARCstation, SPARCserver and SPARCcenter product lines are a series of SPARC-based computer workstations and servers in desktop, desk side (pedestal) and rack-based form factor configurations, that were developed and sold by Sun Microsystems.
The first SPARCstation was the SPARCstation 1 (also known as the Sun 4/60), introduced in 1989. The series was very popular and introduced the Sun-4c architecture, a variant of the Sun-4 architecture previously introduced in the Sun 4/260. Thanks in part to the delay in the development of more modern processors from Motorola, the SPARCstation series was very successful across the entire industry. The last model bearing the SPARCstation name was the SPARCstation 4. The workstation series was replaced by the Sun Ultra series in 1995; the next Sun server generation was the Sun Enterprise line introduced in 1996.
Models
Desktop and deskside SPARCstations and SPARCservers of the same model number were essentially identical systems, the only difference being that systems designated as servers were usually "headless" (that is, configured without a graphics card and monitor), and were sold with a "server" rather than a "desktop" OS license. For example, the SPARCstation 20 and SPARCserver 20 were almost identical in motherboard, CPU, case design and most other hardware specifications.
Most desktop SPARCstations and SPARCservers shipped in either "pizzabox" or "lunchbox" enclosures, a significant departure from earlier Sun and competing systems of the time. The SPARCstation 1, 2, 4, 5, 10 and 20 were "pizzabox" machines. The SPARCstation SLC and ELC were integrated into Sun monochrome monitor enclosures, and the SPARCstation IPC, IPX, SPARCclassic, SPARCclassic X and SPARCstation LX were "lunchbox" machines.
SPARCserver models ending in "30" or "70" were housed in deskside pedestal enclosures (respectively 5-slot and 12-slot VMEbus chassis); models ending in "90" and the SPARCcenter 2000 came in rackmount cabinet enclosures. T
|
https://en.wikipedia.org/wiki/Somos%27%20quadratic%20recurrence%20constant
|
In mathematics, Somos' quadratic recurrence constant, named after Michael Somos, is the number
σ = sqrt(1 · sqrt(2 · sqrt(3 · sqrt(4 ⋯)))) = 1^(1/2) · 2^(1/4) · 3^(1/8) · 4^(1/16) ⋯
This can be easily re-written into the far more quickly converging product representation
σ = 2^(1/2) · (3/2)^(1/4) · (4/3)^(1/8) · (5/4)^(1/16) ⋯,
which can then be compactly represented in infinite product form by:
σ = ∏_{k=1}^∞ (1 + 1/k)^(1/2^k).
The constant σ arises when studying the asymptotic behaviour of the sequence
g_0 = 1; g_n = n · (g_{n−1})^2 for n > 0,
with first few terms 1, 1, 2, 12, 576, 1658880, ... . This sequence can be shown to have asymptotic behaviour as follows:
g_n ~ σ^(2^n) / (n + 2 − O(1/n)).
Guillera and Sondow give a representation in terms of the derivative of the Lerch transcendent:
ln σ = −(1/2) · ∂Φ/∂s (1/2, 0, 1),
where ln is the natural logarithm and Φ(z, s, q) is the Lerch transcendent.
Finally, the numerical value is
σ ≈ 1.661687949633594… .
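Both product forms are easy to check numerically; a quick Python sketch (our own snippet):

import math

# Slow form: product of k^(1/2^k), summed in log space.
slow = math.exp(sum(math.log(k) / 2**k for k in range(1, 60)))
# Fast form: product of (1 + 1/k)^(1/2^k).
fast = math.exp(sum(math.log(1 + 1/k) / 2**k for k in range(1, 60)))
print(slow, fast)  # both approach 1.6616879496...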
Notes
References
Steven R. Finch, Mathematical Constants (2003), Cambridge University Press, p. 446. .
Jesus Guillera and Jonathan Sondow, "Double integrals and infinite products for some classical constants via analytic continuations of Lerch's transcendent", Ramanujan Journal 16 (2008), 247–270 (Provides an integral and a series representation).
Mathematical constants
Infinite products
|
https://en.wikipedia.org/wiki/Digi%20International
|
Digi International is an American Industrial Internet of Things (IIoT) technology company with headquarters based in Hopkins, Minnesota. The company was founded in 1985 and went public as Digi International in 1989. The company initially offered intelligent ISA/PCI boards (the 'DigiBoard') with multiple asynchronous serial interfaces for PCs. Multiport serial boards are still sold, but the company focuses on embedded and external network (wired and wireless) communications as well as scalable USB products. The company's products also include radio modems and embedded modules based on LTE (4G) communications platforms.
Acquisition history
Since going public, Digi International has acquired a number of companies.
2021 Digi acquired Ventus Holdings.
2021 Digi acquired Ctek, a company specializing in remote monitoring and industrial controls.
2021 Digi acquired Haxiot, a provider of wireless connection services.
2019 Digi acquired Opengear.
2018 Digi acquired Accelerated Concepts, a provider of secure, enterprise-grade, cellular (LTE) networking equipment for primary and backup connectivity.
2017 Digi acquired TempAlert, a provider of temperature and task management for retail pharmacy, food service, and industrial applications.
2017 Digi acquired SMART Temps, LLC, a provider of real-time food service temperature management for restaurant, grocery, education and hospital settings as well as real-time temperature management for healthcare.
2016 Digi acquired FreshTemps, temperature monitoring and task management for the food industry.
2015 Digi acquired Bluenica, Toronto-based company focused on temperature monitoring of perishable goods in the food industry.
2012 Digi acquired Etherios, a Chicago-based salesforce.com Platinum Partner.
2009 Digi acquired Mobiapps, a fabless manufacturer of satellite modems on the Orbcomm satellite network.
2008 Digi acquired Spectrum Design Solutions Inc. for $10 million, a design services company specializing in Wireless Design techn
|
https://en.wikipedia.org/wiki/Strong%20cryptography
|
Strong cryptography or cryptographically strong are general terms used to designate cryptographic algorithms that, when used correctly, provide a very high (usually insurmountable) level of protection against any eavesdropper, including government agencies. There is no precise definition of the boundary line between strong cryptography and (breakable) weak cryptography, as this border constantly shifts due to improvements in hardware and cryptanalysis techniques. These improvements eventually place the capabilities once available only to the NSA within the reach of a skilled individual, so in practice there are only two levels of cryptographic security: "cryptography that will stop your kid sister from reading your files, and cryptography that will stop major governments from reading your files" (Bruce Schneier).
Strong cryptography algorithms have high security strength, for practical purposes usually defined as the number of bits in the key. For example, the United States government, when dealing with export control of encryption, considers any implementation of a symmetric encryption algorithm with a key length above 56 bits, or its public key equivalent, to be strong and thus potentially subject to export licensing. To be strong, an algorithm needs to have a sufficiently long key and be free of known mathematical weaknesses, as exploitation of these effectively reduces the key size. At the beginning of the 21st century, the typical security strength of strong symmetric encryption algorithms is 128 bits (slightly lower values can still be strong, but usually there is little technical gain in using smaller key sizes).
Demonstrating the resistance of any cryptographic scheme to attack is a complex matter, requiring extensive testing and reviews, preferably in a public forum. Good algorithms and protocols are required, and good system design and implementation is needed as well. For instance, the operating system on which the cryptogra
|
https://en.wikipedia.org/wiki/Spatial%20capacity
|
Spatial capacity is an indicator of "data intensity" in a transmission medium. It is usually used in conjunction with wireless transport mechanisms. This is analogous to the way that lumens per square meter determine illumination intensity.
Spatial capacity focuses not only on bit rates for data transfer but on bit rates available in confined spaces defined by short transmission ranges. It is measured in bits per second per square meter.
Among those leading research in spatial capacity is Jan Rabaey at the University of California, Berkeley. Some have suggested the term "spatial efficiency" as more descriptive. Mark Weiser, former chief technologist of Xerox PARC, was another contributor to the field who commented on the importance of spatial capacity.
The System spectral efficiency is the spatial capacity divided by the bandwidth in hertz of the available frequency band.
Relative spatial capacities
Engineers at Intel and elsewhere have reported the relative spatial capacities of various wireless technologies as follows:
IEEE 802.11b 1,000 (bit/s)/m²
Bluetooth 30,000 (bit/s)/m²
IEEE 802.11a 83,000 (bit/s)/m²
Ultra-wideband 1,000,000 (bit/s)/m²
IEEE 802.11g N/A
See also
System spectral efficiency
References
Wireless networking
Network performance
Radio resource management
|
https://en.wikipedia.org/wiki/Bitrate%20peeling
|
Bitrate peeling is a technique used in Ogg Vorbis audio encoded streams, wherein a stream can be encoded at one bitrate but can be served at that or any lower bitrate.
The purpose is to provide access to the clip for people with slower Internet connections, and yet still allow people with faster connections to enjoy the higher quality content. The server automatically chooses which stream to deliver to the user, depending on user's connection speed.
Ogg Vorbis bitrate peeling has existed only as a concept, as no encoder capable of producing peelable datastreams has yet been written.
Difference from other technologies
The difference between SureStream and bitrate peeling is that SureStream is limited to only a handful of pre-defined bitrates, with significant difference between them, and SureStream encoded files are big because they contain all of the bitrates used, while bitrate peeling uses much smaller steps to change the available bitrate and quality, and only the highest bitrate is used to encode the file/stream, which results in smaller files on servers.
A related technique to the SureStream approach is hierarchical modulation, used in broadcasting, where several different streams at different qualities (and bitrates) are all broadcast; the higher quality stream is used if possible, with the lower quality streams fallen back on if not.
Lossy and correction
A similar technology is to feature a combination of a lossy format and a lossless correction; this allows stripping the correction to easily obtain a lossy file. Such formats include MPEG-4 SLS (scalable to lossless), WavPack, DTS-HD Master Audio and OptimFROG DualStream.
SureStream example
A SureStream encoded file is encoded at bitrates of 16 kbit/s, 32 kbit/s and 96 kbit/s. The file will be about the same in size as three separate files encoded at those bitrates and put together, or one file encoded at the sum of those bitrates, which is about 144 kbit/s (16 + 32 + 96). When a dial-up
|
https://en.wikipedia.org/wiki/International%20Society%20for%20the%20Interdisciplinary%20Study%20of%20Symmetry
|
The International Symmetry Society ("International Society for the Interdisciplinary Study of Symmetry"; abbreviated name SIS) is an international non-governmental, non-profit organization registered in Hungary (Budapest, Vármegye u. 7. II. 3., H-1052).
Its main objectives are:
to bring together artists and scientists, educators and students devoted to, or interested in, the research and understanding of the concept and application of symmetry (asymmetry, dissymmetry);
to provide regular information to the general public about events in symmetry studies;
to ensure a regular forum (including the organization of symposia, and the publication of a periodical) for all those interested in symmetry studies.
The topic was first introduced by Russian and Polish scholars. Then in 1952, Hermann Weyl published his influential book Symmetry, which was later translated into 10 languages. Since then, symmetry has become an attractive subject of research in various fields. A variety of manifestations of the principle of symmetry in sculpture, painting, architecture, ornament, and design, and in organic and inorganic nature, has been revealed; the philosophical and mathematical significance of this principle has been studied.
During the 1980s the discussions concerning the nature of the world, whether it was essentially probabilistic or naturally geometric, revived the interest of the researchers to the topic. The intellectual atmosphere of this period facilitated the idea of establishment of a new institution devoted to the study of all forms of complexity and patterns of symmetry and orderly structures pervading science, nature and society, which ultimately led to the establishment of the International Society for the Interdisciplinary Study of Symmetry.
The Society's community comprises several branches of science and art, while symmetry studies have gained the rank of an individual interdisciplinary field in the judgement of the scientific community. The Society has member
|
https://en.wikipedia.org/wiki/Functional%20block%20diagram
|
A functional block diagram, in systems engineering and software engineering, is a block diagram that describes the functions and interrelationships of a system.
The functional block diagram can picture:
Functions of a system pictured by blocks
input and output elements of a block pictured with lines
the relationships between the functions, and
the functional sequences and paths for matter and/or signals
The block diagram can use additional schematic symbols to show particular properties.
Functional block diagrams have been used in a wide range of applications, from systems engineering to software engineering, since the late 1950s. They became a necessity in complex systems design to "understand thoroughly from exterior design the operation of the present system and the relationship of each of the parts to the whole."
Many specific types of functional block diagrams have emerged. For example, the functional flow block diagram is a combination of the functional block diagram and the flowchart. Many software development methodologies are built with specific functional block diagram techniques. An example from the field of industrial computing is the Function Block Diagram (FBD), a graphical language for the development of software applications for programmable logic controllers.
See also
Function model
Functional flow block diagram
References
Diagrams
Systems engineering
Management cybernetics
|
https://en.wikipedia.org/wiki/C%20Traps%20and%20Pitfalls
|
C Traps and Pitfalls is a slim computer programming book by former AT&T Corporation researcher and programmer Andrew Koenig; its first edition was still in print in 2017. It outlines the many ways in which beginners, and sometimes even quite experienced C programmers, can write poor, malfunctioning and dangerous source code.
It evolved from an earlier technical report of the same name, published internally at Bell Labs. This, in turn, was inspired by a prior paper given by Koenig on "PL/I Traps and Pitfalls" at a SHARE conference in 1977. Koenig wrote that this title was inspired by a 1968 science fiction anthology by Robert Sheckley, "The People Trap and other Pitfalls, Snares, Devices and Delusions, as Well as Two Sniggles and a Contrivance".
References
1989 non-fiction books
Computer programming books
Software bugs
C (programming language)
Software anomalies
Addison-Wesley books
|
https://en.wikipedia.org/wiki/J%C3%BAlio%20C%C3%A9sar%20de%20Mello%20e%20Souza
|
Júlio César de Mello e Souza (Rio de Janeiro, May 6, 1895 – Recife, June 18, 1974), was a Brazilian writer and mathematics teacher. He was well known in Brazil and abroad for his books on recreational mathematics, most of them published under the pen names of Malba Tahan and Breno de Alencar Bianco.
He wrote 69 novels and 51 books on mathematics and other subjects, with more than two million books sold by 1995. His most famous work, The Man Who Counted, saw its 54th printing in 2001.
Júlio César's most popular books, including The Man Who Counted, are collections of mathematical problems, puzzles, and curiosities, embedded in tales inspired by the Arabian Nights. He thoroughly researched his subject matter — not only the mathematics, but also the history, geography, and culture of the Islamic Empire, which was the backdrop and connecting thread of his books. Yet Júlio César's travels outside Brazil were limited to short visits to Buenos Aires, Montevideo, and Lisbon: he never set foot in the deserts and cities which he so vividly described in his books.
Júlio César was very critical of the educational methods used in Brazilian classrooms, especially for mathematics. "The mathematics teacher is a sadist," he claimed, "who loves to make everything as complicated as possible." In education, he was decades ahead of his time, and his proposals are still more praised than implemented today.
For his books, Júlio César received a prize from the prestigious Brazilian Literary Academy and was made a member of the Pernambuco Literary Academy. The Malba Tahan Institute was founded in 2004 at Queluz to preserve his legacy. The State Legislature of Rio de Janeiro determined that his birthday, May 6, be commemorated as Mathematician's Day.
Early life
Júlio César was born in Rio de Janeiro but spent most of his childhood in Queluz, a small rural town in the State of São Paulo. His father, João de Deus de Mello e Souza, was a civil servant with limited salary and eight (some
|
https://en.wikipedia.org/wiki/1458%20%28number%29
|
1458 is the integer after 1457 and before 1459.
The maximum determinant of an 11 by 11 matrix of zeroes and ones is 1458.
1458 is one of three non-trivial numbers which, when its base-10 digits are added together, produce a sum which, when multiplied by its reversed self, yields the original number:
1 + 4 + 5 + 8 = 18
18 × 81 = 1458
The only other non-trivial numbers with this property are 81 and 1729, along with the trivial solutions 1 and 0. This was proven by Masahiko Fujiwara.
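A brute-force check in Python (our own snippet) confirms the list; since the digit sum of any number below 100000 is at most 45, this search range is ample:

def has_property(n: int) -> bool:
    # Digit sum times its own decimal reversal must give n back.
    s = sum(int(d) for d in str(n))
    return s * int(str(s)[::-1]) == n

print([n for n in range(1, 100000) if has_property(n)])  # [1, 81, 1458, 1729]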
References
Integers
|
https://en.wikipedia.org/wiki/Free%20and%20open-source%20graphics%20device%20driver
|
A free and open-source graphics device driver is a software stack which controls computer-graphics hardware and supports graphics-rendering application programming interfaces (APIs) and is released under a free and open-source software license. Graphics device drivers are written for specific hardware to work within a specific operating system kernel and to support a range of APIs used by applications to access the graphics hardware. They may also control output to the display if the display driver is part of the graphics hardware. Most free and open-source graphics device drivers are developed by the Mesa project. The driver is made up of a compiler, a rendering API, and software which manages access to the graphics hardware.
Drivers without freely (and legally) available source code are commonly known as binary drivers. Binary drivers used in the context of operating systems that are prone to ongoing development and change (such as Linux) create problems for end users and package maintainers. These problems, which affect system stability, security and performance, are the main reason for the independent development of free and open-source drivers. When no technical documentation is available, an understanding of the underlying hardware is often gained by clean-room reverse engineering. Based on this understanding, device drivers may be written and legally published under any software license.
In rare cases, a manufacturer's driver source code is available on the Internet without a free license. This means that the code can be studied and altered for personal use, but the altered (and usually the original) source code cannot be freely distributed. Solutions to bugs in the driver cannot be easily shared in the form of modified versions of the driver. Therefore, the utility of such drivers is significantly reduced in comparison to free and open-source drivers.
Problems with proprietary drivers
Software developer's view
There are objections to binary-only drive
|
https://en.wikipedia.org/wiki/Springbok%20Radio
|
Springbok Radio (spelled Springbokradio in Afrikaans, ) was a South African nationwide radio station that operated from 1950 to 1986.
History
SABC's decision in December 1945 to develop a commercial service was constrained by post-war financial issues. After almost five years of investigation and after consulting Lord Reith of the BBC and the South African government, it decided to introduce commercial radio to supplement the SABC's public service English and Afrikaans networks and help solve the SABC's financial problems. The SABC would build the equipment and facilities and would place them at the disposal of advertisers and their agencies at cost for productions and allow them to make use of SABC's production staff.
On 1 May 1950, the first commercial radio station in South Africa, Springbok Radio, took to the air. Bilingual in English and Afrikaans, it broadcast from the Johannesburg Centre for 113 and a half hours a week. The service proved so popular with advertisers at its launch that commercial time had been booked well in advance. The service started at 6:43am with the music Vat Jou Goed en Trek, Ferreira. The first voice on air was that of Eric Egan, well remembered for his daily "Corny Crack" and catch phrase "I Love You".
Many drama programmes during the 1950s were imported from Australia, but as more funding became available, Springbok Radio produced almost all its programmes within South Africa through a network of independent production houses. By the end of 1950, some 30 per cent of Springbok Radio shows were produced by South African talent or material and independent productions were sold to sponsors. At the same time all air time had been sold or used and transmission time was extended. By the end of the 1950s, the revenue of Springbok Radio was £205,439, in 1961 it had grown to over two million rand (£1 million) and by 1970 had reached R 6.5 million.
By 1985, Springbok Radio was operating at a heavy loss. Stiff competition from television, d
|
https://en.wikipedia.org/wiki/Brocade%20Communications%20Systems
|
Brocade was an American technology company specializing in storage networking products, now a subsidiary of Broadcom Inc. The company is known for its Fibre Channel storage networking products and technology. Prior to the acquisition, the company expanded into adjacent markets including a wide range of IP/Ethernet hardware and software products. Offerings included routers and network switches for data center, campus and carrier environments, IP storage network fabrics; Network Functions Virtualization (NFV) and software-defined networking (SDN) markets such as a commercial edition of the OpenDaylight Project controller; and network management software that spans physical and virtual devices.
On November 2, 2016, Singapore-based chip maker Broadcom Limited announced it was buying Brocade for about $5.5 billion. As part of the acquisition, Broadcom divested all of the IP networking hardware and software-defined networking assets. Broadcom has since re-domesticated to the United States and is now known as Broadcom Inc.
History
Brocade was founded in August 1995, by Seth Neiman (a venture capitalist, a former executive from Sun Microsystems and a professional auto racer), Kumar Malavalli (a co-author of the Fibre Channel specification) and Paul R. Bonderson (a former executive from Intel Corporation and Sun). Neiman became the first CEO of the company. Brocade was incorporated on May 14, 1998, in Delaware.
The company's first product, SilkWorm, which was a Fibre Channel switch, was released in early 1997.
On May 25, 1999, the company went public at a split-adjusted price of $4.75. On initial public offering (IPO), the company offered 3,250,000 shares, with an additional 487,500 shares offered to the underwriters to cover over-allotments. The top three underwriters (based on number of shares) for Brocade's IPO were, in order, Morgan Stanley Dean Witter, BT Alex.Brown, and Dain Rauscher Wessels.
Brocade stock is traded in the National Market System of the NASDAQ GS
|
https://en.wikipedia.org/wiki/Outside%20broadcasting
|
Outside broadcasting (OB) is the electronic field production (EFP) of television or radio programmes (typically to cover television news and sports television events) from a mobile remote broadcast television studio. Professional video camera and microphone signals come into the production truck for processing, recording and possibly transmission.
Some outside broadcasts use a mobile production control room (PCR) inside a production truck.
History
Outside radio broadcasts have been taking place since the early 1920s and television ones since the late 1920s. The first large-scale outside broadcast was the televising of the Coronation of George VI and Elizabeth in May 1937, done by the BBC's first Outside Broadcast truck, MCR 1 (short for Mobile Control Room).
After the Second World War, the first notable outside broadcast was of the 1948 Summer Olympics. The Coronation of Elizabeth II followed in 1953, with 21 cameras being used to cover the event.
In December 1963 instant replays were used for the first time. Director Tony Verna used the technique on the Army-Navy game which aired on CBS Sports on December 7, 1963.
The 1968 Summer Olympics was the first with competitions televised in colour. The 1972 Olympic Games were the first where all competitions were captured by outside broadcast cameras.
During the 1970s, ITV franchise holder Southern Television was unique in having an outside broadcast boat, named Southener.
The wedding of Prince Charles and Lady Diana Spencer in July 1981 was the biggest outside broadcast at the time, with an estimated 750 million viewers.
New technology
In 2008, the first 3D outside broadcast took place with the transmission of a Calcutta Cup rugby match, but only to an audience of industry professionals who had been invited by BBC Sport.
In March 2010, the first public 3D outside broadcast took place with an NHL game between the New York Rangers and New York Islanders.
The first commercial ultra-high definition outside broad
|
https://en.wikipedia.org/wiki/Photoheterotroph
|
Photoheterotrophs (Gk: photo = light, hetero = (an)other, troph = nourishment) are heterotrophic phototrophs—that is, they are organisms that use light for energy, but cannot use carbon dioxide as their sole carbon source. Consequently, they use organic compounds from the environment to satisfy their carbon requirements; these compounds include carbohydrates, fatty acids, and alcohols. Examples of photoheterotrophic organisms include purple non-sulfur bacteria, green non-sulfur bacteria, and heliobacteria. These microorganisms are ubiquitous in aquatic habitats, occupy unique niche-spaces, and contribute to global biogeochemical cycling. Recent research has also indicated that the oriental hornet and some aphids may be able to use light to supplement their energy supply.
Research
Studies have shown that mammalian mitochondria can also capture light and synthesize ATP when mixed with pheophorbide, a light-capturing metabolite of chlorophyll. Research demonstrated that the same metabolite when fed to the worm Caenorhabditis elegans leads to increase in ATP synthesis upon light exposure, along with an increase in life span.
Furthermore, inoculation experiments suggest that mixotrophic Ochromonas danica (i.e., Golden algae)—and comparable eukaryotes—favor photoheterotrophy in oligotrophic (i.e., nutrient-limited) aquatic habitats. This preference may increase energy-use efficiency and growth by reducing investment in inorganic carbon fixation (e.g., production of autotrophic machineries such as RuBisCo and PSII).
Metabolism
Photoheterotrophs generate ATP using light, in one of two ways: they use a bacteriochlorophyll-based reaction center, or they use a bacteriorhodopsin. The chlorophyll-based mechanism is similar to that used in photosynthesis, where light excites the molecules in a reaction center and causes a flow of electrons through an electron transport chain (ETS). This flow of electrons through the proteins causes hydrogen ions to be pumped across a membrane
|
https://en.wikipedia.org/wiki/Recursive%20data%20type
|
In computer programming languages, a recursive data type (also known as a recursively-defined, inductively-defined or inductive data type) is a data type for values that may contain other values of the same type. Data of recursive types are usually viewed as directed graphs.
An important application of recursion in computer science is in defining dynamic data structures such as Lists and Trees. Recursive data structures can dynamically grow to an arbitrarily large size in response to runtime requirements; in contrast, a static array's size requirements must be set at compile time.
Sometimes the term "inductive data type" is used for algebraic data types which are not necessarily recursive.
Example
An example is the list type, in Haskell:
data List a = Nil | Cons a (List a)
This indicates that a list of a's is either an empty list or a cons cell containing an 'a' (the "head" of the list) and another list (the "tail").
Another example is a similar singly linked type in Java:
class List<E> {
E value;
List<E> next;
}
This indicates that a non-empty list of type E contains a data member of type E, and a reference to another List object for the rest of the list (or a null reference to indicate that this is the end of the list).
Mutually recursive data types
Data types can also be defined by mutual recursion. The most important basic example of this is a tree, which can be defined mutually recursively in terms of a forest (a list of trees). Symbolically:
f: [t[1], ..., t[k]]
t: v f
A forest f consists of a list of trees, while a tree t consists of a pair of a value v and a forest f (its children). This definition is elegant and easy to work with abstractly (such as when proving theorems about properties of trees), as it expresses a tree in simple terms: a list of one type, and a pair of two types.
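The same mutual recursion can be sketched in Python using dataclasses (an illustrative translation of our own; the forward reference to Forest is resolved lazily):

from __future__ import annotations
from dataclasses import dataclass, field
from typing import List

@dataclass
class Tree:
    value: object
    children: Forest  # a tree is a value paired with a forest

@dataclass
class Forest:
    trees: List[Tree] = field(default_factory=list)  # a forest is a list of trees

# A tree with value 1 and two leaf children.
leaf = lambda v: Tree(v, Forest())
t = Tree(1, Forest([leaf(2), leaf(3)]))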
This mutually recursive definition can be converted to a singly recursive definition by inlining the definition of a forest:
t: v [t[1], ..., t[k]]
A tree t
|
https://en.wikipedia.org/wiki/Langmuir%E2%80%93Blodgett%20film
|
A Langmuir–Blodgett (LB) film is a nanostructured system formed when Langmuir films—or Langmuir monolayers (LM)—are transferred from the liquid-gas interface to solid supports during the vertical passage of the support through the monolayers. LB films can contain one or more monolayers of an organic material, deposited from the surface of a liquid onto a solid by immersing (or emersing) the solid substrate into (or from) the liquid. A monolayer is adsorbed homogeneously with each immersion or emersion step, so films of very accurate thickness can be formed: since the thickness of each monolayer is known, the total thickness of the film is simply their sum.
The monolayers are assembled vertically and are usually composed either of amphiphilic molecules (see chemical polarity) with a hydrophilic head and a hydrophobic tail (example: fatty acids) or nowadays commonly of nanoparticles.
Langmuir–Blodgett films are named after Irving Langmuir and Katharine B. Blodgett, who invented this technique while working in Research and Development for General Electric Co.
Historical background
Advances toward the discovery of LB and LM films began with Benjamin Franklin in 1773, when he dropped about a teaspoon of oil onto a pond. Franklin noticed that the waves were calmed almost instantly and that the calming of the waves spread for about half an acre. What Franklin did not realize was that the oil had formed a monolayer on top of the pond surface. Over a century later, Lord Rayleigh quantified what Benjamin Franklin had seen. Knowing that the oil, oleic acid, had spread evenly over the water, Rayleigh calculated that the thickness of the film was 1.6 nm by knowing the volume of oil dropped and the area of coverage.
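The arithmetic behind such an estimate is simple: thickness is volume divided by covered area. A back-of-the-envelope Python version with assumed round numbers (a 5 mL teaspoon over half an acre; these figures are ours, not Rayleigh's) lands at the same nanometre scale:

teaspoon_m3 = 5e-6      # ~5 mL of oil, expressed in cubic metres (assumed)
half_acre_m2 = 2023.0   # ~0.5 acre in square metres (assumed)
thickness_m = teaspoon_m3 / half_acre_m2
print(thickness_m)      # ~2.5e-9 m, i.e. a few nanometres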
With the help of her kitchen sink, Agnes Pockels showed that area of films can be controlled with barriers. She added that surface tension varies with contamination of water. She used different oils to dedu
|
https://en.wikipedia.org/wiki/Network%20redirector
|
In DOS and Windows, a network redirector, or redirector, is an operating system driver that sends data to and receives data from a remote device. A network redirector provides mechanisms to locate, open, read, write, and delete files and submit print jobs.
It provides application services such as named pipes and MailSlots. When an application needs to send or receive data from a remote device, it sends a call to the redirector. The redirector provides the functionality of the presentation layer of the OSI model.
Networked hosts communicate through the use of this client software: shells, redirectors and requesters.
In Microsoft Networking, the network redirectors are implemented as Installable File System (IFS) drivers.
See also
Universal Naming Convention (UNC)
References
External links
Network Redirector Drivers at Microsoft Docs
Device drivers
Operating system technology
|
https://en.wikipedia.org/wiki/Looking%20Glass%20server
|
Looking Glass servers (LG servers) are servers on the Internet running one of a variety of publicly available Looking Glass software implementations. They are commonly deployed by autonomous systems (AS) to offer access to their routing infrastructure in order to facilitate debugging network issues. A Looking Glass server is accessed remotely for the purpose of viewing routing information. Essentially, the server acts as a limited, read-only portal to routers of whatever organization is running the LG server.
Typically, Looking Glass servers are run by autonomous systems like Internet service providers (ISPs), Network Service Providers (NSPs), and Internet exchange points (IXPs).
Implementation
Looking glasses are web scripts directly connected to routers' admin interfaces such as telnet and SSH. These scripts are designed to relay textual commands from the web to the router and print back the response. They are often implemented in Perl, PHP, and Python, and are publicly available on GitHub.
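A minimal, hypothetical sketch of such a script in Python is below (our own illustration: one whitelisted, read-only command relayed over HTTP; the birdc command line is an assumption, and a real deployment would need authentication, rate limiting and input sanitization):

import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

# Whitelist of read-only commands the web side may trigger (assumed CLI).
ALLOWED = {"routes": ["birdc", "show", "route"]}

class LookingGlassHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        cmd = ALLOWED.get(self.path.strip("/"))
        if cmd is None:
            self.send_error(404, "unknown command")
            return
        # Relay the command to the routing daemon and echo its output back.
        result = subprocess.run(cmd, capture_output=True, text=True)
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(result.stdout.encode())

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), LookingGlassHandler).serve_forever()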
Security concerns
A 2014 paper demonstrated the potential security concerns of Looking Glass servers, noting that even an "attacker with very limited resources can exploit such flaws in operators' networks and gain access to core Internet infrastructure", resulting in anything from traffic disruption to global Border Gateway Protocol (BGP) route injection. This is due in part to Looking Glass servers being "an often overlooked critical part of an operator infrastructure", since they sit at the intersection of the public Internet and "restricted admin consoles". As of 2014, most Looking Glass software was small and old, having last been updated in the early 2000s.
See also
Autonomous system (Internet)
Internet backbone
References
External links
Source code for the *original* Multi-Router Looking Glass (MRLG) by John Fraizer @ OP-SEC.US
Packet Clearing House Looking Glass servers around the world.
Looking Glass server source code
Clickable map of known Reverse
|
https://en.wikipedia.org/wiki/Quadrupole%20ion%20trap
|
In experimental physics, a quadrupole ion trap or Paul trap is a type of ion trap that uses dynamic electric fields to trap charged particles. Such traps are also called radio frequency (RF) traps; the name Paul trap honors Wolfgang Paul, who invented the device and shared the Nobel Prize in Physics in 1989 for this work. It is used as a component of a mass spectrometer or a trapped-ion quantum computer.
Overview
A charged particle, such as an atomic or molecular ion, feels a force from an electric field. It is not possible to create a static configuration of electric fields that traps the charged particle in all three directions (this restriction is known as Earnshaw's theorem). It is possible, however, to create an average confining force in all three directions by using electric fields that change in time. To do so, the confining and anti-confining directions are switched at a rate faster than the time it takes the particle to escape the trap. The traps are also called "radio frequency" traps because the switching rate is often in the radio-frequency range.
The quadrupole is the simplest electric field geometry used in such traps, though more complicated geometries are possible for specialized devices. The electric fields are generated from electric potentials on metal electrodes. A pure quadrupole is created from hyperbolic electrodes, though cylindrical electrodes are often used for ease of fabrication. Microfabricated ion traps exist where the electrodes lie in a plane with the trapping region above the plane. There are two main classes of traps, depending on whether the oscillating field provides confinement in three or two dimensions. In the two-dimensional case (a so-called "linear RF trap"), confinement in the third direction is provided by static electric fields.
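The time-averaged confinement can be made concrete numerically. The sketch below (an illustration, not from the article; the function name and parameter values are assumptions) integrates the dimensionless Mathieu equation $u'' + (a - 2q\cos 2\tau)\,u = 0$, which governs motion along one axis of an ideal quadrupole trap, and shows that the motion is bounded inside the first stability region and divergent outside it.

```python
import math

def max_excursion(q, a=0.0, tau_max=50.0, dt=1e-3):
    """Integrate the Mathieu equation u'' + (a - 2*q*cos(2*tau))*u = 0
    with velocity Verlet and return the largest |u| reached."""
    u, v = 1.0, 0.0                       # initial displacement and velocity
    acc = -(a - 2*q*math.cos(0.0)) * u    # initial acceleration
    u_max = abs(u)
    for i in range(int(tau_max / dt)):
        tau_next = (i + 1) * dt
        u += v*dt + 0.5*acc*dt*dt
        acc_next = -(a - 2*q*math.cos(2*tau_next)) * u
        v += 0.5*(acc + acc_next)*dt
        acc = acc_next
        u_max = max(u_max, abs(u))
    return u_max

# Inside the first stability region (a = 0, q below roughly 0.908) the rapid
# switching averages to a confining force and the motion stays bounded;
# outside it, the amplitude grows without limit.
print("q = 0.3:", max_excursion(0.3))   # bounded: fast micromotion plus slow secular motion
print("q = 1.5:", max_excursion(1.5))   # unstable: amplitude diverges
```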
Theory
The 3D trap itself generally consists of two hyperbolic metal electrodes with their foci facing each other and a hyperbolic ring electrode halfway between the other two electrodes. The ions are trapped in th
|
https://en.wikipedia.org/wiki/Genetic%20redundancy
|
Genetic redundancy is a term typically used to describe situations where a given biochemical function is redundantly encoded by two or more genes. In these cases, mutations (or defects) in one of these genes will have a smaller effect on the fitness of the organism than expected from the genes' function. Characteristic examples of genetic redundancy are described in (Enns, Kanaoka et al. 2005) and (Pearce, Senis et al. 2004), and many more are discussed thoroughly in (Kafri, Levy & Pilpel 2006).
The main source of genetic redundancy is the process of gene duplication, which generates multiplicity in gene copy number. A second and less frequent source is convergent evolution, which leads to genes that are close in function but unrelated in sequence (Galperin, Walker & Koonin 1998). Genetic redundancy is typically associated with signaling networks, in which many proteins act together to accomplish a common function. Contrary to expectations, genetic redundancy is not associated with gene duplications (Wagner 2007), nor do redundant genes mutate faster than essential genes (Hurst 1999). Genetic redundancy has therefore classically aroused much debate in the context of evolutionary biology (Nowak et al. 1997; Kafri, Springer & Pilpel 2009).
From an evolutionary standpoint, genes with overlapping functions imply minimal, if any, selective pressure acting on each gene individually. One would therefore expect genes participating in such buffering of mutations to be subject to severe mutational drift, diverging in function and/or expression pattern at considerably high rates. Indeed, it has been shown that the functional divergence of paralogous pairs in both yeast and human is an extremely rapid process. Taking these notions into account, the very existence of genetic buffering, and of the functional redundancies required for it, presents a paradox in light of evolutionary theory. On one hand, for genetic buffering to take
|
https://en.wikipedia.org/wiki/Stirling%20numbers%20of%20the%20second%20kind
|
In mathematics, particularly in combinatorics, a Stirling number of the second kind (or Stirling partition number) is the number of ways to partition a set of n objects into k non-empty subsets, and is denoted by $S(n,k)$ or $\textstyle\left\{{n \atop k}\right\}$. Stirling numbers of the second kind occur frequently in combinatorics and in the study of partitions. They are named after James Stirling.
The Stirling numbers of the first and second kind can be understood as inverses of one another when viewed as triangular matrices. This article is devoted to specifics of Stirling numbers of the second kind. Identities linking the two kinds appear in the article on Stirling numbers.
Definition
The Stirling numbers of the second kind, written $S(n,k)$ or $\textstyle\left\{{n \atop k}\right\}$ or with other notations, count the number of ways to partition a set of n labelled objects into k nonempty unlabelled subsets. Equivalently, they count the number of different equivalence relations with precisely k equivalence classes that can be defined on an n-element set. In fact, there is a bijection between the set of partitions and the set of equivalence relations on a given set. Obviously,
$$\left\{{n \atop n}\right\} = 1 \quad \text{for } n \ge 0, \qquad \left\{{n \atop 1}\right\} = 1 \quad \text{for } n \ge 1,$$
as the only way to partition an n-element set into n parts is to put each element of the set into its own part, and the only way to partition a nonempty set into one part is to put all of the elements in the same part. Unlike Stirling numbers of the first kind, they can be calculated using a one-sum formula:
$$\left\{{n \atop k}\right\} = \frac{1}{k!}\sum_{j=0}^{k} (-1)^{j} \binom{k}{j} (k-j)^{n}.$$
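As a sanity check, the one-sum formula can be evaluated directly and compared against the standard recurrence $\left\{{n \atop k}\right\} = k\left\{{n-1 \atop k}\right\} + \left\{{n-1 \atop k-1}\right\}$. The sketch below is illustrative (the function names are mine, not standard library functions):

```python
from math import comb, factorial

def stirling2_sum(n, k):
    """Stirling number of the second kind via the one-sum formula:
    S(n, k) = (1/k!) * sum_{j=0}^{k} (-1)^j * C(k, j) * (k - j)^n."""
    return sum((-1)**j * comb(k, j) * (k - j)**n for j in range(k + 1)) // factorial(k)

def stirling2_rec(n, k):
    """The same numbers via the recurrence
    S(n, k) = k * S(n-1, k) + S(n-1, k-1)."""
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * stirling2_rec(n - 1, k) + stirling2_rec(n - 1, k - 1)

# Both methods agree; e.g. S(5, 2) = 15 ways to split a 5-element set
# into two non-empty unlabelled parts.
assert all(stirling2_sum(n, k) == stirling2_rec(n, k)
           for n in range(8) for k in range(n + 1))
print(stirling2_sum(5, 2))   # 15
```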
The Stirling numbers of the second kind may also be characterized as the numbers that arise when one expresses powers of an indeterminate x in terms of the falling factorials
$$(x)_n = x(x-1)(x-2)\cdots(x-n+1).$$
(In particular, $(x)_0 = 1$ because it is an empty product.)
In other words,
$$x^n = \sum_{k=0}^{n} \left\{{n \atop k}\right\} (x)_k.$$
Notation
Various notations have been used for Stirling numbers of the second kind. The brace notation $\textstyle\left\{{n \atop k}\right\}$ was used by Imanuel Marx and Antonio Salmeri in 1962 for variants of these numbers.<ref>Antonio Salmeri, Introduzione alla teoria dei coefficienti fattoriali, Giornale di Matema
|
https://en.wikipedia.org/wiki/Stirling%20numbers%20of%20the%20first%20kind
|
In mathematics, especially in combinatorics, Stirling numbers of the first kind arise in the study of permutations. In particular, the Stirling numbers of the first kind count permutations according to their number of cycles (counting fixed points as cycles of length one).
The Stirling numbers of the first and second kind can be understood as inverses of one another when viewed as triangular matrices. This article is devoted to specifics of Stirling numbers of the first kind. Identities linking the two kinds appear in the article on Stirling numbers.
Definitions
Stirling numbers of the first kind are the coefficients $s(n,k)$ in the expansion of the falling factorial
$$(x)_n = x(x-1)(x-2)\cdots(x-n+1)$$
into powers of the variable x:
$$(x)_n = \sum_{k=0}^{n} s(n,k)\, x^k.$$
For example, $(x)_3 = x(x-1)(x-2) = x^3 - 3x^2 + 2x$, leading to the values $s(3,3) = 1$, $s(3,2) = -3$, and $s(3,1) = 2$.
Subsequently, it was discovered that the absolute values of these numbers are equal to the number of permutations of certain kinds. These absolute values, which are known as unsigned Stirling numbers of the first kind, are often denoted $c(n,k)$ or $\textstyle\left[{n \atop k}\right]$. They may be defined directly to be the number of permutations of n elements with k disjoint cycles. For example, of the $3! = 6$ permutations of three elements, there is one permutation with three cycles (the identity permutation, given in one-line notation by $123$ or in cycle notation by $(1)(2)(3)$), three permutations with two cycles ($(12)(3)$, $(13)(2)$, and $(23)(1)$) and two permutations with one cycle ($(123)$ and $(132)$). Thus, $\left[{3 \atop 3}\right] = 1$, $\left[{3 \atop 2}\right] = 3$ and $\left[{3 \atop 1}\right] = 2$. These can be seen to agree with the previous calculation of $s(3,k)$ for $k = 1, 2, 3$.
It was observed by Alfréd Rényi that the unsigned Stirling number $\textstyle\left[{n \atop k}\right]$ also counts the number of permutations of size n with k left-to-right maxima.
The unsigned Stirling numbers may also be defined algebraically, as the coefficients of the rising factorial:
$$x^{(n)} = x(x+1)\cdots(x+n-1) = \sum_{k=0}^{n} \left[{n \atop k}\right] x^k.$$
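As an illustration (the function name is mine, not a standard one), the unsigned numbers can be computed from the recurrence $\left[{n \atop k}\right] = \left[{n-1 \atop k-1}\right] + (n-1)\left[{n-1 \atop k}\right]$ and checked against the rising-factorial expansion above:

```python
def stirling1_unsigned(n, k):
    """Unsigned Stirling number of the first kind: the number of
    permutations of n elements with exactly k disjoint cycles, via
    the recurrence c(n, k) = c(n-1, k-1) + (n-1) * c(n-1, k)."""
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return stirling1_unsigned(n - 1, k - 1) + (n - 1) * stirling1_unsigned(n - 1, k)

# The values worked out above for n = 3: one permutation with three
# cycles, three with two cycles, two with one cycle.
print([stirling1_unsigned(3, k) for k in (3, 2, 1)])   # [1, 3, 2]

# Cross-check: expand the rising factorial x(x+1)(x+2); its coefficient
# of x^k equals the unsigned Stirling number c(3, k).
coeffs = [1]                      # polynomial coefficients, lowest degree first
for m in range(3):                # multiply successively by (x + m)
    shifted = [0] + coeffs        # x * p(x)
    for i, c in enumerate(coeffs):
        shifted[i] += m * c       # + m * p(x)
    coeffs = shifted
print(coeffs)                     # [0, 2, 3, 1] -> matches c(3, 0..3)
```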
The notations used on this page for Stirling numbers are not universal, and may conflict with notations in other sources. (The square bracket notation is also common notation for the Gaussian coefficients.)
Definition by permutation
$\textstyle\left[{n \atop k}\right]$ can be defined as the number of permutations on n elem